That was a fantastic demo. I'm a mainframe enthusiast myself, although I've never had direct access to a real one; I use the open-source Hercules mainframe emulator with MVS and other free operating systems. I assume you're using the FLEX-ES commercial
emulator in your demo running what looks like OS/390. I have a few questions about how NetManage works, since I've only used older IBM operating systems before the advent of TCP/IP. Is the web service running directly on the mainframe or would it run in a
middle-tier Windows/Linux server with some sort of application server (IIS or otherwise)? Where does the screen scraping take place? This relates back to my previous question: is the screen scraped on the mainframe, which then sends out only the final data, or is the entire screen sent to another server for processing, which then exposes the web service? Finally, if you wanted to expose a portion of a mainframe application that actually modified data rather than retrieving it, would you need to add a
special step to tell the application to take input from another source and then automate the confirm step?
Hi Tim.... The mainframe emulator that Simon was using is a home-grown NetManage one. We use it for a whole range of things, including tracing "real" mainframe sessions and then being able to replay them for off-line builds like the one you saw, or for debugging.
The demo that Simon did uses an intermediate server (running under Windows in this case, though it can run under Linux/Solaris/AIX, etc.) which handles all of the work and maintains the persistent connections into the host. An application server is not required to host the application. The whole idea here is that the solution is non-invasive and non-intrusive on the host: no changes to either the host environment or the applications running in it. So all the work is done at the middle tier. An alternative to this
is to use something like IBM's 3270 Bridge for CICS, though this is only available as an option to a proportion of the mainframe user base. It allows a third-party bridge exit routine to be developed that runs on the mainframe (as we have done) and gives you the option to run the entire sequence on the host, with just the input going in and the required output coming out. This also, by the way, avoids the x:y coordinate problem of screen scraping, as everything is seen as name:value pairs.
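The difference between coordinate-based scraping and the bridge's name:value view can be sketched roughly as below. This is purely illustrative: the screen layout, field names, and helper functions are invented for the example, not NetManage's or IBM's actual API.

```python
# Hypothetical sketch: why name:value pairs beat x:y coordinates.
# The screen layout and field names here are invented for illustration.

# Classic screen scraping: pull a value from a fixed row/column slice.
SCREEN = [
    "ACCOUNT INQUIRY                         ",
    "ACCT NO:  00123456   BALANCE:  1,204.50 ",
]

def scrape_by_position(screen, row, col, width):
    """Fragile: breaks if the host screen layout shifts by a column."""
    return screen[row][col:col + width].strip()

# Bridge-style access: the host exposes fields as name:value pairs,
# so the consumer is insulated from the screen geometry entirely.
BRIDGE_FIELDS = {"ACCTNO": "00123456", "BALANCE": "1,204.50"}

def read_field(fields, name):
    return fields[name]

print(scrape_by_position(SCREEN, 1, 31, 9))  # coordinate-coupled
print(read_field(BRIDGE_FIELDS, "BALANCE"))  # layout-independent
```

If the host application ever reformats that inquiry screen, only the coordinate-based version has to be re-recorded; the name:value version keeps working.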
In the demo that you saw, the intermediate server (called OnWeb) manages the persistent 3270 sessions to the host in real time and presents the Web Service that encapsulates the defined host transaction for consumption by whoever or whatever needs it, i.e. the
Avalon Carousel. There is a "design time" and a "run time" element. The design time uses the tool that you saw to define the Web Service (or any other component). It is built and then resides on the intermediate OnWeb server. When the Web Service is called
via its web reference, OnWeb runs the appropriate transaction (screen sequences) and returns the result in the appropriate Web Service format. So in this demo it is all done at the middle tier.
The whole thing is bi-directional, so it's not just about retrieving data. You can update systems as well. You can access any transaction, as you are using the business logic already in place in that application. So you can do whatever the CICS application can do (or whatever a user using that CICS transaction can do)! You can expose all of the transactions available (input or output) or just a selected few; you don't have to do everything. No special steps are required, although you can also add your own business logic. You can also use conditionals and branching, so it can be pretty sophisticated.
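As a rough sketch of that run-time idea: the server drives a recorded screen sequence (automated login, navigation, a conditional branch) and hands back name:value results that the middle tier would wrap as the Web Service response. Every screen name, field, and class below is invented for illustration; OnWeb's real transaction definitions are built in its design-time tool.

```python
# Hypothetical sketch of a server-side "transaction" that drives a
# recorded screen sequence and returns the result as name:value pairs.
# All screen names and fields are invented for illustration.

def run_transaction(session, account_no):
    """Automated login, navigation with a conditional branch, and
    extraction of the result - the pattern such servers follow."""
    session.send("LOGON", user="WEBSVC", password="*****")
    session.send("MENU", option="3")              # go to account inquiry
    screen = session.send("INQUIRY", acct=account_no)

    # Conditional branching: some accounts route to an extra screen.
    if screen.get("STATUS") == "SUSPENDED":
        screen = session.send("SUSPENSE-DETAIL", acct=account_no)

    # The middle tier would wrap this dict as the Web Service response.
    return {"account": account_no, "balance": screen["BALANCE"]}

class FakeSession:
    """Stand-in for a pooled 3270 session, for demonstration only."""
    def send(self, screen, **fields):
        if screen == "INQUIRY":
            return {"STATUS": "ACTIVE", "BALANCE": "1,204.50"}
        return {}

print(run_transaction(FakeSession(), "00123456"))
```

The calling application never sees the login or the intermediate screens, only the final structured result, which is why updates work the same way as retrievals.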
Did you notice that the HyperTerminal responded almost instantly, yet the automated one took 30 seconds?
Not really a fair comparison. The terminal emulator was going a screen at a time under Simon's control. You were seeing the per-screen response time, and it obviously looks faster (as it did inside ObjectBuilder inside VS.NET when Simon ran it there). The Web Service was running a navigation of several screens, following an automated login, before you saw a result. The system also had a default wait time set between each screen in the navigation, which you would not normally use. So it is not really apples to apples. You should add up the total time that Simon took when he navigated the whole thing manually and then compare that to the Web Service. However, your point is well made: individual screen transitions are always going to look faster than a complete automated sequence.
I looked at this a few years back with a 3270 screen. The big thing was: if you maintain a persistent connection, have a real terminal logged in, and do the operations there, it can be faster. Their script had to authenticate and then traverse submenus to get to the data. If you were logged in and ran thread-safe calls to the terminal that ran the script, then either kept the menu-depth state or returned to a root menu, things would work better. Another note: at least on the IBM box I was on, each user was restricted to a single logon session. The SysOps had a higher session limit. With a GUI app you are used to making multiple calls; these would probably block waiting for a clean session.
Right on the money here. You can make a BIG difference to performance and scalability by using session pooling. You can also use a technique called "screen parking" (which I think is what you are describing) where you log in to the session and navigate automatically to
a landing or parking screen and wait for the "call". Then navigate from there (and back when you have finished) when the real transaction starts. You avoid all of the initial session creation and login overhead by doing that. If you use nailed-up LUs (one
per user for security) you obviously cannot do this, but Single Sign-On at the Windows server level can take care of it by isolating the users on the front end from the "real" security on the back end. If you are smart and you are dealing with more than one back-end system, you can also interact with them in parallel from the server (these types of systems generally support both synchronous and asynchronous operation). Plus, as well as mainframes and AS/400s, you can also pull in transactions from SAP, PeopleSoft, JD Edwards, Oracle, etc., so composite applications are readily built with this type of mainframe access as one thread.
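The pooling-plus-parking pattern described above can be sketched in a few lines. Assumptions are flagged in the comments: the session class, screen names, and pool API are all hypothetical stand-ins, not any vendor's interface.

```python
# Hypothetical sketch of session pooling with "screen parking":
# sessions are logged in once, parked at a known screen, and reused,
# avoiding the per-request login/navigation overhead. Names invented.
from queue import Queue

class HostSession:
    def __init__(self, sid):
        self.sid = sid
        self.logged_in = False
        self.screen = None

    def login_and_park(self):
        # One-time cost: authenticate, then navigate to a parking screen.
        self.logged_in = True
        self.screen = "PARK"

class SessionPool:
    def __init__(self, size):
        self._pool = Queue()
        for i in range(size):
            s = HostSession(i)
            s.login_and_park()       # pay the login cost up front
            self._pool.put(s)

    def run(self, work):
        s = self._pool.get()         # blocks until a parked session is free
        try:
            return work(s)           # navigate from PARK and do the work
        finally:
            s.screen = "PARK"        # re-park the session for the next caller
            self._pool.put(s)

pool = SessionPool(size=4)
result = pool.run(lambda s: f"balance fetched on session {s.sid}")
print(result)
```

Because the queue blocks when all sessions are busy, this also models the single-session-per-user limit mentioned above: callers wait for a clean session rather than failing.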
Firstly let me ID myself - my name is Peter Havart-Simkin from NetManage. I thought a couple of comments on Simon's show and the issues raised about speed and performance would help - with some real-world experiences.
Firstly, the comments about the effect of multiple virtual machines are right on. I do a similar demo myself on a laptop with multiple VMs, and it does slow everything up, as can be seen from other apps such as VS.NET and Excel in Simon's video. So here are some real-world numbers, because this is "not your Dad's screen scraping!".
Our own benchmarks:
A twin-processor Windows server running NT supported 1,790 simultaneous 3270 sessions into an IBM mainframe (we targeted 2,000, but the loading we put on the mainframe caused the customer to say "OK - we believe you" when the host started to slow up). Same benchmark with 950 users and transactions every three to four seconds each (actually unrealistic, as screen-transaction think time is more like 20 - 25 seconds on average). Loading on the NT box was 45% to 60%. BTW - the processors in the box were 800 MHz each.
A single Windows server running Windows Server 2003 - 2.5 GHz processor and half a gig of RAM. Go to the IBM host, log in, navigate half a dozen screens, grab data off one of the screens, grab two screens of data and amalgamate them onto one screen of output, bring it out in XML and then render it in HTML using a style sheet into a browser. Repeated 12,000 times in 18 minutes. Sub-second response times for the whole sequence up to 850 simultaneous users.
Real world scenarios:
The Brazilian equivalent of the Social Security. The 15th most used web site in the world (not for page hits - for host transactions going through the web to the mainframes). 2 million user sessions a month, concentrated in a four-hour period each weekday. That nets out to 200 - 300 user sessions a minute. All running on one Windows server, with another as a hot standby.
Support center for Visa and MasterCard in Italy. Enough information is brought together as services from three back-end systems that 80% of calls can be answered by the info presented on one screen for the call-center operator. 20,000 calls per day into the call center.
So please think screen-scraping on steroids! This isn't the old-style client-side screen-scraping that most of us have been used to in the past. Running on a Windows server as a proper, purpose-written server-side application is very different. BTW - it runs very nicely next to HIS.
One last item: we have also built a system which delivers mainframe-derived Web Services into Office 2003 Outlook (using the Niobe SDK) along similar lines to the one shown by Simon. As you click on a contact in Outlook, it fires a web service via OnWeb that populates that contact on the fly as it opens, with up-to-date mainframe-derived billing information (think Outlook-based CRM!). You can also select a contact and have a WinForm open that has a mix of mainframe interactions (via Web Services) and local Outlook interactions, with completely seamless operation. There is what seems to be a natural connection in some folks' heads that web-enabling an application automatically means delivering to a browser. In fact, in most organisations the browser is not the application that is used every day within the firewall - it's Office. So the services really are much better delivered directly into that app than to a browser sitting next to it. Avalon, of course, will make that a whole lot better!
Forgive the length of the post!! Thought it was worth shooting down some misconceptions!