z/TPF
Developer: IBM
Written in: Basic Assembly Language, S/390 assembly, C, C++
OS family: z/Architecture assembly language (z/TPF), ESA/390 assembly language (TPF4)
Working state: Current
Source model: Closed source (source code is available to licensed users, with restrictions)
Initial release: 1979
Latest release: V1R1 / December 2005
Platforms: IBM System z (z/TPF), ESA/390 (TPF4)
Kernel type: Real-time
License: Proprietary monthly license charge (MLC)
Official website: IBM: z/TPF operating system

TPF is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9. The name is an initialism for Transaction Processing Facility.

TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks. The world's largest TPF-based systems are easily capable of processing tens of thousands of transactions per second. TPF is also designed for highly reliable, continuous (24x7) operation. It is not uncommon for TPF customers to have continuous online availability of a decade or more, even with system and software upgrades. This is due in part to the multi-mainframe operating capability and environment.

While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's raison d'être is extreme volume, large numbers of concurrent users and very fast response times, for example VISA credit card transaction processing during the peak holiday shopping season.

The TPF passenger reservation application PARS, or its international version IPARS, is used by many airlines.

One of TPF's major components is a high performance, specialized database facility called TPFDF.

A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS.

History

TPF evolved from the Airlines Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP, and as a priced software product. The new name reflects its greater scope and its evolution beyond airline-related applications.

TPF was traditionally an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language called SabreTalk was born and died on TPF.

IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools.

The GCC compiler and the DIGNUS Systems/C++ and Systems/C are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF.

Users

Current users include Sabre (reservations), Amadeus (reservations), VISA Inc (authorizations), American Airlines,[1] American Express (authorizations), EDS SHARES (reservations), Holiday Inn (central reservations), CBOE (order routing), Singapore Airlines, KLM, Garuda Indonesia, Amtrak, Marriott International, Travelport and the NYPD (911 system). Japan Airlines has publicly acknowledged they are running z/TPF.[2]

Operating environment

Tightly coupled

TPF is capable of running on a multiprocessor, that is, on mainframe systems in which there is more than one CPU. Within the community, the CPUs are referred to as Instruction Streams or simply I-streams. On a mainframe or in a logical partition (LPAR) of a mainframe with more than one I-stream, TPF is said to be running tightly-coupled.

This is made possible by the reentrant nature of TPF programs and of the control program: because no active piece of work modifies any program, the same copy can execute on every I-stream. The default is to run on the main I-stream, defined as the lowest-numbered I-stream found during IPL. However, users and programs can initiate work on other I-streams via internal mechanisms in the API that let the caller dictate which I-stream the work runs on. In z/TPF, the system itself tries to load balance by routing any application that does not request a preference or affinity to the I-streams with less work than others.

In the TPF architecture, each I-stream shares common core, except for a 4 KB prefix area for each I-stream. In other instances where core data must or should be kept separate, the application designer typically carves a reserved storage area into a number of sections equal to the number of I-streams. TPF's support of I-stream unique globals is a good example of the system itself doing this. Access to these carved sections of core is made by taking the base address of the area and adding to it the I-stream relative number multiplied by the size of each section.

Loosely coupled

TPF is capable of supporting multiple mainframes (of any size themselves, from single to multiple I-streams) connecting to and operating on a common database. Currently, up to 32 IBM mainframes may share the TPF database; such a system would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case the control program would be equally loaded into core, and each program or record on DASD could potentially be accessed by either mainframe.

To serialize accesses to data records on a loosely coupled system, a practice known as record locking must be used: when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and must tell the requesting processors that they are waiting. Within any tightly coupled system this is easy to manage between I-streams via the Record Hold Table. However, when the lock is obtained outboard of the TPF processor, in the DASD control unit, an external process must be used. Historically, record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (Extended LLF). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). Running clustered (loosely coupled) z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility.

Processor shared records

Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by record type and ordinal. If a record type 'FRED' were defined in the TPF system with 100 records, or ordinals, then in a processor shared scheme record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD on every processor, clearly necessitating the use of a record locking mechanism.

All processor shared records on a TPF system are accessed via exactly the same file address, which resolves to exactly the same location on every processor.

Processor unique records

A processor unique record is defined so that each processor expected to be in the loosely coupled complex has its own record type 'FRED', again with perhaps 100 ordinals. However, if users on any two or more processors examine the file address that record type 'FRED', ordinal '5' resolves to, they will note that a different physical address is used on each processor.

TPF attributes

What TPF is not

TPF has no built-in graphical user interface (GUI). TPF's built-in user interface is line driven, with simple text screens that scroll upwards. There are no mice, windows, or icons on a TPF Prime CRAS (Computer room agent set — "the name given to devices which have been assigned to control the operation of the z/TPF system"[3]). All work is accomplished via typed one- or two-line commands, similar to early versions of UNIX before X. Several products connect to the Prime CRAS and provide graphical interface functions to the TPF operator, for example the TPF Operations Server. Graphical interfaces for end users are typically provided through PC-based functions.

TPF also does not include a compiler/assembler, text editor, or the concept of a desktop. TPF application source code is typically kept in PDSs on a z/OS system. However, some previous installations of TPF kept source code in z/VM-based files and used the CMS update facility to handle versioning. Currently the z/OS compiler/assembler is used to build TPF code into object modules, producing load files that the TPF "online system" can accept. Starting with z/TPF 1.1, Linux will be the build platform.

Using TPF requires an intimate knowledge of the Operations Guide since there is no shipped support for any type of online command "directory" that you might find on other platforms. Commands created by IBM and shipped by IBM for the running and administration of TPF are referred to as "Z-messages" as they are all prefixed with the letter "Z." Other letters are reserved so that customers may write their own commands.

TPF has extremely limited capability to debug itself. Typically, third-party software packages such as IBM's TPF Tool Kit, Step by Step Trace from Bedford Associates,[4] or CMSTPF, TPF/GI, and zTPF/GI from TPF Software, Inc.[5] are employed to aid in tracing and tracking errant TPF code. Since TPF can run as a second-level guest under IBM's z/VM, a user can employ the VM trace facility to closely follow the execution of code. TPF will allow certain types of function traces to operate and dump their data to tape, typically through user exits that present parameters to a called function or perhaps the contents of a block of storage. There are other types of trace information that TPF can collect in core while running, and this information gets "dumped" whenever the system encounters a severe error.

What TPF is

TPF is highly optimized to permit messages from the supported network either to be switched out to another location or routed to an application (a specific set of programs), and to permit extremely efficient access to database records.

Data records

Historically, all data on the TPF system had to fit in fixed record (and core block) sizes of 381, 1055, and 4K bytes, due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by freeing the operating system from breaking large data entities into smaller ones during file operations and reassembling them during read operations. Since IBM hardware does I/O via channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O, all in the name of speed. Because the early days also placed a premium on the size of storage media, both memory and disk, TPF applications evolved into doing very powerful things while using very little resource.

Today, many of these limitations have been removed. In fact, DASD records smaller than 4K are still used only because of legacy support. With the advances made in DASD technology, a read or write of a 4K record is just as efficient as that of a 1055-byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on packing data into the smallest space possible.

Programs and residency

TPF also had its programs allocated as 381-, 1055-, and 4K-byte records, with each program consisting of a single record (also known as a segment); a comprehensive application therefore needed many segments. With the advent of C support, application programs were no longer limited to 4K: much larger C programs could be created, loaded to the TPF system as multiple 4K records, read into memory during a fetch operation, and correctly reassembled. Since core memory was historically at a premium, only highly used programs ran 100% of the time as core resident; most ran as file resident. Given the limitations of older hardware, and even today's relative limitations, a fetch of a program, be it a single 4K record or many, is expensive. Since core memory is now monetarily cheap and physically much larger, greater numbers of programs can be allocated to reside in core. With the advent of z/TPF, all programs will eventually reside in core; the only question is when they are first fetched.

Before z/TPF, all assembler language programs were limited to 4K in size. Assembler is a more space-efficient language, so a lot of function can be packed into relatively few 4K segments of assembler code compared to C. However, skilled C programmers are much easier to find, so most if not all new development is done in C. Since z/TPF allows assembler programs to be repackaged into one logical file, critical legacy applications can be maintained and can actually improve in efficiency: the cost of entering one of these programs now comes at the initial enter, when the entire program is fetched into core, and logical flow through the program is accomplished via simple branch instructions, instead of the dozen or so IBM instructions previously needed to perform what is known as 'core resident enter/back'.

Core usage

Historically, and in step with the above, core blocks (memory) were also 381, 1055, and 4K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of blocks in use and simply hand out the first block on the available list.

Physical memory was carved into sections reserved for each size, so a 1055-byte block always came from one section and returned there; the only overhead needed was to add its address to the proper list in the physical block table. No compaction or garbage collection was required.

As applications became more advanced, demands for core increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames, 4K in size (and now 1 MB in size with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.

References

  1. ^ http://www.tpfug.org/JobCorner/jobs.htm
  2. ^ http://www-03.ibm.com/press/us/en/pressrelease/23914.wss
  3. ^ IBM Corporation. "CRAS support". Retrieved October 17, 2012. 
  4. ^ Bedford Associates. "Bedford Associates, Inc.". Retrieved October 17, 2012. 
  5. ^ TPF Software. "TPF Software, Inc.". Retrieved October 17, 2012. 

Further reading

Transaction Processing Facility: A Guide for Application Programmers (Yourdon Press Computing Series) by R. Jason Martin (Hardcover, April 1990)
