BatchPipes enables one job step to feed data to another job step as the data is written, rather than waiting until the step ends and the output data set is closed.
In traditional processing, records written to a sequential (QSAM or BSAM) data set on disk or tape cannot simultaneously be read back by another job. Hence these two jobs, the "writer" and the "reader", cannot run at the same time. This is termed file-level interlock or data-set-level interlock.
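File-level interlock can be illustrated with a minimal Python sketch (the file name and record count are made up): the reader cannot start until the writer has finished and closed the data set, so the two elapsed times simply add up.

```python
import os
import tempfile

# "Writer" job: produce all records, then close the data set.
def writer(path, n):
    with open(path, "w") as f:
        for i in range(n):
            f.write(f"record-{i}\n")
    # Only after the file is closed may the "reader" job begin.

# "Reader" job: consume the records.
def reader(path):
    with open(path) as f:
        return sum(1 for _ in f)

path = os.path.join(tempfile.mkdtemp(), "FILEA")
writer(path, 1000)     # the writer job runs to completion first
count = reader(path)   # only then does the reader job run
print(count)
```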
With BatchPipes an installation can arrange for the data to be "piped" between the two jobs. The advantage is that the jobs can run concurrently, and it is usually possible to avoid the time spent writing the data to secondary storage and reading it back. The combination of these two characteristics, used judiciously, reduces the combined elapsed time of the two jobs, as measured from the start of the writer job to the end of the reader job.
BatchPipes maintains a short queue of records being passed between the writer and the reader: the writer adds records to the back of the queue and the reader takes them from the front. This is termed record-level interlock, and it allows the reader and the writer to run concurrently.
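Record-level interlock can be sketched in Python with a bounded queue standing in for the pipe (the depth of 60 mirrors the PIPEDEPTH value mentioned below; the job functions and record count are illustrative): the writer and reader run at the same time, and no record ever touches disk.

```python
import queue
import threading

PIPE_DEPTH = 60     # maximum records buffered between the two jobs
SENTINEL = None     # marks end-of-data (the writer closing its output)

pipe = queue.Queue(maxsize=PIPE_DEPTH)

def writer_job(n):
    for i in range(n):
        pipe.put(f"record-{i}")   # blocks when the pipe is full
    pipe.put(SENTINEL)

def reader_job(result):
    while True:
        rec = pipe.get()          # blocks when the pipe is empty
        if rec is SENTINEL:
            break
        result.append(rec)

result = []
w = threading.Thread(target=writer_job, args=(1000,))
r = threading.Thread(target=reader_job, args=(result,))
w.start(); r.start()              # both jobs run concurrently
w.join(); r.join()
print(len(result))
```

Because the queue is bounded, a fast writer is throttled to the reader's pace rather than filling memory, which is the same back-pressure behaviour a BatchPipes pipe provides.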
Criticism: One of the key implementation considerations is scheduling the reader and writer jobs to run together, which may not be feasible in practical batch schedules. Furthermore, if any job in the pipeline fails, recovery actions extend beyond just that single job. For these reasons some installations have found BatchPipes difficult to implement.
Suppose you have two jobs, where the output of JOB1 (data set FILEA) is used as input by JOB2.
Then add the SUBSYS parameter to the corresponding DD statement in both jobs' JCL, for example (the continuation line shows the syntax as used in my installation; check your own BatchPipes setup):

//STEP05R.DD1 DD DISP=SHR,DSN=FILEA,
//            SUBSYS=(BP01,PIPEDEPTH=60)

The SUBSYS line is the extra line we need to introduce for BatchPipes. Subsystem name BP01 and a PIPEDEPTH of 60 are the default values in my installation.