Assembling, Compiling, Link-Editing, and Executing User-Written Programs

The only assumption in this documentation is that you are working in batch mode. That is, you enter/edit your source program using a Linux or Windows/?? editor (vi, joe, jed, SPF/PC, whatever) and then submit the source, along with the appropriate JCL statements, to the reader device on the Hercules console. If you are working under TSO, the procedures will be somewhat different and I have not covered those here.
I would suggest that you create a subdirectory under your main operating system subdirectory for each language you plan to write programs in. This will result in a bit more typing when you "submit" a jobstream to the Hercules reader, but will result in better organization, especially as you write more programs. My directory structure looks like:
After you have written a dozen or more programs, you will appreciate the few minutes you take now to institute some organization.
If you have MVS or MVT up and running under Hercules, you should already be familiar with basic job submission, but I will briefly go over the steps here before we get into assembler/compiler specifics.
A jobstream consists of one or more Job Control Language (JCL) statements and may also contain embedded card image data statements. The jobstream exists on the host operating system (Linux or Windows/??) as a text file, which can be created or modified with your favorite text editor (vi, joe, jed, SPF/PC, whatever).
To submit a jobstream to the MVS or MVT system for processing, switch to the console window in which Hercules is executing and follow one of the two procedures below, depending upon whether you are using the semi-graphical control panel or the line-by-line control panel. Reminder: you can switch the Hercules control panel between the two modes by pressing the ESCape key.
Graphical Hercules Console
It is not necessary to press ENTER after steps 1 and 2, but it is necessary after step 3.
Text (Trace) Hercules Console
Press the ENTER key after typing the <file name>.
Note: If you are running MVS (or MVT with HASP installed), you may need to include the parameter eof after the file name to prevent the Operating System from detecting an I/O error on the card reader and flushing the jobstream. If you are running MVT without HASP, specify intrq instead of eof to leave the reader task running. In recent versions of Hercules, the setting of either eof or intrq is persistent, and therefore does not need to be specified unless you wish to change the behavior of the emulated card reader device.
Using either method re-initializes (signals Hercules to close and re-open) the file associated with the simulated card reader. The text file is opened by Hercules and read in, passing the card images to the MVS or MVT reader, which in turn writes them into the system job queue. When the text file is completely read in, if the last card image contained a null statement (// in columns 1 and 2), the job is eligible for processing by an Initiator. When an initiator becomes available, the job will be processed by MVS or MVT.
The jobstream for an assembly or compile will consist of the JCL to invoke the assembler or compiler plus the source language statements that make up your program. If you want to compile, link-edit, and execute the program in a single job, you will also need to include additional JCL for these steps.
There is no reason you cannot include all of the DD statements required by the assembler or compiler in each text file along with your source program statements. But there are always requirements for one to three work files, usually a library file, an input file and one or more output files, a compiler listing file, etc. You can see that it would rapidly get tedious to type in all of those DD statements for every program you write. Fortunately, there is a set of catalogued procedures available for your use that will reduce the required JCL statements to a minimum. And any shortcomings of the catalogued procedures can easily be overcome by submitting override JCL statements to modify the catalogued procedure to provide exactly the functionality you need for each individual program.
The catalogued procedures contained in the SYS1.PROCLIB of MVT are:
The procedures that compile and execute utilize the loader to process the output of the compiler. The loader resolves external references to produce an executable module, which is then executed. The procedures that compile, link-edit, and execute utilize the link editor to process the output of the compiler. The link editor also resolves external references to produce an executable module, but the executable module is written to an output file which may be saved so that the resulting module may be executed again without invoking the compiler and link editor.
The simplest procedures to use are those which invoke the assembler or compiler, pass the output of the compiler to the loader or link editor, and then execute the program. Here is a jobstream which will assemble, link-edit, and execute a "Hello World" program written in assembler: (The numbers to the left of the statements have been added for reference.)
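The original listing is not reproduced in this copy, so the following is a sketch of such a jobstream, reconstructed to be consistent with the statement-by-statement discussion that follows; the job name, accounting information, and step name are illustrative, and the numbers at the left correspond to the statement numbers referenced in the discussion:

```jcl
1 //HELLO    JOB 1,'HELLO WORLD',CLASS=A,MSGCLASS=A,REGION=128K
2 //STEP1    EXEC ASMFCLG
3 //ASM.SYSUT1 DD UNIT=SYSDA
4 //ASM.SYSUT2 DD UNIT=SYSDA
5 //ASM.SYSUT3 DD UNIT=SYSDA
6 //ASM.SYSGO  DD UNIT=SYSDA
7 //ASM.SYSIN  DD *
   ... assembler source statements for the Hello World program ...
  //
```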
The first statement is just a standard job card. However, a REGION parameter has been specified which will allocate 128K to each step in the job. Although I am not certain why, the allocations provided by the REGION parameters in some of the catalogued procedures are inadequate and the jobs will abend without adding a larger REGION on the job card.
The second statement invokes the catalogued procedure, ASMFCLG. Note that the procedure name is specified alone instead of prefacing it with 'PGM=' as you would to execute a program. This is how the scheduler knows that it needs to search SYS1.PROCLIB to locate a procedure instead of searching SYS1.LINKLIB to locate a program.
Statements three through six are overriding statements. They modify the content of the catalogued procedure by specifying replacement parameter values on four of the DD statements in the procedure. If you look at the catalog procedure listing for ASMFCLG, you will see that the UNIT type for the three work datasets used by the assembler and the dataset to receive the output of the assembler are SYSSQ. If left as they are, the system may attempt to allocate tape devices to these datasets, so I have provided override statements to ensure that they will be allocated disk devices, SYSDA.
Notice that each override statement begins with the name of the step within the catalogued procedure where the statement occurs, followed by a period, followed by the name of the DD statement to which the override is to apply. All of the overrides in this example apply to the ASM step, however if we wanted to override a DD statement in the link-edit step, we would preface the DD name with 'LKED.<ddname>'.
There is no SYSIN DD statement in the catalogued procedure. That is because card image data may not be included inside of a catalogued procedure. So, we must insert the SYSIN DD statement (statement seven), which is followed by the card image data that is the assembler source program. Like the previous override statements, the statement must begin with the name of the step within the catalogued procedure where the statement is to be inserted, followed by a period, followed by the DD name. During the processing of the JCL, the inserted SYSIN DD statement, along with the card image data that is associated with it, will be merged with the statements from the catalogued procedure and will be processed as if all of the statements were present in the text file you originally edited. Warning: Statements in the input jobstream that are overriding statements in cataloged procedures must occur in the order in which they occur in the catalogued procedure. Any statements that are to be appended to a particular step must immediately follow any overriding statements for that step.
In the example I have omitted the '/*' which could be used to indicate the end of the card image data since the system will assume the presence of the '/*' when it processes the null statement (//) which indicates the end of the jobstream. Reminder: if instream statements are preceded with a DD DATA statement, they must be followed by a '/*' statement.
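As a sketch of the two in-stream forms (the step and DD names assume the ASMFCLG procedure discussed above):

```jcl
//* IN-STREAM DATA INTRODUCED WITH DD * MAY END AT THE NULL STATEMENT
//ASM.SYSIN DD *
         ... card images ...
//
//* IN-STREAM DATA INTRODUCED WITH DD DATA MUST END WITH /*
//ASM.SYSIN DD DATA
         ... card images, which may themselves begin with // ...
/*
```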
The JCL listing produced when a catalogued procedure is utilized is also slightly different from what you may be used to seeing. Here is a portion of the output from the jobstream shown above: (The numbers to the left of the statements have been added for reference.)
The leading '//' in each statement read from the catalogued procedure has been changed to some other character in the listing. If a statement in the procedure was modified by a statement in the text file, the '//' has been changed to 'X/'. Notice that the parameter changed by the overriding statement is not actually altered on the listing. The statements from the catalogued procedure that are not altered have their leading '//' changed to 'XX'. Statements which are read from the jobstream (i.e. the text file you created) are printed unchanged.
Statement pairs four/five, seven/eight, nine/ten, and fourteen/fifteen indicate that an override has been done.
There are simple "Hello World" compile, link-edit, and execute jobstreams along with the resulting output listings available here for the following languages:
In addition to replacing a single parameter on a JCL statement in a catalogued procedure, you can substitute a complete replacement statement. The possibilities are nearly limitless, so the compile procedures supplied in SYS1.PROCLIB can be made to accomplish almost any task you need without your ever having to type in all the JCL from scratch.
If you want to save the object code produced by a compiler, use either a compile or compile and link-edit procedure and override the dataset that receives the compiler output (usually the SYSLIN or SYSGO DD) to point to a disk dataset that is either catalogued or kept:
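A hedged sketch of such an override follows; the procedure name, dataset name, and space allocation are all hypothetical, and the SYSLIN DD name assumes a COBOL compile-only procedure:

```jcl
//STEP1 EXEC COBUC
//COB.SYSLIN DD DSN=MYUSER.OBJLIB(MYPROG),DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(5,5,2))
```

The output of a later link-edit job could then point its input at the saved member rather than re-running the compiler.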
In order to override a parameter on the EXEC statement (such as PARM, REGION, COND) in a catalogued procedure, specify the parameter name, followed by a period, followed by the name of the step within the catalogued procedure:
To nullify (remove) parameters, code the keyword followed by an equal sign, omitting the value. To nullify an entire parameter that has subparameters, you must nullify each of the subparameters that have been coded; DCB= alone will not nullify a DCB parameter that has subparameters (RECFM=, LRECL=, etc.) coded.
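As a sketch combining both techniques (the option values and region size are illustrative): the first keyword overrides the PARM for the ASM step, the second enlarges that step's region, and the third nullifies any COND parameter coded on the LKED step in the procedure:

```jcl
//STEP1 EXEC ASMFCLG,PARM.ASM='LIST,XREF',REGION.ASM=256K,COND.LKED=
```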
A complete set of options for the Assembler, the language compilers and the linkage editor can be found at: Parameter Options.
November 2004 - I have written a macro, jobstream, and instructions for changing the installed default options for the COBOL compiler (without regenerating the compiler via MVT System Generation). See: MVT COBOL Compiler Default Options.
Here I will use one example to illustrate two advanced situations. The task is to compile and execute a COBOL program where the COBOL source statements are contained in a Partitioned Dataset. The COBOL program reads input from a tape dataset and writes output to a disk dataset. (The numbers to the left of the statements have been added for reference.)
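The original jobstream listing is not reproduced in this copy; the sketch below is reconstructed to be consistent with the discussion that follows, and every dataset name, DD name, volume serial, and label value shown is hypothetical:

```jcl
//COMPGO   JOB 1,'COBOL EXAMPLE',CLASS=A,MSGCLASS=A,REGION=4096K
//STEP1    EXEC COBUCLG
//COB.SYSIN DD DSN=MYUSER.SOURCE.COBOL(MYPROG),DISP=SHR
//GO.INTAPE  DD DSN=MYUSER.INPUT.TAPE,DISP=OLD,UNIT=TAPE,
//             VOL=SER=100001,LABEL=(1,SL)
//GO.OUTDISK DD DSN=MYUSER.MASTER.FILE,DISP=MOD
//GO.PRTOUT  DD SYSOUT=A
//
```

Note the DISP=SHR on the SYSIN DD (reading the source from a catalogued PDS member) and the DISP=MOD on the disk output DD (extending the dataset by adding records at the end).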
Statement three adds the SYSIN DD to the catalogued procedure to provide the input to the COBOL compiler, but rather than point to in-stream card images, the DD points to a member of a partitioned dataset. Since there is no volume information included with the DD, you can deduce that the dataset is catalogued.
If there are no errors during the compilation and link-editing, the program will be executed.
The tape dataset referenced by statements seven and eight will be read in by the program. The disk dataset referenced by statement nine will be extended (records added to the end). And print records will be written to SYSOUT using the DD at statement ten.
Another means of customizing catalogued procedures is through the use of symbolic variables. The RPG procedures use symbolic variables extensively. Here is a portion of the output from the sample jobstream using the RPGECLG procedure: (The numbers to the left of the statements have been added for reference.)
In statements two through seven, each of the pair of items separated by an equal sign (=) represent a symbolic variable name and its default value. In the remainder of the procedure, each time a symbolic variable name is encountered, it is replaced by the value that was supplied on the right of the equal sign. Statements nine, fourteen, seventeen, twenty, and twenty-four indicate that substitution has been done and the JCL statements displayed on those lines are the statements as they appear after the substitution of the symbolic value in place of the symbolic variable name.
But you do not have to accept the default values supplied for the variables. You may supply your own values which will override the defaults. To supply a new value for a symbolic variable, you simply code the symbolic variable name, followed by an equal sign, followed by the value you want to substitute. For example, if the first statement in the listing above had been coded as:
the resulting statement nine would have been:
A much more subtle use of symbolic parameters can be found by looking closely at the DSN parameters in any of the catalogued procedures. They are all specified as symbolic parameters. Here is the SYSGO DD statement from the ASMFCLG procedure:
&LOADSET is actually a symbolic variable, but if no value is supplied for it, it will function as a temporary dataset name (as though &&LOADSET had actually been coded). If you wanted to save the output from the assembler step, you could supply a value for the &LOADSET symbolic variable to point to a valid dataset name and then it would receive the output rather than a temporary dataset.
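For example, under the assumption (the procedure listing has been lost from this copy) that the SYSGO DD is coded with DSN=&LOADSET, supplying a value might look like this sketch, where the dataset name is hypothetical:

```jcl
//STEP1 EXEC ASMFCLG,LOADSET='MYUSER.OBJ.DATASET'
```

Note that keeping the dataset permanently would normally also require overriding the DISP on the SYSGO DD (for example, to CATLG or KEEP); treat this sketch as a starting point only.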
The MVT COBOL compiler installs with a buffer default that is too small to handle blocked datasets for input (COBOL source read from the SYSIN DD) or output (object modules written to SYSLIN). In order to read or write blocked datasets, or to copy source code from a library dataset, you will need to pass additional parameters to the compiler:
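As a sketch, using the SIZE and BUF values recommended later in this document (the step name COB matches the PARM.COB override mentioned below, and the 4096K region follows the recommendation that accompanies these values):

```jcl
//STEP1 EXEC COBUCLG,PARM.COB='SIZE=2048K,BUF=1024K',REGION.COB=4096K
```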
I would also recommend that you change the REGION to 4096K. In February, 2002 I modified the procedures that are installed from the COBOL compiler installation tape to include these modifications.
On 8 November 2004, while helping someone resolve an error in a COBOL program I discovered that there is a catalogued procedure missing from the MVT system that is usually present - COBUCL. I created this missing procedure and recreated the archive that contains the MVT COBOL compiler load modules, link library, and procedures. I also made a small change to all of the procedures that execute the COBOL compiler that will make it easier to override the compiler options and always provide the two required options discussed here - SIZE and BUF. Each catalogued procedure that invokes the compiler now contains a PROC header statement:
which supplies default values to two symbolic variables and the statement in the procedure containing the PARM keyword has been changed to:
which concatenates the two values supplied by the symbolic variables into a single value to pass to the COBOL compiler as its parameter. When executing one of these procedures, if you need to specify alternative compiler options, you may specify them by using the CPARM1 symbolic variable name. You can see an example of this in the COBUCLG example above. Of course, you can always override the value for SIZE and BUF by specifying your own value for the CPARM2 variable or by overriding the entire concatenated value by specifying PARM.COB in an override.
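For example, to supply compiler options while keeping the SIZE and BUF defaults provided by CPARM2, one might code the following sketch (the specific option shown, CLIST, is illustrative only):

```jcl
//STEP1 EXEC COBUCLG,CPARM1='CLIST'
```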
Although performing a system generation to install the MVT 21.8f operating system will add the compilers documented here for that operating system, along with the procedures to invoke them, the only language processor that is installed with MVS 3.8 is the Assembler. The MVT compilers may be installed under MVS and used with no problem and I have documented the process to accomplish that under the navigation tab: Compilers for MVS. There are also instructions there for installing additional compilers, such as WATFIV, PL360, PASCAL, etc.
The MVT COBOL compiler pre-dates VSAM, so there is no support for VSAM datasets in this version of COBOL. I have written an Assembler subroutine which can be called from COBOL programs compiled with the MVT compiler which will enable much of the functionality of a more recent COBOL compiler. For information and to download, see: VSAM File Access for COBOL.
Likewise, the PL/1 compiler also predates VSAM, so I wrote a "wrapper" program to allow programs compiled under the MVT PL/1 compiler to call the routine as well. For information and to download, see VSAM File Access for PL/1.
The MVT COBOL Compiler predates the capability for dynamic subroutine calls (subroutine load module loaded at execution time rather than binding the subroutine code into the main program at compile/link edit time). Ed Liss has written an assembler subprogram that will provide this functionality to COBOL programs compiled with the MVT COBOL compiler. His archive containing his program and installation instructions is available for download from this site: DYNALOAD.
The entries for the Identification and Environment Divisions are usually the most difficult to code, especially when dealing with a compiler as old as the one we have available, simply because there are very few manuals and/or textbooks available that match the compiler. So, prompted by recent questions on one of the Hercules group lists about this area, I decided to add this section to assist others trying to solve this "puzzle".
The Identification Division is relatively straightforward, with only the division header and the Program ID fields required. Here are the entries in a syntax diagram format:
The program name must consist of one word, contain one to eight characters, and begin with an alphabetic character. The optional paragraphs following Program ID must appear in the order shown, if included. Regardless of what is coded for DATE-COMPILED, the actual date compiled will be substituted by the compiler as the source is processed.
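A minimal sketch meeting those requirements follows; the program name and entry values are arbitrary, and the optional paragraph names shown are assumptions based on the 1968-standard compilers of this period:

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL1.
      * THE PARAGRAPHS BELOW ARE OPTIONAL, BUT MUST APPEAR IN ORDER
       AUTHOR. J PROGRAMMER.
       DATE-WRITTEN. JUNE 1974.
       DATE-COMPILED. REPLACED BY THE COMPILER.
       REMARKS. SAMPLE IDENTIFICATION DIVISION ENTRIES.
```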
The Environment Division is more complex, and the entries required depend greatly upon the individual program. Here are the entries in a syntax diagram format:
The Environment Division entries are divided into two sections, the Configuration Section and the Input-Output Section. The sections, and the paragraphs within, if used, must be coded in the order shown. The Configuration Section may be omitted entirely unless there is a need for the Special-Names paragraph. The Source-Computer and Object-Computer paragraphs are used to document the computer and memory size of the computer on which the program is developed (Source-Computer) and is intended to execute (Object-Computer). The format of the entry for these fields is:
An example of this entry is:
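(The original example listing has been lost from this copy. The entry below is a hedged reconstruction; the letter H in the model designation is an assumption, based on the System/360 convention in which a letter encodes the memory size.)

```cobol
       SOURCE-COMPUTER. IBM-360-H65.
```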
which identifies the source computer as an IBM 360/65 with 256k of memory. The compiler will accept values in the range C through I for the memory size. One table I have in my references lists these values for memory size:
I have a vague recollection of coding Source Computer and Object Computer entries when I first started writing production COBOL programs in the 1970's, but in most of my professional experience they have been omitted.
(Added November 2004) Now to completely contradict the last statement above: I found an old (1981) OS Debugging for the COBOL Programmer in a used bookstore a few weeks ago, and it states that this compiler was used on both the 360 and 370 hardware. If the object code was to be run on 360 hardware, additional instructions are placed in the generated object module to ensure proper boundary alignment. These instructions are not required on 370 hardware, but if the OBJECT-COMPUTER entry incorrectly states IBM-360, the instructions will be produced and will increase the run time of the program. So, for programs compiled and executed under Hercules it will probably be beneficial to have IBM-370 in the SOURCE- and OBJECT-COMPUTER entries.
The Special Names paragraph is used for assigning mnemonic names to functional names used in subsequent Procedure Division statements. In my experience, this has most often been used to assign names to physical channel numbers in a printer carriage control tape. For example:
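(The original example has been lost from this copy; the following is a sketch, with the mnemonic name TOP-OF-PAGE chosen arbitrarily for printer channel 1, C01.)

```cobol
       SPECIAL-NAMES.
           C01 IS TOP-OF-PAGE.
```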
Thereby allowing a write statement used to print the first line on a new page of continuous form to be written as:
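(Again a sketch: PRINT-LINE is a hypothetical record name, and TOP-OF-PAGE a mnemonic name assumed to have been assigned to channel 1 in the Special-Names paragraph.)

```cobol
           WRITE PRINT-LINE AFTER ADVANCING TOP-OF-PAGE.
```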
The section of the Environment Division which will be of the most interest will be the entries required for the Input-Output Section, since they are used to define the input and output datasets to be processed by the program, and will therefore be required in the majority of programs written. For each dataset to be processed, there must be a SELECT statement, which names the dataset for use by subsequent COBOL statements, with an ASSIGN clause which associates the dataset name with a physical dataset managed by MVS during the program's execution. The format of the system-name in the ASSIGN clause is:
The device class specifies the general category of device upon which the physical dataset resides, and must be one of these three entries:
The device number is a four or five character designation of a specific hardware model, and more precisely defines the device upon which the dataset may reside. Under MVS, the device number may, and should, be omitted. If it is included, the MVT compiler will accept the following device numbers under the respective device classes:
The organization is coded as either S, D, or I; where S designates sequential file organization, D designates files stored in a random organization where actual keys will be supplied to write and read the records, and I designates files stored using the Indexed Sequential Access Method.
The DDname portion of the system-name specifies the one to eight character name used on a DD JCL statement during execution to associate a physical dataset with the file defined in the COBOL program. Here are some examples copied from programs I have compiled with the MVT compiler running under MVS 3.8j:
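(The original examples have been lost from this copy. The sketches below assume the conventional device classes UT for utility, DA for direct access, and UR for unit record, which are assumptions to be verified against your compiler manual; the file and DD names are hypothetical.)

```cobol
           SELECT INPUT-FILE  ASSIGN TO UT-S-INFILE.
           SELECT MASTER-FILE ASSIGN TO DA-S-MASTER.
           SELECT REPORT-FILE ASSIGN TO UR-S-PRTOUT.
```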
The remainder of the entries for the Input-Output Section will be interpreted as described in any contemporary COBOL textbook, with the possible exception of those for Indexed Sequential datasets. However, if you don't already have some experience using Indexed Sequential datasets under MVS, it is probably not something you would attempt without having in your possession a textbook which covers it in depth. If you have the need to process indexed datasets, I would recommend you consider using my VSAM I/O subroutine instead of having to deal with the idiosyncrasies and limitations of ISAM.
Also, see Summary of Environment Division File Statements (PDF)
I am not going to cover all the specifics for the Data Division for the MVT compiler, but I thought it prudent to make a few suggestions about the File Description entries. With modern COBOL compilers, we have come to expect that less is better and rely upon the DD statement and the data management subsystems of the Operating System to fill in whatever information is missing and required. You probably should consider including the following clauses on your FD entries:
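(The original list of suggested clauses has been lost from this copy. The FD below is a sketch of the defensive style described, with a hypothetical file name and record layout; which clauses are strictly required depends on the dataset and its DD statement.)

```cobol
       FD  INPUT-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 80 CHARACTERS
           LABEL RECORDS ARE STANDARD
           DATA RECORD IS INPUT-REC.
       01  INPUT-REC           PIC X(80).
```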
The Source Program Library Facility of the time period of the MVT COBOL compiler, otherwise known as the COPY statement, was much less flexible than for later compilers. I have been most successful limiting my use to copying 01 level entries into Working-Storage, Linkage, and as Record Level descriptions beneath File Description (FD) entries in the Data Division. Restrictions (limitations) I have encountered are:
From a COBOL textbook of the same time period as the MVT COBOL compiler:
Some examples from the same textbook:
As stated earlier, if you utilize the COPY statement, the values for SIZE and BUF must be increased. I usually run my COBOL compiler step with a REGION of 4096K, and specify: PARM='SIZE=2048K,BUF=1024K'.
In answer to a question posted to the IBM-Main discussion group ... yes, the MVT COBOL compiler does include the Report Writer Feature. I have written a tutorial covering the basics of using the Report Writer Feature, with examples at: COBOL Report Writer.
If you are attempting to assemble a program written for a more recent version of MVS (or OS/390 or z/OS), you may want to take a look at Jan Jaeger's extended mnemonic macros - mnemac.
Stephen Powell has made available the macro library he developed. It may be used under either VM/CMS or MVS. For more information and to download the installation files - spmaclib.
Up to sixteen datasets may be used during execution of an ALGOL program; each dataset is referred to in the ALGOL procedure by a dataset number, the value of which may range from 0 through 15 and corresponds to the following DD names:
Code  Argument            Function
  1   variable            Returns the value of the character pointer
  2   variable / literal  Sets the value of the character pointer (within current record)
  3   variable            Returns the value of the record pointer
  4   variable / literal  Sets the value of the record pointer (within current section)
  5   variable            Returns the value of the record length
  6   variable / literal  Sets the value of the record length
  7   variable            Returns the number of records per section
  8   variable / literal  Sets the number of records per section
  9   variable            Returns the number of blank spaces serving as a delimiter
 10   variable / literal  Sets the number of blank spaces serving as a delimiter
 11   variable            Returns a value corresponding to the dataset status: 1 if open; 0 if closed; -1 if exhausted
 12   variable / literal  If the argument value is 1 and the dataset is closed, the dataset is opened; if the argument value is 0 and the dataset is open, the dataset is closed
 13   variable            Returns the value of the record pointer and also stores the value in an internal index for the dataset (for subsequent use by SYSACT)
 14   variable / literal  Increments the record pointer by the value of the argument (skips records)
 15   variable / literal  Skips to the next section and sets the record pointer to the value of the argument
Some basic examples of the use of SYSACT may be viewed at ALGOL test.
ASSIST is a small, high-speed, low-overhead assembler/interpreter system especially designed for use by students learning assembler language. The assembler program accepts a large subset of the standard Assembler Language under OS/360, and includes most common features. The execution-time interpreter simulates the full 360 instruction set, with complete checking for errors, meaningful diagnostics, and completion dumps of much smaller size than the normal system dumps.
The ASSIST package has been available from several Internet sites for some time. In fact, I had acquired an AWS tape image containing the package sometime in 2001, but my pursuit of installing it was interrupted by a hard drive crash and I just did not get back to it until recently. The package, as found on the CBT tape (file #085 on the overflow tape) and at least a couple of other locations I checked, seems to have been last updated in March, 1975. Part of the struggle in getting ASSIST into an easily installable package for Hercules/MVS 3.8 was dealing with some strange spurious hex characters appearing in some of the distribution members where only display characters should have been. I am extremely grateful for the help of Mike Stack at NIU in attempting to help me track down the source of these errors. Eventually, I scrapped the CBT tape copy and used the source from Mike's site at NIU, as well as using his installation jobstream as the model for what I built for Hercules/MVS 3.8. [Mike subsequently moved his ASSIST material to his personal site: http://kcats.org/] Following this route greatly reduced the problems with the source and, with a single exception, the source is exactly what is contained on Mike's site. Building SYSIN jobstreams for the installation reduced the size of the files to download and hopefully prevents the introduction of further errors through repeated translation of compressed data between ASCII and EBCDIC.
The installation archive - assist.tgz [MD5: 79D3361EF4ADAAD1FC1284C144E98C32] - contains six jobstreams:
assist$.jcl
    This large jobstream installs the ASSIST load module, the macros ASSIST needs during execution, and a procedure to execute ASSIST.
astest00.jcl (and the remaining test jobstreams)
    These six jobs may be submitted to verify successful installation of ASSIST. They contain many actual programs submitted by students, some with errors still intact.
Following Mike's example, the assist$.jcl jobstream copies all the original source statements from SYSIN statements in the jobstream to two temporary datasets. Updates are then applied to the datasets using statements also read from SYSIN. All of ASSIST's default options are controlled by settings of symbolic variables in the ASSYSGEN macro. As supplied, pretty much everything is "turned on" and it should be suitable for most folks as is. If you want to make changes, you should make them in the update statements, not in the original. With a little rearranging, IFOX00 is suitable for assembling ASSIST, so after the updates are made to the temporary datasets, ASSIST is assembled and link-edited to SYS2.LINKLIB. As always, if you don't use SYS2.LINKLIB, simply change the DSN for the target library. The last two steps of the jobstream add the ASSIST macros to SYS1.MACLIB and the catalogued procedure to SYS2.PROCLIB. Again, change as required for your system. The completion codes for all steps of this job should be 0000.
Submit one or all of the test jobs to verify successful installation. Completion codes for all of these jobs should be 0000, even though there will be errors listed for some of the programs in the SYSOUT.
The ASSIST documentation members include formatting control characters and are best processed with the included copyed program (which I have not included here). I have processed the members with the program and placed the output into PDF files. PDF compression greatly reduces the size of the files, and the text in the PDFs is 100% searchable. You may choose which, if any, of the documentation members to download and use:
PDF File (Original Member)   Pages / Download Size   Contents
asmintro (ASASSIGN)          247 / 660kb             Pennsylvania State University introductory information for beginning students of assembler
logic (ASPLMXXX)             181 / 403kb             ASSIST System Program Logic Manual
usergd (ASUSERGD)            72 / 205kb              ASSIST User's Guide
xmsysgen (XMSYSGEN)          7 / 24kb                Summary of the XMacro package
xmwrites (XMWRITES)          22 / 61.2kb             XMacro Usage
The first document - asmintro - contains actual assignments given to students and a fair amount of basic information about OS and good assembler programming methods. Likewise, the ASSIST User's Guide - usergd - includes much basic assembler information. The xmsysgen document will probably only be of interest to someone seeking to write additional X macros for the ASSIST system.
A good textbook that incorporates the use of ASSIST in learning 370 Assembler is IBM Assembly Language with ASSIST; Structured Concepts and Advanced Topics by Charles J. Kacmar. As I write this, there are 4 copies listed for sale at www.abebooks.com.
I hope this has provided you with the information you need to get started assembling, compiling, link-editing, and executing programs on your own. If I can answer any questions about using catalogued procedures for the compilers or if you find errors in these instructions, please don't hesitate to send them to me:
This page was last modified on November 15, 2010.