If you plan to do much with your "own private mainframe", you will soon find that you have modified a dataset, or a member of a dataset, and wish you could just get back the version you had before you began making changes. Since it is your own private mainframe, it is up to you to make sure you can recover previous versions of any datasets you modify. The purpose of the information presented here is to describe the mechanisms for backing up and restoring datasets.
Certainly it is easy to make a copy of the DASD volumes that hold your copy of MVS 3.8j; that is, all of the datasets that contain the MVS load modules (programs) and supporting data. Not to mention the datasets that control your TSO user id when you log onto TSO, or any datasets you have created and loaded with programs you have entered with RPF or REVIEW. And if you have done more than compile and run a few 'Hello World' type programs, you may even have some VSAM clusters or non-VSAM datasets that you have created. So there can be a variety of data contained on the DASD volumes that it would be a shame to have to recreate. But if you need to restore only one or two of the datasets that have been modified, and you restore a copy of the DASD volumes on the host Operating System (Windows, Mac, or Linux), you will also have restored every other dataset on those volumes to its prior version. That is not an optimal solution.
What you need to contemplate and plan, before you need to use it, is an MVS based solution to backup your data - probably on a periodic basis - and also the means to restore one or more datasets from that backup media onto the DASD volume from which it was backed up. I have also received questions from individuals who wanted to know how to transfer datasets from one instance of MVS to another, either to share with someone else or to move between different configurations of MVS or the other Operating Systems that may be run under MVS 3.8j. There are a number of programs available to run under MVS 3.8j that will easily create backup copies of datasets and later restore either an entire backup set or just one or two datasets from a backup set. I decided to write this while I was working on the latest revision of my instructions/tutorial on installing MVS 3.8j from the IBM distribution tapes. I realized that I had never seen any comprehensive narrative on the topic of backup/restore and thought it might be appreciated. There are several programs that I will cover below. Some of them are easier to use than the others. In some cases there is more than one program that may be used to achieve a backup of a particular type of dataset; there are times when you might choose one over the other. But all of the programs that I describe below can be used to backup and restore the various types of datasets used under MVS 3.8j.
Although card image libraries (where JCL, program source code, etc. are stored) are Partitioned Datasets, which can easily be backed up and restored using DSSDUMP/DSSREST, there are times when you may prefer to process them as card images. For that, the pair of programs OFFLOAD and PDSLOAD, from CBT Tape File #93, is the appropriate solution. The load modules for both programs are available in the SYSCPK load library (SYSC.LINKLIB).
The JCL required for OFFLOAD is:
//OFFLOAD  JOB (1),'OFFLOAD PO',CLASS=A,MSGCLASS=X
//OFFLOAD EXEC PGM=OFFLOAD
//STEPLIB  DD DISP=SHR,DSN=SYSC.LINKLIB
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=SYSP.CONCPT14.MACLIB
//SYSUT2   DD DISP=(,CATLG),DSN=JAY01.OFFLOAD.DATA,
//            UNIT=SYSDA,SPACE=(CYL,(5,2),RLSE),VOL=SER=PUB001,
//            DCB=(DSORG=PS,RECFM=FB,LRECL=80,BLKSIZE=3120)
//SYSIN    DD *
 O I=SYSUT1,O=SYSUT2,T=IEBUPDTE
//
You may download a copy of the JCL at offload.jcl.
The library from which the members are to be extracted is specified with the SYSUT1 DD statement. The sequential dataset where the offloaded copy of the library members is to be created is specified with the SYSUT2 DD statement. The single control card, read from SYSIN, specifies three options: the name of the input DD, the name of the output DD, and the type of control cards to insert before each member in the output dataset. There is not really any reason to change the options as they are shown in the jobstream above.
When this job is submitted, the partitioned dataset is read and all members are copied to the sequential dataset, with control cards inserted before the contents of each member's data. A report is produced showing the members read from the input dataset and a count of the records written to the output dataset. The report produced from the JCL above may be viewed at offload.report.pdf.
The dataset created - JAY01.OFFLOAD.DATA on volume PUB001 - contains each member of the input partitioned dataset, with a control card inserted in front of it. When the dataset is subsequently read by PDSLOAD, the control card will be used to recreate the offloaded member in a new partitioned dataset being created. A listing of the sequential dataset may be viewed at offload.dataout.pdf.
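If you transfer the offloaded sequential dataset to your host system as plain text, the format just described is easy to process there as well. The following is a minimal Python sketch (my own host-side illustration, not part of OFFLOAD or PDSLOAD) that splits such a file back into its members, assuming the default T=IEBUPDTE control card format, where a "./ ADD NAME=member" card precedes each member's card images:

```python
# Split an OFFLOAD-created file (transferred to the host as plain text)
# back into members, assuming IEBUPDTE-style "./ ADD NAME=member" cards.

def split_offload(lines):
    """Return a dict mapping member name -> list of card-image records."""
    members = {}
    current = None
    for line in lines:
        if line.startswith("./ ADD "):
            # parse the member name from e.g. "./ ADD NAME=HELLO,LIST=ALL"
            for field in line[7:].split(","):
                field = field.strip()
                if field.startswith("NAME="):
                    current = field[5:]
                    members[current] = []
        elif current is not None:
            members[current].append(line)
    return members

# hypothetical two-member offload file
sample = [
    "./ ADD NAME=HELLO",
    "         PRINT NOGEN",
    "./ ADD NAME=WORLD",
    "         END",
]
```

Under these assumptions, `split_offload(sample)` yields one entry per member, each holding that member's card images in order.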
The JCL required for PDSLOAD is:
//PDSLOAD  JOB (1),'RELOAD PO',CLASS=A,MSGCLASS=X
//PDSLOAD EXEC PGM=PDSLOAD,PARM='NEW'
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DISP=SHR,DSN=JAY01.OFFLOAD.DATA
//SYSUT2   DD DISP=(,CATLG,DELETE),DSN=JAY01.CONCPT14.MACLIB,
//            UNIT=SYSDA,VOL=SER=PUB000,
//            SPACE=(TRK,(90,,7),RLSE),
//            DCB=(SYS1.MACLIB)
//
You may download a copy of the JCL at pdsload.jcl.
The sequential dataset from which the offloaded copy of the library members is to be read is specified with either the SYSIN DD statement or the SYSUT1 DD statement, controlled by an option (details below). The partitioned dataset into which the offloaded members are to be restored is specified with the SYSUT2 DD statement. The options for the PDSLOAD program are specified in the PARM= operand on the EXEC statement. The options available are:
NEW            read the input from DDname SYSIN, rather than DDname SYSUT1
SPF            generate SPF statistics in the reloaded members
S({name mask}) only reload the members whose names match the mask; asterisk (*), percent (%), and question mark (?) are treated the same
UPDTE          when the special character pair (><) is found in columns 1 and 2 of the input, replace it with the IEBUPDTE control characters (./)

Member Name Checking:

NAME=ASIS      bypass all member name validity checks (default)
NAME=CHECK     allow all printable characters, except comma, using Codepage 037
NAME=IBM       enforce strict IBM JCL standards
When this job is submitted, the records are read from the sequential dataset and reloaded to the partitioned dataset. A report is produced showing the members restored to the output dataset. The report produced from the JCL above may be viewed at pdsload.report.pdf.
An IEHLIST listing of the reloaded partitioned dataset may be viewed at pdsload.outputds.pdf.
IEBCOPY is the IBM utility to use when manipulating partitioned datasets that contain load modules, also called load libraries. Load libraries can also be backed up and restored using DSSDUMP/DSSREST, but there are times that IEBCOPY is the faster solution, and for copying/moving a library from one DASD volume to another, or compressing a load library, IEBCOPY is the best solution.
The JCL required for using IEBCOPY to unload a load library to a tape or a sequential dataset on disk is:
//IEBCOPY  JOB (1),'IEBCOPY UNLOAD',CLASS=A,MSGCLASS=X
//IEBCOPY EXEC PGM=IEBCOPY,REGION=3M
//SYSPRINT DD SYSOUT=*
//LIBIN    DD DISP=SHR,DSN=JAY001.FORMAT.LOADLIB,
//            UNIT=SYSDA,VOL=SER=MVSMIG
//TAPEOUT  DD DISP=(NEW,KEEP),UNIT=(TAPE,,DEFER),
//            DSN=SYSP.FORMAT.LOADLIB.UNLOAD,LABEL=(1,SL)
//SYSUT3   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSUT4   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSIN    DD *
 COPY INDD=LIBIN,OUTDD=TAPEOUT
//
You may download a copy of the JCL at iebcopy.unload.jcl.
The LIBIN DD statement describes the partitioned dataset which is to be unloaded. The TAPEOUT DD statement describes the sequential dataset that will receive the unloaded contents of the dataset described by the LIBIN DD statement. The names for these two DD statements are not fixed as LIBIN and TAPEOUT, as the DDnames used are specified in the control card; however, I usually make the DDnames descriptive of the function of the dataset described by the DD statement. The output dataset may reside on tape or disk, but in this case I am using a tape (het tape image). If using this as a backup mechanism, the het tape image may be copied/moved to archival storage.
The SYSUT3 and SYSUT4 datasets are work datasets for the IEBCOPY program.
IEBCOPY reads its control cards from the SYSIN DD. The only required control card has the format: COPY INDD={from DDname},OUTDD={to DDname}. There is also an optional second control card that may be used to select specific members from the dataset to be unloaded, and it has the format: SELECT MEMBER=({memberA},{memberB},...{memberZ}). If the SELECT control statement is not included, all members are unloaded to the output dataset.
When this job is executed, a tape is produced containing the unloaded contents of the partitioned dataset, and a report of the members unloaded. The SYSOUT output of this job may be viewed at iebcopy.unload.pdf. Output from TAPEMAP of the tape created by this jobstream may be viewed at d05091.tapemap.pdf. As the TAPEMAP program recognizes datasets produced by IEBCOPY, there is also a list of the members that have been unloaded to the dataset on the tape.
The JCL required for using IEBCOPY to reload a load library from a tape or a sequential dataset on disk is:
//IEBCOPY  JOB (1),'IEBCOPY RELOAD',CLASS=A,MSGCLASS=X
//IEBCOPY EXEC PGM=IEBCOPY,REGION=1M
//SYSPRINT DD SYSOUT=*
//TAPEIN   DD DISP=(OLD,KEEP),UNIT=(TAPE,,DEFER),LABEL=(1,SL),
//            VOL=SER=D05091,DSN=SYSP.FORMAT.LOADLIB.UNLOAD
//LIBOUT   DD DISP=(NEW,CATLG,DELETE),DSN=HMVS01.FORMAT.LOADLIB,
//            UNIT=SYSDA,VOL=SER=PUB000,
//            SPACE=(TRK,(300,,2),RLSE)
//SYSUT3   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSUT4   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSIN    DD *
 COPY INDD=TAPEIN,OUTDD=LIBOUT
//
You may download a copy of the JCL at iebcopy.reload.jcl.
The DD statements are similar to the IEBCOPY unload job above, except the direction of the movement of data is reflected in the DD names: TAPEIN describes the input tape and LIBOUT describes the output library.
When this job is executed, the tape produced by the previous job is read and the load modules that were unloaded are restored to the partitioned dataset described by the LIBOUT DD statement. The partitioned dataset did not exist prior to executing the job, so the parameters on the DD statement provide the necessary information to allocate space, and upon successful conclusion of the job the dataset will be catalogued. A report of the members reloaded is produced by IEBCOPY. The SYSOUT output of this job may be viewed at iebcopy.reload.pdf.
An IEHLIST for the reloaded partitioned dataset may be viewed at iebcopy.reload.iehlist.pdf.
The JCL required for using IEBCOPY to copy a partitioned dataset is:
//IEBCOPY  JOB (1),'IEBCOPY COPY',CLASS=A,MSGCLASS=X
//IEBCOPY EXEC PGM=IEBCOPY,REGION=2M
//SYSPRINT DD SYSOUT=*
//LIBIN    DD DISP=SHR,DSN=HMVS01.FORMAT.LOADLIB,
//            UNIT=SYSDA,VOL=SER=PUB000
//LIBOUT   DD DISP=(NEW,CATLG,DELETE),DSN=HMVS02.FORMAT.LOADLIB,
//            UNIT=SYSDA,VOL=SER=PUB000,
//            SPACE=(TRK,(300,,2),RLSE)
//SYSUT3   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSUT4   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSIN    DD *
 COPY INDD=LIBIN,OUTDD=LIBOUT
//
You may download a copy of the JCL at iebcopy.copy.jcl.
The DD statements are similar to the prior IEBCOPY jobs above, except that both the input and output datasets are partitioned datasets. LIBIN describes the input dataset and LIBOUT describes the output dataset.
When this job is executed, the partitioned dataset described by the LIBIN DD statement is read and the members are copied to the partitioned dataset described by the LIBOUT DD statement. The LIBOUT partitioned dataset - HMVS02.FORMAT.LOADLIB - did not exist prior to executing the job, so the parameters on the LIBOUT DD statement provide the necessary information to allocate space, and upon successful conclusion of the job the dataset will be catalogued. A report of the members copied to the new partitioned dataset is produced by IEBCOPY. The SYSOUT output of this job may be viewed at iebcopy.copy.pdf.
Another situation in which you would use IEBCOPY to copy a partitioned dataset is to expand the space allocation or the directory size of the library. If you have run out of space to add new members to a partitioned dataset (SD37 abend) or directory space (SE37 abend), you can copy an existing partitioned dataset to another (new) partitioned dataset, expanding either the space allocation (primary and/or secondary) and/or the number of directory blocks allocated. Once that copy has completed, delete the original partitioned dataset, then rename the new (expanded) copy of the dataset to the name held by the original partitioned dataset.
As members of a partitioned dataset are deleted and replaced, space allocated to the dataset that was formerly occupied by members that were deleted or replaced becomes vacant and unusable. Eventually the condition can arise that there is no more space available for new members to be added (or existing members to be replaced, as in re-link editing a program and storing the load module in the library). The solution is to compress the library, which processes the library from the beginning to the end, moving active members up, overwriting the unused space, and leaving all the unused space at the end of the dataset.
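The compress operation just described can be pictured with a toy model. The sketch below (my own illustration; the extent layout and track counts are hypothetical, not how IEBCOPY stores a PDS internally) models a library as an ordered list of extents, either live members or dead gaps left by deleted/replaced members, and shows how sliding the live members forward leaves all the free space at the end:

```python
# Conceptual model of an in-place compress: live members slide toward the
# front of the dataset, and all unused (dead) space collects at the end.

def compress(extents):
    """extents: list of (name, tracks); name None means unused space.
    Returns (compacted live extents in order, total free tracks at the end)."""
    live = [(name, size) for name, size in extents if name is not None]
    free = sum(size for name, size in extents if name is None)
    return live, free

# a library with gaps left by deleted/replaced members (hypothetical sizes)
library = [("PGMA", 3), (None, 2), ("PGMB", 1), (None, 4), ("PGMC", 2)]
live, free = compress(library)
# live -> [("PGMA", 3), ("PGMB", 1), ("PGMC", 2)]
# free -> 6 tracks recovered at the end of the dataset
```

The key property, reflected in both the model and the real utility, is that member order is preserved and no data is lost; only the dead space between members disappears.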
If you built your MVS 3.8j system using my installation instructions, there is a procedure in your SYS2.PROCLIB named COMPRESS, which uses IEBCOPY to compress any library specified by a parameter. The jobstream below is the equivalent of that catalogued procedure.
There is one caution when using IEBCOPY to compress a library in place: if the IEBCOPY program is interrupted before completion, your library may be left in an unusable state. Therefore, if you have concerns that your computer may be interrupted while a compress operation on a library is taking place, you should first make a backup of the library to be compressed. No other jobs should be accessing the partitioned dataset to be compressed while the compress job is executing.
The JCL required for using IEBCOPY to compress a partitioned dataset is:
//IEBCOPY  JOB (1),'IEBCOPY COMPRESS',CLASS=A,MSGCLASS=X
//IEBCOPY EXEC PGM=IEBCOPY,REGION=2M
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=HMVS02.FORMAT.LOADLIB
//SYSUT2   DD DISP=SHR,DSN=HMVS02.FORMAT.LOADLIB
//SYSUT3   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSUT4   DD UNIT=SYSDA,SPACE=(TRK,(30,15)),DISP=(NEW,DELETE,DELETE)
//SYSIN    DD DUMMY
//
You may download a copy of the JCL at iebcopy.compress.jcl.
The DD statements for this job are only slightly different from the previous IEBCOPY jobs above. The input and output datasets are the same physical dataset, and are described by SYSUT1 and SYSUT2, respectively. There is no control statement so SYSIN is a DUMMY DD.
When this job is executed, the partitioned dataset described by SYSUT1/SYSUT2 is compressed in place: members occurring later in the dataset are moved upward, overwriting unused space. A report of the members copied and relocated is produced by IEBCOPY. The SYSOUT output of this job may be viewed at iebcopy.compress.pdf.
In 2009 a pair of programs were made available for MVS 3.8j - DSSDUMP and DSSREST - which are used to backup (dump) and restore non-VSAM datasets. The assembler source for the two programs is found on CBT Tape File #860, originated by Gerhard Postpischil. From the source code, Gerhard is the author of DSSDUMP, while Charlie Brint is the author of DSSREST. The load modules for both programs are available in the SYSCPK load library (SYSC.LINKLIB). From comments in the programs, it appears that DSSREST was written first, to enable restoration of backup sets created by ADRDSSU, a licensed program used under z/OS. DSSDUMP was then written based upon the DSSREST code.
For most non-VSAM datasets, these are the programs to use if you need to create or restore backup sets on MVS 3.8j.
The basic JCL required for DSSDUMP is:
//DSSDUMP  JOB (SYS),'DSSDUMP',CLASS=S,MSGCLASS=X
//DSSDUMP EXEC PGM=DSSDUMP,REGION=4096K
//STEPLIB  DD DSN=SYSC.LINKLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   <------------------------------- control cards go here ---------->
//TAPEOUT  DD
//
The syntax of the control cards is:
OPTIONS ENQ | NOENQ | EXPORT | TEST
  ENQ     issues an exclusive ENQ TEST for each dataset; the dump continues
          if a dataset is not available and issues RC=4. Applies from the
          prior DUMP card on.
  NOENQ   (default) dumps the dataset as is.
  EXPORT  modifies the output DSCB1 by removing any expiration date and
          password flags.
  TEST    bypasses all TAPE output. Note that the tape file has already been
          opened, so a tape mount will be required, but the tape will be empty.

INCLUDE {mask}
  {mask} specifies a data set name (unquoted). If the mask contains an
  asterisk, question mark, or percent sign, it is treated as a mask. A name
  ending in a period is treated as a mask followed by an implied double
  asterisk (**). Note that a percent sign is treated as a positional mask
  (one-to-one correspondence of mask characters to dsname characters). Any
  number of DUMP cards may be used in a run, but there is a limit of
  approximately 700 datasets that may be processed in a run (established at
  assembly time).

DUMP {mask} VOLUME({serial})
  processes matching data sets on the specified volume serial only. If this
  results in duplicate data set names, a .D#nnnnnn is appended to duplicates
  on higher volume serials (i.e., the cataloged entry may be the one that
  gets renamed). Masking bytes are valid in any position in the mask.

EXCLUDE {mask}
  (optional) follows the relevant DUMP card; mask as above. Excludes
  matching data sets chosen by the previous DUMP/INCLUDE cards.

PREFIX {name}
  causes all data set names to be prefixed by the specified text string. It
  is not required to be an index level (e.g., SYS9.), but if it is not,
  generated names may be syntactically invalid. Resulting names are
  truncated to 44 characters, and a trailing period is blanked. Only one
  PREFIX card may be used per run, and it is mutually exclusive with the
  RENAME and STRIP options.

STRIP {name}
  The specified string is removed from any DSN in which it is found.
  Multiple STRIP and RENAME requests (up to 16, established at assembly
  time) are supported.

RENAME {oldname} {newname}
  The specified string is replaced by {newname} in any DSN in which it is
  found. Up to 16 RENAME and STRIP requests are legal. All strings in
  PREFIX/RENAME/STRIP are limited to 23 characters (established at assembly
  time).
The tape produced defaults to RECFM=U, BLKSIZE=65520. For RECFM=U, block sizes in the range 7892 through 65520 are supported; for RECFM=V, block sizes in the range 7900 through 32760 are supported.
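The mask rules described above can be approximated on the host side for experimentation. This Python sketch is my own illustration, not DSSDUMP code; the exact semantics in DSSDUMP are fixed at assembly time, so treat this as an assumed reading: '%' matches exactly one character positionally, '*' and '?' match any run within a single qualifier, '**' spans qualifier boundaries, and a mask ending in a period gets an implied '**'.

```python
import re

def mask_to_regex(mask):
    """Translate a DSSDUMP-style dataset name mask into a compiled regex."""
    if mask.endswith("."):
        mask += "**"              # trailing period implies any further qualifiers
    out = []
    i = 0
    while i < len(mask):
        if mask[i:i + 2] == "**":
            out.append(".*")      # '**' crosses period (qualifier) boundaries
            i += 2
        elif mask[i] in "*?":
            out.append("[^.]*")   # wildcard within a single qualifier
            i += 1
        elif mask[i] == "%":
            out.append("[^.]")    # positional: exactly one character
            i += 1
        else:
            out.append(re.escape(mask[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

def matches(mask, dsname):
    return bool(mask_to_regex(mask).match(dsname))
```

Under these assumed semantics, `matches("HMVS01.**", "HMVS01.COBOL.SOURCE")` and `matches("HMVS0%.CNTL", "HMVS02.CNTL")` are both true, while `matches("HMVS0%.CNTL", "HMVS01.DATA")` is false.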
Let's look at a few example jobs. The following jobstream selects all the datasets for user HMVS01 and one dataset for HMVS02. The TEST option specifies no actual backup will be created.
//DSSDUMP  JOB (SYS),'DSSDUMP',CLASS=S,MSGCLASS=X
//DSSDUMP EXEC PGM=DSSDUMP,REGION=4096K
//STEPLIB  DD DSN=SYSC.LINKLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 OPTIONS TEST
 DUMP HMVS01.**
 DUMP HMVS02.CNTL
//TAPE     DD DSN='DUMP.HMVS.USERS.DATA',
//            UNIT=(TAPE,,DEFER),DISP=(NEW,KEEP),
//            DCB=(LRECL=18448,BLKSIZE=18452,RECFM=V)
//
You may download a copy of the JCL at dssdump01.jcl. The SYSOUT output of this job may be viewed at dssdump.output01.pdf.
As the output from the TEST run appears to produce the backup I intended, I will remove the OPTIONS TEST card and submit the job again. This time the tape is produced. The SYSOUT output is identical, except for the omission of the OPTIONS TEST control card in the report. A single dataset is produced on the tape that contains the contents of the selected datasets and the control information required to reload them to DASD.
DSSDUMP/DSSREST is an excellent solution for transferring datasets between users or systems. Say that I have decided I want a copy of the dataset containing the STUDENT.DATA that user HMVS01 created to be available for user HMVS02 to use. Granted, there are easier ways to copy a dataset to share with another user on the same system, but this is an example. The JCL to create the backup, with the high level qualifier of the dataset in the backup set renamed for user HMVS02 is:
//DSSDUMP  JOB (SYS),'DSSDUMP',CLASS=S,MSGCLASS=X
//DSSDUMP EXEC PGM=DSSDUMP,REGION=4096K
//STEPLIB  DD DSN=SYSC.LINKLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 DUMP HMVS01.STUDENT.DATA
 RENAME HMVS01 HMVS02
//TAPE     DD DSN='DUMP.HMVS.USERS.DATA',
//            UNIT=(TAPE,,DEFER),DISP=(NEW,KEEP),
//            DCB=(LRECL=18448,BLKSIZE=18452,RECFM=V)
//
You may download a copy of the JCL at dssdump02.jcl. When submitted, the job produces the expected tape. The SYSOUT output of this job may be viewed at dssdump.output02.pdf.
Now we will look at the other side of this process, using DSSREST to restore one or more datasets from a backup set.
The basic JCL required for DSSREST is:
//DSSREST  JOB (SYS),'DSSREST',CLASS=S,MSGCLASS=X
//DSSREST EXEC PGM=DSSREST,REGION=2048K,PARM=''
//STEPLIB  DD DSN=SYSC.LINKLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//REPORT   DD SYSOUT=*
//JCLOUT   DD SYSOUT=*
//SYSUT1   DD UNIT=(TAPE,,DEFER),DSN='DUMP.HMVS.USERS.DATA',
//            DISP=OLD,LABEL=(1,SL),VOL=SER=D05083
//
As DSSREST was written by a different author, its functional design differs from DSSDUMP. Instead of control cards, DSSREST is controlled by parameters passed to the program in the PARM= operand on the EXEC card. I most often do not use any of the special functions requested via the PARM; I am most frequently restoring all the datasets in a backup set, so I have no need for special functionality. In fact, it appears that most of the special functions relate to restoring datasets from backup sets created by the licensed program ADRDSSU.
When you run the jobstream above, pointing SYSUT1 to a backup set created by DSSDUMP that resides on tape or disk, DSSREST produces a report of all the datasets in the backup set, along with their characteristics, plus a set of JCL that will restore all the datasets to disk.
The creation of the JCL (in the JCLOUT DD) is triggered by the parameter PARM='', or no parameter. You may download a copy of this JCL at dssrest01.jcl. The output from the three SYSOUT DD statements when the above job is executed may be viewed at dssrest01.pdf. I have combined the output from the three SYSOUT listings into a single pdf for simplicity.
The contents of the JCLOUT SYSOUT is a generated jobstream to restore all the datasets in the backup set. Since we selected a single dataset in DSSDUMP, where the backup set was created, there is only a single dataset that it is possible to restore. So the JCLOUT DD contains this jcl:
//DSSREST  JOB TIME=1440
//RESTORE EXEC PGM=DSSREST,REGION=8000K,TIME=1440,
//             PARM='*'
//STEPLIB  DD DISP=SHR,DSN=SYSC.LINKLIB
//SYSPRINT DD SYSOUT=*
//REPORT   DD SYSOUT=*
//SYSUT1   DD DISP=OLD,DSN=DUMP.HMVS.USERS.DATA,
//            UNIT=TAPE,VOL=SER=(D05083),
//            LABEL=(1,SL)
//*
//SYSUT2   DD DSN=HMVS02.STUDENT.DATA,                        /*1*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PS,
//            RECFM=FB,LRECL=88,BLKSIZE=8800),
//            SPACE=(TRK,(2)),
//            UNIT=3380 VOL=SER=PUB000
//
In the generated JCL, the job card must be modified, as it is only a skeleton. On the generated EXEC statement, PARM='*' requests that DSSREST restore all datasets in the backup set. Had there been multiple datasets in the backup set, there would be multiple SYSUT2 DD cards. DSSREST allocates SYSUT2 dynamically, using the supplied SYSUT2 cards, in sequence, as models for the dynamic allocation. Note that on each generated SYSUT2 DD card, whether there is one or many, the comma is deliberately omitted before the final parameter, the VOL=SER= specifying the target DASD volume. For each generated SYSUT2 DD, if the dataset is to be restored to the same volume from which it was backed up, you simply need to supply the missing comma; if it is to be restored to a different volume, you must supply the comma and change the Volume Serial Number.
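If you edit the generated jobstream on the host side, the comma fix-up can be scripted. This Python sketch is my own illustration: it assumes the generated continuation line has the layout shown (UNIT=3380 followed by a space and VOL=SER=volser), supplies the missing comma, and can optionally retarget the restore to a different volume:

```python
# Supply the comma DSSREST deliberately omits before the final VOL=SER=
# parameter on each generated SYSUT2 continuation line; optionally change
# the target volume serial at the same time.

def fix_volser(jcl_lines, new_volser=None):
    fixed = []
    for line in jcl_lines:
        if line.startswith("//") and " VOL=SER=" in line and "UNIT=" in line:
            before, after = line.split(" VOL=SER=", 1)
            volser = new_volser if new_volser else after
            line = before + ",VOL=SER=" + volser
        fixed.append(line)
    return fixed

card = ["//            UNIT=3380 VOL=SER=PUB000"]
print(fix_volser(card))            # ['//            UNIT=3380,VOL=SER=PUB000']
print(fix_volser(card, "PUB001"))  # ['//            UNIT=3380,VOL=SER=PUB001']
```

Of course, with one or two SYSUT2 cards it is just as quick to type the comma in an editor; the script only pays off for large generated jobstreams.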
A copy of the generated jobstream with the corrected JOB card and SYSUT2 is available for download at dssrest02.jcl. Running the job will restore HMVS02.STUDENT.DATA from the backup set to the PUB000 volume and catalog it, as per the SYSUT2 DD statement.
Since this dataset did not originally exist with the high level qualifier HMVS02, using DSSDUMP/DSSREST in this manner has effectively copied the dataset HMVS01.STUDENT.DATA, creating HMVS02.STUDENT.DATA. As I initially stated, this could be done more simply using other methods, but this is an example of how a dataset, or group of datasets, could be transferred from one system to another.
Let's go back to the first backup set we created with DSSDUMP above. It contained all of the datasets for user HMVS01 and a single dataset for user HMVS02. Here is the jobstream to examine that backup set, and generate a report and a generated restore jobstream:
//DSSREST  JOB (SYS),'DSSREST',CLASS=S,MSGCLASS=X
//DSSREST EXEC PGM=DSSREST,REGION=2048K,PARM=''
//STEPLIB  DD DSN=SYSC.LINKLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//REPORT   DD SYSOUT=*
//JCLOUT   DD SYSOUT=*
//SYSUT1   DD UNIT=(TAPE,,DEFER),DSN='DUMP.HMVS.USERS.DATA',
//            DISP=OLD,LABEL=(1,SL),VOL=SER=D05082
//
I know this is the backup set from the first DSSDUMP job because it was created on the tape with serial number D05082. The generated jobstream from this execution is:
//DSSREST  JOB TIME=1440
//RESTORE EXEC PGM=DSSREST,REGION=8000K,TIME=1440,
//             PARM='*'
//STEPLIB  DD DISP=SHR,DSN=SYSC.LINKLIB
//SYSPRINT DD SYSOUT=*
//REPORT   DD SYSOUT=*
//SYSUT1   DD DISP=OLD,DSN=DUMP.HMVS.USERS.DATA,
//            UNIT=TAPE,VOL=SER=(D05082),
//            LABEL=(1,SL)
//*
//SYSUT2   DD DSN=HMVS01.CLIST,                               /*1*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=19040),
//            SPACE=(CYL,(1,1,1)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS01.CNTL,                                /*2*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=19040),
//            SPACE=(CYL,(1,1,1)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS01.COBOL.SOURCE,                        /*3*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=23440),
//            SPACE=(TRK,(8,15,1)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS01.LOAD,                                /*4*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=U,BLKSIZE=19069),
//            SPACE=(CYL,(1,1,1)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS01.SOURCE,                              /*5*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=19040),
//            SPACE=(CYL,(1,1,1)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS01.STUDENT.DATA,                        /*6*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PS,
//            RECFM=FB,LRECL=88,BLKSIZE=8800),
//            SPACE=(TRK,(2)),
//            UNIT=3380 VOL=SER=PUB000
//SYSUT2   DD DSN=HMVS02.CNTL,                                /*7*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=19040),
//            SPACE=(CYL,(1,1,1)),
//            UNIT=3380 VOL=SER=PUB000
//
Note that in the generated jobstream, in addition to the job card needing modification, the commas have all been omitted prior to the final parameter (VOL=SER=PUB000) on the seven SYSUT2 DD statements.
Now I have decided I only want to retrieve the HMVS01.COBOL.SOURCE dataset and restore it to the PUB000 volume. So I modify the generated jobstream: adding a proper job card, adding a step to delete the existing HMVS01.COBOL.SOURCE dataset, removing all but the single SYSUT2 DD card for the HMVS01.COBOL.SOURCE dataset I want to restore from the backup set, and supplying the comma before the VOL=SER=PUB000 parameter. Since I only want to restore one dataset, I include in the PARM field the DSN of the dataset to restore in place of the asterisk (*) that was in the generated jobstream:
//DSSREST  JOB (SYS),'DSSREST',CLASS=S,MSGCLASS=X
//IEHPROGM EXEC PGM=IEHPROGM
//DD1      DD UNIT=SYSDA,VOL=SER=PUB000,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 UNCATLG DSNAME=HMVS01.COBOL.SOURCE
 SCRATCH DSNAME=HMVS01.COBOL.SOURCE,VOL=SYSDA=PUB000,PURGE
//RESTORE EXEC PGM=DSSREST,REGION=8000K,TIME=1440,
//             PARM='HMVS01.COBOL.SOURCE'  <----- request single dataset to restore
//STEPLIB  DD DISP=SHR,DSN=SYSC.LINKLIB
//SYSPRINT DD SYSOUT=*
//REPORT   DD SYSOUT=*
//SYSUT1   DD DISP=OLD,DSN=DUMP.HMVS.USERS.DATA,
//            UNIT=TAPE,VOL=SER=(D05082),
//            LABEL=(1,SL)
//*
//SYSUT2   DD DSN=HMVS01.COBOL.SOURCE,                        /*3*/
//            DISP=(,CATLG,DELETE),FREE=CLOSE,DCB=(DSORG=PO,
//            RECFM=FB,LRECL=80,BLKSIZE=23440),
//            SPACE=(TRK,(8,15,1)),
//            UNIT=3380,VOL=SER=PUB000
//
You may download a copy of the JCL at dssrest03.jcl. Because the PARM specifies a dataset to restore, no restore jobstream is generated: the JCLOUT DD is not included in the jobstream, and it is neither opened nor written to. The report written to the REPORT DD is identical to the one written when the skeleton jobstream was generated using tape D05082 as input.
If you wanted to restore two or more datasets from a backup set, but not the entire backup set:
- execute DSSREST with the backup set as input and PARM='' to generate the skeleton JCL to the JCLOUT DD,
- modify the generated JCL:
- correct job card,
- leave the generated PARM='*',
- remove or change to comments the generated SYSUT2 DD statements for any datasets not to be restored,
- submit the corrected jobstream to restore the datasets for which you have left SYSUT2 DD statements.
Note: You must leave the remaining SYSUT2 DD statements in the order they were created in, because they match the order of the datasets in the backup set. You will also receive IEC130I SYSUT2 DD STATEMENT MISSING informational messages when the job is executed for all datasets in the backup set for which you deleted (or commented out) the generated SYSUT2 DD statements.
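Pruning the generated SYSUT2 DD statements can also be scripted on the host side. The following Python sketch is my own illustration: it keeps only the SYSUT2 statements (with their continuation lines) whose DSN= appears in a keep-list, preserving their original order as the note above requires, and it assumes the continuation convention visible in the generated jobstreams, where a continuation line has a blank in column 3:

```python
# Filter a DSSREST-generated restore jobstream, keeping only the SYSUT2 DD
# statements for datasets we want restored. Continuation lines (blank in
# column 3) follow the keep/skip decision of the statement they continue.

def filter_sysut2(jcl_lines, keep_dsns):
    out = []
    skipping = False
    for line in jcl_lines:
        continuation = line.startswith("//") and len(line) > 2 and line[2] == " "
        if line.startswith("//SYSUT2"):
            # dataset name is the first operand after DSN= on the first line
            dsn = line.split("DSN=", 1)[1].split(",", 1)[0].strip()
            skipping = dsn not in keep_dsns
        elif not continuation:
            skipping = False      # any other statement ends the skip
        if not skipping:
            out.append(line)
    return out

# hypothetical two-statement fragment of a generated jobstream
jcl = [
    "//SYSUT2   DD DSN=HMVS01.CLIST,",
    "//            DISP=(,CATLG,DELETE)",
    "//SYSUT2   DD DSN=HMVS01.CNTL,",
    "//            DISP=(,CATLG,DELETE)",
    "//",
]
kept = filter_sysut2(jcl, {"HMVS01.CNTL"})
# kept retains only the HMVS01.CNTL statement and the null statement
```

If you prefer commenting statements out rather than deleting them, the same loop could prepend `//*` instead of dropping lines; either way the IEC130I messages described above are expected for the omitted datasets.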
VSAM datasets are referred to as clusters and reside in VSAM data spaces, either in independent data spaces or in space sub-allocated from a large data space that is shared among a number of VSAM clusters. Yes, you can back up and restore an entire catalog, but that again puts you in the situation of potentially restoring collateral data that you might not need to restore. So the granularity I am going to use for a backup/restore event is the cluster. For an indexed dataset, the cluster will contain an index and a data component. For a relative or sequential (or entry-sequenced, as it is called in VSAM parlance) dataset, there is only a data component. Each of these three organizations has a set of information which defines the characteristics of the data stored, and when you list the data with IDCAMS, that information is shown as the cluster. So when we back up, or restore, the cluster, the defining information is transferred as well as the data component (where the information stored in the dataset resides), and in the case of an indexed cluster, the index component (where the sequence keys reside) is also transferred. If there are alternate indexes and paths defined for a cluster, you will need to include those in the backup and restore as well.
The single utility program that allows manipulation of VSAM objects is IDCAMS. IDCAMS, as well as VSAM itself, is very complex, but you only need to know a few commands to have a very powerful tool to use. If you want to know more about VSAM later, you can always take a look at my VSAM tutorial, but I will provide all you need to know to use IDCAMS to backup and restore VSAM objects right here.
The simplest method to get data into and out of a VSAM cluster is the REPRO command, which does just what it sounds like: it reproduces data. It can be used to copy the contents of a non-VSAM dataset into a VSAM cluster or copy the contents of a VSAM cluster into a non-VSAM dataset. Many times REPRO is used as a backup mechanism in commercial environments and, in some cases, it may be the preferred method. But if you use REPRO, you always have to manage the deletion and recreation of the empty cluster. So for a simple method of backup/restore, the commands of choice are EXPORT and IMPORT.
To back up a VSAM cluster, here is the JCL you need:
//VSEXP    JOB 1,'EXPORT VS CLUSTERS',CLASS=A,MSGCLASS=X
//IDCAMS   EXEC PGM=IDCAMS,REGION=2M
//TAPEOUT1 DD UNIT=(TAPE,,DEFER),DISP=(,KEEP),                   --|
//            LABEL=(1,SL),DSN=BACKUP.VSTESTKS                     |
//TAPEOUT2 DD UNIT=AFF=TAPEOUT1,DISP=(,KEEP),VOL=REF=*.TAPEOUT1,   | These DD statements
//            LABEL=(2,SL),DSN=BACKUP.VSTESTK1                     | define the datasets
//TAPEOUT3 DD UNIT=AFF=TAPEOUT1,DISP=(,KEEP),VOL=REF=*.TAPEOUT1,   | that will be written
//            LABEL=(3,SL),DSN=BACKUP.VSTESTK2                     | to the tape image
//TAPEOUT4 DD UNIT=AFF=TAPEOUT1,DISP=(,KEEP),VOL=REF=*.TAPEOUT1,   | containing the backed
//            LABEL=(4,SL),DSN=BACKUP.VSTESTK3                     | up VSAM clusters.
//TAPEOUT5 DD UNIT=AFF=TAPEOUT1,DISP=(,KEEP),VOL=REF=*.TAPEOUT1,   |
//            LABEL=(5,SL),DSN=BACKUP.VSTESTK4                   ==|
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 /* VERIFY TO ENSURE CATALOG INFORMATION IS UPDATED */
 VERIFY DATASET(MVS380.VSTESTKS.CLUSTER)
 VERIFY DATASET(MVS380.VSTESTK1.CLUSTER)
 VERIFY DATASET(MVS380.VSTESTK2.CLUSTER)
 VERIFY DATASET(MVS380.VSTESTK3.CLUSTER)
 VERIFY DATASET(MVS380.VSTESTK4.CLUSTER)
 /* EXPORT FOR BACKUP */
 EXPORT MVS380.VSTESTKS.CLUSTER -
   OUTFILE(TAPEOUT1) TEMPORARY
 EXPORT MVS380.VSTESTK1.CLUSTER -
   OUTFILE(TAPEOUT2) TEMPORARY
 EXPORT MVS380.VSTESTK2.CLUSTER -
   OUTFILE(TAPEOUT3) TEMPORARY
 EXPORT MVS380.VSTESTK3.CLUSTER -
   OUTFILE(TAPEOUT4) TEMPORARY
 EXPORT MVS380.VSTESTK4.CLUSTER -
   OUTFILE(TAPEOUT5) TEMPORARY
//
The comments I have added alongside the tape DD statements do not exist in the actual JCL. You may download a copy of the JCL at idcams.export.cluster.jcl. I include the VERIFY commands to ensure that the catalog information has been updated for each cluster prior to backing it up; if a VSAM cluster was opened and not closed properly, in some instances the catalog information is not updated, and this added step ensures that the copy of the cluster written to the backup is complete and accurate.
The command that actually creates the backup is: EXPORT clustername OUTFILE(ddname) TEMPORARY. The keyword TEMPORARY tells IDCAMS that this is a backup operation and causes IDCAMS to set an indicator in the VSAM cluster noting that a temporary copy has been made. The default is PERMANENT, which causes IDCAMS to delete the VSAM cluster after the copy is made, so you must remember to include the TEMPORARY keyword. Why does IDCAMS set an indicator in the VSAM cluster information to note that a copy has been made? Because you cannot restore the backup copy over the VSAM cluster unless this indicator has been set; it is a safeguard to prevent the unintentional overwriting of a VSAM cluster. The SYSOUT output of this job may be viewed at idcams.export.cluster.pdf.
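To make the distinction concrete, here are the two forms side by side as IDCAMS control statements (cluster and DD names reused from the example above). They differ only in the final keyword, but the first leaves the cluster on DASD while the second deletes it once the copy completes:

```jcl
 /* BACKUP COPY - THE CLUSTER REMAINS ON DASD */
 EXPORT MVS380.VSTESTKS.CLUSTER -
   OUTFILE(TAPEOUT1) TEMPORARY
 /* PERMANENT (THE DEFAULT IF NEITHER KEYWORD IS CODED) - */
 /* THE CLUSTER IS DELETED AFTER THE COPY IS MADE         */
 EXPORT MVS380.VSTESTKS.CLUSTER -
   OUTFILE(TAPEOUT1) PERMANENT
```

PERMANENT exists for moving a cluster to another system or catalog; for routine backups you want TEMPORARY every time.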
The jobstream creates five separate datasets on a single tape. It will be a single tape image because the second through the final tape DD statements code the UNIT=AFF= and VOL=REF= JCL parameters pointing back to the first tape DD statement. Output from TAPEMAP of the tape created by this jobstream may be viewed at d05052.tapemap.pdf.
To restore one or more VSAM clusters from this backup, here is the JCL you need:
//VSIMP    JOB 1,'IMPORT VS CLUSTERS',CLASS=A,MSGCLASS=X
//IDCAMS   EXEC PGM=IDCAMS,REGION=2M
//TAPEIN1  DD UNIT=(TAPE,,DEFER),DISP=OLD,VOL=SER=D05052,
//            LABEL=(3,SL),DSN=BACKUP.VSTESTK2
//TAPEIN2  DD UNIT=AFF=TAPEIN1,DISP=(,KEEP),VOL=REF=*.TAPEIN1,
//            LABEL=(5,SL),DSN=BACKUP.VSTESTK4
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 /* IMPORT FROM BACKUP */
 IMPORT INFILE(TAPEIN1) -
   OUTDATASET(MVS380.VSTESTK2.CLUSTER)
 IMPORT INFILE(TAPEIN2) -
   OUTDATASET(MVS380.VSTESTK4.CLUSTER)
//
To illustrate that it is not necessary to restore all of the VSAM clusters included in a backup set, I have elected to restore only two of the clusters in the job above. You may download a copy of the JCL at idcams.import.cluster.jcl. As in the export jobstream, coding the UNIT=AFF= and VOL=REF= JCL parameters on all but the first input DD statement specifies that all the datasets are contained on a single tape image. The SYSOUT output of this job may be viewed at idcams.import.cluster.pdf.
Remember I mentioned that IDCAMS sets an indicator in the VSAM cluster information to note that a copy has been made? When you restore a VSAM cluster, that indicator is reset. If you attempt to restore the cluster a subsequent time, you will receive an error, which can be seen in the SYSOUT output at idcams.import.cluster.error.pdf. The error is reported because the target VSAM cluster is not flagged as having been exported. If there is a circumstance where you absolutely need to restore a backup onto a VSAM cluster that has not been exported, there are two solutions. One is to run an EXPORT jobstream, discard the output dataset created by the job, and then run the IMPORT jobstream to load from the backup set you actually want in the VSAM cluster. Alternatively, you can use a jobstream that deletes and redefines the VSAM cluster and then uses the IMPORT command with the INTOEMPTY parameter:
//VSIMP    JOB 1,'IMPORT VS CLUSTERS',CLASS=A,MSGCLASS=X
//IDCAMS   EXEC PGM=IDCAMS,REGION=2M
//TAPEIN1  DD UNIT=(TAPE,,DEFER),DISP=OLD,VOL=SER=D05052,
//            LABEL=(3,SL),DSN=BACKUP.VSTESTK2
//TAPEIN2  DD UNIT=AFF=TAPEIN1,DISP=(,KEEP),VOL=REF=*.TAPEIN1,
//            LABEL=(5,SL),DSN=BACKUP.VSTESTK4
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 /* DELETE VSAM CLUSTERS TO BE RESTORED */
 DELETE MVS380.VSTESTK2.CLUSTER CLUSTER PURGE
 DELETE MVS380.VSTESTK4.CLUSTER CLUSTER PURGE
 /* DEFINE EMPTY CLUSTERS TO BE RESTORED */
 DEFINE CLUSTER ( -
   NAME ( MVS380.VSTESTK2.CLUSTER ) -
   VOLUMES ( MVS380 ) -
   RECORDSIZE ( 32 32 ) -
   RECORDS( 20 10 ) -
   KEYS ( 2 0 ) -
   INDEXED -
   ) -
 DATA ( -
   NAME ( MVS380.VSTESTK2.DATA ) -
   ) -
 INDEX ( -
   NAME ( MVS380.VSTESTK2.INDEX ) -
   )
 DEFINE CLUSTER ( -
   NAME ( MVS380.VSTESTK4.CLUSTER ) -
   VOLUMES ( MVS380 ) -
   RECORDSIZE ( 17 17 ) -
   RECORDS( 230 10 ) -
   KEYS ( 6 0 ) -
   INDEXED -
   ) -
 DATA ( -
   NAME ( MVS380.VSTESTK4.DATA ) -
   ) -
 INDEX ( -
   NAME ( MVS380.VSTESTK4.INDEX ) -
   )
 /* IMPORT FROM BACKUP */
 IMPORT INFILE(TAPEIN1) -
   OUTDATASET(MVS380.VSTESTK2.CLUSTER) INTOEMPTY
 IMPORT INFILE(TAPEIN2) -
   OUTDATASET(MVS380.VSTESTK4.CLUSTER) INTOEMPTY
//
You may download a copy of the JCL at idcams.import.intoempty.jcl. The SYSOUT output of this job may be viewed at idcams.import.intoempty.pdf.
Another consideration if you are exporting/importing clusters with alternate indexes and paths defined is the sequence in which the VSAM objects are exported and imported. You should export alternate indexes in advance of exporting the base clusters upon which they are defined. You should import base clusters in advance of importing the alternate indexes defined over them. This is an extremely important issue if you are doing PERMANENT rather than TEMPORARY exports, because of how IDCAMS handles deletion of clusters and indexes, but it is relevant for TEMPORARY exports with subsequent imports as well. When you import a backup onto a VSAM base cluster (a restore operation), IDCAMS will first delete the base cluster and all of its associated VSAM objects before executing the import. The backup of the base cluster contains only the information necessary to restore the base cluster, so if you have already restored the alternate indexes, the import of the base cluster will leave you with only the base cluster on your system.
Here is the JCL to export (backup) a base cluster with an alternate index and path:
//VSEXPA   JOB 1,'EXPORT VS CLUSTERS',CLASS=A,MSGCLASS=X
//IDCAMS   EXEC PGM=IDCAMS,REGION=2M
//TAPEOUT1 DD UNIT=(TAPE,,DEFER),DISP=(,KEEP),
//            LABEL=(1,SL),DSN=BACKUP.STUDENT.AIX
//TAPEOUT2 DD UNIT=AFF=TAPEOUT1,DISP=(,KEEP),VOL=REF=*.TAPEOUT1,
//            LABEL=(2,SL),DSN=BACKUP.STUDENT.FILE
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 /* VERIFY TO ENSURE CATALOG INFORMATION IS UPDATED */
 VERIFY DATASET(MVS380.STUDENT.FILE)
 /* EXPORT FOR BACKUP */
 EXPORT MVS380.STUDENT.AIX -
   OUTFILE(TAPEOUT1) TEMPORARY
 EXPORT MVS380.STUDENT.FILE -
   OUTFILE(TAPEOUT2) TEMPORARY
//
You may download a copy of the JCL at idcams.export.base.plus.aix.jcl.
And here is the JCL to restore from the backup created in the jobstream shown above:
//VSIMPA   JOB 1,'IMPORT VS CLUSTERS',CLASS=A,MSGCLASS=X
//IDCAMS   EXEC PGM=IDCAMS,REGION=2M
//TAPEIN1  DD UNIT=(TAPE,,DEFER),DISP=OLD,VOL=SER=D05081,
//            LABEL=(2,SL),DSN=BACKUP.STUDENT.FILE
//TAPEIN2  DD UNIT=AFF=TAPEIN1,DISP=(,KEEP),VOL=REF=*.TAPEIN1,
//            LABEL=(1,SL),DSN=BACKUP.STUDENT.AIX
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 /* IMPORT FROM BACKUP */
 IMPORT INFILE(TAPEIN1) -
   OUTDATASET(MVS380.STUDENT.FILE)
 LISTCAT LVL(MVS380.STUDENT)
 IMPORT INFILE(TAPEIN2) -
   OUTDATASET(MVS380.STUDENT.AIX)
 LISTCAT LVL(MVS380.STUDENT)
//
You may download a copy of the JCL at idcams.import.base.plus.aix.jcl.
There is no command to explicitly export or import the PATH component that relates the AIX to the base cluster. IDCAMS will automatically include the PATH when the export and import of the alternate index is done.
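If you want to confirm that the path was re-established after the import of the alternate index, you can list the catalog entries under the high-level qualifier. This sketch uses the qualifier from the example above; the name of the path entry itself depends on what was coded when the alternate index and path were originally defined:

```jcl
 /* LIST ALL ENTRIES UNDER THE QUALIFIER - THE BASE    */
 /* CLUSTER, ALTERNATE INDEX, AND PATH SHOULD APPEAR   */
 LISTCAT LVL(MVS380.STUDENT)
```

The LISTCAT commands embedded in the import jobstream above serve the same purpose, showing the catalog contents at each stage of the restore.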
I hope that you have found my instructions useful. If you have questions that I can answer to help expand upon my explanations and examples shown here, please don't hesitate to send them to me:
This page was last updated on May 12, 2020.