OnDemand User Group
Support Forums => z/OS Server => Topic started by: 2ma2 on October 13, 2016, 01:30:02 AM
-
I know this has been discussed before, but I see the readme for OnDemand 9.5.0 still recommends a backup of the OD database, even though that backup is no longer valid once the upgrade steps have been run.
The Readme says:
2) Run "arsdb -I <instance_name> -vd ARSAGPERMS ARSANN ARSAPP ARSAPPUSR ARSNAMEQ
ARSPRTOPTS ARSSYS ARSDFBDT" to drop the tables.
So the question is: would it be worth considering renaming the tables instead of dropping them?
RENAME TABLE DB2T.ARSSYS TO ARSSYS_OLD
RENAME TABLE DB2T.ARSAPP TO ARSAPP_OLD
..
Then you would only have to rename the tables back if a fallback became necessary:
RENAME TABLE DB2T.ARSSYS_OLD TO ARSSYS
RENAME TABLE DB2T.ARSAPP_OLD TO ARSAPP
...
Then the OBID, PSID and DBID will not be out of sync.
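The full table list from the readme can be turned into RENAME statements mechanically; a small USS sketch, using the DB2T schema and `_OLD` suffix from the example above:

```shell
# Emit one RENAME per table the 9.5 readme says to drop
for t in ARSAGPERMS ARSANN ARSAPP ARSAPPUSR ARSNAMEQ ARSPRTOPTS ARSSYS ARSDFBDT; do
  echo "RENAME TABLE DB2T.$t TO ${t}_OLD;"
done
```

Swapping `TO ${t}_OLD` around gives the rename-back statements for the fallback case.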
Is there anyone out there who has tried a fallback after an upgrade?
-
Hi Marthin,
We do our fallback a bit differently. We've run a restore this way probably about a dozen times with no real issues.
We unload the CMOD tables to a backup directory, and if we need to fall back we drop the tables, recreate them with the previous version, and load from the backups.
This is the JCL we use to unload the pre-upgrade tables:
//**************************************************************
//* This job is used to back up all CMOD tables (14P) AT 8.5.
//* this is done as part of the fallback process.
//**************************************************************
//**************************************************************
//* Copy statement to UNLOAD CMOD tables prior to migration
//**************************************************************
//CPYLOAD EXEC PGM=IKJEFT01,COND=(4,LT)
//OUTHFS DD PATH='/etc/ond/14P/temp/upgd95.cmd',
// PATHDISP=(KEEP,DELETE)
//INMVS DD *
cd /SYSTEM/etc/ond/14P/backup/
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vx
/*
//SYSTSPRT DD SYSOUT=(,)
//SYSTSIN DD *
OCOPY INDD(INMVS) OUTDD(OUTHFS) TEXT CONVERT(YES) PATHOPTS(USE)
/*
//*
//**************************************************************
//* Run the command created in previous step
//**************************************************************
//RNLOAD EXEC PGM=BPXBATCH,REGION=0M,COND=(4,LT)
//STEPLIB DD DSN=ONDDODP.V10.SDSNEXIT,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOAD,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOD2,DISP=SHR
//STDIN DD PATH='/etc/ond/14P/temp/upgd95.cmd',PATHOPTS=(ORDONLY)
//SYSOUT DD SYSOUT=(,)
//SYSERR DD SYSOUT=(,)
//STDOUT DD SYSOUT=(,)
//*
The following JCL would fall back to 8.5 from 9.5:
//**************************************************************
//* This job is used to fall back a CMOD instance (14P) to 8.5.
//* This is accomplished in 3 steps:
//* 1) Delete all system tables under the upgraded release.
//* 2) Re-create all system tables under the previous release.
//* 3) Restore the CMOD table data from the unloaded tables.
//**************************************************************
//**
//**************************************************************
//* Delete all CMOD tables with prompt of "Y" in STDIN.
//**************************************************************
//DELSYS EXEC PGM=BPXBATCH,REGION=0M,COND=(4,LT),
// PARM='SH /usr/lpp/ars/V9R5M0/bin/arsdb -I ond14P -vd'
//STEPLIB DD DSN=ONDDODP.V10.SDSNEXIT,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOAD,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOD2,DISP=SHR
//STDIN DD PATH='/SYSTEM/etc/ond/responseyes.txt', <= Reply to DELETE
// PATHOPTS=(ORDONLY)
//SYSOUT DD SYSOUT=(,)
//SYSERR DD SYSOUT=(,)
//STDOUT DD SYSOUT=(,)
//**
//**************************************************************
//* Copy statement to Create/Build CMOD System tables.
//**************************************************************
//CPYCRT EXEC PGM=IKJEFT01,COND=(4,LT)
//OUTHFS DD PATH='/etc/ond/14P/temp/upgd85.cmd',
// PATHDISP=(KEEP,DELETE)
//INMVS DD *
cd /etc/ond/14P/temp/
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vrt -O
/*
//SYSTSPRT DD SYSOUT=(,)
//SYSTSIN DD *
OCOPY INDD(INMVS) OUTDD(OUTHFS) TEXT CONVERT(YES) PATHOPTS(USE)
/*
//*
//**************************************************************
//* Run command created in previous step
//**************************************************************
//RNCREAT EXEC PGM=BPXBATCH,REGION=0M,COND=(4,LT)
//STEPLIB DD DSN=ONDDODP.V10.SDSNEXIT,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOAD,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOD2,DISP=SHR
//STDIN DD PATH='/etc/ond/14P/temp/upgd85.cmd',PATHOPTS=(ORDONLY)
//SYSOUT DD SYSOUT=(,)
//SYSERR DD SYSOUT=(,)
//STDOUT DD SYSOUT=(,)
//*
//**************************************************************
//* Copy statement to Load from backups.
//**************************************************************
//CPYLOAD EXEC PGM=IKJEFT01,COND=(4,LT)
//OUTHFS DD PATH='/etc/ond/14P/temp/upgd85.cmd',
// PATHDISP=(KEEP,DELETE)
//INMVS DD *
cd /etc/ond/14P/backup/
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vi
/*
//SYSTSPRT DD SYSOUT=(,)
//SYSTSIN DD *
OCOPY INDD(INMVS) OUTDD(OUTHFS) TEXT CONVERT(YES) PATHOPTS(USE)
/*
//*
//**************************************************************
//* Run command created in previous step
//**************************************************************
//RNLOAD EXEC PGM=BPXBATCH,REGION=0M,COND=(4,LT)
//STEPLIB DD DSN=ONDDODP.V10.SDSNEXIT,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOAD,DISP=SHR
// DD DSN=ONDDODP.V10.SDSNLOD2,DISP=SHR
//STDIN DD PATH='/etc/ond/14P/temp/upgd85.cmd',PATHOPTS=(ORDONLY)
//SYSOUT DD SYSOUT=(,)
//SYSERR DD SYSOUT=(,)
//STDOUT DD SYSOUT=(,)
//*
This has worked pretty well for us but I suggest running through it with a test instance a number of times to get comfortable with it.
-
<LIKE> :)
-
Nice! Something like this should have been included in the readme by IBM in the first place.
No doubt it works - but I hope you haven't needed it too many times :D.
What is the reason for first copying the commands and then running them in the next steps, instead of just running them?
/Marthin
-
Truthfully, I can't remember the exact reason we created the commands and then ran them in a separate step. I think we were running into issues when trying to run the various commands using the PARM='SH..' parameter of BPXBATCH, so we just went the "easy" route.
-
Martin,
An addition to Greg's process: using ARSDB with the previous version/current version works nicely for our fallback process - smooth and clean. We've tested it multiple times, from the fallback process through doing the actual upgrade.
We did, however, get bitten by an upgrade that "tooled along" for a couple of days. Loads had happened (activity). Realizing an unacceptable problem (with the Banner), we did our fallback process. The "next loadid" in the ARSAG table had been incremented in the upgraded version, and when we fell back, all of the "next loadids" incremented during those couple of days were restored to PRE-MIGRATION values. This was BAD: after the fallback was complete, duplicates happened for all of the Application Groups that had been loaded during those couple of days. Learning from our pain, we would now take a copy of all of the new SYSTEM tables, identify the application groups that were loaded and incremented, and increment our backed-up/previous-version tables PRIOR to starting LOADs after the fallback.
For a bit more clarification on what Greg posted for our fallback process: although you are seeing the 6-8 tables that are dropped, they are NOT the only ARS* tables affected by the upgrade. There are ALTER statements that happen to other tables as well. OBIDs are an issue, and IMAGECOPYs don't provide much comfort for tables that are dropped/recreated. Our process uses ARSDB for the whole fallback.
1) ARSDB unload of ALL tables (run under the current version of CMOD).
2) ARSDB upgrade using the new version. This will both drop/recreate that list of tables and alter other tables/indexes.
3) Fallback - ARSDB drop of all tables using the new version (run from the V95 directory).
4) Fallback - ARSDB create of all tables under the current version (run from the V85 directory).
5) Fallback - ARSDB load of all the unloaded/backed-up data (run from the V85 directory). CMOD will build all of the system tables the way the CURRENT version had them before starting, and ARSDB will load all of that data properly under the pre-migration format (DDL).
6) Fallback - ARSDB will also rebuild all necessary indexes for the correct version.
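Scott's numbered steps map onto the arsdb flags already shown in Greg's JCL (-vx export, -vd drop, -vrt create, -vi import). As a rough USS sketch only - paths and the instance name are taken from the JCL above, and this is not a tested script:

```
# 1) Pre-upgrade: export ALL tables with the CURRENT (8.5) arsdb
cd /etc/ond/14P/backup/
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vx

# 2) Upgrade: run the 9.5 arsdb upgrade steps per the readme
#    ...

# 3) Fallback: drop all tables with the NEW (9.5) arsdb
/usr/lpp/ars/V9R5M0/bin/arsdb -I ond14P -vd

# 4) Fallback: recreate the tables with the OLD (8.5) arsdb
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vrt

# 5) Fallback: reload the exported data with the OLD (8.5) arsdb
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vi
```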
-
Hi scottnys,
That is right. If you have new loads you have to do much more. When we do a migration, e.g. from V1 to V2, we run a script that stores every file before it is loaded into OnDemand.
In case of a fallback, the following has to be done after restoring the old database content and preventing the duplicates you mentioned by changing ARSAG:
- reload all files that were loaded under V2
- delete all loads that were made with V2 before the fallback (in general this can only be done after the final migration to V2)
regards
Egon
-
Thanks scottnys and Egon.
I am now confident that only arsdb should be doing the fallback, especially to avoid the issues with OBIDs.
Egon - do you have an example of the script that copies all loads before storing them?
/Marthin
-
Hi Marthin,
We load from spool (z/OS), e.g. dest1. For the migration we change the load process from listening for dest1 to listening for dest12, and we have a module that listens for dest1: it copies the file into the file system and at the end changes dest1 to dest12.
regards
Egon
-
Quote from Greg's post above (the unload and fallback JCL).
Thanks Marthin! This is a great script, but I think the -O parameter in the CPYCRT step only creates the ODF tables, not ALL of the ARS* tables:
/usr/lpp/ars/V8R5M0/bin/arsdb -I ond14P -vrt -O
I added a second command without the -O in order to include ALL the ARS* tables.
-
I attempted the fallback process and ran into an issue when CMOD tries to insert into the SLxx tables. It receives an SQL error -117 indicating that the number of insert values in the value list of the insert operation is not the same as the number of object columns. I think this is because these tables were not dropped/re-created for 8.5. I ran the following during fallback:
/usr/lpp/ars/bin/arsdb -I ARCHIVE -vrt      # ARS create
/usr/lpp/ars/bin/arsdb -I ARCHIVE -vrt -O   # ODF create
/usr/lpp/ars/bin/arsdb -I ARCHIVE -vi       # Import
/usr/lpp/ars/bin/arssyscr -I ARCHIVE -l     # System log
/usr/lpp/ars/bin/arssyscr -I ARCHIVE -a     # System load
Any suggestions? How are the SLxx (System log tables) re-created?
-
Quote from the previous post (the -117 error on the SLxx tables).
There is a new COLUMN in the SLxx tables named TIME_STAMP_DT, so the table now has 19 columns. The 8.5 INSERT being attempted has 18: INSERT INTO ARSSRVER.SL31 VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
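A quick way to see which layout each System Log table currently has is the DB2 for z/OS catalog: COLCOUNT of 18 means the 8.5 layout, 19 means the 9.5 layout with TIME_STAMP_DT. A sketch, assuming the ARSSRVER schema from the INSERT above:

```sql
-- Column count per SLxx table, from the DB2 catalog
SELECT NAME, COLCOUNT
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'ARSSRVER'
   AND NAME LIKE 'SL%'
 ORDER BY NAME;
```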
-
I've been performing A LOT of testing reverting back to 8.5.0.6 from 9.5.0.7, and this procedure will NOT work in that case. Whenever a column is added to a DB2 table and an INSERT with a VALUES clause is performed, the correct number of columns must be provided in the INSERT (as in this case). When you revert back to the previous release of CMOD (8.5.0.6), the INSERT also reverts to fewer columns than the table actually has, which results in the -117 error. I did not find that any of the arsdb or arssyscr commands DROP and re-CREATE the SLxx tables. As a result, the ONLY way this can be accomplished is by using DB2 tools such as UNLOAD and LOAD after performing your own DROP and CREATE of the tables.
If anyone has any other solution, I would welcome it.
-
You'll probably break the System Log records written by the newer version of CMOD, but you should be able to do this:
Use arstblsp to close the SL table. Rename your System Log Application Group to something else, like "System Log v9".
Then use arssyscr to create a new System Log App Group.
You should be able to start back up under v8. But again, this may break CMOD in new and interesting ways.
-JD.
-
Hi
Interesting reading... but I've challenged Greg for many years to put fallback functionality into the product,
because today it's a mess.
I've been reading all the replies, but I'm missing one thing (which may have been covered without me seeing it).
If you upgrade OD to a new version and do all the arsdb stuff fine,
then as soon as you do one load into an Appl Group... you're toast!
If you 'back out' your updates in the ARS tables, you will end up with a missing load in the ARSLOAD table,
and an index table that is not consistent with the other ARS tables, because you've loaded data into the Appl Group.
And since you probably load the data into OAM,
there will be data there that is not reachable from OnDemand, because the resource used in the load is not registered in the ARS tables, and so on.
/H Carlberg
-
Quote from H Carlberg's post above.
That's absolutely right! We use OAM to store the objects, so there would be a total de-synchronization between CMOD and OAM unless you back up and restore the OAM objects as well (what a mess). I think the moral of the story is: do not attempt a fallback or backout. If there's a problem after migration, work through it using PMRs, DBAs, etc. Make sure that your management (and customers) know the ramifications of falling back.
-
Sorry I missed much of this discussion. The process I listed was meant more for an immediate backout; anything loaded after the initial migration would be lost after falling back. In one of our cases it was dummy data for test loads, part of the verification process, which we deemed expendable. If you have started loading production data you're in a different situation, and the process I listed is less optimal. We actually ran into this situation with another customer, and the fallback was much more difficult, though not impossible. In a very basic summary: we had to identify all loads processed after the migration, which fortunately wasn't an overwhelming number. We did a DB2 unload of the ARSRES table to help identify any new resources loaded during that period, then ran our fallback procedure, which reverts all system tables to pre-migration status but doesn't touch the index data. We then had to increment the LOAD_ID and RESGRP manually for each application group that had been loaded into, to cover the loads made after migration. Then we added the new entries to ARSRES using the ARSRES unload we did earlier. Since everything we do loads through OAM, nothing is set to expire by Load and ARSLOAD isn't an issue.
-
Quote from Greg's backout summary above.
Greg, unfortunately not ALL tables are DROPPED and recreated, as I found out during fallback. The System Log tables (SLxx) did not get reverted to the previous release, hence the -117 after backout. I've been testing a DB2 procedure which would allow me to DROP and CREATE these tables with their previous number of columns. Using the arsdb process to export/drop/recreate/import these tables does not work when falling back from 9.5.0.7 to 8.5.0.6.
-
In terms of identifying loads that have been successful after the "migration point-in-time", I have a query I run to identify them. You'll have to update the SLxx table to be queried and the DATE/TIME in the query:
-- Version 9.5.0.7 and above only
-- Extract information from the CMOD System Log for all LOAD activity past
-- a certain date/time. Can be used to determine any LOADS performed after
-- an upgrade of CMOD for fallback/backout purposes
--
-- Create TEMP1 containing the APPLICATION, APPLICATION_GROUP, and File(...) variable which
-- will be used to extract the JOB and DDNAME into the TEMP2 temporary table
WITH
TEMP1 AS
(SELECT
DATE(TIME_STAMP_DT - 5 HOURS) AS DATE,
TIME(TIME_STAMP_DT - 5 HOURS) AS TIME,
SUBSTR(MSG_TEXT,
       LOCATE_IN_STRING(MSG_TEXT, 'File(', 1, 1, OCTETS) + 5,          -- Start of File(...) value
       (LOCATE_IN_STRING(MSG_TEXT, 'InputSize(', 1, 1, OCTETS) - 2)
         - (LOCATE_IN_STRING(MSG_TEXT, 'File(', 1, 1, OCTETS) + 5)) AS FILEVAL,
SUBSTR(MSG_TEXT,
       LOCATE_IN_STRING(MSG_TEXT, 'Appl(', 1, 1, OCTETS) + 5,          -- Start of Appl(...) value
       (LOCATE_IN_STRING(MSG_TEXT, 'InputFileSize(', 1, 1, OCTETS) - 2)
         - (LOCATE_IN_STRING(MSG_TEXT, 'Appl(', 1, 1, OCTETS) + 5)) AS APP,
SUBSTR(MSG_TEXT,
       LOCATE_IN_STRING(MSG_TEXT, 'Name(', 1, 1, OCTETS) + 5,          -- Start of Name(...) value
       (LOCATE_IN_STRING(MSG_TEXT, 'LoadId(', 1, 1, OCTETS) - 2)
         - (LOCATE_IN_STRING(MSG_TEXT, 'Name(', 1, 1, OCTETS) + 5)) AS APP_GRP
--
FROM ARSSRVER.SL32 -- <-- Change
--
WHERE (MSG_NUM = 87) -- Only include successful loads
AND DATE(TIME_STAMP_DT-5 HOURS) >= '02/02/2017' -- Included date(s)
AND (TIME(TIME_STAMP_DT-5 HOURS) >= '15:26:00' -- From TIME
AND TIME(TIME_STAMP_DT-5 HOURS) <= '23:35:00') -- To TIME
AND SUBSTR(MSG_TEXT -- Exclude the JCL% applications
, POSSTR(MSG_TEXT, 'Name(') + 5, ((POSSTR(MSG_TEXT, ') LoadId(')) - (POSSTR(MSG_TEXT, 'Name(') + 5))) NOT LIKE 'JCL%')
-- Create TEMP2 by extracting the JOB and DDNAME from the File(...) variable in the log
,
TEMP2 AS
(SELECT DATE, TIME, SUBSTR(FILEVAL, LOCATE_IN_STRING(FILEVAL, '.', 1, 1) + 1, (LOCATE_IN_STRING(FILEVAL, '.', 1, 2) - 1) - (LOCATE_IN_STRING(FILEVAL, '.', 1, 1))) AS JOB,
SUBSTR(FILEVAL, LOCATE_IN_STRING(FILEVAL, '.', 1, 2) + 1, (LOCATE_IN_STRING(FILEVAL, '.', 1, 3) - 1) - (LOCATE_IN_STRING(FILEVAL, '.', 1, 2))) AS DDNAME,
APP, APP_GRP
FROM TEMP1)
SELECT *
FROM TEMP2
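As a cross-check outside DB2, the same Name(...)/File(...) extraction can be done on a saved System Log message line with standard USS tools. The message text below is a made-up example, not real load output:

```shell
# Hypothetical message-87 text; sed mirrors the SUBSTR/LOCATE_IN_STRING logic above
msg='Loaded ... Name(INVOICES) LoadId(5512) File(ARS.JOB12345.SYSOUT1.T123) InputSize(4096)'

# Pull out the Appl Group name and the input file name
appgrp=$(printf '%s' "$msg" | sed -n 's/.*Name(\([^)]*\)).*/\1/p')
file=$(printf '%s' "$msg" | sed -n 's/.*File(\([^)]*\)).*/\1/p')

echo "$appgrp $file"   # INVOICES ARS.JOB12345.SYSOUT1.T123
```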
-
Thanks. That could be handy in a number of occasions.
One thing I find interesting is that we never ran into the SLxx issue. Our existing SLxx tables never got modified with the new column; only new instances created under 9.5 have the extra SLxx column.
-
The new column seems to appear after the arssyscr commands, which alter the System Log tables. Here's a before-and-after snapshot of the SLxx tables on our system, where the number of columns goes from 18 to 19. It appears that the new column (TIME_STAMP_DT) replaces the old, "proprietary" column TIME_STAMP; we no longer need to compute the time (only the GMT difference):
Before:
Name Schema T DB Name TS Name Cols Rows Chks C
* * * * * * * * *
------------------ -------- - -------- -------- ------ ----------- ---- -
SL29 ARSSRVER T ARSDBASE SL29 18 20000013 0
SL30 ARSSRVER T ARSDBASE SL30 18 20000011 0
SL31 ARSSRVER T ARSDBASE SL31 18 2850767 0
After:
Name Schema T DB Name TS Name Cols Rows Chks C
* * * * * * * * *
------------------ -------- - -------- -------- ------ ----------- ---- -
SL29 ARSSRVER T ARSDBASE SL29 19 20000013 0
SL30 ARSSRVER T ARSDBASE SL30 19 20000011 0
SL31 ARSSRVER T ARSDBASE SL31 19 2850767 0
I'm going to try and DROP/RECREATE and LOAD the SLxx tables without the new column as part of my fallback. The only concern is: Are there any other tables which will exhibit the same INSERT issue? We'll find out.
-
Hi,
You will not find a general solution for a fallback if loads have been done and you want to preserve them. To avoid these problems, we do the loads again after the fallback; after the final migration, the unnecessary loads are deleted.
regards
Egon
-
One thing I find interesting is that we never ran into the SLxx issue. Our existing SLxx tables never got modified with the new column; only new instances created under 9.5 have the extra SLxx column.
This is because of the switch from the old internal date formats (number of seconds since January 1st, 1970, UTC) to the DB2 date format. The new date field is added, and the old date field is left unused and empty.
-JD.
-
Thanks, I was aware of the new format for the SLxx tables using the DB2 date format. We just never saw a requirement to run arssyscr in the instructions for upgrading from 8.5 to 9.5, so we never ran it.
-
I wanted to circle back and let everyone know how we "sort of" simplified the fallback from 9.5 to the previous (8.5) release. As you know, going to a new release "may" and "will" change the structure of tables, columns, tablespaces, etc. The only clean way to return to the old release is to return to the same database structure. We have now tested this process and it works well, but you will need DBA (and tools) help. Note also that I'm only posting on the DB2 aspect of the fallback, not the entire fallback.
We start off by using IBM's DB2 Tools to "migrate" the current database (ARSDBASE) to a new database (XRSDBASE). The tool extracts the DDL from the current database and unloads all tablespaces and tables as well. It then creates the new database (XRSDBASE) using the DDL and LOADs all tables. This essentially creates a mirror copy of ARSDBASE "prior" to migrating to the new release.
Once the copy of the database has been performed, we then perform the migration of OnDemand to the new release.
To fall back to the previous release/database you have two choices: 1) point OnDemand at the mirrored database XRSDBASE, or 2) drop and recreate ARSDBASE using the reverse process of copying from XRSDBASE.
We chose option 1 by configuring OnDemand to use a new (XRSDBASE) database which in turn returned it to the 8.5 release. Here are the high-level steps we followed:
1. Shutdown CMOD
2. Restore the OnDemand PDS's to the previous release
3. Mount the previous release zFS/HFS on its production mountpoint
4. Backout any PROCLIB changes made
5. Recompile and BIND any exits and/or in-house written programs
6. Reload any dynamic exits (SETPROG EXIT)
7. Reconfigure OnDemand to use new database
8. Restart CMOD and test
As I mentioned, we have performed this migration and fallback scenario and it has worked successfully for us.
Then there's the issue of missing loads which you will have to work around.
Hope this helps.
-
Hi Hakan,
If you know the loadids from the buggy migration, you can do an unload after the final successful migration. This works fine and we have tested it. CMOD does not need to find the loadids in the system log. If you have all the reports that were loaded by the buggy migration, you can load them again. We worked with a one-week window to decide whether to go back or not.
Of course, OnDemand could/should provide this or another technique as a general solution.
regards
Egon
-
Hi Egon
Yes, in a way you can do that.
But we have some reports that are loaded from z/OS (with input files from GDGs), while many are also loaded from Windows directly into a z/OS OnDemand server.
Trying to keep in sync with all the files that are loaded from different platforms - I wouldn't want to do that.
Also, some of the files loaded from z/OS are loaded a couple of times a day, so the GDG limit might be only 10, and that gets rolled a couple of times per week.
Setting up the environment to keep track of all loads... that's a mess, and you're toast if you miss one - it might be the 'log' from the internet banking.
And since some of the reports are viewable by customers: if you do an unload and try to reload them, what if the customer is looking for a report and it's not there - but it was there a couple of minutes ago?! Nightmare.
So... it must be in the product. Wrong... it should!!! be in the product: the possibility to upgrade/downgrade and still be in sync.
At least, that's my opinion. Just look at DB2, where you can upgrade and do a fallback without 'any?' problem - at least it's possible.
Regards
/Hakan Carlberg
-
Hi Hakan,
It's my opinion as well that IBM should provide the solution. To have no dependency between different versions, I think a solution must track the loads so it can replay them after a fallback. Otherwise you have to look into the details of the changes between version 1 and version 2, which is not relevant for this approach.
By the way, if you switch to new-function mode after a DB2 migration, you cannot do a fallback afterwards. So DB2 also has a point of no return.
regards
Egon
-
Thank you all for the many tips, tricks and suggestions to this topic.
I can see that not only for me, fallback plans are a big issue when considering upgrading OnDemand.
The bottom line is:
IBM, please add a fallback feature to OnDemand ASAP, so all these good people don’t have to invent their own.
-
/H Carlberg, you are totally right. :(
After loads have happened on the upgraded release, the individual "next LOAD_ID"s (in ARSAG) have been incremented. If you fall back, they are reset to their pre-migration values. That is bad!!!
To enhance the fallback procedure, keep an inventory of which application groups had loads and preserve the LOAD_ID each one reached. When you fall back to the pre-upgrade state, update the restored LOAD_IDs to the preserved values (or greater). That way, when you start loading on the restored version, the loads won't be overlaying/replacing loads that were done on the upgraded version.
Resources weren't a problem, provided there were no changes to them during the migration.
This was a painful lesson. We used the image copies to get back the OAM objects for the reports: we unloaded from the IMAGECOPY the documents that were overlaid, then went through each appl grp that had loads, manually altered the "load_id" so there were no overlaps, and LOAD RESUMEd them back into the tables - not pretty for sure.
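The "preserve and re-apply the counters" idea might look like the SQL below. Note that LAST_LOADID and the snapshot table ARSAG_SAVED are placeholder names for illustration only - take the actual load-id counter column from your ARSAG DDL:

```sql
-- Assumes a snapshot of the upgraded ARSAG was unloaded into DB2T.ARSAG_SAVED
-- before the fallback; push the restored counters forward so new loads
-- cannot reuse load ids handed out on the upgraded release.
UPDATE DB2T.ARSAG AG
   SET LAST_LOADID = (SELECT S.LAST_LOADID
                        FROM DB2T.ARSAG_SAVED S
                       WHERE S.AGID = AG.AGID)
 WHERE EXISTS (SELECT 1
                 FROM DB2T.ARSAG_SAVED S
                WHERE S.AGID = AG.AGID
                  AND S.LAST_LOADID > AG.LAST_LOADID);
```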
-
Hi scottnys,
You can avoid duplicate loadids by manipulating ARSAG, and you can avoid segment splits by manipulating ARSSEG. The needed SQL was of course approved by IBM for a specific migration. Being aware of this, a DB2 fallback is possible, and you can reload the reports that are lost by the fallback. After the final migration you delete the obsolete reports from OAM with arsadmin unload; these unloads are accepted by ARSSOCKD (even if it has no knowledge of them).
It's still true that this should not be our task, but it works. The main idea of the technique is to separate the migration/fallback process from the details of the changes between version (a) and version (b). The preconditions are:
a) the same documents can be loaded/unloaded with version (a) and version (b)
b) going back to the old content of the database (here the kind of table changes has to be observed for a proper reconstruction)
regards
Egon
-
Hi everybody
I know, I'm stubborn...
But..
".. if you switch to new functions mode after a DB2 migration you cannot do a fallback afterwards.."
Yes, that's true once you've switched to NFM,
but it's not a big bang (nice TV show, though), and you don't have to mess around with a bunch of unload/load and other stuff.
As long as you're in CM mode you can always return; you don't switch to NFM until you're sure it will work.
And even after going to NFM, you can return to CM*, and that code will probably work for you.
Still, let IBM fix this fallback stuff - it shouldn't be that hard to fix.
And it is a mess (at least I think that).
Regards
/H Carlberg