Show Posts



Messages - leodejong

Pages: 1 [2]
16
OK, I have found it. The solution was hidden in the manual:

(When adding another instance to the ARS.INI file, you must set the HOST statement of the default instance to the host name alias of the library server.)

You probably wonder why: we must do a massive extraction of data with arsdoc get for a specific marketing project. Because of internal chargeback, and because the majority of the CPU is used in ARSSOCK, we want to separate the normal online retrievals from the batch retrievals.
I can also give this ARSSOCK task a lower priority in WLM, so I won't monopolize the system.
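A minimal sketch of what that manual statement implies for ars.ini — the alias name LIBSRVA is invented, and the remaining statements are unchanged from the listing further down:

```ini
; Hypothetical sketch: the default instance now names the library
; server's host alias explicitly (LIBSRVA is an invented alias).
[@SRV@_ARCHIVE]
HOST=LIBSRVA
PORT=8021
```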
Leo

17
Hi,
I'm trying to run a second instance of ARSSOCK on the same LPAR, accessing the same database.

I have modified ars.ini:
[@SRV@_ARCHIVE]                             
HOST=                                       
PORT=8021                                   
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -
SRVR_INSTANCE=XODBV71                       
SRVR_INSTANCE_OWNER=XO1XPDB                 
SRVR_OD_CFG=/etc/ars/ars.cfg               
SRVR_SM_CFG=/etc/ars/ars.cache             
[@SRV@_TEST]                               
HOST=                                       
PORT=8024                                   
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -
SRVR_INSTANCE=XODBV71                       
SRVR_INSTANCE_OWNER=XO1XPDB                 
SRVR_OD_CFG=/etc/ars/arst.cfg               
SRVR_SM_CFG=/etc/ars/arst.cache             

So: the same INSTANCE and OWNER, but a different PORT.

When I start ARSSOCK I use the parameter PARM='/TEST' (where is this documented?).

That works, in the sense that I can connect with a client.

But I can't get it to work with batch ARSLOAD or ARSDOC GET.
ARSLOAD ends with the message:
arsload: Could not connect to server to establish log id

And in /var/ars/tmpt/arssock.err I see the message:

Thu May  3 17:00:58 2012: JONGDELB asid(003F)(651) -> getaddrinfo errno = 129, errno2 = X'0594003D', rc = 1

This means:
ReasonCode: 0594003D                                       
  Module: Unknown  ErrnoJr: 61 JRDIRNOTFOUND               
  Description: A directory in the pathname was not found   

My questions are:
- Is what I'm trying to do possible?
- Which directory can't be found?
- Any other hints and tips?
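Not from the thread, but JRDIRNOTFOUND usually means a directory component in one of the configured pathnames does not exist. A small shell helper (invented for illustration) can check the parent directory of every path the new instance references:

```shell
# Sketch: report any missing parent directory among the given paths --
# JRDIRNOTFOUND typically points at one of these.
check_paths() {
  rc=0
  for p in "$@"; do
    d=$(dirname "$p")
    if [ ! -d "$d" ]; then
      echo "missing directory: $d"
      rc=1
    fi
  done
  return $rc
}

# e.g.:
# check_paths /etc/ars/arst.cfg /etc/ars/arst.cache /var/ars/tmpt/arssock.err
```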

Leo 




18
z/OS Server / Re: move a CMOD Instance to another lpar
« on: March 26, 2012, 12:59:55 PM »
Hi eef,

We here at Rabo did this a few years ago, but we are DASD-only, no opticals.
I followed this scenario:
First you have to do all the OAM set-up steps from the OAM PISA (Parmlib, STCs, RACF).
We had decided to use a dedicated DB2 for OAM/OnDemand, so I created a new DB2 and ran all the OAM setup steps until I was able to store and retrieve an OAM object on the new system.
I also installed a new OnDemand instance on the new LPAR and verified that it worked. At that point I considered the receiving LPAR ready to receive the data.
Then I stopped DB2 on the source LPAR and dumped all the DB2 catalog, OAM and OnDemand VSAM datasets with ADRDSSU to tape. (If you have shared DASD, you can use that instead.)
On the target LPAR, I also stopped the new DB2 and deleted all the datasets (catalog, OAM and OnDemand tablespaces, and image copies).
Restored the datasets from the source.
I had decided on a DB2 cold start, so I recreated and initialized the BSDS and logs, and set a conditional restart record with the RBA from the source DB2.
Started DB2 in ACCESS(MAINT) and replied to the cold start prompt. If the VSAM datasets have a different high-level qualifier and you have DB2 V9+, you can run REPAIR CHANGE VCAT (or something similar).
If applicable, change any authorizations in DB2 from the userids used on the source LPAR to those used on the target LPAR.
Do an IDCAMS DEFINE NONVSAM (NAME(collection) COLLECTION RECATALOG) for every collection name in OAMADMIN.CBR_COLLECTION_TBL.
When testing, always go from bottom to top: first test that OAM works (TSO OSREQ STORE/RETRIEVE), then OnDemand on top of it.
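The recatalog step could even be generated from DB2 itself. A sketch — assuming the collection-name column in OAMADMIN.CBR_COLLECTION_TBL is called NAME, which you should verify in your own catalog before relying on it:

```sql
-- Hypothetical sketch: generate one IDCAMS statement per collection.
-- The column name NAME is an assumption; check it in your catalog.
SELECT 'DEFINE NONVSAM (NAME(' || STRIP(NAME) || ') COLLECTION RECATALOG)'
  FROM OAMADMIN.CBR_COLLECTION_TBL;
```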

Hope this is complete. It was some time ago.
Leo

19
z/OS Server / How do I re-archive AFP data (arsdoc get, unload and load)
« on: October 28, 2008, 08:16:11 AM »
We are busy converting from OnDemand V2 to V7.
One of the things we hit is the following: in V2 we occasionally wanted to re-archive data for various reasons (correcting wrong definitions, or taking a copy of data for development).
This was done by running three steps:
1) IODBPRINT to print the data to an OS dataset
2) IODBSTOR to store the data with the new definitions or on another system
and optionally:
3) IODBDLET to delete the original SRT entry.

I have tried to do the same thing in V7. I managed to do it with line data, but AFP is giving me trouble.
When I do an "ARSDOC GET -X nnnnn -a" I get the data in a Unix file.
But to my knowledge it is impossible to run ARSLOAD directly from a USS file.
And when I view the output there is no CR/LF to indicate an end of record. How do I copy this output to an OS RECFM=VBS file?
Or is there a more clever way of doing this?
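One route worth trying (not confirmed in the thread) is batch TSO OCOPY, which copies a z/OS UNIX file into an MVS dataset. The dataset names, path and space values below are invented, and whether a plain binary copy yields usable record boundaries for AFP is exactly the open question of this post:

```jcl
//* Hypothetical sketch: copy the arsdoc output file into an MVS
//* dataset with TSO OCOPY in binary mode. All names are invented.
//OCOPY    EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//HFSIN    DD PATH='/u/leo/afp.out',PATHOPTS=ORDONLY
//MVSOUT   DD DSN=LEO.AFP.DATA,DISP=(NEW,CATLG),
//            DCB=(RECFM=VBS,LRECL=32756,BLKSIZE=32760),
//            SPACE=(CYL,(100,100),RLSE)
//SYSTSIN  DD *
  OCOPY INDD(HFSIN) OUTDD(MVSOUT) BINARY
/*
```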

Thanks in advance,
Leo de Jong
Rabobank Netherlands

20
z/OS Server / Re: jcl needed to run arsxml in batch
« on: September 15, 2008, 02:35:13 AM »
This is the way we do it:

//S1    EXEC PGM=IEBGENER                                             
//SYSPRINT  DD SYSOUT=*                                               
//SYSIN     DD DUMMY                                                   
//SYSUT2    DD PATH='/u/winderr/select.xml',PATHDISP=KEEP,             
//    PATHMODE=(SIRWXU,SIRWXG,SIROTH),PATHOPTS=(OWRONLY,OCREAT,OTRUNC)
//SYSUT1    DD *                                                       
<?xml version="1.0" encoding="IBM037" ?>                               
<onDemand xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"       
   xsi:noNamespaceSchemaLocation="/usr/lpp/ars/bin/xml/ondemand.xsd" >
   <folder name="AAO080">                                             
   </folder>                                                           
</onDemand>                                                           
/*                                                                     
//ARSEXPRT EXEC OMVSBAT                                               
//STEPLIB DD DISP=SHR,DSN=SYS3.SARSLOAD                               
//SYSIN   DD *                                                         
JAVA_HOME=/usr/lpp/java/J1.4                                           
PATH=/usr/lpp/Printsrv/bin:/bin:/usr/sbin:/usr/local/sys2/bin:\       
$JAVA_HOME/bin:/usr/lpp/dfsms/bin:/usr/lpp/ldap/bin:/usr/lpp/ars/bin:.
LIBPATH=$LIBPATH:/usr/lpp/ars/bin/xml/                                 
export LIBPATH                                                         
export JAVA_HOME                                                       
/usr/lpp/ars/bin/arsxml export -v -h ARCHIVE -x -e c \                 
        -w IBM037 -i select.xml -d /u/winderr/ -r pd -o output.xml     
//OSHOUT1 DD SYSOUT=*,DCB=(RECFM=F,LRECL=255)                         

The JAVA_HOME setting was needed because the XML code runs in 32-bit.
Hope this helps.
Leo

21
Report Indexing / Re: Why not have DATE as an Index field?
« on: September 15, 2008, 02:27:51 AM »
Bill,

I assume you are using CMOD on z/OS. In that environment I would definitely put an index on the Date column, in combination with "Single table for all loads". Software segmentation is typically a Unix solution for partitioning; on z/OS I would suggest using DB2 native partitioning if the table gets too large.

Leo
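A sketch of what DB2-native (table-controlled) range partitioning on the date column might look like on DB2 for z/OS; all table, column and partition boundary values are invented for illustration:

```sql
-- Hypothetical sketch: range-partition the document index by date.
-- Names and boundary dates are invented, not from the post.
CREATE TABLE ARSADM.DOC_INDEX
      (DOC_DATE  DATE      NOT NULL,
       DOC_ID    CHAR(26)  NOT NULL)
  PARTITION BY RANGE (DOC_DATE)
      (PARTITION 1 ENDING AT ('2007-12-31'),
       PARTITION 2 ENDING AT ('2008-12-31'),
       PARTITION 3 ENDING AT ('2009-12-31'));

-- And the index on the Date column suggested above:
CREATE INDEX ARSADM.DOC_INDEX_DT ON ARSADM.DOC_INDEX (DOC_DATE);
```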

22
z/OS Server / Re: Storage Set exit?
« on: September 15, 2008, 02:11:51 AM »
I don't think there is an exit in V7 to handle this. I have considered this exit in the past for this purpose.
But we have converted the OAM 32K tables to LARGE partitioned tablespaces with a DSSIZE of 64G and up to 128 partitions.
In this setup we don't expect to hit any full condition in the foreseeable future.
But we do monitor the size of the different groups from time to time (monthly) and change collection names if the size makes the image-copy time excessive. If you want details, send me an email (see profile).
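The tablespace conversion described above might be sketched like this on DB2 for z/OS; the tablespace, database, stogroup and buffer-pool names are invented:

```sql
-- Hypothetical sketch: a LARGE partitioned tablespace with DSSIZE 64 G
-- and 128 partitions, as described above. All names are invented.
CREATE TABLESPACE OAMTS01 IN OAMDB
  USING STOGROUP OAMSG01
  NUMPARTS 128
  DSSIZE 64 G
  BUFFERPOOL BP32K;
```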
Leo

23
z/OS Server / Re: Recent migration from CMOD 2.1 ?
« on: July 11, 2008, 03:02:30 AM »
Hi, we have just finished the migration of the indexes in combination with double archiving, just like you are planning, but we don't use ODF.
A few tips:
- Take a good look at the migration job. Use the option MIGR-BY=RUNDATE, because the other option will inhibit dual archiving.
- Change the FTP step to a TSO OCOPY command. No hassle with passwords.
- Realise that you might have to run a different ARSLOAD job for every version of the V2 ACT and specify the corresponding AG on the job. That is not very clear from the Migration Guide.
- ARSLOAD as used for loading the migrated index entries can only handle USS files smaller than 2 GB. That is a PITA because 1) the message is not very clear ("file not found") and 2) you have to rerun ARSZIMIG with a smaller date range (see next point), which can be a trial-and-error exercise to get right. Later I wrote a program which splits the OS dataset into 2 GB parts at the right record.
- Program ARSZIMIG inserts rows into the ARSOD table; that's why it is so slow. That can be a problem if you ever have to rerun ARSZIMIG.
- Be very careful with updating or adding V2 report definitions during the migration period; keep the definitions in sync. Note there is an ARSMDT table which maps V2 definitions to V7 definitions. It is not documented where this table is used, but I kept it in sync with the actual definitions.
- Make some queries to count the number of index entries in V2 and V7 to check the progress and result of the migration process.
- We have about 1400 report definitions, so I made a set of tools with REXX and ISPF to automate the migration process and check the progress. If you have such numbers, invest in such tooling.
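The 2 GB split mentioned in the tips could be sketched like this in shell. This toy version assumes one record per line; the real migration files were VBS, so the actual program had to work on record descriptors instead, and all names here are invented:

```shell
# Sketch: split a file into parts no larger than a byte limit without
# cutting a record in half. Assumes newline-delimited records; the
# real RECFM=VBS case needs record-descriptor handling instead.
split_at_records() {
  infile=$1; limit=$2; prefix=$3
  awk -v limit="$limit" -v prefix="$prefix" '
    BEGIN { part = 1; bytes = 0 }
    {
      reclen = length($0) + 1                  # record plus newline
      if (bytes > 0 && bytes + reclen > limit) { part++; bytes = 0 }
      print $0 >> sprintf("%s.%03d", prefix, part)
      bytes += reclen
    }' "$infile"
}
```

For example, `split_at_records input.dat 2147483647 part` would produce part.001, part.002, ... each below 2 GB, splitting only between records.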
Hope this helps a bit.
Leo de Jong, Rabobank, Netherlands
