Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Mehmet S Yersel

Pages: [1] 2 3 4
1
You need to add the -r apl option for exporting the applications contained within the application group. This part should be:

${ARSXML} export -h ${HOST} -u ${ADM} -p ${PW} -i ${XMLFILE} -r apl -v >${tempout}

2
Thank you RJ, for the feedback.

I think this should be a potential enhancement to how arsdoc get operates when this unique condition is encountered.

In the meantime, I will retrieve all such objects for the affected reports using arsdoc get and hand-build generic indexer files to mimic what arsdoc get should have done in the first place.
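As a sketch of what I mean (the keywords are from the Generic Indexer file format, while the field name and values are made up), the hand-built .IND file can have many hit-list entries point at the same byte range of one extracted object instead of duplicating the data. Whether the loader then stores the shared range only once is something I still need to verify:

COMMENT: hand-built generic index -- two hits sharing one stored object
CODEPAGE:500
GROUP_FIELD_NAME:acct_no
GROUP_FIELD_VALUE:1000001
GROUP_OFFSET:0
GROUP_LENGTH:524288
GROUP_FILENAME:report.out
GROUP_FIELD_NAME:acct_no
GROUP_FIELD_VALUE:1000002
GROUP_OFFSET:0
GROUP_LENGTH:524288
GROUP_FILENAME:report.out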
 
Thanks again,
-Mehmet

3
Hi All,

I am working on a CMOD-to-CMOD migration where I extract with arsdoc get using the generic indexer and then load into the target CMOD system. Some reports were originally indexed with floating triggers at every line of every page, so CMOD created pointers to the same object from thousands of different hit-list records, as expected. For example, if the first 353 pages of the report are all one document, every line in those 353 pages is loaded into CMOD as a separate document index (my hit list).

So far so good.

The problem appears when I try to retrieve this report with the generic indexer using arsdoc get. CMOD retrieves one hit-list record, writes it to the .IND file as the first entry, then retrieves the object it points to and writes that to the .OUT file. It repeats this process for every item in the hit list, creating a huge file, hundreds of times larger than the original.

This is a use case where CMOD does not give you back exactly what was stored, exactly as it was stored.

Is there a way for arsdoc get to generate the input file exactly as it was ingested for the use case described above?

Thanks,
-Mehmet 

4
Hi All,

I am loading migration data from a CMOD 9.5 system into a target CMOD 10.5 system using the Generic Indexer. For migrated reports, I need to skip the PostProcessor.sh script because those reports have already been processed by it. Day-forward reports, however, need the post processor to populate some index fields upon first ingestion.

I know that, normally, migration reports and day-forward reports would go to separate AGs, but the client didn't want that option. They want both day-forward and migration data loaded into the same AG.

The second alternative is to manipulate the script (see the sketch below). I am loading from a Windows platform into Linux, so modifying the post processor script to distinguish between the two data types wouldn't work: the file paths and script names cannot be the same on Windows (where the migration data is loaded from) and on Linux (where the day-forward data is loaded from). If the migration loads started from a Linux platform, same as the day-forward reports, this method would work, but that doesn't look viable given the time it would take to set up.
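For what it's worth, the guard itself would be trivial; the cross-platform invocation is the hard part. A minimal sketch, assuming the migration load job could export a variable that arsload passes through to the script (SKIP_POSTPROC is a made-up name, and you would have to verify that arsload actually propagates its environment on your system):

#!/bin/sh
# Hypothetical guard at the top of PostProcessor.sh: skip all
# post-processing when the load job exports SKIP_POSTPROC=1.
if [ "${SKIP_POSTPROC:-0}" = "1" ]; then
    exit 0   # migration data: index fields were already populated
fi
# ... existing day-forward post-processing logic continues here ...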

The third alternative is to add new applications for the migration data, remove the post processor scripts from them, and load the migration data that way.

Ideally, though, I should be able to tell ARSLOAD to skip the post processor script for a particular report, and I don't see such an option being available (maybe an enhancement request). Is there a way to skip a post processor script for a certain report without having to remove it from the load instructions?

Thanks in advance,
-Mehmet

 

5
z/OS Server / Re: OAM database error
« on: August 11, 2022, 12:06:38 PM »
Do you have virtual tape (tape-less tape)? You can change the management classes assigned to the OAM collections so that data stays on DASD for a set number of days and then migrates to tape (which happens to be tape-less tape). By migrating OAM objects to tape you become effectively infinitely scalable; however, you will need to take care of the directory tables in DB2. They need to be partitioned so that you don't get stuck at the 64 GB limit there too, if you have that much growth.

It has been a few years since I last worked on OAM, but back then I even wrote utilities to merge OAM instances live while users were accessing them. I hope there have not been too many changes since that would invalidate what I said above; if so, my apologies, I tried to help.

6
MP Server / Re: Converting mainframe Hex character
« on: July 12, 2022, 05:54:10 AM »
Is it possible that the code page the document was created in does not match the one used to load it into CMOD?
Check out this link; it has a list of the differences between the two code pages:
https://www.ibm.com/support/pages/conversion-character-differences-between-ccsid-037-and-ccsid-500
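If you have a Linux or USS command line handy, you can see the difference quickly. Bytes 0x4A, 0x4F, and 0x5A are among the code points that map differently between the two CCSIDs (IBM037 and IBM500 are the GNU iconv names; other platforms may use different aliases):

printf '\x4a\x4f\x5a' | iconv -f IBM037 -t UTF-8; echo
printf '\x4a\x4f\x5a' | iconv -f IBM500 -t UTF-8; echo

If I remember the mappings right, the first line comes back as ¢|! and the second as [!], which is exactly the kind of substitution you see when a document is loaded with the wrong code page.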

7
Report Indexing / Re: Application group question
« on: May 19, 2021, 10:12:07 AM »
Have you identified all offending loads from System Log and System Load? If so, maybe you can consider:

1. arsdoc get with the -X option to extract an entire report
2. arsadmin unload -L option to physically delete the entire report
3. arsload to reload with correct application definitions

Maybe you can repeat this process with a script for each identified file (a rough skeleton follows) to put everything back in sync with the modified upstream data feed and remove the impact to your CMOD system.
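Here is that skeleton, with ... placeholders where the connection and query options for your system and release would go (only the -X and -L flags come from the steps above):

# Hypothetical skeleton -- replace each ... with the options your
# release requires, and test with a single load id first.
while read loadid; do
    arsdoc get ... -X "$loadid"         # 1. extract the entire report
    arsadmin unload ... -L "$loadid"    # 2. physically delete the report
    arsload ...                         # 3. reload with corrected definitions
done < offending_loadids.txt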

8
z/OS Server / Re: CMOD Retention based on GDG like versions
« on: December 31, 2020, 06:53:17 AM »
CMOD only supports retention in days. There is no functionality to map GDG-based retention onto CMOD. When we do migrations, this is one of the first things we ask system owners to adjust: converting the GDG-based retentions that some archive systems have into days-based retentions.

I understand the business need, and it makes sense to keep a certain number of reports... but CMOD doesn't have this functionality. Maybe a new enhancement request is needed if it is critical for the business to continue having GDG-like retentions.

9
z/OS Server / Re: Enhanced Retention Management and expiration
« on: December 31, 2020, 06:48:59 AM »
I think changing the management class for the impacted objects would work.

Once the ODPENDT is hit, OSMC will readjust the retention and a new ODPENDT will be set. 

10
z/OS Server / Re: Enhanced Retention Management and expiration
« on: December 22, 2020, 02:40:52 PM »
I don't remember ever having this duplicate-index problem.

To avoid duplicate indexes, maybe you can try selecting the most recent range of OAM objects and updating their ODCREATS/ODPENDT first, so that five-year-old objects don't collide with them. For example, WHERE ODCREATS BETWEEN '2020-12-01' AND '2020-12-21' covers all objects created within the current month through end of day yesterday; then you do the same for the month of November, and so on. Make sure you also set ODPENDT = CURRENT DATE so that OSMC sets expiry dates according to these updated values during its next cycle.

I remembered why I had to do it this way: multiple storage groups were using the same management class, and the retention change was not needed for all of them. Changing at the management class level would have impacted every storage group, so this was the method I used to isolate just the documents I needed to retain.

Maybe you don't need to change object directory attributes at all, if every storage group that uses this management class must be retained exactly the same way. In that case you just change the management class properties, and change them back when needed. Even then, you may need to manipulate the OAM object directory to force-expire objects that no longer need to be retained when you undo the change, because their ODPENDT will still hold a future date that is no longer wanted.

 

11
z/OS Server / Re: Enhanced Retention Management and expiration
« on: December 21, 2020, 06:41:39 AM »
Sorry for getting the name of the object create timestamp wrong... I tried to remember it without looking at the manuals and was close enough. It is actually ODCREATS; you are right.

In the CMOD admin client, under [Application Group -> Storage Management -> Life of Data and Indexes], you can raise the retention days so that it syncs up with the OAM retention. However, any new archive will store new OAM objects with the previous management class properties -- the original retention days. If the purge hold is temporary and is removed before the standard retention is reached, you are fine and there is nothing to do in OAM, other than perhaps undoing the update for just the range of objects you originally changed. If the hold extends beyond the standard life cycle processing in OAM, you may lose newly added objects when they come up for deletion, so you would need to update the newly added objects periodically as well to avoid losing them.

12
z/OS Server / Re: Enhanced Retention Management and expiration
« on: December 18, 2020, 12:12:12 PM »
Thanks for describing the environment in good enough detail so we can understand the problem better.

As I understand it, ARSMAINT takes care of the indexes, and OAM's built-in life cycle management (OSMC) is used to expire the objects. In that case, again depending on the situation, you have different options:

I would definitely go with updating the ODCREATDT/ODPENDDT attributes so that OSMC, when it inspects the objects, avoids expiring them. Just remember to restore them to normal afterward.

For example, say ODCREATDT is 2020-01-01 and you want the object to have a six-month extension on top of the original. You can run a DB2 update against the object directory to change ODCREATDT to ODCREATDT + 6 months and set ODPENDDT to CURRENT DATE; this ensures that OSMC looks at the object in its next cycle and gives it a new expiry date.
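As a rough sketch only (GROUPnn.OSM_OBJ_DIR below is a placeholder -- the OAM directory tables belong to each object storage group, so verify the real table and column names and test on a copy first):

UPDATE GROUPnn.OSM_OBJ_DIR
   SET ODCREATDT = ODCREATDT + 6 MONTHS,
       ODPENDDT  = CURRENT DATE
 WHERE ODCREATDT = '2020-01-01';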

You must test this first and observe different use cases.

I have done this a lot but it could be dangerous if you are not sure what you are doing.

13
z/OS Server / Re: Enhanced Retention Management and expiration
« on: December 17, 2020, 06:51:48 AM »
1. ... how will reports that do not contain a HOLD, expire under normal circumstances? 
2. How do I control certain reports' expiration dates as I do using OAM today? 
3. Will I lose this capability?

1. You first need to document the "normal circumstances": whether OAM does life cycle management via daily OSMC processing in addition to the ARSMAINT process, or not. Who (CMOD or OSMC) deletes what? Who is in control in each different scenario in your situation? You will need a storage engineer from OAM support for some of the details if you are not well versed on the OAM side.
2. This depends on what you find out in #1. Let's say OAM expires objects in addition to, and in coordination with, CMOD's ARSMAINT processing. You will then need to identify the storage groups related to the application groups, find their storage/management classes (the life cycle policy), and make changes there. You can make global changes by asking the storage engineers to change the life cycle rules on the storage/management classes, or you can do a directory update of selected OAM objects by manipulating ODCREATDT/ODPENDDT. I have done both, depending on the requirement and my comfort level. Updating the OAM directory tables with an SQL query is very risky if you are not well versed in OAM; test it thoroughly if you choose that option. Changing the storage/management class life cycle rules can also be risky if you are not granular enough to impact only the intended objects.
3. Everything depends on what you find in #1 and what you do in #2 and how you do it.   

EDIT: I should have said OSMC instead of DFHSM; I realize I have started to mix up these two acronyms. Even though I know to look for the OSMC keyword in the logs when checking what the OAM housekeeping tasks did, I mix the terms up when speaking about them. Thanks for correcting.

14
z/OS Server / Re: SQL for reports
« on: December 02, 2020, 02:16:15 PM »
Just to get you started, here is a sample query that will give you a [User x Group x Folder] list for a situation where access is controlled by folder (replace owner with your CMOD table qualifier):

SELECT
       U.USERID AS USER_ID
     , G.NAME   AS GROUP_NAME
     , F.NAME   AS FOLDER_NAME
  FROM owner.ARSUSER U
     , owner.ARSUSRGRP UG
     , owner.ARSFOLPERMS FP
     , owner.ARSFOL F
     , owner.ARSGROUP G
 WHERE U.UID = UG.UID
   AND UG.GID = FP.ID
   AND FP.FID = F.FID
   AND UG.GID = G.GID
WITH UR;

In your specific situation, you need to know how access is granted (directly to a user ID, via AG and/or application, via folder only, etc.) and build different queries to obtain your list.
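For example, if some folder permissions are granted directly to a user ID, a variant like the following might cover that piece. This assumes the ID column in ARSFOLPERMS carries the user ID for direct grants the same way it carries the group ID in the query above; verify against your system tables before relying on it:

SELECT
       U.USERID AS USER_ID
     , F.NAME   AS FOLDER_NAME
  FROM owner.ARSUSER U
     , owner.ARSFOLPERMS FP
     , owner.ARSFOL F
 WHERE U.UID = FP.ID
   AND FP.FID = F.FID
WITH UR;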

I hope this helps you start a step ahead. You will need to look at the CMOD system tables and build an E-R diagram for your queries. You may need multiple queries, joining their results in some creative ways.

Good luck
-Mehmet

15
MP Server / Re: Changing Application Group Field Indexes to Filters
« on: October 20, 2020, 09:01:56 AM »
From the manual: "Indexes provide fast and direct database access, but require more time to create and maintain." You have already identified this as the culprit, since you have 10 indexes.

Also from the manual: "[A filter] is not used to identify a document or the field is always used with an index field to refine the results of a query. A filter causes a sequential search of the database."

So there will be a sequential search at retrieval time whenever any of the filter fields is provided as input, lengthening retrieval times. Personally, I would not remove all 7 indexes at once without observing the impact on the online/real-time user experience. If retrievals would be impacted to a degree unacceptable to users, dropping one index at a time and observing for a few days would ensure you don't rock the boat for them.
