Show Posts



Messages - Mehmet S Yersel

Pages: 1 2 [3] 4
31
Hi Lizette -

Before you start, verify that the OAM storage/management class is defined to handle the life cycle as you expect (30 days on DASD, then 83 months on TAPE, etc.). If not, this might be where the problem is. Assuming the storage/management class is defined correctly and assigned to the objects in question, you should expect DFHSM to act accordingly.

To verify that DFHSM works properly, you can run a query on the OAM Object Directory table and identify the objects (or the object count) due for DFHSM processing. The DFHSM started task performs two types of activities during its processing window:

1. Creating backups of newly added objects, or of any objects not yet backed up (e.g., because the DFHSM window closed before going through all storage groups)
2. Migration (or expiration) of objects with ODPENDDT <= CURRENT DATE

If no objects in a storage group meet either of the two criteria above, no action will be taken on that particular OAM Storage Group during DFHSM processing.

If any object has not been backed up yet (its secondary-copy attributes, the TAPE and VOLUME fields, are blank; I can't remember the exact attribute names, but if you pull the object directory structure you can tell which ones they are), or there are objects in need of DFHSM processing (ODPENDDT <= today, meaning something is pending expiration or migration), then you should expect activity on that storage group.
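As a sketch, the directory check above could be a query like this; I am writing it from memory, so verify the table qualifier and the ODPENDDT column name against your own OAM object directory DDL:

```sql
-- Count objects in one object storage group that are due for OSMC/DFHSM
-- action (pending migration or expiration). OAMADM.OSM_OBJ_DIR is a
-- placeholder; use the qualified object directory table for the storage
-- group you are checking.
SELECT COUNT(*) AS DUE_COUNT
  FROM OAMADM.OSM_OBJ_DIR
 WHERE ODPENDDT <= CURRENT DATE;
```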

If there are no errors, maybe something else is at play, such as a processing window too small to get through all the pending activity. Or, when CPU utilization is very high, DFHSM may not get enough processing cycles and can fall behind. Make sure the DFHSM task visits every storage group before it ends. If it ends while there are still objects to process, you may see a lag until it catches up.

One final thing to check: if DFHSM processing is good and so far you have not seen any irregularities, check the DB2 internals. Many years ago we had a DB2 table problem where deleted rows were not being physically deleted; the table size was constantly growing and adding new extents, despite the fact that there was the same number of deleted rows as newly added ones. It turned out to be a DB2 problem where space from deleted rows was not being reclaimed for reuse. OAM DASD sits on DB2 AUX tables. Have your DB2 tech team verify that there is no DB2 problem, and that when there are deletions, the rows eventually get deleted physically as well and don't stay there forever.
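One way to spot that symptom is to compare the catalog's row count against the allocated space after a RUNSTATS/STOSPACE. This is only a sketch, and the database name ('OAMDB') below is a placeholder for your OAM database:

```sql
-- Tables whose allocated space keeps growing while the row count stays
-- flat are candidates for the unreclaimed-space problem described above.
-- SPACEF (KB) is populated by STOSPACE/RUNSTATS; verify for your release.
SELECT T.NAME,
       T.CARDF  AS ROW_COUNT,
       P.SPACEF AS KB_ALLOCATED
  FROM SYSIBM.SYSTABLES T
  JOIN SYSIBM.SYSTABLEPART P
    ON P.DBNAME = T.DBNAME
   AND P.TSNAME = T.TSNAME
 WHERE T.DBNAME = 'OAMDB'             -- placeholder database name
 ORDER BY P.SPACEF DESC;
```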

Good luck with your investigation of the root cause.
I hope this helps,
Mehmet

32
System Load has that information, and it retrieves it from the ARSLOAD table. You can probably build a join between the ARSLOAD table and your segment table.

Link to ARSLOAD table details: https://www.ibm.com/support/knowledgecenter/en/SSEPCD_10.5.0/com.ibm.ondemand.administeringmp.doc/dodsc018.htm
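As a starting point, the join might look like the following; the column names here are placeholders (check the ARSLOAD table description at the link above for the real ones), so treat this as a sketch rather than working SQL:

```sql
-- Hypothetical join between the load history and an AG segment table.
-- The ARSLOAD column names and the join key below are placeholders;
-- verify them against the documented ARSLOAD layout and your segment
-- table's DDL before running anything.
SELECT L.*,
       S.*
  FROM ARSLOAD L
  JOIN MYAG.SEGMENT_TABLE S           -- your application group's data table
    ON S.DOC_NAME = L.DOC_NAME;       -- hypothetical join key
```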

I hope this helps,
Mehmet

33
What if we add an optional TimeZone property to the fields in folders?

Let's say we have a CMOD server that loads in the GMT time zone, while the ICN server is in the EST time zone.

Display Format: %Y-%m-%d-%H.%M.%S
Default Format: %Y-%m-%d-%H.%M.%S

These will assume the default server time zone.

If we were to change the definitions to say:

Display Format: %Y-%m-%d-%H.%M.%S GMT
Default Format: %Y-%m-%d-%H.%M.%S GMT

we should be able to override the server time zone (EST) and always display the time in the correct time zone, GMT.

34
Content Navigator / Re: Rolling up reports into one view ?
« on: April 22, 2020, 07:17:39 AM »
As far as I know, ICN/CMOD does not offer a sub-query or sub-menu for reports. However, the Windows client provides a similar feature: Folder -> Authority -> Full Report Browse. If your users are allowed to use the desktop client and they don't need ICN, you may be in luck. Otherwise, ICN doesn't have this feature, and there were no plans to ever add it, for performance reasons -- as of about two years ago, when we asked L2 support about this issue.

When migrating one of my clients' systems from Web AMMO to CMOD, this became an issue. Web AMMO had a feature to search/view reports in full as well as by individual documents. CMOD's Full Report Browse feature was the matching capability, but it was unavailable in ICN.

I hope this helps,

- Mehmet

35
Hey Guys, Good Day!

I hope you are all safe and sound from COVID-19... I have an ICN problem and am hoping to get some direction on how to resolve it.

Problem: We have a Red Hat 7.7 / WAS ND 9.0.0.6 / ICN 3.0.4 server, and it runs in the Eastern time zone. We have several CMOD instances in different time zones, with one ICN desktop for each CMOD instance. When pulling in the selection criteria for reports, there is a field called REPORT_DATE that gets populated with a "Last 6 months" interval based on the ICN server's time zone.

Is there a way to override this behavior and make sure the REPORT_DATE interval is populated from the CMOD instance's time zone?


Any ideas are appreciated.

Best Regards,
Mehmet

36
z/OS Server / Re: OAM Object Name Reference
« on: April 07, 2020, 07:06:09 AM »
Well I think I may be SOL here since the AG has an "Expire in 90 Days" setting and since I'm not changing AGs but rather OAM MC/date, then indexes would expire before OAM.  Any way to get around this?

Admin Client -> Applications -> select your application and right click -> View an Application -> Advanced Options -> Under Life of Data and indexes you will have 2 options:

Use application group value
Expire in ##### days

This might be the other place to make the corresponding changes. As always, test everything, even if it's clear as day :)

37
z/OS Server / Re: OAM Object Name Reference
« on: April 07, 2020, 06:19:14 AM »
I was able to identify the documents in the OSM_OBJ_DIR that I would want to change the expiration for.  One issue though, in your UPDATE statement you're updating the ODEXPDT but in my OAM table ODEXPDT is set to '0001-01-01'.  What does look like an expiration date is ODPENDDT which is set to '2024-02-05'.  Would that be the date to change?

I wouldn't recommend changing ODEXPDT. As a general rule, we should avoid changing any derived attributes that the system could override and undo, or worse, cause data loss. Instead, change the management class as if you were storing the object fresh, right now. Whatever the desired storage/management classes are, change to those classes. Next, you need to tell OSMC to adjust all the other object transition dates (migration/expiration) by updating ODPENDDT to today or some date in the past.

If your management classes dictate multi-tier storage (DASD, then TAPE (or tapeless tape, etc.)), ODPENDDT indicates the date of the next transition to be applied by OSMC. By setting it to today after updating the management class, you would be quite safe in protecting the data integrity within OAM.

In the past, I have migrated large OAM systems from one sysplex to another remote sysplex over an external network, and directly OAM-to-OAM on the same sysplex during LPAR consolidations where we had to combine OAM instances and migrate data from one OAM/LPAR into the other... some crazy stuff. So, this is what I strongly suggest: don't update ODEXPDT, because it is a derived field. Let OSMC calculate it for you based on ODPENDDT and the management classes.
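Putting that together, the update I am describing would look roughly like this; ODMC and 'NEWMC' are placeholders for the management-class column and value in your object directory (check your DDL for the real names), and the WHERE clause must of course be scoped to exactly the documents you identified:

```sql
-- Assign the desired management class and pull the pending-action date in,
-- so the next OSMC cycle re-derives the migration/expiration dates.
-- Placeholder names throughout; do NOT touch ODEXPDT.
UPDATE OAMADM.OSM_OBJ_DIR
   SET ODMC     = 'NEWMC',            -- hypothetical target management class
       ODPENDDT = CURRENT DATE
 WHERE ODNAME LIKE 'yourprefix%';     -- placeholder selection of documents
```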

38
z/OS Server / Re: OAM Object Name Reference
« on: April 07, 2020, 05:39:55 AM »
Question:  If I update the management class in the OAM object directory for the documents in question, would that effectively change it's retention?

You also have to set ODPENDDT to, say, today or a date in the past, thus forcing OSMC's nightly processing to 'look' at this object and recalculate its retention and expiration dates.


It has been a long time since I did this, and I don't currently have access to an OAM-based CMOD system to test and verify what I remember doing. I hope my memory is not fooling me.

Here is what I have found in the documentation: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.idao200/noselct.htm

I hope this helps,
Mehmet

39
I am building a new ICN server with ICN 3.0.6, and when I tried to import a custom desktop, I got this message:

"The desktop configuration file cannot be imported because it was created from another version of IBM Content Navigator.
The desktop configuration was exported from IBM Content Navigator, Version 3.0.4. However, you are running IBM Content Navigator, Version 3.0.6.

Export the desktop configuration from IBM Content Navigator, Version 3.0.6 and import the file again."

I understand; the message is very clear. However, is there a shortcut -- a quick way to do this, rather than going through the same exact manual steps for each minor version change?

Thanks for your feedback,
Mehmet

40
Go to Help --> About and at the very bottom you will see something like "Version 10.1.0.5 (64-bit)"

41
Hi Nolan -

We have successfully tested the approach to selectively pick up JES3 reports using the FORM value and loading them into designated application groups.

I wanted to provide an update ...

Thanks again,

-M

42
Hi Nolan -


Thank you very much for the detailed response. I am more encouraged to continue this approach now  8).

I have one more question:

- We use JES3, in JESPARM the output class is defined as:
 
SYSOUT,CLASS=A,OUTDISP=(WRITE,WRITE),HOLD=EXTWTR

How would that compare to the class properties in your system?

Best Regards,

-M

43
I am considering running multiple ARSYSPIN started tasks to load different mixes of reports into different mixes of Application Groups.

Here is the use case:
- 2 different JES3 CLASSes (let's say CLASS=A and CLASS=B) will be monitored for input
- 9 different FORMs, representing 9 different applications, will be ingested into up to 9 different AGs depending on retention

I am considering running multiple ARSYSPIN clones such that:
1. ARSYSPN1 is configured to read SELFORM=(FRM1) from JESCLASS=AB and store into APPL="FRM1", APPLGROUP="FRM1"
2. ARSYSPN2 is configured to read SELFORM=(FRM2) from JESCLASS=AB and store into APPL="FRM2", APPLGROUP="FRM2"
... and so on

From what I understand by reading the reference manual, the parameters should allow me to do this, as long as I clone ARSYSPIN and customize it properly. Here is the reference I am looking at: https://www.ibm.com/support/knowledgecenter/en/SSQHWE_10.1.0/com.ibm.ondemand.administeringzos.doc/doday002.htm

My Questions:
- Has anybody tried tweaking ARSYSPIN and running multiple copies of it with different parameters against the same CLASS(es)?
- Is this a viable approach?
- Do you see any problems with this approach?

I appreciate your insights and any feedback you may have before I start experimenting with the idea.

-M

44
z/OS Server / Re: JES Extractor
« on: July 25, 2019, 11:59:21 AM »
If you are running CMOD on a non-mainframe platform and your reports are generated in JES on the mainframe side, you need to work with your MVS systems programmer to find out the options and whether you are entitled to use MVS Download. If so, your systems programmer needs to set up a started task to push the reports to your system via MVS Download.

On the server side, you will configure arsjesd to receive the files sent by MVS Download.

There may be other alternatives, but this is a very reliable and fast method to move files from the JES SPOOL to a non-mainframe platform.

45
Content Navigator / Re: ICN Certificate
« on: April 11, 2019, 06:08:51 PM »
Hi Ed -

Thank you for the information you have provided. I haven't yet started implementing it, but I will need the steps you detailed for securing the mainframe-to-ICN connection  ;D

On the other hand, I was able to complete the SSL configuration for our 2 ICN environments -- not the most straightforward process, due to the exceptions I encountered along the way. Here are the high-level steps, in case anyone is going through the same requirement of providing a secure connection to ICN:

- Install IBM HTTP Server with Java
- Install the WebSphere plug-ins with Java
- Create a self-signed certificate
- Get the certificate issued by your CA
- Install your signed certificates (root & intermediate) on the HTTP server
- Open the 443 port and make sure HTTPS works from the secure port
- Once HTTPS works, use /navigator, and that too should work

Along the way you may get a lot of errors and firewall issues, and you may need to do a lot of research to move past each one.

Good luck,
Mehmet
