Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - jsquizz

Note that this installation is on AIX.

My basic process was to upgrade to the latest ODWEK, as the HTML5 viewer is included in the update (I was running ODWEK 9.5).  As you probably already know, ODWEK is no longer provided separately but is part of the CMOD install tar.  The readme.txt file included in the CMOD (untarred) installation directory details some additional steps required for the new release.

Steps taken:

1.   Download and install the CMOD 10.1 base code (/opt/IBM/ondemand/V10.1) <=== may not be required if already installed.
2.   Download and install the CMOD fix pack from Fix Central.
3.   Update the ICN configuration, change the ODWEK installation directory to the newly installed release (/opt/IBM/ondemand/V10.1/www), then build and deploy.
4.   Included in the CMOD installation readme.txt:
     a.   Copy /opt/IBM/ondemand/V10.1/jars/gson-2.8.1.jar to /opt/IBM/ondemand/V10.1/www/api
     b.   Update the WAS configuration and add /opt/IBM/ondemand/V10.1/jars/gson-2.8.1.jar to the CLASSPATH
     c.   Update the CLASSPATH (LIBPATH) in WAS to /opt/IBM/ondemand/V10.1/www
5.   Install GSKit 8.  Instructions are included in the ODWEK installation readme.txt.  This was an important step for me, as ICN was unable to open the repository without it.
6.   Update the ICN custom property ODWEK_INSTALL_DIR to the new /opt/IBM/ondemand/V10.1 directory.
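
Step 4a above boils down to one file copy; a minimal sketch, assuming the default V10.1 paths from this post (pass your own install root if it differs). Steps 4b/4c are done in the WAS admin console, not from the shell, so they are not scripted here.

```shell
# Step 4a as a function: copy the bundled gson jar into the ODWEK web API dir.
# The default path below is the assumed install root from this post.
copy_gson_jar() {
  local odhome="${1:-/opt/IBM/ondemand/V10.1}"
  cp "$odhome/jars/gson-2.8.1.jar" "$odhome/www/api/"
}
# Usage: copy_gson_jar              # default root
#        copy_gson_jar /my/root    # custom install root
```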

To activate the HTML5 viewer:
1.   Create a directory in the WAS infrastructure to hold the new viewer (e.g. <WAS install>/AppSrv01/installedApps/TEST/navigator.ear/navigator.war/viewers/LineDataViewer)
2.   Unzip the viewer file from /opt/IBM/ondemand/V10.1/www/viewers/ into the …/navigator.war/viewers/LineDataViewer directory previously created
3.   Finally, update the ICN Viewer Map using the ICN admin tool to use the Line Data HTML Viewer
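
Step 1 of the viewer activation can be sketched as below; the profile and cell names mirror the example path in this post and are assumptions, so adjust them to your own WAS topology. The viewer zip filename was not given here, so the unzip is left as a comment.

```shell
# Create the LineDataViewer directory under the deployed navigator.war.
make_viewer_dir() {
  local wasroot="$1"   # e.g. <WAS install>/AppSrv01 (assumed layout)
  local dir="$wasroot/installedApps/TEST/navigator.ear/navigator.war/viewers/LineDataViewer"
  mkdir -p "$dir" && echo "$dir"
}
# Step 2 is then roughly:
#   unzip <viewer zip from /opt/IBM/ondemand/V10.1/www/viewers/> -d "$dir"
```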

I've been able to view line-data reports in both Edge and Firefox (something I was never able to do with the Java line data applet  :) )

Of course, this is a summary of the install/implementation process.  There are many details left out such as WAS cleanup, bounce, etc.

Thank you for these steps. I'm actually in the process of implementing this myself. You gave me a head start haha!

Can confirm that your steps work perfectly

MP Server / Re: Issue configuring new environment
« on: March 22, 2021, 08:43:43 AM »
You didn't mention the platform...

If it's AIX, Ed's on the right track - check to make sure you have the XLC libraries.  See here:

If it's Linux, check to make sure you have the DB2 links created with db2ln.
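
A hedged sketch of the db2ln check on Linux; the V11.5 install path below is an assumption (check where your DB2 copy lives), and in real life db2ln needs to run as root.

```shell
# Run db2ln from the DB2 install's cfg directory if it exists there.
run_db2ln() {
  local db2dir="${1:-/opt/ibm/db2/V11.5}"   # assumed default install path
  if [ -x "$db2dir/cfg/db2ln" ]; then
    "$db2dir/cfg/db2ln"
  else
    echo "db2ln not found under $db2dir/cfg; check your DB2 install path" >&2
    return 1
  fi
}
```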


Whoops, yeah my bad, RHEL

Let me try db2ln. Thanks

MP Server / Re: Issue configuring new environment
« on: March 19, 2021, 03:29:58 PM »
No error message anywhere of any kind?


Nope, just:

Code:
[archive@ondemand config]$ arsdb -I archive -gcv
ARS4014E Unable to load >DB2<

I'm used to just creating an instance named archive, doing su - to that user, and doing the configuration; I've never had any issues doing it that way with the default configs. I'm wondering if I'm missing a step after creating the instance and adding the archive user to db2iadm1.
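
For what it's worth, ARS4014E at configuration time is often an environment problem rather than a permissions one: the shell running arsdb has to have the DB2 instance environment loaded so the DB2 client library is on the loader path. A minimal sketch, where the instance home path is an assumption:

```shell
# Source the DB2 instance environment for the current user, if present.
# /home/db2inst1 is an assumed instance home; substitute your own.
load_db2_env() {
  local insthome="${1:-/home/db2inst1}"
  if [ -r "$insthome/sqllib/db2profile" ]; then
    . "$insthome/sqllib/db2profile"   # sets LD_LIBRARY_PATH, DB2INSTANCE, etc.
    echo "sourced $insthome/sqllib/db2profile"
  else
    echo "no db2profile under $insthome/sqllib" >&2
    return 1
  fi
}
```

Adding this to the archive user's .profile keeps it from being forgotten between sessions.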

MP Server / Issue configuring new environment
« on: March 19, 2021, 07:12:18 AM »
I'm working on a POC and for some reason I can't get the install to work. I've done plenty of these, so I'm scratching my head. I have a feeling it's something simple.

1) The instance is created as "db2inst1":

Code:
[db2inst1@ondemand ~]$ db2ilist
[db2inst1@ondemand ~]$ groups
[db2inst1@ondemand ~]$

2) The "archive" user is part of db2iadm1; attempting to run arsdb -gcv -I archive:

Code:
[archive@ondemand config]$ groups
[archive@ondemand config]$ arsdb -I archive -gcv
ARS4014E Unable to load >DB2<

3) ars.cfg / ars.ini are both basically untouched

Code:
[archive@ondemand config]$ more ars.ini

Code:

I verified that all the directories are created, owned by archive:db2iadm1, and set to 775.

I think it's something with the archive profile, permissions, or something like that. Usually I use all the default settings (archive instance / archive db), but I want the default DB2 instance name of db2inst1 because I will be installing other products on here and would like to keep things organized :) Anyone have suggestions? I unfortunately don't have a DBA to help me.

« on: March 12, 2021, 05:29:27 PM »

Yup, dealing with that now. However, nobody can seem to track down who's monitoring it, and/or why!

« on: March 12, 2021, 03:48:17 PM »
Second retrieval box is your option for 66.

Just as an FYI, I suggest you don't enable database query logging. I forget which box is which; one is the 65 message, one is the 226. (IBM has suggested to me in the past to turn them on only to debug issues, if needed.) I've seen high volumes of queries being logged cause performance issues to the point of bringing down a library server.

Other / Re: Suggestion needed to Speed up Retrievals from TSM
« on: March 08, 2021, 11:50:01 AM »
I hate to bump such an old, old thread, but I think you're still an active user.

What did you decide on?

Report Indexing / Re: Loading large CSV using Generic indexer
« on: March 05, 2021, 03:52:38 PM »
Thanks for the reply! Not going to bother opening a PMR for this. The application end is looking at splitting this up into 2 pieces.

Ironic. I'm trying to open up a very large data dump from Oracle with Excel; it's a .del file with maybe 500k rows? Excel isn't liking it. I got the same error message you got.

Sounds to me like it's probably hitting some kind of resource limit on the local PC: memory, CPU, or something.

Documentation / Re: CMOD / TSM and moving to AWS compatibility
« on: March 05, 2021, 08:24:35 AM »
I think that as long as you're within the compatibility matrix, you're fine -- OS, database, storage, etc.  I haven't seen any special requirements just because you're in the cloud.


I was told the same thing, except for moving from on-prem to Azure. They mentioned that if there's an issue with the underlying IaaS within Azure, they wouldn't or couldn't support it.

Just as a heads up, I built POC environments for both AWS and Azure; they run nicely.

Report Indexing / Re: Loading large CSV using Generic indexer
« on: March 05, 2021, 08:22:10 AM »
Agreed; changing the max rows made no difference. After loading, we were not able to view all rows in the CSV. We are going back to the application team to see if they are able to split the file. If they are resistant, we will have to go to IBM. As always, appreciate the quick response!! Take care.


Since you mention that...

I've seen this before. Granted, it was a very, very old CMOD system, maybe 8.5?

Our business partners were sending CSV files with like 200-300k rows. They wanted to pick up something on each row of the file. CMOD would get to like the 10th-to-last row and just quit. A PMR didn't resolve anything. We just told them to send it as smaller files.

Report Indexing / Re: Loading large CSV using Generic indexer
« on: March 04, 2021, 12:54:40 PM »
What version of CMOD?

MP Server / Arsdoc get — best optimized way to extract and reload
« on: March 03, 2021, 09:21:30 PM »
So I am working on a migration and I’m just looking at some possible ways to extract and reload my data.

My thought: arsdoc get with -L, reading from a parameter file with a listing of load IDs. Do this on an app-group-by-app-group basis. We only have like 10, but a good amount of data.

We have a process, a weird one- that extracts from a “short term” application group and loads into a “long term” application group. This was handled by a third party years ago, and since I joined I redid it with a few basic shell scripts. My script generates a list of dates of the month prior, and loops through that + an app group name, and does all the magic. This process is very good. I can’t wrap my head around why they are going from one app group to another. I think it’s for performance at the db level or something
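
The "list of dates of the month prior" generation that script does can be sketched in a few lines of GNU shell. The arsdoc/arsload calls themselves are left as a comment, since the exact flags depend on your release and instance:

```shell
# Print every YYYYMMDD of the month before the given YYYY-MM-DD (GNU date).
prev_month_dates() {
  local ref="$1" first last d
  first=$(date -d "${ref%-*}-01 - 1 month" +%Y-%m-%d)   # 1st of prior month
  last=$(date -d "${ref%-*}-01 - 1 day" +%Y-%m-%d)      # last day of prior month
  d="$first"
  while [ "$(date -d "$d" +%s)" -le "$(date -d "$last" +%s)" ]; do
    date -d "$d" +%Y%m%d
    d=$(date -d "$d + 1 day" +%Y-%m-%d)
  done
}
# Each printed date can then drive one arsdoc-get / arsload pass per app group.
```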

I figure there are other ways, where I can extract a month of data per file, etc. So I guess these are the three options I'm trying to weigh:

1) retrieve an entire load, reload with arsload
2) retrieve an entire day, reload with arsload
3) retrieve an entire month, reload with arsload.

I'm leaning towards option 1. I also think that would be easier to recover from if there's some kind of issue.

What are everyone's thoughts?

MP Server / determine oldest doc in an app group?
« on: February 24, 2021, 05:24:30 PM »
I have about 2k app groups and need to figure out the oldest record per application group.

I'm looking through the tables; arsag caught my eye, more specifically 'last_doc_dt'.

I ran a simple query, and it's coming back blank.

Code:

  1 record(s) selected.

Is there anything on, let's say, arsseg that I can also try? I was thinking I could use the arsseg table.

Code:
START_DT TIMESTAMP The minimum (oldest) date of documents stored in this folder, in database-native timestamp format.

The CMOD System Load (SA*) tables should be your first stop.  There's no AppGroup field in the OnDemand System Log without doing a full-text search, which is terrible for performance.  A few tweaks of SQL should help you get what you need.

Also, loading 100k files per month is terrible.  Load larger files less frequently to reduce the overhead in CMOD.  :)


Yep, I tinkered with the SA* tables tonight for a few minutes after I posted this and got my results.

I agree with you about the loads. There are literally a month's worth of files, tens of thousands of them, with one single document per file.

MP Server / Any easy/fancy ways to gather all loads in one year time frame?
« on: February 23, 2021, 05:20:23 PM »
Business folks are asking for a count of all documents loaded in 2020 for a specific folder. Usually not a big deal; only 6 AGs are in that folder.

How I've done it in the past is to use the System Load / System Log folder, put the results into Excel, and do some simple parsing. But these are large application groups; some have 100k+ loads per month. I've learned that the CMOD client doesn't like copying out more than 30-40k rows at a time.

My idea? I'm thinking of just hitting the SA* tables, of which we have two. Anyone have any more ingenious ways of doing this?
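
A hypothetical sketch of going straight at the database instead of exporting from the client. The table and column names here (SL2, agid_name, load_dt) are placeholders, not real CMOD schema: describe your actual System Load (SA*/SL*) table with "db2 describe table <name>" first and substitute the real names (your date column may also be stored as an integer day number, in which case the BETWEEN range needs converting).

```shell
# Build a per-app-group count query for calendar year 2020.
# All identifiers passed in are placeholders for your real SL/SA table.
build_count_sql() {
  local table="$1" agcol="$2" datecol="$3"
  printf "SELECT %s, COUNT(*) FROM %s WHERE %s BETWEEN '2020-01-01' AND '2020-12-31' GROUP BY %s" \
    "$agcol" "$table" "$datecol" "$agcol"
}
# Usage:  db2 "$(build_count_sql SL2 agid_name load_dt)"
```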
