Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Maciej Mieczakowski

16
Hard to say... have you tried installing Adobe and the CMOD client once again? It's worth checking how many Adobe products the end user has and getting rid of any that aren't needed. In most scenarios that fixes the issue. Sometimes it's hard to find the root cause.

17
MP Server / Re: Using Permanent Cache With Small First Volume
« on: November 05, 2015, 02:03:02 AM »
Thanks, Alessandro, for that great explanation! I totally agree with your point of view and would also suggest keeping all cache filesystems the same size, as recommended.

If there are multiple file systems in the ARS.CACHE file, Content Manager OnDemand uses the file system with the most free space to store the objects. Here is an explanation showing that CMOD "balances cache by archiving document data to the cache file system with the most free bytes":

http://www-01.ibm.com/support/docview.wss?uid=swg21409251
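For reference, a minimal sketch of what such an ARS.CACHE file can look like; the paths below are placeholders, so check your own instance's configuration directory for the real file and entries:

/ars/arscache1
/ars/arscache2
/ars/arscache3

With several entries like this, each stored object goes to whichever file system has the most free bytes at load time.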

18
Documentation / Re: Loadids - Maximal number
« on: November 05, 2015, 01:45:54 AM »
I'm just curious: what will happen when you actually load those 14 billion rows into a single APPGR? Will newly arriving data start failing to load, with an explanation in the system log, or will you face some unexpected behavior? Has any of you faced that? Any experience?

I don't check all the apps and their object counters, so I'm curious whether I should start worrying about that and maybe prepare some preventive action for the ones with a high load ratio ;)

19
Generally arspdf32.api needs to be copied from the mentioned original location "C:\Program Files (x86)\IBM\OnDemand32\PDF" to the ../Adobe/plug_ins directory (for instance C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\plug_ins), where it's invoked when needed. As far as I understand, the document retrieval request invokes the ARSPDF32.API OLE control to interface with Adobe Acrobat. That's why you need Adobe Standard or Professional: Adobe Reader does not include an OLE interface.
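A minimal sketch of that copy step on the Windows command line; the paths are the ones from above, so adjust the Acrobat version directory to your own install:

copy "C:\Program Files (x86)\IBM\OnDemand32\PDF\ARSPDF32.API" "C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat\plug_ins\"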
If your application is configured as "user defined PDF", then the whole retrieval is handled outside the client. The application that is assigned to handle PDF files in the system does the job, so this API is not needed at all in that case and it can work with Adobe Reader.

20
MP Server / Re: ARSLOAD best practices for failed and successful load
« on: November 04, 2015, 01:40:41 AM »
Exactly as Justin says, it can easily be solved with a short shell script if you use Linux/UNIX.

You can also take another approach that doesn't use the ARSLOAD functionality at all:

I do the same thing as you described, but with a one-day delay (so that I have a complete list of files that have not been processed). I don't even use the arsload functionality; I simply query the system log and take the complete list of files that failed to load the previous day. I don't bother with files that have been successfully processed; all that matters from my perspective is the data that failed to load, as that needs to be checked and reprocessed if possible.

If you need a report of successfully processed data, you can simply use the message number 87 records from the system log as well.
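A minimal sketch of such a query, modeled on the arsdoc syntax from my other post; I'm assuming msg_num=88 is the failed-load message on your server (87 is the successful load), so verify the message numbers against your own system log first:

/usr/lpp/ars/bin/arsdoc query -h localhost -u <userid> -p <password> -f 'System Log' -i "where time_stamp between <start_epoch> and <end_epoch> AND msg_num=88" -H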

21
MP Server / Re: Migration Document stored in CACHE to TSM
« on: October 30, 2015, 07:13:52 AM »
Indeed, I see my mistake here. I chose the wrong system log entry for the exact destination of the stored object; it was supposed to be the CACHE destination:

Application Group Object Store: Name(APP1) Agid(5679) NodeName(-CACHE-) Nid(85) Server(-LOCAL-) ObjName(4009FAAA) Time(0.000)

and in parallel we send to the TSM storage node:
Application Group Object Store: Name(APP1) Agid(5679) NodeName(TA0410Y) Nid(85) Server(-LOCAL-) ObjName(4009FAAA) Time(0.057)

I used the procedure I described above when we lost data in TSM due to a DB crash (unexpected server reboot) and had to do a point-in-time restore, as the roll-forward restore did not work out due to the crashed last transaction, and TSM didn't want to start. We load data to cache and to TSM on the fly, so when we went back to the latest backup (point-in-time restore) we simply lost the TSM DB entries for a certain period of time. I hope it's clear now that we simply had to move data from CACHE to TSM, and that is what this procedure is about.

Sorry, I should have described that earlier, my mistake :) And you made me realize it's off topic. So @Yousuf, it's a different scenario, and indeed you should use Alessandro's procedure :)

22
MP Server / Re: ARSLOAD extreme slow
« on: October 30, 2015, 05:55:40 AM »
I once faced an issue with quite slow loading in a scenario where data was loaded from multiple loading directories at once (we had over 20 loading daemons, each loading data into a different APPGR; CMOD server 8.5.0.6). We also had high CPU consumption, with kernel usage oscillating around 40%, so we had a real problem...

When one loading daemon was processing, it took less than 1 s to process a file and send it to TSM/cache/DB, but when 20 daemons were loading, it took around 30 s per file!

After a long time spent on a call with IBM, we found an interesting issue that was not mentioned at all in the product documentation (a tech note should be added; I don't know if it ever was).

We couldn't find any documented information about specifically setting the GSKit libraries in LIBPATH. I had issues with the gsk links in /usr/lib; running /usr/lib/libgsk* gave me:

exec(): 0509-036 Cannot load program /usr/lib/libgsk7acmeidup.so because of the following errors:
        0509-151 The program does not have an entry point or
                   the o_snentry field in the auxiliary header is invalid.
        0509-194 Examine file headers with the 'dump -ohv' command.

We couldn't find the reason for it, but we did find that when you set the LIBPATH parameter and add the exact location of your GSKit directory, everything starts working correctly :) I assume it was due to the GSKit location; we guess the software assumes that /usr/lib is in LIBPATH by default.
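A minimal sketch of the fix, assuming GSKit 8 under /usr/opt/ibm/gsk8_64; ours was GSKit 7, so adjust the path to wherever your GSKit libraries actually live:

# set in the environment of the CMOD instance owner before starting the daemons
export LIBPATH=/usr/opt/ibm/gsk8_64/lib64:/usr/lib:$LIBPATH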

For me it solved the issue: it now takes around 1 s instead of 30 s to process data from 20 loading directories in parallel. I'm not sure how it is in 9.5 now; I will test that pretty soon :) But maybe it's worth trying, of course after you do some more research.

23
MP Server / Re: Migration Document stored in CACHE to TSM
« on: October 30, 2015, 05:31:20 AM »
There is another approach: use the arsadmin store command. If you have only a few objects to migrate, it does the job as well.

I do that in a few steps:
1. Get the object name you are interested in from system log message 82, e.g.:

Application Group Object Store: Name(APP1) Agid(5679) NodeName(TA0410Y) Nid(85) Server(-LOCAL-) ObjName(4009FAAA) Time(0.057)

2. Get the Primary Node ID: in the example above, Nid(85).

3. Get the FILESPACE_NAME, as it's called in TSM, or the Application Group Identifier, as it's called in OnDemand (in my example APP1 = Agid 5679 = WYE). You can get this from the ARSAG table (AGID_NAME column):

SELECT NAME,AGID,AGID_NAME FROM ROOT."ARSAG" WHERE NAME='APP1'

NAME                          AGID  AGID_NAME
----------------------------  ----  ---------
APP1                          5679  WYE

or from the Administration Client: in the AG properties, under the Storage Management tab -> Advanced -> Application Group Identifier label.

4. Find the symbolic link in the /arscache/retr/AGID_NAME/DOC directory to get the relative path to the object stored in cache:

ls -l /cache_path/retr/AGID_NAME/DOC | grep ObjName

ex.
ls -l /ars/arscache1/retr/WYE/DOC/ | grep 4009FAAA
lrwxrwxrwx    1 root     system           35 Jun 29 2015  4009FAAA-> /ars/arscache1/16853/WYE/DOC/4009FAAA

5. Build the final command that migrates the data from cache to TSM:

./arsadmin store -h serv_name -u user -p passwd -g app_gr_name -n prinid-secnid -d dir_of_cached_obj ObjName

ex.
/usr/lpp/ars/bin/arsadmin store -h ondemand.server.com -u admin -p admin -g APP1 -n 85-0 -d /ars/arscache1/16853/WYE/DOC 4009FAAA

And that's it ;) The file will be copied to TSM; it will then be removed from the cache location where it was previously stored, and kept in cache for another period of time as set in the APPGR, the same as during a reload. So the relative location of that cache file will change.
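If you have more than a couple of objects, the steps can be strung together in a short script. A rough sketch, reusing the names from the example above and assuming readlink is available (on older AIX you may have to parse ls -l output instead):

#!/bin/sh
# sketch: push a list of cached objects to TSM with arsadmin store
CACHE_RETR=/ars/arscache1/retr/WYE/DOC
for OBJ in 4009FAAA; do
    # resolve the symlink from step 4 to get the real cache directory
    DIR=$(dirname "$(readlink "$CACHE_RETR/$OBJ")")
    /usr/lpp/ars/bin/arsadmin store -h ondemand.server.com -u admin -p admin -g APP1 -n 85-0 -d "$DIR" "$OBJ"
done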

I hope it's clear :)

24
MP Server / Re: System Log query failing while using ARSDOC QUERY
« on: August 12, 2015, 02:42:00 AM »
Hi,

I think you forgot to narrow the search to message 30 only, which you also mentioned. I run a pretty similar script for checking license utilization. So add the missing part and it should work ;)

/usr/lpp/ars/bin/arsdoc query -h localhost -u <userid> -p <password> -f 'System Log' -i "where time_stamp between 1439308800 and 1439395199 AND msg_num=30" -H

I also run it from the root account, as it requires a lot of memory to execute; you can raise the limits (ulimit) or use the root account instead.
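If you'd rather not switch to root, a sketch of raising the limits in the current shell first; which limits actually bind depends on your system:

# -d is the data segment size, -m the physical memory limit
ulimit -d unlimited
ulimit -m unlimited
/usr/lpp/ars/bin/arsdoc query ...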

25
MP Server / Re: Error Message 107 in the arssock.err file
« on: August 10, 2015, 04:54:03 AM »
It might be the kind of issue you never find the answer to :) I had a quite similar issue once when I was updating an application group (manually, not using arsxml) with some string indexes. When I confirmed the small change that made one field updateable, the system also set it as CFS-OD, which was not my intention and is not normally possible. As a result, no data was loading. It took me a few hours until I noticed that, and I had to correct it directly in the DB, as there was no option to unset CFS-OD from the admin client; it's greyed out for string data fields.

I did not register a PMR for this and have never faced the same issue since. So I explained it to myself as some sort of network issue, or a bug that is very hard to reproduce.

26
It all depends on how the data is supposed to be accessible in the end and what the customer/end users want to achieve ;) I have used these approaches:

1. Create one big generic index file (via scripting) and load one data file (report) into the system. The index file consists of multiple indexes that all point to that single report, so many rows are added to the DB but only one object file is sent to TSM. They weren't big reports, a few hundred at most, but it works as expected for end users (see the sketch at the end of this post).

P.S. Now, in CMOD 9.5, it might be even better to use the full text search server instead, as it gathers all the data from the reports :) But it was not available in previous CMOD versions and I have not yet tested it :) Maybe someone who uses it can share their experience, as it might be a good tool.

2. Modify the report (if possible) and add a header if the data is well organized (for instance, every 100 records). Sometimes it's easy to use sed, awk, regexes, or simply even vi to deal with it if you use UNIX for CMOD. After such pre-processing, let the system index it.

3. Use index values that have nothing in common with the data that's planned to be loaded. That's the easiest approach. And, as you mentioned, I have also used values taken from the data file name as indexes.

As the segment date I use the load date if possible, and let the system fill it in while loading. By adding a default value of 't' for the segment date field in the application, the system picks up the current date when the data is processed. So there is no need to even add this value to the index file when a generic index file is used.

It all depends on what the customer wants to achieve :)
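For approach 1, a minimal sketch of what such a generic index file can look like, with two groups carrying different index values but naming the same report file. The field name and values are made up, and I haven't verified on every version that repeated GROUP_FILENAME entries are stored as a single object, so check the Indexing Reference for your release:

COMMENT: two index rows for one report
CODEPAGE:850
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:12345
GROUP_OFFSET:0
GROUP_LENGTH:0
GROUP_FILENAME:report1.out
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:67890
GROUP_OFFSET:0
GROUP_LENGTH:0
GROUP_FILENAME:report1.out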

27
MP Server / Re: Error Message 107 in the arssock.err file
« on: August 05, 2015, 04:30:07 AM »
I bet you have checked that, but it's worth adding that the explanation of the errno value, which might provide additional information, resides in /usr/include/sys/errno.h on UNIX systems :)
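For example, a quick lookup for this case (a sketch; the symbolic name attached to 107 varies by platform):

grep -w 107 /usr/include/sys/errno.h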

28
MP Server / Re: CMOD, TSM & DB2 upgrade in same server
« on: July 27, 2015, 03:57:44 AM »
Quote
I would strongly advise going from DB2 9.5 to 10.X and not to DB2 9.7, because DB2 V9.7 is not supported anymore, so it doesn't make sense to upgrade to that version, at least not in your case. Except if you have a very strong reason to do that.

I started digging after your post, and according to my findings it seems that DB2 9.7 will still be supported for a couple of years:

http://www-01.ibm.com/support/docview.wss?uid=swg21168270

Maybe you were thinking about DB2 9.5, which is not supported anymore since the end of April 2015 ;)

29
MP Server / Re: Max Concurrent ARSLOAD daemons
« on: July 28, 2014, 02:25:43 AM »
Hi,

I'm also interested to know the reason for this requirement. Maybe you want to load 100 000 documents in a very short time, and have to do that periodically? We had such a requirement once, and as a solution we created one common index file for all documents of a certain application: over 300k objects (small PDFs). They were all loaded almost at once... much faster than loading them one by one from the daemon ;)

BR
Maciek
