Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Lars Bencze

Pages: 1 2 3 4 [5] 6 7
61
MP Server / Re: How to index fully composed AFP with ACIF
« on: January 16, 2018, 10:21:32 AM »
There is of course this variant of the error too. After reading the document above, it seems to imply that the Formdef is not to be found in the file after all?

9: ARS4302I Indexing started, 2174218 bytes to process
 : APK415I CC=NO
 : APK415I CCTYPE=A
 : APK415I CONVERT=YES
 : APK415I TRC=NO
 : APK415I CPGID=278
 : APK415I DCFPAGENAMES=YES
 : APK415I UNIQUEBNGS=YES
 : APK415I IMAGEOUT=ASIS
 : APK415I FORMDEF=DUMMY
 : APK415I RESTYPE=ALL
 : APK415I INPUTDD=D:\OnDemandDirectories\arsload\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD
 : APK415I OUTPUTDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.out
 : APK415I INDEXDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.ind
 : APK415I RESOBJDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.res
 : APK420S AN ERROR OCCURRED WHILE ATTEMPTING TO OPEN DUMMY RETURN CODE 28.
 : APK412I MODULE APKSRIAX HAS RETURNED WITH RETURN CODE 255.
: APK532S A FORM DEFINITION WITH A MEMBER NAME (DUMMY) WAS NOT FOUND OR WAS INVALID - RETURN CODE 28.
 : APK441I ACIF AT IPTR923 HAS COMPLETED ABNORMALLY WITH RETURN CODE 16.
1: ARS4309E Indexing failed


RC 28 means "file not found", and ACIF only searches for a file named "DUMMY" if it did not find a valid Formdef inside the AFP file itself.
How can I browse/check the input file to see whether it contains an FDEF? Do I use a hex editor and scan for a certain byte combination? (And which hex combination, in that case?)
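For anyone who wants to try the hex-scan approach, here is a minimal Python sketch that walks the X'5A'-delimited MO:DCA structured fields in the AFP file and reports anything that looks like an inline form definition. The structured-field identifiers used below (X'D3A8CD' for Begin Form Map, X'D3A8C6'/X'D3A8CE' for Begin Resource Group / Begin Resource) are from memory, so treat them as assumptions and verify them against the MO:DCA reference before drawing conclusions.

Code:

# afp_fdef_scan.py - walk MO:DCA structured fields in an AFP file and report
# anything that looks like an inline form definition. The structured-field IDs
# below are assumptions (check the MO:DCA reference): X'D3A8CD' = Begin Form
# Map (BFM), X'D3A8C6' = Begin Resource Group (BRG), X'D3A8CE' = Begin Resource.
import sys

CANDIDATES = {
    b"\xd3\xa8\xcd": "BFM Begin Form Map",
    b"\xd3\xa8\xc6": "BRG Begin Resource Group",
    b"\xd3\xa8\xce": "BRS Begin Resource",
}

def scan(path):
    data = open(path, "rb").read()
    pos, hits = 0, 0
    while pos < len(data) - 6:
        if data[pos] != 0x5A:              # every structured field starts with X'5A'
            pos += 1                       # skip any padding / non-MO:DCA bytes
            continue
        length = int.from_bytes(data[pos + 1:pos + 3], "big")
        sfid = data[pos + 3:pos + 6]
        if sfid in CANDIDATES:
            print(f"offset {pos:#010x}: {CANDIDATES[sfid]}")
            hits += 1
        pos += 1 + max(length, 1)          # the length field covers the rest of the SF
    if hits == 0:
        print("no form-definition structured fields found")

if __name__ == "__main__":
    scan(sys.argv[1])

If the script (or a hex editor search for those byte sequences) finds nothing, that would match what the APK532S message suggests: no inline Formdef, so ACIF falls back to looking for an external member called DUMMY.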

62
MP Server / Re: How to index fully composed AFP with ACIF
« on: January 16, 2018, 09:51:55 AM »

63
MP Server / Re: How to index fully composed AFP with ACIF
« on: January 16, 2018, 09:04:22 AM »
Hi Ed - arsafpd is VERY happy with the content! :) In fact, the TLE listing in my post is created by arsafpd.
Also, the AFP file looks fine using for example IBM's "AFP Workbench for Windows" (included in odwin client installations).

64
MP Server / How to index fully composed AFP with ACIF
« on: January 16, 2018, 08:32:46 AM »
Warning: incoming "stupid" question.
Although I have worked with CMOD for many years, I have not worked much with AFP files.
I have just received an - allegedly - fully composed AFP file, with all its resources inline.
Now, how the heck do I set up the Indexer information to properly process and index this file?
It seems that no matter what parameters I use, I end up with either APK459S or with APK210S+APK420S.
Which of the two I get seems to boil down to whether I use CONVERT=NO or CONVERT=YES.
Error message type one - you can see my settings as part of the output:
: ARS4302I Indexing started, 2174218 bytes to process
: APK415I CC=YES
: APK415I CCTYPE=A
: APK415I TRC=YES
: APK415I DCFPAGENAMES=YES
: APK415I UNIQUEBNGS=YES
: APK415I IMAGEOUT=ASIS
: APK415I INDEXOBJ=ALL
: APK415I FORMDEF=DUMMY
: APK415I RESTYPE=ALL
: APK415I INPUTDD=D:\OnDemandDirectories\arsload\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD
: APK415I OUTPUTDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.out
: APK415I INDEXDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.ind
: APK415I RESOBJDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.res
: APK210S DATA IN AN INPUT RECORD OR RESOURCE IS INVALID: A REQUIRED TRIPLET OR SELF-DEFINING PARAMETER WITH ID '85'X WAS MISSING FROM A MFC STRUCTURED FIELD.
: APK420S AN ERROR OCCURRED WHILE ATTEMPTING TO OPEN T1001143 RETURN CODE 28.

: APK412I MODULE APKSRIAX HAS RETURNED WITH RETURN CODE 255.
: APK104S DATA IN AN INPUT RECORD OR RESOURCE IS INVALID: ECF STRUCTURED FIELD IS NOT ALLOWED OR FORMS AN INVALID SEQUENCE.
: APK105I THE ERROR REPORTED ABOVE OCCURRED IN LOGICAL RECORD NUMBER 75, WHOSE SEQUENCE NUMBER IS 75, AND RESOURCE NAME IS F1FAL.
: APK441I ACIF AT IPTR923 HAS COMPLETED ABNORMALLY WITH RETURN CODE 16.
: ARS4309E Indexing failed


Error message type two:
: ARS4302I Indexing started, 2174218 bytes to process
: APK415I CC=YES
: APK415I CCTYPE=A
: APK415I TRC=YES
: APK415I CONVERT=NO
: APK415I DCFPAGENAMES=YES
: APK415I UNIQUEBNGS=YES
: APK415I IMAGEOUT=ASIS
: APK415I INDEXOBJ=ALL
: APK415I FORMDEF=DUMMY
: APK415I RESTYPE=ALL
: APK415I INPUTDD=D:\OnDemandDirectories\arsload\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD
: APK415I OUTPUTDD=NUL
: APK415I INDEXDD=E:\arstmp\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD.ind
: APK415I RESOBJDD=NUL
: APK459S INDEX NEEDED FOR THE GROUPNAME WAS NOT FOUND.
: APK441I ACIF AT IPTR923 HAS COMPLETED ABNORMALLY WITH RETURN CODE 16.
: ARS4309E Indexing failed
: ARS4324E File >D:\OnDemandDirectories\arsload\ACME.VPAK_X.kivratestfil3.kivratest3.20171214.131155.ARD<


So what am I doing wrong here? Or is the file indeed corrupted in some way?
The TLE tags part looks OK, though:
  796 BDT Begin Document ACME_VPAK                           0012 D3A8A8
  797   BNG Begin Named Page Group G0000001                 0010 D3A8AD
  798     TLE Tag Logical Element                           0019 D3A090
          TLE Fully Qualified Name Triplet (02)
          TLE  0B Attribute Name
          TLE  Name = 'KANALVAL'
          TLE Attribute Value Triplet (36)
          TLE  Value = '1'
  799     TLE Tag Logical Element                           0017 D3A090
          TLE Fully Qualified Name Triplet (02)
          TLE  0B Attribute Name
          TLE  Name = 'DIGBREV'
          TLE Attribute Value Triplet (36)
          TLE  Value = ''
  800     TLE Tag Logical Element                           002F D3A090
          TLE Fully Qualified Name Triplet (02)
          TLE  0B Attribute Name
          TLE  Name = 'KALLA'
          TLE Attribute Value Triplet (36)
          TLE  Value = 'B20171214FECDB113113800012'
  801     TLE Tag Logical Element                           001F D3A090
          TLE Fully Qualified Name Triplet (02)
          TLE  0B Attribute Name
          TLE  Name = 'PNR'
          TLE Attribute Value Triplet (36)
          TLE  Value = '197503271234'
  802     IMM Invoke Medium Map A4_S_B1                     0010 D3ABCC
  803     BPG Begin Page P0000001                           0010 D3A8AF
  804       NOP No Operation                                0011 D3EEEE
            NOP 'ISISTEST'
...
  848     EPG End Page P0000001                             0010 D3A9AF
  879   ENG End Named Group G0000001                        0010 D3A9AD
  880 EDT End Document ACME_VPAK                             0010 D3A9A8
...(more pages)


65
MP Server / Re: Alternative delete methods
« on: January 09, 2018, 07:31:55 AM »
Thank you both Justin and Nolan for your help.
Yes, I read the same documentation and noted that it might reload the data into the old table, but the documentation does not say so explicitly.

PS: I will go searching for the "jack" setting when I have some time to spare.... ;) Or maybe we will write an add-on for that too.

66
MP Server / Re: Alternative delete methods
« on: January 08, 2018, 04:06:33 AM »
Hi, very interesting thoughts.
According to another source I have (I have not verified this yet due to a lack of time), the ERM does NOT keep the segmentation intact.
Do you have a source where I can verify that this is indeed the case?
From my tests with ERM, it does not delete or reload Jack - unless you run arsmaint -D 100, but to my understanding that is not part of ERM but of base CMOD.
(Running "arsmaint -D 100 ... -G AppGroup" is another thing I have not verified yet. During my last attempt, it seemed to try to reload every single LoadID in the Application Group - NOT an option, as you understand... :) )

67
MP Server / Alternative delete methods
« on: January 02, 2018, 02:07:37 AM »
With the new GDPR regulation coming into effect in May 2018, many companies need better ways to delete their documents, especially from OnDemand.
Most regulatory agencies don't accept the "lazy delete" done with arsdoc delete and by the ODWEK API, where only the database record pointing to the document is removed, while the document data itself is left intact on disk (and other storage).
(Most of you guys here on the forum would succeed in restoring such a "deleted" document, if you had access to the database and the files on disk.)

Are there any other options?
If we do an export + reload (after removing the document that is to be deleted) and then unload the original load, a new (minor) problem appears: the segmentation order is disrupted. Example: say that for a given Application Group you have 100 or more segment tables, created sequentially from daily printouts.
Then you unload and reload a batch that, for this example, is five years old. When you reload it, it ends up in the CURRENT segment table, and the start date (START_DT) of that table is pushed back to a much earlier date. Repeat this a few times and the segmentation eventually becomes really messed up, and searches get slower because a lot of unnecessary tables have to be scanned.
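To see how scrambled the segment date ranges actually become, something like the Python sketch below could be used. It assumes the DB2 ibm_db driver, that the segment catalog is the ARSSEG system table, and that the relevant columns are called roughly AGID, TABLE_NAME, START_DATE and STOP_DATE - those names are from memory and must be verified against your own database before trusting the output; the connection string and AGID are made-up examples.

Code:

# seg_overlap.py - list the segment tables of one application group and flag
# overlapping date ranges. Table and column names (ARSSEG, AGID, TABLE_NAME,
# START_DATE, STOP_DATE) are assumptions - verify them in your own instance.
import ibm_db

def list_segments(conn, agid):
    sql = ("SELECT TABLE_NAME, START_DATE, STOP_DATE FROM ARSSEG "
           "WHERE AGID = ? ORDER BY START_DATE")
    stmt = ibm_db.prepare(conn, sql)
    ibm_db.execute(stmt, (agid,))
    rows, row = [], ibm_db.fetch_assoc(stmt)
    while row:
        rows.append(row)
        row = ibm_db.fetch_assoc(stmt)
    return rows

if __name__ == "__main__":
    conn = ibm_db.connect("DATABASE=ARCHIVE;HOSTNAME=odserver;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=odadmin;PWD=secret;", "", "")
    prev_stop = None
    for seg in list_segments(conn, 1234):          # 1234 = hypothetical AGID
        overlap = prev_stop is not None and str(seg["START_DATE"]) <= str(prev_stop)
        print(seg["TABLE_NAME"], seg["START_DATE"], seg["STOP_DATE"],
              "<-- overlaps the previous segment" if overlap else "")
        prev_stop = seg["STOP_DATE"] if prev_stop is None else max(prev_stop, seg["STOP_DATE"])
    ibm_db.close(conn)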

Are there any better methods out there to delete data from the FAA* files?
Has anyone been bold/crazy enough to investigate a solution which overwrites part of the data file on disk? (NOT recommended!)
Can you "re-open" a segment table for writing, temporarily? (As far as I know, you can only close a table and that automatically creates a new one. Of course, you could close the current table, reload the old data into a new table, and then close that table too. But that would create a whole lot of new tables over time.)
Can you forcefully move a batch of documents from one segment table to another?
Any other solution?

Please share your thoughts and solutions here. Also, if you happen to know that IBM has a solution for this up its sleeve, I'd like to know.

68
MP Server / Re: ARSDOC UPDATE
« on: December 01, 2017, 03:04:24 AM »
Sometimes you can receive strange messages when you try updates.
I would suggest that you create a COPY of the Folder you are using. In the copied folder, remove all other Application Groups.
Then try the update command using the copied folder instead.

This procedure has helped me out at least once.

69
MP Server / Restoring an IMPLIED_HOLD?
« on: December 01, 2017, 02:50:22 AM »
Hi guys,

To delete a document held by Enhanced Retention Management, you must first release all Holds on that document. Fine.
You can release any hold using for example arsdoc hold_release. This includes removing an Implied Hold, using "-l IMPLIED_HOLD".
However, what if you accidentally remove the implied hold for the wrong document? How do you reset it properly?
arsdoc hold_add does not seem to accept "-l IMPLIED_HOLD":

2017-12-01 10:20:26.804052: ARS6822I Attempting login for userid 'odadmin' on server 'ARCHIVE' ...
2017-12-01 10:20:26.834828: ARS6080I Login successful
2017-12-01 10:20:26.834960: ARS6131I Searching for hold 'IMPLIED_HOLD' ...
2017-12-01 10:20:26.913433: ARS6085E Search unsuccessful
2017-12-01 10:20:26.913715: ARS6133E Unable to get hold information.  The hold does not exist or the user does not have permission to access the hold.
2017-12-01 10:20:26.914632: ARS6026I arsdoc completed.


So. We don't want to export the entire batch and reload it again.
Can we "cheat" in a "half nice" way and run:
arsdoc update ... -i "<suitable SQL to find the document to update>" -n Lockdown=<current value of Lockdown + 16384>
?
Or how else do we restore the Implied Hold to the document?
(As far as I know, Implied Holds are only stored in the "Lockdown" field, as the value 16384, not in any other ARSHOLD* table. Let me know if this is wrong - if so, the suggested action above will obviously not work.)
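One detail if the Lockdown route is attempted: blindly adding 16384 would corrupt the value on a row where the implied-hold bit happens to be set already, so it is safer to treat Lockdown as a bit mask and OR the bit in. A tiny sketch of that arithmetic, assuming (as stated above, and unverified) that the implied hold lives in bit 16384 = 0x4000:

Code:

# Treat Lockdown as a bit field and set the (assumed) implied-hold bit 0x4000.
# Adding 16384 blindly would break the value if the bit were already set;
# OR-ing it in is idempotent.
IMPLIED_HOLD_BIT = 0x4000            # 16384, per the assumption in the post above

def with_implied_hold(lockdown: int) -> int:
    return lockdown | IMPLIED_HOLD_BIT

print(with_implied_hold(0))          # 16384
print(with_implied_hold(16384))      # 16384 - unchanged, bit already set
print(with_implied_hold(3))          # 16387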


70
Windows Client / Tracing the Windows Client
« on: November 20, 2017, 04:39:48 AM »
Is there a "secret" way to activate trace logging for the Windows Client? Or at least more extensive logging to the Windows Event Log?

71
Report Indexing / Re: ARSLOAD Syntax for Using File Name to Index
« on: June 29, 2017, 04:32:59 AM »
I once asked IBM a similar question (or was it here on the ODUG Forum?), and I was told that -B simply does not work when you have Generic indexing selected. D'oh.  :o
That did not quite make sense to me, but the ability to handle "-b" is apparently only built into the other indexing engines.

72
No, it doesn't care about the retention time. It simply grabs the index data row from the DB table and shreds it. (The document itself is left on disk)

It sounds to me like what you really want to do is run arsmaint -t <date>. I don't know of a smart way to check that via the Java API, but you could certainly check the database for expiry dates and only run arsdoc delete if the test evaluates to TRUE.
But normal document expiry is best handled by running arsmaint as described in the documentation.
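As a rough illustration of the "check the database first" idea, the sketch below (Python with the ibm_db DB2 driver) looks up a document's segment date and only signals that the delete may proceed when the date is older than the retention cutoff. The table name, column names and key field are hypothetical placeholders - the real names depend entirely on your application group - and the actual arsdoc delete invocation is left to whatever command you already use.

Code:

# expiry_check.py - only allow a delete when the document's segment date is
# past the retention cutoff. The table and column names are hypothetical; they
# depend entirely on your own application group definitions.
import datetime
import ibm_db

RETENTION_DAYS = 7 * 365                      # example: roughly 7-year retention

def ok_to_delete(conn, doc_id):
    sql = "SELECT SEG_DATE FROM ODADMIN.INVOICES WHERE DOC_ID = ?"   # hypothetical
    stmt = ibm_db.prepare(conn, sql)
    ibm_db.execute(stmt, (doc_id,))
    row = ibm_db.fetch_assoc(stmt)
    if not row:
        return False                          # no such document
    seg_date = datetime.date.fromisoformat(str(row["SEG_DATE"]))
    cutoff = datetime.date.today() - datetime.timedelta(days=RETENTION_DAYS)
    return seg_date <= cutoff

if __name__ == "__main__":
    conn = ibm_db.connect("DATABASE=ARCHIVE;HOSTNAME=odserver;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=odadmin;PWD=secret;", "", "")
    if ok_to_delete(conn, "12345"):
        print("document expired - run your usual arsdoc delete here")
    else:
        print("document still within retention - skipping the delete")
    ibm_db.close(conn)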

73
Hi and sorry for the delay in replying.

Well, yes, you "kind of" need to install the certificate on the ICN machine, which is the CMOD client in this case:

Quote
Both ondemand.kdb and ondemand.sth files need to be placed on the workstation where the Content Manager OnDemand clients are installed. Download both files to the config subdirectory under the client installation directory.

There seems to be some information missing on the Linux page, but I suggest you look at the full procedure starting with "To create a CA-signed digital certificate, do the following steps:". For example, the description for Windows-based OnDemand seems to keep all information in one page:
https://www.ibm.com/support/knowledgecenter/SSEPCD_9.5.0/com.ibm.ondemand.installmp.doc/dodww067.htm
but I suppose the descriptions for AIX or Solaris are technically more similar to Linux.
I suggest you read through the entire procedure on the Windows page above, then check that you have correctly executed all the steps for creating and verifying your CA certificate:
https://www.ibm.com/support/knowledgecenter/en/SSEPCD_9.5.0/com.ibm.ondemand.installmp.doc/dodlx152.htm
and when that is done, complete the CA cert installation for Linux:
https://www.ibm.com/support/knowledgecenter/SSEPCD_9.5.0/com.ibm.ondemand.installmp.doc/dodlx120.htm
and make sure the key files are installed on the ICN machine.

As usual :), more info can often be found by checking the same documentation page for another OS/platform.
I hope this helps you - please let us all know how you managed once you get it to work!

74
MP Server / Re: Help Loading PDF Files
« on: February 17, 2017, 07:11:03 AM »
Bud Paton of IBM has created an excellent PowerPoint presentation which tells you the fastest way to load documents.
I suppose you have the metadata in a separate file?
If so, I would create a small script that builds the .ind file out of that metadata. If I remember correctly, it is faster to point to each individual PDF file instead of concatenating them into one big .out file and using offset + length. (Just use GROUP_OFFSET:0 and GROUP_LENGTH:0 to load the entire file.)
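For anyone who wants a starting point, a minimal sketch of such a script is below: it reads a semicolon-separated metadata file and writes one generic-index group per PDF. The field names and the metadata layout are made up for the example, and the generic-index keywords (COMMENT, CODEPAGE, GROUP_FIELD_NAME, GROUP_FIELD_VALUE, GROUP_OFFSET, GROUP_LENGTH, GROUP_FILENAME) should be checked against the Indexing Reference for your release.

Code:

# build_ind.py - build a CMOD generic index (.ind) file from a metadata file.
# Each input line is assumed to look like:  invoice_no;cust_name;doc_date;pdf_path
# The field names and layout are hypothetical; adjust them to your own metadata.
import csv
import sys

FIELDS = ["invoice_no", "cust_name", "doc_date"]      # hypothetical AG fields

def build_ind(metadata_csv, ind_path):
    with open(metadata_csv, newline="") as src, open(ind_path, "w", newline="\n") as ind:
        ind.write("COMMENT: generated generic index file\n")
        ind.write("CODEPAGE:819\n")
        for row in csv.reader(src, delimiter=";"):
            values, pdf_path = dict(zip(FIELDS, row)), row[len(FIELDS)]
            for name in FIELDS:
                ind.write(f"GROUP_FIELD_NAME:{name}\n")
                ind.write(f"GROUP_FIELD_VALUE:{values[name]}\n")
            # offset/length 0 => load the whole file named below as one document
            ind.write("GROUP_OFFSET:0\n")
            ind.write("GROUP_LENGTH:0\n")
            ind.write(f"GROUP_FILENAME:{pdf_path}\n")

if __name__ == "__main__":
    build_ind(sys.argv[1], sys.argv[2])

Run it as something like python build_ind.py metadata.csv batch1.ind, then compare the load times against the PDF Indexer approach on a small batch, as suggested further down.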

If you have PDF Indexer fields defined in the form itself, well... that's an entirely different story.
If the PDF indexing is not fast enough, it MAY be possible to use that little tool - what's it called again, arspdump? - to convert all the text data in the PDF to a TXT representation. With a lot of luck you could parse that with a script, which may or may not build a generic indexing file (.ind) faster than running the PDF Indexer.

I suggest you make small batches of maybe 100 docs and check which method is fastest.
(I always ask the guys who create the PDF files to create an .ind file as well, while they're at it... Or I tell them to include the data as PPD Page-Piece Info.) :)

75
MP Server / Re: Delay during unload in 8.5
« on: February 14, 2017, 05:48:24 AM »
A little update.
There are some 280 million rows in the ARSHOLDMAP table as well.

When the customer has "Use ERM?" set to "Yes", unloading one Load ID takes about 3 minutes.
When the customer turns this setting off (to "No") and continues to unload from the same Application, each unload takes about 3 SECONDS.
A few samples from the System Log:
...
17-02-09 11:00:29 84 Name(APPGR2) Agid(6715) LoadId(5FAA-16870-16870) Rows Deleted(0) SM UnLoad Ready(1)
17-02-09 11:03:33 84 Name(APPGR2) Agid(6715) LoadId(6FAA-16926-16926) Rows Deleted(49) SM UnLoad Ready(1)
17-02-09 11:10:29 84 Name(APPGR2) Agid(6715) LoadId(7FAA-16926-16926) Rows Deleted(1) SM UnLoad Ready(1)
("Use ERM?" for APPGR2 is switched from "Yes" to "No" here)
17-02-09 11:11:51 84 Name(APPGR2) Agid(6715) LoadId(8FAA-16926-16926) Rows Deleted(1) SM UnLoad Ready(1)
17-02-09 11:11:52 84 Name(APPGR2) Agid(6715) LoadId(9FAA-16926-16926) Rows Deleted(37) SM UnLoad Ready(1)
17-02-09 11:11:54 84 Name(APPGR2) Agid(6715) LoadId(10FAA-16926-16926) Rows Deleted(33) SM UnLoad Ready(1)
...

It seems like the "Use ERM?" = No switch bypasses a whole lot of processing, and that the slow deletes are due not to the number of Loads itself but to the high number of LoadIDs in the ARSHOLDMAP table. That table, however, is indexed on 8 of its 10 columns, so how could it get any better?
I will try to run some statistics and automatic performance analysis on the DB to see if it recommends any new indexes.
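For reference, refreshing the statistics on that table can be done from Python with the ibm_db driver by calling DB2's ADMIN_CMD procedure, roughly as sketched below; the schema name ODADMIN and the connection details are assumptions, and on other database back-ends the statement would of course differ.

Code:

# runstats_arsholdmap.py - refresh DB2 statistics for the ARSHOLDMAP table via
# the ADMIN_CMD procedure. The schema name and connection details are assumptions.
import ibm_db

conn = ibm_db.connect("DATABASE=ARCHIVE;HOSTNAME=odserver;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=odadmin;PWD=secret;", "", "")
ibm_db.exec_immediate(
    conn,
    "CALL SYSPROC.ADMIN_CMD("
    "'RUNSTATS ON TABLE ODADMIN.ARSHOLDMAP "
    "WITH DISTRIBUTION AND DETAILED INDEXES ALL')"
)
ibm_db.close(conn)
print("statistics refreshed - re-check the unload timings afterwards")

The db2advis design advisor can then be fed the slow unload statement to see whether it actually recommends any additional indexes.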

Pages: 1 2 3 4 [5] 6 7