OnDemand User Group
Support Forums => z/OS Server => Topic started by: plb3945 on August 19, 2010, 02:55:13 AM
-
Hello,
I would like to know the maximum file size that can be loaded into CMOD by ARSLOAD. At the moment we have many files to load into CMOD, ranging from 4 MB to 10 MB, and one 4 MB file takes 4 hours to index and load. Is that normal for this size? Do you have a document that indicates the total processing time for loading files of different sizes?
Thanks for your reply
Best regards
Philippe
-
Hello Philippe,
That's an interesting question, but the problem with such a question is that it raises new questions:
- What kind of document are you talking about?
- What kind of indexer are you using?
- How many fields are you indexing?
- How many pages are in this file?
- If ACIF, what kind of indexing are you using?
Without these answers, it is difficult, if not impossible, to answer you.
In my experience, I can archive several hundred thousand documents (in PDF) with the Generic indexer in 20-40 minutes, with a total size of around 1 or 2 GB.
If you could give us more information, maybe we could offer some comparisons, and perhaps help you optimize the ACIF indexer (if that is the indexer you are using, and if it needs optimizing!)
Cheers,
Alessandro
-
Also, depending on how compressible the data is, you truly may be able to "fit 5 pounds in a 3 pound bag."
-
Hello,
To reply to AlessandroPerucchi :
What kind of document are you talking about? TXT (user defined)
What kind of indexer are you using? GENERIC
How many fields are you indexing? 2
How many pages are in this file? I don't have this information; there are several applications with different files.
If ACIF, what kind of indexing are you using? NO ACIF
To reply to Ed Arnold:
Data compression: OD77
Resource compression: OD77
Compressed object size(K): 100
Thanks for your help
Best regards
Philippe
-
From everything I've heard so far, there is a serious problem. There's no way that running such a modest load should take that long.
What versions are you using (DB2 & CMOD)? Can you give us a sample of the generic index file contents? How many 'records' or 'documents' are inside this text file that you're loading, or is it one document per file?
The times I've seen painfully slow performance like this in the past, it's been either a software bug, or one of the components (DB2, TSM, or the OS) severely constrained in CPU, memory, or disk.
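For reference, a minimal Generic indexer index file for a two-field load looks roughly like this (the field names and values here are made up as placeholders, but the keywords are the standard ones from the CMOD Indexing Reference):

```
COMMENT: sample generic index file, one group per document
CODEPAGE:819
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:123456
GROUP_FIELD_NAME:report_date
GROUP_FIELD_VALUE:2010-08-19
GROUP_OFFSET:0
GROUP_LENGTH:2048
```

One GROUP_OFFSET/GROUP_LENGTH pair per document; if yours deviates much from this shape, that itself would be useful to know.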
-JD.
-
Philippe - I agree completely with what Justin has said.
FYI - it's not the compression type in use that determines how much compression you'll get so much as it is the "compressibility" of the document. (A huge document of all zeroes compresses very well; a small document of random data - not so well.)
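To see how much the content itself matters, here's a quick illustration in Python using zlib (CMOD's OD77 compression differs in detail, but the principle is the same):

```python
import os
import zlib

zeros = bytes(1_000_000)             # 1 MB of zeroes: extremely repetitive
random_data = os.urandom(1_000_000)  # 1 MB of random bytes: no patterns to exploit

# Repetitive data shrinks to a tiny fraction of its original size.
print(len(zlib.compress(zeros)))

# Random data barely compresses; the result stays near 1 MB.
print(len(zlib.compress(random_data)))
```

Same compressor, same settings, wildly different results - which is why there's no fixed ratio you can assume for your loads.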
Let us know if and how you have resolved your situation.
Ed