Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Alessandro Perucchi

Pages: 1 2 3 4 [5] 6 7 8 9 10 ... 65
61
MP Server / Re: Upgrade from 8.5 to 10.1
« on: October 13, 2017, 02:29:45 PM »
From my experience upgrading CMOD, you can go from V8.3 -> V9.5 without any stop anywhere. I've been there and done it.
You can basically go from V8.5 -> V10.1 without any problems too (I haven't tested that exact jump, but I am sure that V8.3 -> V10.1 should be OK as well).

Of course if you have RDF... then you must stop at V9.0 first, like you've experienced.

Having worked with IBM, I can say that some IBMers are morons when it comes to CMOD: they think they know the beast, when in fact they know nothing. So when you hear one of them say that you need to stop at every version, you know that he doesn't know what he is talking about...

There was a mandatory stop at version 7.1.2.8 because of a change concerning users, and it was clearly explained in the upgrade documentation.
Now if you read the upgrade path to CMOD V10.1 (http://www-01.ibm.com/support/docview.wss?uid=swg21446135), you can basically go from V9.0 to V10.1.
And having tested it, you can go from V8.5 -> V10.1 without any problems...

Now what is a problem, and could cause a LOT of headaches, is not the server part but the client part.
Depending on what you have, you might need to recode part of your Java programs, deploy the CMOD client, or you may still be on eClient / WEBi / arswww.cgi... and because of that, you need to be extra careful with the version of the server. The client/server compatibility matrix is your friend for knowing where to go, and when to stop...

I had a customer where I could not upgrade to CMOD V9.5 because they had a lot of legacy code; the highest I could go with them was CMOD Server V9 with ODWEK V8.5, and I was using WEBi because they still had IE 6...

So the problem with an upgrade is not CMOD itself... it is the whole infrastructure around it that dictates what you can or cannot do.

62
OD/WEK & JAVA API / Re: ICN cannot access odwek tmp directory
« on: April 30, 2017, 06:29:34 AM »
In the ICN "System" or "General" preferences (I don't remember exactly which), you should have an option to set up the temporary directory of CMOD.

63
In this situation, the Oracle RAC must be remote.

That is no problem; just read this documentation as a reference: http://www-01.ibm.com/support/docview.wss?uid=swg27019582

And the cache storage, external storage, temporary file path, print file path and data load directory must be shared.
And if they are shared, which type of storage is supported in this situation (NAS, SAN)?

What do you mean by shared?
Every customer I have worked with was mainly using SAN.
I know that some (rare ones) were using NFS.

So if by shared you mean SAN, then yes, it is possible.

As long as you can ensure stability, everything is fine.

64
Hello Valentin,

if you look at this post, part of your question is answered:

http://www.odusergroup.org/forums/index.php?topic=2166.0

Concerning the number of documents per Application, you need to look at the segment table directly, find out which field is the "application ID" field (maybe agid / appid), and run something like SELECT appid, COUNT(*) FROM SEGMENT_TABLE GROUP BY appid
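If you cannot run SQL directly, the same per-application counting can be sketched with plain Unix tools on an exported copy of the table. Everything below (file name, column layout, values) is made up for illustration:

```shell
# Hypothetical sketch: count rows per application ID from a CSV export of
# a segment table. The real table name and its appid column must be
# checked in your own database first.
cat > /tmp/segment_export.csv <<'EOF'
APP1,doc_a
APP1,doc_b
APP2,doc_c
EOF

# Same idea as "SELECT appid, COUNT(*) ... GROUP BY appid":
cut -d, -f1 /tmp/segment_export.csv | sort | uniq -c | sort -rn
```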

I hope that helps.

65
You must know what file format is behind your application; then you will know what file format you have.
With the ODWEK Java API, you can find that out quite easily.
With the command ARSDOC GET, you can't... except by parsing the metadata to find out which Application the document comes from, and then parsing the Application configuration to find the MIME type of the corresponding file (OUT).

66
iSeries / Re: Compression - Which File Types Should be Compressed?
« on: April 22, 2017, 06:46:54 AM »
Well, if you don't know whether compression will be worth doing, my advice is to take some sample files, try each compression with the command "arsadmin compress", and compare the compression ratios.

Only then will you have a clear view of whether you need compression at all and, if you do, which one provides the best compression for your files.

My advice is to take a few samples of each file type, not just one, in order to get a better overview of the gain/loss of each compression algorithm.
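The comparison loop itself is trivial; since I cannot run "arsadmin compress" here, this sketch uses gzip as a stand-in to show the shape of the measurement. Replace the compressor call with each arsadmin compression you want to evaluate:

```shell
# Sketch of the sampling idea, with gzip standing in for the real
# "arsadmin compress" runs (which need a CMOD environment).
sample=/tmp/sample.dat
yes 'INVOICE 0001 CUSTOMER ACME AMOUNT 100.00' | head -n 1000 > "$sample"

orig_size=$(wc -c < "$sample")
comp_size=$(gzip -c "$sample" | wc -c)
ratio=$((100 * comp_size / orig_size))

echo "original=${orig_size} bytes, compressed=${comp_size} bytes (~${ratio}% of original)"
```

Run it once per candidate algorithm and per file type, then compare the ratios side by side.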

67
wawad,

I have read the documentation you provided, and it seems pretty clear to me...

You must have the following to be able to run active/active high availability:

First Step)

Prerequisite: one of the following databases:
- DB2 pureScale
- Oracle RAC
- SQL Server cluster

If you don't have one of these, then you cannot use the new functionality. So before doing anything else, set up your database accordingly.

Second Step)
Configuration of zookeeper

If you follow exactly what the documentation says, then you should be OK.

I will paraphrase here, since it seems unclear to you:

You must have at least 3 servers for ZooKeeper; it must be an odd number of servers >= 3, so 3, 5, 7, 9 or more... 3 is the advised minimum.

Let's say you have 3 servers: zoo1, zoo2 and zoo3.

On each server, you must do the following:

- Download ZooKeeper from https://zookeeper.apache.org/
- Unzip the file into a directory (e.g. /opt/zookeeper)
- Copy the file zoo_sample.cfg to zoo.cfg in the directory /opt/zookeeper/conf
- Modify the new zoo.cfg to suit your taste and servers
- Add the following 3 lines to zoo.cfg:

server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

- In zoo.cfg you have an entry for the option 'dataDir'. In that directory, you will need to create a file called 'myid', and write an ID inside this file, nothing more.
The ID to write is easy to find... in the previous step you have lines of this form:

server.ID=zoo1:2888:3888

The ID is what you must put in your 'myid' file, in the directory pointed to by the 'dataDir' option of the zoo.cfg file.

So, to make it crystal clear:

in server zoo1, you need to write the file 'myid' with the content
Code: [Select]
1
in server zoo2, you need to write the file 'myid' with the content
Code: [Select]
2
in server zoo3, you need to write the file 'myid' with the content
Code: [Select]
3
- Go to the directory /opt/zookeeper/conf and run the following commands on Unix/Linux:

   $ export ZOOBINDIR=$PWD
   $ . "${ZOOBINDIR}"/zkEnv.sh
   $ java -cp ${CLASSPATH} org.apache.zookeeper.server.auth.DigestAuthenticationProvider userid:password

  You can use any user / password you want; it is not a user/password that exists in CMOD, nor a Unix credential.
  BUT you must keep them the same on all your ZooKeeper servers.

- Start your ZooKeeper server, from the directory /opt/zookeeper/bin:

   $ ./zkServer.sh start
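To make the zoo.cfg / myid relationship concrete, here is a small sketch that generates both files for the three-server example, using /tmp instead of /opt/zookeeper (hostnames and paths are the example ones; adapt them to your servers):

```shell
# Sketch only: generate the server.N list and the per-server 'myid' files
# for the zoo1/zoo2/zoo3 example. On real machines, dataDir would be the
# same local path on each server; here each gets its own /tmp directory.
base=/tmp/zk-demo
rm -rf "$base"

id=1
for host in zoo1 zoo2 zoo3; do
  datadir="$base/$host/data"
  mkdir -p "$base/$host/conf" "$datadir"

  # Every server gets the SAME zoo.cfg content (dataDir + server.N list)...
  {
    echo "dataDir=$datadir"
    n=1
    for h in zoo1 zoo2 zoo3; do
      echo "server.$n=$h:2888:3888"
      n=$((n + 1))
    done
  } > "$base/$host/conf/zoo.cfg"

  # ...but each server's 'myid' contains only its OWN number.
  echo "$id" > "$datadir/myid"
  id=$((id + 1))
done

cat "$base/zoo2/data/myid"
```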


Last Step)
Once you have done that for each ZooKeeper server, you can do the following for CMOD:

- For each CMOD server that you have, you need to update the CMOD stash file (the one defined in ars.ini with the option SRVR_OD_STASH), and in each one you need to use the same user / password that you used as the ZooKeeper credential, with the following CMOD command:

    $ arsstash -a 10 -s <stash_file> -u userid

- Then you can (re)start your arssockd/arsobjd


And if everything is working correctly, you can access any of the library servers to view / arsload / administer your CMOD.

I hope what I said is clearer than the official documentation. If not, then please ask !!


68
MP Server / Re: CMOD 8.5 on RHEL 6.2 with remote Oracle 11g database
« on: April 22, 2017, 06:09:56 AM »
Ok...

I also live in Switzerland, and I can tell you, codepages are a nightmare, and you DO NOT want to play with them, especially if you are new to the field.
I've seen horror stories with my customers about that aspect... What I will tell you now is the experience I have gathered with this topic and CMOD over the last 17 years.

If the documentation says that once set, this setting must NOT be changed, then it means that you must NOT change this setting unless you know what it means and the impact behind the scenes.

Codepages in any system are a NIGHTMARE, especially in systems where you need to convert from one to another to another to another... CMOD in that regard can be quite difficult to debug once you enter that field... and if you don't know anything, the nightmare becomes HELL.

That said, what is important for CMOD is the following:

1) Database codepage.
    When you create your database, you must specify a codepage (Oracle, DB2, SQL Server, MySQL, MariaDB, ...).
2) When the DB was created with CMOD.
    Was it before or after V8.5.0.3, i.e. before or after the introduction of the parameter ARS_ORIGINAL_CODEPAGE?
    If it was before, CMOD was writing to the database with the DB codepage you had set up at that time.
    If it was after, CMOD normally writes UTF-8 into the database, independently of the database codepage.

This means that if you have a database that was created with codepage WIN1252 by CMOD 8.3 on Windows using Oracle, then your codepage is 1252, and during the migration you will need to use ARS_ORIGINAL_CODEPAGE=1252 (for example).
That way CMOD will be able to continue writing with this codepage.

If you clone your database without looking at the original codepage, and choose to use UTF-8 (which was not the original), then you are almost certainly in trouble.
You might think you have migrated or cloned your database correctly, but in fact you have corrupted it, because CMOD does not write international characters to the database the way one would expect. Normally you would need to import each row separately, taking care of the codepage at that level, not at the DB level.
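A tiny, CMOD-independent illustration of why this corrupts data: the same character simply has different byte values in different codepages, so bytes written under one codepage and re-read under another no longer mean the same thing.

```shell
# The letter 'é' (U+00E9) encoded two ways; \303\251 are its UTF-8 bytes
# written as octal escapes, so this works regardless of terminal encoding.
utf8=$(printf '\303\251' | od -An -tx1 | tr -d ' \n')
latin1=$(printf '\303\251' | iconv -f UTF-8 -t ISO-8859-1 | od -An -tx1 | tr -d ' \n')

echo "UTF-8 bytes of e-acute:   $utf8"
echo "Latin-1 bytes of e-acute: $latin1"
```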

So my advice, especially if you are new to CMOD is the following:

- Don't play with codepages and CMOD; you will lose
- Clone your original database with EXACTLY the same codepage it was originally set to
- If CMOD tells you to use ARS_ORIGINAL_CODEPAGE=923, then STICK with it. Don't play around; again, you will lose

Once that works, then if you are good with databases and codepages, you can try to play around and see how to convert from your current codepage to a new one... but that is a huge amount of work: it means checking every bit and byte of your database to ensure that everything works as expected (old data, new data, etc.), every interface to CMOD, every client of CMOD... you will need to test everything to make sure that what you did didn't break anything.

I hope I could give you a small view of that creepy aspect of CMOD... and that I have answered your question.



69
MP Server / Re: Autostart instances on Linux
« on: March 13, 2017, 03:25:50 AM »
Hi,

your question has nothing to do with being new to CMOD. It has to do with knowledge of your operating system.

If you know how to start / stop CMOD manually, then with that knowledge you should be able to auto start / stop your instance on your operating system.

In your case, you are using GNU/Linux RedHat V6.5, so look at this link: https://blog.hazrulnizam.com/create-init-script-centos-6/ It explains how to create a service in a RedHat / CentOS environment.

Then for CMOD, as you already know, but I will repeat it since you are new to CMOD:
in order to start CMOD, you must ensure 2 things:

1) The CMOD database is already started
2) You are connected with the CMOD Instance owner user

How to start DB2?
You have 2 ways, normally with the CMOD instance owner:
CMOD way) arsdb -gkv -I <cmodinstance>
DB2 way) db2start

If you are not sure which user needs to start DB2, check the CMOD configuration file corresponding to ars.cfg and search for the key DB2INSTANCE; this gives you the DB2 instance owner, and you run "db2start" as this user.
Even better, check with your DB2 DBAs, if you have them, to find out how DB2 is started...

How to start CMOD?
If DB2 is started, and this is a PREREQ... then you can simply run the following command as the CMOD instance owner:

arssockd -S -I <cmodinstance>

How to find the name of your instance owner? Look in the file ars.ini for the key SRVR_INSTANCE_OWNER; this gives you the name of the Linux user which is the CMOD instance owner.
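Both lookups are plain key searches; here is a sketch against mocked-up files in /tmp (the sample values and the ars.ini stanza are made up; point the paths at your real ars.cfg and ars.ini instead):

```shell
# Sketch: pull DB2INSTANCE from an ars.cfg-style file and
# SRVR_INSTANCE_OWNER from an ars.ini-style file. Sample content only.
cat > /tmp/ars.cfg <<'EOF'
DB2INSTANCE=archive
ARS_TMP=/arstmp
EOF
cat > /tmp/ars.ini <<'EOF'
[@SRV@_ARCHIVE]
HOST=localhost
SRVR_INSTANCE_OWNER=archive
EOF

awk -F= '$1 == "DB2INSTANCE"         {print $2}' /tmp/ars.cfg
awk -F= '$1 == "SRVR_INSTANCE_OWNER" {print $2}' /tmp/ars.ini
```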


So basically a script to start CMOD could be:

Code: [Select]
arsdb -gkv -I MYCMOD
arssockd -S -I MYCMOD

and to stop it

Code: [Select]
arssockd -T -I MYCMOD
arsdb -hv -I MYCMOD

I hope you can now implement your auto start / stop for CMOD according to your company rules / requirements.


70

Well, if you do that (arsdoc delete OR ODFolder.deleteDocs), it will only delete the index(es) from the database, and that's it.
No retention handling, nothing...

The object behind them in the storage manager (cache or TSM or ...) will not be deleted. This means that even "arsmaint -t <date>" will not delete the orphaned objects in the storage manager.
Removing such an orphaned object would be a "manual" operation.

71
MP Server / Re: Archiving large Audio/Video files
« on: February 21, 2017, 08:45:07 AM »
Hi Alessandro, good to see you are still around CMOD 8)
Looking at the archiving of large files: there is something about 123FAAA$ files, but as I understood it, these are to be used with AFP (only?); this also seems to refer to the tick box in the APP load setup called "Large Object". But this could be just my misunderstanding of the respective field and the surrounding process.

Cheers,
 N.

The $ at the end of the DOC_NAME means that the object uses the "Large Object" functionality. This is used for Line Data or AFP.

72
Content Navigator / Re: CIWEB1008 Repository not available in ICN
« on: February 21, 2017, 08:43:00 AM »
Stupid question, but did you create a database and have the ICN tables created in it?

73
Content Navigator / Re: ICN and CMOD on mixed platforms
« on: February 21, 2017, 08:40:33 AM »
Not documented in the Navigator installation https://www.ibm.com/support/knowledgecenter/SSEUEX_2.0.3/com.ibm.installingeuc.doc/eucde013.htm

Yes, and?
You are installing separate components, and you MUST read the documentation of each component.
That's how IBM handles it... instead of repeating everything everywhere, they expect people to read the appropriate documentation.
That's why I provided you the links to the corresponding documentation, to show you that everything is there.
Again, you must read the whole thing, not just what you want to.

If you are not happy with the current documentation, please open a PMR and explain how to improve it.

74
MP Server / Re: ARSMAINT slowness
« on: February 14, 2017, 07:01:02 AM »
I am executing this arsmaint:

arsmaint -n 40 -x 60 -cdeimrsv

executed every day at 6 AM and 6 PM

So basically, you are using arsmaint with every possible option at once?

- Expiration of indexes (-d)
- Expiration of cache (-c, -n, -x)
- Migration cache to TSM (-m)
- Migration of indexes tables to TSM (-e)
- Expiration of indexes tables (-i)
- Reorg of indexes tables (-r)
- Check of cache + Statistics (-s, -v)

So as you can see, you are doing 7 different actions with only 1 command...
And then you ask why it is slow?

I cannot answer your question, but I can help you see which of these 7 actions takes the most time.
THEN and only THEN is it possible to have an idea of what the cause could be.

From what I see, you don't need options -e and -i, since you are not using "Migration of Indexes".

Here is a little script that could help you do that:

Code: [Select]
#!/bin/ksh

echo "$(date) - Start migration Cache to TSM"
arsmaint -m
echo "$(date) - End migration Cache to TSM"

echo "$(date) - Start Expiration Cache"
arsmaint -c -n 40 -x 60
echo "$(date) - End migration Cache"

echo "$(date) - Start checking Cache integrity"
arsmaint -sv
echo "$(date) - End checking Cache integrity"

echo "$(date) - Start Expiration of Indexes"
arsmaint -d
echo "$(date) - End Expiration of Indexes"

echo "$(date) - Start Database Reorg"
arsmaint -r
echo "$(date) - End Database Reorg"

I leave it to you to put that script into a nicer form and send the output to some log files.
It will do exactly what you are doing right now, but you will be able to see each step easily and, more importantly, how much time each step consumes.
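If you want the elapsed time per step without copying the echo lines around, a small wrapper function does it. The 'sleep' demo below stands in for the real arsmaint calls, which obviously cannot run outside a CMOD environment:

```shell
# Sketch: a wrapper that timestamps and times each maintenance step.
# In the real script you would call it like:
#   run_step "Cache migration to TSM" arsmaint -m
run_step () {
  label=$1; shift
  start=$(date +%s)
  echo "$(date) - Start $label"
  "$@"
  rc=$?
  end=$(date +%s)
  echo "$(date) - End $label (rc=$rc, elapsed $((end - start))s)"
  return $rc
}

run_step "Demo step" sleep 1
```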

The other way to do it would be to look into the "System Log", search for the userid "ARSMAINT", and analyse the log entries to understand each step, what it is doing, and what is taking so long.

My guess would be that the option -m is taking a lot of time, but that's only a guess... only by measuring can you be sure.


Concerning the 3 GB files... what are these files? Is it 1 file with 1 index? Or is it 1 file that ends up being indexed by CMOD with the ACIF indexer?
Or is it a file that comes with a generic index, which CMOD splits into multiple documents?

Can you check the size of the object in the cache? Is it still 3 GB, or is it less?


Quote
Lastly, I need your advice on choosing between CMOD 9.5.0.3 and 9.5.0.7: which have you found to be more stable, with fewer bugs?

You tell me... here are the bug corrections between 9.5.0.3 and 9.5.0.7:

Code: [Select]
    2.2.9.5.0.4) Release (9.5.0.4)
      PI45715 - BURST COLUMN IN JES OUTPUT QUEUE NOT SHOWING VALUE OF YES
      PI46497 - THE MESSAGE ARS1159E DOES NOT SHOW THE NAME OF THE MISSING
                OBJECT
      PI46630 - Julian dates not calculated correctly for leap years
      PI46714 - CD-ROM ARSDD FAILS WITH ERROR "UNABLE TO REGISTER THE RESOURCE"
      PI48540 - arsload fails to load a generic index file w LO support
      PI47061 - ODF - ERRONEOUSLY MARKING THE DISTRIBUTION COMPLETE
      PI49820 - Upper case .IND in file causes ARSLOAD to fail
      PI52114 - ARSEXOAM FAILING AFTER UPGRADE TO CONTENT MANAGER ONDEMAND FOR
                Z/OS VERSION 9.5
      PI52282 - Use of Document Size in an Application Group can cause
                duplicate rows to be loaded
      PI56823 - TSM filespace prefixed w/ARCHIVE for default ARCHIVE instance

    2.2.9.5.0.5) Release (9.5.0.5)
      PI53798 - Load fails with ARS1127E message
      PI55412 - Load fails with ARS1176E message after applying the fix for
                PI52559
      PI57099 - Crash may occur when segment date is date(old style) and
                incorrect format is used to specify the segment date range in
                the -S option
      PI57664 - ACIF seg fault when collecting > 65k overlays in a res file

    2.2.9.5.0.6) Release (9.5.0.6)
      PI51782 - ODF Distributions not processed due to ARS1607E error in
                ARSRPSUB
      PI58677 - ODF Manifest not printing properly
      PI59581 - Invalid parameter list passed to the arslog exit
      PI59697 - ODF is changing the OnDemand report LRECL from 673 to 32753,
                causing a PAGEDEF/FORMDEF transform problem
      PI60252 - CMOD Document graphical annotation attributes not supported
      PI60746 - Character set name changed in MCF structured field
      PI62112 - PDF indexing using PDF metadata is changing metadata values
                of the ingested document
      PI62180 - Segmentation fault in arspdoci
      PI62221 - Cannot load into the system documents that are produced under
                UNIX related systems
      PI62797 - ARSLOAD running as a started task (STC) terminates with
                ARS4328E ARSSAPIR failed
      PI62843 - Error 125 with errno 183 "Unable to create symbolic link from
                file"
      PI63284 - ARSMAINT delete reports more deleted rows in message 84 than
                were loaded

    2.2.9.5.0.7) Release (9.5.0.7)
      PI65405 - When using arsload at time zone change (daylight saving time)
                in UTC+0, the time stored in database is invalid. Therefore
                documents cannot be retrieved
      PI65699 - Separate sysout banner dataset allocated with wrong parms
      PI68622 - PDF Indexer indexes each page as a separate document using PPDs
      PI64639 - 9.5 PDF Indexer performance degrades when PDF document is very
                large
      PI67507 - Defunct process buildup in arsload
      PI70021 - PDF Indexer skipping Adobe PDF/UA records
      PI70830 - arsload throws ARS4091I about absence of PDF Indexer although
                the application specifies XENOS Indexer

Do you want corrected code which is new, or code which is 18 months old?


And in addition, if you have a problem with 9.5.0.3, IBM Support will first ask you to upgrade to 9.5.0.7 and check whether you still have the problem; if you do, they will create a fix based on 9.5.0.7...

So in my own experience... don't try to be smart here, simply go to the latest version possible. If you hit a problem on the latest FP, IBM will be way, way more reactive in solving it asap.
If you are using an old version, they first need to build an environment for that version to test... and then they will check at some point whether it is also a problem with the latest fix pack...
meaning more time lost, plus all the discussion about trying the latest FP on your side, etc...
So if you are upgrading from V9 to V9.5... there is no hesitation: just go to the latest V9.5 FP.

I know for a fact that some integrators and some customers don't trust FPs and stay with the vanilla CMOD version; they never apply fix packs, they run it for years, they have problems, and every time I was involved, it was a pain...

My advice: stay, at least with fix packs, as close as possible to the latest.
For each new CMOD version, try it as soon as possible in your dev/test system to ensure, as far as possible, that it works without breaking anything.
For production... I would wait for the first or second fix pack...
I am quite conservative here.


In all cases, before putting any version into production, it is clear that the FP or the new version MUST first be tested in your test environment...


Again, IBM always answers quickest on fixes when the customer is using the latest version with the latest fix pack.

After that, this is your own choice.


I hope I could be of some help.

75
MP Server / Re: Inserting PPD'S using Itext
« on: February 14, 2017, 06:16:59 AM »
thank you for sharing :-)
