Recent Posts

Pages: 1 2 3 [4] 5 6 7 8 9 10
31
z/OS Server / PH56940
« Last post by Ed_Arnold on March 12, 2024, 09:40:15 PM »
PH56940: ABEND0C4 RC10 AT DSNXOD3 OFFSET04726 MAY OCCUR FOR THE QUERY USING SINGLE INDEX ACCESS

Error description
ABEND0C4 RC10 AT DSNXOD3 OFFSET04726 MAY OCCUR when explaining
an SQL statement whose access path contains single index access.

KEYWORDS: ABEND0C4 SQLEXPLAIN SQLACCESSPATH
Local fix
Avoid Explain on the impacted SQL statement.
Problem summary
****************************************************************
* USERS AFFECTED:                                              *
* All users of Db2 12 and Db2 13 for z/OS who                  *
* explain queries whose access path contains                   *
* single index access.                                         *
****************************************************************
* PROBLEM DESCRIPTION:                                         *
* ABEND0C4 at DSNXOD3 offset 04726 may                         *
* happen when explaining an SQL                                *
* statement whose access path contains                         *
* single index access.                                         *
* Abend only happens when the accessed                         *
* storage is not available. Otherwise,                         *
* incorrect information may be populated                       *
* into column IBM_SERVICE_DATA of                              *
* PLAN_TABLE.                                                  *
****************************************************************
* RECOMMENDATION:                                              *
* Apply corrective PTF when available                          *
****************************************************************
For example:
EXPLAIN ALL SET QUERYNO = 1 FOR
SELECT C1,C2 FROM T1
WHERE T1.C1 = 100;
Index IX(C1) is chosen to access T1. When populating data into
PLAN_TABLE, an invalid array index zero is used that may result
in ABEND0C4 at DSNXOD3 offset 04726 or incorrect information in
column IBM_SERVICE_DATA of PLAN_TABLE.
Problem conclusion
Db2 has been modified to correctly process the aforementioned
SQL statement.
Additional Keywords ABEND0C4 SQLEXPLAIN SQLINDEX
32
no - It's a connection pool. Connections are established when the REST services start.
33
Hi,

I am working with the ODWEK REST API on IBM Content Manager OnDemand version 10.5.0.4.
When performing the following HTTP POST request:

URL = http://localhost/cmod-rest/v1/hits/
HTTP Method = POST
Body
Code: [Select]
{
    "query": "where DateField between 230301 and 240301",
    "folder": "MYFOLDER"
}

Response
Code: [Select]
Information has been modified on the server.  Please logoff, logon, and retry the operation.
I'm a bit surprised to get this response, as I thought the ODWEK REST API would take care of logging off and logging back on to CMOD itself, right?
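For reference, here is the same request expressed as a curl call (the Content-Type/Accept headers are just ordinary JSON assumptions on my part, and however the logon session gets passed to the service is left out, since that part is exactly what I'm unsure about; URL, method and body are as above):
Code: [Select]
curl -X POST "http://localhost/cmod-rest/v1/hits/" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{
        "query": "where DateField between 230301 and 240301",
        "folder": "MYFOLDER"
      }'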
34
MP Server / Re: Help needed: reverse-engineer on-disk cache format in MP V9.5
« Last post by Mattbianco on March 11, 2024, 03:47:54 AM »
Okay...

I think I figured it out now. The second pair of offset (106262) and length (104751) is the location, in the compressed data file, of the compressed "block" within which the first pair of offset (0) and length (21894) locates the actual document:

1136FAAA       0     21894       106262      104751   U   O   0   1   0

I fooled myself by running arsadmin decompress without an offset and length, which got the first section decompressed without issues.
I didn't realize that these PDF documents could be compressed so efficiently that 211013 bytes would decompress into two files of 638685 + 637376 bytes.
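So, as a rough sketch of how to get at document 30 from the example above (dd is only used to carve out byte ranges, the output file names are made up, and the arsadmin decompress step is the same one described earlier, so I'm not spelling out its options here):
Code: [Select]
# carve the second compressed block (offset 106262, length 104751)
# out of the restored 1136FAAA cache file
dd if=1136FAAA of=block2.od77 bs=1 skip=106262 count=104751

# decompress block2.od77 with arsadmin decompress, the same way the
# whole file was decompressed before, giving e.g. block2.out

# the document with the first pair (offset 0, length 21894) then sits
# at the start of the decompressed block
dd if=block2.out of=doc30.dat bs=1 skip=0 count=21894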

Once again, CMOD and its storage structures impress me! There is indeed some truth to the "things were better back in the day" sayings.
A more recently built solution would probably have been very hard to recover from in a situation where the database with the metadata was lost. Placing the small metadata on disk in the cache file systems is a real lifesaver!
35
MP Server / Re: CMOD Migration to new DC
« Last post by Mattbianco on March 11, 2024, 03:20:24 AM »
What I once did (on a cache-only system) was, roughly as sketched below:

A nightly cron job on the "current" (old/source) server doing DB2 exports into files with DB2MOVE + DB2LOOK,
then an rsync of the database exports and the cache file systems,
and on the receiving (new/target) server, another cron job that re-created the DB2 database from the export files and then performed a CMOD DB version upgrade (because why not ;D )
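Something along these lines, run from cron as the DB2 instance owner (the database name, directories and target host are made up for illustration; db2move/db2look options may need adjusting for your environment):
Code: [Select]
#!/bin/sh
# nightly export job on the old/source server (names are illustrative only)
DB=ARCHIVE
EXPDIR=/export/cmoddb

# dump the DDL and export the table data
db2look -d $DB -e -o $EXPDIR/$DB.ddl
cd $EXPDIR && db2move $DB export

# push the database exports and the cache file systems to the new server
rsync -a $EXPDIR/ newserver:/import/cmoddb/
rsync -a /arscache/ newserver:/arscache/

# on the new/target server, a second cron job recreates the database
# (db2 create db, db2 -tvf ARCHIVE.ddl, db2move ARCHIVE load) and then
# runs the CMOD database version upgrade, as described above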

Then we had time to test the new installation with current contents, and we even did all new loads/ingestions in both systems for a while (with the older system being mandatory, since the new one was being rebuilt each night).

When we were satisfied that the new system was functional and reliable, we stopped syncing and killed the old server.

I'm not saying that this is how it should be done, but it worked very well for us. The rsync was very useful since we had a lot of data (being a cache-only setup with everything online on disk), and didn't want a very long downtime to make the switch.
36
MP Server / Help needed: reverse-engineer on-disk cache format in MP V9.5
« Last post by Mattbianco on March 11, 2024, 02:08:31 AM »
Background: cache-only setup, database backup failure, cache filesystem backups (with TSM) complete.
Documents stored "as is" with the generic indexer only, so no separation of resources and document data.

I need to recover some documents from an AG where arsmaint -c and arsmaint -d have been run, and where the segment date was incorrectly set to 1970-01-01 on some documents...

I've restored the affected DOC files from backup (into another folder), both the 1136FAA1 (OD77-compressed metadata) and 1136FAAA (OD77-compressed documents).
I've run arsadmin decompress on the restored files, and have noticed that the ...FAA1 files contain the document metadata, and the ...FAAA etc files contain the documents themselves.

So far, I've noticed that the first line (sometimes lines) begins with "<" and ends with ">" and in between contains tab-separated AG field names.
Then follow lines with the metadata, one line for each document. First come the values of the AG fields, in the same order as in the <>-enclosed header, and then some CMOD-specific fields that could look like this:
1136FAAA   0   21945   0   106262   U   O   0   1   0

The first field is obviously the name of the file containing the documents, and the second and third (0 + 21945) are the byte offset and length of the document, after decompression, in the document data file.
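To eyeball those trailing CMOD-specific columns across all documents, the last ten fields of each metadata line can be pulled out with something like this (assuming the decompressed ...FAA1 file is tab-separated and the CMOD fields are always the last ten; the input file name is just a placeholder):
Code: [Select]
# skip the <...> header line(s), print the ten trailing (CMOD-specific)
# fields of every metadata line
awk -F'\t' '!/^</ {
    for (i = NF - 9; i <= NF; i++)
        printf "%s%s", $i, (i < NF ? "\t" : "\n")
}' 1136FAA1.decompressed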

But, what are the other fields? 0, 106262, U, O, 0, 1, 0 ?

In this example file, the first 29 documents make perfect sense. Here is the CMOD-data for documents 27 - 32 in the decompressed 1136FAA1 file:

1136FAAA  572939     21888            0      106262   U   O   0   1   0
1136FAAA  594827     21887            0      106262   U   O   0   1   0
1136FAAA  616714     21971            0      106262   U   O   0   1   0
1136FAAA       0     21894       106262      104751   U   O   0   1   0
1136FAAA   21894     22109       106262      104751   U   O   0   1   0
1136FAAA   44003     22005       106262      104751   U   O   0   1   0


The 1136FAAA file is exactly 616714 + 21971 bytes after decompressing the entire file, so at the point where the offset counter drops back to zero and the second pair of "counters" increases, I don't understand where to find the remaining documents.

Does anyone here know what the 0 / 106262 / 104751 in the columns after the first offset+length pair mean?
Do you think there could be a way to salvage the remaining documents from the cache backups, without using the database?

Thanks!
Matt
37
Content Navigator / Re: Issue connecting to ICN Repository
« Last post by jsquizz on March 06, 2024, 01:31:01 PM »
This issue was very weird. Here's what I did.

1) Verified ALL classpaths/paths at the OS level and the WebSphere level, including pointing to the new version of log4j/gson
2) Uninstalled CMOD V10.5
3) Reinstalled CMOD V10.5 + FixPack 7
4) Redeployed the ICN ear file
5) Repository successfully added.

Essentially, a reinstall of the base CMOD fixed us up.
38
MP Server / Re: Updating retention for historical loads
« Last post by Ed_Arnold on March 05, 2024, 12:43:49 PM »
I still don't know the answer to your question, but...

I was thinking to do arsxml update.

...that's fine. 

I was concerned that the intention was to use straight SQL to change table values.
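For what it's worth, a bulk run would look roughly like the sketch below. The arsxml flags shown are the typical ones (verify with arsxml -help on your box), and the retention attribute in the XML is only a placeholder; pull the real attribute name and current values from an arsxml export of one application group first.
Code: [Select]
# changes.xml: the attribute name below is a placeholder, NOT the real
# schema name; copy it from an arsxml export of one application group:
#
#   <onDemand>
#       <applicationGroup name="MYAG" retentionPlaceholder="7" />
#   </onDemand>

# then apply it (typical arsxml invocation; verify flags with arsxml -help)
arsxml update -h odserver -u admin -p password -i changes.xml -v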

Ed Arnold
39
MP Server / Re: Updating retention for historical loads
« Last post by ODSA on March 04, 2024, 01:01:37 PM »
Hi Ed, I mean, I have more than 2K AGs, so I was thinking to do an arsxml update.
Would your recommendation be to do it from the Admin client?

Thanks!
40
MP Server / Re: Updating retention for historical loads
« Last post by Ed_Arnold on March 04, 2024, 11:37:57 AM »
ODSA -

I don't know the answer to your question but...

> I am planning to do a direct DB update to change the retention to 7 for all the AGs.

...this sounds very dangerous.

Ed Arnold