Show Posts



Messages - ewirtz

121
Other / Re: Data Encryption CMOD MP and DB2
« on: December 19, 2011, 02:45:16 AM »
Hi Steve,
I don't have any experience with Vormetric, but I do have experience with the development of a PCI-compliant client/server application that encrypts and decrypts card numbers. This application is very fast: about 1000 transactions/second (z/OS, AIX, Windows environments). So I know it is possible to optimize encryption logic.
Regarding CMOD I think the following approach would help. The structural parts of a document (e.g. printer control characters, CR/LF) stay unencrypted; the data itself is encrypted, and if needed (parts of) the indexes are encrypted as well. Done this way, arsmaint is not affected, because it doesn't know it is working with encrypted data. But you need additional logic in the input, index and preview exits and in the frontend. You can use the OpenSSL library or ICSF on z/OS to implement such logic.
This looks very complex, but with a good modular design the challenge can be met.
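Just to illustrate the index part: if only single index values had to be protected, DB2's built-in ENCRYPT_TDES / DECRYPT_CHAR functions could also be used at column level. This is not the exit-based OpenSSL/ICSF approach described above, only a small sketch of the idea, and the table and column names are invented:

-- card_no_enc must be defined as VARCHAR(n) FOR BIT DATA
INSERT INTO doc_index (doc_id, load_date, card_no_enc)
  VALUES (4711, CURRENT DATE, ENCRYPT_TDES('4444333322221111', 'my-secret'));

SELECT doc_id, load_date, DECRYPT_CHAR(card_no_enc, 'my-secret')
  FROM doc_index
 WHERE doc_id = 4711;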

regards

Egon

122
MP Server / Re: Active/Active CMOD Configuration
« on: December 01, 2011, 03:23:58 AM »
Hi,

On z/OS another combination might work if you have a sysplex and a DB2 data sharing group:

1. Define a virtual IP address.
2. Start ARSSOCKD on both images of the sysplex, listening on the virtual IP.
3. If ARSSOCKD implements proper commit/rollback logic, it should work.

With this technique you would get simple workload balancing, and the system would still work if one image is down. I think it will work, because we are running several ARSLOAD jobs in parallel directly against DB2, which is similar to more than one ARSSOCKD running against the same DB2. But of course IBM should confirm that it works.

regards

Egon

123
z/OS Server / Re: System folder security and RACF
« on: June 16, 2011, 05:37:57 AM »
Hi,

We are reading the RACF groups of the current user within the security exit. It's a little bit tricky to get it thread-safe. Within the module we do some checks using the list of RACF groups. With this technique uppercase is no problem, because we can transform whatever we want.

Regards

Egon

124
Hi Justin,
You are right, a lot of tricks can be used to improve performance. I have a lot of experience in DB2 optimization (z/OS). If we have a business date that would imply overlapping segmentation intervals, I think DB2 partitioning is a simple solution for this issue. You could implement this in the tablespace creation exit, which would use the OnDemand configuration to derive a canonical mapping to DB2:

- table splitting is deactivated
- the segmentation field is used for partitioning
- all indexes with a leading segmentation field become partitioned indexes

I think this is a general solution for this class of issues. Because of the mapping logic the tablespace creation is still controlled by the admin configuration, so it stays close to 'out of the box'. A sketch of what such an exit could generate is shown below.
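Only a sketch, of course: the names (cmoddb, invoicets, invoice_tbl, business_date, account_no) are invented and the exact DDL depends on your DB2 for z/OS version. Range partitioning on the business date could look roughly like this:

CREATE TABLESPACE invoicets IN cmoddb NUMPARTS 5;

CREATE TABLE invoice_tbl
  (doc_name      VARCHAR(11) NOT NULL,
   business_date DATE        NOT NULL,
   account_no    CHAR(10)    NOT NULL)
  IN cmoddb.invoicets
  PARTITION BY RANGE (business_date)
    (PARTITION 1 ENDING AT ('2007-12-31'),
     PARTITION 2 ENDING AT ('2008-12-31'),
     PARTITION 3 ENDING AT ('2009-12-31'),
     PARTITION 4 ENDING AT ('2010-12-31'),
     PARTITION 5 ENDING AT ('2011-12-31'));

-- index with leading segmentation/partitioning field as a partitioned index
CREATE INDEX invoice_ix1
  ON invoice_tbl (business_date, account_no)
  PARTITIONED;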

cheers

Egon

125
Hi,

For me this implies that segmentation does not really help if reports can be delivered for the past. In general it makes no sense to use technical segmentation dates, because business people will search by business-relevant fields. They should not need to know when the report was created.

A simple solution to this performance issue is to avoid segmentation. By using DB2 partitioning we can reach the goal of segmentation with better performance.


Egon

126
Hi,

I'm just thinking about segment field usage.

If a business date is used as a segment field, the following could happen:

- a segment is closed for the date interval 2007 to 2010
- you have a current segment 2011-2011
- you get a new report in 2011 for 2007 (some correction for the past)

What is the influence of such a load?

Will we have the following segments afterwards?

- 2007-2010
- 2007-2011

If this is true, segmentation would not help at all for this kind of business segment field.

Does anybody know the details?
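One way to check the details on your own system would be to look at the segment catalog after such a load. This is only an assumption on my side that the segment information sits in the arsseg system table with start_date / stop_date as the boundaries (column names may differ in your release):

SELECT table_name, start_date, stop_date
  FROM arsseg
 ORDER BY start_date, stop_date;

Overlapping ranges in that output would confirm the behaviour described above.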

127
Hi Alessandro,
Hi Justin,

If you want to see the reports of one fiscal year, you have to look in two years if the search and segment field is a technical date.

It would be better to search by the fiscal year itself. But that implies looking in all segments (even if the field is indexed) as long as the technical date is still the segment field.

But if you both say that the segment field should not be touched, I think it's just a lessons-learned topic.

Cheers

Egon




128
Hi Justin,

I added some details before, but I cannot see them in the thread, so I'm posting them again.

Sometimes a technical date is taken for segmentation. If you are searching by a business date, two segments have to be searched, because the documents from the end of the year (business date) are stored in the next year (technical date). So it would be good to switch from the technical date to the business date for segmentation. A small illustration follows below.
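Just to illustrate it with invented segment table names (doc_seg2010, doc_seg2011): assume the technical load date is the segment field, so the December documents of fiscal year 2010 end up in the 2011 segment. A search by business date then has to read both segment tables:

-- documents of fiscal year 2010, segmented by the technical load date
SELECT doc_name, business_date FROM doc_seg2010
 WHERE business_date BETWEEN '2010-01-01' AND '2010-12-31'
UNION ALL
SELECT doc_name, business_date FROM doc_seg2011
 WHERE business_date BETWEEN '2010-01-01' AND '2010-12-31';

If the business date itself were the segment field, only doc_seg2010 would have to be read.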

with kind regards

Egon

129
Hi,
We want to optimize searching in production. Sometimes the segment field is a bad choice, and we would like to change such a segmentation field. Does anyone know how to change it (maybe with some tricky SQL)?

Egon

130
z/OS Server / Re: converting system log timestamp integer field
« on: December 08, 2010, 12:40:38 AM »
You can get, e.g., the date with the following SQL:
SELECT DATE('01.01.1970')  + (<integer>/86400) DAYS FROM SYSIBM.SYSDUMMY1;
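If you also need the time portion, the same idea should work with timestamp arithmetic (assuming <integer> is the same seconds-since-1970 value from the system log; the result is then still UTC, so a time zone adjustment may be needed):

SELECT TIMESTAMP('1970-01-01-00.00.00') + (<integer>) SECONDS
  FROM SYSIBM.SYSDUMMY1;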
