OnDemand User Group

Support Forums => MP Server => Topic started by: tjspencer2 on May 29, 2012, 09:54:54 AM

Title: Sizing up Filesystems for Library Server and Object Server
Post by: tjspencer2 on May 29, 2012, 09:54:54 AM
My infrastructure folks need to understand how to lay out (size) the PROD filesystems for our Library Server and Object Server machines - we're on AIX.  I know what my volume is going to be and was looking for some guidance on how to allocate space.  We're new to OnDemand and don't yet have guidance from our vendor on this.
Title: Re: Sizing up Filesystems for Library Server and Object Server
Post by: demaya on May 30, 2012, 04:11:17 AM
Hi,

If I understand your question right, here's my answer:
We have several filesystems:
- each database instance has its own filesystem, "grouped" into 2 resource groups (for switching via HACMP)
- additionally, we have filesystems for the origlog / mirrlog of DB2, per DB instance
- per resource group, we have one "load" filesystem for all the temporary files that arsload needs to load
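
For illustration, here's a minimal sketch of such a layout as a Python model - all mount points, instance names, and sizes are made-up assumptions, not our real values:

Code:
# Rough model of the layout described above - every mount point,
# instance name, and size (GB) is a made-up assumption for illustration.
layout = {
    "RG1": {
        "/db2/ARSDB1":         ("DB2 instance filesystem", 200),
        "/db2/ARSDB1/origlog": ("DB2 primary (orig) logs",  20),
        "/db2/ARSDB1/mirrlog": ("DB2 mirrored logs",        20),
        "/arsload/RG1":        ("arsload staging area",     50),
    },
    "RG2": {
        "/db2/ARSDB2":         ("DB2 instance filesystem", 200),
        "/db2/ARSDB2/origlog": ("DB2 primary (orig) logs",  20),
        "/db2/ARSDB2/mirrlog": ("DB2 mirrored logs",        20),
        "/arsload/RG2":        ("arsload staging area",     50),
    },
}

for rg, filesystems in layout.items():
    total = sum(size for _, size in filesystems.values())
    print(f"{rg} ({total} GB total):")
    for mount, (purpose, size) in filesystems.items():
        print(f"  {mount:<21} {size:>4} GB  {purpose}")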

Hope that helps :-)

Cheers
Title: Re: Sizing up Filesystems for Library Server and Object Server
Post by: Justin Derrick on May 30, 2012, 06:48:35 AM
This is sort of a hole with no bottom.  :)

You can avoid the most common (and painful) mistakes by considering the following:





That's just off the top of my head; I'm sure there are many more that others can add.

Good luck!
Title: Re: Sizing up Filesystems for Library Server and Object Server
Post by: LWagner on June 05, 2012, 03:47:56 PM
If it's not too late to add more, I will.

You may find you need to do your own load tests to get the most accurate information about compression and total storage requirements.  Then calculate accurately how many iterations you will keep on Tier 1 storage, and how many on Tier 2.  And don't neglect indexes and the syslog.
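
As a starting point, a back-of-the-envelope calculation might look like this minimal Python sketch - every number is a made-up placeholder; substitute the volumes and compression ratio you measure in your own load tests:

Code:
# Back-of-the-envelope CMOD storage sizing from load-test results.
# Every number here is an illustrative assumption - substitute your own
# measured values from actual load tests.

raw_gb_per_day = 10.0        # raw report volume loaded per day
compression_ratio = 0.25     # stored/raw, measured from your own load tests
index_overhead = 0.10        # extra fraction for database indexes
syslog_overhead = 0.02       # extra fraction for the system log
tier1_days = 120             # retention on Tier 1 (e.g. 4 months)
tier2_days = 2435            # remainder of a 7-year retention on Tier 2

stored_gb_per_day = raw_gb_per_day * compression_ratio
daily_total = stored_gb_per_day * (1 + index_overhead + syslog_overhead)

tier1_gb = daily_total * tier1_days
tier2_gb = daily_total * tier2_days

print(f"Stored per day (after compression): {stored_gb_per_day:.1f} GB")
print(f"Tier 1 requirement: {tier1_gb:,.0f} GB")
print(f"Tier 2 requirement: {tier2_gb:,.0f} GB")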

I did not trust IBM's initial estimates for migrating indexes from version 2.1 to version 8.4 on z/OS, so I did my own estimates on my largest applications.  My growth was not double, as IBM predicted, but a much higher factor, perhaps five or ten.  Once I calculated the entire migration, it was clear we did not have the space to complete it.  I triple-checked my numbers, and we ordered more storage in order to do the migration, which took about 18 MONTHS.  My estimates from the samples were right on the money.
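
The method was plain sample-based extrapolation; here's a minimal sketch of the idea in Python (all figures are made up for illustration, not my actual numbers):

Code:
# Sample-based extrapolation of index growth during a migration.
# All figures are illustrative assumptions, not actual measurements.

# Measure a representative sample: migrate a slice of your largest
# application and compare index size before and after.
sample_old_gb = 5.0      # index size of the sample on the old version
sample_new_gb = 35.0     # same sample after migration to the new version
growth_factor = sample_new_gb / sample_old_gb  # 7x here, not the 2x predicted

# Extrapolate to the full environment.
total_old_index_gb = 800.0
projected_new_index_gb = total_old_index_gb * growth_factor

print(f"Measured growth factor: {growth_factor:.1f}x")
print(f"Projected post-migration index size: {projected_new_index_gb:,.0f} GB")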

Then came Customer Service moving from AFP bills to PDFs on version 8.4.  Compression of PDFs is NOT good in 8.4, and we saw storage requirements go up by a factor of about sixty.  Again, we had to order more storage before we could go live with the new bills.  After cutting Tier 1 retention back from 8 months to 4 months, and being required to use partitioned table spaces, we use 94% to 95% of the 1 terabyte of storage purchased for this.  We are migrating the bills to AIX with CMOD 8.5, where PDF compression is improved by 90% to 95%, so 4 months of data will shrink to between 50 and 100 gigabytes.  To load the PDFs, we currently use 8 virtual Windows servers to load 500-plus files a day in about 2 hours, starting at 2:00 AM.  The file count is driven by the maximum file size limitations, since our PDFs expanded on CMOD 8.4.
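
The arithmetic behind those figures is easy to sanity-check; a minimal Python sketch using the rounded numbers above:

Code:
# Sanity-checking the PDF storage figures quoted above, using the
# rounded numbers from the post.

tier1_gb = 1000 * 0.945      # ~94-95% of the 1 TB purchased -> ~945 GB
improvement_low, improvement_high = 0.90, 0.95  # CMOD 8.5 compression gain

# The same 4 months of data after the improved compression:
after_high = tier1_gb * (1 - improvement_low)   # keep 10% -> ~95 GB
after_low = tier1_gb * (1 - improvement_high)   # keep 5%  -> ~47 GB

print(f"Current Tier 1 usage: {tier1_gb:.0f} GB")
print(f"Projected after migration: {after_low:.0f} to {after_high:.0f} GB")
# Consistent with the 50-100 GB range quoted above.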

What was fun about this is that now, when I make an estimate, it's taken almost as gospel from day one.  I can go back to my numbers, and no one can argue that my estimates are excessive.  And I'm considered the major space hog, with the biggest databases we have.