Author Topic: Sizing up Filesystems for Library Server and Object Server  (Read 3083 times)

tjspencer2

  • Jr. Member
  • Posts: 80
My infrastructure folks need to understand how to lay out (size) the PROD filesystems for our Library Server and Object Server machines - we're on AIX.  I know what my volume is going to be, and I was looking for some guidance on how to allocate space.  We're new to OnDemand and don't yet have guidance from our vendor on this.

demaya

  • Guest
Re: Sizing up Filesystems for Library Server and Object Server
« Reply #1 on: May 30, 2012, 04:11:17 AM »
Hi,

If I understand your question right, here's my answer:
We have several filesystems:
- each database instance has its own filesystem, "grouped" into 2 resource groups (for switching via HACMP)
- additionally, we have filesystems for the origlog / mirrlog of DB2, per DB instance
- per RG we have one "load" filesystem for all the temporary stuff that arsload wants to load
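
As a rough picture (the instance, RG, and mount names here are made up for illustration):

    /db2/INST1             database filesystem for instance 1 (resource group RG1)
    /db2/INST1/origlog     DB2 online logs for INST1
    /db2/INST1/mirrlog     DB2 mirror logs for INST1
    /load/RG1              landing area for files that arsload picks up
    /db2/INST2, /load/RG2  same pattern again under resource group RG2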

Hope that helps :-)

Cheers

Justin Derrick

  • IBM Content Manager OnDemand Consultant
  • Administrator
  • Hero Member
  • Posts: 2230
  • CMOD Guru for hire...
  • Tenacious Consulting
Re: Sizing up Filesystems for Library Server and Object Server
« Reply #2 on: May 30, 2012, 06:48:35 AM »
This is sort of a hole with no bottom.  :)

You can avoid the most common (and painful) mistakes by considering the following:

  • Database volumes & cache filesystems should be on your fastest, most reliable disk technology.
  • Index & temporary filesystems can be on your next tier (ie, less fast, less reliable, less expensive).
  • For the database, try to match RAID stripe size, filesystem block size, and database page size.
  • Ensure that there are no artificially low limits on filesystem sizes for database & cache -- they will grow.  A lot.
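
To make the stripe-matching point concrete, here's one hypothetical combination that lines up (the numbers are purely illustrative; check your actual array and your DB2 settings):

    RAID full stripe  = 4 data disks x 64 KB strip unit = 256 KB
    DB2 page size     = 16 KB
    DB2 EXTENTSIZE    = 16 pages x 16 KB                = 256 KB  (one extent write = one full stripe)
    JFS2 agblksize    = 4 KB, which divides the 16 KB page evenly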

  • Read the installation guide to learn how to calculate compression ratios, so you can accurately estimate the required size of your cache (a rough sketch of the math follows below).
  • Create separate 'landing zone' directories for each application group. If one AG's data volume becomes a problem, you can put it into its own filesystem where it won't gobble up all the space and impact other AGs.  This is especially important when you're unable to load data due to downtime/bugs.
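
A back-of-the-envelope version of that cache math, in Python (every input here is invented; substitute the compression ratio and volumes you actually measure from your own test loads):

    # Rough CMOD cache sizing sketch -- all inputs are hypothetical.
    daily_input_gb    = 20.0   # raw report data arriving per day
    compression_ratio = 0.25   # stored size / raw size, from test loads
    cache_days        = 90     # how long documents stay in cache
    headroom          = 1.5    # growth / safety factor

    cache_gb = daily_input_gb * compression_ratio * cache_days * headroom
    print(f"Plan for roughly {cache_gb:,.0f} GB of cache")   # ~675 GB with these numbers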

  • Don't store your own data (scripts, logs, resources) under /usr/lpp/ars.  Logs should go into their own filesystem.  System-related scripts (OS-level performance monitoring & error reporting) should go into /usr/local/.  CMOD-specific scripts should go into /home/archive/bin.  I recommend creating a structure like /home/archive/resources/AGName/AppName/Version for storing AFP resources that you need to store locally.
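
For example, a layout along these lines (the AG and application names are made up):

    /usr/local/                          OS-level monitoring & error-reporting scripts
    /home/archive/bin/                   CMOD-specific scripts
    /home/archive/resources/INV/Stmt/V1  AFP resources for a hypothetical AG 'INV', app 'Stmt', version 'V1'
    (plus a dedicated filesystem just for logs, at a mount point of your choosing)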


That's just off the top of my head; I'm sure there are many more that others can add.

Good luck!
IBM CMOD Professional Services: http://TenaciousConsulting.com
Call:  +1-866-533-7742  or  eMail:  jd@justinderrick.com
IBM CMOD Wiki:  https://CMOD.wiki/
FREE IBM CMOD Education & Webinars:  https://CMOD.Training/

Interests: #AIX #Linux #Multiplatforms #DB2 #TSM #SP #Performance #Security #Audits #Customizing #Availability #HA #DR

LWagner

  • Guest
Re: Sizing up Filesystems for Library Server and Object Server
« Reply #3 on: June 05, 2012, 03:47:56 PM »
If it's not too late to add more, I will.

You may find you need to do your own load tests to get the most accurate information about compression and total storage requirements.  Then calculate accurately how many iterations you will keep on Tier 1 storage, and how many on Tier 2.  And don't neglect indexes and syslog.
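
Something like this, as a rough Python sketch (every number below is invented; plug in what your own load tests actually show):

    # Tiering math from a sample load -- all inputs are hypothetical.
    sample_raw_mb    = 500.0   # size of a representative test load
    sample_stored_mb = 140.0   # what CMOD actually wrote for that load
    ratio            = sample_stored_mb / sample_raw_mb   # measured compression

    daily_raw_gb   = 30.0      # production input per day
    tier1_days     = 120       # iterations kept on Tier 1
    tier2_days     = 2435      # remainder of a ~7-year retention on Tier 2
    index_overhead = 0.15      # indexes + syslog, as a fraction of the data

    tier1_gb = daily_raw_gb * ratio * tier1_days * (1 + index_overhead)
    tier2_gb = daily_raw_gb * ratio * tier2_days
    print(f"Tier 1: ~{tier1_gb:,.0f} GB   Tier 2: ~{tier2_gb:,.0f} GB")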

I did not trust IBM's initial estimates for migrating indexes from version 2.1 to version 8.4 on z/OS, so I did my own estimates on my largest applications.  My growth was not double, as IBM predicted, but a factor a lot higher, perhaps five or ten.  When I calculated the entire migration, we did not have the space to complete it.  I triple-checked my numbers, and we ordered more storage in order to do the migration, which took about 18 MONTHS.  My estimates from samples were right on the money.

Then came Customer Service moving from AFP bills to PDFs on version 8.4.  Compression of PDFs is NOT good in 8.4, and we saw storage requirements go up by a factor of about sixty.  Again, we had to order more storage before we could go live with the new bills.  After cutting Tier 1 back from 8 months to 4, and being required to use partitioned table spaces, we use 94% to 95% of the 1 terabyte of storage purchased for this.  We are migrating the bills to AIX with CMOD 8.5, where PDF compression is improved by 90% to 95%, so 4 months of data will shrink to between 50 and 100 gigabytes.  To load the PDFs, we currently use 8 virtual Windows servers to load 500-plus files a day in about 2 hours, starting at 2:00 AM.  The file count is driven by the maximum file size limitations, since our PDFs expanded on CMOD 8.4.

What was fun about this is that now, when I make an estimate, it's taken almost as gospel from day one.  I go back to my numbers, and no one can argue about excessive estimates.  And I'm considered the major space hog, with the biggest databases we have.