I came across this little write-up in one of the IBM case studies (Chapter 2, "IBM Content Manager OnDemand for Multiplatforms implementation guidelines"). It may be helpful when deciding the maximum number of rows per database table.
The maximum rows value, which determines how many data rows will be
loaded into each database table, is used for segmenting the index data and
deciding when to close a database table and open a new one. The default
value of 10,000,000 rows is recommended for balancing the performance of
data loads and queries. However, if the expiration type is segment, use a
maximum rows value such that the data will expire (the table will be deleted)
in a reasonable amount of time. For example, if you load 1 million rows to an
application group in one year, and the data is to be retained for seven years,
the default value of 10 million rows in a data table is too large. The table will
not fill up until the tenth year, and the table will not be deleted until all data in
the table is seven years old, which means the table will exist for 17 years.
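Purely as an illustration of the arithmetic above (the variable names are mine, not from the product), the 17-year table lifetime follows from the fill time plus the retention period:

```python
# Example figures from the passage above.
ROWS_PER_YEAR = 1_000_000   # rows loaded into the application group per year
MAX_ROWS = 10_000_000       # default maximum rows per table
RETENTION_YEARS = 7         # data must be retained for seven years

years_to_fill = MAX_ROWS / ROWS_PER_YEAR          # 10 years to fill the table
# The table can only be deleted once the last-loaded data is seven years old.
table_lifetime = years_to_fill + RETENTION_YEARS
print(table_lifetime)  # 17.0
```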
When a wildcard search is specified and no index field (or segment date) is used, a full table scan takes place. If the table contains 10 million rows, a wildcard search in the 17th year would be scanning nine million rows of expired data. A good rule of thumb is that, when expire by segment is specified, no table should hold more than 10% of the data that would be loaded during the lifetime of that data (and thus the lifetime of a table is 110% of that of the data). 10% of seven years (84 months) is 8.4 months, which you can round up to nine months. Nine months of data would be 750,000 rows, so the maximum rows per table would be 750,000.
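The rule of thumb can be sketched as a short calculation. This is just the arithmetic from the passage, with names of my own choosing, not anything from the OnDemand configuration itself:

```python
import math

# Example figures from the passage: 1 million rows/year, seven-year retention.
ROWS_PER_YEAR = 1_000_000
RETENTION_MONTHS = 7 * 12  # 84 months

# A table should hold no more than 10% of the data's lifetime load;
# 10% of 84 months is 8.4, rounded up to 9 months.
months_per_table = math.ceil(0.10 * RETENTION_MONTHS)

# Nine months of loading at 1 million rows/year.
max_rows_per_table = int(ROWS_PER_YEAR * months_per_table / 12)
print(months_per_table, max_rows_per_table)  # 9 750000
```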