It all depends on how the data needs to be accessible in the end and what the customer/end users want to achieve.
I have used the following approaches:
1. Create one big generic index file (via scripting) and load a single data file (report) into the system. The index file consists of multiple indexes that all point to that single report, so many rows are added to the DB, but only one object file is sent to TSM. These weren't big reports (a few hundred at most), but it works as expected for end users.
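To illustrate, a generic index file of that kind might look like the sketch below. The field names, values, and file name are made up, and the keywords are as I remember them from the CMOD Indexing Reference, so check the manual for your version. Each group becomes one DB row, and every group names the same object file:

```
COMMENT: two index rows pointing at the same report object
CODEPAGE:819
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:1000123
GROUP_OFFSET:0
GROUP_LENGTH:0
GROUP_FILENAME:report.out
GROUP_FIELD_NAME:account
GROUP_FIELD_VALUE:1000456
GROUP_OFFSET:0
GROUP_LENGTH:0
GROUP_FILENAME:report.out
```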
P.S. In CMOD 9.5 it might now be even better to use the full text search server instead, since it gathers all the data from the reports :) But it was not available in previous CMOD versions and I have not tested it yet.
Maybe someone who uses it can share their experience, as it might be a good tool.
2. Modify the report (if possible) and add a header if the data is well organized (for instance 100). Sometimes it's easy to use sed, awk, a regex, or even plain vi to deal with it if you run CMOD on UNIX. After such pre-processing, let the system index the data.
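For example (the report contents and header text here are invented), prepending a header line that the indexer can key on is a one-liner in either sed or awk:

```shell
# Sample report with no header (contents are made up)
printf 'row1\nrow2\n' > report.txt

# GNU sed: insert a header line before line 1
sed '1i ACCOUNT=1000123 DATE=2016-03-01' report.txt > report.sed

# Portable awk equivalent
awk 'NR==1 {print "ACCOUNT=1000123 DATE=2016-03-01"} {print}' report.txt > report.awk
```

Note the `1i text` form on one line is a GNU sed extension; on other UNIX seds the inserted text has to follow `1i\` on the next line, which is why the awk variant travels better.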
3. Use index values that have nothing in common with the data that is going to be loaded. That's the easiest approach. And as you mentioned, I have also used values taken from the data file name as indexes.
As the segment date I use the load date if possible, and let the system fill it in while loading. By adding the default value 't' to the segment date field in the application, the system picks up the current date when the data is processed. So there is no need to even add this value to the index file if a generic index file is used.
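Pulling index values out of the data file name is usually just a matter of cut or shell parameter expansion. The naming pattern below is invented for the sake of the example:

```shell
# Hypothetical load file named <appgroup>.<account>.<yyyymmdd>.out
f=INVOICES.1000123.20160301.out

# Split the name on the dots; fields 2 and 3 become index values
account=$(echo "$f" | cut -d. -f2)
rdate=$(echo "$f" | cut -d. -f3)

echo "account=$account rdate=$rdate"
```

The same split can be done with `${f#*.}`/`${f%%.*}` expansions if you want to avoid the extra processes, but cut reads more clearly in a load script.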
It all depends on what the customer wants to achieve.