Bud Paton of IBM has created an excellent PowerPoint presentation that explains the fastest ways to load documents.
I suppose you have the metadata in a separate file?
If so, I would create a small script that builds the .ind file from that metadata. If I recall correctly, it is faster to point at each individual PDF file than to concatenate them all into one big .out file and use offset + length. (Just use GROUP_OFFSET:0 and GROUP_LENGTH:0 to load each entire file.)
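As a rough sketch of such a script, assuming the metadata arrives as a CSV with one row per PDF (the field names CustName, InvoiceNum, InvoiceDate and the PdfFile column are hypothetical; substitute your own application group fields):

```python
import csv, io

FIELDS = ["CustName", "InvoiceNum", "InvoiceDate"]  # hypothetical field names

def build_ind(rows):
    """Return the text of a generic index (.ind) file: one document group
    per PDF, pointing at each file with offset 0 / length 0 (whole file)."""
    out = ["COMMENT: generated from metadata", "CODEPAGE:819"]
    for row in rows:
        for f in FIELDS:
            out.append(f"GROUP_FIELD_NAME:{f}")
            out.append(f"GROUP_FIELD_VALUE:{row[f]}")
        # Offset/length 0 means "load the entire referenced file"
        out.append("GROUP_OFFSET:0")
        out.append("GROUP_LENGTH:0")
        out.append(f"GROUP_FILENAME:{row['PdfFile']}")
    return "\n".join(out) + "\n"

# Example metadata, one row per PDF
sample = ("CustName,InvoiceNum,InvoiceDate,PdfFile\n"
          "Acme,12345,2024-01-31,inv12345.pdf\n")
rows = list(csv.DictReader(io.StringIO(sample)))
print(build_ind(rows))
```

Check the generated file against your indexing reference before loading anything, of course.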
If you have PDF Indexer fields defined in the form itself, well... that's an entirely different story.
If the PDF Indexer is not fast enough, it MAY be possible to use that little tool (what's-it-called-again, arspdump?) to convert all the text data in the PDFs to a TXT representation, which, with a lot of luck, you could parse with a script that may or may not build a generic index file (.ind) faster than running PDF Indexer.
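The parsing half of that idea might look like the sketch below. Everything here is an assumption: I don't know the actual output format of the dump tool, so the code simply pretends each document's text dump contains a line like "Invoice No: 12345" and pulls the index value out with a regular expression. Verify the real dump format first.

```python
import re

# Hypothetical: assume the text dump contains a line "Invoice No: <digits>".
INVOICE_RE = re.compile(r"Invoice No:\s*(\d+)")

def index_value_from_dump(text):
    """Extract the index value for one document from its TXT dump,
    or return None if the expected line is missing."""
    m = INVOICE_RE.search(text)
    return m.group(1) if m else None

dump = "Page 1\nAcme Corp\nInvoice No: 12345\nTotal: 99.00\n"
print(index_value_from_dump(dump))  # -> 12345
```

The extracted values would then feed the same .ind-building step as the metadata-file approach.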
I suggest you make small batches of maybe 100 docs and test which method is fastest.
(I always ask the people who create the PDF files to produce an .ind file as well, while they're at it... Or I tell them to include the data as PPD Page-Piece Info.)