I have another question.
You mentioned that you are exporting from one environment, and these images (IMG1-3) are from when you load them back into the new environment, correct?
It looks like you are loading them using the "Graphical PDF Indexer", correct? Or are you using "Internal indexing" (PPDs)?
The load process spends HUGE amounts of time indexing. Therefore, one way to speed up the load would be to use the -g flag during export (arsdoc get ... -g, "Create generic indexer file"), and then to pass -X G ("Find a generic indexing file") to arsload.
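As a sketch, the two steps would look like this (the "..." stands for your usual connection and query options, which don't change):

```
# Export: -g makes arsdoc write a generic indexer file alongside the documents.
arsdoc get ... -g

# Load: -X G tells arsload to find and use that generic indexing file
# instead of re-indexing the PDFs from scratch.
arsload ... -X G
```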
From what I can see, the PDF Indexer (which I assume you are using) does not manage to compress the PDF documents during load. They actually wind up a bit BIGGER than they came in, which is not unusual, at least for smaller files. So you will not lose anything in storage size by using Generic Indexing instead of PDF Indexing.
WHICH brings me to the next possible issue. Seeing that OnDemand is unable to compress the multi-document files to any degree makes me suspect that the PDF files may have "PDF Compression" turned on. This is a known factor that slows down OnDemand indexing a lot, sometimes by a factor of 50 or 100.
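If you want to check that suspicion, one crude test is to look for the /FlateDecode filter name, which PDF writers put in the dictionary of every Flate-compressed stream; a file built with compression off will typically show few or none. A minimal sketch (sample.pdf is a placeholder; the printf just fabricates a one-object stand-in so the snippet is self-contained, and you would skip that line on a real file):

```shell
# Fabricate a minimal stand-in "PDF" containing one compressed-stream
# declaration, purely so this snippet runs on its own.
printf '%%PDF-1.4\n1 0 obj\n<< /Length 8 /Filter /FlateDecode >>\nstream\n' > sample.pdf

# Count lines declaring the /FlateDecode filter: compressed content streams
# name this filter, so a high count suggests PDF compression is turned on.
grep -c 'FlateDecode' sample.pdf
# → 1 for the stand-in file above
```

This only counts filter declarations, so treat it as a rough indicator, not a precise measure of how compressed the file is.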
Now, since these files are already created and you are just moving them, I guess you can't rebuild them. But for future files, talk to the people creating these PDF files and ask them to turn PDF Compression COMPLETELY OFF. If I recall Bud Paton correctly, that is "Level 0 PDF Compression". Yes, the files will be much larger at delivery, but OnDemand should process them a great deal faster.
I hope that one or both of these tips helps you!