safemode wrote:we're using on-the-fly compression for all the textures as it is now anyway. Unless you're running in highest-quality mode with no compression at all, everything you look at in the game is compressed with the same codec the dds files use, through the same routines. Probably not with the GL_NICEST hint that gimp-dds will use, though.
I've seen many cases of users disabling compression because of the quality decrease, mostly with modern cards that have more than enough video memory. I'm talking about PR and privateer-related mods, which use much smaller texture sets.
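For reference, the video-memory tradeoff is easy to quantify. A minimal sketch of the S3TC/DXT storage arithmetic (these are general format facts, not Vega Strike code; the function name is mine):

```python
# DXT1 stores each 4x4 texel block in 8 bytes; DXT5 uses 16 bytes per block.
def dxt_size(width, height, bytes_per_block):
    """Compressed size in bytes for one mip level (blocks cover 4x4 texels)."""
    blocks_w = max(1, (width + 3) // 4)
    blocks_h = max(1, (height + 3) // 4)
    return blocks_w * blocks_h * bytes_per_block

# A 1024x1024 RGBA8 texture: 4 MiB raw, but only 512 KiB as DXT1 (8:1)
# or 1 MiB as DXT5 (4:1) -- which is why disabling compression hurts
# mostly on cards with less video memory to spare.
raw = 1024 * 1024 * 4
print(raw // dxt_size(1024, 1024, 8))   # → 8
print(raw // dxt_size(1024, 1024, 16))  # → 4
```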
safemode wrote:gimp will do batch recompression to dds, it does it cross-platform, it's open source and thus can be maintained indefinitely and edited if need be.
Maybe gimp can do high-quality compression too - I've never used that plugin - but the point still stands: precompressing to dds offline is always better, quality-wise, than letting the driver do it. I don't think users would like waiting for that to happen during setup... so I'd rather distribute the textures already in dds form, and you said it yourself: dds + gz does a very good job compression-ratio-wise (and sometimes better than png).
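As a hypothetical illustration of why dds + gz works well: the DXT payload of a dds file is block-structured, so any repetition across blocks is exactly what gzip can exploit. A sketch (the byte pattern below is made up, not a real texture):

```python
import gzip
import os

# 512 KiB of identical fake 8-byte "DXT1 blocks" (a flat-colored texture
# would look much like this) versus 512 KiB of incompressible noise.
flat_blocks = bytes.fromhex("f800f800aaaaaaaa") * 65536
noisy = os.urandom(len(flat_blocks))

# gzip collapses the repetitive blocks to almost nothing, but can't help
# with noise -- so the gain depends heavily on the texture's content.
print(len(gzip.compress(flat_blocks)))  # tiny compared to 524288
print(len(gzip.compress(noisy)))        # roughly 524288, no gain
```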
Anyway... I never considered having the dds files stand alone; it's absolutely necessary to have a separate repository with editing-friendly media. I'd prefer xcf (or a similar editable format, with layers kept separate and all), but whatever we have will do - as long as it's a lossless format.
safemode wrote:Take a png, compress it with your nvidia compressor. Send the compressed dds to me, I'll compress the same texture with gimp and see if there is any difference, positive or negative.
It almost sounds as if you didn't believe me. Almost hostile... I hope I'm misunderstanding you.
If you don't believe me... ok... your loss. Do the test if you want.
safemode wrote:I vote for renaming the current data5.x to something else. The next release is going to be a 5.x release, it only makes sense that the data module is data5.x.
I'm ok with that. Move the current 5.x to ogre_branch/data6.x if you will.
Huh... cool stuff.
Anyway, that's beside the point. We (the article and I) are talking about totally different things: in their case, the data doesn't fit in memory; in our case, it must fit in memory. So all the overheads are present in the read() version too, since the buffer must hold the entire file anyway and you can't disable paging. Also, it's the kernel itself doing the work, so it can optimize however it wants - and contrary to the article, we're talking Windows here (I don't know of any Linux-based DirectX), while they're talking FreeBSD.
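The distinction above can be sketched with Python's mmap module (the file name and contents are made up; this just illustrates the two loading strategies, not the engine's actual loader):

```python
import mmap
import os
import tempfile

# Fake "dds" file for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "texture.dds")
with open(path, "wb") as f:
    f.write(b"DDS " + bytes(1024))

# read(): the buffer must hold the whole file, so the copy (and any
# paging of that buffer) happens regardless.
with open(path, "rb") as f:
    data = f.read()

# mmap(): the kernel maps the file and pages it in on demand; the
# application sees the same bytes without an explicit copy.
with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert mapped[:4] == data[:4] == b"DDS "
    mapped.close()
```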
safemode wrote:That being said, our CPU use seems to be stuck in Python much more than in loading textures. This should decrease pauses some, but not enough, I think. We need to work on either optimizing the Python or moving key functions into C++ to speed it up.
I agree - I tried to say that, only in less detail. Loading, once on-the-fly compression is taken out of the equation, is not the issue.
Though back when I did profile VS, I noticed a fair amount of time was spent loading sprites - they produce expensive calls to open() / close() (opening and closing files is rather expensive, no matter the cache). At the time I tried adding caches to the most important classes, but a unified resource cache would go a long way toward fixing that (and toward better, more readable and modular code).
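A unified resource cache along those lines could be as simple as this sketch (all the names here are illustrative, not VS's actual API):

```python
# Minimal resource cache: repeated requests for the same path hit the
# dict instead of paying the open()/close() cost again.
class ResourceCache:
    def __init__(self, loader):
        self._loader = loader  # function: path -> loaded resource
        self._cache = {}
        self.misses = 0

    def get(self, path):
        if path not in self._cache:
            self.misses += 1  # only here do we actually touch the filesystem
            self._cache[path] = self._loader(path)
        return self._cache[path]

# Hypothetical usage: a hundred sprite lookups, one real load.
cache = ResourceCache(lambda p: f"sprite:{p}")
for _ in range(100):
    cache.get("hud/crosshair.spr")
print(cache.misses)  # → 1
```

In a real engine the loader would return texture or sprite objects and the cache would need an eviction policy, but even this shape removes the per-frame open()/close() churn.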