The problem:
Color precision was already lacking at 8,8,8 bits per channel, and then DDS came along and reduced that to 5,6,5...
The problem is already pretty serious at the bright end, where a single step in green intensity is 100 * 1/2^6 = 100/64 = ~1.6% of full intensity.
Red and blue steps, being 5 bits, are ~3.1%.
But the problem gets really bad at lower intensities. At 50% green, a step is ~3%. At 50% red or blue, a single step is a whopping ~6% change.
This is NOT good...
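The step-size arithmetic above can be sketched in a few lines (using the post's 1/2^n approximation for a step; the exact step for an n-bit channel would be 1/(2^n - 1)):

```python
def step_percent(bits, intensity=1.0):
    """One quantization step of an n-bit channel, as a percentage of
    the given intensity level (1.0 = full brightness)."""
    step = 1.0 / 2 ** bits          # the post's approximation: 1/2^n
    return 100.0 * step / intensity

# 5:6:5 at full intensity
print(step_percent(6))        # green, 6 bits: 1.5625 (~1.6% per step)
print(step_percent(5))        # red/blue, 5 bits: 3.125 (~3.1%)

# Same channels at 50% intensity: the relative error doubles
print(step_percent(6, 0.5))   # 3.125
print(step_percent(5, 0.5))   # 6.25
```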
A solution (the best, IMO):
If I were writing my own engine, I would forgo half of the compression benefit of DDS: use DXT5, with the RGB channels carrying chroma only and the alpha channel carrying luma. The alpha channel in DXT5 is actually 8 bits.
But while DXT1 achieves 6:1 compression, DXT5 stops at 3:1.
Personally, I think the sacrifice is worth it, and I'd do this without a second thought; but I doubt I'd be able to convince anyone here.
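A minimal sketch of that luma-in-alpha idea, to make it concrete. The particular chroma encoding here (color normalized to its brightest channel) is my own assumption for illustration, not anything implemented:

```python
def pack_luma_chroma(r, g, b):
    """Split an RGB color (0..1 floats) into chroma RGB + 8-bit-worthy luma A."""
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma weights
    m = max(r, g, b) or 1.0                    # avoid division by zero on black
    # Chroma uses the full channel range, so 5:6:5 quantization hurts it less;
    # the precision-critical luma rides in the 8-bit DXT5 alpha.
    return (r / m, g / m, b / m, luma)

def unpack_luma_chroma(cr, cg, cb, a):
    """Rescale the chroma so its luma matches the stored luma."""
    luma_c = 0.299 * cr + 0.587 * cg + 0.114 * cb
    k = a / luma_c if luma_c else 0.0
    return (cr * k, cg * k, cb * k)
```

In floats the round trip is exact; the point is that after DXT compression the quantization error lands mostly on chroma, where the eye is less sensitive, instead of on luma.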
So, I'm going to propose a compromise, instead:
The proposed solution:
We've got all these material numbers in xmesh...
Code:
<Mesh scale="1.000000" reverse="0" forcetexture="0" sharevert="0" polygonoffset="0.000000" blend="ONE ZERO" texture="dark.png" >
<Material power="60.000000" cullface="1" reflect="1" lighting="1" usenormals="1">
<Ambient Red="0.500000" Green="0.500000" Blue="0.500000" Alpha="1.000000"/>
<Diffuse Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
<Emissive Red="0.000000" Green="0.000000" Blue="0.000000" Alpha="1.000000"/>
<Specular Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
</Material>
By "using them" I don't mean the code reading them; I believe it does that already. I mean writing useful numbers into them, instead of always 1.0 or 0.0 or the occasional 0.5...
At PU we've been working on and off on a Blender Compositor Nodes-based texturing platform. Now it's rather imminent. Just to describe the project briefly...
Blender's Nodes are a substitute for the Gimp/Photoshop layering system. You've got all kinds of input, output and processing nodes. Input and output nodes are typically textures. Processing nodes include blending operations, math ops, color ops, filters such as blurs, and even moving ops such as vertical and horizontal offsets. You create networks of nodes, connecting the outputs of some to the inputs of others. The connections look like spaghetti, are called "noodles", and represent the flow of texel data.
The best part of it is that ALL operations are done at 32-bit float precision per channel. The result is only dithered and quantized when writing out a texture.
So...
So far we've got parts of the grand noodle working. Once it's done, you'll be able to give it a set of input textures (diffuse and specular material colors, a bumpmap, an ambient occlusion bake, a paint layer, and optional textures such as shininess and high-to-low-rez normal baking), then click Start, and after an hour of number crunching the noodle will spit out a set of textures: diffuse, specular, glow, shininess and normal map.
It will include features such as: modulation of ambient occlusion by the bumpmap (by difference of Gaussians); material detection from diffuse and specular colors (distinguishing aluminium, titanium, stainless steel, chrome, bronze, copper, glass, ceramic, paints, etcetera) with appropriate shininess applied; procedural rusts typical of titanium and steel; bleaching of paints from light exposure; dirt streaks emanating from bumpmap features such as grooves; impacts and scratches that peel paint, clean off rust and put in dents; and many more.
Needless to say, once this "La Grande Noodle" as we call it is operational, we'll share it with Vegastrike. It will make texturing ships a walk in the park.
Well, if there's interest in this, I could try and make the noodle also normalize the RGB channels, so that at least a few pixels reach 255 in red, some reach 255 in green and some reach 255 in blue, and save the correction multiplier so that we can enter it into the xmesh.
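A per-channel sketch of that normalization, assuming 8-bit channel values; the function name and toy data are mine, for illustration only:

```python
def normalize_channel(values):
    """Scale one channel (list of 0..255 ints) so its peak hits 255;
    return the scaled values plus the correction multiplier to write
    into the xmesh material color for that channel."""
    peak = max(values)
    if peak == 0:
        return values, 0.0
    scale = 255.0 / peak
    normalized = [min(255, round(v * scale)) for v in values]
    return normalized, peak / 255.0   # e.g. the Diffuse Red value

norm, mult = normalize_channel([10, 60, 120])   # toy red-channel data
print(norm)    # [21, 128, 255]: full range used, smaller quantization steps
print(mult)    # ~0.47: the multiplier that restores original intensities
```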
Actually, for the emissive (glow) texture, which I use for a prebaked ambient light contribution, I have a better idea; but it's a long story.
Now, some things in xmesh may have to change... Well, not for my thing; but because there's stuff in xmesh that doesn't belong there...
Code:
<Ambient Red="0.500000" Green="0.500000" Blue="0.500000" Alpha="1.000000"/>
<Diffuse Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
<Emissive Red="0.000000" Green="0.000000" Blue="0.000000" Alpha="1.000000"/>
<Specular Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
Now, normalization is not needed for the emissive (glow) texture either, for two half-reasons:
Half of the glow texture's use is for stop-lights and windows and whatnot, and these lights are usually bright. No need for normalizing.
The other half of the glow texture's use is for ambient light baking. The diffuse texture is multiplied in Gimp or in Blender by the baked ambient occlusion, and then this is added to the glow texture. It looks great, BUT, it doesn't respond to actual ambient light intensity and color.
The good thing is that light fixtures are bright, while ambient light contribution is dim. So, in the shader, I'm planning to modulate the glow texture by ambient light, but only for dimmer colors...
Amount of modulation = 1 - (luma(glow texture))^2, or something like that.
But then, all I need is the per-system Ambient color figure I was talking about, obtained from the backgrounds; so I won't need the emissive color either. As for normalizing the ambient contribution: not needed, because my actual plan is to use the diffuse normalization value. And I'm planning to use a gamma of 0.5 for the glow texture anyway, and degamma it in the shader, which is just squaring the channels (a dot of the color with itself), since gamma 0.5 is just the square root.
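The gamma-0.5 encoding and the ambient modulation can be sketched together. The exact way the modulation blends glow with ambient is my guess at the intent (bright texels pass through untouched, dim texels get scaled by ambient); only the 1 - luma^2 formula and the square/square-root gamma pair come from the text:

```python
import math

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights

def encode_glow(c):
    return math.sqrt(c)                        # gamma 0.5, applied at bake time

def shaded_glow(glow_rgb, ambient_rgb):
    """glow_rgb: gamma-0.5-encoded texel (0..1); ambient_rgb: system ambient."""
    decoded = [c * c for c in glow_rgb]        # degamma: square the channels
    mod = 1.0 - luma(*decoded) ** 2            # the post's 1 - luma(glow)^2
    # My assumed blend: mod = 0 leaves the texel alone, mod = 1 scales it
    # fully by ambient light.
    return [d * ((1.0 - mod) + mod * a) for d, a in zip(decoded, ambient_rgb)]

# A fully bright light fixture is unaffected by ambient color:
print(shaded_glow([1.0, 1.0, 1.0], [0.2, 0.2, 0.2]))   # [1.0, 1.0, 1.0]
```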
Finally, I won't need a shininess texture, because shininess will piggyback as the alpha channel of the specular texture; and the alpha channel in DXT5 is 8 bits, so I could have shininess powers from 0 to 255 in unit increments, which is good enough.
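For completeness, a tiny sketch of that packing (function names are mine; in the actual shader the alpha would arrive as a 0..1 float and be multiplied back up by 255):

```python
def pack_shininess(power):
    """Quantize a specular power of 0..255 into the specular alpha byte."""
    return max(0, min(255, int(round(power))))

def unpack_shininess(alpha_byte):
    """Recover the power; unit increments, 0..255."""
    return float(alpha_byte)

print(pack_shininess(60.0))   # 60: the power="60.000000" from the xmesh sample
```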
Deployment:
This is the harder part, of course.
Hopefully, having La Grande Noodle will simplify ship texturing so much that every ship in both Vegastrike and PU will get redone in no time.
Probably there will need to be auxiliary noodles to extract info from existing texture jobs.
There will also be a need for simplifying or automating color info editing for the meshes.
But no matter, there will be a transition period where new shaders and old shaders will have to exist side-by-side. If I may suggest, perhaps a parameter in xmesh should be born to indicate shader version.
Enough typing for now; got a splitting headache...