Proposal: Normalized textures

chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Proposal: Normalized textures

Post by chuck_starchaser »

This is NOT a feature request; rather, a feature I think I can implement myself, with a bit of help. The object of this post is to seek moral support.

The problem:
Color precision was already lacking at 8,8,8 bits per channel, and then came DDS and reduced that to 5,6,5...
The problem is already pretty serious on the bright side, where a single step in green intensity is 100 * 1/2^6 = 100/64 = ~1.6% of full intensity.
Red and blue steps, being 5 bits, are ~3%.
But the problem gets real bad at lower intensities. At 50% green, a step is ~3%. At 50% red or blue, a single step is a whopping ~6% change.
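The arithmetic, spelled out (same 1/2^bits step size as above, taken relative to the current level rather than to full scale):

Code: Select all

// Relative size of one quantization step at a given intensity (0..1].
float relativeStep(int bits, float intensity) {
    return 1.0f / ((1 << bits) * intensity);
}
// relativeStep(6, 1.0f) ~= 0.016  (~1.6%: green at full intensity)
// relativeStep(5, 1.0f) ~= 0.031  (~3%:   red/blue at full intensity)
// relativeStep(6, 0.5f) ~= 0.031  (~3%:   green at 50%)
// relativeStep(5, 0.5f) ~= 0.063  (~6%:   red/blue at 50%)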

This is NOT good...

A solution (the best, IMO):
If I were writing my own engine, I would forgo half of the compression benefit of dds: use dxt5, put chroma only in the rgb channels, and put luma in the alpha channel. The alpha channel in dxt5 is a true 8 bits.
But while dxt1 achieves 6:1 compression, dxt5 stops at 3:1.
Personally, I think the sacrifice is worth it, and I'd do this without a second thought; but I doubt I'd be able to convince anyone here.

So, I'm going to propose a compromise, instead:

The proposed solution:
We've got all these material numbers in xmesh...

Code: Select all

<Mesh scale="1.000000" reverse="0" forcetexture="0" sharevert="0" polygonoffset="0.000000" blend="ONE ZERO"  texture="dark.png" >
<Material power="60.000000" cullface="1" reflect="1" lighting="1" usenormals="1">
	<Ambient Red="0.500000" Green="0.500000" Blue="0.500000" Alpha="1.000000"/>
	<Diffuse Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
	<Emissive Red="0.000000" Green="0.000000" Blue="0.000000" Alpha="1.000000"/>
	<Specular Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
</Material>
</Mesh>
...and my proposal is to use them.
I don't mean the code using them; I believe it is, already. I mean "using them" in the sense of writing useful numbers to them, instead of always 1.0 or 0.0 or the occasional 0.5...

At PU we've been working on and off on a Blender Compositor Nodes-based texturing platform. Now it's rather imminent. Just to describe the project briefly...

Blender's Nodes are a substitute for the Gimp/Photoshop layering system. You've got all kinds of input and output nodes and processing nodes. Input and output nodes are typically textures. Processing nodes include blending operations, math ops, color ops, filters such as blurs, and even moving ops such as vertical and horizontal offsets. You create networks of nodes, connecting outputs of some to inputs of others. Those connections look like spaghetti and are called "noodles"; they represent the flow of texel data.
The best part of it is that ALL operations are done at 32-bit float precision per channel. The result is only dithered and quantized when a texture is written out.

So...

So far we've got parts of the grand noodle working. Once it's done, you'll be able to give it a set of input textures, including diffuse and specular material colors, a bumpmap, an ambient occlusion bake, a paint layer, and optional textures such as shininess and a hi-to-lo rez normal bake; then you click start, and after an hour of number crunching the noodle will spit out a set of textures: diffuse, specular, glow, shininess and normal map.
It will include such features as modulation of ambient occlusion by the bumpmap (by difference of gaussians); material detection from diffuse and specular colors (like between aluminium, titanium, stainless, steel, chrome, bronze, copper, glass, ceramic, paints, etcetera) with appropriate shininess applied; procedural rusts typical of titanium and steel; bleaching of paints from light exposure; dirt streaks emanating from bumpmap features such as grooves; impacts and scratches that peel paint, clean rust and put in dents; and many more.
Needless to say, once this "La Grande Noodle" as we call it is operational, we'll share it with Vegastrike. It will make texturing ships a walk in the park.

Well, if there's interest in this, I could try to make the noodle also normalize the rgb channels, so that at least a few pixels are at 255 red, a few at 255 green, and a few at 255 blue; and save the correction multipliers so that we can enter them into the xmesh.
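Just to pin down what I mean by "normalize", here's a minimal sketch of that step in C++ (standing in for the noodle's output stage; all the names here are illustrative, not existing code):

Code: Select all

#include <algorithm>
#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b; };

// Stretch each channel so its maximum hits 255, and return the
// multipliers (<= 1.0) that would be written into the xmesh colors
// to undo the stretch at render time.
void normalizeChannels(std::vector<Pixel> &tex, float mult[3]) {
    uint8_t maxc[3] = {1, 1, 1};                  // start at 1 to avoid divide-by-zero
    for (const Pixel &p : tex) {
        maxc[0] = std::max(maxc[0], p.r);
        maxc[1] = std::max(maxc[1], p.g);
        maxc[2] = std::max(maxc[2], p.b);
    }
    for (Pixel &p : tex) {
        p.r = uint8_t(p.r * 255 / maxc[0]);
        p.g = uint8_t(p.g * 255 / maxc[1]);
        p.b = uint8_t(p.b * 255 / maxc[2]);
    }
    for (int i = 0; i < 3; ++i)
        mult[i] = maxc[i] / 255.0f;
}

That way the texture uses the full quantization range, and the mesh's material color scales it back down.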

Actually, for the emissive (glow) texture, which I use for a prebaked ambient light contribution, I have a better idea; but it's a long story.

Now, some things in xmesh may have to change... Well, not for my thing; but because there's stuff in xmesh that doesn't belong there...

Code: Select all

   <Ambient Red="0.500000" Green="0.500000" Blue="0.500000" Alpha="1.000000"/>
   <Diffuse Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
   <Emissive Red="0.000000" Green="0.000000" Blue="0.000000" Alpha="1.000000"/>
   <Specular Red="1.000000" Green="1.000000" Blue="1.000000" Alpha="1.000000"/>
I don't know what "ambient" is doing there. A ship has no eyes and knows nothing about what the ambient light is. Ambient should be a per-system thing, NOT a per-ship thing; and, btw, it should be computed on the fly by averaging the background color (or reading the last mipmap ;-)) for each system visited, so that there's no chance of human error causing disagreement between background and ambient light colors or intensities.

Now, emissive is not needed either for two half-reasons:
Half of the glow texture's use is for stop-lights and windows and whatnot, and these lights are usually bright. No need for normalizing.
The other half of the glow texture's use is for ambient light baking. The diffuse texture is multiplied in Gimp or in Blender by the baked ambient occlusion, and then this is added to the glow texture. It looks great, BUT, it doesn't respond to actual ambient light intensity and color.
The good thing is that light fixtures are bright, while ambient light contribution is dim. So, in the shader, I'm planning to modulate the glow texture by ambient light, but only for dimmer colors...
Amount of modulation = 1 - (luma(glowtexture))^2, or something.
But then, all I need is the Ambient color figure I was talking about, per system, obtained from the backgrounds. So, emissive color I won't need either. And as for normalizing the ambient contribution, that's not needed, because my actual plan is to use the diffuse normalization value. And I'm planning to use a gamma of 0.5 for the glow texture anyways, and degamma it in the shader (which is just multiplying the color by itself --i.e.: squaring the channels, since gamma of 0.5 is just the square root).
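Roughly what I have in mind, with C++ standing in for shader code (just a sketch; the exact modulation curve is up for grabs, and all the names are made up):

Code: Select all

struct Vec3 { float r, g, b; };

static float luma(const Vec3 &c) { return (c.r + c.g + c.b) / 3.0f; }

// glowTexel is stored with gamma 0.5, so squaring each channel degammas it.
// m = 1 - luma^2: dim texels (the baked ambient contribution) get fully
// modulated by ambient light; bright texels (actual lights) barely at all.
Vec3 glowContribution(const Vec3 &glowTexel, const Vec3 &ambient) {
    Vec3 glow = { glowTexel.r * glowTexel.r,
                  glowTexel.g * glowTexel.g,
                  glowTexel.b * glowTexel.b };
    float l = luma(glow);
    float m = 1.0f - l * l;
    return { glow.r * (1.0f + m * (ambient.r - 1.0f)),   // lerp(1, ambient, m)
             glow.g * (1.0f + m * (ambient.g - 1.0f)),
             glow.b * (1.0f + m * (ambient.b - 1.0f)) };
}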

Finally, I won't need a shininess texture, because shininess will piggyback as the alpha channel of the specular texture, and the alpha channel in dxt5 is 8 bits, so I could have a shininess power of up to 255 in unit increments, which is good enough.

Deployment:
This is the harder part, of course.
Hopefully, having the LaGrandeNoodle will simplify ship texturing so much that in both Vegastrike and PU every ship will get redone in like no time.
Probably there will need to be auxiliary noodles to extract info from existing texture jobs.

There will also be a need for simplifying or automating color info editing for the meshes.

But no matter, there will be a transition period where new shaders and old shaders will have to exist side-by-side. If I may suggest, perhaps a parameter in xmesh should be born to indicate shader version.

Enough typing for now; got a splitting headache...
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Proposal: Normalized textures

Post by chuck_starchaser »

chuck_starchaser wrote:If I may suggest, perhaps a parameter in xmesh should be born to indicate shader version.
Let me retract that:
NOT "shader version", but rather "texture packing version".

Important difference, because shaders may improve over time in ways the bfxm wouldn't care about. But in the ways the bfxm would care about, so would the opengl code that has to emulate what the shaders do in the absence of shader support.
What all three of the bfxm, the ogl code and the shaders need to be synchronized about is the type of "texture packing" --that is: what goes in the alpha channel of what? Are the colors of this or that texture normalized? Do they use a gamma other than 1? Etcetera...

I'm already toying with a new idea, --a major departure from the first post:

One thing the LaGrande Noodle currently does is premultiply three textures by functions of the Ambient Occlusion baking (ao). Namely,
  • Diffuse texture is premultiplied by the square root of the ao.
  • The diffuse is also pre-multiplied by the ao (no square root; just plain) to become a contribution to the light map (glow texture); and
  • the specular materials base texture is premultiplied by the square (NOT the square root) of the ao, so as to hide away the "artifact" resulting from the fact that environment mapping lacks the ability to self-reflect --i.e.: parts of the ship reflecting off other parts of the ship.
But what I'm thinking now, for four reasons, is to NOT pre-multiply these base textures, but to throw the ambient occlusion into the alpha channel of the glow map, and do the multiplications in the shader (see the sketch after the list of reasons below).

The reasons are as follows:
  • It will take fewer shader instructions to do this than to try to discriminate light map from lights in the glow map --for the sake of modulating the former but not the latter by the ambient light color.
  • It will reduce banding artifacts caused by the low precision of dxt rgb channels. IOW, having pre-baked ao introduces smooth gradients into the textures, --smooth gradients being something that dxt's 5-6-5-bit precision particularly sucks at. Instead, the gradients would be produced in the shader and come from an 8-bit ao (from the glow texture's alpha channel, as I said).
  • Most importantly, it will allow the ambient light contribution to be modulated independently from the light map. What does this mean? Well, the "light map" is a separate baking, representing light contributions from static lights on a model. Vegastrike hasn't used this technique much yet, but it should: it looks fantastic (see the Cutter; that's in-game, not even using shaders). The nice thing about light-mapping is that it shows (pre-baked) shadows, which kind of helps your eyes not notice the lack of shadows in the main lighting. But the static light-map is an additive contribution to the glow map that should NOT be modulated by ambient color; whereas the ambient light contribution should be. And if we pre-mix the two in the glow map, there'd be no way for the shader to separate them. So, having the ambient occlusion baking sitting by itself in the glow map's alpha channel, and at 8-bit precision to boot, allows us to generate the ambient light contribution in the shader, and to modulate it by ambient light color independently, before adding it to the lights and static light map coming from the glow-map's rgb.
  • Just as importantly, it would allow existing texturing jobs to work with the new shaders, and would make a partial conversion to the new packing easier: if the alpha channel is not present in the glow map, we can simply assume it to be 1.0, which simply doesn't modulate anything. To partially convert the textures for a unit, all you need to do is bake an ambient occlusion in Blender and throw it into the glow map's alpha channel. Same as we treat shininess currently: just throw it in the specular texture's alpha channel when ready.
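Here's the combination I'm picturing, with C++ standing in for shader code (a sketch under the proposed packing; none of these names exist anywhere yet):

Code: Select all

struct Vec3 { float r, g, b; };

// glowRgb = lights + static light map; ao = the glow map's alpha
// (assume 1.0 when the channel is absent, which changes nothing).
Vec3 emissiveTerm(const Vec3 &glowRgb, float ao,
                  const Vec3 &diffuse, const Vec3 &ambientLight) {
    // Ambient contribution, generated in the shader and modulated by
    // ambient light color independently of the lights/light map:
    Vec3 amb = { diffuse.r * ao * ambientLight.r,
                 diffuse.g * ao * ambientLight.g,
                 diffuse.b * ao * ambientLight.b };
    // The lights and the static light map are added unmodulated:
    return { glowRgb.r + amb.r, glowRgb.g + amb.g, glowRgb.b + amb.b };
}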
EDIT:
BTW, I'm also thinking about having a bit of Fresnel in the shaders. What's fresnel? Well, some materials change in specularity with view angle. For such materials, specular intensity is a function of the view angle and the material's dielectric constant. I'm NOT thinking about anything complicated in terms of shader code; just some rough approximation, first of all. Secondly, I'm not thinking about having the dielectric constant of materials packed into some texture channel. Rather, I'm thinking of having a single bit to distinguish dielectrics from non-dielectrics. Metals are non-dielectric, so fresnel doesn't apply to them. The only dielectrics would be windows and glossy paints. So, what I AM thinking about is picking some texture for which we have NO alpha channel (yet), and, rather than add an 8-bit alpha channel, adding a 1-bit alpha (dxt3), and using that bit to indicate whether the material is supposed to be a dielectric.
So, for the LaGrande Noodle, instead of having a single "paint" input texture, we'd have two input textures: "matte_paint" (non-dielectric) and "glossy_paint" (dielectric).


EDIT2:
Here's my new texture packing proposal, as per the above:

Code: Select all

texture name |     bits per channel    |          content            | TYPICAL HIGH-QTY TEXTURE SIZES, COMPRESSION
-------------| ----------------------- | --------------------------- | ------------------------------------------------
texture name | red | grn | blu | alpha | rgb encodes | alpha encodes | fighters | corvettes | carrier/stn | compression
============ | === | === | === | ===== | =========== | ============= | ======== | ========= | =========== | ===========
 diffuse.png |  5  |  6  |  5  |   8   | diff color  |     alpha     |   1024   |    2048   |    4096     | DXT5
specular.png |  5  |  6  |  5  |   8   | spec color  |   shininess   |   1024   |    2048   |    4096     | DXT5
emissive.png |  5  |  6  |  5  |   8   | lights+Lmap |   ao baking   |    512   |    1024   |    2048     | DXT5
  normal.png |  5  |  6  |  5  |   8   | ~7-bit dV   |    8-bit dU   |   2048   |    4096   |    8192     | DXT5
  damage.png |  5  |  5  |  5  |   1   |  dmg color  | is dielectric |   1024   |    2048   |    4096     | DXT3
  detail.png |  5  |  6  |  5  |   8   |  diff color |   shininess   |    256   |     512   |    1024     | DXT5
NOTE: For the normalmap, my idea is to use all three rgb channels for dV,
with +/-(1/128) offsets in the red and blue channels, respectively, so as to
get an extra bit of resolution out of them; then compute the height of the
normals in the shader.

Another idea, if we only have one type of transparent material per ship (glass), would be to specify its transparency in the xmesh, and to use 1 bit alpha (dxt3) for the diffuse texture. When alpha is one, the material is opaque; when the alpha bit is zero, then we use the alpha value specified in the bfxm. Or, to make this compatible with the current texture packing, xmesh could specify "MinAlpha". Whether we get a 0 or 1 value from the texture, or a value varying in-between (from 8-bit alpha), we simply clip the value so it's never less than MinAlpha.
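The MinAlpha clipping would be a one-liner (sketch; "MinAlpha" is a proposed attribute here, nothing reads it yet):

Code: Select all

#include <algorithm>

// textureAlpha: 0/1 from dxt3, or anything in between from 8-bit alpha.
float effectiveAlpha(float textureAlpha, float minAlpha) {
    return std::max(textureAlpha, minAlpha);  // never more transparent than MinAlpha
}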
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I like it all. The idea about dielectrics. The packing. Maybe the normalmap is a bit complicated for the shader. Diffuse and normalmaps have to be sampled multiple times if parallax is implemented - and you're forgetting about height, BTW - but I'm not sure parallax is THAT useful for ships, and in any case it can be thought of as a different packing with different texture sets. Which brings me to: I like the idea of having some "texture packing" annotation in the bfxm. It would be rather simple to code, and truly, truly useful.

Thumbs up.

BTW: 1-bit alpha is DXT1, not DXT3. DXT1c is DXT1 without alpha. IIRC.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

there are dxt1, dxt2, dxt3, dxt4, dxt5

dxt1 has a 1 bit alpha channel.

dxt2, 3 have an explicit alpha channel (no blending or interpolating)

dxt4 has interpolated alpha

dxt5 has blended alpha


obviously, 2 and 4 aren't ever really mentioned, and are rarely used.

dxt1 is dxt1. Its structure never changes. It always has 1-bit alpha.

That doesn't mean you can't change the way you deal with them. That's where all these variations for normal maps and such come from: they pack the DXT file differently, and your game is supposed to deal with the textures appropriately, differently from normal. But they are all loaded into GL the same way, and they are all physically built the same way; there is no underlying difference between a normal-mapped dxt5 file and a regular one. The compressor is just doing some conversion for you that otherwise you'd have to do yourself, prior to compression.
Ed Sweetman endorses this message.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

safemode wrote:dxt1 is dxt1. Its structure never changes. It always has 1-bit alpha.
No, no. That's not true. You have two variants of DXT1 - DXT1c and DXT1 (plain). Both of them have a special code value, but each gives it a different meaning. DXT1c, IIRC, makes it mean "black" (regardless of the interpolants). DXT1 (plain) makes it mean "transparent", hence the 1-bit alpha.

DXT2 & 4 are weird bugs; they are equivalent to DXT3 & 5 respectively, but have premultiplied alpha, for whatever good that may be. I think it makes for better bilinear interpolation in some usages, but nobody ever uses them.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

they are the same thing. Just one treats the special code as alpha 0, the other as 1.

http://cs.elderscrolls.com/constwiki/in ... _algorithm


Furthermore, people like to say that dxt1 compresses at better quality than dxt5. This is not the case. They both compress rgb exactly the same. The only difference is in the alpha channels between the formats. How they're decoded is totally up to the graphics card, though, and some may end up "optimizing" things in ways that make one look better than another. In the end, we choose for the smallest memory footprint for the functionality. If we can get away with 1-bit alpha, then we use dxt1; if we can get away with non-blended alpha, dxt3 is the way to go (rare); otherwise we use dxt5. We want the smallest footprint in videoram and on disk as possible while remaining extremely fast.

All your nvcompress and such targets are merely different ways to pack the files. The files themselves have a format that doesn't vary.
Ed Sweetman endorses this message.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

That article is wrong.
I mean it.
I've seen the S3TC specs.

Whether DXT1 is in "transparent" mode or not depends on whether c1 > c2 or not (c1 & c2 being the 16-bit codes that encode the two interpolant colors). When it's in transparent mode, true, alpha is regarded as totally white - but it's never encoded. Instead, the code reserved for transparency is used for black. So, in theory, DXT1c would be better for line-art (things with black text/shapes on it). Each block can be in either mode, but somehow compression utilities make you decide globally. Another weird thing.

It's also wrong in that DXT1 rgb colors are encoded the same as in DXT3 and DXT5. They're similarly encoded, but since DXT3/5 don't encode alpha or black in the bits reserved for RGB, they have one extra interpolation step. Ie: better quality.

Furthermore, hardware has been limited by the standard in that DXT1 must be interpolated in 16 bits. This produces a lot of banding (as if DXT1 textures were 16-bit pngs). DXT3/5 have no such limitation. Though I think most GPUs have some sort of configuration option to override this and make DXT1 interpolate in high quality (most applications never notice, and almost always benefit from this rather than the opposite).
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

So, which is 5,6,5 no alpha?
klauss wrote:I like it all. The idea about dielectrics. The packing.
YEY! Thanks! :D
klauss wrote:Maybe the normalmap is a bit complicated for the shader.
Not sure in what sense you mean: in using rgb for dU, or in forcing the recomputation of the normal's height in the shader?

In terms of rgb encoding, what I meant is that, in the Blender noodle, before saving the dU value to a png as rgb, I add/subtract offsets from the red and blue values. The idea is that as the input value increases, the three channels grow in syncopated steps, like
red increments
green increments
blue increments
green increments
red increments
green increments
blue increments
green increments
....
Green increments twice as often as the other two because it has 6 bits instead of 5.
But all the shader has to do with that is average... du=0.333*(r+g+b).

In terms of the shader having to recompute the normal's height, I didn't explain the whole idea. The whole idea is for the Blender noodle to save a special flavor of dU/dV: instead of them being the sine of the angle, they'd be (tangent of the angle)/2. This limits the normal's range to +/-arctan(2), or +/-63.4 degrees, but it has the advantage that all I have to do in the shader is double the incoming dU/dV values, assume the height to be 1.0 (I mean, write 1.0 to it), and renormalize.
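In code, the decode would be about this cheap (C++ standing in for shader code; du and dv are assumed already expanded from the unsigned channels to [-1,1], with du = 0.333*(r+g+b) per the syncopated encoding above):

Code: Select all

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 decodeNormal(float du, float dv) {
    // Each value holds tan(angle)/2, so doubling recovers the tangent;
    // with the height written as 1.0, renormalizing yields the normal.
    Vec3 n = { 2.0f * du, 2.0f * dv, 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}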
klauss wrote:Diffuse and normalmaps have to be sampled multiple times if parallax is implemented - and you're forgetting about height, BTW - but I'm not sure parallax is THAT useful for ships, and in any case it can be thought of as a different packing with different texture sets,
Exactly; your shaders will definitely need a new packing, as you need height, and PRTP and PRTN.
Maybe PRTP could carry height in the alpha channel, and PRTN have PRT luma (for both P and N).
That way you only use the lower-precision rgb channels for chroma.
Last edited by chuck_starchaser on Thu Mar 27, 2008 7:29 pm, edited 5 times in total.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

so you're saying dxt1 rgb uses its 1-bit alpha code to encode black rather than the alpha layer, whereas dxt1 rgba uses the bit to encode the alpha layer, and black is treated like a normal mixture of color?

I haven't dealt with the encoding aspect of DXT, only the decoding. They're two different animals obviously.

I support both types of dxt1, but i'm not sure if anyone actually uses dxt1 rgb, as compressors default to rgba if they support rgb-mode dxt1 at all.

beware of the -rgb flag in nvcompress, i believe that's used to do uncompressed DDS files, which i do not support.
Ed Sweetman endorses this message.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I believe nvcompress actually does select the mode per-block, as it should be. That's only a belief, though. But I'm positive the hardware-based (driver-based, actually) compression in GIMP does not, and produces lower-quality DXT1a than DXT1c. So people should either refrain from using GIMP or, better yet, pick the right format (ie: if you don't need transparency, use DXT1c).
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

luckily decoding is easy.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Okay, I'd like to get cracking on this, if there are no objections. I have three personal problems, tho: a broken compiler, poor knowledge of the engine, and no svn commit access (I'm not asking, just mentioning); so I'll need help.

The plan:

STEP 1:

What I'd like to try to do, first of all, is fix the ambient light issue, and maybe implement cube-map support in passing.
Presently, I gather, when you jump to a new system, the engine checks for the presence of its required spheremap in .vegastrike/textures/backgrounds. If it doesn't find it, it generates it.
At the graphics end of things, the graphics code is apparently sending an envColor uniform to the shader. Its value comes from a unit's bfxm file, which makes absolutely no sense whatsoever to me.

What I would like to do is perhaps try to put in cubemap support and get rid of the spheremap generation (which safemode was recently asking for help with). Instead, have the code check for, or else generate, a little text file in the backgrounds folder. This file would simply contain the envColor. To generate this file, the code would read the last LOD (a single texel) of each of the six sides of the cubemap textures and average them, then write the result to this text file.
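The averaging itself would be trivial; something like this (a sketch with hypothetical names; reading the texels back is the real work):

Code: Select all

struct Color { float r, g, b; };

// lastMip[i] = the single RGB texel of face i's last mip level.
Color envColorFromFaces(const Color lastMip[6]) {
    Color sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 6; ++i) {
        sum.r += lastMip[i].r;
        sum.g += lastMip[i].g;
        sum.b += lastMip[i].b;
    }
    return { sum.r / 6.0f, sum.g / 6.0f, sum.b / 6.0f };
}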

If there are no objections, what I need help with, first of all, is pointers as to where in the code these things happen. Namely, in what files are the relevant sections of code where spheremap generation and use are contained, as well as the graphics section where envColor is sent to the shader.

Then I will probably fix the shaders a little bit with regard to how envColor is used. Last time I looked at the shaders, I seem to remember there being some hard-to-understand stuff, like envColor and ambient light being different things, which shouldn't be, and some ambient light component being added separately to the specular light contribution. Ambient light should come from nowhere except the averaging of the environment textures; and it's not necessary to compute it separately for diffuse and specular. One can simply add the diffuse and specular colors and multiply by envColor to get the ambient contribution to the final pixel color.
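What I mean, as a sketch (NOT the current shader code; names are illustrative):

Code: Select all

struct Color { float r, g, b; };

// One ambient term, derived from envColor alone, covering diffuse and
// specular together instead of two separately-computed contributions.
Color ambientTerm(const Color &diffuse, const Color &specular,
                  const Color &envColor) {
    return { (diffuse.r + specular.r) * envColor.r,
             (diffuse.g + specular.g) * envColor.g,
             (diffuse.b + specular.b) * envColor.b };
}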

The result of this will be to have ambient color and intensity, visible on the dark sides of ships, that agree intuitively with the color and intensity of the background. It will be a small but noticeable improvement.


STEP 2:

I'll think about it after STEP 1 is done and working. It would probably be adding a TexturePackingVersion variable in the bfxm, so that we can then change one ship or unit at a time.

STEP 3:

Pick one ship and one station to be guinea pigs, give them a good new texturing using the LaGrande Noodle, new packing format, with the dielectric bit, and getting the new shaders to work.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

if you do cubemaps, i don't think we'd even need to generate anything in game. We'd create the cubemap as a dds file, load it for the background, and use that to do the ambient lighting too.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

safemode wrote:if you do cubemaps, i don't think we'd even need to generate anything in game. We'd create the cubemap as a dds file, load it for the background, and use that to do the ambient lighting too.
That's true; we could just read the last LOD of the six sides and average the color on the fly, to get the ambient light color value; --no need to save the value to a file.
So, any pointers as to where I can get started, in the code?
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

Well, as a first step, I would support reading them in (as a DDS file). I can add that in. All that is is a cat'd DDS file, basically, with a known order of the sides: you have side 1 with its mipmaps, then side 2 with its mipmaps, and so on. So reading that in is a trivial fix to the DDS code. What will be a problem is teaching the gfx code (aux_texture.cpp and background etc.) that we aren't dealing with 6 separate images, and that when we want to do something regarding the light and whatnot, we have to pull the GL subimages of the cubemap.

The gl code would probably have to be taught to treat the subimages as part of one texture, so that if it gets rid of one (due to memory constraints) it has to get rid of all of them.

The gl loading itself would happen in gl_texture.cpp, and that would be the easiest aspect: since it's already in a format GL likes, we just load it as a cubemap.

this may all be much easier than one thinks, since we might be able to get away with treating a cubemap like 6 separate textures by referencing its subimages whenever we need to do any mucking around.
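Something like this for the upload, once the faces are parsed out of the file (sketch only; faceData, faceSize, dxtFormat and w/h would come from the DDS header, and none of this is existing code):

Code: Select all

static const GLenum faces[6] = {
    GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_X_EXT,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Y_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_EXT,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Z_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_EXT
};
// One DXT-compressed image per face; all sides share the same w and h.
for (int i = 0; i < 6; ++i)
    glCompressedTexImage2D(faces[i], 0 /*mip level*/, dxtFormat,
                           w, h, 0, faceSize, faceData[i]);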
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Okay, gotta go to the bank now, but I'll take a look at the code when I get back.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Haven't looked at the code yet, but at least I have some progress to report on the LaGrande Noodle.
http://wcjunction.com/phpBB2/viewtopic.php?p=8038#8038
I'll look at the code when I get back in a couple of hours.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Alright, for a humble start, I think that in gldrv/gl_init.cpp, line 491, we need to insert,

Code: Select all

    glEnable(GL_TEXTURE_CUBE_MAP_EXT);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
There's something cube-map related in line 524; I'm not sure how related...
We need to query for the cube-map extension; dunno how that's done in the engine; I'm sure it's trivial.
I suppose the first thing to do is to get cubemaps to work in ogl, right?
We need to,

Code: Select all

glTexImage2D( GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_px );
glTexImage2D( GL_TEXTURE_CUBE_MAP_NEGATIVE_X_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_nx );
glTexImage2D( GL_TEXTURE_CUBE_MAP_POSITIVE_Y_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_py );
glTexImage2D( GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_ny );
glTexImage2D( GL_TEXTURE_CUBE_MAP_POSITIVE_Z_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_pz );
glTexImage2D( GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_EXT, 0,
              GL_RGB8, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, face_nz );
and then use GL_REFLECTION_MAP_EXT.
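For the GL_REFLECTION_MAP_EXT part, the fixed-function setup would be something like this (a sketch; where the engine actually wants this is another question):

Code: Select all

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_EXT);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_EXT);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_EXT);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);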
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

no, I would use glTexSubImage2D. there are some cubemap howtos out and about that i've perused; i'll have to find them again later.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

We need to edit the cubemaps? Sorry, I know nothing about VS.
There's also glCompressedTexSubImage2D for compressed images.
ace123
Lead Network Developer
Posts: 2560
Joined: Sun Jan 12, 2003 9:13 am
Location: Palo Alto CA

Post by ace123 »

Correct me if I'm wrong, but I'm pretty sure the code you are talking about already exists in star_system.cpp

I see that the star_system code, if using cubemaps, will use 6 lighting textures with the cubemap.

For some reason, it was implemented as an ifdef instead of an if/else, but that should be easy enough to change, something like

Code: Select all

#ifdef NV_CUBE_MAP
    if (gl_options.using_cube_maps) {
        ...
    } else
#endif
    {
        ...
    }
Though I also suspect everyone's version of GL will have cubemap extensions anyway, and it might be easier to remove the ifdefs.


EDIT: Just found the "gl_options.cubemap" variable that checks for cubemap support using the GL extension functions -- Probably would be best to check for that one.
Also, putting a glEnable in an init function is probably a bad idea... the code already disables GL_TEXTURE_CUBE_MAP_EXT whenever displaying other textures.

I know the GFX functions in the gldrv directory support cubemaps... though maybe it needs to be slightly changed for reflection cubemaps.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Yep, VS has a lot of code to support cubemaps. I'm not sure if it was tested or not, but it was disabled for some reason. I guess Hellcat's the only one who would know.

In any case, loading an entire cubemap out of a single dds is not done; the cubemap code in VS predates the DDS code. But it's not necessary. It's not desirable, either. If you did that, you'd have to have a separate file for the environment-mapping version and for the background cube (you don't want to use a cubemap to render the background). Although that is a good idea on many levels, I believe current VS content producers would be... bothered.

Instead, loading the separate 6 sides used for the background into each of the cubemap's sides is the best option, IMO, and code for that is already in. Each Texture object has a "Target" that can be used to put it into a specific side of the cubemap - but you need not play with it; it's already coded. Maybe all you have to do is test the existing code.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

klauss wrote: In any case, loading an entire cubemap out of a single dds is not done; the cubemap code in VS predates the DDS code. But it's not necessary. It's not desirable, either. If you did that, you'd have to have a separate file for the environment-mapping version and for the background cube (you don't want to use a cubemap to render the background). Although that is a good idea on many levels, I believe current VS content producers would be... bothered.
pyramid is currently the only VS content provider for backgrounds (even though he's doing planets mostly now). So I don't think it would be too hard to get 1 person moved over to making the cubemapped DDS file rather than the 6 separate DDS files that are the current way of doing it. The only difference between the two methods is that you're dealing with 1 file. A cubemap dds is just a concatenated DDS file in a specific order, with other assumptions related to cubemaps (all sides are equal). All we do when we load up the six sides individually is end up making a cubemap without really calling it that. So why not skip the intermediary step and just use cubemapped DDS files?

I see no reason to need 1 set of background images for the environment and 1 for the background. They can both use the exact same cubemap and not have to duplicate memory at all. Though we may use spheremaps for reflections for some other reason (like being easier to run the lighting algos over).

Basically, all cubemaps will end up doing is saving on the intermediary function calls and sequential reads from disk when loading a background. You get one read and one call to load it to GL; but from then on, cubemaps are dealt with just like separate images, as far as i've seen. Not sure how it affects the reflection mapping, but i suspect sphere mapping makes it easier at the expense of quality (and the extra texture you need to load).
Ed Sweetman endorses this message.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

The problem with using the cubemap for the background is performance. When you render the background, only a small portion of the environment is actually needed: at most three of the 6 sides will ever be used, resulting in quite a performance increase when rendered with 6 separate textures (instead of one single cubemap). I'm talking at the GL level - however you load it, if you create a single cubemap texture and map the environment box with it, you'll get less performance than if you map the box with 6 separate textures.

Now... the "right" way to do it is to have 6 separate textures loaded for the background, and then build a lower-res cubemap for environment mapping - either programmatically or by loading a separate dds cubemap of lower res than the 6 separate background images.

The not-so-right way to do it that I was proposing was using the 6 separate images to build the cubemap (yet still use them separately for the background). The code you introduced for loading downsampled dds (by loading the lower mip levels) would allow you to reduce the cubemap's resolution on the fly quite easily. So... I'm basically saying it should be done programmatically. It's "not so right" since downsampling cubemaps requires more sophistication than that, but it should be enough for now.
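For instance (a sketch with hypothetical variables; mipData would come from the same machinery as the downsampled-dds loading):

Code: Select all

static const GLenum faces[6] = {
    GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_X_EXT,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Y_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_EXT,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Z_EXT, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_EXT
};
// Take mip level k of each background face as level 0 of the cubemap face.
int k = 2;  // e.g. 2048x2048 background faces -> 512x512 cubemap
for (int i = 0; i < 6; ++i)
    glTexImage2D(faces[i], 0, GL_RGB8,
                 w >> k, h >> k, 0, GL_RGB, GL_UNSIGNED_BYTE, mipData[i][k]);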
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe