New Hornet model

Discuss the Wing Commander Series and find the latest information on the Wing Commander Universe privateer mod as well as the standalone mod Wasteland Incident project.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

So, klauss, is this how the models need to be baked?

Image

Then export, then turn the radiant planes to the other side, nuke it again, and export a second mesh?

The exporter I sent you by email was buggy; this new one works. Here's the mesh in Ogre; no texture:

Image

I was just thinking, though: if a separate lightmap unwrap really would be much better than vertex lighting, there's another solution. There are automatic unwrapping scripts out there. They don't do a superb job, but probably good enough for ambient lighting. If you could take two differently uv-mapped meshes and combine them to have dual uv's, this might solve the problem.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Yes, dual UV unwrap with automatic ambient unwrap would work.
It's just that it needs a rework of the shaders that could get tricky (too many interpolants), but that's my problem I guess.

About the baking - you almost got it.
There are a few details necessary to guarantee quality output: a) the emission planes should be curved into a sphere, to make them as close to equally distant from the object's surface as possible. b) r=x, g=y, b=z, with one baking doing +x +y +z, and the other doing -x -y -z; those are the PRT_P and PRT_N maps respectively. c) You also want to normalize the intensity, making it saturate on fully exposed places, to make it easier for artists to tweak the intensity of ambient illumination manually later. I think I posted a screenshot of the baking setup I used for the Hornet... didn't I?

EDIT: Yep
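The encoding described above can be sketched like this (a rough illustration in Python, not the actual baking or exporter code; the `vis` visibility values and helper names are hypothetical):

```python
# Rough sketch of the PRT encoding above: r=x, g=y, b=z, one bake for
# the positive axes (PRT_P) and one for the negative axes (PRT_N).
# 'vis' is a hypothetical per-point map of fractional ambient
# visibility, 1.0 = fully exposed; normalization makes fully exposed
# points saturate at 255.

def bake_prt_texels(vis):
    """Return (PRT_P, PRT_N) 8-bit RGB texels for one surface point."""
    clamp = lambda v: max(0, min(255, round(v * 255)))
    prt_p = (clamp(vis['+x']), clamp(vis['+y']), clamp(vis['+z']))
    prt_n = (clamp(vis['-x']), clamp(vis['-y']), clamp(vis['-z']))
    return prt_p, prt_n

# A point fully exposed from +y, half-occluded everywhere else:
p, n = bake_prt_texels({'+x': 0.5, '+y': 1.0, '+z': 0.5,
                        '-x': 0.5, '-y': 0.5, '-z': 0.5})
```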
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I had missed that.
Here's an idea, if you're not already doing so.
In GIMP, you could start with 50% grey, then add PRT_P and subtract PRT_N, both at 50% blending, if they are full range. That way you only need one PRT texture in the shader. It would mean that surfaces aligned with the x axis, which receive a bit of red light from both bakings, show zero light from either direction; but I think this might be a benefit, rather than a problem.

What are interpolants? Is that another term for (trilinear filtering) texture units?
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Interpolants == varyings, the opposite of uniforms: the variables that get transported from vertex shaders to pixel shaders. They're called interpolants because the hardware interpolates them automagically, perspective correction and all - pretty cool feature, BTW - you normally don't appreciate perspective-correct interpolation until you don't have it.
Usually, they're texture coordinate slots. But the more generalized languages (that is, GLSL) don't call them that, because they need not be used to address textures - in fact, you usually have more interpolants than texture units.
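The perspective-correction point can be illustrated standalone (a Python sketch, not engine code): the hardware interpolates v/w and 1/w linearly in screen space and divides per pixel, instead of interpolating the varying v directly.

```python
# Illustration of perspective-correct interpolation: interpolate v/w
# and 1/w linearly, then divide, rather than interpolating v itself.

def lerp(a, b, t):
    return a + (b - a) * t

def perspective_correct(v0, w0, v1, w1, t):
    """Interpolate varying v across a span whose endpoints sit at
    clip-space depths w0 and w1."""
    num = lerp(v0 / w0, v1 / w1, t)    # v/w is linear in screen space
    den = lerp(1.0 / w0, 1.0 / w1, t)  # so is 1/w
    return num / den

# Midpoint between a near vertex (w=1) and a far one (w=4):
naive = lerp(0.0, 1.0, 0.5)                             # affine: 0.5
correct = perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5)  # biased toward the near vertex
```

Without the divide you get the affine result, which is what texture warping on old hardware looked like.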

About adding P+N together - that gets nice AO-ish results for non-uniquely mapped surfaces, but it greatly decreases quality (mostly on specular reflections), and completely destroys the ability of the technique to render soft shadows, as it destroys directionality.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Ah, okay, I knew that variables passed from vertex to fragment shader were interpolated, but somehow I didn't make the connection to the term "interpolants"... I need more coffee...
About adding P+N together - that gets nice AO-ish results for non-uniquely mapped surfaces ...
Hmm, never thought it would; my idea was simply to save a texture unit.
as it destroys directionality
Not arguing; I'm just confused. It seems to me directionality is still there; it just passes through zero when a plane is aligned with a given axis. What it seems it would destroy is the light contribution from light reflected off other surfaces. I guess this would be bad enough, though.

EDIT: Maybe I didn't explain my idea well; I didn't mean *adding* the two together, but rather *subtracting*. Add 50% of PRT_P to 50% grey, then subtract 50% of PRT_N, then using the texture as signed color with 0x80 offset. Then, the shader subtracts 0x80 from the color, and inverts the result for negative axis illumination. Final result saturated to zero, if negative.
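The packing just described can be sketched like this (hypothetical helper names, not shader code); it also shows, as a plain mathematical fact, that a point lit equally from both sides and a point lit from neither pack to the same 0x80 value:

```python
# Sketch of the 0x80-offset packing: PRT_P minus PRT_N stored as one
# signed 8-bit channel, decoded back into two saturated contributions.

def pack(p, n):
    """p, n in [0,1]; 0x80 means the two directions cancel out."""
    v = round(0x80 + (p - n) * 127.5)
    return max(0, min(255, v))

def unpack(texel):
    """Recover (+axis, -axis) illumination, saturated to zero."""
    signed = (texel - 0x80) / 127.5
    return max(0.0, signed), max(0.0, -signed)

# The information loss: lit from both sides and lit from neither
# pack to the very same value.
assert pack(1.0, 1.0) == pack(0.0, 0.0) == 0x80
```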

Though, like I said, I think this would destroy secondary reflections, so we'd have to give them up and make sure the radiosity baking doesn't compute them, if it does.
Last edited by chuck_starchaser on Mon Sep 18, 2006 5:33 pm, edited 1 time in total.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I think you're probably confused because you think you'll never have +X contribution if you have -X, but that's not the case at all. Or perhaps you think you won't get +X contribution if your normal points towards -X, which would be the case sometimes but not all the time (you know... radiosity bounces, so the light may take the long route but it could probably reach the other side).

Anyway... the real problem with that approach is:

Code:

     _______
     |    /\
     |    || I1
     |    
     | <---I0  
     | --->O0
     |
     |
     |    ||
     |    \/ O1
     |______
So... I0 has to bounce towards O0, which produces a reflection of the environment quite normally.

But... I1 would find its path towards the outside obstructed, so it would not reflect the environment. In this example, both +Y and -Y are obstructed, and so your system should work. But now consider

Code:

     _______  <--- reflective
     |     _  
     |    |\
     |      \
     |       I1
     | 
     |

     ^
     |
   black
There, the reflection of I1 would hit the black plane. But not every reflection would, and so the mapping depends on directionality. If you sum things up, you lose the ability to tell the difference, and so the reflection won't get fully obstructed. In fact, it's worse: it will get partially obstructed (not so bad yet, but wait), and all the time - even when the ray wouldn't be stopped by anything. So it would produce quite unrealistic artifacts, and the eye has a talent for detecting those.

In fact, one of the reasons why AO-ed reflections look so good is the self-occlusion of reflection beams. I wouldn't want to lose that.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Right, I think I get it now. Not quite yet... While I see your argument, I'm also thinking that even if some important info is lost in the x (red) channel, there would be enough info in the y (green) channel to sort of make up for it. Just for direct illumination by a light source at an angle: even if a plane whose normal is tilted a bit towards -X gets, with my idea, no light at all from the x component of the light's direction, then as long as the light has a y coordinate, the plane would get light through its Y-positive normal component. I'm not sure whether the correct amount; but the error in X is zero when the light comes from (x,0,0) and increases with the angle, while the Y contribution gets larger at the same time, so it covers the X artifact. Wouldn't it?
I totally see the problem with secondary reflections, though. But, come to think of it, they'll be grey-scale, anyhow; won't they? You can't radiosity bake with material colors, or it would change the angles...
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Yes, grayscale. That's a pity.

But compensating is still a problem. The +-x, +-y, +-z contributions aren't added just like that; they're taken off the environment.

Say I sample the incoming light from the +x, +y, +z, -x, -y, -z directions, then modulate each sample by the corresponding PRT component and add them. You can't add before modulating without losing correctness; but the artifact is rather general and not overly wrong-looking for diffuse lighting, only for secondary reflections.

The process is completely different for soft shadows, though, and the "adding up" is incompatible with them. The soft-shadows thing works on the principle that PRT has self-occlusion data. So... if I somehow sample that data (retrieve whether a ray in a certain direction reaches the ambient), then I can sample shadowing for lights in the outside. The sampling formula is quite simple, at first, with a few tricks applied later to tweak its appearance as a shadow (without the tweaks, it looks too bland):

lighting = dot((+x, +y, +z),saturate(direction)) + dot((-x,-y,-z),saturate(-direction));

where (+x,+y,+z) is the positive PRT, and (-x,-y,-z) is the negative one.
BTW: that's 3 instructions, I think, after optimization (5 a-priori).
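The formula above, sketched on the CPU in Python for clarity (saturate() splits the light direction into its positive and negative axis parts, each modulating the matching PRT triple):

```python
# CPU-side sketch of: lighting = dot(PRT_P, saturate(d))
#                              + dot(PRT_N, saturate(-d))

def saturate3(v):
    return tuple(max(0.0, min(1.0, c)) for c in v)

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def prt_lighting(prt_p, prt_n, direction):
    """Shadow/exposure term for a light along 'direction'."""
    neg = tuple(-c for c in direction)
    return dot3(prt_p, saturate3(direction)) + dot3(prt_n, saturate3(neg))

# A point fully open toward +y but fully occluded toward -y:
lit = prt_lighting((0, 1, 0), (0, 0, 0), (0, 1, 0))        # light from +y
shadowed = prt_lighting((0, 1, 0), (0, 0, 0), (0, -1, 0))  # light from -y
```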
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Ah, yes; I can't say I get the math, but I guess my method could wreak havoc with raytracing stuff.

Okay, here's a new idea that just came to me, for soft shadows, if it's not too much computation: turn the rgb-coded contributions into a vector, the light-to-fragment direction being another vector. Their dot product tells the alignment. Self-shadowing encoded in the PRT would distort the alignment and cause a darkening. Hey; I guess that's what you're doing! dot((+x, +y, +z),saturate(direction)) ...
Anyway, what I was just thinking is that computing a power of that would make the shadow look like it comes from a bimodal distribution source, which is not exactly the same thing as a circle, but gets closer. But then again, this would cause a shininess effect on all materials... Maybe a function that causes a fast fall to darkness... Hey! Just square it. Well, all materials have at least that much shininess...
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Beware of the dot... the dot does not compute angles; it's an abbreviation of:

+x * saturate(direction.x) + +y * saturate(direction.y) + +z * saturate(direction.z) + ...

That is... I use the vectors as arrays of stuff, and the dot as a SIMD operation, rather than as vector math. saturate() is used to separate the direction into two components: positive and negative. .x/.y/.z, the canonical projections, are used to further separate them into 3 components each, resulting in 6 axis-aligned directions. Direction is thus the sum of all those components. I use those components as a basis, in which the PRT is represented. I know the formula will give precise results if sampled at those vectors (the basis vectors), and I extrapolate the linearity of the PRT function and (arbitrarily) say that it will give linear combinations of those values if sampled at equivalent linear combinations (weighted sums) of the basis vectors. So, the dot() only operates on sets of representation coefficients, not real vectors. I hope that makes things clearer, not darker ;)
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Dark as negative light, for me. When you compute the dot between two vectors, don't you get the cos of the angle? Okay, no; the dot between two normalized vectors, I should say; but that's just a matter of normalizing them. And I guess you're not adding the results of the multiplications; but let's say you did, and squared the result, and then squared it again, for good measure; shouldn't that give you what you want, namely, soft shadows that aren't too soft?
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I scaled, rather than squared.
Hm... squaring could look good, after scaling... perhaps.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Try it. Even squaring twice might look even better. I mean, I'm sure talcum powder has very low shininess, but most ship-building materials would have a shininess of at the very least 4, I'm sure. So, nothing to lose, and it will make the shadows more defined.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I'm thinking, maybe you wouldn't need to square if you make the shadow evaluation wait till phong shading power is calculated; --like, combine the two things. Otherwise you could square once or twice, and then subtract 2 or 4 from the shininess.
I'm not sure what you mean by scaling; what I'm talking about is normalizing the two vectors prior to dotting them, so you get a number from -1 to +1. Saturate it so it's 0 to 1, then square it or fourth-power it. Then that multiplies the final illumination, scalings aside. Or else it multiplies the dot product used for shininess, prior to powering. Something along those lines; I'm not sure.

I'm thinking of some REALLY crazy ideas... Detail textures:

How about ... I really have to sit down and think about the math for all this. No color-keying; something more elegant and algorithmic. Okay:
Say, the Bengal, has a lot of green parts, some red parts, and some grey parts. Even a blue part. So, the idea is this:
In one texture, I could have one repeating pattern in the green channel, another in the red channel, and another in the blue channel. For example, the green channel could look like planks in a brick-wall pattern. The red channel could be a finer-grained honeycomb. The blue channel could have rivets or pipes or just polka dots. Then, the shader modulates by color correspondence: the parts of the ship that are green get, in proportion to their saturation, a greater share of the detail texture's green channel's modulation. Red parts of the ship (the lights) get the finer honeycomb pattern that's in the red channel of the detail texture, and the blue parts of the ship get the rivets or dots.
On top of that, I could sprinkle some salt and pepper noise on the detail texture. And the grey (low saturation) parts of the ship would receive the low saturation noise.
And on top of that I could add some blurry noise, just for good measure, which shows on everything.
As to the type of modulation, that's open; could be bump-map for the green channel, specularity for the blue channel, emissivity for the red, and shininess for the salt and pepper. So the green surfaces look groovy, the red lights look full of little point-lights, the blue parts look like they have smoothed rivets of a different material, and the grey parts look like they've seen their share of micrometeorites and ammo hits.
Have I gone totally mad? :)
Actually, the math is not hard: The detail texture is multiplied by the color of the material, or could be vertex color, if this is on a separate pass from the main texture. The green channel, after multiplication, is used as bump data. The blue channel as specularity. The red channel as emissivity.
Then you compute a grey-scale equivalent of this color - i.e.: average the components, subtract this average from the color, multiply the resulting color by itself so as to square the terms, and compute its intensity (add the 3 components). This result is the saturation of the multiplication, and we use it to divide a constant - that is, take its inverse - and use that to modulate shininess.
Imagine that! All that variety coming out of a stupid 256x256 texture!
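The saturation math above, as a small Python sketch; the constant K and the epsilon guard are hypothetical tuning additions, not part of any existing shader:

```python
# Saturation measure: average the channels, subtract the average,
# square the differences, sum them; then its inverse modulates
# shininess, so grey (low-saturation) texels get the most shininess.

K = 0.25  # hypothetical constant to be divided by the saturation

def saturation(rgb):
    avg = sum(rgb) / 3.0
    return sum((c - avg) ** 2 for c in rgb)

def shininess_mod(rgb, eps=1e-3):
    return K / (saturation(rgb) + eps)  # eps guards division by zero

grey = saturation((0.5, 0.5, 0.5))  # fully desaturated -> 0
red = saturation((1.0, 0.0, 0.0))   # strongly saturated
```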

And a loaded question: Will the normal/parallax/whatever mapping offset the detail textures? This one's crucial, I think...
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

chuck_starchaser wrote:And a loaded question: Will the normal/parallax/whatever mapping offset the detail textures? This one's crucial, I think...
Most likely - parallax occlusion has an expensive computation for the parallax/occlusion, whose output is a simple offset - after that, the shader progresses as usual.

About details... something like that is what I had in mind. I was thinking about it for planets, but I don't plan to use color keying; rather, I'll use "sliding masks" - which are hard to explain, especially when I haven't resolved some issues. Let me revisit the subject when I have.
Zeog
ISO Party Member
Posts: 453
Joined: Fri Jun 03, 2005 10:30 am
Location: Europe

Post by Zeog »

chuck_starchaser wrote:The parts of the ship that are green get, in proportion to their saturation, a greater share of the detail texture's green channel's modulation. Red parts of the ship (the lights) get the finer honeycomb pattern that's in the red channel of the detail texture,[...]
... and the yellow parts get the best of both worlds. :P (Sorry could not resist.) What's wrong with using three grayscale images instead? Same information, same memory requirements, less math. Is that what you mean by "masks", klauss? Is it that GPUs are optimized for RGB textures and a grayscale image would take the same amount of memory as an RGB image?
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Hey, Zeog! How are you doing? How's the pseudorandom galaxy generator coming along?

Yeah, the problem with using 3 grey-scale textures is it's less efficient in terms of texture units. You'd be using 3 texture units in the gpu, all 3 of them doing the exact same thing, just pulling different data. Using RGB, or RGBA for that matter, is like packing all those grey-scale textures into one. Texture units are a precious resource. The newest videocards have like 6 or 8 of them, but if you want any kind of compatibility with slightly older GPU's, you got to watch how many you use.
So, it's just a way of packing more info in less textures; nothing to do with color, really.
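The packing idea in a minimal Python sketch (the patterns are made-up placeholders): three grayscale planes become one RGB texture, so a single texture-unit fetch returns all three values.

```python
# Pack three grayscale detail patterns as the R, G, B planes of one
# texture; one fetch then yields (plank, honeycomb, rivet) per texel.

def pack_rgb(gray_r, gray_g, gray_b):
    """Each input is a row-major list of 8-bit values, equal length."""
    assert len(gray_r) == len(gray_g) == len(gray_b)
    return list(zip(gray_r, gray_g, gray_b))

planks    = [10, 200, 10, 200]  # hypothetical plank pattern
honeycomb = [60, 60, 180, 180]  # hypothetical honeycomb pattern
rivets    = [0, 0, 0, 255]      # hypothetical rivet dots
packed = pack_rgb(planks, honeycomb, rivets)
```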

In the particular case of the Bengal, though, color keying is the key :)

No, frankly, come to think of it, I'd rather have RGBA vertex color controlling detail texture mix or function; as not all ships are as "primarily colorful" as the Bengal. And having color on vertexes is super-cheap memory-wise, compared to textures. Too bad the color would probably have to go through an interpolant, even when no interpolation is needed...
Uhh... it IS needed. If you want a smooth transition from shiny, scratchy surface in the middle of a landing strip, to dark rusty look at the edges...
spiritplumber
Developer
Posts: 1831
Joined: Mon Mar 07, 2005 10:33 pm
Contact:

Post by spiritplumber »

.... what the heck are you guys talking about? :shock: I'm so ignorant... I'll keep messing with the data side for now ok? :)
My Moral Code:
- The only sin is to treat people as if they were things.
- Rules were made for people, not the other way around.
- Don't deceive. Real life is complicated enough.
- If all else fails, smash stuff.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

These little technical conversations are going to have good consequences, spirit.
When you see the result of them you'll be breathless. Just imagine this: As you fly closer to the Bengal, not only will you see the main texture and bump map, but more and more detail will emerge: subtle, bumpmapped grooves on the armor, speckles of light at the tips of the wings on the main tower, scratches and dirt on the metal hull... Just like in indoor FPS games where you have exquisitely detailed floor tiles and walls... Think of Doom 3 or Half-Life 2. They all use the trick of repeating, tileable textures combined with larger lightmaps. In the current plan, there will be 3 levels of texturing:
1) A lightmap for the entire ship (actually 2 textures, with a lot more info in them than a simple light map, allowing soft ambient shadows and more...)
2) The main texture, parts of which may be re-used here and there: For example, for the main hull, the two symmetric halves are mapped to the same place on the texture, to increase efficiency, and therefore detail; and features that are repeated on the ship can share a common spot on the texture as well. This is not possible in the light-map.
3) Detail texture, which is a small texture that tiles multiple times all over the ship, and blends with, or modulates the main texture.
It's going to be unbelievable; believe me.

Klauss: you don't have to give up the color of reflected light. Just stick it in the alpha channel, either as an 8-bit color table, or use the upper nibble for chroma and the lower nibble for saturation.
About details... something like that I had in mind. I was thinking about that for planets, but I don't plan to use color keying, rather, I'll use "sliding masks" - that is hard to explain, mostly when I haven't resolved some issues. Let me revisit the subject when I have.
Well, of course I've no idea what sliding masks are, but I think that using vertex color for controlling detail texture content would be very powerful. One could throw 4 different tileable patterns in RGBA, have one modulate specularity, one modulating shininess, one modulating color and one for bumpiness; or two channels for uv normal, one for specularity AND shininess, and one for diffuse color; and then the colors on the mesh could control the influence of the four channels.
The problem I see is coming up with a standard function for each channel, so we don't have to have a gazillion different shaders...
Maybe the best thing would be to start from first principles: What kinds of detail we want to represent.
Okay, there's three main categories:
1) Detail of the surface itself
2) Damage on the surface
3) Stuff on the surface

In more detail:

1) Surface itself:
- 1a) Bumpiness/grooviness
- 1b) Rust
- 1c) Material changes, such as rivets
2) Damage:
- 2a) Scratches (new, shiny)
- 2b) Scratches (old, rusted or dirty)
3) Stuff on surface:
- 3a) Wind-marks
- 3b) Smoke-marks

That would be 7 channels, and we only got 4 (RGBA), but...

Rust is basically a material change.
2a and 2b could be combined: Above 0x80 it's a shiny scratch; below 0x80 it's a dark scratch.
3a and 3b can be combined, also: both reduce specularity and shininess; and both reduce saturation. Above 0x80 it brightens the material; below 0x80 it darkens it.
So, we're down to four:

A) Bumpiness
B) Rust and material changes
C) New and old scratches
D) Windmarks and smoke-marks

The problem is B, because material changes could require changes in diffuse color, specular color and/or shininess. So, let's brainstorm about the types of materials we might need. First of all, rust must be doable; and rivets. Rust comes in two kinds: steel rust and titanium rust. Titanium rust produces rainbow effects depending on the angle of reflection. Steel rust darkens reflectivity, darkens specularity and reddens diffuse. Would there be a way of placing the two on a parametric continuum? Rivets depend on what they are made of, but generally they'd desaturate color and increase specularity.
I think that we could say that, with a value above 0x80 we proportionately de-saturate and brighten specularity. Below 0x80 we darken the material: red darkens first but stops, green darkens next and stops at a value lower than red stops, and blue darkens last, but goes to zero. Specularity and shininess diminish together below 0x80.

That's it; we got it all. So, to summarize:

Red channel = bumpiness; or could be 1 nibble for U and 1 nibble for V, as in UV normal mapping.

Green channel = rust or material: Below 0x80 it darkens diffuse color along different curves, and darkens specularity and shininess linearly. Above 0x80 it desaturates diffuse color and brightens specularity.

Blue channel = scratches. Above 0x80 it brightens specularity, below it darkens it. Either way it produces depressions. And either way it desaturates diffuse color (removes paint).

Alpha channel = windmarks and smokemarks: No grooviness. Above 0x80 it brightens diffuse color. Below 0x80 it darkens diffuse color. Either way it desaturates diffuse color but darkens specularity and shininess.
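A hedged sketch of the channel semantics summarized above; the effect names and the signed mapping around 0x80 are my reading of the scheme, not a final spec:

```python
# Each channel other than red is read as a signed value around 0x80,
# with the sign selecting which of its paired effects applies.

def signed(c):
    """Map 0..255 to -1..+1, with 0x80 as the neutral point."""
    return (c - 0x80) / 127.5

def decode_detail(r, g, b, a):
    effects = {'bumpiness': r / 255.0}  # red: bump only
    gm = signed(g)
    effects['rivets' if gm > 0 else 'rust'] = abs(gm)
    bm = signed(b)
    effects['new_scratch' if bm > 0 else 'old_scratch'] = abs(bm)
    am = signed(a)
    effects['windmarks' if am > 0 else 'smokemarks'] = abs(am)
    return effects

# Full bump, some rust, a young scratch, neutral alpha:
e = decode_detail(0xFF, 0x40, 0xC0, 0x80)
```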

The greatest advantage of this scheme is that it preserves independence of the four channels. I don't have to make one channel have a feature because another channel has it and I need a combination of both effects. I can work independently: Do a texture of smoke marks, a texture of wind marks, combine them, done with Alpha. Make a texture of scratches; done with the blue channel. Make a perlin noise for bumpiness. Done. Make a texture of rivets and rusts; done. And just throw them into the right channels. Then work on vertex colors to control them: Areas near engines or hot parts would have a low green color value, to get the rust. Areas that need rivets would have a high green value, to get the rivets. Areas that need neither would have green = 0x80.
Frontal areas exposed to micrometeorites would have a high blue value, to get bright, young scratches and impacts. Areas on the sides would have a middle-low blue value, to get old, rusty scratches. Areas that are secluded or protected would have blue = 0x80, and get no scratches at all.
Frontal areas would have a high alpha, also, to get wind-marks; and areas near missile and torpedo launchers would have a low alpha value, to get smoke marks. And each material could control its own bumpiness by the amount of red color.
For seams between planks, I could use a very low value in the scratches channel, and a very low blue value in vertex color to select them.

Actually, for the red channel (perlin noise), I could use a vertex color with red above 0x80 to use it for bump-mapping, and a value below 0x80 to control specularity and shininess with it.

Don't worry; I'll write the wiki... :D
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Huge post. I got a few ideas out of it... like UVW detail normalmaps can have a dual function, both bump and color detail.

I had the idea of using several maps for detail, each with an alpha channel specifying the "sliding mask".

UVW could be for lighting (both normal & intensity).
RGB could have an optional mode, splat or modulation, since splat masks are important for planets and not covered by UVW (and RGB modulation may be important if you don't want bumpmapping).
SRC would be Shininess-Reflectiveness-CloudHeight, with reflectiveness being a specmap luminance multiplier.

I'm not truly set on the way of specifying multiple sets of detail maps, nor on the maps themselves (the above is a tentative spec) or their significance. I could use either 3D textures (way cool, but with lots of drawbacks - like no mipmapping possible, yack) or hardcoded sectors - I think the latter is best, as it maps beautifully onto the "set selector" method I want to use (either a per-vertex or per-pixel RGBA attribute), and allows mipmapping almost unhindered.

Believe you me, with four sets of those three maps, you can do whatever you want. The thing is... I need 3 or 4 texture units, and lots of instructions. I don't think I can add this to any nontrivial shader, let alone planet shaders.

For planet shaders, I can, but I'd have to convert the beautiful 2-pass technique into an ugly 4-pass one. I don't want to. So either I'll concentrate on a single type of detail map (like UVW combined lighting detail), or simply use the ugly method as a top-lod material.

So... as you see... lots of buts. That's why I don't have a spec yet. But I tried the overall detail technique on planets (in bits), and it's gorgeous. It has to be done. Ships would get it by transferability (sorry... I'm taking logic courses and I can't help but use their language).
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Glad to hear you've tried detailed textures for planets and they worked well.

I have the solution for all the problems you're mentioning. Yes ;-)

And the solution is: Forget planets when discussing ships, and forget ships when discussing planets.
I understand the desire to keep the shader count low, but planets absolutely need separate shaders, a different set of textures, and so on. Ships don't have atmospheres or cloud covers, and planets don't need specular normal mapping or scratch marks.

So, talking about ships, and only about ships: if you re-read my previous post, you'll see that I got all of those functions you mention using multiple detail textures for, into a single RGBA detail texture. Instead of using one channel for shininess, one channel for diffuse, and so on, as I thought of doing at first, I assigned each channel to a particular effect:

Red for bumpiness alone, typically perlin noise bumpiness.
Green for rust, rivets and the like.
Blue for scratches and inter-panel grooves.
Alpha for smokemarks and windmarks.

This is hardly applicable generally to all materials, but has all we need for ship materials.
To do this, however, each channel does more than one thing:

Both the red and blue channels affect bumpiness.
Both the green and blue channels may raise or lower reflectivity.
The alpha channel can only lower reflectivity, but may raise or lower diffuse.
All channels except the red may lower diffuse saturation.

Yes, each channel is a completely different story, because I have them assigned to overall effects, rather than to specific functions.

And what I neglected to mention: They have to be applied in a specific order: First red and blue's bumpiness, then green, followed by blue if the value is below 0x80, then by alpha, and finally by blue values above 0x80.

In other words:
1) random bumpiness
2) scratch-related bumpiness
3) rust and rivets
4) dark, deep, old scratches
5) wind and smoke marks
6) young, shiny scratches
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I get the idea. A combined-mode detail map. Rest assured, I'll make such variants.

The problem is with masks. If you want a separate mask for each property set, as per my original idea, you need a separate alpha channel for each. That has to be like that because of texture compression. Also, RGBA is not free, compared to RGB. RGB accepts DXT1 compression, which is 4 bits per pixel. RGBA needs DXT5 compression, which is 8 bits per pixel - twice the amount. Well... unless your alpha is 1-bit, and where you have transparency you also have black RGB - which is surely not the case for detail maps... I think. But that's the artist's choice, the engine doesn't really care - I just take the worst case for memory consumption estimates.
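The compression arithmetic, spelled out in a quick Python sketch: at 4 and 8 bits per pixel respectively, DXT5 doubles DXT1's footprint.

```python
# DXT1 stores RGB at 4 bits per pixel, DXT5 stores RGBA at 8 bits per
# pixel, so keeping a full alpha channel doubles the memory.

def dxt_bytes(width, height, fmt):
    bits_per_pixel = {'DXT1': 4, 'DXT5': 8}[fmt]
    return width * height * bits_per_pixel // 8

rgb = dxt_bytes(256, 256, 'DXT1')   # 32768 bytes
rgba = dxt_bytes(256, 256, 'DXT5')  # 65536 bytes - twice the amount
```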

BTW: I know it's hard following me without an exact explanation of what the masks are... but I can't easily explain without screenshots, and I don't have the implementation "shottable" right now ;)
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I'll take 2 RGB's instead of one RGBA then :D

God! With 2 RGB's I can do everything! Let me work on it; I'll post a new plan tomorrow.
Zeog
ISO Party Member
Posts: 453
Joined: Fri Jun 03, 2005 10:30 am
Location: Europe

Post by Zeog »

<OT>
chuck_starchaser wrote:Hey, Zeog! How are you doing? How's the pseudorandom galaxy generator coming along?
Long time no see, I hope you're better now. :) Actually I shouldn't read these forums at all. I'm procrastinating... must finish thesis.

About the galaxy generator: mkruer was reviving that discussion recently: http://vegastrike.sourceforge.net/forum ... 0370#70370 and we were gathering a great number of already existing approaches and projects. This spawned a few other threads and attracted new people. Unfortunately, progress is now stalled again...
</OT>
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

LOL. Tell me about procrastination. Been procrastinating about four years getting a medical card. When I was going to physiotherapy for my shoulder I got all the papers to apply, and I collected all the bills. I had 2 months time to get the medical card and have them pay my bills retroactively. Let the chance go away, and still haven't applied. Yeah, I'm much better. Still can't scratch my back with my right hand, but I can use my left one half the time :)
About the galaxy generator ... Unfortunately, progress is now stalled again...
Check out the Infinity engine. Game coming out next year. Billions... of star systems. Full galaxy. Photorealistic planets and moons you can fly down to and land on, with exquisite terrains and clouds and flora and dynamic weather systems with occasional tropical storms, and cities too. Terrain LODs that smoothly increase in detail in real time. Realistic planet and moon sizes and distances; asteroid belts, space stations and ships and trading and piracy. I put a link on the neighboring thread. Hope that shakes you guys up enough to get on with it... ;-)

He's just one programmer; the rest of the team are artists. And he's using a hierarchical seed system like we were discussing last year. If you go back to a planet, you'll find it just the way you remember it. No randomness. Massively multiplayer, also. Not open source, tho; but the game will likely be free.

Ok, here's the links again
http://fl-tw.com/Infinity/infinity_media.php

And a video here: Notice what it says in the opening screen: 60 to 120 fps... on a slow machine to begin with. Choppiness is an artifact of the video.
http://downloads.planetmirror.com/pub/m ... _moddb.avi