CineMut shader family - Opaque

klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

I don't understand you chuck.

You can't tell me it's that difficult to grasp. Any real modeller knows very well what a material's shininess value is. And it's really easy to understand that if you have a shininess texture, white alpha means whatever you typed in shininess, and black alpha means 0 shininess. Simple. Very very simple.
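(As a minimal sketch, that rule would read something like this in a shader; the names here are illustrative stand-ins, not the actual CineMut ones:)

Code: Select all

uniform sampler2D specMap;     // the specular texture; shininess lives in its alpha
uniform float xmesh_shininess; // the value "you typed in" the mesh material

float effectiveShininess( in vec2 uv )
{
    float a = texture2D( specMap, uv ).a;
    return xmesh_shininess * a; // white alpha -> exactly the typed value; black alpha -> 0
}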

Any game modeller knows equally well what diffuse, ambient, emissive and specular colors are. They know equally well that only diffuse may have an alpha, the others have their alpha ignored. If it's not like this in the shaders, then that is something to fix. Not removing them altogether!

I want the tools to be modeller-friendly, of course, but I don't want to make tools for non-modellers; if someone doesn't know what shininess is, he's not a modeller. We can't be aiming that low, let alone when we'll have so much graphical complexity. You're talking about dielectric bit, fresnel and whatnot... that's way more complicated than shininess, chuck!

If you want to remove envColor, which is used to implement on the shader that "reflective" bit you find awkward, it's your call. I wouldn't remove it for Vega Strike because the dataset already relies too much on it. Besides... it's not that difficult to grasp. Maybe it is on the shader, but shader editors are programmers by definition. The bit only says: the surface is shiny and reflects the environment. Simple. I only added envColor to make the shader always reflect the environment, only it does so more diffusely when reflective="0" so that it approximates what the artist intended when he said "this unit does not reflect the environment".

BTW: I'm already capping the environment LOD to the maximum for a 512x512 spheremap, as you said. For a 1024x1024 cubemap I believe you could increase the cap by 1, but for environment LOD, not shininess. The cap for shininess should be arbitrarily decided as whatever you want since there's no implementation detail stopping you.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

klauss wrote:I don't understand you chuck.

You can't tell me it's that difficult to grasp. Any real modeller knows very well what a material's shininess value is. And it's really easy to understand that if you have a shininess texture, white alpha means whatever you typed in shininess, and black alpha means 0 shininess. Simple. Very very simple.
To you and me, Klauss, that's simple. You and I have programming backgrounds. We both understand GPUs and OpenGL. We both understand basic optics and physics. AND we both have artistic sides to us as well. The two of us are Guinness Book of Records nomination-worthy examples of individuals straddling two universes. But people in the real world are not like the two of us. The real world is sharply divided between artistic "feeler" types and brainy types, who don't even understand each other at all. This was the subject of a famous book: Zen and the Art of Motorcycle Maintenance, by Robert Pirsig. Probably the best book I've ever read. Highly recommended.
Any game modeller knows equally well what diffuse, ambient, emissive and specular colors are. They know equally well that only diffuse may have an alpha, the others have their alpha ignored. If it's not like this in the shaders, then that is something to fix. Not removing them altogether!
No, they don't, generally speaking. And here there's yet another issue you bring up: You'll rarely find a good modeler who is also a good texturer, or vice versa. I could name examples, but won't do it here; I'll send you a PM. Right now, though, the intricacies and bells and whistles of xmesh are almost forcing the modeler and the texturer to be the same person; and this is a BAD thing.
I want the tools to be modeller-friendly, of course, but I don't want to make tools for non-modellers; if someone doesn't know what shininess is, he's not a modeller.
Completely false. I don't know ONE modeler who has any insight into the limitations of the representation process; except those whom I've explained things to repeatedly, using tons of (visual) examples. Take the issue of solid bevels, for example. I can mention two modelers that now (sort of) understand how it works, after months of explaining it to them, again and again. The concept of lack of self-occlusion in environment mapping is another thing that is almost hopeless to try to explain to most modelers. I can teach you some telepathy exercises you can try, to get into the mindset of artists. If you have the courage to enter their consciousness, you'll be devastated. Complete lack of understanding of how 3D graphics work. :D
Klauss, do you cook?
When you're cooking, do you think about the chemical reactions of food mixing? Well, to artists, 3D graphics are black magic. They've no idea how anything works; and most can't even hope to grasp such concepts as shininess. They can become good at using these things, however, by practical experience. But if you introduce too many bells and whistles and variables, then the experience becomes too complex; too hard to learn from.
We can't be aiming that low, let alone when we'll have so much graphical complexity. You're talking about dielectric bit, fresnel and whatnot... that's way more complicated than shininess, chuck!
Indeed it is; but I'm also planning to produce a materials library, like I said. Artists will only need to pick a material from a palette and paint with it.
If you want to remove envColor, which is used to implement on the shader that "reflective" bit you find awkward, it's your call. I wouldn't remove it for Vega Strike because the dataset already relies too much on it. Besides... it's not that difficult to grasp. Maybe it is on the shader, but shader editors are programmers by definition. The bit only says: the surface is shiny and reflects the environment. Simple. I only added envColor to make the shader always reflect the environment, only it does so more diffusely when reflective="0" so that it approximates what the artist intended when he said "this unit does not reflect the environment".
Klauss, I can't believe you don't see the irony in your own words. WHO is this artist who supposedly says "this unit does not reflect the environment"? NO artist will ever touch that, because I, as an artist, still don't get what it means, and this is the second time you try to explain it to me. And I still don't get this EnvColor, either, BTW.
BTW: I'm already capping the environment LOD to the maximum for a 512x512 spheremap, as you said. For a 1024x1024 cubemap I believe you could increase the cap by 1, but for environment LOD, not shininess. The cap for shininess should be arbitrarily decided as whatever you want.
I wasn't talking about LOD range; I know that; I was planning to bring up the subject of blurring it faster than by powers of two, too; but later.
What I was trying to say is this:
1024 implies a blur level. Namely, pixel size... Okay, let me think...
Equivalent blur radius would probably be the arc-sine of about 1.2 times 1/1024. 1.2/1024 = 0.001171875
arcsin(0.001171875) ≈ 0.067 degrees.
So, my question is, what's the shininess value that produces blurring of an angular radius of 0.067 degrees?
The range of representation should be from 1.0 to THAT.
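(The arithmetic above, spelled out as a sketch; the ~1.2-texel radius is the assumption stated, and asin()/degrees() are only there to make the units explicit:)

Code: Select all

float blurRadiusDegreesFor1024()
{
    float texel   = 1.2 / 1024.0;  // = 0.001171875
    float blurRad = asin( texel ); // ~0.0011719 radians; asin(x) is ~x for angles this small
    return degrees( blurRad );     // ~0.067 degrees
}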
The cap for shininess should be arbitrarily decided as whatever you want.
What I want is that.
I DON'T WANT to have a knob to turn, to scale the shininess of a whole ship. I want to be able to count on a given alpha value in the spec texture always producing a consistent shininess level.
By the same token, I don't want any of those diffuse, specular, ambient, glow and whatever colors, either.
What we artists want and need is a reliable platform that produces consistent results, based on what you paint on the texture and on as little else as possible.

IOW, what I would suggest is a new xmesh format with anything that's not strictly necessary removed for good. But that can wait. However what I intend for the new shaders to do is to take a giant step forward by ignoring all the unnecessary parameter clutter in xmesh.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Ok, we're getting closer to understanding ;)

Yes, I was talking about modellers knowing what those are by experience, or from understanding their visual effects through examples. They know that if they put in a shininess value of "x" they'll get this or that effect. In that vein, I'd argue that we should strive to mimic modelling tools: try to make shininess levels equivalent to Blender's, or 3DS Max's, if anyone uses it.

Chuck, I've been in contact with many game artists lately... I know what they're capable of understanding. Perhaps the gap is between pros and amateurs.

But I still would appreciate a knob that turns shininess up or down globally for a unit. Try to see if artists would appreciate it too.

I know what you're trying to do now. You're trying to put together a palette of colors for diffuse, spec, glow, etc... textures that will convey the materials library you're working on. So you'd paint each texture with that palette and sort of get a "materials texture" composed of several "parameter textures" all painted at the same time with that palette. For that, you need all "materials" to have the same shininess. Very very specific to your weird idea. Cool, yes, but...

...what about fixed function? What about per-vertex lighting? Are you willing to require ps2 hardware (GeForce6, ATI Radeon 9800)? Remember that many onboard solutions don't have that.


Solving your equation, Maple says: 4119410
;)

(edited: sorry, the previous solution was quite inaccurate)

(I computed the exponent required for the blurring to only account for 10% of the region outside the pixel)
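(One plausible reading of that criterion, as a sketch: find the exponent at which the Phong lobe has fallen to 10% one texel away from its peak. The exact integral Maple solved isn't shown in the post, so this pointwise version lands near, but not exactly on, the 4119410 figure.)

Code: Select all

float phongExponentForTexelBlur()
{
    // pow( cos(blurRad), n ) = 0.1   =>   n = log(0.1) / log(cos(blurRad))
    float blurRad = asin( 1.2 / 1024.0 );      // the ~0.067-degree radius from the previous post
    return log( 0.1 ) / log( cos( blurRad ) ); // ~3.4 million; do this offline, float precision is marginal here
}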
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

klauss wrote:Ok, we're getting closer to understanding ;)
I hope.
Yes, I was talking about modellers knowing what those are by experience, or from understanding their visual effects through examples. They know that if they put in a shininess value of "x" they'll get this or that effect. In that vein, I'd argue that we should strive to mimic modelling tools: try to make shininess levels equivalent to Blender's, or 3DS Max's, if anyone uses it.
I'd argue we should NOT mimic modeling tools. What exactly IS good about them? That they have more controls, settings, bells and whistles than any living mind can humanly grasp, and most of them overlapping, when not downright redundant? Look at Blender: To this day I've no idea how to get a shininess of 44.6 via a texture. Impossible to understand all the obscure settings for the display of ANY texture, let alone shininess. Perfect example of what we should be trying to ***avoid***.
Chuck, I've been in contact with many game artists lately... I know what they're capable of understanding. Perhaps the gap is between pros and amateurs.
You're thinking like a programmer, Klauss. Artists aren't "pros" or "amateurs". It's a lot more complicated than that. Well, perhaps with classical musicians, or in the performing arts, you could say that. Not in this business: You're either an artist or you're not; and it will never be judged on your understanding of shininess; you may be expected to get to know and become proficient with your tools; but most texturers don't even feel that they should have to know technical things, such as what shininess IS, or how it works, mathematically.
And I sympathize with them. That's like trying to convert people from one religion to another. That doesn't mean one cannot ask artists to follow certain basic rules of style for a given project; but to expect them to understand that this or that parameter "multiplies shininess" or "adds 77" to it, that's not cool. Do you realize a lot of people out there think of nothing more than punching numbers in a calculator when you mention "multiply"? You need to practise telepathy. Many people don't even have an intuitive feeling of basic math. Much less when applied to a subtle visual attribute they can barely grasp --i.e.: they might recognize it as "variable" in a sensual way; yet hardly perceive it as "parametric" (quantifiable).
But I still would appreciate a knob that turns shininess up or down globally for a unit. Try to see if artists would appreciate it too.
This has already been tried. The experiment has taken years. The data is in. The answer is: Negative. It's over. Go through the meshes we currently have in-game and you'll see totally random and absolutely wrong values of shininess, in every one of them.

Klauss; the only reason shininess was put there is because there were no shaders at the time. And NOBODY knew what to do with it. Now that shininess comes from the texture, it's time to get rid of that parameter.

I do not want a control of shininess in xmesh. It's a leg-trap. Someone who doesn't like shiny things is going to texture 17 ships with that parameter at 50, and the day we want to add a few shiny things we'll have to redo all those textures. And all for what? What IS the advantage of having that there, exactly?
I know what you're trying to do now. You're trying to put together a palette of colors for diffuse, spec, glow, etc... textures that will convey the materials library you're working on. So you'd paint each texture with that palette and sort of get a "materials texture" composed of several "parameter textures" all painted at the same time with that palette. For that, you need all "materials" to have the same shininess.
Why all materials with the same shininess? I don't understand a thing you're saying.
What I want is to have a library of materials... In fact, multiple small libraries... as blender files. One for metals, one for plastics, another for ceramics, another for matte paints, another for glossy paints, and so on.
When you pick "titanium", you get the right values for diffuse, specular and shininess. You assign that to the sections of the model you want to look like titanium. At the end, you bake 3 textures of the model: Diffuse, specular and shininess, and they become your base material textures for the three, as a starting point. The rest of the texturing work will be layers that go on top of that, and which will be processed using noodles.
But for titanium to always look like titanium, I need to be able to count on the shininess that comes from it to be THE ***final*** shininess; NOT to be affected by other settings.
...what about fixed function? What about per-vertex lighting? Are you willing to require ps2 hardware (GeForce6, ATI Radeon 9800)? Remember that many onboard solutions don't have that.
Well, hell; I can't make old hardware do things it cannot do. Backwards compatibility is a separate subject.

Wait a minute! I think I know what you're getting at...
You're saying we still need the shininess in xmesh because there are gpu's that won't be able to read it from a texture!
You're right, Klauss.
Okay, maybe now we're getting close to an understanding...
Alright, let's leave the parameter there; but let's agree that it is there for compatibility, and that the shader won't use it.

In fact, then we POSITIVELY cannot use it in shaders. Why? Because you wouldn't give a whole ship a shininess of 255 just because there's a little chrome trim somewhere; you'll give that "power" in xmesh a value that is appropriate to whatever material makes up most of the ship; which will probably be a low shininess, like 20 to 50. If the shader were to use that value as a scaling factor, then you'd never be able to make the chromed trim shiny via the texture (unless you used HDR :D).
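(The same argument with illustrative numbers, just to make it concrete:)

Code: Select all

uniform sampler2D specMap;
uniform float xmesh_power; // say an artist sets this to 30.0 to suit a mostly matte hull

float scaledShininess( in vec2 uv )
{
    // scaled this way, chrome trim painted at alpha = 1.0 tops out at 30.0;
    // reaching ~255 would need alpha > 1.0, i.e. an HDR texture
    return xmesh_power * texture2D( specMap, uv ).a;
}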

BTW, to serve the No Shader option, or gpu's that don't support textureCubeLod(), what we should do is have the code use a 64x64 or so mipmap of the env map; because no matter what you make shininess in terms of phong shading, full-shininess env mapping looks ridiculous on metal. It looks shrink-wrapped, or dressed in latex.

EDIT:
Damn! Frankly, for compatibility modes, we might as well fix shininess to 25 in code, rather than have a "power" in xmesh. Because, I tell you what: If you have that parameter in xmesh, everybody will continue to do what they've been doing all these years: Set it at 100 for no reason whatsoever; just because they haven't a clue what it is for; and the ships will continue to look ridiculous in compatibility modes. And by fixing the value in code, we can make sure that even in those modes, the env mipmap level used agrees with the shininess value used for phong shading. Simplest solution.
For glass objects (we can detect glass from the blending mode) we can fix the shininess to 250 and use the top mipmap.
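(A sketch of what "agreeing" mip levels could look like, using the log2 mapping that appears, commented out, in the CineMut code later in this thread, and assuming a 512-per-face environment map:)

Code: Select all

float compatEnvLod( in float fixedShininess )
{
    // shininess 25.0  ->  7.0 - log2(26.0)  ~= 2.3   (a fairly blurred mip)
    // shininess 250.0 ->  7.0 - log2(251.0) ~= -1.0  -> clamped to 0.0 (the top mip, as for glass)
    return clamp( 7.0 - log2( fixedShininess + 1.0 ), 0.0, 7.0 );
}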
And by the way, I see no reason why we couldn't modulate the color of a star by the 1.7 power formula, for compatibility modes, also; and stick that into OpenGL's "specular color" for the star. Yeah, I knew there had to be a use for specular color for lights... :D
Solving your equation, Maple says: 4119410
;)

(edited: sorry, the previous solution was quite inaccurate)

(I computed the exponent required for the blurring to only account for 10% of the region outside the pixel)
Who's Maple? So, what's the answer, then?

P.S.: Been playing around with ATI's CubeMapGen. We can use this beauty to make cube maps for baking PRT's, btw. Well, we got that in Xnormal already. Nice program. Too many controls, tho :D
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Ok... you know what I think is the problem?
Mesher.

Having to use mesher to modify the materials. Mesher is a commandline tool - the biggest enemy of any artist.

Furthermore, I myself get a weird feeling thinking that tinkering with shininess requires messing with 10 meshes or something like that (you have to take apart the bfxm into each LOD and component, edit the materials, then assemble it all back).

In essence, bad bad bad tools. That's the problem.

If there were a frontend for mesher that did all that and let you easily tweak materials, ideally visually (with real-time feedback), then things would be totally different.

BTW: would it be better if we encoded shininess logarithmically, with 50% gray = 1.0, and had it multiplied by the shininess in xmesh? You could set the shininess in xmesh to the "base shininess", the one that applies to most parts of the mesh. Then it's easy to know if the mesh will look good in compatibility modes: if the texture is pretty much 50% gray, all's ok. If it regularly strays from that, then it's not. ;)

Just an idea.
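(A sketch of that encoding; the multiplier range R is an assumption, since the post doesn't pin it down:)

Code: Select all

uniform sampler2D specMap;
uniform float xmesh_power; // the "base shininess" that applies to most of the mesh

float encodedShininess( in vec2 uv )
{
    const float R = 16.0; // assumed half-range of the multiplier
    float a = texture2D( specMap, uv ).a;
    return xmesh_power * pow( R, 2.0 * ( a - 0.5 ) ); // a = 0.5 -> x1.0, a = 1.0 -> xR, a = 0.0 -> x(1/R)
}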

PS: Whenever we get a real GUI system in VS, one fully accessible through scripting for instance, we can have a "material editor mission" - ie, a python script that shows a unit on screen, uses mesher to break it apart and gather material/mesh info, lets you edit it with nice sliders and all, save and update the view on screen.

Oh... and Maple is a math program, a really cool one, btw. The latest version has its frontend coded in Java and... well... it sucks. We use it at the UBA to do math stuff. It can integrate both symbolically and numerically, as well as solve complex equations both symbolically and numerically. And when I asked it to solve the equation you described, it answered with the number I wrote above - the shininess exponent equivalent to mapping an unmodified 1024x1024x6 cubemap is 4 million and something. High, isn't it?
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

klauss wrote:Ok... you know what I think is the problem?
Mesher.

Having to use mesher to modify the materials. Mesher is a commandline tool - the biggest enemy of any artist.
You got that right. But if the prospect of using mesher gives me shivers, what really scares me is xmesh. I know it's going to take me a number of tries to get all those numbers right. And why should we be forced to set so many parameters we don't want in the first place?
Furthermore, I myself get a weird feeling thinking that tinkering with shininess requires messing with 10 meshes or something like that (you have to take apart the bfxm into each LOD and component, edit the materials, then assemble it all back).
Well, exactly. It took me two days of meshering and xmeshing work to put together the refinery bfxm; and that is without LOD's; just 6 sections. The first day it just wasn't working at all; then I emailed Hellcat and he suggested I load the sections in a different order. That worked. Then the second day was having to do everything again and again because of this or that mistake. Pure hell.
In essence, bad bad bad tools. That's the problem.
Or, tools at all. Tools are supposed to serve a purpose. Mesher serves the purposes of xmesh, NOT *my* purposes. I'd be perfectly happy specifying a blend mode and a technique in units.csv. Why do we need mesher at all?
Well, we need it for putting meshes together, for sure. That could be done with an interface as simple as a tiny window with a sign that says Next LOD, and you just drag and drop them in order, then hit a Done button.
Not even that; just give me a standard naming convention for .obj's AND for textures, and running mesher in that folder, without any parameters or interfaces, creates a bfxm.
We don't need ANY parameters to add burden and confuse the texturing pipeline.
If there were a frontend for mesher that did all that and let you easily tweak materials, ideally visually (with real-time feedback), then things would be totally different.
I DON'T WANT to tweak any materials! Tweaking is for the birds. The look of a model in game should be controlled exclusively from the textures.
There's no way to produce a dependable library of materials, if you have ways of tweaking things afterwards.
Nobody wants to tweak materials in xmesh. Look at the xmeshes out there right now. Shininess is always 100; exactly wrong. And those stupid colors are either 0,0,0 or 1,1,1 or wrong. Just get rid of them.
BTW: would it be better if we encoded shininess logarithmically, with 50% gray = 1.0, and had it multiplied by the shininess in xmesh? You could set the shininess in xmesh to the "base shininess", the one that applies to most parts of the mesh. Then it's easy to know if the mesh will look good in compatibility modes: if the texture is pretty much 50% gray, all's ok. If it regularly strays from that, then it's not. ;)
It would only be slightly better, in that I would then set power to 16.000 always, which would look better in compatibility mode than 256. What would be best would be to do it the other way around: Have mesher read the alpha channel of the specular texture, find out what shininess value is most commonly used (or perhaps the average, but it needs to discard background somehow... well, not if it's properly padded), and write that value down to the bfxm's power parameter, for the benefit of compatibility mode.
But just don't mess with the shininess coming from the texture. The shininess coming from the texture is sacred. If it's not right, then the texture (or the materials library) should be fixed. NO tweaking!
PS: Whenever we get a real GUI system in VS, one fully accessible through scripting for instance, we can have a "material editor mission" - ie, a python script that shows a unit on screen, uses mesher to break it apart and gather material/mesh info, lets you edit it with nice sliders and all, save and update the view on screen.
Please; I DON'T WANT material editing. I want a materials library I can count on looking the same way in every ship and station. That requires NO... and I mean NO... material parameters in xmesh, at all, whatsoever, of any kind.
If the materials library is not right, I want the artists to shout obscenities at me so I can fix it; NOT to tweak xmesh behind my back.
Besides, even for those artists that DO get the drift about those xmesh tweaks, their presence is a disservice. Why? Because then artists won't even care for their materials to be right, if they know they can tweak them afterwards.
That's shooting ourselves in the foot. Bad bad BAD!
It's like the volume control syndrome. You got volume controls in every damn receiver or drive or turn-table, volume control in preamps, volume control in amps, volume controls in headphones... Even some speaker cabinets have gain setting switches. So which of the infinite combinations do you use to set the volume level you want? Needless, useless confusion.
Oh... and Maple is a math program, one really cool btw. The latest version has its frontend coded in java and... well... sucks. We use it at the UBA to do math stuff. It can integrate both symbolically and numerically, as well as solve complex equations both symbolically and numerically. And when I asked it to solve the equation you described, it answered with the number I wrote above - the shininess exponent that is equivalent to mapping an unmodified 1024x1024x6 cubemap is 4 millions and something. High isn't it?
Very high.
I gather, then, that the formula we've been using for LOD is horribly flawed.
Okay, I'm settling on 1-256 logarithmic as the range then. That's for this shader, though; --the opaque material shader.
I might make the next shader, the "glass and fire" shader I'm working on now, span a 16 - 4096 log shininess range, perhaps.
So, what's the angular radius of blur for a shininess of 256? AND 4096?
I want to make sure our mipmap LOD range is exactly right.

By the way, I was going to suggest, we should make sure that our "180 degree blur" (90 degree radius) occurs at the 4x4 mip level, rather than at the 1x1 mipmap level, so that we enjoy a bit more color continuity. I don't think we should even use the 2x2 and 1x1 mipmap levels at all.
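(Putting the 1-256 log range and the 4x4 floor together as a sketch; the CineMut code further down does essentially this, with a small 0.5 offset on the LOD, and the blur radii in the comments are only rough estimates under the same kind of 10% criterion as before.)

Code: Select all

float alphaToShininess( in float a )
{
    return pow( 256.0, a ); // logarithmic 1 .. 256: a = 0.0 -> 1.0, a = 1.0 -> 256.0
}

float alphaToEnvLod( in float a )
{
    // for a 512x512 face, mip 7 is 4x4, so clamping here keeps the "180 degree blur"
    // off the 2x2 and 1x1 levels; roughly: shininess 256 -> ~7.7 degrees, 4096 -> ~1.9 degrees
    return clamp( 7.0 * ( 1.0 - a ), 0.0, 7.0 );
}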
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Klauss, could you pass me the formulas you use for shininess <=> angle?
T.I.A.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

I've changed the title of this thread from "Genesis of a Shader" to...

CineMut shader family - Opaque

What does "CineMut" mean?

Sine is Latin for "without". Mut is Latin for "change", "alteration", "mutation", "adjustment"...
Sine Mut therefore would mean "without adjustment", or "tweak-less".
But CineMut reads more cinematic ... :D

Opaque indicates the member of this family, which will include this shader for opaque materials; a "FireGlass" (dual-role) shader for glass (and other transparent materials) as well as "fire" (engine exhausts); and another shader for painting sky backgrounds.


And why the change of name?

Well, today I called Klauss in Google chat to try and come to some agreement about all these parameters in xmesh. At first it seemed that our differences were irreconcilable. But in the end we did reach agreement. The key element was that somehow Klauss thought that this shader was intended to be some kind of "default" shader. As I said to him, it could never be a default shader. Why? Because only by specifying this shader through a specific technique in xmesh can it be invoked. The default behaviour, when no technique is specified in xmesh, is to use the "old" (current) shaders, which make use of such parameters as color and shininess adjustments in xmesh.

Klauss' concern was that not all artists would feel as I do about establishing a library of "real materials", and that perhaps many would prefer instead to paint with colors they "feel" will look right, and then do final adjustments via the xmesh color and shininess parameters, rather than fix the texture.
I hope that he is wrong, and that the CineMut shaders family will be the most popular; but he could be right.


And so, what's going to be so good about CineMut shaders?

With CineMut shaders, the motto is "it's all in the textures". No ifs or buts; it IS all in the textures. The final look of the ship in-game will depend on the textures and nothing more; --no tweaks or parameters anywhere. If you use a set of diffuse, specular and shininess colors that look like nickel in one ship, they will look like nickel in another ship, or on a space station. As an artist, you'll be able to count on that. There will be libraries of materials for Blender: One for metals, one for ceramics, another for paints, and so on; and you'll be able to pick materials and KNOW what they will look like in game, because there will be at least one ship or station using all of them, which you can observe in-game while reading the material labels. No more endless cycles of tweaking and testing... No more wondering and crossing fingers...

But a reliable library of materials would not be possible if their looks could be tweaked afterwards via parameters in xmesh. If the shader used those colors and "power" parameters from xmesh, the only way a library could ensure its consistency would be by mandating what the parameters in xmesh ought to be; but then, what would be the point of reading them?


But to get back to shady politics, there will probably be a sister AdMut family of shaders that do make use of those parameters, for those artists who do want to tweak their creations rather than try to make the textures just right.

Either way, it seems that the parameters will stay in xmesh. The only difference between AdMut and CineMut shader families is that CineMut will summarily ignore them xmesh parameters with prejudice.

Well, there IS another difference: Reading those parameters and using them to modulate colors takes a number of shader instructions, and shader instructions are a precious resource. This shader, CineMut Opaque, takes 128 instructions, exactly.
All other things being equal, AdMut shaders will have to make some sacrifice or other in order to take those parameters into account. Chances are, therefore, this CineMut family will have one or two superior features relative to AdMut.


And why will the parameters be in xmesh if the shader will ignore them?

Simply because they will be needed for compatibility mode, such as "no shader" mode (traditional OpenGL pipeline). That is, suppose you model a new ship and call for use of a CineMut shader in the (new) "technique" field of xmesh; but some player out there has a very old videocard and she sets the shader in vssetup to "No Shader". Well, the old OpenGL pipeline cannot read shininess from the texture; it needs to receive it as a mesh level, global parameter. And where will this parameter come from? From the "Power" parameter in xmesh, of course.


So, it just seemed right to commit these private discussions to the forum for public scrutiny, since there was no secrecy intent, and it's a subject of "public interest" around here; and it seemed also right to give this special shader family a descriptive name, to put it into perspective, and assure everyone this is not intended as a "default" shader. I do believe CineMut will be the shader family of choice, but never any kind of "default".
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

So then we're back to needing a new mesher app that has a simple gui and allows for up-front hand editing of "power" values.

This should probably be added to the todo list for 0.5.2 then.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Indeed!

Hadn't Snow_Cat written a front end GUI for mesher? Or was it something else he wrote? Did he finish it? Was it in a portable language?


EDIT:
New ideas for CineMut Opaque:
This is probably wishful stuff; I'm not at all sure I CAN implement these ideas without pushing shader instruction count through the roof. However, ideas are ideas; and they MUST be committed to writing, lest they be lost.


Composite shininess:

Many metal finishes, rather than being best represented by a single shininess value, are best represented by a mixture of two. Nickel is a good example of a material that seems to, more often than not, have this kind of appearance: If you look at nickel or nickel-plated parts, what you see on reflection is like a blend of a VERY sharp image and a VERY blurred one; rather than a single image of some in-between blurriness. This is dramatically noticeable; I'd recommend you grab some nickel part, such as the spade-type "quick disconnect" connectors used in electronics, and look at the reflections on it. The same is true of some stainless steel objects; --except that in most stainless finishes, the sharp reflection is so faint it's almost unnoticeable.
I suppose this is caused by the initial finishing process producing a very low shininess, and a mechanical polishing post-process that leaves part of the surface mirror-like flat, but parts of it untouched; such that, at the microscopic level, it might look like a very flat top surface but marred by rough craters.
The present CineMut Solid shader is so damn close to being able to do this it's not even funny. CineMut already reads the environment map twice: Once with shininess-controlled LOD, and once without. The without-LOD read is presently there for the fresnel component of reflectivity (which applies to materials marked as dielectric; NOT to metals). But I could, of course, use it for blending with the LOD variety of environment mapping, instead (without fresnel modulation). The only gotcha is that I don't have a parameter or channel to indicate to the shader in what proportion it should mix LOD and non-LOD env mappings.
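(A sketch of the missing piece, reusing the envMapping/envMappingLOD/lerp helpers from the finished shader further down; blend_ratio is precisely the parameter there's no channel for yet:)

Code: Select all

vec3 compositeReflection( in vec3 reflect_vec3, in float gloss_LoD1, in float blend_ratio )
{
    vec3 blurred = envMappingLOD( reflect_vec3, gloss_LoD1 ); // the shininess-controlled fetch
    vec3 sharp   = envMapping( reflect_vec3 );                // the LOD-less fetch, so far used only for fresnel
    return lerp( blend_ratio, blurred, sharp );               // nickel-style mix of a very blurred and a very sharp image
}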


Fresnel shininess:

Currently, CineMut assumes that fresnel reflections have virtually infinite shininess. These reflections use standard (LOD-less) fetches of the environment map. Why didn't I use shininess for fresnel? Because I thought (and still think) that dielectric reflections will be used by most artists for glossy paints, --which don't need to be, but most often are, very shiny; and because I wanted (and still want) to reserve the shininess channel to modulate a second-layer specularity: That of the material beneath or suspended within the transparent layer of a glossy paint, thus permitting metalized paints. That is, if you want to represent green metalized car paint, give the diffuse channel the shade of green you want, mark the material as dielectric by setting alpha to 1.0 in the damage texture, and finally give it a shade of gray in specular, and put the spec's alpha channel to 0.0 to indicate the lowest shininess, and voila: You get a first layer of diffuse greenness, a second layer of metal dust (ultra-low-shininess specularity), plus faint but sharp surface specularities (fresnel modulated) representing the glossy, highly polished, dielectric surface finish.
No doubt Vegastrike will have the most sophisticated paint representations of ANY engine out there, real or imagined.
BUT...
What about when the dielectric surface layer is NOT highly polished?
Lacking the ability to represent un-polished dielectrics is a pity, really.
What occurred to me last night is that when the shininess channel (spec alpha) has information in it (reads less than 1.0), AND the specular color has no apparent info (reads as black), then perhaps we could interpret this as meaning that the shininess value that's there is meant to apply to dielectric, fresnel reflections.
I'm expressing the idea in "if, then, else" terms, but it could also be put into a continuous-function form, like "the specular color dot itself lerps the use of shininess between fresnel and non-fresnel specularity components", or something along those lines.
I'm fighting back the temptation to just do it, even as I write this. What gives me the strength to resist the temptation is, first of all, the concern with instruction count; but also, the unresolved issue of composite shininess materials I mentioned earlier.
(I keep wondering if there's any way to free two birds with one shot...)
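(The continuous form of that idea, sketched; spec_energy stands in for "the specular color dot itself", and the exact blend targets are a guess at the intent, not settled design:)

Code: Select all

float fresnelLod( in vec3 spec_mat3, in float gloss_LoD1 )
{
    float spec_energy = dot( spec_mat3, spec_mat3 ) / 3.0; // 0.0 for a black spec color, 1.0 for white
    // black specular color: the shininess channel (as an LOD) is handed to the fresnel layer;
    // bright specular color: fresnel stays mirror-sharp and shininess stays with the second layer
    return mix( gloss_LoD1, 0.0, spec_energy );
}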


EDIT2:
I'm edging closer and closer to using diffuse alpha for something more useful than alpha... Hmmm... No, I can't; we'd not be able to have alpha-testing...


EDIT3:
Only metalized paints, which, for us, are "dielectrics", use mixes of diffuse and specular colors. Non-dielectrics (metals) are usually black in specular.
Perhaps, when a material is marked as non-dielectric, I could use diffuse color as a channel to convey the blend ratio of non-LOD environment mapping.
loki1950
The Shepherd
Posts: 5841
Joined: Fri May 13, 2005 8:37 pm
Location: Ottawa

Post by loki1950 »

Hadn't Snow_Cat written a front end GUI for mesher? Or was it something else he wrote? Did he finish it? Was it in a portable language?
He never finished it, and it was Windows-only.

Enjoy the Choice :)
my box::HP Envy i5-6400 @2Q70GHzx4 8 Gb ram/1 Tb(Win10 64)/3 Tb Mint 19.2/GTX745 4Gb acer S243HL K222HQL
Q8200/Asus P5QDLX/8 Gb ram/WD 2Tb 2-500 G HD/GF GT640 2Gb Mint 17.3 64 bit Win 10 32 bit acer and Lenovo ideapad 320-15ARB Win 10/Mint 19.2
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Post by safemode »

It really shouldn't be rocket science. Get python's standard gui to create a main window, then some buttons that designate the switches available to mesher, then two file selection buttons and text inputs to select/display/edit the filenames you supply the mesher.

Then a series of text input squares allowing the modification of the "power" values, and you're pretty much done. This really shouldn't be that hard and it doesn't have to be sexy; Python uses Tkinter as its standard GUI toolkit.

Have a final button to "mesherize" which executes the mesher app with the correct flags and filenames and then have it open the resultant file and insert the new "power" values and overwrite the file with the modified data.

Then voila, without any compiling or additional requirements and what not you'd have a multi-platform mesher frontend that puts the power value modifications right up front for you to play with.

ok, now back to our regularly scheduled topic. :)
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

loki1950 wrote:
Hadn't Snow_Cat written a front end GUI for mesher? Or was it something else he wrote? Did he finish it? Was it in a portable language?
He never finished it, and it was Windows-only.
Damn!



@Safemode: And most importantly: Blend mode and Technique.



More elucubrations...

I think I'm settled on the two last ideas:

For non-dielectric materials, diffuse luma can be made to control blending ratio of LOD and non-LOD components of environment mapping. This has its problems; but I'll deal with something even more problematic first...

For dielectric materials, specular luma could control the extent to which shininess is applied to (middle layer) specular color, or to fresnel reflections.
This is hugely problematic, at least at first thought: The main problem is that if, say, the balance were 40% or 60% we'd need two LOD fetches of different LOD (blur factor) level; --one for specular, one for fresnel. But currently I have two fetches, yes, but while one of them is an LOD fetch, the other is non-LOD. Perhaps I could change this, and make them both LOD fetches; but I'm reluctant to do that because our representable shininess range goes up to 256, rather than to 4 million; and NON-LOD fetching of the env-map is the equivalent of a shininess of about 4 million, according to Klauss, and that's a bit of a precious jewel to have around. Glossy paints can certainly use it.

Another solution would be to make fresnel reflections be a variable blend of LOD and non-LOD env-mapping, with the blending ratio controlled by specular luma.
One attractive aspect of this latter solution is that blending of LOD and non-LOD reflectivities is something we already need for composite shininess metals, which hints at the possibility of finding shared code optimizations.

That is, we'd
1) make shininess-controlled LOD and non-LOD fetches of the env map up front (early in the shader code),
2) compute a blend of the two based on diffuse luma (for metals)
3) compute a blend of the two based on spec luma (for dielectrics)
4) decide which to use (2 or 3) based on the "is_dielectric" bit.

NOTE: As shininess of the dielectric surface decreases, fresnel modulation should decrease also, to account for the fact that the surface isn't flat. Probably the easiest way to implement this, since the algorithm just detailed would make dielectrics a dual shininess composite, would be to add the LOD component of specularity without fresnel modulation; in other words, add it to the fresnel component after fresnel modulation. This will allow us to represent plastics quite realistically; which was a deep worry of mine until now: Plastics reflect dielectrically (desaturatedly), but, unlike glossy paints, they show little fresnel modulation and low glossiness.

The problem I haven't dealt with yet is the giving up of diffuse color in the case of non-dielectrics. I'm far less than 100% sure about the wisdom of that. Then again, I'm far more than 50% sure about it... :D Okay, 75%.
Let's revisit my old argument:
Diffuse materials are rare. Most materials are specular; and some have extremely low shininess, close to one, and we tend to confuse that with their being "diffuse", but low-shininess specularity and diffuse reflectivity are totally different animals. Even rubber is more specular than diffuse (when clean, that is; it can be quite diffuse when dusty). We only really find truly diffuse materials in some special paints (not the kind you find at home renovation outlets; rather paints used in optical equipment), and, to some extent, in some ceramics, fabrics and some special types of paper, like Canson (heck, even newspaper paper is more specular than diffuse).
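(The two "different animals" written out with the standard Lambert and Phong terms, for reference; N, L and R are the usual unit normal, light and reflection vectors:)

Code: Select all

float lambertDiffuse( in vec3 N, in vec3 L )
{
    return max( dot( N, L ), 0.0 );                   // no view dependence at all
}

float phongSpecular( in vec3 R, in vec3 L, in float shininess )
{
    return pow( max( dot( R, L ), 0.0 ), shininess ); // even at shininess ~1 this is still a view-dependent lobe,
                                                      // just a very broad one; not the same thing as diffuse
}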
Diffuse materials are rare because their surfaces have to be complex enough that most photons entering the material must bounce several times before managing to get out of it --a tall order. That's the only way in which photons entering a material can bounce back out in a totally random direction, without even a statistical preference; which is what the diffuse reflectivity equation implies.
Can we think of any way in which a metal could have a diffuse component to reflectivity? I suppose if we made micron-thin metal wire, wove it into a sparse mesh, then piled up multiple layers of this mesh, we might be able to make a diffuse metal finish; but it would have to be done quite intentionally. I don't see how metal could possibly have diffuse reflectivity from typical fabrication technologies, such as casting, lamination, extrusion or plating. That's why I said elsewhere that the fact we even have diffuse lighting in 3D graphics is because someone thought of a diffuse lighting equation first. If specular lighting had come first, I dare doubt whether the industry would even have seen a need to have diffuse lighting at all. And yet, to this day, we're so stuck on diffuse thinking that we even start our texturing jobs by painting a diffuse texture... But I'm getting side-tracked...
What I'm not quite sure about is what happens when metals rust. Well, each metal is a completely different story, in this regard; I know that; but my question is, can rust change a surface structure so as to make it diffuse?
Well, the answer is obviously yes; though I've no idea how it happens.
Take really rusty iron, for instance: This is probably one of the most perfectly diffuse materials you'll ever find.
Alas! This leads to an inescapable conclusion that is the opposite of what I was hoping: Iron or steel with partial rusting can only be represented by a blend of diffuse red/orange and specular gray. Doh!
So, I absolutely cannot use diffuse luma to control LOD/non-LOD blending of environment mappings. :(
Back to the drawing board...

EDIT:
Well, there's really no lack of crazy solutions: I could steal the blue channel from the damage texture and use it for composite shininess blend ratio. I just don't like craziness...

EDIT2:
Well, maybe stealing the green channel wouldn't be so crazy :D

EDIT3:
Hell, if I do that, then I wouldn't have to use spec luma to determine fresnel LOD mix; I could just use the damage.green for LOD/non-LOD mixing, and then decide by "is_dielectric" whether this mix is to be applied to specular or to fresnel reflection.
This is looking very enticing... How important can damage color be, anyway? I could have LaGrande make the red channel of damage be a blend of the original: one part red and two parts green, and then have the shader recreate green by mixing 3 parts of input red and 1 part of input blue, and be done with it. Heck; damage specular would be more important to have than damage diffuse, and we don't have it. And a damage normalmap would be even more useful, and we don't have it either. IOW, our way of representing damage is a bad joke, anyway.

EDIT4:
Just a rough draft of what changes are needed:

Code: Select all

//read & demultiplex specular texture
temp4 = texture2D( specMap, ... );
spec_color = temp4.rgb;
shininess_LOD = 7.0 * ( 1.0 - temp4.a );
shininess = pow( 256.0, temp4.a );
//read & demultiplex damage texture
temp4 = texture2D( damgMap, ... );
dmg_color = temp4.rgb;
is_dielectric = temp4.a;
LOD_blend = temp4.g;
spec_LOD_blend = LOD_blend * ( 1.0 - is_dielectric );
frsn_LOD_blend = LOD_blend * is_dielectric;
dmg_color.g = lerp( 0.25, temp4.r, temp4.b ); //recreate green from 3 parts red, 1 part blue
//read environment map without shininess modulation
env_color = textureCube( ..., ... );
//read environment map with shininess modulation
envLoD_color = textureCubeLod( ..., ..., shininess_LOD );
//Fresnel factor
fresnel_alpha = lerp( is_dielectric, 0.0, 0.06 + 0.94 * pow( 1.0 - dot( normal, view ), 2.0 ) );
fresnel_beta = 1.0 - fresnel_alpha;
//Layer 1 (diffuse)
//(diffuse material stuff)
//Layer 2 (non-dielectric specular)
spec_env_contrib = spec_mat * fresnel_beta * lerp( spec_LOD_blend, env_color, envLoD_color );
//Layer 3 (fresnel)
frsn_env_contrib = white * lerp( frsn_LOD_blend, fresnel_alpha * env_color, envLoD_color );
//etceteras...
EDIT5:
For the purposes of the material library, Blender can presently only bake diffuse color to a texture. This is not a major problem because, to bake specular, all one has to do is save one's file first, bake the diffuse color, then go through the list of materials copying specular color to diffuse, bake the diffuse again, but call the file spec.png, then close blender without saving.
But we got other things: LOD/non-LOD blend ratio, is_dielectric and shininess...
Well, Blender also has a "mirror color", so I guess these three parameters can be put in mirror.rgb channels. The mirror color bake would look pretty funny, but so be it; LaGrande can take those channels and apply them appropriately.

EDIT6:
Alright, I'll try to implement it now, and see what kind of instruction count I get. Wish me luck; I'll need it.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

Heck, if I'm going to steal a channel from the damage texture, then
a) might as well steal blue; which is the least important color in terms of luma perception, and
b) might as well give the damage texture a full alpha channel, and
b1) use the blue channel for is_dielectric and
b2) use the alpha channel for LOD/non-LOD env mapping blend ratio

Almost done, btw.
ace123
Lead Network Developer
Posts: 2560
Joined: Sun Jan 12, 2003 9:13 am
Location: Palo Alto CA

Post by ace123 »

I'm totally in favor of a good UI for mesher. That code is a mess, and I already spent an hour or two getting rid of crashes when I gave it invalid arguments (I couldn't get the thing to work since it kept segfaulting).

It should only take a day to hack one up in C/C++ using GTK.

If we want to go the route of Python, it might be possible to use Boost to export the conversion functions to Python... or we could make it spawn the other mesher program, but that's annoying since then we need a frontend and we aren't much better off--the idea is to only use the command line arguments for bulk conversions.

struct.pack/unpack is really powerful, so it would be possible to do Obj<->Bfxm in Python, but I really don't know if it's best to rewrite the Ogre conversion in python unless it's really simple to do... and if it depends on C++ code it may as well be entirely in C++.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Post by chuck_starchaser »

DONE! :D (Well, at least it compiles...)

Code: Select all

//CineMut Opaque shader (high end)
uniform int light_enabled[gl_MaxLights];
uniform int max_light_enabled;
//samplers
uniform samplerCube cubeMap;
uniform sampler2D diffMap;   //1-bit alpha in alpha, for alpha-testing only
uniform sampler2D specMap;   //sqrt(shininess) in alpha
uniform sampler2D glowMap;   //ambient occlusion in alpha
uniform sampler2D normMap;   //U in .rgb; V in alpha
uniform sampler2D damgMap;   //"is_dielectric" in 1-bit alpha
uniform sampler2D detailMap; //.rgb adds to diffuse, subtracts from spec; alpha mods shininess
//other uniforms
uniform vec4 cloakdmg; //.rg=cloak, .ba=damage
//envColor won't be needed, since we're fetching it from the envmap

//NOTE: Since the term "binormal" has been rightly deprecated, I use "cotangent" instead :)

vec3 lerp( in float f, in vec3 a, in vec3 b)
{
    return (1.0-f)*a + f*b;
}
vec3 fastnormalize( in vec3 v ) //less accurate than normalize() but should use fewer instructions
{
    float tmp = dot( v, v );
    tmp = 1.5 - (0.5*tmp);
    return tmp * v;
}
vec3 norm_decode( in vec4 packed_norm )
{
    //The LaGrande normalmap noodle does away with the z-term for the normal by encoding U and V
    //as 0.5*tan( angle ), where angle is arcsin( U ) or arcsin( V ), respectively. To fit that
    //into a 0-1 range, we multiply by 0.5 once again, and add 0.5.
    //To reverse the encoding, we first subtract 0.5, then multiply by four, fill the z term with
    //1.0, and normalize. But multiplying by four is not needed if we fill the z term with
    //0.25 instead; *then* normalize:
    vec3 result;
    result.x = 0.3333*(packed_norm.r+packed_norm.g+packed_norm.b) - 0.5;
    result.y = packed_norm.a - 0.5;
    result.z = 0.25;
    return normalize( result ); //can't use fastnormalize() here
}
vec3 imatmul( in vec3 tan, in vec3 cotan, in vec3 norm, in vec3 light )
{
    return light.xxx*tan + light.yyy*cotan + light.zzz*norm;
}
float shininess2Lod( in float alphashininess ) 
{ 
    //return clamp( 7.0 - log2( shininess + 1.0 ), 0.0, 7.0 );
    return 0.5 + 7.0 * ( 1.0 - alphashininess );
}
float alpha2shininess( in float alpha )
{
    return pow( 255.0, alpha ); //means that alpha is log255( shininess )
}
float limited_shininess( in float shine )
{
    float limit = 50.0; //50^2 is 2500. 2500*0.001 = 2.5 --enough risk of saturation!
    return (shine*limit)/(shine+limit);
}
float specularNormalizeFactor( in float limited_shininess )
{
    return pow(1.7/(1.0+limited_shininess/10.0),-1.7);
}
vec3 ambientMapping( in vec3 normal )
{
    vec4 result = textureCubeLod( cubeMap, normal, 7.7 );
    return result.rgb * result.a;
}
vec3 envMapping( in vec3 reflection )
{
    vec4 result = textureCube( cubeMap, reflection );
    return result.rgb * result.a;
}
vec3 envMappingLOD( in vec3 reflection, in float LoD )
{
    vec4 result = textureCubeLod( cubeMap, reflection, LoD );
    return result.rgb * result.a;
}
float soft_NdotL( in float NdotL ) //for soft penumbras
{
    float s = 1.0 - (NdotL*NdotL); //s is 1.0 at penumbra point, falls slowly
    s *= s; //falls faster
    s *= s; //falls much faster either way from the penumbra point
    s *= NdotL; //s is now zero at penumbra but has tiny +/- wavelets to the sides
    return clamp( 0.98*NdotL + 0.02 - s, 0.0, 1.0 ); //we shrink NdotL by 2%, shift
    //it up by 2%, and subtract the s wavelet to flatten the penumbra area
}
float selfshadow( in float sNdotL ) //use of soft NdotL should be most correct
{
    float s = clamp(1.0 - sNdotL, 0.0, 1.0);
    s *= s;
    s *= s;
    s *= s;
    return clamp(1.0 - s, 0.0, 1.0);
}
void lightingLight
   (
   in vec3 light, in vec3 normal, in vec3 vnormal, in vec3 reflection,
   in vec3 lightDiffuse, in float lightAtt, in float ltd_gloss,
   inout vec3 diff_acc, inout vec3 spec_acc
   )
{
    float NdotL = clamp( dot(normal,light), 0.0, 1.0 );
    float sNdotL = soft_NdotL( dot(vnormal,light) );
    float RdotL = clamp( dot(reflection,light), 0.0, 1.0 );
    float sshadow = selfshadow( sNdotL ); //local named sshadow to avoid shadowing the selfshadow() function
    float spec = pow( RdotL, ltd_gloss ); //phong lobe for this light
    diff_acc += ( NdotL * lightDiffuse.rgb * lightAtt * sshadow );
    spec_acc += ( spec * lightDiffuse.rgb * lightAtt * sshadow ); //phong term modulates the specular accumulator
}

#define lighting(name, lightno_gl, lightno_tex) \
void name( \
   in vec3 normal, in vec3 vnormal, in  vec3 reflection, \
   in float limited_gloss, \
   inout vec3 diff_acc, inout vec3 spec_acc) \
{ \
    lightingLight( \
      normalize(gl_TexCoord[lightno_tex].xyz), \
      normal, vnormal, reflection, \
      gl_FrontLightProduct[lightno_gl].diffuse.rgb, \
      gl_TexCoord[lightno_tex].w, \
      limited_gloss, \
      diff_acc, spec_acc); \
}

lighting(lite0, 0, 5)
lighting(lite1, 1, 6)

void main()
{
    ///VARIABLE DECLARATIONS
    //vector variables
    vec4 temp4; //all-purpose vec4 temporary
    vec3 eye_vec3;
    vec3 vnormal_vec3;
    vec3 normal_vec3;
    vec3 tangent_vec3;
    vec3 cotangent_vec3; //"binormal" ;-)
    vec3 reflect_vec3;
    vec3 light0_vec3;
    vec3 light1_vec3;
    //material color variables
    vec3 diff_mat3;
    vec3 damg_mat3;
    vec3 spec_mat3;
    vec3 glow_mat3;
    vec3 darkenin3; //color to limit specularity as a function of damage
    //inferent light variables
    vec3 light0_il3;
    vec3 light1_il3;
    vec3 ambient_il3; //to be fetched from envmap via normal
    vec3 specular_il3; //to be fetched from envmap via reflect vector
    vec3 spec_light_acc3; //specular light accumulator
    vec3 diff_light_acc3; //diffuse light accumulator
    vec3 env_il3;
    vec3 envLoD_il3;
    //afferent light variables
    vec3 diff_contrib_al3;
    vec3 spec_contrib_al3;
    vec3 envm_contrib_al3;
    vec3 frsn_contrib_al3;
    vec3 glow_contrib_al3;
    vec3 amb_contrib_al3;
    //accumulator:
    vec4 result4;
    //scalar factors and coefficients
    float ao_glow_fac1; //plain ambient occlusion factor, used for ambient contribution
    float ao_spec_fac1; //squared ambient occlusion, used for specular modulation
    float ao_diff_fac1; //square root of ambient occlusion, used for diffuse
    float is_dielectric_mat1; //0.0 = metal; 1.0 = dielectric
    float gloss_mat1; //A.K.A. "shininess"
    float limited_gloss_fac1; //smooth-limited gloss to use for spotlights
    float gloss_LoD1;
    float spec_gloss_adj;
    float fresnel_alpha;
    float fresnel_beta;
    float spec_gloss_blend_mat1; //controls blend of non-LOD and LOD env mapping for spec
    float frsn_gloss_blend_mat1; //controls blend of non-LOD and LOD env mapping for fresnel
    //interpolated mesh data fetches
    vec2 texcoords2 = gl_TexCoord[0].xy;
    vnormal_vec3 = fastnormalize( gl_TexCoord[1].xyz );
    tangent_vec3 = gl_TexCoord[2].xyz;
    cotangent_vec3 = gl_TexCoord[3].xyz;
    ///MOSTLY TEXTURE FETCHES (start with spec, as we'll need shininess at the earliest time):
    //read specular into temp4, then spec_mat gets .rgb, and gloss_mat gets 255^(.a)
    temp4 = texture2D(specMap,texcoords2);
    spec_mat3 = temp4.rgb;
    gloss_LoD1 = shininess2Lod( temp4.a );
    gloss_mat1 = alpha2shininess( temp4.a );
    //read normalmap into temp4, then .rgb goes to U, .a goes to V (tangent space just for now)
    temp4 = texture2D(normMap,texcoords2).rgba;
    normal_vec3 = norm_decode( temp4 );
    //read glow texture into temp4, then .rgb^2 goes to glow_mat, and .a goes to ao_glo_fac
    temp4 = texture2D(glowMap,texcoords2).rgba;
    glow_mat3 = temp4.rgb * temp4.rgb; ao_glow_fac1 = temp4.a; 
    //read diffuse into temp4, then rgb goes to diff_mat, a goes to alpha
    temp4 = texture2D(diffMap,texcoords2).rgba;
    diff_mat3 = temp4.rgb; result4.a = temp4.a;
    //read damage texture into temp4, then .rg goes to damg_mat3, .b goes to is_dielectric and
    //.a goes to LOD/non-LOD gloss blend
    temp4 = texture2D(damgMap,texcoords2);
    damg_mat3 = temp4.rgb;
    is_dielectric_mat1 = temp4.b;
    damg_mat3.b = 2.0 * temp4.g - temp4.r;
    spec_gloss_blend_mat1 = temp4.a * ( 1.0 - is_dielectric_mat1 );
    frsn_gloss_blend_mat1 = temp4.a * is_dielectric_mat1;
    //blend damage back into the diffuse
    diff_mat3 = lerp( cloakdmg.b, diff_mat3, damg_mat3 );
    //we need a darkening color to limit specularity as a function of damage, also
    temp4 = vec4( 1.0 );
    darkenin3 = vec3( 0.333 * dot( damg_mat3, damg_mat3 ) );
    darkenin3 = lerp( cloakdmg.b, temp4.rgb, darkenin3 );
    //and darken specular material by it
    spec_mat3 *= darkenin3;
    //read detail texture into temp4; then .rgb modulates diffuse/spec, .a modulates shininess
    temp4 = texture2D(detailMap,16.0*texcoords2);
    temp4 -= vec4( 0.5 ); temp4 *= 0.12345;
    diff_mat3 -= temp4.rgb; spec_mat3 += temp4.rgb; gloss_mat1 -= temp4.a;
    ///OTHER PRE-PER-LIGHT COMPUTATIONS
    //normalmapping-derived vector computations (normal (using tangent))
    normal_vec3 = fastnormalize(imatmul(tangent_vec3,cotangent_vec3,vnormal_vec3,normal_vec3));
    //reflection vector computation
    reflect_vec3 = -reflect( eye_vec3, normal_vec3 );
    //compute smooth shininess limit for spotlights
    limited_gloss_fac1 = limited_shininess( gloss_mat1 );
    //initialize accumulators
    diff_light_acc3 = spec_light_acc3 = vec3( 0.0 );
    //and might as well compute the shininess adjusted specularity
    spec_gloss_adj = specularNormalizeFactor( limited_gloss_fac1 );
    //and might as well compute other gammas of ambient occlusion
    ao_diff_fac1 = sqrt( ao_glow_fac1 );
    ao_spec_fac1 = ao_glow_fac1 * ao_glow_fac1;
    ///PER-LIGHT COMPUTATIONS
    if( light_enabled[0] != 0 )
     lite0(normal_vec3,vnormal_vec3,reflect_vec3,limited_gloss_fac1,diff_light_acc3,spec_light_acc3);
    if( light_enabled[1] != 0 )
     lite1(normal_vec3,vnormal_vec3,reflect_vec3,limited_gloss_fac1,diff_light_acc3,spec_light_acc3);
    //we will process the accumulators later, to give the above loops time to finish
    //AMBIENT CONTRIBUTION
    //assume the environment cube map is encoded with gamma = 0.5 (but keep it in the family ;-)
    amb_contrib_al3 = ambientMapping( normal_vec3 );
    env_il3 = envMapping( reflect_vec3 ); //fresnel env mapping, while we're at it...
    amb_contrib_al3 *= ( amb_contrib_al3 * ao_glow_fac1 * diff_mat3 );
    env_il3 *= env_il3;
    //now we can multiply material albedos by the flavors of ambient occlusion
    spec_mat3 *= ao_spec_fac1;
    diff_mat3 *= ao_diff_fac1;
    ///FRESNEL STUFF begins
    //Fresnel evaluates to a coefficient that will be used to blend white specular specularity with
    //the diffuse AND specular contributions, in the case of dielectrics. For non-dielectrics, fresnel
    //will be zero. Shininess for fresnel specularity is always maxed out. Specularity and shininess
    //specified through textures, with a dielectric material, will constitute a "third layer" for the
    //material, and will allow representation of metallized paints.
    fresnel_alpha = 1.0 - clamp( dot( eye_vec3, normal_vec3 ), 0.0, 1.0 );
    fresnel_alpha *= fresnel_alpha;
    fresnel_alpha = clamp( 0.0625 + ( 0.9375 * fresnel_alpha ), 0.0625, 1.0 );
    fresnel_alpha *= ( is_dielectric_mat1 * (1.0-cloakdmg.b) ); //cloakdmg.b is the damage
    fresnel_beta = 1.0 - fresnel_alpha; // ;-)
    ///ENVIRONMENT MAPPING
    //shininess to env map LOD
    //read env LOD (reflect); .rgb^2 goes to specular_il3 (assume gamma=0.5)
    //assume the environment cube map is encoded with gamma = 0.5 (but keep it in the family ;-)
    envLoD_il3 = envMappingLOD( reflect_vec3, gloss_LoD1 );
    envLoD_il3 *= envLoD_il3;
    //FRESNEL STUFF continues; now we apply it:
    //essentially, total specular contribution is
    //specular_material * (1-fresnel) * LODenv + fresnel * env (fresnel shininess always maxed out)
    frsn_contrib_al3 = lerp( frsn_gloss_blend_mat1, envLoD_il3, fresnel_alpha * env_il3 );
    diff_mat3 *= fresnel_beta;
    envm_contrib_al3 = fresnel_beta * spec_mat3 * lerp( spec_gloss_blend_mat1, envLoD_il3, env_il3 );
    //diffuse contribution also gets multiplied by 1-fresnel
    diff_contrib_al3 = diff_light_acc3 * diff_mat3 * ao_diff_fac1 * fresnel_beta;
    //specular contribution is a bit of a hard question. Theoretically it should be multiplied by
    //1-fresnel, but then we should add fresnel reflection of lights to the lighting loop, which
    //would be expensive. Furthermore, specular spotlights are already gloss-limited to account for
    //non-point-light sources; and this would apply to fresnel reflectivity. In summary, forget it.
    //What the spec contribution needs to be multiplied by is the specular gloss adjustment; AND
    //faded down by damage
    spec_contrib_al3 = spec_light_acc3 * spec_mat3 * ao_spec_fac1 * spec_gloss_adj;
    //GLOW (we got it, already, in glow_mat3; well, not quite; we want to darken it by damage
    glow_mat3 *= ( 0.5 * (2.0-cloakdmg.b) );
    //process accumulations
    result4.rgb = amb_contrib_al3 + diff_contrib_al3 + frsn_contrib_al3 + spec_contrib_al3 + glow_mat3;
    //ALPHA and CLOAK
    result4.rgb *= result4.a;
    result4 *= cloakdmg.rrrg;
    //WRITE
    gl_FragColor = result4;
}
NVShaderPerf says,

Code: Select all

NVShaderPerf : version 2.0, build date Jun 11 2008, 19:15:47
Copyright (C) 2002-2008, NVIDIA Corporation
=====================================================================
Performance analysis of cinemut_opaque.fp
Fragment Performance Setup: Driver 174.74, GPU G70, Flags 0x0
Results 75 cycles, 9 r regs, 128,000,000 pixels/s
And here's the whole output:

Code: Select all

!!ARBfp1.0
OPTION NV_fragment_program2;
# cgc version 2.0.0012, build date Jan 30 2008
# command line args: -profile fp40 -oglsl
# source file: cinemut_opaque.fp
#vendor NVIDIA Corporation
#version 2.0.0.12
#profile fp40
#program main
#semantic light_enabled
#semantic max_light_enabled
#semantic cubeMap
#semantic diffMap
#semantic specMap
#semantic glowMap
#semantic normMap
#semantic damgMap
#semantic detailMap
#semantic cloakdmg
#semantic gl_FrontLightProduct : state.lightprod.front
#var int light_enabled[0] :  : c[0] : -1 : 1
#var int light_enabled[1] :  : c[1] : -1 : 1
#var int light_enabled[2] :  :  : -1 : 0
#var int light_enabled[3] :  :  : -1 : 0
#var int light_enabled[4] :  :  : -1 : 0
#var int light_enabled[5] :  :  : -1 : 0
#var int light_enabled[6] :  :  : -1 : 0
#var int light_enabled[7] :  :  : -1 : 0
#var int max_light_enabled :  :  : -1 : 0
#var samplerCUBE cubeMap :  : texunit 6 : -1 : 1
#var sampler2D diffMap :  : texunit 3 : -1 : 1
#var sampler2D specMap :  : texunit 0 : -1 : 1
#var sampler2D glowMap :  : texunit 2 : -1 : 1
#var sampler2D normMap :  : texunit 1 : -1 : 1
#var sampler2D damgMap :  : texunit 4 : -1 : 1
#var sampler2D detailMap :  : texunit 5 : -1 : 1
#var float4 cloakdmg :  : c[2] : -1 : 1
#var float4 gl_FragColor : $vout.COLOR : COL : -1 : 1
#var float4 gl_FrontLightProduct[0].ambient : state.lightprod[0].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[0].diffuse : state.lightprod[0].front.diffuse : c[3] : -1 : 1
#var float4 gl_FrontLightProduct[0].specular : state.lightprod[0].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[1].ambient : state.lightprod[1].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[1].diffuse : state.lightprod[1].front.diffuse : c[4] : -1 : 1
#var float4 gl_FrontLightProduct[1].specular : state.lightprod[1].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[2].ambient : state.lightprod[2].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[2].diffuse : state.lightprod[2].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[2].specular : state.lightprod[2].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[3].ambient : state.lightprod[3].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[3].diffuse : state.lightprod[3].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[3].specular : state.lightprod[3].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[4].ambient : state.lightprod[4].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[4].diffuse : state.lightprod[4].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[4].specular : state.lightprod[4].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[5].ambient : state.lightprod[5].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[5].diffuse : state.lightprod[5].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[5].specular : state.lightprod[5].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[6].ambient : state.lightprod[6].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[6].diffuse : state.lightprod[6].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[6].specular : state.lightprod[6].front.specular :  : -1 : 0
#var float4 gl_FrontLightProduct[7].ambient : state.lightprod[7].front.ambient :  : -1 : 0
#var float4 gl_FrontLightProduct[7].diffuse : state.lightprod[7].front.diffuse :  : -1 : 0
#var float4 gl_FrontLightProduct[7].specular : state.lightprod[7].front.specular :  : -1 : 0
#var float4 gl_TexCoord[0] : $vin.TEX0 : TEX0 : -1 : 1
#var float4 gl_TexCoord[1] : $vin.TEX1 : TEX1 : -1 : 1
#var float4 gl_TexCoord[2] : $vin.TEX2 : TEX2 : -1 : 1
#var float4 gl_TexCoord[3] : $vin.TEX3 : TEX3 : -1 : 1
#var float4 gl_TexCoord[4] :  :  : -1 : 0
#var float4 gl_TexCoord[5] : $vin.TEX5 : TEX5 : -1 : 1
#var float4 gl_TexCoord[6] : $vin.TEX6 : TEX6 : -1 : 1
#var float4 gl_TexCoord[7] :  :  : -1 : 0
#const c[5] = 2 0.5 1.5 0.98000002
#const c[6] = 1 0.02 0 0.333
#const c[7] = 16 0.12345 255 50
#const c[8] = 5 0 1.7 -1.7
#const c[9] = 0.25 0.33329999 7.6999998 0.9375
#const c[10] = 0.0625 7 7.5
PARAM c[11] = { program.local[0..2],
		state.lightprod[0].front.diffuse,
		state.lightprod[1].front.diffuse,
		{ 2, 0.5, 1.5, 0.98000002 },
		{ 1, 0.02, 0, 0.333 },
		{ 16, 0.12345, 255, 50 },
		{ 5, 0, 1.7, -1.7 },
		{ 0.25, 0.33329999, 7.6999998, 0.9375 },
		{ 0.0625, 7, 7.5 } };
TEMP R0;
TEMP R1;
TEMP R2;
TEMP R3;
TEMP R4;
TEMP R5;
TEMP R6;
TEMP R7;
TEMP R8;
TEMP R9;
TEMP R10;
TEMP RC;
TEMP HC;
OUTPUT oCol = result.color;
TEX   R0, fragment.texcoord[0], texture[1], 2D;
ADDR  R0.x, R0, R0.y;
DP3R  R0.y, fragment.texcoord[5], fragment.texcoord[5];
RSQR  R0.y, R0.y;
MULR  R1.xyz, R0.y, fragment.texcoord[5];
TEX   R8, fragment.texcoord[0], texture[3], 2D;
MOVR  R5.xw, c[5].yyzx;
ADDR  R0.x, R0.z, R0;
MADR  R2.x, R0, c[9].y, -R5;
ADDR  R2.y, R0.w, -c[5];
MOVR  R2.z, c[9].x;
DP3R  R0.x, R2, R2;
RSQR  R1.w, R0.x;
MULR  R2.xyz, R1.w, R2;
MULR  R4.xyz, R2.y, fragment.texcoord[3];
DP3R  R0.x, fragment.texcoord[1], fragment.texcoord[1];
MADR  R0.x, -R0, c[5].y, c[5].z;
MULR  R0.xyz, R0.x, fragment.texcoord[1];
DP3R  R0.w, R0, R1;
MADR  R4.xyz, R2.x, fragment.texcoord[2], R4;
MADR  R2.xyz, R0, R2.z, R4;
TEX   R4, fragment.texcoord[0], texture[4], 2D;
MADR  R1.w, -R0, R0, c[6].x;
MULR  R1.w, R1, R1;
MULR  R1.w, R1, R1;
MULR  R1.w, R0, R1;
MADR  R0.w, R0, c[5], -R1;
DP3R  R2.w, R2, R2;
MADR  R1.w, -R2, c[5].y, c[5].z;
MULR  R2.xyz, R1.w, R2;
DP3R_SAT R1.x, R1, R2;
ADDR_SAT R0.w, R0, c[6].y;
ADDR_SAT R0.w, -R0, c[6].x;
MULR  R0.w, R0, R0;
MULR  R1.xyz, R1.x, c[3];
MULR  R0.w, R0, R0;
MADR  R7.z, R4.y, c[5].x, -R4.x;
MOVR  R7.xy, R4;
MADR_SAT R0.w, -R0, R0, c[6].x;
MOVR  R6.xyz, c[6].z;
MOVXC RC.x, c[0];
MULR  R1.xyz, fragment.texcoord[5].w, R1;
MULR  R6.xyz(NE.x), R0.w, R1;
MOVR  R5.xyz, c[6].z;
MULR  R1.xyz, fragment.texcoord[5].w, c[3];
MULR  R5.xyz(NE.x), R1, R0.w;
DP3R  R0.w, fragment.texcoord[6], fragment.texcoord[6];
RSQR  R0.w, R0.w;
MULR  R1.xyz, R0.w, fragment.texcoord[6];
DP3R  R0.x, R0, R1;
MADR  R0.y, -R0.x, R0.x, c[6].x;
MULR  R0.y, R0, R0;
MULR  R0.y, R0, R0;
MULR  R0.y, R0.x, R0;
MADR  R0.x, R0, c[5].w, -R0.y;
DP3R_SAT R0.y, R1, R2;
ADDR_SAT R0.x, R0, c[6].y;
ADDR_SAT R0.x, -R0, c[6];
MULR  R0.w, R0.x, R0.x;
MULR  R0.xyz, R0.y, c[4];
MULR  R0.w, R0, R0;
MULR  R1.xy, fragment.texcoord[0], c[7].x;
TEX   R1, R1, texture[5], 2D;
ADDR  R1, R1, -c[5].y;
MULR  R1, R1, c[7].y;
MADR_SAT R0.w, -R0, R0, c[6].x;
MULR  R0.xyz, fragment.texcoord[6].w, R0;
MOVXC RC.x, c[1];
MADR  R6.xyz(NE.x), R0.w, R0, R6;
MULR  R0.xyz, fragment.texcoord[6].w, c[4];
MADR  R5.xyz(NE.x), R0, R0.w, R5;
DP3R_SAT R0.x, R2, R3;
ADDR  R0.x, -R0, c[6];
MULR  R0.y, R0.x, R0.x;
MOVR  R0.x, c[10];
MADR  R0.x, R0.y, c[9].w, R0;
MADR  R0.y, R4.z, -c[2].z, R4.z;
MAXR_SAT R0.x, R0, c[10];
MULR  R3.w, R0.x, R0.y;
MADR  R0.xyz, R8, -c[2].z, R8;
MADR  R8.xyz, R7, c[2].z, R0;
TEX   R0, fragment.texcoord[0], texture[2], 2D;
RSQR  R4.x, R0.w;
ADDR  R2.w, -R3, c[6].x;
RCPR  R4.x, R4.x;
ADDR  R8.xyz, -R1, R8;
MULR  R9.xyz, R8, R4.x;
MULR  R9.xyz, R9, R2.w;
MULR  R6.xyz, R6, R9;
MULR  R6.xyz, R6, R4.x;
MULR  R9.xyz, R6, R2.w;
MOVR  R2.w, c[9].z;
TXL   R6, R2, texture[6], CUBE;
MULR  R6.xyz, R6, R6.w;
MULR  R10.xyz, R0.w, R6;
DP3R  R2.w, R7, R7;
MULR  R4.x, R2.w, c[2].z;
MULR  R8.xyz, R10, R8;
MADR  R8.xyz, R6, R8, R9;
TEX   R6, fragment.texcoord[0], texture[0], 2D;
MOVR  R2.w, c[2].z;
MADR  R2.w, R4.x, c[6], -R2;
MADR  R6.xyz, R2.w, R6, R6;
DP3R  R2.w, R2, R3;
MULR  R2.xyz, R2, R2.w;
MADR  R3.xyz, -R2, c[5].x, R3;
TEX   R2, -R3, texture[6], CUBE;
MULR  R2.xyz, R2, R2.w;
MULR  R2.xyz, R2, R2;
MULR  R0, R0, R0;
ADDR  R1.xyz, R6, R1;
MULR  R1.xyz, R0.w, R1;
MULR  R1.xyz, R5, R1;
MULR  R1.xyz, R0.w, R1;
POWR  R0.w, c[7].z, R6.w;
ADDR  R1.w, -R1, R0;
MULR  R2.w, R4, R4.z;
ADDR  R0.w, R1, c[7];
MOVR  R7.xyz, -R3;
MADR  R7.w, -R6, c[10].y, c[10].z;
TXL   R7, R7, texture[6], CUBE;
MULR  R3.xyz, R7, R7.w;
MULR  R3.xyz, R3, R3;
MADR  R3.xyz, -R2.w, R3, R3;
MULR  R2.xyz, R3.w, R2;
MADR  R2.xyz, R2.w, R2, R3;
RCPR  R2.w, R0.w;
ADDR  R2.xyz, R8, R2;
MOVR  R0.w, c[6].x;
MULR  R1.w, R1, R2;
MADR  R0.w, R1, c[8].x, R0;
RCPR  R0.w, R0.w;
MULR  R0.w, R0, c[8].z;
POWR  R0.w, R0.w, c[8].w;
MADR  R1.xyz, R1, R0.w, R2;
ADDR  R0.w, R5, -c[2].z;
MULR  R0.xyz, R0.w, R0;
MADR  R0.xyz, R0, c[5].y, R1;
MULR  R0.xyz, R0, R8.w;
MOVR  R0.w, R8;
MULR  oCol, R0, c[2].xxxy;
END
# 141 instructions, 11 R-regs, 0 H-regs
So it looks like we get about 100 fps at 1280x1024 with a G70 GPU (128 Mpixels/s divided by 1280x1024 pixels is roughly 98 frames per second).
With a G80 GPU it should be more like 500 fps. I suppose that's without FSAA.


With these new changes, to summarize, we get,

a) The ability to represent metals with partial polish, which are very common. Nickel is typically finished such that you see a faint but sharp reflection (high shininess, say 255) superposed on a blurry reflection (shininess of 20, say); which is NOT equivalent to a single shininess of some value between 20 and 255, but rather a clear mixing of two distinct shininesses.

b) The ability to represent dielectric materials of low shininess, such as plastics.

Additionally, while before we had a 1-bit alpha channel to represent is_dielectric, we now have a 5-bit channel. By using a value such as 0.8 for is_dielectric, we might be able to at least hint at materials of lower dielectric constant than that of glass (which is mapped to 1.0). We'll have to experiment with this new feature.


To put this in more sensorial terms, what I'm hoping for is that we'll be able to have say a white ceramic wall, with a metal door painted glossy white, and a white plastic cover on that door; and they all will look "white", but you'll be able to recognize that the ceramic is ceramic, the paint is paint, and the plastic is plastic.

And you'll be able to have a stainless pipe held by nickel-plated brackets, where both metals have pretty much exactly the same color, and yet you'll be able to easily tell the look of nickel (blending two shininesses of 20 and 200) from the look of stainless steel (a single shininess of 50).
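
Just to make the two-shininess idea concrete, here's a minimal GLSL sketch of that kind of two-lobe blend. It's an illustration only, not a cut-and-paste from the shader above; the cubeMap sampler is the same idea as in the shader, but the function name and the explicit LOD parameters are just assumptions for the example:

Code: Select all

//Illustration: how a nickel-like finish layers a sharp reflection over a blurry one.
uniform samplerCube cubeMap;

vec3 twoLobeReflection( in vec3 reflection, in float sharpLoD, in float blurryLoD, in float blend )
{
    vec3 sharp  = textureCubeLod( cubeMap, reflection, sharpLoD  ).rgb; //high shininess, e.g. ~255
    vec3 blurry = textureCubeLod( cubeMap, reflection, blurryLoD ).rgb; //low shininess, e.g. ~20
    //blend plays the role of the damage texture's alpha (spec_gloss_blend_mat1):
    //0.0 = a single blurry lobe (stainless-like); 1.0 = the sharp lobe layered on top (nickel-like)
    return mix( blurry, sharp, blend );
}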


I'll get back to work on the FireGlass shader, now.
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Post by safemode »

I don't see why we need to modify the console utility at all (aside from bugfixes). It works. Why can't we just make a simple Python Tkinter frontend that executes the console utility? No need to export functions or do any of that mess. The end result is the same, and it requires no additional compilation or dependencies.

Why would you want to do more work creating all-new code, and then have to debug it on all the OSes we support?


PS:

I moved the star discussion over to another thread because I didn't want to detract from the shader work going on, and much of it had nothing to do with the shader.

http://vegastrike.sourceforge.net/forum ... hp?t=11653
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

By the way, here's the GPU Shader Analyzer output for CineMut Opaque:

Code: Select all

======== Begin r520 neutral format pixel shader: 0 =============
 Shader stats:
     RS Instructions:         6
     TEX Instructions:        9
     ALU Instructions:       89
     ALU Instruction slots:  89
     CF Instructions:         5
     Pix Size:               16
     Highest Const:          16
     Start Addr:              0
     End Addr:              102
 
 RS Instructions:
 
   rs 00:                            r00.rg-- = txc00
   rs 01:                            r01.rgb- = txc01
   rs 02:                            r02.rgb- = txc02
   rs 03:                            r03.rgb- = txc03
   rs 04:                            r04.rgba = txc05
   rs 05:                            r05.rgba = txc06
 
 US Program:


  0 tex 00    :  r06.rgba = lookup(r00.rgrr, tex05)
  1 tex 01    :  r12.rgba = lookup(r00.rgrr, tex06)
  2 tex 02    :  r11.rgba = lookup(r00.rgrr, tex01) ign_unc
  3 tex 03    :  r16.rgba = lookup(r00.rgrr, tex04) ign_unc
  4 tex 04    :  r10.rgba = lookup(r00.rgrr, tex03) sem_wait sem_grab ign_unc
  5 alu 00 rgb:             r00.rg- = mad(r00.rg0.0, (+1.6000000E+01).aar, 0.0) sem_wait
         alpha:             r07.a   = mad(r06.a, 1.0, neg(0.5))  
  6 alu 01 rgb:             r06.r-- = dp3(r06.rgb, 1.0)
         alpha:             r01.a   = ln2(c14.b)  
  7 tex 05    :  r00.rgba = lookup(r00.rgrr, tex02) sem_wait sem_grab ign_unc
  8 alu 02 rgb:             r07.r-- = mad(r06.r0.00.0, c14.arr, neg(r06.0.50.00.0))
         alpha:             r01.a   = mad(r01.a, r12.a, 0.0)  
  9 alu 03 rgb:             r07.--b = mad((+2.5000000E-01).rra, 1.0, 0.0)
         alpha:             r02.a   = rcp((+1.0000000E+01).a)  
  10 alu 04 rgb:             r06.r-- = dp3(r07.rab, r07.rab)
         alpha:             r03.a   = rsq(r16.a)  
  11 alu 05 rgb:             r08.r-- = sop()
         alpha:                       ex2(r01.a)  
  12 alu 06 pre:  srcp.a   = 1.0-2.0*r00.a
  12 alu 06 rgb:             r06.r-- = mad(neg(srcp.a0.00.0), c13.g0.00.0, 0.0)/2 sem_wait
         alpha:             r00.a   = rsq(r06.r)  
  13 alu 07 rgb:             r07.rgb = mad(r07.rab, r00.aaa, 0.0)
         alpha:             r00.a   = mad(r01.b, r01.b, 0.0)  
  14 alu 08 rgb:             r09.r-- = d2a(r01.rg0.0, r01.rg0.0, r00.rra)/2
         alpha:                       mad(0.0, 0.0, 0.0)  
  15 alu 09 rgb:             r03.rgb = mad(r03.rgb, r07.ggg, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  16 alu 10 rgb:             r02.rgb = mad(r02.rgb, r07.rrr, r03.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  17 alu 11 pre:  srcp.rgb = c14.rgb-r09.rgb
  17 alu 11 rgb:             r13.rgb = mad(r01.rgb, srcp.rrr, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  18 alu 12 rgb:             r02.rgb = mad(r13.rgb, r07.bbb, r02.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  19 alu 13 rgb:             r03.r-- = dp3(r02.rgb, r02.rgb)/2
         alpha:                       mad(0.0, 0.0, 0.0)  
  20 alu 14 pre:  srcp.rgb = r11.rgb+r11.rgb
  20 alu 14 rgb:             r01.--b = mad(neg(r11.0.00.0r), 1.0, srcp.0.00.0g)
         alpha:                       mad(0.0, 0.0, 0.0)  
  21 alu 15 rgb:             r01.rg- = mad(r11.rgb, 1.0, 0.0)
         alpha:             r01.a   = mad(r11.a, 1.0, 0.0)  
  22 alu 16 pre:  srcp.rgb = c14.rgb-r03.rgb
  22 alu 16 rgb:             r14.rgb = mad(r02.rgb, srcp.rrr, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  23 alu 17 rgb:             r02.r-- = dp3(r01.rgb, r01.rgb)
         alpha:             r00.a   = mad(r00.r, r14.b, 0.0)  
  24 alu 18 rgb:             r11.r-- = d2a(r00.rr0.0, r14.rg0.0, r00.rra)*2
         alpha:             r00.a   = mad(r02.r, c12.a, 0.0)  
  25 alu 19 rgb:             r09.rgb = mad(r14.rgb, neg(r11.rrr), r00.rrr)
         alpha:                       mad(0.0, 0.0, 0.0)  
  26 alu 20 pre:  srcp.rgb = 1.0-c00.rgb
  26 alu 20 rgb:             r02.rgb = mad(r01.rgb, c00.bbb, 0.0)
         alpha:             r00.a   = mad(r00.a, c00.b, srcp.b)  
  27 alu 21 pre:  srcp.rgb = 1.0-2.0*r00.rgb
  27 alu 21 rgb:             r01.rgb = mad(neg(srcp.rgb), c13.ggg, 0.0)/2
         alpha:                       mad(0.0, 0.0, 0.0)  
  28 alu 22 rgb:             r12.rgb = mad(r12.rgb, r00.aaa, r01.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  29 alu 23 pre:  srcp.rgb = 1.0-c00.rgb
  29 alu 23 rgb:             r02.rgb = mad(r10.rgb, srcp.bbb, r02.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  30 alu 24 pre:  srcp.rgb = r08.rgb-r06.rgb
  30 alu 24 rgb:             r00.r-- = mad(srcp.r0.00.0, c15.g0.00.0, 0.0)
         alpha:             r00.a   = mad(srcp.r, 1.0, c15.g)  
  31 alu 25 rgb:             r10.rgb = mad(r16.rgb, r16.rgb, 0.0)
         alpha:             r00.a   = rcp(r00.a)  
  32 alu 26 rgb:             r15.rgb = mad(neg(r09.rgb), 1.0, 0.0)
         alpha:             r00.a   = mad(r00.r, r00.a, 0.0)  
  33 alu 27 rgb:             r03.rgb = mad(0.0, 1.0, 0.0)
         alpha:             r00.a   = mad(r02.a, r00.a, 1.0)  
  34 alu 28 rgb:             r06.r-- = mad(r16.arr, r03.arr, 0.0)
         alpha:             r00.a   = rcp(r00.a)  
  35 alu 29 rgb:             r07.r-- = mad(r16.arr, r16.arr, 0.0)
         alpha:             r00.a   = mad(r00.a, c15.b, 0.0)  
  36 alu 30 pre:  srcp.rgb = c07.rgb+c11.rgb
  36 alu 30 rgb:   alu_result_r(<) = mad(nab(srcp.r0.00.0), 1.0, 0.0)
         alpha:             r00.a   = ln2(r00.a)  
  37 alu 31 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = mad(r00.a, c16.r, 0.0)  
  38 alu 32 rgb:             r16.r-- = sop()
         alpha:                       ex2(r00.a)  
  39 cf  00    :  IF c    (0x0f 0 JUMP      NONE INCR INCR 0 0 56) ign_unc( alu(36): red lt )
  40 alu 33 rgb:             r00.r-- = dp3(r04.rgb, r04.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  41 alu 34 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = rsq(r00.r)  
  42 alu 35 rgb:             r04.rgb = mad(r04.rgb, r00.aaa, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  43 alu 36 rgb:             r00.r-- = dp3(r13.rgb, r04.rgb)
         alpha:             r00.a   = mad(r14.b, r04.b, 0.0)  
  44 alu 37 rgb:             r00.-g- = mad(r00.0.0r0.0, c16.0.0g0.0, c16.0.0b0.0)
         alpha:             r01.a   = mad(neg(r00.r), r00.r, 1.0)  
  45 alu 38 rgb:             r04.r-- = clamped d2a(r14.rg0.0, r04.rg0.0, r00.rra)
         alpha:             r00.a   = mad(r01.a, r01.a, 0.0)  
  46 alu 39 rgb:             r04.rgb = mad(r04.rrr, c02.rgb, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  47 alu 40 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.r), r00.a, r00.g)  
  48 alu 41 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.a), 1.0, 1.0)  
  49 alu 42 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  50 alu 43 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  51 alu 44 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.a), r00.a, 1.0)  
  52 alu 45 rgb:             r00.rgb = mad(r00.aaa, r04.aaa, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  53 alu 46 rgb:             r08.rgb = mad(r04.rgb, r00.rgb, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  54 alu 47 rgb:             r03.rgb = mad(c02.rgb, r00.rgb, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  55 cf  01    :  ELSE    (0x00 0 JUMP      NONE NONE DECR 1 1 58)
  56 alu 48 rgb:             r08.rgb = mad(r03.rgb, 1.0, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  57 cf  02    :  ENDIF   (0x00 1 JUMP      NONE DECR NONE 1 0 0)
  58 alu 49 pre:  srcp.rgb = c08.rgb+c11.rgb
  58 alu 49 rgb:   alu_result_r(<) = mad(nab(srcp.r0.00.0), 1.0, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  59 cf  03    :  IF c    (0x0f 0 JUMP      NONE INCR NONE 0 0 76) ign_unc( alu(58): red lt )
  60 alu 50 rgb:             r00.r-- = dp3(r05.rgb, r05.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  61 alu 51 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = rsq(r00.r)  
  62 alu 52 rgb:             r05.rgb = mad(r05.rgb, r00.aaa, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  63 alu 53 rgb:             r00.r-- = dp3(r13.rgb, r05.rgb)
         alpha:             r00.a   = mad(r14.b, r05.b, 0.0)  
  64 alu 54 rgb:             r00.--b = mad(r00.0.00.0r, c16.0.00.0g, c16.0.00.0b)
         alpha:             r01.a   = mad(neg(r00.r), r00.r, 1.0)  
  65 alu 55 rgb:             r05.r-- = clamped d2a(r14.rg0.0, r05.rg0.0, r00.rra)
         alpha:             r00.a   = mad(r01.a, r01.a, 0.0)  
  66 alu 56 rgb:             r05.rgb = mad(r05.rrr, c05.rgb, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  67 alu 57 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.r), r00.a, r00.b)  
  68 alu 58 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.a), 1.0, 1.0)  
  69 alu 59 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  70 alu 60 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = mad(r00.a, r00.a, 0.0)  
  71 alu 61 rgb:                       mad(0.0, 0.0, 0.0)
         alpha:             r00.a   = clamped mad(neg(r00.a), r00.a, 1.0)  
  72 alu 62 rgb:             r00.rgb = mad(r00.aaa, r05.aaa, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  73 alu 63 rgb:             r08.rgb = mad(r05.rgb, r00.rgb, r08.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  74 alu 64 rgb:             r03.rgb = mad(c05.rgb, r00.rgb, r03.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  75 cf  04    :  ENDIF   (0x00 1 JUMP      NONE DECR NONE 1 0 0)
  76 alu 65 rgb:             r00.rgb = mad(neg(r09.rgb), 1.0, 0.0)
         alpha:             r14.a   = mad(c16.a, 1.0, 0.0)  
  77 alu 66 pre:  srcp.a   = 1.0-r12.a
  77 alu 66 rgb:             r01.rgb = mad(r02.rgb, 1.0, neg(r01.rgb))
         alpha:             r15.a   = mad(srcp.a, (+7.0000000E+00).a, 0.5)  
  78 alu 67 rgb:             r02.rgb = mad(r12.rgb, r07.rrr, 0.0)
         alpha:             r00.a   = clamped mad(r11.r, 1.0, 0.0)/2 
  79 alu 68 rgb:             r02.rgb = mad(r03.rgb, r02.rgb, 0.0)
         alpha:             r01.a   = mad(r11.b, r11.a, 0.0)  
  80 tex 06    :  r03.rgba = lookup(r00.rgbr, tex00) ign_unc
  81 tex 07    :  r04.rgba = lookup_lod(r14.rgba, tex00) ign_unc
  82 tex 08    :  r05.rgba = lookup_lod(r15.rgba, tex00) sem_wait sem_grab ign_unc
  83 alu 69 pre:  srcp.a   = 1.0-r00.a
  83 alu 69 rgb:             r00.rgb = mad(r03.rgb, r03.aaa, 0.0) sem_wait
         alpha:             r00.a   = mad(srcp.a, srcp.a, 0.0)  
  84 alu 70 rgb:             r03.rgb = mad(r04.rgb, r04.aaa, 0.0)
         alpha:             r00.a   = mad(r00.a, (+9.3750000E-01).a, (+6.2500000E-02).r)  
  85 alu 71 rgb:             r04.rgb = mad(r05.rgb, r05.aaa, 0.0)
         alpha:             r00.a   = max(r00.a, (+6.2500000E-02).a)  
  86 alu 72 rgb:             r00.rgb = mad(r00.rgb, r00.rgb, 0.0)
         alpha:             r00.a   = min(r00.a, 1.0)  
  87 alu 73 pre:  srcp.rgb = 1.0-c00.rgb
  87 alu 73 rgb:             r05.rgb = mad(r16.aaa, r03.rgb, 0.0)
         alpha:             r02.a   = mad(r11.b, srcp.b, 0.0)  
  88 alu 74 rgb:             r04.rgb = mad(r04.rgb, r04.rgb, 0.0)
         alpha:             r00.a   = mad(r00.a, r02.a, 0.0)  
  89 alu 75 rgb:             r06.-g- = mad(r07.0.0rr, r16.0.0rr, 0.0)
         alpha:             r02.a   = mad(r01.a, r00.a, 0.0)  
  90 alu 76 pre:  srcp.a   = 1.0-r00.a
  90 alu 76 rgb:             r07.rgb = mad(r06.rrr, srcp.aaa, 0.0)
         alpha:             r00.a   = mad(neg(c00.b), 1.0, (+2.0000000E+00).a)/2 
  91 alu 77 rgb:             r08.rgb = mad(r08.rgb, r07.rgb, 0.0)
         alpha:             r03.a   = mad(r10.a, c00.r, 0.0)  
  92 alu 78 rgb:             r07.rgb = mad(r08.rgb, r07.rgb, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  93 alu 79 rgb:             r00.rgb = mad(r02.aaa, r00.rgb, 0.0)
         alpha:                       mad(0.0, 0.0, 0.0)  
  94 alu 80 rgb:             r03.rgb = mad(r03.rgb, r05.rgb, r07.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  95 alu 81 pre:  srcp.a   = 1.0-r01.a
  95 alu 81 rgb:             r00.rgb = mad(r04.rgb, srcp.aaa, r00.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  96 alu 82 rgb:             r00.rgb = mad(r01.rgb, r03.rgb, r00.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  97 alu 83 rgb:             r00.rgb = mad(r06.ggg, r02.rgb, r00.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  98 alu 84 rgb:             r03.rgb = mad(r10.rgb, r00.aaa, r00.rgb)
         alpha:                       mad(0.0, 0.0, 0.0)  
  99 alu 85 rgb:  out3.rgb =           mad(0.0, 1.0, 0.0)
         alpha:  out0.a   =           mad(r10.a, c00.g, 0.0)  
  100 alu 86 rgb:  out0.rgb =           mad(r03.aaa, r03.rgb, 0.0)
         alpha:  out3.a   =           mad(1.0, 1.0, 0.0)  
  101 alu 87 rgb:  out2.rgb =           mad(0.0, 1.0, 0.0)
         alpha:  out2.a   =           mad(1.0, 1.0, 0.0)  
  102 alu 88 rgb:  out1.rgb =           mad(0.0, 1.0, 0.0)
         alpha:  out1.a   =           mad(1.0, 1.0, 0.0)  last


======== End r520 neutral format pixel shader =============
This is for a Radeon X1900 (R580) GPU. I'm not familiar with ATI GPUs, and there are many to choose from, but it serves as an example. I've no idea why it compiles to 141 instructions on a G70 but only 102 on a Radeon.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I didn't get done a fraction of what I hoped to this weekend. I was hoping to have the fire/glass shader done. Instead, I kept finding errors and shortcomings in the opaque shader. Thing is, I was trying to come up with the glass shader by modifying a copy of the opaque one.
Well, what I did get done, however, was worth it.
My previous penumbra function was horrible.
The self-shadow function was numerically off.
The lightingLight function was wrong, and wasn't fully optimized.

Let's begin with the penumbra: in my previous function I was putting together a weird wave function by adding harmonics. Nothing intrinsically wrong with that, but the particular wavelet I was subtracting happened to flatten the illumination wave at the penumbra point. That was the exact opposite of what I needed. What I needed was a maximal derivative at the penumbra point, and then a curve that would end up at zero, with a derivative of zero, at about half a degree towards the dark side of the penumbra point.

Well, lo and behold....

[Image: plot of diffuse illumination versus angle (radians) around the penumbra point, showing fn1, fn2 and fn3]

The X axis is angle in radians, so where it says -0.015 that's about -1 degree. The Y axis is diffuse illumination, and it goes up to 1, so here we are in a zone of almost total darkness.
My new algorithm begins by biasing the normal. Instead of normal dot light,
I do (normal + 0.02*light) dot light. For angles around a cylinder, this is equivalent to a biased N dot L of cos(angle)+0.02.
I was interested in the penumbra, so I made 90 degrees my zero, so my formulas use sin(angle); same difference.
But why 0.02 and not 0.01?
Well, my intent was to produce penumbra softnesses like the ones we are familiar with, given the apparent size of the Sun.
Now, the Sun has an apparent diameter of 1 degree, plus or minus.
So I should be shooting for 0.5 degrees, right?
Nope. The reason is that we're somewhat overdoing the self-shadowing, by limiting NdotL by vNdotL and then again
multiplying by the self_shadow_step() function...
So, to compensate for that, I went with a 1-degree radius.
Besides, in VS, most of the time, stars look bigger than the Sun from Earth; not sure why.

Anyways, the red line is fn1(x)=sin(x)+0.02
The deep blue line is fn2(x)=2500*fn1(x)^3
The third line is fn3(x)=fn1(x)*fn2(x)/(fn1(x)+fn2(x)), which is my final formula.
Note that at the penumbra point the illumination is half of what it would be with a point source with bias (fn1(x)), which is exactly right, as only half of the sun or star is above the horizon. Well, the derivative at this point is not quite maximal, but pretty close :D
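
For the record, here are those three curves written out as GLSL-style helpers, just as a sanity check of the half-illumination claim; these are the Graph functions, not part of the shader:

Code: Select all

float fn1( in float x ) { return sin( x ) + 0.02; }                      //biased point-source term
float fn2( in float x ) { float f = fn1( x ); return 2500.0 * f*f*f; }   //steep cubic term
float fn3( in float x ) { return fn1( x )*fn2( x ) / ( fn1( x )+fn2( x ) ); }
//At the penumbra point (x = 0): fn1 = 0.02 and fn2 = 2500 * 0.02^3 = 0.02,
//so fn3 = 0.02*0.02 / 0.04 = 0.01 -- exactly half of fn1(0), as claimed.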



New code:

Code: Select all

//NEW SHADER (high end)
uniform int light_enabled[gl_MaxLights];
uniform int max_light_enabled;
//samplers
uniform samplerCube cubeMap;
uniform sampler2D diffMap;   //1-bit alpha in alpha, for alpha-testing only
uniform sampler2D specMap;   //log256(shininess) in alpha
uniform sampler2D glowMap;   //ambient occlusion in alpha
uniform sampler2D normMap;   //U in .rgb; V in alpha (special encoding; see norm_decode())
uniform sampler2D damgMap;   //"dielectricness" in blue, specular blend in alpha
uniform sampler2D detailMap; //.rgb adds to diffuse, subtracts from spec; alpha mods shininess
//other uniforms
uniform vec4 cloakdmg; //.rg=cloak, .ba=damage
//envColor won't be needed, since we're fetching it from the envmap

//NOTE: Since the term "binormal" has been rightly deprecated, I use "cotangent" instead :)

vec3 lerp( in float f, in vec3 a, in vec3 b)
{
    return (1.0-f)*a + f*b;
}
vec3 fastnormalize( in vec3 v ) //less accurate than normalize() but should use less instructions
{
    float tmp = dot( v, v );
    tmp = 1.5 - (0.5*tmp);
    return tmp * v;
}
vec3 norm_decode( in vec4 nmap_in )
{
    //The LaGrande normalmap noodle does away with the z-term for the normal by encoding U and V
    //as 0.5*tan( angle ), where angle is arcsin( U ) or arcsin( V ), respectively. To fit that
    //into a 0-1 range, we multiply by 0.5 once again, and add 0.5.
    //To reverse the encoding, we first subtract 0.5, then multiply by four, fill the z term with
    //1.0, and normalize. But multiplying by four is not needed if instead we fill the z term with
    //0.25, instead; *then* normalize:
    vec3 result;
    result.x = 0.3333*(nmap_in.r+nmap_in.g+nmap_in.b) - 0.5;
    result.y = nmap_in.a - 0.5;
    result.z = 0.25;
    return normalize( result ); //can't use fastnormalize() here
}
vec3 imatmul( in vec3 tan, in vec3 cotan, in vec3 norm, in vec3 light )
{
    return light.xxx*tan + light.yyy*cotan + light.zzz*norm;
}
float shininess2Lod( in float alphashininess ) 
{ 
    //return clamp( 7.0 - log2( shininess + 1.0 ), 0.0, 7.0 );
    return 0.5 + 7.0 * ( 1.0 - alphashininess );
}
float alpha2shininess( in float alpha )
{
    return pow( 255.0, alpha ); //means that alpha is log255( shininess )
}
float limited_shininess( in float shine )
{
    float limit = 50.0; //50^2 is 2500. 2500*0.001 = 2.5 --enough risk of saturation!
    return (shine*limit)/(shine+limit);
}
float specularNormalizeFactor( in float limited_shininess )
{
    return pow(1.7/(1.0+limited_shininess/10.0),-1.7);
}
vec3 ambientMapping( in vec3 normal )
{
    vec4 result = textureCubeLod( cubeMap, normal, 7.7 );
    return result.rgb * result.a;
}
vec3 envMapping( in vec3 reflection )
{
    vec4 result = textureCube( cubeMap, reflection );
    return result.rgb * result.a;
}
vec3 envMappingLOD( in vec3 reflection, in float LoD )
{
    vec4 result = textureCubeLod( cubeMap, reflection, LoD );
    return result.rgb * result.a;
}
void soft_penumbra_NdotL( in vec3 normal, in vec3 vnormal, in vec3 light, out float NdotL, out float vNdotL )
{
	float s2 = 0.975 * dot( vnormal+0.02*light, light );
	float s1 = 0.975 * dot( normal+0.02*light, light );
	float s4 = s2*s2*s2*2500.0;
	float s3 = s1*s1*s1*2500.0;
	vNdotL = clamp( 2.0*s2*s4/(s2+s4), 0.0, 1.0 );
	NdotL  = clamp( s1*s3/(s1+s3), 0.0, vNdotL );
}
float self_shadow_step( in float vNdotL )
{
    return 1.0 - pow( 1.0 - vNdotL, 68.0 );
    //Note: vNdotL has a value of 0.01 at the penumbra point. 1-0.01=0.99
    //0.99^68=0.5 ... So the number looks magical but isn't.
}
void lightingLight
   (
   in vec3 light, in vec3 normal, in vec3 vnormal, in vec3 reflection,
   in vec3 lightDiffuse, in float lightAtt, in float ltd_gloss,
   inout vec3 diff_acc, inout vec3 spec_acc
   )
{
    float NdotL, vNdotL, RdotL, selfshadow, spec;
    vec3 incident_light; //light color * attenuation * self-shadowing
    soft_penumbra_NdotL( normal, vnormal, light, NdotL, vNdotL );
    selfshadow = self_shadow_step( vNdotL );
    RdotL = clamp( dot( reflection, light ), 0.0, vNdotL+vNdotL );
    incident_light = lightDiffuse.rgb * lightAtt * selfshadow;
    spec = pow( RdotL, ltd_gloss );
    diff_acc += ( NdotL * incident_light );
    spec_acc += ( spec * incident_light );
}

#define lighting(name, lightno_gl, lightno_tex) \
void name( \
   in vec3 normal, in vec3 vnormal, in  vec3 reflection, \
   in float limited_gloss, \
   inout vec3 diff_acc, inout vec3 spec_acc) \
{ \
    lightingLight( \
      normalize(gl_TexCoord[lightno_tex].xyz), \
      normal, vnormal, reflection, \
      gl_FrontLightProduct[lightno_gl].diffuse.rgb, \
      gl_TexCoord[lightno_tex].w, \
      limited_gloss, \
      diff_acc, spec_acc); \
}

lighting(lite0, 0, 5)
lighting(lite1, 1, 6)

void main()
{
    ///VARIABLE DECLARATIONS
    //vector variables
    vec4 temp4; //all-purpose vec4 temporary
    vec3 eye_vec3;
    vec3 vnormal_vec3;
    vec3 normal_vec3;
    vec3 tangent_vec3;
    vec3 cotangent_vec3; //"binormal" ;-)
    vec3 reflect_vec3;
    vec3 light0_vec3;
    vec3 light1_vec3;
    //material color variables
    vec3 diff_mat3;
    vec3 damg_mat3;
    vec3 spec_mat3;
    vec3 glow_mat3;
    vec3 darkenin3; //color to limit specularity as a function of damage
    //inferent light variables
    vec3 light0_il3;
    vec3 light1_il3;
    vec3 ambient_il3; //to be fetched from envmap via normal
    vec3 specular_il3; //to be fetched from envmap via reflect vector
    vec3 spec_light_acc3; //specular light accumulator
    vec3 diff_light_acc3; //diffuse light accumulator
    vec3 env_il3;
    vec3 envLoD_il3;
    //afferent light variables
    vec3 diff_contrib_al3;
    vec3 spec_contrib_al3;
    vec3 envm_contrib_al3;
    vec3 frsn_contrib_al3;
    vec3 glow_contrib_al3;
    vec3 amb_contrib_al3;
    //accumulator:
    vec4 result4;
    //scalar factors and coefficients
    float ao_glow_fac1; //plain ambient occlusion factor, used for ambient contribution
    float ao_spec_fac1; //squared ambient occlusion, used for specular modulation
    float ao_diff_fac1; //square root of ambient occlusion, used for diffuse
    float is_dielectric_mat1; //0.0 = metal; 1.0 = dielectric
    float gloss_mat1; //A.K.A. "shininess"
    float limited_gloss_fac1; //smooth-limited gloss to use for spotlights
    float gloss_LoD1;
    float spec_gloss_adj;
    float fresnel_alpha;
    float fresnel_beta;
    float spec_gloss_blend_mat1; //controls blend of non-LOD and LOD env mapping for spec
    float frsn_gloss_blend_mat1; //controls blend of non-LOD and LOD env mapping for fresnel
    //interpolated mesh data fetches
    vec2 texcoords2 = gl_TexCoord[0].xy;
    vnormal_vec3 = fastnormalize( gl_TexCoord[1].xyz );
    tangent_vec3 = gl_TexCoord[2].xyz;
    cotangent_vec3 = gl_TexCoord[3].xyz;
    ///MOSTLY TEXTURE FETCHES (start with spec, as we'll need shininess at the earliest time):
    //read specular into temp4, then spec_mat gets .rgb, and gloss_mat gets 255^(.a)
    temp4 = texture2D(specMap,texcoords2);
    spec_mat3 = temp4.rgb;
    gloss_LoD1 = shininess2Lod( temp4.a );
    gloss_mat1 = alpha2shininess( temp4.a );
    //read normalmap into temp4, then .rgb goes to U, .a goes to V (tangent space just for now)
    temp4 = texture2D(normMap,texcoords2).rgba;
    normal_vec3 = norm_decode( temp4 );
    //read glow texture into temp4, then .rgb^2 goes to glow_mat, and .a goes to ao_glo_fac
    temp4 = texture2D(glowMap,texcoords2).rgba;
    glow_mat3 = temp4.rgb * temp4.rgb; ao_glow_fac1 = temp4.a; 
    //read diffuse into temp4, then rgb goes to diff_mat, a goes to alpha
    temp4 = texture2D(diffMap,texcoords2).rgba;
    diff_mat3 = temp4.rgb; result4.a = temp4.a;
    //read damage texture into temp4, then .rg goes to damg_mat3, .b goes to is_dielectric and
    //.a goes to LOD/non-LOD gloss blend
    temp4 = texture2D(damgMap,texcoords2);
    damg_mat3 = temp4.rgb;
    is_dielectric_mat1 = temp4.b;
    damg_mat3.b = 2.0 * temp4.g - temp4.r;
    spec_gloss_blend_mat1 = temp4.a * ( 1.0 - is_dielectric_mat1 );
    frsn_gloss_blend_mat1 = temp4.a * is_dielectric_mat1;
    //blend damage back into the diffuse
    diff_mat3 = lerp( cloakdmg.b, diff_mat3, damg_mat3 );
    //we need a darkening color to limit specularity as a function of damage, also
    temp4 = vec4( 1.0 );
    darkenin3 = vec3( 0.333 * dot( damg_mat3, damg_mat3 ) );
    darkenin3 = lerp( cloakdmg.b, temp4.rgb, darkenin3 );
    //and darken specular material by it
    spec_mat3 *= darkenin3;
    //read detail texture into temp4; then .rgb modulates diffuse/spec, .a modulates shininess
    temp4 = texture2D(detailMap,16.0*texcoords2);
    temp4 -= vec4( 0.5 ); temp4 *= 0.12345;
    diff_mat3 -= temp4.rgb; spec_mat3 += temp4.rgb; gloss_mat1 -= temp4.a;
    ///OTHER PRE-PER-LIGHT COMPUTATIONS
    //normalmapping-derived vector computations (normal (using tangent))
    normal_vec3 = fastnormalize(imatmul(tangent_vec3,cotangent_vec3,vnormal_vec3,normal_vec3));
    //reflection vector computation
    reflect_vec3 = -reflect( eye_vec3, normal_vec3 );
    //compute smooth shininess limit for spotlights
    limited_gloss_fac1 = limited_shininess( gloss_mat1 );
    //initialize accumulators
    diff_light_acc3 = spec_light_acc3 = vec3( 0.0 );
    //and might as well compute the shininess adjusted specularity
    spec_gloss_adj = specularNormalizeFactor( limited_gloss_fac1 );
    //and might as well compute other gammas of ambient occlusion
    ao_diff_fac1 = sqrt( ao_glow_fac1 );
    ao_spec_fac1 = ao_glow_fac1 * ao_glow_fac1;
    ///PER-LIGHT COMPUTATIONS
    if( light_enabled[0] != 0 )
     lite0(normal_vec3,vnormal_vec3,reflect_vec3,limited_gloss_fac1,diff_light_acc3,spec_light_acc3);
    if( light_enabled[1] != 0 )
     lite1(normal_vec3,vnormal_vec3,reflect_vec3,limited_gloss_fac1,diff_light_acc3,spec_light_acc3);
    //we will process the accumulators later, to give the above loops time to finish
    //AMBIENT CONTRIBUTION
    //assume the environment cube map is encoded with gamma = 0.5 (but keep it in the family ;-)
    amb_contrib_al3 = ambientMapping( normal_vec3 );
    env_il3 = envMapping( reflect_vec3 ); //fresnel env mapping, while we're at it...
    amb_contrib_al3 *= ( amb_contrib_al3 * ao_glow_fac1 * diff_mat3 );
    env_il3 *= env_il3;
    //now we can multiply material albedos by the flavors of ambient occlusion
    spec_mat3 *= ao_spec_fac1;
    diff_mat3 *= ao_diff_fac1;
    ///FRESNEL STUFF begins
    //Fresnel evaluates to a coefficient that will be used to blend white specular specularity with
    //the diffuse AND specular contributions, in the case of dielectrics. For non-dielectrics, fresnel
    //will be zero. Shininess for fresnel specularity is always maxed out. Specularity and shininess
    //specified through textures, with a dielectric material, will constitute a "third layer" for the
    //material, and will allow representation of metallized paints.
    fresnel_alpha = 1.0 - clamp( dot( eye_vec3, normal_vec3 ), 0.0, 1.0 );
    fresnel_alpha *= fresnel_alpha;
    fresnel_alpha = clamp( 0.0625 + ( 0.9375 * fresnel_alpha ), 0.0625, 1.0 );
    fresnel_alpha *= ( is_dielectric_mat1 * (1.0-cloakdmg.b) ); //cloakdmg.b is the damage
    fresnel_beta = 1.0 - fresnel_alpha; // ;-)
    ///ENVIRONMENT MAPPING
    //shininess to env map LOD
    //read env LOD (reflect); .rgb^2 goes to specular_il3 (assume gamma=0.5)
    //assume the environment cube map is encoded with gamma = 0.5 (but keep it in the family ;-)
    envLoD_il3 = envMappingLOD( reflect_vec3, gloss_LoD1 );
    envLoD_il3 *= envLoD_il3;
    //FRESNEL STUFF continues; now we apply it:
    //essentially, total specular contribution is
    //specular_material * (1-fresnel) * LODenv + fresnel * env (fresnel shininess always maxed out)
    frsn_contrib_al3 = lerp( frsn_gloss_blend_mat1, envLoD_il3, fresnel_alpha * env_il3 );
    diff_mat3 *= fresnel_beta;
    envm_contrib_al3 = fresnel_beta * spec_mat3 * lerp( spec_gloss_blend_mat1, envLoD_il3, env_il3 );
    //diffuse contribution also gets multiplied by 1-fresnel
    diff_contrib_al3 = diff_light_acc3 * diff_mat3 * ao_diff_fac1 * fresnel_beta;
    //specular contribution is a bit of a hard question. Theoretically it should be multiplied by
    //1-fresnel, but then we should add fresnel reflection of lights to the lighting loop, which
    //would be expensive. Furthermore, specular spotlights are already gloss-limited to account for
    //non-point-light sources; and this would apply to fresnel reflectivity. In summary, forget it.
    //What the spec contribution needs to be multiplied by is the specular gloss adjustment; AND
    //faded down by damage
    spec_contrib_al3 = spec_light_acc3 * spec_mat3 * ao_spec_fac1 * spec_gloss_adj;
    //GLOW (we got it, already, in glow_mat3; well, not quite; we want to darken it by damage
    glow_mat3 *= ( 0.5 * (2.0-cloakdmg.b) );
    //process accumulations
    result4.rgb = amb_contrib_al3 + diff_contrib_al3 + frsn_contrib_al3 + spec_contrib_al3 + glow_mat3;
    //ALPHA and CLOAK
    result4.rgb *= result4.a;
    result4 *= cloakdmg.rrrg;
    //WRITE
    gl_FragColor = result4;
}
Haven't tried to compile yet; there's probably errors; just for the record. Falling asleep...
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Sorry I'm still spending so much time on the first shader; but I figure, once art starts getting committed and
material libraries start getting built, it will be too late if something's not exactly right; so I want the shininess
representation to be the best it can possibly be, and everything else to be just perfect too.

So here's an update; couple of functions:

Code: Select all

float alpha2shininess( in float alpha )
{
    //The formula used to compute shininess from alpha is just an ad-hoc formula
    //that produces *useful* linearites across the alpha range; --with gradual change
    //at the bottom of the curve, but rising fast at the top. Almost linear with the
    //with log of radius of specular light-spots, but not quite...
    float temp = (1.0625+alpha) / (1.0625-alpha);
    return temp * temp * temp;
    //tests:
    //  Alpha  Shininess  Angular radius of specular highlights
    //   0/256     1.000  67.08
    //   1/256     1.022  66.35 1.10% angular decrement
    //  32/256     2.032  47.06
    //  33/256     2.078  46.53 1.14%
    //  64/256     4.215  32.67
    //  65/256     4.315  32.29 1.18%
    //  96/256     9.141  22.19
    //  97/256     9.375  21.91 1.28%
    // 128/256    21.433  14.49
    // 129/256    22.051  14.29 1.40%
    // 160/256    57.385   8.86
    // 161/256    59.360   8.71 1.72%
    // 192/256   195.112   4.80
    // 193/256   203.928   4.70 2.13%
    // 224/256  1103.370   2.02
    // 225/256  1182.430   1.95 3.59%
    // 254/256 24953.974   0.42
    // 255/256 29791.000   0.39 7.69%
}
It might not be obvious at first sight, but this way of mapping alpha values to shininess is infinitely superior to
a linear mapping. First, let's look at the alpha-to-shininess graph:

[Image: graph of alpha versus shininess, with a logarithmic Y axis]

Notice the Y axis (shininess) is logarithmic. About 2/3 of the alpha range spans shininesses from 1 to 100 in
an exponential way. Then shininess starts rising faster than exponentially. This is important because it allows
us to reach close to 30,000, which in turn allows us to make full use of the environment map mipmaps without
cheating (or without cheating too much, anyways).
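
As an aside, whoever ends up baking spec-map alpha values from target shininesses will want the inverse mapping. Here's a minimal sketch; the helper name is mine, it's not part of the shader, just the formula above solved for alpha:

Code: Select all

//Hypothetical inverse of alpha2shininess(), for texture-baking tools (never needed at runtime).
float shininess2alpha( in float shininess )
{
    float c = pow( shininess, 1.0/3.0 );      //cube root undoes the cube
    return 1.0625 * (c - 1.0) / (c + 1.0);    //solves (1.0625+a)/(1.0625-a) = c for a
}
//e.g. shininess = 21.433 gives alpha = 0.5 (128/256), matching the table.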

But you'll get a better idea of what's going on by looking at the graph of alpha versus specular-spotlight angular radius:

[Image: graph of alpha versus angular radius of specular highlights]

By "angle" here I mean, assuming a point light source, and you're looking at a reflection of it on a surface
of some shininess, you see a bright spot. The brightest point in that spot is where the reflected view vector
hits the point light. The angle is the visual angle from that point by which the brightness falls to 1/2 of the
brightness of the brightest spot.
This angle gets smaller as the shininess (and therefore as the specular texture's alpha channel value)
increases. The curve resembles a negative exponential curve (natural decay), except towards the right
end... As the angle gets pretty small, trying to continue our exponential decay would stop the curve at a
pretty big angle. To get a really small angle (very high shininess) would cause the rest of the useful range
to lose resolution. But letting angle fall linearly would become too coarse. This formula is an almost perfect
middle path compromise between those two problems.

And the nicest part of it is that the formula doesn't involve expensive functions like trigonometry or
exponentials; just one addition, one subtraction, one division, and a couple of multiplications for the cube.
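
And in case anyone wants to reproduce the angular-radius column of the table above: it's just the half-brightness angle of a Phong lobe cos(t)^shininess. A throwaway sketch (an assumed helper, never needed at runtime, since this one DOES use trig and exponentials):

Code: Select all

//Half-brightness angular radius of a Phong lobe cos(t)^shininess (illustration only).
float halfBrightnessRadiusDeg( in float shininess )
{
    return degrees( acos( pow( 0.5, 1.0/shininess ) ) );
}
//e.g. shininess = 21.433 (alpha 128/256) gives about 14.5 degrees, matching the table.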

Then I started working on the shininess-to-LOD function, which traditionally involves an expensive
log function, and often cheats a lot in order to use the full mipmap range. Thanks to the previous function,
which allows pretty high shininesses, the mipmap range is pretty much all used up; and then I replaced the
log function with a square and a square-root, which also gave me a small and smooth amount of cheating built-in.

Code: Select all

float alpha2LOD( in float alpha )
{
    //The following is an approximation of the true formula. It avoids
    //using a logarithm, plus it makes better use of env-map mipmaps.
    //The true formula would be 8+log2( tan( spotlight radial angle ) )
    //The approximation is 9*sqrt(1.0-alpha)+(0.5-alpha)^2-0.25
    float tmp1 = 1.0-alpha;
    float tmp2 = 0.5-alpha;
    tmp1 = sqrt(tmp1);
    tmp2 *= tmp2;
    return 9.0*tmp1 + tmp2 - 0.25;
}
The black line is the theoretical LOD; the red line is the approximation:

[Image: graph of the theoretical LOD curve (black line) and the approximation (red line)]

Finally, as shininess goes down and specular light-spots grow bigger, the same amount of light is more spread out,
so the absolute brightness of specular spotlights must decrease as shininess goes down.
Now, we were doing this already, but we used a strange formula with weird powers and magical numbers.
That was in error. I discovered there was a problem while playing with this nice little open source program called Graph.
The pretty complex formula for shininess as a function of specular spotlight angular radius became
almost a perfect straight line when I changed the graph representation from linear to logarithmic. Visually
I then deduced that the curve closely approximated,

shininess ~= 4500 / (angular radius (degrees))^2

So, shininess is proportional to the inverse of the square of the angle.
But notice that the solid angle of the light-spot is (for small angles) proportional to the angle squared; and the concentration of light
in a specular lightspot is inversely proportional to that solid angle.
Which implies that lightspot absolute brightness is simply linearly proportional to shininess!
The next question was: what's the absolute multiplication constant?
Now, the Phong formula is:

brightness = (reflection dot light)^(shininess) * ... (light, material...)

and the regular diffuse formula is

brightness = (normal dot light) * ... (light, material...)

The dot product of two normalized vectors is the cosine of the angle between them; so when shininess is 1.0, the spreads of the
two formulas are equivalent (only shifted in where the illumination lands).
So the "gain" of specular reflections should be equal to that of diffuse, at a shininess of 1.0.
IOW, there is NO magical constant involved.
Spotlight brightness is simply,

brightness = (reflection dot light)^shininess * shininess * ... (light, material...)
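
In shader terms, the whole derivation boils down to something like the sketch below. This is the proposed normalization, not the committed code; specularNormalizeFactor() in the listing above still uses the older fitted formula:

Code: Select all

//Proposed normalization: the lobe's solid angle shrinks roughly as 1/shininess,
//so multiplying by shininess keeps the total reflected light roughly constant.
float normalizedPhongSpec( in float RdotL, in float shininess )
{
    return pow( RdotL, shininess ) * shininess;
}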

Now, before someone points this out, this IS going to result in very frequent saturations to white.
Try multiplying any amount of specularity by several hundred...
But I say heck, so be it! Let the specular highlights saturate. What's wrong with that? Light saturation
is something we've evolved to get used to: first with the natural saturation of the receptors in our
retinas, and then, since childhood, by accepting saturation in photography and film.
So it's not as if accepting saturation is going to make our graphics any less photo-realistic, is it?
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Post by safemode »

That all depends on whether it really adversely affects gameplay.

Only way to tell is to use it and see.

Then we really have to ask ourselves: is the game being looked at through sensors and cameras, or are we viewing it through the eyes of a person behind cockpit glass, or from an imaginary third-person trailing view, or some such? I see a lot of possibilities with a sensor/camera-only view of space when flying around, especially if all those ideas could be done with shaders.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

From a point of view of graphics, it doesn't matter whether you're using cameras or your own eyes.
Your monitor is a lot more limited than either one.

From a point of view of game-immersion, you wouldn't even want to bring the question to the player's attention.
That's a party pooper to even think about. Of course we all want to believe we're looking at the world through
our own eyes; or we wouldn't be playing a 3D game.

From a point of view of modeling, you NEED to be looking at the world through actual windows.
Why?
Because windows are the ultimate visual reference for the sizes of ships. If you get rid of windows,
you get rid of the only hope of capturing a sense of scale --in a game that so far has never had one.
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Post by safemode »

What I was thinking was the possibility of viewing the universe through alternate sensors other than just visible light: IR mode, radiation-overlay mode, etc. Things that aren't possible if you're just looking through a glass cockpit window.

Another way cameras and sensors differ from your eyes is that they can be much better at viewing bright objects and dark objects, and at resolving objects at greater distances, than the human eye. You could also have the normal light lens react to brightness much like modern-day sunglasses, darkening in proportion to the intensity of the light; so if you look at the sun, the sun is automatically darkened to acceptable, non-saturated levels, but the surrounding view is not darkened by the same amount. This could also plausibly be done as post-processing of the image the sensors are reading.

So, the question of whether it's your eyes or cameras and sensors is quite important. With eyes we have an ideal, and a set of rules we'd have to try to follow to produce something realistic. With cameras and sensors, we can cater to gameplay while having an explanation for any visual effect, intended or not.

I think it's also a question the player asks already anyway: HUD images for one, realism issues for another. With sensors and cameras viewing everything and no direct eye use, we can have all sorts of in-cockpit visual modes, HUD helpers and effects, along with damage effects scrambling your in-cockpit view and whatnot. Then we can have different shaders for in-cockpit mode and external views, since the external views would be as if through human eyes.

It's not about the monitor's limitations, it's about simulating the differences between the two by exaggerating them so that the monitor is capable of differentiation.

hrm.. I just think it'd be neat for it to be readily apparent that in-cockpit view is not viewing space through your eyes, but rather through the ship's cameras and sensors.

The sense of scale is very confusing anyway when you're dealing with space. Reference points are hundreds of miles away; they're usually bigger than anything you'd see on Earth, and they're usually moving much faster than anything could on Earth. It all works together to screw with your brain. Having windows on a ship just for the sake of a reference is just as bad as anything else we fake for the sake of gameplay, only I don't think it has as much of an effect on gameplay as you give it credit for.

I would make the outside of the cockpit the mounting surface for the visual and non-visual sensors whose output is viewable from within the cockpit. That would explain away all current ships with "cockpits" as seen from outside the ship. All bigger ships would obviously have sensors positioned throughout the ship, on its faces.

If you want to provide scale to the game, then it should be done by giving the ship an appropriate texture. Big ships should be made up of tons of pieces of metal riveted and welded together. Smaller ships should be much more uni-body in makeup. Big ships would have lots of lights and maybe actual windows depending on the type. It's this detail of texture that would provide scale. And it's that kind of detail in the textures that I don't think we really provide at the moment. Maybe we can't; I don't know.

Anyway, it's just an idea. If we don't want any additional types of views that would go hand in hand with cameras and sensors providing all the in-cockpit viewing, then we should just state as a fact that the cockpits are transparent, and all space viewing, in and out of the cockpit, is done through glass with the pilot's eyes.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

safemode wrote:What I was thinking of was the possibility of viewing the universe through alternate sensors, other than just visible light: IR mode, radiation-mode overlay, etc. Things that aren't possible if you're just looking through a glass cockpit window.
I'm a big believer in "if you're gonna do it, do it right"; and what you're proposing can't possibly be done right, because we don't know what the "color" of stainless steel, or whatever material you pick, is in various IR or UV ranges. Heck, I spent days googling to find just *visual* spectral data for materials and found almost nothing; forget about IR and UV spectroscopy... And you'd have to know all that to simulate such views; you'd have to enter that information as additional sets of textures in which the RGB channels represent bands of reflectivity at IR and UV wavelengths, and simulate the methodology by which the beyond-visual-range spectroscopy of the materials is mapped to RGB values. Otherwise you'll end up with unrealistic, ad-hoc formulas producing IR-goggle-like images.
Well, you can do something like that with a special shader, easily, but it would be very ad-hoc. You'd have to say "metals are dark in IR but bright in UV; dielectrics are bright in IR but dark in UV", and make many other such unsupported, absurd assumptions, in order to come up with something rather than nothing.
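To make the point concrete, an ad-hoc IR view would end up being a fragment shader of roughly this shape, with every rule and weight pulled out of thin air (the metal_mask sampler and all the constants below are pure invention):

Code: Select all

// Purely illustrative "fake IR" pass; none of these weights have any
// physical basis, which is exactly the problem.
uniform sampler2D diffuse_map;
uniform sampler2D metal_mask;   // hypothetical per-texel "is metal" mask

void main()
{
    vec3  albedo = texture2D( diffuse_map, gl_TexCoord[0].xy ).rgb;
    float metal  = texture2D( metal_mask,  gl_TexCoord[0].xy ).r;
    // made-up rule: "metals are dark in IR, dielectrics are bright"
    float ir = mix( 0.8, 0.2, metal ) * dot( albedo, vec3( 0.33 ) );
    gl_FragColor = vec4( ir, ir * 0.6, 0.0, 1.0 );   // false-color ramp
}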
Other things that would differ between using your eyes and using cameras and sensors: cameras and sensors can have a much better ability to view bright objects and dark objects, and to resolve objects at greater distances, than the human eye.
False. At least at the present time, the human eye's sensitivity spans something like 9 decades of intensity, whereas the best CCD cameras might span 3. In any case, if you postulated cameras in the fourth millennium that span 11 decades of intensity, how do you represent their miraculous ability with computer monitors that only span 2 decades of light intensity? We've got enough trouble as it is representing what the eyes can see.
You could also have the visible-light lens react to brightness much like modern sunglasses do, darkening in proportion to the intensity of the light; so if you look at the sun, the sun will automatically be darkened to acceptable, non-saturated levels, while the surrounding view will not be darkened as much. This could also be done in post-processing of the image the sensors are reading.
This is indeed a worthy pursuit, but I've been struggling with this idea for a long time: The problem is that to effect auto-adjustment of brightness and gamma you need to take feedback from the final color information going to the screen; but feedback is expensive... Any reversal of direction of data transfer on the bus to the videocard is mightily expensive. Otherwise there'd be a number of interesting things we could be doing already.
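For what it's worth, the adjustment pass itself is trivial once an average-brightness value is somehow available to the shader; producing that value is the feedback step being talked about. A rough sketch (a Reinhard-style curve; scene, avg_lum and key are invented names, and the plumbing that fills avg_lum is omitted):

Code: Select all

// Exposure / tone-mapping pass. avg_lum is assumed to be a 1x1 texture
// holding the previous frame's average luminance, however it was obtained.
uniform sampler2D scene;
uniform sampler2D avg_lum;
uniform float     key;          // exposure "key", e.g. 0.18

void main()
{
    vec3  hdr = texture2D( scene, gl_TexCoord[0].xy ).rgb;
    float La  = texture2D( avg_lum, vec2( 0.5 ) ).r + 0.0001;
    vec3  scaled = hdr * ( key / La );                       // auto-exposure
    gl_FragColor = vec4( scaled / ( 1.0 + scaled ), 1.0 );   // Reinhard curve
}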
I think it's also a question the player asks already anyway: HUD images for one, realism issues for another. With sensors and cameras viewing everything and no direct eye use, we can have all sorts of in-cockpit visual modes, HUD helpers and effects, along with damage effects scrambling your in-cockpit view and whatnot. Then we can have different shaders for in-cockpit mode and external views, since the external views would be as if through human eyes.
What people ask for and what they really want are not necessarily the same thing --or even often. I'll give you some pretty unrelated but valid examples nonetheless: in WCU we had players asking for larger planets. Once the larger planets were there, they didn't like them. They asked to be able to fly all kinds of ships, and spiritplumber slaved at making all kinds of ships flyable; but players ALWAYS wanted to fly the very ships they couldn't, so nothing short of making ALL ships flyable would make them happy.
They wanted fleet commands, and spiritplumber gave them to them, and most never used them; not even to give spiritplumber feedback on them.
They wanted multiplayer, got it, never even tried it.
They asked to be able to mine asteroids, and spiritplumber again slaved at making asteroids mineable, and when it was done nobody cared to become a professional miner.
All they really wanted was to "try this or that ship", or to have the experience of mining asteroids, et cetera.
Most of the time players tend to demand features that they merely want to experience (once or twice), rather than features they want to live with all the time.
"One-night-stand" features, rather than "marriage" features.
So I'm very skeptical about "listening to your customers". Customers are mostly basket-case, flipping idiots. (Yes, especially in business.)
That's why at the PU forum we have a checklist/guideline for people to read before proposing a feature:
http://wcjunction.com/phpBB2/viewtopic.php?t=8
And even so a lot of them propose garbage features that would take months to implement, just because they want to experience them for 5 minutes.
It's not about the monitor's limitations, it's about simulating the differences between the two by exaggerating them so that the monitor is capable of differentiation.
Exactly. This is the holy grail; but again, it requires feedback, which is expensive.
hrm.. I just think it'd be neat for it to be readily apparent that in-cockpit view is not viewing space through your eyes, but rather through the ship's cameras and sensors.
Frankly, I'm dumbfounded. Maybe you and I are just diametrically different people; to me this would be the exact opposite of neat.
The sense of scale is very confusing anyway when you're dealing with space.
This topic was hotly debated on these boards before your time. It doesn't have to be that way. I remember someone once mentioning some space game, I can't remember which, and saying something like: "If you guys were right... but you're wrong, because in this old game, such-and-so, the ships were ultra-low-poly, the graphics were awful, and yet when you were flying next to a capital ship you certainly got the sense that it was huge."
The problem of sense of scale in Vegastrike, I still believe, comes from two snafus: The lack of windows and small, human-sized features; and the excessive accelerations that ships are capable of.
Reference points are hundreds of miles away; they're usually bigger than anything you'd see on Earth, and they're usually moving much faster than anything could on Earth.
I think we can easily make mental adjustments for high speeds, as long as it takes a long time to accelerate to such speeds. But if you can be stationary near the rear of a 5 kilometer ship, hit the gas, and shoot past the front of it in a few seconds, then there's no way you're going to believe the ship is 5 kilometers long; because accelerations of 10 or 20 G's are completely outside of our daily --or even once in a lifetime-- experience.
Hell, 20 G's is the level of vibrational acceleration that electronic devices for military applications have to withstand. 20 G's is so brutal an acceleration that most materials collapse under their own weight. You could never build a ship with external thrusters that could structurally withstand 20 G's of acceleration; not with any known or even theoretical materials. It's a complete absurdity.
It all works together to screw with your brain. Having windows on a ship just for the sake of a reference is just as bad as anything else we fake for the sake of gameplay, only I don't think it has as much of an effect on gameplay as you give it credit for.
With ridiculously high accelerations, I grant you that much. Indeed, putting windows in would just make Clydesdales look like 50-meter-long toys with tiny windows.
I would make the outside of the cockpit the mounting surface for the visual and non-visual sensors whose output is viewable from within the cockpit. That would explain away all current ships with "cockpits" as seen from outside the ship. All bigger ships would obviously have sensors positioned throughout the ship, on its faces.
I don't like it, but it's your mod, not mine.
If you want to provide scale to the game, then it should be done by giving the ship an appropriate texture.
Well, this is an artistic debate that has no place here. The idea that textures should be a means to convey small detail is a popular fallacy. With textures you're limited by texel size. If you have a texture for the Clydesdale, even if you manage to fit the length of the ship along the height of the texture, and even if you make the texture 4096x4096, your texel's real-world size will be larger than one meter. But to convey the size of a ship, texel size should be no greater than a square inch.
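To put rough numbers on that: 5000 meters spread across 4096 texels comes to about 1.2 meters per texel, while one square inch per texel would call for roughly 200,000 texels along the hull --close to fifty times the resolution of a 4096 texture, per side.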
Geometry doesn't have this limitation; you can make features as small as inch-scale in a 5 kilometer ship.
The most important purpose of the texture is to give models color and material, as well as wear. Features are like icing on the cake; you shouldn't rely 100% on the texture for "detail". The smallest detail needs geometry. Medium-sized but high-relief detail also needs geometry. And even if you confined all detail to being medium-sized and low-relief, you'd still need geometry to support the believability of the texture; otherwise you can see that the stuff is just "painted on".
Big ships should be made up of tons of pieces of metal riveted and welded together. Smaller ships should be much more uni-body in makeup.
ALL ships should look like they are built of parts. NO ship should EVER look uni-body. That's the first mistake by which you can tell the work of a newbie from that of a seasoned modeler. Large and small ships should definitely differ in the number of parts that make up the body; but the number should never get anywhere near as low as one part.
Big ships would have lots of lights and maybe actual windows depending on the type.
Yes!
It's this detail of texture that would provide scale. And it's that kind of detail in the textures that I don't think we really provide at the moment. Maybe we can't; I don't know.
Exactly; we can't. That's the problem. For a 5 kilometer ship, no texture size can be large enough to convey the sense of scale. You are forced to use geometry for the smallest details, as you need to go deep down (one or two orders of magnitude) into sub-pixel scale. Another possibility is using a lot of sub-units or sub-meshes with multiple instances. The new shaders will also sport "detail textures", which add noise at close range to hide pixelation (there's a rough sketch of the idea below). But, ultimately, the problem is that Vegastrike is biting off more than it can chew in matters of scale. 5 kilometer ships. 10 kilometer ships. 20 kilometer ships... The environment map is the limit... But how do you represent that?
Hey, movies can do that, no problem. You fly towards the Death Star, and as you get closer and closer, more and more detail becomes visible, until you fly into a tiny feature and inside it's the size of a city. Sure: if you have 64,000 processors working in parallel to render a movie frame by frame, you can do that; the sky is the limit. But how do you do that with a game model and real-time OpenGL?
What I would suggest is a return to sanity. Re-scale everything. Make the Clydesdale 500 meters long instead of 5 kilometers, et cetera. But this is not my mod.
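Since the detail textures came up above, here's roughly what that term looks like in a fragment shader: a small, tileable noise map repeated many times across the hull and faded out with distance, so it only fights pixelation up close (all the names and constants here are illustrative, not the actual CineMut code):

Code: Select all

uniform sampler2D base_map;
uniform sampler2D detail_map;     // tileable high-frequency detail
uniform float     detail_tiling;  // e.g. 64.0 repeats across the UV range
varying float     eye_dist;       // written by the vertex shader (not shown)

void main()
{
    vec2 uv     = gl_TexCoord[0].xy;
    vec3 base   = texture2D( base_map, uv ).rgb;
    vec3 detail = texture2D( detail_map, uv * detail_tiling ).rgb;
    // fade the detail out with distance so it vanishes at long range
    float fade  = clamp( 1.0 - eye_dist / 100.0, 0.0, 1.0 );
    gl_FragColor = vec4( base * mix( vec3( 1.0 ), detail * 2.0, fade ), 1.0 );
}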
Anyway, it's just an idea. If we don't want any additional types of views that would go hand in hand with cameras and sensors providing all the in-cockpit viewing, then we should just state as a fact that the cockpits are transparent, and all space viewing, in and out of the cockpit, is done through glass with the pilot's eyes.
That would be my position; but I know that JackS and Hellcat want the view to be, not even camera views, but actual computer reconstructions of the reality out there. According to JackS, you cannot even see through shields, so pilot views HAVE to be virtual-reality constructs. It gives me butterflies in the stomach just to think about it. This is, BTW, one of the main reasons why I divorced myself from Vegastrike-the-game --why "it's not my mod".

In any case, my shaders are for photo-realistic representation. Anything else will need different shaders, probably.
But I wash my hands of it all; though I think that no matter what you decide, realism has to be the basis, the starting point. Otherwise I'd think I'm wasting my time; but probably not. Probably whatever the Vegastrike mod's powers-that-be eventually decide will be based on slight modifications of the CineMut shader family. But even if not, the *engine* needs photorealistic shaders, even if the mother of all mods doesn't use them.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Well, now that I've decided to have the dielectric constant as an input, rather than a single-bit alpha "is_dielectric",
I figured I might as well compute Fresnel reflectivity as a function of view angle AND dielectric constant.
It took me the whole day messing around with Graph and DataFit to get an approximation algorithm; and it's
not too approximate; but then again, fitting a 2D manifold in 3D space is hard.

The x axis in the graph below is not angle, but rather cos of the angle. Why? Because my input is cos of
the angle (from the V dot N product). The y axis is reflectivity, and there are 4 pairs of curves, for 1.5, 2.0, 2.5
and 3.0 dielectrics. Note that glass is 1.5, and diamond is 2.5.

For each pair of curves, the grayish one is the true equation; the greenish one is the approximation algorithm
I came up with.

[Image: graph of reflectivity vs. cos of the view angle, showing the true Fresnel curves and the approximations for dielectric constants 1.5, 2.0, 2.5 and 3.0]

Granted, there's quite a discrepancy, especially at a dielectric constant of 3.0. On the other hand, polarized light is
perceived as brighter than non-polarized, so the discrepancy might be beneficial.
Granted also, the approximation formulas look bigger than the true formulas; but that's because there are
constants in them with a lot of decimal places of precision. Those are the coefficients that DataFit found to fit best.
I tried changing them even slightly and the curve goes to hell.
Here's the 3D curve fitting from DataFit:

[Image: DataFit 3D curve-fitting plot]

The black dots represent actual values, and the lines projecting from
the dots are the Y discrepancies.

And the code:

Code: Select all

float fresnel( in float cosVdotN, in float k )
{
	// Fitted blend coefficient; the constants are the DataFit results
	// and should not be rounded.
	float tmp3 = (k-2.0)/(k-0.938132952);
	float tmp2 = 1.0-cosVdotN*0.5629435458;
	tmp3 *= (tmp3*0.0037965349);
	// Reflectivity at normal incidence: ((1-k)/(1+k))^2.
	float tmp1 = (1.0-k)/(1.0+k);
	tmp3 += 0.067312919667;
	tmp1 *= tmp1;
	// Empirical grazing-angle term, taken to the 5th power below.
	tmp2 += (tmp3*sqrt(cosVdotN));
	tmp2 *= tmp2;
	tmp2 *= tmp2;
	tmp2 *= (tmp2*(1.0-tmp1));
	return tmp1 + tmp2;
}
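For comparison, the well-known Schlick approximation starts from the same normal-incidence term (tmp1 above) and gets to grazing incidence through a single fifth power; something like the following, should anyone want to trade a little accuracy at high k for fewer instructions:

Code: Select all

// Schlick's approximation, shown only for comparison with the fitted
// curve above; f0 is the same ((1-k)/(1+k))^2 normal-incidence term.
float fresnel_schlick( in float cosVdotN, in float k )
{
	float f0 = (1.0 - k) / (1.0 + k);
	f0 *= f0;
	float t  = 1.0 - cosVdotN;
	float t2 = t * t;
	return f0 + (1.0 - f0) * t2 * t2 * t;   // (1 - cosVdotN)^5
}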
Post Reply