Brainstorming on shadows


Post by klauss »

chuck_starchaser wrote:I'm totally confused now about the shadows thing. You said that having independent, floating shapes would be the killer, so the dolphin jumping armor idea, that goes above and below the hull, would make the armor a single mesh.
Yes... that's probably the best idea, since it lets all the engine-side optimizations take place.
chuck_starchaser wrote:The hull would have shadow casting turned off, of course.
No. Meaning: yes, but it can't be both a single mesh and have some parts on and some parts off - either all on, or all off. That's why, in order to make some parts cast shadows and some parts not, you have to separate them into multiple meshes. There is a reason for that... it's not an arbitrary interface limit... it's that the algorithm would get all messed up if you did that, unless it internally divided the thing in two (on & off) - and where would the fun be in that?
chuck_starchaser wrote:My problem is I don't know much about this stencil shadow technique.
Read on.
Especially:
GameDev wrote:The stencil shadow algorithm requires that the occluders be closed triangle meshes. This meant that every edge in the model must only be shared by 2 triangles thus disallowing any holes that would expose the interior of the model.
(always learning) - apparently, it will never work without closed meshes. Makes sense... but I thought algorithms handled that, only poorly - apparently not at all.
Another point for texture shadows, I'd say.
Care to work on ways to avoid z-fighting in texture self-shadows?

Post by chuck_starchaser »

Hey! I'm game.

Yeah, btw, I didn't explain my dolphin idea properly; the dolphin armor is one mesh, and the hull's another. But anyhow, let's beat the texture shadows' Z-fighters to a pulp. Remember that paper I sent you once? I think there was some very big idea lurking in it... I can find it again, but I'm still at work... After 4:30 I can look for it. The math was over my head, but I got the gist of it... And we have a huge simplification: one light source, at infinity.

Post by chuck_starchaser »

Okay, here's an idea:
The problem with depth shadow maps is that you never know exactly how much to bias the darn z. Now, how about this: when we're rendering to the z-buffer from the light source, before we write a fragment, we read what's in it and shove the old value into a second depth shadow texture. Once we're done, our standard depth shadow texture will have the nearest z for every fragment, and the other texture will have the depth of the second-nearest.
Now we can use the average of the two depths to get an automatically optimized bias depth.
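I mean something like this - just a CPU-style C++ sketch of the comparison, with made-up names for the inputs:

Code:
// Midpoint idea: with the nearest and second-nearest depths from the light
// available for a texel, compare against their average, so the bias adapts
// to the local thickness instead of being a fixed fudge factor.
inline bool inShadow(float nearestDepth, float secondDepth, float fragDepthFromLight)
{
    float midpoint = 0.5f * (nearestDepth + secondDepth); // automatic, thickness-based bias
    return fragDepthFromLight > midpoint;
}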
Nah, that doesn't work, because we only want to save the old value WHEN we're about to overwrite it. No such function... Or we could do the comparison manually, but that's not trivial, I guess... Are there floating-point texture formats? Well, we could even use some float-to-rgba and rgba-to-float hack, I suppose...

Here's another idea:
Make sure the lowest LOD for a ship is worth its shadow. Then: Use stencil shadows, using that LOD as the caster, for inter-ship/inter-object shadows, and depth shadow maps for intra-ship self-shadowing. Now we can optimize z-buffer range for each ship or unit.
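In outline, the per-frame structure would be something like this (C++ skeleton; Unit, DirLight and all the helpers are placeholders I'm making up, not actual engine code):

Code:
#include <vector>

struct Unit {};      // placeholder for a ship/unit
struct DirLight {};  // placeholder for the star's light

void fitLightDepthRangeToUnit(const Unit&, const DirLight&) {}  // tight per-unit near/far
void renderSelfShadowDepthMap(const Unit&, const DirLight&) {}  // intra-ship: depth map
void renderCrossShadowStencil(const Unit&, const DirLight&) {}  // inter-ship: stencil, lowest LOD
void renderSceneWithShadows(const std::vector<Unit>&) {}        // final pass mixes both

void renderShadows(const std::vector<Unit>& units, const DirLight& star)
{
    for (const Unit& u : units) {
        fitLightDepthRangeToUnit(u, star);  // per-unit z range: full depth precision
        renderSelfShadowDepthMap(u, star);  // self-shadowing from the detailed mesh
    }
    for (const Unit& u : units)
        renderCrossShadowStencil(u, star);  // closed low-LOD casts onto other units
    renderSceneWithShadows(units);
}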

Post by klauss »

Wuuu... you may have something there... about optimizing depth range, combined with the paper.

I'm thinking about rotating the Z-buffer, but rendering with the normal frustum projection.
Hm... could work...
...but I think it requires ps3.0 - replacing the depth component in a fragment shader.

Post by chuck_starchaser »

I'm not sure there are any international couriers sporting SR-71s, but I'll send you this ps3.0 card the fastest way possible, tomorrow; meantime maybe I can do something over here. Just tell me what I need to do to get set up for shader work.
I'm thinking about rotating the Z-buffer, but rendering with the normal frustum projection.
Hm... could work...
But if you render with the normal frustum you get wrong z occlusion, no? Maybe I don't get it.

Here's another idea:
We could use the front faces or the back faces for depth shadow rendering. Better yet, use both, then apply a formula for the bias. The formula is the tricky part:
I think one big mistake being made is using an additive bias. The bias should be multiplicative, like, say, 1.01, which would be a guaranteed 2-LSB bias regardless of depth magnitude. In the case where the back face is far from the front face, we can bias by half the thickness. So whichever is greater: 1/2 the thickness or 1%.
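In code the rule would be something like this (sketch; assumes the front-face and back-face depths from the light are available per texel, names made up):

Code:
#include <algorithm>

// Bias rule from above: whichever is greater, half the front-to-back
// thickness or 1% of the front depth (multiplicative, so it scales with z).
inline bool litByLight(float fragDepth, float frontDepth, float backDepth)
{
    float halfThickness = 0.5f * (backDepth - frontDepth);
    float bias = std::max(frontDepth * 0.01f, halfThickness);
    return fragDepth <= frontDepth + bias;
}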
Last edited by chuck_starchaser on Sat Mar 11, 2006 12:19 am, edited 1 time in total.

Post by klauss »

I'll try to write the shader and you'll see if it works ;)
Besides... I'm not sure yet if that's ps3.0 or not - I think it is.
And I should be able to write it without running it - it's actually pretty simple. The tricky part is optimizing range in all conditions: close lights, far lights, etc, etc...

EDITED: deleted it all because the concept is all screwed up - I just can't account for offscreen elements, and even accounting for on-screen ones would perhaps require too complex a shader for the shadow texture. I was still thinking like a CPU, rather than a GPU.

But I think I can do a variant of it with from-the-light projection...
Last edited by klauss on Sat Mar 11, 2006 12:32 am, edited 2 times in total.

Post by chuck_starchaser »

Sorry, I was editing the post when you posted...
I wrote:Here's another idea:
We could use the front faces or the back faces for depth shadow rendering. Better yet, use both, then apply a formula for the bias. The formula is the tricky part:
I think one big mistake being made is using an additive bias. The bias should be multiplicative, like, say, 1.01, which would be a guaranteed 2-LSB bias regardless of depth magnitude. In the case where the back face is far from the front face, we can bias by half the thickness. So whichever is greater: 1/2 the thickness or 1%.
Not sure what you're talking about, but I'm eager to try it.
"Per light"? Serious? I thought you'd just do shadows from the star, and to hell with the rest... :)

Post by klauss »

I just deleted most of it because of huge complications (I knew it was too good to be true... It's getting late for innovation perhaps), in the hope you wouldn't see it and get all confused.

Post by chuck_starchaser »

I was born confused; don't worry about me. I was writing about this 1% bias myself... I think my brain switched to 8-bit mode for a moment.
Okay, so, that paper; I'm sure it's still on my server; I'll look for it; but if I remember correctly, a light source at infinity is a special case; I would suggest we exploit that and do shadows from the star only, not from other light sources. Should be enough. Infinitely better than no shadows at all. And if we do one shadow map per unit, we could optimize z-buffer range so much as to be able to apply the shadows with GL_EQUAL :D

Post by chuck_starchaser »

More ideas, klauss:

Okay, my previous mental model was: Use depth shadow textures for self-shadowing, and stencil shadows for inter-unit shadowing. Use a simplified and airtight mesh (lowest LOD?) for inter-object shadowing. Optimize depth buffer for each unit: i.e. set the near and far planes just before and after the nearest and furthest vertex of a unit from the light's perspective, so as to utilize the full 24-bits of the z-buffer.
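The depth-range part of that would be about this simple (fixed-function OpenGL sketch; centerDist and radius are hypothetical per-unit inputs - the distance from the light-space eye to the unit's center, and its bounding-sphere radius):

Code:
#include <GL/gl.h>

// Orthographic projection from the (directional) light, fitted to one unit:
// near/far hug the bounding sphere, so the whole z-buffer range covers just
// this unit instead of the entire scene.
void setPerUnitLightProjection(float centerDist, float radius)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-radius, radius,       // left, right
            -radius, radius,       // bottom, top
            centerDist - radius,   // near: just before the nearest vertex
            centerDist + radius);  // far:  just after the furthest vertex
    glMatrixMode(GL_MODELVIEW);
}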

Now I have what I think is a better idea:

Same idea of a per-unit optimized shadow map for self-shadowing. But instead of using stencil shadows for cross-unit shadowing, we edge-detect the depth maps for the infinite/finite depth boundary, and throw shadows from those edges. Lots more efficient than projecting from the geometry, and the maps already reflect the fully detailed shadow. We can probably go as fancy as we want about the edge detection, but since we cannot generate geometry in a shader, I'd suggest we have, say, a 256 sided polygon that a vertex shader can deform to fit the shape of the shadow. For a very simple algorithm, we could, for each vertex in the polygon, do a binary search along a radius, to find the shadow's edge.
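The per-vertex binary search is cheap, too; here's a CPU-style sketch of what one ring vertex would do (depthAt is a made-up lookup into the per-unit light-space depth map, relative to the unit's center, with background texels assumed to hold the far value 1.0):

Code:
// For a direction (dirX, dirY) out from the unit's center in the light-space
// map, halve the interval until it straddles the occluder/background boundary,
// i.e. the shadow silhouette.  16 iterations give sub-texel precision.
float findSilhouetteRadius(float dirX, float dirY, float maxRadius,
                           float (*depthAt)(float s, float t))
{
    float lo = 0.0f;          // assumed inside the occluder (the unit's center)
    float hi = maxRadius;     // assumed outside, in the background
    for (int i = 0; i < 16; ++i) {
        float mid = 0.5f * (lo + hi);
        if (depthAt(dirX * mid, dirY * mid) < 1.0f)
            lo = mid;         // still hitting the unit
        else
            hi = mid;         // already past the edge
    }
    return 0.5f * (lo + hi);
}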

I was also thinking about soft-shadows....

My thinking goes somewhat along the lines of that paper; can't find it right now, btw... The question is how to compute or fake soft shadows economically, as in without doing multiple samples. I think I have a vague idea how:
Alright, our light source is usually at infinity, so we can compute shadow depth from the light's perspective using parallel projection. But the source is a disk spanning some degrees. Now, suppose that instead of a point source at infinity, we had a point source at close range. Shadows would grow larger, more quickly. Put another way, if you're sitting on this local light source, nearer objects appear larger than further ones. Now, it so happens that the enlarged shadows cast by a nearer light source are sort of what you get if you throw the full, expanded penumbra of a soft shadow together with the shadow.
The reverse is a bit trickier: if you consider a soft shadow minus the penumbra, shadows grow smaller as they travel further. That's roughly equivalent to the light source being local, but on the other side of the scene, casting light and shadows backwards.
If we can tweak the transforms to produce depth shadows for both these cases, we can then use some kind of linear or sigmoidal interpolation. Or better yet, we could "interpolate to z-bias", then modulate lighting in proportion to z offset from 0% to 100% at full bias or greater, so that inaccuracies would be automatically filtered.
There's one problem with this: A local point light source would spread the shadows (squeeze them in the far-side case); but I think this could be roughly compensated for by scaling down the shadow plus penumbra map, and scaling up the shadow minus penumbra map, by reciprocal ratios, till most features are relatively re-centered in the two maps.
A dangerous hack, I would agree, but on the other hand, if the shadows are soft, and with their softness encoded as bias, it will probably make artifacts look right, anyhow :D

More:
This may overlap with the ideas on that paper, again, but I'm not sure to what extent, so I'm throwing it all in:
Instead of starting by rendering shadow depth from the light source onto an arbitrary plane, we place a black-filled billboard in front of the camera, representing the screen. The goal, at the end of it, is to have such z-depth on the texels of this texture that we can simply mix it with the rendering of all the geometry and have shadows drawn correctly as the result. IOW, where a pixel should be in shadow, the z of this black texture should be less than the z of the scene at that pixel, and greater if NOT in shadow.
So, first we render the scene normally (without texturing, though) onto this black texture, to get z equal to the scene's.
Our texture now is a sort of 2.5 D representation that includes all the shadow receivers we care about, though not all the shadow casters, of course.
Now we apply a lossless transform to this, and rotate it onto a plane from the light's perspective, plus we give it a bit of bias. Lossless, in the sense that resolution should be guaranteed to stay the same or increase. The advantage of all this is that we already have a scene with calculated z from the light source. We now draw from the light source, but usually the z-test fails (unless there IS an occluder), so in many cases we don't draw. At the end of it, we apply the inverse transform, back to camera view, but we make sure we use a transform that is precision-reciprocal to the first, so as to avoid artifacts due to incompatible samplings.
In plainer words, a pixel in the original scene should be well represented by from one to four --or more-- pixels in the from-the-light perspective, and map back to the same pixel, after its non-shadow state is confirmed, or its depth is rewritten. Actually, no, what I mean is: whatever inaccuracies are incurred going from camera to light space should be undone by the inverse transform going back, such that a pixel that isn't in shadow, and which is surrounded by pixels that aren't in shadow either, should, at the end of both transforms, have the same z. This should avoid artifacts and pixelated shadow edges. What's that called? A "reversible" transform?

Jesus! 11:20 AM already, better get that stuff repacked and get moving towards the post office...

EDIT:
Done! Sent Express and registered and insured and well packaged and with a couple of DDR400 sticks, 256 megs each. Supposedly it will take 7 business days to get there, so, around Wednesday the 22nd. I hope the card is still okay; I didn't even open the original envelope; I just threw the envelope into a box, with some extra bubblewrap.

Yeah, btw, they told me there are no international couriers sporting SR-71s. What's with that? Where did all the decommissioned SR-71s go? Somebody needs to grab that niche of the courier market before the SR-71s rust away...
Actually, I'm not totally sure the sticks were DDR400. Forgot to look at them again before I threw them in the box. I'm pretty sure they were, though, because I remember when I bought sticks for my new mobo I thought, "gee, I spent all this money on memory and they aren't even faster than the ones I had".

Post by klauss »

chuck_starchaser wrote:More ideas, klauss:

Okay, my previous mental model was: Use depth shadow textures for self-shadowing, and stencil shadows for inter-unit shadowing. Use a simplified and airtight mesh (lowest LOD?) for inter-object shadowing. Optimize depth buffer for each unit: i.e. set the near and far planes just before and after the nearest and furthest vertex of a unit from the light's perspective, so as to utilize the full 24-bits of the z-buffer.
The pity is... it would be best to use stencil for self-shadowing, and texture for the rest - but that won't work, because at close range we need high detail (and close range is when self-shadowing matters).

chuck_starchaser wrote:Now I have what I think is a better idea:

Same idea of a per-unit optimized shadow map for self-shadowing. But instead of using stencil shadows for cross-unit shadowing, we edge-detect the depth maps for the infinite/finite depth boundary, and throw shadows from those edges. Lots more efficient than projecting from the geometry, and the maps already reflect the fully detailed shadow. We can probably go as fancy as we want about the edge detection, but since we cannot generate geometry in a shader, I'd suggest we have, say, a 256 sided polygon that a vertex shader can deform to fit the shape of the shadow. For a very simple algorithm, we could, for each vertex in the polygon, do a binary search along a radius, to find the shadow's edge.
Neat idea... if it were to run on a CPU.
But not so much on a GPU - shaders have no context, and to do that kind of thing without context would imply heavy overhead. I don't think good performance could be achieved. Besides, stencil shadow volumes can't be used if they don't perfectly match the geometry, because they mess up the entire scene - you'd have to divide the rendering into multiple stages, which becomes inefficient as the number of stages increases.
chuck_starchaser wrote:Alright, our light source is usually at infinity, so we can compute shadow depth from the light's perspective using parallel projection.
Don't assume that.
From the engine's perspective, everything is a point light. I've actually never found myself needing directional lights.
Why? Because of the universe's scale - every light source is within the scene, not infinitely far away. Remember: planets are within reach.
It's not impossible to do some sort of automatic conversion from point to directional when you're far away from the source... but I wouldn't consider such advanced issues now - let's leave that for advanced refinements of the engine. The thing is, things get tricky when you can get close to your light source. How would you treat the sun when you're less than a solar radius from it? Certainly not as a point light. Nor as a directional light. I would like to eventually consider such issues, so we'll work on advanced lighting then.
For the shadow problem, I'd first consider point sources, since those are the most common (or most useful approximations, at least).

About the 2.5D thing... I didn't get it.
Alright, the idea of projecting from the camera really has to hold some ground for artifacts to go away... there's no way otherwise. But there's no GPU-friendly way to do that mapping either... not 1-to-1, which is what's required. So... in the end... you can't guarantee a 1-to-1 mapping between shadow texture texels and screen pixels... thus, you're plagued with sampling errors - not only rounding errors.
I think I have a way to compute automatic biases, by making the shadow texture rendering pass aware of the camera's frustum - so, when it renders a texel, it computes an appropriate bias by estimating how much a pixel's sample could deviate from the texel's sample. Highly heuristic... I would have to try that.
Other than that, I found no solution. Perhaps... that added to the paper's techniques could show good results for inter-object shadowing. But self-shadowing would require at least artists being careful not to test their luck. There's always a case that cannot be accounted for: parallel planes. In some orientations, they wouldn't get properly shadowed, no matter how you select biasing or resolution allocation.

chuck_starchaser wrote:Done! Sent Express and registered and insured and well packaged and with a couple of DDR400 sticks, 256 megs each. Supposedly it will take 7 business days to get there, so, around Wednesday the 22nd. I hope the card is still okay; I didn't even open the original envelope; I just threw the envelope into a box, with some extra bubblewrap.
:D :D
Ehm... words...
:D :D
...not necessary.
chuck_starchaser wrote:Yeah, btw, they told me there are no international couriers sporting SR-71s. What's with that? Where did all the decommissioned SR-71s go? Somebody needs to grab that niche of the courier market before the SR-71s rust away...
:lol: That wouldn't help, though. If something takes more than 2 days to get somewhere, it isn't because airplanes aren't fast enough - even at standard speeds, you can get anywhere within 48 hours. It's the administrative overhead that has to be fixed, IMNSHBNSSO.

Post by chuck_starchaser »

klauss wrote:
chuck_starchaser wrote:More ideas, klauss:

Okay, my previous mental model was: Use depth shadow textures for self-shadowing, and stencil shadows for inter-unit shadowing. Use a simplified and airtight mesh (lowest LOD?) for inter-object shadowing. Optimize depth buffer for each unit: i.e. set the near and far planes just before and after the nearest and furthest vertex of a unit from the light's perspective, so as to utilize the full 24-bits of the z-buffer.
The pity is... it would be best to use stencil for self-shadowing, and texture for the rest - but that won't work, because at close range we need high detail (and close range is when self-shadowing matters).
In what sense would it be best, then? I would think using texture shadows for self-shadowing would be best in every way, in the sense that texture shadows incur precision problems when they span a lot of light and camera z, such as full scenes with wide angles. Using one shadow texture per unit, we minimize that problem, plus we optimize z precision by scaling the z range from the light to only the needed range. That's a ridiculous amount of precision.
chuck_starchaser wrote:Now I have what I think is a better idea:

Same idea of a per-unit optimized shadow map for self-shadowing. But instead of using stencil shadows for cross-unit shadowing, we edge-detect the depth maps for the infinite/finite depth boundary, and throw shadows from those edges. Lots more efficient than projecting from the geometry, and the maps already reflect the fully detailed shadow. We can probably go as fancy as we want about the edge detection, but since we cannot generate geometry in a shader, I'd suggest we have, say, a 256 sided polygon that a vertex shader can deform to fit the shape of the shadow. For a very simple algorithm, we could, for each vertex in the polygon, do a binary search along a radius, to find the shadow's edge.
Neat idea... if it were to run on a CPU.
But not so much on a GPU - shaders have no context, and to do that kind of thing without context would imply heavy overhead. I don't think good performance could be achieved. Besides, stencil shadow volumes can't be used if they don't perfectly match the geometry, because they mess up the entire scene - you'd have to divide the rendering into multiple stages, which becomes inefficient as the number of stages increases.
I don't know what "having no context" means, but I see a problem, and maybe this problem is what you mean. We could throw these 256-sided "rings" into the mesh file, like an LOD. We're already up to two passes no matter what, right? So, in the first pass we compute a shadow texture for each unit, and, as a last step for the unit, we process this ring by wrapping it to the shadow via a binary search of the texture, in the vertex shader. There are enough instructions to do a binary search along the radius. And this ring is set to project shadow planes from all its non-zero edges. So we end up with 256 shadow planes or fewer, per unit.
In the second pass we apply self-shadowing for each unit from its shadow texture, and mix in stencil shadowing from other units. Each unit avoids stencil shadows from itself by the ring being placed at the far end of it from the light's perspective.
chuck_starchaser wrote:Alright, our light source is usually at infinity, so we can compute shadow depth from the light's perspective using parallel projection.
Don't assume that.
From the engine's perspective, everything is a point light. I've actually never found myself needing directional lights.
Why? Because of the universe's scale - every light source is within the scene, not infinitely far away. Remember: planets are within reach.
It's not impossible to do some sort of automatic conversion from point to directional when you're far away from the source... but I wouldn't consider such advanced issues now - let's leave that for advanced refinements of the engine. The thing is, things get tricky when you can get close to your light source. How would you treat the sun when you're less than a solar radius from it? Certainly not as a point light. Nor as a directional light. I would like to eventually consider such issues, so we'll work on advanced lighting then.
For the shadow problem, I'd first consider point sources, since those are the most common (or most useful approximations, at least).
You're mixing 3 unrelated things. Point or non-point source is unrelated to whether it's at infinity or not. And directional lights are lights that project like a cone.
The light from a star is a non-point source, but treating it as anything other than a point source is the subject of *soft shadows*.
Now, whether a light is considered at infinity (parallel projection; --NOT "directional") is independent of its pointiness. The light from a star IS, for all intents and purposes, a light at infinity. Even if you were at 1/100th of a radius from a star, it still is. What changes there is the pointiness of it, not whether it can be considered at infinity. It is still practically at infinity because, from each point on the surface of the star, light rays are arriving at you parallel, to whatever precision you can practically measure.
And the reason I suggested considering it so in the math, is because in that paper, a light at infinity was a special case. So, I was suggesting that as a simplification, NOT as a fancy feature. I think you disregarded the whole idea for soft shadows on the basis of my suggesting generalizing to a light at infinity. If so, please read that again. That whole idea could be the ticket to have soft shadows (model non-point-like sources). That's what happens when you get close to a star: shadows get softer.
About the 2.5D thing... I didn't get it.
Alright, the idea of projecting from the camera really has to hold some ground for artifacts to go away... there's no way otherwise. But there's no GPU-friendly way to do that mapping either... not 1-to-1, which is what's required. So... in the end... you can't guarantee a 1-to-1 mapping between shadow texture texels and screen pixels... thus, you're plagued with sampling errors - not only rounding errors.
I realize the sampling problem; this was a shot in the air; I was throwing the idea that IF there was some exactly reversible transform by guaranteeing say minimum 1:4 correspondence ...
Yeah, my suggestion amounted to a triple pass... I'm still thinking about it, though, because... Well, it has evolved (or devolved) in my mind...:

(EDIT: OBSOLETE)

1a) projecting from camera, IOW rendering, like with textures, lighting, everything but shadows, plus z, of course.
1b) Transform z to light perspective z (alpha of zero), at double the rez or something, interpolating or whatever. (EDIT: HERE THE PROBLEM...) (This could be done in the same pixel shader as 1a; we just transform each pixel to light perspective and write it to a separate target as we're rendering the scene.)
2a) Render z-only from the light to a depth texture with default alpha of zero, with GL_LESS on Z, and setting alpha to 1 if and when over-writing. (Note that this is faster than the usual method, which has to draw *everything* from the light perspective. If most of what we see in a given scene is lit, z-test will fail a lot of the time.)
2b) Transform back to a black-filled texture from camera-perspective without z, just interpolated alphas; and write over the original scene.

Hey! This is only two passes, now, btw.
And it's one transform, not two, as the second no longer requires z...
HEY!!!!
Our first transform is quite precise, because we're going from geometry, --NOT from a freaking z--, to light-perspective z. So we only incur ONE z precision distortion, NOT two. We're NOT writing that per-pixel z to a half-float and then reading it back and transforming it; we go directly from geometry to the light's z!!! Okay, I know what you're thinking: "the standard algorithm doesn't transform camera z to light z"; true, but the standard algo renders z from the light, then transforms from light z to camera z, and compares z, so there are two precision distortions.
EUREKA!!!!!!
I think I have a way to compute automatic biases, by making the shadow texture rendering pass aware of the camera's frustum - so, when it renders a texel, it computes an appropriate bias by estimating how much a pixel's sample could deviate from the texel's sample. Highly heuristic... I would have to try that.
Sounds like a good idea!
Other than that, I found no solution. Perhaps... that added to the paper's techniques could show good results for inter-object shadowing. But self-shadowing would require at least artists being careful not to test their luck. There's always a case that cannot be accounted for: parallel planes. In some orientations, they wouldn't get properly shadowed, no matter how you select biasing or resolution allocation.
Hey, parallel planes show artifacts in RL :D
...IMNSHBNSSO.
What's that?

EDIT:
DOH!!!! Can't transform from camera to light perspective with interpolation, can we? Where did I leave my brains?...

Back to standard algorithm. Here's yet another idea:
The first pass renders from the light's perspective, but writes to two targets, or a combined target. The idea is to generate two z-buffers simultaneously: each fragment from the light has a computed z from the light AND from the camera, both computed at full precision but stored in z-precision format. The first z is used to decide whether or not to write. The second z is written if the first is written, and not otherwise. No advantage yet...
On the second and final pass, we have two ways to determine light or shadow:
a) Computing z from the light and comparing to interpolated z from the light (first z from first pass), and
b) Comparing z from the camera to the interpolated z-from-camera from the second z from the first pass.
Interpolation is automatic, of course; just mentioning.
So, now perhaps one of the two methods yields values that are too close to decide, but the other gets a better result. The idea is to multiply the results together, kind of:
factor1 = computed z from light / interpolated read of z from light
(lit if > 1; dark if < 1)
factor2 = z from camera / interpolated read of z from camera
(quite possibly lit if ~1, for sure not lit if << 1 or >> 1)

light up the fragment if factor1 - (factor2 - 1.0f)^2 > 1
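Spelled out (a literal transcription of the heuristic above, keeping the lit/dark convention as stated; all input names are made up):

Code:
// factor1 compares light-space depths, factor2 camera-space depths; the
// squared (factor2 - 1) term only penalizes the result when the two
// camera-space depths disagree, in either direction.
inline bool lightUpFragment(float zLightComputed, float zLightRead,
                            float zCamComputed,   float zCamRead)
{
    float factor1 = zLightComputed / zLightRead;  // lit if > 1, dark if < 1 (as above)
    float factor2 = zCamComputed   / zCamRead;    // ~1 when both passes agree
    float d = factor2 - 1.0f;
    return factor1 - d * d > 1.0f;
}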

EDIT: Still thinking about it...
Last edited by chuck_starchaser on Tue Mar 14, 2006 1:49 am, edited 2 times in total.

Post by hurleybird »

...IMNSHBNSSO...

Ummm, let's see... In My Not So Humble But Not Strictly Serious Opinion?

-Well, I think I got everything but the two S's right....

Post by chuck_starchaser »

hurleybird wrote:...IMNSHBNSSO...

Ummm, let's see... In My Not So Humble But Not Strictly Serious Opinion?
In My Not So Humble But Not So Snotty Opinion

In My Not So Humble But Not So Stuck-up Opinion

In My Not So Humble But Not So Smarty-Panty Opinion

In My Not So Humble But Not So Smug Opinion

In My Not So Humble But Not So Supercilious Opinion

In My Not So Humble But Not So Swaggering Opinion

In My Not So Humble But Not So Smart-alecky Opinion

...

Post by klauss »

@Hurley: Unbelievable... you guessed it :shock:
chuck_starchaser wrote:1a) projecting from camera, IOW rendering, like with textures, lighting, everything but shadows, plus z, of course.
1b) Transform z to light perspective z (alpha of zero), at double the rez or something, interpolating or whatever. (EDIT: HERE THE PROBLEM...) (This could be done in the same pixel shader as 1a; we just transform each pixel to light perspective and write it to a separate target as we're rendering the scene.)
2a) Render z-only from the light to a depth texture with default alpha of zero, with GL_LESS on Z, and setting alpha to 1 if and when over-writing. (Note that this is faster than the usual method, which has to draw *everything* from the light perspective. If most of what we see in a given scene is lit, z-test will fail a lot of the time.)
2b) Transform back to a black-filled texture from camera-perspective without z, just interpolated alphas; and write over the original scene.
Uuuu...
I don't think you meant it the way I'm thinking, but I see some potential.
Rather than using it as an authoritative sample... I could use the screen samples as correcting samples: every time I render from the light, I check the from-camera target, using some sort of transform. If it matches in relative angular placement, it means it belongs to the same texel I'm rendering. So... I can use that depth value to correct the from-light sample and completely avoid any kind of artifact. Cool - in fact, I can also autocompute the bias: if multiple on-screen samples match, I adjust the bias. I'd need two z-channels on the depth texture, which is possible if I encode the second depth value (min/max) in the color components.
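Packing that second depth into the color components would be the usual split-into-bytes trick; a CPU-style sketch of the idea (the exact layout is just an illustration):

Code:
#include <cmath>
#include <cstdint>

// Pack a depth in [0,1) into four 8-bit channels as successive base-256
// digits of the fraction, and recover it afterwards.
void packDepthRGBA8(float depth01, std::uint8_t rgba[4])
{
    float d = depth01;
    for (int i = 0; i < 4; ++i) {
        d *= 256.0f;
        float digit = std::floor(d);
        if (digit > 255.0f) digit = 255.0f;  // clamp the d == 1.0 edge case
        rgba[i] = static_cast<std::uint8_t>(digit);
        d -= digit;
    }
}

float unpackDepthRGBA8(const std::uint8_t rgba[4])
{
    return rgba[0] / 256.0f + rgba[1] / 65536.0f +
           rgba[2] / 16777216.0f + rgba[3] / 4294967296.0f;
}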

Nice.

Sadly, I don't think it maps well with Ogre... I'd have to code all that by hand, which I'd hate.

About why stencil would be best for self-shadowing: it shows no artifacts, and is mighty simple - its strength is at self-shadowing. Shadow textures, however, have their weakness at self-shadowing.

I do have a solution for smooth shadows, which could be considered overkill (but... hey... tell me it isn't tempting): dynamic ambient occlusion - it's all you need. And it works on ps3 :D

Post by chuck_starchaser »

klauss wrote: I do have a solution for smooth shadows, which could be considered overkill (but... hey... tell me it isn't tempting): dynamic ambient occlusion - it's all you need. And it works on ps3 :D
Not sure what you mean; if radiosity is baked into the glow texture, there's no need to dynamically compute ambient; besides, light from the sun only spans a few degrees... It's a different kind of softness. Unless you're talking about narrow-angle multisampling, but then I wonder where you're thinking of doing that. In the from-the-camera rendering you can't, because you don't know the occluders. In the from-the-light rendering you need different angles, so it's like multiple passes... I stand by my previous proposal: distort the light's position so it's close to the scene, from one side and from the other. Get shadow-plus-penumbra and shadow-minus-penumbra depth maps, and interpolate; but I'm not sure how the interpolation would be done on a GPU. Some geometry trick, perhaps... Like producing a 3D mask we slap on top of the scene...

I was having a very vague new idea now... Vague because I have a feeling, more than a certainty, that depth measured from a median perspective between the camera and the light might be less prone to artifacts. And this might be amenable to optimization with dual GPU or SLI... Render from the camera with an additional computed z from the light, in one GPU. Render from the light with an additional computed z from the camera in another GPU. "Average" the two results before deciding whether a fragment is in light or shadow.

Nah...

1) Calculate the median (between camera and light) projection transforms in the CPU (transform matrices or quaternions), once per frame.
2) Rendering from the light with an extra z in this median perspective, color encoded.
3) Rendering from camera, but instead of computing the light's-z, compute median z and compare with that extra median z we computed when rendering from the light.

Maybe?

I have a feeling it would ***drastically*** cut down the sampling problem...
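Step 1 would be tiny on the CPU side (sketch; viewDir points from the camera into the scene and lightDir is the direction the starlight travels - both assumed to be unit vectors, names made up):

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// The "median" depth direction halves the angle between camera and light.
// (Degenerate when they are exactly opposite; then any perpendicular would do.)
Vec3 medianDepthDirection(Vec3 viewDir, Vec3 lightDir)
{
    return normalize({ viewDir.x + lightDir.x,
                       viewDir.y + lightDir.y,
                       viewDir.z + lightDir.z });
}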

EDIT:
Holy crap! Why base the decision only on z? We could record the interpolated surface normal, for example, in the from-the-light rendering --which would interpolate nicely--, or even some color info too, and have this extra circumstantial evidence for our decision-making algorithm: "Do these parameters match closely enough that it could be the same spot we're talking about?" -type fuzzy logic. In many cases where the z decision is ambiguous, a very different surface normal would settle the question towards "in shadow".

And r.e. precision: what about w-buffers? Could we use logarithms throughout? Add and subtract instead of mul/div...?
Hey! As a matter of fact, z is not distance from camera, but scene depth (parallel projection). And distance from a light at infinity (which a star IS) is also parallel. This means that transforming camera z to light's z, from an arbitrary plane orthogonal to the light, is linear. Never mind all these complicated matrices... We can compute the few transforming factors in the CPU once per frame. Same goes for computing z to some median view space between camera and light. Uhmm... actually, the xy we need to compute in from the light space isn't so linear, is it?...

Another thing...
One problem adding itself to the sampling problem is the linear interpolation of depth samples. Is there a way we could use an anisotropic sampler to get a better z-from-light (or from median) interpolation?
Last edited by chuck_starchaser on Tue Mar 14, 2006 2:21 pm, edited 1 time in total.

Post by klauss »

chuck_starchaser wrote:Not sure what you mean
Read, then.
The idea is that radiosity, when the emitters are moving, cannot be baked.
That paper only talks about ambient occlusion, IIRC - but it can easily be modded.

Post by chuck_starchaser »

klauss wrote:
chuck_starchaser wrote:Not sure what you mean
Read, then.
The idea is that radiosity, when the emitters are moving, cannot be baked.
That paper only talks about ambient occlusion, IIRC - but it can easily be modded.
Interesting. Just one bounce would be enough, IMO; can't see the difference between single and double bounce... Modded as in creating a set of emitters representing a non-point-like source?
How do you implement a tree traversal of emitters in the gpu, anyways?

So what about the rest of my post? The median idea; I think it's my best idea so far.

Post by klauss »

chuck_starchaser wrote:So what about the rest of my post? The median idea; I think it's my best idea so far.
I don't know. I don't get it at all.
Perhaps some explicative drawings?

(and I'm splitting this thread)

Post by chuck_starchaser »

klauss wrote:
chuck_starchaser wrote:So what about the rest of my post? The median idea; I think it's my best idea so far.
I don't know. I don't get it at all.
Perhaps some explicative drawings?

(and I'm splitting this thread)
Ok, I'll try to draw the median idea at lunchtime.

Post by chuck_starchaser »

[Three attached diagrams illustrating the argument below.]

IOW, as the light angle approaches 90 degrees to the surface normal, z values change very rapidly and the sampling ratio from camera to light space contracts to one-to-many; and as the view vector approaches 90 degrees to the surface normal, the sampling ratio of camera to light space expands to many-to-one. In both cases we have precision issues due to extremely unequal samplings. But by using an intermediate depth space, we scale down the angles in those problem regions, which results in tangent values many orders of magnitude smaller than infinities :), so we get an animal that's much better than the sum of its parts...

Uhmmm.... Make that test in the second page a test for "rough equality", rather than a less-than test...

And the anisotropic thing... well... I'm really not sure if it's supposed to help or hamper; maybe we need *inverse* anisotropy, for all I know, (like having normal and anisotropic interpolation and subtracting a fraction of the latter from the former)?... Very vague idea. Essentially, where dz/dx and/or dz/dy change rapidly, some way of getting a better than linear interpolation out of the poor hardware, IF there's any way. But never mind; I think the median idea alone will decimate the artifacts.

Post by chuck_starchaser »

I might add, it might look like I'm suggesting the median z changes the sampling in any way. It doesn't. What it does is use a z direction that yields more linearly interpolable z values. Imagine we're looking at a cylindrical tank and a light is on the right at 90 degrees. The 90 degrees of the cylinder that are the region of interest (because it's both, lit and visible) sports 2 places of high concavity in the z sampling: one from the camera, one from the light. But those 90 degrees of interest, from a median view at 45 degrees to my right, look like +/-45 degrees. Z values from this direction don't tend to jump to infinity and back; --at least not at the places where it matters. Concavities, within the region of interest, are much less, and therefore less prone to linear interpolation errors.

Post by chuck_starchaser »

Yeah, I might add it doesn't help, since linear interpolation is linear interpolation, no matter what concavity.

Back to the drawing board...

Hey, no it does have its merit!:
If, when we're scanning in camera view in the second pass, we're looking at a spot where the light is almost tangential to the surface, we compute a z from the light; then we look up the z for that point in the from-the-light depth texture and find that z in that area is changing drastically, so, even with interpolation, we get an error that is amplified by tan(89 degrees) or whatever. Our bias is quite insufficient at such spots. With z comparisons from a median perspective, the sampling is the same, but z changes more slowly in such regions, so it's more within the numerical range of our bias.
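Just to put numbers on that amplification (assuming the interpolation error scales roughly with the tangent of the grazing angle, as argued above):

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double deg = 3.14159265358979323846 / 180.0;
    std::printf("tan(45.0 deg) = %8.2f\n", std::tan(45.0 * deg));  //    1.00
    std::printf("tan(89.0 deg) = %8.2f\n", std::tan(89.0 * deg));  //  ~57.29
    std::printf("tan(89.9 deg) = %8.2f\n", std::tan(89.9 * deg));  // ~572.96
    return 0;
}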

Well, I'm not sure whether the median z is the right solution, but I think this clarifies a question of bias for us: The right bias to use is one that is measured along the surface normal, rather than along z from the light.

Post by spiritplumber »

I thought that tan(89) was a lot?


y'all lost me ^_^, I better get back to the AI

Post by chuck_starchaser »

Yeah, we're tackling a shadowy subject, with shady techniques... Expect sombre and obscure posts around here :D :D :D