klauss wrote:
chuck_starchaser wrote:
More ideas, klauss:
Okay, my previous mental model was: use depth shadow textures for self-shadowing, and stencil shadows for inter-unit shadowing. Use a simplified and airtight mesh (lowest LOD?) for inter-object shadowing. Optimize the depth buffer for each unit: i.e., set the near and far planes just before and after the nearest and furthest vertex of the unit from the light's perspective, so as to utilize the full 24 bits of the z-buffer.
Pity is... it would be best to use stencil for self-shadowing, and texture for the rest - but that won't work, because at close range we need high detail (and at close range it is when self-shadowing matters).
In what sense would it be best, then? I would think using texture shadows for self-shadowing would be best in every way, in the sense that a texture shadow incurs precision problems when it spans a lot of light and camera z, such as full scenes with wide angles. Using one shadow texture per unit, we minimize that problem, plus we optimize z precision by scaling the z range from the light to only the needed range. That's ridiculous amounts of precision.
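Something like this, in code - a minimal sketch of the z-range fitting (Vec3, shadowZRange and the epsilon padding are made up for illustration, not engine code):

Code:
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Depth extent {near, far} of a unit along lightDir (assumed normalized),
// so the shadow projection can clamp its planes to exactly this interval
// and spend the full 24 bits of z on the unit alone.
std::pair<float, float> shadowZRange(const std::vector<Vec3> &vertices,
                                     const Vec3 &lightDir)
{
    float zNear = std::numeric_limits<float>::max();
    float zFar  = std::numeric_limits<float>::lowest();
    for (const Vec3 &v : vertices) {
        float d = dot(v, lightDir);       // distance along the light axis
        zNear = std::min(zNear, d);
        zFar  = std::max(zFar, d);
    }
    // Nudge outward a little so the extreme vertices don't sit exactly
    // on the clip planes.
    float eps = 1e-3f * (zFar - zNear);
    return std::make_pair(zNear - eps, zFar + eps);
}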
chuck_starchaser wrote:
Now I have what I think is a better idea:
Same idea of a per-unit optimized shadow map for self-shadowing. But instead of using stencil shadows for cross-unit shadowing, we edge-detect the depth maps for the infinite/finite depth boundary, and throw shadows from those edges. Lots more efficient than projecting from the geometry, and the maps already reflect the fully detailed shadow. We can probably go as fancy as we want about the edge detection, but since we cannot generate geometry in a shader, I'd suggest we have, say, a 256-sided polygon that a vertex shader can deform to fit the shape of the shadow. For a very simple algorithm, we could, for each vertex in the polygon, do a binary search along a radius to find the shadow's edge.
Neat idea... if it were to run on a CPU.
But not so much on a GPU - shaders have no context, and doing that kind of thing without context would imply heavy overheads. I don't think good performance could be achieved. Besides, stencil shadow volumes cannot be used if they don't perfectly match the geometry, because they mess up the entire scene - you'd have to divide the rendering into multiple stages, which would get inefficient as the number of stages increases.
I don't know what "having no context" means; but I see a problem, and maybe this problem is what you mean. We could throw these 256-sided "rings" into the mesh file, like an LOD. We're already onto two passes no matter what, right? So, in the first pass we compute a shadow texture for each unit, and, as a last step for the unit, we process this ring by wrapping it to the shadow via a binary search of the texture, in the vertex shader (see the sketch below). There are enough instructions to do a binary search along the radius. And this ring is set to project shadow planes from all its non-zero edges. So, we end up with 256 shadow planes or less, per unit.
In the second pass we apply self-shadowing for each unit from its shadow texture, and mix in stencil shadowing from other units. Each unit avoids stencil shadows from itself by the ring being placed at the far end of it from the light's perspective.
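A sketch of that per-vertex binary search (a CPU-side model of what the vertex shader would do; sampleShadowAlpha, fitRingVertex, and the assumption that the silhouette is star-shaped around its centre are all mine, for illustration):

Code:
#include <cmath>

// Toy stand-in for the shadow-map alpha lookup: 1 inside the shadow
// silhouette, 0 outside. Here, a circular shadow of radius 0.25 centred
// at (0.5, 0.5); in the real thing this is a texture fetch.
float sampleShadowAlpha(float u, float v)
{
    float du = u - 0.5f, dv = v - 0.5f;
    return (du * du + dv * dv < 0.25f * 0.25f) ? 1.0f : 0.0f;
}

// Deform one ring vertex: from the silhouette's centre (cu,cv), walk out
// along this vertex's angle, halving the interval each step until we pin
// down the 1-to-0 boundary of the shadow.
void fitRingVertex(float cu, float cv, float angle, float maxRadius,
                   float &outU, float &outV)
{
    float cosA = std::cos(angle), sinA = std::sin(angle);
    float lo = 0.0f, hi = maxRadius;  // assumes centre in shadow, rim out
    for (int i = 0; i < 8; ++i) {     // 8 halvings ~ 1/256 of the radius
        float mid = 0.5f * (lo + hi);
        if (sampleShadowAlpha(cu + mid * cosA, cv + mid * sinA) > 0.5f)
            lo = mid;                 // still in shadow: move outward
        else
            hi = mid;                 // out of shadow: move back inward
    }
    outU = cu + lo * cosA;            // best estimate of the shadow's edge
    outV = cv + lo * sinA;
}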
chuck_starchaser wrote:
Alright, our light source is usually at infinity, so we can compute shadow depth from the light's perspective using parallel projection.
Don't assume that.
From the engine's perspective, everything is a point light. I've actually never found myself needing directional lights.
Why? Because of the universe's scale - every light source is within the scene, and not infinitely far away. Remember: planets are within reach.
It is not impossible to do some sort of automatic conversion from point to directional when you're far away from the source... but I wouldn't consider such advanced issues now - let's leave that for advanced refinements of the engine. Thing is, things get tricky when you can get close to your light source. How would you treat the sun when you're less than a solar radius from it? Certainly not as a point light. Nor as a directional light. I would like to eventually consider such issues, so we'll work on advanced lighting then.
For the shadow problem, I'd first consider point sources, since those are the most common (or most useful approximations, at least).
You're mixing three unrelated things. Point or non-point source is unrelated to whether the light is at infinity or not. And directional lights are lights that project like a cone.
The light from a star is a non-point source, but treating it as a non-point source is the subject of *soft shadows*.
Now, whether a light is considered at infinity (parallel projection, NOT "directional") is independent of its pointiness. The light from a star IS, for all intents and purposes, a light at infinity. Even if you were at 1/100th of a radius from a star, it still is. What changes there is the pointiness of it, but not whether it can be considered at infinity. It is still practically at infinity because, from each point on the surface of the star, light rays are arriving at you parallel, to whatever precision you can practically measure: at 1/100th of a solar radius (roughly 7,000 km), rays from a single surface point to two points a few meters apart on your hull differ in direction by well under a microradian.
And the reason I suggested treating it so in the math is that, in that paper, a light at infinity was a special case. So I was suggesting that as a simplification, NOT as a fancy feature. I think you disregarded the whole idea for soft shadows on the basis of my suggesting we generalize to a light at infinity. If so, please read it again. That whole idea could be the ticket to soft shadows (modeling non-point-like sources). That's what happens when you get close to a star: shadows get softer.
About the 2.5D thing... I didn't get it.
Alright, the idea of projecting from the camera really has to hold some ground for artifacts to go away... there's no way otherwise. But there's no GPU-friendly way to do that mapping either... not 1-to-1, which is what's required. So... in the end... you can't guarantee a 1-to-1 mapping between shadow texture texels and screen pixels... thus, you're plagued with sampling errors - not only rounding errors.
I realize the sampling problem; this was a shot in the air; I was throwing out the idea that IF there were some exactly reversible transform, by guaranteeing, say, a minimum 1:4 correspondence ...
Yeah, my suggestion amounted to a triple pass... I'm still thinking about it, though, because... Well, it has evolved (or devolved) in my mind...:
(EDIT: OBSOLETE)
1a) projecting from camera, IOW rendering, like with textures, lighting, everything but shadows, plus z, of course.
1b) Transform z to light-perspective z (alpha of zero), at double the rez or something, interpolating or whatever. (EDIT: HERE'S THE PROBLEM...) (This could be done in the same pixel shader as 1a; we just transform each pixel to light perspective and write it to a separate target as we're rendering the scene.)
2a) Render z-only from the light to a depth texture with a default alpha of zero, with GL_LESS on z, setting alpha to 1 if and when overwriting. (Note that this is faster than the usual method, which has to draw *everything* from the light's perspective. If most of what we see in a given scene is lit, the z-test will fail a lot of the time.)
2b) Transform back to a black-filled texture from camera-perspective without z, just interpolated alphas; and write over the original scene.
Hey! This is only two passes, now, btw.
And it's one transform, not two, as the second no longer requires z...
HEY!!!!
Our first transform is quite precise, because we're going from geometry, --NOT from a freaking z--, to light-perspective z. So we only incur ONE z precision distortion, NOT two. We're NOT writing that per-pixel z to a half-float and then reading it back and transforming it; we go directly from geometry to the light's z!!! Okay, I know what you're thinking: "the standard algorithm doesn't transform camera z to light z"; true, but the standard algo renders z from the light, then transforms from light to camera z, and compares z, so there are two precision distortions.
EUREKA!!!!!!
I think I have a way to compute automatic biases, by making the shadow texture rendering pass aware of the camera's frustum - so, when it renders a texel, it computes an appropriate bias by estimating how much a pixel's sample could deviate from the texel's sample. Highly heuristic... I would have to try that.
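Pure guesswork at what the formula might look like (all names made up; a sketch, not a tested implementation):

Code:
// While rendering a shadow texel, estimate the worst-case offset between
// where the camera will sample and where this texel sampled, and turn it
// into a depth bias through the surface's slope relative to the light.
float estimateBias(float texelWorldSize,  // world size of one shadow texel
                   float pixelWorldSize,  // world size of one screen pixel
                                          // here, from the camera frustum
                   float slopeTan)        // tan of the angle between the
                                          // surface normal and the light
{
    // A camera sample can land up to half a texel plus half a pixel away
    // from the texel centre whose depth we stored.
    float maxDeviation = 0.5f * texelWorldSize + 0.5f * pixelWorldSize;

    // Depth error that deviation can produce on a sloped surface, plus
    // one quantization step of a 24-bit z-buffer as a floor.
    float quantStep = 1.0f / 16777216.0f; // 2^-24
    return maxDeviation * slopeTan + quantStep;
}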
Sounds like a good idea!
Other than that, I found no solution. Perhaps that, added to the paper's techniques, could show good results for inter-object shadowing. But self-shadowing would at least require artists to be careful not to push their luck. There's always a case that cannot be accounted for: parallel planes. In some orientations, they wouldn't get properly shadowed, no matter how you select biasing or resolution allocation.
Hey, parallel planes show artifacts in RL
...IMNSHBNSSO.
What's that?
EDIT:
DOH!!!! We can't transform from camera to light perspective with interpolation, can we? Where did I leave my brains?...
Back to standard algorithm. Here's yet another idea:
The first pass renders from the light's perspective, but writes to two targets, or a combined target. The idea is to generate two z-buffers simultaneously: for each fragment seen from the light, we compute z from the light AND z from the camera, both at full precision but stored in z-precision format. The first z is used to decide whether or not to write. The second z is written if the first is written; not if not. No advantage yet...
On the second and final pass, we have two ways to determine light or shadow:
a) Computing z from the light and comparing to interpolated z from the light (first z from first pass), and
b) Comparing z from the camera to the interpolated z-from-camera from the second z from the first pass.
Interpolation is automatic, of course; just mentioning.
So, now perhaps one of the two methods yields values that are too close to decide, but the other gets a better result. The idea is to multiply the results together, kind of:
factor1 = (computed z from light) / (interpolated read of z from light)
    (lit if > 1; dark if < 1)

factor2 = (z from camera) / (interpolated read of z from camera)
    (quite possibly lit if ~1; for sure not lit if << 1 or >> 1)

Light up the fragment if:
    factor1 - (factor2 - 1.0f)^2 > 1
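Transcribing that straight into code (function and argument names are mine; the inputs are the four z values described above):

Code:
// True if the fragment should be lit under the proposed combined test.
bool fragmentLit(float zLightComputed, // z from the light, computed now
                 float zLightRead,     // interpolated first-pass z (light)
                 float zCamComputed,   // z from the camera, computed now
                 float zCamRead)       // interpolated first-pass z (camera)
{
    float factor1 = zLightComputed / zLightRead; // lit if > 1, dark if < 1
    float factor2 = zCamComputed / zCamRead;     // ~1 when the two agree

    // The squared deviation of factor2 from 1 drags the score down, so a
    // confident mismatch on camera z can veto a borderline factor1.
    float dev = factor2 - 1.0f;
    return factor1 - dev * dev > 1.0f;
}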
EDIT: Still thinking about it...