Brainstorming on shadows

Post by klauss »

"We'll see, where the shadows can't get"

I was thinking of storing the surface's normal. That, added to its depth, could be of some use. But I didn't figure out all the details there either.
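Something like one record per shadow-map texel, say (a hypothetical C++/glm layout, just to fix ideas; none of this is engine code):

Code:

    // Hypothetical per-texel layout: keep the occluder's surface normal
    // next to its depth, so later passes can use both.
    #include <glm/glm.hpp>

    struct ShadowTexel {
        float     depth;   // light-space depth, as in a plain shadow map
        glm::vec3 normal;  // unit normal of the surface that wrote this texel
    };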

Post by chuck_starchaser »

Yeah, I haven't finished thinking yet... :D ... Dunno how to subtract 2.5 inches along the normal from z, since I can't subtract a vector from a scalar. I think this is going to go in the direction of choosing between comparing computed light z to interpolated light z, or comparing camera z to interpolated camera z, depending on whether the normal-to-light angle or the normal-to-view angle is greater; with a bias either way.

Post by chuck_starchaser »

I GOT IT !!!!!

First the solution:

The algorithm substitutes a new term for the bias in present use.
First pass: standard, except no bias introduced.
Second pass:
(Instead of: transforming view x,y,z to light perspective x,y and reading z from the light to compare...)
1) Compute distance from camera to fragment.
2) Compute distance from light to fragment.
3) Add the two together (or do something fancier, like using the dot product of the view and light vectors to adjust bias, but forget this for now.)
4) Multiply that distance by a tweakable constant that determines the bias; to be determined experimentally, but say it's 0.000777. Thus, if the fragment is 5 meters away from the camera and 5 meters away from the light, we get 7.77 millimeters of bias. This may be too little, but maybe not; we shall see...
5) Multiply that by the normalized surface normal.
6) Add the resulting 7.77 mm long vector, our 3D bias, to the camera-to-fragment vector.
7) NOW, use *that* point's x,y,z to transform to light-perspective x and y, and get a z value to compare with its computed distance to the light, in the standard manner; render if not in shadow.

Why does this work (or definitely should)?

a) It automatically compensates for the increasing sampling error when the view ray or light ray is nearly tangent to the surface.
b) It scales the bias with distance (to both camera and light).

What it results in is, effectively, a 3D bias: It makes the shadow of an object shrink inwards from the surface of the object. Actually, no; it makes objects expand outwards a little when tested for illumination. In the case of an object's own shadow, its shadow remains recessed under the surface of the expanded object, pretty evenly. But the perceived result is of shadows getting narrower, anyhow.
The only artifact I foresee this introducing is that slight narrowing of shadows, but it's a very hard artifact to spot, as the viewer would need to measure angles accurately to tell that they aren't exactly right. The narrowing does increase with distance, though, so a broomstick 59 yards away might lose its shadow completely.
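In rough code, the seven steps might look like this (a hypothetical C++/glm sketch; toLightClipSpace, depthFromLightMap and the 0.000777 constant are made-up stand-ins, not anybody's engine code):

Code:

    #include <glm/glm.hpp>

    glm::vec2 toLightClipSpace(const glm::vec3& p);  // assumed: project into the light's view
    float depthFromLightMap(const glm::vec2& uv);    // assumed: shadow-map lookup

    const float kBiasScale = 0.000777f;  // tweakable constant from step 4

    bool isLit(const glm::vec3& fragPos,   // fragment position, world space
               const glm::vec3& normal,    // normalized surface normal
               const glm::vec3& camPos,
               const glm::vec3& lightPos)
    {
        float dCam   = glm::length(fragPos - camPos);    // step 1
        float dLight = glm::length(fragPos - lightPos);  // step 2
        float bias   = (dCam + dLight) * kBiasScale;     // steps 3 and 4

        // Steps 5 and 6: adding bias * normal to the camera-to-fragment
        // vector lands at fragPos + normal * bias, a point pushed a few
        // millimeters off the surface.
        glm::vec3 biased = fragPos + normal * bias;

        // Step 7: project the biased point into the light's view and
        // compare its distance to the light against the stored depth.
        float stored = depthFromLightMap(toLightClipSpace(biased));
        return glm::length(biased - lightPos) <= stored;
    }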


Now, this method also suggests one way of computing soft shadows; but it gets a bit complex; definitely ps3 (pixel shader 3.0) material:

Same as above, steps 1 through 7, except that if we find the fragment is in shadow, we will now find out "how much" in shadow it is:

Call the angle of the disc of the star alpha.
We take the difference between the shadow depth, from the depth map, and the actual distance from the sample to the light; i.e., we calculate how far away the thing shadowing the sample is. We multiply that by the sine of alpha and use the resulting length as our new bias, instead of, or in addition to, the 7.77 millimeters or whatever basic bias we obtained before. AND, we go back to step 5, except that this time we rotate this 3D bias vector so that it is normal to the light vector, pointing in the direction of the shortest way out of the volume of the shadow. We can approximate that direction by testing the bias in 4 directions normal to the light vector, repeating steps 6 and 7 for each. If none of the new points is in light, the original point is 100% in shadow. But if one or more of those directions tested positive, we reorient our bias in one of 8 directions, depending on the pattern of the previous four tests, so as to point the likely shortest way into the light.
In the next loop we begin by halving the length of our bias vector and repeating steps 6 and 7.
If the new point is in shadow, we're at over 50% in penumbra; otherwise we're at less than 50% penumbra.
Wash and repeat, binary search-like.
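Roughly, the search might go like this (again a hypothetical C++/glm sketch with the same made-up helpers; the probe direction and alpha are taken as given):

Code:

    #include <cmath>
    #include <glm/glm.hpp>

    glm::vec2 toLightClipSpace(const glm::vec3& p);  // assumed helper
    float depthFromLightMap(const glm::vec2& uv);    // assumed helper

    // 'biased' is the shadowed point from step 6, 'offsetDir' the direction
    // normal to the light ray that the four-way probe picked as the shortest
    // way out, and 'alpha' the angle of the star's disc.
    float shadowFraction(const glm::vec3& biased, const glm::vec3& lightPos,
                         const glm::vec3& offsetDir, float alpha)
    {
        // Distance behind the occluder, times sin(alpha), gives the
        // penumbra-sized step described above.
        float dLight = glm::length(biased - lightPos);
        float occluderDepth = depthFromLightMap(toLightClipSpace(biased));
        float step = (dLight - occluderDepth) * std::sin(alpha);

        // Binary search: a shadowed probe means we're deeper in than the
        // current fraction; a lit probe means shallower.
        float lo = 0.0f, hi = 1.0f;
        for (int i = 0; i < 4; ++i) {
            float mid = 0.5f * (lo + hi);
            glm::vec3 probe = biased + offsetDir * (step * mid);
            bool lit = glm::length(probe - lightPos)
                     <= depthFromLightMap(toLightClipSpace(probe));
            if (lit) hi = mid;  // the shadow's edge is nearer than mid
            else     lo = mid;  // still in shadow: the edge is farther out
        }
        return 0.5f * (lo + hi);  // ~1 = deep in shadow, ~0 = at the lit edge
    }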

Post by klauss »

Hm... I see where you're going with the bias.
Might work... I'll have to make a proof-of-concept.
That may take some time.

Post by chuck_starchaser »

Glad you think it might work. I really think it will; and even better in conjunction with that paper; though in post-transform screen space, we may need to go back to a constant bias, rather than a multiplicative one.

Now, to tackle the problem of shadows between objects again. Okay, if generating geometry out of the per-unit shadow maps in shaders isn't doable, I have another solution: doing it on the CPU. And it's NOT that I'm not aware of the problems with that; but let me explain:

1) The problem with reversing the data direction on the bus (AGP, I mean), from what I've read, is that the switch takes a long time. If we switch only once and send everything we need to send back once per frame, that shouldn't take long.

2) The maps are big, but all we need to send is a 1-bit-per-texel version of each depth map, which is highly amenable to run-length compression, if there's any way to do that in a shader.

3) We can send these maps to the CPU *after* we flip buffers, and get the stencil shadow geometry update back from the CPU in time for the next frame. This does NOT have to result in shadows that lag the objects' movements by one frame, however: the *positions* of those geometries can be tied to the current-frame positions of the units that own them. It's only the angular rotation of the model casting the shadow that would lag behind by a frame.
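For point 2, the CPU-side encoding might look something like this (an illustrative sketch; the threshold against the far plane and the run layout are assumptions):

Code:

    // Collapse a depth map to one bit per texel (does anything cast a
    // shadow here?) and run-length encode it before the trip over the bus.
    #include <cstdint>
    #include <vector>

    std::vector<uint32_t> rleEncodeMask(const std::vector<float>& depth,
                                        float farPlane)
    {
        std::vector<uint32_t> runs;  // alternating run lengths, starting with 0s
        bool current = false;
        uint32_t len = 0;
        for (float d : depth) {
            bool occupied = d < farPlane;  // the 1-bit version of the texel
            if (occupied == current) { ++len; continue; }
            runs.push_back(len);           // close this run, start the next
            current = occupied;
            len = 1;
        }
        runs.push_back(len);               // final run
        return runs;
    }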


Needless to say it again, but I'd love to help with the shader programming; I just need some directions on how to get started. When I did that shady planet thing, I started from a simple shader demo I downloaded and modified; but when I got the Ogre development CVS I had no idea what to do with it. The shader code looked very neat, with pages of related subroutines here and there... super spiffy; but I couldn't follow what called what; I couldn't see the forest for the trees...

Post by klauss »

BTW: I found a solution for LODded stencil shadows.
It's not my idea... it came from the net... but it involves biasing the stencil mesh manually.

It's something an artist must do, but it works fine - you just make sure your "shadow caster" mesh lies within the "visible mesh" at all times.
Then, you can have both self-shadowing and hi-poly models. :D

Post by chuck_starchaser »

Hmmm.... I see a logistical problem with that approach, though: It would add to the work of making a ship. Personally I don't mind, but generally speaking, modelling is a pretty scarce human resource, and if we make the work harder to do, it will be harder to get.

It could work if the shadow mesh could double as an LOD, as you seem to suggest; but I see two problems with that:

a) Less detail for the shadows
b) Forces having an LOD in the first place, and/or forces having a medium-high detail LOD.

The other possibility is developing some software, plugin, or extension to mesher that generates the shadow mesh; but if we do that, the generated mesh cannot easily double as an LOD, because if the program automatically "optimizes" the mesh after shrinking it, it will be near impossible to get the seams for the UV unwrapping going the same way, and so the LOD will need a complete redo of the unwrapping and texturing.

Unless the software preserves seams and seam vertexes, that is; that might be doable...

Post by klauss »

I think I'll take that approach, though.

a) It's optional.
b) Requires no work (I think, at least - the framework I'm working on can easily accommodate that, given some extra enums).
c) Works on both texture and stencil shadows... only it solves nothing with texture shadows (only speeds them up).
d) Allows me to use Ogre's builtin shadow generation, which is a lot of work already done.
e) It can easily be automated in a way that would serve most applications (bah... easily, I don't know. But possible it is).
f) Why would you want it to serve as LOD if you're going to automatically generate it? Besides... I think it can't have seams if you're using stencil shadows (it may be possible to have them, if the "edge detector" detects and rejoins them - which I think it does).
g) If the better texture shadows are implemented, this method is still applicable on top of it.

All good things.

Post by chuck_starchaser »

Okay, I'm sold; so we need to think of a way to autogenerate this shadow mesh, I suppose? Hmmm... If it were just a matter of shrinking along normals it would be one thing... The problem is with smoothing groups and vertex normals. Ah, never mind; first we rejoin duplicate vertices, copy the normals that come with the faces, then recompute the facet normals of the smooth groups... So, maybe it's time I get back to work on that obj-reading code for that album thing. Can't remember what I got stuck on...
And it has to be easily further shrinkable along normals in the shader, if we want to avoid z artifacts by increasing the bias with distance... So we need non-normalized vertex normals, rather than face normals, with lengths that indicate how much to move the vertex in order to shrink the faces along their normals...
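Something like this, say (hypothetical types; the per-vertex vector is that non-normalized normal, its length encoding how far the vertex must move so the faces recede evenly):

Code:

    #include <glm/glm.hpp>
    #include <vector>

    struct Vertex {
        glm::vec3 position;
        glm::vec3 shrinkDir;  // non-unit: direction and per-vertex amount
    };

    // Pull the shadow mesh inward; 'scale' can later grow with distance
    // (e.g. in the vertex shader) to keep the bias effective at long range.
    void shrinkAlongNormals(std::vector<Vertex>& mesh, float scale)
    {
        for (Vertex& v : mesh)
            v.position -= v.shrinkDir * scale;
    }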