Yeah, I have to confess to a little bit of exaggeration about its finality; as Klauss wrote this morning:

"Cool. But I bet it will change after coding the shader, where we can see where the packing fails or performs poorly and how to improve it. I wouldn't think it's 'almost done', on the contrary."
And, in fact, I don't know yet what to even name the "angle hack" channel,
or the two detail control channels; and Klauss doesn't know exactly yet what the six
wavelet channels will contain. So it's not ready for wiki-fy-ing just yet; but it will be
once it settles.
The documentation for CineMut and LaGrande will be a huge project we'll have to
tackle at once and get all done; but before we get to that we'll probably have
finished the PU demo with all models CineMut-ized, as we'll need the experience
of USING CineMut and LaGrande fairly extensively to be sure they're stable
enough to document. Documenting prematurely would multiply the amount of
work, if one had to change the documentation on a weekly basis.
So, I posted PNGs of the packing just because I do feel we reached a milestone
in finding a way to organize all these channels into two neat atlases: one using
compression and one not; one devoted to material parameterizations, and the
other devoted to geometry and self-shadowing. It's a moral milestone. A bragging
point.
What I haven't bragged about, because it gets really technical, is the huge, HUGE
improvements to the detail of the channels. Storing differentials in U and V texture
coordinates, for example: LaGrande's Blender noodles can compute these at 32-bit
floating-point precision, off-line, amplify the signals, and store them to those
channels making optimal use of the bits; whereas before, the shader had to sample
the texture 5 times and spend a lot of instructions to compute the differentials in
real time, only to get poor results. This will allow far better and smoother static-light
specular reflections, radiosity, self-specularity and caustics.
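To make the amplify-and-store idea concrete, here's a minimal sketch of the kind of offline quantization described above, in plain Python rather than Blender noodles or GLSL; all names and the error bound are illustrative assumptions, not LaGrande's actual pipeline:

```python
# Hypothetical sketch: compute a differential signal at full float
# precision off-line, stretch it to fill the representable range, and
# quantize it into an 8-bit channel. The scale/bias needed to decode
# would be handed to the shader as constants.

def pack_channel(values, bits=8):
    """Quantize floats into an unsigned integer channel; return the
    quantized values plus the scale/bias needed to decode them."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0           # avoid division by zero on flat signals
    levels = (1 << bits) - 1          # 255 for an 8-bit channel
    quantized = [round((v - lo) / span * levels) for v in values]
    return quantized, span / levels, lo   # values, scale, bias

def unpack_channel(quantized, scale, bias):
    """What the shader side would do: one multiply-add per sample."""
    return [q * scale + bias for q in quantized]

# Example: a differential signal with a tiny dynamic range, which would
# waste almost all of its bits if stored without the stretch.
signal = [0.001, 0.004, -0.002, 0.0035]
q, scale, bias = pack_channel(signal)
decoded = unpack_channel(q, scale, bias)
# Each decoded value lands within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(signal, decoded))
```

The point of the stretch is that after amplification every one of the 256 levels is actually used, instead of the signal huddling in a few of them.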
Having all these textures in two atlases reduces the need for texture units from
about 10 before to 4 now (2 atlases, 1 detail texture, and a cube map).
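The reason packing channels into atlases saves texture units is that one RGBA fetch returns four scalar channels at once. A toy sketch of the idea; the channel layout below is purely illustrative, not CineMut's real packing:

```python
# Sketch of channel packing: maps that used to live in separate
# single-channel textures ride along in one sampler, because a single
# RGBA fetch yields four scalars. Layout here is made up for the demo.

PACKED_LAYOUT = ("specular", "gloss", "angle_hack", "detail_ctrl")

def sample_packed(texel):
    """One simulated RGBA fetch -> the four packed scalar channels."""
    return dict(zip(PACKED_LAYOUT, texel))

texel = (0.8, 0.3, 0.5, 1.0)   # one RGBA value read from the atlas
mat = sample_packed(texel)
assert mat["gloss"] == 0.3     # one fetch served four former textures
```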
There are now 13 samplings for the shader to do, plus 5 of the env map, plus one
of the detail texture... That's 19 so far, and a few more for parallax. May sound like
a lot; but the last version of CM was doing 18 texture samplings for about half as
many features.
In terms of memory requirements, we've gone up a bit, but not by very much.
We've also added anisotropics...
Anisotropic filtering is what allows such things as reflections off CD or vinyl
record surfaces, which show high shininess in one direction but low shininess in
the transverse direction. Some degree of anisotropy is commonly seen in sheet metal,
particularly stainless steel, where the manufacturing method often leaves parallel
scratches on the metal surface.
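The post doesn't show the math, but the shiny-one-way, dull-the-other look can be sketched with an anisotropic specular lobe whose exponent differs along the surface tangent versus across it. This is an Ashikhmin-Shirley-style exponent blend written as plain Python, purely illustrative and not CineMut's actual shader code:

```python
import math

# Anisotropic specular lobe sketch: the Phong-style exponent blends
# between e_t (along the tangent) and e_b (across it), so the highlight
# is wide in one direction and tight in the transverse one.

def aniso_highlight(n_dot_h, h_dot_t, h_dot_b, e_t, e_b):
    """Specular response for a half-vector h, given its dot products
    with the normal n, tangent t, and bitangent b."""
    denom = 1.0 - n_dot_h ** 2
    if denom <= 0.0:
        return 1.0                    # h aligned with the normal: full response
    # Squared direction cosines of h projected into the tangent plane.
    ct2 = h_dot_t ** 2 / denom
    cb2 = h_dot_b ** 2 / denom
    exponent = e_t * ct2 + e_b * cb2
    return max(0.0, n_dot_h) ** exponent

# Wide lobe along t (low exponent), tight lobe along b (high exponent):
# the same tilt of the half-vector gives very different brightness.
tilt = math.sqrt(1.0 - 0.95 ** 2)
along = aniso_highlight(0.95, tilt, 0.0, e_t=10.0, e_b=200.0)
across = aniso_highlight(0.95, 0.0, tilt, e_t=10.0, e_b=200.0)
assert along > across   # bright streak one way, dull falloff the other
```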
But another use of anisotropic filtering, one that's much more badly needed, is for
filtering reflections of the environment on surfaces with a small radius of
curvature. Bevels, for example, desperately need anisotropic filtering for specular
reflections of the env map to look right.
As I was saying to Klauss a while ago, CM will perhaps not shine at any one
particular feature...
* there are better shadows out there than we can achieve
* there are gpu raytracing shaders that can do self reflections better
* there are shaders that can do fairly accurate radiosity
* there are shaders that use like 10 textures for full material BRDF modeling
* there are shaders that use multiple textures for spherical harmonics
* there are shaders that can do incredible bump-mapping parallax
But we've got tricks, some already implemented, some still "on paper", that
will get about 90% results in all of these areas at a fraction of the cost in
terms of instruction count.
So CM will be THE shader that "has it all", --and does it all single-pass.