New Hornet model

Discuss the Wing Commander Series and find the latest information on the Wing Commander Universe privateer mod as well as the standalone mod Wasteland Incident project.
micheal_andreas_stahl
Elite Hunter
Posts: 1030
Joined: Mon Apr 10, 2006 10:02 am
Location: Gemini, Troy, Helen

Post by micheal_andreas_stahl »

It is a pity you can't get that Hornet done quickly enough. I was going to badger the Privateer people to get it in for confed/militia patrols.
"The bullets come out of the slim end, mate!"

Sniper after dominating another Sniper
Team Fortress 2
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Well, DAO is right now per-pixel, as I couldn't yet find a way to export per-vertex data into RenderMonkey (I may be able to do it much more easily for the ingame engine, but debugging/tweaking the ingame engine is much harder, so I do all previous work in RM instead).

As for parallax mapping, it's a complicated thing to explain, but it more or less involves several (complex) raytracing steps.

Let me try to find the paper...

EDIT: Did find the paper

EDIT2: Look - pretty. (That's soft self-shadows done by making use of the DAO data, called PRT - cool, huh?) BTW, I managed to optimize parallax raytracing quite a lot, allowing me to squeeze DAO into the 2.0 parallax mapping shader :D
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
DualJoe
ISO Party Member
Posts: 387
Joined: Tue Mar 21, 2006 2:37 pm
Location: Netherlands
Contact:

Post by DualJoe »

@Chuck: Your solution with the Gimp works :D. (BTW, Klauss and I posted the textures in this thread not that long ago.)

@Klauss: I've gotten RenderMonkey to play nice, and it looks even better when everything moves. Great work on that shader. I have run into a little snag, however, and need your help.

The glossy look I mentioned earlier is the effect of the added env-map reflection. Am I doing something wrong, or is the alpha channel of the specmap also linked to the mirror/envmap? If so, could it be separated to allow for more freedom with materials? What I'm trying to do is give the metal a relatively high amount of specular and reflection but broad specular highlights, and the green paint the exact opposite. While the metal is looking good, I can't get tight highlights on the paint without making the paint more reflective of the environment. I have a working solution by making the highlights on the paint broader than on the metal, but it makes the paint look more like powder than I would like.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

It's quite easy, actually, since direct lighting is done in a separate pass.
If you can more or less find your way through RM, you'd have to add a texture and change the "specMap" references in the "GI" pass to a different texture (that would allow you to specify different textures for the GI and direct passes). The GI "specmap" manages environment reflections; the direct "specmap" manages classic specularity.
Having the same texture but separate "modulation colors" (like fvSpecular) is also possible and easy, but it involves a bit more grunt work. Namely, add a color variable, and change the name of the variable in the shader code itself (only the vertex shader) so that it points to the new, separate variable instead.
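The effect of those separate "modulation colors" can be sketched in plain Python (not shader code; fvSpecular is the variable the shader already has, while fvGISpecular is a hypothetical name for the added one):

```python
# Toy sketch: one specmap sample, tinted independently per pass.
# fvSpecular exists in the shader; fvGISpecular is a made-up name for the
# separate "GI" modulation color being discussed.

def modulate(sample, color):
    return tuple(s * c for s, c in zip(sample, color))

spec_sample  = (0.5, 0.5, 0.5)     # grayscale specmap texel
fvSpecular   = (1.0, 1.0, 0.8)     # direct-specular tint
fvGISpecular = (0.25, 0.25, 0.25)  # dimmer environment-reflection tint

direct_term = modulate(spec_sample, fvSpecular)    # classic specularity
gi_term     = modulate(spec_sample, fvGISpecular)  # env reflection, dimmed
```

This is exactly the artist-side tweaking of relative intensities mentioned below: the same texture, scaled independently per stage.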

However, I should point out that there's not much reason to do that. Maybe to have separate colors, but not separate textures. Classic specularity is the exact same phenomenon as environment reflections. Thus, when they're treated in a unified way, as they are now, the overall look gains a heightened consistency, and it looks best that way. The only problem is relative intensities: the not-physically-correct way things are computed introduces some dynamic-range issues, blah, blah, meaning relative intensities among the different features may not be correct. In that case, separate "modulation colors" may be useful to tweak them manually.

So... in synthesis: I'm already considering adding separate "GI" colors so that artists may tweak both stages independently, but I don't think having separate textures is of any use (though it's quite trivial - trivial enough so that the Ogre "templates" will certainly have that possibility, since they can be made to fall back to using the same for both if only one is specified - is that cool or what?).
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
DualJoe
ISO Party Member
Posts: 387
Joined: Tue Mar 21, 2006 2:37 pm
Location: Netherlands
Contact:

Post by DualJoe »

Thanks for the answers, Klauss. Unfortunately, I've come up with more questions.

I saw some references in the Wiki to logos and faction painting being separate textures, mapped to certain points on top of the model. If paint and logos are separate, then the main model and textures are finished.

What is the verdict on the glass material? I currently made it transparent blue, but it's completely opaque in RenderMonkey.

What should I do for the damage maps, glowmaps, lights and such? Or will this all be done dynamically by the new engine? I could also use some info on subunits and things like engine glow. What should I do with the guns and landing gear?

In the meantime I would like to make a complete package that can be used in the current engine for the folks over at the Privateer thread. Sadly, I haven't got a clue how to do that. I read the wiki, but found it very confusing. Can someone point me in the right direction?

In short, what still needs to be done to have a complete model working ingame, both for the old and for the new graphics engine?

OT
I think someone should have a go at rewriting the wiki to make it more coherent. I know a lot of things are in flux at the moment, but a central place for the most up-to-date info is better than having to read through loads of posts for single pieces of information.

On a side note, it may be a good idea to include a content-development manual next to the player manual. I'm starting to feel that making your own contribution to Vega Strike, or its mods, is half the game. In my case, I've spent countless more hours making the Hornet than I did actually playing Vega Strike or Privateer.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Dual, glad it worked.
Klauss, those pics of the engines are amazing. That self-shadowing is really dynamic?! Superbly realistic. There's some excessive darkness around the weapon mounts on the wings, but I suppose that's painted on. When's transparency coming? Can't wait to see the pilot and the interior through the glass.
The paper is interesting, tho it doesn't explain much. What's "tangent space"? Some kind of Laplace transform?
micheal_andreas_stahl
Elite Hunter
Posts: 1030
Joined: Mon Apr 10, 2006 10:02 am
Location: Gemini, Troy, Helen

Post by micheal_andreas_stahl »

Would it be able to be used in Privateer?
"The bullets come out of the slim end, mate!"

Sniper after dominating another Sniper
Team Fortress 2
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

chuck_starchaser wrote:Klauss, those pics of the engines are amazing. That self-shadowing is really dynamic?!
Yep - completely dynamic. Low-resolution, though. I could up the resolution quite a bit using SH, as with a true PRT, but I have no idea whatsoever how to precompute true SH-based PRT. Thus, I'll stay with this simplified way, which can be easily baked with Blender's radiosity tool. Most PRT papers consider the precomputing stage a prerequisite, as if you had to know how to do that already, or there were tools all over the place. Perhaps there are... but I don't know any.
chuck_starchaser wrote:There's some excessive darkness around the weapon mounts on the wings, but I suppose that's painted on.
Yes. I think they're meant to be scorch marks or some kind of stuff like that.
chuck_starchaser wrote:When's transparency coming?
Transparency is tricky. As in, it's tricky combining multiple passes on transparent materials. Transparent materials, thus, are better done in a single pass, but all this DAO stuff is done in multiple passes right now. And I suspect it always will be, as the PRT textures are plural, and thus squeezing all the needed textures into a single pass would be a challenge: diffuse, specmap, normalmap, PRTP, PRTN, glow, environment-D, environment-F. That's a lot (I need 2 PRT textures and 2 environments with different filterings).

Ok... anyway... transparency is tricky in that if it has to be done in multiple passes, a lot of consideration has to go into each pass. BUT it is indeed possible, and only a matter of careful configuration. That means that though RenderMonkey doesn't do transparency, the final materials in Ogre will. I'll probably specify two variants, though, one with and one without transparency, as a material will probably be much more efficient if it can be considered fully opaque. Oh... and that's really important, as the shaders are reeeally heavy and overdraw must be avoided (that's the difference between 150 FPS and 30 FPS).

So... the glass should go in a separate sub-object with its own "transparent material", for best efficiency. I know I always said multiple batches are bad, but in this case it's actually beneficial (you must have noticed by now how subtle all this is). The thing is, with transparency I must guarantee back-to-front ordering, but that maximizes overdraw. Hence, I must draw all opaque things beforehand, front-to-back (if possible, or unordered with z-write only, and thus very quickly, minimizing "heavy" overdraw). That leaves only a tiny portion to draw back-to-front, which is the cockpit glass.
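The pass ordering described here might be sketched like this (illustrative Python; the batch fields and depths are made up, not engine structures):

```python
# Illustrative sketch of the draw ordering: opaque batches front-to-back
# (minimizing heavy overdraw), then transparent batches back-to-front
# (so alpha blending composes correctly).

def order_batches(batches):
    opaque = sorted((b for b in batches if not b["transparent"]),
                    key=lambda b: b["depth"])        # nearest first
    transparent = sorted((b for b in batches if b["transparent"]),
                         key=lambda b: -b["depth"])  # farthest first
    return opaque + transparent

batches = [
    {"name": "glass",   "depth": 4.0, "transparent": True},
    {"name": "hull",    "depth": 5.0, "transparent": False},
    {"name": "engines", "depth": 9.0, "transparent": False},
]
draw_order = [b["name"] for b in order_batches(batches)]
# opaque parts first, front-to-back; the cockpit glass is drawn last
```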

Needless to say, this is all about true alpha blending, as in semitransparent stuff. If your transparency is of the fully-opaque-against-fully-transparent kind, there's a better technique for that which does not require special sorting or anything, called alpha testing, and it can be applied to multiple passes just as well. That would be the preferred kind, of course, since it's most efficient and all, but it's not always artistically applicable.

<OT climax>

chuck_starchaser wrote:The paper is interesting, tho it doesn't explain much. What's "tangent space"? Some kind of Laplace transform?
Remember linear algebra?
Ok... picture a vector in Euclidean 3-space. You can represent it in any basis, but we usually choose the "canonical" basis, which is a special case of an orthonormal basis. But in any basis, a vector is represented by 3 coordinates.
Basically, and risking telling you a lot you already know:

Base b = { i,j,k } <-- i,j,k in R3
V = V1 * i + V2 * j + V3 * k = (V1,V2,V3)b <-- V1, V2, V3 in R

That means, V is represented by the coordinates V1,V2,V3 in base b.

Ok... since the b is an orthonormal base, you have:

V1 = dot(V,i)
V2 = dot(V,j)
V3 = dot(V,k)
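That coordinates-from-dot-products identity is easy to sanity-check numerically; this is just an illustrative example with a rotated orthonormal basis:

```python
import math

# Numeric check of the identity above: in an orthonormal basis, a vector's
# coordinates are its dot products with the basis vectors. The basis here
# (the canonical basis rotated 30 degrees about z) is purely illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
i = (c, s, 0.0)
j = (-s, c, 0.0)
k = (0.0, 0.0, 1.0)

V = (1.0, 2.0, 3.0)

V1, V2, V3 = dot(V, i), dot(V, j), dot(V, k)

# Rebuilding V from those coordinates recovers the original (up to rounding):
rebuilt = tuple(V1 * bi + V2 * bj + V3 * bk for bi, bj, bk in zip(i, j, k))
```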

Now... i,j,k could be any orthonormal base.
Picture, thus, a triangle in a mesh. That triangle has a normal, N.

Let's, arbitrarily, say k=N.
That leaves us with a whole plane in which to choose another vector, i or j, since all we have to guarantee is orthonormality - that is, dot(i,N)=0, dot(j,N)=0, dot(i,i)=1, dot(j,j)=1. (dot(N,N)=1 already)

Let's choose, then, i=T as the "tangent" vector. What's the tangent? the direction in which the local UV gradient is (1,0) - basically, the direction of the positive U axis (I mean... texture coordinates, the direction inside the triangle in which texture coordinates grow in U and only in U).

The binormal=B, being cross(T,N), is (oh coincidence) the direction in which the gradient is (0,1). You have to do a little math to prove that, but it's true.

Ok... so if you set i=T, j=B, k=N, you have trivial representations of all vectors of interest (the normal, the tangent, and the binormal).
The computation of T, B and N is done on a per-vertex basis, based on neighbouring vertices and texture coordinates, in the CPU, as part of the conversion process.

If you represent vectors in this base, now, you have "tangent space". It's the same space, but centered around the current vertex, and represented in base ts={T,B,N}, so N=(0,0,1)ts, T=(1,0,0)ts, B=(0,1,0)ts.

This is really neat, actually: if you now encode surface normals (in a normalmap) in this basis, then since N is the geometry's normal and not the "lighting normal", they'll be close (but not equal) to (0,0,1). But more importantly, since they're based on N, if you rotate N you rotate the normal map too; thus, you can freely deform the geometry, and the normalmap will "adapt" on its own. That's cool.

Furthermore, it's much easier to do projections onto the face's plane: just delete the third coordinate. In synthesis, this representation is very handy for working with surface-related stuff, since it centers everything on the surface, making your surface representation very simple (the TB plane), and thus most calculations simple too.
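The per-triangle T/B/N computation can be sketched with the standard UV-gradient construction (the triangle positions, texture coordinates, and helper names below are made up for illustration):

```python
import math

# Sketch of the per-triangle tangent frame described above: T points along
# growing U, B along growing V, N is the geometric normal.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def tangent_frame(p0, p1, p2, uv0, uv1, uv2):
    e1, e2 = sub(p1, p0), sub(p2, p0)
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)  # blows up on degenerate UVs
    T = normalize(tuple(r * (dv2*a - dv1*b) for a, b in zip(e1, e2)))  # +U
    B = normalize(tuple(r * (du1*b - du2*a) for a, b in zip(e1, e2)))  # +V
    N = normalize(cross(e1, e2))       # geometric normal
    return T, B, N

# Triangle in the xy-plane with U running along +x and V along +y:
T, B, N = tangent_frame((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
# T ~ (1,0,0), B ~ (0,1,0), N ~ (0,0,1)
```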

There are caveats, though. Since the vectors that make up the basis, TBN, are interpolated linearly by the rasterizer (as are all vectors represented in the basis), you'll find that under some conditions things lose correctness.
For instance, if your TBN vectors vary wildly across a surface, intermediate values won't form an orthonormal basis (they'll lose that property). More importantly, linearly interpolated vectors represented in tangent space won't match the tangent space itself, so errors start to happen and... well... bad things along with them. To avoid such wild variations, you have to ensure a certain tessellation/unwrapping pattern and be careful with some other modelling aspects. I don't know all of it; someone really should dig up those guidelines and make them part of a modelling how-to, because fixing those issues, while possible, is not easy, since it relates tessellation and unwrapping as a whole.
Hm... ranting already...


</OT>


To use this model with the old engine (e.g. Privateer):

a) No landing gear. Sorry.
b) Mix the DAO-PRT with the diffuse map, to have static AO baked in. How? Add up all components, making a grayscale pattern (merge the two with additive or 50% alpha blending, then convert to grayscale). Adjust contrast to make most of it white, and then modulatively merge it with the diffuse map. Perhaps you'll want to do the same with the specmap (but with different contrast).
c) Also bake the bumpmap into the diffuse map (it may also help with the specmap). I'm not exactly sure how to do this, but I think the GIMP has a plugin or builtin function to do this kind of thing.
d) Remove alpha from the specmap - the old engine does not support that.
e) Done.
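Step b) can be sketched as per-pixel arithmetic (illustrative Python only; in practice you'd do this with GIMP layers, and `contrast` is a made-up knob):

```python
# Illustrative sketch of step b): combine the two PRT/AO components 50/50,
# stretch contrast so most of the result ends up white, then multiply it
# into the diffuse map. Pixel values are floats in [0, 1].

def bake_ao_into_diffuse(diffuse, ao_a, ao_b, contrast=2.0):
    baked = []
    for d, a, b in zip(diffuse, ao_a, ao_b):
        ao = 0.5 * (a + b)                                    # 50% blend
        ao = max(0.0, min(1.0, 0.5 + contrast * (ao - 0.5)))  # toward white
        baked.append(d * ao)                                  # modulative merge
    return baked

# Fully lit AO leaves the diffuse untouched; heavily occluded pixels darken.
```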

About engine thingies: in the old engine, they go in "Lights" in units.csv. You have to figure out the coordinates and size (only the radial size; the other size is hardcoded, which is bad, I know), and put them in those fields. I'm not sure about the details; perhaps you'll figure them out, as I did, by looking at the other ships, or perhaps someone else knows all the details, or you can just ask me and I'll look them up again.

In the new engine, engine thingies will work the same way (you have to add their coordinates in the corresponding place, which will NOT be units.csv; that format is awful), but I'm not sure which features to add. The main one is higher flexibility: you'll be able to specify the direction in which they fire, their sizes in all dimensions, etc., all of which is needed to add not only main engines but also maneuvering thrusters and reverse-thrust engines. There may be other features too, like shader-based materials (for better effects) and stuff like that. I haven't gotten to that part yet.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
DualJoe
ISO Party Member
Posts: 387
Joined: Tue Mar 21, 2006 2:37 pm
Location: Netherlands
Contact:

Post by DualJoe »

klauss wrote:
chuck_starchaser wrote:There's some excessive darkness around the weapon mounts on the wings, but I suppose that's painted on.
Yes. I think they're meant to be scorch marks or some kind of stuff like that.
My mistake, I had forgotten to turn off the AO layer. I'll post updated versions after I've had another go at the engines and gun mounts. Those are the last bits that still bother me.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

klauss wrote:Basically, and risking telling you a lot you already know:
No risk at all; I didn't understand a word of it. Anyways, the self-shadowing is amazing; can't imagine how you do it.
About the transparency thingy: just keep in mind that the only situation where you'd care about alpha in most ships is to make holes in them, i.e. alphatest. For windows... windows can have separate geometry and simply specify a "thickness", along with transmissive and reflective colors. No texture. The pixel shader could modulate that "thickness" by the secant of the angle between the view direction and the normal, to account for the increased traversal of the light when the angle is shallow (mind you, refraction will shorten that distance), and conversely, the sine of the angle could modulate reflectivity. Something along those lines; you'd know better. But the idea is: separate shader, separate geometry, for windows and stuff. JMHO.
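As a rough numeric sketch of that suggestion (plain Python, not an actual engine shader; the helper function and its parameters are hypothetical):

```python
import math

# Sketch of chuck's glass idea: path length through the pane grows with the
# secant of the viewing angle, reflectivity with its sine.

def glass_terms(cos_view_angle, thickness, base_reflectivity):
    cos_va = max(cos_view_angle, 1e-4)  # guard against grazing-angle division
    traversal = thickness / cos_va      # thickness * secant(angle)
    sin_va = math.sqrt(max(0.0, 1.0 - cos_va * cos_va))
    reflectivity = base_reflectivity * sin_va
    return traversal, reflectivity

# Head-on (cos = 1): traversal is just the pane thickness and there's no
# extra reflection. Shallow angles lengthen the path and boost reflection.
```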
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

chuck_starchaser wrote:
klauss wrote:Basically, and risking telling you a lot you already know:
No risk at all; I didn't understand a word of it.
Ok... simplest form ever:
Tangentspace is a coordinate system in which a surface's normal is (0,0,1), (1,0,0) points in the direction of growing U coordinates, and (0,1,0) points in the direction of growing V coordinates. (0,0,0) touches the surface, too.

When one says a shader does its work in tangentspace, it means light and camera vectors are all sent in this coordinate system. Since it's a geometry-centric coordinate system (makes geometry be at the center of things), it usually simplifies modelling of complex surfaces (that is, treating a simple surface, like a triangle, as if it were complex, as a volumetric layer or a bumpy surface).
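"Sending light and camera vectors in this coordinate system" is just three dot products; a minimal illustrative sketch (the frame below is made up for the example):

```python
# Projecting a world-space vector into tangent space: dot it against the
# T, B and N vectors of the surface's frame.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, T, B, N):
    return (dot(v, T), dot(v, B), dot(v, N))

# A face whose geometric normal points along world +x:
T, B, N = (0, 1, 0), (0, 0, 1), (1, 0, 0)
n_ts = to_tangent_space(N, T, B, N)  # the normal itself becomes (0, 0, 1)
```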

It has issues, mostly with disorderly tessellation or unwrapping of geometry. It's the modeller's task to avoid those issues.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

klauss wrote:
chuck_starchaser wrote:
klauss wrote:Basically, and risking telling you a lot you already know:
No risk at all; I didn't understand a word of it.
Ok... simplest form ever:
Tangentspace is a coordinate system in which a surface's normal is (0,0,1), (1,0,0) points in the direction of growing U coordinates, and (0,1,0) points in the direction of growing V coordinates. (0,0,0) touches the surface, too.
Ah, now; that makes it a lot easier.
It has issues, mostly with disorderly tessellation or unwrapping of geometry. It's the modeller's task to avoid those issues.
I can feel some of the issues already, though the ones I feel aren't necessarily unique to tangent space. In the first figure below, with the lower line representing a cross-section of a surface, linear interpolation between parallel normals N1 and N2 is a set of parallel normals, even though the normal halfway in between should actually be rotated a bit counterclockwise, as per the modeller's intention.

Code:

       N2 _
         |\
           \
            \________
            /v2
  N1 _     /
    |\    /
      \  /
_______\/
        v1



       N2 _
         |\
           \
            \____+___
            /v2
  N1 _     /
    |\    x
      \  /
___+___\/
        v1
The solution is to add cuts: first one at the 'x', which will produce a correct normal in between the other two. Then we need two more cuts at the '+' signs, if the upper and lower surfaces are meant to look flat, so as to avoid N1 and N2 getting interpolated smoothly towards the left and right, respectively. But as you probably remember, this is the same issue that led me to putting quadruple lines along the small-radius edges on the body of the Hornet and where the wings are welded. I guess this becomes even more pressing when normal maps are used, as they draw more attention to naive normal interpolation.
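The figure's point can be restated numerically (illustrative Python): linear interpolation between parallel normals never produces the rotated in-between normal the modeller intended, which is why the cut at the 'x' is needed.

```python
# The rasterizer linearly interpolates vertex normals, so between two
# parallel normals N1 and N2 every interpolated normal is still parallel;
# the rotated normal intended on the slanted section never appears
# without an extra cut.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

N1 = (0.0, 1.0, 0.0)
N2 = (0.0, 1.0, 0.0)         # parallel to N1
halfway = lerp(N1, N2, 0.5)  # still (0.0, 1.0, 0.0): straight up
```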
DualJoe
ISO Party Member
Posts: 387
Joined: Tue Mar 21, 2006 2:37 pm
Location: Netherlands
Contact:

Post by DualJoe »

Just popping in to say I'm still alive and have not forgotten about this.

I can't work on the Hornet at the moment, because I've been called back into active service and haven't got access to a suitable computer or internet-connection. I also don't have an estimate on when or if I'll have time to work on it anytime soon. I'm expecting my orders next week.

I'll keep you posted.
spiritplumber
Developer
Posts: 1831
Joined: Mon Mar 07, 2005 10:33 pm
Contact:

Post by spiritplumber »

Be safe and take care, and thanks!
My Moral Code:
- The only sin is to treat people as if they were things.
- Rules were made for people, not the other way around.
- Don't deceive. Real life is complicated enough.
- If all else fails, smash stuff.
mkruer
Site Administrator
Posts: 1089
Joined: Thu Jan 02, 2003 10:07 am
Contact:

Post by mkruer »

spiritplumber wrote:Be safe and take care, and thanks!
Ditto
I know you believe you understand what you think I said.
But I am not sure you realize that what you heard is not what I meant.

Wing Commander Universe Forum | Wiki
Wing Commander: The Wasteland Incident
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

mkruer wrote:
spiritplumber wrote:Be safe and take care, and thanks!
Ditto
Ditto²
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
spiritplumber
Developer
Posts: 1831
Joined: Mon Mar 07, 2005 10:33 pm
Contact:

Post by spiritplumber »

Meow... Offtopic, but related in that it's about models.

Nobody was likely to do some WC2-era space stations, so I did.

http://www.spiritplumber.tzo.com/caernarvon.png Caernarvon Station
http://www.spiritplumber.tzo.com/wc2depot.png WC2 Terran depot
(kilrathi depot is already in WCU)

http://www.spiritplumber.tzo.com/fuelbase.png My version of the Pioneer fuel depot that's currently big news on the CIC (this took me about 80 minutes to make)
My Moral Code:
- The only sin is to treat people as if they were things.
- Rules were made for people, not the other way around.
- Don't deceive. Real life is complicated enough.
- If all else fails, smash stuff.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Wow; they look great.
I'm thinking, maybe I went overboard with the carrier...
spiritplumber
Developer
Posts: 1831
Joined: Mon Mar 07, 2005 10:33 pm
Contact:

Post by spiritplumber »

Meow? How so?

http://www.spiritplumber.tzo.com/kala.png Doodle from a drawing by BradMick I saw once. Supposedly a predecessor of the dralthi.
My Moral Code:
- The only sin is to treat people as if they were things.
- Rules were made for people, not the other way around.
- Don't deceive. Real life is complicated enough.
- If all else fails, smash stuff.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Well, in that I made the work so hard for myself, with all that separation between armor and hull, that I ended up in a rut. I think I over-reached. Anyways, we're stepping all over Dual's thread here. I want to get back to work on the Bengal again, but it was getting so hard towards the end I'm afraid to touch it. Anyways, right now I have a real excuse: a rush at work.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Chuck... one thing I do when I can't make head or tail of what I myself wrote: start over.

Perhaps if you start over the Bengal, you'll get much better results. If you do that, you might consider the first one as a "concept" work, trying to figure out how to best do it, and now (with all the added knowledge), you do it right from step 1.

That's an idea, if you find yourself lost within your own work. Not to say that the bengal doesn't look great.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Yeah, I had that idea in my mind well suppressed. I think you're right, as usual. And now I know a lot more about Blender than I knew back then...
Still, that's a lot of work to throw away...
Another thing I could do is stitch the outer armor back together and use alpha testing for the seams. How would that play, shadows-wise? What kind of shadow tech did you settle on, finally, if you did? And what about the old question of texture reuse: is your present algorithm UV-based, or can I use a section of the texture multiple times?
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Self-shadows would use the soft-shadows technique with the PRT-ish GI baking. The only problem is that they are... soft. And soft isn't always right. I'm researching a few hybrids to be able to get approximate hard shadows with that as well. It would be a hybrid-resolution PRT, and it would certainly qualify as paper material if it worked ;) - but it's still just in my head for now.

Alpha testing would be troublesome with that, unless you get an alpha-aware radiosity baker. But... you can replace the alpha-tested surface with an ad-hoc surface (that is, take the original and cut it as the alpha says), bake with that, and then throw away the cut surface. That is... it's not impossible to handle, even limiting yourself to blender-supported tools.

The present algorithm is temporarily UV-based. What does that mean? I need a unique UV mapping to be able to export the radiosity data, but once that's done, I can use mesher to stamp that information back into the vertices. I'd do that with Blender, but I don't know how; maybe you can find a way to export vertex color info to an obj?

If you use a texture transport for the PRT (the color-coded radiosity baking), then it needs unique UV unwrapping, i.e. no reuse. If you use a vertex transport for the PRT, you get lower PRT resolution, which ends up meaning even softer shadows, mind you, but you get full freedom for texture reuse.

An option would be to use a texture transport with a separate unwrapping for the PRT layer - that's possible too, but I'd have to see if I can fit all that info in the shader (I'd have to add an interpolant for the secondary coordinate set).

So... you have options. I'll let your creative needs guide my implementation tasks - that is, I'll focus on whatever option is most creatively needed.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

That's immensely fair, klauss. And I hate to take advantage of it, but I think the option of texture reuse is really needed. Not so much on small ships, but on carriers and large stations there are plenty of opportunities to reuse sections of the texture dozens of times: say, a type of door or hatch, a cluster of windows, a dish antenna, a non-moving weapon, or any other greeble that appears in multiple locations. Equal-sized armor planks, landing-strip markings, you name it. Not to mention symmetry folding for the rest.
I think it would be a huge step forward for the engine to allow a separation between texture and light-map, the way many RPG and FPS games have done for a long time. Now, I'm aware that Blender won't handle dual UVs, so I'd lean more towards your vertex-based light-mapping. If the modeller is aware of the issue, it only takes 2 seconds to add an extra cut wherever necessary to prevent extra-long ambient shadows off small features. I'd be doing the same thing near corners that are set smooth anyway, to prevent smoothing across big, flat surfaces, so this just completes the circle by dictating extra cuts near bends, regardless of solid or smooth shading. It makes the rules simpler, in fact.
So, I take it you'd need at least one non-reuse texture specially baked, or a mesh with vertex colors. I think that when you do radiosity in Blender, the color is put into the vertices, and it's the baker script that makes a texture out of it. So the question is how to bypass the texture part. I think I saw a post on the Ogre forum of somebody asking about vertex-color exporting from Blender via the Ogre mesh exporter. I'll see if I can find it again.
I think my alpha-test idea was a non-starter; it would not save any geometry at all, in fact. Well, I'm not sure... I'll think some more.