klauss wrote: First, precision in representation is not such an issue, mostly because meshes are specified in local coordinates, and units group multiple meshes, transforming each to position them. Then units themselves use local coordinates and are transformed by higher units, then bla bla, and the end result is that almost any structure is represented in local coordinates and transformed to viewspace only for rendering. And even if coordinates end up far away from the origin (which, in viewspace, is the viewer itself), the object is far from the viewer too, so the precision loss won't be noticeable.
Klauss, you're forgetting *world space*. The transformation doesn't go directly from modelspace to viewspace, it goes through worldspace, and if an object is right next to you but its world coordinates are 1000 miles from the origin, because it was somewhere in space and you flew some distance, then that object next to you will suffer. What you're describing sounds like what I'm advocating: a world space whose origin tracks the camera.
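The numbers are easy to check, assuming 32-bit floats in the vertex pipeline (a quick standalone test, nothing VS-specific):

    // Why world coordinates far from the origin hurt: a 32-bit float
    // around 1.6e6 m (~1000 miles) can't resolve steps smaller than
    // its ULP at that magnitude (~0.125 m).
    #include <cmath>
    #include <cstdio>

    int main() {
        float near_origin = 1.0f;    // object close to the world origin
        float far_away    = 1.6e6f;  // object ~1000 miles out

        // Smallest representable step at each magnitude:
        printf("ULP near origin: %g m\n", std::nextafterf(near_origin, 2.0f) - near_origin);
        printf("ULP far away:    %g m\n", std::nextafterf(far_away, 2.0e6f) - far_away);

        // A 1 cm offset survives near the origin, but rounds away far out:
        printf("1 cm survives near origin: %d\n", near_origin + 0.01f != near_origin);
        printf("1 cm survives far away:    %d\n", far_away + 0.01f != far_away);
        return 0;
    }

At ~1000 miles from the origin a float can't even hold a 1 cm offset, which is exactly the kind of error that shows up as visible jitter.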
Hell, you can prove this point by going with spec a few light years away: you'll still see your ship perfectly rendered (actually, I tried with light hours only; I'll try with years just for fun).
Maybe the VS engine already repositions world space to track the camera, or does so periodically. There was a flight sim I played once where, if you got too far from base, the plane and the camera started jumping between low-precision positions relative to each other: every time you moved the joystick a bit, the whole cockpit jumped a few inches up, down, or sideways.
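What I mean by tracking is just origin rebasing; something like this sketch (the names and the 10 km threshold are made up, this isn't actual VS code):

    // When the camera drifts too far from the current world origin,
    // shift everything (camera included) so coordinates near the
    // viewer stay small and precise.
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    const double REBASE_THRESHOLD = 10000.0;  // rebase after 10 km of drift (tunable)

    void maybe_rebase(Vec3& camera, std::vector<Vec3>& object_positions) {
        if (std::abs(camera.x) < REBASE_THRESHOLD &&
            std::abs(camera.y) < REBASE_THRESHOLD &&
            std::abs(camera.z) < REBASE_THRESHOLD)
            return;  // camera still near the origin, precision is fine

        const Vec3 shift = camera;  // move the world origin to the camera
        for (Vec3& p : object_positions) {
            p.x -= shift.x; p.y -= shift.y; p.z -= shift.z;
        }
        camera = {0.0, 0.0, 0.0};
        // Anything that caches absolute positions (waypoints, autopilot
        // targets, ...) has to be shifted too; that's the annoying part.
    }

Done every few kilometers, the shift is invisible, and nothing near the viewer ever gets far enough from the origin to jitter.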
About rendering things in three separate layers (far, near, and cockpit): that's what I said VS was using. So, yes: it works. Far meshes are drawn without z-tests, near and cockpit meshes with them. Drawing the cockpit without z-tests would add the requirement of splitting intersecting faces and sorting them out; it's a mess. The only trick is that between the near objects and the cockpit (which is nearest), the z-buffer gets cleared and the frustum recomputed. As simple as that.
Intersecting faces could be split off-line or avoided during design; it's just a matter of specifying NO intersecting faces. Like Microshaft changing a lightbulb: just make darkness the standard...
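For reference, the per-frame order for the three layers would go roughly like this (a GL-style sketch as I understand it; the draw* and setFrustum helpers are placeholders, not real VS functions):

    #include <GL/gl.h>

    // Placeholders for the actual scene code:
    void drawFarMeshes();      // planets, distant ships
    void drawNearMeshes();     // the normal z-tested scene
    void drawCockpitMeshes();  // the nearest layer
    void setFrustum(float near_plane, float far_plane);

    void render_frame() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // 1. Far layer: no z-test, just painter's order back to front.
        glDisable(GL_DEPTH_TEST);
        drawFarMeshes();

        // 2. Near layer: z-tested as usual.
        glEnable(GL_DEPTH_TEST);
        setFrustum(1.0f, 100000.0f);
        drawNearMeshes();

        // 3. Cockpit: clear depth, recompute a much tighter frustum,
        //    and draw z-tested so intersecting faces sort themselves.
        glClear(GL_DEPTH_BUFFER_BIT);
        setFrustum(0.01f, 10.0f);
        drawCockpitMeshes();
    }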
The point is: VS has unique requirements for its graphics engine, so it makes sense for VS to have its own. Of course it's a lot of work, but it would be much harder to hack another engine, one not designed to cope with VS's requirements, into working. VS is already using BSP for collision detection, if I'm not wrong. I'm not sure what it's using for visibility (nothing, I guess; just a viewing-frustum test). But since the BSP code is already there for collision detection, it wouldn't be overly hard to use it for planetside walking.
I have no personal experience in these matters, but what I've read is that a BSP tree is very expensive to update dynamically and keep balanced, so it's mostly used with static level designs.
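For anyone following along, the core of it is small; a minimal sketch of a BSP node and a front-to-back walk (illustrative only, not the VS collision code):

    #include <memory>

    struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0
    struct Vec3f { float x, y, z; };

    struct BspNode {
        Plane split;
        std::unique_ptr<BspNode> front, back;
        // leaf payload (polygons, collision data) would hang off here
    };

    float side_of(const Plane& p, const Vec3f& v) {
        return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
    }

    // Front-to-back traversal from a viewpoint: visit the child the
    // eye is in first. Handy for visibility, and the same walk answers
    // "which leaf am I in" for collision queries.
    template <typename Visit>
    void walk_front_to_back(const BspNode* n, const Vec3f& eye, Visit visit) {
        if (!n) return;
        if (side_of(n->split, eye) >= 0.0f) {
            walk_front_to_back(n->front.get(), eye, visit);
            visit(*n);
            walk_front_to_back(n->back.get(), eye, visit);
        } else {
            walk_front_to_back(n->back.get(), eye, visit);
            visit(*n);
            walk_front_to_back(n->front.get(), eye, visit);
        }
    }

The tree is built once from fixed split planes; that build/balance step is the expensive part, which is why moving geometry doesn't fit well.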
Mostly if planetside walking is implemented as a different engine state (non-seamless, technically speaking, though it could be made to look seamless). In fact, for planetside walking we could actually use another engine, now that I think of it. It would make the most sense, given the disparities in engine requirements. Basically, the base interface would be replaced by a new interface built on another engine (like Ogre). After all, it won't need to interact with VS's space/atm-flight engine. Space and atm-flight would, however, need to be handled by the same engine (VS's); otherwise attempting even semi-seamless transitions would be a total mess.
You're probably right. It would be nice to come up with a universal engine, though that's probably too hard. And yet, the prospect of a level-less, continuous world tantalizes me...
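For what it's worth, the engine-state swap you describe could be as simple as this (all names invented for illustration; not the actual VS or Ogre interface):

    #include <memory>

    struct GameMode {
        virtual ~GameMode() = default;
        virtual void update(double dt) = 0;
        virtual void render() = 0;
    };

    struct SpaceFlightMode : GameMode {   // VS's own engine: space + atm flight
        void update(double /*dt*/) override { /* physics, AI, ... */ }
        void render() override { /* far/near/cockpit passes */ }
    };

    struct PlanetsideMode : GameMode {    // could wrap another engine (e.g. Ogre)
        void update(double /*dt*/) override { /* walking, level logic */ }
        void render() override { /* level renderer */ }
    };

    struct Game {
        std::unique_ptr<GameMode> mode;

        // Non-seamless transition: tear one mode down, stand the other
        // up, and hide the swap behind a landing/launch animation.
        void land_on_planet()  { mode = std::make_unique<PlanetsideMode>(); }
        void launch_to_space() { mode = std::make_unique<SpaceFlightMode>(); }
    };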