Graphic engine and graphic technologies

Development directions, tasks, and features being actively implemented or pursued by the development team.
rewpparo
Hunter
Hunter
Posts: 83
Joined: Sat Jun 11, 2005 8:11 pm
Location: Rouen, france

Graphic engine and graphic technologies

Post by rewpparo »

I've been looking a little bit at VS's source code, and it looks like you're using a custom 3D engine for this game. Is there some reason why you didn't use one of the open source engines available under the GPL?
I've been messing around with the Ogre3D engine for quite some time now, and I believe adopting it in VS would be relatively simple and very beneficial.

As for features that would be useful, just to name a few: skeletal animation (if you wish to make a ship flap its wings ^^), particle systems that I believe would be useful for weapons, engines, nebulae..., cel shading, bump mapping, shaders, render-to-texture (which would allow for screens in the cockpit displaying the GUIs). BSP support and camera tracking could be used to make a 3D environment on the planets and stations (BSPs are simple and fun to make, so expect a lot of user-created content).

From a developer's point of view, Ogre is GPL and well documented (for developers AND content creators). And it has a structure that is very similar to the one used in Vega Strike (if you want to look at the Doxygen docs: universe = scenemanager, unit = entity, etc.), so I believe a port is just a matter of binding existing functions to Ogre instead of VS's current graphics engine. Also, Ogre keeps up to date with the latest GPU technologies, so you can forget all that and concentrate on making the actual game :)

If the team is interested in the concept, I can have a deeper look and see what I can do.
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Unfortunately, hellcat, the one with that kind of knowledge, won't be around for a few weeks. So you'll have to wait for his answer.

But I can tell you, however, that I doubt Ogre3D can handle the insane distances in Vega Strike. It's a mess trying to properly render a world where the closest poly is, say, at z=10 and the farthest one is at z=143253242387 (just an example).

For us to migrate to an "external" engine, it would have to support that kind of range.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

I suggested Ogre3D a while back for planet-side walking. The problem, as klauss says, is the insane distances, and more than insane distance ratios. Plus, I would add, the needs of a space sim are very different from those of a flight sim, and the latter's are very different from a walking on land engine.
And we don't just want to "switch" engines; I think most of us want continuity from space to atmospheric flight, to outdoor and indoor walking. No engine exists, as yet, that can do this. If we manage to pull it off, it will be a first.
rewpparo
Hunter
Hunter
Posts: 83
Joined: Sat Jun 11, 2005 8:11 pm
Location: Rouen, france

Post by rewpparo »

As for ranges, Ogre's coordinates are floats (this can be changed, but would require a full recompile of Ogre). I don't know if that's enough for you. Are there any other tricks used to get the distances working?
Ogre has the advantage that the whole engine can be reconfigured at will through plugins. You can basically replace the whole engine, or any part you want, in a plugin. So if Ogre doesn't have your tricks, we can give them to it through a plugin, and still make full use of the graphical eye candy it provides, and will provide.
About the all-purpose engine, I think Ogre is probably the closest thing there is to that. Ogre isn't specialised in any specific engine type, although it has plugins for most of the current stuff (BSP, outdoors, generic for space...). However, I don't know if we can actually mix them, i.e. activate a BSP level somewhere far away, get close to it and go inside the level. My guess is that it could be done with a bit of hacking.
Also, I have a work in progress somewhere: a plugin for Ogre that can render a planet at any level of detail (from space to ground). It dynamically adds detail to the parts of the planet that are close enough to the camera. It's not fully operational, but I could continue my work on it if it's of any use. It should provide seamless space and atmospheric flight, and maybe ground walking.
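A chunked quadtree is one common way to implement this kind of distance-driven planet LOD. The sketch below is an editor's illustration of the general idea, not code from the plugin in question; the split rule and all names are made up: each terrain patch subdivides into four children while the camera is closer than some multiple of the patch size.

```python
# Hypothetical sketch of distance-based quadtree LOD for a planet patch.
# The split rule (subdivide while dist < factor * size) is illustrative.

def visible_patches(cx, cy, size, cam, max_depth=8, factor=2.0, depth=0):
    """Return the list of (x, y, size) patches to render.

    A patch subdivides into 4 children while the camera is closer
    than `factor * size` and max_depth has not been reached.
    """
    # Distance from camera to patch centre (2D for simplicity).
    dist = ((cam[0] - cx) ** 2 + (cam[1] - cy) ** 2) ** 0.5
    if depth >= max_depth or dist > factor * size:
        return [(cx, cy, size)]          # coarse enough: draw as-is
    half = size / 2
    out = []
    for dx in (-half / 2, half / 2):     # recurse into the 4 children
        for dy in (-half / 2, half / 2):
            out += visible_patches(cx + dx, cy + dy, half, cam,
                                   max_depth, factor, depth + 1)
    return out

# Far camera: one coarse patch. Near camera: finer patches near it.
far_view = visible_patches(0, 0, 1024, cam=(5000, 5000))
near_view = visible_patches(0, 0, 1024, cam=(10, 10))
```

Seen from far away the whole patch renders as one quad; as the camera approaches, only the nearby patches refine, which is exactly the "detail added where the camera is" behaviour described above.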

I must say at this point that I don't hold any shares in Ogre, I'm just a fan ^^
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

The problem with depth ranges or, as chuck_starchaser put it with much more precision, depth ratios, is that medium-res Z-buffers don't perform well under those conditions.
In VS, if you try, you'll notice that a 16-bit Z-buffer is not enough to render both the cockpit and the exterior, and I'm not talking about the whole exterior, but just the "near" exterior (distant planets and the background are rendered in what is called the "far" Z-plane, and don't use up Z-buffer). Even the "near" exterior alone is drawn with poor quality on 16-bit Z-buffers.
A 24-bit Z-buffer is enough for a quality rendering of the exterior, but still lacks the precision to render the cockpit. Cockpit rendering had to be hacked in, creating yet another Z-buffer layer, for a total of 3: 3D cockpit, near meshes, far meshes.
A 32-bit Z-buffer would probably allow much more freedom, but I still doubt it could render the entire system (far planets included).
I was working on a rendering technique which dynamically partitioned the world so that full use of Z-buffer precision was achieved, and which would create new layers on demand when that was not enough. But it's not ready, and it's not presently in CVS (I'm just experimenting with it). I doubt Ogre3D does such a thing, which is cumbersome and impacts heavily on performance if not done carefully. And I'm not sure such a plugin could be created; I don't know Ogre3D that much. Perhaps you could enlighten us: could we modify the rendering sequence that much with a plugin?
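The precision problem described here is easy to demonstrate numerically. A standard perspective depth buffer stores a hyperbolic function of eye-space z, quantized to 16 or 24 bits, so most of the precision sits near the near plane. This is an editor's sketch of that standard mapping, not VS code; the distances are made up:

```python
# Sketch: quantized window-space depth for a perspective projection,
# z_win = (1/near - 1/z) / (1/near - 1/far), stored in an N-bit buffer.

def depth_value(z, near, far, bits):
    """Return the integer depth-buffer value for eye-space distance z."""
    z_win = (1.0 / near - 1.0 / z) / (1.0 / near - 1.0 / far)
    return round(z_win * ((1 << bits) - 1))

# One frustum covering cockpit (near = 0.1 m) through deep space
# (far = 1e9 m): two ships a full kilometre apart at 100 km collapse
# to the same 24-bit depth value, so the Z test cannot order them.
same = depth_value(100_000.0, 0.1, 1e9, 24) == depth_value(101_000.0, 0.1, 1e9, 24)

# Render the exterior as its own layer with near = 10 m instead:
# the same two ships now get distinct depth values.
distinct = depth_value(100_000.0, 10.0, 1e9, 24) != depth_value(101_000.0, 10.0, 1e9, 24)
```

Raising the near plane per layer is exactly what the cockpit/near/far split buys: each layer gets the whole depth range for its own slice of the scene.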
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
rewpparo
Hunter
Hunter
Posts: 83
Joined: Sat Jun 11, 2005 8:11 pm
Location: Rouen, france

Post by rewpparo »

Well, yes you can, but I think you'd have to write your own rendering subsystem, which would be a pain to update each time a new Ogre is out... not really an option. I'll do some more checking on this; there may be a way.
But 3 Z-buffers... you got me on that one, and maybe you got Ogre too.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

Your planet thing sounds interesting. Shehrazade here is working on just that very idea, and I'm not sure how far he's got with it. Frankly, I'm not sure the z-buffer problem is the toughest one; I think the toughest one is precision of representation. If you represent a planet using floats, and place the origin of coordinates at its center of gravity, the best-case precision you get at the surface is about 2 feet. One can certainly have local coordinate system representations, but then the bigger problem comes in: where do you place the (0,0,0) origin for the "world"? If you place it somewhere in space, by the time you get to the ground, precision is back in the range of feet. The only solution, IMO, is to have the world coordinates centered on, and continuously tracking, the camera.
So that's my *ultimate* question: can Ogre be modified to do that?
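The "about 2 feet" figure checks out: a 32-bit float has a 24-bit significand (23 stored bits), so the spacing between adjacent representable values at Earth-radius magnitudes (~6.4e6 m) is 0.5 m, roughly a foot and a half. A quick check (editor's illustration):

```python
import math

def f32_spacing(x):
    """Spacing between adjacent 32-bit floats at magnitude x.

    A float32 has a 24-bit significand (23 stored bits), so the
    spacing is 2**(exponent - 23).
    """
    return 2.0 ** (math.floor(math.log2(x)) - 23)

earth_radius = 6.371e6            # metres; 2**22 < 6.371e6 < 2**23
step = f32_spacing(earth_radius)  # 0.5 m, i.e. about 1.6 feet
```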
rewpparo
Hunter
Hunter
Posts: 83
Joined: Sat Jun 11, 2005 8:11 pm
Location: Rouen, france

Post by rewpparo »

I suspect you do it by iterating over the objects and changing their coordinates depending on camera movement. That you can definitely do.
In Ogre, everything dealing with coordinates, vertices, textures, etc. is completely customisable through plugins. It's just the parts close to the GPU metal I'm not so sure we can easily tune, like Z-buffers, whose use I thought was pretty standard.

However, I may have found a way to do something about the Z-buffer part. You can give priorities to meshes you want to draw, and disable Z-buffer checks. Tell me if this would do the trick:
far meshes are drawn first, no depth check, so they remain in the background;
near meshes are then drawn over them in the framebuffer;
the cockpit is then drawn in the foreground, above everything else, no depth check.
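The proposed ordering can be simulated with a toy software "framebuffer" (an editor's sketch, not VS or Ogre code; the layer names and z values are made up): the far layer skips the depth test, the near layer uses it, and the cockpit is drawn last with the test disabled, so it always wins.

```python
# Toy single-pixel "framebuffer" demonstrating the three-layer draw order.

INF = float("inf")

def render(draws):
    """draws: list of (name, z, use_depth_test). Returns the visible name."""
    color, depth = None, INF
    for name, z, use_depth in draws:
        if not use_depth or z < depth:   # no depth test -> always overwrite
            color = name
            if use_depth:
                depth = z                # only depth-tested draws write z
    return color

# Far background first (no test), near ships with the Z test,
# cockpit last with the test disabled: the cockpit always ends up on top.
frame = render([
    ("starfield", 1e12, False),
    ("ship_far",  2000.0, True),
    ("ship_near",  500.0, True),
    ("cockpit",      0.5, False),
])
```

Within the near layer the depth test still orders the ships correctly; only the layers themselves are ordered by draw sequence rather than by z.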
Duality
Daredevil Venturer
Daredevil Venturer
Posts: 583
Joined: Sun Feb 16, 2003 12:58 am
Location: West Coast of USA
Contact:

Post by Duality »

It would be cool if Vega Strike were developed using an open-source game or graphics engine that allows rapid development.

The problem is that the majority of game/graphics engines are only available on Windows, and sometimes Linux.
Halleck
Elite
Elite
Posts: 1832
Joined: Sat Jan 15, 2005 10:21 pm
Location: State of Denial
Contact:

Post by Halleck »

Um, but Vega Strike does have an open-source engine that is developed for Windows, Mac OS, and Linux. What's all the fuss about?
Duality
Daredevil Venturer
Daredevil Venturer
Posts: 583
Joined: Sun Feb 16, 2003 12:58 am
Location: West Coast of USA
Contact:

Post by Duality »

I think, well, in my own mind, the fuss is that maybe the game takes forever to develop because it has mostly been created from scratch.

I have no clue whether Vega Strike is hard-coded or not.
smbarbour
Fearless Venturer
Fearless Venturer
Posts: 610
Joined: Wed Mar 23, 2005 6:42 pm
Location: Northern Illinois

Post by smbarbour »

It would take a MAJOR effort to redo everything in another engine. I highly doubt that the proprietary model format is used in anything other than the Vega Strike engine; the only program I know of that supports it is Wings. It would be insane to switch engines at this point. The only thing switching engines would do is provide a different rendering engine (and the one we are using works now), so why change?
I've stopped playing. I'm waiting for a new release.

I've kicked the MMO habit for now, but if I maintain enough money for an EVE-Online subscription, I'll be gone again.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

rewpparo wrote: However I may have found a way to do something about the z buffer part. You can give priorities to meshes you want to draw, and disable Z buffer checks. Tell me if this would do the trick :
far meshes are drawn first, no depth check, they remain in background
near meshes are then drawn over using the framebuffer
cockpit is then drawn on the foreground, above everything else, no depth check.
Indeed I think it would work; my only nit-pick is the order: I'd draw the cockpit first, using stencil, which avoids overdraw; though I'm not sure whether use of stencils is compatible with GLSL. But really, if a ship is so far away that it only spans a few pixels on the screen, I wouldn't even care whether a ship behind it shows in front. And as for depth range, it can be adjusted dynamically between frames, as the needs may change when approaching a planet's landing strip.
My second-ultimate question would be whether Ogre could handle integer coordinate systems; I suppose the answer is "plugin", but I'm asking just in case; I just don't know how high-level it goes. I was wondering because in a city, for instance, you'd probably want precision that stays constant across the city; but most engines out there use floats for representation, so objects close to the origin (e.g. the center of the city) enjoy positional accuracy down to angstroms, while on the outskirts you either have hairs as thick as your thumb, or you're bald... ;-) ... (which often impacts physics as well).

@Halleck: The fuss is about implementing atmospheric flight and planetside walking. Planetside requires very different occlusion techniques from those of space, and pretty complex ones. Atmospheric flight is pretty complex too, in that you have one object, a planet, which can't LOD everywhere equally the way ships do, but must LOD locally, nearer patches more than farther ones: like per-sub-object LOD. Ogre3D is a kick-ass engine that already has GLSL shaders and all kinds of physics, and if the OP has software capable of implementing atmospheric flight in Ogre and it works, we'd have two needs out of three covered, and it would be a matter of adding space flight. I'd rather we stay with VS and perfect it, but we shouldn't close our minds completely to other possibilities, such as merging the engines, or perhaps re-implementing VS-compatible interfaces as an Ogre plug-in. I'll probably take a look at it hoping to find a reason not to go that route, but I will look at it anyhow.
CubOfJudahsLion
Confed Special Operative
Confed Special Operative
Posts: 286
Joined: Tue Dec 21, 2004 3:11 am
Location: Costa Pobre
Contact:

Post by CubOfJudahsLion »

smbarbour wrote:It would be insane to switch engines at this point. The only thing that switching engines would do is provide a different rendering engine (and the one we are using is working now) so why change?
Agreed. Why break what works?

Besides:

a) hellcatv intends to add GL shader support to the engine, and some coders around here are already getting their way with shaders (namely chuck_starchaser and his cool Earth). So far, being wholly original has worked for VS. You may disagree with me, but there's a best-integral-results rule that tells me this is the best way.

b) the issue is coordinate systems anyway, not rendering. A layered coordinate transformation system, for instance, has different representations of size for objects in different ranges of distance/size (planets in a 'cosmic' layer, ships and stations in a 'local context' layer), with different coordinate conversion functions to transfer geometry between the layers and to the rendering engine. This could even be made an additional stage in front of the existing rendering engine.

EDIT: correctly credited chuck_starchaser for the cool Earth :D
Last edited by CubOfJudahsLion on Fri Jun 17, 2005 4:17 am, edited 1 time in total.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

That's me, with the "Cool Earth", thank you ;-)
It wasn't rendering I said was my argument for considering Ogre, but the fact that it already has all that's needed for planetside. As for coordinate systems, that's right: to have a continuous engine at all scales we need to be able to handle multiple coordinate systems simultaneously, and we also need a camera-tracking world coordinate system. Neither Ogre nor VS has this built in.
CubOfJudahsLion
Confed Special Operative
Confed Special Operative
Posts: 286
Joined: Tue Dec 21, 2004 3:11 am
Location: Costa Pobre
Contact:

Post by CubOfJudahsLion »

Name on post above corrected.

Of course, the principle stands: either way, the coordinate handlers would have to be written. Likewise, atmospheric physics would be a generalization of the existing physics model which, while not trivial, is certainly not the hardest aspect of planetary flight.

Oh, I forgot. We'd still have to worry about endless 'details': models for terrain and structures, LODs, algorithms for locally varying detail that only increases in the subdivided patches you're getting closer to, atmospheric scattering, rendering cloud layers... and providing the art for all of that. It's also a matter of feasibility/resources in a big way, so I stand corrected.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

The beauty of it is that, once we get to the ground, we probably wouldn't have to create much content; there'd be thousands of people out there eager to contribute it. And one great feature we could add to accelerate that process further would be *in-game construction*.
Well, we'd need to take a hard look at Ogre3D, or any engine for that matter. There are many considerations, such as the type of occlusion algorithm and the representations. Probably the fastest indoor engine out there is Quake's, but it's a highly compromised engine: it achieves speed through many optimizations, but to a great extent through its visibility techniques. While some engines use BSPs and others use portals, Quake uses three visibility algorithms: BSP, portals, and a third one I'm not sure what it's called, or even whether it has a name. The space within a level gets divided into some large number of cubes, and a large 2-dimensional array of single bits, indexed by pairs of those cubes' numbers, indicates for any pair of cubes whether there is a line of visibility between the two. The good side of using all these techniques is that you hardly ever overdraw: efficiency is very high. BUT, all this stuff is precomputed, so a level cannot be destructible, nor constructible for that matter. IOW, the levels are completely static. Furthermore, you cannot have a continuous world: you are stuck with separate 'levels'.
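That precomputed pair-visibility table (a "potentially visible set", or PVS) is essentially a symmetric bit matrix over cells. A toy version, as an editor's sketch (not Quake's actual data layout; the cell links are invented):

```python
# Toy potentially-visible-set (PVS) table: one bit per pair of cells.
# The cell numbering and visibility links here are made up for illustration.

N = 4                                   # number of cells in the level
pvs = [[False] * N for _ in range(N)]

def link(a, b):
    """Mark cells a and b as mutually visible (the table is symmetric)."""
    pvs[a][b] = pvs[b][a] = True

link(0, 1)                              # hallway sees room 1
link(1, 2)                              # room 1 sees room 2
for c in range(N):
    pvs[c][c] = True                    # a cell always sees itself

def draw_list(camera_cell, objects):
    """objects: list of (name, cell). Cull what the camera's cell can't see."""
    return [name for name, cell in objects if pvs[camera_cell][cell]]

scene = [("crate", 0), ("guard", 1), ("door", 2), ("secret", 3)]
visible = draw_list(0, scene)           # from cell 0: crate and guard only
```

The lookup at render time is a single bit test per object, which is why it's so fast; but the table is baked offline, which is exactly why the geometry it describes must stay static.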
rewpparo
Hunter
Hunter
Posts: 83
Joined: Sat Jun 11, 2005 8:11 pm
Location: Rouen, france

Post by rewpparo »

CubOfJudahsLion wrote:
smbarbour wrote:It would be insane to switch engines at this point. The only thing that switching engines would do is provide a different rendering engine (and the one we are using is working now) so why change?
Agreed. Why break what works?
As for that, the main reason is: programmers should focus on developing a game, not a graphics engine. What resources you have shouldn't be wasted developing stuff that has already been developed.
Yes, porting VS to Ogre would probably take some time and resources, but look at it as an investment: the time spent now will save time in the future, as you won't have to develop top technology yourselves or keep up to date with GPU standards. Also, Ogre already has octree, BSP and terrain plugins, and many optional optimisations for various situations.
Another reason is one that will touch the hearts of the open source advocates out there. I think Ogre is the best candidate for a general-purpose graphics engine right now, so we'd be contributing by expanding it with the VS plugin. That would give wannabe developers out there a ready-to-use space engine that they could reuse in their apps, and help people develop their own games in the future. The best interest of open source isn't one working commercial-quality game; it's giving everyone the means to build one.
As for user-created content, Ogre has exporters for all major modelers (including 3ds Max, Maya, Blender and Wings) as well as a lot of tools for particles, materials... which helps a lot to motivate users.

As for the integer coordinate system, I don't really see the point of it. We don't want city-wide accuracy, we want planetary or system-wide accuracy. Floats are the best candidate for that, and the coordinate tracking seems to be the best trick.
Just for the record, I don't think this would be easy. Coordinate precision is declared in a macro in the most widely used .h. We could alter it, but that would require a completely custom Ogre shipped with VS.
dust
Explorer
Explorer
Posts: 8
Joined: Fri Jun 17, 2005 6:23 am

Post by dust »

http://www.openscenegraph.org/ — check out the screenshots too; it's used for outdoor scenes, like NASA's Blue Marble, and indoor ones. It also works with other APIs, as http://www.delta3d.org/ does with http://www.openal.org/, http://ode.org/ and others.

A proven, working space simulator is http://www.shatters.net/celestia/ — it would be nice to use the Celestia universe in Vega Strike.

A good article about the scene graphs of tomorrow: http://www.realityprime.com/scenegraph.php#future
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

An integer coordinate system would allow, for instance, a large metropolis to be modeled to 0.1 mm accuracy all across.
4 billion x 0.1 mm = 400,000,000 mm = 400,000 m = 400 km, though you'd only use about 100 km of it to avoid integer overflow in calculations. For planet orbits you'd want doubles. For planets you could use floats, agreed; but you could also use ints and get 1 cm accuracy for the same data-size price as floats. Remember floats only have 23 stored bits of mantissa. And for houses you could use char indexes into a table of shorts. That's what's really needed for a continuous universe: the ability to juggle multiple representations, at multiple scales, and a world coordinate system (this one definitely in floats) that's continuously updated to track the camera.
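The arithmetic is easy to check, and the float comparison is instructive: a 32-bit integer at 0.1 mm per unit spans about 429 km with uniform precision, while a 32-bit float's spacing grows with distance from the origin. An editor's sketch:

```python
import math

# Uniform precision: 32-bit integer coordinates at 0.1 mm per unit.
span_m = (2 ** 32) * 0.0001          # total representable span in metres
# -> ~429,496 m, i.e. roughly the "400 km" quoted above.

def f32_spacing(x):
    """Gap between adjacent 32-bit floats at magnitude x (24-bit significand)."""
    return 2.0 ** (math.floor(math.log2(x)) - 23)

near_origin = f32_spacing(1.0)       # ~0.12 microns downtown
outskirts   = f32_spacing(50_000.0)  # ~3.9 mm at 50 km out: ~32,000x coarser
```

So with floats, sub-millimetre detail survives only near the origin; the integer grid keeps 0.1 mm everywhere, which is the whole argument for constant-precision city coordinates.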
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

First, precision of representation is not such an issue, mostly because meshes are specified in local coordinates, and units group multiple meshes, transforming each to position it. Then units themselves use local coordinates and are transformed by higher units, and so on; the end result is that almost any structure is represented in local coordinates, and transformed to viewspace only for rendering. And if coordinates end up very far away from the origin (which, in viewspace, is the viewer itself), the precision loss won't be noticeable.

One would only have to make sure meshes don't have huge disparities in feature sizes: the buildings and the planet should be separate meshes, otherwise the buildings could not use their own local coordinate system.

Hell, you can prove this point by travelling a few light-years away with SPEC: you'll still see your ship perfectly rendered (actually, I tried with light-hours only; I'll try with years just for fun).

About rendering things in 3 separate layers, far, near and cockpit: that's what I said VS was using. So, yes, it works. Far meshes are drawn without z tests; near meshes and the cockpit with them. Drawing the cockpit without z tests would add the requirement of splitting intersecting faces and sorting them, which is a mess. The only trick is that between the near objects and the cockpit (which is nearest), the z-buffer gets cleared and the frustum recomputed. As simple as that.

The point is: VS has unique requirements of its graphics engine, so it makes sense for VS to have its own. Of course it's a lot of work, but it would be much harder to hack another engine, one not designed to cope with VS's requirements, into working. VS is already using BSPs for collision detection, if I'm not wrong. I'm not sure what it is using for visibility (I'd guess nothing, actually, just a viewing-frustum test). But since the BSP code is already there for collision detection, it wouldn't be overly hard to use it for planetside walking, mostly if planetside walking is implemented as a different engine state (non-seamless, technically speaking, though it could be made to look seamless). In fact, for planetside walking we could actually use another engine, now that I think of it. It would make the most sense, because of the disparities in engine requirements: the base interface would be replaced by a new interface based on another engine (like Ogre). After all, it won't need to interact with VS's space/atmospheric-flight engine. Space and atmospheric flight would, however, need to be handled by the same engine (VS's), otherwise attempting even semi-seamless transitions would be a total mess.
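klauss's point about composing transforms can be illustrated with plain translations standing in for full 4x4 matrices (an editor's sketch, not VS code): if the model-to-world and world-to-view offsets are combined in double precision before anything is handed to the GPU, the huge worldspace coordinates cancel and the result stays small and exact.

```python
import struct

def to_f32(x):
    """Round a Python double to the nearest 32-bit float, as a GPU would."""
    return struct.unpack("f", struct.pack("f", x))[0]

ship_pos = 1.0e12          # ship's worldspace position, metres from origin
camera_pos = 1.0e12        # camera sits right next to it
vertex_local = 1.5         # a vertex 1.5 m from the ship's own origin

# Naive path: transform to worldspace in float32 first, then to viewspace.
# Float32 spacing at 1e12 is ~65,536 m, so the 1.5 m offset vanishes.
world_f32 = to_f32(ship_pos + vertex_local)
naive_view = to_f32(world_f32 - to_f32(camera_pos))

# Composed path: combine the model and view offsets in doubles first,
# then hand the (small) result to the GPU as float32.
modelview_offset = ship_pos - camera_pos          # exactly 0.0 in doubles
composed_view = to_f32(vertex_local + modelview_offset)
```

The naive path collapses the vertex onto the ship's origin, while the composed path preserves it exactly, which is why composing matrices before the float conversion sidesteps the worldspace precision problem.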
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

klauss wrote:First, the precision in representation is not such an issue. Mostly because meshes are specified in local coordinates, and units group multiple meshes, transforming each to position them. Then, units themselves use local coordinates and are transformed by higher units, then bla bla, and the end is that almost any structure is represented in local coordinates, and transformed to viewspace only for rendering. But, if coordinates end up being so far away from the origin (which, in viewspace, is the viewer itself), then the precision loss won't be noticeable.
Klauss, you're forgetting *world space*. The transformation doesn't go directly from modelspace to viewspace; it goes through worldspace, and if an object is next to you but its world coordinates are 1000 miles away, because they were somewhere in space and you flew some distance, then that object next to you will suffer. What you're describing sounds like what I'm advocating: a world space whose origin tracks the camera.
Hell, you can prove this point by going with spec a few light years away, you'll still see your ship perfectly rendered (actually, I tried with light hours only, I'll try with years just for fun).
Maybe the VS engine already repositions world space to track the camera, continuously or periodically. There was a flight sim I played once where, if you got too far from base, the plane and the camera started jumping between low-precision positions relative to each other: every time you moved the joystick a bit, the whole cockpit jumped a few inches up or down or to the sides.
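A camera-tracking origin (often called a "floating origin") is straightforward to sketch: whenever the camera drifts past some threshold, subtract its position from every object, in doubles, so the coordinates actually handed to the renderer stay small. An editor's illustration, not VS code; the 1-D positions and threshold are made up:

```python
# Floating-origin sketch: keep render-space coordinates near zero by
# rebasing the world onto the camera when it drifts too far.

REBASE_DISTANCE = 10_000.0            # metres; the threshold is arbitrary

class World:
    def __init__(self):
        self.camera = 0.0             # 1-D positions, kept in doubles
        self.objects = {"station": 5.0, "planet": 3.0e8}

    def move_camera(self, dx):
        self.camera += dx
        if abs(self.camera) > REBASE_DISTANCE:
            shift = self.camera       # rebase: the camera becomes the origin
            self.camera = 0.0
            for name in self.objects:
                self.objects[name] -= shift

w = World()
w.move_camera(3.0e8 - 5.0)            # fly (almost) all the way to the planet
# After the rebase, the planet sits in comfortable float range of the camera:
local = w.objects["planet"] - w.camera
```

Relative positions are preserved exactly (the same shift is subtracted everywhere), which is what prevents the cockpit-jitter symptom described above.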
About the way to render things in 3 separate layers: far, near and cockpit. That's what I said VS was using. So, yes: it works. Far meshes are drawn without z tests, near and cockpit with them. Drawing the cockpit without z tests would add the requirement of splitting intersecting faces, and sorting them out. It's a mess. The only trick is that between near objects and the cockpit (which is nearest), the z-buffer gets cleared and the frustum recomputed. As simple as that.
Intersecting faces could be split off-line or during design; just a matter of specifying NO intersecting faces. Microshaft changing a lightbulb: Just make darkness the standard... :D
The point is: VS has unique requirements from the graphics engine. It makes sense for VS to have its own engine. Of course it's a lot of work. But would be much harder trying to hack another engine not designed to cope with VS requirements to make it work. VS is already using BSP for collision detection, if I'm not wrong. I'm not sure what is it using for visibility (I guess nothing, actually, just test the viewing frustum). But, since BSP code is already there for collision detection, it wouldn't be overly hard to use it for planetside walking.
I have no personal experience in these matters, but what I read is that BSPs are very expensive to dynamically update and keep balanced, so their use is limited to static level designs.
Mostly, if planetside walking is implemented as a different engine state (non-seamless, technically speaking, but could be made look seamless). In fact, for planetside walking we could actually use another engine, now that I think of it. It would make the most sense, because of the disparities in engine requirements. Basically, base interface should be replaced by a new interface based on another engine (like Ogre). After all, it won't need to interact with VS space/atm-flight engine. Space and atm-flight would, however, need to be handled by the same egine (VS's), otherwise attempting even semi-seamless transitions would be a total mess.
You're probably right. It would be nice to come up with a universal engine, though, but that's probably too hard. And yet, the prospect of a level-less, continuous world tantalizes me...
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Post by klauss »

Klauss, you're forgetting *world space*. The transformation doesn't go directly from modelspace to viewspace, it goes through worldspace, and if an object is next to you, but world coordinates are 1000 miles away because they were somewhere in space and you flew some distance, than that object next to you will suffer. What you're describing sounds like what I'm advocating: World-space whose origin tracks the camera.
Nope. You're wrong there. Meshes never get transformed to worldspace. Matrices are composed and sent directly to OpenGL in the modelview matrix, so basically you go directly from modelspace to viewspace. If we ever encounter a precision problem, it will be due to badly-conditioned intermediate matrices, which can be fixed as easily as changing the floats to doubles in the Matrix class. It would still be converted to floats when passed to OpenGL, but at that point it wouldn't matter.
I have no personal experience in these matters, but what I read is that BSP is very expensive to dynamically update and keep balanced, so that its use is with static level designs.
Yep. And bases are mostly static. So we win.
On the subject, I think there's no method dynamic enough. I had an idea some time ago, but hellcat made it pop like a soap bubble: use proxies and occlusion queries on groups of objects at a time. Actually, I never talked about it with hellcat, but talking about other things he pointed out that occlusion queries stall the pipeline, which is almost true (only untrue with an NVidia-specific extension, and we can't rely on a vendor-specific extension; that wouldn't be right). Anyway, that would be the method: all it needs is unit groupings, which are easily done with some heuristics mixed with static hints, and the rest handles itself.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Post by chuck_starchaser »

klauss wrote:
Klauss, you're forgetting *world space*. The transformation doesn't go directly from modelspace to viewspace, it goes through worldspace, and if an object is next to you, but world coordinates are 1000 miles away because they were somewhere in space and you flew some distance, than that object next to you will suffer. What you're describing sounds like what I'm advocating: World-space whose origin tracks the camera.
Nope. You're wrong there. Meshes never get transformed to worldspace. Matrices are composed, and sent directly to the OGL in the modelview matrix. So, basically, you go directly from modelspace to viewspace. If we ever encounter a precision problem, it will be due to badly-conditioned intermediate matrices, which can be fixed as easily as changing the floats for doubles in the Matrix class. It would still be converted to floats when passing it to the OGL, but at that point, it wouldn't matter.
Well, perhaps the matrices are precombined, but the world coordinates do exist and are used, no? If the origin of "world" coordinates were in Alpha Centauri (and we're on Earth), it doesn't seem to me the fact would be irrelevant. It should be; and it would be, if I have my way; but it isn't yet. No?
I have no personal experience in these matters, but what I read is that BSP is very expensive to dynamically update and keep balanced, so that its use is with static level designs.
Yep. And bases are mostly static. So we win.
I was going to say "So we lose, because then we can't have a distributed universe, loaded on demand, and we can't have multiple users adding content dynamically"; but maybe we can anyhow, by making it a function of the servers to update BSPs whenever needed, in a low-priority thread...
What do you think of this idea?
Say we were to model Buenos Aires at some point ;-): we could surround every city block with hint planes and have precomputed BSPs for each block; then the BSP algorithm could always keep a hint plane as the root node. It could then keep the BSPs for the 4, 5 or 6 visible blocks, and each time it loads a new block's BSP and/or drops an old one, it can roughly rebalance simply by choosing a new hint plane as the root node.
On the subject, I think there's no method dynamic enough. I had an idea some time ago, but hellcat made it puff like a soap bubble: use proxies and occlusion query on groups of objects at a time. Actually, I never talked about it with hellcat, but talking about other things he pointed out that occlusion queries stall the pipeline, which is almost true (only untrue with an NVidia-specific extension, but we can't rely on a vendor-specific extension, that wouldn't be right).
What I've read on the subject is that switching data directions on the video card is expensive. Even glGetState(), or whatever the name was, is a no-no. A website I found long ago on OpenGL optimizations advised keeping track of the GPU state in a variable rather than querying it, and, if one needs to query things from the GPU, putting all those queries together and only switching bus directions once per frame.
jackS
Minister of Information
Minister of Information
Posts: 1895
Joined: Fri Jan 31, 2003 9:40 pm
Location: The land of tenure (and diaper changes)

Post by jackS »

World coordinates of objects are kept in doubles, so one has centimeter accuracy on the origins of each local coordinate system up to ~70 billion kilometers (~2.7 light-days) from the origin of the star system. We've been considering precision issues surrounding star-system scales for at least the last couple of years (exact timeframes escape my memory), and believe the fundamental support mechanisms to have been addressed quite some time ago.
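jackS's figure is consistent with double precision: a 64-bit double has a 53-bit significand, and the spacing between adjacent doubles at 7e13 m (70 billion km) is still under a centimeter. A quick check (editor's illustration), using `math.ulp` from Python 3.9+:

```python
import math

seventy_billion_km = 7.0e13            # metres

# Spacing between adjacent doubles at that magnitude: 2**(45 - 52) m.
gap = math.ulp(seventy_billion_km)     # ~0.0078 m, i.e. under a centimeter

# Roughly one power of two further out, the spacing passes 1 cm,
# which matches the quoted ~70-billion-km limit for cm accuracy.
beyond = math.ulp(1.0e14)              # ~0.0156 m
```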