Code reformatting project [DONE]

Development directions, tasks, and features being actively implemented or pursued by the development team.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

Well, you're mostly right.

If you take a look at the audio branch, you'll see very few bare pointers: almost every pointer is a shared smart pointer or an auto pointer. So it is indeed possible to avoid bare pointers to that extent, and it's in fact quite a comfy way of coding.

But sometimes bare pointers are a good thing. They're fast - shared pointers aren't as fast - and they're copyable, though only with care. You could argue they can be replaced by references, but that wouldn't be accurate. A reference is supposed to point to an always-valid instance - valid, that is, for the reference's whole lifetime. So references are used in nested scopes, because that's the only easy way in which you can guarantee that:

Code: Select all

std::string mystring;
do_something_to(mystring);
// ^ takes a reference because, during the whole lifetime of do_something_to,
//    mystring will be alive and the reference valid.
But you can't freely copy references - that is, you can, with awkward syntax and a lot of faith, but a copied reference may escape beyond the lifetime of the object it refers to, and then you get segfaults.

So pointers are good in those cases: when you can't pay for a smart pointer, but you have to make the programmer aware that it's a "dangerous" reference, one to be handled with care - ownership must be transferred with a clear protocol, for instance. References are designed to let you forget about ownership, because with nested lifetimes ownership clearly belongs to the outer scope.
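
For instance, a bare pointer with an explicit ownership hand-off could look like this - just a sketch, with made-up names (Texture, create_texture), not actual VS code:

Code: Select all

#include <memory>

struct Texture { /*...*/ };

//Returns a bare pointer; by documented convention, the caller takes ownership.
Texture *create_texture() { return new Texture; }

void use_it()
{
    std::auto_ptr<Texture> tex( create_texture() );   //ownership taken right here
    //...use *tex / tex-> as a "dangerous" reference within this scope...
}   //the auto_ptr deletes the Texture when it goes out of scope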

Too much mumbo jumbo? ;)
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

No; that's the perfect amount of mumbo jumbo, except that I knew it already; but there was more in my mind
that didn't come out. In a properly designed piece of software the lifetimes of objects would be guaranteed
to exceed the lifetimes of references to them floating around. Well, perhaps not always. But then again,
if it's known to the programmer that there will be references (pointers) to objects that may be deleted
without notice, then it is imperative for that programmer to provide a means of ensuring that the users of
such unpredictable references MUST check them for NULLness; --at least in debug mode.

Code: Select all

#include <cassert>
#include <cstddef>   //size_t

template <class T>
class chk_ptr
{
#ifndef NDEBUG
    static const size_t mask = ~size_t( 0 );   //the original SIZE_T_MAX isn't standard
    mutable
#endif
    T *thing;
public:
    ~chk_ptr() { assert( sizeof(size_t) == sizeof(T*) ); }   //or use a static assert
    chk_ptr &operator=( T *t )
    {
#ifdef NDEBUG
        thing = t;
#else
        //store the pointer XOR-masked until it has been checked against NULL
        thing = reinterpret_cast<T*>( reinterpret_cast<size_t>( t ) ^ mask );
#endif
        return *this;
    }
    bool operator==( const T *t ) const
    {
        assert( t == 0 || !"No comparison against anything but NULL!" );
#ifndef NDEBUG
        //the NULL check is what unmasks the stored pointer
        if( reinterpret_cast<size_t>( thing ) > (mask >> 1) )
            thing = reinterpret_cast<T*>( reinterpret_cast<size_t>( thing ) ^ mask );
#endif
        return thing == 0;
    }
    T *operator->() const
    {
#ifndef NDEBUG
        assert( reinterpret_cast<size_t>( thing ) <= (mask >> 1)
                || !"Mistake: chk_ptr used before being checked for NULL!" );
#endif
        return thing;
    }
    //etceteras
};
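A minimal usage sketch of the above (Unit and find_target are just stand-in names):

Code: Select all

#include <cstddef>

struct Unit { void fire() {} };
Unit *find_target() { static Unit u; return &u; }   //stand-in for a real lookup

void shoot_something()
{
    chk_ptr<Unit> target;
    target = find_target();    //stored masked in debug builds
    if( target == NULL )       //the NULL check is what unmasks it
        return;
    target->fire();            //would assert if the check above were missing
}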
Then again, how would the deleter NULL our pointer? We'd have to pass around
references to this pointer. So, I think you're talking about badly-planned-code examples.
Objects to which a lot of references and/or pointers are passed around should be ref-counted.
Or else we'd need Observer_ptr, with hidden chains, so they get the news of an object's demise.

And it's not true that references can't be copied; they can be, in a ctor's initializer list:

Code: Select all

class T { /*...*/ };

class X
{
    T &t_;                     //the "copied" reference lives on as a member
public:
    X( T &at ) : t_( at ) {}   //no problemo
};

T t;
X x( t );
And the main reason shared ptrs are slow is the boosters' obsession with thread safety: they encumbered shared_ptr with a stupid mutex. Politics. We could hack it.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

re. copying references: as I said... awkward and counterproductive.

re. boost and threads: wrong AFAIK - boost's shared ptrs use atomic increment/decrement operations that are thread-safe up to a point, and it's documented, and even implemented with assembler (lock movq bla bla) when on supported platforms.

But they're slower than bare pointers, nonetheless. In fact, nothing can be as fast as bare pointers - doing something will always be slower than doing nothing ;) - And thread safety is good - you can't deny that. Mostly if everyone's pushing for more parallelism.

Your observer pointer, I think, is boost's weak reference BTW. Both bare pointers and weak references are a common solution to cyclic references, which are the pitfall of reference counting. So reference counting isn't a panacea: if you regard it as such, you inevitably end up creating cyclic references (they're part of very common, perpetuated-by-Java patterns), and with cyclic references and reference counting come memory leaks.
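
A minimal sketch of the pitfall (Node is a made-up type): two objects holding shared_ptrs to each other never reach a count of zero, and making one of the links a weak_ptr is what breaks the cycle:

Code: Select all

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

struct Node
{
    boost::shared_ptr<Node> next;   //strong link both ways -> cycle -> leak
    //boost::weak_ptr<Node> next;   //a weak link on one side breaks the cycle
};

void leak()
{
    boost::shared_ptr<Node> a( new Node );
    boost::shared_ptr<Node> b( new Node );
    a->next = b;
    b->next = a;   //both use counts are now 2
}                  //a and b go out of scope, counts drop to 1, nothing is freed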
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

We're not in sync, lately.
klauss wrote:re. copying references: as I said... awkward and counterproductive.
If an object will need a reference to something, it can get it in the constructor and make its own copy. There's nothing awkward about it. References are like unchangeable pointers, so it makes sense that they'd be for long-term use, rather than casually copyable like changeable pointers. The point is, you use a pointer when you need to change what it points to - case in point, target selection. Otherwise one should be using references instead; --yes; passed in constructors. The lifetime of the object receiving the reference should be shorter than that of whatever the reference refers to. If that's not the case, 99% of the time the design sucks.
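
Something like this, as a sketch (Ship and Turret are made-up names):

Code: Select all

class Ship { /*...*/ };

class Turret
{
    const Ship &owner_;    //fixed for the turret's whole lifetime: a reference, passed in the ctor
    Ship       *target_;   //changes all the time (target selection): a pointer
public:
    Turret( const Ship &owner ) : owner_( owner ), target_( 0 ) {}
    void set_target( Ship *t ) { target_ = t; }
};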
klauss wrote:re. boost and threads: wrong AFAIK - boost's shared ptrs use atomic increment/decrement operations that are thread-safe up to a point, and it's documented, and even implemented with assembler (lock movq bla bla) when on supported platforms.
I was once considering using shared_ptr and ended up discussing it in the boost mailing list and was told it used a mutex. Maybe it has changed since then.
klauss wrote:But they're slower than bare pointers, nonetheless. In fact, nothing can be as fast as bare pointers - doing something will always be slower than doing nothing ;) - And thread safety is good - you can't deny that. Mostly if everyone's pushing for more parallelism.
A solution in the absence of a problem is not good. An inter-thread shared_ptr should be thread-safe. An intra-thread shared ptr should not be burdened with thread concerns.
klauss wrote:Your observer pointer, I think, is boost's weak reference BTW.
No; it's not. Remember the Observer pattern? An observer_ptr would be one that is subscribed to news about what it points to. The moment the object is deleted, all observer_ptr's to it are NULL'ed. Of course, this is just a mental exercise; my point was to make a reductio ad absurdum.
klauss wrote:Both bare pointers and weak references are a common solution to cyclic references, which are the pitfall of reference counting. So reference counting isn't a panacea: if you regard it as such, you inevitably end up creating cyclic references (they're part of very common, perpetuated-by-Java patterns), and with cyclic references and reference counting come memory leaks.
Heard about the cyclic reference problem; but you're talking about garbage collection and whatnot. I was simply saying that in a good design these problems shouldn't come up, because it should be provable that users of shared objects live well within the shared object's lifetime.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Re: Code reformatting project

Post by safemode »

Personally, I wouldn't really bother thinking about thread safety or anything related to it right now. We're so far away from caring about anything that has to do with threading VS that it would be nothing but a waste of time to argue details. Let the architecture come first before we think about what should be threaded, since if the structure of the classes is done right, it should be apparent where we can become parallel with the least amount of friction.... and that's not possible right now. Too many things don't follow any decent API rules.


Focus on the immediate, and pretend like you won't have time to implement stuff in the future... 'cuz you may not. Right now that seems to be cleanup and documentation, but not altering how things function (or doing so as little as possible). We still want to make a release sometime this year.
Ed Sweetman endorses this message.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

chuck_starchaser wrote:
Your observer pointer, I think, is boost's weak reference BTW.
No; it's not. Remember the Observer pattern? An observer_ptr would be one that is subscribed to news about what it points to. The moment the object is deleted, all observer_ptr's to it are NULL'ed. Of course, this is just a mental exercise; my point was to make a reductio ad absurdum.
Hm... I don't see the difference then. Boost's weak references get nullified too. They hold a linked list of references to an object to do that - classic implementation. Although I'm less sure about their implementation of weak references than of the atomic increment in shared_ptr - that's a fact. In fact, the version of boost used in VS already has that enhancement.
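
For example, a sketch with boost's weak_ptr (Unit is a placeholder name):

Code: Select all

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <cassert>

struct Unit { /*...*/ };

int main()
{
    boost::shared_ptr<Unit> owner( new Unit );
    boost::weak_ptr<Unit>   observer( owner );

    assert( !observer.expired() );   //the Unit is still alive
    owner.reset();                   //last strong reference gone, Unit deleted
    assert( observer.expired() );    //...and the weak reference knows it
    assert( !observer.lock() );      //lock() now yields a NULL shared_ptr
}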

And ya, if you use a reference or any other bare pointer for that matter, and you can't prove lifetime properties, then your design does actually suck. I was, however, pointing out that certain simple programming patterns (like passing references to enclosed execution frames - ie passing a reference to a function I'm calling) are inherently safe and let you program without testing every step you make. Copying references happily is the complete opposite.
safemode wrote:Personally, I wouldn't really bother thinking about thread safety or anything related to it right now. We're so far away from caring about anything that has to do with threading VS that it would be nothing but a waste of time to argue details. Let the architecture come first before we think about what should be threaded, since if the structure of the classes is done right, it should be apparent where we can become parallel with the least amount of friction.... and that's not possible right now. Too many things don't follow any decent API rules.
Not that far. The audio branch has a threaded audio subsystem.
It just won't be threaded in its first phase because of obvious reasons, but the design is built around thread safety.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

klauss wrote:
chuck_starchaser wrote:
Your observer pointer, I think, is boost's weak reference BTW.
No; it's not. Remember the Observer pattern? An observer_ptr would be one that is subscribed to news about what it points to. The moment the object is deleted, all observer_ptr's to it are NULL'ed. Of course, this is just a mental exercise; my point was to make a reductio ad absurdum.
Hm... I don't see the difference then. Boost's weak references get nullified too. They hold a linked list of references to an object to do that - classic implementation.
My apologies, then; I didn't know that about weak_ptr<>. Sounds like a really drastic solution, but I guess it needed to exist, though I'd think 2^16 times before using weak_ptr<>... needing it is probably a sign the design is weak.
klauss wrote:Although I'm less sure about their implementation of weak references than of the atomic increment in shared_ptr - that's a fact. In fact, the version of boost used in VS already has that enhancement.
Okay, when I was discussing shared_ptr with boosters it was at least a couple of years ago, and
your link wrote:Starting with Boost release 1.33.0, shared_ptr uses a lock-free implementation on the following platforms: ...
Heck, maybe it was my telling them I could not use shared_ptr that gave them second thoughts. So, shared ptrs may be somewhat slower than raw pointers right now, but they were molasses when I looked into them back then.
klauss wrote:And ya, if you use a reference or any other bare pointer for that matter, and you can't prove lifetime properties, then your design does actually suck. I was, however, pointing out that certain simple programming patterns (like passing references to enclosed execution frames - ie passing a reference to a function I'm calling) are inherently safe and let you program without testing every step you make. Copying references happily is the complete opposite.
Exactly. And what I was trying to say was that copying pointers happily should only happen when strictly necessary, --i.e. when the pointer needs to change what it points to; otherwise the design is too happy for its own good.
safemode wrote:Personally, I wouldn't really bother thinking about thread safety or anything related to it right now. We're so far away from caring about anything that has to do with threading VS that it would be nothing but a waste of time to argue details. Let the architecture come first before we think about what should be threaded, since if the structure of the classes is done right, it should be apparent where we can become parallel with the least amount of friction.... and that's not possible right now. Too many things don't follow any decent API rules.
klauss wrote:Not that far. The audio branch has a threaded audio subsystem.
It just won't be threaded in its first phase because of obvious reasons, but the design is built around thread safety.
Sound is the most obvious part of the engine to threadify, though. Safemode is talking about all the rest, and I totally agree: forget thread safety, reorganize the code, find where the best places are to separate execution, serialize communications between them, make those inter-thread communication channels thread-safe... and if the code in each thread uses its own private memory, then within each thread-tied code section nothing needs to be thread-safe. Otherwise you end up with locks and mutexes all over the place, which defeat the performance gains of multi-threading and make code behavior inscrutable.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

And I use the term "serialize inter-thread communications" loosely, but pointedly:
It could include message passing; it could include coded commands, functors, whatever;
but it could also include my invention of a few years back (it's probably been invented
already, but anyhow), namely an inter-thread memory manager that has two equally
sized memory blocks A and B. At first, thread a has read/write access to block A, and
read-only access to block B, while consumer thread b only has read access to
block B. Once both threads report they are done for that frame, the two blocks are
swapped, so that whatever thread a wrote to block A now becomes available to
thread b, and thread a can overwrite block B.
The negotiation and block swap needs a mutex or interlock, but it is a mutex that is
only suffered once per frame, as opposed to having mutexes all over the code.
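
Roughly like this, as a sketch (the negotiation that decides when it's safe to swap is left out):

Code: Select all

#include <boost/thread/mutex.hpp>

//Double-buffered shared block: the producer writes one block while the consumer
//reads the other; once per frame, when both report they're done, the two are
//swapped under a single short-lived lock.
template <class Block>
class double_buffer
{
    Block blocks[2];
    Block *write_block, *read_block;
    boost::mutex swap_mutex;
public:
    double_buffer() : write_block( &blocks[0] ), read_block( &blocks[1] ) {}

    Block &writable() { return *write_block; }              //producer thread only
    const Block &readable() const { return *read_block; }   //consumer thread only

    void swap()   //called once per frame, when both threads have reported "done"
    {
        boost::mutex::scoped_lock lock( swap_mutex );
        Block *tmp  = write_block;
        write_block = read_block;
        read_block  = tmp;
    }
};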

A few years ago, I took two packages from BOOST and combined them:
  • A fast, block allocator.
  • The (then incomplete and abandoned) thread library.
And what I put together was an ultra-fast, per-thread, block allocator; with the
allocator class being a thread-local object.
Then I tuned performance even more using AMD's Code Analyst. I could maybe
find the code, though it'd be obsolete, given that boost has a brand new thread
library.
But anyhow, I was also working on overriding the delete operator for pointers
issued by these allocators, so as to have memory returned, upon deletion, to the
allocator of the thread that produced it, and then let that allocator recycle the
memory if possible; the point being no cumulative cross-thread memory migration.
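
The per-thread half of it can be sketched with today's boost like so (the cross-thread "return to owner" bookkeeping is the part that needs real work, and is omitted):

Code: Select all

#include <boost/pool/pool.hpp>
#include <boost/thread/tss.hpp>

//Each thread lazily gets its own boost::pool, so allocations never contend
//on a shared heap lock. The chunk size of 64 bytes is just an example.
class per_thread_pool
{
    static boost::thread_specific_ptr<boost::pool<> > tls_pool;
public:
    static void *allocate()
    {
        if( !tls_pool.get() )
            tls_pool.reset( new boost::pool<>( 64 ) );
        return tls_pool->malloc();
    }
    static void deallocate( void *p )
    {
        tls_pool->free( p );   //assumes p was allocated by *this* thread's pool
    }
};

boost::thread_specific_ptr<boost::pool<> > per_thread_pool::tls_pool;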

Just mentioning this here to drive home the point that what we need first is to
identify the islands, or continents, of code sharing or needing to share memory,
and where the minimal interfaces between them lie; turn those interfaces into
thread-safe comm channels, and make sure each continent's or island's memory is
tightly isolated from the others. Within each island, NOTHING needs to be
thread-safe, because you know that only one thread has access.

THAT is the way to do multi-threading; NOT by making everything thread-safe.

EDIT:
But anyhow, that's food for thought for the future. Right now, the first thing is
to clean up the code (making all class members private, const correctness,
virtualizing dtors, making single-arg ctors explicit, using better names for classes,
identifying packages (sets of classes that work together) and putting them into
folders, adding comments to all files).
Then comes refactoring (limiting files to one class in public space per .h/.cpp,
removing unnecessary includes, moving includes from .h to .cpp, using pimpl's
in many places, removing dead code and obsolete comments, replacing pointers
with references where possible, or with smart pointers otherwise, identifying
general intents and using standard design patterns where applicable, adding
DBC (design by contract) basic constructs, such as asserts on all functions, and
class invariant checks, etceteras).
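
For the DBC part, even something this simple would help (a sketch; CHECK_INVARIANT and CargoHold are made-up names):

Code: Select all

#include <cassert>

#ifndef NDEBUG
#define CHECK_INVARIANT( obj ) assert( (obj).invariant() )
#else
#define CHECK_INVARIANT( obj ) ((void)0)
#endif

class CargoHold
{
    int capacity, used;
public:
    CargoHold( int cap ) : capacity( cap ), used( 0 ) {}
    bool invariant() const { return 0 <= used && used <= capacity; }   //class invariant
    void load( int amount )
    {
        assert( amount > 0 && used + amount <= capacity );   //precondition
        used += amount;
        CHECK_INVARIANT( *this );                            //re-checked on exit
    }
};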
Then comes memory space refactorings: Identifying packages that need to
share memory and packages that don't, and identifying the "islands" we can
then run in separate threads; building thread-safe bridges between them,
and separating the threads.

Of course, bug-fixing and new features will have to run in parallel with all
the above; --can't just stop progress until all the refactoring is done.
Last edited by chuck_starchaser on Wed Feb 24, 2010 7:18 pm, edited 1 time in total.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

chuck_starchaser wrote:And I use the term "serialize inter-thread communications" loosely, but pointedly:
It could include message passing; it could include coded commands, functors, whatever;
but it could also include my invention of a few years back (it's probably been invented
already, but anyhow), namely an inter-thread memory manager that has two equally
sized memory blocks A and B. At first, thread a has read/write access to block A, and
read-only access to block B, while consumer thread b only has read access to
block B.
Game state double-buffering. I used to teach it to game-dev-wannabes a while back ;)
In fact, the sound system will use that technique.
Thread-safe design isn't making everything thread-safe; it's defining and documenting what needs to be thread-safe and why, and what needn't be so and why.

Well... a better name would be thread-aware design (tm).
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

Another way of putting it:
The difference between threads in a program and separate processes is that threads share
memory, while processes don't. But threads are better designed and better performing the
more closely they resemble processes.
Each thread should have formalized input and formalized output, and no thread should be
able to see or read (let alone write) another thread's variables.
In fact, each thread should be embodied by a singleton object that contains and hides all
the classes making up the code for the thread, privately and without get functions.
All inter-thread comms, in and out, should flow through the formalized channels, and must
never contain pointers or references, except perhaps to objects within the controlled channel.
So, making the code thread-aware is more intuitively likened to separating *processes* than
to separating threads.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Re: Code reformatting project

Post by safemode »

Technically, the game is already showing you where to make it threaded: at the system level. Systems can only communicate with each other (or should, anyway) via ships jumping, so there is our locking point and only means of inter-thread interaction. Every simulated system gets its own thread, including the active one the player is in. The main game sits back in the initial process. When a ship jumps, we already have a lock, so that's when we also dump info about news and any faction stats from that system. This doubles as being more realistic, as news and such would have to piggyback on ships - we have no FTL information service in VS.

So every cycle, we tell all the system threads to wake up. They simulate one frame and sleep, waiting to be woken up again. The main process then looks at each system, checks if anyone jumped and, if so, works that out, and then wakes the threads again.


There would be no need for threads to talk to each other at any point other than when processing ships jumping. At such points, _all_ threads are sleeping, so locking is no longer an issue at all.

The only time one needs to lock, then, is if we happen to need to read a universe variable that may get modified by the player or some other means and isn't atomic. But for the most part, the universe should be read-only, and the systems should be self-contained and isolated except for jumps.
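
A rough sketch of that wake/simulate/sleep cycle (names made up, using boost::barrier just to illustrate):

Code: Select all

#include <boost/thread/barrier.hpp>

const unsigned NUM_SYSTEMS = 8;                    //hypothetical
boost::barrier frame_start( NUM_SYSTEMS + 1 );     //+1 for the main thread
boost::barrier frame_end( NUM_SYSTEMS + 1 );

void simulate_one_frame( unsigned system ) { /*...physics etc. for one system...*/ }
void process_jumps() { /*...runs while every system thread is blocked, so no locks...*/ }

void system_thread( unsigned system )              //one of these per simulated system
{
    for (;;)
    {
        frame_start.wait();                        //sleep until the main loop wakes us
        simulate_one_frame( system );
        frame_end.wait();                          //report "done", go back to sleep
    }
}

void main_loop()
{
    for (;;)
    {
        frame_start.wait();                        //wake every system thread
        frame_end.wait();                          //wait for all of them to finish
        process_jumps();                           //safe: nobody else is running now
    }
}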
Ed Sweetman endorses this message.
pheonixstorm
Elite
Posts: 1567
Joined: Tue Jan 26, 2010 2:03 am

Re: Code reformatting project

Post by pheonixstorm »

Along the lines of threading... should we use pthreads for all releases, or something like ZThread, so that each platform can use its native thread system?
Because of YOU Arbiter, MY kids? can't get enough gas. OR NIPPLE! How does that mkae you feeeel? ~ Halo
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Re: Code reformatting project

Post by safemode »

It doesn't matter to me. They aren't going to be stressed with spawning or locking. If we do it right, there will never be any lock contention, and no spawning or joining: since all the threads represent the simulated systems, they are created before graphics initialize and destroyed when the game is over. Simple.

Now, keep in mind that threads tend to be kept on the same CPU as the main process, because memory accesses are seen as more costly than the serialized processing. So the thread API we use should have a means of selecting CPU affinity. We can tell the scheduler that our threads aren't going to be playing with each other's memory often, so it should run them on as many different cores as it can.
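
On Linux, with plain pthreads, that would be something like this (pthread_setaffinity_np is a GNU extension, so other platforms need their own equivalent):

Code: Select all

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

//Pin the calling thread to one core (Linux-specific sketch).
int pin_to_core( int core )
{
    cpu_set_t set;
    CPU_ZERO( &set );
    CPU_SET( core, &set );
    return pthread_setaffinity_np( pthread_self(), sizeof(set), &set );
}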

In any case, this is all seriously far away... like probably not going to happen this year. We still have to clean the code up, document more, make a release, work on the Python side of things for a change (hah), and _then_ maybe we can play with the idea of threading. Because that is not trivial, even with my suggestion, let alone any other threading idea.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

Well, systems are indeed separate chunks of data, BUT they have common code. This is actually the easiest case, but not the best in terms of performance. In fact, the way you state it, each system thread would execute and go to sleep, so you achieve the same thing as running the system simulation code across a vector of systems - and in the latter case you have less overhead, unless multiple cores execute them in parallel. Unfortunately, this is not a symmetric enough situation to guarantee balance between cores, and you end up with each core accessing most of the code. What I was suggesting is something different, which makes each thread/core specialize, and therefore access only a portion of the code.

There are three promising types of islands for thread cleavage:
  • Different data/common code (as per your example). Pro: Easy. Con: Multiple cores executing duplicate code sections.
  • Common data/different code (data moves like cars in a car manufacturing plant, each thread being like a machine along the assembly line). Pro: Each core deals with a fraction of the code, increasing code cache coherence. Con: The amounts of data being passed around from thread to thread are rather large.
  • Different data/different code (data moves like in a hierarchical tree assembly plant, where wheels come from one assembly line, body parts from another, engine from another, chassis from another, and as these parts converge they are assembled at the trunk assembly line). Pro: Inter-thread comm bridges deal with smaller chunks of data; cores specialize, increasing code cache coherence. Con: None; just needs more careful planning.
The last is the best cleavage opportunity, and therefore the first islands that need to be identified.
A perfect example is sound, where the code there is unique, AND it deals with its own data types and stream.

Other possible examples might include a "user and network input thread", which would poll user input devices
and network server for events, add time stamps; and perhaps could also run the predictive code and the user
input delay to match network latency, and all such hacks for lag equalization and compensation. It could give the
rest of the engine a FIFO output of nicely cleaned out, tagged, and extrapolated input events data. It would be
given real time priority, but would sleep() often, so as not to interfere with other threads.

A file i/o server thread, which could secretly add predictive file fetching whenever there are no pending requests.
This thread would not own the file data memory: its output would simply be pointers to the memory, which it then
relinquishes.

The most difficult part is the heart of the medusa: units, physics and AI, but this can be simplified and separated
in code AND data, with a few hacks:

Unit AI could be a low priority thread that loops through all the AI's computing a next state. Note that AI does
not need to run synchronously with physics. If AI gets less time, and therefore runs at a lower FPS, it would be
hardly noticeable. The AI's might appear "slow", as in "slow thinking", during large battles; but that might be
perceived by players as a natural consequence of AI's being overwhelmed by so much going on. As long as the
physics run in good timing, nothing will seem too odd.
But for this we need to separate physics and AI data.
Now, AI has to send commands to the physics; and physics has to send collision data back to AI.
However, in both cases we can use a double-buffer. The physics uses the same AI output until the AI announces
it has computed a new AI commands frame; then the buffers are flipped, and the physics will now use the new
AI commands, until new orders.
The physics output to AI consists mostly of collisions, which are one-time events and
can simply be pushed into a queue that the AI consumes at its own pace.
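
For the collision side, the channel can be as simple as a locked FIFO (a sketch; CollisionEvent is a made-up type):

Code: Select all

#include <boost/thread/mutex.hpp>
#include <queue>

struct CollisionEvent { /*who hit whom, where, how hard...*/ };

//One-way channel: the physics thread pushes, the AI thread pops at its own pace.
class CollisionQueue
{
    std::queue<CollisionEvent> events;
    boost::mutex               guard;
public:
    void push( const CollisionEvent &e )
    {
        boost::mutex::scoped_lock lock( guard );
        events.push( e );
    }
    bool pop( CollisionEvent &e )
    {
        boost::mutex::scoped_lock lock( guard );
        if( events.empty() )
            return false;
        e = events.front();
        events.pop();
        return true;
    }
};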

And, needless to say, the graphics back-end can also be a thread (high priority, but with a Sleep() as soon as
it is done). Here again, the graphics thread consumes draw orders and unit position data and other things, all
of which can be FIFO queued and/or double-buffered.

The advantage of specializing threads is that, when they execute on separate cores, they only require a
subset of the code, which means a greater percentage of the code fits in the core's code cache, which increases
the likelihood that the code we call or jump to is already in cache, which reduces cache-miss latencies - the
number one cause of pipeline stalls in code execution.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Re: Code reformatting project

Post by safemode »

And that method is exactly why it's going to be well over a year before it's even a bad idea to start working on it. :)

The more you have to wake a thread, create a thread (especially), or lock a thread, the faster the performance benefit goes down the drain - it's crazy.

It's obviously an attractive feature to wish into existence, so long as it doesn't steal time away from things that actually should be getting done :)

After the reformatting is merged back to trunk, we should all take code we're familiar with and start documenting the headers and, if necessary, the source, as best we can.

I have no suggestion on how best to document the headers, other than to put yourself back in the shoes you were in when you first started looking at the code, and think about what would have helped you.

Documentation doesn't have to be complete for you to commit it; just do what you can. Though I would not mix header documentation with any other changes: in-code comments are different, but header documentation should be more official and ought to be its own commit, for tracking purposes.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

Indeed. Getting back to the work at hand:

In file src/gfx/vdu.cpp, line 1568:

Code: Select all

    switch (mnt->ammo != 0 ? mnt->status : 127)
    {
    case Mount::ACTIVE:
        return mountcolor;
        goto drawme;       //CAN'T go nowhere after return. BUG! --chuck_starchaser
    case Mount::DESTROYED:
        mountcolor = GFXColor( 1, 0, 0, 1 );
        return mountcolor;
drawme:
    ...........
Donno what to do about it, so I report it.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

It's there in trunk, I say remove.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
pheonixstorm
Elite
Posts: 1567
Joined: Tue Jan 26, 2010 2:03 am

Re: Code reformatting project

Post by pheonixstorm »

There are probably a lot of files in trunk that are no longer used by any of the binaries and need to be removed... Now, whether they are still included is another story.

As for threading... on the server side of things it might be better to have each UNIT/PLAYER as a thread, since that will very quickly become the largest part of vegaserver's usage. I'm not familiar enough with the main VS binary to say what should get threaded as what, but in any event, do we really want to try and thread VS without either looking into a major rewrite or changing over to Ogre? We really need to separate the graphics engine from the data and everything else. My guess anyway... give me time and I'll probably have some better suggestions. Remember, though, this is a game we are talking about, so rules are made to be broken or distorted.

And how goes the reformat branch anyway - everything check out now?
Because of YOU Arbiter, MY kids? can't get enough gas. OR NIPPLE! How does that mkae you feeeel? ~ Halo
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

It's going well; I finished checking all case statements in all files. I tried to get rid of boost library
build warnings but without success; the CXX string from Makefile.am doesn't seem to get through
to the boost build.
I committed the last batch of files, r12642, but forgot to type a message with the commit. Same
as the previous commit: purely cosmetic stuff.
For the past couple of hours I've been playing vegastrike, to make sure there are no bugs with
trading and upgrading. I started a new campaign, to make sure a universe has to be generated.
No problems so far.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

Alright, this is done. I'm not entirely sure I understand the first thing about the svn merge command.
Managed to use it before, but that was from local repo to local repo. So, I'm just going to post here
what I think the command is, and the message to put with the three-part commit:

Code: Select all

svn merge -r 12620:12628 https://vegastrike.svn.sourceforge.net/svnroot/vegastrike/trunk/vegastrike
  • Reformatting all the sources with uncrustify.
  • Three macros in basecomputer.cpp wrapped in do{}while(0).

Code: Select all

svn merge -r 12628:12640 https://vegastrike.svn.sourceforge.net/svnroot/vegastrike/trunk/vegastrike
  • General sanitation, plus manual formatting cosmetics, such as inserting blank lines.
  • All command macros wrapped in do{ ...; }while(0).
  • Some classes in src/gfx and a few other places were improved in terms of const correctness, virtual dtor & explicit.
  • Klauss' premul alpha commit for techniques merged.
  • All warnings squashed except those from compiling the boost libraries.
  • File planetary_transform.h removed (was not being used).
  • Files added, Makefile.am changed, major refactoring, to get unit.cpp to compile to its own unit.o.
  • Removed use of abs() with (unsigned) pImage->ecm, and simplified some logic statements.

Code: Select all

svn merge -r 12640:12642 https://vegastrike.svn.sourceforge.net/svnroot/vegastrike/trunk/vegastrike
  • Manual formatting and cosmetic touch-up.
TRAC timeline
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania

Re: Code reformatting project

Post by safemode »

Always use --dry-run first. Make sure the output of what it says it's doing is good and that you don't get any conflicts. A proper merge back to trunk should not result in any conflicts.

The first revision should be the revision at which you last merged your branch from trunk (in this case, the revision you created your branch with will work, unless you've synced your branch with trunk since then... which I don't think you did).

Now, the second revision is the last revision you committed. (Again, this assumes trunk has been synced and no additional commits were made.) Also, if you are doing it incrementally, modify revs as necessary.

Now, you should be in the directory of a trunk pull (not your branch), and the URL should be your branch's URL.

This will then pull the diff of your current working dir and your branch, giving you a set of changes to trunk that you will then commit to the trunk, with a message such as "merging reformat branch, r-whatever"

So, you basically have the URLs wrong. Your working dir should be trunk, and your URLs should be pointing to your branch.

edit: when doing a merge, think of it this way: your working dir is the destination of the merge command, and the URL is the source. And be very careful about the revisions when you haven't synced changes in the destination repo - you can inadvertently reverse them. Always dry-run, and always make sure your branch is synced with any commits made to trunk since your last sync, before merging your branch back.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

safemode wrote:Always use --dry-run first. Make sure the output of what it says it's doing is good and that you don't get any conflicts. A proper merge back to trunk should not result in any conflicts.
Ok.
safemode wrote:The first revision should be the revision at which you last merged your branch from trunk (in this case, the revision you created your branch with will work, unless you've synced your branch with trunk since then... which I don't think you did).
I did. I merged Klauss' premul alpha commit. I guess this is going to create a conflict then, because the first commit I want to do, --the uncrustify-only commit-- is from before that merge, but the second commit I want to do contains that merge, among my cleanup and refactoring.
Then the third commit I want to do has manual cosmetic stuff (the last two branch revisions).
safemode wrote:Now, the second revision is the last revision you committed.
Well, I need to merge in three stages, to avoid mixing reformatting with cleanup and refactoring.
safemode wrote:(Again, this assumes trunk has been synced and no additional commits were made.) Also, if you are doing it incrementally, modify revs as necessary.
Exactly.
safemode wrote:Now, you should be in the directory of a trunk pull (not your branch), and the URL should be your branch's URL.
Problem. I never did a full trunk pull; I pulled folders one at a time. When I was trying to merge Klauss' commit to trunk, I had a huge problem: If I was in my local vegastrike folder and specified the trunk url, it wanted to merge trunk into my vegastrike folder and add another vegastrike folder underneath. If I added /vegastrike to the trunk url, it told me "vegastrike is not under version control".
What I ended up doing was making a fresh pull of the vegastrike folder from trunk, and then doing the merge from one local repo to another. I think I might have to do that again.
safemode wrote:This will then pull the diff of your current working dir and your branch, giving you a set of changes to trunk that you will then commit to the trunk, with a message such as "merging reformat branch, r-whatever"

So, you basically have the URLs wrong. Your working dir should be trunk, and your URLs should be pointing to your branch.
Okay, but ditto; I think I'll have to do the merges locally onto my trunk pull, and then commit them from my local trunk.
safemode wrote:edit: when doing a merge, think of it this way: your working dir is the destination of the merge command, and the URL is the source. And be very careful about the revisions when you haven't synced changes in the destination repo - you can inadvertently reverse them. Always dry-run, and always make sure your branch is synced with any commits made to trunk since your last sync, before merging your branch back.
.... <some time elapsed> ...
my first commit (to my local trunk) is underway.

Code: Select all

/vs/repo/trunk/vegastrike$ svn merge -r 12620:12628 ../../branches/refactor/vegastrike/
--- Merging r12621 through r12628 into '.':
Not sure why svn needs to use the internet, but it is.
As soon as it's done I'm going to verify a bunch of random files, and then commit trunk; wash and repeat for the other two commits.
I'm going to have conflicts, because I merged Klauss' commit already to my branch, but I guess I'll just say "use mine" for the files.
So, I will temporarily undo Klauss' commit for my first commit; but it will be back in for the second and third.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal

Re: Code reformatting project

Post by chuck_starchaser »

Part 1 of 3 merge done; committing to trunk now. This is the biggest commit; it will probably take an hour.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: Code reformatting project

Post by klauss »

chuck_starchaser wrote:
safemode wrote:This will then pull the diff of your current working dir and your branch, giving you a set of changes to trunk that you will then commit to the trunk, with a message such as "merging reformat branch, r-whatever"

So, you basically have the URLs wrong. Your working dir should be trunk, and your URLs should be pointing to your branch.
Okay, but ditto; I think I'll have to do the merges locally onto my trunk pull, and then commit them from my local trunk.
I always do it like that. I have two local working directories: one with the branch, one with trunk. When I'm merging, I go to trunk, do svn up, then svn merge -r<revs> <path-to-local-branch> . --dry-run; if all's ok, svn merge -r<revs> <path-to-local-branch>, then svn diff | less (inspect the diff - in your case it may be too big for that), and if all's ok, svn commit.

That process works perfectly every time.
Of course, if you have conflicts at some point, you may have to resolve them. It's not hard at all: if svn merge says C <file>, you open up <file> and look for "<<<< mine" (or similar - I always search for "<<<"); you have your side of things and their side of things - pick one, or mix and match - then remove the conflict markers (the "<<< mine" and such), and svn resolved <file>.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe