by safemode » Fri Mar 20, 2009 11:16 am
Building for Linux... not done by us, so that side is already handled. Building for Win32, I'm not sure how hard that is. I do know it's not so much the compiling as the packaging that takes the majority of the time in that department. Mac binaries are even more labor intensive, since their build process is far from automatic, let alone the packaging. CMake should be able to minimize that, but it's not widely adopted yet, and even if it were, its Mac port is not as complete as the Win32 and Linux ones.
Building VS takes, what, 10 minutes on any half-decent machine. That's not the problem; it's packaging, and making sure your build actually runs on more than just your machine, which usually requires bouncing the binary around to a few boxes. I know it's surprising, but the Mac and Windows devs can be counted on one hand, using probably two fingers at most. So getting a build up and out takes a bit more waiting than for the Linux side of things, which only requires a source release. Having the binaries is more or less a requirement for the release to actually be made, though, and we can't accept just any user's compiled binaries as official, for obvious reasons. If a dev who could do it wanted to, it would already have been done; since we're having this discussion, obviously the devs who can do it can't at the moment. It's not as cut and dried as zipping up your build tree into a self-extracting archive. If someone wants to volunteer to be that person and can actually get it done right, I'm sure the higher-ups will be interested. It's obvious the packaging setup needs work; we've known that for a while, but for whatever reason (which I'm not privy to) nothing has been done about it. So either these other devs don't want to do it, or for some reason they're being refused. I don't know; I'm not involved with the other OSes.
And yes, to a lesser extent, size is an issue when considering a release.
Size does matter. For us it matters because we already break the rules as far as SourceForge's allowed/recommended/suggested repo size limits go. Who else is going to let you post half-gig zip files, host them for hundreds of people to download, and keep them up? Going with the release-early-and-often mantra, who is going to allow that on a monthly or every-other-month basis? I don't think anyone would put up with it for long. The bandwidth costs them per MB, and our project would soon be seen as rather costly, I think. And what user is going to happily redownload half a gig whenever 10% of the files are altered? Not everyone has 1 MB/s download speeds, and while most people pay a flat rate, that doesn't mean they're willing or happy to wait on big chunks of data they shouldn't have to fetch in the first place.
Now, while doing this twice a year isn't much of an issue for the most part, we are talking about making this more of a monthly thing (assuming anyone actually works on the code within that month). Doing it the way we do it now is already detrimental; we just have to live with it at the moment. For instance, no Linux distro includes VS's new data. Why? Because there's no way for them to download it independently and automatically. We should easily be able to make packages via scripts that retrieve our data set, and not expect every distro to mirror the huge chunk of data, with all its redundancy, every time we make a release. We should also think of our host and not keep increasing the burden on them, since they're hosting us for free. Should that ever become a factor, we'd be without a host, and finding one that can handle the code base and data set of VS with the full SVN repo would be quite hard, or expensive. Even now, users pass VS by because they take one look at the download size and where it's coming from and keep walking. It's one thing to have a fast pipe; it's another when the download sits on SF or one of its mirrors, where it's very unlikely to come anywhere near maxing out the user's connection. How do you convince users to retrieve 500 MB of data and then, a couple of months later, do it again because you added a couple of models and made a couple of corrections to some 50 KB Python files? You don't, and you get users who keep using the data set they already downloaded, because the new binary still works and that's fine by them.
So no, I don't believe it's a broken argument. I just think we've been on this side of the fence so long that we don't notice the people who took a look at our side and decided to keep walking rather than jump over, or the people who jumped once and then decided it wasn't worth their effort to stay.
Ideally, I'd like to see the data set split into sections: the way music is split out from it now, so would be models (the ships and their textures), backgrounds (space backgrounds), and everything else. Binaries should already be split out and independent of the data repo. That way our updater/installer script could retrieve the smallest changesets possible when updating or installing the game, without leaning on the SVN server. You're right that the initial install/download of the game would not change, but it lets the load be spread out, it makes updating the data set more manageable for the hosts and distributors, and it makes the users who take the plunge more likely to keep up to date, since they don't have to retrieve another 600 MB file.
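To make that concrete, here's the sort of thing the central list could contain. It's purely hypothetical: the chunk names, revision numbers and URLs are all made up, and I'm showing it as JSON only because a small script can parse it with no effort:

[code]
{
    "music":       {"revision": 412, "url": "http://example.org/vs-data/music-r412.tar.gz"},
    "models":      {"revision": 437, "url": "http://example.org/vs-data/models-r437.tar.gz"},
    "backgrounds": {"revision": 398, "url": "http://example.org/vs-data/backgrounds-r398.tar.gz"},
    "misc":        {"revision": 440, "url": "http://example.org/vs-data/misc-r440.tar.gz"}
}
[/code]

A chunk only gets a new revision (and a new archive) when something inside it actually changed, so a release that only touches a few Python files only bumps the one chunk those files live in.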
Even if you don't agree that it's a limiting factor in making a release, you can't think that our current method of distribution, replacing one massive file every release, is optimal rather than a gross waste of resources, and it will bite us if we move to a rapid release schedule.
The only way to avoid abusing SVN is to chop the data repo into pieces and host them wherever we can (and more places will welcome those than one singular massive file), then build in a little updater script that tracks the installed revisions of those chunks and checks them against a central list we can still host on SF, letting the app know if any chunks were updated in a given release. There's no reason this can't be done, and the packaging of such a file could be handled by any dev, as it would be OS-agnostic. The updater script would handle all the OS-specific stuff.
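Here's a minimal sketch of that updater idea, assuming a manifest like the one above and a local file that remembers which chunk revisions are installed. Every path, URL and filename is invented, and the actual unpacking is waved away in a comment, since that's where the OS-specific handling would live:

[code]
# Hypothetical updater sketch (Python 2.x): compare installed chunk revisions
# against the central manifest and fetch only the chunks that changed.
import json
import os
import urllib2

MANIFEST_URL = "http://example.org/vs-data/manifest.json"  # made-up central list
LOCAL_STATE = os.path.expanduser("~/.vegastrike/installed_chunks.json")

def load_installed():
    # Revisions we already have installed, e.g. {"music": 412, "models": 430}
    if os.path.exists(LOCAL_STATE):
        return json.load(open(LOCAL_STATE))
    return {}

def main():
    manifest = json.load(urllib2.urlopen(MANIFEST_URL))
    installed = load_installed()
    for name, info in manifest.items():
        if installed.get(name) == info["revision"]:
            continue  # this chunk is already up to date, skip the download
        print("fetching %s r%d" % (name, info["revision"]))
        archive = urllib2.urlopen(info["url"]).read()
        # ...unpack 'archive' into the data directory here; this is the only
        # place any OS-specific path handling needs to live...
        installed[name] = info["revision"]
    if not os.path.isdir(os.path.dirname(LOCAL_STATE)):
        os.makedirs(os.path.dirname(LOCAL_STATE))
    json.dump(installed, open(LOCAL_STATE, "w"))

if __name__ == "__main__":
    main()
[/code]

The point is that the central list stays tiny and cheap for SF to serve, the big archives can live on whatever mirrors will take them, and a user who already has the current music chunk never downloads it again.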
Ed Sweetman endorses this message.