Tri-amping must have made the difference.
Indeed, and as you say, there's always 'better': In my first implementation of the crossover, I used analog subtraction to get phase coherency. I noticed, however, that distortion began earlier in the midrange than in the tweeter and woofer amps. Finally I simulated my crossover (after having built it, of course) and discovered that the subtraction method gives the channel derived by the subtraction almost double the gain at the corner frequency, in order to *subtract* air pressure from the other speaker! Later I designed a new preamp using the Linkwitz-Riley crossover (this time I simulated it first), and what a difference! That's when people started to say, "This can't be 270 watts; this is at least 1000 W, and you're kidding me."
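If you want to see that gain bump for yourself, here's a minimal sketch of the arithmetic: take a Butterworth lowpass H(s) and compute the magnitude of the derived channel, 1 - H(s), around the corner frequency. (A 3rd-order Butterworth is assumed here purely for illustration; the exact peak depends on the filter actually used.)

Code:
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    // Derived (subtracted) channel of a subtractive crossover: 1 - H(s).
    // H(s): 3rd-order Butterworth lowpass, corner at omega = 1 (assumed).
    for (double w = 0.25; w <= 4.01; w *= std::sqrt(std::sqrt(2.0))) {
        std::complex<double> s(0.0, w);  // s = j*omega
        std::complex<double> H = 1.0 / ((s + 1.0) * (s * s + s + 1.0));
        double g = std::abs(1.0 - H);    // gain of the derived channel
        std::printf("w = %5.2f   |1 - H| = %.2f  (%+.1f dB)\n",
                    w, g, 20.0 * std::log10(g));
    }
    return 0;
}

Around w = 1 the derived channel sits a few dB above unity, and that extra gain is where the extra midrange distortion was coming from.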
Besides that, I implemented an invention of mine in the amplifiers that electronically linearizes and ultra-dampens the speakers. I'm using high-efficiency speakers from Eminence, the type used for rock-band stage amps (NOT hi-fi speakers), which is part of the reason the system sounds so loud; but my linearization invention makes them sound like flat-response "monitor" speakers in terms of quality. No resonant booming sounds: my bass speaker vibrates you and pounds you evenly, at all bass frequencies.
In fact, if the amp is on but nothing is playing, tapping on the speaker cones sounds like tapping the belly of a dead cat. Almost no sound at all. And if you push the cone in for a second and let go, it oscillates a couple of times very slowly, at something like 0.5 or 0.3 Hz. That's my invention making the speaker's coil act as if it were superconductive... People who tried this thought my speakers had the Devil inside.
R.E. stress, cargo, acceleration: Good, you're thinking too. Yeah, one should always strive to trigger things on their immediate causes, rather than on the causes of those causes, or on sister consequences.
R.E. tick() idea: Excellent. It should work. I'm not sure whether the VS engine runs at a constant rate or a variable rate. If the latter, we might need to pass the elapsed time as an input parameter.
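Something like this is what I'd picture (just a sketch; the SoundSystem class and member names are made up, not actual VS engine API):

Code:
// Hypothetical sketch: class and member names are made up, not VS engine API.
class SoundSystem {
public:
    // If the engine ticks at a variable rate, the elapsed time has to come
    // in as a parameter; with a constant-rate engine it could be a constant.
    void tick(double dtSeconds) {
        now += dtSeconds;
        // advance fades, delay lines, Doppler, scheduled one-shots by dtSeconds
    }
private:
    double now = 0.0;  // accumulated sound-system time, in seconds
};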
R.E.: /sound rather than .whatever/sound for .etx's: Got you. We could mirror the same folder hierarchy, except without the leaves, naming each .etx file after the corresponding leaf folder under .whatever/sound.
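In other words, a mapping like this (a sketch; the function name and example paths are just illustrative):

Code:
#include <cstdio>
#include <filesystem>
namespace fs = std::filesystem;

// E.g. the leaf folder ".whatever/sound/engines/" would be described by
// "sound/engines.etx": same hierarchy minus the leaf level, with the .etx
// file named after the leaf folder.
fs::path etxForLeafFolder(const fs::path& leafDir) {
    fs::path rel = leafDir.lexically_relative(".whatever");  // sound/engines
    return rel.replace_extension(".etx");                    // sound/engines.etx
}

int main() {
    std::printf("%s\n",
                etxForLeafFolder(".whatever/sound/engines").string().c_str());
}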
R.E.: New language: I was joking. It's sort of a bad-omen tradition: when someone starts coming up with a new language to solve a problem, they forget about the problem and spend the rest of their life working on the language...
That sounds very powerful, BTW.
R.E.: OpenAL's limitations: I've been digging around and found someone who developed the sound system for a Quake 2 mod, based on OpenAL, but implementing... get this: A3D!!! I've no idea how, but I sent the guy an email around 4 am this morning. No reply yet.
R.E.: 0.6 ms: That's right. That's why you were talking in whole milliseconds and I was saying we need sub-millisecond reflections. Of course we don't perceive the change as a change in delay time; but we do perceive it as a change in spectral content. Your idea of just faking it with filters and balance wouldn't sound very realistic, because our ears are far more sensitive to, say, the reinforced bands of a comb filter stretching apart than to a change in frequency response involving a single moving pole. I'm sure there are a lot of psychoacoustic hacks and shortcuts we can take, but I really don't think this is one of them. A 0.6 ms delay makes a comb-filter response with peaks and troughs every ~1.7 kHz (1/0.6 ms). Not easy at all to do that with an LPF...
So, if the hardware won't do it for us, I'd explore the possibility of real-time software post-processing, to add at least one dynamic reflection.
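Here's roughly what I mean: a minimal sketch of a single software reflection with a fractional, time-varying delay (the class layout and the numbers in the comments are placeholders, not anything decided):

Code:
#include <cmath>
#include <cstddef>
#include <vector>

// One dynamic reflection, mixed into the dry signal in software.
// At 44100 Hz, a 0.6 ms delay is ~26.5 samples, so the read position is
// interpolated to allow smooth sub-millisecond changes as geometry moves.
class Reflection {
public:
    explicit Reflection(std::size_t maxDelaySamples)
        : buf(maxDelaySamples + 2, 0.0f) {}

    // delaySamples may be fractional and may change on every sample.
    float process(float dry, double delaySamples, float gain) {
        buf[write] = dry;
        double readPos = static_cast<double>(write) - delaySamples;
        while (readPos < 0.0) readPos += buf.size();
        std::size_t i0 = static_cast<std::size_t>(readPos) % buf.size();
        std::size_t i1 = (i0 + 1) % buf.size();
        float frac = static_cast<float>(readPos - std::floor(readPos));
        float wet = buf[i0] * (1.0f - frac) + buf[i1] * frac;
        write = (write + 1) % buf.size();
        // dry + delayed copy = comb filter, peaks spaced 1/delay apart
        return dry + gain * wet;
    }

private:
    std::vector<float> buf;
    std::size_t write = 0;
};

The point is that delaySamples can change smoothly on every sample as the source and listener move, which is exactly what makes the comb peaks stretch apart the way they do in a real room.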
Oh... I just remembered.
With all the fuss about the sounds, I think people forgot about the voice acting thing. Perhaps opening a "Voice Acting - Really" thread would be nice, don't you think?
There was such a thread open somewhere; maybe at the WCU forum on the crius.net website. No responses at all, that I remember; and none to this thread in that regard, either.
I think people will have to feel motivated. Right now the EQ on the current voice-overs is simply wrong, or non-existent. With my preliminary experiments processing those few Confed tracks, I got what is, to my ears, much more intelligible speech. Once people see that their voice-acting efforts will be put to good use, they'll begin volunteering. "Build it, and they'll come."
Doh! I keep forgetting this is about the VS engine, not WCU alone.
Well, maybe we should leave the voice acting alone for now. The WCU voice-overs aren't really that bad, just lacking EQ and compression; with some processing they'll sound a lot better. As for Vegastrike the game, I haven't played it in a while, so I forget what its voice-overs sounded like.
I was going to ask you, BTW, whether it would be possible to use ORFEO to do the same processing I did to those voice-overs, namely:
Code:
*Bit depth to 16, if needed.
*Upsample to 88,888 Hz (I'll call this 88 kHz).
*Normalize to about 50% below clipping.
*"Differentiate" (EQ such that gain = 1 at 1 kHz, falling 6 dB/oct to the left and rising 6 dB/oct to the right).
*"Telephone" band-pass filter, 250 Hz to 4 kHz, applied 3 times for sharpness.
*Bit depth down to 8.
*Sample rate down to 8 kHz.
*Bit depth back up to 16.
*Sample rate back up to 88 kHz.
*Apply the EQ curve of a cheap 2" speaker. (Got the EQ file, but can't upload it to Geocities.)
*"Integrate" (EQ such that gain = 1 at 1 kHz, falling 6 dB/oct to the right and rising 6 dB/oct to the left).
*Distort with a smooth symmetric function. ("Synchronize" is the word Soundforge uses for symmetric, as they use "symmetric" for some other meaning...)
*"Differentiate" back.
*Add reverb: 3 taps: 1 ms at 50%, 3 ms at 25%, 7 ms at 17%.
*Normalize to about 75% of clipping volume.
*Downsample back to 11 kHz.
Without the last 3 steps. And, do you have a command-line version of ORFEO? What I'm thinking of is writing a little program that iterates through the CVS tree, processing all the voice-over files, batch-mode style.
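Roughly this kind of thing, assuming ORFEO grows a command-line mode (the "orfeo" command, its arguments, and the "checkout" path are all hypothetical, just to show the shape of it):

Code:
#include <cstdlib>
#include <filesystem>
#include <string>
namespace fs = std::filesystem;

int main() {
    // Walk the checked-out tree and run the (assumed) ORFEO command line on
    // every voice-over file. Tool name, arguments, and paths are made up.
    for (const auto& e : fs::recursive_directory_iterator("checkout")) {
        if (!e.is_regular_file() || e.path().extension() != ".wav") continue;
        std::string cmd = "orfeo --preset voiceover \"" + e.path().string() +
                          "\" \"" + e.path().string() + ".out.wav\"";
        std::system(cmd.c_str());
    }
    return 0;
}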