AI re-development thread

Talk among developers, and propose and discuss general development planning, tackling of tasks, etc., in this forum.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

AI re-development thread

Post by safemode »

This is a technical brainstorming thread directed towards creating a viable, more realistic AI for the game, to tackle the issues associated with the current AI's performance.


My takes:

We have two modes of operation we have to deal with when talking about the game's AI. We have group think, and we have individual think.

Group think: This is the response an individual unit has towards another, based on its associations with other units; both its own associations and those of the unit it's interacting with. Group think almost requires a separate AI response path in and of itself, one that can be resolved against the unit's individual-think response to arrive at some final response. This is akin to what is usually referred to as faction AI, though it's more general than that. Group AI can simply be a weighted response choice every unit takes into account when facing a decision. This would require a series of defined "groups" or bonds, with a strength assigned to each. These groups or bonds each then carry a list of all other groups or bonds, forming a matrix of connection strengths (both positive and negative). The difference between what I would call group think and faction relations (already in the game) is that responses where group think wins over individual think always change group-think bonds, while responses that result from individual think do not always change group bonds. This is a key difference in behavior.

Individual think: This is directed towards simulating an individual... obviously. Individuals have varying levels of aggressiveness, bravery, etc. They take offense at a unit that attacks them, but their response is not dictated only by self-preservation or previous interactions with that particular unit; every response must also be weighed against the group think. Depending on the faction, group think may lose to individual think most of the time, or win most of the time. Personal enemies may be saved to a list of hostile units in the individual's AI (group think may get something like this too, kind of like an ace's list of who to really gun for, or look out for, if you come across them). A friends list may also be saved, so that units you have helped out may return the favor if you ask for their assistance later, whereas a stranger might ignore a similar request.


Every unit would have a couple of modifiers in the engine for behavior.
The main one is the groupindividual think modifier. For simplicity, we can just say it is a number from 1 to 99. This number can be raised or lowered by certain outcomes of events the unit experiences. It does not change often.
Then we have the group event modifier. This is something added to the groupindividual modifier depending on the type of event; it is generated per event by the unit, based on what the event is and the personal attributes listed below. It is a float representing a percentage, positive or negative, of the groupindividual modifier that will temporarily be added to or subtracted from that modifier for this event.

Our decision resolver is a random number generator that produces a number from 1 to 100. The groupindividual modifier acts as an upper bound: any roll falling within the range from 1 to the modifier results in the group response being used. If the roll is outside that range, the individual-think response is used.
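A minimal sketch of that resolver in Python (the engine itself is C++, but Python is what VS scripts its behaviors in; the names `choose_think_mode` and `group_event_pct` are hypothetical, invented for the example):

```python
import random

def choose_think_mode(group_individual, group_event_pct, rng=random):
    """Pick 'group' or 'individual' think for one event.

    group_individual: the persistent 1-99 modifier for this unit.
    group_event_pct: the per-event float, e.g. +0.25 temporarily adds
    25% of the modifier for this event only.
    """
    # Temporarily bias the modifier for this event, clamped to 1-99.
    effective = group_individual + group_individual * group_event_pct
    effective = max(1, min(99, effective))
    roll = rng.randint(1, 100)
    # Rolls inside 1..effective select the group response.
    return "group" if roll <= effective else "individual"
```

The `rng` parameter is only there so the choice can be tested deterministically; in the engine it would just be the global generator.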

attributes:
aggressiveness
bravery
loyalty
greed

These are looked at by the group event modifier to change how likely it is that a unit will respond with the group response, and they are also used to formulate the individual response. Loyalty is included despite seeming redundant with the whole group mechanism, because it relates to loyalty toward particular units, tracked by a "friends list" each unit maintains: units that have personally helped it in some way. There is a similar list of foes that each unit maintains. These lists get cleaned out over time: over a given period each entry's weight decays toward 0, and once it reaches 0 the entry is removed from the list. Continual interaction with a unit mitigates this removal (refreshing the bond, friend and foe alike).
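The decaying friend/foe lists could be sketched like this (a Python illustration only; `decay_bonds`, `refresh_bond` and the per-second decay rate are all made up for the example):

```python
def decay_bonds(bonds, dt, decay_rate=0.1):
    """Decay friend/foe bond weights toward 0 and drop dead entries.

    bonds: dict of unit id -> signed weight (positive = friend,
    negative = foe). decay_rate is weight lost per second.
    """
    out = {}
    for unit_id, weight in bonds.items():
        step = decay_rate * dt
        if weight > 0:
            weight = max(0.0, weight - step)
        else:
            weight = min(0.0, weight + step)
        if weight != 0.0:          # reached 0: forget this unit
            out[unit_id] = weight
    return out

def refresh_bond(bonds, unit_id, delta):
    """Interacting with a unit refreshes (strengthens) its bond."""
    bonds[unit_id] = bonds.get(unit_id, 0.0) + delta
```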

What all of this does, however, is simply provide a mechanism for choosing between two options: the group action or the individual action. Each of these is itself a list of possible actions, both pooled from the same selection of actions provided by Python scripts but given different priorities.

The priorities are determined by the individual's attributes or, in the case of group think, the group's attributes. The unit will try the top priority first but, if met with sufficient resistance, move to the next, and so on, until it either succeeds or reaches the end of the list.
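The fall-through over a priority list is trivially small in code (a hypothetical sketch; `try_action` stands in for whatever the engine would use to attempt an action and report resistance):

```python
def run_priorities(actions, try_action):
    """Walk a priority-ordered action list, falling through on failure.

    actions: list ordered highest priority first. try_action(a)
    returns True on success, False when resistance forces a move
    to the next action.
    """
    for action in actions:
        if try_action(action):
            return action
    return None   # list exhausted; time to re-assess
```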

Python actions allow one to script the behavioral response of a unit in as much detail as one wants: make an attack as complicated and skilled as you can, or make it uncontrolled, leaving the details to the "instincts" of the unit.

Which brings me to instincts.

Instincts are actions hard-coded into the engine. These are what the units act on for the first few physics frames after any important event, before an AI reaction has been formulated. They consist of things like how to retreat, evasive maneuvering to avoid collisions or weapon blasts, returning fire to neutralize a hostile, and basic navigation. Instincts are used to fill in the details of Python directives in a given action command, so how much of them gets used after the initial response time is up to the script. Instincts are basically self-preservation driven: choices are made based on whether they are likely to avoid death while accomplishing a certain goal (fly to the left of a unit, so long as the unit isn't going to crash or still getting hit).


so a little pipeline would look like this ...

Code:

                                    campaign
                                       |
                                 /-> group think ------\
--> event -> response -> instinct                       --> action -> event ... and so on
                                 \-> individual think --/
                                       |
                                    campaign
The campaign can manipulate the group think or individual think to modify key characters' responses to specific events. This helps drive VIPs in a desired direction for a consistent experience, or helps script events that serve the plot.



anyways, add your own ideas and such. off to job #2
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-development thread

Post by chuck_starchaser »

A lot of what you describe makes sense, and I have no issues with it.

Firstly, the problems with the AI are much more fundamental than can be addressed with implementation tweaks.
I think a starting point for better "faction AI" would be to map out the major factors in real human relations. We are a lot more complex than simply loving or hating each other. You might say you "hate" your mother-in-law, but perhaps you wouldn't shoot her; perhaps you'd jump to save her if she was about to walk in front of a speeding bus. I dislike my boss in many ways, but I wouldn't shoot him, even if I had the opportunity to do it with guaranteed immunity from prosecution. Modeling faction relations as a single number spanning from love to hate is absurd to the point of being hateful.
If you're a merchant, you'd probably hate pirates, but avoid shooting at them unless they are attacking you. The pirates wouldn't necessarily hate you because you're a merchant. They probably just love your cargo more than they love you. It makes no sense that pirates would attack you simply because they are pirates. There IS an irrational faction in Vegastrike: the luddites. That's enough. Pirates should be rational people.
In RL, there are many peoples who hate many other peoples, yet they manage to live without war for extended periods (India-Pakistan). There are asymmetrical love-hate relationships: The Chinese and Koreans hate the Japanese, but the Japanese don't hate them back (and this is so for good reasons). Conversely, you can have wars between people that don't hate each other, and never hated each other, at all (US-North Vietnam). You can have an arms-race without a war. You can have un-declared wars (pirate wars). You can have declared wars lasting for years where neither side shoots the first bullet.
The present VS model for faction relations can be used as is, but for a like-dislike continuum, within a more complex relationship framework.
You should be able to meet pilots that verbally show the deepest hatred for you and your kind, and yet don't shoot you, or even return fire if you shoot at them; unless you're doing them real damage and they can prove they acted in self-defense. Conversely, you should be able to come across pilots that hold no grudge against you but shoot at you because they must. Like the hunters in Privateer saying "nothing personal, but your death is my living." That's just one thing hunters say; but I'm talking about generally modeling like/dislike separately from factual hostility.
It also doesn't make sense that a ship becomes hostile just because you accidentally crash into it. In Privateer 2, if you hired a cargo ship, you'd have to shoot at it a lot before it became hostile. Before that, the pilot would just say "Hey, this is YOUR cargo you're shooting at!"

Secondly, and as I've mentioned many times in these forums before, what we need to do first of all is decouple ship computer AIs from their pilots' AIs. For PU, we've had an idea for a long time that we cannot implement because of this lack of decoupling. What your ship's AI considers to be a friend or an enemy is based partly on a shared *ships* (NOT pilots) database, and partly on heuristics. The system should NOT be infallible. If you're good with the merchants, but a merchant ship has just been hijacked by pirates, perhaps your ship's computer hasn't got the update yet, and identifies it as friendly. Right now, the correspondence between color codes on the sensor screen and real enemies that shoot at you is 100%. This is patently absurd. The relations database would also work using some "official" algorithms. If you are klkk, and they are on good terms with the andolians, then andolian ships would ALWAYS show as friendly on your screen, even as they shoot at you. Why? Because as klkk you are simply prohibited from shooting at andolians; the klkk don't care to risk a war with the andolians. Makes sense?
So, the sensor screen should be fallible, stupid, politicized... as you would expect it to be; and if you want to override politics in your sensor, perhaps you could get a shady programmer to look at it. But, by default, what your sensor and ship computer consider friendly or hostile should be decoupled from the reality out there; --from the real people piloting the ships.
Thus, in the PU story I was mentioning, Burrows would kill certain corrupt confed officials. ALL confed ships would then look red on your screen, because the confed authorities mark you as a terrorist, and this goes into the official database; but most confed pilots don't shoot you, and in fact hail you for having killed those high-ranking bastards. They speak friendly to you; even help you if you are in trouble, yet they appear red (hostile) on your sensor.
Makes sense?
This would add a lot of depth to the game, as well as make it more unnerving, edge-of-the-seat stuff, to see hostiles on your screen only to verify they aren't; and to never be able to count on your sensor reporting friendly-looking skies with 100% certainty.
And even when the sensor is right, just because a ship is "hostile" it doesn't mean it will shoot at you. As a player, you could make your calculations, based on what you know the pilots' motivations likely are. If you're a pirate, merchant ships would look red, but you'd know they will probably avoid you, rather than seek confrontation.
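The decoupling being proposed could be sketched roughly like so: the sensor screen consults only the official (possibly stale, possibly politicized) relations database, while actual hostility lives with the pilot. Everything below, function names and the example relations table included, is illustrative only:

```python
def sensor_color(own_faction, contact_registered_faction, official_relations):
    """What the ship computer paints on the sensor screen.

    Uses only the official database, keyed by the faction the contact
    is *registered* as, never by the contact's real intent.
    """
    rel = official_relations.get((own_faction, contact_registered_faction), 0)
    if rel > 0:
        return "friendly"
    if rel < 0:
        return "hostile"
    return "neutral"

def pilot_is_hostile(pilot):
    """Ground truth lives with the pilot, not with the sensor."""
    return pilot.get("intends_to_attack", False)
```

The point of keeping these as two separate functions is exactly the fallibility argued for above: nothing forces their answers to agree.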
Etceteras.
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-development thread

Post by safemode »

While I don't have any doubt that a better setup for a faction-level AI is needed, I'm now less sure it needs to be a "faction AI", and I'm starting to consider a more general group-think AI model that implicitly becomes a faction AI all on its own. Think of it as a hive mind directed by common bonds rather than a group directed by a single authority. That's the main difference between "group think AI" and "faction AI".

The group think gets its direction from common bonds, the AI directives that correspond to them, and the campaign. The campaign gives it goals or commands... basically a direction; the group-think function every single unit has then becomes directed towards that goal/command, and the overall effect is behavior that appears faction-directed (though groups can be of any type, crossing faction lines).

In this manner, we keep the AI code centralized, without having to simulate two AIs that must then be resolved against each other every physics frame.

Also, since this is not attached to a particular group, the group-think function is completely dynamic. How you associate the unit when it's created shapes its group-think AI behavior. Obviously we would group units into factions, creating that bond in the group-think AI routine, but we could also group them into classes such as merchant, military, civilian, explorer. We could also group them into flightgroups (we do), and every group has different bond strengths that allow one to supersede another. But it doesn't exist as its own AI entity; it's a part of the unit that may or may not get processed during a physics frame. The unit chooses between an action directed by the group-think AI or its own personal AI. To the player this is transparent. As long as the campaign is written correctly and the units are created with some level of reason, either choice of AI routine should seem logical for a given situation, yet completely dynamic (if we allow it to be, and in VS we will).

Now, as for decoupling the pilot from the ship: I agree. I think the ship should get a totally new AI created for it, one that behaves much like you described. I also think the nav system should be updatable within a certain range, allowing a pilot to broadcast their callsign to all ships in range; those ships would then display the callsign next to the ship's transponder. This is more for the player's benefit than anything else, allowing ships within your flightgroup and such to be referred to by their pilots' names (callsigns, anyway) rather than some faceless ship type and faction name. Give ships a face (or in this case, at least a name for now) and the player will enjoy the interaction far more.

Pilot AIs then become separate entities. We can think of them as belonging to cockpit objects that can be copied as they move around (if they move), like when a pilot ejects or gets a new ship, giving the characters some manner of persistence. And we really want to go in that direction, where the player is interacting with pilots, not just ships.


edit: Actually, it's wrong to think of "group think AI" as an it, or a thing in the game. It's just a mode of thinking that the pilot's AI has at its disposal. What makes units in a group behave with a common purpose is the result of individual decisions to do the same thing, based on the strength of their bonds in their group-think routines and the various Python AI scripts we make available to the units. In my version there is only one real AI, the pilot AI. It can decide whether it wants to behave as part of a group or individually, and the player will only have the pilot's personality and interactions to figure out which is which when something happens. Most of the time, if the pilot's personality was created to mesh well with the group bonds we started it out with, both choices may result in the same response, and so on. I just wanted to clear up that when I talk about the "group think AI" vs the "individual AI", I'm not talking about two things, but basically the same AI having two different sets of priorities and choosing which set to use in a given situation, with that decision affecting things differently depending on which one was chosen.
Ed Sweetman endorses this message.
RedAdder
Bounty Hunter
Posts: 149
Joined: Sat Jan 03, 2009 8:11 pm
Location: Germany, Munich
Contact:

Re: AI re-development thread

Post by RedAdder »

One challenge to such AI is consistency and going through with a decision.
How do you keep the AI from going to choice A, then to choice B, then back to A?
One can often see such silly vacillation in AI.

Adding a random element and "big" increases/decreases to plans seems like a good idea. E.g. if plan A's weight is 0.2 and plan B's weight is 0.4, then with probability 0.67 increase the votes for plan B by +1, and with probability 0.33 increase plan A's votes by +1. Make a plan's votes decay slowly over time, and only consider new plans (thus adding +1 vote to a plan) if the current plan drops to zero votes.
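A sketch of that voting scheme (the function names and the weight-proportional mechanics are my reading of the proposal, not an existing implementation):

```python
import random

def vote_step(weights, votes, rng=random):
    """One voting step: +1 vote to a plan, chosen with probability
    proportional to its weight (so weights 0.2 vs 0.4 give the
    0.33/0.67 split from the example)."""
    total = sum(weights.values())
    r = rng.random() * total
    for plan, w in weights.items():
        r -= w
        if r <= 0:
            votes[plan] = votes.get(plan, 0) + 1
            return plan
    # floating-point underrun: credit the last plan
    votes[plan] = votes.get(plan, 0) + 1
    return plan

def current_plan(votes, held):
    """Hysteresis: stick with the held plan until its votes decay to
    zero; only then adopt the top-voted plan."""
    if held is not None and votes.get(held, 0) > 0:
        return held
    return max(votes, key=votes.get) if votes else None
```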
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-development thread

Post by safemode »

Well, the idea is that the choice between group think and personal think happens once per event, an event being defined as any number of configurable things. Say one is unit proximity: two units get close to each other, and they have choices to make, all configurable. Before anything is done, however, the unit must first decide whether it's going to act on its group think or its personal think. This is a weighted decision based on the attributes given to the particular unit and the event.

So say it goes with group think: this choice is not re-made until all the options available to group think have been exhausted, or until the event is invalidated. Now, stuck in a certain think mode, you have choices available, or actions the unit can take. It picks the highest-valued action (determined by various variables) and attempts to complete it. Any time it has to stop to do something like protect itself from being fired upon, or the other unit moves away, or the unit otherwise has to try harder each physics frame, the value of that action decreases. Once it falls below another action's value, that other action is taken.

This continues to occur until the "re-assess" action is executed. This is basically a default, start-over-because-nothing-is-working action. What it does is re-run the event with its new circumstances and see if a different set of actions is chosen; perhaps group think doesn't win out this time, or a different order of actions is used, and so on. How the unit's attributes are set up determines how long it takes before each action is given up on, and it also determines which actions are even available.
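The decay-and-switch loop described above can be sketched in a few lines (hypothetical names; `penalty` is an arbitrary per-frame decay for a frustrated action):

```python
def step_action(values, current, penalty=0.1):
    """One frustrated physics frame: the current action loses value;
    switch to whichever action is now worth most. Returns None once
    everything has decayed to zero, i.e. time to 're-assess'."""
    values[current] = values[current] - penalty
    if all(v <= 0 for v in values.values()):
        return None          # exhausted: fire the re-assess action
    return max(values, key=values.get)
```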

This is a fairly slow process. Instincts are always available to alter the behavior of a ship. The AI then processes an event using previous physics-frame data to base its decision on; this lets it decide based on more than just the single frame prior to now, so the decision is more likely to stay valid for a longer period of time. Perhaps you accidentally fired a round at a ship; well, the AI isn't going to see your single shot and subsequent cease-fire as a sign that you are attacking it. Perhaps you even send it a comm after shooting to say you're sorry, etc. The instincts of the AI ship may cause it to start evading your ship, or breaking a lock you may have on it, but its AI would kick in and decide what to do with you only after some physics frames had gone by and it had a chance to see what is going on.
Ed Sweetman endorses this message.
RedAdder
Bounty Hunter
Posts: 149
Joined: Sat Jan 03, 2009 8:11 pm
Location: Germany, Munich
Contact:

Re: AI re-development thread

Post by RedAdder »

Humm, I suggest then that the reassess action is taken when the time the last action plan was held drops below a fraction of the time since the last reassess action. This way, you avoid two action plans being executed alternately for only seconds at a time.
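That rule is simple enough to state directly (the `fraction` threshold is an arbitrary tuning constant for the example):

```python
def should_reassess(hold_time, time_since_reassess, fraction=0.25):
    """RedAdder's rule, sketched: re-assess only when the last plan
    was held for less than `fraction` of the time elapsed since the
    previous re-assess, damping rapid plan flip-flops."""
    return hold_time < fraction * time_since_reassess
```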
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-development thread

Post by chuck_starchaser »

RedAdder wrote: Humm, I suggest then that the reassess action is taken when the time the last action plan was held drops below a fraction of the time since the last reassess action. This way, you avoid two action plans being executed alternately for only seconds at a time.
Hysteresis is good. Randomness, however, should be used with moderation and caution; just like you wouldn't want to overdo it with noise in texturing. A bit of dithering is good. Same way in AI: a pinch of randomness can make up for many factors that would be expensive to compute, but only after a good number of factors HAVE been computed, such as weighing relative strengths, uncertainty, group politics, and the possible secondary consequences of starting a shoot-out. AI should be as sophisticated as possible to be believable, AND THEN have a light sprinkle of uncertainty; otherwise the player realizes in no time that certain decisions are largely random, which makes the game feel cheap.
klauss
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: AI re-development thread

Post by klauss »

This brainstorming has happened repeatedly in VS history.

The infrastructure hinders implementation of whatever comes out.

I think another kind of brainstorming would be better: one directed towards replacing the current (unintuitive and overly specific) infrastructure with a more intuitive and flexible one. Then one could experiment and actually implement all that gets brainstormed at the level of this thread.

The current AI, from what I could decipher, is layers upon layers of configurable behavior, with the most common AI module being the "aggressive AI", which is basically (contrary to what the name would seem to imply) a very limited state machine.

Problem is, the AI in VS is overly "custom": AI people don't get it right away because it's... weird. And non-AI people don't get it because it's complex and has too much functionality coded in badly named and completely undocumented modules.

I know of a couple of AI kinds: there's the ad-hoc, script-based AI used in the most flexible engines; there are state-machine AIs of various sorts (deterministic or nondeterministic, fuzzy or not); there are even genetic AIs (basically tweakable state machines that have their tweakable parameters determined automatically by genetic evolution).

Script-based AI is useless in VS because it's too heavy for the number of actors involved in the game, but VS actually supports it because mods could benefit from it. In fact, even select actors within campaigns could benefit from it.

So the most commonly seen AI kind is the one that has three layers stacked:
  • At the lowest level there are a few parallel state machines, each controlling a specific orthogonal behavior: motion (Go here, follow this guy, hold ground, etc), stance (defend against attackers, attack a specific target, hold fire), etc. Each state machine runs independently and usually is hardcoded, perhaps configurable but hardcoded nonetheless.
  • At the mid level there's a personality, composed of a higher-level state machine (which sets the behavioral state machines and coordinates the unit's overall behavior). This one's usually the one that gets genetically tweaked, and could even be designed as a fuzzy nondeterministic machine, which provides credible behavior.
  • At the higher level there's the script. Usually one for a group of entities, maybe in the form of triggers.
Now, let me be clear: that's pretty much VS's current design. Thing is, it's hard to follow, and all I could ever do is recognize that VS uses that design; I never worked out the details of how it works. Furthermore, each of those layers in VS is very, very ad-hoc, limited, and full of bugs. I believe the only way to get rid of that stigma (no one understands the AI system, no one can work out the bugs, no one can modify AI metadata to enhance the AI even when it is "possible" right now) is to re-code it. Figure out a nice system, and make it from scratch... properly documenting it on the way.
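The three-layer stack described above can be caricatured in a few lines (a toy sketch, not VS code; all class, state and trigger names are invented):

```python
class MotionSM:
    """Lowest layer: one orthogonal behavior (motion)."""
    def __init__(self):
        self.state = "hold"
    def command(self, state):
        self.state = state

class StanceSM:
    """Lowest layer: another orthogonal behavior (stance)."""
    def __init__(self):
        self.state = "hold_fire"
    def command(self, state):
        self.state = state

class Personality:
    """Mid layer: sets the low-level machines to coordinate the unit."""
    def __init__(self, motion, stance):
        self.motion, self.stance = motion, stance
    def on_event(self, event):
        if event == "attacked":
            self.motion.command("evade")
            self.stance.command("return_fire")

def script_trigger(units, event):
    """Top layer: one script/trigger driving a group of personalities."""
    for u in units:
        u.on_event(event)
```

The point of the sketch is only the shape: orthogonal hardcoded machines at the bottom, a coordinating personality in the middle, and a per-group script on top.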
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
safemode
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-development thread

Post by safemode »

Well, a complete rewrite has the advantage of being completely parallel. Meaning, work on it can occur without removing the current code, and we should be able to make it so that we can switch from one system to the other, to test how each behaves before getting rid of the old one.

Much of what I was talking about follows this three-layer stack fairly well, with events providing the high-level scripting control, the group-think/self-think layer controlling personality, and instincts controlling hard-coded actions. Each unit would have an AI class that contains all three layers in one. The layers, in what I was suggesting, become the product of what would likely be a single function. High-level code is received and causes certain variables to change in the AI, and certain scripts allow certain actions to become available. We then move through the function to process the type of response we will have, providing our personality layer (group think vs self think and the associated behaviors). In parallel with that, we could respond by instinct if the personality conditional code is not ready yet (not enough data to fully respond). Finally, we execute the response action, which is configurable or instinctual.

I would put an AI class in the cockpit of a unit, and only the cockpit, providing access to the ship in the same way the player has access. Each AI class could then be persistent with its unit without having to dump and read all kinds of variables into the unit class. The AI class would be completely independent of the unit it's in, allowing AI classes to be moved from one unit to another while retaining all their personality, which would inherently include a kind of memory of the past: things like kills, who attacked it, who it attacked, friends and foes... that type of stuff.


I think we need to also make the assumption that there is only one AI object per ship, and it refers to the pilot/captain. That is to say, turrets would not get an AI class. Turrets would be controlled by a much simpler function that handles firing solutions.

With the AI class, we can also do nifty things like give it a communications function. This can not only be used for ship-to-ship communications; we can also ask it to speak in character mode when docked. Say you need to talk face to face with some character: its AI object would have to be docked at the same place (in the case of characters who never use a ship in-game, we can use a magic ship that just becomes "docked" whenever we need it to, in order to contain the AI object of that character). Then the game writers would script some possible things to say or respond with, and the AI object chooses based on its own personality and the modifications thereto caused by gameplay. So the characters would respond realistically for their in-game behavior and the state of things, without any complicated dialogue scripting.

So this would require the creation of a new unit class; "avatar" seems fitting. The avatar unit is like a ship unit, only it would never load textures, and it would have no way to eject or move. An AI object in an avatar unit doesn't get processed like a regular unit; instead it gets called to be processed specifically by scripts, and it gets created and destroyed specifically by them. An avatar's mesh is very, very tiny, and it can be docked anywhere a ship can be docked, and thus would be killable in-game or via scripted action (say a campaign allows you to murder someone in the bar).

A script may even allow an avatar in transport on a ship to hijack the ship by replacing the pilot AI (swapping in the avatar and then killing the pilot or holding them hostage, etc.). That stuff would be inherently possible, without any sort of hand-waving.

So, I guess what I'm suggesting is a step-by-step method: rather than distinct layers where each layer is its own function or even class in and of itself, we deal with one class and basically one function where each layer is addressed in order, or in place of another. I'm thinking instincts would be threaded, since instinctual actions don't need to read universe data; they act on a very limited set of inputs. We can then be fairly thread-safe in our transforms or actions, and lock around the personality decision block so that as long as it's still working, we fall back to instinctual action, and once it's done, we skip instinct. This mimics real decision making. Sure, the data we used in the personality block may no longer be valid for the action we come up with, due to a previous instinctual reaction, but that's how it is in real life too.
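The lock-around-the-personality-block idea might look like this (a toy Python sketch using a non-blocking lock acquire; the decision logic itself is obviously placeholder):

```python
import threading

class PilotAI:
    """Sketch: instincts answer immediately each frame; the slower
    personality block runs behind a lock and takes over once its
    decision has landed."""
    def __init__(self):
        self._lock = threading.Lock()
        self._decision = None

    def deliberate(self, event):
        """The slow personality block (would run on another thread)."""
        with self._lock:
            # expensive group-vs-individual resolution would go here
            self._decision = "attack" if event == "fired_upon" else "ignore"

    def act(self, event):
        """Called every physics frame."""
        # Non-blocking: if the personality block still holds the lock,
        # fall back to a hard-coded instinct this frame.
        if self._lock.acquire(blocking=False):
            try:
                decided = self._decision
            finally:
                self._lock.release()
            if decided is not None:
                return decided
        return "evade" if event == "fired_upon" else "cruise"
```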
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-development thread

Post by chuck_starchaser »

There should be a way to frame a post in phpBB, golden trim :)

I think the best layer system for AI would be based on the Theosophical division: stula sharira, prana sharira, linga sharira, kama manas, manas, bodhi, atma; because even though one could argue these subdivisions come from religion, the fact is they make more intuitive sense than any Western understanding in philosophy or psychology; they have better appeal not just statically, but even functionally. We probably wouldn't need bodhi or atma state machines, unless some race often attains samadhi during fights :)

Linga sharira ("astral", emotions), for example, would be best modelled as a fuzzy state machine, with basic emotions like fear and optimism, satisfaction and anger, self-control and indulgence, sense of duty and selfishness... Well, not really a state machine yet: these are pairs of opposites, each best modelled as a float in the -1 to +1 range; so the fuzzy emotional state would be a vector in this N-space.
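That pairs-of-opposites vector might be sketched as follows (the axis names are invented for the example):

```python
def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

class EmotionalState:
    """Fuzzy 'emotional' state as a vector in N-space: one float in
    the -1..+1 range per pair of opposites."""
    AXES = ("fear_optimism", "anger_satisfaction",
            "indulgence_self_control", "selfishness_duty")

    def __init__(self):
        self.v = {axis: 0.0 for axis in self.AXES}

    def bias(self, axis, amount):
        """'Horizontal modeling': a perception nudges one axis,
        saturating at either pole."""
        self.v[axis] = clamp(self.v[axis] + amount)
```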

Horizontal modeling would introduce biases in these variables spanning opposite pairs, in response to perceptions, and conditionally in response to internal attempts at self-control. Vertical modeling has to do with the strength and quality of self-control, with what's controlling or trying to control what, and with the reaction force to the self-control force. Vertical modeling is a "conflict". Racial traits would precondition the conflict zone. Conceivably some race might have complete control of their emotions, though.

Just brainstorming. Can't type any more; too much pain, and 1/2 hr to go yet to my next
morphine pill.

EDIT:
Needless to say, these layers work as decorators on top of decorators, sort of like an IP stack.
Each layer has input and output flow directions.
Stula sharira is like the "physical layer", where input is visual and auditory, extended by the cockpit sensors and instruments, and passes its perceptions up the stack. Output is limbs and phalanges pulling sticks and pushing buttons, amplified by the fly-by-wire controls and the thrusters and weapon systems.
Prana sharira would be the pilot's space-time intelligence. This would be where our neural
networks would reside.
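The decorators-on-decorators idea could be sketched roughly like this (the class names follow the post; the visual-field filter is a made-up placeholder rule):

```python
# Each layer wraps the one below, like an IP stack: perceptions are
# filtered on the way up, commands pass down toward the physical layer.
class Layer:
    def __init__(self, name, inner=None):
        self.name = name
        self.inner = inner                    # the layer below, if any

    def perceive(self, percepts):
        """Input flows up: let the lower layers filter first."""
        return self.inner.perceive(percepts) if self.inner else percepts

    def act(self, command):
        """Output flows down toward limbs/thrusters."""
        return self.inner.act(command) if self.inner else command

class StulaSharira(Layer):
    """'Physical layer': only contacts inside the visual field pass up."""
    def perceive(self, percepts):
        return [p for p in percepts if p["in_visual_field"]]

physical = StulaSharira("stula sharira")
emotional = Layer("linga sharira", inner=physical)

seen = emotional.perceive([
    {"id": "bandit", "in_visual_field": True},
    {"id": "sneak",  "in_visual_field": False},
])
# the upper layers never hear about the contact outside the visual field
```

Each layer only knows about the layer directly below it, so layers can be added, removed or swapped without touching the rest of the stack.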
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-devlopment thread

Post by safemode »

Gotta remember though, we aren't looking to create life here. We have 10,000 units to handle and puny computers to handle them in and it has to be done in real time.

Much of what it takes to simulate a real person in the real world, in terms of AI in something like a robot, would not pertain to simulating the AI in a realtime game. It's beyond our scope to use evolution and realistic AI routines to let the computer learn how to play the game itself, create its own routines and learn from experience. Our AI can't be anything more than a clever and dynamic means of choosing one of a collection of actions to take given a trigger event. Ideas such as personality, selfishness or whatever are imparted by the player; all the AI is doing is being consistent while reacting to the game in a believable way. We don't want to be totally random, but we don't want to be totally predictable either.

We need to process AI routines in a fraction of a fraction of a second. This is why we compress things down to 3 layers most of the time, not because we are stuck in a western way of thinking, but because there exists a point where you have to decide how much AI you need vs what you can afford.

Though, on a separate note, I don't think the brain's layers think in step fashion; it simply shotguns a decision through all facets of what makes up a decision (attitude, predisposition, experience, personality, etc.) at the same time, the responses blend, and certain pathways carry more weight in the result than others. That's not to say you can't describe their weights as layers, but then no layer would be able to affect the others, and you would need some type of mask system to blend the results from each layer and read only the combined mask.
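A minimal sketch of such a mask system, with made-up action names and facet weights: every facet scores all candidate actions at once, the masks get blended, and only the combined mask is read.

```python
# Hypothetical candidate actions; each "facet" (personality, experience, ...)
# scores all of them simultaneously, shotgun style.
ACTIONS = ("flee", "attack", "call_for_help")

def blend_masks(weighted_masks):
    """Blend per-facet score masks; only the combined mask is ever read."""
    combined = {a: 0.0 for a in ACTIONS}
    for mask, weight in weighted_masks:
        for action, score in mask.items():
            combined[action] += weight * score
    return max(combined, key=combined.get)

personality = {"flee": 0.2, "attack": 0.7, "call_for_help": 0.1}
experience  = {"flee": 0.6, "attack": 0.1, "call_for_help": 0.3}

# for this pilot, experience carries twice the weight of personality
choice = blend_masks([(personality, 1.0), (experience, 2.0)])
```

No facet ever sees another facet's scores; they only interact through the blended result, which is the point of the mask approach.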

I think we can combine most "identity"-related things into a single personality layer, which I organize into a group-think vs. single-think mode to create a "faction AI" within the unit's AI (a 0-cost bonus), and then use this mask system to generate a means for selecting from a group of actions. In addition to that layer, there is an instinctual layer that can be used in parallel with, or instead of, the personality layer if needed (danger is imminent or the personality layer is still thinking); it consists of hardcoded actions to take given a limited amount of data about what's going on (instincts don't go polling the universe for data; they operate blindly, given only a small number of inputs as arguments). The 3rd level would sit on top of both of those, consisting of the campaign's authoritative scripting of certain events. This can be used to group complex actions that would otherwise be far too difficult to instruct an AI to take, or perhaps impossible given the lack of simulation.
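The three levels could dispatch roughly like this; the event names, unit fields and the personality stand-in are all hypothetical, not engine code.

```python
# Instinct table: hardcoded reactions that need almost no input data.
INSTINCTS = {"missile_lock": "evade", "collision_course": "brake"}

def personality_layer(event, unit):
    # stand-in for the full mask-blending personality decision
    return "attack" if unit["aggression"] > 0.5 else "flee"

def choose_action(event, unit, script=None):
    if script and event in script:            # 3rd level: campaign script wins
        return script[event]
    if unit["danger_imminent"] or unit["still_thinking"]:
        return INSTINCTS.get(event, "evade")  # instinct: blind, few inputs
    return personality_layer(event, unit)     # personality: full deliberation

calm     = {"danger_imminent": False, "still_thinking": False, "aggression": 0.8}
panicked = {"danger_imminent": True,  "still_thinking": False, "aggression": 0.8}
```

The instinct path deliberately ignores most of the unit's state, which is what makes it cheap enough to run while the personality layer is "still thinking".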

Basically what was already described as a 3-level system. It's a practical system, not necessarily Western or influenced by culture. But in any case, keep the ideas coming... just keep in mind: we aren't creating Data here.
Ed Sweetman endorses this message.
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: AI re-devlopment thread

Post by klauss »

chuck_starchaser wrote: Linga sharira ("astral", emotions), for example, would be best modelled as a fuzzy state machine,
with basic emotions like fear and optimism, satisfaction and anger, self-control and indulgence,
sense of duty and selfishness... Well, not really a state machine yet; these are pairs of opposites
each best modelled as a -1 to +1 range float; so the fuzzy emotional state would be a vector in
this N-space.
Frame that in gold.
You got the concept of fuzzy state machine in a microsecond, I don't think we ever agreed/clicked this quickly ;-)
chuck_starchaser wrote:Needless to say, these layers work as decorators on top of decorators, sort of like an IP stack.
Each layer has input and output flow directions.
Precise description
chuck_starchaser wrote:Stula sharira is like the "physical layer", where input is visual and auditory, extended by the
cockpit sensors and instruments, and passes its percepttions up the stack. Ouput is limbs
and phalanges pulling sticks and pushing buttons, amplified by the fly by wire controls and
the thrusters and weapon systems.
That's too low level. The problem with going so low level is complexity: you can't get into that much detail without major logic or real cognitive AI, which is overkill and impracticable for a game engine. Rather, you have to cheat: act "as if" that logic were there, and merely model the limitations that faulty/imprecise logic and/or input impose on reaction capabilities. Basically what safemode says... but your point of modelling different psychological elements with different independent state machines is spot on.
chuck_starchaser wrote:Prana sharira would be the pilot's space-time intelligence. This would be where our neural
networks would reside.
I'd stay away from neural networks. Too hard to design, control and train. Genetic design of neural networks would also take ages.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-devlopment thread

Post by chuck_starchaser »

safemode wrote:Gotta remember though, we aren't looking to create life here. We have 10,000 units to handle and puny computers to handle them in and it has to be done in real time.
What I was describing would be the "top LOD" of the AI, which would be applied to units within your theater (a fuzzy concept that could mean within com range, or weapons range, or visual range, or visual field, or simply whoever is interacting with you; but whose final definition I prefer to defer for now).
The lesser AI LODs would be attempts at producing results as similar as possible to the top LOD, but with fewer instructions. Again, I defer speculation on how such "optimization" could be achieved. We could use neural networks and "train" them on the basis of the difference between their output and the top LOD's output under similar random situations; or we could use fuzzy logic, or manual hacks.
Note, however, that if we manage to use fuzzy state machines throughout (no conditionals, no pointer arithmetic), we could conceivably translate the algorithm to OpenCL and process 65536 AIs as a 256x256 texture; or say use 64x64 textures, 4k AIs, per faction (with textures defining faction character permanently loaded in video memory).
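A branchless fuzzy-state update of the kind that would map onto a GPU texture can be shown in a toy form; the `rate` constant and the single `fear` float per AI are invented for illustration.

```python
# One uniform arithmetic pass over every AI's state: no conditionals, no
# pointer arithmetic, exactly the shape of computation a GPU handles well.
def step(states, stimuli, rate=0.25):
    """Move each float toward its stimulus; a pure lerp, no branching."""
    return [s + rate * (x - s) for s, x in zip(states, stimuli)]

fear   = [0.0, -1.0, 0.5]    # one float per AI (one row of the "texture")
threat = [1.0, 1.0, 1.0]     # perceived threat this frame

fear = step(fear, threat)    # every AI updated in one pass
```

Because the update is the same arithmetic for every AI, it vectorizes trivially: in OpenCL the list becomes a texture and `step` becomes a per-texel kernel; on the CPU the same code falls straight into SSE.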
Though, on a separate note, I don't think the brain's layers think in step fashion; it simply shotguns a decision through all facets of what makes up a decision (attitude, predisposition, experience, personality, etc.) at the same time, the responses blend, and certain pathways carry more weight in the result than others. That's not to say you can't describe their weights as layers, but then no layer would be able to affect the others, and you would need some type of mask system to blend the results from each layer and read only the combined mask.

I think we can combine most "identity"-related things into a single personality layer, which I organize into a group-think vs. single-think mode to create a "faction AI" within the unit's AI (a 0-cost bonus), and then use this mask system to generate a means for selecting from a group of actions.
I'm not sure I understand your "group think", but I think that choosing a fuzzy kind of term is a recipe for disaster from the get go. If it seems impossible to get programmers to write clear code by choosing clear names for variables and functions when the code's intent is clearly defined, imagine what they would do implementing a less than clear concept...
There's no such thing as "group think", so we should strive not to model anything by that name.
What there is is associations of individuals with something in common, be it cultural, ethnic, a common interest or passion. "Group think" would happen perhaps in a telepathic race...
I know that's not what you meant; but my whole point is precisely that you picked a terrible name for it, and lost me as a result.
Perhaps you were thinking in optimizing terms: Modeling communications as would be needed to make group actions appear cohesive would be expensive, so just model them as telepaths. That may be a good idea; but to call the whole AI level "group think" amounts to bringing a particular optimization to the fore, to the programming interface. The programming interface, at all levels, should contour the problem domain; NOT details of implementation. Optimizations should be named and/or explained by comments in the code. Otherwise what happens if in the future communications are modelled and the optimization removed or replaced? The name would probably remain, and become misleading.
You can still have the same idea but call it "Group Leader AI". Then, where you broadcast orders, you put a comment "//communications delay not modelled, for simplicity."

By the way, what's still throwing me off is it seems to me that by "Faction AI" what's usually meant is flight group AI. In my mind, "Faction AI" would be "Faction Government AI".

Choosing proper names for things applies at all levels; --starting at concept stages.
In addition to that layer, there is an instinctual layer that can be used in parallel with, or instead of, the personality layer if needed (danger is imminent or the personality layer is still thinking); it consists of hardcoded actions to take given a limited amount of data about what's going on (instincts don't go polling the universe for data; they operate blindly, given only a small number of inputs as arguments).
Indeed, I'm hoping to address all this, but in a unified, comprehensive way, rather than as scattered state machines tied together with coat-hangers.
The 3rd level would sit on top of both of those, consisting of the campaign's authoritative scripting of certain events. This can be used to group complex actions that would otherwise be far too difficult to instruct an AI to take, or perhaps impossible given the lack of simulation.

Basically what was already described as a 3-level system. It's a practical system, not necessarily Western or influenced by culture. But in any case, keep the ideas coming... just keep in mind: we aren't creating Data here.
Absolutely. What I was describing is ruminations about how to model the individual pilot AI's. Of course there would be Group Leader AI on top of that; perhaps Fleet AI on top of that, and Faction Government AI at the top.
klauss wrote:
chuck_starchaser wrote: Linga sharira ("astral", emotions), for example, would be best modelled as a fuzzy state machine,
with basic emotions like fear and optimism, satisfaction and anger, self-control and indulgence,
sense of duty and selfishness... Well, not really a state machine yet; these are pairs of opposites
each best modelled as a -1 to +1 range float; so the fuzzy emotional state would be a vector in
this N-space.
Frame that in gold. You got the concept of fuzzy state machine in a microsecond, I don't think we ever agreed/clicked this quickly ;-)
Hahaha, it was time we did ;-)
chuck_starchaser wrote:Stula sharira is like the "physical layer", where input is visual and auditory, extended by the cockpit sensors and instruments, and passes its perceptions up the stack. Output is limbs
and phalanges pulling sticks and pushing buttons, amplified by the fly-by-wire controls and
the thrusters and weapon systems.
That's too low level. The problem with going so low level is complexity: you can't get into that much detail without major logic or real cognitive AI, which is overkill and impracticable for a game engine. Rather, you have to cheat: act "as if" that logic were there, and merely model the limitations that faulty/imprecise logic and/or input impose on reaction capabilities. Basically what safemode says...
Ah, I thought you knew me better. I'm talking purely at the problem domain. I didn't mean we'd model every button press or do visual field analysis; all I meant is that these are the "problems" that belong to stula sharira. After optimizations, stula sharira might boil down to a single assembly instruction for input and another for output, for all I know. My current guess as to the complexity of the stula sharira layer that would be just right would be a) limiting of the visual field, for input, and b) bottlenecking of commands (say one button press per second maximum, if we do model some button presses), and/or neural transmission delay.
chuck_starchaser wrote:Prana sharira would be the pilot's space-time intelligence. This would be where our neural networks would reside.
I'd stay away from neural networks. Too hard to design, control and train. Genetic design of neural networks would also take ages.
Okay; this could be programmed ad-hoc, easily.

EDIT:
A word on kama-manas/manas.
Manas would be the interface between the pilot AI and the Group Leader AI. No idea about implementation yet; could be as simple as a pass-through, or some hack that makes it appear as if the commands were verbal and capable of being misinterpreted. And perhaps it could take an abstract command and plan how to execute it, and be capable of mistakes. But this layer's purpose is 100% well-intentioned fulfillment of duty, anyhow.
Kama manas is the selfish mind. Not necessarily bad; immensely useful if a pilot is capable of bending the orders a little to save himself and the ship, and then finish his work. But kama manas can hijack the pilot's judgement, --e.g.: trying to show off, like Maniac. IOW, kama manas can inject selfish goals into the plan. Especially good for modelling pirates.
Note that manas would be operating in such unselfish acts as sacrificing one's life in the fulfillment of duty; though not all suicide bombers would necessarily be operating from a high principle, of course. A strong emotional attachment to self-sacrifice to duty would be linga sharira-sub-manas. Long story. And suicide for fear of being seen as a coward wouldn't even qualify as so high an emotion as love of duty.
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-devlopment thread

Post by safemode »

chuck_starchaser wrote:
safemode wrote:Gotta remember though, we aren't looking to create life here. We have 10,000 units to handle and puny computers to handle them in and it has to be done in real time.
What I was describing would be the "top LOD" of the AI, which would be applied to units within your theater (a fuzzy concept that could mean within com range, or weapons range, or visual range, or visual field, or simply whoever is interacting with you; but whose final definition I prefer to defer for now).
The lesser AI LODs would be attempts at producing results as similar as possible to the top LOD, but with fewer instructions. Again, I defer speculation on how such "optimization" could be achieved. We could use neural networks and "train" them on the basis of the difference between their output and the top LOD's output under similar random situations; or we could use fuzzy logic, or manual hacks.
Note, however, that if we manage to use fuzzy state machines throughout (no conditionals, no pointer arithmetic), we could conceivably translate the algorithm to OpenCL and process 65536 AIs as a 256x256 texture; or say use 64x64 textures, 4k AIs, per faction (with textures defining faction character permanently loaded in video memory).
I don't think we'll get complicated to the point of anything close to algorithmic learning and neural networks. Also, OpenCL is great, but that would only be used on maybe 1% of the people who would be playing VS.
Though, on a separate note, I don't think the brain's layers think in step fashion; it simply shotguns a decision through all facets of what makes up a decision (attitude, predisposition, experience, personality, etc.) at the same time, the responses blend, and certain pathways carry more weight in the result than others. That's not to say you can't describe their weights as layers, but then no layer would be able to affect the others, and you would need some type of mask system to blend the results from each layer and read only the combined mask.

I think we can combine most "identity"-related things into a single personality layer, which I organize into a group-think vs. single-think mode to create a "faction AI" within the unit's AI (a 0-cost bonus), and then use this mask system to generate a means for selecting from a group of actions.
I'm not sure I understand your "group think", but I think that choosing a fuzzy kind of term is a recipe for disaster from the get go. If it seems impossible to get programmers to write clear code by choosing clear names for variables and functions when the code's intent is clearly defined, imagine what they would do implementing a less than clear concept...
There's no such thing as "group think", so we should strive not to model anything by that name.
What there is is associations of individuals with something in common, be it cultural, ethnic, a common interest or passion. "Group think" would happen perhaps in a telepathic race...
I know that's not what you meant; but my whole point is precisely that you picked a terrible name for it, and lost me as a result.
"Group think" is a block that mimics how a person thinks in a group. Not some quasi-telepathic communication with a group. It's driven by associations an individual has with others and only considers information the individual has. It's basically a means for deciding if an action will be driven by selfish motives or via a sense of cooperation or obligation. It's more than just another data variable to plug into a decision algorithm, in that actions leading from doing something from an obligation you may have has vastly different effects (and actions) from doing something for yourself. For instance, if i shoot you because i'm in a gang and they just decided they hate your gang, then my attack on you is seen as my gang's attack on you. My carrying it out strengthens my bond to my gang and friend/foe information is weighted differently. An AI will have multiple bonds, and these bonds will be of varying strengths. The characteristics of the group a bond is associated with, determines the behavior of that group but this behavior stems from every individual AI bonded to it as it acts, there is no central AI controlling units within a group. We let the bonds each individual AI has to the group create the appearance of a group of units that behave cooperatively. No communication occurs.
Perhaps you were thinking in optimizing terms: Modeling communications as would be needed to make group actions appear cohesive would be expensive, so just model them as telepaths. That may be a good idea; but to call the whole AI level "group think" amounts to bringing a particular optimization to the fore, to the programming interface. The programming interface, at all levels, should contour the problem domain; NOT details of implementation. Optimizations should be named and/or explained by comments in the code. Otherwise what happens if in the future communications are modelled and the optimization removed or replaced? The name would probably remain, and become misleading.
You can still have the same idea but call it "Group Leader AI". Then, where you broadcast orders, you put a comment "//communications delay not modelled, for simplicity."
I would strongly be against telepathic units. For one, it would end any hope of making parts of an expensive operation (physics frame) threaded in the future. Second, it's not necessary. You can mimic coordinated behavior by applying the same set of events to units associated with a given group bond you are targeting. Some may end up doing something selfish; some may have stronger bonds to something else that prohibit them from doing what was intended; but most will have the intended reaction if we are consistent in giving units in various bonds characteristics that would believably exist in those groups. (There would probably not be any happy, friendly pirates waltzing about in space.)

As for the programming interface, you wouldn't see it.

The AI function for deciding on an action would be 1 function. It would probably take an event, maybe one or two other arguments, and that's it. You don't get to control how the unit thinks about the event in the way you were describing. Group think vs. selfish think is simply a term to describe an if/else block that's conditional on some meter of how strong the unit's personal interests are compared to any of its bonds. What occurs inside each conditional block alters the unit's variables in different ways. For example, if an event has a unit kill another unit, and that killing was decided because of an association between the killing unit's group and the victim's group, then your friend/foe list changes. If it was just a personal vendetta, then the victim's group doesn't matter. They may make you an enemy and want to take revenge, and this could escalate into a group rivalry that didn't exist before, but that's not required.
By the way, what's still throwing me off is it seems to me that by "Faction AI" what's usually meant is flight group AI. In my mind, "Faction AI" would be "Faction Government AI".

Choosing proper names for things applies at all levels; --starting at concept stages.
I threw out the idea of a faction AI that either sends out events to member units or temporarily takes control of them. Group think is a conditional block of code every AI would have; it mimics belonging to a group by modifying actions and consequences based on the action being the result of a bond that unit has to a group, rather than of a selfish interest that AI has. It's much simpler and far less overhead. And it should work.
In addition to that layer, there is an instinctual layer that can be used in parallel with, or instead of, the personality layer if needed (danger is imminent or the personality layer is still thinking); it consists of hardcoded actions to take given a limited amount of data about what's going on (instincts don't go polling the universe for data; they operate blindly, given only a small number of inputs as arguments).
Indeed, I'm hoping to address all this, but in a unified, comprehensive way, rather than as scattered state machines tied together with coat-hangers.
What's scattered? The decision process the AI has is 1 function. It steps from organizing an absolute reaction list from the event trigger data (high level), to immediately considering whether it will react selfishly or not (mid level), and while it's thinking about that, or instead of it, it reacts instinctually (low level). The result is a choice of actions from the list, or via the instincts. It's a fairly short procedural list confined to a single function.
The 3rd level would sit on top of both of those, consisting of the campaign's authoritative scripting of certain events. This can be used to group complex actions that would otherwise be far too difficult to instruct an AI to take, or perhaps impossible given the lack of simulation.

Basically what was already described as a 3-level system. It's a practical system, not necessarily Western or influenced by culture. But in any case, keep the ideas coming... just keep in mind: we aren't creating Data here.
Absolutely. What I was describing is ruminations about how to model the individual pilot AI's. Of course there would be Group Leader AI on top of that; perhaps Fleet AI on top of that, and Faction Government AI at the top.
I am opting not to have multiple layers of AI on top of the unit AIs. So there would be no fleet AI, no government AI. All of that stuff can be mimicked by the campaign script and group think. What's important then is what bonds we give units on creation, and how strong to make them. In this way, we can keep AI complexity to a minimum and not have to process anything in addition to the individual units and the campaign script.

I think group think replicates the behavior of an individual in a group much more closely than some kind of telepathic force controlling member units, or some overriding body, would. What we lose is the AI leader... but I think that's best left to creative campaign-script writing rather than trying to make an AI smart enough to handle governing and all the things that go with that. The rather inflexible campaign script would be completely hidden by the way it's handled by the units in the game, so the end effect on the player is the same, but we've reduced complexity and processing many times over.
chuck_starchaser wrote:Stula sharira is like the "physical layer", where input is visual and auditory, extended by the cockpit sensors and instruments, and passes its perceptions up the stack. Output is limbs
and phalanges pulling sticks and pushing buttons, amplified by the fly-by-wire controls and
the thrusters and weapon systems.
That's too low level. The problem with going so low level is complexity: you can't get into that much detail without major logic or real cognitive AI, which is overkill and impracticable for a game engine. Rather, you have to cheat: act "as if" that logic were there, and merely model the limitations that faulty/imprecise logic and/or input impose on reaction capabilities. Basically what safemode says...
Ah, I thought you knew me better. I'm talking purely at the problem domain. I didn't mean we'd model every button press or do visual field analysis; all I meant is that these are the "problems" that belong to stula sharira. After optimizations, stula sharira might boil down to a single assembly instruction for input and another for output, for all I know. My current guess as to the complexity of the stula sharira layer that would be just right would be a) limiting of the visual field, for input, and b) bottlenecking of commands (say one button press per second maximum, if we do model some button presses), and/or neural transmission delay.
Limiting the visual field should be something we end up doing regardless of AI modifications. I'm sick of this radar that sees everything in a system in realtime. We need to accurately model the radar (which would also work for the visual field of AI units) and, if possible, the GL rendering, to take into account the speed of light and distances. That can't be that taxing.
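A light-delay model can be cheap: show each contact where it was (distance / c) seconds ago rather than where it is now. The frame-indexed position history below is a made-up storage scheme, just to show the arithmetic.

```python
C = 299_792_458.0          # speed of light, m/s

def apparent_position(history, distance_m, frame_dt=1.0):
    """history: one position sample per frame_dt seconds, oldest first.
    Returns the position as seen by an observer distance_m away."""
    frames_back = int((distance_m / C) / frame_dt)
    index = max(0, len(history) - 1 - frames_back)
    return history[index]

history = [(0, 0), (10, 0), (20, 0), (30, 0)]    # 1 sample per second
# a contact 2 light-seconds away is seen where it was 2 frames ago
pos = apparent_position(history, distance_m=2 * C)
```

At dogfight ranges the delay rounds to zero frames, so the cost only shows up (as a deliberately stale radar picture) for distant contacts, which is exactly the effect wanted.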
EDIT:
A word on kama-manas/manas.
Manas would be the interface between the pilot AI and the Group Leader AI. No idea about implementation yet; could be as simple as a pass-through, or some hack that makes it appear as if the commands were verbal and capable of being misinterpreted. And perhaps it could take an abstract command and plan how to execute it, and be capable of mistakes. But this layer's purpose is 100% well-intentioned fulfillment of duty, anyhow.
Kama manas is the selfish mind. Not necessarily bad; immensely useful if a pilot is capable of bending the orders a little to save himself and the ship, and then finish his work. But kama manas can hijack the pilot's judgement, --e.g.: trying to show off, like Maniac. IOW, kama manas can inject selfish goals into the plan. Specially good for modelling pirates.
Note that Manas would be operating in such unselfish acts as sacrificing one's life on the fulfilment of duty; though not all suicide bombers would necessarily be operating from a high principle, of course. A strong emotional attachment to self sacrifice to duty would be linga sharira-sub-manas. Long story. And suicide for fear of being seen as a coward wouldn't even qualified to such a high emotion as love of duty.
This is close to what I was saying with the personality layer consisting of group-think and self-think blocks. Only, I remove the leader-AI setups you suggest and imply such an AI through bonds in every unit. These bonds don't link units together in some type of communication; rather, they consist of simple strength meters a unit has toward any group or ship relevant to the unit. These all get considered and modify the eventual action choice if we happen to be going the group route. Group behavior then occurs implicitly, and any central driving force is derived solely from the player's own mind and some creative campaign-script writing. It's totally dynamic faction behavior, and we really don't have to code anything related to a faction AI to do it. Would the faction exist? Yes. Would it actually have a consistent personality? Yes. Total processing overhead of the faction: 0. And we could even create factions ad hoc in game this way. If a player, or even an AI unit, creates a bond with another unit (maybe by saving his ass), and they repeat this with other units, eventually they would form a web of bonds between each other and essentially have created a new faction unto themselves.
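The "web of bonds becomes a de-facto faction" idea is just connected components over the bond graph; a sketch with invented unit names (a real implementation would also threshold on bond strength):

```python
def emergent_factions(bond_pairs):
    """Group units into ad-hoc 'factions': the connected components of the
    bond web, found with a tiny union-find."""
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path compression
            u = parent[u]
        return u

    for a, b in bond_pairs:
        parent[find(a)] = find(b)           # union the two webs

    groups = {}
    for u in parent:
        groups.setdefault(find(u), set()).add(u)
    return list(groups.values())

# player saves A, A saves B: player, A and B form one de-facto faction
webs = emergent_factions([("player", "A"), ("A", "B"), ("X", "Y")])
```

No faction object ever needs to exist in the engine; the grouping can be recomputed (or maintained incrementally) from the per-unit bonds alone.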
Ed Sweetman endorses this message.
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: AI re-devlopment thread

Post by klauss »

safemode wrote: The AI function for deciding on an action would be 1 function.
...
I am opting not to have multiple layers of AI on top of the unit AIs. So there would be no fleet AI, no government AI. All of that stuff can be mimicked by the campaign script and group think.
Don't get me wrong.

But I think that's bad design.

You're taking a hugely complex thing, unit behavioral AI, with multiple levels (as a small group (gang), as a huge group (faction), as a thinking individual, as an instinctively reactive individual), and saying you'll code all of that in 1 function, 1 class alone.

That violates clear separation of responsibilities: you have one function responsible for many interacting aspects of behavior.

This means when I want to read the code later, perhaps to merely understand it, or to fix a bug, or to enhance it, I will be forced to read one hugely complex function. I mean, even if you code it small and compact, and apparently very elegantly, say 30 lines, it will still be hugely complex in nature, in its interaction with the world and in how it manages to model all those seemingly orthogonal thought/behavioral levels. And it will not expose the problem's structure, so it will be harder to understand.

Elegant code is code that exposes the problem's structure and form, so when you read it it helps you understand the problem and see the solution that has been implemented more clearly. Not code that is short and fast.

AI is a tricky business, and AI code is code that gets tweaked and tweaked, for nerfing, for enhancement, for moddability. AI code must be maintainable, and your description screams unmaintainable to me.

Of course I could be wrong. But it would take me reading the proposed code and understanding it, its principles and how it interacts with everything, to see that I am.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-devlopment thread

Post by chuck_starchaser »

safemode wrote:
chuck_starchaser wrote:
safemode wrote:Gotta remember though, we aren't looking to create life here. We have 10,000 units to handle and puny computers to handle them in and it has to be done in real time.
What I was describing would be the "top LOD" of the AI, which would be applied to units within your theater (a fuzzy concept that could mean within com range. or weapons range, or visual range, or visual field, or simply who are interacting with you; but whose final definition I prefer to defer for now.
The lesser AI LOD's would be attempts at producing results as similar as possible to the top LOD but with less instructions. Again, I defer speculation on how such "optimization" could be achieved. We could use neural networks and "train" them on the basis of the difference between their output and the top LOD's output under similar random situations; or we could use fuzzy logic, or manual hacks.
Note, however, that if we manage to use fuzzy state machines throughout (no conditionals, no pointer arithmetic), we could conceivably translate the algorithm to openCL and process 65536 AI's as a 256x256 texture; or say use 64x64 textures, 4k AI's, per-faction (with textures defining faction character permanently loaded in video memory).
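A minimal sketch of the branchless property this depends on (Python for readability; the three-activation layout and all names are invented for illustration, not engine code): each AI's state is a small vector of activations, say (evade, attack, form up), updated by pure arithmetic with no conditionals.

```python
def fuzzy_tick(state, stimulus, gain=0.5):
    """Blend current activations toward the stimulus with no conditionals."""
    blended = [(1.0 - gain) * s + gain * x for s, x in zip(state, stimulus)]
    total = sum(blended) or 1.0          # guard against an all-zero vector
    return [b / total for b in blended]  # renormalize so activations sum to 1

# One tick for one unit that comes under fire (high "evade" stimulus):
state = fuzzy_tick([0.2, 0.7, 0.1], [0.8, 0.1, 0.1])
```

Because every unit runs the same straight-line arithmetic, the per-unit loop maps directly onto SIMD lanes or GPU work-items, which is what would make the texture-per-faction idea feasible.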
I dont think we'll get complicated to the point of anything close to algorithmic learning and neural networks.
Agreed; Klauss already expressed skepticism about using NN's, and it was just an idea toss on my part. I simply wanted to say I was describing my AI's top LOD and deferring consideration of lesser ones.
Also, openCL is great, but that would only be used on maybe 1% of the people who would be playing VS.
Maybe not. The next generation AMD multi-core cpu chips will pack a gpu that will probably suck for graphics but be perfect for openCL, so gamers will probably have a separate videocard.
Anyhow, code optimized for gpu's would fly on sse as well.
"Group think" is a block that mimics how a person thinks in a group, not some quasi-telepathic communication with a group. It's driven by the associations an individual has with others and only considers information the individual has. It's basically a means for deciding whether an action will be driven by selfish motives or by a sense of cooperation or obligation. It's more than just another data variable to plug into a decision algorithm, in that an action taken out of an obligation you may have has vastly different effects (and consequences) from one taken for yourself. For instance, if I shoot you because I'm in a gang and they just decided they hate your gang, then my attack on you is seen as my gang's attack on you. My carrying it out strengthens my bond to my gang, and friend/foe information is weighted differently. An AI will have multiple bonds, and these bonds will be of varying strengths. The characteristics of the group a bond is associated with determine the behavior of that group, but this behavior stems from every individual AI bonded to it as it acts; there is no central AI controlling units within a group. We let the bonds each individual AI has to the group create the appearance of a group of units that behave cooperatively. No communication occurs.
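One possible data layout for the bonds described above (all names and numbers are invented for illustration; nothing here is engine API):

```python
# Hypothetical bond structure: each unit holds named bonds of varying
# strength, and a matrix records how each group feels about every other.
group_matrix = {                 # how group A feels about group B, -1..1
    ("confed", "pirates"): -0.8,
    ("pirates", "confed"): -0.6,
    ("confed", "merchants"): 0.5,
}

class Unit:
    def __init__(self, bonds):
        self.bonds = bonds       # group name -> bond strength, 0..1

    def group_attitude(self, other_group):
        """Attitude toward another group, weighted by this unit's own bonds."""
        return sum(strength * group_matrix.get((group, other_group), 0.0)
                   for group, strength in self.bonds.items())

wingman = Unit({"confed": 0.9})
attitude = wingman.group_attitude("pirates")   # 0.9 * -0.8 = -0.72, hostile
```

No central group AI is consulted: each unit derives its stance purely from its own bonds, which is the "appearance of cooperation without communication" described in the post.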
Ahhhhhhhhhhh, now I understand.
Well, this is Manas, exactly, in my proposed system.
The AI function for deciding on an action would be one function. It would probably take an event, maybe one or two other arguments, and that's it. You don't get to control how the unit thinks about the event in the way you were describing. Group think vs selfish think is simply a term to describe an if/else block that's conditional on some meter of how strong the unit's personal interests are compared to any of its bonds.
Group think vs selfish think === Manas vs Kama-Manas, btw.
What occurs inside each conditional block alters the unit's variables in different ways. Say an event has a unit kill another unit: if that killing was decided because of an association between the killer's group and the victim's group, then your friend/foe list changes. If it was just a personal vendetta, then the victim's group doesn't matter. They may make you an enemy and want to take revenge, and this could escalate into a group rivalry that didn't exist before, but that's not required.
It'd be good indeed for AI's to consider intentions. Accidentally crashing into a ship should not automatically result in enmity and lead to a fight, for example. Killing a pirate for fun should affect your standing with pirates much more than killing a pirate to collect a bounty. Killing a pirate who attacked you first should have no effect at all on standings. And as you say, transfer of conditions between individual and group AI's needs careful consideration, --intentions prominently among them.
By the way, what's still throwing me off is it seems to me that by "Faction AI" what's usually meant is flight group AI. In my mind, "Faction AI" would be "Faction Government AI".

Choosing proper names for things applies at all levels; --starting at concept stages.
I threw out the idea of a faction AI that either sends out events to member units or temporarily takes control of them. Group think is a conditional block of code every AI would have that mimics belonging to a group by modifying actions and consequences based on the action being the result of a bond that unit has to a group, rather than of a selfish interest that AI has. It's much simpler and far less overhead. And it should work.
Oh, wait a minute; I don't think this is a good idea, at all.
First of all, it would NOT be simpler: Every time we need to implement the effect of group leader commands it would force us to think of how to achieve that using the group think paradigm. Simple is to divide a problem as much as possible into atomic sub-problems, and solve one sub-problem at a time. I'm trying to separate manas (group think) from kama manas (selfish think) from linga sharira
(emotions, which are something else altogether from either kind of "think") from prana sharira (energy, alertness, "zone"), from stula sharira (physical limitations), precisely to "divide and rule" in pilot AI; and I take it for granted that flightgroups, fleets and governments would have specialized AI's.
You're going the opposite way: Mixing separate problems... in the hope to "simplify"?
Not a chance!
Secondly, programming is an art, but the best style in this art is "super-realism". In any kind of problem in programming, nothing can be better than to design your variables and classes to mirror the real problem's quantifiables and identifiables. If flight groups often have a leader, in the real world, there's probably a very good reason for it; and the last thing we want to do is to re-discover it (painfully).
EDIT:
But I do think your idea makes for a good litmus test of the Manas layer: If group leader AI were missing (leader killed), AI's with a fully fleshed out Manas layer should probably still manage to function almost as if they still had a group leader. So, as a player, you might learn that the best way to deal with pirates is to identify and kill their leader first, as pirates wouldn't have much Manas; but this trick would have much less effect when fighting a more noble group.
OT: What faction in VS is highly principled and moral? Just curious... /OT
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-devlopment thread

Post by safemode »

The problem with leaders or faction AI's ...government AI's is that the things they would have to deal with and the actions they have to take would be overly complex. These aren't actions that can be implied and constructed in realtime to any great degree. The script pool would need to be very complex and flexible.

I think we're not matching up on what an event is when it's given to a unit. Units won't be given complex directions and then left to think of how to accomplish them. Also, a unit will never get events it has to group think its way through. Units choose to either use group think or selfish think based on a superficial determination of whether the event is more strongly linked to selfish drives or tied to its group obligations.

So say the campaign tells flightgroup A to attack flightgroup B. All units in flightgroup A get the event "flightgroup A attack flightgroup B". This is translated in each unit to mean that flightgroup A, which they belong to, is to attack flightgroup B, so it must target units in flightgroup B and attack. Now, at this point it must decide whether it wants to follow the order based on group think or selfish think. Perhaps the unit is damaged; the selfish drive not to die may easily override its group drive to follow the order, and so it decides not to attack flightgroup B based on selfish think. Perhaps it isn't damaged and group think wins out; it then processes the command using a pool of actions that is different from selfish think's actions (though this isn't a requirement, it's just a possibility). The group think decides on an event that targets a unit in flightgroup B but tries to maintain speed with the units in flightgroup A that are nearest to flightgroup B. Other units would likely come to similar actions, assuming they have similar personality traits (and why wouldn't they, if they're all in the same group). Of course, you may get a hot dog who targets flightgroup B and rockets off without regard for other units in flightgroup A. While this seems low in group behavior, it's the hot dog of the group as far as the player is concerned.
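The walkthrough above could be sketched roughly as follows; the field names, thresholds, and the hot-dog rule are illustrative only, not proposed engine values:

```python
def decide(unit, order):
    """Group think vs selfish think for one flightgroup-level order."""
    selfish_drive = 1.0 - unit["hull"]            # damaged units want to live
    group_drive = unit["bonds"].get(order["group"], 0.0)
    if selfish_drive > group_drive:
        return "evade"                            # selfish think wins
    if unit["aggression"] > 0.8:
        return "charge_alone"                     # the group's hot dog
    return "attack_in_formation"                  # group think wins

order = {"group": "fg_a", "verb": "attack", "target": "fg_b"}
healthy = {"hull": 0.9, "bonds": {"fg_a": 0.7}, "aggression": 0.3}
damaged = {"hull": 0.2, "bonds": {"fg_a": 0.7}, "aggression": 0.3}
# the healthy unit follows the order in formation; the damaged one refuses
```

Note the superficial check happens once per event, exactly as described: there is no second AI layer, just a branch on which drive is stronger.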

I'm not seeing the need for a leader AI here, nor am I seeing how this makes things more complex than a faction AI would. I think you're trying to micromanage too much in AI that just wouldn't be very practical in any solution. I think the more complex we try to make the AI's, the worse they will be in terms of intended behavior. "Intended" not meaning scripted in outcome, but intended as in how we would like the dynamics to behave and be realistic. I think AI works on a bell curve here, where too simple is obviously fake, then you get to a point where they behave decently, but then you quickly fall off again into the realm of just wrong when you try to get more and more realistic. You end up spending all your time trying to tweak these high level AI's and they'll always glitch in very noticeable ways because their glitches can't be masked off as anything but.

Now, I do think there is a place for flight group leaders. Such units wouldn't be special, nor need special AI routines. They just have an alpha personality and give orders to other members in the flight group, and those members follow because flightgroups have a very strong bond. This would be similar to Aces in WC. In essence, killing a leader would have a noticeable effect on a fg too, so it would be realistic, and we don't need an independent AI for it.

My argument is that we can't simulate an AI in a practical way at the government level so why bother? We very likely wouldn't need to in order to make it seem like we do as far as the units in the game are concerned, all that's lacking is direction and direction can be provided by the campaign script (which is kinda where we wouldn't mind having direction anyway). How much direction is up to the campaign. Left to their own devices, the units would still behave as if they were in a faction, and may automatically cooperate in a large scale to do certain things. Would be interesting to see how much faction personality auto-represents in such a setup where there is no central authority directing them.

In the end, you still need what I'm talking about on the unit level even with a government AI; all the government AI does is choose directions and send off the commands to units, the units still have to decide how to or if to carry them out. I'm just saying we can't simulate that in a realistic way practically, so we shouldn't bother with it at all and just leave it up to the campaign script. Combine that with the idea of alpha personalities in flightgroups and factions and we should have no problem having things look like they have a logical direction (though maybe only logic implied by the player after the fact ....which is great) without the need for government AI's.
Ed Sweetman endorses this message.
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-devlopment thread

Post by chuck_starchaser »

safemode wrote:The problem with leaders or faction AI's ...government AI's is that the things they would have to deal with and the actions they have to take would be overly complex. These aren't actions that can be implied and constructed in realtime to any great degree. The script pool would need to be very complex and flexible.

I think we're not matching up on what an event is when it's given to a unit. Units won't be given complex directions and then left to think of how to accomplish them. Also, a unit will never get events it has to group think its way through. Units choose to either use group think or selfish think based on a superficial determination of whether the event is more strongly linked to selfish drives or tied to its group obligations.

So say the campaign tells flightgroup A to attack flightgroup B. All units in flightgroup A get the event "flightgroup A attack flightgroup B". This is translated in each unit to mean that flightgroup A, which they belong to, is to attack flightgroup B, so it must target units in flightgroup B and attack. Now, at this point it must decide whether it wants to follow the order based on group think or selfish think. Perhaps the unit is damaged; the selfish drive not to die may easily override its group drive to follow the order, and so it decides not to attack flightgroup B based on selfish think. Perhaps it isn't damaged and group think wins out; it then processes the command using a pool of actions that is different from selfish think's actions (though this isn't a requirement, it's just a possibility). The group think decides on an event that targets a unit in flightgroup B but tries to maintain speed with the units in flightgroup A that are nearest to flightgroup B. Other units would likely come to similar actions, assuming they have similar personality traits (and why wouldn't they, if they're all in the same group). Of course, you may get a hot dog who targets flightgroup B and rockets off without regard for other units in flightgroup A. While this seems low in group behavior, it's the hot dog of the group as far as the player is concerned.

I'm not seeing the need for a leader AI here, nor am I seeing how this makes things more complex than a faction AI would.
Well, this is like discussing whether the glass is half full or half empty. Of course there's "no need" for leader AI, considering you can build it right into the unit AI's. But what about efficiency? That's tantamount to running the leader AI again and again for every unit.
I think you're trying to micromanage too much in AI that just wouldn't be very practical in any solution.
I think you are; not me.
I think the more complex we try to make the AI's, the worse they will be in terms of intended behavior.
Exactly. And we add leader AI precisely so we don't have to complicate the pilot AI's too much.
"Intended" not meaning scripted in outcome, but intended as in how we would like the dynamics to behave and be realistic. I think AI works on a bell curve here, where too simple is obviously fake, then you get to a point where they behave decently, but then you quickly fall off again into the realm of just wrong when you try to get more and more realistic.
I disagree; --completely. No bell curve anywhere in sight.
You end up spending all your time trying to tweak these high level AI's and they'll always glitch in very noticeable ways because their glitches can't be masked off as anything but.
Not sure whose high level AI's you refer to; certainly not mine, since I haven't said a word about them yet.
Now, I do think there is a place for flight group leaders. Such units wouldn't be special, nor need special AI routines. They just have an alpha personality and give orders to other members in the flight group, and those members follow because flightgroups have a very strong bond. This would be similar to Aces in WC. In essence, killing a leader would have a noticeable effect on a fg too, so it would be realistic, and we don't need an independent AI for it.
Well, WC doesn't have any AI at all, to speak of.

Alright, let's talk about the role of fleet AI:
Are you familiar with the military term "defeat in detail"?
It refers to when two otherwise equivalent military forces appear highly unbalanced in battle because of one side failing to bring all their potential firepower to bear upon the enemy units in a short enough span, and the other side taking advantage of this, concentrating their full firepower on the most immediate threats and taking them out quickly, proceeding to the next biggest threats, and so on. Defeat in detail is something Gen. Norman Schwarzkopf understood well and took to heart.
For game AI this could be implemented simply by programming units to target damaged or otherwise weak enemy units.
The problem is that the opponent also applies the same philosophy, and you lose units to concentrated enemy fire. I'm talking about a hypothetical strategy game.
A better policy would be to program your units to switch to evasive mode the moment they are damaged, and take half of your healthy units to defend your damaged unit from its attackers, using missiles if necessary, until they break off pursuit. Conserving your units is MORE valuable than destroying enemy units, though the two aims are mutually reinforcing.
But if your enemy is using this policy, you might want to try to cause him to switch tactics often and waste missiles early.
None of this should be the concern of pilot AI's. Nor should all leaders be master tacticians, for that matter; some would be your alpha archetypes of more testosterone than brains, of course.
That should depend on race or faction, experience and user difficulty settings; and when you hire pricey escorts you should expect them to include a reasonably good tactician.
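A toy version of the defeat-in-detail targeting policy discussed above (illustrative names; enemies flagged evasive are left alone so fire stays concentrated on units still fighting):

```python
def pick_target(enemies):
    """Concentrate fire on the weakest enemy that is still fighting."""
    fighting = [e for e in enemies if not e["evasive"]]
    pool = fighting or enemies        # chase evaders only if nobody else is left
    return min(pool, key=lambda e: e["hull"])["name"]

enemies = [{"name": "e1", "hull": 0.9, "evasive": False},
           {"name": "e2", "hull": 0.3, "evasive": False},
           {"name": "e3", "hull": 0.1, "evasive": True}]
# e3 is weakest but already evading, so fire concentrates on e2
```

This is the kind of rule that belongs to a group/tactical layer rather than individual pilot AI: every pilot applying it independently already produces concentrated fire without any explicit coordination message.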
My argument is that we can't simulate an AI in a practical way at the government level so why bother?
On what basis do you state that? You seem to think it's so self-evident as not to need a justification!!!
We very likely wouldn't need to in order to make it seem like we do as far as the units in the game are concerned, all that's lacking is direction and direction can be provided by the campaign script (which is kinda where we wouldn't mind having direction anyway).
That's fine for PU; we don't want any kind of dynamic universe; but I know JackS, Hellcat, etceteras, have always wanted a dynamic universe. Problem is the current dynamic universe is all based on random numbers and makes no sense. Faction government AI would be one way to inject common sense. When faction A declares war on faction B, the player should say "Finally!", --not wonder why.
How much direction is up to the campaign.
AI and campaigns are orthogonal.
Left to their own devices, the units would still behave as if they were in a faction, and may automatically cooperate in a large scale to do certain things. Would be interesting to see how much faction personality auto-represents in such a setup where there is no central authority directing them.
Remember that we still don't know how ants work; how they take on different, specialized jobs, even without a boss telling them what to do. If you can get unit AI's capable of producing complex group behaviors, such as sending one advance unit to split the defenses, or hiding behind an asteroid for a surprise advantage, I'll take my hat off; but I will still think it's easier to have a group/tactical AI.
In the end, you still need what I'm talking about on the unit level even with a government AI,
Sure I do; and as I said, I called it Manas. All I'm saying is it's not enough.
all the government AI does is choose directions and send off the commands to units, the units still have to decide how to or if to carry them out.
Government AI does less than that; it merely decides when to go to war with whom, and such.
I'm just saying we can't simulate that in a realistic way practically,
Why not?
so we shouldn't bother with it at all and just leave it up to the campaign script.
Script is as unrelated to this as anything could be.
Combine that with the idea of alpha personalities in flightgroups and factions and we should have no problem having things look like they have a logical direction (though maybe only logic implied by the player after the fact ....which is great) without the need for government AI's.
You keep saying "without need for", as if it were generally understood that multiple AI levels are undesirable, and the onus were on me to prove the contrary. Meanwhile your arguments are a tangle of contradictions and grand statements about what we can or cannot do, without even trying to justify them.

By the way, although I don't care for govt. AI for PU, I do need fleet AI for the next PU project. I want destroyers that behave like destroyers --e.g. blocking the path of incoming bombers, avoiding enemy destroyers unless they have numeric superiority, etceteras; but where a fleet commander may command a deviation from the by-the-book roles in some particular situation.
pheonixstorm
Elite
Elite
Posts: 1567
Joined: Tue Jan 26, 2010 2:03 am

Re: AI re-devlopment thread

Post by pheonixstorm »

To me it sounds as if the role of a gov AI is being made too complicated for what it SHOULD be. This isn't a 4X game, so the role gov plays is much simpler than that of, say, Master of Orion 3, where you can fiddle with each AI using spies or how large an empire is. The VS gov AI should simply have a few base states: isAtWar (true/false), plus a base emotion (passive, neutral, aggressive). Using random events such as border skirmishes or system blockades can give the player an idea that faction a and faction b may go to war eventually w/o the need for a highly complex AI model. You can make further changes toward a more dynamic universe by randomizing this emotion state each time a new game is created.

For added realism you can also add a CR (current relations) table on how much each faction likes/dislikes the next (see http://www.ataricommunity.com/forums/sh ... did=331031 for a complex CR/CB (Casus Belli) structure). Using this CR table you can randomly adjust the values after a given amount of time or based on random events: negative values like those above, or positive factors such as faction a helping faction b fend off pirate attacks or even supplying aid during some (un)natural disaster. Even w/o adding extras, a gov AI should be very basic: very few states, and it should not exert much control except over war/peace values. Unless you want to script a war and when/where to send fleets the player could possibly interact with. A campaign script should only involve the player's interaction with the universe, not how the universe interacts with itself.
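The minimal government AI described in these two paragraphs, a war flag derived from an emotion state plus a CR table nudged by events, might look roughly like this (all names and thresholds are invented for illustration):

```python
relations = {("faction_a", "faction_b"): -0.2}     # CR values in -1..1

def apply_event(pair, delta):
    """Nudge current relations after a random or scripted event."""
    relations[pair] = max(-1.0, min(1.0, relations.get(pair, 0.0) + delta))

def is_at_war(pair, emotion="neutral"):
    """War breaks out at a higher (less negative) CR for aggressive govs."""
    threshold = {"passive": -0.9, "neutral": -0.7, "aggressive": -0.5}[emotion]
    return relations.get(pair, 0.0) <= threshold

apply_event(("faction_a", "faction_b"), -0.4)      # a border skirmish
# CR is now about -0.6: war for an aggressive gov, not yet for a neutral one
```

A handful of states and a table lookup per faction pair costs almost nothing per frame, which sidesteps the performance objections raised earlier in the thread.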

Fleet AI can be boiled down in much the same way: a series of states to help the pilot AI make decisions. If in a fleet, use the fleet state to see if you need to attack a cap ship, defend a cap ship, kill escorts, etc. A squadron AI would be what the player interacts with most (besides the pilot AI) and would use its own rules based on the situation: kill pirates, defend merchie, defend fleet, attack fleet, etc.

Pilot AI is where all the decisions are truly made: how to fly the ship; whom to attack, defend, or leave alone... or even when to dump your cargo and run if a merchie. Hero or chicken, this AI is where the action really is and should be the most advanced. Based on its own rules (bomber, fighter, destroyer) it should follow the orders of its fleet/squadron to some extent, but have enough personality to go down in a blaze of glory or run away like the chicken it is (lowest-probability options). Also set min/max engagement distances as well as any other crucial information we feel a pilot would need.
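A toy sketch of pilot-level rules layered under a squadron goal, with per-role min/max engagement distances as suggested (every number and name here is illustrative):

```python
def pilot_action(role, distance, courage, squadron_goal):
    """Pilot rules: personality and range checks gate the squadron's order."""
    min_r, max_r = {"fighter": (50, 3000), "bomber": (200, 8000)}[role]
    if courage < 0.1:
        return "flee"                 # the chicken: a low-probability option
    if distance > max_r:
        return "close_distance"       # outside max engagement range
    if distance < min_r:
        return "extend"               # inside min engagement range
    return squadron_goal              # otherwise carry out the squadron order

# a steady bomber at 5000m simply carries out "attack_capship"
```

The layering matches the post: squadron state narrows the goal, and the pilot's own rules decide moment to moment whether that goal is actually executed.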

Just my thoughts... we don't need the complexity of a 4X game, but we DO need more complexity than most flight sims, since we deal with multiple factions and a DYNAMIC universe. This especially holds true when you get into the realm of an MMO. You have to have some flow control on the events and actions that affect the universe to keep the players interested.
Because of YOU Arbiter, MY kids? can't get enough gas. OR NIPPLE! How does that mkae you feeeel? ~ Halo
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-devlopment thread

Post by chuck_starchaser »

Well said. My only beef is with randomness: as I was saying elsewhere, I firmly believe randomness in AI is like dithering in textures, --you want to use it very sparingly.
But yeah, govt. ai doesn't need to be as complex as a 4X game, though it could be, eventually. My interest right now wasn't to define the simplicity or complexity of implementation, but merely to map out the general structure. Speaking of which, I think military central command would best be a separate ai from govt. ai.
The way they would interact is: govt wants to go to war with faction x, so they consult with their central military command, and the latter say "we need more corvettes to counter x's destroyer superiority," so they go into a build-up phase. As a player you may notice they buy a lot of basic materials and equipment, and you may notice a lot more corvettes than before around their shipyards. THEN the war starts.
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-devlopment thread

Post by safemode »

Government AI's: First, we don't have an economic system whereby a government can operate in a realistic manner. There is no concept of supply and demand enforced, and certainly no sense of a "federation" linking multiple bases into a singular economy. Thus, there is no constraint on building ships or resupplying lost ones that has anything at all to do with reflecting the game state. Aside from the game not simulating a necessary requirement to make government AI's relevant, a government AI would do little to nothing more than what a campaign script is likely to do. That's why I say why bother. A government AI, with the time delay in information that it would have in VS, could easily be mimicked by the campaign script running an infrequent loop and taking certain actions until it's ready to do some key event. I guess you could call that an AI, but it need not be anything in the engine.

Leader AI's: I'm saying a leader AI becomes implicit given some basic personality variable values. We let leaders self-express the role, and control this expression by only creating them sparingly (1 per flightgroup or so). The processing of leaders and regular units is all the same; the difference is that the personality of the leader AI results in it making decisions that tend to appear more leaderlike. Stuff like requesting its flightgroup to get in formation, or follow it, or attack a ship, or whatnot. Things that normal units wouldn't come to a conclusion to do. What I'm saying is that the difference between a leader AI and a non-leader AI is like one variable that says "I'm very arrogant" vs "I'm not arrogant", or something along those lines, and that variable steers its actions to be more controlling of units it is bonded to, i.e. its flightgroup or such. No additional processing is needed, and no special AI needs to be layered here.
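The implicit-leader idea could be as small as this sketch suggests: a single personality value decides whether a unit broadcasts requests to its flightgroup (hypothetical names throughout; not engine code):

```python
def maybe_issue_order(unit, flightgroup):
    """Only a high-assertiveness personality broadcasts orders to the fg."""
    if unit["assertiveness"] < 0.8:
        return []                     # ordinary wingman: issues no orders
    return [(other["id"], "form_up")
            for other in flightgroup if other["id"] != unit["id"]]

fg = [{"id": 1, "assertiveness": 0.9},
      {"id": 2, "assertiveness": 0.3},
      {"id": 3, "assertiveness": 0.2}]
# unit 1 acts as the implicit leader; units 2 and 3 issue nothing
```

Every unit runs the same function each tick, so killing the "leader" automatically silences the orders, which is the emergent effect described above.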

Faction AI's... fleet AI's: Again, I would rather see these be implied AI's that emerge out of the parallel mode of decision making present in the regular unit AI. Rather than running a layer of AI on top of unit AI processing (slowing down the frame significantly by having to grep the game state and send commands to units, which then have to not only process these new commands but weigh them against what they're doing), I'd rather just have this parallel mode of thinking within the units that causes them to think more alike in similar situations and to tend towards more coordinated actions. Group think is not an additional layer requiring additional processing; rather, it's done instead of one, and it doesn't require polling units and gathering game state data to make a decision.

I don't think we can afford layers of AI on top of unit AI processing, because we barely have time for the processing we do now. The game is not threaded. Every physics frame, and thus every timeslice AI is processed in, has to occur in a very tiny fraction of a second or we quickly drop in framerate. I just don't see how an upper-level AI such as a fleet AI or faction AI or government AI is supposed to pull data and send commands, and thus basically do anything useful, given its time constraint and the processor load required to do those things. What I'm suggesting is to put just enough of the functionality of this type of AI into the unit AI, because I don't think a layer on top of them can avoid being too expensive to run in its given timeframe.

If you want further justification as to why I think that, go ahead and profile the code (which I did back when I was putting Opcode in). AI is not cheap at the unit level, and it only gets more expensive the more you have to poll the universe. Combine this with the fact that these higher-level AI's would be most active in combat situations, where physics (actual physics stuff) begins to consume the vast majority of available cpu, and the necessity for lightweight AI's, especially at those moments, is easily apparent. That is, unless we like stop-motion gameplay. As it is, we already dip to dangerously low framerates on systems we really shouldn't in certain situations, which is due to a few different things, but the overall effect is that when it comes to combat (where we would expect AI to be most active) we can barely afford the AI we have now.

Don't get me wrong, we're not horribly inefficient here. We simulate tons of units sequentially in VS. Unless major work is done to parallelize VS, though, we are going to find it extremely difficult to squeeze yet another layer of AI in anything but non-active periods of gameplay, which may or may not be usable for the purposes of higher-level AI's. Maybe we can make a high-level AI such as faction AI or government AI active only when we've been quiet for a while, and not process it when the game is very active, falling back to something closer to what I'm suggesting in those situations. Not sure that would be good enough, though, to do more than what a campaign could do.
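One cheap way to fit higher-level thinking into the frame budget described here is to stagger it round-robin, so each physics frame ticks only a slice of units; a sketch under invented names:

```python
def ai_slice(units, frame, slice_size):
    """Pick which units get their expensive AI tick this frame."""
    n = len(units)
    start = (frame * slice_size) % n
    return [units[(start + i) % n] for i in range(min(slice_size, n))]

units = list("abcdefgh")
# with slice_size=2, every unit gets one expensive tick every 4 frames
```

The per-frame cost becomes constant regardless of unit count; the trade-off is latency, since each unit's expensive decision is refreshed only every n/slice_size frames.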

When I say we can do something in the campaign, I'm not saying we are removing dynamic behavior; I'm saying we can implement that dynamic behavior without actually having the engine create an entity to implement it. Granted, a lot of the dynamic attributes of the game are simply python implemented; I'm talking about not requiring a full AI to do the things a government AI would do, because we don't simulate enough in-game to warrant the need of an AI to govern it. We don't have a real economy, we don't have real politics, we don't have real inputs for a government AI to act on; thus we don't need a real government to create the actions and events based on the inputs we do have. Hence, a much simpler routine in the campaign can mimic anything an in-game government AI would do in any realistic manner.

I also think you are giving your centralized (fleet, faction, government) AI's too much credit. There's simply no practical way we would be able to give tactical experience to these AI's at the complexity you're suggesting, given the completely unknown situations that will occur in-game. Determining such actions would require routines that would take way too long to process in the timeframes we have for these things. The idea of a system where an AI can choose a tactic such as hiding behind an object to surprise an enemy, based on inputs from the game, is just boggling. Outside of being entirely random, how would we show that this is a better action than just attacking straight on? How many data points are we going to have to pull from the universe to make that determination? How many hundreds of python-scripted actions are we going to have to make available to handle all the complex tactics needed to make "hide behind object for surprise attack" or similar actions viable?

I'm not saying that this wouldn't be more realistic, or make for a more dynamic and thus better game. I'm saying we can't afford the AI you want, and the AI we can afford would be so uselessly crippled that we could do more without it existing as its own entity.


edit: But seriously, if you get the time, profile the code. (It helps to be quick about loading a save game and entering the situation you want to profile, as tons of empty-space flying will skew your results.) It's been a long time since I did it (and had time), and it'll give you a good idea of what we have in terms of time to work with.
Ed Sweetman endorses this message.
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: AI re-devlopment thread

Post by klauss »

safemode wrote:government AIs: First, we don't have an economic system whereby a government can operate in a realistic manner. There is no concept of supply and demand enforced, and certainly no sense of a "federation" linking multiple bases into a singular economy. Thus, there is no constraint on building ships or resupplying lost ones that has anything at all to do with reflecting the game state. Aside from the game not simulating a necessary requirement to make government AIs relevant, a government AI would do little to nothing more than what a campaign script is likely to do. That's why I say why bother. A government AI, with the time delay in information that it would have in VS, could easily be mimicked by the campaign script running an infrequent loop and taking certain actions until it's ready to do some key event. I guess you could call that an AI, but it need not be anything in the engine.
That's the hack-your-way mode of doing things that has made VS so difficult to maintain.
True, it does not need to be called AI. But, as Python's Zen of duck typing goes: if it looks like a duck, quacks like a duck, and smells like a duck, then call it a duck. An AI in our case. Calling things by their name is what makes things easy to understand.

Hiding an AI script in a campaign script is a sure way of misguiding future modders and maintainers.
safemode wrote:Leader AIs: I'm saying a leader AI becomes implicit given some basic personality variable values. We let leaders self-express the role, and control this expression by creating them only sparingly (1 per flightgroup or so). The processing of leaders and regular units is all the same; the difference is that the personality of the leader AIs results in them making decisions that tend to appear more leaderlike. Stuff like requesting that their flightgroup get in formation, or follow them, or attack a ship, or whatnot. Things that normal units wouldn't come to a conclusion to do. What I'm saying is that the difference between a leader AI and a non-leader AI is like one variable that says "I'm very arrogant" vs "I'm not arrogant" or something along those lines, and that variable steers their actions to be more controlling of units they are bonded to, i.e. their flightgroup or such. No additional processing is needed, and no special AI needs to be layered here.

faction AIs ... fleet AIs: Again, I would rather see these be implied AIs that emerge out of the parallel mode of decision making present in the regular unit AI. Rather than running a layer of AI on top of unit AI processing (slowing down the frame significantly by having to grep the game state and send commands to units, which then have to process those new commands and weigh them against what they're doing), I'd rather just have this parallel mode of thinking within the units that causes them to think more alike when in similar situations, and to tend towards more coordinated actions. Group think is not an additional layer requiring additional processing; rather, it runs instead of one, and it doesn't require polling units and gathering game-state data to make a decision.
But in order for that variable to do its thing, you're forced to implement tons of nuances in pilot AI, like message passing between members in a peer-to-peer fashion, which is quite a bit more complex than a hierarchical message system. In fact, with flightgroup AI you probably don't need any form of message passing, just a flightgroup state structure that pilot AIs query. The point is, you have more freedom to design simpler algorithms that way than if you were attempting to create swarm intelligence (which is what you're proposing).
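The "flightgroup state structure that pilot AIs query" could be as small as this (names and fields are illustrative, not engine API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlightgroupState:
    # Shared state the leader writes and wingmen read: a hierarchical
    # alternative to peer-to-peer message passing between pilots.
    directive: str = "patrol"          # e.g. "patrol", "form_up", "attack"
    target_id: Optional[int] = None    # unit id when directive == "attack"

class PilotAI:
    def __init__(self, fg_state):
        self.fg_state = fg_state

    def decide(self):
        # The individual decision is weighed against the group directive;
        # here, trivially: follow the attack order if one is posted.
        if self.fg_state.directive == "attack" and self.fg_state.target_id is not None:
            return ("attack", self.fg_state.target_id)
        return ("patrol", None)
```

No pilot ever talks to another pilot; the leader mutates one structure and everyone reads it on their next think slice, which is much cheaper than any broadcast scheme.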

You also seem to assume that having two layers is slower than one big layer. That's a fallacy. Complexity cannot be judged without an implementation: for the same task, there are complex and simple implementations, efficient and inefficient ones. Sometimes a two-pass algorithm is faster than a one-pass algorithm. Likewise, a multithreaded algorithm may be faster, or it may be slower.
safemode wrote:I don't think we can afford layers of AI on top of unit AI processing, because we barely have time for the processing we do now. The game is not threaded. Every physics frame, and thus every timeslice in which AI is processed, has to occur in a very tiny fraction of a second or we quickly drop in framerate. I just don't see how an upper-level AI such as a fleet AI or faction AI or government AI is supposed to pull data and send commands, and thus basically do anything useful, given its time constraint and the processor load required to do those things.
Lack of imagination, I'd say. That you don't see a way (OTOH, because I bet you didn't even spend a day thinking about it) doesn't mean there isn't one. Besides, you're forgetting that if there are 1K pilot AIs running, there will be fewer than 0.5K flightgroup AIs, running at sparser intervals, and there will be only a handful of faction/government AIs.

Lack of motivation, more likely. You don't want high level layers, so you don't make the effort to find an implementation. Motivation is key in voluntary software development, I'm well aware of that.

So, to try and motivate you: a high-level AI is key for any game. Why does VS get boring after a while of playing? It's not that the art is ugly; it's not. It's not that the shaders aren't perfect; Privateer didn't even have them and it was a good game. It's not that the physics isn't fully realistic; Freelancer was anything except realistic, and it was fun to play. It's not that the AI isn't intelligent; let's compare against Privateer again, whose AI was downright stupid. It's game dynamics. VS feels too random.

And in order to design fun game dynamics (forget about realism; realism is good, but even more important than realism in a game is entertainment), you need something tweakable, easily tweakable. Swarm intelligence, besides being a difficult and rather new area of CS, is not easily tweakable by anyone but the biggest AI gurus.

safemode wrote:If you want further justification as to why I think that, go ahead and profile the code (which I did back when I was putting opcode in). AI is not cheap at the unit level, and it only gets more expensive the more you have to poll the universe.
AI is not cheap in VS because it's 90% of the simulation. Physics only does trivial stuff, collision happens rarely in such a sparse universe, and is highly optimized for sparsity.
AI is not cheap, furthermore, because it's messily coded. When coding a critical path, care must be taken with many otherwise unimportant details, like avoiding unpredictable branches or choosing good algorithms and memory layouts. VS did none of that: the code is messy, I bet not a single minute of thought was dedicated to weeding out unpredictable branches and other poorly performing constructs, and the data structures aren't an exception.
I remember seeing constructs with quadratic performance on some parameter, when a better construct was pretty obvious, can't remember which. Probably because the author of that code didn't think it would sit on the critical path and paid no attention to it. Some of that I changed loooong ago, some of that I didn't.

Take a look for instance at this function. The first bad thing there is a switch statement, which is an unpredictable branch, that gets executed for each unit, and is the result of choosing a bad pattern: one function that does too much.

Granted, I myself have used switch statements elsewhere, but that was a) before I knew better patterns, and b) in paths not as critical as this. This thing gets executed several times per unit in the system. The worst switch I remember writing gets executed merely once per mesh type on screen.

A better pattern would be to form a graph of logic objects and each node calls the next - then, each node type would be a different call site and the processor has a better chance of predicting the branch targets.
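The branch-prediction benefit of that pattern is specific to compiled code (each concrete node type becomes its own indirect-call site in C++), but the structure itself can be sketched in a few lines. Node names here are made up for illustration:

```python
class AINode:
    """One node in a graph of logic objects. In C++, each concrete node
    type would be a distinct call site, helping the branch predictor;
    this sketch only shows the structure, not that optimization."""
    def __init__(self, next_node=None):
        self.next_node = next_node

    def execute(self, unit):
        raise NotImplementedError

class SelectTarget(AINode):
    def execute(self, unit):
        unit["target"] = "nearest_enemy"   # placeholder targeting logic
        if self.next_node:
            self.next_node.execute(unit)

class FireWeapons(AINode):
    def execute(self, unit):
        if unit.get("target"):
            unit["firing"] = True
```

Each node does one thing and hands off to the next, instead of one function switching over every possible case.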

Now take a look at ProcessCurrentFgDirective. There you have a huge function that does too much.
  • Try to understand it and explain it to me. Not easy I guess.
  • Lots and lots of branches. Too many because instead of using an indirect call to execute the right action, it's one function for all cases. It has to see if it's running in the current starsystem, it has to see if it's attacking, following, etc, etc... That also contributes to difficult reading.
  • String manipulation. Critical paths shouldn't manipulate strings unless absolutely necessary. Here it wasn't.
  • See line 838. Another example of bad code.
  • How big is that function? Circa 500 lines. Is that good?
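One way to break such a function apart is a dispatch table of small per-directive handlers, so the "which case am I in" logic becomes a single indirect call. Directive names and fields below are illustrative, not the engine's actual codes:

```python
def handle_attack(unit, fg):
    unit["action"] = ("attack", fg.get("target"))

def handle_follow(unit, fg):
    unit["action"] = ("follow", fg.get("leader"))

def handle_form_up(unit, fg):
    unit["action"] = ("form_up", fg.get("leader"))

# Hypothetical directive names; the real engine uses its own codes.
DIRECTIVE_HANDLERS = {
    "attack": handle_attack,
    "follow": handle_follow,
    "form_up": handle_form_up,
}

def process_fg_directive(unit, fg):
    # One table lookup and one indirect call replace a ~500-line
    # branchy function; unknown directives are simply ignored.
    handler = DIRECTIVE_HANDLERS.get(fg.get("directive"))
    if handler:
        handler(unit, fg)
```

Each handler is now short enough to read and test on its own, which addresses the "try to understand it and explain it to me" complaint directly.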
safemode wrote:Combine this with the fact that these higher-level AIs would be most active in combat situations, where physics (actual physics stuff) begins to consume the vast majority of available CPU, and the necessity for lightweight AIs, especially at those moments, is easily apparent. That is, unless we like stop-motion gameplay. As it is, we already dip to dangerously low framerates in certain situations on systems where we really shouldn't, which is due to a few different things, but the overall effect is that when it comes to combat (where we would expect AI to be most active) we can barely afford the AI we have now.
Are you trying to say a better AI is outside the reach of current hardware? Have you played any modern commercial game lately? Have you ever played a modern flight simulator? Or an old one? Try EF-2000... that one had a really good AI (with a few glitches, but really good nonetheless)... and on slower machines.

The limit is not how much the AI can do, it's how many cycles are used to do it, and how well the game chooses which parts to simulate more accurately than others. Because you shouldn't simulate every psychological aspect of an NPC that is fighting another NPC on the other side of the galaxy.
safemode wrote:Don't get me wrong, we're not horribly inefficient here. We simulate tons of units sequentially in VS.
VS is terribly inefficient in many places.

safemode wrote:I also think you are giving your centralized (fleet, faction, government) AIs too much credit. There's simply no practical way we would be able to give tactical experience to these AIs to the complexity you're suggesting, given the completely unknown situations that will occur in-game. Determining such actions would require routines that would take way too long to process in the timeframes we have for these things. The idea of a system where an AI can choose a tactic such as hiding behind an object to surprise an enemy, based on inputs from the game, is just boggling. Outside of being entirely random, how would we show that this is a better action than just straight attacking? How many data points are we going to have to pull from the universe to make that determination? How many hundreds of Python-scripted actions are we going to have to make available to handle all the complex tactics needed to make "hide behind object for surprise attack" or similar actions viable?
The AI doesn't make that determination. The AI coder does: he decides how to quickly judge whether a situation warrants an ambush, and codes an appropriate heuristic. The heuristic can be as simple as "if <has big unit nearby> and <I'm closer to it than the enemy> and <a bit of randomness> then ambush <target> at <big unit>".
And the "ambush" tactic is a generic pattern, a state machine that implements ambush tactics (sit here until enemy in view -> attack enemy -> retreat).
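Put together, the heuristic plus the three-state pattern described here fit in a few lines. The thresholds and state names are guesses for illustration:

```python
import random

def should_ambush(has_big_unit_nearby, my_dist_to_cover,
                  enemy_dist_to_cover, rng=random):
    # Direct transcription of the heuristic above: big unit nearby,
    # I'm closer to it than the enemy, plus a bit of randomness.
    return (has_big_unit_nearby
            and my_dist_to_cover < enemy_dist_to_cover
            and rng.random() < 0.5)

class AmbushTactic:
    """Generic ambush pattern as a tiny state machine:
    hide -> attack (when enemy in view) -> retreat (hull low or enemy gone)."""
    def __init__(self):
        self.state = "hide"

    def update(self, enemy_in_view, hull_low):
        if self.state == "hide" and enemy_in_view:
            self.state = "attack"
        elif self.state == "attack" and (hull_low or not enemy_in_view):
            self.state = "retreat"
        return self.state
```

The heuristic is cheap (three comparisons), and the tactic itself carries no extra per-frame cost beyond the state machine the unit is already running.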

safemode wrote:edit: but seriously, if you get the time. Profile the code. (it helps to be quick about loading a save game and entering into the situation you want to profile, as tons of empty space flying will skew your results). It's been a long time since i did it (and had time) and it'll give you a good idea of what we have in terms of time to work with.
Will do, especially now that I know oprofile :D
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
chuck_starchaser
Elite
Elite
Posts: 8014
Joined: Fri Sep 05, 2003 4:03 am
Location: Montreal
Contact:

Re: AI re-devlopment thread

Post by chuck_starchaser »

Ah, now I'm starting to get where you're coming from... and it seems part of the problem is semantic, part of it is details that were in our minds but we both forgot to mention, and part is true disagreement, probably, but hopefully resolvable.
safemode wrote:government AI's: First, we dont have an economic system where by a government can operate in a realistic manner. There is no concept of supply and demand enforced and certainly no sense of a "federation" linking multiple bases into a singular economy. Thus, there is no constraint on building ships or resupplying lost ones that has anything to do at all with reflecting the game state.
I've partly been looking forward to having such economic system, while thinking of the ai; partly saying to myself "until then it don't matter anyway; we can, for now, have government ai that simply decides between war and peace as stupidly as it does presently; but at least we'll have that code in the right place (in an ai layer, where it belongs, rather than in a DynamicUniverse.py file... --the latter being a prime example of solution domain code (mis)structuring)."
Aside from the game not simulating a necessary requirement to make government AI's relevant, a government AI would do little to nothing more than what a campaign script is likely to do. That's why i say why bother. A government AI with the time delay in information that it would have in VS could easily be mimicked by the campaign script running an infrequent loop and taking certain actions until it's ready to do some key event. I guess you could call that an AI, but it need not be anything in the engine.
Well, it seems we're thinking at different levels, and the apparent conflict between our goals is therefore an optical illusion: I'm thinking at the level of code refactoring/organization; you're thinking at the implementation level.
I don't care much about whether we fully simulate B or just mimic it by tweaking A into some A', as long as we call (A'-A) by its intended function: "B". I want to have the scaffolding in place for all the ai layers that naturally should be there, even if the shelves are mostly empty in a first implementation, or even if they were to remain empty forever, --I don't care. I just want to organize the code in a way that makes intuitive sense, without weighing myself down with implementation concerns.
Otherwise, instead of a code structure that contours the problem domain, as it should, we'd end up with a solution-domain structure, --which is a terrible thing, because later on, if you think of a better solution, you can't implement it without refactoring.
Leader AI's: I'm saying a leader AI becomes implicit given some basic personality variable values. We let leaders self express the role and control this expression by only creating them sparingly (1 per flightgroup or so). The processing of leaders and regular units is all the same, the difference is that the personality of the leader ai's result in it making decisions that tend to appear more leaderlike. Stuff like requesting it's flightgroup to get in formation or follow him or attack a ship or what not.
Requesting or ordering implies that a message is "broadcast" to its (not it's :) ... ) flight-group; so, consciously or not, you've just acknowledged the need for group leader ai as a separate ai entity or layer. Thank you. :D
Things that normal units wouldn't come to a conclusion to do. What I'm saying is that the difference between a leader AI and a non-leader AI is like one variable that says "I'm very arrogant" vs "I'm not arrogant" or something along those lines, and that variable steers their actions to be more controlling of units they are bonded to, i.e. their flightgroup or such. No additional processing is needed, and no special AI needs to be layered here.
That's fine with me, as long as we call B "B" and not "A".
faction AIs ... fleet AIs: Again, I would rather see these be implied AIs that emerge out of the parallel mode of decision making present in the regular unit AI. Rather than running a layer of AI on top of unit AI processing (slowing down the frame significantly by having to grep the game state and send commands to units, which then have to process those new commands and weigh them against what they're doing), I'd rather just have this parallel mode of thinking within the units that causes them to think more alike when in similar situations, and to tend towards more coordinated actions. Group think is not an additional layer requiring additional processing; rather, it runs instead of one, and it doesn't require polling units and gathering game-state data to make a decision.
Ditto. You are talking about code optimization, which is a detail of implementation that should NOT soil the general structure of the code. If we find fleet ai unnecessary, there should still be a fleet_ai.h, with perhaps a comment explaining how pilot ai tricks make fleet ai unnecessary; but the code organization should reflect the problem domain; --NOT the solution domain.
Having said that, let me clarify a detail of implementation that was in my mind but I neglected to mention:
If at the top ai LOD (nearest units) pilot ai's are updated at each physics frame, say; then, group leader ai's, --which eventually would be more complex than you describe, I hope--, could be updated once every 10 physics frames. Fleet ai's could be updated once every 100 physics frames. Central Military command ai's every 1000 physics frames. And faction government ai's once every time you boot the game, or jump to a new system. And if latency were a problem, they could be coded as co-routines.
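Those update cadences (pilot every frame, leader every 10, fleet every 100, central command every 1000) are easy to sketch as a multi-rate scheduler; the counters here stand in for the actual per-layer update functions:

```python
class AIScheduler:
    """Multi-rate AI LOD as described above: pilot AIs every physics
    frame, leaders every 10 frames, fleets every 100, central command
    every 1000. The count increments are placeholders for real updates."""
    def __init__(self):
        self.frame = 0
        self.counts = {"pilot": 0, "leader": 0, "fleet": 0, "command": 0}

    def run_frame(self):
        self.counts["pilot"] += 1          # update every frame
        if self.frame % 10 == 0:
            self.counts["leader"] += 1     # every 10th frame
        if self.frame % 100 == 0:
            self.counts["fleet"] += 1      # every 100th frame
        if self.frame % 1000 == 0:
            self.counts["command"] += 1    # every 1000th frame
        self.frame += 1
```

Staggering the offsets per flightgroup (so not every leader updates on the same frame) would smooth the load further; a coroutine per high layer, as suggested, would handle latency the same way.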
I don't think we can afford layers of AI on top of unit AI processing, because we barely have time for the processing we do now. The game is not threaded. Every physics frame, and thus every timeslice in which AI is processed, has to occur in a very tiny fraction of a second or we quickly drop in framerate. I just don't see how an upper-level AI such as a fleet AI or faction AI or government AI is supposed to pull data and send commands, and thus basically do anything useful, given its time constraint and the processor load required to do those things. What I'm suggesting is to put just enough of the functionality of this level of AI into the unit AI, because I don't think a layer on top of them can avoid being too expensive to run in its given timeframe.
Ditto. I don't believe system analysis should be complicated with implementation/optimization concerns. System analysis should strive to model the problem domain.
If you want further justification as to why I think that, go ahead and profile the code (which I did back when I was putting opcode in). AI is not cheap at the unit level, and it only gets more expensive the more you have to poll the universe. Combine this with the fact that these higher-level AIs would be most active in combat situations, where physics (actual physics stuff) begins to consume the vast majority of available CPU, and the necessity for lightweight AIs, especially at those moments, is easily apparent. That is, unless we like stop-motion gameplay. As it is, we already dip to dangerously low framerates in certain situations on systems where we really shouldn't, which is due to a few different things, but the overall effect is that when it comes to combat (where we would expect AI to be most active) we can barely afford the AI we have now.
Ditto^2.
Don't get me wrong, we're not horribly inefficient here. We simulate tons of units sequentially in VS. Unless major work is done to parallelize VS, though, we are going to find it extremely difficult to squeeze in yet another layer of AI in anything but non-active periods of gameplay, which may or may not be usable for the purposes of higher-level AIs. Maybe we can make a high-level AI such as a faction AI or government AI active only when things have been quiet for a while, and not process it when the game is very active, falling back to something closer to what I'm suggesting in those situations.
Ditto^3.
Not sure if that would be good enough though to do more than what a campaign could do.
Campaign is unrelated to ai.
When I say we can do something in the campaign, I'm not saying we are removing dynamic behavior; I'm saying we can implement that dynamic behavior without actually having the engine create an entity to implement it.
Taking a feature from one place and sticking it somewhere else is a zero sum game. I don't see where the performance advantage would come from. What I know is that things are best placed where they naturally belong, so that the code is understandable and maintainable.
Granted, a lot of the dynamic attributes of the game are simply Python-implemented; I'm talking about not requiring a full AI to do the things a government AI would do, because we don't simulate enough in-game to warrant the need of an AI to govern it.
And what I'm saying is I don't care how full or empty our government AI be, as long as it be there, --for completeness and comprehensibility of the AI code organization.
We don't have a real economy, we don't have real politics, we don't have real inputs for a government AI to act on, thus we don't need a real government to create the actions and events based on the inputs we do have. Hence, a much simpler routine in the campaign can mimic anything an in-game government AI would do in any realistic manner.
The campaign has nothing to do with any of this.
I also think you are giving your centralized (fleet, faction, government) AIs too much credit. There's simply no practical way we would be able to give tactical experience to these AIs to the complexity you're suggesting, given the completely unknown situations that will occur in-game. Determining such actions would require routines that would take way too long to process in the timeframes we have for these things. The idea of a system where an AI can choose a tactic such as hiding behind an object to surprise an enemy, based on inputs from the game, is just boggling. Outside of being entirely random, how would we show that this is a better action than just straight attacking?
If the destroyers are at the front of a fleet, it is best to attack from the back. So, if fleet is coming towards you, && destroyers are formed at front && there's a place to hide...
How many data points are we going to have to pull from the universe to make that determination? How many hundreds of Python-scripted actions are we going to have to make available to handle all the complex tactics needed to make "hide behind object for surprise attack" or similar actions viable?
Details of implementation and/or optimization concerns I don't care to worry about during the analysis phase.
I'm not saying that this wouldn't be more realistic, or make for a more dynamic and thus better game. I'm saying we can't afford the AI you want, and the AI we can afford would be so uselessly crippled that we could do more without it existing as its own entity.

edit: But seriously, if you get the time, profile the code. (It helps to be quick about loading a save game and entering the situation you want to profile, as tons of empty-space flying will skew your results.) It's been a long time since I did it (and had time), and it'll give you a good idea of what we have in terms of time to work with.
Ditto.
Last edited by chuck_starchaser on Thu Feb 04, 2010 6:09 pm, edited 1 time in total.
safemode
Developer
Developer
Posts: 2150
Joined: Mon Apr 23, 2007 1:17 am
Location: Pennsylvania
Contact:

Re: AI re-devlopment thread

Post by safemode »

We'll see. But what's missing from VS is not AI; it's just straight-up plot. There is no direction, and that's not the fault of the AI, or a lack of faction AI, or too much left up to randomness; it's just the lack of any kind of campaign and a limited set of missions (which is related to the lack of a campaign).

If we had a decent campaign we could have rock stupid AI and still be fun (see most of the WC games). The drive for better AI is to fill in all the non-campaign gameplay that can occur, but that's really a distant second to why people get bored with VS.
Ed Sweetman endorses this message.
klauss
Elite
Elite
Posts: 7243
Joined: Mon Apr 18, 2005 2:40 pm
Location: LS87, Buenos Aires, República Argentina

Re: AI re-devlopment thread

Post by klauss »

safemode wrote:We'll see. But what's missing from VS is not AI; it's just straight-up plot. There is no direction, and that's not the fault of the AI, or a lack of faction AI, or too much left up to randomness; it's just the lack of any kind of campaign and a limited set of missions (which is related to the lack of a campaign).

If we had a decent campaign we could have rock stupid AI and still be fun (see most of the WC games). The drive for better AI is to fill in all the non-campaign gameplay that can occur, but that's really a distant second to why people get bored with VS.
Actually, I wasn't pushing for "better" AI - the current AI is *potentially* good enough. I was pushing for maintainable AI. Currently, the AI is plagued with bugs no one knows how to fix, because the code is unmaintainable.

* Potentially, because being a set of state machines it should be able to express proper reaction patterns, but no one understands it in full, so no one knows how to exploit its full potential.

But... yeah, I've always acknowledged the need for a campaign.
Oíd mortales, el grito sagrado...
Call me "Menes, lord of Cats"
Wing Commander Universe
Post Reply