Medieval Conquest Mod: How to Adjust the AI Settings


Background

I am curious about what it would take to make 'good' (competitive with dedicated humans) commercial turn-based strategy game AI. My impression is that this problem is fantastically complex, probably orders of magnitude more complex than chess or go, for a litany of reasons. In go, for instance, you can only do one thing on a move: place a stone on a board against a single opponent.

The first 20 moves in the game are pretty well set in stone for me. That could easily be programmed into the game to give a 'competitive' start. In general, the first priority is a) get a city established. There are two things to look for: the best nearby tile for the city, and any nasty monster threats within 8 cells. If a tile meets a minimum set of criteria I'll settle immediately; if not, I'll take 2 turns to 'explore' and identify whether there is a nearby 'better' tile. The criteria, in order of importance:

1. No monster lair greater than Medium within 2 tiles (No)
2. Key resource within 1 tile (shard, clay, forest, river) (Yes)
3. Mana score (at least 1)
4. Grain score (3 or better)

5. Wood score (2 or better)

If any of those criteria are not met, I'll spend 4 turns looking for a tile that does meet them; if not, I'll go for the best-fit tile in range (a rough sketch of this rule in code follows below). Taxes are set to 0 until I run out of cash. Research is Civics, Admin, and other civ techs. Cast any enchantments - Enchanted Hammers, Inspiration, Meditation. Build order is: production-enhancing building, pioneer, second enhancement, cheap troop, first champion. Exploring priorities - get them exploring. If I had to settle for a subpar tile for my capital, the first priority is to find a second city location that does meet those criteria. If not, the top priority is looking for equipment caches and identifying monster lairs. After that it starts to fall into the category 2 and 3 areas.

2. The main things I'm trying to discover through the game are: a) find as many 'sweet spot' city tiles as I can and settle them; b) identify choke points and defensible perimeters, both to keep opponents bottled in and to keep them from bottling me in and blocking my expansion; c) key resources (shards, iron, crystal, clay, horses, quests, etc.); d) wasteland locations and types (I like taming the wastelands - I know the cost/benefit ratio is rarely worth it from an efficiency standpoint, but it's still fun); e) the location and disposition of the opponents.
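A minimal sketch of the capital-settling rule from answer 1 above: check the criteria each turn, explore for a limited number of turns, then fall back to the best available tile. The tile fields, lair-size encoding, and fallback scoring weights are invented placeholders, not the game's actual data model.

```python
# Hypothetical sketch of the settling heuristic described above. Tile fields
# (lair_size_within_2, key_resource_within_1, mana, grain, wood) are stand-ins,
# not the game's real data model.

MEDIUM = 2  # made-up lair-size encoding: 1 = small, 2 = medium, 3 = large, ...

def meets_criteria(tile):
    """The five criteria, in the order of importance given above."""
    return (tile["lair_size_within_2"] <= MEDIUM   # 1. no lair bigger than Medium within 2 tiles
            and tile["key_resource_within_1"]       # 2. shard/clay/forest/river within 1 tile
            and tile["mana"] >= 1                   # 3. mana score at least 1
            and tile["grain"] >= 3                  # 4. grain score 3 or better
            and tile["wood"] >= 2)                  # 5. wood score 2 or better

def best_fit(tiles):
    """Fallback scoring when no tile meets every criterion (weights are invented)."""
    return max(tiles, key=lambda t: t["grain"] + t["wood"] + 2 * t["mana"])

def pick_capital(tiles_seen_per_turn, max_explore_turns=4):
    """Settle the first tile meeting all criteria; after a few turns, take the best fit."""
    seen = []
    for turn, tiles in enumerate(tiles_seen_per_turn):
        seen.extend(tiles)
        good = [t for t in seen if meets_criteria(t)]
        if good:
            return good[0]            # a qualifying tile: settle immediately
        if turn + 1 >= max_explore_turns:
            return best_fit(seen)     # give up exploring, take the best available
    return best_fit(seen) if seen else None
```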

How those look will determine the strategy I'll use to win. If I'm penned in or have limited iron, I'll go for a quest victory or spell victory; if the AI gets too far ahead, I'll go for an alliance; if I manage to pull off the start, I'll conquer.

3. I'll watch the tactical battles very closely to optimize my play for the future. The strength indicators of weak, medium, strong, deadly, and epic are very general, and tend to be creature specific.

Learning what you can and can't tackle is very much trial and error. For example, you always autoresolve against a dark wizard: whatever algorithm the game uses works to your advantage when you autoresolve and against you when you fight it out on the tactical screen. You can sometimes defeat monsters far above your level with the right combination of spells and units. The reverse works as well.

4. Harder to answer, as I can't see the tactical outcomes of AI battles. They do tend to clear quests fairly aggressively, so there must be something working at that level. Other than that, the AI does not place cities terribly effectively; it does pretty well at the rest, but it does tend to ignore the wastelands.

What xml are you looking at? I'd like to look around the AI some myself, and I didn't see any xmls for the AI.

In general, the problem with game AIs is not complexity, but a combination of lack of effort and building the AI too early. Game AIs are usually expert systems, and building expert systems is a lot harder when the game itself is still receiving many balance changes, or is still in development. Developers are also often not very expert at playing the games they make. Community-built AIs are often quite good, in part because they're made after people have had plenty of chance to learn to play the game well.

While neural networks may be of some use, I believe genetic algorithms are the best prospect for better game AI for this and many other games, plus some systems to copy human behavior as best the computer can. Running a dedicated analysis of things is time consuming, but you have tens of thousands of players playing games; if you use a system which makes small alterations, then uses some basic statistical tools to see which ones work better and reports them back to the company over the internet, you can get some real improvement over time due to the huge number of games, with little overhead in coding time or in player experience.

What xml are you looking at? I'd like to look around the AI some myself, and I didn't see any xmls for the AI.

They are in nearly every file under the English data section. Everywhere you see an AIpriority tag and a value, that is the weight some decision algorithm assigns to that game element. Spells, equipment, and traits each have these. CoreAIDefs.xml has the weights on the main game ideas. There are hundreds, maybe thousands, of what look like hand-picked weights. So the idea is that the game comes up with candidate actions by applying these weights and applies the decision with the highest value.
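A minimal sketch of what that kind of weight-driven selection might look like, assuming hypothetical action names, weights, and situational scores (none of this is the game's actual code; it only illustrates "score each candidate, pick the highest"):

```python
# Illustrative only: a guess at how a weight-driven decision step might look.
# Action names, weights, and situational scores are hypothetical.

def choose_action(candidates, weights, situation_score):
    """Score each candidate as weight * situational value and pick the maximum."""
    best_action, best_score = None, float("-inf")
    for action in candidates:
        score = weights.get(action, 0.0) * situation_score(action)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Example usage with made-up numbers.
weights = {"build_pioneer": 1.4, "train_militia": 0.9, "cast_enchantment": 1.1}
situational = {"build_pioneer": 2.0, "train_militia": 1.0, "cast_enchantment": 1.5}
print(choose_action(weights, weights, situational.get))  # -> "build_pioneer"
```

Everything interesting then lives in how the weights and situational scores are produced, which is exactly where the hand-tuning problem discussed below comes in.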

I didn't find the algorithm that computes this, but picking the weights amounts to determining the policy, so the rest is search. The behavior is set with these weights.

You also alluded to community-built AI for these types of games that was good (can beat dedicated players straight up). I have never heard of any turn-based strategy game like this that had 'good' fair AI.

Which games are you talking about? Most of these games just cheat with the AI to make it look like a human and be competitive too.

I think you may see the problem with expert systems when you see just how many weights there are in the xml. So if I were an expert, here is what I might do: throw in some initial weights based on intuition (because I sure don't think in weights myself) and see how things go.

Then you find out it doesn't do optimal solution S in situation X. So you can tweak the numbers to make S get the highest score in X. Then you see it doesn't do S' in X', so you tweak again.

At some point your tweaks may undo previous solutions, but the real problem is that there are an intractable number of Xs in a TBS of this ilk. Tweaking for each S, making sure it doesn't break a previous S, and testing it is an enormously time-consuming task, and getting to the 'good' AI solution may take millions of tweaks. That was why I was comparing it to transporting all the rocks in a quarry by hand.

The big problem is generalization, because what works in game G against four opponents on random map M can be a disaster in G' against two opponents on M'. That is why I think this problem is crazy hard. I think people don't notice because they don't play other humans and the AI isn't strong enough to make them work hard. If you want a solution that will just satisfy most customers, who just want a fun experience, then whatever works. But if the goal is to play this game at the highest levels straight up, I think putting someone on the moon was an easier problem.

People studied chess programs for decades, IBM put a lot of cash into building several supercomputers, and chess is a far, far less difficult problem than playing a game like this optimally. The world doesn't take FE as seriously as chess, and because there is no MP we can't see how clever we can be in straight-up situations in FE, so few people actually ponder the reasons I put in the background section of my original post for why this problem is orders of magnitude harder.

While neural networks may be of some use, I believe genetic algorithms are the best prospect for better game AI for this and many other games, plus some systems to copy human behavior as best the computer can. Running a dedicated analysis of things is time consuming, but you have tens of thousands of players playing games; if you use a system which makes small alterations, then uses some basic statistical tools to see which ones work better and reports them back to the company over the internet, you can get some real improvement over time due to the huge number of games, with little overhead in coding time or in player experience.

This may be possible, but I am not familiar with the use of GAs outside of search problems. Are you suggesting weighting the game elements with GAs?

I'm not familiar with TBS community-developed AIs, though it still wouldn't surprise me if some were better than the ones the games came with. Good isn't about beating master players; it's about being a more respectable challenge and doing fewer stupid things. The Greentea AI for SC2 is quite a bit better than the one that came with the game. Making an AI for FE that's significantly better than the current one wouldn't be hard, just time-consuming.

While optimality is hard, satisficing isn't, and many games are less complex in plan space than something like go, because there aren't weird, peculiar interactions that happen many turns down the road. There's actually a lot more feasible generalization in this than in something like chess, I'd say.

I'm well aware of the problem with expert systems; it just gets a lot worse when the rules of the game change, as they often don't update the AI's weights for that. Hopefully some day I'll get a chance to make a good AI for a game; any openings, Frogboy?

On genetic algorithms, I'm suggesting changing some of the AI weights with them and seeing which ones do well, especially for the more important AI weights. I'm not sure how that matches up with what you said.
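A rough sketch of what "change some of the AI weights and see which variants do well" could look like, assuming a fitness signal such as win rate reported back from player games (all names, numbers, and the toy fitness function here are hypothetical, not anything from FE's actual systems):

```python
# Hypothetical sketch: evolve AI priority weights with a simple genetic algorithm.
# The fitness function is a stand-in for "win rate reported back over the internet".
import random

def mutate(weights, sigma=0.1):
    """Return a copy of the weight dict with small random perturbations."""
    return {k: max(0.0, v + random.gauss(0, sigma)) for k, v in weights.items()}

def evolve(base_weights, fitness, generations=20, population=16, survivors=4):
    pool = [mutate(base_weights) for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pool, key=fitness, reverse=True)
        parents = ranked[:survivors]  # keep the best-performing variants
        pool = parents + [mutate(random.choice(parents))
                          for _ in range(population - survivors)]
    return max(pool, key=fitness)

# Toy example: pretend some 'ideal' weights exist and reward closeness to them.
ideal = {"expand": 1.5, "defend": 1.0, "research": 0.8}
fitness = lambda w: -sum((w[k] - ideal[k]) ** 2 for k in ideal)
print(evolve({"expand": 1.0, "defend": 1.0, "research": 1.0}, fitness))
```

In a deployed version the fitness evaluation would be the slow, noisy part, which is why spreading it across thousands of player games is attractive.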

I'm not familiar with TBS community-developed AIs, though it still wouldn't surprise me if some were better than the ones the games came with. Good isn't about beating master players; it's about being a more respectable challenge and doing fewer stupid things. The Greentea AI for SC2 is quite a bit better than the one that came with the game.

I am approaching this from an optimality standpoint because it is more interesting. From a commercial point of view, such a solution is completely impractical; from an appreciation of how difficult some machine learning tasks are, I find it fascinating. I wouldn't feel right comparing RTS AI with TBS AI, because computers have an enormous edge on speed in RTS.

The time element muddies the waters too much. A solution that beats every human player in RTS wouldn't have to be close to optimal, because humans don't have time to think while they are pressing hundreds of keys a minute, unless the maps are always the same and the game isn't interesting from a strategic standpoint.

Making an AI for FE that's significantly better than the current one wouldn't be hard, just time-consuming.

I am not sure what you mean by significantly better and time-consuming. I was focusing on ideals because they are more interesting than commercial needs. Even if the goal is just significant improvement, the problem is hard.

Hard in the sense of coordinating many complex abstractions in a problem space so large you may as well call it infinite, because there is not enough of anything physical in the universe to give a meaningful comparison. I bet there are more combinations of just the starting variables (just the default opponents and the resource frequency) and the random maps generated from them than there are sub-atomic particles in the universe.
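A back-of-envelope illustration of that kind of claim, using entirely made-up numbers (a modest map size and a small count of tile contents, nothing taken from FE's actual generator):

```python
# Illustrative arithmetic only; map size and tile-type count are assumptions.
tiles = 64 * 64                     # a modest 64x64 random map
tile_types = 10                     # pretend each tile holds one of 10 terrains/resources
map_configs = tile_types ** tiles   # 10^4096 possible boards before any game state
particles = 10 ** 80                # rough estimate of particles in the observable universe
print(map_configs > particles)      # True, by thousands of orders of magnitude
```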

And that is just the board setup.

While optimality is hard, satisficing isn't, and many games are less complex in plan space than something like go, because there aren't weird, peculiar interactions that happen many turns down the road. There's actually a lot more feasible generalization in this than in something like chess, I'd say.

The thing is, there is no plan space in chess or go AI solutions. There is no need, because the board is always the same, the rules are simple, and there are a manageable number of factors to weight to make heuristics. You only need one move on a turn. So they can get by with expert tweaking of the weights and just do efficient search during the game. I think we disagree about how complex games like FE, Civ, GalCiv, and MoM are.

I have played club chess weekly since I was in high school. FE is more of a visceral kind of fun than chess, so it is hard to compare the work of chess calculation to FE strategy calculation. But I have no doubt FE is more complex than chess. I firmly believe making a 'Deep Blue' of FE would be harder than any task I know humans have accomplished.

Chess is at the human-hard level, meaning playing it perfectly is outside human ability.

It is not beyond the computer-hard level (computers play it better than any human), and go won't stay beyond the computer-hard level forever either.

I think people would see FE as human-hard if it were opened up to multiplayer. Think about it as if you were playing eight humans instead of eight AIs. It is a war game with multiple opponents, which from a game-theory perspective makes it unfathomably complex. The prisoner's dilemma is hard to wrap your mind around the first time you come across it, yet that is a two-player game with four outcomes. Nearly every turn after a certain point in FE is a chance to cooperate with or to sabotage someone; you can do both at the same time and in varying degrees. This doesn't take into account the spells, unit design, money management, the hostile world, technology decisions, expansion, and the logistics of moving armies for attack and defense.

I really stressed how hard the problem is because there is no way to design a good solution for a problem you don't appreciate.

People complain about how stupid the AI is and how easy it would be to fix because they don't appreciate how hard the problem is. I think it is difficult to see how hard it is because the AI isn't challenging, we play for fun, the game is beautiful, the RPG aspects are addictive, etc.

It is too fun to seem like a complex problem the way chess does. But once you look at it that way, chess AI, with its opening books and endgame databases, fairly simple heuristics, and alpha-beta cutoffs that extend search on promising lines, is child's play compared to developing a learning solution. Learning the formula for learning is a problem so mind-boggling that few understand just how difficult it is, much less how to solve it. It makes us fear Skynet, when computers have yet to do anything that remotely looks like the learning humans do. I don't think humans are Turing machines, and I don't think any Turing machine will ever exist that really learns and isn't an extension of the human mind.

Sorry for the rant; I appreciate your comments. I am sorry if I came off as harsh.


I just feel like I am the only one who sees that the problem is this hard.

The reason you seem to be the only one seeing the problem as so hard is that you're looking at a different problem than everyone else. You're looking at advanced artificial intelligence capable of complex learning and optimal gameplay. Everyone else here is looking at practical solutions to the question of improving AI in games people play, which is far easier and only requires simple kinds of learning, not complex ones. And I'm a computer scientist; I well understand how complex AI can be, but there's a lot of room to improve game AIs using very simple techniques, and since I'm looking at this from a practical perspective, I want to implement those first, then deal with more complicated challenges. While the problems are certainly very hard at a theoretical level, how good game AIs would be if they had the same level of effort put into them as something like chess is rather unclear, as there aren't actual examples to look at. Offer's still open to make FE AI, Frogboy!

The thing is, there are so many 'different' experienced players playing computer games that it's impossible to make an AI that will challenge ALL of them.

Some of us are very experienced, intelligent humans, and it would be very hard to make an AI challenging to us without making it unfun and frustrating to the 'normals' out there who only play these games 'casually'. I for one gave up on an intelligent AI a long time ago and was willing to settle for sliders and increases in resource and starting handicaps to make these games more challenging. Some developers have done this. Stardock did it with Galactic Civilizations II, and I'm pretty sure that over time this AI will achieve that type of challenge and fun level as well.

Some notable games of the past with challenging AIs:

'War of the Lance' by SSI, which first came out on the Commodore 64; Battles of Napoleon by the same company; Centurion: Defender of Rome (one of the most challenging AIs I ever played against on just normal settings); SPARTAN by Slitherine at v1.013, before they dumbed it down with 1.017 because some whined the AI was too hard (too hard! can you believe that, complaining about it being TOO HARD!?). Some say the Ageod games of the Civil War and the War of Independence have a strong AI; I never could get into the game engine, so I can't say myself. Panzer Command: Ostfront, recently released by Matrix Games, as well as Command Ops games like Battles from the Bulge and Conquest of the Aegean. Tin Soldiers: Caesar has a pretty decent challenge to it, although this comes from the AI being STACKED with more units more so than from optimum gameplay. The original Medieval: Total War and more recently Shogun 2: Total War; Sid Meier's Railroad Tycoon, Transport Tycoon Deluxe, and Sid Meier's Alpha Centauri; and a more recent Slitherine chess-like game, Medieval Conquest, has a very good AI and AI opponents. Empire Deluxe from Killerbee Software is exceptional for a beer-and-pretzels game.

I wish all developers would realize that there is a more intelligent group of gamers out there that needs a very strong AI challenge.

Not necessarily a strong-playing AI so much as the ability to make the AI as strong as they are willing to take on. Being able to input the amount of handicap the AI gets, as well as its resources, would be great for PC gaming: from the amount of resources, to the amount of combat die-roll adjustments or rerolls (sort of like the old Combat Mission quick-battle setups allowed), to whether the AI has fog of war on or not.

The other thing is that with large hard drives these days, I don't see why they can't start programming LEARNING AIs that RECORD how the player plays and use that data to make decisions based on the patterns the player plays, thus making the human learn to figure out even more ways to play and win. All too often gamers figure out ONE way to defeat the AI and then use it over and over, time after time. Learning AIs would prevent this.

Oh, one game that uses player designs was SPORE, and I was surprised the first time I encountered one of my first creations in another game I was playing.

My original creation beat the crap out of me, lol. Now THAT was impressive game programming and AI usage. I understand that Stardock uses the same principle in this game with the players' custom builds. I think that is GRRRRRRRRRRRREAAAAAAAAT!

The reason you seem to be the only one seeing the problem as so hard is that you're looking at a different problem than everyone else. You're looking at advanced artificial intelligence capable of complex learning and optimal gameplay.

Everyone else here is looking at practical solutions to the question of improving AI in games people play, which is far easier and only requires simple kinds of learning, not complex ones.

I think the practical solutions only seem simple from the player's perspective; they are a nightmare from the design perspective. For instance, there are a lot of threads which point out a specific poor play the AI makes. One 'practical' solution would be to make a list of these and, for each one, either tweak priority values to make it not happen or implement the fix as a special rule independent of the generalized priority system. The problem with the first way is that the priority-value fixes may cause other bad behavior that didn't exist before, and it may be hard to figure out what values would actually fix the problem. The problems with the second way are that special rules must be maintained as the game's rules change; they are work to create yet each solves only one problem; keeping up with all the special rules you made (for error-tracking and maintenance) is extra complexity; and the rules likely aren't portable to the other strategy games Stardock will make. It can be done that way, but that way is also 'practical' in the sense of being a nightmare to do, maintain, and redo for the next game. There are also so many opportunities to make bad decisions that, without automated improvement, there will probably be no end to the list of bad plays.

I think this is why most strategy game devs go to cheating AIs; it is much easier to make a facade of good play than to make an algorithm for it. Exploring the optimal solutions is useful for finding practical learning solutions that may not be Deep FE, but may improve things enough to save time over coding specific behavior and trial-and-error with priority values, and be usable over and over again in new games.

Here are a couple of ways to apply learning that seem promising to me. They are both hard to implement well. Here are some high-level sketches.

Method 1 (online and offline): Strategy games keep statistics over the course of the game, usually to show the player a nice bar graph of the game's ebb and flow. The first idea would be to use a more robust statistics system to improve the priority system. I wouldn't be surprised if they do something like this already, but make the changes to the weights themselves by hand. Let's take an example system, say the city-settling mechanic.

It is easy to know how this went: how many cities did the AI get to build? But it is not easy to determine why it went well or why it went wrong. So keep track of settlers produced, on what turn they were produced, the number of settleable areas found, the rate at which the world was explored, how many settlers were killed, etc. It could go on and on. This could be done for each prioritizing mechanism.

Keep statistics on each tactical battle, each unit built, each technology used, etc. - everything that could be relevant to prioritizing behavior. The online aspect would adjust priorities according to what is going well and what isn't; this should be geared toward adapting priorities that are specific to the map situation, for instance threat management more than technology prioritization. The offline aspect would adapt priorities that deal with more generalized play, and adjust weights that are consistently adapted in-game to better values so that fewer in-game adaptations need to be made.

How to adapt the priorities based on the statistical measures relevant to them? This is not easy. It is a hard design problem, but it would have the benefit of being more flexible.
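A very rough sketch of the online half of that idea, assuming hypothetical statistic names, target values, and a learning rate (this illustrates the adjustment loop only, not anything from the game's actual systems):

```python
# Hypothetical sketch of the "online" adjustment idea: nudge a priority weight
# when the statistic linked to it falls short of some target. All names, targets,
# and the learning rate are made up for illustration.

def adjust_priorities(priorities, stats, targets, rate=0.05):
    """Nudge each priority up or down based on how its linked statistic is doing."""
    adjusted = dict(priorities)
    for key, target in targets.items():
        observed = stats.get(key, 0.0)
        # Falling short of the target pushes the related priority up, and vice versa.
        adjusted[key] = max(0.0, adjusted[key] + rate * (target - observed))
    return adjusted

# Example: settlers keep dying, so the weight on escorting settlers creeps upward.
priorities = {"settle_expansion": 1.0, "escort_settlers": 0.5}
stats      = {"settle_expansion": 0.8, "escort_settlers": 0.1}  # e.g. fraction of settlers surviving
targets    = {"settle_expansion": 1.0, "escort_settlers": 0.9}
print(adjust_priorities(priorities, stats, targets))
```

The hard part, as noted above, is deciding which statistic should drive which priority and by how much; the loop itself is trivial.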

It would also be a separate system that interacts with an existing one, so in any given game area it could be linked to or unlinked from the default priority system until it actually improves on it.

Method 2 (offline only): There is already a prioritizing system in place. What if you reverse-engineer it (offline) to find the weights the priorities would have had to be for the AI to do what the human player did? You may have to break the game into epochs to make it easier to manage, and you would need to search for weights that correspond to the human actions. This could be done with gradient descent or hill-climbing against whatever values already exist. Maybe add some kind of MP aspect, even if only for internal use, play each other, and apply statistical learning to find out what weights match your behavior.

Play in different ways, aggressive and turtling, and you have different AI personalities. It may require that additional or different priorities be created to account for why the humans did certain things.
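A minimal sketch of the hill-climbing variant of that search, with a made-up "how often do these weights reproduce the human's recorded choices" measure standing in for whatever the real match criterion would be:

```python
# Hypothetical sketch of "search for weights that match human play" via hill-climbing.
# A replay is a list of (candidate_actions, action_the_human_took, per-action scores).
import random

def agreement(weights, replay):
    """Fraction of recorded decisions where these weights pick the human's action."""
    hits = 0
    for candidates, human_choice, scores in replay:
        pick = max(candidates, key=lambda a: weights.get(a, 0.0) * scores[a])
        hits += (pick == human_choice)
    return hits / len(replay)

def fit_weights(start_weights, replay, steps=1000, sigma=0.05):
    best, best_score = dict(start_weights), agreement(start_weights, replay)
    for _ in range(steps):
        trial = {k: max(0.0, v + random.gauss(0, sigma)) for k, v in best.items()}
        score = agreement(trial, replay)
        if score > best_score:  # keep a perturbation only if it explains the human better
            best, best_score = trial, score
    return best
```

Starting from the shipped weights, each accepted perturbation is one that reproduces the recorded human decisions a little more often; different players' replays would yield the different 'personalities' mentioned above.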

It also may take many games until some stability and generalization is found in the weights. The advantage of this approach is that weight-finding is turned into a search problem, which is much less complex conceptually. The learning aspect is all in the human, in a very natural form: instead of the expert having to guess what weights he or she puts on something, the expert just plays well and the machine searches for weights that match that. It can also be designed so that the algorithm is portable at a high level from game to game, with the specifics (bows or lasers) at a lower level of abstraction.

But if Stardock makes it possible for a player-designed AI to play the game, that opens up a whole new kind of gameplay (and modding). We could have AIs that are only capable of playing a specific sovereign, or a specific faction, or a specific map size, or whatever. We could watch player-created AIs compete with each other.

If AIs can read from files, this could even be a basis for someone to make a 'play by mail' mod for FE. The risk, of course, is that for this to be good you need people interested in using the mechanisms, and I do not know how to judge that.

Expert systems have the problem that weighting does not always produce the correct decision. They are quite limited in their capacity to deal with human ingenuity. The advantage humans have is focus. Instead of evaluating all possible options every turn in the hope of turning up something good, we stick to one approach and then adapt if we run into something bad - for example, I might approach a city I want to capture but then back off if it's too heavily guarded, only coming back once I increase the strength of my army. Scouting in force has its advantages and disadvantages - if I find a weakness I can take advantage of it immediately, but if I don't, then my military strength is in the wrong place, and a scout unit could have told me that. I'm not saying that the AI doesn't do that, but it lacks the focus of the human approach.

I might be determined to take a particular city regardless of the losses or the defeats I might suffer, because it will be worth it, while the AI tends to follow the path of least resistance. Which is fine and all, but sometimes the path of least resistance offers very little gain.

Case in point: I leave an undefended city wide open with an army or two lurking nearby.

The AI takes the city; next turn I take the city back and their troops are all gone. Then I leave the city and the trap is reset.