MMORPG development and server side logic, update all Game Objects? - c#

I have designed multiplayer games before, but now I want to create an MMORPG architecture as a learning challenge. I want to go as far as simulating hundreds (or a couple of thousand) concurrent players on a single server.
So far so good, except that right now I am struggling to figure out a good way to update all game objects on the server as often and as fast as possible.
By game objects I mean all players, mobs, and bullets.
The problem is that all players, mobs, and bullets are stored in collections in server-side memory for faster processing, and iterating through all of them to check for collisions, update health, update movement, etc. takes way too long.
Let's say I have 1,000 players and 10,000 mobs in the whole world, and every player and creature is responsible for creating exactly 5 other game objects (no more, no less), such as bullets.
That would give (1,000 + 10,000) * 5 = 55,000 game objects in a collection.
Iterating through all of those objects to update them takes forever (a couple of minutes) on a dual-core i5 with Hyper-Threading and 4 GB of RAM. This is wrong.
When iterating, the code looks like this (pseudocode):
for (int i = 0; i < gameobjects.Count; i++)
{
    for (int j = 0; j < gameobjects.Count; j++)
    {
        // logic to verify whether gameobjects[i] is in range of gameobjects[j]
    }
}
As an optimization, I am thinking about dividing my game objects into different zones and collections, but that wouldn't fix the underlying problem: I would still need to update all objects several times per second.
How should I proceed to update all game objects on the server side? I have searched hard for relevant game design patterns but have found nothing so far. :(
Thanks in advance!

I would change the design completely and implement an event-based design. This has many advantages; the obvious one is that you only need to update the objects that are actually being interacted with, and in an MMO the majority of game objects will always be idle, or not seen at all.
There is no reason to calculate objects that are not visible on any player's screen. That would be insane and require a server farm you most likely cannot afford. Instead, you can try to predict movement, or store a list of all objects that are currently not being interacted with and update those less frequently.
If no player can see an object, you can teleport it over large distances instead of having it travel smoothly: essentially, you move the unit in large jumps within the confined area it is allowed to move in, making it look as if the object moves freely even while nobody can see it. Usually this would be triggered as an event when a player enters or leaves a zone.
You can achieve this by calculating the time since the last update and predicting how far the object would have traveled had it been visible to a player. This is especially useful for objects or NPCs that have a set route, as it makes the calculation much simpler.
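A sketch of that catch-up idea (the names PatrolNpc and Advance are illustrative, not from any particular engine): instead of ticking the NPC every frame, store the time of its last update and, when a player enters the zone, advance it along its route by speed times elapsed time.

```csharp
// Sketch of "catch-up" simulation for an off-screen NPC on a fixed patrol
// route. Assumes consecutive waypoints are distinct points.
using System;
using System.Collections.Generic;

public class PatrolNpc
{
    private readonly List<(double X, double Y)> route;
    private readonly double speed;          // units per second
    private int targetIndex;                // waypoint the NPC is heading to
    public (double X, double Y) Position;

    public PatrolNpc(List<(double X, double Y)> route, double speed)
    {
        this.route = route;
        this.speed = speed;
        Position = route[0];
        targetIndex = route.Count > 1 ? 1 : 0;
    }

    // Instead of updating every tick, call this once when a player enters the
    // zone, passing the seconds elapsed since the last update.
    public void Advance(double elapsedSeconds)
    {
        if (route.Count < 2) return;              // nothing to patrol
        double budget = speed * elapsedSeconds;   // distance we can still cover
        while (budget > 0)
        {
            var target = route[targetIndex];
            double dx = target.X - Position.X, dy = target.Y - Position.Y;
            double dist = Math.Sqrt(dx * dx + dy * dy);
            if (dist <= budget)                   // reach the waypoint, continue
            {
                Position = target;
                budget -= dist;
                targetIndex = (targetIndex + 1) % route.Count;
            }
            else                                  // stop partway along this leg
            {
                Position = (Position.X + dx / dist * budget,
                            Position.Y + dy / dist * budget);
                budget = 0;
            }
        }
    }
}
```

The modulo wrap makes the route a closed patrol loop; for an out-and-back route you would walk the index back and forth instead.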

Your code is running that slowly not just because it checks all N objects, but because it checks all possible pairs of objects, which takes N^2 calculations: 55,000^2 = 3,025,000,000 in your example.
One way to reduce the number of checks is to put the objects in your game world into a grid, so that objects that are not in the same or adjacent cells cannot possibly interact with each other.
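A minimal sketch of that grid idea (the class and method names are illustrative): objects are bucketed into cells by position, and a range query only looks at an object's own cell and its eight neighbours.

```csharp
// Minimal spatial-hash sketch: objects are bucketed by cell, and range checks
// only consider the query point's own cell and its 8 neighbouring cells.
using System;
using System.Collections.Generic;

public class SpatialGrid<T>
{
    private readonly double cellSize;
    private readonly Dictionary<(int, int), List<T>> cells = new();

    public SpatialGrid(double cellSize) => this.cellSize = cellSize;

    private (int, int) CellOf(double x, double y) =>
        ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));

    public void Add(T obj, double x, double y)
    {
        var key = CellOf(x, y);
        if (!cells.TryGetValue(key, out var bucket))
            cells[key] = bucket = new List<T>();
        bucket.Add(obj);
    }

    // Everything that could be within one cell-size of (x, y).
    public IEnumerable<T> QueryNeighbours(double x, double y)
    {
        var (cx, cy) = CellOf(x, y);
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                if (cells.TryGetValue((cx + dx, cy + dy), out var bucket))
                    foreach (var obj in bucket)
                        yield return obj;
    }
}
```

Pick the cell size to be at least the largest interaction radius, so that any two objects that can interact are guaranteed to be in the same or adjacent cells; rebuilding (or incrementally updating) the buckets each tick turns the O(N^2) pair scan into roughly O(N*k), where k is the average bucket population.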
Also, your current code checks each interaction twice. You can easily fix this by starting the inner loop at i + 1, which also avoids checking an object against itself:
for (int i = 0; i < gameobjects.Count; i++)
    for (int j = i + 1; j < gameobjects.Count; j++)

Looping over 55,000 objects shouldn't be too slow. Evidently you are doing too much work, too often, over those objects, and probably work that shouldn't always be done.
For example, if there are no players around a mob, does it really need to be simulated?
(if a tree falls in a forest and there's nobody around, does it really make a sound?)
Also, many objects might not need to be updated on every loop. Players, for instance, could be left to the client to calculate and only be "verified" once every 1-2 seconds. Offloading all of the players' collision checks to the client would make your server workload much easier to handle; the same goes for players' bullets or raycasts. In return, it also makes the game much more fluid for the players.
Do mobs following a path need to be tested for collision, or are the path's nodes enough?
Testing every object against every other object is terrible. Do all mobs have to be tested against all other mobs, or do only specific types or factions need testing? Can you split your world into smaller zones, each of which tests only the mobs within it against the objects also in it?
There is a huge amount of work in MMO server code to make it perform properly. The optimizations are sometimes insane, but as long as it works...

Related

Should I use SceneManager.LoadScene("Scene Name") for an infinity game?

I have only one scene (it is an infinity game with one level).
Therefore I cannot find the differences documented anywhere, and I would like to understand whether there is any performance difference (or anything else) between these two calls:
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
SceneManager.LoadScene("Scene Name");
I would have thought
SceneManager.LoadScene(0);
would be the most efficient way, because ints take up less memory than strings. Also, if you want to avoid the whole game pausing while the level reloads, you could use
SceneManager.LoadSceneAsync(0);
This loads the scene asynchronously in the background, so the game can carry on while the level is reloading.

C# GMap.Net identifying position on a route

My question is not so much code-oriented, it's more theoretical.
I'm currently working on an application for a sporting event. The goal is to be able to track competitors on a map while they are moving along a predetermined route.
Currently I have already been able to map the route and I'm able to place markers on the different locations using GMap.NET.
However I have two big challenges that I don't know how to tackle.
1 Calculating the distance and the (estimated) time until the competitors reach the finish.
So for every competitor carrying a tracker, I would like to place him/her on the map and calculate the distance to the finish. In theory that should be easy: every competitor will always be between two waypoints, and when I get the tracker's position I can calculate the distance to the next waypoint; from there I add the distance between each subsequent pair of waypoints, giving the total remaining distance to the finish.
But that's just theory; I have no clue how I could implement this.
Is there a way to know between which two waypoints a competitor is?
And what should I do if, for example, there is a part of the route where the competitor goes up to a turning point at the end of the road and then comes back along the same road, just on the other side? How would I know whether the runner is heading toward the turning point or already on the way back from it?
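One common approach to the first challenge can be sketched as follows (plain 2D math, not actual GMap.NET calls; for real GPS coordinates you would substitute great-circle distances): project the tracker position onto every route segment, keep the closest one, and sum the remaining segment lengths. This sketch does not by itself resolve the out-and-back ambiguity; for that you would additionally remember the last matched segment index and only search forward from it.

```csharp
// Sketch: project the tracker position onto each route segment, pick the
// closest, then remaining distance = distance to that segment's end plus the
// lengths of all later segments. All names are illustrative.
using System;

public static class RouteMath
{
    public static double Distance((double X, double Y) a, (double X, double Y) b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    // Returns the index i of the segment route[i] -> route[i+1] nearest to p.
    public static int NearestSegment((double X, double Y)[] route, (double X, double Y) p)
    {
        int best = 0;
        double bestDist = double.MaxValue;
        for (int i = 0; i < route.Length - 1; i++)
        {
            var a = route[i];
            var b = route[i + 1];
            double dx = b.X - a.X, dy = b.Y - a.Y;
            double len2 = dx * dx + dy * dy;
            // Parameter t in [0, 1] of the closest point on the segment.
            double t = len2 == 0 ? 0 :
                Math.Clamp(((p.X - a.X) * dx + (p.Y - a.Y) * dy) / len2, 0, 1);
            var proj = (X: a.X + t * dx, Y: a.Y + t * dy);
            double d = Distance(p, proj);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static double RemainingDistance((double X, double Y)[] route, (double X, double Y) p)
    {
        int seg = NearestSegment(route, p);
        double total = Distance(p, route[seg + 1]);     // to end of current segment
        for (int i = seg + 1; i < route.Length - 1; i++)
            total += Distance(route[i], route[i + 1]);  // all later segments
        return total;
    }
}
```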
2 Working with loops inside the route
This is an even more complicated task. The route contains two sections that the competitors have to do twice. They are large loops, not small ones. In order to calculate the distance correctly, I need a way to know whether a competitor is in the first pass of the loop or the second.
I was thinking I could use a similar approach to the issue above, i.e. designate two waypoints at which I register the time the competitor passed.
If they pass again, I can compare that time with the saved time, and if there is enough time in between, conclude that they are on the second pass of that section.
But again, that's theory. How would I do this in practice? And how do I calculate the distance properly? Do I specify two waypoint indexes and count the distance between those indexes twice?
I would love to hear some of your insights on this.
Thanks,

Instancing data management

I'm trying to draw a few thousand particles using instancing. It works and it's fast, but I have one bottleneck that slows the whole program down.
My Particle class is similar to this:
public class Particle
{
public Vector2 Position;
//More data not used for drawing
//....
}
Now in my DrawLoop() I have something like this:
Vector2[] instanceData = new Vector2[numParticles];
public void Draw()
{
for(int i = 0; i < numParticles; ++i)
instanceData[i] = Particles[i].Position; //THAT'S the slow part
instanceBuffer.SetData(instanceData);
//Now draw VertexBuffer using instancing
//...
}
I have tried using Parallel.For, but it doesn't speed things up enough, since I have around 8,000 particles. I also looked at the particle-system example from MSDN, but their Particle struct contains only the data for drawing, and the positions are calculated in the shader. However, I need additional data for several algorithms.
I can't think of a class design that would let me avoid assigning the particle positions to the array every frame.
Since this problem ultimately arose from the data structures being used, let me present you with a common alternative to the linked list for scenarios such as this one.
Linked lists are generally not a good idea for storing particles for two reasons: one, you can't randomly access them efficiently, as you discovered here; and two, linked lists have poor locality of reference. Given the performance requirements of particle systems, the latter point can be killer.
A standard list has much better locality of reference, but as you've discovered, adding and removing items can be slow, and this is something you do commonly in particle engines.
Can we improve on that?
Let's start with something even more basic than a list, a simple array. For simplicity's sake, let's hard-cap the number of particles in your engine (we'll redress this later).
private const Int32 ParticleCount = 8000;
private readonly Particle[] particles = new Particle[ParticleCount];
private Int32 activeParticles = 0;
Assuming you have room, you can always add a particle to the end of the array in constant time:
particles[activeParticles++] = newParticleData;
But removing a particle is O(n), because all of the particles after it need to be shifted down (note that arrays have no RemoveAt; the shift has to be done by hand):
var indexOfRemovedParticle = 12;
// shift everything after the removed slot down by one
Array.Copy(particles, indexOfRemovedParticle + 1,
           particles, indexOfRemovedParticle,
           activeParticles - indexOfRemovedParticle - 1);
activeParticles--;
What else can we do in constant time? Well, we can move particles around:
particles[n] = particles[m];
Can we use this to improve our performance?
Yes! Change the remove operation to a move operation, and what was O(n) becomes O(1):
var indexOfRemovedParticle = 12;
var temp = particles[indexOfRemovedParticle];
particles[indexOfRemovedParticle] = particles[activeParticles - 1];
particles[activeParticles - 1] = temp;
activeParticles--;
We partition our array: all of the particles at the beginning are active, and all of the particles at the end are inactive. So to remove a particle, all we have to do is swap it with the last active particle, then decrement the number of active particles.
(Note that you need the index within the array of the particle to remove. If you have to go searching for this, you end up reverting to O(n) time; however, since the usual workflow for particles is "loop through the whole list, update each particle, and if it's dead, remove it from the list," you often get the index of dead particles for "free" anyway.)
Now, this all assumes a fixed number of particles, but if you need more flexibility you can solve this problem the same way the List<T> class does: whenever you run out of room, just allocate a bigger array and copy everything into it.
This data structure provides quick inserts and removals, quick random access, and good locality of reference. The latter can be improved further by making your Particle class into a structure, so that all of your particle data will be stored contiguously in memory.
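Following on from that, one way to attack the original bottleneck (the per-frame copy of positions into instanceData) is a structure-of-arrays layout: keep the positions in their own array from the start, so the draw step can hand that array to the instance buffer directly. This is only a sketch; the Vector2 type and the commented-out SetData call stand in for whatever your engine provides.

```csharp
// Structure-of-arrays sketch: positions live in their own array, so the draw
// step needs no per-frame copy. The swap-remove keeps parallel arrays in sync.
using System;

public struct Vector2
{
    public float X, Y;
    public Vector2(float x, float y) { X = x; Y = y; }
}

public class ParticleSystem
{
    public readonly Vector2[] Positions;   // uploaded to the GPU as-is
    public readonly float[] Lifetimes;     // "more data not used for drawing"
    public int ActiveCount;

    public ParticleSystem(int capacity)
    {
        Positions = new Vector2[capacity];
        Lifetimes = new float[capacity];
    }

    public void Spawn(Vector2 position, float lifetime)
    {
        Positions[ActiveCount] = position;
        Lifetimes[ActiveCount] = lifetime;
        ActiveCount++;
    }

    // Swap-remove, as described above, applied to each parallel array.
    public void RemoveAt(int index)
    {
        ActiveCount--;
        Positions[index] = Positions[ActiveCount];
        Lifetimes[index] = Lifetimes[ActiveCount];
    }

    public void Draw()
    {
        // Engine-specific call, assumed here and left commented out:
        // instanceBuffer.SetData(Positions, 0, ActiveCount);
    }
}
```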

How do I create a test to see if my A.I. is perfect?

I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move.
I also made a function that loops through all possible plays against the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make every possible move, calling itself recursively with a new board for each one.
I do this for when the A.I. goes first and for when the other player goes first, and add the results together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is: how do I maximize the number of wins? I need to compare this statistic against something, but I can't figure out what.
My feeling is that the stats you're quoting are already pretty good. Two expert Tic-Tac-Toe players will always end in a tie, and there is no way to force a win if your opponent knows how to play the game.
Update
There's probably a more elegant way to prove the correctness of your A.I., but the most straightforward approach is brute force: enumerate all possible board positions as a game tree and prune the branches that lead directly to a loss. Then, for each branch in the tree, you can work out the probability of a win resulting from following that branch. Finally, test your A.I. on each board position and make sure it picks the branch with the highest probability of a win.
You should start by observing that move 9 is always forced: there is only one empty square left on the board. Move 8 can be considered forced as well, because after seven moves exactly three situations are possible:
O can win on the next move, in which case it takes the win
Placing an X in either of the two remaining squares wins the game for X, in which case O has lost regardless of its next move
X has zero or one path to victory, in which case O blocks to force a draw
This means that the game is over after at most seven moves.
Also observe that there are only three opening moves: the center, a corner, or a side. It does not matter which of the four corners or sides you take, because the board can be rotated to match a "canonical" opening (the upper-left corner or the middle of the top side).
You can now build your state analysis code. Starting with each of the three possible openings, search with backtracking up to six additional moves using all squares that are open by the time you make the move. After each move, analyze the position to see if X or O has already won; mark wins by X as Wx, and wins by O as Wo. The remaining positions are undecided.
Do not explore positions after Wx or Wo: simply return to the prior step, reporting the win by the corresponding side.
When you reach the seventh move, statically analyze the position to decide which of the three situations described above applies, marking the position as a Wx, a Wo, or a Draw.
Now to the most important step: when you backtrack to move N-1 made by player p,
If one of the moves you try leads to a position at the next level that is Wp, declare the current position Wp as well.
If all of the moves you try lead to a win for the opponent, declare the current position a win for the opponent.
Otherwise, declare the current position a Draw, and return to the prior level.
If you do this right, all three opening positions will be classified as a Draw. You should see some forcible wins after three moves.
Running this procedure classifies each position as a Wx, Wo, or a Draw. If your AI gets you a win for the player p in a position classified as Wp, or gets you a draw in a position classified as a Draw, then your AI is perfect. If, on the other hand, there are positions that are statically classified as Wp in which the AI gets p only a draw, then your AI engine needs an improvement.
Additional reading: you can find additional insights into the game in this article describing methods of counting possible games of Tic-Tac-Toe.
What you're doing is more linear optimisation than A.I. I won't describe all the linear algebra of Tic-Tac-Toe here; there are plenty of examples on the net.
Using linear algebra, you don't have to prove anything about your results (no searching for magic statistics, etc.), because your results can be validated by simply injecting the solution back into the original equation.
In conclusion, there are two cases:
You're using simple "deduction" logic (which is in reality a non-formal linear-algebra formulation): we can't find a ready-to-use method for checking your results without looking at your code. EDIT: as Andrew Cooper suggests, brute force is a ready-to-use method that works without seeing your code.
You're using a formal linear-algebra formulation: your results can be validated by simply injecting the solution back into the original equation.
The only thing you can compare is one potential move against another. Whenever it's the computer's turn to make a move, have it play out all possible games from that point on, and choose the move that leads to the highest number of wins. You can't always win, but you can give the opponent more chances to make a bad move.
Or, you can always try the tic tac toe algorithm in the link below:
Tic Tac Toe perfect AI algorithm: deeper in "create fork" step
Given that we know that
one cannot force a win, and
with optimal strategy one cannot lose,
your AI has already proven to be optimal if
you searched the full game tree when playing against it, and
your AI is deterministic (if it rolled dice at certain stages, you would have had to play against all combinations).
It did not lose, and you cannot demand that it win. The wins it did achieve do not count, because your full-tree search included bad moves as well. That's all; you are done.
Just for fun: if you had no a priori knowledge about the chances to win/draw/lose a game, a common strategy would be to persistently record lost positions. In the next game you would try to avoid them; if you cannot avoid moving into a lost position, you have found another one to record. This way you can learn not to lose against a certain strategy (if that is possible), or to find an error in your own strategy.
In order for your tic-tac-toe AI to be proven correct, it needs to satisfy two conditions:
It must never lose.
When the opponent deviates from optimal play, it must win.
Both conditions derive from the fact that when both players play optimally, tic-tac-toe always ends in a draw.
One automatic method of determining whether your program fulfills these two conditions is to construct what is called a "minimax tree" of every possible tic-tac-toe game. The minimax tree completely characterizes the optimal move for each player, so you can use it to see if your program always selects the optimal move. This means that my answer essentially boils down to, "Write a perfect AI, and then see if it plays the same way as your own AI." However, the minimax algorithm is useful to know, and to my knowledge, this is the only way to test if your AI actually plays optimally.
Here is how the minimax algorithm works (For a gif explanation, see Wikipedia. There's also some pseudocode in the Wikipedia article on minimax.):
Beginning with the tic-tac-toe setup under consideration, construct a tree of all possible subsequent moves, with the initial position at the root node. At the lowest level of the tree you have all of the possible final positions.
Assign a value of +1 to all final positions in which the first player wins, a value of -1 to all final positions in which the second player wins, and a value of 0 to all ties.
Now we propagate these values up the tree to the root node, assuming that each player plays optimally. On the last move, Player One will select any move that has a value of +1, i.e. a move that wins the game. If no move has a value of +1, Player One will select a move with value 0, tying the game. Thus, nodes where it is Player One's move are assigned the maximum value of any of their child nodes. Conversely, when it is Player Two's move, they prefer moves with a value of -1, which win them the game. If no winning moves are available, they prefer to tie the game. Thus, nodes where it is Player Two's turn are assigned a value equal to the minimum of their child nodes. Using this rule, you can propagate values from the deepest level in the tree all the way up to the root node.
If the root node has a value of +1, the first player should win with optimal play. If it has a value of -1, the second player should win. If it has a value of 0, optimal play leads to a draw.
You can now determine, in each situation, whether your algorithm selects the optimal move. Construct a tree of all possible moves in tic-tac-toe, and use the minimax algorithm to assign +1, 0 or -1 to each move. If your program is Player One, it is optimal if it always selects the move with the maximum value. If it plays as Player Two, it is optimal if it always selects the move with the minimum value.
You can then loop through every move in the tree, and ask your AI to select a move. The above tells you how to determine if the move it selects is optimal.
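The minimax procedure described above can be sketched compactly for a tic-tac-toe board stored as a 9-character array ('X', 'O', '.'); the board is small enough that the full tree can be searched with no pruning or memoization:

```csharp
// Brute-force minimax for tic-tac-toe: +1 if the first player (X) wins under
// optimal play, -1 if O wins, 0 for a draw.
using System;

public static class TicTacToe
{
    static readonly int[][] Lines =
    {
        new[]{0,1,2}, new[]{3,4,5}, new[]{6,7,8},   // rows
        new[]{0,3,6}, new[]{1,4,7}, new[]{2,5,8},   // columns
        new[]{0,4,8}, new[]{2,4,6}                  // diagonals
    };

    // Returns +1 if X has three in a row, -1 if O does, 0 otherwise.
    static int Winner(char[] b)
    {
        foreach (var line in Lines)
            if (b[line[0]] != '.' && b[line[0]] == b[line[1]] && b[line[1]] == b[line[2]])
                return b[line[0]] == 'X' ? 1 : -1;
        return 0;
    }

    // Value of the position with `player` to move, assuming optimal play.
    public static int Minimax(char[] b, char player)
    {
        int w = Winner(b);
        if (w != 0) return w;
        if (Array.IndexOf(b, '.') < 0) return 0;    // board full: draw

        int best = player == 'X' ? int.MinValue : int.MaxValue;
        for (int i = 0; i < 9; i++)
        {
            if (b[i] != '.') continue;
            b[i] = player;                          // make the move
            int v = Minimax(b, player == 'X' ? 'O' : 'X');
            b[i] = '.';                             // undo it
            best = player == 'X' ? Math.Max(best, v) : Math.Min(best, v);
        }
        return best;
    }
}
```

Running Minimax on the empty board with 'X' to move returns 0, matching the fact that optimal play ends in a draw; to check your AI, compare the move it picks in each position against the child position with the best minimax value.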
I would use a decision tree to solve this problem.
Putting it simply, decision trees are a method to recursively calculate the expectancy (and chance) of the end result. Each "branch" in the tree is a decision whose expectancy is calculated from the sum of (value * chance) over the outcomes possible for that decision.
In a limited-options scenario (like tic-tac-toe) you can have the entire tree pre-calculated, and therefore after each move of the human player (chance) you can choose (decision) the next branch with the highest expectancy of winning.
In a chess game the solution is similar, but the tree is not pre-built: after each move the computer calculates the value of every possible move on the board n plies deep, choosing the best, second-best, or n-th best expectancy depending on the difficulty level selected by the player.

Representing a Gameworld that is Irregularly shaped

I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it, and the game world exists only on the inside of the shape (once again, think lake).
How can I efficiently represent this game world? I know that many worlds are basically square and work well in a 2- or 3-dimensional array. I feel that if I use a square array, I am wasting space and increasing the time needed to iterate through the array. However, I am not sure how a jagged array would work here either.
Example shape of gameworld
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes it easy to do so.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
Use whatever structure you've come up with---you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's waste of space or slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only thing you are saving are valid locations, and you don't waste memory saving the non-game world locations since you can deduce those from lack of presence in your data structure.
The easiest thing is to just use the array, and just mark the non-gamespace positions with some special marker. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (but it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map but not for one that fits well into a rectangle (or other regular shape) with few deviations.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
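A small sketch of that idea using a Dictionary keyed by coordinates (the class and method names are illustrative): only valid cells are ever stored, a membership test answers "is this inside the world?", and iterating the dictionary steps through exactly the valid locations.

```csharp
// Sparse world sketch: cells are stored in a dictionary keyed by (x, y).
// Absent keys mean "outside the lake"; lookups are O(1) on average.
using System.Collections.Generic;

public class SparseWorld<TCell>
{
    private readonly Dictionary<(int X, int Y), TCell> cells = new();

    public void Set(int x, int y, TCell cell) => cells[(x, y)] = cell;

    // Membership in the dictionary doubles as the world-boundary test.
    public bool IsInsideWorld(int x, int y) => cells.ContainsKey((x, y));

    public bool TryGet(int x, int y, out TCell cell) =>
        cells.TryGetValue((x, y), out cell);

    // Steps through every valid location and nothing else.
    public IEnumerable<KeyValuePair<(int X, int Y), TCell>> AllCells() => cells;
}
```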
Another way would be to store an edge list: a line vector along each straight edge. It is easy to check for inclusion this way, and a quad tree, or even a simple location hash on each vertex, can speed up lookups. We did this with a height component per edge to model the walls of a baseball stadium, and it worked beautifully.
There is a big issue that nobody here addressed: the huge difference between storing it on disk and storing it in memory.
Assuming you are talking about a game world as you said, this means it's going to be very large. You're not going to store the whole thing in memory in once, but instead you will store the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the given answers provide good suggestions; you can serialize a data structure such as a hash table, or a list of only filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around another several years at least, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential, as you stream more vicinity terrain while the player moves, and you'll probably find it to be a noticeable difference. Don't take my word for it though, I haven't actually tested this sort of thing, it just makes sense right?
