Keeping a multiplayer RTS game in sync when it uses floating-point arithmetic - C#

I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands over the network while running the same simulation on every player's machine.
And now there is a problem: the entire engine uses doubles everywhere, and floating-point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized.
The game is not grid based at all, and it has a simple physics engine to move the space ships (ships have impulse and angular momentum...). So recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution).
So I have two options so far:
1. Say goodbye to the current code and restart from scratch using integers.
2. Make the game LAN-only, where there is enough bandwidth for 8 players with thousands of units, sending positions, orientations, etc. in (almost) every frame...
So I'm looking for better options (or even tips on migrating the code to fixed point without messing everything up...).

Surely all your clients will be using the same binary, so compiler optimisations should have no effect on synchronisation issues.
Also, if you are only planning on targeting one architecture (or are at least only allowing people to play against each other if they are on the same architecture) then that doesn't matter either.
I've done exactly the same thing using floating points in C# developing games for the iPhone and desktop, and they both give the same results, even though the iPhone is ARM and desktop is x86.
Just make sure the game does exactly the same calculations and you will be fine.
If all else fails, just replace all instances of float in your game with a standard fixed-point arithmetic class. That way you can be 100% sure that your calculations are deterministic across architectures, although the nature of fixed-point arithmetic may adversely affect your game.
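For illustration, a minimal 16.16 fixed-point value type might look something like the sketch below. This is not a drop-in replacement (no overflow checking, no trig, no comparisons), and the format and names are my own choices:

    // Sketch of a 16.16 fixed-point number stored in an int. All arithmetic
    // is pure integer math, so it is deterministic across architectures.
    public struct Fixed32
    {
        private const int Shift = 16;
        private readonly int raw;

        private Fixed32(int raw) { this.raw = raw; }

        // Note: values outside roughly +/-32767 overflow in this toy format.
        public static Fixed32 FromInt(int v) => new Fixed32(v << Shift);

        public static Fixed32 operator +(Fixed32 a, Fixed32 b) => new Fixed32(a.raw + b.raw);
        public static Fixed32 operator -(Fixed32 a, Fixed32 b) => new Fixed32(a.raw - b.raw);

        // Multiplies and divides go through a long so the intermediate
        // result doesn't overflow before the shift corrects the scale.
        public static Fixed32 operator *(Fixed32 a, Fixed32 b) =>
            new Fixed32((int)(((long)a.raw * b.raw) >> Shift));
        public static Fixed32 operator /(Fixed32 a, Fixed32 b) =>
            new Fixed32((int)(((long)a.raw << Shift) / b.raw));

        // For rendering only - never feed the result back into the
        // simulation, or determinism is lost again.
        public float ToFloat() => raw / (float)(1 << Shift);
    }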

I'm a little late responding to this, but from a Game Security point of view the simulation should only be running on the server/host (i.e.: don't trust the clients, they could be cheating):
The clients should only be sending their movements/commands to the server (which discards bad inputs or clamps them within game limits, so a client saying "I'm running at 10,000m/s" gets clamped by the server to say 10m/s).
The server/host only tells clients about things happening within their field of view (i.e.: a player at co-ordinates 0,0 doesn't get told about two AIs fighting each other at 200,0 if he can only see a radius of 50 units around him/herself).
It's the second part that saves the bandwidth - the simulation on the server/host may have thousands of objects to manage but the clients only need to know about 100 or 200 things within their own field of view.
The only wrinkle in the situation is things like dynamic fire (bullets, missiles, etc) whose range may be greater than a client's view radius. The server/host tells the clients their origin and initial trajectory/target object, the clients then simulate their path according to the same rules, but the kills are only valid in the simulation on the server/host.
Serializing the client-specific world state and compressing it before transmission can also be a huge win, especially if your class properties are only Public where needed. (I normally avoid XML, but we significantly improved compression ratios in one application by serializing to XML and compressing it versus serializing to a binary format and compressing that. I suspect the limited range of ASCII characters used had a hand in it, YMMV.)
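As a rough illustration of the serialize-then-compress step (a sketch; XmlSerializer plus GZipStream is one of several possible pairings, and WorldSnapshot/UnitState are hypothetical DTOs):

    using System.Collections.Generic;
    using System.IO;
    using System.IO.Compression;
    using System.Xml.Serialization;

    // Hypothetical DTOs holding only what this particular client can see.
    public class WorldSnapshot
    {
        public int Tick;
        public List<UnitState> VisibleUnits = new List<UnitState>();
    }
    public class UnitState { public int Id; public double X, Y, Heading; }

    public static class SnapshotWire
    {
        // Serialize a client-specific snapshot to XML, then gzip it.
        public static byte[] SerializeAndCompress(WorldSnapshot snapshot)
        {
            var serializer = new XmlSerializer(typeof(WorldSnapshot));
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    serializer.Serialize(gzip, snapshot);
                }
                return output.ToArray();
            }
        }
    }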

A common technique is to have all clients describe their current state to the other clients, periodically.
When two computers disagree about the state of an object, presumably due to floating point error, the game has some rule to determine which is correct, and all clients adjust to match it.
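One possible such rule, sketched below, is "host wins": if the locally simulated state drifts from the authoritative one by more than a tolerance, snap to the authoritative value (Unit and UnitState are hypothetical types, and the tolerance is illustrative):

    // Lives in some client-side class; called when a peer state report arrives.
    const double Tolerance = 0.001;

    void Reconcile(Unit local, UnitState authoritative)
    {
        double dx = local.X - authoritative.X;
        double dy = local.Y - authoritative.Y;
        if (dx * dx + dy * dy > Tolerance * Tolerance)
        {
            local.X = authoritative.X;
            local.Y = authoritative.Y;
        }
    }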

What are you using doubles for specifically? Could you use decimal instead?
Generally the server would store the state (position/orientation/type) of all players' units.
When player 1 moves a unit:
either the instruction to move is sent to the server,
or the updated state is sent to the server.
When the player's client needs to render the scene, the server sends back state info on the location of all units within a requested scope.
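A sketch of what the first variant's message might contain (all names are hypothetical; the point is that the client sends intent, never position):

    // Hypothetical command message sent from client to server.
    public class MoveCommand
    {
        public int PlayerId;
        public int UnitId;
        public double TargetX;   // where the unit should head
        public double TargetY;
        public int IssuedOnTick; // simulation tick the command applies to
    }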


Implementing State Machine for Unity3D NPC characters

I can't seem to wrap my head around this topic and I need a bit of a push forward, maybe an example.
I'm currently developing a project that includes simulating a large number of automated characters trying to fulfill their "needs".
For now the characters have their need levels stored in their scripts as a simple floating-point value in the [0, 1] range; every frame they check whether it exceeds a given threshold and then try to move to a defined point and satisfy that need.
The problem is that one character can have many needs, which leads to a situation where a character moves to satisfy one need and, the moment that need drops just below the threshold, it moves off to satisfy the next one. Assuming that later I would like the needs to rise over time, this will be a big problem.
I think I should implement a state machine to transition from idle -> move to satisfy -> wait to satisfy -> idle, but as I said I can't quite understand the whole state machine thing. The closest I got to understanding it was this: https://github.com/nblumhardt/stateless but I still can't wrap my mind around it.
I would be grateful for any help, tutorials, examples, anything.
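To make the transitions concrete, here is a minimal enum-based sketch of the machine described above (all names and constants are illustrative; the Stateless library would express the same transitions declaratively):

    // Minimal hand-rolled state machine for one need.
    public enum NeedState { Idle, MovingToSatisfy, WaitingToSatisfy }

    public class NeedAgent
    {
        public NeedState State = NeedState.Idle;
        public float NeedLevel;          // rises over time
        public float Threshold = 0.7f;   // start satisfying above this

        // Call once per frame. The key point: transitions happen only on
        // explicit events, so the character commits to one need at a time
        // instead of flip-flopping the moment a level crosses the threshold.
        public void Update(float deltaTime, bool arrivedAtTarget, bool satisfied)
        {
            NeedLevel += 0.01f * deltaTime; // needs rise over time

            switch (State)
            {
                case NeedState.Idle:
                    if (NeedLevel > Threshold)
                        State = NeedState.MovingToSatisfy; // pick a target here
                    break;
                case NeedState.MovingToSatisfy:
                    if (arrivedAtTarget)
                        State = NeedState.WaitingToSatisfy;
                    break;
                case NeedState.WaitingToSatisfy:
                    if (satisfied)
                    {
                        NeedLevel = 0f;
                        State = NeedState.Idle;
                    }
                    break;
            }
        }
    }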

How to make object/layer ignore near clipping planes?

I'm developing a game for Google Daydream, and the cameras used there are really confusing - all of their params are set to the same value when the scene starts. Initially I had a problem with distant object meshes clipping through each other. I found out that the solution to this is to set nearClippingPlane to a higher value. This caused the next problem, which was the cockpit of my ship not being entirely rendered because the nearClippingPlane was now too far. I tried to create another camera that would render only the cockpit, but because of what I mentioned previously it doesn't work (the cameras act weird), and when I set it through a script it also doesn't work properly.
Because of that I need to change some property of the object, not the camera. I found this but it only works for farClippingPlane (otherwise it would be perfect). Do you know how I can ignore nearClippingPlane for one object/layer without adding a second camera?
Regarding further object meshes clipping, this is likely happening due to z-fighting: https://en.wikipedia.org/wiki/Z-fighting
Note that by default, Daydream in Unity will use a 16-bit depth buffer. This is accessible via the player settings -> other settings -> Virtual Reality SDKs -> Daydream.
Switching to a 32-bit depth buffer might allow you to render both the objects in the cockpit and the objects far away using a small nearClippingPlane value. However, this is mostly a mitigation and you might still run into clipping problems, albeit much smaller ones. Additionally there's a performance impact by doing this since you're doubling the memory and bandwidth used by your depth buffer.
You should be able to use multiple cameras like you tried. Create a "Cockpit" and "Environment" camera, with the cockpit camera rendering second using the camera's Depth property. This will render the environment first, and you can ignore all the objects within the cockpit. This has the advantage that you can push out the near plane pretty far.
Next, set the cockpit camera to only clear depth. You can set the cockpit camera to enclose only objects that might be in your cockpit. Now the renderer will preserve the environment, but allow you to use two different depth ranges. Note that this also has performance implications on mobile devices, as the multiple render passes will incur an extra memory transfer, and you also need to clear the depth buffer in the middle of rendering.
You might want to consider creating separate layers for your objects, e.g. "cockpit" and "environment" to prevent things from being rendered twice.
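A sketch of that two-camera setup as a Unity script (the layer names and clip distances are assumptions; attach it to any object in the scene and assign the cameras in the inspector):

    using UnityEngine;

    // Configures an environment camera and a cockpit camera that share the
    // screen but use separate depth ranges.
    public class CockpitCameraSetup : MonoBehaviour
    {
        public Camera environmentCamera;
        public Camera cockpitCamera;

        void Start()
        {
            // Environment renders first: far range, cockpit layer excluded.
            environmentCamera.depth = 0;
            environmentCamera.nearClipPlane = 5f;
            environmentCamera.farClipPlane = 10000f;
            environmentCamera.cullingMask = ~(1 << LayerMask.NameToLayer("Cockpit"));

            // Cockpit renders second and clears only the depth buffer, so
            // the environment image is preserved underneath.
            cockpitCamera.depth = 1;
            cockpitCamera.clearFlags = CameraClearFlags.Depth;
            cockpitCamera.nearClipPlane = 0.05f;
            cockpitCamera.farClipPlane = 10f;
            cockpitCamera.cullingMask = 1 << LayerMask.NameToLayer("Cockpit");
        }
    }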
[Image: sample scene with two cameras - note the difference in near/far values]

Simulating fluid flow over a heightmap

I am looking for a way to approximate a volume of fluid moving over a heightmap. The easiest solution I can think of is to approximate it as a large number of non-drawn spheres of small diameter (<0.1m). I would then place a visible plane representing the surface of the water on "top" of the spheres, at the locations where they came to rest. To my knowledge, no managed physics engines contain a built-in fluid simulator, hence the question.
Implementation would consist of using a physics engine such as JigLibX, which is capable of simulating the motion of the spheres. To determine the height of the planes, I was thinking of averaging the maximum height of each sphere that is on the top layer of a grouping.
I don't expect performance to be great, but would it be feasible in real time? If not, could I use this simulation to pre-bake lines of flow?
I hope this makes sense. I really want opinions/suggestions as to whether this is feasible, or if there is a better way of approaching it.
Thanks for any help, Venatu
(If it's relevant, my target platform is XNA 4.0, using C#. Windows only at this point in time, so PhysX/Havok are possibilities for the simulation, but I would prefer a managed solution.)
I haven't seen realistic fluid dynamics in real time without using something like PhysX as of yet - probably because the calculations needed are so complicated! The problem with your approach as I see it would come with the resting contact of all those spheres as they settled down, which takes up a lot of processing power. Lots of resting contact points are notorious for eating into performance very quickly, even on the most powerful of desktops.
If you are going down this route then I'd recommend modelling the fluid as an elastic but solid body using spring based physics, where the force applied to one part of the water would use springs to propagate out to the rest. This gives you the option of setting a breaking point for the springs and separating the body into two or more bodies when that happens (and the reverse for coming back together.) This can give you the foundation for things like spray. It's also a more versatile approach in terms of performance, because you can choose the number of particles and springs you use to approximate your model.
It's a big and complicated topic, but I hope that provided at least some insight!
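A sketch of the spring idea using Hooke's law with a breaking threshold (the Particle type, constants, and names are all illustrative; Vector2 is XNA's):

    using Microsoft.Xna.Framework; // XNA Vector2

    public class Particle
    {
        public Vector2 Position;
        public Vector2 Force; // accumulated each step, applied by the integrator
    }

    // One spring between two fluid particles; it breaks when overstretched,
    // which is what lets the body split into separate blobs / spray.
    public class Spring
    {
        public Particle A, B;
        public float RestLength = 0.1f;
        public float Stiffness = 50f;
        public float BreakLength = 0.3f;
        public bool Broken;

        public void Apply()
        {
            if (Broken) return;
            Vector2 delta = B.Position - A.Position;
            float length = delta.Length();
            if (length > BreakLength) { Broken = true; return; }
            if (length < 1e-5f) return; // avoid dividing by ~zero

            // Hooke's law: force proportional to displacement from rest length.
            Vector2 force = delta * (Stiffness * (length - RestLength) / length);
            A.Force += force;
            B.Force -= force;
        }
    }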
The most popular method to simulate fluids in real-time is Smoothed-particle hydrodynamics.
Several useful links:
http://en.wikipedia.org/wiki/Smoothed-particle_hydrodynamics
http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html
http://www.plunk.org/~trina/thesis/html/thesis_toc.html
In addition to simulation itself you will also need some specialized broad-phase collision detection algorithms such as sweep-and-prune or hashing cells.
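For the hashing-cells variant, a minimal spatial hash might look like the sketch below (reusing the hypothetical Particle type from the spring sketch above; the cell size should be on the order of the particle interaction radius):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Buckets particles into grid cells so neighbour queries only touch
    // nearby cells instead of every particle (broad phase only).
    public class SpatialHash
    {
        private readonly float cellSize;
        private readonly Dictionary<(int, int), List<Particle>> cells =
            new Dictionary<(int, int), List<Particle>>();

        public SpatialHash(float cellSize) { this.cellSize = cellSize; }

        private (int, int) CellOf(Vector2 p) =>
            ((int)System.Math.Floor(p.X / cellSize),
             (int)System.Math.Floor(p.Y / cellSize));

        public void Insert(Particle particle)
        {
            var key = CellOf(particle.Position);
            if (!cells.TryGetValue(key, out var bucket))
                cells[key] = bucket = new List<Particle>();
            bucket.Add(particle);
        }

        // Yields candidates from the 3x3 block of cells around the point.
        public IEnumerable<Particle> Nearby(Vector2 position)
        {
            var (cx, cy) = CellOf(position);
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                    if (cells.TryGetValue((cx + dx, cy + dy), out var bucket))
                        foreach (var p in bucket)
                            yield return p;
        }
    }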
And you're right, there are no complete 2D solutions for fluid dynamics.

Optimization of a GC language, any ideas?

I'm a pretty big newbie when it comes to optimization. In the current game I'm working on I've managed to optimize a function and shave about 0.5% off its CPU load, and that's about as 'awesome' as I've gotten.
My situation is as follows: I've developed a physics-heavy game in MonoTouch using an XNA wrapper library called ExEn, and try as I might I've found it very hard to get the game to reach a playable framerate on an iPhone 4 (I don't even want to think about the iPhone 3GS at this point).
The performance degradation is almost certainly in the physics calculations: if I turn physics off, the framerate jumps up sharply; if I disable everything else (rendering, input, audio) and leave just physics on, performance hovers around 15fps during physics-intensive situations.
I used Instruments to profile the performance and this is what I got: http://i.imgur.com/FX25h.png The functions which drain the most performance are either from the physics engine (Farseer) or the ExEn XNA wrapper functions they call (notably Vector2.Max, Vector2.Min).
I looked into those functions, and I know that wherever it can, Farseer is passing values by reference into those functions rather than by value, so that's covered (and it's literally the only optimization I can think of). The functions themselves are very simple, basically amounting to operations such as:
    return new Vector2(Math.Max(v1.X, v2.X), Math.Max(v1.Y, v2.Y));
Basically I feel like I'm stuck, and with my limited capacity for and understanding of code optimization I'm not sure what my options are, or if I even have any options (maybe I should just curl into a fetal position and cry?). With LLVM turned on and building in release mode I'm getting maybe 15fps at best. I did manage to bring the game up to 30fps by lowering the physics precision, but this makes many levels simply unplayable as bodies intersect one another and collapse in on themselves.
So my question is, is this a lost cause or is there anything I can do to beef up performance?
First of all, love your game on Windows Phone 7!
Secondly, I don't see anything out of the ordinary in your profiler output. I did a quick and dirty performance analysis of the Farseer engine once (running in .NET) and came up with similar results. It almost looks like you have a slowdown that is proportional across the board and may be due to Mono itself.
I suppose you already follow the performance hints in http://farseerphysics.codeplex.com/documentation :-)
The most important thing seems to be to reduce complexity in the collision detection calculations, i.e. not the visual shapes but the colliding shapes. In Unity3D these are called colliders, and you can attach a simple cube as a collider to a complex human body. I don't know anything about Farseer, but they probably have a similar concept (is it called a body?).
If possible, try to replace your main character or other complex objects with simple cubes and check if the fps rises.
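In Unity terms, the swap described above might look like the sketch below (illustrative only; the Farseer equivalent would be to build the unit's body from one simple rectangle instead of a detailed shape):

    using UnityEngine;

    public static class ColliderSimplifier
    {
        // Replace an expensive mesh-accurate collider with a simple box.
        public static void SimplifyCollider(GameObject complexBody)
        {
            var meshCollider = complexBody.GetComponent<MeshCollider>();
            if (meshCollider != null)
                Object.Destroy(meshCollider);

            // One box roughly enclosing the object is far cheaper to test.
            complexBody.AddComponent<BoxCollider>();
        }
    }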
Compiler switches can sometimes make a big difference to performance. Be really sure that no debug settings are activated (I once got code that was up to 30 times slower in a C++ library project). Ensure that armv7 optimisation is turned on and that -O3 or -Os is used.
Watch out for logging statements as they are extremely expensive on iPhone
[Update:]
Try to decrease the number of actively calculated AABBs just to find out which part of the physics engine causes the trouble. If it's the sheer number, follow FFox's advice.
What about other platforms? Where did you perform the testing during the development phase - in the simulator? Which one? Any chance of getting it running on Android, an Android simulator, or Windows Phone? This would give you a hint as to whether it is an iPhone-specific problem.
Ah, I just saw that ExEn is still in a pre-release state and the final will be launched as open source on July 21st. IMO this changes the situation: if your app is running fine on some other comparable platform, then just wait for the release and give it a new try. Chances are pretty good that there is still debugging code in the pre-release you are working on.

Using C# for real-time applications

Can C# be used for developing a real-time application that involves taking input from web cam continuously and processing the input?
You cannot use any mainstream garbage-collected language for “hard real-time systems”, as the garbage collector will sometimes stop the system from responding within a defined time. Avoiding allocating objects can help; however, you need a way to prove you are not creating any garbage and that the garbage collector will not kick in.
However, most “real time” systems don't in fact need to always respond within a hard time limit, so it all comes down to what you mean by “real time”.
Even when parts of the system needs to be “hard real time” often other large parts of the system like the UI don’t.
(I think your app needs to be fast rather than “real time”, if 1 frame is lost every 100 years how many people will get killed?)
I've used C# to create multiple realtime, high speed, machine vision applications that run 24/7 and have moving machinery dependent on the application. If something goes wrong in the software, something immediately and visibly goes wrong in the real world.
I've found that C#/.NET provide pretty good functionality for doing so. As others have said, definitely stay on top of garbage collection. Break the processing up into several logical steps, and have a separate thread working on each. I've found the producer-consumer programming model to work well for this; perhaps ConcurrentQueue for starters.
You could start with something like:
Thread 1 captures the camera image, converts it to some format, and puts it into an ImageQueue
Thread 2 consumes from the ImageQueue, processing the image and comes up with a data object that is put onto a ProcessedQueue
Thread 3 consumes from the ProcessedQueue and does something interesting with the results.
If Thread 2 takes too long, Threads 1 and 3 are still chugging along. If you have a multicore processor you'll be throwing more hardware at the math. You could also use several threads in place of any thread that I wrote above, although you'd have to take care of ordering the results manually.
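A sketch of that pipeline using BlockingCollection (which wraps ConcurrentQueue by default). The camera API, Image/Result types, Analyze, and ActOnResult are all hypothetical stand-ins for your own code:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Three-stage pipeline: capture -> process -> consume results.
    // Bounded capacities keep a slow stage from piling up unbounded memory.
    var imageQueue = new BlockingCollection<Image>(boundedCapacity: 8);
    var processedQueue = new BlockingCollection<Result>(boundedCapacity: 8);

    var capture = Task.Run(() =>
    {
        while (cameraRunning)                      // hypothetical flag
            imageQueue.Add(camera.CaptureFrame()); // hypothetical camera API
        imageQueue.CompleteAdding();
    });

    var process = Task.Run(() =>
    {
        foreach (var image in imageQueue.GetConsumingEnumerable())
            processedQueue.Add(Analyze(image));    // hypothetical heavy math
        processedQueue.CompleteAdding();
    });

    var consume = Task.Run(() =>
    {
        foreach (var result in processedQueue.GetConsumingEnumerable())
            ActOnResult(result);                   // hypothetical output step
    });

    Task.WaitAll(capture, process, consume);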
Edit
After reading other peoples answers, you could probably argue my definition of "realtime". In my case, the computer produces targets that it sends to motion controllers which do the actual realtime motion. The motion controllers provide their own safety layers for things like timing, max/min ranges, smooth accel/decelerations and safety sensors. These controllers read sensors across an entire factory with a cycle time of less than 1ms.
Absolutely. The key will be to avoid garbage collection and memory management as much as possible. Try to avoid new-ing objects as much as possible, using buffers or object pools when you can.
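A minimal object pool, for illustration (a sketch; a real pool needs reset logic and thread-safety decisions that depend on your usage):

    using System.Collections.Concurrent;

    // Recycles instances instead of allocating, so the GC has less to do.
    public class ObjectPool<T> where T : new()
    {
        private readonly ConcurrentBag<T> items = new ConcurrentBag<T>();

        public T Rent() => items.TryTake(out var item) ? item : new T();

        public void Return(T item) => items.Add(item);
    }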
Of course, someone has even developed a library to do that: AForge.NET
As with any real-time application and not just C#, you'll have to manage the buffers well as #David suggested.
Not only that, there's also the XNA Framework (for things like 3D games), and you can program DirectX using C# as well, both of which are very real-time.
And did you know that, if you want, you can do pointer manipulations in C# too?
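For example (requires compiling with /unsafe; a sketch of summing raw pixel bytes with a pointer walk instead of bounds-checked indexing):

    public static unsafe long SumPixels(byte[] pixels)
    {
        long sum = 0;
        fixed (byte* start = pixels) // pin the array so the GC can't move it
        {
            byte* p = start;
            byte* end = start + pixels.Length;
            while (p < end)
                sum += *p++;
        }
        return sum;
    }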
It depends on how 'real-time' it needs to be; ie, what your timing constraints are, and how quickly you need to 'do something'.
If you can handle 'doing something' maybe every 300ms or so in .NET, say on a timer event, I've found Windows to work okay. Note that this is something I found true on multiple systems of different ages and different speeds. As always, YMMV.
But that number is awfully long for a lot of applications. Maybe not for yours.
Do some research and make sure your app responds quickly enough for your use case.
