Firebase Realtime Database - Matchmaking using transactions = High download usage - C#

Problem
I'm using the Firebase Realtime Database (for Unity) to manage the server side of a turn-based game, but my matchmaking has a problem: high download usage.
Every online game has two base states to prevent more than 2 players from joining a game: Created and Joined.
Created: a player tries to join a game; if none is found, a new game is created.
Joined: a player tries to join a game; if one is found, its state changes from Created to Joined.
I'm using RunTransaction to prevent more than 2 players from joining a game, but I noticed that the latest data was not being fetched from the database because of the local cache. Calling KeepSynced(true) on my matches-{lang} child node means I always have the latest data, but naturally this produces high download usage.
private DatabaseReference DatabaseReference()
{
    return FirebaseDatabase.DefaultInstance.RootReference.Child(MatchesLocation(LanguageManager.Manager.GetPlayerLanguageCode()));
}

private DatabaseReference DatabaseReferenceLangMatch(Language language)
{
    return FirebaseDatabase.DefaultInstance.RootReference.Child(MatchesLocation(LanguageManager.Manager.GetLanguageCode(language)));
}

public void ManageKeepSyncedMatches(Language lang)
{
    // Keep the English matches node synced only while the player's language is English.
    DatabaseReferenceLangMatch(Language.English).KeepSynced(lang == Language.English);
}
public void JoinMatchTransaction(GameMatchOnline gameMatchOnline, UnityAction<string, bool> callback)
{
    JoinTransactionAbort joinResult = JoinTransactionAbort.None;
    DatabaseReference matchesListRef = DatabaseReference();
    Dictionary<string, object> joinerDict = gameMatchOnline.ToJoinDictionary();
    matchesListRef.Child(gameMatchOnline.matchId).RunTransaction(matchData =>
    {
        Dictionary<string, object> matchDict = matchData.Value as Dictionary<string, object>;
        // The first run may see null when the match is not in the local cache;
        // returning success with the unchanged data makes the SDK fetch the
        // server value and run the transaction again.
        if (matchDict == null)
        {
            joinResult = JoinTransactionAbort.Null;
            return TransactionResult.Success(matchData);
        }
        if (!matchDict.ContainsKey("state"))
        {
            joinResult = JoinTransactionAbort.Error;
            return TransactionResult.Abort();
        }
        GameMatchOnline.State state = (GameMatchOnline.State)System.Convert.ToInt32(matchDict["state"]);
        // Someone else already joined: abort so no third player gets in.
        if (state != GameMatchOnline.State.Created)
        {
            joinResult = JoinTransactionAbort.Error;
            return TransactionResult.Abort();
        }
        joinResult = JoinTransactionAbort.None;
        matchDict["joinerInfo"] = joinerDict["joinerInfo"]; // indexer instead of Add: safe if the transaction re-runs
        matchDict["state"] = joinerDict["state"];
        matchData.Value = matchDict;
        return TransactionResult.Success(matchData);
    }).ContinueWith(task =>
    {
        // Fail
        if (task.IsFaulted || task.IsCanceled)
        {
            UnityThread.executeInUpdate(() =>
            {
                if (joinResult == JoinTransactionAbort.Error)
                {
                    callback(null, false);
                }
            });
        }
        // Can join match
        else if (task.IsCompleted)
        {
            UnityThread.executeInUpdate(() =>
            {
                if (joinResult == JoinTransactionAbort.None)
                {
                    AddListenerResultsValueChanged(gameMatchOnline.matchId, gameMatchOnline.joinerInfo.userId, gameMatchOnline.isPrivate, gameMatchOnline.language);
                    callback(gameMatchOnline.matchId, true);
                }
                else
                {
                    callback(null, false);
                }
            });
        }
    });
}
Question
Removing keepSynced means players will only have locally cached information for matches-{lang}. Can I trust that, even so, there will be no more than 2 players per game? *Transactions are supposed to avoid exactly this kind of problem.
Is there a way to bypass the local cache for a single request and thus always get the updated data?
Could the best solution be to move the games to another node, to reduce the size of the matches-{lang} node?
Thanks!

Removing "keepSynced" players will have locally cached information for "matches", can I trust that by doing this there will be no more than 2 players per game? *Transactions are supposed to avoid this kind of problem.
With KeepSynced off, Transactions will still hit the local cache then hit the internet. It'll probably save you some bandwidth since it's a lazy access (that's assuming you don't do something like "get all matches"), and you'll be able to make the guarantees you need. Whether or not you use KeepSynced, you should be prepared for your transaction to run multiple times (and against null data if the local cache is empty).
Is there a way to avoid the local cache for a request and thus always get the updated data?
Correction
It looks like I got this a little backwards; see this answer for more details. GetValueAsync will return the cached value and then request an updated one, so subsequent calls will get a new value when it's available. You should always prefer ValueChanged listeners when possible.
old answer:
You _can_ just say `GetValueAsync`, which has to bypass the cache since it will only fire once. You really should use ValueChanged listeners to listen for changes and Transactions to change data if you can to keep everything up to date and to avoid data races.
Could the best solution be to move the games to another node to reduce the size of the "matches" node?
Generally the fewer people hitting a shared resource, the better your performance. If you haven't already, check out the Loteria post to see how a team created a realtime game on Realtime Database that was resilient enough to be a Google Doodle.
The TLDR is that rather than a player being responsible for creating or finding a game, players looking for a game are written to a queue. When a player is added to the matchmaking queue, a Cloud Functions trigger fires and does the work of pairing users up. A client knows it's in a game by putting a ValueChanged listener on its own player entry in the database and waiting for a game to be written into it, roughly as sketched below.
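On the Unity client, that flow might look something like this. This is a minimal sketch, not the Loteria code: the /matchmakingQueue node and the /players/{userId}/currentGame key are made-up names, and the actual pairing is assumed to happen server-side in a Cloud Function.

using Firebase.Database;
using UnityEngine;

public class MatchmakingClient : MonoBehaviour
{
    public void RequestMatch(string userId)
    {
        DatabaseReference root = FirebaseDatabase.DefaultInstance.RootReference;

        // 1. Announce that we are looking for a game; a Cloud Function
        //    watching this node is expected to pair waiting players up.
        root.Child("matchmakingQueue").Child(userId).SetValueAsync(ServerValue.Timestamp);

        // 2. Wait for the function to write a game id into our player entry.
        root.Child("players").Child(userId).Child("currentGame").ValueChanged += (sender, args) =>
        {
            if (args.DatabaseError != null || args.Snapshot.Value == null) return;
            Debug.Log("Joined game " + args.Snapshot.Value);
        };
    }
}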
The game was further kept low latency with some manual sharding logic. They profiled how much traffic a single database instance could handle, then wrote some quick (manual, since it was a one-day thing) scaling logic to distribute players across a number of databases.
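Manual sharding can be as simple as deterministically mapping each user onto one of several database instances you've provisioned yourself. A rough sketch; the shard URLs are placeholders:

using Firebase.Database;

public static class Shards
{
    // Hypothetical shard list; each URL is a separate Realtime Database instance.
    private static readonly string[] ShardUrls =
    {
        "https://myapp-shard-0.firebaseio.com/",
        "https://myapp-shard-1.firebaseio.com/",
    };

    // Same user always lands on the same shard. A simple hand-rolled hash is
    // used because string.GetHashCode() is not stable across runs.
    public static FirebaseDatabase ShardForUser(string userId)
    {
        int hash = 0;
        foreach (char c in userId) hash = hash * 31 + c;
        int index = (hash & 0x7fffffff) % ShardUrls.Length;
        return FirebaseDatabase.GetInstance(ShardUrls[index]);
    }
}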
I hope that all helps!
--Patrick

Related

Firebase keepsynced

Working with Firebase in a Unity project, for a simple highscore, I stumbled on problems when doing a query.
In the editor everything works like a charm (the editor does not have persistence).
On devices (with persistence enabled) the trouble begins: the query shows cached Firebase data, so it is only correct on the first ever call in the client, and after that whenever Firebase sees fit to sync (maybe never, since there is no event handler).
Looking for a solution, I found no way to force an update of the cached values.
I then tested with KeepSynced(true) on the query and this seems to work:
this.HiscoreQuery = hiscoreref.OrderByChild("Score").LimitToLast(20);
this.HiscoreQuery.KeepSynced(true);
this.HiscoreQuery.GetValueAsync().ContinueWith(task =>
{
    if (task.IsFaulted)
    {
        Debug.LogError("Get hiscores faulted");
        return;
    }
    if (task.Result != null && task.Result.ChildrenCount > 0)
    {
        Debug.Log("Get hiscore data success!");
        this.AddDelayedUpdateAction(() => this.OnGetHiScores(task.Result));
    }
});
Question: this would be fine if Firebase only keeps the query's LimitToLast(20) synced, but it would be a very bad thing if Firebase internally keeps the whole (growing) highscore list copied in every client.
Does anyone know whether KeepSynced(true) is limited to the actual query scope or applies to the whole tree/branch? And how could one validate this?
GetValue calls don't work well with KeepSynced(true). Firebase eagerly returns you the value from the cache, and only then loads the data from the server.
For a longer explanation, see my answer here: Firebase Offline Capabilities and addListenerForSingleValueEvent
If you want to use caching, use listeners and not GetValue calls. With a listener, your callback will be fired twice (if there is a change): once with the value from the cache, and then once with the value from the server.
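In Unity C# that means attaching a ValueChanged listener instead of calling GetValueAsync; a minimal sketch against the highscore query above:

// Fires once with the cached value (if any) and again whenever the server
// sends fresh data, so the UI converges on the latest state.
this.HiscoreQuery.ValueChanged += (object sender, ValueChangedEventArgs args) =>
{
    if (args.DatabaseError != null)
    {
        Debug.LogError(args.DatabaseError.Message);
        return;
    }
    if (args.Snapshot != null && args.Snapshot.ChildrenCount > 0)
    {
        this.AddDelayedUpdateAction(() => this.OnGetHiScores(args.Snapshot));
    }
};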

RESTful service to make a request to foreign API and update SQL Server database

I want to make a RESTful API (or anything else that gets it done, really) that works in a loop, performing a specified task every day at the same hour.
Specifically, I want it to access a foreign API, let's say at midnight every day, request the specified data and update the database accordingly. I know how to make a request to an API and act on the result, but I want it to run automatically, without me having to interact with it or even make requests.
The reason is that I'm working on a project that spans multiple platforms (and even on a single platform there would be several users), and I can't call the foreign API (mainly because it's a trial; it's a school project) every time a user logs in or clicks a button on each platform.
I don't know how to do that (or whether it's even possible) with a web service. I've tried a web form doing it asynchronously with a BackgroundWorker, but got nowhere.
I thought I might have better luck here with more experienced people.
Hope you can help me out.
Thanks in advance,
Fábio.
I don't know if I've got this right, but it seems to me that the easiest way to do what you want (run a program at a given time, every day) is to use the Windows Task Scheduler to schedule your application to run at the specific time you want.
I managed to get there thanks to the help of @Pedro Gaspar - LoboFX.
I didn't want the Windows Scheduler, because I want the scheduling reflected in the code, and I don't exactly have access to the server it's going to run on. That said, what got me there was something like this:
private static string LigacaoBD = "something";
private static Perfil perfil = new Perfil(LigacaoBD);

protected void Page_Load(object sender, EventArgs e)
{
    Task.Factory.StartNew(() => teste());
}

private void teste()
{
    bool verif = false;
    while (true)
    {
        // At the target time (21:12:00 UTC, i.e. 22:12 at UTC+1),
        // re-arm the flag so the insert below runs again.
        if (DateTime.UtcNow.Hour + 1 == 22 && DateTime.UtcNow.Minute == 12 && DateTime.UtcNow.Second == 0)
            verif = false;
        // Runs once at startup (covers restarts after downtime) and then
        // once per day, whenever the flag has just been re-armed.
        if (!verif)
        {
            int resposta = perfil.Guardar(DateTime.UtcNow.ToString());
            verif = true;
        }
        Thread.Sleep(1000);
    }
}
It inserts into the database through a class library. The bool guarantees that it only inserts once, and when the loop reaches the specified hour, minute and second the flag resets, allowing it to insert again. If the server goes down, it inserts anyway as soon as it comes back up. The only problem is that if it had already inserted and the server then goes down, it will insert again on restart; but stored procedures can deal with that. Well, not for the DateTime.UtcNow.ToString() value, but that was just a test.
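For what it's worth, the same once-a-day behaviour can be had without a one-second polling loop by computing the delay until the next run and handing it to a timer. A minimal sketch; RunDailyTask is a made-up helper name, not a framework API:

using System;
using System.Threading;

public static class DailyScheduler
{
    // Kept in a field so the timer isn't garbage collected.
    private static Timer dailyTimer;

    // Runs "task" at the next occurrence of hourUtc, then every 24 hours.
    public static void RunDailyTask(int hourUtc, Action task)
    {
        DateTime now = DateTime.UtcNow;
        DateTime next = now.Date.AddHours(hourUtc);
        if (next <= now)
            next = next.AddDays(1); // today's slot already passed
        dailyTimer = new Timer(_ => task(), null, next - now, TimeSpan.FromHours(24));
    }
}

// e.g. DailyScheduler.RunDailyTask(21, () => perfil.Guardar(DateTime.UtcNow.ToString()));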

Multithreading design

I have a program that will store network activity data from some servers. For speed, I will design the application to make each request in a separate thread and put the result in a generic dictionary, where the key is the server id and the value is the result class.
However, the responses from the servers should be saved to the DB every 10 minutes. I'm not sure I have a good approach for this, so some input would be great.
What I have in mind is to lock the result dictionary, make a deep clone of it, and analyze the clone in another thread that writes it to the DB.
How can I minimize blocking of the request threads, so they can start adding fresh results again as soon as possible while the dictionary is still being read?
The idea is to take the current state aside when your persist logic fires, while directing new input into fresh storage. This is the basic pattern for that task (shown here in Java):
class PeriodicPersist {
    // All access goes through "lock", so the map swap in persist() and the
    // puts in newResult() can never interleave.
    private final Object lock = new Object();
    private Map<String, String> keyToResultMap = new HashMap<String, String>();

    public void newResult(String key, String result) {
        synchronized (lock) { // will not enter while persist is swapping the map
            keyToResultMap.put(key, result);
        }
    }

    public void persist() {
        Map<String, String> tempMap;
        synchronized (lock) { // will not enter while a new result is being added
            if (keyToResultMap.isEmpty()) {
                return;
            }
            // Take the full map aside and start fresh for new results.
            tempMap = keyToResultMap;
            keyToResultMap = new HashMap<String, String>();
        }
        // Write tempMap to the DB OUTSIDE the lock.
    }
}
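Since the question is about C#, the same swap-under-lock pattern might look like this (a sketch; Result stands in for your result class):

class PeriodicPersist
{
    private readonly object gate = new object();
    private Dictionary<string, Result> keyToResultMap = new Dictionary<string, Result>();

    public void NewResult(string key, Result result)
    {
        lock (gate) { keyToResultMap[key] = result; }
    }

    public void Persist()
    {
        Dictionary<string, Result> tempMap;
        lock (gate)
        {
            if (keyToResultMap.Count == 0) return;
            // Swap in an empty map so request threads are blocked only for
            // the duration of the swap, not for the DB write.
            tempMap = keyToResultMap;
            keyToResultMap = new Dictionary<string, Result>();
        }
        // Write tempMap to the DB outside the lock.
    }
}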
You can avoid dealing with the locking yourself by using a ConcurrentDictionary. Using timer-based events, run a task every 10 minutes that snapshots the dictionary's current contents, saves them to your DB, removes the saved entries, and then starts the analysis on the saved content.
// myCD is your ConcurrentDictionary<string, Result>
// every 10 mins:
var snapshot = myCD.ToArray(); // moment-in-time copy of the current pairs
// save to DB using snapshot
foreach (var pair in snapshot)
{
    // Remove what was saved; entries added after the snapshot survive
    // (a value updated under the same key since the snapshot is lost).
    Result removed;
    myCD.TryRemove(pair.Key, out removed);
}

A Dictionary of Queues to manage Streamed Video Frames

I am working on a system that will analyze video frames from multiple cameras. Each camera will initialize a WCF session and be assigned a GUID for identification before sending video frames. The processing load is more than the server can accomplish in real time as the number of cameras grows. As such, I am writing a class to manage the data as it arrives in bursts from the motion-activated cameras.
Each frame from the cameras must be analyzed in parallel with frames from the other cameras. Each camera fills a queue with frames to preserve the FIFO nature of the data.
Management of the queues has left me searching for ideas. Initially I intended to create a dictionary and insert each GUID-queue pair. After working on the code for a while, I'm starting to think this is not the best approach. Does anyone have any suggestions?
Some simplified sample code:
namespace DataCollection
{
    [ServiceBehavior(Name = "ClientDataView", InstanceContextMode = InstanceContextMode.Single)]
    public class DataCollectionService : IDataCollectionService
    {
        // Dictionary of Guid-Queue pairs
        CameraCollection CameraArray = new CameraCollection();

        public Guid RegisterCamera()
        {
            Guid ID = Guid.NewGuid(); // new Guid() would give every camera the same empty GUID
            WorkQueue FrameQueue = new WorkQueue();
            CameraArray.AddCamera(ID, FrameQueue);
            return ID;
        }

        public bool SendData(Guid ID, FrameReady newFrame)
        {
            Frame dataFrame = newFrame.OpenFrame;
            WorkQueue que = CameraArray.GetQueueReference(ID);
            que.QueueFrame(dataFrame);
            return true;
        }
    }
}
Currently the system works if I hard-code everything for a set number of cameras. Getting it to work with a varying number of cameras is more challenging. Any suggestions or advice would be appreciated.
.NET Framework 4 has a class called Parallel, with a static ForEach method. With your collection, you can do something like Parallel.ForEach(collection, action).
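Applied to a dictionary of per-camera queues, that might look like this (a sketch; the ConcurrentDictionary/ConcurrentQueue structure and the AnalyzeFrame call are assumptions, not the asker's CameraCollection):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class FrameProcessor
{
    // One concurrent FIFO queue per camera, keyed by its GUID.
    private readonly ConcurrentDictionary<Guid, ConcurrentQueue<Frame>> cameras =
        new ConcurrentDictionary<Guid, ConcurrentQueue<Frame>>();

    // Drain every camera's queue in parallel; cameras can keep enqueuing
    // frames concurrently without any extra locking.
    public void ProcessAll()
    {
        Parallel.ForEach(cameras, camera =>
        {
            Frame frame;
            while (camera.Value.TryDequeue(out frame))
            {
                AnalyzeFrame(camera.Key, frame); // hypothetical per-frame analysis
            }
        });
    }

    private void AnalyzeFrame(Guid cameraId, Frame frame) { /* ... */ }
}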

How can I make NHibernate survive database downtime?

I have a C# console app that I would like to keep running even when its database crashes. In that case it should poll the database to see when it comes back online, and then resume operation. I have this code for it, which I don't like:
public static T Robust<T>(Func<T> function)
{
    while (true)
    {
        try
        {
            return function();
        }
        catch (GenericADOException)
        {
            Console.WriteLine("SQL Exception. Retrying in 10 seconds");
            Thread.Sleep(10000);
        }
    }
}
[...]
N.Robust(() => Session.CreateCriteria(typeof(MyEntity)).List());
The problem is that I have to insert that pesky N.Robust construct everywhere, which clutters the code. I also run the risk of forgetting it somewhere. I've been looking into using NHibernate's event listeners or interceptors for this, but haven't been able to make it work. Do I really have to fork NHibernate to make this work?
Update
Alright, so I've been able to overcome one of my two issues. By injecting my own event listeners I can at least ensure that all calls to the database go through the above method.
_configuration.EventListeners.LoadEventListeners
    = new ILoadEventListener[] { new RobustEventListener() };
[...]
public class RobustEventListener : ILoadEventListener
{
    public void OnLoad(LoadEvent e, LoadType type)
    {
        if (!RobustMode)
            throw new ApplicationException("Not allowed");
    }
}
I am still left with a cluttered code base, but I think that's a reasonable price to pay for increased service uptime.
One architectural approach to tolerating database downtime is to use a queue (client side and/or server side). For reads of static or largely static data, cache on the client side with an expiry window (say 15-30 minutes).
This is non-trivial if you have complex database transactions.
Sleeping like you propose is rarely a good idea.
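A bare-bones version of the client-side write queue idea (a sketch under simplifying assumptions: writes are plain delegates, a timer calls TryFlush periodically, and a single consumer drains the queue):

using System;
using System.Collections.Generic;
using NHibernate.Exceptions;

// Buffers write operations while the DB is down and replays them in
// order when it comes back.
public class WriteQueue
{
    private readonly Queue<Action> pending = new Queue<Action>();

    public void Enqueue(Action writeOp)
    {
        pending.Enqueue(writeOp);
        TryFlush();
    }

    public void TryFlush()
    {
        while (pending.Count > 0)
        {
            try
            {
                pending.Peek()();  // attempt the oldest write first
                pending.Dequeue(); // discard only after it succeeded
            }
            catch (GenericADOException)
            {
                return; // DB still down; keep everything queued and retry later
            }
        }
    }
}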
Another option (used mainly in occasionally connected applications) is database replication. Using an RDBMS with replication support (SQL Server, for example), have your application always talk to the local DB, and let the replication engine deal with synchronization with the remote database automatically when the connection is up. This will probably introduce the issue of conflict management/resolution.
