Working with Firebase in a Unity project, for a simple highscore list, I stumbled on problems when doing a query.
In the Editor everything works like a charm (the Editor does not have persistence).
On devices (with persistence enabled) the trouble begins. The query returns cached Firebase data, so it is only correct on the very first call in the client, and after that only whenever Firebase sees fit to sync it (maybe never, since there is no event handler attached).
Looking for a solution, I found no way to force an update of the cached values.
I then tested with KeepSynced(true) on the query and this seems to work:
this.HiscoreQuery = hiscoreref.OrderByChild("Score").LimitToLast(20);
this.HiscoreQuery.KeepSynced(true);
this.HiscoreQuery.GetValueAsync().ContinueWith(task => {
    if (task.IsFaulted) {
        Debug.LogError("Get hiscores faulted");
        return;
    }
    if (task.Result != null && task.Result.ChildrenCount > 0) {
        Debug.Log("Get hiscore data success!");
        this.AddDelayedUpdateAction(() => this.OnGetHiScores(task.Result));
    }
});
Question: While this is fine if Firebase only keeps the query's LimitToLast(20) results in sync, it would be a very bad thing if Firebase internally keeps the whole (growing) highscore list copied in every client.
Does anyone know if KeepSynced(true) is limited to the actual query scope or the whole tree/branch? And how could one validate this?
GetValue calls don't work well with KeepSynced(true). Firebase eagerly returns you the value from the cache, and only then loads the data from the server.
For a longer explanation, see my answer here: Firebase Offline Capabilities and addListenerForSingleValueEvent
If you want to use caching, use listeners and not GetValue calls. With a listener, your callback will be fired twice (if there is a change): once with the value from the cache, and then once with the value from the server.
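As a rough sketch of that, reusing hiscoreref, OnGetHiScores and AddDelayedUpdateAction from the question (so treat the exact wiring as illustrative, not as the only way to do it):

this.HiscoreQuery = hiscoreref.OrderByChild("Score").LimitToLast(20);
this.HiscoreQuery.ValueChanged += HandleHiscoresChanged;

void HandleHiscoresChanged(object sender, ValueChangedEventArgs args)
{
    if (args.DatabaseError != null)
    {
        Debug.LogError("Get hiscores failed: " + args.DatabaseError.Message);
        return;
    }
    // Fires once with the cached value (if any) and again when the server value arrives.
    if (args.Snapshot != null && args.Snapshot.ChildrenCount > 0)
    {
        this.AddDelayedUpdateAction(() => this.OnGetHiScores(args.Snapshot));
    }
}

// Detach the handler (HiscoreQuery.ValueChanged -= HandleHiscoresChanged) when the
// screen goes away, otherwise the query keeps listening.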
Related
I am developing a Xamarin application that communicates with an external custom device. My problem is very strange: when the application starts, it connects to the device automatically and everything is fine. When I suddenly remove the battery from the external device, the Bluetooth connection is broken, and that is handled fine too. But when I turn the external device on again, my Xamarin application reconnects to it without problems, yet the subscriptions no longer work.
I debugged it: the handlers are simply not called anymore. I think my unsubscribe/subscribe process is wrong.
...
if (ble.GetConnectionStatus())
{
    Device.BeginInvokeOnMainThread(() =>
    {
        ...
        ble.Adapter.DeviceConnectionLost -= Adapter_DeviceConnectionLost;
        ble.Adapter.DeviceConnectionLost += Adapter_DeviceConnectionLost;
        ble.PropertyChanged -= Ble_PropertyChanged;
        ble.PropertyChanged += Ble_PropertyChanged;
        data.PropertyChanged -= data_PropertyChanged;
        data.PropertyChanged += data_PropertyChanged;
        ...
    });
    ...
It is so strange, because this works the first time, when the app starts, but when I run the same subscription code after reconnecting it no longer works. If the approach is wrong, why does it work the very first time?
I get no error; the handlers just don't fire again after I resubscribe.
So as you can see, I need to "refresh" the subscriptions. Is there another way to solve this problem?
If that "button to recreate everything" works, then I see two alternatives.
Option 1:
Have such a button, so that user can manually "fix" the situation.
PRO: Gives the user a solution that is guaranteed to work.
CON: Requires user intervention.
Option 2:
Have a periodic timer that decides whether/when to forcibly "fix" the situation.
PRO: Automatically recovers.
CON: Risks losing data, if it forces a recovery at the same time data is arriving.
In pseudo-code, option 2 might be something like this:
// pseudo-code
static Timer timer = ..start a timer that has an event every 10 seconds.

OnTimerElapsed:
    if (!eventSeenRecently)
        ForceReset();
    eventSeenRecently = false;

..wherever you receive data..
    if (..has data..)
        eventSeenRecently = true;
The concept is that you keep track of whether data continues to be received. If the device stops sending you information (but you believe it should be), then you "ForceReset" - whatever is needed to get everything going again.
DeviceConnectionLost should also set a flag that you use to ForceReset when the device "comes back".
// pseudo-code
DeviceConnectionLost:
    resetNeeded = true;

OnTimerElapsed:
    if (resetNeeded && ..test that device is available again..) {
        ForceReset();
        resetNeeded = false;
    }
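Putting the two fragments together, a minimal C# sketch could look like the following. ForceReset and IsDeviceAvailable are placeholders for whatever re-subscribe/reconnect logic and device check your app needs:

// Minimal sketch; ForceReset() and IsDeviceAvailable() stand in for your own
// re-subscribe / reconnect logic and device availability check.
using System.Timers;

static Timer watchdog;
static volatile bool eventSeenRecently;
static volatile bool resetNeeded;

static void StartWatchdog()
{
    watchdog = new Timer(10000);          // raise Elapsed every 10 seconds
    watchdog.Elapsed += OnWatchdogElapsed;
    watchdog.Start();
}

static void OnWatchdogElapsed(object sender, ElapsedEventArgs e)
{
    // Device came back after a connection loss: force a full re-subscribe.
    if (resetNeeded && IsDeviceAvailable())
    {
        ForceReset();
        resetNeeded = false;
        return;
    }
    // No data seen since the last tick: assume the subscriptions are dead.
    if (!eventSeenRecently)
        ForceReset();
    eventSeenRecently = false;
}

// Wherever data arrives:               eventSeenRecently = true;
// In the DeviceConnectionLost handler: resetNeeded = true;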
Perhaps this custom device has some option or info that can help.
For example, there might be a way to query some id or other info, so you can discover that the device is now "different", in a way that requires the reset. Then the timer does that query, and uses that info to decide to reset.
Problem
I'm using Firebase Realtime Database (for Unity) to manage the server side of a turn-based game, but I have a problem with my matchmaking: high download usage.
Every online game has two base states to keep more than 2 players from joining a game: Created and Joined.
Created: A player tries to join a game; if none is found, a new game is created.
Joined: A player tries to join a game; if one is found, its state changes from Created to Joined.
I'm using RunTransaction to prevent more than 2 players from joining a game, but I found that the latest data was not fetched from the database because of the local cache. Adding KeepSynced on my matches-{lang} child node means it always has the latest data, but naturally this produces high download usage.
private DatabaseReference DatabaseReference()
{
    return FirebaseDatabase.DefaultInstance.RootReference.Child(MatchesLocation(LanguageManager.Manager.GetPlayerLanguageCode()));
}

private DatabaseReference DatabaseReferenceLangMatch(Language language)
{
    return FirebaseDatabase.DefaultInstance.RootReference.Child(MatchesLocation(LanguageManager.Manager.GetLanguageCode(language)));
}

public void ManageKeepSyncedMatches(Language lang)
{
    DatabaseReferenceLangMatch(Language.English).KeepSynced(lang == Language.English);
}
public void JoinMatchTransaction(GameMatchOnline gameMatchOnline, UnityAction<string, bool> callback)
{
    JoinTransactionAbort joinResult = JoinTransactionAbort.None;
    DatabaseReference matchesListRef = DatabaseReference();
    Dictionary<string, object> joinerDict = gameMatchOnline.ToJoinDictionary();
    matchesListRef.Child(gameMatchOnline.matchId).RunTransaction(matchData =>
    {
        Dictionary<string, object> matchDict = matchData.Value as Dictionary<string, object>;
        if (matchDict == null)
        {
            joinResult = JoinTransactionAbort.Null;
            return TransactionResult.Success(null);
        }
        if (!matchDict.ContainsKey("state"))
        {
            joinResult = JoinTransactionAbort.Error;
            return TransactionResult.Abort();
        }
        GameMatchOnline.State state = (GameMatchOnline.State)System.Convert.ToInt32(matchDict["state"]);
        if (state != GameMatchOnline.State.Created)
        {
            joinResult = JoinTransactionAbort.Error;
            return TransactionResult.Abort();
        }
        joinResult = JoinTransactionAbort.None;
        matchDict.Add("joinerInfo", joinerDict["joinerInfo"]);
        matchDict["state"] = joinerDict["state"];
        matchData.Value = matchDict;
        return TransactionResult.Success(matchData);
    }).ContinueWith(task =>
    {
        // Fail
        if (task.IsFaulted || task.IsCanceled)
        {
            UnityThread.executeInUpdate(() =>
            {
                if (joinResult == JoinTransactionAbort.Error)
                {
                    callback(null, false);
                }
            });
        }
        // Can Join match
        else if (task.IsCompleted)
        {
            UnityThread.executeInUpdate(() =>
            {
                if (joinResult == JoinTransactionAbort.None)
                {
                    AddListenerResultsValueChanged(gameMatchOnline.matchId, gameMatchOnline.joinerInfo.userId, gameMatchOnline.isPrivate, gameMatchOnline.language);
                    callback(gameMatchOnline.matchId, true);
                }
                else
                {
                    callback(null, false);
                }
            });
        }
    });
}
Question
Removing keepSynced, players will have locally cached information for matches-{lang}; can I trust that by doing this there will be no more than 2 players per game? *Transactions are supposed to avoid this kind of problem.
Is there a way to avoid the local cache for a request and thus always get the updated data?
Could the best solution be to move the games to another node to reduce the size of the matches-{lang} node?
Thanks!
Removing "keepSynced" players will have locally cached information for "matches", can I trust that by doing this there will be no more than 2 players per game? *Transactions are supposed to avoid this kind of problem.
With KeepSynced off, Transactions will still hit the local cache then hit the internet. It'll probably save you some bandwidth since it's a lazy access (that's assuming you don't do something like "get all matches"), and you'll be able to make the guarantees you need. Whether or not you use KeepSynced, you should be prepared for your transaction to run multiple times (and against null data if the local cache is empty).
Is there a way to avoid the local cache for a request and thus always get the updated data?
Correction
It looks like I got this a little backwards, see this answer for more details. It will return the cached value and request an updated one. Subsequent calls will get a new value when it's available. You should always try to use ValueChanged when possible.
old answer:
You _can_ just say `GetValueAsync`, which has to bypass the cache since it will only fire once. You really should use ValueChanged listeners to listen for changes and Transactions to change data if you can to keep everything up to date and to avoid data races.
Could the best solution be to move the games to another node to reduce the size of the "matches" node?
Generally the fewer people hitting a shared resource, the better your performance. If you haven't already, check out the Loteria post to see how a team created a realtime game on Realtime Database that was resilient enough to be a Google Doodle.
The TLDR is that rather than a player being responsible for creating or finding a game, players looking for games are written to a queue. When a player is added to the matchmaking queue, a Cloud Function trigger fires which does the work of hooking the users up. A client knows that they're in a game by putting a ValueChanged listener onto a player entry in the database and waiting for a game to be written into it.
The game was further kept low latency with some manual sharding logic. They performed some profiling to see how much traffic a single database could handle, then wrote some quick scaling logic (manual, since it was a one-day thing) to distribute players across a number of databases.
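To make the client side of that queue approach concrete, a sketch might look like the following. The node names matchmakingQueue and players/{userId}/currentMatch are invented for illustration, and the actual pairing would live in a Cloud Function, so treat this only as the shape of the idea:

// Client-side sketch only; matchmakingQueue and players/{userId}/currentMatch
// are invented node names, and the pairing itself would run in a Cloud Function.
public void EnterMatchmaking(string userId, string lang, UnityAction<string> onMatched)
{
    DatabaseReference root = FirebaseDatabase.DefaultInstance.RootReference;

    // 1. Put this player into the matchmaking queue.
    root.Child("matchmakingQueue").Child(lang).Child(userId)
        .SetValueAsync(ServerValue.Timestamp);

    // 2. Wait for the Cloud Function to write a match id into the player's entry.
    DatabaseReference myMatchRef = root.Child("players").Child(userId).Child("currentMatch");
    myMatchRef.ValueChanged += (sender, args) =>
    {
        if (args.DatabaseError != null || args.Snapshot == null || !args.Snapshot.Exists)
            return;
        string matchId = args.Snapshot.Value as string;
        if (!string.IsNullOrEmpty(matchId))
            onMatched(matchId);
    };
}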
I hope that all helps!
--Patrick
I want to make a RESTful API (or use any other approach that can get it done, really) that works in a loop and performs a specified task every day at the same hour.
Specifically, I want it to access a foreign API, let's say at midnight every day, request the specified data and update the database accordingly. I know how to make a request to an API and make it do something, but I want it to happen automatically so I don't have to interact with it at all, not even to make the requests.
The reason for this is that I'm working on a project that spans multiple platforms (and even if it were only one platform, there would be several users), and I can't make a request to the foreign API (mainly because it's a trial; it's a school project) every time a user logs in or clicks a button on each platform.
I don't know how to do that (or whether it's even possible) with a web service. I've tried a web form doing it asynchronously with BackgroundWorker, but got nowhere.
I thought I might have better luck here with more experienced people.
Hope you can help me out.
Thanks, in advance,
Fábio.
I don't know if I got this right, but it seems to me that the easiest way to do what you want (have a program scheduled to run at a given time every day) is to use the Windows Task Scheduler to run your application at the specific time you want.
I managed to get there, thanks to the help of #Pedro Gaspar - LoboFX.
I didn't want the Windows Scheduler, as I want it reflected in the code, and I don't exactly have access to the server where it's going to run. That said, what got me there was something like this:
private static string LigacaoBD = "something";
private static Perfil perfil = new Perfil(LigacaoBD);

protected void Page_Load(object sender, EventArgs e)
{
    Task.Factory.StartNew(() => teste());
}

private void teste()
{
    bool verif = false;
    while (true)
    {
        // At the configured hour/minute/second, allow the next insert.
        if (DateTime.UtcNow.Hour + 1 == 22 && DateTime.UtcNow.Minute == 12 && DateTime.UtcNow.Second == 0)
            verif = false;
        if (!verif)
        {
            int resposta = perfil.Guardar(DateTime.UtcNow.ToString());
            verif = true;
        }
        Thread.Sleep(1000);
    }
}
It inserts into the database through a class library. With this loop it guarantees that it only inserts once (hence the bool), and when it reaches the specified hour, minute and second it resets, allowing it to insert again. If the server goes down, it inserts anyway once it comes back up. The only problem is that if it had already inserted and the server goes down, it will insert again; but stored procedures can guard against that. Well, not for the DateTime.UtcNow.ToString() value, but that was just a test.
I have a C# console app that I would like to keep running even when its database crashes. In that case it should poll the database to see when it comes back online, and then resume operation. I have this code for it, which I don't like:
public static T Robust<T>(Func<T> function)
{
    while (true)
    {
        try
        {
            return function();
        }
        catch (GenericADOException e)
        {
            Console.WriteLine("SQL Exception. Retrying in 10 seconds");
            Thread.Sleep(10000);
        }
    }
}
[...]
N.Robust(() => Session.CreateCriteria(typeof(MyEntity)).List());
The problem is that I have to insert that pesky N.Robust construct everywhere, which clutters the code. Also, I run the risk of forgetting it somewhere. I have been looking into using NHibernate's event listeners or interceptors for this, but haven't been able to make it work. Do I really have to fork NHibernate to make this work?
Update
Alright, so I've been able to overcome one of my two issues. By injecting my own event listeners I can at least ensure that all calls to the database go through the above method.
_configuration.EventListeners.LoadEventListeners
    = new ILoadEventListener[] { new RobustEventListener() };

[...]

public class RobustEventListener : ILoadEventListener
{
    public void OnLoad(LoadEvent e, LoadType type)
    {
        if (!RobustMode)
            throw new ApplicationException("Not allowed");
    }
}
I am still left with a cluttered code base, but I think it's a reasonable price to pay for increasing service uptime.
One architectural approach to tolerating database downtime is to use a queue (client side and/or server side). For reads of static or largely static data, cache on the client side with an expiry window (say 15 - 30 minutes).
This is non-trivial if you have complex database transactions.
Sleeping like you propose is rarely a good idea.
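As a rough illustration of the client-side read cache with an expiry window mentioned above (a minimal sketch; ExpiringCache, the Country entity and the fallback-to-stale-on-failure policy are all invented for the example):

using System;

// Minimal sketch of a read cache with an expiry window. The class name and
// the fallback-to-stale-on-failure policy are illustrative, not prescriptive.
public class ExpiringCache<T>
{
    private readonly TimeSpan _ttl;
    private T _value;
    private DateTime _loadedAt = DateTime.MinValue;

    public ExpiringCache(TimeSpan ttl) { _ttl = ttl; }

    public T Get(Func<T> load)
    {
        if (DateTime.UtcNow - _loadedAt > _ttl)
        {
            try
            {
                // Only hit the database when the cached value is stale.
                _value = load();
                _loadedAt = DateTime.UtcNow;
            }
            catch
            {
                // Database unavailable: keep serving the stale copy if we have one.
                if (_loadedAt == DateTime.MinValue) throw;
            }
        }
        return _value;
    }
}

// Usage, e.g. for largely static reference data loaded via NHibernate:
// var countries = new ExpiringCache<System.Collections.IList>(TimeSpan.FromMinutes(20));
// var list = countries.Get(() => Session.CreateCriteria(typeof(Country)).List());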
Another option (used mainly in occasionally connected applications) is to use database replication. Using an RDBMS with replication support (SQL Server, for example), have your application always talk to the local DB, and let the replication engine deal with synchronizing with the remote database automatically when the connection is up. This will probably introduce the issue of conflict management/resolution.
This is an interesting question. I am developing a web-chat software piece, and for the past couple of hours I've been trying to figure out why this happens. Basically, I add the actual chat object (the part that does the communication) to the Cache collection when you start chatting. In order to detect that you closed the window, I set the sliding expiration to, say, 10-30 seconds. I also set the callback so the chat client knows it needs to disconnect and end the chat session. For some odd reason, when I use the code to dispose of the chat client, whatever it is, it causes the entire w3svc process to crash (event log checked). I also tried just sending myself an email when the item is removed, which worked. I even tried to put the entire code in a try-catch block, but it seems to ignore that as well. Any ideas? O_o
UPD: No, I am not trying to refresh the object (in reference to this).
Adding:
HttpContext.Current.Cache.Insert("ChatClient_" + targetCid + HttpContext.Current.Session.SessionID, cl, null,
    Cache.NoAbsoluteExpiration, TimeSpan.FromSeconds(15),
    CacheItemPriority.Normal, new CacheItemRemovedCallback(removeMyself));
Removing:
public static void removeMyself(string key, Object value, CacheItemRemovedReason reason) {
    var wc = (WebClient)value;
    try {
        wc.Remove();
    }
    catch { }
}
I am in fact using a lock on HttpContext.Current.Cache when adding objects to the cache.
Can you post both the Cache.Insert and item-removed callback code? Are you using any kind of locking when inserting into the cache? Have you done anything to the default settings for the ASP.NET cache? Are you able to reproduce this on another web server? Are you sure you are expiring the cache in ms instead of seconds...
Is your sliding expiration like this? TimeSpan.FromSeconds(30)