I'm writing a pretty big web system with ASP.NET MVC, which involves sending data in real time to numerous users based on what they are subscribed to.
My team and I decided to use SignalR for that, and I am in charge of implementing it in the system.
In our case, a user picks a group to join, and then picks one Thing to work on.
For that, I'm saving all the users in a DB. I'll be using SignalR Groups to handle the first category, and when I need to push a message to a specific user (for the other thing he's picking) I'll just get his ConnectionID from the DB.
Here's the problem: every time the page is refreshed (for instance, when the user picks a group to join) he gets a new ConnectionID, and then he won't see anything that's pushed to him.
I saw that in the SignalR beta, and in version 2 (I only have 1.1.1 on the computer I'm working on), you can make your own IUserIdProvider (IUserIdPrefixGenerator in the beta), or IUserConnectionIdFactory, etc., so I can give him the ID I want him to have, but I don't seem to have any of those in my version of SignalR.
There are many ways to solve this, but perhaps one of the simplest is to associate the new connection id with the user (maybe they still have the other connection open in a different tab). This can be done using any combination of IP address, User-Agent, headers, or location. Another good candidate is to use sessions, or just a simple identifier cookie (which is more or less what a session would do anyway).
I'll often use GUIDs for this and create a table in the database when a new identifier cookie is created. Every time the user refreshes or opens a new tab, the cookie can be read in JS and sent along when the hub connection is started. Then you can create an association between the new connection id and the existing identifier cookie.
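On the server side, a minimal sketch of that association, assuming the GUID from the identifier cookie is passed as a "uid" query string value when the connection starts (the hub name and SaveConnection are placeholders, not from the original code):

public class NotificationsHub : Hub   // placeholder hub name
{
    public override Task OnConnected()
    {
        // The client reads its identifier cookie and passes it along, e.g.
        // $.connection.hub.qs = { uid: identifierFromCookie }; before start().
        string uid = Context.QueryString["uid"];

        // Tie the brand-new connection id to the stable identifier, e.g. by
        // inserting a row into the table mentioned above (placeholder method).
        SaveConnection(uid, Context.ConnectionId);

        return base.OnConnected();
    }
}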
I'd highly recommend figuring out a different way to maintain your users' persisted connections. Typically, I keep all of my users' connection ids in a ConcurrentDictionary to allow for thread-safe access to the collection. I remove users from the dictionary whenever a disconnection event occurs and add them whenever a connection event occurs.
SignalR will manage your users' connections for you. Doing it in the database, where it can fall out of sync with SignalR, circumvents a lot of the mechanics that make it work correctly in the first place.
// Lazily created singleton, wired up with the hub context's clients.
private readonly static Lazy<App> _instance = new Lazy<App>(
    () => new App(GlobalHost.ConnectionManager.GetHubContext<AppHub>().Clients));
// Thread-safe map keyed by user name (case-insensitive); each User tracks its connection ids.
private readonly ConcurrentDictionary<string, User> _users =
    new ConcurrentDictionary<string, User>(StringComparer.OrdinalIgnoreCase);
private IHubConnectionContext Clients { get; set; }
public App(IHubConnectionContext clients)
{
    Clients = clients;
}
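As a rough sketch of that connect/disconnect wiring (App.Instance, AddConnection, and RemoveConnection are hypothetical helpers over the _users dictionary, not code from the original, and the OnDisconnected signature differs between SignalR versions):

public class AppHub : Hub
{
    public override Task OnConnected()
    {
        // Record the new connection id against the logical user.
        App.Instance.AddConnection(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected()   // OnDisconnected(bool stopCalled) in SignalR 2.x
    {
        // Drop only this connection; the same user may still be connected in another tab.
        App.Instance.RemoveConnection(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnDisconnected();
    }
}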
Related
I have a sample web app deployed to Azure. The app caches a value (from the database) using MemoryCacheEntryOptions so that it expires in 5 minutes.
However, after 5 minutes I can still query the cache via the Chrome debugging tool; I expected the cached value to be empty, or to be whatever new value is currently stored in the database.
I even tried clearing the cache in the web browser, but the cache still seems to retain the previous value.
However, when I restart the web site and open the web app again, the cached value no longer exists.
Is there any setting in Azure that might affect the cache expiry?
private readonly IMemoryCache _memoryCache;
private readonly MemoryCacheEntryOptions _cacheEntryOptions;

protected CacheService(IMemoryCache memoryCache)
{
    _memoryCache = memoryCache;

    // Entries are evicted 5 minutes (300 seconds) after being added.
    _cacheEntryOptions = new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(300)
    };
}
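For reference, the options are applied when reading through the cache, roughly like this (GetValueAsync and LoadFromDatabaseAsync are placeholder names, not the real methods):

public async Task<string> GetValueAsync(string key)
{
    // Served from memory until the 5-minute absolute expiration evicts the entry.
    if (_memoryCache.TryGetValue(key, out string cached))
        return cached;

    // On first access, or after the entry has expired, reload and re-cache the value.
    string fresh = await LoadFromDatabaseAsync(key);   // placeholder for the real database query
    _memoryCache.Set(key, fresh, _cacheEntryOptions);
    return fresh;
}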
Debugging the behavior of a web application is notoriously hard, as all you have to control it with is the browser, and you never get exclusive access.
Even if you did not refresh the page, any number of things might have queried the server; the culprits range from search engine web crawlers to somewhat aggressive security tools (because some viruses make use of web servers). You could try a much shorter timeout. But ideally you want both the server and the client you access it with running in separate virtual machines that are connected only via the hypervisor. That way you can be certain nobody is interfering.
Let's say the user already has files synchronized (via my app) to their Drive folder. Now they sign into my app on a second device and are ready to sync files for the first time. Do I use the Changes API for the initial sync process?
I ask because using the Changes API requires a StartPageToken, which requires that there has been a previous sync operation. There is no possible way for the user to already have a StartPageToken if they are synchronizing data on a device for the first time.
Google's documentation is a joke. They shouldn't leave it up to us to read between the lines and just figure this out. I'm sure I can cook up something that will "work", but how do I ever know that it is the "appropriate" and EFFICIENT way to go about handling this?
public async Task<AccessResult> GetChangesAsync(CancellationToken cancellationToken, string fields = "*")
{
    // startPageToken and Folder_appDataFolder are fields/constants defined elsewhere in this class.
    ChangesResource.ListRequest listRequest = new ChangesResource.ListRequest(DriveService, startPageToken)
    {
        Spaces = Folder_appDataFolder,
        Fields = fields + ", nextPageToken",
        IncludeRemoved = true,
        PageSize = 20
    };

    ChangeList changeList = await listRequest.ExecuteAsync(cancellationToken);
    // ... process changeList.Changes and build the AccessResult to return ...
}
Here, I am looking to sync the user's data for the first time, so a page token doesn't even make sense: during the first sync the goal is to get all of the user's data. From then on you are only looking to sync any further changes.
One approach I thought of is to simply use ListRequest to list all of the user's data and start downloading the files that way. I can then request a start page token and store it to be used during sync attempts that occur later...
...But what if, during the initial download of the user's files (800 files, for example), an error occurs and the ListRequest fails on file 423? Because I cannot obtain a StartPageToken in the middle of a ListRequest to store in case of emergency, do I have to start all over and download all 800 files again instead of resuming at file 423?
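If I go the list-everything route, one way I could make that initial pass resumable is to persist the page token after each completed page rather than tracking individual files (DownloadFilesAsync and PersistPageTokenAsync are just placeholder names):

public async Task InitialSyncAsync(CancellationToken cancellationToken)
{
    // Start from null, or from the last token persisted before a failed run.
    string pageToken = null;
    do
    {
        FilesResource.ListRequest request = DriveService.Files.List();
        request.Spaces = "appDataFolder";
        request.Fields = "nextPageToken, files(id, name, modifiedTime)";
        request.PageSize = 100;
        request.PageToken = pageToken;

        FileList page = await request.ExecuteAsync(cancellationToken);
        await DownloadFilesAsync(page.Files, cancellationToken);   // placeholder download logic

        // Persisting per page means a failure only costs the current page, not all 800 files.
        pageToken = page.NextPageToken;
        await PersistPageTokenAsync(pageToken);                    // placeholder persistence
    }
    while (pageToken != null);
}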
When doing changes.list for the first time, you should call getStartPageToken; this will return the page token you can use to get the change list. If it's the first time, then there will be no changes, of course.
If the user is using your application from more than one device, then the logical course of action would be to save the page token in a central location when the user starts the application for the first time on the first device. This will enable you to use that same token on all additional devices that the user may choose to use.
This could be on your own server or even in the user's app data folder on Drive.
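For example, recording the starting point on a first run might look like this (DriveService is the authorized Drive v3 service from the question; SavePageTokenAsync is a placeholder for wherever you store it):

// There are no changes to list yet on a first sync; just record where "now" is
// so that later syncs can call changes.list from this token.
var response = await DriveService.Changes.GetStartPageToken().ExecuteAsync();
string startPageToken = response.StartPageTokenValue;

// Persist this centrally (your own server, or the appDataFolder) so every
// device the user signs into starts from the same point.
await SavePageTokenAsync(startPageToken);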
I am not exactly sure what your application is doing, but I really don't think you should be downloading the user's files unless they try to access them. There is no logical reason I can think of for your application to store a mirror image of a user's Drive account. Access the data they need when they need it; you shouldn't need everything. Again, I don't know exactly what your application does.
I have a SignalR Chat that is available to anonymous users. I need a way to map the users so that the connections persist on page reload, and if the user has multiple tabs they should get the message displayed on every tab.
If I use Context.ConnectionId, every page reload creates a new connection. I want to map these connections using Single-user groups.
For logged-in users, I use Context.User.Identity.Name:
Groups.AddAsync(Context.ConnectionId, Context.User.Identity.Name);
Is there a similar way to get an anonymous user's "identity"? From what I have read, session is not supposed to be used in the SignalR hub, but all the information I found is old, so I may be wrong here.
When you reload a page, the existing connection is closed and a new one is opened. The new connection gets a new connection id, and on the server side you cannot tell which user initiated it. Depending on your circumstances you could try identifying the user by their IP address: store the user's IP when a connection is opened and, when a new connection is opened, check whether you have already seen that IP. This may not work, however, because the same user can have different IPs and multiple users can share the same IP. Another method is to send a client-side generated identifier in the query string when opening the connection and use that to identify the same user on the server side.
You must implement some mechanism to create a unique ID for each user.
I would do something like the following:
Before the user actually connects to the hub, create a unique ID and store it in a cookie.
Set the query string for the SignalR URL with the value of this cookie, so that each call to the hub carries the user's ID.
Now, even when the user refreshes the page, the old connection is automatically removed from the group by SignalR, and you can continue adding new connections mapped against the uniquely generated ID (see the sketch below).
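A hub-side sketch of that approach, assuming ASP.NET Core SignalR to match the Groups.AddAsync call in the question ("uid" is the cookie-backed identifier passed in the query string; the method is named AddToGroupAsync in newer versions):

public override async Task OnConnectedAsync()
{
    // Client side, the GUID cookie value is appended to the hub URL,
    // e.g. .withUrl("/chatHub?uid=" + uid)   // placeholder URL
    string uid = Context.GetHttpContext().Request.Query["uid"];

    // Logged-in users are grouped by name; anonymous users by their generated id.
    string groupName = Context.User?.Identity?.IsAuthenticated == true
        ? Context.User.Identity.Name
        : uid;

    // Every tab/connection of the same user joins the same single-user group,
    // so a message sent to the group reaches all of that user's open tabs.
    await Groups.AddAsync(Context.ConnectionId, groupName);

    await base.OnConnectedAsync();
}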
I'm trying to set up MSMQ communication between two services on two separate machines within the same domain.
The machines are a Primary - Secondary situation.
I need to be able to send and purge from the Primary, and receive on the Secondary.
I can't seem to find a setup that allows for this. I had one setup where the queues existed on the Secondary: the Primary could send and the Secondary receive, BUT I could not purge from the Primary as I need to. I believe this was due to the queues being private and remote.
So I tried to flip the situation. I put the queues on the Primary, but then I couldn't send to the private local queue, so I made them public, and now I can't get the Secondary to find the remote public queues.
if (!MessageQueue.Exists(queueName))
{
    // Log that they don't exist and exit.
    throw new Exception("One or more of the required queues do not exist");
}

syncQ = new MessageQueue(queueName)
{
    Formatter = new XmlMessageFormatter(new Type[] { typeof(String) })
};

if (machineState == MachineState.Primary)
{
    syncQ.SetPermissions(ASPConfiguration.SyncUser, MessageQueueAccessRights.FullControl, AccessControlEntryType.Allow);
    syncQ.Purge();
}
Primary = ".\nw"
Secondary = "FormatName:Direct=OS:CACTEST-WS-D\nw"
It doesn't really matter to me whether the queues are private or public, just as long as I can do what I need.
Thanks for the Help.
In answer to your queries:
I had one setup where the queues existed on the Secondary and the Primary could Send and the Secondary Receive BUT I could not purge from the Primary as I need to.
The initial setup you had, with the queue on the Secondary and the Primary sending to the Secondary, is the correct configuration. I'm unsure why you are unable to purge the Secondary's queues from the Primary; this is probably just a simple case of queue permissions. Try granting full control to the service account running on the Primary.
So I tried to flip the situation. I put the queues on the Primary, but then I couldn't Send to the private local queue
Again, I'm unsure why this is. What errors do you receive? You can enable MSMQ tracing, which will give you any transmission errors on both machines. Anyhow, this is not the correct configuration: it's always best to send remote, receive local.
Primary = ".\nw"
It's always better to use the fully formatted MSMQ address in all situations to avoid addressing problems.
It doesn't really matter if the queues are private or public to me
Well, it really, really does matter. Public queues are for enterprise-scale configurations with high-volume, clustering, and/or routing requirements, and anyhow you have to register them with Active Directory for them to even work. If you just want to send messages from place to place, ALWAYS use private queues.
My advice is to revert to your initial setup and concentrate on getting the remote purge working, along the lines of the sketch below.
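Roughly, that setup looks like this (the machine and queue names come from the question, with CACTEST-WS-D standing in for whichever machine hosts the queue; note that MessageQueue.Exists cannot be used with format names or remote private queues):

// On the Secondary (the queue host): receive from the local private queue.
var localQueue = new MessageQueue(@".\private$\nw")
{
    Formatter = new XmlMessageFormatter(new[] { typeof(string) })
};
Message received = localQueue.Receive();

// On the Primary: open the same queue remotely via its direct format name.
var remoteQueue = new MessageQueue(@"FormatName:DIRECT=OS:CACTEST-WS-D\private$\nw")
{
    Formatter = new XmlMessageFormatter(new[] { typeof(string) })
};
remoteQueue.Send("sync payload");

// Remote purge only succeeds if the account running on the Primary has been
// granted sufficient rights on the queue (set the permissions on the Secondary,
// where the queue actually lives).
remoteQueue.Purge();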
I must build an application that will use Web Clients multiple times to retrieve information from a server every "t" seconds.
Here is a small plan to show you what I'm doing in my application:
Connect to the Web Client "USER_LOGIN", which returns a GUID (the user's unique ID). I save it and keep it to use in future Web Client calls.
Connect to the Web Client "USER_GETINFO", using the GUID I saved before as a parameter. This Web Service returns an array of strings holding all my personal user information (my name, age, email, etc.). I save the array information this way: Textblock.Text = e.Result[2].
Start a DispatcherTimer with a 2-second tick to run my loop. (The purpose of this is to retrieve the information and update it every 2 seconds.)
Connect to the Web Client "USER_GETFRIEND", which is in my timer, giving it the GUID as a parameter. It returns an array filled with my friends' information (name, email, message, etc.). I put this Web Client in the timer so my friend list refreshes every 2 seconds.
I am able to implement all the steps without any error until step 3. When I call the "USER_GETFRIEND" Web Client I face two major problems:
On one side, I noticed that my number of threads increased dramatically. I always thought that when a Web Client had finished its instructions it would shut down by itself, but apparently that does not happen with asynchronous calls.
On the other side, I was surprised to see that when using the same proxy for two Web Client calls (i.e. if I declare test.MainSoapClient proxy = new test.MainSoapClient()), the data I retrieve from the "USER_GETFRIEND" e.Result was sent directly to my "USER_GETINFO" array, so my name and email address on the UI were replaced by the corresponding values from the USER_GETFRIEND array: my name is changed to my friend's email, and so on...
I would like to know if it's possible to close a Web Client call (or thread) that I am not using anymore, to prevent any conflicts. Or, if anyone has a suggestion concerning my code and the way I should develop my application, please feel free to propose it.
I got the answer a few weeks ago and figured it was important to answer my own question.
My whole problem was that I wasn't unsubscribing from my asynchronous calls and that I was reusing the same proxy class from "Add Service Reference":
So when I was using:
proxy.webservice += new EventHandler<whateverinhere>(my_method);
I never did:
proxy.webservice -= new EventHandler<whateverinhere>(my_method);
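For illustration, the pattern is roughly the following (the GetFriends* names and friendsList are placeholders, not the real generated proxy members):

var proxy = new test.MainSoapClient();

EventHandler<GetFriendsCompletedEventArgs> handler = null;   // placeholder args type
handler = (s, e) =>
{
    // Detach first, so a stale handler can't fire again later and overwrite other data.
    proxy.GetFriendsCompleted -= handler;
    friendsList.ItemsSource = e.Result;                       // placeholder UI update
};

proxy.GetFriendsCompleted += handler;
proxy.GetFriendsAsync(userGuid);                              // the GUID saved from USER_LOGIN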
Hope it will help someone.