I want to map users to connection IDs when they connect to my hub class. What is a good strategy for doing this efficiently? I want to associate a user's profile with their connection ID so that when I check which users are in a particular SignalR group, I can get each user's profile info easily.
Technically, if you're not worried about maintaining state, you could solve this with a poor man's in-memory ConcurrentDictionary<string, ConcurrentBag<string>>, but I would assume you are trying to be a little more scalable/fault tolerant than that.
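A minimal sketch of that in-memory approach (assuming SignalR 2.x; the hub and member names here are illustrative):

    using System.Collections.Concurrent;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class ChatHub : Hub
    {
        // One logical user can have several connections open (tabs, devices).
        private static readonly ConcurrentDictionary<string, ConcurrentBag<string>> Connections =
            new ConcurrentDictionary<string, ConcurrentBag<string>>();

        public override Task OnConnected()
        {
            var bag = Connections.GetOrAdd(Context.User.Identity.Name,
                                           _ => new ConcurrentBag<string>());
            bag.Add(Context.ConnectionId);
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            ConcurrentBag<string> bag;
            if (Connections.TryGetValue(Context.User.Identity.Name, out bag))
            {
                // ConcurrentBag has no Remove, so rebuild it without this id.
                Connections[Context.User.Identity.Name] = new ConcurrentBag<string>(
                    bag.Where(id => id != Context.ConnectionId));
            }
            return base.OnDisconnected(stopCalled);
        }
    }

Everything lives in one process, which is exactly why it isn't fault tolerant: an app pool recycle wipes the map, and a web farm never shares it.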
JabbR, the flagship testbed chat application for the SignalR framework, stores the connected client details in a table in its database (which happens to be SQL). It maps a single ChatUser to many ChatClient instances (one-to-many). This way, when a logical user is logged in, the app knows who that user is and can direct the proper messages to all the connected client instances that user might currently have open. You can find that specific implementation here if you're interested in learning more about it.
I have to design a web application to support multiple clients.
I'm thinking of having a MongoDB database that stores, for each user, their username or email together with the name of that user's connection string.
With that connection string I would then get to the client's SQL database.
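Roughly, the flow I have in mind (just a sketch using the official C# driver 2.x; all names are made up):

    using System.Data.SqlClient;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public class Tenant
    {
        public ObjectId Id { get; set; }
        public string Email { get; set; }
        public string ConnectionString { get; set; }
    }

    public class TenantLookup
    {
        public SqlConnection OpenClientDatabase(string email)
        {
            var mongo = new MongoClient("mongodb://localhost:27017");
            var tenants = mongo.GetDatabase("master").GetCollection<Tenant>("tenants");

            // Find the record that maps this user to their client database.
            var tenant = tenants.Find(t => t.Email == email).FirstOrDefault();

            // Open the client's own SQL database with the stored connection string.
            var conn = new SqlConnection(tenant.ConnectionString);
            conn.Open();
            return conn;
        }
    }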
But I'm not sure if this is the best approach.
Do you have any other suggestions?
I had a situation similar to yours.
We used one common (parent) database that stored the connections per client, plus a simple interface to control the child databases. The child databases are separate, and you can create as many of them, manually or automatically, as you have clients.
It depends on how you want to find clients. Our system used one client per URL: every client had its own URL and its own database. So in code we check the URL, then get the connection string from the main database and initialize the context with that specific connection (sketched below).
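Something like this (Entity Framework assumed for the contexts; all names are illustrative):

    using System.Data.Entity;
    using System.Linq;

    // Parent database: one row per client.
    public class ClientInfo
    {
        public int Id { get; set; }
        public string Host { get; set; }              // e.g. "client1.example.com"
        public string ConnectionString { get; set; }  // the client's child database
    }

    public class MasterDbContext : DbContext
    {
        public MasterDbContext() : base("name=MasterDb") { }
        public DbSet<ClientInfo> Clients { get; set; }
    }

    // Child database context, opened with whatever connection string the
    // parent database handed back for this client.
    public class ClientDbContext : DbContext
    {
        public ClientDbContext(string connectionString) : base(connectionString) { }
        // DbSets for the client's own data go here.
    }

    public static class ClientContextFactory
    {
        public static ClientDbContext ForHost(string host)
        {
            using (var master = new MasterDbContext())
            {
                var info = master.Clients.Single(c => c.Host == host);
                return new ClientDbContext(info.ConnectionString);
            }
        }
    }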
You'd need to provide more details to get more specific advice; the right solution depends on your goal.
I've seen some projects take the URL-based approach... However, if you want your application to be more dynamic, say migrating from a server-side to a client-side application without your URLs changing, I would say your "user-based" approach is more ideal in my opinion. Good luck.
If you have many clients, each with their own database, then you must build a separate web application for each of them, even if they are copy/paste clones of one another.
If you have many clients under the same URL and the same web application, then you can have one database and separate the clients inside it.
The web.config is not designed for frequently changing connection strings: you set it up once so it works, and then you forget it.
Every time you change the web.config you trigger a chain of events: the application restarts, recompiles if it finds a reason to, etc.
Good day!
I'm developing a web application to access an ancient Informix 7.31.UC4 database with the IBM Informix Client SDK for .NET. The system is currently built so that each user must connect with their own DBMS credentials (not Identity credentials), which means that I must either
a) connect with different credentials on each request, which is way too slow (over 5 seconds for creating, opening, and closing a connection, to say nothing of the queries themselves),
or b) somehow store connections in a Dictionary keyed by username, make use of their pooling, and avoid multiple connections for the same user in different sessions (sketched below),
or c) store a connection for each Session.
The first approach is unacceptably slow. The second and third approaches give me no way of disposing the IfxConnection (which implements IDisposable), which makes me cry.
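For illustration, option (b) as I imagine it (only a sketch; the cache class is made up, and the connection string keywords should be checked against the Informix provider docs):

    using System.Collections.Concurrent;
    using IBM.Data.Informix;

    public static class UserConnectionCache
    {
        // One pooled connection per set of credentials, shared across sessions.
        private static readonly ConcurrentDictionary<string, IfxConnection> Cache =
            new ConcurrentDictionary<string, IfxConnection>();

        public static IfxConnection GetOrOpen(string user, string password)
        {
            // Note: under a race the factory can run twice and leak one
            // connection; a real implementation would need to guard that.
            return Cache.GetOrAdd(user, _ =>
            {
                var conn = new IfxConnection(
                    "Server=myserver; Database=mydb; User ID=" + user +
                    "; Password=" + password);
                conn.Open();
                return conn;
            });
        }

        // With no reliable per-session hook for abandoned sessions, the only
        // safe place to dispose everything is application shutdown.
        public static void DisposeAll()
        {
            foreach (var conn in Cache.Values) conn.Dispose();
            Cache.Clear();
        }
    }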
Am I missing a way to dispose objects stored in the session when the session is abandoned or ends? Is there a way to take advantage of connection pooling with multiple sets of credentials? Or is it simply that the system was ill-designed in the first place and I should not care?
The only way I currently see is to create an Informix user with sufficient rights, keep one pooled connection, and use a separate SQL Server database for storing users and roles, enforcing all rights and roles at the controller level, but the database designer won't hear me out.
P.S. I've seen several questions about disposable objects in sessions; all say it's bad, but I see no other option.
I have a client who would like a system developed to handle sending out alert emails to opted-in registered users. The alerts are based on geographic travel-related incidents, which are provided via a third-party web service. The registered userbase is expected to be around 500,000 users, and each user can subscribe to multiple alert categories.
I'm just trying to scope out what would be required technically to create this alerting functionality. In my mind we would create something like the following:
Poll the alert service once hourly
Store a "queue" of alerts in a temporary database table
Process the queue table as a scheduled task and send out the emails to the opted-in users
The part that I'm really not sure about is the physical sending of the emails. I was wondering if anyone could recommend any good options here. I guess we're looking to either create a custom component to send out the emails or use a dedicated third-party service (like dotMailer maybe? Or a custom mass mail server?). I've read that rolling your own solution runs the risk of having IPs blacklisted if you're not careful, and obviously we want to keep this all completely legitimate.
I should also mention that we will be looking to create the solution using .NET (C#) and we're limited to SQL Server Express 2008 edition.
Any thoughts much appreciated
Many thanks in advance
For the poll, queue, and send operations I'd create a Windows service that calls the external service, operates on the data, and then gathers the results to send out the emails, updating the database as necessary with sent flags etc. Roughly along these lines:
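(A bare-bones sketch of the send step; the table names, SMTP host, and addresses are all invented.)

    using System.Data.SqlClient;
    using System.Net.Mail;

    public class AlertMailer
    {
        private readonly string _connectionString;

        public AlertMailer(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Called on a schedule: read the queued alerts, mail every opted-in
        // subscriber of the matching category, then flag the batch as sent.
        public void ProcessQueue()
        {
            var smtp = new SmtpClient("mail.example.com");

            using (var db = new SqlConnection(_connectionString))
            {
                db.Open();

                var select = new SqlCommand(
                    @"SELECT U.Email, Q.Subject, Q.Body
                      FROM AlertQueue Q
                      JOIN Subscriptions S ON S.CategoryId = Q.CategoryId
                      JOIN Users U ON U.Id = S.UserId
                      WHERE Q.Sent = 0", db);

                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        smtp.Send(new MailMessage("alerts@example.com",
                            reader.GetString(0), reader.GetString(1),
                            reader.GetString(2)));
                    }
                }

                // Flag the processed rows so the next run skips them.
                new SqlCommand("UPDATE AlertQueue SET Sent = 1 WHERE Sent = 0", db)
                    .ExecuteNonQuery();
            }
        }
    }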
I handled a similar project recently and we found many ISPs and hosting providers got really twitchy when we mentioned mass emails. You should definitely check out the CAN-SPAM guidelines (http://en.wikipedia.org/wiki/CAN-SPAM_Act_of_2003), or whatever the equivalent is in your territory.
As long as you play by the rules and follow the guidelines, you should be OK sending out from a local mail server; however, it's important to ensure that the DNS lookups and reverse DNS lookups on the MX records all point back and forth to each other properly. Indeed, this would be easier to outsource to a third-party mail provider or ISV, but when we tried we were unable to find a good fit and ended up doing it ourselves.
Additionally, you may want to look at SPF records and other means of improving mass email deliverability. For what it's worth, this can be a very tricky task to implement: SMTP (my least favourite protocol) is painful to debug, and people get very upset if they receive duplicate or unsolicited emails, so make sure you have an opt-in policy and appropriate checks to prevent duplicate delivery.
The program I am making requires real-time cross-computer interaction via the internet.
The issue I'm running into is that I want the clients to connect to a host client rather than going with a client-server model, and there are a lot of problems in getting the host client to actually host (accept an incoming connection, etc.).
I'm trying to make the process of hosting a session as simple as possible, so that a user with no networking knowledge can accept incoming connections without having to configure their router or any other such thing. I was wondering how I could achieve this?
Sounds like you want to programmatically update firewall rules. Given the variation in network setups, it's not possible to have a one-size-fits-all approach. I think you have three choices, the last probably being the best:
1) The Internet Gateway Device (UPnP) protocol: http://en.wikipedia.org/wiki/Internet_Gateway_Device_Protocol (see the sketch below)
2) A tunneling protocol: http://en.wikipedia.org/wiki/Tunneling_protocol
3) Instructions for users to configure their routers (needed as a back-up for the users for whom the first two fail)
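For option 1 there are libraries that speak UPnP/IGD for you. A rough sketch using the open-source Mono.Nat library (API shown from its older synchronous versions; double-check against the release you use, and the port number is arbitrary):

    using System;
    using Mono.Nat;

    class HostSetup
    {
        static void Main()
        {
            NatUtility.DeviceFound += (sender, e) =>
            {
                // Ask the router to forward external TCP port 6600 to this machine.
                e.Device.CreatePortMap(new Mapping(Protocol.Tcp, 6600, 6600));
                Console.WriteLine("Port mapped; external IP: " + e.Device.GetExternalIP());
            };

            NatUtility.StartDiscovery();   // search the LAN for UPnP routers
            Console.WriteLine("Waiting for router... press Enter to quit.");
            Console.ReadLine();
        }
    }

If the router has UPnP disabled (or sits behind carrier-grade NAT), no device is ever found, which is why option 3 remains the necessary fallback.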
I am looking for a way to detect when a SQL Server fails to respond (timeouts, exceptions, etc.), so that if our C#/.NET program asks a server that is down for maintenance, it will jump to another server and try that one automatically.
I would like a solution where we open the SQL connection and THEN get the timeout. We could build a WCF service and ask that one instead, but that is a bit of overkill in my opinion.
The sync between multiple server locations is not the issue here.
Our development platform at the moment is SQL Server 2008 Express, as it's plenty for now, but later on we might switch to full SQL Server 2008 (or whatever is the latest when the time comes).
Clients will connect to the "first known" server in a "last known" dynamic list, asking a "root server" or hardcoded configs for the first lookup.
When clients lose their connections, they will automatically try to reconnect to other nodes in the cluster and use whichever returns a reply first. The nodes will individually connect and share data through other services, which we also distribute in the cloud.
We know that mirroring and clustering might be available through large licenses, but our setup demands more dynamic "linking", and we believe this approach suits our needs better.
So... to be specific:
we need to detect when a SQL Server goes offline and is no longer available, e.g. in the middle of a transaction or when we try to connect.
Is the best approach to do try-catch exception handling, or are there better tricks when working over WANs with C#/.NET?
EDIT
I've received a lot of good ideas about using failover servers, but I would like a more programmatic approach, so what's the best way to query whether a server is available?
Situation:
4 different SQL Servers running on separate WAN IPs; each will maintain a list of "where the others are" (peer-to-peer). They will automatically move data between each other (much like a RAID setup where data is spread out across multiple drives).
A client retrieves the list from an entry-point server and asks the first available server.
If the client asks a server that is "down for maintenance", or the data has moved to one of the other servers, it must automatically ask the next one in the list.
What we are looking for is the best way, from within C#/.NET, to detect that a server is currently unavailable:
1) We could have a service we connect to, and when we lose that connection, the server is considered down.
2) We could call "dbConnectionSqlServer3.open()" and wait for the timeout.
3) We could invest in "real cluster servers", pay a lot, and bind ourselves to one SQL Server type (the SQL platform might change in the future, so this is not a real option).
So what's your vote here: 1 or 2?
Or can you come up with an option 4 that beats the heck out of these ideas? :o)
I would probably take a different approach.
I'd designate a server (with a failover partner) to be the holder of the server list. It should monitor all of the other servers for availability.
When the client initially wants to connect, it should contact the list server and ask it what to use. From that point forward the client should stick with that server unless a failure is detected, at which point it should contact the list server again to get a new one.
The list server would be able to implement any type of load balancing you wanted just by telling the clients which one to connect to. Further, deploying new servers would be easy as the only "list" would be maintained by this primary server.
This completely ignores any sort of server synchronization issues you might have.
One major benefit is that, by having a central server do the monitoring, your clients won't have to fall through 3, 5, or 10 servers just to find one that's up, which improves responsiveness.
UPDATE
@BerggreenDK: The best, and probably the only assured, way to detect whether a server has failed is to connect to it and run a simple query. All other mechanisms, such as pinging or simply checking whether the port is open, can give a false positive. Even having a process running on that server itself may give a false reading if the machine is up but SQL Server is down (e.g. the database is offline).
Ultimately it sounds like your client will have to connect to one of the gateway servers and get a short list of SQL Servers to attempt to connect to. On failure, it will have to rotate through that short list. If all are down, it will have to go back to the gateway and ask for a new list to start the process over. The probe itself can be as small as this:
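(A sketch; the candidate connection strings would come from your gateway list, and the timeout value is arbitrary.)

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class SqlProbe
    {
        // Walk the candidate list and return the first server that actually
        // answers a trivial query; a short timeout keeps dead servers from
        // stalling the whole scan.
        public static string FindLiveServer(IEnumerable<string> connectionStrings)
        {
            foreach (var cs in connectionStrings)
            {
                try
                {
                    using (var conn = new SqlConnection(cs + ";Connect Timeout=3"))
                    using (var cmd = new SqlCommand("SELECT 1", conn))
                    {
                        conn.Open();
                        cmd.ExecuteScalar(); // proves SQL Server answers, not just the port
                        return cs;
                    }
                }
                catch (SqlException)
                {
                    // Down, unreachable or refusing logins: try the next one.
                }
            }
            return null; // all down: ask the gateway for a fresh list
        }
    }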
In the connection string you can specify a failover partner, like:
Data Source=<MySQLServer>;Failover Partner=<MyAlternateSQLServer>;Initial Catalog=<MyDB>;Integrated Security=True
On http://www.connectionstrings.com there is a section about database mirroring.