I have to design a web application that supports multiple clients.
I'm thinking of having a MongoDB that stores each user's username or email along with the name of that user's connection string.
Then, with the connection string, I'd get to the client's SQL database.
But I'm not sure if this is the best approach.
Do you have any other suggestions?
I had a situation close to yours.
We used one common (parent) database, which stored the connections per client, plus a simple interface to manage the child databases. The child databases are separate, and you can create them manually or automatically, with as many databases per client, or as many clients, as you want.
It depends on how you want to identify clients. Our system used one client per URL: every client had their own URL and their own database. So in code, we check the URL, get the connection string from the main database, and initialize the context with that connection.
You'll need to provide more details to get more specific advice. Depending on your goal, the solution can be different.
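To make the URL-based lookup concrete, here is a minimal sketch. The dictionary stands in for the parent database, and all host names and connection strings below are invented for illustration; in a real app the lookup would be a query against the parent database.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the lookup described above. The dictionary is a stand-in for
// the "parent" database table mapping host -> connection string; the
// resolved string would then be handed to a SqlConnection or a DbContext.
public static class TenantResolver
{
    static readonly Dictionary<string, string> MasterDb = new Dictionary<string, string>
    {
        { "client1.example.com", "Server=sql1;Database=Client1Db;Integrated Security=true" },
        { "client2.example.com", "Server=sql2;Database=Client2Db;Integrated Security=true" }
    };

    public static string ResolveConnectionString(string host)
    {
        string cs;
        if (!MasterDb.TryGetValue(host, out cs))
            throw new InvalidOperationException("Unknown client: " + host);
        return cs;
    }
}
```

In ASP.NET you would take the host from `Request.Url.Host` and pass the resolved string to your data-access layer.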
I've seen some projects with the URL-based approach. However, if you want your application to be more dynamic (say, migrating from a server-side to a client-side application) and you don't want your URLs to change, I would say your "user-based" approach is more ideal in my opinion. Good luck.
If you have many clients, each with their own database, then you must build separate web applications, even if each one is a copy/paste of the others.
If you have many clients under the same URL, in the same web application, then you can have one database and separate the clients inside it.
The web.config is not designed for frequently changing connection strings: you set it up once to work and then forget it.
Every time you change the web.config you trigger a chain of events; the application restarts, recompiles if it finds a reason to, and so on.
I am trying to improve a windows service we use at work.
The part I am trying to improve is maintainability. The service exists on several different machines. Right now, I have a form which receives information from the service via shared memory, but to monitor these services someone has to log in to several remote machines to view the forms.
What I am trying to do is decide the best way to have these services send their information to a single location for easy viewing.
My initial thought was to create a web service which the services could call with their details, then create a web page where those details could be viewed. But I imagine that I would also need a database to store the messages in, which I don't really want to do.
I would also like the location that shows the combined details to be able to send commands to the individual services, such as Start and Stop.
So I am lacking a bit of knowledge on the best way to accomplish this and am looking for suggestions that would give me something more specific to research.
I would appreciate any and all input on a real-time-appropriate solution for having multiple Windows services, located on several different machines within our network, send their status data to a single location to be displayed together visually, and for allowing that form/website/whatever to send messages back to those services, such as Start and Stop.
If you don't want the Web Service/SQL approach, WCF might be a good approach.
http://msdn.microsoft.com/en-us/library/ms731082.aspx
Basically, the remote services could report to the central service, and everything could be stored in memory; no DB required.
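As a rough sketch of the in-memory idea (the WCF plumbing itself is omitted, and every type and member name below is hypothetical), the central service could keep something like:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// In-memory status store the central service could keep; no database
// needed, as noted above. Thread-safe, since many remote services may
// report at the same time.
public class StatusStore
{
    readonly ConcurrentDictionary<string, string> _latest =
        new ConcurrentDictionary<string, string>();

    // Called by the remote services (e.g. from a WCF operation contract).
    public void Report(string machineName, string status)
    {
        _latest[machineName] = status;
    }

    // Called by the monitoring page to render all machines at once.
    public IDictionary<string, string> Snapshot()
    {
        return new Dictionary<string, string>(_latest);
    }
}
```

Commands like Start and Stop would flow the other way, for example over a WCF duplex channel, which is where the linked documentation comes in.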
The program I am making requires the use of real time cross computer interactions via the internet.
The issue I'm coming across is that, while I want the clients to connect to a host client rather than using a client-server model, there are a lot of problems in getting the host client to actually host (accept an incoming connection, etc.).
I'm trying to make the process of hosting a session as simple as possible, so that a user with no networking knowledge can accept incoming connections without having to configure their router or any other such thing. I was wondering how I could achieve this?
Sounds like you want to programmatically update firewall rules. Given the variation in network setups, it's not possible to have a one-size-fits-all approach. I think you have three choices, the last probably being the best:
1) http://en.wikipedia.org/wiki/Internet_Gateway_Device_Protocol
2) http://en.wikipedia.org/wiki/Tunneling_protocol
3) instructions for users to configure their routers (this will be needed as a backup for users for whom the first two fail)
Is it possible to detect whether a website has a dedicated or shared IP address from its URL using C# (in a Windows Forms application)? I want to implement functionality in my application that lets the user write a web address in a TextBox, then click a Test button, and then shows a Success MessageBox if the site has a dedicated IP address or a Failure MessageBox otherwise.
How can I detect whether a website has a shared or dedicated IP address using C#.NET?
You can try, but you'll never have a good result. The best I think you could do is to check the PTR records of the IP, and then check if there are associated A records from different websites. This would still suck however, since a website could have two seemingly different domains that pertain to the same organization (googlemail.com/gmail.com for example).
Also, this assumes the existence of PTR records, and multiple ones at that. I don't think I've seen such a setup supported by most VPS/shared hosting.
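For what it's worth, the forward and reverse (PTR) lookups can be attempted with `System.Net.Dns`. A sketch, with the caveat from above that many hosts publish no PTR record at all:

```csharp
using System;
using System.Net;

// Attempt the PTR check described above. Dns.GetHostEntry on an IPAddress
// performs a reverse (PTR) lookup; it throws a SocketException when no
// PTR record exists, which is common on shared hosting.
public static class ReverseLookup
{
    public static string TryGetPtrName(string host)
    {
        IPAddress[] addresses = Dns.GetHostAddresses(host); // forward lookup
        try
        {
            // Reverse lookup of the first returned address.
            return Dns.GetHostEntry(addresses[0]).HostName;
        }
        catch (System.Net.Sockets.SocketException)
        {
            return null; // no PTR record published
        }
    }
}
```

Even when this returns a name, it tells you about one PTR record only, so it can't enumerate the other sites behind the same IP.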
Well, the way I would do it is:
Send HTTP GET to the URL and save the result.
Resolve the URL to an IP.
Send HTTP GET to the IP and save the result.
Compare the two results. (You can do sample checks between the two results.)
If the results are the same, then this is dedicated hosting; if the results differ, then this is shared hosting.
Limitations for this method that I can think of now:
It will take you time to figure out a proper comparison method for the two results.
The shared host might be configured to route requests for the bare IP to the very site you are checking by default, giving a false positive.
Functions to resolve URLs, and do web requests for different programming languages are scattered across the Internet.
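A rough sketch of the steps above, assuming .NET's `HttpClient` and `Dns`. The comparison is the naive equality check, which, as noted in the limitations, would need refinement (dynamic pages differ between any two requests):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch of the four steps above: fetch by host name, resolve to an IP,
// fetch by bare IP, then compare the two responses.
public static class DedicatedIpProbe
{
    // The comparison step, separated out so it can be tested on its own.
    // A real version would do fuzzier sample checks, per the limitations.
    public static bool ResponsesMatch(string byName, string byIp)
    {
        return byName == byIp;
    }

    public static async Task<bool> LooksDedicatedAsync(string host)
    {
        using (var http = new HttpClient())
        {
            string byName = await http.GetStringAsync("http://" + host + "/");
            IPAddress ip = (await Dns.GetHostAddressesAsync(host))[0];
            string byIp = await http.GetStringAsync("http://" + ip + "/");
            return ResponsesMatch(byName, byIp);
        }
    }
}
```

Note that requesting by bare IP omits the Host header's real value, so on a name-based virtual host you'll usually get the server's default site, which is exactly what the heuristic relies on.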
From a technical standpoint, there's no such thing as a "shared" or "dedicated" IP address; the protocol makes no distinction. Those are terms used to describe how an IP is used.
As such, there's no programmatic method to answer "is this shared or dedicated?" Some of the other answers to this question suggest some ways to guess whether a particular domain is on a shared IP, but those methods are at best guesses.
If you really want to go down this road, you could crawl the web and store resolved IPs for every domain. (Simple, right?) Then you could query your massive database for all the domains hosted on a given IP. (There are tools that seem to do this already, although only the first one was able to identify the multiple domains I have hosted on my server.)
Of course, this is all for naught with VPS (or things like Amazon EC2) where the server hardware itself is shared, but every customer (domain) gets one or more dedicated IPs. From the outside, there's no way to know how such servers are set up.
TL;DR: This can't be done in a reliable manner.
How do I work with two or more data sources coming from different servers, and expose APIs in the web project, where I can add more connection strings to the web project programmatically from the client side?
SQL connection strings? Change the connection object. Even if you use something automagic, like LINQ to SQL or EF, you have the ability to change the connection string at runtime. You have to know what connection string you are changing it to, but that can even be sent into the service from your Silverlight application (although "never trust user input" comes to mind when working with a "public API").
The same holds true for SOA type implementations where the "connection string" means the service URI.
If you want something more concrete, be more specific on what you are contacting and how.
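As a sketch of swapping the connection at runtime without trusting raw client input: have the client send a key, and map it server-side to a connection string the service already knows. Everything below (keys, server names) is made up for illustration.

```csharp
using System;
using System.Collections.Generic;

// The client sends a key, never a raw connection string ("never trust
// user input"); the service maps the key to a string it already knows.
// The resolved string would be passed to new SqlConnection(cs), or to an
// EF context constructor, or used as a service URI in the SOA case.
public static class ConnectionSwitcher
{
    static readonly Dictionary<string, string> Known = new Dictionary<string, string>
    {
        { "reporting", "Server=rpt01;Database=Reports;Integrated Security=true" },
        { "sales",     "Server=ops02;Database=Sales;Integrated Security=true" }
    };

    public static string Lookup(string key)
    {
        string cs;
        if (!Known.TryGetValue(key, out cs))
            throw new ArgumentException("Unknown data source: " + key);
        return cs;
    }
}
```

Adding a data source "programmatically" then means adding an entry to this map (or its backing store), not letting callers supply arbitrary connection strings.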
I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I make a disconnected architecture in order to keep my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue will forward the requests to the web service. In the event the web service is down, it stays queued. The queue mechanism should check every so often to see if the web service is alive, and when it is then forward everything it has stored up.
Alternatively, you can go direct to the web service, then simply post it to the queue in the event of initial failure. However, the queue will still need to check on the web service every so often.
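A minimal sketch of that internal queue, with the web-service call abstracted as a delegate so the same class covers both variants described above (all names here are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

// Store-and-forward queue as described above. The actual web-service call
// is abstracted as a delegate returning true when the service accepted the
// request; a timer would call Flush() "every so often".
public class OutboundQueue
{
    readonly Queue<string> _pending = new Queue<string>();
    readonly Func<string, bool> _send;

    public OutboundQueue(Func<string, bool> send) { _send = send; }

    public int PendingCount { get { return _pending.Count; } }

    // Go direct first; queue only on initial failure (the alternative above).
    public void Submit(string request)
    {
        if (!_send(request))
            _pending.Enqueue(request);
    }

    // Called periodically: forward stored requests while the service is up.
    public void Flush()
    {
        while (_pending.Count > 0)
        {
            if (!_send(_pending.Peek()))
                return; // still down; try again on the next tick
            _pending.Dequeue();
        }
    }
}
```

In a real client the queue would also be persisted to disk, so pending requests survive an application restart.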
EDIT:
Just to clarify, yes all of the business logic would need to be available client side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
- They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
- They connect to multiple back-end systems to exchange data with them.
- They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
- They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
- They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement: it really depends. Let me give you some examples. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displaying to the user a unified view of the document, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server to do the transforms (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, determined additional rules, such as which salesman to route it to, or pricing discounts... these might make sense to sit on the server.
The question you will need to ask is: what functionality do I need to make my application function when the server is not there? Anything which falls within this category will need to be client side.
I've also never used ClickOnce deployment; we had to roll our own updater, which is a tale for another thread, but you should be able to send down updates pretty easily. You could also put your business logic in an assembly that you load from a URL, so while it runs client side it can be updated easily.
You can do all your processing offline, and use something like Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation on both the server and the client. This way you will have a single code base that serves both.
You can use frameworks like CSLA.NET to simplify this validation process.
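To illustrate the shared-code-base idea without any framework (the `Order` type and the rules below are invented for the example), the validation routine lives in one assembly referenced by both the WinForms client and the server:

```csharp
using System;

// One validation routine compiled into a shared assembly. The client runs
// it for immediate feedback while offline; the server runs the identical
// code when the data syncs back, because the client is never trusted.
public class Order
{
    public decimal Amount;
    public string Customer;
}

public static class OrderRules
{
    public static bool IsValid(Order o, out string error)
    {
        if (string.IsNullOrEmpty(o.Customer)) { error = "Customer is required."; return false; }
        if (o.Amount <= 0) { error = "Amount must be positive."; return false; }
        error = null;
        return true;
    }
}
```

CSLA.NET packages this same pattern (rules declared once, enforced on both tiers) with a lot more infrastructure around it.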