Specifying failover servers in a C# web service client

I need to build some resilience into a web service client application. Are either of these two scenarios supported by the standard .NET generated web service client (classic or 3.0)?
Specifying a list of server addresses so that the client can fall back automatically if one server goes down.
Configuring the client so it looks up DNS service (SRV) records instead of the standard host entries, uses the list of hosts by priority, and keeps track of which hosts are up.
Load balancing the server or going through a proxy does not solve my problem, which is related to geographical resilience.
Any help would be appreciated, thanks!

We've generally built our own layer; I don't think the default generated client code does anything like this.
More often, we define a custom configSection and then add a bunch of key/value pairs in that section. Then, we round-robin through that list for each request.
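As a rough sketch of that approach (all names here are hypothetical, and the endpoints come from appSettings rather than a full custom ConfigurationSection, for brevity), a thin wrapper can retry each configured host in round-robin order:

    using System;
    using System.Configuration;
    using System.Net;

    public class FailoverInvoker
    {
        private readonly string[] _endpoints;
        private int _next; // index of the endpoint to try first on the next call

        public FailoverInvoker()
        {
            // e.g. <add key="ServiceEndpoints"
            //          value="https://eu.example.com/svc;https://us.example.com/svc" />
            _endpoints = ConfigurationManager.AppSettings["ServiceEndpoints"].Split(';');
        }

        // Runs the given call against each endpoint in turn until one succeeds.
        public T Invoke<T>(Func<string, T> call)
        {
            for (int i = 0; i < _endpoints.Length; i++)
            {
                string url = _endpoints[(_next + i) % _endpoints.Length];
                try
                {
                    T result = call(url);
                    _next = (_next + i + 1) % _endpoints.Length; // round-robin on success
                    return result;
                }
                catch (WebException)
                {
                    // Host down or unreachable - fall through to the next endpoint.
                    // (A WCF client would throw CommunicationException instead.)
                }
            }
            throw new InvalidOperationException("All configured endpoints failed.");
        }
    }

Since a generated ASMX proxy exposes a settable Url property, a call would look something like invoker.Invoke(url => { var c = new ProductsProxy(); c.Url = url; return c.GetProducts(); }), where ProductsProxy stands in for your generated client class.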

Related

C# Client-Server application

I need to write a client-server application. First of all, I'm going to write an application server. The app server should also connect to a database (MS SQL Server) and serve data from it to the client app. So, as I understand it, I should use WCF. Is that a good idea? Maybe I need to take a look at something else?
Let's start with client-server architecture.
Assuming you have decided that you need a client and a server, have you carefully decided on the architecture? I mean, what type of server and what type of client are you going to create?
Let's see the options here:
Server
1. What type of hosting are you going to use?
2. What type of load, and how much, does your server need to handle?
Client
1. Type of consumer of your service
2. Do clients need to be deployed on the local machine, or should they be web based?
There are obviously more concerns than these; the initial design should be as flexible as possible.
So, now let's look at some solutions regarding architecture.
Server:
1. Application-hosted WCF server: You have to manage the server lifecycle yourself, and it is not easily scalable. So if you are looking for a scalable architecture, you need to look further.
2. IIS-hosted WCF server: This might be a good idea, along with some architectural considerations depending on your needs.
3. Web methods (ASMX): These actually came before WCF, and WCF has largely taken their place. The main difference is discussed at What is the difference between an asp.net web method and a wcf service?
Now Client:
1. ASP.NET: This lets you use a single client app on every platform, simply because the client is HTML in a browser.
2. WPF/WinForms: This is going to be a bit trickier to use as a client, since you need to deploy the client app on the user's machine, and that raises the data security problem. In the former case you can directly use SSL or some other means to send data to the browser; with a desktop client, if you are not using WCF over HTTPS and proprietary data is going over the wire, that may be a concern.
If you are looking for cross platform usage of your server you can use HTML.
Conclusion:
You can use a WCF-hosted service as the server (either in IIS or in a self-contained application) and ASP.NET as the client.
If the requirement is not that big, you can use ASP.NET as the server and the browser as the client (no need to create a client at all).
Alternatively, you can create the server with either WCF or web methods and deploy the client on the user's machine.
WCF is nice enough, and it can handle your proprietary data types as well.
WCF is a nice thing, but I would use ASP.NET self-hosted Web API. It's more modern, and you get a full REST interface, which is much more popular.
Here is a comparison: WCF and ASP.NET Web API
Here is a good starting point: Self-Host ASP.NET Web API 1 (C#)
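For a rough idea of what that tutorial covers, a minimal self-host looks something like this (ProductsController is a made-up example; the self-host types come from the Microsoft.AspNet.WebApi.SelfHost NuGet package):

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    class Program
    {
        static void Main()
        {
            // Listen on a local port; running this may require admin rights
            // or a URL ACL reservation (netsh http add urlacl).
            var config = new HttpSelfHostConfiguration("http://localhost:8080");
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait();
                Console.WriteLine("Listening on http://localhost:8080 - press Enter to quit.");
                Console.ReadLine();
            }
        }
    }

    // Hypothetical controller, reachable at GET /api/products.
    public class ProductsController : ApiController
    {
        public string[] Get()
        {
            return new[] { "apple", "banana" };
        }
    }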

Prevent cross-domain requests to my WCF services

I use WCF services to communicate between my JavaScript (jQuery) and server-side code, and I find this works effectively.
However, I want to make it more secure. How can I set up WCF so that requests to the services can only be made from within the same domain, preventing external clients from making such requests to my services?
So, for example, my service operation URL is http://www.website.com/Service.svc/GetProducts. I want to set up WCF so that only requests from pages on http://www.website.com are allowed. I presume this is in the realm of cross-domain WCF requests, but I need some assistance in setting this up. Help would be great.
This simply isn't possible if your services are exposed to the web.
If something about your services isn't secure enough for that, you should look into fixing that problem - not trying to prevent people from making requests.
Anyone can always use a debugging proxy like Fiddler or Charles, or a tool like WireShark, to send any data they want to your services - including a complete replay of a request made via the browser (referrer HTTP headers and all).
If your situation allows for it, you might consider using a VPN appliance or something similar and restricting access to users inside your network (or coming in through the VPN). That way there is less concern about the security of the services... however, it's a known fact that "internal attackers" are just as prevalent, if not more so, than external ones - so don't get too comfy.
Let me head one argument off at the pass while I'm at it: someone might suggest that browsers already prevent cross-site requests like that. Yes, that's true. But usually it would be the developer of the other application adding the client-side script to call those services - and he/she could just as easily make that request on the server side and proxy the results along to the client.

Dynamically choosing the connection in an Entity Framework REST service

I have a very simple Entity Framework (.edmx) file and a .svc REST service.
Everything works fine for CRUD operations.
I have many databases that share exactly the same schema.
My next step is to let the client pass in a parameter - the connection string, or some other value identifying the user - so that the service serves data from the correct database.
Right now, the only parameter is the URI for the ServiceRoot.
I see in the data model that I can pass in a connection string, but how can I do this from the client without creating many service files?
I am assuming you are using WCF Data Services to expose the .edmx file. I am no expert in this toolset, but I suspect the only direct way is to create a service for each database.
This is a great question and it is a scenario that I hope will be addressed in the future WCF HTTP stack.
In the meantime, there is some positive news. I have experimented in the past with creating a large number of service hosts (around 1000), and my experimentation showed that startup was quite efficient and did not consume large amounts of RAM. The key is to create the service hosts in code rather than via the config files - obviously, you don't want to be hand-writing an XML config file with thousands of service entries in it!
It may not be the ideal solution, but I believe it would work.
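A rough sketch of the mechanical part (names are illustrative, and how each service instance is wired to its particular database is left out here - WebServiceHost builds its endpoints from the base address, so no per-service config entries are needed):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Web;

    class MultiHostRunner
    {
        static void Main()
        {
            var hosts = new List<WebServiceHost>();
            string[] databases = { "TenantA", "TenantB", "TenantC" }; // ... up to ~1000

            foreach (string db in databases)
            {
                // One base address per database, created entirely in code.
                var host = new WebServiceHost(
                    typeof(TenantService), // hypothetical service class
                    new Uri("http://localhost:8080/" + db));
                host.Open();
                hosts.Add(host);
            }

            Console.WriteLine("Opened {0} hosts. Press Enter to stop.", hosts.Count);
            Console.ReadLine();
            hosts.ForEach(h => h.Close());
        }
    }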
If you're using WCF Data Services, you should be able to pass the information identifying the data source in the HTTP request - either as a custom option in the URL or as a custom HTTP header (I would probably use the custom header, as it's much easier to work with from the client).
Depending on how you host the service, you should be able to access the headers of the request on the server. You can use the ASP.NET way of doing this (static variables), or you can hook into the WCF Data Services processing pipeline, which should allow you to access those headers as well.
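To sketch that second suggestion (all names are illustrative, and the exact hook depends on your WCF Data Services version): DataService<T> has a per-request CreateDataSource override, which is a natural place to read the custom header and open the right database:

    using System.Configuration;
    using System.Data.Services;
    using System.ServiceModel.Web;

    public class ProductsDataService : DataService<MyEntities> // MyEntities = generated EF context
    {
        // Runs for each request and builds the context the service will query.
        protected override MyEntities CreateDataSource()
        {
            // "X-Database" is a made-up header the client must send.
            string db = WebOperationContext.Current.IncomingRequest.Headers["X-Database"];
            if (string.IsNullOrEmpty(db))
                throw new DataServiceException(400, "Missing X-Database header.");

            // One <connectionStrings> entry per database; the value must be a
            // full EF connection string (including the metadata= section).
            var cs = ConfigurationManager.ConnectionStrings[db];
            if (cs == null)
                throw new DataServiceException(400, "Unknown database: " + db);

            return new MyEntities(cs.ConnectionString);
        }

        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        }
    }

On the client side, the generated DataServiceContext raises a SendingRequest event where the header can be added to each outgoing request.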

How to store WCF sessions so another application can access them

Hi, I have an application that operates like this:
Client <----> Server <----> Monitor Web Site
WCF is used for the communication, and each client has its own session on the server. This is so callbacks can be used from the server back to the client.
The objective is that a user on the "Monitor Website" can do the following:
a) Look at all of the users currently online - that is, those using the client application.
b) Select a client and then perform an action on that client.
This is a training system, so the idea is that the instructor, using a web terminal, can select his or her target client and then make the client application do something, or perhaps send a message to the client that will be displayed on the client's screen.
What I can't seem to do is store a list of all the clients in the server application so it can be retrieved later. If I could do this, I could then access the callback object for the client and call the appropriate method.
A method on the monitoring website would look something like this...
Service.SendMessage(userhashcode, message)
The service would then somehow look up the callback that matches the hash code and do something like this:
callback.SendMessage(message)
So far I have tried, without luck, to serialise the callbacks into a centralised DB. However, it doesn't seem possible for the service to serialise a remote object, as the callback exists on the client.
Additionally, I thought I could create a global hash table in my service, but I'm not sure how to do this and how to make it accessible application-wide.
Any help would be appreciated.
Typically, WCF services are "per-call" only: each caller gets a fresh instance of the service class, which handles the request, formats the response, sends it back, and then gets disposed. So typically, you don't have anything "session-like" hanging around in memory.
What you do have is not the service classes themselves, but the service host - the class that acts as the host for your service classes. This is either IIS (in which case you just need to monitor IIS), or a custom app (Windows NT service, console app) that has a ServiceHost instance up and running.
I am not aware of any hooks that would let you connect to and "look inside" the service host - but that's what you're really looking for, I guess.
WCF services can also be configured to be session-ful and keep a session up and running with a service class - but again, you need to have that turned on explicitly. Even then, I'm not really sure you have many API hooks to get "inside" the service host and have a look around the current sessions.
The question is: do you really need to? WCF exposes a gazillion performance counters, so you can monitor and record just about anything that goes on in WCF - wouldn't that be good enough for you?
Right now, WCF services aren't hosted in a particularly well-designed system - this should get better with the so-called "Dublin" server add-on, which is designed to host WCF services and WF workflows and give admins a great experience monitoring and managing them. "Dublin" is scheduled to launch shortly after .NET 4.0 becomes available (which Microsoft has promised will be before the end of calendar year 2009).
Marc
What I have done is as follows...
Created a static instance in my service that keeps a dictionary of callbacks keyed by the hashcode of each WCF connection.
When a session is created it publishes itself to a DB table which contains the hash code and additional connection information.
When a user is using the monitor web application, it can get a list of connected clients from the DB and get the hashcode for that client.
If the monitor application user wants to send a command to the client, the following happens:
The hash code for the session is obtained from the DB.
A method is called on the service e.g. SendTextMessage(int hashcode, string message).
This method now looks up the callback to the client from the dictionary of callbacks and obtains a reference to it.
The appropriate method in this case SendTextMessage(message) is called on the callback.
I've tested this and it works OK. I've also added functionality to keep the DB table synchronised with the actual WCF sessions and to clean up as required.
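For completeness, here is a minimal sketch of that pattern (contract and class names are made up, it assumes a duplex binding such as netTcpBinding so callbacks work, and the DB publishing step is reduced to a comment):

    using System.Collections.Concurrent;
    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(ITrainingCallback))]
    public interface ITrainingService
    {
        [OperationContract]
        int Connect(); // called by each client on startup

        [OperationContract]
        void SendTextMessage(int hashcode, string message); // called by the monitor site
    }

    public interface ITrainingCallback
    {
        [OperationContract(IsOneWay = true)]
        void SendTextMessage(string message);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class TrainingService : ITrainingService
    {
        // Static, so it is shared by every session inside this service host.
        private static readonly ConcurrentDictionary<int, ITrainingCallback> Callbacks =
            new ConcurrentDictionary<int, ITrainingCallback>();

        public int Connect()
        {
            var callback = OperationContext.Current.GetCallbackChannel<ITrainingCallback>();
            int hash = callback.GetHashCode();
            Callbacks[hash] = callback; // also publish hash + user info to the DB here
            return hash;
        }

        public void SendTextMessage(int hashcode, string message)
        {
            ITrainingCallback callback;
            if (Callbacks.TryGetValue(hashcode, out callback))
                callback.SendTextMessage(message); // forward to that client's callback
        }
    }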

Disconnected Architecture With .NET

I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForms client application needs to be 100% available for work, even if the Engine Service is down.
So how can I build a disconnected architecture so that my WinForms application is always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards requests to the web service. If the web service is down, the request stays queued. The queue mechanism should check every so often to see whether the web service is alive, and when it is, forward everything it has stored up.
Alternatively, you can go direct to the web service first and post to the queue only on an initial failure. However, the queue will still need to check on the web service every so often.
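A rough sketch of such a queue (names are illustrative; the send delegate stands in for your actual web service call, and a real implementation would also persist pending requests to disk so they survive an application restart):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class OutboundQueue
    {
        private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
        private readonly Func<string, bool> _send; // returns false if the service is down
        private readonly Timer _retryTimer;        // kept as a field so it isn't collected

        public OutboundQueue(Func<string, bool> send)
        {
            _send = send;
            // Every 30 seconds, check whether the service is back and drain the queue.
            _retryTimer = new Timer(_ => Flush(), null,
                TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
        }

        public void Enqueue(string request)
        {
            // Go direct when nothing is backed up; otherwise queue to keep ordering.
            if (_pending.IsEmpty && TrySend(request))
                return;
            _pending.Enqueue(request);
        }

        private void Flush()
        {
            string request;
            while (_pending.TryPeek(out request))
            {
                if (!TrySend(request))
                    return; // still down; try again on the next tick
                _pending.TryDequeue(out request);
            }
        }

        private bool TrySend(string request)
        {
            try { return _send(request); }
            catch (Exception) { return false; } // e.g. EndpointNotFoundException
        }
    }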
EDIT:
Just to clarify: yes, all of the business logic would need to be available client side; otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
- They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
- They connect to multiple back-end systems to exchange data with them.
- They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
- They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
- They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement: it really depends. Let me give you some examples. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displays a unified view of the document to the user, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server, which does the transforms (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which would, based on the order, determine additional rules, such as which salesman to route it to or which pricing discounts to apply... these might make sense to sit on the server.
The question you will need to ask is: what functionality do I need to make my application work when the server is not there? Anything which falls within this category will need to be client side.
I've also never used ClickOnce deployment - we had to roll our own updater, which is a tale for another thread - but you should be able to send down updates pretty easily. You could also put your business logic in an assembly that you load from a URL, so that while it runs client side it can still be updated easily.
You can do all your processing offline and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation on both the server and the client. This way you will have a single code base serving both.
You can use frameworks like CSLA.NET to simplify this validation process.
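As a tiny illustration of the shared code base idea (the class and rules are made up), the validation lives in one assembly that both the WCF service and the WinForms client reference:

    using System.Collections.Generic;

    // Compiled into its own assembly, e.g. MyApp.BusinessRules.dll, and
    // referenced by both the server and the client projects.
    public static class OrderValidator
    {
        // Returns an empty list when the order is valid.
        public static IList<string> Validate(int quantity, decimal unitPrice)
        {
            var errors = new List<string>();
            if (quantity <= 0) errors.Add("Quantity must be positive.");
            if (unitPrice < 0m) errors.Add("Unit price cannot be negative.");
            return errors;
        }
    }

The client runs Validate before queuing a request, and the server runs the same method when the request arrives, so the rules cannot drift apart.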
