I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows service) => WCF service => WinForms client application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I build a disconnected architecture that keeps my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards requests to the web service. If the web service is down, the request stays queued. The queue mechanism should check periodically whether the web service is alive, and when it is, forward everything it has stored up.
Alternatively, you can go directly to the web service, and only post to the queue when the initial call fails. Either way, the queue still needs to check on the web service periodically.
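Here's a minimal sketch of that store-and-forward idea. IEngineService and Request are hypothetical stand-ins for your real WCF proxy and message type, and note that this in-memory queue loses data on restart; for durability you'd persist it to disk or MSMQ:

// Minimal store-and-forward sketch. IEngineService and Request are
// hypothetical placeholders for your real WCF proxy and message type.
using System;
using System.Collections.Generic;
using System.Threading;

public class Request { public string Payload; }
public interface IEngineService { void Submit(Request request); }

public class OutboundQueue
{
    private readonly Queue<Request> _pending = new Queue<Request>();
    private readonly object _sync = new object();
    private readonly Func<IEngineService> _proxyFactory;
    private readonly Timer _retryTimer;

    public OutboundQueue(Func<IEngineService> proxyFactory)
    {
        _proxyFactory = proxyFactory;
        // Periodically check whether the service is back up and flush the queue.
        _retryTimer = new Timer(delegate { Flush(); }, null,
            TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    public void Send(Request request)
    {
        lock (_sync) { _pending.Enqueue(request); } // enqueue first, so nothing is lost
        Flush();                                    // then try to deliver right away
    }

    private void Flush()
    {
        while (true)
        {
            Request next;
            lock (_sync)
            {
                if (_pending.Count == 0) return;
                next = _pending.Peek();
            }
            try
            {
                _proxyFactory().Submit(next);        // forward to the WCF service
                lock (_sync) { _pending.Dequeue(); } // dequeue only after success
            }
            catch (Exception)
            {
                return; // service still down; leave everything queued
            }
        }
    }
}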
EDIT:
Just to clarify: yes, all of the business logic would need to be available client side; otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (snipped from the link above):
- They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
- They connect to multiple back-end systems to exchange data with them.
- They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
- They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
- They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement: it really depends. Let me give you an example. Take an application that watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displays a unified view of the document to the user, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, upload them to the server and do the transforms there (applying business logic such as normalization of the data). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, determines additional rules such as which salesman to route it to or which pricing discounts apply... those might make sense to sit on the server.
The question you will need to ask is: what functionality do I need for my application to function when the server is not there? Anything which falls within this category will need to be client side.
I've never used ClickOnce deployment; we had to roll our own updater, which is a tale for another thread, but you should be able to push updates down pretty easily. You could also put your business logic in an assembly that you load from a URL, so while it runs client side it can be updated easily.
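As a rough sketch of that load-from-URL idea (the URL and type names below are placeholders, and in practice you would want to sign and verify the downloaded assembly):

// Hypothetical sketch: pull the latest business-rules assembly from a
// well-known URL and load it, so client-side logic can be updated without
// redeploying the whole application.
using System;
using System.Net;
using System.Reflection;

public static class RulesLoader
{
    public static object LoadRulesEngine()
    {
        byte[] bytes;
        using (var client = new WebClient())
        {
            // Placeholder URL; in production, verify the assembly's signature.
            bytes = client.DownloadData("http://example.com/updates/BusinessRules.dll");
        }
        Assembly rules = Assembly.Load(bytes);                // load from memory
        Type engineType = rules.GetType("BusinessRules.Engine"); // illustrative type name
        return Activator.CreateInstance(engineType);          // late-bound instance
    }
}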
You can do all your processing offline and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base for data validation on both the server and the client. This way you will have a single code base serving both.
You can use frameworks like CSLA.NET to simplify this validation process.
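Even without a framework, the idea can be as simple as this sketch: put the rules in a shared assembly referenced by both the WinForms client and the WCF service (Order and the rules below are purely illustrative):

// Validation rules live in one shared assembly. The same method runs
// client-side (instant feedback offline) and server-side (trust boundary).
using System.Collections.Generic;

public class Order
{
    public string CustomerId;
    public decimal Total;
}

public static class OrderRules
{
    public static IList<string> Validate(Order order)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(order.CustomerId))
            errors.Add("Customer is required.");
        if (order.Total <= 0)
            errors.Add("Order total must be positive.");
        return errors;
    }
}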
I am working on a project in which a WCF service will be consumed by iOS apps. The number of hits expected on the web server at any given point in time is around 900-1000. Each request may take 1-2 seconds to complete, and the same number of requests is expected every second, 24/7.
This is my plan:
1. Write a WCF RESTful service (the instance context mode will be PerCall).
2. Requests/responses will be in JSON.
3. Some information needs to be persisted on the server. This information is actually received from another remote system and is shared among all the requests. Since using a database may not be a good idea (response time is very important; 2 seconds is the maximum the customer can wait), would it be good to keep it in server memory, say in a static Dictionary? Assume this dictionary will be a collection of 150,000 objects, each consisting of 5-7 strings and their keys. I know, this is volatile!
4. Each request will spawn a new thread (using System.Threading.Timer) to do some cleanup; this thread will do some database reads/writes as well.
Now, if a load balancer is introduced later, the in-memory objects cannot be shared between requests routed through another node. Any ideas?
I hope you gurus could help me by throwing your comments/suggestions at the entire architecture: WCF throttling, object state persistence, etc. Please provide some pointers on the required hardware as well. We plan to use Windows Server 2008 Enterprise Edition, IIS, and a SQL Server 2008 Standard Edition database.
Adding more to #3:
As I said, we get some information to the service from a remote system. On the web server where the WCF service is hosted, a client of the remote system will be installed, and the WCF service references one of this client's DLLs to get the information, in the form of a hashtable (that method returns a hashtable; there will be around 150,000 objects in the collection). Would you suggest writing this information to the database, and having the iOS requests (every second) that reach the service retrieve it from the database directly? Would that perform better than consuming directly from the hashtable if it is made static?
Since you are using Windows Server 2008 I would definitely use the Windows Server App Fabric Cache to store your state:
http://msdn.microsoft.com/en-us/library/ff383813.aspx
It is free to use, well supported and integrated, and is (more or less) API compatible with the Windows Azure App Fabric Cache if you ever shift your service to Azure. In our company (disclaimer: not my team) we used to use MemCache but changed to the App Fabric Cache and don't regret it.
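A minimal sketch of what using the cache could look like, assuming the App Fabric client assemblies are installed, a cache cluster is configured, and a cache named "default" exists (MyItem is an illustrative type; cached objects must be serializable):

// Minimal App Fabric Cache sketch (Microsoft.ApplicationServer.Caching client).
using Microsoft.ApplicationServer.Caching;

public static class StateCache
{
    // Reads the cache client configuration from app.config/web.config.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetCache("default");

    public static void Save(string key, MyItem item)
    {
        Cache.Put(key, item); // serialized and stored across the cache cluster
    }

    public static MyItem Load(string key)
    {
        return (MyItem)Cache.Get(key); // returns null if the key is not present
    }
}

[System.Serializable]
public class MyItem
{
    public string Value1, Value2; // stand-ins for your 5-7 string fields
}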
Let me throw out some comments/suggestions based on my experience serving a similar volume of requests under the WCF framework (3.5, back in the day).
I don't agree with #3. Using a database here is the right thing to do. To address response time, implement caching, and possibly cache dependencies, in order to keep the data synchronized across all instances (assuming that you are load balanced; also see the App Fabric cache suggested above). In real-world scenarios data changes, often, and you must minimize the impact.
We used Barracuda hardware and software to handle scalability as far as I can tell.
Consider indexing keys/values with Lucene if applicable. Lucene delivers extremely good read/write performance. Do not use it to store your entire data set; use it for reads. A life saver if used correctly. Note that it can be complicated to implement in a load-balanced environment.
Basically, caching might be the only necessary change to your architecture.
We are developing a web application that uses external web services as its main data source. The web services were created and are maintained by one of our close partners. Even though they are supposed to work all the time, they are not 100% reliable. From time to time they stop being reachable or start throwing exceptions.
What would be a good way of monitoring external web services and getting informed when something wrong happens?
Limitations:
- The web services are hosted externally on our partner's servers
- We don't have the source code of these web services
- We have no control over the general infrastructure
I thought of creating a simple .NET application that calls the web services regularly and reports when there is a problem (by email, in a log file or in a DB). But maybe you have better ideas?
"I thought of creating a simple .NET application that calls the web services regularly and report when there is a problem (by email, in a log file or in a db). But maybe you have better ideas?"
As well as reporting to your company that the services are down, you might also want to inform the vendor, e.g. by emailing their tech support or placing an automated call to their hotline or something.
If these services are business critical, perhaps you can agree on an SLA with the vendor as part of your contract.
I don't know of anything else you can do, except maybe to implement local caching of the data if this makes sense in your scenario. This would insulate you, at least a little, from temporary failures in the web services.
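A rough sketch of the polling monitor the question describes might look like this (the endpoint URLs, timeout and mail settings are placeholders):

// Poll each partner endpoint on a timer and e-mail when one stops responding.
using System.Net;
using System.Net.Mail;
using System.Timers;

public static class ServiceMonitor
{
    private static readonly string[] Endpoints =
    {
        "http://partner.example.com/ServiceA.svc",  // placeholder URLs
        "http://partner.example.com/ServiceB.svc"
    };

    public static void Start()
    {
        var timer = new Timer(60000); // check every minute
        timer.Elapsed += delegate { CheckAll(); };
        timer.Start();
    }

    private static void CheckAll()
    {
        foreach (string url in Endpoints)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 10000;
                using (request.GetResponse()) { } // any response counts as "up"
            }
            catch (WebException ex)
            {
                Report(url, ex.Message); // unreachable or HTTP error
            }
        }
    }

    private static void Report(string url, string error)
    {
        new SmtpClient("mail.example.com").Send(
            "monitor@example.com", "ops@example.com",
            "Web service down: " + url, error);
    }
}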
G'day,
There's a few aspects you have to consider here.
Are the external web services living behind a load-balancing layer? In that case, you're pretty much limited as to the usefulness of what you can report back to the other company.
Do you have SLAs in place with the company to help ensure the provision of their web services? If you do, then you'll need to support any claims with recorded data which changes the extent of the monitoring needed.
What about asking external companies like Gomez to monitor the company's web service application for you? They have an excellent range of services. BTW I don't work for Gomez, just use their services.
Does your company have SLAs with any customers for the provision of your application? Once again, if you do, you're then going to need to mitigate the cost of any such penalties by definitely having SLAs with the other company.
Edit: I forgot to say that any probes you do should be of at least two types:
- availability of the external platform, and
- availability of your particular service
HTH
'Avahappy,
Core company data is held and managed in physically separate, third-party, line-of-business applications: Finance, Transport Management. Customers are created in the Finance app (SQL Server), delivery information is held in the Transport Management app (Oracle). Communication between the two is point-to-point.
We need to build a new application (well, upgrade the old one, but essentially from scratch) to process customer claims for damaged or short deliveries. Claims, customer and delivery data is currently manually entered into MS Access. This will be migrated to a SQL Server DB. The app development platform is VS2008 (C#).
I would like to avoid having all of the customer and delivery data in the claims database, since we already hold it elsewhere, so I plan to produce WCF based feeds from the LOB systems (and possibly the claims db) which can then be used as the data sources for the customer claims app. There will be claim-specific data entry but the core customer and delivery data would not need to be updated in the LOB apps.
So far I have in mind
database-->ORM-->WCF \
database-->ORM-->WCF --->BLL-->UI
database-->ORM-->WCF /
but it feels wrong, as I will be creating separate service feeds for Customers, Deliveries and Claims (object-oriented services?). What I also can't quite grasp is how and where I join and work across data sources within the app to produce, say, a report showing claims against deliveries per customer (i.e. where I would traditionally write a query or view to get all of this from multiple tables in one DB).
Am I on the right track, or am I missing the big picture here? Should I just run regular extracts into a claims DB and work with a traditional n-tier/n-layer architecture?
I don't think your design is too far off from where it should be.
If you have apps that will access finance data via the WCF service or the Transport service, those make sense to build. They also make sense because each of those services supports only what it needs to know about (which ties in with the Single Responsibility Principle).
Where it might not feel right is where your UI app needs to know about and call 3 separate services to get its job done. In situations like that we've often built a wrapper service that makes the call to the appropriate service. Meaning your UI app would reference a WCF service and that service would then call the Finance service or the Transport service or the Claims service. Downside - each call results in multiple calls... yes. But it abstracts the logic away from your UI app and provides the benefit of giving you a place to manipulate or combine data from the other services or to add other business logic that is appropriate for the app. You also still have the benefit of the Finance service still supporting the finance apps without your UI app's business needs getting in the way or muddying up the code for its benefit.
I'm sure that there are different solution paths for this. This is just how we've handled in a couple of applications.
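To make the wrapper idea concrete, here is a rough sketch; all the contract and type names below are illustrative stubs, not an actual API:

// A wrapper (facade) service the UI app calls instead of the three
// underlying services; the cross-source join happens here, not in the UI.
using System.ServiceModel;

// Underlying per-source contracts (stubs for illustration).
public interface IFinanceService { Customer GetCustomer(int id); }
public interface ITransportService { Delivery GetDelivery(int id); }
public interface IClaimsService { Claim GetClaim(int id); }
public class Customer { }
public class Delivery { }
public class Claim { public int CustomerId; public int DeliveryId; }
public class ClaimView { public Claim Claim; public Customer Customer; public Delivery Delivery; }

[ServiceContract]
public interface IClaimsFacade
{
    [OperationContract]
    ClaimView GetClaim(int claimId); // combines data from all three sources
}

public class ClaimsFacade : IClaimsFacade
{
    private readonly IFinanceService _finance;     // proxies to the existing
    private readonly ITransportService _transport; // LOB feeds, injected or
    private readonly IClaimsService _claims;       // constructed here

    public ClaimsFacade(IFinanceService finance, ITransportService transport,
                        IClaimsService claims)
    {
        _finance = finance; _transport = transport; _claims = claims;
    }

    public ClaimView GetClaim(int claimId)
    {
        Claim claim = _claims.GetClaim(claimId);
        return new ClaimView
        {
            Claim = claim,
            Customer = _finance.GetCustomer(claim.CustomerId),
            Delivery = _transport.GetDelivery(claim.DeliveryId)
        };
    }
}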
EDIT (answering your follow up question took too much space to make a comment).
If the data you can get from the Transport service is enough to satisfy the question asked by "getCustomerDelieveries", then no, I wouldn't break it out into another wrapper service. If you need more data, then what other apps would also benefit from that service providing more customer information? Do those apps rely solely on the Transport service? This is one of those cases where the answer has to "feel" right to you, since you know the most about your systems.
Perhaps you need to break the SRP rule and have your Transport service get more customer information from the Finance DB or service. Or, if apps that rely on the Transport service routinely need more customer data, thought could be given to expanding the customer table in the Transport DB.
No rule, principle or philosophy should be applied so rigidly that you can't break it if it makes more sense for your app. It's going to be a balance and there is no right or wrong answer, just what works better for this situation.
You started this post by talking about a new UI app that would support the Claims part of your business and needed both Finance and Transport data (as well as its own). That is a perfect candidate to call a wrapper service: it needs data from 3 distinct and separate data sources. Your Transport service has limited customer information, which works well for some apps but perhaps not so well for others. If you write a wrapper service that mirrors your Transport service 100% and additionally provides a bit more customer data, what have you gained? More data for the apps that consume it, but also more maintenance for you whenever you add functionality to the Transport service. What other value could this wrapper provide?
In this case, to me, having the Transport service get more customer data from the Finance service "feels" better. Your Transport DB has some data but not enough. It's almost like the Transport service needs to make up for this shortcoming by fulfilling the data need itself.
I would use an orchestration service with WF (or another orchestration tool).
I like this view:
DAL, BLL, SIL --> WCF1
DAL, BLL, SIL --> WCF2
WCF1 and WCF2 are joined by an orchestration service over them. This way the services remain autonomous and decoupled, and you can reuse them in other orchestrations.
When generating reports, it's usually tolerable to deliver data that is not completely up to date, so it may be a good idea to dedicate a separate DB as the source for reporting queries. Your master DBs will receive updates from the UI (taking advantage of transactions and conflict detection) and then replicate the data to the reporting DB.
This architectural pattern is called CQS (Command Query Separation), nowadays better known as CQRS; read this great article by Udi Dahan.
Hi, I have an application that operates like this:
Client <----> Server <----> Monitor Web Site
WCF is used for the communication, and each client has its own session on the server. This is so callbacks can be used from the server to call back to the client.
The objective is that a user on the "Monitor Website" can do the following:
a) Look at all of the users currently online, that is, those using the client application.
b) Select a client and then perform an action on the client.
This is a training system, so the idea is that the instructor, using a web terminal, can select his or her target client and then make the client application do something. Or maybe they want to send a message to the client that will be displayed on the client's screen.
What I can't seem to do is store a list of all the clients in the server application that can then be looked up when the monitor website calls in. If I could do this, I could access the callback object for the client and call the appropriate method.
A method on the monitoring website would look something like this...
Service.SendMessage(userhashcode, message)
The service would then somehow look up the callback that matches the hash code and do something like this:
callback.SendMessage(message)
So far I have tried, without luck, to serialise the callbacks into a centralised DB. However, it doesn't seem possible for the service to serialise a remote object, as the callback exists on the client.
Additionally, I thought I could create a global hash table in my service, but I'm not sure how to do this and make it accessible application-wide.
Any help would be appreciated.
Typically, WCF services are "per-call" only, i.e. each caller gets a fresh instance of the service class; it handles the request, formats the response, sends it back and then gets disposed. So typically, you don't have anything "session-like" hanging around in memory.
What you do have is not the service classes themselves, but the service host: the class that acts as the host for your service classes. This is either IIS (in which case you just need to monitor IIS), or a custom app (Windows NT service, console app) that has a ServiceHost instance up and running.
I am not aware what kind of hooks there might be to connect to and "look inside" the service host - but that's what you're really looking for, I guess.
WCF services can also be configured to be session-ful and keep a session up and running with a service class, but again: you need to have that turned on explicitly. Even then, I'm not really sure you have many API hooks to get "inside" the service host and have a look around the current sessions.
Question is: do you really need to? WCF exposes a gazillion performance counters, so you can monitor and record just about anything that goes on in WCF. Wouldn't that be good enough for you?
Right now, WCF services aren't really hosted in a particularly well-designed system - this should become better with the so-called "Dublin" server-addon, which is designed to host WCF services and WF workflows and give admins a great experience monitoring and managing them. "Dublin" is scheduled to be launched shortly after .NET 4.0 becomes available (which Microsoft has promised will be before the end of calendar year 2009).
Marc
What I have done is as follows...
Created a static instance in my service that keeps a dictionary of callbacks, keyed by the hash code of each WCF connection.
When a session is created it publishes itself to a DB table which contains the hash code and additional connection information.
When a user is using the monitor web application, it can get a list of connected clients from the DB and get the hashcode for that client.
If the monitor application user wants to send a command to the client the following happens..
The hash code for the session is obtained from the DB.
A method is called on the service e.g. SendTextMessage(int hashcode, string message).
This method now looks up the callback to the client from the dictionary of callbacks and obtains a reference to it.
The appropriate method, in this case SendTextMessage(message), is called on the callback.
I've tested this and it works OK. I've also added functionality to keep the DB table synchronised with the actual WCF sessions and to clean up as required.
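For anyone trying the same thing, here is a stripped-down sketch of that pattern; the contract names are illustrative, but the duplex-callback plumbing is standard WCF:

// A duplex WCF service keeping a static registry of client callbacks,
// keyed by the callback channel's hash code, as described above.
using System.Collections.Generic;
using System.ServiceModel;

public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void SendTextMessage(string message);
}

[ServiceContract(SessionMode = SessionMode.Required,
                 CallbackContract = typeof(IClientCallback))]
public interface ITrainingService
{
    [OperationContract]
    int Register(); // returns the key the monitor website will use later

    [OperationContract]
    void SendTextMessage(int clientKey, string message);
}

public class TrainingService : ITrainingService
{
    // Static, so it is shared across all sessions in this service host.
    private static readonly Dictionary<int, IClientCallback> Clients =
        new Dictionary<int, IClientCallback>();
    private static readonly object Sync = new object();

    public int Register()
    {
        IClientCallback callback =
            OperationContext.Current.GetCallbackChannel<IClientCallback>();
        int key = callback.GetHashCode();
        lock (Sync) { Clients[key] = callback; } // also publish key + info to the DB
        return key;
    }

    public void SendTextMessage(int clientKey, string message)
    {
        IClientCallback callback;
        lock (Sync)
        {
            if (!Clients.TryGetValue(clientKey, out callback)) return;
        }
        callback.SendTextMessage(message); // invoke back on the client
    }
}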
I have 50+ kiosk style computers that I want to be able to get a status update, from a single computer, on demand as opposed to an interval. These computers are on a LAN in respect to the computer requesting the status.
I researched WCF; however, it looks like I'd need IIS installed, and I would rather not install IIS on 50+ Windows XP boxes. So I think that eliminates using a web service, unless it's possible to have a WinForms app host a web service?
I also researched using System.Net.Sockets and even got a barely functional prototype going; however, I feel I'm not skilled enough to make it a solid and reliable system. Given this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version however I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn Sockets more or does .NET have a better way of handling this?
edit:
I was going to go with two-way communication until I realized that all I needed was one-way communication.
edit 2:
I was avoiding the traditional server/client approach and going with the inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was talking about. I was also hoping to have more control over the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP; I was thinking I would have to add 50 web services or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, as a console application or as a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer that calls each computer's service to get the status information.
Another way of doing it is to host one service on your own computer and have the 50+ computers call that service whenever their status is updated; you can use a database for the service to persist the status data of each node on the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; as alternatives to HTTP hosting you can use the net.tcp or net.pipe bindings.
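A minimal self-hosting sketch along those lines (the contract, port and addresses are illustrative):

// Self-hosted WCF service: no IIS needed; the WinForms app (or a Windows
// service) on each kiosk owns the ServiceHost.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatus : IKioskStatus
{
    public string GetStatus() { return "OK"; } // report real health data here
}

public static class KioskHost
{
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(KioskStatus),
            new Uri("net.tcp://localhost:9000/kiosk"));   // illustrative port
        host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), "");
        host.Open(); // call host.Close() on application shutdown
        return host;
    }
}

// On the monitoring machine, connect to any kiosk by IP:
//   var factory = new ChannelFactory<IKioskStatus>(new NetTcpBinding(),
//       new EndpointAddress("net.tcp://192.168.0.10:9000/kiosk"));
//   string status = factory.CreateChannel().GetStatus();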
Unless you have plans to scale this to several thousand clients I don't think WCF performance will even be a fringe issue. You can easily host WCF services from windows services or Winforms applications, and you'll find getting something working with WCF will be fairly simple once you get the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There's plenty of resources out on the web to get you started - here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment, I suggest UDP broadcast for change notifications: essentially setting a dirty flag and allowing the application to (re-)fetch on demand. It's still not reliable, but it's easy and very fast because it's broadcast.
As others have said you don't need IIS, you can self-host. See ServiceHost class for details on how to do this.
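A rough sketch of that UDP broadcast "dirty flag" idea (the port number is arbitrary): the server broadcasts a tiny notification, and clients re-fetch over the web service when they see it.

using System.Net;
using System.Net.Sockets;
using System.Text;

public static class ChangeNotifier
{
    private const int Port = 15000; // any agreed-upon port on the LAN segment

    // Server side: broadcast that something changed.
    public static void NotifyChanged(string topic)
    {
        using (var client = new UdpClient())
        {
            client.EnableBroadcast = true;
            byte[] payload = Encoding.UTF8.GetBytes(topic);
            client.Send(payload, payload.Length,
                new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // Client side: block until a notification arrives, then return the topic.
    // The client then re-fetches the actual data from the web service.
    public static string WaitForChange()
    {
        using (var listener = new UdpClient(Port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] payload = listener.Receive(ref remote);
            return Encoding.UTF8.GetString(payload);
        }
    }
}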
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me, it is better to learn networking, or the manual way of socket communication; web services are much slower because they carry metadata.
Your clients and servers can become multithreaded applications; just imitate the request/response architecture. It is quite easy to implement a network application like this.
If you just need a status update, you can use a much simpler solution, such as simple TCP server/client messaging or, as orrsella said, Remoting. WCF is kind of overkill here.
One note though: if all your 50+ kiosks are connected via the internet, then you might need to use a VPN or have an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the update is configurable to accommodate slower clients.
As someone who implemented something like this with over 500+ clients and growing:
Message queuing is the way to go.
We went from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions have an extensive framework making it trivial to implement publish-subscribe, send-one-way, point-to-point and request-reply messaging. Some of these are possible with WCF, but it will involve crying, shouting, whimpering and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around is a bad idea: it is not scalable at all, and you will soon run into trouble when the process takes too long to complete. Not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when an IP changes, etc.?).
What we have done: the clients send status updates to a central message queue at a regular interval (you can easily implement live updates in the UI), and each client also listens on its own queue for a GetStatusRequest message. If it receives one, it answers (with a timeout). This way we can see the overall status of all clients at all times and get the specific status of a specific client when needed.
Concerning bandwidth: kiosks usually show images/video etc., so status messages of 1 KB or less will not be a big overhead.
I CANNOT stress enough that the design you present will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). Beyond that, building a good client/server protocol for this type of thing is a hard job that will be totally useless afterwards if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our winforms app and the centralized service(s)
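For illustration, sending a status update through ActiveMQ with the Apache.NMS API looks roughly like this (the broker URL and queue name are placeholders):

// A kiosk-style client publishing its status to an ActiveMQ queue via
// Apache.NMS; the broker persists the message until the server consumes it.
using System;
using Apache.NMS;

public static class StatusPublisher
{
    public static void SendStatus(string kioskId, string status)
    {
        IConnectionFactory factory =
            new NMSConnectionFactory(new Uri("activemq:tcp://broker:61616"));
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();
            IDestination queue = session.GetQueue("kiosk.status"); // placeholder name
            using (IMessageProducer producer = session.CreateProducer(queue))
            {
                ITextMessage message = session.CreateTextMessage(status);
                message.Properties.SetString("kioskId", kioskId); // header for filtering
                producer.Send(message); // fire-and-forget; broker handles delivery
            }
        }
    }
}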