I am trying to improve a Windows service we use at work.
The part I am trying to improve is maintainability. The service exists on several different machines. Right now I have a form which receives information from the service via shared memory, but to monitor these services someone has to log in to several remote machines to view the forms.
What I am trying to do is decide the best way to have these services send their information to a single location for easy viewing.
My initial thought was to create a web service which the services could call with their details, then create a web page where those details could be viewed. But I imagine I would also need a database to store the messages in, which I would rather not have to do.
I would also like the location that shows the combined details to be able to send commands to the individual services, such as Start and Stop.
So I am lacking a bit of knowledge on the best way to accomplish this and am looking for suggestions that would give me something more specific to research.
I would appreciate any and all input on a solution, appropriate for real-time use, that lets multiple Windows services located on several different machines within our network send their status data to a single location to be displayed visually together, while also allowing that form/website/whatever to send messages such as Start and Stop back to those services.
If you don't want the web service/SQL approach, WCF might be a good fit:
http://msdn.microsoft.com/en-us/library/ms731082.aspx
Basically the remote services could report to the central service, and everything could be stored in memory, no DB required.
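A minimal sketch of what that central contract could look like; every type and member name here is illustrative, not from the original setup:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Minimal sketch of a central WCF status service (all names illustrative).
// Each remote service calls ReportStatus periodically; the viewing page
// calls GetAll. Everything lives in memory, so no database is needed.
[ServiceContract]
public interface IStatusHub
{
    [OperationContract]
    void ReportStatus(string machineName, string serviceName, string status);

    [OperationContract]
    IList<ServiceStatus> GetAll();
}

[DataContract]
public class ServiceStatus
{
    [DataMember] public string MachineName { get; set; }
    [DataMember] public string ServiceName { get; set; }
    [DataMember] public string Status { get; set; }
    [DataMember] public DateTime LastSeen { get; set; }
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class StatusHub : IStatusHub
{
    private readonly ConcurrentDictionary<string, ServiceStatus> _statuses =
        new ConcurrentDictionary<string, ServiceStatus>();

    public void ReportStatus(string machineName, string serviceName, string status)
    {
        _statuses[machineName + "/" + serviceName] = new ServiceStatus
        {
            MachineName = machineName,
            ServiceName = serviceName,
            Status = status,
            LastSeen = DateTime.UtcNow
        };
    }

    public IList<ServiceStatus> GetAll()
    {
        return new List<ServiceStatus>(_statuses.Values);
    }
}
```

A web page (or another form) would then call GetAll to render the combined view; sending Start/Stop back would mean adding operations the remote services poll for, or a callback channel.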
I have to design a web application to support multiple clients.
I'm thinking of having a MongoDB collection with each user's username or email and the name of that user's connection string.
Then, with the connection string, I would get to the client's SQL database.
But I'm not sure this is the best approach.
Do you have any other suggestions?
We had a situation close to yours.
We used one common (parent) database that stored the connections per client, plus a simple interface to manage the child databases (they are separate; you can create as many databases per client, or as many clients, as you want, manually or automatically).
It depends on how you want to identify clients. Our system used one client per URL: every client had its own URL and its own database. So in code we check the URL, get the connection string from the main database, and initialize the context with that connection (roughly as in the sketch below).
You would need to provide more details to get more specific advice; depending on your goal, the solution can look quite different.
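A rough Entity Framework sketch of that lookup; the Tenant table and every name here are illustrative, not from the original system:

```csharp
using System.Data.Entity;
using System.Linq;

// Rough sketch of the URL-based tenant lookup (all names illustrative).
// The parent database maps each client's host name to a connection string.
public class Tenant
{
    public int Id { get; set; }
    public string HostName { get; set; }
    public string ConnectionString { get; set; }
}

public class MainDbContext : DbContext
{
    public DbSet<Tenant> Tenants { get; set; }
}

// The client-specific context is built from whatever connection string
// the parent database returns for the current request's host name.
public class ClientDbContext : DbContext
{
    public ClientDbContext(string connectionString) : base(connectionString) { }
    // DbSets for the client's own data go here...
}

public static class TenantResolver
{
    public static ClientDbContext GetClientContext(string hostName)
    {
        using (var mainDb = new MainDbContext())
        {
            // e.g. "client1.example.com" -> that client's connection string
            var tenant = mainDb.Tenants.Single(t => t.HostName == hostName);
            return new ClientDbContext(tenant.ConnectionString);
        }
    }
}
```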
I have seen some projects take the URL-based approach... However, if you want your application to be more dynamic (say, migrating from a server-side to a client-side application) and you don't want your URLs to change, I would say your "user based" approach is the better fit in my opinion. Good luck.
If you have many clients, each with their own database, then you must make separate web applications, even if each one is a copy/paste of another.
If you have many clients under the same URL, in the same web application, then you can have one database and separate the clients inside that database.
The web.config is not meant for frequently changing the connection; you set it up once so it works, and then you forget it.
Every time you change the web.config you trigger a chain of events: the application restarts, recompiles if it finds a reason to... etc.
This is more of a design/architecture question.
I know the question sounds vague, but let me explain my application's needs.
I have a Windows Forms app which collects certain data from the local machine and regularly sends it to an Azure queue. I also have a web app which pulls data from the queue and displays it. All is well and good here: the web application works fine. But the web app only pulls data from the queue when I launch it. Is there a way to run this processing task continuously, as and when the data becomes available?
This is a requirement because, along with displaying the data, the web app also monitors it against threshold limits and sends notifications.
Right now, it can only send notifications while it is open in a client browser.
You should take a look at SignalR. It is a library for real-time communication; there are many strategies for making real-time communication happen, and SignalR implements them for you.
It can be used easily with ASP.NET. You should look at some sample code to see how to implement it in your case, but this is your game.
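As a rough idea of the shape, a hub plus a broadcast helper might look like this; the hub, method, and client handler names are all illustrative:

```csharp
using Microsoft.AspNet.SignalR;

// Minimal SignalR sketch (hub and handler names are illustrative).
// Whatever reads the Azure queue calls Broadcast, and every connected
// browser receives the update without polling or relaunching the page.
public class QueueDataHub : Hub
{
    // No server methods needed just to push; clients only subscribe.
}

public static class QueueDataNotifier
{
    public static void Broadcast(string message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<QueueDataHub>();
        context.Clients.All.updateData(message); // "updateData" is the client-side handler
    }
}
```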
From the couple of suggestions here, I found Azure WebJobs ideal for my task. A little more research shows an Azure Worker Role would be useful too, but it looks more difficult to set up.
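For reference, a WebJobs queue-triggered function is only a few lines; the queue name below is illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The WebJobs SDK runs this whenever a message lands on the queue
    // ("machine-data" is an illustrative name), so processing and threshold
    // checks no longer depend on anyone having the web app open.
    public static void ProcessQueueMessage(
        [QueueTrigger("machine-data")] string message, TextWriter log)
    {
        log.WriteLine("Received: " + message);
        // Parse the message, check threshold limits, send notifications...
    }
}
```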
I am trying to create a monitoring application so our operations department can be proactive when dealing with systems that are encountering problems. I created an app that does the job, but it has some drawbacks:
Each running copy of the app issues its own pings to the systems, when one ping would suffice.
I have three different APIs for getting the status of our systems, depending on whether the host is IIS, WCF, or a desktop app.
To fix the first issue, I was going to create a database: an interim service (a monitor app) would make the pings, and each copy of the app would query the database for updates. After thinking about this, I realized the second issue and decided it is a problem for the future.
So my thought was, rather than have the interim application ping the systems, to simply have each system expose one interface through which it posts its status to the database every x interval. But then I ran into a problem with our WCF and IIS services: these services can sit for days without anyone actually using them. How would I make these services keep posting their data?
My questions are:
Is it better to have data REQUESTED or PUSHED in this type of situation?
If REQUESTED, what is a suggested practice for maintaining a single API across multiple platforms (IIS, WCF, desktop)?
If PUSHED, how would you handle the web services, which are instance-based and not continuously running?
For web services, one solution might be to implement a health-check endpoint: something that you can simply call, like webservice/isServiceUp.
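A minimal WCF sketch of such an endpoint, assuming it is hosted with webHttpBinding so it answers plain HTTP GETs (all names are illustrative):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

// Minimal health-check sketch (names illustrative). Hosted with
// webHttpBinding, the monitor can hit .../isServiceUp over plain HTTP;
// anything other than a prompt "true" counts as down.
[ServiceContract]
public interface IHealthCheck
{
    [OperationContract]
    [WebGet(UriTemplate = "/isServiceUp")]
    bool IsServiceUp();
}

public class HealthCheck : IHealthCheck
{
    public bool IsServiceUp()
    {
        // Optionally verify dependencies here (database reachable, etc.)
        // before answering.
        return true;
    }
}
```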
I prefer that this information be PULLED. If a service/web service/application is down, you can't possibly rely on it to write something to the DB... it would be possible, but highly risky and unreliable.
In a real-world situation it is a little more complicated than that, because something might happen between your service host and the consumer (a DNS problem, for example), in which case you would also want to handle not getting anything back from isServiceUp (no true, no false, just a 400-level error)...
Consider using your load balancer to check on apps/web services and proactively switch to a different IP in case of issues... it is a possibility.
I have two servers (and there could be more later) with a WCF service, both behind a load balancer. The client application, on multiple IIS servers (also load-balanced), calls the WCF service to perform some action, let's say a Save.
The same data, let's say client information, could be opened by several users at the same time.
The Save action can then be executed by several users at the same time, and the calls will go to different WCF servers.
I want that when a user calls Save from the UI, and there is already a Save in progress from another UI over the same client data, the second user is alerted about it.
For that, all the WCF instances would need to know about actions being executed on the other instances.
How can I synchronize data status between all the WCF server instances?
I don't want to share the data itself, just some status about it (opened, save in progress, something like that).
Please advise. Thanks.
I'm working with C#/.NET 4.
Added: the WCF service is actually hosted inside a Windows service.
The problem you are having is one of resource management.
You are trying to work out how your service clients can all somehow know what open handles the others have on internal state within your service, and then force them to orchestrate in order to handle this.
Pushing this responsibility onto your clients is going to make things much more complex in the long run. Ideally, clients should be able to call your service in as straightforward a manner as possible, without having to know about any other clients who may be using the service. At most, clients should be expected to retry if their call was unsuccessful.
Normally you would handle these situations with locking: the clients can all attempt to commit changes simultaneously, and all but one will have to retry based on an exception raised or a specific response sent back (perhaps including the updated ClientInformation object), depending on how you handle it.
If you absolutely have to implement these notifications, then you could look at the WCF duplex bindings, whereby your service exposes a callback contract that lets clients register a notification handler, which the service can use to notify all clients on a different channel from the one the request was made on. These, however, are complex to set up at best and do not scale particularly well.
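To give a feel for it, a duplex contract might be sketched like this (all names are illustrative, and a session-capable binding such as netTcpBinding is assumed):

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Minimal duplex sketch (all names illustrative). Clients call Subscribe
// once; the service keeps each callback channel and can later push a
// "save in progress" notification back to every subscriber.
[ServiceContract(CallbackContract = typeof(ISaveStatusCallback))]
public interface IClientDataService
{
    [OperationContract]
    void Subscribe();

    [OperationContract]
    void Save(int clientId);
}

public interface ISaveStatusCallback
{
    [OperationContract(IsOneWay = true)]
    void SaveInProgress(int clientId, string byUser);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class ClientDataService : IClientDataService
{
    private readonly List<ISaveStatusCallback> _subscribers = new List<ISaveStatusCallback>();

    public void Subscribe()
    {
        // Capture this client's callback channel for later notifications.
        _subscribers.Add(OperationContext.Current.GetCallbackChannel<ISaveStatusCallback>());
    }

    public void Save(int clientId)
    {
        foreach (var subscriber in _subscribers)
            subscriber.SaveInProgress(clientId, "another user"); // illustrative payload

        // ...then perform the actual save.
    }
}
```

Note that each service instance only knows its own subscribers, which is part of why this approach scales poorly across load-balanced hosts.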
EDIT
In response to your comment: the other half of your question was about sharing state across load-balanced service endpoints.
Load balancing WCF is just like load balancing websites: if you need to share state across instances, you must configure some backing data store that all the services have access to.
In your case the obvious place would be the database. You just need to make sure that concurrency/deadlock-related problems are caught and handled in your service code (using something like NHibernate to persist the data can help with this). I just don't see that you have a real problem here.
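A rough sketch of that concurrency check, assuming SQL Server with a rowversion column; ClientInformation and every other name here is illustrative:

```csharp
using System.Data.SqlClient;
using System.ServiceModel;

// Rough sketch of optimistic concurrency in the service (all names
// illustrative). A rowversion column detects a competing save no matter
// which load-balanced WCF instance handled it, because the database is
// shared by all of them.
public class ClientInformation
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; } // read when the record was opened
}

public class ClientSaveService
{
    private readonly string _connectionString = "..."; // your shared database

    public void SaveClient(ClientInformation info)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            var command = new SqlCommand(
                @"UPDATE Clients SET Name = @name
                  WHERE Id = @id AND RowVersion = @rowVersion", connection);
            command.Parameters.AddWithValue("@name", info.Name);
            command.Parameters.AddWithValue("@id", info.Id);
            command.Parameters.AddWithValue("@rowVersion", info.RowVersion);

            if (command.ExecuteNonQuery() == 0)
            {
                // Zero rows updated: another user saved first. Fault back so
                // the second UI can alert its user and reload the data.
                throw new FaultException("This client was just modified by another user.");
            }
        }
    }
}
```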
I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I create a disconnected architecture that keeps my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards the requests to the web service. If the web service is down, the request stays queued. The queue mechanism should check every so often whether the web service is alive, and when it is, forward everything it has stored up.
Alternatively, you can go directly to the web service first, and only post to the queue in the event of an initial failure. Either way, the queue still needs to check on the web service every so often.
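A bare-bones sketch of such a queue, assuming a timer-driven retry; IEngineService, Request, and the 30-second interval are all illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Bare-bones sketch of a client-side outbound queue (all names illustrative).
// Requests pile up while the service is down and drain once it is back.
public interface IEngineService { void Send(Request request); } // the service proxy
public class Request { public string Payload { get; set; } }

public class OutboundQueue
{
    private readonly ConcurrentQueue<Request> _pending = new ConcurrentQueue<Request>();
    private readonly IEngineService _service;
    private readonly Timer _retryTimer;
    private readonly object _sync = new object();

    public OutboundQueue(IEngineService service)
    {
        _service = service;
        // Every 30 seconds, check whether the service is reachable again.
        _retryTimer = new Timer(_ => Flush(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    public void Enqueue(Request request)
    {
        _pending.Enqueue(request);
        Flush(); // try to go direct first, falling back to the queue on failure
    }

    private void Flush()
    {
        lock (_sync) // one drain at a time
        {
            Request request;
            while (_pending.TryPeek(out request))
            {
                try
                {
                    _service.Send(request);
                    _pending.TryDequeue(out request); // delivered; drop it
                }
                catch (Exception)
                {
                    return; // service still down; keep everything queued
                }
            }
        }
    }
}
```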
EDIT:
Just to clarify: yes, all of the business logic would need to be available client side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
- They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
- They connect to multiple back-end systems to exchange data with them.
- They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
- They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
- They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement of "it really depends". Let me give you an example. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displays a unified view of the documents to the user, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server to do the transforms there (applying business logic like normalization of the data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, determined additional rules, such as which salesman to route it to or what pricing discounts apply... these might make sense to sit on the server.
The question you will need to ask is: what functionality do I need for my application to function when the server is not there? Anything which falls within this category will need to be client side.
I've also never used ClickOnce deployment; we had to roll our own updater, which is a tale for another thread, but you should be able to send down updates pretty easily. You could also put your business logic in an assembly that you load from a URL, so while it runs client side it can still be updated easily.
You can do all your processing offline and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation on both the server and the client. This way you will have a single code base that serves both.
You can use frameworks like CSLA.NET to simplify this validation process.
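Even without a framework, the core idea can be as simple as a rules class in a shared assembly that both the WCF service and the WinForms client reference; a sketch with illustrative names:

```csharp
using System.Collections.Generic;

// Sketch of validation shared between client and server (names illustrative):
// the rules live in one common assembly referenced from both sides, so they
// can never drift apart.
public class Customer
{
    public string Name { get; set; }
    public decimal CreditLimit { get; set; }
}

public static class CustomerRules
{
    public static IList<string> Validate(Customer customer)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(customer.Name))
            errors.Add("Name is required.");
        if (customer.CreditLimit < 0)
            errors.Add("Credit limit cannot be negative.");
        return errors;
    }
}
```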