I'm currently evaluating the options for adding a web UI to a .NET 4.5 application that is installed and running as a Windows Service.
The basic idea is that the service application runs 24/7, collects various data from network devices, and persists it in a local data store (essentially, it monitors these devices).
The web UI is used for data presentation and analysis and for sending command & control messages to the backend (i.e. the service layer), which in turn forwards these commands to the network devices.
The big difference from a "classic" multi-tier web application is that the service part has to keep running even when no user is interacting with it through the web UI (hence the idea of running it as a Windows service).
I currently do not know how to mix this web part (request/response pattern, short-running) with the service part (polling the network, long-running, 24/7).
My ideas so far:
1. Embed IIS Core (or any other web server) into the service application: this would probably work, but the embedded web server would not know about any existing IIS configuration on the same machine, which makes integration and configuration less than straightforward (e.g. ports, authentication, SSL, etc.).
2. Deploy an ASP.NET application on IIS plus a separate service application: the ASP.NET application would then just act as a facade to the service and would need a proper, reliable way to communicate with the service application (two-way IPC?).
Currently, option 2 feels like the best choice.
If so, are there any IPC recommendations?
Thanks!
The simplest (and probably the worst) way is to embed all your logic in IIS and disable shutdown of your app (this way the IIS app will run like a Windows service).
I currently do not know how to mix this web part (request/response pattern, short-running) with the service part (polling the network, long-running, 24/7).
You shouldn't. For the second option, I would suggest decoupling your service app and your web UI app as much as possible. This minimizes dependencies and IPC (and therefore improves scalability and stability).
The Windows service can then be reduced to its minimal role: collecting data from the network devices and persisting it in the local data store (the only feature that requires 24/7 operation). The IIS app can implement all UI-related features (data presentation and analysis) plus user commands, so you don't need to delegate any presentation features to the Windows service app. IPC is used only for sending command & control messages to the backend (i.e. the service layer), which in turn forwards these commands to the network devices.
I suggest using a message-queue model (ZeroMQ, MSMQ, RabbitMQ, etc.), which gives you asynchronous IPC with all its advantages. Alternatively, it is possible to use the database itself for IPC: e.g. push the messages into some table (or collection, if using NoSQL) and have the Windows service app read them. This is an alternative to message queues, but in most cases an inferior one.
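The database-as-queue variant can be sketched with an in-memory queue standing in for the shared table. The `Command` shape and `CommandStore` name below are invented for illustration; in a real deployment `Push` would be an INSERT from the web app and `DrainPending` a SELECT plus delete/flag from the service's polling loop.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical command record as it might appear in a "Commands" table.
public record Command(Guid Id, string Target, string Payload);

// Stand-in for the shared database table. In production, Push would be an
// INSERT issued by the web app, and DrainPending a SELECT of unprocessed
// rows (then marking them processed) issued by the Windows service.
public class CommandStore
{
    private readonly ConcurrentQueue<Command> _pending = new();

    // Web-app side: enqueue a command for the service.
    public void Push(Command cmd) => _pending.Enqueue(cmd);

    // Service side: drain whatever accumulated since the last polling cycle.
    public IReadOnlyList<Command> DrainPending()
    {
        var batch = new List<Command>();
        while (_pending.TryDequeue(out var cmd))
            batch.Add(cmd);
        return batch;
    }
}

public static class Demo
{
    public static void Main()
    {
        var store = new CommandStore();
        store.Push(new Command(Guid.NewGuid(), "device-1", "reboot"));
        store.Push(new Command(Guid.NewGuid(), "device-2", "set-interval:30"));

        // The service's polling loop would call this on a timer.
        Console.WriteLine(store.DrainPending().Count); // 2
        Console.WriteLine(store.DrainPending().Count); // 0
    }
}
```

The length-prefix-free, batch-oriented shape keeps each poll cheap; a message queue gives you the same decoupling with push semantics instead of polling.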
Related
I want to design an application that serves a REST API and also has a continuous process running that connects to websockets and processes the incoming data.
I have two approaches in mind:
Create a Windows Service with Kestrel running on one thread and the websocket listener on another. The API would be made accessible via an IIS reverse proxy.
Create the REST API with ASP.NET directly hosted in IIS and utilize the BackgroundService Class for the websocket listener as described here.
As I am new to the Windows Ecosystem I'd like to know if one of the approaches is more suitable or if I'm going about it the wrong way.
My understanding is that the Windows service approach should just work, but it seems more elaborate.
I'm unsure about the BackgroundService approach. The background process should really run 24/7. Are BackgroundServices designed for this? The docs always talk about long-running tasks, but does that also cover infinitely running ones, with restart on failure etc.?
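A BackgroundService does not by itself restart a failed ExecuteAsync; depending on the .NET version, an unhandled exception is either swallowed or stops the host. The usual answer is to make the loop itself the restart mechanism. A sketch of that pattern, written without the hosting package so it stands alone (the doWork delegate and the back-off are placeholders):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class ResilientWorker
{
    // In an ASP.NET Core BackgroundService, this loop would live inside
    // ExecuteAsync(CancellationToken); the pattern is identical.
    public async Task RunAsync(Func<CancellationToken, Task> doWork,
                               TimeSpan retryDelay,
                               CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await doWork(stoppingToken); // e.g. connect + read websocket
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                break; // orderly shutdown
            }
            catch (Exception ex)
            {
                // Log and restart after a back-off instead of dying.
                Console.Error.WriteLine($"worker failed: {ex.Message}, retrying");
                try { await Task.Delay(retryDelay, stoppingToken); }
                catch (OperationCanceledException) { break; }
            }
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        int attempts = 0;
        var cts = new CancellationTokenSource();
        new ResilientWorker().RunAsync(ct =>
        {
            attempts++;
            if (attempts == 3) cts.Cancel();              // simulate shutdown
            throw new InvalidOperationException("boom");  // simulated failure
        }, TimeSpan.FromMilliseconds(10), cts.Token).GetAwaiter().GetResult();
        Console.WriteLine($"stopped after {attempts} attempts"); // stopped after 3 attempts
    }
}
```

The same loop works identically inside a Windows service, so this concern alone does not force either hosting choice.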
I'd recommend hosting the continuous process in a Windows service, as you have much more control over the lifecycle.
With a BackgroundService hosted in IIS, the process is controlled by IIS. In this case it might be recycled from time to time, or terminated if idle for some time. You can control this behavior with some configuration settings, but especially in combination with ASP.NET Core, the IIS process might be running while the underlying Kestrel service is only started when a request hits the website.
If the two components do not rely on each other, you could also split them and have the best of both worlds: the web application hosted in IIS and the websocket listener running in a Windows service.
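For completeness: if the background work does stay in IIS, the recycling and idle-timeout behavior mentioned above is governed by app-pool settings, roughly as follows (a sketch; the pool name is a placeholder, and `startMode="AlwaysRunning"` is only useful for ASP.NET Core together with the Application Initialization feature):

```xml
<!-- applicationHost.config (IIS): keep the worker process alive -->
<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning">
    <!-- 00:00:00 disables the idle timeout -->
    <processModel idleTimeout="00:00:00" />
    <!-- 00:00:00 disables the scheduled periodic recycle -->
    <recycling>
      <periodicRestart time="00:00:00" />
    </recycling>
  </add>
</applicationPools>
```

Even with these settings, IIS can still recycle the pool for other reasons (configuration changes, memory limits), which is why the Windows service is the safer host for a strict 24/7 process.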
Our current application consists of two solutions:
Windows Service
This one takes care of communication between devices (i.e. IoT and other devices, over different communication protocols); this service also contains some logic and runs 24/7; it occasionally writes to a database (SQL or Influx).
Web Interface
The web interface shows some of the database information, but can also get information from the Windows service (live data); we currently use RabbitMQ with RPC for this, but it is far from ideal. We use TypeScript to call back to a controller; from the controller we RPC to the Windows service and back again.
We are currently looking at how to evolve this into a more robust solution with fewer transfer objects in between, while maintaining security, as the web interface has login credentials.
Ideally we would replace all of it with SignalR, as this would also make the client side easier; we currently have a lot of TypeScript to do the Rx calls.
My particular concern would be security when connecting SignalR directly to the Windows service.
I am creating a client application that downloads and displays market data from Yahoo! for a university project, but that also sends out notifications to mobiles (so far using Google Cloud Messaging). It is currently a WPF client whose "server" is a class library, and this is working. What I was wondering is whether you can mix this server with a WCF service: I was planning to use the WCF service for registering devices, as well as for accepting and parsing commands.
So I would call .Start() on my server object, and it would be constantly running in the background while a WCF REST service runs alongside it. Or would I be better off simply having a thread running on the server that can accept input? Sorry if this is confusing; I'm just wondering whether it can be done, or has been done before, or if you have any advice. :)
Just to explain a bit better
The client front end and the "server" are running on the same machine. I was calling it a server because it is not only updating the front end but also sending out GCM notifications at the same time. I was wondering whether a WCF service could be added to make it simpler to handle adding devices to a database (the "server" reads a list of device registration IDs from a database and sends notifications to these) by allowing an Android app to submit its details via REST or something similar.
I would explore wrapping the class library in a Windows Service (which is essentially a process that runs continuously, and can be stopped/started/paused) and keep your WCF service as a web service for client communication.
How the WCF service communicates with the Windows service is up to you: whether you store the data in a shared database, keep it in memory and have another WCF layer communicating between the two, etc. A shared database would be the most straightforward, especially if you want to persist the data for use by other apps/services as well.
A WCF service would be useful if you had one notification service on your server with multiple WPF client applications connecting to it. If you have just one application running on the same server, I'm not sure it would be worth the overhead.
The usual pattern is to host a WCF service in IIS; that way it always starts whenever the first request is received. WCF is very flexible, though, so you can also host it in a Windows service, a console application, etc.
I have a basic Windows service which does some data conversions. There's a decoupled GUI which allows the user to change some configuration, and these changes need to be propagated to the running Windows service. Both run on the same box and are implemented in C#/.NET. What is the best way to communicate with the service, other than low-level interprocess communication mechanisms like mutexes, events, etc.?
Also, I'd like to avoid implementing it as a web service, because it isn't one.
I would use a WCF Service to communicate.
You can use the netNamedPipe binding, but that might not work on Windows Server 2008/Windows 7, since the service runs in session 0 while all user code runs in sessions greater than 0, and they would not be able to communicate.
So I used netTcpBinding in my own project.
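For reference, the service-side endpoint for such a setup is typically declared in configuration rather than code; a sketch (the service name, contract name, and port are placeholders):

```xml
<!-- app.config of the self-hosting Windows service; names are illustrative -->
<system.serviceModel>
  <services>
    <service name="MyApp.ConfigService">
      <endpoint address="net.tcp://localhost:8523/config"
                binding="netTcpBinding"
                contract="MyApp.IConfigService" />
    </service>
  </services>
</system.serviceModel>
```

Keeping the binding in configuration means you can later switch to another binding without recompiling either side, which is the flexibility the WCF answers above are pointing at.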
If the processes are not going to move to different machines, you can use memory mapped files as the communication mechanism.
If that's not the case, WCF is a good option.
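A minimal sketch of the memory-mapped-file option, using a file-backed map so it also works outside Windows (the path and message are made up; real cross-process use would add a synchronization primitive such as an EventWaitHandle so the reader knows when new data is available):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

public static class MmfDemo
{
    public static void Main()
    {
        // Both processes would open the same file path. A file-backed map
        // also works on non-Windows, unlike named (pagefile-backed) maps.
        string path = Path.Combine(Path.GetTempPath(), "ipc-demo.map");

        using (var mmf = MemoryMappedFile.CreateFromFile(
                   path, FileMode.Create, null, 1024))
        {
            // Writer side: 4-byte length prefix, then UTF-8 payload.
            using (var writer = mmf.CreateViewAccessor())
            {
                byte[] msg = Encoding.UTF8.GetBytes("reload-config");
                writer.Write(0, msg.Length);
                writer.WriteArray(4, msg, 0, msg.Length);
            }

            // Reader side (a second process would do exactly this).
            using (var reader = mmf.CreateViewAccessor())
            {
                int len = reader.ReadInt32(0);
                byte[] buf = new byte[len];
                reader.ReadArray(4, buf, 0, len);
                Console.WriteLine(Encoding.UTF8.GetString(buf)); // reload-config
            }
        }
        File.Delete(path);
    }
}
```

This is the lowest-level of the options discussed; it trades the convenience of WCF for zero-copy speed, at the cost of doing framing and signaling yourself.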
Since you're dealing with configuration data for the service, I would persist it somewhere. Database, file, registry, etc. UI writes the information and the service reads it when appropriate (e.g. each run).
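A sketch of the file-based variant of that idea; the UiConfig shape and the file path are invented for illustration, and the write goes through a temp file so the service never reads a half-written config:

```csharp
using System;
using System.IO;
using System.Text.Json;

// Hypothetical settings the UI edits and the service consumes.
public record UiConfig(int PollIntervalSeconds, bool VerboseLogging);

public static class ConfigFileDemo
{
    static readonly string ConfigPath =
        Path.Combine(Path.GetTempPath(), "myservice-config.json");

    // UI side: write to a temp file, then move into place, so the
    // service never observes a partially written config.
    public static void Save(UiConfig cfg)
    {
        string tmp = ConfigPath + ".tmp";
        File.WriteAllText(tmp, JsonSerializer.Serialize(cfg));
        File.Move(tmp, ConfigPath, overwrite: true);
    }

    // Service side: re-read at the start of each run.
    public static UiConfig Load() =>
        JsonSerializer.Deserialize<UiConfig>(File.ReadAllText(ConfigPath))!;

    public static void Main()
    {
        Save(new UiConfig(30, true));
        Console.WriteLine(Load().PollIntervalSeconds); // 30
    }
}
```

If the service should react immediately rather than on its next run, a FileSystemWatcher on the config file is the usual complement to this scheme.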
I have a C# Forms application that connects to an electronic device using the serial port.
The class SerialCommunicationManager hooks up to the serial port on application startup and handles the dirty business of talking to the device.
What I would like is to expose the following methods.
Write()
SerialDataReceived event
SerialDataTransmitted event
Primarily I want to expose these methods to a local website running on the same machine, but in the future I can imagine external applications needing them as well.
What is the easiest way to expose the functionality?
TCP/IP client/server?
Web service? (Can I create a web service inside a WinForm?)
other?
Big thanks
//David
I would recommend self-hosting a WCF service. This gives you a huge amount of flexibility in how you serve and expose this information, including being able to change the method by which it is served via configuration.
It seems to me that if you would like to do it properly, you should break apart your Forms app and create:
a service that handles the serial communication and has an API exposed through remoting
a Forms app that uses that API to talk to the service
Then, depending on the locality of your web site: if it will remain local (or near local, on the LAN):
the web site should use remoting to call the service
Otherwise, if you plan to have multiple web sites:
a web service hosted inside IIS that wraps the remoting API
a web site that uses that web service
However, if that is too much work, just use remoting and expose the needed methods to the web site.
In a recent project we did the following:
Write a Console application (or Windows Service, depending on your needs) that communicates with the electronic device.
Make the Console application host a .NET 4 WCF service.
Write a .NET 2 Windows Forms application that communicates with the console application through web services.
In this context, I could imagine the website you mention also using web services (WSDL) to communicate with the console application.