Client App vs Windows Service vs? - c#

I am currently looking into re-architecting a set of applications for our organization. We currently have 10-15 odd stand-alone applications that communicate with each other and provide an intermediate layer between client software and hardware.
The problem with the current model is the sheer number of individual apps: they add memory overhead and communication latency, bloat the system, and make it difficult to recover when any one of them crashes.
I am thinking of combining the applications into 1-2 logical units to address some of these issues. The dilemma is how best to do this:
Windows Service
UI Application
Both?
The goal is an always-on system that handles all of the client-hardware communication but also offers a rich admin configuration UI that can talk to each of the system's components and provide configuration and related capabilities. A WinForms/WPF application would give admins easy access to system configuration and provide real-time feedback (camera feed, etc.), but leaves the system open to an admin accidentally closing the window. Having a service do all of the work is great, but I am not sure how to provide a rich admin UI that interacts with and changes that service.
Any ideas or links worth reading?
Thanks!

Just thought I'd update my own question for anyone else who might have a similar question.
What I went with is a centralized Windows Service that exposes a number of WCF endpoints for its child components. Sitting on top of that is a UI application that communicates with the Windows Service through the WCF endpoints. To make building and debugging easier, the Windows Service is configured to run as a console application in debug builds.
This solution seems to work great so far!
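For anyone who wants to copy the pattern, here is a minimal sketch of a service that hosts a WCF endpoint and drops down to a console app in debug builds. The contract, class names and pipe address below are invented for illustration; they are not the actual code from my system.

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;

    [ServiceContract]
    public interface IConfigService
    {
        [OperationContract]
        string GetSetting(string key);              // hypothetical operation
    }

    public class ConfigService : IConfigService
    {
        public string GetSetting(string key) => "value-for-" + key;   // stub
    }

    public class HardwareHostService : ServiceBase
    {
        private ServiceHost _host;

        // Shared start/stop logic so the same code runs as a service or a console app.
        internal void StartUp()
        {
            _host = new ServiceHost(typeof(ConfigService),
                new Uri("net.pipe://localhost/HardwareHost"));          // illustrative address
            _host.AddServiceEndpoint(typeof(IConfigService),
                new NetNamedPipeBinding(), "config");
            _host.Open();
        }

        internal void ShutDown() => _host?.Close();

        protected override void OnStart(string[] args) => StartUp();
        protected override void OnStop() => ShutDown();

        internal static void Main()
        {
            var svc = new HardwareHostService();
    #if DEBUG
            // Debug builds run as a plain console app for easy F5 debugging.
            svc.StartUp();
            Console.WriteLine("Service running as a console app. Press Enter to stop.");
            Console.ReadLine();
            svc.ShutDown();
    #else
            ServiceBase.Run(svc);
    #endif
        }
    }

(The project's output type also has to be set to Console Application for the debug path to show a window.)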

How to decide between developing a web application and a desktop application [closed]

I am a software engineering intern at a manufacturing company, and they want me to develop an application for the company. They are leaning towards a web application; however, I want to know whether a desktop application would fit the job better. So I have been googling and looking through Stack Overflow to find out the pros and cons of desktop applications versus web applications. The following is essentially what I found:
Quick disclaimer: I have a background in C# and WPF, so I am a bit biased, as it would be easier for me to develop a desktop application. I have no web experience, so there is nothing I can really say in that area, which is why I want to know whether this application is better suited as a web application or a desktop application. I am absolutely open to learning PHP and web development to expand my abilities. I have started (a bit) looking into developing the web application using PHP 7 with the Laravel framework.
Pros of desktop applications:
Typically faster than web applications (assuming the web application will perform complex queries, calculations, etc., and not just display markup)
Development of GUIs is faster
More secure as desktop applications are private by default.
More available controls, allowing a richer and more interactive experience for the user (or at least, these controls are easier/faster to implement in desktop applications than in a web application)
Can take advantage of user hardware.
Cons of desktop applications:
Use/deployment is limited by system (However, this should not be a problem because all our systems are Windows based.)
Updates and installation must be manually implemented.
If every client desktop gets a database connection, scaling suffers as the database comes under heavy load. (However, this probably will not be the case since we won't have more than 500 users.)
Pros of web applications:
Cross platform (No need to deal with different operating systems) so it is easily portable
Development is quick and easy
Deployment is easy as updates are automatic and server side.
Large community support and available frameworks.
Cons of web applications:
Larger overhead (Applications tend to be slower due to need to transmit data across the internet).
Need to deal with different browsers. JavaScript most likely needs to be tweaked to work perfectly in one browser (Chrome, Firefox, etc.) and will not be perfect in the others. (However, this is not that big of a deal.)
Security is an issue since data will be public.
Please let me know if any of the above is outdated (most of the posts I found were from 2011 or prior) or wrong. Also, if there is any other pro/con to consider.
Moving on to the application description....
Background on the company: We build and process dozens of different parts every day. For each type of part, after X amount of the part is processed, a sample needs to be taken for inspection. So for example, part Y has 3 samples taken every 120 minutes to be inspected (Because the machine typically finishes processing X amounts in 120 minutes). The inspection results (measurement data) are then stored in the database (MySQL database).
General summary of the application's purpose:
View the schematics of all the parts we design (We store all the schematics as pdfs on a network drive, so this is simply just pulling up the specific pdf requested from the drive and displaying it onto the application).
View/update the status of all the machines in the company (What parts are they working on, are they online/offline, etc). A certain user (Inspector) will use this application to update machine status/information. Then another user (Operator) will use the application to view the statuses.
Monitor part inspections. For every machine and part being processed, there will be a timer to let an Operator user know when a certain part needs to be submitted for inspection. Upon part submission, an Inspector will receive a notification to inspect the part, and after they tell the application that the inspection is complete, the timer will restart to let the Operator know when they next need to submit a part.
The application will calculate statistical data (For example, Cpk values) from the part measurements obtained from inspection results and display the statistical data along with a graph/chart.
I hope I explained all of this clearly enough. Some other things to note: from my understanding, the users will not need remote access; this application will pretty much only be used on the company site. Also, the original reason the company wanted a web application was that operators will be using tablets, and the tablets they acquired were originally Android-based. However, they have decided to switch to Windows Surface tablets, so WPF applications are now a possibility.
With all of this being said, I am really looking for input on what route people with more experience would recommend. I am still in college so please forgive my lack of knowledge/experience. What else should I be thinking about when deciding between a web application and desktop application?
Here are some of the pages I have seen while pondering this topic:
Advantages of web applications over desktop applications
https://www.quora.com/How-much-different-is-it-to-build-a-web-application-vs-a-desktop-application
https://www.quora.com/What-are-the-advantages-and-disadvantages-of-web-based-application-development-vs-desktop-application-development
There were more Stack Overflow pages, but the ones listed above pretty much cover everything the other pages stated.
EDIT: Seems like the web application is winning so far (not that I mind at all; I am actually excited to develop a web application based on what I am hearing). Is there anyone who would rather build a desktop application? If so, why?
I'm inherently biased against web apps. They're difficult to get right because of browser differences, and they're typically insecure (by accident, though). The platform sucks (JavaScript and the bazillion libraries from random people/orgs; "everything is a string"). I could go on.
However, it's undeniably the best platform for reaching a wide, public audience and allowing continual updates.
In a corporate environment the advantages do tend to go away, but not entirely. Updates, for example, can generally be achieved by storing all your .exe and DLL files in a shared directory. As you say, you can build a much richer UI faster and cheaper on the Windows platform.
With regard to your architecture, something that has worked for me in a similar situation is to have a Windows front end, but put the guts of the business logic, data access (connection pooling) and processing on a stateless web server (or two), accessed from the UI via web services (protocol of your choice; I prefer SOAP because of WCF and WSDL, but plenty of folks won't).
This allows for centralised data access and a place to put your one-off batch jobs or calculations that can then be shared. It also has the advantage that if you need to do something really intensive, not every client machine has to have that capability.
Your situation seems to fit this model, though without a lot of insider knowledge this is primarily opinion; still, it's possibly one to consider.
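To make the shape of that concrete, here is a rough sketch of the seam between a WPF/WinForms front end and a stateless WCF server. Every name, operation and address in it is invented for illustration; it is not a prescription for your data model.

    using System;
    using System.ServiceModel;

    // Contract shared between the desktop client and the stateless app server.
    [ServiceContract]
    public interface IInspectionService
    {
        [OperationContract]
        void SubmitInspection(string partNumber, double[] measurements);

        [OperationContract]
        double GetCpk(string partNumber);
    }

    public static class InspectionClient
    {
        public static double FetchCpk(string partNumber)
        {
            // BasicHttpBinding = plain SOAP, so WSDL tooling works out of the box.
            var factory = new ChannelFactory<IInspectionService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://appserver:8080/inspection"));   // hypothetical address
            IInspectionService proxy = factory.CreateChannel();
            try
            {
                return proxy.GetCpk(partNumber);
            }
            finally
            {
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }
    }

The point is that the heavy lifting (queries, statistics, batch jobs) lives behind the service contract on the server, and the desktop app stays a thin but rich UI.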
Sounds like monitoring of an assembly or similar manufacturing work process to me.
If I had to build this application, the first thing I would do is research whether the functionality you want is possible and easy to develop with the programming language you will use.
For example, if I chose to develop it as a web application:
Larger overhead (applications tend to be slower due to the need to transmit data across the internet).
You can use an intranet and a server with good specs.
Need to deal with different browsers. JavaScript most likely needs to be tweaked to work perfectly in one browser (Chrome, Firefox, etc.) and will not be perfect in the others. (However, this is not that big of a deal.)
Then standardize on one browser for your workplace.
View the schematics of all the parts we design (we store all the schematics as PDFs on a network drive, so this is simply pulling up the specific PDF requested from the drive and displaying it in the application).
You can upload the PDFs to the server and view them in the browser using a PDF viewer library such as pdf.js.
View/update the status of all the machines in the company (what parts they are working on, whether they are online/offline, etc.). A certain user (Inspector) will use this application to update machine status/information. Then another user (Operator) will use the application to view the statuses.
Does each machine have an IP address?
Can you ping the machine to determine whether it is online, to ease this task?
If not, what is the inspector's schedule for inspecting the machines?
Of course, the inspector can log in to the system and update the machine status manually through the web application.
Monitor part inspections. So, for every machine and part being processed, there will be a timer to let an Operator user know when a certain part needs to be submitted for inspection. Upon part submission, an inspector will then receive a notification to inspect the part, and after letting the application know that they completed the inspection, the timer will restart to let the operator know when they next need to submit a part.
This sounds like a scheduling mechanism to ensure quality. You can build a timer with jQuery and use AJAX to send a notification to the operator with the specific data about the part that needs to be inspected.
The application will calculate statistical data (for example, Cpk values) from the part measurements obtained from inspection results and display the statistical data along with a graph/chart.
This one depends on your statistical formulas; you can use a charting library such as Highcharts for the graphs. (A quick sketch of the Cpk math is below.)
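Whichever stack you pick, the Cpk arithmetic itself is tiny. Here is a sketch in C# (the language already in play in this thread), assuming you have the measurements plus the lower/upper spec limits; note it uses the overall sample standard deviation, which strictly speaking gives Ppk rather than a within-subgroup Cpk.

    using System;
    using System.Linq;

    public static class ProcessCapability
    {
        // Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
        public static double Cpk(double[] measurements, double lowerSpec, double upperSpec)
        {
            double mean = measurements.Average();
            // Sample standard deviation (n - 1 in the denominator).
            double sigma = Math.Sqrt(
                measurements.Sum(x => (x - mean) * (x - mean)) / (measurements.Length - 1));

            double cpu = (upperSpec - mean) / (3 * sigma);
            double cpl = (mean - lowerSpec) / (3 * sigma);
            return Math.Min(cpu, cpl);
        }
    }

    // Example: ProcessCapability.Cpk(new[] { 9.98, 10.02, 10.01, 9.99 }, 9.90, 10.10)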
The second step, after you have confirmed that your chosen programming language can accomplish the tasks you want, is to design the database structure.
Quote by Linus Torvalds:
"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
Have a nice day, and good luck with your decision. Give it some good thought up front to avoid development problems in the future.

Is there any way to start a GUI application from a windows service on Windows 7?

I have done a lot of searching to find a way to start a GUI application from a Windows service on Windows 7. Most of what I have found is that on Windows 7, services run in a separate user session and cannot display any graphical interface to the current user. I'm wondering if there is any kind of workaround or a different way of accomplishing something like this. Can the service start a process in a different user session?
This change was made for a reason and not simply to annoy developers. The correct approach is to put your UI in a different program and communicate with the session through a pipe, or some other IPC mechanism. The recommendation that services do not present UI is more than 10 years old now.
You should really try to follow these rules, even though it may seem inconvenient to begin with. On the plus side, you will enjoy the benefit of keeping your service logic and UI logic separate.
If your service runs under the LOCALSYSTEM account, you can check "Allow service to interact with desktop", which exists for the benefit of legacy services that would fail if they could not show UI. But it won't help you anyway, because the UI will show in session 0, where it is never seen!
I recommend you take a read of the official Microsoft document describing session 0 isolation.
There is a way to do this.
If you need to show a simple message box, you can use the WTSSendMessage routine.
If you need complex UI elements, you can put them in a separate program and launch it with the CreateProcessAsUser routine.
In this sample provided by Microsoft you can see the process:
http://blogs.msdn.com/b/codefx/archive/2010/11/26/all-in-one-windows-service-code-samples.aspx
Windows Vista introduced what is called "Session 0 isolation", which in practice means that every service (except system services) runs in a separate, non-interactive session; the same applies on Windows 7. For this reason you cannot directly create a GUI from within the service, except if you run in legacy mode by flagging the Interact With Desktop option, which is not a good idea if you plan to keep running your service for some years to come.
As David Heffernan said, the best is to use a client-server architecture. WCF makes it easy to communicate with named pipes.
This page is a good starting point to read about Session 0 Isolation and this white paper is also very good.
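If you'd rather not pull in WCF, raw named pipes from System.IO.Pipes are enough for a simple service-to-UI channel. A minimal sketch follows; the pipe name and "protocol" are invented, and a real LocalSystem service will usually also need a PipeSecurity ACL so the logged-on user is allowed to connect.

    using System;
    using System.IO;
    using System.IO.Pipes;

    public static class ServiceUiPipe
    {
        // Service side: answer one request from the UI app running in the user's session.
        public static void ServeOnce()
        {
            using (var server = new NamedPipeServerStream("MyServicePipe", PipeDirection.InOut))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                using (var writer = new StreamWriter(server) { AutoFlush = true })
                {
                    string request = reader.ReadLine();          // e.g. "STATUS"
                    writer.WriteLine("OK: " + request);
                }
            }
        }

        // UI side: a normal desktop program (launched from the Startup folder or similar,
        // not by the service) that sends a request and reads the reply.
        public static string Query(string command)
        {
            using (var client = new NamedPipeClientStream(".", "MyServicePipe", PipeDirection.InOut))
            {
                client.Connect(2000);                            // wait up to 2 seconds
                using (var writer = new StreamWriter(client) { AutoFlush = true })
                using (var reader = new StreamReader(client))
                {
                    writer.WriteLine(command);
                    return reader.ReadLine();
                }
            }
        }
    }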

Is Silverlight scalable?

Is Silverlight more scalable than HTML? I found out that Silverlight code runs on the client except when it has to update or fetch data from the server. Will my application be more responsive if I develop it in Silverlight? I am not worried about end users installing Silverlight on the clients; I am in a position to install Silverlight on the clients myself.
I just need to know: if I develop a Silverlight application, will it make my application more scalable and/or responsive?
Silverlight applications are, for all intents and purposes, "fat client" applications delivered over the web. Their code is executed on the local machine, and communication with a data store is conducted over WCF web services, which are usually wrapped by RIA Services.
Silverlight applications are quite responsive once loaded. Building a well-performing UI in Silverlight may be a little more challenging than it would be in WPF, but not by much.
The question doesn't make sense. HTML by itself doesn't do anything. There is no interactivity, nothing that can be responsive.
Of course, web apps typically rely on server-side logic (which requires a network round-trip, causing a delay) and Javascript (which runs locally, and so is pretty snappy)... But HTML itself is just a language for describing documents. It doesn't do anything, and it isn't "responsive" or "scalable".
Ultimately, it's much the same either way: it won't make a noticeable difference in terms of responsiveness whether you implement your logic in Javascript on a HTML page or in Silverlight. And when you need to communicate with the server, it doesn't matter if the browser or the Silverlight plugin makes the request, in both cases it requires a network round-trip.
Scalable in terms of what? Bandwidth, server CPU?
In theory, moving the processing to the client will help server CPU, but your data requests will still need processing. Also, if your Silverlight app is bigger than the web page(s) it would replace, you may end up using more bandwidth. (You could use a CDN, though.)
In principle, though, if many pages are hit during one session, it would be fair to think it could be more scalable.
Other issues such as market reach come into play, of course, but a client-side app is an approach I have used to help with speed and costs.

Question about how to implement a c# host application with a plugin-like architecture

I want to have an application that works as a Host to many other small applications. Each one of those applications should work as kind of plugin to this main application. I call them plugins not in the sense they add something to the main application, but because they can only work with this Host application as they depend on some of its services.
My idea was to have each of those plugins run in a different app domain. The problem is that my host application exposes a set of services that the plugins will want to use, and from what I understand, making data flow in and out of different app domains is not that great of a thing.
On one hand I'd like them to behave as stand-alone applications (although, as I said, they will frequently need to use the host application's services), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it.
What is the best (.NET) approach to this kind of situation? Make them all run on the same AppDomain but each one in a different Thread? Use different AppDomains? One for each "plugin"? How would I make them communicate with the Host Application? Any other way of doing this?
Although speed is not an issue here, I wouldn't like for function calls to be that much slower than they are when we're working with just a regular .NET application.
Thanks
EDIT: Maybe I really need to use different AppDomains. From what I've been reading, loading assemblies in different AppDomains is the only way to later be able to unload them from the process.
I've implemented something along these lines using the Managed Add-in Framework (MAF) in the System.AddIn namespace. With MAF you package your addins as separate DLLs, which your host app can discover and launch in the host's app domain, in a single separate domain for all of the addins, or with each addin in its own domain. With shadow copy and separate domains you can even update an addin without shutting down your host app.
Your host app and the addins communicate through contracts that you derive from MAF interfaces. You can send objects back and forth between the host and the addins. The contracts provide a black-box interface between addins and the host, allowing you to change an addin's implementation unbeknownst to the host.
Addins can even communicate between themselves if the host tells them about each other. In my case a logging addin is shared by the others. This lets me drop in different loggers without touching the other addins or the host.
For my app, the addins use simple supervisor classes that launch worker classes on their own threads to do all of the processing. Workers catch their own exceptions, which they report to their supervisor through callback methods. Supervisors can restart workers or take other action. The host controls the supervisors through a command contract, which instructs them to start and stop workers and return data.
My host app is a Windows service. The worker threads have thrown exceptions for all the usual reasons (including bugs!), but the host app has never crashed in any of our installations. Since debugging services is inconvenient, addins allow me to build test apps that use the same contracts, with added assurance that I'm testing what I deploy.
Addins can expose UI elements, too. This is very helpful to me as I need to deploy a controller app with the host service, since services do not have UIs. Each plugin includes its own controller interface. The controller app itself is very simple - it loads the addins and displays their UI elements. This allows me to ship an updated addin with an updated interface and not have to ship a new controller.
Even though the controller and the host service use the same addins, they don't step on each other; in fact, they don't even know that another app is using the same addins. The controller and the host talk to each other through a shared database, but you could also use another inter-app mechanism like MSMQ. In the next version the host will be a WCF service with addins on the backend and web services for control.
This is a bit long-winded but I wanted to give you an idea of how versatile MAF is. It's not as complex as it might first look, and you can build rock-solid apps with it.
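To give a feel for the hosting side, here is roughly what discovery and activation look like with System.AddIn. The LoggerHostView name is invented, and the full MAF pipeline (contract, adapter and view assemblies laid out in the prescribed folder structure) is omitted.

    using System;
    using System.AddIn.Hosting;

    // Host's view of the addin; lives in the pipeline's HostView assembly.
    public abstract class LoggerHostView
    {
        public abstract void Log(string message);
    }

    public static class AddInLoader
    {
        public static LoggerHostView LoadLogger(string pipelineRoot)
        {
            // Rebuild the pipeline cache and surface warnings about incomplete pipelines.
            string[] warnings = AddInStore.Update(pipelineRoot);
            foreach (string warning in warnings)
                Console.WriteLine(warning);

            // Find addins that can be exposed to the host as LoggerHostView...
            var tokens = AddInStore.FindAddIns(typeof(LoggerHostView), pipelineRoot);
            if (tokens.Count == 0)
                throw new InvalidOperationException("No logger addin found.");

            // ...and activate the first one in its own AppDomain.
            return tokens[0].Activate<LoggerHostView>(AddInSecurityLevel.FullTrust);
        }
    }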
It depends on how much trust you wish to allow the extensions. I'm working on a similar application and I've chosen to mostly trust the extension code, as this greatly simplifies things. I call into the code from a common thread (in my case, the extensions don't really 'run' in any continuous loop, but rather execute certain tasks that the main application wants to do) and catch exceptions in this thread, so as to provide helpful warnings that loaded extensions are misbehaving.
Currently there's nothing keeping these extensions from launching their own threads that could throw and crash the whole app, but this where I've had to make the trade-off between safety and complexity. My application is not mission-critical (not like a web server or database server), so I consider it an acceptable risk that a buggy extension could bring down my application. I provide safeguards to more politely cover the most common failure cases and leave it to the plugin developers (who will mostly be in-house people for now anyway) to clean up their bugs.
In regard to unloading: yes, you can only unload the code and metadata for an assembly if you load it into a separate AppDomain. That said, unless you want to be loading and unloading frequently over the life of your program, the overhead of keeping the code in memory is not necessarily an issue. Any actual instances or resources using types from the assembly will still be cleaned up by the GC when you stop 'using' them, so the fact that the code is still in memory doesn't imply a memory leak.
If your main use case is a series of plugins that you locate once at startup and then provide an option to instantiate while your app is running, I suggest investigating the real memory footprint associated with loading all of them at start-up and keeping them loaded. If you use AppDomains, there will be additional overhead there as well (for instance, memory for the proxy objects and loaded/JITed code to support AppDomain marshaling). There will also be CPU overhead associated with the marshaling and attendant serialization.
In short, I would only use AppDomains if one of the following were true:
I want to get true isolation for the purposes of code security (i.e. I need to run untrusted code in an isolated way)
My app is mission-critical and I absolutely need to make sure that if a plugin fails, it can't bring down my core app.
I need to load and unload the same plugin repeatedly, in order to support dynamic changes to the DLL. This is mainly if my app can't stop running, but I want to hot-patch plugins while it's still running.
I would not prefer AppDomains for the sole purpose of reducing possible memory footprint by allowing Unload.
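For reference, the raw AppDomain mechanics are not much code. A minimal .NET Framework sketch; the plugin type here is a stand-in for a real plugin loaded from its own assembly.

    using System;

    // The plugin must derive from MarshalByRefObject so calls cross the
    // AppDomain boundary through a proxy instead of serializing the object.
    public class SamplePlugin : MarshalByRefObject
    {
        public string DoWork(string input) => "processed: " + input;
    }

    public static class PluginHost
    {
        public static void RunIsolated()
        {
            AppDomain domain = AppDomain.CreateDomain("PluginDomain");
            try
            {
                var plugin = (SamplePlugin)domain.CreateInstanceAndUnwrap(
                    typeof(SamplePlugin).Assembly.FullName,   // assembly to load into the new domain
                    typeof(SamplePlugin).FullName);           // type to instantiate

                Console.WriteLine(plugin.DoWork("sample"));
            }
            finally
            {
                // Unloading the domain is the only way to evict the plugin's code from the process.
                AppDomain.Unload(domain);
            }
        }
    }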
This is an interesting question.
My first idea was to simply implement interfaces from your host application in your plugin applications to allow them to communicate through Reflection, but this would only allow communication and would not bring a real "sandbox-like" architecture.
My second thought was to design a service-oriented platform. The host application would be a kind of "plugin broadcaster" that publishes your plugins in a ServiceHost on a different thread. Since this needs to be really responsive and require no-brainer configuration, the host application could communicate with the plugins over a named pipes channel (NetNamedPipeBinding in WCF), which means it only communicates over localhost pipes and does not need any network configuration or knowledge at all. I think this could be a good solution to your problem.
Regards.

High availability & scalability for C#

I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.
How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.
Edit:
I wonder if the idea given above (a load balancer splits work packages across PCs and checks for results) is feasible at all. If there is some kind of already implemented solution to this seemingly common problem, I'd love to use it.
Availability is a critical requirement.
I'd recommend looking at a Pull model of load-sharing, rather than a Push model. When pushing work, the coordinating server(s)/load-balancer must be aware of all the servers that are currently running in your system so that it knows where to forward requests; this must either be set in config or dynamically set (such as in the Publisher-Subscriber model), then constantly checked to detect if any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.
With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefits that each server monitors its own workload and picks up work as and when it is ready, and you can easily add or remove servers from this model without any change in config.
This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
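As a rough illustration of the pull model with MSMQ (System.Messaging), assuming an invented queue name and simple string payloads; the transactional receive means a crashed worker's item simply goes back on the queue.

    using System;
    using System.Messaging;

    public static class WorkQueue
    {
        private const string Path = @".\private$\workitems";   // hypothetical queue name

        // Front end (e.g. the WCF facade): accept a request and drop it on the queue.
        public static void Enqueue(string payload)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path, transactional: true);

            using (var queue = new MessageQueue(Path))
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                queue.Send(payload, tx);
                tx.Commit();
            }
        }

        // Each processing server runs this loop; whichever server receives a message owns that work.
        public static void WorkerLoop()
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                while (true)
                {
                    using (var tx = new MessageQueueTransaction())
                    {
                        tx.Begin();
                        Message message = queue.Receive(tx);    // blocks until work arrives
                        Process((string)message.Body);
                        tx.Commit();                            // abort instead and the item is requeued
                    }
                }
            }
        }

        private static void Process(string payload) =>
            Console.WriteLine("processing " + payload);
    }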
From what you said, each PC will require a full copy of your service:
Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Otherwise you won't be able to move its work to another PC.
I would be tempted to have a central server that farms out work to the individual PCs. This means you would need some form of communication between each machine and the central server, and the central server would keep a record of what work has been assigned where.
You'll also need each machine to measure its CPU load and reject work if it is too busy.
A multi-threaded approach to the service would make good use of the multiple processor cores that are ubiquitous nowadays.
How about using a server and multi-threading your processing? Or even multi-threading on a PC as you can get many cores on a standard desktop now.
This obviously doesn't deal with the machine going down, but could give you much more performance for less investment.
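For the single-machine case, the Task Parallel Library makes the fan-out trivial. A tiny sketch, assuming the work items are independent of each other:

    using System;
    using System.Threading.Tasks;

    public static class LocalScaling
    {
        public static void ProcessAll(int[] workItems)
        {
            // Let the TPL spread independent items across the available cores.
            Parallel.ForEach(workItems, item =>
            {
                // Placeholder for the real per-item processing.
                Console.WriteLine($"Processed {item} on thread {Environment.CurrentManagedThreadId}");
            });
        }
    }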
You can look at Windows clustering; you will have to handle a set of issues that depend on the behaviour of the service (add more details about the service itself and I can give a fuller answer).
This depends on how you want to split your workload. This is usually done in one of two ways:
Splitting the same workload across multiple services
This means the same service is installed on different servers and does the same job. Assume your service reads huge amounts of data from the database servers, processes it to produce large client-specific data files, and finally sends those files to the clients. In this approach all of the service instances installed on the different servers do the same work, but they split it between them to increase performance.
Splitting parts of the workload across multiple services
In this approach each service is assigned an individual job and works towards a different goal. In the above example, one service is responsible for reading data from the database and generating the large data files, and another service is configured only to read the data files and send them to the clients.
I have implemented the second approach in one of my projects, because it let me isolate and debug errors in case of any failures.
The usual approach for a load balancer is to split service requests evenly between all service instances.
For each work item (request) you can store the relevant information in the database. Each service should also have at least one background thread checking the database for abandoned work items.
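A sketch of what that background check might look like with ADO.NET; the WorkItems table, its columns and the five-minute heartbeat threshold are all invented for illustration.

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    public class AbandonedWorkMonitor
    {
        private readonly string _connectionString;
        private readonly string _workerId = Environment.MachineName;
        private readonly Timer _timer;

        public AbandonedWorkMonitor(string connectionString)
        {
            _connectionString = connectionString;
            // Poll every 30 seconds on a background timer thread.
            _timer = new Timer(_ => ReclaimAbandonedItems(), null,
                               TimeSpan.Zero, TimeSpan.FromSeconds(30));
        }

        private void ReclaimAbandonedItems()
        {
            // Hypothetical WorkItems table: Id, Owner, Status, LastHeartbeat.
            // Atomically take over items whose owner has stopped heartbeating.
            const string sql = @"
                UPDATE WorkItems
                SET Owner = @me, LastHeartbeat = GETUTCDATE()
                WHERE Status = 'InProgress'
                  AND LastHeartbeat < DATEADD(minute, -5, GETUTCDATE())";

            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@me", _workerId);
                connection.Open();
                int reclaimed = command.ExecuteNonQuery();
                if (reclaimed > 0)
                    Console.WriteLine($"Reclaimed {reclaimed} abandoned work item(s).");
            }
        }
    }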
I would suggest that you publish your service through WCF (Windows Communication Foundation).
Then implement a "central" client application which can keep track of available providers of your service and dish out work. The central app will act as scheduler and load balancer of the tasks to be performed.
Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
You can have a look at NGrid : http://ngrid.sourceforge.net/
or Alchemi : http://www.gridbus.org/~alchemi/index.html
Both are grid computing frameworks with load balancers that will get you started in no time.
Cheers,
Florian
