Observing multiple Windows services - C#

I would like input on the design I currently have planned.
Basically, I have some number of external instruments, each of which should always be running, collecting specific data. My thought was to create a service for each, always running, polling its instrument, performing logging, and so on. There could be one instrument, or there could be 40.
However, I need one application to consume all this data, run some math on it, and do the charting, display, emailing, etc. The kicker is that even if this application is not running, the services should keep consuming data. Also, these services will almost always run on the same machine as the client application itself, but the ability to network them (like .NET Remoting used to do) could be an interesting feature.
My question is... is this the best design? If it is, how do I go about handling the communication between the services and the application? I've looked into WCF, but it seems to be geared towards request-response web services, not something that continually streams data to anything that might listen. Alternatively, should I have these services contact some other web service via WCF, which then compiles the data for use in a thin client viewer that polls the web service often?
Any links and resources would be greatly appreciated. .NET namespaces for me to research are also appreciated. If I wasn't clear about something, let me know.

Just a thought... but have you considered adding a backend database? All the services could collate data and persist it; then the application that needs to process the information can simply query the database, rather than you setting up lots of IPC between the services.
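A minimal sketch of the write side of that idea, assuming SQL Server via `System.Data.SqlClient` and a `Readings` table (all names here are invented for illustration):

```csharp
using System.Data.SqlClient;

static class ReadingStore
{
    // Hypothetical collector-side write: each instrument service persists
    // its own readings; the analysis app later just queries the same table.
    public static void SaveReading(string connectionString, int instrumentId, double value)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Readings (InstrumentId, Value, TakenAtUtc) " +
            "VALUES (@id, @val, SYSUTCDATETIME())", conn))
        {
            cmd.Parameters.AddWithValue("@id", instrumentId);
            cmd.Parameters.AddWithValue("@val", value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```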

WCF can handle streaming. It can also use MSMQ as a transport, which will ensure that no messages are lost, even if your instruments begin producing large quantities of data.
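As a rough illustration of the MSMQ transport, a one-way contract over NetMsmqBinding might look like this; the contract, queue path, and parameters are made up for the example, and the private queue must already exist (transactional, since exactly-once delivery is the binding's default):

```csharp
using System;
using System.ServiceModel;

// Hypothetical one-way contract: instrument services fire readings at the
// queue and never wait for a reply, so nothing is lost if the consumer
// application isn't running.
[ServiceContract]
public interface IInstrumentSink
{
    [OperationContract(IsOneWay = true)]
    void PushReading(int instrumentId, double value, DateTime takenAtUtc);
}

static class InstrumentClient
{
    public static void Send(int instrumentId, double value)
    {
        var factory = new ChannelFactory<IInstrumentSink>(
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            new EndpointAddress("net.msmq://localhost/private/instrumentReadings"));
        IInstrumentSink channel = factory.CreateChannel();
        channel.PushReading(instrumentId, value, DateTime.UtcNow);
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}
```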

Related

Is it common to ask a caller to communicate via a queue?

Originally the team was thinking of developing a C# service for a parent company. The service would receive requests, ping a third party, and return the result.
Instead of it being synchronous, we decided to use AWS SQS queues with SNS notifications for both requests and results. Our company would give the parent credentials to our AWS account so it could write to, and be notified from, a request queue.
This component then wouldn't really be a service; it would be a processor. It would read from the queue, send the requests, and write the results back to another SQS/SNS pair that would notify an API on the parent company's side.
Question: Is this a good design? We are bypassing services to avoid having to build retry logic and develop clients; instead we just communicate via queues.
There are many advantages to such an approach:
- You allow your client and "processor" to focus on their responsibilities and interact via a message bus.
- You allow the client and processor to be independent of and decoupled from each other, especially in terms of the technology and programming language used.
- They can both be scaled independently as and when required.
- If one of them is down, the messages can still accumulate on the message bus and won't get lost (assuming you have a suitable TTL to cover this).
- If the services offered grow, you can introduce new queues to process them without impacting the current client and processor.
- They are asynchronous.
- You can have multiple producers and/or consumers.
- Etc.
There are of course disadvantages:
- Latency is increased. For some situations this is an issue.
- You are dependent upon a third-party product.
- If using AWS, Azure, etc., you are dependent upon a third-party company.
- There are additional costs from the third-party provider and product.
- Debugging can be more difficult.
- Tracing can be more difficult.
- Security can be more difficult.
- If a client or receiver goes down, it might not be immediately obvious, leading to a build-up of messages, each of which might have a limited time to live! (What happens if this occurs at the weekend?)
- Acknowledging receipt of requests is difficult.
- Ensuring messages don't get lost when a process fails is more difficult.
- Etc.
You can of course find more about these on the internet.
So there is nothing wrong with this as an approach. The only question really is
"is this the right architecture for our use case(s)?"
Only you can determine this by weighing up the pros and cons, maybe together with your customer.
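For illustration only, the "processor" loop with the AWS SDK for .NET (the `AWSSDK.SQS` package) could be sketched like this; the queue URLs, message shape, and `CallThirdParty` helper are all assumptions, not details from the question:

```csharp
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class Processor
{
    // Assumed queue URLs for the sketch.
    const string RequestQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/requests";
    const string ResultQueueUrl  = "https://sqs.us-east-1.amazonaws.com/123456789012/results";

    public static async Task RunOnceAsync(IAmazonSQS sqs)
    {
        // Long-poll the request queue for up to 20 seconds.
        var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
        {
            QueueUrl = RequestQueueUrl,
            WaitTimeSeconds = 20
        });

        foreach (Message message in response.Messages)
        {
            // Ping the third party, write the result back, then delete the
            // request so it is not redelivered. Retry/error handling omitted.
            string result = CallThirdParty(message.Body);
            await sqs.SendMessageAsync(ResultQueueUrl, result);
            await sqs.DeleteMessageAsync(RequestQueueUrl, message.ReceiptHandle);
        }
    }

    // Placeholder for the real third-party integration.
    static string CallThirdParty(string request) => request;
}
```

Called in a loop with `await Processor.RunOnceAsync(new AmazonSQSClient());`, this gives the at-least-once, retry-friendly behavior the queues are there for.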

Communication between microservices pipeline

I want to use microservices architecture for my next project based on ASP.NET core.
I could have the services exchange data via REST, but that is really heavy to maintain.
Is there another way for the microservices to communicate, for example over an event bus like Vert.x?
I could use RabbitMQ, but I don't know if it is a good option.
I think RabbitMQ is going to work OK, especially if you have many consumers (i.e. need load balancing), and/or if you need messages to be persistent, and also if, conceptually, your microservices are OK with processing messages.
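A minimal publish/consume pair with the `RabbitMQ.Client` package (v6-era API; the queue name and payload are invented for the sketch) might look like this:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class RabbitDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: messages survive a broker restart.
            channel.QueueDeclare("orders", durable: true, exclusive: false, autoDelete: false);

            // Producer side.
            byte[] body = Encoding.UTF8.GetBytes("{\"orderId\":42}");
            channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: null, body: body);

            // Consumer side: explicit ack, so a crash mid-processing redelivers.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume("orders", autoAck: false, consumer);

            Console.ReadLine();
        }
    }
}
```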
Otherwise, since you’re considering REST, I’d recommend WCF instead.
Just ignore Microsoft's examples; those are too complex. You need to make an assembly containing the protocols (in WCF terminology, service contracts) plus the messages they send/receive (in WCF terminology, data contracts), shared between your services. Having a shared assembly lets you get rid of that enterprise-style XML configuration nonsense. It will also make maintenance simpler than REST, because the compiler will verify the correctness of your network calls: if you forget to update one service after changing a protocol, that service will stop compiling.
Here’s my demo which uses WCF to implement zero-configuration client-server communications in C#.
It's currently set up to use the named pipe binding, i.e. it will only work locally. But it's easy to switch from NetNamedPipeBinding to NetTcpBinding, which will do networking just fine.
P.S. If you pick WCF, don't forget that the abstraction can and will leak. Any network call may fail with any network-related exception, and you'll sometimes need to reconnect. (If you don't want to, and you don't have too many messages per second, you can use a connection-less protocol like NetHttpBinding, but those are much less performant.)
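In code-only form (no XML configuration), the shared-assembly idea boils down to roughly this sketch; the contract, implementation, and endpoint address are all invented for the example:

```csharp
using System;
using System.ServiceModel;

// --- Shared contract assembly: referenced by both sides. ---
[ServiceContract]
public interface IJobService
{
    [OperationContract]
    string Ping(string text);
}

public class JobService : IJobService
{
    public string Ping(string text) => "pong: " + text;
}

class Demo
{
    static void Main()
    {
        // Self-host over named pipes (local only). Swap NetNamedPipeBinding
        // for NetTcpBinding and a net.tcp:// address to cross machines.
        using (var host = new ServiceHost(typeof(JobService)))
        {
            host.AddServiceEndpoint(typeof(IJobService),
                new NetNamedPipeBinding(), "net.pipe://localhost/jobService");
            host.Open();

            // Client side: the compiler checks this call against the shared
            // contract, which is the maintenance win described above.
            var factory = new ChannelFactory<IJobService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/jobService"));
            IJobService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.Ping("hello"));   // prints "pong: hello"
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}
```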

How to call functions on other computers?

Is there a way to call a function to run on all instances of a Windows Forms application across a LAN?
I have an application which shows each user a dashboard of their own job list. I want another user on another PC to be able to create a job and allocate it to this user. Once the job is created and saved, I would like the method GetJobs() to refresh on the first user's machine. I've not done anything this advanced yet, so please go easy :)
Chris Walsh has excellent advice in his comment. That said, it is possible for Windows Forms applications to communicate with each other, and the simplest method, for me anyway, is WCF with a self-hosted server. Typically the server code should not run on the UI thread -- at least I don't recommend it. In fact, in a Windows Forms application all WCF work is best kept on a background thread, to avoid blocking the UI. WCF has lots of error conditions you will need to handle.
Another thing you might want to look at is MSMQ, now called Message Queuing. It can store a queue of jobs for you, and it won't lose them if the power is lost.
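If you go that route, the raw System.Messaging API (on .NET Framework) is enough for a durable job queue; the queue path and message text here are illustrative:

```csharp
using System;
using System.Messaging;   // reference System.Messaging.dll (.NET Framework)

class JobQueueDemo
{
    const string Path = @".\Private$\jobs";   // assumed private queue

    static void Main()
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            // Producer: Recoverable = true makes MSMQ write the message to
            // disk, so it survives a reboot or power loss.
            queue.Send(new Message("Job #1: inspect valve 7") { Recoverable = true });

            // Consumer: blocks until a message arrives or the timeout hits.
            Message msg = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine((string)msg.Body);
        }
    }
}
```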
I assume you have some SQL Server Express Edition installed as the database backend.
This way you can connect to the database using some authentication and add the jobs directly there.
Then, on the other computer, add a refresh button or poll for changes. This has the advantage that you don't need to write a service yourself, and jobs can be created even when the target user is not there and their PC is switched off.
You need just one server which hosts the database.
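A bare-bones polling loop for that design might look like the following; the Jobs table, its columns, and the connection string are assumptions for the sketch:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class JobPoller
{
    // Re-read the Jobs table every few seconds and report new rows.
    static void Poll(string connectionString, int lastSeenId)
    {
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Id, Description FROM Jobs WHERE Id > @last ORDER BY Id", conn))
            {
                cmd.Parameters.AddWithValue("@last", lastSeenId);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        lastSeenId = reader.GetInt32(0);
                        Console.WriteLine("New job: " + reader.GetString(1));
                        // In the real app, this is where GetJobs() would be
                        // re-run to refresh the dashboard.
                    }
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```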

Is a web service better than the regular query method?

We have a desktop application that we need to install on client PCs and connect to a database on a remote server. Which method is better for connecting to the database (for speed and performance)?
1. The normal query method (put the server name in the connection string).
2. Create a web service and get the data in XML or JSON format.
Both solutions bring positive and negative points.
Direct query to the server -> implies that your client software knows the database schema. If you change the schema, you need to test its impact on the client app.
Web service -> a limited API means the database is known only to its data web service. The client app knows only the small web service API, so when the database evolves there is very little chance of negatively impacting the client code.
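To make the contrast concrete, here is what each option looks like from the client's side; the connection string, endpoint, and table are invented for the example:

```csharp
using System;
using System.Data.SqlClient;
using System.Net.Http;
using System.Threading.Tasks;

class DataAccessOptions
{
    // Option 1: direct query -- the client knows the server and the schema.
    static string GetCustomerNameDirect(int id)
    {
        using (var conn = new SqlConnection(
            "Server=remote-db.example.com;Database=Shop;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }

    // Option 2: web service -- the client knows only a small HTTP contract
    // and gets JSON back; the schema can change behind it.
    static async Task<string> GetCustomerJsonAsync(int id)
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://api.example.com/") })
        {
            return await http.GetStringAsync("customers/" + id);
        }
    }
}
```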
From an architectural point of view, limiting the size of the contract between two pieces of technology is encouraged.
From a development-cost point of view, creating and maintaining such a service has a cost and may introduce the need for a new set of technical skills in your team.
It depends on your requirements, budget, and time constraints.
If there is any possibility that this desktop software will later be extended to a mobile app or other platforms, then go for creating web services, preferably with JSON.
Keeping the data access layer in the client desktop application saves a little development time, but makes testing, reusability, and maintenance harder.
Also, the trend is towards SOA, so I'd always prefer creating web services. It's secure, reusable, and very friendly to future modifications of the project.

Is having many small WCF services better than one?

I am developing software for taxi services. My software consists of a WCF service as the server and a WPF application as the client. Functionality is growing, and my WCF service now has more than 50 methods. I am thinking about splitting my one big WCF service into a couple of smaller services.
Is it a good idea to do so?
I would say yes, if you can define a clear separation of responsibilities for each of your services. You should try to avoid or minimize coupling between services, though. Keep in mind you'll break existing clients if you change the contract, but it sounds like you are in control of these.
This is quite a common scenario, and you can help yourself by ensuring your service layer is essentially a facade, to make it easier to move things around; a sketch of what that means follows.
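In this context a facade just means thin service classes that delegate to shared internal logic, so operations can be regrouped across services later without moving business code; all names below are invented:

```csharp
using System.ServiceModel;

// Shared internal logic, reused by however many services you end up with.
public class FareCalculator
{
    public decimal Quote(double km) => 2.50m + (decimal)km * 1.20m;
}

[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    decimal QuoteFare(double km);
}

// The service is only a facade: it holds no business logic of its own,
// so moving QuoteFare to a different service later is a one-line change.
public class BookingService : IBookingService
{
    readonly FareCalculator _calculator = new FareCalculator();
    public decimal QuoteFare(double km) => _calculator.Quote(km);
}
```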
IME, if splitting the service up would just create a lot of replicated code for common functionality (like DB access), or would complicate things by needing to add stuff like additional functionality for the services to talk to each other, I would suggest no.
Another reason: your fault-tolerance scenarios now become more complicated. It is one thing if your entire monolithic service dies -- then nothing works. But if you have 4 related services that need to work together, you now have to intelligently handle partial-failure scenarios, like what happens if service #3 goes down while the other services have half-done jobs that need it. Now you have to be able to get things back to a consistent state while waiting for #3 to come back up, or be able to persist work so you can get back to it when it does. Your number of error messages has just increased as well.
