I am developing a Windows Service (.NET, C#). One of the non-functional requirements is to ensure high availability of this Windows service. I understand that installing the service on a Failover Cluster will make it highly available. To install this service on a cluster, is there any specific code I have to write within the service? I have heard about cluster-aware services, but I have not come across any article that explains how to develop a cluster-aware Windows service. Is it really required to make a Windows service cluster-aware in order to install it on a cluster?
Microsoft Failover Cluster Services allows for this high availability without the need for you to code complicated heartbeats into your service. The Failover Cluster Service manages this for you.
See Microsoft's documentation for more information on Failover Clusters.
For Windows Services, there generally isn't much you need to do beyond writing your Windows service. That said, you may need to use the Failover Cluster API in your code if you need to know about your environment from your code, use local resources such as IP addresses or disks, etc. However, most simple services I've seen don't require modifications to call the Clustering APIs and can be installed directly into a properly configured failover cluster with the proper resource groups defined.
View Microsoft's guidance on writing Cluster Aware applications.
Visit "Creating a failover cluster" for help; for Windows services, the clustered role type should be "Generic Service".
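For the simple case, a plain service like the rough sketch below (class and service names are made up for the example) is all you need before registering the executable as a "Generic Service" clustered role; there are no cluster-specific calls in it:

    using System.ServiceProcess;

    // Minimal sketch of an ordinary Windows Service. Nothing here is cluster-specific:
    // the Failover Cluster's "Generic Service" role starts/stops it on whichever node owns the role.
    public class MyHighlyAvailableService : ServiceBase   // hypothetical name
    {
        public MyHighlyAvailableService()
        {
            ServiceName = "MyHighlyAvailableService";
        }

        protected override void OnStart(string[] args)
        {
            // Start doing work. If the node fails, the cluster starts this service on another node,
            // so any state that must survive failover should live on shared/replicated storage.
        }

        protected override void OnStop()
        {
            // Stop cleanly; the cluster calls this when it moves the role to another node.
        }

        public static void Main()
        {
            ServiceBase.Run(new MyHighlyAvailableService());
        }
    }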
First of all, this question is extremely broad, but here are my two cents.
It depends on the service.
If executing multiple instances of your service simultaneously doesn't break its purpose, then you don't need to do anything. If there can be only one instance running at a time, then you must coordinate these instances (UDP broadcast messages, maybe?) so that only one is active, and so that another one starts when the active instance stops.
A cluster is just a bunch of machines with the same purpose (yes, yes, there is a lot more to it, but for this case that comparison is enough), so think of it as if you were running that service on multiple machines on a local network.
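To make the coordination idea above a bit more concrete, here is a minimal sketch of my own (port number, timeout and message format are arbitrary, and it ignores tie-breaking and thread-safety) of keeping a single active instance via UDP heartbeats:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;

    // Sketch of "one active instance" coordination over UDP broadcast.
    // Assumes one instance per machine; real code needs error handling and tie-breaking.
    public class ActiveStandbyCoordinator
    {
        const int Port = 47001;                            // arbitrary port for the example
        static readonly TimeSpan Timeout = TimeSpan.FromSeconds(5);

        bool isActive;
        DateTime lastHeartbeat = DateTime.UtcNow;

        public void Run()
        {
            new Thread(Listen) { IsBackground = true }.Start();

            using (var sender = new UdpClient { EnableBroadcast = true })
            {
                while (true)
                {
                    if (isActive)
                    {
                        // Active instance announces itself every second.
                        byte[] msg = Encoding.ASCII.GetBytes("alive");
                        sender.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, Port));
                    }
                    else if (DateTime.UtcNow - lastHeartbeat > Timeout)
                    {
                        isActive = true;                   // no heartbeat seen: promote this instance
                    }
                    Thread.Sleep(1000);
                }
            }
        }

        void Listen()
        {
            using (var receiver = new UdpClient(Port))
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    receiver.Receive(ref remote);          // blocks until a heartbeat arrives
                    if (!isActive) lastHeartbeat = DateTime.UtcNow;
                }
            }
        }
    }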
I want to use a microservices architecture for my next project, based on ASP.NET Core.
I could exchange data between the services via REST, but it is really tedious to maintain.
Is there another way to communicate between microservices, for example over an event bus like Vert.x?
I could use RabbitMQ, but I don't know if it is a good option.
I think RabbitMQ is going to work OK, especially if you have many consumers (i.e. need load balancing) and/or need messages to be persistent, and also if, conceptually, your microservices are OK with processing messages.
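As a rough sketch of what that looks like from a .NET service, using the RabbitMQ.Client package (queue name and host are examples, and the exact API shape varies a bit between client versions; newer versions are async):

    using System;
    using System.Linq;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class OrderEventsDemo          // hypothetical example
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                // Both publisher and consumer declare the queue so it exists no matter who starts first.
                channel.QueueDeclare(queue: "order-events", durable: true,
                                     exclusive: false, autoDelete: false, arguments: null);

                // Consumer side: handle incoming messages asynchronously.
                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (sender, ea) =>
                    Console.WriteLine("Received: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
                channel.BasicConsume(queue: "order-events", autoAck: true, consumer: consumer);

                // Publisher side: send a message to the same queue.
                byte[] body = Encoding.UTF8.GetBytes("OrderCreated:42");
                channel.BasicPublish(exchange: "", routingKey: "order-events",
                                     basicProperties: null, body: body);

                Console.ReadLine();
            }
        }
    }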
Otherwise, since you’re considering REST, I’d recommend WCF instead.
Just ignore Microsoft's examples, those are too complex. You need to make an assembly containing the protocols (in WCF terminology, service contracts) plus the messages they send/receive (in WCF terminology, data contracts) that will be shared between your services. Having a shared assembly will allow you to get rid of that enterprise-style XML configuration nonsense. This will also make maintenance simpler than REST, because the compiler is going to verify the correctness of your network calls: if you forget to update one service after changing a protocol, that service will stop compiling.
Here’s my demo which uses WCF to implement zero-configuration client-server communications in C#.
It's currently set up to use named pipe binding, i.e. it will only work locally. But it's easy to switch from NetNamedPipeBinding to NetTcpBinding, which will do networking just fine.
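Here is a minimal sketch of the shared-assembly idea with my own example names (not taken from the demo): the contracts live in one project referenced by every service, and moving from local named pipes to the network is just a different binding:

    using System.ServiceModel;
    using System.Runtime.Serialization;

    // --- Shared contracts assembly, referenced by all services (hypothetical names) ---
    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        OrderDto GetOrder(int id);
    }

    // --- Server side ---
    public class OrderService : IOrderService
    {
        public OrderDto GetOrder(int id) => new OrderDto { Id = id, Total = 99.90m };
    }

    public static class ServerHost
    {
        public static void Start()
        {
            var host = new ServiceHost(typeof(OrderService));
            // NetNamedPipeBinding = local only; swap in NetTcpBinding to go over the network.
            host.AddServiceEndpoint(typeof(IOrderService),
                new NetNamedPipeBinding(), "net.pipe://localhost/orders");
            host.Open();
        }
    }

    // --- Client side (another service) ---
    public static class OrderClient
    {
        public static OrderDto Fetch(int id)
        {
            var factory = new ChannelFactory<IOrderService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/orders"));
            IOrderService proxy = factory.CreateChannel();
            return proxy.GetOrder(id);   // compile-time checked against the shared contract
        }
    }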
P.S. If you pick WCF, don't forget that the abstraction can and will leak. Any network call may fail with any network-related exception. Also, you'll need to reconnect sometimes (if you don't want to, and you don't have too many messages per second, you can use a connection-less protocol like NetHttpBinding, but those are much less performant).
We have a desktop application. We need to install it on client PCs and connect to a database on a remote server. Which method is better for connecting to the database (for speed and performance)?
1. Normal query method (specify the server name in the connection string).
2. Create a web service and get the data in XML or JSON format.
Both solutions bring positive and negative points.
Direct query to the server -> implies that your client software knows the database schema. If you change the database schema, you need to test its integration in the client app.
Web service -> a limited API means your database is known only through its data web service. The client app only knows about the small web service API. When the database evolves, there is much less chance of negatively impacting the client code.
From an architectural point of view, it is encouraged to limit the size of contracts between 2 pieces of technology.
From a development cost point of view, creating and maintaining such a service has a cost and may introduce the need for a new set of technical skills in your team.
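To make the schema-coupling contrast concrete, here is a rough sketch (connection string, table and endpoint URL are all invented for the example) of the two options side by side:

    using System;
    using System.Data.SqlClient;
    using System.Net.Http;

    public static class CustomerLookup
    {
        // Option 1: direct query - the client app knows the schema (server, table, columns).
        public static string GetNameDirect(int customerId)
        {
            using (var conn = new SqlConnection(
                "Server=remote-db;Database=Shop;User Id=app;Password=...;"))   // hypothetical
            using (var cmd = new SqlCommand(
                "SELECT Name FROM dbo.Customers WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }

        // Option 2: web service - the client only knows a small API, not the schema.
        public static string GetNameViaApi(int customerId)
        {
            using (var http = new HttpClient())
            {
                // hypothetical endpoint returning JSON like {"id":1,"name":"..."}
                return http.GetStringAsync(
                    "https://api.example.com/customers/" + customerId).Result;
            }
        }
    }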
It depends on your requirements, budget and time constraints.
If there is any possibility that this desktop software will later be extended to mobile apps and other platforms, then go for creating web services, preferably with JSON.
Keeping the data access layer in the client desktop application saves a little development time, but makes testing, reusability and maintenance harder.
Also, the trend is to use SOA, so I'd always prefer creating web services. It's secure, reusable and very friendly to future modifications of the project.
We currently have a single database with users, customers, products and orders logically separated by schemas. We then have several MVC.net applications accessing the database via their own BLLs. Each of these applications has its own functionality and shares some aspects with some/all of the other applications.
Currently, some code is duplicated in these BLLs and it's a bit of a mess to maintain. It does, however, allow us to develop features quickly and deploy each application independently (assuming no major database work here).
We have started to develop a single access layer, properly separated out, that sits above the database and is used by all of our MVC.net applications. Logically this makes sense, as we can now share code between our applications. For example, application A can retrieve a customer record in the same way as application B. The issue comes when we want to deploy an application: we wouldn't be able to deploy just one application, we'd need to deploy them all.
What other architectural approaches could we consider that would allow us to share code between our applications and deploy those applications independently?
A common solution is to factor out services (based on a communication layer of your choice, e.g. REST, WCF or a message bus, with versioning) and deploy these services to your infrastructure as standalone services.
Now you can evolve, scale and deploy your services independently of the consumers. Instead of deploying all applications you now only have to deploy the changed services (side-by-side with the old ones) and the new application.
This adds quite a lot of complexity around service versioning, configuration management and integration testing, plus a little communication overhead, so you have to balance the pros and cons. There are quite a few articles on the net about how to build such an architecture.
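Purely as an illustration of side-by-side deployment (example names and routes are mine, ASP.NET Core style), a factored-out service can keep the old contract available under a versioned route while the new one is rolled out:

    using Microsoft.AspNetCore.Mvc;

    // Old contract stays available while consumers migrate.
    [Route("api/v1/customers")]
    [ApiController]
    public class CustomersV1Controller : ControllerBase
    {
        [HttpGet("{id}")]
        public ActionResult<string> Get(int id) => "customer " + id;
    }

    // New contract deployed side by side under a new route.
    [Route("api/v2/customers")]
    [ApiController]
    public class CustomersV2Controller : ControllerBase
    {
        [HttpGet("{id}")]
        public ActionResult<object> Get(int id) => new { Id = id, Name = "customer " + id };
    }

Consumers then move from v1 to v2 at their own pace, and only the changed service needs to be redeployed.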
What is the best practice to share data between different applications on the same machine and notify them if the data has changed?
I have 4 applications which use the same settings project to change their settings. When I change a setting in the project, the other applications have to act on this change and have to know that the setting was changed.
I thought about using IPC to make setting changes and then broadcast the change information to all consumers, but it would be great if such a library already exists.
EDIT:
I found a solution that worked for me. We decided not to spend a lot of time on this functionality because updating the other applications is not extremely critical.
We save our settings, as we did before, in an XML file, and I registered a FileSystemWatcher on that file to get all changes. So if I change the settings, all 4 applications go and read the settings file and determine whether they have to take action or not.
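For reference, the watcher setup is only a few lines. A minimal sketch, with an example path, of what each of the 4 applications does:

    using System;
    using System.IO;

    public static class SettingsWatcher
    {
        public static void Start()
        {
            // Watch the shared settings file; directory and file name are examples only.
            var watcher = new FileSystemWatcher(@"C:\ProgramData\MyApp", "settings.xml")
            {
                NotifyFilter = NotifyFilters.LastWrite
            };

            watcher.Changed += (sender, e) =>
            {
                // Editors often fire several Changed events per save, so real code
                // usually debounces here before re-reading the file.
                Console.WriteLine("Settings changed, re-reading " + e.FullPath);
                // ReloadSettings(e.FullPath);   // application-specific
            };

            watcher.EnableRaisingEvents = true;
        }
    }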
The solution to choose depends on different parameters:
How much effort you can invest in the implementation.
How critical it is that the applications are updated quickly.
Which environment is available to you / your customers.
...
For example:
Saving changes to a database/config file, and letting the applications run a separate thread dedicated to checking for setting changes every n seconds (see the sketch after this list). This solution is cheap and easy to implement, yet not "nice", and many developers will reject it.
Create a WCF service which "publishes" changes to the applications. In that case, using duplex bindings, applications will be updated instantly. Of course, this solution is more costly...
Those are only 2 examples out of many available solutions (shared memory, shared application domain, etc).
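For completeness, here is a small sketch of the first (polling) option from the list above, using the file's last write time as the change marker (path and interval are arbitrary):

    using System;
    using System.IO;
    using System.Threading;

    public static class SettingsPoller
    {
        static Timer timer;                    // kept in a field so it isn't garbage collected
        static DateTime lastSeenWrite;

        public static void Start()
        {
            const string path = @"C:\ProgramData\MyApp\settings.xml";   // example path
            lastSeenWrite = File.GetLastWriteTimeUtc(path);

            // Check the file every 5 seconds on a background timer thread.
            timer = new Timer(_ =>
            {
                DateTime current = File.GetLastWriteTimeUtc(path);
                if (current != lastSeenWrite)
                {
                    lastSeenWrite = current;
                    Console.WriteLine("Settings changed, re-reading...");
                    // ReloadSettings(path);   // application-specific
                }
            }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
        }
    }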
What you have done seems wise. I also have some experience using MSMQ.
You can create private or public queues. Since you have all of your apps on the same machine, a private queue is OK; otherwise, you should use public queues.
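As a small sketch of the private-queue case using System.Messaging (queue path and message body are just examples):

    using System;
    using System.Messaging;

    public static class SettingsQueueDemo
    {
        const string QueuePath = @".\Private$\settings-changes";   // private queue on the local machine

        public static void Send(string settingName)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
                queue.Send(settingName, "SettingChanged");          // body + label
        }

        public static void Receive()
        {
            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                Message message = queue.Receive();                  // blocks until a message arrives
                Console.WriteLine("Changed setting: " + (string)message.Body);
            }
        }
    }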
At that time I had chosen Spring.NET as my framework (object builder & dependency injector). Spring.NET has brilliant QuickStarts, and one of them uses MSMQ as a communication bridge between applications.
If I were you, I would take the queuing approach, because you can also notify apps running on different machines.
Also, WCF provides a convenient means of developing a distributed service over the underlying MSMQ component.
Furthermore, Publish-Subscribe is a common design pattern that is widely used in client/server communication applications. In WCF service development, the Publish-Subscribe pattern also helps in scenarios where the service application exposes data to certain groups of clients that are interested in the service, and the data is provided to the clients as a push model (instead of the clients polling).
How about using the built-in dependency objects? Like:
CacheDependency - http://msdn.microsoft.com/en-us/library/system.web.caching.cachedependency.aspx
SqlDependency - http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.aspx
These are pretty simple to implement and they work quite well.
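A rough sketch of the SqlDependency route (it requires Service Broker to be enabled on the database, and the connection string and table are invented for the example):

    using System;
    using System.Data.SqlClient;

    public static class SettingsChangeNotifier
    {
        const string ConnectionString = "Server=.;Database=MyApp;Integrated Security=true;"; // example

        public static void Subscribe()
        {
            SqlDependency.Start(ConnectionString);   // sets up the Service Broker notification queue

            using (var conn = new SqlConnection(ConnectionString))
            // Notification queries must use an explicit column list and two-part table names.
            using (var cmd = new SqlCommand("SELECT [Key], [Value] FROM dbo.Settings", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (sender, e) =>
                {
                    Console.WriteLine("Settings table changed: " + e.Info);
                    Subscribe();   // notifications fire only once; re-subscribe to keep listening
                };

                conn.Open();
                using (var reader = cmd.ExecuteReader())   // executing the command registers the subscription
                {
                    while (reader.Read()) { /* read current settings */ }
                }
            }
        }
    }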
How about network sockets? You can make listeners and senders on different ports.
I would like input on the design I currently have planned.
Basically, I have some number of external instruments, each of which should always be running and collecting specific data. My thought was to create a service for each, always running, polling the instruments, performing logging, etc. There could be one instrument, or there could be 40.
However, I need one application to consume all this data, run some math on it, and do the charting, display, emailing, etc. The kicker is that even if this application is not running, the services should constantly be consuming data. Also, these services would almost always run on the same machine as the client application itself, but the ability to network them (like .NET Remoting used to do) could be an interesting feature.
My question is... is this the best design? If it is, how do I go about handling the communication between the services and the application? I've looked into WCF, but it seems to be geared towards request-response web services, not something that continually streams data to anything that might listen to it. Alternatively, should I have these services contact some other web service using WCF, which then compiles the data for use in a thin client viewer that polls the web service often?
Any links and resources would be greatly appreciated. .NET namespaces for me to research are also appreciated. If I wasn't clear about something let me know.
Just a thought... but have you considered adding a backend database? All services could collate data and persist it; then your application that needs to process the information can just query the database rather than setting up loads of IPC between the services.
WCF can handle streaming. It can also use MSMQ as a transport, which will ensure that no messages are lost, even if your instruments begin producing large quantities of data.
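As one possible shape for the push model (a sketch only, with invented contract names), a WCF duplex contract lets an instrument service push readings to the viewer application whenever it is connected, over named pipes locally or TCP across machines:

    using System;
    using System.ServiceModel;

    // Callback the viewer application implements; the instrument service pushes into it.
    public interface IReadingCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnReading(string instrumentId, double value, DateTime timestampUtc);
    }

    [ServiceContract(CallbackContract = typeof(IReadingCallback))]
    public interface IInstrumentFeed
    {
        [OperationContract]
        void Subscribe();    // viewer calls this once; the service then pushes via the callback
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class InstrumentFeed : IInstrumentFeed
    {
        IReadingCallback subscriber;

        public void Subscribe()
        {
            subscriber = OperationContext.Current.GetCallbackChannel<IReadingCallback>();
        }

        // Called from the service's polling loop whenever new data arrives.
        public void Push(string id, double value)
        {
            subscriber?.OnReading(id, value, DateTime.UtcNow);
        }
    }

    // Hosting sketch: NetNamedPipeBinding supports duplex locally; NetTcpBinding works across machines.
    // var host = new ServiceHost(typeof(InstrumentFeed));
    // host.AddServiceEndpoint(typeof(IInstrumentFeed),
    //     new NetNamedPipeBinding(), "net.pipe://localhost/instrument-feed");
    // host.Open();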