How to notify client from server in C# [closed] - c#

I have a Windows app and a web service on the server side, and on the other side (the client) a Windows Forms application. I want to notify the client from the server when something changes on the server. One way is for the client to poll the server constantly, but the server would become too busy because the number of clients is about 100,000. What is the best way to do this?

One way is using a Duplex Contract
How to: Create a Duplex Contract
The duplex contract is one of three message patterns available to Windows Communication Foundation (WCF) services. The other two message patterns are one-way and request-reply. A duplex contract consists of two one-way contracts between the client and the server and does not require that the method calls be correlated. Use this kind of contract when your service must query the client for more information or explicitly raise events on the client.
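As a rough sketch, a duplex contract might look like the following. The interface and class names here are made up for illustration, and you still need a duplex-capable binding (for example NetTcpBinding or WSDualHttpBinding):

```csharp
using System.ServiceModel;

// Illustrative duplex contract: the service contract declares the
// callback contract the client must implement.
[ServiceContract(CallbackContract = typeof(IChangeCallback))]
public interface IChangeService
{
    [OperationContract]
    void Subscribe();
}

public interface IChangeCallback
{
    [OperationContract(IsOneWay = true)]
    void OnServerChanged(string description);
}

public class ChangeService : IChangeService
{
    public void Subscribe()
    {
        // Capture the caller's callback channel so the server can push
        // notifications to that client later.
        IChangeCallback callback =
            OperationContext.Current.GetCallbackChannel<IChangeCallback>();

        // Store the callback somewhere and, when something changes, call:
        // callback.OnServerChanged("data updated");
    }
}
```

Keep in mind that holding on to 100,000 open duplex channels is not cheap, so you would still want to test this at your expected scale.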
You could also use SignalR, I guess.
Introduction to SignalR
ASP.NET SignalR is a library for ASP.NET developers that simplifies the process of adding real-time web functionality to applications. Real-time web functionality is the ability to have server code push content to connected clients instantly as it becomes available, rather than having the server wait for a client to request new data.
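A minimal server-push sketch with classic ASP.NET SignalR 2.x might look like this; the startup class, hub name, and the somethingChanged client-side handler are all made up for illustration:

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SignalRStartup))]

// OWIN startup that wires SignalR into the web application.
public class SignalRStartup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }
}

// Clients connect to this hub; the server pushes to them when data changes.
public class NotificationHub : Hub
{
    public static void NotifyClients(string change)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

        // "somethingChanged" is whatever handler the clients registered.
        context.Clients.All.somethingChanged(change);
    }
}
```

The clients (including a Windows Forms app, via the SignalR .NET client package) subscribe once and then simply receive pushes, so nothing has to poll the server.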
Update [Answer to comment]
WCF and duplex contracts are a very secure and reliable way to achieve your results. SignalR is a very lightweight approach and isn't as robust or secure.
Yes, you are right: some sort of socket connection will be required at the base level, though you really need to study these two options to work out which is most likely best for you. I personally have had a lot of success with SignalR and use it often; it's easy to set up and fairly fault tolerant. However, if security and reliability are a concern, then duplex contracts are probably your best bet.

Related

It is common to ask a caller to communicate via a queue? [closed]

Originally the team was thinking of developing a C# service for a parent company. The service would receive requests, ping a 3rd party, and return the result.
Instead of it being synchronous, we decided we would have AWS SQS and SNS queues for both requests and results. Our company would give the parent credentials to our AWS account so it could write to, and be notified from, a request queue.
Then this service wouldn't be a service; it would be a processor. It would read from the queue, send the requests, and write back the results to another SQS/SNS pair that would notify an API on the parent company's side.
Question: Is this a good design? We are bypassing the use of services to avoid having to implement retry logic and develop clients; instead, we just communicate via queues.
There are many advantages to such an approach.
You allow your client and "processor" to focus on their responsibilities and interact via a message bus.
You allow the client and processor to be independent of and decoupled from each other, especially in terms of the technology and programming language used.
They can both be scaled independently as and when required.
If one of them is "down", the messages can still accumulate on the message bus and won't get lost (assuming you have a suitable TTL to cover this).
If the set of services offered grows, you can introduce new queues to process them without impacting the current client and processor.
They are asynchronous.
You can have multiple producers and/or consumers.
Etc.
There are of course disadvantages.
Latency is increased. For some situations this is an issue.
You are dependent upon a 3rd-party product.
If using AWS, Azure, etc., you are dependent upon a 3rd-party company.
There are additional costs from the 3rd-party company and product.
Debugging can be more difficult.
Tracing can be more difficult.
Security can be more difficult.
If a client or receiver goes down, it might not be immediately obvious, leading to a build-up of messages, each of which might have a limited time to live! (What happens if this occurs at the weekend?)
Acknowledging receipt of requests is difficult.
Ensuring messages don't get lost when a process fails is more difficult.
Etc.
You can of course find more about these on the internet.
So there is nothing wrong with this as an approach. The only question really is
"is this the right architecture for our use case(s)?"
Only you can determine this by weighing up the pros and cons, maybe together with your customer.
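If it helps to picture the "processor" side, here is a rough sketch using the AWS SDK for .NET (AWSSDK.SQS). The queue URLs and the Handle3rdPartyCall helper are placeholders, not real endpoints:

```csharp
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class QueueProcessor
{
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();

    // Placeholder queue URLs for illustration only.
    private const string RequestQueueUrl = "https://sqs.example.com/request-queue";
    private const string ResultQueueUrl  = "https://sqs.example.com/result-queue";

    public async Task RunAsync()
    {
        while (true)
        {
            // Long-poll the request queue instead of hammering the API.
            var response = await _sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = RequestQueueUrl,
                MaxNumberOfMessages = 10,
                WaitTimeSeconds = 20
            });

            foreach (var message in response.Messages)
            {
                string result = await Handle3rdPartyCall(message.Body);

                // Write the result back; an SNS notification tied to the result
                // flow can then alert the parent company's API, as described above.
                await _sqs.SendMessageAsync(ResultQueueUrl, result);

                // Delete only after successful processing, so a crash lets the
                // message reappear after the visibility timeout.
                await _sqs.DeleteMessageAsync(RequestQueueUrl, message.ReceiptHandle);
            }
        }
    }

    // Stand-in for the call to the 3rd party.
    private Task<string> Handle3rdPartyCall(string request) =>
        Task.FromResult("processed: " + request);
}
```

Note the ordering: deleting the message only after the result has been written back is what removes the need for client-side retry logic, at the cost of possible duplicate processing.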

Communication between microservices pipeline [closed]

I want to use a microservices architecture for my next project, based on ASP.NET Core.
I could exchange data between the services via REST, but it is really cumbersome to maintain.
Is there another way to communicate between microservices, for example over an event bus like Vert.x?
I could use RabbitMQ, but I don't know if it is a good option.
I think RabbitMQ is going to work OK, especially if you have many consumers (i.e. you need load balancing) and/or you need messages to be persistent, and also if, conceptually, your microservices are OK with processing messages.
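For a feel of what that looks like, a minimal sketch using the RabbitMQ.Client package (the pre-7.x synchronous API) might be something like this; the host and queue names are placeholders:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Durable queue; mark messages as persistent too if they must survive a broker restart.
channel.QueueDeclare(queue: "orders", durable: true, exclusive: false,
                     autoDelete: false, arguments: null);

// One microservice publishes...
var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
channel.BasicPublish(exchange: "", routingKey: "orders",
                     basicProperties: null, body: body);

// ...and another consumes. Several consumers on the same queue give you
// load balancing for free.
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
    // Handle the message here.
};
channel.BasicConsume(queue: "orders", autoAck: true, consumer: consumer);

Console.ReadLine(); // keep the demo process alive so the consumer can receive
```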
Otherwise, since you’re considering REST, I’d recommend WCF instead.
Just ignore Microsoft’s examples, those are too complex. You need to make an assembly containing protocols (in WCF terminology, service contracts) + messages they send/receive (in WCF terminology, data contracts) that’ll be shared between your services. Having a shared assembly will allow you to get rid of that enterprise-style XML configuration nonsense. This will also make maintenance simpler than REST, because the compiler is going to verify the correctness of your network calls: you forget to update one service after changing a protocol, and the service will stop compiling.
Here’s my demo which uses WCF to implement zero-configuration client-server communications in C#.
It's currently set up to use the named pipe binding, i.e. it will only work locally. But it's easy to switch from NetNamedPipeBinding to NetTcpBinding, which will do networking just fine.
P.S. If you pick WCF, don't forget the abstraction can and will leak: any network call may fail with a network-related exception. You'll also need to reconnect sometimes (if you don't want to, and you don't have too many messages per second, you can use a connectionless binding like NetHttpBinding, but those are much less performant).
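To illustrate the shared-assembly idea, a rough sketch might look like the following; the contract names and the net.pipe address are made up, and the binding swap mentioned above is a two-line change:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// --- Shared contracts assembly, referenced by every service ---

[DataContract]
public class OrderRequest
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public decimal Amount { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string PlaceOrder(OrderRequest request);
}

// --- Calling side in another microservice ---

public static class OrderClient
{
    public static string PlaceOrder(OrderRequest request)
    {
        // NetNamedPipeBinding works locally; switch to NetTcpBinding and a
        // net.tcp:// address to talk across machines.
        var factory = new ChannelFactory<IOrderService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/orders"));

        IOrderService proxy = factory.CreateChannel();
        try
        {
            return proxy.PlaceOrder(request);
        }
        finally
        {
            // Any network call may throw, so tear the channel down defensively.
            ((IClientChannel)proxy).Abort();
        }
    }
}
```

Because IOrderService lives in the shared assembly, renaming or changing an operation breaks the build of any service that wasn't updated, which is the compile-time safety described above.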

Send & receive methods to be used in a .NET app via a server [closed]

After wasting over a month looking at and reading about .NET protections, I have become convinced that there is no way to 100% protect .NET code from decompiling; even with protection, it won't hold up for long.
However, I thought about restructuring my app so that parts of it run remotely on a server, also built in C#.
My questions are:
1. Is it possible to send methods from the server to my app to be used there? (That doesn't have to be a full method transfer.)
2. What is the best practice for socket multithreading to handle data from each client on my server?
Generally speaking, if you want to keep your compiled C# code from being decompiled, don't make the compiled bytecode available to anyone. You seem to sense that this will require a client-server system, and that's correct. You also want a "thin client," meaning that the client shouldn't contain any of your application's business logic but rely on the server for everything but user input and presentation of data. You could do this with a custom C# client or something written in HTML and JavaScript that would run within a web browser. (If you go with a web application, make sure you don't include any business logic in your JavaScript, because that will be sent to the browser in plain text.)
As for the idea of sending executable bytecode to the client from the server, that seems less secure than a web app. Even if you encrypt communication between the client and server, the client will still end up with executable bytecode that could be decompiled on the client side.
Before you start implementing the communications protocol yourself, do take a look at WCF. If both your client and server are .NET based, WCF is the easiest way to go.

What is the right programming model for a global chat app? [closed]

I am trying to build a chat app in C# that would work over a WAN.
There are two sides to the app: a server side and a client side.
My thinking is that every message from client to client needs to be passed to the server, and the server will forward it to the right destination client. The clients won't communicate with each other directly.
Is this the right model?
If yes, does the server need one socket that listens to all clients? (Because every client sends its messages to the same port on the server.)
Can the server handle managing millions of messages on the same port?
I think it really depends on what you want to accomplish; each choice has its own pros and cons.
For example:
A centralized server can track messages, which users are online, etc., but you will have to manage the connection for each client (see the explanation at the end of the answer for details).
With a P2P model, you avoid the bottleneck and management required by the centralized server, but it might be more of a hassle to manage a non-centralized system (it depends on what exactly you want to accomplish).
If you go with the centralized design, you would typically have a server with one port that listens for connection requests.
Once a client connects, the server accepts the connection on its own dedicated socket and handles it on a separate thread (typically drawn from a thread pool); the listening socket keeps using the same port for new clients.
This allows clients to talk to the server in a non-blocking manner and thereby lets multiple users use the service simultaneously.
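A rough sketch of that accept loop with TcpListener (the port number is arbitrary and the message handling is left as a stub):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public class ChatServer
{
    public async Task RunAsync()
    {
        // One listening socket for every client.
        var listener = new TcpListener(IPAddress.Any, 5000);
        listener.Start();

        while (true)
        {
            // All clients connect to the same port; each accepted connection
            // still gets its own TcpClient/socket.
            TcpClient client = await listener.AcceptTcpClientAsync();

            // Handle each client without blocking the accept loop.
            _ = Task.Run(() => HandleClientAsync(client));
        }
    }

    private async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Parse the message, look up the destination client,
                // and forward it over that client's stream.
            }
        }
    }
}
```

With async I/O like this you don't actually need a dedicated thread per client, which matters once you are aiming at very large numbers of concurrent connections.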
Take a look at SignalR and the chat system implemented using SignalR, Jabbr:
http://signalr.net/
http://about.jabbr.net/

Observing multiple Windows services [closed]

I would like input on the design I currently have planned.
Basically, I have some number of external instruments, each of which should always be running and collecting specific data. My thought was to create a service for each, always running, polling the instruments, performing logging, etc. There could be one instrument, or there could be 40.
However, I need one application to consume all this data, run some math on it, and do the charting, display, emailing, etc. The kicker is that even if this application is not running, the services should constantly be consuming data. Also, these services will almost always run on the same machine as the client application itself, but the ability to network them (like .NET Remoting used to allow) could be an interesting feature.
My question is... is this the best design? If it is, how do I go about doing the communication between services and application? I've looked into WCF, but it seems to be geared towards request-response web services, not something that is continually streaming data to anything that might listen to it. Alternatively, should I have these services contact some other Web Service using WCF, that then compiles the data for use in a thin client viewer that polls the web service often?
Any links and resources would be greatly appreciated. .NET namespaces for me to research are also appreciated. If I wasn't clear about something let me know.
Just a thought, but have you considered adding a backend database? All the services could collate and persist their data; then the application that needs to process the information can just query the database rather than setting up loads of IPC between the services.
WCF can handle streaming. It can also use MSMQ as a transport, which will ensure that no messages are lost, even if your instruments begin producing large quantities of data.
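As a sketch of the MSMQ option (the contract, queue path, and security mode here are illustrative assumptions, not a drop-in configuration):

```csharp
using System;
using System.ServiceModel;

// The collector services send one-way messages to a queue; MSMQ stores them
// even while the consuming application is not running, and the application
// drains the queue whenever it starts.
[ServiceContract]
public interface IInstrumentSink
{
    // The MSMQ transport requires one-way operations.
    [OperationContract(IsOneWay = true)]
    void SubmitReading(string instrumentId, double value, DateTime timestamp);
}

public static class InstrumentMessaging
{
    private const string QueueAddress = "net.msmq://localhost/private/instrumentReadings";

    // Called from a collector service to enqueue a reading.
    public static void SendReading(string instrumentId, double value)
    {
        var factory = new ChannelFactory<IInstrumentSink>(
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            new EndpointAddress(QueueAddress));

        IInstrumentSink proxy = factory.CreateChannel();
        proxy.SubmitReading(instrumentId, value, DateTime.UtcNow);
        ((IClientChannel)proxy).Close();
    }

    // Called from the charting/analysis application to start draining the queue.
    // sinkImplementation is whatever class implements IInstrumentSink.
    public static ServiceHost OpenReceiver(Type sinkImplementation)
    {
        var host = new ServiceHost(sinkImplementation);
        host.AddServiceEndpoint(
            typeof(IInstrumentSink),
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            QueueAddress);
        host.Open();
        return host;
    }
}
```

The MSMQ queue itself (and the MSMQ Windows feature) has to exist ahead of time; the binding only describes how WCF talks to it.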
