At the moment I am putting together an asynchronous TCP server. Everything seems to be coming together, but I'm now at the stage where I need to figure out what to do with the data once it is received. (I should mention that the server will be used primarily for receiving data and will possibly never send anything to the clients.)
Because it is written asynchronously, I don't particularly want to do any processing of the data in the server application itself (in the handler in which the data is received), so that it performs as well as possible; eventually, though, the data needs to be processed and submitted to various SQL tables to be of any use.
As part of a previously asked question here on SO
Asynchronous Processing of Data
Stephen Cleary pointed out that, to ensure no messages are lost due to power failure, system failure, etc., I should look into some kind of message queue.
In doing so, I have seen various ways of achieving this, one of which is using SQL Server as the host for the queue.
What I'm wondering is: would using SQL Service Broker and a queue be any quicker than doing a normal insert into a table that contains only a UID, the data (a byte array no bigger than 1024 bytes) and a processed flag? And if not, what is the fastest way to do that insert from C#?
The processing of the data will probably take place in another application on the same machine, which will also receive the data and host the SQL Server instance, if that makes any difference.
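For illustration, this is the kind of plain insert I mean; the table and column names here are just placeholders for the real schema:

using System;
using System.Data;
using System.Data.SqlClient;

// Illustration only: a parameterized insert into the kind of queue table described
// above. Table and column names (dbo.IncomingMessages, Id, Payload, Processed)
// are placeholders.
static void EnqueueToTable(string connectionString, byte[] data)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO dbo.IncomingMessages (Id, Payload, Processed) VALUES (@id, @payload, 0)",
        conn))
    {
        cmd.Parameters.AddWithValue("@id", Guid.NewGuid());
        cmd.Parameters.Add("@payload", SqlDbType.VarBinary, 1024).Value = data; // <= 1024 bytes
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}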
Any advice or thoughts will be much appreciated!
If you can afford the license fee I would recommend NServiceBus. If you can't afford it, consider MassTransit. Both will manage the message queue for you, and both support multiple queue types, such as:
MSMQ
RabbitMQ
ActiveMQ
Azure
Implementing your own queuing system in SQL Server is a poor choice in the long run. Been there, got the t-shirt.
Related
Intro:
I have developed a server (TCP listener) program in C# which runs at a specific (constant) IP address on a specific (constant) port. This server program receives data packets sent by client programs, processes them and sends a data packet back on the same socket. The number of client programs for this server may be in the hundreds:
Code Sample
// Host is a TcpListener that is already listening; msgA and msgB are byte[] buffers.
while (true)
{
    Socket socket = Host.AcceptSocket();   // blocks until the next client connects
    int len = socket.Receive(msgA);        // read the request
    performActivities(msgA);               // may take up to 5 seconds
    socket.Send(msgB);                     // send the response to the same client
    socket.Close();                        // only now can the next client be accepted
}
Problem:
Due to some critical business requirements, processing may take up to 5 seconds, so other requests arriving during this time are queued. I need to avoid that, so that every request is served within no more than 5 seconds.
Query:
I can make it multi-threaded, but (pardon me if you find me a novice):
How will one socket receive another packet from a different client if it is still held open by the previous thread?
When serving multiple requests, how can I make sure that each response is sent back to the respective client?
Building an efficient, multi-threaded socket server requires strong knowledge and skills in that area. My proposal is that instead of trying to build your own TCP server from scratch, you use one of the existing libraries that have already solved this problem. A few that come to mind are:
DotNetty, used by the Azure IoT services.
System.IO.Pipelines, which is experimental but already quite fast.
Akka.Streams TCP stream.
Each of those libraries covers things like:
Management of a TCP connection lifecycle.
Efficient management of byte buffers. Allocating a new byte[] for every packet is highly inefficient and causes a lot of GC pressure.
Safe access to socket API (sockets are not thread-safe by default).
Buffering of incoming packets.
Abstractions in the form of handlers/pipes/stages that allow you to compose and manipulate the binary payload for further processing. This is particularly useful, e.g., when you want to operate on the incoming data in terms of messages: by default TCP is a binary stream, and it doesn't know where one message ends and the next one starts (see the framing sketch below).
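As a concrete example of that framing problem, here is a minimal sketch of length-prefixed reads over a raw stream; the 4-byte length prefix is an assumption about the wire format:

using System;
using System.IO;
using System.Threading.Tasks;

// Sketch of length-prefixed framing: read exactly 4 bytes of length, then exactly
// that many bytes of payload. Assumes the prefix matches BitConverter's endianness
// (little-endian on x86/x64).
static async Task<byte[]> ReadMessageAsync(Stream stream)
{
    byte[] prefix = await ReadExactAsync(stream, 4);
    int length = BitConverter.ToInt32(prefix, 0);
    return await ReadExactAsync(stream, length);
}

static async Task<byte[]> ReadExactAsync(Stream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}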
Writing a production-ready TCP server is a tremendous amount of work. Unless you're an expert in network programming with very specific requirements, you should never write one from scratch.
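That said, to answer the threading questions directly, here is a minimal sketch of a Task-per-connection accept loop: each accepted connection gets its own socket, so replies cannot get mixed up between clients (PerformActivities is a placeholder for the 5-second work):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Minimal sketch: the accept loop never waits on a client's work, so new clients
// are accepted while earlier requests are still being processed.
static async Task RunServerAsync(int port)
{
    var listener = new TcpListener(IPAddress.Any, port);
    listener.Start();
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        _ = Task.Run(() => HandleClientAsync(client));  // fire off per-client handling
    }
}

static async Task HandleClientAsync(TcpClient client)
{
    using (client)
    {
        NetworkStream stream = client.GetStream();
        var request = new byte[1024];
        int read = await stream.ReadAsync(request, 0, request.Length);
        byte[] response = PerformActivities(request, read); // placeholder: the 5-second work
        await stream.WriteAsync(response, 0, response.Length);
    } // the response went back on this client's own stream
}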
I have been searching extensively for solutions or alternatives to the following approach to handle large data processing:
We are currently using a C# Windows service (call it Listener) to listen to TCP ports and save incoming messages to the MSMQ (Message Queue) to make sure that no data is lost. At the same time, we have another C# Windows service (call it Decoder) that reads from the MSMQ, processes the data in multiple threads and calls SQL stored procedures to save the data and do additional data processing.
The rate of incoming messages is around 10,000 messages/second. Each message is processed individually by the Decoder and then sent to the SQL stored procedure, but recently we have been facing issues related to performance and high server resource usage (CPU and RAM).
We are currently investigating SQL Server Integration Services (SSIS) as an alternative way to read the messages from the message queue, do the processing and execute the stored procedures, but it seems to have some limitations when it comes to message queues.
Now my question is: what do you think are better alternatives to the current system design, given that we have to use SQL Server to store the data?
SSIS is intended for ETL-type workloads -- think batch processing of files and synchronizing data across systems (especially relational databases).
For processing lots of messages that need a bit of massaging, Kafka (https://kafka.apache.org/) and Hadoop might do better, even if at the end of the process you push the data to SQL Server or pull it in via SSIS.
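If you go the Kafka route, producing from the Listener is only a few lines; here is a minimal sketch assuming the Confluent.Kafka client, with the broker address and topic name as placeholders:

using System.Threading.Tasks;
using Confluent.Kafka;

// Sketch assuming the Confluent.Kafka client; "localhost:9092" and "incoming-messages"
// are placeholders. A real service would keep one long-lived producer, not one per message.
static async Task PublishAsync(byte[] payload)
{
    var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
    using (var producer = new ProducerBuilder<Null, byte[]>(config).Build())
    {
        await producer.ProduceAsync("incoming-messages",
            new Message<Null, byte[]> { Value = payload });
    }
}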
I need to implement a persistent queue in my C# service. The service polls data from an external source. After it receives a unit of data, it should send it to a server. If sending fails, it should write the data to disk in a queue-like manner and try to resend it at an interval, while continuing to poll data and thus keep filling up the queue. I need to save it to disk because the network can fail and, meanwhile, the server can be shut down, resulting in a restart of the service and the loss of any in-memory queue. (Of course no polling happens during the reboot, but the data received during the preceding network failure would otherwise be lost.)
I have now solved this problem by implementing a queue in SQL CE. After the service polls the data, it writes it directly to the SQL CE database; another thread then reads (peeks) the database and tries to send the data. If the send succeeds, the message is dequeued. I feel this solution is quite heavyweight and not very efficient.
Does anyone have experience with a similar scenario and tips on how to implement it in a better way?
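For comparison, here is one lighter-weight variant of the same peek / try-send / dequeue pattern, sketched with a local MSMQ queue instead of SQL CE; the queue path and TrySendToServer are assumptions:

using System;
using System.Messaging;
using System.Threading;

// Sketch of the retry loop: peek at the head of a local persistent queue, try to
// send it, and only remove it after a successful send. The queue path and
// TrySendToServer(...) are placeholders.
static void PumpQueue()
{
    const string queuePath = @".\private$\outbound";   // placeholder local queue path
    MessageQueue queue = MessageQueue.Exists(queuePath)
        ? new MessageQueue(queuePath)
        : MessageQueue.Create(queuePath);
    queue.Formatter = new BinaryMessageFormatter();

    while (true)
    {
        Message head = queue.Peek();                    // blocks until a message is available
        if (TrySendToServer((byte[])head.Body))         // send succeeded:
            queue.Receive();                            //   now it is safe to dequeue
        else
            Thread.Sleep(TimeSpan.FromSeconds(30));     // server unreachable: retry later
    }
}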
I came across a situation in my work environment where I have a WCF service which receives messages from a client and stores them in a database. My problem is: suppose the server is down for 10 minutes. During those 10 minutes the messages should be stored somewhere on the client, and the client should check the availability of the server every 1 minute. Is there any procedure I could follow? Any help would be appreciated. Thank you.
Binding: netTcpBinding
MSMQ does exactly what your first sentence says - when you send an MSMQ message, if it can't get the remote queue then it stays with the client and the built-in MSMQ service retries in the background. That way your message, once sent, is "safe." It's going to reach its destination if at all possible. (If you have a massive message volume and messages need to be stored for a long time then storage capacity can be an issue, but that's very, very unlikely.)
Configure WCF to send/receive MSMQ messages
I'd only do this if it's necessary. It involves modifying both the service and the client, and the documentation isn't too friendly.
Here's the documentation for MsmqBinding. Steps 3 and 4 for configuring the WCF service are blank. That's not helpful! When I selected the .NET 4.0 documentation, those details were filled in.
I looked at several tutorials, and if I was going to look at this I'd start with this one. I find that a lot of tutorials muddy concepts by explaining too many things at once and including unnecessary information about other parts of the writers' projects.
The client queues its messages locally
If you don't want to make lots of modifications to your service to support MsmqBinding, you could just implement the queuing locally. If the WCF service is down, the client puts the message in a local MSMQ queue and then, at intervals, reads the messages back from that queue and tries sending them to the WCF service again. (If the WCF service is still down, put the message back in the queue.)
I'd just send messages straight to the queue and have another process dequeue and send to WCF. That way the client itself just "fires and forgets" if that's okay.
That way you don't have to deal with the hassle of modifying your service, but you still get the benefit. If your message can't go to the WCF service then it goes someplace "safe" where it can even survive the client app terminating or the computer restarting.
Sending and receiving messages in a local queue is much easier to configure. Your client can check to see if the queues exist and create them if needed. This is much easier to work with and the code samples are much more complete and on-point.
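A minimal sketch of that local-queue idea using System.Messaging; the queue path and the string payload are assumptions:

using System.Messaging;

// Sketch: create the local private queue if it doesn't exist yet and drop the
// message there; a separate retry loop forwards queued messages to the WCF
// service when it is reachable. The queue path and payload type are placeholders.
static void EnqueueLocally(string messageBody)
{
    const string queuePath = @".\private$\pendingMessages";

    if (!MessageQueue.Exists(queuePath))
        MessageQueue.Create(queuePath);

    using (var queue = new MessageQueue(queuePath))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        queue.Send(messageBody, "pending message");
    }
}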
I'm trying to build a simple multithreaded TCP server. The client connects and sends data to the server; the server responds and waits for data again. The problem is that I need the server to listen for incoming data in a separate thread and be able to send a command to the client at any time (for example, to notify it about a new update). As far as I understood, whenever the client sends data to the server and the server doesn't respond with any data, the client app doesn't let me send more data and the server simply doesn't receive it. If I send data either way around, does the data need to be 'acknowledged' for TcpClient?
Here's the source for the server: http://csharp.net-informations.com/communications/files/print/csharp-multi-threaded-server-socket_print.htm
How can I make the server send a command to a client in a separate thread, outside the "DoChat" function's loop? Or do I have to handle everything in that thread? Do I have to respond to each request the client sends me? Thanks!
The problem is that I need the server to listen for incoming data in a separate thread
No, there is an async API. You can also poll a list of sockets to see which have new data waiting, which obviously has to be done from a worker thread.
As far as I understood, whenever the client sends data to the server and the server doesn't respond with any
data, the client app doesn't let me send more data and the server simply doesn't receive it.
That is down to bad client programming, not the way sockets work. Sockets are totally fine with streaming data in the sending and receiving directions at the same time.
How can I make the server send a command to a client in a separate thread, outside the "DoChat"
function's loop?
Well, me doing your job costs money.
BUT: the example is bad, a total anti-pattern. One thread per client? You will run into memory and performance problems once 1000+ clients connect. You get tons of context switches.
Second, the client is not async because it is not written that way. May I suggest going to the documentation, reading up on sockets and trying to build that yourself? THEN come back with questions that show more than "I just tried to copy-paste".
With proper programming this is totally normal. I have a similar application in development, sending data to the client all the time and getting commands from the client to modify the data stream. Works like a charm.
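To make that concrete, here is a minimal sketch of a full-duplex per-client handler: one task keeps reading commands from the client while the server writes updates to the same stream whenever it wants (HandleCommand and BuildUpdate are placeholders):

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Sketch: NetworkStream supports one concurrent read and one concurrent write,
// so the read loop and the server-initiated writes can run at the same time.
// HandleCommand(...) and BuildUpdate() are placeholders.
static async Task ServeClientAsync(TcpClient client)
{
    using (client)
    {
        NetworkStream stream = client.GetStream();

        Task readLoop = Task.Run(async () =>
        {
            var buffer = new byte[1024];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                HandleCommand(buffer, read);              // commands from the client
        });

        while (!readLoop.IsCompleted)
        {
            byte[] update = BuildUpdate();                // data pushed by the server
            await stream.WriteAsync(update, 0, update.Length);
            await Task.Delay(1000);                       // push something every second
        }
    }
}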
If I send data either way around, does the data need to be 'acknowledged' for TcpClient?
Yes and no. No, not for TCP - TCP does its own handshaking under the hood. Yes, if your protocol decides it has to, which is a programmer-level design decision. It may or may not be necessary, depending on the content of the data. Sometimes the acknowledgement provides more information (a server-side timestamp, a tracking number) and is not purely there to say "I got it".