Issue/response serial port processing in C#

Okay, here's the problem (this is related to a previous post of mine).
I need an issue/response system for serial comms that works something like this:
issue: hello
response: world?
issue: no, hello nurse
response: well you're no fun.
This means that when I say "hello", the remote unit is expected to send back "world?" within some timeframe, and if it doesn't I should have a way to access that buffer. So here's what I'm thinking; please give me feedback:
a ReaderWriterLock'd 'readBuffer'
an Issue method that writes to the stream
a Response method that watches the readBuffer until it contains what I'm expecting or until the timeout expires
First, how would the Stack Overflow community design this class? Second, how would you write the DataReceived event handler? Third, how would you make this code robust enough that multiple instances of the class can exist in parallel threads for simultaneous communications?

This is basically a producer-consumer problem, so that should be the basis for the general design.
Here are some thoughts on that:
a) FIFO buffer (Queue)
First of all, you should have an instance of a thread-safe Queue (a FIFO buffer) for each instance of your class. One thread would receive the data and fill it, while the other one would read the data in a thread-safe manner. This only means you would have to use a lock on each enqueue/dequeue operation.
A FIFO queue lets you process the data in the worker thread while filling it from the communication thread at the same time. If you need to receive a lot of data, you can dequeue some of it in the worker thread and parse it before all of it has been received; otherwise you would have to wait until all the data has arrived to parse it at once. In most cases, you don't know how much data you are supposed to get until you start parsing it.
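As a rough sketch of the lock-per-operation idea (the class and member names here are illustrative, not from the original answer):
using System.Collections.Generic;

// Sketch of a lock-protected FIFO buffer for received data.
public class ReceiveBuffer
{
    private readonly Queue<byte[]> _queue = new Queue<byte[]>();
    private readonly object _sync = new object();

    // Called from the communication thread (e.g. inside the DataReceived handler).
    public void Enqueue(byte[] chunk)
    {
        lock (_sync)
        {
            _queue.Enqueue(chunk);
        }
    }

    // Called from the worker thread; returns null when nothing is buffered.
    public byte[] TryDequeue()
    {
        lock (_sync)
        {
            return _queue.Count > 0 ? _queue.Dequeue() : null;
        }
    }
}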
b) Worker thread waiting for data
I would create a worker thread which would wait for a signal that new data has been received. You could use ManualResetEvent.WaitOne(timeOut) to have a timeout in case nothing happens for a while. When the data is received, you would have to parse it, based on your current state -- so that would be an implementation of a state machine.
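A minimal sketch of such a worker loop, assuming a 'dataAvailable' ManualResetEvent set by the receive handler and a 'running' shutdown flag (both names are illustrative):
// Sketch of the worker loop. 'dataAvailable' would be set by the receive handler,
// and 'running' is an assumed flag cleared on shutdown.
private readonly ManualResetEvent dataAvailable = new ManualResetEvent(false);
private volatile bool running = true;

private void WorkerLoop()
{
    while (running)
    {
        // Block until new data is signalled, or give up after 5 seconds.
        if (!dataAvailable.WaitOne(TimeSpan.FromSeconds(5)))
        {
            // Timeout: nothing arrived in time; handle it based on the current state.
            continue;
        }
        dataAvailable.Reset();

        // Dequeue and parse whatever is in the FIFO buffer,
        // advancing the state machine as needed.
    }
}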
c) Port abstraction
To handle different types of ports, you could wrap your serial port inside an interface, which could have at least these methods (I might have forgotten something):
interface IPort
{
    void Open();
    void Close();
    event EventHandler<DataEventArgs> DataReceived;
    void Write(Data data);
}
This would help you separate the specific communication code from the state machine.
NOTE:
(According to Microsoft) The DataReceived event is not guaranteed to be raised for every byte received. Use the BytesToRead property to determine how much data is left to be read in the buffer. So you could create your own implementation of IPort which polls the SerialPort at regular intervals to ensure that you don't miss a byte (there is a question on SO which already addresses this).
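As a hedged sketch of that idea (the 50 ms interval, class name and wiring are assumptions; Data and DataEventArgs are the placeholder types declared by the interface above):
using System;
using System.IO.Ports;
using System.Timers;

// Sketch of an IPort implementation that polls the SerialPort instead of
// relying solely on DataReceived.
public class PollingSerialPort : IPort
{
    private readonly SerialPort _port;
    private readonly Timer _timer = new Timer(50);

    public event EventHandler<DataEventArgs> DataReceived;

    public PollingSerialPort(SerialPort port)
    {
        _port = port;
        _timer.Elapsed += (s, e) => Poll();
    }

    public void Open()  { _port.Open(); _timer.Start(); }
    public void Close() { _timer.Stop(); _port.Close(); }

    public void Write(Data data)
    {
        // translate 'data' into bytes and write them to _port
    }

    private void Poll()
    {
        int count = _port.BytesToRead;
        if (count <= 0) return;

        var buffer = new byte[count];
        int read = _port.Read(buffer, 0, count);   // may return fewer than 'count' bytes

        var handler = DataReceived;
        if (handler != null && read > 0)
            handler(this, new DataEventArgs(buffer));
    }
}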
d) Receiving data
To receive the data, you would attach a handler to the IPort.DataReceived event (or SerialPort.DataReceived, if you're not wrapping it) and enqueue the received data into the Queue inside the handler. In that handler you would also set the mentioned ManualResetEvent to notify the worker thread that new data has been received.
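Putting the pieces together, the handler might look roughly like this (the field names follow the sketches above and are purely illustrative):
// Sketch of the DataReceived handler: drain the port, enqueue the bytes,
// and wake the worker thread.
private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = _port.BytesToRead;
    if (count <= 0) return;

    var buffer = new byte[count];
    int read = _port.Read(buffer, 0, count);
    if (read <= 0) return;

    _receiveBuffer.Enqueue(buffer);   // the thread-safe FIFO from (a)
    dataAvailable.Set();              // wake the worker thread from (b)
}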

I had a similar design issue writing an asynchronous socket listener for formatted data.
A translation of what I came up with into the SerialPort / DataReceived model would be something like this:
The main class encapsulates the issue/response system - it will contain the logic for generating the response based on input. Each instance of the class will be bound to a single serial port that can be set during or after construction.
There will be a StartCommunications type method - it will wire up the DataReceived event to another method in the class. This method is responsible for grabbing the data from the port, and determining if a full message has arrived. If so, it raises its own event (defined on the class), which will have an appropriate method wired to it.
You could also have it call a predefined method on your class instead of raising an event - I defined an event to improve flexibility.
That basic design is working just fine in a production system, and can handle more input than the rest of the systems connected to it can.
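A stripped-down, hypothetical skeleton of that shape (this is not the production code; every name and the newline-terminated framing are assumptions):
using System;
using System.IO.Ports;
using System.Text;

// Hypothetical skeleton of the design described above.
public class MessageReceivedEventArgs : EventArgs
{
    public MessageReceivedEventArgs(string message) { Message = message; }
    public string Message { get; private set; }
}

public class IssueResponseChannel
{
    private readonly SerialPort _port;
    private readonly StringBuilder _incoming = new StringBuilder();

    // Raised once a complete message has been assembled from the port.
    public event EventHandler<MessageReceivedEventArgs> MessageReceived;

    public IssueResponseChannel(SerialPort port)
    {
        _port = port;
    }

    public void StartCommunications()
    {
        _port.DataReceived += OnDataReceived;
        _port.Open();
    }

    public void Issue(string command)
    {
        _port.WriteLine(command);
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        // Accumulate whatever arrived; a "full message" here is one terminated by '\n'.
        _incoming.Append(_port.ReadExisting());

        int newline;
        while ((newline = _incoming.ToString().IndexOf('\n')) >= 0)
        {
            string message = _incoming.ToString(0, newline).TrimEnd('\r');
            _incoming.Remove(0, newline + 1);

            var handler = MessageReceived;
            if (handler != null)
                handler(this, new MessageReceivedEventArgs(message));
        }
    }
}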

Related

Event handler timing and threading

I am still learning C#, so please go easy on me. I am thinking about the application I am working on and can't seem to figure out the best approach. This is not a forms application but rather a console one. I am listening to a UDP port, and I get UDP messages as fast as 10 times per second. I then look for a trigger in the UDP message. I am using an event handler that is raised each time I get a new UDP packet, which then calls methods to parse the packet and look for my trigger. So, I have these questions.
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
There are no permanent threads. However there should be a thread that is responsible for receiving. Once you start it, let it run until you no longer need to receive any messages.
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
That depends on how often you would send out messages. If your situation is more like producer/consumer, then a separate thread for sending is a good idea. But if you send out a message only rarely, you can use the thread pool. I can't define how often "rarely" means in this case; you should watch your app and decide.
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
You can; a critical message is far more likely to be delayed by slow message processing or a slow network than by the thread pool.
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
A queue is a good solution. You can have several queues if some messages are independent of others and their execution won't collide, and then process those queues in parallel.
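One hedged sketch of that idea using BlockingCollection, where the receive handler only enqueues and a single consumer drains the queue (the names are illustrative):
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: the receive handler only enqueues; a single consumer task drains the
// queue at its own pace, so a slow DB write never blocks the receive path.
public class PacketPipeline
{
    private readonly BlockingCollection<byte[]> _packets = new BlockingCollection<byte[]>();

    public PacketPipeline()
    {
        Task.Run(() => Consume());
    }

    // Call this from the UDP receive event handler; it returns immediately.
    public void Enqueue(byte[] packet)
    {
        _packets.Add(packet);
    }

    private void Consume()
    {
        // Blocks while the queue is empty; exits after CompleteAdding() is called.
        foreach (var packet in _packets.GetConsumingEnumerable())
        {
            // parse the packet, look for the trigger, write to the DB, etc.
        }
    }

    public void Stop()
    {
        _packets.CompleteAdding();
    }
}
If Enqueue is all the UDP handler does, the handler returns almost immediately, and the event queue growth described above goes away as long as the consumer keeps up on average.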
I'll address your points:
Your listening thread must be a 'permanent' thread that receives messages and distributes them.
(2+3) Look at the TPL library; you should use it instead of working with threads and thread pools directly (unless you need fine control over the operations, which, from your question, it seems you don't). As MSDN states:
The Task Parallel Library (TPL) is based on the concept of a task, which represents an asynchronous operation. In some ways, a task resembles a thread or ThreadPool work item, but at a higher level of abstraction
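For example, dispatching the send as a task (a sketch; SendMessage, reply and Log are placeholders for your own code, not real APIs):
// Sketch: when the trigger is found, hand the send off to the TPL instead of
// creating a thread or queuing ThreadPool work by hand.
Task.Run(() => SendMessage(reply))
    .ContinueWith(t => Log(t.Exception),
                  TaskContinuationOptions.OnlyOnFaulted);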
Look into using message queues, since what you need is a place to receive messages, store them for some time (in memory, in your case) and handle them at your own pace.
You could implement this yourself, but you'll find it gets complicated quickly.
I recommend looking into NetMQ - it's easy to use, especially for what you describe, and it's in C#.

.NET AMQP Messaging Pattern Issues

I have created a small class using RabbitMQ that implements a publish/subscribe messaging pattern on a topic exchange. On top of this pub/sub I have the methods and properties:
void Send(Message, Subject) - Publish message to destination topic for any subscribers to handle.
MessageReceivedEvent - Subscribe to message received events on this messaging instance (messaging instance is bound to the desired subscribe topic when created).
SendWaitReply(Message, Subject) - Send a message and block until a reply message is received with a correlation id matching the sent message id (or timeout). This is essentially a request/reply or RPC mechanism on top of the pub/sub pattern.
The messaging patterns I have chosen are somewhat set in stone due to the way the system is to be designed. I realize I could use reply-to queues to mitigate the potential issue with SendWaitReply, but that breaks some requirements.
Right now my issues are:
For the Listen event, the messages are processed synchronously through the event subscribers as the listener runs in a single thread. This causes some serious performance issues when handling large volumes of messages (i.e. in a back-end process consuming events from a web api). I am considering passing in a callback function as opposed to subscribing to an event and then dispatching the collection of callbacks in parallel using Task or Threadpool. Thread safety would obviously now be a concern of the caller. I am not sure if this is a correct approach.
For the SendWaitReply event, I have built what seems to be a hacky solution that takes all inbound messages from the message listener loop and places them in a ConcurrentDictionary if they contain a non-empty correlation guid. Then in the SendWaitReply method, I poll the ConcurrentDictionary for a message containing a key that matches the Id of the sent message (or timeout after a certain period). If there is a faster/better way to do this, I would really like to investigate it. Maybe a way to signal to all of the currently blocked SendWaitReply methods that a new message is available and they should all check their Ids instead of polling continuously?
Update 10/15/2014
After much exhaustive research, I have concluded that there is no "official" mechanism/helper/library to directly handle the particular use-case I have presented above for SendWaitReply in the scope of RabbitMQ or AMQP. I will stick with my current solution (and investigate more robust implementations) for the time being. There have been answers recommending I use the provided RPC functionality, but this unfortunately only works in the case that you want to use exclusive callback queues on a per-request basis. This breaks one of my major requirements of having all messages (request and reply) visible on the same topic exchange.
To further clarify, the typical message pair for a SendWaitReply request is in the format of:
Topic_Exchange.Service_A => some_command => Topic_Exchange.Service_B
Topic_Exchange.Service_B => some_command_reply => Topic_Exchange.Service_A
This affords me a powerful debugging and logging technique where I simply set up a listener on Topic_Exchange.# and can see all of the system traffic for tracing very deep 'call stacks' through various services.
TL; DR - Current Problem Below
Backing down from the architectural level - I still have an issue with the message listener loop. I have tried the EventingBasicConsumer and am still seeing a block. The way my class works is that the caller subscribes to the delegate provided by the instance of the class. The message loop fires the event on that delegate and those subscribers then handle the message. It seems as if I need a different way to pass the message event handlers into the instance such that they don't all sit behind one delegate which enforces synchronous processing.
It's difficult to say why your code is blocking without a sample, but to prevent blocking while consuming, you should use the EventingBasicConsumer.
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (s, delivery) => { /* do stuff here */ };
channel.BasicConsume(queue, false, consumer);
One caveat, if you are using autoAck = false (as I do), then you need to ensure you lock the channel when you do channel.BasicAck or you may hit concurrency issues in the .NET library.
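Concretely, the ack might be serialized like this (a sketch, assuming channel is the IModel the consumer was created on):
// Sketch: with autoAck = false, serialize the ack against other uses of the channel.
consumer.Received += (s, delivery) =>
{
    // ... process delivery.Body here ...

    lock (channel)
    {
        channel.BasicAck(delivery.DeliveryTag, false);
    }
};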
For the SendWaitReply, you may have better luck if you just use the SimpleRpcClient included in the RabbitMQ client library:
var props = channel.CreateBasicProperties();
// Set your properties
var client = new RabbitMQ.Client.MessagePatterns.SimpleRpcClient(channel, exchange, ExchangeType.Direct, routingKey);
IBasicProperties replyProps;
byte[] response = client.Call(props, body, out replyProps);
The SimpleRpcClient will deal with creating a temporary queue, correlation IDs, and so on, instead of you building your own. If you find you want to do something more advanced, the source is also a good reference.

Windows Phone 8 - Keep socket open and receive data of unknown length

I have a socket on which I want to receive multiple messages of unknown length: text, media, etc.
I saw how it works with Windows.Networking.Sockets, and it seems the sender is expected to send the length first - that's not my case.
I saw a few improvements in System.Net.Sockets, but didn't find any event that listens for received data packets.
My question is: do I have to check the socket for data every now and then, or is there a better implementation?
Windows Phone 8 Approach
Asynchronous networking operations are supported through the System.Net.Sockets.Socket class's ReceiveAsync method. All you need to do is properly configure the SocketAsyncEventArgs before making this call, and your handler will be raised only when data has been received.
The example from this MSDN article shows how to perform a Read operation synchronously, but internally it is using an asynchronous pattern. If you remove the ManualResetEvent part from the code, then the calls will fall through, and the event will only fire when data is ready to be processed.
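A minimal sketch of that pattern (the buffer size and all names are assumptions):
using System.Net.Sockets;

// Sketch: configure SocketAsyncEventArgs once and let its Completed event fire
// whenever data arrives.
public class AsyncReceiver
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _args = new SocketAsyncEventArgs();

    public AsyncReceiver(Socket socket)
    {
        _socket = socket;
        _args.SetBuffer(new byte[4096], 0, 4096);
        _args.Completed += OnReceiveCompleted;
    }

    public void Start()
    {
        // ReceiveAsync returns false when it completed synchronously, in which
        // case Completed will not be raised, so invoke the handler directly.
        if (!_socket.ReceiveAsync(_args))
            OnReceiveCompleted(_socket, _args);
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
    {
        if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
            return; // failed or the remote side closed

        // consume e.Buffer[e.Offset .. e.Offset + e.BytesTransferred - 1]

        Start(); // queue up the next receive
    }
}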
Desktop / Full .NET Approach
Use the TcpClient class instead of Socket and you will be able to get a NetworkStream object which implements BeginRead - an asynchronous method which will invoke a callback function only when there is new data to be read (or the remote side has closed).
Note: the MSDN example for BeginRead is completed on the page for EndRead; use the two code snippets together for working code.
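A rough sketch of that BeginRead/EndRead pairing (the class name and buffer size are assumptions, and 'client' is an already-connected TcpClient):
using System;
using System.Net.Sockets;

// Sketch of an asynchronous read loop on a NetworkStream.
public class StreamReadLoop
{
    private readonly NetworkStream _stream;
    private readonly byte[] _buffer = new byte[4096];

    public StreamReadLoop(TcpClient client)
    {
        _stream = client.GetStream();
    }

    public void Start()
    {
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null);
    }

    private void OnRead(IAsyncResult ar)
    {
        int count = _stream.EndRead(ar);
        if (count == 0)
            return; // the remote side closed the connection

        // process _buffer[0 .. count - 1] here

        Start(); // issue the next read
    }
}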
You can use the select function to keep track of your open descriptors and receive the data that has arrived over the socket.

How to receive multiple tasks at the same time on a single COM port in C#

I'm a novice in C# and I'm stuck with this task. My requirement is: I create many threads, and these threads (sending via COM1) have to communicate with a single serial port, say COM2. All the threads have to send messages using a single COM port (and receive via COM2).
Say I send "helloworld1" (from thread1) and "helloworld2" (from thread2) using COM1 and receive using COM2 in HyperTerminal. I need to see both helloworlds in HyperTerminal at the same time.
Please help me out.
Since you want to send from two different threads, you will need to surround your calls to SerialPort.Write() with a lock, like this:
SerialPort s = new SerialPort();
// configure serial port, etc.
// spin off additional threads
// in each thread, do this:
lock (s)
{
    s.Write("Hello World1");
}
You will want to start here.
You can instantiate two instances of the SerialPort class, one for each COM port you want to send/receive on.
I have used two variations for receiving data with the SerialPort class:
1. You can manually Read on the port at a certain interval (e.g. you can have each thread read as needed).
2. The SerialPort class exposes a DataReceived event that can be subscribed to (an ErrorReceived event is also available).
Option 1 might be the best fit.
Edit
After reading your comment, Option 2 may be a better fit so that you can have one "receive" thread that subscribes to the DataReceived/ErrorReceived events.
Per #Slider, the lock will also be required to ensure only 1 thread is writing at any given time.

Serial port communication: polling serial port vs using serial port DataReceived event

I am just reviewing some code I wrote to communicate with the serial port in C# on CF2.0.
I am not using the DataReceived event since it's not reliable. MSDN states that:
The DataReceived event is not guaranteed to be raised for every byte received. Use the BytesToRead property to determine how much data is left to be read in the buffer.
I poll the port with Read() and have a delegate that processes the data when it is read. I also read somewhere that "polling is bad" (no explanation given).
Any ideas why polling might be bad, aside from the usual threading cautions? I have a separate (background) thread that polls the port; the thread exits after the data is read. It's all tested and works well.
The way I read that, you might get one event for multiple bytes, rather than one event per byte. I would still expect to get an event when data is ready, and not have it "skip" some bytes entirely.
I've always used this event, and have not had any trouble with it.
Conventional wisdom has it that "polling is bad" because it often ends up being a CPU-bound process. If blocking I/O is used instead, then the CPU is available for other processes until the event happens.
That said, it is usually possible to set things up so that a poll waits for a (short) timeout before returning when no characters are available. If a suitable timeout is chosen, then your simple polling loop uses significantly less CPU time, and other processes also get to run.
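For example, a background reader that blocks on Read with a short timeout instead of spinning (the 100 ms timeout and buffer size are illustrative):
// Sketch: blocking Read with a short timeout on a background thread.
private void ReadLoop(SerialPort port)
{
    port.ReadTimeout = 100; // milliseconds
    var buffer = new byte[256];

    while (port.IsOpen)
    {
        try
        {
            int count = port.Read(buffer, 0, buffer.Length);
            if (count > 0)
            {
                // hand buffer[0 .. count - 1] to the processing delegate
            }
        }
        catch (TimeoutException)
        {
            // nothing arrived within 100 ms; loop again -- the CPU stays mostly idle
        }
    }
}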
I haven't used serial ports from C# at all, but I am going to hazard a guess that what the documentation meant by
The DataReceived event is not guaranteed to be raised for every byte received. Use the BytesToRead property to determine how much data is left to be read in the buffer.
is that you can't expect to get one event per character. It might under some circumstances deliver the event with more than one character available. Simply retrieve all the available characters in your event handler, and all will be well.
Edit: Doing a blocking call on a reader thread might be the best answer overall. It isn't polling per se since the thread is blocked until characters arrive. You might need to tune the buffer sizes and some of the serial port settings if you need to process the data as it arrives rather than in fixed sized chunks.
I'm pretty sure the underlying serial port driver code is interrupt driven, even when using the blocking Read call.
