I'm trying to send my rs232 device multiple SerialPort.Write commands right after each other. However, I don't think it can handle multiple WRITE commands at once.
Currently, I'm just using Thread.Sleep(500) to delay between writes, but is there a way to detect the right moment to send, or to handle this with buffers?
example code
Interface
private void btn_infuse_Click(object sender, RoutedEventArgs e)
{
    cmd.SetTargetInfusionVolume(spmanager, double.Parse(tbox_targetvolume.Text));
    cmd.StartInfuse(spmanager); // cmd is an object of the Command class
}
Command Class
public void StartInfuse(SerialPortManager spm)
{
    spm.Write("RUN"); // spm is an object of the SerialPortManager class
}

public void SetTargetInfusionVolume(SerialPortManager spm, double num)
{
    spm.Write("MLT " + num.ToString());
}
SerialPortManager class
public void Write(string sData)
{
    if (oSerialPort.IsOpen) // oSerialPort is an object of the SerialPort class
    {
        try
        {
            oSerialPort.Write(sData + "\r");
        }
        catch { MessageBox.Show("error"); }
    }
}
If your serial port settings (especially, as Hans Passant mentioned, flow control) are correct, then the problem is most likely that your device can't process messages fast enough to keep up if you send them too quickly, or that it expects "silent" gaps between messages in order to delineate them.
In this case, a Sleep() to introduce a transmission delay is a very reasonable approach. You will need to find a sensible delay that guarantees the device handles your messages successfully, ideally without stalling your application for too long.
All too often this involves trial and error, but consult the documentation for the device, as quite a few devices use gaps in transmission to mark the end of a message packet. For example, they may expect you to be silent for a short time after each message: if the spec calls for 10 bits' worth of "silent time" on a 2400 bps link, that corresponds to 10/2400 seconds, or just over 4 milliseconds. Windows can compromise this somewhat, though, as it tends to buffer data (it may hang on to it for a few milliseconds to see whether you are going to ask it to transmit anything more), so you may need a significantly longer delay than should strictly be required - perhaps 15-20 ms. And of course I could be barking up the wrong tree, and you may find you need something as large as 500 ms to get it to work.
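To make the arithmetic above concrete, here is a minimal sketch (the helper name and values are mine, not from the question) for computing the silent-gap delay from the baud rate:

```csharp
using System;

static class SerialGap
{
    // Milliseconds of "silent time" for a given number of bit times at a
    // given baud rate, e.g. 10 bits at 2400 bps ≈ 4.17 ms.
    public static double SilentGapMs(int baudRate, int bitTimes)
    {
        return bitTimes * 1000.0 / baudRate;
    }
}
```

In practice you would round this up generously, e.g. `Thread.Sleep((int)Math.Ceiling(SerialGap.SilentGapMs(2400, 10)) + 15)`, to allow for Windows' transmit buffering.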
Related
So I am developing an app that can adjust (amongst other things) the volume of a device. So what I started with was a very simple Model which implements INotifyPropertyChanged. There is no need for a ViewModel in such a simple scenario as far as I can tell. INPC is called when the volume property is set, and the Model generates a TCP message to tell the device to change the volume.
However, this is where it gets complicated. The volume does not have to be changed by the app, it could also be changed directly on the device, or even by another phone with the app. The only way to get these changes from the device is to poll it periodically.
So what I think is reasonable is to change the structure a bit. Now I have a DeviceModel which represents the actual device, and I add a VolumeViewModel. The DeviceModel class handles generating the TCP messages, and it also periodically polls the device. However, let's say the DeviceModel finds that the volume changed. How should this propagate back to the VolumeViewModel so that all changes are two-way, both from the UI and from the actual device? If I put INPC in the DeviceModel, it seems my VolumeViewModel becomes superfluous. Perhaps for this simple contrived example that's fine, but let's say the device is more complicated than just one volume. I was thinking the VM could contain a reference to the Model, and the volume property could just be a reference to the volume in the DeviceModel, but that still doesn't really solve my problem.
If the DeviceModel volume changes, the reference isn't changing, so it seems to me this would not trigger the setter for the volume property in the VolumeViewModel. Do I have the ViewModel inject an event handler into the Model to be called when polling sees a different volume? Do I use INPC in both (and what would implementing it that way look like)?
The set direction is clear, and you want to get the value explicitly. So we need something like:
class MyDeviceService : IDeviceService
{
    public async Task SetVolumeAsync(int volume) { }
    public async Task<int> GetVolumeAsync() { }
}

// ViewModel
class DeviceViewModel : INotifyPropertyChanged
{
    public int Volume { get { ... } set { ... } }
    public DeviceViewModel(IDeviceService service) { ... }
}
For the update you have different options:
Callback
Pro:
Easy to implement
Con:
only one subscriber
looks like a bad implementation of events (in our scenario)
class MyDeviceService
{
    public Action<int> VolumeChangedCallback { get; set; }

    public async Task SetVolumeAsync(int volume) { }
    public async Task<int> GetVolumeAsync() { }

    // producer (null-conditional, in case nobody registered)
    VolumeChangedCallback?.Invoke(newVolume);
}

// consumer
myDeviceService.VolumeChangedCallback = v => Volume = v;

// deregistration
myDeviceService.VolumeChangedCallback = null;
Event
Pro:
Language feature (built in)
Multiple subscribers
Con:
???
class MyDeviceService
{
    public event EventHandler<VolumeChangedEventArgs> VolumeChanged;

    public async Task SetVolumeAsync(int volume) { }
    public async Task<int> GetVolumeAsync() { }

    // producer (null-conditional, in case nobody subscribed)
    VolumeChanged?.Invoke(this, new VolumeChangedEventArgs(newVolume));
}

// consumer
myDeviceService.VolumeChanged += OnVolumeChanged;
// where: void OnVolumeChanged(object s, VolumeChangedEventArgs e) => Volume = e.Volume;

// needs deregistration (use a named handler rather than a lambda, so -= works)
myDeviceService.VolumeChanged -= OnVolumeChanged;
Messaging
Pro:
Easy Sub / Unsub
Multiple subscribers
Multiple senders
Receiver does not need to know the sender
Con:
external library needed (but included in Xamarin.Forms, MvvMCross, other MvvM Frameworks)
class MyDeviceService
{
    public static string VolumeMessageKey = "Volume";

    public async Task SetVolumeAsync(int volume) { }
    public async Task<int> GetVolumeAsync() { }

    // producer
    MessagingCenter.Send<MyDeviceService, int>(this,
        VolumeMessageKey, newVolume);
}

// consumer
MessagingCenter.Subscribe<MyDeviceService, int>(this,
    MyDeviceService.VolumeMessageKey, newVolume => Volume = newVolume);

// needs deregistration
MessagingCenter.Unsubscribe<MyDeviceService, int>(this,
    MyDeviceService.VolumeMessageKey);
Observable
Using Reactive Extensions is always nice if you have event streams.
Pro:
Easy Sub / Unsub
Multiple subscribers
Filterable like IEnumerable (e.g. Where(volume => volume > 10))
Con:
external library just for one case
high learning effort due to a totally new approach
class MyDeviceService
{
    public IObservable<int> VolumeUpdates { get; }

    public async Task SetVolumeAsync(int volume) { }
    public async Task<int> GetVolumeAsync() { }
}

// consumer
_volumeSubscription = myDeviceService.VolumeUpdates
    .Subscribe(newVolume => Volume = newVolume);

// deregistration
// - implicitly, if the subscription object gets thrown away (but not deterministic because of GC)
// - explicitly:
_volumeSubscription.Dispose();
Conclusion
I left out INPC in the model, because that's events, but worse: you have to compare the property names.
If you look at these examples, you'll see that they differ mainly in how you subscribe and unsubscribe. The main difference is the flexibility they offer. Personally, I'd go for Reactive Extensions ;) But Events and Messaging are fine, too. So go for the approach that you and your team members understand best. You just have to remember:
ALWAYS deregister! ^^
I am presuming that you intend to show a UI to the user that displays the current volume (such as a slider widget). Therefore your real challenge is the fact that any attempts to manipulate that slider cannot be immediately confirmed - it may take some time for the device to respond, and once it does it may not even accept the request (or might be overridden by local manipulation). Yet you still have a need to show the mobile app user that their request is being processed - or else they will assume it is malfunctioning.
I've had to solve this in an app as well - although my example was a much more complicated situation. My app is used to control large installations of irrigation management hardware, with many devices (with varying versions of firmware and varying degrees of remote control capabilities). But ultimately the problem was the same. I solved it with standard MVVM.
For each device, create a viewmodel that tracks two distinct values: the actual last known (reported) status of the hardware, and any "pending" value that may have been recently requested by the app. Bind the visual controls to the "pending" values via standard INPC bindings. In the setters for those values, if the new value differs from the last known hardware status, then it would trigger an async request to the device to transition to the desired status. And for the rest of the time, you just poll the device status using whatever mechanism makes sense for you (push notifications might be better, but in my case the infrastructure I was working with could only support active polling). You would update with the new hardware status values, and also the pending values (unless a different value was already pending).
In the app UI, you probably want to show the actual hardware status values as well as the "pending" values that the user is allowed to manipulate. For sliders, you might want to implement a "ghost" slider thumb that reflects the reported hardware value (read-only). For switches, you might want to disable them until the hardware reports the same value as the pending value. Whatever makes sense for your app's design language.
This leaves the final edge case of how to deal with situations where the hardware does not (or cannot) respect a request. Perhaps the user tries to turn up the volume to 11 when the device can only go up to 10. Or maybe someone presses a physical pushbutton on the device to mute it. Or maybe someone else is running the mobile app and fighting you for control of it. In any event, it is easily solved by establishing a maximum wait timeout for pending manipulations. For example, any volume change requests that aren't met after 10 seconds are assumed to be pre-empted and the UI would just stop waiting for it by setting the pending value = last reported value.
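A minimal sketch of the pending/reported pattern described above (the names PendingVolume, ReportedVolume, and OnDeviceReported are my own illustration, not from any framework):

```csharp
using System;
using System.ComponentModel;

class VolumeViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int _pending, _reported;
    private DateTime _requestedAt;
    private readonly TimeSpan _timeout = TimeSpan.FromSeconds(10);

    // Bound to the slider; setting it records a pending request.
    public int PendingVolume
    {
        get => _pending;
        set
        {
            if (_pending == value) return;
            _pending = value;
            _requestedAt = DateTime.UtcNow;
            Raise(nameof(PendingVolume));
            // ...here you would fire off the async request to the device
        }
    }

    // Read-only "ghost" value reflecting the last reported hardware state.
    public int ReportedVolume => _reported;

    // Called by the polling loop with the device's reported value.
    public void OnDeviceReported(int volume)
    {
        _reported = volume;
        Raise(nameof(ReportedVolume));

        // Give up on a pending request the device never honored.
        if (_pending != _reported && DateTime.UtcNow - _requestedAt > _timeout)
        {
            _pending = _reported;
            Raise(nameof(PendingVolume));
        }
    }

    private void Raise(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```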
Anyhow, good luck! It's a challenging thing to handle well, but worth the effort!
A legacy app is in an endless loop at startup; I don't know why/how yet (code obfuscation contest candidate), but regarding the method that's being called over and over (which is called from several other methods), I thought, "I wonder if one of the methods that calls this is also calling another method that also calls it?"
I thought: "Nah, the compiler would be able to figure that out, and not allow it, or at least emit a warning!"
So I created a simple app to prove that would be the case:
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void button1_Click(object sender, EventArgs e)
    {
        method1();
    }

    private void button2_Click(object sender, EventArgs e)
    {
        method2();
    }

    private void method1()
    {
        MessageBox.Show("method1 called, which will now call method2");
        method2();
    }

    private void method2()
    {
        MessageBox.Show("method2 called, which will now call method1");
        // Note to self: Write an article entitled, "Copy-and-Paste Considered Harmful"
        method1();
    }
}
...but no! It compiles just fine. Why wouldn't the compiler flag this code as questionable at best? If either button is mashed, you are in never-never land!
Okay, sometimes you may want an endless loop (pacemaker code, etc.), but still I think a warning should be emitted.
As you said, sometimes people want infinite loops. And the JIT compiler of .NET supports tail-call optimization, so you might not even get a stack overflow for endless recursion like this.
For the general case, predicting whether a program will eventually terminate or get stuck in an infinite loop is impossible to decide in finite time. This is called the halting problem. All a compiler can possibly find are some special cases where it is easy to decide.
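The classic Collatz iteration illustrates why: nobody has proved whether this loop terminates for every positive input, so no compiler could reliably warn about loops like it (the function below is my own illustration):

```csharp
static class Halting
{
    // Counts Collatz steps until n reaches 1. Whether this loop terminates
    // for all positive n is an open mathematical problem.
    public static int CollatzSteps(long n)
    {
        int steps = 0;
        while (n != 1)
        {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            steps++;
        }
        return steps;
    }
}
```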
That's not an endless loop, but an endless recursion. And this is much worse, since they can lead to a stack overflow. Endless recursions are not desired in most languages, unless you are programming malware. Endless loops, however, are often intentional. Services typically run in endless loops.
In order to detect this kind of situation, the compiler would have to analyze the code by following the method calls; however the C# compiler limits this process to the immediate code within the current method. Here, uninitialized or unused variables can be tracked and unreachable code can be detected, for instance. There is a tradeoff to make between the compiling speed and the depth of static analysis and optimizations.
Also it is hardly possible to know the real intention of the programmer.
Imagine that you wrote a method that is perfectly legal. Suddenly, because you are calling this method from another place, your compiler complains and tells you that your method is no longer legal. I can already see the flood of posts on SO like: "My method compiled yesterday. Today it doesn't compile any more. But I didn't change it."
To put it very simply: it's not the compiler's job to question your coding patterns.
You could very well write a Main method that does nothing but throw an Exception. It's a far easier pattern to detect and a much more stupid thing to do; yet the compiler will happily allow your program to compile, run, crash and burn.
With that being said, since technically an endless loop / recursion is perfectly legal as far as the compiler is concerned, there's no reason why it should complain about it.
Actually, it would be very hard to figure out at compile time that the loop can't ever be broken at runtime. An exception could be thrown, user interaction could happen, a state might change somewhere on a specific thread, on a port you are monitoring, etc... there's way too much possibilities for any code analysis tool out there to establish, without any doubt, that a specific recursing code segment will inevitably cause an overflow at runtime.
I think the right way to prevent these situations is through unit testing organization. The more code paths you are covering in your tests, the less likely you are to ever face such a scenario.
Because it's nearly impossible to detect!
In the example you gave, it is obvious (to us) that the code will loop forever. But the compiler just sees a function call, it doesn't necessarily know at the time what calls that function, what conditional logic could change the looping behavior etc.
For example, with this slight change you aren't in an infinite loop anymore:
private bool method1called = false;

private void method1()
{
    MessageBox.Show("method1 called, which will now call method2");
    if (!method1called)
    {
        method1called = true; // set before the call, so the recursion actually stops
        method2();
    }
}

private void method2()
{
    MessageBox.Show("method2 called, which will now call method1");
    method1();
}
Without actually running the program, how would you know that it isn't looping? I could potentially see a warning for while (true), but that has enough valid use cases that it also makes sense to not put a warning in for it.
A compiler is just parsing the code and translating to IL (for .NET anyways). You can get limited information like variables not being assigned while doing that (especially since it has to generate the symbol table anyways) but advanced detection like this is generally left to code analysis tools.
I found this in the Wikipedia article on infinite loops: http://en.wikipedia.org/wiki/Infinite_loop#Intentional_looping
There are a few situations when this is desired behavior. For example, the games on cartridge-based game consoles typically have no exit condition in their main loop, as there is no operating system for the program to exit to; the loop runs until the console is powered off.
Antique punchcard-reading unit record equipment would literally halt once a card processing task was completed, since there was no need for the hardware to continue operating, until a new stack of program cards were loaded.
By contrast, modern interactive computers require that the computer constantly be monitoring for user input or device activity, so at some fundamental level there is an infinite processing idle loop that must continue until the device is turned off or reset. In the Apollo Guidance Computer, for example, this outer loop was contained in the Exec program, and if the computer had absolutely no other work to do it would loop running a dummy job that would simply turn off the "computer activity" indicator light.
Modern computers also typically do not halt the processor or motherboard circuit-driving clocks when they crash. Instead they fall back to an error condition displaying messages to the operator, and enter an infinite loop waiting for the user to either respond to a prompt to continue, or to reset the device.
Hope this helps.
I have a Worker Role which processes items off a queue. It is basically an infinite loop which pops items off of the queue and asynchronously processes them.
I have two configuration settings (PollingInterval and MessageGetLimit) which I want the worker role to pick up when changed (so with no restart required).
private TimeSpan PollingInterval
{
    get
    {
        return TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
    }
}

private int MessageGetLimit
{
    get
    {
        return Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
    }
}

public override void Run()
{
    while (true)
    {
        var messages = queue.GetMessages(MessageGetLimit);
        if (messages.Count() > 0)
        {
            ProcessQueueMessages(messages);
        }
        else
        {
            Task.Delay(PollingInterval);
        }
    }
}
Problem:
During peak hours, the while loop could be running a couple of times per second. This means that it would be querying the config items up to 100,000 times per day.
Is this detrimental or inefficient?
John's answer is a good one, using the Environment Changing/Changed events to modify your settings without restarts, but I think a better method is to use an exponential back-off policy to make your polling more efficient. By making the code behave smarter on its own, you reduce how often you have to go in and tweak it. Remember that each time you update these environment settings, the change has to be rolled out to all of the instances, which can take a little time depending on how many instances you have running. Also, you are putting a step in here that requires a human to be involved.
You are using Windows Azure Storage Queues, which means each time your GetMessages call executes it makes a call to the service and retrieves 0 or more messages (up to your MessageGetLimit). Each time it asks for messages you'll be charged a transaction. Now, transactions are really cheap: even 100,000 transactions a day is $0.01/day. However, don't underestimate the speed of a loop. :) You may get more throughput than that, and if you have multiple worker role instances this adds up (though it will still be a really small amount of money compared to actually running the instances themselves).
A more efficient path would be to put in an exponential backoff approach to reading your messages off the queue. Check out this post by Maarten on a simple example: http://www.developerfusion.com/article/120619/advanced-scenarios-with-windows-azure-queues/. Couple a back off approach with an auto-scaling of the worker roles based on queue depth and you'll have a solution that relies less on a human adjusting settings. Put in minimum and maximum values for instance counts, adjust the numbers of messages to pull based on how many times a message has been present the very next time you ask for one, etc. There are a lot of options here that will reduce your involvement and have an efficient system.
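A sketch of the back-off idea described above (the class name, intervals, and cap are arbitrary values for illustration, not from Maarten's post):

```csharp
using System;

class BackoffPolicy
{
    private readonly TimeSpan _min = TimeSpan.FromSeconds(1);
    private readonly TimeSpan _max = TimeSpan.FromMinutes(2);
    private TimeSpan _current = TimeSpan.FromSeconds(1);

    // Call after each poll: reset when work was found, double when idle.
    public TimeSpan NextDelay(bool gotMessages)
    {
        if (gotMessages)
        {
            _current = _min;
            return TimeSpan.Zero; // keep draining the queue immediately
        }

        var delay = _current;
        var doubled = TimeSpan.FromTicks(_current.Ticks * 2);
        _current = doubled < _max ? doubled : _max;
        return delay;
    }
}
```

The worker's loop would then sleep for `NextDelay(messages.Count() > 0)` instead of a fixed PollingInterval, so quiet periods cost fewer transactions while busy periods drain the queue at full speed.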
Also, you might look at Windows Azure Service Bus Queues in that they implement long polling, so it results in much fewer transactions while waiting for work to hit the queue.
Upfront disclaimer, I haven't used RoleEnvironments.
The MDSN documentation for GetConfigurationSettingValue states that the configuration is read from disk. http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.getconfigurationsettingvalue.aspx. So it is sure to be slow when called often.
The MSDN documentation also shows that there is an event fired when a setting changes. http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changed.aspx. You can use this event to only reload the settings when they have actually changed.
Here is one (untested, not compiled) approach.
private TimeSpan mPollingInterval;
private int mMessageGetLimit;

public override void Run()
{
    // Refresh the configuration members only when they change.
    RoleEnvironment.Changed += RoleEnvironmentChanged;

    // Initialize them for the first time.
    RefreshRoleEnvironmentSettings();

    while (true)
    {
        var messages = queue.GetMessages(mMessageGetLimit);
        if (messages.Count() > 0)
        {
            ProcessQueueMessages(messages);
        }
        else
        {
            // Task.Delay alone returns immediately; it must be waited on
            // (or the loop made async and awaited) for the delay to take effect.
            Task.Delay(mPollingInterval).Wait();
        }
    }
}

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    RefreshRoleEnvironmentSettings();
}

private void RefreshRoleEnvironmentSettings()
{
    mPollingInterval = TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
    mMessageGetLimit = Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
}
I'm currently looking into ways to simulate the transmission of resources and messages between connected objects such as power to devices or control messages over a networked system:
I've been recently looking into TPL Dataflow not for its threading and parallelism but for how well it can implement pipelining of data without large messy code handling edge cases. The simulation will only run maybe once every 500ms or so and is really not time critical.
I've been playing around with the library and have read the documentation a few times now, but I'm struggling to realize a solution with it. Of the node concepts pictured above, I'm not sure what would map to the Dataflow nodes.
I would love to get some advice on whether TPL Dataflow is a good fit here, and if so, a basic implementation of each of the pictured nodes in their Dataflow Block counterparts.
I don't think TPL Dataflow fits this well. There are several reasons:
TDF doesn't have duplex (two-way) communication, you would somehow need to bolt that on.
In TDF, blocks usually receive messages and then produce more messages to send along the pipeline. That doesn't seem to be what you need (with the exception of your hub node), at least not logically.
But I think your requirements don't require something as heavy-weight as TDF. I think what you should do is:
Create a simple library for message sending, possibly using a client-server-like architecture: a client (e.g. consumer node or distribution node) sends a message to a server (e.g. distribution node or power node) and the server replies, possibly with some delay. If a client is connected to multiple servers, it sends the same message to all of them and decides how to handle multiple responses (probably by accepting only the first one; this also means client has to be able to reject a response).
Create a PowerStore class that stores power and can be used to take it. It will return a Task, so the consumer can wait until the power is available.
Using the above two points, it should be relatively simple to build your nodes.
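A possible sketch of the PowerStore idea, using TaskCompletionSource so consumers can await power becoming available (the API shape here is my own invention, not from the answer):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class PowerStore
{
    private readonly object _gate = new object();
    private double _available;
    private readonly Queue<(double amount, TaskCompletionSource<bool> tcs)> _waiters
        = new Queue<(double, TaskCompletionSource<bool>)>();

    // Producer adds power; any queued waiter whose request can now be
    // satisfied completes.
    public void Deposit(double amount)
    {
        lock (_gate)
        {
            _available += amount;
            while (_waiters.Count > 0 && _waiters.Peek().amount <= _available)
            {
                var (req, tcs) = _waiters.Dequeue();
                _available -= req;
                tcs.TrySetResult(true);
            }
        }
    }

    // Consumer takes power; the returned Task completes once enough is available.
    public Task TakeAsync(double amount)
    {
        lock (_gate)
        {
            if (_available >= amount)
            {
                _available -= amount;
                return Task.CompletedTask;
            }
            var tcs = new TaskCompletionSource<bool>(
                TaskCreationOptions.RunContinuationsAsynchronously);
            _waiters.Enqueue((amount, tcs));
            return tcs.Task;
        }
    }
}
```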
After much thought, prototyping and research, I've finally implemented the solution using events and delegates, and it's working quite well!
The only major design problem is that there are cases where messages will enter an infinite loop: for example, if 3 distribution nodes were connected in a triangle, if a node is connected to itself, or if 2 nodes are connected to each other more than once.
I covered each of these edge cases with some simple logic in the event listener connections:
public bool ConnectTo(Node peerNode)
{
    EthernetPort peerPort = peerNode.GetFreePort();
    EthernetPort myPort = this.GetFreePort();

    // Perform a check for free ports for both peers:
    if (peerPort == null || myPort == null)
        return false; // Either myself or my peer does not have a spare port.

    // Perform a check to make sure these nodes aren't already connected:
    if (this.HasConnectedNode(peerNode))
        return false;

    // Connect the two ports:
    myPort.Connect(peerNode, peerPort);
    peerPort.Connect(this, myPort);
    return true;
}

public bool HasConnectedNode(Node node)
{
    foreach (var port in ethernetSwitch.ethernetPorts)
    {
        if (port.peerNode == node)
            return true; // Found a port already connected to this node.
    }
    return false; // No port has this node connected to it.
}
Finally, just in case I missed something (or simply to feel safe about it), I implemented a custom EventArgs type with an int timeToLive field. This value is decremented each time a node handles the message, and if it hits 0 the message is discarded.
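The time-to-live guard might look something like this (the class and member names are my own illustration):

```csharp
using System;

class NetworkMessageEventArgs : EventArgs
{
    public string Payload { get; }
    public int TimeToLive { get; private set; }

    public NetworkMessageEventArgs(string payload, int timeToLive = 8)
    {
        Payload = payload;
        TimeToLive = timeToLive;
    }

    // Each node calls this before forwarding; a false result means the
    // message has been bounced around long enough and should be discarded.
    public bool TryForward()
    {
        if (TimeToLive <= 0) return false;
        TimeToLive--;
        return TimeToLive > 0;
    }
}
```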
I am developing a networked application that sends a lot of packets. Currently, my method of serialization is just a hack where it takes a list of objects and converts them into a string delimited by a pipe character '|' and flushes it down the network stream (or just sends it out through UDP).
I am looking for a cleaner solution to this in C# while minimizing
packet size (so no huge XML serialization).
My experience with BinaryFormatter is that it is SLOW. I am also considering encoding my packets into base64 strings and decoding them on the client side. I would like some input on how this will affect the performance of my application.
Also, another quick question:
My setup creates 2 sockets (one TCP and UDP) and the client connects individually to these two sockets. Data is flushed down either one based off of the need (TCP for important stuff, UDP for unimportant stuff). This is my first time using TCP/UDP simultaneously and was wondering
if there is a more unified method, although it does not seem so.
Thanks as always for the awesome support.
I would use a binary protocol similar to Google's Protocol Buffers. Using Jon Skeet's protobuf-csharp-port, one can use the WriteDelimitedTo and MergeDelimitedFrom methods on IMessage and IBuilder respectively. These prefix the message with the number of bytes so that it can be consumed on the other end. Defining messages is really easy:
message MyMessage {
    required int32 number = 1;
}
Then you build the C# classes with ProtoGen.exe and just go to town. One of the big benefits to protobuffers (specifically protobuf-csharp-port) is that not every endpoint needs to be upgraded at the same time. New fields can be added and consumed by previous versions without error. This version independence can be very powerful, but can also bite you if you're not planning for it ;)
You could look into using ProtoBuf for the serialization.
I personally have used the following system:
Have an abstract Packet class that all packets derive from. The Packet class defines two virtual methods:
void SerializeToStream(BinaryWriter serializationStream)
void BuildFromStream(BinaryReader serializationStream)
This manual serialization makes it possible to create small packets.
Before being sent to the socket, each packet is length-prefixed and tagged with a unique packet type id. The receiving end can then use Activator.CreateInstance to build the appropriate packet and call BuildFromStream to reconstruct it.
Example packet:
class LocationUpdatePacket : Packet
{
    public int X;
    public int Y;
    public int Z;

    public override void SerializeToStream(BinaryWriter serializationStream)
    {
        serializationStream.Write(X);
        serializationStream.Write(Y);
        serializationStream.Write(Z);
    }

    public override void BuildFromStream(BinaryReader serializationStream)
    {
        X = serializationStream.ReadInt32();
        Y = serializationStream.ReadInt32();
        Z = serializationStream.ReadInt32();
    }
}
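The length/type-id framing described above might be wired up like this (the PacketFramer class and its id registry are my own sketch; the Packet and LocationUpdatePacket definitions are repeated so the example is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

abstract class Packet
{
    public abstract void SerializeToStream(BinaryWriter serializationStream);
    public abstract void BuildFromStream(BinaryReader serializationStream);
}

class LocationUpdatePacket : Packet
{
    public int X, Y, Z;
    public override void SerializeToStream(BinaryWriter w) { w.Write(X); w.Write(Y); w.Write(Z); }
    public override void BuildFromStream(BinaryReader r) { X = r.ReadInt32(); Y = r.ReadInt32(); Z = r.ReadInt32(); }
}

static class PacketFramer
{
    // Maps type ids to packet types; both ends must agree on these ids.
    private static readonly Dictionary<ushort, Type> Registry =
        new Dictionary<ushort, Type> { [1] = typeof(LocationUpdatePacket) };

    public static void WritePacket(BinaryWriter writer, ushort typeId, Packet packet)
    {
        using (var buffer = new MemoryStream())
        {
            packet.SerializeToStream(new BinaryWriter(buffer));
            writer.Write((int)buffer.Length); // length prefix
            writer.Write(typeId);             // packet type id
            writer.Write(buffer.ToArray());   // packet body
        }
    }

    public static Packet ReadPacket(BinaryReader reader)
    {
        int length = reader.ReadInt32();
        ushort typeId = reader.ReadUInt16();
        byte[] body = reader.ReadBytes(length);

        var packet = (Packet)Activator.CreateInstance(Registry[typeId]);
        packet.BuildFromStream(new BinaryReader(new MemoryStream(body)));
        return packet;
    }
}
```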
I am developing a networked application that sends a lot of packets
Check out networkComms.net, an open source network communication library, might save you a fair bit of time. It incorporates protobuf for serialisation, an example of which is here, line 408.