I am listening for a network change in my WinRT application using what I believe is the recommended approach.
I subscribe to the event using this code (I have tried it in a number of places, but currently it lives in the page's OnNavigatedTo method):
NetworkInformation.NetworkStatusChanged += NetworkInformation_NetworkStatusChanged;
Then in the OnNavigatedFrom method I remove it:
NetworkInformation.NetworkStatusChanged -= NetworkInformation_NetworkStatusChanged;
When I remove the network cable, the NetworkInformation_NetworkStatusChanged event fires correctly. However, when I plug it back in (go back online), the event fires twice and my data (stored locally while offline) gets uploaded to the server twice.
Has anybody come across this before, or know why it might be happening? It's driving me mad.
Many thanks
Chris
I remember this being an issue. Seems like it still is.
http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/520ea5e2-cc17-486d-815e-528ca041d77f/
To solve your problem, keep track of the network availability with a flag and only upload if the previous network status was unavailable.
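A minimal sketch of that flag approach, assuming WinRT's GetInternetConnectionProfile is a reasonable connectivity probe; UploadPendingData is a hypothetical stand-in for your upload routine:

private bool _wasOnline = true; // assume we start online

private void NetworkInformation_NetworkStatusChanged(object sender)
{
    var profile = NetworkInformation.GetInternetConnectionProfile();
    bool isOnline = profile != null &&
        profile.GetNetworkConnectivityLevel() == NetworkConnectivityLevel.InternetAccess;

    // Upload once per offline-to-online transition, even if the event
    // fires once per enabled adapter.
    if (isOnline && !_wasOnline)
        UploadPendingData(); // hypothetical

    _wasOnline = isOnline;
}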
In our testing, we found that you receive the event once per available adapter. We have 5 adapters, so every time we plug a cable back in we get the event as many times as there are active (enabled) adapters. We verified this by disabling one of the adapters: the number of events dropped by exactly one, and vice versa. We only seem to get a single event for a disconnect, though.
The other reason your event might fire more than once is that your network actually drops and reconnects.
In any case it's on you to write the code defensively to deal with the reality of the implementation and general unreliability associated with network connections.
I have a question about sending/publishing NServiceBus events and commands.
Commands seem pretty straightforward. According to the documentation, "They should be sent direct to their logical owner." So I guess the destination is decided either by the defined routing, or by overriding the default routing:
options.SetDestination("MyDestination")
Am I correct so far?
But I am not sure I am understanding how events actually work:
"An event, on the other hand, is sent by one logical sender, and received by many receivers, or maybe one receiver, or even zero receivers. This makes it an announcement that something has already happened."
Does that mean that an event would be processed by ANY endpoint that implements IHandleMessages<SomethingHappened>, regardless of the routing configuration? I mean, if I have endpoints A, B, C, D, and E, and A is configured to send to B, can C, D, and E still get the event? What if I have no explicit routing configuration because I am using options.SetDestination to send my commands? Is there any way to say I want this event to be published only to D and E?
Thank you for any light you can shed on this subject.
Commands require routing. A command is always destined for a specific endpoint to be processed as there's an expectation for the outcome and knowledge of what destination can process the command at hand.
Events have no routing. Events are notifications about something that already happened. Anyone can receive and process events. It won't change the outcome of what has happened in the first place, causing the event to be raised.
An event is analogous to a radio broadcast, which helps explain the 'how' and why routing is unnecessary. When a radio station is broadcasting, the host doesn't know who'll tune in to listen. Listeners can tune in (subscribe) and tune out (unsubscribe). All these listeners (subscribers) need to know is the radio station's frequency (a 'topic' or 'exchange'), which is where the broadcast takes place (where events are published).
To sum it up - events are notifications, allowing loosely coupled distribution of events among endpoints. It's the subscribers that choose whether to receive events or not, not the publisher.
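A minimal sketch of the difference using the NServiceBus API (the message types, destination name, and handler are made up for illustration):

using System.Threading.Tasks;
using NServiceBus;

public class DoSomething : ICommand { }
public class SomethingHappened : IEvent { }

public class Example
{
    public static async Task SendAndPublish(IMessageSession session)
    {
        // Command: always routed to one specific endpoint.
        var options = new SendOptions();
        options.SetDestination("MyDestination"); // overrides any configured routing
        await session.Send(new DoSomething(), options);

        // Event: no destination at all; delivered to every current
        // subscriber, which may be many, one, or none.
        await session.Publish(new SomethingHappened());
    }
}

// Any endpoint that wants the event simply declares a handler (and, on
// transports without native pub/sub, subscribes to the publisher):
public class SomethingHappenedHandler : IHandleMessages<SomethingHappened>
{
    public Task Handle(SomethingHappened message, IMessageHandlerContext context)
    {
        return Task.CompletedTask; // react to the announcement
    }
}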
Like many people, I'm having issues with the DataReceived event not firing.
After working with it, I wrapped my handling logic around the BytesToRead count, so that if I miss an event I can pick up where I left off. That seemed like it would fix all my issues.
The problem is, sometimes it doesn't trigger even once. Depending on the packet being sent back, this could be absolutely critical, forcing me to restart the application and the setup process because it relies on being able to process a response.
Reading through some of the responses to similar questions hasn't gotten me any closer to guaranteeing that the event fires when I need it to. Microsoft mentions that DataReceived is not guaranteed to fire for every byte, but I noticed this in the documentation:
The DataReceived event is also raised if an Eof character is received, regardless of the number of bytes in the internal input buffer and the value of the ReceivedBytesThreshold property.
So my question is, can I force an EOF character through my serial connection to force the event to fire? What would this character be, 0x1A?
If I can't force an EOF character through serial, what would my options be? My first thought was to create a Task that watches for the event to trigger and, if it doesn't, performs the handling actions itself.
So I was able to fix my issues, coming out a little wiser.
From my observations, the ReceivedBytesThreshold plays a critical part in how effective the event actually is. When the threshold is set too low, the serial port has a tendency to get confused, and will eventually throw up its hands and give up.
Setting it closer to the size of the data I expected appeared to ease the burden enough to make reading fairly consistent.
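For illustration, a minimal sketch of that arrangement, assuming a fixed-size packet; the port name, baud rate, and PacketSize are made up:

const int PacketSize = 64; // hypothetical size of the expected packet

var port = new SerialPort("COM1", 115200);
port.ReceivedBytesThreshold = PacketSize; // don't fire for every byte
port.DataReceived += (s, e) =>
{
    var sp = (SerialPort)s;
    // Drain everything available, in case several packets arrived
    // before the event was raised.
    while (sp.BytesToRead >= PacketSize)
    {
        var buffer = new byte[PacketSize];
        sp.Read(buffer, 0, PacketSize);
        // hand the buffer off for processing here
    }
};
port.Open();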
I didn't test my idea of using a Task, but reading further online appeared to answer my question about using 0x1A: from what I've seen, the event will fire when that character is received (at least on a Windows machine).
I have noticed a problem where the .Completed event of a SocketAsyncEventArgs seems to stop firing. The same SAEA can fire properly and be replaced in the pool several times, but eventually all instances will stop firing, and because the code to replace them in the pool is in the event handler, the pool empties.
The following circumstances are also apparently true:
1) It seems to only occur when a server side socket sends data out to one of the connected clients. When the same class is connecting as a client, it doesn't appear to malfunction.
2) It seems to occur under high load. The thread count seems to creep up until eventually the error happens.
3) A test rig under similar stress appears never to malfunction. (It's only 20 messages per second, and the test rig has been proven to 20K)
I'm not going to be able to paste the rather complicated code, but here is a description of my code:
1) The main inspiration is this: http://vadmyst.blogspot.ch/2008/05/sample-code-for-tcp-server-using.html. It shows how to hook up a completion port using an event, how to get different sized messages over the TCP connection, and so on.
2) I have a byte buffer in which all SAEAs have a piece, that doesn't overlap.
3) I have an object pool of SAEAs, based on a BlockingCollection; it throws if the pool stays empty for too long (roughly like the sketch after this list).
4) As a server, I keep a collection of sockets returned from the AcceptAsync function, indexed by the endpoint of the client. A single process can use one instance as a server as well as multiple instances as clients (forming a web). They share the data buffer and pool of SAEAs.
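Roughly, the pool looks like this (a simplified sketch, not the real code; the class name and timeout value are illustrative):

using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

public class SaeaPool
{
    private readonly BlockingCollection<SocketAsyncEventArgs> _pool =
        new BlockingCollection<SocketAsyncEventArgs>();

    public void Return(SocketAsyncEventArgs args)
    {
        _pool.Add(args); // normally called from the Completed handler
    }

    public SocketAsyncEventArgs Take()
    {
        SocketAsyncEventArgs args;
        // Throws when nothing is returned to the pool in time, which is
        // exactly the failure mode seen when Completed stops firing.
        if (!_pool.TryTake(out args, TimeSpan.FromSeconds(30)))
            throw new TimeoutException("SAEA pool empty for too long");
        return args;
    }
}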
I realise it's hard to explain this; I've been debugging it for an entire day and night. Just hoping someone has heard of this or has useful questions or suggestions.
At the moment, I am suspecting some sort of thread exhaustion, leading to the SAEAs not being able to call the completion. Alternatively, some sort of buffer problem on the outgoing buffer.
So, another day of debugging and finally I have an explanation.
1) The SAEAs were not firing the Completed event because they were unable to send any more data. Wireshark revealed that the receiver's advertised TCP window had shrunk to zero (TCP ZeroWindow).
2) The TCP window was shrinking to zero because the networking layer was passing an event up the stack that took too long to complete, i.e. there was no producer/consumer between the network layer and the UI. The network op therefore had to wait for the screen draw before the ACK was sent.
3) The event that took too long was a screen draw in an event handler on the GUI. The test rig was a console window (one that summarized incoming messages), so that's why it didn't cause a problem at much higher load. It's normal not to redraw the screen on each message, but this was happening because the project isn't quite done yet. The redraw rate would have been fixed later.
4) The short-term solution is simply to make sure no GUI holds up the show. A more robust solution might be to create a producer/consumer at the network layer.
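A minimal sketch of that producer/consumer decoupling; the class name, the dispatcher usage, and the injected UI callback are hypothetical stand-ins:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Windows.Threading;

public class MessagePump<T>
{
    private readonly BlockingCollection<T> _queue = new BlockingCollection<T>();
    private readonly Action<T> _updateUi;

    public MessagePump(Action<T> updateUi)
    {
        _updateUi = updateUi;
    }

    // Network layer calls this: enqueue and return immediately, so the
    // TCP ACK is never held up by a screen draw.
    public void OnMessageReceived(T message)
    {
        _queue.Add(message);
    }

    // A dedicated consumer thread drains the queue and pushes UI
    // updates at its own pace.
    public void Start(Dispatcher dispatcher)
    {
        Task.Factory.StartNew(() =>
        {
            foreach (var m in _queue.GetConsumingEnumerable())
                dispatcher.BeginInvoke(new Action(() => _updateUi(m)));
        }, TaskCreationOptions.LongRunning);
    }
}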
As I understand it, before the Mango SDK update (7.1) you were only able to access a rather broad network type via the NetworkInterface.NetworkInterfaceType property. This returns an enumeration value such as Wireless80211, MobileBroadbandGSM, or MobileBroadbandCDMA.
After the release of the Mango SDK, we are now able to access the NetworkInterfaceSubtype via an open socket using a call similar to this: socket.GetCurrentNetworkInterface(); A property of the returned object (NetworkInterfaceInfo.InterfaceSubtype) will give you more specific network information such as Cellular_EDGE, Cellular_HSPA, or Cellular_EVDV. This is the information I need.
The most efficient way I have found to access this is to open an async host name resolution request and grab the information in the async callback function, such as below (borrowed from this post: How can I check for 3G, wifi, EDGE, Cellular Networks in Windows Phone 7?):
DeviceNetworkInformation.ResolveHostNameAsync(
    new DnsEndPoint("microsoft.com", 80),
    new NameResolutionCallback(nrr =>
    {
        var info = nrr.NetworkInterface;
        var subType = info.InterfaceSubtype;
    }), null);
What I'm looking for is a way to access this NetworkSubtype information without having to actually open a data connection. The reason I need a passive method for querying this information is that I need to know when the network type changes, but continually opening a data connection in a loop that queries this could potentially prevent that change from taking place.
UPDATE 1: I have found through testing that, as Richard Szalay suggested, the DeviceNetworkInformation.NetworkAvailabilityChanged event does indeed fire when the handset switches network technologies (i.e. 3G to EDGE, or WiFi to HSPA), and you do have access to the NetworkInterfaceSubtype. Unfortunately, I have also found that when switching from WiFi to a cellular network technology (e.g. HSPA, EDGE), the reported network subtype can often be inaccurate. For instance, if you switch from WiFi to HSPA, the event arguments may still report a connection to WiFi when the event fires, and no second event is fired to report HSPA; you are thus given the wrong connection type. This unreliability may make the trigger ultimately useless, but I am going to do some network testing (without WiFi) to see if this issue is confined to WiFi switching. I'm hoping it's just an issue with the WiFi radio, and that cellular network switching is reported accurately. I'll update this when I know more.
UPDATE 2: I have found through a lot of (driving around) testing that while the DeviceNetworkInformation.NetworkAvailabilityChanged event will get you the network changes, it does not seem possible to determine exactly what raises/triggers the event. For instance, if you're recording the network interface each time the event is triggered, you could end up with results like: HSPA, EDGE, EDGE, EDGE, GPRS, GPRS, HSPA. The event argument object has a variable named NotificationType that is supposed to tell you the reason it was triggered, but this is always set to CharacteristicUpdate in my tests, so I have no idea why it is being triggered multiple times for the same network type (e.g. EDGE, EDGE, EDGE). For my purposes, I am just recording the changes that have not already been recorded and ignoring the multiples. It is not ideal (and seems slightly less than trustworthy), but it's better than nothing, I suppose.
I posted the answer you grabbed the code from, and I did a bit of research for that question (including going through the reflected source of the WP7 framework).
Unfortunately, the NetworkInterfaceSubtype is not publicly exposed from any location that is not the result of an open connection, with host name resolution being the simplest.
The only thing I can recommend is doing a test to determine if DeviceNetworkInformation.NetworkAvailabilityChanged is fired when your data type changes (say, from 3G to H). If so, you can perform another resolution at that time (though even that may prove too costly). If not, I'm afraid you're out of luck.
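A minimal sketch of that test (the host name is arbitrary):

DeviceNetworkInformation.NetworkAvailabilityChanged += (s, e) =>
{
    // Re-resolve to get a fresh NetworkInterfaceInfo rather than trusting
    // e.NetworkInterface, which the question's updates suggest can be stale.
    DeviceNetworkInformation.ResolveHostNameAsync(
        new DnsEndPoint("microsoft.com", 80),
        new NameResolutionCallback(nrr =>
            Debug.WriteLine(nrr.NetworkInterface.InterfaceSubtype)),
        null);
};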
Register for DeviceNetworkInformation.NetworkAvailabilityChanged, then get the list of connected NetworkInterfaceSubtypes this way:
var currentList = new NetworkInterfaceList()
    .Where(i => i.InterfaceState == ConnectState.Connected)
    .Select(i => i.InterfaceSubtype);

if (currentList.Contains(NetworkInterfaceSubtype.WiFi))
    Debug.WriteLine("WiFi");

if (currentList.Intersect(new[]
    {
        NetworkInterfaceSubtype.Cellular_EVDO,
        NetworkInterfaceSubtype.Cellular_3G,
        NetworkInterfaceSubtype.Cellular_HSPA,
        NetworkInterfaceSubtype.Cellular_EVDV,
    }).Any())
    Debug.WriteLine("3G");

if (currentList.Intersect(new[]
    {
        NetworkInterfaceSubtype.Cellular_GPRS,
        NetworkInterfaceSubtype.Cellular_1XRTT,
        NetworkInterfaceSubtype.Cellular_EDGE,
    }).Any())
    Debug.WriteLine("2G");
I've written an application which reads from a serial device at a very fast rate. The serial port object, however, fails to fire the DataReceived event after about 20 minutes of continuous operation. Disconnecting and reconnecting the serial port programmatically allows the event to work again, but only for another 20 minutes.
I tried using DiscardInBuffer after every DataReceived event, and this appeared to solve the problem. But the method consumes a lot of CPU time and degrades the application's performance. MSDN says the method "Discards data from the serial driver's receive buffer.", but fails to suggest when it should be used.
When and how should DiscardInBuffer be used, and am I using it in the correct context for my particular problem?
Edit:
After implementing the ErrorReceived event, the event data indicated the error type was RXOver.
After more investigation, it appears my problem was a more fundamental issue. Since data was flooding in at a rapid pace, the SerialPort buffer needed to be cleared or processed continuously to prevent the RXOver error. I achieved this by reading into another buffer during the DataReceived event and processing it on a separate thread.
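A minimal sketch of that hand-off (field and method names are illustrative; Parse stands in for the application's own logic):

private readonly BlockingCollection<byte[]> _chunks =
    new BlockingCollection<byte[]>();

private void Port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var port = (SerialPort)sender;
    int count = port.BytesToRead;
    var buffer = new byte[count];
    port.Read(buffer, 0, count); // empty the driver's buffer quickly
    _chunks.Add(buffer);         // hand off; no parsing on this thread
}

// Runs on a separate worker thread so slow parsing never causes RXOver.
private void ProcessLoop()
{
    foreach (var chunk in _chunks.GetConsumingEnumerable())
        Parse(chunk);
}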
From my understanding, DiscardInBuffer should only be used selectively to clear the contents of the port for initialisation purposes, such as before opening it. Discarding the driver's buffer takes some time to complete, so it should be used sparingly in a performance-oriented application.
Two ideas come to mind. The first (horrible) idea: call DiscardInBuffer every 15 to 20 minutes instead of after every DataReceived event. The second (somewhat better) idea: call DiscardInBuffer when you receive the ErrorReceived event, which you should be handling.
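A minimal sketch of the second idea, assuming port is your SerialPort instance:

port.ErrorReceived += (s, e) =>
{
    // Recover from a receive overrun by discarding the driver's backlog.
    if (e.EventType == SerialError.RXOver)
        ((SerialPort)s).DiscardInBuffer();
};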