My goal was to reduce the time it takes until the application stops trying to connect to a server.
This is the solution I am now using:
(It works, but I want to understand in more detail how it works.)
MongoClientSettings clientSettings = new MongoClientSettings()
{
    Server = new MongoServerAddress(host, port),
    ClusterConfigurator = builder =>
    {
        // The "normal" timeout settings are for something different. This setting is the relevant one
        // for how long it takes until we give up when we cannot connect to the MongoDB instance.
        // https://jira.mongodb.org/browse/CSHARP-1018, https://jira.mongodb.org/browse/CSHARP-1231
        builder.ConfigureCluster(settings => settings.With(serverSelectionTimeout: TimeSpan.FromSeconds(2)));
    }
};
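For completeness, the settings are then used like this (assuming the standard MongoClient constructor overload that takes MongoClientSettings; the database name is just a placeholder):
// Create the client from the settings above; server selection now fails after ~2 seconds
// instead of the default 30 seconds when no server is reachable.
var client = new MongoClient(clientSettings);
var database = client.GetDatabase("test");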
I do not understand exactly what SocketTimeout and ConnectTimeout are then used for.
If I set those to e.g. 3 seconds, it would not make sense for the driver to wait any longer than that, since nothing good can be expected to happen after the socket has timed out, right?
My theory is that ConnectTimeout and SocketTimeout control how long the driver waits for a single server, while serverSelectionTimeout is the timeout for the overall selection process. Is this true?
You can see in ClusterRegistry.cs that ConnectTimeout is passed to TcpStreamSettings.ConnectTimeout, while SocketTimeout is passed to both TcpStreamSettings.ReadTimeout and TcpStreamSettings.WriteTimeout.
Then in TcpStreamFactory.cs you can see how those read and write timeouts are used: they are assigned to NetworkStream.ReadTimeout and NetworkStream.WriteTimeout when the stream used to read/write data over the TCP connection is created.
Now, if we look at the documentation of NetworkStream.ReadTimeout, we see:
This property affects only synchronous reads performed by calling the Read method. This property does not affect asynchronous reads performed by calling the BeginRead method.
But in the Mongo driver those network streams are read asynchronously, which means these timeouts do nothing. The same applies to NetworkStream.WriteTimeout.
So, long story short: SocketTimeout seems to have no effect at all, while ConnectTimeout is used when the TCP connection is established. Exactly how that happens you can see in TcpStreamFactory.cs.
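To illustrate the general idea (this is only a sketch of the usual pattern, not the driver's actual code; host, port and connectTimeout are assumed to be defined elsewhere):
// Minimal sketch of how a connect timeout is typically enforced around a raw TCP connect.
var tcpClient = new TcpClient();
using (var cts = new CancellationTokenSource(connectTimeout))
{
    Task connectTask = tcpClient.ConnectAsync(host, port);
    Task finished = await Task.WhenAny(connectTask, Task.Delay(Timeout.Infinite, cts.Token));
    if (finished != connectTask)
    {
        tcpClient.Close();
        throw new TimeoutException("Timed out connecting to " + host + ":" + port);
    }
    await connectTask; // propagate any connect exception
}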
I'm having an issue where setting a timeout on HttpClient in C# a) interrupts a running download if it is set too low, and b) doesn't fire at all in some situations. I am trying to find a workaround and need some help.
I have a pretty straightforward HttpClient call. The problem is easiest to see when downloading a large file. The code looks like this, and I believe this is the correct usage:
HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMinutes(1);
HttpRequestMessage msg = new HttpRequestMessage(HttpMethod.Get, "<URL>");
HttpResponseMessage response = await client.SendAsync(msg, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
Stream httpStream = await response.Content.ReadAsStreamAsync();
await httpStream.CopyToAsync(fileStream);
Now this works in principle, BUT:
The timeout of 1 minute acts as an overall execution-time timeout: it kills the copy if the file has not finished downloading within that minute, even when the download is progressing just fine up to that point.
If I actually unplug a network cable during the transfer (to simulate a catastrophic failure), the timeout does not fire. I assume that some .Read() method within CopyToAsync simply blocks in that case.
Regarding 1: as far as I know, client.Timeout is converted to a CancellationToken internally (which is why a TaskCanceledException is thrown). That means a) it only works if the underlying operation actually checks for cancellation, and b) the timer is apparently not reset on a successful read, since the whole point seems to be cancelling after a fixed overall timeout.
Regarding 2: In many cases, e.g. if the server isn't there at all or if I kill the server, i.e. if there is a "definite network failure" the client can recognize, I do get a proper exception from this code. But I don't get one for more "problematic" network failures (as simulated by unplugging the network cable from the (wired) server while the (wireless) client still tries to download).
Now, this is easiest to test with CopyToAsync on a large file, but I have no reason to believe it works any differently with a standard GetAsync or PostAsync, which means that with unlucky timing those methods might hang indefinitely as well.
What I would expect the Timeout in HttpClient to do is a) only count from the last successful read/write operation (which it seemingly doesn't; it counts from the start of the operation), and b) fire in all cases, even if the network goes down (which it doesn't either).
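To illustrate what I mean by a), here is a rough sketch of the kind of per-read idle timeout I have in mind: it copies the response stream manually and rearms a CancellationTokenSource after every successful read (the buffer size and the idleTimeout variable are placeholders):
// Sketch only: copy with an idle timeout that is reset after each successful read.
// 'httpStream', 'fileStream' and 'idleTimeout' are assumed to exist in the surrounding code.
var buffer = new byte[81920];
using (var cts = new CancellationTokenSource(idleTimeout))
{
    while (true)
    {
        int read = await httpStream.ReadAsync(buffer, 0, buffer.Length, cts.Token);
        if (read == 0)
            break;                        // end of stream
        await fileStream.WriteAsync(buffer, 0, read, cts.Token);
        cts.CancelAfter(idleTimeout);     // rearm the idle timer after progress
    }
}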
So - what can I do about this? Should I just use another implementation (which?), or should I implement my own timeouts by using a secondary thread which just kills the underlying client/socket/stream?
Thanks.
I'm trying the new 9.0.0 RC1 release of SharpSNMP for its async methods. It's easy to use - practically a drop-in replacement for the old synchronous method.
My code to poll a list of OIDs asynchronously is:
// create a new get request message
var message = new GetRequestMessage(Messenger.NextRequestId, VersionCode.V2, SNMPReadCommunity, oids);
// get a new socket
using (Socket udpSocket = SNMPManager.GetSocket())
{
    // wait for the response (this is async)
    var res = await message.GetResponseAsync(SNMPManager, new UserRegistry(), udpSocket);
    // check the variables we received
    CheckSnmpResults(res.Pdu().Variables);
}
I limit the number of OIDs per get-request to 25. My application connects to c.50 SNMP devices. Every 5 minutes a timer ticks and runs the above code several times in a loop in order to poll c.100 OIDs on each device. All good.
The problem is that the message.GetResponseAsync method is leaking memory. Every poll run adds 6 or 7 MB to my application's memory usage. Using the VS2015 memory profiler, I can see a large number of OverlappedData objects, each 65K, the number of which increases every time I run message.GetResponseAsync. So running this to receive c.200 SNMP get-requests every 5 minutes means my application's memory use quickly rockets.
Am I using message.GetResponseAsync incorrectly somehow? Is this a bug in SharpSNMPLib?
Thanks,
Giles
A temporary answer for now.
The leak is caused by the fact that SocketAsyncEventArgs is not reused. Objects of this kind should be reused (as should the Socket object) if a manager performs multiple operations against an agent.
The current design does not allow such reuse, so an overall redesign is needed.
I already have some ideas on how to move forward, but they probably won't make it into the 9.0 release. 9.5 may be the first release with the new design; I will come back and update this answer then.
Updated: this commit contains a quick fix to dispose the args object. But it does not enable reuse yet.
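To illustrate the difference (a generic sketch, not SharpSNMP's actual code): allocating a new SocketAsyncEventArgs per receive pins a fresh 64K buffer every time, whereas a reused instance keeps a single buffer alive for the lifetime of the socket. 'socket' and 'HandleDatagram' below are placeholders for the manager's socket and processing logic:
// Generic sketch of reusing one SocketAsyncEventArgs for many receive operations
// instead of allocating a new one per call.
var args = new SocketAsyncEventArgs();
args.SetBuffer(new byte[65536], 0, 65536);              // one pinned 64K buffer, reused
args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
Action postReceive = null;
args.Completed += (sender, e) =>
{
    HandleDatagram(e.Buffer, e.BytesTransferred);       // process this datagram
    postReceive();                                      // post the next receive with the SAME args
};
postReceive = () =>
{
    // ReceiveFromAsync returns false when it completed synchronously; in that case
    // the Completed event is not raised and we handle the result here.
    while (!socket.ReceiveFromAsync(args))
        HandleDatagram(args.Buffer, args.BytesTransferred);
};
postReceive();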
I'm trying to implement a basic UDP client. One of its functions is the ability to probe computers to see if a UDP server is listening. I need to scan lots of these computers quickly.
I can't use the Socket.BeginReceiveFrom method with a timeout waiting for it to complete, because callbacks may still occur after the timeout has expired. Since many computers are probed in quick succession, I found that those late callbacks ended up using modified data, because a new probe was already underway by the time the callback was finally invoked.
I can't use the Socket.ReceiveFrom method with Socket.ReceiveTimeout set, because throwing and handling the resulting SocketException takes a long time (not sure why, as I'm not running much code to handle it), meaning it takes about 2 seconds per computer rather than the 100 ms I was hoping for.
Is there any way of running a timeout on a synchronous call to ReceiveFrom without using exceptions to determine when the call has failed/succeeded? Or is there a tactic I've not yet taken that you think could work?
Any advice is appreciated.
I decided to rewrite the probe code using TCP.
However, I later discovered the Socket.ReceiveFromAsync method which, seeing as it only receives a single datagram per call, would have made life easier.
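For reference, a rough sketch of how a single probe could look with Socket.ReceiveFromAsync and a timeout that doesn't rely on exceptions ('target' and 'probePayload' are placeholders for the probed endpoint and the datagram to send):
// Sketch: one probe with a 100 ms timeout, using SocketAsyncEventArgs + TaskCompletionSource
// so no SocketException has to be thrown when nothing answers.
using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
{
    socket.SendTo(probePayload, target);
    var tcs = new TaskCompletionSource<int>();
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(new byte[1024], 0, 1024);
    args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
    args.Completed += (s, e) => tcs.TrySetResult(e.BytesTransferred);
    if (!socket.ReceiveFromAsync(args))                  // completed synchronously
        tcs.TrySetResult(args.BytesTransferred);
    // Whichever finishes first wins: the receive or the 100 ms timeout.
    var finished = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromMilliseconds(100)));
    bool serverIsListening = finished == tcs.Task && tcs.Task.Result > 0;
}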
I've been working on learning to deliver and display data in "real-time" over websockets. Real-time, in this context, just means that the data is being read from a sensor (a camera with image processing) every 0.03 to 1 seconds. Each data point consists of a time and a value (t, v), both encoded as doubles (although the time is always an integer in this case, I'm not assuming it always will be).
The server side uses Alchemy Websockets implementation (C#) as I found it very easy to understand/modify for my purposes.
I'm leveraging the websockets examples found here and here as well as the examples included with Alchemy.
I'm using HighCharts to display the data in "real-time", but I also have it printing to a div for debug purposes (independent example so they don't interfere with each other).
Much of it already works pretty well, but there's an obvious problem when I send data too quickly. To be clear: sending a point every second or two results in a nice graph with no apparent problems; the problems become more pronounced the faster I call the Alchemy server's "send" function.
The data appears to arrive in the wrong order, resulting in an interesting "mangled" effect.
I'm going to start looking into the packet order in the server-side buffer (when a new user connects while the server is already running, the server sends a certain number of "historical" points, which produces a pronounced version of the problem shown above), as well as the client-side receive order, by looking at the timestamps.
The error is inconsistent: each time I reload the page I get a differently "mangled" data set. This makes me suspect the websocket communication itself, or something involving the Alchemy server.
I will attach the full code if necessary, but right now it's rather messy so I am more looking for troubleshooting ideas.
I've gathered this is not expected behavior for a web socket as it is built on TCP.
Any suggestions/ideas for things to look at?
Thanks!
Edit: I ran another test to check how many data points were out of order each time I refreshed the page. The numbers are as follows:
1 2 3 25 6 5 10 11 96 2 8
Very inconsistent (never 0). Certainly interesting!
This result was taken by excluding the charting component and only using websockets and an array to store the data.
Update:
I decided to start analyzing the order things come in, and the client does indeed appear to receive points out of order at random, even with an identical data set. I implemented an "insert" function which takes out-of-order packets into account. The result (plus a little theme change) looks pretty good!
The open question remains: is it expected that a websocket can deliver information out of order, or is there something wrong with my implementation on the server side (or with Alchemy)? I will investigate further when I have time.
SOLUTION!
I figured it out! After a lot of testing, I realized that my Connection object (which is responsible for watching a data set for new data and sending it as is appropriate given how the connection is configured) was implemented using a Timer object. This was something I took from an example (normally I just use the Thread object for most things asynchronous).
As the Timer's interval gets shorter, its Tick callbacks start executing concurrently with previous calls that have not finished yet. This means that, very occasionally, one call to the Tick handler completes a little faster than another (due to the latency in the Alchemy Send function). This causes minor out-of-order problems.
I switched the implementation of the communication loop from a Timer object to a Thread object, thus enforcing synchronization, and the out-of-order packets went away!
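Roughly, the change looks like this (a simplified sketch; the real Connection class does more, and SendNewPoints is a placeholder for the code that calls Alchemy's Send):
// Before (simplified): a timer whose callbacks can overlap once the interval gets short,
// so two ticks can end up sending points concurrently and out of order.
// var timer = new System.Timers.Timer(30);
// timer.Elapsed += (s, e) => SendNewPoints();
// timer.Start();
// After (simplified): one dedicated thread, so sends are strictly sequential.
var worker = new Thread(() =>
{
    while (keepRunning)          // 'keepRunning' is a flag owned by the Connection object
    {
        SendNewPoints();         // placeholder for the code that calls Alchemy's Send
        Thread.Sleep(30);        // poll interval in milliseconds
    }
});
worker.IsBackground = true;
worker.Start();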
Websockets use TCP, and TCP guarantees that data is delivered in order.
I would hope that the browser fires websocket message events in order as well. In my tests, this seemed to be the case.
Using this simple Node app to test:
var sio = require('socket.io')
, http = require('http')
, express = require('express')
, app = express()
, srv = http.createServer(app)
, io = sio.listen(srv)
, fs = require('fs')
, idx = fs.readFileSync('index.html').toString()
;
app.get('/', function(req, res) {
  res.send(idx);
});
var i = 0;
setInterval(function() {
  io.sockets.emit('count', i++);
}, 1);
srv.listen(888);
This simply sends websocket messages as fast as it can with a number that's incremented each message. The client:
<script src="/socket.io/socket.io.js"></script>
<script>
  var last = null;
  var socket = io.connect('/');
  socket.on('count', function(d) {
    // ignore the first message; the server-side counter is already running when we connect
    if (last !== null && d - 1 != last) console.warn('out of order!', last, d);
    last = d;
  });
</script>
Throws a console warning if it receives a message that contains a number that is not one more than the previous message.
In Chrome and Firefox, I saw zero out-of-order messages.
I also tried blocking for a while in the message received event (for (var i = 0; i < 1000000; i++) { }) to simulate work that would cause messages to queue up. Message events still fired in order.
In other words, it's something else in your setup. Most likely, the Alchemy server is actually sending the messages in a different order.
Do not use an object like Timer which has asynchronous callbacks when the task is synchronous. Use a Thread and run the communication loop that way.
I don't know when this issue was posted, but I have a similar problem. When I use the Alchemy client to send small pieces of data there is no problem, and there are plenty of examples for chat services. But when I send a file larger than roughly 4 KB, the problem appears. Trying to find out what happened, I wrote a program that sent the numbers 0-7000 through the Alchemy client and received them via UserContext.DataFrame (in OnReceive): DataFrame.ToString picks up extra "\0\0\0\0" characters at around position 508, and after that position the data ordering is wrong. I used version 2.2.1 from NuGet, and I also read the version 2.0 source on GitHub, but that code does not build, so it is too old to be of any reference value.
I've got a C# program with lots (let's say around a thousand) of open TcpClient objects. I want to enter a state which waits for something to happen on any of those connections.
I would rather not launch a thread for each connection.
Something like...
while (keepRunning)
{
    // Wait for any one connection to receive something.
    TcpClient active = WaitAnyTcpClient(collectionOfOpenTcpClients);
    // One selected connection has incoming traffic. Deal with it.
    // (If other connections have traffic during this function, the OS
    // will have to buffer the data until the loop goes round again.)
    DealWithConnection(active);
}
Additional info:
The TcpClient objects come from a TcpListener.
The target environment will be MS .NET or Mono-on-Linux.
The protocol calls for long periods of idleness while the connection is open.
What you're trying to do is called an Async Pattern in Microsoft terminology. The overall idea is to change all blocking I/O operations to non-blocking ones. If this is done, the application usually needs only as many system threads as there are CPU cores on the machine.
Take a look at Task Parallel Library in .Net 4:
http://msdn.microsoft.com/en-us/library/dd460717%28VS.100%29.aspx
It's a pretty mature wrapper over the plain old Begin/Callback/Context .Net paradigm.
Update:
Think about what you will do with the data after you read it from the connection. In real life you probably have to reply to the client or save the data to a file. In that case you will need some C# infrastructure to contain/manage your logic while still staying within a single thread. TPL provides it to you for free. Its only drawback is that it was introduced in .Net 4, so it's probably not in Mono yet.
Another thing to consider is your connections' lifetime. How often are your connections opened and closed, and how long do they live? This is important because accepting and disconnecting a TCP connection requires a packet exchange with the client (which is asynchronous by nature; moreover, a malicious client may not return ACK (acknowledgment) packets at all). If you think this aspect is significant for your app, you may want to research how to handle it properly in .Net. In WinAPI the corresponding functions are AcceptEx and DisconnectEx. They are probably wrapped in .Net with Begin/End methods, in which case you're good to go. Otherwise you'll probably have to create a wrapper over these WinAPI calls.
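For a rough idea of what this looks like in code (a simplified sketch, not production code; HandleRequest is a placeholder for your protocol logic, and the newer async/await syntax is used for brevity - on .Net 4 you would build the same thing with Task.Factory.FromAsync over the Begin/End methods): each accepted connection gets its own asynchronous read loop, which only occupies a thread pool thread while a read actually completes.
// Sketch: many idle connections, few threads. While a connection is awaiting ReadAsync,
// it consumes no thread at all, so long idle periods are cheap.
var listener = new TcpListener(IPAddress.Any, 12345);    // port is a placeholder
listener.Start();
while (keepRunning)
{
    TcpClient client = await listener.AcceptTcpClientAsync();
    var ignored = Task.Run(async () =>
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await HandleRequest(client, buffer, read); // placeholder for your protocol logic
            }
        }
    });
}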