I have two programs: one server and one client.
In the client it goes something like this:
while (statement)
{
    networkStream.Write(data);
}
And in the server it goes something like this:
while (statement)
{
    while (statement)
    {
        ReceiveData();
    }
    DoOtherStuff();
}
So, while the client can write to the network stream really fast, the server still has to process the data before it can read more.
What happens when, for example, the client has already made four passes through the loop containing the write, while the server has only read once?
Is there a way of letting the client know when it can make another write?
Also, what happens when the client makes several writes? Does the server keep them all and read them all, or does the previously sent data get overwritten?
Hopefully you can understand my question. Edit the question title if you desire.
There is a whole stack of layers between your .write and the actual wires. The layers do all kinds of stuff, from error correction to making sure that your network traffic does not collide with other traffic; not least, they provide you with buffering and blocking (if you sent too much or, on the receiving side, there is no data yet).
To answer your question: somewhere along the line there is a buffer (a chunk of memory) that your bytes are written to. If there is no room for them yet, the call blocks (waits/hangs) until they can be passed along to the next stop on the way. The data travels down the stack on your side and up the stack on the receiving side, with acknowledgements flowing back to you. So when your write returns, you can be reasonably sure that all is well, unless you get back an error code or exception, or however your error handling works.
If the server is slow at processing your requests, it will not get to read as quickly as you'd like, so the buffers between you and it will fill up, at which point writing blocks.
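To make this concrete, here is a minimal sketch of the client side (the host name and port are made up): the loop runs fast at first, and once the buffers between the two machines fill up because the server isn't reading, Write simply blocks until space frees up; nothing gets overwritten or lost.

using System;
using System.Net.Sockets;

class SlowReaderDemo
{
    static void Main()
    {
        // "server-host" and 9000 are placeholders for your real endpoint.
        using (var client = new TcpClient("server-host", 9000))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] data = new byte[8192];
            while (true)
            {
                // Returns as soon as the bytes are buffered; blocks only
                // when the buffers along the way are full.
                stream.Write(data, 0, data.Length);
            }
        }
    }
}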
There are plenty of different buffers on different layers as well; a full answer would be pretty complex. See https://en.wikipedia.org/wiki/OSI_model
Related
I have a server that sends telemetry data of varying size to receivers - or, as I'll call them, clients - via NetworkStream. I aim for maximum speed, so I want minimal delays. The frequency is uncontrolled, so I'd like to use an infinite loop in my clients that uses NetworkStream.Read to read the data, process it, then repeat.
My problem is that sometimes, if two packets are sent very quickly and a client has a slow internet connection, the two packets are received as one continuous stream, resulting in unprocessable data. A half-solution I found (mostly to confirm that this is indeed the error) is to have a small delay after/before each transmission, using System.Threading.Thread.Sleep(100). But not only do I find Sleep a botchy solution, it's also inconsistent: it slows down clients with a good connection, and the problem may persist with an even worse connection.
What I'd like to do is have the server send a gap between each transmission, providing a separation regardless of internet speed, since NetworkStream.Read should finish after the current continuous stream ends. I don't deeply understand the workings of a NetworkStream, and have no idea what a few bytes of empty stream would look like or how this could be implemented. Any ideas?
I would strongly advise changing the protocol instead if you possibly can. TCP is a stream-based protocol, and any attempt to effectively ignore that is likely to be unreliable.
Instead, I'd suggest making it a stream of messages, where each message has a prefix indicating the length of the body. That way it doesn't matter if a particular message is split across multiple packets or received in the same packet as other messages. It also makes reading easier for clients: they know exactly how much data to read, so they can just do that in a simple loop.
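A minimal sketch of that idea, assuming a 4-byte length prefix written with BitConverter (the byte order just has to match on both sides):

using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    // Sender: write the body length, then the body.
    public static void SendMessage(NetworkStream stream, byte[] body)
    {
        byte[] prefix = BitConverter.GetBytes(body.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(body, 0, body.Length);
    }

    // Receiver: read exactly 4 bytes of length, then exactly that many body bytes.
    public static byte[] ReadMessage(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    // Read may return fewer bytes than requested, so loop until the buffer is full.
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}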
If you're concerned that the length prefix will introduce too much overhead (if the data is often small) you could potentially make the protocol slightly more complicated with a single message containing a whole batch of information (multiple telemetry items).
But fundamentally, it's worth assuming that the data will be split into multiple packets, combined again etc. Don't assume that one write operation corresponds to one read operation.
You don't specifically mention the ProtocolType you're using on your NetworkStream but TCP is bound to fail your requirements. Intermediate routers/switches have no way to know your intent is to separate packets by time and will not respect that desire. Furthermore, TCP, being stream oriented, delivers packets in order, and it has error correction against dropped packets and corrupt packets. On any occurrence of one or the other it will hold all further packets until the error packet is retransmitted - and then you'll probably get them all in a bunch.
Use UDP and implement throttling on the receiving (i.e., client) side - throwing out data if you're falling behind.
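A rough sketch of that client-side throttling (the port and the Process handler are placeholders): drain whatever has already queued up and only process the newest datagram, so a slow client skips stale data instead of falling further behind.

using System;
using System.Net;
using System.Net.Sockets;

class ThrottlingReceiver
{
    static void Main()
    {
        var udp = new UdpClient(9000); // placeholder port
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] latest = udp.Receive(ref remote); // blocks for the next packet
            while (udp.Available > 0)                // more already queued?
                latest = udp.Receive(ref remote);    // throw out the older ones
            Process(latest);
        }
    }

    static void Process(byte[] datagram) { /* placeholder for your handling */ }
}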
I am currently building a very small API in PHP. Depending on the data the client is requesting, it can take hours until the data is collected and can be returned. My client is currently a C# program, and it gets a timeout after some time.
Is there a way in PHP to notify the client that the server is still working?
I do not want to increase the client's timeout span.
I do not want to write whitespace to prevent the timeout. This would damage the format of the response (a CSV file) and would require sending the header before being sure that everything worked.
Wikipedia lists the status code 102 Processing, which notifies the client that the server is still working. This is exactly what I need. Does somebody know how to send that without canceling the execution of the script?
If you think I need to do this with threading, I can try that. But it looks like a fair amount of work, and I would prefer a simpler way.
Thanks for reading!
The simplest solution in my opinion is to return a URL that the client can poll to check whether the result is ready.
This post describes precisely how it should behave: http://farazdagi.com/blog/2014/rest-long-running-jobs/
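On the C# side, the client loop could look roughly like this. The URLs are made up, and the status handling assumes the convention from that post (202 Accepted while the job is queued or running, with a Location header pointing at the status resource):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class JobClient
{
    static async Task<string> RunJobAsync(HttpClient http)
    {
        // Kick off the long-running job; the server answers immediately.
        HttpResponseMessage start = await http.PostAsync("https://example.com/api/jobs", null);
        Uri statusUrl = start.Headers.Location;

        // Poll the status resource with short, cheap requests.
        while (true)
        {
            HttpResponseMessage status = await http.GetAsync(statusUrl);
            if (status.StatusCode != HttpStatusCode.Accepted) // done (or failed)
                return await status.Content.ReadAsStringAsync();
            await Task.Delay(TimeSpan.FromSeconds(5)); // polling interval
        }
    }
}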
I was looking for some advice on the best approach to a TCP/IP based server. I have done quite a bit of looking on here and other sites, and can't help thinking that what I have seen is overkill for my purpose.
I have previously written one on a thread-per-connection basis, which I now know won't scale well. What I was thinking instead was that, rather than creating a new thread per connection, I could use a ThreadPool and queue the incoming connections for processing, as time isn't a massive issue (provided they are processed within a minute or two of coming in).
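In code, what I have in mind is roughly this (the port and the HandleClient method are placeholders):

using System.Net;
using System.Net.Sockets;
using System.Threading;

class QueuedServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // placeholder port
        listener.Start();
        while (true)
        {
            // The accept loop stays trivial; each connection is handled
            // on a pool thread rather than a dedicated one.
            TcpClient client = listener.AcceptTcpClient();
            ThreadPool.QueueUserWorkItem(_ => HandleClient(client));
        }
    }

    static void HandleClient(TcpClient client) { /* read device data, reply if needed */ }
}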
The server itself will be used essentially for obtaining data from devices, and will only occasionally have to send a response to the sending device to update settings. (Again, not really time-critical, as the devices are set up to stay connected for as long as they can, and if for some reason a device becomes disconnected, the response can wait until the next time it sends a message.)
What I wanted to know is: will this scale better than the thread-per-connection scenario (I assume it will, due to the thread reuse), and roughly what number of devices could this kind of setup support?
Also, if this isn't deemed suitable, could someone provide a link to or an explanation of the SocketAsyncEventArgs method? I have done quite a bit of reading on the topic and seen examples, but can't quite get my head around the order of events and why certain methods are called when they are.
Thanks for any and all help.
I have read the comments, but could anybody elaborate on these?
Though to be honest, I would prefer the initial approach of rolling my own.
I've been working on learning to deliver and display data in "real-time" over websockets. Real-time, in this context, just means that the data is read from a sensor (a camera with image processing) every 0.03 to 1 seconds. Each data point consists of a time and a value (t, v), which are encoded as doubles (although the time is always an integer in this case, I'm not assuming it will be).
The server side uses the Alchemy Websockets implementation (C#), as I found it very easy to understand and modify for my purposes.
I'm leveraging the websockets examples found here and here as well as the examples included with Alchemy.
I'm using HighCharts to display the data in "real-time", but I also have it printing to a div for debug purposes (independent example so they don't interfere with each other).
Much of it already works pretty well, but there's an obvious problem that happens when I send data too quickly. (To be clear, sending a point every second or two results in a nice graph with no apparent problems; the problems become more pronounced the faster I call the Alchemy server's Send function.)
The data appears to arrive in the wrong order, resulting in an interesting "mangled" effect.
I'm going to start delving into the packet order in the server-side buffer (the server is instructed to send a certain number of "historical" points when a new user connects while it is already running, which produces a pronounced version of the problem) as well as the client-side receive order, by looking at the timestamps.
The error is inconsistent, in that each time I reload the page it results in a differently "mangled" data set. This makes me suspect that the websocket communication, or something in the Alchemy server, is responsible.
I will attach the full code if necessary, but right now it's rather messy so I am more looking for troubleshooting ideas.
I've gathered this is not expected behavior for a web socket as it is built on TCP.
Any suggestions/ideas for things to look at?
Thanks!
Edit: I ran another test to check how many data points were out of order each time I refreshed the page. The numbers are as follows:
1 2 3 25 6 5 10 11 96 2 8
Very inconsistent (never 0). Certainly interesting!
This result was taken by excluding the charting component and only using websockets and an array to store the data.
Update:
I decided to start analyzing the order things come in, and the client does indeed appear to receive points out of order, even with an identical data set. I implemented an "insert" function which takes out-of-order packets into account. The result (plus a little theme change) looks pretty good!
The open question remains: is it expected that a websocket can deliver information out of order? Or is there something wrong with my implementation on the server side (or in Alchemy)? I will investigate further when I have time.
SOLUTION!
I figured it out! After a lot of testing, I realized that my Connection object (which is responsible for watching a data set for new data and sending it as appropriate, given how the connection is configured) was implemented using a Timer object. This was something I took from an example (normally I just use the Thread object for most things asynchronous).
As the Timer fires faster, its Tick callbacks start executing concurrently with previous calls to the Tick function. This means that very occasionally, one call to Tick finishes a little faster than another (due to the latency in the Alchemy Send function), causing minor out-of-order problems.
I switched the implementation of the communication loop from a Timer object to a Thread object, thus enforcing synchronization, and the out-of-order packets went away!
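For reference, the shape of the fix is roughly this (the names are placeholders, not my actual code): a single dedicated thread means the sends can never overlap, unlike Timer ticks.

using System;
using System.Threading;

class TelemetrySender
{
    private volatile bool running = true;

    public void Start(TimeSpan interval)
    {
        var sender = new Thread(() =>
        {
            while (running)
            {
                SendPendingPoints();    // the Alchemy send logic goes here
                Thread.Sleep(interval); // pace the loop; no reentrancy possible
            }
        });
        sender.IsBackground = true;
        sender.Start();
    }

    public void Stop() { running = false; }

    private void SendPendingPoints() { /* placeholder: push new (t, v) points */ }
}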
Websockets use TCP, and TCP guarantees that data is delivered in order.
I would hope that the browser fires websocket message events in order as well. In my tests, this seemed to be the case.
Using this simple Node app to test:
var sio = require('socket.io')
, http = require('http')
, express = require('express')
, app = express()
, srv = http.createServer(app)
, io = sio.listen(srv)
, fs = require('fs')
, idx = fs.readFileSync('index.html').toString()
;
app.get('/', function(req, res) {
  res.send(idx);
});

var i = 0;
setInterval(function() {
  io.sockets.emit('count', i++);
}, 1);
srv.listen(888);
This simply sends websocket messages as fast as it can with a number that's incremented each message. The client:
<script src="/socket.io/socket.io.js"></script>
<script>
  var last = null; // null so the first message doesn't trigger a false warning
  var socket = io.connect('/');
  socket.on('count', function(d) {
    if (last !== null && d !== last + 1) console.warn('out of order!', last, d);
    last = d;
  });
</script>
Throws a console warning if it receives a message that contains a number that is not one more than the previous message.
In Chrome and Firefox, I saw zero out-of-order messages.
I also tried blocking for a while in the message received event (for (var i = 0; i < 1000000; i++) { }) to simulate work that would cause messages to queue up. Message events still fired in order.
In other words, it's something else in your setup. Most likely, the Alchemy server is actually sending the messages in a different order.
Do not use an object like Timer which has asynchronous callbacks when the task is synchronous. Use a Thread and run the communication loop that way.
I don't know when this issue was posted, but I have a similar problem. When I use the Alchemy client to send small amounts of data, there is no problem (there are plenty of examples for chat services). But when I send a file of more than roughly 4 KB, the problem occurs. To find out what was happening, I wrote a program that sent the numbers 0-7000 via the Alchemy client and received them through UserContext.DataFrame (in OnReceive): DataFrame.ToString would gain an extra "\0\0\0\0" at about position 508, and after that position the data ordering would be wrong. I used version 2.2.1 from NuGet. I also read the 2.0 source on GitHub, but it doesn't build; it is too old to be of reference value.
The requirements for the TCP server:

- receive from each client and send the result back to the same client (this is all the server does)
- cater for 100 clients
- speed is an important factor: even at 100 client connections, it should not be laggy
For now I have been using the C# async methods, but I find that I always encounter lag at around 20 connections. By laggy I mean taking almost 15-20 seconds to get the result; at around 5-10 connections the result comes back almost immediately.
When the TCP server gets a message, it interacts with a DLL which does some processing and returns a result. I am not exactly sure of the workflow behind it, but at small scale you do not see any problem, so I thought the problem might be with my TCP server.
Right now I am thinking of using a sync method instead: a while loop blocking on the accept method, spawning a new thread for each client after accept. But at 100 connections that is definitely overkill.
I chanced upon IOCP; I'm not exactly sure about it, but it seems to be like a connection pool, as the way it handles TCP is quite like the normal way.
For these TCP methods I am also not sure whether it is a better option to open and close the connection each time a message needs to be passed. On average, messages are passed from each client at around 5-10 minute intervals.
Another alternative might be to use the web (I'm looking at a generic handler) to form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I need advice especially from those who have done TCP at large scale. I do not have 100 PCs to test with, so it's quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but will consider porting to C++ for speed.
You must be doing it wrong. I personally wrote C#-based servers that could handle 1000+ connections, sending more than 1 message per second, with <10ms response time, on commodity hardware.
If you have such high response times, it must be your server process that is causing blocking: perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread-pool exhaustion. Unfortunately, there are plenty of ways to screw this up, and only a few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, going over the SocketAsyncEventArgs example, which covers the most performant way of writing socket apps in .NET since the advent of the Socket Performance Enhancements in Version 3.5, and so on and so forth.
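For what it's worth, the part of the SocketAsyncEventArgs pattern that trips most people up is that the xxxAsync calls can complete synchronously: they return false in that case and the Completed event is never raised, so you must invoke the handler yourself. A minimal receive sketch (buffer size and the Process stub are placeholders):

using System;
using System.Net.Sockets;

static class Receiver
{
    // Wire-up, once per connected socket.
    public static void Begin(Socket socket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[8192], 0, 8192);
        args.Completed += OnCompleted; // fires only for asynchronous completion
        args.UserToken = socket;
        StartReceive(socket, args);
    }

    static void StartReceive(Socket socket, SocketAsyncEventArgs args)
    {
        // false = completed synchronously; Completed will NOT be raised,
        // so call the handler by hand.
        if (!socket.ReceiveAsync(args))
            OnCompleted(socket, args);
    }

    static void OnCompleted(object sender, SocketAsyncEventArgs args)
    {
        var socket = (Socket)args.UserToken;
        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
            return; // error, or the remote side closed the connection

        Process(args.Buffer, args.Offset, args.BytesTransferred);
        StartReceive(socket, args); // post the next receive
    }

    static void Process(byte[] buffer, int offset, int count)
    {
        /* placeholder for your message handling */
    }
}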
If you find yourself lost in the task ahead (as it seems you may be), I would urge you to embrace an established communication framework, perhaps WCF with a net.tcp binding, and use WCF's declarative service-model programming. This way you'll piggyback on WCF's performance. While that may not be enough for some scenarios, it will get you much further than you are right now with regard to performance.
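If you do go that route, the declarative model is quite compact. A skeletal net.tcp service might look like this (the contract name, address, and body are made up):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMessageService
{
    [OperationContract]
    string Process(string request);
}

public class MessageService : IMessageService
{
    public string Process(string request)
    {
        return "result"; // placeholder: call into your processing DLL here
    }
}

// Hosting (e.g., in a console app or Windows service):
// var host = new ServiceHost(typeof(MessageService));
// host.AddServiceEndpoint(typeof(IMessageService),
//     new NetTcpBinding(), "net.tcp://localhost:8523/messages");
// host.Open();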
I don't see why C# should be any worse than C++ in this situation - chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming that workload for each thread is more I/O bound than CPU intensive. Whether you spawn off a thread per connection or use a thread pool to manage a number of threads is another matter - and something to determine through experimentation and also whilst considering whether 100 clients is your maximum!