Is there something like twisted (python) or eventmachine (ruby) in .net land?
Do I even need this abstraction? I am listening to a single IO device that will be sending me events for three or four analog sensors attached to it. What are the risks of simply using a looped UdpClient? I can't miss any events, but will the IP stack handle the queuing of messages for me? Does all of this depend on how much work the thread tries to do once I receive a message?
What I'm looking for in an abstraction is to remove the complication of threading and synchronization from the problem.
I think you are making it too complicated.
Just have one UDP socket open, and set an async callback on it. For every incoming packet, put it in a queue and set the callback again. That's it.
Make sure that when queueing and dequeueing you take a lock on the queue.
It's as simple as that, and performance will be great.
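A minimal sketch of that approach (the class name, port number and buffer handling below are placeholders, not the asker's actual code):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class SensorListener
{
    private readonly UdpClient _client = new UdpClient(5000);   // example port
    private readonly Queue<byte[]> _queue = new Queue<byte[]>();
    private readonly object _lock = new object();

    public void Start()
    {
        _client.BeginReceive(OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        IPEndPoint remote = null;
        byte[] datagram = _client.EndReceive(ar, ref remote);

        lock (_lock)                                  // lock while enqueueing
        {
            _queue.Enqueue(datagram);
        }

        _client.BeginReceive(OnReceive, null);        // set the callback again
    }

    public bool TryDequeue(out byte[] datagram)
    {
        lock (_lock)                                  // lock while dequeueing
        {
            if (_queue.Count > 0)
            {
                datagram = _queue.Dequeue();
                return true;
            }
            datagram = null;
            return false;
        }
    }
}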
R
I would recommend ICE; it's a communication engine that will abstract threading and communication for you (the documentation is kind of exhaustive).
The problem is that with UDP you are automatically assuming the risk of lost packets. I've read the documentation on ICE (as Steve suggested), and it is very exhaustive. It appears that ICE will work over UDP; however, TCP seems to be preferred by the developers. I gather from the ICE documentation that it does not provide any intensive mechanisms to ensure reliable UDP communications.
It is actually very easy to set up an asynchronous Udp client or server. Your real work comes in checking for complete packets and buffering. The asynchronous implementations should keep you from managing threads.
It sounds like you are looking for reliable multicast - you could try RMF, which will handle the reliability and deliver the messages using async callbacks from the incoming message queue. IBM also offers WebSphere, which has a UDP component. EmCaster is also an option - however, development seems to have stopped back in 2008.
If you aren't going to be transmitting these packets (or events) to other machines you might just want to use something simple like memory mapped files or other forms of IPC.
I have created a simple server that currently uses a TCP socket for all my packeting needs. If some data transfer is best used with TCP and others with UDP, what is the most effective way to use them in tandem? Can you simply have 2 sockets? I cannot find example code that shows best practices for this, and it seems like something that would be difficult to debug if done improperly. Thanks!
There is nothing to prevent you from having 2 listening sockets, one each for TCP and UDP. However, the communication method is generally chosen in advance for a given application. For example, the syslog protocol is almost always implemented only with UDP. HTTP on the other hand is pretty much always implemented only over TCP. There are a few protocols that often support both (NTP, DNS are examples).
It would be extremely rare to find a single application that allows a single logical data stream to use both at the same time, and it would be nigh on impossible to ever get that working reliably.
But otherwise if you do support both mechanisms, debugging is straightforward because each can be treated and debugged in isolation.
TCP is far easier to use if you require reliable sequenced delivery -- and for most applications the obvious choice. Every byte sent via TCP is guaranteed to be delivered in the order you send it (or you will receive a distinct error notification), and the peer operating systems between the two machines will cooperate in attempting retries as needed in case any packets are dropped en route. A lot of stuff you don't need to worry about here. Downsides are: (a) some increased overhead, and (b) "message" boundaries are not respected (that is, TCP delivers a stream of bytes; the receiver won't necessarily get them in the same discrete chunks in which they were sent so you must impose message boundaries yourself).
UDP doesn't guarantee delivery. That is, packets may still be dropped, but with no notice to the sender and it is your (i.e. your application's) responsibility to handle that appropriately. Likewise, it is possible for packets to show up in a different order than you sent them (due to different routing paths, for example). On the other hand, the packet you send is the packet you receive so your message boundaries remain intact.
So UDP is often chosen for short "one-shot" or single message notifications, where it's easy to set a timeout at application level, or where each message stands alone and a lost message is non-critical. While TCP is usually the better choice when there is a long-lasting connection or large continuous data streams need to be sent (file transfers and so forth).
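As an aside on the TCP "message boundaries" point: one common convention is to add a length prefix yourself. A rough sketch (the 4-byte prefix and helper names below are just one illustrative choice, not part of any particular protocol):

using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    public static void WriteMessage(NetworkStream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(payload.Length);  // 4-byte length prefix
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadMessage(NetworkStream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(prefix, 0);
        return ReadExactly(stream, length);
    }

    // TCP may hand you fewer bytes than requested, so loop until the full chunk arrives.
    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}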
Unless you use a raw socket, you must have two separate sockets for UDP and TCP communication.
TCP and UDP are independent OSI Level 4 protocols.
If you use a socket with default options, it's most likely a TCP socket.
You specify the protocol you want to use when you create a socket - SOCK_STREAM is for TCP and SOCK_DGRAM is for UDP.
C# has more readable options (SocketType.Stream and SocketType.Dgram) instead of SOCK_STREAM and SOCK_DGRAM.
If you do use raw sockets, you can distinguish between protocols by parsing IP headers.
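A minimal sketch of setting up the two listening sockets side by side (the port numbers are arbitrary):

using System.Net;
using System.Net.Sockets;

// TCP listener: connection-oriented, the SOCK_STREAM equivalent.
var tcpListener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
tcpListener.Bind(new IPEndPoint(IPAddress.Any, 5000));
tcpListener.Listen(10);

// UDP socket: datagram-oriented, the SOCK_DGRAM equivalent. The same port number
// is fine because the TCP and UDP port spaces are independent.
var udpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udpSocket.Bind(new IPEndPoint(IPAddress.Any, 5000));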
I'm looking to create some kind of a gateway that sits between a server and a client.
The so-called gateway is supposed to filter some packets sent by the client and forward 99% of them.
Here are my questions:
Client opens a new socket to the gateway; I open a socket to the server from the gateway and store it in a list for further use. However, in one situation, all the connections will come from the same IP, thus leaving me with limited options on choosing the socket that should forward the packet to the server. How can I differentiate between opened sockets?
From previous situations, i'm expecting about 500 clients sending a packet every second. Performance wise, should i use a multithread model, or stick with a single thread application?
Still a performance question: I have to choose between C# and Python. Which one should give better performance?
Socket addresses are a host and port, not just a host. And different connections from the same host will always have different ports. (I'm assuming TCP, not UDP, here.)
Or, even more simply, you can just compare the file descriptors (or, in Python, the socket objects themselves) instead of their peer addresses.
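In C#, for instance, a sketch of keeping connections keyed by their remote endpoint might look like this (the class and method names are made up):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class Gateway
{
    // Keyed by remote endpoint (IP *and* port), so several clients behind the
    // same IP address still map to distinct entries.
    private readonly Dictionary<EndPoint, Socket> _clients = new Dictionary<EndPoint, Socket>();

    public void OnAccepted(Socket clientSocket)
    {
        _clients[clientSocket.RemoteEndPoint] = clientSocket;
    }
}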
Meanwhile, for performance, on most platforms, 500 is nearing the limits of what you can do with threads, but not yet over the limits, so you could do it that way. But I think you'll be better off with a single-threaded reactor or a proactor with a small thread pool. Especially if you can use a preexisting framework like twisted or gevents to do the hard part for you.
As for the language, for just forwarding or filtering packets, the performance of your socket multiplexing will be all that matters, so there's no problem using Python. Pick a framework you like from either language, and use that language.
Some last side comments: you do realize that TCP packets aren't going to match up with messages in your higher level protocol(s), right? Also, this whole thing would probably be a lot easier to do, and more efficient, with a Linux or BSD box set up as a router so you don't have to write anything but the filters.
I'm creating a UDP server that needs to receive UDP packets from various clients, and then forward them to other clients. I'm using C# so each UDP socket is a UdpClient for me. I think I want 2 UdpClient objects, one for receiving and one for sending. The receiving socket will be bound to a known port, and the sender will not be bound at all.
The server will get each packet, lookup the username in the packet data, and then based on a routing list the server maintains, it will forward the packet to 1 or more other clients.
I start the listening UdpClient with:
UdpClient udpListener = new UdpClient(new IPEndPoint(ListenerIP, UdpListenerPort));
udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
My callback looks like:
void OnUDPRead(IAsyncResult ar)
{
    UdpClient udpListener = (UdpClient)ar.AsyncState;
    try
    {
        IPEndPoint remoteEndPoint = null;
        byte[] packet = udpListener.EndReceive(ar, ref remoteEndPoint);
        // Get connection based on data in packet
        // Get connections routing table (from memory)
        // ***Send to each client in routing table***
    }
    catch (Exception ex)
    {
    }
    udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
}
The server will need to process 100s of packets per second (this is for VOIP). The main part I'm not sure about, is what to do at the "Send to each client" part above. Should I use UdpClient.Send or UdpClient.BeginSend? Since this is for a real-time protocol, it wouldn't make sense to have several sends pending/buffered for any given client.
Since I only have one BeginReceive() posted at any given time, I assume my OnUDPRead function will never be called while already executing. Unless UdpClient works differently than TcpClient in this area? Since I don't give it a single buffer (I don't give it any buffer), I suppose it could fire OnUDPRead several times with only one BeginReceive posted?
I don't see any reason why you can't use one socket, the straightforward way. Async I/O doesn't appear to me to offer any startling benefit in this situation. Just use non-blocking mode. Receive a packet, send it. If send() causes EAGAIN or EWOULDBLOCK or whatever the C# manifestation of that is, just drop it.
In your plan, the sending socket will be bound automatically the first time you call send(). Clients will reply to that port. So it needs to be a port you are reading on. And you already have one of those.
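A rough sketch of that single-socket, non-blocking idea; SocketError.WouldBlock is the C# manifestation of EWOULDBLOCK/EAGAIN, and the class and port handling here are illustrative only:

using System.Net;
using System.Net.Sockets;

class Forwarder
{
    private readonly Socket _socket;

    public Forwarder(int port)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));   // clients reply to this port
        _socket.Blocking = false;                            // non-blocking mode
    }

    public void Forward(byte[] packet, EndPoint destination)
    {
        try
        {
            _socket.SendTo(packet, destination);
        }
        catch (SocketException ex) when (ex.SocketErrorCode == SocketError.WouldBlock)
        {
            // Send buffer full: just drop the packet, as suggested above.
        }
    }
}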
It very much depends on scale - i.e. how many concurrent RTP streams you want to relay. I have developed exactly this solution as part of a media gateway for our SIP switch. Despite initial reservations, I have it working very efficiently using C# (1000 concurrent streams on a single server, 50K datagrams per second using 20ms PTime).
Through trial-and-error, I determined the following:
1) Using Socket.ReceiveFromAsync is much better than using synchronous approaches, since you don't need individual threads for each stream. As packets are received, they are scheduled through the threadpool.
2) Creating/disposing the SocketAsyncEventArgs structures takes time - better to allocate a pool of structures at startup and 'borrow' them from the pool (see the sketch after this list).
3) Similarly for any other objects required during processing (e.g. a RTPPacket class). Allocating these dynamically at such a high rate results in GC issues pretty quickly with any sort of real load. If you avoid all dynamic allocation, and instead use pools of objects you manage yourself, this issue goes away.
4) If you need to do any clocking to co-ordinate input and output streams, transcoding, mixing or file playback, don't try to use the standard .NET timers. I wrapped the Win32 Multimedia timers from winmm.dll - these were much more precise.
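As a concrete but simplified illustration of points 1 and 2, a receive loop built around a pre-allocated pool of SocketAsyncEventArgs might look roughly like this; the class and method names are invented for the example:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;

class RtpReceiver
{
    private readonly Socket _socket;
    private readonly ConcurrentBag<SocketAsyncEventArgs> _pool = new ConcurrentBag<SocketAsyncEventArgs>();

    public RtpReceiver(int port, int poolSize = 64)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));

        // Allocate the pool once at startup instead of per packet (point 2).
        for (int i = 0; i < poolSize; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[2048], 0, 2048);
            args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
            args.Completed += OnReceiveCompleted;
            _pool.Add(args);
        }
    }

    public void Start()
    {
        // Keep several receives outstanding; completions run on threadpool threads.
        while (_pool.TryTake(out var args))
            PostReceive(args);
    }

    private void PostReceive(SocketAsyncEventArgs args)
    {
        // ReceiveFromAsync returns false when the operation completed synchronously.
        if (!_socket.ReceiveFromAsync(args))
            OnReceiveCompleted(_socket, args);
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
    {
        if (args.SocketError == SocketError.Success)
            HandlePacket(args.Buffer, args.BytesTransferred, (IPEndPoint)args.RemoteEndPoint);

        // Re-post the same args object: no per-packet allocation, no GC pressure.
        PostReceive(args);
    }

    private void HandlePacket(byte[] buffer, int length, IPEndPoint from)
    {
        // Parse the RTP header, hand the payload to the right stream, etc.
    }
}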
I ended up with an architecture where components could expose IRTPSource and IRTPSink interfaces, which could then be hooked-up to create the graph required. I created RTP input/output components, mixer, player, recorder, (simple) transcoder, playout buffer etc. - and then wired them up using these two interfaces. The RTP packets are then clocked through the graph using the MM timers.
C# proved to be a good choice for what would typically be approached using C or C++.
Mike
What EJP said.
Most all the literature on scaling socket servers with async IO is written with TCP scenarios in mind. I did a lot of research last year on how to scale a UDP socket server, and found that IOCP/epoll wasn't as appropriate for UDP as it is for TCP. See this answer here.
So it's actually quite simple to scale a UDP socket server just using multiple threads rather than trying to do any sort of asynchronous I/O thing. And oftentimes, one single thread pumping out recvfrom/sendto calls is good enough.
I would suggest that the server have one synchronous socket per client. Then have several threads do a Socket.Select call on a different subset of client sockets. I suppose you could have a single socket for all clients, but then you'll get some out-of-ordering problems if you try to process audio packets on that socket from multiple threads.
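A rough sketch of that idea, with each worker thread servicing its own list of sockets (the buffer size and timeout are arbitrary):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class SubsetWorker
{
    // Run one of these per thread, each with its own subset of client sockets.
    public void ServiceSubset(List<Socket> mySockets)
    {
        var buffer = new byte[2048];
        while (true)
        {
            // Select modifies the list in place, so pass a copy each iteration.
            var readable = new List<Socket>(mySockets);
            Socket.Select(readable, null, null, 10000);   // timeout in microseconds (10 ms)

            foreach (var s in readable)
            {
                EndPoint from = new IPEndPoint(IPAddress.Any, 0);
                int len = s.ReceiveFrom(buffer, ref from);
                // Process this client's audio packet here; ordering is preserved
                // because the same thread always services this socket.
            }
        }
    }
}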
Also consider using the lower-level Socket class instead of UdpClient.
But if you really want to scale out - you may want to do the core work in C/C++. With C#, more layers of abstraction on top of winsock means more buffer copies and higher latency. There are techniques by which you can use winsock to forward packets from one socket to another without incurring a buffer copy.
A good design would be not to re-invent the wheel :)
There are already lots of RTP media proxy solutions: Kamailio, FreeSWITCH, Asterisk, and some other open-source projects. I have great experience with FreeSWITCH, and would highly recommend it as a media proxy for SIP calls.
If your signalling protocol is not SIP, there are some solutions for other protocols as well.
I'm thinking of the methods games like Counter-Strike, WoW etc. use. In CS you often have just 50 ping; is there any way to send information to an online MySQL database at that speed?
Currently I'm using an online PHP script which my program requests, but this is really slow, because the program first has to send headers and post-information to it, and then retrieve the result as an ordinary webpage.
There really has to be an easier, faster way of doing this? I've heard about TCP/IP; is this what I should use here? Is it possible for it to connect to the database in a faster way than indirectly via the PHP script?
The TCP/IP suite includes, among others, three protocols relevant here:
TCP
UDP
ICMP
ICMP is what you are using when you ping another computer on a network.
Games, like CounterStrike, don't care about what you previously did. So there's no requirement for completeness, to be able to reconstruct what you did (which is why competitors have to tape what they are doing). This is what UDP is used for - there's no guarantee that data is delivered or received. Which is why lag can be such a problem - you're already dead, you just didn't know it.
TCP guarantees that data is delivered, in the order it was sent, or that you're told it wasn't. It is slower than UDP.
There are numerous things to be aware of to have a fast connection - less hops, etc.
Client-to-server for latency-critical stuff? Use non-blocking UDP.
For reliable stuff that can be a little slower, if you use TCP make sure you do so in a non-blocking fashion (select(), non-blocking send, etc.).
The big reason to use UDP is if you have time-sensitive data - if the position of a critter gets dropped, you're better off ignoring it and sending the next position packet rather than re-sending the last one.
And I don't think any high-performance game has each and every call resolve to a call to the database. It's more common to (if a database is even used) persist data occasionally, or at important events.
You're not going to implement Counterstrike or anything similar on top of http.
Most games like the ones you cite use UDP for this (one of the TCP/IP suite of protocols). UDP is chosen over TCP for this application since it's lighter weight, allowing for better performance, and TCP's reliability features aren't necessary.
Keep in mind though, those games have standalone clients and servers usually written in C or C++. If your application is browser-based and you're trying to do this over HTTP then use a long-lived connection and strip back the headers as much as possible, including cookies. The Tornado framework may be of interest to you there. You may also want to look into HTML5 WebSockets however widespread support is still a fair way off.
If you are targeting a browser-based plugin like Flash, Java, SilverLight then you may be able to use UDP but I don't know enough about those platforms to confirm.
Edit:
Also worth mentioning: once your networking code and protocol is sufficiently optimized there are still things you can do to improve the experience for players with high pings.
I'm an embedded programmer trying to do a little bit of coding for a communications app and need a quick start guide on the best / easiest way to do something.
I'm successfully sending serial data packets but need to implement some form of send/response protocol to avoid overflow on the target system and to ensure that the packet was received OK.
Right now - I have all the transmit code under a button click and it sends the whole lot without any control.
What's the best way to structure this code, i.e. sending some packets, waiting for a response, sending more, and so on until it's all done, then carrying on with the main program?
I've not used threads or callbacks or suchlike in this environment before but will learn - I just need a pointer to the most straightforward ways to do it.
Thanks
Rob
The .NET SerialPort uses buffers; learn to work with them.
Sending packets that are (far) smaller than the Send-buffer can be done w/o threading.
Receiving can be done by the DataReceived event but beware that that is called from another thread. You might as well start your own thread and use blocking reads from there.
The best approach depends on what your 'packets' and protocol look like.
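For example, a bare DataReceived sketch might look like this (the port name and baud rate are placeholders):

using System.IO.Ports;

var port = new SerialPort("COM1", 115200);    // placeholder port name and baud rate
port.DataReceived += (sender, e) =>
{
    var sp = (SerialPort)sender;
    int available = sp.BytesToRead;
    var buffer = new byte[available];
    sp.Read(buffer, 0, available);
    // Hand the bytes to your packet parser; note this runs on a threadpool
    // thread, so marshal to the UI thread before touching any controls.
};
port.Open();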
I have long experience with serial comms, both MCU- and PC-based.
I strongly advise against the single-threaded solution; although it is very straightforward for quick testing, it is absolutely out of the question for final releases.
You can choose among several patterns, but they are mostly shaped around a dedicated thread for the comm process and a finite state machine to parse the protocol (during receiving).
The previous answers give you an idea of how to build a simple program, but a lot depends on the protocol specification, the target device, the scope of the application, etc.
There are of course different ways.
I will describe a thread-based and an async-operation-based way:
If you don't use threads, your app will block as long as the operation is running. This is not what a user expects today. Since you are talking about a series of send and receive commands, I would recommend starting the protocol as a thread and then waiting for it to finish. You might also place an Abort button if necessary. Set the ReadTimeout values and at every receive be ready to catch the exception! An introduction to creating such a worker thread is here
If you want to, use async Send/Receive functions instead of a thread (e.g. NetworkStream.BeginRead etc.). But this is more difficult because you have to manage state between the calls, so I recommend using a finite state machine. In effect, you create an enumeration (e.g. ProtocolState) and change the state whenever an operation has completed. You can then write a function that performs the next step of the protocol with a simple switch/case statement. Since you are working with a remote entity (in your case the serial target system), you always have to consider that the device is not working, or stops working during the protocol. Handle this with a timeout timer (e.g. set to 2000 ms): start it after sending each command (assuming each command gets a reply in your protocol), and stop it if the command completed successfully or on timeout.
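A bare-bones illustration of that state-machine idea; the states, events and helper methods below are invented purely for the example:

enum ProtocolState { Idle, WaitingForAck, Done, Failed }
enum ProtocolEvent { Start, AckReceived, Timeout }

class ProtocolDriver
{
    private ProtocolState _state = ProtocolState.Idle;

    // Call this whenever something happens: a send completes, a reply arrives,
    // or the timeout timer fires.
    public void Step(ProtocolEvent evt)
    {
        switch (_state)
        {
            case ProtocolState.Idle:
                if (evt == ProtocolEvent.Start)
                {
                    SendNextBlock();
                    StartTimeoutTimer(2000);          // 2000 ms, as suggested above
                    _state = ProtocolState.WaitingForAck;
                }
                break;

            case ProtocolState.WaitingForAck:
                if (evt == ProtocolEvent.AckReceived)
                {
                    StopTimeoutTimer();
                    if (MoreBlocksToSend())
                    {
                        SendNextBlock();
                        StartTimeoutTimer(2000);      // stay in WaitingForAck
                    }
                    else
                    {
                        _state = ProtocolState.Done;
                    }
                }
                else if (evt == ProtocolEvent.Timeout)
                {
                    _state = ProtocolState.Failed;    // device not responding
                }
                break;
        }
    }

    // Placeholders for the real serial I/O and timer plumbing.
    void SendNextBlock() { }
    bool MoreBlocksToSend() { return false; }
    void StartTimeoutTimer(int ms) { }
    void StopTimeoutTimer() { }
}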
You could also implement low-level handshaking on the serial port; set the serial port's Handshake property to RTS/CTS or XON/XOFF.
Otherwise (or in addition), use a background worker thread. For simple threads, I like a Monitor.Wait/Pulse mechanism for managing the thread.
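For example, a minimal Monitor.Wait/Pulse send queue might look like this (the names are illustrative):

using System.Collections.Generic;
using System.Threading;

class SendWorker
{
    private readonly Queue<byte[]> _outgoing = new Queue<byte[]>();
    private readonly object _gate = new object();

    public void Enqueue(byte[] packet)
    {
        lock (_gate)
        {
            _outgoing.Enqueue(packet);
            Monitor.Pulse(_gate);        // wake the worker
        }
    }

    public void Run()                    // run this on a background thread
    {
        while (true)
        {
            byte[] packet;
            lock (_gate)
            {
                while (_outgoing.Count == 0)
                    Monitor.Wait(_gate); // releases the lock while waiting
                packet = _outgoing.Dequeue();
            }
            // Write 'packet' to the serial port and wait for the response here.
        }
    }
}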
I have some code that does read-only serial communications in a thread; email me and I'll be happy to send it to you.
I wasn't sure from your question if you were designing both the PC and embedded sides of the communication link, if you are you might find this SO question interesting.