Pcap.Net vs SharpPcap - C#

I just want to listen on a network device, capture packets, and write them to a dummy file. I also need to filter packets while listening, so that I only write the packets that pass the filter. I need to do this in C# on .NET. Those are my requirements, so which library should I use? A high transfer rate and minimal packet loss are really important.
Thanks for reading.

As the author of SharpPcap I can say that you'll be able to perform all of those operations with the library. Performance was a critical design goal.
Packet.Net is the library bundled with SharpPcap for packet dissection and generation, and it can parse a wide range of packet types. Its architecture uses lazy evaluation wherever possible in order to be as fast as possible.
Performance is tricky, especially because network packet capture is often a lower-priority task for an operating system. The faster your application handles each packet, the more packets can be handled without drops. I've been able to capture 3 MB/s of packets without any drops. I haven't tried higher data rates or written extensive tests to generate and capture data in order to evaluate performance. Tests and real-world results are welcome data points to add to the documentation and website, though.
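For what it's worth, a minimal capture-filter-and-dump sketch with SharpPcap might look like the example below. Exact names vary between SharpPcap versions (newer releases use DeviceModes and require opening the file writer explicitly), and the device index, filter string, and file name are placeholders.

```csharp
using System;
using SharpPcap;
using SharpPcap.LibPcap;

class CaptureToFile
{
    static CaptureFileWriterDevice writer;

    static void Main()
    {
        // Pick a capture device (index 0 is just a placeholder).
        var device = CaptureDeviceList.Instance[0];
        device.Open(DeviceMode.Promiscuous, 1000);

        // BPF filter: only packets that pass this expression are delivered.
        device.Filter = "tcp port 80";

        // Dump every delivered packet into a pcap file.
        writer = new CaptureFileWriterDevice("dump.pcap");
        device.OnPacketArrival += (sender, e) => writer.Write(e.Packet);

        device.StartCapture();
        Console.WriteLine("Capturing; press Enter to stop.");
        Console.ReadLine();

        device.StopCapture();
        device.Close();
        writer.Close();
    }
}
```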

The entire functionality is available in Pcap.Net.
Pcap.Net uses C++/CLI to wrap WinPcap, which is considered more efficient than PInvoke.
The packet library in Pcap.Net is quite big, and complex packets can be parsed and created, including recursive layers like IP over IP. Each layer is parsed lazily, and only when you need it.
For your needs, I see only benefits of using Pcap.Net over SharpPcap.
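A comparable capture-filter-and-dump sketch with Pcap.Net, following the pattern from its tutorial, might look like this (the device index, filter, and file name are placeholders):

```csharp
using System.Collections.Generic;
using PcapDotNet.Core;

class CaptureToFile
{
    static void Main()
    {
        // Pick a capture device (the first device is just a placeholder).
        IList<LivePacketDevice> allDevices = LivePacketDevice.AllLocalMachine;
        PacketDevice device = allDevices[0];

        // 65536 guarantees the whole packet is captured on all link layers.
        using (PacketCommunicator communicator =
                   device.Open(65536, PacketDeviceOpenAttributes.Promiscuous, 1000))
        using (PacketDumpFile dumpFile = communicator.OpenDump("dump.pcap"))
        {
            // Only packets matching the BPF filter are delivered.
            communicator.SetFilter("tcp port 80");

            // 0 = capture indefinitely; dump every matching packet to the file.
            communicator.ReceivePackets(0, dumpFile.Dump);
        }
    }
}
```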

Related

RabbitMQ transfer rates speed up?

I'm looking for ideas on how to speed up message transfers through RabbitMQ.
I installed the latest version on 64-bit Windows, running a server on my local machine, to which I also publish and from which I consume through a C# implementation. I initially maxed out at 40,000 messages per second, which is impressive but does not suit my needs (I am competing with a custom binary reader which can handle 24 million unparsed 16-byte byte arrays per second; obviously I don't expect to get close to that, but I am at least trying to improve). I need to send around 115,000,000 messages as fast as possible. I do not want to persist the data, and the connection is going to be direct to one single consumer. I then built chunks out of my 16-byte arrays and published those onto the bus, without any improvement: the transfer rate maxed out at 45 MB/second. I find this very slow, given that in the end it should boil down to raw transmission speed, because I could create byte arrays several megabytes in size, at which point the cost of routing by the exchange becomes negligible compared with raw transmission speed. Why does my message bus max out at a 45 MB/second transfer speed?
Bump and update: I have not seen any answer to this question in quite a while, and I am a bit surprised that not a single RabbitMQ developer chimed in. I played extensively with RabbitMQ and ZeroMQ, and I decided that RabbitMQ is not up to the task when looking at high-throughput in-process messaging solutions. The broker implementation, and especially the parsing logic, is a major bottleneck to improving throughput, so I dropped RabbitMQ from my list of possible options.
There was a white paper describing how they provided a solution for managing low-latency, high-throughput financial data streams, but it sounds to me as if all they did was throw hardware at the problem rather than design specifically for low-latency, high-throughput requirements.
ZeroMQ did a superb job once I studied the documentation more intensively. I can run communication in-process, and it provides stable push/pull, pub/sub, req/rep, and pair patterns, which are the ones I need. I was looking for blocking behavior within the pub/sub pattern, which ZeroMQ does not provide (it drops messages instead when a high-water mark is exceeded), but the push/pull pattern does block. So pretty much everything I needed is provided. My only gripe is with their approach to event processing; the event structure implemented through poll/multiplexing is not very satisfactory.
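To illustrate the push/pull pattern mentioned above, here is a small sketch assuming a recent NetMQ (4.x) binding for .NET; the original post doesn't say which C# binding was used, and the endpoint, message size, and count are arbitrary.

```csharp
using System;
using System.Threading.Tasks;
using NetMQ;
using NetMQ.Sockets;

class PushPullDemo
{
    static void Main()
    {
        const string endpoint = "tcp://127.0.0.1:5556"; // arbitrary local endpoint
        const int messageCount = 100000;

        using (var push = new PushSocket("@" + endpoint))      // "@" = bind
        {
            // Consumer: connect and pull every frame.
            var consumer = Task.Run(() =>
            {
                using (var pull = new PullSocket(">" + endpoint))  // ">" = connect
                {
                    for (int i = 0; i < messageCount; i++)
                        pull.ReceiveFrameBytes();
                }
            });

            // Producer: push 16-byte frames. SendFrame blocks once the send
            // high-water mark is reached instead of silently dropping messages.
            var payload = new byte[16];
            for (int i = 0; i < messageCount; i++)
                push.SendFrame(payload);

            consumer.Wait();
        }

        Console.WriteLine("done");
    }
}
```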

Need C# client to connect to server with low latency bi-direction communication

I'm in the process of learning C# and just need pointing in the right direction. I want to build a client in C# that communicates with a server running PHP/MySQL. There will need to be almost constant communication between the two. It is for a game, so I need low-latency, bi-directional communication. I'm not looking for an exact how-to, but rather for which method I should use to connect the two for the fastest and most reliable connection. I have read that others use XML, but that seems like it would be slow if used near-constantly, say once or more per second, though I could be totally wrong. Thanks in advance!
Normally, communication with those characteristics is done over a persistent TCP connection. C# offers lots of ready-to-use functionality in the System.Net.Sockets namespace (start with TcpClient and TcpListener; there are also lower-level interfaces if you need them).
The important question here is: what exactly do you mean by a "server running PHP"? If the server only offers an HTTP interface, then you will find it more natural to communicate not with sockets but with the WebClient or the lower-level HttpWebRequest classes instead.
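As a rough illustration of the persistent-TCP approach (the port number and message are arbitrary), a minimal echo exchange with TcpListener/TcpClient might look like this:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class TcpEchoDemo
{
    static void Main()
    {
        const int port = 9050; // arbitrary port for the example

        // Server: listen, accept one client, and echo whatever it sends.
        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();
        var server = Task.Run(() =>
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[1024];
                int read = stream.Read(buffer, 0, buffer.Length);
                stream.Write(buffer, 0, read);
            }
            listener.Stop();
        });

        // Client: connect, send a message, read the echo back.
        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, port);
            using (NetworkStream stream = client.GetStream())
            {
                byte[] message = Encoding.UTF8.GetBytes("hello");
                stream.Write(message, 0, message.Length);

                var reply = new byte[1024];
                int read = stream.Read(reply, 0, reply.Length);
                Console.WriteLine(Encoding.UTF8.GetString(reply, 0, read));
            }
        }

        server.Wait();
    }
}
```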
Ah, writing a game in C# as a means to get started with the language! How many have started this way.
Have you defined your client-server protocol yet? I'm not talking about TCP vs. UDP, which TomTom and Jon have discussed. I mean, what is the data stream going to look like?
Packet fragmentation is the enemy of low-latency network code. Learn about MTU and packet fragmentation, Nagle's algorithm, etc. and write down some notes for later when you implement the network code. Make sure you calculate the smallest size packet you would be interested in sending, how big its headers might be, and how large of a payload you can fit into that packet. Then see if you can come up with a protocol that uses the available space efficiently.
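As a purely hypothetical illustration (the opcode and field layout below are invented for the example), a small length-prefixed binary frame built with BinaryWriter shows how compact such a message can be compared with XML:

```csharp
using System.IO;

// Hypothetical game message: 2-byte length prefix + 1-byte opcode + position data.
static class Wire
{
    public static byte[] EncodeMove(byte opcode, float x, float y)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((ushort)(1 + 4 + 4)); // payload length prefix
            w.Write(opcode);              // 1-byte message type
            w.Write(x);                   // 4-byte float
            w.Write(y);                   // 4-byte float
            return ms.ToArray();          // 11 bytes total
        }
    }

    public static void DecodeMove(byte[] frame, out byte opcode, out float x, out float y)
    {
        using (var r = new BinaryReader(new MemoryStream(frame)))
        {
            ushort payloadLength = r.ReadUInt16(); // length prefix
            opcode = r.ReadByte();
            x = r.ReadSingle();
            y = r.ReadSingle();
        }
    }
}
```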
You may gain a lot more by optimizing your server application and/or porting it to a different language. Just because you CAN use PHP for everything server-side doesn't mean you SHOULD. Keep the part that shows useful information in a web browser, and evaluate whether you should rewrite the time-critical, game-client communication parts in another language. Interpreted languages are not especially well known for their speed when crunching real-time game-world data. Sure, I once wrote something like that in Perl using POE, but ultimately it was a lot less performant than the C code I was mimicking.
Finally, I would recommend you look into XNA, since it already has a lot of this built in.

Network Programming Low Level or Class Abstraction?

I see a lot of questions on the topic of network programming. Despite all the questions and answers, I still do not know which way is best to start. Is it better to start from the lowest level, or to work immediately in C# on .NET without going into the details below the abstraction? Is it better to go with Winsock, or with BSD socket programming on Linux?
You can still do low-level TCP or UDP programming in C#, so at that point it is really just a matter of choice whether you want to write network code in C, C#, etc. If all you are trying to do is learn how to write network code, I would consider the language more of a personal choice, as the underlying networking concepts remain the same.
.NET C#
You have:
a low-level API to work with TCP, UDP, etc.
.NET Remoting
WCF (HTTP, TCP, named pipes, MSMQ)
I would recommend the last option, but it depends on what exactly you are trying to learn: how to build distributed apps, or the nitty-gritty details of the low-level socket API.
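To give a feel for the WCF route, here is a minimal self-hosted sketch; the contract, address, and binding are invented for the example.

```csharp
using System;
using System.ServiceModel;

// Hypothetical service contract for the example.
[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return "echo: " + text; }
}

class Host
{
    static void Main()
    {
        // Self-host the service over TCP (address and binding chosen arbitrarily).
        var host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEchoService),
                                new NetTcpBinding(),
                                "net.tcp://localhost:8523/echo");
        host.Open();

        // Client side: create a channel proxy to the same endpoint.
        var factory = new ChannelFactory<IEchoService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8523/echo"));
        IEchoService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.Echo("hello"));

        factory.Close();
        host.Close();
    }
}
```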
It all depends on your existing programming skills. I would not start with the lower levels, such as the Socket class (or TcpClient/UdpClient), without at least a basic understanding of asynchronous programming.
A lot of people who start with socket programming launch a separate thread for reading, since the Read method blocks. That is a very inefficient way to solve the problem, especially in servers. BeginRead/EndRead is the way to go.
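A rough sketch of that BeginRead/EndRead pattern on a NetworkStream (the buffer size and the handling of received bytes are simplified placeholders):

```csharp
using System;
using System.Net.Sockets;

class AsyncReader
{
    private readonly NetworkStream _stream;
    private readonly byte[] _buffer = new byte[8192];

    public AsyncReader(TcpClient client)
    {
        _stream = client.GetStream();
    }

    public void Start()
    {
        // Kick off the first asynchronous read; no thread is blocked while waiting.
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null);
    }

    private void OnRead(IAsyncResult ar)
    {
        int bytesRead = _stream.EndRead(ar);
        if (bytesRead == 0)
            return; // remote side closed the connection

        // Hand the received bytes to the protocol parser here...
        Console.WriteLine("received {0} bytes", bytesRead);

        // Queue the next read.
        _stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, null);
    }
}
```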
Next up is designing a transfer protocol, since TCP doesn't guarantee that a complete message is delivered at once. It only guarantees that your data arrives in the correct order.
The next big thing with socket programming is how to handle incoming data. A newbie mistake is to start appending to strings, which results in a lot of memory usage in server applications. Use byte[] buffers and a buffer pool (flyweight pattern) to manage incoming data (this should be an easy task if you've designed your protocol well).
As you can see, it's quite a big task to take on with no prior experience. WCF is a much better option, since it handles most of that for you.

C# Network Packet Spoofing

I am connected via Ethernet to a simple I/O hardware device which is controlled by a very old, inflexible .NET driver. I've used Wireshark to peek at the packets, and they are very small, simple packets containing the name of the driver and a few bytes of data (unencrypted). Each packet receives a success packet back from the hardware device with a few bytes of confirmation data. There doesn't appear to be any persistence to the connection; it seems to be very transactional.
I would like to fashion my own driver for this device and send it my own packets, eliminating the junky driver. I understand struct layouts and how to format them explicitly; my question is what the slickest, most modern method of sending data to this network device would be.
Just looking for some info to get me started. Any ideas?
Avoid thinking in terms of a "driver"; true operating-system drivers cannot be written in a .NET language. The original code no doubt used a simple Socket. So can you. The only hard part is figuring out the exact protocol, i.e. the meaning of each individual byte.
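For example, assuming the device speaks plain TCP (the address, port, and request bytes below are pure placeholders; the real values have to come from your Wireshark captures), a transactional exchange over a Socket could look like this:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class DeviceClient
{
    static void Main()
    {
        // Placeholder endpoint - the real address/port must come from sniffing the traffic.
        var deviceEndpoint = new IPEndPoint(IPAddress.Parse("192.168.0.50"), 5000);

        using (var socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Stream, ProtocolType.Tcp))
        {
            socket.Connect(deviceEndpoint);

            // Placeholder request bytes - replace with the layout seen in Wireshark.
            byte[] request = { 0x01, 0x02, 0x03, 0x04 };
            socket.Send(request);

            // Read the device's confirmation packet.
            var reply = new byte[64];
            int received = socket.Receive(reply);
            Console.WriteLine("device replied with {0} bytes", received);
        }
    }
}
```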
Use Reflector on the original .NET assembly; you can probably reverse-engineer it. Do check first whether your license agreement allows this; permission to do so isn't exactly common.

What's the fastest IPC method for a .NET Program?

Named pipes? XML-RPC? Standard input/output? Web services?
I would rather not use unsafe stuff like shared memory and the like.
Named pipes would be the fastest method, but they only work for communication between processes on the same computer. Named-pipe communication doesn't go all the way down the network stack (because it only works for communication on the same computer), so it will always be faster.
Anonymous Pipes may only be used on the local machine. However, Named Pipes may traverse the network.
I left out shared memory since you specifically mentioned that you don't want to go that route. Shared memory would be even faster than named pipes, though.
So it depends on whether you only need to communicate between processes on the same computer or on different computers. Any XML-based communication protocol (e.g. web services) will usually be slower due to the massive overhead of XML.
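For the same-machine case, a minimal named-pipe round trip with the built-in System.IO.Pipes classes looks roughly like this (the pipe name and message are arbitrary):

```csharp
using System;
using System.IO.Pipes;
using System.Text;
using System.Threading.Tasks;

class NamedPipeDemo
{
    static void Main()
    {
        const string pipeName = "demo-pipe"; // arbitrary pipe name

        // Create the server end first so the client has something to connect to.
        var serverPipe = new NamedPipeServerStream(pipeName, PipeDirection.In);
        var server = Task.Run(() =>
        {
            using (serverPipe)
            {
                serverPipe.WaitForConnection();
                var buffer = new byte[256];
                int read = serverPipe.Read(buffer, 0, buffer.Length);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
            }
        });

        // Client end: connect to the same pipe on the local machine (".") and send a message.
        using (var clientPipe = new NamedPipeClientStream(".", pipeName, PipeDirection.Out))
        {
            clientPipe.Connect();
            byte[] message = Encoding.UTF8.GetBytes("hello over a named pipe");
            clientPipe.Write(message, 0, message.Length);
        }

        server.Wait();
    }
}
```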
I don't think there's a quick answer to this. If I were you, I would buy or borrow a copy of Advanced Programming in the UNIX Environment (APUE) by Stevens and Rago and read Chapters 15 and 16 on IPC. It's a brilliant book if you really want to understand how *nix (a lot of it applies to any POSIX system) works down to the kernel level.
If you must have a quick answer, I would say the following (without putting a huge amount of thought into it), in descending order of efficiency:
Local machine IPC:
Shared memory / memory-mapped files
Named pipes/FIFOs (only between related processes, i.e. after a fork)
Unix domain sockets
Network IPC / Internet sockets:
Datagram sockets
Stream sockets
Raw sockets
At both levels, you are going to have to think about how the data you transfer is encoded/decoded and trade off between memory usage and CPU utilization.
At the network level, you will have to consider which layers of protocols you are going to run on top of. Most commonly, at the bottom of the application layer, you will be choosing between TCP and UDP. TCP has a lot more overhead, as it does error correction, checksumming, and lots of other things. If you need in-order delivery of messages, you need to use TCP as opposed to UDP.
On top of these sit other protocols like HTTP and SOAP (over HTTP or another protocol like FTP/SMTP, etc.). A binary protocol is going to be more efficient as long as you are network-bound rather than CPU-bound. If you are using SOAP on the MS .NET platform, binary encoding of the messages is going to be quicker across the network but may be more CPU-intensive.
I could go on; it's not a simple question. Learning where the latencies are and how buffering is handled is key to being able to make decisions on the trade-offs you are always forced into with IPC. I'd recommend the APUE book above if you really want to know what is going on under the hood.
Windows messages are one of the fastest ways to do IPC; after all, Windows is built on them.
It's possible to use WM_COPYDATA with P/Invoke calls to exchange data between two form-based .NET applications, and I have an open-source library for doing exactly that. I've benchmarked around 1771 msg/sec on a fairly hot laptop.
http://thecodeking.github.com/XDMessaging.Net
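The XDMessaging library handles the details for you, but as a bare-bones illustration of the underlying WM_COPYDATA mechanism (this is not the library's API; the target window title is a placeholder, and the receiving form would handle WM_COPYDATA in its overridden WndProc), the sending side looks roughly like this:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

// Sender side of a WM_COPYDATA exchange between two desktop applications.
class CopyDataSender
{
    private const int WM_COPYDATA = 0x004A;

    [StructLayout(LayoutKind.Sequential)]
    private struct COPYDATASTRUCT
    {
        public IntPtr dwData;   // application-defined value
        public int cbData;      // size of the payload in bytes
        public IntPtr lpData;   // pointer to the payload
    }

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    private static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    private static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam,
                                             ref COPYDATASTRUCT lParam);

    static void Main()
    {
        // Placeholder window title of the receiving application.
        IntPtr target = FindWindow(null, "Receiver Window Title");
        if (target == IntPtr.Zero)
            return;

        byte[] payload = Encoding.UTF8.GetBytes("hello via WM_COPYDATA");
        IntPtr unmanaged = Marshal.AllocHGlobal(payload.Length);
        try
        {
            Marshal.Copy(payload, 0, unmanaged, payload.Length);
            var cds = new COPYDATASTRUCT
            {
                dwData = IntPtr.Zero,
                cbData = payload.Length,
                lpData = unmanaged
            };
            SendMessage(target, WM_COPYDATA, IntPtr.Zero, ref cds);
        }
        finally
        {
            Marshal.FreeHGlobal(unmanaged);
        }
    }
}
```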
I don't know why you won't go with shared memory, but it's very, very fast between C# apps on the same machine, and very reliable (unlike TCP sockets). spazzarama/SharedMemory is a fantastic C# library that supports shared arrays and buffers with a simple high-level API. You just initialize the class with a common memory file name (on the client and server sides) and then update the array. Values magically appear on the other side!
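If you would rather avoid a library dependency, the same idea can be sketched with the built-in System.IO.MemoryMappedFiles types. Note this is not the SharedMemory library's API: the map name and layout here are arbitrary, and it omits the synchronization that the library adds on top.

```csharp
using System;
using System.IO.MemoryMappedFiles;

class SharedMemoryDemo
{
    static void Main(string[] args)
    {
        const string mapName = "demo-map";   // arbitrary shared map name
        const long capacity = 1024;          // bytes

        // Both processes open the same named map; whichever runs first creates it.
        using (var mmf = MemoryMappedFile.CreateOrOpen(mapName, capacity))
        using (MemoryMappedViewAccessor view = mmf.CreateViewAccessor())
        {
            if (args.Length > 0 && args[0] == "writer")
            {
                view.Write(0, 42L);          // write a value at offset 0
                Console.ReadLine();          // keep the map alive while the reader runs
            }
            else
            {
                long value = view.ReadInt64(0);
                Console.WriteLine("read {0} from shared memory", value);
            }
        }
    }
}
```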
