I would like to extract HTTP header information using Packet.Net. I am using SharpPcap to capture the packets and need to access the User-Agent field in the TCP packet. If I understand correctly, Packet.Net is used to analyse the captured packets. I have tried to display the TCP packet with the following code, but all I get is raw bytes. I am using C# as the development language. Any help in this regard would be appreciated.
private static void device_OnPacketArrival(object sender, CaptureEventArgs packet)
{
    // Parse the raw capture into a Packet.Net packet object.
    Packet p = Packet.ParsePacket(packet.Device.LinkType, packet.Packet.Data);

    // Dump the whole packet (link, IP and TCP headers included) as ASCII,
    // which is why mostly unreadable bytes get printed.
    string stringMessage = System.Text.Encoding.ASCII.GetString(p.Bytes);
    Console.WriteLine(stringMessage);
}
Packet.Net doesn't currently have HTTP decoding support. Because HTTP messages can be split across multiple packets, a good approach would be to first add support for following TCP connections, and then add HTTP session detection and parsing on top of the reassembled TCP data stream. Trying to parse HTTP data on a per-packet basis might work for the headers or for some HTTP messages, but it isn't a robust solution, because it can't recover the full content of an HTTP message that may be several kilobytes in size.
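That said, as a rough per-packet sketch along the lines of the question (it assumes the request headers fit in a single TCP segment, and the Extract call may be named differently depending on the Packet.Net version):

private static void device_OnPacketArrival(object sender, CaptureEventArgs e)
{
    Packet parsed = Packet.ParsePacket(e.Device.LinkType, e.Packet.Data);

    // Extract the TCP layer; older Packet.Net versions use Extract(typeof(TcpPacket)) instead.
    TcpPacket tcp = parsed.Extract<TcpPacket>();
    if (tcp == null || tcp.PayloadData == null || tcp.PayloadData.Length == 0)
        return;

    // Only works when the HTTP request headers happen to be contained in this one segment.
    string payload = System.Text.Encoding.ASCII.GetString(tcp.PayloadData);
    foreach (string line in payload.Split(new[] { "\r\n" }, StringSplitOptions.None))
    {
        if (line.StartsWith("User-Agent:", StringComparison.OrdinalIgnoreCase))
            Console.WriteLine(line);
    }
}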
(I have a commercial library that builds upon SharpPcap/Packet.Net that adds tcp session following and http session following and decode. Post your email here if you want me to email you with more details.)
I need to consume a third-party WebSocket API in .NET Core and C#. The WebSocket server is implemented with socket.io (protocol version 0.9), and I am having a hard time understanding how socket.io works... besides that, the API requires SSL.
I found out that the HTTP handshake must be initiated via a certain path, which is...
socket.io/1/?t=...
...whereby the value of the parameter t is a Unix timestamp (in seconds). The service replies with a session key, timeout information, and a list of supported transport protocols. For simplicity, this first request is made via HttpClient and does not involve any additional headers.
Next, another HTTP request is required, which should result in an HTTP 101 Switching Protocols response. I specified the following headers in accordance with the previous request...
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: ...
Sec-WebSocket-Version: 13
...whereby the value of the Key header is a Base64-encoded GUID value that the server uses to calculate the Sec-WebSocket-Accept header value. I also precalculate the expected Sec-WebSocket-Accept header value for validation...
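For reference, a minimal sketch of that precalculation; the GUID appended to the key is the fixed value from RFC 6455, not anything socket.io-specific:

static string ComputeExpectedAccept(string secWebSocketKey)
{
    // RFC 6455: Sec-WebSocket-Accept = Base64(SHA-1(Sec-WebSocket-Key + fixed GUID)).
    const string rfc6455Guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    using (var sha1 = System.Security.Cryptography.SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(System.Text.Encoding.ASCII.GetBytes(secWebSocketKey + rfc6455Guid));
        return Convert.ToBase64String(hash);
    }
}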
I tried to make that request using HttpClient as well, but that does not seem to work. I actually don't understand why, because I expect an HTTP response. I also tried to make the request using TcpClient, sending a manually prepared GET request over an SslStream, which accepts the remote certificate as expected. Sending data seems to work, but there is no response data: the Read method returns zero.
What am I missing here? Do I need to set up a listener for the WebSocket connection as well, and if so, how? I don't want to implement a feature-complete socket.io client; I'd just like to keep it as simple as possible to catch some events...
The best way of debugging these issues is to use a sniffer like Wireshark or Fiddler. I often connect using IE, compare IE's results with my application's, and modify my app so it works like IE does. Using WebClient instead of HttpClient can also work better, because WebClient does more automatically than HttpClient.
A web connection is negotiated using the headers the client sends and the headers the server returns, so adding additional headers to your client can change the connection mode. Cookies also play a part in selecting the connection mode: they are the result of previous connections to the same server, they shorten the negotiation and carry information forward so less data has to be downloaded again, and they are kept until their timeout expires. A client that shares the system cookie store (as IE does) automatically sends the cookies associated with that server.
If a bad connection leaves a bad cookie behind, the only way to connect cleanly is to remove the cookie. I usually go into IE and delete the cookies manually from the browser history.
To check whether a response is good, look at the status the server returns. A completed response has status 200 OK; other statuses indicate errors. You may also get a 100 Continue, which is an interim response telling the client to go ahead and send the rest of the request.
HTTP has 1.0 (a single response stream) and 1.1 (which may use chunked transfer encoding). If chunked responses are giving your client trouble, add headers so that only HTTP 1.0 is used; the server will then respond without chunking.
HTTP uses TCP as the transport layer, so in a sniffer you will see both TCP and HTTP. Usually you can filter the sniffer to show only HTTP and look at the headers for debugging. Occasionally TCP disconnects, and then you have to look at the TCP layer to find out why the disconnect occurs.
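If you do want to pin a request to HTTP 1.0 as suggested above, a minimal sketch with HttpWebRequest would look like this (the URL is a placeholder):

var request = (HttpWebRequest)WebRequest.Create("https://example.invalid/socket.io/1/?t=1234567890");
request.ProtocolVersion = HttpVersion.Version10; // avoid HTTP/1.1 features such as chunked responses
request.KeepAlive = false;

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine(reader.ReadToEnd());
}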
I have code performing an HTTP POST to a vendor's site using WebClient.UploadValues. When the payload is somewhere under 1.6 MB in size, the response is some XML data as expected. When larger, the response from the vendor's site is null.
var client = new WebClient();
client.Encoding = Encoding.UTF8;
byte[] response = client.UploadValues(strTargetUri, paramsNameValueCollection);
The vendor indicates they routinely receive larger payloads. I can't find any IIS or WCF settings that would be limiting outgoing payload by size or time. If I were exceeding a limit I set, .NET would throw an exception, not just return null.
Any suggestions of what I might be missing on my side? Or something I should be sharing with the vendor?
UPDATE
I've received back samples received at the vendor end. When under ~2MB, they show that they receive straight up XML such as:
<STAT>
<REQUEST _SEQUENCE_ID="1">
<CUSTOMER>...
But when the payload is larger, it 1) is URL-encoded, 2) is preceded by the other query-string components, and 3) contains some of the embedded "add-on" XML, such as XML namespace references:
integrator=MyVal&userId=MyUser&password=12345&payload=%3cSTAT+xmlns%3axsd%3d%22http%3a%2f%2fwww.w3.org%2f2001%2fXMLSchema%22+xmlns%3axsi%3d%22http%3a%2f%2fwww.w3.org%2f2001%2fXMLSchema-instance%22%3e%3cREQUEST+_SEQUENCE_ID%3d%221%22...
My simplistic understanding of POSTs, and the fact that I set nothing differently between the two scenarios, makes me think the difference is that the vendor's processing software has "choked" and is showing them different results. I'm getting my network engineering team to help me trace our outgoing packets so we can verify what we're actually sending at the last moment.
John,
The limit is most likely on the server, so you may have to contact the vendor. For example, the default maximum non-multipart POST size of a Tomcat server is only 2 MB. If you can detect that the server is Tomcat, you could suggest that they increase the maxPostSize attribute of the Connector.
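For illustration only, that setting lives on the Connector element in Tomcat's server.xml; the value is in bytes and the surrounding attributes are just placeholders:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxPostSize="10485760" />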
Joe
I have a problem whereby I have lots of very small web service calls to a Java endpoint (hosted on Oracle GlassFish 3.1.X). I added this service as a Service Reference (using the remote wsdl file) and use a BasicHttpBinding to reach it.
Since the service is located half the world away and the calls go across the internet, we frequently experience some packet loss when reaching the destination, and we are looking at any way possible to reduce the impact of that. We have been using Wireshark to get detailed knowledge of what is going across the wire to our destination and back again. I was curious to see that for every single request we generate, we send two packets. The packet boundary is always between the HTTP header and the <s:Envelope> tag. To me this is a big overhead, particularly in my environment where I want to minimise the number of packets sent (to reduce the overall packet loss).
In most cases (99% of our calls), the HTTP header packet is 210 bytes followed by a SOAP envelope packet of 291 bytes (excluding the 54 bytes of TCP/IP overhead for each packet). Totalling these gives 501 bytes - just over a third of our Max Segment Size of 1460 bytes. Why isn't WCF sending this HTTP POST request as a single packet of 501 bytes (555 bytes if you include the 54 bytes of TCP/IP overhead)?
Does anyone know why it does this? It almost seems as if the HttpWebRequest object is calling .Flush() on the stream after writing its headers, but I'm not sure why it would do that.
I've tried different combinations of these:
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
With no effect.
EDIT
Wrong: I've investigated a bit further, and when HttpWebRequest.GetRequestStream() is called, it writes the headers to the stream immediately. At some stage before you write to the stream that is handed back to you, the network layer flushes them (I guess? unless a deliberate flush is happening somewhere). So by the time you finally start writing to the stream, the header packet has already been sent. I'm not sure how to prevent this; it seems to be a hard-wired assumption inside HttpWebRequest that calling GetRequestStream() writes the headers. For my small requests I want nothing to be sent until I have closed the stream, but that goes against the streaming nature of it.
And the answer is: it can't be done with HttpWebRequest (and hence BasicHttpBinding).
http://us.generation-nt.com/answer/too-packets-httpwebrequest-help-23298102.html
I need to write an Icecast 2 client that will be able to stream audio from the computer (MP3 files, soundcard recording and so forth) to the server. I decided to write such a client in C#.
Two questions:
1) It would be very useful to know common guidelines (best practices, maybe tricks) I may/should/must use to work seamlessly with audio streamed over the network in C#. General technical documentation about streaming over TCP/IP in general and ICY in particular, plus advice and notes on the overall architecture of the application, would be very much appreciated.
2) Is there any good documentation on the Icecast 2 streaming protocol? I couldn't find such docs on the official Icecast site, and I don't want to extract the protocol description directly from its source code. If the protocol is really simple and neat, could anybody provide a summary of it right here?
As far as I know, there is no protocol spec anywhere, outside of the Icecast source code. Here's what I've found from packet sniffing:
Audio Stream
The protocol is similar to HTTP. The source client connects to the server, makes a request with the mountpoint, and passes some headers with information about the stream:
SOURCE /mp3test ICE/1.0
content-type: audio/mpeg
Authorization: Basic c291cmNlOmhhY2ttZQ==
ice-name: This is my server name
ice-url: http://www.google.com
ice-genre: Rock
ice-bitrate: 128
ice-private: 0
ice-public: 1
ice-description: This is my server description
ice-audio-info: ice-samplerate=44100;ice-bitrate=128;ice-channels=2
If all is good, the server responds with:
HTTP/1.0 200 OK
The source client then proceeds to send the binary stream data. Note that some encoders don't even wait for the server to respond with 200 OK before they start sending stream data: just the headers, an empty line, and then stream data.
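Putting the above together, a heavily simplified C# source-client sketch might look like the following; the host, port, mountpoint and credentials are placeholders, and real code would need error handling and would pace the audio data at the stream's bitrate:

using (var client = new TcpClient("icecast.example.invalid", 8000))
using (NetworkStream stream = client.GetStream())
{
    // "source:hackme" Base64-encoded, matching the capture above.
    string auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("source:hackme"));

    string request =
        "SOURCE /mp3test ICE/1.0\r\n" +
        "content-type: audio/mpeg\r\n" +
        "Authorization: Basic " + auth + "\r\n" +
        "ice-name: Test stream\r\n" +
        "ice-public: 0\r\n" +
        "\r\n";
    byte[] requestBytes = Encoding.ASCII.GetBytes(request);
    stream.Write(requestBytes, 0, requestBytes.Length);

    // Optionally read the "HTTP/1.0 200 OK" line here, then send the raw MP3 data.
    byte[] buffer = new byte[4096];
    using (FileStream mp3 = File.OpenRead("test.mp3"))
    {
        int read;
        while ((read = mp3.Read(buffer, 0, buffer.Length)) > 0)
            stream.Write(buffer, 0, read);
    }
}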
Meta Data
Meta data is sent using an out-of-band HTTP request. The source client sends:
GET /admin/metadata?pass=hackme&mode=updinfo&mount=/mp3test&song=Even%20more%20meta%21%21 HTTP/1.0
Authorization: Basic c291cmNlOmhhY2ttZQ==
User-Agent: (Mozilla Compatible)
The server responds with:
HTTP/1.0 200 OK
Content-Type: text/xml
Content-Length: 113
<?xml version="1.0"?>
<iceresponse><message>Metadata update successful</message><return>1</return></iceresponse>
Also note that both the audio stream and meta data requests are sent on the same port. Unlike SHOUTcast, this is the base port that the server is running on.
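From C#, the metadata update can be reproduced with WebClient along these lines (host, credentials and mountpoint are placeholders):

using (var web = new WebClient())
{
    string auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("source:hackme"));
    web.Headers[HttpRequestHeader.Authorization] = "Basic " + auth;

    string url = "http://icecast.example.invalid:8000/admin/metadata" +
                 "?mode=updinfo&mount=/mp3test&song=" + Uri.EscapeDataString("Artist - Title");

    // Expects the <iceresponse> XML shown above on success.
    Console.WriteLine(web.DownloadString(url));
}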
I'm going to comment here despite this question being quite old.
Icecast is HTTP compliant. This was always the case for the listener side (plain and simple HTTP 1.0, RFC 1945); starting with 2.4.0, it is also true for the source client side.
To implement a source client, you make a PUT request in compliance with HTTP 1.1, aka RFC 2616. Some options can be set through HTTP headers; for details please refer to the current Icecast documentation.
If you send one of the supported container formats, Ogg or WebM (technically EBML), then this is all you need to know. To be clear, this covers at least the Opus, Vorbis, Theora and VP8 codecs.
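As a purely illustrative example (check the Icecast documentation for the exact headers), the start of such a PUT-based source connection for an Ogg stream could look roughly like this:

PUT /mystream.ogg HTTP/1.1
Host: icecast.example.invalid:8000
Authorization: Basic c291cmNlOmhhY2ttZQ==
Content-Type: audio/ogg

<raw Ogg data follows>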
Please note that while other formats generally work fine, they are technically not supported; in that case Icecast only passes the stream through without any processing.
If you need help or have further questions, then the official mailing lists and the IRC channel are the right place to go.
I looked at Icecast2 a good long while ago: the best reference I could find was at http://forums.radiotoolbox.com/viewtopic.php?t=74 (I should print that out; it took me forever to figure out the proper Google spell to cast to surface that again). It appears to cover both source-to-server and server-to-client.
Questions remain about just how accurate it is: I got about halfway through an Android implementation before other things consumed me, and I can't quite remember what was wrong with the communication between my implementation of that and VLC/Winamp, but honestly it was the closest thing I could find to a spec.
The best description I know is here: https://gist.github.com/ePirat/adc3b8ba00d85b7e3870
#ePirat is a Xiph/Icecast core committer.
Can anyone guide me in acquiring the POST Content-Length of a website by just using sockets? Thanks and kudos!
(I'm avoiding using HttpWebRequest for some reason.)
If it's a proxy application, you don't need to parse headers at all. You just need to mirror the data from one side to the other, as bytes. The only thing you need to parse is, for example, the initial HTTP CONNECT request, or whatever your initial handshake with the client is that causes you to set up the upstream connection. The rest is just byte copying plus EOS and error propagation.
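A minimal sketch of that mirroring idea, where client and upstream are the two already-connected streams (names are placeholders):

static Task RelayAsync(Stream client, Stream upstream)
{
    // Copy bytes in both directions until either side hits EOS or errors out.
    Task clientToUpstream = client.CopyToAsync(upstream);
    Task upstreamToClient = upstream.CopyToAsync(client);
    return Task.WhenAll(clientToUpstream, upstreamToClient);
}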
In the HTTP protocol, the headers are separated from the content by a double CRLF.
So you can either parse the headers and read the Content-Length header, or work out the length of the content yourself (since you know where the headers end and the content starts).
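For example, a rough sketch that reads from the socket up to the blank line and pulls out Content-Length (it ignores chunked encoding and the other cases, which is why the RFC section below matters):

static int ReadContentLength(NetworkStream stream)
{
    var headerText = new StringBuilder();
    var buffer = new byte[1];

    // Read byte by byte until the double CRLF that ends the header block.
    while (!headerText.ToString().EndsWith("\r\n\r\n") && stream.Read(buffer, 0, 1) > 0)
        headerText.Append((char)buffer[0]);

    foreach (string line in headerText.ToString().Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
    {
        if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
            return int.Parse(line.Substring("Content-Length:".Length).Trim());
    }
    return -1; // no Content-Length header; the body length is determined by the other rules
}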
HTTP/1.1 message length rules are described in section 4.4 of RFC 2616.