Why does the (HttpWebResponse) ResponseStream stop reading data from internet radio? - c#

I am using this code to get the data from an Icecast radio stream, but the ResponseStream stops reading data after 64K received. Can you help me with this?
HttpWebRequest request = (HttpWebRequest) WebRequest.Create("http://icecast6.play.cz/radio1-128.mp3");
request.AllowReadStreamBuffering = false;
request.Method = "GET";
request.BeginGetResponse(new AsyncCallback(GetShoutAsync), request);
void GetShoutAsync(IAsyncResult res)
{
HttpWebRequest request = (HttpWebRequest) res.AsyncState;
HttpWebResponse response = (HttpWebResponse) request.EndGetResponse(res);
Stream r = response.GetResponseStream();
byte[] data = new byte[4096];
int read;
while ((read = r.Read(data, 0, data.Length)) > 0)
{
Debug.WriteLine(data[0]);
}
}

I don't see any obvious problems in your code, apart from not using async-await, which would greatly simplify the kind of asynchronous code you're writing :-)
What do you mean “the ResponseStream stops reading”?
If the connection is dropped, then my #1 guess is that the server does that. Use Wireshark to confirm, and then use Wireshark to compare your request's HTTP headers with those of e.g. Winamp, which plays that stream fine. I'm sure you'll find some important differences.
If however it merely pauses, it’s normal.
Upon connect, streaming servers typically send you some initial amount of data, and after that they only send their data in real time. So, after you've received that initial buffer, you'll only get data at the rate of your stream, i.e. 16 kbytes/sec for your 128 kbit/sec radio.
BTW, some clients send an "Initial-Burst" HTTP header with the request, but I was unable to find any documentation for that header. When I worked on my radio app for WP7, I basically replicated the behavior of another iOS app.
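For what it's worth, here is a minimal async/await sketch of the same read loop, assuming desktop .NET 4.5 or later where HttpWebRequest.GetResponseAsync is available (on WP7 you would keep the Begin/End pattern). Once the initial burst is consumed, reads arrive at roughly 16 kbytes/sec, which can look like the stream has "stopped":
// Sketch only. Requires System.Net, System.IO, System.Diagnostics, System.Threading.Tasks.
async Task ReadIcecastAsync()
{
    var request = (HttpWebRequest)WebRequest.Create("http://icecast6.play.cz/radio1-128.mp3");
    request.Method = "GET";
    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    using (var stream = response.GetResponseStream())
    {
        var data = new byte[4096];
        int read;
        while ((read = await stream.ReadAsync(data, 0, data.Length)) > 0)
        {
            // 'read' is the number of bytes received in this chunk; after the
            // initial burst it trickles in at the stream's bitrate.
            Debug.WriteLine(read);
        }
    }
}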

Finally, I wrote this code to solve the issue. It is necessary to use the Windows.Web.Http namespace, and it looks like this:
Uri url = new Uri("http://icecast6.play.cz/radio1-128.mp3");
HttpClient httpClient = new HttpClient(); // Windows.Web.Http.HttpClient
HttpResponseMessage response = await httpClient.GetAsync(
    url,
    HttpCompletionOption.ResponseHeadersRead);
IInputStream inputStream = await response.Content.ReadAsInputStreamAsync();
try
{
    ulong totalBytesRead = 0;
    IBuffer buffer = new Windows.Storage.Streams.Buffer(100000);
    do
    {
        buffer = await inputStream.ReadAsync(
            buffer,
            buffer.Capacity,
            InputStreamOptions.Partial);
        //
        // Some stuff here...
        totalBytesRead += buffer.Length;
        Debug.WriteLine(buffer.Length + " " + totalBytesRead);
    } while (buffer.Length > 0);
    Debug.WriteLine(totalBytesRead);
}
finally
{
    inputStream.Dispose();
}
I hope you guys enjoy it.

Related

How to read data from response stream HttpWebRequest C#

I'm building a Xamarin app. I'm still at a very, very noobish level; I'm coming from NativeScript, plus a little (not much) native Android.
I have an Express server that performs long-running operations. During that time the Xamarin client waits with a spinner.
On the server I already calculate the percentage progress of the job, and I'd like to send it to the client each time it changes, in order to swap that spinner for a progress bar.
Still, on the server, this was already achieved with a
response.write('10');
where the number 10 stands for "10%" of the job done.
Now the tough part: how can I read that 10 from the stream? Right now it works as a JSON response, because it waits for the whole response to arrive.
Xamarin client HTTP GET:
// Downloads JSON data from the passed URL.
async Task<JsonValue> DownloadSong(string url)
{
// Create an HTTP web request using the URL:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(url));
request.ContentType = "application/json";
request.Method = "GET";
// Send the request to the server and wait for the response:
using (WebResponse response = await request.GetResponseAsync())
{
// Get a stream representation of the HTTP web response:
using (System.IO.Stream stream = response.GetResponseStream())
{
// Use this stream to build a JSON document object:
JsonValue jsonDoc = await Task.Run(() => JsonValue.Load(stream));
// Return the JSON document:
return jsonDoc;
}
}
}
The server writes to the response each time the progress of the job changes, sending a plain string containing the percentage value. At the end of the job it will write a final string, which will be a (very long) Base64 string, and the response will then be closed.
Can anyone show me how to change that code so that it reads each data chunk the server sends?
First you need to define some protocol. For simplicity we can say that server sends:
(optional) current progress as 3-digit string ("010" - means 10%)
(required) final progress as "100"
(required) json data
So, valid response is, for example, "010020050090100{..json here..}".
Then you can read the response in 3-byte chunks until you find "100", and then read the JSON. Sample code:
using (System.IO.Stream stream = response.GetResponseStream()) {
while (true) {
// 3-byte buffer
byte[] buffer = new byte[3];
int offset = 0;
// this block of code reliably reads 3 bytes from response stream
while (offset < buffer.Length) {
int read = await stream.ReadAsync(buffer, offset, buffer.Length - offset);
if (read == 0)
throw new System.IO.EndOfStreamException();
offset += read;
}
// convert to text with UTF-8 (for example) encoding
// need to use encoding in which server sends
var progressText = Encoding.UTF8.GetString(buffer);
// report progress somehow
Console.WriteLine(progressText);
if (progressText == "100") // done, json will follow
break;
}
// if JsonValue has async api (like LoadAsync) - use that instead of
// Task.Run. Otherwise, in UI application, Task.Run is fine
JsonValue jsonDoc = await Task.Run(() => JsonValue.Load(stream));
return jsonDoc;
}
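To surface those progress values in a Xamarin UI, one option (a sketch, not part of the original answer; DownloadSongWithProgress is a hypothetical name) is to pass an IProgress<int> into the method. Progress<T> posts its callback to the synchronization context it was created on, so constructing it on the UI thread lets you update a progress bar directly:
// Sketch only. Requires System.Net, System.IO, System.Text, System.Json, System.Threading.Tasks.
async Task<JsonValue> DownloadSongWithProgress(string url, IProgress<int> progress)
{
    var request = (HttpWebRequest)WebRequest.Create(new Uri(url));
    request.Method = "GET";
    using (WebResponse response = await request.GetResponseAsync())
    using (System.IO.Stream stream = response.GetResponseStream())
    {
        var buffer = new byte[3];
        while (true)
        {
            // reliably read exactly 3 bytes (same technique as above)
            int offset = 0;
            while (offset < buffer.Length)
            {
                int read = await stream.ReadAsync(buffer, offset, buffer.Length - offset);
                if (read == 0)
                    throw new System.IO.EndOfStreamException();
                offset += read;
            }
            string progressText = Encoding.UTF8.GetString(buffer, 0, buffer.Length);
            if (progress != null)
                progress.Report(int.Parse(progressText));
            if (progressText == "100") // done, JSON follows
                break;
        }
        return await Task.Run(() => JsonValue.Load(stream));
    }
}
Usage from the UI layer (progressBar is a hypothetical Xamarin.Forms ProgressBar, whose Progress runs from 0 to 1):
var json = await DownloadSongWithProgress(url, new Progress<int>(p => progressBar.Progress = p / 100.0));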

HttpWebRequest is slow with chunked data

I'm using HttpWebRequest to connect to my in-house built HTTP server. My problem is that it is a lot slower than connecting to the server via for instance PostMan (https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en), which is probably using the built-in functions in Chrome to request data.
The server is built using this example on MSDN (http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx) and uses a buffer size of 64. The request is a HTTP request with some data in the body.
When connecting via PostMan, the request is split into a bunch of chunks and BeginReceive() is called multiple times, each time receiving 64 B and taking about 2 milliseconds, except the last one, which receives less than 64 B.
But when connecting with my client using HttpWebRequest, the first BeginReceive() callback receives 64 B and takes about 1 ms, the second receives only 47 B and takes almost 200 ms, and finally the third receives about 58 B and takes 2 ms.
What is up with the second BeginReceive()? I note that the connection is established as soon as I start to write data to the HttpWebRequest input stream, but the data reception does not start until I call GetResponse().
Here is my HttpWebRequest code:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
var dataBytes = Encoding.UTF8.GetBytes(data);
try
{
var dataStream = request.GetRequestStream();
dataStream.Write(dataBytes, 0, dataBytes.Length);
dataStream.Close();
}
catch (Exception ex)
{
throw;
}
}
WebResponse response = null;
try
{
response = request.GetResponse();
}
catch (Exception ex)
{
throw;
}
var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
var responseStr = responseReader.ReadToEnd();
responseReader.Close();
response.Close();
What am I doing wrong? Why is it behaving so much differently than a HTTP request from a web browser? This is effectively adding 200ms of lag to my application.
This looks like a typical case of the Nagle algorithm clashing with TCP delayed acknowledgement. In your case you are sending a small HTTP request (~170 bytes according to your numbers). This is likely less than the MSS (Maximum Segment Size), meaning that the Nagle algorithm will kick in. The server is probably delaying its ACK, resulting in a delay of up to 500 ms. See the links for details.
You can disable Nagle via ServicePointManager.UseNagleAlgorithm = false (before issuing the first request), see MSDN.
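For reference, a minimal sketch of disabling it (url here stands for whatever URL your code targets; the global setting must run before the first request to that host is created):
// Disable Nagle for all service points, before the first request is made:
System.Net.ServicePointManager.UseNagleAlgorithm = false;

// Or disable it only for the service point behind a particular request:
var request = (HttpWebRequest)WebRequest.Create(url);
request.ServicePoint.UseNagleAlgorithm = false;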
Also see Nagle’s Algorithm is Not Friendly towards Small Requests for a detailed discussion including a Wireshark analysis.
Note: in your answer you are running into the same situation when you do write-write-read; when you switch to write-read you overcome this problem. However, I do not believe you can instruct HttpWebRequest (or HttpClient, for that matter) to send small requests as a single TCP write operation. That would probably be a good optimization in some cases, although it may lead to some additional array copying, affecting performance negatively.
200ms is the typical latency of the Nagle algorithm. This gives rise to the suspicion that the server or the client is using Nagling. You say you are using a sample from MSDN as the server... Well there you go. Use a proper server or disable Nagling.
It is very unlikely that the built-in HttpWebRequest class adds an unnecessary 200 ms of latency. Look elsewhere; look at your own code to find the problem.
It seems like HttpWebRequest is just really slow.
Funny thing: I implemented my own HTTP client using sockets, and I found a clue to why HttpWebRequest is so slow. If I encoded my ASCII headers into their own byte array and sent them on the stream, followed by the byte array encoded from my data, my socket-based HTTP client behaved exactly like HttpWebRequest: first it filled one buffer with data (part of the header), then it partially filled another buffer (the rest of the header), waited 200 ms, and then sent the rest of the data.
The code:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
// Send this out
stream.Write(headerData, 0, headerData.Length);
stream.Write(bodyData, 0, bodyData.Length);
stream.Flush();
The solution was of course to append the two byte arrays before sending them out on the stream. My application is now behaving as expected.
The code with a single stream write:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
var totalData = new byte[headerBytes.Length + bodyData.Length];
Array.Copy(headerBytes,totalData,headerBytes.Length);
Array.Copy(bodyData,0,totalData,headerBytes.Length,bodyData.Length);
// Send this out
stream.Write(totalData, 0, totalData.Length);
stream.Flush();
And HttpWebRequest seems to send the header before I write to the request stream, so it might be implemented somewhat like my first code sample. Does this make sense at all?
Hope this is helpful for anyone with the same problem!
Try this: you need to dispose of your IDisposables:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
var dataBytes = Encoding.UTF8.GetBytes(data);
using (var dataStream = request.GetRequestStream())
{
dataStream.Write(dataBytes, 0, dataBytes.Length);
}
}
string responseStr;
using (var response = request.GetResponse())
{
using (var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
responseStr = responseReader.ReadToEnd();
}
}

GetRequestStream() is throwing a timeout exception when posting data to an HTTPS URL

I'm calling an API hosted on Apache server to post data. I'm using HttpWebRequest to perform POST in C#.
The API has both a normal HTTP and a secure (HTTPS) port on the server. When I call the HTTP URL it works perfectly fine. However, when I call the HTTPS URL it gives me a timeout exception (at the GetRequestStream() call). Any insights? I'm using VS 2010, .NET Framework 3.5 and C#. Here is the code block:
string json_value = jsonSerializer.Serialize(data);
HttpWebRequest request = (HttpWebRequest)System.Net.WebRequest.Create("https://server-url-xxxx.com");
request.Method = "POST";
request.ProtocolVersion = System.Net.HttpVersion.Version10;
request.ContentType = "application/x-www-form-urlencoded";
byte[] buffer = Encoding.ASCII.GetBytes(json_value);
request.ContentLength = buffer.Length;
System.IO.Stream reqStream = request.GetRequestStream();
reqStream.Write(buffer, 0, buffer.Length);
reqStream.Close();
EDIT:
The console program suggested by Peter works fine. But when I add the data (in JSON format) that needs to be posted to the API, it throws an 'operation timed out' exception. Here is the code that I added to the console-based application and that triggers the error:
byte[] buffer = Encoding.ASCII.GetBytes(json_value);
request.ContentLength = buffer.Length;
I ran into the same issue, and it seems to be solved for me. I went through all my code making sure to invoke webResponse.Close() and/or responseStream.Close() for all my HttpWebResponse objects. The documentation indicates that you can close either the stream or the HttpWebResponse object; calling both is not harmful, so I did. Not closing the responses may cause the application to run out of connections for reuse, and this seems to affect HttpWebRequest.GetRequestStream as far as I can observe in my code.
I don't know if this will help with your specific problem, but you should consider disposing of some of those objects when you are finished with them. I was doing something like this recently, and wrapping things up in using statements seemed to clear up a bunch of timeout exceptions for me.
using (var reqStream = request.GetRequestStream())
{
if (reqStream == null)
{
return;
}
//do whatever
}
Also check these things:
Is the server serving https in your local dev environment?
Have you set up your bindings *.443 (https) properly?
Do you need to set credentials on the request?
Is it your application pool account accessing the https resources or is it your account being passed through?
Have you thought about using WebClient instead?
using (WebClient client = new WebClient())
{
using (Stream stream = client.OpenRead("https://server-url-xxxx.com"))
using (StreamReader reader = new StreamReader(stream))
{
MessageBox.Show(reader.ReadToEnd());
}
}
EDIT:
Make a request from a console app:
internal class Program
{
private static void Main(string[] args)
{
new Program().Run();
Console.ReadLine();
}
public void Run()
{
var request = (HttpWebRequest)System.Net.WebRequest.Create("https://server-url-xxxx.com");
request.Method = "POST";
request.ProtocolVersion = System.Net.HttpVersion.Version10;
request.ContentType = "application/x-www-form-urlencoded";
using (var reqStream = request.GetRequestStream())
{
// write the POST body here if needed
}
using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
Console.WriteLine(reader.ReadToEnd());
}
}
}
Try this:
WebRequest req = WebRequest.Create("https://server-url-xxxx.com");
req.Method = "POST";
string json_value = jsonSerializer.Serialize(data); //Body data
ServicePointManager.Expect100Continue = false;
using (var streamWriter = new StreamWriter(req.GetRequestStream()))
{
streamWriter.Write(json_value);
streamWriter.Flush();
streamWriter.Close();
}
HttpWebResponse resp = req.GetResponse() as HttpWebResponse;
Stream GETResponseStream = resp.GetResponseStream();
StreamReader sr = new StreamReader(GETResponseStream);
var response = sr.ReadToEnd(); //Response
resp.Close(); //Close response
sr.Close(); //Close StreamReader
And review the URI:
Reserved characters: sending reserved characters in the URI can cause problems: ! * ' ( ) ; : @ & = + $ , / ? # [ ]
URI length: you should not exceed 2000 characters.
I ran into this, too. I wanted to simulate hundreds of users with a Console app. When simulating only one user, everything was fine. But with more users came the Timeout exception all the time.
The timeout occurs because, by default, ConnectionLimit = 2 for a ServicePoint (i.e. per website).
Very good article to read: https://venkateshnarayanan.wordpress.com/2013/04/17/httpwebrequest-reuse-of-tcp-connections/
What you can do is:
1) Create more connection groups within a ServicePoint, because ConnectionLimit applies per connection group.
2) Or simply increase the connection limit.
See my solution:
private HttpWebRequest CreateHttpWebRequest<U>(string userSessionID, string method, string fullUrl, U uploadData)
{
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(fullUrl);
req.Method = method; // GET PUT POST DELETE
req.ConnectionGroupName = userSessionID; // We make separate connection-groups for each user session. Within a group connections can be reused.
req.ServicePoint.ConnectionLimit = 10; // The default value of 2 per connection group always caused a "Timeout exception" for me, because a user issues 1-3 concurrent WebRequests within a second.
req.ServicePoint.MaxIdleTime = 5 * 1000; // (5 sec) default was 100000 (100 sec). Max idle time for a connection within a ConnectionGroup for reuse before closing
Log("Statistics: The sum of connections of all connectiongroups within the ServicePoint: " + req.ServicePoint.CurrentConnections; // just for statistics
if (uploadData != null)
{
req.ContentType = "application/json";
SerializeToJson(uploadData, req.GetRequestStream());
}
return req;
}
/// <summary>Serializes and writes obj to the requestStream and closes the stream. Uses JSON serialization from System.Runtime.Serialization.</summary>
public void SerializeToJson(object obj, Stream requestStream)
{
DataContractJsonSerializer json = new DataContractJsonSerializer(obj.GetType());
json.WriteObject(requestStream, obj);
requestStream.Close();
}
You may want to set the timeout property; see http://www.codeproject.com/Tips/69637/Setting-timeout-property-for-System-Net-WebClient
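That tip covers WebClient; if you stay with HttpWebRequest, note that there are two separate timeouts (the values shown are the defaults, in milliseconds):
var request = (HttpWebRequest)WebRequest.Create("https://server-url-xxxx.com");
request.Timeout = 100000;          // applies to GetRequestStream() and GetResponse()
request.ReadWriteTimeout = 300000; // applies to reads/writes on the request/response streams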

Why does HttpWebResponse try to decompress the stream using GZip when Content-Encoding is empty?

I've looked at a lot of Q&As around this subject and got to the point where I use the following code in order to get the bytes from a given URI:
var request = (HttpWebRequest)WebRequest.Create(uri);
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
var response = request.GetResponse();
var stream = response.GetResponseStream();
if (stream != null)
{
var buffer = new byte[4097];
var memoryStream = new MemoryStream();
do
{
var count = stream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
if (count == 0)
break;
} while (true);
return memoryStream.ToArray();
}
response.Close();
return null;
Now, for a certain URI (which points to a file), when debugging, I see that the 'Content-Encoding' header of the web response is empty (""), but when trying to read from the stream, it throws an exception:
System.IO.InvalidDataException: The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.
When debugging the same URI in dev tools I get this on the response headers:
Content-Encoding:gzip,deflate
So I really don't know what happens.
Any clues and ideas on how to avoid this exception and successfully read the file's bytes?
Thanks!
The .NET framework blanks this header out as part of the auto-decompression. I would assume this is because you are asking the framework to handle it automatically and that the response stream you get back is no longer compressed.
I had to look this up myself by inspecting the HttpWebResponse source code in Microsoft's Reference Source.
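If you want to see the header and control decompression yourself, one option (a sketch, on the assumption that the server really does send gzip or deflate when it says so) is to drop AutomaticDecompression and pick the decoder from the Content-Encoding header using System.IO.Compression:
// Sketch only. Requires System.Net, System.IO and System.IO.Compression.
var request = (HttpWebRequest)WebRequest.Create(uri);
// No AutomaticDecompression here, so the Content-Encoding header is left intact.
using (var response = (HttpWebResponse)request.GetResponse())
using (var raw = response.GetResponseStream())
{
    var encoding = response.Headers["Content-Encoding"] ?? string.Empty;
    Stream decoded = raw;
    if (encoding.Contains("gzip"))
        decoded = new GZipStream(raw, CompressionMode.Decompress);
    else if (encoding.Contains("deflate"))
        decoded = new DeflateStream(raw, CompressionMode.Decompress);
    using (decoded)
    using (var memoryStream = new MemoryStream())
    {
        decoded.CopyTo(memoryStream);
        return memoryStream.ToArray();
    }
}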

Anyone have sample code for doing a "chunked" HTTP streaming download from one web server directly into an upload to a separate web server?

Background: I'm trying to stream an existing web page to a separate web application, using HttpWebRequest/HttpWebResponse in C#. One issue I'm hitting is that I set the upload request's content length from the download's content length; HOWEVER, this fails when the source page is on a web server for which the HttpWebResponse doesn't provide a content length.
HttpWebRequest downloadRequest = WebRequest.Create(new Uri("downloaduri")) as HttpWebRequest;
using (HttpWebResponse downloadResponse = downloadRequest.GetResponse() as HttpWebResponse)
{
var uploadRequest = (HttpWebRequest) WebRequest.Create(new Uri("uripath"));
uploadRequest.Method = "POST";
uploadRequest.ContentLength = downloadResponse.ContentLength; // ####
QUESTION: How could I update this approach to cater for this case (when the download response doesn't have a content length set)? Would it be to somehow use a MemoryStream, perhaps? Any sample code would be appreciated. In particular, does anyone have a code sample that shows how to do a "chunked" HTTP download & upload, to avoid any issues with the source web server not providing a content length?
Thanks
As I already replied in the Microsoft Forums, there are a couple of options that you have.
However, this is how I would do it with a MemoryStream:
HttpWebRequest downloadRequest = WebRequest.Create(new Uri("downloaduri")) as HttpWebRequest;
byte [] buffer = new byte[4096];
using (MemoryStream ms = new MemoryStream())
using (HttpWebResponse downloadResponse = downloadRequest.GetResponse() as HttpWebResponse)
{
Stream respStream = downloadResponse.GetResponseStream();
int read = respStream.Read(buffer, 0, buffer.Length);
while(read > 0)
{
ms.Write(buffer, 0, read);
read = respStream.Read(buffer, 0, buffer.Length);
}
// get the data of the stream
byte [] uploadData = ms.ToArray();
var uploadRequest = (HttpWebRequest) WebRequest.Create(new Uri("uripath"));
uploadRequest.Method = "POST";
uploadRequest.ContentLength = uploadData.Length;
// you know what to do after this....
}
Also, note that you really don't need to worry about knowing the value of ContentLength a priori. As you have guessed, you could have set SendChunked to true on uploadRequest and then just copied from the download stream into the upload stream. Or you can just do the copy without setting chunked, and HttpWebRequest (as far as I know) will buffer the data internally (make sure AllowWriteStreamBuffering is set to true on uploadRequest), figure out the content length, and send the request.
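For the truly chunked variant described above (a sketch; "downloaduri" and "uripath" are the placeholder URIs from the question, and it assumes the target server accepts a chunked POST), set SendChunked on the upload request and copy the download stream straight into the upload stream without buffering the whole payload:
HttpWebRequest downloadRequest = WebRequest.Create(new Uri("downloaduri")) as HttpWebRequest;
using (HttpWebResponse downloadResponse = downloadRequest.GetResponse() as HttpWebResponse)
using (Stream downloadStream = downloadResponse.GetResponseStream())
{
    var uploadRequest = (HttpWebRequest)WebRequest.Create(new Uri("uripath"));
    uploadRequest.Method = "POST";
    uploadRequest.SendChunked = true;                // no Content-Length needed
    uploadRequest.AllowWriteStreamBuffering = false; // stream straight through
    using (Stream uploadStream = uploadRequest.GetRequestStream())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = downloadStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            uploadStream.Write(buffer, 0, read);
        }
    }
    using (var uploadResponse = uploadRequest.GetResponse())
    {
        // inspect uploadResponse as needed
    }
}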
