I totally hit my limit here:
I am working with an API that offers me this:
For a running event, the video images can be received in the form of a
continuous multipart stream. The stream ends as soon as the event
finishes.
How would I capture this?
I started coding something like this:
// req is the HttpWebRequest already created for the stream URL
System.Net.WebResponse res = req.GetResponse();
System.IO.Stream receiveStream = res.GetResponseStream();
using (System.IO.StreamReader sr = new System.IO.StreamReader(receiveStream, Encoding.UTF8))
which works like a charm, as long as there is only one response.
Could someone point me in the right direction?
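A minimal sketch of one approach, assuming req targets the event's stream URL: read the response stream in chunks in a loop instead of calling ReadToEnd(), which only returns once the server closes the stream. HandleChunk is a hypothetical callback, not part of the API, where you would accumulate bytes and split on the multipart boundary.
using (System.Net.WebResponse res = req.GetResponse())
using (System.IO.Stream stream = res.GetResponseStream())
{
    byte[] buffer = new byte[8192];
    int read;
    // Read() returns 0 only when the server ends the stream (event finished).
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        HandleChunk(buffer, read); // hypothetical: split on the multipart boundary here
    }
}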
I am working with HttpClient and I want to get the stream response.
I want to make it work like WebResponse.GetResponseStream() so I can read it with a BinaryReader, like this:
BinaryReader reader = new BinaryReader(new BufferedStream(myWebResponse.GetResponseStream()));
I have tried GetStreamAsync, but it never completes, because I am forced to use await and the HttpResponseMessage keeps receiving bytes indefinitely.
BinaryReader reader = new BinaryReader(new BufferedStream(await myHttpClient.GetStreamAsync(url)));
I don't know how to use CopyToAsync, so I don't know if it works.
Any ideas?
Edit: more details.
The GetStreamAsync method works when I'm receiving a single answer, but since I'm receiving a live stream, I don't get the stream until the live broadcast ends!
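A hedged sketch of what usually fixes this: pass HttpCompletionOption.ResponseHeadersRead so HttpClient returns as soon as the headers arrive, instead of buffering the whole (never-ending) body. This has to run inside an async method; url is a placeholder.
// Return as soon as headers arrive; don't buffer the infinite body.
using (HttpResponseMessage response = await myHttpClient.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
using (Stream stream = await response.Content.ReadAsStreamAsync())
using (BinaryReader reader = new BinaryReader(new BufferedStream(stream)))
{
    // Read from "reader" incrementally while the live stream is still running.
}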
I am writing an RCON tool in Visual Studio for Black Ops. I know it's an old game, but I still have a server running.
I am trying to download the data from this link
Black Ops Log File
I am using this code.
System.Net.WebClient wc = new System.Net.WebClient();
string raw = wc.DownloadString(logFile);
Which takes between 6441 ms and 13741 ms according to Visual Studio.
Another attempt was...
string html = null;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(logFile);
request.AutomaticDecompression = DecompressionMethods.GZip;
request.Proxy = null;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
html = reader.ReadToEnd();
}
Which also takes around 6133 ms according to VS debugging.
I have seen other RCON tools respond to commands really quickly. Mine takes at best 5000 ms, which is not really acceptable. How can I download this information more quickly? I am told it shouldn't take this long. What am I doing wrong?
This is just how long the server takes to answer.
In the future you can debug such problems yourself using network tools such as Fiddler or by profiling your code to see what takes the longest amount of time.
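If you want numbers instead of guesses, a quick sketch with Stopwatch separates the server's time-to-first-byte from the transfer itself (logFile is the URL from the question):
var sw = System.Diagnostics.Stopwatch.StartNew();
var request = (HttpWebRequest)WebRequest.Create(logFile);
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    Console.WriteLine("Time to first byte: {0} ms", sw.ElapsedMilliseconds);
    reader.ReadToEnd();
    Console.WriteLine("Total time: {0} ms", sw.ElapsedMilliseconds);
}
If the first number dominates, the delay is on the server's side, not in your code.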
I am trying to read a response from a server that I receive when I send a POST request. Fiddler says it is a JSON response. How do I decode it to a normal string using C# WinForms, preferably with no outside APIs? I can provide additional code/Fiddler results if you need them.
The gibberish came from my attempts to read the stream in the code below:
// requirejs is the HttpWebRequest for the POST; logBytes is the request body
Stream sw = requirejs.GetRequestStream();
sw.Write(logBytes, 0, logBytes.Length);
sw.Close();
response = (HttpWebResponse)requirejs.GetResponse();
Stream stream = response.GetResponseStream();
StreamReader sr = new StreamReader(stream);
MessageBox.Show(sr.ReadToEnd());
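One guess worth checking first (an assumption, not confirmed by the question): if Fiddler shows a Content-Encoding: gzip header on the response, ReadToEnd() on the raw stream produces exactly this kind of gibberish. Enabling automatic decompression on the request (assuming requirejs is an HttpWebRequest) rules that out:
// If the server compresses the response, decompress it transparently.
requirejs.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;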
As mentioned in the comments, Newtonsoft.Json is really a good library and worth using -- very lightweight.
If you really want to only use Microsoft's .NET libraries, also consider System.Web.Script.Serialization.JavaScriptSerializer.
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
var jsonObject = serializer.DeserializeObject(sr.ReadToEnd());
Going to assume (you haven't clarified yet) that you need to actually decode the stream, since A) retrieving a remote stream of text is well documented, and B) you can't do anything much with a non-decoded JSON stream.
Your best course of action is to implement System.Web.Helpers.Json:
using System.Web.Helpers;
...
// Json.Decode takes a string, so read the response stream into one first
var jsonString = new StreamReader(jsonStream).ReadToEnd();
var jsonObj = Json.Decode(jsonString);
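Note that Json.Decode returns a dynamic object, so you can access members directly. A tiny self-contained example with a hypothetical payload:
// Hypothetical payload, just for illustration:
dynamic obj = Json.Decode("{\"status\":\"ok\",\"count\":2}");
MessageBox.Show(obj.status);            // shows "ok"
MessageBox.Show(obj.count.ToString()); // shows "2"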
I would like to load an Excel file directly from an FTP site into a memory stream. Then I want to open the file in the FarPoint Spread control using the OpenExcel(Stream) method. My issue is I'm not sure if it's possible to download a file directly into memory. Anyone know if this is possible?
Yes, you can download a file from FTP to memory.
I think you can even pass the Stream from the FTP server to be processed by FarPoint.
WebRequest request = FtpWebRequest.Create("ftp://asd.com/file");
using (WebResponse response = request.GetResponse())
{
Stream responseStream = response.GetResponseStream();
OpenExcel(responseStream);
}
Using WebClient you can do nearly the same. Generally, WebClient is easier to use but gives you fewer configuration options and less control (e.g., no timeout setting).
WebClient wc = new WebClient();
using (MemoryStream stream = new MemoryStream(wc.DownloadData("ftp://asd.com/file")))
{
OpenExcel(stream);
}
Take a look at WebClient.DownloadData. You should be able to download the file directly to memory without writing it to a file first.
This is untested, but something like:
var spreadSheetStream = new MemoryStream(new WebClient().DownloadData(yourFilePath));
I'm not familiar with FarPoint though, to say whether or not the stream can be used directly with the OpenExcel method. Online examples show the method being used with a FileStream, but I'd assume any kind of Stream would be accepted.
Download file from URL to memory.
My answer does not exactly show how to download a file for use in Excel, but it shows how to create a general-purpose in-memory byte array.
private static byte[] DownloadFile(string url)
{
byte[] result = null;
using (WebClient webClient = new WebClient())
{
result = webClient.DownloadData(url);
}
return result;
}
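To connect this back to the question, the byte array can be wrapped in a MemoryStream and handed to FarPoint. A sketch, assuming OpenExcel accepts any readable Stream and using a placeholder URL:
byte[] data = DownloadFile("ftp://asd.com/file.xls"); // placeholder URL
using (var stream = new MemoryStream(data))
{
    OpenExcel(stream); // FarPoint Spread's OpenExcel(Stream), per the question
}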
I'm trying to zip a memory stream into another memory stream so I can upload it to a REST API. image is the initial memory stream containing a TIFF image.
WebRequest request = CreateWebRequest(...);
request.ContentType = "application/zip";
MemoryStream zip = new MemoryStream();
GZipStream zipper = new GZipStream(zip, CompressionMode.Compress);
image.CopyTo(zipper);
zipper.Flush();
request.ContentLength = zip.Length; // zip.Length is returning 0
Stream reqStream = request.GetRequestStream();
zip.CopyTo(reqStream);
request.GetResponse().Close();
zip.Close();
To my understanding, anything I write to the GZipStream will be compressed and written to whatever stream was passed into its constructor. When I copy the image stream into zipper, it appears nothing is actually copied (image is 200+ MB). This is my first experience with GZipStream, so it's likely I'm missing something; any advice would be greatly appreciated.
EDIT:
Something I should note that was a problem for me: in the above code, image's position was at the very end of the stream, so when I called image.CopyTo(zipper); nothing was copied. Resetting image.Position = 0 before the copy fixed that.
[Edited: to remove incorrect info on GZipStream and its constructor args, and updated with the real answer :)]
After you've copied to the zipper, you need to shift the position of the MemoryStream back to zero, as the process of the zipper writing to the memory stream advances its "cursor", just as reading advances the source stream's:
WebRequest request = CreateWebRequest(...);
request.ContentType = "application/zip";
MemoryStream zip = new MemoryStream();
GZipStream zipper = new GZipStream(zip, CompressionMode.Compress);
image.CopyTo(zipper);
zipper.Flush();
zip.Position = 0; // reset the zip position as this will have advanced when written to.
...
One other thing to note is that the GZipStream is not seekable, so calling .Length will throw an exception.
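Putting the corrections together, a minimal sketch reusing the question's image and request. The key points: pass true so the GZipStream leaves the MemoryStream open, dispose the GZipStream so it writes its final block, and rewind both streams before copying.
MemoryStream zip = new MemoryStream();
// "true" keeps the underlying MemoryStream open after the GZipStream is disposed.
using (GZipStream zipper = new GZipStream(zip, CompressionMode.Compress, true))
{
    image.Position = 0;   // rewind the source, per the question's edit
    image.CopyTo(zipper);
}                         // disposing flushes the final gzip block into zip
zip.Position = 0;         // rewind before sending
request.ContentLength = zip.Length; // now non-zero
using (Stream reqStream = request.GetRequestStream())
{
    zip.CopyTo(reqStream);
}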
I don't know anything about C# and its libraries, but I would try to use Close instead of (or after) Flush first.
(Java's GZIPOutputStream had the same problem of not flushing properly until Java 7.)
See this example:
http://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream.flush.aspx#Y300
You shouldn't be calling Flush on the stream.