I'm storing bitmap images in an Azure blob store and delivering them to a .Net Micro Framework device. Because of memory limitations on the device I need to break the files into chunks and deliver them to the device where they are to be recombined onto the device's microSD card. I am having trouble with byte fidelity and am struggling to understand this pared down test.
I have a simple bitmap on azure: https://filebytetest9845.blob.core.windows.net/files/helloworld.bmp It is just a black and white bitmap of the words "Hello World".
Here's some test code I've written to sit in an ASP.NET Web API controller and read the bytes, ready for breaking into chunks. But to test, I just store the bytes to a local file.
[Route("api/testbytes/")]
[AcceptVerbs("GET", "POST")]
public void TestBytes()
{
    var url = "https://filebytetest9845.blob.core.windows.net/files/helloworld.bmp";
    var fileRequest = (HttpWebRequest) WebRequest.Create(url);
    var fileResponse = (HttpWebResponse) fileRequest.GetResponse();
    if (fileResponse.StatusCode == HttpStatusCode.OK)
    {
        if (fileResponse.ContentLength > 0)
        {
            var responseStream = fileResponse.GetResponseStream();
            if (responseStream != null)
            {
                var contents = new byte[fileResponse.ContentLength];
                responseStream.Read(contents, 0, (int) fileResponse.ContentLength);
                if (!Directory.Exists(@"C:\Temp\Bytes\")) Directory.CreateDirectory(@"C:\Temp\Bytes\");
                using (var fs = System.IO.File.Create(@"C:\Temp\Bytes\helloworldbytes.bmp"))
                {
                    fs.Write(contents, 0, (int) fileResponse.ContentLength);
                }
            }
        }
    }
}
Here's the original bitmap:
And here's the version saved to disk:
As you can see they are different, but my code should just be saving a byte-for-byte copy. Why are they different?
Try this:
var contents = new byte[fileResponse.ContentLength];
int totalRead = 0;
while (totalRead < fileResponse.ContentLength)
{
    totalRead += responseStream.Read(contents, totalRead, (int)fileResponse.ContentLength - totalRead);
}
It looks like it can't download the whole image in a single Read call, so you have to call Read again until the whole image has been downloaded.
Atomosk is right - a single Read call can't be relied on to read the whole response. If you are using .NET 4+, you can use this code to read the full response stream:
var fileResponse = (HttpWebResponse)fileRequest.GetResponse();
if (fileResponse.StatusCode == HttpStatusCode.OK)
{
    var responseStream = fileResponse.GetResponseStream();
    if (responseStream != null)
    {
        using (var ms = new MemoryStream())
        {
            responseStream.CopyTo(ms);
            ms.Position = 0;
            using (var fs = System.IO.File.Create(@"C:\Temp\Bytes\helloworldbytes.bmp"))
            {
                ms.CopyTo(fs);
            }
        }
    }
}
With this code you don't need to know the Content-Length, since it is not always available.
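Back to the original goal of chunking for the device: once the full file is buffered by either approach above, splitting the byte array into fixed-size chunks could look like the sketch below. The `Chunker` class name and the chunk size are illustrative, not part of the original code.

```csharp
using System;
using System.Collections.Generic;

public static class Chunker
{
    // Split a byte array into chunks of at most chunkSize bytes;
    // the final chunk may be shorter.
    public static List<byte[]> Split(byte[] data, int chunkSize)
    {
        var chunks = new List<byte[]>();
        for (int offset = 0; offset < data.Length; offset += chunkSize)
        {
            int size = Math.Min(chunkSize, data.Length - offset);
            var chunk = new byte[size];
            Buffer.BlockCopy(data, offset, chunk, 0, size);
            chunks.Add(chunk);
        }
        return chunks;
    }
}
```

Each chunk can then be delivered to the device in order, and the device appends them one by one to the file on its microSD card.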
Related
I have a big file and I want to send it to a Web API, which will then send it to Amazon. Since the file is big, I want to send it to Amazon chunk-wise.
So if I have a 1 GB file, I want my API to receive it in, say, 20 MB chunks so that I can send each chunk to Amazon and then receive the next 20 MB. How is this doable? Below is my attempt.
public async Task<bool> Upload()
{
    var fileuploadPath = ConfigurationManager.AppSettings["FileUploadLocation"];
    var provider = new MultipartFormDataStreamProvider(fileuploadPath);
    var content = new StreamContent(HttpContext.Current.Request.GetBufferlessInputStream(true));
    // The code below writes to a folder, but I want to process each chunk as soon as I receive it
    await content.ReadAsMultipartAsync(provider);
    return true;
}
Pseudo Code:
While (await content.ReadAsMultipartAsync(provider) == 20 MB chunk)
{
    // Do something
    // Then again do something with the rest of the chunks, and so on.
}
The file can be as large as 1 GB.
As of now the entire file is sent by this line of code:
await content.ReadAsMultipartAsync(provider);
I am lost here, please help. All I want is to receive the file in small chunks and process them.
P.S.: I am sending the file as multipart/form-data from Postman to test.
Attempt No 2:
var filesReadToProvider = await Request.Content.ReadAsMultipartAsync();
foreach (var content in filesReadToProvider.Contents)
{
    var stream = await content.ReadAsStreamAsync();
    using (StreamReader sr = new StreamReader(stream))
    {
        string line = "";
        while ((line = sr.ReadLine()) != null)
        {
            using (MemoryStream outputStream = new MemoryStream())
            using (StreamWriter sw = new StreamWriter(outputStream))
            {
                sw.WriteLine(line);
                sw.Flush();
                // Do Something
            }
        }
    }
}
No time to test this, but the ReadBlock method seems to be what you want to use.
It should look something like the code below; this assumes all your other code is good and you just needed some help with the buffering. This is a "blocking" read operation, but there is also a ReadBlockAsync method which returns a Task.
const int bufferSize = 1024;
var filesReadToProvider = await Request.Content.ReadAsMultipartAsync();
foreach (var content in filesReadToProvider.Contents)
{
    var stream = await content.ReadAsStreamAsync();
    using (StreamReader sr = new StreamReader(stream))
    {
        int charsRead;
        char[] buffer = new char[bufferSize];
        while ((charsRead = sr.ReadBlock(buffer, 0, bufferSize)) > 0)
        {
            // Do something with the first <charsRead> characters of buffer,
            // not with <bufferSize>: <charsRead> holds the number of
            // characters actually read by the call to ReadBlock
        }
    }
}
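One caveat: StreamReader and ReadBlock work in characters, so they are only safe for text; for binary uploads, reading the raw content stream in byte chunks avoids any encoding damage. A sketch of that approach — `ChunkedReader` is a hypothetical helper, and the inner fill loop is needed because ReadAsync may return fewer bytes than requested:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class ChunkedReader
{
    // Read a stream in chunks of up to chunkSize bytes, handing each
    // filled chunk to a callback. The inner loop keeps filling because
    // ReadAsync may return fewer bytes than requested.
    public static async Task ReadInChunksAsync(
        Stream source, int chunkSize, Func<byte[], int, Task> processChunk)
    {
        var buffer = new byte[chunkSize];
        int filled;
        do
        {
            filled = 0;
            int read;
            while (filled < chunkSize &&
                   (read = await source.ReadAsync(buffer, filled, chunkSize - filled)) > 0)
            {
                filled += read;
            }
            if (filled > 0)
            {
                // e.g. forward these <filled> bytes to Amazon as one part
                await processChunk(buffer, filled);
            }
        } while (filled == chunkSize);
    }
}
```

With a stream from `content.ReadAsStreamAsync()` and a 20 MB `chunkSize`, each callback invocation would receive one upload-ready part.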
I am using the HttpListener class to create a very simple web server.
I am able to serve html, javascript, attachements probably everything to the browser. The only thing I am having trouble is giving the browser a wav file so that it plays it.
The file that I want to play in the browser is this one:
https://dl.dropboxusercontent.com/u/81397375/a.wav
Note how if you click on that link your browser starts playing the audio file instead of downloading it as an attachment. I want to do the same thing with the HttpListener class!
Anyway, here is my code:
string pathToMyAudioFile = @"c:\a.wav";
// create web server
var web = new HttpListener();
// listen on port 8081 only on local connections for testing purposes
web.Prefixes.Add("http://localhost:8081/");
Console.WriteLine(@"Listening...");
web.Start();
// run web server forever
while (true)
{
    var context = web.GetContext();
    var requestUrl = context.Request.Url.LocalPath.Trim('/');
    // this command will stop the web server
    if (requestUrl == "Stop")
    {
        context.Response.StatusCode = 200; // set response to ok
        context.Response.OutputStream.Close();
        break;
    }
    else if (requestUrl == "DownloadAudio") // <--------- here is where I am interested
    {
        // we are ready to give the audio file to the browser
        using (var fs = new FileStream(pathToMyAudioFile, FileMode.Open))
        {
            context.Response.ContentLength64 = fs.Length;
            context.Response.SendChunked = true;
            //context.Response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
            context.Response.ContentType = "audio/wav";
            //context.Response.AddHeader("Content-disposition", "attachment; filename=" + fs.Name);
            context.Response.StatusCode = 206; // set to partial content
            byte[] buffer = new byte[64 * 1024];
            try
            {
                using (BinaryWriter bw = new BinaryWriter(context.Response.OutputStream))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        bw.Write(buffer, 0, read);
                        bw.Flush(); // seems to have no effect
                    }
                    bw.Close();
                }
            }
            catch
            {
                Console.Write("closed connection");
            }
        }
    }
    else
    {
        context.Response.StatusCode = 404; // set response to not found
    }
    // close output stream
    context.Response.OutputStream.Close();
}
web.Stop();
Now when I go to http://localhost:8081/DownloadAudio I see this:
But I am not able to play the file. Why? What headers am I missing? I do not want to download the file as an attachment.
Solution
I just found the solution. I was missing the Content-Range header. This is how the response looks when making the same request against a real web server (IIS):
Note how it specifies the ranges it sends with the header: Content-Range: bytes 0-491515/491516
So I just have to add the line
context.Response.Headers.Add("Content-Range", $"bytes 0-{fs.Length-1}/{fs.Length}");
to my code, and now it works! If the audio file were very large, I could do some math so I don't return everything at once.
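For that "some math": the browser sends a Range request header naming the byte window it wants. A minimal sketch of parsing it — `RangeMath.ParseRange` is an illustrative helper, not part of the original code:

```csharp
using System;

public static class RangeMath
{
    // Parse a request header like "bytes=0-499" or "bytes=500-" against
    // a file of the given length; returns an inclusive (start, end) pair.
    // With no header, the whole file is requested.
    public static (long Start, long End) ParseRange(string rangeHeader, long fileLength)
    {
        long start = 0, end = fileLength - 1;
        if (!string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes="))
        {
            var parts = rangeHeader.Substring("bytes=".Length).Split('-');
            if (parts[0].Length > 0) start = long.Parse(parts[0]);
            if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);
        }
        return (start, end);
    }
}
```

The 206 response would then carry `Content-Range: bytes {start}-{end}/{fileLength}`, and the handler would seek to `start` and write only `end - start + 1` bytes.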
Here is the goal:
1) Get image from URL, in this case Google Static Maps API
2) Insert this image into an Excel Worksheet. I am okay if I have to create (or use an existing) shape and set the background to the image. I am also okay inserting at specific cells. I can define the image size via the Google Static Maps API (see URL above) so it will always be known.
I am not entirely clear on how to do this WITHOUT saving the file directly to the file system first.
I currently have code like this which gets the image in a MemoryStream format:
public static MemoryStream GetStaticMapMemoryStream(string requestUrl, string strFileLocation)
{
    try
    {
        HttpWebRequest request = WebRequest.Create(requestUrl) as HttpWebRequest;
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            if (response.StatusCode != HttpStatusCode.OK)
                throw new Exception(String.Format(
                    "Server error (HTTP {0}: {1}).",
                    response.StatusCode,
                    response.StatusDescription));
            using (BinaryReader reader = new BinaryReader(response.GetResponseStream()))
            {
                Byte[] lnByte = reader.ReadBytes(1 * 700 * 500 * 10);
                using (FileStream lxFS = new FileStream(strFileLocation, FileMode.Create))
                {
                    lxFS.Write(lnByte, 0, lnByte.Length);
                }
                MemoryStream msNew = new MemoryStream();
                msNew.Write(lnByte, 0, lnByte.Length);
                return msNew;
            }
        }
    }
    catch (Exception e)
    {
        System.Windows.Forms.MessageBox.Show(e.Message);
        return null;
    }
}
Note that in the middle of the above code, I write the image to the file system as well. I'd like to avoid this part if at all possible.
At any rate, my code can create a rectangle, call the above sequence which saves the image, and then grab the image and populate the background of the rectangle:
Excel.Shape shapeStaticMap = wsNew2.Shapes.AddShape(Office.MsoAutoShapeType.msoShapeRectangle, 0, 0, 700, 500);
string strFileLocation = @"C:\Temp\test.jpg";
MemoryStream newMS = GetStaticMapMemoryStream(strStaticMapUrl, strFileLocation);
shapeStaticMap.Fill.UserPicture(strFileLocation);
So the real problem here is that I'd like to skip the "write to file and then grab from file" back-and-forth. It seems like an unnecessary step, and I anticipate that it will also get messy with file permissions and what-not.
UPDATE
Okay, so I basically gave up and left it using a local file. That worked for a while, but now I'm trying to re-work this code to grab an image from a different source where I don't know the image size in advance. The method above requires me to know the SIZE of the image in advance. How do I modify the code above to use any image size dynamically?
Use this version of GetStaticMapMemoryStream:
public static MemoryStream GetStaticMapMemoryStream(string requestUrl)
{
    try
    {
        HttpWebRequest request = WebRequest.Create(requestUrl) as HttpWebRequest;
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            if (response.StatusCode != HttpStatusCode.OK)
                throw new Exception(String.Format(
                    "Server error (HTTP {0}: {1}).",
                    response.StatusCode,
                    response.StatusDescription));
            var responseStream = response.GetResponseStream();
            var memoryStream = new MemoryStream();
            responseStream.CopyTo(memoryStream);
            memoryStream.Position = 0;
            return memoryStream;
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        return null;
    }
}
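To address the UPDATE about unknown image sizes: once the MemoryStream is returned, the dimensions can be read from it directly. One dependency-free option, assuming the response is a PNG (the default Static Maps format), is to parse them from the header: the IHDR chunk stores width and height as big-endian 32-bit integers at byte offsets 16 and 20. `PngInfo` here is an illustrative helper, not part of the original code:

```csharp
using System;
using System.IO;

public static class PngInfo
{
    // Read width and height from a PNG stream. A PNG starts with an
    // 8-byte signature, then the IHDR chunk (4-byte length, 4-byte type),
    // so width sits at offset 16 and height at offset 20, big-endian.
    public static (int Width, int Height) GetDimensions(Stream png)
    {
        var header = new byte[24];
        int filled = 0, read;
        while (filled < header.Length &&
               (read = png.Read(header, filled, header.Length - filled)) > 0)
        {
            filled += read;
        }
        if (filled < header.Length)
            throw new InvalidDataException("Stream too short to be a PNG.");
        int width = (header[16] << 24) | (header[17] << 16) | (header[18] << 8) | header[19];
        int height = (header[20] << 24) | (header[21] << 16) | (header[22] << 8) | header[23];
        return (width, height);
    }
}
```

Remember to reset `ms.Position = 0` afterwards before handing the stream on. Alternatively, if a System.Drawing reference is acceptable, `Image.FromStream(ms)` exposes Width and Height for any image format.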
We have a page that users can download media and we construct a folder structure similar to the following and zip it up and send it back to the user in the response.
ZippedFolder.zip
    - Folder A
        - File 1
        - File 2
    - Folder B
        - File 3
        - File 4
The existing implementation that accomplishes this saves files and directories temporarily to file system and then deletes them at the end. We are trying to get away from doing this and would like to accomplish this entirely in memory.
I am able to successfully create a ZipFile with files in it, but the problem I am running into is creating Folder A and Folder B and adding files to those and then adding those two folders to the Zip File.
How can I do this without saving to the file system?
The code for just saving the file streams to the zip file and then setting the Output Stream on the response is the following.
public Stream CompressStreams(IList<Stream> Streams, IList<string> StreamNames, Stream OutputStream = null)
{
    MemoryStream Response = null;
    using (ZipFile ZippedFile = new ZipFile())
    {
        for (int i = 0, length = Streams.Count; i < length; i++)
        {
            ZippedFile.AddEntry(StreamNames[i], Streams[i]);
        }
        if (OutputStream != null)
        {
            ZippedFile.Save(OutputStream);
        }
        else
        {
            Response = new MemoryStream();
            ZippedFile.Save(Response);
            // Move the stream back to the beginning for reading
            Response.Seek(0, SeekOrigin.Begin);
        }
    }
    return Response;
}
EDIT We are using DotNetZip for the zipping/unzipping library.
Here's another way of doing it using System.IO.Compression.ZipArchive
public Stream CompressStreams(IList<Stream> Streams, IList<string> StreamNames, Stream OutputStream = null)
{
    MemoryStream Response = new MemoryStream();
    using (ZipArchive ZippedFile = new ZipArchive(Response, ZipArchiveMode.Create, true))
    {
        for (int i = 0, length = Streams.Count; i < length; i++)
        {
            using (var entry = ZippedFile.CreateEntry(StreamNames[i]).Open())
            {
                Streams[i].CopyTo(entry);
            }
        }
    }
    if (OutputStream != null)
    {
        Response.Seek(0, SeekOrigin.Begin);
        Response.CopyTo(OutputStream);
    }
    return Response;
}
and a little test:
using (var write = new FileStream(@"C:\users\Public\Desktop\Testzip.zip", FileMode.OpenOrCreate, FileAccess.Write))
using (var read = new FileStream(@"C:\windows\System32\drivers\etc\hosts", FileMode.Open, FileAccess.Read))
{
    CompressStreams(new List<Stream>() { read }, new List<string>() { @"A\One.txt" }, write);
}
Re: your comment -- sorry, I'm not sure whether it creates anything in the background, but you're not creating anything yourself beyond the streams you pass in.
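Worth noting for the original folder question: with either library, the folder layout inside the archive comes entirely from the entry names, with a forward slash acting as the directory separator, so no directories ever touch the file system. A self-contained sketch with ZipArchive (the entry names and `InMemoryZip` helper are illustrative):

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

public static class InMemoryZip
{
    // Build an archive entirely in memory: entry names like
    // "Folder A/File 1.txt" create the folder structure themselves,
    // so nothing is ever written to the file system.
    public static MemoryStream Build()
    {
        var ms = new MemoryStream();
        using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var name in new[] { "Folder A/File 1.txt", "Folder B/File 3.txt" })
            {
                using (var entry = zip.CreateEntry(name).Open())
                {
                    var bytes = Encoding.UTF8.GetBytes("contents of " + name);
                    entry.Write(bytes, 0, bytes.Length);
                }
            }
        }
        ms.Seek(0, SeekOrigin.Begin);
        return ms;
    }
}
```

The same idea applies to DotNetZip: `ZippedFile.AddEntry("Folder A/File 1", stream)` nests the entry under Folder A without any temporary directories.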
I want to iterate through the contents of a zipped archive and, where the contents are readable, display them. I can do this for text based files, but can't seem to work out how to pull out binary data from things like images. Here's what I have:
var zipArchive = new System.IO.Compression.ZipArchive(stream);
foreach (var entry in zipArchive.Entries)
{
    using (var entryStream = entry.Open())
    {
        if (IsFileBinary(entry.Name))
        {
            using (BinaryReader br = new BinaryReader(entryStream))
            {
                //var fileSize = await reader.LoadAsync((uint)entryStream.Length);
                var fileSize = br.BaseStream.Length;
                byte[] read = br.ReadBytes((int)fileSize);
                binaryContent = read;
            }
        }
    }
}
I can see inside the zip file, but calls to the stream's Length property result in a not-supported error. Also, given that I'm getting a long and then having to cast it to an int, it feels like I'm missing something quite fundamental about how this should work.
I think the stream will decompress the data as it is read, which means that the stream cannot know the decompressed length. Calling entry.Length should return the correct size value that you can use. You can also call entry.CompressedLength to get the compressed size.
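Putting that together, a sketch of reading one entry's decompressed bytes via entry.Length; the fill loop guards against Read returning fewer bytes than requested, and `ZipEntryReader` is an illustrative helper, not part of the original code:

```csharp
using System.IO;
using System.IO.Compression;

public static class ZipEntryReader
{
    // entry.Length is the decompressed size, so it can size the buffer.
    // The entry stream decompresses on the fly and cannot seek or report
    // Length itself, hence the fill loop over repeated Read calls.
    public static byte[] ReadAllBytes(ZipArchiveEntry entry)
    {
        var contents = new byte[entry.Length];
        using (var stream = entry.Open())
        {
            int filled = 0, read;
            while (filled < contents.Length &&
                   (read = stream.Read(contents, filled, contents.Length - filled)) > 0)
            {
                filled += read;
            }
        }
        return contents;
    }
}
```

The cast to int disappears as well, since the byte array is sized directly from `entry.Length`.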
Just copy the stream into a file or another stream:
using (var fs = await file.OpenStreamForWriteAsync())
{
    using (var src = entry.Open())
    {
        var buffLen = 1024;
        var buff = new byte[buffLen];
        int read;
        while ((read = await src.ReadAsync(buff, 0, buffLen)) > 0)
        {
            await fs.WriteAsync(buff, 0, read);
            await fs.FlushAsync();
        }
    }
}