When I post a file using HttpClient with the code below:
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
file.CopyTo(stream);
var fileBytes = stream.ToArray();
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(stream), "file", fileName);
var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
Here file is of type IFormFile.
On the API side I retrieve the file as follows:
var base64str = "";
using (var ms = new MemoryStream())
{
    request.file.CopyTo(ms);
    var fileBytes = ms.ToArray();
    base64str = Convert.ToBase64String(fileBytes);
    // act on the Base64 data
}
I get 0 bytes. My question is: what's wrong with this approach?
But if I use the code below, then the API works and I get what I posted.
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
file.CopyTo(stream);
var fileBytes = stream.ToArray();
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(new MemoryStream(fileBytes)), "file", fileName);
The difference is how I add the stream content:
formContent.Add(new StreamContent(stream), "file", fileName);
vs
formContent.Add(new StreamContent(new MemoryStream(fileBytes)), "file", fileName);
Why doesn't the first approach work while the second one does?
You need to add stream.Seek(0, SeekOrigin.Begin); in order to jump back to the beginning of the MemoryStream. You should also use CopyToAsync.
In the second version, you had a fresh MemoryStream created from the byte[] array, which is positioned at 0 anyway.
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
using var stream = new MemoryStream();
await file.CopyToAsync(stream);
stream.Seek(0, SeekOrigin.Begin);
formContent.Headers.ContentType.MediaType = "multipart/form-data";
formContent.Add(new StreamContent(stream), "file", fileName);
using var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
Although to be honest, the MemoryStream seems entirely unnecessary here. Just pass a Stream from file directly.
using var formContent = new MultipartFormDataContent("NKdKd9Yk");
formContent.Headers.ContentType.MediaType = "multipart/form-data";
using var stream = file.OpenReadStream();
formContent.Add(new StreamContent(stream), "file", fileName);
using var response = await httpClient.PostAsync(GetDocumentUpdateRelativeUrl(), formContent);
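To make the failure mode concrete, here is a small sketch (using the same file variable as in the question) of what Position looks like around the copy:

using var stream = new MemoryStream();
file.CopyTo(stream);                 // writing advances Position to the end of the data
Console.WriteLine(stream.Position);  // equals stream.Length, so reading from here yields 0 bytes
stream.Seek(0, SeekOrigin.Begin);    // rewind
Console.WriteLine(stream.Position);  // 0, so StreamContent will now see the whole payload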
Related
I am trying to upload a file to Amazon S3, but I get the error "Cannot access a closed stream" in await client.PutObjectAsync(request);
using (var stream = new MemoryStream())
{
    using (var sWriter = new StreamWriter(stream, Encoding.UTF8))
    {
        await sWriter.WriteAsync(commandWithMetadata.SerializeToString());
        stream.Seek(0, SeekOrigin.Begin);

        var fileName = GetFileName(command);
        var request = new PutObjectRequest
        {
            BucketName = BucketName,
            Key = fileName,
            InputStream = stream
        };

        await client.PutObjectAsync(request);
    }
}
There is an AutoCloseStream property on the request, which is true by default, so the Amazon library closes the stream automatically.
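A minimal sketch of opting out of that behavior, reusing the names from the snippet above, so the enclosing using blocks stay in charge of disposal:

var request = new PutObjectRequest
{
    BucketName = BucketName,
    Key = fileName,
    InputStream = stream,
    // keep the stream open so the using blocks can dispose it themselves
    AutoCloseStream = false
};
await client.PutObjectAsync(request);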
I would like to read an image from file or blob storage, base64 encode it as a stream, and then pass that stream to StreamContent. The following code times out:
[HttpGet, Route("{id}", Name = "GetImage")]
public HttpResponseMessage GetImage([FromUri] ImageRequest request)
{
    var filePath = HostingEnvironment.MapPath("~/Areas/API/Images/Mr-Bean-Drivers-License.jpg");

    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(filePath, FileMode.Open);
    var cryptoStream = new CryptoStream(stream, new ToBase64Transform(), CryptoStreamMode.Write);
    result.Content = new StreamContent(cryptoStream);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return result;
}
I am able to get the following code to work by reading the file entirely into memory instead of keeping it as a stream, but I would like to avoid that.
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
using (var fileStream = new FileStream(filePath, FileMode.Open))
{
    using (var image = Image.FromStream(fileStream))
    {
        var memoryStream = new MemoryStream();
        image.Save(memoryStream, image.RawFormat);
        byte[] imageBytes = memoryStream.ToArray();
        var base64String = Convert.ToBase64String(imageBytes);

        result.Content = new StringContent(base64String);
        result.Content.Headers.ContentType = new MediaTypeHeaderValue("text/plain");
        return result;
    }
}
The problem here is you're passing CryptoStreamMode.Write to the constructor of CryptoStream whereas you should be passing CryptoStreamMode.Read because the CryptoStream is going to be read as the HttpResponseMessage is returned.
For more details about this, see Figolu's great explanation about the various usages of CryptoStream in this answer.
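In code, the fix is a one-word change; a sketch of the relevant lines from the question with only the mode swapped:

var stream = new FileStream(filePath, FileMode.Open, FileAccess.Read);
// Read mode: bytes are pulled from the FileStream and base64-encoded on the fly
// as the response body is written out
var cryptoStream = new CryptoStream(stream, new ToBase64Transform(), CryptoStreamMode.Read);
result.Content = new StreamContent(cryptoStream);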
I'm trying to return a file in an ASP.NET Web API controller. This file is a dynamically-generated PDF saved in a MemoryStream.
The client (browser) receives the file successfully, but when I open the file, I see that all the pages are totally blank.
The thing is that if I take the same MemoryStream and write it to a file, this disk file is displayed correctly, so I assume that the problem is related to the file transfer via Web.
My controller looks like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
    MemoryStream memStream = new MemoryStream();
    PdfExporter.Instance.Generate(memStream);
    memStream.Position = 0;

    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(memStream.ToArray()); // OR: new StreamContent(memStream);
    return result;
}
Just to try, if I write the stream to disk, it's displayed correctly:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
    MemoryStream memStream = new MemoryStream();
    PdfExporter.Instance.Generate(memStream);
    memStream.Position = 0;

    using (var fs = new FileStream("C:\\Temp\\test.pdf", FileMode.OpenOrCreate, FileAccess.ReadWrite))
    {
        memStream.CopyTo(fs);
    }
    return null;
}
The differences are:
PDF saved on disk: 34KB
PDF transferred via web: 60KB (!)
If I compare both files contents, the main differences are:
[Image: file differences between the two PDFs]
On the left is the PDF transferred via web; on the right, the PDF saved to disk.
Is there something wrong with my code?
Maybe something related to encodings?
Thanks!
Well, it turned out to be a client (browser) problem, not a server problem. I'm using AngularJS on the frontend, so when the response was received, Angular automatically converted it to a JavaScript string. In that conversion, the binary contents of the file were somehow altered...
Basically it was solved by telling Angular not to convert the response to a string:
$http.get(url, { responseType: 'arraybuffer' })
    .then(function (response) {
        var dataBlob = new Blob([response.data], { type: 'application/pdf' });
        FileSaver.saveAs(dataBlob, 'myFile.pdf');
    });
And then saving the response as a file, helped by the Angular File Saver service.
I guess you should set ContentDisposition and ContentType like this:
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
    MemoryStream memStream = new MemoryStream();
    PdfExporter.Instance.Generate(memStream);

    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ByteArrayContent(memStream.ToArray())
    };

    // this line
    result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
    {
        FileName = "YourName.pdf"
    };

    // and this line
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return result;
}
Try this
[HttpGet][Route("export/pdf")]
public HttpResponseMessage ExportAsPdf()
{
    MemoryStream memStream = new MemoryStream();
    PdfExporter.Instance.Generate(memStream);

    // get the contents (ToArray, not GetBuffer: GetBuffer returns the whole
    // internal buffer, which may be longer than the data actually written)
    var buffer = memStream.ToArray();

    // content length for the header
    var contentLength = buffer.Length;

    var statuscode = HttpStatusCode.OK;
    var response = Request.CreateResponse(statuscode);
    response.Content = new StreamContent(new MemoryStream(buffer));
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
    response.Content.Headers.ContentLength = contentLength;

    ContentDispositionHeaderValue contentDisposition = null;
    if (ContentDispositionHeaderValue.TryParse("inline; filename=my_filename.pdf", out contentDisposition))
    {
        response.Content.Headers.ContentDisposition = contentDisposition;
    }
    return response;
}
I'm retrieving a file from Amazon S3. I want to convert the file to bytes so that I can download it as follows:
var download = new FileContentResult(bytes, "application/pdf");
download.FileDownloadName = filename;
return download;
I have the file here:
var client = Amazon.AWSClientFactory.CreateAmazonS3Client(
    accessKey,
    secretKey,
    config);
GetObjectRequest request = new GetObjectRequest();
GetObjectResponse response = client.GetObject(request);
I know about response.WriteResponseStreamToFile() but I want to download the file to the regular downloads folder. If I convert the GetObjectResponse to bytes, I can return the file. How can I do this?
Here's the solution I found for anyone else who needs it:
GetObjectResponse response = client.GetObject(request);
using (Stream responseStream = response.ResponseStream)
{
    var bytes = ReadStream(responseStream);
    var download = new FileContentResult(bytes, "application/pdf");
    download.FileDownloadName = filename;
    return download;
}

public static byte[] ReadStream(Stream responseStream)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
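As an aside, the manual read loop can be replaced by Stream.CopyTo, which performs the same buffered copy internally; a sketch of the equivalent helper:

public static byte[] ReadStream(Stream responseStream)
{
    using (var ms = new MemoryStream())
    {
        responseStream.CopyTo(ms); // buffered copy, same effect as the loop above
        return ms.ToArray();
    }
}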
Just another option:
Stream rs = new MemoryStream(); // must be initialized to a writable stream before CopyTo
using (IAmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
    GetObjectRequest getObjectRequest = new GetObjectRequest();
    getObjectRequest.BucketName = "mybucketname";
    getObjectRequest.Key = "mykey";

    using (var getObjectResponse = client.GetObject(getObjectRequest))
    {
        getObjectResponse.ResponseStream.CopyTo(rs);
    }
}
I struggled to get the cleaner method offered by Alex to work (not sure what I'm missing), but I wanted to do it without the extra ReadStream method offered by Erica (although it worked). Here is what I wound up doing:
var s3Client = new AmazonS3Client(AccessKeyId, SecretKey, Amazon.RegionEndpoint.USEast1);
using (s3Client)
{
    MemoryStream ms = new MemoryStream();
    GetObjectRequest getObjectRequest = new GetObjectRequest();
    getObjectRequest.BucketName = BucketName;
    getObjectRequest.Key = awsFileKey;

    using (var getObjectResponse = s3Client.GetObject(getObjectRequest))
    {
        getObjectResponse.ResponseStream.CopyTo(ms);
    }

    var download = new FileContentResult(ms.ToArray(), "image/png"); // "application/pdf"
    download.FileDownloadName = ToFilePath;
    return download;
}
Stream now has asynchronous methods. In C# 8, you can do this:
public async Task<byte[]> GetAttachmentAsync(string objectPointer)
{
    var objReq = new GetObjectRequest
    {
        BucketName = "bucket-name",
        Key = objectPointer, // the file name
    };

    using var objResp = await _s3Client.GetObjectAsync(objReq);
    using var ms = new MemoryStream();
    await objResp.ResponseStream.CopyToAsync(ms, _ct); // _ct is a CancellationToken
    return ms.ToArray();
}
This won't block any threads while the IO occurs.
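A hypothetical call site, assuming the method lives in an ASP.NET Core controller (where ControllerBase.File turns the bytes into a download); the key and file name here are made up for illustration:

var bytes = await GetAttachmentAsync("attachments/invoice-42.pdf");
return File(bytes, "application/pdf", "invoice-42.pdf");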
I have to retrieve an image from disk or a web link, resize it, and stream it to the client app. This is my controller method.
[HttpPost]
[ActionName("GetImage")]
public HttpResponseMessage RetrieveImage(ImageDetails details)
{
    if (!details.Filename.StartsWith("http"))
    {
        if (!FileProvider.Exists(details.Filename))
        {
            throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.NotFound, "File not found"));
        }
        var filePath = FileProvider.GetFilePath(details.Filename);
        details.Filename = filePath;
    }

    var image = ImageResizer.RetrieveResizedImage(details);
    MemoryStream stream = new MemoryStream();
    // Save image to stream.
    image.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);

    var response = new HttpResponseMessage();
    response.Content = new StreamContent(stream);
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = details.Filename;
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return response;
}
And this is how I am sending the web link (in this case) and receiving the image at the client app end.
HttpClient client = new HttpClient();
client.BaseAddress = new Uri("http://localhost:27066");
client.DefaultRequestHeaders.Accept.Add(
new MediaTypeWithQualityHeaderValue("application/octet-stream"));
ImageDetails img = new ImageDetails { Filename = "http://2.bp.blogspot.com/-W6kMpFQ5pKU/TiUwJJc8iSI/AAAAAAAAAJ8/c3sJ7hL8SOw/s1600/2011-audi-q7-review-3.jpg", Height = 300, Width = 200 };
var response = await client.PostAsJsonAsync("api/Media/GetImage", img);
response.EnsureSuccessStatusCode(); // Throw on error code.
var stream = await response.Content.ReadAsStreamAsync();
FileStream fileStream = System.IO.File.Create("ImageName");
// Initialize the bytes array with the stream length and then fill it with data
byte[] bytesInStream = new byte[stream.Length];
stream.Read(bytesInStream, 0, (int)bytesInStream.Length);
// Use write method to write to the specified file
fileStream.Write(bytesInStream, 0, (int) bytesInStream.Length);
MessageBox.Show("Uploaded");
The image is being retrieved from the web link and the resizing is done properly, but I'm not sure whether it's being streamed properly, as a 0 KB file named "ImageName" is created when it's received at the client app. Can anyone please tell me where I am going wrong? I have been banging my head against it all day :(
Try resetting the position of the memory stream before passing it to the response:
stream.Position = 0;
response.Content = new StreamContent(stream);
I suppose that your image resizing library is leaving the position of the memory stream at the end.
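Applied to the controller in the question, the relevant lines become (a sketch, with only the rewind added):

var image = ImageResizer.RetrieveResizedImage(details);
MemoryStream stream = new MemoryStream();
image.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);
stream.Position = 0; // rewind: Image.Save leaves Position at the end of the written data

var response = new HttpResponseMessage();
response.Content = new StreamContent(stream);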