I have two cloud providers with their client SDKs, say SDK1 and SDK2, and I want to copy a file from one cloud storage to the other. These SDKs expose upload and download APIs like this:
Response uploadAsync(Uri uploadLocation, Stream fileStream);
Stream downloadAsync(Uri downloadLocation);
Earlier I was copying the downloaded Stream to a MemoryStream and passing that to the upload API. That worked, but it obviously loads the entire file into memory, which is not good.
I cannot pass the downloaded Stream directly to the upload API because it checks the stream's Length somewhere, and System.Net.ConnectStream, being non-seekable, throws an exception.
Any pointers on how to use the downloaded Stream (which is of type System.Net.ConnectStream) with the upload API without buffering the entire file?
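For reference, this is roughly what the current MemoryStream-based copy looks like (the sdk1/sdk2 client objects, the URI variables, and the Response type are placeholders based on the signatures above):

// Works, but buffers the entire file in memory before uploading.
Stream download = sdk1.downloadAsync(downloadLocation);
using (var buffer = new MemoryStream())
{
    download.CopyTo(buffer);   // pulls the whole file into memory
    buffer.Position = 0;       // MemoryStream is seekable, so the SDK's Length check passes
    Response response = sdk2.uploadAsync(uploadLocation, buffer);
}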
I have been trying to upload to a OneDrive account and I am hopelessly stuck: I cannot upload files either smaller or larger than 4 MB. I have no issues accessing the drive itself, since I have working functions that create a folder, rename files/folders, and delete files/folders.
https://learn.microsoft.com/en-us/graph/api/driveitem-put-content?view=graph-rest-1.0&tabs=csharp
This Microsoft Graph API documentation is very HTTP-oriented, and I believe I can fairly well "translate" it to C#, but I still fail to grab a file and upload it to OneDrive. Some places online seem to use byte arrays, which I am completely unfamiliar with since my primary language is C++ and we just use ifstream/ofstream. Anyway, here is the relevant portion of the code (I hope this is enough):
var item = await _client.Users[userID].Drive.Items[FolderID]//"01YZM7SMVOQ7YVNBXPZFFKNQAU5OB3XA3K"].Content
.ItemWithPath("LessThan4MB.txt")//"D:\\LessThan4MB.txt")
.CreateUploadSession()
.Request()
.PostAsync();
Console.WriteLine("done printing");
As it stands, it uploads a temporary file with a tilde "~" in its name to OneDrive (as if I were only able to open the file but not import any data into it). If I swap the file name for the full file path, it throws an error:
Message: Found a function 'microsoft.graph.createUploadSession' on an open property. Functions on open properties are not supported.
Try this approach with a MemoryStream and a PutAsync<DriveItem> request:
string path = "D:\\LessThan4MB.txt";
byte[] data = System.IO.File.ReadAllBytes(path);
using (Stream stream = new MemoryStream(data))
{
    var item = await _client.Me.Drive.Items[FolderID]
        .ItemWithPath("LessThan4MB.txt")
        .Content
        .Request()
        .PutAsync<DriveItem>(stream);
}
I am assuming you have already granted the Microsoft Graph Files.ReadWrite.All permission; check your API permissions. I tested this code snippet with the fairly old Microsoft.Graph library version 1.21.0. Hopefully it will work for you too.
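For files larger than 4 MB, Graph needs an upload session instead of a single PUT. A hedged sketch using the same old SDK generation (the file name and chunk size are just examples; ChunkedUploadProvider is the helper class shipped with those Microsoft.Graph versions):

string largePath = "D:\\MoreThan4MB.txt";

// Create an upload session for the target item path.
var uploadSession = await _client.Me.Drive.Items[FolderID]
    .ItemWithPath("MoreThan4MB.txt")
    .CreateUploadSession()
    .Request()
    .PostAsync();

using (Stream stream = System.IO.File.OpenRead(largePath))
{
    // Upload in chunks; Graph recommends chunk sizes that are multiples of 320 KiB.
    var provider = new ChunkedUploadProvider(uploadSession, _client, stream, 320 * 1024);
    DriveItem item = await provider.UploadAsync();
}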
I'm storing large media files in Azure Blob Storage (audio, images, videos) that need to be previewed on my web client application.
Currently the client requests a media file and my server downloads the entire blob to memory, then returns the file to the client.
Controller Action
[HttpGet("[action]/{blobName}")]
public async Task<IActionResult> Audio(string blobName)
{
    byte[] byteArray = await _blobService.GetAudioAsync(blobName);
    return File(byteArray, AVHelper.GetContentType(blobName));
}
Download Service Method
private async Task<byte[]> GetAudioAsync(CloudBlobContainer container, string blobName)
{
    using (MemoryStream stream = new MemoryStream())
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        await blob.DownloadToStreamAsync(stream);
        return stream.ToArray();
    }
}
I'm concerned this is not good design, as the file is effectively downloaded twice in series (blob storage to server, then server to client), which causes slower downloads and high server memory usage. File sizes can be several hundred MB.
Is there a recommended method for doing this? Perhaps something where the server downloads from blob storage and streams the file to the client more or less simultaneously, so the client doesn't have to wait for the server to finish downloading before starting its own download, and the server can discard file contents it has already transmitted?
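For reference, the pass-through streaming described in the question could look roughly like this with the same WindowsAzure.Storage types (the _container field is a placeholder); the response is written to the client while the blob is still being read, so the whole file never sits in server memory:

[HttpGet("[action]/{blobName}")]
public async Task<IActionResult> AudioStream(string blobName)
{
    CloudBlockBlob blob = _container.GetBlockBlobReference(blobName);

    // OpenReadAsync returns a stream over the blob; FileStreamResult copies it
    // to the response as it is read instead of buffering it first.
    Stream blobStream = await blob.OpenReadAsync();

    // enableRangeProcessing (ASP.NET Core 2.1+) lets audio/video players seek.
    return File(blobStream, AVHelper.GetContentType(blobName), enableRangeProcessing: true);
}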
To make the answer visible to others, I'm summarizing the answer shared in the comments:
The suggestion is to redirect the client to the blob URL directly, so the download starts on the client machine and the web application doesn't need to download the blob to a stream or file on the server. Steps (a server-side sketch follows the list):
1. When the client clicks Download, an AJAX request is sent to the server.
2. The server code performs the necessary verification and returns the Azure Storage URL of the file.
3. The AJAX code takes the URL returned from the server, opens a new browser window, and redirects it to the URL.
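A server-side sketch of step 2, reusing the WindowsAzure.Storage types from the question and handing back a short-lived, read-only SAS token so the blob doesn't have to be public (the _container field and action name are placeholders):

[HttpGet("[action]/{blobName}")]
public IActionResult AudioUrl(string blobName)
{
    CloudBlockBlob blob = _container.GetBlockBlobReference(blobName);

    // Read-only SAS token valid for a few minutes.
    string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
    });

    // The AJAX caller opens this URL in a new window/tab and downloads directly from storage.
    return Ok(blob.Uri + sas);
}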
I have a unique scenario where the end result should let me upload a zip file. Here is what happens in my workflow:
Our user is given an application on their local machine. With a click of a button, it will copy files and a zip file to remote-machine-1.
On remote-machine-2, it is running a .NET Core web app.
On remote-machine-1, I'd like to call an endpoint on the web app in order to upload the zip file to remote-machine-2. The caveat is that the user will not be able to specify where this zip file is - the location of the zip file is already known because of how the files and zip file are copied over in the first place.
So the question remains, with the code below - how do I pass in an IFormFile object when I call the endpoint localhost:5000/PublishTargetAsync?file=[???]? Or is there another workaround?
public async Task<bool> PublishTargetAsync(IFormFile file)
{
    if (file != null)
    {
        using (var fileStream = new FileStream(Path.Combine(_targetOutputDirectory.ToFileSystemPath(), file.Name), FileMode.Create))
        {
            await file.CopyToAsync(fileStream);
        }
    }
    return true;
}
A simple but non-optimized approach would be to use HttpClient and post the file contents as a base64-encoded string in JSON, using sample code similar to what is in my link. From there you could work your way down to HttpWebRequest and a network stream, crafting the HTTP request by hand if needed for performance, but the approach above should work for most small files. You'll have to modify your PublishTargetAsync endpoint to handle a POST request with the right type.
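A hedged sketch of that base64-over-JSON idea; the DTO shape, route, and property names are made up for illustration, and the endpoint is reshaped to accept the DTO instead of IFormFile (requires System.IO, System.Net.Http, System.Text and Newtonsoft.Json):

// Hypothetical DTO shared by client and server.
public class ZipUploadDto
{
    public string FileName { get; set; }
    public string ContentBase64 { get; set; }
}

// Client on remote-machine-1: the zip path is already known, so read and post it.
public static async Task UploadZipAsync(string zipPath)
{
    var dto = new ZipUploadDto
    {
        FileName = Path.GetFileName(zipPath),
        ContentBase64 = Convert.ToBase64String(File.ReadAllBytes(zipPath))
    };

    using (var client = new HttpClient())
    {
        var body = new StringContent(JsonConvert.SerializeObject(dto), Encoding.UTF8, "application/json");
        var response = await client.PostAsync("http://localhost:5000/PublishTargetAsync", body);
        response.EnsureSuccessStatusCode();
    }
}

// Server: endpoint changed to bind the DTO from the request body.
[HttpPost]
public async Task<bool> PublishTargetAsync([FromBody] ZipUploadDto upload)
{
    var targetPath = Path.Combine(_targetOutputDirectory.ToFileSystemPath(), upload.FileName);
    await System.IO.File.WriteAllBytesAsync(targetPath, Convert.FromBase64String(upload.ContentBase64));
    return true;
}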
I have a fairly complex scenario that I am trying to port from Windows Phone 7 to Windows 8.
I need to
Download a ZIP file from the internet
Unzip it to isolated storage
Read the unzipped XML files and images
Problems
In Windows Phone 7 I used WebClient, which is no longer available in Windows 8. I tried HttpClientHandler, but I am only able to download the ZIP file as a string and I do not know how to save it to isolated storage.
I found the ZipArchive class, but it takes an IO.Stream and I am not really sure how to use it (if I had the file saved somewhere - point 1).
I'm just starting out with the new APIs as well (so this might be off a bit), but based on the documentation:
HttpClient (and its default handler, HttpClientHandler) returns a Task<HttpResponseMessage> from SendAsync.
The HttpResponseMessage has a property, Content, which is of type HttpContent.
HttpContent in turn has a method, ReadAsStreamAsync, which returns a Task<Stream> that you should be able to use (albeit indirectly) to pass to ZipArchive.
Or you can just use the HttpClient.GetStreamAsync method to get the stream (much simpler):
HttpClient client = new HttpClient();
Stream stream = await client.GetStreamAsync(uri);
If that doesn't work, you could also wrap the string you currently get in a MemoryStream and pass it to ZipArchive, but that sounds kind of unsafe because of possible encoding problems.
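Putting the pieces together, a sketch of download-then-extract under a couple of assumptions: the archive has flat entry names (no subfolders), and ZipArchive is allowed to buffer the non-seekable HTTP stream internally.

// using System.IO;              // OpenStreamForWriteAsync extension
// using System.IO.Compression;  // ZipArchive
// using System.Net.Http;
// using Windows.Storage;

HttpClient client = new HttpClient();
Stream zipStream = await client.GetStreamAsync(uri);

using (ZipArchive archive = new ZipArchive(zipStream))
{
    StorageFolder localFolder = ApplicationData.Current.LocalFolder;

    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Create (or replace) a file in the app's local folder for each entry.
        StorageFile file = await localFolder.CreateFileAsync(
            entry.Name, CreationCollisionOption.ReplaceExisting);

        using (Stream entryStream = entry.Open())
        using (Stream fileStream = await file.OpenStreamForWriteAsync())
        {
            await entryStream.CopyToAsync(fileStream);
        }
    }
}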
I have a WCF service that returns a ZIP file (50 MB) as a byte array to any client that requests it. If the ZIP is very small (say 1 MB), the SOAP response comes back from WCF with the byte array embedded in it, but the response is very large even for a 1 MB file. If I try to transfer the 50 MB file, the service hangs and throws an out-of-memory exception because the SOAP response becomes huge.
What is the best option with WCF / web services for transferring large files (mainly ZIP format), given that I am sending back a byte array? Is there a better approach for returning the file?
Is WCF / a web service the best way to transfer large files to any client, or is there a better option/technology that provides interoperability and scalability for 10,000 users?
My code is below:
String pathfordownload = @"D:\New Folder.zip";
FileStream F2D = new FileStream(pathfordownload, FileMode.Open, FileAccess.Read);
BinaryReader binReader = new BinaryReader(F2D);
binReader.BaseStream.Position = 0;
byte[] binFile = binReader.ReadBytes(Convert.ToInt32(binReader.BaseStream.Length));
binReader.Close();
return binFile;
A working example or concrete piece of information would be really helpful, as I have been struggling with everything I can find on Google and have had no good results for the last week.
You can transfer a Stream through WCF, which lets you send files of (almost) unlimited length.
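A minimal sketch of what a streamed WCF operation could look like; the contract and binding names are just examples, the essentials being the Stream return type and transferMode="Streamed" on the binding:

[ServiceContract]
public interface IZipService
{
    // With streamed transfer the response body is the Stream itself,
    // so WCF sends it in chunks instead of buffering the whole file.
    [OperationContract]
    Stream GetZip(string fileName);
}

public class ZipService : IZipService
{
    public Stream GetZip(string fileName)
    {
        string path = Path.Combine(@"D:\", fileName);
        return new FileStream(path, FileMode.Open, FileAccess.Read);
    }
}

// Binding configuration (web.config), shown here as a comment to keep this in one place:
// <basicHttpBinding>
//   <binding name="streamed" transferMode="Streamed"
//            maxReceivedMessageSize="2147483647" />
// </basicHttpBinding>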
I've faced the exact same problem. The out-of-memory exception is inevitable because you are using byte arrays.
What we did was flush the data to the hard drive, so instead of being limited by virtual memory, your capacity for concurrent transactions is limited by disk space.
Then, for the transfer, we just placed the file on the other computer. Of course, in our case it was a server-to-server file transfer. If you want to be decoupled from the peer, you can use a file download over HTTP.
So instead of responding with a file, your service could respond with an HTTP URL pointing to the file location. Then, once the client has successfully downloaded the file from the server with a standard HttpRequest or WebClient, it calls a method to delete the file. In SOAP that could be Delete(string url); in REST it would be a DELETE on the resource.
I hope this makes sense to you. The most important part is to understand that in scalable software, especially if you are looking at 10,000 clients (concurrent?), you cannot rely on limited resources like memory streams or byte arrays. Instead, rely on large and easily expandable resources such as a hard drive partition, which could eventually live on a SAN so IT can grow it as needed.
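A rough sketch of that decoupled pattern; the operation names (GetDownloadUrl, Delete), the generated serviceClient, and the paths are all hypothetical:

[ServiceContract]
public interface IZipDistributionService
{
    // Stages the ZIP on disk (or a SAN share) and returns an HTTP URL to it.
    [OperationContract]
    string GetDownloadUrl(string packageName);

    // Called by the client after a successful download to remove the staged file.
    [OperationContract]
    void Delete(string url);
}

// Client side: plain HTTP download, then clean-up.
string url = serviceClient.GetDownloadUrl("New Folder.zip");
using (var web = new WebClient())
{
    web.DownloadFile(url, @"C:\Temp\New Folder.zip");
}
serviceClient.Delete(url);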