I was pleasantly surprised to discover that in .NET 4.5 I can connect an HTTP stream to a ZipArchive and have it unzipped onto the local file system.
Here is the code that does it:
System.Net.WebClient wc = new System.Net.WebClient();
string url = "http://www.example.com/abc.zip";
Stream zipReadingStream = wc.OpenRead(url);
System.IO.Compression.ZipArchive zip = new System.IO.Compression.ZipArchive(zipReadingStream);
System.IO.Compression.ZipFileExtensions.ExtractToDirectory(zip, "C:\\ExtractHere");
It works fine, but I want to keep track of the progress. I tried checking the Position property of the stream, but I got a "not supported" error. Is there another way to do it?
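One common workaround (a sketch, not from the original post) is to wrap the response stream in a pass-through stream that counts bytes as ZipArchive reads them; the count can be compared against the Content-Length header when the server supplies one. The `ProgressStream` class below is a hypothetical helper.

```csharp
using System;
using System.IO;

// A minimal pass-through stream that counts bytes as they are read.
// Wrap the WebClient response stream with it before handing it to ZipArchive.
class ProgressStream : Stream
{
    private readonly Stream inner;
    public long BytesRead { get; private set; }

    public ProgressStream(Stream inner) { this.inner = inner; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int n = inner.Read(buffer, offset, count);
        BytesRead += n;   // tally every byte the archive consumes
        return n;
    }

    // Minimal Stream plumbing: readable, not seekable or writable.
    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => BytesRead;
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```

Usage would be `var counting = new ProgressStream(wc.OpenRead(url)); var zip = new ZipArchive(counting);` and then polling `counting.BytesRead` while extraction runs. A percentage would need the total size from `wc.ResponseHeaders["Content-Length"]`, if the server sends it.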
I have an Excel file that I am downloading using C# and WebClient. The download works, but I am wondering whether it will also work on a Mac, since Macs don't have a C: drive.
Here is what I am using:
WebClient client = new WebClient();
client.DownloadFile(file, @"C:\" + guidToken.ToString() + ".xlsx");
The file downloads fine on Windows, but will it work if someone is using a Mac, and if not, how do I get it to work?
Using environment variables may work.
E.g.:
string documentPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
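Building on that, a cross-platform sketch (assuming `guidToken` is a `Guid` as in the question) combines the special folder with the file name instead of hard-coding `C:\`:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        Guid guidToken = Guid.NewGuid();

        // Resolve a per-user folder that exists on Windows, macOS, and Linux.
        string documentPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

        // Path.Combine inserts the correct separator for the current OS.
        string target = Path.Combine(documentPath, guidToken.ToString() + ".xlsx");

        Console.WriteLine(target);

        // In the question's code this target replaces the hard-coded path:
        // client.DownloadFile(file, target);   // 'file' is the URL
    }
}
```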
I've installed Dynamics Version 1612 (8.2.2.112) (DB 8.2.2.112) on-premises. I wrote a plugin to connect to a web API, and it works perfectly; now I need to handle more situations. The API can return a URL to a zipped file containing the response in a JSON file. Is there any chance you could point me in the right direction? First, how to get the file. I have searched the web, but so far found no clue. Thanks in advance.
If you have a URL, then download, unzip, and process.
Here is an example of how to download and unzip the file:
WebClient wc = new WebClient();
using (MemoryStream stream = new MemoryStream(wc.DownloadData("URL")))
{
    using (var zip = new ZipArchive(stream, ZipArchiveMode.Read))
    {
        ...
    }
}
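As a sketch of what could go inside that inner block, each ZipArchiveEntry can be opened as a stream and read directly. The entry name `response.json` here is an assumption based on the question, and the in-memory zip is built only so the example runs without a network:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class Program
{
    static void Main()
    {
        // Build a small zip in memory so the reading code below is runnable;
        // in the real plugin the bytes would come from wc.DownloadData(url).
        byte[] zipBytes;
        using (var ms = new MemoryStream())
        {
            using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, true))
            {
                var entry = archive.CreateEntry("response.json");
                using (var w = new StreamWriter(entry.Open()))
                    w.Write("{\"ok\":true}");
            }
            zipBytes = ms.ToArray();
        }

        using (var stream = new MemoryStream(zipBytes))
        using (var zip = new ZipArchive(stream, ZipArchiveMode.Read))
        {
            foreach (ZipArchiveEntry entry in zip.Entries)
            {
                // Read each entry's content as text (the JSON payload).
                using (var reader = new StreamReader(entry.Open()))
                {
                    string json = reader.ReadToEnd();
                    Console.WriteLine(entry.FullName + ": " + json);
                }
            }
        }
    }
}
```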
I'm trying to download files using a list of URLs. How would I go about downloading the files if the URLs only end at the page where you would normally click the download button? (It also has a redirect and a wait before the download.)
I haven't actually managed to get further than downloading a file using the code below, but I know this won't work for my case, because I won't be able to know the filename and I'll have to save it to a file in a configured location:
using (WebClient client = new WebClient())
{
    client.DownloadFile("https://github.com/Hellzbellz123/downloadme/raw/master/TestAddon.7z", "testAddon.7z");
}
I intend to build a backend and then plug it into a Windows Forms app for a GUI, because I'm really new to C# and programming in general.
Do you mean that you don't know the filenames, so you don't know what to save them as locally?
If so:
// with 'url' as string; Last() requires 'using System.Linq;'
WebClient client = new WebClient();
Uri uri = new Uri(url);
client.DownloadFile(uri, uri.Segments.Last());
It takes the URL and splits it at every slash; the last segment is the filename.
EDIT: Improved, thanks to Jimi
That method won't work for links like "[..]/download.php?fileid="
For those links take a look at this
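For such links, one sketch (not from the original thread; `FileNameFor` is a hypothetical helper) is to make the request with HttpWebRequest, which follows redirects by default, and read the filename the server suggests in the Content-Disposition response header, falling back to a generated name when the header is absent:

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
    // Hypothetical helper: derive a local filename for a URL whose path
    // does not end in a real filename (e.g. download.php?fileid=...).
    static string FileNameFor(WebResponse response)
    {
        string disposition = response.Headers["Content-Disposition"];
        if (disposition != null)
        {
            // Very simple parse: look for filename=... in the header value.
            int i = disposition.IndexOf("filename=", StringComparison.OrdinalIgnoreCase);
            if (i >= 0)
                return disposition.Substring(i + "filename=".Length).Trim('"', ' ', ';');
        }
        // Fall back to a generated name when the server gives no hint.
        return Guid.NewGuid().ToString() + ".bin";
    }

    static void Main()
    {
        // Redirects are followed automatically before the response arrives.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/download.php?fileid=123");
        using (var response = request.GetResponse())
        {
            string name = FileNameFor(response);
            using (var remote = response.GetResponseStream())
            using (var local = File.Create(name))
                remote.CopyTo(local);
        }
    }
}
```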
I have tried many methods to deserialise this XML from a URL, but none were successful, due to what I believe is an encoding issue.
If I right-click and download the file, then deserialise it from my C: drive, it works fine.
So I decided to try downloading the file first and then processing it. But the file downloaded via code is in the wrong encoding as well!
I don't know where to start, but I'm thinking of maybe forcing a UTF-8 or UTF-16 encoding when downloading?
Here is the download code:
using (var client = new WebClient())
{
    client.DownloadFile("http://example.com/my.xml", "my.xml");
}
Try this:
using (var client = new WebClient())
{
    client.Encoding = System.Text.Encoding.UTF8;
    client.DownloadFile("http://example.com/my.xml", "my.xml");
}
The file was in fact in gzip format, despite the URL pointing to XML.
My connection must have been advertising gzip support, so the server responded with gzip, even though I tried a few different methods with different variations (downloading, string streaming, parsing the string from the URL, etc.).
The solution for me was to download the file and then uncompress the gzip before deserialising. Telling the server not to send gzip didn't work for me, but it may be a possibility for some.
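The uncompress step can be sketched with GZipStream. The filenames below are assumptions, and the compression half of the roundtrip exists only to make the example self-contained; in the question, `my.xml.gz` would be the file the server actually sent:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class Program
{
    // Uncompress a gzip file on disk into a plain file.
    static void Decompress(string gzipPath, string outputPath)
    {
        using (var input = File.OpenRead(gzipPath))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = File.Create(outputPath))
        {
            gzip.CopyTo(output);
        }
    }

    static void Main()
    {
        // Create a small gzip file so the example runs standalone.
        using (var output = File.Create("my.xml.gz"))
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            byte[] xml = Encoding.UTF8.GetBytes("<root>hello</root>");
            gzip.Write(xml, 0, xml.Length);
        }

        Decompress("my.xml.gz", "my.xml");
        Console.WriteLine(File.ReadAllText("my.xml"));   // prints the original XML
    }
}
```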
I have a Windows service that reads video files from the local disk and posts them to a remote service via an API.
The video files are over 2 GB in size, and I need to transfer them to another location through an HttpClient POST request.
There is no size limitation on the API call, so even if a file is 10 GB I can still post it to the destination.
Right now I am reading the entire video file as byte[] fileContent and passing it to the function as:
ByteArrayContent contentBody = new ByteArrayContent(fileContent);
It works for now, but this is not scalable: if multiple files are transferred at the same time, it can fill up memory. I am looking for a solution where the transfer happens in chunks.
Question: can I read a big file into a buffer and transfer it over HTTP while reading it from the local disk?
You can use the PostAsync(Uri, HttpContent) method of HttpClient. To stream the contents of your local file, use the StreamContent subclass of HttpContent and supply a file stream. A brief example:
static readonly HttpClient client = new HttpClient();   // reuse one instance to avoid socket exhaustion

async Task PostBigFileAsync(Uri uri, string filename)
{
    using (var fileStream = File.OpenRead(filename))
    {
        // StreamContent streams from disk, so the whole file never sits in memory.
        var response = await client.PostAsync(uri, new StreamContent(fileStream));
        response.EnsureSuccessStatusCode();
    }
}