I have downloaded a zip file from a web server using this code:
client.DownloadFileAsync(url, savePath);
Then, in another method, during the same session, I try to extract the file with this method:
ZipFile.ExtractToDirectory(zipPath, extractDir);
This throws the error:
System.IO.IOException: 'The process cannot access the file
'C:\ProgramData\ZipFile.zip' because it is being used by another process.'
If I restart the program then unzip the file (without redownloading it) it extracts without any problem.
This doesn't make much sense to me, because the WebClient instance is local to another method and hence should have been destroyed by now...
There is nothing else accessing that file other than the 2 lines of code provided above.
Is there any way to free the file?
You need to extract the files once the download has completed. To do this, use the DownloadFileCompleted event of WebClient:
private void DownloadPackageFromServer(string downloadLink)
{
    ClearTempFolder();

    var address = new Uri(Constants.BaseUrl + downloadLink);

    using (var wc = new WebClient())
    {
        _downloadLink = downloadLink;
        wc.DownloadFileCompleted += Wc_DownloadFileCompleted;
        wc.DownloadFileAsync(address, GetLocalFilePath(downloadLink));
        // no explicit wc.Dispose() needed here; the using block already disposes the client
    }
}
private void Wc_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
{
    UnZipDownloadedPackage(_downloadLink);
}

private void UnZipDownloadedPackage(string downloadLink)
{
    var fileName = GetLocalFilePath(downloadLink);
    ZipFile.ExtractToDirectory(fileName, Constants.TemporaryMusicFolderPath);
}
Related
I'm downloading a file using WebClient in one of my methods.
public static void DownloadServicePackage()
{
    FilePath = @"C:\Users\userName\AppData\TestExe.zip".Replace("userName", userName);
    string extractPath = @"C:\Users\userName\AppData".Replace("userName", userName);

    WebClient webClient = new WebClient();
    webClient.DownloadFile("https://stg.test.com/downloadScript/TestExe.zip", FilePath);
    ZipFile.ExtractToDirectory(FilePath, extractPath);
}
public MainWindow()
{
    DownloadServicePackage();
    DownloadFile();
}
I need to wait in this method until the zip file has been downloaded completely, so that I can then extract it.
Any suggestions? How can I achieve this? I don't want to proceed until the file has been downloaded, and the file size can differ from time to time.
Thanks in advance!
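One possible approach (not from the original post, and assuming .NET 4.5+ with the same URL, paths and userName variable as in the snippet above) is to download with HttpClient and only extract once the whole response has been written to disk, for example:

public static async Task DownloadServicePackageAsync()
{
    // Hedged sketch: the URL, paths and userName come from the snippet above.
    string filePath = @"C:\Users\userName\AppData\TestExe.zip".Replace("userName", userName);
    string extractPath = @"C:\Users\userName\AppData".Replace("userName", userName);

    using (var httpClient = new HttpClient())
    {
        // Download the whole payload into memory before touching the file on disk.
        byte[] data = await httpClient.GetByteArrayAsync("https://stg.test.com/downloadScript/TestExe.zip");
        File.WriteAllBytes(filePath, data);
    }

    // The file is fully written and closed at this point, so it can be extracted.
    ZipFile.ExtractToDirectory(filePath, extractPath);
}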
I am able to upload files to an API, but I need a little help. Right now the file path is hard-coded. In practice I will have a PDF file and an XML file in two different local storage locations; I need to read the files from those locations and upload them to the API. Can anyone help me achieve this?
private void btnsubmit_Click(object sender, EventArgs e)
{
    UploadFileAsync(@"D:\test\SBP-1102.pdf");
}
public static async Task UploadFileAsync(string path)
{
    HttpClient client = new HttpClient();

    // we need to send a request with multipart/form-data
    var multiForm = new MultipartFormDataContent();

    // add file and directly upload it
    FileStream fs = File.OpenRead(path);
    multiForm.Add(new StreamContent(fs), "files", Path.GetFileName(path));

    // send request to API
    var url = "https://spaysaas-dev/api/getOCRDocuments";
    var response = await client.PostAsync(url, multiForm);

    if (response.IsSuccessStatusCode)
    {
        MessageBox.Show("Success");
    }
    else
    {
        MessageBox.Show(response.ToString());
    }
}
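As a hedged sketch of the two-file case (the field name "files" and the URL are taken from the code above; the two paths are just placeholders for wherever the PDF and XML actually live), the same MultipartFormDataContent can carry both files in one request:

public static async Task UploadPdfAndXmlAsync(string pdfPath, string xmlPath)
{
    using (var client = new HttpClient())
    using (var multiForm = new MultipartFormDataContent())
    using (var pdfStream = File.OpenRead(pdfPath))
    using (var xmlStream = File.OpenRead(xmlPath))
    {
        // Add both files to the same multipart/form-data request.
        multiForm.Add(new StreamContent(pdfStream), "files", Path.GetFileName(pdfPath));
        multiForm.Add(new StreamContent(xmlStream), "files", Path.GetFileName(xmlPath));

        var response = await client.PostAsync("https://spaysaas-dev/api/getOCRDocuments", multiForm);
        MessageBox.Show(response.IsSuccessStatusCode ? "Success" : response.ToString());
    }
}

It could then be called from the button handler as, for example, await UploadPdfAndXmlAsync(@"D:\test\SBP-1102.pdf", @"D:\test\SBP-1102.xml");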
This answer is incomplete in that it doesn't actually explain why the file isn't being uploaded, but it might help you diagnose the problem.
The documentation on WebClient.UploadFileAsync says:
The file is sent asynchronously using thread resources that are automatically allocated from the thread pool. To receive notification when the file upload completes, add an event handler to the UploadFileCompleted event.
So you could try handling WebClient.UploadFileCompleted and checking the UploadFileCompletedEventArgs for errors.
private void Upload(string fileName)
{
    var client = new WebClient();
    client.UploadFileCompleted += Client_UploadFileCompleted;

    try
    {
        var uri = new Uri("https://saas-dev/api/getDocs");
        client.Headers.Add("fileName", System.IO.Path.GetFileName(fileName));
        client.UploadFileAsync(uri, fileName);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
private void Client_UploadFileCompleted(object sender, UploadFileCompletedEventArgs e)
{
    // Check e.Error for errors
}
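A hedged sketch of what that check might look like (the MessageBox calls just mirror the style of the code above):

private void Client_UploadFileCompleted(object sender, UploadFileCompletedEventArgs e)
{
    if (e.Error != null)
    {
        // The upload failed; e.Error holds the exception thrown on the worker thread.
        MessageBox.Show(e.Error.ToString());
    }
    else if (!e.Cancelled)
    {
        // e.Result contains the server's response body as a byte array.
        MessageBox.Show(System.Text.Encoding.UTF8.GetString(e.Result));
    }
}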
I'm new to WinForms/C#/VB.NET and am trying to put together a simple application which downloads an MP3 file and edits its ID3 tags. This is what I've come up with so far:
Uri link = new System.Uri("URL");
wc.DownloadFileAsync(link, @"C:/music.mp3");
handle.WaitOne();
var file = TagLib.File.Create(@"C:/music.mp3");
file.Tag.Title = "Title";
file.Save();
The top section downloads the file with a pre-defined WebClient, but when I try to open the file in the first line of the second half, I run into this error: "The process cannot access the file 'C:\music.mp3' because it is being used by another process." I'm guessing this is due to the WebClient.
Any ideas on how to fix this? Thanks.
If using WebClient.DownloadFileAsync you should subscribe to the DownloadFileCompleted event and perform the remainder of your processing from that event.
Quick and dirty:
WebClient wc = new WebClient();
wc.DownloadFileCompleted += completedHandler;

Uri link = new System.Uri("URL");
wc.DownloadFileAsync(link, @"C:/music.mp3");
//handle.WaitOne(); // dunno what this is doing in here.

void completedHandler(object sender, AsyncCompletedEventArgs e)
{
    var file = TagLib.File.Create(@"C:/music.mp3");
    file.Tag.Title = "Title";
    file.Save();
}
I have several URLs stored in a text file, each of which is a link to a Facebook emoji, like https://www.facebook.com/images/emoji.php/v5/u75/1/16/1f618.png
I'm trying to download these images and store them on my disk. I'm using WebClient with DownloadFileAsync, something like
using (var client = new WebClient())
{
    client.DownloadFileAsync(imgURL, imgName);
}
My problem is that even if the number of URLs is small, say 10, some of the images download fine and some give me a corrupt-file error. So I thought I needed to wait for each file to be downloaded to the end, and added a DownloadFileCompleted event, like this:
using System;
using System.ComponentModel;
using System.Collections.Generic;
using System.Linq;
using System.Net;

class Program
{
    static Queue<string> q;

    static void Main(string[] args)
    {
        q = new Queue<string>(new[] {
            "https://www.facebook.com/images/emoji.php/v5/u51/1/16/1f603.png",
            "https://www.facebook.com/images/emoji.php/v5/ud2/1/16/1f604.png",
            "https://www.facebook.com/images/emoji.php/v5/ud4/1/16/1f606.png",
            "https://www.facebook.com/images/emoji.php/v5/u57/1/16/1f609.png",
            "https://www.facebook.com/images/emoji.php/v5/u7f/1/16/1f60a.png",
            "https://www.facebook.com/images/emoji.php/v5/ufb/1/16/263a.png",
            "https://www.facebook.com/images/emoji.php/v5/u81/1/16/1f60c.png",
            "https://www.facebook.com/images/emoji.php/v5/u2/1/16/1f60d.png",
            "https://www.facebook.com/images/emoji.php/v5/u75/1/16/1f618.png",
            "https://www.facebook.com/images/emoji.php/v5/u1e/1/16/1f61a.png"
        });

        DownloadItem();

        Console.WriteLine("Hit return after 'finished' has appeared...");
        Console.ReadLine();
    }

    private static void DownloadItem()
    {
        if (q.Any())
        {
            // Take the next URL from the queue and start an asynchronous download.
            var uri = new Uri(q.Dequeue());
            var file = uri.Segments.Last();

            var webClient = new WebClient();
            webClient.DownloadFileCompleted += DownloadFileCompleted;
            webClient.DownloadFileAsync(uri, file);
        }
        else
        {
            Console.WriteLine("finished");
        }
    }

    private static void DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
    {
        // Start the next download only after the previous one has completed.
        DownloadItem();
    }
}
It didn't help, so I decided to look more closely at the files that were corrupted.
It appeared that the files that were corrupted were not actually image files, but HTML pages, which either had some redirection JavaScript code to an image or were full HTML pages saying that my browser was not supported.
So my question is: how do I actually wait until an image file has been fully loaded and is ready to be downloaded?
EDIT: I have also tried removing the using statement, but that did not help either.
Nothing's being corrupted by your download - it's simply Facebook deciding (sometimes, which is odd) that it doesn't want to serve the image to your client.
It looks like it's the lack of a user agent that causes the problem. All you need to do is specify the user agent, and that looks like it fixes it:
webClient.Headers.Add(HttpRequestHeader.UserAgent,
    "Mozilla/5.0 (compatible; http://example.org/)");
I am struggling with downloading a few-MB Excel file from a URL and then working with it. I'm using VS2010, so I can't use the await keyword.
My code follows:
using (WebClient webClient = new WebClient())
{
    // setting Windows Authentication
    webClient.UseDefaultCredentials = true;

    // event fired ExcelToCsv after file is downloaded
    webClient.DownloadFileCompleted += (sender, e) => ExcelToCsv(fileName);

    // start download
    webClient.DownloadFileAsync(new Uri("http://serverx/something/Export.ashx"), exportPath);
}
This line in the ExcelToCsv() method:
using (FileStream stream = new FileStream(filePath, FileMode.Open))
throws this error:
System.IO.IOException: The process cannot access the file because it
is being used by another process.
I tried webClient.DownloadFile() alone, without an event, but it throws the same error. The same error is thrown if I do not dispose, too. What can I do?
A temporary workaround may be the Sleep() method, but it's not bulletproof.
Thank you
EDIT:
I tried a second approach with standard event handling, but I have a mistake in the code:
using (WebClient webClient = new WebClient())
{
    // configure the WebClient to use Windows Authentication
    webClient.UseDefaultCredentials = true;

    // <--- I GET THE "CONVERT ASYNC" ERROR ON THIS LINE
    webClient.DownloadFileCompleted += new DownloadDataCompletedEventHandler(HandleDownloadDataCompleted);

    // start the download
    webClient.DownloadFile(new Uri("http://czprga2001/Logio_ZelenyKyblik/Export.ashx"), TempDirectory + PSFileName);
}
public delegate void DownloadDataCompletedEventHandler(string fileName);
public event DownloadDataCompletedEventHandler DownloadDataCompleted;

static void HandleDownloadDataCompleted(string fileName)
{
    ExcelToCsv(fileName);
}
EDIT: approach 3
I tried this code:
while (true)
{
    if (isFileLocked(downloadedFile))
    {
        System.Threading.Thread.Sleep(5000); // wait 5 s
        ExcelToCsv(fileName);
        break;
    }
}
and it seems that the file is never accessible :/ I don't get it.
Try to use DownloadFile instead of DownloadFileAsync, as you do in Edit 1, like this:
string filename = Path.Combine(TempDirectory, PSFileName);

using (WebClient webClient = new WebClient())
{
    // configure the WebClient to use Windows Authentication
    webClient.UseDefaultCredentials = true;

    // start the download (synchronous, so it blocks until the file is on disk)
    webClient.DownloadFile(new Uri("http://czprga2001/Logio_ZelenyKyblik/Export.ashx"), filename);
}

ExcelToCsv(filename); // No need to create the event handler if it is not async
From your example it seems that you do not need an asynchronous download, so use a synchronous download and avoid the possible related problems, like the one here.
Also, use Path.Combine to combine the parts of a path, such as the folder and the file name.
There is also a chance that the file is locked by something else; use Sysinternals Process Explorer's Find DLL or Handle feature to check.
Use a local disk to store the downloaded file to prevent problems caused by the network.
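If you still need the isFileLocked check from approach 3 (its implementation is not shown in the question), a common sketch is to try opening the file exclusively and treat an IOException as "still locked". This is a hypothetical helper, not code from the thread:

static bool IsFileLocked(string path)
{
    try
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return false; // we got exclusive access, so nothing else has the file open
        }
    }
    catch (IOException)
    {
        return true; // the file is still in use (or does not exist yet)
    }
}

The loop in approach 3 would then wait while IsFileLocked(downloadedFile) returns true and only call ExcelToCsv once it returns false.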