I'm struggling with downloading a few-MB Excel file from a URL and then working with it. I'm using VS2010, so I can't use the await keyword.
My code follows:
using (WebClient webClient = new WebClient())
{
// setting Windows Authentication
webClient.UseDefaultCredentials = true;
// fire ExcelToCsv after the file is downloaded
webClient.DownloadFileCompleted += (sender, e) => ExcelToCsv(fileName);
// start download
webClient.DownloadFileAsync(new Uri("http://serverx/something/Export.ashx"), exportPath);
}
This line in the ExcelToCsv() method:
using (FileStream stream = new FileStream(filePath, FileMode.Open))
throws an error:
System.IO.IOException: The process cannot access the file because it
is being used by another process.
I tried webClient.DownloadFile() alone, without an event, but it throws the same error. The same error is thrown if I do not dispose, too. What can I do?
A temporary workaround may be the Sleep() method, but it's not bulletproof.
Thank you
EDIT:
I tried a second approach with standard event handling, but I have a mistake in the code:
using (WebClient webClient = new WebClient())
{
// setting so that webClient uses Windows Authentication
webClient.UseDefaultCredentials = true;
// <--- I GET A DELEGATE CONVERSION ERROR ON THIS LINE
webClient.DownloadFileCompleted += new DownloadDataCompletedEventHandler(HandleDownloadDataCompleted);
// start the download
webClient.DownloadFile(new Uri("http://czprga2001/Logio_ZelenyKyblik/Export.ashx"), TempDirectory + PSFileName);
}
public delegate void DownloadDataCompletedEventHandler(string fileName);
public event DownloadDataCompletedEventHandler DownloadDataCompleted;
static void HandleDownloadDataCompleted(string fileName)
{
ExcelToCsv(fileName);
}
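For reference, WebClient.DownloadFileCompleted expects a System.ComponentModel.AsyncCompletedEventHandler, which is why the line marked above fails to compile; a handler with the matching signature would look like this (a sketch only; how fileName reaches the handler is an assumption):
void HandleDownloadFileCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
{
    // fileName would have to be stored in a field or captured in a lambda
    ExcelToCsv(fileName);
}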
EDIT: approach 3
I tried this code
while (true)
{
if (isFileLocked(downloadedFile))
{
System.Threading.Thread.Sleep(5000); //wait 5s
ExcelToCsv(fileName);
break;
}
}
and it seems that it is never accessible :/ I don't get it.
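(isFileLocked is not shown in the question; a typical implementation, assumed here with a string parameter, tries to open the file exclusively:)
static bool isFileLocked(string path)
{
    try
    {
        // try to open the file exclusively; success means nobody else holds it
        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return false;
        }
    }
    catch (IOException)
    {
        return true; // still held by another process
    }
}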
Try to use DownloadFile instead of DownloadFileAsync, as you do in Edit 1, like this:
string filename = Path.Combine(TempDirectory, PSFileName);
using (WebClient webClient = new WebClient())
{
// setting so that webClient uses Windows Authentication
webClient.UseDefaultCredentials = true;
// start the download
webClient.DownloadFile(new Uri("http://czprga2001/Logio_ZelenyKyblik/Export.ashx"), filename);
}
ExcelToCsv(filename); //No need to create the event handler if it is not async
From your example it seems that you do not need an asynchronous download, so use a synchronous one and avoid possible related problems like the one here.
Also, use Path.Combine to combine parts of a path, such as a folder and a filename.
There is also a chance that the file is locked by something else; use Sysinternals Process Explorer's Find DLL or Handle function to check.
Use a local disk to store the downloaded file to prevent problems with the network.
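If you do need the asynchronous download (for example, to keep a UI responsive), a minimal sketch that works without await is shown below. Assuming the lock comes from disposing the WebClient too early, the key point is not to wrap it in a using block and to dispose it only inside the completed handler:
WebClient webClient = new WebClient();
webClient.UseDefaultCredentials = true;
webClient.DownloadFileCompleted += (sender, e) =>
{
    ((WebClient)sender).Dispose(); // dispose only after the download has finished
    if (e.Error == null)
        ExcelToCsv(fileName);
};
webClient.DownloadFileAsync(new Uri("http://serverx/something/Export.ashx"), exportPath);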
Related
I'm using C# to download different files via URL and then run them from a physical file path on the system; however, I need to wait until a file is completely written and only then run it. The reason is that some files need more time than others to be saved, and I can't really use Thread.Sleep() for this purpose.
I tried this code, but it is not that flexible, as I can't really tell how many tries or how much time it will take until the file is saved. That always depends on the internet connection as well as on the file size.
WebClient client = new WebClient();
var downloadTask = client.DownloadFileTaskAsync(new Uri(url), filepath);
var checkFile = Task.Run(async () => await downloadTask);
WaitForFile(filepath, FileMode.CreateNew);
—
FileStream WaitForFile(string fullPath, FileMode mode)
{
for (int numTries = 0; numTries < 15; numTries++)
{
FileStream fs = null;
try
{
fs = new FileStream(fullPath, mode);
return fs;
}
catch (IOException)
{
if (fs != null)
{
fs.Dispose();
}
Thread.Sleep(50);
}
}
return null;
}
Is there a way to keep waiting until the File.Length > 0?
I would appreciate any help.
Thanks.
You're not awaiting the completion of the file download. Or, better said, you're awaiting it in a different thread and then throwing that result away. Just await in the very same method, and you no longer need a separate way to know when the file is downloaded:
WebClient client = new WebClient();
await client.DownloadFileTaskAsync(new Uri(url), filepath);
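Note that await only compiles inside a method marked async; a minimal sketch of the whole flow (the Process.Start call stands in for however you actually run the file) might look like this:
using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

static async Task DownloadAndRunAsync(string url, string filepath)
{
    using (var client = new WebClient())
    {
        // completes only after the file has been fully written to disk
        await client.DownloadFileTaskAsync(new Uri(url), filepath);
    }
    Process.Start(filepath); // safe to run the file now
}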
You can use FileSystemWatcher to get an event when a file changes its size or a new file appears: https://learn.microsoft.com/en-us/dotnet/api/system.io.filesystemwatcher.onchanged?view=netcore-3.1
But you can't really tell if the file is fully downloaded this way unless you know its size.
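For illustration, a minimal FileSystemWatcher sketch (the watched folder and the filters are assumptions):
using System;
using System.IO;

static void WatchDownloads()
{
    var watcher = new FileSystemWatcher(@"C:\Downloads"); // assumed download folder
    watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size;
    watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
    watcher.Changed += (s, e) => Console.WriteLine("Size changed: " + e.FullPath);
    watcher.EnableRaisingEvents = true; // start raising events
}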
You should just change the code that downloads the file so that it notifies when the file is downloaded.
I think this is all you have to do:
WebClient client = new WebClient();
await client.DownloadFileTaskAsync(new Uri(url), filepath);
// code continues when the file finishes downloading
I have downloaded a zip file using this code from a web server:
client.DownloadFileAsync(url, savePath);
Then, in another method, during the same session I try and extract the file with this method:
ZipFile.ExtractToDirectory(zipPath, extractDir);
This throws the error:
System.IO.IOException: 'The process cannot access the file
'C:\ProgramData\ZipFile.zip' because it is being used by another process.'
If I restart the program and then unzip the file (without redownloading it), it extracts without any problem.
This doesn't make much sense to me, because the WebClient client is located in another method and hence should have been destroyed...
There is nothing else accessing that file other than the 2 lines of code provided above.
Is there any way to free the file?
You need to extract the files when the download has completed. To do this, use the DownloadFileCompleted event of WebClient:
private void DownloadPackageFromServer(string downloadLink)
{
    ClearTempFolder();
    var address = new Uri(Constants.BaseUrl + downloadLink);
    // do not wrap the WebClient in a using block: disposing it while the
    // asynchronous download is still running is what keeps the file locked
    var wc = new WebClient();
    _downloadLink = downloadLink;
    wc.DownloadFileCompleted += Wc_DownloadFileCompleted;
    wc.DownloadFileAsync(address, GetLocalFilePath(downloadLink));
}

private void Wc_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
{
    ((WebClient)sender).Dispose(); // the download has finished, so disposing is safe now
    UnZipDownloadedPackage(_downloadLink);
}
private void UnZipDownloadedPackage(string downloadLink)
{
var fileName = GetLocalFilePath(downloadLink);
ZipFile.ExtractToDirectory(fileName, Constants.TemporaryMusicFolderPath);
}
I'm new to WinForms/C#/VB.NET and am trying to put together a simple application that downloads an MP3 file and edits its ID3 tags. This is what I've come up with so far:
Uri link = new System.Uri("URL");
wc.DownloadFileAsync(link, @"C:/music.mp3");
handle.WaitOne();
var file = TagLib.File.Create(@"C:/music.mp3");
file.Tag.Title = "Title";
file.Save();
The top section downloads the file with a pre-defined WebClient, but when I try to open the file on the first line of the second half, I run into this error: The process cannot access the file 'C:\music.mp3' because it is being used by another process. I'm guessing this is due to the WebClient.
Any ideas on how to fix this? Thanks.
If you are using WebClient.DownloadFileAsync, you should subscribe to the DownloadFileCompleted event and perform the remainder of your processing from that event.
Quick and dirty:
WebClient wc = new WebClient();
wc.DownloadFileCompleted += completedHandler;
Uri link = new System.Uri("URL");
wc.DownloadFileAsync(link, @"C:/music.mp3");
//handle.WaitOne(); // dunno what this is doing in here.

void completedHandler(object sender, AsyncCompletedEventArgs e) {
    var file = TagLib.File.Create(@"C:/music.mp3");
    file.Tag.Title = "Title";
    file.Save();
}
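Alternatively, if blocking the calling thread is acceptable here, the synchronous WebClient.DownloadFile avoids the event altogether (a sketch using the same placeholder URL and path):
WebClient wc = new WebClient();
Uri link = new System.Uri("URL");
wc.DownloadFile(link, @"C:/music.mp3"); // blocks until the file is fully written
var file = TagLib.File.Create(@"C:/music.mp3");
file.Tag.Title = "Title";
file.Save();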
I have several URLs stored in a text file, each of them is a link leading to a Facebook emoji, like https://www.facebook.com/images/emoji.php/v5/u75/1/16/1f618.png
I'm trying to download these images and store them on my disk. I'm using WebClient with DownloadFileAsync, something like
using (var client = new WebClient())
{
client.DownloadFileAsync(imgURL, imgName);
}
My problem is that even if the number of URLs is small, say 10, some of the images download OK and some give me a corrupt-file error. So I thought I needed to wait for each file to finish downloading, and added a DownloadFileCompleted event, like this:
using System;
using System.ComponentModel;
using System.Collections.Generic;
using System.Linq;
using System.Net;
class Program
{
static Queue<string> q;
static void Main(string[] args)
{
q = new Queue<string>(new[] {
"https://www.facebook.com/images/emoji.php/v5/u51/1/16/1f603.png",
"https://www.facebook.com/images/emoji.php/v5/ud2/1/16/1f604.png",
"https://www.facebook.com/images/emoji.php/v5/ud4/1/16/1f606.png",
"https://www.facebook.com/images/emoji.php/v5/u57/1/16/1f609.png",
"https://www.facebook.com/images/emoji.php/v5/u7f/1/16/1f60a.png",
"https://www.facebook.com/images/emoji.php/v5/ufb/1/16/263a.png",
"https://www.facebook.com/images/emoji.php/v5/u81/1/16/1f60c.png",
"https://www.facebook.com/images/emoji.php/v5/u2/1/16/1f60d.png",
"https://www.facebook.com/images/emoji.php/v5/u75/1/16/1f618.png",
"https://www.facebook.com/images/emoji.php/v5/u1e/1/16/1f61a.png"
});
DownloadItem();
Console.WriteLine("Hit return after 'finished' has appeared...");
Console.ReadLine();
}
private static void DownloadItem()
{
if (q.Any())
{
var uri = new Uri(q.Dequeue());
var file = uri.Segments.Last();
var webClient = new WebClient();
webClient.DownloadFileCompleted += DownloadFileCompleted;
webClient.DownloadFileAsync(uri, file);
}
else
{
Console.WriteLine("finished");
}
}
private static void DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
{
DownloadItem();
}
}
It didn't help, so I decided to look closer at the corrupted files.
It turned out that the corrupted files were not actually image files but HTML pages, which either contained JavaScript redirection code pointing to an image or were full HTML pages saying that my browser was not supported.
So my question is: how do I actually wait until an image file has been fully loaded and is ready to be downloaded?
EDIT: I have also tried removing the using statement, but that did not help either.
Nothing's being corrupted by your download - it's simply Facebook deciding (sometimes, which is odd) that it doesn't want to serve the image to your client.
It looks like it's the lack of a user agent that causes the problem. All you need to do is specify the user agent, and that looks like it fixes it:
webClient.Headers.Add(HttpRequestHeader.UserAgent,
"Mozilla/5.0 (compatible; http://example.org/)");
I am using the code below to download multiple attachments from a TFS server:
foreach (Attachment a in wi.Attachments)
{
WebClient wc = new WebClient();
wc.Credentials = (ICredentials)netCred;
wc.DownloadFileCompleted += new AsyncCompletedEventHandler(wc_DownloadFileCompleted);
wc.DownloadFileAsync(a.Uri, "C:\\" + a.Name);
}
I would like to download multiple files using DownloadFileAsync, but I want them to be downloaded one by one.
One may ask, "Why don't you just use the synchronous DownloadFile method?" It's because:
I want to make use of the events provided by DownloadFileAsync.
I don't want to make multiple instances of the Webclient to avoid flooding the server.
This is the solution that I thought of:
foreach (Attachment a in wi.Attachments)
{
WebClient wc = new WebClient();
wc.Credentials = (ICredentials)netCred;
wc.DownloadFileCompleted += new AsyncCompletedEventHandler(wc_DownloadFileCompleted);
wc.DownloadFileAsync(a.Uri, "C:\\" + a.Name);
while (wc.IsBusy)
{
System.Threading.Thread.Sleep(1000);
}
}
However, there are a couple of problems with this approach:
The Thread.Sleep() is locking up my form. I would still need to create my own Thread or use a BackgroundWorker, which I would like to avoid as much as possible.
The DownloadFileCompleted event is being triggered after ALL files have been downloaded. I don't know whether this is a side effect of using System.Threading.Thread.Sleep(1000);.
Is there a better approach to download files one at a time using WebClient.DownloadFileAsync?
Thanks!
To simplify the task, you can create a separate attachment list:
list = new List<Attachment>(wi.Attachments);
where list is a private field of type List<Attachment>.
After this, you should configure the WebClient and start downloading the first file:
if (list.Count > 0) {
WebClient wc = new WebClient();
wc.Credentials = (ICredentials)netCred;
wc.DownloadFileCompleted += new AsyncCompletedEventHandler(wc_DownloadFileCompleted);
wc.DownloadFileAsync(list[0].Uri, @"C:\" + list[0].Name);
}
Your DownloadFileCompleted handler should check whether any files remain to be downloaded and, if so, call DownloadFileAsync again:
void wc_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e) {
    // ... do something useful
    var wc = (WebClient)sender; // reuse the same client for the next file
    list.RemoveAt(0);
    if (list.Count > 0)
        wc.DownloadFileAsync(list[0].Uri, @"C:\" + list[0].Name);
}
This code is not an optimized solution; it is just the idea.
At the risk of sounding like an idiot, this worked for me:
Console.WriteLine("Downloading...");
client.DownloadFileAsync(new Uri(file.Value), filePath);
while (client.IsBusy)
{
// run some stuff like checking download progress etc
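// note: an empty loop like this busy-waits at full CPU; a short Thread.Sleep here would ease that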
}
Console.WriteLine("Done. {0}", filePath);
Where client is an instance of a WebClient object.
I think you should use a Queue for that.
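For illustration, the same one-at-a-time idea with a Queue<Attachment> might look like this (a sketch only; the field and method names are assumptions):
private Queue<Attachment> queue;
private WebClient wc;

private void StartDownloads()
{
    queue = new Queue<Attachment>(wi.Attachments);
    wc = new WebClient();
    wc.Credentials = (ICredentials)netCred;
    wc.DownloadFileCompleted += wc_DownloadFileCompleted;
    DownloadNext();
}

private void DownloadNext()
{
    if (queue.Count > 0)
    {
        Attachment a = queue.Dequeue();
        wc.DownloadFileAsync(a.Uri, @"C:\" + a.Name); // one file at a time
    }
}

private void wc_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e)
{
    DownloadNext(); // start the next file only after the current one finishes
}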