I am using BackgroundTransferRequest to download files. I have more than 6,500 mp3 files that the user can download, either all at once by clicking a "download all" button or individually.
I cannot add more than 25 files to the BackgroundTransferService download queue. What is the workaround for adding more than 25 files to the download queue?
When it reaches that limit, the exception is:
Unable to download. The application request limit has been reached
Code for adding to the queue (after all files are added, I start processing the downloads):
transferFileName = aya.DownloadUri;
Uri transferUri = new Uri(Uri.EscapeUriString(aya.DownloadUri), UriKind.RelativeOrAbsolute);
BackgroundTransferRequest transferRequest = new BackgroundTransferRequest(transferUri);
transferRequest.Method = "GET";
string downloadFile = transferFileName.Substring(transferFileName.LastIndexOf("/") + 1);
Uri downloadUri = new Uri(downloadLocation + aya.ChapterID + "/" + downloadFile, UriKind.RelativeOrAbsolute);
transferRequest.DownloadLocation = downloadUri;
transferRequest.Tag = string.Format("{0},{1},{2}", downloadFile, aya.ID, aya.ChapterID);
transferRequest.TransferPreferences = TransferPreferences.AllowBattery;
BackgroundTransferService.Add(transferRequest);
You must attach an event handler to BackgroundTransferRequest.TransferStatusChanged and, in the appropriate state, explicitly remove the transfer from BackgroundTransferService. As you may know, requests have to be removed from BackgroundTransferService manually. All of this is explained in detail in the introduction to background transfers on MSDN.
You should create a queue of files to download, start by placing the first 25 transfers in BackgroundTransferService, and after each BackgroundTransferService.Remove(..) start the next transfer from your queue.
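A minimal sketch of that queueing approach, not production code: pendingDownloads, MaxActiveTransfers, FillTransferService, and OnTransferStatusChanged are hypothetical names, and the snippet assumes WP8's TransferStatusChanged event and requests built the same way as in the question.
// Keep your own queue of pending requests and only hand 25 at a time
// to BackgroundTransferService.
private readonly Queue<BackgroundTransferRequest> pendingDownloads = new Queue<BackgroundTransferRequest>();
private const int MaxActiveTransfers = 25;

private void FillTransferService()
{
    // Top up BackgroundTransferService until it holds 25 requests or the queue is empty.
    while (pendingDownloads.Count > 0 &&
           BackgroundTransferService.Requests.Count() < MaxActiveTransfers)
    {
        var request = pendingDownloads.Dequeue();
        request.TransferStatusChanged += OnTransferStatusChanged;
        BackgroundTransferService.Add(request);
    }
}

private void OnTransferStatusChanged(object sender, BackgroundTransferEventArgs e)
{
    if (e.Request.TransferStatus == TransferStatus.Completed)
    {
        // Requests are never removed automatically; remove this one,
        // then start the next transfer from the queue.
        BackgroundTransferService.Remove(e.Request);
        FillTransferService();
    }
}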
DropboxClient dbx = new DropboxClient("my_Key");
var folder = "/Apps/Images";
var file = $"fileName.jpg";
var fileToUpload = @"C:\Users\LENOVO\Test\Test\test.jpg";
using (var mem = new MemoryStream(File.ReadAllBytes(fileToUpload)))
{
    var updated = await dbx.Files.UploadAsync(folder + "/" + file,
        WriteMode.Overwrite.Instance,
        body: mem);
    Console.WriteLine("Saved {0}/{1} rev {2}", folder, file, updated.Rev);
}
I want to upload an image to Dropbox. This code works, but I want fileToUpload to be a web URL, because the images are on a web server. I know I can download every image one by one, but that costs performance. If I put a web URL in fileToUpload, I get an exception. For example:
fileToUpload = "https://upload.wikimedia.org/wikipedia/commons/5/51/Small_Red_Rose.JPG"
The Exception:
C:\Users\LENOVO****\bin\Debug\net6.0\https:\upload.wikimedia.org\wikipedia\commons\5\51\Small_Red_Rose.JPG
*** - is a local folder name
I want to upload an image to Dropbox from the web.
The UploadAsync method requires that you supply the data of the file to upload directly, as you are doing in your example by retrieving the file data from the local filesystem using File.ReadAllBytes.
If you want to upload a file to Dropbox directly from a URL without downloading it to your local filesystem first, you should instead use the SaveUrlAsync method.
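A rough sketch of how that could look; the destination path and source URL below are just examples, and the polling assumes the SDK's SaveUrlCheckJobStatusAsync call:
// Rough sketch: SaveUrl runs on Dropbox's servers, so the file never
// passes through your machine. Path and URL are example values.
var dbx = new DropboxClient("my_Key");
var destination = "/Apps/Images/Small_Red_Rose.JPG";
var sourceUrl = "https://upload.wikimedia.org/wikipedia/commons/5/51/Small_Red_Rose.JPG";

var result = await dbx.Files.SaveUrlAsync(destination, sourceUrl);

// Dropbox may process the save asynchronously; if so, poll until it finishes.
if (result.IsAsyncJobId)
{
    var jobId = result.AsAsyncJobId.Value;
    var status = await dbx.Files.SaveUrlCheckJobStatusAsync(jobId);
    while (status.IsInProgress)
    {
        await Task.Delay(500);
        status = await dbx.Files.SaveUrlCheckJobStatusAsync(jobId);
    }
}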
I have an HTTP server, and I need to download files from it to my computer every time I launch the app on my PC. I need to download about five thousand files, each about 1-2 KB. Here is the code I use for it:
WebClient[][] wbc = new WebClient[1][];
wbc[0] = new WebClient[myfilecount];
for (int file = 0; file < myfilecount; file++)
{
    wbc[0][file] = new WebClient();
    wbc[0][file].Credentials = new NetworkCredential(username, password);
    wbc[0][file].DownloadFileCompleted += Form4_DownloadFileCompleted;
    wbc[0][file].DownloadFileTaskAsync("http://MYIPADDRESS/File" + file.ToString(), databaselocation + "\\File" + file.ToString());
}
When I do this, the files download into RAM in about 3 seconds, but it takes about a minute to write them to my HDD. Is there a faster way to get those files onto my HDD?
Also, I get the count of those files from a file that I write, so is there a better way to download all of them?
I agree that if this is required every time you launch the app on your PC, you should rethink the process. But then again, I don't fully know your circumstances. That aside, you can build a collection of request tasks and await them all concurrently:
var requestTasks = Enumerable.Range(0, myFileCount).Select(i =>
{
    var webClient = new WebClient
    {
        Credentials = new NetworkCredential(username, password)
    };
    webClient.DownloadFileCompleted += Form4_DownloadFileCompleted;
    return webClient.DownloadFileTaskAsync(
        "http://MYIPADDRESS/File" + i.ToString(),
        Path.Combine(databaselocation, "file" + i.ToString()));
});
await Task.WhenAll(requestTasks);
await Task.WhenAll(requestTasks);
Also, I believe WebClient is limited in the number of concurrent requests it can make to the same host; you can configure this with ServicePointManager:
ServicePointManager.DefaultConnectionLimit = carefullyCalculatedNumber;
I'd be careful with the number of connections allowed; too high a limit can also become a problem. Hope this helps.
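As an alternative to raising the connection limit, you can cap the concurrency yourself. A rough sketch using SemaphoreSlim (not from the answer above; maxParallel is a value you would tune for your server and disk):
// Cap how many downloads run at once instead of letting all 5000 start together.
int maxParallel = 8;
var throttle = new SemaphoreSlim(maxParallel);

var tasks = Enumerable.Range(0, myFileCount).Select(async i =>
{
    await throttle.WaitAsync();
    try
    {
        using (var webClient = new WebClient())
        {
            webClient.Credentials = new NetworkCredential(username, password);
            await webClient.DownloadFileTaskAsync(
                "http://MYIPADDRESS/File" + i,
                Path.Combine(databaselocation, "File" + i));
        }
    }
    finally
    {
        throttle.Release();
    }
});

await Task.WhenAll(tasks);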
I use a website to get stats on Wi-Fi usage. The website creates an image of a graph representing the data. It works by having the user set a date, for example last month's statistics. The website generates a URL which is sent to the server, and the server returns an image. An example of the link looks like this:
https://www.example.com/graph/daily_usage?time_from=2015-06-01+00%3A00%3A00+%2B0100&time_to=2015-06-30+23%3A59%3A59+%2B0100&tsp=1436519988
The problem is that I am making a third-party program that downloads this image for use in my program. However, I cannot use the file; it is as if it were corrupt. I have tried a few methods, but maybe someone can suggest a different approach. Basically, how do I download an image that is generated by a server from a URL?
P.S.
Just noticed that if I download the file by right-clicking in a browser and saving, the image is about 17 KB. But if I use the WebClient method to download the image, it is only 1.5 KB. Why would that be? It seems like WebClient does not download it completely.
Here is my current code:
if (hrefAtt == "Usage Graph")
{
    string url = element.getAttribute("src");
    WebClient client = new WebClient();
    client.DownloadFile(url, tempFolderPath + "\\" + currentAcc + "_UsageSummary.png");
    wd.AddImagesToDoc(tempFolderPath + "\\" + currentAcc + "_UsageSummary.png");
    wd.SaveDocument();
}
TempFolderPath is my desktop\TempFolder\
UPDATE
Out of curiosity, I looked at the raw data of the file in Notepad, and interestingly, the "image" data was actually a copy of the website's homepage HTML, not the raw data of the image. How does that make sense?
This will download the Google logo:
var img = Bitmap.FromStream(new MemoryStream(new WebClient().DownloadData("https://www.google.co.uk/images/srpr/logo11w.png")));
First of all, you have to understand the link structure. If all the links are the same or close to each other, you can use Substring/Remove/DateTime etc. to build your new request link. For example:
string today = DateTime.Now.ToShortDateString();
string generatedLink = @"http://www.yoururl.com/image" + today + ".jpg";
string generatedFileName = @"C:\Usage\usage" + today + ".jpg";
WebClient wClient = new WebClient();
wClient.DownloadFile(generatedLink, generatedFileName);
I know a few similar questions have been asked about downloading files with WebClient. I can download individual files perfectly fine, but I want to download a range of files, anywhere from 1 to 6,000. I can download them just fine into my current directory, but I am stumped on how to download them to a different directory based on where they're being downloaded from. Do I need to temporarily change the current working directory just before downloading them?
And slightly on the same topic, I'm stuck on how to verify that the files exist before downloading them. I don't want to waste bandwidth or disk space on empty files. Here's what I have so far:
for (int x = 1; x <= 6000; x++)
{
    pbsscount = x.ToString();
    // Used for downloading file
    string directoryName = textBox1.Text.ToString().Replace(":", "_");
    if (!Directory.Exists(textBox1.Text))
        Directory.CreateDirectory(directoryName.Substring(7));
    string wholePBSSurl = textBox1.Text + "/" + "pb" + pbsscount.PadLeft(6, '0') + ".png";
    // Used for saving file, file name in directory
    string partPBSSurl = "pb" + pbsscount.PadLeft(6, '0') + ".png";
    Uri uri2 = new Uri(wholePBSSurl);
    //if (fileExists(wholePBSSurl))
    //{
        // Initialize downloading info, grab progressbar info
        WebClient webClient = new WebClient();
        webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
        webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
        // Save file to folder
        //webClient.DownloadFileAsync(uri2, textBox1.Text + "/" + partPBSSurl);
        webClient.DownloadFileAsync(uri2, partPBSSurl);
    //}
}
Do I need to temporarily change the current working directory just before downloading them?
The second parameter can be a full path, e.g. @"C:\folder\file.png". If you're fine with a path relative to your current directory, just change the code to webClient.DownloadFileAsync(uri2, directoryName + partPBSSurl); or, even better, use System.IO.Path.Combine(directoryName, partPBSSurl).
Sure, you can know the size beforehand if the server supports it. See: How to get the file size from http headers
I don't want to waste bandwidth or diskspace with empty files.
I wouldn't worry about that. The performance slowdown is negligible.
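If you still want to check before downloading, here is a rough sketch using a HEAD request; RemoteFileExists is a hypothetical helper name, not part of the code above:
// Rough sketch: a HEAD request returns only the headers, so you can learn
// whether the file exists (and its Content-Length) without downloading the body.
static bool RemoteFileExists(string url, out long size)
{
    size = -1;
    try
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "HEAD";
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            size = response.ContentLength;
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        // 404 or any other failure: treat the file as missing.
        return false;
    }
}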
There is no need to change the current directory. You are already using an overload of DownloadFileAsync that accepts a file path as the second parameter.
Just ensure that partPBSSurl contains a full path to the destination file, including both the directory and filename.
With regard to your second question of avoiding wasted time if the file does not exist, it so happens that I asked the same question recently:
Fail Fast with WebClient
Finally, I recently extended WebClient to provide simpler progress change events and allow the timeout to be changed. I posted that code here:
https://stackoverflow.com/a/9763976/141172
When the user selects a list of files on a page and hits "download selected", a postback happens to the server and zipping starts on the server. This works great until we hit the page timeout (which defaults to 90 seconds), and control returns to the page even though the backend process is still zipping. Is it possible to show the size of the zip file while it is being zipped, instead of waiting until the end to provide the download link?
You can use ActiveX components to do that:
var oas = new ActiveXObject("Scripting.FileSystemObject");
var d = filepath;
var e = oas.getFile(d);
var f = e.size;
alert(f + " bytes");
but it will limit you to IE and requires appropriate IE security settings.