Downloading a PDF (not opening it) using C#/URLDownloadToFile - C#

I need to download PDF files, but when I click on the links my browser opens them; I don't get a download dialog (Save/Save As/Open).
I am using WatiN to enter the login/password and then click on the links. I can't use WebRequest to get these files because I need to set cookies, and I can't get the cookies from the WatiN browser (in this case).
My code:
using (var browser = new IE("https://www.test.com"))
{
    browser.GoTo(Link);
    int response = URLDownloadToFile(0, Link, FilePath, 0, 0);
}
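For reference, URLDownloadToFile is a Win32 function from urlmon.dll, so the call above presumably sits on top of a P/Invoke declaration along these lines (not shown in the question):
// Simplified P/Invoke declaration matching the integer arguments used above.
// (IntPtr would be the more robust type for pCaller/lpfnCB on 64-bit processes.)
[System.Runtime.InteropServices.DllImport("urlmon.dll", CharSet = System.Runtime.InteropServices.CharSet.Auto, SetLastError = true)]
static extern int URLDownloadToFile(int pCaller, string szURL, string szFileName, int dwReserved, int lpfnCB);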
For links that open a download dialog (Save/Save As/Open) everything works, but here my browser just opens the file in the browser and I can't save it.
How can I save the PDF file with URLDownloadToFile?

You could use a WebClient; this is the simplest way I can think of:
using (var webClient = new WebClient())
{
    // Note: the path needs a verbatim string (@) or escaped backslashes; "C:\test.pdf" would turn \t into a tab.
    webClient.DownloadFile("https://www.test.com", @"C:\test.pdf");
}
You can also add a Proxy and Network Credentials if you need them.
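For example, inside the same using block (the proxy address and login are placeholders):
// Optional: route the request through a proxy and attach network credentials (placeholder values).
webClient.Proxy = new WebProxy("http://proxy.example.com:8080");
webClient.Credentials = new NetworkCredential("username", "password");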
EDIT: About the cookie stuff, you can also add those to the WebClient
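WebClient has no cookie container of its own, so one common approach (a sketch, not part of the original answer) is a small subclass that attaches a CookieContainer to every request; you would then copy your cookie values into Cookies before calling DownloadFile:
// Sketch: a WebClient subclass that carries a CookieContainer across requests.
public class CookieAwareWebClient : WebClient
{
    public CookieContainer Cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        HttpWebRequest httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            httpRequest.CookieContainer = Cookies;   // attach the shared cookie jar
        }
        return request;
    }
}
// Usage (cookie values are placeholders):
// var client = new CookieAwareWebClient();
// client.Cookies.Add(new Cookie("session", "value", "/", "www.test.com"));
// client.DownloadFile("https://www.test.com/file.pdf", @"C:\test.pdf");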

Related

File downloaded from gdrive is corrupted

I have a problem downloading a file from Google Drive.
I am using this code:
DriveService service = new DriveService();
var stream = service.HttpClient.GetStreamAsync("https://drive.google.com/open?id=blablabla");
var result = stream.Result;
using (var fileStream = System.IO.File.Create("MyFile.exe"))
{
    result.CopyTo(fileStream);
}
But MyFile.exe ends up at 103 kB, while the actual file is 800 kB.
I suspect that I am not getting the download URL right: when I right-click the file I want to download and get the shareable link, it comes in this format: https://drive.google.com/open?id=blablaid
According to www.labnol.org/internet/direct-links-for-google-drive/28356/ the Google Drive download link is https://docs.google.com/uc?export=download&id=[FILE_ID].
Otherwise you could also look at the Google Drive REST API (developers.google.com/drive/v3/web/manage-downloads).
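A minimal sketch of that direct-link approach with WebClient, using the id from the shareable link (the output file name is just an example):
// Sketch: build the direct-download URL from the file id, then fetch it with WebClient.
string fileId = "blablaid";   // the id from the shareable link
string directUrl = "https://docs.google.com/uc?export=download&id=" + fileId;
using (var client = new System.Net.WebClient())
{
    client.DownloadFile(directUrl, "MyFile.exe");
}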

RestSharp - Output file download result in browser

I'm quite new to using RestSharp and I've got a question that I can't find an answer to here on SO.
I have a situation where I must download a CSV file and output the file directly in the browser. The following code illustrates how to download a file and save it to a certain path on disk.
string tempFile = Path.GetTempFileName();
using (var writer = File.OpenWrite(tempFile))
{
    var client = new RestClient(baseUrl);
    var request = new RestRequest("Assets/LargeFile.7z");
    request.ResponseWriter = (responseStream) => responseStream.CopyTo(writer);
    var response = client.DownloadData(request);
}
I want to download the CSV file and output the result directly as a file download in the browser. You know, like in Chrome, where the downloaded file shows up in the bottom-left corner of the browser.
Can this be done using RestSharp? And if so, how? Got an example? Please share it. ;-)
Thanks!
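One hedged sketch, assuming the code runs in an ASP.NET MVC controller (the controller name, base URL, resource path, and file name below are illustrative, not from the question): download the bytes with RestSharp and return them as a FileResult, which sets a Content-Disposition header so the browser offers the file as a download.
// Sketch only: fetch the CSV with RestSharp and stream it back to the browser as a download.
// Assumes: using System.Web.Mvc; using RestSharp; all names and URLs are placeholders.
public class ExportController : Controller
{
    public ActionResult DownloadCsv()
    {
        var client = new RestClient("https://example.com");      // placeholder base URL
        var request = new RestRequest("Assets/report.csv");      // placeholder resource
        byte[] csvBytes = client.DownloadData(request);           // same call as in the snippet above

        // A FileResult adds Content-Disposition, so Chrome shows it in the download bar.
        return File(csvBytes, "text/csv", "export.csv");
    }
}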

Download PDF file from WebPage WatIn

I have a website (a bank website). I'm using WatiN to log in and get to a page with links (to PDF files). Each link opens a page with the PDF already open; on that page I have only the opened PDF file and a button to download it (no need to click it, because the page automatically pops up a Save/Save As message).
I tried:
1 - string page = browser.body.OuterHtml;
Not working; I can't see the iframe, and I can't find it either.
2 - int response = URLDownloadToFile(0, Link, FullFilePath, 0, 0);
Not working; I'm getting the login page, because I need the cookies.
3 - WebClient myWebClient = new WebClient();
myWebClient.DownloadFile(myStringWebResource, fileName);
Gives me the same result.
I CAN'T GET THE COOKIES FROM THE WatiN BROWSER AND SET THEM IN THE WebClient:
CookieCollection cookies = _browser.GetCookiesForUrl(new Uri(url));
string cookies = ie.Eval("document.cookie");
returns me only one parameter, so please do not tell me that I just need to get the cookies from WatiN and set them in myWebClient.
So, any ideas how I can save this PDF file?
One option would be using the iTextSharp library, which gives you a list of helpful methods for downloading the PDF. Sample code is below...
Uri uri = new Uri("browser url");
PdfReader reader = new PdfReader(uri);
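If the goal is to persist the fetched document to disk, a hedged sketch (iTextSharp 5.x assumed; the URL and output path are placeholders) is to write the reader back out through a PdfStamper:
// Assumes: using System; using System.IO; using iTextSharp.text.pdf;
// The URL and output path below are placeholders.
Uri uri = new Uri("https://www.example.com/statement.pdf");
PdfReader reader = new PdfReader(uri);
using (FileStream output = new FileStream(@"C:\temp\statement.pdf", FileMode.Create))
{
    PdfStamper stamper = new PdfStamper(reader, output);
    stamper.Close();   // Close() flushes the unmodified document to the output stream
}
reader.Close();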

Using WebClient to download images from deployed website

I deployed a website on IIS, running at localhost/xxx/xxx.aspx. On my WPF side, I download a text file from the localhost server using WebClient and save it in my WPF app folder.
This is how I do it:
protected void DownloadData(string strFileUrlToDownload)
{
    WebClient client = new WebClient();
    byte[] myDataBuffer = client.DownloadData(strFileUrlToDownload);
    MemoryStream storeStream = new MemoryStream();
    storeStream.SetLength(myDataBuffer.Length);
    storeStream.Write(myDataBuffer, 0, (int)storeStream.Length);
    storeStream.Flush();
    string currentpath = System.IO.Directory.GetCurrentDirectory() + @"\Folder";
    using (FileStream file = new FileStream(currentpath, FileMode.Create, System.IO.FileAccess.ReadWrite))
    {
        byte[] bytes = new byte[storeStream.Length];
        storeStream.Read(bytes, 0, (int)storeStream.Length);
        file.Write(myDataBuffer, 0, (int)storeStream.Length);
        storeStream.Close();
    }
    // The GetString call below gets the data in raw format so it can be manipulated as required
    string download = Encoding.ASCII.GetString(myDataBuffer);
}
This works by writing bytes and saving them. But how do I download multiple image files and save them in my WPF app folder? I have a URL like localhost/websitename/folder/designs/, and under this URL there are many images. How do I download all of them and save them in the WPF app folder?
Basically I want to download the contents of the folder, where the contents are actually images.
First, the WebClient class already has a method for this. Use something like client.DownloadFile(remoteUrl, localFilePath).
See this link:
WebClient.DownloadFile Method (String, String)
Secondly, you will need to index the files you want to download on the server somehow. You can't just get a directory listing over HTTP and then loop through it. The web server will need to be configured to enable directory listing, or you will need a page to generate a directory listing. Then you will need to download the results of that page as a string using WebClient.DownloadString and parse it. A simple solution would be an aspx page that outputs a plaintext list of files in the directory you want to download.
Finally, in the code you posted you're saving every single file you download as a file named "Folder". You need to generate a unique filename for each file you want to download. When you're looping through the files you want to download, use something like:
string localFilePath = Path.Combine("MyDownloadFolder", imageName);
where imageName is a unique filename (with file extension) for that file.
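A hedged sketch of that approach, assuming a hypothetical filelist.aspx page that returns one filename per line (the URLs and folder names below are illustrative, not from the question):
// Sketch: fetch a plaintext file listing, then download each image into a local folder.
// Assumes: using System; using System.IO; using System.Net; filelist.aspx is an assumption.
using (WebClient client = new WebClient())
{
    string baseUrl = "http://localhost/websitename/folder/designs/";
    string localFolder = Path.Combine(Directory.GetCurrentDirectory(), "Folder");
    Directory.CreateDirectory(localFolder);

    string listing = client.DownloadString(baseUrl + "filelist.aspx");   // one filename per line
    foreach (string line in listing.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries))
    {
        string imageName = line.Trim();
        string localFilePath = Path.Combine(localFolder, imageName);     // unique local name per image
        client.DownloadFile(baseUrl + imageName, localFilePath);
    }
}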

WebClient.DownloadFile vs. WebClient.DownloadData

I am using WebClient.DownloadFile to download a small executable file from the internet. This method is working very well. However, I would now like to download this executable file into a byte array rather than onto my hard drive. I did some reading and came across the WebClient.DownloadData method. The problem that I am having with the DownloadData method is that rather than downloading my file, my code is downloading the HTML data behind my file's download page.
I have tried using dozens of sites - each brings me the same issue. Below is the code I am using.
// Create a new instance of the System.Net 'WebClient'
System.Net.WebClient client = new System.Net.WebClient();
// Download URL
Uri uri = new Uri("http://www35.multiupload.com:81/files/4D7B4D2BFC3F1A9F765A433BA32ED2C5883D0CE133154A0FDB7E7786547A3165DA62393141C4AF8FF36C75222566CF3EB64AF6FBCFC02099BB209C891529CF7B90C83D9C63D39D989CBB8ECE6DE2B83B/Project1.exe");
byte[] dbytes = client.DownloadData(uri);
MessageBox.Show(dbytes.Length.ToString()); // Not the size of my file
Keep in mind that I am attempting to download the data of an executable file into a byte array.
Thank you for any help,
Evan
You are attempting to download a file using an expired token url. See below:
URL: http://www35.multiupload.com:81/files/4D7B4D2BFC3F1A9F765A433BA32ED2C5883D0CE133154A0FDB7E7786547A3165DA62393141C4AF8FF36C75222566CF3EB64AF6FBCFC02099BB209C891529CF7B90C83D9C63D39D989CBB8ECE6DE2B83B/Project1.exe
Server: www35
Token: 4D7B4D2BFC3F1A9F765A433BA32ED2C5883D0CE133154A0FDB7E7786547A3165DA62393141C4AF8FF36C75222566CF3EB64AF6FBCFC02099BB209C891529CF7B90C83D9C63D39D989CBB8ECE6DE2B83B
You can't just wait for the timer to end and copy the direct link; it's a "token" link. It will only work for a specified period of time before redirecting you back to the download page (which is why you are getting HTML instead of binary data).
Workaround
You will have to download Multiupload's HTML and parse the direct download link from the HTML source code. Only this approach provides a sure-fire way of getting an up-to-date token URL.
As @Dark Slipstream said, you're attempting to download a file using an expired token URL.
Here is how to get the new URL:
System.Net.WebClient client = new System.Net.WebClient();
// Download the page that hosts the file
Uri uri = new Uri("http://www.multiupload.com/39QMACX7XS");
byte[] dbytes = client.DownloadData(uri);
string responseStr = System.Text.Encoding.ASCII.GetString(dbytes);
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(responseStr);
// Parse the fresh token link out of the page, then download from it
string urlToDownload = doc.DocumentNode.SelectNodes("//a[contains(@href,'files/')]")[0].Attributes["href"].Value;
byte[] data = client.DownloadData(urlToDownload);
int length = data.Length;
I don't handle the exceptions here.
