I am developing an application for downloading MCX data from the website. It would be good if I could create the application myself and use it.
There is a date picker on the website in which I want to select the date programmatically, click the Go button, and then use "View in Excel". When I click "View in Excel" it downloads a file with the data for that particular date. You can look at this link to understand what I mean:
http://www.mcxindia.com/sitepages/bhavcopy.aspx
I would greatly appreciate it if someone could help me.
Thanks in advance.
using System.Net;
WebClient webClient = new WebClient();
webClient.DownloadFile("http://mysite.com/myfile.txt", @"c:\myfile.txt");
If the file is too large, you should use the asynchronous method instead.
Check these code examples: http://www.csharp-examples.net/download-files/
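A minimal sketch of the async approach, reusing the hypothetical URL and path from above; the progress and completion handlers are optional but are usually the reason for going async:
using System;
using System.Net;

WebClient webClient = new WebClient();
// Report progress while the download runs in the background.
webClient.DownloadProgressChanged += (s, e) => Console.WriteLine("Downloaded {0}%", e.ProgressPercentage);
// Signal completion or failure.
webClient.DownloadFileCompleted += (s, e) => Console.WriteLine(e.Error == null ? "Done." : e.Error.Message);
webClient.DownloadFileAsync(new Uri("http://mysite.com/myfile.txt"), @"c:\myfile.txt");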
You'll need to post your data to the server with your client request, as explained by @Peter.
This is an ASP.NET page, and therefore it requires that you send some data on postback in order to complete the request.
Using Google, I was able to find this as a proof of concept.
The following is a snippet I wrote in LINQPad to test it out:
void Main()
{
WebClient webClient = new WebClient();
// Fetch the page once to harvest the hidden ASP.NET form fields.
byte[] b = webClient.DownloadData("http://www.mcxindia.com/sitepages/BhavCopyDateWise.aspx");
string s = System.Text.Encoding.UTF8.GetString(b);
var __EVENTVALIDATION = ExtractVariable(s, "__EVENTVALIDATION");
__EVENTVALIDATION.Dump(); // LINQPad output
var forms = new NameValueCollection();
forms["__EVENTTARGET"] = "btnLink_Excel"; // the control that triggers the Excel export
forms["__EVENTARGUMENT"] = "";
forms["__VIEWSTATE"] = ExtractVariable(s, "__VIEWSTATE");
forms["mTbdate"] = "11%2F15%2F2011"; // 11/15/2011, already URL-encoded
forms["__EVENTVALIDATION"] = __EVENTVALIDATION;
webClient.Headers.Set(HttpRequestHeader.ContentType, "application/x-www-form-urlencoded");
var responseData = webClient.UploadValues(@"http://www.mcxindia.com/sitepages/BhavCopyDateWise.aspx", "POST", forms);
System.IO.File.WriteAllBytes(@"c:\11152011.csv", responseData);
}
private static string ExtractVariable(string s, string valueName)
{
// Pull the value of a hidden <input> field out of the raw HTML.
string tokenStart = valueName + "\" value=\"";
string tokenEnd = "\" />";
int start = s.IndexOf(tokenStart) + tokenStart.Length;
int length = s.IndexOf(tokenEnd, start) - start;
return s.Substring(start, length);
}
There are many ways to download a file using WebClient.
You should read this first:
http://msdn.microsoft.com/en-us/library/system.net.webclient.aspx
If you want to pass some additional information, you can use WebClient.Headers, for example:
using System.Net;
WebClient webClient = new WebClient();
var forms = new NameValueCollection();
forms["token"] = "abc123";
var responseData = webClient.UploadValues(@"http://blabla.com/download/?name=abc.exe", "POST", forms);
System.IO.File.WriteAllBytes(@"D:\abc.exe", responseData);
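And if the server expects extra information in the headers rather than the body, a minimal sketch (the header name and value here are placeholders, not anything this server actually requires):
using System.Net;

WebClient webClient = new WebClient();
// Hypothetical header; substitute whatever the server expects.
webClient.Headers.Add("X-Api-Key", "your-key-here");
byte[] data = webClient.DownloadData("http://blabla.com/download/?name=abc.exe");
System.IO.File.WriteAllBytes(@"D:\abc.exe", data);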
Since we moved to DevOps, my application fails to download the images that are stored in any field of a work item.
I have an image URL that has already been stripped out of the description fields via a regular expression.
If I take this link and paste it into a browser, it returns the image (so the URL is valid).
The issue is that the call to download the image doesn't carry any authentication credentials, so it tries to redirect me to a login page.
I do authenticate with the DevOps server within my application, and it caches these credentials:
readonly VssCredentials creds = new VssClientCredentials();
I have tried to use a WebClient to make the call, but you can't cast VssCredentials to System.Net credentials.
This used to work before:
using (WebClient webClient = new WebClient())
{
byte[] data = webClient.DownloadData(src);
using (MemoryStream mem = new MemoryStream(data))
{
using (var yourImage = Image.FromStream(mem))
{
// If you want it as Png
yourImage.Save(@"c:\temp\path_to_your_file.png", ImageFormat.Png);
// If you want it as Jpeg
yourImage.Save(@"c:\temp\path_to_your_file.jpg", ImageFormat.Jpeg);
}
}
}
I have also tried using:
using (var client = new TfvcHttpClient(new Uri(src), creds))
{
var itemRequestData = Create(src);
}
private static TfvcItemRequestData Create(string folderPath)
{
return new TfvcItemRequestData
{
IncludeContentMetadata = true,
IncludeLinks = true,
ItemDescriptors =
new[]
{
new TfvcItemDescriptor
{
Path = folderPath,
RecursionLevel = VersionControlRecursionType.Full
}
}
};
}
But how do I then write the itemRequestData to a file?
Or am I going about this the wrong way?
Thanks.
Try this:
using (WebClient webClient = new WebClient())
{
webClient.Headers.Add("Authorization", "Basic " + base64Token);
byte[] data = webClient.DownloadData(src);
// ....
}
Where base64Token is your Personal Access Token converted to Base64 with a ":" prepended.
For example, if your token is abcdefg, you need to convert :abcdefg to Base64 and use that as the authorization token.
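As a minimal sketch, building that value in C# (abcdefg is a placeholder token):
using System;
using System.Text;

// Prepend the required colon, then Base64-encode the whole thing.
string base64Token = Convert.ToBase64String(Encoding.ASCII.GetBytes(":abcdefg"));
// Used as: webClient.Headers.Add("Authorization", "Basic " + base64Token);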
I'm using a C# Windows Forms app, and I want to download a website's content, edit it, and then display it in an external browser.
WebClient client = new WebClient();
string s = client.DownloadString("http://google.com");
How can I display the HTML string (s) in an external browser?
Regards.
WebClient client = new WebClient();
string s = client.DownloadString("http://google.com");
Process.Start("http://www.google.com");
If you want to make it a little more involved, you can save the string to a local file and open that in the external browser with the Process.Start() function.
WebClient client = new WebClient();
string s = client.DownloadString("http://google.com");
StreamWriter sw = File.CreateText("google.html");
sw.Write(s);
sw.Close();
Process.Start("google.html");
I need some help with the C# WebClient.UploadString method. I'm trying to upload a long string (read from a database) to a server (PHP), and UploadString seemed to be the easiest way to do it. The problem is that the uploaded string gets cut off after about 4000 characters and I can't figure out why.
For example:
data.Length: 19000 (before upload)
POST length: 4000 (in PHP)
To work around the problem, I upload my string in pieces of fewer than 4000 characters. BUT I still face the problem: every second upload gets cut off, and I can't figure out why.
This is my C# Code:
WebClient client = new WebClient();
foreach (DataRow dr in dra)
{
foreach (int y in index)
{
data += dr[y] + ";";
Console.Write(".");
}
data += ":";
if (count1 > 50)
{
// Upload the accumulated batch of rows.
Console.WriteLine("Uploading Data.....");
Console.WriteLine("String length: " + data.Length);
Console.WriteLine(data);
client.Encoding = System.Text.Encoding.UTF8;
client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
string Ergebnis = client.UploadString(address, "POST", data);
Console.WriteLine(Ergebnis);
result.ErrorMessage += Ergebnis;
count1 = -1;
data = "table=" + table + "&columns=continueUpload&values=";
}
++count1;
}
// Dispose the client once, after the loop, rather than between uploads.
client.Dispose();
Does anyone have any idea where this comes from? Is there a string-length limit in the WebClient methods?
Alright, I found the solution. Thanks, Alex, for the hint!
I had to URL-encode all my values; then it worked.
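For anyone hitting the same wall, a minimal sketch of the fix under the same assumptions as the code above (dr[y] and data come from the question's loop): escape each value before appending it, so characters such as & and = inside the data can no longer truncate the POST fields.
// Escape each value so '&', '=', '+' etc. cannot break the form encoding.
data += Uri.EscapeDataString(dr[y].ToString()) + ";";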
The page at https://qrng.physik.hu-berlin.de/ provides a high-bit-rate quantum random number generator web service, and I'm trying to access that service.
However, I could not manage to do so. This is my current code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Security.Cryptography;
using System.IO;
namespace CS_Console_App
{
class Program
{
static void Main()
{
System.Net.ServicePointManager.Expect100Continue = false;
var username = "testuser";
var password = "testpass";
System.Diagnostics.Debug.WriteLine(Post("https://qrng.physik.hu-berlin.de/", "username="+username+"&password="+password));
Get("http://qrng.physik.hu-berlin.de/download/sampledata-1MB.bin");
}
public static void Get(string url)
{
var my_request = System.Net.WebRequest.Create(url);
my_request.Credentials = System.Net.CredentialCache.DefaultCredentials;
var my_response = my_request.GetResponse();
var my_response_stream = my_response.GetResponseStream();
var stream_reader = new System.IO.StreamReader(my_response_stream);
var content = stream_reader.ReadToEnd();
System.Diagnostics.Debug.WriteLine(content);
stream_reader.Close();
my_response_stream.Close();
}
public static string Post(string url, string data)
{
string vystup = null;
try
{
//Our postvars
byte[] buffer = System.Text.Encoding.ASCII.GetBytes(data);
//Initialise the request; change the URL if applicable
System.Net.HttpWebRequest WebReq = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(url);
//Our method is post, otherwise the buffer (postvars) would be useless
WebReq.Method = "POST";
//We use form contentType, for the postvars.
WebReq.ContentType = "application/x-www-form-urlencoded";
//The length of the buffer (postvars) is used as contentlength.
WebReq.ContentLength = buffer.Length;
//We open a stream for writing the postvars
Stream PostData = WebReq.GetRequestStream();
//Now we write, and afterwards, we close. Closing is always important!
PostData.Write(buffer, 0, buffer.Length);
PostData.Close();
//Get the response handle, we have no true response yet!
System.Net.HttpWebResponse WebResp = (System.Net.HttpWebResponse)WebReq.GetResponse();
//Let's show some information about the response
Console.WriteLine(WebResp.StatusCode);
Console.WriteLine(WebResp.Server);
//Now, we read the response (the string), and output it.
Stream Answer = WebResp.GetResponseStream();
StreamReader _Answer = new StreamReader(Answer);
vystup = _Answer.ReadToEnd();
//Congratulations, you just requested your first POST page, you
//can now start logging into most login forms, with your application
//Or other examples.
}
catch
{
//Rethrow, preserving the original stack trace ("throw ex" would reset it)
throw;
}
return vystup.Trim() + "\n";
}
}
}
I'm getting a 403 Forbidden error when I try to do a GET request on http://qrng.physik.hu-berlin.de/download/sampledata-1MB.bin.
After debugging a bit, I realised that even though I supplied a valid username and password, the response HTML sent after my POST request indicates that I was not actually logged on to the system.
Does anyone know why this is the case, and how I might work around it to call the service?
Bump. Can anyone get this to work, or is the site just a scam?
The site is surely not a scam. I developed the generator and I put my scientific reputation in it. The problem is that you are trying to use the service in a way that was not intended. The sample files were really only meant to be downloaded manually for basic test purposes. Automated access to fetch data into an application was meant to be implemented through the DLLs we provide.
On the other hand, I do not know of any explicit intent to prevent your implementation to work. I suppose if a web browser can log in and fetch data, some program should be able to do the same. Maybe only the login request is just a little more complicated. No idea. The server software was developed by someone else and I cannot bother him with this right now.
Mick
Actually, the generator can now also be purchased. See here:
http://www.picoquant.com/products/pqrng150/pqrng150.htm
Have you tried changing this:
my_request.Credentials = System.Net.CredentialCache.DefaultCredentials;
to
my_request.Credentials = new NetworkCredential(UserName, Password);
as described on the MSDN page?
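For reference, a minimal sketch of the Get method with explicit credentials (UserName and Password are placeholders; note this only helps if the server uses HTTP authentication rather than a cookie-based login form):
public static void Get(string url, string userName, string password)
{
var request = System.Net.WebRequest.Create(url);
// Sent via HTTP authentication (Basic/Digest/NTLM), not via the HTML login form.
request.Credentials = new System.Net.NetworkCredential(userName, password);
using (var response = request.GetResponse())
using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
{
System.Diagnostics.Debug.WriteLine(reader.ReadToEnd());
}
}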
I have a webpage which has nothing on it except some string(s). No images, no background color or anything, just some plain text which is not really that long.
I am just wondering, what is the best (by that I mean fastest and most efficient) way to fetch the string from the webpage so that I can use it for something else (e.g. display it in a text box)? I know of WebClient, but I'm not sure it'll do what I want, and I don't really want to try it even if it would work, because the last time I used it a simple operation took approximately 30 seconds.
Any ideas would be appreciated.
The WebClient class should be more than capable of handling the functionality you describe, for example:
System.Net.WebClient wc = new System.Net.WebClient();
byte[] raw = wc.DownloadData("http://www.yoursite.com/resource/file.htm");
string webData = System.Text.Encoding.UTF8.GetString(raw);
or (further to the suggestion from Fredrick in the comments):
System.Net.WebClient wc = new System.Net.WebClient();
string webData = wc.DownloadString("http://www.yoursite.com/resource/file.htm");
When you say it took 30 seconds, can you expand on that a little more? There are many reasons as to why that could have happened. Slow servers, internet connections, dodgy implementation etc etc.
You could go a level lower and implement something like this (note that writing a request body requires a method that allows one, such as POST; requestData here is a placeholder for whatever the server expects):
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("http://www.yoursite.com/resource/file.htm");
webRequest.Method = "POST";
string requestData = "key=value"; // placeholder form data
using (StreamWriter streamWriter = new StreamWriter(webRequest.GetRequestStream(), Encoding.UTF8))
{
streamWriter.Write(requestData);
}
string responseData = string.Empty;
HttpWebResponse httpResponse = (HttpWebResponse)webRequest.GetResponse();
using (StreamReader responseReader = new StreamReader(httpResponse.GetResponseStream()))
{
responseData = responseReader.ReadToEnd();
}
However, at the end of the day the WebClient class wraps up this functionality for you. So I would suggest that you use WebClient and investigate the causes of the 30 second delay.
If you're downloading text, then I'd recommend using the WebClient and getting a StreamReader for the text:
WebClient web = new WebClient();
System.IO.Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt");
using (System.IO.StreamReader reader = new System.IO.StreamReader(stream))
{
String text = reader.ReadToEnd();
}
If this is taking a long time then it is probably a network issue or a problem on the web server. Try opening the resource in a browser and see how long that takes.
If the webpage is very large, you may want to look at streaming it in chunks rather than reading all the way to the end as in that example.
Look at http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx to see how to read from a stream.
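A minimal sketch of that chunked approach, reusing the URL from above (the 8 KB buffer size is an arbitrary choice):
WebClient web = new WebClient();
using (System.IO.Stream stream = web.OpenRead("http://www.yoursite.com/resource.txt"))
{
byte[] buffer = new byte[8192]; // arbitrary chunk size
int bytesRead;
// Stream.Read returns 0 only at the end of the stream.
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
// Process buffer[0..bytesRead) here, e.g. append it to a file.
}
}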
Regarding the suggestion
So I would suggest that you use WebClient and investigate the causes of the 30 second delay.
From the answers to the question
System.Net.WebClient unreasonably slow
Try setting Proxy = null;
WebClient wc = new WebClient();
wc.Proxy = null;
Credit to Alex Burtsev
If you use the WebClient to read the contents of the page, it will include HTML tags.
string webURL = "https://yoursite.com";
WebClient wc = new WebClient();
wc.Headers.Add("user-agent", "Only a Header!");
byte[] rawByteArray = wc.DownloadData(webURL);
string webContent = Encoding.UTF8.GetString(rawByteArray);
After getting the content, the HTML tags should be removed. Regex can be used for this:
var result = Regex.Replace(webContent, "<.*?>", String.Empty);
But this method is not very accurate; the better way is to install HtmlAgilityPack and use the following code:
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(webContent);
string result = doc.DocumentNode.InnerText;
You say it takes 30 seconds, but that has nothing to do with WebClient (the main factors are usually the internet connection or a proxy). WebClient has worked very well for me. Example:
WebClient client = new WebClient();
string url = "http://www.yourwebsite.com"; // the page to scan for links
using (Stream data = client.OpenRead(url))
{
using (StreamReader reader = new StreamReader(data))
{
string content = reader.ReadToEnd();
// Match anything that looks like a URL in the downloaded page.
string pattern = @"((https?|ftp|gopher|telnet|file|notes|ms-help):((//)|(\\\\))+[\w\d:#@%/;$()~_?\+-=\\\.&]*)";
MatchCollection matches = Regex.Matches(content, pattern);
List<string> urls = new List<string>();
foreach (Match match in matches)
{
urls.Add(match.Value);
}
}
}
XmlDocument document = new XmlDocument();
// Note: Load with a URL works only if the page is well-formed XML/XHTML.
document.Load("http://www.yourwebsite.com");
string allText = document.InnerText;