The page at https://qrng.physik.hu-berlin.de/ provides a high bit rate quantum random number generator as a web service, and I'm trying to access that service.
However, I haven't managed to do so. This is my current code:
using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Text;

namespace CS_Console_App
{
    class Program
    {
        static void Main()
        {
            ServicePointManager.Expect100Continue = false;
            var username = "testuser";
            var password = "testpass";
            Debug.WriteLine(Post("https://qrng.physik.hu-berlin.de/", "username=" + username + "&password=" + password));
            Get("http://qrng.physik.hu-berlin.de/download/sampledata-1MB.bin");
        }

        public static void Get(string url)
        {
            var my_request = WebRequest.Create(url);
            my_request.Credentials = CredentialCache.DefaultCredentials;
            var my_response = my_request.GetResponse();
            var my_response_stream = my_response.GetResponseStream();
            var stream_reader = new StreamReader(my_response_stream);
            var content = stream_reader.ReadToEnd();
            Debug.WriteLine(content);
            stream_reader.Close();
            my_response_stream.Close();
        }

        public static string Post(string url, string data)
        {
            // Encode the post variables
            byte[] buffer = Encoding.ASCII.GetBytes(data);
            HttpWebRequest WebReq = (HttpWebRequest)WebRequest.Create(url);
            // Our method is POST, otherwise the buffer (post variables) would be useless
            WebReq.Method = "POST";
            // We use the form content type for the post variables
            WebReq.ContentType = "application/x-www-form-urlencoded";
            // The length of the buffer is used as the content length
            WebReq.ContentLength = buffer.Length;
            // Open a stream, write the post variables, then close it
            Stream PostData = WebReq.GetRequestStream();
            PostData.Write(buffer, 0, buffer.Length);
            PostData.Close();
            // Get the response and show some information about it
            HttpWebResponse WebResp = (HttpWebResponse)WebReq.GetResponse();
            Console.WriteLine(WebResp.StatusCode);
            Console.WriteLine(WebResp.Server);
            // Read the response body and return it
            Stream Answer = WebResp.GetResponseStream();
            StreamReader _Answer = new StreamReader(Answer);
            string vystup = _Answer.ReadToEnd();
            return vystup.Trim() + "\n";
        }
    }
}
I'm getting a 403 Forbidden error when I try to do a GET request on http://qrng.physik.hu-berlin.de/download/sampledata-1MB.bin.
After debugging a bit, I've realised that even though I supplied a valid username and password, the HTML response to my POST request indicates that I was not actually logged on to the system.
Does anyone know why this is the case, and how I might work around it to call the service?
Bump. Can anyone get this to work, or is the site just a scam?
The site is surely not a scam. I developed the generator and I put my scientific reputation in it. The problem is that you are trying to use the service in a way that was not intended. The sample files were really only meant to be downloaded manually for basic test purposes. Automated access to fetch data into an application was meant to be implemented through the DLLs we provide.
On the other hand, I do not know of any explicit intent to prevent your implementation to work. I suppose if a web browser can log in and fetch data, some program should be able to do the same. Maybe only the login request is just a little more complicated. No idea. The server software was developed by someone else and I cannot bother him with this right now.
Mick
Actually, the generator can now also be purchased. See here:
http://www.picoquant.com/products/pqrng150/pqrng150.htm
Have you tried changing this
my_request.Credentials = System.Net.CredentialCache.DefaultCredentials;
to
my_request.Credentials = new NetworkCredential(UserName, Password);
as described on the MSDN page?
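If the login is form-based (the server hands back a session cookie) rather than HTTP authentication, setting Credentials alone won't help: by default HttpWebRequest has no CookieContainer, so the session cookie from the POST response is silently dropped. Here's a minimal sketch of carrying the cookie over to the download, assuming the login form really does take username and password fields at the root URL (check the actual form in your browser's developer tools):

using System;
using System.IO;
using System.Net;
using System.Text;

class QrngClient
{
    static void Main()
    {
        var cookies = new CookieContainer();

        // 1. POST the login form; the server's session cookie lands in 'cookies'.
        //    The URL and field names are guesses -- inspect the real login form.
        var login = (HttpWebRequest)WebRequest.Create("https://qrng.physik.hu-berlin.de/");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.ASCII.GetBytes("username=testuser&password=testpass");
        login.ContentLength = body.Length;
        using (Stream s = login.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }
        using (login.GetResponse()) { } // we only need the cookie, not the body

        // 2. Reuse the same CookieContainer for the download request.
        var get = (HttpWebRequest)WebRequest.Create(
            "http://qrng.physik.hu-berlin.de/download/sampledata-1MB.bin");
        get.CookieContainer = cookies;
        using (WebResponse response = get.GetResponse())
        using (Stream data = response.GetResponseStream())
        using (FileStream file = File.Create("sampledata-1MB.bin"))
        {
            data.CopyTo(file); // .NET 4+
        }
    }
}

The key point is that both requests share the same CookieContainer instance, so the session established by the POST is still visible to the GET.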
Related
My aim is to get content from a website (for instance, a league table from a sports website) and put it in a .txt file so that I can code against a local file.
I have tried multiple snippets and other examples, such as:
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("http://www.stackoverflow.com");
// prepare the web page we will be asking for
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create("http://www.stackoverflow.com");
// execute the request
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// we will read data via the response stream
Stream resStream = response.GetResponseStream();
string tempString = null;
int count = 0;
// buffer and builder for accumulating the page content
StringBuilder sb = new StringBuilder();
byte[] buf = new byte[8192];
do
{
    // fill the buffer with data
    count = resStream.Read(buf, 0, buf.Length);
    // make sure we read some data
    if (count != 0)
    {
        // translate from bytes to ASCII text
        tempString = Encoding.ASCII.GetString(buf, 0, count);
        // continue building the string
        sb.Append(tempString);
    }
} while (count > 0); // any more data to read?
My issue when trying this is that the words request and response are underlined in red and all the tokens are invalid.
Is there a better method to get content from a website to a .txt file or is there a way to fix the code supplied?
Thanks
is there a way to fix the code supplied?
The code you submitted works for me; make sure you have the proper namespaces defined.
In this case: using System.Net;
Or might it be that the duplicate declaration of the request variable isn't a typo?
If so, remove one of the request variables.
Is there a better method to get content from a website to a .txt file
Since you're reading all the content from the site anyway, there isn't really a need for the while loop. Instead you can use the ReadToEnd method supplied by the StreamReader:
string siteContent = "";
using (StreamReader reader = new StreamReader(resStream)) {
siteContent = reader.ReadToEnd();
}
Also be sure to dispose of the WebResponse; other than that, your code should work fine.
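Putting both points together, a minimal sketch of the whole fetch, using the same stackoverflow.com URL from your snippet and writing to a hypothetical site.txt so you end up with the local file you wanted:

using System;
using System.IO;
using System.Net;

class Fetch
{
    static void Main()
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.stackoverflow.com");
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream resStream = response.GetResponseStream())
        using (StreamReader reader = new StreamReader(resStream))
        {
            // read the whole page in one go and save it for offline use
            string siteContent = reader.ReadToEnd();
            File.WriteAllText("site.txt", siteContent);
        }
    }
}

The using blocks dispose of the response, stream, and reader even if an exception is thrown.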
Sorry for my ignorance; you'll have to explain things to me, as I'm treading in new waters. I have some background in Java, but mostly PHP and JavaScript.
http://www.codeproject.com/Articles/19971/How-to-attach-to-Browser-Helper-Object-BHO-with-C
I followed this article with some of my own modifications, and my question is, specifically: how do I detect the "top level frame" of the webpage, i.e. the parent document? Any code I execute in OnDocumentComplete also runs when the iframes on the page complete.
My function and the solution I implemented aren't actually producing the correct results.
public class BHO : IObjectWithSite
{
    WebBrowser webBrowser;
    HTMLDocument document;

    public void OnDocumentComplete(object pDisp, ref object URL)
    {
        document = (HTMLDocument)webBrowser.Document;
        string href = document.location.href;
        // get top level page
        if (href == URL.ToString())
        {
            HttpWebRequest WebReq = (HttpWebRequest)WebRequest.Create("http://mysite.com");
            WebReq.Method = "POST";
            WebReq.ContentType = "application/x-www-form-urlencoded";
            byte[] buffer = Encoding.ASCII.GetBytes("string");
            WebReq.ContentLength = buffer.Length;
            Stream PostData = WebReq.GetRequestStream();
            PostData.Write(buffer, 0, buffer.Length);
            PostData.Close();
            // Send the request and read the JSON response.
            HttpWebResponse WebResp = (HttpWebResponse)WebReq.GetResponse();
            StreamReader streamResponse = new StreamReader(WebResp.GetResponseStream(), true);
            string Response = streamResponse.ReadToEnd();
            Newtonsoft.Json.Linq.JObject json = Newtonsoft.Json.Linq.JObject.Parse(Response);
            string active = json["active"].ToString();
            // print to screen
            System.Windows.Forms.MessageBox.Show(active, "Title");
        }
    }
}
Checking whether document.location.href matches URL works in most cases but isn't guaranteed, so I end up with multiple web requests and popups on a single page load.
The easiest way is to store the web browser object (IWebBrowser2) in an object property in the SetSite method (the examples are in C++ but should be straightforward to translate to C#):
CComQIPtr<IServiceProvider> pServiceProvider(pUnkSite);
if (!pServiceProvider) {
    return E_FAIL;
}
pServiceProvider->QueryService(SID_SWebBrowserApp, IID_IWebBrowser2, (LPVOID*)&m_WebBrowser.p);
if (!m_WebBrowser) {
    return E_FAIL;
}
This will store the browser pointer in the object member m_WebBrowser. Then you can compare with the pDisp parameter to OnDocumentComplete:
CComQIPtr<IWebBrowser2> webBrowser(pDisp);
if (webBrowser == m_WebBrowser) {
    // This is the top-level page.
}
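For reference, an untested C# sketch of the same idea. The SameComObject helper is my own name, and storing the site object directly assumes (as in the CodeProject sample) that the object passed to SetSite is the browser itself:

using System;
using System.Runtime.InteropServices;

public class BhoSketch // : IObjectWithSite
{
    private object siteBrowser; // the browser object handed to SetSite

    public void SetSite(object site)
    {
        // For a BHO, the site object is the hosting browser, so just keep it.
        siteBrowser = site;
    }

    // COM identity test: two references point at the same COM object
    // exactly when their canonical IUnknown pointers are equal.
    private static bool SameComObject(object a, object b)
    {
        if (a == null || b == null) return false;
        IntPtr pa = Marshal.GetIUnknownForObject(a);
        IntPtr pb = Marshal.GetIUnknownForObject(b);
        try { return pa == pb; }
        finally { Marshal.Release(pa); Marshal.Release(pb); }
    }

    public void OnDocumentComplete(object pDisp, ref object URL)
    {
        if (SameComObject(pDisp, siteBrowser))
        {
            // Top-level page is done; fire the web request exactly once here.
        }
    }
}

Comparing the canonical IUnknown pointers is the COM-defined identity test; comparing the managed references directly can give false negatives if the runtime has created two wrappers for the same COM object.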
I'm trying to obtain an image to encode into a WordML document. The original version of this function used files, but I needed to change it to get images created on the fly by an aspx page. I've adapted the code to use HttpWebRequest instead of a WebClient. The problem is that I don't think the page request is being resolved, so the image stream is invalid, generating the error "parameter is not valid" when I invoke Image.FromStream.
public string RenderCitationTableImage(string citation_table_id)
{
    string image_content = "";
    string _strBaseURL = String.Format("http://{0}",
        HttpContext.Current.Request.Url.GetComponents(UriComponents.HostAndPort, UriFormat.Unescaped));
    string _strPageURL = String.Format("{0}{1}", _strBaseURL,
        ResolveUrl("~/Publication/render_citation_chart.aspx"));
    string _staticURL = String.Format("{0}{1}", _strBaseURL,
        ResolveUrl("~/Images/table.gif"));
    string _fullURL = String.Format("{0}?publication_id={1}&citation_table_layout_id={2}",
        _strPageURL, publication_id, citation_table_id);
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(_fullURL);
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream image_stream = response.GetResponseStream();
        // Read the image data
        MemoryStream ms = new MemoryStream();
        int num_read;
        byte[] buffer = new byte[1024];
        while ((num_read = image_stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, num_read);
        }
        // Base64-encode the image data
        byte[] image_bytes = ms.ToArray();
        string encodedImage = Convert.ToBase64String(image_bytes);
        ms.Position = 0;
        System.Drawing.Image image_original = System.Drawing.Image.FromStream(ms); // <-- error here: parameter is not valid
        image_stream.Close();
        image_content = string.Format("<w:p>{4}<w:r><w:pict><w:binData w:name=\"wordml://{0}\">{1}</w:binData>" +
            "<v:shape style=\"width:{2}px;height:{3}px\">" +
            "<v:imagedata src=\"wordml://{0}\"/>" +
            "</v:shape>" +
            "</w:pict></w:r></w:p>", _word_image_id, encodedImage, 800, 400, alignment.center);
        image_content = "<w:br w:type=\"text-wrapping\"/>" + image_content + "<w:br w:type=\"text-wrapping\"/>";
    }
    catch (Exception ex)
    {
        return ex.ToString();
    }
    return image_content;
}
Using the static URL it works fine. If I replace _staticURL with _fullURL in the WebRequest.Create call, I get the error. Any ideas as to why the page request doesn't fully resolve?
And yes, the full URL resolves fine and shows an image if I paste it into the address bar.
UPDATE:
Just read your updated question. Since you're running into login issues, try doing this before you execute the request:
request.Credentials = CredentialCache.DefaultCredentials;
If this doesn't work, then perhaps the problem is that authentication is not being enforced on static files, but is being enforced on dynamic files. In this case, you'll need to log in first (using your client code) and retain the login cookie (using HttpWebRequest.CookieContainer on the login request as well as on the second request) or turn off authentication on the page you're trying to access.
ORIGINAL:
Since it works with one HTTP URL and doesn't work with another, the place to start diagnosing this is figuring out what's different between the two requests, at the HTTP level, which accounts for the difference in behavior in your code.
To figure out the difference, I'd use Fiddler (http://fiddlertool.com) to compare the two requests. Compare the HTTP headers. Are they the same? In particular, are they the same HTTP content type? If not, that's likely the source of your problem.
If headers are the same, make sure both the static and dynamic image are exactly the same content and file type on the server. (e.g. use File...Save As to save the image in a browser to your disk). Then use Fiddler's Hex View to compare the image content. Can you see any obvious differences?
Finally, I'm sure you've already checked this, but just making sure: /Publication/render_citation_chart.aspx refers to an actual image file, not an HTML wrapper around an IMG element, right? This would account for the behavior you're seeing, where a browser renders the image OK but your code doesn't.
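If you'd rather check this from code than from Fiddler, a small sketch along these lines (probe is just an illustrative name; _fullURL is the variable from the question) will reveal whether the dynamic page is returning image bytes or an HTML error page:

HttpWebRequest probe = (HttpWebRequest)WebRequest.Create(_fullURL);
using (HttpWebResponse response = (HttpWebResponse)probe.GetResponse())
{
    // If this prints text/html rather than image/gif (or similar),
    // Image.FromStream will fail with "parameter is not valid".
    Console.WriteLine("Content-Type: {0}", response.ContentType);
    if (!response.ContentType.StartsWith("image/"))
    {
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // Dump the body -- most likely an HTML error or redirect page.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}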
In my application I use the WebClient class to download files from a web server by simply calling the DownloadFile method. Now I need to check whether a certain file exists prior to downloading it (or in case I just want to make sure that it exists). I've got two questions with that:
What is the best way to check whether a file exists on a server without transferring too much data across the wire? (It's quite a large number of files I need to check.)
Is there a way to get the size of a given remote file without downloading it?
Thanks in advance!
WebClient is fairly limited; if you switch to using WebRequest, then you gain the ability to send an HTTP HEAD request. When you issue the request, you should either get an error (if the file is missing), or a WebResponse with a valid ContentLength property.
Edit: Example code:
WebRequest request = WebRequest.Create(new Uri("http://www.example.com/"));
request.Method = "HEAD";
using (WebResponse response = request.GetResponse())
{
    Console.WriteLine("{0} {1}", response.ContentLength, response.ContentType);
}
When you request a file using the WebClient class, a 404 error (File Not Found) will lead to an exception. The best way is to handle that exception and use a flag to record whether the file exists or not.
The example code goes as follows:
int flag = 0;
System.Net.HttpWebRequest request = null;
System.Net.HttpWebResponse response = null;
request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create("http://www.example.com/somepath");
request.Timeout = 30000;
try
{
    response = (System.Net.HttpWebResponse)request.GetResponse();
    flag = 1;
}
catch
{
    flag = -1;
}
if (flag == 1)
{
    Console.WriteLine("File Found!!!");
}
else
{
    Console.WriteLine("File Not Found!!!");
}
You can put your code in the respective if blocks.
Hope it helps!
What is the best way to check whether a file exists on a server
without transfering to much data across the wire?
You can test with WebClient.OpenRead to open the file stream without reading all the file bytes:
using (var client = new WebClient())
{
Stream stream = client.OpenRead(url);
// ^ throws System.Net.WebException: 'Could not find file...' if file is not present
stream.Close();
}
This will indicate if the file exists at the remote location or not.
To fully read the file stream, you would do:
using (var client = new WebClient())
{
Stream stream = client.OpenRead(url);
StreamReader sr = new StreamReader(stream);
Console.WriteLine(sr.ReadToEnd());
stream.Close();
}
In case anyone is stuck with an SSL certificate issue:
// Accepts any certificate -- this disables validation, so use for testing only.
ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(
    delegate { return true; }
);
WebRequest request = WebRequest.Create(new Uri("http://.com/flower.zip"));
request.Method = "HEAD";
using (WebResponse response = request.GetResponse())
{
Console.WriteLine("{0} {1}", response.ContentLength, response.ContentType);
}
Here's the purpose of my console program: Make a web request > Save results from web request > Use QueryString to get next page from web request > Save those results > Use QueryString to get next page from web request, etc.
So here's some pseudocode for how I set the code up.
for (int i = 0; i < 3; i++)
{
    string strPageNo = Convert.ToString(i);
    // creates the url I want, with incrementing pages
    string strURL = "http://www.website.com/results.aspx?page=" + strPageNo;
    // makes the web request
    WebRequest wrGETURL = WebRequest.Create(strURL);
    // gets the web page for me
    Stream objStream = wrGETURL.GetResponse().GetResponseStream();
    // for reading web page
    StreamReader objReader = new StreamReader(objStream);
    //--------
    // -snip- code that saves it to file, etc.
    //--------
    objReader.Close();
    objStream.Close();
    // so the server doesn't get hammered
    System.Threading.Thread.Sleep(1000);
}
Pretty simple, right? The problem is, even though it increments the page number to get a different web page, I'm getting the exact same results page each time the loop runs.
i IS incrementing correctly, and I can cut/paste the url strURL creates into a web browser and it works just fine.
I can manually type in &page=1, &page=2, &page=3, and it'll return the correct pages. Somehow putting the increment in there screws it up.
Does it have anything to do with sessions, or what? I make sure I close both the stream and the reader before it loops again...
Have you tried creating a new WebRequest object each time through the loop? It could be that the Create() method isn't adequately flushing out all of its old data.
Another thing to check is that the ResponseStream is adequately flushed out before the next loop iteration.
This code works fine for me:
var urls = new[] { "http://www.google.com", "http://www.yahoo.com", "http://www.live.com" };
foreach (var url in urls)
{
    WebRequest request = WebRequest.Create(url);
    using (Stream responseStream = request.GetResponse().GetResponseStream())
    using (Stream outputStream = new FileStream("file" + DateTime.Now.Ticks.ToString(), FileMode.Create, FileAccess.Write, FileShare.None))
    {
        const int chunkSize = 1024;
        byte[] buffer = new byte[chunkSize];
        int bytesRead;
        while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            byte[] actual = new byte[bytesRead];
            Buffer.BlockCopy(buffer, 0, actual, 0, bytesRead);
            outputStream.Write(actual, 0, actual.Length);
        }
    }
    Thread.Sleep(1000);
}
Just a suggestion: try disposing of the Stream and the Reader. I've seen some weird cases where not disposing objects like these and reusing them in loops can yield some wacky results.
That URL doesn't quite make sense to me unless you are using MVC or something else that can interpret the query string correctly.
http://www.website.com/results.aspx&page=
should be:
http://www.website.com/results.aspx?page=
Some browsers will accept poorly formed URLs and render them fine, while others may not; that may be the problem with your console app.
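One way to sidestep malformed query strings entirely is to let UriBuilder assemble the URL; a quick sketch using the results.aspx address from the question (i is the loop counter):

// UriBuilder inserts the '?' separator for you, so the query string
// can't end up malformed.
var builder = new UriBuilder("http://www.website.com/results.aspx");
builder.Query = "page=" + i;   // yields ...results.aspx?page=0, ?page=1, ...
WebRequest wrGETURL = WebRequest.Create(builder.Uri);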
Here's my terrible, hack-ish workaround solution:
Make another console app that calls this one, passing an argument that this app appends to the end of strURL. It works, but I feel so dirty.