I work in C# and so far I have used the WebRequest class to GET and POST data. I use Fiddler to check what the browser is doing, and I have now got to a point where the data is retrieved via Ajax after posting some data.
I am not sure whether I have to add a JavaScript file to my project, what code that file would need, or how to call it.
In essence, I have to post the data {"name":"ABCD"} to the URL www.example.com/Website.AJAX,Website.ashx.
Ajax is not so different from an ordinary request, so you can just post it as usual. The most likely problem is how the backend detects that it is an Ajax request (if it does at all).
Since it looks like you are using WebForms on the backend, you most likely just need to add a special header (X-Requested-With). Some frameworks add it automatically, though it is not a formal requirement of an Ajax request.
All in all, I would just post an ordinary request with WebRequest as you did before. If that does not work, you need to study the original request from the web UI to see what is different, e.g. a special header, or a request Content-Type of JSON, or something like that.
P.S. If you use JSON in the body, it's better to explicitly set the content type to application/json; charset=utf-8 unless there is something special about the server.
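For illustration, here's a minimal sketch of such a request using HttpWebRequest; the URL is the one from the question, while the JSON body and the X-Requested-With header are things you should confirm against your Fiddler capture:
// Requires: using System.IO; using System.Net; using System.Text;
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://www.example.com/Website.AJAX,Website.ashx");
req.Method = "POST";
req.ContentType = "application/json; charset=utf-8";
// Header many backends check to decide a request is Ajax (verify in Fiddler that the site sends it)
req.Headers.Add("X-Requested-With", "XMLHttpRequest");
byte[] body = Encoding.UTF8.GetBytes("{\"name\":\"ABCD\"}");
req.ContentLength = body.Length;
using (Stream reqStream = req.GetRequestStream())
{
    reqStream.Write(body, 0, body.Length);
}
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
{
    string result = reader.ReadToEnd(); // whatever the handler returns to its Ajax callers
}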
As far as I understand, the GET method asks the server to send something to the client's browser. I set up an HttpListener in C#, and when I access http://localhost:1330/form.html the request I get from the client is GET /form.html, which means the client is saying "Hey server, I need the HTML code to display that page in the browser". That makes sense.
If I set a <form> with method=POST in form.html, the input field values are located in the request body, which in C# is context.Request.InputStream, and look similar to this: input_name1=value&input_name2=value2&input_name3=value3... The URL remains /form.html.
This also makes sense. The client says: "Hey server, take this data that was written in the HTML <input> elements", and the server uses it, maybe storing it in a database or computing something and sending the result back to the client.
Now if I set the form method to GET, the URL is modified to /form.html?input_name1=value&input_name2=value2&input_name3=value3 and context.Request.InputStream remains blank, which is the opposite of the POST, where the InputStream contained the data and the URL had no query. To me, the GET method in forms doesn't make any sense. Why do we need to take the data from the form on the client side, send it to the server, and then get it back unmodified? Why do I send the data from the browser to C# and then send it back to the browser, if I can just read it client side using simple JavaScript?
The moment the browser makes the GET request with the query string, the client browser already has that data, so why does it ask the server for it if it is already in the client's browser?
Generally speaking, an HTTP GET method is used to receive data from the server, while an HTTP POST is used to modify data or add data to a resource.
For example, think about a search form. There may be some fields on the form used to filter the results, such as SearchTerm, Start/EndDate, Category, Location, IsActive, etc, etc. You're requesting the results from the server, but not modifying any of the data. Those fields will be added to the GET request by the client so the server can filter and return the results you requested.
From the MDN article Sending form data:
Each time you want to reach a resource on the Web, the browser sends a request to a URL. An HTTP request consists of two parts: a header that contains a set of global metadata about the browser's capabilities, and a body that can contain information necessary for the server to process the specific request.
GET requests do not have a request body, so the parameters are added to the URL (this is defined in the HTTP spec, if you're interested).
The GET method is the method used by the browser to ask the server to send back a given resource: "Hey server, I want to get this resource." In this case, the browser sends an empty body. Because the body is empty, if a form is sent using this method the data sent to the server is appended to the URL.
An HTTP POST method uses the request body to add the parameters. Typically in a POST you will be adding a resource, or modifying an existing resource.
The POST method is a little different. It's the method the browser uses to talk to the server when asking for a response that takes into account the data provided in the body of the HTTP request: "Hey server, take a look at this data and send me back an appropriate result." If a form is sent using this method, the data is appended to the body of the HTTP request.
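To see concretely where the same form data lands on the server, here is a minimal sketch using the HttpListener from the question (the port and field names are simply the ones used above):
// Requires: using System.IO; using System.Net;
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://localhost:1330/");
listener.Start();
HttpListenerContext context = listener.GetContext();
if (context.Request.HttpMethod == "GET")
{
    // The form data arrived in the URL: /form.html?input_name1=value&...
    string value1 = context.Request.QueryString["input_name1"];
}
else if (context.Request.HttpMethod == "POST")
{
    // The form data arrived in the body; the URL stays /form.html
    using (StreamReader reader = new StreamReader(context.Request.InputStream))
    {
        string body = reader.ReadToEnd(); // input_name1=value&input_name2=value2&...
    }
}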
There are plenty of resources online to learn about the HTTP protocol and HTTP verbs/methods. The MDN articles An overview of HTTP, Sending form data, and HTTP request methods should provide some good introductory reading material.
I've got an .ashx handler which, upon finishing processing will redirect to a success or error page, based on how the processing went. The handler is in my site, but the success or error pages might not be (this is something the user can configure).
Is there any way that I can pass the error details to the error page without putting it in the query string?
I've tried:
Adding a custom header that contains the error details, but since I'm using a Response.Redirect, the headers get cleared
Using Server.Transfer, instead of Response.Redirect, but this will not work for URLs not in my site
I know that I can pass data in the query string, but in some cases the data I need to pass might be too long for the query string. Do I have any other options?
Essentially, no. The only way to pass additional data in a GET request (i.e. a redirect) is to pass it in the query string.
The important thing to realise is that this is not a limitation of WebForms, this is just how HTTP works. If you're redirecting to another page that's outside of your site (and thus don't have the option of cookies/session data), you're going to have to send information directly in the request and that means using a query string.
Things like Server.Transfer and Response.Redirect are just abstractions over a simple HTTP request; no framework feature can defy how HTTP actually works.
You do, of course, have all kinds of options as to what you pass in the query string, but you're going to have to pass something. If you really want to shorten the URL, maybe you can pass an error code and expose an API that will let the receiving page fetch further information:
Store transaction information (or detailed error messages) in a database with an ID.
Pass the ID in the query string.
Expose a web method or similar API to allow the receiving page to request additional information.
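As a rough sketch of that idea (the store, method and parameter names here are hypothetical; a real implementation would use a database table):
// Requires: using System; using System.Collections.Generic; using System.Web;
// Hypothetical in-memory store standing in for a database table keyed by ID.
private static readonly Dictionary<Guid, string> ErrorStore = new Dictionary<Guid, string>();

private void RedirectWithError(HttpContext context, string errorPageUrl, string errorDetails)
{
    Guid id = Guid.NewGuid();
    ErrorStore[id] = errorDetails;                                // 1. store the details under an ID
    context.Response.Redirect(errorPageUrl + "?errorId=" + id);   // 2. pass only the ID in the query string
}

// 3. Expose something like GET /ErrorDetails.ashx?errorId=... so the receiving page
//    can fetch the full details for the ID behind the scenes.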
There are plenty of hacky ways you could create the illusion of passing data in a redirect outside of a form post (such as returning a page containing a form and JavaScript to immediately do a cross-domain form post), but the query string is the proper way of passing data in a GET request, so why try to hack around it?
If you must perform a redirect, you will need to pass some kind of information in the Query String, because that's how browser redirects work. You can be creative about how you pass it, though.
You could pass an error code, and have the consuming system know what various error codes mean.
You could pass a token, and have the consuming system know how to ask your system about the error information for the given token behind-the-scenes.
Also, if you have any flexibility around whether it's actually performing a redirect, you could use an AJAX request in the first place, and send back some kind of JSON object that the browser's JavaScript could interpret and send on via a POST parameter, or something like that.
A redirect is executed by most browsers as a GET, which means you'd have to put the data in the query string.
One trick (posted in two other answers) to do a "redirect" as a POST is to turn the response into a form that POSTs itself to the target site:
// Runs inside an ASP.NET page/handler; requires using System.Text; postbackUrl and id come from your own code.
Response.Clear();
StringBuilder sb = new StringBuilder();
sb.Append("<html>");
sb.Append(@"<body onload='document.forms[""form""].submit()'>");
sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
// POST values go here, one hidden input per value:
sb.AppendFormat("<input type='hidden' name='id' value='{0}'>", id);
sb.Append("</form>");
sb.Append("</body>");
sb.Append("</html>");
Response.Write(sb.ToString());
Response.End();
But I would read the comments on both to understand the limitations.
Basically, there are two usual HTTP ways to send some data: GET and POST.
When you redirect to another URL with additional parameters, you make the client browser send a GET request to the target server. Technically, your server responds to the browser with an HTTP redirect status code (such as 302) plus the URL to go to, including the GET parameters.
Alternatively, you may want/need to make a POST request to the target URL. In that case you should respond with a simple HTML form consisting of several hidden fields pre-filled with the values you want to pass. The form's action should point to the target URL, its method should be "POST", and your HTML should include JavaScript that automatically submits the form once the document is loaded, as in the snippet above. This way the client browser sends a POST request instead of a GET one.
I have recently started using libcurl.net in one of my projects as a replacement for the HttpWebRequest and HttpWebResponse classes. The reason I chose libcurl.net over the managed classes is that libcurl.net mimics the behavior of cURL from PHP, and I was porting over some code from PHP. I attempted to use the built-in managed classes, but the CookieContainer class was not capturing all of the cookies correctly from the website I was trying to capture cookies from. I may end up going back to the managed classes if I can figure out how to capture the cookies correctly.
My PHP script works perfectly fine in capturing cookies, so I ported most of the cURL functionality to my C# project using libcurl.net. The problem I'm having is with sending more than one request header via the CURLOPT_HTTPHEADER cURL option, where I have to use an Slist to pass the headers in, like so:
Slist headers = new Slist();
headers.Append("Content-Type: application/x-www-form-urlencoded");
headers.Append("X-Requested-With: XMLHttpRequest");
easy.SetOpt(CURLoption.CURLOPT_HTTPHEADER, headers);
I sometimes have to fake an AJAX request, but the X-Requested-With: XMLHttpRequest header does not seem to be passed with the request, as the website I'm scraping does not return any results for these "fake" AJAX requests. If I set CURLOPT_HTTPHEADER, do I need to set the Content-Type header myself, or does it always default to Content-Type: application/x-www-form-urlencoded?
It turns out that I was adding multiple headers correctly. I simply made an Slist object and added my headers to the request using the CURLOPT_HTTPHEADER option. In this way, one can "fake" AJAX requests or any other type of request sent by a web browser. The problem was that I wasn't sending the correct POST data with my request.
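For anyone hitting the same problem, a minimal sketch of sending the POST data alongside the headers might look like the following; CURLOPT_POSTFIELDS is the standard libcurl option for the request body, but verify the exact names against your version of the libcurl.net binding (the URL and body here are placeholders):
Easy easy = new Easy();
Slist headers = new Slist();
headers.Append("Content-Type: application/x-www-form-urlencoded");
headers.Append("X-Requested-With: XMLHttpRequest");
easy.SetOpt(CURLoption.CURLOPT_URL, "http://www.example.com/handler.ashx"); // placeholder URL
easy.SetOpt(CURLoption.CURLOPT_HTTPHEADER, headers);
// The body must match what the site's own JavaScript sends (check it in Fiddler):
easy.SetOpt(CURLoption.CURLOPT_POSTFIELDS, "name=ABCD&page=1"); // placeholder data
easy.Perform();
easy.Cleanup();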
I'm trying to get the raw data sent to IIS using an HttpHandler. However, because the request is a "GET" request without the "Content-Length" header set, it reports that there is no data to read (TotalBytes), and the InputStream is empty. Is there any way I can plug into the IIS pipeline (maybe even before the request is parsed) and just take control of the request and read its raw data? I don't care if I need to parse the headers and such myself; I just want to get my hands on the actual request and tell IIS to ignore it. Is that at all possible? Right now it looks like I need the alternative, which is developing a custom standalone server, and I really don't want to do that.
Most web servers will ignore (and rarely give you access to) the body of a GET request, because the HTTP semantics imply that it is to be ignored anyway. You should consider another method (for example POST or PUT).
See this question and the link in this answer:
HTTP GET with request body
I'm doing some automation work and can make my way around a site & post to HTML forms okay, but now I'm up against a new challenge, Ajax forms.
Since there's no source to read, I'm left wondering whether it's possible to fill in an Ajax form programmatically, in C#. I'm currently using a non-visible axWebBrowser.
Thanks in advance for your help!
Yes, but I recommend using a different approach to requesting/responding to the server pages, including both the regular pages and the AJAX handler pages.
In C#, try using the WebRequest/WebResponse or the more specialized HttpWebRequest/HttpWebResponse classes.
Ajax is no more than a "fancy" name for a technique that allows JavaScript to make HTTP requests to a server, which usually implements handlers that produce specialized, lightweight content for the JavaScript caller (commonly encoded as JSON).
Therefore, in order to simulate AJAX calls, all you have to do is inspect your target application (the web page that you want to "post" to) and see what format is used for the AJAX communications - then replicate the page's JavaScript behavior from C# using the WebRequest/WebResponse classes.
See Firebug - a great tool that allows you to inspect a web page to determine what calls it makes, to which pages and what those pages respond. It does a pretty good job at inspecting AJAX calls too.
Here's a very simple example of how to do a web request:
// Requires: using System.Net; using System.IO; using System.Text;
HttpWebRequest wReq = (HttpWebRequest)WebRequest.Create("http://www.mysite.com");
using (HttpWebResponse resp = (HttpWebResponse)wReq.GetResponse())
{
// NOTE: A better approach would be to use the encoding returned by the server in
// the Response headers (I'm using UTF 8 for brevity)
using (StreamReader sr = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
{
string content = sr.ReadToEnd();
// Do something with the content
}
}
A POST is also a request, but with a different method. See this page for an example of how to do a very simple post.
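In the same spirit, here's a minimal sketch of a simple form POST (the URL and form fields are placeholders):
// Requires: using System.IO; using System.Net; using System.Text;
HttpWebRequest pReq = (HttpWebRequest)WebRequest.Create("http://www.mysite.com/handler");
pReq.Method = "POST";
pReq.ContentType = "application/x-www-form-urlencoded";
byte[] data = Encoding.UTF8.GetBytes("field1=value1&field2=value2"); // placeholder fields
pReq.ContentLength = data.Length;
using (Stream reqStream = pReq.GetRequestStream())
{
    reqStream.Write(data, 0, data.Length);
}
using (HttpWebResponse pResp = (HttpWebResponse)pReq.GetResponse())
using (StreamReader sr = new StreamReader(pResp.GetResponseStream(), Encoding.UTF8))
{
    string content = sr.ReadToEnd();
}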
EDIT - Details on Inspecting the page behavior with Firebug
What I mean by inspecting the page you're trying to replicate is to use a tool (I use Firebug - on Firefox) to determine the flow of information between the page and the server.
With Firebug, you can do this by using the "Net" and "Console" panels. The Net panel lists all requests executed by the browser while loading the page, while the Console panel lists the communications between the page and the server that take place after the page has loaded. Those post-load communications are essentially the AJAX calls that you'll want to replicate. (Note: network monitoring has to be enabled in Firebug for this to work.)
Check out Michael Sync's tutorial to learn more about Firebug and experiment with the Console panel to learn more about the AJAX requests.
Regarding "replicate the page's behavior from C# using the WebRequest/WebResponse" - what you have to realize is that like I said earlier, the Javascript AJAX call is nothing more than an HTTP Request. It's an HTTP Request that the Javacript makes "behind the scenes", or out-of-band, to the web server. To replicate this, it is really no different than replicating a normal GET or a normal POST like I showed above. And this is where Firebug comes in to play. Using it you can view the requests, as the Javascript makes them - look at the Console panel, and see what the Request message looks like.
Then you can use the same technique as above, using the HttpWebRequest/HttpWebResponse to make the same type of request as the Javascript does, only do it from C# instead.
Gregg, I hope this clarifies my answer a little bit but beyond this I suggest playing with Firebug and maybe learning more about how the HTTP protocol works and how AJAX works as a technology.
Have you looked at using Selenium? AFAIK, you can write the test cases in C#, and I know our testers have successfully used it before to UI-test an Ajax-enabled ASP.NET site.
http://seleniumhq.org/
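For reference, a minimal sketch of driving an Ajax form with Selenium's C# WebDriver bindings; the URL, element IDs, and wait condition are hypothetical and would need to match the real page:
// Requires the Selenium WebDriver and Selenium Support NuGet packages.
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;
using System;

IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("http://www.example.com/form");  // hypothetical URL
driver.FindElement(By.Id("name")).SendKeys("ABCD");        // hypothetical field ID
driver.FindElement(By.Id("submit")).Click();               // hypothetical button ID

// Ajax forms update the page without a full reload, so wait for the result to appear.
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement result = wait.Until(d => d.FindElement(By.Id("result"))); // hypothetical ID
Console.WriteLine(result.Text);
driver.Quit();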