I have a legacy 3rd-party application which submits data to our internal sales system. It exposes an ASP page with a form to the internet as follows:
<form id="ServiceRequest" enctype="multipart/form-data" method="post" action="AddToServiceRequest.csp">
where AddToServiceRequest.csp is a proprietary IIS handler.
Right now we embed this form into our ASP.NET 4 website using an iframe, and that is really inconvenient. What I want to do is replace it with a native form, do all the validation, etc., and then call the AddToServiceRequest.csp handler from code-behind logic. What's the right way to do this? I can only think of something like this:
var r = (HttpWebRequest)WebRequest.Create("http://localhost/AddToServiceRequest.csp");
r.Method = "POST";
r.KeepAlive = false;
// fill in form data
var res = r.GetResponse();
res.Close();
but it just does not look "right" to me. Are there any other ways?
If the handler serving the request belongs to some other site (from IIS's point of view), then its code will run in a separate process or a separate AppDomain and you will have no reasonable way to call it directly.
If the handler is registered for the same site as yours, you may be able to invoke it directly - i.e. if it is an ASP.NET class that handles the request, then it is just a class implementing an interface with a couple of methods, and you may be able to instantiate and execute it directly. Note that many handlers depend on HttpContext.Current, and you may not be able to set up the request reasonably for such calls.
It is also unlikely that you can register the same handler for your own site, as most handlers/controllers/forms are designed to work with a particularly configured site (i.e. Web.config with the DB connection info).
So making a direct web request is the most straightforward solution. I would not try any other way, as most web code will not handle unusual forms of invocation correctly.
You may consider HttpClient instead of WebRequest to get easier async support (assuming .NET 4.5+), but any way of setting up the request is fine.
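For example, a minimal sketch of posting the form with HttpClient (assuming .NET 4.5+; the field names are placeholders for whatever AddToServiceRequest.csp actually expects):

using System.Net.Http;
using System.Threading.Tasks;

public static async Task SubmitServiceRequestAsync()
{
    using (var client = new HttpClient())
    using (var content = new MultipartFormDataContent()) // the .csp form uses enctype="multipart/form-data"
    {
        content.Add(new StringContent("John Doe"), "CustomerName");       // hypothetical field
        content.Add(new StringContent("Example request"), "Description"); // hypothetical field

        var response = await client.PostAsync("http://localhost/AddToServiceRequest.csp", content);
        response.EnsureSuccessStatusCode();
    }
}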
Note that if the site uses Windows Authentication, you may not be able to pass the user information via the web request.
Got a bit of an odd problem. Here goes:
I have two ASP.NET applications: A web app and a web service app.
Information arriving via the web service affects data in the database used by the web app.
One particular bit of data controls items in a drop down menu - when the data is altered in the app it can call:
HttpContext.Current.Cache.Remove
but I now need to clear that cache from the web service, as it can receive messages which update that information.
Can anyone recommend a way of doing this?
Cache invalidation can be hard. Off the top of my head I can think of 3 solutions of varying complexity which may or may not work for you.
First, you could write a web service for the web app that the web service app calls to invalidate the cache. This is probably the hardest.
Second, you could have the web service app write a "dirty" flag in the database that the web app could check before it renders the drop-down menu (a rough sketch follows below). This is the route I would go.
Third, you could simply stop caching that particular data.
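A rough sketch of the second ("dirty flag") approach, checked by the web app before it renders the menu (the table, column, and cache key names are all made up):

using System.Data.SqlClient;

// Hypothetical check run by the web app before rendering the drop-down menu.
static bool MenuIsDirty(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT IsDirty FROM CacheFlags WHERE CacheKey = 'DropDownMenu'", conn))
    {
        conn.Open();
        return (bool)cmd.ExecuteScalar();
    }
}

// Usage in the page, e.g.:
//     if (MenuIsDirty(connectionString) || Cache["DropDownMenu"] == null)
//     {
//         Cache["DropDownMenu"] = LoadMenuItems();   // hypothetical loader; reset the flag here too
//     }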
You could have a web method whose sole purpose is to clear the cache.
var webRequest = WebRequest.Create(clearCacheURL);
using (var webResponse = webRequest.GetResponse())
using (var sr = new System.IO.StreamReader(webResponse.GetResponseStream()))
{
    // receive the response and return it as the function result
    var result = sr.ReadToEnd();
}
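On the web app side, the endpoint behind clearCacheURL could be as simple as a generic handler that removes the entry. A minimal sketch (the handler class and cache key are made-up names):

using System.Web;

// e.g. registered as ClearCache.ashx in the web app
public class ClearCacheHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // "DropDownMenuItems" is a placeholder for whatever key the web app caches under.
        context.Cache.Remove("DropDownMenuItems");
        context.Response.ContentType = "text/plain";
        context.Response.Write("cache cleared");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}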
Implement the cache with an expiry time.
Cache.Insert("DSN", connectionString, null,
DateTime.Now.AddMinutes(2), Cache.NoSlidingExpiration);
Cache.Insert Method
You can try SqlDependency. It will trigger an event when the table you have subscribed to has any changes.
https://www.codeproject.com/Articles/12335/Using-SqlDependency-for-data-change-events
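A very rough sketch of what that looks like (assuming Service Broker is enabled on the database; the table, columns, and cache key are placeholders, and note the subscription is one-shot, so real code would re-subscribe from the handler):

using System.Data.SqlClient;
using System.Web;

public static void SubscribeToMenuChanges(string connectionString)
{
    SqlDependency.Start(connectionString);   // call once at application startup

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT ItemId, ItemText FROM dbo.MenuItems", conn))
    {
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // The table changed - drop the cached copy so it is rebuilt on the next request.
            HttpRuntime.Cache.Remove("DropDownMenuItems");
        };

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // The command must be executed for the notification subscription to register.
        }
    }
}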
I am developing a small application in ASP.NET (written in C#).
In my application I am using jQuery to perform asynchronous calls to the server.
I have an HTTP handler that listens for these requests and does what it needs to do.
Problems start when, in the handler, I need to access information stored in the page from which the asynchronous call started. When I try this:
Page page = HttpContext.Current.Handler as Page;
I don't get a page.
How else can I access the page itself?
Thank you
You have a slight design issue. The Page class IS an HttpHandler. It is in fact the default HttpHandler that handles requests. When you define your own HttpHandler, there is no Page class... and hence no Master either.
If you need to access information from a different page, you need to do that via the normal ASP.NET mechanisms... Session, Cache, etc.
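For example, a minimal sketch of the Session route: the page stores the value before the client-side call is made, and the handler reads it back (the key name is made up; the handler needs IRequiresSessionState for Session to be available):

using System.Web;
using System.Web.SessionState;

// In the page's code-behind, before the jQuery call is triggered:
//     Session["SelectedCustomerId"] = customerId;   // hypothetical key

public class MyAjaxHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Read the value the page stored - no Page instance is needed.
        object customerId = context.Session["SelectedCustomerId"];
        context.Response.ContentType = "text/plain";
        context.Response.Write(customerId);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}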
You can create a new instance of the page.
SomePage page = new SomePage();
Please consider the following scenario,
There are two web applications, App1 and App2. A user would submit his information on App1 through a form. On clicking a specific button/link on App1, the same data should be posted to a page on App2 and the user should also be redirected to that page on App2.
I would like some help in finding out the best way to implement this functionality.
One of the approaches that I have already tried is creating a temporary HTML form at runtime, setting the form's action attribute to the App2 page and submitting the form using JavaScript. The data can then be fetched on the App2 page using the Request.Form collection.
This approach works well, but i was still wondering if there is any other way to implement the required functionality.
I would really appreciate some insights on using RESTful web services to implement this, or on using an HttpModule to intercept requests at App1 and modify the redirect response to App2, or any other approach that you might find fit for the purpose.
Edit:
Using the query string isn't an option for me.
I've had a need to do similar things with feed aggregation and building RSS feeds from web page content on different domains.
The user gets the App1 page, fills in the details and submits; then on the server for App1 I have a method that looks like this ...
// Requires System.Net (WebClient) and a reference to the mshtml COM interop assembly.
HTMLDocument FetchURL(string url)
{
    WebClient wc = new WebClient();
    string remoteContent = wc.DownloadString(url);

    // mshtml's API is very weird, but let's just say you have to do things this way ...
    HTMLDocument doc = new HTMLDocument();
    IHTMLDocument2 doc2 = (IHTMLDocument2)doc;
    doc2.write(new object[] { remoteContent });
    return (HTMLDocument)doc2;
}
This function does 2 things of use ...
It gets the page of content at "url"
It parses that content in to a HTMLDocument object
Once you have this function you can then call it, passing it the URL of the remote page, and get back an HTML document.
The functions on the HTMLDocument object allow you to do JavaScript-like DOM queries such as:
docObject.getElementById("id");
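For example, a hypothetical use of the FetchURL helper above (the element id is made up):

using mshtml;

HTMLDocument doc = FetchURL("http://example.com/some-page");
IHTMLElement title = ((IHTMLDocument3)doc).getElementById("question-title");
if (title != null)
{
    Console.WriteLine(title.innerText);
}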
I then have different functions that do different things with this object based on the page/site I'm returning data from.
There is however one fatal flaw here ...
This is likely to work really well with sites that don't change much in structure and are built by code but not so well on less dynamic sites.
With Stack Overflow, for example, it's easy to pull out a question and the accepted answer, so I could use this code to pull and publish content from here on my own web site.
However ...
This is not going to help you with user/login-related details, as this sort of information is not generally shared with everyone.
It's a bit like me trying this to link Facebook profiles to my own website; I would have to go through some form of API that asks the user to authenticate before making the request.
Simply pulling a web page based on a URL gives the other site no authentication information, unless that site accepts the user's login details in the query string and you already have them.
You may, however, be able to chain requests by ripping apart my sample method: requesting the login page, parsing the results, filling in the form, posting back using the same WebClient instance to log in, and then requesting the URL.
The idea is that you would have a form on your site that asks the user to enter their login details for the remote site, and then you go and find their profile page based on that.
This would be best farmed out to a class rather than just a simple method like I have here.
In my case, though, I was only after something simple (the BBC Top 40 UK charts), for which I pulled information not only from the BBC but also from places like Amazon, Google, and YouTube, and then built a page :)
It's neat, but serves no functional purpose other than pulling all your favourite sources of info onto one page.
If you are already committed to using JavaScript, then why not do an AJAX post and change window.location based on the response?
You can use HttpServerUtility.Transfer; this will preserve your form contents and transfer the user to the new page.
http://msdn.microsoft.com/en-us/library/system.web.httpserverutility.transfer.aspx
I have built something like what you are describing, and I found that using a <form> tag to POST to app2 is the most reliable way... basically, the way you found that worked well.
If App2 is residing on a different domain, it's usually best to create your own interface for the submission, and have that interface handle the posting from App1 to App2.
(Browser) -> submits form to App1
(App1)    -> validates input
          -> stores local info
          -> creates an HttpRequest/POST object
          -> posts to App2
(App2)    -> handles the post
          <- returns the response
(App1)    -> confirms the results of App2
          <- returns the results to the browser
In essence, you want to control and proxy requests from your application's domain to any outside interfaces as much as possible.
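A minimal sketch of the proxying step in App1's code-behind (the App2 URL and field names are placeholders):

using System.Collections.Generic;
using System.Collections.Specialized;
using System.Net;
using System.Text;

// Called by App1 after local validation; forwards the user's data to App2.
static string PostToApp2(IDictionary<string, string> fields)
{
    using (var client = new WebClient())
    {
        var form = new NameValueCollection();
        foreach (var pair in fields)
            form.Add(pair.Key, pair.Value);

        byte[] responseBytes = client.UploadValues("http://app2.example.com/Receive.aspx", "POST", form);
        return Encoding.UTF8.GetString(responseBytes);   // App2's response, to confirm the results
    }
}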
Note: I'm answering my own question just to have a correct answer marked against it. All the suggestions provided by various members here are correct in their own way, but they were not apt for my requirements. Hence, I can't accept any of them as correct.
The way I have implemented it is by creating a custom control which has a configurable property containing the URL to post the data to, and another one accepting a dictionary object as the data to be posted.
This control internally creates an HTML form with its action attribute set to the URL specified by the user, and with the data fields created from the dictionary object. This form is then posted on the button click event of the page hosting this control.
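Something along these lines, as a rough sketch of the control described above (the class name, properties, and script handling are illustrative, not the exact implementation):

using System.Collections.Generic;
using System.Web;
using System.Web.UI;

// Renders a hidden HTML form targeting PostUrl, filled from the Data dictionary.
public class CrossSitePostControl : Control
{
    public string PostUrl { get; set; }
    public IDictionary<string, string> Data { get; set; }

    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<form id=\"crossPostForm\" method=\"post\" action=\"" +
                     HttpUtility.HtmlAttributeEncode(PostUrl) + "\">");
        foreach (var pair in Data)
        {
            writer.Write("<input type=\"hidden\" name=\"" + HttpUtility.HtmlAttributeEncode(pair.Key) +
                         "\" value=\"" + HttpUtility.HtmlAttributeEncode(pair.Value) + "\" />");
        }
        writer.Write("</form>");
        // In the real control the submit script is registered from the hosting page's button
        // click handler; emitting it here would post the form as soon as the page renders:
        // writer.Write("<script>document.getElementById('crossPostForm').submit();</script>");
    }
}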
I use Process.Start("firefox.exe", "http://localhost/page.aspx");
And how can I know whether the page fails or not?
OR
How can I know via HttpWebRequest/HttpWebResponse whether the page fails or not?
When I use
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create("somepage.aspx");
HttpWebResponse loWebResponse = (HttpWebResponse)myReq.GetResponse();
Console.Write("{0},{1}",loWebResponse.StatusCode, loWebResponse.StatusDescription);
how can I return error details?
I don't need additional plugins or frameworks; I want to solve this problem using only .NET.
Any ideas, please?
Use WatiN to automate Firefox instead of Process.Start. It's a browser automation framework that will let you monitor what is happening properly.
http://watin.sourceforge.net/
edit: see also Google Webdriver http://google-opensource.blogspot.com/2009/05/introducing-webdriver.html
If you are spawning a child-process, it is quite hard and you'd probably need to use each browser's specific API (it won't be the same between FF and IE, for example).
It doesn't help that in many cases the exe detects an existing instance and forwards the request there (so you can't trust the exit-code, since the page hasn't even been requested in the right exe yet).
Personally, I try to avoid assuming any particular browser for this scenario; just launch the url:
Process.Start("http://somesite.com");
This will use the user's default browser. You have to hope it appears though - you can't (reliably and robustly) check that externally without lots of work.
One other option is to read the data yourself (WebClient.Download*) - but this may have issues with complex cookies, login, user-agent awareness, etc.
Use the HttpWebRequest class or the WebClient class to check this. I don't think Process.Start will return anything if the URL does not exist.
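For example, GetResponse throws a WebException for HTTP error status codes, so you can catch it to get the status and the error page body:

using System;
using System.IO;
using System.Net;

try
{
    var req = (HttpWebRequest)WebRequest.Create("http://localhost/page.aspx");
    using (var resp = (HttpWebResponse)req.GetResponse())
    {
        Console.WriteLine("OK: {0} {1}", resp.StatusCode, resp.StatusDescription);
    }
}
catch (WebException ex)
{
    // For 4xx/5xx responses GetResponse throws, but the response is still available here.
    var errorResponse = (HttpWebResponse)ex.Response;
    if (errorResponse != null)
    {
        Console.WriteLine("Failed: {0} {1}", errorResponse.StatusCode, errorResponse.StatusDescription);
        using (var reader = new StreamReader(errorResponse.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());   // the error page body, e.g. the ASP.NET error details
        }
    }
}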
Don't start the page in this form. Instead, create a local http://localhost:<port>/wrapper.html which loads http://localhost/page.aspx and then either http://localhost:<port>/pass.html or http://localhost:<port>/fail.html. localhost:<port> is a trivial HTTP server interface implemented by your app.
The idea is that Javascript gives you an API inside the browser, which is far more standard than the APIs on the outside of browsers. Since the Javascript on wrapper.html comes from the same server and even port as the subsequent resources, this should satisfy the same-origin policies in current browsers.
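A rough sketch of the trivial local HTTP server part, using HttpListener (the port is arbitrary; wrapper.html itself would contain the JavaScript that loads page.aspx and then navigates to pass.html or fail.html):

using System;
using System.Net;
using System.Text;

static void RunWrapperServer()
{
    var listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8081/");   // arbitrary port
    listener.Start();

    while (true)
    {
        HttpListenerContext ctx = listener.GetContext();
        string path = ctx.Request.Url.AbsolutePath;

        if (path == "/pass.html")
            Console.WriteLine("page.aspx loaded successfully");
        else if (path == "/fail.html")
            Console.WriteLine("page.aspx failed to load");

        // Serve wrapper.html / pass.html / fail.html; real content omitted for brevity.
        byte[] body = Encoding.UTF8.GetBytes("<html><body>ok</body></html>");
        ctx.Response.OutputStream.Write(body, 0, body.Length);
        ctx.Response.Close();
    }
}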
I'm doing some automation work and can make my way around a site & post to HTML forms okay, but now I'm up against a new challenge, Ajax forms.
Since there's no source to read, I'm left wondering if it's possible to fill in an Ajax form programmatically, in C#. I'm currently using a non-visible axWebBrowser.
Thanks in advance for your help!
Yes, but I recommend a different approach to requesting and responding to the server's pages, including both the regular pages and the AJAX handler pages.
In C#, try using the WebRequest/WebResponse or the more specialized HttpWebRequest/HttpWebResponse classes.
Ajax is no more than a "fancy" name for a technology that allows Javascript to make HTTP requests to a server, which usually implements some handlers that produce specialized, light-weight content for the Javascript caller (commonly encoded as JSON).
Therefore, in order to simulate AJAX calls, all you have to do is inspect your target application (the web page that you want to "post" to) and see what format is used for the AJAX communications - then replicate the page's Javascript behavior from C# using the WebRequest/WebResponse classes.
See Firebug - a great tool that allows you to inspect a web page to determine what calls it makes, to which pages and what those pages respond. It does a pretty good job at inspecting AJAX calls too.
Here's a very simple example of how to do a web request:
HttpWebRequest wReq = (HttpWebRequest)WebRequest.Create("http://www.mysite.com");
using (HttpWebResponse resp = (HttpWebResponse)wReq.GetResponse())
{
// NOTE: A better approach would be to use the encoding returned by the server in
// the Response headers (I'm using UTF 8 for brevity)
using (StreamReader sr = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
{
string content = sr.ReadToEnd();
// Do something with the content
}
}
A POST is also a request, but with a different method. See this page for an example of how to do a very simple post.
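For completeness, a minimal POST sketch in the same style (the URL and form body are placeholders - copy the exact field names and format you see in Firebug):

using System.IO;
using System.Net;
using System.Text;

HttpWebRequest postReq = (HttpWebRequest)WebRequest.Create("http://www.mysite.com/ajaxhandler");
postReq.Method = "POST";
postReq.ContentType = "application/x-www-form-urlencoded";

byte[] body = Encoding.UTF8.GetBytes("action=search&term=foo");   // placeholder body
postReq.ContentLength = body.Length;
using (Stream reqStream = postReq.GetRequestStream())
{
    reqStream.Write(body, 0, body.Length);
}

using (HttpWebResponse postResp = (HttpWebResponse)postReq.GetResponse())
using (StreamReader reader = new StreamReader(postResp.GetResponseStream(), Encoding.UTF8))
{
    string content = reader.ReadToEnd();   // AJAX handlers typically return JSON here
}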
EDIT - Details on Inspecting the page behavior with Firebug
What I mean by inspecting the page you're trying to replicate is to use a tool (I use Firebug - on Firefox) to determine the flow of information between the page and the server.
With Firebug, you can do this by using the "Net" and "Console" panels. The Net panel lists all requests executed by the browser while loading the page, while the Console panel lists communications between the page and the server that take place after the page has loaded. Those later communications are essentially the AJAX calls that you'll want to replicate. (Note: network monitoring has to be enabled in Firebug for this to work.)
Check out Michael Sync's tutorial to learn more about Firebug and experiment with the Console panel to learn more about the AJAX requests.
Regarding "replicate the page's behavior from C# using the WebRequest/WebResponse" - what you have to realize is that like I said earlier, the Javascript AJAX call is nothing more than an HTTP Request. It's an HTTP Request that the Javacript makes "behind the scenes", or out-of-band, to the web server. To replicate this, it is really no different than replicating a normal GET or a normal POST like I showed above. And this is where Firebug comes in to play. Using it you can view the requests, as the Javascript makes them - look at the Console panel, and see what the Request message looks like.
Then you can use the same technique as above, using the HttpWebRequest/HttpWebResponse to make the same type of request as the Javascript does, only do it from C# instead.
Gregg, I hope this clarifies my answer a little bit but beyond this I suggest playing with Firebug and maybe learning more about how the HTTP protocol works and how AJAX works as a technology.
Have you looked at using Selenium? AFAIK, you can write the test cases in C#, and I know our testers have successfully used it before to UI-test an Ajax-enabled ASP.NET site.
http://seleniumhq.org/