I'm trying to implement a simple proxy within a .NET Core REST service so that I can inject additional authentication headers and then return the response to any client as if it were a normal website.
In a simplified form it looks like this:
[HttpGet]
public async Task<ContentResult> Get()
{
    // client is a shared HttpClient instance
    HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, "http://google.com");
    /* some extra headers injection happens here */
    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
    var result = await response.Content.ReadAsStringAsync();
    return Content(result, "text/html", Encoding.UTF8);
}
The problem is that while the response is correctly rendered by any browser as the original HTML page, every script, stylesheet, or image included via a relative URL in the returned page fails to load.
What is missing in the code above to make browsers resolve those inner relative URLs correctly?
In the above example, if I run it, I get the google.com page displayed from my https://localhost:44307/api/test, except that images and other resources referenced by relative URLs are missing, because the browser cannot resolve them against the original host.
Confused, I tried playing with properties such as Referer and Host on the request and the response, but made no progress.
Why it is needed: we need to use a third-party website via an IFRAME, and that website requires an Authorization header to be present, so the proxy above is supposed to inject it and then return the website, so that the API link can be used directly, like this: <iframe src="https://localhost:44307/api/test">. This example should render the complete google.com website inside the iframe, but it renders the HTML only.
A ton of websites out there use relative paths to grab their resources (scripts/links/images/etc.) because it is convenient and allows them to have different environments in which things work. For example, having a development server, a staging server, and a production server requires that each one be able to load the appropriate content. With that being said, there are a couple of options for you, but they will require you to parse their content:
You can replace all of their references to internal sources with links to your proxy so that your headers get added for each of the resources.
You can replace all of their relative paths with absolute paths to the original domain so that all resource requests bypass your proxy. There are a few issues that can come up with this depending on their security.
As some have mentioned, neither of these options yields a robust solution easily, and both will require parsing the CSS and JavaScript for relative paths as well. Not exactly an easy task, unfortunately, but probably far easier than trying to use some kind of virtualization.
To replace the content you can use something like HTMLAgilityPack. I've used it on a few projects and it works great and has a pretty good community.
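For illustration, here is a minimal sketch of the second option (rewriting relative paths to absolute ones) using HtmlAgilityPack; the base URI and variable names are assumptions, and a production version would also need to handle CSS url(...) references and script-generated URLs:
// Minimal sketch: rewrite relative src/href attributes to absolute URLs
// pointing at the original host. "result" is the proxied HTML string from
// the controller above; the base URI is an assumption for this example.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(result);
var baseUri = new Uri("http://google.com");
var nodes = doc.DocumentNode.SelectNodes("//*[@src or @href]");
if (nodes != null)
{
    foreach (var node in nodes)
    {
        foreach (var attrName in new[] { "src", "href" })
        {
            var value = node.GetAttributeValue(attrName, null);
            if (value != null && Uri.TryCreate(baseUri, value, out var absolute))
            {
                node.SetAttributeValue(attrName, absolute.ToString());
            }
        }
    }
}
result = doc.DocumentNode.OuterHtml;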
This gentleman has posted an example of how to do something very similar HERE.
I am implementing embedded signing in an MVC C# project. When I post the document for signing, it redirects to the DocuSign page, which then redirects back to the return URL, using the code below:
private const string returnUrl = "http://localhost:5050/DSReturn";
...
return Redirect(viewUrl.Url);
Here I want to get the signed document in the response instead of via email. How is this possible? Or is there any other way to get the signed document after the signing process finishes?
You would make the API call to the "document" resource (.../documents/{documentId or constant}).
The post-signing redirect URL is for the purposes of continuing your web workflow. The "event" parameter allows your web application to generate the correct page or results. For example, the "Loan Co" example at the Dev Center generates a post-signing page that has links for the document, which in turn result in the API call to retrieve the document. In a real-world integration, the redirect URL is not a reliable indicator that the envelope is "completed". For example, the signer could close the browser before the redirect was executed, or the envelope may have subsequent signers. The Connect service provides a much more reliable trigger for downloading the documents.
Expanding on what @WTP mentioned, you have a couple of approaches. The first is a raw API call to the /v2/accounts/{accountId}/envelopes/{envelopeId}/documents/{documentId} endpoint, retrieving the file from the response. More information can be found here.
Another option you may or may not be aware of is using the DocuSign Client NuGet package. Your code would then look something like this pseudocode:
Stream documentStream = EnvelopesApi.GetDocument(accountId, envelopeId, documentId);
If you are not using the NuGet package yet, keep in mind there is setup work you will have to do to configure the EnvelopesApi. That information can be found here.
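As a rough sketch, once the EnvelopesApi is configured (the setup is covered by the link above), downloading and saving the signed document could look like this; the identifiers and the output path are placeholders:
// Assumes the API client has already been configured with the base path
// and an Authorization header, per the setup guide linked above.
EnvelopesApi envelopesApi = new EnvelopesApi();

// Returns the document content as a stream.
Stream documentStream = envelopesApi.GetDocument(accountId, envelopeId, documentId);

// Persist the signed PDF locally (the path is just an example).
using (FileStream file = File.Create(@"C:\temp\signed.pdf"))
{
    documentStream.CopyTo(file);
}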
I am building an AngularJS app and need to use Application_BeginRequest, where I need to get the current URL shown in the browser.
E.g. localhost:60607/#/login : need to get /login
localhost:60607/#/activity : need to get /activity
Also, whenever the user hits refresh, or when the page loads again and goes into BeginRequest, it should produce the same response.
I tried with context.Request.Path but it gives "/" only.
The fragment (the part after "#", sometimes called the bookmark) is not sent by the browser to the server; it is used for navigating inside the page. You have to read the fragment using JavaScript and send it as a separate parameter to get its value on the server side.
Data after the "#" symbol is supposed to be client side only (browsers do not send that part to the server at all, and application servers usually ignore it).
If your intent is to give users the ability to refresh the page using Angular, you can:
Provide isomorphic behavior (load the page server side); but if you are using AngularJS 1.x, this is not as simple as you might imagine.
Provide double routing behavior:
avoid using the "#" symbol in the URL and provide the user simple URLs like /login or /activity
manage your routes client side using Angular, as you are supposed to do already
manage your routes server side and provide the right view when requested (a sketch follows below)
you can try to centralize the view management by serving the views from a controller instead of as static content
Using the second approach is probably simpler, but you will find that:
keeping the view state is possible only for simple views (things get complex quite quickly once you introduce multiple UI element states)
to manage the double routing you have to find a compromise between client/server
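As a minimal sketch of the server-side half, an ASP.NET MVC catch-all route can serve the same AngularJS host view for any client route, so /login and /activity keep working on refresh; all names here are assumptions:
// Catch-all route: requests like /login or /activity (without "#") fall
// through to the SPA entry view; the client-side router takes over from
// the requested path. Register API/asset routes before this one.
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute(
            name: "SpaFallback",
            url: "{*clientRoute}",   // matches /login, /activity, ...
            defaults: new { controller = "Home", action = "Index" });
    }
}

// HomeController.Index returns the single AngularJS host page.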
We have a version control service which should be accessible through our REST API. One of those operations allows deleting a directory in SVN. Ideally, I'd like to send a DELETE request with the URI of the target to delete, something like this: http://service:4711/directory/http%3A%2F%2Fsome%2Fdirectory
What happens isn't new, and there are plenty of answers out there. Unfortunately, they do not work for me. Depending on what I try, I get a 404 or a 403 (due to the offending colon).
Let me show you some code and what I've tried without success so far:
// The action in my controller
[HttpDelete]
[Route("directory/{uri}/")]
public void DeleteDirectory(string uri)
{
    var x = HttpUtility.UrlDecode(uri);
}
I am using MVC version 5.2.3.0.
I've tried:
[System.Web.Mvc.ValidateInput(false)] on the action and/or the class.
Setting runAllManagedModulesForAllRequests="true" in the web.config.
Setting requestPathInvalidCharacters="" in the web.config.
Setting requestValidationMode="true" in the web.config.
Right now, I see four possible solutions:
I've done something wrong with the previous approaches.
I have to create a custom RequestValidator.
I have to double encode the URI in the request.
Send a POST request instead of DELETE.
One may say: put it in the body of the DELETE request. But this option is highly controversial, so I'd like to rule it out from the very beginning.
So what have I done wrong and what do you suggest to do?
Best regards,
Carsten
Colons in MVC URIs are not allowed before the query string's '?' character, even when encoded as %3A.
Therefore, provided the SVN URL's scheme can be inferred, you could drop the initial http: from the parameter passed in and append it in the code.
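A minimal sketch of that workaround; the catch-all route and names are assumptions, and the client would send the target URL-encoded but without the scheme:
// Sketch: the client sends the target without the "http:" scheme; the
// action re-appends it. A catch-all ({*uri}) keeps the remaining slashes.
[HttpDelete]
[Route("directory/{*uri}")]
public void DeleteDirectory(string uri)
{
    string decoded = HttpUtility.UrlDecode(uri);
    string target = "http://" + decoded.TrimStart('/');
    // ... delete the SVN directory identified by target ...
}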
I use Process.Start("firefox.exe", "http://localhost/page.aspx");
How can I know whether the page fails or not?
OR
How can I tell, via HttpWebRequest and HttpWebResponse, whether the page fails?
When I use
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create("somepage.aspx");
HttpWebResponse loWebResponse = (HttpWebResponse)myReq.GetResponse();
Console.Write("{0},{1}",loWebResponse.StatusCode, loWebResponse.StatusDescription);
how can I return error details?
I don't need additional plugins or frameworks. I want to solve this problem with .NET only.
Any ideas, please?
Use WatiN to automate Firefox instead of Process.Start. It's a browser automation framework that will let you monitor what is happening properly.
http://watin.sourceforge.net/
edit: see also Google Webdriver http://google-opensource.blogspot.com/2009/05/introducing-webdriver.html
If you are spawning a child-process, it is quite hard and you'd probably need to use each browser's specific API (it won't be the same between FF and IE, for example).
It doesn't help that in many cases the exe detects an existing instance and forwards the request there (so you can't trust the exit-code, since the page hasn't even been requested in the right exe yet).
Personally, I try to avoid assuming any particular browser for this scenario; just launch the url:
Process.Start("http://somesite.com");
This will use the user's default browser. You have to hope it appears though - you can't (reliably and robustly) check that externally without lots of work.
One other option is to read the data yourself (WebClient.Download*) - but this may have issues with complex cookies, login, user-agent awareness, etc.
Use the HttpWebRequest class or the WebClient class to check this. I don't think Process.Start will return anything useful if the URL does not exist.
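For the error details specifically, note that GetResponse throws a WebException for 4xx/5xx responses, so the status has to be read from the exception as well. A small sketch (the URL is a placeholder):
try
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://localhost/page.aspx");
    using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
    {
        // Success path: 2xx/3xx responses arrive here
        Console.WriteLine("{0}: {1}", (int)resp.StatusCode, resp.StatusDescription);
    }
}
catch (WebException ex)
{
    HttpWebResponse error = ex.Response as HttpWebResponse;
    if (error != null)
        Console.WriteLine("Failed: {0}: {1}", (int)error.StatusCode, error.StatusDescription);
    else
        Console.WriteLine("Failed: {0}", ex.Status); // DNS failure, timeout, etc.
}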
Don't start the page in this form. Instead, create a local http://localhost:<port>/wrapper.html which loads http://localhost/page.aspx and then either http://localhost:<port>/pass.html or http://localhost:<port>/fail.html. localhost:<port> is a trivial HTTP server interface implemented by your app.
The idea is that Javascript gives you an API inside the browser, which is far more standard than the APIs on the outside of browsers. Since the Javascript on wrapper.html comes from the same server and even port as the subsequent resources, this should satisfy the same-origin policies in current browsers.
I'm doing some automation work and can make my way around a site & post to HTML forms okay, but now I'm up against a new challenge, Ajax forms.
Since there's no source to read, I'm left wondering if it's possible to fill in an Ajax form programmatically, in C#. I'm currently using a non-visible axWebBrowser.
Thanks in advance for your help!
Yes, but I recommend a different approach to requesting and handling the server's pages, covering both the regular pages and the AJAX handler pages.
In C#, try using the WebRequest/WebResponse or the more specialized HttpWebRequest/HttpWebResponse classes.
Ajax is no more than a "fancy" name for a technology that allows JavaScript to make HTTP requests to a server, which usually implements handlers that produce specialized, light-weight content for the JavaScript caller (commonly encoded as JSON).
Therefore, in order to simulate AJAX calls, all you have to do is inspect your target application (the web page that you want to "post" to) and see what format is used for the AJAX communications - then replicate the page's JavaScript behavior from C# using the WebRequest/WebResponse classes.
See Firebug - a great tool that allows you to inspect a web page to determine what calls it makes, to which pages and what those pages respond. It does a pretty good job at inspecting AJAX calls too.
Here's a very simple example of how to do a web request:
HttpWebRequest wReq = (HttpWebRequest)WebRequest.Create("http://www.mysite.com");
using (HttpWebResponse resp = (HttpWebResponse)wReq.GetResponse())
{
    // NOTE: A better approach would be to use the encoding returned by the server in
    // the Response headers (I'm using UTF-8 for brevity)
    using (StreamReader sr = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
    {
        string content = sr.ReadToEnd();
        // Do something with the content
    }
}
A POST is also a request, but with a different method. See this page for an example of how to do a very simple post.
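In case that link is unavailable, here is a minimal form-encoded POST sketch; the URL and field names are placeholders:
// Sketch: POST a form-encoded body with HttpWebRequest.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://www.mysite.com/handler");
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";

byte[] body = Encoding.UTF8.GetBytes("field1=value1&field2=value2");
req.ContentLength = body.Length;
using (Stream reqStream = req.GetRequestStream())
{
    reqStream.Write(body, 0, body.Length);
}

using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
using (StreamReader sr = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
{
    // For AJAX handlers this is often a JSON string
    string content = sr.ReadToEnd();
}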
EDIT - Details on Inspecting the page behavior with Firebug
What I mean by inspecting the page you're trying to replicate is to use a tool (I use Firebug - on Firefox) to determine the flow of information between the page and the server.
With Firebug, you can do this by using the "Net" and "Console" panels. The Net panel lists all requests executed by the browser while loading the page, while the Console panel lists communications between the page and the server that take place after the page has loaded. Those later communications are essentially the AJAX calls you'll want to replicate. (Note: network monitoring has to be enabled in Firebug for this to work.)
Check out Michael Sync's tutorial to learn more about Firebug and experiment with the Console panel to learn more about the AJAX requests.
Regarding "replicate the page's behavior from C# using the WebRequest/WebResponse" - what you have to realize is that, as I said earlier, the JavaScript AJAX call is nothing more than an HTTP request. It's an HTTP request that the JavaScript makes "behind the scenes", or out-of-band, to the web server. Replicating it is really no different from replicating a normal GET or a normal POST like I showed above. And this is where Firebug comes into play. Using it you can view the requests as the JavaScript makes them - look at the Console panel and see what the request message looks like.
Then you can use the same technique as above, using the HttpWebRequest/HttpWebResponse to make the same type of request as the Javascript does, only do it from C# instead.
Gregg, I hope this clarifies my answer a little bit but beyond this I suggest playing with Firebug and maybe learning more about how the HTTP protocol works and how AJAX works as a technology.
Have you looked at using Selenium? AFAIK, you can write the test cases in C#, and I know our testers have successfully used it to UI-test an Ajax-enabled ASP.NET site.
http://seleniumhq.org/