I am using the Facebook Comments plugin on a blog I am building. It uses some XFBML tags that are interpreted by the Facebook JavaScript referenced on the page.
This all works fine, but I have to pass in the current, fully-qualified URL to the plugin.
<div style="width: 900px; margin: auto;">
<div id="fb-root"></div>
<fb:comments href="URL HERE" num_posts="10" width="900"></fb:comments>
</div>
What is the best way to get the URL of the current page (the request URL)?
Solution
Here is the final code of my solution:
<fb:comments href="@Request.Url.AbsoluteUri" num_posts="15" width="900"></fb:comments>
You could also use Request.RawUrl, Request.Url.OriginalString, Request.Url.ToString(), or Request.Url.AbsoluteUri.
Add this extension method to your code:
public static Uri UrlOriginal(this HttpRequestBase request)
{
    // Use the Host header the browser sent (which includes any non-default port)
    // rather than the port ASP.NET would append to Url.AbsoluteUri.
    string hostHeader = request.Headers["host"];

    return new Uri(string.Format("{0}://{1}{2}",
        request.Url.Scheme,
        hostHeader,
        request.RawUrl));
}
And then you can execute it off the RequestContext.HttpContext.Request property.
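For example, a minimal usage sketch from inside an MVC controller action (assuming the extension method's namespace is imported):

// Request here is an HttpRequestBase, so the extension method applies directly.
Uri originalUri = Request.UrlOriginal();
string href = originalUri.AbsoluteUri;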
There is a bug (it can be side-stepped, see below) in ASP.NET that arises on machines that serve the local website on a port other than 80 (a big issue if internal web sites are published via load balancing on a virtual IP, with ports used internally for the publishing rules): ASP.NET always adds the port to the AbsoluteUri property, even if the original request did not include it.
This code ensures that the returned URL is always equal to the URL the browser originally requested (including the port, as it would be included in the host header) before any load balancing etc. takes place.
At least, it does in our (rather convoluted!) environment :)
If there are any funky proxies in between that rewrite the host header, then this won't work either.
Update 30th July 2013
As mentioned by @KevinJones in the comments below, the setting I mention in the next section has been documented here: http://msdn.microsoft.com/en-us/library/hh975440.aspx
Although I have to say I couldn't get it to work when I tried it - but that could just be me making a typo or something.
Update 9th July 2012
I came across this a little while ago, and meant to update this answer, but never did. When an upvote just came in on this answer I thought I should do it now.
The 'bug' I mention in ASP.NET can be controlled with an apparently undocumented appSettings value called 'aspnet:UseHostHeaderForRequest', i.e.:
<appSettings>
<add key="aspnet:UseHostHeaderForRequest" value="true" />
</appSettings>
I came across this while looking at HttpRequest.Url in ILSpy - indicated by the ---> on the left of the following copy/paste from that ILSpy view:
public Uri Url
{
    get
    {
        if (this._url == null && this._wr != null)
        {
            string text = this.QueryStringText;
            if (!string.IsNullOrEmpty(text))
            {
                text = "?" + HttpEncoder.CollapsePercentUFromStringInternal(text,
                    this.QueryStringEncoding);
            }
--->        if (AppSettings.UseHostHeaderForRequestUrl)
            {
                string knownRequestHeader = this._wr.GetKnownRequestHeader(28);
                try
                {
                    if (!string.IsNullOrEmpty(knownRequestHeader))
                    {
                        this._url = new Uri(string.Concat(new string[]
                        {
                            this._wr.GetProtocol(),
                            "://",
                            knownRequestHeader,
                            this.Path,
                            text
                        }));
                    }
                }
                catch (UriFormatException)
                { }
            }
            if (this._url == null) { /* build from server name and port */
                ...
I personally haven't used it - it's undocumented and therefore not guaranteed to stick around - but it might do the same thing I describe above. To increase relevancy in search results, and to acknowledge somebody else who seems to have discovered this, the 'aspnet:UseHostHeaderForRequest' setting has also been mentioned by Nick Aceves on Twitter.
public static string GetCurrentWebsiteRoot()
{
return HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority);
}
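For example (illustrative URL), for a request to https://example.com/products/list?page=2 this returns https://example.com - just the scheme and authority, with no trailing path or query string.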
Request.Url.PathAndQuery
should work perfectly, especially if you only want the relative URL (but keeping the query string).
I too was looking for this for Facebook reasons and none of the answers given so far worked as needed or are too complicated.
@Request.Url.GetLeftPart(UriPartial.Path)
Gets the full protocol, host, and path, without the query string. It also includes the port if you are using something other than the default 80.
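For example (illustrative URL), http://example.com:8080/jobs/detail?id=5 becomes http://example.com:8080/jobs/detail.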
My favorite...
Url.Content(Request.Url.PathAndQuery)
or just...
Url.Action()
This worked for me in ASP.NET Core 3.0 to get the full URL:
$"{Request.Scheme}://{Request.Host.Value}{Request.Path.Value}"
One thing that isn't mentioned in the other answers is case sensitivity, which matters if the value is going to be referenced in multiple places (it isn't in the original question, but it is worth taking into consideration, as this question appears in a lot of similar searches). Based on the other answers, I found the following worked for me initially:
Request.Url.AbsoluteUri.ToString()
But in order to be more reliable this then became:
Request.Url.AbsoluteUri.ToString().ToLower()
And then for my requirements (checking which domain name the site is being accessed from and showing the relevant content):
Request.Url.AbsoluteUri.ToString().ToLower().Contains("xxxx")
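A minimal sketch of an alternative (using the same placeholder domain as above): compare just the host case-insensitively rather than lower-casing the whole URL, since the path portion can be case-sensitive on some servers:

// Hypothetical check: true if the requested host contains the domain fragment we care about.
bool isMatch = Request.Url.Host.IndexOf("xxxx", StringComparison.OrdinalIgnoreCase) >= 0;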
For me the issue was that I tried to access HttpContext in the controller's constructor, when HttpContext was not ready yet. When the code was moved inside the Index method it worked:
var uri = new Uri(Request.Url.AbsoluteUri);
url = uri.Scheme + "://" + uri.Host + "/";
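A minimal sketch of that placement (assuming a standard MVC controller), with the URL built inside the action rather than in the constructor:

public ActionResult Index()
{
    // HttpContext (and therefore Request) is available by the time the action runs.
    var uri = new Uri(Request.Url.AbsoluteUri);
    string url = uri.Scheme + "://" + uri.Host + "/";
    return View();
}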
For the single-page-style case, to get the URL of the page the browser came from (browser history):
HttpContext.Request.UrlReferrer
Related
I have the following requirements:
the user comes to a job page on our customer's website, but the job is already taken, so the page no longer exists
the user should NOT get a 404 but a 410 (Gone), and then be redirected to a job-overview page where they get the information that this job is no longer available, plus a list of available jobs
but instead of a 302 (temporarily moved) or a 404 (the current behavior), Google should get a 410 (Gone) status to indicate that this page is permanently unavailable
so the old URL should be removed from the index and the new one should not be treated as a replacement
So how can I redirect the user with a 410 status? If I try something like this:
string overviewUrl = _urlContentResolver.GetAbsoluteUrl(overviewPage.ContentLink);
HttpContext context = _httpContextResolver.GetCurrent();
context.Response.Clear();
context.Response.Redirect(overviewUrl, false);
context.Response.StatusCode = 410;
context.Response.TrySkipIisCustomErrors = true;
context.Response.End();
I get a static error page in Chrome with nothing but:
The page you requested was removed
But the status code is correct (410) and the Location header is also set correctly; there is just no redirect.
If I use Redirect and set the status before:
context.Response.StatusCode = 410;
context.Response.Redirect(overviewUrl, true); // true => endResponse
the redirect happens, but I get a 302 instead of the desired 410.
Is this possible at all, if yes, how?
I think you're trying to bend the rules of HTTP. The documentation states:
The HyperText Transfer Protocol (HTTP) 410 Gone client error response code indicates that access to the target resource is no longer available at the origin server and that this condition is likely to be permanent.
If you don't know whether this condition is temporary or permanent, a 404 status code should be used instead.
In your situation either 404 or 410 seems to be the right status code, but the 410 specification says nothing about redirection being a behavior browsers should implement, so you have to assume a redirect is not going to work.
Now, to the philosophically right way to implement your way out of this...
With your stated requirements, "taken" does not mean the resource is gone. It means it exists for the client that claimed it. So, do you 302 Redirect a different client to something else that might be considered correct? You implemented that, and it seems like the right way to do it.
That said, I don't know if you "own" the behavior across the client and server to change the requirements to this approach. Looking at it from the "not found" angle, a 404 also seems reasonable. It's not found because "someone" already has the resource.
In short if your requirements are set in stone, they may be in opposition to the HTTP spec. If you still must have a 410 then you would need to change the behavior on the client-side somehow. If that's JavaScript, you'd need to expect a 410 from the server that returns a helpful payload that the client interprets to do something else (e.g. like a simulated redirect).
If you don't "own" the client code... well that's a different problem.
There's a short blog post by Tommy Griffth that backs up what I am saying. Take a read. It says in part,
The “Gone” error response code means that the page is truly gone—it’s no longer available on the origin server and no redirect was set up.
Sometimes, webmasters want to be very explicit to Google and other search engines that a page is gone. This is a much more direct signal to Google that a page is truly gone and never coming back. It's slightly more direct than a 404.
So, is it possible? Yes, but you're going to need to "fake" it by changing both client and server code.
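If you do control the client code, one way to implement the "simulated redirect" mentioned above is to return the 410 together with a payload telling the script where to navigate. A minimal sketch (assuming classic ASP.NET MVC; the action name, overview route, and payload shape are my own, not from the original answer):

public ActionResult JobDetail(int id)
{
    // The job is gone: signal 410 to crawlers, and give client-side code a hint where to go.
    Response.StatusCode = 410;
    Response.TrySkipIisCustomErrors = true;
    return Json(new { redirectTo = Url.Action("Index", "JobOverview") }, JsonRequestBehavior.AllowGet);
}

The client-side code would then detect the 410 response and navigate to redirectTo itself.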
I will accept Kit's answer since he's right in general, but maybe I overcomplicated my requirement a bit, so I want to share my solution:
What did I actually want?
provide crawlers with a 410 so that the taken job page is delisted from search engine indexes
provide the user with a better experience than getting a 404, so redirect them to a job overview where they can find similar jobs and see a message
These are two separate requirements for two separate kinds of users, so I could simply provide one solution for crawlers and one for "normal" users.
In case someone needs something similar, I can provide more details; here is just a snippet:
if (HttpContext.Current.IsInSearchBotMode())
{
    Deliver410ForSearchBots(HttpContext.Current);
}
else
{
    // redirect (301) to the job overview, omitting details
}
private void Deliver410ForSearchBots(HttpContext context)
{
    context.Response.Clear();
    context.Response.StatusCode = 410;
    context.Response.StatusDescription = "410 job taken";
    context.Response.TrySkipIisCustomErrors = true;
    context.Response.End();
}
public static bool IsInSearchBotMode(this HttpContext context)
{
    ISearchBotConfiguration configuration = ServiceLocator.Current.GetInstance<ISearchBotConfiguration>();
    string userAgent = context.Request?.UserAgent;
    return !(string.IsNullOrEmpty(userAgent) || configuration.UserAgents == null)
        && configuration.UserAgents.Any(bot => userAgent!.IndexOf(bot, StringComparison.InvariantCultureIgnoreCase) >= 0);
}
These are the user agents I used for crawler detection:
<add key="SearchBot.UserAgents" value="Googlebot;Googlebot-Image;Googlebot-News;APIs-Google;AdsBot-Google;AdsBot-Google-Mobile;AdsBot-Google-Mobile-Apps;DuplexWeb-Google;Google-Site-Verification;Googlebot-Video;Google-Read-Aloud;googleweblight;Mediapartners-Google;Storebot-Google;LinkedInBot;bitlybot;SiteAuditBot;FacebookBot;YandexBot;DataForSeoBot;SiteCheck-sitecrawl;MJ12bot;PetalBot;Yeti;SemrushBot;Roboter;Bingbot;AltaVista;Yahoobot;YahooCrawler;Slurp;MSNbot;Lycos;AskJeaves;IBMResearchWebCrawler;BaiduSpider;facebookexternalhit;XING-contenttabreceiver;Twitterbot;TweetmemeBot" />
I have this code
<a href="#Url.Action(" Edicao ", "EdicaoListaVerificacao ", new { idFormulario = m.Id })" title="Editar" class="glyphicon glyphicon-pencil" aria-hidden="true" />
Where 'Edicao' is my action and 'EdicaoListaVerificacao' is my controller. My action needs a parameter, which I pass by building an anonymous object with the property it expects. The problem is that the URL can be altered, and the user can then access things that they shouldn't.
You can never hide your URLs - nor should you. You should verify, instead, inside the Edicao action method that the user has permission to view the Formulario with the specified Id.
In all web applications, you have to assume that the URLs users try to retrieve can be absolutely anything - and that some users will attempt to edit URLs to get at hidden content. ASP.NET has built-in authentication and authorization mechanisms that you should use.
If you're just looking for a simple way to make a URL that's impossible to guess, without forcing users to log on, you have to use something more complicated than a numeric ID, like a GUID.
And if at any point you are tempted by roll-your-own solutions such as URL referrer checking or verifying cookies, remember that easier-to-use solutions are most likely built into ASP.NET already.
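For example, a minimal sketch of checking permissions inside the action (the repository and the ownership rule are hypothetical placeholders; ASP.NET's [Authorize] attribute and role checks are the built-in alternatives):

public ActionResult Edicao(int idFormulario)
{
    var formulario = _formularioRepository.GetById(idFormulario); // hypothetical data access
    if (formulario == null)
        return HttpNotFound();

    // Hypothetical ownership check - replace with your own authorization rule.
    if (formulario.Owner != User.Identity.Name)
        return new HttpStatusCodeResult(403);

    return View(formulario);
}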
That's it! Thank you, guys. I found a way to implement a check in my action, using Request.UrlReferrer.
public ActionResult Edicao(int idFormulario)
{
    Uri url = Request.UrlReferrer;
    if (url != null)
    {
        // do all the things you have to, then return the edit view
        return View();
    }
    else
    {
        // return to the index
        return RedirectToAction("Index");
    }
}
Request.UrlReferrer returns the URL the request came from; if there isn't one, it returns null. Then I just built an if block ;). Thank you guys!
I have the following code:
var redirectIp = string.Format("{0}{1}", Session["CurrentHost"], ip.PathAndQuery);
return new RedirectResult(redirectIp);
When I check the value of redirectIp it gives me:
redirectIp "127.0.0.1:84/Administration/Accounts/ShowSummary?ds=0001" string
However, when I step through the code, the browser opens and gives me the following:
http://127.0.0.1:84/Administration/Accounts/127.0.0.1:84/Administration/Accounts/ShowSummary?ds=0001
I am totally confused. Anyone have any idea what's happening?
That is how URLs, HTTP, and browsers work. You forgot the protocol part, so the redirect actually does work as expected given the URL you are redirecting to: without a scheme, it is treated as a relative path and resolved against the current page.
var redirectIp = string.Format("http://{0}{1}", Session["CurrentHost"], ip.PathAndQuery);
return new RedirectResult(redirectIp);
This will work better for now, but to also cover HTTPS, you're better off storing the protocol part in a session variable along with the hostname.
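A minimal sketch of that (assuming a Session["CurrentScheme"] entry is populated wherever Session["CurrentHost"] is set, e.g. from Request.Url.Scheme):

var redirectUrl = string.Format("{0}://{1}{2}",
    Session["CurrentScheme"],   // "http" or "https"
    Session["CurrentHost"],     // e.g. "127.0.0.1:84"
    ip.PathAndQuery);
return new RedirectResult(redirectUrl);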
I've been trying to get this to work. It's basically a way to have certain MVC pages work within a WebForms CMS (Umbraco).
Someone tried it before me and had issues with MVC 2.0 (see here). I have read the post and did what was suggested there, but with or without that code, I seem to get stuck on a different matter.
It seems that when I call a URL, the handler fires but fails to pick up the query string that was passed - the value is always empty.
for example I call this url: http://localhost:8080/mvc.ashx?mvcRoute=/home/RSVPForm
The handler is supposed to get the mvcRoute, but it is always empty, so the path gets rewritten to a simple "/" and a "resource cannot be found" error is returned.
Here is the code I use now:
public void ProcessRequest(HttpContext httpContext)
{
    // Remember the original path so it can be restored afterwards.
    string originalPath = httpContext.Request.Path;

    // The MVC route to execute is passed in the query string, e.g. ?mvcRoute=/home/RSVPForm
    string newPath = httpContext.Request.QueryString["mvcRoute"];
    if (string.IsNullOrEmpty(newPath))
        newPath = "/";

    // Rewrite to the MVC route, run the MVC handler, then rewrite back.
    HttpContext.Current.RewritePath(newPath, false);
    IHttpHandler ih = (IHttpHandler)new MvcHttpHandler();
    ih.ProcessRequest(httpContext);
    HttpContext.Current.RewritePath(originalPath, false);
}
I would like some new input on this, as I'm staring myself blind at such a simple issue - I thought I would have more problems with MVC itself.
I have had no time to investigate, but after copying the site over to different locations and making numerous web.config changes (unrelated to this error - I was figuring other things out), the error seems to have solved itself. So it's no longer an issue; however, I have no clue as to what exactly made this work again.
On a side note,
ih.ProcessRequest(httpContext);
should have been,
ih.ProcessRequest(HttpContext.Current);
In a web application I'm working on, the ReportViewer keeps giving me an error: "Missing URL parameter: Name". I have found the cause but not a solution.
The URL that is causing the exception from the report viewer:
Reserved.ReportViewerWebControl.axd?ReportSession=3bkunv2wte3wmnabkquyr1y0&ControlID=1e2b5870e07b46abac7fd32a9e0e4b9d&Culture=1033&UICulture=1033&ReportStack=1&OpType=ReportArea&Controller=ctl00_ASPxRoundPanel3_PageContent_Wizard1_ReportViewer1&PageNumber=1&ZoomMode=Percent&ZoomPct=100&ReloadDocMap=true&SearchStartPage=0&LinkTarget=_top
If you notice, in the query string, instead of "&Name=" it for some reason becomes "&amp;Name=".
I noticed in numerous Google searches that there seem to be a lot of people having the same problem, but not one solution.
Sounds like something is mangling your URL somewhere. Do you by chance have a Bluecoat proxy in place? I saw something about Bluecoat mangling the URL.
If that's the case and you have control over the proxy, you might be able to get a tunnel punched through it for your reports. Otherwise, you might have to rewrite the URL on your end.
Check here for more information (last post in the thread has a possible workaround).
You can fix this globally by checking for the BlueCoat request header at the start of each request. This bit of code, placed in global.asax.cs, fixes the problem:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // Fix incorrect URL encoding by buggy BlueCoat proxy servers:
    if (!String.IsNullOrEmpty(Request.ServerVariables["HTTP_X_BLUECOAT_VIA"]))
    {
        string original = Request.QueryString.ToString();
        if (original.Contains(Server.UrlEncode("amp;")))
        {
            HttpContext.Current.RewritePath(Request.Path + "?" + original.Replace(Server.UrlEncode("amp;"), "&"));
        }
    }
}
I'm not sure if any other proxy servers have the same issue, but if they do, this could easily be adapted to check for the presence of the double-encoded "&amp;" in the query string instead of checking for the BlueCoat header (or, I guess, you could just check for the headers of any other affected products, which might be safer).