I have a C# ASP.NET app running on an Amazon EC2 instance, but I am getting a validation error:
Exception type: HttpRequestValidationException
Exception message: A potentially dangerous Request.RawUrl value was detected from the client (="...h&content=<php>die(#md5(HelloT...").
The logs show that the request url was:
http://blah.com/?a=fetch&content=<php>die(#md5(HelloThinkCMF))</php>
Where does that PHP die script come from? Is this some kind of security breach? I have no idea how to debug this.
This is due to a built-in ASP.NET feature called "request validation", which throws an exception whenever potentially dangerous characters are found in, for example, the query string. In this case it is probably triggered by the < character, which is forbidden in order to make attacks such as cross-site scripting harder. The error therefore indicates that the attempt to access your site was stopped before your application code was even invoked.
The query string in your example is probably generated by some automated attack script or botnet that is throwing random data at your site to try to breach it. You can safely ignore this particular instance of the attack, since you're not running PHP. That being said, as others have commented, it does indicate that someone is trying to get in, so you should consider taking appropriate security measures either in your application code or in your network/hosting setup. What these are is both out of scope for this site and hard to say without knowing a lot more about your context, however.
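If these blocked requests clutter your error logs, one option is to recognise the exception centrally and answer with a plain status code instead of the default error page. This is only a minimal sketch, assuming classic ASP.NET (System.Web) with a Global.asax; whether you want to suppress these errors at all is up to you:

// Global.asax.cs -- minimal sketch for classic ASP.NET (System.Web).
protected void Application_Error(object sender, EventArgs e)
{
    var ex = Server.GetLastError();

    // Request validation already rejected the request before any page code ran,
    // so there is nothing to recover; just answer with a bare 400.
    if (ex != null && ex.GetBaseException() is HttpRequestValidationException)
    {
        Server.ClearError();
        Response.Clear();
        Response.StatusCode = 400;
        Response.End();
    }
}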
Those are ThinkPHP5 (a Chinese PHP framework based on Laravel) RCE exploit attempts.
This blog post suggests that this is a WordPress exploit that no longer works.
I am not running PHP (or WordPress), yet my web server (apache2; log extract below) returns a 200 to this, which is why I was interested:
[04/Jun/2020:11:43:35 -0500] "GET /index.php?s=/Index/\\think\\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP HTTP/1.1" 404 367 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
That request came from 195.54.160.135. Jonas Høgh is correct, of course, that securing your site is something you have to figure out yourself. I have a script to block an IP on an ad hoc basis and another one to get a list of bad actors from a website and block them all. I suppose, though, that many of these attempts come from pwned machines or through Tor, and blocking an IP may be useless.
It is an attempt to see whether this code is running on the server side. PHP and its CMSes have had such problems before, but if the site is written in .NET then everything is fine and you don't have to worry.
Related
We have the following problem:
I am currently developing a web server implementing a specific API. The association behind that API provided specific test cases I'm using to test my implementation.
One of the test cases is:
5.3.2.12 Robustness, large resource ID
This test confirms correct error handling when sending an HTTP request with a very long location ID as a URL parameter.
The URL it calls looks something like this:
https://localhost:443/api/v2/functions/be13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005
Basically the test checks whether my server responds with the correct error code if the URL is too long. (At the time of writing it tests for error code 405, but I have already asked them whether it shouldn't be 414.)
I'm developing the server in ASP.NET 6, and it always returns 400 Bad Request in this test case.
I can't find a place to change the handling of this behaviour, and I am not even sure whether I can, or whether IIS is blocking the request before it even reaches my application. I activated logging in IIS, but the request does not show up in the log file in inetpub/logs/LogFiles.
My question is whether it is possible to tell IIS to return a different error code in this case, or whether it is even possible to handle the error in my application.
What I tried:
Activated IIS logs to see whether the request is even passed to my site. (It was not.)
Added filters to my controller to see whether I could catch an exception.
Checked whether the development error pages are called.
Breakpoints in existing middleware are not reached.
EDIT:
I am now pretty sure that the request never reaches my application.
It is possible to reproduce the error using the default site that IIS generates on Windows: just copying the whole path from above into a browser with the host http://localhost also produces error 400.
EDIT 2:
As @YurongDai pointed out, I tried activating failed request tracing for my IIS site. I used the default path \logs\FailedReqLogFiles.
The folder was created, but no file is written when I open the URL above in my browser.
IIS error 400 occurs when the server is unable to process a request sent to it. The most common cause of a 400 Bad Request error is an invalid URL, but it can happen for other reasons as well. To resolve it, first make sure that you have entered the URL correctly; typos or disallowed characters in the URL are the most common causes of Bad Request errors. If the error persists after verifying the URL, clear your browser's cache, your DNS cache, and your cookies, and try again:
Clear your browser's cookies.
Clear your browser's cache.
Clear your DNS cache (execute ipconfig /flushdns in a command prompt window).
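Beyond those browser-side checks, and only if the request actually reached the ASP.NET Core application (the question suggests that here it does not, because IIS rejects it first), a middleware registered early in the pipeline could translate oversized URLs into a 414. A minimal sketch; the 2048-character limit and the placeholder endpoint are invented for illustration:

// Program.cs -- ASP.NET Core 6 minimal hosting model (sketch only).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

const int maxUrlLength = 2048; // illustrative limit, not taken from the test specification

app.Use(async (context, next) =>
{
    // Path plus query string, as the application sees it.
    var url = context.Request.Path.Value + context.Request.QueryString.Value;
    if (url.Length > maxUrlLength)
    {
        context.Response.StatusCode = 414; // URI Too Long
        return;
    }
    await next();
});

// Placeholder endpoint standing in for the real API.
app.MapGet("/api/v2/functions/{id}", (string id) => Results.Ok(id));

app.Run();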
How do websites find out which browser is visiting them?
How can I do this? Can you give an answer for ASP.NET C#?
They look for the user agent passed in the request.
In ASP.NET:
Request.ServerVariables["HTTP_USER_AGENT"]
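For example, in a Web Forms code-behind this might look like the following minimal sketch (the page and what you do with the value are just for illustration):

// Code-behind sketch; requires System.Web.
protected void Page_Load(object sender, EventArgs e)
{
    // Raw header via the server variables; Request.UserAgent exposes the same value directly.
    string userAgent = Request.ServerVariables["HTTP_USER_AGENT"];

    // Echo it back, HTML-encoded, just to show the value.
    Response.Write(HttpUtility.HtmlEncode(userAgent));
}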
The browser tells the server what kind of browser it is in the User-Agent string, which it includes with each HTTP request.
You can access the User-Agent directly and parse it yourself, or you can use ASP.NET's built-in browser capabilities feature, which relies on several *.browser files, regular expressions, etc.
User-Agent: <%= Request.UserAgent %>
ID: <%= Request.Browser.Id %>
Browser: <%= Request.Browser.Browser %>
Type: <%= Request.Browser.Capabilities["type"] %>
The HTTP protocol provides an attribute of the request header called the User-Agent, which the client (here the web browser) fills in with a string identifying the browser make, version, and operating system. Like all elements of the HTTP header, this information may well be "spoofed" or altered for various purposes (for example by various client-side privacy gateways and such), but it is usually relatively reliable.
An example of such a User-Agent string is (here for a Firefox browser, version 3.5, running under Windows XP):
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5
This information, along with other attributes from the header, can be queried by the receiving application. Although the specifics vary from one language/framework to the next, many of these languages/frameworks expose a simple object model which mirrors the various objects associated with the HTTP protocol. In the case of the HTTP header, this typically comes from the "Request" object (which may be named differently), so accessing the User-Agent may look something like:
ClientBrowser = Request.Header("User-Agent")
or possibly
ClientBrowser = HttpHeader.UserAgent
Edit: in the case of C#/ASP.NET (per the late edit of the question):
ClientBrowser = Request.ServerVariables["HTTP_USER_AGENT"];
Also, although you may be tempted to use this information directly, you can also rely on various libraries that encapsulate the details of parsing the [very many versions of the] User-Agent strings, to figure out the particular web browser and even the particular forms of JavaScript such a client should be sent.
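As a rough illustration only (real code should prefer Request.Browser or a dedicated parsing library, since User-Agent strings overlap heavily), naive sniffing in C# might look like:

// Naive, illustrative sniffing only -- Chrome's User-Agent also contains "Safari",
// Edge's contains "Chrome", and so on, which is why libraries exist for this.
string ua = Request.UserAgent ?? string.Empty;

string browser =
    ua.Contains("Firefox") ? "Firefox" :
    ua.Contains("Chrome") ? "Chrome (or a Chromium-based browser)" :
    ua.Contains("MSIE") || ua.Contains("Trident") ? "Internet Explorer" :
    "Unknown";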
I am trying to diagnose a problem that a client site has come across. Basically when you do an address search on their website you can specify a % symbol for wildcard searches. For example you can search for Be% to return Belfast etc.
This queries the database and then redirects you to the results page, passing the search criteria in the query string, for example results.aspx?criteria=Search%20criteria%20is%20Be%
This caused problems if you searched for something like %Belf, as %Be is a reserved sequence in URL encoding. I therefore coded it to replace % with %25 (the URL-encoded representation of the % symbol). This works fine on my test machine, where the URL is now results.aspx?criteria=Search%20Criteria%20is%20%25Be.
This however doesn't work on our client's website for some reason, and I can't work out why. The page keeps erroring with:
Error Code: 500 Internal Server Error. The request was rejected by the
HTTP filter. Contact the server administrator. (12217)
any time you search for something like %Be, %Fa, %Fe, etc.
Does anyone know if there is an IIS setting for this or something similar?
You might have URLScan installed on your server. URLScan intercepts requests and rejects them if it detects invalid characters. It is meant to protect your website from malicious attacks and SQL injection, but if you don't configure it correctly it will reject perfectly reasonable requests. Take a look at the ISAPI filters on your website and see if URLScan is there.
Could this solve your problems? It is written by Zubair Alexander at http://blog.techgalaxy.net/archives/2521
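On the application side, rather than hand-replacing individual characters, the whole criteria value could be encoded before it is appended to the query string. A minimal sketch; the criteria value and page name are placeholders taken from the question:

// Requires System.Web. HttpUtility.UrlEncode turns "%Be" into "%25Be"
// (and spaces into "+"), so the raw % never reaches the URL unescaped.
string criteria = "%Be";
string url = "results.aspx?criteria=" + HttpUtility.UrlEncode("Search criteria is " + criteria);
Response.Redirect(url);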
I have a rather simple program which takes in a URL and spits out the first place it redirects to. Anyhow, I've been testing it on some links and noticed gets 400 errors on some urls. I tried testing such urls by pasting it into my browser and that worked fine.
static string getLoc(string curLoc, out string StatusDescription, int timeoutmillseconds)
{
    // requires: using System.Net;
    HttpWebRequest x = (HttpWebRequest)WebRequest.Create(curLoc);
    x.UserAgent = "Opera/9.52 (Windows NT 6.0; U; en)";
    x.Timeout = timeoutmillseconds;
    x.AllowAutoRedirect = false;              // we want the Location header, not the followed redirect
    HttpWebResponse y = null;
    try
    {
        y = (HttpWebResponse)x.GetResponse(); // At this point it throws a 400 bad request exception.
        StatusDescription = y.StatusDescription;
        return y.Headers["Location"];
    }
    finally
    {
        if (y != null) y.Close();
    }
}
I think something weird is happening with cookies. It turns out that, due to the way I was testing the link, the necessary cookies for it to work were in my browser but not in the program. It turns out some of the links I was testing manually (when the other links failed) were generating cookies.
It's slightly convoluted what happened, but the short answer is that my browser had cookies, the program did not, and maintaining the cookies between redirects did not solve the problem.
The underlying problem is caused by the fact that the link I am testing requires either an extra parameter or a cookie or both. I was trying to avoid both in my tests since the parameter/cookie were for tracking and I didn't want to break tracking.
In short, I know what the problem is but it's not a solvable problem.
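For reference, wiring cookies into the programmatic request (which, as noted above, did not fix this particular case) is just a matter of attaching a CookieContainer. A sketch, reusing the curLoc URL variable from the code above:

// Requires System.Net. Sharing one CookieContainer makes HttpWebRequest send back
// any cookies set by earlier responses, much as a browser would.
var cookies = new CookieContainer();

var request = (HttpWebRequest)WebRequest.Create(curLoc);
request.CookieContainer = cookies;
request.AllowAutoRedirect = false;

using (var response = (HttpWebResponse)request.GetResponse())
{
    // Set-Cookie headers from this response end up in 'cookies' and are sent
    // automatically by later requests that reuse the same container.
    string nextHop = response.Headers["Location"];
}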
Use HttpWebRequest to download web pages without case-sensitivity issues
[update: I don't know why, but both examples below now work fine! Originally I was also seeing a 403 on the page2 example. Maybe it was a server issue?]
First, WebClient is easier. Actually, I've seen this before; it turned out to be case sensitivity in the URL when accessing Wikipedia. Try ensuring that you have used the same case in your request to Wikipedia.
[Updated] As Bruno Conde and gimel observe, using %27 should help make it consistent (the intermittent behaviour suggests that some Wikipedia servers may be configured differently from others).
I've just checked, and in this case the case issue doesn't seem to be the problem... however, if it worked (it doesn't), this would be the easiest way to request the page:
using (WebClient wc = new WebClient())
{
    string page1 = wc.DownloadString("http://en.wikipedia.org/wiki/Algeria");
    string page2 = wc.DownloadString("http://en.wikipedia.org/wiki/%27Abadilah");
}
I'm afraid I can't think what to do about the leading apostrophe that is breaking things...
I also got strange results ... First, the
http://en.wikipedia.org/wiki/'Abadilah
didn't work and after some failed tries it started working.
The second URL,
http://en.wikipedia.org/wiki/'t_Zand_(Alphen-Chaam)
always failed for me...
The apostrophe seems to be responsible for these problems. If you replace it with
%27
all urls work fine.
Try escaping the special characters using percent-encoding (RFC 3986, section 2.1). For example, a single quote is represented by %27 in the URL (IRI).
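For example, a sketch using an explicit replacement (the various Uri/HttpUtility helpers differ across .NET versions in whether they touch the apostrophe, so a plain Replace keeps it unambiguous):

// Hand-encode the apostrophe as %27 before requesting the page.
string url = "http://en.wikipedia.org/wiki/'Abadilah".Replace("'", "%27");

using (var wc = new System.Net.WebClient())
{
    string page = wc.DownloadString(url); // requests .../wiki/%27Abadilah
}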
I'm sure the OP has this sorted by now, but I've just run across the same kind of problem: intermittent 403s when downloading from Wikipedia via a WebClient. Setting a user-agent header sorts it out:
client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");