Fighting spam bots - C#

I have a C# form on my site and want to prevent spam bots from filling it in. The trick is that I want to avoid CAPTCHA or any other user input, so as not to lose a single registration.
Here are some techniques I have in my mind:
A hidden input field (question: is this still effective?)
Track the time from the first user input (focus on FirstName) until the form is posted. Humans take more than 3 seconds to complete a form (even with auto-fill), whereas bots fill in the registration and post it in a second or less. (Question: if I start the timer on the first user input, when should I stop it?)
Put a fake post URL in the form tag (or post the form to itself), and only add the real post URL with JavaScript in the Submit button's click handler. (Question: I wonder whether new spam bots can cheat this?)
I would be glad to hear other techniques I could adopt, again without using CAPTCHA, spam filters, form verification or even validation. Thank you.

It would be good to have some sort of Flash widget that asks you to reconnect dots (so that it is interactive and doesn't require typing); when the user does it correctly, you can let the submit post go through.
Never liked CAPTCHA, especially the weird ones where even humans have problems interpreting them :)

A year ago there was a nice control for ASP.NET that put a hidden field on the form containing a JavaScript formula. The server stored the expected result in the session and checked it on postback. Since robots don't interpret the form in a browser (too slow), most of them got thrown out right there ;)
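A minimal code-behind sketch of that idea (the HiddenField named ChallengeField and the Submit handler are hypothetical names, and the "formula" here is just a trivial addition):

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        var rnd = new Random();
        int a = rnd.Next(1, 10), b = rnd.Next(1, 10);
        // Remember the expected answer server-side.
        Session["ChallengeAnswer"] = (a + b).ToString();
        // Emit a tiny script that fills in the hidden field; bots that
        // never run JavaScript post it back empty and get thrown out.
        ClientScript.RegisterStartupScript(GetType(), "challenge",
            string.Format("document.getElementById('{0}').value = {1} + {2};",
                ChallengeField.ClientID, a, b), true);
    }
}

protected void Submit_Click(object sender, EventArgs e)
{
    if (ChallengeField.Value != (string)Session["ChallengeAnswer"])
    {
        return; // no JavaScript engine on the other end: discard as a bot
    }
    // ... process the registration ...
}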
Also, another tip: put in a hidden field for the email "to" address. Some (old) PHP forms use a mailer supporting this. Obviously only a robot fills that out ;) If it's not empty -> garbage.
Anyone else have any smart ideas? ;)

I would say stick with Captcha or a similar thing where the user has to type something in.
The problem with relying on JavaScript is that not everyone has JavaScript turned on, and quite a few people have it turned off for various reasons.
Now, if you really want to track time, send a hidden form field with the server time filled in. When the postback occurs, take the delta of that against the current time. Obviously, if the field is missing, you know someone posted directly.
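A minimal sketch of that, assuming a Web Forms page with a HiddenField named RenderTime (in practice you would want to sign or encrypt the value so a bot can't simply replay an old timestamp):

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        RenderTime.Value = DateTime.UtcNow.Ticks.ToString();
    }
}

protected void Submit_Click(object sender, EventArgs e)
{
    long ticks;
    if (!long.TryParse(RenderTime.Value, out ticks))
    {
        return; // field missing or mangled: someone posted directly
    }
    TimeSpan elapsed = DateTime.UtcNow - new DateTime(ticks, DateTimeKind.Utc);
    if (elapsed < TimeSpan.FromSeconds(3))
    {
        return; // submitted faster than a human plausibly could: likely a bot
    }
    // ... process the form ...
}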

Hypothetical Questions Regarding Keyloggers

Recently, a lot of people in my house have been going on my computer when I wasn't there and I want to make sure that I know what they have been on, for the sake of privacy. I had the idea of making a keylogger that I could use to see what they have been doing on my computer. The keylogger itself (as in a program that records key strokes) I could make, but I have another idea in mind that I'm not sure is possible.
My idea is basically, like a keylogger, something that compiles a list, or log, of the activities of the user, but to a greater extent. I was wondering if it would be possible to document the actions of the user to the extent that every element of the screen they clicked on would be documented. In a web browser, I'm sure there would be a way to do this, as all of the information (the IDs of the elements on the page) can be collected by inspecting the elements (looking at the code). As for general use of the computer, I'm not so sure.
In simpler terms, I want to make a program that would record the user's actions in a log, as shown below.
14:17: User clicked Windows Start Button
14:17: User searched for 'Chrome' in search engine
14:17: User opened 'Chrome'
14:18: User clicked URL address bar
14:18: User searched for 'stackoverflow.com'
14:18: User clicked 'Login' button on 'stackoverflow.com'
I'm not an expert by any means and only have a school-level knowledge of programming, but I want to know if it would be possible to create something like this. I want the program to be able to collect all of these major actions so that I can compile them within a hidden text document. I'm not sure if these elements could be identified by a program but if anyone has any idea how I would do something like this or indeed if I could do something like this, by all means, message me on here. I would be EXTREMELY grateful!
P.S - This is my first post on here, go easy on me, aha.
With .NET code you will definitely be able to get the window positions of other applications (for example, Google Chrome).
For example with the following library: https://github.com/DataDink/WindowScrape
Furthermore, you should be able to track the user's keyboard input and mouse clicks on the desktop (which position was clicked). I think the interpretation of the mouse clicks ("user clicked on the URL bar in Chrome") will be up to you.
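As a rough illustration of the window-tracking part, here is a sketch (not WindowScrape-specific) that polls the foreground window title via Win32 and logs changes; actual click/keystroke capture would need low-level hooks (SetWindowsHookEx) on top of this:

using System;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading;

class WindowLogger
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

    static void Main()
    {
        string lastTitle = "";
        while (true)
        {
            var sb = new StringBuilder(256);
            IntPtr hWnd = GetForegroundWindow();
            if (hWnd != IntPtr.Zero && GetWindowText(hWnd, sb, sb.Capacity) > 0
                && sb.ToString() != lastTitle)
            {
                lastTitle = sb.ToString();
                // e.g. "14:18: User switched to 'stackoverflow.com - Google Chrome'"
                Console.WriteLine("{0:HH:mm}: User switched to '{1}'", DateTime.Now, lastTitle);
            }
            Thread.Sleep(500); // poll twice a second
        }
    }
}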
Just a personal hint from me: if you do something like that without informing the person using the PC, you could be punished heavily. Think about it twice.

"A potentially dangerous Request.Form value was detected from the client" before page load

I know this has been asked a LOT of times, but I am really struggling having tried a lot of different potential solutions.
I have a C# ASP.NET website page. There is a form on it with a submit button. All the code is in the code-behind page.
We do not get any spam submitted through the form because I have a CAPTCHA element. Yet spam bots are scanning the page, getting the field names and posting straight to the page.
I only know this because I set Application_Error (in global.asax) to report any errors to me by email.
I have tried changing my field names - but they just pick up the new fields.
I have put <httpRuntime requestValidationMode="2.0" /> in the web.config.
In my page, I have EnableEventValidation="False"
But, as I said, the problem isn't about allowing HTML in the post data; it's about stopping spam bots from submitting DIRECTLY to the page. The error is being triggered (I think) before the page even loads.
I'm running out of ideas here! I am blocking IP ranges every 10 minutes on our firewall. I cannot keep doing that!
Thanks for any help!
This is what you do: ignore it. Blocking IPs will just keep you running around in circles and is ultimately a waste of time.
If spam is not actually being submitted then you really don't have a problem. The framework is doing exactly what it is supposed to be doing.
Quite frankly, I wouldn't bother investigating an error message like that unless it was preventing an actual user from doing what they need to do.
If you really just want the errors to go away then you need to do the following:
Set EnableEventValidation="true"
Set ValidateRequest="false"
EnableEventValidation tells .NET to check whether the post came from clicking on a control that it had rendered. This should help prevent direct posts.
ValidateRequest tells .NET whether to test the inputs for HTML and other "dangerous" characters. Turning it off will stop your error message.
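For reference, both switches can go in the @ Page directive (placeholder attributes shown here; note that on .NET 4+, ValidateRequest="false" only takes effect together with the requestValidationMode="2.0" setting you already have in web.config):

<%@ Page Language="C#" EnableEventValidation="true" ValidateRequest="false" %>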
If you are simply trying to get spammers to stop hitting your site, close the site down, as that is the ONLY reliable way of keeping a spammer off of it.
Have you tried a honeypot field?
Create an input field in your markup, but don't display it on the page. You can use css or other methods to hide it from users, as long as it still shows up in your page source.
Then, in your code-behind, check that the field is empty before processing anything. You know your real users can't see the field, or enter anything in it. Therefore if that field was filled in, you know that it was from a bot scanning your page, and you can ignore all the rest.
The idea is that spam bots can't resist filling in fields, but most aren't smart enough to determine if the field is actually visible in a browser, so you trick them into giving themselves away by filling in something they shouldn't.
FWIW, I've used this approach personally with decent success.
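A minimal code-behind sketch of the check, assuming the markup contains a text input (here a hypothetical TextBox called HoneypotBox) hidden with CSS:

protected void Submit_Click(object sender, EventArgs e)
{
    // Real users never see the honeypot field, so anything in it means a bot.
    if (!string.IsNullOrEmpty(HoneypotBox.Text))
    {
        return; // silently drop the submission; don't tip the bot off
    }
    // ... process the legitimate submission ...
}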
However, if ASP is rejecting the submissions and causing an error, that's a different problem. Do you need legitimate users to be able to submit markup in the field? If you don't, the framework is actually doing the right thing by protecting your site. In that case, I would just check for that particular error in your Application_Error method and ignore it.
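If you go that route, here is a sketch for Global.asax: the "potentially dangerous Request.Form value" message surfaces as an HttpRequestValidationException, so you can filter just that type and keep emailing everything else:

void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // ASP.NET sometimes wraps page errors, so unwrap first.
    if (ex is HttpUnhandledException && ex.InnerException != null)
    {
        ex = ex.InnerException;
    }
    if (ex is HttpRequestValidationException)
    {
        Server.ClearError(); // swallow the bot noise, skip the email
        return;
    }
    // ... existing email notification for genuine errors ...
}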

Controlling browser forward/back functions in web application

I'm writing a web-based application for internal use within the business where I work. It's a fairly complex application, with a lot of forms that will allow the user to view and enter data, which once saved will be stored in a database.
One thing I'm anxious to avoid is a situation where a user might enter large amounts of data in the browser and then (either deliberately or inadvertently) navigate off the page without saving the changes. To this end, I have already implemented an entry page which opens up a new browser window with no navigation controls at all; only what is provided on the web pages themselves.
However, there are two potential ways in which a user could still lose data:
The browser Close button is still enabled, and a user could potentially lose work by clicking it inadvertently. I can probably live with this, as it falls at the extreme end of helping the user not to shoot himself in the foot.
In Internet Explorer (and, apparently, in Firefox) the Backspace key works like a Back button. I only discovered this accidentally, and have as yet been unable to find a simple way of stopping this behaviour. This is potentially a problem, as an inadvertent press of Backspace (e.g. having positioned the cursor in a read-only textbox, or when the cursor isn't on any particular field in the page) will navigate off the page.
What I would like to do, as a minimum, is prevent Backspace from navigating off a page if that page has any user-writable fields on it and any of those fields have been changed by the user since the form was loaded. Ideally, I would like to disable this particular use of the Backspace key completely while the user is logged into this web application. The two possible ways I can think of for achieving this are: (1) clear the browser's history as each page is loaded, or (2) trap the Backspace key and only allow it to work if the cursor is positioned within a field whose text can be changed (e.g. a textbox).
Can anyone suggest how I could achieve either of these things? The solution needs to be programmatic, rather than something that has to be manually configured on every browser in the company.
Instead of blocking* functionality that your users have learned to expect in their daily activities at work and at home, why not work with it? Make the "back" button actually take them to the previous screen as expected, and use AJAX to silently save the form as they fill it out (say, every 5 or 10 seconds), so when they return to the form you can check to see if they already have partial, unsubmitted values saved and reload them.
This approach aligns with the realities of web-based applications and delights users if implemented well. An alert that says "you did something wrong" just frustrates users and makes them trust your application less. Remember - users almost never do the wrong thing. It's our applications that aren't aligned with usage.
* more like trying to block functionality. As you've discovered, people who designed the interwebs and web browsers never really intended for site developers to totally disable moving back and forward in the navigation history.
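A sketch of the server half of that autosave, as a Web Forms page method the client-side timer can call every few seconds (DraftStore is a hypothetical helper, e.g. a table keyed on user and form name; the page would need a ScriptManager with EnablePageMethods="true", or a plain JSON POST to Page.aspx/SaveDraft):

[System.Web.Services.WebMethod]
public static void SaveDraft(string formName, string formJson)
{
    string user = HttpContext.Current.User.Identity.Name;
    DraftStore.Save(user, formName, formJson); // hypothetical storage call
}

On the next Page_Load you would check the same store for an unsubmitted draft and offer to restore it.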
What about something like this? You can ask them if they are sure before they leave.
var changes = false;
window.onbeforeunload = function () {
    if (changes) {
        // Returning a string prompts the browser's own leave/stay dialog;
        // a custom confirm() call inside onbeforeunload is ignored by
        // most browsers.
        return "You still have unsaved changes.";
    }
};
You should look at JavaScript's window.onbeforeunload event.
This fires when the user tries to leave the page. You can't stop them leaving entirely, but you can give them a chance to cancel.
Try this:
window.onbeforeunload = function () {
    return "Are you sure you want to navigate away?";
};

ASP.NET prevent bot/spam attack from a comment form

I have a simple contact us / comment form on my website, and this form sends an email containing the comments, etc. after it is submitted. I have used the NoBot control from the Ajax Control Toolkit several times, but it seems that this control does not prevent spam/bot attacks 100%.
The client insists that this form should not have any CAPTCHA code or anything that users have to type into the form. So what is the best way to handle spam/bot attacks in my case?
Thanks.
Without a CAPTCHA there is no 100% way of stopping all spam (or even with a CAPTCHA).
One method would be to put an input type=text on the page and hide it using CSS; if it's filled in when the form is submitted, it's spam. A normal user would never even know about the field.
Outside of a captcha, the key to stopping bots on small sites is to do something custom. Bot-writers know their work, and they'll have canned scripts capable of defeating the common and even most of the uncommon systems out there. You need to do something unique. It doesn't even have to be that complicated. The person who created this very site was able to get by running a popular blog for years by simply asking his users to type in the word orange.
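If the client can live with one trivial site-specific question, the check really can be that small (hypothetical TextBox named ChallengeBox whose label asks the user to type the word orange):

protected void Submit_Click(object sender, EventArgs e)
{
    if (!string.Equals(ChallengeBox.Text.Trim(), "orange",
                       StringComparison.OrdinalIgnoreCase))
    {
        return; // canned bot scripts won't know the site-specific answer
    }
    // ... send the comment email ...
}

The point isn't the difficulty of the question; it's that no off-the-shelf bot script has ever seen it before.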
I want to also point out that this doesn't mean you should start from scratch. As with all security-related code, if you try to do it yourself you'll likely get it wrong. What you want to do is find a system that gives you source code and customize it for your site, so that existing scripts that know how to defeat that system will no longer work.

How do I lock an ASP.NET page from multi-user editing?

I have a page with a series of checkboxes that authenticated users can change. I need to make this page editable by only one person at a time. So if a user goes in and edits one of the checkboxes, no one else can go into the page and change other checkboxes.
I thought about an edit-page link and a read-only page link (all controls disabled), and setting a database flag when a user enters in edit mode, but my concern is that I wouldn't know if the user changed something and then just X'd out of the browser/app, locking everyone else out.
This is an internal app for the company. Has anybody done something like this?
Any ideas or thoughts or suggestions?
Thanks
We have this functionality on an older ASP app. The user will load data with some type of primary key. We put in a DB entry to "lock" that page. If they correctly move through the site, it will unlock the resources at that time.
Other users opening this page will receive indication that the page is locked and a read-only version is rendered.
It would be fairly trivial to code an AJAX call on page unload to reset the lock when the browser closes. We don't find this to be much of an issue, and old locks are simply cleared by an evening process if they are more than 4 hours old.
Our situation is one where the pages are tied to specific regions of data. If this is a general config screen, I think a more dynamic AJAX solution that pushes the updates back and pings for changes might make sense. You would have to decide whether to disable changes from others after the first update is received, or to implement collision detection for the data.
Some type of hashing of the page data would probably make it easier to detect changes.
You could do what you said, but add a client-side timer that pings the server to tell you they are still there. If you don't get a ping within x minutes, you could let a new user go into edit mode, but perhaps warn them (or not).
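A sketch of that ping endpoint as a page method (LockStore is a hypothetical helper that records the last ping time; a lock whose last ping is older than a few minutes would be treated as abandoned):

[System.Web.Services.WebMethod]
public static void KeepLockAlive(int recordId)
{
    // Hypothetical call: refresh the lock's last-seen timestamp.
    LockStore.Touch(recordId, HttpContext.Current.User.Identity.Name, DateTime.UtcNow);
}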
What about letting all users edit this page, and having your script check for page updates? Just like SO does: while you are typing an answer, an orange message appears above saying "At least one new answer has been posted". You could display something like "The page has been modified since you last opened it".
There is a Timer in ASP.NET AJAX. You could use that to talk to the server and send "IN EDIT" status updates. You can even go further: say you send "LOCKOUT REQUEST" messages every 15 seconds asynchronously and expect to receive a "LOCKOUT GRANTED" response from the server. If the response hasn't been received, you disable all controls on the page until perhaps the next request receives the confirmation (the previous message could have been lost in the network). This way, if one user closes the browser, the others won't have to wait minutes or hours until they get edit permission.
Essentially, you need a distributed implementation of the critical-section concept. It may be a challenge to implement it over HTTP, but that's a very interesting challenge, isn't it?
If you're trying to prevent two users from updating a DB record and overwriting each other, perhaps it would be easier to detect this than to prevent it.
One strategy for this is to include a "version" field in the record, and save that in a hidden field when rendering the page.
Then you simply include that as a condition of your update (i.e. UPDATE ... WHERE ID = myID AND VERSION = myversion) - if your update returns 0 rows, you know that someone else modified the data, and you can then decide what to do - reload the new data, offer the user a chance to compare them, etc.
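A sketch of that update in ADO.NET (the table and column names are made up; the version value comes from the hidden field rendered with the page):

bool TrySaveCheckboxes(string connStr, int id, int version, bool flagValue)
{
    const string sql =
        "UPDATE Settings SET FlagValue = @flag, Version = Version + 1 " +
        "WHERE Id = @id AND Version = @version";
    using (var conn = new System.Data.SqlClient.SqlConnection(connStr))
    using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@flag", flagValue);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@version", version);
        conn.Open();
        // Zero rows affected means someone else saved first.
        return cmd.ExecuteNonQuery() == 1;
    }
}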
How about an alternative to an extended lock?
Since you appear to be manipulating relatively small amounts of data, it would be more polite to put an encoded version of the original state of the data in a hidden form field (or a datestamp, though that's less reliable; a hash of the values would work for larger amounts of data). In a transaction, check the state of the database against the hidden form values; if the original record has changed since the user submitted the changes, reject the update. If not, accept the update and commit the transaction.
Another approach could be to have an Application variable that contains a map or dictionary of locked items.
So, when one user hits edit, add an entry to the Application-level map or dictionary, with the key set to the primary ID of the record being edited. Then, for all further requests, when users change between records, check the ID within the map, and if that record is being edited, toggle off any update buttons. If you want to do it AJAX-style, add a timer and an UpdatePanel and poll to see when the lock is released, then refresh the page with the updated data and enable the update buttons again.
Or, for a slightly better UI, allow the users to edit while waiting for the lock to release (i.e. for the map item to be removed); when it is removed, compare the fields they have been working on with the updated database values and allow them to overwrite/merge their changes.
The only real downsides are: 1) you would need to create one Application-level dictionary or map for each table that you want to lock/unlock, and 2) this breaks in a web-farm environment, where you would have to use a different system.
Does that make sense?
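To make it concrete, a minimal code-behind sketch of the lock map (the names are hypothetical; as noted above, this only works on a single server, not a web farm):

protected bool TryAcquireEditLock(int recordId)
{
    Application.Lock(); // HttpApplicationState serializes access while locked
    try
    {
        var locks = Application["EditLocks"] as Dictionary<int, string>;
        if (locks == null)
        {
            locks = new Dictionary<int, string>();
            Application["EditLocks"] = locks;
        }
        if (locks.ContainsKey(recordId))
        {
            return false; // someone else is editing: render read-only
        }
        locks[recordId] = User.Identity.Name;
        return true;
    }
    finally
    {
        Application.UnLock();
    }
}

The matching release would remove the entry when the user saves, plus the evening sweep for abandoned locks.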
