Allowing SQL queries that view data but not modify it - C#

I am working on an application that will allow users to create their own queries to view data in their database. However, the stipulation is that the application must prevent any modification of the tables and data stored in the database. The application will be written in C#. Any good suggestions on how this could be done? Possible ideas I have thought of:
Parse the SQL to filter out any reserved word that may alter data (e.g. INSERT, ALTER, etc.)
There may be a setting that prevents modification from this application's connection.
Any suggestion for blocking changes made from this application, to prevent any chance of user error or a deliberate attempt to modify tables or data, is much appreciated.

You should run your queries as a user that doesn't have write permission.

Any decent DBMS should have these protections already built in (at a per-user level). You just make sure the only access they have is read-only.
Then you don't have to worry about anything that they do. Let them try to insert, update and delete all they want.
It's a basic tenet of databases that they are responsible for their own security and integrity. You never leave that up to an external application, since any monkey can write an application that connects to the database and doesn't follow the rules.

This needs to be handled at the user level rather than the query level. When you set up your app, you'll need to make sure that the account used to run the queries does not have the db_datawriter role or any other write permissions.

This is usually handled by giving users access to (non-updatable) views, but not to tables.

IMHO, the best way is to create a user that can only SELECT from the specified tables, and then use that user for the connection.
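To make that concrete, here is a minimal C# sketch under the assumption of a dedicated SQL Server login (called report_reader here, purely illustrative) that belongs only to db_datareader. The application connects as that login and runs whatever the user typed; any write attempt is rejected by the database engine itself.

```csharp
// One-time setup, run by an administrator (names are illustrative):
//   CREATE LOGIN report_reader WITH PASSWORD = '...';
//   CREATE USER report_reader FOR LOGIN report_reader;
//   EXEC sp_addrolemember 'db_datareader', 'report_reader';
using System.Data;
using System.Data.SqlClient;

class ReadOnlyQueryRunner
{
    // The connection string uses the read-only login, never the app's admin account.
    const string ConnectionString =
        "Server=.;Database=ReportingDb;User Id=report_reader;Password=...;";

    public static DataTable RunUserQuery(string userSql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(userSql, connection))
        {
            command.CommandTimeout = 30;             // keep runaway queries short
            var table = new DataTable();
            using (var adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(table);                 // any INSERT/UPDATE/DELETE/DDL in
            }                                        // userSql throws a permission error
            return table;
        }
    }
}
```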

Related

How can I safely let users query my database using (Postgre)SQL?

I'm currently writing a web app which would largely be used by developers, and I figured (from personal experience) that there would be times when it would be handy to run custom searches in an unrestricted way. I would like to let my users run arbitrary multi-statement SQL searches on their personal data (for an extra fee), so they can retrieve the data that's relevant to their question at the time.
Obviously, this is something that needs to be done with extreme caution, so I would like to make sure I'm going to tackle this the right way.
As I see it, the main points of concern are:
A malicious user could run a DoS attack (I can track this via logging and remove their permissions)
Someone could run a function in a malicious way
Someone could access/modify data that doesn't belong to them (including database schema)
Someone could delete or modify data in a query (I would prefer they do that in a controlled manner)
What would be the safest way to go about providing this kind of ability to users safely?
This is dangerous territory (and I strongly recommend you weigh up this requirement carefully due to the obvious dangers you will be exposing yourself to), however I will try to give you the safest way to proceed if you must.
The only assumption I am making here is that you are running a current version of PostgreSQL and that you require users to remotely connect to the server (using their own tools) to execute their custom queries. Even if they will be entering them into a webpage, most of the same techniques will still apply as long as they each have a separate user log in for the database server.
First, (as NoBugs pointed out) to prevent users executing obviously malicious statements (like UPDATE, DELETE, DROP, etc.) you need to ensure that the user account connecting to the server has only SELECT permissions on the database(s) and table(s) they should be able to read from. Have a look in the manual to see how to define roles for users and grant specific permissions to those roles.
http://www.postgresql.org/docs/9.0/static/user-manag.html
http://www.postgresql.org/docs/9.0/static/database-roles.html
Note that you can only limit a user down to a particular table. If users each need to be given access to different parts of a table, then PostgreSQL (and nearly all DBMSs) will not support this out of the box. Your only option would be to try and create some kind of SQL/TCP proxy that intercepts requests and modifies them somehow to limit query results before passing them on to the database server. This would be extremely difficult even for a very experienced developer!
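If you prefer to script that role setup from application code, here is a minimal sketch using Npgsql (the role, password, database, and table names are all illustrative): it creates a login role and grants it nothing beyond CONNECT, schema USAGE, and SELECT on the tables it should be able to read.

```csharp
// Minimal sketch: one-time creation of a SELECT-only role, run over an
// administrative connection. All identifiers below are illustrative.
using Npgsql;

class ReadOnlyRoleSetup
{
    public static void CreateReportingRole(string adminConnectionString)
    {
        const string setupSql = @"
            CREATE ROLE report_reader LOGIN PASSWORD 'change-me';
            GRANT CONNECT ON DATABASE customer_db TO report_reader;
            GRANT USAGE ON SCHEMA public TO report_reader;
            GRANT SELECT ON public.orders TO report_reader;  -- repeat per readable table
        ";

        using (var connection = new NpgsqlConnection(adminConnectionString))
        {
            connection.Open();
            using (var command = new NpgsqlCommand(setupSql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```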
To prevent (or at least detect) DOS attacks, you will need an external script or process to keep an eye on the resource usage of the database (and/or the entire server) every few seconds, and possibly build in a mechanism to restart the PostgreSQL service if it is maxed-out.
You will need to experiment carefully with how long to wait before intervening, as it is quite possible for a legitimate query to max things out for a few seconds.
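As a sketch of that kind of watchdog (assuming PostgreSQL 9.2+ column names in pg_stat_activity, an administrative/superuser connection, and an illustrative 'report_reader' role and 30-second threshold):

```csharp
// Minimal sketch of an external watchdog: every few seconds it terminates any
// query from the restricted role that has been running longer than a threshold.
using System;
using System.Threading;
using Npgsql;

class QueryWatchdog
{
    const string KillSql = @"
        SELECT pid, pg_terminate_backend(pid) AS killed
        FROM pg_stat_activity
        WHERE usename = 'report_reader'
          AND state = 'active'
          AND now() - query_start > interval '30 seconds';";

    public static void Watch(string adminConnectionString)
    {
        while (true)
        {
            using (var connection = new NpgsqlConnection(adminConnectionString))
            using (var command = new NpgsqlCommand(KillSql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("Terminated backend {0}", reader["pid"]);
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```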
As you mentioned, you would need to keep a careful log of who was trying to execute what, and when, so that if necessary you can work backwards from a failure to find the culprit. You can really only rely on the system logs for this, which can be configured to write out to files, CSV, or syslog.
I would suggest you pre-create some tools to help you quickly search these logs to find what you need before you need to try and find it (pun intended).
Finally you should also try to follow the other standard best practices for administration and security (all of which can be found in the manuals) including:
Only allow access for your users from specific IPs/hosts (don't give the general public any chance of connecting to your server). Your customers will need static IPs to access the system, but this is certainly worth considering to mitigate risks.
Keep a close eye on all of your standard administrative tasks for the server (especially backups, disk space, log file maintenance, index usage, etc.)
Make sure the user the SQL is running as has permissions only on the tables/files that the user should be able to access or modify.
There are also some other considerations - only allow trusted input (maybe use HTTPS in your API calls?) and be aware that MySQL could access files and other things you wouldn't want to let it access.
See also: http://www.greensql.com/article/protect-yourself-sqli-attacks-create-backdoor-web-server-using-mysql

C#/SQL change logging - best methods

Not sure if this question is suitable for StackOverflow as it's much more 'general'. Basically, I have a database driven business application made in ASP.NET and C#, which'll be used by around 20 members of a company. A crucial aspect of this is auditing - I need to log on any changes to any of the tables, and have them viewable by senior members of the staff.
My current solution uses SQL triggers, but I need to create something much more robust and user friendly. The database is gigantic, with a lot of tables, relations, etc., and the audits currently are very uninformative to the users - telling the staff that user x modified an order to have a customer ID of 837 is near enough useless - I need to be able to dictate which fields are displayed in the audit log.
My idea is to create a class in my code that'll handle all these, and somehow map out what fields to display to the user, and also somehow tell the code which table was modified and which record.
Can anyone offer any general advice on how to do what I want, and whether it's actually possible? I'm a heavy user of LINQ to SQL in my code, so I'm hoping that'll help...
You could also try using DoddleAudit for your needs. It provides automatic auditing of all inserts/updates/deletes for any table in your database with a single line of code, including:
What table was modified?
What fields changed?
Who made the change?
When did it occur?
You can find it here: http://doddleaudit.codeplex.com/
I've had similar audit requirements for a healthcare application, which used linq-to-sql for data access.
One way to do it centrally in Linq-to-sql is to override SubmitChanges in the data context class. Before submitting the changes, call GetChangeSet() to get data about the pending changes. Then add change tracking information as appropriate to a relevant log table before calling base.SubmitChanges(). In my application I used an xml column to be able to store change data for different tables in a structured manner, without having to create special history tables for each table in the system.
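A minimal sketch of that SubmitChanges override (the AuditEntry table, its columns, and the AppDataContext name are illustrative; only updates are logged here to keep it short):

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;
using System.Xml.Linq;

// Illustrative audit table; in a real app this could be defined in the .dbml instead.
[Table(Name = "dbo.AuditEntry")]
public class AuditEntry
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)] public int Id { get; set; }
    [Column] public string TableName { get; set; }
    [Column] public string ChangedBy { get; set; }
    [Column] public DateTime ChangedAt { get; set; }
    [Column] public string Details { get; set; }   // XML describing the changed fields
}

// Partial class extension of the designer-generated data context.
public partial class AppDataContext
{
    public Table<AuditEntry> AuditEntries
    {
        get { return GetTable<AuditEntry>(); }
    }

    public override void SubmitChanges(ConflictMode failureMode)
    {
        ChangeSet changeSet = GetChangeSet();

        foreach (object entity in changeSet.Updates)
        {
            // Which members of this entity actually changed?
            ModifiedMemberInfo[] modified =
                GetTable(entity.GetType()).GetModifiedMembers(entity);

            var details = new XElement("changes",
                modified.Select(m => new XElement("field",
                    new XAttribute("name", m.Member.Name),
                    new XAttribute("old", m.OriginalValue ?? ""),
                    new XAttribute("new", m.CurrentValue ?? ""))));

            AuditEntries.InsertOnSubmit(new AuditEntry
            {
                TableName = entity.GetType().Name,
                ChangedBy = Environment.UserName,
                ChangedAt = DateTime.UtcNow,
                Details = details.ToString()
            });
        }

        base.SubmitChanges(failureMode);   // audit rows go in with the same transaction
    }
}
```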
You could also try using SQL Server 2008's Change Data Capture feature. It basically captures inserts, updates and deletes on the desired tables, and stores changes made into a separate set of relational tables.
http://www.mssqltips.com/sqlservertip/1474/using-change-data-capture-cdc-in-sql-server-2008/
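For reference, reading the captured changes back out is just a query against the CDC functions. A minimal C# sketch, assuming CDC has already been enabled on dbo.Orders with the default capture instance name dbo_Orders:

```csharp
using System;
using System.Data.SqlClient;

class CdcReader
{
    // Assumes sys.sp_cdc_enable_db and sys.sp_cdc_enable_table have already been
    // run for dbo.Orders, giving the default capture instance name 'dbo_Orders'.
    const string ChangesSql = @"
        DECLARE @from binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
        DECLARE @to   binary(10) = sys.fn_cdc_get_max_lsn();
        SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from, @to, N'all');";

    public static void DumpChanges(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(ChangesSql, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // __$operation: 1 = delete, 2 = insert, 4 = update (new values)
                    Console.WriteLine("Operation {0} at LSN {1}",
                        reader["__$operation"],
                        BitConverter.ToString((byte[])reader["__$start_lsn"]));
                }
            }
        }
    }
}
```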

Automatically log changes made to records in database

I'm wondering what the best way to implement this would be.
Basically our project has a requirement that any change made to records in the database should be logged. I already have it completed in C# using Reflection and Generics, but I'm not 100% sure that I used the best method.
Is there a way to do this from inside the SQL database?
The big key is that the way our project works, the ObjectContext is disconnected, so we couldn't use the built-in change tracking and had to do our own compares against previous log items.
If you're using SQL Server 2008 or higher, you can implement either change tracking or change data capture directly on the database. Note that the latter is only available in the Enterprise edition engine. There are pros and cons to each method. You'll have to review each solution for yourself as there isn't enough requirement information to go on in the question.
If you're using SQL Server 2005 or below, you'll have to resort to a trigger-based solution, as suggested by the other answers.
You want to look at database triggers.
Depending on the complexity of your data model, you could set up update/insert/delete triggers on the relevant tables - these triggers could log whatever is needed (old/new values, user, timestamp, etc.)... see http://msdn.microsoft.com/de-de/library/ms189799.aspx
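A minimal sketch of such a trigger (the dbo.Orders table, trigger name, and dbo.AuditLog columns are illustrative), shown here as T-SQL deployed from C#:

```csharp
using System.Data.SqlClient;

class AuditTriggerInstaller
{
    // Illustrative audit trigger: stores the old/new rows as XML along with
    // who made the change and when, one audit row per statement.
    const string TriggerSql = @"
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.AuditLog (TableName, Operation, ChangedBy, ChangedAt, OldRows, NewRows)
    SELECT 'Orders',
           CASE WHEN EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
                WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
                ELSE 'DELETE' END,
           SUSER_SNAME(),
           GETUTCDATE(),
           (SELECT * FROM deleted FOR XML AUTO),
           (SELECT * FROM inserted FOR XML AUTO);
END";

    public static void Install(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(TriggerSql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();   // CREATE TRIGGER must be the only statement in the batch
        }
    }
}
```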
Look at my blog to see how you can track data changes without database schema modification:
part1, part2
For your project requirement, a SQL trigger is a better solution than the current C# reflection approach, because triggers provide a way for the database management system to actively control, monitor, and manage a group of tables whenever an insert, update, or delete operation is performed. Moreover, the requirement is fulfilled at the database layer itself, so it serves as a single solution for various front-end applications.

Can LINQ to SQL do an insert or update as needed?

I have a distributed app that sends data via WCF to a server to be stored in the database (SQL Server 2008).
Usually this data is new. So I just do InsertAllOnSubmit().
But occasionally, due to communication oddities with the handheld devices that run the client side, I get an object that is already on the server.
Is there a way to say InsertOrUpdateAllOnSubmit? Or am I going to have to change my code to go through each object, check to see if it is in the database, and then do an insert or update as needed? I have quite a few object types, so that will get tedious really fast :(
Change the stored procedure on the database to handle inserts of duplicates.
The usual pattern I've seen for this scenario is: when you attempt any kind of insert or update and there is a conflict, such as the object already existing, then once you detect the conflict you request the current data and apply your updates to it. This allows you the flexibility of deciding which change wins, or prompting the user in some way, letting them view the new data and decide if their data should win. It all depends on the context and business rules.
This mostly applies to updates but maybe if you think about why these conflicts are occurring for inserts you might be able to adapt your implementation to use concurrency detection and resolution techniques.
http://msdn.microsoft.com/en-us/library/bb399373.aspx
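There is no built-in InsertOrUpdateAllOnSubmit, but the per-object check can be wrapped up once so it doesn't get tedious. A minimal sketch (the key-predicate delegate, the property-copy loop, and the Device example are all illustrative simplifications):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

static class UpsertExtensions
{
    // For each incoming object, insert it if no row matches its key; otherwise
    // copy its scalar values onto the tracked row so SubmitChanges() issues an UPDATE.
    public static void InsertOrUpdateAllOnSubmit<T>(
        this DataContext context,
        IEnumerable<T> incoming,
        Func<T, Expression<Func<T, bool>>> keyPredicate) where T : class
    {
        Table<T> table = context.GetTable<T>();

        foreach (T entity in incoming)
        {
            T existing = table.SingleOrDefault(keyPredicate(entity));
            if (existing == null)
            {
                table.InsertOnSubmit(entity);
            }
            else
            {
                // Simplification: copy only scalar (value-type/string) properties,
                // which skips associations; refine as needed (e.g. skip the key).
                foreach (var prop in typeof(T).GetProperties().Where(p =>
                             p.CanRead && p.CanWrite &&
                             (p.PropertyType.IsValueType || p.PropertyType == typeof(string))))
                {
                    prop.SetValue(existing, prop.GetValue(entity, null), null);
                }
            }
        }
    }
}

// Usage, for a hypothetical Device entity keyed by SerialNumber:
//   db.InsertOrUpdateAllOnSubmit(devices, d => x => x.SerialNumber == d.SerialNumber);
//   db.SubmitChanges();
```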

Monitoring/watching database activity programmatically

I was wondering how to monitor a database for changes programmatically.
Suppose I want to have a .NET application which would run after every 100th (or nth) row insertion, deletion, or update. How can that be achieved?
I know a little about triggers. They can be used to fire an executable, but I heard that isn't good practice.
Is there any other way?
2] Do databases fire events on table updates, and can they be caught in a program?
3] Can SQL Reporting Services be used here?
(Also assuming that this application is independent of the actual program which does the database manipulation.)
SQL Server 2005 introduced query notifications, new functionality that allows an application to request a notification from SQL Server when the results of a query change. Query notifications allow programmers to design applications that query the database only when there is a change to information that the application has previously retrieved.
Check out the MSDN link for more clarity and a sample implementation.
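A minimal SqlDependency sketch of query notifications (connection string, table, and columns are illustrative; the database needs Service Broker enabled, and the query has to follow the notification rules, e.g. two-part table names and an explicit column list):

```csharp
using System;
using System.Data.SqlClient;

class ChangeListener
{
    const string ConnectionString = "Server=.;Database=AppDb;Integrated Security=true;";

    public static void Start()
    {
        SqlDependency.Start(ConnectionString);   // requires Service Broker on the database

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per subscription; re-run the query (and re-subscribe) here.
                Console.WriteLine("Change detected: {0} / {1}", e.Type, e.Info);
            };

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* cache the current results */ }
            }
        }

        // Call SqlDependency.Stop(ConnectionString) on application shutdown.
    }
}
```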
A trigger is really going to be your only way unless you aren't concerned about the accuracy of "100th" or "nth".
The answers to 2 and 3 are no.
You can write managed stored procedures (MSDN example) but that doesn't help you here really. In general triggers can be bad practice since they can block the initial caller but sometimes they are the only solution.
I think you need to question your requirement to place this low-level data monitoring in a separate application. Think about where your data changes could originate -
Do you have a full understanding of every:
stored proc within your db (now and future) and which ones update this table?
application that may hit your database (now and future)?
If not, then watching the changes right down at the data level (ie within the db) is probably the best option, and that probably means triggers...
Read about "Service Broker" at http://msdn.microsoft.com/en-us/library/ms166104(v=SQL.90).aspx
