Disable DML queries in SqlCommand (C#)

Problem
I'm writing a GUI application that lets users generate an Excel file from a SELECT SQL query they enter in a TextBox. It connects to SQL Server, runs the SELECT against the database, fills a DataTable object, and pushes that data to an Excel file.
The way I have developed the application is vulnerable to SQL injection, and the user may be able to pass any DML query such as DELETE or UPDATE.
Question
Is there a way in the SqlClient library to prevent the user from entering and executing DML queries? Can I somehow force the SqlCommand object to throw an exception when a DELETE command is passed?

The correct way to do this is to create a database user with only SELECT grants on the specified tables or views, as described by BhavO and jean in the comments.
Why is this the correct way to limit the T-SQL commands?
Doing it client-side is significantly more complex. There is a T-SQL parser library provided by Microsoft, but do you really want to spend your time writing and testing tree-visitor code that ensures you only have SELECT commands that only query certain tables? You would also have to keep that parser component up to date with SQL Server releases, which might introduce new SELECT syntax that an older parser doesn't understand and that causes errors in your app. Why not delegate the T-SQL parsing to the component in your system that is already designed to do it, namely SQL Server?
Doing it client-side provides no actual security. Security of a server needs to be implemented by the server, not by its client code. The client code is running on the user's machine, so the user has total control over what is being executed. This means a malicious user can potentially (1) decompile and edit out the "DML disable" check component and then run the edited binaries, therefore skipping the check, or more practically (2) use network inspection tools to determine how your client app is connecting to the service (i.e. the connection string) and then just directly connect using that connection string in SSMS or SQLCMD or whatever and own your server. So all of the complicated parsing logic really hasn't slowed down an attacker at all.
These reasons are (among others) why GRANT, DENY and so on exist in SQL Server in the first place. They are good (mature, well-tested, easy-to-use) tools that are implemented in the correct place in the stack.
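As a sketch (all names here are hypothetical, and the password is a placeholder), a SELECT-only user for the reporting tool could be set up like this:

```sql
-- Hypothetical names: ReportReader login/user, ReportDb database, dbo.Sales view
CREATE LOGIN ReportReader WITH PASSWORD = 'UseAStrongPasswordHere!';
GO
USE ReportDb;
GO
CREATE USER ReportReader FOR LOGIN ReportReader;
-- Grant only SELECT, and only on the objects the tool should read
GRANT SELECT ON dbo.Sales TO ReportReader;
-- Any INSERT/UPDATE/DELETE the user types now fails with a permission error
```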

Create a database user with only SELECT grants, use this user for the connection, and then handle the SqlException thrown when a denied command is executed.
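A minimal client-side sketch, assuming a connection string for such a SELECT-only user (connection string, database, and class names are hypothetical). A denied DML statement surfaces as a SqlException, which you can catch and report; error number 229 is the "permission was denied" error, though you may prefer to catch SqlException generally:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ReportRunner
{
    // Connection string for the restricted, SELECT-only user (hypothetical).
    const string ConnStr = "Server=.;Database=ReportDb;User Id=ReportReader;Password=...;";

    static DataTable RunQuery(string userSql)
    {
        var table = new DataTable();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(userSql, conn))
        {
            conn.Open();
            try
            {
                // Fill the DataTable that will be pushed to Excel
                table.Load(cmd.ExecuteReader());
            }
            catch (SqlException ex) when (ex.Number == 229) // permission denied
            {
                throw new InvalidOperationException(
                    "Only SELECT queries are allowed.", ex);
            }
        }
        return table;
    }
}
```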

Related

How can I disable a table in a SQL Server database from SSMS

I inherited a SQL Server database used with a C# client application. I know enough to be dangerous but I'm not a programmer. I know the previous programmer liked to play around in the live database and as a result I have a number of what I think are orphaned tables.
The application itself isn't super high-usage and can withstand exceptions from tables not being there for a small time frame. I'd like to start turning off or disabling these tables to see, through trial and error, whether they're still being used.
Is there a way to turn them off without completely removing the data so that I can re-enable them quickly if needed? Or is there a more transparent way to discover whether those tables are needed?
There is no easy way. The tables can be accessed through both stored procedures and direct SQL calls from your client application. A comprehensive approach would mean that you'd have to have some way of making each table unavailable (renaming has been suggested in comments) and then perform a full regression test on your client application; you might have to do this with each table in the database. The client application might access the tables conditionally, subject to external things like the logged-in user (and related privileges), the date, configuration, and so forth.
The SQL Server Profiler is a good tool to use, but it's not going to solve your problem all by itself because you still have to analyze what it captures.
You could create a new database schema and transfer the tables to that schema:
ALTER SCHEMA NewSchema TRANSFER dbo.YourTable;
Then transfer them back again after testing.
https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-schema-transact-sql
You can change the permissions so that no one except you and the database owners has SELECT permission on the table. You cannot prevent a member of db_owner or sysadmin from having all permissions on a table.
You can also encrypt the table (see Encrypting database tables in SQL Server 2008), in which case it is really locked down.
You can also use SQL Server Audit to see if anyone reads the data. Audit is a very low-impact feature (it comes with SQL Server 2008), is very easy to set up, and, unlike a trigger, can audit SELECTs.
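A minimal audit sketch for one suspect table (audit name, database, table, and file path are all hypothetical):

```sql
-- Server-level audit writing to a file target
CREATE SERVER AUDIT TableUsageAudit
    TO FILE (FILEPATH = 'C:\Audits\');
ALTER SERVER AUDIT TableUsageAudit WITH (STATE = ON);
GO
USE YourDatabase;
GO
-- Capture SELECTs by any principal against the suspect table
CREATE DATABASE AUDIT SPECIFICATION SuspectTableReads
    FOR SERVER AUDIT TableUsageAudit
    ADD (SELECT ON OBJECT::dbo.SuspectTable BY public)
    WITH (STATE = ON);
-- Later, read the captured events:
-- SELECT * FROM sys.fn_get_audit_file('C:\Audits\*', DEFAULT, DEFAULT);
```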

Is it appropriate to use a shell command to download MySQL tables from within a C# program?

I am writing a C# program that needs to obtain data from a MySQL database in a REMOTE server. The internet connections that it will be using are extremely slow and unreliable, so I want to minimize the data that is being transferred.
The following shell command gets MySQL to store data from a certain table as a *.txt file in the LOCAL machine:
mysql.exe -u USERNAME -pPASSWORD -h REMOTE_SERVER_IP DB_NAME -e "SELECT * FROM table1" > C:/folder/file_name.txt
As of now, I am writing a C# program that will execute this command. HOWEVER, when executing this command from the Windows Command Prompt, I get a Warning that says "Using a password on the command line interface can be insecure." I have a few questions:
1- What kind of security risk is it referring to?
2- Does this risk still exist if you execute it from within another program?
3- Would any of y'all use the same approach? How does this compare with using a straight MySqlConnection and calling in SP's to store all of the data in RAM (and inserting it into the local database later), in terms of amounts of data transferred, speed and RAM usage? (In theory, of course, I don't expect anyone to have tried this specific comparison already)
4- Is the code on the following link the best for this? Isn't there something in the MySql library (.Net Framework) that will make it easier?
How to use mysql.exe from C#
I am also open to suggestions on changing my approach altogether, just in case...
EDIT: The alternate method I referred to in 3 uses the MySqlDataAdapter class, which stores the data in DataSets.
1 & 2
Since you're passing the password as a CLI argument, anyone looking at your screen can see it, and on many systems other users can read it from the process list while the command runs. As easy as that.
Rest of points
It's not true that you would have to take all records into memory. If you use MySQL's IDataReader implementation, MySqlDataReader (i.e. you'll need to call the MySqlCommand.ExecuteReader method), you can retrieve results from the database sequentially, like a stream: you read each row in the result set one by one and store it in a file using a FileStream.
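A streaming sketch of that idea, assuming the MySql.Data package (MySql.Data.MySqlClient namespace); the connection string, table name, and output format are hypothetical. Rows go straight from the reader to the file without buffering the whole result set in RAM:

```csharp
using System.IO;
using MySql.Data.MySqlClient;

class TableExporter
{
    static void ExportToFile(string connStr, string table, string path)
    {
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand("SELECT * FROM " + table, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            using (var writer = new StreamWriter(path))
            {
                while (reader.Read())  // one row at a time, like a stream
                {
                    var fields = new string[reader.FieldCount];
                    for (int i = 0; i < reader.FieldCount; i++)
                        fields[i] = reader.GetValue(i).ToString();
                    writer.WriteLine(string.Join("\t", fields));
                }
            }
        }
    }
}
```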
It will show your password in plain text, whether on screen, in the console output, or in memory.
Yes, since you need to store the password in plain text either on disk or in memory.
If you are not that concerned about someone gaining access to your remote machine and stealing the password without you knowing it, then it's fine.
You can try the Windows Native Authentication Plugin, with which you wouldn't need to store the password; it uses your current Windows login information to authenticate (unless you are on Linux, in which case forget about it).
It is pretty much the same idea as typing your password on any website without a mask (either dot or *). Whether or not that is a concern for you is for you to decide.
Why not connect to the DB the standard way (from within .NET, just as you can connect to an Oracle DB, for example) using MySqlConnection, as shown here: MySql Connection. Once you do that, you have no password concerns, as this is in code. Then I would handle the problem in a similar fashion (incrementally fetching data and storing it locally, to get around the internet issue).
So, I finally got around to properly coding and testing both methods (the shell command and the IDataReader), and the results were pretty interesting. In total, I tested a sample of my 4 heaviest tables six times for each method. The shell-command method needed 1:00.3 on average, while the DataReader needed 0:56.17, so I'll go with the DataReader because of its overall (and pretty consistent) advantage of about 4s.
If you look at the breakdown per step though, it seems that C# needed a full 8s to connect to the database (48.3s for downloading the tables vs the previous total). If you consider that the shell command was most likely establishing and closing a new connection for each table that was being downloaded, it seems to me that something in that process is actually quicker for connecting to the remote database. Also, for one of the tables, the shell command was actually faster by 2.9 seconds. For the rest of the tables, only one was more than 8 seconds slower under the shell command.
My recommendation for anyone in the future is to use the shell command if you're only obtaining a single, large table. For downloading multiple tables, the IDataReader is the more efficient choice, probably because it only establishes the connection once.

how to maintain database integrity for write operations in a website

A website to add, change and delete records, etc.
The website uses a disconnected architecture, so I can't expect SQL to refuse writes to a table that is being edited by someone else, as data is only written when it's sent back to the server by the grid.
So is there a way, using C# and ASP.NET, some code by which I can explicitly tell SQL Server to lock the table, so that viewing is allowed but writing to it gives an error like
"sorry, another user is using the writing function for this table, please wait".
Not with normal transaction modes and plain SQL: by default, a blocked write will just hang until the table is released rather than failing immediately. Oracle has explicit "lock nowait" semantics (see http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96590/adg08sql.htm#5732 for more information); the closest SQL Server equivalent is SET LOCK_TIMEOUT 0 (or the WITH (NOWAIT) table hint), which makes a blocked statement fail right away with error 1222 instead of waiting.
There is an option of lowering your transaction mode, but then you get more like "free for all" semantics, which you probably also don't want.
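A fail-fast sketch in T-SQL (table, column, and message are hypothetical), which you could wrap in a stored procedure and call from ASP.NET:

```sql
-- Fail immediately instead of waiting on locks: a blocked statement
-- raises error 1222 ("Lock request time out period exceeded")
SET LOCK_TIMEOUT 0;
BEGIN TRY
    UPDATE dbo.YourTable SET SomeColumn = 1 WHERE Id = 42;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        RAISERROR('Sorry, another user is editing this table, please wait.', 16, 1);
END CATCH
```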

Finding the SQL output of a parameterized query

I'm making a parameterized query using C# against a SQL server 2005 instance, and I'd like to take a look at the SQL that is run against the database for debugging purposes. Is there somewhere I can look to see what the output SQL of the parameterized command is, either in the database logs or in the Visual Studio debugger?
Use SQL Server Profiler to view the SQL:
http://www.eggheadcafe.com/articles/sql_server_profiler.asp
http://msdn.microsoft.com/en-us/library/ms187929(SQL.105).aspx
SQL Profiler is the best solution, but if you need something more organic to your application that you could deploy and enable/disable in production, QA, etc... then you could build a wrapper around the System.Data.SqlClient Provider (Ex. the provider registered in the config file as... providerName="System.Data.SqlClient").
This would essentially act like an intercept proxy which would give you access to all the information passing through the Provider (e.g. between your application and the database client). This would allow you to siphon-off what you need, intercept, modify, aggregate and/or enrich it. This is a bit more advanced but opens the door to capture a whole range of information and could be inserted/replaced/removed as a separate layer of concern.
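Short of Profiler or a full provider wrapper, a quick debugging aid is to log the command yourself. This sketch (a hypothetical helper, not part of SqlClient) reconstructs something you can paste into SSMS; note it is only an approximation for debugging, since a parameterized command is actually sent to the server via sp_executesql, not as a substituted string:

```csharp
using System.Data.SqlClient;
using System.Text;

static class SqlDebug
{
    // Approximates the parameterized command as runnable T-SQL,
    // for debugging purposes only (values are not escaped).
    public static string Dump(SqlCommand cmd)
    {
        var sb = new StringBuilder();
        foreach (SqlParameter p in cmd.Parameters)
            sb.AppendLine($"DECLARE {p.ParameterName} {p.SqlDbType} = '{p.Value}';");
        sb.AppendLine(cmd.CommandText);
        return sb.ToString();
    }
}
```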

Run SQL statements from ASP.net application

I need to run SQL statements from the application itself, i.e. the user can go into the ASP.NET application, get a box, and run SQL statements from there.
I am already doing something like this
Can I rollback Dynamic SQL in SQL Server / TSQL
That is running dynamic SQL.
Is there a better way to do this?
Dynamic SQL is certainly the easiest way to do this. The alternative is parameterized SQL, but that would require having your users define and set parameters separately from the T-SQL.
You can simply submit the T-SQL string to SQL Server using the SqlCommand object; there's no real benefit to wrapping it in an EXEC or anything, as in the link you provided. You can do exception handling on the .NET side.
Also, if you want to support command batches, keep in mind that SqlClient and friends don't understand "GO", which also isn't an actual T-SQL command -- you will need to parse the input and break it into batches yourself.
I'm sure you understand that there is a big security risk in doing this, and that it's generally not recommended. You might consider using a connection string that specifies a user with limited permissions, to help control / limit their access.
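The batch-splitting caveat above can be sketched like this (a simple line-based split; real scripts have edge cases, such as GO inside comments or string literals, that this does not handle):

```csharp
using System.Linq;
using System.Text.RegularExpressions;

static class BatchSplitter
{
    // Split a script on lines containing only "GO" (case-insensitive),
    // the way SSMS and sqlcmd treat the batch separator.
    public static string[] Split(string script)
    {
        return Regex.Split(script, @"^\s*GO\s*$",
                           RegexOptions.Multiline | RegexOptions.IgnoreCase)
                    .Select(batch => batch.Trim())
                    .Where(batch => batch.Length > 0)
                    .ToArray();
    }
}
```

Each resulting batch can then be sent separately through its own SqlCommand.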
DO NOT DO THIS. What if the user types in sp_msforeachtable 'truncate table ?'...?
RunSQL.aspx utility might help. See Upload T-SQL and execute at your hosting provider using an ASP.NET page.