I'm tasked with designing a .NET application that will download a SQL script file from a specific server and execute that file against a database. I can think of a number of security steps I'll want to include:
Use a secure connection to the server (SFTP)
The database user only has limited permissions (insert and update on specific tables)
I suggested sandboxing the transaction in a separate database instance. Unfortunately, they say the transfer data set is too large for this to be practical.
I'm worried not only about someone purposefully damaging information in a very large database, but ideally also about preventing accidental damage.
Questions:
Did I miss anything? Are there any best practices to keep in mind for this kind of thing?
What would be the best way to authenticate the server cert against a man-in-the-middle attack?
To point 1)
Keep an audit log (a small sketch follows these points).
To whatever degree possible, help the user create these SQL scripts: drop-downs to choose table names, radio buttons to choose the command, a column selector, etc. This will help prevent accidents.
Ideally, you would be able to roll back to before any specific script was executed (think of how a bank has to be able to replay your transactions to verify your account balance if it is ever questioned). Depending on the frequency of updates and the data's importance, you're probably fine with just daily backups instead of an actual transactional, re-playable history.
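A minimal sketch of the audit log idea, assuming a SQL Server backend; the AuditLog table and its columns are hypothetical names, not anything standard:

```csharp
using System;
using System.Data.SqlClient;
using System.Security.Cryptography;
using System.Text;

// Records who ran which script, and when, before it is executed.
// The AuditLog table and its columns are hypothetical.
static void WriteAuditEntry(string connectionString, string userName, string scriptText)
{
    string scriptHash;
    using (var sha = SHA256.Create())
        scriptHash = Convert.ToBase64String(
            sha.ComputeHash(Encoding.UTF8.GetBytes(scriptText)));

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO AuditLog (UserName, ScriptHash, ExecutedAtUtc) " +
        "VALUES (@user, @hash, @when)", conn))
    {
        cmd.Parameters.AddWithValue("@user", userName);
        cmd.Parameters.AddWithValue("@hash", scriptHash);
        cmd.Parameters.AddWithValue("@when", DateTime.UtcNow);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
```

Storing a hash of the script also lets you tie each audit entry back to the exact text that ran.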
To point 2)
Use WinVerifyTrust to make sure the certificate is valid and chains to a valid root.
Use CryptQueryObject to check for a specific certificate.
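Those are the Win32 APIs; if the client is managed .NET code, X509Chain plus a thumbprint comparison covers much of the same ground. A minimal sketch, where the expected thumbprint is a value you would pin in your own configuration:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

// Validates that the certificate chains to a trusted root (the WinVerifyTrust
// part) and that it is the specific certificate we expect (the CryptQueryObject part).
static bool IsExpectedServerCertificate(X509Certificate2 cert, string expectedThumbprint)
{
    var chain = new X509Chain();
    chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;

    // Build() returns false if the chain does not terminate in a trusted root.
    if (!chain.Build(cert))
        return false;

    // Pin to one specific certificate by comparing thumbprints.
    return string.Equals(cert.Thumbprint, expectedThumbprint,
                         StringComparison.OrdinalIgnoreCase);
}
```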
I would implement your point 2 as restrictively as possible, but obviously your script has to be allowed to do some things, so you will have to trust the person who provides the script. To make sure you execute a script that really comes from the person you trust, I would sign the script and validate the signature before executing it. That way you can be sure it has not been modified by somebody else.
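A minimal sketch of that signature check, assuming the provider signs the script with an RSA key and ships a detached signature file alongside it (the file layout and SHA-256 choice are assumptions):

```csharp
using System.IO;
using System.Security.Cryptography;

// Returns true only if the signature over the script bytes verifies against
// the trusted provider's public key. Execute the script only when this passes.
static bool IsScriptSignatureValid(string scriptPath, string signaturePath, RSA providerPublicKey)
{
    byte[] scriptBytes = File.ReadAllBytes(scriptPath);
    byte[] signature = File.ReadAllBytes(signaturePath);

    return providerPublicKey.VerifyData(
        scriptBytes, signature,
        HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
```

How you distribute the public key (certificate store, embedded resource, etc.) is the part worth thinking hardest about, since the whole scheme rests on it.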
Related
I have an application that uses an MS SQL Server to store its data. We will roll out the application in several steps, so users can first test basic functionality, and we will add functions over time.
Probably, this will cause changes to the database. In the early stages, we can just drop the whole database and recreate everything with a script. But eventually, users will want to keep their test data. For example, if I add a new column to a table, I don't want to drop and recreate the whole table, losing all the data the user has stored so far in the process.
So, what I need is a script, that updates myDatabase v1.0 to myDatabase v2.0.
My question is: What is the best way to create such an update script? Do I have to manually keep track of all the changes and then assemble the update script myself? Or is there a function which could (semi)automatically create the update script from the old and the new database?
And finally, what is the best way to apply this patch? I have an ASP.NET Web API, so I could create a controller like api/updates/v1.0-to-v2.0.
What would the code for applying the script on the server look like?
Thanks in advance,
Frank
I'm working on a solution to this very problem; check out dbpatcher.com. The software I've created will help make migrating database changes easier. I'm putting the website together at the moment, so I would welcome feedback. The program itself isn't available yet, as I'm still trying to figure out the details of publishing.
If this is an ongoing concern (corporate), you should really consider different environments, i.e. test, staging, and production. This way you can test your deployments and database scripting changes in a pristine environment (something that looks exactly like production).
Given that, to answer your question: there really isn't a good way to do this. I've seen people use diff tools to detect the differences between schemas, and these tools create scripts to sync the two schemas, but they're not foolproof.
I find that scripting the changes and combining that with version control and an installation procedure (manual or automated) is the only way to get consistent results, and even that fails sometimes.
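As a sketch of what that installation procedure can look like in code, here is a minimal migration runner: keep the numbered scripts under version control and apply only the ones that haven't run yet. The SchemaVersion table and the numbered-file convention are assumptions, not a standard:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Applies every .sql script in the folder whose name sorts after the version
// recorded in the database, in order (e.g. 001-add-column.sql, 002-...).
static void ApplyPendingMigrations(string connectionString, string scriptFolder)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // SchemaVersion is a hypothetical one-row table tracking the last applied script.
        string current;
        using (var cmd = new SqlCommand("SELECT TOP 1 Version FROM SchemaVersion", conn))
            current = (string)(cmd.ExecuteScalar() ?? "");

        foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
        {
            var name = Path.GetFileName(file);
            if (string.Compare(name, current, StringComparison.OrdinalIgnoreCase) <= 0)
                continue; // already applied

            using (var tx = conn.BeginTransaction())
            {
                using (var cmd = new SqlCommand(File.ReadAllText(file), conn, tx))
                    cmd.ExecuteNonQuery();
                using (var cmd = new SqlCommand("UPDATE SchemaVersion SET Version = @v", conn, tx))
                {
                    cmd.Parameters.AddWithValue("@v", name);
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();
            }
        }
    }
}
```

One caveat: SqlCommand cannot execute scripts containing GO batch separators (GO is a client-side directive, not T-SQL), so those would need to be split out first.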
Code-first Entity Framework attempts to solve that issue, but it's not an option for a number of shops.
I would love to see a good tool to manage this, but the set of diverse frameworks and human error are the biggest problems here.
In terms of downtime, there really is no such thing as a live in-place upgrade of a web application. There are ways to reduce it to a minimum, such as updating one set of load-balanced web/app servers at a time and then failing users over to the new software. If you're doing table alterations, the likelihood that you're not going to lock the table and interrupt your users is pretty low.
Thanks for your replies!
I have different environments; my concern is how to change the database schema without losing data in the production environment (if possible, of course). Downtime is not so much of a problem. As I am not in production yet, I simply create a script to recreate the whole database, but when users have live data stored, they would probably be a bit ... upset.
There is a nice tool from Redgate that seems to solve that problem, but I have not checked it out yet.
I'm currently writing a web app which would largely be used by developers, and I figured (from personal experience) that there would be times where it would be handy to run custom searches in an unrestricted way. I would like to let my users run arbitrary multi-statement SQL searches on their personal data (for an extra fee), so they can retrieve the data that's relevant to their question at the time.
Obviously, this is something that needs to be done with extreme caution, so I would like to make sure I'm going to tackle this the right way.
As I see it, the main points of concern are:
A malicious user could run a DoS attack (I can track this via logging and remove their permissions)
Someone could run a function in a malicious way
Someone could access/modify data that doesn't belong to them (including database schema)
Someone could delete or modify data in a query (I would prefer they do that in a controlled manner)
What would be the safest way to go about providing this kind of ability to users safely?
This is dangerous territory (and I strongly recommend you weigh up this requirement carefully, given the obvious dangers you will be exposing yourself to); however, I will try to give you the safest way to proceed if you must.
The only assumption I am making here is that you are running a current version of PostgreSQL and that you require users to connect remotely to the server (using their own tools) to execute their custom queries. Even if they will be entering them into a webpage, most of the same techniques will still apply, as long as they each have a separate user login for the database server.
First (as NoBugs pointed out), to prevent users from executing obviously malicious statements (UPDATE, DELETE, DROP, etc.), you need to ensure that the user account connecting to the server has only SELECT permissions on the database(s) and table(s) they should be able to read from. Have a look in the manual to see how to define roles for users and grant specific permissions to those roles; a small sketch follows the note below.
http://www.postgresql.org/docs/9.0/static/user-manag.html
http://www.postgresql.org/docs/9.0/static/database-roles.html
Note that you can only limit a user down to a particular table. If users each need to be given access to different parts of a table, then PostgreSQL (and nearly all DBMSs) will not support this out of the box. Your only option would be to try and create some kind of SQL/TCP proxy that intercepts requests and modifies them somehow to limit query results before passing them on to the database server. This would be extremely difficult even for a very experienced developer!
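A minimal sketch of that role setup using the Npgsql driver; the role, password, database, schema, and table names are illustrative, and you could equally run the same statements once from psql:

```csharp
using Npgsql;

// One-time setup: a login role that can read one table and nothing else.
// Role, password, database, schema, and table names are illustrative.
static void CreateReadOnlyRole(string adminConnectionString)
{
    string[] statements =
    {
        "CREATE ROLE customer_ro LOGIN PASSWORD 'changeme'",
        "GRANT CONNECT ON DATABASE customerdb TO customer_ro",
        "GRANT USAGE ON SCHEMA public TO customer_ro",
        "GRANT SELECT ON public.customer_data TO customer_ro"
    };

    using (var conn = new NpgsqlConnection(adminConnectionString))
    {
        conn.Open();
        foreach (var sql in statements)
            using (var cmd = new NpgsqlCommand(sql, conn))
                cmd.ExecuteNonQuery();
    }
}
```

Anything not granted is denied, so an UPDATE or DROP issued by this role fails at the server itself.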
To prevent (or at least detect) DoS attacks, you will need an external script or process to keep an eye on the resource usage of the database (and/or the entire server) every few seconds, and possibly build in a mechanism to restart the PostgreSQL service if it is maxed out.
You will need to experiment carefully with how long to wait before intervening, as it is quite possible for a legitimate query to max things out for a few seconds.
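A sketch of such a watchdog using pg_stat_activity, terminating any backend whose query has run past a threshold; this is gentler than restarting the whole service. It assumes PostgreSQL 9.2 or later (for the state column), and the 30-second default is an assumption you would tune:

```csharp
using System;
using Npgsql;

// Terminates any backend whose query has been running longer than a threshold.
// Run this on a timer from a separate monitoring connection with sufficient
// privileges. pg_cancel_backend() is a gentler alternative that cancels only
// the current query instead of killing the session.
static void TerminateLongRunningQueries(string monitorConnectionString, int maxSeconds = 30)
{
    const string sql = @"
        SELECT count(pg_terminate_backend(pid))
        FROM pg_stat_activity
        WHERE state = 'active'
          AND now() - query_start > @max * interval '1 second'
          AND pid <> pg_backend_pid();";

    using (var conn = new NpgsqlConnection(monitorConnectionString))
    using (var cmd = new NpgsqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("max", maxSeconds);
        conn.Open();
        var terminated = (long)cmd.ExecuteScalar();
        if (terminated > 0)
            Console.WriteLine($"Terminated {terminated} runaway queries.");
    }
}
```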
As you mentioned, you would need to keep a careful log of who was trying to execute what, and when, so that if necessary you can work backwards from a failure to find the culprit. You can really only rely on the system logs for this, which can be configured to write out to files, CSV, or syslog.
I would suggest you pre-create some tools to help you quickly search these logs, so you can find what you need before you need to find it (pun intended).
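For instance, even a crude grep-style helper over PostgreSQL's CSV logs is better than nothing when you are under pressure (the log directory layout is an assumption):

```csharp
using System;
using System.IO;
using System.Linq;

// Grep-style helper: print every csvlog line mentioning a given role name.
// Crude by design; a proper tool would parse the CSV columns.
static void FindLogLinesForUser(string logDirectory, string userName)
{
    var matches = Directory.GetFiles(logDirectory, "*.csv")
                           .SelectMany(File.ReadLines)
                           .Where(line => line.Contains(userName));

    foreach (var line in matches)
        Console.WriteLine(line);
}
```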
Finally, you should also try to follow the other standard best practices for administration and security (all of which can be found in the manuals), including:
Only allow access for your users from specific IPs/hosts (don't give the general public any chance of connecting to your server). Your customers will need static IPs to access the system, but this is certainly worth considering to mitigate risks.
Keep a close eye on all of your standard administrative tasks for the server (especially backups, disk space, log file maintenance, index usage, etc.)
Make sure the user the SQL runs as has permissions only on the tables/files that user should be able to modify.
There are also some other considerations: only allow trusted input (maybe use HTTPS in your API calls?) and be aware that MySQL can access files and other resources you wouldn't want it to access.
See also: http://www.greensql.com/article/protect-yourself-sqli-attacks-create-backdoor-web-server-using-mysql
We are developing an application to change the password for users in our UCCE environment. I have found where this data is stored and I can update the password for a user.
However, it seems to only work for the web-based applications in our UCCE environment and not the desktop applications like CAD and CSD. For the life of me I cannot figure out why this is happening.
Has anyone ever done this successfully?
This is the response I got from Cisco
Hi all,
First, you could have some LDAP issues, but the sync does go out and check for changes every 10 minutes, so you could have an LDAP sync issue. Second, the supervisor password is not stored in the same table; the table I mentioned is for agents. The supervisor password is stored in a couple of different places, including AD, and I don't believe it's as easy to change as the agent table. Also, the password is not updated at the desktop; it is only stored in LDAP and validated when an agent tries to log in.
Also, just by changing it in SQL, I'm not sure that that's all it takes to have it populated across to the other logger, HDS, etc. (in fact I'm pretty sure it isn't). The normal way would be to make a change on the AW, which would then push that change to the router (via UPCC.dll), which sends the change to the loggers to update their databases, and finally back to the AW as a confirmation, as well as to the other AWs/HDSs. There are certain checks/procedures for changes to be propagated, one being the recovery key on each server.
With what you are trying to do, you will most likely cause corruption across all the databases, because they depend on the recovery key to ensure they are all in sync. So I'm not so sure that changing it in SQL is a very good idea, nor would it be supported by Cisco or Calabrio. You will most likely corrupt the database, since you are bypassing the way the Central Controller keeps everything synchronized.
Lastly, CAD wouldn’t have anything to do with this/these changes – it would only query the database and update LDAP, but as mentioned I think you are changing the agent password and therefore the supervisor is not being changed. I would seriously urge you not to try and change anything in SQL as we seen enough cases where the databases get out of sync and/or corrupted – not a lot of fun when that happens!
Hope that helps explain a bit more.
Thanks,
Chris
Correct
The password change you are doing in SQL won't be propagated across to the other AW/HDS, etc. I'm 100% sure it won't, and you will face a recovery key mismatch issue and a real-time table data mismatch issue, i.e. a synchronization issue.
I am designing a program that will build and maintain a database, and act as a central server. This is the 'first stage' of a grander plan. Coming later will be 3-5 remote programs built around the information put into this database.
The requirements are:
The remote programs must be able to access the information in the database.
The remote programs must be able to set alerts when information in the database changes.
The remote programs must be able to request the central server to go out and fetch new / different data.
So, the question is this: how do I expose this data and events to the outside world? My two choices are:
Have them communicate directly with my 'server' application. This seems to make it easier to do event notifications (although I suppose I'm probably missing something in SQL). It also seems more 'upgradeable': I don't need to worry about a database update crashing all my remote programs because something changed, since I can account for the change and transform the data into a version the child program will understand.
Just go ahead and let them connect directly to the database.
The nice thing about this is that it's a solved problem. I can use LINQ to SQL. The only thing the main server application needs to do is let the remote programs know where the database is.
I'm unsure how to trigger/relay 'events' for field changes in a database across different programs that may or may not be on the same computer.
Forgive my ignorance on this question. I feel woefully unprepared to ask it, but I'm having a hard time figuring out where to get started with this. It is my first real DB project :-/
Thanks!
If the other programs are going to need to know about updates to the database, then the best solution is to manage all db updates through your server application so it can alert clients of the changes. Otherwise it will be tough for the clients to be aware of changes to the db. This also has the advantage of hiding the implementation details of your storage solution from the clients, so you are free to change databases, etc.
My suggestion would be to go with option 1. Build out a web service that can provide the information they all need. This will be the most flexible and allow you to reduce duplicate backend code that would happen with direct communication with the database.
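A rough sketch of what that service layer could look like, assuming classic ASP.NET Web API; the controller, DTO, and route are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Web.Http;

// A thin service layer over the database: remote programs call this API
// instead of connecting to SQL Server directly, so the schema can change
// without breaking them. ReadingsController and ReadingDto are invented names.
public class ReadingsController : ApiController
{
    // GET api/readings
    public IEnumerable<ReadingDto> Get()
    {
        // A real implementation would query the database here and map the
        // current schema onto the stable DTO; hard-coded to stay self-contained.
        return new[] { new ReadingDto { Id = 1, Value = "sample" } };
    }
}

// The versioned contract the remote programs compile against.
public class ReadingDto
{
    public int Id { get; set; }
    public string Value { get; set; }
}
```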
I would recommend looking at some data source design patterns first. These types of patterns will help you come up with solutions for managing the state of your data. Beyond that, I think I would need more information about your requirements for the clients to make any further useful suggestions.
I recommend you learn about SQL Server and/or databases first. You don't appear to realize that most of what you want from your "central server" can all be done by SQL Server itself.
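For example, SQL Server's query notifications, surfaced in .NET as SqlDependency, can push change alerts to a client without polling. A minimal sketch, with the table name dbo.Readings as a placeholder:

```csharp
using System;
using System.Data.SqlClient;

// Subscribes to a query notification: OnChange fires once when the result of
// the watched query changes. Requires Service Broker to be enabled on the
// database, and the query must follow the notification rules (two-part table
// names, explicit column list, no SELECT *).
static void WatchForChanges(string connectionString)
{
    SqlDependency.Start(connectionString); // call SqlDependency.Stop() on shutdown

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Id, Value FROM dbo.Readings", conn))
    {
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
            Console.WriteLine("Change detected: " + e.Info);

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // The query must actually execute for the subscription to register.
            while (reader.Read()) { /* initial snapshot of the data */ }
        }
    }
}
```

Notifications are one-shot, so a real implementation would re-subscribe inside the handler; alternatively, the server application can simply broadcast its own events after each write, as suggested above.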
A central database is the simplest option and the cheapest to both build and maintain.
There are however a few scenarios where a central database could cause problems:
High load on one of the systems: a high load on one system could reduce performance for the others. For example, someone running an internal report stops you from being able to take orders on your eCommerce site.
With several systems writing to the same database there is a greater chance of locking.
With several systems dependent on the same database schema, how do you upgrade? All systems at the same time?
If you need to take down the database all systems stop.
I am working on an application that will allow users to create queries on their own to view data in their database. However, the stipulation is that the application should prevent any modification of the tables and data stored in the database. The application will be written in C#. Any good suggestions for how this could be done? Possible ideas that I have thought of:
Parse the SQL to filter for any reserved word that may alter data (e.g. INSERT, ALTER, etc.)
There may be a setting that prevents modification from this application's connection.
Any suggestion for blocking changes made from this application, to prevent any chance of user error or an attempt to modify tables or data, is much appreciated.
You should run your queries as a user that doesn't have write permission.
Any decent DBMS should have these protections already built in (at a per-user level). You just make sure the only access they have is read-only.
Then you don't have to worry about anything that they do. Let them try to insert, update and delete all they want.
It's a basic tenet of databases that they are responsible for their own security and integrity. You never leave that up to an external application, since any monkey can write an application that connects to the database and doesn't follow the rules.
This needs to be handled at the user level rather than the query level. When you set up your app, you'll need to make sure that the account used to run the queries does not have any db_datawriter permissions.
This is usually handled by giving users access to (non-updatable) views, but not to tables.
IMHO, the best way is to create a user that can only do SELECT on the specified tables, and then use that user for the connection.
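A sketch of that setup for SQL Server; the login name, password, and table names are placeholders, and the same statements could be run once from SSMS instead of from C#:

```csharp
using System.Data.SqlClient;

// One-time setup: a login that can read the reporting tables and nothing else.
// Login, password, and table names are placeholders.
static void CreateReadOnlyUser(string adminConnectionString)
{
    string[] statements =
    {
        "CREATE LOGIN report_reader WITH PASSWORD = 'Ch4nge-me!'",
        "CREATE USER report_reader FOR LOGIN report_reader",
        "GRANT SELECT ON dbo.Orders TO report_reader",
        "GRANT SELECT ON dbo.Customers TO report_reader"
    };

    using (var conn = new SqlConnection(adminConnectionString))
    {
        conn.Open();
        foreach (var sql in statements)
            using (var cmd = new SqlCommand(sql, conn))
                cmd.ExecuteNonQuery();
    }
}
```

The query-running part of the application then connects as report_reader, so even a hand-crafted DELETE is rejected by the server itself rather than by your parsing code.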