.NET - Best way to store encrypted offline data in a desktop application - C#

I couldn't find a suitable existing topic, so I'm creating this question.
I'm building a desktop application that runs on data from an MS SQL database. Most of the data there is updated once a week or month, and most of the tables are read-only to the end user. I figured there is no need for the user to work directly against the SQL database online, so to speed up performance I want the app to download the necessary data from the SQL database on start and then work on locally saved data. In case of a server outage, the user should also be able to load the app using the latest saved data.
The thing is, the data needs to be encrypted and secured against unauthorised use. I used to keep a local SQLite database, but running on two databases doesn't feel efficient.
What solution would you suggest?

The problem with encrypting is:
1 - you need to decrypt the data in order to use it in your application, which will have a performance impact.
2 - if you can decrypt it, so can someone who has obtained the data illegally, so it's not really that secure.
If you did want to go down this route, you could read the data in from the DB, decrypt it, and then cache it, which will speed up the application.
Since you said the data only gets updated every few weeks/months, the cache could last for several days and minimise DB calls.
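If you do go the caching route, here is a minimal sketch of the idea in C#, using Windows DPAPI (`ProtectedData`) so the cached copy stays encrypted at rest for the current user. The file path, cache lifetime, and JSON payload are assumptions for illustration, not a prescription:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

// Caches one blob of data, encrypted at rest with DPAPI (Windows only; on
// modern .NET this needs the System.Security.Cryptography.ProtectedData package).
public static class LocalCache
{
    private static readonly string CachePath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp", "data.cache"); // hypothetical app folder

    public static void Save(string json)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(CachePath));
        byte[] protectedBytes = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(json),
            optionalEntropy: null,
            scope: DataProtectionScope.CurrentUser); // only this user can decrypt
        File.WriteAllBytes(CachePath, protectedBytes);
    }

    // Returns null when there is no cache or it is older than maxAge,
    // so the caller knows to go back to the real database instead.
    public static string Load(TimeSpan maxAge)
    {
        if (!File.Exists(CachePath)) return null;
        if (DateTime.UtcNow - File.GetLastWriteTimeUtc(CachePath) > maxAge) return null;

        byte[] clear = ProtectedData.Unprotect(
            File.ReadAllBytes(CachePath), null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(clear);
    }
}
```

Note that the caveat from point 2 still applies: DPAPI only ties the file to the Windows account, so a user who can run code as themselves can still decrypt their own cache.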

I am not sure your proposal is a great solution.
You have data that a user should not have full access to, so you want it encrypted.
The easiest way to limit access is to not give the user access at all; for example, by putting a server in front of it. Only the server talks to the database, and the database is secured and encrypted. Now you don't need client-side encryption (which can always be broken if a hacker has enough time and money). You just secure your server, expose some APIs, and boom, you are secure.
But let's find out a bit more about your data. The client of course needs to view the data in the app, but isn't allowed to have access to the database when running locally. Why do you have data in the local database that the user isn't allowed to see in this case? This sounds like a security issue where a user might have (private) data of your company or other users on their PC, which you do not want, ever. So let's say we only store data on the PC that the user can see in the app anyway. Security is better now!
Now to solve the performance problem. How much of a problem is it? You could implement caching in the server and in the client, depending on how severe your performance problems are. When the server only returns data that the client is allowed to see, you can cache it in the client and also on the server.
If the client already has data cached and doesn't need to retrieve new data (when is this?), then it doesn't need to talk to the API at all. And if the client doesn't have data, or has outdated data, the API checks whether it has the new data cached and returns it immediately if so.
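A rough sketch of that client-side flow, assuming a hypothetical API that exposes a cheap `/data/version` check next to the full `/data` payload:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch only: the endpoints and the string payload are placeholders
// for whatever your real API returns.
public class DataClient
{
    private readonly HttpClient _http =
        new HttpClient { BaseAddress = new Uri("https://api.example.com/") };
    private string _cachedData;
    private string _cachedVersion;

    public async Task<string> GetDataAsync()
    {
        // Cheap call first: ask the server which version of the data is current.
        string serverVersion = await _http.GetStringAsync("data/version");

        // Cache hit: nothing changed server-side, so skip the expensive call.
        if (_cachedData != null && _cachedVersion == serverVersion)
            return _cachedData;

        // Cache miss or stale: fetch the full payload and remember its version.
        _cachedData = await _http.GetStringAsync("data");
        _cachedVersion = serverVersion;
        return _cachedData;
    }
}
```

The server can apply the same trick internally: keep its own cache keyed by that version and only hit the database when the version moves on.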

Related

Using direct MySql connection in app - C#

I have developed an app which more than 2k users are going to use. This app is connected to a database which contains some data.
I have some questions:
1. Is it OK to use a direct MySQL connection in the app, instead of an API, for just reading data?
2. Is there a way someone could find my server's information (address, password, etc.) from my application?
The app is WPF.
Generally speaking (and as with all generalities there are all kinds of exceptions here, in both directions) it's okay to connect directly to the database if one of these two conditions is met:
The app and the database are on the same computer
or
The app and the database are on different computers, but within the same corporate network and traffic between the app and the database is adequately protected.
and if one of these conditions is also met:
The end user owns the app and doesn't share data with other users (if they break it, that's their own problem and no one else's)
or
You issue separate accounts with only the necessary privileges to each user (the user owns the credential)
or
The machines where the application is deployed are controlled by the business, where you can securely deploy the application (and the account credentials it uses to connect to the database) in such a way that end users are not able to retrieve the account credentials directly. (The business owns everything).
It is not generally okay to connect directly to a database over the public Internet, or within a local network where traffic to the database is not adequately protected, and it is not generally okay to let end users have direct access to the database separate from the application (and if a user has ownership of their machine, they will be able to get that access).
I also need to expound on what I mean by "adequately protected". This involves a few things:
A good firewall between the clients and the database. In some smaller environments, the firewall on the OS hosting the database itself may be enough.
Measures to prevent MitM attacks on data packets to and from the DB. For traditional corporate networks, this usually means 802.1x is running even on the wired network, and Wi-Fi access is similarly protected (a pre-shared-key Wi-Fi network like you use at home is not good enough, because anyone who can get the key can decrypt your traffic). Alternatively, you can implement encryption that runs from the client all the way into such a protected network. This is what many corporate VPNs are for (a public VPN service doesn't accomplish this for you). You may also be able to encrypt the actual database connection traffic. I know how to do this for SQL Server, for example, though I'm less clear on what direct support MySQL has in this area.
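For illustration, requesting an encrypted SQL Server connection from a C# client looks roughly like this (server and database names are placeholders, and the server side still needs a certificate configured):

```csharp
using System;
using System.Data.SqlClient;

// Ask the client driver to encrypt the wire traffic and to actually
// validate the server's certificate rather than trusting anything.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "db.corp.example.com",  // placeholder server
    InitialCatalog = "AppDb",            // placeholder database
    IntegratedSecurity = true,
    Encrypt = true,
    TrustServerCertificate = false
};

using (var conn = new SqlConnection(builder.ConnectionString))
{
    conn.Open();
    Console.WriteLine("Connected over an encrypted channel.");
}
```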
If you save the information inside your application, it can be found; applications can be reverse engineered. You should consider using an API to handle the data reading.

Web service for Laravel

Assume that I have a third-party database application with an SDK that can be used to retrieve data out of the database as XML.
On the other side, I have developed a website using the Laravel PHP framework. The website is supposed to display data from the application's database.
In regard to the above, I have the following questions:
As far as I understand, I can either store the requested data in my website's database or just show it without storing. Which technique do you suggest?
How do I achieve the XML data transfer from the database server to the website?
Given that I have C# development experience, I assume that I have to develop some web service that would run on the database server, retrieve the required data, and send it to my website. So the web service has to receive requests from my Laravel website, retrieve data from the database server accordingly, and pass the XML response to my website, which would finally display it. Am I on the right way? If so, could you please guide me on how to code and bind these parts?
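To make that idea concrete, the web service half might be sketched along these lines in C# (an ASP.NET Core minimal API; `ThirdPartySdk.GetRecordsAsXml` and the route are hypothetical stand-ins for whatever the vendor SDK actually exposes):

```csharp
// Minimal API that fronts the third-party database and hands its XML
// straight through to the Laravel site over HTTP.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Laravel calls e.g. GET https://dbserver/api/records
app.MapGet("/api/records", () =>
{
    string xml = ThirdPartySdk.GetRecordsAsXml(); // hypothetical SDK call
    return Results.Content(xml, "application/xml");
});

app.Run();
```

On the Laravel side, the site would then just make an HTTP request to that endpoint and parse the XML.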
Thank you in advance.
I have to agree with @Serge in the comments - there are many ways to do this because it is a very broad question.
My answer was mostly going to deal with how regularly the third-party database was going to be updated, but judging from your comments I'm assuming it will be fairly often? In which case, I would likely connect directly to the third-party database from your Laravel app using the Firebird driver found here: https://github.com/jacquestvanzuydam/laravel-firebird (please note, I have never used this so I cannot comment on its quality) instead of writing a C# web service. I don't know much about Firebird itself, but you will likely want to connect using an SSH tunnel or VPN for security reasons.
Then I would either store the data in MySQL if you know it isn't likely to change very often (in this case you would use a Laravel command, run on a schedule, to pull data out of Firebird every [X] days/hours/minutes depending on the data) or, if the data is likely to change on each potential web request, use some form of caching system (Redis, Memcached, file cache, etc.) to speed up the web requests.
Sorry if that isn't particularly helpful - if you can provide more information maybe I can help you out further :)
Good luck!

How to put a database in the project? C#

I have a local database in SQL Server 2014 with four tables, about 1.5 GB in volume. The essence of the program is to search the database for records matching user-defined criteria. The program is written and it works fine. Now it also has to work for users who have not installed the server. How do I implement this? One idea was to serialize the data, but as I understand it, I would then have to deserialize all of the data before searching for the right records.
As the comments before me already say, I think you have two options.
Either ship the database with the client (using SQL Express or another similar solution). That should work fine and will work without a connection to a centralized server, but the size of your client package will be quite big. Any changes you make will also only be local, but it seems you only read from the database on the client?
But if I understand correctly, you would install a SQL Server instance for each client, since you mention "users who have not installed the server"? Then you already have the problem of a lot of data needing to be sent out to each client, as well as the problem of updating all the databases when the data needs to be refreshed.
The other option is to allow access to a central database from the client. This can work in several ways: if all your users are in your domain, you can handle authentication based on their domain accounts and skip the authentication part. Then you would only need to send out the client, and skip installing a big server and all the data.
If they are not on the domain but still on your network, you could add a login to your application to control access to the database, or, if you trust all your users, you could add a read-only account and just hardcode the login for that account.
If you want to access the data outside of a trusted environment, you should of course add a separate login for each user, and it might even be a good idea to put an API in front of the database that handles requests from the client and then performs the search against the database in a controlled manner.
I would personally go with a centralized database, to skip all the work of setting up new users and to have a single point to update when the data needs a refresh, but of course it all depends on where your users are.
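To make the domain-authentication option concrete, here is a minimal sketch (server, database, table, and column names are assumptions; with Integrated Security the user's own Windows login is used, so no credentials ship inside the client):

```csharp
using System;
using System.Data.SqlClient;

// Connect to the centralized database with the user's domain account and
// run a parameterized search, as the program in the question does.
var connectionString =
    "Data Source=db.corp.local;Initial Catalog=SearchDb;" +
    "Integrated Security=True;Encrypt=True";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT TOP 50 Id, Name FROM dbo.Records WHERE Name LIKE @q", conn))
{
    cmd.Parameters.AddWithValue("@q", "%user criteria%");
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
    }
}
```

On the server, the matching SQL logins (or a domain group) would be granted read-only access, which covers the "skip the authentication part" case above.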

What is the best way to port the data back and forth from our client’s local database to/from our webserver database?

My question is: What is the best way to port the data back and forth from our client’s local database to/from our webserver database?
An explanation of the problem:
Our clients run our software against their local copy of our SQL Server 2008 R2 database. We routinely (once a day, middle of the night) need to combine fields stored in multiple tables for each of these clients (i.e. a view) and send that information over the internet to a SQL Server 2008 R2 database which we will host on our webserver. Each of our clients may have tens-of-thousands of records which we will need to port to our webserver database.
This information will allow our client’s customers to make payments and place orders. We will store these transactions in one or more tables in our webserver database. At regular intervals we need to push these transaction records back to our client’s local database. The client will then process these transactions and update the records which we push up to our webserver database at night.
I am a C# programmer in a very small shop. My first thought was to write a Windows service to control the porting of data back and forth. This would require installing the service on each of our clients' servers. I am concerned with our ability to easily maintain and extend that service. In particular, when we decide to port more data back and forth, this would require updating the service at each client site. Given the size of our shop, that would become a serious challenge.
I would prefer to manage this process through SQL Server, preferably at the SQL Server instance on our webserver. However, we have no one with extensive knowledge of SQL Server. (I am the SQL Server guru here, and I know just enough to be dangerous.) Some of our clients are very small companies and only have SQL Server Express installed on their server. We did some experiments with replication, but never found a way to make it work reliably over the internet, especially with SQL Server Express.
I have read some about SSIS, Linked Servers, Service Broker, and 3rd party tools such as RedGate’s SQL Compare. I am uncertain which, if any, of these options would best suit our needs and have not found clear examples showing how to make use of each.
Any guidance on this issue would be very much appreciated. It would be particularly helpful if you can point me to relevant examples showing how to do what I have described above.
Just quickly:
one option is to use the Microsoft Sync Framework - as far as I can see, it does exactly what you need, though I'm not sure of the specifics in your case.
Hope this helps.
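As a rough illustration only (this goes by the Sync Framework 2.1 API; both ends must be provisioned first with SqlSyncScopeProvisioning, which is omitted here, and the connection strings and scope name are placeholders):

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;

// One sync pass between a client's local database and the webserver database.
var clientConn = new SqlConnection("client connection string here");
var serverConn = new SqlConnection("webserver connection string here");

var orchestrator = new SyncOrchestrator
{
    LocalProvider = new SqlSyncProvider("TransactionsScope", clientConn),
    RemoteProvider = new SqlSyncProvider("TransactionsScope", serverConn),
    // Push local transactions up and pull processed updates back down.
    Direction = SyncDirectionOrder.UploadAndDownload
};

SyncOperationStatistics stats = orchestrator.Synchronize();
Console.WriteLine(
    $"Uploaded {stats.UploadChangesTotal}, downloaded {stats.DownloadChangesTotal} changes.");
```

A scheduled task running something like this per client would replace the hand-rolled Windows service, and the framework also works against SQL Server Express.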

C# mysql connection practices

If a C# application connects to a MySQL server from a client machine, how do I store the MySQL username/password for the connection? If I have it in a config file or embedded in the source, it can be found by reverse engineering. It is not possible to give all users a MySQL password.
Also, I have a login for the application. How do I enforce that the user goes through the login process and does not just reverse engineer and comment out the C# code verifying the login?
Is there any way to manage these connections between MySQL and a client-side application, or must there be a third program on the server side interacting with the database locally in order to be secure?
Perhaps you can have a two-stage system: have a SQL account whose only permission is to execute a stored procedure that takes the user's username and password and returns the credentials to use for the real account. When the user logs in, you connect using the restricted account, get the credentials for the real account, and then do your work using that account. You can also change the SQL password fairly frequently. If you suspect a user of foul play, have the procedure return them a different set of credentials, and track those credentials.
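A hedged sketch of that two-stage flow with Connector/NET - the bootstrap account, the get_app_credentials procedure, and all names here are hypothetical:

```csharp
using System;
using System.Data;
using MySql.Data.MySqlClient;

public static class TwoStageLogin
{
    // Stage 1: connect with the locked-down account, which can only EXECUTE
    // the credential procedure. Stage 2: reconnect as the real account.
    public static MySqlConnection Connect(string username, string password)
    {
        string realUser, realPass;

        using (var bootstrap = new MySqlConnection(
            "Server=db.example.com;Database=app;Uid=login_only;Pwd=...;")) // placeholders
        using (var cmd = new MySqlCommand("get_app_credentials", bootstrap))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@p_user", username);
            cmd.Parameters.AddWithValue("@p_pass", password);
            bootstrap.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    throw new UnauthorizedAccessException("Login failed.");
                realUser = reader.GetString(0);
                realPass = reader.GetString(1);
            }
        }

        var conn = new MySqlConnection(
            $"Server=db.example.com;Database=app;Uid={realUser};Pwd={realPass};");
        conn.Open();
        return conn;
    }
}
```

As the next answer points out, the real credentials still pass through the client's memory, so this raises the bar rather than closing the hole.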
For WinForms clients that connect directly to the DB, this is an age-old (10 years or so?) question that may never be solved.
The problem is that no matter how you encrypt or obfuscate the connection string there will always be a point in time at which the string will be represented as plain text in the client computer's memory and therefore hackable.
You have been recommended options by other SOers; I just thought I'd point out what you're trying to work around.
Here is the problem: you are trusting the end user with the binaries that will run MySQL queries. This means, beyond any shadow of a doubt, that a clever user could "take control" and run queries directly.
There are things you can do to improve the situation. It sounds like you are on a LAN. Why can't you give each user their own database user? That means that authentication is (a) taken care of for you, and (b) you can use "real" MySQL permissions to limit what harm they can do. Also, you could use stored procedures and give them access only to the procs, really limiting what they can do.
You could also consider rewriting it as a web application, where you process everything on the server out of their reach.
However, is there really a problem here, or are you just being theoretical?
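For the per-user account idea, a minimal sketch - the point is just that the connection is built from whatever the user types at login, so no shared secret ships in the binary (server and database names are assumptions):

```csharp
using MySql.Data.MySqlClient;

public static class PerUserConnection
{
    // MySQL's own GRANTs (e.g. EXECUTE on specific procedures only) then
    // decide what each authenticated user is allowed to do.
    public static MySqlConnection Open(string user, string password)
    {
        var builder = new MySqlConnectionStringBuilder
        {
            Server = "db.corp.local",       // placeholder
            Database = "app",               // placeholder
            UserID = user,
            Password = password,
            SslMode = MySqlSslMode.Required // encrypt traffic even on the LAN
        };

        var conn = new MySqlConnection(builder.ConnectionString);
        conn.Open();
        return conn;
    }
}
```

Even then, a determined user can still run ad-hoc queries with their own credentials, which is exactly why the answer above limits what those credentials are allowed to do.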
