I have been trying to connect to an MS Access database on a network drive from a Windows application.
My connection string is:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\dtinaurdsna02\\LE-IN\\Data_Analysis\\Quality_Rating_Tool.accdb
It works on my system, but it throws an error on other systems (which do have access to this network drive).
A Microsoft Access database is designed to work optimally for one user at a time. You can get around some of these limitations by splitting the database and linking to a shared data file.
Excerpt from Access Database Best Practices
Avoid Multi-User Collisions: If you store all your objects in one file,
including your tables, Access will usually have difficulty when
multiple users attempt to open the same database file. While Access
does have record-level locking, you may still receive errors that
another user is currently in the database and you cannot make changes.
To avoid this, you provide each end user with their own front-end
database file, each linked to the same Access Data File.
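As an aside on the connection string itself: the ACE provider expects single backslashes inside the UNC path; doubled backslashes only belong in a regular (non-verbatim) C# string literal. A minimal sketch of building and opening that connection from C#, using the path from the question and assuming the ACE provider is installed on every machine that runs the app:

using System.Data.OleDb;

// In the string the provider actually receives, the path should read
// \\dtinaurdsna02\LE-IN\Data_Analysis\Quality_Rating_Tool.accdb
// A verbatim string (@"...") avoids having to double every backslash in C#.
string connectionString =
    @"Provider=Microsoft.ACE.OLEDB.12.0;" +
    @"Data Source=\\dtinaurdsna02\LE-IN\Data_Analysis\Quality_Rating_Tool.accdb";

using (var connection = new OleDbConnection(connectionString))
{
    connection.Open();   // throws if the ACE provider is missing or the share is unreachable
}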
I have developed an app that more than 2k users are going to use. The app connects to a database which contains some data.
I have some questions:
1. Is it OK to use a direct MySQL connection in the app, instead of an API, for just reading data?
2. Is there any way someone could find my server's information (address, password, etc.) from my application?
The app is WPF.
Generally speaking (and as with all generalities there are all kinds of exceptions here, in both directions) it's okay to connect directly to the database if one of these two conditions is met:
The app and the database are on the same computer
or
The app and the database are on different computers, but within the same corporate network and traffic between the app and the database is adequately protected.
and if one of these conditions is also met:
The end user owns the app and doesn't share data with other users (if they break it, that's their own problem and no one else's)
or
You issue separate accounts with only the necessary privileges to each user (the user owns the credential)
or
The machines where the application is deployed are controlled by the business, where you can securely deploy the application (and the account credentials it uses to connect to the database) in such a way that end users are not able to retrieve the account credentials directly. (The business owns everything).
It is not generally okay to connect directly to a database over the public Internet, or within a local network where traffic to the database is not adequately protected, and it is not generally okay to let end users have direct access to the database separate from the application (and if a user has ownership of their machine, they will be able to get that access).
I also need to expound on what I mean by "adequately protected". This involves a few things:
A good firewall between the clients and the database. In some smaller environments, the firewall on the OS hosting the database itself may be enough.
Measures to prevent MitM attacks on data packets to and from the DB. For traditional corporate networks, this usually means 802.1x is running even on the wired network, and WiFi access is similarly protected (a pre-shared-key WiFi network like you use at home is not good enough, because anyone who can get the key can decrypt your traffic). Alternatively, you can implement encryption that runs from the client all the way into such a protected network. This is what many corporate VPNs are for (a public VPN service doesn't accomplish this for you). You may also be able to encrypt the actual database connection traffic. I know how to do this for SQL Server, for example, though I'm less clear on what direct support MySQL has in this area.
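For the SQL Server case mentioned above, encrypting the connection itself is mostly a connection-string setting; a minimal sketch (the server and database names are placeholders):

using System.Data.SqlClient;

// Encrypt=True requests TLS on the wire; keeping TrustServerCertificate=False
// means the client actually validates the server's certificate.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "db-server.corp.local",   // placeholder
    InitialCatalog = "AppDb",              // placeholder
    IntegratedSecurity = true,
    Encrypt = true,
    TrustServerCertificate = false
};

using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open();
}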
If you save the information inside your application, it can be found; applications can be reverse engineered. You should consider using an API to handle the data reading.
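To give a feel for the API route for read-only data, here is a rough sketch using an ASP.NET Core minimal API and the MySQL ADO.NET client; the endpoint, table and credentials are all made up:

using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using MySql.Data.MySqlClient;

var app = WebApplication.CreateBuilder(args).Build();

// The WPF client calls this endpoint over HTTPS; the database credentials
// stay on the server and never ship inside the desktop application.
app.MapGet("/api/items", () =>
{
    var items = new List<string>();
    using var connection = new MySqlConnection(
        "Server=localhost;Database=appdb;Uid=api_reader;Pwd=secret;");
    connection.Open();
    using var command = new MySqlCommand("SELECT name FROM items", connection);
    using var reader = command.ExecuteReader();
    while (reader.Read())
        items.Add(reader.GetString(0));
    return items;   // serialized to JSON automatically
});

app.Run();

The client then only needs an HttpClient call, so no server address or password ever lives inside the distributed executable.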
So, for reasons outside my control, a few other developers and I are required to write a C# Web API that connects to an Access database, so that the data can be returned as JSON for various dashboarding and reporting needs.
Yes, I know this is an odd setup. Basically a client with a legacy Access DB, and they cannot change. Various alternative options have been put forward, but the above is the way it needs to be done unfortunately.
Anyway, my issue is that Access locks the database when opening it.
Scenario:
The client is viewing the new dashboard etc., which receives its data from the Web API, which in turn connects to the Access database.
All works fine until the Access DB file is opened in Access to update some values. This kills the connection via the Web API with the following error:
This error only exists whilst the file is open in Access. Once the database is exited, normal service resumes.
Now I've done loads of reading around the interwebs about Access creating locks for users etc., and I know it is probably very bad practice to have multiple users accessing an Access DB at any one time.
Am I right in thinking that this is just a limitation of Access, or is there some way for me to stop the web service dying when the Access DB is open in Access itself? I've looked at the various options around users and locking, but I cannot seem to find a suitable workaround.
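For context, the Web API side is nothing exotic; stripped down, the read path looks roughly like this (the path, table and query here are simplified placeholders, not the real ones):

using System.Collections.Generic;
using System.Data.OleDb;
using System.Web.Http;

public class RatingsController : ApiController
{
    // Simplified: the real code takes the path and query from configuration.
    private const string ConnectionString =
        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Client.accdb";

    public IEnumerable<Dictionary<string, object>> Get()
    {
        var rows = new List<Dictionary<string, object>>();
        using (var connection = new OleDbConnection(ConnectionString))
        using (var command = new OleDbCommand("SELECT * FROM SomeTable", connection))
        {
            connection.Open();   // this is the call that fails while the file is open in Access
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var row = new Dictionary<string, object>();
                    for (int i = 0; i < reader.FieldCount; i++)
                        row[reader.GetName(i)] = reader.GetValue(i);
                    rows.Add(row);
                }
            }
        }
        return rows;   // serialized to JSON by Web API
    }
}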
Thanks in advance.
Kindest Regards
I have a C# program that connects to a local MS Access file using:
dbConnection = new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + sPath);
dbConnection.Open();
where sPath is the local file path.
The program performs SELECT, INSERT and UPDATE SQL operations. I want to deploy the C# program to another computer, which is on a WiFi LAN, so it can access the same database. I assume I only have to provide the correct path, or is anything else needed? Will the database allow both read and write operations?
RE: sharing the database over the network
Yes, essentially all you really need to do is put the .accdb (or .mdb) file in a shared folder and tell your C# program where to find it.
Notes:
Each machine needs to have its own copy of the Access Database Engine installed. If Access is not already installed on the machine, you can download the Access Database Engine installer from Microsoft.
For multiple concurrent users to access the database all users must have sufficient privileges to create and modify the associated .laccdb (or .ldb) lock file in the folder where the database file resides. That is, all users need more than just read access to the folder. (That is a common mistake people make when they first deploy an Access back-end for multiple concurrent users.)
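Putting those notes together, a sketch of what the change on the C# side might look like, with the path kept in App.config so each install can point at the share (the server, share and key names below are made up):

// App.config on each machine, so the path can change without recompiling:
//   <connectionStrings>
//     <add name="AccessDb"
//          connectionString="Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\fileserver\Shared\MyDatabase.accdb" />
//   </connectionStrings>

using System.Configuration;   // add a reference to System.Configuration
using System.Data.OleDb;

string connectionString =
    ConfigurationManager.ConnectionStrings["AccessDb"].ConnectionString;

using (var dbConnection = new OleDbConnection(connectionString))
{
    dbConnection.Open();
    // SELECT / INSERT / UPDATE all work as before, provided every user has
    // read *and* write permission on the folder (needed for the .laccdb lock file).
}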
RE: WiFi
The ACE/Jet database engine relies on a solid network connection to the database file, so any intermittent network glitches resulting from a weak WiFi signal are going to cause problems. In the early days of WiFi that was a common cause of errors in many database applications (including, but not limited to, Access databases). WiFi technology has improved over the years, so this is less of a problem than it used to be, but do be aware that flaky WiFi can cause errors (and even corrupt the database file if you are unlucky).
I have a problem with new hosting. So far I have been using a Fluent NHibernate approach to access data in a remote database. Due to certain circumstances I had to change to another host which doesn't allow external database access. End users use internet connections without a static IP (it is public for most of them, but it changes every 24-48 h). What can I do in my situation to keep the changes to my application to a minimum?
Data transfer goes both ways.
My ideas:
Use the new host's FTP to upload files for processing with PHP. Lots of work.
Design some kind of web-access service. Same as above.
Out of the above comes a second question:
How is access to the database provided in big systems where one can't limit connections to only known and safe sources?
A DMZ?
If you do not have external access to a database (which is pretty common, if not the default), you could use a VPN or SSH tunnel to connect to the external server and access the database as if it were a local one.
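As a rough sketch of the SSH-tunnel route, assuming MySQL on the new host and an SSH account on that server (host name, ports and credentials below are made up):

// 1. From the client machine, forward a local port to the database port on the host:
//
//      ssh -N -L 3307:127.0.0.1:3306 deployuser@new-host.example.com
//
// 2. Point the existing Fluent NHibernate / MySQL connection string at the local
//    end of the tunnel; the rest of the data-access code stays unchanged
//    (e.g. passed to MySQLConfiguration.Standard.ConnectionString(...)):
string connectionString =
    "Server=127.0.0.1;Port=3307;Database=appdb;Uid=appuser;Pwd=secret;";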
I am trying to create a document manager for my winforms application. It is not web-based.
I would like to be able to allow users to "attach" documents to various entities (personnel, companies, work orders, tasks, batch parts etc) in my application.
After lots of research I have made the decision to use the file system to store the files instead of a blob in SQL. I will set up a folder to store all the files, but I will store the document information (file path, uploaded by, changed by, revision, etc.) in a parent-child relationship with the entity in a SQL database.
I only want users to be able to work with the documents through the application, to prevent the files and database records getting out of sync. I somehow need to protect the document folder from normal users but at the same time allow the application to work with it. My original thought was to set the application up with the only username and password that has access to the folder and use impersonation to log in to the folder and work with the files. From feedback in a recent thread I started, I now believe this was not a good idea, and working with impersonation has been a headache.
I also thought about using a web service, but some of our clients just run the application on their laptops with no Windows server. Most are using Windows Server or Citrix/Windows Server.
What would be the best way to set this up so that only the application handles the documents?
I know you said you read about blobs, but are you aware of the FILESTREAM option in SQL Server 2008 and onwards? Basically, rather than saving blobs into your database, which isn't always a good idea, you can instead save the blobs to the NTFS file system using transactional NTFS. To me this sounds like exactly what you are trying to achieve.
All the file access security would be handled through SQL Server (as it would be the only thing needing access to the folder), and you don't need to write your own logic for adding and removing files from the file system. To remove a file from the file system you just delete the related record in the SQL Server table, and it handles removing the file from disk.
See:
http://technet.microsoft.com/en-us/library/bb933993.aspx
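From C# a FILESTREAM column can be treated like an ordinary varbinary(max) column, so the application code stays simple. A sketch, assuming a Documents table that has already been set up for FILESTREAM (ROWGUIDCOL column, FILESTREAM filegroup); the table and column names are made up:

using System.Data.SqlClient;
using System.IO;

// Inserts a document; SQL Server writes the bytes to its FILESTREAM
// container on NTFS and keeps the row and the file transactionally in sync.
void AttachDocument(string connectionString, int entityId, string filePath)
{
    byte[] contents = File.ReadAllBytes(filePath);

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO dbo.Documents (EntityId, FileName, Contents) " +
        "VALUES (@entityId, @fileName, @contents)", connection))
    {
        command.Parameters.AddWithValue("@entityId", entityId);
        command.Parameters.AddWithValue("@fileName", Path.GetFileName(filePath));
        command.Parameters.AddWithValue("@contents", contents);

        connection.Open();
        command.ExecuteNonQuery();
    }
}
// Deleting the row later removes the file from the FILESTREAM store as well.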
Option 1 (Easy): Security through Obscurity
Give everyone read (and write as appropriate) access to your document directories. Save your document 'path' as the full URI (\\servername\dir1\dir2\dir3\file.ext) so that your users can access the files, but they're not immediately available if someone goes wandering through their mapped drives.
Option 2 (Harder): Serve the File from SQL Server
You can use either a CLR function or SQLDMO to read the file from disk, present it as a varbinary field and reconstruct it on the client side. The upside is that your users will see a copy, not the real thing; that makes viewing safer, and editing and saving harder.
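Roughly, the client side of option 2 could look like this in C# (dbo.ReadFileAsVarBinary is a hypothetical CLR function name, not something that ships with SQL Server):

using System.Data.SqlClient;
using System.IO;

// Asks SQL Server for the file contents (served up by a CLR function on the
// server) and reconstructs a local copy for viewing.
string CopyDocumentToTemp(string connectionString, string storedPath, string fileName)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT dbo.ReadFileAsVarBinary(@path)", connection))
    {
        command.Parameters.AddWithValue("@path", storedPath);
        connection.Open();

        byte[] contents = (byte[])command.ExecuteScalar();

        string tempPath = Path.Combine(Path.GetTempPath(), fileName);
        File.WriteAllBytes(tempPath, contents);   // the user sees a copy, not the original
        return tempPath;
    }
}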
Enjoy! ;-)
I'd go with these options, in no particular order.
Create a folder on the server that's not accessible to users. Have a web service running on the server (either using IIS or a standalone WCF app) that has methods to upload and download files. Your web service should manage the directory where the files are being stored. The SQL database should have all the necessary metadata to find the documents. In this manner, only your app can get access to these files, so the users can only see the docs via the app.
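A rough sketch of what that service's surface might look like, in ASP.NET Web API style (the routes, storage folder and metadata lookup are all placeholders):

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class DocumentsController : ApiController
{
    // Folder readable/writable only by the service account, not by end users.
    private const string StorageRoot = @"D:\AppDocuments";

    [HttpGet]
    public HttpResponseMessage Download(int id)
    {
        // Look up the real path/name from the SQL metadata table (omitted here).
        string path = Path.Combine(StorageRoot, id + ".dat");

        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(File.OpenRead(path))
        };
        response.Content.Headers.ContentType =
            new MediaTypeHeaderValue("application/octet-stream");
        return response;
    }

    [HttpPost]
    public IHttpActionResult Upload(int id)
    {
        // Body contains the raw file bytes; the metadata row goes into SQL separately.
        byte[] bytes = Request.Content.ReadAsByteArrayAsync().Result;
        File.WriteAllBytes(Path.Combine(StorageRoot, id + ".dat"), bytes);
        return Ok();
    }
}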
I can see that you chose to store the documents on the file system. I wrote a similar system (e.g. attachments to customers/orders/sales people/etc.), except that I am storing them in SQL Server. It actually works pretty well. I initially worried that so much data was going to slow down the database, but that turned out not to be the case. It's working great. The only advice I can give if you take this route is to create a separate database for all your attachments. Why? Because if you want to get a copy of the RDBMS for your local testing, you do not want to be copying a 300GB database that's made up of 1GB of actual data and 299GB of attachments.
You mentioned that some of your users will be carrying laptops. In that case, they might not be connected to the LAN. If that is the case, I'd consider storing the files (and maybe metadata itself) in the cloud (EC2, Azure, Rackspace, etc...).