I am not sure if this has been asked before (I did google it).
I have written a web service that will be hosted with a SQLite database.
Many clients will be performing CRUD operations on it. I planned to use SQLite just for simplicity.
Now that I have written most of my methods, it occurred to me that there is no separate DBMS server process with SQLite (I suppose), so there may be conflicts and data-inconsistency issues if two or more client applications write through my application at the same time.
Does SQLite support managing operations from multiple connections, or do I have to switch to SQL Server 2008?
SQLite "supports managing of operation for multiple connections" in the sense that it won't blow up or cause data corruption. It is not, however, designed to be as efficient as MS-SQL Server is with a high load of concurrent operations. So, what it boils down to is how many is "Many clients". If you are talking about tens of simultaneous requests, you will be fine with SQLite. If you are talking about hundreds of simultaneous requests, you will probably need to migrate to MS-SQL Server. Note that in order for two requests to be simultaneous the two clients must press the 'Submit' button at roughly the same few-millisecond time window. So it takes hundreds of simultaneously connected clients to get dozens of simultaneous requests.
The short answer is yes. Take a look at this SQLite FAQ entry. The longer answer is a bit more complicated... Would you want to use SQLite in an architecture that is meant to handle heavy transaction loads? Probably not. If you do want to move in that direction I would suggest starting with SQL Server Express. If you need to upgrade to a full-blown SQL Server it won't be an issue at all...
SQLite FAQ excerpt:
(5) Can multiple applications or multiple instances of the same application access a single database file at the same time?
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
SQLite uses reader/writer locks to control access to the database. [...]
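To make this concrete, here is a minimal C# sketch (assuming the Microsoft.Data.Sqlite package; System.Data.SQLite behaves similarly) of two connections sharing one database file, with a busy timeout set so a blocked writer retries instead of failing immediately:

    using Microsoft.Data.Sqlite;

    class BusyTimeoutDemo
    {
        static void Main()
        {
            // Two independent connections to the same file, as two client
            // requests hitting the web service would have.
            using var conn1 = new SqliteConnection("Data Source=app.db");
            using var conn2 = new SqliteConnection("Data Source=app.db");
            conn1.Open();
            conn2.Open();

            // Ask SQLite to retry for up to 5 seconds when the file is locked,
            // instead of failing immediately with "database is locked".
            foreach (var conn in new[] { conn1, conn2 })
            {
                var pragma = conn.CreateCommand();
                pragma.CommandText = "PRAGMA busy_timeout = 5000;";
                pragma.ExecuteNonQuery();
            }

            // Writes from the two connections are serialized by SQLite's
            // file lock; SELECTs can run concurrently.
            var cmd = conn1.CreateCommand();
            cmd.CommandText = "CREATE TABLE IF NOT EXISTS log(msg TEXT)";
            cmd.ExecuteNonQuery();
        }
    }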
Yes, SQLite supports concurrency and locking.
I have a C# web app. I have multiple databases containing the same data, so I can use a round-robin method to distribute the database calls.
I plan to read in each connection string, and iterate through each DB and return the data for the first call that passes.
I would like to record the last database that was used, so I can try the next database in the list for the next call that comes in.
A database seems overkill for this, so could I use a static List to track this and lock the read and update of the list?
In terms of doing a round robin approach, you can definitely use a List with a Lock, but I make some general comments below which might be helpful.
You are trying to implement a network load balancer here, and you will face a couple of problems. First, IIS will happily spin up multiple threads of calls to your website if it receives multiple requests before the first one has completed. Secondly, if your website is in a web garden (multiple IIS worker processes on the same computer), running as a web farm (multiple instances of the OS), or on Azure or some other cloud platform, then those multiple instances of your web call might not even be on the same machine (or VM). So you need to be clear that it will be almost impossible to generate a true round-robin series of database hits on a properly scalable website.
I'm not sure creating a new synchronisation point between all your web threads is a good idea for scalability. Round Robin is also not the best use of resources - if you want your website to run as fast as possible using as few resources as possible (generally why a NLB system is put in place) then use a Pool based approach to leasing an open database connection rather than iterating around a set of open database connections. The calling code gets handed the next connection which has not been released back into the pool.
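That said, if you do go with the static-List-plus-lock approach from the question, a minimal sketch might look like the following (the type name, field names, and connection strings are placeholders):

    using System.Collections.Generic;

    public static class ConnectionRoundRobin
    {
        private static readonly object _sync = new object();
        private static readonly List<string> _connectionStrings = new List<string>
        {
            "Server=db1;...", "Server=db2;...", "Server=db3;..." // placeholders
        };
        private static int _next; // index of the next database to start from

        // Returns the connection strings starting at the round-robin position,
        // so the caller can fall through to the next DB if the first one fails.
        public static IReadOnlyList<string> NextOrder()
        {
            lock (_sync)
            {
                var ordered = new List<string>(_connectionStrings.Count);
                for (int i = 0; i < _connectionStrings.Count; i++)
                    ordered.Add(_connectionStrings[(_next + i) % _connectionStrings.Count]);
                _next = (_next + 1) % _connectionStrings.Count;
                return ordered;
            }
        }
    }

Note this only rotates within one process; as described above, a web garden, web farm, or cloud deployment will have one independent counter per instance.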
I know this may be considered a generic question, but I honestly don't have the first clue even where to start. I've tried searching, and have not found any results that fit the application.
I'm trying to develop a front-end for an Access 2010 database that will allow users to add/modify records. Several of the users use a VPN to connect to the DB, and the current model we are using of an Access 2010 Navigation Form is horrendously slow, regardless of connection speed. I have verified that we can reach the DB over VPN with no privilege issues or security concerns, but even through the OleDb engine there is significant latency on the data access.
What I would like to do is to be able to have updates sent/received in a background process, say every 5-10 minutes, so that the end user will be able to update it as they need to, and have the changes written without the user really being aware of the latency. Would simply using a background worker suffice to do this, or is there a better way to send "packets" of updates over the connection?
Again, I know this is not code-specific exactly, but I've never worked with C# and DB updates before, so I'm kind of learning as I go. Nearly all the results I've found have dealt with engines other than OleDb, such as SQL, but we are locked into using Access (an accdb file) as we don't have any other database engines available to us. I appreciate any and all help, in whatever form it comes in.
This is a new enough project that so far the only code that I've developed for this has consisted of initializing the connection to the DB to verify that it's even possible.
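For reference, that connection test amounts to roughly the following (the UNC path is a placeholder; ACE is the OLE DB provider for .accdb files):

    using System;
    using System.Data.OleDb;

    class ConnectionTest
    {
        static void Main()
        {
            var connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;" +
                          @"Data Source=\\server\share\MyDatabase.accdb;";
            using (var conn = new OleDbConnection(connStr))
            {
                conn.Open(); // throws if the VPN path or provider is unavailable
                Console.WriteLine("Connected: " + conn.State);
            }
        }
    }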
MS Access is not designed for concurrency. When you open an Access db on a remote machine you also download the entire db into the client machine's memory, which is why the bigger the file gets, the slower it becomes. If you want it to handle concurrency, link your MS Access application to MS SQL Express instead. That way it is much faster and better.
Besides, it's free, and it can be scaled up if you need to.
I have an application that once started will get some initial data from my database and after that some functions may update or insert data to it.
Since my database is not on the same computer as the one running the application, and I would like to be able to freely move the application server around, I am looking for a more flexible way to insert/update/query data as needed.
I was thinking of using a website API on a separate thread in my application, with some kind of list where this thread will try to push the updates every X minutes; once a given entry is updated it will be removed from the list.
This way, instead of being held up by the database queries and such, the application would run freely, queuing whatever has to be updated/inserted.
The main point here is that I can run the functions without worrying about connectivity issues to the database, or related issues, since all the changes are queued to be applied to it.
Is this approach OK? Bad? Are there better recommendations for this scenario?
On "can access DB through some web server instead of talking directly to DB server": yes this is very common and recommended approach. It is much easier to limit set of operations exposed through custom API (web services, REST services, ...) than restrict direct communication with DB.
On "sync on separate thread..." - you need to figure out what are requirements of the synchronization. Delayed sync may be ok if you don't need to know latest data and not care if updates from client are commited to storage immediately.
I have a C# console application which does some processing and then writes to the database. I have it deployed multiple times on a server with different config settings to do slightly different things. However, they all have to write to the same database (and may need to insert the same data into to the same table if it doesn't already exist) using Linq to Entities.
If I were using threads I could lock the method, or with stored procedures I could queue up the writes to avoid clashes, but is there any way to keep these as separate applications and prevent them both trying to write the same thing to the database at the same time?
I'm getting an exception every so often when there is a conflict.
Edit:
I'm not necessarily trying to debug why I'm getting the exception; I'm looking more for suggestions of a 'best practice' way of doing this, e.g. should this be handled at the console-app level, the L2E level, or the database level?
Why can't you start a transaction with a high isolation level, so that the lock is active on the server side?
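For example, with Linq to Entities you could wrap the check-then-insert in a System.Transactions scope; a minimal sketch (MyEntities is a placeholder for your generated context):

    using System.Transactions;

    var options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.Serializable // strongest server-side locking
    };

    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var context = new MyEntities()) // placeholder: your generated L2E context
    {
        // ... check-then-insert logic here; under a serializable transaction
        // two instances cannot both insert the same row ...
        context.SaveChanges();
        scope.Complete(); // commit; disposing without Complete rolls back
    }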
You may use locks (pessimistic concurrency model) or timestamps (optimistic concurrency model) to deal with concurrency issues.
It is a very wide topic, so I would suggest you start by googling for database concurrency.
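As a sketch of the optimistic (timestamp) variant with Linq to Entities, assuming the table has a rowversion/timestamp column mapped as a concurrency token (the entity and property names here are hypothetical):

    using System.Data; // OptimisticConcurrencyException (Linq to Entities / EF 4)
    using System.Linq;

    try
    {
        using (var context = new MyEntities()) // placeholder context name
        {
            var row = context.Widgets.First(w => w.Name == "foo"); // hypothetical entity
            row.Quantity += 1;
            context.SaveChanges(); // EF compares the timestamp column; throws if stale
        }
    }
    catch (OptimisticConcurrencyException)
    {
        // another process updated the row first: reload and retry, or report a conflict
    }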
Question: I currently store ASP.net application data in XML files.
Now the problem is that I have asynchronous operations, which means I run into the problem of simultaneous write access on an XML file...
Now I'm considering moving to an embedded database to solve the issue.
I'm currently considering SQLite and embeddable Firebird.
I'm not sure, however, if SQLite or Firebird can handle multiple concurrent write accesses.
And I certainly don't want the same problem again.
Anybody know?
SQLite is certainly better known, but which one is better: SQLite or Firebird? I tend to say Firebird, but I don't really know.
No MS-Access or MS-SQL-Express recommendations please, I'm a sane person.
I would choose Firebird for many reasons, and for this one too:
Although it is transactional, SQLite does not support concurrent transactions, so if your embedded application needs two or more connections, they must be serialized. An embedded Firebird database is simple to upgrade to a fully shared database - just change the shared library.
Maybe you can also check this.
SQLite can be configured to gracefully handle simultaneous writes in most situations. What happens is that when one thread or process begins a write to the db, the file is locked. When a second write is attempted and encounters the lock, it backs off for a short period before trying again, until it succeeds or times out. The timeout is configurable, but otherwise all this happens without the application code having to do anything special except enabling the option, like this:
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
All this works very well and without any difficulty, except in two circumstances:
If an application does a great many writes, say a thousand inserts, all in one transaction, then the database will be locked up for a significant period and can cause problems for any other application attempting to write. The solution is to break up such large writes into separate transactions, so other applications can get access to the database (a sketch of this follows after the next point).
If the database is shared by different processes running on different machines, sharing a network-mounted disk. Many operating systems have bugs in network-mounted disks that make file locking unreliable. There is no answer to this. If you need to share a db on a network-mounted disk, you need another database engine such as MySQL.
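To illustrate the chunking idea from the first point, here is a rough C# sketch (assuming the Microsoft.Data.Sqlite package and a hypothetical items table; any SQLite binding with transactions works the same way):

    using System;
    using System.Collections.Generic;
    using Microsoft.Data.Sqlite;

    static class ChunkedWrites
    {
        // Insert in chunks of 100 per transaction so the write lock is
        // released between batches and other processes get a turn.
        public static void InsertAll(SqliteConnection connection, IList<string> values)
        {
            const int chunkSize = 100;
            for (int start = 0; start < values.Count; start += chunkSize)
            {
                using (var tx = connection.BeginTransaction())
                {
                    var cmd = connection.CreateCommand();
                    cmd.Transaction = tx;
                    cmd.CommandText = "INSERT INTO items(value) VALUES ($v)"; // hypothetical table
                    var p = cmd.CreateParameter();
                    p.ParameterName = "$v";
                    cmd.Parameters.Add(p);

                    int end = Math.Min(start + chunkSize, values.Count);
                    for (int i = start; i < end; i++)
                    {
                        p.Value = values[i];
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit(); // file lock released here
                }
            }
        }
    }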
I do not have any experience with Firebird. I have used SQLite in situations like this for many applications over several years.
Have you looked into Berkeley DB with the SQLite API for SQL support?
It sounds like SQLite will be a good fit. We use SQLite in a number of production apps; it supports, in fact prefers, transactions, which go a long way toward handling concurrency.
transactional sqlite? in C#
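For example, wrapping a write in an explicit transaction looks roughly like this (a sketch, again assuming the Microsoft.Data.Sqlite package and a hypothetical accounts table):

    using Microsoft.Data.Sqlite;

    using (var conn = new SqliteConnection("Data Source=app.db"))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            var cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = "UPDATE accounts SET balance = balance - 10 WHERE id = 1"; // hypothetical schema
            cmd.ExecuteNonQuery();
            tx.Commit(); // all-or-nothing: readers never see a partial write
        }
    }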
I would add #3 to the list from ravenspoint above: if you have a large call-center or order-processing center, say, where dozens of people might be hitting the SAVE button at the same time, even if each is updating or inserting just one record, you can run into problems using the busy timeout approach.
For scenario #3, a true SQL engine that can serialize is ideal; less ideal but serviceable is a DBMS that can do byte-range record locking of a shared file. But be aware that even a byte-range record lock will be inadequate for a large number of concurrent writes when new records are appended to the end of the file like a caboose on the end of a freight train, so that multiple processes are trying at the same time to set a lock on the same byte range. On the other hand, a byte-range record-locking scheme coupled with a hashed-key sparse-file approach (e.g. the old Revelation/OpenInsight database for LANs) will be far superior to ISAM for this scenario.