I'm using SQLite with multi-threading. My program is working fine, but I wish to make it faster. I read that SQLite has three threading modes
that can be set at compile time (https://www.sqlite.org/threadsafe.html). The default is "Serialized", but from what I read, "Multi-thread" mode would be faster for me.
What I don't understand is how to set SQLite to "Multi-thread" mode in Visual Studio 2013. Does anyone know how I can do this? I have already found questions about this subject, but none of them showed clearly how to set this mode.
This is how to make SQLite work with multiple threads.
Use a BlockingCollection together with ThreadPool.QueueUserWorkItem.
Every database job is queued and executed in FIFO (first in, first out) order.
Because all SQL work is funnelled through a single worker thread, the database is never left locked by a transaction running on another thread.
Here is an example in C#.
public class DatabaseQueueBus
{
    // All database jobs go through this queue and are processed in FIFO order.
    private readonly BlockingCollection<TransportBean> _dbQueueBus =
        new BlockingCollection<TransportBean>(new ConcurrentQueue<TransportBean>());

    // Cancel this token to stop the worker thread.
    public CancellationTokenSource _dbQueueBusCancelToken { get; set; }

    public DatabaseQueueBus()
    {
        _dbQueueBusCancelToken = new CancellationTokenSource();
        DatabaseQueue();
    }

    // Called from any thread; the job is queued and the caller returns immediately.
    public void AddJob(TransportBean dto)
    {
        _dbQueueBus.Add(dto);
    }

    // The single worker that owns all database access.
    private void DatabaseQueue()
    {
        ThreadPool.QueueUserWorkItem((param) =>
        {
            try
            {
                do
                {
                    // Blocks until a job is available or the token is cancelled.
                    TransportBean dto = _dbQueueBus.Take(_dbQueueBusCancelToken.Token);
                    try
                    {
                        var job = (string)dto.DictionaryTransBean["job"];
                        switch (job)
                        {
                            case "SaveClasse":
                                //Save to table here
                                break;
                            case "SaveRegistrant":
                                //Save Registrant here
                                break;
                        }
                    }
                    catch (Exception)
                    {
                        //TODO: handle (or deliberately ignore) per-job failures here
                    }
                } while (!_dbQueueBusCancelToken.Token.IsCancellationRequested);
            }
            catch (OperationCanceledException)
            {
                // Take() was cancelled: the worker shuts down cleanly.
            }
            catch (Exception)
            {
                //TODO: log unexpected worker failures
            }
        });
    }
}
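For completeness, here is a rough usage sketch. It assumes TransportBean exposes DictionaryTransBean as an already-initialised IDictionary<string, object>, and myClasse is a placeholder for whatever object you want persisted; adjust both to your actual types.
// Hypothetical usage; TransportBean and DictionaryTransBean come from the class above,
// myClasse stands in for the object the worker should save.
var queueBus = new DatabaseQueueBus();

var dto = new TransportBean();
dto.DictionaryTransBean["job"] = "SaveClasse";
dto.DictionaryTransBean["classe"] = myClasse;   // payload for the worker to persist
queueBus.AddJob(dto);                           // safe from any thread; jobs run in FIFO order

// On shutdown, cancel the token so the blocking Take() returns and the worker exits.
queueBus._dbQueueBusCancelToken.Cancel();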
I have a worker thread whose job is to insert objects, stored in a queue, into the database.
We are currently using Entity Framework to do the inserts. My question is: do I need to create a new db instance for every insert, or can I safely re-use the same db instance over and over?
private static void MainWorker()
{
while (true)
{
try
{
if (IncomingDataQueue.Any())
{
if (IncomingDataQueue.TryDequeue(out var items))
{
//Insert into db
using (var db = GetNewDbInstance())
{
if (db != null)
{
db.DataRaw.AddRange(items);
db.SaveChanges();
//Skip everything and continue to the next loop
continue;
}
}
}
}
}
catch (Exception ex)
{
Debug.WriteException("Failed to insert DB Data", ex);
//Delay here in case we are hitting the db too hard.
Thread.Sleep(100);
}
//We did not have any items in the queue, so wait before checking again
Thread.Sleep(20);
}
}
Here is my function which gets a new DB Instance:
private static DbEntities GetNewDbInstance()
{
try
{
var db = new DbEntities();
db.Configuration.ProxyCreationEnabled = false;
db.Configuration.AutoDetectChangesEnabled = false;
return db;
}
catch (Exception ex)
{
Debug.WriteLine("Error in getting db instance" + ex.Message);
}
return null;
}
I have not had any issues to date; however, I worry that this solution will not scale well if we are, for example, doing thousands of inserts per minute.
I also worry that with a single static db instance we could get memory leaks, or that the object would keep growing and not manage its db connections properly.
What is the correct way to use EF with long term db connections?
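One pattern worth sketching here (an assumption-laden sketch, not a definitive answer): keep each context short-lived, one per dequeued batch, and let a BlockingCollection do the waiting instead of polling with Thread.Sleep. DbEntities and DataRaw are the names from the question; DataRawItem and the BlockingCollection itself are hypothetical stand-ins for the real queue and item type.
// Sketch only: assumes the queue can be a BlockingCollection and DataRawItem is the item type.
private static readonly BlockingCollection<DataRawItem[]> IncomingData =
    new BlockingCollection<DataRawItem[]>();

private static void MainWorker()
{
    // GetConsumingEnumerable blocks until an item arrives, so no Sleep/polling loop is needed.
    foreach (var items in IncomingData.GetConsumingEnumerable())
    {
        try
        {
            // A fresh, short-lived context per batch keeps the change tracker small
            // and returns the underlying connection to the pool after SaveChanges.
            using (var db = new DbEntities())
            {
                db.Configuration.AutoDetectChangesEnabled = false;
                db.DataRaw.AddRange(items);
                db.SaveChanges();
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Failed to insert DB Data: " + ex);
            Thread.Sleep(100); // back off briefly in case the database is struggling
        }
    }
}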
I have a Web API 2 endpoint set up to test a 2+ second ADO.NET call. When attempting to "burst" this API, it fails horribly when using async methods. I'm getting connection timeouts and reader timeouts. This doesn't happen with the synchronous version of this method when "bursted". The real problem is as follows...
Async ADO.NET behaves strangely when exceptions are thrown. I can see the exceptions thrown in the Visual Studio output, but they are not caught by my code. As you'll see below, I've tried wrapping try/catches around just about everything and have had no results. I did this to be able to set break points. I understand that catching exceptions just to throw them is bad. Initially, I only wrapped the call in the API layer. The worst part is, this locks up the entire server.
Clearly, there's something I'm missing about async ADO.NET. Any ideas?
EDIT:
Just to clarify what I'm trying to do here. This is just some test code on my local computer that's talking to our development database. It was written to prove or disprove that we can handle more traffic with async methods against our longer-running db calls. I think what's happening is that the calls are stacking up; as they do, the awaited connections and readers time out because we're not getting back to them quickly enough. That's why it doesn't fail when run synchronously, and it is a completely different issue. My concern here is that the operations are not throwing exceptions in a way that can be caught. The below is not production code :)
Web API2 Controller:
[Route("api/async/books")]
[HttpGet]
public async Task<IHttpActionResult> GetBookAsync()
{
// database class instantiation removed
// search args instantiation removed
try
{
var books = await titleData.GetTitlesAsync(searchArgs);
return Ok(books);
}
catch (Exception ex)
{
return InternalServerError(ex);
}
}
Data Access:
public async Task<IEnumerable<Book>> GetTitlesAsync(SearchArgs args)
{
var procName = "myProc"
using(var connection = new SqlConnection(_myConnectionString))
using (var command = new SqlCommand(procName, connection) { CommandType = System.Data.CommandType.StoredProcedure, CommandTimeout = 90 })
{
// populating command parameters removed
var results = new List<Book>();
try
{
await connection.OpenAsync();
}
catch(Exception ex)
{
throw;
}
try
{
using (var reader = await command.ExecuteReaderAsync())
{
try
{
while (await reader.ReadAsync())
{
// FROM MSDN:
// http://blogs.msdn.com/b/adonet/archive/2012/07/15/using-sqldatareader-s-new-async-methods-in-net-4-5-beta-part-2-examples.aspx
// Since this is non-sequential mode,
// all columns should already be read in by ReadAsync
// Therefore we can access individual columns synchronously
var book = new Book
{
Id = (int)reader["ID"],
Title = reader.ValueOrDefault<string>("Title"),
Author = reader.ValueOrDefault<string>("Author"),
IsActive = (bool)reader["Item_Active"],
ImageUrl = GetBookImageUrl(reader.ValueOrDefault<string>("BookImage")),
ProductId = (int)reader["ProductID"],
IsExpired = (bool)reader["Expired_Item"]
};
results.Add(book);
}
}
catch(Exception ex)
{
throw;
}
}
}
catch(Exception ex)
{
throw;
}
return results;
}
}
Function that throws the ThirdPartyException exception (I don't know how their code works internally):
private void RequestDocuments(/* arguments... */) {
while(true) {
var revision = lastRevision;
var fetchedDocuments = 0;
try {
foreach(var document in connection.QueryDocuments(revision)) {
if(fetchedDocuments > fetchQuota) return;
container.Add(document);
++fetchedDocuments;
Logger.Log.InfoFormat("added document (revision: {0}) into inner container", document.Revision);
}
Logger.Log.Info("Done importing documents into the inner container");
return;
}
catch(Exception ex) {
if(ex is ThirdPartyException) {
// handle this in a certain way!
continue;
}
}
}
}
This function is called inside a worker thread like this:
private void ImportDocuments() {
while(!this.finishedEvent.WaitOne(0, false)) {
try {
var documents = new List<GohubDocument>();
RequestDocuments(remoteServerConnection, documents, lastRevision, 100);
}
catch(Exception ex) {
// here is where it really gets handled!!!?
}
}
}
The exception is handled only in the outermost try/catch (the one inside the ImportDocuments method).
Why is that?
If that's a LINQ API which exposes IQueryable, you don't get the error where you expect it because of the deferred execution that LINQ-to-SQL style implementations typically use.
To prevent it, you have to invoke .ToList(), .FirstOrDefault(), etc. within your first method. That makes sure the query has actually been executed against your data source.
Solution:
var documents = connection.QueryDocuments(revision).ToList();
foreach(var document in documents) {
if(fetchedDocuments > fetchQuota) return;
// [...]
}
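The effect is easy to reproduce without the third-party library. A minimal sketch (hypothetical method names) showing why the exception surfaces where the sequence is enumerated rather than where it is created:
// Deferred execution: the iterator body only runs when the sequence is enumerated.
static IEnumerable<int> QueryLikeMethod()
{
    yield return 1;
    throw new InvalidOperationException("thrown during enumeration, not at call time");
}

static void Caller()
{
    var seq = QueryLikeMethod();     // no exception here; nothing has executed yet
    foreach (var item in seq) { }    // the exception is thrown from this loop, so only
                                     // try/catch blocks around the enumeration can see it
}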
Possible Duplicate:
C# cleanest way to write retry logic?
I have a function containing a web service call to the server, which sometimes fails ("unable to connect to remote server" error) due to network problems. The code is in a try/catch block. I want to re-run the web service call within the try block so that the call eventually completes successfully.
const int MaxRetries = 5;
for(int i = 0; i < MaxRetries; i++)
{
try
{
// do stuff
break; // jump out of for loop if everything succeeded
}
catch(Exception)
{
Thread.Sleep(100); // optional delay here
}
}
bool success = false;
int retry = 0;
while (!success && retry<3)
{
try{
// web service calls
success = true;
} catch(Exception) {
retry ++;
}
}
public void Connect()
{
Connect(1);
}
private void Connect(int num)
{
if (num > 3)
throw new Exception("Maximum number of attempts reached");
try
{
// do stuff
}
catch
{
// num++ would pass the original value and never advance the counter, so use num + 1
Connect(num + 1);
}
}
You can put a loop around the try/catch block like this:
bool repeat = true;
while (repeat) {
repeat = false;
try
{
...
}
catch( Exception )
{
repeat = true;
}
}
I think you have your answer here. I just wanted to add a couple of suggestions based on my abundant experience with this problem.
If you add logging to the catch block, you can ascertain how often the web service call fails, and how many attempts were made in all. (Maybe put a toggle in web.config to turn this logging off once the issue subsides.)
That information may prove useful in discussions with system administrators if, for example, the web service provider is within your organization, such as on an intranet.
In addition, if you find that the calls are still failing too often, you could introduce a delay in the catch, so that the retry is not immediate. You might only want to do that on the final attempt. Sometimes it is worth the wait for the user, who doesn't want to lose all the data they have just entered.
And finally, depending on the situation, you could add a Retry button to the UI, so that the user could keep trying. The user could choose to wait five minutes for the network problem to clear itself up, and click Retry.
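Putting those suggestions together, a retry loop might look roughly like this (Logger and CallWebService are placeholders for your own logging and your actual web service call):
// Sketch combining the suggestions above; Logger and CallWebService are placeholders.
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        CallWebService();   // the call that sometimes fails
        break;              // success, stop retrying
    }
    catch (Exception ex)
    {
        Logger.Warn(string.Format("Web service call failed (attempt {0} of {1})", attempt, maxAttempts), ex);
        if (attempt == maxAttempts)
            throw;                      // give up and let the caller (or a Retry button) decide
        if (attempt == maxAttempts - 1)
            Thread.Sleep(5000);         // wait a little longer before the final attempt
    }
}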
Wrap the try/catch in a while loop. Set a flag on success to exit the while (or just break out). Make sure you have some sort of retry limit so it won't keep going forever.
while (true)
{
try
{
// call webservice
// handle results
break;
}
catch (TemporaryException e)
{
// do any logging you wish
continue;
}
catch (FatalException e)
{
// do any logging you wish
break;
}
}
If you want to limit the retries, change the termination condition on the while loop.
void Method()
{
do
{
try
{
DoStuff();
return;
}
catch (Exception e)
{
// Do Something about exception.
}
}
while (true);
}
If you find yourself wanting to do this frequently in your code, you might consider implementing a reusable class that encapsulates the "re-try when an error is encountered" logic. This way, you can ensure that the behavior is standardized throughout your code base, instead of repeated each time.
There's an excellent example available on Dan Gartner's blog:
public class Retrier<TResult>
{
public TResult Try(Func<TResult> func, int maxRetries)
{
return TryWithDelay(func, maxRetries, 0);
}
public TResult TryWithDelay(Func<TResult> func, int maxRetries, int delayInMilliseconds)
{
TResult returnValue = default(TResult);
int numTries = 0;
bool succeeded = false;
while (numTries < maxRetries)
{
try
{
returnValue = func();
succeeded = true;
}
catch (Exception)
{
//todo: figure out what to do here
}
finally
{
numTries++;
}
if (succeeded)
return returnValue;
System.Threading.Thread.Sleep(delayInMilliseconds);
}
return default(TResult);
}
}
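Usage would then be along these lines (GetDataFromService and ServiceResult are placeholders for your actual call and its return type):
// Hypothetical usage of the Retrier class above.
var retrier = new Retrier<ServiceResult>();
ServiceResult result = retrier.TryWithDelay(() => GetDataFromService(), maxRetries: 3, delayInMilliseconds: 500);
if (result == null)
{
    // every attempt failed; the class returns default(TResult) in that case
}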
Well, the easiest would be to copy the code to the catch-block, right?
Another approach could look like:
private void YourMethodThatTriesToCallWebService()
{
//Don't catch errors
}
public void TryToCallWebService(int numTries)
{
bool failed = true;
for(int i = 0; i < numTries && failed; i++)
{
try{
YourMethodThatTriesToCallWebService();
failed = false;
}catch{
//do nothing
}
}
}
You should put the entire try/catch block inside a while statement:
int retryCount = 0;
bool success = false;
const int MAX_RETRY = 3; // pick whatever limit you need
while (retryCount < MAX_RETRY && !success)
{
    try
    {
        // do stuff, calling the web service
        success = true;
    }
    catch
    {
        retryCount++;
        success = false;
    }
}
Some APIs, like the WebClient, use the Event-based Async pattern. While this looks simple, and probably works well in a loosely coupled app (say, BackgroundWorker in a UI), it doesn't chain together very well.
For instance, here's a program that's multithreaded so the async work doesn't block. (Imagine this going into a server app and being called hundreds of times; you don't want to block your ThreadPool threads.) We get three local variables ("state"), then make two async calls, with the result of the first feeding into the second request (so they cannot run in parallel). The state could also mutate along the way (easy to add).
Using WebClient, things end up like the following (or you end up creating a bunch of objects to act like closures):
using System;
using System.Net;
class Program
{
static void onEx(Exception ex) {
Console.WriteLine(ex.ToString());
}
static void Main() {
var url1 = new Uri(Console.ReadLine());
var url2 = new Uri(Console.ReadLine());
var someData = Console.ReadLine();
var webThingy = new WebClient();
DownloadDataCompletedEventHandler first = null;
webThingy.DownloadDataCompleted += first = (o, res1) => {
if (res1.Error != null) {
onEx(res1.Error);
return;
}
webThingy.DownloadDataCompleted -= first;
webThingy.DownloadDataCompleted += (o2, res2) => {
if (res2.Error != null) {
onEx(res2.Error);
return;
}
try {
Console.WriteLine(someData + res2.Result);
} catch (Exception ex) { onEx(ex); }
};
try {
webThingy.DownloadDataAsync(new Uri(url2.ToString() + "?data=" + res1.Result));
} catch (Exception ex) { onEx(ex); }
};
try {
webThingy.DownloadDataAsync(url1);
} catch (Exception ex) { onEx(ex); }
Console.WriteLine("Keeping process alive");
Console.ReadLine();
}
}
Is there a generic way to refactor this event-based async pattern? (I.e. without having to write detailed extension methods for each API that's like this?) BeginXXX and EndXXX make it easy, but this event-based way doesn't seem to offer anything similar.
In the past I've implemented this using an iterator method: every time you want another URL requested, you use "yield return" to pass control back to the main program. Once the request finishes, the main program calls back into your iterator to execute the next piece of work.
You're effectively using the C# compiler to write a state machine for you. The advantage is that you can write normal-looking C# code in the iterator method to drive the whole thing.
using System;
using System.Collections.Generic;
using System.Net;
class Program
{
static void onEx(Exception ex) {
Console.WriteLine(ex.ToString());
}
static IEnumerable<Uri> Downloader(Func<DownloadDataCompletedEventArgs> getLastResult) {
Uri url1 = new Uri(Console.ReadLine());
Uri url2 = new Uri(Console.ReadLine());
string someData = Console.ReadLine();
yield return url1;
DownloadDataCompletedEventArgs res1 = getLastResult();
yield return new Uri(url2.ToString() + "?data=" + res1.Result);
DownloadDataCompletedEventArgs res2 = getLastResult();
Console.WriteLine(someData + res2.Result);
}
static void StartNextRequest(WebClient webThingy, IEnumerator<Uri> enumerator) {
if (enumerator.MoveNext()) {
Uri uri = enumerator.Current;
try {
Console.WriteLine("Requesting {0}", uri);
webThingy.DownloadDataAsync(uri);
} catch (Exception ex) { onEx(ex); }
}
else
Console.WriteLine("Finished");
}
static void Main() {
DownloadDataCompletedEventArgs lastResult = null;
Func<DownloadDataCompletedEventArgs> getLastResult = delegate { return lastResult; };
IEnumerable<Uri> enumerable = Downloader(getLastResult);
using (IEnumerator<Uri> enumerator = enumerable.GetEnumerator())
{
WebClient webThingy = new WebClient();
webThingy.DownloadDataCompleted += delegate(object sender, DownloadDataCompletedEventArgs e) {
if (e.Error == null) {
lastResult = e;
StartNextRequest(webThingy, enumerator);
}
else
onEx(e.Error);
};
StartNextRequest(webThingy, enumerator);
Console.WriteLine("Keeping process alive");
// Keep the enumerator inside the using block (undisposed) until all chained requests
// have completed; disposing it earlier would make MoveNext() return false in the
// DownloadDataCompleted callback and the chain would stop after the first download.
Console.ReadLine();
}
}
}
You might want to look into F#. F# can automate this coding for you with its «workflow» feature. The '08 PDC presentation of F# dealt with asynchronous web requests using a standard library workflow called async, which handles the BeginXXX/EndXXX pattern, but you can write a workflow for the event pattern without much difficulty, or find a canned one. And F# works well with C#.