Weird error on transactions -> using ExecuteNonQuery, getting "Reader must be closed" - C#

I'm developing a client in C# with PostgreSQL.
It needs to parse some texts and insert the data into the right tables, so we have a parser which gives me a Dictionary for each table (4 at the moment).
We have a thread which pushes those dictionaries onto a ConcurrentQueue.
Now, we have two timers:
1) every 10 seconds it commits the open transaction and recreates one.
These are the methods:
void transactionTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
try
{
Commit();
}
catch (Exception ex)
{
Logger.Log(Logger.LogType.ERROR, ex);
Rollback();
}
Transaction();
}
public void Commit()
{
if (trans != null)
{
trans.Commit();
trans.Dispose();
trans = null;
}
}
public void Transaction()
{
if (dbCon == null || (dbCon != null && dbCon.State != ConnectionState.Open))
dbCon = Connection;
if (trans == null)
trans = dbCon.BeginTransaction();
}
public void Rollback()
{
if (trans != null)
{
trans.Rollback();
trans.Dispose();
trans = null;
}
}
2) every 3 seconds it picks one hundred items from the queue and does one huge insert (not 100 separate inserts, just one, using parameters) like this:
insert into Tabletest1_HandsData( handDataId, handData) values( @handDataId0, @handData0),( @handDataId1, @handData1),( @handDataId2, @handData2),( @handDataId3, @handData3) ....
by using this helper
public bool Insert(String tableName, Dictionary<String, object> data, bool usingTransaction = false)
{
Boolean returnCode = true;
var sql = GetSQL(tableName, data);
try
{
int rowsUpdated = -1;
var conn = (!usingTransaction) ? Connection : dbCon;
DbCommand mycommand = GetCommand(conn, sql);
GetCommandByDictionary(mycommand, data);
rowsUpdated = mycommand.ExecuteNonQuery();
if (!usingTransaction)
{
conn.Close();
conn.Dispose();
}
}
catch (Exception ex)
{
Logger.Log(Logger.LogType.ERROR, ex);
returnCode = false;
}
return returnCode;
}
protected override void GetCommandByDictionary(DbCommand cmd, Dictionary<string, object> data)
{
foreach (var val in data)
(cmd as NpgsqlCommand).Parameters.AddWithValue(val.Key.ToString(), val.Value);
}
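For reference, GetSQL itself isn't shown above; purely as a sketch of the idea (not the actual implementation), a helper that builds that kind of multi-row INSERT plus a flattened parameter dictionary from a batch of row dictionaries could look like this:

    // Hypothetical sketch only: the real GetSQL is not shown in the question.
    // It builds one multi-row INSERT and a flattened parameter dictionary from a
    // batch of row dictionaries, suffixing each parameter name with the row index.
    using System.Collections.Generic;
    using System.Linq;

    static class BatchSqlBuilder
    {
        public static (string Sql, Dictionary<string, object> Parameters) Build(
            string tableName, IReadOnlyList<Dictionary<string, object>> rows)
        {
            var columns = rows[0].Keys.ToList();
            var parameters = new Dictionary<string, object>();
            var valueGroups = new List<string>();

            for (int i = 0; i < rows.Count; i++)
            {
                // e.g. (@handDataId0, @handData0), (@handDataId1, @handData1), ...
                var names = columns.Select(c => "@" + c + i).ToList();
                valueGroups.Add("(" + string.Join(", ", names) + ")");
                foreach (var c in columns)
                    parameters["@" + c + i] = rows[i][c];
            }

            var sql = $"insert into {tableName}({string.Join(", ", columns)}) values {string.Join(", ", valueGroups)}";
            return (sql, parameters);
        }
    }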
So we parse and push onto a queue, then every 3 seconds we pick 100 of these and insert them, and every 10 seconds the transaction is committed and recreated.
My error is that ExecuteNonQuery is giving me:
There is already an open DataReader associated with this Command which must be closed first.
Why is that happening?
I may as well take this opportunity to ask: how can I do this better? Please feel free to insult me, I have tried a lot of stuff.
Thanks
Luca

Related

Asynchronously updating multiple rows updates 1/4 of rows instantly and then waits

I have code to asynchronously update multiple rows in a SQL Server table. I tested it by updating 540 rows: 144 rows are updated in the table instantly, then it waits for about 5 minutes, and then the rest are updated. At least that's how it looks when I check for updated rows with SELECT. I'm wondering why that is.
The whole thing is triggered by button's click:
DialogResult res = MessageBox.Show($"Znaleziono {num} pasujących maszyn. Czy chcesz zaktualizować priorytet maszyny danymi z pliku?", "Potwierdź", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
if(res == DialogResult.Yes)
{
await UpdatePriority();
MessageBox.Show("Updated!");
Here's UpdatePriority method that asynchronously call place.Edit() method for all places in the list of items:
public async Task<string> UpdatePriority()
{
List<Task<string>> UpdateTasks = new List<Task<string>>();
try
{
foreach (Place p in Items.Where(i => i.IsUpdated==true))
{
UpdateTasks.Add(Task.Run(()=> p.Edit()));
}
string response = "OK";
IEnumerable<string> res = await Task.WhenAll<string>(UpdateTasks);
}
catch (Exception ex)
{
throw;
}
return "Nie udało się zaktualizować danych żadnego zasobu..";
}
And here is the Edit() method of the place object. It basically updates the place's data in the SQL Server table:
public async Task<string> Edit()
{
string iSql = #"UPDATE JDE_Places
SET Priority=#Priority
WHERE PlaceId=#PlaceId";
string msg = "OK";
using (SqlCommand command = new SqlCommand(iSql, Settings.conn))
{
command.Parameters.AddWithValue("#PlaceId", PlaceId);
command.Parameters.AddWithValue("#Priority", Priority);
int result = -1;
try
{
result = await command.ExecuteNonQueryAsync();
IsUpdated = false;
}
catch (Exception ex)
{
msg = $"Wystąpił błąd przy edycji zasobu {Name}. Opis błędu: {ex.Message}";
}
}
return msg;
}
And here's the Settings.conn property that serves as a reusable connection object:
public static class Settings
{
private static SqlConnection _conn { get; set; }
public static SqlConnection conn
{
get
{
if (_conn == null)
{
_conn = new SqlConnection(Static.Secrets.ConnectionString);
}
if (_conn.State == System.Data.ConnectionState.Closed)
{
try
{
_conn.Open();
}
catch (Exception ex)
{
MessageBox.Show("Nie udało się nawiązać połączenia z bazą danych.. " + ex.Message);
}
}
return _conn;
}
}
}
I realize it's probably better to keep the connection within a using statement (instead of reusing it), but when I added one to place.Edit() it worked even slower (and unreliably).
UPDATE: I ran a few more tests and the time they took to update 540 rows varied from 15 seconds to 400 seconds. Then I just changed result = await command.ExecuteNonQueryAsync() to result = command.ExecuteNonQuery() in the place object's Edit(), ran a few more tests, and all of them finished under 10 seconds! I don't know why the async version of ExecuteNonQuery() was so much worse than the non-async one, though. A single Edit() call was taking around 0.1 sec with ExecuteNonQuery() and 1-400 seconds with ExecuteNonQueryAsync(). Here are the logs: ExecuteNonQuery(), ExecuteNonQueryAsync()
Your issue here is your Settings class. You're essentially trying to use the same SqlConnection object in multiple SqlCommands. SqlConnection is not thread-safe when used like this. You end up with multiple concurrent commands because your code is non-blocking and async, and that is what is causing your code to "wait" (or deadlock). This is why it works correctly when you run it synchronously (without ExecuteNonQueryAsync, etc.).
You don't need this shared object at all anyway. ADO.NET handles connection pooling for you, so there is no advantage in reusing the same SqlConnection. Just create a new one for each SqlCommand:
public async Task<string> Edit()
{
using (SqlConnection conn = new SqlConnection(...))
using (SqlCommand command = new SqlCommand(iSql, conn))
{
...
}
}
and you should find that your "wait" goes away.
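Put together, Edit() might look roughly like this (a sketch, not a drop-in answer: it reuses the names from the question, assumes the connection string is available via Static.Secrets.ConnectionString, and swaps the Polish error message for an English one):

    public async Task<string> Edit()
    {
        const string iSql = @"UPDATE JDE_Places
                              SET Priority = @Priority
                              WHERE PlaceId = @PlaceId";
        string msg = "OK";

        // A new connection per command: ADO.NET connection pooling makes this cheap,
        // and it avoids sharing one SqlConnection across concurrently running tasks.
        using (SqlConnection conn = new SqlConnection(Static.Secrets.ConnectionString))
        using (SqlCommand command = new SqlCommand(iSql, conn))
        {
            command.Parameters.AddWithValue("@PlaceId", PlaceId);
            command.Parameters.AddWithValue("@Priority", Priority);
            try
            {
                await conn.OpenAsync();
                await command.ExecuteNonQueryAsync();
                IsUpdated = false;
            }
            catch (Exception ex)
            {
                msg = $"Error while editing {Name}: {ex.Message}";
            }
        }
        return msg;
    }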

How to improve sqlite write performance in C#

I'm using SQLite to save logs and am running into a write performance issue.
string log = "INSERT INTO Log VALUES ('2019-12-12 13:43:06','Error','Client','This is log message')";
public int WriteLog(string log)
{
return ExecuteNoQuery(log);
}
public int ExecuteNoQuery(string command)
{
int nResult = -1;
try
{
using (SQLiteConnection dbConnection = new SQLiteConnection(ConnectString))
{
dbConnection.Open();
using (SQLiteCommand dbCommand = dbConnection.CreateCommand())
{
dbCommand.CommandText = command;
nResult = dbCommand.ExecuteNonQuery();
}
}
}
catch (Exception e)
{
// Output error message
}
return nResult;
}
Searching Google suggests that a transaction could improve the write performance significantly, but unfortunately I don't know when a log message will arrive, so I can't batch the log messages. Is there any other way to improve my log write performance?
I tried adding a timer to my code to commit the transaction automatically, but I don't think it's a good way to speed up log write performance.
public class DatabaseManager : IDisposable
{
private static SQLiteTransaction transaction = null;
private SQLiteConnection dbConnection = null;
private static Timer transactionTimer;
private long checkInterval = 500;
private DatabaseManager(string connectionString)
{
dbConnection = new SQLiteConnection(connectionString);
dbConnection.Open();
StartTransactionTimer();
}
public void Dispose()
{
if(transaction != null)
{
transaction.Commit();
transaction = null;
}
dbConnection.Close();
dbConnection = null;
}
private void StartTransactionTimer()
{
transactionTimer = new Timer();
transactionTimer.Interval = checkInterval;
transactionTimer.Elapsed += TransactionTimer_Elapsed;
transactionTimer.AutoReset = false;
transactionTimer.Start();
}
private void TransactionTimer_Elapsed(object sender, ElapsedEventArgs e)
{
StartTransation();
transactionTimer.Enabled = true;
}
public void StartTransation()
{
try
{
if (dbConnection == null || dbConnection.State == ConnectionState.Closed)
{
return;
}
if (transaction != null)
{
transaction.Commit();
transaction = null;
}
transaction = dbConnection.BeginTransaction();
}
catch(Exception e)
{
LogError("Error occurs during commit transaction, error message: " + e.Message);
}
}
public int ExecuteNoQuery(string command)
{
int nResult = -1;
try
{
using (SQLiteCommand dbCommand = dbConnection.CreateCommand())
{
dbCommand.CommandText = command;
nResult = dbCommand.ExecuteNonQuery();
}
}
catch (Exception e)
{
LogError("Error occurs during execute sql no result query, error message: ", e.Message);
}
return nResult;
}
}
This started out as a comment, but it's evolving into an answer.
Get rid of the GC.Collect(); line.
It's not your job to handle garbage collection, and you're probably degrading performance by doing it.
There's no need to close the connection; you're disposing it on the next line anyway.
Why are you locking? Insert statements are usually thread-safe, and this one doesn't seem to be an exception to that rule.
You are swallowing exceptions. That's a terrible habit.
Since you only ever insert a single record, you don't need to return an int; you can simply return a bool (true for success, false for failure).
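If some batching is acceptable, one common approach (a sketch with made-up names, not the code from the question) is to buffer log entries in memory and flush them periodically inside a single transaction, so many INSERTs share one commit:

    // Illustrative sketch: buffers log entries and flushes them in one transaction.
    using System;
    using System.Collections.Concurrent;
    using System.Data.SQLite;

    public sealed class BatchedLogWriter
    {
        private readonly ConcurrentQueue<(DateTime Time, string Level, string Source, string Message)> _queue
            = new ConcurrentQueue<(DateTime Time, string Level, string Source, string Message)>();
        private readonly string _connectString;

        public BatchedLogWriter(string connectString) => _connectString = connectString;

        public void WriteLog(string level, string source, string message)
            => _queue.Enqueue((DateTime.Now, level, source, message));

        // Call from a timer (e.g. every 500 ms): one connection, one transaction per flush.
        public void Flush()
        {
            if (_queue.IsEmpty) return;
            using (var conn = new SQLiteConnection(_connectString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = "INSERT INTO Log VALUES (@time, @level, @source, @message)";
                    while (_queue.TryDequeue(out var entry))
                    {
                        cmd.Parameters.Clear();
                        cmd.Parameters.AddWithValue("@time", entry.Time.ToString("yyyy-MM-dd HH:mm:ss"));
                        cmd.Parameters.AddWithValue("@level", entry.Level);
                        cmd.Parameters.AddWithValue("@source", entry.Source);
                        cmd.Parameters.AddWithValue("@message", entry.Message);
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
            }
        }
    }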
Why don't you use Entity Framework to communicate with the database?
For me it's the easiest way. It's a Microsoft library, so you can be sure that the performance is very good.
I've done some work with Entity Framework and SQLite databases and everything works very well.
Here is an example of use:
var context = new MySqliteDatabase(new SQLiteConnection(@"DataSource=D:\\Mydb.db;cache=shared"));
var author = new Author {
FirstName = "William",
LastName = "Shakespeare",
Books = new List<Book>
{
new Book { Title = "Hamlet"},
new Book { Title = "Othello" },
new Book { Title = "MacBeth" }
}
};
context.Add(author);
context.SaveChanges();
The MySqliteDatabase type can be created automatically using the database-first approach, or with the code-first approach. There is a lot of information and there are many examples on the internet.
Here is the link to the official documentation:
https://learn.microsoft.com/en-us/ef/ef6/
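As an illustration of the code-first approach, the MySqliteDatabase context and entities could be declared roughly like this (a sketch assuming EF6 with the System.Data.SQLite EF provider configured; note that with EF6 you would typically call context.Authors.Add(author) rather than context.Add(author)):

    using System.Collections.Generic;
    using System.Data.Common;
    using System.Data.Entity;

    public class Author
    {
        public int AuthorId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public virtual List<Book> Books { get; set; }
    }

    public class Book
    {
        public int BookId { get; set; }
        public string Title { get; set; }
        public virtual Author Author { get; set; }
    }

    public class MySqliteDatabase : DbContext
    {
        // Takes an existing connection, matching the usage shown above.
        public MySqliteDatabase(DbConnection connection)
            : base(connection, contextOwnsConnection: true) { }

        public DbSet<Author> Authors { get; set; }
        public DbSet<Book> Books { get; set; }
    }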

MySql executing a MySqlScript even when the connection has been closed

Is it possible for mysql to execute a script even when the connection has been closed?
I am using MySQL Community Server, through the .NET connector API.
I was using C# to test out the API.
I have the following static class:
using System;
using System.Data;
using MySql.Data;
using MySql.Data.MySqlClient;
public static class DataBase
{
static string connStr = "server=localhost;user=root;port=3306;password=*******;";
static MySqlConnection conn;
public static bool Connect()
{
conn = new MySqlConnection(connStr);
try
{
conn.Open();
}
catch (Exception Ex)
{
ErrorHandler(Ex);
return false;
}
return true;
}
public static int ExecuteScript(string scripttext) // returns the number of statements executed
{
MySqlCommand cmd = conn.CreateCommand();
cmd.CommandText = scripttext;
MySqlScript script;
int count= 0;
try
{
script = new MySqlScript(conn, cmd.CommandText);
script.Error += new MySqlScriptErrorEventHandler(script_Error);
script.ScriptCompleted += new EventHandler(script_ScriptCompleted);
script.StatementExecuted += new MySqlStatementExecutedEventHandler(script_StatementExecuted);
count = script.Execute();
}
catch (Exception Ex)
{
count = -1;
ErrorHandler(Ex);
}
return count;
}
#region EventHandlers
static void script_StatementExecuted(object sender, MySqlScriptEventArgs args)
{
string Message = "script_StatementExecuted";
}
static void script_ScriptCompleted(object sender, EventArgs e)
{
string Message = "script_ScriptCompleted!";
}
static void script_Error(Object sender, MySqlScriptErrorEventArgs args)
{
string Message = "script_Error: " + args.Exception.ToString();
}
#endregion
public static bool Disconnect()
{
try
{
conn.Close();
}
catch (Exception Ex)
{
ErrorHandler(Ex);
return false;
}
return true;
}
public static void ErrorHandler(Exception Ex)
{
Console.WriteLine(Ex.Source);
Console.WriteLine(Ex.Message);
Console.WriteLine(Ex.ToString());
}
}
and I am using the following code to test out this class
using System;
using System.Data;
namespace Sample
{
public class Sample
{
public static void Main()
{
if (DataBase.Connect() == true)
Console.WriteLine("Connected");
if (DataBase.Disconnect() == true)
Console.WriteLine("Disconnected");
int count = DataBase.ExecuteScript("drop database sample");
if (count != -1)
{
Console.WriteLine(" Sample Script Executed");
Console.WriteLine(count);
}
Console.ReadKey();
}
}
}
I noticed that even though I have closed my MySQL connection using the Disconnect() method I defined, MySQL continues to execute the command I give next and no error is generated.
I feel like I am doing something wrong, as an error should be generated when I try to execute a script on a closed connection.
Is it a problem in my code/logic, or some flaw in the MySQL connector?
I did check through MySQL Workbench whether the command was executed properly, and it was.
This is a decompilation of the MySqlScript.Execute code:
public unsafe int Execute()
{
......
flag = 0;
if (this.connection != null)
{
goto Label_0015;
}
throw new InvalidOperationException(Resources.ConnectionNotSet);
Label_0015:
if (this.query == null)
{
goto Label_002A;
}
if (this.query.Length != null)
{
goto Label_002C;
}
Label_002A:
return 0;
Label_002C:
if (this.connection.State == 1)
{
goto Label_0047;
}
flag = 1;
this.connection.Open();
....
As you can see, when you build the MySqlScript the connection you pass is saved in an internal variable, and before executing the script, if that internal connection is closed, the code opens it. I haven't checked, but I suppose it also closes the connection before exiting (notice that flag = 1 before opening).
Apart from this, I suggest changing your code to avoid keeping a global MySqlConnection object. You gain nothing and risk running into bugs that are very difficult to track down.
static string connStr = "server=localhost;user=root;port=3306;password=*******;";
public static MySqlConnection Connect()
{
MySqlConnection conn = new MySqlConnection(connStr);
conn.Open();
return conn;
}
This approach lets you write code that uses the using statement:
public static int ExecuteScript(string scripttext) // returns the number of statements executed
{
using(MySqlConnection conn = DataBase.Connect())
using(MySqlCommand cmd = conn.CreateCommand())
{
cmd.CommandText = scripttext;
....
}
}
The using statement will close and dispose the connection and the command, freeing valuable resources, and in case of an exception you can be sure the connection is closed and disposed.
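For example, a completed ExecuteScript along those lines might look like this (a sketch; it reuses the script event handlers already defined in the question and drops the intermediate MySqlCommand, since MySqlScript only needs the query text):

    public static int ExecuteScript(string scripttext) // returns the number of statements executed
    {
        // One connection per call; the using block guarantees it is closed and disposed.
        using (MySqlConnection conn = DataBase.Connect())
        {
            try
            {
                var script = new MySqlScript(conn, scripttext);
                script.Error += script_Error;
                script.ScriptCompleted += script_ScriptCompleted;
                script.StatementExecuted += script_StatementExecuted;
                return script.Execute();
            }
            catch (Exception Ex)
            {
                ErrorHandler(Ex);
                return -1;
            }
        }
    }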

Input string was not in a correct format in C#.NET for textbox for some of the table names

I have a text box to get an SQL statement from the users, so the users can type the SQL into the textbox and get the results.
private int hasData
{
get
{
try
{
string query = SqlStatement_tbx.Text;
int _hasData = GenericDataAccess.ExecuteScalar(query);
return _hasData;
}
catch (Exception ex)
{
Errorhandling.LogError(ex);
return -1;
}
}
}
The above method works for some of the tables and not for others, and I could not figure out where I am going wrong.
If I do Select * from dba.condition it works.
If I do Select * from dba.project it reports Input string was not in a correct format.
Thanks
Update:
This is the code in my code-behind file:
private int hasData
{
get
{
try
{
string query = SqlStatement_tbx.Text;
int _hasData = GenericDataAccess.ExecuteScalar(query);
return _hasData;
}
catch (Exception ex)
{
Errorhandling.LogError(ex);
return -1;
}
}
}
private string sqlStatement
{
get
{
return SqlStatement_tbx.Text;
}
}
protected void RunSql_btn_Click(object sender, EventArgs e)
{
CustomValidator cv = (CustomValidator)CustomValidator1;
cv.Validate();
if (cv.IsValid)
{
Panel1.Visible = false;
ExportData_btn.Visible = false;
EmptyData_pnl.Visible = false;
if (hasData > 0)
{
Panel1.Visible = true;
ExportData_btn.Visible = true;
BindGridView1();
}
else if (hasData == 0)
{
EmptyData_pnl.Visible = true;
}
else
{
GenericMethods.showPopup(AlertType.Information, "Please check your sql statement!");
}
}
}
protected void BindGridView1()
{
try
{
GridView1.selectStatement = sqlStatement;
GridView1.PagerStyle.HorizontalAlign = HorizontalAlign.Left;
GridView1.ExtDataBind();
}
catch (Exception ex)
{
Errorhandling.LogError(ex);
}
}
protected void ValidateInput(object source, ServerValidateEventArgs args)
{
try
{
string input = sqlStatement.Trim();
if (input.StartsWith("select", true, null))
{
args.IsValid = true;
}
else
{
args.IsValid = false;
}
}
catch (Exception Ex)
{
Errorhandling.LogError(Ex);
}
}
ExecuteScalar
private static Object ExecuteScalar_returnObject_internal(string query, Object[] args)
{
using (SAConnection Conn = new SAConnection(GenerateConnectionString()))
{
SACommand Cmd = new SACommand(query, Conn);
foreach (Object o in args)
{
SAParameter p = Cmd.CreateParameter();
p.Value = o;
Cmd.Parameters.Add(p);
}
Conn.Open();
Object r = Cmd.ExecuteScalar();
Conn.Close();
Cmd.Dispose();
//SAConnection.ClearAllPools();
return r;
}
}
public static int ExecuteScalar(string query, params Object[] args)
{
return Convert.ToInt32(ExecuteScalar_returnObject_internal(query, args));
}
The error you are getting is being produced by the Convert.ToInt32 method in your GenericDataAccess.ExecuteScalar function. This means that the string (or whatever type it is) returned from ExecuteScalar_returnObject_internal is not a valid value that can be converted to an int.
Without seeing more of your code I cannot say for sure why it isn't returning the expected value, however I suggest you step through it with the debugger and evaluate the value being returned by ExecuteScalar_returnObject_internal. It should be fairly obvious why it cannot be converted to an int.
Update
Looking at more of your code we can see that the object being converted is the result of Cmd.ExecuteScalar (i.e. SACommand.ExecuteScalar). Again, that's not part of the .NET Framework, but for the sake of speeding things up let's assume it behaves like SqlCommand.ExecuteScalar under the hood.
We can see that SqlCommand.ExecuteScalar has the following return value:
The first column of the first row in the result set, or a null
reference (Nothing in Visual Basic) if the result set is empty.
Returns a maximum of 2033 characters.
With that in mind, I would say that the first column of the first row when executing Select * from dba.project is either null (perhaps there are no results at all), or the value is not something that can be converted to an int.
I recommend you debug your query to see what the first row/column value is, or add some error checking and do something else (such as displaying a message saying "no results found" or "unexpected return type" or whatever you like).
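As a sketch of that last suggestion, the ExecuteScalar wrapper could handle the empty/NULL and non-numeric cases explicitly instead of letting Convert.ToInt32 throw (the -1/0 return values here mirror how the calling code above already interprets them):

    public static int ExecuteScalar(string query, params Object[] args)
    {
        Object r = ExecuteScalar_returnObject_internal(query, args);

        // Empty result set (null) or a NULL first column (DBNull): treat as "no data".
        if (r == null || r == DBNull.Value)
            return 0;

        try
        {
            // Numeric first column (or a numeric string): behave as before.
            return Convert.ToInt32(r);
        }
        catch (FormatException)
        {
            // First column exists but is not numeric, e.g. a name column returned
            // by "Select * from dba.project": signal it the same way as an error.
            return -1;
        }
        catch (InvalidCastException)
        {
            return -1;
        }
    }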

Need help to fine-tune query

Hi, I have this update query which works fine, but it takes about 3-4 seconds before I get the "updated successfully" message box. Could you help me see what is going wrong? Is it because of the using() and the transaction rollback?
public void Update()
{
System.Data.Common.DbTransaction transaction = null;
using (JamminDataContext db = new JamminDataContext())
{
try
{
db.Connection.Open();
transaction = db.Connection.BeginTransaction();
db.Transaction = transaction;
#region Update Users
db.Users.Attach(this, GetSingleUserById(this.Id));
db.Refresh(System.Data.Linq.RefreshMode.KeepCurrentValues, db.Users);
db.SubmitChanges();
#endregion
if (this.RoleId == (int)RoleTypes.Student)
{
#region Update CourseByStudents
foreach (CourseByStudent courseByStudent in this.courseByStudent)
{
if (courseByStudent == null) break;
if (courseByStudent.Id == 0)
{
courseByStudent.CourseUserStatus.UserId = this.Id;
db.CourseUserStatus.InsertOnSubmit(courseByStudent.CourseUserStatus);
db.SubmitChanges();
courseByStudent.StudentId = this.Id;
courseByStudent.CourseUserStatusId = courseByStudent.CourseUserStatus.Id;
db.CourseByStudents.InsertOnSubmit(courseByStudent);
db.SubmitChanges();
}
else
{
if(courseByStudent.CourseUserStatusCopy != courseByStudent.CourseUserStatus.Status
&& ( courseByStudent.CourseUserStatus.Status != null
&& courseByStudent.CourseUserStatus.Date != null))
{
//Insert to CourseUserStatus only when Status is change or add new row of course
courseByStudent.CourseUserStatus.UserId = this.Id;
db.CourseUserStatus.InsertOnSubmit(courseByStudent.CourseUserStatus);
db.SubmitChanges();
courseByStudent.CourseUserStatusId = courseByStudent.CourseUserStatus.Id;
}
courseByStudent.Update();
}
}
#endregion
}
transaction.Commit();
}
catch (Exception ex)
{
if (transaction != null) transaction.Rollback();
Logger.Error(typeof(User), ex);
throw;
}
finally
{
if (db.Connection.State == System.Data.ConnectionState.Open) db.Connection.Close();
}
}
}
Instead of doing all the individual db.SubmitChanges() calls, make one call to db.SubmitChanges() right before the transaction.Commit(). Let me know if this improves performance. It should prevent many round trips to the database and thus improve the overall performance.
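As a rough sketch of that reshaping (not a drop-in replacement: where the original reads the generated CourseUserStatus.Id between submits, the relationship has to be set through the association property so LINQ to SQL can resolve the foreign key in a single submit):

    // Sketch of the general shape: queue all inserts/updates, then flush once.
    using (JamminDataContext db = new JamminDataContext())
    {
        db.Connection.Open();
        System.Data.Common.DbTransaction transaction = db.Connection.BeginTransaction();
        db.Transaction = transaction;
        try
        {
            db.Users.Attach(this, GetSingleUserById(this.Id));
            db.Refresh(System.Data.Linq.RefreshMode.KeepCurrentValues, db.Users);

            foreach (CourseByStudent courseByStudent in this.courseByStudent)
            {
                if (courseByStudent == null) break;
                // ... set up courseByStudent / CourseUserStatus and call
                //     InsertOnSubmit(...) as before, but without calling
                //     db.SubmitChanges() inside the loop ...
            }

            db.SubmitChanges();   // one round trip for everything pending
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
            Logger.Error(typeof(User), ex);
            throw;
        }
    }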
