I have a question on SqlDependency. Let's assume my application receives a notification when the underlying query data changes, and I plan to select the data from the table, process it, and then resubscribe/start the dependency again. The processing takes 1-2 minutes, and in the meantime some data may be added. I'm not sure how I will be notified about that data, or whether I have to wait for the next change to occur, which could be minutes or even hours away.
Below is my sample code; let me know if I am missing something.
Code:
private void LoadNotifications()
{
    DataTable dt = new DataTable();
    using (SqlCommand command = new SqlCommand("SELECT ID FROM dbo.NOTIFICATIONS", m_sqlConn))
    {
        command.Notification = null;
        SqlDependency dependency = new SqlDependency(command);
        dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);
        if (m_sqlConn.State == ConnectionState.Closed)
        {
            m_sqlConn.Open();
        }
        //using (SqlDataReader reader = command.ExecuteReader())
        //{
        //    if (reader.HasRows)
        //    {
        //        // Lets assume this takes 2-3 minutes
        //    }
        //}
    }
}
private void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
    SqlDependency dependency = sender as SqlDependency;
    dependency.OnChange -= OnDependencyChange;
    LoadNotifications();
}
What you're describing is the typical asynchronous nature of data changes versus app notification. In short, changes may be happening constantly and your app will not see them happen in real time. Furthermore, the changes may be going on whether or not your front-end app is open. Is there a requirement to see the data as it is being changed in order to make decisions in the front-end app, or do you need to review data changes made by other users?
One way you might achieve either is a queue of changes, in the form of a table populated by a trigger on the underlying table. Then code your front-end app to periodically read from that table and mark the rows as read (a sketch follows). This would allow you to decouple the data changes from the app views, and you may even see some processing performance increases.
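To make that concrete, here is a minimal sketch of the polling side, assuming a hypothetical dbo.ChangeQueue table (ID, NotificationID, IsRead) kept filled by a trigger on the NOTIFICATIONS table, plus a m_connectionString field; call it from a timer at whatever interval suits you:

private void PollChangeQueue()
{
    using (SqlConnection conn = new SqlConnection(m_connectionString))
    {
        conn.Open();

        // Read the changes queued up by the trigger that have not been processed yet.
        DataTable pending = new DataTable();
        using (SqlCommand select = new SqlCommand(
            "SELECT ID, NotificationID FROM dbo.ChangeQueue WHERE IsRead = 0", conn))
        using (SqlDataAdapter adapter = new SqlDataAdapter(select))
        {
            adapter.Fill(pending);
        }

        foreach (DataRow row in pending.Rows)
        {
            // Process the change here (this part may take a while)...

            // Mark the row as read so it is not picked up again on the next poll.
            using (SqlCommand markRead = new SqlCommand(
                "UPDATE dbo.ChangeQueue SET IsRead = 1 WHERE ID = @id", conn))
            {
                markRead.Parameters.AddWithValue("@id", row["ID"]);
                markRead.ExecuteNonQuery();
            }
        }
    }
}

Because every change lands in the queue table, anything inserted while your 1-2 minute processing runs is simply waiting there for the next poll, instead of being lost between notifications.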
Related
In our application, we need to update certain fields of a building (e.g. whether or not it is already occupied) in real time when any other user of that same application changes those fields for a specific building.
However, this is only necessary if both users are currently viewing the same building. A building is selected in a grid, so when the selected row is changed, I create a new SqlDependency object with the correct SqlCommand.
RefreshDependency method:
private void RefreshDependency()
{
    if (//Check if a row is selected at all and if it's a valid building)
    {
        if (_dependency != null)
        {
            _dependency.OnChange -= OnDependencyChange;
        }
        using (SqlConnection connection = new SqlConnection(_connectionString))
        {
            if (connection.State == ConnectionState.Closed)
            {
                connection.Open();
            }
            using (SqlCommand command = new SqlCommand($"SELECT [MyFields] FROM [dbo.MyTable] WHERE [BuildingID] = '{selectedBuilding.ID}'", connection))
            {
                SqlDependency dependency = new SqlDependency(command);
                _dependency = dependency;
                dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    reader.Read();
                }
            }
            connection.Close();
        }
    }
}
OnDependencyChange event:
private void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
    RefreshDependency();
    //Code to update fields
}
Since the OnChange event is not always fired, I remove the OnChange handler whenever RefreshDependency is called, rather than inside the OnChange event itself, before reassigning it.
The issue now is that, while the code works perfectly fine and the application updates correctly whenever a change happens in the database, I noticed while looking into the memory-leak issue caused by SqlDependency that every time a new SqlDependency is created, a new conversation appears in sys.conversation_endpoints. That's alright so far, but if the user happens to keep their overview open for a long time and selects, say, 100 buildings over the course of a few hours, then 100 new conversations are added to sys.conversation_endpoints. However, only those that receive a change will ever actually be set to CLOSED; the rest will remain in STARTED_OUTBOUND for an extremely long lifespan.
Now, I can clean out the CLOSED ones with no issue, but I don't think I can simply do the same for STARTED_OUTBOUND, lest I remove conversations that still need to be open for other users, right? Everyone naturally shares one database.
I'm not sure this is entirely an issue with my code either: even if only a single dependency is ever created, if no one ever changes anything for that building and the user simply closes the app or the overview (causing SqlDependency.Stop() to be called), that still leaves that one conversation stuck in STARTED_OUTBOUND.
I've noticed that if a field is changed much, much later, the database will set all the related conversations to CLOSED as well, no matter how long ago they were created. But considering that multiple buildings may never receive a change, I'm a bit worried about leaving these conversations unchecked. I know the CLOSED ones are considered a memory leak, and I have already implemented a fix for those.
If this is by design for SqlDependency, should I look into using alternatives such as SqlDependencyEx or SqlTableDependency instead?
I am in this situation (see image): there are 5 user controls stacked in an empty panel.
I have a vertical menu with several buttons, and when I click a button I use BringToFront() to display the corresponding control:
private void Button1_Click(object sender, EventArgs e)
{
    tabRandom1.BringToFront();
}
Each user control contains a DataGridView and other elements that have to be loaded with data coming from the database, but I would like that when I click button1, only the elements of usercontrol1 are loaded.
When I tried:
private async void UserControl1_Load(object sender, EventArgs e)
{
    this.DataGridView1.DataSource = await GetDataAsync();
}
I get this exception:
There is already an open DataReader associated with this Connection which must be closed first.
I am looking for the best way to load the elements of the active user control.
The error you are getting says clearly that you already have an open DataReader.
You cannot have more than one open DataReader on a single connection.
To make your database access code more stable, write it with using blocks so the objects are disposed automatically, like this:
using (SqlConnection con = new SqlConnection("..."))
{
    con.Open();
    using (SqlCommand cmd = new SqlCommand("...", con))
    {
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                // Do the things
            }
        }
    }
    // No need to call Close(); the connection is disposed at the end of the using block.
}
Or, if you opened one connection for, say, the whole class (as I think you did above), just do not wrap SqlConnection con = new Sql... in a using block, but do wrap everything else, and you will have no problem; just do not forget to call connection.Close().
I always use using for every component of a SQL connection, and it does not affect my performance.
When you reorganize your code this way you will get rid of that problem. One more piece of advice: do not load the data when someone opens the form, since you would load it 5 times while the user may only look at one control. Instead, create a method like RefreshData() inside your user control, and before you call yourUserControl.BringToFront(), also call yourUserControl.RefreshData(). This way you load data only when needed, it is always fresh, and you have an easy method for refreshing the data anywhere it is required (see the sketch below).
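A minimal sketch of that pattern, assuming the user control is the tabRandom1 / UserControl1 from the question, keeps its own DataGridView1, and reuses the existing GetDataAsync() helper (all of those names come from the question; adapt them as needed):

// Inside UserControl1: expose a refresh method instead of loading in the Load event.
public async Task RefreshDataAsync()
{
    // Each refresh awaits its own query, so no DataReader is left open elsewhere.
    this.DataGridView1.DataSource = await GetDataAsync();
}

// In the form that hosts the stacked controls:
private async void Button1_Click(object sender, EventArgs e)
{
    tabRandom1.BringToFront();
    await tabRandom1.RefreshDataAsync(); // load only the control that was brought to front
}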
I'm creating a WinForms application in which one form is made transparent. This form is used to show some popup message boxes; using a timer, the form queries the database every second.
Currently I'm creating the database connection inside a using block (the database is Postgres).
Method 1
namespace MyApplication
{
    public partial class frmCheckStatus : Form
    {
        private void timerCheckStatus_Tick(object sender, EventArgs e)
        {
            using (NpgsqlConnection conn = new NpgsqlConnection("My Connection String"))
            {
                conn.Open();
                //Database queries
                //Show popup message
                conn.Close(); // Forcing it to close
            }
        }
    }
}
So every second this connection object is created and disposed.
Note: I'm not using this object for any other purpose, or inside any other forms or methods.
Would it be good to create a single connection object global to this class, use it inside the timer tick handler, and dispose of it in the form close event?
Method 2
namespace MyApplication
{
    public partial class frmCheckStatus : Form
    {
        private NpgsqlConnection conn = new NpgsqlConnection("My Connection String");

        private void timerCheckStatus_Tick(object sender, EventArgs e)
        {
            //Here use conn object for queries.
            conn.Open();
            //Database queries
            //Show popup message
            conn.Close(); // Forcing it to close
        }

        private void frmCheckStatus_FormClosing(object sender, FormClosingEventArgs e)
        {
            conn.Dispose();
        }
    }
}
Which will be better, considering memory, resource usage, execution time, etc.? Please give a proper reason for your choice of method.
Looking at the documentation for your connection class, it would appear that it supports connection pooling. This means that connections to the same endpoint (same connection string) will reuse existing physical connections rather than incurring the overhead of creating new ones.
I'm not familiar with your particular connection class, but if the behaviour is anything like the SqlConnection class in ADO.NET, repeatedly creating a new connection with the same connection string should not be particularly expensive (computationally).
As an aside, I would wrap your connection logic in try/finally to ensure it gets closed in the event of an application exception.
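For example, a minimal sketch of the tick handler with that protection (the queries themselves are elided, as in the question):

private void timerCheckStatus_Tick(object sender, EventArgs e)
{
    NpgsqlConnection conn = new NpgsqlConnection("My Connection String");
    try
    {
        conn.Open();
        //Database queries
        //Show popup message
    }
    finally
    {
        // Runs even if a query throws, so the connection is always released
        // back to the pool (or closed for real, if pooling is disabled).
        conn.Close();
        conn.Dispose();
    }
}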
I can't see any advantage to instantiating a new connection every time you run a new query. I know it's done often in code, but there is overhead associated with it, however small. If you're running multiple queries from the start of the program to the end of the program, I think you should re-use the existing connection object.
If your goal is to make the connection "disappear" from the server (which I wouldn't generally worry about if this program runs on one machine; if it runs on dozens, that's another story, look up PgBouncer), then that should be just as easily accomplished by turning connection pooling off, after which the Close() method would take care of it.
You kind of asked for pros and cons, and while it's not necessarily harmful to instantiate the connection within the loop, I can't imagine how it could be better.
For what it's worth, you may want to consider carrying the connection as a property (preferably outside of the form class, since you may want to eventually use it elsewhere). Something like this:
private NpgsqlConnection _PgConnection;
public NpgsqlConnection PgConnection
{
    get
    {
        if (_PgConnection == null)
        {
            NpgsqlConnectionStringBuilder sb = new NpgsqlConnectionStringBuilder();
            sb.Host = "hostname.whatever.com";
            sb.Port = 5432;
            sb.UserName = "scott";
            sb.Password = "tiger";
            sb.Database = "postgres";
            sb.Pooling = true;
            _PgConnection = new NpgsqlConnection(sb.ToString());
        }
        if (!_PgConnection.State.Equals(ConnectionState.Open))
            _PgConnection.Open();
        return _PgConnection;
    }
    set { _PgConnection = value; }
}
Then, within your form (or wherever you execute your SQL), you can just call the property:
NpgsqlCommand cmd = new NpgsqlCommand("select 1", Database.PgConnection);
...
Database.PgConnection.Close();
And you don't need to worry if the connection is open or closed, or if it's even been created yet.
The only open question is whether you want that connection to actually disappear on the server, which you would change by altering the Pooling property.
I wrote a CLR trigger that fires whenever a new record is inserted into my table and then passes the values to my WCF service. Now I have to change the process to handle updates: only when a particular column gets updated should I pull the values from the other two tables.
I am just wondering: is there any way to start the CLR trigger only when that particular column gets updated?
The scenario is like this:
Table 1: Customer Details (Cust.No, Cust.Name, Desc)
Table 2: Address (DoorNo, Street, City, State)
What I am trying to do: if the Desc column in Table 1 gets updated, the CLR trigger should fire and pass all the values from Table 1 and Table 2 based on that Desc.
Here is my code for Insert:
[Microsoft.SqlServer.Server.SqlTrigger(Name = "WCFTrigger", Target = "tbCR", Event = "FOR UPDATE, INSERT")]
public static void Trigger1()
{
    SqlCommand cmd;
    SqlTriggerContext myContext = SqlContext.TriggerContext;
    SqlPipe pipe = SqlContext.Pipe;
    SqlDataReader reader;

    if (myContext.TriggerAction == TriggerAction.Insert)
    {
        using (SqlConnection conn = new SqlConnection(@"context connection=true"))
        {
            conn.Open();
            //cmd = new SqlCommand(@"SELECT * FROM tbCR", conn);
            cmd = new SqlCommand(@"SELECT * FROM INSERTED", conn);
            reader = cmd.ExecuteReader();
            reader.Read();

            // Get the inserted values here
            string custNo, custName, desc;
            custNo = reader[0].ToString();
            custName = reader[1].ToString();
            desc = reader[2].ToString();

            myclient.InsertOccured(custNo, custName, desc);
            reader.Dispose();
        }
    }
}
You cannot make the trigger fire selectively; it will always run, no matter which columns were updated. However, once it is running you can consult the COLUMNS_UPDATED() function:
Returns a varbinary bit pattern that indicates the columns in a table or view that were inserted or updated. COLUMNS_UPDATED is used anywhere inside the body of a Transact-SQL INSERT or UPDATE trigger to test whether the trigger should execute certain actions.
So you would adjust your trigger logic to take the appropriate action according to which columns were updated; in a SQLCLR trigger the same information is exposed through SqlTriggerContext, as sketched below.
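For example, a minimal sketch of an update branch that tests the column from inside the CLR trigger. SqlTriggerContext.IsUpdatedColumn is the SQLCLR counterpart of COLUMNS_UPDATED(); the ordinal of the Desc column is an assumption here, so adjust it to your actual table definition:

if (myContext.TriggerAction == TriggerAction.Update)
{
    // IsUpdatedColumn reports whether the column at the given ordinal was part of the UPDATE.
    int descOrdinal = 2; // assumed zero-based position of the Desc column in tbCR
    if (myContext.IsUpdatedColumn(descOrdinal))
    {
        // Only now read INSERTED (and the related address rows) and act on the change.
    }
}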
That being said, calling WCF from SQLCLR is a very, very bad idea, and calling WCF from a trigger is even worse. Your server will die in production as transactions block or abort waiting for some HTTP response to crawl back across the wire. Not to mention that the calls are inherently incorrect in the presence of rollbacks, since you cannot undo an HTTP call. The proper way to handle such actions is to decouple the operation from the WCF call by means of a queue. You can do this with tables used as queues, with true queues, or with Change Tracking. Any of these lets you decouple the change from the WCF call and make the call from a separate process, not from SQLCLR; a sketch of the table-as-queue variant follows.
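To illustrate the table-as-queue idea: the trigger only inserts a row into a hypothetical dbo.WcfOutbox table (CustNo, CustName, Descr), and a separate worker process (a console app or Windows service) drains that table and makes the WCF call outside the original transaction. The table, its columns, and the myclient proxy reused from the question are all assumptions:

// Runs in a separate process, not in SQLCLR: drain the outbox and call WCF from there.
private void DrainOutbox(string connectionString)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // Atomically claim and remove one queued row; READPAST lets several workers
        // run side by side without blocking each other.
        using (SqlCommand dequeue = new SqlCommand(
            "DELETE TOP (1) FROM dbo.WcfOutbox WITH (READPAST) " +
            "OUTPUT DELETED.CustNo, DELETED.CustName, DELETED.Descr", conn))
        using (SqlDataReader reader = dequeue.ExecuteReader())
        {
            while (reader.Read())
            {
                // The HTTP call happens here, safely outside the trigger's transaction.
                // A production version would mark rows instead of deleting them, so a
                // failed call can be retried.
                myclient.InsertOccured(reader[0].ToString(), reader[1].ToString(), reader[2].ToString());
            }
        }
    }
}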
I have a Windows service that polls an Oracle database at a given interval and, based on some criteria, updates several fields. To do this it opens a cursor and iterates through the rows to update.
protected override void OnStart(string[] args)
{
    TimerCallback timerDelegate = new TimerCallback(DoStuff);
    serviceTimer = new Timer(timerDelegate, null, 0, 20000);
}

private void DoStuff(object state)
{
    // Set up connectionString and sqlQuery
    using (OracleConnection oraConnect = new OracleConnection(connectionString))
    {
        oraConnect.Open();
        using (OracleCommand oraCommand = new OracleCommand(sqlQuery, oraConnect))
        using (OracleDataReader oraReader = oraCommand.ExecuteReader())
        {
            while (oraReader.Read())
            {
                // Do some processing here - may take some time
                // Update database here
            }
        }
    }
}
My question is, say for example the timer interval is 20 seconds (as above). What happens if the cursor takes 30 seconds to iterate through? I realise that each timer will work in a separate thread, but given that a new database connection is established each time, will the second call see the changes made by the first?
The second process will only see changes committed by the first. If the second process starts before the first commits, then it will not see the changes made by the first.
The assumption made is that one commit is performed in your process. If you are performing a commit after each update, then all bets are off. Your second process will see some of the updates performed by the first, but not all - only those committed when the cursor is opened in the second process.
If you are worried that the cursor may take more than 20 seconds, you can stop your timer before processing and restart it after the processing completes. Is it possible for you to stop the timer before processing? A sketch of that approach is below.
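A minimal sketch of that idea with the System.Threading.Timer from the question: schedule it as a one-shot and re-arm it at the end of each run, so callbacks can never overlap (the 20-second figure is taken from the question):

protected override void OnStart(string[] args)
{
    // Fire once, immediately; no periodic interval, so runs cannot overlap.
    serviceTimer = new Timer(DoStuff, null, 0, Timeout.Infinite);
}

private void DoStuff(object state)
{
    try
    {
        // Open the connection, iterate the cursor, update rows...
        // This may take longer than 20 seconds without triggering a second run.
    }
    finally
    {
        // Re-arm the timer only after the work is done.
        serviceTimer.Change(20000, Timeout.Infinite);
    }
}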