I have a grid/table that I am populating with data coming from a (USB) scanner. I am using SignalR to transfer the data from the scanning API to the client. I also need to trigger a Print API when the table/grid reaches a predefined number of rows. The problem is how to keep track of that predefined value through SignalR.
Since it is a stateless process, I can't use a hidden variable (I haven't tried, but I assume I can't). I also can't go back to the DB again and again because of performance issues.
So my question is: how can I check with every scan whether the number of rows has been reached or not?
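One way to keep that count without a hidden field or a DB round trip on every scan is to hold it in server memory inside the hub itself. This is only a minimal sketch, assuming ASP.NET Core SignalR; ScanCountHub, the threshold of 50, and the "addRow"/"triggerPrint" client method names are all hypothetical:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub that keeps the row count in server memory per connection,
// so neither the client nor the database has to track it.
public class ScanCountHub : Hub
{
    private const int PrintThreshold = 50;   // the predefined row count, hard-coded here for illustration
    private static readonly ConcurrentDictionary<string, int> RowCounts =
        new ConcurrentDictionary<string, int>();

    public async Task ScanReceived(string barcode)
    {
        int count = RowCounts.AddOrUpdate(Context.ConnectionId, 1, (_, current) => current + 1);

        await Clients.Caller.SendAsync("addRow", barcode);      // let the client add the grid row

        if (count >= PrintThreshold)
        {
            await Clients.Caller.SendAsync("triggerPrint");     // or call the Print API server-side here
            RowCounts[Context.ConnectionId] = 0;                // start counting the next batch
        }
    }
}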
Related
I am using SignalR to update the client side when data changes on the backend. The data can come from either SQL Server or a NoSQL DB.
What I am doing now: I have a timer on the backend that constantly checks whether there is a change in the DB, and if it detects one, I let SignalR update the client side.
I find this approach poor: the timer interval can't be very small; even at 1 second I put a huge load on the DB. Besides, it's not real time, and even 1 second is too long. Additionally, it's quite difficult to detect a DB change when it isn't simply a new record being added, and calculating some hash over all the records every second or less is surely not an option.
I think I once read about another approach based on an event triggered either by the DB or by something else, but I can't remember the details.
So I was wondering if somebody could advise me on a better solution.
I'm afraid we need to take different actions for different scenarios in this case.
You mentioned that your data may come from SQL Server or a NoSQL DB, so I think the scenarios look like this:
1. You write the code that updates the data in both databases yourself. If so, you can call the SignalR code right after the update code, so there is no need for a trigger at all (see the sketch after this list).
2. Building on scenario 1, if the database you use supports stored procedures, you could also look into whether the SignalR code can be called from a stored procedure.
3. You may also check whether the databases you use expose some kind of "data changed" event. If the data stored in the database is updated, the database raises an event, and you can then write custom code to capture these events and call SignalR.
4. If you can't or won't write the code that updates your databases, then your only option is an external process that monitors the database. But, as you mentioned in the question, that performs poorly because of the huge load it puts on the DB; the burden falls on the database itself, not on the external watcher.
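For scenario 1, here is a minimal sketch of what "call SignalR right after the update code" can look like, assuming ASP.NET Core SignalR; DataHub, ItemService, and the "itemChanged" client method are hypothetical names, not an existing API:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class DataHub : Hub { }      // hypothetical hub the clients are connected to

public class ItemService
{
    private readonly IHubContext<DataHub> _hubContext;

    public ItemService(IHubContext<DataHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public async Task UpdateItemAsync(Item item)
    {
        await SaveToDatabaseAsync(item);                               // your existing update code (SQL or NoSQL)
        await _hubContext.Clients.All.SendAsync("itemChanged", item);  // notify clients right after the write
    }

    private Task SaveToDatabaseAsync(Item item) => Task.CompletedTask; // placeholder for the real persistence call
}

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}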
I can't seem to find a way to do this, so I decided to ask here. I am making an ASP.NET site which uses data from a SQL Server database. I am using JavaScript to get the data and format it as I want.
The issue is that I want to use server-sent events to get new entries from my database and display them on the page. So far, the only examples I have seen use timers on the server side and periodically send data to the JavaScript. I can't figure out how to fire the event only when a new row is inserted into the database.
That should be done on the server side, but I don't have a clue where to begin.
SqlDependency:
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.aspx
http://dotnet.dzone.com/articles/c-sqldependency-monitoring
Using the SqlDependency class is a good way to make your data-driven application (whether Web or Windows Forms) more efficient by removing the need to constantly re-query your database to check for data changes.
In short, SqlDependency lets you monitor data changes in your database without having to use something like a timer control that re-queries at set intervals.
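For reference, a minimal sketch of the SqlDependency pattern those articles describe; the connection string, table, and column names are placeholders, and the database needs Service Broker enabled plus query-notification permissions for the account used:

using System;
using System.Data.SqlClient;

class ChangeListener
{
    // Hypothetical connection string; adjust to your environment.
    const string ConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    public static void Start()
    {
        SqlDependency.Start(ConnectionString);   // start the notification listener once per application
        Subscribe();
    }

    static void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        // Notification queries need two-part table names and an explicit column list (no SELECT *).
        using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnDataChanged;

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                // Read (or cache) the current result set here.
            }
        }
    }

    static void OnDataChanged(object sender, SqlNotificationEventArgs e)
    {
        Subscribe();                              // notifications fire only once, so re-subscribe
        Console.WriteLine("Change detected: " + e.Type + " / " + e.Info);
    }

    public static void Stop() => SqlDependency.Stop(ConnectionString);
}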
Here is an old bookmark:
http://rusanu.com/2006/06/17/the-mysterious-notification/
So I'm developing an application that works as sort of a "sidekick" to a large proprietary application which I do not have the source code for nor the rights to modify. The proprietary application does store all of its data in a Microsoft SQL database (version 2008 R2 or higher, I believe), however, and I have a good idea what the data represents. What I need my application to do is to constantly monitor the data as it is being added, updated, and deleted, and then act on the data automatically (such as raising alerts).
The issue is figuring out the best approach to receiving changes made to the database by the other application as they happen, because I don't want to miss a beat.
Here is what I have done so far:
LINQ to SQL: As far as I know, each time I run a query, I receive a new set of data, but I do not get the ability to receive the changes only or be notified of changes.
Typed DataSet using DataSet.Load:
using (IDataReader reader = dataSetInstance.CreateDataReader())
{
    dataSetInstance.Load(reader, LoadOption.OverwriteChanges,
        dataSetInstance.Table1, dataSetInstance.Table2, dataSetInstance.Table3);
}
This didn't work out too well when I did it. dataSetInstance only contained a set of unfilled tables after calling the Load method. I was hoping to call dataSetInstance.GetChanges and dataSetInstance.AcceptChanges at regular intervals after the first call to dataSetInstance.Load to get only the changes. Am I doing it wrong?
Typed DataSet with tables filled individually using their associated table adapters:
using (Table1TableAdapter adapter = new Table1TableAdapter())
{
    adapter.Fill(dataSetInstance.Table1);
}
using (Table2TableAdapter adapter = new Table2TableAdapter())
{
    adapter.Fill(dataSetInstance.Table2);
}
using (Table3TableAdapter adapter = new Table3TableAdapter())
{
    adapter.Fill(dataSetInstance.Table3);
}
Of course, there are actually far more than 3 tables, which adds up to quite a lot of repetitive code (and maintenance work), but the real problem is that I will not receive any change notifications, since I'm not using the Load/AcceptChanges methods (according to the documentation).
Row retrieval by date/time field: This was something I started working on, but I stopped after observing the other application modify fields in rows after creating them. Consider this:
There is a row with the time stamp of a transaction and a boolean field that specifies whether the transaction was canceled later on. If it is canceled, the other application simply goes back to that row and toggles the value. The time stamp remains the same, and my application will never hear the news. There is no statute of limitations; the other application can change this field at any time in the future.
By the way, I should mention that this other application does not implement any constraints within the database such as foreign and primary keys. I believe I read somewhere in the documentation that for row update events and such to fire on the typed DataTable classes, some sort of primary key is needed.
There must be some way to do this!!!
Have you considered SQL Server Query Notifications? This uses SQL Server Service Broker under the covers.
SqlDependency is the C# class to look at.
Using SqlDependency in a Windows Application (a .NET Framework 2.0 example; it should be very similar in later versions)
SqlDependency in an ASP.NET Application
I'd consider solving this at the SQL Server level by implementing auditing triggers or SQL Server traces.
Triggers – the idea is to add triggers to all the tables you want to monitor. The triggers catch all changes and store the data in separate "history" tables. Once this is set up, all your application needs to do is read from those tables.
For more details, check Creating audit triggers in SQL Server.
Traces – you can set up SQL Server traces that store all the information in trace files, and your app can then parse the trace files to see what's going on.
There appears to be no silver bullet to the problem given the conditions, but anything is better than polling the database for changes every minute. What I will probably do now is take Mitch Wheat's suggestion and work from there:
Some tables have rows that are highly likely to change. A recent purchase, for example, is more likely to be cancelled than one from 7 days ago or 6 months ago, and one from a year ago will probably never change. The application will only need to monitor queries restricted to a certain time range. Older rows (in terms of creation time) will simply be refreshed at a much slower rate and without prompting from SQL Server query notifications. The application is going to have to tolerate some stale data in order to avoid needlessly pulling entire tables from the database every minute.
For tables without chronological information, the application will have to receive notifications for queries on conditions that are important or have to be acted on right away, such as WHERE Quantity < 0.
Some more clever approaches will be needed for the rest of the tables. Some tables are never updated and their rows are never deleted, but they gain new rows whenever rows in some other table change. For example: every time the NumberOfPeople value changes for a row in the Room table, another row is added to either the CheckIn or the CheckOut table.
A lot more code needs to be written, but the application is probably going to be doing a lot less unnecessary work when it's running.
In the beginning of our app's development, we were using SqlDependency quite heavily to cache DB results until the notifications told our app to grab a fresh copy.
During testing, we noticed that the SQL DB's performance was getting hammered by the SqlDependency notification service. We scaled back the number of tables we were using SqlDependency on and noticed a large gain in performance. So we thought we were just overusing it and moved on. We are down to only a few tables now.
Later, we discovered that we couldn't scale back the security access level for the username that establishes the dependency. We could have more than one connection string per DB (one for the dependency and one for the rest of the app), but with multiple DBs and DB mirroring this is a pain, both from a SQL DB administration and an app development point of view.
At this point, we are just thinking about moving away from SqlDependency altogether based on the following logic:
We don't need "instant" notification that the data has changed. If we knew within 1 second, that would be fast enough.
With some slight refactoring, we could get it down to just 1 table and poll that table once a second.
Does anyone see a flaw in this logic?
Would polling one table once a second cause more or less load on the DB than SqlDependency?
Has anyone had similar performance issue with SqlDependency?
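For what it's worth, the single-table polling idea described above can be fairly cheap if that table exposes a monotonically increasing marker. A minimal sketch, assuming a hypothetical dbo.ChangeLog table with a rowversion column named RowVer and the 1-second interval from the question:

using System;
using System.Data.SqlClient;
using System.Threading;

class ChangePoller
{
    readonly string _connectionString;
    byte[] _lastVersion = new byte[8];   // rowversion is an 8-byte, ever-increasing counter

    public ChangePoller(string connectionString) => _connectionString = connectionString;

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            CheckForChanges();
            Thread.Sleep(TimeSpan.FromSeconds(1));   // poll once a second
        }
    }

    void CheckForChanges()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, RowVer FROM dbo.ChangeLog WHERE RowVer > @last ORDER BY RowVer", connection))
        {
            command.Parameters.AddWithValue("@last", _lastVersion);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    _lastVersion = (byte[])reader["RowVer"];
                    // Hand the changed row (reader["Id"]) to whatever needs refreshing.
                }
            }
        }
    }
}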
I do dare to try to answer your question, but I am not sure you'll get the answer you were hoping for...
I remember back in the early 90s when Borland promoted the grand new feature of 'callbacks' in their database InterBase, which would give the caller (Delphi) 'notifications' via some very nifty new tech, with promises that the database could be 'active'.
This was later known as the 'waste of time theory'.
And I guess the reason this never took off is that, while the concept of an active DBMS looked very promising, the database is the one tier you can only scale up, not out.
So, programming languages to the rescue. Or rather, the idea of Service Oriented Architecture (SOA). Many confuse SOA with 'web services', which were indeed part of the hype around this new concept.
But if you check out the Fiefdom/Emissary design pattern (the Master/Agent pattern renamed to sound cooler and more professional), you will find that the main idea is that a service has exclusive control of its resources (read: databases) and that all calls are funneled through one single data adapter.
Obviously, such a design does not work at all with triggers or any callback framework.
But I think you should reconsider your entire design. If you funnel all actions and all calls through a single 'DataLayer', perhaps using Entity Framework, and perhaps with a caching mechanism on top of that, you would not have to rely on your database to forward messages back up the food chain.
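To make the "single funnel" idea concrete, here is a minimal sketch under the assumption that every write goes through one data-layer class; OrderDataLayer, Order, and the OrderSaved event are illustrative names only, not an existing API:

using System;
using System.Threading.Tasks;

public class OrderDataLayer
{
    // Raised in-process after every successful write; caches, SignalR hubs,
    // alerting code and so on subscribe here instead of waiting on the database.
    public event Action<Order> OrderSaved;

    public async Task SaveAsync(Order order)
    {
        await PersistAsync(order);       // e.g. DbContext.SaveChangesAsync() in a real Entity Framework setup
        OrderSaved?.Invoke(order);
    }

    private Task PersistAsync(Order order) => Task.CompletedTask;   // placeholder persistence
}

public class Order
{
    public int Id { get; set; }
}

Because all writes pass through this one class, nothing downstream has to ask the database whether anything changed.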
To show how weird things can get when being too 'database-centric', here is an extreme, real-life example of how not to send an email, written a long, long time ago by a coder I was not much impressed with:
Fact 1: Sql Server can send emails.
Fact 2: Asp3 coder does not know if or how this can be done in VbScript.
Asp3: read textbox email-address, send to com+ layer
Com+: take email-address and forward to datalayer
Datalayer: take email-address and forward to a stored procedure
Sproc: take email-address and forward to sql function
function: do weird sub-string things to check that the email address has @ and . in it; return true or false.
Sproc: return a recordset with one column and one row containing 1 or 0
Datalayer: return the table as is.
Com+: convert the first column and row with value 1 or 0 to true or false
Asp3: if true, send email address with email subject and email text to com+
Com+: sends the exact information to datalayer
Datalayer: calls a stored procedure...
Sproc: calls a SQL function...
function: uses the SQL Server email agent to send the email
If you have read this far, my advice is to let SQL Server manage tables, relations, indexes and transactions. It is very good at that. Anything beyond those tasks, and I include cursors in stored procedures in that, is better handled via proper code.
I am working on an e-commerce website and there is an issue we are trying to solve.
After a customer completes an order, she receives three emails (all of them the same) instead of one.
The website is running on three servers, and we think that's the problem, because with only one server the customer gets a single email.
I would like to know what we should do so that the user receives only one email instead of three while we still run the website on three servers.
Thanks in advance, Laziale
You cannot count on locking hints in the database for this. A hint is just a hint; there's no guarantee that the locking will happen as you expect (assuming this is SQL Server). In general, a relational database is just that, a database. A table is not a queuing mechanism and you will always have problems if you try to use it that way.
Nonetheless, in order to implement a different solution, we have to determine whether a single record is being added to the "queue" or three records are. If only a single record is added but three emails are sent out, then the solution is simple: instead of using a database table as your queue, use Microsoft Message Queuing (MSMQ). It has been part of Windows Server since at least 2003, maybe even all the way back to 2000, and it provides an actual queue specifically designed for what you're trying to accomplish.
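A minimal sketch of that queue using the System.Messaging API (it needs a reference to the System.Messaging assembly); the queue path, message body, and the SendConfirmationEmail helper are hypothetical:

using System;
using System.Messaging;

class EmailQueue
{
    const string QueuePath = @".\Private$\OrderEmails";   // hypothetical private, transactional queue

    // Producer side: each web server enqueues exactly one message per confirmed order.
    public static void EnqueueOrder(string orderId)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, transactional: true);

        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(orderId, tx);
            tx.Commit();
        }
    }

    // Consumer side: a single worker reads each message once and sends one email.
    public static void ProcessNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            tx.Begin();
            Message message = queue.Receive(TimeSpan.FromSeconds(5), tx);   // blocks until a message arrives or times out
            string orderId = (string)message.Body;
            // SendConfirmationEmail(orderId);   // hypothetical email-sending call
            tx.Commit();
        }
    }
}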
If there are three actual records being added to the "queue" table in the database that means there is a code problem. Even with three Web servers in the load balancer, the fact remains that a single order submission only happens on one of those servers. The business logic that places the email notification in the queue could not come from more than one server because the request only originates from one server.
I would check the table first and determine if there are multiple records being added. If not, change the implementation to use MSMQ. If so, check your code to see why more than one record is being added in the request.