Is it possible to get the execution plan of a LINQ to SQL or ADO.NET query programmatically for displaying in debug information? If so, how?
Sure, there are two things you will need.
A custom implementation of DbConnection, DbCommand, and DbDataReader. You can use these to intercept all the SQL sent to the DB. You basically set it up so you have a layer that logs every SQL statement that is run. (We plan to open source something in this area in the next few months, so stay tuned.)
A way to display and make sense of the data, which happens to be open source here: https://data.stackexchange.com/stackoverflow/s/345/how-unsung-am-i (see the "include execution plan" option)
Another approach is to do the diagnostics after the fact by looking at the plan cache: sys.dm_exec_query_stats contains the plan handles of cached plans, which you can expand into full execution plans.
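As a sketch of that plan-cache approach, a query like the following (requires the VIEW SERVER STATE permission) joins the stats DMV to the text and showplan functions:

```sql
-- Top cached plans by total CPU time, with their XML showplans.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,
       st.text       AS sql_text,
       qp.query_plan -- XML showplan; click to open graphically in SSMS
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```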
Related
Is there a way to send a correlation ID from C# code to SQL Server at the command level?
For instance, using an x-correlation-id header is an accepted way to track a request through all parts of the system. We are looking for a way to pass this string value to stored procedure calls in SQL Server.
I spent some time reading through documents and posts but was not able to find anything useful.
Can someone please let me know if there is a way to do this? The goal is to be able to track a specific call through all services (which we can do now) and DB calls (which we cannot, and are looking for a solution).
I know this answer is a year late, but in case somebody has the same question:
Since EF Core 2.2, Microsoft provides a method called TagWith() that lets you attach your own annotation to the SQL query EF sends to SQL Server. This way, you can easily track the SQL query with the same correlation id generated in your C# code.
https://learn.microsoft.com/en-us/ef/core/querying/tags
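A minimal sketch of what that looks like (the context, entity, and correlationId names here are illustrative, not from the question):

```csharp
// Assumes an EF Core 2.2+ DbContext with an Orders DbSet.
// The tag is emitted as a SQL comment ahead of the generated query,
// so it shows up in Profiler/Extended Events traces.
var correlationId = GetCorrelationIdFromRequestHeaders(); // e.g. x-correlation-id
var orders = context.Orders
    .Where(o => o.Total > 100)
    .TagWith($"CorrelationId: {correlationId}")
    .ToList();
```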
Unfortunately, this feature is not available in EF 6, but we are not the only ones in this situation. If you just need a simple solution, you could check the thread here and the Microsoft documents.
If you need a more stable solution, you could check this NuGet plugin for EF 6 as well.
To pass your correlation id to SQL Server you have two options:
explicitly pass it as a parameter to your queries and stored procedures.
This is annoying, as it requires work to change all your db calls to take a parameter like @correlationId, and it often doesn't make sense to have that parameter on simple data-retrieval queries. Perhaps you decide to only pass it for data-modification operations.
On the positive side, it's really obvious where the correlation info comes from (i.e. nobody reading the code will be confused), and it doesn't require any additional db calls.
If all your data-modification is done using stored procs I think this is a good way to go.
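A sketch of the explicit-parameter option (the proc name, parameters, and helper signature here are all illustrative):

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: every data-modification proc takes a @correlationId parameter
// alongside its real arguments.
void UpdateOrder(string connectionString, int orderId, string correlationId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.UpdateOrder", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@orderId", orderId);
        cmd.Parameters.AddWithValue("@correlationId", correlationId);
        conn.Open();
        cmd.ExecuteNonQuery(); // the proc logs @correlationId wherever it writes
    }
}
```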
use SQL Server's SESSION_CONTEXT(), which is a way you can set session state on a connection that can be retrieved from within stored procs etc.
You can find a way to inject it into your db layer (e.g. this) so the session context is always set on a connection before executing your real db calls. Then within your procs/queries you get the correlation id from the SESSION_CONTEXT and write to wherever you want to store it (e.g. some log table or as a column on tables being modified)
This can be good, as you don't need to change each of your queries or procs to take a @correlationId parameter.
But it's often not so transparent how the session context is magically set. You also need to be sure it's always set correctly, which can be difficult with ORMs, connection pooling, and other architectural complexities.
If you're not already using stored procs for all data modification, and you can get this working with your db access layer, and you don't mind the cost of the extra db calls this is a good option.
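A minimal sketch of the SESSION_CONTEXT approach (requires SQL Server 2016 or later; the key, table, and column names are illustrative):

```sql
-- Set once on the connection, before your real db calls:
EXEC sys.sp_set_session_context @key = N'CorrelationId',
                                @value = N'7f3d2c10-0000-0000-0000-000000000000';

-- Then inside a proc or query, read it back when logging:
INSERT INTO dbo.AuditLog (CorrelationId, LoggedAt)
VALUES (CAST(SESSION_CONTEXT(N'CorrelationId') AS nvarchar(128)),
        SYSUTCDATETIME());
```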
I wish this was easier.
Another option is to not pass it to SQL Server at all, but instead log all your SQL calls from the tier that makes the call and include the correlation id in those logs. That's how Application Insights and .NET seem to do it by default: logging SQL calls as a dependency along with the SQL statement and the correlation id.
We are building a reporting framework into our application, which necessitates the use of a query builder. Ultimately, we want power users to be able to build SELECT queries to be used to populate the report dataset.
Datasets are built using a DataAdapter (either MSSQL or SQLite). Are there any tools we can use to ensure that the queries built by the end user can only be SELECT statements?
EDIT:
As mentioned above, we target SQLite as one of our supported backends. No DB permissions can be set on that platform.
Set the right permissions on the DB. It's the best solution.
EDIT:
For SQLite you can set read-only permissions on the file, in the file system.
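Besides file-system permissions, the connection itself can be opened read-only. A sketch using Microsoft.Data.Sqlite (System.Data.SQLite uses "Read Only=True" in the connection string instead; the file name here is illustrative):

```csharp
using Microsoft.Data.Sqlite;

// Open the SQLite database read-only at the connection level.
// Any INSERT/UPDATE/DELETE attempted on this connection will throw.
var conn = new SqliteConnection("Data Source=reports.db;Mode=ReadOnly");
conn.Open();
```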
Give the user that you execute the SQL as only the db_datareader permission to ensure that they cannot do anything but read the data.
This question gives more info on how to do that:
How to give a user only select permission on a database
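As a sketch, the setup is roughly (login and user names are illustrative; use a real password):

```sql
-- A reporting login that can only read data in this database.
CREATE LOGIN report_user WITH PASSWORD = 'use-a-strong-password-here';
CREATE USER report_user FOR LOGIN report_user;
ALTER ROLE db_datareader ADD MEMBER report_user; -- SQL Server 2012+ syntax
```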
If the query builder is done in house and returns the SQL statement as a string, you can parse it, either looking for update-statement keywords or with a regex. If you want to spare users the trouble of creating an update query only to find that they can't run it, consider performing this check continuously as they build the query. Alternatively, you can use a third-party query builder, like this one: http://www.activequerybuilder.com/. Unfortunately, I believe it doesn't support anything other than SELECT statements, but it may be worth a shot.
I think all you have to do is wrap the QueryBuilder and expose only permitted operations.
It is not good to do things the other way around, i.e. letting the user construct a query and only at the end telling them it is not permissible.
I'm wondering what the best way to implement this would be.
Basically our project has a requirement that any change made to records in the database should be logged. I already have it completed in C# using Reflection and Generics, but I'm not 100% sure that I used the best method.
Is there a way to do this from inside the SQL database?
The big key is that the way our project works, the ObjectContext is disconnected, so we couldn't use the built in Change Tracking and had to do our own compares against previous Log items.
If you're using SQL Server 2008 or higher, you can implement either change tracking or change data capture directly on the database. Note that the latter is only available in the Enterprise edition engine. There are pros and cons to each method. You'll have to review each solution for yourself as there isn't enough requirement information to go on in the question.
If you're using SQL Server 2005 or below, you'll have to resort to a trigger-based solution, as suggested by the other answers.
You want to look at database triggers.
Depending on the complexity of your data model, you could set up update/insert/delete triggers on the relevant tables; these triggers could log whatever is needed (old/new values, user, timestamp, etc.). See http://msdn.microsoft.com/de-de/library/ms189799.aspx
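A hedged sketch of such an audit trigger (table, column, and audit-table names are all illustrative, not from the question):

```sql
CREATE TRIGGER trg_Customers_Audit
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted holds new values, deleted holds old ones;
    -- the FULL OUTER JOIN covers inserts (no deleted row) and deletes (no inserted row).
    INSERT INTO dbo.CustomersAudit (CustomerId, OldName, NewName, ChangedBy, ChangedAt)
    SELECT COALESCE(i.CustomerId, d.CustomerId),
           d.Name, i.Name,
           SUSER_SNAME(), SYSUTCDATETIME()
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d ON i.CustomerId = d.CustomerId;
END;
```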
Look at my blog to see how you can track data changes without database schema modification:
part1,part2
For your project requirement, a SQL trigger is a better solution than the current C# reflection approach, because triggers provide a way for the database management system to actively control, monitor, and manage a group of tables whenever an insert, update, or delete operation is performed. Moreover, the requirement is fulfilled at the database layer itself, so it serves as a single solution for various front-end applications.
I just did the following:
var items =
    from c in Items
    where c.Pid == Campaigns.Where(d => d.Campaign_name == "US - Autos.com").First().Pid
       && c.Affid == Affiliates.Where(e => e.Add_code == "CD4729").First().Affid
    select c;
Then I want to update a field for all the results:
items.ToList().ForEach(c=>c.Cost_per_unit=8);
SubmitChanges();
When querying, I know I can use:
GetCommand(items);
To see the SQL that will be executed.
But on submitting changes, I don't know how to do that.
I looked at:
GetChangeSet()
And I see that there are about 18 updates in this case.
QUESTION 1: are there efficiency issues using L2S to update this way?
QUESTION 2 (maybe this should be a separate question but I'll try it here): is there a general way to just monitor the SQL statements that go to SQL Server 2008 R2? I guess I could disable all but TCP for the instance and WireShark the port (if the stuff is even readable), but I'm hoping there's an easier way.
The DataContext has a Log property that you can hook into to dump the executed SQL. There is also Linq To Sql Profiler which is awesome.
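A sketch of hooking that Log property to capture what SubmitChanges() sends (MyDataContext is an illustrative LINQ to SQL context name):

```csharp
using System;
using System.IO;

// Capture every statement L2S executes, updates included.
var log = new StringWriter();
using (var db = new MyDataContext())
{
    db.Log = log;
    // ... make your entity changes here ...
    db.SubmitChanges();
}
Console.WriteLine(log.ToString()); // the UPDATE statements that were sent
```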
When querying, I know I can use GetCommand(items); to see the SQL that will be executed. But on submitting changes, I don't know how to do that.
You may be able to use this:
yourContext.Log = Console.Out;
But I'm not certain if this logs all SQL or just selects.
Your SQL is different for each affected object. L2S will use dependencies to determine the order in which objects must be saved (if the order is important), then will construct SQL insert, update, and delete statements to persist the changes. The generated statements (especially for update) are dependent upon which properties of the object have changed. There is no way in particular to view the entire batch that will be executed.
QUESTION 1: are there efficiency issues using L2S to update this way?
No, this is how any other automated data access layer would perform updates.
QUESTION 2 (maybe this should be a separate question but I'll try it here): is there a general way to just monitor the SQL statements that go to SQL Server 2008 R2? I disable all but TCP for the instance and WireShark the port, but I'm hoping there's an easier way.
This should be another question, but the answer is to use a trace. While you can trace with any version of SQL Server (including Express), the SQL Server Profiler tool that comes with all versions other than Express makes this very easy to do. If you want more information on this, feel free to ask another question with your specific issues.
Regarding efficiency: of course there are more efficient ways of performing the update than the one in your question. With a single SQL statement like the one below, there would be no SELECT query executed, no code to pull the data from SQL into a set of objects, no code to perform the update on the objects, no code to determine which objects changed, and no code to generate the appropriate SQL statements; only the UPDATE itself runs on the SQL server:
UPDATE Items SET Cost_per_unit = @CostPerUnit
FROM Items
JOIN Campaigns ON ...
JOIN Affiliates ON ...
WHERE ...
But Linq to SQL doesn't provide any way of building such a query. If you are going to be updating thousands of rows in a very simple way similar to your question, you may be better off running a SQL statement like this instead. If there aren't going to be that many rows updated or if the logic is more complicated, then keep it in Linq to SQL.
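If you do want the set-based UPDATE while staying inside LINQ to SQL, DataContext.ExecuteCommand lets you run it directly. A sketch against the tables from the question (the subquery shape is illustrative):

```csharp
// Runs a single parameterized UPDATE instead of select-modify-submit.
// {0}, {1}, {2} are turned into SQL parameters by ExecuteCommand.
db.ExecuteCommand(
    @"UPDATE Items SET Cost_per_unit = {0}
      WHERE Pid = (SELECT Pid FROM Campaigns WHERE Campaign_name = {1})
        AND Affid = (SELECT Affid FROM Affiliates WHERE Add_code = {2})",
    8, "US - Autos.com", "CD4729");
```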
I was wondering how to monitor a database for changes programmatically.
Suppose I want a .NET application to run after every 100th (or nth) row insertion, deletion, or update. How can that be achieved?
I know a little about triggers; they can be used to fire an executable.
But I heard that isn't good practice.
Is there any other way?
2] Do databases fire events on table updates, and can they be caught in a program?
3] Can SQL Reporting Services be used here?
(Also assuming that this application is independent from the actual program which does the database manipulation.)
SQL Server 2005 introduced query notifications, new functionality that allows an application to request a notification from SQL Server when the results of a query change. Query notifications allow programmers to design applications that query the database only when there is a change to information that the application has previously retrieved.
Check out the MSDN link for more clarity and a sample implementation.
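On the .NET side, query notifications are typically consumed via SqlDependency. A hedged sketch (the connection string and query are illustrative; the query must follow the notification rules, e.g. two-part table names and explicit column lists):

```csharp
using System;
using System.Data.SqlClient;

// Starts the listener that receives notifications for this connection string.
SqlDependency.Start(connectionString);
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
{
    var dep = new SqlDependency(cmd);
    dep.OnChange += (s, e) => Console.WriteLine("Data changed: " + e.Info);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // the command must actually execute for the subscription to register
    }
}
```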
A trigger is really going to be your only way unless you aren't concerned about the accuracy of "100th" or "nth".
The answers to 2 and 3 are no.
You can write managed stored procedures (MSDN example), but that doesn't really help you here. In general triggers can be bad practice since they can block the initial caller, but sometimes they are the only solution.
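A hedged sketch of the "every nth row" idea with a trigger plus a counter table (all object names are illustrative): the trigger bumps a counter and, on every 100th insert, writes a row to a queue table that the external .NET application polls.

```sql
CREATE TABLE dbo.InsertCounter (TableName sysname PRIMARY KEY, Cnt bigint NOT NULL);

CREATE TRIGGER trg_Orders_Count
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @cnt bigint;
    -- atomically add the batch size and capture the new total
    UPDATE dbo.InsertCounter
       SET @cnt = Cnt = Cnt + (SELECT COUNT(*) FROM inserted)
     WHERE TableName = N'Orders';
    IF @cnt % 100 = 0
        INSERT INTO dbo.WorkQueue (TableName, QueuedAt)
        VALUES (N'Orders', SYSUTCDATETIME());
END;
```

This keeps the trigger itself cheap (no external process launched from inside the transaction), which addresses the usual objection to firing executables from triggers.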
I think you need to question your requirement to place this low-level data monitoring in a separate application. Think about where your data changes could originate -
Do you have a full understanding of every:
stored proc within your db (now and future) and which ones update this table?
application that may hit your database (now and future)?
If not, then watching the changes right down at the data level (ie within the db) is probably the best option, and that probably means triggers...
Read about "Service Broker" at http://msdn.microsoft.com/en-us/library/ms166104(v=SQL.90).aspx