I have a complicated SQL query, and I need to retrieve its results from C#. What is the best method for retrieving the results of a complicated query like this (QueryByAttribute, FetchXML, QueryExpression, etc.)?
Here is my code:
SELECT R.Name
FROM role R
WHERE R.roleid IN
    (SELECT SR.roleid
     FROM systemuserroles SR
     WHERE SR.systemuserid IN
         (SELECT S.systemuserid
          FROM systemuser S
          WHERE S.new_departmentid3 =
              (SELECT S2.new_departmentid3
               FROM systemuser S2
               WHERE S2.systemuserid = '8B8825F9-6B27-E411-8BA9-000C29E0C100')))
Thanks for the replies.
If you are using an on-premise system I would use the Filtered Views and create a SQL query against them directly. This is by far the best performing option and is fully supported.
If this isn't an option because you are using CRM Online then FetchXML will give you the best performance available.
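For illustration, a minimal FetchXML sketch for the query in the question might look like this. This is a hedged example: it assumes the CRM SDK's IOrganizationService is available as service, and it reuses the entity, attribute, and id values from the question.

// FetchXML equivalent of the question's query; each link-entity mirrors
// one of the joins. Names and the GUID are copied from the question.
string fetchXml = @"
<fetch>
  <entity name='role'>
    <attribute name='name' />
    <link-entity name='systemuserroles' from='roleid' to='roleid'>
      <link-entity name='systemuser' from='systemuserid' to='systemuserid'>
        <link-entity name='systemuser' from='new_departmentid3' to='new_departmentid3'>
          <filter>
            <condition attribute='systemuserid' operator='eq'
                       value='8B8825F9-6B27-E411-8BA9-000C29E0C100' />
          </filter>
        </link-entity>
      </link-entity>
    </link-entity>
  </entity>
</fetch>";

EntityCollection roles = service.RetrieveMultiple(new FetchExpression(fetchXml));
foreach (Entity role in roles.Entities)
    Console.WriteLine(role.GetAttributeValue<string>("name"));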
I'm going to disagree with Ben on a few issues:
The Filtered Views contain security checks and are designed for running reports as users (they are not technically supported, but I have had only one breaking change over the course of 12 rollups, and it was relatively minor). The Non-Filtered Views are nearly identical, except that they don't contain all of the extra joins that verify the user has access to the information being queried. So in this respect the Non-Filtered Views will give you the best possible performance, but I would recommend using them only when they make a big performance difference, and only for reports. (Theoretically you could go directly to the tables, but that seems much more likely to be changed by Microsoft in any given rollup.)
The best possible performance for large data requests in CRM Online is not FetchXML but OData, since the OData payload is much smaller than the FetchXML one. However, there are some technical limitations to using OData (you wouldn't be able to do your current query in one call because it has too many joins, but you could do it in two).
P.S. I think this is an easier-to-read equivalent SQL statement:
SELECT R.Name
FROM role R
INNER JOIN systemuserroles SR ON R.roleId = SR.roleId
INNER JOIN SystemUser S ON SR.systemuserid = S.systemuserid
INNER JOIN systemuser S2 ON S.new_departmentid3 = S2.new_departmentid3
WHERE S2.systemuserid = '8B8825F9-6B27-E411-8BA9-000C29E0C100'
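And for completeness, the same joined query sketched as a QueryExpression, under the same assumptions as the FetchXML example above (Microsoft.Xrm.Sdk.Query, an IOrganizationService named service):

// Each AddLink corresponds to one INNER JOIN in the SQL above.
var query = new QueryExpression("role") { ColumnSet = new ColumnSet("name") };
LinkEntity userRoles = query.AddLink("systemuserroles", "roleid", "roleid");
LinkEntity user = userRoles.AddLink("systemuser", "systemuserid", "systemuserid");
LinkEntity sameDepartment = user.AddLink("systemuser", "new_departmentid3", "new_departmentid3");
sameDepartment.LinkCriteria.AddCondition("systemuserid", ConditionOperator.Equal,
    new Guid("8B8825F9-6B27-E411-8BA9-000C29E0C100"));
EntityCollection results = service.RetrieveMultiple(query);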
I have different product types that have different attributes. They cannot be stored in a single table, as the attributes are too distinct. There are a couple of options I'm currently looking at: EAV and a table for each type.
My situation is that, at the moment, there are only a handful of types (let's say 8), but in the near future this will almost certainly grow. The growth is controlled by me, though; it's not driven by users. It will be up to me to add new product types.
I'm currently inclined to use EAV (because I think it covers the growth easily), but I am not sure, as I'm concerned about performance as well as about modeling the types in my language of choice (C#). My question is: given the scenario above, is it better for me to create a single table for each product type and add tables as necessary, or would this be a good case (or, if not good, at least an acceptable one) for EAV?
There's no short good-or-bad answer to this, because it depends on many things:
Do you have a lot of product types?
How do you think each of them will evolve (what will happen when you add new fields to products)?
Do you need to handle "variants" of the products?
Do you intend to add entirely new types of products?
Etc.
EAV is probably a good way to go if you answer "yes" to some or all of these questions.
Regarding C#: I have implemented an EAV data catalog with it in the past, using Entity Framework over SQL Server (so an RDBMS).
It worked well for me.
But if you need to handle a lot of products, performance can quickly become an issue. You could also look at a "NoSQL" solution; have you thought about that?
Just keep in mind that your object model does not have to match your data model. For example, you could perfectly well have a strongly typed object for each type of product if you need to.
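To make that concrete, here is a minimal EAV sketch in C# for Entity Framework code-first. Every class and property name here is an illustrative assumption, not a prescribed schema:

// Minimal EAV model: attribute definitions live in one table, values in another.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<ProductAttributeValue> AttributeValues { get; set; }
}

public class ProductAttribute
{
    public int Id { get; set; }
    public string Name { get; set; }      // e.g. "Radius" or "Length"
    public string DataType { get; set; }  // tells readers how to parse Value
}

public class ProductAttributeValue
{
    // composite key (ProductId, AttributeId), configured via the fluent API
    public int ProductId { get; set; }
    public int AttributeId { get; set; }
    public string Value { get; set; }     // stored as text, converted on read
    public virtual Product Product { get; set; }
    public virtual ProductAttribute Attribute { get; set; }
}

A strongly typed object per product type can then be populated by projecting out of AttributeValues, keeping the object model decoupled from this storage layout.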
Much depends on the operations that will be performed on the entities. If you will:
often add new attributes to products;
add a lot of product types;
implement full product-type search (or other "full product type" features);
then I recommend using EAV.
I implemented an EAV data structure with ADO.NET and MS SQL Server in the past and didn't have any problems with performance.
Also, Morten Bork above recommends using "sub types". But if you want to implement some "full product type" features, I think that will be more difficult than with a pure EAV model.
EAV doesn't really play well with a relational database. So if that is what you are doing (i.e., connecting to SQL), then I would say no. Take the hit in development time and design a table per type of product, or make an aggregate table that holds the various properties for a product type and then connect the properties to the relevant tables.
So if a product contains "cogs", then you have a table with "teeth count", "radius", etc.
Another product type has "screws", with properties "length", "rifling", etc.
And if a product type has both cogs and screws, it simply has a relation to each of these subtypes.
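A hedged sketch of that layout as EF code-first classes (the names follow the cog/screw example above; the shared-primary-key wiring is an assumption about how you would configure it):

// One table per subtype; a product points to whichever specs it has.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual CogSpec Cog { get; set; }     // null when the product has no cog
    public virtual ScrewSpec Screw { get; set; } // null when the product has no screw
}

public class CogSpec
{
    public int ProductId { get; set; }  // PK, also FK to Product
    public int TeethCount { get; set; }
    public decimal Radius { get; set; }
}

public class ScrewSpec
{
    public int ProductId { get; set; }  // PK, also FK to Product
    public decimal Length { get; set; }
    public string Rifling { get; set; }
}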
I am adding multiple entities to the database using AddRange in Entity Framework:
var newTagMasters = new List<TagMaster>();
foreach (string tagNumber in notPresent)
{
    var element = new TagMaster { Name = Guid.NewGuid().ToString(), IsActive = true };
    element.TagCollections.Add(new TagCollection { TagNumber = tagNumber });
    newTagMasters.Add(element);
}
dbContext.TagMasters.AddRange(newTagMasters);
dbContext.SaveChanges();
What I was expecting is that, by adding the complete collection to the context with the AddRange method, a single query would be sent to the database. But to my surprise, I see a separate insert statement for each record.
Any insights?
The problem you are running into is that, sadly, the Entity Framework commands know no bulk inserts. Instead they generate one statement per row that you want to insert.
There is no workaround for this within EF itself.
The only possibility for getting one single statement that does all the inserts is to use specific classes or libraries, for example SqlBulkCopy, which needs no external library to work.
Here is a link to the MSDN documentation:
https://msdn.microsoft.com/de-de/library/system.data.sqlclient.sqlbulkcopy(v=vs.110).aspx
The usage is quite easy: you give the constructor your connection (opened beforehand!), tell it what to write to the server and what the destination table name is, and then close the connection again afterwards.
sqlcon.Open();
using (SqlBulkCopy sqlBulkCopyVariable = new SqlBulkCopy(sqlcon))
{
sqlBulkCopyVariable.BulkCopyTimeout = 600; // 10 minutes timeout
sqlBulkCopyVariable.DestinationTableName = "targetTableName";
sqlBulkCopyVariable.WriteToServer(MyData);
}
sqlcon.Close();
WriteToServer takes a DataTable, a DataReader, or even an array of DataRow objects; the exact implementation depends on how you want to hand the data to it. From my personal experience: the class is quite fast and generates only one single statement. But it exists only for the SQL Server client, so if you use a different provider you will need to look up which class or external library fits it best.
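For example, if your rows start out as objects (like the TagMaster list in the question) rather than a DataTable, a small hand-built projection works. This sketch assumes the destination table has matching Name and IsActive columns:

// Build a DataTable whose columns match the destination table, then hand
// it to the SqlBulkCopy instance from the snippet above.
var table = new DataTable();
table.Columns.Add("Name", typeof(string));
table.Columns.Add("IsActive", typeof(bool));
foreach (var tag in newTagMasters)
    table.Rows.Add(tag.Name, tag.IsActive);
sqlBulkCopyVariable.WriteToServer(table);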
I am afraid inserts through LINQ are not as optimized as you would expect; they are performed as multiple insert statements, as you observed. In those cases you could bypass LINQ and use the bulk-copy alternatives instead (e.g., the SqlBulkCopy class for MS SQL Server, COPY for PostgreSQL, etc.).
I'm working on a fairly high performance application, and I know database connections are usually one of the more expensive operations. I have a task that runs pretty frequently, and in the course of business it has to select data from Table1 and Table2. I have two options:
Keep making two Entity Framework queries like I am right now: select from Table1 and select from Table2 in LINQ queries (what I'm currently doing).
Create a stored procedure that returns both result sets in one query, using multiple result sets.
I'd imagine the cost to SQL Server is the same: the same IO is being performed. I'm curious if anyone can speak to the performance bump that may exist in a "hot" codepath where milliseconds matter.
and I know database connections are usually one of the more expensive operations
Unless you turn off connection pooling, then as long as there are connections already established in the pool and available to use, obtaining a connection is pretty cheap. It also really shouldn't matter here anyway.
When it comes to two queries (whether EF or not) versus one query with two result sets (using NextResult on the data reader), you will gain a little, but really not much. Since there's no need to re-establish a connection either way, there's only a very small reduction in overhead, and it will be dwarfed by the amount of actual data if the results are large enough for you to care about this impact. (There's less overhead again if you can union the two result sets, but then you could do that with EF too anyway.)
If you mean the bytes going to and fro over the connection after it has been established, then you should be able to send slightly less to the database (we're talking a handful of bytes) and receive about the same, assuming your query obtains only what is actually needed. That is, do something like from t in Table1Repository select new {t.ID, t.Name} if you only need IDs and names, rather than pulling back complete entities for each row.
Entity Framework does a whole bunch of things, and doing anything costs, so taking on more of the work yourself should mean you can be tighter. However, as well as introducing new scope for error compared with the tried and tested, you also introduce new scope for doing things less efficiently than EF does.
Any seeking of commonality between different pieces of database-handling code gets you further and further along the sort of path that ends up with you producing your own version of EntityFramework, but with the efficiency of all of it being up to you. Any attempt to streamline a particular query brings you in the opposite direction of having masses of similar, but not identical, code with slightly different bugs and performance hits.
In all, you are likely better off taking the EF approach first. If a particular query proves troublesome performance-wise, first see whether you can improve it while staying with EF (optimise the LINQ, use AsNoTracking when appropriate, and so on); if it is still a hotspot, then hand-roll with ADO.NET for just that part and measure. Until then, saying "yes, it would be slightly faster to use two result sets with ADO.NET" isn't terribly useful, because just how much that "slightly" is depends.
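For reference, the two-result-set ADO.NET shape being discussed looks roughly like this (a sketch; the table and column names are placeholders, not your actual schema):

// using System.Data.SqlClient;
// One round trip, two result sets, advanced with NextResult().
const string sql = "SELECT Id, Name FROM Table1; SELECT Id, Value FROM Table2;";
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume Table1 rows
        }
        reader.NextResult(); // advance to the second result set
        while (reader.Read())
        {
            // consume Table2 rows
        }
    }
}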
If the query simply reads from Table1 and Table2, then LINQ queries should give performance similar to executing a stored procedure (plain SQL). But if the query runs across different databases, plain SQL is better, since you can UNION the result sets and get the data from all the databases.
In MySQL "Explain" statement can be used to know the performance of query. See this link:
http://www.sitepoint.com/using-explain-to-write-better-mysql-queries/
Another useful technique is to check the SQL generated for your LINQ query in the output window of Microsoft Visual Studio. You can execute that query directly in a SQL editor and check its performance.
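If you are on Entity Framework 6, you can also capture the generated SQL in code rather than reading it out of the output window (a one-line sketch):

// EF6: log every SQL statement this context issues to the debug output.
dbContext.Database.Log = sql => System.Diagnostics.Debug.WriteLine(sql);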
I need to select a product for a user based on other data in the database.
If the data is filtered out on the database, less data needs to be sent back to the application.
User (Id)
Product (code)
Access (User_Id, code) // Matching users to object codes
Will this query execute on the database, sending back the minimal amount of data?
var products = QueryOver.Of<Access>()
.Where(a => a.User_Id == User.Id())
.Select(Projections.Property<Access>(a => a.Code));
var access = QueryOver.Of<Product>()
.WithSubquery.WhereProperty(h => h.Code)
.In(products)
.Future();
This is a very reasonable way to filter data. The result of your queries would look like one SELECT against the DB:
SELECT ...
FROM Product
WHERE Code IN (SELECT Code FROM Access WHERE User_Id = @userId)
So this will certainly be executed on the DB server, less data will be transferred, and, what's more, it also allows correct paging (if needed); see the sketch below. This scenario is the standard way to filter a parent over its one-to-many relations (find parents whose children have...).
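For example, paging can be layered onto the outer query like this (a sketch: pageIndex and pageSize are assumed variables, session an open ISession, and products the subquery from the question):

// Page the filtered products; the subquery still runs inside the one SELECT.
var pagedProducts = session.QueryOver<Product>()
    .WithSubquery.WhereProperty(h => h.Code).In(products)
    .Skip(pageIndex * pageSize)
    .Take(pageSize)
    .Future();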
Maybe also check these: Join several queries to optimise QueryOver query, and NHibernate QueryOver - Retrieve all, and mark the ones already "selected".
Suppose I have the same table on multiple database servers. How do I connect to the multiple database servers, obtain the records from each one, and then display the first 10 of the combined results?
Say, for instance, you are querying the multiple instances using different connection strings for a sample table Orders. You could try the following:
var orders = ConfigurationManager.ConnectionStrings.Cast<ConnectionStringSettings>()
// filter to the relevant connection strings
.Where(s => s.ConnectionString.ToLower().Contains("metadata"))
.SelectMany(s => {
// for each connection string, select a data context
using(var context = new NorthwindEntities(s.ConnectionString)) {
// for each context, select all relevant orders
return context.Orders.ToArray();
} // and dispose of the context when done with it
})
.Take(10)
.ToList();
Here are a couple of solutions off the top of my head.
Solution 1:
1 - Create a staging database / table on server A.
2 - Import all data from all servers into table.
3 - Query table to get results.
Solution 2:
1 - Create a Linked server for each server B .. Z on server A.
2 - Create Query using 4 part notations on linked servers.
Overall, solution 2 can be slow since you are using distributed transactions.
Solution 1 allows you to store the aggregated results that can be indexed (for speed) and can be queried multiple times.
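For illustration, running solution 2's query from C# might look like the following sketch (ServerB, ServerC, MyDatabase, Orders, and OrderDate are all hypothetical names, and the linked servers must already be registered on server A):

// using System.Data.SqlClient;
// Four-part names let a single query on server A read the remote tables.
const string sql = @"
    SELECT TOP 10 *
    FROM (SELECT * FROM ServerB.MyDatabase.dbo.Orders
          UNION ALL
          SELECT * FROM ServerC.MyDatabase.dbo.Orders) AS combined
    ORDER BY OrderDate;";
using (var connection = new SqlConnection(connectionStringToServerA))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // map each combined row
        }
    }
}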
As for importing the data from server to server, just pick a way to do it; there are too many options out there to get into the particulars.