We have scalar functions in our database for returning things like "number of tasks for a customer" or "total invoice amount for a customer".
We are experimenting with doing this without stored procedures ... normally we would just call this function in a stored procedure and return it as a single value.
Is there a way to use or access scalar functions with LINQ to SQL? If so, I would be interested in seeing an example of how ... if not, how would it be best to handle this type of situation ... if it is even doable.
LINQ-to-SQL supports use with UDFs, if that is what you mean. Just drag the UDF onto the designer surface and you're done. This creates a matching method on the data-context, marked [Function(..., IsComposable=true)] or similar, telling LINQ-to-SQL that it can use this in queries (note that EF doesn't support this usage).
You would then use it in your query like:
var qry = from cust in ctx.Custs
select new {Id = cust.Id, Value = ctx.GetTotalValue(cust.Id)};
which will become TSQL something like:
SELECT t1.Id, dbo.MyUdf(t1.Id)
FROM CUSTOMER t1
(or there-abouts).
The fact that it is composable means that you can use the value in queries - for example in a Where()/WHERE - and so reduce the data brought back from the server (although obviously the UDF will still need to be executed in some way).
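For instance (reusing the hypothetical `ctx.GetTotalValue` wrapper from the query above), a filter on the UDF composes straight into the WHERE clause:

```csharp
// Sketch only: ctx, Custs and GetTotalValue are the illustrative names used
// earlier, not a real schema. Because the UDF is marked composable, the
// filter runs on the server rather than in memory.
var bigCustomers = from cust in ctx.Custs
                   where ctx.GetTotalValue(cust.Id) > 1000
                   select new { cust.Id, Total = ctx.GetTotalValue(cust.Id) };

// Roughly: SELECT t1.Id, dbo.MyUdf(t1.Id) FROM CUSTOMER t1
//          WHERE dbo.MyUdf(t1.Id) > 1000
```

Only rows passing the UDF predicate come back over the wire, though the server still has to evaluate the UDF per row.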
Here's a similar example, showing a pseudo-UDF in use on a data-context, illustrating that the C# version of the method is not used.
Actually, I'm currently looking at such UDFs to provide "out of model" data in a composable way - i.e. a particular part of the system needs access to some data (that happens to be in the same database) that isn't really part of the same model, but which I want to JOIN in interesting ways. I also have existing SPs for this purpose... so I'm looking at porting those SPs to tabular UDFs, which provides a level of contract/abstraction surrounding the out-of-model data. So because it isn't part of my model, I can only get it via the UDF - yet I retain the ability to compose this with my regular model.
I believe this MSDN documentation is what you're after (as part of this wider topic of calling user-defined functions in LINQ to SQL). Can't say I've done it myself, but it sounds right...
I have two tables in my database: TPM_AREAS and TPM_WORKGROUPS. There exists a many-to-many relationship between these two tables, and these relationships are stored in a table called TPM_AREAWORKGROUPS. This table looks like this:
What I need to do is load all these mappings into memory at once, in the quickest way possible. As TPM_AREAWORKGROUPS is an association, I can't just say:
var foo = (from aw in context.TPM_AREAWORKGROUPS select aw);
I can think of three ways to possibly do this, however I'm not quite sure how to accomplish each of them nor which one is the best.
1) Load in every workgroup, including the associated areas:
Something like:
var allWG = (from w in context.TPM_WORKGROUPS.Include("TPM_AREAS")
where w.TPM_AREAS.Count > 0
select w);
// Loop through this enumeration and manually build a mapping of distinct AREAID/WORKGROUPID combinations.
Pros: This is probably the standard EntityFramework way of doing things, and doesn't require me to change any of the database structure or mappings.
Cons: Could potentially be slow, since the TPM_WORKGROUPS table is rather large and the TPM_AREAWORKGROUPS table only has 13 rows. Plus, there's no TPM_AREAWORKGROUPS class, so I'd have to return a collection of Tuples or make a new class for this.
2) Change my model
Ideally, I'd like a TPM_AREAWORKGROUP class, and a context.TPM_AREAWORKGROUP property. I used the designer to create this model directly from the database, so I'm not quite sure how to force this association to be an actual model. Is there an easy way to do this?
Pros: It would allow me to select directly against this table, done in one line of code. Yay!
Cons: Forces me to change my model, but is this a bad thing?
3) Screw it, use raw SQL to get what I want.
I can get the StoreConnection property of the context, and call CreateCommand() directly. I can then just do:
using (DbCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = "SELECT AreaId, WorkgroupId FROM TPM_AREAWORKGROUPS";
    using (var reader = cmd.ExecuteReader())
    {
        // Loop through and get each mapping
    }
}
Pros: Fast, easy, doesn't require me to change my model.
Cons: Seems kind of hacky. Everywhere else in the project, we're just using standard Entity Framework code so this deviates from the norm. Also, it has the same issues as the first option; there's still no TPM_AREAWORKGROUPS class.
Question: What's the best solution for this problem?
Ideally, I'd like to do #2 however I'm not quite sure how to adjust my model. Or, perhaps someone knows of a better way than my three options.
You could do:
var result = context
.TPM_WORKGROUPS
.SelectMany(z => z.TPM_AREAS.Select(z2 => new
{
z2.AREAID,
z.WORKGROUPID
}));
The translated SQL will be a simple SELECT AREAID, WORKGROUPID FROM TPM_AREAWORKGROUPS.
About other options:
I wouldn't use option 3) because I personally avoid raw SQL as much as possible when using Entity Framework (see https://stackoverflow.com/a/8880157/870604 for some reasons).
I wouldn't use option 2) because you would have to change your model, and there is a simple and efficient way that doesn't require changing it.
What about using projection to load the data? You could do that to fill an anonymous object and then work with it however you like.
I want to dynamically query an object with System.Linq.Dynamic.
var selectData = (from i in data
select i).AsQueryable().Where("Name = #0","Bob1");//This works fine with a non-entity object
I know that we cannot project onto a mapped entity. I believe that is the reason this code fails
foreach (var item in rawQuery.ObsDataResultList)
{
var propertyData = (from i in item
select i).AsQueryable().Where("PropertyName = #0", "blah");
} // item is an Entity complex type
Error
Could not find an implementation of the query pattern for
source type 'ClassLibrary1.Model.bhcs_ObsData_2_Result'. 'Select' not
found.
Given the fact that I need to specify the PropertyName at runtime, I don't see any way to project with an anonymous type or a DTO.
I don't need to retain any of the Entity functionality at this point, I just need the data. Copying the data onto something that is queryable is a valid solution. So, is it possible to query entity framework with dynamic LINQ?
And here is the entity class header (the thing I'm trying to query, aka the item object)
[EdmComplexTypeAttribute(NamespaceName="MyDbModel", Name="blah_myQuery_2_Result")]
[DataContractAttribute(IsReference=true)]
[Serializable()]
public partial class blah_myQuery_2_Result : ComplexObject
{
First of all, let me clarify that System.Linq.Dynamic is not a full-fledged Microsoft product. It is just a sample we released some time ago, and we don't thoroughly test different LINQ implementations for compatibility with it. If you are looking for a fully supported text-based query language for the EF ObjectContext API, you should take a look at Entity SQL instead.
Besides that, if you want to use System.Linq.Dynamic and you are ok with testing yourself that you don't hit anything that will block your application from working, then I'll try to see if I can help. I am going to need additional information since I am not sure I understand everything in your code snippets.
First of all I would like to understand: in your first example, what is "data" and where did it come from? In your second snippet, what is "rawQuery" and where did it come from? Besides, what is rawQuery.DataResultList and what is rawQuery.ObsDataResultList?
Also, regarding your second snippet, it seems that you are trying to compose query operators on top of an object that is not actually a query type (although that doesn't explain the error you are getting: given that you are calling AsQueryable, the compiler should have complained before then that bhcs_ObsData_2_Result is neither a generic IEnumerable<T> nor a non-generic IEnumerable).
In your proposed answer you say that you tried with ObjectResult and that seemed to help. Just be aware that ObjectResult is not a query object, and therefore it won't allow you to build queries that get sent to the server. In other words, any query operators you apply to ObjectResult will be evaluated in memory, and if you don't keep this in mind you may end up bringing all the data from that table into memory before you apply any filtering.
Query ObjectResult<blah_myQuery_2_Result> directly instead of the item blah_myQuery_2_Result. For example
var result = (from i in rawQuery.DataResultList
select i).AsQueryable().Where("CreatedDTM > #0", DateTime.Now.Subtract(new TimeSpan(30, 0, 0, 0)));
I see tons of questions on LINQ to SQL vs Stored Procs. I'm more curious about the benefits of using them in tandem as relates to object mapping.
I have my business objects defined, and I have stored procedures for all of my CRUD transactions.
Is it better to plop all the stored procs into a DBML file and call them from there, and then map the results to my business objects, or is it better to just use a DataReader and map it from there?
It's annoying to me because I want my objects as I define them, rather than use MyStoredProcResult objects as linq2sql generates, so I feel I'm doing the same field by field mapping as I would with a data reader.
Performance isn't necessarily key here (unless it's ridiculously slow). I'm looking to create a standard way for all our developers to load data from a database into an object in the simplest fashion with the least amount of code.
Mapping to LINQ2SQL has a serious advantage in being type-safe - you don't really have to worry about parsing the results or adding command parameters. It does it all for you.
On the other hand, calling stored procedures directly with SqlCommand and DataReader tends to give better performance (especially when reading/changing a lot of data).
Regardless of which you choose, it is better to build a separate Data Access Layer, as it allows more flexibility. The logic for accessing/changing the database should not be built into your business objects, because if you are ever forced to change how you store your data, updating your software will be painful.
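A sketch of that separation (the repository names and the Customer type are illustrative, not from the question):

```csharp
// Business code depends only on this interface, not on how data is fetched.
public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> GetAll();
}

// Today's implementation wraps the stored procs with SqlCommand/DataReader;
// a LINQ-to-SQL version could later be swapped in behind the same interface.
public class StoredProcCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;

    public StoredProcCustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer GetById(int id)
    {
        // SqlCommand over the stored proc + field-by-field DataReader mapping
        throw new NotImplementedException();
    }

    public IList<Customer> GetAll()
    {
        throw new NotImplementedException();
    }
}
```

This keeps the field-by-field mapping in one place instead of scattering it through the business layer.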
Not a direct answer to your question, but if you want your objects as the result of a query, you probably have to consider code-first schemas. Linq2SQL does not support this, but Entity Framework and NHibernate do.
The direct answer is that a DataReader will obviously have less overhead, but at the same time it will involve many more magic strings. Overhead is bad in terms of performance (in your case not that big a deal). Magic strings are bad in terms of maintaining code. So ultimately this will be your personal choice.
LINQ2SQL can provide your objects populated with the results of the query. You will have to build child objects in such a way as to support either a List(Of T) or List depending on your language choice.
Suppose you have a table with an ID, a Company Name, and a Phone Number for fields. Querying that table would be straight-forward in either LINQ or a stored procedure. The advantage that LINQ brings is the ability to map the results to either anonymous types or your own classes. So a query of:
var doSomething = from sList in myTableRef select sList;
would return an anonymous type. However, if you also have a class like this:
public class Company
{
    public int ID;
    public string CompanyName;   // a member can't share the enclosing class's name
    public string PhoneNumber;
}
changing your query to this will populate Company objects as it moves through the data:
List<Company> companies = (from sList in myTableRef
                           select new Company
                           {
                               ID = sList.id,
                               CompanyName = sList.company,
                               PhoneNumber = sList.phonenumber
                           }).ToList();
My C# syntax may not be 100% correct as I mainly code in VB, but it will be close enough to get you there.
I'm using the PetaPoco mini-ORM, which in my implementation runs stored procedures and maps them to object models I've defined. This works very intuitively for queries that pull out singular tables (i.e. SELECT * FROM Orders), but less so when I start writing queries that pull aggregate results. For example, say I've got a Customers table and Orders table, where the Orders table contains a foreign key reference to a CustomerID. I want to retrieve a list of all orders, but in the view of my application, display the Customer name as well as all the other order fields, i.e.
SELECT
Customers.Name,
Orders.*
FROM
Orders
INNER JOIN Customers
ON Orders.CustomerID = Customers.ID
Having not worked with an ORM of any sort before, I'm unsure of the proper method to handle this sort of data. I see two options right now:
Create a new aggregate model for the specific operation. I feel like I would end up with a ton of models in any large application by doing this, but it would let me map a query result directly to an object.
Have two separate queries, one that retrieves Orders, another that retrieves Customers, then join them via LINQ. This seems a better alternative than #1, but similarly seems obtuse as I am pulling out 30 columns when I desire one (although my particular mini-ORM allows me to pull out just one row and bind it to a model).
Is there a preferred method of doing this, either of the two I mentioned, or a better way I haven't thought of?
Option #1 is common in CQRS-based architectures. It makes sense when you think about it: even though it requires some effort, it maps intuitively to what you are doing, and it doesn't impact other pieces of your solution. So if you have to change it, you can do so without breaking anything elsewhere.
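For what it's worth, option #1 with PetaPoco can stay quite small: a plain read-model class plus the SQL you already have. This is a sketch assuming PetaPoco's `Fetch<T>` method; `OrderView` is a made-up name.

```csharp
// Hypothetical read model for the join; PetaPoco maps columns by name.
public class OrderView
{
    public int ID { get; set; }           // Orders.ID
    public int CustomerID { get; set; }   // Orders.CustomerID
    public string Name { get; set; }      // Customers.Name
    // ...plus whatever other Orders columns you need...
}

// db is a PetaPoco.Database instance.
var orders = db.Fetch<OrderView>(@"
    SELECT Customers.Name, Orders.*
    FROM Orders
    INNER JOIN Customers ON Orders.CustomerID = Customers.ID");
```

One such class per screen/query is the usual cost of this approach, but each stays trivial.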
I'm working on an application that allows dentists to capture information about certain clinical activities. While the application is not highly customizable (no custom workflows or forms) it does offer some rudimentary customization capabilities; clients can choose to augment the predefined form fields with their own custom ones. There are about half a dozen different field types that admins can create (i.e. Text, Date, Numeric, DropDown, etc). We're using Entity-Attribute-Value (EAV) on the persistence side to model this functionality.
One of the other key features of the application is the ability to create custom queries against these custom fields. This is accomplished via a UI in which any number of rules (Date <= (Now - 5 Days), Text Like '444', DropDown == 'ICU') can be created. All rules are AND'ed together to produce a query.
The current implementation (which I "inherited") is neither object oriented nor unit testable. Essentially, there is a single "God" class that compiles all the myriad rule types directly into a complex dynamic SQL statement (i.e. inner joins, outer joins, and subselects). This approach is troublesome for several reasons:
- Unit testing individual rules in isolation is nearly impossible, which also means adding additional rule types in the future will most definitely violate the Open/Closed Principle.
- Business logic and persistence concerns are being co-mingled.
- Unit tests run slowly, since a real database is required (SQLite can't parse T-SQL, and mocking out a parser would be, uhh... hard).
I'm trying to come up with a replacement design that is flexible, maintainable and testable, while still keeping query performance fairly snappy. This last point is key since I imagine an OOAD based implementation will move at least some of the data filtering logic from the database server to the (.NET) application server.
I'm considering a combination of the Command and Chain-of-Responsibility patterns:
The Query class contains a collection of abstract Rule classes (DateRule, TextRule, etc.) and holds a reference to a DataSet class that contains an unfiltered set of data. DataSet is modeled in a persistence-agnostic fashion (i.e. no references or hooks into database types).
Rule has a single Filter() method which takes in a DataSet, filters it appropriately, and then returns it to the caller. The Query class then simply iterates over each Rule, allowing each one to filter the DataSet as it sees fit. Execution stops once all rules have been executed or once the DataSet has been filtered down to nothing.
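A minimal sketch of that shape (all names illustrative; `Record` stands in for whatever persistence-agnostic row type the data set exposes):

```csharp
// Each rule filters the in-memory data in turn (Chain of Responsibility);
// Query stops early once nothing is left to filter.
public abstract class Rule
{
    public abstract IEnumerable<Record> Filter(IEnumerable<Record> data);
}

public class TextRule : Rule
{
    public string Field { get; set; }
    public string Contains { get; set; }

    public override IEnumerable<Record> Filter(IEnumerable<Record> data)
        => data.Where(r => (r.GetText(Field) ?? "").Contains(Contains));
}

public class Query
{
    private readonly List<Rule> _rules = new List<Rule>();
    public void Add(Rule rule) => _rules.Add(rule);

    public IEnumerable<Record> Execute(IEnumerable<Record> data)
    {
        var current = data.ToList();
        foreach (var rule in _rules)
        {
            current = rule.Filter(current).ToList();
            if (current.Count == 0) break;   // filtered down to nothing
        }
        return current;
    }
}
```

Each rule can then be unit tested on its own against plain in-memory lists, with no database involved.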
The one thing that worries me about this approach are the performance implications of parsing a potentially large unfiltered data set in .NET. Surely there are some tried and true approaches to solving just this kind of problem that offer a good balance between maintainability and performance?
One final note: management won't allow the use of NHibernate. Linq to SQL might be possible, but I'm not sure how applicable that technology would be to the task at hand.
Many thanks and I look forward to everyone's feedback!
Update: Still looking for a solution on this.
I think that LINQ to SQL would be an ideal solution coupled, perhaps, with Dynamic LINQ from the VS2008 samples. Using LINQ, particularly with extension methods on IEnumerable/IQueryable, you can build up your queries using your standard and custom logic depending on the inputs that you get. I use this technique heavily to implement filters on many of my MVC actions to great effect. Since it actually builds an expression tree then uses it to generate the SQL at the point where the query needs to be materialized, I think it would be ideal for your scenario since most of the heavy lifting is still done by the SQL server. In cases where LINQ proves to generate non-optimal queries you can always use table-valued functions or stored procedures added to your LINQ data context as methods to take advantage of optimized queries.
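As an illustration of that composition style (`ctx.Orders` and the filter parameters are assumed names, not from the question): each conditional `Where` just extends the expression tree, and SQL is only generated when the query is materialized.

```csharp
// Build the query incrementally from whatever rules the user supplied.
IQueryable<Order> query = ctx.Orders;

if (fromDate.HasValue)
    query = query.Where(o => o.OrderDate >= fromDate.Value);

if (!string.IsNullOrEmpty(customerName))
    query = query.Where(o => o.Customer.Name.Contains(customerName));

var results = query.ToList();   // SQL executes here, filters applied server-side
```

Unapplied rules simply never touch the tree, so you only pay for the filters the user actually chose.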
Updated: You might also try using PredicateBuilder from C# 3.0 in a Nutshell.
Example: find all Books where the Title contains one of a set of search terms and the publisher is O'Reilly.
var predicate = PredicateBuilder.True<Book>();
predicate = predicate.And( b => b.Publisher == "O'Reilly" );
var titlePredicate = PredicateBuilder.False<Book>();
foreach (var term in searchTerms)
{
titlePredicate = titlePredicate.Or( b => b.Title.Contains( term ) );
}
predicate = predicate.And( titlePredicate );
var books = dc.Book.Where( predicate );
The way I've seen it done is by creating objects that model each of the conditions you want the user to build their query from, and build up a tree of objects using those.
From the tree of objects you should be able to recursively build up an SQL statement that satisfies the query.
The basic ones you'll need will be AND and OR objects, as well as objects to model comparison, like EQUALS, LESSTHAN etc. You'll probably want to use an interface for these objects to make chaining them together in different ways easier.
A trivial example:
public interface IQueryItem
{
    String GenerateSQL();
}
public class AndQueryItem : IQueryItem
{
private IQueryItem _FirstItem;
private IQueryItem _SecondItem;
// Properties and the like
public String GenerateSQL()
{
StringBuilder builder = new StringBuilder();
builder.Append(_FirstItem.GenerateSQL());
builder.Append(" AND ");
builder.Append(_SecondItem.GenerateSQL());
return builder.ToString();
}
}
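To round out the trivial example, a leaf node for EQUALS might look like this (illustrative only; note that the generated fragment should reference a command parameter rather than embedding the raw value, to avoid SQL injection):

```csharp
public class EqualsQueryItem : IQueryItem
{
    private readonly string _field;
    private readonly string _parameterName;   // e.g. "@p0", value bound separately

    public EqualsQueryItem(string field, string parameterName)
    {
        _field = field;
        _parameterName = parameterName;
    }

    // Emits e.g. "CreatedDate = @p0"; the actual value travels as a parameter.
    public String GenerateSQL()
    {
        return _field + " = " + _parameterName;
    }
}
```

Composing `new AndQueryItem(new EqualsQueryItem(...), new EqualsQueryItem(...))` then yields the combined WHERE fragment.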
Implementing it this way should allow you to Unit Test the rules pretty easily.
On the negative side, this solution still leaves the database to do a lot of the work, which it sounds like you don't really want to do.