I'm using Entity Framework 6.
I want to run a stored procedure that returns a non-entity object (3 columns per row):
using (var dbContext = new DBContextEntity())
{
    var queryProducts = dbContext.Database.SqlQuery<DataTable>("dbo.GetProductByDesc @q", query);
}
How can I get that data as a DataSet or an anonymous object that I can iterate over?
As far as I know, Entity Framework does not provide anonymous-type materialization. The likely reason is that it generates IL code for each result type and caches it (or at least caches PropertyInfo lookups).
The solution is to create a simple class whose properties match the column names of the stored procedure result set, and use that class as the generic parameter for SqlQuery.
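For example (a minimal sketch; the class name and the three property names are assumptions and must match the columns your stored procedure actually returns):

// using System.Data.SqlClient; (for SqlParameter)
public class Product
{
    public int ProductId { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

using (var dbContext = new DBContextEntity())
{
    // each row is materialized into a Product by matching column names to property names
    var queryProducts = dbContext.Database.SqlQuery<Product>(
        "dbo.GetProductByDesc @q",
        new SqlParameter("@q", query));
}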
Edit:
SqlQuery implements IEnumerable, and when you iterate over it, the query executes automatically on the current thread. To iterate the result you can, for example:
foreach (var product in queryProducts)
{
    // do something with each product here
}
You can also pass a list of product class instances to a function expecting it:
ShowProducts(queryProducts.ToList());
You can also make the query run in the background and return a list of Product objects after it has finished; more information about asynchronous fetching can be found here: http://www.codeguru.com/csharp/.net/net_framework/performing-asynchronous-operations-using-entity-framework.htm
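For example, in EF6 the raw query result can also be awaited (a sketch; it assumes the Product class from above and an async calling method):

// inside an async method
using (var dbContext = new DBContextEntity())
{
    List<Product> products = await dbContext.Database
        .SqlQuery<Product>("dbo.GetProductByDesc @q", new SqlParameter("@q", query))
        .ToListAsync();

    ShowProducts(products);
}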
Like @Too said, it is best to define a POCO class with properties corresponding to the field names and data types returned by the stored procedure or other SQL statement.
It is generally better to avoid DataSets in any new development work. They do have their uses, but they carry a performance penalty in high-throughput scenarios which POCOs avoid.
If the attraction of DataSets is the ability to easily serialize the data over the wire or to a file for later use, then the various serialization frameworks will help you with that, e.g. DataContractSerializer, Newtonsoft.Json, etc.
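For instance, with Newtonsoft.Json a list of the POCOs round-trips in a couple of lines (a sketch; Product is the POCO from the answer above):

// using Newtonsoft.Json; using System.IO;
List<Product> products = queryProducts.ToList();

string json = JsonConvert.SerializeObject(products);                // serialize to a string
File.WriteAllText("products.json", json);                           // or persist for later use

var restored = JsonConvert.DeserializeObject<List<Product>>(json);  // read it back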
The POCO approach also allows for portability if the POCO is defined in a PCL (Portable Class Library).
If you must use DataSets, I would rather use typed DataSets. The DataRows can be used as the POCO in @Too's answer, since they have a default constructor. Just be careful of nulls and their unique treatment in fields other than String.
I am using entity framework to interface with a database and I would like to create a generic insert function!
I have got it working with standalone tables, i.e. adding a new record to the Customer table, which uses the Set function to get the table of the correct type. But the problem is that cross-reference tables are mapped into lists in Entity Framework, conforming to an object-oriented interface (so I understand why). However, how can I account for such inserts in a generic manner, when in general I will be dealing with whole entities but I have a few scenarios where I need to deal with the lists inside an entity?
Of course I can create specific methods to deal with these specific cases but I really want to avoid doing this!
One idea I have had is to create a dictionary of Type and Action: the type being the types of DTOs associated with list inserts the service may receive, and the action being the specific insert code for dealing with the lists. That way I can still use the generic insert function and just check whether there are any "insert rules" in the dictionary that should be executed instead of the generic insert code. This way, from a client programming perspective, only the one insert function is ever used. BUT this still requires writing specific insert code - which I really would like to avoid.
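Roughly, what I have in mind is something like this (just a sketch; the DTO type, the rule method, and the surrounding repository class are made up):

// using System; using System.Collections.Generic;
private readonly Dictionary<Type, Action<object>> insertRules;

public MyRepository()
{
    // DTO type -> specific insert code for list/navigation-property inserts
    insertRules = new Dictionary<Type, Action<object>>
    {
        { typeof(CarerCheckDto), r => InsertCarerCheckFromDto((CarerCheckDto)r) }
    };
}

public void InsertRecord(object newRecord)
{
    Action<object> rule;
    if (insertRules.TryGetValue(newRecord.GetType(), out rule))
    {
        rule(newRecord);   // an "insert rule" exists - run the specific code
        return;
    }

    // otherwise fall through to the generic insert shown below
    using (PIRSDBCon db = new PIRSDBCon())
    {
        db.Set(newRecord.GetType()).Add(newRecord);
        db.SaveChanges();
    }
}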
I don't have that much experience with EF - so what I would like to know is do I have any cleaner options for getting around this problem?
Code demonstrating my issue:
Normal generic insert -
public void InsertRecord(object newRecord)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        var table = db.Set(newRecord.GetType());
        table.Add(newRecord);
        db.SaveChanges();
    }
}
As you can see, this handles standard inserts into tables.
Insert into cross reference table -
public void InsertCarerCheck(int carer_id, Check newCheck)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        Foster_Carers carer_record = (from c in db.Foster_Carers
                                      where c.foster_carer_id == carer_id
                                      select c).SingleOrDefault();

        carer_record.Checks1.Add(newCheck);
        db.SaveChanges();
    }
}
Checks1 is a list property generated by EF for a cross-reference table linking a foster carer and a check record. How can scenarios like this be accounted for in a generic insert function?
I am trying to create a driver for LINQPad and have a question:
When creating a DynamicDataContextDriver, I must create a class TypedDataContext.
What should I put in it?
How will it be populated?
Can I control how it will be populated?
If I use an object database here, is there anything I must bear in mind?
I found some answers here, but I cannot find answers to all of the above questions there.
A typed data context is simply a class with properties/fields suitable for querying. Those properties/fields will typically return IEnumerables or IQueryables. For example:
public class TypedDataContext
{
public IEnumerable<Customer> Customers { get { ... } }
public IEnumerable<Order> Orders { get { ... } }
...
}
When you use Visual Studio to create a new item of kind "LINQ to SQL classes" or "ADO.NET Entity Data Model", Visual Studio creates a typed data context for you which is an excellent example of what LINQPad expects. A typed data context can also expose methods (e.g., to map stored procedures or functions) - in fact it can expose anything that makes sense to the end user.
When you execute a query in LINQPad that has a connection, LINQPad subclasses the typed data context associated with the connection so that the query has access to all of its fields/properties. This is why Customers.Dump() is a valid query - we can just access Customers without having to instantiate the typed data context first.
A LINQPad driver can work in one of two ways. Either it can act like Visual Studio and build the typed data context automatically and on the fly (dynamic data context driver), or it can extract a typed data context from an existing assembly provided by the user (static data context driver). When you add a connection in LINQPad, you'll notice that the drivers are listed in two list boxes (Build data context automatically = dynamic driver, and Use a typed data context from your own assembly = static driver).
The typed data context is instantiated whenever a query executes. Because its properties typically return lazily evaluated IEnumerables/IQueryables, it's not helpful to think of "populating" it. However, it will need to know how to access an underlying data source, and this is done by passing arguments into the constructor.
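For illustration, a dynamic driver might emit a context along these lines (a sketch only; Customer stands in for one of your entity classes and the member names are made up):

public class TypedDataContext
{
    private readonly string _connectionString;

    // LINQPad supplies the constructor arguments the driver asks for,
    // typically connection details for the underlying data source
    public TypedDataContext(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Lazily evaluated - nothing is read until a query iterates it
    public IEnumerable<Customer> Customers
    {
        get { return Query<Customer>("Customers"); }
    }

    private IEnumerable<T> Query<T>(string setName)
    {
        // open a connection using _connectionString and materialize T instances here
        yield break; // placeholder
    }
}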
LINQPad normally keeps the query's application domain alive between query runs, and this might be useful with caching and optimization should you be writing a driver for an object database. Other than that, there shouldn't be any special considerations for object databases.
I see tons of questions on LINQ to SQL vs Stored Procs. I'm more curious about the benefits of using them in tandem as relates to object mapping.
I have my business objects defined, and I have stored procedures for all of my CRUD transactions.
Is it better to plop all the stored procs into a DBML file and call them from there, and then map the results to my business objects, or is it better to just use a DataReader and map it from there?
It's annoying to me because I want my objects as I define them, rather than the MyStoredProcResult objects that LINQ to SQL generates, so I feel I'm doing the same field-by-field mapping as I would with a data reader.
Performance isn't necessarily key here (unless it's ridiculously slow). I'm looking to create a standard way for all our developers to load data from a database into an object in the simplest fashion with the least amount of code.
Mapping to LINQ2SQL has a serious advantage in being type-safe - you don't really have to worry about parsing the results or adding command parameters. It does it all for you.
On the other hand, calling stored procedures directly with SqlCommand and a DataReader tends to give better performance (especially when reading/changing a lot of data).
Regardless of which you choose, it is better to build a separate Data Access Layer, as it allows more flexibility. The logic for accessing/changing the database should not be built into your business objects, because if you are ever forced to change how you store your data, updating your software will be painful.
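For example, the rest of the code could depend on a small abstraction rather than on SqlCommand or the data context directly (names here are purely illustrative):

// The implementation behind this can use LINQ to SQL, stored procedures
// via SqlCommand, or anything else, without the callers changing.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}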
Not a direct answer to your question, but if you want your own objects as the result of a query, you should probably consider a code-first approach. LINQ to SQL does not support this, but Entity Framework and NHibernate do.
The direct answer is that a DataReader will obviously have less overhead, but at the same time it will have many more magic strings. Overhead is bad in terms of performance (in your case not that big a deal). Magic strings are bad in terms of maintaining code. So ultimately this will be your personal choice.
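To illustrate the magic-strings point, the hand-rolled DataReader version looks roughly like this (a sketch; the procedure, column, and class names are made up):

// using System.Data; using System.Data.SqlClient;
var companies = new List<Company>();

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetCompanies", connection))   // magic string
{
    command.CommandType = CommandType.StoredProcedure;
    connection.Open();

    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            companies.Add(new Company
            {
                ID = (int)reader["id"],                    // more magic strings
                CompanyName = (string)reader["company"],
                PhoneNumber = (string)reader["phonenumber"]
            });
        }
    }
}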
LINQ2SQL can provide your objects populated with the results of the query. You will have to build child objects in such a way as to support either a List(Of T) or a List<T>, depending on your language choice.
Suppose you have a table with an ID, a Company Name, and a Phone Number for fields. Querying that table would be straightforward in either LINQ or a stored procedure. The advantage that LINQ brings is the ability to map the results to either anonymous types or your own classes. So a query of:
var doSomething = from sList in myTableRef select sList;
would return an anonymous type. However, if you also have a class like this:
public class Company
{
    public int ID;
    public string CompanyName;
    public string PhoneNumber;
}
changing your query to this will populate Company objects as it moves through the data:
List<Company> companies = (from sList in myTableRef
                           select new Company
                           {
                               ID = sList.id,
                               CompanyName = sList.company,
                               PhoneNumber = sList.phonenumber
                           }).ToList();
My C# syntax may not be 100% correct as I mainly code in VB, but it will be close enough to get you there.
I have an existing SQL Server database, where I store data from large specific log files (often 100 MB and more), one per database. After some analysis, the database is deleted again.
From the database, I have created both an Entity Framework model and a DataSet model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework model, whose CreateQuery method is exposed via an interface like this:
public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
{
    return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
}
Now, sometimes my files are very small, and in such a case I would like to skip the import into the database and just have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing, to transfer it directly into an ObjectContext which is accessible via the interface.
Does this make sense?
Now here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type, and add them to an instantiated object of my typed entity context class, like so:
MyEntities context = new MyEntities(); //create new in-memory context
///....
//get the item in the navigations table
MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First(); //here, a foreach would be necessary in a true-world scenario
NavigationResult entity = new NavigationResult
{
Direction = dataRow.Direction,
///...
NavigationResultID = dataRow.NavigationResultID
}; //convert to entities
context.AddToNavigationResult(entity); //add to entities
///....
This is very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet. And beware if I ever change my database model....
Also, I have found out that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions; I intend to have only some kind of in-memory proxy database.
Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object?
P.S: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
There are tools that map between objects, such as AutoMapper, which is a very good open-source tool.
However, these tools sometimes have problems, for example generating duplicate entity keys, or difficulties when the structures of the objects being mapped are very different.
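As a rough illustration with the older static AutoMapper API (the exact calls depend on the AutoMapper version; the row and entity types are the ones from the question):

// using AutoMapper;
Mapper.CreateMap<MyDataSet.NavigationResultRow, NavigationResult>();

foreach (MyDataSet.NavigationResultRow dataRow in ds.NavigationResult)
{
    NavigationResult entity = Mapper.Map<NavigationResult>(dataRow);
    context.AddToNavigationResult(entity);
}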
If you are trying to automate it, I think that there is a greater chance of it working if you use EF 4 and POCO objects.
If you end up writing the mapping code manually, I would move it into a separate procedure with automated unit tests on it.
The way we do this is to create a static class with "Map" methods:
From DTO to EF object
From EF to DTO
Then write a test for each method in which we check that the fields were mapped correctly.
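Sketched out for one of the types from the question (NavigationResultDto is a hypothetical DTO, and the test uses MSTest attributes):

public static class NavigationResultMapper
{
    // DTO -> EF entity
    public static NavigationResult ToEntity(NavigationResultDto dto)
    {
        return new NavigationResult
        {
            Direction = dto.Direction,
            NavigationResultID = dto.NavigationResultID
        };
    }

    // EF entity -> DTO
    public static NavigationResultDto ToDto(NavigationResult entity)
    {
        return new NavigationResultDto
        {
            Direction = entity.Direction,
            NavigationResultID = entity.NavigationResultID
        };
    }
}

[TestMethod]
public void ToEntity_CopiesAllFields()
{
    var dto = new NavigationResultDto { Direction = "N", NavigationResultID = 1 };

    NavigationResult entity = NavigationResultMapper.ToEntity(dto);

    Assert.AreEqual(dto.Direction, entity.Direction);
    Assert.AreEqual(dto.NavigationResultID, entity.NavigationResultID);
}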
I have a Linq-To-Sql based repository class which I have been successfully using. I am adding some functionality to the solution, which will provide WCF based access to the database.
I have not exposed the generated Linq classes as DataContracts, I've instead created my own "ViewModel" as a POCO for each entity I am going to be returning.
My question is, in order to do updates and take advantage of some of the Linq-To-Sql features like cyclic references from within my service, do I need to add a RowVersion/Timestamp field to each table in my database so I can use code like dc.Table.Attach(myDisconnectedObject)? The alternative seems ugly:
var updateModel = dc.Table.SingleOrDefault(t => t.ID == myDisconnectedObject.ID);
updateModel.PropertyA = myDisconnectedObject.PropertyA;
updateModel.PropertyB = myDisconnectedObject.PropertyB;
updateModel.PropertyC = myDisconnectedObject.PropertyC;
// and so on and so forth
dc.SubmitChanges();
I guess a RowVersion/Timestamp column on each table might be the best and least intrusive option - just check that one value, and you're sure whether or not your data might have been modified in the meantime. All other columns can be set to Update Check = Never. This will take care of handling possible concurrency issues when updating your database from "returning" objects.
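With the rowversion column mapped in the DBML and Update Check set to Never on the other columns, the disconnected update becomes something like this (a sketch; the context and table names are placeholders matching the question's code):

using (var dc = new MyDataContext())
{
    // Attach as modified: LINQ to SQL uses the timestamp column for the
    // optimistic concurrency check instead of comparing every column
    dc.Table.Attach(myDisconnectedObject, true);

    dc.SubmitChanges();   // throws ChangeConflictException if the row changed in the meantime
}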
However, the other thing you should definitely check out is AutoMapper - it's a great little component to ease those left-right-assignment orgies you have to go through when using ViewModels / Data Transfer Objects, by making the mapping between two object types a snap. It's widely used, well tested, and very stable - a winner!