I am using Entity Framework to interface with a database and I would like to create a generic insert function!
I have got it working with standalone tables, i.e. adding a new record to the Customer table, using the Set function to get the table of the correct type. But the problem is that cross-reference tables are mapped into lists in Entity Framework, conforming to an object-oriented interface (so I understand why). However, how can I account for such inserts in a generic manner? In general I will be dealing with whole entities, but I have a few scenarios where I need to deal with the lists inside an entity.
Of course I can create specific methods to deal with these specific cases but I really want to avoid doing this!
One idea I have had is to create a dictionary of Type to Action: the types being the DTO types associated with list inserts that the service may receive, and the actions being the specific insert code for dealing with the lists. That way I can still use the generic insert function and just check whether there are any "insert rules" in the dictionary that should be executed instead of the generic insert code. This way, from a client programming perspective, only the one insert function is ever used. BUT this still requires writing specific insert code, which I would really like to avoid.
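A minimal sketch of that idea (CarerCheckDto is a hypothetical DTO type; this needs System and System.Collections.Generic):
private static readonly Dictionary<Type, Action<PIRSDBCon, object>> InsertRules =
    new Dictionary<Type, Action<PIRSDBCon, object>>
    {
        { typeof(CarerCheckDto), (db, dto) => { /* list-specific insert code */ } }
    };

public void InsertRecord(object newRecord)
{
    using (var db = new PIRSDBCon())
    {
        Action<PIRSDBCon, object> rule;
        if (InsertRules.TryGetValue(newRecord.GetType(), out rule))
            rule(db, newRecord);                        // run the matching "insert rule"
        else
            db.Set(newRecord.GetType()).Add(newRecord); // fall back to the generic insert
        db.SaveChanges();
    }
}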
I don't have that much experience with EF, so what I would like to know is: do I have any cleaner options for getting around this problem?
Code demonstrating my issue:
Normal generic insert -
public void InsertRecord(object newRecord)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        var table = db.Set(newRecord.GetType());
        table.Add(newRecord);
        db.SaveChanges();
    }
}
As you can see, this handles standard inserts into tables.
Insert into cross-reference table -
public void InsertCarerCheck(int carer_id, Check newCheck)
{
    using (PIRSDBCon db = new PIRSDBCon())
    {
        Foster_Carers carer_record = (from c in db.Foster_Carers
                                      where c.foster_carer_id == carer_id
                                      select c).SingleOrDefault();
        carer_record.Checks1.Add(newCheck);
        db.SaveChanges();
    }
}
Checks1 is a list property generated by EF for a cross-reference table linking a foster carer and a check record. How can scenarios like this be accounted for in a generic insert function?
Related
I have two tables wherein I want to insert the data into the first one (MASTER), and the other table would copy some of the data from the Master table.
Here is my representation: I want the Ven_ID to also be reflected in my Workflow table's Workflow_ReqID column automatically.
I know this is possible, but can someone give me directions?
You can have a trigger/procedure at the database level which will insert data into your second table. It depends on whether this table is updated anywhere else.
There are two ways to go about it:
1. Use a SQL Server AFTER INSERT trigger. You can find plenty of resources on the internet on how to create a trigger and how to declare its definition.
2. Do it through Entity Framework (I see you have tagged entityframework).
I will explain how you can use Entity Framework.
Let's say you have the entity representing the WorkFlow table as WorkFlow and the entity representing the Ven table (presumably a vendor) as Vendor.
Since the WorkFlow table has a required foreign key to the Vendor primary key, you must have a backing stub for it, i.e. your WorkFlow entity must have a virtual navigation property of type Vendor:
public class WorkFlow
{
    // other properties

    public virtual Vendor Vendor { get; set; }
}
You just have to create the WorkFlow object and the Vendor object (either create a new one or retrieve one from the db) and assign the vendor to the workflow object, i.e.
WorkFlowObj.Vendor = objVendor;
and Entity Framework will take care of the rest.
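Put together, a minimal sketch might look like this (MyDbContext, db.Vendors, and db.WorkFlows are assumed names, not taken from the question):
using (var db = new MyDbContext())
{
    var vendor = new Vendor();          // or fetch an existing one, e.g. from db.Vendors
    var workFlow = new WorkFlow { Vendor = vendor };

    db.WorkFlows.Add(workFlow);
    db.SaveChanges();                   // EF inserts the Vendor first and fills in the FK
}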
I would prefer this way.
Using triggers is not bad, but the problem with them is deployment: you must deploy the triggers along with everything else, and every time you make changes to them you must take care of them too.
If you want Ven_ID and Workflow_ReqID to be the same, get the Ven_ID as an output parameter from the stored procedure and pass it to the second table's insert statement.
Get the last inserted ID using SCOPE_IDENTITY() after insertion and add it to the Workflow table. To save a DB round trip you can use a stored procedure for that.
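For illustration, a sketch of that approach with plain ADO.NET; the table and column names (Ven, VenName, Workflow, Workflow_ReqID) are guesses based on the question:
// using System.Data.SqlClient;
using (var con = new SqlConnection(connectionString))
{
    con.Open();

    var insertVen = new SqlCommand(
        "INSERT INTO Ven (VenName) VALUES (@name); " +
        "SELECT CAST(SCOPE_IDENTITY() AS int);", con);
    insertVen.Parameters.AddWithValue("@name", "some vendor");
    int venId = (int)insertVen.ExecuteScalar();   // the freshly generated Ven_ID

    var insertWorkflow = new SqlCommand(
        "INSERT INTO Workflow (Workflow_ReqID) VALUES (@id)", con);
    insertWorkflow.Parameters.AddWithValue("@id", venId);
    insertWorkflow.ExecuteNonQuery();
}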
I have been reading several examples of Sitecore DataProvider implementations on a single database and single table (using the config file parameters to specify the particular table and columns to integrate with). I just wonder if it is possible to implement a DataProvider working on multiple tables instead of just one. I couldn't find any examples of this, so I am just asking for any ideas or possibilities.
The first problem I encounter when I try to deal with multiple tables is overriding the GetItemDefinition method, since this method returns only one item definition and needs to know which particular table it will get the item information from (this is specified in the config file when dealing with just one table). Basically I am looking for a way to switch (dynamically) between tables without changing the config file params every time.
If you're creating a custom data provider then the implementation is left entirely up to you. If you have been following some of the examples, such as the Northwind DataProvider, then as you state the implementation acts on a single database as specified in config. But you can specify whatever you need in the methods that you implement, and run logic to switch the SELECT statement you call in methods such as GetItemDefinition() and GetItemFields(). You can see in the Northwind example that the SQL query is dynamically built:
StringBuilder sqlSelect = new StringBuilder();
sqlSelect.AppendFormat("SELECT {0} FROM {1}", nameField, table);
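Extending that example, one hypothetical way to switch tables dynamically is to key the table and field names off the kind of item being requested (all names below are assumptions, not Sitecore API):
private static readonly Dictionary<string, string[]> TableMap =
    new Dictionary<string, string[]>
    {
        { "product",  new[] { "ProductName",  "Products"   } },
        { "category", new[] { "CategoryName", "Categories" } }
    };

private string BuildSelect(string itemKind)
{
    string[] target = TableMap[itemKind];   // [0] = name field, [1] = table
    var sqlSelect = new StringBuilder();
    sqlSelect.AppendFormat("SELECT {0} FROM {1}", target[0], target[1]);
    return sqlSelect.ToString();
}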
If you are building a read-only data provider then you might be able to make use of SQL views, allowing you to write a query that combines the results from several tables using the UNION operator. As long as each record has a unique ID across tables (i.e. if you are using GUIDs as the ID) then this should work fine.
I'm using Entity Framework 6.
I want to run a stored procedure that returns non-entity objects (3 columns per row):
using (var dbContext = new DBContextEntity())
{
    var queryProducts = dbContext.Database.SqlQuery<DataTable>("dbo.GetProductByDesc @q", query);
}
How can I get that data as a DataSet or as anonymous objects that I can iterate over?
As far as I know, Entity Framework does not provide anonymous object materialization. The reason is probably that it generates IL code for each type and caches it (or just does plain PropertyInfo caching).
The solution is to just create a simple class with the properties you need, matching the names in the stored procedure's result set, and use this class as the generic parameter for SqlQuery.
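For example (a sketch: the three property names are assumptions about the sproc's result set; SqlParameter comes from System.Data.SqlClient):
public class ProductResult
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

using (var dbContext = new DBContextEntity())
{
    var queryProducts = dbContext.Database.SqlQuery<ProductResult>(
        "dbo.GetProductByDesc @q",
        new SqlParameter("@q", query));
}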
Edit:
SqlQuery implements IEnumerable, and when you iterate over it, it executes automatically on the current thread. To iterate the result you can, for example:
foreach (var product in queryProducts)
{
    // do something with each product here
}
You can also pass a list of product class instances to a function expecting it:
ShowProducts(queryProducts.ToList());
You can also make the query run in the background and return a list of products after it has finished; more information about asynchronous fetching can be found here: http://www.codeguru.com/csharp/.net/net_framework/performing-asynchronous-operations-using-entity-framework.htm
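For instance, inside an async method you could write something like this (using the ProductResult class sketched above; EF6's DbRawSqlQuery exposes ToListAsync):
var products = await dbContext.Database
    .SqlQuery<ProductResult>("dbo.GetProductByDesc @q", new SqlParameter("@q", query))
    .ToListAsync();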
Like @Too said, it is best to define a POCO class with properties corresponding to the field names and data types returned by the stored procedure or other SQL statement.
It is generally better to avoid the use of DataSets in any new development work you are doing. They do have their uses, but they carry a performance penalty in high-throughput scenarios which the POCOs clearly avoid.
If the attraction of DataSets is the ability to easily serialize the data over the wire or to a file for later use, then the various serialization frameworks will help you with that, e.g. DataContractSerializer, Newtonsoft.Json, etc.
This also allows for portability if the POCO is defined in a PCL (Portable Class Library).
If you must use DataSets, I would rather use typed DataSets. The DataRows can be used like the POCO in @Too's answer, since they have a default constructor. Just be careful of nulls and their special treatment in fields other than String.
I am creating a Data Access Layer in C# for a SQL Server database table. The data access layer contains a property for each column in the table, as well as methods to read and write the data from the database. It seems to make sense to have the read methods be instance-based. The question I have is regarding handling the database-generated primary key property getter/setter and the write method. As far as I know I have three options...
Option 1: Using a static method while only allowing a getter on the primary key would let me enforce writing all of the correct values into the database, but it is unwieldy for a developer.
Option 2: Using an instance-based write method would be more maintainable, but I am not sure how I would handle the get/set on the primary key, and I would probably have to implement some kind of validation of the instance prior to writing to the database.
Option 3: Something else, but I am wary of LINQ and drag/drop stuff; they have burned me before.
Is there a standard practice here? Maybe I just need a link to a solid tutorial?
You might want to read up on active record patterns and some examples of them, and then implement your own class/classes.
Here's a rough sketch of a simple class that contains some basic concepts (below).
Following this approach you can expand on the pattern to meet your needs. You might be OK with retrieving a record from the DB as an object, altering its values, then updating the record (Option 2); or, if that is too much overhead, using a static method that directly updates the record in the database (Option 1). For an insert, the database (SP/query) should validate the natural/unique key on the table if you need it to, and probably return a specific value/code indicating a unique constraint error. For updates, the same check would need to be performed if natural key fields are allowed to be updated.
A lot of this depends on what functionality your application will allow for the specific table.
I tend to prefer retrieving an object from the DB, altering its values, and saving, over static methods. For me, it's easier to use from calling code and handles arcane business logic inside the class more easily.
public class MyEntityClass
{
    private bool _isNew;
    private bool _isDirty;
    private int _pkValue;
    private string _colValue;

    public MyEntityClass()
    {
        _isNew = true;
    }

    // Database-generated primary key: read-only to callers,
    // set internally by Load() and Save().
    public int PKValue
    {
        get { return _pkValue; }
    }

    public string ColValue
    {
        get { return _colValue; }
        set
        {
            if (value != _colValue)
            {
                _colValue = value;
                _isDirty = true;
            }
        }
    }

    public void Load(int pkValue)
    {
        _pkValue = pkValue;
        //TODO: query database and set member vars based on results (_colValue)
        // if data found
        _isNew = false;
        _isDirty = false;
    }

    public void Save()
    {
        if (_isNew)
        {
            //TODO: insert record into DB
            //TODO: return DB-generated PK ID value from SP/query, and set it to _pkValue
        }
        else if (_isDirty)
        {
            //TODO: update record in DB
        }
    }
}
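Hypothetical calling code for the sketch above:
var entity = new MyEntityClass();
entity.Load(42);           // fetch the record with PK 42; resets _isNew/_isDirty
entity.ColValue = "new";   // the setter flags the instance as dirty
entity.Save();             // _isNew is false and _isDirty is true, so this issues an UPDATE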
Have you had a look at Entity Framework? I know you said you are wary of LINQ, but EF4 takes care of a lot of the things you mentioned and is fairly standard practice for DALs.
I would stick with an ORM tool (EF, OpenAccess by Telerik, etc.) unless you need a customized DAL that you need (not just want) total control over. For side projects I use an ORM; at work, however, we have our own custom DAL with provider abstractions and custom mappings between objects and the database.
NHibernate is also a very solid, tried-and-true ORM with a large community backing it.
Entity Framework is the way to go for your initial DAL; then optimize where you need to. Our company actually did some benchmarking comparing EF vs. a plain SQL reader, and found that for querying one or two tables' worth of information the speed is about even (neither being appreciably faster than the other). Beyond two tables there is a performance hit, but it's not terribly significant. The one place where writing your own SQL statements became worthwhile was in batch commit operations, at which point EF allows you to write the SQL queries directly. So save yourself some time: use EF for the basic heavy lifting, and then use its direct connection for the more complicated operations. (It's the best of both worlds.)
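As an illustration of that escape hatch, the DbContext API (EF 4.1+) lets you run hand-written SQL directly; MyDbContext and the Orders table here are hypothetical:
using (var db = new MyDbContext())
{
    // Positional parameters @p0/@p1 are bound from the arguments in order.
    db.Database.ExecuteSqlCommand(
        "UPDATE Orders SET Status = @p0 WHERE CreatedOn < @p1",
        "Archived", DateTime.UtcNow.AddYears(-1));
}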
I have a design question related to Entity Framework entities.
I have created the following entity:
public class SomeEntity {
// full review details here
}
This entity has, as an example, 30 columns. When I need to create a new entity this works great: I have all of the required fields to insert into the database.
I have a few places in my app where I need to display some tabular data with some of the fields from SomeEntity, but I don't need all 30 columns, maybe only 2 or 3 columns.
Do I create an entirely new entity that has only the fields I need (which maps to the same table as SomeEntity, but only retrieves the columns I want)?
Or does it make more sense to create a domain class (like PartialEntity) and write a query like this:
var partialObjects = from e in db.SomeEntities
                     select new PartialEntity { Column1 = e.Column1, Column2 = e.Column2 };
I am not sure what the appropriate way to do this type of thing is. Is it a bad idea to have two entities that map to the same table/columns? I would never actually need the ability to create a PartialEntity and save it to the database, because it wouldn't have all of the fields that are required.
Your first approach is not possible. EF doesn't support multiple entities mapped to the same table (except in some special cases like TPH inheritance or table splitting).
The second case is a common scenario. You create a view model for your UI and either project your entity to the view model directly in the query (only the projected columns are transferred from the DB) or query the whole entity and convert it to the view model in your application code (for example with AutoMapper, as @Fernando mentioned).
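A sketch of the in-memory conversion path using the classic AutoMapper static API (older AutoMapper versions; newer ones use a MapperConfiguration instead):
// One-time configuration, e.g. at application startup.
Mapper.CreateMap<SomeEntity, PartialEntity>();

// Later: load the full entity (all 30 columns) and copy the matching properties across.
var entity = db.SomeEntities.Find(id);
var partial = Mapper.Map<SomeEntity, PartialEntity>(entity);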
If you are using an EDMX file for mapping (I guess you are not, because you mentioned ef-code-first) you can use a third approach which takes parts from both of the approaches mentioned. That approach defines a QueryView: an EF mapping-level view over the mapped entity which behaves as a new read-only entity. Generally it is a reusable projection stored directly in the mapping.
What you proposed as your first solution is the "view model" paradigm, where you create a class for the sole purpose of being the model of a view: retrieve the data, then map it to the model class. You can use AutoMapper to map the values. Here's an article on how to apply this.
You could create a generic property filter method that takes in an object instance and a string array of column names, and returns a dynamic object with only the columns you want.
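A sketch of such a filter using reflection and ExpandoObject (System.Dynamic); this is illustrative, not production code:
// using System.Dynamic; using System.Collections.Generic;
public static dynamic FilterProperties(object instance, params string[] columnNames)
{
    IDictionary<string, object> result = new ExpandoObject();
    foreach (string name in columnNames)
    {
        var prop = instance.GetType().GetProperty(name);
        if (prop != null)   // silently skip names the object doesn't have
            result[name] = prop.GetValue(instance, null);
    }
    return result;          // a dynamic object exposing only the requested columns
}

// e.g. dynamic slim = FilterProperties(someEntity, "Column1", "Column2");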
I think it would add unnecessary complexity to your model to add a second entity based on the same data structure. I honestly don't see the problem in having a single entity for updating/editing/viewing. If you insist on separating access to SomeEntity, you could create a database view, e.g. SomeEntityView, and base a separate entity on that.