I am developing an HRM application that imports and exports XML data from a database. The application receives exported XML data for employee entries. I imported the XML file using LINQ to XML and converted the XML into the respective objects. Now I want to attach (update) the employee objects.
I tried to use
// linqoper is a helper class that imports the XML file and converts it into an IEnumerable<Employee>.
var emp = linqoper.importxml("filename.xml");
using (EmployeeDataContext db = new EmployeeDataContext())
{
    db.Employees.AttachAll(emp, true); // attach as modified
    db.SubmitChanges();
}
But I got the error:
“An entity can only be attached as modified without original state if it declares a version member or does not have an update check policy.”
I also have the option of retrieving each employee and assigning the values from the XML data to it, using this format:
// import an IEnumerable of Employee objects
var employees = linqoper.importxml("filename.xml");
using (EmployeeDataContext db = new EmployeeDataContext())
{
    foreach (var empobj in employees)
    {
        Employee emp = db.Employees.Single(m => m.Id == empobj.Id);
        emp.FirstName = empobj.FirstName;
        emp.BirthDate = empobj.BirthDate;
        // ... continue with the remaining properties
    }
    db.SubmitChanges();
}
But the problem with the above is that I have to iterate through all the employee objects, which is very tiresome.
So, is there any other way I could attach (update) the employee entities in the database using LINQ to SQL?
I have seen some similar links on SO, but none of them seems to help.
https://stackoverflow.com/questions/898267/linq-to-sql-attach-refresh-entity-object
When LINQ to SQL saves changes to the database, it has to know which properties of the object have been changed. It also checks whether a potentially conflicting update to the database has been made in the meantime (optimistic concurrency).
To handle those cases, LINQ to SQL needs two copies of the object when attaching: one with the original values (as present in the DB) and one with the new, changed values. There is also a more advanced mechanism involving a version member that is mapped to a rowversion column.
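For example, assuming the generated EmployeeDataContext and Employee types from the question, and assuming you have (or reload) a copy of the row as it currently exists in the database, a minimal sketch of the two-copy attach looks like this:
// 'modified' comes from the imported XML; 'original' holds the values currently in the DB
void AttachWithOriginal(Employee modified, Employee original)
{
    using (var db = new EmployeeDataContext())
    {
        // With both copies, LINQ to SQL can work out the changed columns and the concurrency check
        db.Employees.Attach(modified, original);
        db.SubmitChanges();
    }
}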
The LINQ to SQL way to update a set of data is to first read all the data from the database, then update the objects retrieved from the database, and finally call SubmitChanges(). That would be my first approach in your situation.
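A sketch of that approach, assuming the same hypothetical linqoper importer and the generated types; it fetches all affected rows in one query instead of one query per employee:
var imported = linqoper.importxml("filename.xml").ToList();
var ids = imported.Select(e => e.Id).ToList();

using (var db = new EmployeeDataContext())
{
    // One round trip: load only the employees that appear in the XML
    var existing = db.Employees.Where(e => ids.Contains(e.Id)).ToDictionary(e => e.Id);

    foreach (var source in imported)
    {
        Employee target;
        if (existing.TryGetValue(source.Id, out target))
        {
            target.FirstName = source.FirstName;
            target.BirthDate = source.BirthDate;
            // ...copy the remaining properties
        }
    }

    db.SubmitChanges();
}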
If you experience performance problems, then it's time to go outside LINQ to SQL's toolbox. A solution with better performance is to load the new data into a separate staging table (for best performance, use bulk insert) and then run a SQL command or stored procedure that does the actual merging of data. The SQL MERGE statement is excellent for this kind of update.
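A rough sketch of that route, assuming a staging table called EmployeeStaging with the same columns and a DataTable already filled from the imported XML (all table and column names here are placeholders):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // 1. Bulk insert the imported rows into the staging table
    using (var bulk = new SqlBulkCopy(connection))
    {
        bulk.DestinationTableName = "EmployeeStaging";
        bulk.WriteToServer(importedDataTable);
    }

    // 2. Merge the staging rows into the real table in one set-based statement
    const string mergeSql = @"
        MERGE INTO Employee AS target
        USING EmployeeStaging AS source ON target.Id = source.Id
        WHEN MATCHED THEN
            UPDATE SET target.FirstName = source.FirstName,
                       target.BirthDate = source.BirthDate
        WHEN NOT MATCHED THEN
            INSERT (Id, FirstName, BirthDate)
            VALUES (source.Id, source.FirstName, source.BirthDate);";

    using (var merge = new SqlCommand(mergeSql, connection))
    {
        merge.ExecuteNonQuery();
    }
}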
LINQ to SQL is a proper ORM, but if you want to take control of create/update/delete into your own hands, you can try one of the simpler ORMs that just provide ways to do CRUD operations. I can recommend one, http://crystalmapper.codeplex.com; it is simple yet powerful.
Why CrystalMapper?
I built it for a large financial transaction system with lots of insert and update operations. What I needed was speed and control over inserts and updates to serve complex business scenarios, hitting multiple tables in a single transaction.
When I put it to use in a social text-processing platform, it served very well there too.
I am using EF6 to create and populate a table with its primary key as a ServerTime (DateTime).
The table is very large. When I perform an external query such as this:
SELECT
gbpusd.servertime,
gbpusd.orderbook
FROM
gbpusd
WHERE
gbpusd.servertime BETWEEN '2014-12-23 23:48:08.183000' AND '2015-03-23 23:48:08.183000'
I would like the table to be automatically partitioned by servertime during Code First creation, both to speed up access times and to get the benefit of having the table split into smaller partition files instead of one massive .ibd file.
I already know the raw MySql syntax for partitioning a table by range.
My current solution is to create and populate the database via EF6 Code First, and then manually execute the partitioning via a raw MySQL query. Another solution is to use plain old ADO.NET directly after Code First creation, but I would rather have everything streamlined inside the EF6 code.
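For reference, one way to keep that step inside the EF6 code is to fire the raw partitioning DDL through the context right after Code First has created the schema; a sketch with a hypothetical context and made-up partition boundaries:
using (var context = new MyDbContext())
{
    // Ensure Code First has created the schema before altering it
    context.Database.CreateIfNotExists();

    // Raw MySQL partitioning DDL executed through EF6
    context.Database.ExecuteSqlCommand(
        @"ALTER TABLE gbpusd
          PARTITION BY RANGE COLUMNS (servertime) (
              PARTITION p2014 VALUES LESS THAN ('2015-01-01'),
              PARTITION p2015 VALUES LESS THAN ('2016-01-01'),
              PARTITION pmax  VALUES LESS THAN (MAXVALUE)
          );");
}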
What I need to know is how I can accomplish the same thing via a Code First implementation (assuming it is even possible).
Many thanks.
Is this only about speeding things up? Where do you need to use this data? Assuming you have a service layer or manager for this entity, you can always use your repository to query the entity with a lambda expression.
Example:
using (var dataBaseContext = new your_DbContext())
{
    var repoServer = new Repository<gbpusd>(dataBaseContext);
    // you can specify whatever date range you like
    var searchedGbpusd = repoServer.SearchFor(i => i.servertime >= fromDate && i.servertime <= toDate).ToList();
    // use the searched data wherever you want
}
Hope it will help.
Thanks
Fahad
You can just create your entity using the EF6 Code First approach. Then, using the repository search as I suggested, fetch the data and fill that table. Run this activity as frequently as you need, for example when users are not at peak. – Shaikh Muhammad Fahad
I have a C# application and Entity Framework as the ORM.
I have a database with an Images table. The table has Id, TimeStamp, and Data columns.
This table can hold a LOT of rows. Also, the Data column contains a large byte array.
I need to take the first entity starting from some date, or the first 5, for example.
var result = Images.OrderBy(img => img.TimeStamp).FirstOrDefault(img => img.TimeStamp > someDate);
throws an OutOfMemoryException.
Is there some way around that?
Should I use a stored procedure or something else?
If Images is already a materialized collection, then when you OrderBy it, the whole set is processed in memory. I'll assume it isn't, and that it is directly your DbSet or an EF IQueryable (so you are querying with LINQ to Entities, not LINQ to Objects, and the ordering is done in the database query, not on the whole returned set).
Unless you need change tracking, use AsNoTracking on your DbSet (in this case, Context.Images.AsNoTracking().OrderBy(...)). That should lower the memory requirements by a lot (change tracking requires more than twice the memory).
Also, if you are using large blob data, it might be wise to store it in its own table (with just an id and the data) and access it only when you need it, keeping a reference to that id on the table/entity where you are doing your operations, if you are using an ORM and want to work with the original entity all the time (you could also use a Select to project the query onto a new type without the blob field).
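Putting those suggestions together, a minimal sketch (the property names and the anonymous projection are placeholders):
// The 5 oldest images after someDate, without the blob column and without change tracking
var results = context.Images
    .AsNoTracking()
    .Where(img => img.TimeStamp > someDate)
    .OrderBy(img => img.TimeStamp)
    .Select(img => new { img.Id, img.TimeStamp }) // project away the large Data column
    .Take(5)
    .ToList();

// Load the blob for a single image only when it is actually needed
var data = context.Images
    .Where(img => img.Id == results[0].Id)
    .Select(img => img.Data)
    .Single();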
If you need to access the image data for the returned rows all the time, and there's not enough memory in the system for it, then tough luck.
I'm using Entity Framework and SQL Server 2008 with the Database First approach.
My problem is :
I have some tables that hold a great many columns (~100), and when I try to retrieve a lot of rows it takes a significant time before the results come back, even though I sometimes need only 3 or 4 columns from the table.
I spent half a day on Stack Overflow trying to find a way to solve this problem, and I came up with two solutions:
Using stored procedures to retrieve the data with only the columns I want.
Editing the .edmx (XML) and the .cs files to remove the columns that I won't use.
My problem again is :
If I use stored procedures to retrieve the data with the columns that I want, Entity Framework loses its benefit, and I might as well use ADO.NET instead and call the stored procedures directly...
I can't take the second solution, because every time I make a change in the database I'm obliged to regenerate the .edmx file, and I lose the changes I made before :'(
Is there a way to do this somehow in Entity Framework? Is that even possible?
I know that other ORMs exist like NHibernate or Dapper, but I don't know if they can offer this feature without causing a lot of pain.
You don't have to return every column each time. You can specify which columns you need.
var query = from t in db.Table
select new { t.Column1, t.Column2, t.Column3 };
Normally, if you project the data into a different POCO, it will do this automatically in EF / L2S etc.:
var slim = from row in db.Customers
           select new CustomerViewModel { Name = row.Name, Id = row.Id };
I would expect that to only read 2 columns.
For tools like Dapper: since you control the SQL, only specify the columns you want - don't use *.
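For instance, a minimal Dapper sketch (the connection string, table name, and the CustomerViewModel POCO are assumptions for illustration):
class CustomerViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

using (var connection = new SqlConnection(connectionString))
{
    // Only the columns named in the SQL are read and materialized
    var customers = connection.Query<CustomerViewModel>(
        "SELECT Id, Name FROM Customers").ToList();
}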
You can create a second project with a code-first DbContext, POCOs, and maps that return the subset of columns that you require.
This is a case of cut-and-paste code, but it will get you what you need.
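A sketch of what that second, slimmed-down context might look like (the names are placeholders; the POCO declares only the columns you need and is mapped onto the existing table):
public class SlimCustomer
{
    public int Id { get; set; }
    public string Name { get; set; }
    // ...only the handful of columns this project needs
}

public class ReportingContext : DbContext
{
    static ReportingContext()
    {
        // Never let this secondary context create or migrate the schema
        Database.SetInitializer<ReportingContext>(null);
    }

    public DbSet<SlimCustomer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the slim POCO onto the existing, much wider table
        modelBuilder.Entity<SlimCustomer>().ToTable("Customer");
    }
}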
You can just create classes and project the data into them, but I'm not sure you can make updates using this method. You can use anonymous types within a single method, but you'll need actual classes to pass around between methods.
Another option would be to move to code-first development.
I have a database with many tables and constraints (but not much data). The database contains a few separate entities that are bound together by an ID directly or indirectly, as illustrated below:
My target is to move one entire slice of data (including data from all tables in the database) to another physical database in an easy and safe way. It's OK if it doesn't perform very well. In the above example, I would want to move the company with a certain Id as well as all employees of that company and all data related to the employees etc. through all the tables.
I want to do it with a safe compile-checked method, as I want to catch errors whenever I change my database.
The IDs in the database are mostly guids, but there are a few tables using auto incremented IDs.
Note:
The "Companies" table contains perhaps 5 rows, one for each company. I need to move ONE row from that table, along with all data directly or indirectly related to that row.
Suppose you want to copy data from a DetailsView (table name = Jobs) to another table (table name = Company):
string apply = "INSERT INTO Company (JobTitle, CompanyName) SELECT JobTitle, CompanyName FROM Jobs";
This is just an idea; hope it helps.
UPDATE:
This should help you, with an example:
MSDN - Multiple Bulk Copy Operations (ADO.NET)
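A rough sketch of the bulk-copy route for moving one company's slice of a table between databases (connection strings, table and column names are placeholders):
using (var source = new SqlConnection(sourceConnectionString))
using (var destination = new SqlConnection(destinationConnectionString))
{
    source.Open();
    destination.Open();

    // Read only the slice that belongs to the chosen company
    var select = new SqlCommand(
        "SELECT * FROM Employees WHERE CompanyId = @companyId", source);
    select.Parameters.AddWithValue("@companyId", companyId);

    using (var reader = select.ExecuteReader())
    using (var bulk = new SqlBulkCopy(destination))
    {
        bulk.DestinationTableName = "Employees";
        bulk.WriteToServer(reader); // repeat this pattern for each related table, parents before children
    }
}
If the auto-incremented IDs have to keep their values in the target database, construct the SqlBulkCopy with SqlBulkCopyOptions.KeepIdentity.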
Looking for examples/tutorials for custom user fields, not via EAV.
EAV is going to be problematic for various reasons, such as performance.
There are many base entities/tables, with over 100,000 records each.
There will likely be over a dozen attributes.
The records are to be displayed in a flat UI grid, including the custom fields, so flattening them would be an issue while maintaining performance.
I am looking at enabling this via DDL, where all custom fields would go into a matching table such as
<tablename>_custom_<userid>
and all user attributes would map to a column each, with all their metadata stored in a metadata table.
The retrieval would be simpler, where the query would simply be:
select *
from <tablename> A, tableName_custom_userid B
where B.KeyField = A.KeyField --( perhaps using outer join, haven't gone that far yet )
Are there any gotchas down the road that I need to be aware of?
Of course, any samples/pointers would be helpful to kickstart the effort.
Specifically, I would appreciate any advice on using DDL for SQL Server Compact 4.
One technique I have seen used is a sort of 'hard-coded' EAV pattern. Don't hang up! It worked well with the dataset sizes you're talking about and didn't actually use EAV - it was only EAV-esque.
The idea is to have a set of tables to store these custom attributes, with some triggers (described below) on them. The custom attribute tables store metadata about each attribute (what table it goes with, data type, constraints, etc.). You can get very fancy with this, but I did not have the need.
The triggers on your meta-tables are there to regenerate views that roll up base + extension into first-class objects within the DB. So instead of a person table plus an employee extension table, you have an employee view that includes both. When you drop a new value into the custom attribute tables, the triggers re-roll the views and include the new stuff. If you wanted to go nuts, you could also have the triggers rewrite stored procedures. Depending on how your mid-tier code is structured, you would still be forced to re-code some of it; however, this would be the case anyway if you were applying rules that read the data.
In testing, I found that for the relatively small # of records you're talking about, performance was somewhat slower but followed roughly the same pattern of degradation (2x the number of records, ~2x as slow).
-- edits --
The way I saw it done, you had a table that represented your first-class objects, so a row for 'person' and a row for 'employee', etc. We'll call that FCO. Then you had a secondary table that stored which tables made up each FCO. We'll call that Srcs. For Person, there would be one row, which is the person table. For Employee, there would be two rows, the person table and the Employee extension. There is a third table, called Attribs, which stores the columns from the tables that constitute the FCO. For simplicity, we'll say Person has ID, Name and Address, and Employee has Hire Date and Department, plus obviously a PersonID referring back to the Person table. So: 2 rows in the FCO table (person and employee), 3 rows in the Srcs table, 8 rows in Attribs.
The view, we'll call it vw_Employee, selects PersonID, Name, Address, Hire Date, Department from the two tables. It is built by a SQL stored procedure we'll call OnMetadataChange.
This SP is fired (by trigger or batch process), and its purpose is to generate the CREATE VIEW statements. It iterates through every first-class object, collects which fields from which tables constitute the view, and issues a CREATE statement based on that. So OnMetadataChange produces a DROP and a CREATE for each view; it generates a dynamic SQL statement that is executed once per entry in the FCO table. It is preferable to do this with triggers, but not strictly necessary. Hopefully your FCO definitions won't change too often, and when they do, there will probably be a code release as well; you can run your OnMetadataChange SP at that time.
The end result is a 2-layer database. The views constitute the first-class-object layer, which is meaningful to the application; the application only uses views. The tables constitute the 'physical' layer, which the application shouldn't care about. The meta-tables are essentially your mapping between the FCO layer and the physical layer. It takes some time to set up, but it's quite effective, and it gives you many of the benefits of EAV while at the same time giving you the concrete benefits of 3NF tables (indexability, etc.).
If you'd like I can throw some sample SQL out there.
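For instance, a sketch of the core of OnMetadataChange for the vw_Employee example above, written here as C# issuing the dynamic SQL rather than as the stored procedure (in the real thing the SELECT list and join come from the FCO/Srcs/Attribs metadata tables):
void RebuildEmployeeView(SqlConnection connection)
{
    // Drop and recreate the rolled-up view for the Employee first-class object
    const string drop = "IF OBJECT_ID('vw_Employee', 'V') IS NOT NULL DROP VIEW vw_Employee";
    const string create =
        @"CREATE VIEW vw_Employee AS
          SELECT p.ID AS PersonID, p.Name, p.Address, e.HireDate, e.Department
          FROM Person p
          JOIN Employee e ON e.PersonID = p.ID";

    using (var cmd = new SqlCommand(drop, connection)) cmd.ExecuteNonQuery();
    using (var cmd = new SqlCommand(create, connection)) cmd.ExecuteNonQuery();
}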
Part of the problem you are having is that you are trying to store schema-less data in a SQL database, which is not its strength. There are three approaches that would make your life far easier:
1) Have a column which stores the serialized custom fields in whatever format is most convenient. For example, this column could store XML (see the sketch at the end of this answer). Upsides are that you can use SQL Server Compact and pulling back a record is trivial. Downsides are that you always have to pull/push the entire XML blob to do an update, and it is difficult to impossible to query on any custom fields.
2) Upgrade to SQL Server Express and use XML columns. This is nearly the same as the first suggestion, except that any server edition of SQL Server has native support for XML data. These columns can have indexes added, and fields within the data can be used in queries.
3) Use a schema-less database, like MongoDB or CouchDB. These databases are all about storing schema-less data, so your custom fields will be no different from any other field. As such, you can index and query custom fields. Upsides are that custom data is incredibly easy to work with; downsides are that you would have to spend some time rethinking how you store data to fit within their model.
If you do not need to query based on custom fields, or if you can query custom fields within business logic, then the first option can work for you. In any other case, I would err towards something with more capabilities than compact. If cost is the deciding factor, both SQL Server Express and MongoDB are free.
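A minimal sketch of option 1, with the custom fields serialized to XML in a single column (the table, column, and field names are all placeholders):
// Custom fields for one record, serialized into a single XML string
var custom = new XElement("custom",
    new XElement("field", new XAttribute("name", "FavoriteColor"), "Blue"),
    new XElement("field", new XAttribute("name", "ShoeSize"), 43));

using (var connection = new SqlCeConnection(connectionString))
{
    connection.Open();
    var cmd = new SqlCeCommand(
        "UPDATE Customers SET CustomFields = @xml WHERE Id = @id", connection);
    cmd.Parameters.AddWithValue("@xml", custom.ToString());
    cmd.Parameters.AddWithValue("@id", customerId);
    cmd.ExecuteNonQuery();
}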