Let's say that I have an object model with many different objects that I then map to database tables using Entity Framework. That all works fine. The problem is what happens when I want to save more objects to the database. The objects in the model are interconnected, so when I am saving objects to the database, they may have references to objects that are already in the database.
For example, suppose I have an object called Person which has a property Friends, which is itself a list of people. This means there is a table of people in the database, and each of them has a list of friends who are identified by their IDs. Assuming that two people with the same ID are the same person, I believe Entity Framework will throw an error if I try to add one of them again. So the problem is this: when I add another Person object who already has a list of friends, I want to add the friends who are not yet in the database and skip the ones that already are. Each of those people may in turn have friends who are or aren't in the database, and I want them considered too. Another thing to consider is that even if the person is already in the database, I want to look through their list of friends and add any new ones.
I feel like this should be a fairly common problem when working with databases, but I'm probably using the wrong vocabulary, since nothing useful is coming up in my searches; mostly I find articles about scanning databases for duplicates after the fact. I want this to be handled in a business logic C# layer through an Entity-Framework-approved approach. Any help is appreciated. Thank you in advance.
The problem you pose is valid, but it does not fundamentally differ from adding objects to a Dictionary or a HashSet. You simply have to check whether a certain Person or related friend is already contained in the Dictionary or, if you will, the database table. The beauty of using Entity Framework is that it makes these operations very similar.
So you can use either a method like context.People.Contains(newPerson) (you must implement IEquatable<Person> for that), or context.People.Find(id).
Likewise you can use
foreach (var friend in newPerson.Friends)
{
    if (!context.People.Contains(friend))
        context.People.Add(friend);
}
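For the Contains check above to treat two Person instances with the same ID as the same person (at least for in-memory comparisons), Person needs value equality. A minimal sketch, where the Id and Name property names are assumptions:

```csharp
using System;

// Sketch: ID-based value equality so in-memory Contains checks match
// two Person instances that represent the same database row.
public class Person : IEquatable<Person>
{
    public int Id { get; set; }
    public string Name { get; set; }

    public bool Equals(Person other) => other != null && other.Id == Id;
    public override bool Equals(object obj) => Equals(obj as Person);
    public override int GetHashCode() => Id.GetHashCode();
}
```

Note that when Contains is translated to SQL against the database, EF compares keys itself; the IEquatable implementation matters for in-memory collections and local change tracking.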
Background:
I'm building a C# data migration tool to move data from an older application (with SQL Server database) to our new application (also using SQL Server database), but I am going through our Web API rather than direct inserts into the new database to reuse business logic and whatnot. I'm using Entity Framework for reading from the legacy database.
Issue:
The older database system, for reasons unknown to me, uses an archive table in addition to the table with the latest version of the records. For example, there may be a "person" table and then also an "a_person" table with multiple archived copies of previous records. I am planning to keep these archived records within the same table, chained together in a point-in-time architecture. The tables have essentially identical columns, but to EF6 they are two different classes, which means I'm doubling all my code when I move values from "person" and "a_person" to the new data object that will be sent to the API. If it were just the one example, no big deal, but about half a dozen tables have this pattern.
I'm trying to think of the best way to handle this. I initially thought about adding interfaces to the generated EF6 classes as a kind of syntactic sugar to allow passing them to a common method, but I would still need to cast back to the original classes, so it doesn't buy me anything.
Next I thought to serialize each of the tables into a JSON string that I could deserialize into a Dictionary, then have a generic method pull my values out. However, I feel like that may be unnecessarily slow.
Most recently I'm thinking about going back to my original idea with interfaces, but using partial classes for the EF6 entities that implement a common interface and return the values of the underlying entity. So both the "person" and "a_person" entities would have partial classes which implement an interface and return all the values for the person. Again, though, this just feels like a fancier way of duplicating my code for accessing the values.
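As a sketch of that partial-class idea, where the Person/APerson classes below are hypothetical stand-ins for the EF6-generated entities, one interface can surface both tables to a single method without editing generated code:

```csharp
using System;

// Stand-ins for the EF6-generated entity classes (generated code elided).
public partial class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public partial class APerson
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical common interface exposing the columns shared by both tables.
public interface IPersonRecord
{
    int Id { get; }
    string Name { get; }
}

// Because EF6 generates partial classes, the interface can be attached
// in separate files without touching the generated code.
public partial class Person : IPersonRecord { }
public partial class APerson : IPersonRecord { }

// A single mapping method now serves both the live and archive tables.
public static class PersonMapper
{
    public static string Describe(IPersonRecord record) =>
        $"{record.Id}: {record.Name}";
}
```

The interface members read straight off the auto-properties, so this avoids a second copy of the mapping code rather than hiding it behind a cast.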
Serializing and deserializing feels like the only way to truly eliminate the duplicate code. While the length of time the migration takes isn't a critical factor, I'd rather not create the most sluggish solution possible. I guess there's also reflection. Would reflection be preferred over serializing and deserializing?
The solution I settled on, and was quite happy with, was based on the comment from AlwaysLearning: I unified the two records.
I started working with Serenity a couple of days ago and I got stuck on a small problem.
I have a grid made from a view of two tables, and I would like to add the values from that grid into those tables.
I know that I cannot add through the view directly and that I need to create some functions which will add those items in the right places in those tables.
This is an example of the schema:
I created a view so the grid will be displayed with columns from both tables.
This is an example of the grid:
Serenity is a great tool if you want to build an app very quickly, but it is not very user friendly if you want to modify something in the generated code.
I tried to add some functions from StackOverflow, but those would modify the entire functionality of the program, and this type of request is needed only once. I cannot modify the default create function, because for the rest of the tables that function would become useless.
If someone has worked with Serenity and has an idea, please give me a hint so I can resolve this issue.
Thanks!
Serenity is great, and I thank its author. For those who may hit this thread: don't give up; check the current documentation. I have a few points that may help those who follow. I had a proper database with foreign keys and stored procedures that I wished to use to improve performance, and I also had model classes mapped to the procs, representing the objects I wished to use, which routinely join multiple tables.
I found that Serenity included the joined table columns in the entity row class but not in the columns class, and the row class property was attributed with the Expression tag, which I removed. I believe I saw a comment from the author that he uses the foreign key declarations for this purpose.
In my case I was able to add a property to the columns class.
In the endpoint class I retrieved my proc results into my domain objects, then populated a list of row class instances and added that list to the method's return object.
This process resulted in a properly populated grid object.
Had I known how this worked to start with, it would have been better to populate the list of row class objects directly.
Hope this helps.
Serenity allows updating two tables at once; there is even a sample for this in the Customer dialog. At the bottom, Customer Details are listed from, and updated on, a separate table.
Serenity doesn't make assumptions about your UI or tables, contrary to what Mark Ewer states. I use Serenity on many different legacy applications and databases, so it has to adapt to any DB structure.
Of course, because the code generator handles the simple cases, generating UI for a simple table is easier. For more complex cases, you should know where to inject the plug. That's where samples, docs, and GitHub issues come in.
I might be way off here, and this question is probably bordering on subjective, but here goes anyway.
Currently I use IList<T> to cache information from the database in memory so I can use LINQ to query it. I have an ORM-ish layer I've written with the help of some questions here on SO, to easily query the information I need from the DB. For example:
IList<Customer> customers = DB.GetDataTable("SELECT * FROM Customers").ToList<Customer>();
Its been working fine. I also have extension methods to do CRUD updates on single items within these lists:
DB.Update<Customer>(customers[0]);
Again working quite well.
Now in the GUI layer of my app, specifically when binding DataGridViews for the user to edit the data, I find myself bypassing this DAL and using TableAdapters directly within the forms, which breaks the layered architecture and smells a bit to me. Also, since I'm using TableAdapters here and ILists there, there are differing standards throughout my code which I would like to consolidate into one.
Ideally, I would like to be able to bind to these lists and then have the DAL update the list's 'dirty' data for me. To me, this process would involve the following:
Traversing the list for any 'dirty' items
For each of these, see if there is already an item with the PK in the DB
If (2), then update, else insert
Finally, perform a DELETE FROM <table> WHERE ID NOT IN (<all IDs in the list>) query
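The four steps above could be sketched roughly like this. The Customer shape, the IsDirty flag, and the string-built SQL are all illustrative assumptions; a real DAL would use parameterized commands rather than concatenating values:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical dirty-tracking entity for the sketch.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsDirty { get; set; }
}

public static class ListCommitter
{
    // Returns the SQL statements a commit would issue, in order:
    // upserts for dirty items, then one delete for rows no longer in the list.
    public static List<string> PlanCommit(IList<Customer> list, ISet<int> existingIds)
    {
        var plan = new List<string>();
        foreach (var c in list.Where(c => c.IsDirty))            // step 1: find dirty items
        {
            plan.Add(existingIds.Contains(c.Id)                  // step 2: is the PK in the DB?
                ? $"UPDATE Customers SET Name = '{c.Name}' WHERE ID = {c.Id}"       // step 3: update
                : $"INSERT INTO Customers (ID, Name) VALUES ({c.Id}, '{c.Name}')"); // step 3: insert
        }
        var ids = string.Join(", ", list.Select(c => c.Id));     // step 4: delete missing rows
        plan.Add($"DELETE FROM Customers WHERE ID NOT IN ({ids})");
        return plan;
    }
}
```

Separating the "plan" from its execution like this also makes the commit logic testable without a database connection.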
I'm not entirely sure how this is handled in a TableAdapter, but I can see the performance of this method dropping significantly and quite quickly with increasing items in the list.
So my question is this:
Is there an easier way of committing a List to a database? Note the word commit, as it may involve inserts, updates, or deletes.
Should I maybe convert to a DataTable? e.g. here
I'm sure some of the more advanced ORMs can do this, but is there any mini-ORM (e.g. Dapper/PetaPoco/Simple.Data etc.) that can do it for me? I want to keep things simple (as with my current DAL) and flexible (I don't mind writing the SQL if it gets me exactly what I need).
Currently I use IList to cache information from the database in memory so I can use LINQ to query information from them.
LINQ also works over DataSets (LINQ to DataSet), so this is not a compelling reason by itself.
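For instance, LINQ to DataSet lets you query an in-memory DataTable much like a list; the table and column names here are made up:

```csharp
using System;
using System.Data;
using System.Linq;

// Querying a DataTable with LINQ to DataSet, no custom list cache needed.
// AsEnumerable() and Field<T>() come from System.Data.DataSetExtensions.
public static class DataSetLinqDemo
{
    public static string[] NamesStartingWith(DataTable table, string prefix) =>
        table.AsEnumerable()
             .Where(r => r.Field<string>("Name").StartsWith(prefix))
             .Select(r => r.Field<string>("Name"))
             .ToArray();
}
```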
Better decide what you really want/need:
a full ORM like Entity Framework
use DataSets with DataAdapters
use basic ADO.NET (DataReader and List<>) and implement your own change-tracking.
You can mix them to some extent but like you noted it's better to pick one.
Is there any way to implement in-memory or fixed/hardcoded object instances in NHibernate that appear, to all intents and purposes, to be real instances of the object read from the database?
I have a historical database that has a number of missing foreign key values against a number of different tables as they are fixed/hard coded in the old DAL.
This is causing me problems in my NHibernate mapping.
An example of this would be a fixed immutable user, say 'ADMIN' that exists in code but not in the database. This 'ADMIN' user is still used in various foreign keys so needs to exist in NHibernate so that it can manage the FK mapping.
I've managed to cheat the loading by using a SQL view which has the hard-coded rows explicitly added, but of course I can't write to a view like that, so I need an alternative solution.
I did find a reference to the uNhAddIns WellKnownInstanceType that seems to do something similar, but I couldn't get it to work.
Anyone have any alternative suggestions?
One trick I can think of is attaching the imaginary User instance to the session before querying, using sess.Lock(admin, LockMode.None); that should take care of the reference. But I'm not sure what happens when eager loading the reference.
I want to create a dynamic, data-driven application for practice purposes.
If I have a model with an entity and I need a new one, then I want to create it only in the diagram (model), and that's all.
Everything else should be done dynamically: adding the new entity to e.g. a ListBox, making it clickable, and creating a "Show Data" tab and a "New/Edit" tab with the right labels and textboxes in it (for editing/creating).
What I would like to know is, how can I:
Get the number of entities
Is it possible to update the database without deleting it and creating it anew (else I would lose all my data)? If yes, how?
Get all the fields from an entity? (Must I work with Reflection here?)
I hope someone can help.
1. Get the number of entities
Using the Context object you get the set of entities; there you can use .Count() to check the number of entities of that type.
2. Is it possible to update the database without deleting it and creating it anew (else I would lose all my data)? If yes, how?
This question is a little unclear: do you want to delete the database, or an entity? You can perform any operation on entities, and it will be reflected on the back end if you want. Entity Framework is not designed for database delete and create operations.
Yes, you can add a new entity to the model and then map it to the back-end tables; it is possible to modify the model to match your back end. You can even create custom entities that reflect operations on multiple tables in the database, but take some care with data integrity.
3. Get all the fields from an entity? (Must I work with Reflection here?)
Yes. To access the properties of an entity without knowing their names, you should use reflection.
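A minimal sketch of reading an entity's property names via reflection, where Person is a hypothetical stand-in for a generated entity class:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-in for an EF-generated entity class.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class EntityInspector
{
    // Returns the names of all public instance properties
    // (the "fields" of the entity) without knowing them in advance.
    public static string[] GetFieldNames(Type entityType) =>
        entityType.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                  .Select(p => p.Name)
                  .ToArray();
}
```

The same Type-based call works for any entity class, which is what makes a fully dynamic UI possible: generate one label and textbox per returned property name.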