Updating relational entities - C#

I have a scenario in which I need some help.
Let us assume that there is a User who listens to some type of Music.
class User
{
    public virtual List<UserMusicType> Music { get; set; }
}
public class UserMusicType
{
    public int ID { get; set; }
    public MusicType name { get; set; }
}
public class MusicType
{
    public int ID { get; set; }
    public string Name { get; set; }
}
There is a form where I ask the user to check/select all the types of music he listens to. Suppose he selects 3 types, namely { Pop, Rock, and Electronic }.
CASE 1:
Now I want to update the User entity and insert these 3 new types. From my understanding, I first need to remove whatever MusicTypes were saved for this user in the database, then insert the new types again. Is that a correct approach? Removing all previous entries and inserting new ones? Or is there another way to do it?
CASE 2:
I am receiving the MusicType names as strings, of course. Now, while updating the User entity, I'll first have to fetch the MusicType.ID; only then will I be able to do this:
user.Music.Add(new UserMusicType() { ID = SOME_ID });
Is there a better approach for this case?
I'll be glad to have some replies from people experienced in EF. I want to learn whether there is a more efficient way of doing this, or whether my approach/models are totally wrong or could be improved.

First of all, you don't need the UserMusicType class; you can just declare the `User` class as
class User
{
    public virtual List<MusicType> Music { get; set; }
}
Entity Framework will then create a many-to-many relationship table in the database.
As for the first question, it depends. If you use this relationship anywhere else, such as for payment or an audit trail, then the best way would be to compare the posted values to the saved values. For example:
The user selected Music 1, Music 2, Music 3 for the first time and saved; in this case the 3 records will be inserted.
The user then edited his selection and chose Music 1, Music 3, Music 4; in this case the submitted values are 1, 3, 4, while the values stored in the database are 1, 2, 3.
The new values are the items that exist in the new set but not in the old one; in this case that is Music 4.
The removed values are those that exist in the old set but not in the new one; in this case that is Music 2.
The rest can be ignored.
So your query will add Music 4 and remove Music 2.
If you don't depend on the relationship elsewhere, then it is easier to just remove all the user's music and add the collection again.
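A minimal sketch of that comparison, assuming the posted and stored IDs have already been materialized as integer lists (the concrete values are taken from the example above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Posted from the form vs. currently saved in the database (example values).
var postedIds = new List<int> { 1, 3, 4 };
var savedIds  = new List<int> { 1, 2, 3 };

// In the new selection but not the old one -> rows to insert.
var toAdd = postedIds.Except(savedIds).ToList();     // { 4 }

// In the old selection but not the new one -> rows to delete.
var toRemove = savedIds.Except(postedIds).ToList();  // { 2 }

Console.WriteLine(string.Join(",", toAdd));     // 4
Console.WriteLine(string.Join(",", toRemove));  // 2
```

`toAdd` then drives the inserts into `user.Music` and `toRemove` the deletes.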
As for the second part of your question, I assume you will display some checkboxes for the user. You should make the value of each checkbox control the MusicType ID; that is what will be posted to the backend, and you can use it to link the MusicType to the user.
ex:
user.Music.Add(new MusicType { ID = [selected ID] });
You should not depend on the music name.

First question:
Actually, it is a personal preference. I wouldn't want to delete all the rows which belong to that user and then re-insert them. I would compare the collection which is posted from the form with the rows which are stored in the database. Then I would delete those entities from the database which no longer exist in the collection, and insert the new ones. You can even update those entities whose additional details have been modified.
By the way, you can easily achieve this with the newly released EntityGraphOperations for Entity Framework Code First. I am the author of this product, and I have published it on GitHub, CodeProject and NuGet. With the help of the InsertOrUpdateGraph method, it will automatically set your entities as Added or Modified. And with the help of the DeleteMissingEntities method, you can delete those entities which exist in the database, but not in the current collection.
// This will set the state of the main entity and all of its navigational
// properties as `Added` or `Modified`.
context.InsertOrUpdateGraph(user)
       .After(entity =>
       {
           // And this will delete missing UserMusicType objects.
           entity.HasCollection(p => p.Music)
                 .DeleteMissingEntities();
       });
You can read my article on CodeProject with a step-by-step demonstration; a sample project is ready for download.
Second question:
I don't know which platform you are developing your application on. But generally, I store lookup lists such as MusicType in a cache, and use a DropDownList element for rendering all the types. When the user posts the form, I get the values rather than the names of the selected types, so no additional work is required.
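In classic EF 6 (assumed here), one way to link the posted IDs to the user without an extra lookup query is to attach stub entities. The names `context` and `selectedIds` below are assumptions for illustration:

```csharp
// Hypothetical sketch: stub entities carry only the key, so EF can link
// them to the user without first fetching each MusicType from the database.
foreach (var id in selectedIds)
{
    var stub = new MusicType { ID = id };
    context.MusicTypes.Attach(stub); // marked Unchanged; no SELECT is issued
    user.Music.Add(stub);
}
context.SaveChanges();
```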

EF6 SaveChanges fails claiming field supplied is NULL

I have been using EF for a while, but I am not sure what is missing, as I am following a pattern I have used several times in the past. This is the SQL table definition:
Table LogTable
Columns
LogID (int, Identity)
fk_ref (int, not null)
action (nvarchar(60))
notes (nvarchar(200))
This is the code (names changed for ease of reading/understanding)
using (myEntity _me = new myEntity())
{
    LogTable _lt = new LogTable();
    _lt.fk_ref = 10;
    _lt.action = "Some action";
    _lt.notes = "even more text";
    _me.LogTable.Add(_lt);
    _me.SaveChanges();
}
This is where it blows up, claiming that the field "fk_ref" is null.
When I go to the edmx and Model Browser, all the fields are represented.
When I check the SQL generated for "_me.LogTable" during debugging, the SELECT statement is missing the field it claims is NULL.
I hope I have given enough information to turn on the light bulb in my head.
NOTE: I have tried dropping and re-adding the table. Gone as far as drop, clean, rebuild, re-add and no change.
Would really appreciate any help.
UPDATE: Since this is new functionality, I took the liberty of breaking the foreign key enforcement on the referenced table and ran the code demonstrated above. I also removed the NOT NULL constraint. It wrote out the record but put a NULL in the fk_ref field.
UPDATE 2: As someone asked for it, this is the class, modified to match the shortened definition above.
public LogTable()
{
    this.fk_ref = 0;
}
public int LogID { get; set; }
public Nullable<int> fk_ref { get; set; }
public string action { get; set; }
public string notes { get; set; }
Prior to the changes I mentioned in the first update, it was
public LogTable()
{
}
public int LogID { get; set; }
public int fk_ref { get; set; }
public string action { get; set; }
public string notes { get; set; }
UPDATE 3: Moving ahead with this, I saved a record via the code above and, while debugging, checked the DB for the value inserted in the fk_ref field: it was null. So I fetched the record back into the app via the LogID, manually set the field value to a random number, and called SaveChanges again. Still null. Here is the code following the SaveChanges() above:
// ... prior code ...
// assume that 4 is the LogID of the record just inserted
// and 1000 is the fk_ref intended to be inserted
LogTable _new = _me.LogTable.Where(p => p.LogID == 4).FirstOrDefault();
// when I inspect _new, the fk_ref value after SaveChanges is 1000
_new.fk_ref = 999;
_me.SaveChanges();
Retrieving the record from the DB again, fk_ref is still null.
Found the Answer
I have no idea how to categorize this but the answer was in a scope not included in the original question. My thanks to all who responded as you pushed me to look under different rocks - sometimes that is all you need.
Additional scope for the question
This project is part of an enterprise-wide management tool delivered via a one-click interface. Each of the 20 or so different business surfaces has its own management project, which may or may not be written within the IT development group (democratization and all that). The subordinate projects are user control DLLs that are then distributed along with the main shell. The entire solution gets data from several servers and more than a hundred DBs. The connection strings are managed through the main shell. What I am currently working on is a project that has a number of shared components and controls, including an enhanced logger (log4j and all related services are outlawed here). I was developing a new control for shared timekeeping that uses the shared controls project. More graphically, it looks like this:
Timekeeping had a reference to the Shared Controls project which had a EF object that connected to the DB that had the Log Table.
Project 1 had an EF object that connected to the DB that had the Log Table
The Shared Controls EF object was up to date. (the fk_ref field was added some time back)
Project 1 has been around a while (longer than the shared controls) and its EF object was out of date.
Even though Timekeeping did not have a reference to Project 1, when the EF object in the Shared Controls was writing, it used the definition from Project 1.
Oddly enough, on a read the Shared Controls retrieved all the fields.
How I "proved" it
Created a mock-up of the original application (WPF)
Added a user control project with EF connecting to a DB
Used the control to write data to the DB
Closed out of VS
Used SSMS to modify the table
Opened VS and added a new project with an EF object that connects to the same table
Added a third project that used a class in the second to write to the table referenced by the first and second
Any attempt to insert from the third project wrote a NULL into the added field
Updated Model from Database in the first control project - checked to make sure the new field was there
Attempted to insert from the third project and all fields were inserted
This seems odd to me, but heck, with all VS and MS have done for me, who can complain? Off to make up for lost time. Believe it or not, I am more than a little happy I figured it out with all your help. Maybe this experience will help someone else.

A correct way to alter a Cassandra table via C#

My issue is the following.
I have this simple model in my code:
public class Client
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
I defined a mapping for it:
public class CustomMappings : Mappings
{
    public CustomMappings()
    {
        For<Client>().TableName("clients")
                     .PartitionKey(x => x.Id);
    }
}
I created the table via the Table<TEntity>.CreateIfNotExists() method:
var table = new Table<Client>(session);
table.CreateIfNotExists();
And I can insert my data in the following way:
IMapper mapper = new Mapper(session);
var client = new Client
{
    Id = Guid.NewGuid(),
    Name = "John Smith"
};
await mapper.UpdateAsync(client);
After this, I've changed my model by adding a new property:
public class Client
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Surname { get; set; }
}
I need to alter this table, because I want to add surname column to it.
Of course, without it I get the following exception when I try to insert a value:
Cassandra.InvalidQueryException: Undefined column name surname
at Cassandra.Requests.PrepareHandler.Prepare(PrepareRequest request, IInternalSession session, Dictionary`2 triedHosts)
at Cassandra.Requests.PrepareHandler.Prepare(IInternalSession session, Serializer serializer, PrepareRequest request)
at Cassandra.Session.PrepareAsync(String query, IDictionary`2 customPayload)
at Cassandra.Mapping.Statements.StatementFactory.GetStatementAsync(ISession session, Cql cql, Nullable`1 forceNoPrepare)
at Cassandra.Mapping.Mapper.ExecuteAsync(Cql cql)
But the class Cassandra.Data.Linq.Table<TEntity> contains neither an .AlterOrCreate() nor an .Alter() method. There is also no .GetAlter() method in Cassandra.Mapping.Statements.CqlGenerator.
Which way is more appropriate to solve this problem? I have two ideas (besides creating a pull request with the needed methods to the DataStax C# driver repository on GitHub :)):
To alter tables via a CQL script in a .cql file which is executed from C# code.
To create a new table after each change of the model and migrate the old data to it.
I'm a newbie in Cassandra, and I suspect the needed method is missing from the library for a good reason. Maybe there are problems with consistency after altering, because Cassandra is a distributed database?
Changes to Cassandra's schema should be made very carefully - you're correct about its distributed nature, and you need to take it into account when making changes. Usually it's recommended to make changes via only one node, and after execution of any DDL statement (create/drop/alter) you need to check for schema agreement (for example, via the CheckSchemaAgreementAsync method of the Metadata class), and not execute the next statement until the schema is in agreement.
Talking about the changes themselves - I'm not sure that the C# driver is able to automatically generate schema changes, but you can execute the changes as CQL commands, as described in the documentation (please read carefully about the limitations!). Schema changes can be separated into 2 groups:
Those that can be applied to a table without the need to migrate data
Those that require creation of a new table with the desired structure, and migration of the data
In the first group we can do the following (maybe not a full list):
Add a new regular column to a table
Drop a regular column from a table
Rename a clustering column
The second group includes everything else:
Changing the primary key - adding or removing columns to/from it
Renaming non-clustering columns
Changing the type of a column (it's really recommended to create a completely new column with the required type, copy the data, and then drop the original column - reusing the same name with a different type is not recommended, as it could make your data inaccessible)
Data migration could be done by different tools, and it may depend on the specific requirements, like, type change, etc. But it's a different story.
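The surname case falls into the first group, so a minimal sketch (assuming an existing ISession named `session` connected to the keyspace that holds the clients table) could look like this:

```csharp
// Run the DDL statement, then wait for schema agreement before doing
// anything else (such as preparing statements against the new column).
session.Execute(new SimpleStatement("ALTER TABLE clients ADD surname text"));

bool agreed = await session.Cluster.Metadata.CheckSchemaAgreementAsync();
if (!agreed)
{
    // In a real application you would poll/retry here instead of throwing.
    throw new InvalidOperationException("Cluster schema is not yet in agreement");
}
```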

MongoDB C# driver 2.0 update collection and ignore duplicates

I am very new to MongoDB (I only spent a day learning). I have a relatively simple problem to solve, and I chose to take the opportunity to learn about this popular NoSQL database.
In C# I have the following classes:
public class Item
{
    [BsonId]
    public string ItemId { get; set; }
    public string Name { get; set; }
    public ICollection<Detail> Details { get; set; }
}
public class Detail
{
    //[BsonId]
    public int DetailId { get; set; }
    public DateTime StartDate { get; set; }
    public double Qty { get; set; }
}
I want to be able to add multiple objects (Details) to the Details collection. However, I know that some of the items I have (coming from a REST API) will already be stored in the database, and I want to avoid duplicates.
So far I can think of 2 ways of doing it, but I am not really happy with either:
Get all stored details (per item) from MongoDB, then filter in .NET, find the new items, and add them to the DB. This way I can be sure that there will be no duplicates. That is, however, far from an ideal solution.
I can add the [BsonId] attribute to DetailId (without this attribute this solution does not work) and then use AddToSetEach. This works; my only problem is that I don't quite understand it. I mean, it is supposed to only add new objects if they do not already exist in the database,
but how does it know? How does it compare the objects? Do I have any control over that comparison process? Can I supply custom comparers? Also, I noticed that if I pass 2 objects with the same DetailId (this should never happen in the real app), it still adds both, so the BsonId attribute does not guarantee uniqueness?
Is there an elegant solution to this problem? Basically, I just want to update the Details collection by passing another collection (which I know contains some objects already stored in the DB, i.e. the first collection) and ignore all duplicates.
The AddToSetEach-based version is certainly the way to go, since it is the only one that scales properly.
I would, however, recommend that you drop the entire DetailId field unless it is really required for some other part of your application. Judging from a distance, it would appear that any entry in your list of item details is uniquely identifiable by its StartDate field (plus potentially Qty, too). So why would you need the DetailId on top of that?
That leads directly to your question of why adding a [BsonId] attribute to the DetailId property does not result in guaranteed uniqueness inside your collection of Detail elements. One reason is that MongoDB simply cannot do it (see this link). The second reason is that the MongoDB C# driver does not create a unique index or attempt other magic in order to ensure uniqueness here - probably because of reason #1. ;) All the [BsonId] attribute does is tell the driver to serialize the attributed property as the "_id" field (and map it back the other way upon deserialization).
On the topic of "how does MongoDB know which objects are already present", the documentation is pretty clear:
If the value is a document, MongoDB determines that the document is a
duplicate if an existing document in the array matches the to-be-added
document exactly; i.e. the existing document has the exact same fields
and values and the fields are in the same order. As such, field order
matters and you cannot specify that MongoDB compare only a subset of
the fields in the document to determine whether the document is a
duplicate of an existing array element.
And, no, there is no option to specify custom comparers.
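To make the AddToSetEach part concrete, here is a sketch, assuming an IMongoCollection<Item> named `items`; the collection name and the "item-42" id are made up for illustration:

```csharp
// Details fetched from the REST API; some may already be in the database.
var newDetails = new List<Detail>
{
    new Detail { DetailId = 1, StartDate = new DateTime(2020, 1, 1), Qty = 2.5 },
    new Detail { DetailId = 2, StartDate = new DateTime(2020, 2, 1), Qty = 1.0 }
};

var filter = Builders<Item>.Filter.Eq(i => i.ItemId, "item-42");

// $addToSet only appends documents that do not already match an existing
// array element field-for-field (including field order).
var update = Builders<Item>.Update.AddToSetEach(i => i.Details, newDetails);

await items.UpdateOneAsync(filter, update);
```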

Entity Framework (Core), saving an incomplete model containing required fields

Using: .NET Core 1.1, Entity Framework Code First, SQL Server.
Is there any elegant way to enable a user working on a large form, represented by a complex model (40+ tables/C# objects) with multiple "required" fields, to save their work temporarily and come back to complete it afterward?
Let's say I have this model :
[Table("IdentificationInfo", Schema = "Meta")]
public class IdentificationInfo : PocoBase
{
    [...]
    public int MetaDataId { get; set; }
    [ForeignKey("MetaDataId")]
    public virtual MetaData MetaData { get; set; }

    public int ProgressId { get; set; }
    [ForeignKey("ProgressId")]
    public Progress Progress { get; set; }

    public virtual MaintenanceInfo MaintenanceInfo { get; set; }

    public int PresentationFormId { get; set; }
    [ForeignKey("PresentationFormId")]
    public PresentationForm PresentationForm { get; set; }

    private string _abstract;
    [Required]
    public string Abstract
    {
        get { return _abstract; }
        set { SetFieldValue(ref _abstract, value, "Abstract"); }
    }
    [...]
}
[Table("PresentationForm", Schema = "Meta")]
public class PresentationForm : PocoEnumeration
{
    [...]
}
The user starts to fill in everything (in a big form with multiple tabs, or one really long page!), but needs to stop and save the progress without having had time to fill in the PresentationForm part, nor the abstract. Normally those fields are not nullable in the database, so saving the model would fail. Similarly, it would also fail EF validation in the UI.
What would be nice is to use the Progress property to disable EF model validation (model.IsValid()), and also to enable the database insert even if the fields are null (it is not possible to put default values in those non-nullable fields, as they are often foreign keys to enum-like tables).
For the model validation part, I know we can make a custom validator with a custom annotation such as [RequiredIf("FieldName","Value","Message")]. I'm really curious whether there is a method to do something similar in the database?
Would the easy way be to save the model as JSON in a temporary table as long as the progress status is not complete, retrieve it when needed for editing directly from the JSON, and save it to the database only when the status is complete?
To support (elegantly) what you ask, you should design for it.
One table with its required columns should be the minimum segment that has to be filled in before any save. Make the segments an optimal size.
You could set all fields to allow null, but that would be a very BAD design, so I would not consider that option at all.
Now, if your input consists of several logical parts, and on the form they could be different tabs, then each tab maps to one table in the DB and the main table has FKs to the other tables.
Those FKs could be nullable, which would enable you to finish, say, the first 2 tabs, save, and leave the rest for later. You will then know that the FK columns that have values are finished (and maybe could still be edited), while the others are yet to be inserted. You can also have a column Status: Draft/Active/...
What's more, this design would allow you to have configurable tabs: for example, based on some selection on the main input, you could choose which tables can be filled in and which not, and enable/disable the appropriate tabs.
If, however, you don't want nullable FKs, then the solution would be some temporary storage, one option being JSON in a single string column, as you mentioned yourself. But I see no issue with nullable FKs in this case.
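A minimal sketch of that design; the MetaData table layout, the Status column, and the nullable FK name are assumptions for illustration, not the asker's actual schema:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

// Hypothetical main table: nullable FKs let each tab be saved independently.
[Table("MetaData", Schema = "Meta")]
public class MetaData
{
    public int Id { get; set; }

    // Draft/Active/... distinguishes unfinished records from completed ones.
    public string Status { get; set; }

    // Nullable FK: the IdentificationInfo tab may not be filled in yet.
    public int? IdentificationInfoId { get; set; }

    [ForeignKey("IdentificationInfoId")]
    public virtual IdentificationInfo IdentificationInfo { get; set; }
}
```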

Save database entries in an object?

I'm relatively new to programming, so here is my question.
I have a C# Forms application and an Access database.
In the database I have the data of about 200-300 cars (name, year of construction, ...).
In my Forms application I show all the cars in a list, and I have a filter where I can search for specific words and types and so on.
At the moment I react to every filter input and then execute a new SQL query and list all the cars that fit the filter.
I obviously don't think that's a good solution, because I hit the database on every KeyDown event.
Is it a viable way to create a Car class, create an instance of this class for every car, and store them in a list?
What is the best way to handle all 200 cars without reading them from the database over and over again?
If your car data does not change frequently, then you can store the data in memory and filter on that afterwards.
When a new record is added, you need to update the in-memory data.
The following code may help, although you should fill in the blanks.
class CarDA
{
    public const string YOUR_SPECIFIC_CACHE_KEY = "YOUR_SPECIFIC_CACHE_KEY";

    public IList<Car> Search(string searchExpression)
    {
        var carList = ListAllCars();
        //AMK: do the math and do the filter
        return Filtered_Car_List;
    }

    private IList<Car> ListAllCars()
    {
        var ExpireTime = 10;
        if (!MemoryCache.Default.Contains(YOUR_SPECIFIC_CACHE_KEY))
        {
            MemoryCache.Default.Add(
                new CacheItem(YOUR_SPECIFIC_CACHE_KEY, PopulateCarList()),
                new CacheItemPolicy
                {
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(ExpireTime)
                });
        }
        return MemoryCache.Default.Get(YOUR_SPECIFIC_CACHE_KEY) as IList<Car>;
    }

    private IList<Car> PopulateCarList()
    {
        //AMK: fetch car list from db and create a list
        return new List<Car>();
    }
}
class Car
{
    public string Name { get; set; }
}
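The "do the filter" blank in Search could be filled with a simple LINQ query; a sketch (the helper name FilterCars is made up here), using a case-insensitive substring match on the car name:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical filter: keep the cars whose name contains the search text,
// ignoring case; cars without a name are skipped.
IList<Car> FilterCars(IList<Car> cars, string search) =>
    cars.Where(c => c.Name != null &&
                    c.Name.IndexOf(search, StringComparison.OrdinalIgnoreCase) >= 0)
        .ToList();

var demo = new List<Car>
{
    new Car { Name = "Ford Mustang" },
    new Car { Name = "Fiat Punto" },
    new Car { Name = null }
};

var matches = FilterCars(demo, "fiat");
Console.WriteLine(matches.Count);   // 1

// Minimal Car class, as in the answer above.
class Car
{
    public string Name { get; set; }
}
```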
This seems more a philosophy question than a technical one, so I'd like to give you my 2 cents on it.
If the cars database is modified only by your application and the application is run by a single person, you can read the data once and set up a mechanism in your UI to reload the in-memory data when you update the database through the UI.
If you have multiple users reading and updating the database - so that one user can add cars while another is reading them to perform some operation - you have to prepare a mechanism that lets all users know when the data has been modified by someone else, so that they can reload the cars list. (It can be something like a timed query against a table where you store the last date and time of update of the cars table, for example, or something more sophisticated in other cases.)
That said, when working with databases I usually prepare a DataProvider class that 'speaks' with the database (creating, updating, deleting and querying data), plus a class that represents a table row of my data (in your case a Car class). The DataProvider returns a List to my user interface that I can use as is; if needed, I can move to an ObservableCollection when using it with WPF UI objects whose data can change, or I can create something like a UICar object - a Car plus other properties related to its use inside the user interface - that provides actions and functionality to the user of your application.
Once you have your Cars collection inside your application, searching the in-memory data with a simple LINQ query becomes rather simple to implement and much more effective than calling the database on every change in the search textbox.
Also, as a suggestion: to avoid querying (memory or DB, it is the same) on every keystroke, set a timer (a few hundred milliseconds) before starting the query, and reset it in the key event, so that if the user is typing a word the query runs only when he/she stops typing.
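A sketch of that debounce idea for a WinForms search box; the names searchBox, carBindingSource and _carDA are assumptions tied to the CarDA example above:

```csharp
// Hypothetical form fragment: the query fires only after 300 ms of silence.
private readonly System.Windows.Forms.Timer _debounce =
    new System.Windows.Forms.Timer { Interval = 300 }; // milliseconds

private void Form1_Load(object sender, EventArgs e)
{
    _debounce.Tick += (s, args) =>
    {
        _debounce.Stop();
        carBindingSource.DataSource = _carDA.Search(searchBox.Text);
    };
    searchBox.TextChanged += (s, args) =>
    {
        _debounce.Stop();   // reset the countdown on every keystroke
        _debounce.Start();
    };
}
```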
HTH
