I am working on an existing project whose database contains what appears to be manually created data, entered via SQL Server / SSMS.
Later in the project's life, someone else created a seed data / configuration file. This is the point at which I was introduced to the solution. I have created a new migration file and found that I am getting an error:
PRIMARY KEY constraint 'PK_AnswerTypes'. Cannot insert duplicate key in object 'forms.AnswerTypes'. The duplicate key value is (1)
Looking through Azure Pipelines, this appears to have been an issue since the configuration file was created.
The Configure code is:
public void Configure(EntityTypeBuilder<FieldType> builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.ToTable("FieldTypes", FormEngineSchemas.Forms);

    // TODO: convert to enum
    builder.HasData(
        new FieldType
        {
            FieldTypeId = 1,
            FieldTypes = "NUMBER"
        },
        new FieldType
        {
            FieldTypeId = 2,
            FieldTypes = "DROPDOWN"
        },
        new FieldType
        {
            FieldTypeId = 3,
            FieldTypes = "DATE"
        });
}
The Up script is:
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.InsertData(
        schema: "forms",
        table: "AnswerTypes",
        columns: new[] { "AnswerTypeId", "AnswerTypes" },
        values: new object[,]
        {
            { 1, "Range" },
            { 2, "Length" },
            { 3, "regex" }
        });
}
I would be grateful if someone could advise me how to get past this. I would prefer not to delete the existing data in the database, because I don't want to risk potential orphaned records or failed deletes.
I have had a look around, and this is the closest thing I can find to my issue:
https://github.com/dotnet/efcore/issues/12324
Judging from the following references, it looks like the seeding has been done correctly:
https://www.learnentityframeworkcore.com/migrations/seeding
https://learn.microsoft.com/en-us/archive/msdn-magazine/2018/august/data-points-deep-dive-into-ef-core-hasdata-seeding
So, the questions I have are:
If the database had been created and seeded from the beginning and all worked fine, would all subsequent migrations work OK and not attempt to seed the data again?
What is the best way to get around this issue?
Is there anything I might have missed or not considered?
Thanks
Simon
Since you do not want to lose data, you can consider using migrationBuilder.UpdateData() instead of migrationBuilder.InsertData(). The InsertData method adds new records to the database, while UpdateData searches for and updates existing records.
Make sure you are on EF Core v2.1 or later for this to work.
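For illustration, the question's Up method could be rewritten with UpdateData along these lines. This is only a sketch, using the schema, table, and column names shown in the question, and it assumes the rows with keys 1-3 already exist in forms.AnswerTypes:

```csharp
// Sketch: update the existing seed rows in place instead of re-inserting them.
// Schema/table/column names are taken from the question's Up script.
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.UpdateData(
        schema: "forms",
        table: "AnswerTypes",
        keyColumn: "AnswerTypeId",
        keyValue: 1,
        column: "AnswerTypes",
        value: "Range");

    migrationBuilder.UpdateData(
        schema: "forms",
        table: "AnswerTypes",
        keyColumn: "AnswerTypeId",
        keyValue: 2,
        column: "AnswerTypes",
        value: "Length");

    migrationBuilder.UpdateData(
        schema: "forms",
        table: "AnswerTypes",
        keyColumn: "AnswerTypeId",
        keyValue: 3,
        column: "AnswerTypes",
        value: "regex");
}
```

If some of the keys might not exist yet, this approach alone won't create them; you would need to check what is actually in the table first.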
So I made a program that gets the schema from a table in a MySQL server, creates a table based on that schema, and inserts the data rows, which are saved in a CSV file in Google Cloud Storage.
Code for getting the schema from MySQL:
foreach (string table in listOfTableNames)
{
    var query = new MySqlCommand($"desc {table}", openedMySQLConnection);
    using (MySqlDataReader reader = query.ExecuteReader())
    {
        var dt = new DataTable(table);
        dt.Load(reader);
        object[][] result = dt.AsEnumerable().Select(x => x.ItemArray).ToArray();
        TableSchemas.Add(table, result);
    }
}
Google BigQuery table maker from schema looped per table:
var schemaBuilder = new TableSchemaBuilder();
foreach (var column in dictionaryOfSchemas[myTablename])
{
    string columnType = column[1].ToString().ToLower();
    schemaBuilder.Add(
        column[0].ToString(),
        columnType.Contains("varchar")   ? BigQueryDbType.String :
        columnType.Contains("int")       ? BigQueryDbType.Int64 :
        columnType.Contains("decimal")   ? BigQueryDbType.Float64 :
        columnType.Contains("timestamp") ? BigQueryDbType.DateTime :
        columnType.Contains("datetime")  ? BigQueryDbType.DateTime :
        columnType.Contains("date")      ? BigQueryDbType.Date :
        BigQueryDbType.String);
}
TableSchema schema = schemaBuilder.Build();
BigQueryTable newTable = bigquery.GetDataset(myDataset).CreateTable(myTablename, schema);
Loading the CSV into the newly created BigQuery table, looped per table:
bigquery.CreateLoadJob(
$"gs://{myProjectIdString}/{myBucketNameString}/{myCSVFilename}",
newTable.Reference,
schema,
new CreateLoadJobOptions()
{
SourceFormat = FileFormat.Csv,
SkipLeadingRows = 1
}
).PollUntilCompleted();
No error shows up after running the CreateLoadJob method.
The schema of the new table seems to be good and matches the CSV and the MySQL table.
Here's a sneak peek into the CSV file in Cloud Storage, with some data redacted.
But there's still no data in the table.
Am I doing something wrong? I'm still learning Google services, so any help and insight would be appreciated. :)
There are a few things wrong here, but it's mostly to do with the data rather than with the code itself.
In terms of code: when a job has completed, it may still have completed with errors. You can call ThrowOnAnyError() to observe that, and you can get detailed errors via job.Resource.Status.Errors. (I believe the first of those detailed errors is the one in job.Resource.Status.ErrorResult.)
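As a sketch (reusing the job object produced by CreateLoadJob in the question's code), checking a completed load job for errors might look like this:

```csharp
// Poll the load job to completion, then surface any errors.
// 'job' is the BigQueryJob returned by bigquery.CreateLoadJob(...) above.
BigQueryJob completed = job.PollUntilCompleted();

// Throws if the job finished with errors, instead of failing silently.
completed.ThrowOnAnyError();

// Alternatively, inspect the detailed error list manually:
var errors = completed.Resource.Status.Errors;
if (errors != null)
{
    foreach (var error in errors)
    {
        Console.WriteLine($"{error.Reason}: {error.Message}");
    }
}
```

Either way, a silently "completed" job with a bad source URL or unparseable rows will now report what actually went wrong.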
Once the correct storage URL is provided (which would be observed that way as well) you'll see errors like this, with the CSV file you provided:
Error while reading data, error message: CSV processing encountered too many errors, giving up. Rows: 1; errors: 1; max bad: 0; error percent: 0
Could not parse '06/12/2014' as DATE for field delivery_date (position 0) starting at location 95 with message 'Unable to parse'
At that point, the problem is in your CSV file. There are two issues here:
The date format is expected to be ISO-8601, e.g. "2014-06-12" rather than "06/12/2014".
The date/time format is expected to include seconds as well, e.g. "2014-05-12 12:37:00" rather than "05/12/2014 12:37".
Hopefully you're able to run a preprocessing job to fix the data in your CSV file.
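A minimal sketch of such a preprocessing step, assuming the date sits in the first column in MM/dd/yyyy format (the file names, column index, and format strings are placeholders to adjust for your actual file):

```csharp
using System;
using System.Globalization;
using System.IO;
using System.Linq;

class CsvDateFixer
{
    static void Main()
    {
        var lines = File.ReadAllLines("input.csv");
        var fixedLines = lines.Select((line, index) =>
        {
            if (index == 0) return line; // keep the header row as-is

            // Naive split: assumes no commas inside quoted fields.
            var fields = line.Split(',');

            // Reformat the date in column 0 from MM/dd/yyyy to ISO-8601 yyyy-MM-dd.
            fields[0] = DateTime
                .ParseExact(fields[0], "MM/dd/yyyy", CultureInfo.InvariantCulture)
                .ToString("yyyy-MM-dd");

            return string.Join(",", fields);
        });
        File.WriteAllLines("output.csv", fixedLines);
    }
}
```

If your CSV contains quoted fields with embedded commas, use a proper CSV parser rather than Split; the same ParseExact/ToString pattern applies for the date/time columns that need seconds appended.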
This is assuming that the schema you've created is correct, of course - we can't tell that from your post, but here's the schema that worked for me:
var schema = new TableSchemaBuilder
{
{ "delivery_date", BigQueryDbType.Date },
{ "delivery_hour", BigQueryDbType.Int64 },
{ "participant_id", BigQueryDbType.String },
{ "resource_id", BigQueryDbType.String },
{ "type_id", BigQueryDbType.String },
{ "price", BigQueryDbType.Numeric },
{ "date_posted", BigQueryDbType.DateTime },
{ "date_created", BigQueryDbType.DateTime }
}.Build();
After fiddling with my variables and links, I figured out that the URL to your bucket object should be gs://<bucket_name>/<csv_filename>, not gs://<project_id>/<bucket_name>/<csv_filename>.
My code is running OK now and has successfully transferred the data.
I would like to use EF migrations to add some data to my DB. The catch is that all Ids are auto-generated. I found the standard way to achieve this with EF Core, which is the following:
modelBuilder.Entity<TypeNote>()
.HasData(
new TypeNote { Id = 1, Name = "General" },
new TypeNote { Id = 2, Name = "E-mail" },
new TypeNote { Id = 3, Name = "Meeting" },
new TypeNote { Id = 4, Name = "Reminder" },
new TypeNote { Id = 5, Name = "Telephone" },
new TypeNote { Id = 6, Name = "Visit" }
);
My issue here is that I don't want to specify the Id, but it seems there is no other way when using the HasData method.
Do you know another way to add data to the DB using a migration?
I found a way to do an insert during the migration:
migrationBuilder.InsertData(
table: "TypeNote",
columns: new[] { "Name" },
values: new object[,]
{
{ "Test" },
{ "Test1" }
});
The thing is, I wanted to access the dbContext inside the migration, which is impossible because the DB is in the middle of being updated.
This is standard behaviour for HasData: auto-generated values are not generated when seeding with the HasData method, so you must supply explicit key values.
Please have a look at this EntityFrameworkCore issue.
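As an additional sketch (not from the answer above): a raw SQL statement inside the migration also lets the database generate the identity values. The table name "TypeNote" is taken from the question's InsertData call and may need adjusting to the actual mapped table name:

```csharp
// Inside the migration's Up method: plain INSERTs let the database
// assign the auto-generated Id values, so no Ids need to be hard-coded.
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.Sql(
        @"INSERT INTO TypeNote (Name)
          VALUES ('General'), ('E-mail'), ('Meeting'),
                 ('Reminder'), ('Telephone'), ('Visit');");
}
```

The trade-off is that EF Core no longer tracks these rows as model-level seed data, so re-running the migration on an already-seeded database would duplicate them unless you guard the SQL accordingly.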
I think what you are looking for is referred to as "seed data".
Please check out this Microsoft doc on how to add seed data to your database, and let me know whether or not that is what you're looking for.
I have the following classes
public class Lookup
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Order { get; set; }
}

public class CatalogType : Lookup // this was added on Add-Migration "Second"
{
}

public class Catalog : Lookup
{
    public int CatalogTypeId { get; set; } // this was added on Add-Migration "Second"
    public CatalogType CatalogType { get; set; }
}
I already have data in the database in the Lookups table, representing groups of lookup values (gender, marital statuses, catalog, etc.), and the Lookups table contains a row with Name = "General" that is used by Catalog (i.e. its Discriminator field is "Catalog").
In the Configuration file, inside the Seed method, I wrote this code:
context.Lookups.AddOrUpdate(p => new { p.Name, p.GetType().FullName },
new CatalogType
{
Name = "General",
IsActive = true,
Order = 1,
},
new CatalogType
{
Name = "Custom",
IsActive = true,
Order = 2,
});
context.SaveChanges();
My problem: I first tried context.Lookups.AddOrUpdate(p => p.Name), and when I run update-database, the migration fails with "sequence contains more than one element".
Then I tried to use p.GetType().Name, and the error was:
An anonymous type cannot have multiple properties with the same name.
Then I tried p.GetType().FullName, and upon executing the update-database command, I got the following error:
The properties expression 'p => new <>f__AnonymousType18`2(Name =
p.Name, FullName = p.GetType().FullName)' is not valid. The expression
should represent a property: C#: 't => t.MyProperty' VB.Net:
'Function(t) t.MyProperty'. When specifying multiple properties use an
anonymous type: C#: 't => new { t.MyProperty1, t.MyProperty2 }'
VB.Net: 'Function(t) New With { t.MyProperty1, t.MyProperty2 }'.
I know the problem is caused by the fact that the Lookups table already contains Name = "General", but how do I tell Entity Framework to take the Discriminator column into consideration in the AddOrUpdate call?
In other words, I might have the same Name for two different types, and I want to add data when adding a migration. For example, if I already have "red car" and "red door" and want to add "red apple", my current setup will not allow it. How can I solve this?
I hope my explanation of the problem was clear.
Try this:
// put the two lines below in the "OnModelCreating" method of your Context
modelBuilder.Entity<Lookup>().Map<Catalog>(m => m.Requires("IsCatalog").HasValue(true));
modelBuilder.Entity<Lookup>().Map<CatalogType>(m => m.Requires("IsCatalog").HasValue(false));
// then:
context.Lookups.AddOrUpdate(p => new { p.Name, p.IsCatalog },
new CatalogType
{
Name = "General",
IsActive = true,
Order = 1,
},
new CatalogType
{
Name = "Custom",
IsActive = true,
Order = 2,
});
//context.SaveChanges(); //if you used base.OnModelCreating(modelBuilder);
base.OnModelCreating(modelBuilder); // then you don't need to save
After researching this subject, I came to the following conclusions:
TPH works with a discriminator field to distinguish between derived classes.
TPC does not depend on a discriminator field, because the primary key of the derived class is the same as the primary key of the base class.
When trying to add data to Catalog with the constraint (if Name is repeated then update, else create), EF failed to set Discriminator = 'Catalog'; since it is TPC, the update fails because the table already contains the other 'General' row.
When trying to add mapping conditions, EF does not allow using the same derived class for TPC and TPH at the same time.
Hope this helps others who run into the same problem.
I am using Entity Framework Code First, and I have two tables that need to be initialized with fixed entries only on table creation. For example:
Columns for table A:
-Id
-Name
Columns for table B:
-Id
-Name
On table creation, I need to add the below information for each table:
For table A:
Id Name
1 "Name_A1"
2 "Name_A2"
3 "Name_A3"
4 "Name_A4"
For table B it should be more or less the same:
1 "Name_B1"
2 "Name_B2"
3 "Name_B3"
4 "Name_B4"
5 "Name_B5"
6 "Name_B6"
7 "Name_B7"
8 "Name_B8"
So how can I achieve this? I have thought about overriding the Seed method and adding those records there (hard-coded), but I do not know if that is the best solution. In the development phase it may be good practice to override this method and add the information to the tables there, but once the application is deployed to the customer, maybe it is better to create the database with Entity Framework on the development machine, export it, import it on the customer's computer, and finally fill those tables with the fixed entries.
As you and Raphaël mentioned, you can use the Seed() method:
protected override void Seed(MyContext context)
{
context.A.AddOrUpdate
(
a => a.Name,
new A { Name="Name_A1" },
new A { Name="Name_A2" },
new A { Name="Name_A3" },
new A { Name="Name_A4" }
);
context.SaveChanges();
context.B.AddOrUpdate
(
b => b.Name,
new B { Name="Name_B1" },
new B { Name="Name_B2" },
new B { Name="Name_B3" },
new B { Name="Name_B4" },
new B { Name="Name_B5" },
new B { Name="Name_B6" },
new B { Name="Name_B7" },
new B { Name="Name_B8" }
);
context.SaveChanges();
}
Using AddOrUpdate will ensure no duplicate data is inserted into the database. The first parameter, a => a.Name, lets EF identify each entity. The identity could be a PK value of the entity, but if your DB is generating these, you won't know the value, so using AddOrUpdate will check to see if the Name exists. If it doesn't, a new record will be inserted. If it does find a match, the entity will be updated. It is of course up to you to decide on using something that will uniquely identify each entity.
You can use EF Migrations to achieve that. First, enable migrations in your project by running this command in the Package Manager Console:
Enable-Migrations -ContextTypeName yourContextName
The above command will create a folder called Migrations; then, in the Configuration file, you can override the Seed method:
protected override void Seed(MyContext context)
{
context.tableA.AddOrUpdate(
p => p.Name,
new tableA { Name = "Name_A1" },
new tableA { Name = "Name_A2" },
new tableA { Name = "Name_A3" },
new tableA { Name = "Name_A4" }
);
context.tableB.AddOrUpdate(
p => p.Name,
new tableB { Name = "Name_B1" },
new tableB { Name = "Name_B2" },
new tableB { Name = "Name_B3" },
new tableB { Name = "Name_B4" }
);
}
In the above method you can define your lookup data, such as lists of postal codes, lists of countries, and so on.
I have (errr, had) a working WPF application that manipulates database info (using Entity Framework, database first).
The structure of the data is four tables of finance info (all mapped 1:1 to the main table of the five), plus a couple of lookup tables with foreign key references in the main table.
I added a table (another 1:1 mapping to the main table) in SqlServer and then ran the 'Update Model from Database...' wizard to add the new table to the model. Everything looks alright in the .edmx file, including the '0..1' relationship link.
However, when I try to save, I am receiving a 'Violation of Unique Constraint' error.
My creation code:
private void AddNewStatementsQuery(LGFinanceEntities lGFinanceEntities)
{
    StatementsMain newStatement = StatementsMain.CreateStatementsMain(9999, this.LocalGovt.StakeholderID, 161, this.Year.FinancialYearID);
    StatementsIncome newInc = StatementsIncome.CreateStatementsIncome(newStatement.StatementsMainID);
    StatementsNote newNote = StatementsNote.CreateStatementsNote(newStatement.StatementsMainID);
    StatementsRSSFinPos newRSSFinPos = StatementsRSSFinPos.CreateStatementsRSSFinPos(newStatement.StatementsMainID);
    StatementsSurplusDeficit newSurplusDeficit = StatementsSurplusDeficit.CreateStatementsSurplusDeficit(newStatement.StatementsMainID);

    lGFinanceEntities.StatementsMains.Context.AddObject("StatementsMains", newStatement);
    lGFinanceEntities.StatementsMains.Context.AddObject("StatementsIncomes", newInc);
    lGFinanceEntities.StatementsMains.Context.AddObject("StatementsNotes", newNote);
    lGFinanceEntities.StatementsMains.Context.AddObject("StatementsRSSFinPos", newRSSFinPos);
    lGFinanceEntities.StatementsMains.Context.AddObject("StatementsSurplusDeficit", newSurplusDeficit);

    if (lGFinanceEntities.SaveChanges() != 1) // this is causing the exception
    {
        MessageBox.Show("Error. New Statements not created", "Database Error");
    }
}
Prior to adding the new table, the above code was working. The only change was the addition of the lines:
StatementsSurplusDeficit newSurplusDeficit =
StatementsSurplusDeficit.CreateStatementsSurplusDeficit(newStatement.StatementsMainID);
...
lGFinanceEntities.StatementsMains.Context.AddObject("StatementsSurplusDeficit",
newSurplusDeficit);
Interestingly, something is creating a record somewhere, because when I check SqlServer I do have new records for the 5 tables. Also interestingly, each time I try something and run the method, the primary key has been incremented by 2. It looks like the same record is being added twice, but I can't work out how.
Edit:
Following a comment suggestion, I changed the 'AddNewStatementsQuery' so lines that looked like:
lGFinanceEntities.StatementsMains.Context.AddObject("StatementsMains", newStatement);
were changed to:
lGFinanceEntities.StatementsMains.AddObject(newStatement);
and then to:
lGFinanceEntities.AddObject("StatementsMains", newStatement);
This did not solve the key violation error.
How do I find out where/how the data is being saved twice (i.e., other than the lGFinanceEntities.SaveChanges() call in the if statement)?
Hmm. Looking at your code, I can see it being simplified down to:
// Create the new objects
var statement = new StatementsMain()
{
    // Property names here are assumed; the original factory call was
    // CreateStatementsMain(9999, this.LocalGovt.StakeholderID, 161, this.Year.FinancialYearID)
    StakeholderID = this.LocalGovt.StakeholderID,
    UnknownLookupID = 161,
    FinancialYearID = this.Year.FinancialYearID
};
var income = new StatementsIncome()
{
StatementsMain = statement
};
var note = new StatementsNote()
{
StatementsMain = statement
};
var rss = new StatementsRSSFinPos()
{
StatementsMain = statement
};
var surplus = new StatementsSurplusDeficit()
{
StatementsMain = statement
};
// Add the objects into the context
lGFinancialEntities.AddObject(statement);
lGFinancialEntities.AddObject(income);
lGFinancialEntities.AddObject(note);
lGFinancialEntities.AddObject(rss);
lGFinancialEntities.AddObject(surplus);
// Persist the objects to the data storage
lGFinancialEntities.SaveChanges();
Or, even better:
// Create the main object
var statement = new StatementsMain()
{
    // Property names here are assumed; the target of the value 161 is unknown
    StakeholderID = this.LocalGovt.StakeholderID,
    UnknownLookupID = 161,
    FinancialYearID = this.Year.FinancialYearID
};
// Add the objects into the context
lGFinancialEntities.AddObject(statement);
lGFinancialEntities.AddObject(new StatementsIncome() { StatementsMain = statement });
lGFinancialEntities.AddObject(new StatementsNote() { StatementsMain = statement });
lGFinancialEntities.AddObject(new StatementsRSSFinPos() { StatementsMain = statement });
lGFinancialEntities.AddObject(new StatementsSurplusDeficit() { StatementsMain = statement });
// Persist the objects to the data storage
lGFinancialEntities.SaveChanges();
But, this tells me there is a lot about your data schema that's not obvious here. For instance, what does the value 161 reference in the StatementsMain object?
FYI: assigning the primary object to the objects in which it is a foreign key lets EF do the work of assigning the new ID to the other objects as they are persisted.