I think there have been several questions similar to mine, but by now I'm thoroughly confused from googling and reading them. I'm using a Code First approach: I added migration support, updated the database, and now I need some nasty stuff like triggers, stored procedures and views. As far as I understand it, I need to put the SQL for CREATE TRIGGER and CREATE PROCEDURE as strings into my Code First C# code. But where? Where should I add them (as static or const strings)? Do I need to write the DROP TRIGGER / DROP PROCEDURE strings as well? And how do I integrate them into the next migration step? Does anybody know a really helpful step-by-step blog post about this topic?
I was advised to generate the next migration step with the "add-migration" command and then extend the Up() and Down() methods with the trigger definitions. That's clear, but it drifts a bit from the Code First point of view: I'm afraid the table definition and the table's triggers (and stored procedures) will end up separated. Another piece of advice says to override the context's OnModelCreating()... but I can't see when that executes, or how to tie it to a specific migration step...
And please do not argue that "using a trigger is a stupid thing", as my question is wider than that: how do I add to a Code First model any advanced SQL Server "object" that is not easy to define in C#?
I had a similar problem recently and the best solution I found was to run the script from an (initially) empty migration. I put the script in a file and added it to the project as a resource.
One interesting trick I had to use is putting special separators into the script file, because the GO statement is not T-SQL. I used GO--BATCH-- as a batch separator so that the script works both in SQL Server Management Studio and in code. In code I simply split the script on this separator and run the batches as separate queries, like this:
public partial class CodeHostDiscovery : DbMigration
{
    public override void Up()
    {
        var batches = Properties.Resources.CodeHostDiscoverySqlScript.Split(new string[] { "GO--BATCH--" }, StringSplitOptions.None);
        foreach (var batch in batches)
        {
            Sql(batch);
        }
    }

    public override void Down()
    {
    }
}
Here is a snippet from the SQL script:
CREATE SCHEMA SystemServices
GO--BATCH--
CREATE TABLE [SystemServices].[HeartbeatConfiguration] (
I don't expect Code First to provide better facilities for this, because the idea behind Code First is that you don't need stored procedures, triggers or anything else: you just use Code First. That doesn't always hold water, of course, and for those cases you can run raw SQL against the database.
For our project, we are using Entity Framework (version 6) with a Code First database. So, when we want to change a procedure or a table, we do that in a class and generate a migration file to update the database (a simple Update-Database in the Package Manager Console).
If we want to change something that has no backing class (like a view or a procedure), we change the migration file, which will look something like this:
public override void Up()
{
    // Some other code...
    Sql("ALTER VIEW ExampleView AS SELECT [Endless lines of code]");
}
When it comes to bigger views, it gets very messy very fast.
My question is:
Is there a "smart" way to update small things in a procedure or a view (like changing something in the FROM clause) without writing out a whole, many-line SQL statement just for that?
Not sure what would qualify as "smart", but you can remove the SQL statement clutter from your migration classes by putting them in separate files. This article explains how.
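As a sketch of that idea (all names here are invented): keep each script in its own .sql file, embed it as an assembly resource, and have a small helper load and split it so the migration class stays a one-liner. The splitting uses a custom batch separator, since GO is not T-SQL:

```csharp
using System;
using System.IO;
using System.Reflection;

public static class SqlScriptHelper
{
    // Loads an embedded .sql file from the assembly that contains the migrations.
    public static string Load(string resourceName)
    {
        var assembly = Assembly.GetExecutingAssembly();
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }

    // Splits a script on a custom batch separator, since GO is not T-SQL.
    public static string[] SplitBatches(string script)
    {
        return script.Split(new[] { "GO--BATCH--" }, StringSplitOptions.None);
    }
}
```

A migration's Up() then shrinks to something like `foreach (var batch in SqlScriptHelper.SplitBatches(SqlScriptHelper.Load("MyApp.Migrations.Sql.ExampleView.sql"))) Sql(batch);`, where the resource name is whatever your project assigns.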
I'm using Entity Framework 6 with this DbMigrationsConfiguration:
public sealed class Configuration : DbMigrationsConfiguration<DataContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(Danfoss.EnergyEfficiency.Data.DataContext context)
    {
        // Adding initial data to context
        context.SaveChanges();
    }
}
I'm using it in WebAPI in this way:
public static void Register(HttpConfiguration config)
{
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<DataContext, Configuration>());
}
I have noticed that the Seed method runs every time my application starts up. How can I prevent this? I would like it to run only the first time, when the initial tables are built.
The DbMigrationsConfiguration.Seed method is called every time you call Update-Database. The reasoning behind that is explained in this blog by One Unicorn.
That means that you have to write your Seed code to cope with existing data. If you don't like that, you can vote for a change on CodePlex.
In the meantime, to quote the blog:
The best way to handle this is usually to not use AddOrUpdate for every entity, but to instead be more intentional about checking the database for existing data using any mechanisms that are appropriate. For example, Seed might check whether or not one representative entity exists and then branch on that result to either update everything or insert everything.
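A minimal sketch of that branching (the Status entity and its values are invented for illustration; the real check should use whatever representative data your model has):

```csharp
protected override void Seed(DataContext context)
{
    // Check one representative entity rather than AddOrUpdate-ing everything.
    // If it is present, assume the standing data was already inserted.
    if (!context.Statuses.Any())
    {
        context.Statuses.Add(new Status { Name = "Open" });
        context.Statuses.Add(new Status { Name = "Closed" });
        // ... insert the rest of the standing data ...
    }

    context.SaveChanges();
}
```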
Another option, that I have used in the past, is to add standing data that is related to a migration in the migration itself, using the Sql command. That way it only runs once. I have tended to move away from this because I prefer to keep the seeding in one place.
I've had the same issue, in that I wanted an initial user created when the database is built. What I did was create a blank migration and add the user creation there. Since a migration runs only once (unless you remove and reapply migrations), this ensures it runs only on database creation.
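For example (the table and column names here are placeholders, not a real schema), the blank migration might contain:

```csharp
public partial class SeedInitialUser : DbMigration
{
    public override void Up()
    {
        // Runs exactly once, when this migration is applied.
        Sql("INSERT INTO dbo.Users (UserName, IsAdmin) VALUES ('admin', 1)");
    }

    public override void Down()
    {
        Sql("DELETE FROM dbo.Users WHERE UserName = 'admin'");
    }
}
```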
I know that this question is a bit old, so it may help others or you may not be using migrations. But if you are, this is a handy technique.
Here's my situation:
I have been working on an ASP.NET MVC 3 application for a while. It has a database (built out of a db project; I'm going db-first) for which I have an edmx model and then a set of POCOs. My entities have plural names in the database and POCOs have singular names. Everything maps nicely without a problem.
Or it used to, until I added a new table (called TransactionStatuses). All of the old entities still work, but the new one does not. When I try to eagerly load it together with a related entity:
var transactions = (from t in db.Transactions.Include(s => s.TransactionStatus) // TransactionStatus - navigation property in Transactions to TransactionStatuses
                    where t.CustomerID == CustomerID
                    select t).ToList();
I get
Invalid object name 'dbo.TransactionStatus'.
I even did a simpler test:
List<TransactionStatus> statuses = db.TransactionStatuses.ToList();
= same result.
I have updated (and even re-created) edmx from the db and have gone through it back and forth trying to figure out what is different about the mapping for dbo.TransactionStatus*es* which trips the whole thing up.
If somebody can point me in the direction of a fix it'd be wonderful.
P.S. Turning off pluralisation is not an option, thanks.
Update: I figured it out - my answer below.
This is probably happening because even though the intention was to use the Database First flow, in actual fact the application is using Code First to do the mapping. Let me explain a bit more because this can be confusing. :-)
When using Database First with the EF Designer and the DbContext templates in Visual Studio three very important things happen. First, the new Entity Data Model wizard adds a connection string to your app containing details of the Database First model (i.e. the EDMX) so that when the application is run it can find this model. The connection string will look something like this:
<connectionStrings>
  <add name="MyEntities"
       connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=.\sqlexpress;initial catalog=MyEntities;integrated security=True;multipleactiveresultsets=True;App=EntityFramework&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
Second, the generated context class makes a call to the base DbContext constructor specifying the name of this connection string:
public MyEntities()
    : base("name=MyEntities")
{
}
This tells DbContext to find and use the "MyEntities" connection string in the config. Using "name=" means that DbContext will throw if it doesn't find the connection string--it won't just go ahead and create a connection by convention.
If you want to use Database First, then you must use a connection string like the one that is generated. Specifically, it must contain the model data (the csdl, msl, ssdl from the EDMX) and you must make sure that DbContext finds it. Be very careful when changing the call to the base constructor.
The third thing that happens is that OnModelCreating is overridden in the generated context and made to throw:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    throw new UnintentionalCodeFirstException();
}
This is done because OnModelCreating is only ever called when using Code First. This is because OnModelCreating is all about creating the model, but when you are using Database First the model already exists--there is nothing to create at runtime. So if OnModelCreating is called then it is probably because you started using Code First without meaning to, usually because of a change to the connection string or the call to the base constructor.
Now, it might be that you want to use Code First to map to an existing database. This is a great pattern and fully supported (see http://blogs.msdn.com/b/adonet/archive/2011/03/07/when-is-code-first-not-code-first.aspx) but you will need to make sure mappings are setup appropriately for this to work. If the mappings are not setup correctly then you will get exceptions like the one in this question.
Got it!
Horrible, horrible experience...
In short: EF cannot correctly pluralize entity names that end with "s" (e.g. "status", "campus", etc.)
Here's how I got it and proof.
I've created and re-created my original set up several times with the same result.
I also tried renaming my table to things like TransStatus and the like - no luck.
While I was researching this I came across the pluralization article by Scott Hanselman, where he adds pluralization rules for words like sheep and goose. This got me thinking: "what if the problem is in the actual name of the entity?"
I did some searching about the word status in particular and found this report of the problem on Connect. I rushed to try renaming my entity...
Renaming TransactionStatuses to TransactionStates (while even keeping the columns as StatusID) fixed the issue! I can now get List<TransactionState> statuses = db.TransactionStates.ToList();
I thought the problem was with the particular word status in the name... But after vocally complaining about this to a friend, he suggested that maybe the problem is with the word ending in "s"... So I decided to check it out.
I set up another table called Campuses and matching POCO Campus (and updated the edmx). Did the simple test List<Campus> test = db.Campuses.ToList(); and got the now expected
Invalid object name 'dbo.Campus'.
So there you go, the mighty EF cannot handle pluralization of words ending with "s". Hopefully, the next poor bugger hitting the problem will find this question and save him- or herself 3-4 hours of pain and frustration.
You mention EDMX, but I can't tell if you are doing database first with EDMX, or code first and just using EDMX to see what's going on.
If you are using Code First, then you can use a configuration to specify the table name. The data annotation is [Table("TransactionStatuses")]; the fluent version is modelBuilder.Entity<TransactionStatus>().ToTable("TransactionStatuses").
(I'm typing the annotation and fluent code from memory so double check references. ;) )
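Spelled out, the annotation route looks like this (a sketch; the properties are guesses at the OP's model, not known columns):

```csharp
using System.ComponentModel.DataAnnotations.Schema;

// Data annotation: the POCO keeps its singular name,
// and the table name is stated explicitly so pluralization never runs.
[Table("TransactionStatuses")]
public class TransactionStatus
{
    public int TransactionStatusID { get; set; }
    public string Name { get; set; }
}
```

The fluent equivalent goes in your context's OnModelCreating: `modelBuilder.Entity<TransactionStatus>().ToTable("TransactionStatuses");`.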
If you are using database first, the SSDL should absolutely be aware of the name of the database table, so I'm guessing that you are using code first & the edmx is just for exploration. (???)
hth
Sigh. Same type of issue with the Photo and Video classes; it is looking for the tables Photoes and Videoes. I hate to just change the table names, but it doesn't look like I have much of a choice.
I'm a bit confused about Entity Framework. I would like to use the code-first approach; in fact, I would like to define how my database tables are composed through class definitions.
The main problem is that I need to create a database (or open one) by choosing its path dynamically (the user can choose which database to open and can create a new one whenever he wants). I've chosen SQL Server Compact to achieve this, but I still don't understand how to use the code-first approach in this situation, because I don't see how to choose where the database should be created, if that is possible at all.
Can anyone explain what I'm doing wrong and suggest a different route, if any? Thanks
I just had the same problem a few days ago. Here's how I got it to work:
In your application startup code add the following:
using System.Data.Entity.Database;
// ...
DbDatabase.SetInitializer(new MyDbInitializer());
DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory(
    "System.Data.SqlServerCe.4.0",
    @"C:\Path\To\",
    @"Data Source=C:\Path\To\DbFile.sdf");
The Initializer should look something like this:
using System.Data.Entity.Database;
public class MyDbInitializer : CreateDatabaseIfNotExists<MyDbContext>
{
    protected override void Seed(MyDbContext context)
    {
        // create some sample data
    }
}
If CreateDatabaseIfNotExists is not what you want, there are other kinds of initializers you can use, or you can create your own. More information on that can be found here:
http://blog.oneunicorn.com/2011/03/31/configuring-database-initializers-in-a-config-file/
Hope this helps.
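Another way to pick the database file at run time is to hand the context a ready-made connection instead of relying on the default connection factory. This is a sketch: it assumes the SQL Server Compact 4.0 provider is installed and uses the DbContext constructor that takes an existing connection:

```csharp
using System.Data.Entity;
using System.Data.SqlServerCe;

public class MyDbContext : DbContext
{
    // databasePath is chosen by the user at run time,
    // e.g. from an open/save file dialog.
    public MyDbContext(string databasePath)
        : base(new SqlCeConnection("Data Source=" + databasePath),
               contextOwnsConnection: true)
    {
    }
}
```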
I am trying to insert 61,000+ objects, obtained via a remote call, into a SQL Server 2005 database in the fastest method possible. Any suggestions?
I have looked at using SQLBulkCopy, but am having a few problems working out how to get the data into the right format since I am not starting with a DataTable, but instead have a list of objects. So if answers could contain code samples that would be appreciated.
I am trying to insert the data into a temp table before processing it to keep memory usage down.
Edit...
#JP - this is something that will run every night as a scheduled batch job with an IIS ASP.NET application.
Thanks.
If this is something you are doing one time or only periodically, you should look at using SSIS (it's basically DTS on steroids). You could build a package that gets the data from one datasource and inserts it into another. There are also features for stop/start and migration tracking. Without more details on your situation, I can't really provide code, but there are a lot of code samples out there on SSIS. You can learn more and play around with SSIS in Virtual Labs.
If you intend on using the SQLBulkCopy class I would suggest that you create a custom class that implements IDataReader that will be responsible for mapping the 61000 source data objects to the appropriate columns in the destination table and then using this custom class as a parameter to the SQLBulkCopy WriteToServer method.
The only tricky part will be implementing the IDataReader interface in your class. But even that shouldn't be too complicated. Just remember that your goal is to have this class map your 61,000 data objects to column names, and that your class will be called by the SQLBulkCopy class to provide the data. The rest should come together pretty easily.
class CustomReaderClass : IDataReader
{
    // make sure to implement the IDataReader interface in this class
    // and a method to load the 61,000 data objects
    void Load()
    {
        // do whatever you have to do here to load the data..
        // with the remote call..?!
    }
}

// .. later you use it like so
SqlBulkCopy bulkCopyInstance;
CustomReaderClass aCustomReaderClass = new CustomReaderClass();
aCustomReaderClass.Load();

// open destination connection
// .. and create a new instance of SqlBulkCopy with the dest connection
bulkCopyInstance.WriteToServer(aCustomReaderClass);

// close connection and you're done!
I hope the above "pseudo-code" makes some sense..
#Miky D had the right approach, but I would like to expand the details. Implementing IDataReader is not really that hard.
To get IDataReader working with a bulk inserter you should look at implementing:
Dispose()
FieldCount
GetValue(int i)
GetSchemaTable()
Read()
The rest can be stubs that throw NotImplementedExceptions, see this sample
Getting the schema table is also pretty easy. Just select one row from the target table and call GetSchemaTable().
To keep stuff clearer I like to have an abstract class that throws NotImplementedException on the non essential methods, perhaps down the line that abstract class can implement the missing bits for added robustness.
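To make that concrete, here is a small reflection-based reader over a list of POCOs. Only the members from the list above do real work; everything else is a stub, since exactly which members SqlBulkCopy calls is undocumented (see the caveats below). The SamplePoint type is purely for demonstration:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

public class ObjectDataReader<T> : IDataReader
{
    private readonly IEnumerator<T> _rows;
    private readonly PropertyInfo[] _props = typeof(T).GetProperties();

    public ObjectDataReader(IEnumerable<T> rows) { _rows = rows.GetEnumerator(); }

    // The essential members: one column per public property.
    public int FieldCount { get { return _props.Length; } }
    public bool Read() { return _rows.MoveNext(); }
    public object GetValue(int i) { return _props[i].GetValue(_rows.Current, null); }
    public string GetName(int i) { return _props[i].Name; }
    public int GetOrdinal(string name) { return Array.FindIndex(_props, p => p.Name == name); }
    public Type GetFieldType(int i) { return _props[i].PropertyType; }
    public bool IsDBNull(int i) { return GetValue(i) == null; }
    public void Dispose() { _rows.Dispose(); }
    public void Close() { Dispose(); }

    // Everything else throws, so an unexpected call fails loudly.
    public DataTable GetSchemaTable() { throw new NotImplementedException(); }
    public object this[int i] { get { throw new NotImplementedException(); } }
    public object this[string name] { get { throw new NotImplementedException(); } }
    public bool GetBoolean(int i) { throw new NotImplementedException(); }
    public byte GetByte(int i) { throw new NotImplementedException(); }
    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) { throw new NotImplementedException(); }
    public char GetChar(int i) { throw new NotImplementedException(); }
    public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) { throw new NotImplementedException(); }
    public IDataReader GetData(int i) { throw new NotImplementedException(); }
    public string GetDataTypeName(int i) { throw new NotImplementedException(); }
    public DateTime GetDateTime(int i) { throw new NotImplementedException(); }
    public decimal GetDecimal(int i) { throw new NotImplementedException(); }
    public double GetDouble(int i) { throw new NotImplementedException(); }
    public float GetFloat(int i) { throw new NotImplementedException(); }
    public Guid GetGuid(int i) { throw new NotImplementedException(); }
    public short GetInt16(int i) { throw new NotImplementedException(); }
    public int GetInt32(int i) { throw new NotImplementedException(); }
    public long GetInt64(int i) { throw new NotImplementedException(); }
    public string GetString(int i) { throw new NotImplementedException(); }
    public int GetValues(object[] values) { throw new NotImplementedException(); }
    public bool NextResult() { return false; }
    public int Depth { get { return 0; } }
    public bool IsClosed { get { return false; } }
    public int RecordsAffected { get { return -1; } }
}

public class SamplePoint
{
    public int X { get; set; }
    public int Y { get; set; }
}
```

Usage would then be along the lines of `new SqlBulkCopy(connection) { DestinationTableName = "Points" }.WriteToServer(new ObjectDataReader<SamplePoint>(points));` against whatever table matches your type.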
A couple of BIG caveats with this approach:
Which methods you need to implement is not documented for SQLBulkCopy.
The follow-on is that a later version of the framework, a hotfix, or a service pack may break your code. So if I had mission-critical code I would bite the bullet and implement the whole interface.
I think it's pretty poor that SQLBulkCopy does not offer an additional, minimal interface for bulk-inserting data; IDataReader is way too fat.