I am playing around with Code First, have created my entities and am now trying to generate a database from the model. I have gone through the wizard, successfully connected to the server, selected the name of the new database and exited. An edmx.sql script is automatically generated for me to run, except it's empty when I open it, and it massively slows down VS 2010 to the point that I have to kill the process.
When I look in the server, the database has been created (as expected, since the wizard asked me to create it) but there are no tables in it (obviously, because I haven't run the script).
Any ideas what is going wrong here?
I found out what caused this. Apparently the SP1 upgrade on VS2010 torches this sort of thing. The following files need to be re-installed manually from the DVD:
DACFramework_enu.msi
DACProjectSystemSetup_enu.msi
TSqlLanguageService_enu.msi
Now I can see the full script, ran it, sorted!
I'm relatively new to C# and very new to WPF. I've been trying to wrap my head around the concept of MVVM and I've thrown ADO Entity into the mix now.
The purpose of my sample application is to track CAD items. I'm pulling items out of a database and successfully populating my view; great. I've added the information in manually to test that the views are working as they should be.
I'm now trying to add a new item from my application through a function which I'm launching from my ICommand. As I understand it, I'm creating a new DbContext object, adding an item to it and saving my changes. Executing SaveChanges() successfully tells me that 1 row was updated, but when I check, the data isn't there. In addition to this, if I call SaveChanges() again (within the same debug session) it throws an error to indicate that there are multiple entries. Again, when viewing the data via "Show Table Data" I'm seeing nothing.
public void AddNewItem(object parameter)
{
    using (var dbq = new DBEntities())
    {
        var tempItem = dbq.OutstandingCAD.Create();
        tempItem.Id = 2;
        dbq.OutstandingCAD.Add(tempItem);
        dbq.SaveChanges();
    }
}
Could someone please look over that small block of code and suggest whether what I'm doing is correct and whether my issue lies somewhere else?
Much appreciated
Your issue is not with your code but the way you are trying to evaluate your work.
There is a Microsoft article here which discusses the situation you are experiencing. It references Visual Studio 2005, but it is still relevant.
Basically, your database is an .mdf file that is stored in your project. Your SQL connection string contains something like AttachDbFileName=|DataDirectory|\data.mdf.
One of the things to know when working with local database files is that they are treated like any other content file. For desktop projects, this means that by default the database file is copied to the output folder (aka bin) each time the project is built. After F5, here's what it would look like on disk:
MyProject\Data.mdf
MyProject\MyApp.vb
MyProject\Bin\Debug\Data.mdf
MyProject\Bin\Debug\MyApp.exe
At design-time, MyProject\Data.mdf is used by the data tools. At run-time, the app will be using the database under the output folder. As a result of the copy, many people have the impression that the app did not save the data to the database file. In fact, this is simply because there are two copies of the data file involved. Same applies when looking at the schema/data through the database explorer. The tools are using the copy in the project, not the one in the bin folder.
Essentially, you have 2 copies of your database: one for design time, one for run time. You are looking at the design-time database and not seeing changes that were made to the run-time database. However, every time you run the project, the run-time database gets overwritten, so it can appear that your changes disappear. This is really a debugging feature, to keep you from accumulating hundreds of rows in the database from multiple tests of the same feature. However, some people prefer persistence across runs, and would use an external SQL instance instead of a project-contained .mdf.
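For reference, this is roughly what such a connection string looks like in App.config (the names here are illustrative, not taken from the question). |DataDirectory| resolves to the output folder at run time, which is why the app uses the copied database rather than the one the designer shows:

```xml
<!-- App.config: project-local .mdf attached via |DataDirectory| -->
<connectionStrings>
  <add name="DBEntities"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Data.mdf;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

Setting the .mdf file's "Copy to Output Directory" property to "Copy if newer" (instead of "Copy always") keeps the run-time copy from being overwritten on every build.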
I am working on a school project with three other group members. We are using Microsoft Visual Studio 2013, with Visual Studio Online as version control. The project uses ASP.NET MVC 5 and EF6. Our group is having trouble that we cannot figure out.
Here is the problem we are having. Every time one of us grabs the newest version control changes and attempts to run the project, with or without debugging, we get an error in the browser:
"Cannot drop the database 'DB', because it does not exist or you do not have permission."
We can get it to work if we add a migration and update the database. But this is forcing us to migrate way too often and causing a lot of headaches.
We cannot figure out if we have a setting messed up or something else. I am hoping that someone can help us resolve this issue so we can get back to working more efficiently.
Also, the site and DB are hosted on GoDaddy.
My bet is that you have AutomaticMigrationsEnabled set to true, otherwise your application wouldn't be trying to do anything with your database just because you pulled the latest version from TFS.
Also, automatic migrations don't try to drop your database, not unless you have the DropCreateDatabaseWhenModelChanges or DropCreateDatabaseAlways database initializers set up. Check your code. You probably have. Here's an article. Please do not use these in production.
Also, your villainous database initializer isn't able to do its job because the ASP.NET user on GoDaddy probably doesn't have permission to drop the database.
My advice: disable AutomaticMigrationsEnabled and do your migrations manually with Add-Migration and Update-Database. Automatic migrations are not a good practice; they are even being dropped in EF7. That should solve your problem.
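A minimal sketch of that setup (context and class names here are illustrative, not taken from the question): the migrations Configuration class turns automatic migrations off, and a MigrateDatabaseToLatestVersion initializer applies pending migrations at startup instead of dropping anything.

```csharp
using System.Data.Entity;
using System.Data.Entity.Migrations;

// Migrations/Configuration.cs (generated by Enable-Migrations)
internal sealed class Configuration : DbMigrationsConfiguration<MyContext>
{
    public Configuration()
    {
        // Explicit, hand-reviewed migrations only.
        AutomaticMigrationsEnabled = false;
    }
}

// At application startup: bring the database up to date without dropping it.
// Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyContext, Configuration>());

// Package Manager Console workflow:
//   Enable-Migrations
//   Add-Migration AddCustomerTable
//   Update-Database
```

With this in place, pulling the latest code and running the site no longer attempts to drop and recreate the database.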
I'm not sure which is worse: using GoDaddy as an ISP, or using VSS/TFS for version control :(
ANYWAY - Two suggestions:
1) Use local workspaces (and just checkin/checkout updates when you wish to synchronize):
https://msdn.microsoft.com/en-us/library/bb892960.aspx
2) See if VSO supports a macro or script to "kill all" connections before you sync versions:
https://social.msdn.microsoft.com/Forums/en-US/b41a7376-d5cd-4339-92b3-158e88a96dff/how-can-i-kill-all-connections-of-users-to-my-sql-server-database-?forum=adodotnetentityframework
I have a C# app (in Visual Studio 2010) that uses SQL Server 2005, accessed through TableAdapters.
I haven't discovered a good way to manage DB changes. For example, in my next release I have a bunch of db schema changes. I made all of my DB changes in Sql Server Management Studio. But now I have to manually make these changes on the production servers in turn after I deploy the new application code (slow and buggy).
Furthermore, if I decide to roll back my release to a previous version, I have to manually go through and undo all my db changes before I can deploy the old code (and now I am under time constraints because the app is down). Again, this is also very error prone.
Oh, and let's hope that one of my errors doesn't cause massive destruction to the production DB; otherwise I have to pull the most recent backup out of storage and try again (very time consuming).
I have heard of things like Migrations from Rails (and ORMs like SubSonic). I think that the new ORM style (define your schema in c# code) helps alleviate a lot of this, but unfortunately, as I am using TableAdapters, I don't see how I could implement something like migrations.
How do people deal with this?
Release management for DBs usually involves migrations of static data and running of scripts to update/create programmability elements (sprocs, UDFs, triggers, etc) and modify existing schema definitions. Looks to me like you're missing the scripts. If you're making changes manually to your development DB and not creating scripts that mirror those changes, you will need to repeat the same manual steps against your test/production environments, which as you say is error prone and dangerous.
SQL Server Management Studio makes it easy to save scripts that reflect changes to any database objects. In the toolbar there should be an icon called "Generate change script", which gives you the option to save the SQL file to disk. You can then use this to perform the same change against another server. You can also manually script any or all stored procs, UDFs, triggers and so on, and run those against a server as well (just right-click on them).
As to rollback, that's normally achieved by restoring a backup of the database made just before the deployment process begins.
This whole process tends to be different for each company, but that's generally how it's done.
ORMs that auto-generate schemas have always seemed evil to me, not to mention pretty much impossible to use against a production box, but I guess that's also an option.
The easiest way to deal with this problem is to buy software that can detect schema changes by comparing two databases and generate a change script to update your target database. I am using Visual Studio Ultimate 2010 for that, but there is also cheaper software that can do the same. This works for me 99% of the time (the only case where it did not work properly was when I renamed a table column).
If you don't have such a piece of software, it is crucial to generate your SQL change scripts by hand. Whenever you make a change to the database schema, keep track of the SQL you used for that change and add it to one big file of db schema changes for the next version of your software. It's a bit tedious at the beginning, but you'll get used to it pretty quickly.
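For example, a hand-maintained change script for the next release might look like this (the table and column names are made up for illustration):

```sql
-- Changes_v1.2.sql: schema changes going from v1.1 to v1.2
ALTER TABLE dbo.Customers ADD Email nvarchar(256) NULL;

CREATE TABLE dbo.AuditLog (
    Id       int IDENTITY(1,1) PRIMARY KEY,
    Action   nvarchar(100) NOT NULL,
    LoggedAt datetime NOT NULL DEFAULT GETDATE()
);
```

Keeping one such file per release also gives you a natural place to record the corresponding rollback statements.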
Then when you are ready to deploy the software, proceed as follows:
Take the website offline
Make a backup of your current production database.
Make a backup of your current production website.
Upload your new code to the server
Run the DB changes script you previously created (either by hand or with the software mentioned above)
Take the website back online and see if it works. If it doesn't and you can't easily fix the problem, revert to the previous website and db version until you have fixed the bug.
All of these steps can be easily automated using batch files and the SQL server agent or SQLCMD.
Generally you should deploy to a staging server first, then test your website very thoroughly and only then move on to the production server. This way you avoid longer downtimes on your production server and minimize the risk of losing any vital data.
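The backup-and-update steps above can be sketched as a small batch file driven by SQLCMD (the server, database, and file names are placeholders, not values from the question):

```shell
REM deploy.cmd - back up the production db, then apply the change script
sqlcmd -S PRODSERVER -E -Q "BACKUP DATABASE MyDb TO DISK='D:\backups\MyDb_pre_v1.2.bak'"
sqlcmd -S PRODSERVER -E -d MyDb -i Changes_v1.2.sql
```

If the deployment goes wrong, restoring the .bak taken in the first step gets you back to the pre-deployment state.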
Here at Red Gate Software we're currently tackling this exact issue. Please take a look at our SSMS add-in, SQL Source Control, in combination with SQL Compare Pro. We're also working on a 'Migrations' feature, due out later this year, allowing custom migration scripts to be defined for specific version transitions. As we're still in the early stages of the project, there's still time to give us feedback and help us design a great solution. We'd love to speak to you further about your requirements!
I recently began a new job, a very interesting project (C#, .Net 4, Linq, VS 2010 and SQL Server). And immediately I got a very exciting challenge: I must implement either a new tool or integrate the logic at program start, or whatever, but what must happen is the following: the customers have a previous application and database (full of their specific data). Now a new version is ready and the customer gets the update. In the meantime we made some modifications to the DB (a new table, new columns, maybe an old column deleted, or whatever). I'm pretty new to Linq and also to SQL databases, and my first solution would be: I check the application/database version and apply all the changes step by step, comparing all tables, columns, keys, constraints, etc. (all this new information I have in my dbml, and the old I can query from the existing DB). And I'd do this each time the version changes. But somehow I feel this is NOT a smart solution, so I'm looking for a general solution to this problem.
Is there a way to update a customer's DB from the dbml file? Creating a new one is not a problem (CreateDatabase with DataContext), but is there any update/alter database method? I guess I'm not the only one searching for such a solution (I found nothing on the internet, or I searched with the wrong keywords). How did you solve this problem? I would also consider an external tool, but first I'm looking for a solution with C#, Linq or something similar.
For any idea thank you in advance!
Best regards,
Emil
What I always do is use Red Gate's SQL Compare to compare the schema of the new database to the schema of the old database. It will generate a change script for you and then you can run that script in code.
We have a table that has a single row in it for program setup information. One of the columns in this table is the database version number. This will instantly tell us what database version the customer has when we do an update. Then we run every script that will update them to the latest version they need to be running. Whenever we release a new version (with database changes), we run the SQL Compare and make a script to go from the previous version to the next. We don't do any scripts that will skip versions, just in case of strange conflicts that may arise from that.
This also gives us the opportunity to do any data massaging we may have to do in between versions by writing a custom script and inserting that into the update scripts. Every update script changes that database version field as well.
This allows us to do a lot of automated updating. Having that database version allows the client to take a peek at that version before the user has a chance to use the application. If it's different and the application needs an update, it will go out to our ftp site and download the update and run the setup automatically.
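The version-row approach described above might look something like this (the table and column names are illustrative); every update script ends by bumping the version so the client can tell at a glance which scripts still need to run:

```sql
-- One-row setup table holding, among other things, the schema version
CREATE TABLE dbo.ProgramSetup (
    SchemaVersion int NOT NULL
);
INSERT INTO dbo.ProgramSetup (SchemaVersion) VALUES (1);

-- ...at the end of the v1 -> v2 update script:
UPDATE dbo.ProgramSetup SET SchemaVersion = 2;
```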
Basically what you want to be able to do is to script the changes - to be able to run "something" that allows you to update one version of the database to the next and also to make any necessary changes to the data required by that change in the schema.
Good news is that you can do this with SQL, you can write DDL statements to create and modify a database schema.
My solution is to put my database schema maintenance entirely in code, I think this is the best version of the writeup I've done so far:
How to create "embedded" SQL 2008 database file if it doesn't exist?
Why in code? Because it works. It may not be the best solution, but it's one I have had some success with, and the results are consistent and repeatable. Oh, and it's version controlled too.
The big problem you may have in this specific instance is that you need to establish a baseline - to make sure that the existing databases are consistent in terms of their schema. This is where more complex and clever tools may serve you better - being able to do a schema diff and then update has a lot of appeal as a concept for example but equally you're somewhat dependent on having your reference database perfect and that raises other issues.
I am currently working on a project that include the use of SQLServer. I would like to know what strategy I should use when I install the software to build the database? I need to set up the tables, the stored procedures and the users.
Does my software need to make a check on start up to see if the database exist and then if it doesn't, create it up?
Is there any way that I could automate this when I install SQLServer?
Thank you.
EDIT
Ok, right now I have plenty of nice solutions, but I am really looking for one (free or open source would be awesome) that would allow me to deploy a new application that needs SQL Server to be freshly installed and set up to the needs of the software.
Red Gate Software offers SQL Packager, which gives you the option to produce a script/.exe that deploys the whole db, including everything (stored procedures, tables, etc.), from one single .exe. If you can afford it and want an easy way to do it (without having to do it yourself), it's the way to go ;-)
Easy roll-out of database updates across your client base
Script and compress your schema and data accurately and quickly
Package any pre-existing SQL script as a .exe, or launch as a C# project
Simplify deployments and updates for SQL Server 2000, 2005 and 2008
You could use a migration framework like Migrator.Net and run the migrations every time your application starts. The good thing about this approach is that your database gets updated whenever you release a new version of your software.
Go take a look at their Getting started page. This might clear up the concept.
I have successfully used this approach to solve the problem you are confronted with.
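A migration in that style is roughly a class with Up/Down methods, ordered by a version number in an attribute. This is a sketch from memory of the general Migrator.Net shape (the table and class names are made up; check the project's documentation for the exact API):

```csharp
using System.Data;
using Migrator.Framework;

// The version number in the attribute determines migration order.
[Migration(1)]
public class AddCustomerTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Customer",
            new Column("Id", DbType.Int32, ColumnProperty.PrimaryKeyWithIdentity),
            new Column("Name", DbType.String, 100));
    }

    public override void Down()
    {
        Database.RemoveTable("Customer");
    }
}
```

The framework records which migrations have already run, so executing them at startup only applies the ones the customer's database is missing.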
You do all of that with the SQL scripts. And then your installation program runs them against the customer's SQL Server.
You can write a T-SQL script that only creates objects when they do not exist. For a database:
if db_id('dbname') is null
create database dbname
For a stored procedure (see MSDN for a list of object types):
if object_id('spname', 'P') is null
exec ('create procedure dbo.spname as select 1')
go
alter procedure dbo.spname
as
<procedure definition>
The good thing about such scripts is that running them multiple times doesn't create a problem- all the objects will already exist, so the script won't do anything a second time.
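The same pattern works for tables (object type 'U'); the names here are examples:

```sql
if object_id('dbo.Customers', 'U') is null
    create table dbo.Customers (
        Id   int identity(1,1) primary key,
        Name nvarchar(100) not null
    )
```

Scripts written this way can be shipped with every installer release and run unconditionally.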
Setting up the server is pretty straightforward if you're using MS SQL Server. As for creating the database and tables, you generally only do this once. The whole point of a database is that the data is persistent, so if there's a chance that the database won't exist you've either got a) major stability problems, or b) no need for an actual database.
Designing the database, tables, and procedures is an entire component of the software development process. When I do this I usually keep all of my creation scripts in source control. After creation you will write the program in such a way that it assumes the database already exists - checking for connectivity is one thing, but the program should never think that there is no database at all.
You can generate a script of all the objects that exist in your db, and then run that script from your code.
When you create your db script with the Script Wizard in SQL Server, set "Include If Not Exists" to true in the "Choose Script Options" section. That way the script only creates objects that don't already exist.