This might be a dumb question, but does ADO.NET support updating a database without having to write SQL commands?
Example:
I have a method that reads from the database and keeps the rows in memory. I then modify some of the rows. Is it then possible to make ADO.NET update the newest changes to the database without having to write UPDATE SQL statements, but instead let ADO.NET figure it out?
I am asking this because I might want to update at a much later point. I could just store the SQL statements in a list, but then I would be doing many updates instead of just one big one, which would take longer.
What you need is some sort of ORM, and ADO is not an ORM. So, no, you must write the SQL. You could maybe simplify things by writing a stored procedure, though. Then you can use ADO parameters.
If you want, you can save your changes as objects in memory until you need to actually persist them. Then you can have a mapper that takes the objects and writes the SQL for you. However, then you are redoing some of the work that an ORM already does.
Just as you used SQL to get the data, you need SQL to put the data back, and it needs to know which columns to update, so I don't think it can be automatic. Alternatively, use Entity Framework. Saving the IDs of the objects to be updated (or updating immediately) is probably the way to go.
ADO.NET supports DataAdapters and DataSets which allow you to do the following:
Manipulate data within your DataSet.
Push changes to the database by passing your DataSet as a parameter to the Update method of the DataAdapter.
In order to get the DataAdapter to push the changes, it is necessary to specify insert, update, and delete commands. You will have to specify some SQL in your command configuration, but it acts as a template for the statement that will be applied to each row you operate on, rather than you having to manually track changes.
Once you have configured your commands, call the Update method of the DataAdapter with the DataSet as the parameter and it will persist your changes based on your commands. You will not need to track the individual SQL changes.
A sample of configuring commands can be found here.
A sample of calling the update can be found here.
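For illustration, here is a minimal sketch of the pattern, assuming a simple dbo.Items table (the table, columns, and connection string are made up):

    using System.Data;
    using System.Data.SqlClient;

    // connectionString is assumed to be defined elsewhere.
    var adapter = new SqlDataAdapter("SELECT Id, Name FROM dbo.Items", connectionString);

    // The UpdateCommand is the "template" SQL; source columns map DataSet values to parameters.
    var update = new SqlCommand("UPDATE dbo.Items SET Name = @Name WHERE Id = @Id",
                                adapter.SelectCommand.Connection);
    update.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name");
    update.Parameters.Add("@Id", SqlDbType.Int, 4, "Id");
    adapter.UpdateCommand = update;

    var dataSet = new DataSet();
    adapter.Fill(dataSet, "Items");

    // Manipulate rows in memory...
    dataSet.Tables["Items"].Rows[0]["Name"] = "New name";

    // One call persists every changed row using the command template above.
    adapter.Update(dataSet, "Items");

A SqlCommandBuilder can also generate the insert/update/delete commands from the SELECT for you, if you'd rather not write even the template SQL by hand.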
We have a lot of data that needs to be loaded into a number of tables. As far as I can see we have two options:
Include the data as part of the Configuration class seed method
Problems
1.a. This would be slow and involve writing a lot of C# code.
Use bulk insert with code first migrations - a lot quicker and probably a better solution.
Problems
2.a. It may prove tricky working with other data that gets loaded into the same tables as part of the seed.
2.b. It requires SQL Identity Insert to be switched on.
Which solution is best? And if it is option 2, how do I go about bulk insert with code first migrations, and how can I address the problems?
Bypassing EF and using ADO.NET/SQL is definitely a good approach for bulk data upload. The best approach depends on whether you want the data to be inserted as part of migration or just logic that runs on app start.
If you want it to be inserted as part of a migration (which may be nice, since then you don't have to worry about defensive checks for whether the data already exists, etc.), then you can use the Sql(string) method to execute SQL in whatever format and with whatever SQL features you want (including switching IDENTITY_INSERT on/off). In EF 6.1 there is also a SqlFile method that lets you run a .sql file rather than having everything in code as a string.
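For illustration, a rough sketch of such a migration (dbo.Products, its columns, and the values are hypothetical):

    using System.Data.Entity.Migrations;

    public partial class SeedProducts : DbMigration
    {
        public override void Up()
        {
            // Identity insert is session-scoped, so keep it in one batch with the insert.
            Sql(@"SET IDENTITY_INSERT dbo.Products ON;
                  INSERT INTO dbo.Products (Id, Name) VALUES (1, 'Widget'), (2, 'Gadget');
                  SET IDENTITY_INSERT dbo.Products OFF;");
        }

        public override void Down()
        {
            Sql("DELETE FROM dbo.Products WHERE Id IN (1, 2)");
        }
    }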
If you want to do it on app start, then just create an instance of your context and then access Database.Connection to get the raw SqlConnection and use ADO.NET directly to insert the data.
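For example (MyContext, the seedData DataTable, and the destination table are all assumptions; this is only a sketch):

    using System.Data.SqlClient;

    // Reuse the context's connection and let SqlBulkCopy stream the rows in.
    using (var context = new MyContext())
    {
        var connection = (SqlConnection)context.Database.Connection;
        connection.Open();

        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.Products";
            bulkCopy.WriteToServer(seedData); // seedData: a DataTable built elsewhere
        }
    }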
I would like to have your advice.
I'm now developing a small WPF client application using C#, bindings, the ADO.NET Entity Framework, ODP.NET and an Oracle database.
The application is a small one: two XAML screens and about 15 tables. I was developing using entities, filling my entities through the application and persisting them with the SaveChanges method.
However, our DBA told me that I am not allowed direct access to the tables but must go through stored procedures. I asked him why and he told me that it is for security reasons, because using stored procedures forces you to provide the row identifier when deleting a record from a table.
According to him, the risk is that the application might delete all the rows in a table instead of only one row, unless the id has to be provided through the stored procedure.
I find that a lot of overkill for only 15 tables.
What do you think about that?
Have you suggested to your DBA that you use LINQ to SQL? That way you can extract objects representing individual rows, and it would be far less likely that you would accidentally delete multiple rows.
Personally I think the EDM might be overkill for a DB of that size.
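For example, a hedged sketch of a single-row delete in LINQ to SQL (the DataContext, table, and key names are invented):

    using System.Linq;

    // The row is fetched by its primary key first, so only that one row can be deleted.
    using (var db = new MyDataContext())
    {
        var order = db.Orders.Single(o => o.OrderId == orderId); // orderId supplied by the caller
        db.Orders.DeleteOnSubmit(order);
        db.SubmitChanges();
    }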
I should say I'm a big proponent of LINQ to SQL and not a big fan of SPs however....
LINQ2SQL on top of ODP.NET is a great stack. And I agree with Andrew: because you would have to write code to load the records, delete all of them, and commit the changes, it's not exactly something that can happen "easily".
Forgetting a where clause in a LINQ statement is no easier or harder than forgetting a where clause in a stored procedure.
When I load from the database I use one stored procedure which loads the DataItem and any Data associated with it. This comes back in one DataSet with two tables: the first table has one row describing the DataItem, and each row in the other table describes the related Data.
This DataSet is then used to populate my objects.
My problem comes when I have to save the objects back to the database. I am currently saving the DataItem and then looping through all of my Data and performing a save on each one. Completely horrible way to go about doing it, I know. It's both slow and it's not transactional.
So what I'd ideally like to do is convert my objects back into my DataSet and then save it all back to the database in one efficient, transactional operation. What code do I need on the C# side to make this transactional and to allow me to pass back a DataSet? I presume this will involve using a TableAdapter, but given that I have two tables, how will this work? What do I use on the SQL side - can I use stored procedures? (I would like to avoid having SQL in my C# project.) Would I need to write something that cycles through a DataTable to save each record?
What's the best way to go about doing all this? This will form the lynchpin of a project I'm working on so I want it to be as fast and efficient as it can be!
(.NET 4.0 and SQL 2005)
Did not use TableAdapter in the end as it was more effort than it was worth.
From the comments:
http://msdn.microsoft.com/en-us/library/4esb49b4.aspx
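For what it's worth, here is a rough sketch of the kind of transactional two-table save discussed above (itemAdapter, dataAdapter, the dataSet, and their commands, possibly wrapping stored procedures, are assumed to be configured elsewhere):

    using System.Data.SqlClient;

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            try
            {
                // Every adapter command has to be enlisted in the same transaction
                // (insert and delete commands would need the same treatment).
                itemAdapter.UpdateCommand.Connection = connection;
                itemAdapter.UpdateCommand.Transaction = transaction;
                dataAdapter.UpdateCommand.Connection = connection;
                dataAdapter.UpdateCommand.Transaction = transaction;

                // Parent first, then children, so foreign keys are satisfied.
                itemAdapter.Update(dataSet.Tables["DataItem"]);
                dataAdapter.Update(dataSet.Tables["Data"]);

                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }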
I am trying to figure out the best way to design my C# application which utilizes data from a SQL Server backend.
My application periodically has to update 55K rows, one at a time, in a loop. Before it does an update it needs to check whether the record to be updated exists.
If it exists it updates one field.
If not it performs an insert of 4 fields.
The table to be updated has 600K rows.
What is the most efficient way to handle these updates/inserts from my application?
Should I create a dictionary in C#, load the 600K records into it, and query the dictionary first instead of the database?
Is this a faster approach?
Should I use a stored procedure?
What’s the best way to achieve maximum performance based on this scenario?
You could use SqlBulkCopy to upload to a temp table then have a SQL Server job do the merge.
You should try to avoid "update 55K rows each one at a time from a loop". That will be very slow.
Instead, try to find a way to batch the updates (n at a time). Look into SQL Server table-valued parameters as a way to send a set of data to a stored procedure.
Here's an article on updating multiple rows with TVPs: http://www.sqlmag.com/article/sql-server-2008/using-table-valued-parameters-to-update-multiple-rows
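A hedged sketch of the TVP approach (the stored procedure dbo.UpsertItems and the user-defined table type dbo.ItemTableType are assumed to already exist on the server):

    using System.Data;
    using System.Data.SqlClient;

    // Build the whole batch in memory and send it in a single call.
    var items = new DataTable();
    items.Columns.Add("Id", typeof(int));
    items.Columns.Add("Value", typeof(string));
    // ... fill items from the application's 55K rows ...

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.UpsertItems", connection))
    {
        command.CommandType = CommandType.StoredProcedure;

        var parameter = command.Parameters.AddWithValue("@Items", items);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.ItemTableType";

        connection.Open();
        command.ExecuteNonQuery(); // the procedure does the update/insert set-based
    }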
What if you did something like this, instead?
By some means, get those 55,000 rows of data into the database, if they're not already there. (If you're currently getting those rows from some query, arrange instead for the query results to be stored in a temporary table in that database. This might be a proper application for a stored procedure.)
Now, you could express the operations that you need to perform as two separate SQL queries: one to do the updates, and one or more others to do the inserts. The first query might use a clause such as "WHERE FOO IN (SELECT BAR FROM #TEMP_TABLE ...)" to identify the rows to be updated. The others might use "WHERE FOO NOT IN (...)".
This is, to be precise, exactly the sort of thing I would expect to need a stored procedure for, because, if you think about it, the SQL server itself is precisely the right party to be doing the work: it's the only one that already has on hand the data you intend to manipulate, and it doesn't have to "transmit" those 55,000 rows anywhere. Perfect.
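A rough sketch of those two set-based statements run from C# (dbo.Staging, dbo.TargetTable, and the column names are hypothetical; the UPDATE is written as a join so the staged values can be copied across):

    using System.Data.SqlClient;

    // Assumes the 55K rows were already bulk-loaded into dbo.Staging.
    const string sql = @"
        UPDATE t SET t.SomeField = s.SomeField
        FROM dbo.TargetTable t
        JOIN dbo.Staging s ON s.KeyColumn = t.KeyColumn;

        INSERT INTO dbo.TargetTable (KeyColumn, Col2, Col3, Col4)
        SELECT s.KeyColumn, s.Col2, s.Col3, s.Col4
        FROM dbo.Staging s
        WHERE s.KeyColumn NOT IN (SELECT KeyColumn FROM dbo.TargetTable);";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        connection.Open();
        command.ExecuteNonQuery();
    }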
As the question asks really.
The EF modeller tool allows us to map the Insert/Update/Delete functions to a sproc, is there any benefit to overriding them?
If it requires some custom validation, then obviously yes, but if I'm happy with how it is now, is it worth creating sprocs for them all?
I can't remember how to view the SQL it's executing for them to find out the exact query, but I should imagine it'd be pretty similar to a standard Insert/Update/Delete query.
I can think of a few cases where it could be useful:
You're working with a legacy database which doesn't quite map to your EF model precisely.
You need extra queries to be executed on insert/update/delete, but you don't have rights to triggers on your database.
Soft deletes in your database that you want to abstract away, so a regular delete will actually perform a soft delete.
Not quite sure how viable these options are, as I personally am more of a NHibernate guy. These are theoretical options.
As for viewing the executed queries, there are a few ways to do that. You could attach a profiler to your SQL Server instance and look at the raw queries that are executed. There's also Entity Framework Profiler (by Ayende/Oren Eini), which isn't free, but it does make reading and debugging the queries a lot easier.
Yes. There is a benefit to overriding them.
Not everybody actually updates or deletes a row of data when an update or delete happens.
In some cases, deleting a record really just means setting an EffectiveUntil date on an existing record and keeping the record in the database for historical purposes.
The same can go for an Update. Instead of updating an existing row, the current row gets the EffectiveUntil date set and a brand new row gets inserted with the new data with a null EffectiveUntil date (or similar mechanism).
By providing Insert/Update/Delete logic to Entity Framework, you are allowed to specify exactly what those operations mean in terms of your database rather than what they mean in the scope of an RDBMS.
As for the second question (which I apparently originally missed): if you're happy with what is currently being generated, then no, it's not worth creating them. You'd just add the extra headache of having to remember to update your stored procedures whenever you change the table structure.