Equivalent of 'OracleCommand.ArrayBindCount' in Npgsql - C#

I'm porting an old project. It previously used the Oracle.DataAccess.Client library to access an Oracle database.
Now I want to use the Npgsql library to access a PostgreSQL database, so I need to port the old C# code that uses the Oracle library's methods. For example, the old code uses the OracleCommand.ArrayBindCount property to insert multiple rows at once.
But I don't see a similar method or property in Npgsql to do the same thing.
I want to know if there is a similar way to achieve it.

No, there's nothing like ArrayBindCount in Npgsql/PostgreSQL.
However, you can batch two (or more) INSERT statements in the same CommandText (delimited by semicolons) and put the parameters for all of the statements on that one command. This executes efficiently as a single batch.
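A minimal sketch of that batching approach with Npgsql; the connection string and the `items` table here are hypothetical:

```csharp
using Npgsql;

class BatchInsertExample
{
    static void Main()
    {
        using var conn = new NpgsqlConnection(
            "Host=localhost;Database=mydb;Username=me;Password=secret");
        conn.Open();

        // Two INSERT statements in one CommandText, sent as a single batch;
        // all four parameters live on the same command.
        using var cmd = new NpgsqlCommand(
            "INSERT INTO items (name, qty) VALUES (@n0, @q0); " +
            "INSERT INTO items (name, qty) VALUES (@n1, @q1)", conn);
        cmd.Parameters.AddWithValue("n0", "apple");
        cmd.Parameters.AddWithValue("q0", 10);
        cmd.Parameters.AddWithValue("n1", "pear");
        cmd.Parameters.AddWithValue("q1", 5);
        cmd.ExecuteNonQuery();
    }
}
```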
For really efficient bulk insert, use COPY, which is by far the fastest option.
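A sketch of Npgsql's binary COPY API under the same assumptions (hypothetical connection string and `items` table):

```csharp
using Npgsql;

// Binary COPY is the fastest bulk-load path Npgsql exposes.
using var conn = new NpgsqlConnection(
    "Host=localhost;Database=mydb;Username=me;Password=secret");
conn.Open();

using (var writer = conn.BeginBinaryImport(
    "COPY items (name, qty) FROM STDIN (FORMAT BINARY)"))
{
    foreach (var (name, qty) in new[] { ("apple", 10), ("pear", 5) })
    {
        writer.StartRow();       // one StartRow per row
        writer.Write(name);      // values in column order
        writer.Write(qty);
    }
    writer.Complete();           // commits the COPY; without it the import is rolled back
}
```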
You can also refer to the post below:
Does PostgreSQL have the equivalent of an Oracle ArrayBind?

Related

What is an efficient way to insert about a million records into Oracle from a C# console app?

I have a C# console app that uses Entity Framework to insert about a million records into an Oracle 11 DB. The first run took over 12 hours and I finally had to kill it. I'm logging the inserts for the moment so I can see what's going on, and I'm sure that is slowing it down some. There were no errors; it was just taking forever to insert that many records.
Someone suggested looking at SQL*Loader for Oracle, but I'm new to Oracle and I'm not sure I can run that from inside a console app. I would also have to make sure it completed successfully before moving on to the next portion of the application, which creates an export file.
Any suggestions on what I can look at to make the inserting happen quicker?
You can't use EF for this kind of massive job. I mean, you can, but as you saw, it is not efficient.
The best approach here is to use ODP.NET (http://www.oracle.com/technetwork/topics/dotnet/index-085163.html) and do a plain old PL/SQL insert.
Take a look at this answer for more details: Bulk Insert to Oracle using .NET
or at this sample implementation: http://www.oracle.com/technetwork/issue-archive/2009/09-sep/o59odpnet-085168.html
Obviously, you can still use EF for everything else; you just need a small arrangement to implement this one step in plain old PL/SQL. That's how I work with SQL Server, for example.
Hope it helps.
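For reference, the array-binding approach the original question mentions looks roughly like this in ODP.NET — one round trip inserts every element of the bound arrays (connection string, table, and data are hypothetical):

```csharp
using Oracle.DataAccess.Client;

using var conn = new OracleConnection("User Id=me;Password=secret;Data Source=orcl");
conn.Open();

var names = new[] { "apple", "pear", "plum" };
var qtys  = new[] { 10, 5, 7 };

using var cmd = new OracleCommand(
    "INSERT INTO items (name, qty) VALUES (:name, :qty)", conn);
cmd.ArrayBindCount = names.Length;   // number of rows to bind in one execution
cmd.Parameters.Add(":name", OracleDbType.Varchar2, names,
                   System.Data.ParameterDirection.Input);
cmd.Parameters.Add(":qty", OracleDbType.Int32, qtys,
                   System.Data.ParameterDirection.Input);
cmd.ExecuteNonQuery();               // inserts all three rows in a single round trip
```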
Seriously, look into SQL*Loader. Read the tutorials. It's insanely fast. Anything else is just wasted time, both the program's runtime and yours. You can probably learn how to use it and insert all your data in less time than it would take to run any alternative solution.
You can use the Process class to start external processes from your console application.
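For example, launching SQL*Loader and waiting for it to finish before the next stage might look like this (the control file, log file, and credentials are placeholders):

```csharp
using System;
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "sqlldr",
    Arguments = "userid=me/secret@orcl control=load_items.ctl log=load_items.log",
    UseShellExecute = false
};

using var proc = Process.Start(psi);
proc.WaitForExit();   // block until the load completes

// SQL*Loader signals problems through its exit code, so check it
// before moving on to the export portion of the app.
if (proc.ExitCode != 0)
    throw new InvalidOperationException(
        $"SQL*Loader failed with exit code {proc.ExitCode}");
```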
Use either SQL*Loader or .NET's Oracle bulk loader mechanism (OracleBulkCopy), which relies on the ODP.NET data provider. SQL*Loader is more "native" to Oracle, but ODP.NET is pure managed code and does not require launching an external process.
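A sketch of the ODP.NET bulk-load path; `OracleBulkCopy` ships with the unmanaged ODP.NET provider, and the table and connection details here are hypothetical:

```csharp
using System.Data;
using Oracle.DataAccess.Client;

// Stage the rows in a DataTable whose columns match the target table.
var table = new DataTable();
table.Columns.Add("NAME", typeof(string));
table.Columns.Add("QTY", typeof(int));
table.Rows.Add("apple", 10);
table.Rows.Add("pear", 5);

using var bulk = new OracleBulkCopy("User Id=me;Password=secret;Data Source=orcl")
{
    DestinationTableName = "ITEMS",
    BatchSize = 10000           // rows per round trip; tune for your data
};
bulk.WriteToServer(table);      // streams the whole DataTable to Oracle
```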
Here's a good post that introduces these topics further: Bulk Insert to Oracle using .NET

Bulk insert using code first migrations for azure

We have a lot of data that needs to be loaded into a number of tables. As far as I can see, we have two options:
1. Include the data as part of the Configuration class's Seed method.
Problems:
1.a. This would be slow and involve writing a lot of C# code.
2. Use bulk insert with code first migrations - a lot quicker and probably a better solution.
Problems:
2.a. It may prove tricky working with other data that gets loaded into the same tables as part of the seed.
2.b. It requires SQL IDENTITY_INSERT to be switched on.
Which solution is best? And if it is option 2, how do I go about bulk insert with code first migrations, and how can I address the problems?
Bypassing EF and using ADO.NET/SQL is definitely a good approach for bulk data upload. The best approach depends on whether you want the data to be inserted as part of migration or just logic that runs on app start.
If you want it to be inserted as part of a migration (which may be nice, since then you don't have to worry about defensive checks for whether the data already exists, etc.), you can use the Sql(string) method to execute SQL in whatever format and with whatever SQL features you want (including switching IDENTITY_INSERT on/off). In EF 6.1 there is also an overload that allows you to easily run a .sql file rather than having everything in code as a string.
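A sketch of what such a migration could look like in EF6; the `Categories` table and the seed values are hypothetical:

```csharp
using System.Data.Entity.Migrations;

public partial class SeedCategories : DbMigration
{
    public override void Up()
    {
        // Raw SQL runs as part of the migration, so IDENTITY_INSERT can be
        // toggled around the explicit-key INSERT in the same batch.
        Sql(@"SET IDENTITY_INSERT dbo.Categories ON;
              INSERT INTO dbo.Categories (Id, Name) VALUES (1, 'Fruit'), (2, 'Veg');
              SET IDENTITY_INSERT dbo.Categories OFF;");
    }

    public override void Down()
    {
        // Make the migration reversible by removing the seeded rows.
        Sql("DELETE FROM dbo.Categories WHERE Id IN (1, 2);");
    }
}
```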
If you want to do it on app start, then just create an instance of your context and then access Database.Connection to get the raw SqlConnection and use ADO.NET directly to insert the data.
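A sketch of the app-start variant, assuming a hypothetical `MyDbContext` and a SQL Server backend so that `SqlBulkCopy` can be used on the context's raw connection:

```csharp
using System.Data;
using System.Data.SqlClient;

// MyDbContext is a placeholder for your EF context type.
using (var ctx = new MyDbContext())
{
    // Database.Connection hands back the underlying DbConnection.
    var conn = (SqlConnection)ctx.Database.Connection;
    conn.Open();

    // Stage the seed rows in a DataTable matching the target columns.
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Name", typeof(string));
    table.Rows.Add(1, "Fruit");
    table.Rows.Add(2, "Veg");

    using var bulk = new SqlBulkCopy(conn)
    {
        DestinationTableName = "dbo.Categories"
    };
    bulk.WriteToServer(table);
}
```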

How to do multiple inserts as well as updates using Dapper.NET?

How can I do multiple inserts (50,000 records) as well as updates using Dapper.NET?
Is it possible to use SqlBulkCopy to achieve this? If yes, then how?
Is there a best way to implement hierarchical inserts or updates using Dapper.NET?
Technologies: C#, SQL Server 2012, Dapper.NET
If you just want to insert: SqlBulkCopy should be fine. If you want an "upsert", I suggest table-valued parameters (which Dapper supports) and the MERGE T-SQL operation.
Dapper just simplifies ADO.NET; if you can think of a way to do it in ADO.NET, Dapper can probably make it easier for you. However, it sounds like one or more TVPs might suffice.
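A sketch of the TVP-plus-MERGE approach with Dapper, assuming a hypothetical user-defined table type `dbo.ItemTvp` and target table `dbo.Items`:

```csharp
using System.Data;
using System.Data.SqlClient;
using Dapper;

// The DataTable's columns must match the table type, e.g.:
//   CREATE TYPE dbo.ItemTvp AS TABLE (Id int, Name nvarchar(100));
var rows = new DataTable();
rows.Columns.Add("Id", typeof(int));
rows.Columns.Add("Name", typeof(string));
rows.Rows.Add(1, "apple");
rows.Rows.Add(2, "pear");

using var conn = new SqlConnection("Server=.;Database=mydb;Trusted_Connection=True;");

// One round trip: existing rows are updated, new rows are inserted.
conn.Execute(@"
    MERGE dbo.Items AS target
    USING @rows AS source ON target.Id = source.Id
    WHEN MATCHED THEN UPDATE SET target.Name = source.Name
    WHEN NOT MATCHED THEN INSERT (Id, Name) VALUES (source.Id, source.Name);",
    new { rows = rows.AsTableValuedParameter("dbo.ItemTvp") });
```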
If you are OK with, and able to, segregate the insert entities and update entities separately, then I would suggest the Dapper.Contrib library, provided by the Dapper.NET team themselves. It is available via NuGet, and it has worked very efficiently in my project.
Here is the link to their GitHub project page.

ADO.NET update without SQL?

This might be a dumb question, but does ADO.NET support updating a database without having to write SQL commands?
Example:
I have a method that reads from the database and keeps the rows in memory. I then modify some of the rows. Is it then possible to make ADO.NET push the latest changes to the database without my having to write UPDATE SQL statements, instead letting ADO.NET figure it out?
I am asking because I might want to update at a much later point. I could just store the SQL statements in a list, but then I would be doing many small updates instead of one big one, which would take longer.
What you need is some sort of ORM, and ADO.NET is not an ORM. So, no, you must write the SQL. You could maybe simplify things by writing a stored procedure, though; then you can use ADO.NET parameters.
If you want, you can save your changes as objects in memory until you need to actually persist them. Then you can have a mapper that takes each object and writes the SQL for you. However, at that point you are redoing some of the work already done in an ORM.
Just as you used SQL to get the data, you need SQL to put the data back, and you have to specify which columns to update. I don't think it can be fully automatic; otherwise, use Entity Framework. Saving the IDs of the objects to be updated, or updating immediately, is probably the way to go.
ADO.NET supports DataAdapters and DataSets which allow you to do the following:
Manipulate data within your DataSet.
Push changes to the database by passing your DataSet as a parameter to the Update method of the DataAdapter.
In order to get the DataAdapter to push the changes, it is necessary to specify insert, update, and delete commands. You will have to write some SQL in your command configuration, but it acts as a template for the statement applied to each row you operate on, rather than your having to manually track changes.
Once you have configured your commands, call the Update method with the DataSet as the parameter and it will persist your changes based on those commands. You will not need to track the individual SQL changes.
A sample of configuring commands can be found here.
A sample of calling the update can be found here.
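A minimal sketch of the DataAdapter round trip; here a `SqlCommandBuilder` generates the insert/update/delete commands from the SELECT instead of configuring them by hand (connection string and table are hypothetical, and the SELECT must include the primary key):

```csharp
using System.Data;
using System.Data.SqlClient;

using var conn = new SqlConnection("Server=.;Database=mydb;Trusted_Connection=True;");
using var adapter = new SqlDataAdapter(
    "SELECT Id, Name, Qty FROM dbo.Items", conn);

// SqlCommandBuilder derives INSERT/UPDATE/DELETE templates from the SELECT,
// so no hand-written modification SQL is needed for simple cases.
using var builder = new SqlCommandBuilder(adapter);

var table = new DataTable();
adapter.Fill(table);            // read rows into memory

table.Rows[0]["Qty"] = 42;      // modify in memory; each row tracks its own state

adapter.Update(table);          // pushes only the changed rows back to the database
```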

C# copy data between databases

What is the fastest method for copying data between databases in C#?
I want to create a C# database procedure that accepts a query string as a parameter and inserts the data into another database (the SQL query and the destination table will have the same columns).
I use SqlBulkCopy right now, but it causes some problems.
I know you've asked about C# in your post, but you might want to consider a couple of other methods, using the stored procedure you are about to create. All of the below would be in the stored procedure itself.
Have you considered a T-SQL approach using OPENROWSET()? That way, you never have to pull the data over to C# and send it back to SQL Server. You will find some recommendations from Microsoft here,
and here is the specific link for OPENROWSET().
If you still want to use C#, you can also use SSIS, which can copy data in bulk mode between servers; you can call it from C# using the appropriate package you'd have to create (a visual workflow). Please see this thread for details.
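A sketch of invoking such an OPENROWSET()-based copy from C#: the rows move server-to-server inside SQL Server and never travel through the C# process. This assumes ad hoc distributed queries are enabled on the server, and the server names, tables, and OLE DB provider are placeholders:

```csharp
using System.Data.SqlClient;

const string sql = @"
    INSERT INTO dbo.ItemsCopy (Id, Name)
    SELECT src.Id, src.Name
    FROM OPENROWSET('SQLNCLI',
                    'Server=OtherServer;Trusted_Connection=yes;',
                    'SELECT Id, Name FROM mydb.dbo.Items') AS src;";

using var conn = new SqlConnection("Server=.;Database=targetdb;Trusted_Connection=True;");
conn.Open();

// The INSERT ... SELECT runs entirely on the target server.
using var cmd = new SqlCommand(sql, conn);
cmd.ExecuteNonQuery();
```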
