Hello,
I have an Oracle DB table with over 200M records and need to migrate it to MSSQL. The process repeats once a month and has to finish roughly overnight. The fastest method seemed to be the bcp utility.
I tried to create the bcp data file from C# code, but running a single query over the whole table always ends with exhausted DB temp space, which cannot be increased. When I split that query using OFFSET/FETCH, I have to add a row number, and that also takes a lot of time because of the sorting.
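For reference, the general shape of my export code is below (heavily simplified; the table, columns, file path, FetchSize value and the '|' delimiter are just for illustration). It streams the result set and writes one delimited line per record, which bcp can load with a matching field terminator:

    using System.IO;
    using Oracle.ManagedDataAccess.Client;

    class OracleToBcpFile
    {
        static void Export(string connectionString, string outputPath)
        {
            using (var connection = new OracleConnection(connectionString))
            using (var command = new OracleCommand(
                "SELECT ID, NAME, CREATED FROM BIG_TABLE", connection))
            using (var writer = new StreamWriter(outputPath))
            {
                connection.Open();
                command.FetchSize = 8 * 1024 * 1024;   // bigger fetch buffer, fewer round trips

                using (var reader = command.ExecuteReader())
                {
                    // Stream row by row; nothing is held in memory beyond the current record.
                    while (reader.Read())
                    {
                        writer.Write(reader.GetValue(0));
                        writer.Write('|');
                        writer.Write(reader.GetValue(1));
                        writer.Write('|');
                        writer.WriteLine(reader.GetValue(2));
                    }
                }
            }
        }
    }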
The next solution could be a direct export from Oracle SQL Developer with delimited columns to create a CSV or bcp data file, but some encoding errors appeared. It is also tedious, and this manual export takes about a third of a day.
My question is: is there any way to create the bcp file from C# in a reasonable time, or any other way to achieve this?
Thanks.
I have built a C# application that does a daily differential backup with the compression option to reduce the .bak file size and speed up the backup. However, the file is still too large for me. So I plan to back up the whole database, but I want one particular table (the one that contains the most data) to contain only one month of data. Any idea? Thanks.
If I'm not mistaken, what you mean by "large" is that you tried to back up the whole database and it requires more time and processing power.
I assume that you don't have the privileges to modify the table in the original database. Maybe you can try SELECT INTO (https://www.w3schools.com/sql/sql_select_into.asp) to copy the rows you filter with a WHERE clause into another table (in a separate database) and then back that database up.
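A rough sketch of that idea, assuming SQL Server and placeholder database/table/column names: copy only the last month of rows into a table in a smaller database, then back up that database instead of the big one.

    using System.Data.SqlClient;

    class CopyRecentRows
    {
        static void Copy(string connectionString)
        {
            // SELECT INTO creates the target table, so drop it first if it already exists.
            const string sql =
                "SELECT * INTO ArchiveDb.dbo.Orders_LastMonth " +
                "FROM BigDb.dbo.Orders " +
                "WHERE OrderDate >= DATEADD(month, -1, GETDATE())";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.CommandTimeout = 0;   // copying a large table can take a while
                command.ExecuteNonQuery();
            }
        }
    }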
I have an Excel file and I am querying it from my C# program with SQL using OleDb.
But I am facing a problem: my file has about 300K rows and querying takes far too long. I have googled this issue and tried libraries such as SpreadsheetLight and EPPlus, but they don't have a query feature.
Can anyone advise me on the fastest way to query my file?
Thanks in advance.
I have worked with 400-800K row Excel files. The task was to read all rows and insert them into a SQL Server DB. In my experience OleDb was not able to process such big files in a timely manner, so we had to fall back to importing the Excel file directly into the DB using SQL Server's own facilities, e.g. OPENROWSET.
Even smaller files, like 260K rows, took approximately an hour with OleDb to import row-by-row into a DB table on Core 2 Duo generation hardware.
So, in your case you can consider the following:
1. Try reading the Excel file in chunks using a ranged SELECT:
    OleDbCommand command = new OleDbCommand(
        "SELECT [" + date + "] FROM [Sheet1$A1:Z10000] " +
        "WHERE [" + key + "] = " + array[i].ToString(), connection);
Note, [Sheet1$A1:Z10000] tells OleDb to process only the first 10K rows of columns A to Z of the sheet instead of the whole sheet. You can use this approach if, for example, your Excel file is sorted and you know you don't need to check ALL rows, only this year's. Or you can change Z10000 dynamically to read the next chunk of the file and combine the result with the previous one, as in the sketch below.
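A rough sketch of that chunked reading, assuming HDR=NO in the connection string (so every row in a range is data and columns come back as F1..F26) and a placeholder sheet name and column range:

    using System.Data;
    using System.Data.OleDb;

    class ChunkedExcelRead
    {
        static DataTable ReadAll(string connectionString, int totalRows)
        {
            var result = new DataTable();
            const int chunkSize = 10000;

            using (var connection = new OleDbConnection(connectionString))
            {
                connection.Open();
                for (int start = 2; start <= totalRows; start += chunkSize)   // row 1 = header, skip it
                {
                    int end = start + chunkSize - 1;
                    string range = "[Sheet1$A" + start + ":Z" + end + "]";
                    using (var adapter = new OleDbDataAdapter(
                        "SELECT * FROM " + range, connection))
                    {
                        adapter.Fill(result);   // appends this chunk's rows to the combined result
                    }
                }
            }
            return result;
        }
    }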
2. Get all your Excel file contents directly into the DB using a direct DB import, such as the mentioned OPENROWSET in MS SQL Server, and then run your search queries against the RDBMS instead of the Excel file.
I would personally suggest option #2. Comment on whether you can use a DB at all and what RDBMS product/version is available to you, if any.
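For illustration, option #2 could look roughly like this, assuming SQL Server with the ACE OLE DB provider installed, ad hoc distributed queries enabled, and placeholder file/table names:

    using System.Data.SqlClient;

    class ImportViaOpenRowset
    {
        static void Import(string sqlServerConnectionString)
        {
            // Pull the whole sheet into a table once, then query the table instead of the file.
            const string sql =
                "SELECT * INTO dbo.ImportedSheet " +
                "FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', " +
                "'Excel 12.0;Database=C:\\data\\MyFile.xlsx;HDR=YES', " +
                "'SELECT * FROM [Sheet1$]')";

            using (var connection = new SqlConnection(sqlServerConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.CommandTimeout = 0;   // a big import can take a while
                command.ExecuteNonQuery();
            }
        }
    }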
Hope this helps!
I'm receiving and parsing a large text file.
In that file I have a numerical ID identifying a row in a table, and another field that I need to update.
ID Current Location
=========================
1 Boston
2 Cambridge
3 Idaho
I was thinking of composing a single SQL command string and firing that off using ADO.NET, but some of the files I'm going to receive will have thousands of lines. Is this doable, or is there a limit I'm not seeing?
If you may have thousands of lines, then composing a SQL statement is definitely NOT the way to go. Better code-based alternatives include:
Use SqlBulkCopy to insert the change data into a staging table, and then UPDATE your target table using the staging table as the source. It also has excellent batching options (unlike the other choices).
Write a stored procedure to do the Update that accepts an XML parameter that contains the UPDATE data.
Write a stored procedure to do the Update that accepts a table-valued parameter that contains the UPDATE data.
I have not compared them myself, but it is my understanding that #3 is generally the fastest (though #1 is plenty fast for almost any need).
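A minimal sketch of option #1 (it assumes SQL Server, and the table and column names are just placeholders): bulk-load the (ID, CurrentLocation) pairs into a staging table, then run a single set-based UPDATE that joins staging to the target.

    using System.Data;
    using System.Data.SqlClient;

    class StagingUpdate
    {
        static void Apply(DataTable changes, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 1. Bulk-load the parsed (ID, CurrentLocation) rows into the staging table.
                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "dbo.LocationStaging";
                    bulk.BatchSize = 5000;
                    bulk.WriteToServer(changes);
                }

                // 2. One set-based UPDATE instead of thousands of single-row statements.
                const string update =
                    "UPDATE t SET t.CurrentLocation = s.CurrentLocation " +
                    "FROM dbo.Locations t " +
                    "JOIN dbo.LocationStaging s ON s.ID = t.ID";

                using (var command = new SqlCommand(update, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }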
Writing one huge INSERT statement will be very slow. You also don't want to parse the whole massive file at once. What you need to do is something along the lines of:
1. Figure out a good chunk size. Let's call it chunk_size. This will be the number of records you read from the file at a time.
2. Load chunk_size records from the file into a DataTable.
3. Use SqlBulkCopy to insert the DataTable into the DB.
4. Repeat steps 2 and 3 until the file is done.
You'll have to experiment to find an optimal value for chunk_size, so start small and work your way up; a rough sketch of the loop is below.
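Something like this, where the chunk size, the tab-delimited layout, and the table name are all assumptions you would adjust:

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class ChunkedBulkInsert
    {
        static void Load(string path, string connectionString, int chunkSize)
        {
            var buffer = new DataTable();
            buffer.Columns.Add("ID", typeof(int));
            buffer.Columns.Add("CurrentLocation", typeof(string));

            using (var reader = new StreamReader(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] parts = line.Split('\t');              // assumed tab-delimited
                    buffer.Rows.Add(int.Parse(parts[0]), parts[1]);

                    if (buffer.Rows.Count == chunkSize)
                    {
                        Flush(buffer, connectionString);            // step 3
                        buffer.Clear();                             // reuse the table for the next chunk
                    }
                }
            }
            if (buffer.Rows.Count > 0)
                Flush(buffer, connectionString);                    // last partial chunk
        }

        static void Flush(DataTable chunk, string connectionString)
        {
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.LocationStaging";
                bulk.WriteToServer(chunk);
            }
        }
    }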
I'm not sure of an actual limit, if one exists, but why not take "bite-sized" chunks of the file that you feel comfortable with and break it into several commands? You can always wrap them in a single transaction if it's important that they all fail or succeed together.
Say grab 250 lines at a time, or whatever.
I've got a bunch of SQL dump files that I'd like to import into a DataSet with C#. The machine this code will run on does not have SQL Server installed. Is there any way to do this short of parsing the text manually?
Thanks!
EDIT: The Dump file is just a long list of SQL statements specifying the schema and values of the database.
Is this a SQL dump of the backup sort that you create via a DUMP or BACKUP statement? If so, the answer is pretty much no. It's essentially a snapshot of the physical structure of the database. You need to restore/load the dump/backup into an operable database before you can do anything with it.
I am pretty sure there is no way to turn a bunch of arbitrary SQL statements into an ADO.NET DataSet without running the SQL commands through a database engine that understands the SQL in the dump file. You can't even do it between databases, e.g. a MySQL dump file will not run on an MS SQL Server.
You don't have to go manual once you know the format. Creating a DataSet can be done in two ways:
Set up code to loop the file and create a dataset directly
Set up code to loop the file and create an XML document in dataset format
Either will work. The best choice depends on your familiarity with the above (HINT: if you have no familiarity with either, choose the DataSet).
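As a very rough sketch of the first option, assuming the dump consists of simple single-line statements like INSERT INTO Customers VALUES ('Alice', 'Boston'); real dumps (multi-line statements, quoted commas, escaping) would need a proper parser:

    using System.Data;
    using System.IO;
    using System.Text.RegularExpressions;

    class DumpToDataSet
    {
        static readonly Regex InsertPattern = new Regex(
            @"^INSERT INTO\s+(?<table>\w+)\s+VALUES\s*\((?<values>.+)\);?\s*$",
            RegexOptions.IgnoreCase);

        static DataSet Load(string path)
        {
            var dataSet = new DataSet();
            foreach (string line in File.ReadLines(path))
            {
                Match match = InsertPattern.Match(line);
                if (!match.Success) continue;                 // skip DDL and anything unrecognised

                string tableName = match.Groups["table"].Value;
                // Naive split; breaks on commas inside string literals.
                string[] values = match.Groups["values"].Value.Split(',');

                DataTable table = dataSet.Tables.Contains(tableName)
                    ? dataSet.Tables[tableName]
                    : dataSet.Tables.Add(tableName);

                while (table.Columns.Count < values.Length)
                    table.Columns.Add("Column" + table.Columns.Count, typeof(string));

                DataRow row = table.NewRow();
                for (int i = 0; i < values.Length; i++)
                    row[i] = values[i].Trim().Trim('\'');
                table.Rows.Add(row);
            }
            return dataSet;
        }
    }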
It is all a question of format. What type of database is the DUMP file from? MySQL? Postgres? Other?
My Windows app reads a text file and inserts it into the database. The problem is that the text file is extremely big (at least for our low-end machines). It has 100 thousand rows and it takes a long time to write them to the database.
Can you suggest how I should read and write the data efficiently so that it does not hog the machine's memory?
FYI...
Column delimiter : '|'
Row delimiter : NewLine
It has approximately 10 columns (client information: first name, last name, address, phones, emails, etc.).
CONSIDER THAT I AM RESTRICTED FROM USING BULK COMMANDS.
You don't say what kind of database you're using, but if it is SQL Server, then you should look into the BULK INSERT command or the BCP utility.
Given that there is absolutely no chance of getting help from your security folks and using BULK commands, here is the approach I would take:
1. Make sure you read the entire text file first, before inserting into the database, to reduce I/O.
2. Check what indexes you have on the destination table. Can you insert into a temporary table with no indexes or dependencies, so that the individual inserts are fast? (See the sketch after this list.)
3. Does this data need to be visible immediately after the insert? If not, you can have a scheduled job read from the temp table in step 2 above and insert into the destination table (the one with indexes, foreign keys, etc.).
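A minimal sketch of step 2, assuming SQL Server, a '|'-delimited file, and a placeholder staging table dbo.ClientStaging with no indexes; it reuses one parameterized INSERT and commits in batches, with no bulk commands involved:

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class FileToStaging
    {
        static void Import(string path, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                using (var reader = new StreamReader(path))
                using (var command = new SqlCommand(
                    "INSERT INTO dbo.ClientStaging (FirstName, LastName, Address) " +
                    "VALUES (@first, @last, @address)", connection))
                {
                    command.Parameters.Add("@first", SqlDbType.NVarChar, 100);
                    command.Parameters.Add("@last", SqlDbType.NVarChar, 100);
                    command.Parameters.Add("@address", SqlDbType.NVarChar, 400);

                    SqlTransaction transaction = connection.BeginTransaction();
                    command.Transaction = transaction;
                    int rowsInBatch = 0;
                    string line;

                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split('|');
                        command.Parameters["@first"].Value = fields[0];
                        command.Parameters["@last"].Value = fields[1];
                        command.Parameters["@address"].Value = fields[2];
                        command.ExecuteNonQuery();

                        if (++rowsInBatch == 5000)          // commit in batches to keep the log in check
                        {
                            transaction.Commit();
                            transaction = connection.BeginTransaction();
                            command.Transaction = transaction;
                            rowsInBatch = 0;
                        }
                    }
                    transaction.Commit();                   // commit the final partial batch
                }
            }
        }
    }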
Is it possible for you to register a custom assembly in SQL Server? (I'm assuming it's SQL Server because you've already said you used BULK INSERT earlier.)
Then you can call your assembly to do (mostly) whatever you need, like getting a file from some service (or whatever your option is), parsing it, and inserting directly into tables.
This is not an option I like, but it can be a lifesaver sometimes.