SSIS package: how to import CSV lines selectively based on a query - C#

I have multiple CSV files with 100 or so columns each that I want to import into a SQL database using an SSIS package. These CSV files arrive every night, and I want our SQL tables to function as history/track-changes tables.
In other words, I'm required to evaluate each line of the CSV prior to import based on a unique identifier. I need to check the latest entry (based on date of import) for an ID in the table: if it exists and differs from the new line in the CSV, the line should be imported; if it's a duplicate, it should be ignored; if it doesn't exist at all, it should also be imported.
I can't simply filter out all duplicates, as a change from X to Y and then back to X should be recorded in the table (as it's a history/change table).
Originally I tried to get this working with a flat file import -> Lookup tool -> DB destination, but it doesn't look as though I can modify the Lookup tool to use a specific query as opposed to just comparing the indicated columns against a DB table to see if a row exists. Is there a way to achieve this using the provided SSIS tools? The only alternative I can see is creating a custom script task that evaluates each line in the CSV beforehand and writes it to a temp table or a new CSV containing only the data that needs to be inserted. That might be a perfectly viable solution, but I'm afraid of performance issues.
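For the script-task alternative, here is a minimal sketch of the per-line check, not a definitive implementation. It assumes a History table with an Id column, an ImportDate column, and the tracked data columns (shown as Col1/Col2 for brevity); all names are hypothetical:

```csharp
using System.Data.SqlClient;

static class CsvLineFilter
{
    // Import when the ID has no previous entry, or when the latest entry
    // (by ImportDate) differs from the incoming CSV line.
    public static bool ShouldImport(SqlConnection conn, string id, string[] csvFields)
    {
        const string latestSql = @"
            SELECT TOP 1 Col1, Col2     -- the tracked columns (trimmed for brevity)
            FROM   dbo.History
            WHERE  Id = @Id
            ORDER BY ImportDate DESC;";

        using (var cmd = new SqlCommand(latestSql, conn))
        {
            cmd.Parameters.AddWithValue("@Id", id);
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return true;                    // no previous entry: import

                // Latest entry exists: import only if something changed.
                return reader.GetString(0) != csvFields[0]
                    || reader.GetString(1) != csvFields[1];
            }
        }
    }
}
```

Note that one round trip per CSV line is exactly the performance concern raised above; batching the IDs or staging the whole file first would avoid it.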

Related

Filter Excel Data : .Net vs SSIS

I have a huge amount of data in Excel files, with at least 20 columns in each file.
I am working with .NET (C#), and my task is to import only the rows that meet certain conditions into a SQL database. For example, I need to insert only rows from the current year (or a selected year); I also have a column named 'Full Employee Name' whose value I need to check for existence in the Resource Human table.
Another condition is to check that the column names match those in the SQL table.
I have managed to do this in code, but it takes at least 200 lines to perform all the possible checks. I have read about SSIS (Integration Services, a BI tool), and it looks like it could help with my task.
My question is: how do I do this? I am stuck with this new concept.
I think choosing the best approach depends on your needs:
If you are looking to create automated jobs and perform the data import from Excel to SQL periodically, I think it is better to go with SSIS.
If you are trying to create a small tool that converts an Excel file to a SQL table, then working with .NET is fine.
If you are looking to loop over Excel files with different structures, then you should use .NET, or you have to convert the files to .csv and then use SSIS.
You can also refer to the following Microsoft documentation for more options for importing Excel files into SQL (SQL queries, linked servers, OPENROWSET, ...):
Import data from Excel to SQL Server or Azure SQL Database
If you've already got a working .NET solution (and 200 lines of code doesn't sound that bad to me), I wouldn't bother looking into SSIS to replace it.

Upload CSV File then Map to Tables in Database

How can we upload a CSV file through the web (ASP.NET MVC, C#) and map the columns of the CSV to our tables in the database?
Example:
CSV File:
Username,Address,Roles,Date
How do I add all the values in the 'Username' column to the User table, Name column?
The values in the 'Address' column to the AddrDet table, Address column?
The values in the 'Roles' column to the RolesDet table, Roles column?
And how do I choose which CSV columns get added to the database (so that not every column in the CSV is taken)?
I am using ASP.NET MVC C#.
All I know is that when the CSV is uploaded, a DataTable is created specifically for the CSV, and every column in the CSV is uploaded.
Thank You
I'm using MVC and EF Database First.
This question has been marked as a duplicate of Upload CSV file to SQL server.
I don't think the question is related to, or covers exactly the same topic as, that one, so I'm answering it. I have myself flagged the question as too broad, as there is too much to explain.
I will also add some links; they are not there to pad the answer, only to give the OP an idea of what questions/topics to look into.
Short explanation:
Usually when you want to import data (a CSV file) into a database, you already have the structure & schema of the data (and of the database): there are existing tables TableA and TableB with certain columns. Dynamically creating new columns or updating the DB schema based on a CSV file is hard work and normally isn't done.
A C#/ASP.NET application works by being given an input (a user's click, a data load, a task scheduler passing some time checkpoint), and the app does the work.
A typical job looks like: "We get data in this format; the app has to convert it to an internal representation (classes) and then insert it into the server." So you have to write an ASP.NET page where you let the user paste/upload the file. E.g.: File Upload ASP.NET MVC 3.0
Once you have loaded the file, you need to convert the CSV format (the format of the stored data) into your internal representation, which means creating your own class with some properties and converting (transforming) the CSV into instances of that class. E.g.: Importing CSV data into C# classes
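A minimal sketch of that transform step, assuming the Username/Address/Roles/Date columns from the question and a well-behaved CSV (no quoted fields); a real parser or a library such as CsvHelper should handle quoting:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Internal representation of one CSV row (columns from the question).
class CsvUser
{
    public string Username { get; set; }
    public string Address { get; set; }
    public string Roles { get; set; }
    public DateTime Date { get; set; }
}

static class CsvLoader
{
    public static List<CsvUser> Parse(string path)
    {
        return File.ReadLines(path)
            .Skip(1)                         // skip the header row
            .Select(line => line.Split(','))   // naive split; no quoted fields
            .Select(parts => new CsvUser
            {
                Username = parts[0],
                Address  = parts[1],
                Roles    = parts[2],
                Date     = DateTime.Parse(parts[3])
            })
            .ToList();
    }
}
```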
Once you have the data inside classes (objects, i.e. instances of classes), you can work with it and carry out the internal work. Here we are looking at CRUD (Create/Read/Update/Delete) operations against a SQL database. First you need to connect to the SQL server, choose a database, and then run the queries. E.g.: https://www.codeproject.com/Articles/837599/Using-Csharp-to-connect-to-and-query-from-a-SQL-da
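A minimal sketch of the insert step, reusing the CsvUser class from the sketch above and the User/AddrDet/RolesDet tables named in the question (the foreign-key wiring between them is omitted and would depend on the real schema):

```csharp
using System.Data.SqlClient;

static class CsvImporter
{
    // Insert the chosen CSV columns into their respective tables.
    public static void Insert(SqlConnection conn, CsvUser u)
    {
        const string sql = @"
            INSERT INTO [User]   (Name)    VALUES (@name);
            INSERT INTO AddrDet  (Address) VALUES (@addr);
            INSERT INTO RolesDet (Roles)   VALUES (@roles);";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", u.Username);
            cmd.Parameters.AddWithValue("@addr", u.Address);
            cmd.Parameters.AddWithValue("@roles", u.Roles);
            cmd.ExecuteNonQuery();
        }
    }
}
```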
Plenty of developers prefer not to write the queries themselves and like a more object-oriented approach to this sort of problem. They use an ORM (object-relational mapping), which lets you keep the same class/object schema in the database and in the application. One example for all is Entity Framework (EF). E.g.: http://www.entityframeworktutorial.net/
As you can see, this topic is not trivial and requires knowledge of several areas of programming.

Editing a large dataset for SQLBulkCopy into a SQL Server database

I have a VERY large (50 million+ records) dataset that I am importing from an old Interbase database into a new SQL Server database.
My current approach is:
acquire CSV files from the Interbase database (done; I used a program called "FBExport" that I found online)
The schema of the old database doesn't match the new one (not under my control), so now I need to mass edit certain fields in order for them to work in the new database. This is the area I need help with.
After editing to the correct schema, I use SqlBulkCopy to copy the newly edited dataset into the SQL Server database.
Part 3 works very quickly; diagnostics show that importing 10,000 records at once is done almost instantly.
My current (slow) approach to part 2 is to read the CSV file line by line, look up the relevant information (e.g. the CSV file has an ID of the form XXX########, whereas the new database has a separate column for the XXX and the ######## parts; or the CSV file references a model via a string, but the new database references it via an ID in the model table), insert a new row into my local table, and then SqlBulkCopy once my local table gets large.
My question is: what would be the best approach (performance-wise) for this data-editing step? I figure there is very likely a LINQ-type approach to this; would that perform better, and how would I go about it if so?
If step #3’s importing is very quick, I would be tempted to create a temporary database whose schema exactly matches the old database and import the records into it. Then I’d look at adding additional columns to the temporary table where you need to split the XXX######## into XXX and ########. You could then use SQL to split the source column into the two separate ones. You could likewise use SQL to do whatever ID based lookups and updates you need to ensure the record relationships continue to be correct.
Once the data has been massaged into a format which is acceptable, you can insert the records into the final tables using IDENTITY_INSERT ON, excluding all legacy columns/information.
In my mind, the primary advantage of doing it within the temporary SQL DB is that at any time you can write queries to ensure that record relationships using the old key(s) are still correctly related to records using the new database’s auto generated keys.
This is of course based on me being more comfortable doing data transformations/validation in SQL than in C#.
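As a rough sketch of that splitting step (assuming a temporary table LegacyImport holding the combined SourceId plus two new columns for the split; all names are hypothetical, and the 3+8 split follows the XXX######## shape in the question):

```csharp
using System.Data.SqlClient;

static class LegacyIdSplitter
{
    // Split the legacy XXX######## identifier into the two new columns
    // with one set-based UPDATE instead of per-row C# edits.
    public static void SplitIds(string connectionString)
    {
        const string splitSql = @"
            UPDATE dbo.LegacyImport
            SET    Prefix = LEFT(SourceId, 3),          -- the XXX part
                   Number = SUBSTRING(SourceId, 4, 8);  -- the ######## part";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(splitSql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```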

Fastest way to compare CSV file to database in c#

I am writing an internal application and one of the functions will be importing data from a remote system. The data from the remote system comes over as a CSV file. I need to compare the data in my system with that of the CSV file.
I need to apply any changes to my system (Adds and Changes). I need to track each field that is changed.
My database is normalized, so I'm dealing with about 10 tables that correspond to the data in the CSV file. What is the best way to implement this? Each CSV file has about 500,000 records that are processed daily. I started by querying row by row from my SQL database using a lookup ID, then using C# to do a field-by-field compare and updating or inserting as necessary; however, this takes way too long.
Any suggestions?
You can do the following:
Load the CSV file into a staging table in your DB;
Perform validation and clean-up routines on it (if necessary);
Perform your comparisons and updates on your live data;
Wipe all data from the staging table.
Using that approach you can implement almost all clean-up, validation, and update logic using your RDBMS functionality.
If your RDBMS is SQL Server you can leverage SQL Server Integration Services.
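For the comparison/update step, a single set-based statement can replace row-by-row C# work. A minimal sketch, assuming a staging table Staging and a live table Live that share a key column Id and one tracked column Value (names hypothetical; real tables would carry many more columns):

```csharp
using System.Data.SqlClient;

static class StagingSync
{
    // Apply adds and changes from the staging table in one pass.
    public static void Sync(string connectionString)
    {
        const string mergeSql = @"
            MERGE dbo.Live AS target
            USING dbo.Staging AS source
               ON target.Id = source.Id
            WHEN MATCHED AND target.Value <> source.Value THEN
                UPDATE SET target.Value = source.Value
            WHEN NOT MATCHED BY TARGET THEN
                INSERT (Id, Value) VALUES (source.Id, source.Value);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(mergeSql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```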
If you have anything that serves as a unique key, you can do the following:
Create a new table Hashes that contains a unique key and a hash of all fields associated with that key. (Do not use .NET's object.GetHashCode(), as the value returned can change between runs by design. I personally use Google's CityHash, which I ported to C#.)
When you get a new CSV file, compute the hash value for each key
Check the Hashes table for each row in the CSV file.
If there is no entry for the unique key, create one and insert the row.
If there is an entry, see if the hash has changed.
If it has, update the hash in the Hashes table and update data.
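A minimal sketch of the hashing side, using SHA-256 from the base class library instead of the CityHash port mentioned above (any stable hash works; the point is only that it must not vary between runs the way object.GetHashCode() can):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class RowHasher
{
    // A stable hash over all fields of a row, keyed separately by the unique key.
    public static string Compute(string[] fields)
    {
        // Join with a separator unlikely to appear in the data so that
        // ("ab","c") and ("a","bc") do not hash identically.
        string joined = string.Join("\u001F", fields);
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(joined));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}
```

Comparing the computed value against the stored Hashes row then tells you whether anything for that key changed, without a field-by-field compare across the 10 tables.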
Expanding on the first comment to your question.
Create an appropriately indexed table that matches the format of your CSV file and dump the data straight into it.
Have a stored procedure with appropriate queries to update/delete/insert to the active tables.
Get rid of the temporary table.

Multiple update from C# 3.0

I have a folder that contains more than 100 .txt files. These files contain a huge amount of data that needs to be orchestrated and updated in a SQL Server table. Each line of a text file has only two columns that interest me, and based on the first column's data (which is a PK in the SQL Server table) I want to update the second column's data in the DB table.
Please suggest the best way to do it in C# 3.0
Presently I am using a StringBuilder and appending the Update query.
I think you should first use bulk copy to import the data into SQL Server and then write a stored procedure. Building the query as one big string and then running it will make the whole process slow.
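A minimal sketch of that approach, assuming a staging table KeyValueStaging(KeyCol, ValueCol) and a target table TargetTable keyed on KeyCol (all names hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

static class BulkUpdater
{
    // 'staging' holds the two interesting columns parsed from the .txt files.
    public static void Run(string connectionString, DataTable staging)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1. Bulk-copy the parsed rows into the staging table.
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.KeyValueStaging";
                bulk.WriteToServer(staging);
            }

            // 2. One set-based UPDATE instead of thousands of
            //    string-built single-row UPDATE statements.
            const string updateSql = @"
                UPDATE t
                SET    t.ValueCol = s.ValueCol
                FROM   dbo.TargetTable AS t
                JOIN   dbo.KeyValueStaging AS s ON s.KeyCol = t.KeyCol;";

            using (var cmd = new SqlCommand(updateSql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```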
