I'm working on an import function. I have an Excel file containing data that will later be edited by the user. I managed to import the Excel file with SmartXLS in C# and write all the data to a SQL Server database. However, what I did was fetch all the data in the Excel file and update every row in the SQL table, which hurts performance, and it also updates rows that were never edited.
Is there any way to get only the modified cells and rows from Excel and update just the corresponding data in the SQL table?
var workbook = new WorkBook();              // SmartXLS workbook
workbook.read(filePath);                    // load the Excel file from disk
var dataTable = workbook.ExportDataTable(); // export the sheet as a DataTable
Just a scenario; maybe it helps you to understand what gordatron and I were talking about.
The situation is the following:
There is a table "Products", which is the central storage place for product information, and a table "UpdatedProducts" whose structure looks exactly like "Products" but whose data may differ. Think of this scenario: you export the "Products" table to Excel in the morning. During the whole day you delete, add, and update products in your Excel table. At the end of the day you want to re-import your Excel data into "Products". What you need (steps 1 and 2 are sketched in C# at the end of this answer):
1. Delete all records from "UpdatedProducts"
2. Insert the data from Excel into "UpdatedProducts" (bulk insert if possible)
3. Update the "Products" table
Then a MERGE statement could look like this:
MERGE Products AS TARGET
USING UpdatedProducts AS SOURCE
    ON TARGET.ProductID = SOURCE.ProductID
WHEN MATCHED AND (TARGET.ProductName <> SOURCE.ProductName
                  OR TARGET.Rate <> SOURCE.Rate) THEN
    UPDATE SET TARGET.ProductName = SOURCE.ProductName,
               TARGET.Rate = SOURCE.Rate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, ProductName, Rate)
    VALUES (SOURCE.ProductID, SOURCE.ProductName, SOURCE.Rate)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
What this statement does:
WHEN MATCHED:
The row exists in both tables; we update the data in "Products" if ProductName or Rate differs.
WHEN NOT MATCHED BY TARGET:
The row exists in the staging table but not in the original table; we add it to "Products".
WHEN NOT MATCHED BY SOURCE:
The row exists in the original table but not in the staging table; it will be deleted from "Products".
Thanks a lot to http://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/ for this perfect example!
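As promised above, here is a minimal C# sketch of steps 1 and 2 (clear the staging table, bulk insert the Excel rows) followed by the MERGE. It is only a sketch under assumptions: the Excel data is already in a DataTable whose columns match "UpdatedProducts", and connectionString is a placeholder.

using System.Data;
using System.Data.SqlClient;

void LoadStagingAndMerge(string connectionString, DataTable excelRows, string mergeSql)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // Step 1: delete all records from the staging table.
        using (var clear = connection.CreateCommand())
        {
            clear.CommandText = "DELETE FROM UpdatedProducts;";
            clear.ExecuteNonQuery();
        }

        // Step 2: bulk insert the Excel rows into the staging table.
        using (var bulk = new SqlBulkCopy(connection))
        {
            bulk.DestinationTableName = "UpdatedProducts";
            bulk.WriteToServer(excelRows);
        }

        // Step 3: run the MERGE statement shown above (passed in as text).
        using (var merge = connection.CreateCommand())
        {
            merge.CommandText = mergeSql;
            merge.ExecuteNonQuery();
        }
    }
}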
I'm in a pickle here. I'm reading an Excel file that has three worksheets, using ExcelDataReader's DataSet support. After reading the file I have a DataSet that contains three tables, one for each worksheet.
Now I need to export this data row by row into two different SQL tables. These tables have an auto-incrementing primary key, and the PK of TableA makes up the FK in TableB. I'm using a stored procedure with SCOPE_IDENTITY() to achieve that.
Each column in the Excel file is a variable in the stored procedure, so as I iterate the sheets row by row I can assign these variables and then send them through the stored procedure.
Now the question is: how do I iterate through this DataSet and assign each row[col] to a variable for my stored procedure?
Thanks for the help.
Update, more info:
1. Sheet 1 goes to Table 1 in SQL
2. Sheet 2 goes to Table 2 in SQL
3. Sheet 3 also goes to Table 2 in SQL
4. Table 1 has a one-to-many relationship to Table 2
Here is some half-pseudocode boilerplate to start from:
var datatable = dataset.Tables[0];
using (var proc = connection.CreateCommand())
{
    proc.CommandType = CommandType.StoredProcedure;
    proc.CommandText = "dbo.YourInsertProcedure"; // your procedure name here
    proc.Parameters.Add("@firstcolumnname", SqlDbType.Int);
    foreach (DataRow dr in datatable.Rows)
    {
        // check for DBNull.Value if the source column is nullable!
        proc.Parameters["@firstcolumnname"].Value = dr["FirstColumnName"];
        proc.ExecuteNonQuery();
    }
}
But this makes sense only if you're doing some data processing inside the stored procedure. Otherwise this calls for a bulk insert, especially if you have lots (thousands) of rows in the Excel sheets.
The one-to-many relation (foreign key) can be tricky. If you can control the order of the inserts, simply start with the "one" table and then load the "many" table afterwards, as in the sketch below. Let the stored procedure match the key values if they are not part of the Excel source data.
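To make the ordering concrete, here is a hedged sketch; the procedure names (dbo.InsertTableA, dbo.InsertTableB) and their parameters are hypothetical, and it assumes each procedure hands the new key back through an output parameter filled from SCOPE_IDENTITY():

// Insert the "one" row first and capture its generated key.
using (var parentCmd = connection.CreateCommand())
{
    parentCmd.CommandType = CommandType.StoredProcedure;
    parentCmd.CommandText = "dbo.InsertTableA"; // hypothetical procedure
    parentCmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100).Value = parentName;
    var newId = parentCmd.Parameters.Add("@NewId", SqlDbType.Int);
    newId.Direction = ParameterDirection.Output; // set via SCOPE_IDENTITY() inside the proc
    parentCmd.ExecuteNonQuery();

    // Then insert the "many" rows, passing the captured key as the FK.
    foreach (DataRow dr in childTable.Rows)
    {
        using (var childCmd = connection.CreateCommand())
        {
            childCmd.CommandType = CommandType.StoredProcedure;
            childCmd.CommandText = "dbo.InsertTableB"; // hypothetical procedure
            childCmd.Parameters.Add("@TableAId", SqlDbType.Int).Value = (int)newId.Value;
            childCmd.Parameters.Add("@Value", SqlDbType.NVarChar, 200).Value = dr["Value"];
            childCmd.ExecuteNonQuery();
        }
    }
}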
It's a whole different story if you are in a concurrent multi-user environment where the tables are read and written while you load. Then you'd have to add transactions to preserve referential integrity throughout the process.
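A minimal sketch of that, assuming "connection" is already open; every command in the load must be given the same transaction:

using (SqlTransaction tx = connection.BeginTransaction())
{
    try
    {
        // ... run the parent/child inserts from the sketch above,
        //     assigning cmd.Transaction = tx on each command ...
        tx.Commit();   // make all rows visible at once
    }
    catch
    {
        tx.Rollback(); // leave the tables untouched on any failure
        throw;
    }
}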
I want to export data from different Excel files to a database.
While exporting, if the database table already contains the same row of data that is present in the Excel file, then that row should not be loaded into the database.
Can anybody provide me the code?
I already know how to export from Excel to the database; with that approach, if I export the same data twice, I end up with two rows containing the same data.
Thanks in advance.
A simple solution would be to import the data into a staging table and add the non-duplicates to the main table using
insert into target_table (col_list)
select t1.col_list from staging as t1
where not exists
    (select * from target_table as t2 where t1.keycol = t2.keycol)
It's better to export the Excel data to a temp table and then, using DISTINCT or a similar filter, select only the distinct rows and insert them into your own tables.
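A hedged sketch combining both answers (DISTINCT against duplicates inside the Excel data, NOT EXISTS against rows already in the target); #exceltemp, target_table, keycol, and col1 are placeholder names:

using (var cmd = connection.CreateCommand())
{
    // Keep one copy of each staged row, and skip rows the target already has.
    cmd.CommandText = @"
        INSERT INTO target_table (keycol, col1)
        SELECT DISTINCT t.keycol, t.col1
        FROM #exceltemp AS t
        WHERE NOT EXISTS (SELECT 1 FROM target_table AS x
                          WHERE x.keycol = t.keycol);";
    cmd.ExecuteNonQuery();
}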
I am working on a functionality where I upload an Excel file and add/update those records (sheet 1) in SQL Server. I was able to add the data to SQL Server with this link.
But what it does is truncate the table and add the values again. I don't want that, because 30% of the data is generic and cannot be deleted. There is a field called OSID in the Excel sheet and the same field in the database; that is the unique key in my table. What I want is to update only those values in the database whose key matches the key from the Excel sheet.
I would suggest using the code from that link to import the Excel data into a separate staging table, and then updating your main table with a join to the staging table.
From that link, the table name they used was tdatamigrationtable. Your update query would look something like
update m
set m.col1 = s.col1, m.col2 = s.col2, m.col3 = s.col3
from dbo.mytable m
inner join dbo.tdatamigrationtable s on m.osid = s.osid;
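Since the question mentions adding as well as updating, here is a hedged C# sketch that runs the update above plus an insert for OSIDs not yet in the main table; the col1..col3 names come from the example and should be adjusted to your schema:

using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = @"
        UPDATE m SET m.col1 = s.col1, m.col2 = s.col2, m.col3 = s.col3
        FROM dbo.mytable m
        INNER JOIN dbo.tdatamigrationtable s ON m.osid = s.osid;

        INSERT INTO dbo.mytable (osid, col1, col2, col3)
        SELECT s.osid, s.col1, s.col2, s.col3
        FROM dbo.tdatamigrationtable s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.mytable m WHERE m.osid = s.osid);";
    cmd.ExecuteNonQuery();
}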
I created two related tables (FIRSTtable and SECONDtable) in a MySQL database.
The FIRST table has the columns product_id (PK) and product_name.
The SECOND table has the columns machine_id, production_date, product_id (FK), product_quantity, and operator_id.
The relation between the two tables uses the product_id column with ON UPDATE CASCADE and ON DELETE CASCADE. Both relationships function normally when I try them with an SQL script: if I delete all product_id rows in the FIRST table, all related data in the SECOND table is deleted too.
Both of these tables are displayed in DataGridViews. When I delete all the data in the FIRST table, all rows in the FIRST table's DataGridView are removed, and the data in the MySQL FIRST table is deleted as well.
When I open the MySQL database, the data in the SECOND table has also been deleted. The problem: why does the second DataGridView not reflect this and still show the previous data? How do I refresh the DataGridView binding in VB.NET or C#? Thanks.
With Me.SECOND_DataGridView
    .DataSource = Nothing ' tried this, but it failed
    .DataSource = MyDataset.Tables("SECOND_table")
End With
I believe what you are running into is the fact that the MySQL engine is performing the cascading deletes for you.
When you query the MySQL data into a local C# DataTable (a table within a DataSet), that data is in memory and no longer directly linked to what is on disk. When you delete the rows in the "memory" version of the first data table, the deletions happen at the SERVER for the second-level table, but your in-memory version of data table two is NOT updated.
That being said, you will probably have to do one of two things: requery the entire DataSet (tables one and two) to get a full refresh of what is STILL in the actual database, OR, as you delete from table one of the DataSet, perform the corresponding delete handling in the local data table TWO as well to keep it in sync.
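For the requery option, a hedged C# sketch (the question asked for VB.NET or C#); it assumes the MySql.Data (Connector/NET) provider and reuses the names from the question:

using MySql.Data.MySqlClient;

// Throw away the stale in-memory rows, then re-pull from the server,
// where the cascading delete has already happened.
MyDataset.Tables["SECOND_table"].Clear();
using (var adapter = new MySqlDataAdapter("SELECT * FROM SECONDtable;", connection))
{
    adapter.Fill(MyDataset, "SECOND_table");
}

// Rebind so the grid picks up the refreshed table.
SECOND_DataGridView.DataSource = null;
SECOND_DataGridView.DataSource = MyDataset.Tables["SECOND_table"];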
This is my first post. I have two SQL Server databases located on different servers.
Let's say SDT is the source data table in the source database SDB, and DDT is the destination data table in database DDB.
I'm using C# to bulk copy from SDT to DDT.
My code is something like this:
sqlcommand = "DELETE FROM DDT WHERE locID = @LocIDParam" // @LocIDParam is the parameter for a specific location
then bulk copy "SELECT * FROM SDT WHERE locID = @LocIDParam" // the steps are well known
I just don't want to go into useless details.
However, SDT holds a huge amount of data, so bulk copying the whole table causes heavy traffic.
Is there any way to bulk copy only the updated records from SDT to DDT, as well as insert the new ones?
Do you think using a SQL trigger for updated and newly inserted data is the best idea for this kind of scenario? (A trigger that inserts the primary key value into a single-column table for new and updated rows, then deleting from and inserting into DDT based on that.)
P.S. I don't want to use SQL replication for this, since it has a lot of problems.
Thank you in advance
From the date I suppose you already found your solution. In case not, here is how we deal with a somewhat similar situation.
On the source table we have a column that shows whether the data has to be sent to the destination. We use a boolean, but you could also use a datetime field that stores the last update date.
Then our pull process does the following (sketched in code after the list):
1. Pull all the flagged data into a temporary table on the destination server
2. Update the records that exist in both tables
3. Insert all records from the temporary table that don't exist in the destination table
4. Drop the temporary table
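A hedged C# sketch of those four steps, assuming SQL Server on both sides; the names (SDT, DDT, #pull, id, col1) and the bit flag column IsDirty are placeholders:

// 1. Read the flagged source rows and copy them into a temp table
//    created on the destination connection.
var flagged = new DataTable();
using (var pull = sourceConnection.CreateCommand())
{
    pull.CommandText = "SELECT * FROM SDT WHERE IsDirty = 1;"; // hypothetical flag column
    flagged.Load(pull.ExecuteReader());
}
using (var create = destConnection.CreateCommand())
{
    create.CommandText = "SELECT TOP 0 * INTO #pull FROM DDT;"; // structure only
    create.ExecuteNonQuery();
}
using (var bulk = new SqlBulkCopy(destConnection))
{
    bulk.DestinationTableName = "#pull";
    bulk.WriteToServer(flagged);
}

// 2.-4. Update matches, insert missing rows, drop the temp table.
using (var apply = destConnection.CreateCommand())
{
    apply.CommandText = @"
        UPDATE d SET d.col1 = p.col1
        FROM DDT d INNER JOIN #pull p ON d.id = p.id;

        INSERT INTO DDT (id, col1)
        SELECT p.id, p.col1 FROM #pull p
        WHERE NOT EXISTS (SELECT 1 FROM DDT d WHERE d.id = p.id);

        DROP TABLE #pull;";
    apply.ExecuteNonQuery();
}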
If you use SQL Server 2008, there is also the MERGE option, which I don't know well. Here is a link that explains it:
SQL 2008 MERGE command
Hope this helps if you still need it.