I am having issues updating my database using LINQ to SQL.
I have a master query that retrieves all records in the database (16,000 records):
PostDataContext ctxPost = new PostDataContext();
int n = 0;
var d = (from c in ctxPost.PWC_Gs
where c.status == 1
select c);
I then take the first 1000 and pass them to another object after modification, using the following query:
var cr = d.Skip(n).Take(1000);
I loop through the records using a foreach loop:
foreach (var _d in cr)
{
// Some stuff here
_d.status = 0;
}
I then call SubmitChanges:
ctxPost.SubmitChanges();
No records get updated.
Thanks to you all. I was missing the primary key on the ID field in the dbml file.
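For anyone who hits the same thing: without a primary key in the mapping, LINQ to SQL cannot track the entities, so SubmitChanges generates no UPDATE statements at all. A minimal sketch of what the fixed mapping looks like (the table and property names here are guessed from the query above):

using System.Data.Linq.Mapping;

[Table(Name = "dbo.PWC_G")]
public partial class PWC_G
{
    // Without IsPrimaryKey = true, LINQ to SQL treats the entity as
    // untrackable and SubmitChanges silently issues no UPDATEs.
    [Column(IsPrimaryKey = true)]
    public int ID { get; set; }

    [Column]
    public int status { get; set; }
}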
Related
Suppose I have executed an update query in a loop using PetaPoco, like:
foreach (var obj in mainObject) {
    db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
}
How do I know whether each of these queries executed successfully?
Execute returns the number of affected rows, so if you update one row you'd get 1 as the return value if it succeeded, otherwise 0 (or an error).
bool allSucceeded = true;
foreach (var obj in mainObject)
{
    int updated = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    bool succeeded = updated != 0;
    if (!succeeded)
        allSucceeded = false;
}
Note that Execute doesn't simply return 1 for success and 0 for failure; it returns the number of affected rows. If you execute DELETE FROM Table, for example, you'd delete all rows of that table and the return value would be the table's row count. So whether 0 means failure or 1 means success depends on the query and your logic.
By the way, this behaviour is consistent with ADO.NET methods like SqlCommand.ExecuteNonQuery.
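For comparison, a minimal ADO.NET sketch of the same check (the connection string is a placeholder, and Column1/obj stand in for the question's values):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "update Table set Column = @col where C1 = @c1 and C2 = @c2", conn))
{
    cmd.Parameters.AddWithValue("@col", Column1);
    cmd.Parameters.AddWithValue("@c1", obj.data1);
    cmd.Parameters.AddWithValue("@c2", obj.data2);
    conn.Open();
    // Like PetaPoco's Execute, ExecuteNonQuery returns the affected row count.
    int affected = cmd.ExecuteNonQuery();
}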
PetaPoco's Execute usually returns 1 or greater if the query affected any rows, and 0 if it affected none.
In this scenario you can trace those values by collecting them in the loop, like this:
List<int> checkSuccess = new List<int>(); // to trace the value returned by each Execute call
foreach (var obj in mainObject) {
    int updated = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    checkSuccess.Add(updated);
}
if (checkSuccess.All(i => i >= 1))
{
    // every query updated at least one row
}
foreach (var obj in mainObject)
{
    var result = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    if (result < 1)
    {
        // not ok: this update affected no rows
    }
}
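If the batch should stand or fall as a whole, the per-row check combines naturally with PetaPoco's scoped transactions. A sketch (GetTransaction/Complete is PetaPoco's transaction-scope API; adjust the failure handling to your needs):

using (var scope = db.GetTransaction())
{
    foreach (var obj in mainObject)
    {
        int updated = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
        if (updated == 0)
            throw new Exception("Update matched no rows; abandoning the batch.");
    }
    scope.Complete(); // commits; disposing without Complete rolls back
}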
I am trying to merge data from two separate queries using C#. The data is located on separate servers or I would just combine the queries. I want to update the data in one of the columns of the first data set with the data in one of the columns of the second data set, joining on a different column.
Here is what I have so far:
ds.Tables[3].Columns[2].ReadOnly = false;
List<object> table = new List<object>();
table = ds.Tables[3].AsEnumerable().Select(r => r[2] = reader.AsEnumerable().Where(s => r[3] == s[0])).ToList();
The ToList() is just for debugging. To summarize, column 2 of ds.Tables[3] is the column I want to update, and column 3 contains the key I want to join on.
In the reader, the first column contains the key matching column 3 of ds.Tables[3], and the second column contains the data with which I want to update column 2.
The error I keep getting is:
Unable to cast object of type 'WhereEnumerableIterator`1[System.Data.IDataRecord]' to type 'System.IConvertible'. Couldn't store <System.Linq.Enumerable+WhereEnumerableIterator`1[System.Data.IDataRecord]> in Quoting Dealers Column. Expected type is Int32.
Where am I going wrong with my LINQ?
EDIT:
I updated the line where the update happens:
table = ds.Tables[3].AsEnumerable().Select(r => r[2] = reader.AsEnumerable().First(s => r[3] == s[0])[1]).ToList();
but now I keep getting
Sequence contains no matching element
For the record, the sequence does contain a matching element.
You can use the following sample to achieve the join and update operation. Suppose there are two DataTables, tbl1 and tbl2. The code joins the two tables and updates the value of column "name1" of tbl1 from column "name2" of tbl2:
public DataTable JoinAndUpdate(DataTable tbl1, DataTable tbl2)
{
// For demo purposes I have created a clone of tbl1;
// you can define a custom schema if needed.
DataTable dtResult = tbl1.Clone();
var result = from dataRows1 in tbl1.AsEnumerable()
join dataRows2 in tbl2.AsEnumerable()
on dataRows1.Field<int>("ID") equals dataRows2.Field<int>("ID") into lj
from reader in lj
select new object[]
{
dataRows1.Field<int>("ID"), // ID from table 1
reader.Field<string>("name2"), // Updated column value from table 2
dataRows1.Field<int>("age")
// .. here comes the rest of the fields from table 1.
};
// Load the results in the table
result.ToList().ForEach(row => dtResult.LoadDataRow(row, false));
return dtResult;
}
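A minimal usage sketch with hypothetical sample data (the column names ID, name1, name2, and age match the method above):

var tbl1 = new DataTable();
tbl1.Columns.Add("ID", typeof(int));
tbl1.Columns.Add("name1", typeof(string));
tbl1.Columns.Add("age", typeof(int));
tbl1.Rows.Add(1, "old name", 30);

var tbl2 = new DataTable();
tbl2.Columns.Add("ID", typeof(int));
tbl2.Columns.Add("name2", typeof(string));
tbl2.Rows.Add(1, "new name");

DataTable result = JoinAndUpdate(tbl1, tbl2);
// result now holds one row: ID = 1, name1 = "new name", age = 30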
After considering what @DStanley said about LINQ, I abandoned it and went with a foreach statement. See the code below:
ds.Tables[3].Columns[2].ReadOnly = false;
while (reader.Read())
{
foreach (DataRow item in ds.Tables[3].Rows)
{
if ((Guid)item[3] == reader.GetGuid(0))
{
item[2] = reader.GetInt32(1);
}
}
}
The application I am building allows a user to upload a .csv file containing multiple rows and columns of data. Each row contains a unique varchar Id. This will ultimately fill in fields of an existing SQL table where there is a matching Id.
Step 1: I am using LinqToCsv and a foreach loop to import the .csv fully into a temporary table.
Step 2: Then I have another foreach loop where I am trying to loop the rows from the temporary table into an existing table only where the Ids match.
Controller Action to complete this process:
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
var inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
var cc = new CsvContext();
var filePath = uploadFile(csvFile.InputStream);
var model = cc.Read<Credit>(filePath, inputFileDescription);
try
{
var entity = new TestEntities();
var tc = new TemporaryCsvUpload();
foreach (var item in model)
{
tc.Id = item.Id;
tc.CreditInvoiceAmount = item.CreditInvoiceAmount;
tc.CreditInvoiceDate = item.CreditInvoiceDate;
tc.CreditInvoiceNumber = item.CreditInvoiceNumber;
tc.CreditDeniedDate = item.CreditDeniedDate;
tc.CreditDeniedReasonId = item.CreditDeniedReasonId;
tc.CreditDeniedNotes = item.CreditDeniedNotes;
entity.TemporaryCsvUploads.Add(tc);
}
var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);
foreach (var number in idMatches)
{
number.CreditInvoiceDate = tc.CreditInvoiceDate;
number.CreditInvoiceNumber = tc.CreditInvoiceNumber;
number.CreditInvoiceAmount = tc.CreditInvoiceAmount;
number.CreditDeniedDate = tc.CreditDeniedDate;
number.CreditDeniedReasonId = tc.CreditDeniedReasonId;
number.CreditDeniedNotes = tc.CreditDeniedNotes;
}
entity.SaveChanges();
entity.Database.ExecuteSqlCommand("TRUNCATE TABLE TemporaryCsvUpload");
TempData["Success"] = "Updated Successfully";
}
catch (LINQtoCSVException)
{
TempData["Error"] = "Upload Error: Ensure you have the correct header fields and that the file is of .csv format.";
}
return View("Upload");
}
The issue in the above code is that tc is assigned inside the first loop, but the matches are looked up after the loop with var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);, so I am only getting the last item of the first loop.
If I nest the second loop inside the first, it is way too slow (I stopped it after 10 minutes), because there are roughly 1000 rows in the .csv and 7000 in the preexisting table.
Finding a better way to do this is plaguing me. Pretend the temporary table didn't even come from a .csv and just think about the most efficient way to fill in rows in table 2 from table 1 where the ids match. Thanks for your help!
As your code is written now, much of the work is being done by the application that could much more efficiently be done by SQL Server. You are making hundreds of unnecessary roundtrip calls to the database. When you are mass importing data you want a solution like this:
Bulk import the data. See this answer for helpful guidance on bulk import efficiency with EF.
Join and update destination table.
Processing the import should only require a single mass update query:
update PT set
CreditInvoiceDate = CSV.CreditInvoiceDate
,CreditInvoiceNumber = CSV.CreditInvoiceNumber
,CreditInvoiceAmount = CSV.CreditInvoiceAmount
,CreditDeniedDate = CSV.CreditDeniedDate
,CreditDeniedReasonId = CSV.CreditDeniedReasonId
,CreditDeniedNotes = CSV.CreditDeniedNotes
from PreexistingTable PT
join TemporaryCsvUploads CSV on PT.Id = CSV.Id
This query would replace your entire nested loop and apply the same update in a single database call. As long as your table is indexed properly this should run very fast.
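Since the CSV rows already land in TemporaryCsvUploads, the statement can be issued from the same context via the ExecuteSqlCommand call the question already uses for the TRUNCATE. A sketch (table names are taken from the question's code):

// One round trip: the join and update run entirely on the server.
int rowsAffected = entity.Database.ExecuteSqlCommand(@"
    update PT set
         CreditInvoiceDate    = CSV.CreditInvoiceDate
        ,CreditInvoiceNumber  = CSV.CreditInvoiceNumber
        ,CreditInvoiceAmount  = CSV.CreditInvoiceAmount
        ,CreditDeniedDate     = CSV.CreditDeniedDate
        ,CreditDeniedReasonId = CSV.CreditDeniedReasonId
        ,CreditDeniedNotes    = CSV.CreditDeniedNotes
    from PreexistingTable PT
    join TemporaryCsvUploads CSV on PT.Id = CSV.Id");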
After saving the CSV records into a second table that has the same fields as your primary table, execute the following procedure in SQL Server:
create proc [dbo].[excel_updation]
as
set xact_abort on
begin transaction
-- First update records
update first_table
set [ExamDate] = source.[ExamDate],
[marks] = source.[marks],
[result] = source.[result],
[dob] = source.[dob],
[spdate] = source.[spdate],
[agentName] = source.[agentName],
[companycode] = source.[companycode],
[dp] = source.[dp],
[state] = source.[state],
[district] = source.[district],
[phone] = source.[phone],
[examcentre] = source.[examcentre],
[examtime] = source.[examtime],
[dateGiven] = source.[dateGiven],
[smName] = source.[smName],
[smNo] = source.[smNo],
[bmName] = source.[bmName],
[bmNo] = source.[bmNo]
from first_table
inner join second_table source
on first_table.[UserId] = source.[UserId]
-- And then insert
insert into first_table ([UserId], [ExamDate], [marks], [result], [dob], [spdate], [agentName], [companycode], [dp], [state], [district], [phone], [examcentre], [examtime], [dateGiven], [smName], [smNo], [bmName], [bmNo])
select [UserId], [ExamDate], [marks], [result], [dob], [spdate], [agentName], [companycode], [dp], [state], [district], [phone], [examcentre], [examtime], [dateGiven], [smName], [smNo], [bmName], [bmNo]
from second_table source
where not exists
(
select *
from first_table
where first_table.[UserId] = source.[UserId]
)
commit transaction
delete from second_table
The only requirement is that both tables share the same matching UserId key. For each UserId found in both tables, that row's data is updated in the first table; rows that exist only in the second table are inserted.
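From the application side the whole procedure is then one call; a sketch, assuming an Entity Framework context like the TestEntities one in the question:

// Runs the update-then-insert batch entirely on the server.
entity.Database.ExecuteSqlCommand("exec dbo.excel_updation");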
As long as the probability of a match is high, you can simply attempt an UPDATE with every row from your CSV, with a condition that the id matches:
UPDATE table SET ... WHERE id = @id
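A sketch of that update-first approach, reusing the question's context; only CreditInvoiceAmount is shown, and the zero-rows branch marks ids that need an insert instead:

foreach (var item in model)
{
    // ExecuteSqlCommand returns the number of rows the UPDATE affected.
    int affected = entity.Database.ExecuteSqlCommand(
        "update PreexistingTable set CreditInvoiceAmount = @p0 where Id = @p1",
        item.CreditInvoiceAmount, item.Id);
    if (affected == 0)
    {
        // No matching id: handle this row as an insert (or log it).
    }
}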
I have two DataTables; let's just call them db1 and db2. db2 contains all the records db1 has, but db1 doesn't contain all the records of db2 (they both have the same columns). I have to check the modifications every day in db1 and apply the same to db2.
Currently my tool "exports" both tables into DataTables, performs the conversion and updates/imports the records into db2:
SELECT * FROM db1 -> db1_table
SELECT * FROM db2 -> db2_table
for (int i = 0; i < db1_table.Rows.Count; i++)
{
for (int j = 0; j < db2_table.Rows.Count; j++)
{
//if db1_table.Rows[i] != db2_table.Rows[j] -> UPDATE db2 SET etc.
//if db1_table.Rows[i] doesn't exist in db2 -> INSERT INTO db2 etc.
}
}
This version becomes quite slow after a while. I'm talking about tens of thousands of records.
The other approach was my initial idea, but I found it slow too: I pull the whole db1, loop through all of its records, and execute an SQL query each time:
SELECT * FROM db1 -> db1_table
for (int i = 0; i < db1_table.Rows.Count; i++)
{
//SELECT * FROM db2 WHERE attributes LIKE db1_table.Rows[i]
//do the comparison here and execute the UPDATE/INSERT commands if necessary
}
Which is the faster (better) way? Are there any other options I might have?
Side note: you really shouldn't store duplicate data in two tables with the same structure in the first place...
Side note: you should be doing this update in SQL.
To answer your actual question: what you're experiencing is O(N^2) algorithmic complexity. It can be reduced to roughly O(N) if you build a hashtable (dictionary) from one of the tables and iterate only over the other one. When you look for a match you probe the hashtable instead of iterating, which is roughly O(1) instead of O(N). You just need a good key value to hash on.
Something like this:
// Build the lookup; make sure no duplicate keys exist!
var dict = db2_table.Rows.Cast<DataRow>().ToDictionary(row2 => row2["keycolumn"]);
foreach (DataRow row1 in db1_table.Rows)
{
    DataRow row2;
    if (dict.TryGetValue(row1["keycolumn"], out row2))
    {
        // row1 and row2 match by the key column, do something with them
        dict.Remove(row2["keycolumn"]);
    }
    // else no match, row1 must be a new row
}
// Now dict contains the keys from db2 which have no match in db1; they must have been deleted.
There's another option that's O(n) if you have a unique ID that you can order on and compare: Order both tables by the ID and walk them both at once, generating lists of pending changes. After that you can apply the pending changes. The reason for generating lists of changes is so that you can batch commands together at the end of the change detection and benefit from things like bulk inserts, CTEs or temp tables to join on for deletes, and batched command groups for updates -- all of which reduce one of the biggest sources of latency in this kind of operation: DB round trips.
The main loop looks like the following:
// Assuming that IDs are long. Change as required.
var idsToAppend = new List<long>();
var idsToUpdate = new List<long>();
var idsToDelete = new List<long>();
int i = 0;
int j = 0;
while (i < db1_table.Rows.Count && j < db2_table.Rows.Count)
{
    long db1_id = (long)db1_table.Rows[i]["ID"];
    long db2_id = (long)db2_table.Rows[j]["ID"];
    if (db1_id == db2_id)
    {
        // On the same ID in both datasets; == won't work on DataRows,
        // so compare field by field. Only needed if db1 may change and
        // the changes must be propagated to db2.
        if (!db1_table.Rows[i].ItemArray.SequenceEqual(db2_table.Rows[j].ItemArray))
        {
            idsToUpdate.Add(db1_id);
        }
        i++;
        j++;
    }
    else if (db1_id > db2_id)
    {
        // Row was removed from the source; remove it from the destination
        idsToDelete.Add(db2_id);
        j++;
    }
    else // db1_id < db2_id
    {
        // Row in db1 doesn't exist in db2, needs added
        idsToAppend.Add(db1_id);
        i++;
    }
}
// Leftovers: extra source rows are new, extra destination rows were removed
for (; i < db1_table.Rows.Count; i++)
    idsToAppend.Add((long)db1_table.Rows[i]["ID"]);
for (; j < db2_table.Rows.Count; j++)
    idsToDelete.Add((long)db2_table.Rows[j]["ID"]);
// Walk idsToAppend, idsToUpdate, and idsToDelete applying changes
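To keep round trips down when applying the changes, batch each list into as few statements as possible; for example, the deletes can go out as a single statement. A sketch (the table name and connection are placeholders, and very large id lists should be chunked or staged in a temp table):

if (idsToDelete.Count > 0)
{
    // The ids are longs produced above, so joining them inline is injection-safe.
    string inList = string.Join(",", idsToDelete);
    using (var cmd = new SqlCommand(
        "DELETE FROM db2_table WHERE ID IN (" + inList + ")", connection))
    {
        cmd.ExecuteNonQuery();
    }
}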
I am attempting to update multiple records in my DB. I have the code below, which I am using to update one item.
How do I update multiple records?
abcProduct productUpdate = dc.abcProducts.Single(p => p.feedSubmitId == submissionId);
productUpdate.prodPublished = '1';
Try this:
dc.abcProducts.Where(p => p.feedSubmitId == submissionId).ToList().ForEach(x=>x.prodPublished = '1');
var products = dc.abcProducts.Where(p => p.feedSubmitId == submissionId);
foreach(var product in products)
{
product.prodPublished = '1';
}
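Either way, LINQ to SQL only writes the changes back to the database when SubmitChanges is called on the context:

dc.SubmitChanges();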
You can also write a stored procedure in your database.
CREATE PROCEDURE SetProdPublished
    @submissionId int,
    @prodPublished int
AS
UPDATE AbcProducts SET prodPublished = @prodPublished
WHERE feedSubmitId = @submissionId
Then you drag that stored procedure onto your DBML designer surface and call it from the data context:
dataContext.SetProdPublished(someSubmissionId, 1);
If you are updating more than a few rows, this will be faster than updating them in a loop in code, and much faster if you're updating a lot of rows.