Update multiple records in database - C#

I am attempting to update multiple records in my DB. I have the code below, which I am using to update a single item.
How do I update multiple records at once?
abcProduct productUpdate = dc.abcProducts.Single(p => p.feedSubmitId == submissionId);
productUpdate.prodPublished = '1';

Try this:
dc.abcProducts.Where(p => p.feedSubmitId == submissionId).ToList().ForEach(x => x.prodPublished = '1');
dc.SubmitChanges(); // persist the changes

var products = dc.abcProducts.Where(p => p.feedSubmitId == submissionId);
foreach (var product in products)
{
    product.prodPublished = '1';
}
dc.SubmitChanges(); // persist the changes

You can also write a stored procedure in your database.
CREATE PROCEDURE SetProdPublished
    @submissionId int,
    @prodPublished int
AS
UPDATE AbcProducts SET prodPublished = @prodPublished
WHERE feedSubmitId = @submissionId
Then you drag that stored procedure into your DBML. You can then call that stored procedure.
dataContext.SetProdPublished(someSubmissionId, 1);
If you are updating more than a few rows, this will be faster than updating them in a loop in code, and much faster if you're updating a lot of rows.
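If you would rather not add a stored procedure, a similar set-based update can be issued directly from LINQ to SQL with DataContext.ExecuteCommand; a sketch assuming the same dc context and the table/column names used above:
// Set-based update in a single round trip, without a stored procedure.
dc.ExecuteCommand(
    "UPDATE AbcProducts SET prodPublished = {0} WHERE feedSubmitId = {1}",
    1, submissionId);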

Related

How can I process SQL Server paginated query results in a C# loop / list

DECLARE @PageNumber AS INT
DECLARE @RowsOfPage AS INT
DECLARE @MaxTablePage AS FLOAT
SET @PageNumber = 1
SET @RowsOfPage = 4
SELECT @MaxTablePage = COUNT(*) FROM SampleFruits
SET @MaxTablePage = CEILING(@MaxTablePage / @RowsOfPage)
WHILE @MaxTablePage >= @PageNumber
BEGIN
    SELECT FruitName, Price
    FROM SampleFruits
    ORDER BY Price
    OFFSET (@PageNumber - 1) * @RowsOfPage ROWS
    FETCH NEXT @RowsOfPage ROWS ONLY
    SET @PageNumber = @PageNumber + 1
END
I have created two SQL Server paginated queries following the sample script above, found at this link: https://www.sqlshack.com/pagination-in-sql-server/
I want to load their results in .NET lists, something like this:
List<Item> currentItemVersion = GetCurrentItemVersion();
List<Item> itemVersionHistory = GetItemVersionHistory();
foreach (Item myItem in currentItemVersion)
{
    if (myItem.IsGood == true)
    {
        List<Item> goodItems = itemVersionHistory.Where(x => x.Item_ID == myItem.Item_ID).ToList();
        foreach (Item ItemVersions in goodItems)
        {
            // DO SOME THINGS HERE
        }
    }
}
In this C# code, the lists currentItemVersion and itemVersionHistory contain only the first 4 items returned by the first page of the underlying T-SQL paging query, so I can only process the 4 items from that first page.
How do I process all the items in the several pages returned by my underlying SQL Server paged queries?
Or is this actually the correct way of doing what I am trying to do?
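One way to consume such a paged query from C# is to keep requesting pages until a page comes back short, accumulating each page into one list. Below is a minimal ADO.NET sketch against the SampleFruits query from the script above; the Item class (with FruitName/Price properties) and connectionString are assumptions to adapt to your own paginated queries.
// Reads every page of the OFFSET/FETCH query into a single list.
// Requires System.Collections.Generic and System.Data.SqlClient.
private static List<Item> GetAllItems(string connectionString, int rowsOfPage)
{
    var allItems = new List<Item>();
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        for (int pageNumber = 1; ; pageNumber++)
        {
            int rowsInPage = 0;
            using (var cmd = new SqlCommand(
                @"SELECT FruitName, Price FROM SampleFruits
                  ORDER BY Price
                  OFFSET (@PageNumber - 1) * @RowsOfPage ROWS
                  FETCH NEXT @RowsOfPage ROWS ONLY", conn))
            {
                cmd.Parameters.AddWithValue("@PageNumber", pageNumber);
                cmd.Parameters.AddWithValue("@RowsOfPage", rowsOfPage);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        allItems.Add(new Item
                        {
                            FruitName = reader.GetString(0),
                            Price = reader.GetDecimal(1) // assumes Price is a decimal column
                        });
                        rowsInPage++;
                    }
                }
            }
            if (rowsInPage < rowsOfPage)
                break; // last (possibly empty) page reached
        }
    }
    return allItems;
}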

Taking rows in chunks from datatable and inserting in database

I have around 25k records in a DataTable. I already have an update query, written by a previous developer, which I can't change. What I am trying to do is as follows:
Take 1000 records at a time from the DataTable (the total can vary from 1 to 25k).
Take the update query, which is in a string, replace its IN('values here') clause with these 1000 records, and fire the query against the database.
Now, I know there are more efficient ways to do it, like bulk insert using array binding, but I can't change the present coding pattern due to restrictions.
What I have tried to do:
if (dt.Rows.Count > 0)
{
    foreach (DataRow dr in dt.Rows)
    {
        reviewitemsend = reviewitemsend + dr["ItemID"].ToString() + ',';
        // If record count is 1000, execute against database.
    }
}
Now, the above approach is getting me nowhere and I am stuck. Another, better approach which I am thinking of is below:
int TotalRecords = dt.Rows.Count;
if (TotalRecords < 1000 && TotalRecords > 0)
{
    // Update the existing query with these records by placing them in the IN clause and execute.
}
else
{
    // The counter is a whole number, so also check the modulus division: if it is non-zero,
    // increment intLoopCounter by 1 to cover the extra records.
    intLoopCounter = TotalRecords / 1000;
    for (int i = 0; i < intLoopCounter; i++)
    {
        // Take a thousand records at a time (the last pass may have fewer than 1000) and execute against the database.
    }
}
Also, note the update query is below:
string UpdateStatement = @"UPDATE Table
    SET column1 = <STATUS>,
        column2 = '<NOTES>',
        changed_by = '<CHANGEDBY>',
        status = NULL
    WHERE ID IN (<IDS>)";
In the above update query, <IDS> is already replaced with all 25k record IDs, which will be shown to the end user like that; internally I have to execute it as separate chunks, so within the IN() clause I need to insert 1k records at a time.
You can split your DataTable using this LINQ method:
private static List<List<DataRow>> SplitDataTable(DataTable table, int pageSize)
{
    return table.AsEnumerable()
        .Select((row, index) => new { Row = row, Index = index })
        .GroupBy(x => x.Index / pageSize)
        .Select(x => x.Select(v => v.Row).ToList())
        .ToList();
}
Then run the database query on each chunk:
foreach (List<DataRow> chunk in SplitDataTable(dt, 1000))
{
    foreach (DataRow row in chunk)
    {
        // prepare data from row
    }
    // execute against database
}
Tip: you can modify the split query to prepare your data directly inside of it (by replacing the x.Select(v => v.Row) part), instead of looping twice over that huge DataTable.
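For the IN(<IDS>) part of the question, each chunk can be flattened into a comma-separated list and substituted into the existing query template before it is executed. A rough sketch (ExecuteUpdate is a placeholder for however the existing code runs the SQL, and chunk.Select needs using System.Linq):
foreach (List<DataRow> chunk in SplitDataTable(dt, 1000))
{
    // Build the comma-separated ID list for this chunk of up to 1000 rows.
    string ids = string.Join(",", chunk.Select(r => r["ItemID"].ToString()));
    string sql = UpdateStatement.Replace("<IDS>", ids);
    ExecuteUpdate(sql); // placeholder: execute the chunked query against the database
}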

Seeking a less costly solution for matching Ids of two tables

The application I am building allows a user to upload a .csv file containing multiple rows and columns of data. Each row contains a unique varchar Id. This will ultimately fill in fields of an existing SQL table where there is a matching Id.
Step 1: I am using LinqToCsv and a foreach loop to import the .csv fully into a temporary table.
Step 2: Then I have another foreach loop where I am trying to loop the rows from the temporary table into an existing table only where the Ids match.
Controller Action to complete this process:
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
    var inputFileDescription = new CsvFileDescription
    {
        SeparatorChar = ',',
        FirstLineHasColumnNames = true
    };
    var cc = new CsvContext();
    var filePath = uploadFile(csvFile.InputStream);
    var model = cc.Read<Credit>(filePath, inputFileDescription);
    try
    {
        var entity = new TestEntities();
        var tc = new TemporaryCsvUpload();
        foreach (var item in model)
        {
            tc.Id = item.Id;
            tc.CreditInvoiceAmount = item.CreditInvoiceAmount;
            tc.CreditInvoiceDate = item.CreditInvoiceDate;
            tc.CreditInvoiceNumber = item.CreditInvoiceNumber;
            tc.CreditDeniedDate = item.CreditDeniedDate;
            tc.CreditDeniedReasonId = item.CreditDeniedReasonId;
            tc.CreditDeniedNotes = item.CreditDeniedNotes;
            entity.TemporaryCsvUploads.Add(tc);
        }
        var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);
        foreach (var number in idMatches)
        {
            number.CreditInvoiceDate = tc.CreditInvoiceDate;
            number.CreditInvoiceNumber = tc.CreditInvoiceNumber;
            number.CreditInvoiceAmount = tc.CreditInvoiceAmount;
            number.CreditDeniedDate = tc.CreditDeniedDate;
            number.CreditDeniedReasonId = tc.CreditDeniedReasonId;
            number.CreditDeniedNotes = tc.CreditDeniedNotes;
        }
        entity.SaveChanges();
        entity.Database.ExecuteSqlCommand("TRUNCATE TABLE TemporaryCsvUpload");
        TempData["Success"] = "Updated Successfully";
    }
    catch (LINQtoCSVException)
    {
        TempData["Error"] = "Upload Error: Ensure you have the correct header fields and that the file is of .csv format.";
    }
    return View("Upload");
}
The issue in the above code is that tc is assigned inside the first loop, but the matches are looked up after the loop with var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);, so I am only getting the last item of the first loop.
If I nest the second loop then it is way too slow (I stopped it after 10 minutes) because there are roughly 1000 rows in the .csv and 7000 in the preexisting table.
Finding a better way to do this is plaguing me. Pretend that the temporary table didn't even come from a .csv and just think about the most efficient way to fill in rows in table 2 from table 1 where the id of that row matches. Thanks for your help!
As your code is written now, much of the work is being done by the application that could much more efficiently be done by SQL Server. You are making hundreds of unnecessary roundtrip calls to the database. When you are mass importing data you want a solution like this:
Bulk import the data. See this answer for helpful guidance on bulk import efficiency with EF.
Join and update destination table.
Processing the import should only require a single mass update query:
update PT set
CreditInvoiceDate = CSV.CreditInvoiceDate
,CreditInvoiceNumber = CSV.CreditInvoiceNumber
,CreditInvoiceAmount = CSV.CreditInvoiceAmount
,CreditDeniedDate = CSV.CreditDeniedDate
,CreditDeniedReasonId = CSV.CreditDeniedReasonId
,CreditDeniedNotes = CSV.CreditDeniedNotes
from PreexistingTable PT
join TemporaryCsvUploads CSV on PT.Id = CSV.Id
This query would replace your entire nested loop and apply the same update in a single database call. As long as your table is indexed properly this should run very fast.
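If you prefer to keep everything in the controller action, the same query can be fired from the existing context in a single call; a sketch reusing the entity variable from the question and the table names from the query above:
// One round trip: update every matching row from the staged CSV data.
entity.Database.ExecuteSqlCommand(@"
    update PT set
         CreditInvoiceDate     = CSV.CreditInvoiceDate
        ,CreditInvoiceNumber   = CSV.CreditInvoiceNumber
        ,CreditInvoiceAmount   = CSV.CreditInvoiceAmount
        ,CreditDeniedDate      = CSV.CreditDeniedDate
        ,CreditDeniedReasonId  = CSV.CreditDeniedReasonId
        ,CreditDeniedNotes     = CSV.CreditDeniedNotes
    from PreexistingTable PT
    join TemporaryCsvUploads CSV on PT.Id = CSV.Id");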
After saving the CSV records into a second table which has the same fields as your primary table, execute the following procedure in SQL Server:
create proc [dbo].[excel_updation]
as
set xact_abort on
begin transaction
-- First update records
update first_table
set [ExamDate] = source.[ExamDate],
[marks] = source.[marks],
[result] = source.[result],
[dob] = source.[dob],
[spdate] = source.[spdate],
[agentName] = source.[agentName],
[companycode] = source.[companycode],
[dp] = source.[dp],
[state] = source.[state],
[district] = source.[district],
[phone] = source.[phone],
[examcentre] = source.[examcentre],
[examtime] = source.[examtime],
[dateGiven] = source.[dateGiven],
[smName] = source.[smName],
[smNo] = source.[smNo],
[bmName] = source.[bmName],
[bmNo] = source.[bmNo]
from first_table
inner join second_table source
on first_table.[UserId] = source.[UserId]
-- And then insert
insert into first_table (exprdate, marks, result, dob, spdate, agentName, companycode, dp, state, district, phone, examcentre, examtime, dateGiven, smName, smNo, bmName, bmNo)
select [ExamDate], [marks], [result], [dob], [spdate], [agentName], [companycode], [dp], [state], [district], [phone], [examcentre], [examtime], [dateGiven], [smName], [smNo], [bmName], [bmNo]
from second_table source
where not exists
(
select *
from first_table
where first_table.[UserId] = source.[UserId]
)
commit transaction
delete from second_table
The only condition for this code is that both tables must have matching id data. For each id that matches in both tables, the data of that row will be updated in the first table.
As long as the probability of a match is high, you can simply attempt an update with every row from your CSV, with a condition that the id matches:
UPDATE table SET ... WHERE id = @id
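A rough sketch of that per-row approach with plain ADO.NET (connectionString and the updated columns are placeholders; the item fields come from the Credit model in the question):
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (var item in model) // the rows parsed from the CSV
    {
        using (var cmd = new SqlCommand(
            @"UPDATE PreexistingTable
              SET CreditInvoiceAmount = @amount, CreditInvoiceNumber = @number
              WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@amount", item.CreditInvoiceAmount);
            cmd.Parameters.AddWithValue("@number", item.CreditInvoiceNumber);
            cmd.Parameters.AddWithValue("@id", item.Id);
            cmd.ExecuteNonQuery(); // updates nothing when the Id has no match
        }
    }
}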

Linq to Sql Update on extracted list not working

I am having issues updating my Database using linq to sql.
I have a master query that retrieves all records in the database (16,000 records)
PostDataContext ctxPost = new PostDataContext();
int n = 0;
var d = (from c in ctxPost.PWC_Gs
where c.status == 1
select c);
I then take the first 1000 and pass them to another object after modification, using the following query:
var cr = d.Skip(n).Take(1000);
I loop through the records using a foreach loop:
foreach (var _d in cr)
{
    // Some stuffs here
    _d.status = 0;
}
I then call SubmitChanges:
ctxPost.SubmitChanges();
No record gets updated.
Thanks to you all. I was missing the primary key on the ID field in the dbml file; without a primary key, LINQ to SQL cannot track the entities, so SubmitChanges has nothing to update.

Efficient way to update a column from a table using LinqSql?

In SQL, for example, I would do:
UPDATE MyTable SET MyColum='value' WHERE Id=132
What would be the equivalent in an EF Code First database?
UPDATE:
Seeing the responses, I need to clarify my question. I am looking for an efficient way to update one column. If I use Single() or any similar function, the performance is very poor for two reasons: 1) there are two SQL statements, one for the SELECT and one for the UPDATE, and 2) the Single function retrieves all columns.
UPDATE MyTable SET MyColum='value' WHERE Id=132
The above statement is efficient because it is a single round trip and no values are sent from the server to the client. I would like to know what the equivalent statement would be in LINQ to SQL.
SingleOrDefault returns the object if it exists in the db, or null otherwise:
var row = context.MyTable.SingleOrDefault(x => x.id == 132);
if (row != null)
{
    row.MyColumn = "Value";
    context.SaveChanges();
}
I think it is not possible with one transaction. You first need to check whether the row you want to update is in your table or not:
using (MyEntities me = new MyEntities())
{
    if ((from t in me.MyTables where t.Id == 132 select t).Any())
    {
        MyTable mt = (from t in me.MyTables where t.Id == 132 select t).Single();
        mt.MyColumn = "Value";
        me.SaveChanges();
    }
}
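To get the single-statement behaviour the question asks for, the UPDATE can be sent as raw SQL so nothing is selected back. A sketch: ExecuteSqlCommand is the EF DbContext call, ExecuteCommand the LINQ to SQL equivalent; context and dataContext stand for your own context instances.
// EF (DbContext): one parameterised UPDATE, no prior SELECT.
context.Database.ExecuteSqlCommand(
    "UPDATE MyTable SET MyColum = {0} WHERE Id = {1}", "value", 132);
// LINQ to SQL (DataContext) equivalent:
// dataContext.ExecuteCommand(
//     "UPDATE MyTable SET MyColum = {0} WHERE Id = {1}", "value", 132);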
