How to know if every query executed successfully in a loop? - C#

Suppose I have executed an update query in a loop using PetaPoco, like:
foreach (var obj in mainObject) {
    db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
}
How do I know whether each of these queries has been executed successfully?

Execute returns the number of affected rows. So if you update one row you'd get 1 as the return value if it succeeded, otherwise 0 (or an error).
bool allSucceeded = true;
foreach (var obj in mainObject)
{
    int updated = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    bool succeeded = updated != 0;
    if (!succeeded)
        allSucceeded = false;
}
So Execute doesn't return 1 for success and 0 for failure; it returns the number of affected rows. If you execute DELETE FROM Table, for example, you delete all rows of that table and the return value is the number of rows it contained. Whether 0 means failure or 1 means success therefore depends on the query and your logic.
By the way, this behaviour is consistent with ADO.NET methods like SqlCommand.ExecuteNonQuery.
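Since Execute only reports affected-row counts, the success check boils down to interpreting integers. A small sketch with hypothetical helper names, treating "at least one row affected" as success (which fits the single-row update in the question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class UpdateResults
{
    // Every update is considered successful if it touched at least one row.
    // This "n >= 1 means success" rule is an assumption that fits updates
    // expected to match exactly one row; adjust it for your own queries.
    public static bool AllSucceeded(IEnumerable<int> affectedCounts)
        => affectedCounts.All(n => n >= 1);

    // 0-based indices of the updates that affected no rows, for diagnostics.
    public static List<int> FailedIndices(IEnumerable<int> affectedCounts)
        => affectedCounts.Select((n, i) => (n, i))
                         .Where(t => t.n == 0)
                         .Select(t => t.i)
                         .ToList();
}
```

You would collect the return values of db.Execute into a list inside the loop and pass that list to these helpers afterwards.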

PetaPoco's Execute returns 1 or greater if the query affected any rows, and 0 if it affected none.
In this scenario you can trace those values by collecting them in the loop, like:
List<int> checkSuccess = new List<int>(); // traces the value returned by each Execute call
foreach (var obj in mainObject) {
    int updated = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    checkSuccess.Add(updated);
}
if (checkSuccess.All(i => i >= 1))
{
    // every query updated at least one row
}

foreach (var obj in mainObject)
{
    var result = db.Execute("update Table set Column = @0 where C1 = @1 and C2 = @2", Column1, obj.data1, obj.data2);
    if (result < 1)
    {
        // not ok
    }
}

Related

C# ExecuteScalar() null COUNT vs SELECT

I noticed some odd behavior and hoped one of the experts could explain the difference. My UI requires that an image be unique before presenting it to the user for their task. I store checksums in the database and query those for unique values. I noticed that my logic 'flips' depending on whether I use a standard SELECT query vs SELECT COUNT. I've isolated it down to this line of code, but I don't understand why.
SELECT record FROM table WHERE checksum = something
//This code works correctly (true / false)
Object result = command.ExecuteScalar();
bool checksumExists = (result == null ? false : true);
//Returns TRUE no matter what
Object result = command.ExecuteScalar();
bool checksumExists = (result == DBNull.Value ? false : true);
I changed to the following SQL for performance against a large table and my logic 'flipped'
SELECT COUNT (record) FROM table WHERE checksum = something
//Now this code always returns TRUE
Object result = command.ExecuteScalar();
bool checksumExists = (result == null ? false : true);
//Now this is the solution
Object result = command.ExecuteScalar();
bool checksumExists = (Convert.ToInt32(result) < 1 ? false : true);
Does the COUNT statement mean that it will always return a number, even if no rows are found?
Does the COUNT statement mean that it will always return a number, even if no rows are found?
Yes. Zero is a number. And
SELECT COUNT(someCol) c FROM table WHERE 1=2
will always return a single row, single column resultset like:
c
-----------
0
(1 row affected)
COUNT is not the most efficient way to check whether any rows meet a criterion, because it keeps counting past the first match.
You can use EXISTS or TOP (1) to write a query that stops after finding a single row, e.g.
select someMatchesExist = case when exists(select * from table where ...) then 1 else 0 end
or
select top (1) 1 as someMatchesExist from table where ...
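On the C# side, the "flip" in the question comes entirely from what ExecuteScalar hands back in each case. A minimal sketch of that decision logic, with the three possible scalar shapes simulated as plain objects (the method names are hypothetical; no database is involved here):

```csharp
using System;

static class ScalarChecks
{
    // ExecuteScalar returns:
    //   null         -> the query produced no rows at all (plain SELECT with no match)
    //   DBNull.Value -> a row came back, but the selected column was SQL NULL
    //   boxed value  -> a real value; SELECT COUNT(...) always lands here (0, 1, 2, ...)
    public static bool ExistsFromPlainSelect(object scalar)
        => scalar != null && scalar != DBNull.Value;

    public static bool ExistsFromCount(object scalar)
        => Convert.ToInt32(scalar) >= 1;
}
```

This is why the null check works for the plain SELECT but always reports TRUE for COUNT: a COUNT query never returns null, it returns a boxed 0.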

Seeking a less costly solution for matching Ids of two tables

The application I am building allows a user to upload a .csv file containing multiple rows and columns of data. Each row contains a unique varchar Id. This will ultimately fill in fields of an existing SQL table where there is a matching Id.
Step 1: I am using LinqToCsv and a foreach loop to import the .csv fully into a temporary table.
Step 2: Then I have another foreach loop where I am trying to loop the rows from the temporary table into an existing table only where the Ids match.
Controller Action to complete this process:
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
var inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
var cc = new CsvContext();
var filePath = uploadFile(csvFile.InputStream);
var model = cc.Read<Credit>(filePath, inputFileDescription);
try
{
var entity = new TestEntities();
var tc = new TemporaryCsvUpload();
foreach (var item in model)
{
tc.Id = item.Id;
tc.CreditInvoiceAmount = item.CreditInvoiceAmount;
tc.CreditInvoiceDate = item.CreditInvoiceDate;
tc.CreditInvoiceNumber = item.CreditInvoiceNumber;
tc.CreditDeniedDate = item.CreditDeniedDate;
tc.CreditDeniedReasonId = item.CreditDeniedReasonId;
tc.CreditDeniedNotes = item.CreditDeniedNotes;
entity.TemporaryCsvUploads.Add(tc);
}
var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);
foreach (var number in idMatches)
{
number.CreditInvoiceDate = tc.CreditInvoiceDate;
number.CreditInvoiceNumber = tc.CreditInvoiceNumber;
number.CreditInvoiceAmount = tc.CreditInvoiceAmount;
number.CreditDeniedDate = tc.CreditDeniedDate;
number.CreditDeniedReasonId = tc.CreditDeniedReasonId;
number.CreditDeniedNotes = tc.CreditDeniedNotes;
}
entity.SaveChanges();
entity.Database.ExecuteSqlCommand("TRUNCATE TABLE TemporaryCsvUpload");
TempData["Success"] = "Updated Successfully";
}
catch (LINQtoCSVException)
{
TempData["Error"] = "Upload Error: Ensure you have the correct header fields and that the file is of .csv format.";
}
return View("Upload");
}
The issue in the above code is that tc is assigned inside the first loop, but the matches are looked up after the loop with var idMatches = entity.PreexistingTable.Where(x => x.Id == tc.Id);, so I only ever match the last item from the first loop.
If I nest the second loop, it is way too slow (I stopped it after 10 minutes), because there are roughly 1000 rows in the .csv and 7000 in the preexisting table.
Finding a better way to do this is plaguing me. Pretend that the temporary table didn't even come from a .csv and just think about the most efficient way to fill in rows in table 2 from table 1 where the id of that row matches. Thanks for your help!
As your code is written now, much of the work is being done by the application that could much more efficiently be done by SQL Server. You are making hundreds of unnecessary roundtrip calls to the database. When you are mass importing data you want a solution like this:
Bulk import the data. See this answer for helpful guidance on bulk import efficiency with EF.
Join and update destination table.
Processing the import should only require a single mass update query:
update PT set
CreditInvoiceDate = CSV.CreditInvoiceDate
,CreditInvoiceNumber = CSV.CreditInvoiceNumber
,CreditInvoiceAmount = CSV.CreditInvoiceAmount
,CreditDeniedDate = CSV.CreditDeniedDate
,CreditDeniedReasonId = CSV.CreditDeniedReasonId
,CreditDeniedNotes = CSV.CreditDeniedNotes
from PreexistingTable PT
join TemporaryCsvUploads CSV on PT.Id = CSV.Id
This query would replace your entire nested loop and apply the same update in a single database call. As long as your table is indexed properly this should run very fast.
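As a rough in-memory model of what that single UPDATE ... JOIN does (the class, property, and method names below are simplified stand-ins for the question's entities, not the EF call itself), every destination row whose Id appears in the uploaded set is overwritten in one pass:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, trimmed-down row type: one key and one payload column.
class CreditRow
{
    public string Id = "";
    public decimal CreditInvoiceAmount;
}

static class JoinUpdate
{
    // Overwrites each destination row that has a matching Id in the CSV rows,
    // mirroring the semantics of the single UPDATE ... JOIN statement.
    public static int Apply(List<CreditRow> destination, List<CreditRow> csv)
    {
        var byId = csv.ToDictionary(r => r.Id);   // hash lookup on the join key
        int updated = 0;
        foreach (var row in destination)
        {
            if (byId.TryGetValue(row.Id, out var src))
            {
                row.CreditInvoiceAmount = src.CreditInvoiceAmount;
                updated++;
            }
        }
        return updated;
    }
}
```

The dictionary makes the match O(1) per destination row, which is the same reason the SQL join is fast when Id is indexed.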
After saving the CSV records into a second table that has the same fields as your primary table, execute the following procedure in SQL Server:
create proc [dbo].[excel_updation]
as
set xact_abort on
begin transaction
-- First update records
update first_table
set [ExamDate] = source.[ExamDate],
[marks] = source.[marks],
[result] = source.[result],
[dob] = source.[dob],
[spdate] = source.[spdate],
[agentName] = source.[agentName],
[companycode] = source.[companycode],
[dp] = source.[dp],
[state] = source.[state],
[district] = source.[district],
[phone] = source.[phone],
[examcentre] = source.[examcentre],
[examtime] = source.[examtime],
[dateGiven] = source.[dateGiven],
[smName] = source.[smName],
[smNo] = source.[smNo],
[bmName] = source.[bmName],
[bmNo] = source.[bmNo]
from first_table
inner join second_table source
on first_table.[UserId] = source.[UserId]
-- And then insert
insert into first_table ([UserId], [ExamDate], [marks], [result], [dob], [spdate], [agentName], [companycode], [dp], [state], [district], [phone], [examcentre], [examtime], [dateGiven], [smName], [smNo], [bmName], [bmNo])
select [UserId], [ExamDate], [marks], [result], [dob], [spdate], [agentName], [companycode], [dp], [state], [district], [phone], [examcentre], [examtime], [dateGiven], [smName], [smNo], [bmName], [bmNo]
from second_table source
where not exists
(
select *
from first_table
where first_table.[UserId] = source.[UserId]
)
commit transaction
delete from second_table
The only requirement is that both tables share the same matching id column. For every id that matches in both tables, that row's data is updated in the first table; rows whose id exists only in the second table are inserted.
As long as the probability of a match is high, you can simply attempt an update for every row from your CSV, with a condition that the id matches:
UPDATE table SET ... WHERE id = @id

Linq to Sql Update on extracted list not working

I am having issues updating my Database using linq to sql.
I have a master query that retrieves all records in the database (16,000 records)
PostDataContext ctxPost = new PostDataContext();
int n = 0;
var d = (from c in ctxPost.PWC_Gs
where c.status == 1
select c);
I then take the first 1000 and pass them to another object after modification using the following query:
var cr = d.Skip(n).Take(1000);
I loop through the records using a foreach loop:
foreach (var _d in cr)
{
// Some stuffs here
_d.status = 0;
}
I then call SubmitChanges:
ctxPost.SubmitChanges();
No record gets updated.
Thanks to you all. I was missing the primary key on the ID field in the dbml file.

Checking for duplicates and removing row from a DataTable but appending column value

I have a slightly tricky question. I have a DataTable with thousands of rows and two columns. Using one of the columns as the key, I need to check for duplicates. If there is a duplicate, I need to add the value in the other column into one row and remove the duplicate row. I am able to find the duplicates and add the values, but when I remove a row it affects the rest, because the indexes change. I am also not sure whether I am doing this efficiently. Please advise.
Cate_Id TrxnCount
---------- ----------
ER01 0
ER02 0
ER41 0
ER53 1
ER53 2
ER56 0
ER56 0
ER56 0
ER57 8
ER57 9
After removing the duplicates and adding the values:
Cate_Id TrxnCount
---------- ----------
ER01 0
ER02 0
ER41 0
ER53 3
ER56 0
ER57 17
How can I achieve this in the easiest and most efficient manner? Please advise.
Here is what I have done:
List<DataRow> rowsToDelete = new List<DataRow>();
int newValue = 0;
for (int i = 0; i < dt.Rows.Count; i++)
{
if (i > 0)
{
// Compare with previous row using index
if (dt.Rows[i]["Cate_Id"].ToString() == dt.Rows[i - 1]["Cate_Id"].ToString())
{
newValue = Convert.ToInt32(dt.Rows[i]["TrxnCount"].ToString()) + Convert.ToInt32(dt.Rows[i - 1]["TrxnCount"].ToString());
dt.Rows[i]["TrxnCount"] = newValue;
rowsToDelete.Add(dt.Rows[i - 1]);
newValue = 0;
}
}
if (i < dt.Rows.Count - 1)
{
if (dt.Rows[i]["Cate_Id"].ToString() == dt.Rows[i + 1]["Cate_Id"].ToString())
{
newValue = Convert.ToInt32(dt.Rows[i]["TrxnCount"].ToString()) + Convert.ToInt32(dt.Rows[i + 1]["TrxnCount"].ToString());
dt.Rows[i]["TrxnCount"] = newValue;
rowsToDelete.Add(dt.Rows[i + 1]);
newValue = 0;
}
}
}
foreach (var r in rowsToDelete)
dt.Rows.Remove(r);
To do this in the most efficient way, you need to do it on the DB side.
You can write a stored procedure that does this:
select the sum and count of TrxnCount grouped by Cate_Id, having count > 1
store the result in a temp table of duplicates
delete from the original table all the records whose Cate_Id exists in the temp table of duplicates
insert all the values from the temp table into the original table
That's much more efficient than doing it on the client side.
BTW, this has a terrible design flaw: the table doesn't have a PK, which is a very bad idea.
The code for the stored procedure (assuming the original table name is T):
BEGIN TRAN
SELECT Cate_Id, SUM(T.TrxnCount) AS TrxnCount
INTO #dups
FROM T GROUP BY T.Cate_Id HAVING COUNT(*) > 1
DELETE FROM T WHERE Cate_Id IN (SELECT Cate_Id FROM #dups)
INSERT INTO T SELECT * FROM #Dups
COMMIT TRAN
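If the cleanup ever has to stay on the client side, a single GroupBy pass into a fresh DataTable sidesteps the shifting-index problem from the question entirely. A sketch, assuming Cate_Id is a string column and TrxnCount an int column as in the sample data:

```csharp
using System;
using System.Data;
using System.Linq;

static class DataTableDedup
{
    // Collapses duplicate Cate_Id rows into one row whose TrxnCount is the sum.
    // Building a fresh table means no rows are ever removed in place, so there
    // is no index bookkeeping to get wrong.
    public static DataTable Collapse(DataTable dt)
    {
        DataTable result = dt.Clone();                 // same schema, no rows
        var groups = dt.AsEnumerable()
                       .GroupBy(r => r.Field<string>("Cate_Id"));
        foreach (var g in groups)
        {
            DataRow row = result.NewRow();
            row["Cate_Id"] = g.Key;
            row["TrxnCount"] = g.Sum(r => Convert.ToInt32(r["TrxnCount"]));
            result.Rows.Add(row);
        }
        return result;
    }
}
```

This still iterates the whole table on the client, so for large data the stored-procedure approach above remains the better choice.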

DataTable find or if not found insert row

I have a DataTable dt with 2 columns. The first column (call it CustomerId) is unique and doesn't allow nulls; the second one allows nulls and is not unique.
From a method I get a CustomerId and then I would like to either insert a new record if this CustomerId doesn't exist or increment by 1 what's in the second column corresponding to that CustomerId if it exists.
I'm not sure how I should approach this. I wrote a select statement (which returns an array of System.Data.DataRow) but I don't know how to test whether it returned any rows.
Currently I have:
//I want to insert a new row
if (dt.Select("CustomerId ='" + customerId + "'") == null) //Always true :|
{
DataRow dr = dt.NewRow();
dr["CustomerId"] = customerId;
}
If the DataTable is being populated from a database, I would recommend making CustomerId an identity column. That way, when you add a new row it will automatically get a new CustomerId that is unique and 1 greater than the previous id (depending on how you set up your identity column).
I would check the number of rows returned from the select statement. I would also use string.Format, so it would look like this:
var selectStatement = string.Format("CustomerId = '{0}'", customerId);
var rows = dt.Select(selectStatement);
if (rows.Length < 1)
{
    var dr = dt.NewRow();
    dr["CustomerId"] = customerId;
    dt.Rows.Add(dr);
}
This is my method to solve similar problem. You can modify it to fit your needs.
public static bool ImportRowIfNotExists(DataTable dataTable, DataRow dataRow, string keyColumnName)
{
string selectStatement = string.Format("{0} = '{1}'", keyColumnName, dataRow[keyColumnName]);
DataRow[] rows = dataTable.Select(selectStatement);
if (rows.Length == 0)
{
dataTable.ImportRow(dataRow);
return true;
}
else
{
return false;
}
}
The Select method returns an array of DataRow objects; just check whether its length is zero (it is never null).
By the way, don't build filter strings by concatenating user input directly, as in this example. There's a technique for breaching your code's security called "SQL injection"; I encourage you to read the Wikipedia article. In brief, a malicious user could supply input that gets executed by your database and potentially does harmful things if you're taking customerId from the user as a string. I'm not experienced in database programming; this is just general knowledge.
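Putting the pieces together, a find-or-increment helper for the question's scenario might look like this. The counter column name "HitCount" is a hypothetical stand-in for the question's unnamed second column, and note the explicit Rows.Add, which the snippet in the question was missing:

```csharp
using System;
using System.Data;

static class CustomerCounter
{
    // Inserts a row for customerId with HitCount = 1, or increments HitCount
    // if the id already exists. "HitCount" is a hypothetical column name.
    public static void Touch(DataTable dt, string customerId)
    {
        // Double up single quotes so the filter expression survives any input.
        string safeId = customerId.Replace("'", "''");
        DataRow[] rows = dt.Select("CustomerId = '" + safeId + "'");
        if (rows.Length == 0)
        {
            DataRow dr = dt.NewRow();
            dr["CustomerId"] = customerId;
            dr["HitCount"] = 1;
            dt.Rows.Add(dr);              // NewRow alone never adds the row
        }
        else
        {
            rows[0]["HitCount"] = (int)rows[0]["HitCount"] + 1;
        }
    }
}
```

The quote-doubling only hardens the DataTable filter expression; for real SQL sent to a database you would use parameterized commands instead.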
