I am trying to do a "running window"-style display for labels. I tried to find a similar solution on Google, but it got me nowhere.
EXAMPLE: five numbers need to be displayed at different counter values. This code runs from my timer (started with timer_Start()), so the counter increases every 5 seconds; the interval is set on my main form.
Display: 21 23 24 25 26
If I insert another value, e.g. 23, the last 5 numbers should be displayed.
Display: 23 21 23 24 25
However, with my code below, when I insert another value, all 5 of them change. If I change it to if (counter == 2), it does not get updated when counter == 3.
int counter = 0;

sql_cmd = sql_conn.CreateCommand();
sql_cmd.CommandText = "SELECT * FROM temp where id=12";

try
{
    sql_conn.Open();
    sql_reader = sql_cmd.ExecuteReader();

    while (sql_reader.Read()) // start retrieve
    {
        if (counter >= 1)
        {
            this.avg1.Text = sql_reader["Temp1"].ToString();
        }
    }
    sql_conn.Close();
}
catch (Exception e)
{
    MessageBox.Show(e.Message);
}

if (counter >= 2)
{
    avg2.Text = avg1.Text;
}
if (counter >= 3)
{
    avg3.Text = avg2.Text;
}
if (counter >= 4)
{
    avg4.Text = avg3.Text;
}
if (counter >= 5)
{
    avg5.Text = avg4.Text;
    counter = 0;
}
Any help is much appreciated. Thanks.
Your problem is with your series of if statements. Simple debugging would allow you to see this, so I would suggest stepping through your code before coming here next time. With that said, your if statements can be refactored out into a simple method for you to use:
private void UpdateLabels(string newValue)
{
    avg5.Text = avg4.Text;
    avg4.Text = avg3.Text;
    avg3.Text = avg2.Text;
    avg2.Text = avg1.Text;
    avg1.Text = newValue;
}
What is important here is that you update in the correct order. Your original if statements were not in the correct order, which is why you were having issues. If you want to see why this works, walk through both sets of code in a debugger and watch how the Label.Text properties change; the traced example below shows the difference.
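To make the ordering issue concrete, here is a quick trace (my own illustration, not code from the original post), assuming the labels currently show 21 23 24 25 26 and a new value of 23 arrives:

// Wrong order (newest label written first, as in the original question):
//   avg1.Text = "23";       // labels: 23 23 24 25 26  (21 is lost immediately)
//   avg2.Text = avg1.Text;  // labels: 23 23 24 25 26
//   avg3.Text = avg2.Text;  // labels: 23 23 23 25 26
//   avg4.Text = avg3.Text;  // labels: 23 23 23 23 26
//   avg5.Text = avg4.Text;  // labels: 23 23 23 23 23  <- every label ends up with the new value

// Correct order (oldest label written first, as in UpdateLabels above):
//   avg5.Text = avg4.Text;  // labels: 21 23 24 25 25
//   avg4.Text = avg3.Text;  // labels: 21 23 24 24 25
//   avg3.Text = avg2.Text;  // labels: 21 23 23 24 25
//   avg2.Text = avg1.Text;  // labels: 21 21 23 24 25
//   avg1.Text = "23";       // labels: 23 21 23 24 25  <- the expected sliding window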
Now you can call this new method after you get your new value from the database... Here we can update your timer code to be slightly better.
sql_cmd = sql_conn.CreateCommand();
sql_cmd.CommandText = "SELECT * FROM temp where id=12";

string newValue = String.Empty;

try {
    sql_conn.Open();
    sql_reader = sql_cmd.ExecuteReader();
    while (sql_reader.Read()) {
        newValue = sql_reader["Temp1"].ToString(); // store in local variable
    }
} catch (Exception e) {
    MessageBox.Show(e.Message);
} finally {
    sql_conn.Close(); // SqlConnection.Close should be in a finally block
}

UpdateLabels(newValue);
First, there is no need for a counter any more (based on the original code you posted, it was never needed). Since Label.Text can accept blank strings, you can always copy the values, regardless of whether it is the first update or the one millionth.
Second, you can store your database value in a temporary variable. This lets you update the labels even if there is a database error. After all the database operations are finished, you call UpdateLabels with your new value and you are all set.
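If you ever need more than five labels, a more general approach is to keep the recent readings in a queue and refresh every label from it in one pass. This is only a hedged sketch of that idea, not part of the original answer; the labels array below is assumed to reference your real avg1..avg5 controls.

private readonly Queue<string> lastReadings = new Queue<string>();

private void UpdateLabels(string newValue)
{
    lastReadings.Enqueue(newValue);
    while (lastReadings.Count > 5)                        // keep only the five most recent values
        lastReadings.Dequeue();

    Label[] labels = { avg1, avg2, avg3, avg4, avg5 };    // avg1 shows the newest value
    string[] values = lastReadings.ToArray();             // ordered oldest -> newest

    for (int i = 0; i < labels.Length; i++)
    {
        int fromEnd = values.Length - 1 - i;              // index of the i-th newest value, if it exists
        labels[i].Text = fromEnd >= 0 ? values[fromEnd] : string.Empty;
    }
}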
I created 2 GeoCoordinate objects, LocA and LocB. I am continually updating the location info in LocA and LocB by using a timer and storing the values in a tuple list. In my code, LocA is the last added point and LocB is the second to last added point.
But I always face an exception during runtime.
How can I prevent this?
Here is my code;
public partial class Form1 : Form
{
    List<Tuple<double, double>> myTuple2 = new List<Tuple<double, double>>();
    GeoCoordinate LocA, LocB;

    private void timer1_Tick(object sender, EventArgs e)
    {
        // gPathBoylam = longitude info coming from the GPS
        // gPathEnlem  = latitude info coming from the GPS
        myTuple2.Add(new Tuple<double, double>(Convert.ToSingle(gPathBoylam), Convert.ToSingle(gPathEnlem)));

        if (myTuple2 != null && myTuple2.Any())
        {
            for (int i = 1; i < myTuple2.Count; i++)
            {
                LocA = new GeoCoordinate(myTuple2[i].Item1, myTuple2[i].Item2);
                LocB = new GeoCoordinate(myTuple2[i - 1].Item1, myTuple2[i - 1].Item2);
            }
        }
    }
}
The problem you are experiencing is twofold:
The for loop is completely unnecessary because you keep overwriting the same variables. After the loop has run, you will only have the result of the last iteration; your code does not care about the outcome of the earlier iterations (it blindly overwrites them), so you should avoid performing them.
The exception you are seeing is not related to your code, but rather the data that you have received from the GPS.
Let me elaborate on both:
1 - Remove the for loop
Let's say you have 4 items in your list. You then run the for loop:
for (int i = 1; i < myTuple2.Count; i++)
{
    LocA = new GeoCoordinate(myTuple2[i].Item1, myTuple2[i].Item2);
    LocB = new GeoCoordinate(myTuple2[i - 1].Item1, myTuple2[i - 1].Item2);
}
Let me work out the iteration, number by number, so you see where you are going wrong.
// i = 1
LocA = new GeoCoordinate(myTuple2[1].Item1, myTuple2[1].Item2);
LocB = new GeoCoordinate(myTuple2[0].Item1, myTuple2[0].Item2);

// i = 2
LocA = new GeoCoordinate(myTuple2[2].Item1, myTuple2[2].Item2);
LocB = new GeoCoordinate(myTuple2[1].Item1, myTuple2[1].Item2);

// i = 3
LocA = new GeoCoordinate(myTuple2[3].Item1, myTuple2[3].Item2);
LocB = new GeoCoordinate(myTuple2[2].Item1, myTuple2[2].Item2);

// the loop stops here: when i = 4, it violates the "i < myTuple2.Count" condition
There is no point to the first two times where you set LocA and LocB. You overwrite the values immediately. So instead, just do the last iteration manually:
int lastItemIndex = myTuple2.Count - 1;
int secondToLastItemIndex = lastItemIndex - 1;

LocA = new GeoCoordinate(myTuple2[lastItemIndex].Item1, myTuple2[lastItemIndex].Item2);
LocB = new GeoCoordinate(myTuple2[secondToLastItemIndex].Item1, myTuple2[secondToLastItemIndex].Item2);
Note: You will want to add a check to prevent errors when there are fewer than two items in the list. I kept my example simple to address the core issue; a minimal sketch of such a check follows.
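For completeness, here is a minimal sketch of that guard (my own addition, not part of the original answer):

// Only compute LocA/LocB once there are at least two points to compare.
if (myTuple2.Count >= 2)
{
    int lastItemIndex = myTuple2.Count - 1;
    int secondToLastItemIndex = lastItemIndex - 1;

    LocA = new GeoCoordinate(myTuple2[lastItemIndex].Item1, myTuple2[lastItemIndex].Item2);
    LocB = new GeoCoordinate(myTuple2[secondToLastItemIndex].Item1, myTuple2[secondToLastItemIndex].Item2);
}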
2 - The exception you see
The exception you see tells you the following:
System.ArgumentOutOfRangeException
The value of the parameter must be from -180.0 to 180.0
This is not an exception coming from the .NET runtime itself; it is thrown intentionally by the GeoCoordinate constructor, and its message reveals the problem:
Longitude only has meaningful values between -180 and +180 (and latitude between -90 and +90). Any number outside that range is "incorrect", since a full circle only spans 360°.
LocA = new GeoCoordinate(-15, 75); //correct
LocA = new GeoCoordinate(0, 180); //correct
LocA = new GeoCoordinate(525, 12545); //INCORRECT (as per the exception message)
My GUESSES as to the cause of this issue:
1 There is a culture difference in the decimal separator: , (comma) vs . (dot). If this is the case, a value like 175,5 might be parsed as 1755. You need to make sure that this is not an issue.
Stop reading here and check this first. If this is the case, nothing else can help you except fixing the culture issue; a short parsing sketch follows.
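A hedged sketch of culture-safe parsing (my addition; it assumes the GPS hands you the coordinates as strings, and the literal values here are made up for illustration):

using System.Globalization;

// If the device always uses a dot as the decimal separator:
double longitude = double.Parse("175.5", CultureInfo.InvariantCulture);

// If the device uses a comma as the decimal separator (e.g. a Turkish locale):
double longitude2 = double.Parse("175,5", new CultureInfo("tr-TR"));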
2 There could be something seriously wrong with your data. Maybe you are getting nonsensical numbers. I highly suggest using breakpoints to see which values you are getting. You might be getting garbage numbers that have no meaning.
If this is the case, you will have to read the GPS documentation.
It's not your fault that the data you are given is incorrect. However, it is your duty to make sure that an unexpected value does not crash the system; handle it internally instead (or give feedback to the user in a friendly way).
3 The GPS is returning values between 0 and 360, but your library wants values between -180 and 180. (This is the most likely cause. Technically speaking, your GPS could return values from any arbitrary range that spans 360°, but it would be odd to use anything other than 0 to 360 or -180 to 180.)
It's possible to calculate the correct number based on an incorrect value by adding or subtracting 360° until you get in the correct range. E.g. a value of 359° can be converted to a value of -1°.
So the question becomes: how do I make sure that my value is between -180 and +180? Here's a simple method that will ensure your value is within the proper range:
public static double NormalizeAngle(double theAngle)
{
    if (theAngle <= -180)
    {
        // we keep adding 360° until we reach an acceptable value
        while (!IsValidAngle(theAngle))
        {
            theAngle += 360;
        }
        return theAngle;
    }

    if (180 <= theAngle)
    {
        // we keep subtracting 360° until we reach an acceptable value
        while (!IsValidAngle(theAngle))
        {
            theAngle -= 360;
        }
        return theAngle;
    }

    // if neither if block was entered, the angle was already valid!
    return theAngle;
}

private static bool IsValidAngle(double theAngle)
{
    return (-180 <= theAngle) && (theAngle <= 180);
}
If you add this (somewhere in your code base), you can then do the following:
LocA = new GeoCoordinate(
    NormalizeAngle(myTuple2[lastItemIndex].Item1),
    NormalizeAngle(myTuple2[lastItemIndex].Item2)
);
LocB is the same of course.
I have a program that is almost done. The issue is that when I write out the "new" CSV file, everything is correct except the very first column in Excel: the information is printed twice, but only in that first column. I have looked throughout my code and am unable to see where I would be printing it out twice or reading the token twice.
The purpose of the program is simply to reorganize the columns and format them in the desired manner. The token I am accessing is at position inputBuffer[23], and I set it equal to outputBuffer[0], and I only do this once. But when I run the program and check the file, the first column of the first record should hold the value 841; instead it comes up as 841841, and I have no clue how. All of the other columns are perfectly fine.
Can anyone spot what's wrong?
My Method
/*
 * This method uses the fields (array elements) in the output
 * buffer to assemble a CSV record (string variable). The
 * CSV record is then written to the output file.
 */
public static void BuildRecordAndWriteOutput()
{
    string record = outputBuffer[0];
    for (int i = 0; i < outputBuffer.Length; i++)
    {
        if (outputBuffer[i].Contains(","))
        {
            string x = "\"" + outputBuffer[i] + "\"";
            record += x;
        }
        else
        {
            record += outputBuffer[i];
        }
        if (i < outputBuffer.Length - 1)
        {
            record += ",";
        }
    }

    /*for (int i = 1; i < outputBuffer.Length; i++)
    {
        record = record + "," + outputBuffer[i];
    }*/

    output.WriteLine(record);
}
When I call the method
static void Main(string[] args)
{
    input.SetDelimiters(",");

    /*
     * This loop reads input data and calls methods to
     * build an output record and write data to a CSV file.
     */
    while (!input.EndOfData)
    {
        inputBuffer = input.ReadFields();  // Read a CSV record into the inputBuffer.
        SetOutputBufferDefaultValues();    // Put default values in the output buffer.
        MapInputFieldsToOutputFields();    // Move fields from the input buffer to the output buffer.
        BuildRecordAndWriteOutput();       // Build a record from the output buffer and write it.
    }

    Console.WriteLine("done");
    input.Close();
    output.Close();
    Console.Read();
}
Here is a screenshot in case my explanation was not clear
There is more to the code that I have not posted, as of now, but I can add it if it will help.
Thanks!
In your BuildRecordAndWriteOutput, you initialize record with the first field:
string record = outputBuffer[0];
then start your loop at 0, appending outputBuffer[0] to record again:
for (int i = 0; i < outputBuffer.Length; i++)
{
    record += ....
}
That's what's causing your first column to have the data duplicated.
You can fix this by simply initializing your record to an empty string before the loop:
string record = "";
This is a very weird situation, first the code...
The code
private List<DispatchInvoiceCTNDataModel> WorksheetToDataTableForInvoiceCTN(ExcelWorksheet excelWorksheet, int month, int year)
{
    int totalRows = excelWorksheet.Dimension.End.Row;
    int totalCols = excelWorksheet.Dimension.End.Column;

    DataTable dt = new DataTable(excelWorksheet.Name);

    // for (int i = 1; i <= totalRows; i++)
    Parallel.For(1, totalRows + 1, (i) =>
    {
        DataRow dr = null;
        if (i > 1)
        {
            dr = dt.Rows.Add();
        }

        for (int j = 1; j <= totalCols; j++)
        {
            if (i == 1)
            {
                var colName = excelWorksheet.Cells[i, j].Value.ToString().Replace(" ", String.Empty);
                lock (lockObject)
                {
                    if (!dt.Columns.Contains(colName))
                        dt.Columns.Add(colName);
                }
            }
            else
            {
                dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null;
            }
        }
    });

    var excelDataModel = dt.ToList<DispatchInvoiceCTNDataModel>();

    // now we have mapped everything except for the IDs
    excelDataModel = MapInvoiceCTNIDs(excelDataModel, month, year, excelWorksheet);

    return excelDataModel;
}
The problem
When I run the code, on random occasions it throws an IndexOutOfRangeException on the line
dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null;
for some random values of i and j. When I step over the code (F10), since it is running in a parallel loop, some other thread kicks in and another exception is thrown. That other exception (I could not reproduce it, it only came up once, but I think it is also related to this threading issue) was something like Column 31 not found in excelWorksheet. I don't understand how either of these exceptions could occur.
case 1
The IndexOutOfRangeException should not occur at all: the only shared variable, dt, is accessed inside a lock, and everything else is either a local or a parameter, so there should not be any thread-related issue. Also, if I check the value of i or j in the debug window, or even evaluate the whole expression dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null; (or a part of it) there, it works just fine, with no errors of any sort.
case 2
For the second error (which unfortunately is not reproducing now, but still), it should not have occurred, as there are 33 columns in the Excel sheet.
More Code
In case someone needs to see how this method is called:
using (var xlPackage = new ExcelPackage(viewModel.postedFile.InputStream))
{
    ExcelWorksheets worksheets = xlPackage.Workbook.Worksheets;
    // other stuff
    var entities = this.WorksheetToDataTableForInvoiceCTN(worksheets[1], viewModel.Month, viewModel.Year);
    // other stuff
}
Other
If someone needs more code/details let me know.
Update
Okay, to answer some comments: it works fine when using a plain for loop; I have tested that many times. Also, there is no particular value of i or j for which the exception is thrown. Sometimes it is 8, 6; at other times it could be anything, say 19, 2. Also, the +1 in the Parallel.For is not doing any damage, as the MSDN documentation says the upper bound is exclusive, not inclusive. And if that were the issue I would only be getting the exception at the last index (the last value of i), but that's not the case.
UPDATE 2
The given answer suggested putting a lock around the line
dr = dt.Rows.Add();
I have changed it to
lock (lockObject)
{
    dr = dt.Rows.Add();
}
It is not working. Now I am getting an ArgumentOutOfRangeException; still, if I run the same code in the debug window, it works just fine.
Update 3
Here is the full exception detail, after update 2 (I am getting this on the line that I mentioned in update 2)
System.ArgumentOutOfRangeException was unhandled by user code
HResult=-2146233086
Message=Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
Source=mscorlib
ParamName=index
StackTrace:
at System.ThrowHelper.ThrowArgumentOutOfRangeException()
at System.Collections.Generic.List`1.get_Item(Int32 index)
at System.Data.RecordManager.NewRecordBase()
at System.Data.DataTable.NewRecordFromArray(Object[] value)
at System.Data.DataRowCollection.Add(Object[] values)
at AdminEntity.BAL.Service.ExcelImportServices.<>c__DisplayClass2e.<WorksheetToDataTableForInvoiceCTN>b__2d(Int32 i) in C:\Projects\Manager\Admin\AdminEntity\AdminEntity.BAL\Service\ExcelImportServices.cs:line 578
at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c()
InnerException:
Okay. So there are a few problems with your existing code, most of which have been touched on by others:
Parallel threads are at the mercy of the OS scheduler; therefore, although threads are queued in-order, they may (and often do) complete execution out-of-order. For example, given Parallel.For(0, 10, (i) => { Console.WriteLine(i); });, the first four threads (on a quad-core system) will be queued with i values 0-3. But any one of those threads may start or finish executing before any other. So you may see 2 printed first, whereupon thread 4 will be queued. Then thread 1 might complete, and thread 5 will be queued. Then thread 4 might complete, even before threads 0 or 3 do. Etc., etc. TL;DR: You CANNOT assume an ordered output in parallel.
Given that, as @ScottChamberlain noted, it's a very bad idea to do column generation within your parallel loop - because you have no guarantee that the thread doing column generation will create all your columns before another thread starts assigning data in rows to those column indices. E.g. you could be assigning data to cell [0,4] before your table actually has a fifth column.
It's worth noting that this should really be broken out of the loop anyway, purely from a code cleanliness perspective. At the moment, you have two nested loops, each with special behavior on a single iteration; better to separate that setup logic into its own loop and leave the main loop to assign data and nothing else.
For the same reason, you should not be creating new rows in the table within your parallel loop - because you have no guarantee that the rows will be added to the table in their source order. Break that out too, and access rows within the loop based on their index.
Some have mentioned using DataRow.NewRow() before Rows.Add(). Technically, NewRow() is the right way to go about things, but the actual recommended access pattern is a bit different than is probably appropriate for a cell-by-cell function, particularly when parallelism is intended (see MSDN: DataTable.NewRow Method). The fact remains that adding a new, blank row to a DataTable with Rows.Add() and populating it afterwards functions properly.
You can clean up your string formatting with the null-coalescing operator ??, which returns the left-hand value unless it is null, in which case it returns the right-hand value. For example, foo = bar ?? "" is the equivalent of if (bar == null) { foo = ""; } else { foo = bar; }.
So right off the bat, your code should look more like this:
private void ReadIntoTable(ExcelWorksheet sheet)
{
    DataTable dt = new DataTable(sheet.Name);
    int height = sheet.Dimension.Rows;
    int width = sheet.Dimension.Columns;

    for (int j = 1; j <= width; j++)
    {
        string colText = (sheet.Cells[1, j].Value ?? "").ToString();
        dt.Columns.Add(colText);
    }

    for (int i = 2; i <= height; i++)
    {
        dt.Rows.Add();
    }

    Parallel.For(1, height, (i) =>
    {
        var row = dt.Rows[i - 1];
        for (int j = 0; j < width; j++)
        {
            string str = (sheet.Cells[i + 1, j + 1].Value ?? "").ToString();
            row[j] = str;
        }
    });

    // convert to your special Excel data model
    // ...
}
Much better!
...but it still doesn't work!
Yep, it still fails with an IndexOutOfRange exception. However, since we took your original line dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null; and split it into a couple pieces, we can see exactly which part it fails on. And it fails on row[j] = str;, where we actually write the text into the row.
Uh-oh.
MSDN: DataRow Class
Thread Safety
This type is safe for multithreaded read operations. You must synchronize any write operations.
*sigh*. Yeah. Who knows why DataRow uses static anything when assigning values, but there you have it; writing to DataRow isn't thread-safe. And sure enough, doing this...
private static object s_lockObject = "";

private void ReadIntoTable(ExcelWorksheet sheet)
{
    // ...
    lock (s_lockObject)
    {
        row[j] = str;
    }
    // ...
}
...magically makes it work. Granted, it completely destroys the parallelism, but it works.
Well, it almost completely destroys the parallelism. Anecdotal experimentation on an Excel file with 18 columns and 46319 rows shows that the Parallel.For() loop creates its DataTable in about 3.2s on average, whereas replacing Parallel.For() with for (int i = 1; i < height; i++) takes about 3.5s. My guess is that, since the lock is only there for writing data, there is a very small benefit realized by writing data on one thread and processing text on the other(s).
Of course, if you can create your own DataTable replacement class, you can see a much larger speed boost. For example:
string[,] rows = new string[height, width];

Parallel.For(1, height, (i) =>
{
    for (int j = 0; j < width; j++)
    {
        rows[i - 1, j] = (sheet.Cells[i + 1, j + 1].Value ?? "").ToString();
    }
});
This executes in about 1.8s on average for the same Excel table mentioned above - about half the time of our barely-parallel DataTable. Replacing the Parallel.For() with the standard for() in this snippet makes it run in about 2.5s.
So you can see a significant performance boost from parallelism, but also from a custom data structure - although the viability of the latter will depend on your ability to easily convert the returned values to that Excel data model thing, whatever it is.
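If you do go the custom-structure route, the conversion back to the model might look something like the sketch below. This is purely illustrative and not from the original post: it assumes DispatchInvoiceCTNDataModel exposes writable string properties named after the (whitespace-stripped) column headers, which may well not match the real model.

// Illustrative sketch only; requires System.Collections.Generic and System.Linq.
private static List<T> ToModels<T>(string[] headers, string[,] rows) where T : new()
{
    var props = typeof(T).GetProperties()
        .Where(p => p.CanWrite && p.PropertyType == typeof(string))
        .ToDictionary(p => p.Name, p => p, StringComparer.OrdinalIgnoreCase);

    var result = new List<T>(rows.GetLength(0));
    for (int i = 0; i < rows.GetLength(0); i++)
    {
        var item = new T();
        for (int j = 0; j < headers.Length; j++)
        {
            if (props.TryGetValue(headers[j], out var prop))
                prop.SetValue(item, rows[i, j]);   // copy the cell text into the matching property
        }
        result.Add(item);
    }
    return result;
}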
The line dr = dt.Rows.Add(); is not thread safe; you are corrupting the internal state of the array in the DataTable that holds the rows for the table.
At first glance changing it to
if (i > 1)
{
    lock (lockObject)
    {
        dr = dt.Rows.Add();
    }
}
should fix it, but that does not mean other thread safety problems are not lurking in excelWorksheet.Cells being accessed from multiple threads. (If excelWorksheet is this class and you are running on an STA main thread (WinForms or WPF), COM should marshal the cross-thread calls for you.)
EDIT: New theory: the problem comes from the fact that you are setting up your schema inside the parallel loop and attempting to write to it at the same time. Pull all of the i == 1 logic out to before the loop and then start the loop at i == 2.
private List<DispatchInvoiceCTNDataModel> WorksheetToDataTableForInvoiceCTN(ExcelWorksheet excelWorksheet, int month, int year)
{
    int totalRows = excelWorksheet.Dimension.End.Row;
    int totalCols = excelWorksheet.Dimension.End.Column;

    DataTable dt = new DataTable(excelWorksheet.Name);

    // Build the schema before we loop in parallel.
    for (int j = 1; j <= totalCols; j++)
    {
        var colName = excelWorksheet.Cells[1, j].Value.ToString().Replace(" ", String.Empty);
        if (!dt.Columns.Contains(colName))
            dt.Columns.Add(colName);
    }

    Parallel.For(2, totalRows + 1, (i) =>
    {
        DataRow dr = null;
        lock (lockObject)
        {
            dr = dt.Rows.Add();
        }

        for (int j = 1; j <= totalCols; j++)
        {
            dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null;
        }
    });

    var excelDataModel = dt.ToList<DispatchInvoiceCTNDataModel>();

    // now we have mapped everything except for the IDs
    excelDataModel = MapInvoiceCTNIDs(excelDataModel, month, year, excelWorksheet);

    return excelDataModel;
}
Your code is incorrect:
1) Parallel.For has its own batching mechanism (it can be customized via ForEach with partitioners) and does not guarantee that the operation for i == n will be executed after the operation for i == m where n > m.
So line
dr[j - 1] = excelWorksheet.Cells[i, j].Value != null ? excelWorksheet.Cells[i, j].Value.ToString() : null;
throws an exception when the required column has not been added yet (by the i == 1 iteration).
2) It's recommended to use the NewRow method:
    dr = tbl.NewRow() -> populate dr -> tbl.Rows.Add(dr)
or Rows.Add(object[] values):
    values = new object[knownColumnCount] -> populate values -> tbl.Rows.Add(values)
(a hedged sketch of this pattern follows after this list)
3) It's really better to populate the columns first in this case, because it is sequential access to the Excel file (a seek) and it would not hurt performance.
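For concreteness, here is a hedged sketch of point 2 applied to the question's loop body (my addition, not this answer's code). As discussed elsewhere in this thread, DataTable/DataRow writes are not thread safe, so the row is appended under the lock; the parallelism then mainly helps with the Excel reads.

// Inside Parallel.For(2, totalRows + 1, (i) => { ... }):
object[] values = new object[totalCols];
for (int j = 1; j <= totalCols; j++)
{
    var cell = excelWorksheet.Cells[i, j].Value;
    values[j - 1] = cell != null ? cell.ToString() : null;   // populate a plain array first
}

lock (lockObject)
{
    dt.Rows.Add(values);   // Rows.Add(object[]) builds and appends the row in one call
}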
Have you tried using NewRow when creating the new DataRow and moving the creation of the columns outside the parallel loop, like Scott Chamberlain suggested above? By using NewRow you're creating a row with the same schema as the parent DataTable. I got the same error as you when I tried your code with a random Excel file, but got it to work like this:
for (int x = 1; x <= totalCols; x++)
{
    var colName = excelWorksheet.Cells[1, x].Value.ToString().Replace(" ", String.Empty);
    if (!dt.Columns.Contains(colName))
        dt.Columns.Add(colName);
}

Parallel.For(2, totalRows + 1, (i) =>
{
    DataRow dr = dt.NewRow();
    for (int j = 1; j <= totalCols; j++)
    {
        dr[j - 1] = excelWorksheet.Cells[i, j].Value != null
            ? excelWorksheet.Cells[i, j].Value.ToString()
            : null;
    }
    lock (lockObject)
    {
        dt.Rows.Add(dr);
    }
});
I have a DataTable dtOne, having records as below:
ColumnA   ColumnB   ColumnC
1001      W101      ARCH
1001      W102      ARCH
1002      W103      CUSS
1003      W104      ARCH
And another DataTable dtTwo, having values as:
ColumnA
ARCH
CUSS
I need to check whether the values of dtTwo exist in dtOne or not; if not, write them on the webpage.
I wrote the code below, but it doesn't work properly. What I need is: if ARCH from dtTwo is present in dtOne, don't check any further, just write it to the webpage.
for (int counter = 0; counter < dtTwo.Rows.Count; counter++)
{
    var contains = dtOne.Select("ColumnC= '" + dtTwo.Rows[counter][0].ToString() + "'");
    if (contains.Length == 0)
    {
        Response.Write("CostCode " + dtTwo.Rows[counter][0].ToString() + " not present in the Excel");
    }
}
Experts please help.
EDIT:
My functionality is achieved when I write the code below, but I get a warning that unreachable code is detected at the counter variable.
I don't think it's correct.
for (int counter = 0; counter < dtTwo.Rows.Count; counter++)
{
    var contains = dtOne.Select("ColumnC= '" + dtTwo.Rows[counter][0].ToString() + "'");
    if (contains.Length == 0)
    {
        Response.Write("CostCode " + dtTwo.Rows[counter][0].ToString() + " not present in the Excel");
    }
    break;
}
Regards
According to your clarification, you want to stop if you find a match from the second table in the first table.
Then you need to add the break statement when your Select finds one or more rows that match the condition:
for (int counter = 0; counter < dtTwo.Rows.Count; counter++)
{
    var contains = dtOne.Select("ColumnC= '" + dtTwo.Rows[counter][0].ToString() + "'");
    if (contains.Length != 0)
    {
        Response.Write("CostCode " + dtTwo.Rows[counter][0].ToString() + " is present in the Excel");
        break;
    }
}
From the C# reference
The break statement terminates the closest enclosing loop or switch
statement in which it appears. Control is passed to the statement that
follows the terminated statement, if any.
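As a side note, the same per-row check can also be written without a Select filter string, for example with LINQ to DataSet. This is just an alternative sketch (it assumes a reference to System.Data.DataSetExtensions and a using System.Linq; directive):

// Does any row in dtOne have ColumnC equal to this cost code?
string costCode = dtTwo.Rows[counter][0].ToString();
bool found = dtOne.AsEnumerable()
                  .Any(r => r.Field<string>("ColumnC") == costCode);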