DataTable does not get updated? - C#

I have a DataTable with one column filled with a list of words from a text file. I wrote a method that reads a string; if the string is found, the row should be deleted, but the problem is that the DataTable doesn't get the updates.
foreach (string line in file)
{
    tagst.Rows.Add(line);
}
string s;
for (int k = 0; k < tagst.Rows.Count; k++)
{
    s = tagst.Rows[k]["Tags"].ToString();
    if (s.Equals("Jad"))
    {
        tagst.Rows[k].Delete();
    }
}

After your loop, call tagst.AcceptChanges();
Per the documentation:
When AcceptChanges is called, any DataRow object still in edit mode successfully ends its edits. The DataRowState also changes: all Added and Modified rows become Unchanged, and Deleted rows are removed.
As @LarsTech stated, you'll want to rework your loop like:
for (int i = tagst.Rows.Count - 1; i >= 0; i--)
{
// ....
}
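Putting both pieces together, here is a minimal, self-contained sketch (the sample data and the single "Tags" column are assumptions for the demo):

```csharp
using System;
using System.Data;

// Build a one-column table like the one in the question.
var tagst = new DataTable();
tagst.Columns.Add("Tags", typeof(string));
foreach (string line in new[] { "Jad", "foo", "Jad", "bar" })
{
    tagst.Rows.Add(line);
}

// Iterate backwards so deleting a row doesn't shift the
// indices of the rows we haven't visited yet.
for (int k = tagst.Rows.Count - 1; k >= 0; k--)
{
    if (tagst.Rows[k]["Tags"].ToString().Equals("Jad"))
    {
        tagst.Rows[k].Delete();
    }
}

// Commit the pending deletions.
tagst.AcceptChanges();
Console.WriteLine(tagst.Rows.Count); // 2
```

Note that rows freshly added with Rows.Add are still in the Added state, so Delete() removes them from the collection immediately; for rows that came from a database, AcceptChanges() is what finalizes the removal.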

One issue with your code is that when you remove rows, you need to start from the last one and work backward, for a very simple reason: once you remove rows 3, 10, and 20 out of 100, how many rows remain? Why start from the bottom? Because tagst is automatically re-indexed as soon as a delete completes.
But your tagst.Rows.Count is captured once and never refreshed!
Basically, once your counter (k) hits an index where rows have already been deleted, you will see an error at best; at worst your app will crash if you don't have error-handling routines in place. Since you did not post the actual code for how you create your tagst, I will show how it can be done with a list. Declaration of variables omitted...
Try this:
for (int k = tagstArray.Count - 1; k >= 0; k--)
{
    s = tagstArray[k].ToString();
    if (s.Contains("Jad"))
    {
        tagstArray.RemoveAt(k);
    }
}

Related

Issues with line-clearing algorithm for Tetris game?

I'm working on a text-based Tetris game and running into some issues with clearing lines. My DeleteRow() method deletes the given row by working upwards from it, overwriting each row's data with the data of the row above. This seems to work:
/**
 * Deletes the given row from the array of landed Tetromino blocks.
 * @param row The row to delete from the landed array.
 **/
private void DeleteRow(int row) {
    for(int r = row; r > 0; r--) {
        for(int c = 0; c < Board.WIDTH; c++) {
            // Overwrite the value of this column from the row above.
            still[r, c] = still[(r - 1), c];
        }
    }
}
Where "still" is the 2D array, declared as:
private int[,] still;
And initialized here:
public Board() {
still = new int[Board.HEIGHT, Board.WIDTH];
}
But in my CheckRows() method, which is what calls DeleteRow(), I seem to be having an issue where it will clear the first row that's passed to it, but subsequent rows are either ignored or it will delete the wrong row:
/**
 * Checks each row to see if it is full and, if so, deletes the row and adds to the score.
 * @param score A long representing the current score, passed by reference.
 **/
public void CheckRows(ref long score) {
    List<int> rowsToClear = new List<int>();
    for(int r = 0; r < Board.HEIGHT; r++) {
        bool zero = false;
        for(int c = 0; c < Board.WIDTH; c++) {
            if(still[r, c] == 0) {
                zero = true;
                break;
            }
        }
        if(!zero) rowsToClear.Add(r);
    }
    // Delete all the rows that did not contain zeros.
    if(rowsToClear.Count > 0) {
        // Add to the score depending on the number of rows cleared.
        score += (new int[4] { 40, 100, 300, 1200 })[rowsToClear.Count - 1];
        // Delete each row in the list and then increment all other row indices to account for
        // the fact that the rows above this one are being copied to this one.
        for(int i = (rowsToClear.Count - 1); i >= 0; i--) {
            DeleteRow(rowsToClear[i]);
            rowsToClear.ForEach(x => x++);
        }
    }
}
I have to assume this has something to do with the line following the call to DeleteRow, where I increment the row numbers of the other rows that need to be cleared to account for the fact that I'm shifting each row downward to delete the row.
I have noticed though that if those rows are not deleted in the first CheckRows() call, they will be in the next iteration of the main game loop.
Is there some flaw in the logic I'm using? This is my first time making a Tetris game. I'm doing this to familiarize myself with C#, and I'm just not sure what the issue is here. Can someone else spot what's wrong?
I have to assume this has something to do with the line following the call to DeleteRow, where I increment the row numbers of the other rows that need to be cleared to account for the fact that I'm shifting each row downward to delete the row.
This is the line you are speaking of:
rowsToClear.ForEach(x => x++);
That line of code does absolutely nothing: The values will not be incremented.
You can perform an action on the element passed to the delegate in ForEach but you cannot replace it with a new value. Here is the documentation from MSDN:
Modifying the underlying collection in the body of the Action delegate is not supported and causes undefined behavior.
That will not work and neither will this:
foreach (var thisNum in nums)
{
thisNum++; // <== will not compile
}
This will increment the list:
for (int i = nums.Count - 1; i >= 0; i--)
{
nums[i]++;
}
After rearranging my CheckRows() method, I realized that the solution was to delete the rows in top-to-bottom order, as opposed to bottom-to-top.
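That fix is easy to verify on a scaled-down board (the 5x3 grid and marker value below are made up for the demo). Deleting in ascending (top-to-bottom) order works because DeleteRow only shifts the rows *above* the deleted row, so the indices of full rows further down are unaffected and no fix-up is needed:

```csharp
using System;
using System.Collections.Generic;

const int HEIGHT = 5, WIDTH = 3;
int[,] still = new int[HEIGHT, WIDTH];

// Rows 1 and 3 are full; row 2 carries a marker so we can track it.
for (int c = 0; c < WIDTH; c++) { still[1, c] = 1; still[3, c] = 1; }
still[2, 0] = 7;

// Same shifting logic as the question's DeleteRow.
void DeleteRow(int row)
{
    for (int r = row; r > 0; r--)
        for (int c = 0; c < WIDTH; c++)
            still[r, c] = still[r - 1, c];
}

// Collect the full rows (no zeros)...
var rowsToClear = new List<int>();
for (int r = 0; r < HEIGHT; r++)
{
    bool zero = false;
    for (int c = 0; c < WIDTH; c++)
        if (still[r, c] == 0) { zero = true; break; }
    if (!zero) rowsToClear.Add(r);
}

// ...then delete them top to bottom; later indices stay valid.
foreach (int row in rowsToClear)
    DeleteRow(row);

Console.WriteLine(still[3, 0]); // 7: the marker row dropped into the cleared row's place
```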

Extreme performance difference when using DataTable.Add

Take a look at the program below. It's pretty self-explanatory, but I'll explain anyway :)
I have two methods, one fast and one slow. These methods do the exact same thing: they create a table with 50,000 rows and 1000 columns. I write to a variable number of columns in the table. In the code below I've picked 10 (NUM_COLS_TO_WRITE_TO).
In other words, only 10 columns out of the 1000 will actually contain data. OK. The only difference between the two methods is that the fast one populates the row and then calls DataTable.Rows.Add, whereas the slow one adds the row first and populates it afterwards. That's it.
The performance difference however is shocking (to me anyway). The fast version is almost completely unaffected by changing the number of columns we write to, whereas the slow one goes up linearly. For example, when the number of columns I write to is 20, the fast version takes 2.8 seconds, but the slow version takes over a minute.
What in the world could possibly be going on here?
I thought that maybe calling dt.BeginLoadData() would make a difference, and it did to some extent: it brought the time down from 61 seconds to ~50 seconds, but that's still a huge difference.
Of course, the obvious answer is, "Well, don't do it that way." OK. Sure. But what in world is causing this? Is this expected behavior? I sure didn't expect it. :)
public class Program
{
    private const int NUM_ROWS = 50000;
    private const int NUM_COLS_TO_WRITE_TO = 10;
    private const int NUM_COLS_TO_CREATE = 1000;

    private static void AddRowFast() {
        DataTable dt = new DataTable();
        //add a table with 1000 columns
        for (int i = 0; i < NUM_COLS_TO_CREATE; i++) {
            dt.Columns.Add("x" + i, typeof(string));
        }
        for (int i = 0; i < NUM_ROWS; i++) {
            var theRow = dt.NewRow();
            for (int j = 0; j < NUM_COLS_TO_WRITE_TO; j++) {
                theRow[j] = "whatever";
            }
            //add the row *after* populating it
            dt.Rows.Add(theRow);
        }
    }

    private static void AddRowSlow() {
        DataTable dt = new DataTable();
        //add a table with 1000 columns
        for (int i = 0; i < NUM_COLS_TO_CREATE; i++) {
            dt.Columns.Add("x" + i, typeof(string));
        }
        for (int i = 0; i < NUM_ROWS; i++) {
            var theRow = dt.NewRow();
            //add the row *before* populating it
            dt.Rows.Add(theRow);
            for (int j = 0; j < NUM_COLS_TO_WRITE_TO; j++) {
                theRow[j] = "whatever";
            }
        }
    }

    static void Main(string[] args)
    {
        var sw = Stopwatch.StartNew();
        AddRowFast();
        sw.Stop();
        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
        sw.Restart();
        AddRowSlow();
        sw.Stop();
        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
        //When NUM_COLS is 5
        //FAST: 2754.6782
        //SLOW: 15794.1378
        //When NUM_COLS is 10
        //FAST: 2777.431 ms
        //SLOW 32004.7203 ms
        //When NUM_COLS is 20
        //FAST: 2831.1733 ms
        //SLOW: 61246.2243 ms
    }
}
Update
Calling theRow.BeginEdit and theRow.EndEdit in the slow version makes the slow version more or less constant (~4 seconds on my machine). If I actually had some constraints on the table, I guess this might make sense to me.
When the row is attached to a table, much more work is done to record and track its state on every change.
For example, if you do this,
theRow.BeginEdit();
for (int j = 0; j < NUM_COLS_TO_WRITE_TO; j++)
{
    theRow[j] = "whatever";
}
theRow.CancelEdit();
Then in BeginEdit(), internally it takes a copy of the contents of the row, so that at any point you can roll back - and the end result of the above is an empty row again, without "whatever". This is still possible even in BeginLoadData mode. Following the path of BeginEdit when attached to a DataTable, you eventually get into DataTable.NewRecord(), which shows that it is just copying each value for every column to store the original state in case a cancel is needed - not much magic here. On the other hand, if the row is not attached to a DataTable, not much happens in BeginEdit at all, and it exits quickly.
EndEdit() is similarly pretty heavy (when attached), as this is where all the constraints are checked (max length, does the column allow nulls, etc.). It also fires a bunch of events, explicitly frees the storage used in case the edit was cancelled, and makes the changes available for recall with DataTable.GetChanges(), which is still possible in BeginLoadData mode. In fact, looking at the source, all BeginLoadData seems to do is turn off constraint checking and indexing.
So that describes what BeginEdit and EndEdit do, and they behave completely differently, in terms of what is stored, depending on whether the row is attached to a table or not. Now consider that for a single theRow[j] = "whatever", you can see in the indexer setter for DataRow that it calls BeginEditInternal and then EndEdit on every single call (unless it is already in an edit because you explicitly called BeginEdit earlier). That means it copies and stores every single value for each column in the row, every time you make that assignment. You're doing it 10 times per row, so with your 1,000-column DataTable over 50,000 rows, it is allocating 500,000,000 objects. On top of that there is all the other versioning, checking and event firing after every single change, and so, overall, it's much slower when the row is attached to a DataTable than when not.
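Based on that, a plausible way to patch the slow version (a sketch, scaled well down from the question's sizes so it runs quickly) is to wrap the population loop in one explicit BeginEdit()/EndEdit() pair, so the row's original values are copied once per row instead of once per cell assignment:

```csharp
using System;
using System.Data;

const int NUM_ROWS = 1000;          // scaled down from the question's 50,000
const int NUM_COLS_TO_CREATE = 100; // scaled down from 1,000
const int NUM_COLS_TO_WRITE_TO = 10;

var dt = new DataTable();
for (int i = 0; i < NUM_COLS_TO_CREATE; i++)
    dt.Columns.Add("x" + i, typeof(string));

for (int i = 0; i < NUM_ROWS; i++)
{
    var theRow = dt.NewRow();
    dt.Rows.Add(theRow);   // attached before population, like AddRowSlow...

    theRow.BeginEdit();    // ...but the row state is snapshotted only once
    for (int j = 0; j < NUM_COLS_TO_WRITE_TO; j++)
        theRow[j] = "whatever";
    theRow.EndEdit();      // constraints checked and events fired once per row
}

Console.WriteLine(dt.Rows.Count); // 1000
```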

Delete step in scenario in Add-in for Enterprise Architect

I have a problem with deleting steps from a scenario in an Add-in for Enterprise Architect.
I want to delete the empty scenario steps from an element, but it's not working: the "deleted" steps still exist in the scenario.
Where is the mistake in my code?
short esCnt = element.Scenarios.Count;
for (short esIdx = (short)(esCnt - 1); esIdx >= 0; --esIdx)
{
    EA.IDualScenario es = element.Scenarios.GetAt(esIdx);
    short essCnt = es.Steps.Count;
    for (short essIdx = (short)(essCnt - 1); essIdx >= 0; --essIdx)
    {
        EA.IDualScenarioStep ess = es.Steps.GetAt(essIdx);
        if (ess.Name.Trim().Length == 0 &&
            ess.Uses.Trim().Length == 0 &&
            ess.Results.Trim().Length == 0)
        {
            //1. section
            es.Steps.Delete(essIdx);
            ess.Update();
        }
    }
    //2. section
    es.Update();
}
Do you have any ideas?
Looks like an off-by-one error. The Collection index is zero-based, but the scenario step numbering in the GUI, as reflected in ScenarioStep.Pos, starts from 1. So you might in fact be deleting the wrong steps.
To be on the safe side, you shouldn't use a foreach loop when you're making changes to the collection you're looping over, but a reverse for loop:
int nrScenarios = element.Scenarios.Count;
for (int scenIx = nrScenarios - 1; scenIx >= 0; --scenIx) {
    Scenario scenario = element.Scenarios.GetAt(scenIx);
    int nrSteps = scenario.Steps.Count;
    for (int stepIx = nrSteps - 1; stepIx >= 0; --stepIx) {
In this case, this is not as important in the outer loop as in the inner one, since the inner collection is the one you're manipulating.
Other than that, you shouldn't need to call es.Update() at all, and element.Scenarios.Refresh() should be called outside the inner loop.
Finally, are you sure that Step.Name is actually empty? I'm unable to create steps with empty names ("Action") in the GUI, but you might have been able to do it through the API.
I think the problem lies in the Update() calls.
EA.Collection.Delete() will immediately delete the element from the database (but not from the collection in memory). If, however, you then call Update() on the object you just deleted, I think it will recreate it, possibly with a new sequence number, which would explain why the deleted steps appear to have "moved" to the end.
Try removing the Update() call and see if that helps.
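Putting both answers together, the cleanup loop might look like this (a sketch only: it needs the Enterprise Architect interop assembly and an `element` from a live EA repository, so it is untested here):

```csharp
for (short esIdx = (short)(element.Scenarios.Count - 1); esIdx >= 0; --esIdx)
{
    EA.IDualScenario es = element.Scenarios.GetAt(esIdx);
    for (short essIdx = (short)(es.Steps.Count - 1); essIdx >= 0; --essIdx)
    {
        EA.IDualScenarioStep ess = es.Steps.GetAt(essIdx);
        if (ess.Name.Trim().Length == 0 &&
            ess.Uses.Trim().Length == 0 &&
            ess.Results.Trim().Length == 0)
        {
            // Delete only; no ess.Update() afterwards, which could
            // recreate the step.
            es.Steps.Delete(essIdx);
        }
    }
    // Refresh the in-memory collection once per scenario,
    // outside the inner loop.
    es.Steps.Refresh();
}
```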

Weird issue when removing columns from a datatable

I want to remove every column after the 3rd column from a CSV file loaded into a datatable, but I'm getting odd results. Here's my code.
System.Data.DataTable csv_datatable = null;
using (System.IO.StreamReader re = new System.IO.StreamReader(model.file.InputStream))
{
    csv_datatable = CsvParser.Parse(re as System.IO.TextReader);
    for (int x = 3; x < csv_datatable.Columns.Count + 1; x++)
    {
        csv_datatable.Columns.RemoveAt(x);
    }
}
My sample CSV file:
7 columns, and I want to keep the first three.
email,first,last,middle,nick,job,phone
roryok#fakedomain.com,rory,wally,ok,danger,developer,555-0123
This is the result I get.
email,first,last,nick,phone
roryok#fakedomain.com,rory,ok,danger,555-0123
As you can see, rather than removing the last columns as I would expect, the code actually removes the 4th and 6th columns.
As usual, figured this out as I was posting, but I'll post the solution anyway in case it helps someone.
As each column is removed, the index of every remaining column shifts down by one. So when you remove column 3, column 4 becomes the new column 3. This code works:
while (csv_datatable.Columns.Count > 3)
{
    csv_datatable.Columns.RemoveAt(3);
}
This keeps removing the column at index 3, over and over, until only the first three columns remain.
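A quick, self-contained check of the "keep removing index 3" idea (the column names are hard-coded here instead of coming from CsvParser):

```csharp
using System;
using System.Data;
using System.Linq;

var csv_datatable = new DataTable();
foreach (var name in new[] { "email", "first", "last", "middle", "nick", "job", "phone" })
    csv_datatable.Columns.Add(name);

// Index 3 always refers to the first column we still want to drop,
// no matter how many columns have been removed already.
while (csv_datatable.Columns.Count > 3)
    csv_datatable.Columns.RemoveAt(3);

Console.WriteLine(string.Join(",",
    csv_datatable.Columns.Cast<DataColumn>().Select(c => c.ColumnName)));
// email,first,last
```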

Looping and changing a list - remove doesn't always work

I'm trying to go through a loop 40 times, changing a list in the process.
This is the code:
for (int i = 0; i < 40; i++)
{
    location = rand.Next(rows.Count);
    rank = rand2.Next(pondRanks.Count);
    ComputerPonds[rows[location]].Rank = (PondRank)pondRanks[rank];
    rows.Remove(location);
    pondRanks.Remove(rank);
}
For some reason the remove doesn't happen every time, only sometimes. Does anyone have a suggestion?
Both lists are List objects with 40 elements, and I want to remove the element itself.
Even when debugging I can see that the list counts aren't the same (they both start with the same number of elements and both should have an element removed on every pass of this loop). If it matters, I'm working on the Windows Phone platform.
I'm pretty sure you should be using List.RemoveAt not List.Remove. RemoveAt will remove the item at the specified index, whereas Remove will look for that object you passed in and remove it from the List if it's in there. But I'm pretty sure that looking at your code that location and rank represent the index, not the objects themselves.
for (int i = 0; i < 40; i++)
{
    location = rand.Next(rows.Count);
    rank = rand2.Next(pondRanks.Count);
    ComputerPonds[rows[location]].Rank = (PondRank)pondRanks[rank];
    rows.RemoveAt(location);
    pondRanks.RemoveAt(rank);
}
EDIT: You may also want to make sure that rows and pondRanks have enough elements (at least 40) before starting the loop (or cap the loop counter at the smaller of their counts).
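The difference between the two methods is easy to see in isolation (made-up values, not the question's actual lists):

```csharp
using System;
using System.Collections.Generic;

var nums = new List<int> { 10, 20, 30 };

// Remove searches for the *value* 1; it isn't in the list,
// so nothing happens and Remove returns false.
bool removed = nums.Remove(1);
Console.WriteLine($"{removed} {nums.Count}"); // False 3

// RemoveAt removes the element at *index* 1, i.e. the value 20.
nums.RemoveAt(1);
Console.WriteLine(string.Join(",", nums)); // 10,30
```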
