Replace repeated values in a collection with their sum - C#

I have a list of custom class ModeTime, its structure is below:
private class ModeTime
{
    public DateTime Date { get; set; }
    public string LineName { get; set; }
    public string Mode { get; set; }
    public TimeSpan Time { get; set; }
}
In this list I have some items whose LineName and Mode are the same, and they appear in the list one after another. I need to sum the Time property of such items and replace them with a single item whose Time is that sum, without changing LineName and Mode; Date should be taken from the first of the replaced items. An example is below:
Original:
Date       | LineName | Mode   | Time
01.09.2018 | Line1    | Auto   | 00:30:00
01.09.2018 | Line2    | Auto   | 00:10:00
01.09.2018 | Line2    | Auto   | 00:05:00
01.09.2018 | Line2    | Manual | 00:02:00
01.09.2018 | Line2    | Auto   | 00:08:00
01.09.2018 | Line1    | Manual | 00:25:00
01.09.2018 | Line2    | Auto   | 00:05:00
02.09.2018 | Line2    | Auto   | 00:12:00
02.09.2018 | Line2    | Auto   | 00:07:00
02.09.2018 | Line1    | Auto   | 00:05:00

Modified:
Date       | LineName | Mode   | Time
01.09.2018 | Line1    | Auto   | 00:30:00
01.09.2018 | Line2    | Auto   | 00:15:00
01.09.2018 | Line2    | Manual | 00:02:00
01.09.2018 | Line2    | Auto   | 00:08:00
01.09.2018 | Line1    | Manual | 00:25:00
01.09.2018 | Line2    | Auto   | 00:24:00
02.09.2018 | Line1    | Auto   | 00:05:00
I have tried to write a method to do it. It partly works, but some items that should have been summed still remain.
private static List<ModeTime> MergeTime(List<ModeTime> modeTimes)
{
    modeTimes = modeTimes.OrderBy(e => e.Date).ToList();
    var mergedModeTimes = new List<ModeTime>();
    for (var i = 0; i < modeTimes.Count; i++)
    {
        if (i - 1 != -1)
        {
            if (modeTimes[i].LineName == modeTimes[i - 1].LineName &&
                modeTimes[i].Mode == modeTimes[i - 1].Mode)
            {
                mergedModeTimes.Add(new ModeTime
                {
                    Date = modeTimes[i - 1].Date,
                    LineName = modeTimes[i - 1].LineName,
                    Mode = modeTimes[i - 1].Mode,
                    Time = modeTimes[i - 1].Time + modeTimes[i].Time
                });
                i += 2;
            }
            else
            {
                mergedModeTimes.Add(modeTimes[i]);
            }
        }
        else
        {
            mergedModeTimes.Add(modeTimes[i]);
        }
    }
    return mergedModeTimes;
}
I have also tried to wrap the for loop in a do {} while() and shrink the source list modeTimes as items are merged. Unfortunately, that led to an endless loop and runaway memory use (I waited until it reached 5 GB).
Hope someone can help me. I searched for this problem; in some similar cases people use GroupBy, but I don't think it will work in my case: I must only sum items with the same LineName and Mode if they appear in the list one after another.

The most primitive solution would be something like this:
var items = GetItems();
var sum = TimeSpan.Zero;
for (int index = items.Count - 1; index > 0; index--)
{
    var item = items[index];
    var nextItem = items[index - 1];
    if (item.LineName == nextItem.LineName && item.Mode == nextItem.Mode)
    {
        sum += item.Time;
        items.RemoveAt(index);
    }
    else
    {
        item.Time += sum;
        sum = TimeSpan.Zero;
    }
}
items.First().Time += sum;
Edit: I had missed the last line, where the leftovers have to be added. It only matters when the first and second elements of the collection are the same; without it, the aggregated time would not be assigned to the first element.

You can use LINQ's GroupBy. To group only consecutive elements, this uses a trick. It stores the key values in a tuple together with a group index which is only incremented when LineName or Mode changes.
int i = 0; // Used as group index.
(int Index, string LN, string M) prev = default; // Stores previous key for later comparison.
var modified = original
    .GroupBy(mt => {
        var ret = (Index: prev.LN == mt.LineName && prev.M == mt.Mode ? i : ++i,
                   LN: mt.LineName, M: mt.Mode);
        prev = (Index: i, LN: mt.LineName, M: mt.Mode);
        return ret;
    })
    .Select(g => new ModeTime {
        Date = g.Min(mt => mt.Date),
        LineName = g.Key.LN,
        Mode = g.Key.M,
        Time = new TimeSpan(g.Sum(mt => mt.Time.Ticks))
    })
    .ToList();
This produces the expected 7 result rows.
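As a quick check, here is a minimal, hypothetical console harness exercising the grouping above. The ModeTime class and the sample values are taken from the question (here the class is made top-level rather than private nested, so the snippet is self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ModeTime
{
    public DateTime Date { get; set; }
    public string LineName { get; set; }
    public string Mode { get; set; }
    public TimeSpan Time { get; set; }
}

static class Demo
{
    static void Main()
    {
        var d = new DateTime(2018, 9, 1);
        // First five rows of the question's "Original" table.
        var original = new List<ModeTime>
        {
            new ModeTime { Date = d, LineName = "Line1", Mode = "Auto",   Time = TimeSpan.FromMinutes(30) },
            new ModeTime { Date = d, LineName = "Line2", Mode = "Auto",   Time = TimeSpan.FromMinutes(10) },
            new ModeTime { Date = d, LineName = "Line2", Mode = "Auto",   Time = TimeSpan.FromMinutes(5)  },
            new ModeTime { Date = d, LineName = "Line2", Mode = "Manual", Time = TimeSpan.FromMinutes(2)  },
            new ModeTime { Date = d, LineName = "Line2", Mode = "Auto",   Time = TimeSpan.FromMinutes(8)  },
        };

        int i = 0;                                      // group index
        (int Index, string LN, string M) prev = default;
        var modified = original
            .GroupBy(mt => {
                var ret = (Index: prev.LN == mt.LineName && prev.M == mt.Mode ? i : ++i,
                           LN: mt.LineName, M: mt.Mode);
                prev = (Index: i, LN: mt.LineName, M: mt.Mode);
                return ret;
            })
            .Select(g => new ModeTime {
                Date = g.Min(mt => mt.Date),
                LineName = g.Key.LN,
                Mode = g.Key.M,
                Time = new TimeSpan(g.Sum(mt => mt.Time.Ticks))
            })
            .ToList();

        foreach (var mt in modified)
            Console.WriteLine($"{mt.Date:dd.MM.yyyy} | {mt.LineName} | {mt.Mode} | {mt.Time}");
        // Expected: four rows; the two consecutive Line2/Auto rows are merged into 00:15:00.
    }
}
```

Note that this relies on LINQ-to-Objects evaluating the key selector in source order, which is what makes the side effect on prev safe here.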

Related

Search multiple strings between two specific strings in txt files and display in a datagrid

I am trying to mine some data from raw .txt files saved in a folder. In each file I have a RegretionModel and multiple PeakPoints.
My raw data file looks something like this:
Model ApprunningVersion="10.4." LastExecution time= ......bla bla bla wr3r43f34f RegretionModel = Linear221....bal bal...
k7878k7 wef34ferf PeakPoints = 11.11.... bal bal
dwedw wf343f4 PeakPoints = 322.11..... bla blaa....
gewwg45gww35w PeakPoints = 6711.11.... bla bla blaaa...
I want to extract the RegretionModel and all the PeakPoints values into two different RichTextBoxes.
if (all_files.Count > 0)
{
    var word_1 = "RegretionValue";
    var word_2 = "PeakPoints";
    foreach (string srd in all_files)
    {
        using (var sr = new StreamReader(srd))
        {
            while (!sr.EndOfStream)
            {
                var line = sr.ReadLine();
                if (String.IsNullOrEmpty(line)) continue;
                if (line.IndexOf(word_1, StringComparison.CurrentCultureIgnoreCase) >= 0)
                {
                    int startIndex = line.IndexOf("RegretionValue =") + "RegretionValue =".Length;
                    int endIndex = line.IndexOf("\" LAPNum");
                    string flt_1 = line.Substring(startIndex, endIndex - startIndex);
                    richTextBox1.Text += flt_1 + "\r";
                }
                if (line.IndexOf(word_2, StringComparison.CurrentCultureIgnoreCase) >= 0)
                {
                    int count = line.IndexOf(word_2, StringComparison.CurrentCultureIgnoreCase);
                    int startIndex_1 = line.IndexOf("PeakPoints =") + "PeakPoints =".Length;
                    int flt_2 = line.IndexOf("\" LPPCode");
                    string newString_1 = line.Substring(startIndex_1, flt_2 - startIndex_1);
                    richTextBox2.Text += newString_1 + "\r";
                    counter_1++;
                    label2.Text = counter_1.ToString() + " of " + matches + " completed";
                    label4.Text = count.ToString();
                }
            }
        }
    }
}
It gives me this, as I expected:
|---------------------|------------------|
| Linear221 | 11.11 |
|---------------------|------------------|
| | 322.11 |
|---------------------|------------------|
| | 6711.11 |
|---------------------|------------------|
But the issue is, when I read multiple files, everything gets mixed up:
|---------------------|------------------|
| Linear221 | 11.11 |
|---------------------|------------------|
| Linear321 | 322.11 |
|---------------------|------------------|
| | 6711.11 |
|---------------------|------------------|
| | 1.11 |
|---------------------|------------------|
| | 21.11 |
|---------------------|------------------|
when it is actually supposed to be:
|---------------------|------------------|
| Linear221 | 11.11 |
|---------------------|------------------|
| Linear221 | 322.11 |
|---------------------|------------------|
| Linear221 | 6711.11 |
|---------------------|------------------|
| Linear321 | 1.11 |
|---------------------|------------------|
| Linear321 | 21.11 |
|---------------------|------------------|
I know using these two RichTextBoxes is not the best option here, so I thought of putting the data into a data grid view without using a database. But I am stuck on linking each PeakPoints value to its corresponding RegretionModel.
For example, if I read one file I have one RegretionModel name and multiple PeakPoints; how do I put each PeakPoints value with its corresponding RegretionModel into a DataGrid?
I am a newbie; any help would be appreciated.
Thank you.
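Since no answer is recorded for this question, a minimal sketch of one possible approach: reset the captured model name at the start of each file, pair every PeakPoints value with it as it is found, and bind the pairs to the grid. The ModelPeak class, the Between helper, and the "...." end marker are illustrative assumptions, not taken from the original code:

```csharp
// Hypothetical sketch: pair each PeakPoints value with the RegretionModel
// of the file it came from, then bind the pairs to a DataGridView.
public class ModelPeak
{
    public string RegretionModel { get; set; }
    public string PeakPoint { get; set; }
}

// Assumed helper: returns the text between two markers, or null if either is missing.
static string Between(string line, string start, string end)
{
    int s = line.IndexOf(start);
    if (s < 0) return null;
    s += start.Length;
    int e = line.IndexOf(end, s);
    return e < 0 ? null : line.Substring(s, e - s).Trim();
}

var rows = new List<ModelPeak>();
foreach (string path in all_files)
{
    string currentModel = null;                 // reset per file, so files never mix
    foreach (string line in File.ReadLines(path))
    {
        string model = Between(line, "RegretionModel =", "....");
        if (model != null) currentModel = model;
        string peak = Between(line, "PeakPoints =", "....");
        if (peak != null)
            rows.Add(new ModelPeak { RegretionModel = currentModel, PeakPoint = peak });
    }
}
dataGridView1.DataSource = rows;                // one grid column per public property
```

Because currentModel is reset per file, every PeakPoints row carries the model of its own file even when many files are read in sequence.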

how to parse two columns in a string in one cycle?

I have a string like this. I want to put the second column (3, 9, 10, 11, ...) into one array and the third (5, 8, 4, 3, ...) into another.
C8| 3| 5| 0| | 0|1|
C8| 9| 8| 0| | 0|1|
C8| 10| 4| 0| | 0|1|
C8| 11| 3| 0| | 0|1|
C8| 12| 0| 0| | 0|1|
C8| 13| 0| 0| | 0|1|
C8| 14| 0| 0| | 0|1|
This method originally parsed numbers by rows; now I have columns.
How do I do this in the Parse method? I have been trying for hours and don't know what to do.
The Add method expects two integers: int secondNumberFinal, int thirdNumberFinal.
private Parse(string lines)
{
    const int secondColumn = 1;
    const int thirdColum = 2;
    var secondNumbers = lines[secondColumn].Split('\n'); // i have to split by new line, right?
    var thirdNumbers = lines[thirdColum].Split('\n'); // i have to split by new line, right?
    var res = new Collection();
    for (var i = 0; i < secondNumbers.Length; i++)
    {
        try
        {
            var secondNumberFinal = Int32.Parse(secondNumbers[i]);
            var thirdNumberFinal = Int32.Parse(thirdNumbers[i]);
            res.Add(secondNumberFinal, thirdNumberFinal);
        }
        catch (Exception ex)
        {
            log.Error(ex);
        }
    }
    return res;
}
thank you!
The piece of code below should do it for you. The logic is simple: split the string on '\n' (please check whether you need "\r\n" or some other line-ending format) and then split on '|'. Returning the data as an IEnumerable of Tuple provides both flexibility and lazy execution; you can convert it into a List at the caller, if you so desire, using the Enumerable.ToList extension method.
It uses LINQ (Select) instead of foreach loops due to its elegance in this situation.
static IEnumerable<Tuple<int, int>> Parse(string lines)
{
    const int secondColumn = 1;
    const int thirdColum = 2;
    return lines.Split('\n')
        .Select(line => line.Split('|'))
        .Select(items => Tuple.Create(int.Parse(items[secondColumn]), int.Parse(items[thirdColum])));
}
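For a quick check, the method above can be exercised like this (the two sample rows are taken from the question; int.Parse accepts the leading spaces in " 3" etc.):

```csharp
string input = "C8| 3| 5| 0| | 0|1|\n" +
               "C8| 9| 8| 0| | 0|1|";
foreach (var pair in Parse(input))
    Console.WriteLine("{0} {1}", pair.Item1, pair.Item2);
// prints "3 5" then "9 8"
```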
If the original is a single string, then split once on newlines to produce an array of strings. Parse each of the new strings by splitting on | and select the second and third values.
Partially rewriting your method for you:
private static void Parse(string lines)
{
    const int secondColumn = 1;
    const int thirdColum = 2;
    // Split on both '\r' and '\n' so Windows and Unix line endings both work.
    string[] arrlines = lines.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries);
    foreach (string line in arrlines)
    {
        string[] numbers = line.Split('|');
        var secondNumberFinal = Int32.Parse(numbers[secondColumn]);
        var thirdNumberFinal = Int32.Parse(numbers[thirdColum]);
        // Whatever you want to do with them here
    }
}

Horrible response time with NHibernate and Oracle11g

My problem is at the same time simple and complex:
I'm working with NHibernate 3.3 and Oracle 11g with ODP drivers.
This piece of code works like a charm:
var query = Session.CreateSQLQuery("SELECT * FROM wip_event_log WHERE track_id='" + trackId + "'");
query.AddEntity("l", typeof(MotIdenWipEventLog));
var results = query.List<MotIdenWipEventLog>();
In a couple of milliseconds I get the result set (only 5 records from a table with 11,000,000 records).
On the other hand, this piece of code:
var results = Session.Query<MotIdenWipEventLog>().Where(m => m.TRACK_ID == trackId).ToList();
takes about 4 seconds to get the 5 records!
I read about a problem with AnsiString columns in Oracle databases (http://bit.ly/1bbSlB7) and added a custom convention for working with strings in my fluent configuration:
Fluently
    .Configure(new Configuration().Configure())
    .Database(OracleClientConfiguration
        .Oracle10
        .ConnectionString(c => c.Is("User ID=XXXX;Password=XXXX;Data Source=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 1.1.1.1)(PORT = 1521)))(CONNECT_DATA = (SID = iden01)))"))
    )
    .Mappings(
        cfg => cfg.FluentMappings.LocalAddFromAssemblyOf<MotIdenPackSalesModelsHeaderMap>().Conventions.Add<OracleStringPropertyConvention>()
    ).BuildConfiguration();
and the custom convention OracleStringPropertyConvention is:
public class OracleStringPropertyConvention : IPropertyConvention
{
    public void Apply(IPropertyInstance instance)
    {
        if (instance.Property.PropertyType == typeof(string)) instance.CustomType("AnsiString");
    }
}
The entity MotIdenWipEventLog is defined below:
[Serializable]
public class MotIdenWipEventLog
{
    public virtual String TRACK_ID { get; set; }         // VARCHAR2(16 BYTE) No
    public virtual String ASSY_PART_NUM { get; set; }    // VARCHAR2(20 BYTE) Yes
    public virtual String ASSY_VER_CODE { get; set; }    // VARCHAR2(4 BYTE) Yes
    public virtual int PROC_ID { get; set; }             // NUMBER(9,0) Yes
    public virtual String WIP_EVENT_CODE { get; set; }   // VARCHAR2(4 BYTE) Yes
    public virtual DateTime EVENT_DATETIME { get; set; } // DATE No
    public virtual int EVENT_CLKSEQ { get; set; }        // NUMBER(12,0) Yes
    public virtual String AREA_ID { get; set; }          // VARCHAR2(8 BYTE) Yes
    public virtual String PERSONNEL_ID { get; set; }     // VARCHAR2(11 BYTE) Yes
    public virtual String STN_ID { get; set; }           // VARCHAR2(20 BYTE) Yes
    public virtual int WIP_COUNT { get; set; }           // NUMBER(3,0) Yes
    public virtual String STN_GROUP { get; set; }        // VARCHAR2(8 BYTE) Yes
}
Mapped through the class MotIdenWipEventLogMap:
public class MotIdenWipEventLogMap : ClassMap<MotIdenWipEventLog>
{
    public MotIdenWipEventLogMap()
    {
        Table("WIP_EVENT_LOG");
        Id(m => m.TRACK_ID, "TRACK_ID").GeneratedBy.Assigned();
        #region Fields
        Map(m => m.TRACK_ID).Not.Nullable()
            .Length(16).Index("WIP_EVENT_LOG_IDX1");     // VARCHAR2(16 BYTE) No
        Map(m => m.ASSY_PART_NUM).Nullable().Length(20); // VARCHAR2(20 BYTE) Yes
        Map(m => m.ASSY_VER_CODE).Nullable().Length(4);  // VARCHAR2(4 BYTE) Yes
        Map(m => m.PROC_ID).Nullable();                  // NUMBER(9,0) Yes
        Map(m => m.WIP_EVENT_CODE).Nullable().Length(4); // VARCHAR2(4 BYTE) Yes
        Map(m => m.EVENT_DATETIME).Not.Nullable();       // DATE No
        Map(m => m.EVENT_CLKSEQ).Nullable();             // NUMBER(12,0) Yes
        Map(m => m.AREA_ID).Nullable().Length(8);        // VARCHAR2(8 BYTE) Yes
        Map(m => m.PERSONNEL_ID).Nullable().Length(11);  // VARCHAR2(11 BYTE) Yes
        Map(m => m.STN_ID).Nullable().Length(20);        // VARCHAR2(20 BYTE) Yes
        Map(m => m.WIP_COUNT).Nullable();                // NUMBER(3,0) Yes
        Map(m => m.STN_GROUP).Nullable().Length(8);      // VARCHAR2(8 BYTE) Yes
        #endregion
    }
}
Looking at my NHibernate log file at Debug level in log4net:
(...)
2013-11-06 14:24:22,375 DEBUG - Opened IDataReader, open IDataReaders: 1
2013-11-06 14:24:22,376 DEBUG - processing result set
2013-11-06 14:24:26,956 DEBUG - result set row: 0
2013-11-06 14:24:26,959 DEBUG - returning 'F7012B200ZMH' as column: TRACK1_6_
(...)
and looking at the NHibernate source code of the Loader.cs class:
(...)
try
{
    HandleEmptyCollections(queryParameters.CollectionKeys, rs, session);
    EntityKey[] keys = new EntityKey[entitySpan]; // we can reuse it each time
    if (Log.IsDebugEnabled)
    {
        Log.Debug("processing result set");
    }
    int count;
    for (count = 0; count < maxRows && rs.Read(); count++)
    {
        if (Log.IsDebugEnabled)
        {
            Log.Debug("result set row: " + count);
        }
        object result = GetRowFromResultSet(rs, session, queryParameters, lockModeArray, optionalObjectKey, hydratedObjects, keys, returnProxies);
        results.Add(result);
(...)
I can't find where the problem is...
What am I doing wrong?
Any ideas?
This is rather a patch than a solution, but you could create a function-based index to match the type your application is requesting, e.g.:
create index patch_index on your_table(cast(your_column as nvarchar2(16)));
Illustrating this on Oracle 11g using EXPLAIN PLAN.
Using
create table t(x varchar2(10));
create index idx on t(x);
insert into t values ('a');
The query
select * from t where x = 'a';
gives you the following plan
--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |    1 |     7 |       3 (0) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | T    |    1 |     7 |       3 (0) | 00:00:01 |
--------------------------------------------------------------------------
After adding the following index
create index t2 on t(cast(x as nvarchar2(10)))
The same query now gives you the following plan
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 19 | 2 (0) | 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 19 | 2 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN | T2 | 1 | | 1 (0) | 00:00:01 |
------------------------------------------------------------------------------------
You can apply this technique if you cannot fix the problem on the application side.
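On the application side, another workaround (a sketch, not verified against NHibernate 3.3) is to keep the fast native SQL query but bind the value through IQuery.SetAnsiString, so the parameter is sent as an AnsiString matching the VARCHAR2 column rather than being concatenated into the SQL string (which also removes the injection risk of the original query):

```csharp
// Sketch: parameterised native SQL with an explicit AnsiString binding.
var query = Session.CreateSQLQuery(
    "SELECT * FROM wip_event_log WHERE track_id = :trackId");
query.AddEntity("l", typeof(MotIdenWipEventLog));
query.SetAnsiString("trackId", trackId);   // bound as AnsiString, not NVARCHAR2
var results = query.List<MotIdenWipEventLog>();
```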

reading a CSV into a Datatable without knowing the structure

I am trying to read a CSV into a DataTable.
The CSV may have hundreds of columns and only up to 20 rows.
It will look something like this:
+----------+-----------------+-------------+---------+---+
| email1 | email2 | email3 | email4 | … |
+----------+-----------------+-------------+---------+---+
| ccemail1 | anotherccemail1 | 3rdccemail1 | ccemail | |
| ccemail2 | anotherccemail2 | 3rdccemail2 | | |
| ccemail3 | anotherccemail3 | | | |
| ccemail4 | anotherccemail4 | | | |
| ccemail5 | | | | |
| ccemail6 | | | | |
| ccemail7 | | | | |
| … | | | | |
+----------+-----------------+-------------+---------+---+
I am trying to use GenericParser for this; however, I believe it requires you to know the column names.
string strID, strName, strStatus;
using (GenericParser parser = new GenericParser())
{
    parser.SetDataSource("MyData.txt");
    parser.ColumnDelimiter = "\t".ToCharArray();
    parser.FirstRowHasHeader = true;
    parser.SkipStartingDataRows = 10;
    parser.MaxBufferSize = 4096;
    parser.MaxRows = 500;
    parser.TextQualifier = '\"';
    while (parser.Read())
    {
        strID = parser["ID"]; // as you can see, this requires you to know the column names
        strName = parser["Name"];
        strStatus = parser["Status"];
        // Your code here ...
    }
}
Is there a way to read this file into a DataTable without knowing the column names?
It's so simple!
var adapter = new GenericParsing.GenericParserAdapter(filepath);
DataTable dt = adapter.GetDataTable();
This will automatically do everything for you.
I looked at the source code, and you can access the data by column index too, like this:
var firstColumn = parser[0];
Replace the 0 with the column number. The number of columns can be found using
parser.ColumnCount
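Putting those two observations together, a loop over column indexes can fill a DataTable without knowing any names up front. This is a sketch; the GenericParser member names are taken from the snippets in this thread and may need adjusting for your version of the library:

```csharp
// Sketch: build a DataTable from GenericParser using only column indexes.
var dt = new DataTable();
using (var parser = new GenericParser("MyData.txt"))
{
    parser.FirstRowHasHeader = true;
    while (parser.Read())
    {
        if (dt.Columns.Count == 0)                   // first data row: create the columns
            for (int i = 0; i < parser.ColumnCount; i++)
                dt.Columns.Add(parser.GetColumnName(i) ?? ("Column" + i));

        var row = dt.NewRow();
        for (int i = 0; i < parser.ColumnCount; i++)
            row[i] = parser[i];
        dt.Rows.Add(row);
    }
}
```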
I'm not familiar with that GenericParser; I would suggest using tools like TextFieldParser, FileHelpers, or this CSV-Reader.
But this simple manual approach should also work:
IEnumerable<String> lines = File.ReadAllLines(filePath);
String header = lines.First();
var headers = header.Split(',');
DataTable tbl = new DataTable();
for (int i = 0; i < headers.Length; i++)
{
    tbl.Columns.Add(headers[i]);
}
var data = lines.Skip(1);
foreach (var line in data)
{
    // Keep empty fields: removing them would shift values into the wrong columns.
    var fields = line.Split(',');
    DataRow newRow = tbl.Rows.Add();
    newRow.ItemArray = fields;
}
I used GenericParser to do it.
On the first run through the loop I get the column names and then reference them to add them to a list.
In my case I have pivoted the data, but here is a code sample in case it helps someone:
bool firstRow = true;
List<string> columnNames = new List<string>();
List<Tuple<string, string, string>> results = new List<Tuple<string, string, string>>();
while (parser.Read())
{
    if (firstRow)
    {
        for (int i = 0; i < parser.ColumnCount; i++)
        {
            if (parser.GetColumnName(i).Contains("FY"))
            {
                columnNames.Add(parser.GetColumnName(i));
                Console.WriteLine("Column found: {0}", parser.GetColumnName(i));
            }
        }
        firstRow = false;
    }
    foreach (var col in columnNames)
    {
        double actualCost = 0;
        bool hasValueParsed = Double.TryParse(parser[col], out actualCost);
        csvData.Add(new ProjectCost
        {
            ProjectItem = parser["ProjectItem"],
            ActualCosts = actualCost,
            ColumnName = col
        });
    }
}

Generate Sitemap from URLs in Database

Problem Statement:
URLs are stored in a database, example:
home/page1
gallery/image1
info/IT/contact
home/page2
home/page3
gallery/image2
info/IT/map
and so on.
I would like to arrange the above urls into a tree fashion as shown below (each item will be a url link). The final output would be a simple HTML List (plus any sub list(s))
thus:
home
    page1
    page2
    page3
gallery
    image1
    image2
info
    IT
        contact
        map
The programming language is C# and the platform is ASP.NET.
EDIT 1:
In the above example we end up with three lists, because there are three main 'groups': home, gallery, info.
Naturally, this can change; the algorithm needs to be able to build the lists recursively.
Well, sorting those strings needs a lot of work. I've done something similar to your situation, and I'd like to share the strategy with you.
First of all (if you can change the design of your tables, that is), create a URL table like the one below:
----------------
| URL Table |
----------------
| ID |
| ParentID |
| Page |
|..extra info..|
----------------
It's an implementation of category and subcategory in the same table. In the same manner you can insert many pages and subpages. For example:
-------------------------------------
| ID | ParentID | Page | ...
------------------------------------
| 0 | null | Home |
| 1 | null | Gallery |
| 2 | null | Info |
| 3 | 0 | Page1 |
| 4 | 0 | Page2 |
| 5 | 0 | Page3 | ...
| 6 | 1 | Image1 |
| 7 | 1 | Image2 |
| 8 | 2 | IT |
| 9 | 8 | contact |
| 10 | 8 | map |
------------------------------------- ...
When ParentID is null, it is a top-level page.
When ParentID is an ID, it is a sublevel of whatever page has that ID, and so on.
From the C# side, you know the top pages: those whose ParentID is null.
You can bring in their subpages by the selected IDs of the top pages; it's some ADO.NET work.
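A minimal ADO.NET sketch of that idea (the connection string, table name, and column names are illustrative): load the whole table once, then walk it in memory by ParentID:

```csharp
// Sketch: read (ID, ParentID, Page) rows, then find roots and children in memory.
var pages = new List<Tuple<int, int?, string>>();   // (ID, ParentID, Page)
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT ID, ParentID, Page FROM UrlTable", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            pages.Add(Tuple.Create(
                reader.GetInt32(0),
                reader.IsDBNull(1) ? (int?)null : reader.GetInt32(1),
                reader.GetString(2)));
}

var roots = pages.Where(p => p.Item2 == null);              // top-level pages
// children of any node: pages.Where(p => p.Item2 == node.Item1)
```

With the whole table in memory, the nested lists can then be rendered by recursing from each root over its children.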
Hope this helps
Myra
OK, did it.
First I created a class:
public class Node
{
    private string _Parent = string.Empty;
    private string _Child = string.Empty;
    private bool _IsRoot = false;

    public string Parent
    {
        set { _Parent = value; }
        get { return _Parent; }
    }

    public string Child
    {
        set { _Child = value; }
        get { return _Child; }
    }

    public Node(string PChild, string PParent)
    {
        _Parent = PParent;
        _Child = PChild;
    }

    public bool IsRoot
    {
        set { _IsRoot = value; }
        get { return _IsRoot; }
    }
}
Then I generated the sitemap by transforming the URL strings directly, as follows:
private static string MakeTree()
{
    List<Node> __myTree = new List<Node>();
    List<string> urlRecords = new List<string>();
    urlRecords.Add("home/image1");
    urlRecords.Add("home/image2");
    urlRecords.Add("IT/contact/map");
    urlRecords.Add("IT/contact/address");
    urlRecords.Add("IT/jobs");
    __myTree = ExtractNode(urlRecords);
    List<string> __roots = new List<string>();
    foreach (Node itm in __myTree)
    {
        if (itm.IsRoot)
        {
            __roots.Add(itm.Child.ToString());
        }
    }
    string __trees = string.Empty;
    foreach (string roots in __roots)
    {
        __trees += GetChildren(roots, __myTree) + "<hr/>";
    }
    return __trees;
}

private static string GetChildren(string PRoot, List<Node> PList)
{
    string __res = string.Empty;
    int __Idx = 0;
    foreach (Node x in PList)
    {
        if (x.Parent.Equals(PRoot))
        {
            __Idx += 1;
        }
    }
    if (__Idx > 0)
    {
        string RootHeader = string.Empty;
        foreach (Node x in PList)
        {
            if (x.IsRoot && PRoot == x.Child)
            {
                RootHeader = x.Child;
            }
        }
        __res += RootHeader + "<ul>\n";
        foreach (Node itm in PList)
        {
            if (itm.Parent.Equals(PRoot))
            {
                __res += string.Format("<ul><li>{0}{1}</li></ul>\n", itm.Child, GetChildren(itm.Child, PList));
            }
        }
        __res += "</ul>\n";
        return __res;
    }
    return string.Empty;
}

private static List<Node> ExtractNode(List<string> Urls)
{
    List<Node> __NodeList = new List<Node>();
    foreach (string itm in Urls)
    {
        string[] __arr = itm.Split('/');
        int __idx = -1;
        foreach (string node in __arr)
        {
            __idx += 1;
            if (__idx == 0)
            {
                Node __node = new Node(node, "");
                if (!__NodeList.Exists(x => x.Child == __node.Child && x.Parent == __node.Parent))
                {
                    __node.IsRoot = true;
                    __NodeList.Add(__node);
                }
            }
            else
            {
                Node __node = new Node(node, __arr[__idx - 1].ToString());
                if (!__NodeList.Exists(x => x.Child == __node.Child && x.Parent == __node.Parent))
                {
                    __NodeList.Add(__node);
                }
            }
        }
    }
    return __NodeList;
}
Anyway, it's not optimised; I'm sure I can clean it up a lot.
