I am trying to learn how to make graphs and populate them using a data list I call from my class. I made a method to read all my Scale values, then I call it from my form. After doing some research I found an example that looked promising, so I modified it to work for me. When I compiled it I received the following errors, and after looking them up I'm still not sure why. Is there a better way of filling my chart with points using a list, or how can I salvage this one? Thank you in advance.
errors:
cannot convert from 'System.Windows.Forms.DataVisualization.Charting.DataPointCollection' to 'WindowsFormsApplication1.Class1'
The best overloaded method match for 'System.Collections.Generic.List.Add(WindowsFormsApplication1.Class1)' has some invalid arguments
Method I am using:
List<Class1> points = new List<Class1>();
for (int i = 0; i < 100; i++)
{
points.Add(new DataPoint(i, 1));
}
points.Clear();
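The errors are saying that a List&lt;Class1&gt; can only hold Class1 objects, while new DataPoint(i, 1) creates a chart DataPoint. One way around it, as a rough sketch only (the chart parameter, the series name "Series1", and the method name are assumptions, not taken from your code), is to declare the list with the chart's own DataPoint type and then add the points to a series:
using System.Collections.Generic;
using System.Windows.Forms.DataVisualization.Charting;

void FillChart(Chart chart)
{
    // Build the list with the chart's own DataPoint type instead of Class1.
    List<DataPoint> points = new List<DataPoint>();
    for (int i = 0; i < 100; i++)
    {
        points.Add(new DataPoint(i, 1));   // x = i, y = 1
    }

    // Copy the points into an existing series.
    foreach (DataPoint p in points)
    {
        chart.Series["Series1"].Points.Add(p);
    }
}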
Okay, I have this piece of code:
int arrPanelsRequired = 4;
string airport;
int arrPanelsMade = 0;
for (; arrPanelsMade <= arrPanelsRequired; arrPanelsMade++)
{
Panel arrPanelXXX = new Panel();
and I need to replace the XXX with the value of arrPanelsMade. I found several ways to name an object after a variable's value; however, I wasn't able to apply any of them here :(
I don't think it would be a good idea in general, let alone in C#. I can see many arr prefixes, which leads me to the use of an array. I think you might be looking for something like this:
int arrPanelsRequired = 4;
Panel[] panels = new Panel[arrPanelsRequired];
for (int arrPanelsMade = 0; arrPanelsMade < arrPanelsRequired; arrPanelsMade++)
{
panels[arrPanelsMade] = new Panel();
}
What you are trying to do is bad practice. It passes in dynamically-typed languages, but in statically-typed languages such as C#, variables need to be explicitly declared. This helps the compiler catch errors caused by typos or other development mistakes. For example, if you have a variable arrPanel089, how can you or the compiler know whether it has been declared? Static typing can seem like a constraint when coming from dynamic languages, but it is really a great tool that saves you a ton of headaches as the developer.
If you want to create a set of panels, store them in an array/list/etc and access them by their index. If you want to give them a key, store them in a dictionary which allows you to access values by key instead.
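For example, here is a minimal sketch of the dictionary idea (the key strings and the panel count are just placeholders):
using System.Collections.Generic;
using System.Windows.Forms;

int arrPanelsRequired = 4;

// Panels keyed by name instead of by a generated variable name.
Dictionary<string, Panel> panels = new Dictionary<string, Panel>();
for (int i = 0; i < arrPanelsRequired; i++)
{
    panels["arrPanel" + i] = new Panel();
}

// Later, look one up by its key:
Panel second = panels["arrPanel1"];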
You should add the panels to a List or array, and access them later by index:
Panel[] arrPanel = new Panel[arrPanelsRequired];
for (int idx = 0; idx < arrPanelsRequired; idx++)
    arrPanel[idx] = new Panel();
You have a couple of things going on here that raise red flags. I'll digress from your question just a bit, because I think you're probably going down the wrong road in a couple of respects.
Drop the "arr" prefix, and just call your variables what they are — don't try to emulate Hungarian Notation unless you've got good reason to do so.
Consider using a while loop instead of a for loop if you're not going to perform your initialization in the loop header.
Now on to your question. No, the language doesn't work like that. Use a List<Panel> class instead. That will allow you to add more panels dynamically without having to predefine how many you want, such as in an array.
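For instance, a minimal sketch of the List&lt;Panel&gt; idea (the count of 4 is just an example):
using System.Collections.Generic;
using System.Windows.Forms;

// Grow the collection as needed instead of fixing its size up front.
List<Panel> panels = new List<Panel>();
for (int i = 0; i < 4; i++)
{
    panels.Add(new Panel());
}

Panel first = panels[0];   // access by index, just like an array
panels.Add(new Panel());   // and you can keep adding more later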
I have two multi-dimensional arrays declared like this:
bool?[,] biggie = new bool?[500, 500];
bool?[,] small = new bool?[100, 100];
I want to copy part of the biggie one into the small. Let’s say I want from the index 100 to 199 horizontally and 100 to 199 vertically.
I have written a simple for statement that goes like this:
for (int x = 0; x < 100; x++)
{
    for (int y = 0; y < 100; y++)
    {
        small[x, y] = biggie[x + 100, y + 100];
    }
}
I do this A LOT in my code, and this has proven to be a major performance jammer.
Array.Copy only copies single-dimensional arrays; with multi-dimensional arrays it just treats the whole matrix as a single flattened array, placing each row after the end of the previous one, which won't allow me to cut a square out of the middle of my array.
Is there a more efficient way to do this?
P.S.: I do consider refactoring my code so I don't have to do this at all and instead work with the bigger array directly. Copying matrices just can't be painless; the point is that I have already stumbled upon this before, looked for an answer, and got none.
In my experience, there are two ways to do this efficiently:
Use unsafe code and work directly with pointers.
Convert the 2D array to a 1D array and do the necessary arithmetic when you need to access it as a 2D array.
The first approach is ugly and it uses potentially invalid assumptions since 2D arrays are not guaranteed to be laid out contiguously in memory. The upshot to the first approach is that you don't have to change your code that is already using 2D arrays. The second approach is as efficient as the first approach, doesn't make invalid assumptions, but does require updating your code.
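As an illustration of the second approach (a sketch only, with the sizes from the question hard-coded): back each matrix with a flat, row-major array and copy the sub-square one row at a time with Array.Copy:
using System;

// Flat, row-major storage: element (x, y) lives at index y * width + x.
const int bigSize = 500, smallSize = 100, offset = 100;
bool?[] biggie = new bool?[bigSize * bigSize];
bool?[] small = new bool?[smallSize * smallSize];

for (int y = 0; y < smallSize; y++)
{
    // copy one full row of the 100x100 block in a single call
    int srcIndex = (y + offset) * bigSize + offset;
    int dstIndex = y * smallSize;
    Array.Copy(biggie, srcIndex, small, dstIndex, smallSize);
}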
I was working on some code recently and came across a method that had 3 for-loops that worked on 2 different arrays.
Basically, what was happening was a foreach loop would walk through a vector and convert a DateTime from an object, and then another foreach loop would convert a long value from an object. Each of these loops would store the converted value into lists.
The final loop would go through these two lists and store those values into yet another list because one final conversion needed to be done for the date.
Then, after all that is said and done, the final two lists are converted to arrays using ToArray().
Ok, bear with me, I'm finally getting to my question.
So, I decided to make a single for loop to replace the first two foreach loops and convert the values in one fell swoop (the third loop is quasi-necessary, although I'm sure with some work I could also fold it into the single loop).
But then I read the article "What your computer does while you wait" by Gustav Duarte and started thinking about memory management and what the data was doing while it's being accessed in the for-loop where two lists are being accessed simultaneously.
So my question is: what is the best approach for something like this? Try to condense the for-loops so everything happens in as few loops as possible, causing multiple data accesses across the different lists? Or allow the multiple loops and let the system bring in the data it's anticipating? These lists and arrays can be potentially large, and looping through 3 lists, perhaps 4 depending on how ToArray() is implemented, can get very costly (O(n^3)??). But from what I understood in said article and from my CS classes, having to fetch data can be expensive too.
Would anyone like to provide any insight? Or have I completely gone off my rocker and need to relearn what I have unlearned?
Thank you
The best approach? Write the most readable code, work out its complexity, and work out if that's actually a problem.
If each of your loops is O(n), then you've still only got an O(n) operation.
Having said that, it does sound like a LINQ approach would be more readable... and quite possibly more efficient as well. Admittedly we haven't seen the code, but I suspect it's the kind of thing which is ideal for LINQ.
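To give a very rough idea of the shape (purely a sketch; the element types and the conversion calls here are guesses, since we haven't seen your code), something like:
using System;
using System.Collections.Generic;
using System.Linq;

// One pass per source sequence, converting and materializing in a single chain.
static void Prepare(IEnumerable<object> vectorA, IEnumerable<object> vectorB,
                    out DateTime[] xArray, out long[] yArray)
{
    xArray = vectorA.Select(o => Convert.ToDateTime(o).Date).ToArray();   // convert, then strip the time part
    yArray = vectorB.Select(o => Convert.ToInt64(o)).ToArray();           // convert to long
}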
For reference, the article is "What your computer does while you wait" by Gustav Duarte. There is also a guide to big-O notation.
It's impossible to answer the question without being able to see code/pseudocode. The only reliable answer is "use a profiler". Assuming what your loops are doing is a disservice to you and anyone who reads this question.
Well, you've got complications if the two vectors are of different sizes. As has already been pointed out, this doesn't increase the overall complexity of the issue, so I'd stick with the simplest code - which is probably 2 loops, rather than 1 loop with complicated test conditions for the two different lengths.
Actually, those extra length tests could easily make the single loop slower than two loops. You might also get better memory fetch performance with 2 loops - i.e. you are looking at contiguous memory: A[0], A[1], A[2], ... then B[0], B[1], B[2], ..., rather than A[0], B[0], A[1], B[1], A[2], B[2], ...
So in every way, I'd go with 2 separate loops ;-p
Am I understanding you correctly in this?
You have these loops:
for (...){
// Do A
}
for (...){
// Do B
}
for (...){
// Do C
}
And you converted it into
for (...){
// Do A
// Do B
}
for (...){
// Do C
}
and you're wondering which is faster?
If not, some pseudocode would be nice, so we could see what you meant. :)
Impossible to say. It could go either way. You're right, fetching data is expensive, but locality is also important. The first version may be better for data locality, but on the other hand, the second has bigger blocks with no branches, allowing more efficient instruction scheduling.
If the extra performance really matters (as Jon Skeet says, it probably doesn't, and you should pick whatever is most readable), you really need to measure both options, to see which is fastest.
My gut feeling says the second, with more work being done between jump instructions, would be more efficient, but it's just a hunch, and it can easily be wrong.
Aside from cache thrashing on large functions, there may be benefits on tiny functions as well. This applies on any auto-vectorizing compiler (not sure if Java JIT will do this yet, but you can count on it eventually).
Suppose this is your code:
// if this compiles down to a raw memory copy with a bitmask...
Date morningOf(Date d) { return Date(d.year, d.month, d.day, 0, 0, 0); }
Date timestamps[N];
Date mornings[N];
// ... then this can be parallelized using SSE or other SIMD instructions
for (int i = 0; i != N; ++i)
mornings[i] = morningOf(timestamps[i]);
// ... and this will just run like normal
for (int i = 0; i != N; ++i)
doOtherCrap(mornings[i]);
For large data sets, splitting the vectorizable code out into a separate loop can be a big win (provided caching doesn't become a problem). If it was all left as a single loop, no vectorization would occur.
This is something that Intel recommends in their C/C++ optimization manual, and it really can make a big difference.
... working on one piece of data but with two functions can sometimes make it so that code to act on that data doesn't fit in the processor's low level caches.
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig1(); // pushes functionreallybig2 out of cache
    obj.functionreallybig2(); // pushes functionreallybig1 out of cache
}
vs
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig1(); // this stays in the cache next time through the loop
}
for (int i = 0; i < 10; i++) {
    myObject obj = array[i];
    obj.functionreallybig2(); // this stays in the cache next time through the loop
}
But it was probably a mistake (usually this type of trick is commented)
When data is cyclically loaded and unloaded like this, it is called cache thrashing, btw.
This is a separate issue from the data these functions are working on, as the processor typically caches that separately.
I apologize for not responding sooner and providing any kind of code. I got sidetracked on my project and had to work on something else.
To answer anyone still monitoring this question;
Yes, like jalf said, the function is something like:
PrepareData(vectorA, vectorB, xArray, yArray):
listA
listB
foreach(value in vectorA)
convert values insert in listA
foreach(value in vectorB)
convert values insert in listB
listC
listD
for(int i = 0; i < listB.count; i++)
listC[i] = listB[i] converted to something
listD[i] = listA[i]
xArray = listC.ToArray()
yArray = listD.ToArray()
I changed it to:
PrepareData(vectorA, vectorB, ref xArray, ref yArray):
listA
listB
for(int i = 0; i < vectorA.count && i < vectorB.count; i++)
convert values insert in listA
convert values insert in listB
listC
listD
for(int i = 0; i < listB.count; i++)
listC[i] = listB[i] converted to something
listD[i] = listA[i]
xArray = listC.ToArray()
yArray = listD.ToArray()
Keep in mind that the vectors can potentially have a large number of items. I figured the second one would be better, so that the program wouldn't have to loop n times 2 or 3 different times. But then I started to wonder about the effects of memory fetching, or prefetching, or what have you.
So, I hope this helps to clear up the question, although a good number of you have provided excellent answers.
Thank you everyone for the information. Thinking in terms of Big-O and how to optimize has never been my strong point. I believe I am going to put the code back to the way it was; I should have trusted the way it was written before instead of jumping on my novice instincts. Also, in the future I will provide more references so everyone can understand what the heck I'm talking about (clarity is also not a strong point of mine :-/).
Thank you again.
I'm looking for resources that can help me determine which approach to use in creating a 2d data structure with C#.
Do you mean multidimensional array? It's simple:
<type>[,] <name> = new <type>[<first dimension>, <second dimension>];
Here is MSDN reference:
Multidimensional Arrays (C#)
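For example (just an illustrative sketch):
// A 3-by-4 grid of ints, indexed with a single pair of brackets.
int[,] grid = new int[3, 4];
grid[2, 3] = 42;
int rows = grid.GetLength(0);     // 3
int columns = grid.GetLength(1);  // 4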
#Traumapony-- I'd actually state that the real performance gain is made in one giant flat array, but that may just be my C++ image processing roots showing.
It depends on what you need the 2D structure to do. If it's storing something where each set of items in the second dimension is the same size, then you want to use something like a large 1D array, because the seek times are faster and the data management is easier. Like:
for (int y = 0; y < ysize; y++){
    for (int x = 0; x < xsize; x++){
        theArray[y*xsize + x] = //some stuff!
    }
}
And then you can do operations which ignore neighboring pixels with a single passthrough:
int totalsize = xsize*ysize;
for (int x = 0; x < totalsize; x++){
    theArray[x] = //some stuff!
}
Except that in C# you probably want to actually call a C++ library to do this kind of processing; C++ tends to be faster for this, especially if you use the Intel compiler.
If you have the second dimension having multiple different sizes, then nothing I said applies, and you should look at some of the other solutions. You really need to know what your functional requirements are in order to be able to answer the question.
Depending on the type of the data, you could look at using a straightforward two-dimensional (jagged) array:
int[][] intGrid;
If you need to get tricky, you could always go the generics approach:
Dictionary<KeyValuePair<int, int>, string> cells;
That allows you to put complex types in the value part of the dictionary, although makes indexing into the elements more difficult.
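A quick sketch of that in use (the name cells and the values are just placeholders):
using System.Collections.Generic;

// Coordinates as the key, arbitrary payload as the value.
Dictionary<KeyValuePair<int, int>, string> cells =
    new Dictionary<KeyValuePair<int, int>, string>();
cells[new KeyValuePair<int, int>(3, 7)] = "tree";

string value;
if (cells.TryGetValue(new KeyValuePair<int, int>(3, 7), out value))
{
    // value is "tree" here
}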
If you're looking to store spatial 2d point data, System.Drawing has a lot of support for points in 2d space.
For performance, it's best not to use multi-dimensional arrays ([,]); instead, use jagged arrays. e.g.:
<type>[][] <name> = new <type>[<first dimension>];
for (int i = 0; i < <first dimension>; i++)
{
<name>[i] = new <type>[<second dimension>];
}
To access:
<type> item = <name>[<first index>][<second index>];
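Concretely, with ints (just an example):
// A 3-row jagged array where each row happens to have 4 columns.
int[][] grid = new int[3][];
for (int i = 0; i < grid.Length; i++)
{
    grid[i] = new int[4];
}
grid[2][3] = 42;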
Data Structures in C#
Seriously, I'm not trying to be critical of the question, but I got tons of useful results right at the top of my search when I Googled for:
data structures c#
If you have specific questions about specific data structures, we might have more specific answers...