This question already has answers here:
C# Finding the maximum value of a string with linq
(3 answers)
Closed 4 years ago.
I want to choose the media based on size, but I can't seem to figure out how to pick the one with the lowest memory consumption in a smart way.
The memory usage is stored in the field Size.
using System;
using System.Collections.Generic;
struct placeholder
{
    public string size;

    public placeholder(string input)
    {
        this.size = input;
    }
}
public class Program
{
    public static void Main()
    {
        List<placeholder> list = new List<placeholder>();
        list.Add(new placeholder("1"));
        list.Add(new placeholder("2"));
        list.Add(new placeholder("3"));
        list.Add(new placeholder("4"));
        // How to find the entry in list with the lowest placeholder.size?
        Console.WriteLine("Hello World");
    }
}
But how do I pick the one with the lowest memory when the size is stored as a string?
I could do it with a for loop, but is there something smarter?
There is nothing smarter than going through the list. Period.
It is a list: it has no inner magic knowledge about any special condition, so to find the one matching a specific condition (lowest memory) you must go through all the elements and evaluate them. That is basic, intrinsic logic; nothing smarter is possible here.
Take every element, compare its size to the size stored for the smallest element seen so far, and replace the smallest element if the current one is smaller (and define what to do on a tie).
You cannot avoid "visiting" each element of the entire list regardless of the solution you go with because in order to get the min item you're required to compare against every other item.
You can get the result via LINQ's Aggregate:
var min = list.Aggregate((l, r) => int.Parse(l.size) > int.Parse(r.size) ? r : l);
or use a typical for loop.
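For comparison, the plain-loop version might look like this (a sketch that assumes every size string parses as an int):

```csharp
using System;
using System.Collections.Generic;

struct placeholder
{
    public string size;
    public placeholder(string input) { this.size = input; }
}

static class MinFinder
{
    // Walk the list once, remembering the smallest entry seen so far.
    public static placeholder FindSmallest(List<placeholder> list)
    {
        placeholder smallest = list[0];
        int smallestSize = int.Parse(smallest.size);
        for (int i = 1; i < list.Count; i++)
        {
            int current = int.Parse(list[i].size);
            if (current < smallestSize)
            {
                smallest = list[i];
                smallestSize = current;
            }
        }
        return smallest;
    }
}
```

Unlike the Aggregate version, this parses each size only once instead of re-parsing the running minimum on every step.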
A typical solution for this problem is sorting the list and then taking the first or last item, depending on your sort order.
Related
Simply put - I'm starting out in C# and I'm trying to make a game right now (text based) where the computer generates a number between 20-30. Then the user takes a number from 1-3 away, then the computer takes a number between 1-3 away and the last person to take a number away to make 0 loses.
My question is - I want to setup an array for user answers, but I cannot figure out how to add an element to the array based on how many rounds it takes the user to finish the game. E.g. They could put in 1 a bunch of times and take their time - adding a lot of elements to the array, or they could do a bunch of 3s and make the game quick.
using System;

class Program
{
    static void Main(string[] args)
    {
        Random rnd = new Random();
        // Next(20, 30) returns a value from 20 to 29; use Next(20, 31) for 20-30
        int randomStart = rnd.Next(20, 30);
        Console.WriteLine("The starting number is {0}", randomStart);
        Console.ReadLine();
    }
}
I've tried experimenting with array placement and ideas but I left them out of the code that I've put here because it may be easier for you to guide me by just adding to it.
Thanks for the help.
You can't change the dimensions of an array dynamically in C#. If you need to add values at run time, you need to use a different collection type.
You might consider a standard List (https://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx) or an ArrayList (https://msdn.microsoft.com/en-us/library/system.collections.arraylist(v=vs.110).aspx).
Use a List instead of an array.
Visit http://csharp.net-informations.com/collection/list.htm
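For example, a List<int> can grow one entry per round, however long the game runs; the names and the scripted move-chooser below are only illustrative:

```csharp
using System;
using System.Collections.Generic;

public static class GameLog
{
    // Appends one entry per round until the running total reaches zero,
    // so the collection grows to exactly as many rounds as the game took.
    public static List<int> RecordMoves(int start, Func<int, int> chooseMove)
    {
        var userAnswers = new List<int>();
        int remaining = start;
        while (remaining > 0)
        {
            int take = chooseMove(remaining); // e.g. read from Console in the real game
            userAnswers.Add(take);            // no fixed size needed
            remaining -= take;
        }
        return userAnswers;
    }

    public static void Main()
    {
        // stand-in for user input: always take 3 (or whatever is left)
        var moves = RecordMoves(25, remaining => Math.Min(3, remaining));
        Console.WriteLine("Rounds played: " + moves.Count); // 9 rounds for 25
    }
}
```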
Let us assume that I have defined a function named computeValue(double x):
whose input is a double value
that returns a value obtained by performing a certain set of operations using the elements of an array, to be described below.
Also, we have the above mentioned array that
is some class's member
contains only 4 positions.
The 1st position contains a certain value, the 4th contains another value that will be the input to our function
(here comes the tricky bit), the 2nd and 3rd values of the array should be the result of a linear interpolation between positions 1 and 4 of the array. That is, if we modify the position 1 or 4 of the array, then positions 2 and 3 should change their values automatically according to the interpolation method.
My aim is to invoke a root-finding algorithm (such as Newton-Raphson, Secant Method, etc) that will aim to minimize the following expression:
f = CONSTANT - computeValue(array[4])
As you may have already observed, the problem is that every time my root-finding routine modifies the 4th element of my array to obtain a new solution, positions 2 and 3 of my array should be modified accordingly, given that they are the result of an interpolation (as mentioned in point 4 above), thus changing the result of computeValue.
What is a possible way of making the values of the array change dynamically as the root finding algorithm works its way towards the root? Maybe something to do with an array storing lambda expressions defining the interpolation?
It cannot be done with a classic array, but you can implement your own type that solves the problem. This type internally uses an array of length four and offers access by
public double this[int index]
{
    // get and set accessors
}
Here, you can write your own getters and setters, so that you can recalculate values as soon as the others were changed.
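A rough sketch of such a type might follow this shape (the evenly-spaced linear-interpolation formula here is only an assumption; substitute your actual interpolation method):

```csharp
public class InterpolatedArray
{
    private readonly double[] values = new double[4];

    public double this[int index]
    {
        get { return values[index]; }
        set
        {
            values[index] = value;
            if (index == 0 || index == 3)
            {
                // recompute the interior points whenever an endpoint changes,
                // assuming the four positions are evenly spaced
                double step = (values[3] - values[0]) / 3.0;
                values[1] = values[0] + step;
                values[2] = values[0] + 2.0 * step;
            }
        }
    }
}
```

A root-finding routine can now assign to position 3 and read back positions 1 and 2 without any extra bookkeeping.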
Instead of confusing yourself with array indices, this seems like an excellent time to create your own object. You could use JF Meier's approach and manually create or augment an array class, but I suggest just creating a new object entirely. You can make your interpolated points get-only properties that return the proper interpolation. Your object might look like this:
public class Interpolator
{
    public double Constant { get; set; } // same as your array[0]
    public double Value { get; set; }    // same as your array[3]

    public double Interpolation1 { get { return CalculateInterpolation1(); } }
    public double Interpolation2 { get { return CalculateInterpolation2(); } }

    private double CalculateInterpolation1()
    {
        // define your interpolation here
    }

    private double CalculateInterpolation2()
    {
        // define your interpolation here
    }
}
A brief .Net Fiddle demo
This question already has answers here:
Why does List<T>.Sort method reorder equal IComparable<T> elements?
(7 answers)
Closed 8 years ago.
I've found that when I implement CompareTo(..) for one of my classes, the ordering is inconsistent between machines. When two objects come out equal, they don't always sort in the same order. I'd have assumed some single-threaded iterative approach would be used to sort, so I would have expected a consistent ordering.
Given the following class..
class Property : IComparable<Property>
{
    public int Value;
    public string Name;

    public int CompareTo(Property other)
    {
        if (this.Value > other.Value)
            return 1;
        if (this.Value < other.Value)
            return -1;
        return 0;
    }
}
And given the following objects
list.Add(new Property { Name = "apple", Value = 1 });
list.Add(new Property { Name = "grape", Value = 2 });
list.Add(new Property { Name = "banana", Value = 1 });
When I execute
list.Sort();
Then, when stepping through the list using indexes, the order of banana and apple changes depending on which PC I am executing the code on. Why is this?
List.Sort doesn't provide a stable sort, as per MSDN:
This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
If you need a stable sort consider using OrderBy from LINQ, which is a stable sort.
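To illustrate with the class from the question (redeclared here so the snippet stands alone):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Property
{
    public int Value;
    public string Name;
}

public static class StableSortDemo
{
    // OrderBy is documented as a stable sort: items with equal keys keep
    // their original relative order, so apple stays before banana.
    public static List<string> SortNamesByValue(List<Property> list)
    {
        return list.OrderBy(p => p.Value).Select(p => p.Name).ToList();
    }

    public static void Main()
    {
        var list = new List<Property>
        {
            new Property { Name = "apple", Value = 1 },
            new Property { Name = "grape", Value = 2 },
            new Property { Name = "banana", Value = 1 },
        };
        // prints the same order on every machine, every run
        Console.WriteLine(string.Join(", ", SortNamesByValue(list)));
    }
}
```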
Since the CompareTo function only compares off the integer (Value) axis, there is no guarantee about the ordering along the other (Name) axis.
Even an unstable sort (e.g. List.Sort) with the same implementation and the same input sequence should produce the same sorted output. Thus, given an incomplete compare function as per above, the results may differ across environments (PCs) for one of the following reasons:
The sort implementation uses a different or environment-sensitive algorithm, or;
The initial order of the items differs across environments.
A stable sort (e.g. OrderBy in LINQ to Objects) will produce consistent output across different environments if and only if the input sequence is in the same order in each environment: a stable sort is free of issue #1 but still affected by #2.
(While the input order is guaranteed given the posted sample code, it might not be in a "real" situation, such as if the items initially came from an unordered source.)
I would likely update the compare function such that it defines a total ordering over all item attributes - then the results will be consistent for all input sequences over all environments and sorting methods.
An example of such an implementation:
public int CompareTo(Property other)
{
var valueCmp = this.Value.CompareTo(other.Value);
if (valueCmp == 0) {
// Same Value, order by Name
return this.Name.CompareTo(other.Name);
} else {
// Different Values
return valueCmp;
}
}
Some sorting methods guarantee that items which compare equal will appear in the same sequence relative to each other after sorting as they did before sorting, but such a guarantee generally comes with a cost. Very few sorting algorithms can move an item directly from its original position to its final position in a single step (and those which do so are generally slow). Instead, sorting algorithms move items around a few times, and generally don't keep track of where items came from. In Anthony Hoare's "quicksort" algorithm, a typical step will pick some value and move things around so that everything which compares less than that value will end up in array elements which precede all the items which are greater than that value. Ignoring array elements that precisely equal the "pivot" value, there are four categories of items:
Items which are less than the pivot value, and started in parts of the array that precede the pivot value's final location.
Items which are less than the pivot value, but started in parts of the array that follow the pivot value's final location.
Items which are greater than the pivot value, but started in parts of the array that precede the pivot value's final location.
Items which are greater than the pivot value, and started in parts of the array that follow the pivot value's final location.
When moving elements to separate those above and below the pivot value, items in the first and fourth categories (i.e. those which started and ended on the same side of the pivot value's final location) generally remain in their original order, but are intermixed with those which started on the other side, which get copied in reverse order. Because the values from each side get intermixed, there's no way the system can restore the sequence of items which crossed from one side of the pivot to the other unless it keeps track of which items those are. Such tracking would require extra time and memory, which Sort opts not to spend.
If two runs of Sort on a given list always perform the same sequence of steps, then the final ordering should match. The documentation for Sort, however, does not specify the exact sequence of steps it performs, and it would be possible that one machine has a version of .NET which does things slightly differently from the other. The essential thing to note is that if you need a "stable" sort [one in which the relative sequence of equal items after sorting is guaranteed to match the sequence before sorting], you will either need to use a sorting method which guarantees such behavior, or else add an additional field to your items which could be preloaded with their initial position in the array, and make your CompareTo function check that field when items are otherwise equal.
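The last option, preloading each item with its initial position and using it as a tie-breaker, might be sketched like this (the OriginalIndex field is illustrative, not from the question):

```csharp
using System;
using System.Collections.Generic;

public class Item : IComparable<Item>
{
    public int Value;
    public int OriginalIndex; // preloaded with the pre-sort position

    public int CompareTo(Item other)
    {
        int byValue = Value.CompareTo(other.Value);
        if (byValue != 0) return byValue;
        // otherwise-equal items fall back to their pre-sort position,
        // which makes even an unstable sort produce a stable result
        return OriginalIndex.CompareTo(other.OriginalIndex);
    }
}

public static class StabilizedSort
{
    public static void SortStably(List<Item> items)
    {
        for (int i = 0; i < items.Count; i++)
            items[i].OriginalIndex = i;
        items.Sort(); // deterministic even though List<T>.Sort is unstable
    }
}
```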
Given an array of size 5, with five numbers in it, sort them from smallest to largest without comparing them. (Hint: access time is O(n).)
I searched a lot but couldn't figure out how it can be done. Which algorithm or data structure gives O(n)? I am unaware of it.
I suppose you need Counting sort: it runs in linear time, but it takes some extra memory and depends on the min/max values of your initial array.
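A minimal counting sort might look like this (a sketch assuming non-negative integers with a known, small maximum):

```csharp
using System;

public static class CountingSortDemo
{
    public static int[] CountingSort(int[] input, int max)
    {
        // tally how many times each value occurs
        var counts = new int[max + 1];
        foreach (int v in input)
            counts[v]++;

        // walk the tallies in value order to emit the sorted output;
        // no element is ever compared to another element
        var result = new int[input.Length];
        int i = 0;
        for (int v = 0; v <= max; v++)
            for (int c = 0; c < counts[v]; c++)
                result[i++] = v;
        return result;
    }

    public static void Main()
    {
        int[] sorted = CountingSort(new[] { 4, 1, 3, 1, 2 }, 4);
        Console.WriteLine(string.Join(" ", sorted)); // 1 1 2 3 4
    }
}
```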
The Counting Sort will do this for you, although if I were in an interview and on the spot I'd probably do something like the below, which is vaguely similar, as I can never remember these "classic" algorithms off the top of my head!
The key idea here is to use each actual unsorted integer value as an index into a target array that contains N elements, where N is the max of the values to be sorted.
I am using a simple class to record both the value and the number of times it occurred so you can reconstruct an actual array from it if you need to keep discrete values that occurred multiple times in the original array.
So all you need to do is walk the unsorted array once, putting each value into the corresponding index in the target array, and (ignoring empty elements) your values are already sorted from smallest to largest without having ever compared them to one another.
(I personally am not a fan of interview questions like this where the answer is "oh, use Counting Sort" or whatever - I would hope that the interviewer asking this question would be genuinely interested to see what approach you took to solving a new problem, regardless of if you got a strictly correct answer or not)
The performance of the below is O(n), meaning it runs in linear time (1 element takes X amount of time, 10 elements take 10X, etc.), but it can use a lot of memory if the max element is large, it cannot sort in place, it will only work with primitives, and it's not something I'd hope to ever see in production code :)
using System;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main()
    {
        // create an unsorted list of random numbers
        var unsorted = new List<int>();
        Random rand = new Random();
        for (int x = 0; x < 10; x++)
        {
            unsorted.Add(rand.Next(1, 10));
        }

        // create an array big enough to hold unsorted.Max() elements
        // (note this indirectly performs a comparison of the elements,
        // but not for the sorting itself, so I guess that is allowable :)
        var sorted = new NumberCount[unsorted.Max() + 1];

        // loop over the unsorted list
        for (int index = 0; index < unsorted.Count; index++)
        {
            // use the value at the current index as an index into the target array
            var value = unsorted[index];
            if (sorted[value] != null)
            {
                // we've seen this value before; just increment the count
                sorted[value].Count++;
            }
            else
            {
                // insert the current value at its index position
                sorted[value] = new NumberCount { Value = value, Count = 1 };
            }
        }

        // ignore all elements of sorted that are null because they were
        // not part of the original list of numbers
        foreach (var r in sorted.Where(r => r != null))
        {
            Console.WriteLine("{0}, occurs {1} times", r.Value, r.Count);
        }
    }
}

// just a POCO to hold a number and the number of times it occurred
public class NumberCount
{
    public int Value { get; set; }
    public int Count { get; set; }
}
I can't figure out how to remove duplicate entries from an array of structs.
I have this struct:
public struct stAppInfo
{
    public string sTitle;
    public string sRelativePath;
    public string sCmdLine;
    public bool bFindInstalled;
    public string sFindTitle;
    public string sFindVersion;
    public bool bChecked;
}
I have changed the stAppInfo struct to a class here, thanks to Jon Skeet.
The code is like this: (short version)
stAppInfo[] appInfo = new stAppInfo[listView1.Items.Count];
int i = 0;
foreach (ListViewItem item in listView1.Items)
{
    appInfo[i].sTitle = item.Text;
    appInfo[i].sRelativePath = item.SubItems[1].Text;
    appInfo[i].sCmdLine = item.SubItems[2].Text;
    appInfo[i].bFindInstalled = item.SubItems[3].Text.Equals("Sí");
    appInfo[i].sFindTitle = item.SubItems[4].Text;
    appInfo[i].sFindVersion = item.SubItems[5].Text;
    appInfo[i].bChecked = item.SubItems[6].Text.Equals("Sí");
    i++;
}
I need the appInfo array to be unique in the sTitle and sRelativePath members; the other members can be duplicated.
EDIT:
Thanks to all for the answers, but this application is "portable": I mean I just need the .exe file, and I don't want to add other files like referenced *.dll files, so please no external references. This app is intended to be used from a pen drive.
All data comes from a *.ini file. What I do is (pseudocode):
ReadFile()
FillDataFromFileInAppInfoArray()
DeleteDuplicates()
FillListViewControl()
When I want to save that data into a file, I have these options:
Using the ListView data
Using the appInfo array (is this faster?)
Any other?
EDIT2:
Big thanks to: Jon Skeet, Michael Hays thanks for your time guys!!
Firstly, please don't use mutable structs. They're a bad idea in all kinds of ways.
Secondly, please don't use public fields. Fields should be an implementation detail - use properties.
Thirdly, it's not at all clear to me that this should be a struct. It looks rather large, and not particularly "a single value".
Fourthly, please follow the .NET naming conventions so your code fits in with all the rest of the code written in .NET.
Fifthly, you can't remove items from an array, as arrays are created with a fixed size... but you can create a new array with only unique elements.
LINQ to Objects will let you do that already using GroupBy as shown by Albin, but a slightly neater (in my view) approach is to use DistinctBy from MoreLINQ:
var unique = appInfo.DistinctBy(x => new { x.sTitle, x.sRelativePath })
                    .ToArray();
This is generally more efficient than GroupBy, and also more elegant in my view.
Personally I generally prefer using List<T> over arrays, but the above will create an array for you.
Note that with this code there can still be two items with the same title, and there can still be two items with the same relative path - there just can't be two items with the same relative path and title. If there are duplicate items, DistinctBy will always yield the first such item from the input sequence.
EDIT: Just to satisfy Michael, you don't actually need to create an array to start with, or create an array afterwards if you don't need it:
var query = listView1.Items
    .Cast<ListViewItem>()
    .Select(item => new stAppInfo
    {
        sTitle = item.Text,
        sRelativePath = item.SubItems[1].Text,
        bFindInstalled = item.SubItems[3].Text == "Sí",
        sFindTitle = item.SubItems[4].Text,
        sFindVersion = item.SubItems[5].Text,
        bChecked = item.SubItems[6].Text == "Sí"
    })
    .DistinctBy(x => new { x.sTitle, x.sRelativePath });
That will give you an IEnumerable<stAppInfo> which is lazily streamed. Note that if you iterate over it more than once, however, it will iterate over listView1.Items the same number of times, performing the same uniqueness comparisons each time.
I prefer this approach over Michael's as it makes the "distinct by" columns very clear in semantic meaning, and removes the repetition of the code used to extract those columns from a ListViewItem. Yes, it involves building more objects, but I prefer clarity over efficiency until benchmarking has proved that the more efficient code is actually required.
What you need is a Set. It ensures that the items entered into it are unique (based on some qualifier which you will set up). Here is how it is done:
First, change your struct to a class. There is really no getting around that.
Second, provide an implementation of IEqualityComparer<stAppInfo>. It may be a hassle, but it is the thing that makes your set work (which we'll see in a moment):
public class AppInfoComparer : IEqualityComparer<stAppInfo>
{
    public bool Equals(stAppInfo x, stAppInfo y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return Equals(x.sTitle, y.sTitle)
            && Equals(x.sRelativePath, y.sRelativePath);
    }

    // this part is a pain, but this one is already written
    // specifically for your question.
    public int GetHashCode(stAppInfo obj)
    {
        unchecked
        {
            return ((obj.sTitle != null ? obj.sTitle.GetHashCode() : 0) * 397)
                 ^ (obj.sRelativePath != null ? obj.sRelativePath.GetHashCode() : 0);
        }
    }
}
Then, when it is time to make your set, do this:
var appInfoSet = new HashSet<stAppInfo>(new AppInfoComparer());
foreach (ListViewItem item in listView1.Items)
{
    var newItem = new stAppInfo
    {
        sTitle = item.Text,
        sRelativePath = item.SubItems[1].Text,
        sCmdLine = item.SubItems[2].Text,
        bFindInstalled = item.SubItems[3].Text.Equals("Sí"),
        sFindTitle = item.SubItems[4].Text,
        sFindVersion = item.SubItems[5].Text,
        bChecked = item.SubItems[6].Text.Equals("Sí")
    };
    appInfoSet.Add(newItem);
}
appInfoSet now contains a collection of stAppInfo objects with unique Title/Path combinations, as per your requirement. If you must have an array, do this:
stAppInfo[] appInfo = appInfoSet.ToArray();
Note: I chose this implementation because it looks like the way you are already doing things. It has an easy-to-read foreach loop (though I do not need the counter variable). It does not involve LINQ (which can be troublesome if you aren't familiar with it). It requires no external libraries outside of what the .NET Framework provides to you. And finally, it provides an array just like you've asked. As for reading the file in from an INI file, hopefully you see that the only thing that will change is your foreach loop.
Update
Hash codes can be a pain. You might have been wondering why you need to compute them at all. After all, couldn't you just compare the values of the title and relative path after each insert? Well sure, of course you could, and that's exactly how another set, called SortedSet, works. SortedSet makes you implement IComparer<T> in the same way that I implemented IEqualityComparer<T> above.
So, in this case, AppInfoComparer would look like this:
private class AppInfoComparer : IComparer<stAppInfo>
{
    // return -1 if x < y, 1 if x > y, or 0 if they are equal
    public int Compare(stAppInfo x, stAppInfo y)
    {
        var comparison = x.sTitle.CompareTo(y.sTitle);
        if (comparison != 0) return comparison;
        return x.sRelativePath.CompareTo(y.sRelativePath);
    }
}
And then the only other change you need to make is to use SortedSet instead of HashSet:
var appInfoSet = new SortedSet<stAppInfo>(new AppInfoComparer());
It's so much easier, in fact, that you are probably wondering what gives. The reason that most people choose HashSet over SortedSet is performance. But you should balance that with how much you actually care, since you'll be maintaining that code. I personally use a tool called ReSharper, which is available for Visual Studio, and it computes these hash functions for me, because I think computing them is a pain, too.
(I'll talk about the complexity of the two approaches, but if you already know it, or are not interested, feel free to skip it.)
SortedSet has a complexity of O(log n): each time you enter a new item, it will effectively go to the halfway point of your set and compare. If it doesn't find your entry, it will go to the halfway point between its last guess and the group to the left or right of that guess, quickly whittling down the places for your element to hide. For a million entries, this takes about 20 attempts. Not bad at all. But if you've chosen a good hashing function, then HashSet can do the same job, on average, in one comparison, which is O(1). And before you think 20 is not really that big a deal compared to 1 (after all, computers are pretty quick), remember that you had to insert those million items, so while HashSet took about a million attempts to build that set up, SortedSet took several million attempts.
But there is a price: HashSet breaks down (very badly) if you choose a poor hashing function. If the hash codes for lots of items are not unique, then they will collide in the HashSet, which will then have to try again and again. If lots of items collide with the exact same hash code, then they will retrace each other's steps, and you will be waiting a long time. The millionth entry will take a million times a million attempts: HashSet has devolved into O(n^2).
What's important with those big-O notations (which is what O(1), O(log n), and O(n^2) are, in fact) is how quickly the number in parentheses grows as you increase n. Slow growth or no growth is best. Quick growth is sometimes unavoidable. For a dozen or even a hundred items, the difference may be negligible, but if you can get into the habit of writing efficient functions as easily as their alternatives, then it's worth conditioning yourself to do so, since problems are cheapest to correct closest to the point where you created them.
Use LINQ to Objects: group by the things that should be unique and then select the first item in each group.
var noDupes = appInfo.GroupBy(x => new { x.sTitle, x.sRelativePath })
                     .Select(g => g.First())
                     .ToArray();
Note: an array of structs (a value type) combined with sorting or any kind of search can mean a lot of boxing/unboxing operations.
I would suggest sticking with the recommendations of Jon and Henk: make it a class and use a generic List<T>.
Use LINQ GroupBy or DistinctBy; for me it is much simpler to use the built-in GroupBy, but it is also interesting to take a look at the other popular library, perhaps it gives you some insights.
BTW, also take a look at the LambdaComparer; it will make your life easier each time you need this kind of in-place sorting/searching, etc...
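The LambdaComparer mentioned above can be approximated with a small generic wrapper (an illustrative sketch, not the actual library's API):

```csharp
using System;
using System.Collections.Generic;

// Builds an IEqualityComparer<T> from two delegates, so you don't have to
// write a dedicated comparer class for every pair of key members.
public class LambdaComparer<T> : IEqualityComparer<T>
{
    private readonly Func<T, T, bool> equalsFunc;
    private readonly Func<T, int> hashFunc;

    public LambdaComparer(Func<T, T, bool> equalsFunc, Func<T, int> hashFunc)
    {
        this.equalsFunc = equalsFunc;
        this.hashFunc = hashFunc;
    }

    public bool Equals(T x, T y) { return equalsFunc(x, y); }
    public int GetHashCode(T obj) { return hashFunc(obj); }
}
```

For example, a HashSet keyed on the two string members could then be built as `new HashSet<stAppInfo>(new LambdaComparer<stAppInfo>((a, b) => a.sTitle == b.sTitle && a.sRelativePath == b.sRelativePath, a => (a.sTitle + "|" + a.sRelativePath).GetHashCode()))`.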