I have inserted values into a Stack in C#. I need the values of mystack[0] and mystack[1]. How do I do this? I have tried the methods on Stack, but please give me a hint of the code and I will try it.
You can use ElementAt() for this.
Stack<Int32> foo = new Stack<Int32>();
foo.Push(5); //element 1
foo.Push(1); //element 0
int val = foo.ElementAt(1); //This is 5
Since stacks are last in, first out, if you want to get the first item you added to the stack, you can use:
int val = foo.ElementAt(foo.Count - 1);
Keep in mind, ElementAt is a LINQ extension method that enumerates the stack until it reaches the desired index and returns the element found there, so it is O(n). For large stacks, or where performance is critical, you might want to consider using another data structure such as List<T>.
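If you need several values by position from a stack you already have, another option (not mentioned above, just a sketch) is to copy it to an array once and index into that:

// Copies the stack once (top element first), then gives O(1) access by position.
int[] snapshot = foo.ToArray();
int top = snapshot[0];                      // most recently pushed (1 in the example above)
int bottom = snapshot[snapshot.Length - 1]; // first item pushed (5 in the example above)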
If you need the items by index, perhaps a List<T> would be a more appropriate data structure?
A stack is intended to only allow you to get the most recently inserted item. There are ways to bypass that behavior, but surely if you need the items by index this works better:
var myList = new List<Int32>();
myList.Add(100);
myList.Add(200);
myList.Add(300);
myList.Add(400);
Console.Out.WriteLine(myList[2]); // Prints "300"
In a proficiency assessment as part of Exam 98-361, Software Development Fundamentals, this question pops up:
Scenario 3-3: Using Stacks
You are writing a program that uses two stacks. The data in each stack is already in descending order. You need to process the contents of both stacks in such a way that the output is printed on the screen in ascending order. How would you write such a program?
Now, I have this scenario already coded. My solution is to iterate over the two separate stacks, merge them into a List by popping their items off until both stacks are empty, and then sort the list into the correct order.
However, it strikes me that the question is a bit vague on whether or not I should be merging the stacks. It's kind of implied, but it kind of isn't.
If you were reading this question, how would you interpret it?
Note that I'm not actually taking this exam, just prepping for it. It's more of a requirements interpretation issue, at this point, in my mind.
My point of view:
DECLARE NEW_LIST
WHILE STACK_A.COUNT() > 0 AND STACK_B.COUNT() > 0
    IF STACK_A.PEEK() > STACK_B.PEEK()
        NEW_LIST.ADD(STACK_A.POP())
    ELSE
        NEW_LIST.ADD(STACK_B.POP())
WHILE STACK_A.COUNT() > 0
    NEW_LIST.ADD(STACK_A.POP())
WHILE STACK_B.COUNT() > 0
    NEW_LIST.ADD(STACK_B.POP())
Now you have NEW_LIST, which is sorted in descending order - you just need to decide on the printing order (reverse it for ascending order).
Assuming m and n are the initial sizes of the two stacks, merging both stacks into one list and then sorting it will cost you more - O((n+m) log(n+m)) for a quicksort, which is obviously slower than the O(m+n) of the solution above.
I think you're correct. That was my initial interpretation (although I agree it seems a bit vague). And on further thought, it seems to me there'd be no need to specify two stacks unless they were supposed to be merged.
This question sounds like a set-up for a merge sort; to get the contents of both in a combined, sorted (but reversed) order, you repeatedly peek the top of each stack to see which value is higher (i.e., earlier in a descending-order sort), pop the value off that stack and push it onto a third stack. Repeat until both source stacks are empty; because your third stack is FILO, the items you pushed in descending order will pop off in ascending order.
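A minimal C# sketch of that approach, assuming each source stack pops its values largest-first (the method name is mine, not from the exam):

using System;
using System.Collections.Generic;

static void PrintAscending(Stack<int> a, Stack<int> b)
{
    var merged = new Stack<int>();

    // Move the higher of the two tops across each time, so 'merged' is filled
    // in descending order and therefore pops back off in ascending order.
    while (a.Count > 0 || b.Count > 0)
    {
        if (b.Count == 0 || (a.Count > 0 && a.Peek() >= b.Peek()))
            merged.Push(a.Pop());
        else
            merged.Push(b.Pop());
    }

    while (merged.Count > 0)
        Console.WriteLine(merged.Pop());
}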
I'm writing an application with WatiN. It's great, but a performance analysis of my program shows that over 50% of execution time is spent looping through lists of elements.
For example:
foreach (TextField bT in browser.TextFields)
{
    // ...
}
Is very slow.
I seem to remember seeing somewhere there is a faster way of doing this in WatiN, but unfortunately I can't find the page again.
Accessing the number of elements also seems to be slow, e.g.:
browser.CheckBoxes.Count
Thanks for any tips,
Chris
I think I could answer you better if I had a better idea of what you were trying to do, but I can share some observations on what I've learned with WatiN so far.
The more specific your selectors are, the faster things will go. Avoid using "browser.Elements" as that is really generic. I'm not sure that it saves much, but doing something like browser.Body.Elements throws the header elements out of the scope of things to check and may save a few calculations.
When I say "scope", consider that WatiN always starts with the entire DOM. Can you think of ways to limit the scope of elements perhaps to the text fields within the main div on your page? WatiN returns Elements and ElementCollections, each of which may have its own ElementCollection. That div probably has a specific ID, so you can do something like
var textFields = ie.Div("divId").TextFields;
Look for opportunities to be more specific, and you can use LINQ to describe what you want more clearly. For example, can you write something like:
ie.Body.TextFields
    .Where(tf => !string.IsNullOrWhiteSpace(tf.ClassName) && tf.ClassName.Contains("classname"))
    .ToList()
    .ForEach(tf => tf.Value = "Your Text");
I would refine that further by reducing the number of times I scan the collection by doing something like:
ie.Body.TextFields.ToList()
    .ForEach(tf => {
        if (!string.IsNullOrWhiteSpace(tf.ClassName) && tf.ClassName.Contains("classname")) {
            tf.Value = "Your Text";
        }
    });
The "Find.By*" specifiers also help WatiN operate on the collections you want faster and are a more elegant short-hand for what I wrote above:
ie.Body.TextFields.Filter(Find.ByClass("class")).ToList().ForEach(tf => tf.Value = "Your Text");
And as a last piece of advice, this project lets you find elements using jQuery/CSS style selectors.
So, tl;dr: Narrow down the scope of what you're looking for, and be specific.
Hope that helps. I'm looking for ways to speed up my own tests.
If you really need to iterate through all text fields, there is no other way. As #Xaqron pointed out, it depends on IE. But maybe you just need to iterate through the text fields of, e.g., a specific <div/>? Finding it first, and then iterating through its text fields, would be faster.
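For example, something along these lines (the div id is made up) keeps WatiN from walking the whole page:

// Hypothetical: locate the containing <div/> once, then only its text fields
// are enumerated rather than every text field in the document.
var container = browser.Div(Find.ById("searchForm"));
foreach (TextField field in container.TextFields)
{
    field.Value = "Your Text";
}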
Thanks Dahv for a really detailed answer. In my case I've sped up my tests by about 10x using a number of tricks, some similar to yours:
Refining scope as you and prostynick suggested (in my case using Form1.TextField etc.)
First checking if browser.Html matches my regex before checking whether individual fields do
Using the Gehtsoft.PCRE regex wrapper - its native-code regex matching is far faster than .NET's for small haystacks. So to find a TextField I'd do:
Gehtsoft.PCRE.Regex regexString = new Gehtsoft.PCRE.Regex("[Nn]ame");
foreach (TextField bT in browser.TextFields)
{
    // Skip if no match
    if (!regexString.Execute(bT.Name).Success) continue;
    // ... work with the matching field here
}
Before, I was looping over a list of regexes, and inside that I was looping over TextFields. Making the TextFields loop the outer loop improved speed by about 3x.
I was reading a blog post on MSDN about iterators which talks about how Concat has O(m^2) performance, where m is the length of the first IEnumerable. One of the comments, by richard_deeming on the second page, provides some sample code which he says is much faster. I don't really understand why it's faster and was hoping someone could explain it to me.
Thanks.
He's simply saying that instead of using Concat to create an iterator which is effectively equivalent to creating an iterator over:
...(((a+b)+c)+d)...
which is what this code causes:
for (int i = 0; i < length; ++i)
ones = ones.Concat(list);
you should keep a list of the iterators you need and enumerate each of those iterators in turn.
This way you don't end up with a lot of iterators stacked on top of the first collection of elements.
Also, it's worth mentioning that the claim that Concat is O(m^2) is not really right. It's true in this specific case, but that's like saying + is O(m^2) when you're calculating (((a+b)+c)+d)...; it's the specific usage pattern that makes it O(m^2).
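A rough sketch of the two shapes being compared (the names are mine, not from the blog post):

using System.Collections.Generic;
using System.Linq;

// Nested approach: each Concat wraps the previous iterator, so enumerating the
// result walks through 'length' layers of wrappers for every element.
static IEnumerable<int> NestedConcat(IEnumerable<int> list, int length)
{
    IEnumerable<int> ones = Enumerable.Empty<int>();
    for (int i = 0; i < length; ++i)
        ones = ones.Concat(list);
    return ones;
}

// Flat approach: remember the sequences and yield from each in turn, so every
// element passes through a single iterator instead of a stack of them.
static IEnumerable<int> FlatConcat(IEnumerable<int> list, int length)
{
    var parts = new List<IEnumerable<int>>();
    for (int i = 0; i < length; ++i)
        parts.Add(list);

    foreach (var part in parts)
        foreach (var item in part)
            yield return item;
}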
I don't think the blog post is saying that Concat is O(m^2); at least, it shouldn't be - at one point the fact that Concat is O(m+n) is mentioned, and this is much more believable. It's the use of Concat in a loop as given on that post that is O(m^2) - and I don't think this is a particularly shocking finding, as you'd expect many calls to multiply up the complexity!
Richard's follow-up suggests deferring the Concat operations until they're needed by storing a list of iterators and then moving through each of them, starting from the first and, when that's exhausted, moving on to the next. That makes perfect sense - however, for 'light usage' Concat as-is would be fine.
I was asked this question in an interview. Although the interview was for a .NET position, he asked me this question in the context of Java, because I had also mentioned Java on my resume.
How to find the index of an element having value X in an array?
I said iterating from the first element to the last and checking whether the value is X would give the result. He asked for a method involving fewer iterations; I said binary search, but that is only possible for a sorted array. I tried suggesting the IndexOf function in the Array class, but nothing from my side answered the question.
Is there any fast way of getting the index of an element having value X in an array?
As long as there is no knowledge about the array (is it sorted? ascending or descending? etc.), there is no way of finding an element without inspecting each one.
Also, that is exactly what indexOf does (when using lists).
How to find the index of an element having value X in an array ?
This would be fast:
int getXIndex(int x) {
    myArray[0] = x;   // put x there ourselves, then "find" it at index 0
    return 0;
}
A practical way of finding it faster is parallel processing.
Just divide the array into N parts and assign each part to a thread that iterates through the elements of its part until the value is found. N should preferably be the number of processor cores.
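A rough C# sketch of that idea (the names are mine; .NET's Parallel.For is used here to do the chunking per core):

using System.Threading;
using System.Threading.Tasks;

// Hypothetical helper: the loop partitions the index range across the available
// cores, and is stopped as soon as any worker finds a match. Returns -1 if the
// value isn't present.
static int ParallelIndexOf(int[] data, int x)
{
    int found = -1;
    Parallel.For(0, data.Length, (i, state) =>
    {
        if (data[i] == x)
        {
            Interlocked.CompareExchange(ref found, i, -1); // record the hit once
            state.Stop();                                  // ask the other workers to stop
        }
    });
    return found;
}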
If a binary search isn't possible (because the array isn't sorted) and you don't have some kind of advanced search index, the only way I could think of that isn't O(n) is if the item's position in the array is a function of the item itself (like, if the array is [10, 20, 30, 40], the position of an element n is (n / 10) - 1).
Maybe he wanted to test your knowledge of Java.
There is a utility class called Arrays; it contains various methods for manipulating arrays (such as sorting and searching):
http://download.oracle.com/javase/6/docs/api/java/util/Arrays.html
In two lines you can have an O(n log n) result:
Arrays.sort(list); //O(n * log n)
Arrays.binarySearch(list, 88); //O(log n)
Puneet - in .NET it's:
string[] testArray = {"fred", "bill"};
var indexOffset = Array.IndexOf(testArray, "fred");
[edit] - having read the question properly now :) an alternative in LINQ would be:
string[] testArray = { "cat", "dog", "banana", "orange" };
int firstItem = testArray.Select((item, index) => new
{
ItemName = item,
Position = index
}).Where(i => i.ItemName == "banana")
.First()
.Position;
This, of course, would find the FIRST occurrence of the string. Subsequent duplicates would require additional logic, but then so would a looped approach.
jim
It's a question about data structures and algorithms (although a very simple data structure). It goes beyond the language you are using.
If the array is ordered you can get O(log n) using binary search, with a modified version of it for border cases (not always using (a+b)/2 as the pivot point, but that's a pretty sophisticated quirk).
If the array is not ordered then... good luck.
He may be asking you about what methods Java gives you to find an item. But anyway, they're not faster; they can only be simpler to use (than a for-each / compare / return).
There's another solution: creating an auxiliary structure to do a faster search (like a hashmap), but, OF COURSE, it's more expensive to create it and use it once than to do a simple linear search.
Take a perfectly unsorted array, just a list of numbers in memory. All the machine can do is look at individual numbers in memory, and check if they are the right number. This is the "password cracker problem". There is no faster way than to search from the beginning until the correct value is hit.
Are you sure about the question? I once got a question somewhat similar to yours.
Given a sorted array, there is one element "x" whose value is the same as its index; find the index of that element.
For example:
// index: 0  1  2  3  4  5  6  7  8  9  10
int a[11] = {1, 3, 5, 5, 6, 6, 6, 8, 9, 10, 11};
At index 6 the value and the index are the same, so for this array a the answer should be 6.
This is not an answer; it's here in case something was missed in the original question, to clarify that.
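For that variant, if the values are distinct integers, a[i] - i is non-decreasing and you can binary-search for the fixed point; note that the example above contains duplicates, where this can miss the answer and a linear scan is the safe fallback. A sketch in C#:

// Binary search for an index where a[i] == i. Assumes a is sorted ascending
// with distinct integer values, so a[i] - i never decreases.
static int FindFixedPoint(int[] a)
{
    int lo = 0, hi = a.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == mid) return mid;   // value matches its index
        if (a[mid] < mid) lo = mid + 1;  // everything at or left of mid is too small
        else hi = mid - 1;               // a[mid] > mid: look to the left
    }
    return -1; // no such element
}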
If the only information you have is the fact that it's an unsorted array, with no relationship between index and value, and with no auxiliary data structures, then you potentially have to examine every element to see if it holds the information you want.
However, interviews are meant to separate the wheat from the chaff so it's important to realise that they want to see how you approach problems. Hence the idea is to ask questions to see if any more information is (or could be made) available, information that can make your search more efficient.
Questions like:
1/ Does the data change very often?
If not, then you can use an extra data structure.
For example, maintain a dirty flag which is initially true. When you want to find an item and the flag is true, build that extra structure (sorted array, tree, hash or whatever) which will greatly speed up searches, then set the dirty flag to false and use that structure to find the item.
If you want to find an item and the dirty flag is false, just use the structure, no need to rebuild it.
Of course, any changes to the data should set the dirty flag to true so that the next search rebuilds the structure.
This will greatly speed up (through amortisation) queries for data that's read far more often than written.
In other words, the first search after a change will be relatively slow but subsequent searches can be much faster.
You'll probably want to wrap the array inside a class so that you can control the dirty flag correctly (see the sketch after this list).
2/ Are we allowed to use a different data structure than a raw array?
This will be similar to the first point given above. If we modify the data structure from an array into an arbitrary class containing the array, you can still get all the advantages such as quick random access to each element.
But we gain the ability to update extra information within the data structure whenever the data changes.
So, rather than using a dirty flag and doing a large update on the next search, we can make small changes to the extra information whenever the array is changed.
This gets rid of the slow response of the first search after a change by amortising the cost across all changes (each change having a small cost).
3/ How many items will typically be in the list?
This is actually more important than most people realise.
All talk of optimisation tends to be useless unless your data sets are relatively large and performance is actually important.
For example, if you have a 100-item array, it's quite acceptable to use even the brain-dead bubble sort since the difference in timings between that and the fastest sort you can find tend to be irrelevant (unless you need to do it thousands of times per second of course).
For this case, finding the first index for a given value, it's probably perfectly acceptable to do a sequential search as long as your array stays under a certain size.
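As a sketch of point 1 above (the class and member names are made up, and this only illustrates the amortised rebuild, not a production design):

using System.Collections.Generic;

// Wraps the raw array; the value -> first-index dictionary is rebuilt lazily
// on the first search after any change.
class IndexedArray
{
    private readonly int[] data;
    private Dictionary<int, int> index;   // value -> first position
    private bool dirty = true;

    public IndexedArray(int[] data) { this.data = data; }

    public void Set(int position, int value)
    {
        data[position] = value;
        dirty = true;                     // any write invalidates the index
    }

    public int IndexOf(int value)
    {
        if (dirty)
        {
            index = new Dictionary<int, int>();
            for (int i = data.Length - 1; i >= 0; i--)
                index[data[i]] = i;       // iterate backwards so the first occurrence wins
            dirty = false;
        }
        return index.TryGetValue(value, out var at) ? at : -1;
    }
}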
The bottom line is that you're there to prove your worth, and the interviewer is (usually) there to guide you. Unless they're sadistic, they're quite happy for you to ask them questions to try and narrow down the scope of the problem.
Ask the questions (as you have about the possibility that the data may be sorted). They should be impressed with your approach even if you can't come up with a solution.
In fact (and I've done this in the past), they may reject all your possible approaches (no, it's not sorted; no, no other data structures are allowed; and so on) just to see how far you get.
And maybe, just maybe, like the Kobayashi Maru, it may not be about winning - it may be about how you deal with failure :-)