I have a 2D array of shape [3000, 3], and I have to find the Euclidean distances between the 3000 values in the first dimension, 3 times (once per column in the second dimension).
What I am doing now is writing a nested for loop. I looked for ways of making it faster, but the only thing I found was setting up a structure as described here.
Perhaps doing 3 separate for loops would be faster than a nested loop. Does anyone know how the processing time compares in this case?
It won't matter at all whether you run a loop three times via a nested loop or via separate loops, as long as the number of iterations is the same.
If you can improve your algorithm, so that you need fewer iterations (fewer than 3000 x 3), that might get you somewhere.
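If the array lives in NumPy (an assumption; the question does not name a language or library), the biggest win is not rearranging the loops but removing them entirely. A sketch of a fully vectorized pairwise-distance computation:

```python
import numpy as np

# Hypothetical data standing in for the question's [3000, 3] array.
rng = np.random.default_rng(0)
a = rng.random((3000, 3))

# All pairwise Euclidean distances between the 3000 rows, no Python loop:
# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 * x.y, computed for every pair at once.
sq = (a ** 2).sum(axis=1)                       # shape (3000,)
d2 = sq[:, None] + sq[None, :] - 2.0 * a @ a.T  # squared distances, (3000, 3000)
dists = np.sqrt(np.maximum(d2, 0.0))            # clamp tiny negative round-off
```

This produces the same result as two nested loops over `np.linalg.norm(a[i] - a[j])`; the vectorized form just pushes the 3000 x 3000 iterations into compiled code, which is usually orders of magnitude faster than any arrangement of Python-level loops.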
Many times I avoid using arrays, because I must be careful about managing their size, and I must know the data type of the elements in advance.
I notice that I usually use List, but I am not sure this is necessary in many cases.
I would like to know how to design my code when I work with collections.
Can someone help me get started with collections? Thanks.
* Arrays
* List and List<T>
* Dictionary, Hashtable, Queue, Stack ...etc.
* Sets
Your question is more fundamental. First of all, read about the basic differences between these data structures. Then you will understand when it is better to use which of them.
You can find a complete guide here:
https://msdn.microsoft.com/en-us/library/ms379570%28v=vs.80%29.aspx?f=255&MSPPError=-2147217396
You will be a data structures guru after reading it.
For example:
If you need to access elements by index, an array is your choice, as it allows you to do that easily; but if you know that you will be continuously adding elements to the data structure, then a List is much better.
A good exercise would be to try implementing all the data structures you listed yourself, using a simple array.
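As a sketch of that exercise (in Python rather than C#, with hypothetical names), here is a minimal List-style growable collection built on top of a fixed-size backing array:

```python
class GrowableList:
    """A List<T>-style growable collection built on a fixed-size array.

    Sketch for the exercise above; a real implementation would also
    support removal, shrinking, and enumeration.
    """

    def __init__(self):
        self._capacity = 4
        self._count = 0
        self._items = [None] * self._capacity  # the fixed-size backing "array"

    def add(self, item):
        """Append in amortized O(1): double the backing array when full."""
        if self._count == self._capacity:
            self._capacity *= 2
            bigger = [None] * self._capacity
            bigger[:self._count] = self._items[:self._count]
            self._items = bigger
        self._items[self._count] = item
        self._count += 1

    def __getitem__(self, index):
        """O(1) access by index, exactly like an array."""
        if not 0 <= index < self._count:
            raise IndexError(index)
        return self._items[index]

    def __len__(self):
        return self._count
```

Writing this once makes it clear why List is convenient (it hides the resizing and copying) and what it costs (occasional full copies of the backing array when it grows).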
I have 3 tables that I need to get data from, with no relation between them, and I need the results in one DataTable. The question is: should I do it in SQL with a UNION (a single query), or with 3 different queries into 3 DataTables, merging them in C# code?
The difference is likely to be negligible in terms of performance. The overhead is in compiling and executing three queries rather than one; but -- assuming you are returning a fair amount of data -- returning the data will probably dominate the performance.
That said, be sure you use union all rather than union. The latter removes duplicates, which adds significant overhead.
One exception would be if you ran the three queries asynchronously. In that case, running the three queries might be faster than running a single query.
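A sketch of the single-query approach, shown in Python with an in-memory SQLite database (table and column names are hypothetical; the same UNION ALL shape applies from C# against any SQL database):

```python
import sqlite3

# In-memory stand-in for the three unrelated tables (hypothetical names);
# the point is the query shape, not the schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
for table in ("t1", "t2", "t3"):
    cur.execute(f"CREATE TABLE {table} (id INTEGER, label TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, "a"), (2, "b")])
cur.execute("INSERT INTO t2 VALUES (3, 'c')")
cur.executemany("INSERT INTO t3 VALUES (?, ?)", [(3, "c"), (4, "d")])

# UNION ALL simply concatenates the three result sets; plain UNION would
# also sort and de-duplicate them (dropping the repeated (3, 'c') here),
# which is the extra overhead the answer warns about.
rows = cur.execute(
    "SELECT id, label FROM t1 "
    "UNION ALL SELECT id, label FROM t2 "
    "UNION ALL SELECT id, label FROM t3"
).fetchall()
```

Swapping UNION ALL for UNION in this query returns one fewer row, because the duplicate pair is removed at the cost of a de-duplication pass.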
I was following this tutorial:
https://www.codeproject.com/tips/357063/a-very-simple-car-race-game-in-csharp-and-opengl
Determining place in this game is trivial, as the cars only move in one dimension on a straight line. How would it be done in more generic maps with the cars moving in more than one dimension?
EDIT:
By place, I mean 1st, 2nd, 3rd place, and so on.
How do you determine which car is closer to finishing than the others?
In 3D racing games, how is place determined?
By that I assume you mean position in the running order.
If so, it is determined by a simple scalar quantity.
Cars on a 3D race track, whether in a computer or in reality, are still constrained to a "road". Assuming no intersections, you can ignore the third dimension. Because a track is a loop, you can also unwind it, so that any point on the path maps to a scalar value between 0 and 1, with:
0 being the start of the track;
0.5 being half-way along; and
1 being back at the start.
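A Python sketch of that unwinding, using a hypothetical square track of 2D waypoints (a real game would track each car's current segment incrementally rather than searching every segment each frame):

```python
import math

# Hypothetical square track: a loop of 2D waypoints. The third dimension
# is ignored, as described above; a real game would use the road's centerline.
TRACK = [(0, 0), (10, 0), (10, 10), (0, 10)]
TRACK_LENGTH = sum(math.dist(TRACK[i], TRACK[(i + 1) % len(TRACK)])
                   for i in range(len(TRACK)))

def progress(pos):
    """Map a car position to its scalar fraction in [0, 1) along the track.

    Projects the position onto the nearest track segment, then divides the
    arc length from the start line by the total track length.
    """
    best_d2, best_arc = float("inf"), 0.0
    arc_start = 0.0
    for i in range(len(TRACK)):
        (ax, ay), (bx, by) = TRACK[i], TRACK[(i + 1) % len(TRACK)]
        dx, dy = bx - ax, by - ay
        seg_len = math.hypot(dx, dy)
        # Clamp the projection of pos onto this segment to [0, 1].
        t = max(0.0, min(1.0, ((pos[0] - ax) * dx + (pos[1] - ay) * dy)
                              / (seg_len * seg_len)))
        cx, cy = ax + t * dx, ay + t * dy
        d2 = (pos[0] - cx) ** 2 + (pos[1] - cy) ** 2
        if d2 < best_d2:
            best_d2, best_arc = d2, arc_start + t * seg_len
        arc_start += seg_len
    return best_arc / TRACK_LENGTH

def standings(cars):
    """cars: {name: (laps_completed, (x, y))} -> names ordered 1st, 2nd, ..."""
    return sorted(cars, key=lambda n: cars[n][0] + progress(cars[n][1]),
                  reverse=True)
```

Sorting on `laps + progress` answers the original question directly: the car with the largest combined scalar is closest to finishing, regardless of where the cars sit in 2D or 3D space.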
I have to architect and design a web application that needs a dynamic scaling solution. It needs to support a few million requests during mornings (from approx. 10 AM to 12 PM), with only a few thousand requests during the rest of the day.
Likewise, during peak seasons or holiday seasons I should be able to support peaks of up to several million requests per hour.
I don't want to scale up or scale out statically, as the usage pattern of the site differs from time to time, so I'm looking into dynamic scaling.
What are some of the approaches to this problem? I am looking for best practices and experience reports (public and private clouds).
I am implementing a 2D bin-packing algorithm on a canvas. My task is to place rectangles as optimally as possible.
The following shows how to do it:
http://incise.org/2d-bin-packing-with-javascript-and-canvas.html
But it always starts at the origin. I would like to tell the algorithm where to put a rectangle, so that the next one is not placed on top of it.
What should be changed in the code?
Is there another algorithm to use for it?
I know a better algorithm (in terms of compactness, not speed) than the one you linked to; it is called MaxRects.
This was my implementation of it in C++. While not fast, it was very effective at packing compactly.
Here is a PDF discussing and comparing all sorts of algorithms in terms of both time and compactness.
EDIT:
I threw together an example of an image packed using MaxRects .
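Since the C++ implementation itself is not reproduced here, this is an independent minimal sketch of MaxRects in Python (best-area-fit heuristic, no rotation). It includes a hypothetical `reserve` method that pre-places a rectangle at a chosen position, which addresses the question's wish to tell the algorithm where a rectangle goes so later ones avoid it:

```python
class MaxRects:
    """Minimal MaxRects packer sketch (best-area-fit, no rotation).

    Illustrative only: production implementations add rotation, several
    placement heuristics, and faster free-rectangle pruning.
    """

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.free = [(0, 0, width, height)]  # (x, y, w, h) free rectangles
        self.placed = []

    def reserve(self, x, y, w, h):
        """Pre-place a rectangle at a fixed position; later inserts avoid it."""
        self._carve((x, y, w, h))
        self.placed.append((x, y, w, h))

    def insert(self, w, h):
        """Place a w*h rectangle into the free rect leaving the least waste."""
        fits = [(fw * fh - w * h, fx, fy)
                for fx, fy, fw, fh in self.free if w <= fw and h <= fh]
        if not fits:
            return None  # no room left for this rectangle
        _, x, y = min(fits)
        self._carve((x, y, w, h))
        self.placed.append((x, y, w, h))
        return x, y

    def _carve(self, used):
        """Remove `used` from the free space, keeping maximal free rects."""
        ux, uy, uw, uh = used
        new_free = []
        for fx, fy, fw, fh in self.free:
            if (ux >= fx + fw or ux + uw <= fx or
                    uy >= fy + fh or uy + uh <= fy):
                new_free.append((fx, fy, fw, fh))  # no overlap: keep whole
                continue
            # Split the overlapped free rect into up to 4 maximal pieces.
            if ux > fx:
                new_free.append((fx, fy, ux - fx, fh))                 # left
            if ux + uw < fx + fw:
                new_free.append((ux + uw, fy, fx + fw - ux - uw, fh))  # right
            if uy > fy:
                new_free.append((fx, fy, fw, uy - fy))                 # below
            if uy + uh < fy + fh:
                new_free.append((fx, uy + uh, fw, fy + fh - uy - uh))  # above
        # Prune free rects fully contained in another (dedupe first).
        new_free = list(set(new_free))
        self.free = [a for i, a in enumerate(new_free)
                     if not any(i != j and self._contains(b, a)
                                for j, b in enumerate(new_free))]

    @staticmethod
    def _contains(b, a):
        return (b[0] <= a[0] and b[1] <= a[1] and
                b[0] + b[2] >= a[0] + a[2] and b[1] + b[3] >= a[1] + a[3])
```

Because `reserve` carves the fixed rectangle out of the free space exactly the way a normal placement does, every subsequent `insert` is guaranteed to land in the remaining free rectangles and can never overlap it, which is the behavior the question asks for.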