For some reason my code doesn't properly detect when an instantiated object overlaps another.
What I want to do is generate random platforms with different positions and scales (X).
Since the values are random, overlaps can happen. To solve this, I've tried to compare each and every platform, and when one overlaps, it should delete itself and instantiate another one.
In addition to this question:
once the overlapping problem is solved, is it possible to keep the platforms at a certain distance away from each other, on X, Y and Z?
So:
What have I done wrong?
What can I do?
void Platform_Position_Scale_Generator(int i) {
    posX[i] = Random.Range(minPosRange, maxPosRange + 1);
    posY[i] = Random.Range(minPosRange, maxPosRange + 1);
    posZ[i] = 0;

    scaleX[i] = Random.Range(minScaleRange, maxScaleRange + 1);
    scaleY[i] = 1;
    scaleZ[i] = 1;
}

void Platform_Generator(int i) {
    platformPrefabPosition[i].x = posX[i];
    platformPrefabPosition[i].y = posY[i];
    platformPrefabPosition[i].z = posZ[i];
    Instantiate(platformPrefab, platformPrefabPosition[i], Quaternion.identity);
    platformPrefab.transform.localScale = new Vector3(scaleX[i], 1, 1);
}
// Error with this
void Detect_Collision(int i) {
    for(int f = 0; f < i; f++) {
        for(int s = f + 1; s < i; s++) {
            bool xOverlap = (posX[s] > posX[f] && posX[s] < posX[f] + scaleX[i]) || (posX[f] > posX[s] && posX[f] < posX[s] + scaleX[i]);
            bool yOverlap = (posY[s] > posY[f] && posY[s] < posY[f] + scaleY[i]) || (posY[f] > posY[s] && posY[f] < posY[s] + scaleY[i]);

            if(xOverlap && yOverlap) {
                Debug.Log("xOverlap: " + xOverlap + " yOverlap: " + yOverlap);
            }
            else {
                //Debug.Log("xOverlap: " + xOverlap + " yOverlap: " + yOverlap);
            }
        }
    }
}
I wouldn't recommend using completely random generation for something like this, as it can easily create something totally unplayable, and making it playable can be more difficult than trying a more methodical approach.
One interesting approach could be the one shown in this video:
https://www.youtube.com/watch?v=VkGG9Umag0M
That approach uses pre-built level chunks, manually designed to be playable, which are then picked at random at run-time to create endless levels.
Another approach could be dynamically generating a sequence of "viable" platforms.
I'm assuming this is a 2D platform game, but the same logic could apply to other types.
For example, the following sequence:
1. Add a platform on the left edge (random Y position and size, if desired).
2. Determine the viable positions for the next platform, horizontally and vertically, taking into account both the end position of the previous platform and constraints such as jumping distance. That gives you a maximum offset at which you can place the next platform and still have the player make it. You can choose a random value between that maximum and, optionally, a minimum distance, and still have a viable platform with no overlaps.
3. Repeat step 2 until you reach some end condition, such as the size of the level, the number of platforms, etc.
You can also add more complex logic, such as allowing overlaps on one axis as long as there isn't any on the other axis, or enforcing a minimum separation between both. That way you could get two nearly parallel platforms, and things like that.
Other rules could be more complex still, expecting a specific solution from the player, such as double-jumps, bouncing off walls, etc. In that scenario, step 2 would just be one of many generation strategies to choose from.
This type of generation would also be much less expensive than actual instantiation and deletion in case of collisions.
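As a rough illustration, here is a minimal sketch of that sequential idea, reusing the question's platformPrefab and scale ranges; platformCount, leftEdgeX, minGapX, maxJumpX and maxJumpY are made-up names whose values would come from your own level and character tuning:

// Sketch: generate a left-to-right sequence of reachable platforms.
Vector3 cursor = new Vector3(leftEdgeX, 0f, 0f);

for (int i = 0; i < platformCount; i++) {
    float width = Random.Range(minScaleRange, maxScaleRange + 1);

    // Pick a gap the player can clear and a height change they can reach.
    cursor.x += Random.Range(minGapX, maxJumpX);
    cursor.y += Random.Range(-maxJumpY, maxJumpY);

    GameObject p = (GameObject)Instantiate(platformPrefab, cursor, Quaternion.identity);
    // Scale the instance, not the prefab, so each platform keeps its own width.
    p.transform.localScale = new Vector3(width, 1f, 1f);

    // Advance past this platform so the next gap starts at its right edge.
    cursor.x += width;
}

By construction, consecutive platforms can never overlap on X, so no collision checks or re-instantiations are needed.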
NOTE:
If you still want to stick to 100% random generation but guarantee gaps between platforms, just assume an "imaginary" border surrounding each actual platform: instead of testing collisions against the real extents only, add offsets to them.
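For example, a padded overlap test for two platforms a and b could look like this (a sketch; padding is an assumed value, and each platform's extents come from the question's position and scale arrays):

// Sketch: AABB overlap test with an "imaginary" border of size padding.
// Expanding both boxes by padding enforces a gap of at least 2 * padding.
bool Overlaps(int a, int b, float padding) {
    bool xOverlap = posX[a] - padding < posX[b] + scaleX[b] + padding
                 && posX[b] - padding < posX[a] + scaleX[a] + padding;
    bool yOverlap = posY[a] - padding < posY[b] + scaleY[b] + padding
                 && posY[b] - padding < posY[a] + scaleY[a] + padding;
    return xOverlap && yOverlap;
}

Note that each platform is tested against its own scale (scaleX[a], scaleX[b]) rather than a single shared index.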
You should be able to test intersection without physics, using something similar to what is shown here:
http://answers.unity3d.com/questions/581014/2d-collision-detection-box-intersection-without-ph.html
if (object1.renderer.bounds.Intersects(object2.renderer.bounds)) {
    // Do some stuff
}
// EDIT:
This is not a duplicate of "When should I use a List vs a LinkedList". Check the answer I've provided below.
LinkedList might be useful, though, if someone wants to insert some positions at a specific place after we already have a properly ordered List (in that case, a LinkedList); for how to make one, check my answer below. //
How would you iterate backwards (with pauses for player input) through a set of randomly generated numbers?
I'm trying to build a turn-based game. The order of actions is determined by the result of something like this:
int randomPosition = Random.Range(1,31) + someModifier;
// please note that someModifier can be a negative number!
// There is no foreseeable min or max someModifier.
// Let's assume we can't set limits.
I already have a List of KeyValuePairs containing the names of objects and their corresponding randomPosition. // Update: its values are already sorted by a custom Sort() function, from highest to lowest.
// A list of KeyValue pairs containing names of objects and their corresponding randomPosition.
public List<KeyValuePair<string, int>> turnOrder = new List<KeyValuePair<string, int>> ();
// GameObject names are taken from a list of player and AI GameObjects.
List<GameObject> listOfCombatants = new List<GameObject>();
// Each GameObject name is taken from listOfCombatants list.
listOfCombatants[i].name;
I thought: maybe add the player and AI GameObjects to a list at index positions equal to each of their randomPosition values. Unfortunately, a generic List can't have "gaps" in it, so we can't create such a list, let alone iterate it backwards.
Also, I'm not sure how we'd stop a for loop to wait for player input. I have a button; pressing it will perform an action: switch the state and run some functions.
Button combat_action_button;
combat_action_button.onClick.AddListener (AttackButton);
// When player takes his turn, so in TurnState.PLAYER_ACTION:
public void AttackButton() {
    switch(actionState) {
        case PlayerAction.ATTACK:
            Debug.Log (actionState);
            // Do something here - run function, etc. Then...
            currentState = TurnState.ENEMY_ACTION;
            break;
    }
}
To make things worse, I've read that "pausing" a while loop is bad for performance, and that it's better to take player input out of loops.
So maybe I should create more for loops, iterating each new loop from position to position until the last GameObject has acted, and use some delegates/events, as some of the things players/AI can do live in different scripts (classes).
This is not a real-time project, so we can't base anything on time (other than potential max turn time).
Another thing is, we don't know how many GameObjects will take turns.
But maybe there's some collection type that can store GameObjects with gaps between their index positions and iterate a loop from highest to lowest with no problem?...
I want to make it as simple as possible.
For the issue of simultaneously having user input and looping, I recommend looking into background workers or threading/tasks. They should address your problem of doing two things at once.
For your list problem, I personally prefer lists as well, which is why I would designate each "gap" with a sentinel value like -1 and ignore the -1 entries when reading the data. To skip the gaps I recommend LINQ queries, but if you don't want to use LINQ that should not be a problem.
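A minimal sketch of that idea, assuming -1 marks an empty slot in an int[] of turn positions (requires using System.Linq):

// -1 marks an empty slot; keep only the real positions, highest first.
int[] slots = { -1, 3, -1, -1, 7, 1, -1 };
var occupied = slots.Where(v => v != -1).OrderByDescending(v => v);
foreach (int position in occupied)
    Debug.Log("Turn position: " + position);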
// EDIT:
I know very little about Unity, but from what people have been saying, running multiple threads is or can be an issue there. I looked into it, and it sounds like you just cannot call the Unity API from a background thread. So basically, as long as you do not reference the Unity API from the background thread, you should be OK. That said, you may need or want to make a call to the API inside the background worker; to do that, you need to invoke the call before or after the background work, on the main thread. I am pretty sure a BackgroundWorker can also report back while it runs, by setting its WorkerReportsProgress property to true and handling its progress events.
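To make that constraint concrete, a common workaround (not a Unity API; all names here are made up) is to do the heavy work on a plain thread and queue anything that touches the Unity API for the main thread to drain in Update():

using System.Collections.Generic;
using System.Threading;
using UnityEngine;

public class TurnWorker : MonoBehaviour {
    readonly Queue<System.Action> mainThreadActions = new Queue<System.Action>();

    public void StartBackgroundWork() {
        new Thread(() => {
            // Heavy, non-Unity work is safe off the main thread...
            int result = ExpensiveTurnCalculation();
            lock (mainThreadActions) {
                // ...anything touching the Unity API is queued instead of called here.
                mainThreadActions.Enqueue(() => Debug.Log("Result: " + result));
            }
        }).Start();
    }

    void Update() {
        lock (mainThreadActions) {
            while (mainThreadActions.Count > 0)
                mainThreadActions.Dequeue()();
        }
    }

    int ExpensiveTurnCalculation() { return 42; } // hypothetical placeholder
}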
I've decided to share my own solutions, as I think I've developed some accurate answers to my questions.
Solution #1. First, declare an array, 2 variables and 1 function:
int[] arrayOrderedCombatants;
int min = 100000;
int max;

public void SomePlayerAction() {
    Debug.Log ("This is some player or AI action.");
}

public void IterateThroughTurnOrderPositions() {
    for (int i = max; i >= min; i--) {
        if (arrayOrderedCombatants [i] >= 0 && arrayOrderedCombatants [i] >= min) {
            Debug.Log ("This is an existing position in turn order: " + arrayOrderedCombatants [i]);
            foreach (var position in turnOrder) {
                if (position.Value == arrayOrderedCombatants [i]) {
                    max = arrayOrderedCombatants [i] - 1;
                    goto ExitLoop;
                }
            }
        }
    }
    ExitLoop:
    SomePlayerAction ();
}
Then, for testing purposes, let's trigger it from an Update() method with Input.GetKeyDown:
if (Input.GetKeyDown (KeyCode.O)) {
    arrayOrderedCombatants = new int[turnOrder[0].Value + 1];
    Debug.Log ("This is arrayOrderedCombatants Length: " + arrayOrderedCombatants.Length);

    foreach (var number in turnOrder) {
        if (number.Value < min)
            min = number.Value;
    }
    Debug.Log ("This is min result in random combat order: " + min);

    for (int i = 0; i < arrayOrderedCombatants.Length; i++)
        arrayOrderedCombatants[i] = min - 1;

    foreach (var combatant in turnOrder) {
        if (combatant.Value >= 0) {
            arrayOrderedCombatants [combatant.Value] = combatant.Value;
        }
    }

    max = turnOrder [0].Value;
    while (max >= min)
        IterateThroughTurnOrderPositions ();
}
The above code answers my question. Unfortunately, this solution has two problems. First, you can't have a negative index position, so if someModifier pushes randomPosition below 0, it won't work. Second, if any randomPosition value occurs more than once, it will be added to arrayOrderedCombatants only once, and so it will be iterated only once.
But that's obvious: you can't have more than one value occupying a single index of an int array.
So I will provide a better solution. It's a different approach, but it works like it should.
Solution #2. First, declare a list of GameObjects:
List<GameObject> orderedCombatants = new List<GameObject> ();
Then, in the Update() method:
if (Input.GetKeyDown (KeyCode.I)) {
    orderedCombatants.Clear ();
    foreach (var combatant in initiative) {
        Debug.Log (combatant);
        for (int i = 0; i < listOfCombatants.Count; i++) {
            if (listOfCombatants[i].name.Contains(combatant.Key)) {
                orderedCombatants.Add(listOfCombatants[i]);
            }
        }
    }
    foreach (var combatant in orderedCombatants) {
        Debug.Log (combatant.name);
    }
}
The above creates a new list of GameObjects already set in the right order. Now you can iterate through it, easily access each GameObject and perform any actions you need.
I have two arrays:
Vector3[] positions;
Matrix4x4[] transforms;
And a point in space:
Vector3 point;
For each position I get the distance from the point:
float distance = GetDistance(point, transforms[i] * positions[i]);
I'm comfortable enough with using delegates to sort a single array, but how can I sort the two arrays at the same time?
I need the operation to be as fast as possible, so I'd like to avoid packing into a temporary array and then unpacking the result.
I'm using .NET 2.0, so no LINQ.
Instead of managing two parallel arrays, you should use a better model that binds the data together.
List<Tuple<Vector3, Matrix4x4>> posTransforms = new List<Tuple<Vector3, Matrix4x4>>();

// add like this
posTransforms.Add(new Tuple<Vector3, Matrix4x4>(vec, matrix));

// order by the Y coordinate of the vectors, for example
posTransforms = posTransforms.OrderBy(x => x.Item1.y).ToList();
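Note, though, that Tuple only arrived in .NET 4 and OrderBy is LINQ, both of which the question rules out. Under .NET 2.0, one equivalent is a small struct plus Array.Sort with a Comparison delegate (a sketch; BuildPosTransforms is a hypothetical helper that fills the array):

struct PosTransform {
    public Vector3 Position;
    public Matrix4x4 Transform;
}

// Sort by the Y coordinate of the position, .NET 2.0 style.
PosTransform[] posTransforms = BuildPosTransforms(); // hypothetical
System.Array.Sort(posTransforms, delegate(PosTransform a, PosTransform b) {
    return a.Position.y.CompareTo(b.Position.y);
});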
The simplest solution is to do something like what @evanmcdonnal suggests and just package the corresponding elements together (because they are obviously related and should be part of the same data structure), then sort them using a built-in sort function.
If you are really opposed to doing that for whatever reason, you will need to write your own sort method that will move the elements of both arrays at the same time.
Uncompiled and untested example (but should give you a decent idea of how to proceed):
bool isSorted;
do
{
    isSorted = true;
    for (int i = 0; i < positions.Length - 1; i++)
    {
        float distance = GetDistance(point, transforms[i] * positions[i]);
        float distanceNext = GetDistance(point, transforms[i + 1] * positions[i + 1]);
        if (distanceNext < distance)
        {
            var swapTransform = transforms[i];
            transforms[i] = transforms[i + 1];
            transforms[i + 1] = swapTransform;

            var swapPosition = positions[i];
            positions[i] = positions[i + 1];
            positions[i + 1] = swapPosition;

            isSorted = false;
        }
    }
} while (!isSorted);
Note that I used a bubble sort here (which is in no way efficient, just really easy to write). I suggest finding a much more efficient algorithm to use if you decide to go this route.
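As a middle ground, the generic Array.Sort(keys, items) overload (available since .NET 2.0) can reorder an index array by precomputed distances in one call; applying the resulting permutation to both arrays avoids hand-writing a sort, at the cost of the temporary arrays the question hoped to avoid. A sketch:

// Precompute the sort keys (distances) and an identity index array.
float[] keys = new float[positions.Length];
int[] order = new int[positions.Length];
for (int i = 0; i < positions.Length; i++)
{
    keys[i] = GetDistance(point, transforms[i] * positions[i]);
    order[i] = i;
}

// Sorts order[] by keys[] in one pass.
System.Array.Sort(keys, order);

// Apply the permutation to both arrays.
Vector3[] sortedPositions = new Vector3[positions.Length];
Matrix4x4[] sortedTransforms = new Matrix4x4[positions.Length];
for (int i = 0; i < order.Length; i++)
{
    sortedPositions[i] = positions[order[i]];
    sortedTransforms[i] = transforms[order[i]];
}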
First things first:
I have a git repo over here that holds the code of my current efforts and an example data set
Background
The example data set holds a bunch of records in Int32 format. Each record is composed of several bit fields that basically hold info on events where an event is either:
The detection of a photon
The arrival of a synchronizing signal
Each Int32 record can be treated like following C-style struct:
struct {
unsigned TimeTag :16;
unsigned Channel :12;
unsigned Route :2;
unsigned Valid :1;
unsigned Reserved :1; } TTTRrecord;
Whether we are dealing with a photon record or a sync event, the time tag will always hold the time of the event relative to the start of the experiment (macro-time).
If a record is a photon, valid == 1.
If a record is a sync signal or something else, valid == 0.
If a record is a sync signal, sync type = channel & 7 will give either a value indicating start of frame or end of scan line in a frame.
The last relevant bit of info is that TimeTag is 16 bits and thus obviously limited. If the TimeTag counter rolls over, the rollover counter is incremented; this rollover (overflow) marker can easily be obtained from the channel field: overflow = Channel & 2048.
My Goal
These records come in from a high speed scanning microscope and I would like to use these records to reconstruct images from the recorded photon data, preferably at 60 FPS.
To do so, I obviously have all the info:
I can look over all available data, find all overflows, which allows me to reconstruct the sequential macro time for each record (photon or sync).
I also know when the frame started and when each line composing the frame ended (and thus also how many lines there are).
Therefore, to reconstruct a bitmap of size noOfLines * noOfLines, I can process the bulk array of records line by line, where each time I basically make a "histogram" of the photon events with bin edges at the time boundaries of each pixel in the line.
Put another way, if I know Tstart and Tend of a line, and I know the number of pixels I want to spread my photons over, I can walk through all records of the line and check whether the macro time of each photon falls within the time boundaries of the current pixel. If so, I add one to the value of that pixel.
This approach works, current code in the repo gives me the image I expect but it is too slow (several tens of ms to calculate a frame).
What I tried already:
The magic happens in the function int[] Renderline (see repo).
public static int[] RenderlineV(int[] someRecords, int pixelduration, int pixelCount)
{
    // Will hold the pixels obviously
    int[] linePixels = new int[pixelCount];

    // Calculate everything (sync, overflow, ...) from the raw records
    int[] timeTag = someRecords.Select(x => Convert.ToInt32(x & 65535)).ToArray();
    int[] channel = someRecords.Select(x => Convert.ToInt32((x >> 16) & 4095)).ToArray();
    int[] valid = someRecords.Select(x => Convert.ToInt32((x >> 30) & 1)).ToArray();
    int[] overflow = channel.Select(x => (x & 2048) >> 11).ToArray();

    int[] absTime = new int[overflow.Length];
    absTime[0] = 0;
    Buffer.BlockCopy(overflow, 0, absTime, 4, (overflow.Length - 1) * 4);
    absTime = absTime.Cumsum(0, (prev, next) => prev * 65536 + next).Zip(timeTag, (o, tt) => o + tt).ToArray();

    long lineStartTime = absTime[0];
    int tempIdx = 0;

    for (int j = 0; j < linePixels.Length; j++)
    {
        int count = 0;
        for (int i = tempIdx; i < someRecords.Length; i++)
        {
            if (valid[i] == 1 && lineStartTime + (j + 1) * pixelduration >= absTime[i])
            {
                count++;
            }
        }
        // Avoid checking records in the raw data that were already binned to a pixel.
        linePixels[j] = count;
        tempIdx += count;
    }

    return linePixels;
}
Treating the photon records in my data set as an array of structs and addressing the struct members in a loop was a bad idea. I could increase speed significantly (2x) by dumping all bit fields into plain arrays and addressing those instead. This version of the render function is already in the repo.
I also realised I could improve loop speed by making sure I compare against the .Length property of the array I am running through, as this supposedly eliminates bounds checking.
The major speed loss is in the inner loop of this nested set of loops:
for (int j = 0; j < linePixels.Length; j++)
{
    int count = 0;
    lineStartTime += pixelduration;

    for (int i = tempIdx; i < absTime.Length; i++)
    {
        //if (lineStartTime + (j + 1) * pixelduration >= absTime[i] && valid[i] == 1)
        // Seems quicker to calculate the boundary before...
        //if (valid[i] == 1 && lineStartTime >= absTime[i])
        // Quicker still...
        if (lineStartTime > absTime[i] && valid[i] == 1)
        {
            // Slow... looking into linePixels[] each iteration is a bad idea.
            //linePixels[j]++;
            count++;
        }
    }

    // Doing it here is faster.
    linePixels[j] = count;
    tempIdx += count;
}
Rendering 400 lines like this in a for loop takes roughly 150 ms in a VM (I do not have a dedicated Windows machine right now and I run a Mac myself, I know, I know...).
I just installed Win10CTP on a 6-core machine, and replacing the normal loops with Parallel.For() increases the speed by almost exactly 6x.
Oddly enough, the non-parallel for loop runs at almost the same speed in the VM and on the physical 6-core machine...
Regardless, I cannot imagine that this function cannot be made quicker. I would first like to eke out every bit of efficiency from the line render before I start thinking about other things.
I would like to optimise the function that generates the line to the maximum.
Outlook
Until now, my programming dealt with rather trivial things so I lack some experience but things I think I might consider:
Matlab is/seems very efficient with vectorized operations. Could I achieve similar things in C#, i.e. by using Microsoft.Bcl.Simd? Is my case suited for something like this? Would I see gains even in my VM, or should I definitely move to real hardware?
Could I gain from pointer arithmetic/unsafe code to run through my arrays?
...
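On the unsafe-code question, a pointer version of the inner counting loop might look like the sketch below; whether it actually beats the array version depends on the JIT, so it would need measuring rather than assuming (requires compiling with /unsafe):

// Sketch: the inner counting loop with fixed pointers.
unsafe
{
    fixed (int* pAbs = absTime, pValid = valid)
    {
        for (int i = tempIdx; i < absTime.Length; i++)
        {
            if (lineStartTime > pAbs[i] && pValid[i] == 1)
                count++;
        }
    }
}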
Any help would be greatly, greatly appreciated.
I apologize beforehand for the quality of the code in the repo, I am still in the quick and dirty testing stage... Nonetheless, criticism is welcomed if it is constructive :)
Update
As some mentioned, absTime is ordered already. Therefore, once a record is hit that is no longer in the current pixel or bin, there is no need to continue the inner loop.
5X speed gain by adding a break...
for (int i = tempIdx; i < absTime.Length; i++)
{
    //if (lineStartTime + (j + 1) * pixelduration >= absTime[i] && valid[i] == 1)
    // Seems quicker to calculate the boundary before...
    //if (valid[i] == 1 && lineStartTime >= absTime[i])
    // Quicker still...
    if (lineStartTime > absTime[i] && valid[i] == 1)
    {
        // Slow... looking into linePixels[] each iteration is a bad idea.
        //linePixels[j]++;
        count++;
    }
    else
    {
        break;
    }
}
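Along the same lines, since absTime is sorted, each pixel's boundary could be found with a binary search instead of walking the records one by one. A sketch (lineStartTime here is the start of the line; the slice counts would still need the invalid records subtracted, and if a boundary value occurs several times BinarySearch may land on any of them):

// Sketch: bin records per pixel using Array.BinarySearch on the sorted absTime.
int lo = tempIdx;
for (int j = 0; j < linePixels.Length; j++)
{
    int boundary = (int)(lineStartTime + (j + 1) * (long)pixelduration);

    // Index of the first record at or beyond the pixel boundary.
    int hi = Array.BinarySearch(absTime, lo, absTime.Length - lo, boundary);
    if (hi < 0) hi = ~hi; // a negative result encodes the insertion point

    linePixels[j] = hi - lo;
    lo = hi;
}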
My Problem
I have a data stream coming from a program that connects to a GPS device and an inclinometer (they are actually both stand alone devices, not a cellphone) and logs the data while the user drives around in a car. The essential data that I receive are:
Latitude/Longitude - from GPS, with a resolution of about ±5 feet,
Vehicle land-speed - from GPS, in knots, which I convert to MPH
Sequential record index - from the database, it's an auto-incrementing integer and nothing ever gets deleted,
some other stuff that isn't pertinent to my current problem.
This data gets stored in a database and read back from the database into an array. From start to finish, the recording order is properly maintained, so even though the timestamp that is recorded from the GPS device is only to 1 second precision and we sample at 5hz, the absolute value of the time is of no interest and the insertion order suffices.
In order to aid in analyzing the data, a user performs a very basic data input task of selecting the "start" and "end" of curves on the road from the collected path data. I get a map image from Google and I draw the curve data on top of it. The user zooms into a curve of interest, based on their own knowledge of the area, and clicks two points on the map. Google is actually very nice and reports where the user clicked in Latitude/Longitude rather than me having to try to backtrack it from pixel values, so the issue of where the user clicked in relation to the data is covered.
The zooming in on the curve clips the data: I only retrieve data that falls in the Lat/Lng window defined by the zoom level. Most of the time, I'm dealing with fewer than 300 data points, when a single driving session could result in over 100k data points.
I need to find the subsegment of the curve data that falls between those two click points.
What I've Tried
Originally, I took the two points that are closest to each click point and the curve was anything that fell between them. That worked until we started letting the drivers make multiple passes over the road. Typically, a driver will make 2 back-and-forth runs over an interesting piece of road, giving us 4 total passes. If you take the two closest points to the two click points, then you might end up with the first point corresponding to a datum on one pass, and the second point corresponding to a datum on a completely different pass. The points in the sequence between these two points would then extend far beyond the curve. And, even if you got lucky and all the data points found were both on the same pass, that would only give you one of the passes, and we need to collect all passes.
For a while, I had a solution that worked much better. I calculated two new sequences representing the distance from each data point to each of the click points, then the approximate second derivative of that distance, looking for the inflection points of the distance from the click point over the data points. I reasoned that the inflection point meant that the points previous to the inflection were getting closer to the click point and the points after the inflection were getting further away from the click point. Doing this iteratively over the data points, I could group the curves as I came to them.
Perhaps some code is in order (this is C#, but don't worry about replying in kind, I'm capable of reading most languages):
static List<List<LatLngPoint>> GroupCurveSegments(List<LatLngPoint> dataPoints, LatLngPoint start, LatLngPoint end)
{
    var withDistances = dataPoints.Select(p => new
    {
        ToStart = p.Distance(start),
        ToEnd = p.Distance(end),
        DataPoint = p
    }).ToArray();

    var set = new List<List<LatLngPoint>>();
    var currentSegment = new List<LatLngPoint>();

    for (int i = 0; i < withDistances.Length - 2; ++i)
    {
        var a = withDistances[i];
        var b = withDistances[i + 1];
        var c = withDistances[i + 2];

        // the edge of the map can clip the data, so the continuity of
        // the data is not exactly mapped to the continuity of the array.
        var ab = b.DataPoint.RecordID - a.DataPoint.RecordID;
        var bc = c.DataPoint.RecordID - b.DataPoint.RecordID;

        var inflectStart = Math.Sign(a.ToStart - b.ToStart) * Math.Sign(b.ToStart - c.ToStart);
        var inflectEnd = Math.Sign(a.ToEnd - b.ToEnd) * Math.Sign(b.ToEnd - c.ToEnd);

        // if we haven't started a segment yet and we aren't obviously between segments
        if ((currentSegment.Count == 0 && (inflectStart == -1 || inflectEnd == -1)
            // if we have started a segment but we haven't changed directions away from it
            || currentSegment.Count > 0 && (inflectStart == 1 && inflectEnd == 1))
            // and we're continuous on the data collection path
            && ab == 1
            && bc == 1)
        {
            // extend the segment
            currentSegment.Add(b.DataPoint);
        }
        else if (
            // if we have a segment collected
            currentSegment.Count > 0
            // and we changed directions away from one of the points
            && (inflectStart == -1
                || inflectEnd == -1
                // or we lost data continuity
                || ab > 1
                || bc > 1))
        {
            // clip the segment and start a new one
            set.Add(currentSegment);
            currentSegment = new List<LatLngPoint>();
        }
    }

    return set;
}
This worked great until we started advising the drivers to drive around 15MPH through turns (supposedly, it helps reduce sensor error. I'm personally not entirely convinced what we're seeing at higher speed is error, but I'm probably not going to win that argument). A car traveling at 15MPH is traveling at 22fps. Sampling this data at 5hz means that each data point is about four and a half feet apart. However, our GPS unit's precision is only about 5 feet. So, just the jitter of the GPS data itself could cause an inflection point in the data at such low speeds and high sample rates (technically, at this sample rate, you'd have to go at least 35MPH to avoid this problem, but it seems to work okay at 25MPH in practice).
Also, we're probably bumping up sampling rate to 10 - 15 Hz pretty soon. You'd need to drive at about 45MPH to avoid my inflection problem, which isn't safe on most of the curves of interest. My current procedure ends up splitting the data into dozens of subsegments, over road sections that I know had only 4 passes. One section that only had 300 data points came out to 35 subsegments. The rendering of the indication of the start and end of each pass (a small icon) indicated quite clearly that each real pass was getting chopped up into several pieces.
Where I'm Thinking of Going
1. Find the minimum distance of all points to both the start and end click points.
2. Find all points that are within +10 feet of that distance.
3. Group each set of points by data continuity, i.e. each group should be continuous in the database, because more than one point on a particular pass could fall within the distance radius.
4. Take the data mid-point of each of those groups for each click point as the representative start and end of each pass.
5. Pair up points in the two sets per click point by minimizing the record-index distance between each "start" and "end".
Halp?!
But I had tried this once before and it didn't work very well. Step #2 can return an unreasonably large number of points if the user doesn't click particularly close to where they intend, and too few points if the user clicks very close to where they intend. I'm not sure just how computationally intensive step #3 will be. And step #5 will fail if the driver drives over a particularly long curve and immediately turns around just after the start and end to perform the subsequent passes. We might be able to train the drivers not to do this, but I don't like taking chances on such things. So I could use some help figuring out how to clip and group this path, which doubles back over itself, into subsegments for passes over the curve.
Okay, so here is what I ended up doing, and it seems to work well for now. I like that it is a little simpler to follow than before. I decided that Step #4 from my question was not necessary. The exact point used as the start and end isn't critical, so I just take the first point that is within the desired radius of the first click point and the last point within the desired radius of the second point and take everything in the middle.
protected static List<List<T>> GroupCurveSegments<T>(List<T> dbpoints, LatLngPoint start, LatLngPoint end) where T : BBIDataPoint
{
    var withDistances = dbpoints.Select(p => new
    {
        ToStart = p.Distance(start),
        ToEnd = p.Distance(end),
        DataPoint = p
    }).ToArray();

    var minToStart = withDistances.Min(p => p.ToStart) + 10;
    var minToEnd = withDistances.Min(p => p.ToEnd) + 10;

    bool startFound = false,
        endFound = false,
        oldStartFound = false,
        oldEndFound = false;

    var set = new List<List<T>>();
    var cur = new List<T>();

    foreach (var a in withDistances)
    {
        // save the previous values, because they
        // impact the future values.
        oldStartFound = startFound;
        oldEndFound = endFound;

        startFound =
            !oldStartFound && a.ToStart <= minToStart
            || oldStartFound && !oldEndFound
            || oldStartFound && oldEndFound
                && (a.ToStart <= minToStart || a.ToEnd <= minToEnd);

        endFound =
            !oldEndFound && a.ToEnd <= minToEnd
            || !oldStartFound && oldEndFound
            || oldStartFound && oldEndFound
                && (a.ToStart <= minToStart || a.ToEnd <= minToEnd);

        if (startFound || endFound)
        {
            cur.Add(a.DataPoint);
        }
        else if (cur.Count > 0)
        {
            set.Add(cur);
            cur = new List<T>();
        }
    }

    // if a data stream ended near the end of the curve,
    // then the loop will not have saved the final pass.
    if (cur.Count > 0)
    {
        set.Add(cur);
    }

    return set;
}
I manually adjust the thread count:
if (items.Count == 0) { threads = 0; }
else if (items.Count < 1 * hundred) { threads = 1; }
else if (items.Count < 3 * hundred) { threads = 2; }
else if (items.Count < 5 * hundred) { threads = 4; }
else if (items.Count < 10 * hundred) { threads = 8; }
else if (items.Count < 20 * hundred) { threads = 11; }
else if (items.Count < 30 * hundred) { threads = 15; }
else if (items.Count < 50 * hundred) { threads = 30; }
else threads = 40;
I need a function that returns the necessary/optimized thread count.
Ok, now forget the above. I need a graph curve to plot: I give the coordinates, and the function plots the curve. Imagine the points (0,0) and (5,5), in (x,y) form. That should be a straight line, so I can then measure x for y = 3.
What happens if I give the points (0,0), (2,3), (8,10), (15,30) and (30,50)? It will be something like a curve. Now, can I calculate x for a given y, or vice versa?
I think you get the idea. Should I use Matlab, or could it be done in C#?
You're looking for curve fitting, or the derivation of a function describing a curve from a set of data points. If you're looking to do this once, from a constant set of data, Matlab would do the job just fine. If you want to do this dynamically, there are libraries and algorithms out there.
Review the Wikipedia article on linear regression. The least squares approach mentioned in that article is pretty common. Look around, and you'll find libraries and code samples using that approach.
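For reference, a simple ordinary-least-squares line fit is short enough to implement directly; a minimal C# sketch:

// Fit y = slope * x + intercept to (x[i], y[i]) points by least squares.
static void FitLine(double[] x, double[] y, out double slope, out double intercept)
{
    int n = x.Length;
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (int i = 0; i < n; i++)
    {
        sumX += x[i];
        sumY += y[i];
        sumXY += x[i] * y[i];
        sumXX += x[i] * x[i];
    }
    slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    intercept = (sumY - slope * sumX) / n;
}

With slope and intercept in hand, you can solve for x given y (x = (y - intercept) / slope) or for y given x directly.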
You can probably make that run faster by reordering the tests (and using nested ifs), but that's not a smooth function, and there isn't likely to be any simpler description of it.
Or are you trying to find a smooth function that passes near those points?
You could use a linear regression; fitting a line to your thresholds gives approximately threads ≈ 0.0056 · items.Count + 0.5. So I would probably encode it in C# like this:
int threads = (int) Math.Ceiling(0.0056*items.Count + 0.5);
I used Math.Ceiling to ensure that you don’t get 0 when the input isn’t 0. Of course, this function gives you 1 even if the input is 0; if that matters, you can always catch that as a special case, or use Math.Round instead.
However, this means the number of threads will go up continuously. It will not level out at 40. If that’s what you want, you might need to research different kinds of regression.
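If leveling out at 40 is what you want, one simple option is to clamp the linear fit instead of changing the regression; a sketch:

// Linear fit, clamped to the original ceiling of 40 threads.
int threads = items.Count == 0
    ? 0
    : Math.Min(40, (int)Math.Ceiling(0.0056 * items.Count + 0.5));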