The last couple of days have been full of calculations and formulas, and I'm beginning to lose my mind (a little bit). So now I'm turning to you guys for some insight/help.
Here's the problem: I'm working with Bluetooth beacons that are placed all over an entire floor of a building to make an indoor GPS showcase. You can use your phone to connect to these beacons, which results in receiving your latitude and longitude from them. These numbers are large float/double values, looking like this:
lat: 52.501288451787076
lng: 6.079107635606511
The actual changes happen at the 4th and 5th digits after the decimal point. I'm converting these numbers to the Cartesian coordinate system using:
x = R * cos(lat) * cos(lon)
z = R * sin(lat)
The coordinates from this conversion are reasonably solid; they are numbers I can work with. I use them in a 3D engine (Unity3D) to make a real-time map where you can see where someone is walking.
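In Unity C#, that conversion might look roughly like this (a minimal sketch: the class name, method name and the value of R are illustrative, and note that the trigonometric functions expect radians rather than degrees):
using System;
using UnityEngine;

public static class GeoConversion
{
    // Earth radius in metres; whatever unit you pick here becomes the scene unit.
    const double R = 6371000.0;

    public static Vector3 ToCartesian(double latDeg, double lngDeg)
    {
        // The trigonometric functions expect radians, so convert the degrees first.
        double lat = latDeg * Math.PI / 180.0;
        double lng = lngDeg * Math.PI / 180.0;

        // Same formulas as above; doubles keep the precision of the incoming values.
        double x = R * Math.Cos(lat) * Math.Cos(lng);
        double z = R * Math.Sin(lat);
        return new Vector3((float)x, 0f, (float)z);
    }
}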
Now for the actual problem! These beacons are not entirely accurate. The numbers 'jump' up and down even when you lay your phone down, ranging (to stick with the latitude mentioned above) from 52.501280 to 52.501296. If we convert this and use it as coordinates in a 3D engine, the 'avatar' for a user jumps from one position to another (more small jumps than large ones).
What is a good way to cope with these jumping numbers? I've tried checking for big jumps and ignoring those, but the jitter is still too large, and a broader check results in almost no movement even when the phone is actually moving. Or is there a better way to convert the lat and long values for use in a 3D engine?
If there is someone who has had the same problem as me, some mathematical wonder who can give a good conversion/formula to start with, or someone who knows what I'm possibly doing wrong, then please help a fellow programmer out.
Moving Average
You could use this (taken from here: https://stackoverflow.com/a/1305/5089204):
Attention: please read the comments on this class, as this implementation has some flaws... It's just for a quick test and demonstration...
// A queue that keeps at most `Limit` elements: once the limit is reached,
// enqueueing a new item silently dequeues the oldest one.
public class LimitedQueue<T> : Queue<T> {
    private int limit = -1;

    public int Limit {
        get { return limit; }
        set { limit = value; }
    }

    public LimitedQueue(int limit)
        : base(limit) {
        this.Limit = limit;
    }

    // Note: `new` hides Queue<T>.Enqueue rather than overriding it,
    // which is one of the flaws mentioned above.
    public new void Enqueue(T item) {
        if (this.Count >= this.Limit) {
            this.Dequeue();
        }
        base.Enqueue(item);
    }
}
Just test it like this:
var queue = new LimitedQueue<float>(4);
queue.Enqueue(52.501280f);
var avg1 = queue.Average(); //52.50128
queue.Enqueue(52.501350f);
var avg2 = queue.Average(); //52.5013161
queue.Enqueue(52.501140f);
var avg3 = queue.Average(); //52.50126
queue.Enqueue(52.501022f);
var avg4 = queue.Average(); //52.5011978
queue.Enqueue(52.501635f);
var avg5 = queue.Average(); //52.50129
queue.Enqueue(52.501500f);
var avg6 = queue.Average(); //52.5013237
queue.Enqueue(52.501505f);
var avg7 = queue.Average(); //52.5014153
queue.Enqueue(52.501230f);
var avg8 = queue.Average(); //52.50147
The limited queue will not grow... You just define the number of elements you want to use (in this case I specified 4). The 5th element pushes the first one out, and so on...
The average will always slide along smoothly :-)
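Applied to the original question, one could keep one such queue per coordinate and feed the averaged values into the Cartesian conversion. A rough sketch (the queue size, class and method names are illustrative, not from the original code):
using System.Linq;
using UnityEngine;

public class SmoothedPosition : MonoBehaviour
{
    // One smoothing window per coordinate; 4-8 samples is a reasonable starting point to tune.
    private readonly LimitedQueue<double> latWindow = new LimitedQueue<double>(6);
    private readonly LimitedQueue<double> lngWindow = new LimitedQueue<double>(6);

    // Call this whenever the beacons deliver a new fix.
    public void OnBeaconFix(double lat, double lng)
    {
        latWindow.Enqueue(lat);
        lngWindow.Enqueue(lng);

        // Use the smoothed values instead of the raw fix for the avatar position.
        double smoothedLat = latWindow.Average();
        double smoothedLng = lngWindow.Average();
        // ... convert smoothedLat/smoothedLng to x/z as before and move the avatar.
    }
}
The larger the window, the smoother but laggier the position, so the queue size is a trade-off worth experimenting with.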
Related
How can I speed up my code, and what am I doing wrong? I have a two-dimensional array of cells that store data about what is in them. My map is only 100x100, and with just 10 colonists it already causes freezes, even though my game is still quite bare-bones.
Also, when should I build a route for a colonist? On every step he takes? After all, if a newly built wall unexpectedly appears in his way, he will have to change his route immediately.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PathFinding : MonoBehaviour
{
    public MainWorld MainWorld;
    public MainWorld.Cell CurrentWorldCell;
    public MainWorld.Cell NeighborWorldCell;
    public List<MainWorld.Cell> UnvisitedWorldCells;
    public List<MainWorld.Cell> VisitedWorldCells;
    public Dictionary<MainWorld.Cell, MainWorld.Cell> PathTraversed;
    public List<MainWorld.Cell> PathToObject;

    public List<MainWorld.Cell> FindPath(MainWorld.Cell StartWorldCell, string ObjectID)
    {
        CurrentWorldCell = new MainWorld.Cell();
        NeighborWorldCell = new MainWorld.Cell();
        UnvisitedWorldCells = new List<MainWorld.Cell>();
        VisitedWorldCells = new List<MainWorld.Cell>();
        PathTraversed = new Dictionary<MainWorld.Cell, MainWorld.Cell>();
        UnvisitedWorldCells.Add(StartWorldCell);
        while (UnvisitedWorldCells.Count > 0)
        {
            CurrentWorldCell = UnvisitedWorldCells[0];
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x, CurrentWorldCell.Position.y + 1];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x + 1, CurrentWorldCell.Position.y];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x, CurrentWorldCell.Position.y - 1];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            NeighborWorldCell = MainWorld.Data[CurrentWorldCell.Position.x - 1, CurrentWorldCell.Position.y];
            CheckWorldCell(CurrentWorldCell, NeighborWorldCell, ObjectID);
            UnvisitedWorldCells.Remove(CurrentWorldCell);
            if (CurrentWorldCell.ObjectID == ObjectID)
            {
                return CreatePath(StartWorldCell, CurrentWorldCell);
            }
        }
        return null;
    }

    public void CheckWorldCell(MainWorld.Cell CurrentWorldCell, MainWorld.Cell NeighborWorldCell, string ObjectID)
    {
        if (VisitedWorldCells.Contains(NeighborWorldCell) == false)
        {
            if (NeighborWorldCell.IsPassable == true ||
                NeighborWorldCell.ObjectID == ObjectID)
            {
                UnvisitedWorldCells.Add(NeighborWorldCell);
                VisitedWorldCells.Add(NeighborWorldCell);
                PathTraversed.Add(NeighborWorldCell, CurrentWorldCell);
            }
            else
            {
                VisitedWorldCells.Add(NeighborWorldCell);
            }
        }
    }

    public List<MainWorld.Cell> CreatePath(MainWorld.Cell StartWorldCell, MainWorld.Cell EndWorldCell)
    {
        PathToObject = new List<MainWorld.Cell>();
        PathToObject.Add(EndWorldCell);
        while (PathToObject[PathToObject.Count - 1] != StartWorldCell)
        {
            PathToObject.Add(PathTraversed[PathToObject[PathToObject.Count - 1]]);
        }
        PathToObject.Reverse();
        return PathToObject;
    }
}
On each iteration of your pathfinding loop, there is a check for each of the four neighboring cells. Within each check you are searching the VisitedWorldCells list; if there are many cells in that list, that could be your first issue. Next, you add an element to either one or three collections, and manipulating lists is slow. To create a path, you are making yet another list and adding a bunch of elements. Doing all of this 10 times per frame adds up quickly.
Use preallocated 2D boolean arrays; you know how large the map is:
bool[,] array = new bool[mapSizeX, mapSizeY];
Using this, you can check a cell directly with something like
if (!VisitedWorldCells[14, 16])
instead of searching the entire VisitedWorldCells list (which could also be a slow comparison, depending on what the MainWorld.Cell type is).
Preallocated arrays and directly checking/setting the values will speed this up by orders of magnitude.
As far as I can tell, your algorithm is a simple breadth-first search. The first step should be to replace the UnvisitedWorldCells list (often called the 'open set') with a queue, and the VisitedWorldCells list (often called the 'closed set') with a HashSet, or just use a 2D array for constant-time lookup. You might also consider a more compact representation of your node, like a simple pair of x/y coordinates.
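A minimal sketch of what that might look like (Neighbors, start, goal and the map sizes are hypothetical placeholders; MainWorld.Data and IsPassable come from the question's code):
// Breadth-first search over grid coordinates instead of Cell objects.
// The open set is a FIFO queue, visited is a preallocated 2D array for O(1) lookups.
Queue<Vector2Int> openSet = new Queue<Vector2Int>();
bool[,] visited = new bool[mapSizeX, mapSizeY];
Dictionary<Vector2Int, Vector2Int> cameFrom = new Dictionary<Vector2Int, Vector2Int>();

openSet.Enqueue(start);
visited[start.x, start.y] = true;

while (openSet.Count > 0)
{
    Vector2Int current = openSet.Dequeue();
    if (current == goal)
        break; // reconstruct the path by walking cameFrom back to start

    foreach (Vector2Int next in Neighbors(current)) // up/right/down/left, bounds-checked
    {
        if (!visited[next.x, next.y] && MainWorld.Data[next.x, next.y].IsPassable)
        {
            visited[next.x, next.y] = true;
            cameFrom[next] = current;
            openSet.Enqueue(next);
        }
    }
}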
The next improvement step would be Dijkstra's shortest-path algorithm, and there are plenty of examples of how it works if you search around a bit. It works in a similar way, but uses a priority queue (usually a min-heap) for the unvisited cells, which allows a different 'cost' to be specified per cell.
The step after that would probably be A*, which is an 'optimal' algorithm for pathfinding, though with only 100x100 nodes I would not expect a huge difference. It uses a heuristic to guess which nodes to visit first, so the priority queue does not just use the cost to reach the node, but also the estimated remaining cost.
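To illustrate the difference in priorities, a hedged sketch (start, goal and costSoFar are placeholders; PriorityQueue ships with .NET 6+, which may not be available in older Unity runtimes, in which case a hand-rolled min-heap does the same job):
// Manhattan distance is an admissible heuristic on a 4-connected grid.
int Heuristic(Vector2Int a, Vector2Int goal)
{
    return Mathf.Abs(a.x - goal.x) + Mathf.Abs(a.y - goal.y);
}

var open = new PriorityQueue<Vector2Int, int>();

// Dijkstra: priority is just the cost accumulated so far.
open.Enqueue(start, costSoFar);

// A*: priority is the cost so far plus the estimated remaining cost.
open.Enqueue(start, costSoFar + Heuristic(start, goal));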
As always when it comes to performance, you should measure and profile your code to find the bottlenecks.
When to pathfind is a more complicated issue. A fairly simple and probably effective solution is to create a path once and just follow it; if a node on it becomes blocked, you redo the pathfinding then. You could also redo pathfinding every n seconds in case a better path has appeared. If you have multiple units, you might introduce additional requirements like 'pushing' blocking units out of the way, keeping groups of units compact as they navigate tight passages, and so on. You can probably spend years taking every possible feature into consideration.
How would I go about generating a list of random numbers from a serialized variable, then comparing those generated numbers to a target number?
What I want to do is make a program that takes in a number, such as 42, and generates that many random numbers into a list while still keeping the original variable (in this case 42) to be referenced later. Super ham-handed pseudo-code(?) example:
public class Generate {
    [SerializeField]
    int generate = 42;
    List<int> results = new List<int>;

    public void Example() {
        int useGenerate = generate;
        //Incoming pseudo-code (rather, code that I don't know how to do, exactly)
        while (useGenerate => 1) {
            results.add (random.range(0,100)); //Does this make a number between 0 and 99?
            int useGenerate = useGenerate - 1;
        }
    }
}
I think this will do something to that effect, once I figure out how to actually code it properly (Still learning).
From there, I'd like to compare the list of results to a target number, to see how many of them pass a certain threshold, in this case greater than or equal to 50. I assume this would require a "foreach" thingamabobber, but I'm not sure how to go about doing that, really. With each "success", I'd like to increment a variable to be returned at a later point. I guess something like this:
int success = 50;
int target = 0;
foreach int in List<results> {
    if (int => success) {
        int target = target + 1;
    }
}
If I have the right idea, please just teach me how to properly code it. If you have any suggestions on how to improve it (like the whole ++ and -- thing I see here and there but don't know how to use), please teach me that, too. I looked around the web for using foreach with lists and it seemed really complicated and people were seemingly pulling some new bit of information from the Aether to include in the operation. Thanks for reading, and thanks in advance for any advice!
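For reference, here is a minimal sketch of what the pseudo-code above seems to be aiming for, assuming Unity's C# API (the int overload of UnityEngine.Random.Range excludes the upper bound, so Range(0, 100) does indeed return 0-99):
using System.Collections.Generic;
using UnityEngine;

public class Generate : MonoBehaviour {
    [SerializeField]
    int generate = 42;                        // how many numbers to roll; never modified, so it can be referenced later

    List<int> results = new List<int>();

    public void Example() {
        results.Clear();
        for (int i = 0; i < generate; i++) {  // loop `generate` times without touching `generate` itself
            results.Add(Random.Range(0, 100)); // int overload: returns 0..99 inclusive
        }
    }

    public int CountSuccesses(int threshold) {
        int successes = 0;
        foreach (int roll in results) {       // iterate the list of generated numbers
            if (roll >= threshold) {          // ">=" is the operator the pseudo-code wrote as "=>"
                successes++;                  // "++" is shorthand for successes = successes + 1
            }
        }
        return successes;
    }
}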
I'm implementing the K-nearest neighbours classification algorithm in C# for a training and testing set of about 20,000 samples each, and 25 dimensions.
There are only two classes, represented by '0' and '1' in my implementation. For now, I have the following simple implementation:
// testSamples and trainSamples consist of about 20k vectors each, with 25 dimensions
// trainClasses contains 0 or 1 signifying the corresponding class for each sample in trainSamples
static int[] TestKnnCase(IList<double[]> trainSamples, IList<double[]> testSamples, IList<int> trainClasses, int K)
{
    Console.WriteLine("Performing KNN with K = " + K);

    var testResults = new int[testSamples.Count()];
    var testNumber = testSamples.Count();
    var trainNumber = trainSamples.Count();
    // Declaring these here so that I don't have to 'new' them over and over again in the main loop,
    // just to save some overhead
    var distances = new double[trainNumber][];
    for (var i = 0; i < trainNumber; i++)
    {
        distances[i] = new double[2]; // Will store both distance and index in here
    }

    // Performing KNN ...
    for (var tst = 0; tst < testNumber; tst++)
    {
        // For every test sample, calculate distance from every training sample
        Parallel.For(0, trainNumber, trn =>
        {
            var dist = GetDistance(testSamples[tst], trainSamples[trn]);
            // Storing distance as well as index
            distances[trn][0] = dist;
            distances[trn][1] = trn;
        });

        // Sort distances and take top K (?What happens in case of multiple points at the same distance?)
        var votingDistances = distances.AsParallel().OrderBy(t => t[0]).Take(K);

        // Do a 'majority vote' to classify test sample
        var yea = 0.0;
        var nay = 0.0;
        foreach (var voter in votingDistances)
        {
            if (trainClasses[(int)voter[1]] == 1)
                yea++;
            else
                nay++;
        }
        if (yea > nay)
            testResults[tst] = 1;
        else
            testResults[tst] = 0;
    }

    return testResults;
}

// Calculates and returns the square of the Euclidean distance between two vectors
static double GetDistance(IList<double> sample1, IList<double> sample2)
{
    var distance = 0.0;
    // assume sample1 and sample2 are valid i.e. same length
    for (var i = 0; i < sample1.Count; i++)
    {
        var temp = sample1[i] - sample2[i];
        distance += temp * temp;
    }
    return distance;
}
This takes quite a bit of time to execute. On my system it takes about 80 seconds to complete. How can I optimize this, while ensuring that it will also scale to a larger number of data samples? As you can see, I've tried using PLINQ and parallel for loops, which did help (without these, it was taking about 120 seconds). What else can I do?
I've read about KD-trees being efficient for KNN in general, but every source I read stated that they're not efficient for higher dimensions.
I also found this stackoverflow discussion about this, but it seems like this is 3 years old, and I was hoping that someone would know about better solutions to this problem by now.
I've looked at machine learning libraries in C#, but for various reasons I don't want to call R or C code from my C# program, and some other libraries I saw were no more efficient than the code I've written. Now I'm just trying to figure out how I could write the most optimized code for this myself.
Edited to add - I cannot reduce the number of dimensions using PCA or something. For this particular model, 25 dimensions are required.
Whenever you are attempting to improve the performance of code, the first step is to analyze the current performance to see exactly where it is spending its time. A good profiler is crucial for this. In my previous job I was able to use the dotTrace profiler to good effect; Visual Studio also has a built-in profiler. A good profiler will tell you exactly where your code is spending time, method by method or even line by line.
That being said, a few things come to mind in reading your implementation:
You are parallelizing some inner loops. Could you parallelize the outer loop instead? There is a small but nonzero cost associated with a delegate call (see here or here) which may be hitting you in the "Parallel.For" callback.
Similarly, there is a small performance penalty for indexing into an array through its IList interface. You might consider declaring the arguments to "GetDistance()" explicitly as arrays.
How large is K compared to the size of the training array? You are completely sorting the "distances" array and taking the top K, but if K is much smaller than the array size it might make sense to use a partial sort / selection algorithm, for instance by using a SortedSet and evicting the largest element whenever the set size exceeds K.
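A rough sketch of that selection idea (illustrative only; shown sequentially inside the per-test-sample loop for clarity, and using the training index as a tie-breaker so that equal distances are not collapsed by the set):
// Keep only the K nearest (distance, index) pairs seen so far.
// Tuples compare by distance first, then index, so ties on distance survive.
var nearest = new SortedSet<(double Distance, int Index)>();
for (var trn = 0; trn < trainNumber; trn++)
{
    nearest.Add((GetDistance(testSamples[tst], trainSamples[trn]), trn));
    if (nearest.Count > K)
    {
        nearest.Remove(nearest.Max); // evict the farthest of the K+1 candidates
    }
}
// 'nearest' now holds the K closest training samples for this test sample.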
My Problem
I have a data stream coming from a program that connects to a GPS device and an inclinometer (they are actually both stand alone devices, not a cellphone) and logs the data while the user drives around in a car. The essential data that I receive are:
Latitude/Longitude - from GPS, with a resolution of about +-5 feet,
Vehicle land-speed - from GPS, in knots, which I convert to MPH
Sequential record index - from the database, it's an auto-incrementing integer and nothing ever gets deleted,
some other stuff that isn't pertinent to my current problem.
This data gets stored in a database and read back from the database into an array. From start to finish, the recording order is properly maintained, so even though the timestamp recorded from the GPS device has only 1-second precision and we sample at 5 Hz, the absolute value of the time is of no interest and the insertion order suffices.
In order to aid in analyzing the data, a user performs a very basic data input task of selecting the "start" and "end" of curves on the road from the collected path data. I get a map image from Google and I draw the curve data on top of it. The user zooms into a curve of interest, based on their own knowledge of the area, and clicks two points on the map. Google is actually very nice and reports where the user clicked in Latitude/Longitude rather than me having to try to backtrack it from pixel values, so the issue of where the user clicked in relation to the data is covered.
The zooming in on the curve clips the data: I only retrieve data that falls in the Lat/Lng window defined by the zoom level. Most of the time, I'm dealing with fewer than 300 data points, when a single driving session could result in over 100k data points.
I need to find the subsegment of the curve data that falls between those two click points.
What I've Tried
Originally, I took the two points that are closest to each click point and the curve was anything that fell between them. That worked until we started letting the drivers make multiple passes over the road. Typically, a driver will make 2 back-and-forth runs over an interesting piece of road, giving us 4 total passes. If you take the two closest points to the two click points, then you might end up with the first point corresponding to a datum on one pass, and the second point corresponding to a datum on a completely different pass. The points in the sequence between these two points would then extend far beyond the curve. And, even if you got lucky and all the data points found were both on the same pass, that would only give you one of the passes, and we need to collect all passes.
For a while, I had a solution that worked much better. I calculated two new sequences representing the distance from each data point to each of the click points, then the approximate second derivative of that distance, looking for the inflection points of the distance from the click point over the data points. I reasoned that the inflection point meant that the points previous to the inflection were getting closer to the click point and the points after the inflection were getting further away from the click point. Doing this iteratively over the data points, I could group the curves as I came to them.
Perhaps some code is in order (this is C#, but don't worry about replying in kind, I'm capable of reading most languages):
static List<List<LatLngPoint>> GroupCurveSegments(List<LatLngPoint> dataPoints, LatLngPoint start, LatLngPoint end)
{
    var withDistances = dataPoints.Select(p => new
    {
        ToStart = p.Distance(start),
        ToEnd = p.Distance(end),
        DataPoint = p
    }).ToArray();
    var set = new List<List<LatLngPoint>>();
    var currentSegment = new List<LatLngPoint>();
    for (int i = 0; i < withDistances.Length - 2; ++i)
    {
        var a = withDistances[i];
        var b = withDistances[i + 1];
        var c = withDistances[i + 2];
        // the edge of the map can clip the data, so the continuity of
        // the data is not exactly mapped to the continuity of the array.
        var ab = b.DataPoint.RecordID - a.DataPoint.RecordID;
        var bc = c.DataPoint.RecordID - b.DataPoint.RecordID;
        var inflectStart = Math.Sign(a.ToStart - b.ToStart) * Math.Sign(b.ToStart - c.ToStart);
        var inflectEnd = Math.Sign(a.ToEnd - b.ToEnd) * Math.Sign(b.ToEnd - c.ToEnd);
        // if we haven't started a segment yet and we aren't obviously between segments
        if ((currentSegment.Count == 0 && (inflectStart == -1 || inflectEnd == -1)
            // if we have started a segment but we haven't changed directions away from it
            || currentSegment.Count > 0 && (inflectStart == 1 && inflectEnd == 1))
            // and we're continuous on the data collection path
            && ab == 1
            && bc == 1)
        {
            // extend the segment
            currentSegment.Add(b.DataPoint);
        }
        else if (
            // if we have a segment collected
            currentSegment.Count > 0
            // and we changed directions away from one of the points
            && (inflectStart == -1
                || inflectEnd == -1
                // or we lost data continuity
                || ab > 1
                || bc > 1))
        {
            // clip the segment and start a new one
            set.Add(currentSegment);
            currentSegment = new List<LatLngPoint>();
        }
    }
    return set;
}
This worked great until we started advising the drivers to drive at around 15 MPH through turns (supposedly, it helps reduce sensor error; I'm personally not entirely convinced that what we're seeing at higher speed is error, but I'm probably not going to win that argument). A car traveling at 15 MPH is traveling at 22 ft/s. Sampling this data at 5 Hz means that each data point is about four and a half feet apart. However, our GPS unit's precision is only about 5 feet. So just the jitter of the GPS data itself can cause an inflection point in the data at such low speeds and high sample rates (technically, at this sample rate, you'd have to go at least 35 MPH to avoid this problem, but it seems to work okay at 25 MPH in practice).
Also, we're probably bumping the sampling rate up to 10-15 Hz pretty soon. You'd then need to drive at about 45 MPH to avoid my inflection problem, which isn't safe on most of the curves of interest. My current procedure ends up splitting the data into dozens of subsegments over road sections that I know had only 4 passes. One section with only 300 data points came out to 35 subsegments. The rendering of the start and end markers of each pass (a small icon) showed quite clearly that each real pass was getting chopped up into several pieces.
Where I'm Thinking of Going
1. Find the minimum distance of all points to both the start and end click points.
2. Find all points that are within +10 feet of that distance.
3. Group each set of points by data continuity, i.e. each group should be continuous in the database, because more than one point on a particular pass could fall within the distance radius.
4. Take the data mid-point of each of those groups for each click point as the representative start and end for each pass.
5. Pair up points in the two sets per click point by those that would minimize the record index distance between each "start" and "end".
Halp?!
But I had tried something like this once before and it didn't work very well. Step #2 can return an unreasonably large number of points if the user doesn't click particularly close to where they intend, and too few points if the user clicks extremely close to where they intend. I'm not sure just how computationally intensive step #3 will be. And step #5 will fail if the driver drives over a particularly long curve and immediately turns around just after the start and end to perform the subsequent passes. We might be able to train the drivers not to do this, but I don't like taking chances on such things. So I could use some help figuring out how to clip and group this path that doubles back over itself into subsegments for the passes over the curve.
Okay, so here is what I ended up doing, and it seems to work well for now. I like that it is a little simpler to follow than before. I decided that Step #4 from my question was not necessary: the exact points used as the start and end aren't critical, so I just take the first point that is within the desired radius of the first click point and the last point within the desired radius of the second click point, and take everything in between.
protected static List<List<T>> GroupCurveSegments<T>(List<T> dbpoints, LatLngPoint start, LatLngPoint end) where T : BBIDataPoint
{
    var withDistances = dbpoints.Select(p => new
    {
        ToStart = p.Distance(start),
        ToEnd = p.Distance(end),
        DataPoint = p
    }).ToArray();

    var minToStart = withDistances.Min(p => p.ToStart) + 10;
    var minToEnd = withDistances.Min(p => p.ToEnd) + 10;

    bool startFound = false,
        endFound = false,
        oldStartFound = false,
        oldEndFound = false;

    var set = new List<List<T>>();
    var cur = new List<T>();
    foreach (var a in withDistances)
    {
        // save the previous values, because they
        // impact the future values.
        oldStartFound = startFound;
        oldEndFound = endFound;

        startFound =
            !oldStartFound && a.ToStart <= minToStart
            || oldStartFound && !oldEndFound
            || oldStartFound && oldEndFound
                && (a.ToStart <= minToStart || a.ToEnd <= minToEnd);

        endFound =
            !oldEndFound && a.ToEnd <= minToEnd
            || !oldStartFound && oldEndFound
            || oldStartFound && oldEndFound
                && (a.ToStart <= minToStart || a.ToEnd <= minToEnd);

        if (startFound || endFound)
        {
            cur.Add(a.DataPoint);
        }
        else if (cur.Count > 0)
        {
            set.Add(cur);
            cur = new List<T>();
        }
    }

    // if a data stream ended near the end of the curve,
    // then the loop will not have saved the pass yet, so save it here.
    if (cur.Count > 0)
    {
        set.Add(cur);
    }
    return set;
}
I'm gathering light statistics from the EXIF data of a large collection of photos, and I'm trying to find the simplest way (i.e. performance doesn't matter) of translating Exposure Time values to/from usable data. There is (as far as I can find) no standard for what values camera manufacturers might use, i.e. I can't just scan the web for random images and hard-code a map.
Here is a sample set of values I've encountered (" indicates seconds):
279", 30", 5", 3.2", 1.6", 1.3", 1", 1/1.3, 1/1.6, 1/2, 1/2.5, 1/3, 1/4, 1/5, 1/8, 1/13, 1/8000, 1/16000
Also take into consideration that I'd like to find the average (mean)... but it should be one of the actual shutter speeds collected and not just some arbitrary number.
EDIT:
By usable data I mean some sort of (creative?) numbering system that I can convert to/from for performing calculations. I thought about just multiplying everything by 1,000,000, except that some of the fractions, when divided out, are quite exotic.
EDIT #2:
To clarify, I'm using ExposureTime instead of ShutterSpeed because it contains photographer-friendly values, e.g. 1/50. ShutterSpeed is more of an approximation (which varies between camera manufacturers) and leads to values such as 1/49.
You want to parse them into some kind of time-duration object.
A simple way, looking at that data, would be to check whether " or / occurs: if ", parse as seconds; if /, parse as a fraction of a second. I don't really understand what else you could mean. For an example you'd need to specify a language; also, there might be a parser out there already.
Shutter speed is encoded in the EXIF metadata as an SRATIONAL, 32 bits for the numerator and 32 bits for the denominator. Sample code that retrieves it, using System.Drawing:
var bmp = new Bitmap(@"c:\temp\canon-ixus.jpg");
if (bmp.PropertyIdList.Contains(37377)) {
    byte[] spencoded = bmp.GetPropertyItem(37377).Value;
    int numerator = BitConverter.ToInt32(spencoded, 0);
    int denominator = BitConverter.ToInt32(spencoded, 4);
    Console.WriteLine("Shutter speed = {0}/{1}", numerator, denominator);
}
Output: Shutter speed = 553859/65536, sample image retrieved here.
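Note that tag 37377 (ShutterSpeedValue) stores an APEX value rather than a time in seconds, so if you want an actual duration you would still convert it. A short sketch based on the EXIF definition (exposure time = 2^-APEX), reusing the numerator/denominator from the snippet above:
// APEX value -> exposure time in seconds (553859/65536 is about 8.45 APEX, i.e. roughly 1/350 s).
double apex = numerator / (double)denominator;
double seconds = Math.Pow(2, -apex);
Console.WriteLine("Exposure time ~ {0} s (about 1/{1})", seconds, Math.Round(1 / seconds));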
It seems there are three types of string you will encounter:
String with double quotes " for seconds
String with leading 1/
String with no special characters
I propose you simply test for these conditions and parse the value accordingly using floats:
string[] InputSpeeds = new[] { "279\"", "30\"", "5\"", "3.2\"", "1.6\"", "1.3\"", "1\"", "1/1.3", "1/1.6", "1/2", "1/2.5", "1/3", "1/4", "1/5", "1/8", "1/13", "1/8000", "1/16000" };
List<float> OutputSpeeds = new List<float>();
foreach (string s in InputSpeeds)
{
    float ConvertedSpeed;
    if (s.Contains("\""))
    {
        float.TryParse(s.Replace("\"", String.Empty), out ConvertedSpeed);
        OutputSpeeds.Add(ConvertedSpeed);
    }
    else if (s.Contains("1/"))
    {
        float.TryParse(s.Remove(0, 2), out ConvertedSpeed);
        if (ConvertedSpeed == 0)
        {
            OutputSpeeds.Add(0F);
        }
        else
        {
            OutputSpeeds.Add(1 / ConvertedSpeed);
        }
    }
    else
    {
        float.TryParse(s, out ConvertedSpeed);
        OutputSpeeds.Add(ConvertedSpeed);
    }
}
If you want TimeSpans, simply change the List<float> to a List<TimeSpan> and, instead of adding the float to the list, use TimeSpan.FromSeconds(ConvertedSpeed).
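As for the mean that should itself be one of the collected speeds: one option (a sketch, not the only way) is to take the arithmetic mean of the converted values and then pick the collected speed closest to it:
// Requires System.Linq. Average the converted speeds, then snap to the nearest value that was actually collected.
float mean = OutputSpeeds.Average();
float representative = OutputSpeeds.OrderBy(v => Math.Abs(v - mean)).First();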