So I have a Bing Maps control, a GeoCoordinateWatcher getting GPS latitude and longitude, a timer for position intervals, and a RouteQuery turning the GPS coordinates into a path for the map.
The points are correct to within a couple of meters. The problem is that if I am near an intersection or a side street when the route query runs, it takes me off on a half-mile expedition that I never went on.
I have tried both default accuracy and high accuracy but I get the same results. It actually seems to be worse with high accuracy.
Has anyone else had this issue?
RouteQuery rq = new RouteQuery();
List<GeoCoordinate> cords = new List<GeoCoordinate>();
foreach (classes.PositionObj posObj in waypoints)
{
    cords.Add(new GeoCoordinate(Convert.ToDouble(posObj.Lattitude), Convert.ToDouble(posObj.Longitude)));
}
rq.Waypoints = cords;
rq.QueryCompleted += rw_QueryCompleted;
rq.QueryAsync();
void rw_QueryCompleted(object sender, QueryCompletedEventArgs<Route> e)
{
    try
    {
        if (e.Error == null)
        {
            Route myroute = e.Result;
            mapRoute = new MapRoute(myroute);
            mapRoute.Color = (Color)Application.Current.Resources["PhoneAccentColor"];
            myMap.AddRoute(mapRoute);
        }
    }
    catch (Exception error)
    {
        MessageBox.Show(error.Message);
        MessageBox.Show(error.StackTrace);
        leaveFeedback(error.StackTrace);
    }
}
I haven't been able to test it yet, but I think this is the answer I am looking for.
rq.RouteOptimization = RouteOptimization.MinimizeDistance;
This is the documentation I found:
http://msdn.microsoft.com/en-US/library/windowsphone/develop/microsoft.phone.maps.services.routeoptimization(v=vs.105).aspx
http://msdn.microsoft.com/en-US/library/windowsphone/develop/microsoft.phone.maps.services.routequery(v=vs.105).aspx
Try setting the travel mode:
routeQuery.TravelMode = TravelMode.Driving;
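Putting the two suggestions together, the query setup might look like the sketch below (against the WP8 Microsoft.Phone.Maps.Services API; `cords` is the GeoCoordinate list built in the question, and whether MinimizeDistance actually stops the detours is untested):

```csharp
using System.Collections.Generic;
using System.Device.Location;
using Microsoft.Phone.Maps.Services;

// Sketch: ask the routing service to minimize distance between the
// recorded waypoints so it is less likely to detour onto side streets.
RouteQuery rq = new RouteQuery();
rq.TravelMode = TravelMode.Driving;
rq.RouteOptimization = RouteOptimization.MinimizeDistance;
rq.Waypoints = cords;                 // GeoCoordinate list from the question
rq.QueryCompleted += rw_QueryCompleted;
rq.QueryAsync();
```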
Right now I'm trying to make a simple game editor in C#; however, there's a problem when the user adds more than one platform to the screen:
private void tmrRunGame_Tick(object sender, EventArgs e)
{
    foreach (Platform plat in platList)
    {
        if (plat.getBounds().IntersectsWith(player.getBounds()))
        {
            tmrGravity.Stop();
            isColliding = true;
        }
        else
        {
            isColliding = false;
        }
    }
    if (player.getY() < 500 && !isJumping && !isColliding)
    {
        tmrGravity.Start();
    }
    else
    {
        tmrGravity.Stop();
    }
}
This code only stops the player from falling through the last created platform; the player falls right through all of the ones created before it. What makes this all the more confusing is that the program is detecting collisions for all of the platforms, but only doing what it's supposed to for one! It's very frustrating and any help is appreciated.
This is how I'm adding platforms if that helps in any way:
private void pbPlatformSelect_MouseClick(object sender, MouseEventArgs e)
{
    Platform plat = new Platform(100, 10, 50, 50);
    plat.drawTo(this);
    platList.Add(plat);
}
Replace the foreach loop with this code:
var playerBounds = player.getBounds();
isColliding = platList.Any(plat => plat.getBounds().IntersectsWith(playerBounds));
if (isColliding) tmrGravity.Stop();
If you don't like LINQ, you can change your loop like this:
var playerBounds = player.getBounds();
isColliding = false;
foreach (var plat in platList)
{
    if (plat.getBounds().IntersectsWith(playerBounds))
    {
        isColliding = true;
        tmrGravity.Stop();
        break;
    }
}
I think you want to break out of the foreach loop once you determine you have collided with something. If you have 3 platforms and you collide with the first, isColliding is true, but if the player doesn't collide with the second platform, the loop switches isColliding back to false. In the end, isColliding holds whatever the intersection result of the last platform in the list happens to be.
So try putting `break;` right after `isColliding = true;`.
This is also an efficiency improvement: if you have 1,000 platforms and the player collides with the first one, we don't really care about the others (from what I can tell), and we save ourselves 999 iterations of the loop.
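To see why only the last platform "wins", here is the flag logic stripped down to a self-contained illustration (the bool array stands in for the per-platform intersection tests):

```csharp
// Illustration: the flag is overwritten on every iteration,
// so only the LAST platform's result survives the loop.
bool[] hits = { true, false, false };  // player overlaps platform 0 only
bool isColliding = false;
foreach (bool hit in hits)
{
    if (hit)
        isColliding = true;
    else
        isColliding = false;           // wipes out the earlier true
}
// isColliding ends up false even though platform 0 collided.
// Adding "break;" right after "isColliding = true;" keeps it true.
```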
I'm doing Microsoft Band development on Windows Phone 8.1, trying to figure out the full capability of the UV sensor. I found a few examples that read the UV level. They show that an enum can be used:
namespace Microsoft.Band.Sensors
{
    public enum UVIndexLevel
    {
        None = 0,
        Low = 1,
        Medium = 2,
        High = 3,
        VeryHigh = 4,
    }
}
I would also like to know the scale behind these enum values. As far as I know, the UV index runs from 0 to 11+. What index ranges do those enum values correspond to?
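The Band SDK doesn't document exact thresholds, but if the enum follows the standard WHO UV index scale, a rough mapping might look like this sketch (the ranges below are an assumption based on the WHO scale, not confirmed Band firmware behavior):

```csharp
// ASSUMPTION: mapping a WHO-style UV index (0..11+) onto the Band enum.
// The actual thresholds used by the Band firmware are not documented.
static UVIndexLevel FromWhoIndex(int uvIndex)
{
    if (uvIndex <= 0) return UVIndexLevel.None;
    if (uvIndex <= 2) return UVIndexLevel.Low;      // WHO: Low 1-2
    if (uvIndex <= 5) return UVIndexLevel.Medium;   // WHO: Moderate 3-5
    if (uvIndex <= 7) return UVIndexLevel.High;     // WHO: High 6-7
    return UVIndexLevel.VeryHigh;                   // WHO: Very High / Extreme 8+
}
```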
I'm basically using these lines of code:
{
    bandClient.SensorManager.UV.ReadingChanged += Ultraviolet_ReadingChanged;
    await bandClient.SensorManager.UV.StartReadingsAsync();
    // ... later on in the code ...
    await bandClient.SensorManager.UV.StopReadingsAsync();
    bandClient.SensorManager.UV.ReadingChanged -= Ultraviolet_ReadingChanged;
}
The async method:
async void Ultraviolet_ReadingChanged(object sender, BandSensorReadingEventArgs<IBandUVReading> e)
{
    IBandUVReading ultra = e.SensorReading;
    UVIndexLevel potatoUV = ultra.IndexLevel;
}
But for some reason, I don't get index levels most of the time. I sometimes get readings around 8 to 10 million (or in the thousands) when in direct sunlight. The values come back as `int` (though it sometimes gives the enum values).
I am interested in how I can measure it. Also, exactly what kind of UV is it reading? I know there are many kinds of UV exposure. How can I use this data?
If it's a raw range, then maybe I can map it to a range value, but I need to somehow sample it, determine what UV index it corresponds to, give that information to the user, and use the index in later calculations.
ALSO...
I also happened to hit a bug. While testing the UV sensor standing in direct light, the reading did not display. Only once I moved to another UV level did it change (but never back to the first one). It seems the first reading either does not change (since the method is `ReadingChanged`) or is the default value, for whatever sense that makes. Is there a way to request a reading on a button click?
If need be, I can dig up the examples I used for more depth on the code, but most of it is here.
A new Band SDK will be available soon and will fix the UV sensor data not being returned correctly. Stay tuned!
This sensor is a bit different from the others: it requires the user to take an action to acquire the data before the data can be gathered.
Here is code which is working on a WP8.1 device with a Band:
DateTime start;
private async void Button_Click(object sender, RoutedEventArgs e)
{
    try
    {
        this.viewModel.StatusMessage = "Connecting to Band";
        // Get the list of Microsoft Bands paired to the phone.
        IBandInfo[] pairedBands = await BandClientManager.Instance.GetBandsAsync();
        if (pairedBands.Length < 1)
        {
            this.viewModel.StatusMessage = "This sample app requires a Microsoft Band paired to your phone. Also make sure that you have the latest firmware installed on your Band, as provided by the latest Microsoft Health app.";
            return;
        }
        // Connect to Microsoft Band.
        using (IBandClient bandClient = await BandClientManager.Instance.ConnectAsync(pairedBands[0]))
        {
            start = DateTime.Now;
            this.viewModel.StatusMessage = "Reading ultraviolet sensor";
            // Subscribe to ultraviolet data.
            bandClient.SensorManager.UV.ReadingChanged += UV_ReadingChanged;
            await bandClient.SensorManager.UV.StartReadingsAsync();
            // Receive UV data for a while.
            await Task.Delay(TimeSpan.FromMinutes(5));
            await bandClient.SensorManager.UV.StopReadingsAsync();
            bandClient.SensorManager.UV.ReadingChanged -= UV_ReadingChanged;
        }
        this.viewModel.StatusMessage = "Done";
    }
    catch (Exception ex)
    {
        this.viewModel.StatusMessage = ex.ToString();
    }
}
void UV_ReadingChanged(object sender, Microsoft.Band.Sensors.BandSensorReadingEventArgs<Microsoft.Band.Sensors.IBandUVReading> e)
{
    var span = (DateTime.Now - start).TotalSeconds;
    IBandUVReading ultra = e.SensorReading;
    string text = string.Format("Ultraviolet = {0}\nTime Stamp = {1}\nTime Span = {2}\n", ultra.IndexLevel, ultra.Timestamp, span);
    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { this.viewModel.StatusMessage = text; }).AsTask();
    start = DateTime.Now;
}
I am using the FaceTracking application from the Kinect for Windows SDK samples.
I have read that head pose angles help to detect head movement (a head nod or a head shake).
At the moment, I have the values of the head pose angles.
My question is how to calculate the difference in head pose angles between frames.
I know that I need the values from the previous frame, but I don't know how to store the previous frame's values and use them for further analysis.
Does anyone have an idea on this topic?
Looking forward to your suggestions.
Thank you so much in advance.
Edit: I have tried one approach: store the previous frame and display the difference between the angles. But my frame is not updating and I am not getting the expected difference between the angles. I took the code below from the FaceTracking application. Could someone please tell me where I am going wrong?
internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage, Skeleton skeletonOfInterest)
{
    this.skeletonTrackingState = skeletonOfInterest.TrackingState;
    if (this.skeletonTrackingState != SkeletonTrackingState.Tracked)
    {
        // nothing to do with an untracked skeleton.
        return;
    }
    if (this.faceTracker == null)
    {
        try
        {
            this.faceTracker = new FaceTracker(kinectSensor);
        }
        catch (InvalidOperationException)
        {
            // During some shutdown scenarios the FaceTracker
            // is unable to be instantiated. Catch that exception
            // and don't track a face.
            Debug.WriteLine("AllFramesReady - creating a new FaceTracker threw an InvalidOperationException");
            this.faceTracker = null;
        }
    }
    if (this.faceTracker != null)
    {
        FaceTrackFrame frame = this.faceTracker.Track(
            colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);
        this.lastFaceTrackSucceeded = frame.TrackSuccessful;
        if (this.lastFaceTrackSucceeded)
        {
            /*
            pitch = frame.Rotation.X;
            yaw = frame.Rotation.Y;
            roll = frame.Rotation.Z;*/
            Vector3DF faceRotation = frame.Rotation;
            pose = string.Format("Pitch:\t{0:+00;-00}°\nYaw:\t{1:+00;-00}°\nRoll:\t{2:+00;-00}°", faceRotation.X, faceRotation.Y, faceRotation.Z);
            if (oldFrame != null)
            {
                Vector3DF faceRotation1 = oldFrame.Rotation;
                difference = string.Format("Pitch:\t{0:+00;-00}°\nYaw:\t{1:+00;-00}°\nRoll:\t{2:+00;-00}°", faceRotation.X - faceRotation1.X, faceRotation.Y - faceRotation1.Y, faceRotation.Z - faceRotation1.Z);
            }
            if (faceTriangles == null)
            {
                // only need to get this once. It doesn't change.
                faceTriangles = frame.GetTriangles();
            }
            this.facePoints = frame.GetProjected3DShape();
        }
        oldFrame = frame; // FaceTrackFrame oldFrame; (a field)
    }
}
I have made changes to the following line to copy the current frame to another frame object, using the Clone method:
oldFrame = (FaceTrackFrame)frame.Clone();
Now it gives me the correct difference.
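An alternative to cloning the whole frame is to keep only the previous rotation: Vector3DF is a struct, so assigning it makes a value copy and no Clone() is needed. A sketch (the field name and thresholds are illustrative, not from the sample):

```csharp
// Keep just the previous rotation instead of the whole FaceTrackFrame.
private Vector3DF? previousRotation;   // null until the first tracked frame

// Inside OnFrameReady, after a successful track:
Vector3DF faceRotation = frame.Rotation;
if (previousRotation.HasValue)
{
    float dPitch = faceRotation.X - previousRotation.Value.X;
    float dYaw   = faceRotation.Y - previousRotation.Value.Y;
    float dRoll  = faceRotation.Z - previousRotation.Value.Z;
    // e.g. a large |dYaw| over successive frames suggests a head shake,
    // a large |dPitch| suggests a nod.
}
previousRotation = faceRotation;       // value copy; no Clone() required
```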
I'm trying to fit two POIs on a Bing map in a WinRT universal app, typically the user position and one other point, so the user can see the map between the two points. I have this code:
private void centerMap(Geoposition pos, MapPoi pos2)
{
    try
    {
#if WINDOWS_APP
        //TODO PC Maps
#elif WINDOWS_PHONE_APP
        GeoboundingBox geoboundingBox = new Windows.Devices.Geolocation.GeoboundingBox(
            new BasicGeoposition() { Latitude = pos.Coordinate.Latitude, Longitude = pos.Coordinate.Longitude },
            new BasicGeoposition() { Latitude = pos2.lat, Longitude = pos2.lng });
        map1._map.TrySetViewBoundsAsync(geoboundingBox, null, MapAnimationKind.Linear);
#endif
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.ToString());
    }
}
So one more time I have a problem I cannot solve myself.
First, I need to understand something that I must be doing wrong: I'm getting a System.ArgumentException in the GeoboundingBox constructor. Here is the stack trace:
A first chance exception of type 'System.ArgumentException' occurred in EspartAPP.WindowsPhone.exe
System.ArgumentException: Value does not fall within the expected range.
at Windows.Devices.Geolocation.GeoboundingBox..ctor(BasicGeoposition northwestCorner, BasicGeoposition southeastCorner)
at EspartAPP.Modulos.Modulo_Mapa.centerMap(Geoposition pos, MapPoi pos2)
Here are the coordinates used for the test:
pos: Lat 40.4564664510315, Lng -3.65939843190291
pos2: Lat 40.4579103, Lng -3.6532357
Both coordinates are correct: one is the user position just obtained from GPS, the other is a nearby location.
I can't understand what can be going wrong; there must be something I need to know.
The code I posted just throws that exception, but if I add +1 to the pos latitude it works, and it seems to do it right.
If I pass pos2 first (as northwestCorner) it doesn't throw an exception, but it zooms out to the maximum so I can almost see the entire map (which is obviously wrong).
There must be some rule I'm ignoring; maybe I have to calculate which coordinate should be in the northwest position?
EDIT: It was exactly that. Working code:
private void centerMap(Geoposition pos, MapPoi pos2)
{
    try
    {
        BasicGeoposition nw = new BasicGeoposition();
        nw.Latitude = Math.Max(pos.Coordinate.Latitude, pos2.lat);
        nw.Longitude = Math.Min(pos.Coordinate.Longitude, pos2.lng);
        BasicGeoposition se = new BasicGeoposition();
        se.Latitude = Math.Min(pos.Coordinate.Latitude, pos2.lat);
        se.Longitude = Math.Max(pos.Coordinate.Longitude, pos2.lng);
#if WINDOWS_APP
        //TODO PC Maps
#elif WINDOWS_PHONE_APP
        GeoboundingBox geoboundingBox = new Windows.Devices.Geolocation.GeoboundingBox(nw, se);
        map1._map.TrySetViewBoundsAsync(geoboundingBox, null, MapAnimationKind.Bow);
#endif
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.ToString());
    }
}
Sorry for my bad English and my problems with Bing Maps; it is really poorly documented, I expect because it's the newer API, and it's a mess to find what you need for the API you are working with.
Thanks in advance.
You are not providing the coordinates in the correct order. As you can read here, the first coordinate must be the upper-left corner of the region (the northwest coordinate) and the second must be the lower-right corner (the southeast coordinate). Your pos is the lower-left corner and pos2 is the upper-right corner.
You get the right coordinates like this:
var nw = new BasicGeoposition();
nw.Latitude = Math.Max(pos.Coordinate.Latitude, pos2.lat);
nw.Longitude = Math.Min(pos.Coordinate.Longitude, pos2.lng);
var se = new BasicGeoposition();
se.Latitude = Math.Min(pos.Coordinate.Latitude, pos2.lat);
se.Longitude = Math.Max(pos.Coordinate.Longitude, pos2.lng);
I'm hoping someone can help me out here. My ultimate goal with this code is to extract the color of the sweater I am wearing. Like the title suggests, I'm trying to extract RGB values at a certain skeleton point (i.e. skeleton.Joints[JointType.Spine].Position). I do this using the following mapping.
All of the following code is within the SensorAllFramesReady event:
private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    Skeleton[] skeletons = new Skeleton[0];
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(skeletons);
        }
    }
    if (skeletons.Length != 0)
    {
        foreach (Skeleton skel in skeletons)
        {
            if (skel.TrackingState == SkeletonTrackingState.Tracked)
            {
                colorPoint = this.SkeletonPointToColor(skel.Joints[JointType.Spine].Position);
            }
        }
    }
}
private Point SkeletonPointToColor(SkeletonPoint skelpoint)
{
    ColorImagePoint colorPoint = this.sensor.CoordinateMapper.MapSkeletonPointToColorPoint(skelpoint, ColorImageFormat.RgbResolution640x480Fps30);
    return new Point(colorPoint.X, colorPoint.Y);
}
I assign the returned Point to a variable colorPoint, and here is how I (somewhat successfully) extract the RGB values:
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
    if (colorFrame != null)
    {
        int arrayLength = colorFrame.PixelDataLength;
        this.colorPixelData = new byte[arrayLength];
        colorFrame.CopyPixelDataTo(this.colorPixelData);
        blue  = (int)colorPixelData[(int)(4 * (colorPoint.X + (colorPoint.Y * colorFrame.Width))) + 0];
        green = (int)colorPixelData[(int)(4 * (colorPoint.X + (colorPoint.Y * colorFrame.Width))) + 1];
        red   = (int)colorPixelData[(int)(4 * (colorPoint.X + (colorPoint.Y * colorFrame.Width))) + 2];
    }
}
I then draw an ellipse on my Windows Forms window using the retrieved RGB values. Now this works, kind of. I do get a color which resembles the color of the sweater I'm wearing, but even when I do my best to stand very still the color is always changing. It's almost as if I'm getting random RGB values within a certain range, and only the range is dictated by the color of my sweater. Why is this? Is there another way I should be solving this problem?
Thank you for reading!
EDIT: I apologise for the formatting; this is my first time submitting a question and I realise the formatting in the first code block is a bit off. The SkeletonPointToColor method is naturally not within the SensorAllFramesReady method. My apologies.
First of all, I really recommend you have a look at the Coding4Fun Kinect Toolkit, which provides many useful functions for dealing with Kinect data. There is, for instance, an extension for returning a bitmap named ToBitmapSource() which should be of use.
Another observation: without some algorithm to average the color values received, it's quite normal for the values to jump around. I'm also not confident that you can expect the skeleton and image frames to be 100% in sync.
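A simple way to damp the jitter is a small rolling average over the last N readings; a sketch (the window size and field names are illustrative choices, not from the question):

```csharp
using System.Collections.Generic;
using System.Linq;

// Rolling average over the last few samples to smooth per-frame noise.
private readonly Queue<int> redSamples = new Queue<int>();
private const int WindowSize = 15;    // ~0.5 s at 30 fps; tune to taste

private int Smooth(Queue<int> samples, int newValue)
{
    samples.Enqueue(newValue);
    if (samples.Count > WindowSize)
        samples.Dequeue();            // drop the oldest sample
    return (int)samples.Average();
}

// Usage, once per color frame (likewise for green and blue):
// red = Smooth(redSamples, rawRed);
```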