I'm creating an app for Windows Phone in C# that uses the accelerometer, but the movement isn't smooth when displayed on the screen. I only need to move along the Y-axis. I have seen this formula on Microsoft's website, but I'm not sure how I should use it:
O(t) = O(t-1) + α * (I(t) - O(t-1))
where O is the output, α is the smoothing coefficient, and I is the input (raw value).
How do I implement this in my code, which is:
private void UpdateUI(AccelerometerReading accelerometerReading)
{
    statusTextBlock.Text = "getting data";
    Vector3 acceleration = accelerometerReading.Acceleration;
    // Show the numeric values on screen.
    yTextBlock.Text = "Y: " + acceleration.Y.ToString("0.00");
    // low-pass filter
    // ????
    // move ball on screen
    var TopMar = (278.5 * acceleration.Y) + 278.5;
    var BotMar = 557 - TopMar;
    yDot.Margin = new Thickness(203, BotMar, 203, TopMar);
}
If Vector3 has overloaded the operators, this should do (note that Vector3 is a struct, so the field has to be nullable to start out unset):
private Vector3? MeanAcceleration = null;

private void UpdateUI(AccelerometerReading accelerometerReading)
{
    const float alpha = 0.05f;
    statusTextBlock.Text = "getting data";
    Vector3 acceleration = accelerometerReading.Acceleration;
    // Show the numeric values on screen.
    yTextBlock.Text = "Y: " + acceleration.Y.ToString("0.00");
    // low-pass filter
    if (MeanAcceleration == null)
        MeanAcceleration = acceleration;
    else
        MeanAcceleration = (1 - alpha) * MeanAcceleration.Value + alpha * acceleration;
    // move ball on screen
    var TopMar = (278.5 * MeanAcceleration.Value.Y) + 278.5;
    var BotMar = 557 - TopMar;
    yDot.Margin = new Thickness(203, BotMar, 203, TopMar);
}
You need a field (or something of similar scope) to hold the mean value, and you update this mean every timestep.
Alpha must be between 0 and 1; to effectively low-pass the signal it should be 0.1 or below. Decrease alpha if the output is too wiggly, and increase it if the output lags too much. If both are the case, you probably need a more sophisticated digital filter.
If the beginning is not important, you can initialize the mean with something like
private Vector3 MeanAcceleration = new Vector3(0, 0, 0);
but I'm not sure about the constructor, because I don't know exactly which Vector3 that is.
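The recurrence above is a standard exponential moving average, sketched here in Python purely for illustration (the function and variable names are mine, not part of any Windows Phone API):

```python
def low_pass(samples, alpha=0.05):
    """Exponential moving average: out = out_prev + alpha * (raw - out_prev)."""
    out = None
    filtered = []
    for raw in samples:
        if out is None:
            out = raw  # seed the filter with the first raw sample
        else:
            out = out + alpha * (raw - out)  # same as (1 - alpha)*out + alpha*raw
        filtered.append(out)
    return filtered

# A noisy signal oscillating around 1.0 settles near its true value:
noisy = [1.0, 1.2, 0.8, 1.1, 0.9] * 40
print(low_pass(noisy)[-1])
```

A smaller alpha smooths more but responds slower, which is exactly the trade-off described above.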
I'm currently developing a 3D multiplayer game in MonoGame and noticing freezing every 5 seconds, along with quite a lot of garbage collection. I'm trying to solve the issue by finding what creates garbage for the garbage collector and removing it.
The examples below are only a small part of a very large project, which is coded in the same way throughout. For instance, the game's networking initializes variables at runtime as it collects data from the server; the following code checks what a player is wearing. It is sent by the server whenever a player changes any gear it is wearing, to update other players with the change (so it is called only in that specific situation):
if (messageTitle == "Equipment")
{
string who = msg.ReadString();
//get the item names
string helmet = msg.ReadString();
string shoulder = msg.ReadString();
string chest = msg.ReadString();
string shirt = msg.ReadString();
string cape = msg.ReadString();
string bracers = msg.ReadString();
string gloves = msg.ReadString();
string belt = msg.ReadString();
string pants = msg.ReadString();
string boots = msg.ReadString();
string slot1 = msg.ReadString();
string slot2 = msg.ReadString();
string slot3 = msg.ReadString();
string slot4 = msg.ReadString();
if (unitDatabase.realUnits.ContainsKey(who) == true)
{
CheckEquipment(who + "_Equipment_Helmet", helmet);
CheckEquipment(who + "_Equipment_Shoulder", shoulder);
CheckEquipment(who + "_Equipment_Chest", chest);
CheckEquipment(who + "_Equipment_Shirt", shirt);
CheckEquipment(who + "_Equipment_Cape", cape);
CheckEquipment(who + "_Equipment_Bracers", bracers);
CheckEquipment(who + "_Equipment_Gloves", gloves);
CheckEquipment(who + "_Equipment_Belt", belt);
CheckEquipment(who + "_Equipment_Pants", pants);
CheckEquipment(who + "_Equipment_Boots", boots);
CheckEquipment(who + "_Equipment_slot1", slot1);
CheckEquipment(who + "_Equipment_slot2", slot2);
CheckEquipment(who + "_Equipment_slot3", slot3);
CheckEquipment(who + "_Equipment_slot4", slot4);
}
}
The code below, however, is always sent by the server unreliably, so it will be initializing these variables nearly every frame:
if (messageTitle == "Stats")
{
string who = msg.ReadString();
int health = msg.ReadInt32();
int mana = msg.ReadInt32();
int energy = msg.ReadInt32();
int rage = msg.ReadInt32();
bool inCombat = msg.ReadBoolean();
int experience = msg.ReadInt32();
if (unitDatabase.realUnits.ContainsKey(who) == true)
{
unitDatabase.realUnits[who].attributes.health = health;
unitDatabase.realUnits[who].attributes.mana = mana;
unitDatabase.realUnits[who].attributes.energy = energy;
unitDatabase.realUnits[who].attributes.rage = rage;
unitDatabase.realUnits[who].attributes.inCombat = inCombat;
if (unitDatabase.realUnits[who].attributes.experience != experience)
{
int difference = experience - unitDatabase.realUnits[who].attributes.experience;
unitDatabase.realUnits[who].attributes.experience = experience;
floatingTextDatabase.AddFloatingText("XP: " + difference, unitDatabase.realUnits[who], 0.35f, new Vector3(0, 0, 20), Color.Blue);
}
}
}
Everything received from the server is set up to receive data in this way. I believe initializing all these variables at runtime would be creating a lot of garbage for the collector, which may be causing the one-second freeze every 5 seconds.
public void Draw(SpriteBatch spriteBatch, SpriteFont font)
{
#region DrawAllScreenFloatingText
for (int i = 0; i < floatingText.Count(); i++)
{
string message = floatingText[i].text.ToString();
Vector2 origin = font.MeasureString(message) / 2;
float textSize = floatingText[i].size;
Color backdrop = new Color((byte)50, (byte)0, (byte)0, (byte)MathHelper.Clamp(floatingText[i].fade, 0, 255));
spriteBatch.DrawString(font, message, (floatingText[i].startPositon + floatingText[i].position) + new Vector2(-4 * textSize, -4 * textSize), backdrop, 0, origin, textSize, 0, 1);
spriteBatch.DrawString(font, message, (floatingText[i].startPositon + floatingText[i].position) + new Vector2(-4 * textSize, 4 * textSize), backdrop, 0, origin, textSize, 0, 1);
spriteBatch.DrawString(font, message, (floatingText[i].startPositon + floatingText[i].position) + new Vector2(4 * textSize, -4 * textSize), backdrop, 0, origin, textSize, 0, 1);
spriteBatch.DrawString(font, message, (floatingText[i].startPositon + floatingText[i].position) + new Vector2(4 * textSize, 4 * textSize), backdrop, 0, origin, textSize, 0, 1);
spriteBatch.DrawString(font, message, (floatingText[i].startPositon + floatingText[i].position), new Color(floatingText[i].color.R, floatingText[i].color.G, floatingText[i].color.B, (byte)MathHelper.Clamp(floatingText[i].fade, 0, 255)), 0, origin, textSize, 0, 1);
}
#endregion
}
The above code draws floating text on the screen, like damage being done to an enemy. A lot of the drawing is handled like this and runs every draw frame while damage is being dealt. As you can see, I would be initializing a string, Vector2, float and Color variable each draw frame, multiplied by each damage number shown.
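The usual remedy for per-frame allocations is to reuse buffers and pooled objects instead of constructing new ones each frame. A minimal, language-agnostic sketch of an object pool in Python (the class and method names are my own invention, not a MonoGame API):

```python
class Pool:
    """Reuse objects instead of allocating a new one per frame."""
    def __init__(self, factory):
        self.factory = factory  # how to build an object when the pool is empty
        self.free = []

    def acquire(self):
        # Hand back a recycled object when one is available.
        return self.free.pop() if self.free else self.factory()

    def release(self, obj):
        # Return the object for reuse instead of letting it become garbage.
        self.free.append(obj)

pool = Pool(lambda: [0.0, 0.0])  # e.g. reusable 2-component vectors
v = pool.acquire()
pool.release(v)
w = pool.acquire()
print(w is v)  # the same object is reused, so no new allocation happened
```

The same idea applies to the strings and Color values above: hoist them out of the loop, or cache them keyed by the value they represent.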
public void Draw(GraphicsDevice graphics, BasicEffect basicEffect, Effect modelEffect, MeshRenderer meshRenderer, ItemDatabase itemDatabase, ThirdPersonCamera camera, Weather weather, LightDatabase lightDatabase, AudioList audioList)
{
sw = new Stopwatch();
sw.Start();
if (camera.target != null)
{
foreach (var unit in realUnits)
{
//only draw close enough to us
double distance = Math.Sqrt((unit.Value.body.position.X - camera.target.body.position.X) * (unit.Value.body.position.X - camera.target.body.position.X) +
(unit.Value.body.position.Y - camera.target.body.position.Y) * (unit.Value.body.position.Y - camera.target.body.position.Y) +
(unit.Value.body.position.Z - camera.target.body.position.Z) * (unit.Value.body.position.Z - camera.target.body.position.Z));
if (distance < unit.Value.renderDistance)
{
//only draw units inside our view
if (camera.InsideCamera(unit.Value.body.position) == true || camera.target == unit.Value)
{
unit.Value.Draw(graphics, unitList[unit.Value.name], arrow, basicEffect, meshRenderer, itemDatabase, camera, weather, lightDatabase);
}
}
}
}
sw.Stop();
if (drawMs < sw.Elapsed.TotalMilliseconds) drawMs = sw.Elapsed.TotalMilliseconds;
}
More code that runs every draw frame: it checks whether the unit being drawn is within view distance of the camera and then within the camera's view frustum. You can see it initializes a double every draw frame.
Inside the unit.Value.Draw() function it is initializing:
new SamplerState //for the shadow casting
Matrix[] bones
Matrix getWorld //the players world transform into shader
Matrix worldMatrix
float colorR //ambient sky color
float colorG //ambient sky color
float colorB //ambient sky color
int MAXLIGHTS //total light sources in scene
Vector3[] PointLightPosition = new Vector3[MAXLIGHTS];
Vector4[] PointLightColor = new Vector4[MAXLIGHTS];
float[] PointLightPower = new float[MAXLIGHTS];
float[] PointLightRadius = new float[MAXLIGHTS];
These are initialized inside the draw call every frame, whenever the unit is within view distance of the player and inside the camera's view frustum.
I believe all these variables initialized at runtime every frame would be creating a lot of work for the garbage collector. Before I rework the whole game to eliminate allocating new variables each frame, I wanted to make sure this could be the reason the garbage collector fills up and freezes the game every 5 seconds.
Thank you for taking the time to read my question.
Edit: adding images of VS2019 profiler
So I figured I'd make a quick project to test how the garbage collector behaves. The project below runs the following code every draw call (exactly what I'm currently doing):
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
string[] test = new string[100000];
// TODO: Add your drawing code here
base.Draw(gameTime);
}
This is what the VS2019 profiler recorded
The next example I initialize string[] test at the start of the program only
public Game1()
{
_graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
IsMouseVisible = true;
string[] test = new string[100000];
}
This is what the VS2019 profiler recorded
So from this example it looks like I never want to allocate variables during runtime; instead I should initialize everything at the start of the program and reuse the same variables, to avoid the garbage collector freezing the game. Thank you everyone who replied.
Just wanted to update everyone: the freezing was actually not due to garbage collection. It was caused by this line of code:
if (pingDelay > 3)
{
ping = Int32.Parse(pingSender.Send(client.ServerConnection.RemoteEndPoint.Address, 120, Encoding.ASCII.GetBytes("0"), options).RoundtripTime.ToString());
pingDelay = 0;
}
pingSender.Send() was supposed to be async. Because it wasn't, it was freezing the game while it waited for the server to return the result.
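The general fix for this class of freeze is to move the blocking call off the game loop's thread and read the result only once it is ready. A minimal sketch in Python using a worker thread (the slow_ping function is a stand-in for the blocking pingSender.Send() call, not a real API):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_ping():
    time.sleep(0.1)  # stand-in for a blocking network round trip
    return 42        # pretend round-trip time in milliseconds

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_ping)  # kick off the ping without blocking

# The game loop keeps running; poll instead of waiting.
while not future.done():
    pass  # ...update and draw frames here...

print(future.result())  # read the round-trip time only once it is ready
```

In .NET the analogous approach is an async call (or a background thread) whose result is consumed on a later frame.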
For a mod I'm working on, I'd like to incorporate the player's theme colors and use them to generate UI elements. However, I'm running into an issue where not all color themes have colors that provide a good contrast ratio as outlined in 1.4.3 Contrast (Minimum) of Web Content Accessibility Guidelines (WCAG) 2.1.
I can currently check the contrast with the following:
float RelativeLuminance(Color color)
{
float ColorPartValue(float part)
{
return part <= 0.03928f ? part / 12.92f : Mathf.Pow((part + 0.055f) / 1.055f, 2.4f);
}
var r = ColorPartValue(color.r);
var g = ColorPartValue(color.g);
var b = ColorPartValue(color.b);
var l = 0.2126f * r + 0.7152f * g + 0.0722f * b;
return l;
}
private float ColorContrast(Color a, Color b)
{
float result = 0f;
var La = RelativeLuminance(a) + 0.05f;
var Lb = RelativeLuminance(b) + 0.05f;
result = Mathf.Max(La, Lb) / Mathf.Min(La, Lb);
return result;
}
I use the found color contrast to determine whether or not the initial text color is good enough.
public Color GetContrastingColors(Color backgroundColor, Color textColor)
{
Color contrastColor;
// See if we have good enough contrast already
if (!(ColorContrast(backgroundColor, textColor) < 4.5f))
{
return textColor;
}
Color.RGBToHSV(textColor, out var textH, out var textS, out var textV);
Color.RGBToHSV(backgroundColor, out var bgH, out var bgS, out var bgV);
// Modify textV by some value to provide enough contrast.
contrastColor = Color.HSVToRGB(textH, textS, textV);
return contrastColor;
}
However, I'm unsure how to adjust the colors so that the text color just brightens (or dims) enough to reach that 4.5:1 contrast ratio. Originally, I thought of working through the algebra of the luminance and contrast equations to the point where the sRGB values are multiplied by some value x. Then I remembered HSV, and adjusting the brightness (value) of the color seems a lot simpler. The issue is, I'm unsure how to compare the contrast of two HSV colors, let alone use their values to push a color's brightness to a desired contrast.
My current thought process is to do something dumb like this:
float targetL;
bool brighter = false;
var backL = RelativeLuminance(backgroundColor);
var textL = RelativeLuminance(textColor);
var ratio = 4.5f;
// Try to go in the direction of brightness originally.
if (textL > backL)
{
targetL = ((backL + 0.05f) * ratio) - 0.05f;
brighter = true;
if (targetL > 1f)
{
targetL = ((backL + 0.05f) / ratio) - 0.05f;
brighter = false;
}
}
else
{
targetL = ((backL + 0.05f) / ratio) - 0.05f;
if (targetL < 0f)
{
targetL = ((backL + 0.05f) * ratio) - 0.05f;
brighter = true;
}
}
Color adjustedColor = textColor;
while ((!brighter && textL > targetL) || (brighter && textL < targetL))
{
Color.RGBToHSV(adjustedColor, out var textH, out var textS, out var textV);
textV += brighter ? 0.01f : -0.01f;
adjustedColor = Color.HSVToRGB(textH, textS, textV);
textL = RelativeLuminance(adjustedColor);
}
contrastColor = adjustedColor;
But that's not really efficient, so how can I manipulate the text color so that it "remains the same" but provides enough contrast?
Edit:
To give more context to what I'm trying to do, imagine I have the following set of 4 colors as the player's theme.
In terms of HTML codes, that's:
#32263d
#3d1c70
#7347b6
#320d68
I want to incorporate 2 of those colors from their theme when creating a UI for them. However, not all of them are easily distinguishable, you can see the various contrasts in this case here:
Now each theme contains a darker and lighter color just like the center 2 rows in this example, but also like this example, their contrast may not always be accessible for the end user to read. Moving along with the example, in this case, we're going to be using #32263d and #7347b6 to build our UI.
While I could randomly create a similar shade of purple, I want to keep it as close to the original as possible and just brighten it. We can see how it'd look at various levels of brightness here:
If we set #7347b6 to the maximum brightness at #a163ff, we get the following pair now:
While better than before, this is only a contrast of 3.88 : 1 still. So now I want to scale down the brightness of #32263d. If we reduce it to #251B2D, we then end up with this:
The two new colors then have a color contrast of 4.51 : 1.
Now, I could go through each theme manually, but given the number of them, I'd prefer to write an algorithm that generates the updated colors on the fly.
Check out my answer for Adapt given color pairs to adhere to W3C Accessibility standard for ePubs
You can skip the part where I talk about the contrast ratio formula since you have that already but I talk about how to adjust the colors to get better contrast.
If I were to actually code my recommendation from the previous answer, I would be more efficient: rather than adding or subtracting 1 from each RGB component and recomputing the luminance, I would probably add/subtract 10 and recompute. If the contrast is still insufficient, step by 10 again. Once there is enough contrast, I could then step back in the opposite direction, perhaps by 2 at a time, until I got as close to 4.5 as possible without going under.
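A rough Python sketch of that coarse-then-fine search, using the WCAG 2.1 luminance and contrast formulas (the helper names and the example colors are mine; channels are in [0, 1] rather than 0-255):

```python
def rel_luminance(r, g, b):
    """WCAG 2.1 relative luminance for sRGB channels in [0, 1]."""
    def channel(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(c1, c2):
    """WCAG contrast ratio, always >= 1."""
    l1, l2 = rel_luminance(*c1) + 0.05, rel_luminance(*c2) + 0.05
    return max(l1, l2) / min(l1, l2)

def lighten(color, amount):
    # Shift every channel by `amount`, clamped to the valid range.
    return tuple(max(0.0, min(1.0, c + amount)) for c in color)

def adjust_text(bg, text, ratio=4.5, coarse=10 / 255, fine=2 / 255):
    # Coarse steps (the "+10 per RGB component" idea) until the ratio is met...
    while contrast(bg, text) < ratio and max(text) < 1.0:
        text = lighten(text, coarse)
    # ...then fine steps back down while the ratio still holds.
    while contrast(bg, lighten(text, -fine)) >= ratio and min(text) > 0.0:
        text = lighten(text, -fine)
    return text

bg = (0.20, 0.15, 0.24)  # a dark purple background
print(contrast(bg, adjust_text(bg, (0.45, 0.28, 0.71))) >= 4.5)
```

This only brightens the text; if even full white cannot reach the ratio against the background, the background would have to be darkened as well, as the accepted answer below does.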
I ended up using a loop after all. While slugolicious's answer was close to what I wanted, I found that adjusting all the RGB components by the same amount was not what I wanted, as it actually affected the hue, so I used HSV instead.
public Color[] GetContrastingColors(Color backgroundColor, Color textColor, float ratio)
{
Color[] colors = new Color[2];
var backL = RelativeLuminance(backgroundColor);
var textL = RelativeLuminance(textColor);
if (textL > backL)
{
colors[0] = textColor;
colors[1] = backgroundColor;
}
else
{
colors[1] = textColor;
colors[0] = backgroundColor;
}
// See if we have good enough contrast already
if (!(ColorContrast(backgroundColor, textColor) < ratio))
{
return colors;
}
Color.RGBToHSV(colors[0], out var lightH, out var lightS, out var lightV);
Color.RGBToHSV(colors[1], out var darkH, out var darkS, out var darkV);
// If the darkest color can be darkened enough to have enough contrast after brightening the color.
if (ColorContrast(Color.HSVToRGB(darkH, darkS, 0f), Color.HSVToRGB(lightH, lightS, 1f)) >= ratio)
{
var lightDiff = 1f - lightV;
var darkDiff = darkV;
var steps = new float[] { 0.12f, 0.1f, 0.08f, 0.05f, 0.04f, 0.03f, 0.02f, 0.01f, 0.005f };
var step = 0;
var lightRatio = (lightDiff / (lightDiff + darkDiff));
var darkRatio = (darkDiff / (lightDiff + darkDiff));
while (ColorContrast(Color.HSVToRGB(lightH, lightS, lightV), Color.HSVToRGB(darkH, darkS, darkV)) < ratio)
{
while (ColorContrast(Color.HSVToRGB(lightH, lightS, lightV + lightRatio * steps[step]), Color.HSVToRGB(darkH, darkS, darkV - darkRatio * steps[step])) > ratio && step < steps.Length - 1)
{
step++;
}
lightV += lightRatio * steps[step];
darkV -= darkRatio * steps[step];
}
colors[0] = Color.HSVToRGB(lightH, lightS, lightV);
colors[1] = Color.HSVToRGB(darkH, darkS, darkV);
}
// Fall back to using white.
else
{
colors[0] = Color.white;
while (ColorContrast(Color.white, Color.HSVToRGB(darkH, darkS, darkV)) < ratio)
{
darkV -= 0.01f;
}
colors[1] = Color.HSVToRGB(darkH, darkS, darkV);
}
return colors;
}
I'm new here and also quite new to C#/Unity 2019.4.14f with VS2019, MRTK v2.5.3, and Microsoft HoloLens 2 programming. I would like to ask for advice on a problem that I have not been able to solve for weeks. First, let me quickly explain it: my task is to track an object that sits inside an examination cube using the Spatial Mesh, and to represent its shape as well as possible. (Image: explanation screen for the task description.)
The calculation of where the examination cube is located in space works without any problems. But for some reason I cannot query the Spatial Awareness Mesh Observer; no meshes seem to be present, although they are visible.
Since I am at a complete loss and no one I have asked so far has been able to help me, I am publishing my code for this function below. Please bear with me, as I am still a beginner in writing code.
public void ReadAndDrawMesh(){
//Provide a list for the cube coordinates
Vector3[] CubeCoordinateList = new Vector3[24];
//Convert local to world coordinates using the cube's own transform
var localToWorld = Cube.transform.localToWorldMatrix;
Vector3 cubeWorldPos = Cube.transform.position; // Reading out the centre position
Vector3[] cubeVertices = Cube.GetComponent<MeshFilter>().mesh.vertices; //Local coordinates
for (int i = 0; i < cubeVertices.Length; i++)
{
CubeCoordinateList[i] = localToWorld.MultiplyPoint3x4(cubeVertices[i]);
}
//CubeVerticies from Vector A[0] to E[4]
float normalX1 = CubeCoordinateList[4].x - CubeCoordinateList[0].x;
float normalY1 = CubeCoordinateList[4].y - CubeCoordinateList[0].y;
float normalZ1 = CubeCoordinateList[4].z - CubeCoordinateList[0].z;
float amount1 = Mathf.Sqrt((normalX1 * normalX1) + (normalY1 * normalY1) + (normalZ1 * normalZ1));
//Create a new vector
Vector3 direction1 = new Vector3(normalX1, normalY1, normalZ1);
direction1 = direction1 / amount1;
//CubeVerticies from Vector A[0] to B[2]
float normalX2 = CubeCoordinateList[2].x - CubeCoordinateList[0].x;
float normalY2 = CubeCoordinateList[2].y - CubeCoordinateList[0].y;
float normalZ2 = CubeCoordinateList[2].z - CubeCoordinateList[0].z;
float amount2 = Mathf.Sqrt((normalX2 * normalX2) + (normalY2 * normalY2) + (normalZ2 * normalZ2));
//Create a new vector
Vector3 direction2 = new Vector3(normalX2, normalY2, normalZ2);
direction2 = direction2 / amount2;
//CubeVerticies from Vector A[0] to D[3]
float normalX3 = CubeCoordinateList[3].x - CubeCoordinateList[0].x;
float normalY3 = CubeCoordinateList[3].y - CubeCoordinateList[0].y;
float normalZ3 = CubeCoordinateList[3].z - CubeCoordinateList[0].z;
float amount3 = Mathf.Sqrt((normalX3 * normalX3) + (normalY3 * normalY3) + (normalZ3 * normalZ3));
//Create a new vector
Vector3 direction3 = new Vector3(normalX3, normalY3, normalZ3);
direction3 = direction3 / amount3;
//From MRTK 2.5.3
// Use CoreServices to quickly get access to the IMixedRealitySpatialAwarenessSystem
var spatialAwarenessService = CoreServices.SpatialAwarenessSystem;
// Cast to the IMixedRealityDataProviderAccess to get access to the data providers
var dataProviderAccess = spatialAwarenessService as IMixedRealityDataProviderAccess;
var meshObserverName = "SpatialAwarenessMeshObserverProfile";
var MeshObserver = dataProviderAccess.GetDataProvider<IMixedRealitySpatialAwarenessMeshObserver>(meshObserverName);
foreach (SpatialAwarenessMeshObject meshObject in MeshObserver.Meshes.Values)
{
Vector3[] meshObjectarray = meshObject.Filter.mesh.vertices;
//List for the mesh coordinates that pass the scalar product test
List<Vector3> MeshPositionList = new List<Vector3>();
//Reading the Spatial Mesh of the room
foreach (Vector3 verticiesCoordinaten in meshObjectarray)
{
//Direction from the centre of the cube to this vertex
var dir_vectorMesh = verticiesCoordinaten - cubeWorldPos;
//Project the offset onto the three cube axes (doubled, to compare against the full edge lengths)
var result1 = Mathf.Abs(Vector3.Dot(dir_vectorMesh, direction1)) * 2;
var result2 = Mathf.Abs(Vector3.Dot(dir_vectorMesh, direction2)) * 2;
var result3 = Mathf.Abs(Vector3.Dot(dir_vectorMesh, direction3)) * 2;
//If all three projections are shorter than the edges, the vertex lies inside the cube
if (result1 < amount1 && result2 < amount2 && result3 < amount3)
{
MeshPositionList.Add(verticiesCoordinaten);
}
}
//Creating a new visible mesh from the points in the list
Mesh mesh = new Mesh();
mesh.vertices = MeshPositionList.ToArray();
mesh.RecalculateNormals();
mesh.Optimize();
Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
}
}
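For reference, the scalar-product test above amounts to a point-in-oriented-box check: project the vector from the box centre to the point onto each unit edge direction and compare it against half the edge length. A standalone sketch in Python (the function name and example values are illustrative only):

```python
def inside_box(point, center, axes, edge_lengths):
    """axes: three unit vectors along the box edges; edge_lengths: full lengths."""
    offset = [p - c for p, c in zip(point, center)]
    for axis, length in zip(axes, edge_lengths):
        # Projection of the offset onto this axis, doubled so it can be
        # compared against the full edge length (|dot| * 2 <= length <=> inside).
        dot = sum(o * a for o, a in zip(offset, axis))
        if abs(dot) * 2 > length:
            return False
    return True

# Unit cube centred at the origin with axis-aligned edges:
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(inside_box((0.4, 0.0, 0.0), (0, 0, 0), axes, (1, 1, 1)))  # True
print(inside_box((0.6, 0.0, 0.0), (0, 0, 0), axes, (1, 1, 1)))  # False
```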
I hope one of you can help me, and I look forward to any constructive answers. Thank you to everyone who reads this post and perhaps responds.
Thank You.
I'm trying to upgrade an app of mine to use a Windows 10 Mobile device's sensors as a VR device for the PC (like Google Cardboard).
I'm experiencing a problem with the sensor readouts when the device changes from pointing below the horizon to above it (it happens in both landscape and portrait, but only landscape matters here). A small sketch:
Raw sensor readouts (pointing downward):
Inclinometer Pitch: -000.677 , Roll: -055.380 , Yaw: +013.978
Now after changing to pointing upward:
Inclinometer Pitch: -178.550 , Roll: +083.841 , Yaw: +206.219
As you can see, all 3 values changed by a significant amount. In reality only one axis should have changed, roll or pitch (depending on sensor orientation).
I'm 95% sure this problem didn't exist in Windows Phone 8. I'm unable to find any documentation about this odd sensor behaviour, and it's stopping me from creating augmented reality and virtual reality apps.
Here are 2 pictures of the problem:
Here is the code for this demonstration:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
<TextBlock Style="{StaticResource BodyTextBlockStyle}"
x:Name="output"
FontFamily="Consolas"
Foreground="Black"
Text="test"/>
</Grid>
Code behind:
public MainPage()
{
this.InitializeComponent();
timer = new DispatcherTimer();
timer.Interval = TimeSpan.FromMilliseconds(250);
timer.Tick += Timer_Tick;
}
private void Timer_Tick(object sender, object e)
{
output.Text = "";
output.Text = DateTime.Now.ToString("HH:mm:ss.fff") + Environment.NewLine;
Print();
}
DispatcherTimer timer;
public void WriteValue(String desc, String val)
{
StringBuilder b = new StringBuilder();
int length = desc.Length + val.Length;
int topad = 40 - length;
if (topad < 0)
topad = length - 40;
output.Text += desc + val.PadLeft(topad + val.Length) + Environment.NewLine;
}
public String ValueToString(double value)
{
String ret = value.ToString("000.00000");
if (value > 0)
ret = " +" + ret;
else if (value == 0)
ret = " " + ret;
else
ret = " " + ret;
return ret;
}
public static double RadianToDegree(double radians)
{
return radians * (180 / Math.PI);
}
public void Print()
{
WriteValue("DisplayOrientation", LastDisplayOrient.ToString());
WriteValue("Inclinometer", "");
WriteValue("Pitch", ValueToString(LastIncline.PitchDegrees));
WriteValue("Roll", ValueToString(LastIncline.RollDegrees));
WriteValue("Yaw", ValueToString(LastIncline.YawDegrees));
WriteValue("YawAccuracy", LastIncline.YawAccuracy.ToString());
WriteValue("OrientationSensor", "");
var q = LastOrient.Quaternion;
double ysqr = q.Y * q.Y;
// roll (x-axis rotation)
double t0 = +2.0f * (q.W * q.X + q.Y * q.Z);
double t1 = +1.0f - 2.0f * (q.X * q.X + ysqr);
double Roll = RadianToDegree(Math.Atan2(t0, t1));
// pitch (y-axis rotation)
double t2 = +2.0f * (q.W * q.Y - q.Z * q.X);
t2 = t2 > 1.0f ? 1.0f : t2;
t2 = t2 < -1.0f ? -1.0f : t2;
double Pitch = RadianToDegree(Math.Asin(t2));
// yaw (z-axis rotation)
double t3 = +2.0f * (q.W * q.Z + q.X * q.Y);
double t4 = +1.0f - 2.0f * (ysqr + q.Z * q.Z);
double Yaw = RadianToDegree(Math.Atan2(t3, t4));
WriteValue("Roll", ValueToString(Roll));
WriteValue("Pitch", ValueToString(Pitch));
WriteValue("Yaw", ValueToString(Yaw));
}
Inclinometer sIncline;
DisplayInformation sDisplay;
OrientationSensor sOrient;
protected override void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);
sIncline = Inclinometer.GetDefault(SensorReadingType.Absolute);
sDisplay = DisplayInformation.GetForCurrentView();
sOrient = OrientationSensor.GetDefault(SensorReadingType.Absolute);
sOrient.ReadingChanged += SOrient_ReadingChanged;
sDisplay.OrientationChanged += SDisplay_OrientationChanged;
sIncline.ReadingChanged += SIncline_ReadingChanged;
LastDisplayOrient = sDisplay.CurrentOrientation;
LastIncline = sIncline.GetCurrentReading();
LastOrient = sOrient.GetCurrentReading();
timer.Start();
}
private void SOrient_ReadingChanged(OrientationSensor sender, OrientationSensorReadingChangedEventArgs args)
{
LastOrient = args.Reading;
}
private void SDisplay_OrientationChanged(DisplayInformation sender, object args)
{
LastDisplayOrient = sDisplay.CurrentOrientation;
}
OrientationSensorReading LastOrient;
InclinometerReading LastIncline;
DisplayOrientations LastDisplayOrient;
private void SIncline_ReadingChanged(Inclinometer sender, InclinometerReadingChangedEventArgs args)
{
LastIncline = args.Reading;
}
protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
{
base.OnNavigatingFrom(e);
sIncline.ReadingChanged -= SIncline_ReadingChanged;
sDisplay.OrientationChanged -= SDisplay_OrientationChanged;
sOrient.ReadingChanged -= SOrient_ReadingChanged;
timer.Stop();
}
edit: Added the following sketch and more description
Take a look at this following, more in depth sketch:
Position the phone as seen in step (1). Phone in landscape mode, camera facing slightly downward, screen facing slightly upward.
Change to step (2). You tilt the phone slightly forward, and only one axis changes (in this case the Inclinometer shows only "Roll" changing. THIS IS CORRECT).
Change to step (3). You now tilt your phone back. As soon as the switch-over point comes, where the camera no longer faces the ground but the sky, and the screen faces slightly downward, all 3 values change by a significant amount: Pitch jumps by about -180°, Roll jumps by about 90° on top of the amount you actually changed, and Yaw jumps by about +180°.
As long as the camera is pointing ONLY at EITHER the earth or the sky, the sensors behave fine! The problem occurs ONLY when switching from one to the other! (This scenario happens all the time in VR and AR, so this is a big problem.)
Your test is not strict. If you want to observe roll and pitch, you should set up a stationary coordinate frame for testing.
I have tested the Inclinometer on my physical device. When I tilt the phone around the X axis, all 3 values change, but only the Pitch value changes clearly; the remaining two stay within the error range.
The Inclinometer sensor reports the yaw, pitch, and roll values of a device and works best with apps that care about how the device is situated in space. Pitch and roll are derived by taking the accelerometer's gravity vector and integrating the data from the gyrometer. Yaw is established from magnetometer and gyrometer data (similar to a compass heading). Inclinometers offer advanced orientation data in an easily digestible and understandable way. Use an inclinometer when you need device orientation but do not need to manipulate the sensor data.
For more, you can refer to the Sensors documentation.
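The jump described in the question is characteristic of Euler-angle output rather than a faulty sensor: when pitch passes ±90°, the same physical orientation is reported with pitch folded back and roll/yaw offset by 180°. This can be reproduced with the same quaternion-to-Euler formulas used in the Print() method above, sketched here in Python:

```python
import math

def quat_to_euler(w, x, y, z):
    """Same conversion as in Print(): returns (roll, pitch, yaw) in degrees."""
    ysqr = y * y
    roll = math.degrees(math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + ysqr)))
    t2 = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.degrees(math.asin(t2))
    yaw = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (ysqr + z * z)))
    return roll, pitch, yaw

def pitch_quat(deg):
    """Quaternion for a pure rotation about the Y (pitch) axis."""
    half = math.radians(deg) / 2
    return (math.cos(half), 0.0, math.sin(half), 0.0)

# Below 90° of pitch, everything looks as expected...
print(quat_to_euler(*pitch_quat(80)))   # roll ~0, pitch ~80, yaw ~0
# ...but 10° further, pitch folds back and roll/yaw jump by 180°.
print(quat_to_euler(*pitch_quat(100)))  # roll ~180, pitch ~80, yaw ~180
```

So for VR/AR use, work with the OrientationSensor's quaternion (or rotation matrix) directly instead of Euler angles; the quaternion itself changes continuously across the horizon.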
I want to calculate the lean angle when I ride my motorcycle on a track, and I'm thinking about using my Lumia 920 for that. The product that got my interest is http://leanometer.com. I found out that using only a gyroscope is bad, as it drifts over time, so a complementary filter with the accelerometer seemed the way to go. I found this code for it at http://www.pieter-jan.com/node/11:
#define ACCELEROMETER_SENSITIVITY 8192.0
#define GYROSCOPE_SENSITIVITY 65.536
#define M_PI 3.14159265359
#define dt 0.01 // 10 ms sample rate!
void ComplementaryFilter(short accData[3], short gyrData[3], float *pitch, float *roll)
{
float pitchAcc, rollAcc;
// Integrate the gyroscope data -> int(angularSpeed) = angle
*pitch += ((float)gyrData[0] / GYROSCOPE_SENSITIVITY) * dt; // Angle around the X-axis
*roll -= ((float)gyrData[1] / GYROSCOPE_SENSITIVITY) * dt; // Angle around the Y-axis
// Compensate for drift with accelerometer data if !bullshit
// Sensitivity = -2 to 2 G at 16Bit -> 2G = 32768 && 0.5G = 8192
int forceMagnitudeApprox = abs(accData[0]) + abs(accData[1]) + abs(accData[2]);
if (forceMagnitudeApprox > 8192 && forceMagnitudeApprox < 32768)
{
// Turning around the X axis results in a vector on the Y-axis
pitchAcc = atan2f((float)accData[1], (float)accData[2]) * 180 / M_PI;
*pitch = *pitch * 0.98 + pitchAcc * 0.02;
// Turning around the Y axis results in a vector on the X-axis
rollAcc = atan2f((float)accData[0], (float)accData[2]) * 180 / M_PI;
*roll = *roll * 0.98 + rollAcc * 0.02;
}
}
I have converted this code to C#, but I have trouble getting anything accurate out of it. One problem is that I don't know the gyro/accelerometer sensitivity or how to get it. After a while I googled some more on accelerometers and angles and found this: http://www.hobbytronics.co.uk/accelerometer-info which computes the angle from the accelerometer a bit differently from the code above, but it seemed to work. When I put the hobbytronics algorithm into the code above, I got strange behavior: the output always tried to settle near a -1.4 degree angle.
I've got the real code on another computer, but this is roughly how I do it:
var lastReading = DateTimeOffset.MinValue;
var angleX = 0.0;
var gyro = new Gyroscope();
gyro.TimeBetweenUpdates = TimeSpan.FromMilliseconds(20);
gyro.CurrentValueChanged += ValueChanged;
var acc = new Accelerometer();
acc.TimeBetweenUpdates = TimeSpan.FromMilliseconds(20);
gyro.Start(); acc.Start();

void ValueChanged(object sender, SensorReadingEventArgs<GyroscopeReading> e)
{
    var reading = e.SensorReading;
    if (reading.Timestamp - lastReading < TimeSpan.FromSeconds(1))
    {
        var dt = reading.Timestamp - lastReading;
        // Got so many errors without this dead band, maybe the threshold is too high?
        if (reading.RotationRate.X > 0.05 || reading.RotationRate.X < -0.05)
        {
            var x = reading.RotationRate.X * dt.TotalSeconds; // integrated angle, radians
            var accData = acc.CurrentValue.Acceleration;
            var accX = Math.Atan(accData.X / Math.Sqrt(accData.Y * accData.Y + accData.Z * accData.Z));
            angleX = 0.98 * (angleX + x * 180 / Math.PI) + 0.02 * (accX * 180 / Math.PI);
            // Sensor events arrive on a background thread, so marshal to the UI thread.
            Dispatcher.BeginInvoke(() => TxtAngleX.Text = "x:" + angleX.ToString("F"));
        }
    }
    lastReading = reading.Timestamp;
}
What am I missing? One improvement could be to set the angle from the accelerometer alone when lastReading is more than one second old, but I know that is not the root problem.
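For reference, the complementary filter from the C snippet boils down to blending the integrated gyro rate with the accelerometer's absolute (gravity-based) angle. A minimal sketch in Python with made-up sample data (no real sensor units or APIs assumed):

```python
import math

def complementary_step(angle, gyro_rate_deg_s, acc_vector, dt, k=0.98):
    """One filter update: integrate the gyro, then pull toward the accel angle."""
    ax, ay, az = acc_vector
    # Absolute angle from gravity alone (noisy but drift-free).
    acc_angle = math.degrees(math.atan2(ay, az))
    # Gyro integration (smooth but drifting), blended with the accel angle.
    return k * (angle + gyro_rate_deg_s * dt) + (1 - k) * acc_angle

# Device held still at a 45-degree tilt: the gyro reads ~0 and gravity
# splits evenly between the Y and Z axes, so the estimate converges to 45.
angle = 0.0
for _ in range(300):
    angle = complementary_step(angle, 0.0, (0.0, 0.707, 0.707), dt=0.02)
print(round(angle, 1))  # approaches 45 as the accel term wins over time
```

Two things to note against the question's code: the filter must run on every sample (the dead band above skips updates, so the accelerometer correction never runs during slow drift), and dt must be the time since the previous sample, computed the right way around.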