Microsoft Band UV sensor data unfamiliar - C#

I'm using the Microsoft Band for development on Windows Phone 8.1 and trying to figure out the full capability of the UV sensor. I found a few examples that read the UV level. They show that an enum can be used:
namespace Microsoft.Band.Sensors
{
    public enum UVIndexLevel
    {
        None = 0,
        Low = 1,
        Medium = 2,
        High = 3,
        VeryHigh = 4,
    }
}
I would also like to know the scale behind these enum values. As far as I know, the UV index runs from 0 to 11+. What ranges do those enum values cover?
I'm basically using these lines of code:
{
    bandClient.SensorManager.UV.ReadingChanged += Ultraviolet_ReadingChanged;
    await bandClient.SensorManager.UV.StartReadingsAsync();
    // ...later in the code...
    await bandClient.SensorManager.UV.StopReadingsAsync();
    bandClient.SensorManager.UV.ReadingChanged -= Ultraviolet_ReadingChanged;
}
The async method:
async void Ultraviolet_ReadingChanged(object sender, BandSensorReadingEventArgs<IBandUVReading> e)
{
    IBandUVReading ultra = e.SensorReading;
    UVIndexLevel potatoUV = ultra.IndexLevel;
}
But for some reason I don't get index values most of the time. I sometimes get readings around 8 million to 10 million (or in the thousands) when in direct sunlight. The values come back as an int (though sometimes I do get the enum values).
I am interested in how I can measure it. Also, exactly what kind of UV is it reading? I know there are several kinds of UV exposure. How can I use this data?
If it's a range, then maybe I can map it to a range value, but I need some way to sample it, determine what UV index it corresponds to, present that information to the user, and use the index in later calculations.
ALSO...
I also seem to have hit a bug. While testing the UV sensor in direct sunlight, the reading did not display. Only once I moved to another UV level did it change (but it never changed back to the first one). It seems the first reading either never fires (the method is ReadingChanged, after all) or is a default value, for whatever that's worth. Is there a way to request a reading on a button click?
If need be, I can dig up the examples I used for more of the surrounding code, but most of it is here.

A new Band SDK will be available soon and will fix the UV sensor data not being returned correctly. Stay tuned!

This sensor is a bit different from the others. It requires the user to take an action to acquire the data, and only then can the data be gathered.
Here is code that works on WP8.1 with a Band.
DateTime start;

private async void Button_Click(object sender, RoutedEventArgs e)
{
    try
    {
        this.viewModel.StatusMessage = "Connecting to Band";

        // Get the list of Microsoft Bands paired to the phone.
        IBandInfo[] pairedBands = await BandClientManager.Instance.GetBandsAsync();
        if (pairedBands.Length < 1)
        {
            this.viewModel.StatusMessage = "This sample app requires a Microsoft Band paired to your phone. Also make sure that you have the latest firmware installed on your Band, as provided by the latest Microsoft Health app.";
            return;
        }

        // Connect to Microsoft Band.
        using (IBandClient bandClient = await BandClientManager.Instance.ConnectAsync(pairedBands[0]))
        {
            start = DateTime.Now;
            this.viewModel.StatusMessage = "Reading ultraviolet sensor";

            // Subscribe to ultraviolet data.
            bandClient.SensorManager.UV.ReadingChanged += UV_ReadingChanged;
            await bandClient.SensorManager.UV.StartReadingsAsync();

            // Receive UV data for a while.
            await Task.Delay(TimeSpan.FromMinutes(5));

            await bandClient.SensorManager.UV.StopReadingsAsync();
            bandClient.SensorManager.UV.ReadingChanged -= UV_ReadingChanged;
        }
        this.viewModel.StatusMessage = "Done";
    }
    catch (Exception ex)
    {
        this.viewModel.StatusMessage = ex.ToString();
    }
}

void UV_ReadingChanged(object sender, Microsoft.Band.Sensors.BandSensorReadingEventArgs<Microsoft.Band.Sensors.IBandUVReading> e)
{
    var span = (DateTime.Now - start).TotalSeconds;
    IBandUVReading ultra = e.SensorReading;
    string text = string.Format("Ultraviolet = {0}\nTime Stamp = {1}\nTime Span = {2}\n", ultra.IndexLevel, ultra.Timestamp, span);
    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { this.viewModel.StatusMessage = text; }).AsTask();
    start = DateTime.Now;
}

Related

Reduce CPU overhead while processing a video stream

I am developing a C# WPF automatic number plate recognition (ANPR) application using an OCR engine.
The flow is: I am getting pictures from an MJPEG video stream, and these images should be passed to the OCR to get the plate number and other details.
The problem is that the video stream produces about 30 frames/second and the CPU can't handle that much processing; it takes around 1 second to process 1 frame, and when many frames pile up in the queue the CPU sits at 70% usage (Intel i7 4th gen).
Can anyone suggest a solution or a better implementation?
// This is the queue that holds the frames produced
// from the video streaming (video_NewFrame1).
private readonly Queue<byte[]> _anpr1Produces = new Queue<byte[]>();

// I am using AForge.Video to read the MJPEG stream;
// this event is triggered for every frame.
private void video_NewFrame1(object sender, NewFrameEventArgs eventArgs)
{
    var frameDataAnpr = new Bitmap(eventArgs.Frame);
    AnprCam1.Source = GetBitmapimage(frameDataAnpr);

    // Add the current frame (converted to a byte array elsewhere) to the queue.
    _anpr1Produces.Enqueue(imgByteAnpr);

    // This worker is the consumer that takes
    // the frames from the queue to the OCR processing.
    if (!_workerAnpr1.IsBusy)
    {
        _workerAnpr1.RunWorkerAsync(imgByteAnpr);
    }
}

// This is the consumer; it takes the frames from the queue to the OCR.
private void WorkerAnpr1_DoWork(object sender, DoWorkEventArgs e)
{
    while (true)
    {
        if (_anpr1Produces.Count <= 0) continue;
        BgWorker1(_anpr1Produces.Dequeue());
    }
}

// This method processes the frames sent from the consumer.
private void BgWorker1(byte[] imageByteAnpr)
{
    var anpr = new cmAnpr("default");
    var objgxImage = new gxImage("default");
    if (imageByteAnpr != null)
    {
        objgxImage.LoadFromMem(imageByteAnpr, 1);
        if (anpr.FindFirst(objgxImage) && anpr.GetConfidence() >= Configs.ConfidanceLevel)
        {
            var vehicleNumber = anpr.GetText();
            var vehicleType = anpr.GetType().ToString();
            if (vehicleType == "0") return;

            var imagename = string.Format("{0:yyyy_MMM_dd_HHmmssfff}", currentDateTime) + "-1-" +
                            vehicleNumber + ".png";

            // This task runs asynchronously to do the rest of the process: saving the
            // vehicle image, getting the vehicle color, storing to the database, etc.
            var tsk = ProcessVehicle("1", vehicleType, vehicleNumber, imageByteAnpr, imagename, currentDateTime, anpr, _anpr1Produces);
        }
        else
        {
            GC.Collect();
        }
    }
}
What you should do is this:
First, figure out whether a frame is worth processing at all. If you're using a compressed video stream, you can usually read each frame's compressed size very quickly; in inter-coded streams that size reflects the difference between the current frame and the previous one.
When it's small, not much changed (i.e. no car drove by).
That's a low-tech way to do motion detection, without even having to decode a frame, and it should be extremely fast.
That way, you can probably decide to skip 80% of the frames in a couple of milliseconds.
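As a rough illustration, here is a minimal sketch of that size-based filter, assuming the compressed frames arrive as byte arrays (the threshold is a made-up number you would tune for your stream):
// Minimal sketch of the size-delta filter described above.
// Assumes compressed frames arrive as byte[]; the threshold is illustrative.
private int _lastFrameSize = -1;
private const int SizeDeltaThreshold = 2048; // bytes; assumed value, tune per stream

private bool IsWorthProcessing(byte[] compressedFrame)
{
    int delta = _lastFrameSize < 0
        ? int.MaxValue // always process the very first frame
        : Math.Abs(compressedFrame.Length - _lastFrameSize);
    _lastFrameSize = compressedFrame.Length;

    // A small size delta usually means little changed between frames.
    return delta > SizeDeltaThreshold;
}
Calling this in video_NewFrame1 before the Bitmap is ever created means skipped frames cost almost nothing.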
Once in a while you'll get frames that need processing. Make sure that you can buffer enough frames to keep recording while you're doing your slow processing.
The next thing to do is find a region of interest, and focus on those first. You could do that by simply looking at areas where the color changed, or try to find rectangular shapes.
Finally, one second of processing is SLOW if you need to process 30 fps. You need to make things faster, or you'll have to build up a gigantic buffer and hope that you'll eventually catch up when the road quiets down.
Make sure to make proper use of multiple cores if they are available, but in the end, knowing which pieces of the image are NOT relevant is the key to faster performance here.
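On the multi-core point: the plain Queue<byte[]> plus a single BackgroundWorker in the question is neither thread-safe nor parallel. Here is a minimal sketch of a bounded, multi-consumer alternative, assuming BgWorker1 is the OCR routine from the question (the capacity of 60 frames is an arbitrary choice):
// Bounded buffer: holds up to 60 frames, then sheds load instead of growing forever.
private readonly BlockingCollection<byte[]> _frames = new BlockingCollection<byte[]>(60);

private void StartOcrWorkers()
{
    // One OCR consumer per core; GetConsumingEnumerable blocks until a frame arrives.
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Task.Run(() =>
        {
            foreach (var frame in _frames.GetConsumingEnumerable())
                BgWorker1(frame);
        });
    }
}

// In video_NewFrame1, replace _anpr1Produces.Enqueue(...) with:
// _frames.TryAdd(imgByteAnpr); // returns false (drops the frame) when the buffer is full
This needs System.Collections.Concurrent and System.Threading.Tasks, and it assumes cmAnpr can safely be instantiated on several threads at once; check your OCR vendor's documentation on that.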

Tracking location in real time in Windows Phone 8.1

Sorry if this is a dumb question, I'm a beginner to Windows Phone 8.1 development.
I am using a MapControl to display my current location on the map, but as I move, my position does not get updated automatically in real time unless I click a button and re-initialize the position of the pushpin that was created earlier. Is there a better way to make this happen automatically, without the user having to push the button every time he wants to see his current location?
private async Task setMyLocation()
{
    try
    {
        var gl = new Geolocator() { DesiredAccuracy = PositionAccuracy.High };
        Geoposition location = await gl.GetGeopositionAsync(TimeSpan.FromMinutes(5), TimeSpan.FromSeconds(5));
        var pin = new MapIcon()
        {
            Location = location.Coordinate.Point,
            Title = "You are here",
            NormalizedAnchorPoint = new Point() { X = 0, Y = 0 },
        };
        myMapView.MapElements.Add(pin);
        await myMapView.TrySetViewAsync(location.Coordinate.Point, 20);
    }
    catch
    {
        myMapView.Center = new Geopoint(App.centerPin);
        myMapView.ZoomLevel = 20;
        Debug.WriteLine("GPS NOT FOUND");
    }
    App.centerPin = myMapView.Center.Position;
}
Thanks in Advance!
Rather than running a timer and polling, handle the Geolocator.PositionChanged event. This will fire each time the position changes and will be significantly more efficient.
You can set the Geolocator.MovementThreshold property so it will only fire if the user has moved a given distance and not do anything if the user stands in the same place. The threshold you pick will probably depend on how far your map is zoomed in.
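For example, a minimal sketch (the 10m threshold is an arbitrary value, and pin is assumed to be the MapIcon created in setMyLocation):
var locator = new Geolocator
{
    DesiredAccuracy = PositionAccuracy.High,
    MovementThreshold = 10 // metres; pick a value that suits your zoom level
};
locator.PositionChanged += async (s, args) =>
{
    // PositionChanged fires off the UI thread, so marshal back before touching the map.
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        pin.Location = args.Position.Coordinate.Point;
        myMapView.Center = args.Position.Coordinate.Point;
    });
};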
The way I would do it is to put a timer that requests the information at a given interval.
Take a look at this answer, which contains a code snippet: Stackoverflow answer
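If you do go the polling route, a minimal sketch would simply re-run the method from the question on a timer (the 5-second interval is an arbitrary choice):
var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
timer.Tick += async (s, e) => await setMyLocation(); // re-runs the position/pushpin update
timer.Start();
Note that setMyLocation as written adds a new MapIcon on every call, so you would want to move or clear the existing pin instead of adding another one.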

C# Maps - RouteQuery giving a false route

So I have a Bing Maps control, a GeoCoordinateWatcher getting GPS latitude and longitude, a timer for position intervals, and a RouteQuery turning the GPS coordinates into a path for the map.
The points are correct, give or take a couple of meters. The problem is that if I am near an intersection or a side street, the route query takes me off on a half-mile expedition that I never went on.
I have tried using both default accuracy and high accuracy but I get the same results. It actually seems to be worse with high accuracy.
Has anyone else had this issue?
RouteQuery rq = new RouteQuery();
List<GeoCoordinate> cords = new List<GeoCoordinate>();
foreach (classes.PositionObj posObj in waypoints)
{
    cords.Add(new GeoCoordinate(Convert.ToDouble(posObj.Lattitude), Convert.ToDouble(posObj.Longitude)));
}
rq.Waypoints = cords;
rq.QueryCompleted += rw_QueryCompleted;
rq.QueryAsync();

void rw_QueryCompleted(object sender, QueryCompletedEventArgs<Route> e)
{
    try
    {
        if (e.Error == null)
        {
            Route myroute = e.Result;
            mapRoute = new MapRoute(myroute);
            mapRoute.Color = (Color)Application.Current.Resources["PhoneAccentColor"];
            myMap.AddRoute(mapRoute);
        }
    }
    catch (Exception error)
    {
        MessageBox.Show(error.Message);
        MessageBox.Show(error.StackTrace);
        leaveFeedback(error.StackTrace);
    }
}
I haven't been able to test it yet but I think that this is the answer I am looking for.
rq.RouteOptimization = RouteOptimization.MinimizeDistance;
This is the documentation I found:
http://msdn.microsoft.com/en-US/library/windowsphone/develop/microsoft.phone.maps.services.routeoptimization(v=vs.105).aspx
http://msdn.microsoft.com/en-US/library/windowsphone/develop/microsoft.phone.maps.services.routequery(v=vs.105).aspx
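For context, here is how those options would slot into the snippet from the question (an untested sketch):
RouteQuery rq = new RouteQuery();
rq.Waypoints = cords;
rq.RouteOptimization = RouteOptimization.MinimizeDistance; // prefer the shortest route between fixes
rq.TravelMode = TravelMode.Driving;
rq.QueryCompleted += rw_QueryCompleted;
rq.QueryAsync();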
Try the code:
routeQuery.TravelMode = TravelMode.Driving;

Motion Detection

I really cannot get my head around this, so I hope that someone can give me a little hand ^^
I'm trying to detect motion in C# via my webcam.
So far I've tried multiple libraries (AForge Lib), but failed because I did not understand how to use them.
At first I just wanted to compare the pixels from the current frame with those from the last one, but that turned out to work like utter s**t :I
Right now, my webcam raises a "webcam_ImageCaptured" event every time it captures a picture, which is about 5-10 fps.
But I cannot find a simple way to get the difference between the two images, or at least something that works decently.
Has anybody got an idea of how I could do this relatively simply (as far as that is possible)?
Getting motion detection to work using the libraries you mention is trivial. Following is an AForge (version 2.2.4) example. It works on a video file but you can easily adapt it to the webcam event.
Johannes is right, but I think playing around with these libraries eases the way to understanding basic image processing.
My application processes 720p video at 120FPS on a very fast machine with SSDs and around 50FPS on my development laptop.
public static void Main()
{
    float motionLevel = 0F;
    System.Drawing.Bitmap bitmap = null;
    AForge.Vision.Motion.MotionDetector motionDetector = null;
    AForge.Video.FFMPEG.VideoFileReader reader = new AForge.Video.FFMPEG.VideoFileReader();

    motionDetector = GetDefaultMotionDetector();
    reader.Open(@"C:\Temp.wmv");

    while (true)
    {
        bitmap = reader.ReadVideoFrame();
        if (bitmap == null) break;

        // motionLevel will indicate the amount of motion as a percentage.
        motionLevel = motionDetector.ProcessFrame(bitmap);

        // You can also access the detected motion blobs as follows:
        // ((AForge.Vision.Motion.BlobCountingObjectsProcessing) motionDetector.Processor).ObjectRectangles[i]...
    }

    reader.Close();
}
// Play around with this function to tweak results.
public static AForge.Vision.Motion.MotionDetector GetDefaultMotionDetector()
{
    AForge.Vision.Motion.IMotionDetector detector = null;
    AForge.Vision.Motion.IMotionProcessing processor = null;
    AForge.Vision.Motion.MotionDetector motionDetector = null;

    //detector = new AForge.Vision.Motion.TwoFramesDifferenceDetector()
    //{
    //    DifferenceThreshold = 15,
    //    SuppressNoise = true
    //};

    //detector = new AForge.Vision.Motion.CustomFrameDifferenceDetector()
    //{
    //    DifferenceThreshold = 15,
    //    KeepObjectsEdges = true,
    //    SuppressNoise = true
    //};

    detector = new AForge.Vision.Motion.SimpleBackgroundModelingDetector()
    {
        DifferenceThreshold = 10,
        FramesPerBackgroundUpdate = 10,
        KeepObjectsEdges = true,
        MillisecondsPerBackgroundUpdate = 0,
        SuppressNoise = true
    };

    //processor = new AForge.Vision.Motion.GridMotionAreaProcessing()
    //{
    //    HighlightColor = System.Drawing.Color.Red,
    //    HighlightMotionGrid = true,
    //    GridWidth = 100,
    //    GridHeight = 100,
    //    MotionAmountToHighlight = 100F
    //};

    processor = new AForge.Vision.Motion.BlobCountingObjectsProcessing()
    {
        HighlightColor = System.Drawing.Color.Red,
        HighlightMotionRegions = true,
        MinObjectsHeight = 10,
        MinObjectsWidth = 10
    };

    motionDetector = new AForge.Vision.Motion.MotionDetector(detector, processor);
    return (motionDetector);
}
Motion detection is a complex matter, and it requires a lot of computing power.
Try to limit what you want to detect first. In increasing order of complexity: do you want to detect whether there is motion at all? How much motion there is? Which areas of the image are actually moving?
I assume you just want to know when something changed (a sketch follows after this list):
subtract adjacent frames from each other
calculate the sum of the squares of all pixel differences
divide by the number of pixels
watch that number for your webcam stream; it will have a certain noise floor and will rise significantly when something moves
try limiting the comparison to a single color channel; this may improve things
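A minimal sketch of that per-frame metric, assuming two same-sized Bitmaps from consecutive webcam frames (GetPixel is slow and fine only for a sketch; LockBits would be the faster route in production):
using System.Drawing;

static double MeanSquaredDifference(Bitmap previous, Bitmap current)
{
    double sum = 0;
    for (int y = 0; y < current.Height; y++)
    {
        for (int x = 0; x < current.Width; x++)
        {
            // Limit the comparison to one channel (green here) to reduce noise.
            int diff = current.GetPixel(x, y).G - previous.GetPixel(x, y).G;
            sum += diff * diff;
        }
    }
    // Normalize by pixel count; watch this value for spikes above the noise floor.
    return sum / (current.Width * current.Height);
}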

Windows Phone Location API

I am developing a location-based social networking application and am using a GeoCoordinateWatcher with high accuracy and a movement threshold of 20m to obtain the user's location. My question is about the frequency of the location fixes. From the documentation, I gather that a movement threshold of 20m simply means that the PositionChanged event is not triggered unless the current location is at least 20m away from the location at the previous PositionChanged event. This suggests that location fixes still happen, but they do not trigger the event handler if the movement is under 20m. How does the device then decide how often to perform a location fix? Does changing the movement threshold change this in any way? Any extra documentation which I may have missed is welcome!
Thank you!
I think you want to know how MovementThreshold works and how to set it up.
Basically, you can say:
public class MyClass
{
    private IGeoPositionWatcher<GeoCoordinate> _geoCoordinateWatcher;

    /// <summary>
    /// Gets the geo coordinate watcher.
    /// </summary>
    private IGeoPositionWatcher<GeoCoordinate> GeoCoordinateWatcher
    {
        get
        {
            if (_geoCoordinateWatcher == null)
            {
                _geoCoordinateWatcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);
                ((GeoCoordinateWatcher)_geoCoordinateWatcher).MovementThreshold = 3;
            }
            return _geoCoordinateWatcher;
        }
    }
}
Somewhere else you might have:
DispatcherTimer currentSpeedTimer = new DispatcherTimer();
currentSpeedTimer.Interval = new TimeSpan(0, 0, 1);
currentSpeedTimer.Tick += (sender, e) =>
{
    if (this.GeoCoordinateWatcher.Position.Location.HorizontalAccuracy < 10)
    {
        if (DateTime.Now - this.GeoCoordinateWatcher.Position.Timestamp.DateTime > new TimeSpan(0, 0, 2))
        {
            CurrentSpeed = 0;
        }
        else
        {
            CurrentSpeed = double.IsNaN(this.GeoCoordinateWatcher.Position.Location.Speed) ? 0 : this.GeoCoordinateWatcher.Position.Location.Speed;
        }
    }
};
currentSpeedTimer.Start();
It's also worth pointing out that I found working with .NET Reactive Extensions and the IGeoPositionWatcher worked out really well for me.
http://msdn.microsoft.com/en-us/data/gg577609.aspx
To me it sounds like the event fires when the current location is more than 20m from the previous position.
If there is a way to change the threshold, it will trigger at different distances; however, the useful minimum may be around 20m, as that's roughly the maximum resolution GPS satellites give you, if I remember correctly (not sure).
