I've encountered a problem and need help figuring out what's happening. The idea is to synchronize two or more Android devices using the GPS location timestamp. I've heard that GPS time is very accurate, unlike system time, which may vary by a few seconds between devices. However, the results I get are not what I expected.
void Start() {
    Input.location.Start();
    double UTC_timestamp = Input.location.lastData.timestamp;
    Input.location.Stop();
}
So in this case, UTC_timestamp represents the total number of seconds elapsed since January 1st, 1970, 00:00:00 UTC.
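For reference, a quick way to sanity-check that value (assuming it really is Unix-epoch seconds) is to convert it to a readable date, for example:
// Converts the Unix-epoch seconds into a readable UTC DateTime for debugging.
var readable = new System.DateTime(1970, 1, 1, 0, 0, 0, System.DateTimeKind.Utc).AddSeconds(UTC_timestamp);
Debug.Log(readable.ToString("yyyy-MM-dd HH:mm:ss.fff"));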
But if I request a timestamp on two different devices, I get a surprisingly large difference, and my attempts to synchronize them via the satellite timestamp fail.
Here is how I tested that:
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class GPSTime : MonoBehaviour {
    double UTC_timestamp;
    public Text txt_UTC;

    IEnumerator Start() {
        UTC_timestamp = getGPSTime();
        while (Application.isPlaying) {
            yield return new WaitForSeconds(1);
            UTC_timestamp++;
        }
    }

    void Update() {
        print("UTC: " + UTC_timestamp);
        txt_UTC.text = "UTC: " + UTC_timestamp;
    }

    private double getGPSTime() {
        Input.location.Start();
        UTC_timestamp = Input.location.lastData.timestamp;
        Input.location.Stop();
        return UTC_timestamp;
    }
}
I ran this on two different devices, and the timestamps I get differ quite significantly.
Out of four app launches I get the following:
DEVICE 1
1) 1438782375.605
2) 1438782610.260
3) 1438782681.926
4) 1438782960.266
DEVICE 2
1) 1438782505.306
2) 1438782680.011
3) 1438782675.226
4) 1438782967.400
So the first launch differs by ~130 seconds! The second differs by ~70 seconds, the third by 6-7 seconds, and the fourth by around 7 seconds.
Why such strange differences? What can I do to get the best results possible?
It seems I have found my answer after all. In case anyone else faces this issue, here is what I found out.
Input.location.Start (); //initializer, takes two arguments
Default arguments are:
Input.location.Start (10, 10);
Both are of type float: the first is the desired accuracy (10 meters) and the second is how far the device has to move before a new update is requested from the satellite (10 meters). So the reason I was getting such strange timestamps on both devices is that I did not realize I actually had to move for the time to get updated. What I did was relax the accuracy to 100 meters (I do not need 10-meter accuracy just to get a timestamp) and set the displacement value to zero, and I got a perfect PPS (pulse per second) where both devices were in sync, which is exactly what I needed.
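In other words, the call that fixed it for me looks like this (100 m desired accuracy, zero update distance, as described above):
Input.location.Start(100f, 0f); // updates now arrive continuously instead of only after moving 10 m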
Another point: GPS signals don't get through very well indoors. Much depends on whether the sky is open and how tall the surrounding buildings are, so keep that in mind if you are having trouble getting a GPS fix. However, I can confirm that GPS time is indeed very accurate and works very well in the Unity engine on Android devices. iOS is not tested yet.
Related
Trying to make an accurate replay system in Unity and C#
Hi all,
I'm working on a racing game and I decided to add a replay system, to allow a "ghost car" as well. Initially I recorded data only on certain events like key presses, but only by recording data on every frame did I manage a smooth replay. That's still OK, since the file is not huge and the replay works, but the trouble is that there is always a slight variation in time, around 0.1 seconds or 0.2 at the most. I have a list of keyframes, and in each position I record a time to be shown. I think the trouble is that because the FPS varies, the same time marks are not hit on every run, so the winning frame's time is not always rendered and the winning frame happens in the next update, slightly after it should be shown. I'm using C# and Unity, just in case, but I think the problem is mostly independent of that. Thanks a lot for any clue; I have been stuck on this issue for some time now.
It sounds like you're doing frame-by-frame replay which, as you've discovered, requires your frames to play back with the same delay as the recording. In a game-render loop, that's not guaranteed to happen.
As you record the car states (position, heading, etc) per frame, you need to also record a timestamp (in this case, accumulating Time.deltaTime from race start should suffice).
When you play it back, find the current timestamp and interpolate (i.e., Lerp) the car's state between the recorded bounding frames.
Hints for frame interpolation:
class Snapshot {
    public float Timestamp;
    public Matrix4x4 Transform; // As an example. Put more data here.
}

private int PrevIndex = 0;
// Append snapshots in chronological order while recording so the list stays sorted by Timestamp.
private List<Snapshot> Snapshots = new List<Snapshot>();

private float GetLerpFactor(float currentTimestamp) {
    if (Snapshots.Count < 2 || PrevIndex >= Snapshots.Count - 1)
        return 0f; // nothing to interpolate / reached end of Snapshots

    // Move the 'frame' forward until currentTimestamp falls between PrevIndex and PrevIndex + 1.
    while (PrevIndex < Snapshots.Count - 1 && currentTimestamp >= Snapshots[PrevIndex + 1].Timestamp)
        PrevIndex++;

    if (PrevIndex == Snapshots.Count - 1)
        return 0f; // walked past the last recorded snapshot

    var currentDelta = Mathf.Max(0f, currentTimestamp - Snapshots[PrevIndex].Timestamp);
    var fullDelta = Snapshots[PrevIndex + 1].Timestamp - Snapshots[PrevIndex].Timestamp;
    return currentDelta / fullDelta;
}
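A minimal playback sketch built on the hints above; it assumes a hypothetical replayTime accumulated from Time.deltaTime and that only the position (stored in the snapshot matrix) is interpolated:
// Hypothetical playback driver for the ghost car, using the fields above.
private float replayTime = 0f;

void Update() {
    if (Snapshots.Count == 0) return;

    replayTime += Time.deltaTime;
    float t = GetLerpFactor(replayTime);

    int nextIndex = Mathf.Min(PrevIndex + 1, Snapshots.Count - 1);
    Vector3 from = Snapshots[PrevIndex].Transform.GetColumn(3); // translation column of the matrix
    Vector3 to = Snapshots[nextIndex].Transform.GetColumn(3);
    transform.position = Vector3.Lerp(from, to, t);
}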
Unless for some reason you need to interact with the "ghost" car (like collisions), record the final position/speed/direction data at sufficiently frequent moments of the original run and interpolate that in the new simulation. I would not record raw inputs but rather the resulting changes (like gear shifts), unless you need to measure/show how fast the user reacted to something.
If you really want to replay the same inputs, you'd have to run two separate simulations at the same time so the physics and timing of the "real" version don't impact the "ghost" one; most likely you'll again have to interpolate the output of the "ghost" simulation to align it with the real one (unless you have fixed time steps).
I am creating a rhythm VR game for Google Cardboard, inspired by Beat Saber on PSVR and Oculus Rift.
The concept is that blocks with different directions come at you, following the rhythm of the music, just like in Beat Saber.
For every song I have created a ScriptableObject called Music, with every parameter it should contain, like the difficulty of the song, the name, etc., but also an array of Vector3 called notes: the first float of each Vector corresponds to the time the note should be hit, in the rhythm of the music, starting from 0 (see the ScriptableObject screenshot).
Then, in a script called SlashSpawnManagement, where everything is based on WaitForSeconds(), I spawn the blocks to smash. It's really hard for me to explain the order in words, so here is an image (see the Explanation screenshot).
In theory, what this script does is wait for some time, spawn a block, wait for some time, spawn a block, etc. The logic seems okay, but here's the weird part: as you play the song, the gap between each block gradually becomes bigger and bigger, meaning the blocks get more and more out of sync with the music. It starts very well, but by the end of the song there is at least a 5 s gap.
I figured it had something to do with the frame rate dropping, so I tried to cap the frame rate with:
QualitySettings.vSyncCount = 0; // VSync must be disabled
Application.targetFrameRate = 30;
But it doesn't solve the problem. I tried WaitForSecondsRealtime instead of WaitForSeconds, but nothing changes. I have read somewhere that WaitForSeconds depends on the frame rate... Where I calculate the time, I tried subtracting h divided by a number to even out the gradual gap; this works for some blocks but not for every block: Notes[j][0] - h / 500
So here's my question: how do I make WaitForSeconds, or any other method, consistent with the seconds provided?
Thank you in advance.
PS: For more clarification, please ask, and please forgive my typos and my English :)
If you want something to happen in regular time intervals, it is important to make sure that errors don't accumulate.
Don't:
private IEnumerator DoInIntervals()
{
    while (this)
    {
        yield return new WaitForSeconds(1f); // this will NOT wait exactly 1 second but a few ms more; the error accumulates over time
        DoSomething();
    }
}
Do:
private IEnumerator DoInIntervals()
{
    const float interval = 1f;
    float nextEventTime = Time.time + interval;
    while (this)
    {
        if (Time.time >= nextEventTime)
        {
            nextEventTime += interval; // this ensures that nextEventTime is exactly (interval) seconds after the previous nextEventTime. Errors are not accumulated.
            DoSomething();
        }
        yield return null;
    }
}
This will make sure your events happen in regular intervals.
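A minimal usage sketch, assuming the coroutine above lives on a MonoBehaviour:
void Start()
{
    StartCoroutine(DoInIntervals()); // Unity drives the coroutine once per frame
}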
Note: Even though this will be regular, it does not guarantee that it will stay in sync with other systems like audio. That can only be achieved by having a shared time between systems, like spender mentioned in his comment on your question.
Trying to use WaitForSeconds to match the timing of audio and hoping for the best is wishful thinking.
Have a list of Vector3s prepared in advance. If you want to prepare the list using the rhythm, it will work. Use the AudioSource's time every Update to check whether the timing is right, and spawn a block at that moment.
void Update() {
    SpawnIfTimingIsRight();
}

void SpawnIfTimingIsRight() {
    if (nextIndex >= timingsArray.Length)
        return; // all blocks spawned

    float nextTiming = timingsArray[nextIndex].x;
    // Time for the block to travel to its hit position, so spawn it that much earlier.
    float blockTime = this.blockDistanceToTravel / this.blockSpeed;
    float timeToSpawnBlock = nextTiming - blockTime;

    if (this.audioSource.time >= timeToSpawnBlock) {
        // Spawn block
        nextIndex++; // advance so each note is spawned exactly once
    }
}
I am trying to implement a volume meter to help users select their microphone using NAudio. I need to do my best to weed out devices that just have background noise and ensure I show something when the user talks.
We are currently using version 1.7.3 within a Unity3D application so none of the MMDevice related approaches are available as they crash.
I am using a WaveInEvent that I feed into a WaveInProvider that I subsequently feed to a SampleChannel. I feed the SampleChannel into a MeteringSampleProvider which I have subscribed to the StreamVolume event.
In my OnPostVolumeMeter event handler when I receive the StreamVolumeEventArgs (I named the parameter e) I'm wondering how to calculate decibels. I have seen plenty of examples that fish out the peak volume (or sometimes it seems to be referred to as an amplitude) from e.MaxSampleValues[0]. Some examples check whether it is a stereo signal and will grab the max between e.MaxSampleValues[0] or e.MaxSampleValues[1].
Anyway, what are the values of this number? Is it a percentage? They are relatively small decimals (10^-3 or 10^-4) when I hillbilly debug to the console.
Is the calculation something like,
var peak = e.MaxSampleValues[0];
if (e.MaxSampleValues.Length > 1)
{
peak = Mathf.Max(e.MaxSampleValues[0], e.MaxSampleValues[1]);
}
var dB = Mathf.Max(20.0f*Mathf.Log10(peak), -96.0f);
or do I need to divide peak by 32768.0? As in,
var dB = Mathf.Max(20.0f*Mathf.Log10(peak/32768.0f), -96.0f);
Is this approach totally incorrect? Do I need to collect a buffer of samples and do an RMS-style calculation, where I take the square root of the sum of the squared samples divided by the number of samples, divide that by 32768, and feed the result into Log10?
I've seen several references to look at the AudioPlaybackPanel of the NAudioDemo, which sets the volume meter amplitude from e.MaxSampleValues[0] and e.MaxSampleValues[1].
Looking at the date of your post, this is probably a solved issue for you, but for the benefit of others, here goes.
Audio signals swing between negative and positive values in a wave. The frequency of the swing and the amplitude, or height, of the swing affect what you hear.
You are correct in saying you are looking for the amplitude to see if audio is present.
For a meter, since the sample rate is much higher than the refresh rate of any meter you are likely to display, you will need to either record the peak using Math.Max or average over a number of samples. In your case either would work; unless you are trying to show an accurate meter in dBFS, the dB calculation is not needed.
In apps where I have been looking to trigger things based on the presence of audio, or the lack thereof, I normally convert the samples to floats, which gives a range between 0 and 1, then pick a threshold, say 0.2, and say that if any sample is above it, we have audio.
A float also provides a nice indicative meter for display. Note that if this were a pro audio application and you were asking about accurate metering, my answer would be totally different.
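A minimal sketch of a StreamVolume handler along these lines; it assumes NAudio's sample providers deliver normalized IEEE float samples (so e.MaxSampleValues is already roughly in the 0..1 range and no division by 32768 is needed), and the 0.02 threshold is just an illustrative value:
void OnPostVolumeMeter(object sender, StreamVolumeEventArgs e)
{
    // Take the loudest channel as the peak.
    float peak = e.MaxSampleValues[0];
    for (int ch = 1; ch < e.MaxSampleValues.Length; ch++)
        peak = Mathf.Max(peak, e.MaxSampleValues[ch]);

    // Simple "is someone talking" check on the linear value.
    bool hasAudio = peak > 0.02f;

    // Optional dBFS value for display, clamped to a -96 dB floor.
    float dB = Mathf.Max(20f * Mathf.Log10(Mathf.Max(peak, 1e-10f)), -96f);
}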
I have written a C# program that captures video from a specialized camera through the camera manufacturer's proprietary API. I am able to write captured frames to disk through a FileStream object, but I am at the mercy of the camera and disk I/O when it comes to the framerate.
What is the best way to make sure I write to disk at the required framerate? Is there a certain algorithm available that would compute the real-time average framerate and then add/discard frames to maintain a certain desired framerate?
It's difficult to tell much because of the lack of information.
What's the format? Is there compression?
How is the camera API sending the frames? Are they timed, so the camera will send the frame rate you asked for? If it is, you are really dealing with I/O speed.
If you need high quality and are writing without compression, you could experiment with some lossless compression algorithms to balance processing against drive I/O. You could gain some speed if the bottleneck is high drive I/O.
For dropping frames, there are ways to implement that. Normally frames carry a timestamp; you should key on that and discard frames that are too close to the previous one.
Let's say you want 60 fps, so the spacing between frames is 1000 / 60 ≈ 16.7 ms. If the frame you get has a timestamp only 13 ms after the last one you wrote, you can discard it and not write it to disk.
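A minimal sketch of that idea, assuming a hypothetical Frame type exposing a TimestampMs property from the camera API and a hypothetical WriteToDisk helper:
const double targetFps = 60.0;
const double minIntervalMs = 1000.0 / targetFps; // ~16.7 ms between kept frames
double lastWrittenMs = double.NegativeInfinity;

void OnFrameReceived(Frame frame)
{
    // Drop the frame if it arrived too soon after the last one we kept.
    if (frame.TimestampMs - lastWrittenMs < minIntervalMs)
        return;
    WriteToDisk(frame);
    lastWrittenMs = frame.TimestampMs;
}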
In a perfect world, you would measure the first second, which gives you the number of frames per second your system supports.
Say your camera is capturing 60 fps but your computer can really only handle 45 fps. What you have to do is skip a total of 15 frames per second in order to keep up. Up to here, that's easy enough.
The math in this basic case is:
60 / 15 = 4
So skip one frame every four incoming frames like so (keep frames marked with an X, skip the others):
000000000011111111112222222222333333333344444444445555555555
012345678901234567890123456789012345678901234567890123456789
XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX
Of course, it is likely that you do not get such a clear cut case. The math remains the same, it's just that you'll end up skipping frames at what looks like varying points in time:
// simple algorithm based on fps from source & destination
// which fails to keep up long term
double camera_fps = 60.0;
double io_fps = camera_fps;
for(;;)
{
    double frames_step = camera_fps / io_fps;
    double start_time = time(); // assuming you get at least ms precision
    double frame = 0.0;
    for(int i(0); i < camera_fps; ++i)
    {
        buffer frame_data = capture_frame();
        if(i == (int)frame) // <- part of the magic happens here
        {
            save_frame(frame_data);
            frame += frames_step; // advance the next frame to keep only when one was actually saved
        }
    }
    double end_time = time();
    double new_fps = camera_fps / (end_time - start_time);
    if(new_fps < io_fps)
    {
        io_fps = new_fps;
    }
}
As this algorithm shows, you want to adjust your fps as time goes by. During the first second, it is likely to give you an invalid result for all sorts of reasons. For example, writing to disk is likely to be buffered, so it goes really fast at first and it may look as if you could support 60 fps. Later, the fps will slow down and you may find that your maximum I/O speed is 57 fps instead.
One issue with this basic algorithm is that you can easily reduce the number of frames to make it work within 1 second, but it will only ever reduce the fps (i.e. io_fps is updated only when new_fps is smaller). If you find the correct number for io_fps, you're fine. If you went too far, you're dropping frames when you shouldn't. This is because new_fps will equal camera_fps when (end_time - start_time) is exactly 1 second, meaning that you did not spend too much time capturing and saving the incoming frames.
The solution to this issue is to time your save_frame() function. If the total amount is less than 1 second within the inner loop, then you can increase the number of frames you save. This works better if you can use two threads: one thread reads the frames and pushes them onto an in-memory FIFO, and the other thread retrieves the frames from that FIFO. This way the time to capture one frame doesn't (as much) affect the time it takes to save a frame.
bool stop = false;

// capture thread
for(;;)
{
    buffer frame_data = capture_frame();
    fifo.push(frame_data);
    if(stop)
    {
        break;
    }
}

// save-to-disk thread
double expected_time_to_save_one_frame = 1.0 / camera_fps;
double next_start_time = time();
for(;;)
{
    buffer frame_data = fifo.pop();
    double start_time = next_start_time;
    save_frame(frame_data);
    double end_time = time();
    next_start_time = end_time;
    if(end_time - start_time > expected_time_to_save_one_frame)
    {
        fifo.pop(); // fell behind: skip one frame
    }
}
This is just pseudo code and I may have made a few mistakes. It also expects that the gap between start_time and end_time is not going to be more than one frame (i.e. if you have a case where the capture is 60 fps and the I/O supports less than 30 fps, you often would have to skip two frames in a row).
For people who are compressing frames, keep in mind that the timing of one call to save_frame() will vary a lot. At times, the frame can easily be compressed (no horizontal movement) and at times it's really slow. This is where such dynamism can help tremendously. Your disk I/O, assuming not much else occurs while you are recording, should not vary much once you reach the maximum speed supported.
IMPORTANT NOTE: these algorithms suppose that the camera fps is fixed; this is likely not quite true; you can also time the camera and adjust the camera_fps parameter accordingly (which means the expected_time_to_save_one_frame variable can also vary over time).
I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play the game. In this game engine the speed of the game is tied to the frame rate. So generally one wants to limit the frame rate to about 60 fps. The code of this function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - previousFrame) * fps < freq)
    {
        int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
        System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    previousFrame = frame;
}
Of course I have found that due to the imprecise nature of the sleep function on some systems, the frame rate comes out quite differently than expected. The precision of the sleep function is only about 15 milliseconds, so you can't wait less than that. The strange thing is that some systems achieve a perfect frame rate with this code and can achieve a range of frame rates perfectly. But other systems don't. I can remove the sleep function and then the other systems will achieve the frame rate, but then they hog the CPU.
I have read other articles about the sleep function:
Sleep function in c in windows. Does a function with better precision exist?
Sleep Function Error In C
What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:
Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
How do I achieve a generally precise frame rate without hogging the CPU from C# code?
You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that this globally affects the system and might increase the power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed the higher timer accuracy another running program probably did that.
And I wouldn't bother calculating the sleep time; just use Sleep(1) in a loop.
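A minimal sketch of calling timeBeginPeriod from C# via P/Invoke (winmm.dll); remember to undo it with timeEndPeriod when you are done, since the setting affects the whole system:
using System.Runtime.InteropServices;

static class TimerResolution
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Enable()  { timeBeginPeriod(1); } // request 1 ms timer resolution
    public static void Disable() { timeEndPeriod(1); }   // restore the default when finished
}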
But even with only 16 ms precision you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with few adaptations to your current code.
Or you can switch to code that makes the movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter so you don't get uselessly high framerates and unnecessarily consume power.
Edit: Based on ideas and comments in this answer, the accepted answer was formulated.
Almost all game engines handle updates by passing in the time since the last frame and having movement etc. behave proportionally to time; any other implementation is faulty.
Although CodeInChaos' suggestion answers your question and might partially work in some scenarios, it's just plain bad practice.
Limiting the frame rate to your desired 60 fps will only work when the computer is running faster. However, the second a background task eats up some processor power (for example, the virus scanner starts) and your game drops below 60 fps, everything will go much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything runs at roughly half speed.
Things like Sleep are not going to help, because they halt your process in favor of another process, and that process must in turn be halted before yours resumes. Sleep(1 ms) just means that after 1 ms your process is returned to the queue, waiting for permission to run; it can therefore easily take 15 ms, depending on the other running processes and their priorities.
So my suggestion is that you add, as quickly as possible, some sort of elapsedSeconds variable that you use in all your update methods. The earlier you build it in, the less work it is, and this will also ensure compatibility with Mono.
If you have two parts of your engine, say a physics engine and a render engine, and you want to run these at different frame rates, just check how much time has passed since the last frame and decide whether to update the physics engine. As long as you incorporate the time since the last update in your calculations, you will be fine.
Also, never draw more often than you're moving something. It's a waste to draw the exact same screen twice, so if the only way something can change on screen is by updating your physics engine, then keep render engine and physics engine updates in sync.
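A minimal sketch of that elapsedSeconds idea, assuming hypothetical UpdateGame and RenderGame methods and a running flag in your engine's main loop:
var clock = System.Diagnostics.Stopwatch.StartNew();
double previous = clock.Elapsed.TotalSeconds;
while (running)
{
    double now = clock.Elapsed.TotalSeconds;
    float elapsedSeconds = (float)(now - previous); // time since the previous iteration
    previous = now;

    UpdateGame(elapsedSeconds); // e.g. position += velocity * elapsedSeconds
    RenderGame();
}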
Based on ideas and comments on CodeInChaos' answer, this was the final code I arrived at. I originally edited that answer, but CodeInChaos suggested this be a separate answer.
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
    {
        int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
        if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    if (++fpsFrameCount > fps)
    {
        fpsFrameCount = 0;
        fpsStartTime = frame;
    }
}