MediaElement WPF with 2 videos, one sometimes freezing - C#

I'm developing a desktop application using WPF and MediaElement.
There are two different MediaElements, each with all the transport functions: play, pause, next/previous frame, and so on.
I have two problems in my application:
- Sometimes one video freezes; it often happens when I play and then step to the next frame (or step many times) on both video1 and video2.
- When I set the position of a video, the displayed image does not always correspond to the position I set.
Is there any way to force the video to really refresh to the correct position?
Any idea about the freezing issue?
Code for NextFrame:
// Pause both players, then nudge each Position forward by one frame
// (0.04 s, i.e. assuming a 25 fps source).
m_currentMedia.Pause();
m_currentMedia1.Pause();
VideoStatus = VideoStatusEnum.Pause;
VideoStatus1 = VideoStatusEnum.Pause;
SpeedRatio = 0;
SpeedRatio1 = 0;
double NewPos = Math.Round(m_currentMedia.Position.TotalSeconds, 2) + 0.04;
TimeSpan NewPosition = TimeSpan.FromSeconds(NewPos);
double NewPos1 = Math.Round(m_currentMedia1.Position.TotalSeconds, 2) + 0.04;
TimeSpan NewPosition1 = TimeSpan.FromSeconds(NewPos1);
m_currentMedia.Position = NewPosition;
m_currentMedia1.Position = NewPosition1;
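One thing worth trying for the seek accuracy (a hedged sketch, not something from the original post): MediaElement has a ScrubbingEnabled property, and when it is set the control renders the frame at the new Position even while paused, which usually makes the displayed image follow the position you seek to.

// Hedged sketch: enable scrubbing so a paused MediaElement redraws the
// frame whenever Position changes (typically set once, before playback starts).
m_currentMedia.ScrubbingEnabled = true;
m_currentMedia1.ScrubbingEnabled = true;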

Related

How to avoid audio glitches whilst switching the volume of an AudioSource each Update() frame?

I'm developing an audio tool that plays 64 AudioSources simultaneously. To do this, I created four arrays containing 16 AudioSources each. Each array of AudioSources is routed to its own mixer. Furthermore, two mixers output to the left channel and two to the right. My DSP buffer size is set to Best Performance, meaning 1024 samples, and there are enough real/virtual voices available.
In the beginning, 60 AudioSources are set to volume = 0, while four of them are running with volume = 0.5. Each Update() frame, I set the volume of the four playing at 0.5 to zero and set four new AudioSources, which were zero before, to 0.5.
Something like this:
void SwitchSources()
{
    noseposInd++;
    if (noseposInd > 15) noseposInd = 0;

    // mute the four sources that were playing last frame
    audioSources_Lm[noseposIndTemp].volume = 0.0f;
    audioSources_Ln[noseposIndTemp].volume = 0.0f;
    audioSources_Rm[noseposIndTemp].volume = 0.0f;
    audioSources_Rn[noseposIndTemp].volume = 0.0f;

    // un-mute the next four
    audioSources_Lm[noseposInd].volume = 0.5f;
    audioSources_Ln[noseposInd].volume = 0.5f;
    audioSources_Rm[noseposInd].volume = 0.5f;
    audioSources_Rn[noseposInd].volume = 0.5f;

    noseposIndTemp = noseposInd;
}
For test purposes, I loaded a square-wave signal with f = 2 Hz (which results in an audible click per second) into each AudioSource. Recording my output with Audacity produces the result shown in the attached photo.
It seems that the buffer of one of the four signals is not written to the output, because the amplitude of a positive or negative pulse is only half. The width of the "notches" is exactly one block length, i.e. 1024 samples at a sample rate of 44.1 kHz, so there is no output for about 23 ms.
Increasing the rate of volume changes also increases the occurrences of these notches / dropouts, or whatever they should be called. Has anyone had the same problem, or can anyone share some knowledge about how the Update() method and the audio block writing of the mixers interact?
Thanks in advance!
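A hedged diagnostic sketch (not from the original post, and the class name is made up): one way to see how Update() and the mixer's block writing interleave is to log AudioSettings.dspTime each frame; with a 1024-sample buffer at 44.1 kHz the DSP clock advances in steps of roughly 0.023 s, which matches the width of the notches described above.

using UnityEngine;

// Hedged sketch: log how far the audio DSP clock moves between Update() calls.
public class DspTimeProbe : MonoBehaviour
{
    private double lastDspTime;

    void Update()
    {
        double now = AudioSettings.dspTime;
        Debug.Log("dspTime step: " + (now - lastDspTime).ToString("F4") + " s");
        lastDspTime = now;
    }
}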

Get basic audio spectrum data in Unity

I want to visualize whether an audio clip has sound or not. The microphone and the AudioSource are working correctly, but I am stuck on the visualization part. I have a hard time understanding the official documentation and would like a solution.
I tried the following code:
void Update()
{
    AnalyzeSound();
    text1.text = "sound!\n" + " rmsValue : " + rmsValue;
}

void AnalyzeSound()
{
    audio.GetOutputData(samples, 0);

    // compute the RMS of the sample block
    float sum = 0;
    for (int i = 0; i < SAMPLE_SIZE; i++)
    {
        sum += samples[i] * samples[i]; // accumulate, not overwrite
    }
    rmsValue = Mathf.Sqrt(sum / SAMPLE_SIZE);

    // convert to dB relative to a 0.1 reference level
    dbValue = 20 * Mathf.Log10(rmsValue / 0.1f);
}
Can I take rmsValue as the indicator of sound on the microphone, or should I use dbValue? What should the threshold value be?
In a few words: when can I say the microphone has sound?
There is no hard and fast definition that would separate noise from silence in all cases. It really depends on how loud the background noise is. Compare, for example, silence recorded in an anechoic chamber vs. silence recorded next to an HVAC system. The easiest thing to try is to experiment with different dB threshold values: below the threshold you consider the signal noise, and above it you consider it signal. Then adjust the threshold up or down to suit your needs. Depending on the nature of the signal (e.g. music vs. speech) you could look into other techniques such as Voice Activity Detection (https://en.wikipedia.org/wiki/Voice_activity_detection) or a convolutional neural network to segment speech and music.
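As a hedged sketch of the experiment suggested above (the -40 dB figure is only a placeholder to be tuned, not a recommended value):

// Hedged sketch: treat the microphone as "having sound" when dbValue,
// computed in AnalyzeSound() above, rises above an experimentally chosen threshold.
const float silenceThresholdDb = -40f; // placeholder; tune against your background noise

bool HasSound()
{
    return dbValue > silenceThresholdDb;
}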

Looking for ways to optimize multiple poly-lines on a canvas in WPF

I'm working on a WPF program that needs to plot, on one of its screens, 50 poly-lines that describe the movement of some energy in space. Another requirement is to move two lines and a dot to indicate the mouse location when it's relevant.
The problem is that when the poly-lines cover a high percentage of the canvas and the mouse moves over the canvas, everything starts to run really slowly.
I managed to improve this a little, but it's still not good enough, so I'm asking for your help please.
Because this canvas is related to another one, I split the work in two - the handler and the movement function:
private void rayTraceCanvas_MouseMove(object sender, MouseEventArgs e)
{
    System.Windows.Point pos = e.GetPosition(rayTraceCanvas);
    mouseMove(pos);
    if (m_MouseMoveCallback != null)
        m_MouseMoveCallback(m_CurrentDepth);
}
And then the movement function:
private void mouseMove(Point currentPos)
{
    m_CurrentDepth = Math.Round(currentPos.Y / m_PixelPerDepthUnit);
    m_CurrentRange = Math.Round(currentPos.X / m_PixelPerRangeUnit);
    DepthPos.Text = "D: " + m_CurrentDepth.ToString();
    RangePos.Text = "R: " + m_CurrentRange.ToString();

    // move the crosshair lines to the cursor
    rayTraceWidthLine.Y1 = currentPos.Y;
    rayTraceWidthLine.Y2 = currentPos.Y;
    rayTraceHeightLine.X1 = currentPos.X;
    rayTraceHeightLine.X2 = currentPos.X;

    // center the dot on the cursor
    Canvas.SetLeft(rayTraceDotOnGraph, currentPos.X - (rayTraceDotOnGraph.Width / 2));
    Canvas.SetTop(rayTraceDotOnGraph, currentPos.Y - (rayTraceDotOnGraph.Height / 2));
}
I tried this without the delegate call in the handler function and it behaves the same, so the problem isn't there.
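One approach that might help (a hedged sketch, not from the original post): keep the 50 poly-lines on their own layer and cache that layer as a bitmap, so moving the crosshair lines and the dot no longer forces WPF to re-render the poly-lines on every mouse move. The element name below is hypothetical.

// Hedged sketch: cache the (hypothetical) canvas holding only the poly-lines
// as a bitmap; the crosshair lines and dot stay on the interactive canvas.
rayTraceLinesCanvas.CacheMode = new System.Windows.Media.BitmapCache();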

Unity 3D: Camera Background Color not applying

In my 3D Google Cardboard VR mini game, before switching to another scene, I'd like to fade the current scene's background to white for a nice transition effect.
I built a function which changes the color value from yellow to white within 2 seconds:
Inside Update():
if (started) {
    if (startTime >= startDelay) {
        //start
    } else {
        //fade
        thisBrightness = startTime / 2; // runs 2 seconds
        if (thisBrightness > 1) {
            thisBrightness = 1; // just in case
        }
        Camera.main.backgroundColor = Color.Lerp (mainCameraBackground, mainCameraFaded, thisBrightness);
        startTime += Time.deltaTime;
    }
}
I logged the float "thisBrightness" and it changes from 0 to 1 as it should. Also, I can see in the Inspector that the color field in Camera > Background changes, but in my Game preview it does NOT change - the color stays the same.
Does anybody have an explanation and a solution for this?
1000 thanks!
Felix
Unity 5.5.0f3 personal
Google Cardboard 1.0
Edit: I just came back to this question and found it's not really answered.
I found out that the main camera is converted into separate left and right cameras by the Google VR SDK.
You'll need to handle both separately; see the code below for how I resolved this in the end:
public Camera leftCamera;
public Camera rightCamera;
mainCameraBackground = new Color (1, 0.8f, 0); // set to yellow initially
mainCameraFaded = new Color(1f,1f,1f);
mainCameraCurrent = new Color (0f, 0f, 0f);
// main camera is converted to left + right by Google VR SDK.
// this is why we need to handle both separately
leftCamera.clearFlags = CameraClearFlags.SolidColor;
leftCamera.backgroundColor = mainCameraBackground;
rightCamera.clearFlags = CameraClearFlags.SolidColor;
rightCamera.backgroundColor = mainCameraBackground;
and then:
mainCameraCurrent = Color.Lerp (mainCameraBackground, mainCameraFaded, thisBrightness);
rightCamera.backgroundColor = mainCameraCurrent;
leftCamera.backgroundColor = mainCameraCurrent;
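Pulled together, the fade from the question and the two-camera fix might look roughly like this (a hedged sketch using the field names from the snippets above):

// Hedged sketch: inside Update(), ramp thisBrightness over 2 seconds and
// push the lerped color to both GVR cameras.
if (started)
{
    startTime += Time.deltaTime;
    thisBrightness = Mathf.Clamp01(startTime / 2f); // 2-second fade
    mainCameraCurrent = Color.Lerp(mainCameraBackground, mainCameraFaded, thisBrightness);
    leftCamera.backgroundColor = mainCameraCurrent;
    rightCamera.backgroundColor = mainCameraCurrent;
}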

NAudio: Using MixingSampleProvider correctly with VolumeSampleProvider

I have been using NAudio with the
"Fire and Forget Audio Playback with NAudio" tutorial (thank you Mark for this awesome utility!) as written here:
http://mark-dot-net.blogspot.nl/2014/02/fire-and-forget-audio-playback-with.html
I managed to add a VolumeSampleProvider to it, using the MixingSampleProvider as input. However, when I now play two sounds right after each other, the first sound always gets the volume of the second as well, even though the first is already playing.
So my question is: How do I add sounds with an individual volume per sound?
This is what I used:
mixer = new MixingSampleProvider(waveformat);
mixer.ReadFully = true;
volumeProvider = new VolumeSampleProvider(mixer);
panProvider = new PanningSampleProvider(volumeProvider);
outputDevice.Init(panProvider);
outputDevice.Play();
I realized (thanks to itsmatt) that the only way to make this work is to leave the mixer alone and adjust the panning and volume of each CachedSound individually before adding it to the mixer. Therefore I rewrote the CachedSoundSampleProvider, taking pan and volume as extra input parameters.
This is the new constructor:
public CachedSoundSampleProvider(CachedSound cachedSound, float volume = 1, float pan = 0)
{
    this.cachedSound = cachedSound;
    LeftVolume = volume * (0.5f - pan / 2);
    RightVolume = volume * (0.5f + pan / 2);
}
And this is the new Read() function:
public int Read(float[] buffer, int offset, int count)
{
    long availableSamples = cachedSound.AudioData.Length - position;
    long samplesToCopy = Math.Min(availableSamples, count);
    int destOffset = offset;
    for (int sourceSample = 0; sourceSample < samplesToCopy; sourceSample += 2)
    {
        float outL = cachedSound.AudioData[position + sourceSample + 0];
        float outR = cachedSound.AudioData[position + sourceSample + 1];
        buffer[destOffset + 0] = outL * LeftVolume;
        buffer[destOffset + 1] = outR * RightVolume;
        destOffset += 2;
    }
    position += samplesToCopy;
    return (int)samplesToCopy;
}
I'm not 100% certain of what you are asking and I don't know if you solved this already but here's my take on this.
ISampleProvider objects play the "pass the buck" game to their source ISampleProvider via the Read() method. Eventually, someone does some actual reading of audio bytes. Individual ISampleProvider classes do whatever they do to the bytes.
MixingSampleProvider, for instance, takes N audio sources... those get mixed. When Read() is called, it iterates the audio sources and reads count bytes from each.
Passing it to a VolumeSampleProvider handles all the bytes (from those various sources) as a group... it says:
buffer[offset+n] *= volume;
That's going to adjust the bytes across the board, so every byte in the buffer gets adjusted by the volume multiplier.
The PanningSampleProvider just provides a multiplier to the stereo audio and adjusts the bytes accordingly, doing the same sort of thing as the VolumeSampleProvider.
If you want to individually handle audio source volumes, you need to handle that upstream of the MixingSampleProvider. Essentially, the things that you pass to the MixingSampleProvider need to be able to have their volume adjusted independently.
If you passed a bunch of SampleChannel objects to your MixingSampleProvider, you could accomplish independent volume adjustment. The SampleChannel class incorporates a VolumeSampleProvider object and provides a Volume property that allows one to set the volume on that VolumeSampleProvider object.
SampleChannel also incorporates a MeteringSampleProvider that provides reporting of the maximum sample value during a given period. It raises an event that gives you an array of those values, one per channel.
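A hedged sketch of that SampleChannel approach (the file name and output device variable are placeholders):

// Hedged sketch: wrap each source in a SampleChannel so its Volume can be
// set independently, then add the channels to the shared mixer.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2)) { ReadFully = true };
var reader = new AudioFileReader("ding.wav");   // placeholder file
var channel = new SampleChannel(reader, true);  // forceStereo = true
channel.Volume = 0.25f;                         // per-sound volume
mixer.AddMixerInput(channel);
outputDevice.Init(mixer);
outputDevice.Play();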
