How can I draw a waveform efficiently - C#

I am trying to display the waveform of an audio file. I would like the waveform to be drawn progressively as ffmpeg processes the file, as opposed to all at once after it's done. While I have achieved this effect, it's REALLY slow; like painfully slow. It starts out really fast, but the speed degrades to the point of taking minutes to draw a sample.
I feel there has to be a way to do this more efficiently, as there is a program I use that does it; I just don't know how. The other program can take in >10 hours of audio and progressively display the waveform with no speed degradation. I have set ffmpeg to process the file at 500 samples/sec, but the other program samples at 1000/sec and still runs faster than what I wrote. The other program's waveform display only takes about 120MB of RAM with a 10 hour file, where mine takes 1.5GB with a 10 minute file.
I'm fairly certain the slowness is caused by all the UI updates and the RAM usage is from all the rectangle objects being created. When I disable drawing the waveform, the async stream completes pretty fast; less than 1 min for a 10 hour file.
This is the only way I could think of to accomplish what I want. I would welcome any help to improve what I wrote or any suggestions for an altogether different way to accomplish it.
As a side note, this isn't all I want to display. I will eventually want to add a background grid to help judge time, and draggable line annotations to mark specific places in the waveform.
MainWindow.xaml
<ItemsControl x:Name="AudioDisplayItemsControl"
DockPanel.Dock="Top"
Height="100"
ItemsSource="{Binding Samples}">
<ItemsControl.Resources>
<DataTemplate DataType="{x:Type poco:Sample}">
<Rectangle Width="{Binding Width}"
Height="{Binding Height}"
Fill="ForestGreen"/>
</DataTemplate>
</ItemsControl.Resources>
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<Canvas Background="Black"
Width="500"/>
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
<ItemsControl.ItemContainerStyle>
<Style TargetType="ContentPresenter">
<Setter Property="Canvas.Top" Value="{Binding Top}"/>
<Setter Property="Canvas.Left" Value="{Binding Left}"/>
</Style>
</ItemsControl.ItemContainerStyle>
</ItemsControl>
MainWindow.xaml.cs
private string _audioFilePath;
public string AudioFilePath
{
get => _audioFilePath;
set
{
if (_audioFilePath != value)
{
_audioFilePath = value;
NotifyPropertyChanged();
}
}
}
private ObservableCollection<IShape> _samples;
public ObservableCollection<IShape> Samples
{
get => _samples;
set
{
if (_samples != value)
{
_samples = value;
NotifyPropertyChanged();
}
}
}
// Event handler that starts this whole process
private async void GetGetWaveform_Click(object sender, RoutedEventArgs e)
{
((Button)sender).IsEnabled = false;
await GetWaveformClickAsync();
((Button)sender).IsEnabled = true;
}
private async Task GetWaveformClickAsync()
{
Samples.Clear();
double left = 0;
double width = .01;
double top = 0;
double height = 0;
await foreach (var sample in FFmpeg.GetAudioWaveform(AudioFilePath).ConfigureAwait(false))
{
// Map {-32768, 32767} (pcm_s16le) to {-50, 50} (height of sample display)
// I don't think this mapping is correct, but that's not important right now
height = ((sample + 32768) * 100 / 65535) - 50;
// "0" pcm values are not drawn in order to save on UI updates,
// but draw position is still advanced
if (height==0)
{
left += width;
continue;
}
// Positive pcm values stretch "height" above the canvas center line
if (height > 0)
top = 50 - height;
// Negative pcm values stretch "height" below the centerline
else
{
top = 50;
height = -height;
}
Samples.Add(new Sample
{
Height = height,
Width = width,
Top = top,
Left = left,
ZIndex = 1
});
left += width;
}
}
Classes used to define a sample
public interface IShape
{
double Top { get; set; }
double Left { get; set; }
}
public abstract class Shape : IShape
{
public double Top { get; set; }
public double Left { get; set; }
public int ZIndex { get; set; }
}
public class Sample : Shape
{
public double Width { get; set; }
public double Height { get; set; }
}
FFMpeg.cs
public static class FFmpeg
{
public static async IAsyncEnumerable<short> GetAudioWaveform(string filename)
{
var args = GetFFmpegArgs(FFmpegTasks.GetWaveform, filename);
await foreach (var sample in RunFFmpegAsyncStream(args))
{
yield return sample;
}
}
/// <summary>
/// Streams raw results of running ffmpeg.exe with given arguments string
/// </summary>
/// <param name="args">CLI argument string used for ffmpeg.exe</param>
private static async IAsyncEnumerable<short> RunFFmpegAsyncStream(string args)
{
using (var process = new Process())
{
process.StartInfo.FileName = @"External\ffmpeg\bin\x64\ffmpeg.exe";
process.StartInfo.Arguments = args;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardError = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.CreateNoWindow = true;
process.Start();
process.BeginErrorReadLine();
var buffer = new byte[2];
while (true)
{
// Asynchronously read a pcm16_le value from ffmpeg.exe output
var r = await process.StandardOutput.BaseStream.ReadAsync(buffer, 0, 2);
if (r == 0)
break;
yield return BitConverter.ToInt16(buffer);
}
}
}
}
FFmpegTasks is just an enum.
GetFFmpegArgs uses a switch argument on FFmpegTasks to return the appropriate CLI arguments for ffmpeg.exe.
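For reference, here is a minimal sketch of what GetFFmpegArgs might look like for the GetWaveform case, assuming arguments along the lines of the FFMpegStartInfo method shown further down; the exact strings are an assumption, not the code from my project:
// Hypothetical sketch of GetFFmpegArgs; the real method is not shown above.
private static string GetFFmpegArgs(FFmpegTasks task, string filename) => task switch
{
    // Resample to mono pcm_s16le at 500 samples/sec and stream raw data to stdout.
    FFmpegTasks.GetWaveform => $@"-i ""{filename}"" -ac 1 -filter:a aresample=500 -map 0:a -c:a pcm_s16le -f data -",
    _ => throw new NotImplementedException(),
};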
I tried using the following class instead of the standard ObservableCollection because I was hoping that fewer UI updates would speed things up, but it actually made drawing the waveform slower.
RangeObservableCollection.cs
public class RangeObservableCollection<T> : ObservableCollection<T>
{
private bool _suppressNotification = false;
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
if (!_suppressNotification)
base.OnCollectionChanged(e);
}
public void AddRange(IEnumerable<T> list)
{
if (list == null)
throw new ArgumentNullException("list");
_suppressNotification = true;
foreach (T item in list)
{
Add(item);
}
_suppressNotification = false;
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
}

You could try drawing a Path by hand. I use this to draw histograms of images in an application:
/// <summary>
/// Converts a histogram to a <see cref="PathGeometry"/>.
/// This is used by converters to draw a path.
/// It is easiest to use the default Canvas width and height and then use a ViewBox.
/// </summary>
/// <param name="histogram">The counts of each value.</param>
/// <param name="CanvasWidth">Width of the canvas</param>
/// <param name="CanvasHeight">Height of the canvas.</param>=
/// <returns>A path geometry. This value can be bound to a <see cref="Path"/>'s Data.</returns>
public static PathGeometry HistogramToPathGeometry(int[] histogram, double CanvasWidth = 100.0, double CanvasHeight = 100.0)
{
double xscale = CanvasWidth / histogram.Length;
double histMax = histogram.Max();
double yscale = CanvasHeight / histMax;
List<LineSegment> segments = new List<LineSegment>();
for (int i = 0; i < histogram.Length; i++)
{
double X = i * xscale;
double Y1 = histogram[i] * yscale;
double Y = CanvasHeight - Y1;
if (Y == double.PositiveInfinity) Y = CanvasHeight;
segments.Add(new LineSegment(new Point(X, Y), true));
}
segments.Add(new LineSegment(new Point(CanvasWidth, CanvasHeight), true));
PathGeometry geometry = new PathGeometry();
PathFigure figure = new PathFigure(new Point(0, CanvasHeight), segments, true);
geometry.Figures = new PathFigureCollection
{
figure
};
return geometry;
}
Then have this in your Xaml:
<Canvas Width="100" Height="100">
<Path Data="{Binding ConvertedPathGeometry}" />
</Canvas>
You could change this up so it handles the data as it is coming in. I'm not sure how well it would work with a lot of points, but you could update the view only after a handful of new points have come in. I've dealt with trying to draw many rectangles in a display and have had the same issue you are running into.
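To build on that idea, here is a rough, untested sketch of batching the incoming samples and regenerating the Path data only once per batch. It assumes samples arrive through an IAsyncEnumerable<short> like the one in the question, that the Path element is exposed from XAML as waveformPath, and it reuses the question's 0.01 step and 0..100 mapping (requires System.Linq, System.Windows.Media and System.Windows.Shapes):
// Sketch: rebuild the Path geometry once per batch instead of once per sample.
private const int BatchSize = 500;
private readonly List<Point> _points = new List<Point>();

private async Task DrawProgressivelyAsync(IAsyncEnumerable<short> samples, Path waveformPath)
{
    double x = 0;
    await foreach (var sample in samples)
    {
        // Map pcm_s16le to a 0..100 pixel range, centered on 50.
        double y = 50 - sample * 50.0 / short.MaxValue;
        _points.Add(new Point(x, y));
        x += 0.01;

        // One UI update per batch keeps the dispatcher from being flooded.
        if (_points.Count % BatchSize == 0)
            waveformPath.Data = BuildGeometry(_points);
    }
    waveformPath.Data = BuildGeometry(_points);
}

private static PathGeometry BuildGeometry(IReadOnlyList<Point> points)
{
    var segments = points.Skip(1).Select(p => new LineSegment(p, true)).ToList();
    var figure = new PathFigure(points.Count > 0 ? points[0] : new Point(), segments, false);
    var geometry = new PathGeometry(new[] { figure });
    geometry.Freeze(); // frozen geometry is cheaper for WPF to render
    return geometry;
}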

As promised, here is all the code that is involved with drawing my waveform. I probably added a little more than necessary, but there it is. I was hoping to draw the waveform as it was processed, but gave up on that idea... for now. I hope this helps someone out there, because it took me 2 weeks in all to get this worked out. Comments are welcome.
Just as an FYI, on my PC it takes about 35 seconds to process a ~10hr m4b file and <2ms to display the image.
Also, it assumes a little-endian system. I put no checks in for big-endian systems, as I am not on one. If you are, the Buffer.BlockCopy will need some attention.
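If you do need the big-endian case handled, one option (a sketch I have not tested, not part of my working code) is to decode the pcm_s16le bytes explicitly instead of relying on Buffer.BlockCopy matching the host layout; BinaryPrimitives always reads little-endian regardless of the machine (requires System.Buffers.Binary):
// Sketch: decode ffmpeg's little-endian samples without assuming host byte order.
// Intended as a drop-in replacement for the Buffer.BlockCopy call.
static int BytesToSamples(byte[] bytes, int byteCount, short[] samples)
{
    int sampleCount = byteCount / 2;
    for (int i = 0; i < sampleCount; i++)
    {
        // pcm_s16le is always little-endian, so read it explicitly as such.
        samples[i] = BinaryPrimitives.ReadInt16LittleEndian(bytes.AsSpan(i * 2, 2));
    }
    return sampleCount;
}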
Mainwindow.xaml
<Border Background="Black"
Height="100"
Width="720"
BorderThickness="0">
<Image x:Name="WaveformImg" Source="{Binding Waveform, Mode=OneWay}"
Height="100"
Width="{Binding ImageWidth, Mode=OneWayToSource}"
Stretch="Fill" />
</Border>
<Button Content="Get Waveform"
Padding="5 0"
Margin="5 0"
Click="GetGetWaveform_Click" />
Mainwindow.xaml.cs
private async void GetGetWaveform_Click (object sender, RoutedEventArgs e)
{
((Button)sender).IsEnabled = false;
await ProcessAudiofile();
await GetWaveformClickAsync(0, AudioData!.TotalSamples - 1);
((Button)sender).IsEnabled = true;
}
private async Task ProcessAudiofile ()
{
var durationSeconds = FFmpeg.GetAudioDurationSeconds(AudioFilePath);
AudioData = new AudioData(await durationSeconds, FFmpeg.SampleRate, new Progress<ProgressStatus>(ReportProgress));
await AudioData.ProcessStream(FFmpeg.GetAudioWaveform(AudioFilePath)).ConfigureAwait(false);
}
private const int _IMAGE_WIDTH = 720;
private async Task GetWaveformClickAsync (int min, int max)
{
var sw = new Stopwatch();
sw.Start();
int color_ForestGreen = 0xFF << 24 | 0x22 << 16 | 0x8c << 8 | 0x22 << 0; // Opaque ForestGreen (0xFF228C22) in Bgra32 byte order
Waveform = new WriteableBitmap(_IMAGE_WIDTH, 100, 96, 96, PixelFormats.Bgra32, null);
int col = 0;
int currSample = 0;
int row;
var sampleTop = 0;
var sampleBottom = 0;
var sampleHeight = 0;
//I thought this would draw the waveform line by line but it blocks the UI and draws the whole waveform at once
foreach (var sample in AudioData.GetSamples(_IMAGE_WIDTH, min, max))
{
sampleBottom = 50 + (int)(sample.min * (double)50 / short.MinValue);
sampleTop = 50 - (int)(sample.max * (double)50 / short.MaxValue);
sampleHeight = sampleBottom - sampleTop;
try
{
Waveform.Lock();
DrawLine(col, sampleTop, sampleHeight, color_ForestGreen);
col++;
}
finally
{
Waveform.Unlock();
}
}
sw.Stop();
Debug.WriteLine($"Image Creation: {sw.Elapsed}");
}
private void DrawLine (int column, int top, int height, int color)
{
unsafe
{
IntPtr pBackBuffer = Waveform.BackBuffer;
for (int i = 0; i < height; i++)
{
pBackBuffer = Waveform.BackBuffer; // Backbuffer start address
pBackBuffer += (top + i) * Waveform.BackBufferStride; // Move to address of desired row
pBackBuffer += column * 4; // Move to address of desired column
*((int*)pBackBuffer) = color;
}
}
try
{
Waveform.AddDirtyRect(new Int32Rect(column, top, 1, height));
}
catch (Exception) { } // I know this isn't a good way to deal with exceptions, but it's what I did.
}
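As a follow-up to the comment in GetWaveformClickAsync about the loop blocking the UI: one way I would expect to get the progressive effect back (an untested sketch, reusing the same Waveform, DrawLine and AudioData members shown above) is to yield to the dispatcher every few columns so the window can repaint while drawing continues:
// Sketch: same drawing loop, but hands control back to the dispatcher periodically
// so the partially drawn waveform becomes visible while processing continues.
private async Task DrawWaveformProgressivelyAsync(int min, int max)
{
    int col = 0;
    foreach (var sample in AudioData.GetSamples(_IMAGE_WIDTH, min, max))
    {
        int bottom = 50 + (int)(sample.min * 50.0 / short.MinValue);
        int top = 50 - (int)(sample.max * 50.0 / short.MaxValue);

        Waveform.Lock();
        try
        {
            DrawLine(col, top, bottom - top, unchecked((int)0xFF228C22)); // opaque ForestGreen
        }
        finally
        {
            Waveform.Unlock();
        }

        // Every 32 columns, let pending input/render work run before continuing.
        if (++col % 32 == 0)
            await System.Windows.Threading.Dispatcher.Yield(System.Windows.Threading.DispatcherPriority.Background);
    }
}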
AudioData.cs
public class AudioData
{
private List<short> _amp; // pcm_s16le amplitude values
private int _expectedTotalSamples; // Number of samples expected to be returned by FFMpeg
private int _totalSamplesRead; // Current total number of samples read from the file
private IProgress<ProgressStatus>? _progressIndicator; // Communicates progress to the progress bar
private ProgressStatus _progressStatus; // Progress status to be passed to progress bar
/// <summary>
/// Total number of samples obtained from the audio file
/// </summary>
public int TotalSamples
{
get => _amp.Count;
}
/// <summary>
/// Length of audio file in seconds
/// </summary>
public double Duration
{
get => _duration;
private set
{
_duration = value < 0 ? 0 : value;
}
}
private double _duration;
/// <summary>
/// Number of data points per second of audio
/// </summary>
public int SampleRate
{
get => _sampleRate;
private set
{
_sampleRate = value < 0 ? 0 : value;
}
}
private int _sampleRate;
/// <summary>Holds the audio samples read from FFmpeg and hands out min/max subsets for display.</summary>
/// <param name = "duration" > Length of the audio file in seconds.</param>
/// <param name = "sampleRate" > Number of times per second to sample the audio file</param>
/// <param name = "progressIndicator" >Used to report progress back to a progress bar</param>
public AudioData (double duration, int sampleRate, IProgress<ProgressStatus>? progressIndicator = null)
{
Duration = duration;
SampleRate = sampleRate;
_progressIndicator = progressIndicator;
_expectedTotalSamples = (int)Math.Ceiling(Duration * SampleRate);
_amp = new List<short>();
_progressStatus = new ProgressStatus();
}
/// <summary>
/// Get values from an async pcm_s16le stream from FFMpeg
/// </summary>
/// <param name = "sampleStream" >FFMpeg samples stream</param>
public async Task ProcessStream (IAsyncEnumerable<(int read, short[] samples)> sampleStream)
{
_totalSamplesRead = 0;
_progressStatus = new ProgressStatus
{
Label = "Started",
};
await foreach ((int read, short[] samples) in sampleStream)
{
_totalSamplesRead += read;
_amp.AddRange(samples[..read]); // Only add the number of samples that were read this iteration
UpdateProgress();
}
Duration = (double)_amp.Count / SampleRate; // Update duration to the correct value, in case the duration reported by FFmpeg was wrong
UpdateProgress(true);
}
/// <summary>
/// Report progress back to the UI
/// </summary>
/// <param name="finished">Is FFmpeg done processing the file</param>
private void UpdateProgress (bool finished = false)
{
int percent = (int)(100 * (double)_totalSamplesRead / _expectedTotalSamples);
// Calculate progress update interval; once every 1%
if (percent == _progressStatus.Value)
return;
// update progress status bar object
if (finished)
{
_progressStatus.Label = "Done";
_progressStatus.Value = 100;
}
else
{
_progressStatus.Label = $"Running ({_totalSamplesRead} / {_expectedTotalSamples})";
_progressStatus.Value = percent;
}
_progressIndicator?.Report(_progressStatus);
}
/// <summary>
/// Get evenly spaced sample subsets of the entire audio file samples
/// </summary>
/// <param name="count">Number of samples to be returned</param>
/// <returns>An IEnumerable tuple containing the minimum and maximum amplitudes of a range of samples</returns>
public IEnumerable<(short min, short max)> GetSamples (int count)
{
foreach (var sample in GetSamples(count, 0, _amp.Count - 1))
yield return sample;
}
/// <summary>
/// Get evenly spaced sample subsets of a section of the audio file samples
/// </summary>
/// <param name="count">number of data points to return</param>
/// <param name="min">inclusive starting index</param>
/// <param name="max">inclusive ending index</param>
/// <returns>An IEnumerable tuple containing the minimum and maximum amplitudes of a range of samples</returns>
public IEnumerable<(short min, short max)> GetSamples (int count, int min, int max)
{
// Protect from out of range exception
max = max >= _amp.Count ? _amp.Count - 1 : max;
max = max < 1 ? 1 : max;
min = min >= _amp.Count - 1 ? _amp.Count - 2 : min;
min = min < 0 ? 0 : min;
double sampleSize = (max - min) / (double)count; // Number of samples to inspect for return value
short rMin; // Minimum return value
short rMax; // Maximum return value
int ssOffset;
int ssLength;
for (int n = 0; n < count; n++)
{
// Calculate offset; no account for min
ssOffset = (int)(n * sampleSize);
// Determine how many samples to get, with a minimum of 1
ssLength = sampleSize <= 1 ? 1 : (int)((n + 1) * sampleSize) - ssOffset;
//shift offset to account for min
ssOffset += min;
// Double check that ssLength wont take us out of bounds
ssLength = ssOffset + ssLength >= _amp.Count ? _amp.Count - ssOffset : ssLength;
// Get the minimum and maximum amplitudes in this sample range
rMin = _amp.GetRange(ssOffset, ssLength).Min();
rMax = _amp.GetRange(ssOffset, ssLength).Max();
// In case this sample range has no (-) values, make the lowest one zero. This makes the rendered waveform look better.
rMin = rMin > 0 ? (short)0 : rMin;
rMax = rMax < 0 ? (short)0 : rMax;
yield return (rMin, rMax);
}
}
}
public class ProgressStatus
{
public int Value { get; set; }
public string Label { get; set; }
}
FFMpeg.cs
public static class FFmpeg
{
private const int _BUFER_READ_SIZE = 500; // Number of bytes to read from process.StandardOutput.BaseStream. Must be a multiple of 2
public const int SampleRate = 1000;
public static async IAsyncEnumerable<(int, short[])> GetAudioWaveform (string filename)
{
using (var process = new Process())
{
process.StartInfo = FFMpegStartInfo(FFmpegTasks.GetWaveform, filename);
process.Start();
process.BeginErrorReadLine();
await Task.Delay(1000); // Give process.StandardOutput a chance to build up values in BaseStream
var bBuffer = new byte[_BUFER_READ_SIZE]; // BaseStream.ReadAsync buffer
var sBuffer = new short[_BUFER_READ_SIZE / 2]; // Return value buffer; bBuffer.length
int read = 1; // ReadAsync returns 0 when BaseStream is empty
while (true)
{
read = await process.StandardOutput.BaseStream.ReadAsync(bBuffer, 0, _BUFER_READ_SIZE);
if (read == 0)
break;
Buffer.BlockCopy(bBuffer, 0, sBuffer, 0, read);
yield return (read / 2, sBuffer);
}
}
}
private static ProcessStartInfo FFMpegStartInfo (FFmpegTasks task, string inputFile1, string inputFile2 = "", string outputFile = "", bool overwriteOutput = true)
{
if (string.IsNullOrWhiteSpace(inputFile1))
throw new ArgumentException("Path to input file is null or empty", nameof(inputFile1));
if (!File.Exists(inputFile1))
throw new FileNotFoundException($"No file found at: {inputFile1}", inputFile1);
var args = task switch
{
// TODO: Set appropriate sample rate
// TODO: remove -t xx
FFmpegTasks.GetWaveform => $@" -i ""{inputFile1}"" -ac 1 -filter:a aresample={SampleRate} -map 0:a -c:a pcm_s16le -f data -",
FFmpegTasks.DetectSilence => throw new NotImplementedException(),
_ => throw new NotImplementedException(),
};
return new ProcessStartInfo()
{
FileName = @"External\ffmpeg\bin\x64\ffmpeg.exe",
Arguments = args,
UseShellExecute = false,
RedirectStandardError = true,
RedirectStandardOutput = true,
CreateNoWindow = true
};
public enum FFmpegTasks
{
GetWaveform
}
}

Related

Is there a way to extract frames from a video file into memory using ffmpeg and do some manipulation on each frame?

The goal is to extract one frame at a time from the video file, compute a histogram from that image, and then move on to the next frame, and so on for all the frames.
The frame extraction and the histogram manipulation work fine when the frames are saved as images on the hard disk, but now I want to do it all in memory.
To extract the frames I'm using ffmpeg because I think it's fast enough:
ffmpeg -r 1 -i MyVid.mp4 -r 1 "$filename%03d.png"
For now I'm running ffmpeg in a command prompt window.
With this command it will save over 65,000 images (frames) to the hard disk.
Instead of saving them to the hard disk, I wonder if I can do the histogram manipulation on each frame in memory rather than writing all 65,000 frames out.
Then I want to find specific images using the histogram and save only those frames to the hard disk.
The histogram part also currently uses files from the hard disk and not from memory:
private void btnLoadHistogram_Click(object sender, System.EventArgs e)
{
string[] files = Directory.GetFiles(@"d:\screenshots\", "*.jpg");
for (int i = 0; i < files.Length; i++)
{
sbInfo.Text = "Loading image";
if (pbImage.Image != null)
pbImage.Image.Dispose();
pbImage.Image = Image.FromFile(files[i]);//txtFileName.Text);
Application.DoEvents();
sbInfo.Text = "Computing histogram";
long[] myValues = GetHistogram(new Bitmap(pbImage.Image));
Histogram.DrawHistogram(myValues);
sbInfo.Text = "";
}
}
public long[] GetHistogram(System.Drawing.Bitmap picture)
{
long[] myHistogram = new long[256];
for (int i=0;i<picture.Size.Width;i++)
for (int j=0;j<picture.Size.Height;j++)
{
System.Drawing.Color c = picture.GetPixel(i,j);
long Temp=0;
Temp+=c.R;
Temp+=c.G;
Temp+=c.B;
Temp = (int) Temp/3;
myHistogram[Temp]++;
}
return myHistogram;
}
and the code of the HistogramaDesenat control class:
using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Windows.Forms;
namespace Histograma
{
/// <summary>
/// Summary description for HistogramaDesenat.
/// </summary>
public class HistogramaDesenat : System.Windows.Forms.UserControl
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.Container components = null;
public HistogramaDesenat()
{
// This call is required by the Windows.Forms Form Designer.
InitializeComponent();
// TODO: Add any initialization after the InitializeComponent call
this.Paint += new PaintEventHandler(HistogramaDesenat_Paint);
this.Resize+=new EventHandler(HistogramaDesenat_Resize);
}
/// <summary>
/// Clean up any resources being used.
/// </summary>
protected override void Dispose( bool disposing )
{
if( disposing )
{
if(components != null)
{
components.Dispose();
}
}
base.Dispose( disposing );
}
#region Component Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
//
// HistogramaDesenat
//
this.Font = new System.Drawing.Font("Tahoma", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));
this.Name = "HistogramaDesenat";
this.Size = new System.Drawing.Size(208, 176);
}
#endregion
private void HistogramaDesenat_Paint(object sender, PaintEventArgs e)
{
if (myIsDrawing)
{
Graphics g = e.Graphics;
Pen myPen = new Pen(new SolidBrush(myColor),myXUnit);
//The width of the pen is given by the XUnit for the control.
for (int i=0;i<myValues.Length;i++)
{
//We draw each line
g.DrawLine(myPen,
new PointF(myOffset + (i*myXUnit), this.Height - myOffset),
new PointF(myOffset + (i*myXUnit), this.Height - myOffset - myValues[i] * myYUnit));
//We plot the corresponding index for the maximum value.
if (myValues[i]==myMaxValue)
{
SizeF mySize = g.MeasureString(i.ToString(),myFont);
g.DrawString(i.ToString(),myFont,new SolidBrush(myColor),
new PointF(myOffset + (i*myXUnit) - (mySize.Width/2), this.Height - myFont.Height ),
System.Drawing.StringFormat.GenericDefault);
}
}
//We draw the indexes for 0 and for the length of the array being plotted
g.DrawString("0",myFont, new SolidBrush(myColor),new PointF(myOffset,this.Height - myFont.Height),System.Drawing.StringFormat.GenericDefault);
g.DrawString((myValues.Length-1).ToString(),myFont,
new SolidBrush(myColor),
new PointF(myOffset + (myValues.Length * myXUnit) - g.MeasureString((myValues.Length-1).ToString(),myFont).Width,
this.Height - myFont.Height),
System.Drawing.StringFormat.GenericDefault);
//We draw a rectangle surrounding the control.
g.DrawRectangle(new System.Drawing.Pen(new SolidBrush(Color.Black),1),0,0,this.Width-1,this.Height-1);
}
}
long myMaxValue;
private long[] myValues;
private bool myIsDrawing;
private float myYUnit; //this gives the vertical unit used to scale our values
private float myXUnit; //this gives the horizontal unit used to scale our values
private int myOffset = 20; //the offset, in pixels, from the control margins.
private Color myColor = Color.Black;
private Font myFont = new Font("Tahoma",10);
[Category("Histogram Options")]
[Description ("The distance from the margins for the histogram")]
public int Offset
{
set
{
if (value>0)
myOffset= value;
}
get
{
return myOffset;
}
}
[Category("Histogram Options")]
[Description ("The color used within the control")]
public Color DisplayColor
{
set
{
myColor = value;
}
get
{
return myColor;
}
}
/// <summary>
/// We draw the histogram on the control
/// </summary>
/// <param name="myValues">The values beeing draw</param>
public void DrawHistogram(long[] Values)
{
myValues = new long[Values.Length];
Values.CopyTo(myValues,0);
myIsDrawing = true;
myMaxValue = getMaxim(myValues);
ComputeXYUnitValues();
this.Refresh();
}
/// <summary>
/// We get the highest value from the array
/// </summary>
/// <param name="Vals">The array of values in which we look</param>
/// <returns>The maximum value</returns>
private long getMaxim(long[] Vals)
{
if (myIsDrawing)
{
long max = 0;
for (int i=0;i<Vals.Length;i++)
{
if (Vals[i] > max)
max = Vals[i];
}
return max;
}
return 1;
}
private void HistogramaDesenat_Resize(object sender, EventArgs e)
{
if (myIsDrawing)
{
ComputeXYUnitValues();
}
this.Refresh();
}
private void ComputeXYUnitValues()
{
myYUnit = (float) (this.Height - (2 * myOffset)) / myMaxValue;
myXUnit = (float) (this.Width - (2 * myOffset)) / (myValues.Length-1);
}
}
}
So in the end this is what I want to do:
1. Extract the frames from the video file into memory using ffmpeg.
2. Instead of using Directory.GetFiles, do the histogram manipulation on each frame in memory as it is extracted by ffmpeg.
3. For each extracted frame, use the histogram to find whether there is lightning (weather lightning) in the image.
4. If there is lightning, save that frame image to the hard disk.
For ffmpeg, try FFmpeg.AutoGen.
But you will need to learn the ffmpeg API (demuxer and decoder) to get raw frames.
For OpenCV, try Emgu CV (recommended).
You can search around for examples of this approach.
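As an alternative to the FFmpeg.AutoGen route, here is a rough sketch that reuses the redirect-stdout idea from the waveform question above: pipe raw frames out of ffmpeg and histogram each one entirely in memory, only saving the interesting ones. The frame size, the gray pixel format and the ffmpeg invocation are assumptions; it uses System.Diagnostics.Process like the waveform code.
// Sketch: read raw gray8 frames from ffmpeg's stdout and histogram each one in memory.
// Width, Height and the ffmpeg arguments are assumptions; adjust to the actual video.
const int Width = 1280, Height = 720;
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-i MyVid.mp4 -r 1 -f rawvideo -pix_fmt gray -",
    RedirectStandardOutput = true,
    UseShellExecute = false,
    CreateNoWindow = true
};
using var process = Process.Start(psi);
var frame = new byte[Width * Height];
var stream = process.StandardOutput.BaseStream;
while (true)
{
    // Fill one complete frame from the pipe (Read may return fewer bytes than requested).
    int filled = 0;
    while (filled < frame.Length)
    {
        int read = stream.Read(frame, filled, frame.Length - filled);
        if (read == 0) break;
        filled += read;
    }
    if (filled < frame.Length) break; // end of stream

    // Histogram of the 256 gray levels, computed without touching the disk.
    var histogram = new long[256];
    foreach (byte b in frame) histogram[b]++;

    // Inspect the histogram here (e.g. an unusual number of very bright pixels could
    // indicate lightning) and only then write that particular frame out to disk.
}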

How to make image zoom in & out with mouse wheel in Blazor?

I want to zoom an image in and out in Blazor on ASP.NET.
Like Google Maps, I want to be able to zoom with the mouse wheel and move the image position by dragging it (but using an image file, not a Google map).
Is there a way to zoom in, zoom out, and drag specific images in Blazor?
Note:
I would only use this on BlazorWasm and not BlazorServer because there might be quite a bit of lag if the network is slow.
It is probably easier to just use JavaScript and/or JavaScript interoperability (JS interop) but for this example I decided not to use JS Interop.
This component enables you to zoom by pressing Shift while using mouse wheel up (zoom out) or mouse wheel down (zoom in), and to move the image by holding mouse button 1 down while moving (it's more panning than dragging).
Restriction in Blazor: (at the time of writing this)
The biggest issue at the moment is not having access to the mouse OffsetX and OffsetY within the html element as described here and also here, so the moving of the image has to be done using CSS only.
The reason I used Shift for scrolling is because scrolling is not being blocked or disabled as described here, even with @onscroll:stopPropagation, @onwheel:stopPropagation, @onmousewheel:stopPropagation and/or @onscroll:preventDefault, @onwheel:preventDefault, @onmousewheel:preventDefault set on the parent mainImageContainer element. The screen will still scroll left and right if the content is wider than the viewable page.
Solution:
The zooming part is pretty straightforward: all you need to do is set the transform:scale(n); property in the @onmousewheel event.
The moving of the image is a bit more complex because there is no reference to where the mouse pointer is in relation to the image or element boundaries. (OffsetX and OffsetY)
The only thing we can determine is if a mouse button is pressed and then calculate what direction the mouse is moving in left, right, up or down.
The idea is then to move the position of element with the image by setting the top and left CSS values as percentages.
This component code:
@using System.Text;
<div id="mainImageContainer" style="display: block;width:@($"{ImageWidthInPx}px");height:@($"{ImageHeightInPx}px");overflow: hidden;">
<div id="imageMover"
@onmousewheel="MouseWheelZooming"
style="@MoveImageStyle">
<div id="imageContainer"
@onmousemove="MouseMoving"
style="@ZoomImageStyle">
@*this div is used just for moving around when zoomed*@
</div>
</div>
</div>
@if (ShowResetButton)
{
<div style="display:block">
<button @onclick="ResetImgage">Reset</button>
</div>
}
@code {
/// <summary>
/// The path or url of the image
/// </summary>
[Parameter]
public string ImageUrlPath { get; set; }
/// <summary>
/// The width of the image
/// </summary>
[Parameter]
public int ImageWidthInPx { get; set; }
/// <summary>
/// The height of the image
/// </summary>
[Parameter]
public int ImageHeightInPx { get; set; }
/// <summary>
/// Set to true to show the reset button
/// </summary>
[Parameter]
public bool ShowResetButton { get; set; }
/// <summary>
/// Set the amount the image is scaled by, default is 0.1f
/// </summary>
[Parameter]
public double DefaultScaleBy { get; set; } = 0.1f;
/// <summary>
/// The Maximum the image can scale to, default = 5f
/// </summary>
[Parameter]
public double ScaleToMaximum { get; set; } = 5f;
/// <summary>
/// Set the speed at which the image is moved by, default 2.
/// 2 or 3 seems to work best.
/// </summary>
[Parameter]
public double DefaultMoveBy { get; set; } = 2;
//defaults
double _CurrentScale = 1.0f;
double _PositionLeft = 0;
double _PositionTop = 0;
double _OldClientX = 0;
double _OldClientY = 0;
double _DefaultMinPosition = 0;//to the top and left
double _DefaultMaxPosition = 0;//to the right and down
//the default settings used to display the image in the child div
private Dictionary<string, string> _ImageContainerStyles;
Dictionary<string, string> ImageContainerStyles
{
get
{
if (_ImageContainerStyles == null)
{
_ImageContainerStyles = new Dictionary<string, string>();
_ImageContainerStyles.Add("width", "100%");
_ImageContainerStyles.Add("height", "100%");
_ImageContainerStyles.Add("position", "relative");
_ImageContainerStyles.Add("background-size", "contain");
_ImageContainerStyles.Add("background-repeat", "no-repeat");
_ImageContainerStyles.Add("background-position", "50% 50%");
_ImageContainerStyles.Add("background-image", $"URL({ImageUrlPath})");
}
return _ImageContainerStyles;
}
}
private Dictionary<string, string> _MovingContainerStyles;
Dictionary<string, string> MovingContainerStyles
{
get
{
if (_MovingContainerStyles == null)
{
InvokeAsync(ResetImgage);
}
return _MovingContainerStyles;
}
}
protected async Task ResetImgage()
{
_PositionLeft = 0;
_PositionTop = 0;
_DefaultMinPosition = 0;
_DefaultMaxPosition = 0;
_CurrentScale = 1.0f;
_MovingContainerStyles = new Dictionary<string, string>();
_MovingContainerStyles.Add("width", "100%");
_MovingContainerStyles.Add("height", "100%");
_MovingContainerStyles.Add("position", "relative");
_MovingContainerStyles.Add("left", $"{_PositionLeft}%");
_MovingContainerStyles.TryAdd("top", $"{_PositionTop}%");
await InvokeAsync(StateHasChanged);
}
string ZoomImageStyle { get => DictionaryToCss(ImageContainerStyles); }
string MoveImageStyle { get => DictionaryToCss(MovingContainerStyles); }
private string DictionaryToCss(Dictionary<string, string> styleDictionary)
{
StringBuilder sb = new StringBuilder();
foreach (var kvp in styleDictionary.AsEnumerable())
{
sb.AppendFormat("{0}:{1};", kvp.Key, kvp.Value);
}
return sb.ToString();
}
protected async void MouseMoving(MouseEventArgs e)
{
//if the mouse button 1 is not down exit the function
if (e.Buttons != 1)
{
_OldClientX = e.ClientX;
_OldClientY = e.ClientY;
return;
}
//get the % of the current scale to move by at least the default move speed plus any scaled changes
//basically the bigger the image the faster it moves..
double scaleFrac = (_CurrentScale / ScaleToMaximum);
double scaleMove = (DefaultMoveBy * (DefaultMoveBy * scaleFrac));
//moving mouse right
if (_OldClientX < e.ClientX)
{
if ((_PositionLeft - DefaultMoveBy) <= _DefaultMaxPosition)
{
_PositionLeft += scaleMove;
}
}
//moving mouse left
if (_OldClientX > e.ClientX)
{
//if (_DefaultMinPosition < (_PositionLeft - DefaultMoveBy))
if ((_PositionLeft + DefaultMoveBy) >= _DefaultMinPosition)
{
_PositionLeft -= scaleMove;
}
}
//moving mouse down
if (_OldClientY < e.ClientY)
{
//if ((_PositionTop + DefaultMoveBy) <= _DefaultMaxPosition)
if ((_PositionTop - DefaultMoveBy) <= _DefaultMaxPosition)
{
_PositionTop += scaleMove;
}
}
//moving mouse up
if (_OldClientY > e.ClientY)
{
//if ((_PositionTop - DefaultMoveBy) > _DefaultMinPosition)
if ((_PositionTop + DefaultMoveBy) >= _DefaultMinPosition)
{
_PositionTop -= scaleMove;
}
}
_OldClientX = e.ClientX;
_OldClientY = e.ClientY;
await UpdateScaleAndPosition();
}
async Task<double> IncreaseScale()
{
return await Task.Run(() =>
{
//increase the scale first then calculate the max and min positions
_CurrentScale += DefaultScaleBy;
double scaleFrac = (_CurrentScale / ScaleToMaximum);
double scaleDiff = (DefaultMoveBy + (DefaultMoveBy * scaleFrac));
double scaleChange = DefaultMoveBy + scaleDiff;
_DefaultMaxPosition += scaleChange;
_DefaultMinPosition -= scaleChange;
return _CurrentScale;
});
}
async Task<double> DecreaseScale()
{
return await Task.Run(() =>
{
_CurrentScale -= DefaultScaleBy;
double scaleFrac = (_CurrentScale / ScaleToMaximum);
double scaleDiff = (DefaultMoveBy + (DefaultMoveBy * scaleFrac));
double scaleChange = DefaultMoveBy + scaleDiff;
_DefaultMaxPosition -= scaleChange;
_DefaultMinPosition += scaleChange;//DefaultMoveBy;
//fix descaling, move the image back into view when descaling (zoomin out)
if (_CurrentScale <= 1)
{
_PositionLeft = 0;
_PositionTop = 0;
}
else
{
//left can not be more than max position
_PositionLeft = (_DefaultMaxPosition < _PositionLeft) ? _DefaultMaxPosition : _PositionLeft;
//top can not be more than max position
_PositionTop = (_DefaultMaxPosition < _PositionTop) ? _DefaultMaxPosition : _PositionTop;
//left can not be less than min position
_PositionLeft = (_DefaultMinPosition > _PositionLeft) ? _DefaultMinPosition : _PositionLeft;
//top can not be less than min position
_PositionTop = (_DefaultMinPosition > _PositionTop) ? _DefaultMinPosition : _PositionTop;
}
return _CurrentScale;
});
}
protected async void MouseWheelZooming(WheelEventArgs e)
{
//holding shift stops the page from scrolling
if (e.ShiftKey == true)
{
if (e.DeltaY > 0)
{
_CurrentScale = ((_CurrentScale + DefaultScaleBy) >= 5) ? _CurrentScale = 5f : await IncreaseScale();
}
if (e.DeltaY < 0)
{
_CurrentScale = ((_CurrentScale - DefaultScaleBy) <= 0) ? _CurrentScale = DefaultScaleBy : await DecreaseScale();
}
await UpdateScaleAndPosition();
}
}
/// <summary>
/// Refresh the values in the moving style dictionary that is used to position the image.
/// </summary>
async Task UpdateScaleAndPosition()
{
await Task.Run(() =>
{
if (!MovingContainerStyles.TryAdd("transform", $"scale({_CurrentScale})"))
{
MovingContainerStyles["transform"] = $"scale({_CurrentScale})";
}
if (!MovingContainerStyles.TryAdd("left", $"{_PositionLeft}%"))
{
MovingContainerStyles["left"] = $"{_PositionLeft}%";
}
if (!MovingContainerStyles.TryAdd("top", $"{_PositionTop}%"))
{
MovingContainerStyles["top"] = $"{_PositionTop}%";
}
});
}
}
This is the usage:
#page "/"
#using BlazorWasmApp.Components
Welcome to your new app.
<ZoomableImageComponent ImageUrlPath="images/Capricorn.png"
ImageWidthInPx=400
ImageHeightInPx=300
ShowResetButton=true
DefaultScaleBy=0.1f />
and this is the result:
I only tested this in chrome on a desktop computer without touch input.

NAudio: Keeping the last e.g. 5s of recorded audio and saving it at any time

I'm coding a soundboard using NAudio as the library to manage my audio. One of the features I want to implement is the ability to continuously record one (or many) audio inputs and be able to save them at any time.
The way I see to do this is by keeping a circular buffer of the last e.g. 5 seconds of samples picked up by the audio input.
I also want to avoid keeping all the data recorded since the start, as I don't want to overuse memory.
I've tried many approaches to this problem:
Using a circular buffer and feed it with data from "DataAvailable" event;
Using "Queue<ISampleProvider>" and "BufferedWaveProvider" to add the buffer data and transform it into a sample.
I tried using 2 "BufferedWaveProvider" and alternating which was getting filled depending on which was full.
I also tried to use 2 wave inputs and timers to alternate which was recording.
I tried using an array of bytes and use it as a circular buffer. I filled the buffer using the "DataAvailable" event from "WaveInEvent". The "WaveInEventArgs" has a buffer so I added the data from it to the circular buffer.
private int _start = 0, _end = 0;
private bool _filled = false;
private byte[] _buffer; // the size was set in the constructor
// its an equation to figure out how many samples
// a certain time needs.
private void _dataAvailable(object sender, WaveInEventArgs e)
{
for (int i = 0; i < e.BytesRecorded; i++)
{
if (_filled)
{
_start = _end + 1 > _buffer.Length - 1 ? _end + 1 : 0;
}
if (_end > _buffer.Length - 1 && !_filled) _filled = true;
_end = _end > _buffer.Length - 1 ? _end + 1 : 0;
_buffer[_end] = e.Buffer[i];
}
}
Some of the attempts I made kind of worked but, most of the time, they would only work for the first 5 seconds (I am aware that using a "BufferedWaveProvider" can cause that issue). I think that part of the problem is that a certain amount of data is required at the beginning of the buffer, and as soon as the buffer starts overwriting that data, the audio player doesn't understand it anymore.
Another very possible cause of the problem is that I'm just starting to use NAudio and don't quite understand it fully yet.
I've been stuck with this issue for a while now and I appreciate all the help anyone can give me.
I have some more code that I could add, but I thought this question was already getting long.
Thank you in advance!
If anyone else wants to do something similar, I'm leaving the whole class. Use it as you want.
using System;
using NAudio.Wave;
using System.Diagnostics;
public class AudioRecorder
{
public WaveInEvent MyWaveIn;
public readonly double RecordTime;
private WaveOutEvent _wav = new WaveOutEvent();
private bool _isFull = false;
private int _pos = 0;
private byte[] _buffer;
private bool _isRecording = false;
/// <summary>
/// Creates a new recorder with a buffer
/// </summary>
/// <param name="recordTime">Time to keep in buffer (in seconds)</param>
public AudioRecorder(double recordTime)
{
RecordTime = recordTime;
MyWaveIn = new WaveInEvent();
MyWaveIn.DataAvailable += DataAvailable;
MyWaveIn.RecordingStopped += Stopped; // Wire up the restart handler defined below
_buffer = new byte[(int)(MyWaveIn.WaveFormat.AverageBytesPerSecond * RecordTime)];
}
/// <summary>
/// Starts recording
/// </summary>
public void StartRecording()
{
if (!_isRecording)
{
try
{
MyWaveIn.StartRecording();
}
catch (InvalidOperationException)
{
Debug.WriteLine("Already recording!");
}
}
_isRecording = true;
}
/// <summary>
/// Stops recording
/// </summary>
public void StopRecording()
{
MyWaveIn.StopRecording();
_isRecording = false;
}
/// <summary>
/// Play currently recorded data
/// </summary>
public void PlayRecorded()
{
if (_wav.PlaybackState == PlaybackState.Stopped)
{
var buff = new BufferedWaveProvider(MyWaveIn.WaveFormat);
var bytes = GetBytesToSave();
buff.AddSamples(bytes, 0, bytes.Length);
_wav.Init(buff);
_wav.Play();
}
}
/// <summary>
/// Stops replay
/// </summary>
public void StopReplay()
{
if (_wav != null) _wav.Stop();
}
/// <summary>
/// Save to disk
/// </summary>
/// <param name="fileName"></param>
public void Save(string fileName)
{
// Dispose the writer so the WAV header is finalized correctly
using (var writer = new WaveFileWriter(fileName, MyWaveIn.WaveFormat))
{
var buff = GetBytesToSave();
writer.Write(buff, 0, buff.Length);
writer.Flush();
}
}
private void DataAvailable(object sender, WaveInEventArgs e)
{
for (int i = 0; i < e.BytesRecorded; ++i)
{
// save the data
_buffer[_pos] = e.Buffer[i];
// move the current position (advances by 1 OR resets to zero if the length of the buffer was reached)
_pos = (_pos + 1) % _buffer.Length;
// flag if the buffer is full (will only set it from false to true the first time that it reaches the full length of the buffer)
_isFull |= (_pos == 0);
}
}
public byte[] GetBytesToSave()
{
int length = _isFull ? _buffer.Length : _pos;
var bytesToSave = new byte[length];
int byteCountToEnd = _isFull ? (_buffer.Length - _pos) : 0;
if (byteCountToEnd > 0)
{
// bytes from the current position to the end
Array.Copy(_buffer, _pos, bytesToSave, 0, byteCountToEnd);
}
if (_pos > 0)
{
// bytes from the start to the current position
Array.Copy(_buffer, 0, bytesToSave, byteCountToEnd, _pos);
}
return bytesToSave;
}
/// <summary>
/// Starts recording if WaveIn stopped
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void Stopped(object sender, StoppedEventArgs e)
{
Debug.WriteLine("Recording stopped!");
if (e.Exception != null) Debug.WriteLine(e.Exception.Message);
if (_isRecording)
{
MyWaveIn.StartRecording();
}
}
}
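For anyone reusing the class, a quick usage sketch (the timings and the output path are arbitrary, not part of the class itself):
// Usage sketch for the AudioRecorder class above.
var recorder = new AudioRecorder(5.0); // keep the last 5 seconds in the circular buffer
recorder.StartRecording();
// ... later, whenever something worth keeping just happened:
recorder.Save(@"C:\temp\last-five-seconds.wav"); // writes whatever is currently buffered
// Or listen to it first:
recorder.PlayRecorded();
recorder.StopReplay();
recorder.StopRecording();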
The code inside your _dataAvailable method is strange to me. I would simply write bytes from start to end and then again from the start and so forth. And then when you want to get the actual bytes to save them, create a new array that goes from the current position to the end and from the start to the current position. Check my code below.
private int _pos = 0;
private bool _isFull = false;
private byte[] _buffer; // intialized in the constructor with the correct length
private void _dataAvailable(object sender, WaveInEventArgs e)
{
for(int i = 0; i < e.BytesRecorded; ++i) {
// save the data
_buffer[_pos] = e.Buffer[i];
// move the current position (advances by 1 OR resets to zero if the length of the buffer was reached)
_pos = (_pos + 1) % _buffer.Length;
// flag if the buffer is full (will only set it from false to true the first time that it reaches the full length of the buffer)
_isFull |= (_pos == 0);
}
}
public byte[] GetBytesToSave()
{
int length = _isFull ? _buffer.Length : _pos;
var bytesToSave = new byte[length];
int byteCountToEnd = _isFull ? (_buffer.Length - _pos) : 0;
if(byteCountToEnd > 0) {
// bytes from the current position to the end
Array.Copy(_buffer, _pos, bytesToSave, 0, byteCountToEnd);
}
if(_pos > 0) {
// bytes from the start to the current position
Array.Copy(_buffer, 0, bytesToSave, byteCountToEnd, _pos);
}
return bytesToSave;
}

Tap to focus for camera implementation

I'm trying to implement a manual focus feature for my camera page so that the user can tap to focus the camera.
I'm following this StackOverflow question that's currently written in Java for native Android. I've been converting it to C# for my Xamarin.Forms Android app.
Here's what I have so far:
public class CameraPage : PageRenderer, TextureView.ISurfaceTextureListener, Android.Views.View.IOnTouchListener, IAutoFocusCallback
{
global::Android.Hardware.Camera camera;
TextureView textureView;
public void OnAutoFocus(bool success, Android.Hardware.Camera camera)
{
var parameters = camera.GetParameters();
if (parameters.FocusMode != Android.Hardware.Camera.Parameters.FocusModeContinuousPicture)
{
parameters.FocusMode = Android.Hardware.Camera.Parameters.FocusModeContinuousPicture;
if (parameters.MaxNumFocusAreas > 0)
{
parameters.FocusAreas = null;
}
camera.SetParameters(parameters);
camera.StartPreview();
}
}
public bool OnTouch(Android.Views.View v, MotionEvent e)
{
if (camera != null)
{
var parameters = camera.GetParameters();
camera.CancelAutoFocus();
Rect focusRect = CalculateTapArea(e.GetX(), e.GetY(), 1f);
if (parameters.FocusMode != Android.Hardware.Camera.Parameters.FocusModeAuto)
{
parameters.FocusMode = Android.Hardware.Camera.Parameters.FocusModeAuto;
}
if (parameters.MaxNumFocusAreas > 0)
{
List<Area> mylist = new List<Area>();
mylist.Add(new Android.Hardware.Camera.Area(focusRect, 1000));
parameters.FocusAreas = mylist;
}
try
{
camera.CancelAutoFocus();
camera.SetParameters(parameters);
camera.StartPreview();
camera.AutoFocus(OnAutoFocus); //Here is the issue. How do I use the callback?
}
catch (System.Exception ex)
{
Console.WriteLine(ex.ToString());
Console.Write(ex.StackTrace);
}
return true;
}
return false;
}
private Rect CalculateTapArea(object x, object y, float coefficient)
{
var focusAreaSize = 500;
int areaSize = Java.Lang.Float.ValueOf(focusAreaSize * coefficient).IntValue();
int left = clamp((int)x - areaSize / 2, 0, textureView.Width - areaSize);
int top = clamp((int)y - areaSize / 2, 0, textureView.Height - areaSize);
RectF rectF = new RectF(left, top, left + areaSize, top + areaSize);
Matrix.MapRect(rectF);
return new Rect((int)System.Math.Round(rectF.Left), (int)System.Math.Round(rectF.Top), (int)System.Math.Round(rectF.Right), (int)System.Math.Round(rectF.Bottom));
}
private int clamp(int x, int min, int max)
{
if (x > max)
{
return max;
}
if (x < min)
{
return min;
}
return x;
}
}
I've managed to convert most of it but I'm not sure how to properly use the AutoFocusCallback here. What should I do to call OnAutoFocus from my OnTouch event like in the java answer I linked above?
After I attach the callback, all I need to do is subscribe OnTouch to an event on my page, correct, or...?
For example, I tried:
textureView.Click += OnTouch; but 'no overload for 'OnTouch' matches delegate 'EventHandler'. Is there a specific event handler I need to use?
You can try changing
camera.AutoFocus(OnAutoFocus);
to
camera.AutoFocus(this);
and it will use OnAutoFocus because your class implements it from IAutoFocusCallback.
And for your question about subscribing to the event, you can try setting the touch listener in OnElementChanged like this:
protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Page> e)
{
base.OnElementChanged(e);
if (e.OldElement != null || Element == null)
{
return;
}
try
{
this.SetOnTouchListener(this);
}
catch (Exception e)
{
}
}
And by the way, I don't see TextureView.ISurfaceTextureListener being used anywhere in this code.
All that happened in the linked Java answer is that they provided the code to run when the OS calls the callback:
camera.autoFocus(new Camera.AutoFocusCallback() {
@Override
public void onAutoFocus(boolean success, Camera camera) {
camera.cancelAutoFocus();
Parameters params = camera.getParameters();
if(params.getFocusMode() != Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE){
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
camera.setParameters(params);
}
}
});
The above does not "call" the callback, it just provides the callback code to run; the OS calls the callback. So in Xamarin, you need to pass in a type that implements the IAutoFocusCallback interface. Since CameraPage implements IAutoFocusCallback, you should be able to do this:
camera.AutoFocus(this); // "this" refers to your current CameraPage which implements the interface.
The clue here is that when you type the opening parenthesis after camera.AutoFocus, the popup shows that you need to pass in an IAutoFocusCallback, which means any type that implements that interface; in this case that is "this" CameraPage. :-)
Since there's no complete example here, here's mine.
This solution works fine, at least for me. The camera will focus continuously until a focus point is tapped. It will then focus on the tap point until you move the camera away, and then it goes back to continuous focus mode.
public class CameraPageRenderer : PageRenderer, TextureView.ISurfaceTextureListener, Android.Hardware.Camera.IPictureCallback, Android.Hardware.Camera.IShutterCallback, IAutoFocusCallback
{
// ... code removed for brevity
/// <summary>
/// Occurs whenever the user touches the screen. Here we set the focus mode to FocusModeAuto and set a focus area based on the tapped coordinates.
/// </summary>
public override bool OnTouchEvent(MotionEvent e)
{
var parameters = camera.GetParameters();
parameters.FocusMode = Camera.Parameters.FocusModeAuto;
if (parameters.MaxNumFocusAreas > 0)
{
var focusRect = CalculateTapArea(e.GetX(), e.GetY(), textureView.Width, textureView.Height, 50f);
parameters.FocusAreas = new List<Area>()
{
new Area(focusRect, 1000)
};
}
try
{
camera.CancelAutoFocus();
camera.SetParameters(parameters);
camera.AutoFocus(this);
}
catch (Exception ex)
{
Debug.WriteLine(ex);
}
return true;
}
/// <summary>
/// Auto focus callback. Here we reset the focus mode to FocusModeContinuousPicture and remove any focus areas
/// </summary>
public void OnAutoFocus(bool success, Camera camera)
{
var parameters = camera.GetParameters();
parameters.FocusMode = Parameters.FocusModeContinuousPicture;
if (parameters.MaxNumFocusAreas > 0)
{
parameters.FocusAreas = null;
}
camera.SetParameters(parameters);
}
/// <summary>
/// Calculates a tap area using the focus coordinates mentioned in <see href="https://developer.android.com/reference/android/hardware/Camera.Parameters.html#getFocusAreas()"/>
/// <para>
/// Coordinates of the rectangle range from -1000 to 1000. (-1000, -1000) is the upper left point. (1000, 1000) is the lower right point. The width and height of focus areas cannot be 0 or negative.</para>
/// </summary>
/// <param name="x">The X coordinate of the tapped area</param>
/// <param name="y">The Y coordinate of the tapped area</param>
/// <param name="width">The total width of the tappable area</param>
/// <param name="height">The total height of the tappable area</param>
/// <param name="focusAreaSize">The desired size (widht, height) of the created rectangle</param>
/// <returns></returns>
private Rect CalculateTapArea(float x, float y, float width, float height, float focusAreaSize)
{
var leftFloat = x * 2000 / width - 1000;
var topFloat = y * 2000 / height - 1000;
var left = RoundFocusCoordinate(leftFloat);
var top = RoundFocusCoordinate(topFloat);
var right = RoundFocusCoordinate(leftFloat + focusAreaSize);
var bottom = RoundFocusCoordinate(topFloat + focusAreaSize);
return new Rect(left, top, right, bottom);
}
/// <summary>
/// Round, convert to int, and clamp between -1000 and 1000
/// </summary>
private int RoundFocusCoordinate(float value)
{
var intValue = (int)Math.Round(value, 0, MidpointRounding.AwayFromZero);
return Math.Clamp(intValue, -1000, 1000);
}
// ... code removed for brevity
}

Animate opacity over time in XNA

I would like to animate the opacity value of a text string containing the name of the level in and out with a delay in the middle.
So the sequence of events would be like:
Start transparent
Fade in to solid white over a second of game time
Wait a second
Fade out to transparent again over a second.
The code I have written to animate the alpha value isn't working. Plus, it's pretty ugly and I'm sure there's a better way to do it using the XNA framework.
I've been unable to find any advice elsewhere about doing this. Surely animating values like this isn't that uncommon. How can I do it?
Here's my current code as requested (yes it's horrible).
private int fadeStringDirection = +1;
private int fadeStringDuration = 1000;
private float stringAlpha = 0;
private int stringRef = 0;
private int stringPhase = 1;
...
if (!pause)
{
totalMillisecondsElapsed += gameTime.ElapsedGameTime.Milliseconds;
if (fadeStringDirection != 0)
{
stringAlpha = ((float)(totalMillisecondsElapsed - stringRef) / (float)(fadeStringDuration*stringPhase)) * fadeStringDirection;
stringAlpha = MathHelper.Clamp(stringAlpha, 0, 1);
if (topAlpha / 2 + 0.5 == fadeStringDirection)
{
fadeStringDirection = 0;
stringRef = totalMillisecondsElapsed;
stringPhase++;
}
}
else
{
stringRef += gameTime.ElapsedGameTime.Milliseconds;
if (stringRef >= fadeStringDuration * stringPhase)
{
stringPhase++;
fadeStringDirection = -1;
stringRef = totalMillisecondsElapsed;
}
}
}
Here's the solution I have now. Much nicer than what I had before (and in a class of its own).
/// <summary>
/// Animation helper class.
/// </summary>
public class Animation
{
List<Keyframe> keyframes = new List<Keyframe>();
int timeline;
int lastFrame = 0;
bool run = false;
int currentIndex;
/// <summary>
/// Construct new animation helper.
/// </summary>
public Animation()
{
}
public void AddKeyframe(int time, float value)
{
Keyframe k = new Keyframe();
k.time = time;
k.value = value;
keyframes.Add(k);
keyframes.Sort(delegate(Keyframe a, Keyframe b) { return a.time.CompareTo(b.time); });
lastFrame = (time > lastFrame) ? time : lastFrame;
}
public void Start()
{
timeline = 0;
currentIndex = 0;
run = true;
}
public void Update(GameTime gameTime, ref float value)
{
if (run)
{
timeline += gameTime.ElapsedGameTime.Milliseconds;
value = MathHelper.SmoothStep(keyframes[currentIndex].value, keyframes[currentIndex + 1].value, (float)timeline / (float)keyframes[currentIndex + 1].time);
if (timeline >= keyframes[currentIndex + 1].time && currentIndex != keyframes.Count) { currentIndex++; }
if (timeline >= lastFrame) { run = false; }
}
}
public struct Keyframe
{
public int time;
public float value;
}
}
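For reference, here is a quick usage sketch of the class above, wiring up the fade-in / hold / fade-out sequence described at the top of the question (times are in milliseconds; font, levelName, position and the stringAlpha field are placeholders from the question's setup):
// Usage sketch: fade in over 1 s, hold for 1 s, fade out over 1 s.
Animation titleFade = new Animation();
titleFade.AddKeyframe(0, 0f);    // start transparent
titleFade.AddKeyframe(1000, 1f); // fully opaque after one second
titleFade.AddKeyframe(2000, 1f); // hold for a second
titleFade.AddKeyframe(3000, 0f); // fade back out
titleFade.Start();

// In Update(GameTime gameTime):
titleFade.Update(gameTime, ref stringAlpha);

// In Draw(), modulate the text colour by the animated alpha:
spriteBatch.DrawString(font, levelName, position, Color.White * stringAlpha);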
