Mixed Reality WebRTC - Screen capturing with GraphicsCapturePicker - C#

Setup
Hey,
I'm trying to capture my screen and send the stream via MR-WebRTC. Communication between two PCs, or between a PC and a HoloLens, already worked for me with webcams, so I thought the next step could be streaming my screen. So I took the UWP application I already had working with my webcam and tried to make things work:
The UWP app is based on the example UWP app from MR-WebRTC.
For capturing I'm using the instructions from Microsoft about screen capturing via GraphicsCapturePicker.
So now I'm stuck in the following situation:
I get a frame from the screen capture, but its type is Direct3D11CaptureFrame. You can see it below in the code snippet.
MR-WebRTC takes a frame of type I420AVideoFrame (also in a code snippet).
How can I "connect" them?
I420AVideoFrame wants a frame in the I420A format (YUV 4:2:0).
When configuring the frame pool I can set the DirectXPixelFormat, but it has no YUV420 option.
I found this post on SO saying that it is possible.
Code snippet: frame from Direct3D:
_framePool = Direct3D11CaptureFramePool.Create(
    _canvasDevice,                              // D3D device
    DirectXPixelFormat.B8G8R8A8UIntNormalized,  // Pixel format
    3,                                          // Number of frames
    _item.Size);                                // Size of the buffers

_session = _framePool.CreateCaptureSession(_item);
_session.StartCapture();

_framePool.FrameArrived += (s, a) =>
{
    using (var frame = _framePool.TryGetNextFrame())
    {
        // Here I would take the Frame and call the MR-WebRTC method LocalI420AFrameReady
    }
};
Code snippet: frame from WebRTC:
// This is the way it works with the webcam: LocalI420AFrameReady was subscribed to
// the event I420AVideoFrameReady and got the frames from there.
_webcamSource = await DeviceVideoTrackSource.CreateAsync();
_webcamSource.I420AVideoFrameReady += LocalI420AFrameReady;

// Enqueues the newly captured video frames into the bridge,
// which will later deliver them when the Media Foundation
// playback pipeline requests them.
private void LocalI420AFrameReady(I420AVideoFrame frame)
{
    lock (_localVideoLock)
    {
        if (!_localVideoPlaying)
        {
            _localVideoPlaying = true;

            // Capture the resolution into local variables usable from the lambda below
            uint width = frame.width;
            uint height = frame.height;

            // Defer UI-related work to the main UI thread
            RunOnMainThread(() =>
            {
                // Bridge the local video track with the local media player UI
                int framerate = 30; // assumed, for lack of an actual value
                _localVideoSource = CreateI420VideoStreamSource(
                    width, height, framerate);
                var localVideoPlayer = new MediaPlayer();
                localVideoPlayer.Source = MediaSource.CreateFromMediaStreamSource(
                    _localVideoSource);
                localVideoPlayerElement.SetMediaPlayer(localVideoPlayer);
                localVideoPlayer.Play();
            });
        }
    }

    // Enqueue the incoming frame into the video bridge; the media player will
    // later dequeue it as soon as it's ready.
    _localVideoBridge.HandleIncomingVideoFrame(frame);
}

I found a solution to my problem by creating an issue on the GitHub repo. The answer was provided by KarthikRichie:
You have to use the ExternalVideoTrackSource.
You can convert from the Direct3D11CaptureFrame to an Argb32VideoFrame.
 
// Setting up external video track source
_screenshareSource = ExternalVideoTrackSource.CreateFromArgb32Callback(FrameCallback);

struct WebRTCFrameData
{
    public IntPtr Data;
    public uint Height;
    public uint Width;
    public int Stride;
}

public void FrameCallback(in FrameRequest frameRequest)
{
    try
    {
        if (FramePool != null)
        {
            using (Direct3D11CaptureFrame _currentFrame = FramePool.TryGetNextFrame())
            {
                if (_currentFrame != null)
                {
                    WebRTCFrameData webRTCFrameData = ProcessBitmap(_currentFrame.Surface).Result;
                    frameRequest.CompleteRequest(new Argb32VideoFrame()
                    {
                        data = webRTCFrameData.Data,
                        height = webRTCFrameData.Height,
                        width = webRTCFrameData.Width,
                        stride = webRTCFrameData.Stride
                    });
                }
            }
        }
    }
    catch (Exception ex)
    {
    }
}

private async Task<WebRTCFrameData> ProcessBitmap(IDirect3DSurface surface)
{
    SoftwareBitmap softwareBitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(surface, Windows.Graphics.Imaging.BitmapAlphaMode.Straight);

    byte[] imageBytes = new byte[4 * softwareBitmap.PixelWidth * softwareBitmap.PixelHeight];
    softwareBitmap.CopyToBuffer(imageBytes.AsBuffer());

    WebRTCFrameData argb32VideoFrame = new WebRTCFrameData();
    argb32VideoFrame.Data = GetByteIntPtr(imageBytes);
    argb32VideoFrame.Height = (uint)softwareBitmap.PixelHeight;
    argb32VideoFrame.Width = (uint)softwareBitmap.PixelWidth;

    var test = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read);
    int count = test.GetPlaneCount();
    var pl = test.GetPlaneDescription(count - 1);
    argb32VideoFrame.Stride = pl.Stride;

    return argb32VideoFrame;
}

private IntPtr GetByteIntPtr(byte[] byteArr)
{
    IntPtr intPtr2 = System.Runtime.InteropServices.Marshal.UnsafeAddrOfPinnedArrayElement(byteArr, 0);
    return intPtr2;
}
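To actually send this, the external source still has to be wrapped in a local video track and attached to the peer connection. A rough sketch of how that could look with MR-WebRTC 2.0 (the track name and the _screenshareTrack / _videoTransceiver variables are assumed names, not part of the original answer):

// Sketch (assumed names): wrap the external source in a local video track
// and attach it to an existing video transceiver of the peer connection.
_screenshareTrack = LocalVideoTrack.CreateFromSource(
    _screenshareSource,
    new LocalVideoTrackInitConfig { trackName = "screen_share" });
_videoTransceiver.LocalVideoTrack = _screenshareTrack;
_videoTransceiver.DesiredDirection = Transceiver.Direction.SendReceive;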

Related

DirectX 11 render BGRA32 Frame

This is my first time trying to render something and I'm having big trouble... I am using the DirectN library and the SwapChainSurface class from KlearTouch.MediaPlayer. I am trying to render a BGRA32 frame using D3D11Device.
For this I have slightly modified OnNewSurfaceAvailable:
public void OnNewSurfaceAvailable2(Action<ID3D11Device, ID3D11DeviceContext> updateSurface)
{
    if (rendering)
    {
        return;
    }
    try
    {
        if (this.swapChain is null || swapChainComObject is null)
        {
            return;
        }
        swapChainComObject.GetDesc(out var swapChainDesc).ThrowOnError();
        if (swapChainDesc.BufferDesc.Width != PanelWidth || swapChainDesc.BufferDesc.Height != PanelHeight)
        {
            swapChainComObject.ResizeBuffers(2, PanelWidth, PanelHeight, DXGI_FORMAT.DXGI_FORMAT_UNKNOWN, 0).ThrowOnError();
        }
        var device = swapChain.Object.GetDevice1().Object.As<ID3D11Device>();
        device.GetImmediateContext(out var context);
        // context.ClearRenderTargetView(renderTargetView.Object, new []{0f, 1f, 1f, 1f});
        updateSurface(device, context);
        swapChainComObject.Present(1, 0).ThrowOnError();
    }
    catch (ObjectDisposedException)
    {
        Reinitialize();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine("\nException: " + ex, nameof(SwapChainSurface) + '.' + nameof(OnNewSurfaceAvailable));
    }
    rendering = false;
}
OnNewSurfaceAvailable2 is called from:
void VideoFrameArrived(Bgra32VideoFrame frame)
{
    DispatcherQueue.TryEnqueue(() =>
    {
        previewSurface.OnNewSurfaceAvailable2((device, context) =>
        {
            var size = frame.m_height * frame.m_height * 4;

            D3D11_TEXTURE2D_DESC td;
            td.ArraySize = 1;
            td.BindFlags = (uint) D3D11_BIND_FLAG.D3D11_BIND_SHADER_RESOURCE;
            td.Usage = D3D11_USAGE.D3D11_USAGE_DYNAMIC;
            td.CPUAccessFlags = (uint) D3D11_CPU_ACCESS_FLAG.D3D11_CPU_ACCESS_WRITE;
            td.Format = DXGI_FORMAT.DXGI_FORMAT_B8G8R8A8_UNORM;
            td.Height = (uint) frame.m_height;
            td.Width = (uint) frame.m_width;
            td.MipLevels = 1;
            td.MiscFlags = 0;
            td.SampleDesc.Count = 1;
            td.SampleDesc.Quality = 0;

            D3D11_SUBRESOURCE_DATA srd;
            srd.pSysMem = frame.m_pixelBuffer;
            srd.SysMemPitch = (uint) frame.m_height;
            srd.SysMemSlicePitch = 0;

            var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});

            var mappedResource = context.Map(texture.Object, 0, D3D11_MAP.D3D11_MAP_WRITE_DISCARD);
            var mappedData = mappedResource.pData;
            unsafe
            {
                Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
            }

            // Just for debug
            var pixelsInFrame = new byte[size];
            var pixelsInResource = new byte[size];
            Marshal.Copy(frame.m_pixelBuffer, pixelsInFrame, 0, size);
            Marshal.Copy(mappedResource.pData, pixelsInResource, 0, size);

            context.Unmap(texture.Object, 0);
        });
    });
}
The problem is that I can't see anything rendered; the surface stays black, and I assume it should not be.
Update: Project repository
Update 2:
I solved my issue. I had too little knowledge about DX11, so I had to study more about how things work there. With this knowledge I updated the repository, which can now display a preview from a Blackmagic Design card. It is just an example with many issues, so be careful, and feel free to look there for inspiration.
There are various issues here.
First, in the frame-arrived handler, you have:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
So you create a texture, but you do not use it anywhere; it needs to be blitted to the swap chain (you can do a CopyResource on the device context or draw a full-screen triangle/quad).
Note that CopyResource will only work if your swap chain has the same size as your incoming texture, which is rather unlikely, so you will most likely have to draw a blit with a shader.
Also, you are actually copying the data into the texture twice:
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new []{srd});
Since you provide initial data, the content is already there.
Also, the pitch is incorrect:
srd.SysMemPitch = (uint) frame.m_height;
The pitch is the length (in bytes) of a line, so it should be:
srd.SysMemPitch = frame.GetRowBytes();
Please also note that in the case of a non-converted DeckLink frame, GetRowBytes can be different from width * 4 (row sizes can be aligned to a multiple of 16/32 or other values).
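Putting the two points above together, a possible sketch of the initial-data path, reusing the names from the snippet (with the correct pitch there is then no need for the Map/MemoryCopy step at all):

// Sketch: provide the pixels once at creation time with the correct pitch;
// the texture then already contains the frame and can be blitted to the swap chain.
D3D11_SUBRESOURCE_DATA srd;
srd.pSysMem = frame.m_pixelBuffer;
srd.SysMemPitch = (uint)frame.GetRowBytes(); // bytes per row, not the height
srd.SysMemSlicePitch = 0;
var texture = device.CreateTexture2D<ID3D11Texture2D>(td, new[] { srd });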
Next, if you keep the resource-map path, the following is also incorrect:
unsafe
{
    Buffer.MemoryCopy(frame.m_pixelBuffer.ToPointer(), mappedData.ToPointer(), size, size);
}
You are not checking the pitch/stride requirement of the mapped texture (which can be different as well), so you need to do:
if (mappedResource.RowPitch == frame.GetRowBytes())
{
    // here you can use a direct copy as above
}
else
{
    // here you need to copy the data line by line
}

AVMutableComposition output freezes at the last frame of the first video

I am trying to merge multiple clips (videos) into one using AVMutableComposition. I have successfully done this, as well as rotating and translating each instruction; however, one issue remains.
When the first clip finishes, the output freezes at its last frame (the last frame of the first clip). This only happens if there is another clip visible: for example, if I set the opacity of the second and third clips to 0 at CMTime.Zero and of the first one to 0 at firstClip.Duration, the result is a video that displays the first clip and, once it finishes, shows a black background.
The clips' audio works perfectly.
Here is my code:
public void TESTING()
{
    //microphone
    AVCaptureDevice microphone = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Audio);

    AVMutableComposition mixComposition = AVMutableComposition.Create();
    AVVideoCompositionLayerInstruction[] Instruction_Array = new AVVideoCompositionLayerInstruction[Clips.Count];

    foreach (string clip in Clips)
    {
        var asset = AVUrlAsset.FromUrl(new NSUrl(clip, false)) as AVUrlAsset;

        #region HoldVideoTrack
        //This range applies to the video, not to the mixcomposition
        CMTimeRange range = new CMTimeRange()
        {
            Start = CMTime.Zero,
            Duration = asset.Duration
        };
        var duration = mixComposition.Duration;
        NSError error;

        AVMutableCompositionTrack videoTrack = mixComposition.AddMutableTrack(AVMediaType.Video, 0);
        AVAssetTrack assetVideoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
        videoTrack.InsertTimeRange(range, assetVideoTrack, duration, out error);
        videoTrack.PreferredTransform = assetVideoTrack.PreferredTransform;

        if (microphone != null)
        {
            AVMutableCompositionTrack audioTrack = mixComposition.AddMutableTrack(AVMediaType.Audio, 0);
            AVAssetTrack assetAudioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];
            audioTrack.InsertTimeRange(range, assetAudioTrack, duration, out error);
        }
        #endregion

        AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
        var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);

        #region Instructions
        int counter = Clips.IndexOf(clip);
        Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
        #endregion
    }

    // 6
    AVMutableVideoCompositionInstruction mainInstruction = AVMutableVideoCompositionInstruction.Create() as AVMutableVideoCompositionInstruction;
    CMTimeRange rangeIns = new CMTimeRange()
    {
        Start = new CMTime(0, 0),
        Duration = mixComposition.Duration
    };
    mainInstruction.TimeRange = rangeIns;
    mainInstruction.LayerInstructions = Instruction_Array;

    var mainComposition = AVMutableVideoComposition.Create();
    mainComposition.Instructions = new AVVideoCompositionInstruction[1] { mainInstruction };
    mainComposition.FrameDuration = new CMTime(1, 30);
    mainComposition.RenderSize = new CGSize(mixComposition.NaturalSize.Height, mixComposition.NaturalSize.Width);

    finalVideo_path = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov"));
    if (File.Exists(Path.GetTempPath() + "Whole2.mov"))
    {
        File.Delete(Path.GetTempPath() + "Whole2.mov");
    }

    //... export video ...
    AVAssetExportSession exportSession = new AVAssetExportSession(mixComposition, AVAssetExportSessionPreset.HighestQuality)
    {
        OutputUrl = NSUrl.FromFilename(Path.Combine(Path.GetTempPath(), "Whole2.mov")),
        OutputFileType = AVFileType.QuickTimeMovie,
        ShouldOptimizeForNetworkUse = true,
        VideoComposition = mainComposition
    };
    exportSession.ExportAsynchronously(_OnExportDone);

    FinalVideo = Path.Combine(Path.GetTempPath(), "Whole2.mov");
}

private AVMutableVideoCompositionLayerInstruction TestingInstruction(AVAsset asset, CMTime currentTime, AVAssetTrack mixComposition_video_Track)
{
    var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(mixComposition_video_Track);
    var startTime = CMTime.Subtract(currentTime, asset.Duration);

    //NaturalSize.Height is passed as a width parameter because iOS stores the video recording horizontally
    CGAffineTransform translateToCenter = CGAffineTransform.MakeTranslation(mixComposition_video_Track.NaturalSize.Height, 0);
    //Angle in radians, not in degrees
    CGAffineTransform rotate = CGAffineTransform.Rotate(translateToCenter, (nfloat)(Math.PI / 2));

    instruction.SetTransform(rotate, (CMTime.Subtract(currentTime, asset.Duration)));
    instruction.SetOpacity(1, startTime);
    instruction.SetOpacity(0, currentTime);

    return instruction;
}
Does anyone know how to solve this?
If you need more information I will provide it as soon as I see your request. Thank you all for your time, have a nice day. (:
I believe I figured out the problem in your code. You are only creating instructions on the first track. Look at these two lines here:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
AVMutableComposition.TracksWithMediaType gets an array of tracks, so the [0] at the end of the first line grabs only the first track in the composition, which is the first video. As you loop through, you are just creating instructions for the first video multiple times.
Your code, and my not being familiar with Xamarin, is confusing me, but I believe you can just do this and it should work:
Change these lines:
AVAssetTrack videoTrackWithMediaType = mixComposition.TracksWithMediaType(AVMediaType.Video)[0];
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrackWithMediaType);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrackWithMediaType);
#endregion
To this:
var instruction = AVMutableVideoCompositionLayerInstruction.FromAssetTrack(videoTrack);
#region Instructions
int counter = Clips.IndexOf(clip);
Instruction_Array[counter] = TestingInstruction(asset, mixComposition.Duration, videoTrack);
#endregion
All I did here was get rid of the videoTrackWithMediaType variable you made and use videoTrack instead. There is no need to fetch the corresponding track, since you already created it and still have access to it within the code block you are in when creating the instructions.

UWP Kinect V2 keep frame rate constant (30fps)

I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (Xbox One S). The main goal is to get the frames and write them to disk at 30 fps for both Color and IR, to process them further later.
I am using the following code to check the frame rate:
public MainPage()
{
    this.InitialiseFrameReader(); // initialises MediaCapture for IR and Color
}

const int COLOR_SOURCE = 0;
const int IR_SOURCE = 1;

private async void InitialiseFrameReader()
{
    await CleanupMediaCaptureAsync();
    var allGroups = await MediaFrameSourceGroup.FindAllAsync();
    if (allGroups.Count == 0)
    {
        return;
    }

    _groupSelectionIndex = (_groupSelectionIndex + 1) % allGroups.Count;
    var selectedGroup = allGroups[_groupSelectionIndex];
    var kinectGroup = selectedGroup;

    try
    {
        await InitializeMediaCaptureAsync(kinectGroup);
    }
    catch (Exception exception)
    {
        _logger.Log($"MediaCapture initialization error: {exception.Message}");
        await CleanupMediaCaptureAsync();
        return;
    }

    // Set up frame readers, register event handlers and start streaming.
    var startedKinds = new HashSet<MediaFrameSourceKind>();
    foreach (MediaFrameSource source in _mediaCapture.FrameSources.Values.Where(x => x.Info.SourceKind == MediaFrameSourceKind.Color || x.Info.SourceKind == MediaFrameSourceKind.Infrared))
    {
        MediaFrameSourceKind kind = source.Info.SourceKind;
        MediaFrameSource frameSource = null;

        int frameindex = COLOR_SOURCE;
        if (kind == MediaFrameSourceKind.Infrared)
        {
            frameindex = IR_SOURCE;
        }

        // Ignore this source if we already have a source of this kind.
        if (startedKinds.Contains(kind))
        {
            continue;
        }

        MediaFrameSourceInfo frameInfo = kinectGroup.SourceInfos[frameindex];
        if (_mediaCapture.FrameSources.TryGetValue(frameInfo.Id, out frameSource))
        {
            // Create a frameReader based on the source stream
            MediaFrameReader frameReader = await _mediaCapture.CreateFrameReaderAsync(frameSource);
            frameReader.FrameArrived += FrameReader_FrameArrived;
            _sourceReaders.Add(frameReader);

            MediaFrameReaderStartStatus status = await frameReader.StartAsync();
            if (status == MediaFrameReaderStartStatus.Success)
            {
                startedKinds.Add(kind);
            }
        }
    }
}

private async Task InitializeMediaCaptureAsync(MediaFrameSourceGroup sourceGroup)
{
    if (_mediaCapture != null)
    {
        return;
    }

    // Initialize mediacapture with the source group.
    _mediaCapture = new MediaCapture();
    var settings = new MediaCaptureInitializationSettings
    {
        SourceGroup = sourceGroup,
        SharingMode = MediaCaptureSharingMode.SharedReadOnly,
        StreamingCaptureMode = StreamingCaptureMode.Video,
        MemoryPreference = MediaCaptureMemoryPreference.Cpu
    };
    await _mediaCapture.InitializeAsync(settings);
}

private void FrameReader_FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        if (frame != null)
        {
            //Settings.cameraframeQueue.Enqueue(null, frame.SourceKind.ToString(), frame.SystemRelativeTime.Value); //Add to Queue to process frame
            Debug.WriteLine(frame.SourceKind.ToString() + " : " + frame.SystemRelativeTime.ToString());
        }
    }
}
I am trying to debug the application to check the frame rate, so I have removed further processing.
I am not sure whether I am not calculating it properly or whether something else is wrong.
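For reference, here is a minimal sketch of how the interval between frames could be measured directly from SystemRelativeTime inside the handler (the _lastTimestamp dictionary and LogFrameInterval helper are assumptions, not part of the original code):

// Sketch: track the previous SystemRelativeTime per source kind and print the
// instantaneous interval; roughly 33 ms corresponds to 30 fps.
private readonly Dictionary<MediaFrameSourceKind, TimeSpan> _lastTimestamp =
    new Dictionary<MediaFrameSourceKind, TimeSpan>();

private void LogFrameInterval(MediaFrameSourceKind kind, TimeSpan timestamp)
{
    if (_lastTimestamp.TryGetValue(kind, out TimeSpan previous))
    {
        double ms = (timestamp - previous).TotalMilliseconds;
        Debug.WriteLine($"{kind}: {ms:F1} ms since previous frame (~{1000.0 / ms:F1} fps)");
    }
    _lastTimestamp[kind] = timestamp;
}

// Called from FrameReader_FrameArrived:
// LogFrameInterval(frame.SourceKind, frame.SystemRelativeTime.Value);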
For example, System Relative Time from 04:37:06 to 04:37:48 gives:
IR fps (occurrences): 31 (1), 30 (36), 29 (18), 28 (4)
Color fps (occurrences): 30 (38), 29 (18), 28 (3)
I want the frame rate to be constant (30 fps) and aligned, so that IR and Color have the same number of frames over that time.
This is without any additional code. As soon as I add a processing queue or any other code, the fps decreases and ranges from 15 to 30.
Can anyone please help me with this?
Thank you.
UPDATE:
After some testing and working around, I have noticed that the PC produces 30 fps, but the Xbox One (the remote device) produces a very low fps in debug mode. This does improve when running in release mode, but the memory allocated to UWP apps is quite low.
https://learn.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation
The Xbox One has a maximum of 1 GB of memory available for apps and 5 GB for games.
On the PC the fps is 30, as memory there has no such restrictions.
This causes the frame rate to drop on the Xbox. However, the fps did improve when running in release mode or when published to the Microsoft Store.

How can a custom audio effect produce some sound after an AudioFileInputNode has finished playing

I'm developing an audio application in C# and UWP using the AudioGraph API.
My AudioGraph setup is the following :
AudioFileInputNode --> AudioSubmixNode --> AudioDeviceOutputNode.
I attached a custom echo effect on the AudioSubmixNode.
If I play the AudioFileInputNode I can hear some echo.
But when the AudioFileInputNode playback finishes, the echo sound stops abruptly.
I would like it to fade out gradually over a few seconds instead.
If I use the EchoEffectDefinition from the AudioGraph API, the echo sound is not stopped after the sample playback has finished.
I don't know if the problem comes from my effect implementation or if it's a strange behavior of the AudioGraph API...
The behavior is the same in the "AudioCreation" sample in the SDK, scenario 6.
Here is my custom effect implementation :
public sealed class AudioEchoEffect : IBasicAudioEffect
{
    public AudioEchoEffect()
    {
    }

    private readonly AudioEncodingProperties[] _supportedEncodingProperties = new AudioEncodingProperties[]
    {
        AudioEncodingProperties.CreatePcm(44100, 1, 32),
        AudioEncodingProperties.CreatePcm(48000, 1, 32),
    };

    private AudioEncodingProperties _currentEncodingProperties;
    private IPropertySet _propertySet;
    private readonly Queue<float> _echoBuffer = new Queue<float>(100000);
    private int _delaySamplesCount;

    private float Delay
    {
        get
        {
            if (_propertySet != null && _propertySet.TryGetValue("Delay", out object val))
            {
                return (float)val;
            }
            return 500.0f;
        }
    }

    private float Feedback
    {
        get
        {
            if (_propertySet != null && _propertySet.TryGetValue("Feedback", out object val))
            {
                return (float)val;
            }
            return 0.5f;
        }
    }

    private float Mix
    {
        get
        {
            if (_propertySet != null && _propertySet.TryGetValue("Mix", out object val))
            {
                return (float)val;
            }
            return 0.5f;
        }
    }

    public bool UseInputFrameForOutput { get { return true; } }

    public IReadOnlyList<AudioEncodingProperties> SupportedEncodingProperties { get { return _supportedEncodingProperties; } }

    public void SetProperties(IPropertySet configuration)
    {
        _propertySet = configuration;
    }

    public void SetEncodingProperties(AudioEncodingProperties encodingProperties)
    {
        _currentEncodingProperties = encodingProperties;

        // compute the number of samples for the delay
        _delaySamplesCount = (int)MathF.Round((this.Delay / 1000.0f) * encodingProperties.SampleRate);

        // fill empty samples in the buffer according to the delay
        for (int i = 0; i < _delaySamplesCount; i++)
        {
            _echoBuffer.Enqueue(0.0f);
        }
    }

    public unsafe void ProcessFrame(ProcessAudioFrameContext context)
    {
        AudioFrame frame = context.InputFrame;
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.ReadWrite))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            ((IMemoryBufferByteAccess)reference).GetBuffer(out byte* dataInBytes, out uint capacity);
            float* dataInFloat = (float*)dataInBytes;
            int dataInFloatLength = (int)buffer.Length / sizeof(float);

            // read parameters once
            float currentWet = this.Mix;
            float currentDry = 1.0f - currentWet;
            float currentFeedback = this.Feedback;

            // Process audio data
            float sample, echoSample, outSample;
            for (int i = 0; i < dataInFloatLength; i++)
            {
                // read values
                sample = dataInFloat[i];
                echoSample = _echoBuffer.Dequeue();

                // compute output sample
                outSample = (currentDry * sample) + (currentWet * echoSample);
                dataInFloat[i] = outSample;

                // compute delay sample
                echoSample = sample + (currentFeedback * echoSample);
                _echoBuffer.Enqueue(echoSample);
            }
        }
    }

    public void Close(MediaEffectClosedReason reason)
    {
    }

    public void DiscardQueuedFrames()
    {
        // reset the delay buffer
        _echoBuffer.Clear();
        for (int i = 0; i < _delaySamplesCount; i++)
        {
            _echoBuffer.Enqueue(0.0f);
        }
    }
}
EDIT :
I changed my audio effect to mix the input samples with a sine wave. The ProcessFrame effect method runs continuously before and after the sample playback (while the effect is active), so the sine wave should be heard before and after the sample playback. But the AudioGraph API seems to ignore the effect output when there is no active playback...
Here is a screen capture of the audio output:
So my question is: how can the built-in EchoEffectDefinition output sound after the playback has finished? Access to the EchoEffectDefinition source code would be a great help...
By looping the file input node infinitely, it will always provide an input frame until the audio graph stops. Of course we do not want to hear the file loop, so we can listen to the FileCompleted event of the AudioFileInputNode. When the file finishes playing, the event fires and we just need to set the OutgoingGain of the AudioFileInputNode to zero. So the file plays back once, but it then continues to loop silently, passing input frames with no audio content to which the echo can be added.
Still using scenario 4 in the AudioCreation sample as an example: in scenario 4 there is a field named fileInputNode1. As mentioned above, please add the following code for fileInputNode1 and test again using your custom echo effect.
fileInputNode1.LoopCount = null; // null makes it loop infinitely
fileInputNode1.FileCompleted += FileInputNode1_FileCompleted;

private void FileInputNode1_FileCompleted(AudioFileInputNode sender, object args)
{
    fileInputNode1.OutgoingGain = 0.0;
}
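If the goal is also to let the echo die out after only a few seconds rather than keeping the silent loop running forever, one possible extension (a sketch, not part of the original answer; the five-second tail is an arbitrary assumption) is to stop the input node after a short delay:

private async void FileInputNode1_FileCompleted(AudioFileInputNode sender, object args)
{
    fileInputNode1.OutgoingGain = 0.0;
    // Assumed tail length: let the echo ring out for a few seconds,
    // then stop the (now silent) looping input node.
    await Task.Delay(TimeSpan.FromSeconds(5));
    fileInputNode1.Stop();
}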

Playing WAVE file in C# using DirectX and threading?

At the moment I'm trying to figure out how I can play a wave file in C# by filling up the secondary buffer with data from the wave file through threading and then playing it.
Any help or sample code I can use?
Thanks.
Sample code being used:
public delegate void PullAudio(short[] buffer, int length);

public class SoundPlayer : IDisposable
{
    private Device soundDevice;
    private SecondaryBuffer soundBuffer;
    private int samplesPerUpdate;
    private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
    private Thread thread;
    private PullAudio pullAudio;
    private short channels;
    private bool halted;
    private bool running;

    public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
    {
        this.channels = channels;
        this.pullAudio = pullAudio;

        this.soundDevice = new Device();
        this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);

        // Set up our wave format to 44,100Hz, with 16 bit resolution
        WaveFormat wf = new WaveFormat();
        wf.FormatTag = WaveFormatTag.Pcm;
        wf.SamplesPerSecond = 44100;
        wf.BitsPerSample = 16;
        wf.Channels = channels;
        wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
        wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;

        this.samplesPerUpdate = 512;

        // Create a buffer with 2 seconds of sample data
        BufferDescription bufferDesc = new BufferDescription(wf);
        bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
        bufferDesc.ControlPositionNotify = true;
        bufferDesc.GlobalFocus = true;

        this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);

        Notify notify = new Notify(this.soundBuffer);
        fillEvent[0] = new AutoResetEvent(false);
        fillEvent[1] = new AutoResetEvent(false);

        // Set up two notification events, one at halfway, and one at the end of the buffer
        BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
        posNotify[0] = new BufferPositionNotify();
        posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
        posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
        posNotify[1] = new BufferPositionNotify();
        posNotify[1].Offset = bufferDesc.BufferBytes - 1;
        posNotify[1].EventNotifyHandle = fillEvent[1].Handle;

        notify.SetNotificationPositions(posNotify);

        this.thread = new Thread(new ThreadStart(SoundPlayback));
        this.thread.Priority = ThreadPriority.Highest;
        this.Pause();
        this.running = true;

        this.thread.Start();
    }

    public void Pause()
    {
        if (this.halted) return;
        this.halted = true;
        Monitor.Enter(this.thread);
    }

    public void Resume()
    {
        if (!this.halted) return;
        this.halted = false;
        Monitor.Pulse(this.thread);
        Monitor.Exit(this.thread);
    }

    private void SoundPlayback()
    {
        lock (this.thread)
        {
            if (!this.running) return;

            // Set up the initial sound buffer to be the full length
            int bufferLength = this.samplesPerUpdate * 2 * this.channels;
            short[] soundData = new short[bufferLength];

            // Prime it with the first x seconds of data
            this.pullAudio(soundData, soundData.Length);
            this.soundBuffer.Write(0, soundData, LockFlag.None);

            // Start it playing
            this.soundBuffer.Play(0, BufferPlayFlags.Looping);

            int lastWritten = 0;
            while (this.running)
            {
                if (this.halted)
                {
                    Monitor.Pulse(this.thread);
                    Monitor.Wait(this.thread);
                }

                // Wait on one of the notification events
                WaitHandle.WaitAny(this.fillEvent, 3, true);

                // Get the current play position (divide by two because we are using 16 bit samples)
                int tmp = this.soundBuffer.PlayPosition / 2;

                // Generate new sounds from lastWritten to tmp in the sound buffer
                if (tmp == lastWritten)
                {
                    continue;
                }
                else
                {
                    soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
                }

                this.pullAudio(soundData, soundData.Length);

                // Write in the generated data
                soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);

                // Save the position we were at
                lastWritten = tmp;
            }
        }
    }

    public void Dispose()
    {
        this.running = false;
        this.Resume();

        if (this.soundBuffer != null)
        {
            this.soundBuffer.Dispose();
        }
        if (this.soundDevice != null)
        {
            this.soundDevice.Dispose();
        }
    }
}
The concept is the same as what I'm using, but I can't manage to get a set of wave byte[] data to play.
I have not done this.
But the first place I would look is XNA.
I know that the C# Managed DirectX project was ditched in favor of XNA, and I have found it to be good for graphics; I prefer using it to DirectX.
What is the reason you decided not to just use SoundPlayer, as per the MSDN entry below?
private SoundPlayer Player = new SoundPlayer();

private void loadSoundAsync()
{
    // Note: You may need to change the location specified based on
    // the location of the sound to be played.
    this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
    this.Player.LoadAsync();
}

private void Player_LoadCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
{
    if (this.Player.IsLoadCompleted)
    {
        this.Player.PlaySync();
    }
}
Usually I just load them all up in a thread or an async delegate, then Play or PlaySync them when needed.
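For what it's worth, a minimal sketch of that pattern with System.Media.SoundPlayer (the file path is just a placeholder):

// Sketch: preload a local .wav on a background thread, then play it when needed.
var player = new System.Media.SoundPlayer(@"C:\sounds\stop.wav"); // placeholder path
ThreadPool.QueueUserWorkItem(_ => player.Load());                 // load in the background
// ... later, when the sound is needed:
player.Play();       // fire-and-forget on a worker thread
// player.PlaySync(); // or block the caller until playback completes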
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
