I am currently working with the HoloLens and thus with Unity. The idea is to send a webcam livestream with as little delay as possible to an instance of FFmpeg on a different device for encoding. However, I am running into some issues.
My first problem is the conversion from a Color32 array to a byte array: the raw frames consume a huge amount of bandwidth, especially when I am trying to record and stream at 30 fps and a 1280x720 resolution.
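For scale: 1280 x 720 pixels at 4 bytes per pixel (RGBA32) is about 3.7 MB per frame, so at 30 fps the raw stream is roughly 110 MB/s before any compression.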
Here is some code:
IEnumerator Init()
{
webCam = new WebCamTexture(1280, 720, 30);
webCam.Play();
image.texture = webCam;
Debug.Log("WebCam Width: " + webCam.width);
Debug.Log("WebCam Height: " + webCam.height);
currentTexture = new Texture2D(webCam.width, webCam.height);
data = new Color32[webCam.width * webCam.height];
listener = new TcpListener(IPAddress.Any, 10305);
listener.Start();
// WebCamTexture reports a small placeholder size until the camera actually starts
while (webCam.width < 100)
{
yield return null;
}
StartCoroutine("Recording");
}
WaitForEndOfFrame endOfFrame = new WaitForEndOfFrame();
IEnumerator Recording()
{
TcpClient tcpClient = null;
NetworkStream tcpStream = null;
bool isConnected = false;
Loom.RunAsync(() =>
{
while(!stopCapture)
{
tcpClient = listener.AcceptTcpClient();
Debug.Log("Client Connected!");
isConnected = true;
tcpStream = tcpClient.GetStream();
}
});
while(!isConnected)
{
yield return endOfFrame;
}
readyToGetFrame = true;
byte[] messageLength = new byte[SEND_RECEIVE_COUNT];
while(!stopCapture)
{
yield return endOfFrame;
webCam.GetPixels32(data);
byte[] webCamBytes = Utilities.Color32ArrayToByteArray(data);
readyToGetFrame = false;
Loom.RunAsync(() => {
tcpStream.Write(webCamBytes, 0, webCamBytes.Length);
readyToGetFrame = true;
});
while (!readyToGetFrame)
{
yield return endOfFrame;
}
}
}
public static class Utilities
{
public static byte[] Color32ArrayToByteArray(Color32[] colors)
{
if (colors == null || colors.Length == 0)
{
return null;
}
int lengthOfColor32 = Marshal.SizeOf(typeof(Color32));
int byteArrayLength = lengthOfColor32 * colors.Length;
byte[] bytes = new byte[byteArrayLength];
GCHandle handle = default(GCHandle);
try
{
handle = GCHandle.Alloc(colors, GCHandleType.Pinned);
IntPtr ptr = handle.AddrOfPinnedObject();
Marshal.Copy(ptr, bytes, 0, byteArrayLength);
}
finally
{
if (handle.IsAllocated)
{
handle.Free();
}
}
return bytes;
}
}
To fix this problem, I tried to use EncodeToPNG and EncodeToJPG instead. But that leads to my second problem: if I use these methods, I get a big loss of performance.
So instead of this
webCam.GetPixels32(data);
byte[] webCamBytes = Utilities.Color32ArrayToByteArray(data);
I use this
currentTexture.SetPixels(webCam.GetPixels());
byte[] webCamBytes = currentTexture.EncodeToJPG();
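One knob worth noting here: EncodeToJPG can also take a JPEG quality value (1-100, default 75), so a lower setting shrinks the payload further at the cost of image quality:
byte[] webCamBytes = currentTexture.EncodeToJPG(50); // quality 50 instead of the default 75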
Thus my question: is there any way to reduce the bandwidth, perhaps by cutting off the alpha values, as they are not needed? Or is there another good way, such as converting to a different color or picture format?
Because right now I have run out of usable ideas and feel kind of stuck. Might just be me, though ;-)
PS: Using external libraries is a tad difficult, as this needs to run on the HoloLens, and due to other restrictions.
Thank you in advance!
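One way to cut off the alpha values, for what it's worth: copy only the RGB bytes before sending, which reduces the raw payload by 25% without any external library. A minimal sketch, assuming the FFmpeg instance on the other end is configured for raw rgb24 input (e.g. -f rawvideo -pixel_format rgb24 -video_size 1280x720); the helper is hypothetical, not part of the project above:
public static byte[] Color32ArrayToRGB24(Color32[] colors)
{
    // 3 bytes per pixel instead of 4: a 25% reduction in raw bandwidth.
    byte[] bytes = new byte[colors.Length * 3];
    for (int i = 0; i < colors.Length; i++)
    {
        bytes[i * 3] = colors[i].r;
        bytes[i * 3 + 1] = colors[i].g;
        bytes[i * 3 + 2] = colors[i].b;
    }
    return bytes;
}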
I am using VideoLan.VLC to get 20 seconds of an audio stream every 30 seconds.
I have a loop, something like
private LibVLC libvlc = new LibVLC();
private MediaPlayer mediaPlayer = null;
private Media Media = null;
private string AudioSampleFileName
{
    get { return "audio_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".ts"; }
}
public void Start(){
mediaPlayer = new MediaPlayer(libvlc);
while(...){
Get_20_seconds_audio_sample();
Wait_30_Seconds();
}
}
public void Get_sample(Uri playPathUri, string FileName)
{
var currentDirectory = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
var destination = Path.Combine(currentDirectory, FileName);
var mediaOptions = new string[]
{
":sout=#file{dst=" + destination + ",channels=1,samplerate=16000}",
":sout-keep"
};
if (Media == null)
{
Media = new Media(libvlc, playPathUri, mediaOptions);
mediaPlayer.Media = Media;
}
else {
Media.AddOption(":sout=#file{dst=" + destination + ",channels=1,samplerate=16000}");
}
mediaPlayer.Play();
}
public void Get_20_seconds_audio_sample(){
Get_sample(RadioURI,AudioSampleFileName);
Wait_20_seconds();
Stop();
}
public void Stop()
{
mediaPlayer.Stop();
}
The problem is that radio streaming usually starts with a commercial that lasts about 25 seconds, and every sample plays the commercial. It seems that Stop() closes the stream until Play() is called again, which restarts the stream. I tried to pause the audio but, well... it doesn't make much sense to pause and play.
I can accept getting the commercial only in the first sample, but after that I want the regular radio audio. Is there a way not to close the stream every time? I am not tied to the VideoLan DLL, so I can start from scratch if you have a better way to do it.
Use Audio callbacks and save the audio stream yourself. Full sample:
class Program
{
// This sample shows you how you can use SetAudioFormatCallback and SetAudioCallbacks. It does two things:
// 1) Play the sound from the specified video using NAudio
// 2) Extract the sound into a file using NAudio
static void Main(string[] args)
{
Core.Initialize();
using var libVLC = new LibVLC(enableDebugLogs: true);
using var media = new Media(libVLC,
new Uri("http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4"),
":no-video");
using var mediaPlayer = new MediaPlayer(media);
using var outputDevice = new WaveOutEvent();
var waveFormat = new WaveFormat(8000, 16, 1);
var writer = new WaveFileWriter("sound.wav", waveFormat);
var waveProvider = new BufferedWaveProvider(waveFormat);
outputDevice.Init(waveProvider);
mediaPlayer.SetAudioFormatCallback(AudioSetup, AudioCleanup);
mediaPlayer.SetAudioCallbacks(PlayAudio, PauseAudio, ResumeAudio, FlushAudio, DrainAudio);
mediaPlayer.Play();
mediaPlayer.Time = 20_000; // Seek the video 20 seconds
outputDevice.Play();
Console.WriteLine("Press 'q' to quit. Press any other key to pause/play.");
while (true)
{
if (Console.ReadKey().KeyChar == 'q')
break;
if (mediaPlayer.IsPlaying)
mediaPlayer.Pause();
else
mediaPlayer.Play();
}
void PlayAudio(IntPtr data, IntPtr samples, uint count, long pts)
{
int bytes = (int)count * 2; // (16 bit, 1 channel)
var buffer = new byte[bytes];
Marshal.Copy(samples, buffer, 0, bytes);
waveProvider.AddSamples(buffer, 0, bytes);
writer.Write(buffer, 0, bytes);
}
int AudioSetup(ref IntPtr opaque, ref IntPtr format, ref uint rate, ref uint channels)
{
channels = (uint)waveFormat.Channels;
rate = (uint)waveFormat.SampleRate;
return 0;
}
void DrainAudio(IntPtr data)
{
writer.Flush();
}
void FlushAudio(IntPtr data, long pts)
{
writer.Flush();
waveProvider.ClearBuffer();
}
void ResumeAudio(IntPtr data, long pts)
{
outputDevice.Play();
}
void PauseAudio(IntPtr data, long pts)
{
outputDevice.Pause();
}
void AudioCleanup(IntPtr opaque) { }
}
}
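A note on why this addresses the commercial problem: the MediaPlayer in this sample is only ever paused, never stopped, so the connection to the stream stays open, while the PlayAudio callback keeps writing every decoded sample to the WaveFileWriter. You can therefore cut your own 20-second samples (for example by swapping the writer on a timer) without reconnecting and replaying the pre-roll.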
I'm new to Unity.
I'm trying to stream audio from the microphone along with live video from one app to another. Currently I have two apps, where app 1 is the server/sender and app 2 is the client/receiver. In app 1 I successfully send the video bytes to the client, and on the client side I'm receiving all of the bytes. I'm using sockets and TCP.
Now the problem is that I don't know how to send the microphone's audio along with the video from the server to the client.
The code below works perfectly for live video streaming from the server side:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Collections.Generic;
public class Connecting : MonoBehaviour
{
WebCamTexture webCam;
public RawImage myImage;
public bool enableLog = false;
Texture2D currentTexture;
private TcpListener listner;
private const int port = 8010;
private bool stop = false;
private List<TcpClient> clients = new List<TcpClient>();
//This must be the same as SEND_RECEIVE_COUNT on the client
const int SEND_RECEIVE_COUNT = 15;
private void Start()
{
Application.runInBackground = true;
//Start WebCam coroutine
StartCoroutine(initAndWaitForWebCamTexture());
}
//Converts the data size to byte array and put result to the fullBytes array
void byteLengthToFrameByteArray(int byteLength, byte[] fullBytes)
{
//Clear old data
Array.Clear(fullBytes, 0, fullBytes.Length);
//Convert int to bytes
byte[] bytesToSendCount = BitConverter.GetBytes(byteLength);
//Copy result to fullBytes
bytesToSendCount.CopyTo(fullBytes, 0);
}
//Converts the byte array to the data size and returns the result
int frameByteArrayToByteLength(byte[] frameBytesLength)
{
int byteLength = BitConverter.ToInt32(frameBytesLength, 0);
return byteLength;
}
IEnumerator initAndWaitForWebCamTexture()
{
// Open the camera on the desired device, in my case an iPad Pro
webCam = new WebCamTexture();
// Pick the last of the available devices (front and back cameras)
webCam.deviceName = WebCamTexture.devices[WebCamTexture.devices.Length - 1].name;
// request the lowest width and height possible
webCam.requestedHeight = 10;
webCam.requestedWidth = 10;
myImage.texture = webCam;
webCam.Play();
currentTexture = new Texture2D(webCam.width, webCam.height);
// Connect to the server
listner = new TcpListener(IPAddress.Any, port);
listner.Start();
while (webCam.width < 100)
{
yield return null;
}
//Start sending coroutine
StartCoroutine(senderCOR());
}
WaitForEndOfFrame endOfFrame = new WaitForEndOfFrame();
IEnumerator senderCOR()
{
bool isConnected = false;
TcpClient client = null;
NetworkStream stream = null;
// Wait for client to connect in another Thread
Loom.RunAsync(() =>
{
while (!stop)
{
// Wait for client connection
client = listner.AcceptTcpClient();
// We are connected
clients.Add(client);
isConnected = true;
stream = client.GetStream();
}
});
//Wait until client has connected
while (!isConnected)
{
yield return null;
}
LOG("Connected!");
bool readyToGetFrame = true;
byte[] frameBytesLength = new byte[SEND_RECEIVE_COUNT];
while (!stop)
{
//Wait for End of frame
yield return endOfFrame;
currentTexture.SetPixels(webCam.GetPixels());
byte[] pngBytes = currentTexture.EncodeToPNG();
//Fill total byte length to send. Result is stored in frameBytesLength
byteLengthToFrameByteArray(pngBytes.Length, frameBytesLength);
//Set readyToGetFrame false
readyToGetFrame = false;
Loom.RunAsync(() =>
{
//Send total byte count first
stream.Write(frameBytesLength, 0, frameBytesLength.Length);
LOG("Sent Image byte Length: " + frameBytesLength.Length);
//Send the image bytes
stream.Write(pngBytes, 0, pngBytes.Length);
LOG("Sending Image byte array data : " + pngBytes.Length);
//Sent. Set readyToGetFrame true
readyToGetFrame = true;
});
//Wait until we are ready to get new frame(Until we are done sending data)
while (!readyToGetFrame)
{
LOG("Waiting To get new frame");
yield return null;
}
}
}
void LOG(string messsage)
{
if (enableLog)
Debug.Log(messsage);
}
private void Update()
{
myImage.texture = webCam;
}
// stop everything
private void OnApplicationQuit()
{
if (webCam != null && webCam.isPlaying)
{
webCam.Stop();
stop = true;
}
if (listner != null)
{
listner.Stop();
}
foreach (TcpClient c in clients)
c.Close();
}
}
The code below works perfectly for live video streaming from the client side:
using UnityEngine;
using System.Collections;
using UnityEngine.UI;
using System.Net.Sockets;
using System.Net;
using System.IO;
using System;
public class reciver : MonoBehaviour
{
public RawImage image;
public bool enableLog = false;
const int port = 8010;
public string IP = "192.168.1.165";
TcpClient client;
Texture2D tex;
private bool stop = false;
//This must be the same as SEND_RECEIVE_COUNT on the server
const int SEND_RECEIVE_COUNT = 15;
// Use this for initialization
void Start()
{
Application.runInBackground = true;
tex = new Texture2D(0, 0);
client = new TcpClient();
//Connect to server from another Thread
Loom.RunAsync(() =>
{
LOGWARNING("Connecting to server...");
// if on desktop
client.Connect(IPAddress.Loopback, port);
// if using the IPAD
//client.Connect(IPAddress.Parse(IP), port);
LOGWARNING("Connected!");
imageReceiver();
});
}
void imageReceiver()
{
//While loop in another Thread is fine so we don't block main Unity Thread
Loom.RunAsync(() =>
{
while (!stop)
{
//Read Image Count
int imageSize = readImageByteSize(SEND_RECEIVE_COUNT);
LOGWARNING("Received Image byte Length: " + imageSize);
//Read Image Bytes and Display it
readFrameByteArray(imageSize);
}
});
}
//Converts the data size to byte array and put result to the fullBytes array
void byteLengthToFrameByteArray(int byteLength, byte[] fullBytes)
{
//Clear old data
Array.Clear(fullBytes, 0, fullBytes.Length);
//Convert int to bytes
byte[] bytesToSendCount = BitConverter.GetBytes(byteLength);
//Copy result to fullBytes
bytesToSendCount.CopyTo(fullBytes, 0);
}
//Converts the byte array to the data size and returns the result
int frameByteArrayToByteLength(byte[] frameBytesLength)
{
int byteLength = BitConverter.ToInt32(frameBytesLength, 0);
return byteLength;
}
/////////////////////////////////////////////////////Read Image SIZE from Server///////////////////////////////////////////////////
private int readImageByteSize(int size)
{
bool disconnected = false;
NetworkStream serverStream = client.GetStream();
byte[] imageBytesCount = new byte[size];
var total = 0;
do
{
var read = serverStream.Read(imageBytesCount, total, size - total);
//Debug.LogFormat("Client received {0} bytes", total);
if (read == 0)
{
disconnected = true;
break;
}
total += read;
} while (total != size);
int byteLength;
if (disconnected)
{
byteLength = -1;
}
else
{
byteLength = frameByteArrayToByteLength(imageBytesCount);
}
return byteLength;
}
/////////////////////////////////////////////////////Read Image Data Byte Array from Server///////////////////////////////////////////////////
private void readFrameByteArray(int size)
{
bool disconnected = false;
NetworkStream serverStream = client.GetStream();
byte[] imageBytes = new byte[size];
var total = 0;
do
{
var read = serverStream.Read(imageBytes, total, size - total);
//Debug.LogFormat("Client received {0} bytes", total);
if (read == 0)
{
disconnected = true;
break;
}
total += read;
} while (total != size);
bool readyToReadAgain = false;
//Display Image
if (!disconnected)
{
//Display Image on the main Thread
Loom.QueueOnMainThread(() =>
{
displayReceivedImage(imageBytes);
readyToReadAgain = true;
});
}
//Wait until old Image is displayed
while (!readyToReadAgain)
{
System.Threading.Thread.Sleep(1);
}
}
void displayReceivedImage(byte[] receivedImageBytes)
{
tex.LoadImage(receivedImageBytes);
image.texture = tex;
}
// Update is called once per frame
void Update()
{
}
void LOG(string messsage)
{
if (enableLog)
Debug.Log(messsage);
}
void LOGWARNING(string messsage)
{
if (enableLog)
Debug.LogWarning(messsage);
}
void OnApplicationQuit()
{
LOGWARNING("OnApplicationQuit");
stop = true;
if (client != null)
{
client.Close();
}
}
}
My achievement for audio so far is below; in the code below, audio from the microphone is playing successfully:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Audio1 : MonoBehaviour
{
const int FREQUENCY = 44100;
AudioClip mic;
int lastPos, pos;
// Use this for initialization
void Start()
{
mic = Microphone.Start(null, true, 10, FREQUENCY);
AudioSource audio = GetComponent<AudioSource>();
audio.clip = AudioClip.Create("test", 10 * FREQUENCY, mic.channels, FREQUENCY, false);
audio.loop = true;
}
// Update is called once per frame
void Update()
{
if ((pos = Microphone.GetPosition(null)) > 0)
{
if (lastPos > pos) lastPos = 0;
if (pos - lastPos > 0)
{
// Allocate the space for the sample.
float[] sample = new float[(pos - lastPos) * mic.channels];
// Get the data from microphone.
mic.GetData(sample, lastPos);
// Put the data in the audio source.
AudioSource audio = GetComponent<AudioSource>();
audio.clip.SetData(sample, lastPos);
if (!audio.isPlaying) audio.Play();
lastPos = pos;
}
}
}
void OnDestroy()
{
Microphone.End(null);
}
}
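One possible way to bridge the two pieces above, sketched rather than tested on a device: the float samples that mic.GetData() fills can be packed into bytes with Buffer.BlockCopy and sent length-prefixed over the same kind of TCP stream as the PNG frames. You would also need a one-byte message type (or a second port) so the client can tell audio packets from video packets; both helpers below are hypothetical:
// Server side: pack mic samples for the NetworkStream.
static byte[] FloatsToBytes(float[] samples)
{
    byte[] bytes = new byte[samples.Length * sizeof(float)];
    Buffer.BlockCopy(samples, 0, bytes, 0, bytes.Length);
    return bytes;
}
// Client side: unpack, then feed the result to AudioClip.SetData.
static float[] BytesToFloats(byte[] bytes)
{
    float[] samples = new float[bytes.Length / sizeof(float)];
    Buffer.BlockCopy(bytes, 0, samples, 0, bytes.Length);
    return samples;
}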
I was trying a basic "hello world" app for a first Kinect project, to stream the video data to the screen. The code is here: https://github.com/fschwiet/HelloKinect/blob/8803b6b959ee6dba5f9284b9e732fb11a897dea4/HelloKinect/ShowCameraCommand.cs
What I find is that I can poll for frame data in a loop, but I am not receiving the frame-ready events. The source code is below. When UsePolling is true, frame data is sent to the form. When UsePolling is false, there is console output "Hit return to exit." indicating everything has run, but no events are ever received.
I have a feeling this has to do with Windows message pumps - that I need to wait in an alertable state and/or pump a message queue. I haven't been able to make it work, though. Does anyone have any hints?
public class ShowCameraCommand : ConsoleCommand
{
static private Form EchoForm;
private bool UsePolling;
public ShowCameraCommand()
{
this.IsCommand("show-camera");
this.HasOption("p", "Use polling to check frame data", v => UsePolling = true);
}
public override int Run(string[] remainingArguments)
{
var sensor = KinectSensor.KinectSensors.Where(s => s.Status == KinectStatus.Connected).FirstOrDefault();
if (sensor == null)
{
Console.WriteLine("Kinect was not detected");
Console.WriteLine();
return -1;
}
EchoForm = new Form();
EchoForm.Width = 640;
EchoForm.Height = 480;
EchoForm.Show();
sensor.ColorStream.Enable(ColorImageFormat.RawYuvResolution640x480Fps15);
if (!UsePolling)
{
sensor.ColorFrameReady += sensor_ColorFrameReady;
}
sensor.Start();
if (UsePolling)
{
Console.WriteLine("Use any key to exit.");
while (!Console.KeyAvailable)
{
using (var frame = sensor.ColorStream.OpenNextFrame(10 * 1000))
{
HandleFrame(frame);
}
Thread.Sleep(50);
}
}
else
{
Console.WriteLine("Hit return to exit.");
Console.ReadLine();
}
return 0;
}
void sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
Console.WriteLine("Frame received");
using (ColorImageFrame frame = e.OpenColorImageFrame())
{
HandleFrame(frame);
}
}
private void HandleFrame(ColorImageFrame frame)
{
var bitmap = ImageToBitmap(frame);
using (var g = EchoForm.CreateGraphics())
{
g.DrawImage(bitmap, 0, 0);
Console.WriteLine("Frame drawn");
}
}
// http://stackoverflow.com/questions/10848190/convert-kinect-colorframe-to-bitmap
Bitmap ImageToBitmap(ColorImageFrame Image)
{
byte[] pixeldata = new byte[Image.PixelDataLength];
Image.CopyPixelDataTo(pixeldata);
Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(
new Rectangle(0, 0, Image.Width, Image.Height),
ImageLockMode.WriteOnly,
bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
bmap.UnlockBits(bmapdata);
return bmap;
}
}
Oh, I figured it out. I needed to call Application.Run() to start pumping events.
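For reference, a minimal sketch of that fix in the non-polling branch, replacing the Console.ReadLine() wait with a WinForms message loop so the SDK's events can be dispatched:
else
{
    Console.WriteLine("Close the form to exit.");
    Application.Run(EchoForm); // blocks and pumps Windows messages until the form closes
}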
I'm trying to run a primitive game server. So far I get to this point (commented line) and the server runs smoothly. But if I send the object again from the client, it doesn't update more than once.
So for example, the client sends a serialized player object with the new position Vector2(400, 50), but the server deserializes it to an object that has the old position.
Player code
namespace Commons.Game
{
[Serializable]
public class Unit
{
#region Fields
public int ID;
public Vector2 position;
public string name;
public int HP;
public int XP;
public int Lvl;
public bool active;
public float speed;
public string password;
#endregion
public Unit(Vector2 position, int HP, float speed, string name, string password, int ID)
{
active = true;
this.position = position;
this.HP = HP;
this.XP = 0;
this.speed = speed;
this.name = name;
this.Lvl = 1;
this.password = password;
this.ID = ID;
}
}
}
Server code
namespace SocketServer.Connection
{
class Server
{
#region Fields
Unit[] players;
UdpClient playersData;
Thread INHandlePlayers;
BinaryFormatter bf;
IPEndPoint playersEP;
#endregion
public Server()
{
this.players = new Unit[5];
bf = new BinaryFormatter();
this.playersData = new UdpClient(new IPEndPoint(IPAddress.Any, 3001));
this.playersEP = new IPEndPoint(IPAddress.Any, 3001);
this.INHandlePlayers = new Thread(new ThreadStart(HandleIncomePlayers));
this.INHandlePlayers.Name = "Handle income players.";
this.INHandlePlayers.Start();
}
private void HandleIncomePlayers()
{
Console.Out.WriteLine("Players income handler started.");
MemoryStream ms = new MemoryStream();
while (true)
{
byte[] data = playersData.Receive(ref playersEP);
ms.Write(data, 0, data.Length);
ms.Position = 0;
Unit player = null;
player = bf.Deserialize(ms) as Unit; //<-- 1st deserialization is OK, but after another client update the object doesn't change. I change the Vector with the position at the client, which sends the correct position I want, but at the server side the position doesn't change after the first deserialization.
Console.Out.WriteLine(player.name + " " + player.position.X + " " + player.position.Y);
ms.Flush();
for (int i = 0; i < players.Length; i++)
{
if (players[i] != null && player.ID == players[i].ID)
{
players[i] = player;
break;
}
}
}
}
}
}
Dude, I'm not quite sure about it, but I'd instantiate the MemoryStream object inside your loop, like below.
I can't try this code right now, sorry about that.
private void HandleIncomePlayers()
{
Console.Out.WriteLine("Players income handler started.");
MemoryStream ms;
while (true)
{
ms = new MemoryStream(); //here
byte[] data = playersData.Receive(ref playersEP);
ms.Write(data, 0, data.Length);
ms.Position = 0;
Unit player = null;
player = bf.Deserialize(ms) as Unit;
Console.Out.WriteLine(player.name + " " + player.position.X + " " + player.position.Y);
ms.Dispose(); //and here
for (int i = 0; i < players.Length; i++)
{
if (players[i] != null && player.ID == players[i].ID)
{
players[i] = player;
break;
}
}
}
}
Good day user1181369,
I've not used MemoryStream before myself, but I've been doing a little research through the MSDN library and my suspicion is that 'ms' is not clearing and all data you've loaded is actually being added to the stream, rather than replacing it.
MemoryStream.Flush(), for example, doesn't actually do anything (http://msdn.microsoft.com/en-us/library/system.io.memorystream.flush.aspx). If this is the case, then the for-loop would break upon finding the first instance of a specific player ID, not finding the newer versions.
Also, I am uncertain how the code you have supplied would deserialise multiple players, but that is outside the scope of this question and also, perhaps, beyond my current field of knowledge.
Regrettably, while I think I may have diagnosed the problem, I am not equipped at this point to offer a solution beyond instantiating a new MemoryStream in each loop.
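For what it's worth, a slightly tighter variant of the same fix is to wrap the received datagram directly, so each iteration gets a fresh, correctly sized stream; a sketch based on the loop above:
byte[] data = playersData.Receive(ref playersEP);
using (MemoryStream ms = new MemoryStream(data)) // fresh stream per datagram
{
    Unit player = bf.Deserialize(ms) as Unit;
    // ... update players[] as before ...
}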
At the moment I'm trying to figure out how I can manage to play a wave file in C# by filling up the secondary buffer with data from the wave file through threading and then playing the wave file.
Any help or sample code I can use?
Thanks.
sample code being used:
public delegate void PullAudio(short[] buffer, int length);
public class SoundPlayer : IDisposable
{
private Device soundDevice;
private SecondaryBuffer soundBuffer;
private int samplesPerUpdate;
private AutoResetEvent[] fillEvent = new AutoResetEvent[2];
private Thread thread;
private PullAudio pullAudio;
private short channels;
private bool halted;
private bool running;
public SoundPlayer(Control owner, PullAudio pullAudio, short channels)
{
this.channels = channels;
this.pullAudio = pullAudio;
this.soundDevice = new Device();
this.soundDevice.SetCooperativeLevel(owner, CooperativeLevel.Priority);
// Set up our wave format to 44,100Hz, with 16 bit resolution
WaveFormat wf = new WaveFormat();
wf.FormatTag = WaveFormatTag.Pcm;
wf.SamplesPerSecond = 44100;
wf.BitsPerSample = 16;
wf.Channels = channels;
wf.BlockAlign = (short)(wf.Channels * wf.BitsPerSample / 8);
wf.AverageBytesPerSecond = wf.SamplesPerSecond * wf.BlockAlign;
this.samplesPerUpdate = 512;
// Create a buffer with 2 seconds of sample data
BufferDescription bufferDesc = new BufferDescription(wf);
bufferDesc.BufferBytes = this.samplesPerUpdate * wf.BlockAlign * 2;
bufferDesc.ControlPositionNotify = true;
bufferDesc.GlobalFocus = true;
this.soundBuffer = new SecondaryBuffer(bufferDesc, this.soundDevice);
Notify notify = new Notify(this.soundBuffer);
fillEvent[0] = new AutoResetEvent(false);
fillEvent[1] = new AutoResetEvent(false);
// Set up two notification events, one at halfway, and one at the end of the buffer
BufferPositionNotify[] posNotify = new BufferPositionNotify[2];
posNotify[0] = new BufferPositionNotify();
posNotify[0].Offset = bufferDesc.BufferBytes / 2 - 1;
posNotify[0].EventNotifyHandle = fillEvent[0].Handle;
posNotify[1] = new BufferPositionNotify();
posNotify[1].Offset = bufferDesc.BufferBytes - 1;
posNotify[1].EventNotifyHandle = fillEvent[1].Handle;
notify.SetNotificationPositions(posNotify);
this.thread = new Thread(new ThreadStart(SoundPlayback));
this.thread.Priority = ThreadPriority.Highest;
this.Pause();
this.running = true;
this.thread.Start();
}
public void Pause()
{
if (this.halted) return;
this.halted = true;
Monitor.Enter(this.thread);
}
public void Resume()
{
if (!this.halted) return;
this.halted = false;
Monitor.Pulse(this.thread);
Monitor.Exit(this.thread);
}
private void SoundPlayback()
{
lock (this.thread)
{
if (!this.running) return;
// Set up the initial sound buffer to be the full length
int bufferLength = this.samplesPerUpdate * 2 * this.channels;
short[] soundData = new short[bufferLength];
// Prime it with the first x seconds of data
this.pullAudio(soundData, soundData.Length);
this.soundBuffer.Write(0, soundData, LockFlag.None);
// Start it playing
this.soundBuffer.Play(0, BufferPlayFlags.Looping);
int lastWritten = 0;
while (this.running)
{
if (this.halted)
{
Monitor.Pulse(this.thread);
Monitor.Wait(this.thread);
}
// Wait on one of the notification events
WaitHandle.WaitAny(this.fillEvent, 3, true);
// Get the current play position (divide by two because we are using 16 bit samples)
int tmp = this.soundBuffer.PlayPosition / 2;
// Generate new sounds from lastWritten to tmp in the sound buffer
if (tmp == lastWritten)
{
continue;
}
else
{
soundData = new short[(tmp - lastWritten + bufferLength) % bufferLength];
}
this.pullAudio(soundData, soundData.Length);
// Write in the generated data
soundBuffer.Write(lastWritten * 2, soundData, LockFlag.None);
// Save the position we were at
lastWritten = tmp;
}
}
}
public void Dispose()
{
this.running = false;
this.Resume();
if (this.soundBuffer != null)
{
this.soundBuffer.Dispose();
}
if (this.soundDevice != null)
{
this.soundDevice.Dispose();
}
}
}
The concept is the same as the one I'm using, but I can't manage to get a set of wave byte[] data to play.
I have not done this, but the first place I would look is XNA.
I know that the C# Managed DirectX project was ditched in favor of XNA, and I have found XNA to be good for graphics - I prefer using it to DirectX.
What is the reason that you decided not to just use SoundPlayer, as per this MSDN entry below?
private SoundPlayer Player = new SoundPlayer();
private void loadSoundAsync()
{
// Note: You may need to change the location specified based on
// the location of the sound to be played.
this.Player.SoundLocation = "http://www.tailspintoys.com/sounds/stop.wav";
this.Player.LoadAsync();
}
private void Player_LoadCompleted (
object sender,
System.ComponentModel.AsyncCompletedEventArgs e)
{
if (this.Player.IsLoadCompleted)
{
this.Player.PlaySync();
}
}
Usually I just load them all up in a thread or async delegate, then Play or PlaySync them when needed.
You can use the DirectSound support in SlimDX: http://slimdx.org/ :-)
You can use nBASS or, better, FMOD; both are great audio libraries and work nicely together with .NET.
DirectSound is where you want to go. It's a piece of cake to use, but I'm not sure what formats it can play besides .wav
http://msdn.microsoft.com/en-us/library/windows/desktop/ee416960(v=vs.85).aspx
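If an external library is an option, NAudio (the same library used in the LibVLC sample earlier on this page) hides the buffer-filling thread entirely. A minimal sketch, assuming the NAudio package is referenced (using NAudio.Wave) and a local test.wav exists:
using (var reader = new AudioFileReader("test.wav"))
using (var output = new WaveOutEvent())
{
    output.Init(reader); // NAudio fills the device buffer on a worker thread
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
        System.Threading.Thread.Sleep(100);
}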