I'm trying to record an input and mix it together with a song (not concatenate them). I recorded a guitar while listening to a song, and now I want to lay the guitar track over the song (like in Audacity).
Is there any way to do this? If real-time mixing isn't possible, can I merge them after recording? For example, the guitar is now a WAV file and I want to mix two WAV files together.
This is the capture code for the input device:
private void Capture()
{
    input = new WasapiCapture((MMDevice)inputCombo.SelectedItem);
    bufferedWaveProvider = new BufferedWaveProvider(input.WaveFormat);
    // create the writer before recording starts so DataAvailable never sees a null writer
    write = new WaveFileWriter(System.IO.Path.GetTempFileName(), input.WaveFormat);
    input.DataAvailable += WaveInOnDataAvailable;
    input.StartRecording();
}

private void WaveInOnDataAvailable(object sender, WaveInEventArgs e)
{
    bufferedWaveProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);
    write.Write(e.Buffer, 0, e.BytesRecorded);
    write.Flush();
}
Instead of writing into a blank file, I want to write into a WAV file that already exists, without overwriting it. Is this maybe possible with a MixingSampleProvider?
That should be possible with a WaveMixerStream32, e.g. like this:
var mixer = new WaveMixerStream32 { AutoStop = true };
var wav1 = new WaveFileReader(@"c:\...\1.wav");
var wav2 = new WaveFileReader(@"c:\...\2.wav");
mixer.AddInputStream(new WaveChannel32(wav1));
mixer.AddInputStream(new WaveChannel32(wav2));
WaveFileWriter.CreateWaveFile("mixed.wav", new Wave32To16Stream(mixer));
To mix multiple ISampleProvider sources using a MixingSampleProvider, you can do the following. Here, SignalGenerator has a Gain property which lets you specify how loud each source should be in the mix:
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

namespace ConsoleApplication1
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            ISampleProvider provider1 = new SignalGenerator
            {
                Frequency = 1000.0f,
                Gain = 0.5f
            };
            ISampleProvider provider2 = new SignalGenerator
            {
                Frequency = 1250.0f,
                Gain = 0.5f
            };

            var takeDuration1 = TimeSpan.FromSeconds(5); // otherwise the generator would emit indefinitely
            var takeDuration2 = TimeSpan.FromSeconds(10);

            var sources = new[]
            {
                provider1.Take(takeDuration1),
                provider2.Take(takeDuration2)
            };

            var mixingSampleProvider = new MixingSampleProvider(sources);
            var waveProvider = mixingSampleProvider.ToWaveProvider();
            WaveFileWriter.CreateWaveFile("test.wav", waveProvider);
        }
    }
}
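For the original use case, mixing the recorded guitar WAV with the song WAV, the same approach works with file readers instead of signal generators. Here is a minimal sketch (the file names are placeholders; it assumes both files share the same sample rate and channel count, so resample first if they differ):
using (var guitar = new AudioFileReader("guitar.wav")) // hypothetical path
using (var song = new AudioFileReader("song.wav"))     // hypothetical path
{
    // AudioFileReader implements ISampleProvider (IEEE float samples)
    var mix = new MixingSampleProvider(new ISampleProvider[] { guitar, song });
    WaveFileWriter.CreateWaveFile16("mixed.wav", mix); // writes 16-bit PCM
}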
I am trying to read a video file, resize the frames, and write them to an output file:
using System;
using System.Drawing;
using System.Windows.Forms;
using OpenCvSharp;
using OpenCvSharp.Extensions;

namespace VideoProcessing
{
    public class Player
    {
        private VideoCapture capture;
        private VideoWriter writer;
        private Mat matInternal;
        public Bitmap bmInternal;
        private bool bIsPlaying = false;
        public Timer MyTimer = new Timer();
        const string outname = "output.avi";
        OpenCvSharp.Size dsize = new OpenCvSharp.Size(640, 480);

        public void InitPlayer(string videoName)
        {
            capture = new VideoCapture(videoName);
            writer = new VideoWriter(outname, FourCC.MJPG, capture.Fps, dsize);
            matInternal = Mat.Zeros(dsize, MatType.CV_8UC3);
            bmInternal = matInternal.ToBitmap();
            var delay = 1000 / (int)capture.Fps;
            MyTimer.Interval = delay;
            MyTimer.Tick += new EventHandler(mk_onTick());
            MyTimer.Start();
        }

        private Action<object, EventArgs> mk_onTick()
        {
            return (object sender, EventArgs e) =>
            {
                capture.Read(matInternal);
                if (matInternal.Empty())
                {
                    Console.WriteLine("Empty frame!");
                }
                else
                {
                    matInternal.Resize(dsize);
                    bmInternal = matInternal.ToBitmap();
                    writer.Write(matInternal);
                }
            };
        }

        public void Dispose()
        {
            capture.Dispose();
            writer.Dispose();
        }
    }
}
This is executed in my main function as follows:
using System;
using System.Drawing;
using OpenCvSharp;
using OpenCvSharp.Extensions;

namespace VideoProcessing
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            var videoName = "input.mp4";
            var pl = new Player();
            pl.InitPlayer(videoName);
            // Some other code that executes in the meantime
            pl.Dispose();
        }
    }
}
The writer can get disposed before the video finishes, which is fine because this will later be adapted for live camera streams. However, the VideoWriter here produces an apparently empty, zero-second video file. The codec setting does not produce any errors, and the video is only 24 FPS, so it should not be running into any speed issues. What could be causing this?
I think you have to delay your main thread, for instance by adding Thread.Sleep(2000). I tried your code with a camera and it works well.
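For illustration, a minimal sketch of that suggestion applied to the question's Main (Thread.Sleep is a blunt instrument; a real application would signal completion instead):
var pl = new Player();
pl.InitPlayer("input.mp4");
System.Threading.Thread.Sleep(2000); // keep the main thread alive while the timer writes frames
pl.Dispose();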
I've been racking my brain over this for a while now. I've looked all over SO already and I'm not finding any answers to my problem. What I'm attempting to build is a function that lets me select an input card for a microphone and an output card that goes to a radio. This code works on the initial try, but once I stop "Transmitting", you can hear what sounds like a doubled-up audio stream; it becomes laggy and eventually crashes with a buffer-full exception. I'm not sure what I'm doing wrong here.
public WaveOutEvent outputDevice = new WaveOutEvent() { DeviceNumber = -1 };
public WaveInEvent inputDevice = new WaveInEvent() { DeviceNumber = -1 };
public bool transmit = false;
public bool markerActive = false;
public bool alert1Active = false;
public SerialPort port = new SerialPort();
public string[] ports = SerialPort.GetPortNames();
private BufferedWaveProvider bufferedWaveProvider;

public string keyTransmitter()
{
    string label;
    if (transmit)
    {
        transmit = false;
        port.DtrEnable = false;
        port.RtsEnable = false;
        label = "Transmit";
        bufferedWaveProvider.ClearBuffer();
        inputDevice.StopRecording();
        inputDevice.Dispose();
        outputDevice.Dispose();
        outputDevice.Stop();
    }
    else
    {
        transmit = true;
        port.DtrEnable = true;
        port.RtsEnable = true;
        label = "Transmitting";
        bufferedWaveProvider = new BufferedWaveProvider(inputDevice.WaveFormat);
        inputDevice.DataAvailable += OnDataAvailable;
        inputDevice.StartRecording();
        outputDevice.Init(bufferedWaveProvider);
        outputDevice.Play();
    }
    return label;
}

public void OnDataAvailable(object sender, WaveInEventArgs args)
{
    bufferedWaveProvider.AddSamples(args.Buffer, 0, args.BytesRecorded);
    //bufferedWaveProvider.DiscardOnBufferOverflow = true;
}
A BufferedWaveProvider has a size limit, so it will eventually overflow if audio is added faster than it is played. You can enlarge the buffer and set DiscardOnBufferOverflow so that excess samples are dropped instead of throwing an exception:
BufferedWaveProvider waveChannel = new BufferedWaveProvider(inputDevice.WaveFormat);
waveChannel.BufferDuration = new TimeSpan(0, 1, 0); // buffer up to 1 minute
waveChannel.DiscardOnBufferOverflow = true;         // drop excess samples instead of throwing
Note: if you keep a single playback chain, it is not necessary to create a new BufferedWaveProvider each time, and you do not need to call Init or Play again; when the buffer runs dry, it simply produces silence for WaveOut.
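Applying that to the question's code, a sketch might look like the following. (Subscribing to DataAvailable only once and stopping rather than disposing the devices are assumptions beyond the original answer, but re-subscribing the same handler on every start is a plausible cause of the doubled-up audio, since each subscription adds the samples again.)
private void StartTransmit()
{
    if (bufferedWaveProvider == null)
    {
        bufferedWaveProvider = new BufferedWaveProvider(inputDevice.WaveFormat)
        {
            BufferDuration = TimeSpan.FromSeconds(5),
            DiscardOnBufferOverflow = true // drop excess instead of throwing
        };
        inputDevice.DataAvailable += OnDataAvailable; // subscribe once, not per start
        outputDevice.Init(bufferedWaveProvider);      // init once
    }
    inputDevice.StartRecording();
    outputDevice.Play();
}

private void StopTransmit()
{
    inputDevice.StopRecording(); // stop, but keep the devices for reuse
    outputDevice.Stop();
    bufferedWaveProvider.ClearBuffer();
}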
I'm working on an application in C# that records video streams from IP cameras.
I'm using Accord.Video.FFMPEG.VideoFileWriter and the nVLC C# wrapper.
I have a class that captures audio from the stream using nVLC; it implements the IAudioSource interface, so I've used CustomAudioRenderer to capture the sound data and then raised a NewFrame event that contains the signal object.
The problem is that when saving the signal to a video file, the sound is terrible (choppy) when recording from an RTSP stream, but it is fine when recording from the local microphone (on the laptop).
Here is the code that raises the event:
public void Start()
{
    _mFactory = new MediaPlayerFactory();
    _mPlayer = _mFactory.CreatePlayer<IAudioPlayer>();
    _mMedia = _mFactory.CreateMedia<IMedia>(Source);
    _mPlayer.Open(_mMedia);
    var fc = new Func<SoundFormat, SoundFormat>(SoundFormatCallback);
    _mPlayer.CustomAudioRenderer.SetFormatCallback(fc);
    var ac = new AudioCallbacks { SoundCallback = SoundCallback };
    _mPlayer.CustomAudioRenderer.SetCallbacks(ac);
    _mPlayer.Play();
}

private void SoundCallback(Sound newSound)
{
    var data = new byte[newSound.SamplesSize];
    Marshal.Copy(newSound.SamplesData, data, 0, (int)newSound.SamplesSize);
    NewFrame(this, new Accord.Audio.NewFrameEventArgs(new Signal(data, Channels, data.Length, SampleRate, Format)));
}

private SoundFormat SoundFormatCallback(SoundFormat arg)
{
    Channels = arg.Channels;
    SampleRate = arg.Rate;
    BitPerSample = arg.BitsPerSample;
    return arg;
}
And here is the code that captures the event:
private void source_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    Signal sig = eventArgs.Signal;
    duration += eventArgs.Signal.Duration;
    if (videoFileWrite == null)
    {
        videoFileWrite = new VideoFileWriter();
        videoFileWrite.AudioBitRate = sig.NumberOfSamples * sig.NumberOfChannels * sig.SampleSize;
        videoFileWrite.SampleRate = sig.SampleRate;
        videoFileWrite.FrameSize = sig.NumberOfSamples / sig.NumberOfFrames;
        videoFileWrite.Open("d:\\output.mp4");
    }
    if (isStartRecord)
    {
        DoneWriting = false;
        using (MemoryStream ms = new MemoryStream())
        {
            encoder = new WaveEncoder(ms);
            encoder.Encode(eventArgs.Signal);
            ms.Seek(0, SeekOrigin.Begin);
            decoder = new WaveDecoder(ms);
            Signal s = decoder.Decode();
            videoFileWrite.WriteAudioFrame(s);
            encoder.Close();
            decoder.Close();
        }
        DoneWriting = true;
    }
}
I solved this problem by taking only one channel from the Sound object newSound in the SoundCallback method, then creating a signal from that array of bytes and raising the NewFrame event.
The main idea behind this solution: when the audio stream contains more than one channel, the SamplesData property of the Sound object in SoundCallback contains the bytes for all channels interleaved.
Assume the sound data for one channel, with two-byte samples, is A1A2 B1B2 C1C2 ...; the SamplesData for two channels would then be A1A2A1A2 B1B2B1B2 C1C2C1C2 ... .
Hope that helps you out.
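A hypothetical sketch of the de-interleaving described above, keeping only the first channel from interleaved samples (the method name and parameters are illustrative, not from the original post):
private static byte[] ExtractFirstChannel(byte[] interleaved, int channels, int bytesPerSample)
{
    int frameSize = channels * bytesPerSample; // one sample per channel per frame
    int frames = interleaved.Length / frameSize;
    var mono = new byte[frames * bytesPerSample];
    for (int i = 0; i < frames; i++)
    {
        // copy the first channel's sample out of each interleaved frame
        Buffer.BlockCopy(interleaved, i * frameSize, mono, i * bytesPerSample, bytesPerSample);
    }
    return mono;
}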
I can't seem to record the audio from the default audio device and play it on another audio device.
I don't want to record the microphone, but the audio output device itself.
When I play a movie I can hear the sound through my headphones, and I want to copy that sound to any audio device I choose.
If you have any suggestions, it doesn't have to be with NAudio; as far as I can tell, NAudio can't do this.
This is the code I use at the moment with NAudio, but it only takes input from my microphone:
void playSoundCopy(int dv0)
{
    disposeWave0(); // stop previous sounds before starting
    var waveOut0 = new WaveOut();
    waveOut0.DeviceNumber = dv0;
    wave0 = waveOut0;
    Defaultwave0 = new WaveIn();
    Defaultwave0.DeviceNumber = (int)GetDefaultDevice(Defaultdevice.FriendlyName);
    var waveinReader0 = new WaveInProvider(Defaultwave0);
    wave0.Init(waveinReader0);
    play0 = false;
    Thread.Sleep(1000);
    play0 = true;
    t0 = new Thread(() => { timeline0(); });
    t0.IsBackground = true;
    t0.Start();
    Defaultwave0.StartRecording();
    wave0.Play();
}
The real problem is that I can't record from a WaveOut device, only from WaveIn.
Working Result:
void playSoundCopy(int dv0)
{
    disposeWave0(); // stop previous sounds before starting
    var waveOut0 = new WaveOut();
    waveOut0.DeviceNumber = dv0;
    wave0 = waveOut0;
    var format0 = Defaultdevice.AudioClient.MixFormat;
    buffer0 = new BufferedWaveProvider(format0);
    wave0.Init(buffer0);
    capture = new WasapiLoopbackCapture(Defaultdevice);
    capture.ShareMode = AudioClientShareMode.Shared;
    capture.DataAvailable += CaptureOnDataAvailable;
    play0 = false;
    Thread.Sleep(1000);
    play0 = true;
    t0 = new Thread(() => { timeline0(); });
    t0.IsBackground = true;
    t0.Start();
    capture.StartRecording();
    wave0.Play();
}

void CaptureOnDataAvailable(object sender, WaveInEventArgs waveInEventArgs)
{
    try
    {
        var itm = new byte[waveInEventArgs.BytesRecorded];
        Array.Copy(waveInEventArgs.Buffer, itm, waveInEventArgs.BytesRecorded);
        buffer0.AddSamples(itm, 0, itm.Length);
    }
    catch { }
}
You can capture audio being sent to a device using WasapiLoopbackCapture. Then you could pipe that into a BufferedWaveProvider and use that to feed another output device. There would be some latency introduced though, so don't expect the two devices to be in sync.
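A condensed sketch of that chain (device number 1 is an arbitrary example for the second output):
var capture = new WasapiLoopbackCapture(); // captures the default render device
var buffer = new BufferedWaveProvider(capture.WaveFormat) { DiscardOnBufferOverflow = true };
capture.DataAvailable += (s, e) => buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

var output = new WaveOutEvent { DeviceNumber = 1 }; // the second output device
output.Init(buffer);
capture.StartRecording();
output.Play();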
I want to play a sound through three external sound cards at the same time: when I click a button, I should hear the sound from the three speakers attached to my three sound cards.
I have a function, but it plays the sound on only one device, the first one it finds. In this code the first device is number 0, so it plays on that; if you set device number 1 first, it plays on that instead. In short, it plays on only one device, not on all of them at the same time.
This is the code:
public void playAllAvailableDevices()
{
    // create a new class for each wav file & output etc.
    WaveOut waveOut1 = new WaveOut();
    WaveFileReader waveReader1 = new WaveFileReader(fileName);
    WaveOut waveOut2 = new WaveOut();
    WaveFileReader waveReader2 = new WaveFileReader(fileName);
    WaveOut waveOut3 = new WaveOut();
    WaveFileReader waveReader3 = new WaveFileReader(fileName);
    switch (waveOutDevices)
    {
        case 1:
            waveOut1.Init(waveReader1);
            waveOut1.DeviceNumber = 0;
            waveOut1.Play();
            break;
        case 2:
            waveOut1.Init(waveReader1);
            waveOut1.DeviceNumber = 0;
            waveOut1.Play();
            waveOut2.Init(waveReader2);
            waveOut2.DeviceNumber = 1;
            waveOut2.Play();
            break;
        case 3:
            waveOut1.Init(waveReader1);
            waveOut1.DeviceNumber = 0;
            waveOut1.Play();
            waveOut2.Init(waveReader2);
            waveOut2.DeviceNumber = 1;
            waveOut2.Play();
            waveOut3.Init(waveReader3);
            waveOut3.DeviceNumber = 2;
            waveOut3.Play();
            break;
    }
}
fileName is the name of the sound file I want to play; in my code I get this name from a database:
private void btnAlarm1_Click(object sender, EventArgs e)
{
    OdbcConnection cn = new OdbcConnection("DSN=cp1");
    cn.Open();
    OdbcCommand cmd1 = new OdbcCommand("select chemin from alarme where code_alarme=41", cn);
    cmd1.Connection = cn;
    fileName = cmd1.ExecuteScalar().ToString();
    wave = new WaveOut();
    playAllAvailableDevices();
}
Can you help me find a solution, please? Thank you in advance.
You need to set the DeviceNumber property on the WaveOut object before calling Init. You could clean up your code a lot by using a simple function:
private void PlaySoundInDevice(int deviceNumber, string fileName)
{
    if (outputDevices.ContainsKey(deviceNumber))
    {
        outputDevices[deviceNumber].WaveOut.Dispose();
        outputDevices[deviceNumber].WaveStream.Dispose();
    }
    var waveOut = new WaveOut();
    waveOut.DeviceNumber = deviceNumber;
    WaveStream waveReader = new WaveFileReader(fileName);
    // hold onto the WaveOut and WaveStream so we can dispose them later
    outputDevices[deviceNumber] = new PlaybackSession { WaveOut = waveOut, WaveStream = waveReader };
    waveOut.Init(waveReader);
    waveOut.Play();
}

private Dictionary<int, PlaybackSession> outputDevices = new Dictionary<int, PlaybackSession>();

class PlaybackSession
{
    public IWavePlayer WaveOut { get; set; }
    public WaveStream WaveStream { get; set; }
}
The dictionary holds onto the WaveOut so it doesn't get garbage collected during playback, and lets us dispose everything afterwards. Before you exit your application, make sure you clean up properly:
private void DisposeAll()
{
    foreach (var playbackSession in outputDevices.Values)
    {
        playbackSession.WaveOut.Dispose();
        playbackSession.WaveStream.Dispose();
    }
}
And now you can play on all available devices using a for loop instead of a switch statement that forces you to duplicate code:
public void PlayInAllAvailableDevices(string fileName)
{
    int waveOutDevices = WaveOut.DeviceCount;
    for (int n = 0; n < waveOutDevices; n++)
    {
        PlaySoundInDevice(n, fileName);
    }
}
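For example, the button handler from the question could then shrink to something like this (keeping the database lookup exactly as in the question):
private void btnAlarm1_Click(object sender, EventArgs e)
{
    // fileName is fetched from the database as before
    PlayInAllAvailableDevices(fileName);
}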
I think what you are looking for is the BASS audio library:
http://www.un4seen.com/
This might also work:
http://www.alvas.net/alvas.audio.aspx
I don't think C# can do this out of the box without a third-party library like the ones listed above. Maybe someone smarter than I am can help you get it working, but if not, these libraries will help you down the path you want.