C# - Change volume using external MIDI controller

In my app I use Midi-dot-net to receive NoteOn messages and NAudio to play audio samples/notes. In NAudio I'm using the ASIO implementation for lower latency and it works perfectly, but I have a problem with controlling volume.
Before I used the ASIO engine I was able to control the volume this way (part of the code):
private void ...()  // method name elided in the original post
{
    // Scale the trackbar value to the 0..0xFFFF range expected by waveOutSetVolume
    int NewVolume = (ushort.MaxValue / 50) * trackWave.Value;
    // Put the same value in the low and high words so both channels get the same level
    uint NewVolumeAllChannels = ((uint)NewVolume & 0x0000ffff) | ((uint)NewVolume << 16);
    waveOutSetVolume(IntPtr.Zero, NewVolumeAllChannels);
    volumeN.Text = trackWave.Value.ToString();
}
When I use the ASIO implementation in NAudio this no longer works: I can only mute the sound, but I can't change its volume.
Do you know how I can control the volume from a volume slider on an external MIDI controller? It works somehow when I test software from Steinberg, Synthogy and other audio vendors with the same ASIO drivers.
Thank you for any help.

With ASIO you change the volume by modifying the level of the samples you send to the device; there is no concept of device volume. So include a VolumeSampleProvider (or similar) in your signal chain and set the volume on that.
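A minimal sketch of such a signal chain, assuming an AudioFileReader as the source and MIDI CC values in the 0-127 range arriving from Midi-dot-net (the file name and the mapping are placeholders, not part of the original code):
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Source -> VolumeSampleProvider -> AsioOut
var reader = new AudioFileReader("sample.wav");             // placeholder sample file
var volumeProvider = new VolumeSampleProvider(reader);      // the volume lives here
var asioOut = new AsioOut();                                // default ASIO driver
asioOut.Init(new SampleToWaveProvider(volumeProvider));     // back to IWaveProvider for AsioOut
asioOut.Play();

// In the MIDI control-change handler, map the 0-127 slider value to 0.0-1.0:
// volumeProvider.Volume = controllerValue / 127f;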

Related

Playing (not recording/exporting) multiple MIDI tracks

I would like to have multiple MIDI instruments playing at once with NAudio. I've found instructions for how to play a single MIDI instrument, and I've found instructions for how to export multiple tracks in a single MidiEventCollection to a file. However, I can't seem to put these ideas together.
Here is some dumb example code I have that cycles through all my MIDI instruments, playing a major 3rd harmony for each one:
var midiOut = new MidiOut(0);
for (var i = 0; i <= 127; i++)
{
midiOut.Send(MidiMessage.ChangePatch(i, 1).RawData);
midiOut.Send(MidiMessage.StartNote(60, 127, 1).RawData);
midiOut.Send(MidiMessage.StartNote(64, 127, 1).RawData);
Thread.Sleep(500);
}
This works fine of course, but if I wanted that C and E to be played by different instruments, I'd be out of luck. I only have the one MIDI device, I can only have one connection to it open at a time, and MidiOut does not appear to support adding multiple tracks.
On the other hand, the MidiEventCollection code looks like it would be more or less what I want, but I only see examples of exporting it to a file rather than actually playing the events. I put together something like this:
var events = new MidiEventCollection(1, 120);
var track = events.AddTrack();
var setInstrument = new PatchChangeEvent(0, 1, 66);
var play = new NoteOnEvent(0, 1, 60, 127, 1000);
track.Add(setInstrument);
track.Add(play);
But at this point I cannot figure out how to actually play the track, rather than export it.
If you want to play two different patches at the same time, this is what MIDI channels are for.
At your disposal are 16 channels, of which channel 10 is reserved for percussion if you're using a GM scheme.
In your first code snippet, you appear to be using only MIDI channel 1.
How about using more than one channel and loading different patches for each channel?
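A rough sketch of that idea with NAudio's MidiOut, assuming a GM-style device (the patch numbers 0 and 40 are just example choices for piano and violin):
var midiOut = new MidiOut(0);
// Load a different patch on each channel
midiOut.Send(MidiMessage.ChangePatch(0, 1).RawData);      // e.g. piano on channel 1
midiOut.Send(MidiMessage.ChangePatch(40, 2).RawData);     // e.g. violin on channel 2
// Play the C on channel 1 and the E on channel 2
midiOut.Send(MidiMessage.StartNote(60, 127, 1).RawData);
midiOut.Send(MidiMessage.StartNote(64, 127, 2).RawData);
Thread.Sleep(500);
midiOut.Send(MidiMessage.StopNote(60, 0, 1).RawData);
midiOut.Send(MidiMessage.StopNote(64, 0, 2).RawData);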

Intel RealSense audio not coming

I am recording video with the Intel RealSense camera. The video recording works successfully, but there is no audio in the recorded video.
My question is: my configuration is a Lenovo Yoga 15 with the built-in RealSense camera. Do I need to install an audio driver to get sound? Is that required?
Please give me some suggestions. Here is my code:
session = PXCMSession.CreateInstance();
senseManager = session.CreateSenseManager();
senseManager.captureManager.SetFileName("new4.rssdk", true);
// Only the color stream is enabled here; no audio stream is requested
senseManager.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, WIDTH, HEIGHT, 30);
senseManager.Init();
for (int i = 0; i < 200; i++)
{
    if (senseManager.AcquireFrame(true).IsError()) break;
    PXCMCapture.Sample sample = senseManager.QuerySample();
    senseManager.ReleaseFrame();
    colorBitmap.Dispose();
}
I didn't understand the question well; do you want to record audio and video together?
If so, you have to create an instance of the class responsible for doing that.
Take a look here: Speech Recognition.
I used two threads to do that. I have an application that uses facial recognition and audio recognition; I decided to split them onto separate threads and it worked very well.
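A rough sketch of that two-thread split, using NAudio's WaveInEvent for the audio side (the RealSense capture loop is passed in as a placeholder delegate; this records a separate WAV file alongside the .rssdk rather than embedding audio into it):
using System;
using System.Threading;
using NAudio.Wave;

static void RecordWithAudio(Action realSenseCaptureLoop)
{
    var waveIn = new WaveInEvent();                              // default microphone
    var writer = new WaveFileWriter("capture.wav", waveIn.WaveFormat);
    waveIn.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);
    waveIn.StartRecording();

    var videoThread = new Thread(() => realSenseCaptureLoop()); // e.g. the 200-frame loop above
    videoThread.Start();
    videoThread.Join();                                         // wait for the video capture to finish

    waveIn.StopRecording();
    writer.Dispose();
    waveIn.Dispose();
}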

Playing Mono encoded audio out rear channels through SlimDX XAudio2 when Sound Card only reports Channel Mask of 3

I had an older system (XP) that allowed me to play mono-encoded PCM audio out of the rear channels (back right and/or back left) through DirectX, using a DirectSoundBuffer and a WAVEFORMATEXTENSIBLE object with the channel mask set for the back speakers.
I am trying to redo the same functionality on both the old system and a new system with .NET and SlimDX XAudio2, but when I call SetOutputMatrix on the old system it ignores everything I set for anything other than the front left and right channels, even though the sound card is configured for 5.1 surround sound. On the new system it works fine. When I call GetDeviceDetails on the old system it reports a channel mask of 3 (only the front two speakers). I am guessing that is why it only lets me play out of the front, even though the old C++ DirectSound code worked. I suspect the sound driver is at fault, but I can't find any updates for it.
Is there a workaround for this?
Here is a sample from the XAudio2 C++ Basic Sound Demo that shows the same behavior:
if( FAILED( hr = XAudio2Create( &pXAudio2, flags ) ) )
{
    wprintf( L"Failed to init XAudio2 engine: %#X\n", hr );
    CoUninitialize();
    return 0;
}
UINT32 devCount;
hr = pXAudio2->GetDeviceCount(&devCount);
XAUDIO2_DEVICE_DETAILS devDetails;
hr = pXAudio2->GetDeviceDetails(0, &devDetails);
Here, when I get the device details, the ChannelMask is 3 (front speakers) even though the sound card configuration is set for 5.1. The sound card is an old SoundMAX AC'97 card. Later on I do the following:
float fMatrix[6] = {0};
pSourceVoice->GetOutputMatrix(pMasteringVoice, 1, 6, fMatrix);
// Try to route the mono source to every output channel except the first (front left)
fMatrix[0] = 0.0;
fMatrix[1] = 1.0;
fMatrix[2] = 1.0;
fMatrix[3] = 1.0;
fMatrix[4] = 1.0;
fMatrix[5] = 1.0;
hr = pSourceVoice->SetOutputMatrix(NULL, 1, 6, fMatrix);
pSourceVoice->GetOutputMatrix(pMasteringVoice, 1, 6, fMatrix);
After the first GetOutputMatrix, fMatrix has indexes [0] and [1] set to 1.0. With SetOutputMatrix I am trying to set the other channels to 1.0, but after the second GetOutputMatrix only index [1] is 1.0; everything else is 0.
NOTE: when SetOutputMatrix doesn't apply all the values it doesn't fail; it returns OK.

NAudio recording from headset

I have been using the code in http://opensebj.blogspot.com/2009/04/naudio-tutorial-5-recording-audio.html to record audio. Basically this code:
WaveIn waveInStream;
WaveFileWriter writer;
waveInStream = new WaveIn(44100,2);
writer = new WaveFileWriter(outputFilename, waveInStream.WaveFormat);
waveInStream.DataAvailable += new EventHandler<WaveInEventArgs>(waveInStream_DataAvailable);
waveInStream.StartRecording();
It works perfectly and records every sound on the system. The problem arises when I plug in a headset (not USB, just directly into the headset jack of the built-in sound card on my laptop). The effect is that any sound I can hear in the headset is not recorded.
I think it has something to do with which device I am recording from, but I can't quite figure it out.
I am trying to record a conversation, which means I would like to record the sound that comes from the mic and the sound I can hear in the headset at the same time.
Can someone point me in the right direction? Thanks.
// Get default capturer
waveInStream = new WaveIn(44100,2);
Now if you plug in your headset microphone/speaker, Windows will detect it, but your program is still using the old 'default' endpoint, not the newly added device. This needs to be updated.
One solution would be to poll the endpoints, check whether a new (default) device has been added to the system, and then stop and restart the recording with the new device:
waveInStream.StopRecording();
// assign the default recording device if not already done
//waveInStream.DeviceNumber = 0;
waveInStream.StartRecording();
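A minimal sketch of picking the capture device explicitly with NAudio's WaveIn enumeration (the headset's device index will vary; the names printed depend on your drivers):
for (int deviceIndex = 0; deviceIndex < WaveIn.DeviceCount; deviceIndex++)
{
    var caps = WaveIn.GetCapabilities(deviceIndex);
    Console.WriteLine(deviceIndex + ": " + caps.ProductName);
}

waveInStream.StopRecording();
waveInStream.DeviceNumber = 1;   // assumed index of the headset mic found above
waveInStream.StartRecording();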
Let me know if I was not very clear in the explanation.

How to mute the microphone in C#

I wanted to know what the code would be to toggle mute/unmute of my microphone. I am making a program that runs in the background, picks up a keypress event and toggles mute/unmute of the mic. Any help with that code would be very helpful. I am pretty new to C#, and this is just a really simple program I wanted to make. That is all it does: it listens for a press of the spacebar, even when the program is in the background, and when the spacebar is pressed it mutes/unmutes the mic.
Thank you for any and all help!
For Windows Vista and newer you can no longer use the Media Control Interface; Microsoft has a new Core Audio API that you must use to interface with audio hardware on these newer operating systems.
Ray Molenkamp wrote a nice managed wrapper for interfacing with the Core Audio API here:
Vista Core Audio API Master Volume Control
Since I needed to be able to mute the microphone from XP, Vista and Windows 7 I wrote a little Windows Microphone Mute Library which uses Ray's library on the newer operating systems and parts of Gustavo Franco's MixerNative library for Windows XP and older.
You can download the source of a whole application which has muting the microphone, selecting it as a recording device, etc.
http://www.codeguru.com/csharp/csharp/cs_graphics/sound/article.php/c10931/
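If you only need Vista and later, a minimal sketch of the same idea using NAudio's CoreAudioApi wrapper (a different wrapper than Ray's, but the same underlying Core Audio API):
using NAudio.CoreAudioApi;

// Toggle mute on the default capture (microphone) endpoint
var enumerator = new MMDeviceEnumerator();
var mic = enumerator.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Communications);
mic.AudioEndpointVolume.Mute = !mic.AudioEndpointVolume.Mute;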
You can use the Windows mixer API (winmm.dll) to access microphones and change their volume system-wide. Check the code below; it sets the volume to 0 for all system microphones. The code is in C; check pinvoke.net for details on how to translate it to C#.
#include "mmsystem.h"
...
void MuteAllMics()
{
HMIXER hmx;
mixerOpen(&hmx, 0, 0, 0, 0);
// Get the line info for the wave in destination line
MIXERLINE mxl;
mxl.cbStruct = sizeof(mxl);
mxl.dwComponentType = MIXERLINE_COMPONENTTYPE_DST_WAVEIN;
mixerGetLineInfo((HMIXEROBJ)hmx, &mxl, MIXER_GETLINEINFOF_COMPONENTTYPE);
// find the microphone source line connected to this wave in destination
DWORD cConnections = mxl.cConnections;
for (DWORD j=0; j<cConnections; j++)
{
mxl.dwSource = j;
mixerGetLineInfo((HMIXEROBJ)hmx, &mxl, MIXER_GETLINEINFOF_SOURCE);
if (MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE == mxl.dwComponentType)
{
// Find a volume control, if any, of the microphone line
LPMIXERCONTROL pmxctrl = (LPMIXERCONTROL)malloc(sizeof MIXERCONTROL);
MIXERLINECONTROLS mxlctrl =
{
sizeof mxlctrl,
mxl.dwLineID,
MIXERCONTROL_CONTROLTYPE_VOLUME,
1,
sizeof MIXERCONTROL,
pmxctrl
};
if (!mixerGetLineControls((HMIXEROBJ) hmx, &mxlctrl, MIXER_GETLINECONTROLSF_ONEBYTYPE))
{
DWORD cChannels = mxl.cChannels;
if (MIXERCONTROL_CONTROLF_UNIFORM & pmxctrl->fdwControl)
cChannels = 1;
LPMIXERCONTROLDETAILS_UNSIGNED pUnsigned = (LPMIXERCONTROLDETAILS_UNSIGNED)
malloc(cChannels * sizeof MIXERCONTROLDETAILS_UNSIGNED);
MIXERCONTROLDETAILS mxcd =
{
sizeof(mxcd),
pmxctrl->dwControlID,
cChannels,
(HWND)0,
sizeof MIXERCONTROLDETAILS_UNSIGNED,
(LPVOID) pUnsigned
};
mixerGetControlDetails((HMIXEROBJ)hmx, &mxcd, MIXER_SETCONTROLDETAILSF_VALUE);
// Set the volume to the middle (for both channels as needed)
//pUnsigned[0].dwValue = pUnsigned[cChannels - 1].dwValue = (pmxctrl->Bounds.dwMinimum+pmxctrl->Bounds.dwMaximum)/2;
// Mute
pUnsigned[0].dwValue = pUnsigned[cChannels - 1].dwValue = 0;
mixerSetControlDetails((HMIXEROBJ)hmx, &mxcd, MIXER_SETCONTROLDETAILSF_VALUE);
free(pmxctrl);
free(pUnsigned);
}
else
{
free(pmxctrl);
}
}
}
mixerClose(hmx);
}
Here you can find more code on this topic.
Hope this helps, regards.
I have several microphones on Windows 7, and the class WindowsMicrophoneMuteLibrary.CoreAudioMicMute does not handle this case correctly.
So I changed the code and it works great: it now mutes all the microphones, not just the last one recognized by Windows 7.
I am attaching the new class to use in its place.
http://www.developpez.net/forums/d1145354/dotnet/langages/csharp/couper-micro-sous-win7/
