Playing (not recording/exporting) multiple MIDI tracks - C#

I would like to have multiple MIDI instruments playing at once with NAudio. I've found instructions for how to play a single MIDI instrument, and I've found instructions for how to export multiple tracks in a single MidiEventCollection to a file. However, I can't seem to put these ideas together.
Here is some dumb example code I have that cycles through all my MIDI instruments, playing a major 3rd harmony for each one:
var midiOut = new MidiOut(0);
for (var i = 0; i <= 127; i++)
{
midiOut.Send(MidiMessage.ChangePatch(i, 1).RawData);
midiOut.Send(MidiMessage.StartNote(60, 127, 1).RawData);
midiOut.Send(MidiMessage.StartNote(64, 127, 1).RawData);
Thread.Sleep(500);
}
This works fine, of course, but if I wanted that C and E to be played by different instruments, I'd be out of luck. I only have the one MIDI device, I can only have one connection to it open at a time, and MidiOut does not appear to support adding multiple tracks.
On the other hand, the MidiEventCollection code looks like it would be more or less what I want, but I only see examples of exporting it to a file, rather than actually playing the events. I put together something like this:
var events = new MidiEventCollection(1, 120);
var track = events.AddTrack();
var setInstrument = new PatchChangeEvent(0, 1, 66);
var play = new NoteOnEvent(0, 1, 60, 127, 1000);
track.Add(setInstrument);
track.Add(play);
But at this point I cannot figure out how to actually play the track, rather than export it.

If you want to play two different patches at the same time, this is what MIDI channels are for.
At your disposal are 16 channels, of which channel 10 is reserved for percussion if you're using a GM scheme.
In your first code snippet, you appear to be using only MIDI channel 1.
How about using more than one channel and loading different patches for each channel?
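Something along these lines should do it (a rough, untested sketch using the same NAudio calls as your snippet; the patch numbers 0 and 66 are just arbitrary examples):
using System.Threading;
using NAudio.Midi;

using (var midiOut = new MidiOut(0))
{
    // Load a different instrument onto each channel (NAudio channels are 1-16).
    midiOut.Send(MidiMessage.ChangePatch(0, 1).RawData);  // instrument for channel 1
    midiOut.Send(MidiMessage.ChangePatch(66, 2).RawData); // a different instrument for channel 2

    // Play the C on channel 1 and the E on channel 2 at the same time.
    midiOut.Send(MidiMessage.StartNote(60, 127, 1).RawData);
    midiOut.Send(MidiMessage.StartNote(64, 127, 2).RawData);
    Thread.Sleep(500);
    midiOut.Send(MidiMessage.StopNote(60, 0, 1).RawData);
    midiOut.Send(MidiMessage.StopNote(64, 0, 2).RawData);
}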

How to read text from 'simple' screenshot fast and effectively?

I'm working on a small personal application that should read some text (2 sentences at most) from a really simple Android screenshot. The text is always the same size, same font, and in approx. the same location. The background is very plain, usually a few shades of 1 color (think like bright orange fading into a little darker orange). I'm trying to figure out what would be the best way (and most importantly, the fastest way) to do this.
My first attempt involved the IronOcr C# library, and to be fair, it worked quite well! But I've noticed a few issues with it:
It's not 100% accurate
Despite having a community/trial version, it sometimes throws exceptions telling you to get a license
It takes ~400ms to read a ~600x300 pixel image, which, for such a simple image, I consider to be rather long
As strange as it sounds, I have a feeling that libraries like IronOcr and Tesseract may just be too advanced for my needs. To improve speed I have even written a piece of code to "threshold" my image first, making it completely black and white.
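Roughly, the thresholding step is something like this (not my exact code, just a simple sketch using System.Drawing GetPixel/SetPixel; the cutoff of 128 is an arbitrary choice, and LockBits would be much faster):
using System.Drawing;

static Bitmap Threshold(Bitmap source, int cutoff = 128)
{
    var result = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < source.Height; y++)
    {
        for (int x = 0; x < source.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            // Average the channels and snap each pixel to pure black or white.
            int luminance = (c.R + c.G + c.B) / 3;
            result.SetPixel(x, y, luminance < cutoff ? Color.Black : Color.White);
        }
    }
    return result;
}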
My current IronOcr settings look like this:
ImageReader = new AdvancedOcr()
{
CleanBackgroundNoise = false,
EnhanceContrast = false,
EnhanceResolution = false,
Strategy = AdvancedOcr.OcrStrategy.Fast,
ColorSpace = AdvancedOcr.OcrColorSpace.GrayScale,
DetectWhiteTextOnDarkBackgrounds = true,
InputImageType = AdvancedOcr.InputTypes.Snippet,
RotateAndStraighten = false,
ReadBarCodes = false,
ColorDepth = 1
};
And I could totally live with the results I've been getting using IronOcr, but the licensing exceptions ruin it. I also don't have $399 USD to spend on a private hobby project that won't even leave my own PC :(
But my main goal with this question is to find a better, faster or more efficient way to do this. It doesn't necessarily have to be an existing library, I'd be more than willing to make my own kind of letter-detection code that would work (only?) for screenshots like mine if someone can point me in the right direction.
I have researched this topic, and the best solution I could find is Azure Cognitive Services. You can use the Computer Vision API to read text from an image. Here is the complete documentation.
How fast does it have to be?
If you are using C#, I recommend the Google Cloud Vision API. You pay per request, but the first 1000 requests per month are free (check the pricing here). It does require a web request, but I find it to be very quick.
using Google.Cloud.Vision.V1;
using System;
namespace GoogleCloudSamples
{
public class QuickStart
{
public static void Main(string[] args)
{
// Instantiates a client
var client = ImageAnnotatorClient.Create();
// Load the image file into memory
var image = Image.FromFile("wakeupcat.jpg");
// Performs label detection on the image file
var response = client.DetectText(image);
foreach (var annotation in response)
{
if (annotation.Description != null)
Console.WriteLine(annotation.Description);
}
}
}
}
I find it works well for photos and scanned documents, so it should work perfectly for your situation. The SDK is also available in other languages, such as Java, Python, and Node.

Dynamically send n audio sources (concurrently) to a specific channel on an ASIO device

So I have been having some fun exploring the NAudio library.
However, I'm not sure whether I am missing something using the ASIO class. Basically my requirements are the following:
Dynamically output (mono) sources to an ASIO device, each source to a dedicated channel (later on I will probably be working with 64 channels)
Be free to 'stream' the n audio sources to the device at any time during the session (multiple sources simultaneously)
Have the control over each channel
So in Code I'd have something like:
...
WaveFileReader source1 = new WaveFileReader( pathToMyFile1 );
WaveFileReader source2 = new WaveFileReader( pathToMyFile2 );
WaveFileReader source3 = new WaveFileReader( pathToMyFile3 );
...
WaveFileReader sourceN = new WaveFileReader( pathToMyFileN );
AsioOut asioOut = new AsioOut(); // "out" is a reserved word in C#, so the variable needs another name
...
/*Now init asioOut...*/
...
asioOut.Play();
...
/* now react on events, possibly within a multi-threaded environment*/
/* and concurrently send each of these sources to a dedicated channel*/
/* as required, (as stated, possibly even many at the same time) */
...
So my question basically is:
Can I achieve something like this using one of the existing classes? Or will I have to engineer my own implementation of one of the interfaces (ISampleProvider, IWaveProvider, etc.; I'm pretty sure it will somehow work going down an abstraction level)?
Thanks for any Input on this!
The MultiplexingWaveProvider can do something close to what you want. Read about it here. For 64-channel work you might find you need to write your own fine-tuned code, as performance could become an issue.
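For example, something like this (an untested sketch assuming two mono files and a 4-channel ASIO device; MultiplexingWaveProvider and ConnectInputToOutput are the relevant pieces):
using NAudio.Wave;

var source1 = new WaveFileReader(pathToMyFile1); // mono
var source2 = new WaveFileReader(pathToMyFile2); // mono

// One multiplexer with 4 output channels; each mono input occupies one input channel.
var multiplexer = new MultiplexingWaveProvider(new IWaveProvider[] { source1, source2 }, 4);
multiplexer.ConnectInputToOutput(0, 0); // source1 -> device channel 1
multiplexer.ConnectInputToOutput(1, 3); // source2 -> device channel 4

var asioOut = new AsioOut();
asioOut.Init(multiplexer);
asioOut.Play();
// ... later: asioOut.Stop(); asioOut.Dispose(); source1.Dispose(); source2.Dispose();
As far as I remember, all the inputs to MultiplexingWaveProvider need to share the same sample rate and bit depth.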

Windows named pipe in node js (preferred shared memory)

I am using a named pipe to share some data between 2 processes on Windows. One is a Node.js process and the other is a C# process. Here is a sample of the code I use in my Node.js process:
var net = require('net');
var PIPE_NAME = "mypipe";
var PIPE_PATH = "\\\\.\\pipe\\" + PIPE_NAME;
var L = console.log;
var server = net.createServer(function(stream) {
L('Server: on connection')
stream.on('data', function(c) {
L('Server: on data:', c.toString());
});
stream.on('end', function() {
L('Server: on end')
server.close();
});
stream.write('Take it easy!');
});
server.on('close',function(){
L('Server: on close');
})
server.listen(PIPE_PATH,function(){
L('Server: on listening');
})
I use a NamedPipeClientStream in C# to read the data. I do this in a loop on both sides, so that my Node.js process is a producer and the C# process is a consumer.
This works fine.
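For reference, the C# consumer side is roughly this (a simplified sketch, not my exact code):
using System;
using System.IO.Pipes;
using System.Text;

using (var client = new NamedPipeClientStream(".", "mypipe", PipeDirection.InOut))
{
    client.Connect();
    var buffer = new byte[4096];
    int read;
    // Keep reading whatever the Node.js server writes to the pipe.
    while ((read = client.Read(buffer, 0, buffer.Length)) > 0)
    {
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
    }
}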
But sometimes the C# loop hangs, and at that point I want my Node.js process to overwrite the old data with the new data. I was wondering if I can specify some maximum size for my pipe (the one I create in Node.js) or a timeout for the data, but I couldn't find such options in the standard documentation.
If it cannot be solved this way, there is a shared-memory route to the problem, but I couldn't find any stable shared-memory library for Node.js that works nicely on Windows (and I don't have much time to write one right now). I need some pointers to move in the right direction.
Any advice is appreciated. Thanks.
EDIT: I would really prefer to implement the above using shared memory, since I need to share a large amount of data at a fast rate and I need to tune for performance. Any pointers on how to implement it?
I figured out a way to use the drain event on the Node.js writable stream to meet my requirement.

Intel RealSense recording has no audio

I am recording video with the Intel RealSense camera. The video recording works successfully, but there is no audio in the video.
My question is this:
My configuration is a Lenovo Yoga 15 with the internal RealSense camera.
Do I need to install an audio driver to get sound? Is that required?
Please give me some suggestions.
session = PXCMSession.CreateInstance();
senseManager = session.CreateSenseManager();
senseManager.captureManager.SetFileName("new4.rssdk", true);
senseManager.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, WIDTH, HEIGHT, 30);
senseManager.Init();
for (int i = 0; i < 200; i++)
{
if (senseManager.AcquireFrame(true).IsError()) break;
PXCMCapture.Sample sample = senseManager.QuerySample();
senseManager.ReleaseFrame();
colorBitmap.Dispose();
}
I didn't understand the question well, so: do you want to record audio and video together?
If yes, you have to create an instance of the class responsible for doing that.
Take a look here: Speech Recognition
I used two threads in order to do that. I have an application where I use facial recognition and audio recognition. I decided to split them and it worked very well.

How to mute the microphone c#

I wanted to know what the code would be to toggle mute/unmute of my microphone. I am making a program that runs in the background, picks up a keypress event, and toggles mute/unmute of the mic. Any help with that code would be very helpful. I am pretty new to C#, and this is just a really simple program I wanted to make. All it does is listen for a press of the spacebar, even when the program is in the background; when the spacebar is pressed, it mutes/unmutes the mic.
Thank you for any and all help!
For Windows Vista and newer, you can no longer use the Media Control Interface; Microsoft has a new Core Audio API that you must use to interface with audio hardware on these newer operating systems.
Ray Molenkamp wrote a nice managed wrapper for interfacing with the Core Audio API here:
Vista Core Audio API Master Volume Control
Since I needed to be able to mute the microphone from XP, Vista and Windows 7 I wrote a little Windows Microphone Mute Library which uses Ray's library on the newer operating systems and parts of Gustavo Franco's MixerNative library for Windows XP and older.
You can download the source of a whole application which has muting the microphone, selecting it as a recording device, etc.
http://www.codeguru.com/csharp/csharp/cs_graphics/sound/article.php/c10931/
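If you already have NAudio in your project, its Core Audio wrapper is another route to the same thing on Vista and newer. This is not the Windows Microphone Mute Library itself, just a minimal sketch of the idea:
using NAudio.CoreAudioApi;

var enumerator = new MMDeviceEnumerator();
// DataFlow.Capture = recording devices; this grabs the default communications microphone.
var mic = enumerator.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Communications);
mic.AudioEndpointVolume.Mute = !mic.AudioEndpointVolume.Mute; // toggle mute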
You can use MCI (Media Control Interface) to access microphones and change their volume system-wide. Check the code below; it should set the volume to 0 for all system microphones. The code is in C; check pinvoke.net for details on how to translate this code to C#.
#include "mmsystem.h"
...
void MuteAllMics()
{
HMIXER hmx;
mixerOpen(&hmx, 0, 0, 0, 0);
// Get the line info for the wave in destination line
MIXERLINE mxl;
mxl.cbStruct = sizeof(mxl);
mxl.dwComponentType = MIXERLINE_COMPONENTTYPE_DST_WAVEIN;
mixerGetLineInfo((HMIXEROBJ)hmx, &mxl, MIXER_GETLINEINFOF_COMPONENTTYPE);
// find the microphone source line connected to this wave in destination
DWORD cConnections = mxl.cConnections;
for (DWORD j=0; j<cConnections; j++)
{
mxl.dwSource = j;
mixerGetLineInfo((HMIXEROBJ)hmx, &mxl, MIXER_GETLINEINFOF_SOURCE);
if (MIXERLINE_COMPONENTTYPE_SRC_MICROPHONE == mxl.dwComponentType)
{
// Find a volume control, if any, of the microphone line
LPMIXERCONTROL pmxctrl = (LPMIXERCONTROL)malloc(sizeof MIXERCONTROL);
MIXERLINECONTROLS mxlctrl =
{
sizeof mxlctrl,
mxl.dwLineID,
MIXERCONTROL_CONTROLTYPE_VOLUME,
1,
sizeof MIXERCONTROL,
pmxctrl
};
if (!mixerGetLineControls((HMIXEROBJ) hmx, &mxlctrl, MIXER_GETLINECONTROLSF_ONEBYTYPE))
{
DWORD cChannels = mxl.cChannels;
if (MIXERCONTROL_CONTROLF_UNIFORM & pmxctrl->fdwControl)
cChannels = 1;
LPMIXERCONTROLDETAILS_UNSIGNED pUnsigned = (LPMIXERCONTROLDETAILS_UNSIGNED)
malloc(cChannels * sizeof MIXERCONTROLDETAILS_UNSIGNED);
MIXERCONTROLDETAILS mxcd =
{
sizeof(mxcd),
pmxctrl->dwControlID,
cChannels,
(HWND)0,
sizeof MIXERCONTROLDETAILS_UNSIGNED,
(LPVOID) pUnsigned
};
mixerGetControlDetails((HMIXEROBJ)hmx, &mxcd, MIXER_GETCONTROLDETAILSF_VALUE);
// Set the volume to the middle (for both channels as needed)
//pUnsigned[0].dwValue = pUnsigned[cChannels - 1].dwValue = (pmxctrl->Bounds.dwMinimum+pmxctrl->Bounds.dwMaximum)/2;
// Mute
pUnsigned[0].dwValue = pUnsigned[cChannels - 1].dwValue = 0;
mixerSetControlDetails((HMIXEROBJ)hmx, &mxcd, MIXER_SETCONTROLDETAILSF_VALUE);
free(pmxctrl);
free(pUnsigned);
}
else
{
free(pmxctrl);
}
}
}
mixerClose(hmx);
}
Here you can find more code on this topic.
Hope this helps, regards.
I have several microphones on Windows 7, and the class WindowsMicrophoneMuteLibrary.CoreAudioMicMute is incorrect in this case.
So I changed the code, and it works great: it now mutes all microphones, not just the last one recognized by Windows 7.
I am attaching the new class to put in its place.
http://www.developpez.net/forums/d1145354/dotnet/langages/csharp/couper-micro-sous-win7/
