What kind of timer is most suitable for MIDI timing?

I am writing a simple MIDI app for my own use, in C# and .NET 4.x.
I assume that commercial DAWs (on the Windows platform that is) use Windows Multimedia Timers for timing the playback of MIDI data. But I can't verify this, since commercial stuff is closed-source.
I'm hoping someone has knowledge in this area, and can tell me if my assumption is right. Or, are these DAWs using timers I'm not aware of? I want to make sure there isn't anything more suitable that I've overlooked.

Although I have not had the privilege of coding for commercial DAW projects specifically, I do have experience in coding for scenarios where the accurate timing of events is important, including MIDI processing/routing.
Your assumption is correct. Well-coded MIDI sequencers use the Windows Multimedia Timer to schedule the sending of MIDI messages to MIDI devices.
Microsoft's Multimedia Timer Reference actually mentions MIDI sequencing as an example use:
These timer services are useful for applications that demand high-resolution timing. For example, a MIDI sequencer requires a high-resolution timer because it must maintain the pace of MIDI events within a resolution of 1 millisecond.
How to use the timer is a topic for another question, so I won't get into that here. But I'd like to point out a few things:
All modern hardware supports a timer resolution of 1 millisecond; it's basically a given, but you should still call timeGetDevCaps to check before calling timeBeginPeriod, as in the sketch below.
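For reference, a minimal P/Invoke sketch of that check (winmm.dll signatures; error handling omitted for brevity):

    using System;
    using System.Runtime.InteropServices;

    static class MmTimerPeriod
    {
        [StructLayout(LayoutKind.Sequential)]
        struct TIMECAPS
        {
            public uint wPeriodMin; // minimum supported period, in ms
            public uint wPeriodMax; // maximum supported period, in ms
        }

        [DllImport("winmm.dll")]
        static extern uint timeGetDevCaps(ref TIMECAPS ptc, uint cbtc);

        [DllImport("winmm.dll")]
        static extern uint timeBeginPeriod(uint uPeriod);

        [DllImport("winmm.dll")]
        static extern uint timeEndPeriod(uint uPeriod);

        // Returns the period actually requested; pass it to timeEndPeriod when done.
        public static uint Begin()
        {
            var caps = new TIMECAPS();
            timeGetDevCaps(ref caps, (uint)Marshal.SizeOf(typeof(TIMECAPS)));
            uint period = Math.Max(caps.wPeriodMin, 1u); // ask for 1 ms, honoring the device minimum
            timeBeginPeriod(period);
            return period;
        }
    }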
Using the multimedia timer at 1 millisecond resolution will have the effect of quantizing your MIDI messages to a 1 millisecond grid, +/- some variance/jitter. In the vast majority of cases this is a non-issue, because that is still a sufficiently fine resolution to provide reasonably nuanced timing, as far as musical timing goes anyway. If you absolutely need sub-millisecond timing, you will have to do as @iinspectable suggests: use the timer to get "close" and then spin to precisely time the sending of your MIDI messages. However, this approach comes at a cost. I don't know what your intentions are for the app, but if you have several simultaneous MIDI tracks playing, each with continuous controllers and pitch bend, your app will be spinning all the time and you'll peg a CPU core at 100%, which is just plain bad.
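If you do go the sleep-then-spin route, the shape is roughly this. This is a sketch only; the ~2 ms slack before the deadline is an assumption you would tune:

    using System.Diagnostics;
    using System.Threading;

    // Sleep to within ~2 ms of the deadline, then spin on Stopwatch for the
    // remainder. The deadline is expressed in Stopwatch ticks.
    static void WaitUntil(Stopwatch clock, long deadlineTicks)
    {
        long slackTicks = Stopwatch.Frequency / 500; // ~2 ms safety margin
        while (clock.ElapsedTicks < deadlineTicks - slackTicks)
            Thread.Sleep(1); // coarse wait; needs timeBeginPeriod(1) to be reliable
        while (clock.ElapsedTicks < deadlineTicks)
            Thread.SpinWait(10); // burn cycles over the last stretch
    }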
Do look into Windows' Multimedia Class Scheduler Service to get prioritized access to CPU resources for your MIDI playback thread.
Don't be discouraged by the comments from @iinspectable. You can absolutely do high-performance MIDI sequencing in C#/.NET. Your only real concern is the garbage collector unpredictably pausing your app. You can code in a way that minimizes GC pressure (e.g. use structs, not classes; don't create or destroy anything during playback). Also consider using GC.TryStartNoGCRegion to, for example, prevent GC pauses during playback.
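For example, a minimal sketch of wrapping playback in a no-GC region. The 16 MB budget and the PlayMidiSequence method are hypothetical, and note that TryStartNoGCRegion requires .NET 4.6 or later:

    using System;
    using System.Runtime;

    static void PlayWithoutGcPauses()
    {
        // Suppress GC for the duration of playback, assuming the playback
        // loop allocates little enough to fit the requested budget.
        if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
        {
            try
            {
                PlayMidiSequence(); // hypothetical playback loop
            }
            finally
            {
                if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                    GC.EndNoGCRegion();
            }
        }
    }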

Related

Dealing with extremely small increments of time

OK, that title was perhaps vague, but allow me to explain.
I'm dealing with a large list, of hundreds of messages to be sent to a CAN bus as byte arrays. Each of these messages has an Interval property detailing how often the message must be sent, in milliseconds. But I'll get back to that.
So I have a thread. The thread loops through this giant list of messages until stopped, with the body roughly like this:
Stopwatch timer = new Stopwatch();
timer.Start();
while (!ShouldStop)
{
    foreach (Message msg in list)
    {
        if (msg.IsReadyToSend(timer)) msg.Send();
    }
}
This works great, with phenomenal accuracy in honoring each Message object's Interval. However, it hogs an entire CPU core. The problem is that, because of the massive number of messages and the nature of the CAN bus, there is generally less than half a millisecond before the thread has to send the next message. There would never be a case where the thread could sleep for, say, more than 15 milliseconds.
What I'm trying to figure out is if there is a way to do this that allows for the thread to block or yield momentarily, allowing the processor to sleep and save some cycles. Would I get any kind of accuracy at all if I try splitting the work into a thread per message? Is there any other way of doing this that I'm not seeing?
EDIT: It may be worth mentioning that the Message's Interval property is not absolute. As long as the thread continues to spew messages, the receiver should be happy, but if the thread regularly sleeps for, say, 25 ms because of higher priority threads stealing its time-slice, it could raise red flags for the receiver.
Based on the updated requirement, there is a very good chance that the default setup with Sleep(0) could be enough: messages may be sent in small bursts, but it sounds like that is OK. Using a multimedia timer may make the bursts less noticeable. Building more tolerance into the receiver of the messages may be the better approach (if possible).
If you need hard millisecond accuracy with good guarantees, C# on Windows is not the best choice; separate hardware (even an Arduino) may be needed, or at least lower-level code than C#.
Windows is not a real-time OS, so you can't really get sub-millisecond accuracy.
A busy loop (possibly on a high-priority thread), as you have now, is the common approach if you need sub-millisecond accuracy.
You can try using multimedia timers (sample: Multimedia timer interrupts in C# (first two interrupts are bad)), as well as changing the default timer resolution to 1ms (see Why are .NET timers limited to 15 ms resolution? for a sample/explanation).
In any case, you should be aware that your code can lose its time-slice if there are other higher-priority threads to be scheduled, and all your efforts would be lost.
Note: you should also consider whether a more sensible data structure would be more suitable (e.g. a heap or priority queue may work better for finding the next item to send); a sketch follows.
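To illustrate the priority-queue idea (a sketch only; Message, list, and ShouldStop are from the question, and PriorityQueue<,> is .NET 6+, so on .NET 4.x you'd substitute a sorted structure or a hand-rolled binary heap):

    using System.Collections.Generic;
    using System.Diagnostics;

    var due = new PriorityQueue<Message, long>(); // keyed by next due time, in ms
    var clock = Stopwatch.StartNew();
    foreach (Message msg in list)
        due.Enqueue(msg, clock.ElapsedMilliseconds + msg.Interval);

    while (!ShouldStop)
    {
        due.TryPeek(out Message next, out long at);
        if (clock.ElapsedMilliseconds >= at)
        {
            due.Dequeue();
            next.Send();
            due.Enqueue(next, at + next.Interval); // reschedule at its interval
        }
        // else: sleep/spin until 'at' instead of rescanning the whole list
    }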
As you have discovered, the most accurate way to "wait" on a CPU is to poll the high-resolution clock. However, that is computationally intensive. If you need clock-level accuracy in your timing, there is no other way.
However, in your original post, you said that the timing was on the order of 15ms.
On my 3.3GHz quad-core i5 at home, 15ms x 3.3GHz = 50 million clock cycles (or 200 million if you count all four cores).
That is an eternity.
Loose sleep timing is most likely more than accurate enough for your purposes.
To be frank, if you need hard real-time, C# on the .NET VM with the .NET garbage collector on the Windows kernel is the wrong choice.

How to create several accurate timers in C# (exactly 10 millisecond intervals)

I've started developing a desktop app in C# (VS2010, .NET Fw 4.0) involving several timers.
At first, I was using System.Timers to send data over USB to a data bus. The point is that I need to send different periodic binary messages at several specific intervals, such as 10ms, 20ms, 40ms, 100ms, 500ms, 1000ms...
Each timer has a callback function that has to send the correct data for each raised event.
The problem is that when I have the full set of signals working, the real intervals are longer than expected (3000ms stretches to 3500, 100ms to 340), because once I add the 10ms and 40ms timers, the CPU is overloaded to almost 100% and all precision is lost.
I'm working in VMware with 2GB RAM and 2 CPU cores. The host machine is an i7-2600K CPU @ 3.40GHz. I don't think this is the problem (but I'm not sure).
Before writing here, I looked for answers about how to get more exact timing with the most economical resource consumption, but everything I found was unspecific.
I already know about System.Diagnostics.Stopwatch from reading about it here, but that class has no events. There is also System.Windows.Forms.Timer, but it is even more imprecise, with low resolution.
There is a good implementation here with microsecond resolution, but it is the reason my CPU is overloaded!
What do you think my next steps should be? I'll appreciate any help or ideas you have.
10% timing accuracy is the goal for the 10ms interval!
I'll clarify any extra info you need!
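One direction worth sketching, since all the listed intervals are multiples of 10ms: drive everything from a single 10ms base tick and dispatch each message whose interval divides the elapsed time. This is only an illustration, not from the original thread; SendMessageFor is a hypothetical per-interval sender, and System.Threading.Timer still needs timeBeginPeriod(1) in effect to tick reliably at 10ms:

    using System.Threading;

    long tick = 0;
    int[] intervalsMs = { 10, 20, 40, 100, 500, 1000 }; // intervals from the question
    var baseTimer = new Timer(_ =>
    {
        long elapsedMs = Interlocked.Increment(ref tick) * 10; // position on the 10ms grid
        foreach (int interval in intervalsMs)
            if (elapsedMs % interval == 0)
                SendMessageFor(interval); // hypothetical per-interval sender
    }, null, dueTime: 10, period: 10);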

Throttle CPU Usage of Application

A new game server just came out which our company would like to offer for rental. However, the game developers did not create any sort of hibernation mode to shut down the physics when no players are connected, so an empty server is eating 30% or so CPU.
I found this game panel addon which limits the CPU usage of Applications.
I have written a few small apps in C# .NET for our company to help improve our services and I am wondering how I would go about creating something like this. Is it possible?
You might consider simply lowering the priority of the process. This won't limit CPU usage directly, but it will cause the process's threads to be scheduled less often than those of processes with normal and higher priorities.
Check System.Diagnostics.Process.PriorityClass (doc)
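A minimal sketch (the process name "gameserver" is a placeholder):

    using System.Diagnostics;

    // Drop the server to BelowNormal so other work preempts it when the machine is busy.
    foreach (Process p in Process.GetProcessesByName("gameserver"))
        p.PriorityClass = ProcessPriorityClass.BelowNormal;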
My guess is that the server app is polling instead of being event-driven. Polling will burn CPU until that piece of code is converted to be event-driven, so that the application sleeps until it receives an event from the OS that it needs to process; a polling loop just spins looking for an event and wastes the CPU. Reducing the priority of the process will not actually reduce CPU usage in any way. The app needs to be rewritten to be more CPU-efficient.
This answer might be interesting for you and that's how I would do it.
How to restrict the CPU usage a C# program takes?
I don't know if you can do that, but you can change the thread priority of the executing thread via the Priority property. You would set that by:
Thread.CurrentThread.Priority = ThreadPriority.Lowest;
Also, I don't think you really want to cap it. If the machine is otherwise idle, you'd like it to get on with the task, right? ThreadPriority helps communicate this to the scheduler.
I'm assuming the game server is threaded. If so, you may be able to programmatically force CPU affinity on the application. If you had a way to tell whether the game has users or not, e.g. whether UDP packets are coming in on the assigned port, you could say "hey, no one is connected" and have your program force all the working threads onto the same core.
So, if you had an 8-core CPU and all the threads were on one core, then at most it would use 12.5% of the CPU.
Once you see packets coming in on the assigned port, you could set the affinity back to all cores.
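A sketch of that affinity toggle, assuming a machine with at least 8 logical cores and a placeholder process name:

    using System;
    using System.Diagnostics;

    Process server = Process.GetProcessesByName("gameserver")[0];
    server.ProcessorAffinity = (IntPtr)0x01; // idle: pin to core 0 only
    // ...later, when packets arrive on the game port...
    server.ProcessorAffinity = (IntPtr)0xFF; // active: allow cores 0-7 again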
You could take this a step further: if there are any "idle" games, all parked on, let's say, core 7, you could run a loop of the HLT instruction at a higher priority than the games, but force that thread to sleep periodically so it doesn't completely starve them.
This would cause the CPU to use less power, but it would be a lot more work and have a higher chance of problems.
I would stick to forcing affinity only, and just let all the idle games share some given core.

using C# for real time applications

Can C# be used for developing a real-time application that involves taking input from web cam continuously and processing the input?
You cannot use any mainstream garbage-collected language for "hard real-time systems", as the garbage collector will sometimes stop the system from responding within the defined time. Avoiding object allocation can help; however, you need a way to prove you are not creating any garbage and that the garbage collector will not kick in.
However, most "real-time" systems don't in fact need to always respond within a hard time limit, so it all comes down to what you mean by "real time".
Even when parts of the system need to be hard real-time, often other large parts of the system, like the UI, don't.
(I think your app needs to be fast rather than “real time”, if 1 frame is lost every 100 years how many people will get killed?)
I've used C# to create multiple realtime, high speed, machine vision applications that run 24/7 and have moving machinery dependent on the application. If something goes wrong in the software, something immediately and visibly goes wrong in the real world.
I've found that C#/.NET provides pretty good functionality for doing so. As others have said, definitely stay on top of garbage collection. Break the processing up into several logical steps and have a separate thread working on each. I've found the producer-consumer programming model to work well for this; perhaps ConcurrentQueue for starters.
You could start with something like:
Thread 1 captures the camera image, converts it to some format, and puts it into an ImageQueue
Thread 2 consumes from the ImageQueue, processing the image and comes up with a data object that is put onto a ProcessedQueue
Thread 3 consumes from the ProcessedQueue and does something interesting with the results.
If Thread 2 takes too long, Threads 1 and 3 are still chugging along. If you have a multicore processor, you'll be throwing more hardware at the math. You could also use several threads in place of any single thread above, although you'd have to take care of ordering the results manually.
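A minimal sketch of that pipeline using BlockingCollection (which wraps ConcurrentQueue); Frame, Result, and the stage methods are hypothetical placeholders:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    var imageQueue = new BlockingCollection<Frame>(boundedCapacity: 8);
    var processedQueue = new BlockingCollection<Result>(boundedCapacity: 8);
    bool done = false; // hypothetical shutdown flag

    Task.Run(() =>                         // Thread 1: capture
    {
        while (!done)
            imageQueue.Add(CaptureFrame());
        imageQueue.CompleteAdding();
    });
    Task.Run(() =>                         // Thread 2: process
    {
        foreach (Frame f in imageQueue.GetConsumingEnumerable())
            processedQueue.Add(ProcessImage(f));
        processedQueue.CompleteAdding();
    });
    Task.Run(() =>                         // Thread 3: act on results
    {
        foreach (Result r in processedQueue.GetConsumingEnumerable())
            ActOn(r);
    });

The bounded capacity keeps a slow stage from letting a fast producer run away with memory; the producer blocks instead.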
Edit
After reading other people's answers, you could probably argue with my definition of "realtime". In my case, the computer produces targets that it sends to motion controllers, which do the actual realtime motion. The motion controllers provide their own safety layers for things like timing, max/min ranges, smooth acceleration/deceleration and safety sensors. These controllers read sensors across an entire factory with a cycle time of less than 1ms.
Absolutely. The key will be to avoid garbage collection and memory management as much as possible. Try to avoid new-ing objects as much as possible, using buffers or object pools when you can.
Of course, someone has even developed a library to do that: AForge.NET
As with any real-time application, and not just in C#, you'll have to manage the buffers well, as @David suggested.
Not only that, there's also the XNA Framework (for things like 3D games), and you can program DirectX from C# as well, both of which are very much real-time.
And did you know that, if you want, you can do pointer manipulations in C# too?
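For instance, a small sketch of pointer use, thresholding a grayscale buffer in place (requires compiling with /unsafe; the flat byte-buffer layout is an assumption):

    // Binarize each pixel against a cutoff without bounds checks per element.
    static unsafe void Threshold(byte[] pixels, byte cutoff)
    {
        fixed (byte* p = pixels)
        {
            for (byte* cur = p, end = p + pixels.Length; cur < end; cur++)
                *cur = *cur >= cutoff ? (byte)255 : (byte)0;
        }
    }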
It depends on how 'real-time' it needs to be; ie, what your timing constraints are, and how quickly you need to 'do something'.
If you can handle 'doing something' maybe every 300ms or so in .NET, say on a timer event, I've found Windows to work okay. Note that this is something I found true on multiple systems of different ages and different speeds. As always, YMMV.
But that number is awfully long for a lot of applications. Maybe not for yours.
Do some research, make sure your app responds quickly enough for your application.

How to enable MMCSS in C# app?

I want to try Multimedia Class Scheduler Service http://msdn.microsoft.com/en-us/library/ms684247(v=VS.85).aspx
I hope it can reduce latency by scheduling my threads better.
How can it be done in C# ?
Note: my app has nothing to do with multimedia; I just need the features of MMCSS.
Each thread that is performing work related to a particular task calls the AvSetMmMaxThreadCharacteristics or AvSetMmThreadCharacteristics function to inform MMCSS that it is working on that task.
It would seem all you need is to P/Invoke one or the other of those API calls.
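A minimal P/Invoke sketch (avrt.dll signatures; "Pro Audio" is one of the task names registered under the SystemProfile\Tasks registry key, chosen here as an example):

    using System;
    using System.Runtime.InteropServices;

    static class Mmcss
    {
        [DllImport("avrt.dll", CharSet = CharSet.Unicode)]
        static extern IntPtr AvSetMmThreadCharacteristics(string taskName, ref uint taskIndex);

        [DllImport("avrt.dll")]
        static extern bool AvRevertMmThreadCharacteristics(IntPtr handle);

        // Call on the latency-sensitive thread; keep the handle to revert later.
        public static IntPtr Enter()
        {
            uint taskIndex = 0;
            return AvSetMmThreadCharacteristics("Pro Audio", ref taskIndex);
        }

        public static void Leave(IntPtr handle)
        {
            AvRevertMmThreadCharacteristics(handle);
        }
    }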
However, I suspect all that will be in vain when the garbage collector steps in and messes things up.
Have you done any profiling of the app to see what's going on under the covers? If your app is truly that latency-sensitive, then C# is probably the wrong choice of language, to be honest.
I'm not sure what the point of using MMCSS would be in a managed application. After all, the point of MMCSS is to adjust the scheduling priority of the process to avoid stalls during multimedia stream processing; we're talking very fine-grained scheduling. But with a managed language, where a garbage collection can happen at any time and potentially take tens or even hundreds of milliseconds, I'm not sure what benefit MMCSS would provide that wouldn't be totally wiped out by garbage collection.
With that in mind, I wouldn't expect to see a managed interface to the MMCSS any time soon. You can certainly access it via P/Invoke, but I wouldn't expect miracles from it :)
