ECG digital signal processing in C#

I'm looking for a C# .NET library for digital filtering (lowpass, highpass, notch) to filter ECG waveforms in real-time. Any suggestions?

If this is for non-commercial use, I have heard good things about the Signal Lab library. It is free for non-commercial use, $570 for commercial use. It is a bit overkill if you just need low-pass, high-pass, and band-pass filters, but it does come with controls for visualizing the data if you do not have any yet.
If you just need the filters, you may want to write your own code for the three filters. You can check the Wikipedia pages for pseudocode examples of a low-pass filter and a high-pass filter; I did not quickly find a code example of a notch filter.
Here are some C examples of various filters to give you an idea of what you need to do.
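If you do roll your own, a notch filter in particular is only a few lines once you have the biquad coefficients. A minimal C# sketch using the widely cited RBJ "Audio EQ Cookbook" formulas; the sample rate, notch frequency, and Q in the usage comment are assumptions to adjust for your hardware:

    using System;

    // Biquad notch filter; coefficients follow the RBJ "Audio EQ Cookbook".
    class NotchFilter
    {
        readonly double b0, b1, b2, a1, a2;
        double x1, x2, y1, y2; // previous inputs and outputs

        public NotchFilter(double sampleRate, double notchHz, double q)
        {
            double w0 = 2 * Math.PI * notchHz / sampleRate;
            double alpha = Math.Sin(w0) / (2 * q);
            double a0 = 1 + alpha;

            b0 = 1 / a0;
            b1 = -2 * Math.Cos(w0) / a0;
            b2 = 1 / a0;
            a1 = -2 * Math.Cos(w0) / a0;
            a2 = (1 - alpha) / a0;
        }

        // Processes one sample at a time, so it suits real-time streaming.
        public double Process(double x)
        {
            double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x;
            y2 = y1; y1 = y;
            return y;
        }
    }

    // Example: removing 50 Hz mains hum from an ECG sampled at 250 Hz.
    // var notch = new NotchFilter(250, 50, 30);
    // double clean = notch.Process(rawSample);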

If your data is arriving in discrete chunks, I would use Reactive Extensions. This allows the input to control what happens next (reacting to data) instead of using "pull" operations. You can then react to this data by passing it through filters, and then react to that data by displaying it or performing additional calculations.
If you only need notch, high, and low filters, these are trivial to write. As each chunk of data arrives, you can decide whether or not to pass it to the next step (or whether or not to modify the data first). I would imagine you could write this whole section of code in less than 20 lines (maybe less than 10) using Rx. It would result in some pretty elegant code for this use case.
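For flavor, here is a minimal sketch of that idea using the System.Reactive (Rx) NuGet package; the timer-driven simulated source and the 5-sample window are assumptions standing in for your real acquisition stream:

    using System;
    using System.Linq;
    using System.Reactive.Linq; // System.Reactive NuGet package

    class RxFilterPipeline
    {
        static void Main()
        {
            // Simulated ~250 Hz source; replace with an observable that
            // wraps your real acquisition callback.
            IObservable<double> samples =
                Observable.Interval(TimeSpan.FromMilliseconds(4))
                          .Select(n => Math.Sin(2 * Math.PI * 1.2 * n * 0.004));

            // React to the stream: take a sliding 5-sample window, apply a
            // crude low-pass by averaging, then hand the result onward.
            using (samples.Buffer(5, 1)
                          .Select(w => w.Average())
                          .Subscribe(v => Console.WriteLine(v)))
            {
                Console.ReadLine(); // run until Enter is pressed
            }
        }
    }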

As far as I know, you can write your own, because I did.
This should be a good starter for you (coded in C++, but you can easily convert the syntax to C#): http://www.codeproject.com/KB/cpp/ecg_dsp.aspx
Third-party libraries wouldn't be very flexible with the filter equation parameters, since only you will know the characteristics of your signal (amplitudes, frequency band, sampling rate, etc.).
If your ECG sampling rate is low, I recommend using a waveshaping algorithm first to get a smooth signal on the C# side before you apply filters.
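As a minimal example of such a smoothing pre-step (the window length is an assumption to tune against your sampling rate), a simple moving average would do:

    using System;

    static class Smoothing
    {
        // Simple moving average used as a smoothing pre-step before filtering.
        public static float[] MovingAverage(float[] signal, int window)
        {
            var output = new float[signal.Length];
            float sum = 0f;
            for (int i = 0; i < signal.Length; i++)
            {
                sum += signal[i];
                if (i >= window) sum -= signal[i - window]; // drop the oldest sample
                output[i] = sum / Math.Min(i + 1, window);
            }
            return output;
        }
    }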


animated chart control for asp.net

I am trying to write an application that reflects and updates performance statistics while a task is running (a benchmark app).
The data involved is system info, such as CPU usage and memory allocation.
I need to perform those calculations and update the graph with the new numbers, translated visually, either at a user/programmer-defined update interval or at an interval calculated from overall system resources, so that if the system is not too busy the update frequency runs at maximum speed.
The question is: what should I start with?
Using a WinForms progress bar was my first thought (thinking as simply as possible, using as few lines of code as I could for the specific task of displaying the data).
I am using C# .NET 4.0 ASP.NET WebForms.
Can you please guide me to a simple way to implement this? It could be in C#/ASP.NET, or mixed with JavaScript/jQuery.
PS
I was thinking of an approach using multiple images, from an empty bar to a full bar, switching between them according to the calculation, though I need a faster response than replacing the src of an img. Maybe something pixel-wise: divs with a background color whose height changes, so it acts as a kind of bar meter. Really, I could think of many ways to try implementing the task at hand,
but I thought there would be a known way (keeping simplicity in mind).
Just for illustration, these are two options (vertical or horizontal) I thought of for displaying the statistics visually.
I don't mind which of those graphs (LINK), as long as the implementation is as simple as it can be
and the response is fast enough that it isn't a heavy task.
I would consider FusionCharts. They have a nice assortment of chart types, very nice visuals, and very simple implementation. You can supply data directly in XML or JSON format, either from the server side or directly to the client, so real-time updates are supported as well.
Oh, and even though this is a commercial product, they do have a free version.

calculating fft with complex number in c#

I use this formula to get the frequency of a signal, but I don't understand how to implement the code with complex numbers. There is an "i" in the formula, which corresponds to Math.Sqrt(-1). How can I apply this formula to a signal in C# with the NAudio library?
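As a starting point, the "i" maps directly to C#'s System.Numerics.Complex type. A naive DFT sketch, purely illustrative, since an FFT (e.g. NAudio.Dsp.FastFourierTransform) is what you would use in practice:

    using System;
    using System.Numerics; // Complex lives here

    class DftExample
    {
        // Naive O(N^2) DFT: e^(-i*2*pi*k*t/N) becomes Complex.Exp with a
        // purely imaginary argument.
        static Complex[] Dft(double[] signal)
        {
            int n = signal.Length;
            var result = new Complex[n];
            for (int k = 0; k < n; k++)
            {
                Complex sum = Complex.Zero;
                for (int t = 0; t < n; t++)
                    sum += signal[t] * Complex.Exp(new Complex(0, -2 * Math.PI * k * t / n));
                result[k] = sum;
            }
            return result;
        }

        static void Main()
        {
            // Bin k corresponds to frequency k * sampleRate / N; the magnitude
            // of the complex value is the signal strength at that frequency.
            var signal = new double[64];
            for (int t = 0; t < signal.Length; t++)
                signal[t] = Math.Sin(2 * Math.PI * 8 * t / signal.Length); // 8 cycles

            Console.WriteLine(Dft(signal)[8].Magnitude); // peaks at bin 8
        }
    }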
If you want to go back to a basic level then:
You'll want to use some form of probabilistic model, something like a hidden Markov model (HMM). This will allow you to test what the user says against a collection of models, one for each word they are allowed to say.
Additionally, you want to transform the audio waveform into something that your program can more easily interpret, via something like a fast Fourier transform (FFT) or a continuous wavelet transform (CWT).
The steps would be:
Get audio
Remove background noise
Transform via FFT or CWT
Detect peaks and other features of the audio
Compare these features with your HMMs
Pick the HMM with the best result above a threshold.
Of course, this requires you to have previously trained the HMMs with the correct words.
A lot of languages actually provide libraries for this that come built in. One example, in C#.NET, is at this link, which gives you a step-by-step guide to setting up a speech recognition program. It also abstracts away the low-level detail of parsing audio for certain phonemes, etc. (which frankly is pointless given the number of libraries available, unless you wish to write a highly optimized version).
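For instance, a minimal sketch along those lines using the built-in System.Speech library (the command words here are made-up placeholders for your own vocabulary):

    using System;
    using System.Speech.Recognition; // reference System.Speech.dll

    class CommandRecognizer
    {
        static void Main()
        {
            using (var engine = new SpeechRecognitionEngine())
            {
                // Constrain recognition to a fixed command vocabulary.
                var commands = new Choices("left", "right", "up", "down");
                engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

                engine.SetInputToDefaultAudioDevice();
                engine.SpeechRecognized += (s, e) =>
                    Console.WriteLine("Heard: {0} ({1:F2})",
                                      e.Result.Text, e.Result.Confidence);

                engine.RecognizeAsync(RecognizeMode.Multiple);
                Console.ReadLine(); // keep listening until Enter is pressed
            }
        }
    }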
It is a difficult problem nonetheless, and you will have to use an ASR framework to do it. I have done something slightly more complex (~100 words) using Sphinx4. You can also use HTK.
In general what you have to do is:
write down all the words that you want to recognize
determine the syntax of your commands like (direction) (amount)
Then choose a framework, get an acoustic model, generate a dictionary and a language model compatible with that framework. Then integrate the framework into your application.
I hope I have mentioned all the important things you need to do. You can Google them separately or go to your chosen framework's tutorial.
Your task is relatively simple in terms of speech recognition and you should get good results if you complete it.

How to create a fixed width word art like object in C# without MS Word or Interop

I work at a shop that has sales both weekly and monthly. To advertise these sales, somebody in the company spends a shift making small and large signs in MS Word from a standardized template. This is really time consuming and is prone to mistakes.
I want to design a program to pull the necessary product information from our database and put it into these signs.
I want to use WordArt or a WordArt substitute to create many of the objects, as this will ensure a standard size (to fit on the signs) and style. I don't care so much about the effects; I am just concerned with the height and width of the words as a whole.
I have created a small program that does this using the Interop library, and while it creates a near perfect replica of the original sign, I fear it might be too slow to pull off doing 30-50 signs in one sitting.
Is there an alternative to MS WordArt that would allow me to create either an image or another text object that can be scaled to fit within a certain size?
If you are trying to replace (or relieve) an employee from making signs by writing code, and your code is near perfect but just a bit slow, then you should profile your code to see why it is slow. I can't imagine that your code is slower than the employee :D. So you shouldn't discard Word interop just because of the speed, if it does exactly what you want it to do.
Also, since Word-art is a Word-thing, doing that without Word is a huge amount of work. If you have the correct fonts, you might be able to do this from .NET using GDI+ (the standard image drawing interface). However, this will require some tutorial reading and trial-and-error. There is a praised GDI+ FAQ with lots of information on the subject.
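To give an idea of the GDI+ approach: draw the text into a GraphicsPath, then scale the path so the rendered word fills an exact pixel width, which covers the fixed-size aspect of WordArt that you care about. A sketch; the font, em size, and file name are assumptions:

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;

    static class SignText
    {
        // Renders text scaled so the word exactly fills targetWidth pixels.
        public static void RenderFitted(string text, int targetWidth, string path)
        {
            using (var gp = new GraphicsPath())
            {
                gp.AddString(text, new FontFamily("Arial"), (int)FontStyle.Bold,
                             100f, PointF.Empty, StringFormat.GenericTypographic);

                RectangleF bounds = gp.GetBounds();
                float scale = targetWidth / bounds.Width;

                using (var m = new Matrix())
                {
                    m.Scale(scale, scale);
                    m.Translate(-bounds.X, -bounds.Y); // move glyphs to the origin first
                    gp.Transform(m);
                }

                bounds = gp.GetBounds();
                using (var bmp = new Bitmap(targetWidth, (int)Math.Ceiling(bounds.Height)))
                using (var g = Graphics.FromImage(bmp))
                {
                    g.SmoothingMode = SmoothingMode.AntiAlias;
                    g.Clear(Color.White);
                    g.FillPath(Brushes.Black, gp);
                    bmp.Save(path, ImageFormat.Png);
                }
            }
        }
    }

    // Usage: SignText.RenderFitted("BIG SALE", 400, "sign.png");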
A possible cause for slow interop is the creation (and destruction) of Word instances.
Use .docx and change the XML directly.

Analyzing audio to create Guitar Hero levels automatically

I'm trying to create a Guitar-Hero-like game (something like this) and I want to be able to analyze an audio file given by the user and create levels automatically, but I am not sure how to do that.
I thought maybe I should use BPM detection algorithm and place an arrow on a beat and a rail on some recurrent pattern, but I have no idea how to implement those.
Also, I'm using NAudio's BlockAlignReductionStream, which has a Read method that copies byte[] data, but what happens when I read a 2-channel audio file? Does it read 1 byte from the first channel and 1 byte from the second (because it says 16-bit PCM)? And does the same happen with 24-bit and 32-bit float?
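As background on that last question: 16-bit stereo PCM interleaves whole samples rather than single bytes; each 4-byte frame is a 2-byte left sample followed by a 2-byte right sample (24-bit and 32-bit float formats widen the samples but keep the same interleaved frame layout). A sketch of splitting such a buffer into channels:

    using System;
    using System.Collections.Generic;

    static class PcmUtil
    {
        // Splits an interleaved 16-bit stereo PCM buffer into per-channel
        // floats normalized to [-1, 1).
        public static void SplitChannels(byte[] buffer, int bytesRead,
                                         List<float> left, List<float> right)
        {
            for (int i = 0; i + 3 < bytesRead; i += 4) // 4 bytes per stereo frame
            {
                short l = BitConverter.ToInt16(buffer, i);
                short r = BitConverter.ToInt16(buffer, i + 2);
                left.Add(l / 32768f);
                right.Add(r / 32768f);
            }
        }
    }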
Beat detection (or more specifically BPM detection)
Beat detection algorithm overview for using a comb filter:
http://www.clear.rice.edu/elec301/Projects01/beat_sync/beatalgo.html
Looks like they do:
A fast Fourier transform
Hanning Window, full-wave rectification
Multiple low pass filters; one for each range of the FFT output
Differentiation and half-wave rectification
Comb filter
Lots of algorithms you'll have to implement here. Comb filters are supposedly slow, though. The wiki article didn't point me at other specific methods.
Edit: This article has information on streaming statistical methods of beat detection, which sounds like a great idea: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf - I'm betting they run better in real time, though they are less accurate.
BTW I just skimmed and pulled out keywords. I've only toyed with FFT, rectification, and attenuation filters (low-pass filter). The rest I have no clue about, but you've got links.
This will all get you the BPM of the song, but it won't generate your arrows for you.
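For a taste of the statistical approach from the flipcode paper above, here is a minimal sketch: flag a beat whenever a block's instant energy jumps well above the recent average energy. The block size, history length, and sensitivity are assumptions to tune:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class BeatDetector
    {
        // Returns the sample indices where a beat likely starts.
        public static List<int> DetectBeats(float[] samples)
        {
            const int blockSize = 1024;     // ~23 ms at 44.1 kHz
            const int historyBlocks = 43;   // ~1 second of energy history
            const float sensitivity = 1.4f; // threshold multiplier, genre-dependent

            var beats = new List<int>();
            var history = new Queue<float>();

            for (int start = 0; start + blockSize <= samples.Length; start += blockSize)
            {
                float energy = 0f;
                for (int i = 0; i < blockSize; i++)
                    energy += samples[start + i] * samples[start + i];

                if (history.Count == historyBlocks)
                {
                    if (energy > sensitivity * history.Average())
                        beats.Add(start);
                    history.Dequeue();
                }
                history.Enqueue(energy);
            }
            return beats;
        }
    }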
Level generation
As for "place an arrow on a beat and a rail on some recurrent pattern", that is going to be a bit trickier to implement to get good results.
You could go with a more aggressive content extraction approach, and try to pull the notes out of the song.
You'd need to use beat detection for this part too. This may be similar to BPM detection above, but at a different range, with a band-pass filter for the instrument range. You also would swap out or remove some parts of the algorithm, and would have to sample the whole song since you're not detecting a global BPM. You'd also need some sort of pitch detection.
I think this approach will be messy and will guarantee you need to hand-scrub the results for every song. If you're okay with this, and just want to avoid the initial hand transcription work, this will probably work well.
You could also try to go with a content generation approach.
Most procedural content generation has been done in a trial-and-error manner, with people publishing or patenting algorithms that don't completely suck. Often there is no real qualitative analysis that can be done on content generation algorithms because they generate aesthetics. So you'd just have to pick ones that seem to give pleasing sample results and try it out.
Most algorithms are centered around visual content generation, including terrain, architecture, humanoids, plants etc. There is some research on audio content generation, Generative Music, etc. Your requirements don't perfectly match either of these.
I think algorithms for procedural "dance steps" (if such a thing exists - I only found animation techniques) or Generative Music would be the closest match, if driven by the rhythms you detect in the song.
If you want to go down the composition generation approach, be prepared for a lot of completely different algorithms that are usually just hinted about, but not explained in detail.
E.g.:
http://tones.wolfram.com/about/faqs/howitworks.html
http://research.microsoft.com/en-us/um/redmond/projects/songsmith/

Pattern (regex) based searching systems

I'm looking for a way to search through terabytes of data for patterns matching regexes. The implementation does need to support a lot of the finer capabilities of regexes, such as beginning and end of line data, full TR1 support (preferably with POSIX and/or PCRE support), and the like. We're effectively using this application to test policy regarding storage of potentially sensitive information.
I've looked into indexing solutions, but the majority of the commercial suites don't seem to have the finer regex capabilities we'd like (to date, they've all utterly failed at parsing the complex regexes we're using).
This is a complicated problem because of the sheer amount of data we have and the limited system resources we can dedicate to the task of scanning (not much; it's just a check on policy compliance, so there isn't much of a budget for hardware).
I looked into Lucene, but I'm a little hesitant about using index systems that aren't fully capable of dealing with our regex battery, and while searching through the entire dataset would remedy this problem, we'd have to let the servers chug along performing these actions for at least a couple of weeks.
Any suggestions?
PowerGREP can handle any regular expression and has been designed for exactly this purpose. I've found it to be extremely fast searching through large amounts of data, but I haven't tried it on the order of terabytes yet. But since there's a 30 day trial, it's worth a shot, I'd say.
It's especially powerful when it comes to searching specific parts of files. You can section the file according to your own criteria, and then apply another search only to those sections. Plus, it has very good reporting capabilities.
You might want to take a look at Apache Hadoop. Enormous sites like Yahoo and Facebook use Hadoop for a variety of things, one of them being processing multi-TB of text logs.
In the Hadoop documentation there is an example of a distributed Grep that could be scaled to handle any conceivable data set size.
There is also a SequenceFileInputFilter.RegexFilter in the Hadoop API if you wanted to roll your own solution.
I can only offer a high-level answer. Building on Tim's and shadit's answers, use a two-pass approach implemented as a MapReduce algorithm on EC2 or Azure Compute. In each pass the Map could take a chunk of data with an identifier and return to Reduce the identifier if a match is found, else a null value. Scale it as wide as you need to shrink the processing time.
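Conceptually, the map and reduce steps described above are tiny; a sketch of their shape (not tied to any particular MapReduce framework):

    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    static class GrepJob
    {
        // Map: emit the chunk's identifier when the pattern matches, else null.
        public static string Map(string chunkId, string chunk, Regex pattern)
        {
            return pattern.IsMatch(chunk) ? chunkId : null;
        }

        // Reduce: keep only the identifiers of chunks that matched.
        public static List<string> Reduce(IEnumerable<string> mapped)
        {
            return mapped.Where(id => id != null).ToList();
        }
    }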
The grep program is highly optimized for regex searching in files, to the point where I would say you could not beat it with any general-purpose regex library. Even that would be impractically slow for searching terabytes, so I think you're out of luck on doing full regex searches.
One option might be to use an indexer as a first-pass to find likely matches, then extract some bytes on either side of each match and run a full regex match on it.
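A sketch of that two-pass idea in C#; the literal "CONFIDENTIAL" pre-filter and the policy regex are made-up examples, and a real pass would stream files in chunks rather than reading each one whole:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text.RegularExpressions;

    static class TwoPassScan
    {
        // Pass 1: cheap literal scan. Pass 2: full regex over a small
        // window of context around each candidate hit.
        public static IEnumerable<string> FindMatches(string path)
        {
            var fullPattern = new Regex(@"CONFIDENTIAL:\s+\d{4}-\d{4}",
                                        RegexOptions.Multiline);
            const int window = 256; // characters of context on each side

            string text = File.ReadAllText(path);
            int pos = 0;
            while ((pos = text.IndexOf("CONFIDENTIAL", pos, StringComparison.Ordinal)) >= 0)
            {
                int lo = Math.Max(0, pos - window);
                int hi = Math.Min(text.Length, pos + window);
                Match m = fullPattern.Match(text.Substring(lo, hi - lo));
                if (m.Success)
                    yield return m.Value;
                pos++;
            }
        }
    }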
Disclaimer: I am not a search expert.
If you really need all the generality of regexes, then there's going to be nothing better than trawling through all the data (but see the comments below on speeding that up).
However, I would guess that is not really the case. So the first thing to do is see if you can use an index to identify possible documents. If, for example, you know that all your matches will include a word (any word), then you can index the words, use the index to find the (hopefully small) set of documents that include that word, and then use grep or an equivalent only on those files.
So, for example, maybe you need to find documents that have "FoObAr" at the start of a line. You would start with a caseless index to identify files that have "foobar" anywhere, and then grep (only) those for "^FoObAr".
Next, how to grep as quickly as possible. You're likely going to be limited by I/O speed, so look at using several disks (there may be no need to use RAID; you could just have one thread per disk). Also, consider compression: you don't need random access to these files, and if they are text (I assume they are if you are grepping them) then they will compress nicely. That will reduce the amount of data you need to read (and store).
Finally, note that if your index doesn't work for ALL queries, then it's probably not worth using. You can "grep" for all expressions in a single pass, and the expensive process is reading the data, not the details of the grep, so even if there is "just one" query that cannot be indexed, and you therefore need to scan everything, then building and using an index is probably not a good use of your time.
