Schedule with Constraints - C#

I want to schedule tasks with constraints (similar to the job-shop scheduling problem) and thought I could use something like the Microsoft Solver Foundation (I need to use C#). But as far as I know, it can only solve problems by finding the optimal maximum or minimum, which takes way too long. I need an approximation: the schedule does not have to be optimal with respect to total time (just as good as possible), but all the constraints must be fulfilled.
Any ideas how to approach this problem?

I would suggest using the Z3 solver. It provides a C# API. Basically, it is an SMT solver, which searches for a 'good enough' solution with respect to the given constraints. It can be rather difficult to define your problem in the SMT-LIB language, though.
If that's too hard for you, look at the MiniZinc or Clingo solvers - just generate the problem formulation as a text file, run the solver as a separate process from your C# code, and parse the solution back from its output text file.
EDIT
If you want to minimize the length of the schedule, you can try the following approach. Assume there is a schedule of length K. Is your planning problem satisfiable under this assumption? Call the solver to find out! Generate several problems with different values of K and run the solver iteratively, using binary search over K to reduce the number of trials.
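The binary search over K can be sketched as follows. Here `isFeasible` is a hypothetical stand-in for whatever solver you end up calling (Z3, MiniZinc, Clingo): it must answer "does a schedule of length at most k exist that satisfies all constraints?"

```csharp
using System;

class ScheduleLengthSearch
{
    // Returns the smallest k in [lo, hi] for which isFeasible(k) is true,
    // assuming feasibility is monotone: if a schedule of length k exists,
    // one of every greater length exists too.
    public static int MinFeasibleLength(int lo, int hi, Func<int, bool> isFeasible)
    {
        while (lo < hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (isFeasible(mid))
                hi = mid;      // a schedule of length mid exists; try shorter
            else
                lo = mid + 1;  // infeasible; must allow more time
        }
        return lo;
    }

    static void Main()
    {
        // Toy stand-in: pretend the shortest feasible schedule has length 42.
        int best = MinFeasibleLength(1, 1000, k => k >= 42);
        Console.WriteLine(best); // prints 42
    }
}
```

Each iteration costs one solver run, so finding the minimum over a range of 1000 candidate lengths takes about ten runs instead of a linear scan.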

Related

Math.Net and alglib returning different FFT outputs by default

I am developing an application in C# with spectrogram drawing functionality.
For my first try, I used MathNet.Numerics, and now I am continuing development with alglib. When I changed from one to the other, I noticed that their outputs differ. MathNet applies some kind of correction by default, which alglib seems to omit. I am not really into signal processing, and also a newbie to programming, so I could not figure out exactly where the difference comes from.
MathNet default output (raw magnitude) values are ranging from ~0.1 to ~274 in my case.
And with alglib I get values ranging from ~0.2 to ~6220.
I found that MathNet's Fourier.Forward uses a default scaling option. Here it says that FourierOptions.Default is "Universal; Symmetric scaling and common exponent (used in Maple)."
https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/FourierOptions.htm
If I use FourierOptions.NoScaling, the output is the same as what alglib produces.
In MathNet, I used Fourier.Forward function: https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/Fourier.htm#Forward
In case of alglib, I used fftr1d function: https://www.alglib.net/translator/man/manual.csharp.html#sub_fftr1d
What is that difference in their calculation?
What is the function that I could maybe use to convert alglib output magnitude to that of MathNet, or vice versa?
In what cases should I use these different "scalings"? What are they for exactly?
Please share your knowledge. Thanks in advance!
I worked it out by myself, after reading a bunch of posts mentioning different methods of FFT output scaling. I still find this aspect of FFT processing heavily underdocumented everywhere. I have not yet found any reliable source that explains what these scalings are used for, or which fields of science or which processing methods use them.
So far I have found four different kinds of scaling applied to the raw FFT output (the complex values' magnitudes). This means multiplying them by: 1. 1/numSamples 2. 2/numSamples 3. 1/sqrt(numSamples) 4. 1 (no scaling)
The MathNet.IntegralTransforms.Fourier.Forward function (and, according to various posts on the net, possibly Matlab and Maple too) uses the third one by default. In my opinion, this results in more distinguishable graphical output when using logarithmic colouring.
I would still be grateful if you know something more and share your ideas, or if you can reference a good paper explaining these.
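Since the conventions only differ by a global factor, converting one library's magnitudes to the other's convention is a single multiplication. A minimal sketch, assuming the magnitude array covers all N FFT bins (so its length is numSamples):

```csharp
using System;

class FftScaling
{
    // Converts unscaled magnitudes (alglib-style, FourierOptions.NoScaling)
    // to the symmetric 1/sqrt(N) convention (MathNet's default).
    // Assumes the array length equals the FFT size N.
    public static double[] UnscaledToSymmetric(double[] magnitudes)
    {
        double factor = 1.0 / Math.Sqrt(magnitudes.Length);
        double[] result = new double[magnitudes.Length];
        for (int i = 0; i < magnitudes.Length; i++)
            result[i] = magnitudes[i] * factor;
        return result;
    }
}
```

Multiplying by sqrt(N) instead goes the other way, from MathNet's default back to the raw, unscaled values.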

calculating fft with complex number in c#

I use this formula to get the frequency of a signal, but I don't understand how to implement the code with complex numbers. There is an "i" in the formula that refers to Math.Sqrt(-1). How can I apply this formula to a signal in C# with the NAudio library?
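Regarding the "i": .NET provides System.Numerics.Complex, so you never need Math.Sqrt(-1) yourself. A naive DFT sketch (NAudio is not involved here; it would only supply the sample buffer):

```csharp
using System;
using System.Numerics;

class NaiveDft
{
    // Naive O(N^2) discrete Fourier transform. For real work use a library
    // FFT; this only illustrates how the complex exponential
    // e^(-2*pi*i*k*t/N) is written with System.Numerics.Complex.
    public static Complex[] Transform(double[] samples)
    {
        int n = samples.Length;
        var result = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            Complex sum = Complex.Zero;
            for (int t = 0; t < n; t++)
            {
                double angle = -2.0 * Math.PI * k * t / n;
                // e^(i*angle) = cos(angle) + i*sin(angle)
                sum += samples[t] * Complex.FromPolarCoordinates(1.0, angle);
            }
            result[k] = sum;
        }
        return result;
    }
}
```

The dominant frequency then corresponds to the bin index with the largest magnitude (for real input, searching only the first N/2 bins).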
If you want to go back to a basic level then:
You'll want to use some form of probabilistic model, something like a hidden Markov model (HMM). This will allow you to test what the user says against a collection of models, one for each word they are allowed to say.
Additionally, you will want to transform the audio waveform into something that your program can more easily interpret, using something like a fast Fourier transform (FFT) or a continuous wavelet transform (CWT).
The steps would be:
Get audio
Remove background noise
Transform via FFT or CWT
Detect peaks and other features of the audio
Compare these features with your HMMs
Pick the HMM with the best result above a threshold.
Of course this requires you to previously train the HMMs with the correct words.
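The "detect peaks" step above can start as something as simple as a local-maximum scan over the FFT magnitude spectrum. A minimal sketch (the threshold value is something you would tune for your signal):

```csharp
using System;
using System.Collections.Generic;

class PeakDetector
{
    // Returns the indices of bins that are strict local maxima
    // and exceed the given magnitude threshold.
    public static List<int> FindPeaks(double[] magnitudes, double threshold)
    {
        var peaks = new List<int>();
        for (int i = 1; i < magnitudes.Length - 1; i++)
        {
            if (magnitudes[i] > threshold &&
                magnitudes[i] > magnitudes[i - 1] &&
                magnitudes[i] > magnitudes[i + 1])
                peaks.Add(i);
        }
        return peaks;
    }
}
```

Real feature extraction for speech (e.g. MFCCs) is considerably more involved, which is one reason to lean on an existing framework, as the answers below suggest.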
A lot of languages actually provide built-in libraries for this. One example, in C#.NET, is at this link. It gives you a step-by-step guide to setting up a speech recognition program. It also abstracts away the low-level detail of parsing audio for particular phonemes etc. (which, frankly, is pointless given the number of libraries available, unless you wish to write a highly optimized version).
It is a difficult problem nonetheless, and you will have to use an ASR framework to do it. I have done something slightly more complex (~100 words) using Sphinx4. You can also use HTK.
In general what you have to do is:
write down all the words that you want to recognize
determine the syntax of your commands like (direction) (amount)
Then choose a framework, get an acoustic model, generate a dictionary and a language model compatible with that framework. Then integrate the framework into your application.
I hope I have mentioned all important things you need to do. You can google them separately or go to your chosen framework's tutorial.
Your task is relatively simple in terms of speech recognition and you should get good results if you complete it.

Wanted: C# programming library for mathematics

What I need is: plot creation, routines for interpolation, and routines for computing such things as

    max over x in (-a, a) of |f(x) - L(x)|

and

    integral over (-a, a) of |f(x) - L(x)| dx

where L(x) is an interpolation built from some data points generated from the original, known function f(x). Meaning: we know the original function, and we have a known range (-a, a). We need a library to help us calculate data points in that range, and to compute L(x), a polynomial, from that data in that range.
I need this library to be free and opensource
Perhaps Math.NET can help you.
Check this other answer https://stackoverflow.com/questions/1387430/recommended-math-library-for-c-net, in particular several people think that MathDotNet is nice.
For plot creation, you may want excel interop (why not ?), or ILNumerics.NET.
But I don't understand the other requirements. You want to measure interpolation errors (in the max and L1 norms) for a function you don't know? That is not a programming question, it is a math question.
I suggest you look at interpolation libraries (Math.NET contains one for instance, but many others also do) and see if they provide such things as "error estimation".
Otherwise, what you need is a math book that explains the assumptions on f required to estimate the interpolation error. It depends on what you know about the regularity of f and on the interpolation method.
Edit, regarding the additional information provided: there are closed-form formulas for interpolation errors (see here as a starting point). But any numerical integration routine (which Math.NET does not provide) will get you what you want. Have a look at the libraries other people pointed out; this link will get you started.
Since you seem to have regular functions (you are doing polynomial interpolation), I'd go with simple Romberg integration, which is quite simple to implement in case you don't find a library that suits your needs (I doubt that). Have a look at Numerical Recipes, 3rd edition, for sample code.
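A minimal Romberg sketch: repeated trapezoid estimates refined by Richardson extrapolation. This is enough to integrate |f(x) - L(x)| over (-a, a) once you have both functions, assuming the integrand is reasonably smooth:

```csharp
using System;

class Romberg
{
    public static double Integrate(Func<double, double> f, double a, double b,
                                   int maxLevels = 12)
    {
        var r = new double[maxLevels, maxLevels];
        double h = b - a;
        r[0, 0] = 0.5 * h * (f(a) + f(b)); // coarsest trapezoid estimate
        for (int i = 1; i < maxLevels; i++)
        {
            h /= 2.0;
            // Refine the trapezoid rule: add only the new midpoints.
            double sum = 0.0;
            int points = 1 << (i - 1);
            for (int k = 0; k < points; k++)
                sum += f(a + (2 * k + 1) * h);
            r[i, 0] = 0.5 * r[i - 1, 0] + h * sum;
            // Richardson extrapolation along the row.
            for (int j = 1; j <= i; j++)
            {
                double factor = Math.Pow(4.0, j);
                r[i, j] = (factor * r[i, j - 1] - r[i - 1, j - 1]) / (factor - 1.0);
            }
            if (Math.Abs(r[i, i] - r[i - 1, i - 1]) < 1e-12)
                return r[i, i]; // converged early
        }
        return r[maxLevels - 1, maxLevels - 1];
    }
}
```

For a polynomial interpolant minus a smooth f, convergence is typically very fast, so a dozen levels is more than enough in practice.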
What about using Mathematica?
Math.NET and ILNumerics.Net are both open source and will both solve your equations.

Programmatically checking code complexity, possibly via c#?

I'm interested in data mining projects, and have always wanted to create a classification algorithm that would determine which specific check-ins need code-reviews, and which may not.
I've developed many heuristics for my algorithm, although I've yet to figure out the killer...
How can I programmatically check the computational complexity of a chunk of code?
Furthermore, and even more interesting: how could I use not just the code but the diff that the source control repository provides to obtain better data?
I.e., if I add complexity to the code I'm checking in, but it reduces complexity in the code that is left, shouldn't that be considered 'good' code?
Interested in your thoughts on this.
UPDATE
Apparently I wasn't clear. I want this
double codeValue = CodeChecker.CheckCode(someCodeFile);
I want a number to come out based on how good the code was. I'll start with numbers like VS2008 gives when you calculate complexity, but would like to move to further heuristics.
Anyone have any ideas? It would be much appreciated!
Have you taken a look at NDepend? This tool can be used to calculate code complexity and supports a query language by which you can get an incredible amount of data on your application.
The NDepend web site contains a list of definitions of various metrics. Deciding which are most important in your environment is largely up to you.
NDepend also has a command line version that can be integrated into your build process.
Also, Microsoft's Code Analysis (ships with VS Team Suite) includes metrics that check the cyclomatic complexity of code and raise a build error (or warning) if this number exceeds a certain threshold.
I don't know offhand, but it may be worth checking whether this threshold is configurable to your requirements. You could then modify your build process to run code analysis any time something is checked in.
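As a very crude stand-in for the `CodeChecker.CheckCode` shape the question asks for, you could approximate cyclomatic complexity by counting branching keywords in the source text. A real implementation would parse the code (e.g. with Roslyn) rather than match words, but this shows the shape of the heuristic; the keyword list and weighting here are made up for illustration:

```csharp
using System;
using System.Text.RegularExpressions;

class CodeChecker
{
    static readonly string[] BranchTokens =
        { "if", "else", "for", "foreach", "while", "case", "catch", "&&", "||" };

    // Rough cyclomatic complexity estimate: 1 plus the number of
    // decision points found by naive keyword/operator matching.
    public static int ApproximateComplexity(string sourceCode)
    {
        int complexity = 1; // one path through the code to begin with
        foreach (var token in BranchTokens)
        {
            if (char.IsLetter(token[0]))
                complexity += Regex.Matches(sourceCode, $@"\b{token}\b").Count;
            else
                complexity += Regex.Matches(sourceCode, Regex.Escape(token)).Count;
        }
        return complexity;
    }
}
```

Naive matching miscounts keywords inside strings and comments, which is exactly the kind of thing a syntax-tree-based tool like NDepend or the VS metrics avoids.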
See Semantic Designs' C# Metrics Tool for a tool that computes a variety of standard metrics, both over complete files and over all reasonable subdivisions (methods, classes, ...).
The output is an XML document, but extracting the value(s) you want from that should be trivial with an XML reader.

How do text differencing applications work?

How do applications like DiffMerge detect differences in text files, and how do they determine when a line is new, and not just on a different line than in the file being checked against?
Is this something that is fairly easy to implement? Are there already libraries to do this?
Here's the paper that served as the basis for the UNIX command-line tool diff.
That's a complex question. Performing a diff means finding the minimum edit distance between the two files. That is, the minimum number of changes you must make to transform one file into the other. This is equivalent to finding the longest common subsequence of lines between the two files, and this is the basis for the various diff programs. The longest common subsequence problem is well known, and you should be able to find the dynamic programming solution on google.
The trouble with the dynamic programming approach is that it's O(n^2). It's thus very slow on large files and unusable for large, binary strings. The hard part in writing a diff program is optimizing the algorithm for your problem domain, so that you get reasonable performance (and reasonable results). The paper "An Algorithm for Differential File Comparison" by Hunt and McIlroy gives a good description of an early version of the Unix diff utility.
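The dynamic-programming approach described above can be sketched directly: compute the LCS table over lines, then walk it to emit an edit script. Lines in the LCS are unchanged; everything else in the old file is a deletion and everything else in the new file is an insertion:

```csharp
using System;
using System.Collections.Generic;

class LineDiff
{
    // Textbook O(n*m) longest-common-subsequence diff over lines.
    public static List<string> Diff(string[] oldLines, string[] newLines)
    {
        int n = oldLines.Length, m = newLines.Length;
        // lcs[i, j] = length of the LCS of oldLines[i..] and newLines[j..]
        var lcs = new int[n + 1, m + 1];
        for (int i = n - 1; i >= 0; i--)
            for (int j = m - 1; j >= 0; j--)
                lcs[i, j] = oldLines[i] == newLines[j]
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        // Walk the table, emitting unchanged ("  "), deleted ("- "),
        // and inserted ("+ ") lines.
        var script = new List<string>();
        int a = 0, b = 0;
        while (a < n && b < m)
        {
            if (oldLines[a] == newLines[b]) { script.Add("  " + oldLines[a]); a++; b++; }
            else if (lcs[a + 1, b] >= lcs[a, b + 1]) { script.Add("- " + oldLines[a]); a++; }
            else { script.Add("+ " + newLines[b]); b++; }
        }
        while (a < n) script.Add("- " + oldLines[a++]);
        while (b < m) script.Add("+ " + newLines[b++]);
        return script;
    }
}
```

The O(n*m) table is exactly the scalability problem mentioned above; production tools like Unix diff use the Hunt-McIlroy or Myers refinements to avoid materializing it for large inputs.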
There are libraries. Here's one: http://code.google.com/p/google-diff-match-patch/
StackOverflow uses Beyond Compare for its diff. I believe it works by calling Beyond Compare from the command line.
It actually is pretty simple; diff programs are, most of the time, based on the longest common subsequence, which can be solved using a graph algorithm.
This web page gives example implementations in C#.
