I want to compare the contrast of several images, and for that purpose I need to measure it. Specifically, I need the local contrast, not the global contrast. I already have a solution that simply compares the neighboring pixels of each image pixel. The results are OK, but now I need to compare it with another algorithm that I do not understand. It is described in this paper by Peli: http://www.eri.harvard.edu/faculty/peli/papers/ContrastJOSA.pdf and is called "Band-Limited Contrast".
I understand it like this: transfer the image to frequency space and apply a low-pass filter there. Then apply another low-pass filter with a frequency range one step higher? I really don't know what to do next. When I apply a low-pass filter with a range from 0 to 100 and another with a range from 0 to 101, then divide them and subtract 1, the result is not what I expected.
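Here is roughly what I tried, as a Python/numpy sketch (the radial frequency mask is my own naive construction, and the cutoffs 100 and 101 are just the values from my experiment, not something taken from the paper):

import numpy as np

def lowpass(img, cutoff):
    # zero out all frequency components farther than `cutoff` from the DC term
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    f[radius > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

lp_a = lowpass(image, 100)   # image: the input as a 2-D float array
lp_b = lowpass(image, 101)
result = lp_b / lp_a - 1     # this is the step that doesn't give what I expected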
Does anybody know this kind of filter?
Thanks in advance
Matthias
I recently got an assignment where I have a 4-by-5 matrix of 0s and 1s (the zeros represent the cut parts of the matrix). I need to find a way to calculate how many pieces the matrix will be sliced into. As I mentioned, the zeros represent the cut parts, so if a straight line of zeros runs from one border to another, the matrix is sliced along that line (in the picture I marked where the matrix would be split by such a line of zeros).
So guys, I know you won't entirely solve this for me, and I don't need that, but what I need is to understand this:
First, how should I tell the program in which direction (as it is going through the matrix) it should be going? I have an idea with enumerations.
Second, what kind of conditional statement should I use so that the program recognizes a line of zeros (if there is one)?
Any help would be appreciated :)
You did not specify any requirements for the algorithm (such as time and space complexity), so one approach, which corresponds to some well-known solutions, would be:
Go in all 4 directions
Don't condition on the 0s to detect the lines; instead look at the 1s and find which piece each of them belongs to.
A general algorithm for this can be implemented as follows:
Create a helper matrix of the same size, a function that issues a new symbol on demand (for example by incrementing a counter each time a symbol is requested), and a data structure to store collisions
Go in all 4 directions starting from anywhere
Whenever you find a 0 in the original matrix, issue a new symbol for each of the new directions you take from there
Whenever you find a 1, try to store the current symbol in the helper matrix. If there is already a value there, then:
Store in the collisions data structure that you found a collision between the 2 symbols
Don't continue in any direction from this cell.
This traverses each cell at most 4 times, so the time complexity is O(n), where n is the number of cells, and when you are done you will have a data structure with all the collisions.
Now all you need to do is merge the entries in the collisions data structure to count how many unique pieces you really have.
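A minimal sketch of the counting step in Python, simplified to a plain flood fill that labels each connected region of 1s directly (this collapses the symbol/collision bookkeeping described above into one pass per region; 4-neighbour connectivity is an assumption):

from collections import deque

def count_pieces(matrix):
    rows, cols = len(matrix), len(matrix[0])
    labels = [[0] * cols for _ in range(rows)]  # helper matrix, 0 = unlabeled
    pieces = 0
    for r in range(rows):
        for c in range(cols):
            if matrix[r][c] == 1 and labels[r][c] == 0:
                pieces += 1                     # new piece found, flood fill it
                queue = deque([(r, c)])
                labels[r][c] = pieces
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and matrix[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = pieces
                            queue.append((ny, nx))
    return pieces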
First of all, I apologize if the question has already been asked, but in about 10 hours of intensive research on every single link Google offered for every single phrase I gave it, I wasn't able to find anything that could help me with my problem.
What I want to do is the following:
I retrieve two Excel sheets with data from two different scientific measurements. Each sheet contains information that can readily be compared with the other.
The only difference between the two sheets is the number of data points they contain.
For example: The first sheet contains data for a time span of 200 seconds, with one point representing 1 second. The second sheet also contains data for the same time span, but with one point representing 0.5 seconds.
The problem I have to solve is to "scale" the sheet with fewer data points so that the two can easily be compared in a single chart, with each line using the same space on the X axis.
The problem I'm having with this task is that I lack the mathematical background to create such an algorithm.
I've already created the entire application with a GUI, the import of the excel sheets and smoothing with moving average (only useful if datasets have equal length).
Any idea or link to any place where this could be explained is welcome.
I also want to say that any code I currently have is completely irrelevant to this question, it's just about an additional method with said functionality.
Thanks in advance,
marfuc
If there is a direct correlation between the data points of both sets - i.e. the times match up for both - then it might be sufficient to do a linear interpolation on the sparser set to generate the missing points.
For instance, let's say your first set of data is:
Time Value
12:00:00.0 100.0
12:00:01.0 120.0
12:00:02.0 117.5
...and your second set looks like:
Time Value
12:00:00.0 2.5
12:00:00.5 3.0
12:00:01.0 2.6
12:00:01.5 2.9
12:00:02.0 2.8
We can fill in the gaps in the first list in a couple of ways, depending on what you're trying to do with the data afterwards.
The simplest is to do a linear interpolation of the values. If the known points are equidistant from the value you're looking for (i.e. you're finding the value at the half-way point) then just average the two neighbours at each missing point:
Time        Value   Lerp
12:00:00.0  100.0
12:00:00.5          110.0
12:00:01.0  120.0
12:00:01.5          118.75
12:00:02.0  117.5
This is OK if the sample rate is high enough relative to the rate at which the input varies. I've seen a lot of audio processing algorithms use this sort of calculation for doubling the sample rate. It doesn't work so well when you have high-frequency data with sample rates too low to capture the transitions well.
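A sketch of this in Python with numpy.interp, resampling the sparser series onto the denser time grid (variable names and the times-in-seconds representation are just for illustration):

import numpy as np

# the sparse set samples every 1.0 s, the dense set every 0.5 s
sparse_times = np.array([0.0, 1.0, 2.0])
sparse_values = np.array([100.0, 120.0, 117.5])
dense_times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

# linear interpolation of the sparse values at the dense timestamps
resampled = np.interp(dense_times, sparse_times, sparse_values)
print(resampled)  # [100.0, 110.0, 120.0, 118.75, 117.5]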
The second option is to use a spline function to fit a curve against the series of points, then synthesize the missing points as offsets on the curve. This will give you smoother and more natural interpolations, with humps in the data looking much more realistic. This will also give you a fairly good way to offset your data if the timing isn't well aligned between the data sets - calculate each point as an offset along the curve with distance equal to the timing offset. There are plenty of spline implementations out there that you could use for this. I'd suggest Catmull-Rom as a starting algorithm.
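For reference, a minimal uniform Catmull-Rom sketch in Python (p0..p3 are four consecutive samples, and t in [0, 1] positions the result between p1 and p2; the 115.0 in the example is a made-up next sample, since the data above only shows three points):

def catmull_rom(p0, p1, p2, p3, t):
    # standard uniform Catmull-Rom basis; returns a point between p1 and p2
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3
    )

mid = catmull_rom(100.0, 120.0, 117.5, 115.0, 0.5)  # midpoint between 120.0 and 117.5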
Warning: If you're doing some sort of statistical analysis on the outputs then you're not going to get good results doing this, no matter how you do it. Cut the bigger group down instead of fabricating data into the smaller group if analysis is your goal.
Objective: I want to count the objects in the image below.
What ideas could work here?
I tried FindContour(). It returns the boundary, and then I need to use those contour points. Using matchShape() and Contour.slice() is not helping. Any working example for this case would be very helpful.
Any help will be appreciated.
Basically, perform a normalized cross-correlation and find the relevant peaks. To improve your results, you need to rethink/redo the earlier steps that produced the image you are showing, and consider whether those were actually the best/correct steps.
Here is the normalized cross-correlation result cropped to the original size, and the non-black points where the result is greater than 0.35 (the implementation I used produces values in the range [-1, 1]).
The right image is trivially binarized, and gives 5 components, which is your result.
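A sketch of that pipeline in OpenCV/Python (the file names and the template crop are assumptions; cv2.TM_CCOEFF_NORMED produces values in roughly [-1, 1], which matches the 0.35 threshold above):

import cv2
import numpy as np

# hypothetical inputs: the preprocessed image and one cropped example object
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# normalized cross-correlation; peak values mark likely object locations
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# binarize at the threshold, then count connected components (minus background)
mask = (result > 0.35).astype(np.uint8)
count, _ = cv2.connectedComponents(mask)
print(count - 1)  # the number of detected objects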
I'm looking for an efficient way to find an object pattern in an array.
Here is the problem I have to tackle. I'm writing a tangible-interface application that collects data from the webcam and converts it into a black-and-white image, from which I create an array. The created array looks similar to this:
1111111111111111111111111111
1111110001111111111000111111
1111100000111111110000011111
1111100000111111110000011111
1111110001111111111000111111
1111111111111111111111111111
Where the zeros represent the color black in the image. I have about 32 circles (4 rows with 8 circles each) and I need an efficient way to find their coordinates. I don't need the whole shape, just one set of coordinates per circle.
Thank you for the help.
Regards,
Teodor Stoyanov
Three options that I can see immediately (Tuple is used to represent the coordinates in your matrix):
1. You could use a BitArray for each point in the matrix, where a bit is set if the coordinate holds a 0. Storage cost would be O(row length x column length). Retrieval is O(1) if you know the coordinates you want to check; otherwise it is O(n) if you just want to find all 0s.
2. You could use a List<Tuple<int,int>> to store only the coordinates of each 0 in the matrix. Storage cost would be O(m), m being the number of 0s. Retrieval is also O(m).
3. Alternatively to option 2, you could use a Dictionary<Tuple<int, int>, bool>, which allows O(1) retrieval time if you know the coordinates you want to check.
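As a quick illustration of options 2 and 3, a Python analogue using a set of (row, col) tuples (Python rather than the C# types named above; a set gives the same O(1) membership check as the Dictionary):

matrix_rows = [
    "1111111111111111111111111111",
    "1111110001111111111000111111",
    "1111100000111111110000011111",
    "1111100000111111110000011111",
    "1111110001111111111000111111",
    "1111111111111111111111111111",
]

zeros = {(r, c)
         for r, row in enumerate(matrix_rows)
         for c, ch in enumerate(row)
         if ch == "0"}

print((1, 6) in zeros)  # O(1) membership test, like the Dictionary option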
Pick an arbitrary 0 and do a flood fill from it. Average the coordinates of all the 0s you find to get the center of the circle. Erase the 0s you flooded and repeat.
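A minimal sketch of that approach in Python (4-neighbour flood fill is an assumption, and "erasing" is done by overwriting flooded cells with 1):

def circle_centers(matrix):
    # matrix: list of lists of ints (0 = black, 1 = white), modified in place
    rows, cols = len(matrix), len(matrix[0])
    centers = []
    for r in range(rows):
        for c in range(cols):
            if matrix[r][c] == 0:
                stack, filled = [(r, c)], []
                matrix[r][c] = 1                 # erase as we flood
                while stack:
                    y, x = stack.pop()
                    filled.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < rows and 0 <= nx < cols and matrix[ny][nx] == 0:
                            matrix[ny][nx] = 1
                            stack.append((ny, nx))
                # average the flooded coordinates to get this circle's center
                cy = sum(p[0] for p in filled) / len(filled)
                cx = sum(p[1] for p in filled) / len(filled)
                centers.append((cy, cx))
    return centers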
There really isn't an easy way to do this, but one thing you can try is to tinker with Artificial Neural Networks. They let you train on many example inputs and then produce an output for new data. If you build the network right, it will adjust its weights over many training iterations.
Sorry, but I doubt you're going to get the exact solution spelled out for you in code. Although I haven't used any of these libraries or resources, a quick glance over them makes them look pretty decent:
http://franck.fleurey.free.fr/NeuralNetwork/
http://sourceforge.net/projects/neurondotnet/
I have a single-dimensional array of floating point values (C# doubles, FYI) and I need to find the "peak" of the values ... as if graphed.
I can't just take the highest value, because the peak is actually a plateau with small fluctuations, sitting in the middle of a bunch of noise. I'm looking for a solution that would give me the center of this plateau.
An example array might look like this:
1,2,1,1,2,1,3,2,4,4,4,5,6,8,8,8,8,7,8,7,9,7,5,4,4,3,3,2,2,1,1,1,1,1,2,1,1,1,1
where the peak is somewhere in the run of 8s in the middle.
Any ideas?
You can apply a low-pass filter to your input array to smooth out the small fluctuations, then find the peak in the filtered data. The simplest example is probably a "boxcar" filter, where the output value is the sum of the input values within a certain distance of the current array position. In Python, it would look something like this:
def boxcar_filter(input_data, boxcar_radius):
    samplecount = len(input_data)
    filtered_data = [0.0] * samplecount
    for i in range(samplecount):
        # near the edges the boxcar runs off the input array, so skip those
        if i < boxcar_radius or i >= samplecount - boxcar_radius:
            filtered_data[i] = 0.0
        else:
            # sum of the input values within boxcar_radius of position i
            filtered_data[i] = sum(input_data[i - boxcar_radius : i + boxcar_radius + 1])
    return filtered_data
If you have some idea how wide the "plateau" will be, you can choose the boxcar radius (approximately half the expected plateau width) to detect features at the appropriate scale.
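For instance, on the example array from the question (a radius of 4 is just a guess at roughly half the plateau width):

data = [1,2,1,1,2,1,3,2,4,4,4,5,6,8,8,8,8,7,8,7,9,7,5,4,4,3,3,2,2,1,1,1,1,1,2,1,1,1,1]
filtered = boxcar_filter(data, 4)
peak_index = filtered.index(max(filtered))  # lands inside the plateau of 8s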
You first need to define what you mean by 'small'. Say a 'small' fluctuation around the maximum is defined as any value within ± ϵ of the maximum. Then it is straightforward to identify the plateau.
Pass through the data to identify the maximum and then do a second pass to identify all values that are within ± ϵ of the maximum.
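A sketch of that two-pass idea in Python (the ϵ value of 1.5 and returning the midpoint between the first and last qualifying index are my own assumptions):

def plateau_center(data, eps):
    peak = max(data)                      # first pass: find the maximum
    near = [i for i, v in enumerate(data) if v >= peak - eps]  # second pass
    return (near[0] + near[-1]) // 2      # midpoint of the plateau

data = [1,2,1,1,2,1,3,2,4,4,4,5,6,8,8,8,8,7,8,7,9,7,5,4,4,3,3,2,2,1,1,1,1,1,2,1,1,1,1]
print(plateau_center(data, eps=1.5))  # 16, inside the run of 8s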
Peak detection is one of the stages in Phase Correlation and other motion estimation algorithms used in places like video compression. One approach is this: consider a candidate for a peak and a window of a certain number of neighbours. Now fit a quadratic function using standard regression. The peak, with subpixel accuracy, is at the maximum of the fitted quadratic.
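A rough illustration in Python with numpy (the window radius and the use of polyfit for the regression are my choices; the vertex of a fitted parabola a*x^2 + b*x + c lies at -b / (2a)):

import numpy as np

def refine_peak(data, window_radius=5):
    i = int(np.argmax(data))                  # coarse candidate peak
    lo = max(0, i - window_radius)
    hi = min(len(data), i + window_radius + 1)
    x = np.arange(lo, hi)
    a, b, c = np.polyfit(x, data[lo:hi], 2)   # quadratic regression on the window
    if a >= 0:                                # no downward curvature: keep candidate
        return float(i)
    return -b / (2.0 * a)                     # vertex = sub-sample peak position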
Obviously the exact solution depends on the details. If your distribution is always as nice as in your example, you could do something like this:
def GetPeak(l):
    large = max(l) * 0.8
    above_large = [i for i in range(len(l)) if l[i] > large]
    left_peak = min(above_large)
    right_peak = max(above_large)
    return (left_peak, right_peak)
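Since the question asks for the center of the plateau, you would then take the midpoint of the returned pair, e.g. (data being the example array from the question):

left, right = GetPeak(data)
center = (left + right) // 2  # index near the middle of the plateau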