I have a trackbar associated with a picture box where I am drawing an image based on the selected zoom factor. The range is from 1% to 1,000%, so the lower you slide it, the faster it appears to zoom out.
This is expected but not desired. Is there a way to scale or interpret the slider values so that zooming appears more natural to the user, especially in the < 50% range?
This is easily done:
myTrackBar.Minimum = 0;
myTrackBar.Maximum = 3000;
...

public double RealValue
{
    get
    {
        var trackPos = myTrackBar.Value;
        return Math.Pow(10.0, trackPos / 1000.0);
    }
    set
    {
        var logValue = Math.Log10(value) * 1000;
        myTrackBar.Value = (int)logValue;
    }
}
To understand how this works, consider your range: 1 to 1000, or expressed as powers of 10, 1e0 to 1e3. Hence if we give the track bar a range from 0 to 3 and raise 10 to the value, we get a nice exponential set of values, just like you want.
But if we set the range to 0..3 we could only select from 4 different values: 0, 1, 2, 3, which would translate into 1, 10, 100 and 1000 respectively.
To give us values in between, we simply multiply the range by a thousand, giving us 3001 different values that the track bar can keep track of, and then divide the trackbar's value by a thousand.
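As a quick sanity check of the mapping (using the RealValue property above, with the 0..3000 range already set):

// position    0 -> 10^(0/1000)    =    1  (1%)
// position 1000 -> 10^(1000/1000) =   10  (10%)
// position 3000 -> 10^(3000/1000) = 1000  (1000%)
myTrackBar.Value = 1500;
Console.WriteLine(RealValue);  // 10^1.5, prints roughly 31.62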
I have a question about visualization in OpenGL. I have points in 3D space, and each point also has one extraValue representing some quantity, e.g. temperature, pressure and so on. The user chooses one of these and another method sets the extraValue of each point.
The first problem is that these values have different ranges, e.g.:
temperature: <80; 2000>
pressure: <-500; 400>
gamma: <0.5; 1.8>
...
Now I want the visualization to look plausible; for example, for temperature, 80 °C is cold, so blue, and 2000 °C is hot, so red. Similarly for the others: pressure, gamma and so on.
The second problem is that Gl.glColor3f accepts 3 parameters: red, green, blue. I have only ONE parameter for each point.
The range of each RGB component is <0; 1>, while my values have different ranges.
Does anybody have an idea, or an algorithm, that could help me with this?
Firstly, remap your value into a range of 0-1, like so:
double t = ( value - min ) / ( max - min ); // i.e. min could be 80 and max 2000
// it might be a good idea to limit t to 0-1 here, in case
// your original value could be outside the valid range
Then, do a linear interpolation between the colors, like so (pseudocode):
Color a = Blue, b = Red
double inv = 1.0 - t
Color result = Color( inv * a.R + t * b.R,
inv * a.G + t * b.G,
inv * a.B + t * b.B )
That should get you started!
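If it helps, here is the same two-step recipe as compilable C# (ValueToColor is a made-up helper name; min and max are whatever the selected quantity uses, e.g. 80 and 2000 for temperature):

static (float R, float G, float B) ValueToColor(double value, double min, double max)
{
    // remap into 0-1 and clamp, in case value is outside the valid range
    double t = Math.Max(0.0, Math.Min(1.0, (value - min) / (max - min)));
    float ft = (float)t;
    // linear interpolation: blue (0,0,1) at t = 0, red (1,0,0) at t = 1
    return (ft, 0f, 1f - ft);
}

// usage: var c = ValueToColor(1200, 80, 2000); Gl.glColor3f(c.R, c.G, c.B);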
I am writing the program in C#, but I hope C++ and C# behave exactly the same underneath.
What I want: take a grayscale image and separate the colors over 127 and under 17 into separate images. If I simply take the "white" colors and programmatically stretch them from the range (127-255) to (0-255) like
// pseudocode
int min = 127, max = 255;
for (int x = 0; x < width; x++)
    pixels[x] = pixels[x] / (max - min) * max;
then the interval is not smooth. I mean that 127 converts to 0 but 128 converts to 2, so the colors 1, 3, 5, ... do not exist.
This is the original image with alpha: [image]
This is the image with "extracted white": [image]
This is the image with "extracted black": snorgg.ru/patchwork/tst_black.png
I don't clearly understand how it can be implemented, so example code would look like:
{
    im.MagickImage image = new im.MagickImage("c:/55/11.png");
    im.MagickImage imageWhite = ExtractWhite(image);
    im.MagickImage imageBlack = ExtractBlack(image);
}
....

public static im.MagickImage ExtractWhite(im.MagickImage img)
{
    im.MagickImage result = new im.MagickImage(img);
    ?????
    ?????
    return result;
}
Thanks in advance :)
I think your calculation is wrong. You are confusing the input range with the output range. The input ranges from min to max and the output ranges from 0 to 255. It is a coincidence that your input max is equal to your output max (255).
If you want to stretch a value in the range of min ... max (= input range) to 0 ... 255 (= output range) then calculate this
int brightness = pixel[x];
if (brightness <= min) {
    pixel[x] = 0;
} else if (brightness >= max) {
    pixel[x] = 255;
} else {
    pixel[x] = 255 * (brightness - min) / (max - min);
}
Where min >= 0 and max <= 255 and min < max.
First you have to make sure the brightness is within the range min ... max, otherwise your result will exceed the range 0 ... 255. You could also limit the range of the output afterwards, but in any case you have to make a range check.
Then subtract min from the brightness. Now you have a value between 0 and (max - min). By dividing by (max - min) you get a value between 0 and 1. Multiply the result by 255 and you get a value in the desired range 0 ... 255.
Also you must be aware of the fact that you are performing integer arithmetic. Therefore multiply by 255 first and then divide. If you start by dividing, you get either 0 or 1 as the intermediate result (because integer arithmetic does not yield decimals), and the final result will be either 0 or 255, so all the gray tones get lost.
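To make that concrete, compare the two orders of operations for min = 127, max = 255 and a mid-range brightness of 191 (the numbers are just an illustration):

int divideFirst   = (191 - 127) / (255 - 127) * 255;  // 64 / 128 = 0, then 0 * 255 = 0
int multiplyFirst = 255 * (191 - 127) / (255 - 127);  // 16320 / 128 = 127, a sensible mid-grey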
The effect you are seeing is called banding, or posterisation. It is caused by applying contrast stretches to data that is not sampled with sufficient bit-depth. As you only have 8-bit data, you only have 256 grey levels. If you stretch the 50 levels between 100-150 over a range of 255 levels, there will be gaps in your histogram around 5 levels wide. The solution is either to obtain 16-bit data, or to make less drastic changes in the contrast.
Alternatively, if like me, you are a photographer, and more interested in the aesthetics of the image than its scientific accuracy, you can add a small amount of random noise to disguise and "smear over" the banding...
There is a nice description here.
I can also show you an example with ImageMagick. First, we create two greyscale ramps (gradients), one 8-bit and one 16-bit, both ranging from brightness level 100 to 150, like this:
convert -depth 8 -size 100x500 gradient:"rgb(100,100,100)-rgb(150,150,150)" -rotate 90 gradient8.png
convert -depth 16 -size 100x500 gradient:"rgb(100,100,100)-rgb(150,150,150)" -rotate 90 gradient16.png
They look like this:
If I now stretch them both to the full range of 0-255 you will immediately see the banding effect in the 8-bit version, and the smoothness of the 16-bit version - which, incidentally, is the reason for using RAW format (12-14 bit) on your camera rather than shooting 8-bit JPEGs:
convert gradient8.png -auto-level out8.png
convert gradient16.png -auto-level out16.png
I alluded to using noise to reduce the visibility of the banding effect, and you can do that using a technique like this:
convert out8.png -attenuate 0.3 +noise gaussian out.png
which gives you a less marked effect, somewhat similar to film grain:
I am not certain exactly what you are trying to do, but if you just want to spread the brightness levels from 127-255 over the full range of 0-255, you can do that simply at the command-line like this:
convert orig.png -level 50%,100% whites.png
Likewise, if you want the brightness levels from 0-17 spread over the range 0-255, you can do
convert orig.png -level 0,6.66667% blacks.png
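On the C# side, Magick.NET exposes the same operation, so the asker's ExtractWhite could plausibly be filled in as below. This is only a sketch, assuming the Level(Percentage, Percentage) overload behaves like the command-line -level (check your Magick.NET version):

public static im.MagickImage ExtractWhite(im.MagickImage img)
{
    im.MagickImage result = new im.MagickImage(img);
    // equivalent of "-level 50%,100%": stretch brightness 127-255 to 0-255
    result.Level(new im.Percentage(50), new im.Percentage(100));
    return result;
}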
Problem:
So I am looking to create bar chart values based on a comparison to an array of values.
Data:
array = [25, 35, 55, 5, 60, 200, 18, 18, 30, 10]
Requirements:
I have a working Bar graph creation using CSS which loads the bar width value 'xx' as a percentage.
I want to allocate the lowest array item value a bar width of 100% (the full width of the CSS bar). In the example above that is the fourth item, '5'.
Likewise, the sixth item in the array is the highest number, and I want to allocate 0% to the highest.
(Think of the numbers in the array as time - shortest being the best)
So lowest (fourth) array item '5' = bar width value 100% and the
Highest array item '200' = bar width value 0%
The spread between the highest and lowest values in the array is 195
There are 10x items in the array.
The average value across the array is 45.6, which might for example be represented by a bar value of around 50% in the bar chart.
I am struggling to create a formula which dynamically generates these reversed percentage values, turning any of the varied array values above into a representative bar width percentage.
Specific Help Needed:
Can you show a solution in C# so that I can generate percentage bar values based on the requirements outlined above?
[EDIT] (Including my code which partly works)
int[] array = { 25 , 35 , 55 , 5 , 60 , 200 , 18 , 18 , 30 , 10 };
int selectdVal = 5; //example selection from array
int ratioSpread = 100; //used as 100% CSS width
int responseSlow = array.Max(); //The slowest val within array
decimal ratioAdjust = (decimal)ratioSpread / responseSlow; // 0.5 here; without the cast this would be integer division
decimal maxBar = 100 - (selectdVal * ratioAdjust );
int renderBar = Convert.ToInt16(maxBar <= 0 ? 1 : maxBar ); //show min 1% bar width
The above is relatively OK, but I'd prefer to have the shortest time (the Min value) of 5 actually return 100 for renderBar, whereas in this example it returns 97.5.
int[] array = { 25, 35, 55, 5, 60, 200, 18, 18, 30, 10 };
int selectdVal = 5;
int barMin = 1;    // width (in %) given to the slowest time
int barMax = 100;  // width (in %) given to the fastest time
decimal rangeMin = array.Min();
decimal rangeMax = array.Max();
// percentage points per unit of time across the whole spread
decimal ratio = (barMax - barMin) / (rangeMax - rangeMin);
// invert: the fastest time (rangeMin) maps to barMax, the slowest to barMin
int bar = barMax - (int)(ratio * (selectdVal - rangeMin));
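To render the whole chart, the same mapping can be applied to every element (a sketch; the console output just stands in for the CSS widths):

foreach (int value in array)
{
    int width = barMax - (int)(ratio * (value - rangeMin));
    Console.WriteLine($"{value} -> {width}%");  // 5 -> 100%, 200 -> 1%
}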
Could anyone give me a hint on how to generate "smooth" random numbers?
Here's what I mean by smooth:
The random numbers shall be used in a game, e.g. for wind direction and strength (does anyone remember good old "Worms"?). Of course, setting random values for those every second or so would look awfully choppy.
I would rather have some kind of smooth oscillation in a given value range. Sort of like a sine wave but much more random.
Does anyone get what I'm after? ;-)
Any ideas on how to achieve this kind of behavior would be appreciated.
If you want the delta (change) to be small, just generate a small random number for the delta.
For example, instead of:
windspeed = random (100) # 0 thru 99 inclusive
use something like:
windspeed = windspeed - 4 + random (9) # -4 + 0..8 gives -4..4
if windspeed > 99: windspeed = 99
elif windspeed < 0: windspeed = 0
That way, your wind speed is still kept within the required bounds and it only ever changes gradually.
This will work for absolute values like speed, and also for direction if the thing you're changing gradually is the angle from a fixed direction.
It can pretty well be used for any measurement.
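For reference, a minimal C# sketch of this bounded random walk, reusing the 0..99 range and -4..4 delta from the pseudocode above:

var rng = new Random();
int windspeed = rng.Next(100);  // 0..99 inclusive

int NextWindspeed()
{
    windspeed += rng.Next(9) - 4;              // random delta in -4..4
    windspeed = Math.Clamp(windspeed, 0, 99);  // keep it within bounds
    return windspeed;
}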
Alternatively, if you want to ensure that the windspeed changes with a possibly large delta, but slowly, you can generate your target windspeed as you currently do but move gradually toward it:
windspeed = 50
target = windspeed
while true:
    # Only set a new target if the previous target was reached.
    if target == windspeed:
        target = random (100)
    # Move gradually toward the target (never past it).
    if target > windspeed:
        windspeed = windspeed + min (random (4) + 1, target - windspeed)
    else:
        windspeed = windspeed - min (random (4) + 1, windspeed - target)
    sleep (1)
Perlin (or better, simplex) noise would be the first method that comes to mind when generating smoothed noise. It returns a number between -1 and 1, which will add to or subtract from the current value. You can scale that to make it seem more or less subtle, or better yet, map the lowest wind value to -1 and the highest wind value to 1.
Then simply use a counter (1, 2, 3, ... etc.) as the Perlin/simplex input to keep the values 'smooth'.
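A full Perlin/simplex implementation is more code than fits here, but the minimal 1D "value noise" sketch below (not true Perlin, just the same idea: fixed random values at integer lattice points, smoothly interpolated in between) can be driven by exactly that kind of counter:

using System;
using System.Collections.Generic;

class ValueNoise1D
{
    private readonly Random rng;
    private readonly Dictionary<int, double> lattice = new Dictionary<int, double>();

    public ValueNoise1D(int seed) { rng = new Random(seed); }

    // Fixed random value in -1..1 per integer lattice point.
    private double Lattice(int i)
    {
        if (!lattice.TryGetValue(i, out double v))
            lattice[i] = v = rng.NextDouble() * 2 - 1;
        return v;
    }

    // Smoothly interpolated noise at position t.
    public double Sample(double t)
    {
        int i = (int)Math.Floor(t);
        double f = t - i;
        double s = f * f * (3 - 2 * f);  // smoothstep easing
        return Lattice(i) * (1 - s) + Lattice(i + 1) * s;
    }
}

// e.g. wind = noise.Sample(frame * 0.01); drifts smoothly within -1..1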
I created a new version of a smooth random number generator. The idea is that our random number will stay within limits = [average - oscillation, average + oscillation], and will change every time by at most [-variance, +variance].
But if it approaches the limits, the effective variance is reduced.
E.g. for numbers in [0, 100] with a variance of 10: if the current value is 8, the next value will be drawn from [0, 18].
Python code:

import random

def calculate_smooth_random(current_value, average, oscillation, variance):
    max_value = average + oscillation
    min_value = average - oscillation
    max_limit = min(max_value, current_value + variance)
    min_limit = max(min_value, current_value - variance)
    total_variance = max_limit - min_limit
    current_value = min_limit + random.random() * total_variance
    print("current_value: {}".format(current_value))
    return current_value
[Image: distribution of generated values for average = 20, oscillation = 10, variance = 5]
I am stuck in a tricky situation where I need to calculate the number of combinations that form 100, based on different factors.
Those are:
Number of values in the combination
Multiplication factor
Distance
Sample input 1: (2-10-20)
It means:
list the valid 2-way combinations that form 100;
the distance between the values of a combination should be less than or equal to 20;
and every value of the resulting combinations must be divisible by the given multiplication factor, 10.
Output will be
[40,60]
[50,50]
[60,40]
Here [30,70] and [20,80] are invalid because their distance is above 20.
Sample input 2: (2-5-20)
[40,60]
[45,55]
[50,50]
[55,45]
[60,40]
I would really appreciate if you guided me to the right direction.
Cheers.
I hope it's not a homework problem!
def combinations(n: Int, step: Int, distance: Int, sum: Int = 100): List[List[Int]] =
  if (n == 1)
    List(List(sum))
  else
    for {
      first <- (step until sum by step).toList
      rest  <- combinations(n - 1, step, distance, sum - first)
      if rest forall (x => (first - x).abs <= distance)
    } yield first :: rest

// combinations(2, 10, 20) returns List(List(40, 60), List(50, 50), List(60, 40))
If you need to divide 100 over 2 with a maximum distance of N, the lowest value in the combination is
100 / 2 - N / 2
If you need to divide 100 over 3 values with a maximum distance of N, this becomes more tricky. The average of the 3 values will be 100/3, but if one of them is much lower than this average, then the others can only be slightly bigger than this average, meaning that the minimum value is not the average minus the maximum distance divided by two, but probably
100 / 3 - 2N / 3
In general with M values, this becomes
100 / M - (M-1)N / M
Which can be simplified to
(100 - (M-1)N) / M
Similarly we can calculate the highest possible value:
(100 + (M-1)N) / M
This gives you a range for first value of your combination.
To determine the range for the second value, you have to consider the following constraints:
the distance with the first value (should not be higher than your maximum distance)
can we still achieve the sum (100)
The first constraint is not a problem. The second is.
Suppose that we divide 100 over 3 with a maximum distance of 30 using multiples of 10
As calculated before, the minimum value is:
(100 - (3-1)30) / 3 --> 13 --> rounded to the next multiple of 10 --> 20
The maximum value is
(100 + (3-1)30) / 3 --> 53 --> rounded to the previous multiple of 10 --> 50
So for the first value we should iterate over 20, 30, 40 and 50.
Suppose we choose 20. This leaves 80 for the other 2 values.
Again we can distribute 80 over 2 values with a maximum distance of 30, this gives:
Minimum: (80 - (2-1)30) / 2 --> 25 --> rounded --> 30
Maximum: (80 + (2-1)30) / 2 --> 55 --> rounded --> 50
The second constraint is that we don't want a distance larger than 30 compared with our first value. This gives a minimum of -10 and a maximum of 50.
Now take the intersection between both domains --> 30 to 50 and for the second value iterate over 30, 40, 50.
Then repeat this for the next value.
EDIT:
I added the algorithm in pseudo-code to make it clearer:
calculateRange (vector, remainingsum, nofremainingvalues, multiple, maxdistance)
{
    if (remainingsum == 0)
    {
        // at this moment nofremainingvalues should be zero as well
        // found a solution
        print vector
        return;
    }
    minvalueaccordingdistribution = (remainingsum - (nofremainingvalues-1) * maxdistance) / nofremainingvalues;
    maxvalueaccordingdistribution = (remainingsum + (nofremainingvalues-1) * maxdistance) / nofremainingvalues;
    minvalueaccordingdistance = max(values in vector) - maxdistance;
    maxvalueaccordingdistance = min(values in vector) + maxdistance;
    // intersect the two domains: the larger of the minima, the smaller of the
    // maxima, rounded up/down to the next/previous multiple
    minvalue = max(minvalueaccordingdistribution, minvalueaccordingdistance);
    maxvalue = min(maxvalueaccordingdistribution, maxvalueaccordingdistance);
    for (value = minvalue; value <= maxvalue; value += multiple)
    {
        calculateRange(vector + value, remainingsum - value, nofremainingvalues - 1, multiple, maxdistance);
    }
}

main()
{
    calculateRange(emptyvector, 100, 2, 10, 20);
}
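For completeness, here is a runnable C# version of that pseudo-code (the names are mine, and it assumes all values are positive multiples of the given factor, as in the question):

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Search(List<int> chosen, int remainingSum, int remaining, int multiple, int maxDistance)
    {
        if (remaining == 0)
        {
            if (remainingSum == 0)  // found a solution
                Console.WriteLine("[" + string.Join(",", chosen) + "]");
            return;
        }
        // range implied by distributing the remaining sum within maxDistance
        double lo = (remainingSum - (remaining - 1) * (double)maxDistance) / remaining;
        double hi = (remainingSum + (remaining - 1) * (double)maxDistance) / remaining;
        // range implied by the values already chosen
        if (chosen.Count > 0)
        {
            lo = Math.Max(lo, chosen.Max() - maxDistance);
            hi = Math.Min(hi, chosen.Min() + maxDistance);
        }
        // round lo up and hi down to multiples, and keep values positive
        int first = Math.Max((int)Math.Ceiling(lo / multiple) * multiple, multiple);
        int last = (int)Math.Floor(hi / multiple) * multiple;
        for (int v = first; v <= last; v += multiple)
        {
            chosen.Add(v);
            Search(chosen, remainingSum - v, remaining - 1, multiple, maxDistance);
            chosen.RemoveAt(chosen.Count - 1);
        }
    }

    static void Main()
    {
        Search(new List<int>(), 100, 2, 10, 20);  // sample input (2-10-20)
    }
}

With the sample input (2-10-20) this prints [40,60], [50,50] and [60,40], matching the expected output.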
Why can't you use a brute force approach with a few optimizations? For example, say
N - Number of combinations
M - Multiples
D - Max possible distance
So the possible values in a combination can be M, 2M, 3M and so on. You need to generate this set, then start with the first element from the set and try to find the next two by choosing values from the same set (provided they are within distance D of the first/second value).
So with an input of 3-10-30 you would:
Create a set of 10, 20, 30, 40, 50, 60, 70, 80, 90 as the possible values.
Start with 10; the choice for the second value then has to be 10, 20, 30 or 40 (distance <= 30).
Now choose the second value from the set {10, 20, 30, 40} and try to get the next value, and so on.
If you use recursion, the solution becomes even simpler:
You have to find N values from a list of possible values between a MIN and a MAX index.
So try the first value at MIN index (up to MAX index). Say we have chosen the value at index X.
For every first value, try to find the remaining N-1 values from the list, with MIN = X + 1 and the same MAX.
Worst performance will happen when M = 1 and N is sufficiently large.
Is the distance between all the additive factors, or between each of them? For example, with 3-10-20, is [20-40-60] a valid answer? I'll assume the latter, but the solution below can be modified pretty trivially to work for the former.
Anyway, the way to go is to start with the most extreme answer (of one sort) that you can manage, and then walk the answers along until you get to the other most extreme.
Let's try to place numbers as low as possible except for the last one, which will be as high as possible (given that the others are low). Let the common divisor be d and divide 100 by it, so we have S = 100/d. This quantizes our problem nicely. Now we have our constraint that spacing is at most s, except we will convert that to a number of quantized steps, n = s/d. Now assume we have M samples, i1...iM and write the constraints:
i1 + i2 + i3 + ... + iM = S
0 <= i1 <= n
0 <= i2 <= n
. . .
0 <= iM <= n
i1 <= i2
i2 <= i3
. . .
i(M-1) <= iM
We can solve the first equation to get iM given the others.
Now, if we make everything as similar as possible:
i1 = i2 = ... = iM = I
M*I = S
I = S/M
Very good--we've got our starting point! (If I is a fraction, make the first few I and the remainder I+1.) Now we just try to walk each variable down in turn:
for (i1 = I-1 by -1 until criteria fails)
sum needs to add to S-i1
i2 >= i1
i2 <= i1 +n
solve the same problem for M-1 numbers adding to S-i1
(while obeying the above constraint on i2)
Well, look here--we've got a recursive algorithm! We just walk through and read off the answers.
Of course, we could walk i1 up instead of down. If you need to print off the answers, may as well do that. If you just need to count them, note that counting up is symmetric, so just double the answer you get from counting down. (You'll also have a correction factor if not all values started the same--if some were I and some were I+1, you need to take that into account, which I won't do here.)
Edit: If the range is what every value has to fit within, instead of all the
0 <= i1 <= n
conditions, you have
max(i1,i2,...,iM) - min(i1,i2,...,iM) <= n
But this gives the same recursive condition, except that we pass along the max and min of those items we've already selected to throw into the mix, instead of adding a constraint on i2 (or whichever other variable's turn it is).
Input: (2-10-20)
1. Divide the number by param 1: (50, 50).
2. Check whether the difference rule allows this combination. If it breaks the rule, then STOP; if it is allowed, add this combination and its permutations to the result list. For example: abs(50 - 50) <= 20, so it is OK.
3. Increase the first value by param 2 and decrease the second value by param 2.
4. Go to point 2.
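A small C# fragment of that stepping loop for the two-value case, using sample input 1 (this would sit inside a method; results just collects the pairs):

int total = 100, multiple = 10, maxDistance = 20;  // input (2-10-20)
var results = new List<(int A, int B)>();

int a = total / 2 / multiple * multiple;  // step 1, snapped to a multiple: 50
int b = total - a;
while (Math.Abs(a - b) <= maxDistance)    // step 2: the difference rule
{
    results.Add((a, b));
    if (a != b) results.Add((b, a));      // add the mirrored combination too
    a += multiple;                        // step 3
    b -= multiple;
}
// results: (50,50), (60,40), (40,60)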