I'm trying to implement the Window Width and Level formula from the DICOM specification in my application, but it isn't returning any grayscale values at the moment. The DICOM standard specifies the formula as follows:
These Attributes are applied according to the following pseudo-code, where x is the input value, y
is an output value with a range from ymin to ymax, c is Window Center (0028,1050) and w is
Window Width (0028,1051):
if (x <= c - 0.5 - (w-1)/2), then y = ymin
else if (x > c - 0.5 + (w-1)/2), then y = ymax,
else y = ((x - (c - 0.5)) / (w-1) + 0.5) * (ymax - ymin)+ ymin
So I've translated this into the following C# code:
if (pixelData[i] <= wLevel - 0.5 - (wWidth - 1) / 2)
oColor = 0;
else if (pixelData[i] > wLevel - 0.5 + (wWidth - 1) / 2)
oColor = 255;
else
oColor = (int)((pixelData[i] - (wLevel - 0.5)) / (wWidth - 1) + 0.5) * (255 - 0) + 0;
However, the last part of the formula
oColor = (int)((pixelData[i] - (wLevel - 0.5)) / (wWidth - 1) + 0.5) * (255 - 0) + 0;
only ever seems to return 0.
Does anyone see how this is possible?
The purpose of the VOI LUT is to map a given pixel range to displayable values (usually 0..0xFF), clamping out-of-range pixel values.
This means that for a given window/level we can compute the displayable range:
level - window/2 to level + window/2.
For pixel values inside that range, a linear transformation is used:
((pixel - lower_window_limit) / window) * displayable_range
where lower_window_limit is level - window/2.
This -window/2 is missing from your formula.
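For reference, here is a minimal C# sketch of that clamped linear mapping, assuming 8-bit output and double-precision pixel values (the method name and parameters are mine, not from the original post):

static byte ApplyWindow(double pixel, double level, double window)
{
    double lower = level - window / 2.0;   // lower_window_limit
    double upper = level + window / 2.0;

    if (pixel <= lower) return 0;          // clamp below the window
    if (pixel >= upper) return 255;        // clamp above the window

    // Linear mapping inside the window. Keep everything in floating point
    // until the very end; truncating to int before multiplying by the
    // output range collapses the 0..1 fraction to 0.
    return (byte)(((pixel - lower) / window) * 255.0);
}

Note that in the code from the question the (int) cast is applied to the 0..1 fraction before it is multiplied by 255, which is why the result is always 0.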
I am working on a simple two-class perceptron problem. My project takes user mouse clicks from a GUI panel and classifies them. Class 1 has expected output 1 and class 2 has expected output -1. My problem is that the discrete perceptron works fine, but the continuous perceptron stops decreasing the error after a certain point. I don't know what I am doing wrong; I have looked at a lot of code and sources.
My formulas:
E = 1/2 * Σ(d - o)^2
f(net) = 2 / (1 + e^(-net)) - 1
ΔW = n * (d - o) * (1 - o^2) * y
where
d: expected output,
net: sum of weight * input,
y: input vector ([x1 x2 -1]),
o: actual output, and
n: the learning rate.
The code for the continuous perceptron is below:
while (totalError > Emax)
{
    totalError = 0;

    for (i = 0; i < point.Count; i++)
    {
        double x1 = point[i].X1;
        double x2 = point[i].X2;

        // net = w . [x1, x2, -1]  (x0 is the bias input, -1)
        double net = (x1 * w0) + (x2 * w1) + (x0 * w2);

        // bipolar sigmoid activation
        double o = (2 / (1 + Math.Exp(-net))) - 1;

        double error = Math.Pow(point[i].Class - o, 2);

        // delta rule; (1 - o^2) / 2 is the derivative of the bipolar sigmoid
        w0 += (x1 * c * (point[i].Class - o) * (1 - Math.Pow(o, 2))) / 2;
        w1 += (x2 * c * (point[i].Class - o) * (1 - Math.Pow(o, 2))) / 2;
        w2 += (x0 * c * (point[i].Class - o) * (1 - Math.Pow(o, 2))) / 2;

        totalError += error;
    }

    totalError = totalError / 2;
    ErrorShow(cycle, totalError);
    objGraphic.Clear(Color.White);
    DrawSeperationLine();
    cycle++;
}
I selected Emax = 0.001. The project runs like this, but you can see the separating line is not in the correct location (class 1 is blue and class 2 is red).
I think the problem is in the for loop.
Console Output of Code:
Edit:
After discussing with @TaW (thanks for pointing me in the right direction), I found out that my problem is in the output (activation function): it always returns 1 or -1. Because of that, the [1 - Math.Pow(o, 2)] part of the weight update returns 0, which makes the weight change equal to 0. So my question is: how can I solve this problem? Type casting did not work.
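A quick standalone check (this snippet is mine, just to illustrate the saturation): with raw inputs as large as screen coordinates, net gets big, the exponential underflows, and o becomes exactly ±1, so the (1 - o^2) factor zeroes the weight update.

double net = 250.0;                         // e.g. raw pixel coordinates times weights
double o = (2 / (1 + Math.Exp(-net))) - 1;  // same activation as in the loop above
Console.WriteLine(o);                       // prints 1 exactly
Console.WriteLine(1 - Math.Pow(o, 2));      // prints 0, so the weight change is 0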
The solution to my question was normalization. For normalization I use the standard deviation; the code is below:
for (i = 0; i < point.Count; i++)
{
    x1 += point[i].X1;
    x2 += point[i].X2;
}

meanx1 = x1 / point.Count;
meanx2 = x2 / point.Count;

for (i = 0; i < point.Count; i++)
{
    totalX1 += Math.Pow(point[i].X1 - meanx1, 2);
    totalX2 += Math.Pow(point[i].X2 - meanx2, 2);
}

// sample variance (taking Math.Sqrt of these would give the standard deviation)
normX1 = totalX1 / (point.Count - 1);
normX2 = totalX2 / (point.Count - 1);

normX1 = normX1 / 100;
normX2 = normX2 / 100;
The last division is just there to scale the values down further.
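For illustration, here is a sketch of one common way to apply such a normalization (rescaling each input by its spread before computing net), reusing meanx1, meanx2 and the totals from above; treat it as an illustration rather than the exact project code:

double stdX1 = Math.Sqrt(totalX1 / (point.Count - 1));   // sample standard deviation
double stdX2 = Math.Sqrt(totalX2 / (point.Count - 1));

for (i = 0; i < point.Count; i++)
{
    double nx1 = (point[i].X1 - meanx1) / stdX1;   // normalized inputs
    double nx2 = (point[i].X2 - meanx2) / stdX2;
    // use nx1 and nx2 in place of the raw coordinates when computing net
}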
Currently I am using this formula to flatten a multidimensional array (x, y, z):
array = new byte[GridSizeX * GridSizeY * GridSizeZ];
index = x + y * GridSizeX + z * GridSizeX * GridSizeY;
I was wondering how I would go about making it work for negative values of x, y or z. Since the index can't be negative, the formula doesn't work for, e.g., the cell (-1, 2, 3).
Is there a clean formula that can take into account various ranges of x, y and z (also non-uniform ranges)?
For example minX = -5, maxX = 7; minY = -2, maxY = 3; minZ = -4, maxZ = 6.
Thanks!
If
x is in [minX..maxX] range
y is in [minY..maxY] range
z is in [minZ..maxZ] range
The formula for zero-based index will be
index = (x - minX) +
(y - minY) * (maxX - minX + 1) +
(z - minZ) * (maxX - minX + 1) * (maxY - minY + 1);
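In C#, using the example ranges from the question (minX = -5, maxX = 7, minY = -2, maxY = 3, minZ = -4, maxZ = 6), a sketch could look like this (the helper name is made up for the example):

const int minX = -5, maxX = 7, minY = -2, maxY = 3, minZ = -4, maxZ = 6;

int sizeX = maxX - minX + 1;   // 13
int sizeY = maxY - minY + 1;   // 6
int sizeZ = maxZ - minZ + 1;   // 11

byte[] array = new byte[sizeX * sizeY * sizeZ];

int Flatten(int x, int y, int z) =>
    (x - minX) + (y - minY) * sizeX + (z - minZ) * sizeX * sizeY;

int index = Flatten(-1, 2, 3);   // valid even though x is negative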
I have a set of 3D vectors (x, y, z), and I want to calculate the covariance matrix without storing the vectors.
I will do it in C#, but eventually I will implement it in C on a microcontroller, so I need the algorithm itself, not a library.
Pseudocode would be great as well.
The formula is simple if you have Matrix and Vector classes at hand:
Vector mean;         // assumed zero-initialized
Matrix covariance;   // assumed zero-initialized

for (int i = 0; i < points.size(); ++i) {
    Vector diff = points[i] - mean;
    mean += diff / (i + 1);
    covariance += diff * diff.transpose() * i / (i + 1);
}
covariance *= 1.0 / points.size();
I personally always prefer this style over the two-pass calculation. The code is short and the results are flawless.
Matrix and Vector can have fixed dimensions and can easily be coded for this purpose. You could even rewrite the code as discrete floating-point calculations and avoid computing the symmetric part of the covariance matrix.
Note that there is a vector outer product on the second-to-last line of the code. Not all vector libraries interpret it correctly.
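Since the question mentions eventually porting this to C on a microcontroller without a matrix library, here is a rough C# sketch of the same incremental update unrolled into plain doubles for the symmetric 3x3 case; treat it as an illustration rather than a drop-in:

// Running mean and co-moment sums for 3D points.
double mx = 0, my = 0, mz = 0;
double cxx = 0, cyy = 0, czz = 0, cxy = 0, cxz = 0, cyz = 0;
int n = 0;

void Push(double x, double y, double z)
{
    n++;
    double dx = x - mx, dy = y - my, dz = z - mz;   // diff against the old mean
    double f = (double)(n - 1) / n;                 // same i / (i + 1) factor as above

    mx += dx / n;  my += dy / n;  mz += dz / n;

    cxx += dx * dx * f;  cyy += dy * dy * f;  czz += dz * dz * f;
    cxy += dx * dy * f;  cxz += dx * dz * f;  cyz += dy * dz * f;
}

// After all points have been pushed, divide each c** by n for the population
// covariance (as above) or by n - 1 for the sample covariance.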
I think I have found the solution. It is based on this article about how to calculate covariance manually and this one about calculating running variance. And then I adapted the algorithm in the latter to calculate covariance instead of variance, given my understanding of it from the first article.
public class CovarianceMatrix
{
    private int _n;
    private Vector _oldMean, _newMean,
                   _oldVarianceSum, _newVarianceSum,
                   _oldCovarianceSum, _newCovarianceSum;

    public void Push(Vector x)
    {
        _n++;

        if (_n == 1)
        {
            _oldMean = _newMean = x;
            _oldVarianceSum = new Vector(0, 0, 0);
            _oldCovarianceSum = new Vector(0, 0, 0);
        }
        else
        {
            // _newM = _oldM + (x - _oldM) / _n;
            _newMean = new Vector(
                _oldMean.X + (x.X - _oldMean.X) / _n,
                _oldMean.Y + (x.Y - _oldMean.Y) / _n,
                _oldMean.Z + (x.Z - _oldMean.Z) / _n);

            // _newS = _oldS + (x - _oldM) * (x - _newM);
            _newVarianceSum = new Vector(
                _oldVarianceSum.X + (x.X - _oldMean.X) * (x.X - _newMean.X),
                _oldVarianceSum.Y + (x.Y - _oldMean.Y) * (x.Y - _newMean.Y),
                _oldVarianceSum.Z + (x.Z - _oldMean.Z) * (x.Z - _newMean.Z));

            /* .X is X vs Y
             * .Y is Y vs Z
             * .Z is Z vs X
             */
            _newCovarianceSum = new Vector(
                _oldCovarianceSum.X + (x.X - _oldMean.X) * (x.Y - _newMean.Y),
                _oldCovarianceSum.Y + (x.Y - _oldMean.Y) * (x.Z - _newMean.Z),
                _oldCovarianceSum.Z + (x.Z - _oldMean.Z) * (x.X - _newMean.X));

            // set up for next iteration
            _oldMean = _newMean;
            _oldVarianceSum = _newVarianceSum;
            _oldCovarianceSum = _newCovarianceSum;
        }
    }

    public int NumDataValues()
    {
        return _n;
    }

    public Vector Mean()
    {
        return (_n > 0) ? _newMean : new Vector(0, 0, 0);
    }

    public Vector Variance()
    {
        return _n <= 1 ? new Vector(0, 0, 0) : _newVarianceSum.DivideBy(_n - 1);
    }

    // Mirrors Variance(): the off-diagonal covariance terms (X·Y, Y·Z, Z·X).
    public Vector Covariance()
    {
        return _n <= 1 ? new Vector(0, 0, 0) : _newCovarianceSum.DivideBy(_n - 1);
    }
}
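Usage would then look something like this (assuming the same Vector type used by the class above; Covariance() mirrors Variance()):

var cm = new CovarianceMatrix();
cm.Push(new Vector(1.0, 2.0, 3.0));
cm.Push(new Vector(2.0, 1.0, 4.0));
cm.Push(new Vector(3.0, 3.0, 5.0));

Vector variance = cm.Variance();      // var(X), var(Y), var(Z)
Vector covariance = cm.Covariance();  // cov(X,Y), cov(Y,Z), cov(Z,X)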
The code from emu is elegant, but requires an additional step to be correct:
Vector mean;
Matrix covariance;

for (int i = 0; i < points.size(); ++i) {
    Vector diff = points[i] - mean;
    mean += diff / (i + 1);
    covariance += diff * diff.transpose() * i / (i + 1);
}
covariance = covariance / (points.size() - 1);
Note the final step of normalizing the covariance: dividing by points.size() - 1 gives the sample covariance rather than the population covariance.
Here is a simple example in R to demonstrate the principle:
a <- matrix(rnorm(22), ncol = 2)
a1 <- a[1:10, ]
a2 <- a[2:11, ]
cov(a1)
cov(a2)
m <- 10
# initial step
m1.1 <- mean(a1[, 1])
m1.2 <- mean(a1[, 2])
c1.11 <- cov(a1)[1, 1]
c1.22 <- cov(a1)[2, 2]
c1.12 <- cov(a1)[1, 2]
#step 1->2
m2.1 <- m1.1 + (a[11, 1] - a[1, 1])/m
m2.2 <- m1.2 + (a[11, 2] - a[1, 2])/m
c2.11 <- c1.11 + (a[11, 1]^2 - a[1, 1]^2)/(m - 1) + (m1.1^2 - m2.1^2) * m/(m - 1)
c2.22 <- c1.22 + (a[11, 2]^2 - a[1, 2]^2)/(m - 1) + (m1.2^2 - m2.2^2) * m/(m - 1)
c2.12 <- c1.12 + (a[11, 1] * a[11, 2] - a[1, 1]*a[1, 2])/(m - 1) +
(m1.1 * m1.2 - m2.1 * m2.2) * m/(m - 1)
cov(a2) - matrix(c(c2.11, c2.12, c2.12, c2.22), ncol=2)
I use Managed DirectX with C# to texture a sphere (Mesh.Sphere).
I use the following code to calculate U and V:
CustomVertex.PositionNormalTextured[] vertData = (CustomVertex.PositionNormalTextured[])tempMesh.VertexBuffer.Lock(0, typeof(CustomVertex.PositionNormalTextured), LockFlags.None, tempMesh.NumberVertices);
for (int i = 0; i < vertData.Length; ++i)
{
vertData[i].Tu = (float)(1.0 - (double)(0.5f + Math.Atan2(vertData[i].Nz, vertData[i].Nx) / (Math.PI * 2)));
vertData[i].Tv = (float)(0.5 - Math.Asin(vertData[i].Ny) / Math.PI);
}
Now I have the problem that the poles of the sphere and the poles of my texture (an equirectangular projection) do not match.
The red points in the picture are the places where the poles of the sphere currently meet the texture.
Can someone tell me what I can do to fix this problem?
Your code above works perfectly, provided the sphere is centered at the origin and y is up. I will demonstrate:
Applying the maths, with the assumption that the poles are at (0, 1, 0) and (0, -1, 0), gives the following values at the poles.
u = 1.0 - (0.5 + (atan2( 0, 0 ) / (2 * PI)))
=> u = 1.0 - (0.5 + (0 / (2 * PI)))
=> u = 1.0 - 0.5
=> u = 0.5
v = 0.5 - (asin( 1 ) / PI)
=> v = 0
and
u = 0.5
v = 0.5 - (asin( -1 ) / PI)
=> v = 0.5 - -0.5
=> v = 1.0
which are the correct values for u and v, i.e. (0.5, 0) and (0.5, 1).
If you are using z-up then this WILL give incorrect values (as you would need to swap the y and z over in your calculations) but it still does not give the pole values you are suggesting:
u = 1.0 - (0.5 + (atan2( 1, 0 ) / (2 * PI)))
=> u = 1.0 - (0.5 + ((PI / 2) / (2 * PI)))
=> u = 1.0 - (0.5 + 0.25)
=> u = 0.25
v = 0.5 - (asin( 0 ) / PI)
=> v = 0.5 - (0 / PI)
=> v = 0.5
and
u = 1.0 - (0.5 + (atan2( -1, 0 ) / (2 * PI)))
=> u = 1.0 - (0.5 + ((-PI / 2) / (2 * PI)))
=> u = 1.0 - (0.5 - 0.25)
=> u = 0.75
v = 0.5 - (asin( 0 ) / PI)
=> v = 0.5 - (0 / PI)
=> v = 0.5
The reason for this is fairly sensible. In the u direction the sphere wraps entirely around, i.e. a u of 0 is the same as a u of 1. In the equation you have posted this happens entirely in the x-z plane (y is not considered for u), which is why it is divided by 2 * pi, the number of radians in a full circle. The v direction does not wrap around; it only covers half that range, hence the division by pi. You'll note that only y is used in the v calculation and, hence, x and z do not affect it.
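For completeness, if your data is z-up, the swap would look like this (sketched from the question's code and the same vertex fields, not tested):

// z-up variant: z takes the role y had before in the UV calculation.
vertData[i].Tu = (float)(1.0 - (0.5 + Math.Atan2(vertData[i].Ny, vertData[i].Nx) / (Math.PI * 2)));
vertData[i].Tv = (float)(0.5 - Math.Asin(vertData[i].Nz) / Math.PI);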
Hope that helps.
Let's say I have a range of two values:
5...........98
and let's assume the user positions the slider at value 40.
Now I want to get the value from another range of values at exactly the same percentage position as in range 1.
Let's say the second range of values is 10.........80.
int nRange1 = 98 - 5;
int nRange2 = 80 - 10;
int nValue1 = 40;
int nPercentOnRange1 = ((nValue1 - 5) / nRange1)*100;
Now I need to get the value from range 2 at the same percentage as nPercentOnRange1, but I don't know how.
First you need to find the percentage within the first range and then apply that percentage to the new range.
Here is what I would do:
Range 1 (A to B), selected value: C
Range 2 (E to F)
Range 1 % = (C - A) / (B - A) * 100
Range 2 corresponding value = ((F - E) * (Range 1 %) / 100) + E
C#:
int Range1Min = 5, Range1Max = 90, Range1SelectedValue = 40;
int Range2Min = 6, Range2Max = 80;
// cast to decimal so the division is not done in integer arithmetic
decimal range1Percent = (Range1SelectedValue - Range1Min) / (decimal)(Range1Max - Range1Min) * 100;
decimal range2NewValue = (Range2Max - Range2Min) * range1Percent / 100 + Range2Min;
Watch out for
int nPercentOnRange1 = ((nValue1 - 5)/ nRange1) * 100;
ending up as zero since nValue1 and nRange1 are integers. This might be better:
int nPercentOnRange1 = ((nValue1 - 5) * 100 / nRange1);
Then you can do
int nValue2 = 10 + nPercentOnRange1*nRange2/100;
The value you need is
x = 10 + nRange2 * nPercentOnRange1 / 100.0
Let me explain why. You need a number x such that
((x - 10) / nRange2) * 100.0 = nPercentOnRange1
Therefore, just solve for x.
((x - 10) / nRange2) * 100.0 = nPercentOnRange1 =>
((x - 10) / nRange2) = nPercentOnRange1 / 100.0 =>
x - 10 = nRange2 * nPercentOnRange1 / 100.0 =>
x = 10 + nRange2 * nPercentOnRange1 / 100.0
And note that this actually makes intuitive sense. We're saying: take the percentage, scale it into the length of the second range (that's what nRange2 * nPercentOnRange1 / 100.0 is doing), and then add that to the lower bound of the second range. Basically we are stepping nPercentOnRange1 percent into the second range. That's exactly what the formula expresses.
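Put together as a small helper method (a sketch; the name and parameter order are made up for this example):

// Maps value from [fromMin, fromMax] to the same relative position in
// [toMin, toMax], doing the arithmetic in floating point throughout.
static double MapRange(double value, double fromMin, double fromMax,
                       double toMin, double toMax)
{
    double fraction = (value - fromMin) / (fromMax - fromMin);   // 0..1
    return toMin + fraction * (toMax - toMin);
}

// MapRange(40, 5, 98, 10, 80) is roughly 36.3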
Perhaps this will work:
nValue2 = nPercentOnRange1 * nRange2 / 100 + 10