Transform DenseTensor in Microsoft.ML.OnnxRuntime - C#

I am currently trying to port the code from https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/blob/master/nnhash.py to C# using Microsoft.ML.OnnxRuntime. For image loading I use SixLabors.ImageSharp.
So far loading the ONNX model into a session works, but I think I'm having trouble preprocessing the input image correctly.
The original Python code does it like so:
image = Image.open(sys.argv[3]).convert('RGB')
image = image.resize([360, 360])
arr = np.array(image).astype(np.float32) / 255.0
arr = arr * 2.0 - 1.0
arr = arr.transpose(2, 0, 1).reshape([1, 3, 360, 360])
And now I have trouble finding out how to transpose the image data array. My code so far is like this:
img.Mutate(x =>
{
    x.Resize(new ResizeOptions
    {
        Size = new SixLabors.ImageSharp.Size(360, 360),
        Mode = SixLabors.ImageSharp.Processing.ResizeMode.Stretch
    });
});
Tensor<float> arr = new DenseTensor<float>(new int[] { 1, 3, 360, 360 });
for (int y = 0; y < img.Height; y++)
{
    Span<Rgb24> pixelSpan = img.GetPixelRowSpan(y);
    for (int x = 0; x < img.Width; x++)
    {
        arr[0, 0, y, x] = (pixelSpan[x].R / 255f * 2) - 1;
        arr[0, 1, y, x] = (pixelSpan[x].G / 255f * 2) - 1;
        arr[0, 2, y, x] = (pixelSpan[x].B / 255f * 2) - 1;
    }
}
but it seems like the order/layout I'm writing into the Tensor is wrong. The problem is, I can't find any transform function in the Microsoft.ML.OnnxRuntime library that resembles the last line (arr.transpose(2, 0, 1).reshape([1, 3, 360, 360])) of the original code.
So how could I correctly build the input image Tensor for Microsoft.ML.OnnxRuntime?

The transpose is not required because you are already creating the dense tensor in the correct shape. The Python code takes an array of shape [360, 360, 3] (height, width, channel), transposes it to [3, 360, 360], and reshapes it to [1, 3, 360, 360]. Your .NET code, on the other hand, reads the RGB values from each row and writes them directly into the tensor at their channel-first positions, so no separate transpose step is needed.
The code we have been using to do this is below:
public virtual Tensor<float> ConvertImageToTensor(ref List<Image<Rgb48>> images, int[] inputDimension)
{
    inputDimension[0] = images.Count;
    Tensor<float> input = new DenseTensor<float>(inputDimension);
    for (var i = 0; i < images.Count; i++)
    {
        var image = images[i];
        for (var y = 0; y < image.Height; y++)
        {
            var pixelSpan = image.GetPixelRowSpan(y);
            for (var x = 0; x < image.Width; x++)
            {
                input[i, 0, y, x] = pixelSpan[x].R;
                input[i, 1, y, x] = pixelSpan[x].G;
                input[i, 2, y, x] = pixelSpan[x].B;
            }
        }
    }
    return input;
}
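For completeness, here is a minimal sketch of feeding such a tensor to an OnnxRuntime session. The file name "model.onnx" and the input name "image" are placeholders (inspect session.InputMetadata for your model's actual input name), and for the NeuralHash model the pixel values also need the [-1, 1] scaling from the question's code rather than the raw channel values above:
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// "model.onnx" and "image" are placeholder names -- check session.InputMetadata
// for the real input name and expected shape.
using var session = new InferenceSession("model.onnx");
var inputs = new List<NamedOnnxValue>
{
    // arr is the [1, 3, 360, 360] DenseTensor<float> built above
    NamedOnnxValue.CreateFromTensor("image", arr)
};
using var results = session.Run(inputs);
float[] output = results.First().AsEnumerable<float>().ToArray();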

Related

Parallelize elementwise matrix (2dArray) calculations

This is my first question and I'm relatively new to C#. (Excuse my bad English)
I'm writing a template matching algorithm in C# using the .NET Framework with a WindowsForms application in Visual Studio. Sadly, the new indices and ranges features from C# 8.0, especially the range operator (..), are not available in the .NET Framework. I know there is a workaround for this, as you can see in this thread, but it is not supported by Microsoft. So I'm searching for another way to parallelize my elementwise 2dArray (matrix) calculations to make my program faster.
In my program, I'm calculating the differential square (ds) between an area (the size of my template) inside a 2dArray (my image) and a 2dArray (my template). These values are written to a new 2dArray (DS) which holds all differential squares at the positions corresponding to the image. I can then search for the indices of DS where the differential square is minimal, which is the matching position (highest correspondence between template and image) of the template inside the image.
In Python the calculation of DS is very quick using the slice operator (:) and looks like this:
H, W = I.shape  # read out height H & width W from image I
h, w = T.shape  # read out height h & width w from template T
for i in range(H - h + 1):
    for j in range(W - w + 1):
        DS[i, j] = np.sum((I[i:i+h, j:j+w] - T)**2)
But in C# I have to calculate DS elementwise, so it looks like this and takes forever:
int Tw = template.Width;
int Th = template.Height;
int Iw = image.Width;
int Ih = image.Height;
int d = 0;
int[,] ds = new int[Tw, Th];
int[,] DS = new int[Iw - Tw + 1, Ih - Th + 1];
for (int y = 0; y < Ih - Th + 1; y++)
{
    for (int x = 0; x < Iw - Tw + 1; x++)
    {
        for (int yT = 0; yT < Th; yT++)
        {
            for (int xT = 0; xT < Tw; xT++)
            {
                d = I[x + xT, y + yT] - T[xT, yT];
                ds[xT, yT] = d * d;
            }
        }
        int sum = ds.Cast<int>().Sum();
        DS[x, y] = sum;
    }
}
I know that I could use threads, but that would be a little complex for me.
Or maybe I could use CUDA with my Nvidia GPU to speed things up a little.
But I am asking you and myself: is there another way to parallelize (optimize) my elementwise 2dArray calculations?
I look forward to any help.
Many thanks in advance!!!
EDIT:
Here is a working example of my code as a .NET Framework console app. As you can see, I do a lot of elementwise 2D- and 3D-array calculations which I would like to process in parallel (or speed up in any other way); a parallelized sketch of the main loop follows after the listing:
using System;
using System.Drawing;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace TemplateMatcher_Console
{
    class Program
    {
        public static int[,,] bitmapToMatrix(Bitmap bmp)
        {
            int[,,] I = new int[bmp.Width, bmp.Height, 3];
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color pix = bmp.GetPixel(x, y);
                    I[x, y, 0] = Convert.ToInt32(pix.R);
                    I[x, y, 1] = Convert.ToInt32(pix.G);
                    I[x, y, 2] = Convert.ToInt32(pix.B);
                }
            }
            return I;
        }

        public static int[] indexOfMinimumValue(int[,] matrix)
        {
            int value = 0;
            int minValue = 999999999;
            int minFirstIndex = 0;
            int minSecondIndex = 0;
            int[] ij = new int[2];
            for (int i = 0; i < matrix.GetLength(0); i++)
            {
                for (int j = 0; j < matrix.GetLength(1); j++)
                {
                    value = matrix[i, j];
                    if (value < minValue)
                    {
                        minValue = value;
                        minFirstIndex = i;
                        minSecondIndex = j;
                    }
                }
            }
            ij[0] = minFirstIndex;
            ij[1] = minSecondIndex;
            return ij;
        }

        public static void Print2DArray<T>(T[,] matrix)
        {
            for (int i = 0; i < matrix.GetLength(0); i++)
            {
                for (int j = 0; j < matrix.GetLength(1); j++)
                {
                    Console.Write(matrix[i, j] + "\t");
                }
                Console.WriteLine();
            }
        }

        static void Main(string[] args)
        {
            // Declaration & input
            Console.WriteLine("Type the filepath for your image and then press Enter");
            string im = Console.ReadLine();
            Console.WriteLine("\nType the filepath for your template and then press Enter");
            string temp = Console.ReadLine();
            Bitmap template = new Bitmap(temp);
            Bitmap image = new Bitmap(im);
            int Tw = template.Width;
            int Th = template.Height;
            int Iw = image.Width;
            int Ih = image.Height;
            int[,] ds = new int[Tw, Th];
            int[,] DS = new int[Iw - Tw + 1, Ih - Th + 1];
            int[,,] DS_rgb = new int[Iw - Tw + 1, Ih - Th + 1, 3];
            int[] xy = new int[2];

            // Processing
            // int[,,] I = Array.ConvertAll(image_I, new Converter<byte, int>(Convert.ToInt32));
            int[,,] I = bitmapToMatrix(image);
            int[,,] T = bitmapToMatrix(template);
            for (int rgb = 0; rgb < 3; rgb++)
            {
                for (int y = 0; y < Ih - Th + 1; y++)
                {
                    for (int x = 0; x < Iw - Tw + 1; x++)
                    {
                        //DS_rgb[x, y, rgb] = (I[x .. x + template.Width, y .. y + template.Height, rgb] - T[0 .. template.Width, 0 .. template.Height, rgb]);
                        for (int yT = 0; yT < Th; yT++)
                        {
                            for (int xT = 0; xT < Tw; xT++)
                            {
                                ds[xT, yT] = (I[x + xT, y + yT, rgb] - T[xT, yT, rgb]) * (I[x + xT, y + yT, rgb] - T[xT, yT, rgb]);
                            }
                        }
                        int sum = ds.Cast<int>().Sum();
                        DS_rgb[x, y, rgb] = sum;
                    }
                }
            }

            //DS[.., ..] = DS_rgb[.., .., 0] + DS_rgb[.., .., 1] + DS_rgb[.., .., 2];
            for (int y = 0; y < Ih - Th + 1; y++)
            {
                for (int x = 0; x < Iw - Tw + 1; x++)
                {
                    DS[x, y] = DS_rgb[x, y, 0] + DS_rgb[x, y, 1] + DS_rgb[x, y, 2];
                }
            }

            //xy = DS.FindIndex(z => z == Math.Min(DS));
            xy = indexOfMinimumValue(DS);

            // Output
            // Print the matrix DS
            /*
            Console.WriteLine("\nMatrix with all differential squares:");
            Print2DArray(DS);
            */
            Console.WriteLine($"\nPosition of your template in your image (upper left corner): ({xy[0]}, {xy[1]})");
            Console.Write("\nPress any key to close the TemplateMatcher console app...");
            Console.ReadKey();
        }
    }
}
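Since every cell of DS_rgb is computed independently, the simplest parallelization is Parallel.For from System.Threading.Tasks, which is available in .NET Framework 4.0+ and does not depend on the C# 8 range features. A minimal sketch, assuming the I, T, DS_rgb arrays and the dimensions from the listing above; note the shared ds[,] scratch array is replaced by a local accumulator, since sharing it across threads would be a data race:
// Parallelize over rows of the result; each iteration only writes its own cells.
Parallel.For(0, Ih - Th + 1, y =>
{
    for (int x = 0; x < Iw - Tw + 1; x++)
    {
        for (int rgb = 0; rgb < 3; rgb++)
        {
            long sum = 0; // local accumulator instead of the shared ds[,] array
            for (int yT = 0; yT < Th; yT++)
            {
                for (int xT = 0; xT < Tw; xT++)
                {
                    int d = I[x + xT, y + yT, rgb] - T[xT, yT, rgb];
                    sum += d * d;
                }
            }
            DS_rgb[x, y, rgb] = (int)sum;
        }
    }
});
This also removes the ds.Cast<int>().Sum() LINQ call from the inner loop, which by itself is a significant cost in the original version.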

Ordered Dithering - Color Values Per Channel

While moving forward with my algorithm for ordered dithering I ran into a problem: mainly, I don't really know what col[levels] might be.
Here is the pseudocode's notation:
k - number of color values per channel
n - size of the threshold Bayer matrix
My code works more or less fine for K = 2, but it doesn't return a correct result image for K = 3, K = 4, and so on.
UPDATED CODE
class OrderedDithering
{
    private float[,] bayerMatrix;
    private float[,] dither2x2Matrix =
        new float[,] { { 1, 3 },
                       { 4, 2 } };
    private float[,] dither3x3Matrix =
        new float[,] { { 3, 7, 4 },
                       { 6, 1, 9 },
                       { 2, 8, 5 } };

    public BitmapImage OrderedDitheringApply(BitmapImage FilteredImage, int valuesPerChannel, int thresholdSize)
    {
        Bitmap bitmap = ImageConverters.BitmapImage2Bitmap(FilteredImage);
        if (thresholdSize == 2)
        {
            bayerMatrix = new float[2, 2];
            for (int i = 0; i < 2; ++i)
                for (int j = 0; j < 2; ++j)
                    bayerMatrix[i, j] = dither2x2Matrix[i, j] / 5;
        }
        else
        {
            bayerMatrix = new float[3, 3];
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    bayerMatrix[i, j] = dither3x3Matrix[i, j] / 10;
        }
        for (int i = 0; i < bitmap.Width; ++i)
            for (int j = 0; j < bitmap.Height; ++j)
            {
                Color color = bitmap.GetPixel(i, j);
                double r = Scale(0, 255, 0, 1, color.R);
                double g = Scale(0, 255, 0, 1, color.G);
                double b = Scale(0, 255, 0, 1, color.B);
                int counter = 0;
                counter += Dither(valuesPerChannel, r, thresholdSize, i, j);
                counter += Dither(valuesPerChannel, g, thresholdSize, i, j);
                counter += Dither(valuesPerChannel, b, thresholdSize, i, j);
                if (counter == 0)
                    bitmap.SetPixel(i, j, Color.FromArgb(0, 0, 0));
                else
                    bitmap.SetPixel(i, j, Color.FromArgb(255 / counter, 255 / counter, 255 / counter));
            }
        return ImageConverters.Bitmap2BitmapImage(bitmap);
    }

    public int Dither(int valuesPerChannel, double colorIntensity, int thresholdSize, int i, int j)
    {
        double tempValue = (double)(Math.Floor((double)((valuesPerChannel - 1) * colorIntensity)));
        double re = (valuesPerChannel - 1) * colorIntensity - tempValue;
        if (re >= bayerMatrix[i % thresholdSize, j % thresholdSize])
            return 1;
        else
            return 0;
    }

    public double Scale(double a0, double a1, double b0, double b1, double a)
    {
        return b0 + (b1 - b0) * ((a - a0) / (a1 - a0));
    }
}
If you just need a dithering library that works with System.Drawing and supports ordered dithering, feel free to use mine. It has an easily usable OrderedDitherer class.
But if you are just playing for the sake of curiosity, and would like to improve your algorithm, here are some comments:
You now always set the pixels to black or white (by the way, you can use just Color.Black instead of FromArgb(0, 0, 0), for example), so it cannot work for more colors.
It is unclear from the provided code whether a target palette belongs to valuesPerChannel, but basically you need to quantize every dithered pixel to the nearest palette color.
So instead of working by brightness, you should apply the ordered matrix to each color channel of each pixel (in the 0..255 range), and pick a color from the target palette for the result. The 'nearest color' can be interpreted in more than one way, but generally a Euclidean search will do (a minimal sketch follows below).
Try to avoid Bitmap.SetPixel/GetPixel as they are terribly slow, and SetPixel does not even work with bitmaps that have an indexed PixelFormat.
The matrix values should also be calibrated for the palette you use. The typical default values are good for a BW palette, but they are probably too strong for a 256-color palette (the dithering "jumps over" more shades and introduces noise rather than nice gradients). See this page for more info.
Feel free to explore the linked code base. The Dither extension method is the starting point for dithering a Bitmap in place; see also the OrderedDitherer constructor, the strength calibration for the min/max values of the matrix, and a fairly fast nearest-color search either by RGB channels or by brightness.
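To make the per-channel idea concrete, here is a minimal sketch (mine, not taken from the linked library) of quantizing a single 0-255 channel to k evenly spaced levels with an ordered threshold; it assumes the normalized 0..1 Bayer matrix from the question's code:
// Dither one 0-255 channel value down to one of k levels (k >= 2).
// 'threshold' is the normalized Bayer matrix entry for this pixel position,
// e.g. bayerMatrix[i % thresholdSize, j % thresholdSize].
static byte DitherChannel(byte value, int k, float threshold)
{
    float scaled = value / 255f * (k - 1); // position between levels, 0..k-1
    int lower = (int)Math.Floor(scaled);   // nearest level at or below the value
    float frac = scaled - lower;           // fractional distance to the next level, 0..1
    int level = frac > threshold ? lower + 1 : lower;
    if (level > k - 1) level = k - 1;      // clamp at the top level
    return (byte)(level * 255 / (k - 1));  // map the chosen level back to 0..255
}
The output pixel would then be Color.FromArgb(DitherChannel(c.R, k, t), DitherChannel(c.G, k, t), DitherChannel(c.B, k, t)) instead of the black/white counter logic; for an arbitrary (non-uniform) palette, the level mapping is replaced by the nearest-color search described above.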

How to set the SetAlphamaps to one certain texture?

I want to change the texture of my terrain to a certain texture. I'm confused about how to set the splatmap data; can anyone help me out?
private void ChangeTexture(Vector3 WorldPos)
{
    print("changeTexture");
    int mapX = (int)(((WorldPos.x - terrainPos.x) / terrainData.size.x) * terrainData.alphamapWidth);
    int mapZ = (int)(((WorldPos.z - terrainPos.z) / terrainData.size.z) * terrainData.alphamapHeight);
    float[,,] splatmapData = terrainData.GetAlphamaps(3, 3, 15, 15);
    terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
    terrain.Flush();
}
The data returned by GetAlphamaps
The returned array is three-dimensional - the first two dimensions represent x and y coordinates on the map, while the third denotes the splatmap texture to which the alphamap is applied.
Or in simple words a float[x, y, l] where
x = width in pixels
y = height in pixels
l = Texture-Layer
So let's say you want to set a certain texture at these pixel coordinates. What you do is:
set the weight for that texture's layer to 1
set all other layers' weights to 0
So let's say you have e.g. 3 layers and you want the second one (= index 1) to be the fully weighted texture:
float[,,] splatmapData = terrainData.GetAlphamaps(mapX, mapZ, 15, 15);
// Iterate over x-y coordinates within the array
for (var y = 0; y < 15; y++)
{
    for (var x = 0; x < 15; x++)
    {
        // Set first layer's weight to 0
        splatmapData[x, y, 0] = 0;
        // Set second layer's weight to 1
        splatmapData[x, y, 1] = 1;
        // Set third layer's weight to 0
        splatmapData[x, y, 2] = 0;
    }
}
terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
I would then implement an enum for the layers, like, let's say:
public enum TerrainLayer
{
    Default = 0,
    Green,
    Red
}
so you can simply pass the corresponding layer as a parameter - a bit safer than passing in the int values themselves:
private void ChangeTexture(Vector3 worldPos, TerrainLayer toLayer)
{
    print("changeTexture");
    int mapX = (int)(((worldPos.x - terrainPos.x) / terrainData.size.x) * terrainData.alphamapWidth);
    int mapZ = (int)(((worldPos.z - terrainPos.z) / terrainData.size.z) * terrainData.alphamapHeight);
    float[,,] splatmapData = terrainData.GetAlphamaps(mapX, mapZ, 15, 15);
    for (var z = 0; z < 15; z++)
    {
        for (var x = 0; x < 15; x++)
        {
            // This of course would be more efficient if you did it only once,
            // e.g. in Awake, since the enum won't change at runtime
            var values = (TerrainLayer[])Enum.GetValues(typeof(TerrainLayer));
            // Iterate through the enum and
            for (var l = 0; l < values.Length; l++)
            {
                // set all layers to 0 except the toLayer
                splatmapData[x, z, l] = values[l] == toLayer ? 1 : 0;
            }
        }
    }
    terrainData.SetAlphamaps(mapX, mapZ, splatmapData);
    terrain.Flush();
}
Now you would simply call it e.g.
ChangeTexture(somePosition, TerrainLayer.Green);

Spectral Saliency map from Matlab code to C#.net

[SSM screenshot]
Hi, I have implemented the spectral saliency map from Matlab code in C#.NET, but the result is strange: the result map is not correctly mapped to the original image. What am I doing wrong?
The original Matlab (Octave) code is here:
%% Read image from file
inImg = im2double(rgb2gray(imread('1.jpg')));
inImg = imresize(inImg, 64/size(inImg, 2));
%% Spectral Residual
myFFT = fft2(inImg);
myLogAmplitude = log(abs(myFFT));
myPhase = angle(myFFT);
mySpectralResidual = myLogAmplitude - imfilter(myLogAmplitude, fspecial('average', 3), 'replicate');
saliencyMap = abs(ifft2(exp(mySpectralResidual + i*myPhase))).^2;
%% After Effect
saliencyMap = mat2gray(imfilter(saliencyMap, fspecial('gaussian', [10, 10], 2.5)));
imshow(saliencyMap);
My code is here, C#.NET + AForge.NET (Windows 10, VS2017, AForge & Accord.NET):
// Load image
image2 = (Bitmap)pictureBox1.Image;

// Box blur (logAmp smoothing)
Accord.Imaging.Filters.FastBoxBlur filtBox1 =
    new Accord.Imaging.Filters.FastBoxBlur(3, 3);
// apply the filter
filtBox1.ApplyInPlace(image2);

// Resize down to 64x64
AForge.Imaging.Filters.ResizeBilinear filtResize =
    new AForge.Imaging.Filters.ResizeBilinear(64, 64);
// apply the filter
Bitmap smallImage = filtResize.Apply(image2);

// 2D FFT -------------------------------------------
//cv::merge(planes, 2, complexImg);
AForge.Imaging.ComplexImage complexImage =
    AForge.Imaging.ComplexImage.FromBitmap(smallImage);
//double[,] logAmpl = new double[64, 64];
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        // zero out the imaginary part
        complexImage.Data[x, y].Im = 0;
    }
}
//cv::dft(complexImg, complexImg);
complexImage.ForwardFourierTransform();

// Calc log amplitude of the magnitude
double[,] logAmpl = new double[64, 64];
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        logAmpl[x, y] = Math.Log(Complex.Multiply(
            complexImage.Data[x, y], complexImage.Data[x, y]).Magnitude);
    }
}
Bitmap logAmp = logAmpl.ToBitmap();

// Extract phase
double[,] myPhase = new double[64, 64];
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        myPhase[x, y] = complexImage.Data[x, y].Phase;
    }
}

// Calc spectral residual -------------------------
// logAmp smoothing
Accord.Imaging.Filters.FastBoxBlur filtBox =
    new Accord.Imaging.Filters.FastBoxBlur(3, 3);
// apply the filter
Bitmap smoothAmp = filtBox.Apply(logAmp);
// subtraction
AForge.Imaging.Filters.Subtract filter =
    new AForge.Imaging.Filters.Subtract(smoothAmp);
// apply the filter
Bitmap specResi = filter.Apply(logAmp);

// Calc saliency map -----------------
// residual for Re
AForge.Imaging.ComplexImage totalSpec =
    AForge.Imaging.ComplexImage.FromBitmap(specResi);
// + j*im
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        totalSpec.Data[x, y].Im = myPhase[x, y];
    }
}
// exp of re + j*im
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        totalSpec.Data[x, y] = Complex.Exp(totalSpec.Data[x, y]);
    }
}
// ifft2
//complexImage.ForwardFourierTransform();
totalSpec.ForwardFourierTransform();
// .^2 of ifft2
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        totalSpec.Data[x, y] =
            Complex.Multiply(totalSpec.Data[x, y], totalSpec.Data[x, y]);
    }
}
// abs .^2
Bitmap salMap = totalSpec.ToBitmap();

// Gaussian blur
Accord.Imaging.Filters.GaussianBlur filt2 =
    new Accord.Imaging.Filters.GaussianBlur(2.5, 9);
// apply the filter
filt2.ApplyInPlace(salMap);

// create filter
Accord.Imaging.Filters.ContrastStretch filtNorm =
    new Accord.Imaging.Filters.ContrastStretch();
// process image
filtNorm.ApplyInPlace(salMap);

//ImageBox.Show(salMap);
pictureBox3.Image = salMap;
What am I doing wrong?
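Comparing this against the Matlab reference, one difference stands out: ifft2 is an inverse transform, but the code above calls ForwardFourierTransform() a second time, and it squares the complex values rather than their magnitudes. AForge's ComplexImage does provide BackwardFourierTransform(), so the last step would presumably look more like this (a sketch, not a verified fix):
// Matlab: saliencyMap = abs(ifft2(exp(mySpectralResidual + i*myPhase))).^2;
// ifft2 is the *inverse* FFT, i.e. BackwardFourierTransform in AForge.
totalSpec.BackwardFourierTransform();
for (int x = 0; x < 64; x++)
{
    for (int y = 0; y < 64; y++)
    {
        double mag = totalSpec.Data[x, y].Magnitude; // abs(...)
        totalSpec.Data[x, y].Re = mag * mag;         // .^2
        totalSpec.Data[x, y].Im = 0;
    }
}
after which ToBitmap(), the Gaussian blur, and the contrast stretch proceed as before. Whether this alone fixes the mapping I can't verify, but it matches the Matlab semantics more closely.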

Retinex algorithm implementation

I need to implement the single-scale retinex and multi-scale retinex algorithms in C#.
I searched a bit but couldn't find any useful practice projects or articles with code.
As I understand it, I should:
Convert RGB to YUV
Blur the image using a Gaussian blur filter
Apply I'(x, y) = 255*log10( I(x, y)/G(x, y) ) + 127.5, where I is the illumination, G is the Gaussian kernel, and I' is the result image
Convert YUV back to RGB
This code is not working correctly:
public static Image<Bgr, byte> SingleScaleRetinex(this Image<Bgr, byte> img, int gaussianKernelSize, double sigma)
{
    var radius = gaussianKernelSize / 2;
    var kernelSize = 2 * radius + 1;
    var ycc = img.Convert<Ycc, byte>();
    var sum = 0f;
    var gaussKernel = new float[kernelSize * kernelSize];
    for (int i = -radius, k = 0; i <= radius; i++, k++)
    {
        for (int j = -radius; j <= radius; j++)
        {
            var val = (float)Math.Exp(-(i * i + j * j) / (sigma * sigma));
            gaussKernel[k] = val;
            sum += val;
        }
    }
    for (int i = 0; i < gaussKernel.Length; i++)
        gaussKernel[i] /= sum;

    var gray = new Image<Gray, byte>(ycc.Size);
    CvInvoke.cvSetImageCOI(ycc, 1);
    CvInvoke.cvCopy(ycc, gray, IntPtr.Zero);

    // Image dimensions
    var width = img.Width;
    var height = img.Height;
    var bmp = gray.Bitmap;
    var bitmapData = bmp.LockBits(new Rectangle(Point.Empty, gray.Size), ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed);
    unsafe
    {
        for (var y = 0; y < height; y++)
        {
            var row = (byte*)bitmapData.Scan0 + y * bitmapData.Stride;
            for (var x = 0; x < width; x++)
            {
                var color = row + x;
                float val = 0;
                for (int i = -radius, k = 0; i <= radius; i++, k++)
                {
                    var ii = y + i;
                    if (ii < 0) ii = 0;
                    if (ii >= height) ii = height - 1;
                    var row2 = (byte*)bitmapData.Scan0 + ii * bitmapData.Stride;
                    for (int j = -radius; j <= radius; j++)
                    {
                        var jj = x + j;
                        if (jj < 0) jj = 0;
                        if (jj >= width) jj = width - 1;
                        val += *(row2 + jj) * gaussKernel[k];
                    }
                }
                var newColor = 127.5 + 255 * Math.Log(*color / val);
                if (newColor > 255)
                    newColor = 255;
                else if (newColor < 0)
                    newColor = 0;
                *color = (byte)newColor;
            }
        }
    }
    bmp.UnlockBits(bitmapData);
    CvInvoke.cvCopy(gray, ycc, IntPtr.Zero);
    CvInvoke.cvSetImageCOI(ycc, 0);
    return ycc.Convert<Bgr, byte>();
}
Look at:
http://www.fer.unizg.hr/ipg/resources/color_constancy
These algorithms are modifications of the Retinex algorithm (with speed improvements), although the author gave them funny names :)
There is full source code (C++, but it is written very nicely).
Sorry for necro-posting, but it seems that there's a mistake in step 3 of your procedure that could mislead someone passing by.
To apply the correction, you want to divide the source image by a Gauss-filtered copy of it, not by the Gaussian kernel itself. Approximately, in pseudocode (where * denotes convolution):
I_filtered(x,y) = G(x,y) * I(x,y)
I'(x,y) = log(I(x,y) / I_filtered(x,y))
Then cast I'(x,y) to the required numeric type (uint8, as I gather from the original post).
More on that topic can be found in this paper:
R_i(x, y) = log(I_i(x, y)) − log(I_i(x, y) ∗ F(x, y))
where I_i is the input image on the i-th color channel, R_i is the retinex output image on the i-th channel, and F is the normalized surround function.
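To tie the formula to code, here is a minimal grayscale sketch of that corrected step (mine, under stated assumptions): GaussianBlur is a hypothetical stand-in for any Gaussian convolution routine, e.g. one built from the kernel in the original post.
// Minimal sketch of R(x,y) = log(I(x,y)) - log((F * I)(x,y)) on a grayscale image.
// 'GaussianBlur' is a placeholder for any Gaussian convolution implementation.
static double[,] SingleScaleRetinex(double[,] img, double sigma)
{
    double[,] blurred = GaussianBlur(img, sigma); // F * I: a blurred copy, not a per-pixel kernel value
    int w = img.GetLength(0), h = img.GetLength(1);
    var result = new double[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            // small epsilon guards against log(0)
            result[x, y] = Math.Log(img[x, y] + 1e-6) - Math.Log(blurred[x, y] + 1e-6);
    return result; // rescale (e.g. min-max to 0..255) before casting back to bytes
}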
