ImageSharp: convert from PixelFormats to ColorSpace - c#

I am trying to use ImageSharp for some image processing. I would like to get HSL values for an individual pixel. For that I think I need to convert a PixelFormat to a ColorSpace. How do I convert to, and access, Hsl color space values?
I have tried the following, using ColorSpaceConverter, to no avail.
for (int y = 0; y < image.Height; y++)
{
    Span<Rgb24> pixelRowSpan = image.GetPixelRowSpan(y);
    Span<Hsl> hslRowSpan = new Span<Hsl>();
    var converter = new ColorSpaceConverter();
    converter.Convert(pixelRowSpan, hslRowSpan);
}
I do get the following errors:
error CS1503: Argument 1: cannot convert from
'System.Span<SixLabors.ImageSharp.PixelFormats.Rgb24>' to
'System.ReadOnlySpan<SixLabors.ImageSharp.ColorSpaces.CieLch>'
error CS1503: Argument 2: cannot convert from
'System.Span<SixLabors.ImageSharp.ColorSpaces.Hsl>' to 'System.Span<SixLabors.ImageSharp.ColorSpaces.CieLab>'

Rgb24 has an implicit conversion to Rgb but, as you have discovered, that doesn't allow implicit conversion of spans.
I would allocate a buffer equivalent to one row of Rgb outside the loop and populate it for each y.
// I would probably pool these buffers.
Span<Rgb> rgb = new Rgb[image.Width];
Span<Hsl> hsl = new Hsl[image.Width];
ColorSpaceConverter converter = new();
for (int y = 0; y < image.Height; y++)
{
    Span<Rgb24> row = image.GetPixelRowSpan(y);
    for (int x = 0; x < row.Length; x++)
    {
        rgb[x] = row[x];
    }
    converter.Convert(rgb, hsl);
}

Related

ImageSharp get dominant color from a stream

I'm trying to find an alternative to System.Drawing and ColorThief to be used in a docker container with Linux (because the previously mentioned libraries have some issues).
So far I found this gist
But it seems to be based on an older version, since OctreeQuantizer now has a different constructor.
I've tried different approaches, but most of the time I either get FFFFFF or a null reference exception.
I get the NRE when I try new OctreeQuantizer(new QuantizerOptions { Dither = null, MaxColors = 1 }), which I thought might be the same as in the gist. I always get white when I play around with the QuantizerOptions.
I have little to no experience with image processing and ImageSharp; it feels like I'm missing something.
tl;dr: trying to find the dominant color from a stream using ImageSharp.
I had the same issue. Solved using the code below. It is a bit hacky but works well for me.
using var image = Image.Load<Rgba32>(imageBytes);
image.Mutate(x => x
    .Resize(new ResizeOptions { Sampler = KnownResamplers.NearestNeighbor, Size = new Size(100, 0) }));

int r = 0;
int g = 0;
int b = 0;
int totalPixels = 0;
for (int x = 0; x < image.Width; x++)
{
    for (int y = 0; y < image.Height; y++)
    {
        var pixel = image[x, y];
        r += Convert.ToInt32(pixel.R);
        g += Convert.ToInt32(pixel.G);
        b += Convert.ToInt32(pixel.B);
        totalPixels++;
    }
}

r /= totalPixels;
g /= totalPixels;
b /= totalPixels;
Rgba32 dominantColor = new Rgba32((byte)r, (byte)g, (byte)b, 255);

// This will give you the dominant color in HEX format, i.e. #5E35B1FF
string hexColor = dominantColor.ToHex();
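The averaging step itself has nothing ImageSharp-specific in it. Stripped down to plain C# over an array of channel tuples (illustrative only), it is just:

```csharp
using System;

static class DominantColor
{
    // Average each channel over all pixels and format the result as RRGGBBAA hex,
    // matching the style of Rgba32.ToHex() output (e.g. "5E35B1FF").
    public static string AverageHex((byte R, byte G, byte B)[] pixels)
    {
        long r = 0, g = 0, b = 0;
        foreach (var p in pixels) { r += p.R; g += p.G; b += p.B; }
        int n = pixels.Length;
        return $"{(byte)(r / n):X2}{(byte)(g / n):X2}{(byte)(b / n):X2}FF";
    }
}
```

Note that a channel average is a crude "dominant color": a half-red, half-blue image averages to purple, which is exactly why quantizer-based approaches exist.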

Saving array of bitmaps to separate files with C#

I'm stuck saving an array of System.Drawing.Bitmap, each bitmap to a separate file.
I have an array "survey". This array stores several Lists of double.
For each List I want to create a bitmap and then save it as a bmp file.
The line raport[i].Save(Path.Combine(myfilepath, nets[i] + ".bmp")); throws a TypeInitializationException, and I don't know why.
The piece nets[i] is a dictionary (int, string) with the expected file names.
public void save_results()
{
    System.Drawing.Bitmap[] raport = new System.Drawing.Bitmap[survey.Length];
    for (int i = 0; i < survey.Length; i++)
    {
        raport[i] = new System.Drawing.Bitmap(survey[i].Count, 1000);
        for (int x = 0; x < survey[i].Count; x++)
            for (int y = 0; y < 1000; y++)
                raport[i].SetPixel(x, y, Color.FromArgb(255, 255, 255));
        for (int x = 0; x < survey[i].Count; x++)
            raport[i].SetPixel(x, (int)(1000 - Math.Floor(survey[i][x] * 1000) >= 1000 ? 999 : 1000 - Math.Floor(survey[i][x] * 1000)), Color.FromArgb(0, 0, 0));
        raport[i].Save(Path.Combine(myfilepath, nets[i] + ".bmp"));
    }
}
Finally, the problem was associated with the variable "myfilepath".
The variable was composed from a few file paths, and all of those strings should have been static:
public static string mydoc = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
public static string myfilepath_p = Path.Combine(mydoc, "Demeter");
public static string myfilepath = Path.Combine(myfilepath_p, "regresja_liniowa");
Originally, only the 'final' variable used in the cited code was static, which caused the error.
The rest of the code worked fine.
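The failure mode is easy to reproduce in isolation: if any static field initializer of a type throws, the first access to that type surfaces a TypeInitializationException that wraps the real error in InnerException. A minimal sketch (names are illustrative, not the original project's):

```csharp
using System;
using System.IO;

static class BrokenPaths
{
    // Path.Combine throws ArgumentNullException for a null segment; because it
    // runs inside the static initializer, callers never see that directly,
    // they get TypeInitializationException with the real cause in InnerException.
    static readonly string root = null;
    public static readonly string Dir = Path.Combine(root, "regresja_liniowa");
}

static class Demo
{
    // Touch the broken type and report which exception actually caused the failure.
    public static string FirstAccessError()
    {
        try { return BrokenPaths.Dir; }
        catch (TypeInitializationException ex)
        {
            return ex.InnerException?.GetType().Name;
        }
    }
}
```

Inspecting InnerException is usually the fastest way to find which of the chained path fields was at fault.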

Passing a List as a parameter to a Matlab function from C# project

I'm trying to run the Fast Compressive Tracking algorithm from my C# project. What I want is to convert the images to grayscale in C# and pass them to the Matlab function
Runtracker
instead of loading the images inside the Matlab code, as I'm going to apply some operations to specify the index of the array of images to start tracking from.
But when I pass the list of grayscale images, I get an error saying
"The best overloaded method match for 'trackerNative.FCT.Runtracker(int)' has some invalid arguments."
Can you help me solve this, i.e. pass a List of images from C# to a Matlab function?
C# Code
// to convert to grayscale
public double[,] to_gray(Bitmap img)
{
    int w = img.Width;
    int h = img.Height;
    double[,] grayImage = new double[w, h];
    for (int i = 0; i < w; i++)
    {
        for (int x = 0; x < h; x++)
        {
            Color oc = img.GetPixel(i, x);
            grayImage[i, x] = (double)(oc.R + oc.G + oc.B) / 3;
        }
    }
    return grayImage;
}
// to get the files from a specified folder
public static String[] GetFilesFrom(String searchFolder, String[] filters, bool isRecursive)
{
    List<String> filesFound = new List<String>();
    var searchOption = isRecursive ? SearchOption.AllDirectories : SearchOption.TopDirectoryOnly;
    foreach (var filter in filters)
    {
        filesFound.AddRange(Directory.GetFiles(searchFolder, String.Format("*.{0}", filter), searchOption));
    }
    return filesFound.ToArray();
}
// Button to run the matlab function 'Runtracker'
private void button1_Click(object sender, EventArgs e)
{
    FCT obj = new FCT(); // FCT is a matlab class
    OpenFileDialog openFileDialog1 = new OpenFileDialog();
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        string file_path = openFileDialog1.FileName;
        var filters = new String[] { "png" };
        var files = GetFilesFrom(file_path, filters, false);
        List<double[,]> imgArr = new List<double[,]>();
        for (int i = 0; i < files.Length; i++)
        {
            Bitmap image = new Bitmap(files[i]);
            double[,] grayScale_image = to_gray(image);
            imgArr.Add(grayScale_image); // indexing imgArr[i] on an empty list would throw
        }
        object m = obj.Runtracker(imgArr); // the error occurred here
        //Bitmap output = m as Bitmap;
    }
}
Matlab Code
function Runtracker(input_imgArr)
clc;clear all;close all;
rand('state',0);
%%
[initstate] = initCt(1);% position of the detected face in the 1st frame
num = length(input_imgArr);% number of frames
%%
x = initstate(1);% x axis at the Top left corner
y = initstate(2);
w = initstate(3);% width of the rectangle
h = initstate(4);% height of the rectangle
%---------------------------
img = imread(input_imgArr(1));
img = double(img);
%%
trparams.init_negnumtrain = 50;%number of trained negative samples
trparams.init_postrainrad = 4;%radical scope of positive samples
trparams.initstate = initstate;% object position [x y width height]
trparams.srchwinsz = 25;% size of search window
%-------------------------
%% Classifier parameters
clfparams.width = trparams.initstate(3);
clfparams.height= trparams.initstate(4);
% feature parameters
% number of rectangle from 2 to 4.
ftrparams.minNumRect =2;
ftrparams.maxNumRect =4;
M = 100;% number of all weaker classifiers, i.e,feature pool
%-------------------------
posx.mu = zeros(M,1);% mean of positive features
negx.mu = zeros(M,1);
posx.sig= ones(M,1);% variance of positive features
negx.sig= ones(M,1);
lRate = 0.85;% Learning rate parameter
%% Compute feature template
[ftr.px,ftr.py,ftr.pw,ftr.ph,ftr.pwt] = HaarFtr(clfparams,ftrparams,M);
%% Compute sample templates
posx.sampleImage = sampleImgDet(img,initstate,trparams.init_postrainrad,1);
negx.sampleImage = sampleImg(img,initstate,1.5*trparams.srchwinsz,4+trparams.init_postrainrad,trparams.init_negnumtrain);
%% Feature extraction
iH = integral(img);%Compute integral image
posx.feature = getFtrVal(iH,posx.sampleImage,ftr);
negx.feature = getFtrVal(iH,negx.sampleImage,ftr);
[posx.mu,posx.sig,negx.mu,negx.sig] = classiferUpdate(posx,negx,posx.mu,posx.sig,negx.mu,negx.sig,lRate);% update distribution parameters
%% Begin tracking
for i = 2:num
img = imread(input_imgArr(i));
imgSr = img;% imgSr is used for showing tracking results.
img = double(img);
iH = integral(img);%Compute integral image
%% Coarse detection
step = 4; % coarse search step
detectx.sampleImage = sampleImgDet(img,initstate,trparams.srchwinsz,step);
detectx.feature = getFtrVal(iH,detectx.sampleImage,ftr);
r = ratioClassifier(posx,negx,detectx.feature);% compute the classifier for all samples
clf = sum(r);% linearly combine the ratio classifiers in r to the final classifier
[c,index] = max(clf);
x = detectx.sampleImage.sx(index);
y = detectx.sampleImage.sy(index);
w = detectx.sampleImage.sw(index);
h = detectx.sampleImage.sh(index);
initstate = [x y w h];
%% Fine detection
step = 1;
detectx.sampleImage = sampleImgDet(img,initstate,10,step);
detectx.feature = getFtrVal(iH,detectx.sampleImage,ftr);
r = ratioClassifier(posx,negx,detectx.feature);% compute the classifier for all samples
clf = sum(r);% linearly combine the ratio classifiers in r to the final classifier
[c,index] = max(clf);
x = detectx.sampleImage.sx(index);
y = detectx.sampleImage.sy(index);
w = detectx.sampleImage.sw(index);
h = detectx.sampleImage.sh(index);
initstate = [x y w h];
%% Show the tracking results
imshow(uint8(imgSr));
rectangle('Position',initstate,'LineWidth',4,'EdgeColor','r');
hold on;
text(5, 18, strcat('#',num2str(i)), 'Color','y', 'FontWeight','bold', 'FontSize',20);
set(gca,'position',[0 0 1 1]);
pause(0.00001);
hold off;
%% Extract samples
posx.sampleImage = sampleImgDet(img,initstate,trparams.init_postrainrad,1);
negx.sampleImage = sampleImg(img,initstate,1.5*trparams.srchwinsz,4+trparams.init_postrainrad,trparams.init_negnumtrain);
%% Update all the features
posx.feature = getFtrVal(iH,posx.sampleImage,ftr);
negx.feature = getFtrVal(iH,negx.sampleImage,ftr);
[posx.mu,posx.sig,negx.mu,negx.sig] = classiferUpdate(posx,negx,posx.mu,posx.sig,negx.mu,negx.sig,lRate);% update distribution parameters
end
end
This is because, as the error says, the matching overload of Runtracker expects an argument of type int, while you are passing a List<double[,]>.
The type of the passed argument must match the expected parameter type for the call to resolve successfully.
Hope this helps.

EmguCv: Reduce the grayscales

Is there a way to reduce the grayscale levels of a gray image in OpenCV?
Normally I have gray values from 0 to 255 for an
Image<Gray, byte> inputImage.
In my case I just need gray values from 0-10. Is there a good way to do that with OpenCV, especially in C#?
There's nothing built into OpenCV that does this sort of thing.
Nevertheless, you can write it yourself. Take a look at this C++ implementation and just translate it to C#:
void colorReduce(cv::Mat& image, int div = 64)
{
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // number of elements per line
    for (int j = 0; j < nl; j++)
    {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);
        for (int i = 0; i < nc; i++)
        {
            // process each pixel
            data[i] = data[i] / div * div + div / 2;
        }
    }
}
Just send a grayscale Mat to this function and play with the div parameter.
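A direct C# translation of the function above, written here against a plain byte buffer so it stays library-neutral (for an Image<Gray, byte> you would apply the same arithmetic to the image's pixel data; this is a sketch, not EmguCV API):

```csharp
static class ColorReduce
{
    // Quantize each byte to the centre of its bucket of size div,
    // mirroring the C++ version: v / div * div + div / 2.
    // With div = 26, the 0-255 range collapses to 10 grey levels (buckets 0..9).
    public static void Reduce(byte[] data, int div = 64)
    {
        for (int i = 0; i < data.Length; i++)
            data[i] = (byte)(data[i] / div * div + div / 2);
    }
}
```

For the 0-10 requirement in the question, divide each pixel by 26 instead if you want the level index itself rather than the bucket centre.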

Operating on images in emgucv

I have 2 Emgu.CV.Image images:
Image<Gray, byte> img1 = new Image<Gray, byte>(#"xyz.gif");
Image<Gray, byte> img2 = new Image<Gray, byte>(#"abc.gif");
I want to perform operation on images like image addition pixel by pixel like ( without using inbuilt functions):
for (int i = 0; i < width1; i++){
    for (int j = 0; j < height1; j++){
        img2[i][j] = img1[i][j] + img2[i][j];
    }
}
How can I do so?
If you need to alter the image pixel by pixel, then it's back to using the Image.Data property.
If you are using colour images, it is important to note that .Data is a 3-dimensional array containing Red, Green and Blue data in layers 0, 1 and 2 respectively. The following code will allow you to access data from the image and adjust it.
for (int i = 0; i < width1; i++)
{
    for (int j = 0; j < height1; j++)
    {
        img1.Data[i, j, 0] = img1.Data[i, j, 0] + img2.Data[i, j, 0];
        img1.Data[i, j, 1] = img1.Data[i, j, 1] + img2.Data[i, j, 1];
        img1.Data[i, j, 2] = img1.Data[i, j, 2] + img2.Data[i, j, 2];
    }
}
For the int<>byte conversion error, you might need to cast the result to a (byte) (although you can assign ints to the Data, provided they're in range), i.e.
img1.Data[i,j,0] = (byte)(img1.Data[i,j,0] + img2.Data[i,j,0]);
This is done to tell .NET that you are willing to accept data loss. Please note, however, that you are adding two bytes of image data. Their values are 0-255, so you could end up with a value of 0-510. To account for this you must normalise your result back to the required 0-255 range, i.e.
img1.Data[i,j,0] = (byte)((img1.Data[i,j,0] + img2.Data[i,j,0]) / 2);
As you are using greyscale images, your image data will only have 1 layer. The code is similar in format; however, you only add the first layer, 0.
for (int i = 0; i < width1; i++)
{
    for (int j = 0; j < height1; j++)
    {
        gray_image1.Data[i, j, 0] = gray_image1.Data[i, j, 0] + gray_image2.Data[i, j, 0];
    }
}
The TDepth property allows you to alter the type of data held within the .Data construct. While some conversions are not supported, doubles are, mainly because the way these are stored allows code to execute more efficiently with them. It is good practice to use this, though not really essential.
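The range problem described above is worth seeing in isolation: the sum of two bytes can reach 510, so you either average or clamp before casting back to byte. A plain C# sketch of both options:

```csharp
using System;

static class PixelAdd
{
    // Average of the two values: always stays in 0-255, halves the brightness contribution.
    public static byte Average(byte a, byte b) => (byte)((a + b) / 2);

    // Saturating add: clamps the 0-510 sum back to 255, preserving bright regions.
    public static byte Saturate(byte a, byte b) => (byte)Math.Min(a + b, 255);
}
```

Which one you want depends on the effect: averaging blends the two images, while saturating addition brightens and blows out overlapping bright areas.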
