Converting simple JavaScript code to C#

I've been trying to convert this JavaScript code that gets the dominant color of an image, so far with no success. I get errors with the colorCount and color variables; I don't know which equivalent data types to use for them. Here is my code:
public string dominantColor(Bitmap img)
{
    int[] colorCount = new int[0];
    int maxCount = 0;
    string dominantColor = "";
    // data is an array of a series of 4 one-byte values representing the rgba values of each pixel
    Bitmap Bmp = new Bitmap(img);
    BitmapData BmpData = Bmp.LockBits(new Rectangle(0, 0, Bmp.Width, Bmp.Height), ImageLockMode.ReadOnly, Bmp.PixelFormat);
    byte[] data = new byte[BmpData.Stride * Bmp.Height];
    for (int i = 0; i < data.Length; i += 4)
    {
        // ignore transparent pixels
        if (data[i + 3] == 0)
            continue;
        string color = data[i] + "," + data[i + 1] + "," + data[i + 2];
        // ignore white
        if (color == "255,255,255")
            continue;
        if (colorCount[color] != 0)
            colorCount[color] = colorCount[color] + 1;
        else
            colorCount[color] = 0;
        // keep track of the color that appears the most times
        if (colorCount[color] > maxCount)
        {
            maxCount = colorCount[color];
            dominantColor = color.ToString;
        }
    }
    string rgb = dominantColor.Split(",");
    return rgb;
}

I'll give you a complete managed version of your code:
static Color dominantColor(Bitmap img)
{
    Hashtable colorCount = new Hashtable();
    int maxCount = 0;
    Color dominantColor = Color.White;
    for (int i = 0; i < img.Width; i++)
    {
        for (int j = 0; j < img.Height; j++)
        {
            var color = img.GetPixel(i, j);
            // ignore transparent pixels
            if (color.A == 0)
                continue;
            // ignore white
            if (color.Equals(Color.White))
                continue;
            if (colorCount[color] != null)
                colorCount[color] = (int)colorCount[color] + 1;
            else
                colorCount.Add(color, 1); // start at 1 so a color seen only once can still win
            // keep track of the color that appears the most times
            if ((int)colorCount[color] > maxCount)
            {
                maxCount = (int)colorCount[color];
                dominantColor = color;
            }
        }
    }
    return dominantColor;
}
So what is the difference here?
- I use a Hashtable instead of your array (you declare the array with length 0 and never resize it; the closest .NET analogue to the extensible JavaScript object you were using is a Hashtable, or better yet a Dictionary<Color, int>).
- I prefer to use the built-in Color structure (which packs the four bytes for Alpha, Red, Green and Blue).
- I also do the comparisons on this structure and return it (then you are free to do with it whatever you want; in JavaScript, working with those comma-separated strings is just a workaround, because the browser hands you RGB(A) strings).
Another problem in your code is the line byte[] data = new byte[BmpData.Stride * Bmp.Height]; - the array is created and initialized, but it contains no pixel data (.NET zero-initializes it, so it is all zeros). You never copy the locked bitmap bits into it (for example with Marshal.Copy), so you will not find any color anywhere.
The drawback of my version is that it is indeed very slow (this is where your LockBits come into play). I can give you a non-managed version (using LockBits and an unsafe block) if you want. It depends on whether performance matters a lot to you and whether you are interested!
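For reference, here is a minimal sketch of what such a LockBits-based variant could look like. It assumes the bitmap can be locked as 32bpp ARGB, the method name is mine, and it needs System.Collections.Generic, System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices:
static Color dominantColorFast(Bitmap img)
{
    var counts = new Dictionary<Color, int>();
    int maxCount = 0;
    Color dominant = Color.White;
    var rect = new Rectangle(0, 0, img.Width, img.Height);
    // Lock as 32bpp ARGB so every pixel is exactly 4 bytes: B, G, R, A.
    BitmapData data = img.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        byte[] pixels = new byte[data.Stride * data.Height];
        // Copy the locked bits into the managed array (the step missing in the original code).
        Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
        for (int y = 0; y < data.Height; y++)
        {
            int line = y * data.Stride;
            for (int x = 0; x < data.Width; x++)
            {
                int i = line + x * 4;
                byte b = pixels[i], g = pixels[i + 1], r = pixels[i + 2], a = pixels[i + 3];
                if (a == 0) continue;                           // skip transparent pixels
                if (r == 255 && g == 255 && b == 255) continue; // skip white
                Color c = Color.FromArgb(r, g, b);
                counts.TryGetValue(c, out int n);
                counts[c] = ++n;
                if (n > maxCount) { maxCount = n; dominant = c; }
            }
        }
    }
    finally
    {
        img.UnlockBits(data);
    }
    return dominant;
}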

Related

Algorithms and techniques for string search across multiple GiB of text files

I have to create a utility that searches through 40 to 60 GiB of text files as quickly as possible.
Each file holds around 50 MB of data consisting of log lines (about 630,000 lines per file).
A NoSQL document database is unfortunately not an option...
As of now I am using the Aho-Corasick algorithm for the search, which I took from Tomas Petricek's blog. It works very well.
I process the files in Tasks. Each file is loaded into memory by simply calling File.ReadAllLines(path). The lines are then fed into the Aho-Corasick implementation one by one, so each file causes around 600,000 calls to the algorithm (I need the line number in my results).
This takes a lot of time and requires a lot of memory and CPU.
I have very little expertise in this field, as I usually work in image processing.
Can you recommend algorithms and approaches that could speed up the processing?
Below is a more detailed view of the Task creation and file loading, which is pretty standard. For more information on Aho-Corasick, please visit the linked blog page above.
private KeyValuePair<string, StringSearchResult[]> FindInternal(
    IStringSearchAlgorithm algo,
    string file)
{
    List<StringSearchResult> result = new List<StringSearchResult>();
    string[] lines = File.ReadAllLines(file);
    for (int i = 0; i < lines.Length; i++)
    {
        var results = algo.FindAll(lines[i]);
        // attach the line number before collecting the matches
        for (int j = 0; j < results.Length; j++)
        {
            results[j].Row = i;
        }
        result.AddRange(results);
    }
    return new KeyValuePair<string, StringSearchResult[]>(
        file, result.ToArray());
}
public Dictionary<string, StringSearchResult[]> Find(
    params string[] search)
{
    IStringSearchAlgorithm algo = new StringSearch();
    algo.Keywords = search;
    Task<KeyValuePair<string, StringSearchResult[]>>[] findTasks
        = new Task<KeyValuePair<string, StringSearchResult[]>>[_files.Count];
    Parallel.For(0, _files.Count, i => {
        findTasks[i] = Task.Factory.StartNew(
            () => FindInternal(algo, _files[i])
        );
    });
    Task.WaitAll(findTasks);
    return findTasks.Select(t => t.Result)
                    .ToDictionary(x => x.Key, x => x.Value);
}
EDIT
See section Initial Answer for the original Answer.
I further optimized my code by doing the following:
- Added paging to prevent memory overflow / crashes due to the large amount of result data.
- I offload the search results into local files as soon as they exceed a certain buffer size (64 KB in my case).
- Offloading the results required me to convert my SearchData struct to binary and back (a minimal sketch of such a round-trip follows below).
- Splitting the array of files to be processed and running the chunks in Tasks greatly increased performance (from 35 s to 9 s when processing about 25 GiB of search data).
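A minimal sketch of that binary round-trip, assuming a SearchData that carries a matched string, a file index and the containing line (as in the code further below; adapt the field set to your actual struct). BinaryWriter and BinaryReader length-prefix strings, so no manual bookkeeping is needed:
// Sketch: offload SearchData records to a local file and read them back.
static void WriteResults(string path, IEnumerable<SearchData> items)
{
    using (var w = new BinaryWriter(File.Open(path, FileMode.Create)))
    {
        foreach (var sd in items)
        {
            w.Write(sd.Match);  // length-prefixed string
            w.Write(sd.Index);  // int
            w.Write(sd.Line);   // length-prefixed string
        }
    }
}
static IEnumerable<SearchData> ReadResults(string path)
{
    using (var r = new BinaryReader(File.Open(path, FileMode.Open)))
    {
        while (r.BaseStream.Position < r.BaseStream.Length)
        {
            var sd = new SearchData();
            sd.Match = r.ReadString();
            sd.Index = r.ReadInt32();
            sd.Line = r.ReadString();
            yield return sd;
        }
    }
}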
Splitting / scaling the file array
The code below maps a value from the range [T_min, T_max] into a smaller target range [t_min, t_max].
The result can then be used to determine the size of each chunk, i.e. how many file paths each Task processes.
private int ScalePartition(int T_min, int T_max)
{
    // Map the midpoint of [T_min, T_max] into the target range [t_min, t_max].
    int m = T_max / 2;
    int t_min = 4;
    int t_max = Math.Max(T_max / 16, T_min);
    // Multiply before dividing so integer division does not truncate to zero.
    m = (m - T_min) * (t_max - t_min) / (T_max - T_min) + t_min;
    return m;
}
This code shows the implementation of the scaling and splitting.
// Get the size of each file array portion.
int scale = ScalePartition(1, _files.Count);
// Iterator.
int n = 0;
// List containing the search tasks.
List<Task<SearchData[]>> searchTasks = new List<Task<SearchData[]>>();
// Loop through the files.
while (n < _files.Count) {
    // Local copy of n.
    // You will get an AggregateException if you capture n directly,
    // as n changes while the tasks are running.
    int num = n;
    // The amount of items to take.
    // This needs to be calculated because the last chunk may
    // contain fewer elements than 'scale'.
    int cnt = n + scale > _files.Count ? _files.Count - n : scale;
    // Run the Find(int, int, Regex[]) method and add it as a task.
    searchTasks.Add(Task.Run(() => Find(num, cnt, regexes)));
    // Increment the iterator by the chunk size.
    n += scale;
}
Initial Answer
I had the best results so far after switching to MemoryMappedFile and moving from Aho-Corasick back to Regex (a demand was made that pattern matching is a must-have).
There are still parts that can be optimized or changed, and I'm sure this is not the fastest or best solution, but for now it's alright.
Here is the code, which returns the results in 30 seconds for 25 GiB worth of data:
// GNU coreutils wc defined buffer size.
// Had best performance with this buffer size.
//
// Definition in wc.c:
// -------------------
// /* Size of atomic reads. */
// #define BUFFER_SIZE (16 * 1024)
//
private const int BUFFER_SIZE = 16 * 1024;
private KeyValuePair<string, SearchData[]> FindInternal(Regex[] rgx, string file)
{
    // Buffer for data segmentation.
    byte[] buffer = new byte[BUFFER_SIZE];
    // Get size of file.
    FileInfo fInfo = new FileInfo(file);
    long fSize = fInfo.Length;
    fInfo = null;
    // List of results.
    List<SearchData> results = new List<SearchData>();
    // Create MemoryMappedFile.
    string name = "mmf_" + Path.GetFileNameWithoutExtension(file);
    using (var mmf = MemoryMappedFile.CreateFromFile(
        file, FileMode.Open, name))
    {
        // Create read-only in-memory access to the file data.
        using (var accessor = mmf.CreateViewStream(
            0, fSize,
            MemoryMappedFileAccess.Read))
        {
            // Start of the chunk that is about to be read.
            int pos = (int)accessor.Position;
            // First read size: a full buffer, or the whole file
            // if it is smaller than the default buffer size.
            int cnt = (int)(fSize > BUFFER_SIZE ? BUFFER_SIZE : fSize);
            // Iterate through the file until the end is reached.
            while (accessor.Position < fSize)
            {
                // Remember where this chunk starts, for index calculations.
                pos = (int)accessor.Position;
                // Read the next chunk into the buffer.
                accessor.Read(buffer, 0, cnt);
                // Convert only the bytes just read for the Regex search.
                string s = Encoding.UTF8.GetString(buffer, 0, cnt);
                // Size of the next read.
                cnt = (int)(fSize - accessor.Position >= BUFFER_SIZE
                    ? BUFFER_SIZE
                    : fSize - accessor.Position);
                // Run each regex against the extracted chunk.
                foreach (Regex r in rgx) {
                    // Get matches.
                    MatchCollection matches = r.Matches(s);
                    // Create a SearchData struct to reduce memory
                    // impact and only keep relevant data.
                    foreach (Match m in matches) {
                        SearchData sd = new SearchData();
                        // The actual matched string.
                        sd.Match = m.Value;
                        // The index in the file: chunk start + offset in chunk.
                        sd.Index = m.Index + pos;
                        // Index to find the beginning of the line.
                        int nFirst = m.Index;
                        // Index to find the end of the line.
                        int nLast = m.Index;
                        // Go back until the end of the
                        // preceding line has been found.
                        while (s[nFirst] != '\n' && nFirst > 0) {
                            nFirst--;
                        }
                        // Skip the length of \r\n (new line).
                        // Change this to 1 if you work on a Unix system.
                        nFirst += 2;
                        // Go forward until the end of the
                        // current line has been found.
                        while (s[nLast] != '\n' && nLast < s.Length - 1) {
                            nLast++;
                        }
                        // Remove the length of \r\n (new line).
                        // Change this to 1 if you work on a Unix system.
                        nLast -= 2;
                        // Store the whole line in the SearchData struct.
                        sd.Line = s.Substring(nFirst, nLast - nFirst);
                        // Add the result.
                        results.Add(sd);
                    }
                }
            }
        }
    }
    return new KeyValuePair<string, SearchData[]>(file, results.ToArray());
}
public List<KeyValuePair<string, SearchData[]>> Find(params string[] search)
{
    var results = new List<KeyValuePair<string, SearchData[]>>();
    // Prepare regex objects.
    Regex[] regexes = new Regex[search.Length];
    for (int i = 0; i < regexes.Length; i++) {
        regexes[i] = new Regex(search[i], RegexOptions.Compiled);
    }
    // Get all search results.
    // Creating each Regex once and passing it
    // to the sub-routine is best, as the regex
    // engine adds a lot of overhead.
    foreach (var file in _files) {
        var data = FindInternal(regexes, file);
        results.Add(data);
    }
    return results;
}
I had a stupid idea yesterday where I thought it might work out to convert the file data to a bitmap and look for the input within the pixels, since pixel checking is quite fast.
Just for the giggles... here is the non-optimized test code for that stupid idea:
public struct SearchData
{
    public string Line;
    public string Search;
    public int Row;
    public SearchData(string l, string s, int r) {
        Line = l;
        Search = s;
        Row = r;
    }
}
internal static class FileToImage
{
    public static unsafe SearchData[] FindText(string search, Bitmap bmp)
    {
        byte[] buffer = Encoding.ASCII.GetBytes(search);
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadOnly, bmp.PixelFormat);
        List<SearchData> results = new List<SearchData>();
        int bpp = Bitmap.GetPixelFormatSize(bmp.PixelFormat) / 8;
        byte* ptFirst = (byte*)data.Scan0;
        byte firstHit = buffer[0];
        for (int y = 0; y < data.Height; y++) {
            byte* ptStride = ptFirst + (y * data.Stride);
            for (int x = 0; x < data.Stride; x++) {
                if (firstHit == ptStride[x]) {
                    if (buffer.Length < data.Stride - x) {
                        int ret = 0;
                        for (int n = 0, xx = x; n < buffer.Length; n++, xx++) {
                            if (ptStride[xx] != buffer[n]) {
                                break;
                            }
                            ret++;
                        }
                        if (ret == buffer.Length) {
                            int lineLength = 0;
                            for (int n = 0; n < data.Stride; n += bpp) {
                                if (ptStride[n + 2] == 255 &&
                                    ptStride[n + 1] == 255 &&
                                    ptStride[n + 0] == 255)
                                {
                                    lineLength = n;
                                }
                            }
                            SearchData sd = new SearchData();
                            byte[] lineBytes = new byte[lineLength];
                            Marshal.Copy((IntPtr)ptStride, lineBytes, 0, lineLength);
                            sd.Search = search;
                            sd.Line = Encoding.ASCII.GetString(lineBytes);
                            sd.Row = y;
                            results.Add(sd);
                        }
                    }
                }
            }
        }
        // Unlock before returning (this statement was unreachable in the original).
        bmp.UnlockBits(data);
        return results.ToArray();
    }
    private static unsafe Bitmap GetBitmapInternal(string[] lines, int startIndex, Bitmap bmp)
    {
        int bpp = Bitmap.GetPixelFormatSize(bmp.PixelFormat) / 8;
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadWrite,
            bmp.PixelFormat);
        int index = startIndex;
        byte* ptFirst = (byte*)data.Scan0;
        int maxHeight = bmp.Height;
        if (lines.Length - startIndex < maxHeight) {
            maxHeight = lines.Length - startIndex - 1;
        }
        for (int y = 0; y < maxHeight; y++) {
            byte* ptStride = ptFirst + (y * data.Stride);
            index++;
            int max = lines[index].Length;
            max += (max % bpp);
            lines[index] += new string('\0', max % bpp);
            max = lines[index].Length;
            for (int x = 0; x + 2 < max; x += bpp) {
                ptStride[x + 0] = (byte)lines[index][x + 0];
                ptStride[x + 1] = (byte)lines[index][x + 1];
                ptStride[x + 2] = (byte)lines[index][x + 2];
            }
            ptStride[max + 2] = 255;
            ptStride[max + 1] = 255;
            ptStride[max + 0] = 255;
            for (int x = max + bpp; x < data.Stride; x += bpp) {
                ptStride[x + 2] = 0;
                ptStride[x + 1] = 0;
                ptStride[x + 0] = 0;
            }
        }
        bmp.UnlockBits(data);
        return bmp;
    }
    public static unsafe Bitmap[] GetBitmap(string filePath)
    {
        int bpp = Bitmap.GetPixelFormatSize(PixelFormat.Format24bppRgb) / 8;
        var lines = System.IO.File.ReadAllLines(filePath);
        int y = 0x800; //lines.Length / 0x800;
        int x = lines.Max(l => l.Length) / bpp;
        int cnt = (int)Math.Ceiling((float)lines.Length / (float)y);
        Bitmap[] results = new Bitmap[cnt];
        for (int i = 0; i < results.Length; i++) {
            results[i] = new Bitmap(x, y, PixelFormat.Format24bppRgb);
            results[i] = GetBitmapInternal(lines, i * 0x800, results[i]);
        }
        return results;
    }
}
You can split the file into partitions and regex-search each partition in parallel, then join the results. There are some sharp edges in the details, like handling matches that span two partitions. Gigantor is a C# library I have created that does this very thing. Feel free to try it or have a look at the source code.
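For illustration, a rough sketch of that partition-and-join idea. Boundary handling is deliberately omitted, and the partition count, UTF-8 decoding and single read per chunk are simplifying assumptions:
// Sketch: search byte ranges of one file in parallel and join the matches.
// Matches that span a partition boundary are NOT found by this version;
// a real implementation re-scans a small overlap window at each boundary.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

static class PartitionedSearch
{
    public static string[] Search(string path, Regex regex, int partitions)
    {
        long length = new FileInfo(path).Length;
        long chunk = length / partitions + 1;
        var results = new ConcurrentBag<string>();
        Parallel.For(0, partitions, p =>
        {
            long offset = p * chunk;
            int size = (int)Math.Min(chunk, Math.Max(0, length - offset));
            if (size == 0) return;
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                fs.Seek(offset, SeekOrigin.Begin);
                byte[] buffer = new byte[size];
                int read = fs.Read(buffer, 0, size); // a robust version loops until 'size' bytes arrive
                string text = Encoding.UTF8.GetString(buffer, 0, read);
                foreach (Match m in regex.Matches(text))
                    results.Add(m.Value);
            }
        });
        return results.ToArray();
    }
}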

Instancing cubes and coloring them in Unity (C#)

I'm trying to automatically generate cubes in Unity with C#.
I have finally succeeded in creating a large number of cubes (300x300), positioning them and naming them, but I have a problem when it comes to colouring them (last part of the code).
I want to colour them using three variables, one for each RGB value. The variables I define in my code are rnewcolor, gnewcolor and bnewcolor, and they are assigned to each component as follows (last part of the code):
go.GetComponent<Renderer>().material.color = new Color(rnewcolor/255f, gnewcolor/255f, bnewcolor/255f)
When I assign random values to these three variables, for example:
int rnewcolor = UnityEngine.Random.Range(0, 255);
then the code works, and the cubes are coloured with random colours as expected (image here https://ibb.co/pXDmhfB ).
But when I try to read the RGB values from a file, storing them in the arrays Rcolor[i,j,k], Gcolor[i,j,k] and Bcolor[i,j,k] and passing them later to the variables rnewcolor, gnewcolor and bnewcolor, the colour assignment does not work and all the cubes come out in a kind of grey colour which is not the colour read from the file (image https://ibb.co/S3JRnBr ).
Some more images:
Inspector on the first cube (used for the instancing): https://ibb.co/VNsCy5b (it has the code "ProgramCubeNew.cs" added as a component)
Inspector on the first cube (also in prefab): https://ibb.co/tKrVD2V
Some comments:
By printing some debugging output, I know that the variables rnewcolor, gnewcolor, bnewcolor, Rcolor[,,], Gcolor[,,] and Bcolor[,,] are all of the same type, which is Int32.
Also, I know that the values read from the file and stored in the Rcolor[,,], Gcolor[,,] and Bcolor[,,] arrays are right.
I know that here:
go.GetComponent<Renderer>().material.color = new Color(rnewcolor/255f, gnewcolor/255f, bnewcolor/255f)
the function Color(r, g, b) works with float numbers ranging from 0 to 1. It works perfectly when random numbers are assigned to the variables rnewcolor, gnewcolor and bnewcolor, but it does not work when the values are read from the file.
As the colouring process works when using random numbers, I assume that the wiring between objects in the Unity interface must be right (?)
Does anybody know why I can't assign the colours when the values are read from a file?
Thank you!
using System.Collections;
using UnityEngine;
using System;
using System.IO;
using System.Text;
public class ProgramCubeNew : MonoBehaviour
{
    public Int32[,,] Rcolor = new Int32[400, 400, 400];
    public Int32[,,] Gcolor = new Int32[400, 400, 400];
    public Int32[,,] Bcolor = new Int32[400, 400, 400];
    // Start is called before the first frame update
    void Start()
    {
        GameObject prefab = Resources.Load("Cube") as GameObject;
        string path = @"H:\Unity\Learn_Everything_Fast\Cube_script_generation\export_bones_scene\rgb_bones_scene.txt";
        using (var reader = new StreamReader(path))
        {
            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                var values = line.Split(' ');
                int i = Convert.ToInt32(values[0]);
                int j = Convert.ToInt32(values[1]);
                int k = Convert.ToInt32(values[2]);
                Rcolor[i, j, k] = Convert.ToInt32(values[3]);
                Gcolor[i, j, k] = Convert.ToInt32(values[4]);
                Bcolor[i, j, k] = Convert.ToInt32(values[5]);
            }
        }
        // I'm using these three lines in order to know if the file is being read OK; it seems so.
        Debug.Log(Rcolor[50, 50, 50]);
        Debug.Log(Gcolor[50, 50, 50]);
        Debug.Log(Bcolor[50, 50, 50]);
        for (int i = 0; i <= 300; i++)
        {
            for (int j = 0; j <= 300; j++)
            {
                int k = 50;
                string name = string.Format("cube_{0}_{1}_{2}\n", i, j, k);
                string namemat = string.Format("mat_{0}_{1}_{2}\n", i, j, k);
                GameObject go = Instantiate(prefab) as GameObject;
                go.transform.position = new Vector3(i, j + 20, k);
                go.transform.name = name;
                // int rnewcolor = UnityEngine.Random.Range(0, 255); // it works ok
                // int gnewcolor = UnityEngine.Random.Range(0, 255); // it works ok
                // int bnewcolor = UnityEngine.Random.Range(0, 255); // it works ok
                int rnewcolor = Rcolor[i, j, k];
                int gnewcolor = Gcolor[i, j, k];
                int bnewcolor = Bcolor[i, j, k];
                // I'm using this to print some debug output
                if (i == 50 && j == 50)
                {
                    Debug.Log(Rcolor[i, j, k]);
                    Debug.Log(Gcolor[i, j, k]);
                    Debug.Log(Bcolor[i, j, k]);
                    Debug.Log(Rcolor[i, j, k].GetType());
                    Debug.Log(Gcolor[i, j, k].GetType());
                    Debug.Log(Bcolor[i, j, k].GetType());
                    Debug.Log(bnewcolor.GetType());
                }
                go.GetComponent<Renderer>().material.color = new Color(rnewcolor / 255f, gnewcolor / 255f, bnewcolor / 255f);
            }
        }
    }
    // Update is called once per frame
    void Update()
    {
    }
}

New image overlays previous bitmap

There are a number of posts about this, but I still can't figure it out. I am rather new at this, so please be forgiving.
I display an image, then grab a new image and try to display it. When the new image is displayed, it has remnants of the old image. I have tried Picture1.Image = null to no avail.
Is it an issue with managed memory? I suspect it has to do with how the memory is being managed: somehow the code copies the new image over the old one in a way that leaves some data from the previous image behind.
Here is the code that displays the data in scaled1 (from this helpful earlier post):
Edit:
Code added showing the processing of the arrays that are plotted. The overlaying behavior stops if the arrays are cleared using the Array.Clear method. Perhaps when this is cleared up I can post a canonical snippet demonstrating the issue.
This resets the question as: why do arrays need to be cleared when every value of the array is rewritten? How can an array retain information about previous values?
ushort[] frame = null;
byte[] scaled1 = null;
double[][] frameringSin;
double[][] frameringCos;
double[] sumsin;
double[] sumcos;
frame = new ushort[mImageWidth * mImageHeight];
scaled1 = new byte[mImageWidth * mImageHeight];
frameringSin = new double[RingSize][];
frameringCos = new double[RingSize][];
ringsin = new double[RingSize];
ringcos = new double[RingSize];
//Fill array with images
for (int ring = 0; ring < nN; ++ring)
{
    mCamera.GrabFrameReduced(framering[ring], reduced, out preset);
}
//Process images
for (int i = 0; i < nN; ++i)
{
    Array.Clear(frameringSin[i], 0, frameringSin[i].Length);
    Array.Clear(frameringCos[i], 0, frameringCos[i].Length);
}
Array.Clear(sumsin, 0, sumsin.Length);
Array.Clear(sumcos, 0, sumcos.Length);
for (int r = 0; r < nN; ++r)
{
    for (int i = 0; i < frame.Length; ++i) //up to 12 ms
    {
        frameringSin[r][i] = framering[r][i] * ringsin[r] / nN;
        frameringCos[r][i] = framering[r][i] * ringcos[r] / nN;
    }
}
for (int i = 0; i < sumsin.Length; ++i) //up to 25 ms
{
    for (int r = 0; r < nN; ++r)
    {
        sumsin[i] += frameringSin[r][i];
        sumcos[i] += frameringCos[r][i];
    }
}
for (int r = 0; r < nN; ++r)
{
    for (int i = 0; i < sumsin.Length; ++i)
    {
        A[i] = Math.Sqrt(sumsin[i] * sumsin[i] + sumcos[i] * sumcos[i]);
    }
    //extract scaling parameters
    ...
    //Scale image
    for (i1 = 0; i1 < frame.Length; ++i1)
        scaled1[i1] = (byte)((Math.Min(Math.Max(min1, frameA[i1]), max1) - min1) * scale1);
    bmp1 = new Bitmap(mImageWidth, mImageHeight, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
    var bdata1 = bmp1.LockBits(new Rectangle(new Point(0, 0), bmp1.Size), ImageLockMode.WriteOnly, bmp1.PixelFormat);
    try
    {
        Marshal.Copy(scaled1, 0, bdata1.Scan0, scaled1.Length);
    }
    finally
    {
        bmp1.UnlockBits(bdata1);
    }
    Picture1.Image = bmp1;
    Picture1.Refresh();
Actually, you're not replacing all values in the arrays - your for cycles are wrong. You want them to look like this:
for (i1 = 0; i1 < frame.Length; i1++)
    scaled1[i1] = (byte)((Math.Min(Math.Max(min1, frameA[i1]), max1) - min1) * scale1);
The difference (i++ vs ++i) is that your way, you're skipping the first and the last index. C# arrays start at 0, while you start at 1 (you increment the loop variable before you run the body for the first time).
Also, note that for performance reasons, it's very handy if you're going through the array like this:
for (var i = 0; i < array.Length; i++)
/* do work with array[i] */
The JIT compiler recognizes this pattern and avoids the bounds checks, because it knows the index can never go out of range. When you're doing a lot of work with arrays, this can give you a massive performance boost, even if you access multiple arrays through the same index (one of them will not have the checks, the others will; it still saves a lot of work).
The default JIT isn't very smart about this (it has to be quite fast), so pretty much anything else will reintroduce the bounds check. If performance is a concern for you, you'd want to profile the code anyway, of course.
EDIT: Ah, my bad. Anyway, I believe your problem isn't having to clear the frameringXXX arrays, but rather the sumsin and sumcos arrays: you're always adding to those, so you'd be adding to the values that were already there rather than starting from zero again. So you need to reset those arrays to zeroes first (which is what Array.Clear does). A small alternative sketch follows below.
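For illustration, a variant that sidesteps the clearing entirely: overwrite on the first ring, then accumulate. This reuses the variable names from the question:
// Assign on the first ring (overwriting whatever was there), accumulate afterwards,
// so sumsin/sumcos never need to be cleared between frames.
for (int i = 0; i < sumsin.Length; ++i)
{
    sumsin[i] = frameringSin[0][i];
    sumcos[i] = frameringCos[0][i];
    for (int r = 1; r < nN; ++r)
    {
        sumsin[i] += frameringSin[r][i];
        sumcos[i] += frameringCos[r][i];
    }
}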

Find lighter and darker colors based on any color, from white to black

Can anyone tell me how I can do the same as 0to255 in a Windows Store app (C#)?
In short, I need a method to which I pass a color and which gives me back a list of colors.
private List<string> GetColorCollection(string hexcolor)
{
    //TODO:
}
GetColorCollection(510099) should give me the same result as this.
The website you linked to uses HSL (Hue, Saturation, Lightness) instead of RGB to represent the colors, and then varies the Lightness from 0 to 255 (hence the name).
The Color class provides GetBrightness(), but nothing to manipulate the lightness directly. Luckily, you're not the first person to need that conversion: you can just use the ColorRGB class from this answer.
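For illustration, a minimal sketch of that idea: read hue and saturation, step the HSL lightness, convert back. System.Drawing types are used for brevity, and FromHsl is a hand-rolled helper (an assumption, not a framework API); in a Windows Store app you would swap in the ColorRGB class from that answer:
// Sketch: keep hue/saturation, vary HSL lightness in equal steps (0to255-style).
// Requires: using System; using System.Collections.Generic; using System.Drawing;
static IEnumerable<Color> LightnessSteps(Color color, int steps)
{
    float h = color.GetHue();          // 0..360
    float s = color.GetSaturation();   // 0..1
    for (int i = 0; i <= steps; i++)
        yield return FromHsl(h, s, (float)i / steps); // lightness 0 (black) .. 1 (white)
}
static Color FromHsl(float h, float s, float l)
{
    // Standard HSL -> RGB conversion.
    float c = (1 - Math.Abs(2 * l - 1)) * s;
    float x = c * (1 - Math.Abs(h / 60 % 2 - 1));
    float m = l - c / 2;
    float r = 0, g = 0, b = 0;
    if (h < 60) { r = c; g = x; }
    else if (h < 120) { r = x; g = c; }
    else if (h < 180) { g = c; b = x; }
    else if (h < 240) { g = x; b = c; }
    else if (h < 300) { r = x; b = c; }
    else { r = c; b = x; }
    return Color.FromArgb((int)((r + m) * 255), (int)((g + m) * 255), (int)((b + m) * 255));
}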
I struggled with this one, and I can't get exactly the same values as the website, but I think this is pretty close. I hope it is useful to you.
I tested it with the value 15bee1:
http://0to255.com/15bee1
And these are my results:
I don't like functions that take strings as input when colors are meant, so I added some conversions.
And this is the code:
// convert the Color type to String notation
private string ColorToString(Color color)
{
    return String.Format("{0:x2}{1:x2}{2:x2}", color.R, color.G, color.B);
}
// convert the string notation to the Color type
private Color StringToColor(string hexcolor)
{
    byte r = Convert.ToByte(hexcolor.Substring(0, 2), 16);
    byte g = Convert.ToByte(hexcolor.Substring(2, 2), 16);
    byte b = Convert.ToByte(hexcolor.Substring(4, 2), 16);
    return new Color { R = r, G = g, B = b };
}
// This function calculates a gradient from <color1> to <color2> in <steps> steps
private IEnumerable<Color> GetColorGradient(Color color1, Color color2, int steps)
{
    int rD = color2.R - color1.R;
    int gD = color2.G - color1.G;
    int bD = color2.B - color1.B;
    for (int i = 1; i < steps; i++)
        yield return new Color
        {
            R = (byte)(color1.R + (rD * i / steps)),
            G = (byte)(color1.G + (gD * i / steps)),
            B = (byte)(color1.B + (bD * i / steps)),
        };
}
// This will append two gradients (white -> color -> black)
private IEnumerable<Color> GetColorCollection(Color color, int steps)
{
    int grayValue = (color.R + color.G + color.B) / 3;
    // The gray value determines the lightness of the color, and thus at which step
    // it should start. I don't want 16 white values when I input a light color.
    int currentStep = (grayValue * steps / 256) - 1;
    yield return Colors.White;
    foreach (Color c in GetColorGradient(Colors.White, color, currentStep))
        yield return c;
    yield return color;
    foreach (Color c in GetColorGradient(color, Colors.Black, steps - currentStep - 1))
        yield return c;
    yield return Colors.Black;
}
// this function converts an IEnumerable<Color> to an IEnumerable<string>
private IEnumerable<string> GetColorCollection(string hexcolor, int steps)
{
    foreach (Color newColor in GetColorCollection(StringToColor(hexcolor), steps))
        yield return ColorToString(newColor);
}
And this is how to call it:
string hexcolor = "15bee1";
IEnumerable<string> results = GetColorCollection(hexcolor, 32);
I didn't test this, but it should work.
First, get a System.Drawing.Color from the hex code using ColorTranslator:
System.Drawing.Color color = ColorTranslator.FromHtml("#FF00FF");
Then, use ControlPaint.Light and ControlPaint.Dark to adjust the brightness of the color:
List<Color> lighterColors = new List<Color>();
List<Color> darkerColors = new List<Color>();
for (int i = 0; i < 10; i++)
{
    lighterColors.Add(ControlPaint.Light(color, i / 10f)); // i / 10 would be integer division (always 0)
    darkerColors.Add(ControlPaint.Dark(color, i / 10f));
}
Finally, convert each color in the list back to hex:
string hexcode = System.Drawing.ColorTranslator.ToHtml(color);
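Putting the pieces together, converting the generated lists back to hex strings might look like this (a small untested sketch; requires System.Linq):
// Convert every generated color back to an HTML-style hex code.
List<string> hexCodes = lighterColors
    .Concat(darkerColors)
    .Select(c => ColorTranslator.ToHtml(c))
    .ToList();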

Connected-component labeling algorithm optimization

I need some help optimising my CCL (connected-component labeling) implementation. I use it to detect black areas in an image. On a 2000x2000 image it takes 11 seconds, which is far too long. I need to reduce the running time as much as possible. I would also be glad to know if there is another algorithm that does the same thing but faster. Here is my code:
//The method returns a dictionary, where the key is the label
//and the list contains all the pixels with that label
public Dictionary<short, LinkedList<Point>> ProcessCCL()
{
    Color backgroundColor = this.image.Palette.Entries[1];
    //Matrix to store pixels' labels
    short[,] labels = new short[this.image.Width, this.image.Height];
    //I particularly don't like how I store the label equality table
    //But I don't know how else I can store it
    //I use LinkedList to add and remove items faster
    Dictionary<short, LinkedList<short>> equalityTable = new Dictionary<short, LinkedList<short>>();
    //Current label
    short currentKey = 1;
    for (int x = 1; x < this.bitmap.Width; x++)
    {
        for (int y = 1; y < this.bitmap.Height; y++)
        {
            if (!GetPixelColor(x, y).Equals(backgroundColor))
            {
                //Minimum of the neighbours' labels
                short label = Math.Min(labels[x - 1, y], labels[x, y - 1]);
                //If there are no neighbours
                if (label == 0)
                {
                    //Create a new unique label
                    labels[x, y] = currentKey;
                    equalityTable.Add(currentKey, new LinkedList<short>());
                    equalityTable[currentKey].AddFirst(currentKey);
                    currentKey++;
                }
                else
                {
                    labels[x, y] = label;
                    short west = labels[x - 1, y], north = labels[x, y - 1];
                    //A little trick:
                    //Because of those "ifs" the lowest label value
                    //will always be the first in the list
                    //but I'm afraid that because of them
                    //the running time also increases
                    if (!equalityTable[label].Contains(west))
                        if (west < equalityTable[label].First.Value)
                            equalityTable[label].AddFirst(west);
                    if (!equalityTable[label].Contains(north))
                        if (north < equalityTable[label].First.Value)
                            equalityTable[label].AddFirst(north);
                }
            }
        }
    }
    //This dictionary will be returned as the result
    //I'm not proud of using a dictionary here either; I guess there
    //is a better way to store the result
    Dictionary<short, LinkedList<Point>> result = new Dictionary<short, LinkedList<Point>>();
    //I define the variable outside the loops in order
    //to reuse the memory address
    short cellValue;
    for (int x = 0; x < this.bitmap.Width; x++)
    {
        for (int y = 0; y < this.bitmap.Height; y++)
        {
            cellValue = labels[x, y];
            //If the pixel is not background
            if (cellValue != 0)
            {
                //Take the minimum value from the label equality table
                short value = equalityTable[cellValue].First.Value;
                //I'd like to get rid of these lines
                if (!result.ContainsKey(value))
                    result.Add(value, new LinkedList<Point>());
                result[value].AddLast(new Point(x, y));
            }
        }
    }
    return result;
}
Thanks in advance!
You could split your picture into multiple sub-pictures, process them in parallel, and then merge the results:
Pass 1: 4 tasks, each processing a 1000x1000 sub-picture
Pass 2: 2 tasks, each merging 2 of the sub-picture results from pass 1
Pass 3: 1 task, merging the results of pass 2
For C#, I recommend the Task Parallel Library (TPL), which makes it easy to define tasks that depend on and wait for each other. The following CodeProject article gives a basic introduction to the TPL: The Basics of Task Parallelism via C#. A rough sketch of the pass structure follows below.
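A rough sketch of those three passes with the TPL; LabelResult, LabelRegion and MergeLabels are hypothetical placeholders for your own CCL routines, not real APIs:
using System.Threading.Tasks;

// Hypothetical result type and helpers; substitute your CCL code.
class LabelResult { /* labels + equivalences for one region */ }

static class ParallelCcl
{
    static LabelResult LabelRegion(int quadrant) { /* run CCL on one quadrant */ return new LabelResult(); }
    static LabelResult MergeLabels(LabelResult a, LabelResult b) { /* reconcile labels along the shared border */ return a; }

    public static LabelResult Run()
    {
        // Pass 1: four tasks, one 1000x1000 quadrant each.
        var pass1 = new[]
        {
            Task.Run(() => LabelRegion(0)),
            Task.Run(() => LabelRegion(1)),
            Task.Run(() => LabelRegion(2)),
            Task.Run(() => LabelRegion(3)),
        };
        // Pass 2: two tasks, each merging two quadrants as soon as both finish.
        var left  = Task.WhenAll(pass1[0], pass1[1]).ContinueWith(t => MergeLabels(t.Result[0], t.Result[1]));
        var right = Task.WhenAll(pass1[2], pass1[3]).ContinueWith(t => MergeLabels(t.Result[0], t.Result[1]));
        // Pass 3: final merge.
        return MergeLabels(left.Result, right.Result);
    }
}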
I would process one scan line at a time, keeping track of the beginning and end of each run of black pixels.
Then, on each scan line, I would compare the runs to those on the previous line: if a run on the current line does not overlap any run on the previous line, it represents a new blob; if a run on the previous line overlaps a run on the current line, the current run gets the same blob label; and so on. You get the idea (a minimal sketch follows below).
I would try not to use dictionaries and such.
In my experience (from randomly halting the program), those structures may make programming incrementally easier, but they can exact a serious performance cost due to all the new-ing.
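A minimal sketch of that run-based scheme, using a tiny union-find instead of a dictionary for the label equivalences. The bool[,] foreground mask and all helper names are assumptions, and nothing here is tuned:
using System;
using System.Collections.Generic;

static class RunLabeling
{
    struct Run { public int Start, End, Label; } // [Start, End) on one scan line

    // Union-find lookup with path halving.
    static int Find(List<int> parent, int x)
    {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    }

    public static void Label(bool[,] fg, int width, int height)
    {
        var parent = new List<int> { 0 };   // union-find over labels; index 0 unused
        var prev = new List<Run>();
        for (int y = 0; y < height; y++)
        {
            var cur = new List<Run>();
            for (int x = 0; x < width; )
            {
                if (!fg[x, y]) { x++; continue; }
                int start = x;
                while (x < width && fg[x, y]) x++; // extend the run to its end
                int label = 0;
                foreach (var p in prev)
                {
                    if (p.End > start && p.Start < x) // overlaps a run above: same blob
                    {
                        int root = Find(parent, p.Label);
                        if (label == 0) label = root;
                        else if (root != label)
                        {
                            // two labels meet: union them, keep the smaller
                            int lo = Math.Min(root, label), hi = Math.Max(root, label);
                            parent[hi] = lo;
                            label = lo;
                        }
                    }
                }
                if (label == 0) { label = parent.Count; parent.Add(label); } // new blob
                cur.Add(new Run { Start = start, End = x, Label = label });
            }
            prev = cur;
        }
        // Storing each line's runs and resolving them with Find(parent, run.Label)
        // in a second pass yields the final blob id for every pixel.
    }
}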
The problem is GetPixelColor(x, y): accessing image data that way takes a very long time.
Get/SetPixel are terribly slow in C#, so if you need to touch many pixels you should use Bitmap.LockBits instead.
private void ProcessUsingLockbits(Bitmap ProcessedBitmap)
{
    BitmapData bitmapData = ProcessedBitmap.LockBits(new Rectangle(0, 0, ProcessedBitmap.Width, ProcessedBitmap.Height), ImageLockMode.ReadWrite, ProcessedBitmap.PixelFormat);
    int BytesPerPixel = System.Drawing.Bitmap.GetPixelFormatSize(ProcessedBitmap.PixelFormat) / 8;
    int ByteCount = bitmapData.Stride * ProcessedBitmap.Height;
    byte[] Pixels = new byte[ByteCount];
    IntPtr PtrFirstPixel = bitmapData.Scan0;
    Marshal.Copy(PtrFirstPixel, Pixels, 0, Pixels.Length);
    int HeightInPixels = bitmapData.Height;
    int WidthInBytes = bitmapData.Width * BytesPerPixel;
    for (int y = 0; y < HeightInPixels; y++)
    {
        int CurrentLine = y * bitmapData.Stride;
        for (int x = 0; x < WidthInBytes; x = x + BytesPerPixel)
        {
            int OldBlue = Pixels[CurrentLine + x];
            int OldGreen = Pixels[CurrentLine + x + 1];
            int OldRed = Pixels[CurrentLine + x + 2];
            // Transform blue and clip to 255:
            Pixels[CurrentLine + x] = (byte)((OldBlue + BlueMagnitudeToAdd > 255) ? 255 : OldBlue + BlueMagnitudeToAdd);
            // Transform green and clip to 255:
            Pixels[CurrentLine + x + 1] = (byte)((OldGreen + GreenMagnitudeToAdd > 255) ? 255 : OldGreen + GreenMagnitudeToAdd);
            // Transform red and clip to 255:
            Pixels[CurrentLine + x + 2] = (byte)((OldRed + RedMagnitudeToAdd > 255) ? 255 : OldRed + RedMagnitudeToAdd);
        }
    }
    // Copy modified bytes back:
    Marshal.Copy(Pixels, 0, PtrFirstPixel, Pixels.Length);
    ProcessedBitmap.UnlockBits(bitmapData);
}
Here is the basic code to access pixel data.
I also made a function that transforms the image into a 2D matrix, which is easier to manipulate (but a little slower):
private void bitmap_to_matrix()
{
    unsafe
    {
        bitmapData = ProcessedBitmap.LockBits(new Rectangle(0, 0, ProcessedBitmap.Width, ProcessedBitmap.Height), ImageLockMode.ReadWrite, ProcessedBitmap.PixelFormat);
        int BytesPerPixel = System.Drawing.Bitmap.GetPixelFormatSize(ProcessedBitmap.PixelFormat) / 8;
        int HeightInPixels = ProcessedBitmap.Height;
        int WidthInPixels = ProcessedBitmap.Width;
        int WidthInBytes = ProcessedBitmap.Width * BytesPerPixel;
        byte* PtrFirstPixel = (byte*)bitmapData.Scan0;
        Parallel.For(0, HeightInPixels, y =>
        {
            byte* CurrentLine = PtrFirstPixel + (y * bitmapData.Stride);
            for (int x = 0; x < WidthInBytes; x = x + BytesPerPixel)
            {
                // Conversion to grey level
                double rst = CurrentLine[x] * 0.0721 + CurrentLine[x + 1] * 0.7154 + CurrentLine[x + 2] * 0.2125;
                // Fill the grey matrix
                TG[x / BytesPerPixel, y] = (int)rst;
            }
        });
    }
}
The code comes from the article "High performance SystemDrawingBitmap".
Thanks to the author for the really good job!
Hope this helps!
