I am currently writing a C# application to demonstrate the speedup of parallel computing over single-threaded code. My test case is a median blur of an image. However, putting more threads to work slows the application down significantly (60 seconds single-threaded vs. 75 seconds multithreaded). Given my current approach, I don't know how I could improve the process for multithreading. Sorry in advance for the long code in this post.
My current approach:
First, I calculate how many pixels each thread needs to process so the work is evened out. The DateTime bookkeeping is there to measure how much time is spent single-threaded and how much is spent multithreaded:
public void blurImage(int cores)
{
_startTotal = DateTime.Now;
int numberOfPixels = _originalImage.Width * _originalImage.Height;
if (cores>=numberOfPixels)
{
for (int i = 0; i < numberOfPixels; i++)
{
startThread(i, i + 1);
}
}
else
{
int pixelsPerThread = numberOfPixels / cores;
int threshold = numberOfPixels - (pixelsPerThread * cores);
startThread(0, pixelsPerThread + threshold);
for (int i = 1; i < cores; i++)
{
int startPixel = i * pixelsPerThread + threshold;
startThread(startPixel, startPixel + pixelsPerThread);
}
}
_SeqTime = DateTime.Now.Subtract(_startTotal);
}
The startThread method starts a thread and saves the result into a special class object so that everything can be merged back into one image later. I pass a copy of the input image to each thread.
private void startThread(int startPixel, int numberOfPixels)
{
BlurOperation operation = new BlurOperation(blurPixels);
_operations.Add(operation);
BlurResult result = new BlurResult();
operation.BeginInvoke((Bitmap)_processedImage.Clone(), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
}
Every thread blurs its set of pixels and saves the result into a new list of colors. The result is saved into the result object together with the start pixel and the current operation, so the program knows when all threads are finished:
private void blurPixels(Bitmap bitmap, int startPixel, int endPixel, int window, BlurResult result, BlurOperation operation)
{
List<Color> colors = new List<Color>();
for (int i = startPixel; i < endPixel; i++)
{
int x = i % bitmap.Width;
int y = i / bitmap.Width;
colors.Add(PixelBlurrer.ShadePixel(x, y, bitmap, window));
}
result._pixels = colors;
result._startPixel = startPixel;
result._operation = operation;
}
The PixelBlurrer calculates the median of each color channel and returns it:
public static Color ShadePixel(int x, int y, Bitmap image, int window)
{
List<byte> red = new List<byte>();
List<byte> green = new List<byte>();
List<byte> blue = new List<byte>();
int xBegin = Math.Max(x - window, 0);
int yBegin = Math.Max(y - window, 0);
int xEnd = Math.Min(x + window, image.Width - 1);
int yEnd = Math.Min(y + window, image.Height - 1);
for (int tx = xBegin; tx <= xEnd; tx++)
{
for (int ty = yBegin; ty <= yEnd; ty++)
{
Color c = image.GetPixel(tx, ty);
red.Add(c.R);
green.Add(c.G);
blue.Add(c.B);
}
}
red.Sort();
green.Sort();
blue.Sort();
Color output = Color.FromArgb(red[red.Count / 2], green[green.Count / 2], blue[blue.Count / 2]);
return output;
}
On the callback, we return to the GUI thread and merge all pixels into the resulting image. Lastly, an event is raised telling my form the process is done:
private void finish(IAsyncResult iar)
{
Application.Current.Dispatcher.BeginInvoke(new AsyncCallback(update), iar);
}
private void update(IAsyncResult iar)
{
BlurResult result = (BlurResult)iar.AsyncState;
updateImage(result._pixels, result._startPixel, result._operation);
}
private void updateImage(List<Color> colors, int startPixel, BlurOperation operation)
{
DateTime updateTime = DateTime.Now;
_operations.Remove(operation);
int end = startPixel + colors.Count;
for (int i = startPixel; i < end; i++)
{
int x = i % _processedImage.Width;
int y = i / _processedImage.Width;
_processedImage.SetPixel(x, y, colors[i - startPixel]);
}
if (_operations.Count==0)
{
done(this, null);
}
_SeqTime += DateTime.Now.Subtract(updateTime);
}
Any thoughts? I tried using Parallel.For instead of delegates, but that made it worse. Is there a way to speed up median blur by multithreading, or is this a failed case?
After some thinking, I figured out that my logic is solid, but I wasn't sending a deep copy to each thread. After changing this line in startThread:
operation.BeginInvoke((Bitmap)_processedImage.Clone(), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
to this:
operation.BeginInvoke(new Bitmap(_processedImage), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
I can see a multithreaded speedup.
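For reference, here is a rough sketch (not the original code) of an approach that often scales better: copy the pixels out once with LockBits, let Parallel.For split the rows across cores, and take the per-channel median from a 256-bin histogram instead of sorting lists. It assumes the image is converted to 32bpp ARGB; the names MedianBlurParallel and Median are hypothetical, and window plays the role of _windowSize (the radius).

// Requires: using System; using System.Drawing; using System.Drawing.Imaging;
//           using System.Runtime.InteropServices; using System.Threading.Tasks;
static Bitmap MedianBlurParallel(Bitmap source, int window)
{
    int w = source.Width, h = source.Height;
    var rect = new Rectangle(0, 0, w, h);
    byte[] input;
    int stride;
    using (Bitmap src32 = source.Clone(rect, PixelFormat.Format32bppArgb))
    {
        BitmapData data = src32.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        stride = data.Stride;
        input = new byte[stride * h];
        Marshal.Copy(data.Scan0, input, 0, input.Length);
        src32.UnlockBits(data);
    }
    byte[] output = new byte[input.Length];
    Parallel.For(0, h, y =>
    {
        int[] histB = new int[256], histG = new int[256], histR = new int[256];
        for (int x = 0; x < w; x++)
        {
            Array.Clear(histB, 0, 256); Array.Clear(histG, 0, 256); Array.Clear(histR, 0, 256);
            int count = 0;
            for (int ty = Math.Max(y - window, 0); ty <= Math.Min(y + window, h - 1); ty++)
                for (int tx = Math.Max(x - window, 0); tx <= Math.Min(x + window, w - 1); tx++)
                {
                    int p = ty * stride + tx * 4;              // BGRA byte order
                    histB[input[p]]++; histG[input[p + 1]]++; histR[input[p + 2]]++;
                    count++;
                }
            int o = y * stride + x * 4;
            output[o]     = Median(histB, count);
            output[o + 1] = Median(histG, count);
            output[o + 2] = Median(histR, count);
            output[o + 3] = input[o + 3];                      // keep the alpha channel
        }
    });
    Bitmap result = new Bitmap(w, h, PixelFormat.Format32bppArgb);
    BitmapData outData = result.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    for (int row = 0; row < h; row++)                          // copy row by row in case strides differ
        Marshal.Copy(output, row * stride, IntPtr.Add(outData.Scan0, row * outData.Stride), w * 4);
    result.UnlockBits(outData);
    return result;
}

// Walks the histogram until more than half the samples have been seen;
// this matches the sorted-list index count / 2 used in ShadePixel.
static byte Median(int[] hist, int count)
{
    int seen = 0;
    for (int v = 0; v < 256; v++)
    {
        seen += hist[v];
        if (seen > count / 2) return (byte)v;
    }
    return 255;
}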
Related
I have to create a utility that searches through 40 to 60 GiB of text files as quickly as possible.
Each file has around 50 MB of data consisting of log lines (about 630,000 lines per file).
A NOSQL document database is unfortunately no option...
As of now I am using an Aho-Corasick implementation for the search, which I took from Tomas Petricek's blog. It works very well.
I process the files in Tasks. Each file is loaded into memory by simply calling File.ReadAllLines(path). The lines are then fed into the Aho-Corasick search one by one, so each file causes around 600,000 calls to the algorithm (I need the line number in my results).
This takes a lot of time and requires a lot of memory and CPU.
I have very little expertise in this field as I usually work in image processing.
Can you guys recommend algorithms and approaches which could speed up the processing?
Below is a more detailed view of the Task creation and file loading, which is pretty standard. For more information on the Aho-Corasick implementation, please see the blog mentioned above.
private KeyValuePair<string, StringSearchResult[]> FindInternal(
IStringSearchAlgorithm algo,
string file)
{
List<StringSearchResult> result = new List<StringSearchResult>();
string[] lines = File.ReadAllLines(file);
for (int i = 0; i < lines.Length; i++)
{
var results = algo.FindAll(lines[i]);
for (int j = 0; j < results.Length; j++)
{
results[j].Row = i;
}
result.AddRange(results);
}
return new KeyValuePair<string, StringSearchResult[]>(
file, result.ToArray());
}
public Dictionary<string, StringSearchResult[]> Find(
params string[] search)
{
IStringSearchAlgorithm algo = new StringSearch();
algo.Keywords = search;
Task<KeyValuePair<string, StringSearchResult[]>>[] findTasks
= new Task<KeyValuePair<string, StringSearchResult[]>>[_files.Count];
Parallel.For(0, _files.Count, i => {
findTasks[i] = Task.Factory.StartNew(
() => FindInternal(algo, _files[i])
);
});
Task.WaitAll(findTasks);
return findTasks.Select(t => t.Result)
.ToDictionary(x => x.Key, x => x.Value);
}
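(A side note on the Find method above: the Parallel.For only parallelizes the scheduling, because Task.Factory.StartNew returns immediately; the searching itself already runs on the thread pool. The same per-file fan-out can be written more directly, for example with PLINQ. Here is a minimal sketch that assumes the StringSearch trie is built once when Keywords is set and is only read during FindAll, so one instance can be shared across threads; that is worth verifying for the implementation you use.)

// Requires: using System.Linq;
public Dictionary<string, StringSearchResult[]> Find(params string[] search)
{
    IStringSearchAlgorithm algo = new StringSearch();
    algo.Keywords = search;
    // One parallel query over the files; PLINQ decides how many threads to use.
    return _files
        .AsParallel()
        .Select(file => FindInternal(algo, file))
        .ToDictionary(x => x.Key, x => x.Value);
}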
EDIT
See the Initial Answer section below for the original answer.
I further optimized my code by doing the following:
Added paging to prevent memory overflow / crashes due to the large amount of result data.
I offload the search results into local files as soon as they exceed a certain buffer size (64 KB in my case).
Offloading the results required me to convert my SearchData struct to binary and back.
Splitting the array of files to be processed and running the chunks in Tasks greatly increased performance (from 35 sec to 9 sec when processing about 25 GiB of search data).
Splitting / scaling the file array
The code below computes a scaled/normalized value from T_min and T_max.
This value can then be used to determine the size of each array of file paths.
private int ScalePartition(int T_min, int T_max)
{
// Scale m to range.
int m = T_max / 2;
int t_min = 4;
int t_max = Math.Max(T_max / 16, T_min);
m = ((T_min - m) / (T_max - T_min)) * (t_max - t_min) + t_max;
return m;
}
This code shows how the scaling and splitting are implemented.
// Get size of file array portion.
int scale = ScalePartition(1, _files.Count);
// Iterator.
int n = 0;
// List containing tasks.
List<Task<SearchData[]>> searchTasks = new List<Task<SearchData[]>>();
// Loop through files.
while (n < _files.Count) {
// Local instance of n.
// You will get an AggregateException if you use n
// as n changes during runtime.
int num = n;
// The amount of items to take.
// This needs to be calculated as there might be an
// odd number of elements in the file array.
int cnt = n + scale > _files.Count ? _files.Count - n : scale;
// Run the Find(int, int, Regex[]) method and add as task.
searchTasks.Add(Task.Run(() => Find(num, cnt, regexes)));
// Increment iterator by the amount of files stored in scale.
n += scale;
}
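For what it's worth, the same splitting into chunks of scale files, including the smaller last chunk, can be expressed with the BCL's range partitioner. A sketch, keeping the surrounding variables (searchTasks, regexes, scale) and the Find(int, int, Regex[]) method referenced in the comments above:

// Requires: using System.Collections.Concurrent;
// Partitioner.Create splits [0, _files.Count) into ranges of at most
// 'scale' items, so the odd-sized tail chunk is handled automatically.
foreach (Tuple<int, int> range in Partitioner.Create(0, _files.Count, scale).GetDynamicPartitions())
{
    int num = range.Item1;                 // first file index in this chunk
    int cnt = range.Item2 - range.Item1;   // number of files in this chunk
    searchTasks.Add(Task.Run(() => Find(num, cnt, regexes)));
}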
Initial Answer
I had the best results so far after switching to MemoryMappedFile and moving from the Aho-Corasick search back to Regex (a demand has been made that pattern matching is a must-have).
There are still parts that can be optimized or changed and I'm sure this is not the fastest or best solution, but for now it's alright.
Here is the code which returns the results in 30 seconds for 25 GiB worth of data:
// GNU coreutil wc defined buffer size.
// Had best performance with this buffer size.
//
// Definition in wc.c:
// -------------------
// /* Size of atomic reads. */
// #define BUFFER_SIZE (16 * 1024)
//
private const int BUFFER_SIZE = 16 * 1024;
private KeyValuePair<string, SearchData[]> FindInternal(Regex[] rgx, string file)
{
// Buffer for data segmentation.
byte[] buffer = new byte[BUFFER_SIZE];
// Get size of file.
FileInfo fInfo = new FileInfo(file);
long fSize = fInfo.Length;
fInfo = null;
// List of results.
List<SearchData> results = new List<SearchData>();
// Create MemoryMappedFile.
string name = "mmf_" + Path.GetFileNameWithoutExtension(file);
using (var mmf = MemoryMappedFile.CreateFromFile(
file, FileMode.Open, name))
{
// Create read-only in-memory access to file data.
using (var accessor = mmf.CreateViewStream(
0, fSize,
MemoryMappedFileAccess.Read))
{
// Store current position.
int pos = (int)accessor.Position;
// Check if file size is less than the
// default buffer size.
int cnt = (int)(fSize - BUFFER_SIZE > 0
? BUFFER_SIZE
: fSize);
// Iterate through file until end of file is reached.
while (accessor.Position < fSize)
{
// Write data to buffer.
accessor.Read(buffer, 0, cnt);
// Update position.
pos = (int)accessor.Position;
// Update next buffer size.
cnt = (int)(fSize - pos >= BUFFER_SIZE
? BUFFER_SIZE
: fSize - pos);
// Convert buffer data to string for Regex search.
string s = Encoding.UTF8.GetString(buffer);
// Run regex against extracted data.
foreach (Regex r in rgx) {
// Get matches.
MatchCollection matches = r.Matches(s);
// Create SearchData struct to reduce memory
// impact and only keep relevant data.
foreach (Match m in matches) {
SearchData sd = new SearchData();
// The actual matched string.
sd.Match = m.Value;
// The index in the file.
sd.Index = m.Index + pos;
// Index to find beginning of line.
int nFirst = m.Index;
// Index to find end of line.
int nLast = m.Index;
// Go back in line until the end of the
// preceding line has been found.
while (s[nFirst] != '\n' && nFirst > 0) {
nFirst--;
}
// Append length of \r\n (new line).
// Change this to 1 if you work on Unix system.
nFirst+=2;
// Go forth in line until the end of the
// current line has been found.
while (s[nLast] != '\n' && nLast < s.Length-1) {
nLast++;
}
// Remove length of \r\n (new line).
// Change this to 1 if you work on Unix system.
nLast-=2;
// Store whole line in SearchData struct.
sd.Line = s.Substring(nFirst, nLast - nFirst);
// Add result.
results.Add(sd);
}
}
}
}
}
return new KeyValuePair<string, SearchData[]>(file, results.ToArray());
}
public List<KeyValuePair<string, SearchData[]>> Find(params string[] search)
{
var results = new List<KeyValuePair<string, SearchData[]>>();
// Prepare regex objects.
Regex[] regexes = new Regex[search.Length];
for (int i=0; i<regexes.Length; i++) {
regexes[i] = new Regex(search[i], RegexOptions.Compiled);
}
// Get all search results.
// Creating the Regex once and passing it
// to the sub-routine is best as the regex
// engine adds a lot of overhead.
foreach (var file in _files) {
var data = FindInternal(regexes, file);
results.Add(data);
}
return results;
}
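One thing this version drops compared to the original requirement is the line number (SearchData here stores the byte index instead of a row). If the row is still needed, one option is to count newlines up to the match position in each buffer and keep a running per-file total across buffers. A small sketch; the helper name is mine, not from the post:

// Counts how many '\n' characters precede matchIndex in the buffer text.
// Added to a running per-file newline total, this recovers the line number
// of a match (matches that straddle two buffers are still not handled).
private static int NewlinesBefore(string s, int matchIndex)
{
    int count = 0;
    for (int i = 0; i < matchIndex; i++)
    {
        if (s[i] == '\n') count++;
    }
    return count;
}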
I had a silly idea yesterday where I thought it might work out to convert the file data to a bitmap and look for the input within the pixels, since pixel checking is quite fast.
Just for giggles... here is the non-optimized test code for that idea:
public struct SearchData
{
public string Line;
public string Search;
public int Row;
public SearchData(string l, string s, int r) {
Line = l;
Search = s;
Row = r;
}
}
internal static class FileToImage
{
public static unsafe SearchData[] FindText(string search, Bitmap bmp)
{
byte[] buffer = Encoding.ASCII.GetBytes(search);
BitmapData data = bmp.LockBits(
new Rectangle(0, 0, bmp.Width, bmp.Height),
ImageLockMode.ReadOnly, bmp.PixelFormat);
List<SearchData> results = new List<SearchData>();
int bpp = Bitmap.GetPixelFormatSize(bmp.PixelFormat) / 8;
byte* ptFirst = (byte*)data.Scan0;
byte firstHit = buffer[0];
bool isFound = false;
for (int y=0; y<data.Height; y++) {
byte* ptStride = ptFirst + (y * data.Stride);
for (int x=0; x<data.Stride; x++) {
if (firstHit == ptStride[x]) {
byte[] temp = new byte[buffer.Length];
if (buffer.Length < data.Stride-x) {
int ret = 0;
for (int n=0, xx=x; n<buffer.Length; n++, xx++) {
if (ptStride[xx] != buffer[n]) {
break;
}
ret++;
}
if (ret == buffer.Length) {
int lineLength = 0;
for (int n = 0; n<data.Stride; n+=bpp) {
if (ptStride[n+2] == 255 &&
ptStride[n+1] == 255 &&
ptStride[n+0] == 255)
{
lineLength=n;
}
}
SearchData sd = new SearchData();
byte[] lineBytes = new byte[lineLength];
Marshal.Copy((IntPtr)ptStride, lineBytes, 0, lineLength);
sd.Search = search;
sd.Line = Encoding.ASCII.GetString(lineBytes);
sd.Row = y;
results.Add(sd);
}
}
}
}
}
bmp.UnlockBits(data);
return results.ToArray();
}
private static unsafe Bitmap GetBitmapInternal(string[] lines, int startIndex, Bitmap bmp)
{
int bpp = Bitmap.GetPixelFormatSize(bmp.PixelFormat) / 8;
BitmapData data = bmp.LockBits(
new Rectangle(0, 0, bmp.Width, bmp.Height),
ImageLockMode.ReadWrite,
bmp.PixelFormat);
int index = startIndex;
byte* ptFirst = (byte*)data.Scan0;
int maxHeight = bmp.Height;
if (lines.Length - startIndex < maxHeight) {
maxHeight = lines.Length - startIndex -1;
}
for (int y = 0; y < maxHeight; y++) {
byte* ptStride = ptFirst + (y * data.Stride);
index++;
int max = lines[index].Length;
max += (max % bpp);
lines[index] += new string('\0', max % bpp);
max = lines[index].Length;
for (int x=0; x+2<max; x+=bpp) {
ptStride[x+0] = (byte)lines[index][x+0];
ptStride[x+1] = (byte)lines[index][x+1];
ptStride[x+2] = (byte)lines[index][x+2];
}
ptStride[max+2] = 255;
ptStride[max+1] = 255;
ptStride[max+0] = 255;
for (int x = max + bpp; x < data.Stride; x += bpp) {
ptStride[x+2] = 0;
ptStride[x+1] = 0;
ptStride[x+0] = 0;
}
}
bmp.UnlockBits(data);
return bmp;
}
public static unsafe Bitmap[] GetBitmap(string filePath)
{
int bpp = Bitmap.GetPixelFormatSize(PixelFormat.Format24bppRgb) / 8;
var lines = System.IO.File.ReadAllLines(filePath);
int y = 0x800; //lines.Length / 0x800;
int x = lines.Max(l => l.Length) / bpp;
int cnt = (int)Math.Ceiling((float)lines.Length / (float)y);
Bitmap[] results = new Bitmap[cnt];
for (int i = 0; i < results.Length; i++) {
results[i] = new Bitmap(x, y, PixelFormat.Format24bppRgb);
results[i] = GetBitmapInternal(lines, i * 0x800, results[i]);
}
return results;
}
}
You can split the file into partitions and regex-search each partition in parallel, then join the results. There are some sharp edges in the details, like handling values that span two partitions. Gigantor is a C# library I have created that does this very thing. Feel free to try it or have a look at the source code.
How can I make the for loop of this function use the GPU with OpenCL?
public static double[] Calculate(double[] num, int period)
{
var final = new double[num.Length];
double sum = num[0];
double coeff = 2.0 / (1.0 + period);
for (int i = 0; i < num.Length; i++)
{
sum += coeff * (num[i] - sum);
final[i] = sum;
}
return final;
}
Your problem as written does not fit well with something that would work on a GPU. You cannot parallelize (in a way that improves performance) the operation on a single array because the value of the nth element depends on elements 1 to n. However, you can utilize the GPU to process multiple arrays, where each GPU core operates on a separate array.
The full code for the solution is at the end of the answer. The test, which calculates on 10,000 arrays each of 10,000 elements, generates the following output (on a GTX 1080M and an i7-7700K with 32 GB RAM):
Task Generating Data: 1096.4583ms
Task CPU Single Thread: 596.2624ms
Task CPU Parallel: 179.1717ms
GPU CPU->GPU: 89ms
GPU Execute: 86ms
GPU GPU->CPU: 29ms
Task Running GPU: 921.4781ms
Finished
In this test, we measure the speed at which we can generate results into a managed C# array using the CPU with one thread, the CPU with all threads, and finally the GPU using all cores. We validate that the results from each test are identical, using the function AreTheSame.
The fastest time is processing the arrays on the CPU using all threads (Task CPU Parallel: 179ms).
The GPU is actually the slowest (Task Running GPU: 922ms), but this is because of the time taken to reformat the C# arrays in a way that they can be transferred onto the GPU.
If this bottleneck were removed (which is quite possible, depending on your use case), the GPU could potentially be the fastest. If the data were already formatted in a manner that could be transferred immediately onto the GPU, the total processing time for the GPU would be 204ms (CPU->GPU: 89ms + Execute: 86ms + GPU->CPU: 29ms = 204ms). This is still slower than the parallel CPU option, but on a different sort of data set, it might be faster.
To get the data back from the GPU (the most important part of actually using the GPU), we use the function ComputeCommandQueue.Read. This transfers the altered array on the GPU back to the CPU.
To run the following code, reference the Cloo NuGet package (I used 0.9.1) and make sure to compile for x64 (you will need the memory). You may need to update your graphics card driver if it fails to find an OpenCL device.
class Program
{
static string CalculateKernel
{
get
{
return #"
kernel void Calc(global int* offsets, global int* lengths, global double* doubles, double periodFactor)
{
int id = get_global_id(0);
int start = offsets[id];
int length = lengths[id];
int end = start + length;
double sum = doubles[start];
for(int i = start; i < end; i++)
{
sum = sum + periodFactor * ( doubles[i] - sum );
doubles[i] = sum;
}
}";
}
}
public static double[] Calculate(double[] num, int period)
{
var final = new double[num.Length];
double sum = num[0];
double coeff = 2.0 / (1.0 + period);
for (int i = 0; i < num.Length; i++)
{
sum += coeff * (num[i] - sum);
final[i] = sum;
}
return final;
}
static void Main(string[] args)
{
int maxElements = 10000;
int numArrays = 10000;
int computeCores = 2048;
double[][] sets = new double[numArrays][];
using (Timer("Generating Data"))
{
Random elementRand = new Random(1);
for (int i = 0; i < numArrays; i++)
{
sets[i] = GetRandomDoubles(elementRand.Next((int)(maxElements * 0.9), maxElements), randomSeed: i);
}
}
int period = 14;
double[][] singleResults;
using (Timer("CPU Single Thread"))
{
singleResults = CalculateCPU(sets, period);
}
double[][] parallelResults;
using (Timer("CPU Parallel"))
{
parallelResults = CalculateCPUParallel(sets, period);
}
if (!AreTheSame(singleResults, parallelResults)) throw new Exception();
double[][] gpuResults;
using (Timer("Running GPU"))
{
gpuResults = CalculateGPU(computeCores, sets, period);
}
if (!AreTheSame(singleResults, gpuResults)) throw new Exception();
Console.WriteLine("Finished");
Console.ReadKey();
}
public static bool AreTheSame(double[][] a1, double[][] a2)
{
if (a1.Length != a2.Length) return false;
for (int i = 0; i < a1.Length; i++)
{
var ar1 = a1[i];
var ar2 = a2[i];
if (ar1.Length != ar2.Length) return false;
for (int j = 0; j < ar1.Length; j++)
if (Math.Abs(ar1[j] - ar2[j]) > 0.0000001) return false;
}
return true;
}
public static double[][] CalculateGPU(int partitionSize, double[][] sets, int period)
{
ComputeContextPropertyList cpl = new ComputeContextPropertyList(ComputePlatform.Platforms[0]);
ComputeContext context = new ComputeContext(ComputeDeviceTypes.Gpu, cpl, null, IntPtr.Zero);
ComputeProgram program = new ComputeProgram(context, new string[] { CalculateKernel });
program.Build(null, null, null, IntPtr.Zero);
ComputeCommandQueue commands = new ComputeCommandQueue(context, context.Devices[0], ComputeCommandQueueFlags.None);
ComputeEventList events = new ComputeEventList();
ComputeKernel kernel = program.CreateKernel("Calc");
double[][] results = new double[sets.Length][];
double periodFactor = 2d / (1d + period);
Stopwatch sendStopWatch = new Stopwatch();
Stopwatch executeStopWatch = new Stopwatch();
Stopwatch recieveStopWatch = new Stopwatch();
int offset = 0;
while (true)
{
int first = offset;
int last = Math.Min(offset + partitionSize, sets.Length);
int length = last - first;
var merged = Merge(sets, first, length);
sendStopWatch.Start();
ComputeBuffer<int> offsetBuffer = new ComputeBuffer<int>(
context,
ComputeMemoryFlags.ReadWrite | ComputeMemoryFlags.UseHostPointer,
merged.Offsets);
ComputeBuffer<int> lengthsBuffer = new ComputeBuffer<int>(
context,
ComputeMemoryFlags.ReadWrite | ComputeMemoryFlags.UseHostPointer,
merged.Lengths);
ComputeBuffer<double> doublesBuffer = new ComputeBuffer<double>(
context,
ComputeMemoryFlags.ReadWrite | ComputeMemoryFlags.UseHostPointer,
merged.Doubles);
kernel.SetMemoryArgument(0, offsetBuffer);
kernel.SetMemoryArgument(1, lengthsBuffer);
kernel.SetMemoryArgument(2, doublesBuffer);
kernel.SetValueArgument(3, periodFactor);
sendStopWatch.Stop();
executeStopWatch.Start();
commands.Execute(kernel, null, new long[] { merged.Lengths.Length }, null, events);
executeStopWatch.Stop();
using (var pin = Pinned(merged.Doubles))
{
recieveStopWatch.Start();
commands.Read(doublesBuffer, false, 0, merged.Doubles.Length, pin.Address, events);
commands.Finish();
recieveStopWatch.Stop();
}
for (int i = 0; i < merged.Lengths.Length; i++)
{
int len = merged.Lengths[i];
int off = merged.Offsets[i];
var res = new double[len];
Array.Copy(merged.Doubles,off,res,0,len);
results[first + i] = res;
}
offset += partitionSize;
if (offset >= sets.Length) break;
}
Console.WriteLine("GPU CPU->GPU: " + recieveStopWatch.ElapsedMilliseconds + "ms");
Console.WriteLine("GPU Execute: " + executeStopWatch.ElapsedMilliseconds + "ms");
Console.WriteLine("GPU GPU->CPU: " + sendStopWatch.ElapsedMilliseconds + "ms");
return results;
}
public static PinnedHandle Pinned(object obj) => new PinnedHandle(obj);
public class PinnedHandle : IDisposable
{
public IntPtr Address => handle.AddrOfPinnedObject();
private GCHandle handle;
public PinnedHandle(object val)
{
handle = GCHandle.Alloc(val, GCHandleType.Pinned);
}
public void Dispose()
{
handle.Free();
}
}
public class MergedResults
{
public double[] Doubles { get; set; }
public int[] Lengths { get; set; }
public int[] Offsets { get; set; }
}
public static MergedResults Merge(double[][] sets, int offset, int length)
{
List<int> lengths = new List<int>(length);
List<int> offsets = new List<int>(length);
for (int i = 0; i < length; i++)
{
var arr = sets[i + offset];
lengths.Add(arr.Length);
}
var totalLength = lengths.Sum();
double[] doubles = new double[totalLength];
int dataOffset = 0;
for (int i = 0; i < length; i++)
{
var arr = sets[i + offset];
Array.Copy(arr, 0, doubles, dataOffset, arr.Length);
offsets.Add(dataOffset);
dataOffset += arr.Length;
}
return new MergedResults()
{
Doubles = doubles,
Lengths = lengths.ToArray(),
Offsets = offsets.ToArray(),
};
}
public static IDisposable Timer(string name)
{
return new SWTimer(name);
}
public class SWTimer : IDisposable
{
private Stopwatch _sw;
private string _name;
public SWTimer(string name)
{
_name = name;
_sw = Stopwatch.StartNew();
}
public void Dispose()
{
_sw.Stop();
Console.WriteLine("Task " + _name + ": " + _sw.Elapsed.TotalMilliseconds + "ms");
}
}
public static double[][] CalculateCPU(double[][] arrays, int period)
{
double[][] results = new double[arrays.Length][];
for (var index = 0; index < arrays.Length; index++)
{
var arr = arrays[index];
results[index] = Calculate(arr, period);
}
return results;
}
public static double[][] CalculateCPUParallel(double[][] arrays, int period)
{
double[][] results = new double[arrays.Length][];
Parallel.For(0, arrays.Length, i =>
{
var arr = arrays[i];
results[i] = Calculate(arr, period);
});
return results;
}
static double[] GetRandomDoubles(int num, int randomSeed)
{
Random r = new Random(randomSeed);
var res = new double[num];
for (int i = 0; i < num; i++)
res[i] = r.NextDouble() * 0.9 + 0.05;
return res;
}
}
As commenter Cory stated, refer to this link for setup:
How to use your GPU in .NET
Here is how you would use this project:
Add the Nuget Package Cloo
Add reference to OpenCLlib.dll
Download OpenCLLib.zip
Add using OpenCL
static void Main(string[] args)
{
int[] Primes = { 1,2,3,4,5,6,7 };
EasyCL cl = new EasyCL();
cl.Accelerator = AcceleratorDevice.GPU;
cl.LoadKernel(IsPrime);
cl.Invoke("GetIfPrime", 0, Primes.Length, Primes, 1.0);
}
static string IsPrime
{
get
{
return #"
kernel void GetIfPrime(global int* num, int period)
{
int index = get_global_id(0);
int sum = (2.0 / (1.0 + period)) * (num[index] - num[0]);
printf("" %d \n"",sum);
}";
}
}
for (int i = 0; i < num.Length; i++)
{
sum += coeff * (num[i] - sum);
final[i] = sum;
}
means the first element is multiplied by coeff once and subtracted from the second element. The first element is also multiplied by the square of coeff and this time added to the third element. Then the first element is multiplied by the cube of coeff and subtracted from the fourth element.
This is going like this:
-e0*c*c*c + e1*c*c - e2*c = f3
e0*c*c*c*c - e1*c*c*c + e2*c*c - e3*c = f4
-e0*c*c*c*c*c + e1*c*c*c*c - e2*c*c*c + e3*c*c - e4*c =f5
For each element, scan through all elements with a smaller id and compute this:
if the difference of the id values (let's call it k) is odd, subtract; if not, add. Before adding or subtracting, multiply that value by the k-th power of coeff. Lastly, multiply the current num value by the coefficient and add it to the current cell. The current cell value is final[i].
This is O(N*N) and looks like an all-pairs compute kernel. An example using an open-source C# OpenCL project:
ClNumberCruncher cruncher = new ClNumberCruncher(ClPlatforms.all().gpus(), @"
__kernel void foo(__global double * num, __global double * final, __global int *parameters)
{
int threadId = get_global_id(0);
int period = parameters[0];
double coeff = 2.0 / (1.0 + period);
double sumOfElements = 0.0;
for(int i=0;i<threadId;i++)
{
// negativity of coeff is to select addition or subtraction for different powers of coeff
double powKofCoeff = pow(-coeff,threadId-i);
sumOfElements += powKofCoeff * num[i];
}
final[threadId] = sumOfElements + num[threadId] * coeff;
}
");
cruncher.performanceFeed = true; // getting benchmark feedback on console
double[] numArray = new double[10000];
double[] finalArray = new double[10000];
int[] parameters = new int[10];
int period = 15;
parameters[0] = period;
ClArray<double> numGpuArray = numArray;
numGpuArray.readOnly = true; // gpus read this from host
ClArray<double> finalGpuArray = finalArray; // finalArray will have results
finalGpuArray.writeOnly = true; // gpus write this to host
ClArray<int> parametersGpu = parameters;
parametersGpu.readOnly = true;
// calculate kernels with exact same ordering of parameters
// num(double),final(double),parameters(int)
// finalGpuArray points to __global double * final
numGpuArray.nextParam(finalGpuArray, parametersGpu).compute(cruncher, 1, "foo", 10000, 100);
// first compute always lags because of compiling the kernel so here are repeated computes to get actual performance
numGpuArray.nextParam(finalGpuArray, parametersGpu).compute(cruncher, 1, "foo", 10000, 100);
numGpuArray.nextParam(finalGpuArray, parametersGpu).compute(cruncher, 1, "foo", 10000, 100);
Results are in the finalArray array for 10,000 elements, using 100 workitems per workgroup.
The GPGPU part takes 82 ms on an RX 550 GPU, which has a very low ratio of 64-bit to 32-bit compute performance (because recent consumer gaming cards are not good at double precision). An Nvidia Tesla or an AMD Vega would easily compute this kernel without crippled performance. An FX-8150 (8 cores) completes it in 683 ms. If you need to specifically select only an integrated GPU and its CPU, you can use
ClPlatforms.all().gpus().devicesWithHostMemorySharing() + ClPlatforms.all().cpus() when creating the ClNumberCruncher instance.
Binaries of the API:
https://www.codeproject.com/Articles/1181213/Easy-OpenCL-Multiple-Device-Load-Balancing-and-Pip
or the source code to compile on your PC:
https://github.com/tugrul512bit/Cekirdekler
If you have multiple GPUs, it uses them without any extra code. Including a CPU in the computations would pull GPU effectiveness down in this sample for the first iteration (repeats complete in 76 ms with CPU + GPU), so it's better to use 2-3 GPUs instead of CPU + GPU.
I didn't check numerical stability (you should use Kahan summation when adding millions or more values into the same variable, but I didn't use it for readability, and I'm not sure whether 64-bit values need it as much as 32-bit ones do) or value correctness; you should do that. Also, the foo kernel is not optimized. It leaves 50% of core time idle, so it should be scheduled like this instead:
thread-0: compute element 0 and element N-1
thread-1: compute element 1 and element N-2
thread-m: compute element N/2-1 and element N/2
so all workitems get a similar amount of work. On top of this, using 100 as the workgroup size is not optimal. It should be something like 128, 256, 512, or 1024 (for Nvidia), but that means the array size should also be an integer multiple of it, and the kernel would then need extra control logic to not go out of the array borders. For even more performance, the for loop could keep multiple partial sums to get a "loop unrolling" effect.
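For illustration, here is a sketch of the balanced pairing described above, written in the same embedded-kernel style as the rest of the answer. It is not from the original post and is untested; it assumes N is even, N is passed in parameters[1], and the kernel is launched with N/2 workitems.

static string BalancedKernel
{
    get
    {
        return @"
        kernel void fooBalanced(global double * num, global double * final, global int * parameters)
        {
            // workitem t handles elements t and N-1-t, so a short and a long
            // inner loop are paired and the per-workitem work evens out
            int t = get_global_id(0);
            int N = parameters[1];
            double coeff = 2.0 / (1.0 + parameters[0]);
            int ids[2];
            ids[0] = t;
            ids[1] = N - 1 - t;
            for (int k = 0; k < 2; k++)
            {
                int id = ids[k];
                double sumOfElements = 0.0;
                for (int i = 0; i < id; i++)
                {
                    double powKofCoeff = pow(-coeff, (double)(id - i));
                    sumOfElements += powKofCoeff * num[i];
                }
                final[id] = sumOfElements + num[id] * coeff;
            }
        }";
    }
}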
Firstly, if this is a question that has been asked already, don't get angry and just link me the original please, I couldn't find it. Thank you :)
OK, so I don't really know how to explain this. My code gets all the pixels of a bitmap and puts them into a dictionary in order. When I step through the code with breakpoints, it all runs perfectly and completes fast. However, when I don't put any breakpoints in, x and y at colour = bmpThread.GetPixel(x, y); go out of bounds (they reach 4) and I have no idea why. Why is it doing this and how do I stop it?
void PixelAnalyse(int x, int y, int currentPixel)
{
Bitmap bmpThread = bmp;
Color colour;
lock (bmpThread)
{
colour = bmpThread.GetPixel(x, y);
//pTemp = bmpThread.GetPixel(x, y);
}
//this.Invoke(new Action(() => dataGridView1.Rows.Clear()));
//Get the pixel colours
arrayOfColours[currentPixel] = colour;
//this.Invoke(new Action(() => dataGridView1.FirstDisplayedCell = dataGridView1.Rows[dataGridView1.Rows.Count - 1].Cells[0]));W
//this.Invoke(new Action(() => progressBar1.Value++));
CancellationTokenSource cts = new CancellationTokenSource();
cts.Cancel();
}
private void analyse1_Click(object sender, EventArgs e)
{
ThreadPool.SetMaxThreads(imageSize + 1, imageSize + 1);
for (int m = 0; m < imageSize; m++)
{
arrayOfColours.Add(m, Color.Black);
}
int y, x;
int currentPixel = 0;
for (x = 0; x < xSize; x++)
{
for (y = 0; y < ySize; y++)
{
ThreadPool.QueueUserWorkItem(new WaitCallback(o => PixelAnalyse(x, y, currentPixel)));
currentPixel++;
}
}
PrintToDataGraph();
}
Because you're capturing the variables, not their current values; basically, you're doing this:
queue an operation that accesses the variables currentPixel, x and y (not their current values)
change the value of those variables
loop
This means that when each operation actually happens, the values of currentPixel, x and y are not what they were when you scheduled the work.
You can avoid this by declaring new variables at the lowest scope:
var a = x;
var b = y;
var c = currentPixel;
ThreadPool.QueueUserWorkItem(new WaitCallback(o => PixelAnalyse(a, b, c)));
However, it is unlikely that creating that many work items is an optimal approach; usually you should prefer to create a smaller number of "chunky" work items for the thread-pool.
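A minimal sketch of that chunky approach, reusing xSize, ySize and PixelAnalyse from the question: one unit of work per row instead of one per pixel, with the loop counters passed by value so nothing is captured by reference. It assumes arrayOfColours is pre-filled as in the question, so each index is only written by one thread (a plain Color[] would still be the safer choice).

// One chunk of work per row; Parallel.For decides how many threads to use.
Parallel.For(0, ySize, y =>
{
    for (int x = 0; x < xSize; x++)
    {
        int currentPixel = x * ySize + y;   // same numbering as the original nested loops
        PixelAnalyse(x, y, currentPixel);
    }
});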
I need some help with optimisation of my CCL (connected-component labelling) algorithm implementation. I use it to detect black areas in an image. On a 2000x2000 image it takes 11 seconds, which is far too long. I need to reduce the running time as much as possible. Also, I would be glad to know if there is any other algorithm out there that does the same thing, but faster than this one. So here is my code:
//The method returns a dictionary, where the key is the label
//and the list contains all the pixels with that label
public Dictionary<short, LinkedList<Point>> ProcessCCL()
{
Color backgroundColor = this.image.Palette.Entries[1];
//Matrix to store pixels' labels
short[,] labels = new short[this.image.Width, this.image.Height];
//I particularly don't like how I store the label equality table
//But I don't know how else I could store it
//I use LinkedList to add and remove items faster
Dictionary<short, LinkedList<short>> equalityTable = new Dictionary<short, LinkedList<short>>();
//Current label
short currentKey = 1;
for (int x = 1; x < this.bitmap.Width; x++)
{
for (int y = 1; y < this.bitmap.Height; y++)
{
if (!GetPixelColor(x, y).Equals(backgroundColor))
{
//Minimum of the neighbours' labels
short label = Math.Min(labels[x - 1, y], labels[x, y - 1]);
//If there are no neighbours
if (label == 0)
{
//Create a new unique label
labels[x, y] = currentKey;
equalityTable.Add(currentKey, new LinkedList<short>());
equalityTable[currentKey].AddFirst(currentKey);
currentKey++;
}
else
{
labels[x, y] = label;
short west = labels[x - 1, y], north = labels[x, y - 1];
//A little trick:
//Because of those "ifs" the lowest label value
//will always be the first in the list
//but I'm afraid that because of them
//the running time also increases
if (!equalityTable[label].Contains(west))
if (west < equalityTable[label].First.Value)
equalityTable[label].AddFirst(west);
if (!equalityTable[label].Contains(north))
if (north < equalityTable[label].First.Value)
equalityTable[label].AddFirst(north);
}
}
}
}
//This dictionary will be returned as the result
//I'm not proud of using dictionary here too, I guess there
//is a better way to store the result
Dictionary<short, LinkedList<Point>> result = new Dictionary<short, LinkedList<Point>>();
//I define the variable outside the loops in order
//to reuse the memory address
short cellValue;
for (int x = 0; x < this.bitmap.Width; x++)
{
for (int y = 0; y < this.bitmap.Height; y++)
{
cellValue = labels[x, y];
//If the pixel is not a background
if (cellValue != 0)
{
//Take the minimum value from the label equality table
short value = equalityTable[cellValue].First.Value;
//I'd like to get rid of these lines
if (!result.ContainsKey(value))
result.Add(value, new LinkedList<Point>());
result[value].AddLast(new Point(x, y));
}
}
}
return result;
}
Thanks in advance!
You could split your picture into multiple sub-pictures, process them in parallel, and then merge the results.
1 pass: 4 tasks, each processing a 1000x1000 sub-picture
2 pass: 2 tasks, each processing 2 of the sub-pictures from pass 1
3 pass: 1 task, processing the result of pass 2
For C# I recommend the Task Parallel Library (TPL), which makes it easy to define tasks that depend on and wait for each other. The following CodeProject article gives you a basic introduction to the TPL: The Basics of Task Parallelism via C#.
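A rough sketch of the pass-1 fan-out described above. ProcessRegion is a hypothetical method standing in for the CCL over one sub-picture; the later passes have to reconcile labels of components that touch the shared borders.

// Four 1000x1000 quadrants of the 2000x2000 image, processed in parallel.
Rectangle[] quadrants =
{
    new Rectangle(0, 0, 1000, 1000),
    new Rectangle(1000, 0, 1000, 1000),
    new Rectangle(0, 1000, 1000, 1000),
    new Rectangle(1000, 1000, 1000, 1000),
};
Task<Dictionary<short, LinkedList<Point>>>[] pass1 = quadrants
    .Select(r => Task.Run(() => ProcessRegion(r)))
    .ToArray();
Task.WaitAll(pass1);
// Pass 2 and 3: merge neighbouring results, joining components that touch
// the border between two sub-pictures.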
I would process one scan line at a time, keeping track of the beginning and end of each run of black pixels.
Then I would, on each scan line, compare it to the runs on the previous line. If there is a run on the current line that does not overlap a run on the previous line, it represents a new blob. If there is a run on the previous line that overlaps a run on the current line, it gets the same blob label as the previous one, and so on. You get the idea.
I would try not to use dictionaries and such.
In my experience, randomly halting the program shows that those things may make programming incrementally easier, but they can exact a serious performance cost due to new-ing.
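A small sketch of the run extraction this answer starts from; isForeground stands in for the question's !GetPixelColor(x, y).Equals(backgroundColor) test, and linking the runs of one line to those of the previous line is left out.

// Finds the inclusive [start, end] column ranges of foreground pixels on scan line y.
static List<Tuple<int, int>> FindRuns(int y, int width, Func<int, int, bool> isForeground)
{
    var runs = new List<Tuple<int, int>>();
    int x = 0;
    while (x < width)
    {
        if (isForeground(x, y))
        {
            int start = x;
            while (x < width && isForeground(x, y)) x++;
            runs.Add(Tuple.Create(start, x - 1));
        }
        else
        {
            x++;
        }
    }
    return runs;
}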
The problem is GetPixelColor(x, y); it takes a very long time to access image data.
The SetPixel/GetPixel functions are terribly slow in C#, so if you need to use them a lot, you should use Bitmap.LockBits instead.
private void ProcessUsingLockbits(Bitmap ProcessedBitmap)
{
BitmapData bitmapData = ProcessedBitmap.LockBits(new Rectangle(0, 0, ProcessedBitmap.Width, ProcessedBitmap.Height), ImageLockMode.ReadWrite, ProcessedBitmap.PixelFormat);
int BytesPerPixel = System.Drawing.Bitmap.GetPixelFormatSize(ProcessedBitmap.PixelFormat) / 8;
int ByteCount = bitmapData.Stride * ProcessedBitmap.Height;
byte[] Pixels = new byte[ByteCount];
IntPtr PtrFirstPixel = bitmapData.Scan0;
Marshal.Copy(PtrFirstPixel, Pixels, 0, Pixels.Length);
int HeightInPixels = bitmapData.Height;
int WidthInBytes = bitmapData.Width * BytesPerPixel;
for (int y = 0; y < HeightInPixels; y++)
{
int CurrentLine = y * bitmapData.Stride;
for (int x = 0; x < WidthInBytes; x = x + BytesPerPixel)
{
int OldBlue = Pixels[CurrentLine + x];
int OldGreen = Pixels[CurrentLine + x + 1];
int OldRed = Pixels[CurrentLine + x + 2];
// Transform blue and clip to 255:
Pixels[CurrentLine + x] = (byte)((OldBlue + BlueMagnitudeToAdd > 255) ? 255 : OldBlue + BlueMagnitudeToAdd);
// Transform green and clip to 255:
Pixels[CurrentLine + x + 1] = (byte)((OldGreen + GreenMagnitudeToAdd > 255) ? 255 : OldGreen + GreenMagnitudeToAdd);
// Transform red and clip to 255:
Pixels[CurrentLine + x + 2] = (byte)((OldRed + RedMagnitudeToAdd > 255) ? 255 : OldRed + RedMagnitudeToAdd);
}
}
// Copy modified bytes back:
Marshal.Copy(Pixels, 0, PtrFirstPixel, Pixels.Length);
ProcessedBitmap.UnlockBits(bitmapData);
}
Here is the basic code to access pixel data.
I also made a function that transforms the image into a 2D matrix, which is easier to manipulate (but a little slower):
private void bitmap_to_matrix()
{
unsafe
{
bitmapData = ProcessedBitmap.LockBits(new Rectangle(0, 0, ProcessedBitmap.Width, ProcessedBitmap.Height), ImageLockMode.ReadWrite, ProcessedBitmap.PixelFormat);
int BytesPerPixel = System.Drawing.Bitmap.GetPixelFormatSize(ProcessedBitmap.PixelFormat) / 8;
int HeightInPixels = ProcessedBitmap.Height;
int WidthInPixels = ProcessedBitmap.Width;
int WidthInBytes = ProcessedBitmap.Width * BytesPerPixel;
byte* PtrFirstPixel = (byte*)bitmapData.Scan0;
Parallel.For(0, HeightInPixels, y =>
{
byte* CurrentLine = PtrFirstPixel + (y * bitmapData.Stride);
for (int x = 0; x < WidthInBytes; x = x + BytesPerPixel)
{
// Conversion in grey level
double rst = CurrentLine[x] * 0.0721 + CurrentLine[x + 1] * 0.7154 + CurrentLine[x + 2] * 0.2125;
// Fill the grey matix
TG[x / 3, y] = (int)rst;
}
});
}
}
And here is the website the code comes from:
"High performance SystemDrawingBitmap"
Thanks to the author for the really good work!
Hope this will help!
Is this parallel merge sort implemented correctly? It looks correct; I took the 40 seconds to write a test and it hasn't failed.
The gist of it is that I need to sort by splitting the array in half every time. I wanted to make sure I wasn't going wrong, so I asked a question for a sanity check (my own sanity). I wanted an in-place sort but decided that was way too complicated after seeing the answer, so I implemented the version below.
Granted, there's no point creating a task/thread to sort a tiny byte array, but it's to learn threading. Is there anything wrong, or anything I can change to make this better? To me it looks fine, but I'd like some general feedback.
static void Main(string[] args)
{
var start = DateTime.Now;
//for (int z = 0; z < 1000000; z++)
int z = 0;
while(true)
{
var curr = DateTime.Now;
if (curr - start > TimeSpan.FromMinutes(1))
break;
var arr = new byte[] { 5, 3, 1, 7, 8, 5, 3, 2, 6, 7, 9, 3, 2, 4, 2, 1 };
Sort(arr, 0, arr.Length, new byte[arr.Length]);
//Console.Write(BitConverter.ToString(arr));
for (int i = 1; i < arr.Length; ++i)
{
if (arr[i - 1] > arr[i])
{
System.Diagnostics.Debug.Assert(false);
throw new Exception("Sort was incorrect " + BitConverter.ToString(arr));
}
}
++z;
}
Console.WriteLine("Tried {0} times with success", z);
}
static void Sort(byte[] arr, int leftPos, int rightPos, byte[] tempArr)
{
var len = rightPos - leftPos;
if (len < 2)
return;
if (len == 2)
{
if (arr[leftPos] > arr[leftPos + 1])
{
var t = arr[leftPos];
arr[leftPos] = arr[leftPos + 1];
arr[leftPos + 1] = t;
}
return;
}
var rStart = leftPos+len/2;
var t1 = new Thread(delegate() { Sort(arr, leftPos, rStart, tempArr); });
var t2 = new Thread(delegate() { Sort(arr, rStart, rightPos, tempArr); });
t1.Start();
t2.Start();
t1.Join();
t2.Join();
var l = leftPos;
var r = rStart;
var z = leftPos;
while (l<rStart && r<rightPos)
{
if (arr[l] < arr[r])
{
tempArr[z] = arr[l];
l++;
}
else
{
tempArr[z] = arr[r];
r++;
}
z++;
}
if (l < rStart)
Array.Copy(arr, l, tempArr, z, rStart - l);
else
Array.Copy(arr, r, tempArr, z, rightPos - r);
Array.Copy(tempArr, leftPos, arr, leftPos, rightPos - leftPos);
}
You could use the Task Parallel Library to give you a better abstraction over threads and cleaner code. The example below uses this.
The main difference from your code, other than using the TPL, is that it has a cutoff threshold below which a sequential implementation is used regardless of the number of threads that have started. This prevents the creation of threads that do only a very small amount of work. There is also a depth cutoff below which new threads are not created. This prevents more threads being created than the hardware can handle, based on the number of available logical cores (Environment.ProcessorCount).
I would recommend implementing both of these approaches in your code to prevent thread explosion for large arrays and inefficient creation of threads that do very small amounts of work, even for small array sizes. It will also give you better performance.
public static class Sort
{
public static int Threshold = 150;
public static void InsertionSort(int[] array, int from, int to)
{
// ...
}
static void Swap(int[] array, int i, int j)
{
// ...
}
static int Partition(int[] array, int from, int to, int pivot)
{
// ...
}
public static void ParallelQuickSort(int[] array)
{
ParallelQuickSort(array, 0, array.Length,
(int) Math.Log(Environment.ProcessorCount, 2) + 4);
}
static void ParallelQuickSort(int[] array, int from, int to, int depthRemaining)
{
if (to - from <= Threshold)
{
InsertionSort(array, from, to);
}
else
{
int pivot = from + (to - from) / 2; // could be anything, use middle
pivot = Partition(array, from, to, pivot);
if (depthRemaining > 0)
{
Parallel.Invoke(
() => ParallelQuickSort(array, from, pivot, depthRemaining - 1),
() => ParallelQuickSort(array, pivot + 1, to, depthRemaining - 1));
}
else
{
ParallelQuickSort(array, from, pivot, 0);
ParallelQuickSort(array, pivot + 1, to, 0);
}
}
}
}
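For completeness, the elided helpers could look roughly like this. This is a sketch only; the article's actual implementations are in the links below.

// Sorts array[from..to) in place; used below the sequential threshold.
public static void InsertionSort(int[] array, int from, int to)
{
    for (int i = from + 1; i < to; i++)
    {
        int value = array[i];
        int j = i - 1;
        while (j >= from && array[j] > value)
        {
            array[j + 1] = array[j];
            j--;
        }
        array[j + 1] = value;
    }
}

static void Swap(int[] array, int i, int j)
{
    int t = array[i];
    array[i] = array[j];
    array[j] = t;
}

// Lomuto-style partition around the value at index 'pivot';
// returns the final index of that pivot value.
static int Partition(int[] array, int from, int to, int pivot)
{
    int pivotValue = array[pivot];
    Swap(array, pivot, to - 1);            // move pivot to the end
    int store = from;
    for (int i = from; i < to - 1; i++)
    {
        if (array[i] < pivotValue)
        {
            Swap(array, i, store);
            store++;
        }
    }
    Swap(array, store, to - 1);            // move pivot into its final place
    return store;
}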
The full source is available on http://parallelpatterns.codeplex.com/
You can read a discussion of the implementation on http://msdn.microsoft.com/en-us/library/ff963551.aspx