Given two ARGB colors represented as integers, 8 bit/channel (alpha, red, green, blue), I need to compute a value that represents a sort of distance (also integer) between them.
So the formula for the distance is: Delta = |R1-R2| + |G1-G2| + |B1-B2|, where Rx, Gx and Bx are the channel values of colors 1 and 2. The alpha channel is always ignored.
I need to speed up this calculation because it is done many times on a slow machine. What is the 'geekiest' way to calculate this on a single thread, given the two integers?
My best so far is below, but I guess this can be improved further:
//Used for color conversion from/to int
private const int ChannelMask = 0xFF;
private const int GreenShift = 8;
private const int RedShift = 16;
public int ComputeColorDelta(int color1, int color2)
{
int rDelta = Math.Abs(((color1 >> RedShift) & ChannelMask) - ((color2 >> RedShift) & ChannelMask));
int gDelta = Math.Abs(((color1 >> GreenShift) & ChannelMask) - ((color2 >> GreenShift) & ChannelMask));
int bDelta = Math.Abs((color1 & ChannelMask) - (color2 & ChannelMask));
return rDelta + gDelta + bDelta;
}
Long Answer:
How many is "a lot"?
I have a fast machine I guess, but I wrote this little script:
public static void Main() {
var s = Stopwatch.StartNew();
Random r = new Random();
for (int i = 0; i < 100000000; i++) {
int compute = ComputeColorDelta(r.Next(255), r.Next(255));
}
Console.WriteLine(s.ElapsedMilliseconds);
Console.ReadLine();
}
And the output is:
6878
So 7 seconds for 100 million times seems pretty good.
We can definitely speed this up though. I changed your function to look like this:
public static int ComputeColorDelta(int color1, int color2) {
return 1;
}
With that change, the output was: 5546. So, we managed to get a 1 second performance gain over 100 million iterations by returning a constant. ;)
Short answer: this function is not your bottleneck. :)
I'm trying to let the runtime do the calculation for me.
First of all, I define a struct with explicit field offsets:
[StructLayout(LayoutKind.Explicit)]
public struct Color
{
[FieldOffset(0)] public int Raw;
[FieldOffset(0)] public byte Blue;
[FieldOffset(1)] public byte Green; // FieldOffset is measured in bytes, so the channels overlay Raw at offsets 0-3
[FieldOffset(2)] public byte Red;
[FieldOffset(3)] public byte Alpha;
}
The calculation function will be:
public int ComputeColorDeltaOptimized(Color color1, Color color2)
{
int rDelta = Math.Abs(color1.Red - color2.Red);
int gDelta = Math.Abs(color1.Green - color2.Green);
int bDelta = Math.Abs(color1.Blue - color2.Blue);
return rDelta + gDelta + bDelta;
}
And the usage:
public void FactMethodName2()
{
var s = Stopwatch.StartNew();
var color1 = new Color(); // These are structs, so I can define them outside the loop and gain some performance
var color2 = new Color();
for (int i = 0; i < 100000000; i++)
{
color1.Raw = i;
color2.Raw = 100000000 - i;
int compute = ComputeColorDeltaOptimized(color1, color2);
}
Console.WriteLine(s.ElapsedMilliseconds); //5393 vs 7472 of original
Console.ReadLine();
}
One idea would be to use the same code you already have, but in a different order: apply the mask, take the difference, then shift.
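A sketch of one way to read that reordering (the per-channel mask constants here are my assumption, reusing the question's shift constants): because each channel is isolated before the subtraction, the difference stays exact and only one mask per channel is needed.
public int ComputeColorDeltaMaskFirst(int color1, int color2)
{
    // Mask each channel in place, subtract the masked words, then shift the
    // absolute difference back down; (R1 - R2) << 16 is exact, so Math.Abs
    // can be applied before the shift.
    int rDelta = Math.Abs((color1 & 0x00FF0000) - (color2 & 0x00FF0000)) >> RedShift;
    int gDelta = Math.Abs((color1 & 0x0000FF00) - (color2 & 0x0000FF00)) >> GreenShift;
    int bDelta = Math.Abs((color1 & 0x000000FF) - (color2 & 0x000000FF));
    return rDelta + gDelta + bDelta;
}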
Another modification that might help is to inline this function: that is, instead of calling it for each pair of colors, just compute the difference directly, inside whatever loop executes this code. I assume it is inside a tight loop, because otherwise its cost would be negligible.
Lastly, since you're probably getting image pixel data, you'd save a lot by going the unsafe route: make your bitmaps like this EditableBitmap, then grab the byte* and read the image data out of it.
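A minimal sketch of that unsafe route (this is my illustration, assuming System.Drawing and two 32bpp ARGB bitmaps of equal size; it is not the EditableBitmap class the answer refers to):
// Compile with /unsafe. Walks the raw pixel data of two same-sized 32bpp ARGB
// bitmaps and accumulates the per-pixel channel deltas inline, with no per-pixel calls.
using System;
using System.Drawing;
using System.Drawing.Imaging;

public static unsafe long TotalColorDelta(Bitmap a, Bitmap b)
{
    var rect = new Rectangle(0, 0, a.Width, a.Height);
    BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    long total = 0;
    try
    {
        for (int y = 0; y < a.Height; y++)
        {
            byte* pa = (byte*)da.Scan0.ToPointer() + y * da.Stride;
            byte* pb = (byte*)db.Scan0.ToPointer() + y * db.Stride;
            for (int x = 0; x < a.Width; x++, pa += 4, pb += 4)
            {
                // Byte order is B, G, R, A; the alpha byte at offset 3 is ignored.
                total += Math.Abs(pa[0] - pb[0]) + Math.Abs(pa[1] - pb[1]) + Math.Abs(pa[2] - pb[2]);
            }
        }
    }
    finally
    {
        a.UnlockBits(da);
        b.UnlockBits(db);
    }
    return total;
}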
You can do this in order to reduce the AND operations:
public int ComputeColorDelta(int color1, int color2)
{
int rDelta = Math.Abs(((color1 >> RedShift) - (color2 >> RedShift)) & ChannelMask);
// same for other color channels
return rDelta + gDelta + bDelta;
}
not much but something...
Is it possible to get an extremely fast, but reliable (same input = same output, so I can't use time) pseudo-random number generator? I want the end result to be something like float NumGen(int x, int y, int seed); so that it creates a random number between 0 and 1 based on those three values. I found several random number generators, but I can't get them to work, and the random number generator that comes with Unity is far too slow to use. I have to make about 9 calls to the generator per 1 meter of terrain, so I don't really care if it's not perfectly statistically random, just that it works really quickly. Does anyone know of an algorithm that fits my needs? Thanks :)
I think you are underestimating the System.Random class. It is quite speedy. I believe your slowdown is related to creating a new instance of the Random class on each call to your NumGen method.
In my quick test I was able to generate 100,000 random numbers using System.Random in about 1 millisecond.
To avoid the slow down consider seed points in your 2D plane. Disperse the seed points so that they cover a distance no greater than 100,000 meters. Then associate (or calculate) the nearest seed point for each meter, and use that point as your seed to System.Random.
Yes, you will be generating a ton of random numbers you will never use, but they are virtually free.
Pseudo-code:
double NumGen(x, y, distance, seed) {
    Random random = new Random(seed);
    double result = 0;
    for (int i = 0; i < distance; i++) {
        result = random.NextDouble();
    }
    return result;
}
You could modify this simple outline to return a sequence of random numbers (possibly representing a grid), and couple that with a caching mechanism. That would let you conserve memory and improve (lessen) CPU consumption.
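A possible shape for that caching (the cache layout and names here are my assumptions, not part of the answer): generate the whole sequence for a seed point once, then serve individual offsets from memory.
using System;
using System.Collections.Generic;

static class CachedNumGen
{
    // One generated sequence per seed point, created lazily on first use.
    static readonly Dictionary<int, double[]> cache = new Dictionary<int, double[]>();

    public static double NumGen(int distance, int seed, int sequenceLength = 1024)
    {
        double[] sequence;
        if (!cache.TryGetValue(seed, out sequence))
        {
            var random = new Random(seed);
            sequence = new double[sequenceLength];
            for (int i = 0; i < sequenceLength; i++)
                sequence[i] = random.NextDouble();
            cache[seed] = sequence;
        }
        return sequence[distance % sequenceLength];
    }
}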
I guess you had to create a Random instance on every call to NumGen. To get the function to return the same number for the same parameters you could use a hash function.
I tested a few things, and this code was about 3 times faster than recreating instances of Random.
// requires using System.Security.Cryptography;
static MD5 hasher = MD5.Create();
static byte[] outbuf;
static byte[] inbuf = new byte[12];
static float floatHash(uint x, uint y, uint z) {
inbuf[0] = (byte)(x >> 24);
inbuf[1] = (byte)(x >> 16);
inbuf[2] = (byte)(x >> 8);
inbuf[3] = (byte)(x);
inbuf[4] = (byte)(y >> 24);
inbuf[5] = (byte)(y >> 16);
inbuf[6] = (byte)(y >> 8);
inbuf[7] = (byte)(y);
inbuf[8] = (byte)(z >> 24);
inbuf[9] = (byte)(z >> 16);
inbuf[10] = (byte)(z >> 8);
inbuf[11] = (byte)(z);
outbuf = hasher.ComputeHash(inbuf);
return ((float)BitConverter.ToUInt64(outbuf, 0))/ulong.MaxValue;
}
Another method using some RSA methods is about 5 times faster than new System.Random(seed):
static uint prime = 4294967291;
static uint ord = 4294967290;
static uint generator = 4294967279;
static uint sy;
static uint xs;
static uint xy;
static float getFloat(uint x, uint y, uint seed) {
//will return values 1 >= x > 0; replace 'ord' with 'prime' to get 1 > x > 0
//one call to modPow would be enough if all data fits into an ulong
sy = modPow(generator, (((ulong)seed) << 32) + (ulong)y, prime);
xs = modPow(generator, (((ulong)x) << 32) + (ulong)seed, prime);
xy = modPow(generator, (((ulong)sy) << 32) + (ulong)xs, prime);
return ((float)xy) / ord;
}
static ulong b;
static ulong ret;
static uint modPow(uint bb, ulong e, uint m) {
b = bb;
ret = 1;
while (e > 0) {
if (e % 2 == 1) {
ret = (ret * b) % m;
}
e = e >> 1;
b = (b * b) % m;
}
return (uint)ret;
}
I ran a test to generate 100000 floats. I used the index as seed for System.Random and as x parameter of floatHash (y and z were 0).
System.Random: Min: 2.921559E-06 Max: 0.9999979 Repetitions: 0
floatHash MD5: Min: 7.011156E-06 Max: 0.9999931 Repetitions: 210 (values were returned twice)
getFloat RSA: Min: 1.547858E-06 Max: 0.9999989 Repetitions: 190
I've read the other posts on BitArray conversions and tried several myself but none seem to deliver the results I want.
My situation is this: I have some C# code that controls an LED strip. To issue a single command to the strip I need at most 28 bits:
1 bit for selecting between 2 led strips
6 for position (Max 48 addressable leds)
7 for color x3 (0-127 value for color)
Suppose I create a BitArray for that structure and as an example we populate it semi-randomly.
BitArray ba = new BitArray(28);
for(int i = 0 ;i < 28; i++)
{
if (i % 3 == 0)
ba.Set(i, true);
else
ba.Set(i, false);
}
Now I want to stick those 28 bits in 4 bytes (The last 4 bits can be a stop signal), and finally turn it into a String so I can send the string via USB to the LED strip.
All the methods I've tried convert each 1 and 0 to a literal char, which is not the goal.
Is there a straightforward way to do this bit compacting in C#?
Well you could use BitArray.CopyTo:
byte[] bytes = new byte[4];
ba.CopyTo(bytes, 0);
Or:
int[] ints = new int[1];
ba.CopyTo(ints, 0);
It's not clear what you'd want the string representation to be though - you're dealing with naturally binary data rather than text data...
I wouldn't use a BitArray for this. Instead, I'd use a struct, and then pack that into an int when I need to:
struct Led
{
public readonly bool Strip;
public readonly byte Position;
public readonly byte Red;
public readonly byte Green;
public readonly byte Blue;
public Led(bool strip, byte pos, byte r, byte g, byte b)
{
    Strip = strip;
    Position = pos;
    Red = r;
    Green = g;
    Blue = b;
}
public int ToInt()
{
const int StripBit = 0x08000000;   // bit 27
const int PositionMask = 0x3F;     // 6 bits
const int PositionShift = 21;      // bits 21 through 26
const int ColorMask = 0x7F;        // 7 bits
const int RedShift = 14;           // bits 14 through 20
const int GreenShift = 7;          // bits 7 through 13
int val = Strip ? 0 : StripBit;
val = val | ((Position & PositionMask) << PositionShift);
val = val | ((Red & ColorMask) << RedShift);
val = val | ((Green & ColorMask) << GreenShift);
val = val | (Blue & ColorMask);
return val;
}
}
That way you can create your structures easily without having to fiddle with bit arrays:
var blue17 = new Led(true, 17, 0, 0, 127);
var blah22 = new Led(false, 22, 15, 97, 42);
and to get the values:
int blue17_value = blue17.ToInt();
You can turn the int into a byte array easily enough with BitConverter:
var blue17_bytes = BitConverter.GetBytes(blue17_value);
It's unclear to me why you want to send that as a string.
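If the strip is driven through a serial-over-USB port, the four bytes can be written directly and no string is needed at all. A hedged sketch (the port name and baud rate are placeholders):
using System;
using System.IO.Ports;

// Pack the 28-bit command into an int (e.g. via Led.ToInt()) and send its raw bytes.
// Note: BitConverter.GetBytes uses the platform's byte order (little-endian on Windows),
// so check which order the strip's controller expects.
int command = blue17.ToInt();
byte[] frame = BitConverter.GetBytes(command);
using (var port = new SerialPort("COM3", 9600))
{
    port.Open();
    port.Write(frame, 0, frame.Length);
}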
So this might become overly complex to explain, but I'll try to keep it simple yet informative. My program, which is written in C#.NET, monitors a microphone for 2 seconds and returns the maximum value from a sample. I'm not super well versed in how sound and so forth is generated from winmm.dll, but my program is based loosely on NAudio and another project from CodeProject that visualizes a wave. The wave format that I am using is this:
//WaveIn.cs
private WaveFormat Format= new WaveFormat(8000, 16,1);
//waveFormat.cs
[StructLayout(LayoutKind.Sequential)]
public class WaveFormat
{
public short wFormatTag;
public short nChannels;
public int nSamplesPerSec;
public int nAvgBytesPerSec;
public short nBlockAlign;
public short wBitsPerSample;
public short cbSize;
public WaveFormat(int rate, int bits, short channels)
{
wFormatTag = (short)WaveFormats.Pcm;
nChannels = channels;
nSamplesPerSec = rate;
wBitsPerSample = (short)bits;
cbSize = 0;
nBlockAlign = (short)(nChannels * (wBitsPerSample / 8));
nAvgBytesPerSec = nSamplesPerSec * nBlockAlign;
    }
}
(I think I may have just found my problem by posting this, but I'm still going to ask.)
So then I set up an event for the max sound level in my WaveIn file. If I understand the source code correctly, it fires when the buffer is full. Here is that code:
private void CallBack(IntPtr waveInHandle, WaveMessage message, int userData, ref WaveHeader waveHeader, IntPtr reserved)
{
if (message == WaveMessage.WIM_DATA)
{
GCHandle hBuffer = (GCHandle)waveHeader.dwUser;
WaveInBuffer buffer = (WaveInBuffer)hBuffer.Target;
Exception exception = null;
if (DataAvailable != null)
{
DataAvailable(buffer.Data, buffer.BytesRecorded);
}
if (MaxSoundLevel != null) //FOLLOW THIS ONE
{
byte[] waveStream = new byte[buffer.BytesRecorded];
Marshal.Copy(buffer.Data, waveStream, 0, buffer.BytesRecorded);
MaxSoundLevel(GetMaxSound(GetWaveChannels(waveStream)));
}
if (recording)
{
try
{
buffer.Reuse();
}
catch (Exception e)
{
recording = false;
exception = e;
}
}
}
}
private short[] GetWaveChannels(byte[] waveStream)
{
short[] monoWave = new short[waveStream.Length/2];
int h=0;
for (int i = 0 ; i < waveStream.Length; i += 2)
{
monoWave[h] = BitConverter.ToInt16(waveStream, i);
h++;
}
return monoWave;
}
private int GetMaxSound(short[] wave)
{
int maxSound = 0;
for (int i = 0; i < wave.Length; i++)
{
maxSound = Math.Max(maxSound, Math.Abs(wave[i]));
}
return maxSound;
}
So when I monitor it with this test, it won't crash as long as I keep sound levels "normal":
[Test]
public void TestSound()
{
var waveIn = new WaveIn();
waveIn.MaxSoundLevel += new WaveIn.MaxSoundHandler(waveIn_MaxSoundLevel);
waveIn.StartRecording();
Console.WriteLine("Starting to record");
Thread.Sleep(4800); //record for 4.8 seconds.
waveIn.StopRecording();
Console.WriteLine("Done Recording");
}
void waveIn_MaxSoundLevel(int MaxSound)
{
Console.WriteLine("MaxSound:{0}", MaxSound);
}
Here is my output:
MaxSound:28
MaxSound:24
MaxSound:31
MaxSound:17
MaxSound:18760
Unhandled Exception: System.OverflowException: Negating the minimum value of a twos complement number is invalid.
I once got it to give me MaxSound:32767 (0x7FFF).
So I figured that my problem lay in trying to convert a 32-bit number to a 16-bit number, which is why I switched GetMaxSound from short to int. So I don't know; I am stumped. Why am I having this problem? Doesn't my wave format suggest its max is 32,767, and that winmm.dll would know that and not go past it? And since it is just converting 2 bytes of data to a short, should it never encounter this problem? Please help :)
My solution, for those who may be looking into this, was fairly simple in nature. A 16-bit signed number's maximum positive value is 32767, and its maximum negative value is -32768. If you take the absolute value of -32768 and try to put it into a 16-bit number, an overflow exception is thrown. So the solution is to cast the short value to a 32-bit number before taking its absolute value. Here is the corrected function:
private int GetMaxSound(short[] wave)
{
int maxSound = 0;
for (int i = 0; i < wave.Length; i++)
{
maxSound = Math.Max(maxSound, Math.Abs((int)wave[i]));
}
return maxSound;
}
I could probably just have stuck with an unsigned number as well by using ushort, but Math.Abs does not have an overload for ushort anyway.
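To illustrate the failure mode (this small repro is my addition, not from the original post): Math.Abs on the most negative short throws, while widening to int first does not.
short sample = short.MinValue;       // -32768, the loudest possible negative sample
// Math.Abs(short) has to return a short, and +32768 does not fit in one:
// short bad = Math.Abs(sample);     // throws System.OverflowException
int ok = Math.Abs((int)sample);      // 32768 - fine once widened to 32 bits
Console.WriteLine(ok);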
I need to pass a parameter as two int parameters to a Telerik Report since it cannot accept Long parameters. What is the easiest way to split a long into two ints and reconstruct it without losing data?
Using masking and shifting is your best bet. long is guaranteed to be 64 bit and int 32 bit, according to the documentation, so you can mask off the bits into the two integers and then recombine.
See:
static int[] long2doubleInt(long a) {
int a1 = (int)(a & uint.MaxValue);
int a2 = (int)(a >> 32);
return new int[] { a1, a2 };
}
static long doubleInt2long(int a1, int a2)
{
long b = a2;
b = b << 32;
b = b | (uint)a1;
return b;
}
static void Main(string[] args)
{
long a = 12345678910111213;
int[] al = long2doubleInt(a);
long ap = doubleInt2long(al[0],al[1]);
System.Console.WriteLine(ap);
System.Console.ReadKey();
}
Note the use of bitwise operations throughout. This avoids the problems one might get when using addition or other numerical operations that might occur using negative numbers or rounding errors.
Note you can replace int with uint in the above code if you are able to use unsigned integers (this is always preferable in this sort of situation, as it's a lot clearer what's going on with the bits).
Doing bit-manipulation in C# can be awkward at times, particularly when dealing with signed values. You need to be using unsigned values whenever you plan on doing bit-manipulation. Unfortunately it's not going to yield the nicest looking code.
const long LOW_MASK = ((1L << 32) - 1);
long value = unchecked((long)0xDEADBEEFFEEDDEAD);
int valueHigh = (int)(value >> 32);
int valueLow = (int)(value & LOW_MASK);
long reconstructed = unchecked((long)(((ulong)valueHigh << 32) | (uint)valueLow));
If you want a nicer way to do this, get the raw bytes for the long and get the corresponding integers from the bytes. The conversion to/from representations doesn't change very much.
long value = unchecked((long)0xDEADBEEFFEEDDEAD);
byte[] valueBytes = BitConverter.GetBytes(value);
int valueHigh = BitConverter.ToInt32(valueBytes, BitConverter.IsLittleEndian ? 4 : 0);
int valueLow = BitConverter.ToInt32(valueBytes, BitConverter.IsLittleEndian ? 0 : 4);
byte[] reconstructedBytes = BitConverter.IsLittleEndian
? BitConverter.GetBytes(valueLow).Concat(BitConverter.GetBytes(valueHigh)).ToArray()
: BitConverter.GetBytes(valueHigh).Concat(BitConverter.GetBytes(valueLow)).ToArray();
long reconstructed = BitConverter.ToInt64(reconstructedBytes, 0);
For unsigned types the following will work:
ulong value = ulong.MaxValue - 12;
uint low = (uint)(value & (ulong)uint.MaxValue);
uint high = (uint)(value >> 32);
ulong value2 = ((ulong)high << 32) | low;
long x = long.MaxValue;
int lo = (int)(x & 0xffffffff);
int hi = (int)((x - ((long)lo & 0xffffffff)) >> 32);
long y = ((long)hi << 32) | ((long)lo & 0xffffffff);
Console.WriteLine(System.Convert.ToString(x, 16));
Console.WriteLine(System.Convert.ToString(lo, 16));
Console.WriteLine(System.Convert.ToString(hi, 16));
Console.WriteLine(System.Convert.ToString(y, 16));
Converting it to and from a string would be much simpler than converting it two and from a pair of ints. Is this an option?
string myStringValue = myLongValue.ToString();
myLongValue = long.Parse(myStringValue);
Instead of mucking with bit operations, just use a faux union. This also would work for different combinations of data types, not just long & 2 ints. More importantly, that avoids the need to be concerned about signs, endianness or other low-level details when you really only care about reading & writing bits in a consistent manner.
using System;
using System.Runtime.InteropServices;
public class Program
{
[StructLayout(LayoutKind.Explicit)]
private struct Mapper
{
[FieldOffset(0)]
public long Aggregated;
[FieldOffset(0)]
public int One;
[FieldOffset(sizeof(int))]
public int Two;
}
public static void Main()
{
var layout = new Mapper{ Aggregated = 0x0000000200000001 };
var one = layout.One;
var two = layout.Two;
Console.WriteLine("One: {0}, Two: {1}", one, two);
var secondLayout = new Mapper { One = one, Two = two };
var aggregated = secondLayout.Aggregated;
Console.WriteLine("Aggregated: {0}", aggregated.ToString("X"));
}
}
I am learning, trying to understand the ideas behind CRC. I can't find CRC-128 or CRC-256 code anywhere. If any of you have C++ or C# code for them, please share it with me, and also provide links to relevant websites. I am a newbie and can't code it by myself, nor can I convert the theory and mathematics into code, so I am asking for your help. It would be very kind of you to provide proper and simple code. If anyone provides these codes, please also provide the CRC table generator functions. Thank you.
I agree with you except that the accidental collision rate is higher than 1 in 2^32 or 1 in 2^64 for 32 bit and 64 bit CRCs respectively.
I wrote an app that kept track of things by their CRC values for tracking items. We needed to track potentially millions of items and we started with a CRC32 which in real world practice has a collision rate of around 1 in 2^16 which was an unpleasant surprise. We then re-coded to use a CRC64 which had a real world collision rate of about 1 in 2^23. We tested this after the unpleasant surprise of the 32 bit one we started with and accepted the small error rate of the 64 bit one.
I can't really explain the statistics behind the expected collision rate, but it makes sense that you would experience a collision much sooner than the bit width alone would suggest. Just like a hashtable...some hash buckets are empty and others have more than one entry....
Even for a 256 bit CRC the first 2 CRC's could be the same...it would be almost incredible but possible.
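The statistics being alluded to are the birthday bound: with a b-bit check value over n items, the probability of at least one accidental collision is roughly 1 - e^(-n(n-1)/2^(b+1)). A small C# sketch of that estimate (my addition, not from the answer):
using System;

static class BirthdayBound
{
    // Approximate probability of at least one collision among n random b-bit values.
    public static double CollisionProbability(double n, int bits)
    {
        double buckets = Math.Pow(2, bits);
        return 1.0 - Math.Exp(-n * (n - 1) / (2.0 * buckets));
    }
}

// CollisionProbability(4000000, 32) is essentially 1.0 - a 32-bit CRC will almost surely collide,
// while CollisionProbability(4000000, 64) is only about 4e-7.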
Though CRC-128 and CRC-256 were defined, I don't know of anyone who actually uses them.
Most of the time, developers who think they want a CRC should really be using a cryptographic hash function, which has succeeded CRCs for many applications. It would be a rare case indeed where CRC-128 or CRC-256 would be a superior choice to even the broken MD5, much less the SHA-2 family.
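For illustration, a minimal C# sketch of that suggestion (my addition; the class and method names are illustrative): SHA-256 from System.Security.Cryptography yields a 256-bit value, and the first 16 bytes can stand in where a 128-bit check value is wanted.
using System;
using System.Security.Cryptography;

static class CheckValue
{
    // Returns the first 'bytesWanted' bytes of the SHA-256 digest of 'data'.
    public static byte[] Sha256Check(byte[] data, int bytesWanted = 32)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(data); // always 32 bytes
            byte[] check = new byte[bytesWanted];
            Array.Copy(hash, check, bytesWanted); // e.g. 16 bytes for a 128-bit check value
            return check;
        }
    }
}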
Here is a Java class I wrote recently for playing with CRCs. Beware that changing the bit order is implemented only for bitwise computation.
/**
* A CRC algorithm for computing check values.
*/
public class Crc
{
public static final Crc CRC_16_CCITT =
new Crc(16, 0x1021, 0xffff, 0xffff, true);
public static final Crc CRC_32 =
new Crc(32, 0x04c11db7, 0xffffffffL, 0xffffffffL, true);
private final int _width;
private final long _polynomial;
private final long _mask;
private final long _highBitMask;
private final long _preset;
private final long _postComplementMask;
private final boolean _msbFirstBitOrder;
private final int _shift;
private final long[] _crcs;
/**
* Constructs a CRC specification.
*
* @param width
* @param polynomial
*/
public Crc(
int width,
long polynomial)
{
this(width, polynomial, 0, 0, true);
}
/**
* Constructs a CRC specification.
*
* @param width
* @param polynomial
* @param preset
* @param postComplementMask
* @param msbFirstBitOrder
*/
public Crc(
int width,
long polynomial,
long preset,
long postComplementMask,
boolean msbFirstBitOrder)
{
super();
_width = width;
_polynomial = polynomial;
_mask = (1L << width) - 1;
_highBitMask = (1L << (width - 1));
_preset = preset;
_postComplementMask = postComplementMask;
_msbFirstBitOrder = msbFirstBitOrder;
_shift = _width - 8;
_crcs = new long[256];
for (int i = 0; i < 256; i++)
{
_crcs[i] = crcForByte(i);
}
}
/**
* Gets the width.
*
* @return The width.
*/
public int getWidth()
{
return _width;
}
/**
* Gets the polynomial.
*
* @return The polynomial.
*/
public long getPolynomial()
{
return _polynomial;
}
/**
* Gets the mask.
*
* @return The mask.
*/
public long getMask()
{
return _mask;
}
/**
* Gets the preset.
*
* @return The preset.
*/
public long getPreset()
{
return _preset;
}
/**
* Gets the post-complement mask.
*
* @return The post-complement mask.
*/
public long getPostComplementMask()
{
return _postComplementMask;
}
/**
* @return True if this CRC uses MSB first bit order.
*/
public boolean isMsbFirstBitOrder()
{
return _msbFirstBitOrder;
}
public long computeBitwise(byte[] message)
{
long result = _preset;
for (int i = 0; i < message.length; i++)
{
for (int j = 0; j < 8; j++)
{
final int bitIndex = _msbFirstBitOrder ? 7 - j : j;
final boolean messageBit = (message[i] & (1 << bitIndex)) != 0;
final boolean crcBit = (result & _highBitMask) != 0;
result <<= 1;
if (messageBit ^ crcBit)
{
result ^= _polynomial;
}
result &= _mask;
}
}
return result ^ _postComplementMask;
}
public long compute(byte[] message)
{
long result = _preset;
for (int i = 0; i < message.length; i++)
{
final int b = (int) (message[i] ^ (result >>> _shift)) & 0xff;
result = ((result << 8) ^ _crcs[b]) & _mask;
}
return result ^ _postComplementMask;
}
private long crcForByte(int b)
{
long result1 = (long) (b & 0xff) << _shift; // widen before shifting so widths above 39 bits are not truncated
for (int j = 0; j < 8; j++)
{
final boolean crcBit = (result1 & (1L << (_width - 1))) != 0;
result1 <<= 1;
if (crcBit)
{
result1 ^= _polynomial;
}
result1 &= _mask;
}
return result1;
}
public String crcTable()
{
final int digits = (_width + 3) / 4;
final int itemsPerLine = (digits + 4) * 8 < 72 ? 8 : 4;
final String format = "0x%0" + digits + "x, ";
final StringBuilder builder = new StringBuilder();
builder.append("{\n");
for (int i = 0; i < _crcs.length; i += itemsPerLine)
{
builder.append(" ");
for (int j = i; j < i + itemsPerLine; j++)
{
builder.append(String.format(format, _crcs[j]));
}
builder.append("\n");
}
builder.append("}\n");
return builder.toString();
}
}
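Since the question asks for C# or C++, here is a minimal C# sketch of the same table-driven, MSB-first idea as the Java class above (my own simplified port; the names and the example parameter values are mine, not from the original post):
using System;

public sealed class TableCrc
{
    private readonly int _width;
    private readonly ulong _mask, _preset, _postXor;
    private readonly ulong[] _table = new ulong[256];

    public TableCrc(int width, ulong polynomial, ulong preset, ulong postXor)
    {
        _width = width;
        _mask = width == 64 ? ulong.MaxValue : (1UL << width) - 1;
        _preset = preset;
        _postXor = postXor;
        // Build the 256-entry table: the CRC of each possible top byte.
        for (uint b = 0; b < 256; b++)
        {
            ulong crc = (ulong)b << (width - 8);
            for (int i = 0; i < 8; i++)
                crc = (crc & (1UL << (width - 1))) != 0 ? (crc << 1) ^ polynomial : crc << 1;
            _table[b] = crc & _mask;
        }
    }

    public ulong Compute(byte[] message)
    {
        ulong crc = _preset;
        foreach (byte m in message)
        {
            int index = (int)(((crc >> (_width - 8)) ^ m) & 0xFF);
            crc = ((crc << 8) ^ _table[index]) & _mask;
        }
        return crc ^ _postXor;
    }
}

// Example usage (MPEG-2 style CRC-32 parameters: no reflection, no final XOR):
// var crc32 = new TableCrc(32, 0x04C11DB7, 0xFFFFFFFF, 0);
// ulong check = crc32.Compute(System.Text.Encoding.ASCII.GetBytes("123456789"));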
CRC-128 and CRC-256 only make sense if the following three points are all true:
You are CPU constrained to the point where a crypto hash would significantly slow you down
Accidental collisions must statistically never happen; 1 in 2^64 is still too high
OTOH deliberate collisions are not a problem
A typical case where 2 and 3 can be true together is if an accidental collision would create a data loss that only affects the sender of the data, and not the platform.