Saving a vector as a single number? - C#

I was wondering if it would be possible to get a vector with an X and a Y value as a single number, knowing that both X and Y can range from -65000 to +65000.
Is this possible in any way?
Code examples showing how to convert to and from such a number would be appreciated.

Store it in a ulong:
ulong rslt = (uint)x;  // reinterpret x's 32-bit pattern (works for negative values too)
rslt = rslt << 32;     // move x into the high 32 bits
rslt |= (uint)y;       // y occupies the low 32 bits
To get it out:
int x = (int)(rslt >> 32);
int y = (int)(rslt & 0xFFFFFFFF);
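For reference, here is the same idea as a self-contained round trip (a sketch with hypothetical Pack/Unpack names):
static ulong Pack(int x, int y)
{
    // x's raw 32-bit pattern goes in the high half, y's in the low half;
    // casting through uint preserves two's-complement bits for negatives.
    return ((ulong)(uint)x << 32) | (uint)y;
}
static (int X, int Y) Unpack(ulong packed)
{
    return ((int)(packed >> 32), (int)(packed & 0xFFFFFFFF));
}
// Unpack(Pack(-65000, 65000)) yields (-65000, 65000).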

Assuming X and Y are both integer values and the combined value does not overflow (32 bits is not enough, so use a 64-bit type), you can use e.g. (pseudocode)
V = fromXY(X, Y) = (y+65000)*130001+(x+65000)
(X,Y) = toXY(V) = (V%130001-65000,V/130001-65000) // <= / is integer division
(130001 is the number of distinct values for X or Y)
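A C# sketch of those formulas (FromXY/ToXY are hypothetical names; the combined value needs a 64-bit type, since 130001 * 130001 exceeds int.MaxValue):
const int Offset = 65000;
const int Range = 130001; // distinct values per coordinate

static long FromXY(int x, int y) => (long)(y + Offset) * Range + (x + Offset);

static (int X, int Y) ToXY(long v) =>
    ((int)(v % Range) - Offset, (int)(v / Range) - Offset);

// ToXY(FromXY(-65000, 65000)) yields (-65000, 65000).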

To combine:
var limit = 65000;
var x = 1;
var y = 2;
var single = x * (limit + 1) + y;
And then:
y = single % (limit + 1);
x = (single - y) / (limit + 1);
Of course, you have to assume that the maximum value for single fits within the data type that stores it (which in this case it does). Note that this version also assumes non-negative x and y; for the -65000 to +65000 range, offset both coordinates by 65000 first, as in the previous answer.

The union does what you want very easily.
See also: http://www.cplusplus.com/doc/tutorial/other_data_types/
#include <iostream>
using namespace std;

typedef long long int64; // plain long is not guaranteed to be 64 bits
typedef int int32;

union {
    struct { int32 a, b; }; // anonymous struct: a widely supported compiler extension
    int64 a_and_b;
} stacker;

int main()
{
    stacker.a = -1000;
    stacker.b = 2000;
    cout << stacker.a << ", " << stacker.b << endl;
    cout << stacker.a_and_b << endl;
    return 0;
}
This will output:
-1000, 2000 <-- a and b read as two int32
8594229558296 <-- a and b interpreted as a single int64 (on a little-endian machine)
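For completeness, since the question is about C#: a rough analogue of this union can be sketched with an explicit struct layout (this assumes a little-endian machine, as the C++ example implicitly does):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct IntPair
{
    [FieldOffset(0)] public int A;      // low 32 bits
    [FieldOffset(4)] public int B;      // high 32 bits
    [FieldOffset(0)] public long AAndB; // both halves viewed as one long
}

// var p = new IntPair { A = -1000, B = 2000 };
// p.AAndB is 8594229558296, matching the C++ output above.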


Pack/Unpack 4 bytes into Integer [closed]

As the title says, I have been trying to pack 4 full 0-255 bytes into one integer in C# using bit shifting. I'm trying to compress some data: currently it takes 40 bytes, but in theory I only need 12, which means packing everything into 3 integers.
Currently my data is:
float3 pos; // Position Relative to Object
float4 color; // RGB is Compressed into X, Y is MatID, Z and W are unused
float3 normal; // A simple Normal -1 to 1 range
But in theory I can compress it to:
int pos; // X, Y, Z, MatID - These are never > 200 nor negative
int color; // R, G, B, Unused Fourth Byte
int normal; // X, Y, Z with [0, 255] mapped to [-1, 1] (128 being 0), unused fourth byte; should be plenty accurate for my needs
So my question is: how would I go about doing this? I'm reasonably new to bit shifting and haven't managed to get much working.
If I understand it right, you need to store 4 values in 4 bytes (one value per byte) and then extract the individual values with bit-shift operations.
You can do it like this:
using System;
public class Program
{
public static void Main()
{
uint pos = 0x4a5b6c7d;
// x is first byte, y is second byte, z is third byte, matId is fourth byte
uint x = (pos & 0xff);
uint y = (pos & 0xff00) >> 8;
uint z = (pos & 0xff0000) >> 16;
uint matId = (pos & 0xff000000) >> 24; // 0xff000000 is a uint literal; (0xff << 24) is a negative int and would not compile here without casts
Console.WriteLine(x + " " + y + " " + z + " " + matId);
Console.WriteLine((0x7d) + " " + (0x6c) + " " + (0x5b) + " " + (0x4a));
}
}
x will be equal to result of 0x4a5b6c7d & 0x000000ff = 0x7d
y will be equal to result of 0x4a5b6c7d & 0x0000ff00 right shifted by 8 bits = 0x6c
Similar for z and matId.
Edit
For packing, you need to use | operator:
Left shift fourth value by 24, say a
Left shift third value by 16, say b
Left shift second value by 8, say c
Leave the first value as-is, say d
Do a binary OR of all 4 and store it in an int: int packed = a | b | c | d
using System;
public class Program
{
static void Unpack(uint p)
{
uint pos = p;
// x is first byte, y is second byte, z is third byte, matId is fourth byte
uint x = (pos & 0xff);
uint y = (pos & 0xff00) >> 8;
uint z = (pos & 0xff0000) >> 16;
uint matId = (pos & 0xff000000) >> 24; // uint mask, as above
Console.WriteLine(x + " " + y + " " + z + " " + matId);
}
static uint Pack(int x, int y, int z, int matId)
{
uint newx = (uint)x, newy = (uint)y, newz = (uint)z, newMatId = (uint)matId; // explicit casts: int does not implicitly convert to uint
uint pos2 = (newMatId << 24) | (newz << 16) | (newy << 8) | newx;
Console.WriteLine(pos2);
return pos2;
}
public static void Main()
{
uint packedInt = Pack(10, 20, 30, 40);
Unpack(packedInt);
}
}
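For the normal component the question describes ([0, 255] mapped to [-1, 1] with 128 as 0), the float-to-byte mapping before packing might look like this (a sketch; PackNormal is a hypothetical helper):
static uint PackNormal(float nx, float ny, float nz)
{
    // Map [-1, 1] to [0, 255]; 0.0f lands on 128, as described in the question.
    static uint ToByte(float v) =>
        (uint)Math.Clamp((int)MathF.Round((v + 1f) * 127.5f), 0, 255);

    return ToByte(nx) | (ToByte(ny) << 8) | (ToByte(nz) << 16);
}
Unpacking reverses the mapping with (b / 127.5f) - 1f, which is accurate to about 0.008 per component.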

Optimization of conversion from OpenCV Mat/Array to OnnxRuntime Tensor?

I am using ONNX Runtime to run inference on a UNet model, and as part of preprocessing I have to convert an Emgu CV matrix to an OnnxRuntime Tensor.
I achieved it using two nested for loops, which is unfortunately quite slow:
var data = new DenseTensor<float>(new[] { 1, 3, WIDTH, HEIGHT});
for (int y = 0; y < HEIGHT; y++)
{
for (int x = 0; x < WIDTH; x++)
{
data[0, 0, x, y] = image.GetValue(2, y, x) / 255.0f;
data[0, 1, x, y] = image.GetValue(1, y, x) / 255.0f;
data[0, 2, x, y] = image.GetValue(0, y, x) / 255.0f;
}
}
Then I found out that there exists a method which converts Array to DenseTensor. I wanted to use this method as follows:
var imgToPredictFloat = new Mat(image.Height, image.Width, DepthType.Cv32F, 3);
image.ConvertTo(imgToPredictFloat, DepthType.Cv32F, 1/255.0);
CvInvoke.CvtColor(imgToPredictFloat, imgToPredictFloat, ColorConversion.Bgra2Rgb);
var data = image.GetData().ToTensor<float>();
var reshaped = data.Reshape(new int[] { 1, 3, WIDTH, HEIGHT});
This would greatly improve performance; however, the layout of the resulting tensor is not the same as the one produced by the for loop, so the model obviously won't work. Any suggestions on how to get the array into the correct layout?
The code above also converts the 0-255 ints to 0-1 floats and swaps the BGR layout to RGB.
This is how I have used cv::Mat with ONNX Runtime (C++):
const wchar_t* model_path = L"C:/data/DNN/ONNX/ResNet/resnet152v2/resnet152-v2-7.onnx";
printf("Using Onnxruntime C++ API\n");
Ort::Session session(env, model_path, session_options);
//*************************************************************************
// print model input layer (node names, types, shape etc.)
Ort::AllocatorWithDefaultOptions allocator;
size_t num_output_nodes = session.GetOutputCount();
std::vector<char*> outputNames;
for (size_t i = 0; i < num_output_nodes; ++i)
{
char* name = session.GetOutputName(i, allocator);
std::cout << "output: " << name << std::endl;
outputNames.push_back(name);
}
// print number of model input nodes
size_t num_input_nodes = session.GetInputCount();
std::vector<const char*> input_node_names(num_input_nodes);
std::vector<int64_t> input_node_dims; // simplify... this model has only 1 input node {1, 3, 224, 224}.
// Otherwise need vector<vector<>>
printf("Number of inputs = %zu\n", num_input_nodes);
// iterate over all input nodes
for (int i = 0; i < num_input_nodes; i++) {
// print input node names
char* input_name = session.GetInputName(i, allocator);
printf("Input %d : name=%s\n", i, input_name);
input_node_names[i] = input_name;
// print input node types
Ort::TypeInfo type_info = session.GetInputTypeInfo(i);
auto tensor_info = type_info.GetTensorTypeAndShapeInfo();
ONNXTensorElementDataType type = tensor_info.GetElementType();
printf("Input %d : type=%d\n", i, type);
// print input shapes/dims
input_node_dims = tensor_info.GetShape();
printf("Input %d : num_dims=%zu\n", i, input_node_dims.size());
for (int j = 0; j < input_node_dims.size(); j++)
printf("Input %d : dim %d=%jd\n", i, j, input_node_dims[j]);
}
cv::Size dnnInputSize;
cv::Scalar mean;
cv::Scalar std;
bool rgb = true;
//cv::Mat inputImage = cv::imread("C:/TestImages/kitten_01.jpg");
cv::Mat inputImage = cv::imread("C:/TestImages/slug_01.jpg");
rgb = true;
dnnInputSize = cv::Size(224, 224);
mean[0] = 0.485;
mean[1] = 0.456;
mean[2] = 0.406;
std[0] = 0.229;
std[1] = 0.224;
std[2] = 0.225;
cv::Mat blob;
// ONNX: (N x 3 x H x W)
cv::dnn::blobFromImage(inputImage, blob, 1.0 / 255.0, dnnInputSize, mean, rgb, false);
size_t input_tensor_size = blob.total();
std::vector<float> input_tensor_values(input_tensor_size);
for (size_t i = 0; i < input_tensor_size; ++i)
{
input_tensor_values[i] = blob.at<float>(i);
}
std::vector<const char*> output_node_names = { outputNames.front() };
// create input tensor object from data values
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
Ort::Value input_tensor = Ort::Value::CreateTensor<float>(memory_info, input_tensor_values.data(), input_tensor_size, input_node_dims.data(), 4);
assert(input_tensor.IsTensor());
// score model & input tensor, get back output tensor
auto output_tensors = session.Run(Ort::RunOptions{ nullptr }, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 1);
assert(output_tensors.size() == 1 && output_tensors.front().IsTensor());
// Get pointer to output tensor float values
float* floatarr = output_tensors.front().GetTensorMutableData<float>();
assert(abs(floatarr[0] - 0.000045) < 1e-6);
cv::Mat1f result = cv::Mat1f(1000, 1, floatarr);
cv::Point classIdPoint;
double confidence = 0;
minMaxLoc(result, 0, &confidence, 0, &classIdPoint);
int classId = classIdPoint.y;
std::cout << "confidence: " << confidence << std::endl;
std::cout << "class: " << classId << std::endl;
The actual conversion part that you need is, IMHO, this (adjust the size and mean/std according to your network):
cv::Mat inputImage = cv::imread("C:/TestImages/slug_01.jpg");
rgb = true;
dnnInputSize = cv::Size(224, 224);
mean[0] = 0.485;
mean[1] = 0.456;
mean[2] = 0.406;
std[0] = 0.229;
std[1] = 0.224;
std[2] = 0.225;
cv::Mat blob;
// ONNX: (N x 3 x H x W)
cv::dnn::blobFromImage(inputImage, blob, 1.0 / 255.0, dnnInputSize, mean, rgb, false);
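Back to the original C# question: note that Reshape alone cannot fix the layout, because it reinterprets the buffer without moving any elements, while going from interleaved HWC to planar CHW requires a transpose. A minimal sketch of that transpose (hypothetical helper; assumes pixels is the flat BGR byte array from Mat.GetData()):
using Microsoft.ML.OnnxRuntime.Tensors;

static DenseTensor<float> ToNchwTensor(byte[] pixels, int height, int width)
{
    var buffer = new float[3 * height * width];
    int plane = height * width;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int src = (y * width + x) * 3; // interleaved BGR index
            int dst = y * width + x;       // planar pixel index
            buffer[0 * plane + dst] = pixels[src + 2] / 255f; // R plane
            buffer[1 * plane + dst] = pixels[src + 1] / 255f; // G plane
            buffer[2 * plane + dst] = pixels[src + 0] / 255f; // B plane
        }
    }
    return new DenseTensor<float>(buffer, new[] { 1, 3, height, width });
}
A single pass over a flat array like this is typically much faster than indexing the tensor element by element.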

Get, set, and convert bits in a short value in C#

I have a short value X:
short X=1; //Result in binary: 0000000000000001
I need to split it into an array of bits and set some of them (say bits 6 and 10): // result in binary: 0000010001000001
Then I need to convert it back to a short value.
How can I do this painlessly?
Could you please help?
1. Manual solution
Setting bit 6 and 10:
myValue |= (1 << 6)|(1 << 10);
Clearing bit 6 and 10:
myValue &= ~((1 << 6)|(1 << 10));
2. Use BitArray
var bits = new BitArray(16); // a short has 16 bits
bits[6] = true;
bits[10] = true;
Convert back to short:
var raw = new byte[2];
bits.CopyTo(raw, 0);
var asShort = BitConverter.ToInt16(raw, 0);
If what you are referring to is a very basic encryption, then perhaps the XOR (^) operator would be better suited to your needs.
short FlipBits(short original, params int[] bitsToFlip)
{
    int key = 0;
    foreach (int b in bitsToFlip)
    {
        if (b >= 0 && b < 16) // ignore positions outside a short's 16 bits
        {
            key |= 1 << b;
        }
    }
    // XOR toggles exactly the selected bits; applying it twice restores the original
    return (short)(original ^ key);
}
This method will both set and reset the bits that you desire (it flips bits, not bytes, hence the name). For example:
short X = 1;
short XEncrypt = FlipBits(X, 6, 10);
short XDecrypt = FlipBits(XEncrypt, 6, 10);
// X = 1 , Binary = 0000000000000001
// XEncrypt = 1089 , Binary = 0000010001000001
// XDecrypt = 1 , Binary = 0000000000000001
If you have an int value "intValue" and you want to set a specific bit at position "bitPosition", do something like:
intValue = intValue | (1 << bitPosition);
or shorter:
intValue |= 1 << bitPosition;
If you want to reset a bit (i.e, set it to zero), you can do this:
intValue &= ~(1 << bitPosition);
(The operator ~ reverses each bit in a value, thus ~(1 << bitPosition) will result in an int where every bit is 1 except the bit at the given bitPosition.)
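Putting the pieces together, a small sketch of set, clear, and test (hypothetical helper names):
static int SetBit(int value, int bit) => value | (1 << bit);
static int ClearBit(int value, int bit) => value & ~(1 << bit);
static bool IsBitSet(int value, int bit) => (value & (1 << bit)) != 0;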
LINQ solution: terser, but probably less readable than a foreach loop:
using System.Linq;
...
short X = 1;
var bitsToSet = new[] { 6, 10 };
var result = X | bitsToSet.Aggregate(0, (s, a) => s | (1 << a)); // seed with 0 so the first element is treated as a bit position, not an initial mask
If you insist on short, add a cast:
short result = (short)(X | bitsToSet.Aggregate(0, (s, a) => s | (1 << a)));

How to extract multiple parameters from a binary chromosome

I am trying to use AForge.Net Genetics library to create a simple application for optimization purposes. I have a scenario where I have four input parameters, therefore I tried to modify the "OptimizationFunction2D.cs" class located in the AForge.Genetic project to handle four parameters.
While converting the binary chromosomes into 4 parameters (of type double), I am not sure if my approach is correct, as I don't know how to verify the extracted values. Below is the code segment where my code differs from the original AForge code:
public double[] Translate( IChromosome chromosome )
{
// get chromosome's value
ulong val = ((BinaryChromosome) chromosome).Value;
// chromosome's length
int length = ((BinaryChromosome) chromosome).Length;
// length of W component
int wLength = length/4;
// length of X component
int xLength = length / 4;
// length of Y component
int yLength = length / 4;
// length of Z component
int zLength = length / 4;
// W maximum value - equal to X mask
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
// X maximum value
ulong xMax = 0xFFFFFFFFFFFFFFFF >> (64 - xLength);
// Y maximum value - equal to X mask
ulong yMax = 0xFFFFFFFFFFFFFFFF >> (64 - yLength);
// Z maximum value
ulong zMax = 0xFFFFFFFFFFFFFFFF >> (64 - zLength);
// W component
double wPart = val & wMax;
// X component;
double xPart = (val >> wLength) & xMax;
// Y component;
double yPart = (val >> (wLength + xLength)) & yMax;
// Z component;
double zPart = val >> (wLength + xLength + yLength);
// translate to optimization's function space
double[] ret = new double[4];
ret[0] = wPart * _rangeW.Length / wMax + _rangeW.Min;
ret[1] = xPart * _rangeX.Length / xMax + _rangeX.Min;
ret[2] = yPart * _rangeY.Length / yMax + _rangeY.Min;
ret[3] = zPart * _rangeZ.Length / zMax + _rangeZ.Min;
return ret;
}
I am not sure if I am correctly separating the chromosome value into four parts (wPart/xPart/yPart/zPart). The original function in the AForge.Genetic library looks like this:
public double[] Translate( IChromosome chromosome )
{
// get chromosome's value
ulong val = ( (BinaryChromosome) chromosome ).Value;
// chromosome's length
int length = ( (BinaryChromosome) chromosome ).Length;
// length of X component
int xLength = length / 2;
// length of Y component
int yLength = length - xLength;
// X maximum value - equal to X mask
ulong xMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - xLength );
// Y maximum value
ulong yMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - yLength );
// X component
double xPart = val & xMax;
// Y component;
double yPart = val >> xLength;
// translate to optimization's function space
double[] ret = new double[2];
ret[0] = xPart * rangeX.Length / xMax + rangeX.Min;
ret[1] = yPart * rangeY.Length / yMax + rangeY.Min;
return ret;
}
Can someone please confirm whether my conversion process is correct, or suggest a better way of doing it?
It works, but it doesn't need to be so complicated.
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
This expression yields the same value for wMax, xMax, yMax, and zMax, so compute it once and call it componentMask. Then each component is
part = (val >> (wLength * pos)) & componentMask;
where pos is the 0-based position of the component: 0 for W, 1 for X, and so on.
The rest is fine.
EDIT:
If the length is not divisible by 4, you can make the last part just val >> (wLength * pos) so that it picks up all the remaining bits.
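For illustration, the simplified Translate might look like this (a sketch assuming the asker's _rangeW.._rangeZ fields and a chromosome length divisible by 4):
public double[] Translate(IChromosome chromosome)
{
    ulong val = ((BinaryChromosome)chromosome).Value;
    int length = ((BinaryChromosome)chromosome).Length;
    int partLength = length / 4;
    // One mask serves all four equally sized components.
    ulong componentMask = 0xFFFFFFFFFFFFFFFF >> (64 - partLength);
    var ranges = new[] { _rangeW, _rangeX, _rangeY, _rangeZ };
    double[] ret = new double[4];
    for (int pos = 0; pos < 4; pos++)
    {
        double part = (val >> (partLength * pos)) & componentMask;
        ret[pos] = part * ranges[pos].Length / componentMask + ranges[pos].Min;
    }
    return ret;
}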

Concatenate three 4-bit values

I am trying to get the original 12-bit value from a base15 (edit) string. I figured that I need a zero-fill right shift operator, like Java's >>>, to deal with the zero padding. How do I do this?
No luck so far with the following code:
static string chars = "0123456789ABCDEFGHIJKLMNOP";
static int FromStr(string s)
{
int n = (chars.IndexOf(s[0]) << 4) +
(chars.IndexOf(s[1]) << 4) +
(chars.IndexOf(s[2]));
return n;
}
Edit: I'll post the full code to complete the context.
static string chars = "0123456789ABCDEFGHIJKLMNOP";
static void Main()
{
int n = FromStr(ToStr(182));
Console.WriteLine(n);
Console.ReadLine();
}
static string ToStr(int n)
{
if (n <= 4095)
{
char[] cx = new char[3];
cx[0] = chars[n >> 8];
cx[1] = chars[(n >> 4) & 25];
cx[2] = chars[n & 25];
return new string(cx);
}
return string.Empty;
}
static int FromStr(string s)
{
int n = (chars.IndexOf(s[0]) << 8) +
(chars.IndexOf(s[1]) << 4) +
(chars.IndexOf(s[2]));
return n;
}
Your representation is base26, so the answer that you are going to get from a three-character value is not going to be 12 bits: it's going to be in the range 0..17575, inclusive, which requires 15 bits.
Recall that shifting left by k bits is the same as multiplying by 2^k. Hence, your x << 4 operations are equivalent to multiplying by 16. Also recall that when you convert a base-X number, each digit must be multiplied by the corresponding power of X, so your code should multiply by 26 rather than shift left, like this:
int n = (chars.IndexOf(s[0]) * 26*26) +
(chars.IndexOf(s[1]) * 26) +
(chars.IndexOf(s[2]));
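For the reverse direction, the counterpart uses division and modulo by 26 instead of shifts and masks (a sketch reusing the chars table from the question):
static string ToStr(int n)
{
    // Three base-26 digits cover 0..17575.
    char[] cx = new char[3];
    cx[0] = chars[(n / (26 * 26)) % 26];
    cx[1] = chars[(n / 26) % 26];
    cx[2] = chars[n % 26];
    return new string(cx);
}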
