I have created a .NET assembly from MATLAB and used it in a C# Windows application and a C# web application, where it works fine. Now I want to use this assembly in Xamarin, but I am getting an error.
C# code should work in Xamarin too, shouldn't it?
In Windows, when I fetch the R, G, and B channels of an image it works fine, but when I try to do the same thing in Xamarin it gives an error.
The Windows code is:
int i, j;
Bitmap image = (Bitmap)pictureBox1.Image;
int width = image.Width;
int height = image.Height;
Bitmap processed_image = new Bitmap(width, height);
try
{
    byte[,,] rgb = new byte[3, height, width];
    for (i = 0; i < height; i++)
    {
        for (j = 0; j < width; j++)
        {
            rgb[0, i, j] = image.GetPixel(j, i).R;
            rgb[1, i, j] = image.GetPixel(j, i).G;
            rgb[2, i, j] = image.GetPixel(j, i).B;
        }
    }
    MWNumericArray narr = rgb;
    Salt obj = new Salt();
    MWArray u = obj.classification(narr);
    label1.Text = u.ToString();
    // MessageBox.Show(u.ToString());
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}
and the Xamarin code is:
int i, j;
Bitmap image = bitmapimage; // the image (either taken from the gallery or captured using the camera)
int width = image.Width;
int height = image.Height;
byte[,,] rgb = new byte[3, height, width];
for (i = 0; i < height; i++)
{
    for (j = 0; j < width; j++)
    {
        rgb[0, i, j] = image.GetPixel(j, i).R;
        rgb[1, i, j] = image.GetPixel(j, i).G;
        rgb[2, i, j] = image.GetPixel(j, i).B;
    }
}
MWNumericArray narr = rgb;
Salt obj = new Salt();
MWArray u = obj.classification(narr);
textview.SetText(u);
I am getting the error when fetching the R, G, and B channels:
'int' does not contain a definition for 'R' and no extension method 'R' accepting a first argument of type 'int' could be found (are you missing a using directive or an assembly reference?)
Please help me solve this error. I would be really grateful.
Thanks.
It seems to me that you are using Android.Graphics.Bitmap. The GetPixel method of this class returns an int, which should be converted to an Android.Graphics.Color. Change your code to this:
var color = new Android.Graphics.Color(image.GetPixel(j, i));
rgb[0, i, j] = color.R;
rgb[1, i, j] = color.G;
rgb[2, i, j] = color.B;
Also, as a side note, GetPixel is slow, very slow. I suggest working with the raw binary data using the Android.Graphics.Bitmap.CopyPixelsToBuffer() method. Read more here:
https://developer.xamarin.com/api/member/Android.Graphics.Bitmap.CopyPixelsToBuffer/p/Java.Nio.Buffer/
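For example, here is a minimal sketch of the buffer-based approach, reusing image, width, and height from the question (assumptions: the bitmap's config is ARGB_8888 and its rows are packed, i.e. RowBytes == Width * 4; outside those assumptions the offsets below are wrong):
// Copy every pixel out in one call instead of width * height GetPixel calls.
byte[] pixels = new byte[image.ByteCount];
var buffer = Java.Nio.ByteBuffer.Allocate(image.ByteCount);
image.CopyPixelsToBuffer(buffer);
buffer.Rewind();
buffer.Get(pixels);

// For ARGB_8888 the in-memory byte order is R, G, B, A.
byte[,,] rgb = new byte[3, height, width];
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        int offset = (y * width + x) * 4;
        rgb[0, y, x] = pixels[offset];     // R
        rgb[1, y, x] = pixels[offset + 1]; // G
        rgb[2, y, x] = pixels[offset + 2]; // B
    }
}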
I am trying to take a SURF algorithm in MATLAB and convert it to C#.
The algorithm in MATLAB returns an array of coordinates. The size of the array is [10,4].
In C# I wrote code that doesn't return the right information in the array.
Did I miss something when I converted this code?
private static void Main(string[] args)
{
    Bitmap img1 = new Bitmap(Image.FromFile(@"C:\Users\cbencham\source\repos\3.jpg"));
    Bitmap img2 = new Bitmap(Image.FromFile(@"C:\Users\cbencham\source\repos\4.jpg"));

    // Get image dimensions
    int width = img1.Width;
    int height = img1.Height;
    // Declare the double array of grayscale values to be read from "bitmap"
    double[,] im1 = new double[width, height];
    // Loop to read the data from the Bitmap image into the double array
    int i, j;
    for (i = 0; i < width; i++)
    {
        for (j = 0; j < height; j++)
        {
            Color pixelColor = img1.GetPixel(i, j);
            double b = pixelColor.GetBrightness(); // the Brightness component
            // Note that rows in C# correspond to columns in MWArray
            im1.SetValue(b, i, j);
        }
    }

    // Get image dimensions
    width = img2.Width;
    height = img2.Height;
    // Declare the double array of grayscale values to be read from "bitmap"
    double[,] im2 = new double[width, height];
    // Loop to read the data from the Bitmap image into the double array
    for (i = 0; i < width; i++)
    {
        for (j = 0; j < height; j++)
        {
            Color pixelColor = img2.GetPixel(i, j);
            double b = pixelColor.GetBrightness(); // the Brightness component
            // Note that rows in C# correspond to columns in MWArray
            im2.SetValue(b, i, j);
        }
    }

    MLApp.MLApp matlab = new MLApp.MLApp();
    matlab.Execute(@"cd C:\Users\cbencham\source\repos");
    object result = null;
    matlab.Feval("surf", 1, out result, im1, im2);
    // TODO: convert result to a double array [10,4]
    for (i = 0; i < 10; i++)
    {
        for (j = 0; j < 4; j++)
        {
            Console.Write(arr[i, j]);
        }
        Console.WriteLine();
    }
    Console.ReadLine();
}
You can't convert an object array to a double array. But for your purpose you don't need to: you just need to cast it to an object array and print its contents.
var arr = result as object[,];
or
if (result is object[,] arr)
{
    for (i = 0; i < arr.GetLength(0); i++)
    {
        for (j = 0; j < arr.GetLength(1); j++)
        {
            Console.Write(arr[i, j]);
        }
        Console.WriteLine();
    }
    Console.ReadLine();
}
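If you then want the values as a double[10,4] (the TODO in the question), and assuming the boxed elements really are doubles, a small unboxing loop is enough (coords is an illustrative name):
if (result is object[,] src)
{
    double[,] coords = new double[src.GetLength(0), src.GetLength(1)];
    for (int r = 0; r < src.GetLength(0); r++)
    {
        for (int c = 0; c < src.GetLength(1); c++)
        {
            coords[r, c] = Convert.ToDouble(src[r, c]); // unbox each element
        }
    }
}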
I wrote an algorithm in MATLAB, and now I want to use it from .NET. I converted the .m file to a .dll for .NET with MATLAB Library Compiler, and that worked. First I tried compiling a MATLAB function without any parameters, and it worked well in .NET. However, when I want to use a function with a parameter, I get the error below. The parameter is basically an image; I call the function in MATLAB like this: I = imread('xxx.jpg'); Detect(I); So my C# code is:
static void Main(string[] args)
{
    DetectDots detectDots = null;
    Bitmap bitmap = new Bitmap("xxx.jpg");
    // Get image dimensions
    int width = bitmap.Width;
    int height = bitmap.Height;
    // Declare the double array of grayscale values to be read from "bitmap"
    double[,] bnew = new double[width, height];
    // Loop to read the data from the Bitmap image into the double array
    int i, j;
    for (i = 0; i < width; i++)
    {
        for (j = 0; j < height; j++)
        {
            Color pixelColor = bitmap.GetPixel(i, j);
            double b = pixelColor.GetBrightness(); // the Brightness component
            bnew.SetValue(b, i, j);
        }
    }
    MWNumericArray arr = bnew;
    try
    {
        detectDots = new DetectDots();
        detectDots.Detect(arr);
        Console.ReadLine();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
I used the same image, xxx.jpg, which is 5312 x 2988.
But I'm getting the error below.
... MWMCR::EvaluateFunction error ...
Index exceeds matrix dimensions.
Error in => DetectDots.m at line 12.
... Matlab M-code Stack Trace ...
at
file C:\Users\TAHAME~1\AppData\Local\Temp\tahameral\mcrCache9.0\MTM_220\MTM\DetectDots.m, name DetectDots, line 12.
The important thing is that it says "Index exceeds matrix dimensions". Is this the right way to convert a Bitmap to an MWArray? What is the problem?
I realised that rows in C# correspond to columns in MWArray, so I changed two small things in the code.
Instead of double[,] bnew = new double[width, height]; I use double[,] bnew = new double[height, width];
and instead of bnew.SetValue(b, i, j); I use bnew.SetValue(b, j, i);
Anyone who needs the whole Bitmap-to-MWArray conversion can use the code below:
Bitmap bitmap = new Bitmap("001-2.bmp");
// Get image dimensions
int width = bitmap.Width;
int height = bitmap.Height;
// Declare the double array of grayscale values to be read from "bitmap"
double[,] bnew = new double[height, width];
// Loop to read the data from the Bitmap image into the double array
int i, j;
for (i = 0; i < width; i++)
{
    for (j = 0; j < height; j++)
    {
        Color pixelColor = bitmap.GetPixel(i, j);
        double b = pixelColor.GetBrightness(); // the Brightness component
        // Note that rows in C# correspond to columns in MWArray
        bnew.SetValue(b, j, i);
    }
}
MWNumericArray arr = bnew;
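A quick sanity check of the mapping (a sketch; the 2x3 array is just an illustration, and it assumes MWArray exposes its size through the Dimensions property as in current versions of the MATLAB .NET API): an h-by-w C# array should surface as an h-by-w MATLAB matrix, matching imread's height-by-width layout.
double[,] probe = { { 1, 2, 3 }, { 4, 5, 6 } }; // 2 rows, 3 columns in C#
MWNumericArray m = probe;
Console.WriteLine(m.Dimensions[0]); // expected: 2 (MATLAB rows)
Console.WriteLine(m.Dimensions[1]); // expected: 3 (MATLAB columns)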
I am calculating the average of the RGB channels of images in C# and MATLAB and getting slightly different results (I am using 0-255 pixel values).
The difference is not large, but I just can't seem to understand the reason.
Is this common? Is it due to the bitmap implementation of the image, or a precision issue, or does it mean there is something wrong with my code?
Code:
MATLAB
I = imread('Photos\hv2512.jpg');
Ir=double(I(:,:,1));
Ig=double(I(:,:,2));
Ib=double(I(:,:,3));
avRed=mean2(Ir)
avGn=mean2(Ig)
avBl=mean2(Ib)
C#
Bitmap bmp = new Bitmap(open.FileName);
double[,] Red = new double[bmp.Width, bmp.Height];
double[,] Green = new double[bmp.Width, bmp.Height];
double[,] Blue = new double[bmp.Width, bmp.Height];
int PixelSize = 3;
BitmapData bmData = null;
if (Safe)
{
    Color c;
    for (int j = 0; j < bmp.Height; j++)
    {
        for (int i = 0; i < bmp.Width; i++)
        {
            c = bmp.GetPixel(i, j);
            Red[i, j] = (double)c.R;
            Green[i, j] = (double)c.G;
            Blue[i, j] = (double)c.B;
        }
    }
}
double avRed = 0, avGrn = 0, avBlue = 0;
double sumRed = 0, sumGrn = 0, sumBlue = 0;
int cnt = 0;
for (int rws = 0; rws < Red.GetLength(0); rws++)
{
    for (int clms = 0; clms < Red.GetLength(1); clms++)
    {
        sumRed = sumRed + Red[rws, clms];
        sumGrn = sumGrn + Green[rws, clms];
        sumBlue = sumBlue + Blue[rws, clms];
        cnt++;
    }
}
avRed = sumRed / cnt;
avGrn = sumGrn / cnt;
avBlue = sumBlue / cnt;
This is the image I am using
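One way to narrow this down (a diagnostic sketch, not a fix): accumulate the channel sums as integers, so the averaging itself cannot lose precision. If the averages still differ from MATLAB's, the decoded pixel values themselves differ (different JPEG decoders may legitimately produce slightly different pixels), rather than the arithmetic.
long sumR = 0, sumG = 0, sumB = 0;
for (int y = 0; y < bmp.Height; y++)
{
    for (int x = 0; x < bmp.Width; x++)
    {
        Color c = bmp.GetPixel(x, y);
        sumR += c.R; sumG += c.G; sumB += c.B; // exact integer sums
    }
}
long n = (long)bmp.Width * bmp.Height;
double avR = (double)sumR / n, avG = (double)sumG / n, avB = (double)sumB / n;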
This is my first time working with C#. I am reading a few image files, doing some calculations, and outputting an array of doubles. I need to be able to save this double array (or these arrays, since I will have multiple) to a grayscale image. I have been looking around on the internet and couldn't find much. I have done this in Python and MATLAB, but C# doesn't seem as friendly to me. Here is what I have done so far (for the image creation).
static Image MakeImage(double[,] data)
{
    Image img = new Bitmap(data.GetUpperBound(1), data.GetUpperBound(0));
    //Bitmap bitmap = new Bitmap(data.GetUpperBound(1), data.GetUpperBound(0));
    for (int i = 0; i < data.GetUpperBound(1); i++)
    {
        for (int k = 0; k < data.GetUpperBound(0); k++)
        {
            //bitmap.SetPixel(k, i, Color.FromArgb((int)data[i, k], (int)data[i, k], (int)data[i, k]));
        }
    }
    return img;
}
This code actually doesn't do much yet. It creates my blank image template, but Color.FromArgb doesn't take double as input, and I have no idea how to create an image from the data... I am stuck =)
Thank you in advance.
If you can accept using an unsafe block, this is pretty fast:
// Requires: using System.Drawing; using System.Drawing.Imaging; using System.Linq;
// and "Allow unsafe code" enabled in the project's build settings.
private Image CreateImage(double[,] data)
{
    // Min()/Max() are LINQ extensions on IEnumerable<double>, so the
    // two-dimensional array must be flattened with Cast<double>() first.
    double min = data.Cast<double>().Min();
    double max = data.Cast<double>().Max();
    double range = max - min;
    byte v;
    Bitmap bm = new Bitmap(data.GetLength(0), data.GetLength(1));
    BitmapData bd = bm.LockBits(new Rectangle(0, 0, bm.Width, bm.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    // This is much faster than calling Bitmap.SetPixel() for each pixel.
    unsafe
    {
        byte* ptr = (byte*)bd.Scan0;
        for (int j = 0; j < bd.Height; j++)
        {
            for (int i = 0; i < bd.Width; i++)
            {
                // Scale each value into 0..255, flipping vertically so data row 0 ends up at the bottom.
                v = (byte)(255 * (data[i, bd.Height - 1 - j] - min) / range);
                ptr[0] = v;         // blue
                ptr[1] = v;         // green
                ptr[2] = v;         // red
                ptr[3] = (byte)255; // alpha: fully opaque
                ptr += 4;
            }
            ptr += (bd.Stride - (bd.Width * 4)); // skip any row padding
        }
    }
    bm.UnlockBits(bd);
    return bm;
}
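A usage sketch (the gradient data and the file name are just for illustration):
double[,] data = new double[256, 256];
for (int x = 0; x < 256; x++)
    for (int y = 0; y < 256; y++)
        data[x, y] = x + y; // simple diagonal gradient

Image img = CreateImage(data);
img.Save("grayscale.png", ImageFormat.Png);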
I am writing a class for printing bitmaps to a portable Bluetooth printer in Android via Mono for Android. My class is used to obtain the pixel data from the stream so that it can be sent to the printer in the correct format. Right now the class is simple: it just reads the height, width, and bits per pixel.
Using the offset, it reads and returns the pixel data to the printer. Right now I am just working with 1-bit-per-pixel black and white images. The bitmaps I am working with are in Windows format.
Here is the original image:
Here is the result of printing. The first image is without any transformation, and the second one is the result of modifying the BitArray with the following code:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
for (int i = 0, j = bits.Length - 1; i < bits.Length; i++, j--)
{
    flippedBits[i] = bits[j];
}
My question is:
How do I flip the image vertically when I am working with a byte array? I am having trouble finding an algorithm for this; all examples seem to suggest using established graphics libraries, which I cannot use.
Edit:
My bitmap is saved in a one-dimensional array, with the first row's bytes, then the second row's, and so on.
You need to do something like this:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
// Mirror each row horizontally (left-right), one row at a time.
for (int i = 0; i < bits.Length; i += width) {
    for (int j = 0, k = width - 1; j < width; ++j, --k) {
        flippedBits[i + j] = bits[i + k];
    }
}
If you need to mirror the picture upside-down, use this code:
BitArray bits = new BitArray(returnBytes);
BitArray flippedBits = new BitArray(bits);
// Copy whole rows, pairing row i from the top with row j from the bottom.
for (int i = 0, j = bits.Length - width; i < bits.Length; i += width, j -= width) {
    for (int k = 0; k < width; ++k) {
        flippedBits[i + k] = bits[j + k];
    }
}
For the format with width*height bits in row order, you just need to view the bit array as a two-dimensional array.
for (int row = 0; row < height; ++row) {
    for (int column = 0; column < width; ++column) {
        flippedBits[row*width + column] = bits[row*width + (width-1 - column)];
    }
}
It would be a bit more complicated if there were more than one bit per pixel.
You need to use two loops: the first iterates over all the rows and the second iterates over the bytes inside each row. Note that this version works on the raw byte arrays (at 1 bpp each row is width/8 bytes), not on a BitArray, so it moves whole rows without touching the bit order inside each byte:
for (int y = 0; y < height; y++)
{
    int row_start = (width/8) * y;                  // first byte of row y
    int flipped_row = (width/8) * (height-1 - y);   // first byte of the mirrored row
    for (int x = 0; x < width/8; x++)
    {
        flippedBits[flipped_row+x] = bits[row_start+x];
    }
}
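Since a vertical flip only reorders whole rows of bytes, the inner loop can also be replaced with Array.Copy. A minimal sketch, assuming (as above) that rows are packed to whole bytes, so the stride is width / 8, and that returnBytes holds the rows top to bottom:
int stride = width / 8;
byte[] flipped = new byte[returnBytes.Length];
for (int y = 0; y < height; y++)
{
    // Copy row y of the source into row (height - 1 - y) of the destination.
    Array.Copy(returnBytes, y * stride, flipped, (height - 1 - y) * stride, stride);
}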