Map Temperature Data on a Bitmap in C#

I have a 2D array containing temperature data from a numerically solved heat transfer problem in C#. To visualize the temperature distribution I draw it onto a Bitmap, where the lowest temperature is shown in blue and the hottest in red.
The problem is that it takes too much time to generate the bitmap even for a 300x300 image, and I am trying to work with larger sizes, which makes it impossible.
Is there a more efficient way to make this work?
Any help would be greatly appreciated.
Here's a little bit of my code and a generated bitmap:
//RGB Struct
struct RGB
{
    public Int32 num;
    public int red;
    public int green;
    public int blue;
    public RGB(Int32 num)
    {
        int[] color = new int[3];
        int i = 2;
        while (num > 0)
        {
            color[i] = num % 256;
            num = num - color[i];
            num = num / 256;
            i--;
        }
        this.red = color[0];
        this.green = color[1];
        this.blue = color[2];
        this.num = (256 * 256) * color[0] + 256 * color[1] + color[2];
    }
}
//Create Color Array
Int32 red = 16711680;
Int32 blue = 255;
Int32[,] decimalColor = new Int32[Nx, Ny];
for (int i = 0; i < Nx; i++)
{
    for (int j = 0; j < Ny; j++)
    {
        double alpha = (T_new[i, j] - T_min) / (T_max - T_min);
        double C = alpha * (red - blue);
        decimalColor[i, j] = Convert.ToInt32(C) + blue;
    }
}
//Bitmap Result
Bitmap bmp = new Bitmap(Nx, Ny);
for (int i = 0; i < Nx; i++)
{
    for (int j = 0; j < Ny; j++)
    {
        RGB rgb = new RGB(decimalColor[i, j]);
        bmp.SetPixel(i, j, Color.FromArgb(rgb.red, rgb.green, rgb.blue));
    }
}
pictureBox1.Image = bmp;
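For reference, the per-pixel SetPixel calls are typically the bottleneck here. A minimal sketch of the same blue-to-red mapping written straight into the bitmap's buffer with LockBits (this is an assumption, not the original code; it presumes a 24bpp pixel format and using System.Drawing.Imaging) might look like this:
// Sketch only: fill a 24bpp bitmap through LockBits instead of SetPixel.
Bitmap bmp = new Bitmap(Nx, Ny, PixelFormat.Format24bppRgb);
BitmapData bd = bmp.LockBits(new Rectangle(0, 0, Nx, Ny),
                             ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
byte[] buffer = new byte[bd.Stride * Ny];
for (int j = 0; j < Ny; j++)
{
    for (int i = 0; i < Nx; i++)
    {
        double alpha = (T_new[i, j] - T_min) / (T_max - T_min);
        int offset = j * bd.Stride + i * 3;
        buffer[offset] = (byte)(255 * (1 - alpha));   // blue (stored first in 24bpp BGR)
        buffer[offset + 1] = 0;                       // green
        buffer[offset + 2] = (byte)(255 * alpha);     // red
    }
}
System.Runtime.InteropServices.Marshal.Copy(buffer, 0, bd.Scan0, buffer.Length);
bmp.UnlockBits(bd);
pictureBox1.Image = bmp;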

Related

How to convert MATLAB to C# when the output is an array?

I am trying to take the SURF algorithm in MATLAB and convert it to C#.
The algorithm in MATLAB returns an array of coordinates; the size of the array is [10,4].
In C# I wrote code that does not return the right information in the array.
Did I miss something when I converted this code?
private static void Main(string[] args)
{
Bitmap img1 = new Bitmap(Image.FromFile(@"C:\Users\cbencham\source\repos\3.jpg"));
Bitmap img2 = new Bitmap(Image.FromFile(@"C:\Users\cbencham\source\repos\4.jpg"));
//Get image dimensions
int width = img1.Width;
int height = img1.Height;
//Declare the double array of grayscale values to be read from "bitmap"
double[,] im1 = new double[width, height];
//Loop to read the data from the Bitmap image into the double array
int i, j;
for (i = 0; i < width; i++)
{
for (j = 0; j < height; j++)
{
Color pixelColor = img1.GetPixel(i, j);
double b = pixelColor.GetBrightness(); //the Brightness component
//Note that rows in C# correspond to columns in MWarray
im1.SetValue(b, i, j);
}
}
//Get image dimensions
width = img2.Width;
height = img2.Height;
//Declare the double array of grayscale values to be read from "bitmap"
double[,] im2 = new double[width, height];
//Loop to read the data from the Bitmap image into the double array
for (i = 0; i < width; i++)
{
for (j = 0; j < height; j++)
{
Color pixelColor = img2.GetPixel(i, j);
double b = pixelColor.GetBrightness(); //the Brightness component
//Note that rows in C# correspond to columns in MWarray
im2.SetValue(b, i, j);
}
}
MLApp.MLApp matlab = new MLApp.MLApp();
matlab.Execute(@"cd C:\Users\cbencham\source\repos");
object result = null;
matlab.Feval("surf", 1, out result, im1, im2);
// TODO: convert result to double Array [10,4]
for (i = 0; i < 10; i++)
{
for (j = 0; j < 4; j++)
{
Console.Write(arr[i, j]);
}
Console.WriteLine();
}
Console.ReadLine();
}
}
}
You can't convert an object array to a double array, but for your purpose you don't need to: just cast it to an object array and print its contents.
var arr = result as object[,];
or
if (result is object[,] arr)
{
for (i = 0; i < arr.GetLength(0); i++)
{
for (j = 0; j < arr.GetLength(1); j++)
{
Console.Write(arr[i, j]);
}
Console.WriteLine();
}
Console.ReadLine();
}
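If a numeric double[10,4] is actually needed afterwards, the cast can be combined with an element-wise conversion. A rough sketch (assuming every element of the returned object[,] is a boxed number, which is not guaranteed for every MATLAB return type):
// Sketch only: element-wise conversion of the MATLAB result into a double[,].
if (result is object[,] objArr)
{
    double[,] coords = new double[objArr.GetLength(0), objArr.GetLength(1)];
    for (int r = 0; r < objArr.GetLength(0); r++)
    {
        for (int c = 0; c < objArr.GetLength(1); c++)
        {
            coords[r, c] = Convert.ToDouble(objArr[r, c]);
        }
    }
}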

Fast/Optimized way to flip an RGBA image vertically

I have a byte[] for an RGBA array and the following method that flips the image vertically:
private byte[] FlipPixelsVertically(byte[] frameData, int height, int width)
{
byte[] data = new byte[frameData.Length];
int k = 0;
for (int j = height - 1; j >= 0 && k < height; j--)
{
for (int i = 0; i < width * 4; i++)
{
data[k * width * 4 + i] = frameData[j * width * 4 + i];
}
k++;
}
return data;
}
The reason I am creating a new byte[] is that I do not want to alter the contents of frameData, since the original data will be used elsewhere. So for now I just have a nested for loop that copies each byte to the proper place in data.
As height and width increase this becomes an expensive operation. How can I optimize it so that the copy/swap is faster?
Using Buffer.BlockCopy:
private byte[] FlipPixelsVertically(byte[] frameData, int height, int width)
{
    byte[] data = new byte[frameData.Length];
    for (int k = 0; k < height; k++)
    {
        int j = height - k - 1;
        // copy source row k into destination row j in one call
        Buffer.BlockCopy(
            frameData, k * width * 4,
            data, j * width * 4,
            width * 4);
    }
    return data;
}
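Another option along the same lines, if cloning the buffer once is acceptable, is to swap rows in place with a single temporary row buffer; a rough sketch:
// Sketch: flips a cloned RGBA buffer by swapping rows top-to-bottom.
private byte[] FlipPixelsVerticallyInPlace(byte[] frameData, int height, int width)
{
    byte[] data = (byte[])frameData.Clone();   // keep the original untouched
    int rowBytes = width * 4;
    byte[] tempRow = new byte[rowBytes];
    for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--)
    {
        Buffer.BlockCopy(data, top * rowBytes, tempRow, 0, rowBytes);
        Buffer.BlockCopy(data, bottom * rowBytes, data, top * rowBytes, rowBytes);
        Buffer.BlockCopy(tempRow, 0, data, bottom * rowBytes, rowBytes);
    }
    return data;
}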

How to translate Tiff.ReadEncodedTile to elevation terrain matrix from height map in C#?

I'm new to reading tiff images and I'm trying to get the terrain elevation values from a tiff map using LibTiff. The maps I need to decode are tile-organized. Below is the fragment of code I'm currently using to get these values, based on the library documentation and research on the web:
private void getBytes()
{
int numBytes = bitsPerSample / 8; //Number of bytes depending the tiff map
int stride = numBytes * height;
byte[] bufferTiff = new byte[stride * height]; // this is the buffer with the tiles data
int offset = 0;
for (int i = 0; i < tif.NumberOfTiles() - 1; i++)
{
int rawTileSize = (int)tif.RawTileSize(i);
offset += tif.ReadEncodedTile(i, bufferTiff, offset, rawTileSize);
}
values = new double[height, width]; // this is the matrix to save the heigth values in meters
int ptr = 0; // pointer for saving the each data bytes
int m = 0;
int n = 0;
byte[] byteValues = new byte[numBytes]; // bytes of each height data
for (int i = 0; i < bufferTiff.Length; i++)
{
byteValues[ptr] = bufferTiff[i];
ptr++;
if (ptr % numBytes == 0)
{
ptr = 0;
if (n == height) // tiff map Y pixels
{
n = 0;
m++;
if (m == width) // tiff map X pixels
{
m = 0;
}
}
values[m, n] = BitConverter.ToDouble(byteValues, 0); // Converts each byte data to the height value in meters. If the map is 32 bps the method I use is BitConverter.ToFloat
if (n == height - 1 && m == width - 1)
break;
n++;
}
}
SaveArrayAsCSV(values, "values.txt");
}
//Only to show results in a cvs file:
public void SaveArrayAsCSV(double[,] arrayToSave, string fileName) // source: http://stackoverflow.com/questions/8666518/how-can-i-write-a-general-array-to-csv-file
{
using (StreamWriter file = new StreamWriter(fileName))
{
WriteItemsToFile(arrayToSave, file);
}
}
//Only to show results in a cvs file:
private void WriteItemsToFile(Array items, TextWriter file) // source: http://stackoverflow.com/questions/8666518/how-can-i-write-a-general-array-to-csv-file
{
int cont = 0;
foreach (object item in items)
{
if (item is Array)
{
WriteItemsToFile(item as Array, file);
file.Write(Environment.NewLine);
}
else {
file.Write(item + " | ");
cont++;
if(cont == width)
{
file.Write("\n");
cont = 0;
}
}
}
}
I've been testing two different maps (32 and 64 bits per sample) and the results are similar: at the beginning the data seems to be consistent, but at some point all the remaining values are corrupted (and even zero at the end of the data). I deduce there are some bytes that need to be ignored, but I don't know how to identify them so I can debug my code. The method Tiff.ReadScanline does not work for me because the maps I need to decode are tile-organized, and that method does not work with this kind of image (according to the BitMiracle.LibTiff documentation). The method Tiff.ReadRGBATile is not valid either, because the tiff images are not RGB. I can read these values with Matlab, so I can compare the expected results with mine, but my project needs to be built in C#. As a reference (I think it could be helpful), these are some tags extracted from one of the tiff files with the LibTiff tag-reading methods:
ImageWidth: 2001
ImageLength: 2001
BitsPerSample: 32
Compression: PackBits (aka Macintosh RLE)
Photometric: MinIsBlack
SamplesPerPixel: 1
PlanarConfig: Contig
TileWidth: 208
TileLength: 208
SampleFormat: 3
Thanks in advance for your help, guys!
Ok, finally I found the solution: my mistake was the "count" parameter in the function Tiff.ReadEncodedTile(tile, buffer, offset, count). The Tiff.RawTileSize(int) function returns the compressed size of the tile in bytes (different for each tile, depending on the compression algorithm), but Tiff.ReadEncodedTile returns the decompressed bytes (larger and constant for all tiles). That's why not all the information was being saved properly, but only part of the data. Below is the corrected code that builds the terrain elevation matrix (it needs optimization but it works; I think it could be helpful).
private void getBytes()
{
int numBytes = bitsPerSample / 8;
int numTiles = tif.NumberOfTiles();
int stride = numBytes * height;
int bufferSize = tileWidth * tileHeight * numBytes * numTiles;
int bytesSavedPerTile = tileWidth * tileHeight * numBytes; //this is the real size of the decompressed bytes
byte[] bufferTiff = new byte[bufferSize];
FieldValue[] value = tif.GetField(TiffTag.TILEWIDTH);
int tilewidth = value[0].ToInt();
value = tif.GetField(TiffTag.TILELENGTH);
int tileHeigth = value[0].ToInt();
int matrixSide = (int)Math.Sqrt(numTiles); // this works for a square image (for example a tiles organized tiff image)
int bytesWidth = matrixSide * tilewidth;
int bytesHeigth = matrixSide * tileHeigth;
int offset = 0;
for (int j = 0; j < numTiles; j++)
{
offset += tif.ReadEncodedTile(j, bufferTiff, offset, bytesSavedPerTile); //Here was the mistake. Now it works!
}
double[,] aux = new double[bytesHeigth, bytesWidth]; //Double for a 64 bps tiff image. This matrix will save the alldata, including the transparency (the "blank zone" I was talking before)
terrainElevation = new double[height, width]; // Double for a 64 bps tiff image. This matrix will save only the elevation values, without transparency
int ptr = 0;
int m = 0;
int n = -1;
int contNumTile = 1;
int contBytesPerTile = 0;
int i = 0;
int tileHeigthReference = tileHeigth;
int tileWidthReference = tileWidth;
int row = 1;
int col = 1;
byte[] bytesHeigthMeters = new byte[numBytes]; // Buffer to save each one elevation value to parse
while (i < bufferTiff.Length && contNumTile < numTiles + 1)
{
for (contBytesPerTile = 0; contBytesPerTile < bytesSavedPerTile; contBytesPerTile++)
{
bytesHeigthMeters[ptr] = bufferTiff[i];
ptr++;
if (ptr % numBytes == 0 && ptr != 0)
{
ptr = 0;
n++;
if (n == tileHeigthReference)
{
n = tileHeigthReference - tileHeigth;
m++;
if (m == tileWidthReference)
{
m = tileWidthReference - tileWidth;
}
}
double heigthMeters = BitConverter.ToDouble(bytesHeigthMeters, 0);
if (n < bytesWidth)
{
aux[m, n] = heigthMeters;
}
else
{
n = -1;
}
}
i++;
}
if (i % tilewidth == 0)
{
col++;
if (col == matrixSide + 1)
{
col = 1;
}
}
if (contNumTile % matrixSide == 0)
{
row++;
n = -1;
if (row == matrixSide + 1)
{
row = 1;
}
}
contNumTile++;
tileHeigthReference = tileHeight * (col);
tileWidthReference = tileWidth * (row);
m = tileWidth * (row - 1);
}
for (int x = 0; x < height; x++)
{
for (int y = 0; y < width; y++)
{
terrainElevation[x, y] = aux[x, y]; // Final result. Each position of matrix has saved each pixel terrain elevation of the map
}
}
}
Regards!
Here is improved code that works with non-square tiles:
int imageWidth = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int imageHeight = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int bytesPerSample = (int)tiff.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt() / 8;
SampleFormat format = (SampleFormat)tiff.GetField(TiffTag.SAMPLEFORMAT)[0].ToInt();
//Array to return
float[,] decoded = new float[imageHeight, imageWidth];
//Get decode function (I only want a float array)
Func<byte[], int, float> decode = GetConversionFunction(format, bytesPerSample);
if (decode == null)
{
throw new ArgumentException("Unsupported TIFF format:"+format);
}
if(tiff.IsTiled())
{
//tile dimensions in pixels - the image dimensions MAY NOT be a multiple of these dimensions
int tileWidth = tiff.GetField(TiffTag.TILEWIDTH)[0].ToInt();
int tileHeight = tiff.GetField(TiffTag.TILELENGTH)[0].ToInt();
//tile matrix size
int numTiles = tiff.NumberOfTiles();
int tileMatrixWidth = (int)Math.Ceiling(imageWidth / (float)tileWidth);
int tileMatrixHeight = (int)Math.Ceiling(imageHeight / (float)tileHeight);
//tile dimensions in bytes
int tileBytesWidth = tileWidth * bytesPerSample;
int tileBytesHeight = tileHeight * bytesPerSample;
//tile buffer
int tileBufferSize = tiff.TileSize();
byte[] tileBuffer = new byte[tileBufferSize];
int imageHeightMinus1 = imageHeight - 1;
for (int tileIndex = 0 ; tileIndex < numTiles; tileIndex++)
{
int tileX = tileIndex / tileMatrixWidth;
int tileY = tileIndex % tileMatrixHeight;
tiff.ReadTile(tileBuffer, 0, tileX*tileWidth, tileY*tileHeight, 0, 0);
int xImageOffset = tileX * tileWidth;
int yImageOffset = tileY * tileHeight;
for (int col = 0; col < tileWidth && xImageOffset+col < imageWidth; col++ )
{
for(int row = 0; row < tileHeight && yImageOffset+row < imageHeight; row++)
{
decoded[imageHeightMinus1-(yImageOffset+row), xImageOffset+col] = decode(tileBuffer, row * tileBytesWidth + col * bytesPerSample);
}
}
}
}
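The GetConversionFunction helper is not shown in the snippet above. A plausible sketch of it (not part of the original snippet, so treat it as an assumption) that returns a per-sample decoder for the common sample formats could be:
// Hypothetical helper: returns a decoder for one sample, or null if unsupported.
static Func<byte[], int, float> GetConversionFunction(SampleFormat format, int bytesPerSample)
{
    if (format == SampleFormat.IEEEFP && bytesPerSample == 4)
        return (buf, offset) => BitConverter.ToSingle(buf, offset);
    if (format == SampleFormat.IEEEFP && bytesPerSample == 8)
        return (buf, offset) => (float)BitConverter.ToDouble(buf, offset);
    if (format == SampleFormat.INT && bytesPerSample == 2)
        return (buf, offset) => BitConverter.ToInt16(buf, offset);
    if (format == SampleFormat.UINT && bytesPerSample == 2)
        return (buf, offset) => BitConverter.ToUInt16(buf, offset);
    return null; // caller throws for unsupported formats
}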

C# and MATLAB give different image means?

I am calculating the average of the RGB channels of images in C# and MATLAB and getting slightly different results (I am using 0-255 pixel values).
The difference is not large, but I just can't seem to understand the reason.
Is this common? Is it due to the bitmap implementation of the image, a precision issue, or is something wrong with my code?
Code:
Matlab
I = imread('Photos\hv2512.jpg');
Ir=double(I(:,:,1));
Ig=double(I(:,:,2));
Ib=double(I(:,:,3));
avRed=mean2(Ir)
avGn=mean2(Ig)
avBl=mean2(Ib)
C#
Bitmap bmp = new Bitmap(open.FileName);
double[,] Red = new double[bmp.Width, bmp.Height];
double[,] Green = new double[bmp.Width, bmp.Height];
double[,] Blue = new double[bmp.Width, bmp.Height];
int PixelSize = 3;
BitmapData bmData = null;
if (Safe)
{
Color c;
for (int j = 0; j < bmp.Height; j++)
{
for (int i = 0; i < bmp.Width; i++)
{
c = bmp.GetPixel(i, j);
Red[i, j] = (double) c.R;
Green[i, j] = (double) c.G;
Blue[i, j] = (double) c.B;
}
}
}
double avRed = 0, avGrn = 0, avBlue = 0;
double sumRed = 0, sumGrn = 0, sumBlue = 0;
int cnt = 0;
for (int rws = 0; rws < Red.GetLength(0); rws++)
for (int clms = 0; clms < Red.GetLength(1); clms++)
{
sumRed = sumRed + Red[rws, clms];
sumGrn = sumGrn + Green[rws, clms];
sumBlue = sumBlue + Blue[rws, clms];
cnt++;
}
avRed = sumRed / cnt;
avGrn = sumGrn / cnt;
avBlue = sumBlue / cnt;
This is the image I am using
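For what it's worth, the same averages can also be computed without per-pixel GetPixel calls by reading the raw buffer once with LockBits; a rough sketch (assuming the bitmap is, or is converted to, 24bpp) is:
// Sketch: channel means computed from the raw pixel buffer (assumes 24bpp RGB).
BitmapData bd = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                             ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
byte[] raw = new byte[bd.Stride * bmp.Height];
System.Runtime.InteropServices.Marshal.Copy(bd.Scan0, raw, 0, raw.Length);
bmp.UnlockBits(bd);
double sumR = 0, sumG = 0, sumB = 0;
for (int y = 0; y < bmp.Height; y++)
{
    for (int x = 0; x < bmp.Width; x++)
    {
        int o = y * bd.Stride + x * 3;   // bytes are stored B, G, R
        sumB += raw[o];
        sumG += raw[o + 1];
        sumR += raw[o + 2];
    }
}
int count = bmp.Width * bmp.Height;
double avR = sumR / count, avG = sumG / count, avB = sumB / count;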

Converting a multi-band 16-bit tiff image to an 8-bit tiff image

I got some pixel data from a 16-bit (range 0-65535) tif image as an integer array, using GDAL ReadRaster. How do I convert the values to 8-bit (0-255) and write the array out as an 8-bit tif image?
Here is some of my code :
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using OSGeo.GDAL;
using OSGeo.OSR;
namespace ConsoleApplication4
{
class Program
{
static void Main(string[] args)
{
Gdal.AllRegister();
Dataset data1;
int xsize, ysize;
int bandsize;
data1 = Gdal.Open("F:\\po_1473547_bgrn_0000000.tif", Access.GA_ReadOnly);
bandsize = data1.RasterCount;
xsize = data1.RasterXSize; //cols
ysize = data1.RasterYSize; //rows
Console.WriteLine("cols : "+xsize+", rows : "+ysize);
Band[] bands = new Band[bandsize];
for (int i = 0; i < bandsize; i++) {
bands[i] = data1.GetRasterBand(i+1);
}
int[,,] pixel = new int[bandsize,xsize,ysize];
int[] pixtemp = new int[xsize * ysize];
for (int i = 0; i < bandsize; i++)
{
bands[i].ReadRaster(0, 0, xsize, ysize, pixtemp, xsize, ysize, 0, 0);
for (int j = 0; j < xsize; j++)
{
for (int k = 0; k < ysize; k++)
{
pixel[i,j,k] = pixtemp[j + k * xsize];
}
}
}
Console.WriteLine("");
for (int i = 0; i < bandsize; i++)
{
Console.WriteLine("some pixel from band " + (i+1));
for (int j = 0; j < 100; j++)
{
Console.Write(" " + pixel[i,100,j]);
}
Console.WriteLine("\n\n");
}
}
}
}
I searched Google for how to do that, but I only found examples where the data type is a byte. Could someone please give me a hint?
I don't know about the GeoTIFF format, but to convert a regular 16-bit tiff image file to an 8-bit one, you need to scale the 16-bit channel values down to 8 bits. The example below shows how this can be achieved for grayscale images.
public static class TiffConverter
{
private static IEnumerable<BitmapSource> Load16BitTiff(Stream source)
{
var decoder = new TiffBitmapDecoder(source, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
for (int i = 0; i < decoder.Frames.Count; i++)
// return all frames that are present in the input.
yield return decoder.Frames[i];
}
private static BitmapSource NormalizeTiffTo8BitImage(BitmapSource source)
{
// allocate buffer & copy image bytes.
var rawStride = source.PixelWidth * source.Format.BitsPerPixel / 8;
var rawImage = new byte[rawStride * source.PixelHeight];
source.CopyPixels(rawImage, rawStride, 0);
// get both max values of first & second byte of pixel as scaling bounds.
var max1 = 0;
int max2 = 1;
for (int i = 0; i < rawImage.Length; i++)
{
if ((i & 1) == 0)
{
if (rawImage[i] > max1)
max1 = rawImage[i];
}
else if (rawImage[i] > max2)
max2 = rawImage[i];
}
// determine normalization factors.
var normFactor = max2 == 0 ? 0.0d : 128.0d / max2;
var factor = max1 > 0 ? 255.0d / max1 : 0.0d;
max2 = Math.Max(max2, 1);
// normalize each pixel to output buffer.
var buffer8Bit = new byte[rawImage.Length / 2];
for (int src = 0, dst = 0; src < rawImage.Length; dst++)
{
int value16 = rawImage[src++];
double value8 = ((value16 * factor) / max2) - normFactor;
if (rawImage[src] > 0)
{
int b = rawImage[src] << 8;
value8 = ((value16 + b) / max2) - normFactor;
}
buffer8Bit[dst] = (byte)Math.Min(255, Math.Max(value8, 0));
src++;
}
// return new bitmap source.
return BitmapSource.Create(
source.PixelWidth, source.PixelHeight,
source.DpiX, source.DpiY,
PixelFormats.Gray8, BitmapPalettes.Gray256,
buffer8Bit, rawStride / 2);
}
private static void SaveTo(IEnumerable<BitmapSource> src, string fileName)
{
using (var stream = File.Create(fileName))
{
var encoder = new TiffBitmapEncoder();
foreach (var bms in src)
encoder.Frames.Add(BitmapFrame.Create(bms));
encoder.Save(stream);
}
}
public static void Convert(string inputFileName, string outputFileName)
{
using (var inputStream = File.OpenRead(inputFileName))
SaveTo(Load16BitTiff(inputStream).Select(NormalizeTiffTo8BitImage), outputFileName);
}
}
Usage:
TiffConverter.Convert(@"c:\temp\16bit.tif", @"c:\temp\8bit.tif");
To go from 16-bit to 8-bit the pixel values have to be rescaled; several resampling methods could work.
Simple linear interpolation may help.
//Convert tiff from 16-bit to 8-bit
byte[,,] ConvertBytes(int[,,] pixel, int bandsize, int xsize, int ysize)
{
    byte[,,] trgPixel = new byte[bandsize, xsize, ysize];
    for (int i = 0; i < bandsize; i++)
    {
        for (int j = 0; j < xsize; j++)
        {
            for (int k = 0; k < ysize; k++)
            {
                //Linear scaling from the 0-65535 range down to 0-255
                trgPixel[i, j, k] = (byte)(pixel[i, j, k] / 65536.0 * 256);
            }
        }
    }
    return trgPixel;
}
//Save 8-bit tiff to file
void SaveBytesToTiff(string destPath, byte[,,] pixel, int bandsize, int xsize, int ysize)
{
    string fileformat = "GTiff";
    Driver dr = Gdal.GetDriverByName(fileformat);
    Dataset newDs = dr.Create(destPath, xsize, ysize, bandsize, DataType.GDT_Byte, null);
    for (int i = 0; i < bandsize; i++)
    {
        byte[] buffer = new byte[xsize * ysize];
        for (int j = 0; j < xsize; j++)
        {
            for (int k = 0; k < ysize; k++)
            {
                buffer[j + k * xsize] = pixel[i, j, k];
            }
        }
        //Write this band's buffer into band i+1 of the output dataset
        newDs.WriteRaster(0, 0, xsize, ysize, buffer, xsize, ysize, 1, new int[] { i + 1 }, 0, 0, 0);
        newDs.FlushCache();
    }
    newDs.Dispose();
}
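A hypothetical call site, reusing the pixel array and dimensions built in the question's Main (the output path is made up for illustration):
// Sketch: convert the 16-bit band data and write it out as an 8-bit GeoTIFF.
byte[,,] pixel8 = ConvertBytes(pixel, bandsize, xsize, ysize);
SaveBytesToTiff("F:\\po_1473547_bgrn_0000000_8bit.tif", pixel8, bandsize, xsize, ysize);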
