Different Results Sending Byte Array - C#

Can someone please explain the difference between the two examples below?
Pre-build, in Unity: dragging a file (renamed from test.jpg to test.jpg.bytes) onto a slot defined as a TextAsset (imageAsset), then using this code:
private byte[] PrepareImageFile()
{
int width = Screen.width;
int height = Screen.height;
var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.LoadImage(imageAsset.bytes);
tex.Apply();
byte[] bytes = tex.EncodeToPNG();
Destroy(tex);
return bytes;
}
Post-build, on an Android tablet: passing in a gallery image path (aPath), then using this code:
private byte[] PrepareTheFile(string aPath)
{
byte[] data = File.ReadAllBytes(aPath);
int width = Screen.width;
int height = Screen.height;
var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.LoadImage(data);
tex.Apply();
byte[] bytes = tex.EncodeToPNG();
Destroy(tex);
return bytes;
}
The reason I know they are different is that when the image is sent to a facial recognition API (using bytes), #1 returns accurate results (9/10 identified correctly), but #2 returns inaccurate results (only 1/10 identified correctly).
There are no errors, and the image must be reaching its destination for analysis, as 1 of the 10 people gets identified correctly.
public void GrabImage()
{
NativeGallery.Permission permission = NativeGallery.GetImageFromGallery((path) =>
{
if (path != null)
{
texture = new Texture2D(300, 300, TextureFormat.RGB24, false);
texture.LoadImage(File.ReadAllBytes(path));
Debug.Log(_celebTextAttributes.text + "W:" + texture.width + " x H:" + texture.height);
texture.Apply();
_celebTextAttributes.SetText("Path: " + path);
imagePath = path;
}
}, "Select an image from", "image/png");
_celebImage.GetComponent<Renderer>().material.mainTexture = texture;
}
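As a side note, to double-check that the two code paths really do produce different bytes (independent of the API results), the length and an MD5 hash of each array can be logged before sending. This is only a diagnostic sketch; LogImageBytes is an illustrative helper, not part of the original code:
// requires: using System.Security.Cryptography;
private void LogImageBytes(string label, byte[] bytes)
{
    using (var md5 = MD5.Create())
    {
        string hash = System.BitConverter.ToString(md5.ComputeHash(bytes)).Replace("-", string.Empty);
        Debug.Log(label + " length=" + bytes.Length + " md5=" + hash);
    }
}
// e.g. LogImageBytes("editor", PrepareImageFile()); and LogImageBytes("device", PrepareTheFile(aPath));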
Any help?

Related

How to crop a captured screenshot?

Capturing a screenshot is quite easy, but cropping is a different story, or at least appears so. We are attempting to emulate the solution here in Xamarin using SkiaSharp (System.Drawing is not supported in Xamarin).
Currently we are able to capture the screenshot and return the image, BUT if we crop the image, the image returned is all black.
How do we crop a captured screenshot correctly?
*NOTE: image = image.Subset(rec); under the "crop image" comment is how we are trying to crop.
iOS screenshot
public byte[] Capture()
{
var capture = UIScreen.MainScreen.Capture();
using (NSData data = capture.AsPNG())
{
var bytes = new byte[data.Length];
Marshal.Copy(data.Bytes, bytes, 0, Convert.ToInt32(data.Length));
return bytes;
}
}
Droid screenshot
public byte[] Capture()
{
var rootView = context.Window.DecorView.RootView;
using (var screenshot = Bitmap.CreateBitmap(
rootView.Width,
rootView.Height,
Bitmap.Config.Argb8888))
{
var canvas = new Canvas(screenshot);
rootView.Draw(canvas);
using (var stream = new MemoryStream())
{
screenshot.Compress(Bitmap.CompressFormat.Png, 90, stream);
return stream.ToArray();
}
}
}
Attempting to crop captured screenshot
// Use this function to crop a screen shot to a specific element.
public byte[] test(byte[] screenshotData, View element)
{
// locate IntPtr to byte[] of uncropped screenshot
GCHandle gch = GCHandle.Alloc(screenshotData, GCHandleType.Pinned);
IntPtr addr = gch.AddrOfPinnedObject();
// assign initial bounds
SKImageInfo info = new SKImageInfo((int)App.Current.MainPage.Width,
(int)App.Current.MainPage.Height);
// create initial pixel map
using SKPixmap pixmap = new SKPixmap(info, addr);
// Release
gch.Free();
// create bitmap
using SKBitmap bitmap = new SKBitmap();
// assign pixel data
bitmap.InstallPixels(pixmap);
// create surface
using SKSurface surface = SKSurface.Create(info);
// create a canvas for drawing
using SKCanvas canvas = surface.Canvas;
// draw
canvas.DrawBitmap(bitmap, info.Rect);
// get an image subset to save
SKImage image = surface.Snapshot();
SKRectI rec = new SKRectI((int)element.Bounds.Left, (int)element.Bounds.Top,
(int)element.Bounds.Right, (int)element.Bounds.Bottom);
// crop image
image = image.Subset(rec);
byte[] bytes = SKBitmap.FromImage(image).Bytes;
image.Dispose();
return bytes;
}
EDIT: Alternative solution attempt (not working)
// Use this function to crop a screen shot to a specific element.
public byte[] test(byte[] screenshotData, View element)
{
// locate IntPtr to byte[] of uncropped screenshot
GCHandle gch = GCHandle.Alloc(screenshotData, GCHandleType.Pinned);
IntPtr addr = gch.AddrOfPinnedObject();
// assign initial bounds
SKImageInfo info = new SKImageInfo((int)App.Current.MainPage.Width,
(int)App.Current.MainPage.Height);
// create bitmap
SKBitmap bitmap = new SKBitmap();
bitmap.InstallPixels(info, addr);
// boundaries
SKRect cropRect = new SKRect((int)element.Bounds.Left, (int)element.Bounds.Top,
(int)element.Bounds.Right, (int)element.Bounds.Bottom);
SKBitmap croppedBitmap = new SKBitmap((int)cropRect.Width,
(int)cropRect.Height);
SKRect dest = new SKRect(0, 0, cropRect.Width, cropRect.Height);
SKRect source = new SKRect(cropRect.Left, cropRect.Top,
cropRect.Right, cropRect.Bottom);
// draw with destination and source rectangles
// to extract a subset of the original bitmap
using SKCanvas canvas = new SKCanvas(croppedBitmap);
canvas.DrawBitmap(bitmap, source, dest);
return croppedBitmap.Bytes;
//return bitmap.Bytes;
}
iOS solution - As seen here.
// crop the image, without resizing
private UIImage CropImage(UIImage sourceImage, int crop_x, int crop_y, int width, int height)
{
var imgSize = sourceImage.Size;
UIGraphics.BeginImageContextWithOptions(new System.Drawing.SizeF(width, height), false, 0.0f);
var context = UIGraphics.GetCurrentContext();
var clippedRect = new RectangleF(0, 0, width, height);
context.ClipToRect(clippedRect);
var drawRect = new RectangleF(-crop_x, -crop_y, imgSize.Width, imgSize.Height);
sourceImage.Draw(drawRect);
var modifiedImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
return modifiedImage;
}
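For completeness, a rough sketch of how this helper can be wired to the Capture() method above; CaptureAndCrop is an illustrative name, and we assume element.Bounds is in the same point coordinate space as the UIKit drawing context:
public byte[] CaptureAndCrop(View element)
{
    byte[] screenshotBytes = Capture(); // PNG bytes from the iOS Capture() above
    using (NSData data = NSData.FromArray(screenshotBytes))
    using (UIImage sourceImage = UIImage.LoadFromData(data))
    {
        UIImage cropped = CropImage(sourceImage,
            (int)element.Bounds.Left, (int)element.Bounds.Top,
            (int)element.Bounds.Width, (int)element.Bounds.Height);
        using (NSData png = cropped.AsPNG())
        {
            var bytes = new byte[png.Length];
            Marshal.Copy(png.Bytes, bytes, 0, Convert.ToInt32(png.Length));
            return bytes;
        }
    }
}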
Android solution - getNavigationBarSize(context) as seen here
// crop the image, without resizing
private byte[] CropImage(byte[] screenshotBytes, int top)
{
Android.Graphics.Bitmap bitmap = Android.Graphics.BitmapFactory.DecodeByteArray(
screenshotBytes, 0, screenshotBytes.Length);
int viewStartY = (int)(top * 2.8f);
int viewHeight = (int)(bitmap.Height - (top * 2.8f));
var navBarXY = getNavigationBarSize(context);
int viewHeightMinusNavBar = viewHeight - navBarXY.Y;
Android.Graphics.Bitmap crop = Android.Graphics.Bitmap.CreateBitmap(bitmap,
0, viewStartY,
bitmap.Width, viewHeightMinusNavBar
);
bitmap.Dispose();
using MemoryStream stream = new MemoryStream();
crop.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, stream);
return stream.ToArray();
}
*NOTE: We're unsure why a multiplication by 2.8 is required, though this works correctly. It should be noted that testing was only done in the Android emulator, so perhaps it is emulator-specific.
*NOTE2: x = 0 and the width equals the entire screen width, as per our requirement. Likewise, top is Element.Bounds.Top.
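*NOTE3: If the 2.8 factor turns out to be the display density, a possible alternative (just a sketch, assuming the same context object used in the Droid Capture() code) is to read the density at runtime instead of hard-coding it:
// Logical-to-physical pixel ratio of the current display (e.g. 2.75 on many devices).
float density = context.Resources.DisplayMetrics.Density;
int viewStartY = (int)(top * density);
int viewHeight = (int)(bitmap.Height - (top * density));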

Unity3d convert Texture2d into bitmap format

I made an attempt to get this done, but unfortunately I am coming up short and not sure what I am doing wrong.
private void CreateMovie(List<Texture2D> textures, string fileName, int frameRate)
{
var writer = new AviWriter(fileName + ".avi")
{
FramesPerSecond = frameRate,
EmitIndex1 = true
};
var stream = writer.AddVideoStream();
stream.Width = _images[0].width;
stream.Height = _images[0].height;
stream.Codec = KnownFourCCs.Codecs.Uncompressed;
stream.BitsPerPixel = BitsPerPixel.Bpp32;
int count = 0;
while (count < textures.Count)
{
byte[] byteArray = textures[count].GetRawTextureData();
stream.WriteFrame(false, byteArray, 0, byteArray.Length);
count++;
}
writer.Close();
}
Once I write the bytes to a file and try to open it, I get "the file is in an unknown format".
Can you post your code for writing to the texture?
To convert your Texture to a different format, you will need to create a new Texture with the desired format, then write the data to the texture.
Use the following constructor:
public Texture2D(int width, int height, TextureFormat textureFormat = TextureFormat.RGBA32, bool mipChain = true, bool linear = false);
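For example, a minimal sketch of copying an existing texture into a new one with a different format (ConvertFormat is just an illustrative name; the source texture must be readable and the target format uncompressed):
private Texture2D ConvertFormat(Texture2D source, TextureFormat targetFormat)
{
    // Create the destination texture in the desired format.
    var converted = new Texture2D(source.width, source.height, targetFormat, false);
    // Copy the pixel data across and upload it to the GPU.
    converted.SetPixels(source.GetPixels());
    converted.Apply();
    return converted;
}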

Unity, Save cubemap to one circle image

I have a cubemap. I need to save it as a circular image, for example as a PNG. I have spent many hours searching the Internet without success. How do I do it? Is it possible?
I have this image: joxi.ru/zANd66wSl6Kdkm
I need to save it as a PNG like this: joxi.ru/12MW55wT40LYjr. Here is the part of my code that may help:
tex.SetPixels(cubemap.GetPixels(CubemapFace.PositiveZ));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_PositiveZ.png", bytes);
You can create a class that inherits from the ScriptableWizard class and renders a cubemap from a specific transform. Here is my code:
using UnityEngine;
using UnityEditor;
using System.Collections;
using System.IO;
public class RenderCubemapWizard : ScriptableWizard
{
public Transform renderFromPosition;
public Cubemap cubemap;
void OnWizardUpdate()
{
// Assign the ScriptableWizard fields (declaring locals here would have no effect).
helpString = "Select transform to render from and cubemap to render into";
isValid = (renderFromPosition != null) && (cubemap != null);
}
void OnWizardCreate()
{
// create temporary camera for rendering
GameObject go = new GameObject("CubemapCamera");
go.AddComponent<Camera>();
// place it on the object
go.transform.position = renderFromPosition.position;
go.transform.rotation = Quaternion.identity;
// render into cubemap
go.GetComponent<Camera>().RenderToCubemap(cubemap);
// destroy temporary camera
DestroyImmediate(go);
ConvertToPng();
}
[MenuItem("GameObject/Render into Cubemap")]
static void RenderCubemap()
{
ScriptableWizard.DisplayWizard<RenderCubemapWizard>(
"Render cubemap", "Render!");
}
void ConvertToPng()
{
Debug.Log(Application.dataPath + "/" +cubemap.name +"_PositiveX.png");
var tex = new Texture2D (cubemap.width, cubemap.height, TextureFormat.RGB24, false);
// Read screen contents into the texture
tex.SetPixels(cubemap.GetPixels(CubemapFace.PositiveX));
// Encode texture into PNG
var bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_PositiveX.png", bytes);
tex.SetPixels(cubemap.GetPixels(CubemapFace.NegativeX));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_NegativeX.png", bytes);
tex.SetPixels(cubemap.GetPixels(CubemapFace.PositiveY));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_PositiveY.png", bytes);
tex.SetPixels(cubemap.GetPixels(CubemapFace.NegativeY));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_NegativeY.png", bytes);
tex.SetPixels(cubemap.GetPixels(CubemapFace.PositiveZ));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_PositiveZ.png", bytes);
tex.SetPixels(cubemap.GetPixels(CubemapFace.NegativeZ));
bytes = tex.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/" + cubemap.name +"_NegativeZ.png", bytes);
DestroyImmediate(tex);
}
}
This basically creates a new cubemap from the position you specify within the wizard (to use the wizard, go to GameObject in the top menu and at the bottom of the list you'll see 'Render into Cubemap'). It then grabs the six faces of the cubemap and converts each into a PNG file within the ConvertToPng() function. This works for me and it should work for you, since it essentially only needs a transform position.
Sorry for how long it is; I tried to simplify it, but this is as simplified as I could make it.
Here are the links that helped me come to this conclusion:
How to convert a face to png
Unity's scriptable wizard for rendering a cubemap
This is the correct approach if you want a single compressed cubemap texture. After the .png texture is saved, just set its texture shape to Cube and choose the compression settings you want.
#if UNITY_EDITOR
using UnityEngine;
using UnityEditor;
using System.IO;
public class RenderCubemapUtil : ScriptableWizard
{
public Transform renderFromPosition;
public int size = 512;
public string newCubmapPath;
void OnWizardUpdate()
{
isValid = renderFromPosition != null && size >= 16 && !string.IsNullOrEmpty(newCubmapPath);
}
void OnWizardCreate()
{
if (!isValid) return;
// create temporary camera for rendering
var go = new GameObject("CubemapCamera");
go.AddComponent<Camera>();
try
{
// place it on the object
go.transform.position = renderFromPosition.position;
go.transform.rotation = Quaternion.identity;
// create new texture
var cubemap = new Cubemap(size, TextureFormat.RGB24, false);
// render into cubemap
go.GetComponent<Camera>().RenderToCubemap(cubemap);
// convert cubemap to single horizontal texture
var texture = new Texture2D(size * 6, size, cubemap.format, false);
int texturePixelCount = (size * 6) * size;
var texturePixels = new Color[texturePixelCount];
var cubeFacePixels = cubemap.GetPixels(CubemapFace.PositiveX);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 0);
cubeFacePixels = cubemap.GetPixels(CubemapFace.NegativeX);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 1);
cubeFacePixels = cubemap.GetPixels(CubemapFace.PositiveY);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 3);
cubeFacePixels = cubemap.GetPixels(CubemapFace.NegativeY);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 2);
cubeFacePixels = cubemap.GetPixels(CubemapFace.PositiveZ);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 4);
cubeFacePixels = cubemap.GetPixels(CubemapFace.NegativeZ);
CopyTextureIntoCubemapRegion(cubeFacePixels, texturePixels, size * 5);
texture.SetPixels(texturePixels, 0);
// write texture as png to disk
var textureData = texture.EncodeToPNG();
File.WriteAllBytes(Path.Combine(Application.dataPath, $"{newCubmapPath}.png"), textureData);
// save to disk
AssetDatabase.SaveAssetIfDirty(cubemap);
AssetDatabase.SaveAssets();
AssetDatabase.Refresh();
}
finally
{
// destroy temporary camera
DestroyImmediate(go);
}
}
private void CopyTextureIntoCubemapRegion(Color[] srcPixels, Color[] dstPixels, int xOffsetDst)
{
int cubemapWidth = size * 6;
for (int y = 0; y != size; ++y)
{
for (int x = 0; x != size; ++x)
{
int iSrc = x + (y * size);
int iDst = (x + xOffsetDst) + (y * cubemapWidth);
dstPixels[iDst] = srcPixels[iSrc];
}
}
}
[MenuItem("GameObject/Render into Cubemap")]
static void RenderCubemap()
{
DisplayWizard<RenderCubemapUtil>("Render cubemap", "Render!");
}
}
#endif
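If you also want to apply the import settings from code, here is a hedged sketch (the asset path is illustrative and must match where the PNG was written, relative to the project's Assets folder):
// Mark the saved PNG as a cubemap in its import settings
// (equivalent to setting Texture Shape to "Cube" in the Inspector).
var importer = (TextureImporter)AssetImporter.GetAtPath("Assets/" + newCubmapPath + ".png");
importer.textureShape = TextureImporterShape.TextureCube;
importer.SaveAndReimport();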

Saving Texture2D to file not working properly

So I've created this class based on the Texture2D.EncodeToPNG code example on Unity's website. I'm not getting any errors when I execute it, but I'm also not seeing a new file created. What am I doing wrong here?
public class CreateJPG : MonoBehaviour
{
public int width = 1050;
public int height = 700;
string fileName;
string filePath;
// Texture2D tex;
public void GrabJPG () {
SaveJPG();
Debug.Log("GrabJPG Executing");
}
IEnumerator SaveJPG()
{
// We should only read the screen buffer after rendering is complete
yield return new WaitForEndOfFrame();
// Create a texture the size of the screen, RGB24 format
Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(0,0,width,height),0,0);
tex.Apply();
// Encode texture into JPG
byte[] bytes = tex.EncodeToJPG(60);
Object.Destroy(tex);
// Get filePrefix from GameSetup array index
GameObject init = GameObject.FindGameObjectWithTag("Initializer");
GameSetup gameSetup = init.GetComponent<GameSetup>();
string prefix = gameSetup.filePrefix;
string subDir = gameSetup.subDir;
string dtString = System.DateTime.Now.ToString("MM-dd-yyyy_HHmmssfff");
fileName = prefix+dtString+".jpg";
filePath = "/Users/kenmarold/Screenshots/"+subDir+"/";
Debug.Log("SaveJPG Executing");
File.WriteAllBytes(filePath+fileName, bytes);
Debug.Log("Your file was saved at " + filePath+subDir+prefix+fileName);
if(width > 0 && height > 0)
{
}
}
}
You didn't start your coroutine; you need to call StartCoroutine in GrabJPG:
StartCoroutine(SaveJPG());
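In other words, the corrected GrabJPG would look something like this:
public void GrabJPG()
{
    // SaveJPG is a coroutine; without StartCoroutine the iterator
    // is created but its body never runs.
    StartCoroutine(SaveJPG());
    Debug.Log("GrabJPG Executing");
}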
https://docs.unity3d.com/Manual/Coroutines.html
https://unity3d.com/learn/tutorials/modules/intermediate/scripting/coroutines
P.S. By the way, you can also use Application.CaptureScreenshot (ScreenCapture.CaptureScreenshot in newer Unity versions).
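For example (in current Unity versions the same API lives on ScreenCapture):
// On mobile the path is relative to Application.persistentDataPath;
// in the editor it is relative to the project folder.
ScreenCapture.CaptureScreenshot("screenshot.png");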

How to generate a dynamic GRF image to ZPL ZEBRA print

I have a problem.
I'm generating a dynamic BMP image and trying to send it to a ZEBRA printer using ZPL commands.
I need to convert my BMP to a GRF image. I think the hexadecimal data I extract from the BMP isn't correct.
The printed image comes out blurred and incorrect.
This is my code:
string bitmapFilePath = @oldArquivo; // file is attached to this support article
byte[] bitmapFileData = System.IO.File.ReadAllBytes(bitmapFilePath);
int fileSize = bitmapFileData.Length;
Bitmap ImgTemp = new Bitmap(bitmapFilePath);
Size ImgSize = ImgTemp.Size;
ImgTemp.Dispose();
// The following is known about test.bmp. It is up to the developer
// to determine this information for bitmaps besides the given test.bmp.
int width = ImgSize.Width;
int height = ImgSize.Height;
int bitmapDataOffset = 62; // 62 = header of the image
int bitmapDataLength = fileSize - 62;// 8160;
double widthInBytes = Math.Ceiling(width / 8.0);
// Copy over the actual bitmap data from the bitmap file.
// This represents the bitmap data without the header information.
byte[] bitmap = new byte[bitmapDataLength];
Buffer.BlockCopy(bitmapFileData, bitmapDataOffset, bitmap, 0, (bitmapDataLength));
// Invert bitmap colors
for (int i = 0; i < bitmapDataLength; i++)
{
bitmap[i] ^= 0xFF;
}
// Create ASCII ZPL string of hexadecimal bitmap data
string ZPLImageDataString = BitConverter.ToString(bitmap).Replace("-", string.Empty);
string comandoCompleto = "~DG" + nomeImagem + ".GRF,0" + bitmapDataLength.ToString() + ",0" + widthInBytes.ToString() + "," + ZPLImageDataString;
Try the following code. Not tested!
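// Requires: using System.Drawing; using System.Drawing.Imaging;
// using System.Runtime.InteropServices; using System.Text;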
public static string CreateGRF(string filename, string imagename)
{
Bitmap bmp = null;
BitmapData imgData = null;
byte[] pixels;
int x, y, width;
StringBuilder sb;
IntPtr ptr;
try
{
bmp = new Bitmap(filename);
imgData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format1bppIndexed);
width = (bmp.Width + 7) / 8;
pixels = new byte[width];
sb = new StringBuilder(width * bmp.Height * 2);
ptr = imgData.Scan0;
for (y = 0; y < bmp.Height; y++)
{
Marshal.Copy(ptr, pixels, 0, width);
for (x = 0; x < width; x++)
sb.AppendFormat("{0:X2}", (byte)~pixels[x]);
ptr = (IntPtr)(ptr.ToInt64() + imgData.Stride);
}
}
finally
{
if (bmp != null)
{
if (imgData != null) bmp.UnlockBits(imgData);
bmp.Dispose();
}
}
return String.Format("~DG{0}.GRF,{1},{2},", imagename, width * y, width) + sb.ToString();
}
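A possible way to use the result (a sketch only; the image name is illustrative, ^XG is the standard ZPL Recall Graphic command, and the strings would be sent over the same socket connection shown in the other answer):
// Download the graphic to the printer, then recall it inside a label.
string downloadCommand = CreateGRF("label.bmp", "SAMPLE"); // emits "~DGSAMPLE.GRF,<totalBytes>,<bytesPerRow>,<hex data>"
string printCommand = "^XA^FO20,20^XGSAMPLE.GRF,1,1^FS^XZ"; // 1,1 = no magnification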
One thing to point out is that the bitmap being converted must be monochrome (that is, 1 bit per pixel). There is an example on Zebra's knowledgebase that demonstrates printing a simple monochrome image in ZPL: https://km.zebra.com/kb/index?page=answeropen&type=open&searchid=1356730396931&answerid=16777216&iqaction=5&url=https%3A%2F%2Fkm.zebra.com%2Fkb%2Findex%3Fpage%3Dcontent%26id%3DSA304%26actp%3Dsearch%26viewlocale%3Den_US&highlightinfo=4194550,131,153#. If you can convert your images into monochrome bitmaps, then you should be able to follow that example.
// Given a monochrome bitmap file, one can read
// information about that bitmap from the header
// information in the file. This information includes
// bitmap height, width, bitsPerPixel, etc. It is required
// that a developer understands the basic bitmap format and
// how to extract the following data in order to proceed.
// A simple search online for 'bitmap format' should yield
// all the needed information. Here, for our example, we simply
// declare what the bitmap information is, since we are working
// with a known sample file.
string bitmapFilePath = @"test.bmp"; // file is attached to this support article
byte[] bitmapFileData = System.IO.File.ReadAllBytes(bitmapFilePath);
int fileSize = bitmapFileData.Length;
// The following is known about test.bmp. It is up to the developer
// to determine this information for bitmaps besides the given test.bmp.
int bitmapDataOffset = 62;
int width = 255;
int height = 255;
int bitsPerPixel = 1; // Monochrome image required!
int bitmapDataLength = 8160;
double widthInBytes = Math.Ceiling(width / 8.0);
// Copy over the actual bitmap data from the bitmap file.
// This represents the bitmap data without the header information.
byte[] bitmap = new byte[bitmapDataLength];
Buffer.BlockCopy(bitmapFileData, bitmapDataOffset, bitmap, 0, bitmapDataLength);
// Invert bitmap colors
for (int i = 0; i < bitmapDataLength; i++)
{
bitmap[i] ^= 0xFF;
}
// Create ASCII ZPL string of hexadecimal bitmap data
string ZPLImageDataString = BitConverter.ToString(bitmap);
ZPLImageDataString = ZPLImageDataString.Replace("-", string.Empty);
// Create ZPL command to print image
string[] ZPLCommand = new string[4];
ZPLCommand[0] = "^XA";
ZPLCommand[1] = "^FO20,20";
ZPLCommand[2] =
"^GFA, " +
bitmapDataLength.ToString() + "," +
bitmapDataLength.ToString() + "," +
widthInBytes.ToString() + "," +
ZPLImageDataString;
ZPLCommand[3] = "^XZ";
// Connect to printer
string ipAddress = "10.3.14.42";
int port = 9100;
System.Net.Sockets.TcpClient client =
new System.Net.Sockets.TcpClient();
client.Connect(ipAddress, port);
System.Net.Sockets.NetworkStream stream = client.GetStream();
// Send command strings to printer
foreach (string commandLine in ZPLCommand)
{
stream.Write(ASCIIEncoding.ASCII.GetBytes(commandLine), 0, commandLine.Length);
stream.Flush();
}
// Close connections
stream.Close();
client.Close();
Add 2 to widthInBytes - it works:
int bitmapDataOffset = int.Parse(bitmapFileData[10].ToString());
int width = 624; // int.Parse(bitmapFileData[18].ToString());
int height = int.Parse(bitmapFileData[22].ToString());
int bitsPerPixel = int.Parse(bitmapFileData[28].ToString()); // Monochrome image required!
int bitmapDataLength = bitmapFileData.Length - bitmapDataOffset;
double widthInBytes = Math.Ceiling(width / 8.0) + 2;
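For reference, these header fields are multi-byte little-endian values in a standard BMP, so reading them with BitConverter is more robust than parsing single bytes (a sketch assuming a standard BITMAPINFOHEADER):
// File header: pixel data offset at byte 10 (4 bytes).
// Info header: width at 18 (4 bytes), height at 22 (4 bytes), bits per pixel at 28 (2 bytes).
int bitmapDataOffset = BitConverter.ToInt32(bitmapFileData, 10);
int width = BitConverter.ToInt32(bitmapFileData, 18);
int height = BitConverter.ToInt32(bitmapFileData, 22);
int bitsPerPixel = BitConverter.ToInt16(bitmapFileData, 28);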
