I'm developing a mobile app that captures images of seedlings and estimates plant growth from the difference in white-pixel counts across each image series. I've already worked out how to apply the threshold, but I don't know how to count the black and white pixels in the image. I'm using the OpenCV for Unity plugin.
Basically this is all I have; I'm stuck on how to count the pixels. By the way, can OpenCV for Unity count pixels at all, given that it's unlike normal OpenCV?
public class thresholdpixel : MonoBehaviour
{
// Use this for initialization
void Start()
{
Texture2D imgTexture = TangkapGambar.MyTexture2;
Mat imgMat = new Mat(imgTexture.height, imgTexture.width, CvType.CV_8UC1);
Utils.texture2DToMat(imgTexture, imgMat);
Debug.Log("imgMat.ToString() " + imgMat.ToString());
Imgproc.threshold(imgMat, imgMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Texture2D texture = new Texture2D(imgMat.cols(), imgMat.rows(), TextureFormat.RGBA32, false);
Utils.matToTexture2D(imgMat, texture);
gameObject.GetComponent<Renderer>().material.mainTexture = texture;
}
void countPixel()
{
}
void Update()
{
}
}
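For the counting itself, here is a minimal sketch, assuming the plugin mirrors the standard OpenCV Core API (OpenCV for Unity wraps the same native library, so Core.countNonZero should be available; the Mat parameter on countPixel is added here for illustration). After THRESH_BINARY every pixel is either 0 or 255, so the non-zero count is the white count and the remainder is black.
void countPixel(Mat binaryMat)
{
    // Non-zero pixels are the white (255) ones in a binary image.
    int white = Core.countNonZero(binaryMat);
    // total() is rows * cols, so everything that isn't white is black (0).
    int black = (int)binaryMat.total() - white;
    Debug.Log("white: " + white + ", black: " + black);
}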
I'm trying to access image pixels by position. I've been using a byte array, but it doesn't give me the pixel at a given x, y the way Python's image[x][y] does. Is there a better way to access pixels?
I'm using the OpenCV plugin in Unity with Visual Studio and can't access them that way.
public Texture2D image;
Mat imageMat = new Mat(image.height, image.width, CvType.CV_8UC4);
Utils.texture2DToMat(image, imageMat); // actually converts texture2d to matrix
byte[] imageData = new byte[(int)(imageMat.total() * imageMat.channels())]; // pixel data of image
imageMat.get(0, 0, imageData);// gets pixel data
byte pixel = imageData[(y * imageMat.cols() + x) * imageMat.channels() + r];
x and y are the pixel coordinates in the code and r is the channel index, but I'm not able to access a particular x, y value with that code.
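For reference, OpenCV for Unity follows OpenCV's Java-style bindings, so a single pixel can also be read with Mat.get(row, col); note the row-first order, i.e. (y, x). A short sketch, reusing the variable names from above:
// get returns one double per channel, e.g. 4 values for CV_8UC4 (RGBA).
double[] px = imageMat.get(y, x);
double channelValue = px[r];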
There is no usual way to do this, because the operation is really slow. One trick, though, is to make a screen texture from the Camera class.
After you make the texture, you can use texture.GetPixel(x, y)
public class Example : MonoBehaviour
{
// Take a "screenshot" of a camera's Render Texture.
Texture2D RTImage(Camera camera)
{
// The Render Texture in RenderTexture.active is the one
// that will be read by ReadPixels.
var currentRT = RenderTexture.active;
RenderTexture.active = camera.targetTexture;
// Render the camera's view.
camera.Render();
// Make a new texture and read the active Render Texture into it.
Texture2D image = new Texture2D(camera.targetTexture.width, camera.targetTexture.height);
image.ReadPixels(new Rect(0, 0, camera.targetTexture.width, camera.targetTexture.height), 0, 0);
image.Apply();
// Replace the original active Render Texture.
RenderTexture.active = currentRT;
return image;
}
}
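Usage might look like this (a hypothetical caller; the camera is assumed to have a target Render Texture assigned, since RTImage reads from camera.targetTexture):
Texture2D shot = RTImage(myCamera);
Color pixel = shot.GetPixel(10, 20); // read a single pixel from the capture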
When getting video input from a webcam via WebCamTexture the bottom row of the returned image is completely black (RGB = 0,0,0).
I have tried several different webcams and get the same result with all of them.
I do get a correct image when using the Windows 10 Camera app and also when getting a webcam feed in Processing or Java.
The black line (always 1 pixel high and as wide as the image) appears when showing video on the canvas, saving a snapshot to disk and also when looking directly at the pixel data with GetPixels32().
[Screenshot: the black line at the bottom of the captured image]
I have confirmed that the image returned is the correct size, i.e. the black row is not an extra row. It's always the lowest line of the image that is black.
I have included the C# code I'm using below.
What is the cause of this black line and is there a way to avoid it?
I have looked for any information on this issue but not found anything online. I'm a complete beginner at Unity and would be grateful for any help.
I'm using Unity version 5.6.2 but had the same issue with 5.5
public class CamController : MonoBehaviour
{
private WebCamTexture webcamTexture;
private WebCamDevice[] devices;
void Start()
{
//start webcam
webcamTexture = new WebCamTexture();
devices = WebCamTexture.devices;
webcamTexture.deviceName = devices[0].name;
webcamTexture.Play();
}
void Update()
{
//if user presses C capture cam image
if (Input.GetKeyDown(KeyCode.C))
captureImage();
}
void captureImage()
{
//get webcam pixels
Color32[] camPixels;
camPixels = webcamTexture.GetPixels32();
//print pixel data for first and second (from bottom) lines of image to console
for (int y = 0; y < 2; y++)
{
Debug.Log("Line: " + y);
for (int x = 0; x < webcamTexture.width; x++)
{
Debug.Log(x + " - " + camPixels[y * webcamTexture.width + x]);
}
}
//save webcam image as png
Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
brightBGTexture.SetPixels32(camPixels, 0);
brightBGTexture.Apply();
byte[] pngBytes = brightBGTexture.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
}
}
After calling SetPixels32, you must call Texture2D.Apply to apply the changes to the Texture2D.
You should do that before encoding the Texture2D to PNG.
//save webcam image as png
Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
brightBGTexture.SetPixels32(camPixels, 0);
brightBGTexture.Apply();
byte[] pngBytes = brightBGTexture.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
EDIT:
Even with Texture2D.Apply() called, the problem is still there. This is a bug in the WebCamTexture API, and you should file a bug report through the Editor.
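If you need clean output in the meantime, one possible workaround (my assumption, not part of the original answer) is to crop the single black row off before encoding. GetPixels32 returns rows bottom-up, so skipping the first width entries drops the bottom row:
// Hypothetical workaround: drop the 1-pixel black bottom row before saving.
Color32[] camPixels = webcamTexture.GetPixels32();
int w = webcamTexture.width;
int h = webcamTexture.height;
Color32[] croppedPixels = new Color32[w * (h - 1)];
System.Array.Copy(camPixels, w, croppedPixels, 0, croppedPixels.Length);
Texture2D cropped = new Texture2D(w, h - 1);
cropped.SetPixels32(croppedPixels);
cropped.Apply();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", cropped.EncodeToPNG());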
I have two renderer objects (A and B) in my scene connected to two different cameras (green square and red square):
I am using the following script on both render objects to create a render texture on the corresponding camera and then draw it as a texture on the object each frame:
using UnityEngine;
using System.Collections;
[ExecuteInEditMode]
public class CameraRenderer : MonoBehaviour
{
public Camera Camera;
public Renderer Renderer;
void Start()
{
RenderTexture renderTexture = new RenderTexture (256, 256, 16, RenderTextureFormat.ARGB32);
renderTexture.Create ();
Camera.targetTexture = renderTexture;
}
void Update ()
{
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
}
Texture2D GetCameraTexture()
{
RenderTexture currentRenderTexture = RenderTexture.active;
RenderTexture.active = Camera.targetTexture;
Camera.Render();
Texture2D texture = new Texture2D(Camera.targetTexture.width, Camera.targetTexture.height);
texture.ReadPixels(new Rect(0, 0, Camera.targetTexture.width, Camera.targetTexture.height), 0, 0);
texture.Apply();
RenderTexture.active = currentRenderTexture;
return texture;
}
}
I am expecting to see two different images on A and B from the different cameras, but I am seeing the same image. I originally used a render texture created in the editor and attached to the camera, but thought that might be what was causing them to render the same thing, so I tried creating a new texture on each object. Sadly this still resulted in the same outcome.
I'm pretty new to Unity, so I've run out of ideas pretty fast. Any suggestions would be great!
I wouldn't advise naming your objects after your class names. Anyway, I think the renderers are sharing the same material, so they both render whichever texture was assigned to it last.
Either use Renderer.material to automatically create a new instance of the material, or manually assign different materials to the two renderers.
Try,
Renderer.material.mainTexture = GetCameraTexture ();
Instead of,
Renderer.sharedMaterial.mainTexture = GetCameraTexture ();
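If you prefer to make the split explicit, you can also clone the material once in Start; a sketch, equivalent in effect to using Renderer.material:
void Start()
{
    // Give this renderer its own material instance so the two
    // objects stop sharing one texture slot.
    Renderer.material = new Material(Renderer.sharedMaterial);

    RenderTexture renderTexture = new RenderTexture(256, 256, 16, RenderTextureFormat.ARGB32);
    renderTexture.Create();
    Camera.targetTexture = renderTexture;
}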
I'm creating a 2D game with Unity3D, but I'm quite new to it.
I'm trying to draw a monochromatic background, with some sprites in front of it.
I found this code:
using UnityEngine;
using System.Collections;
public class GUIRect : MonoBehaviour {
public Color color;
// Use this for initialization
void Start () {
}
// Update is called once per frame
void Update () {
}
private static Texture2D _staticRectTexture;
private static GUIStyle _staticRectStyle;
// Note that this function is only meant to be called from OnGUI() functions.
public static void GUIDrawRect( Rect position, Color color )
{
if( _staticRectTexture == null )
{
_staticRectTexture = new Texture2D( 1, 1 );
}
if( _staticRectStyle == null )
{
_staticRectStyle = new GUIStyle();
}
_staticRectTexture.SetPixel( 0, 0, color );
_staticRectTexture.Apply();
_staticRectStyle.normal.background = _staticRectTexture;
GUI.Box( position, GUIContent.none, _staticRectStyle );
}
void OnGUI() {
GUIDrawRect(Rect.MinMaxRect(0, 0, Screen.width, Screen.height), color);
}
}
I attached it to an empty game object, and it's working well, but I can't control the z-ordering between it and the other sprites in the scene.
Is this the correct approach? If so, how should I change its draw order?
Are you using Unity Pro? If so, you can use post-processing shaders.
http://docs.unity3d.com/Manual/script-GrayscaleEffect.html
If you want a 2D background, just use the 2D setting when starting Unity.
Then your background can be sprite textures layered upon each other. There's no reason to draw on the GUI except for actual GUI items.
http://answers.unity3d.com/questions/637876/how-to-make-background-for-2d-game.html
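If you go the sprite route, draw order is controlled with sorting layers rather than OnGUI (legacy GUI is always drawn on top of the scene, which is why the GUIRect approach can't be z-ordered against sprites). A minimal sketch using standard Renderer properties:
// Put the background sprite behind everything else.
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.sortingLayerName = "Background"; // assumes a sorting layer with this name exists
sr.sortingOrder = -10;              // lower values render behind higher ones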
OK, so I ported a game I've been working on over to MonoGame, but I'm having a shader issue now that it's ported. It's an odd bug: it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple one that looks at a greyscale image and, based on the grey value, picks a color from the lookup texture. Basically I'm using this to randomize a sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but not after that, just giving a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; //calculate in program, pass to shader (between 0 and 1)
sampler colorTableSampler =
sampler_state
{
Texture = <colorTable>;
};
float4 PixelShaderFunction(float2 c: TEXCOORD0) : COLOR0
{
//get current pixel of the texture (greyscale)
float4 color = tex2D(input, c);
//set the values to compare to.
//set the values to compare to (float literals; 139/255 as integers would truncate to 0)
float hair = 139.0/255.0; float hairless = 140.0/255.0;
float shirt = 181.0/255.0; float shirtless = 182.0/255.0;
//var to hold the new color; initialized so every path returns a defined value
float4 swap = color;
//pixel coordinate for lookup
float2 i;
i.y = 1;
//compare and swap
if (color.r >= hair && color.r <= hairless)
{
i.x = ((0.5 + seed + 96)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r >= shirt && color.r <= shirtless)
{
i.x = ((0.5 + seed + 64)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r == 1)
{
i.x = ((0.5 + seed + 32)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r == 0)
{
i.x = ((0.5 + seed)/128);
swap = tex2D(colorTableSampler, i);
}
return swap;
}
technique ColorSwap
{
pass Pass1
{
// TODO: set renderstates here.
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader; I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
//get a random number to pass into the shader.
Random r = new Random();
float seed = (float)r.Next(0, 32);
//create the texture to copy color data into
Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);
//create a render target to draw a character to.
RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
gd.SetRenderTarget(rendTarget);
//set background of new render target to transparent.
//gd.Clear(Microsoft.Xna.Framework.Color.Black);
//start drawing to the new render target
sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);
//send the random value to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
//send the palette texture to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
//apply the effect
Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();
//draw the texture (now with color!)
sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);
//end drawing
sb.End();
//reset rendertarget
gd.SetRenderTarget(null);
//copy the drawn and colored enemy to a non-volatile texture (instead of the render target)
//create the color array the size of the texture.
Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
//get all color data from the render target
rendTarget.GetData<Color>(cs);
//move the color data into the texture.
enemyTex.SetData<Color>(cs);
//return the finished texture.
return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have the "at" (@) sign in front of the string when you have also escaped the backslashes; unless you actually want \\ in your string, it looks strange in a file path.
You have written in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(#"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just from reading through it, but this could really be tricky to figure out just by looking at your code.
I'd recommend doing a graphics profile (via Visual Studio), capturing the frame that renders correctly and then the frame that renders incorrectly, and comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built against shader model 2 HLSL.
Also, the built-in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3; you will get issues if you try to replace the pixel portion of a shader with one compiled at a different level.
This is the only issue I can see with your code.
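Applied to the technique block from the question, that would be (same pass, only the profile changes):
technique ColorSwap
{
    pass Pass1
    {
        PixelShader = compile ps_4_0_level_9_3 PixelShaderFunction();
    }
}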