When getting video input from a webcam via WebCamTexture the bottom row of the returned image is completely black (RGB = 0,0,0).
I have tried several different webcams and get the same result with all of them.
I do get a correct image when using the Windows 10 Camera app and also when getting a webcam feed in Processing or Java.
The black line (always 1 pixel high and as wide as the image) appears when showing video on the canvas, when saving a snapshot to disk, and when looking directly at the pixel data with GetPixels32().
Here is the black line at the bottom of the image:
I have confirmed that the image returned is the correct size, i.e. the black row is not an extra row; it is always the lowest line of the image that is black.
I have included the C# code I'm using below.
What is the cause of this black line and is there a way to avoid it?
I have looked for information on this issue but haven't found anything online. I'm a complete beginner at Unity and would be grateful for any help.
I'm using Unity version 5.6.2 but had the same issue with 5.5.
using System.IO;
using UnityEngine;

public class CamController : MonoBehaviour
{
    private WebCamTexture webcamTexture;
    private WebCamDevice[] devices;

    void Start()
    {
        // start webcam
        webcamTexture = new WebCamTexture();
        devices = WebCamTexture.devices;
        webcamTexture.deviceName = devices[0].name;
        webcamTexture.Play();
    }

    void Update()
    {
        // if the user presses C, capture a cam image
        if (Input.GetKeyDown(KeyCode.C))
            captureImage();
    }

    void captureImage()
    {
        // get webcam pixels
        Color32[] camPixels = webcamTexture.GetPixels32();

        // print pixel data for the first and second (from bottom) lines of the image
        for (int y = 0; y < 2; y++)
        {
            Debug.Log("Line: " + y);
            for (int x = 0; x < webcamTexture.width; x++)
            {
                Debug.Log(x + " - " + camPixels[y * webcamTexture.width + x]);
            }
        }

        // save webcam image as PNG
        Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
        brightBGTexture.SetPixels32(camPixels, 0);
        brightBGTexture.Apply();
        byte[] pngBytes = brightBGTexture.EncodeToPNG();
        File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
    }
}
After calling SetPixels32, you must call Texture2D.Apply to apply the changes to the Texture2D. You should do that before encoding the Texture2D to PNG.
//save webcam image as png
Texture2D brightBGTexture = new Texture2D(webcamTexture.width, webcamTexture.height);
brightBGTexture.SetPixels32(camPixels, 0);
brightBGTexture.Apply();
byte[] pngBytes = brightBGTexture.EncodeToPNG();
File.WriteAllBytes(Application.dataPath + "/../camImage.png", pngBytes);
EDIT:
Even with calling Texture2D.Apply() the problem is still there. This is a bug in the WebCamTexture API and you should file a bug report through the Editor.
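Until that is resolved, a possible stopgap (a sketch only, assuming the defect really is limited to the bottom row) is to overwrite the black row with the row above it before building the Texture2D:

// Workaround sketch: GetPixels32 returns rows bottom-to-top, so the black
// bottom row occupies indices 0..width-1. Copy row 1 (y = 1) down into it.
Color32[] camPixels = webcamTexture.GetPixels32();
int w = webcamTexture.width;
for (int x = 0; x < w; x++)
{
    camPixels[x] = camPixels[w + x];
}

This loses one real row of data but removes the visible artifact.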
I am trying to create an application that generates a bitmap image every frame based on user actions and displays that image on the screen. I would like the application to update that image in Unity in real time, as soon as the user makes another action.
I have created an application that does this, and it works. However, it is very slow. My Update() method is attached below.
My idea was:
Capture user data (mouse location).
Convert that data into a special signal format that another program recognizes.
Have that program return a bitmap image.
Use that bitmap as a texture and update the existing texture with the new image.
Code:
UnityEngine.Texture2D oneTexture;
Bitmap currentBitmap;
private int frameCount = 0;

void Update()
{
    // Show mouse position in the Unity environment.
    double xValue = Input.mousePosition.x;
    double yValue = Screen.height - Input.mousePosition.y;
    myPoints = "" + xValue + "," + yValue + Environment.NewLine;

    // Show the heatmap being recorded.
    signals = Program.ConvertStringToSignalsList(myPoints);
    currentBitmap = Program.CreateMouseHeatmap(Screen.width, Screen.height, signals);

    // Update the old heatmap texture.
    UpdateTextureFromBitmap();
    ri.texture = oneTexture; // ri: presumably a RawImage field declared elsewhere
    ri.rectTransform.sizeDelta = new Vector2(Screen.width, Screen.height);
    frameCount++;

    // Write points to the database.
    StartCoroutine(WriteToDB(xValue, yValue)); // <<<<< Comment out during playback.
}

private void UpdateTextureFromBitmap()
{
    // Convert the Bitmap into a byte array instead of creating an actual
    // .bmp image file each frame.
    byte[] imageBytes = ImageToBytes(currentBitmap);
    BMPLoader loader = new BMPLoader();
    BMPImage img = loader.LoadBMP(imageBytes);

    // Only initialize the texture once.
    if (frameCount == 0)
    {
        oneTexture = img.ToTexture2D();
    }
    else
    {
        Color32[] imageData = img.imageData;
        oneTexture.SetPixels32(imageData);
        oneTexture.Apply();
    }
}
I was wondering if someone could help me improve the rate at which the image updates to the screen? I know it is possible to make this program much faster, but I am so new to Unity and C# that I don't know how. If there is a completely different way I should be going about this, I am open to that too. Any help would be appreciated. Thanks!
Also, below is a screenshot of the Profiler showing the breakdown of CPU usage. Currently every frame takes about 500 ms.
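A plausible culprit for much of that time is encoding the Bitmap to bytes and re-parsing it with BMPLoader every frame. A minimal sketch of an alternative, assuming the heatmap generator can be changed to expose raw pixel data (GetHeatmapPixels below is a hypothetical replacement for CreateMouseHeatmap, e.g. built on Bitmap.LockBits):

// Sketch: write straight into the existing texture, skipping the
// Bitmap -> byte[] -> BMPImage round-trip entirely.
private void UpdateTextureDirect(Color32[] pixels) // pixels from hypothetical Program.GetHeatmapPixels
{
    if (oneTexture == null)
        oneTexture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGBA32, false);
    oneTexture.SetPixels32(pixels);
    oneTexture.Apply(false); // false: skip mipmap regeneration for a cheaper upload
}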
I need help converting an Emgu CV VideoCapture image into data for Unity's LoadRawTextureData so I can display images/video in Unity3D 2018.1.
I am able to display images, but they show strange lines and distortion, some kind of stuttering/scattering.
I read the question in this post and applied the solution, but the problem is not solved: Convert Mat to Texture2d (Stack Overflow).
I think the color space of the Emgu CV image does not match the Texture2D.
Code:
void Start () {
    vc = new VideoCapture(0);
    vc.FlipVertical = true; // The image I am getting from the webcam is flipped
    myMaterial = GetComponent<Renderer>().material;
    frameHeight = (int)vc.GetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameHeight);
    frameWidth = (int)vc.GetCaptureProperty(Emgu.CV.CvEnum.CapProp.FrameWidth);
    camera = new Texture2D(frameHeight, frameWidth, TextureFormat.RGB24, false, false);
}
I get the images in this part of the code and convert the color space to match the Texture2D format.
void Update () {
    Mat secondimage = new Mat();
    Mat myimage = vc.QueryFrame();
    CvInvoke.CvtColor(myimage, secondimage, Emgu.CV.CvEnum.ColorConversion.Bgr2Rgb);
    Image<Rgb, byte> hello = secondimage.ToImage<Rgb, byte>();
    camera.LoadRawTextureData(hello.bytes);
    camera.Apply();
    myMaterial.mainTexture = camera;
}
Results:
The result is strange; take a look at the following images.
This is my hand held in front of the camera. This image is the original Texture2D image.
Image1
This image is displayed on a 3D plane in Unity3D.
Image2
As the code shows, I already convert the image with Bgr2Rgb.
Update:
I simply exchanged the width and height in the Texture2D constructor and it worked: the constructor takes width first, then height, and with the two transposed every row of the raw data was loaded at the wrong stride, which produced the scattered lines.
camera = new Texture2D(frameWidth, frameHeight, TextureFormat.RGB24,false,false);
I'm developing a mobile app to capture images of seedlings and estimate plant growth by comparing the number of white pixels across each image series. I already know how to apply the threshold, but I don't know how to count the black and white pixels in the image. I'm using the OpenCV for Unity plugin.
Basically this is all I have; I'm stuck on how to count the pixels. By the way, can OpenCV for Unity count pixels at all, given that it isn't quite the same as regular OpenCV?
public class thresholdpixel : MonoBehaviour
{
    // Use this for initialization
    void Start()
    {
        Texture2D imgTexture = TangkapGambar.MyTexture2;

        Mat imgMat = new Mat(imgTexture.height, imgTexture.width, CvType.CV_8UC1);
        Utils.texture2DToMat(imgTexture, imgMat);
        Debug.Log("imgMat.ToString() " + imgMat.ToString());

        Imgproc.threshold(imgMat, imgMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

        Texture2D texture = new Texture2D(imgMat.cols(), imgMat.rows(), TextureFormat.RGBA32, false);
        Utils.matToTexture2D(imgMat, texture);
        gameObject.GetComponent<Renderer>().material.mainTexture = texture;
    }

    void countPixel()
    {
    }

    void Update()
    {
    }
}
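For what it's worth, OpenCV for Unity mirrors the OpenCV Java API, so the usual counting call should be available. A sketch for countPixel(), assuming imgMat is promoted from a local variable in Start() to a class field:

// Sketch: assumes imgMat is a field holding the thresholded CV_8UC1 image.
void countPixel()
{
    // After THRESH_BINARY the Mat contains only 0 (black) and 255 (white).
    int totalPixels = (int)imgMat.total();
    int whitePixels = Core.countNonZero(imgMat); // counts the 255s
    int blackPixels = totalPixels - whitePixels;
    Debug.Log("white: " + whitePixels + ", black: " + blackPixels);
}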
I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a Fove HMD and would like to record/make a video from the integrated camera. So far I've managed to take a snapshot of the camera every update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them? Or can someone point me in the right direction for further research?
public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    private List<Texture2D> snapshots = new List<Texture2D>(); // must be initialized before use

    void Start ()
    {
        //------------- just checking if a webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); // screen.width and screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }
        //------------- camera found, start with the video
        foveCamera.Play();
        camAvailable = true;
    }

    void Update ()
    {
        if (!camAvailable)
        {
            return;
        }
        // grab a snapshot from the camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start method just finds and enables the camera. In the Update method I take a snapshot of the video every frame.
After I "stop" the Update method, I would like to convert the gathered Texture2D objects into a video.
Thanks in advance
Create a MediaEncoder:
using UnityEditor; // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder

var vidAttr = new VideoTrackAttributes
{
    bitRateMode = VideoBitrateMode.Medium,
    frameRate = new MediaRational(25),
    width = 320,
    height = 240,
    includeAlpha = false
};
var audAttr = new AudioTrackAttributes
{
    sampleRate = new MediaRational(48000),
    channelCount = 2
};
var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
Each snapshot is already a Texture2D in your code, so call AddFrame for each one in sequence to append it to the MediaEncoder:
enc.AddFrame(tex);
Once done, call Dispose to close the file:
enc.Dispose();
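Putting the steps together, a minimal sketch (MediaEncoder lives in UnityEditor.Media and is editor-only; every snapshot is assumed to match the width/height declared in vidAttr):

using System.Collections.Generic;
using UnityEditor.Media;
using UnityEngine;

public static class SnapshotVideoWriter
{
    // Sketch: encode a list of equally-sized snapshots into an .mp4 file.
    public static void Write(List<Texture2D> snapshots,
                             VideoTrackAttributes vidAttr, AudioTrackAttributes audAttr)
    {
        using (var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr))
        {
            foreach (Texture2D tex in snapshots)
                enc.AddFrame(tex);
        } // disposing the encoder closes and finalizes the file
    }
}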
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second is harder but cleaner. Both rely on FFmpeg.
1) Save every frame into an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images). This is slow due to the many disk operations.
2) Use FFmpeg via the wrapper implemented in AForge and supply its VideoFileWriter class with the images that you have.
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so in order to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
Bitmap bmp;
Texture2D snap;
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);
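Here vFWriter is the AForge VideoFileWriter, opened once before the loop; roughly like this (a sketch; file name, size, frame rate, and codec are assumptions):

using AForge.Video.FFMPEG;

// Sketch: open the writer once, write one Bitmap per snapshot, then close.
int width = 1280, height = 720; // assumed to match the snapshot size
var vFWriter = new VideoFileWriter();
vFWriter.Open("video.avi", width, height, 25, VideoCodec.MPEG4);
// ... vFWriter.WriteVideoFrame(bmp) for each converted snapshot ...
vFWriter.Close();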
Neither method is the fastest, though, so if performance is an issue here you might want to operate on lower-level data like DirectX or OpenGL textures.
OK, so I ported a game I have been working on over to MonoGame, but now that it's ported I'm having a shader issue. It's an odd bug: it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple one that looks at a greyscale image and, based on the grey value, picks a color from the lookup texture. Basically I'm using this to randomize a sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but doesn't work after that, just giving a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; // calculated in program, passed to shader (between 0 and 1)

sampler colorTableSampler =
sampler_state
{
    Texture = <colorTable>;
};

float4 PixelShaderFunction(float2 c : TEXCOORD0) : COLOR0
{
    // get current pixel of the texture (greyscale)
    float4 color = tex2D(input, c);

    // set the values to compare to
    float hair = 139/255; float hairless = 140/255;
    float shirt = 181/255; float shirtless = 182/255;

    // var to hold new color
    float4 swap;

    // pixel coordinate for lookup
    float2 i;
    i.y = 1;

    // compare and swap
    if (color.r >= hair && color.r <= hairless)
    {
        i.x = ((0.5 + seed + 96)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r >= shirt && color.r <= shirtless)
    {
        i.x = ((0.5 + seed + 64)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 1)
    {
        i.x = ((0.5 + seed + 32)/128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 0)
    {
        i.x = ((0.5 + seed)/128);
        swap = tex2D(colorTableSampler, i);
    }
    return swap;
}

technique ColorSwap
{
    pass Pass1
    {
        // TODO: set renderstates here.
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader; I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
    // get a random number to pass into the shader
    Random r = new Random();
    float seed = (float)r.Next(0, 32);

    // create the texture to copy color data into
    Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);

    // create a render target to draw a character to
    RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
        false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
    gd.SetRenderTarget(rendTarget);

    // set background of new render target to transparent
    //gd.Clear(Microsoft.Xna.Framework.Color.Black);

    // start drawing to the new render target
    sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);

    // send the random value to the shader
    Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
    // send the palette texture to the shader
    Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
    // apply the effect
    Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();

    // draw the texture (now with color!)
    sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);

    // end drawing
    sb.End();

    // reset render target
    gd.SetRenderTarget(null);

    // copy the drawn and colored enemy to a non-volatile texture (instead of the render target)
    // create the color array the size of the texture
    Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
    // get all color data from the render target
    rendTarget.GetData<Color>(cs);
    // move the color data into the texture
    enemyTex.SetData<Color>(cs);

    // return the finished texture
    return enemyTex;
}
And just in case, here is the code for loading the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have the "at" (@) sign in front of the string when you also escaped the backslashes, unless you really want \\ in your string; it looks strange in a file path.
You wrote in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
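As an aside, if the effect is built through the MonoGame Content Pipeline instead of being compiled to .mgfx by hand, the manual BinaryReader disappears entirely; a sketch, assuming the effect was added to the Content project and builds to Content/shaders/test.xnb:

// Sketch: load the compiled effect through the ContentManager (path is
// relative to the Content root, without the file extension).
colorSwapEffect = Content.Load<Effect>("shaders/test");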
I don't see anything super obvious just from reading through it, but this could be tricky to figure out just by looking at your code.
I'd recommend doing a graphics profile (via Visual Studio) and capturing a frame that renders correctly and a frame that renders incorrectly, then comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built against shader model 2 (ps_2_0).
Also, the built-in SpriteBatch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3; you will get issues if you try to replace the pixel portion of a shader with a different shader model.
This is the only issue I can see with your code.