StartCoroutine() to fix targetTexture.ReadPixels error - c#

As the title suggests, I have a problem with an error occurring on the line
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
Error:
ReadPixels was called to read pixels from system frame buffer, while not inside drawing frame.
UnityEngine.Texture2D:ReadPixels(Rect, Int32, Int32)
As I have understood from other posts, one way to solve this issue is to write an IEnumerator method that does yield return new WaitForSeconds (or similar) and start it with StartCoroutine(methodname), so that the frame gets time to render and there are actually pixels to read-ish.
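For reference, the pattern those posts show looks roughly like this (just a minimal sketch with a made-up method name, not yet wired into my code below):

// Minimal sketch of the coroutine pattern described in the other posts:
// ReadPixels is only valid once the frame has finished rendering, so the
// coroutine waits for the end of the frame before reading.
private IEnumerator ReadPixelsAtEndOfFrame()
{
    yield return new WaitForEndOfFrame();

    var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    tex.Apply();
}

// ...started somewhere with:
// StartCoroutine(ReadPixelsAtEndOfFrame());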
What I don't get is where in the following code this method would make the most sense. Which part does not get to load in time?
PhotoCapture photoCaptureObject = null;
Texture2D targetTexture = null;
public string path = "";
CameraParameters cameraParameters = new CameraParameters();

private void Awake()
{
    var cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

    // Create a PhotoCapture object
    PhotoCapture.CreateAsync(false, captureObject =>
    {
        photoCaptureObject = captureObject;

        cameraParameters.hologramOpacity = 0.0f;
        cameraParameters.cameraResolutionWidth = cameraResolution.width;
        cameraParameters.cameraResolutionHeight = cameraResolution.height;
        cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;
    });
}

private void Update()
{
    // if not initialized yet don't take input
    if (photoCaptureObject == null) return;

    if (Input.GetKey("k") || Input.GetKey("k"))
    {
        Debug.Log("k was pressed");

        VuforiaBehaviour.Instance.gameObject.SetActive(false);

        // Activate the camera
        photoCaptureObject.StartPhotoModeAsync(cameraParameters, result =>
        {
            if (result.success)
            {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            }
            else
            {
                Debug.LogError("Couldn't start photo mode!", this);
            }
        });
    }
}

private static string FileName(int width, int height)
{
    return $"screen_{width}x{height}_{DateTime.Now:yyyy-MM-dd_HH-mm-ss}.png";
}

private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
    targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
    targetTexture.Apply();

    byte[] bytes = targetTexture.EncodeToPNG();
    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));

    // save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

private void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    // Shutdown the photo capture resource
    VuforiaBehaviour.Instance.gameObject.SetActive(true);
    photoCaptureObject.Dispose();
    photoCaptureObject = null;
}
Sorry if this counts as a duplicate to this for example.
Edit
And this one might be useful when I get to that point.
Is it so that I don't need these three lines at all?
Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();
As written in the comments, the difference between using these three lines and not is that with them the saved photo has a black background plus the AR-GUI. Without the second line of code above, I get a photo with the AR-GUI, but the background is a live stream of my computer webcam. And I really don't want to see the computer webcam, but what the HoloLens sees.

Your three lines
Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
targetTexture.ReadPixels(new Rect(0, 0, cameraResolution.width, cameraResolution.height), 0, 0);
targetTexture.Apply();
don't make much sense to me. Texture2D.ReadPixels is for creating a screenshot, so you would overwrite the texture you just received from PhotoCapture with a screenshot? (Also with incorrect dimensions, since the camera resolution is very probably != the screen resolution.)
That's also the reason for
As written in the comments the difference between using these three lines and not is that the photo saved has a black background + the AR-GUI.
After doing
photoCaptureFrame.UploadImageDataToTexture(targetTexture);
you already have the Texture2D received from the PhotoCapture in the targetTexture.
I think you probably confused it with Texture2D.GetPixels which is used to get the pixel data of a given Texture2D.
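To make the difference explicit, here is a rough sketch (hypothetical texture names, not your code):

// ReadPixels copies FROM the currently rendered frame buffer INTO the texture it is called on
screenshotTex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
screenshotTex.Apply();

// GetPixels reads the pixel data OUT of a texture you already have (e.g. the captured photo)
Color[] photoPixels = targetTexture.GetPixels();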
I would like to crop the captured photo from the center in the end and am thinking that maybe that is possible with this code line? Beginning the new Rect at other pixels than (0, 0)?
What you actually want is cropping the received Texture2D from the center, as you mentioned in the comments. You can do that using GetPixels(int x, int y, int blockWidth, int blockHeight, int miplevel), which cuts out a certain area of a given Texture2D:
public static Texture2D CropAroundCenter(Texture2D input, Vector2Int newSize)
{
    if (input.width < newSize.x || input.height < newSize.y)
    {
        Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
        return null;
    }

    // get the pixel coordinate of the center of the input texture
    var center = new Vector2Int(input.width / 2, input.height / 2);

    // Get pixels around center.
    // GetPixels starts with (0, 0) in the bottom left corner,
    // so as the name says, center.x, center.y would get the pixel in the center.
    // We want to start getting pixels from center - half of the newSize
    // and, starting there, read newSize pixels in both dimensions.
    var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

    // Create a new texture with newSize
    var output = new Texture2D(newSize.x, newSize.y);
    output.SetPixels(pixels);
    output.Apply();

    return output;
}
For (hopefully) better understanding, here is an illustration of what that GetPixels overload does with the given values:
And then use it in
private void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    // for example take only half of the texture's width and height
    targetTexture = CropAroundCenter(targetTexture, new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));

    byte[] bytes = targetTexture.EncodeToPNG();
    string filename = FileName(Convert.ToInt32(targetTexture.width), Convert.ToInt32(targetTexture.height));

    // save to folder under assets
    File.WriteAllBytes(Application.streamingAssetsPath + "/Snapshots/" + filename, bytes);
    Debug.Log("The picture was uploaded");

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}
Or you could make it an extension method in a separate static class like
public static class Texture2DExtensions
{
    public static void CropAroundCenter(this Texture2D input, Vector2Int newSize)
    {
        if (input.width < newSize.x || input.height < newSize.y)
        {
            Debug.LogError("You can't cut out an area of an image which is bigger than the image itself!");
            return;
        }

        // get the pixel coordinate of the center of the input texture
        var center = new Vector2Int(input.width / 2, input.height / 2);

        // Get pixels around center.
        // GetPixels starts with (0, 0) in the bottom left corner,
        // so as the name says, center.x, center.y would get the pixel in the center.
        // We want to start getting pixels from center - half of the newSize
        // and, starting there, read newSize pixels in both dimensions.
        var pixels = input.GetPixels(center.x - newSize.x / 2, center.y - newSize.y / 2, newSize.x, newSize.y, 0);

        // Resize the texture (creating a new one didn't work)
        input.Resize(newSize.x, newSize.y);
        input.SetPixels(pixels);
        input.Apply(true);
    }
}
and use it instead like
targetTexture.CropAroundCenter(new Vector2Int(targetTexture.width / 2, targetTexture.height / 2));
Note:
UploadImageDataToTexture: You may only use this method if you specified the BGRA32 format in your CameraParameters.
Luckily you use that anyway ;)
Keep in mind that this operation will happen on the main thread and therefore be slow.
However, the only alternative would be CopyRawImageDataIntoBuffer and generating the texture on another thread or externally, so I'd say it is ok to stay with UploadImageDataToTexture ;)
and
The captured image will also appear flipped on the HoloLens. You can reorient the image by using a custom shader.
By "flipped" they actually mean that the Y axis of the texture is upside down; the X axis is correct.
For flipping the Texture vertically you can use a second extension method:
public static class Texture2DExtensions
{
    public static void CropAroundCenter() { /* ... */ }

    public static void FlipVertically(this Texture2D texture)
    {
        var pixels = texture.GetPixels();
        var flippedPixels = new Color[pixels.Length];

        // These for loops run through each individual pixel and
        // write it with an inverted Y coordinate into flippedPixels
        for (var x = 0; x < texture.width; x++)
        {
            for (var y = 0; y < texture.height; y++)
            {
                var pixelIndex = x + y * texture.width;
                var flippedIndex = x + (texture.height - 1 - y) * texture.width;

                flippedPixels[flippedIndex] = pixels[pixelIndex];
            }
        }

        texture.SetPixels(flippedPixels);
        texture.Apply();
    }
}
and use it like
targetTexture.FlipVertically();
Result: (I used FlipVertically and cropped to half the size every second for this example, on a given texture, but it should work the same for a taken picture.)
Image source: http://developer.vuforia.com/sites/default/files/sample-apps/targets/imagetargets_targets.pdf
Update
To your button problem:
Don't use
if (Input.GetKey("k") || Input.GetKey("k"))
First of all, you are checking the exact same condition twice. Additionally, GetKey fires every frame while the key stays pressed. Instead rather use
if (Input.GetKeyDown("k"))
which fires only a single time. I guess there was an issue with Vuforia and PhotoCapture since your original version fired so often and maybe you had some concurrent PhotoCapture processes...

Related

Unity's SetPixel method does not color out the given pixels

I am currently developing a pixel art program in Unity. Obviously, it has a pencil tool with a script on it that I have made.
Unfortunately, the SetPixel method does not color the pixels. I don't know if it is the method itself that is not working or something else.
This is the code I am using:
[SerializeField] private Sprite textureRendererSprite;

private Texture2D texture;
private MouseCoordinates mouseCoordinates;

void Start()
{
    mouseCoordinates = GetComponent<MouseCoordinates>();
    texture = textureRendererSprite.texture;
}

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        texture.SetPixel(int.Parse(mouseCoordinates.posInt.x.ToString()), int.Parse(mouseCoordinates.posInt.y.ToString()), Color.black);
        Debug.Log(int.Parse(mouseCoordinates.posInt.x.ToString()));
        Debug.Log(int.Parse(mouseCoordinates.posInt.y.ToString()));
    }
}
Also, this is my MouseCoordinates script:
[SerializeField] private Canvas parentCanvas = null;
[SerializeField] private RectTransform rect = null;
[SerializeField] private Text text;
public Vector2 posInt;
[SerializeField] private Camera UICamera = null;

void Start()
{
    if (rect == null)
        rect = GetComponent<RectTransform>();

    if (parentCanvas == null)
        parentCanvas = GetComponentInParent<Canvas>();

    if (UICamera == null && parentCanvas.renderMode == RenderMode.WorldSpace)
        UICamera = parentCanvas.worldCamera;
}

public void OnPointerClick(PointerEventData eventData)
{
    RectTransformUtility.ScreenPointToLocalPointInRectangle(rect, eventData.position, UICamera, out Vector2 localPos);

    localPos.x += rect.rect.width / 2f;
    localPos.y += rect.rect.height / 2f;

    posInt.x = ((int)localPos.x);
    posInt.y = ((int)localPos.y);

    text.text = (posInt.x + ", " + posInt.y).ToString();
}
I was a little bored, so here is a fully working pixel draw I just whipped up. The one part you were missing in your implementation is Texture2D.Apply which, per the Texture2D.SetPixels doc page:
This function takes a color array and changes the pixel colors of the
whole mip level of the texture. Call Apply to actually upload the
changed pixels to the graphics card.
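So in isolation, the minimal change to your Update would be something like this (same field names as your snippet; I also replaced the int.Parse(...ToString()) round-trip with a plain cast):

void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        // SetPixel only changes the CPU-side copy of the pixel data...
        texture.SetPixel((int)mouseCoordinates.posInt.x, (int)mouseCoordinates.posInt.y, Color.black);

        // ...Apply is what actually uploads the change to the GPU so it becomes visible
        texture.Apply();
    }
}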
Now to your actual implementation. You do not need a majority of the data you are caching, as a PointerEventData already has most of it. The only component you will need is the Image component that you want to change.
OnPointerClick is fine, but that only registers clicks, not dragging. If you want to make a pixel art tool, most art is done by dragging a cursor or stylus, so you will want to use a drag handler (IDragHandler) instead of, or along with, your click handler.
One other note, you are not adding any brush size. More of a QoL update to your snippet, but with the addition of a brush size there are other complications that arise. SetPixel is bottom left aligned and must be contained within the bounds of the texture. You can correct this by offsetting the center point of your click by half a brush size, then clamping the width and height of your box.
Here is the current snippet:
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

public class TestScript : MonoBehaviour, IPointerClickHandler, IDragHandler
{
    // color we are setting pixels to
    [SerializeField] private Color clr = Color.white;

    // our source UI image - it can be a raw image or sprite renderer, I just used UI image
    [SerializeField] private Image img = null;

    [Range(1, 255)]
    [SerializeField] private int BrushSize = 1;

    // the texture we are going to manipulate
    private Texture2D tex2D = null;

    private void Awake()
    {
        Sprite imgSprite = img.sprite;

        // create a new instance of our texture so we don't write to the original directly and overwrite it
        tex2D = new Texture2D((int)imgSprite.rect.width, (int)imgSprite.rect.height);
        var pixels = imgSprite.texture.GetPixels((int)imgSprite.textureRect.x,
                                                 (int)imgSprite.textureRect.y,
                                                 (int)imgSprite.textureRect.width,
                                                 (int)imgSprite.textureRect.height);
        tex2D.SetPixels(pixels);
        tex2D.Apply();

        // assign this new texture to our image by creating a new sprite
        img.sprite = Sprite.Create(tex2D, img.sprite.rect, img.sprite.pivot);
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        Draw(eventData);
    }

    public void OnDrag(PointerEventData eventData)
    {
        Draw(eventData);
    }

    private void Draw(in PointerEventData eventData)
    {
        Vector2 localCursor;

        // convert the click position to a local position on our rect
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(img.rectTransform, eventData.position, eventData.pressEventCamera, out localCursor))
            return;

        // convert this position to pixel coordinates on our texture
        int px = Mathf.Clamp((int)((localCursor.x - img.rectTransform.rect.x) * tex2D.width / img.rectTransform.rect.width), 0, tex2D.width);
        int py = Mathf.Clamp((int)((localCursor.y - img.rectTransform.rect.y) * tex2D.height / img.rectTransform.rect.height), 0, tex2D.height);

        // confirm we are in the bounds of our texture
        if (px >= tex2D.width || py >= tex2D.height)
            return;

        // debugging - you can remove this
        // print(px + ", " + py);

        // if our brush size is greater than 1, then we need to grab neighbors
        if (BrushSize > 1)
        {
            // bottom-left aligned, so find the new bottom left coordinate then use that as our starting point
            px = Mathf.Clamp(px - (BrushSize / 2), 0, tex2D.width);
            py = Mathf.Clamp(py - (BrushSize / 2), 0, tex2D.height);

            // add 1 to our brush size so the pixels found are a neighbour search outward from our center point
            int maxWidth = Mathf.Clamp(BrushSize + 1, 0, tex2D.width - px);
            int maxHeight = Mathf.Clamp(BrushSize + 1, 0, tex2D.height - py);

            // cache our maximum dimension size
            int blockDimension = maxWidth * maxHeight;

            // create an array for our colors
            Color[] colorArray = new Color[blockDimension];

            // fill this with our color
            for (int x = 0; x < blockDimension; ++x)
                colorArray[x] = clr;

            // set our pixel colors
            tex2D.SetPixels(px, py, maxWidth, maxHeight, colorArray);
        }
        else
        {
            // set our color at our position - note this will almost never be seen as most textures are rather large,
            // so a single pixel is not going to appear most of the time
            tex2D.SetPixel(px, py, clr);
        }

        // apply the changes - this is what you were missing
        tex2D.Apply();

        // set our sprite to the new texture data
        img.sprite = Sprite.Create(tex2D, img.sprite.rect, img.sprite.pivot);
    }
}
Here is a gif of the snippet in action. Quite fun to play around with. And remember, whatever texture you use for this must have Read/Write Enabled in its import settings. Without this setting, the data is not mutable and you cannot access the texture data at runtime.
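If you want to fail loudly instead of silently reading stale data, you can also check this at runtime, e.g. at the top of Awake in the snippet above (isReadable is a standard Texture property; the message is just an example):

// guard clause: bail out early if the source texture was imported without Read/Write Enabled
if (!img.sprite.texture.isReadable)
{
    Debug.LogError("Source texture must have Read/Write Enabled in its import settings.");
    return;
}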
Edit: I skimmed your question a bit too quickly and realize now that you are using a 2D sprite and not a UI Image or RawImage. You can still draw to a Sprite, but as it is not a UI object, it does not have a RectTransform. However, in your second snippet you reference a RectTransform. Can you explain your setup a bit more? The answer I provided should be enough to point you in the right direction either way.

Cut faraway objects based on depth map

I would like to do a GrabCut that uses a depth map to cut away far objects, for use in a mixed reality application. So I would like to show just the foreground of what I see, with the background being a virtual reality scene.
The problem right now: I tried to adapt some code, and what I get is the foreground cut out, but in black (the mask, actually).
I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    int width = mask.rows();
    int height = mask.cols();
    byte[] buffer = new byte[width * height];
    mask.get(0, 0, buffer);

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int value = buffer[y * width + x];

            if (value == Imgproc.GC_BGD)
            {
                buffer[y * width + x] = 0; // for sure background
            }
            else if (value == Imgproc.GC_PR_BGD)
            {
                buffer[y * width + x] = 85; // probably background
            }
            else if (value == Imgproc.GC_PR_FGD)
            {
                buffer[y * width + x] = (byte)170; // probably foreground
            }
            else
            {
                buffer[y * width + x] = (byte)255; // for sure foreground
            }
        }
    }

    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));
depth.copyTo(maskFar);
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);
Mat bgModel = new Mat();
Mat fgModel = new Mat();
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);
convertToGrayScaleValues(maskFar); // back to grayscale values
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);
Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, graph cut on the depth image might not be the correct method to solve your issue.
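If all you actually need is to drop everything beyond a certain distance, a plain threshold mask applied with copyTo (essentially the first half of your trial, without the grabCut pass) may already be enough. Here is a rough sketch reusing the Mats from your code; the threshold value and direction depend on how the ZED encodes depth:

// Sketch only: build a "near" mask straight from the depth map and use it to copy
// the close pixels; far pixels stay empty. No grabCut involved.
Mat nearMask = new Mat();
depth.copyTo(nearMask);
Core.normalize(nearMask, nearMask, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(nearMask, nearMask, Imgproc.COLOR_BGR2GRAY);
// keep pixels on one side of the cut-off; swap THRESH_BINARY for THRESH_BINARY_INV
// if near objects are dark rather than bright in the depth map
Imgproc.threshold(nearMask, nearMask, 180, 255, Imgproc.THRESH_BINARY);

Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0, 0));
image.copyTo(foreground, nearMask);
Utils.fastMatToTexture2D(foreground, texture);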
If you insist that the processing should be done on the depth image (to find everything that is not on the table and filter out the table part), you may first apply a disparity-based approach to find the objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the locally connected components that are grouped together. You may follow this link on how to do a disparity map in OpenCV, which asks about a similar way to find objects that are not on the ground.
If you are OK with RGB-based approaches, then using any deep-learning-based method to recognize the monitor should be the correct approach. It can directly detect the monitor's bounding box; by applying this bounding box to the depth image, you may get what you want. For deep-learning-based approaches there are many available packages, such as the YOLO series; you may find one that is suitable for you. Reference: https://medium.com/@dvshah13/project-image-recognition-1d316d04cb4c

How To Crop Captured Image? --C#

Is it possible to crop the captured image based on the shape that I want? I'm using a raw image + web cam texture to activate the camera and save the image, and I'm using a UI Image overlay as a mask to cover the unwanted parts. I will be attaching the picture to the char model later on. Sorry, I am new to Unity. Grateful for your help!
Below is what I have in my code:
// start cam
void Start () {
    devices = WebCamTexture.devices;
    background = GetComponent<RawImage> ();
    devCam = new WebCamTexture ();
    background.texture = devCam;
    devCam.deviceName = devices [0].name;
    devCam.Play ();
}

void OnGUI()
{
    GUI.skin = skin;

    // swap front and back camera
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 650, 250, 250), "", GUI.skin.GetStyle ("btn1"))) {
        devCam.Stop();
        devCam.deviceName = (devCam.deviceName == devices[0].name) ? devices[1].name : devices[0].name;
        devCam.Play();
    }

    // snap picture
    if (GUI.Button (new Rect ((Screen.width / 2) - 1200, Screen.height - 350, 250, 250), "", GUI.skin.GetStyle ("btn2"))) {
        OnSelectCapture ();
        // freeze cam here?
    }
}

public void OnSelectCapture()
{
    imgID++;
    string fileName = imgID.ToString () + ".png";

    Texture2D snap = new Texture2D (devCam.width, devCam.height);
    Color[] c;
    c = devCam.GetPixels ();
    snap.SetPixels (c);
    snap.Apply ();

    // Save created Texture2D (snap) into disk as .png
    System.IO.File.WriteAllBytes (Application.persistentDataPath + "/" + fileName, snap.EncodeToPNG ());
}
}
Unless I am not understanding your question correctly, you can just call devCam.Pause()!
Update
What you're looking for is basically to copy the pixels from the screen onto a separate image under some condition. So you could use something like this: https://docs.unity3d.com/ScriptReference/Texture2D.EncodeToPNG.html
I'm not 100% sure what you want to do with it exactly but if you want to have an image that you can use as a sprite, for instance, you can scan each pixel and if the pixel colour value is the same as the blue background, swap it for a 100% transparent pixel (0 in the alpha channel). That will give you just the face with the black hair and the ears.
Update 2
The link that I referred you to copies all pixels from the camera view, so you don't have to worry about your source image. Here is the untested method; it should work plug and play so long as there is only one background colour, else you will need to modify it slightly to test for different colours.
IEnumerator GetPNG()
{
    // Create a texture the size of the screen, RGB24 format
    yield return new WaitForEndOfFrame();

    int width = Screen.width;
    int height = Screen.height;
    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);

    // Read screen contents into the texture
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();

    // Create a second texture to copy the first texture into, minus the background colour. RGBA32 needed for the alpha channel
    Texture2D CroppedTexture = new Texture2D(tex.width, tex.height, TextureFormat.RGBA32, false);
    Color BackGroundCol = Color.white; // This is your background colour/s

    // Height of image in pixels
    for (int y = 0; y < tex.height; y++) {
        // Width of image in pixels
        for (int x = 0; x < tex.width; x++) {
            Color cPixelColour = tex.GetPixel(x, y);
            if (cPixelColour != BackGroundCol) {
                CroppedTexture.SetPixel(x, y, cPixelColour);
            } else {
                CroppedTexture.SetPixel(x, y, Color.clear);
            }
        }
    }

    // Encode your cropped texture into PNG
    byte[] bytes = CroppedTexture.EncodeToPNG();

    Object.Destroy(CroppedTexture);
    Object.Destroy(tex);

    // For testing purposes, also write to a file in the project folder
    File.WriteAllBytes(Application.dataPath + "/../CroppedImage.png", bytes);
}

Issues with Rendering a Bitmap

I am currently working on a histogram renderer that renders bitmaps onto the Grasshopper canvas. There are a total of two bitmaps, both of them explained below
private readonly Bitmap _image;
and:
private readonly Bitmap _overlayedImage;
The Bitmap instance with the name _image looks like this:
(image: http://puu.sh/6mUk4/20b879710a.png)
While the Bitmap instance with the name _overlayedImage looks like this:
Basically, _overlayedImage is a bitmap that is created using the _image bitmap, and as the name suggests, overlays the text (that you can see in the image I posted) and adds a black background to it. This is how it is assigned
_overlayedImage = overlayBitmap(_image, width * 3, height * 3, times, dates, colors);
(The * 3 is used to resize the image).
An issue I currently have is multi-fold.
Using this method, I am able to render _image onto the canvas.
The code is like this:
protected override void Render(Grasshopper.GUI.Canvas.GH_Canvas canvas, Graphics graphics, Grasshopper.GUI.Canvas.GH_CanvasChannel channel) {
    // Render the default component.
    base.Render(canvas, graphics, channel);

    // Now render our bitmap if it exists.
    if (channel == Grasshopper.GUI.Canvas.GH_CanvasChannel.Wires) {
        var comp = Owner as KT_HeatmapComponent;
        if (comp == null)
            return;

        List<HeatMap> maps = comp.CachedHeatmaps;
        if (maps == null)
            return;

        if (maps.Count == 0)
            return;

        int x = Convert.ToInt32(Bounds.X + Bounds.Width / 2);
        int y = Convert.ToInt32(Bounds.Bottom + 10);

        for (int i = 0; i < maps.Count; i++) {
            Bitmap image = maps[i].overlayedImage;
            if (image == null)
                continue;

            Rectangle mapBounds = new Rectangle(x, y, maps[i].Width, maps[i].Height);
            mapBounds.X -= mapBounds.Width / 2;

            Rectangle edgeBounds = mapBounds;

            GH_Capsule capsule = GH_Capsule.CreateCapsule(edgeBounds, GH_Palette.Normal);
            capsule.Render(graphics, Selected, false, false);
            capsule.Dispose();

            graphics.DrawImage(image, mapBounds);
            graphics.DrawRectangle(Pens.Black, mapBounds);
            // some graphics interpolation and bicubic methods

            y = edgeBounds.Bottom - (mapBounds.Height) - 4;
        }
    }
}
As for what comp.CachedHeatmaps is:
private readonly List<HeatMap> _maps = new List<HeatMap>();

internal List<HeatMap> CachedHeatmaps {
    get { return _maps; }
}
However, whenever I try to use Render() on the _overlayedImage, I am unable to do so.
I have isolated the issue to the Render() method, and it seems this line
Rectangle mapBounds = new Rectangle(x, y, maps[i].Width, maps[i].Height); is the main issue, as maps[i].Width and maps[i].Height return 1 and 100 respectively, which are coincidentally the dimensions of the legend (100 pixels vertically and 1 pixel horizontally).
I apologize for the decently long question, but I don't think I could have explained it any other way.
It turns out there are two issues:
In my main method I used _overlayedImage.Dispose(), which effectively destroyed the image before it was even displayed onto the canvas.
My issue isolation was also correct. Changing that line to the following resulted in it rendering correctly:
Rectangle mapBounds = new Rectangle(x, y, maps[i].overlayedImage.Width, maps[i].overlayedImage.Height);
Resulting component:

Direct3D uploading video textures

I am trying to play video on a Direct3D 9 device, using:
nVLC - for fetching the RGB32 frames from the file
SlimDX - for actually displaying the frames on the video device using textures
Here is my code to receive RGB32 frames:
_videoWrapper.SetCallback(delegate(Bitmap frame)
{
    if (_mainContentSurface == null || _dead)
        return;

    var bmpData = frame.LockBits(new Rectangle(0, 0, frame.Width, frame.Height), ImageLockMode.ReadOnly, frame.PixelFormat);
    var ptr = bmpData.Scan0;
    var size = bmpData.Stride * frame.Height;

    _mainContentSurface.Buffer = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(ptr, _mainContentSurface.Buffer, 0, size);

    _mainContentSurface.SetTexture(_mainContentSurface.Buffer, frame.Width, frame.Height);
    _secondaryContentSurface.SetTexture(_mainContentSurface.Buffer, frame.Width, frame.Height); // same buffer to second WINDOW

    _mainContentSurface.VideoFrameRate.Value = _videoWrapper.ActualFrameRate;

    frame.UnlockBits(bmpData);
});
And here is my actual usage of SetTexture and mapping texture to square:
public void SetTexture(byte[] image, int width, int height)
{
    if (Context9 != null && Context9.Device != null)
    {
        if (IsFormClosed)
            return;

        // rendering is separate from the "FRAME FETCH" thread, if it makes sense.
        // also note that we recreate the video texture if needed.
        _renderWindow.BeginInvoke(new Action(() =>
        {
            if (_image == null || _currentVideoTextureWidth != width || _currentVideoTextureHeight != height)
            {
                if (_image != null)
                    _image.Dispose();

                _image = new Texture(Context9.Device, width, height, 0, Usage.Dynamic, Format.A8R8G8B8, Pool.Default);

                _currentVideoTextureWidth = width;
                _currentVideoTextureHeight = height;

                if (_image == null)
                    throw new Exception("Video card does not support textures power of TWO or dynamic textures. Get a video card");
            }

            // upload data into texture.
            var data = _image.LockRectangle(0, LockFlags.None);
            data.Data.Write(image, 0, image.Length);
            _image.UnlockRectangle(0);
        }));
    }
}
and finally the actual rendering:
Context9.Device.SetStreamSource(0, _videoVertices, 0, Vertex.SizeBytes);
Context9.Device.VertexFormat = Vertex.Format;
// Setup our texture. Using Textures introduces the texture stage states,
// which govern how Textures get blended together (in the case of multiple
// Textures) and lighting information.
Context9.Device.SetTexture(0, _image);
// The sampler states govern how smooth the texture is displayed.
Context9.Device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Linear);
Context9.Device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Linear);
Context9.Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.Linear);
// Now drawing 2 triangles, for a quad.
Context9.Device.DrawPrimitives(PrimitiveType.TriangleList, 0, 2);
Now, it works on my machine without problems, with every video file and in every position. But when I checked on WinXP, the picture was completely broken. Here are screencaps of both the non-working and working cases:
http://www.upload.ee/image/2941734/untitled.PNG
http://www.upload.ee/image/2941762/Untitled2.png
Note that in the first picture, those are _mainContentSurface and _secondaryContentSurface. Does anyone have an idea what the problem could be?
You shouldn't need to recreate your texture every time, just create it as dynamic:
this.Texture = new Texture(device, w, h, 1, Usage.Dynamic, Format.X8R8G8B8, Pool.Default);
The copy issue could come from the stride (the row length might be different since it is padded).
To get the row pitch of the texture:
public int GetRowPitch()
{
    if (rowpitch == -1)
    {
        DataRectangle dr = this.Texture.LockRectangle(0, LockFlags.Discard);
        this.rowpitch = dr.Pitch;
        this.Texture.UnlockRectangle(0);
    }

    return rowpitch;
}
If your texture row pitch is equal to your frame pitch, you can copy the way you do, otherwise you can do it this way:
public unsafe void WriteDataPitch(IntPtr ptr, int len)
{
    DataRectangle dr = this.Texture.LockRectangle(0, LockFlags.Discard);

    int pos = 0;
    int stride = this.Width * 4;
    byte* data = (byte*)ptr.ToPointer();

    for (int i = 0; i < this.Height; i++)
    {
        dr.Data.WriteRange((IntPtr)data, this.Width * 4);
        pos += dr.Pitch;
        dr.Data.Position = pos;
        data += stride;
    }

    this.Texture.UnlockRectangle(0);
}
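With that in place, the frame callback above would hand the locked bitmap's Scan0 pointer straight to this method (roughly WriteDataPitch(bmpData.Scan0, size) on the surface object, before UnlockBits) instead of copying into a managed byte[] first; the exact call site is an assumption based on the snippets above.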
If you want an example of a fully working VLC player with SlimDX, let me know, I've got that around (need to wrap it up nicely).
