I am generating dynamic textures in MonoGame for simple shapes. Yes, I know the disadvantages of this approach, but I am just experimenting with building my own physics engine. I am trying to generate the texture for an ellipse as described here.
I have a function PaintDescriptor that takes an x and y pixel coordinate and returns the color that pixel should be. Red is just for debugging; normally it would be Color.Transparent.
public override Color PaintDescriptor(int x, int y)
{
float c = (float)Width / 2;
float d = (float)Height / 2;
return pow((x - c) / c, 2) + pow((y - d) / d, 2) <= 1 ? BackgroundColor : Color.Red;
}
Now this works if Width == Height, so, a circle. However, if they are not equal, it generates a texture with some ellipse-like shapes, but also with banding/striping.
I have tried checking whether my width and height were switched, and I've tried several other things. One thing to note: in the normal coordinate system on Desmos I have (y + d) / d, but since the screen's y axis is flipped, I have to flip the y offset in the code: (y - d) / d. The rest of the related texture generation and drawing code is here:
public Texture2D GenerateTexture(GraphicsDevice device, Func<int, int, Color> paint)
{
Texture2D texture = new Texture2D(device, Width, Height);
Color[] data = new Color[Width * Height];
for (int pixel = 0; pixel < data.Length; pixel++)
data[pixel] = paint(pixel / Width, pixel % Height);
texture.SetData(data);
return texture;
}
public void Draw(float scale = 1, float layerdepth = 0, SpriteEffects se = SpriteEffects.None)
{
if (SBRef == null)
throw new Exception("No reference to spritebatch object");
SBRef.Draw(Texture, new Vector2(X, Y), null, null, null, 0, new Vector2(scale, scale), Color.White, se, layerdepth);
}
public float pow(float num, float power) // a redirect of Math.Pow to make code shorter and more readable
{
return (float)Math.Pow(num, power);
}
Why doesn't this match Desmos? Why does it not produce an ellipse?
EDIT: I forgot to mention that one possible solution I have come across is to always draw a circle and then scale it to the desired width and height. This is not acceptable to me, partly because of possible blurriness or other drawing artifacts, but mainly because I want to understand whatever I'm not currently getting with this approach.
After sleeping and coming back with a fresh mindset for about the 10th time, I found the answer. In the function GenerateTexture:
data[pixel] = paint(pixel / Width, pixel % Height);
decomposes the linear index the wrong way around. The data array is row-major (index = y * Width + x), so x is the remainder and y is the quotient, both with respect to Width:
data[pixel] = paint(pixel % Width, pixel / Width);
(Dividing by Height instead of Width only happens to work when the texture is square, which is exactly why the circle case worked.)
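Written as nested loops over the same data array, the corrected mapping is perhaps easier to see (a small equivalent sketch using the class's Width and Height):
for (int y = 0; y < Height; y++)
    for (int x = 0; x < Width; x++)
        data[y * Width + x] = paint(x, y);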
I would like to do GrabCut using a depth map that cuts away far objects, for use in a mixed reality application: show only what is in front of me and render the background as a virtual reality scene.
The problem is that I tried to adapt some code, and what I get is the cut-out front, but in black; it is actually the mask.
I don't know where the problem lies.
The input is a depth map from a ZED camera.
Here is a picture of the behaviour:
My trial:
private void convertToGrayScaleValues(Mat mask)
{
    // Note: mask.rows() is the height and mask.cols() the width, but since
    // this is a pointwise remap of label values, only the total count matters.
    int rows = mask.rows();
    int cols = mask.cols();
    byte[] buffer = new byte[rows * cols];
    mask.get(0, 0, buffer);
    for (int i = 0; i < buffer.Length; i++)
    {
        int value = buffer[i];
        if (value == Imgproc.GC_BGD)
        {
            buffer[i] = 0; // for sure background
        }
        else if (value == Imgproc.GC_PR_BGD)
        {
            buffer[i] = 85; // probably background
        }
        else if (value == Imgproc.GC_PR_FGD)
        {
            buffer[i] = (byte)170; // probably foreground
        }
        else
        {
            buffer[i] = (byte)255; // for sure foreground
        }
    }
    mask.put(0, 0, buffer);
}
For each depth frame from the camera:
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4));
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(7, 7));
depth.copyTo(maskFar);
// Normalize the depth to 0..255 and convert it to a single-channel 8-bit image.
Core.normalize(maskFar, maskFar, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
Imgproc.cvtColor(maskFar, maskFar, Imgproc.COLOR_BGR2GRAY);
// Keep only far pixels, then close small holes (note that the 4x4 element is
// passed to dilate and the 7x7 element to erode, despite their names).
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_BINARY);
Imgproc.dilate(maskFar, maskFar, erodeElement);
Imgproc.erode(maskFar, maskFar, dilateElement);
Mat bgModel = new Mat();
Mat fgModel = new Mat();
// Run GrabCut with the thresholded depth as the initial mask.
Imgproc.grabCut(image, maskFar, new OpenCVForUnity.CoreModule.Rect(), bgModel, fgModel, 1, Imgproc.GC_INIT_WITH_MASK);
convertToGrayScaleValues(maskFar); // back to grayscale values
// Zero out everything below the "probably foreground" level.
Imgproc.threshold(maskFar, maskFar, 180, 255, Imgproc.THRESH_TOZERO);
Mat foreground = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0));
image.copyTo(foreground, maskFar);
Utils.fastMatToTexture2D(foreground, texture);
In this case, GrabCut on the depth image might not be the right method for your whole problem.
If you insist that the processing be done on the depth image, i.e. finding everything that is not on the table and filtering the table part out, you could first apply a disparity-based approach to find the objects that are not on the ground. Reference: https://github.com/windowsub0406/StereoVision
Then, based on the V-disparity output image, find the connected components that are grouped together locally. You may follow this link, how to do this disparity map in OpenCV, which asks a similar question about finding objects that are not on the ground. A rough sketch of the grouping step follows.
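As a sketch of that grouping (same OpenCV for Unity bindings as the question; mask is assumed to be a CV_8U binary image where candidate pixels are 255):
Mat labels = new Mat();
Mat stats = new Mat();
Mat centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(mask, labels, stats, centroids);
for (int i = 1; i < n; i++) // label 0 is the background
{
    int area = (int)stats.get(i, Imgproc.CC_STAT_AREA)[0];
    // Keep only reasonably large blobs; the 500-pixel threshold is arbitrary.
    if (area > 500)
    {
        // stats also holds CC_STAT_LEFT/TOP/WIDTH/HEIGHT for each blob's box.
    }
}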
If you are OK with RGB-based approaches, then using a deep-learning-based method to recognize the monitor should be the right approach: it can directly detect the monitor's bounding box, and by applying that bounding box to the depth image you get what you want. There are many available packages for deep-learning-based detection, such as the YOLO series; you may find one that suits you. Reference: https://medium.com/@dvshah13/project-image-recognition-1d316d04cb4c
I'm using the following code to transform a small rectangle's coordinates to a larger one, i.e., a rectangle's position on a small image to the same position on a larger resolution of the same image:
Rectangle ConvertToLargeRect(Rectangle smallRect, Size largeImageSize, Size smallImageSize)
{
    double xScale = (double)largeImageSize.Width / smallImageSize.Width;
    double yScale = (double)largeImageSize.Height / smallImageSize.Height;
    // Adding 0.5 before the cast rounds to the nearest integer.
    int x = (int)(smallRect.X * xScale + 0.5);
    int y = (int)(smallRect.Y * yScale + 0.5);
    int right = (int)(smallRect.Right * xScale + 0.5);
    int bottom = (int)(smallRect.Bottom * yScale + 0.5);
    return new Rectangle(x, y, right - x, bottom - y);
}
But there seems to be a problem with some images: the transformed rectangle's coordinates end up off the image.
UPDATE:
img.Draw(rect, new Bgr(232, 3, 3), 2);
Rectangle transret= ConvertToLargeRect(rect, orgbitmap.Size, bit.Size);
target = new Bitmap(transret.Width, transret.Height);
using (Graphics g = Graphics.FromImage(target))
{
g.SmoothingMode = SmoothingMode.HighQuality;
g.DrawImage(orgbitmap, new Rectangle(0, 0, target.Width, target.Height),
transret, GraphicsUnit.Pixel);
}
Rectangle drawn on the small-resolution image: {X=190, Y=2, Width=226, Height=286}
Rectangle transformed into the original large-resolution image: {X=698, Y=7, Width=830, Height=931}
Original Image
First of all, if you resize the shape it shouldn't move position; that's not what one would expect from enlarging a shape. This means the X,Y point of the top-left corner shouldn't be transformed.
Second, you shouldn't be adding 0.5 manually in those operations; that's not a clean way to proceed. Use the ceiling function, as suggested by @RezaAghaei.
Third, you should not subtract X/Y from the width/height; your calculations should be done as width * scale.
Please correct those mistakes, and if it doesn't work I'll update the answer with extra steps. A sketch of the corrected transform is below.
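A minimal sketch of what those three points amount to, assuming the same System.Drawing types as the question (this follows the answer's suggestions as stated, not necessarily the only valid mapping):
Rectangle ConvertToLargeRect(Rectangle smallRect, Size largeImageSize, Size smallImageSize)
{
    double xScale = (double)largeImageSize.Width / smallImageSize.Width;
    double yScale = (double)largeImageSize.Height / smallImageSize.Height;
    // Point 1: leave the top-left corner untouched.
    // Points 2 and 3: scale the width/height directly and round up with Ceiling.
    return new Rectangle(
        smallRect.X,
        smallRect.Y,
        (int)Math.Ceiling(smallRect.Width * xScale),
        (int)Math.Ceiling(smallRect.Height * yScale));
}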
I am working on a program that takes a Bitmap and converts it into circular form. The code is as follows:
public static Image CropToCircle(Image srcImage, Color backGround)
{
    Image dstImage = new Bitmap(srcImage.Width, srcImage.Height, srcImage.PixelFormat);
    using (Graphics g = Graphics.FromImage(dstImage))
    using (GraphicsPath path = new GraphicsPath())
    {
        // Fill with the background color, then clip all further drawing to an ellipse.
        using (Brush br = new SolidBrush(backGround))
        {
            g.FillRectangle(br, 0, 0, dstImage.Width, dstImage.Height);
        }
        path.AddEllipse(0, 0, dstImage.Width, dstImage.Height);
        g.SetClip(path);
        g.DrawImage(srcImage, 0, 0);
    }
    return dstImage;
}
It returns the image in circular shape; however, I need to read an image wedge in degrees. That is, the circle has 360 degrees, and I am trying to write a function that accepts a degree (e.g. 10) and returns the pixels of the image that fall in the 10th degree, such that the entire image is readable across degrees 1 to 360.
Since my hint was actually rather misleading, let me make up for it by giving you working code:
// Collect the colors along one radius of a circle with center c and radius r,
// at the given angle (in degrees).
List<Color> getColorsByAngle(Bitmap bmp, Point c, int r, float angle)
{
    List<Color> colors = new List<Color>();
    for (int i = 0; i < r; i++)
    {
        // Convert degrees to radians and step outward along the ray.
        int x = (int)(Math.Sin(angle / 180f * Math.PI) * i);
        int y = (int)(Math.Cos(angle / 180f * Math.PI) * i);
        colors.Add(bmp.GetPixel(c.X + x, c.Y + y));
    }
    return colors;
}
Here it is at work:
(The GIF is rather quantized to keep its size down.)
Note that:
Pixels close to the center will be read multiple times, and the center itself every time.
To collect all outer pixels you need to read as many angles as the circumference of the circle has pixels, i.e. 2 * PI * radius. So for a circle with a radius of 300 pixels you need to step the angle in 360° / (600 * 3.14), or about 0.2°.
Also note that the coordinate systems in GDI and in geometry are not the same, neither in the direction of the axes nor in the angles. Adapting this is left to you.
The original version of the question didn't mention a 'wedge area'. To read an area, or the whole image, simply loop over an angle range in suitable steps, as sketched below.
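For example, a 10°-wide wedge starting at 10° could be collected like this (a sketch; bmp, c and r are assumed to be the bitmap, center and radius from above):
List<Color> wedge = new List<Color>();
// Step finely enough that the outermost ring is fully covered: 360° / (2 * PI * r).
float step = 360f / (2f * (float)Math.PI * r);
for (float a = 10f; a < 20f; a += step)
    wedge.AddRange(getColorsByAngle(bmp, c, r, a));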
I am fairly new to XNA. I just created a sprite with a transparent background (magenta). The problem is that my Rectangle covers the whole sprite, not just the visible part. How do I make it cover only the visible sprite?
myrectangle = new Rectangle(0, 0, box.Width, box.Height);
I want to place the visible part, not the transparent part, at that position. Thanks in advance.
To make a color transparent, go to the texture's properties in the content processor, enable Color Key, and set the key color to magenta.
Then, to position the sprite where you want, you need to set the proper origin.
To put the ship's center at the desired position, you need to set the origin as shown:
So when you draw it, you do something similar to this:
var origin = new Vector2(40,40);
spritebatch.Draw(shipTexture, shipPosition, null, Color, origin, ...)
You can change your texture rectangle source too:
var texSource = new Rectangle( 25,25, 30,30);
spritebatch.Draw(shipTexture, shipPosition, texSource, Color)
Although you may need to change the origin if you want to position the ship at its center.
You need to manually measure the offset of the point you need, using a program like Paint, and then pass that offset as the Origin parameter in the Draw method.
A better idea is to measure the size in pixels of your sprite (without the background) and then set it as the sourceRectangle in the Draw method.
spritebatch.Draw(textureToDraw, Position, sourceRectangle, Color.White)
SourceRectangle is nullable and its default value is null, in which case XNA draws the whole texture, which you don't want here.
Using transparent color keying like magenta is very old-fashioned; nowadays we use the alpha channel in images to achieve this.
I guess the only real way to do what you want is to search through the color data for the smallest and largest x and y coordinates that have alpha > 0, or != Color.Magenta in your case:
Texture2D sprite = Content.Load<Texture2D>(.....);
int width = sprite.Width;
int height = sprite.Height;
// Start with an impossible minimum so any visible pixel shrinks it.
Rectangle sourceRectangle = new Rectangle(int.MaxValue, int.MaxValue, 0, 0);
Color[] data = new Color[width * height];
sprite.GetData<Color>(data);
int maxX = 0;
int maxY = 0;
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        int index = width * y + x;
        if (data[index] != Color.Magenta)
        {
            // Independent ifs: one pixel can update both a min and a max.
            if (x < sourceRectangle.X)
                sourceRectangle.X = x;
            if (x > maxX)
                maxX = x;
            if (y < sourceRectangle.Y)
                sourceRectangle.Y = y;
            if (y > maxY)
                maxY = y;
        }
    }
}
// +1 because both bounds are inclusive.
sourceRectangle.Width = maxX - sourceRectangle.X + 1;
sourceRectangle.Height = maxY - sourceRectangle.Y + 1;
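Once the visible bounds are known, drawing just that region is a matter of passing the rectangle as the source (a usage sketch; spriteBatch and position are assumed):
spriteBatch.Draw(sprite, position, sourceRectangle, Color.White);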
I use a cheat method in VB.Net, which I assume you could make work in C#:
Private Function MakeTexture(ByVal b As Bitmap) As Texture2D
    Using MemoryStream As New MemoryStream
        b.Save(MemoryStream, System.Drawing.Imaging.ImageFormat.Png)
        ' Rewind so FromStream reads the PNG from the start.
        MemoryStream.Position = 0
        Return Texture2D.FromStream(XNAGraphics.GraphicsDevice, MemoryStream)
    End Using
End Function
As long as your bitmap is loaded with a transparent color, this works slick.
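In C# the same trick looks roughly like this (a sketch; graphicsDevice stands in for whatever GraphicsDevice reference you hold):
private Texture2D MakeTexture(System.Drawing.Bitmap b)
{
    using (var ms = new System.IO.MemoryStream())
    {
        // Round-trip through PNG so the transparency is preserved.
        b.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
        ms.Position = 0;
        return Texture2D.FromStream(graphicsDevice, ms);
    }
}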
I am creating a hidden object game and am trying to mark each object, when found, with an ellipse. I've manually saved the top-left and bottom-right coordinates of each picture, obtained via the GestureListener_Tap event.
The problem is that when I try to draw an ellipse bounded by those coordinates using this code:
WriteableBitmapExtensions.DrawEllipse(writeableBmp, AnsX1, AnsY1, AnsX2, AnsY2, Colors.Red);
the ellipse always ends up off to the top left. Marking the pixel locations with the following code shows that they are indeed located differently than what I would expect from GestureListener_Tap.
writeableBmp.SetPixel(AnsX1, AnsY1, Colors.Red);
writeableBmp.SetPixel(AnsX2, AnsY2, Colors.Red);
My code for marking the location:
private void fadeOutAnimation_Ended(object sender, EventArgs e)
{
WriteableBitmap writeableBmp = new WriteableBitmap(bmpCurrent);
imgCat.Source = writeableBmp;
writeableBmp.GetBitmapContext();
WriteableBitmapExtensions.DrawEllipse(writeableBmp, AnsX1, AnsY1, AnsX2, AnsY2, Colors.Red);
writeableBmp.SetPixel(AnsX1, AnsY1, Colors.Red);
writeableBmp.SetPixel(AnsX2, AnsY2, Colors.Red);
// Present the WriteableBitmap
writeableBmp.Invalidate();
//Just some animation code
RadFadeAnimation fadeInAnimation = new RadFadeAnimation();
fadeInAnimation.StartOpacity = 0.2;
fadeInAnimation.EndOpacity = 1.0;
RadAnimationManager.Play(this.imgCat, fadeInAnimation);
}
What am I missing?
EDIT:
My answer below does not take screen orientation changes into account. See my comment below the answer. How do you map pixel coordinates to image coordinates?
EDIT 2:
Found the correct solution; updated my answer.
From @PaulAnnetts' comment I managed to transform the pixel coordinates. My initial mistake was to assume that the image coordinates are the same as the pixel coordinates! I use the following code to convert:
private int xCoordinateToPixel(int coordinate)
{
    // Scale from the image control's layout size to the bitmap's pixel size.
    double x = writeableBmp.PixelWidth / imgCat.ActualWidth * coordinate;
    return Convert.ToInt32(x);
}
private int yCoordinateToPixel(int coordinate)
{
    double y = writeableBmp.PixelHeight / imgCat.ActualHeight * coordinate;
    return Convert.ToInt32(y);
}
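With these converters the draw call from the question becomes (assuming AnsX1..AnsY2 hold control-space coordinates):
WriteableBitmapExtensions.DrawEllipse(writeableBmp,
    xCoordinateToPixel(AnsX1), yCoordinateToPixel(AnsY1),
    xCoordinateToPixel(AnsX2), yCoordinateToPixel(AnsY2),
    Colors.Red);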
EDIT:
Since PixelHeight and PixelWidth are fixed while ActualHeight and ActualWidth aren't, I should instead convert pixels to coordinates in the GestureListener_Tap event:
if ((X >= xPixelToCoordinate(AnsX1) && Y >= yPixelToCoordinate(AnsY1)) && (X <= xPixelToCoordinate(AnsX2) && Y <= yPixelToCoordinate(AnsY2)))
{...}
And my pixel-to-coordinate converters:
private int xPixelToCoordinate(int xpixel)
{
double x = imgCat.ActualWidth / writeableBmp.PixelWidth * xpixel;
return Convert.ToInt32(x);
}
private int yPixelToCoordinate(int ypixel)
{
double y = imgCat.ActualHeight / writeableBmp.PixelHeight * ypixel;
return Convert.ToInt32(y);
}