Why might Graphics.RotateTransform() not be applied? - C#

I have the following function:
static private Image CropRotate(Image wholeImage, Rectangle cropArea)
{
    Bitmap cropped = new Bitmap(cropArea.Width, cropArea.Height);
    using (Graphics g = Graphics.FromImage(cropped))
    {
        g.DrawImage(wholeImage, new Rectangle(0, 0, cropArea.Width, cropArea.Height), cropArea, GraphicsUnit.Pixel);
        g.RotateTransform(180f);
    }
    return cropped as Image;
}
It's supposed to crop an image, then rotate the resulting sub-image. In actuality though, it only performs the crop.
Why is RotateTransform() not being applied?

Have you tried putting the RotateTransform() before the DrawImage()?
The example on the MSDN page shows the transformation being applied before any drawing is done.

The RotateTransform call alters the current transform matrix, which affects all subsequent operations. It does not retroactively transform anything that has already been drawn. The same is true of every operation that changes the transform matrix (such as ScaleTransform).
Make sure you call these before you perform the operations you want transformed - in this case, before the call to DrawImage.
You can use this to do something like:
Draw (not rotated or scaled)
Rotate (only changes transform matrix)
Scale (only changes transform matrix)
Draw (now rotated and scaled)
ClearTransform (only changes transform matrix)
Draw (not rotated or scaled)
The first and last draw outputs will not be transformed, but the middle one will be affected by both the rotate and the scale (in that order).
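Applied to the original question, a minimal corrected sketch (assuming you want the crop rotated in place, so the rotation is pivoted around the image centre via TranslateTransform) would be:
static private Image CropRotate(Image wholeImage, Rectangle cropArea)
{
    Bitmap cropped = new Bitmap(cropArea.Width, cropArea.Height);
    using (Graphics g = Graphics.FromImage(cropped))
    {
        // Set up the transform BEFORE drawing. Rotating 180 degrees about the
        // origin alone would spin the image out of the visible area, so pivot
        // about the centre instead: translate there, rotate, translate back.
        g.TranslateTransform(cropArea.Width / 2f, cropArea.Height / 2f);
        g.RotateTransform(180f);
        g.TranslateTransform(-cropArea.Width / 2f, -cropArea.Height / 2f);
        g.DrawImage(wholeImage, new Rectangle(0, 0, cropArea.Width, cropArea.Height), cropArea, GraphicsUnit.Pixel);
    }
    return cropped;
}
For a plain 180-degree rotation you could also skip the transform entirely and call cropped.RotateFlip(RotateFlipType.Rotate180FlipNone) after drawing.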

Related

Monogame: Render only inside specified area

This may be a strange question, but I'm trying to find a way to render sprites only inside a specific allowed area rather than the entire buffer/texture.
Basically, I want to draw to the buffer or Texture2D as I normally would, but with the actual drawing happening only inside the specified area, leaving the pixels outside it untouched.
Why this is needed - I'm building my own UI system and I would like to avoid intermediary buffers, as they are quite slow when there are many UI components on screen (each one has to draw to its own buffer to prevent child elements being drawn outside of its parent's bounds).
And just to clarify - this is all for simple 2D rendering, not 3D.
If your UI is actually drawn with SpriteBatch, you can use ScissorRectangle:
GraphicsDevice.RasterizerState.ScissorTestEnable = true;
spriteBatch.GraphicsDevice.ScissorRectangle = ...
In 3D, you can render to a texture and draw just a portion of it, or use a shader: pass the allowed rectangle's dimensions in as a parameter and have the pixel shader output black (or whatever you want to accomplish) for any pixel outside that rectangle.
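A minimal sketch of the scissor approach in MonoGame (assuming a spriteBatch field; note that a RasterizerState cannot be modified once it has been bound to the device, so a fresh one is passed to Begin):
// Created once, e.g. in LoadContent.
RasterizerState scissorState = new RasterizerState { ScissorTestEnable = true };

// At draw time: clip all sprites in this batch to the given rectangle.
spriteBatch.GraphicsDevice.ScissorRectangle = new Rectangle(100, 100, 200, 150);
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, scissorState);
// ... draw UI elements here; pixels outside the rectangle are discarded ...
spriteBatch.End();
The (100, 100, 200, 150) rectangle is just an example; in a UI system you would set it per component to the parent's bounds.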
You can use:
spriteBatch.Draw(yourTexture,
    // where to draw on screen, and at what size
    // for example, new Rectangle(100, 100, 50, 50) - position plus width and height
    destinationRectangle,
    // the area of the original texture you want to draw from
    // for example, new Rectangle(0, 0, 50, 50) - position plus width and height
    sourceRectangle,
    Color.White);
Then it will only draw the area that you chose before. Hope this helps!

Drawing a circular magnifying lens showing scaled underlying content in XNA/Monogame (in 2D)

I have a 2D scene in Monogame with some primitives and sprites (i.e. in PrimitiveBatches and SpriteBatches) and I would like to create a magnifying glass effect with a circular lens showing a zoomed view of the content under it. How do I do that?
Thanks.
I do not use your environment, but I have always done this effect with pixel displacement. If you have pixel access to the rendered scene (ideally while it is still in a back-buffer, so it does not flicker), just move the pixels inside your lens outward from the center. Either use a constant displacement, or better, move pixels more (bigger zoom) in the middle and less near the edges.
Typical implementation looks like this:
copy the lens area to some temp buffer
loop (x,y) through the lens area:
    compute the actual radius r of the processed pixel from the lens center (x0,y0)
    ignore pixels outside the lens area (r > r0)
    compute the actual zoom m of the processed pixel; I like to use cos for this:
        m = 1.0+(1.5*cos(0.5*M_PI*double(r)/double(r0))); // M_PI=3.1415...
    you can play with the 1.0 and 1.5 constants: they determine the minimal (1.0) and maximal (1.0+1.5) zoom. Also, this assumes cos takes its angle in [rad]; if yours needs [deg] instead, change the 0.5*M_PI to 90.0
    copy the pixel from temp to the backbuffer or screen:
        backbuffer(x,y) = temp(x0+(x-x0)/m, y0+(y-y0)/m)
Here is a C++/VCL example:
void TMain::draw()
{
    // clear bmp (if image not covering whole area)
    bmp->Canvas->Brush->Color=clBlack;
    bmp->Canvas->FillRect(TRect(0,0,xs,ys));
    // copy background image
    bmp->Canvas->Draw(0,0,jpg); // DWORD pyx[ys][xs] is bmp direct pixel access, (xs,ys) is bmp size
    // here comes the important stuff:
    int x0=mx,y0=my;            // lens position = mouse
    const int r0=50;            // lens radius
    DWORD tmp[2*r0+3][2*r0+3];  // temp buffer
    double m;
    int r,x,y,xx,yy,xx0,xx1,yy0,yy1;
    // zoom area bounding box, clamped to the bitmap
    xx0=x0-r0; if (xx0<0)   xx0=0;
    xx1=x0+r0; if (xx1>=xs) xx1=xs-1;
    yy0=y0-r0; if (yy0<0)   yy0=0;
    yy1=y0+r0; if (yy1>=ys) yy1=ys-1;
    // copy bmp to tmp
    for (y=yy0;y<=yy1;y++)
     for (x=xx0;x<=xx1;x++)
      tmp[y-yy0][x-xx0]=pyx[y][x];
    // render zoomed area
    for (y=yy0;y<=yy1;y++)
     for (x=xx0;x<=xx1;x++)
        {
        // compute radius from lens center
        xx=x-x0;
        yy=y-y0;
        r=sqrt((xx*xx)+(yy*yy));
        if (r>r0) continue;                         // outside the lens
        if (r==r0) { pyx[y][x]=clWhite; continue; } // white rim at the lens edge
        // compute zoom: 2.5 at center, 1.0 at edges
        m=1.0+(1.5*cos(0.5*M_PI*double(r)/double(r0))); // M_PI=3.1415...
        // compute displaced source position
        xx=double(double(xx)/m)+x0;
        yy=double(double(yy)/m)+y0;
        // copy
        if ((xx>=xx0)&&(yy>=yy0)&&(xx<=xx1)&&(yy<=yy1))
            pyx[y][x]=tmp[yy-yy0][xx-xx0];
        }
    // just refresh screen with backbuffer
    Canvas->Draw(0,0,bmp);
}
And here is an animated GIF preview (quality and fps are lowered by GIF encoding):
If you need help understanding the gfx access in my code, see:
gfx rendering in C++
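Translated to MonoGame terms, the same displacement loop can run on the CPU over the pixels of a rendered scene. A minimal sketch, assuming the scene was rendered into a RenderTarget2D named sceneTarget and the result is uploaded into a separate, same-sized Texture2D named lensTexture (the names and the CPU-side approach are illustrative, not from the original answer):
// Hypothetical helper: returns a lensed copy of the scene's pixels.
static Color[] ApplyLens(Color[] src, int width, int height, int x0, int y0, int r0)
{
    Color[] dst = (Color[])src.Clone();
    for (int y = Math.Max(0, y0 - r0); y <= Math.Min(height - 1, y0 + r0); y++)
        for (int x = Math.Max(0, x0 - r0); x <= Math.Min(width - 1, x0 + r0); x++)
        {
            int dx = x - x0, dy = y - y0;
            double r = Math.Sqrt(dx * dx + dy * dy);
            if (r > r0) continue;                                    // outside the lens
            double m = 1.0 + 1.5 * Math.Cos(0.5 * Math.PI * r / r0); // zoom: 2.5 at center, 1.0 at edge
            int sx = x0 + (int)(dx / m), sy = y0 + (int)(dy / m);    // displaced source pixel
            if (sx >= 0 && sy >= 0 && sx < width && sy < height)
                dst[y * width + x] = src[sy * width + sx];
        }
    return dst;
}

// Usage: pull the pixels out, displace them, push them into a texture and draw that.
Color[] pixels = new Color[sceneTarget.Width * sceneTarget.Height];
sceneTarget.GetData(pixels);
Color[] lensed = ApplyLens(pixels, sceneTarget.Width, sceneTarget.Height, mouseX, mouseY, 50);
lensTexture.SetData(lensed);
For real-time use, a pixel shader doing the same displacement per fragment would be much faster; the CPU version is just the easiest to follow.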

Actual coordinate after ScaleTransform()

I am drawing a rectangle in a WinForms application in C# and I want to get the actual coordinates of the rectangle after applying ScaleTransform() method.
Graphics g = e.Graphics;
g.ScaleTransform(2.0F, 2.0F, System.Drawing.Drawing2D.MatrixOrder.Append);
g.DrawRectangle(pen, 20, 40, 100, 100);
Once you have set a ScaleTransform in your Graphics object (or any transform for that matter), you can use it to transform the points of your rectangle (or any other points).
For example:
// your existing code
Graphics g = e.Graphics;
g.ScaleTransform(2.0F, 2.0F, System.Drawing.Drawing2D.MatrixOrder.Append);

// say we have some rectangle...
Rectangle rcRect = new Rectangle(20, 40, 100, 100);

// make an array of points
Point[] pPoints =
{
    new Point(rcRect.Left, rcRect.Top),     // top left
    new Point(rcRect.Right, rcRect.Top),    // top right
    new Point(rcRect.Left, rcRect.Bottom),  // bottom left
    new Point(rcRect.Right, rcRect.Bottom), // bottom right
};

// get a copy of the transformation matrix
using (Matrix mat = g.Transform)
{
    // use it to transform the points
    mat.TransformPoints(pPoints);
}
Note the using syntax above - this is because, as MSDN says:
Because the matrix returned by the Transform property is a copy of the geometric transform, you should dispose of the matrix when you no longer need it.
As a slightly less wordy alternative, you can do the same thing with the TransformPoints method of the Graphics class - construct your array of points as above, then just do this:
g.TransformPoints(CoordinateSpace.Page, CoordinateSpace.World, pPoints);
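With the 2x scale above, the rectangle's top-left point (20, 40) should come back as (40, 80), and its bottom-right point (120, 140) as (240, 280).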
MSDN describes the relevant coordinate spaces used in the above function:
GDI+ uses three coordinate spaces: world, page, and device. World
coordinates are the coordinates used to model a particular graphic
world and are the coordinates you pass to methods in the .NET
Framework. Page coordinates refer to the coordinate system used by a
drawing surface, such as a form or a control. Device coordinates are
the coordinates used by the physical device being drawn on, such as a
screen or a printer. The Transform property represents the world
transformation, which maps world coordinates to page coordinates.

Create a Region or GraphicsPath from the non-transparent areas of a bitmap

I want to hit-test a drawn bitmap to see if a given Point is visible in the non-transparent pixels of the image.
For example, to do this test for the whole bitmap rectangle, you would do something like this:
Bitmap bitmap = new Bitmap("filename.jpg");
GraphicsPath path = new GraphicsPath();
Rectangle bitmapRect = new Rectangle(x, y, bitmap.Width, bitmap.Height);
path.AddRectangle(bitmapRect);
if (path.IsVisible(mouseLocation))
    OnBitmapClicked();
However, if I have a bitmap of a non-rectangular item and I want to be able to check if they are clicking on the non-transparent area, is there any supported way in the .NET framework to do this?
The only way I could think to do this is to lock the bitmap bytes into an array, and iterate through it, adding each x,y coordinate that is non-transparent to an array of Point structures. Then use those point structures to assemble a GraphicsPath.
Since these points would be zero-based I would need to offset my mouse location with the distance between the x,y coordinate that the image is being drawn at and 0,0. But this way I could essentially use the same GraphicsPath for each image if I draw it multiple times, as long as the image is not skewed or scaled differently.
If this is the only good route, how would I add the points to the GraphicsPath? Draw lines from point to point? Draw a closed curve?
IMHO a simpler technique would be to look at the alpha component of the hit pixel:
Color pixel = bitmap.GetPixel(mouseLocation.X, mouseLocation.Y);
bool hit = pixel.A > 0;
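If the bitmap is drawn at an offset, translate the mouse location into bitmap coordinates first and guard against out-of-range points. A minimal sketch, assuming the image is drawn unscaled at some imageLocation (a hypothetical Point):
bool HitTest(Bitmap bitmap, Point mouseLocation, Point imageLocation)
{
    // Translate from control coordinates into bitmap coordinates.
    int x = mouseLocation.X - imageLocation.X;
    int y = mouseLocation.Y - imageLocation.Y;
    // Points outside the bitmap can never hit a visible pixel.
    if (x < 0 || y < 0 || x >= bitmap.Width || y >= bitmap.Height)
        return false;
    // Hit only if the pixel has any opacity.
    return bitmap.GetPixel(x, y).A > 0;
}
GetPixel is slow if you test many points per frame; for heavy use, read the alpha channel once via LockBits into a byte array and index into that instead.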

"Wrap-around" effect with a Direct3D.Texture

Given a destination rectangle and an x/y offset value, I need an image to be drawn within the confines of that destination rectangle. If the offset would push the image off the edge of the rectangle, then the part that "pushes out" should appear on the opposite side of the destination rectangle. In simplest terms, I need a scrolling background.
In GDI+, I can accomplish this with an "ImageAttributes" object that uses a tile wrap mode:
ImageAttributes attributes = new ImageAttributes();
attributes.SetWrapMode(System.Drawing.Drawing2D.WrapMode.Tile);
Rectangle rectangle = new Rectangle(0, 0, (int)width, (int)height);
g.DrawImage(bmp, rectangle, -x, -y, width, height, GraphicsUnit.Pixel, attributes);
Now, I need a way to do this in DirectX. Assume that this is the method I have right now:
public void RenderTexture(PrismDXObject obj, D3D.Texture texture, int xOffset, int yOffset)
{
    if (obj != null && texture != null)
    {
        _renderSprite.Begin(D3D.SpriteFlags.AlphaBlend);
        _renderSprite.Draw(texture,
            new Rectangle(0, 0, (int)obj.Width, (int)obj.Height),
            new Vector3(0.0f, 0.0f, 0.0f),
            new Vector3((int)obj.Left, (int)obj.Top, 0.0f),
            obj.RenderColor);
        _renderSprite.End();
    }
}
...where "_renderSprite" is a D3D.Sprite, and PrismDXObject is a simple class that stores x/y/width/height/color. How can I update this method so that xOffset and yOffset can be used to make the texture wrap? Remember, my end-goal is a scrolling background that loops as the player walks forward.
Incidentally, that RenderTexture() method is meant to be a "library method" which can be called from anywhere in my program... so if I'm doing something really inefficient or ill-advised, I'd welcome a friendly warning! My main concern is getting the wrapping background to work, though.
I'm not sure that the sprite mechanism allows for what I'm about to explain, but 2 triangles certainly do. If this does not work with sprites, use triangles directly:
What you're asking for is directly supported by the texturing subsystem; it is called texture wrapping.
When you specify the texture coordinates that your quad will use, instead of using the 0,0-1,1 range, you can use 0+xoffset/tex_x_size, 0+yoffset/tex_y_size, 1+xoffset/tex_x_size, 1+yoffset/tex_y_size for your texture coordinates.
Then the only thing left to do is to specify that the texture sampler used to map your background does texture wrapping. To do this, set the D3DSAMP_ADDRESSU and D3DSAMP_ADDRESSV sampler states to D3DTADDRESS_WRAP. Note that this is the default for the sampler state.
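For example, with a 256-pixel-wide texture and xOffset = 64, the u coordinates run from 0.25 to 1.25; the sampler wraps the part beyond 1.0 back to the left edge of the texture, which is exactly the scrolling effect you want.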
That's it. Now, getting back to D3D.Sprite specifically: the Draw method takes a rectangle that tells it which part of the texture to use. Have you tried drawing with a source rectangle running from xOffset, yOffset to xOffset + obj.Width, yOffset + obj.Height? This will only work if the sprite subsystem uses a sampler that has wrapping on, and I don't know how Sprite is implemented internally.
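A minimal sketch of that suggestion, adapting the RenderTexture method above (untested, since as noted it depends on the sprite's sampler actually wrapping):
public void RenderTexture(PrismDXObject obj, D3D.Texture texture, int xOffset, int yOffset)
{
    if (obj != null && texture != null)
    {
        _renderSprite.Begin(D3D.SpriteFlags.AlphaBlend);
        _renderSprite.Draw(texture,
            // Source rectangle shifted by the scroll offset; with a wrapping
            // sampler, the part past the texture's edge reads from the other side.
            new Rectangle(xOffset, yOffset, (int)obj.Width, (int)obj.Height),
            new Vector3(0.0f, 0.0f, 0.0f),
            new Vector3((int)obj.Left, (int)obj.Top, 0.0f),
            obj.RenderColor);
        _renderSprite.End();
    }
}
If the sprite sampler turns out to clamp rather than wrap, the fallback is the two-triangle quad with the offset texture coordinates described above.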
