I have seen several examples of rendering 1-pixel lines in WPF, but none seem to apply to my situation. I am using DrawingVisual and DrawingContext to draw some shapes, and RenderTargetBitmap and PngBitmapEncoder to generate the image. In many cases the rectangles end up with a 2-pixel border even though I set the pen thickness to 1. I am guessing this is due to WPF's resolution-independent rendering.
I have found several solutions, but they are either in XAML or apply to drawing controls. The closest thing I have found is XSnappingGuidelines/YSnappingGuidelines, but I cannot find a single example of how to use them, and the documentation on these properties is very much lacking.
How do I disable the resolution independent rendering for DrawingVisual?
UPDATE:
Here is what I am trying to do:
Declare a DrawingVisual:
DrawingVisual mainTemplate = new DrawingVisual();
Get Context:
using (DrawingContext context = mainTemplate.RenderOpen())
Draw rectangle:
penToUse = new Pen(new SolidColorBrush(Color.FromRgb(0xFF, 0xFF, 0xFF)), 1.0);
penToUse.DashStyle = DashStyles.Dash;
context.DrawRectangle(brushToUse, penToUse, new Rect(left, top, width, height));
Where do I set the rendering mode to align to pixels?
jorj
In WPF, when you draw a line, it is centered on the coordinates you specify. So if, on a device at 96 DPI, you draw a vertical line from (10, 10) to (10, 20) with a pen width of 1, the line is actually drawn between x = 9.5 and x = 10.5, covering two pixel columns. If you want to align the line on a pixel edge, you need to shift it by 0.5. On a 120 DPI monitor, the pen width should be 0.8 to cover a single pixel, and you need to shift the line by 0.4 to align it on a pixel edge.
You do not have to use a GuidelineSet; it does no more than this simple shifting and needlessly complicates the code.
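For example, applied to the rectangle from the question (a sketch assuming 96 DPI, where one WPF unit equals one device pixel; left, top, width, height and brushToUse are the question's variables):
DrawingVisual mainTemplate = new DrawingVisual();
using (DrawingContext context = mainTemplate.RenderOpen())
{
    Pen penToUse = new Pen(new SolidColorBrush(Color.FromRgb(0xFF, 0xFF, 0xFF)), 1.0);
    penToUse.DashStyle = DashStyles.Dash;
    // The stroke is centered on the rectangle's edges, so placing the
    // edges at *.5 coordinates makes a 1.0-wide stroke fill exactly one
    // pixel row/column instead of straddling two.
    double half = penToUse.Thickness / 2.0;
    context.DrawRectangle(brushToUse, penToUse,
        new Rect(left + half, top + half, width - penToUse.Thickness, height - penToUse.Thickness));
}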
The closest I've come to being able to render single-pixel lines in WPF with a DrawingContext is this:
GuidelineSet guidelines = new GuidelineSet();
guidelines.GuidelinesX.Add(_bgRect.Left - 0.5);
guidelines.GuidelinesX.Add(_bgRect.Right + 0.5);
guidelines.GuidelinesY.Add(_bgRect.Top - 0.5);
guidelines.GuidelinesY.Add(_bgRect.Bottom + 0.5);
dc.PushGuidelineSet(guidelines);
dc.DrawRectangle(Background, _outlinePen, _bgRect);
if (BorderThickness.Left > 1)
    dc.DrawLine(_leftPen, _bgRect.TopLeft, _bgRect.BottomLeft);
if (BorderThickness.Top > 1)
    dc.DrawLine(_topPen, _bgRect.TopLeft, _bgRect.TopRight);
if (BorderThickness.Right > 1)
    dc.DrawLine(_rightPen, _bgRect.TopRight, _bgRect.BottomRight);
if (BorderThickness.Bottom > 1)
    dc.DrawLine(_bottomPen, _bgRect.BottomRight, _bgRect.BottomLeft);
dc.Pop();
Related
I'm working on a basic mind-map program, but I don't have a lot of experience with drawing in WPF. I want to be able to draw rectangles with text on them, and I would like to be able to click on the rectangles to change the text, for example.
As of now I have:
private void DrawSubject(int curve, double X, double Y, Brush clr)
{
    Rectangle rect = new Rectangle();
    rect.Width = 62;
    rect.Height = 38;
    rect.Fill = clr;
    rect.Stroke = line;
    rect.RadiusX = rect.RadiusY = curve;
    Canvas.SetLeft(rect, X);
    Canvas.SetTop(rect, Y);
    mindmap.Children.Add(rect);
}
SolidColorBrush line = new SolidColorBrush(Color.FromArgb(255, 21, 26, 53));
mindmap is the name of the canvas. I want to be able to draw a lot of these rectangles, which represent branches of the mind map. However, when I drew 10,000 of them at random locations, the process memory in the diagnostic tools went up by 100 MB after it was done drawing all of them. I did this to simulate a mind map with 10,000 branches. So I was wondering if there might be a way to decrease the memory used by these rectangles?
Or is it better to use DrawingVisual and a grid click event that checks whether the clicked position matches the position of a rectangle by keeping the coordinates of the rectangles in a List?
I would attempt the DrawingVisual method you described; if that proves too costly in performance (I don't know how well DrawingVisual performs), you could look into embedding OpenGL or DirectX into your application and rendering the rectangles that way.
But hit-testing drawn visuals rather than making a Control for each is definitely the way to go at your scale.
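For reference, here is a minimal sketch of that DrawingVisual route, using WPF's built-in hit testing instead of a manual coordinate list (the class and member names are illustrative, not from the question):
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

public class MindmapCanvas : FrameworkElement
{
    private readonly VisualCollection _visuals;

    public MindmapCanvas()
    {
        _visuals = new VisualCollection(this);
        MouseLeftButtonUp += OnClick;
    }

    public void AddBranch(Rect bounds, Brush fill, Pen stroke)
    {
        // One lightweight DrawingVisual per branch instead of a full Rectangle control.
        DrawingVisual visual = new DrawingVisual();
        using (DrawingContext dc = visual.RenderOpen())
        {
            dc.DrawRoundedRectangle(fill, stroke, bounds, 5, 5);
        }
        _visuals.Add(visual);
    }

    private void OnClick(object sender, MouseButtonEventArgs e)
    {
        // WPF's built-in hit testing finds the visual under the click;
        // no manual list of rectangle coordinates is needed.
        HitTestResult hit = VisualTreeHelper.HitTest(this, e.GetPosition(this));
        if (hit != null && hit.VisualHit is DrawingVisual branch)
        {
            // branch is the clicked rectangle; update its text here.
        }
    }

    // Plumbing WPF needs to render and hit-test the hosted visuals.
    protected override int VisualChildrenCount => _visuals.Count;
    protected override Visual GetVisualChild(int index) => _visuals[index];
}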
I'm using the DrawString method of the Graphics class to draw text on an image. The font is specified before drawing.
G.DrawString(mytext, font, brush, 0, 0)
The problem arises when the same text is drawn on an image of a smaller size: the text appears larger. I'm looking for a solution that alters the font size according to the image size, so that the text doesn't appear larger or smaller when drawn on images of different sizes.
I'm attaching images of different sizes with text of the same font size drawn on them:
http://i.stack.imgur.com/ZShUI.jpg
http://i.stack.imgur.com/GUfbM.jpg
I can't directly post the image because I'm not allowed.
You would get the most precise scaling by drawing on a separate image and then slapping that image onto the original one. You'd do that as follows:
Create an in-memory Bitmap with enough space
Draw the text on that bitmap in your default font
Draw the bitmap containing the text onto the original image, scaling it to the size you need
Code:
Bitmap textBmp = new Bitmap(100, 100);
Graphics textBmpG = Graphics.FromImage(textBmp);
textBmpG.DrawString("test 1", new Font(FontFamily.GenericSansSerif, 16), Brushes.Red, new PointF(0, 0));
Graphics origImgG = Graphics.FromImage(originalImg);
// Note: draw the bitmap itself (textBmp), not the Graphics object,
// scaling the 100x100 source into the 50x50 destination rectangle.
origImgG.DrawImage(textBmp, new Rectangle(50, 50, 50, 50), new Rectangle(0, 0, 100, 100), GraphicsUnit.Pixel);
Take notice of the last line and the Rectangle parameters; use them to scale your text bitmap onto the original image. Alternatively, you can use the Graphics.MeasureString method to determine how wide your text would be and make attempts until you get the best match you can.
Use Graphics.MeasureString() to measure how big your string would be on the image
Decrease/increase the font size step by step accordingly
As you requested in a comment, I'll give you a more detailed suggestion here. Say your original image width is WI1, and the width of the text on it, measured using Graphics.MeasureString, is WT1. If you resize your image to width WI2, then your ideal text width is WT2 = WT1 * WI2 / WI1. Using DrawString you may not be able to get this exact width, because increasing the font size by 1 may jump over that value. So you have to make several attempts and find the best: pick a font size, and if the resulting text width is smaller than the target (measure with MeasureString), increase it until it becomes bigger, and you've got about the closest match. The same goes if it's too big: decrease the font size step by step.
This is quick and dirty, as you see, because it takes many measuring passes, but I can't think of a better solution unless you're using a monospaced font.
The difference between those solutions is that with the first you can get the text to fit the EXACT size you need, but you would probably lose some font readability due to scaling. The second solution gives good readability, but you can't get a pixel-perfect text size.
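A minimal sketch of that measure-and-adjust loop (the helper name and the starting size are illustrative; it implements the stepping described above):
// Finds a font size whose rendered width best approximates targetWidth.
static Font FitFont(Graphics g, string text, FontFamily family, float targetWidth)
{
    float size = 16f; // arbitrary starting size
    Font font = new Font(family, size);
    float width = g.MeasureString(text, font).Width;
    float step = width < targetWidth ? 1f : -1f;
    while (size + step > 0)
    {
        Font next = new Font(family, size + step);
        float nextWidth = g.MeasureString(text, next).Width;
        // Stop once the next step crosses the target width.
        if ((step > 0 && nextWidth > targetWidth) || (step < 0 && nextWidth < targetWidth))
        {
            // Keep whichever of the two candidates is closer to the target.
            bool keepNext = Math.Abs(nextWidth - targetWidth) < Math.Abs(width - targetWidth);
            (keepNext ? font : next).Dispose();
            return keepNext ? next : font;
        }
        font.Dispose();
        font = next;
        size += step;
        width = nextWidth;
    }
    return font;
}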
In my opinion you have two ways:
Draw the text on the original image and then resize the resulting image (so the text included in it is resized too)
Scale the font size by a factor of newImageWidth / originalImageWidth.
It's not perfect, because you could have some problems with the text height (if the new image is not just scaled but has a different aspect ratio), but it's an idea.
I cannot understand the way GDI+ draws a line on a surface; maybe it has some algorithm for doing it.
For example, let's take a 10x10 px surface.
Bitmap img = new Bitmap(10, 10);
Now let's draw a line on this surface, with a width of 5 px and a top offset of 5 px.
using (var g = Graphics.FromImage(img))
{
g.Clear(Color.White);
var pen = new Pen(Color.Brown);
pen.Width = 5;
g.DrawLine(pen, 0F, 5F, 10F, 5F);
}
We will get:
The drawing didn't begin at pixel #5; it began at pixel #4.
It is obvious that the start point is calculated separately. But how?
I've tried to find a regularity and got this:
y = offset + width/2 - 1
where y is the real start point and offset is the selected start point.
But in some cases this doesn't work. For example, take width = 6 and a selected top offset of 0: we get y = 2, and it is drawn this way:
It should show 6 pixels, but it doesn't.
So there must be a more general algorithm for selecting the start point, but I really have no idea what it could be.
Any help appreciated.
There is no offset in the line drawing. The coordinates you specify in the DrawLine method define the centre of the line. The top pixel is y - width / 2 and the bottom is y - width / 2 + width - 1; the second formula takes into account that integer division rounds width / 2 down. Also, the top row of the bitmap is y = 0 and the bottom row is y = 9. So, for your first line:
top = 5 - (5 / 2) = 3
bottom = 5 - (5 / 2) + 5 - 1 = 7
and the second line:
top = 2 - (6 / 2) = -1
bottom = 2 - (6 / 2) + 6 - 1 = 4
The top edge is clipped to the edge of the bitmap, so the visible line width is reduced.
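A quick check of those formulas in code (note the integer division):
int width = 5, y = 5;
int top = y - width / 2;                // 5 - 2 = 3
int bottom = y - width / 2 + width - 1; // 5 - 2 + 5 - 1 = 7

width = 6; y = 2;
top = y - width / 2;                    // 2 - 3 = -1, clipped to row 0
bottom = y - width / 2 + width - 1;     // 2 - 3 + 6 - 1 = 4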
In the first example, it looks like a line with a width of 5 pixels, centered on row 5 (counting starts at 0, not 1).
This seems like a reasonable outcome.
In the second example, it looks like a line of width 6, centered between rows 1 and 2, where the top row is cut off because it extends beyond the borders of the image.
Coordinates in GDI+ don't designate the pixels themselves but the (infinitely small) points at their centers. Thus (0f, 5f) means the center of the pixel in the first column and sixth row (since counting starts at zero). I will therefore distinguish between coordinates and pixel rows from now on.
When drawing a line, GDI+ conceptually moves a pen of the specified width along the line defined by these coordinates. Note that you can define the exact shape of this pen at the beginning and the end of the line by specifying Pen.StartCap and Pen.EndCap.
Since you specified a width of 5f and you're drawing a horizontal line, the line extends 2.5 pixels to the top and to the bottom, covering the complete pixel rows #3 to #7 inclusive. Note that the upper edge of pixel row #3 has y-coordinate 2.5 and the lower edge of row #7 has 7.5 according to the definition above, which is 5f - 2.5f and 5f + 2.5f respectively.
I do not get the same result as you for your second example (I tried it in the Try GDI+ application). Instead I get a line which covers the first three pixel rows. Theoretically, it should even cover the first 3.5 rows (since the coordinate designates the center of the first row, the upper part is simply clipped), but with antialiasing turned off, the half row at the bottom gets truncated.
This can be shown by setting g.SmoothingMode to SmoothingMode.AntiAlias, in which case the fourth row is drawn semi-transparently. With antialiasing you'll also notice that the first and last columns are not completely painted, since the start coordinate is, again, at the center of the column.
I've got a 2D game that I'm working on that is in 4:3 aspect ratio. When I switch it to fullscreen mode on my widescreen monitor, it stretches. I tried using two viewports to paint a black background over the areas the game shouldn't stretch into, but that left the game at the same size as before; I couldn't get it to fill the viewport that was supposed to hold the whole game.
How can I get it to go fullscreen without stretching and without needing to modify every position and draw call in the game? The code I'm using for the viewports is below.
// set the viewport to the whole screen
GraphicsDevice.Viewport = new Viewport
{
    X = 0,
    Y = 0,
    Width = GraphicsDevice.PresentationParameters.BackBufferWidth,
    Height = GraphicsDevice.PresentationParameters.BackBufferHeight,
    MinDepth = 0,
    MaxDepth = 1
};

// clear the whole screen to black
GraphicsDevice.Clear(Color.Black);

// figure out the largest area that fits in this resolution at the desired aspect ratio
int width = GraphicsDevice.PresentationParameters.BackBufferWidth;
int height = (int)(width / targetAspectRatio + .5f);
if (height > GraphicsDevice.PresentationParameters.BackBufferHeight)
{
    height = GraphicsDevice.PresentationParameters.BackBufferHeight;
    width = (int)(height * targetAspectRatio + .5f);
}

//Console.WriteLine("Back: Width: {0}, Height: {1}", GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight);
//Console.WriteLine("Front: Width: {0}, Height: {1}", width, height);

// set up the new viewport centered in the backbuffer
GraphicsDevice.Viewport = new Viewport
{
    X = GraphicsDevice.PresentationParameters.BackBufferWidth / 2 - width / 2,
    Y = GraphicsDevice.PresentationParameters.BackBufferHeight / 2 - height / 2,
    Width = width,
    Height = height,
    MinDepth = 0,
    MaxDepth = 1
};

GraphicsDevice.Clear(Color.CornflowerBlue);
The image below shows what the screen looks like. The black on the sides is what I want (it comes from the first viewport); the second viewport holds the game and the cornflower blue area. What I want is for the game to scale to fill the cornflower blue area.
Use a viewport http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.viewport_members.aspx
As is also the case in commercial games, you should provide an option that allows the user to switch between a 4:3 and a 16:9 aspect ratio. You should be able to just modify the camera viewing ratio accordingly.
EDIT:
As far as I have seen, there are no games that 'auto-detect' the proper aspect ratio to use.
As has been pointed out, there are ways to make a good guess at the proper aspect ratio. If XNA lets you get at the current Windows user's screen settings, you can determine an aspect ratio from the monitor resolution.
Once you have determined the user's monitor resolution, you can decide how best to deal with it. At first, the best bet may be to just put black bars on the left and right sides of the screen, allowing fullscreen at a 16:9 aspect ratio while essentially still using the 4:3 artwork.
Eventually you could modify the game to change the viewport size when the aspect ratio is 16:9. This wouldn't require changing any art assets, just how they are rendered.
First of all, I'm assuming you're talking about XNA 4.0; AFAIK there are breaking changes between XNA 3.x and XNA 4.0.
I'm relatively new to XNA, but it seems to me that your assets do not fit the size of the window. Let's say your game area is 320x240 and your window is bigger, e.g. 640x480.
You can set the preferred back buffer size so XNA scales your application up. So, tell XNA you are going to use 320x240 by setting the following values:
_graphics.PreferredBackBufferWidth = 320;
_graphics.PreferredBackBufferHeight = 240;
Additionally, you can start in fullscreen mode by setting:
_graphics.IsFullScreen = true;
Also, you have to handle manually how the items should change their size once the window size has changed.
Check out my sample at:
https://github.com/hmadrigal/xnawp7/tree/master/XNASample02
(BTW, you can press F11 to switch between fullscreen and normal view)
Best regards,
Herber
I'm not sure you can actually scale your viewport like that. I understand what you're trying to do, but to do it you'd have to do the following:
Set your screen backbuffer width and height to the 16:9 resolution.
Program in the displacement so that objects didn't draw in those borders.
The thing is, all major games these days, if you play them on a 16:9 monitor and select a 4:3 resolution, will stretch to fit the screen. This isn't something you usually want to overcome: you either support many resolutions in your game, or you get stretching when a user uses the wrong resolution for their screen type.
Usually, one sets up the game and its textures to work from the relative dimensions of the current viewport or back buffer width and height. This way, regardless of the resolution, the game scales to work with that width/height ratio.
It's a bit more work, but in the end, makes your game far more polished and compatible with a wide array of systems.
The only time this may not be done is if the app runs in a window (NOT fullscreen).
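One common way to get what the question asks for, scaling the whole 4:3 scene into the letterboxed viewport without touching individual draw calls, is to render the game to a fixed-size render target and then stretch that target into the centered area. A sketch, assuming XNA 4.0, the standard Game template's spriteBatch field, an assumed 800x600 native resolution, and the x/y/width/height letterbox values computed in the question's code:
// Created once (e.g. in LoadContent), at the game's native 4:3 resolution.
RenderTarget2D scene = new RenderTarget2D(GraphicsDevice, 800, 600);

protected override void Draw(GameTime gameTime)
{
    // 1. Draw the whole game at its native resolution, unchanged.
    GraphicsDevice.SetRenderTarget(scene);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    // ... all existing draw calls go here, untouched ...

    // 2. Stretch the finished scene into the letterboxed viewport area.
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(scene, new Rectangle(x, y, width, height), Color.White);
    spriteBatch.End();
}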
The LinearGradientBrush in .NET (or even in GDI+ as a whole?) seems to have a severe bug: sometimes it introduces artifacts. (See here or here; essentially, the first line of a linear gradient is drawn in the end color, i.e. a gradient from White to Black will start with a Black line and then continue with the proper White-to-Black gradient.)
I wonder if anyone has found a working workaround for this? It is a really annoying bug :-(
Here is a picture of the artifacts; note that there are two LinearGradientBrushes:
http://img142.imageshack.us/img142/7711/gradientartifactmm6.jpg
I have noticed this as well when using gradient brushes. The only effective workaround I have is to always create the gradient brush rectangle one pixel bigger on each edge than the area that is going to be painted with it. That protects you against the issue on all four edges. The downside is that the colors used at the edges are a fraction off those you specify, but this is better than the drawing artifact problem!
You can use the convenient Inflate(int, int) method on a rectangle to get the bigger version.
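In code, that workaround might look like this (a sketch; paintRect, startColor, endColor, angle and g stand in for your own values):
Rectangle brushRect = paintRect;
brushRect.Inflate(1, 1); // brush rectangle is 1px bigger on every edge
using (var lgb = new LinearGradientBrush(brushRect, startColor, endColor, angle))
{
    g.FillRectangle(lgb, paintRect); // but only the original area is painted
}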
I would finesse Phil's answer above (this is really a comment, but I don't have that privilege). The behaviour I see is contrary to the documentation, which says:
The starting line is perpendicular to the orientation line and passes through one of the corners of the rectangle. All points on the starting line are the starting color. The ending line is perpendicular to the orientation line and passes through one of the corners of the rectangle. All points on the ending line are the ending color.
Namely, you get a single-pixel wrap-around in some cases. As far as I can tell (by experimentation), I only get the problem when the width or height of the rectangle is odd. So to work around the bug it is adequate to increase the LinearGradientBrush rectangle by 1 pixel if and only if the dimension (before expansion) is an odd number. In other words, always round the brush rectangle up to the next even number of pixels in both width and height.
So to fill a rectangle r I use something like:
Rectangle gradientRect = r;
if (r.Width % 2 == 1)
{
    gradientRect.Width += 1;
}
if (r.Height % 2 == 1)
{
    gradientRect.Height += 1;
}
var lgb = new LinearGradientBrush(gradientRect, startCol, endCol, angle);
graphics.FillRectangle(lgb, r);
Insane but true.
At least with WPF, you could try using GradientStops to get 100% correct colors right at the edges, even when overpainting.
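One reading of this suggestion, sketched in C# (System.Windows.Media): duplicate the end colors just inside the 0..1 offset range so the edge pixels keep the exact colors. The 0.01/0.99 offsets are illustrative:
var brush = new LinearGradientBrush
{
    StartPoint = new Point(0, 0),
    EndPoint = new Point(0, 1),
    GradientStops =
    {
        new GradientStop(Colors.White, 0.0),
        new GradientStop(Colors.White, 0.01), // pin the start color at the edge
        new GradientStop(Colors.Black, 0.99), // pin the end color at the edge
        new GradientStop(Colors.Black, 1.0)
    }
};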
I experienced artifacts too in my C++ code. What solved the problem was setting a non-default SmoothingMode for the Graphics object. Note that all non-default smoothing modes use a coordinate system that is bound to the center of a pixel; thus, you have to correctly convert your rectangle from GDI to GDI+ coordinates:
Gdiplus::RectF brushRect;
graphics.SetSmoothingMode( Gdiplus::SmoothingModeHighQuality );
brushRect.X = rect.left - (Gdiplus::REAL)0.5;
brushRect.Y = rect.top - (Gdiplus::REAL)0.5;
brushRect.Width = (Gdiplus::REAL)( rect.right - rect.left );
brushRect.Height = (Gdiplus::REAL)( rect.bottom - rect.top );
It seems that LinearGradientBrush works correctly only in the high-quality modes.