using Cairo;
I have drawn a rectangle inside a bigger rectangle, which is inside a drawing area.
I have managed to attach an event to the drawing area, which is an object I have extended from DrawingArea:
this.AddEvents ((int) EventMask.ButtonPressMask);
this.ButtonPressEvent += delegate(object o, ButtonPressEventArgs args) {
hasInterface(args.Event.X, args.Event.Y);
Console.WriteLine("Button Pressed " + args.Event.X + ", " + args.Event.Y);
};
I'm dynamically drawing the squares using:
cr.Translate(width/2, height/2);
cr.Rectangle((pX + (i * tmp)) , pY, boxsize, boxsize);
private void recordPosition(double x, double y)
{
x = x*2;
y = y*2;
boxCoordinates.Add( new double[,]
{
{x, y}
}
); // store coords
}
List<double[,]> boxCoordinates;
So from the inside of the drawing area the square is drawn at x=0, y=0; from the "outside" point of view it's at x=90, y=45; the width = 180, height = 100.
I was using Translate (since half of this is copied) by size/2, so this means that the drawing area was doing a resize of the square. To solve this issue I was saving the positions multiplied by 2, but this is not working, as I'm getting "hits" outside of the rectangle drawn.
What is the best way to do this? I mean, how do I translate the X/Y positions from the window to the drawing area? I saw this was possible in other languages, but I'm not sure how to do it in C# with the drawing area from Mono.
Thanks for any help.
I've done this a few times in C with SDL and in C# with Cairo. Basically, you want to be able to convert the bounding box of each of your rectangles to and from the coordinates you are using for rendering on the Cairo canvas.
For each of your rectangles, you'll have its location in its own world. I like to call these the "world coordinates", as opposed to the "screen coordinates" (which map to where your mouse will be).
You can store the world coordinates of each box and then translate them to screen ones for each frame you render.
public class Shape {
    // Position of the shape in world coordinates
    public PointD WorldLoc { get; set; }
}
You would do all your physics (if you have any) on the WorldLoc values. When you come to render, you want to be able to convert WorldLoc to a screen location.
public class Scene {
    public double Zoom = 1.0;
    public PointD Offset;

    // Convert a point from world coordinates to screen (mouse) coordinates
    public PointD WorldToScreen(PointD world) {
        return new PointD((world.X - Offset.X) * Zoom,
                          (world.Y - Offset.Y) * Zoom);
    }
}
Each time you render something in this Scene, you'll use WorldToScreen() to get the screen coordinates. You can then use the same mapping in reverse to work out if your mouse is inside the on-screen box of a world box.
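To answer the hit-testing part directly: the inverse mapping plus a bounds check is enough. A minimal sketch (my own helper names; it assumes the Offset/Zoom fields of the Scene above and Cairo's PointD):
using Cairo;
public static class HitTesting
{
    // Inverse of Scene.WorldToScreen: map a mouse/screen point back into world coordinates
    public static PointD ScreenToWorld(PointD screen, PointD offset, double zoom)
    {
        return new PointD(screen.X / zoom + offset.X,
                          screen.Y / zoom + offset.Y);
    }

    // True if the screen point falls inside an axis-aligned box stored in world coordinates
    public static bool HitsBox(PointD screen, PointD offset, double zoom,
                               double boxX, double boxY, double boxSize)
    {
        var world = ScreenToWorld(screen, offset, zoom);
        return world.X >= boxX && world.X <= boxX + boxSize &&
               world.Y >= boxY && world.Y <= boxY + boxSize;
    }
}
In your ButtonPressEvent handler you would then call HitsBox(new PointD(args.Event.X, args.Event.Y), scene.Offset, scene.Zoom, box.WorldLoc.X, box.WorldLoc.Y, boxsize) instead of comparing against the doubled coordinates.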
Related
I am trying to take a UI object's screen space position and translate that to what I am calling 'monitor space'.
As far as I can tell, screen space in Unity is relative to the application's window. That is, even if the app is not full screen and has been moved around on your monitor, (0,0) will still be the lower left of the app window.
I need to translate one of those screen space values into the actual position within the user's monitor. This is especially important when considering that the user might have multiple monitors.
I am not finding anything to get this done, though.
I am hoping to find a platform-agnostic solution, but if it must be Windows-only then I can make that work as well.
Any help on this would be greatly appreciated.
Thank you
Now, after TEEBQNE's answer, I also wanted to give it a shot using the native solution.
As mentioned, this will only work for Windows PC standalone builds and requires
Unity's new Input System (see Quick Start)
One of the solutions from Getting mouse position in c#
For example, if you want to use System.Windows.Forms, copy the corresponding DLL from
C:\Windows\Microsoft.NET\Framework64\v4.x.xx
into your project under Assets/Plugins
Then in code you can use
using System.Windows.Forms;
Whether this is more efficient (or even works this way) I can't tell - I'm only on my phone here - but I hope the idea is clear ;)
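If you'd rather not copy the Forms DLL at all, the question linked above also has P/Invoke-based answers; a rough, untested sketch using user32's GetCursorPos would be:
using System.Runtime.InteropServices;
using UnityEngine;
public static class NativeCursor
{
    [StructLayout(LayoutKind.Sequential)]
    private struct POINT { public int X; public int Y; }

    [DllImport("user32.dll")]
    private static extern bool GetCursorPos(out POINT lpPoint);

    // Absolute cursor position in monitor pixel coordinates (Windows only)
    public static Vector2Int GetSystemMousePosition()
    {
        GetCursorPos(out var p);
        return new Vector2Int(p.X, p.Y);
    }
}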
So the idea is:
Store the initial cursor position
Set your cursor to certain positions of interest using WarpCursorPosition, using Unity screen coordinates as input
Read out the resulting absolute monitor coordinates using the native call
In the end, reset the cursor to the original position
This might look somewhat like
using UnityEngine;
using UnityEngine.InputSystem;
public static class MonitorUtils
{
// Store reference to main Camera (Camera.main is expensive)
private static Camera _mainCamera;
// persistent array to fetch rect corners
// cheaper than creating and throwing away a new array every time
// especially when fetching them every frame
private static readonly Vector3[] corners = new Vector3[4];
// For getting the UI rect corners in Monitor pixel coordinates
public static void GetMonitorRectCorners(this RectTransform rectTransform, Vector2Int[] output, bool isScreenSpaceCanvas = true, Camera camera = null)
{
// Lazy initialization of optional parameter
if (!camera) camera = GetMainCamera();
// Store initial mouse position
var originalMousePosition = Mouse.current.position.ReadValue();
// Get the four world space positions of your RectTransform's corners
// in the order bottom left, top left, top right, bottom right
// See https://docs.unity3d.com/ScriptReference/RectTransform.GetWorldCorners.html
rectTransform.GetWorldCorners(corners);
// Iterate the four corners
for (var i = 0; i < 4; i++)
{
if (!isScreenSpaceCanvas)
{
// Get the monitor position from the world position (see below)
output[i] = WorldToMonitorPoint(corners[i], camera);
}
else
{
// Get the monitor position from the screen position (see below)
output[i] = ScreenToMonitorPoint(corners[i], camera);
}
}
// Restore mouse position
Mouse.current.WarpCursorPosition(originalMousePosition);
}
// For getting a single Unity world space position in Monitor pixel coordinates
public static Vector2Int WorldToMonitorPoint(Vector3 worldPoint, Camera camera = null)
{
// Lazy initialization of optional parameter
if (!camera) camera = GetMainCamera();
var screenPos = camera.WorldToScreenPoint(worldPoint);
return ScreenToMonitorPoint(screenPos, camera);
}
// For getting a single Unity screen space position in Monitor pixel coordinates
public static Vector2Int ScreenToMonitorPoint(Vector3 screenPos, Camera camera = null)
{
// Lazy initialization of optional parameter
if (!camera) camera = GetMainCamera();
// Set the system cursor position there based on Unity screen space
Mouse.current.WarpCursorPosition(screenPos);
// Then get the actual system mouse position (see below)
return GetSystemMousePosition();
}
// Get and store the main camera
private static Camera GetMainCamera()
{
if (!_mainCamera) _mainCamera = Camera.main;
return _mainCamera;
}
// Convert the system mouse position to Vector2Int for working
// with it in Unity
private static Vector2Int GetSystemMousePosition()
{
var point = System.Windows.Forms.Cursor.Position;
return new Vector2Int(point.X, point.Y);
}
}
So you can either simply use
var monitorPosition = MonitorUtils.WorldToMonitorPoint(someUnityWorldPosition);
// or if you already have the `Camera` reference
//var monitorPosition = MonitorUtils.WorldToMonitorPoint(someUnityWorldPosition, someCamera);
or if you already have a screen space position like e.g. in a ScreenSpace Overlay canvas
var monitorPosition = MonitorUtils.ScreenToMonitorPoint(someScreenPosition);
// or if you already have the `Camera` reference
//var monitorPosition = MonitorUtils.ScreenToMonitorPoint(someScreenPosition, someCamera);
or you can get all four corners of a UI element at once using e.g.
var monitorCorners = new Vector2Int [4];
someRectTransform.GetMonitorRectCorners(monitorCorners, isScreenSpaceCanvas);
// or again if you already have a camera reference
//someRectTransform.GetMonitorRectCorners(monitorCorners, isScreenSpaceCanvas, someCamera);
Little example
using UnityEngine;
using UnityEngine.InputSystem;
public class Example : MonoBehaviour
{
[Header("References")]
[SerializeField] private Camera mainCamera;
[SerializeField] private RectTransform _rectTransform;
[SerializeField] private Canvas _canvas;
[Header("Debugging")]
[SerializeField] private bool isScreenSpace;
[Header("Output")]
[SerializeField] private Vector2Int bottomLeft;
[SerializeField] private Vector2Int topLeft;
[SerializeField] private Vector2Int topRight;
[SerializeField] private Vector2Int bottomRight;
private readonly Vector2Int[] _monitorPixelCornerCoordinates = new Vector2Int[4];
private void Awake()
{
if (!mainCamera) mainCamera = Camera.main;
if (!_canvas) _canvas = GetComponentInParent<Canvas>();
isScreenSpace = _canvas.renderMode == RenderMode.ScreenSpaceOverlay;
}
private void Update()
{
if (Keyboard.current.spaceKey.isPressed)
{
_rectTransform.GetMonitorRectCorners(_monitorPixelCornerCoordinates, isScreenSpace);
bottomLeft = _monitorPixelCornerCoordinates[0];
topLeft = _monitorPixelCornerCoordinates[1];
topRight = _monitorPixelCornerCoordinates[2];
bottomRight = _monitorPixelCornerCoordinates[3];
}
}
}
You will see that moving your mouse each and every frame isn't a good idea though ^^
Now you can see the four corners being updated depending on the actual position on the screen.
Note: while Unity screen space has (0,0) at the bottom left, in normal display pixel coordinates (0,0) is rather at the top left. So you might need to invert the Y values.
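If you do need that inversion, it's just a subtraction against the display height; a tiny sketch (FlipY is only an illustrative helper name):
// Converts a Y value between bottom-left origin (Unity screen space)
// and top-left origin (typical monitor pixel coordinates).
// displayHeight would be e.g. Display.main.systemHeight.
static int FlipY(int y, int displayHeight)
{
    return displayHeight - y;
}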
Alright, first off - sorry for the late response, I just got back and was able to type up an answer.
From what I have found, this solution does not work in the editor and produces odd results on a Mac with a retina display. In the editor, the Screen and Display spaces appear to be exactly the same. There is probably a way to fix this, but I did not look into the specifics. As for the Mac, for whatever reason, the reported internal resolution is always half the actual resolution. I am not sure if this is just a retina display bug with Unity or a general Mac bug. I tested and ran this test script on both a Windows computer and a Mac with a retina display. I have yet to test it on any mobile platform.
I do not know exactly what you would like to achieve with the values you wish to find, so I set up a demo scene that displays the values instead of using them.
Here is the demo script:
using UnityEngine;
using System.Collections.Generic;
using UnityEngine.UI;
public class TestScript : MonoBehaviour
{
[SerializeField] private RectTransform rect = null;
[SerializeField] private List<Text> text = new List<Text>();
[SerializeField] private Canvas parentCanvas = null;
[SerializeField] private Camera mainCam = null;
private void Start()
{
// determine the canvas mode of our UI object
if (parentCanvas == null)
parentCanvas = GetComponentInParent<Canvas>();
// only need a camera in the case of camera space canvas
if (parentCanvas.renderMode == RenderMode.ScreenSpaceCamera && mainCam == null)
mainCam = Camera.main;
// generate initial data points
GenerateData();
}
/// <summary>
/// Onclick of our button to test generating data when the object moves
/// </summary>
public void GenerateData()
{
// the anchored position is relative to screen space if the canvas is an overlay - if not, it will need to be converted to screen space based on our camera
Vector3 screenPos = parentCanvas.renderMode == RenderMode.ScreenSpaceCamera ? mainCam.WorldToScreenPoint(transform.position) : rect.transform.position;
// our object relative to screen position
text[0].text = "Screen Pos: " + screenPos;
// the dimensions of our screen (The current window that is rendering our game)
text[1].text = "Screen dimensions: " + Screen.width + " " + Screen.height;
// find our width / height normalized relative to the screen space dimensions
float x = Mathf.Clamp01(screenPos.x / Screen.width);
float y = Mathf.Clamp01(screenPos.y / Screen.height);
// our normalized screen positions
text[2].text = "Normalized Screen Pos: " + x + " " + y;
// grab the dimensions of the main renderer - the current monitor our game is rendered on
#if UNITY_STANDALONE_OSX
text[3].text = "Display dimensions: " + (Display.main.systemWidth * 2f) + " " + (Display.main.systemHeight * 2f);
// now find the coordinates of the UI object transcribed from screen space normalized coordinates to our monitor / resolution coordinates
text[4].text = "Display relative pos: " + (Display.main.systemWidth * x * 2f) + " " + (Display.main.systemHeight * y * 2f);
#else
text[3].text = "Display dimensions: " + Display.main.systemWidth + " " + Display.main.systemHeight;
// now find the coordinates of the UI object transcribed from screen space normalized coordinates to our monitor / resolution coordinates
text[4].text = "Display relative pos: " + (Display.main.systemWidth * x) + " " + (Display.main.systemHeight * y);
#endif
}
/// <summary>
/// Just for debugging - can be deleted
/// </summary>
private void Update()
{
if (Input.GetKey(KeyCode.A))
{
rect.anchoredPosition += new Vector2(-10f, 0f);
}
if (Input.GetKey(KeyCode.W))
{
rect.anchoredPosition += new Vector2(0f, 10f);
}
if (Input.GetKey(KeyCode.S))
{
rect.anchoredPosition += new Vector2(0f, -10f);
}
if (Input.GetKey(KeyCode.D))
{
rect.anchoredPosition += new Vector2(10f, 0f);
}
}
}
I accounted for the parent canvas being either Overlay or Camera mode and put in a check for an OSX build to adjust to the proper screen dimensions.
Here is a gif of the build on OSX. I set the window to be 1680x1050 and my computer's current resolution is 2880x1800. I had also tested it on Windows but did not record it, as the example looks nearly identical.
Let me know if you have more questions about the implementation or if there are issues with other platforms I did not test.
Edit: Just realized you want the screen space coordinate relative to the monitor space. I will correct the snippet in a little bit - in a meeting right now.
Edit 2: After a bit more looking, it will not be easy to get the exact coordinates without the window being centered or without getting the standalone window's position. I do not believe there is an easy way to get this information without a DLL, so here is an implementation for Mac and a solution for Windows.
Currently, the solution I have will only get the correct position if the standalone player is windowed and centered on your screen. If the player is centered on the screen, I know that the center of my monitor is at half the dimensions of its resolution, and that the center point of my window matches up to this point. I can then get the bottom-left corner of my window relative to my monitor instead of as a (0,0) coordinate. Since screen space has the bottom-left corner at (0,0), you can now convert a position to monitor space by adding the newly calculated bottom-left position to it.
Here is the new GenerateData method:
/// <summary>
/// Onclick of our button to test generating data when the object moves
/// </summary>
public void GenerateData()
{
// the anchored position is relative to screen space if the canvas is an overlay - if not, it will need to be converted to screen space based on our camera
Vector3 screenPos = parentCanvas.renderMode == RenderMode.ScreenSpaceCamera ? mainCam.WorldToScreenPoint(transform.position) : rect.transform.position;
// grab the display dimensions
Vector2 displayDimensions;
// bug (or something) with Mac / retina displays where the Display.main.system dimensions are half of what they actually are
#if UNITY_STANDALONE_OSX || UNITY_EDITOR_OSX
displayDimensions = new Vector2(Display.main.systemWidth * 2f, Display.main.systemHeight * 2f);
#else
displayDimensions = new Vector2(Display.main.systemWidth, Display.main.systemHeight);
#endif
// the centerpoint of our display coordinates
Vector2 displayCenter = new Vector2(displayDimensions.x / 2f, displayDimensions.y / 2f);
// half our screen dimensions to find our screen space relative to monitor space
Vector2 screenDimensionsHalf = new Vector2(Screen.width / 2f, Screen.height / 2f);
// find the corners of our window relative to the monitor space
Vector2[] displayCorners = new Vector2[] {
new Vector2(displayCenter.x - screenDimensionsHalf.x, displayCenter.y - screenDimensionsHalf.y), // bottom left
new Vector2(displayCenter.x - screenDimensionsHalf.x, displayCenter.y + screenDimensionsHalf.y), // top left
new Vector2(displayCenter.x + screenDimensionsHalf.x, displayCenter.y + screenDimensionsHalf.y), // top right
new Vector2(displayCenter.x + screenDimensionsHalf.x, displayCenter.y - screenDimensionsHalf.y) // bottom right
};
for (int z = 0; z < 4; ++z)
{
text[z].text = displayCorners[z].ToString();
}
// outputting our screen position relative to our monitor
text[4].text = (new Vector2(screenPos.x, screenPos.y) + displayCorners[0]).ToString();
}
Once you are able to either get or set the window's position, you can properly re-orient the lower-left corner relative to the monitor dimensions, or you can set the window back to the center point of your monitor. The above snippet would also work for a full-screen player; you would just need to determine how far off the aspect ratio of the player window is from your monitor's, which lets you work out how large the black bars on the edges would be.
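For that full-screen case, a rough sketch of the black-bar calculation (my own untested helper; it fits the player's aspect ratio into the monitor and returns the bar size per edge):
// x = width of the bars on the left/right, y = height of the bars on the top/bottom
Vector2 GetBarSizes(Vector2 displayDimensions, float screenWidth, float screenHeight)
{
    float playerAspect = screenWidth / screenHeight;
    float monitorAspect = displayDimensions.x / displayDimensions.y;
    if (playerAspect < monitorAspect)
    {
        // player is narrower than the monitor -> bars on the left and right
        float renderedWidth = displayDimensions.y * playerAspect;
        return new Vector2((displayDimensions.x - renderedWidth) / 2f, 0f);
    }
    else
    {
        // player is wider than the monitor -> bars on the top and bottom
        float renderedHeight = displayDimensions.x / playerAspect;
        return new Vector2(0f, (displayDimensions.y - renderedHeight) / 2f);
    }
}
Those bar sizes would then play the same role as displayCorners[0] does in the windowed case above.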
I assumed what you wanted was straightforward, but from what I can tell an OS-agnostic solution would be difficult. My above solution should work for any platform when the player is windowed, provided you can either get or set the standalone window position, and for any platform that is full-screened using the theoretical approach I mentioned.
If you want more info on how to adjust the implementation for the full-screened window let me know.
In the app I'm trying to develop, a key part is getting the position where the user has touched. First I thought of using a tap gesture recognizer, but after a quick Google search I learned that it was useless (see here for an example).
Then I discovered SkiaSharp and, after learning how to use it at least somewhat, I'm still not sure how to get the proper coordinates of a touch. Here are the sections of the code in my project that are relevant to the problem.
Canvas Touch Function
private void canvasView_Touch(object sender, SKTouchEventArgs e)
{
// Only carry on with this function if the image is already on screen.
if(m_isImageDisplayed)
{
// Use switch to get what type of action occurred.
switch (e.ActionType)
{
case SKTouchAction.Pressed:
TouchImage(e.Location);
// Update simply tries to draw a small square using double for loops.
m_editedBm = Update(sender);
// Refresh screen.
(sender as SKCanvasView).InvalidateSurface();
break;
default:
break;
}
}
}
Touch Image
private void TouchImage(SKPoint point)
{
// Is the point in range of the canvas?
if(point.X >= m_x && point.X <= (m_editedCanvasSize.Width + m_x) &&
point.Y >= m_y && point.Y <= (m_editedCanvasSize.Height + m_y))
{
// Save the point for later and set the boolean to true so the algorithm can begin.
m_clickPoint = point;
m_updateAlgorithm = true;
}
}
Here I'm just seeing, or TRYING to see, if the point clicked was in range of the image; I made a separate SKSize variable to help. Ignore the boolean, it's not that important.
Update function (function that attempts to draw ON the point pressed so it's the most important)
public SKBitmap Update(object sender)
{
// Create the default test color to replace current pixel colors in the bitmap.
SKColor color = new SKColor(255, 255, 255);
// Create a new surface with the current bitmap.
using (var surface = new SKCanvas(m_editedBm))
{
/* According to this: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/finger-paint ,
the points I have to start are in Xamarin forms coordinates, but I need to translate them to SkiaSharp coordinates which are in
pixels. */
Point pt = new Point((double)m_touchPoint.X, (double)m_touchPoint.Y);
SKPoint newPoint = ConvertToPixel(pt);
// Loop over the touch point start, then go to a certain value (like x + 100) just to get a "block" that's been altered for pixels.
for (int x = (int)newPoint.X; x < (int)newPoint.X + 200.0f; ++x)
{
for (int y = (int)newPoint.Y; y < (int)newPoint.Y + 200.0f; ++y)
{
// According to the x and y, change the color.
m_editedBm.SetPixel(x, y, color);
}
}
return m_editedBm;
}
}
Here I'm THINKING that it'll start, you know, at the coordinate I pressed (and these coordinates have been confirmed to be within the range of the image thanks to the "TouchImage" function), and when it does get the correct coordinates (or at least it SHOULD have done that) the square will be drawn one "line" at a time. I have a game programming background, so this kind of thing sounds simple, but I can't believe I didn't get it right the first time.
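For reference, ConvertToPixel isn't shown above; the version in the linked finger-paint article essentially scales the Xamarin.Forms coordinates by the ratio of the canvas pixel size to its device-independent size:
// canvasView is the SKCanvasView; CanvasSize is in pixels,
// while Width/Height are in device-independent units.
SKPoint ConvertToPixel(Point pt)
{
    return new SKPoint(
        (float)(canvasView.CanvasSize.Width * pt.X / canvasView.Width),
        (float)(canvasView.CanvasSize.Height * pt.Y / canvasView.Height));
}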
Also, I have another function that MIGHT prove worthwhile, because the original image is rotated before being put on screen. Why? Well, by default the image, after taking the picture and then displaying it, is rotated to the left. I had no idea why, but I corrected it with the following function:
// Just rotate the image because for some reason it's tilted 90 degrees to the left.
public static SKBitmap Rotate()
{
using (var bitmap = m_bm)
{
// The new ones width IS the old ones height.
var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
using (var surface = new SKCanvas(rotated))
{
surface.Translate(rotated.Width, 0.0f);
surface.RotateDegrees(90);
surface.DrawBitmap(bitmap, 0, 0);
}
return rotated;
}
}
I'll keep reading and looking up stuff on what I'm doing wrong, but if any help is given I'm grateful.
I'm working on an application where several GUI Labels display the names of planes.
And here's the result:
The problem is, if I rotate my camera by 180°, those labels are still there, as if there were a point symmetry:
So my labels appear twice: once on the plane, which is good, and a second time behind the camera.
I checked whether my script was added twice, but that's not the problem. Stranger still, if I look from an above view, the problem just disappears:
I have no idea where this can come from. Here's my code, which is attached to every plane:
void OnGUI()
{
if (showInfos)
{
Rect r = new Rect((Camera.main.WorldToScreenPoint(gameObject.transform.position)).x+25, Camera.main.pixelHeight - (Camera.main.WorldToScreenPoint(gameObject.transform.position)).y+25, 75f, 75f);
GUI.Label(r, gameObject.transform.root.name);
}
}
You are drawing the labels whether or not they are in the view frustum.
From Camera.WorldToScreenPoint (emphasis mine):
Screenspace is defined in pixels. The bottom-left of the screen is (0,0); the right-top is (pixelWidth,pixelHeight). The z position is in world units from the camera.
You need to check whether the Z value of the screen point is negative or positive (I don't know which one is in front of the camera and which one is behind it, I don't use Unity), and based on that decide whether the label needs to be rendered or not.
void OnGUI()
{
if (showInfos)
{
var pt = Camera.main.WorldToScreenPoint(gameObject.transform.position);
if (pt.z > 0) //or < 0, no idea.
{
Rect r = new Rect(pt.x + 25, Camera.main.pixelHeight - pt.y + 25, 75f, 75f);
GUI.Label(r, gameObject.transform.root.name);
}
}
}
I'm trying to draw some polygons and lines using e.Graphics.DrawPolygon (or DrawLine), but I have a little problem specifying the coordinates where to draw. I am drawing onto a PictureBox using its Paint event. The elements draw correctly relative to each other (creating the required final picture), but they always seem to be drawn in the upper-left corner of the PictureBox. When creating the points to draw, if I just try to multiply the coordinates, it draws at the same place but bigger (the size is multiplied instead of the location coordinates).
Here is my code:
//some for loop
{
//getting the coordinates
Point toAdd = new Point((int)xCoord, (int)yCoord); // creating the point from what was originally a double; here I tried to multiply
tmpPoints.Add(toAdd); // tmpPoints is a List<Point>
}
points.Add(tmpPoints.ToArray()); //List<Point[]>
drawBuffer = points; //saving to a public List<Point[]>
points.Clear();
this.Invalidate();
Here is part of the pictureBox1_Paint method:
for (int i = 0; i < drawBuffer.Count; i++)
{
//some other stuff like deciding which color to use, not very important
Brush br = new SolidBrush(polyColor);
e.Graphics.FillPolygon(br, drawBuffer[i]);
br.Dispose();
}
I have checked using breakpoints: the coordinates keep the same ratio (what was 100 pixels wide is still 100 pixels wide) and they are at values like x = 3000 and y = 1500, but the shape still draws itself in the upper-left corner. When I multiply the coordinates by 3 (see the code for the place where I multiplied), it draws at the same place but 3 times bigger (which doesn't make sense after checking the coords...).
So, my question is - how do I set the location correctly, or is there any other way to do this?
Like this (I know, this is nonsense, just an example)
foreach(Polygon poly in e.Graphics)
{
poly.Location = new Point(poly.Location.X * 2, poly.Location.Y * 2);
}
When you multiply the coordinates of the points, they're scaled around the point (0, 0), the top-left corner of the canvas.
In order to scale the polygon around its center (and I suppose you expected it to work this way), you need to calculate some kind of center for it. For simplicity, it can even be the arithmetic mean of the coordinates, on the X and Y axes respectively. Once you have the coordinates of the center, translate every point by the reversed vector of the center coordinates, so that the polygon's center ends up at the origin of the coordinate system.
Now do your scaling,
and then move the polygon back by the vector of its center coordinates. Put together, that sequence looks something like the sketch below.
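A minimal sketch of that translate-scale-translate sequence (ScaleAroundCenter is just an illustrative helper; it needs using System.Drawing and using System.Linq):
// Scales a polygon around its own center instead of around (0, 0)
static Point[] ScaleAroundCenter(Point[] polygon, float factor)
{
    // center as the arithmetic mean of the vertices
    float cx = (float)polygon.Average(p => p.X);
    float cy = (float)polygon.Average(p => p.Y);
    return polygon
        .Select(p => new Point(
            (int)((p.X - cx) * factor + cx),   // move to origin, scale, move back
            (int)((p.Y - cy) * factor + cy)))
        .ToArray();
}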
When you multiply,
poly.Location = new Point(poly.Location.X * 2, poly.Location.Y * 2);
you are doing a stretch operation. When you add,
poly.Location = new Point(poly.Location.X + 50, poly.Location.Y + 50);
you are doing a translation operation.
If you want to shift everything without modifying the stored coords then just translate the graphics before drawing:
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
e.Graphics.TranslateTransform(100, 100); // shift the origin somehow
// ... draw the polygons as before ...
}
I drew a circle in C# using DirectX. I would like to draw a circle with the same dimensions in C# using GDI; in other words, I would like to convert that circle from DirectX to GDI+. Can anybody help me? How can I do it, and is there an algorithm available for that?
Also, I give the center of the circle as a point (x, y), but in GDI+ it is in pixel format, so how can I convert the DirectX points to GDI+ pixels?
Here is a link from MSDN that introduces Graphics and Drawing in Windows Forms. And it's likely that you will need something similar to:
public Form1()
{
InitializeComponent();
this.Paint += new PaintEventHandler(Form1_Paint);
// This works too
//this.Paint += (_, args) => DrawCircle(args.Graphics);
}
void Form1_Paint(object sender, PaintEventArgs e)
{
DrawCircle(e.Graphics);
}
private void DrawCircle(Graphics g)
{
int x = 0;
int y = 0;
int radius = 50;
// The x,y coordinates here represent the upper left corner
// so if you have the center coordinates (cenX, cenY), you will have to
// subtract radius from both cenX and cenY in order to represent the
// upper left corner.
// The width and height represents that of the bounding rectangle of the circle
g.DrawEllipse(Pens.Black, x, y, radius * 2, radius * 2);
// Use this instead if you need a filled circle
//g.FillEllipse(Brushes.Black, x, y, radius * 2, radius * 2);
}
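Since your input is the circle's center (x, y), here is a small usage sketch of the subtraction described in the comment above (cenX/cenY stand for your DirectX center coordinates):
private void DrawCircleFromCenter(Graphics g, int cenX, int cenY, int radius)
{
    // DrawEllipse expects the upper-left corner of the bounding rectangle,
    // so shift the center left and up by the radius
    g.DrawEllipse(Pens.Black, cenX - radius, cenY - radius, radius * 2, radius * 2);
}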
After that you might want to look into double-buffering techniques; a few links:
[SO] How to double buffer .NET controls on a form?
[MSDN] Double Buffered Graphics
[MSDN] Using Double Buffering