I'll start with my current situation.
I downloaded the raycast project from: https://github.com/ChrisSerpico/raycasting
This is based on the tutorial from here: https://lodev.org/cgtutor/raycasting.html
After I got the project to work, played around a bit and modified some things, I'm now stuck on adding multiple layers (based on one map per layer). I've read a lot around the Internet but had no luck implementing that feature.
In this project: https://github.com/Owlzy/OwlRaycastEngine
there are multiple layers, but they are done with slices and I can't figure out how to implement this in the Serpico project (which I picked because its floor/ceiling drawing works a lot better). Textures are stored like this:
Texture2D canvas; // used to convert the buffer to a single texture to be drawn
Color[] buffer; // screen buffer with raw color data to be drawn
Color[][] rawData; // raw data of the individual external textures
// initialize graphics rendering objects
canvas = new Texture2D(GraphicsDevice, SCREEN_WIDTH, SCREEN_HEIGHT);
buffer = new Color[SCREEN_WIDTH * SCREEN_HEIGHT];
rawData = new Color[NUM_TEXTURES][]; //number of Textures
for (int i = 0; i < NUM_TEXTURES; i++)
{
rawData[i] = new Color[TEXTURE_WIDTH * TEXTURE_HEIGHT];
}
The buffer gets filled this way in the wall-casting loop:
if (TEXTURE_WIDTH * texY + texX <= rawData[texNum].Length - 1)
{
buffer[SCREEN_WIDTH * y + x] = rawData[texNum][TEXTURE_WIDTH * texY + texX];
}
else //avoid crash when running into walls
{
buffer[SCREEN_WIDTH * y + x] = rawData[texNum][rawData[texNum].Length - 1];
}
and finally drawn this way:
canvas.SetData<Color>(buffer);
b.Draw(canvas, new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT), Color.White);
The code is straight from the lodev tutorial. I experimented with the variables lineHeight, texY and so on, but with no result: the textures just get stretched or cut off, or the screen is drawn with terrible artifacts.
Could someone help, please? I'm really despairing...
Thanks a lot!
The problem is the call canvas.SetData<Color>(buffer) in Draw().
Move this line to Update() and it will "mostly" work. Texture memory is shared between the CPU and the GPU. By the time Draw() is called, the textures are expected to already exist in GPU memory; transferring data during draws causes random tearing.
The "mostly" comes from the nondeterministic delays between Update() and Draw() and the PCIe memory transfer.
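A minimal sketch of that change against the code above (assuming a standard MonoGame `Game` subclass with a `spriteBatch` field; `CastRays()` is a placeholder for whatever fills `buffer`):

```csharp
// Fill the CPU-side buffer and upload it to the GPU in Update(),
// so Draw() only issues the draw call.
protected override void Update(GameTime gameTime)
{
    CastRays(buffer);               // placeholder: the raycasting loop that fills 'buffer'
    canvas.SetData<Color>(buffer);  // CPU -> GPU transfer happens before Draw()
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    spriteBatch.Draw(canvas, new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT), Color.White);
    spriteBatch.End();
    base.Draw(gameTime);
}
```

With SetData done in Update(), the texture is complete in GPU memory before the draw call samples it.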
I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual reality app, similar to the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything is working fine.
Now I want to implement something like lens correction (barrel distortion) and I'm looking for the fastest way to realize it. I've read many sites about barrel distortion and I think the fastest way is to use a shader, but I'm very new to SharpDX (with no knowledge of shaders) and I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube), not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
isInCapture = true;
try
{
// init
bool captureDone = false;
bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
// the capture needs some time
for (int i = 0; !captureDone; i++)
{
try
{
//capture
duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);
// skip the first frame; the initial acquire only serves as a wait
if (i > 0)
{
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
ImageLockMode.WriteOnly, bitmap.PixelFormat);
sourcePtr = mapSource.DataPointer;
destPtr = mapDest.Scan0;
// set x position offset to rect.x
int rowPitch = mapSource.RowPitch - offsetX;
// set pointer to y position
sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);
for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
{
// set pointer to x position
sourcePtr = IntPtr.Add(sourcePtr, offsetX);
// copy pixel to bmp
Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
// increment pointers to the next line
sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
destPtr = IntPtr.Add(destPtr, mapDest.Stride);
}
bitmap.UnlockBits(mapDest);
device.ImmediateContext.UnmapSubresource(screenTexture, 0);
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch//(Exception ex) // catch (SharpDXException e)
{
//if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
//{
// // throw e;
//}
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
}
}
catch
{
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
isInCapture = false;
return bitmap;
}
It would be really great to get a little starting help from someone willing to assist.
I've found some shaders on the net, but they are written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I also use those for DirectX (SharpDX)?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (the two are fairly similar, though). The idea is that you'll have to load your "screenshot" into a texture, as an input to your fragment shader (called a pixel shader in DirectX). Fragment shaders are deceptively easy to understand: just a piece of code (written in GLSL or HLSL) looking very much like a subset of C, with a few math functions added (mostly vector and matrix manipulation), executed for every single pixel to be rendered.
The code should be fairly simple: take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
vec2 uv; // centered, normalized screen coordinates (e.g. fragCoord / resolution - 0.5)
/// Barrel Distortion ///
float d=length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r*cos(phi)+.5,r*sin(phi)+.5);
Here's a shadertoy link if you wanna play with it and figure out how it works
I have no idea how HLSL handles texture filtering (which pixel you get when sampling with floating-point coordinates), but the default may well give you either blurring (bilinear) or an unpleasant pixelated look (nearest-neighbour). You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for the fisheye effect. Of course both are pretty much identical, the barrel distortion being only on one axis. I believe the fisheye effect is what you need though; it's what is commonly used for HMDs, if I'm not mistaken.
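If it helps, the same mapping can be prototyped on the CPU in C# first (far too slow for production, but handy for verifying the math before porting it to HLSL):

```csharp
using System;

static class Fisheye
{
    // Maps an output pixel's centered, normalized coordinates (x, y in roughly -0.5..0.5)
    // to the source-texture coordinates (0..1) to sample from, per the GLSL above.
    public static (float u, float v) Map(float x, float y)
    {
        float d = (float)Math.Sqrt(x * x + y * y);            // distance from screen center
        float z = (float)Math.Sqrt(Math.Max(0f, 1f - d * d));
        float r = (float)(Math.Atan2(d, z) / Math.PI);
        float phi = (float)Math.Atan2(y, x);
        return (r * (float)Math.Cos(phi) + 0.5f,              // back into 0..1 texture space
                r * (float)Math.Sin(phi) + 0.5f);
    }
}
```

The center pixel (0, 0) maps to (0.5, 0.5), the middle of the source image, which makes a quick sanity check when porting the formula.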
What is the right approach to plotting complex drawings with Direct2D (SharpDX)?
Currently I am using a WindowRenderTarget, connecting it to a Direct2D1.Factory and drawing to a RenderControl.
Factory2D = new SharpDX.Direct2D1.Factory(FactoryType.MultiThreaded);
FactoryWrite = new SharpDX.DirectWrite.Factory();
var properties = new HwndRenderTargetProperties();
properties.Hwnd = this.Handle;
properties.PixelSize = new Size2(this.ClientSize.Width, this.ClientSize.Height);
properties.PresentOptions = PresentOptions.RetainContents;
RenderTarget2D = new WindowRenderTarget(Factory2D, new RenderTargetProperties(new PixelFormat(Format.Unknown, AlphaMode.Premultiplied)), properties);
RenderTarget2D.AntialiasMode = AntialiasMode.PerPrimitive;
The drawing is done in the Paint Event of the form:
RenderTarget2D.BeginDraw();
RenderTarget2D.Clear(Color4.Black);
drawProgress(); // Doing Paintings like DrawLine, Multiple PathGeometrys, DrawEllipse and DrawText
RenderTarget2D.EndDraw();
In the MouseMove/MouseWheel event the drawing is recalculated (for scaling, or to work out which elements will be displayed). This process needs about 8-10 ms.
The next step is actually
this.Refresh();
Here, I guess, is the problem: this step needs up to 140 ms.
So scaling/moving the plot runs at about 7 fps.
The program also occupies more and more memory when refreshing the control.
Edit:
Painting of lines:
private void drawLines(Pen pen, PointF[] drawElements)
{
SolidColorBrush tempBrush = new SolidColorBrush(RenderTarget2D, SharpDX.Color.FromRgba(pen.Color.ToArgb()));
int countDrawing = (drawElements.Length / 2) + drawElements.Length % 2;
for (int i = 0; i < countDrawing; i++)
{
drawLine(new Vector2(drawElements[i].X, drawElements[i].Y), new Vector2(drawElements[i + 1].X, drawElements[i + 1].Y), brushWhite);
}
}
Painting geometries:
RenderTarget2D.DrawGeometry(graphicPathToPathGeometry(p), penToSolidColorBrush(pen));
private PathGeometry graphicPathToPathGeometry(GraphicsPath path)
{
geometry = new PathGeometry(Factory2D);
sink = geometry.Open();
if (path.PointCount > 0)
{
sink.BeginFigure(new Vector2(path.PathPoints[path.PointCount - 1].X, path.PathPoints[path.PointCount - 1].Y), FigureBegin.Hollow);
sink.AddLines(pointFToVector2(path.PathPoints));
sink.EndFigure(new FigureEnd());
sink.Close();
}
return geometry;
}
In MouseMove the drawing is recalculated by just taking the differences between the old and new Cursor.Position.X/Y. So the lines are recalculated really often :)
The main bottleneck is your graphicPathToPathGeometry() function. You are creating and "filling" a PathGeometry in a render loop. As I mentioned above, a core principle is that you have to create your resources at once and then just reuse them in your drawing routine(s).
About your memory leak: your code samples don't provide enough information, but most probably you are not freeing the resources that you are creating (i.e. PathGeometry, SolidColorBrush, and the ones we don't see).
The simplest advice is: use your render loop only for rendering/drawing, and reuse resources instead of recreating them.
Improving the performance of Direct2D apps
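A sketch of what that restructuring might look like (method names here are illustrative; the point is that the brush and geometry are created once, outside the paint handler, and disposed when no longer needed):

```csharp
// Created once, e.g. right after the render target, not per frame.
SolidColorBrush lineBrush;
PathGeometry cachedGeometry;

void CreateDeviceResources()
{
    lineBrush = new SolidColorBrush(RenderTarget2D, SharpDX.Color.White);
    cachedGeometry = BuildGeometryFromData();  // hypothetical: builds the PathGeometry once
}

// The paint handler only draws; it creates nothing.
void OnPaintSurface()
{
    RenderTarget2D.BeginDraw();
    RenderTarget2D.Clear(Color4.Black);
    RenderTarget2D.DrawGeometry(cachedGeometry, lineBrush);
    RenderTarget2D.EndDraw();
}

// Dispose what you created when the data changes or the app closes.
void ReleaseDeviceResources()
{
    if (cachedGeometry != null) cachedGeometry.Dispose();
    if (lineBrush != null) lineBrush.Dispose();
}
```

Rebuild the geometry only when the underlying data actually changes, not on every mouse move.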
One part of the problem is:
SolidColorBrush tempBrush = new SolidColorBrush(RenderTarget2D, SharpDX.Color.FromRgba(pen.Color.ToArgb()));
Creating objects of any kind inside the render loop produces a serious memory leak in the application. Drawing with existing resources is the way to go.
I suspect the performance issue is rooted in the same problem.
I profiled my application (a game) and noticed that this function is a (the!) bottleneck. It is called a lot by design: it draws the windows of my game's skyscrapers. The game scrolls horizontally, so new skyscrapers are generated all the time and their windows have to be drawn.
The method is simple: I load just one image of a window and then I use it like a "stencil" to draw every window, while I calculate its position on the skyscraper.
position_ is the starting position, the top-left corner where I want to begin drawing
n_horizontal_windows_ and n_vertical_windows_ are self-explanatory and are computed in the constructor
skipped_lights_ is a matrix of bool that says whether a particular light is on or off (off means don't draw the window)
delta_x_ is the padding, the distance between one window and the next
w_window_ is the width of a window (every window has the same width)
public override void Draw(SpriteBatch spriteBatch)
{
Vector2 tmp_pos = position_;
float default_pos_y = tmp_pos.Y;
for (int r = 0; r < n_horizontal_windows_; ++r)
{
for (int c = 0; c < n_vertical_windows_; ++c)
{
if (skipped_lights_[r, c])
{
spriteBatch.Draw(
window_texture_,
tmp_pos,
overlay_color_);
}
tmp_pos.Y += delta_y_;
}
tmp_pos.X += delta_x_ + w_window_;
tmp_pos.Y = default_pos_y;
}
}
As you can see the position is calculated inside the loop.
Just an example of the result (as you can see I create three layers of skyscrapers):
How can I optimize this function?
You could always render each building to a texture and cache it while it's on screen. That way you only draw the windows once per building; after it's cached, you draw the entire building in one call instead of building it piece by piece every frame. It should also prevent a lot of the overdraw you were getting each frame, at a slight memory cost.
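In XNA/MonoGame terms, that caching might look roughly like this (a sketch; `DrawWindows` stands in for the nested loop in the question, and the size/position fields are assumptions):

```csharp
// One-time setup per building: render its windows into a cached texture.
RenderTarget2D cached = new RenderTarget2D(GraphicsDevice, buildingWidth, buildingHeight);

GraphicsDevice.SetRenderTarget(cached);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
DrawWindows(spriteBatch);       // the existing per-window loop, run exactly once
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);

// Per frame: the whole building becomes a single draw call.
spriteBatch.Begin();
spriteBatch.Draw(cached, buildingPosition, Color.White);
spriteBatch.End();
```

Remember to Dispose() the render target when the building scrolls off screen, or the cached textures will accumulate.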
First time I've ever asked a question here, so correct me if I'm doing it wrong.
Picture of my chess set:
Every time I move a piece it lags for about one second. Every piece and tile has an Image, and there are exactly 96 Images. Every time I move a piece, everything is cleared to black and the graphics are redrawn.
In the early stages of the chess program I didn't have any Images and used plain colors instead, and with only a few pieces there was no noticeable lag; pieces moved instantly.
public void updateGraphics(PaintEventArgs e, Graphics g, Bitmap frame)
{
g = Graphics.FromImage(frame);
g.Clear(Color.Black);
colorMap(g);
g.Dispose();
e.Graphics.DrawImageUnscaled(frame, 0, 0);
}
The function colorMap(g) looks like this:
private void colorMap(Graphics g)
{
for (int y = 0; y < SomeInts.amount; y++)
{
for (int x = 0; x < SomeInts.amount; x++)
{
//Tiles
Bundle.tile[x, y].colorBody(g, x, y);
//Pieces
player1.colorAll(g);
player2.colorAll(g);
}
}
}
The colorAll function executes every piece's colorBody(g) function, which looks like this:
public void colorBody(Graphics g)
{
//base.colorBody() does the following: body = new Rectangle(x * SomeInts.size + SomeInts.size / 4, y * SomeInts.size + SomeInts.size / 4, size, size);
base.colorBody();
if (team == 1)
{
//If its a white queen
image = Image.FromFile("textures/piece/white/queen.png");
}
if (team == 2)
{
//If its a black queen
image = Image.FromFile("textures/piece/black/queen.png");
}
g.DrawImage(image, body);
}
and finally the function that moves the piece:
public void movePiece(MouseEventArgs e)
{
for (int y = 0; y < SomeInts.amount; y++)
{
for (int x = 0; x < SomeInts.amount; x++)
{
if (Bundle.tile[x, y].body.Contains(e.Location))
{
//Ignore this
for (int i = 0; i < queens.Count; i++)
{
Queen temp = queens.ElementAt<Queen>(i);
temp.move(x, y);
}
//Relevant
player1.move(x, y);
player2.move(x, y);
}
}
}
}
Thank you for reading all this! I could make a link to the whole program if my coding examples is not enough.
You're calling Image.FromFile in every refresh, for every image, effectively reloading every image file from disk every time.
Have you considered loading the images once and storing the resulting Images somewhere useful? (Say, an array; Image[2,6] would be adequate.)
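A sketch of that cache (the paths follow the ones in the question; the piece-index scheme is an assumption):

```csharp
// One Image per (team, piece kind), loaded once at startup.
static readonly string[] pieceNames = { "pawn", "rook", "knight", "bishop", "queen", "king" };
static Image[,] pieceImages = new Image[2, 6];

static void LoadPieceImages()
{
    string[] teams = { "white", "black" };
    for (int t = 0; t < teams.Length; t++)
        for (int p = 0; p < pieceNames.Length; p++)
            pieceImages[t, p] = Image.FromFile(
                "textures/piece/" + teams[t] + "/" + pieceNames[p] + ".png");
}

// colorBody() then becomes a cheap lookup instead of a disk read, e.g.:
// g.DrawImage(pieceImages[team - 1, pieceIndex], body);   // pieceIndex: hypothetical
```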
Why do you redraw the board each time? Can't you just leave the board where it is and display images with transparent backgrounds over it? That way you have one image as a background (the board), plus 64 smaller images placed over it in a grid, and you just change the image displayed on each move.
That way, you can let Windows handle the drawing...
Also, load the images of the pieces at the start of the application.
In addition to not calling Image.FromFile() inside updateGraphics() (which is definitely your biggest issue), you shouldn't be attempting to redraw the entire board on every call to updateGraphics(); most of the time, only a small portion of the board will be invalidated.
The PaintEventArgs contains a property, ClipRectangle, which specifies which portion of the board needs redrawing. See if you can't figure out which tiles intersect with that rectangle, and only redraw those tiles :)
Hint: Write a function Point ScreenToTileCoords(int x, int y) which takes a screen coordinate and returns which board tile is at that coordinate. Then the only tiles you need to redraw are
Point upperLeftTileToBeDrawn = ScreenToTileCoords(e.ClipRectangle.Left, e.ClipRectangle.Top);
Point lowerRightTileToBeDrawn = ScreenToTileCoords(e.ClipRectangle.Right - 1, e.ClipRectangle.Bottom - 1);
Also, make sure your control is double-buffered to avoid tearing. This is much simpler than @Steve B's link in the comments above suggests; assuming this is a UserControl, simply set
this.DoubleBuffered = true;
Well, what about this:
Do not clear the whole board but only those parts that need to be cleared.
Alternative:
Update to WPF - it moves drawing to the graphics card - and just move pieces around, in a smart way (i.e. have a control / object for every piece).
In my project, I'm using (uncompressed 16-bit grayscale) gigapixel images which come from a high resolution scanner for measurement purposes. Since these bitmaps can not be loaded in memory (mainly due to memory fragmentation) I'm using tiles (and tiled TIFF on disc). (see StackOverflow topic on this)
I need to implement panning/zooming in a way like Google Maps or DeepZoom. I have to apply image processing on the fly before presenting it on screen, so I can not use a precooked library which directly accesses an image file. For zooming I intend to keep a multi-resolution image in my file (pyramid storage). The most useful steps seem to be +200%, 50% and show all.
My code base is currently C# and .NET 3.5. For now I'm assuming Windows Forms, unless WPF gives me a great advantage in this area. I already have a method which can return any (processed) part of the underlying image.
Specific issues:
hints or references on how to implement this pan/zoom with on-demand generation of image parts
any code which could be used as a basis (preferably commercial or LGPL/BSD like licenses)
can DeepZoom be used for this? (i.e. is there a way that I can provide a function supplying a tile at the right resolution for the current zoom level?) I still need pixel-accurate addressing.
This CodeProject article: Generate...DeepZoom Image Collection might be a useful read since it talks about generating a DeepZoom image source.
This MSDN article has a section Dynamic Deep Zoom: Supplying Image Pixels at Run Time and links to this Mandelbrot Explorer which 'kinda' sounds similar to what you're trying to do (ie. he is generating specific parts of the mandelbrot set on-demand; you want to retrieve specific parts of your gigapixel image on-demand).
I think the answer to "can DeepZoom be used for this?" is probably "Yes", however as it is only available in Silverlight you will have to do some tricks with an embedded web browser control if you need a WinForms/WPF client app.
Sorry I can't provide more specific answers - hope those links help.
p.s. I'm not sure if Silverlight supports TIFF images - that might be an issue unless you convert to another format.
I decided to try something myself. I came up with straightforward GDI+ code which uses the tiles I've already got. I just pick out the tiles that are relevant for the current clipping region. It works like magic! Please find my code below.
(Form settings double buffering for the best results)
protected override void OnPaint(PaintEventArgs e)
{
base.OnPaint(e);
Graphics dc = e.Graphics;
dc.ScaleTransform(1.0F, 1.0F);
Size scrollOffset = new Size(AutoScrollPosition);
int start_x = Math.Min(matrix_x_size,
(e.ClipRectangle.Left - scrollOffset.Width) / 256);
int start_y = Math.Min(matrix_y_size,
(e.ClipRectangle.Top - scrollOffset.Height) / 256);
int end_x = Math.Min(matrix_x_size,
(e.ClipRectangle.Right - scrollOffset.Width + 255) / 256);
int end_y = Math.Min(matrix_y_size,
(e.ClipRectangle.Bottom - scrollOffset.Height + 255) / 256);
// start_*/end_* contain the first and last tile x/y which are
// on screen and need to be redrawn.
// now iterate through all tiles which need an update
for (int y = start_y; y < end_y; y++)
for (int x = start_x; x < end_x; x++)
{ // draw bitmap with gdi+ at calculated position.
dc.DrawImage(BmpMatrix[y, x],
new Point(x * 256 + scrollOffset.Width,
y * 256 + scrollOffset.Height));
}
}
To test it, I've created an 80x80 matrix of 256x256 tiles (420 MPixel). Of course I'll have to add some deferred loading in real life. I can leave tiles out (empty) if they are not yet loaded. In fact, I've asked my client to put 8 GB in his machine so I don't have to worry about performance too much. Once loaded, tiles can stay in memory.
public partial class Form1 : Form
{
bool dragging = false;
float Zoom = 1.0F;
Point lastMouse;
PointF viewPortCenter;
private readonly Brush solidYellowBrush = new SolidBrush(Color.Yellow);
private readonly Brush solidBlueBrush = new SolidBrush(Color.LightBlue);
const int matrix_x_size = 80;
const int matrix_y_size = 80;
private Bitmap[,] BmpMatrix = new Bitmap[matrix_x_size, matrix_y_size];
public Form1()
{
InitializeComponent();
Font font = new Font("Times New Roman", 10, FontStyle.Regular);
StringFormat strFormat = new StringFormat();
strFormat.Alignment = StringAlignment.Center;
strFormat.LineAlignment = StringAlignment.Center;
for (int y = 0; y < matrix_y_size; y++)
for (int x = 0; x < matrix_x_size; x++)
{
BmpMatrix[y, x] = new Bitmap(256, 256, PixelFormat.Format24bppRgb);
// BmpMatrix[y, x].Palette.Entries[0] = (x+y)%1==0?Color.Blue:Color.White;
using (Graphics g = Graphics.FromImage(BmpMatrix[y, x]))
{
g.FillRectangle(((x + y) % 2 == 0) ? solidBlueBrush : solidYellowBrush, new Rectangle(new Point(0, 0), new Size(256, 256)));
g.DrawString("hello world\n[" + x.ToString() + "," + y.ToString() + "]", new Font("Tahoma", 8), Brushes.Black,
new RectangleF(0, 0, 256, 256), strFormat);
g.DrawImage(BmpMatrix[y, x], Point.Empty);
}
}
BackColor = Color.White;
Size = new Size(300, 300);
Text = "Scroll Shapes Correct";
AutoScrollMinSize = new Size(256 * matrix_x_size, 256 * matrix_y_size);
}
It turned out this was the easy part. Getting async multithreaded I/O working in the background was a lot harder to achieve. Still, I've got it working in the way described here. The issues to resolve were more related to .NET/Forms multithreading than to this topic.
In pseudo code it works like this:
after OnPaint (and on Tick):
    check if tiles on display need to be retrieved from disk
        if so: post them to an async I/O queue
        if not: check if tiles close to the display area are already loaded
            if not: post them to the async I/O queue
    check if bitmaps have arrived from the I/O thread
        if so: update them on screen, and force a repaint if visible
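One way to realize that queue in C# (a sketch using .NET 2.0-era threading to match the .NET 3.5 constraint; `LoadTileFromDisk` is a hypothetical disk-read helper):

```csharp
// The paint handler posts tile coordinates; a worker thread loads them
// from disk and hands the finished bitmaps back to the UI thread.
Queue<Point> pending = new Queue<Point>();
object sync = new object();

void RequestTile(int x, int y)
{
    lock (sync)
    {
        pending.Enqueue(new Point(x, y));
        Monitor.Pulse(sync);   // wake the loader thread
    }
}

void LoaderThreadProc()
{
    while (true)
    {
        Point p;
        lock (sync)
        {
            while (pending.Count == 0) Monitor.Wait(sync);
            p = pending.Dequeue();
        }
        Bitmap bmp = LoadTileFromDisk(p.X, p.Y);   // hypothetical disk read
        BeginInvoke((MethodInvoker)delegate        // marshal back to the UI thread
        {
            BmpMatrix[p.Y, p.X] = bmp;
            Invalidate(new Rectangle(p.X * 256, p.Y * 256, 256, 256));
        });
    }
}
```

The lock/Monitor pair is the classic producer-consumer pattern; the UI thread never blocks on disk I/O, and finished tiles trigger a repaint of just their own 256x256 region.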
Result: I now have my own custom control which uses roughly 50 MB for very fast access to arbitrarily sized (tiled) TIFF files.
I guess you can address this issue by following the steps below:
Image generation:
segment your image into multiple sub-images (tiles) of a small resolution, for instance 500x500. These tiles are depth 0.
combine a group of depth-0 tiles (4x4 or 6x6), and resize the combination to generate a new 500x500 tile at depth 1.
continue with this approach until the entire image is covered by just a few tiles.
Image visualization:
Start from the highest depth.
When the user drags the image, load the tiles dynamically.
When the user zooms into a region of the image, decrease the depth, loading the tiles for that region at a higher resolution.
The final result is similar to Google Maps.
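The tile-combining step can be sketched like this (assuming 500x500 tiles and 2x2 grouping; `getTile` is a hypothetical accessor for already-generated tiles at a given depth):

```csharp
// Builds one depth-(d+1) tile by drawing a 2x2 block of depth-d tiles
// scaled down into the four quadrants of a new 500x500 tile.
Bitmap BuildParentTile(Func<int, int, int, Bitmap> getTile, int depth, int px, int py)
{
    const int T = 500;  // tile edge length, per the steps above
    Bitmap parent = new Bitmap(T, T);
    using (Graphics g = Graphics.FromImage(parent))
    {
        for (int dy = 0; dy < 2; dy++)
            for (int dx = 0; dx < 2; dx++)
            {
                // each child tile lands in one quadrant at half size
                using (Bitmap child = getTile(depth, px * 2 + dx, py * 2 + dy))
                    g.DrawImage(child, new Rectangle(dx * T / 2, dy * T / 2, T / 2, T / 2));
            }
    }
    return parent;
}
```

Applied repeatedly, each level quarters the tile count until the whole image fits in a handful of tiles, which is exactly the pyramid that the visualization steps walk back down.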