Win2D: Correct usage of CanvasVirtualControl - c#

I want to draw the Mandelbrot set taken from the Win2D Example Gallery and tweak it a little.
At first I had all the code that generates the Mandelbrot set inside the CreateResources method of a CanvasAnimatedControl, but due to performance issues I moved on to doing it with a shader (HLSL via PixelShaderEffect) and a CanvasVirtualControl:
public PixelShaderEffect _effectMandel;
CanvasVirtualImageSource _sdrc;

public async Task CreateResources(CanvasVirtualControl sender)
{
    _sdrc = new CanvasVirtualImageSource(sender, new Size(_width, _height));
    var arr = await FileHelper.ReadAllBytes("Shaders/Mandelbrot.bin");
    if (arr != null)
    {
        _effectMandel = new PixelShaderEffect(arr);

        using (CanvasDrawingSession drawingSession = sender.CreateDrawingSession(new Rect(0, 0, _width, _height)))
        {
            drawingSession.DrawImage(_effectMandel);
        }
    }
}
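(For reference: FileHelper.ReadAllBytes is just a small helper of my own that loads the compiled shader bytes from the app package. A typical UWP implementation of such a helper, assuming the .bin file is included in the project as Content, would look roughly like this:)

public static async Task<byte[]> ReadAllBytes(string path)
{
    // Load the file from the application package via an ms-appx URI.
    var file = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///" + path));
    var buffer = await Windows.Storage.FileIO.ReadBufferAsync(file);
    // ToArray() is the System.Runtime.InteropServices.WindowsRuntime extension for IBuffer.
    return buffer.ToArray();
}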
When I run the application, I get a System.Runtime.InteropServices.COMException right in the using section, and the 'App.g.i.cs' file opens up at the unhandled-exception breakpoint.
The shader code I use is this:
// Copyright (c) Microsoft Corporation. All rights reserved.
//
// Licensed under the MIT License. See LICENSE.txt in the project root for license information.

// This shader has no input textures.
// It generates a Mandelbrot fractal.

#define D2D_INPUT_COUNT 0
#define D2D_REQUIRES_SCENE_POSITION

#include "d2d1effecthelpers.hlsli"

float scale;
float2 translate;

static const float4 tapOffsetsX = float4(-0.25,  0.25, -0.25, 0.25);
static const float4 tapOffsetsY = float4(-0.25, -0.25,  0.25, 0.25);

static const int iterations = 100;

D2D_PS_ENTRY(main)
{
    float2 pos = D2DGetScenePosition().xy;

    // Improve visual quality by supersampling inside the pixel shader, evaluating four separate
    // versions of the fractal in parallel, each at a slightly different position offset.
    // The x, y, z, and w components of these float4s contain the four simultaneous computations.
    float4 c_r = (pos.x + tapOffsetsX) * scale + translate.x;
    float4 c_i = (pos.y + tapOffsetsY) * scale + translate.y;

    float4 value_r = 0;
    float4 value_i = 0;

    // Evaluate the Mandelbrot fractal.
    for (int i = 0; i < iterations; i++)
    {
        float4 new_r = value_r * value_r - value_i * value_i + c_r;
        float4 new_i = value_r * value_i * 2 + c_i;

        value_r = new_r;
        value_i = new_i;
    }

    // Adjust our four parallel results to range 0:1.
    float4 distanceSquared = value_r * value_r + value_i * value_i;
    float4 vectorResult = isfinite(distanceSquared) ? saturate(1 - distanceSquared) : 0;

    // Resolve the supersampling to produce a single scalar result.
    float result = dot(vectorResult, 0.25);

    if (result < 1.0 / 256)
        return 0;
    else
        return float4(result, result, result, 1);
}
If you know why this happens, please answer. Thanks!

I needed to set up a timer to regularly invalidate the canvas and get 60 fps.
I had another look at the Microsoft examples and finally worked it out using this code:
DispatcherTimer timer;

internal void Regions_Invalidated(CanvasVirtualControl sender, CanvasRegionsInvalidatedEventArgs args)
{
    // Configure the Mandelbrot effect to position and scale its output.
    float baseScale = 0.005f;
    float scale = (baseScale * 96 / sender.Dpi) / (helper._modifiers[1] / 1000f);
    var controlSize = baseScale * sender.Size.ToVector2() * scale; // (not used below)
    Vector2 translate = baseScale * sender.Size.ToVector2() * new Vector2(-0.5f, -0f);

    _effectMandel.Properties["scale"] = scale;
    _effectMandel.Properties["translate"] = (Microsoft.Graphics.Canvas.Numerics.Vector2)translate;

    // Draw the effect to whatever regions of the CanvasVirtualControl have been invalidated.
    foreach (var region in args.InvalidatedRegions)
    {
        using (var drawingSession = sender.CreateDrawingSession(region))
        {
            drawingSession.DrawImage(_effectMandel);
        }
    }

    // Start the timer that keeps invalidating the canvas (create it only once).
    if (this.timer == null)
    {
        int fps = 60;
        this.timer = new DispatcherTimer();
        this.timer.Interval = new TimeSpan(0, 0, 0, 0, 1000 / fps); // ~16 ms per frame
        this.timer.Tick += timer_Tick;
        this.timer.Start();
    }
}

private void timer_Tick(object sender, object e)
{
    // Invalidate the whole control so Regions_Invalidated runs again for the next frame.
    _canvas.Invalidate();
}
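For completeness, the two handlers are hooked up to the CanvasVirtualControl events roughly like this (a sketch; adjust the names to your own XAML):

// XAML: <canvas:CanvasVirtualControl x:Name="_canvas"
//           CreateResources="Canvas_CreateResources"
//           RegionsInvalidated="Regions_Invalidated" />
private void Canvas_CreateResources(CanvasVirtualControl sender, CanvasCreateResourcesEventArgs args)
{
    // Keep Win2D waiting until the async shader loading has finished.
    args.TrackAsyncAction(CreateResources(sender).AsAsyncAction());
}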
Hope this is helpful to someone.

Related

OpenGL Shadow mapping. Performing calculations on the GPU?

I'm using a simple attenuation algorithm to darken walls based on their distance from light sources.
The end goal is to develop a light-mapping system in which the brightness for each wall is calculated in a pre-pass (including shadowing from other walls), and then that light-map image is blended with the wall texture.
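(The blend step itself is conceptually just a per-pixel multiply of the wall texture by the light-map; a minimal CPU-side sketch, assuming both bitmaps have the same size:)

Bitmap Blend(Bitmap wallTexture, Bitmap lightMap)
{
    var result = new Bitmap(wallTexture.Width, wallTexture.Height);
    for (int x = 0; x < wallTexture.Width; x++)
    {
        for (int y = 0; y < wallTexture.Height; y++)
        {
            Color t = wallTexture.GetPixel(x, y);
            Color l = lightMap.GetPixel(x, y);
            // Modulate each channel of the texture by the normalized light-map value.
            result.SetPixel(x, y, Color.FromArgb(t.R * l.R / 255, t.G * l.G / 255, t.B * l.B / 255));
        }
    }
    return result;
}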
Besides shadowing, I have the light-maps working, and the results replicate the shader code exactly. The problem is that it is slow, and adding raycast shadow checking is only going to make it worse.
My question is this: how can I perform these calculations on the GPU? Is a third-party library/module required, or can it be done natively through OpenGL (OpenTK in my case)?
Alternatively, I'd be happy to switch to deferred rendering/lighting with cube shadow mapping, but I have yet to come across any information I can get my head around.
C# light-map code (run once for each wall):
public void createLightMap()
{
    // Determine light map dimensions
    int LightMapSize = 300;
    int w = (int)(this.Width * LightMapSize);
    int h = (int)(this.Height * LightMapSize);

    // Create bitmap
    Bitmap bitmap = new Bitmap(w, h);

    // Fragment testing
    Vector3 fragmentPosition = new Vector3(this.x2, this.Height, this.z2);
    float xIncrement = (1f / LightMapSize) * ((x2 - x) / this.Width);
    float zIncrement = (1f / LightMapSize) * ((z2 - z) / this.Width);
    float yIncrement = (1f / LightMapSize);

    // Calculate light value for each pixel
    for (int x = 0; x < w; x++)
    {
        for (int y = 0; y < h; y++)
        {
            // Update fragment position
            fragmentPosition.X = this.x2 - xIncrement - (xIncrement * x);
            fragmentPosition.Z = this.z2 - (zIncrement * x);
            fragmentPosition.Y = this.Height - (yIncrement * y);

            Vector3 totalDiffuse = Vector3.Zero;

            // Iterate through the lights
            for (int n = 0; n < 2; n++)
            {
                Light light = Game.lights[n];

                Vector3 LightPosition = new Vector3(light.Position);
                Vector3 Attenuation = new Vector3(light.Attenuation);
                Vector3 Colour = new Vector3(light.Colour);

                Vector3 toLightVector = LightPosition - fragmentPosition;

                // Skip this light if the wall is facing away from it
                if (Vector3.Dot(this.normalVector, toLightVector.Normalized()) < 0)
                    continue;

                // Calculate vector length (aka distance from the light source)
                float distance = (float)Math.Sqrt(toLightVector.X * toLightVector.X + toLightVector.Y * toLightVector.Y + toLightVector.Z * toLightVector.Z);

                // Attenuation
                float attFactor = Attenuation.X + (Attenuation.Y * distance) + (Attenuation.Z * distance * distance);
                Vector3 diffuse = Colour / attFactor;
                totalDiffuse += diffuse;
            }

            // Convert to 0-255 colour channels
            var r = (int)(totalDiffuse.X * 256);
            var g = (int)(totalDiffuse.Y * 256);
            var b = (int)(totalDiffuse.Z * 256);
            r = Math.Min(r, 255);
            g = Math.Min(g, 255);
            b = Math.Min(b, 255);

            // Set pixel
            bitmap.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }

    this.LightMapTextureID = Texture.loadImage(bitmap);
}
Fragment shader (an alternative to the above light-mapping, creating the same effect):
#version 330
precision highp float;

in vec2 frag_texcoord;
in vec3 toLightVector[8];

uniform sampler2D MyTexture0;
uniform vec3 LightColour[8];
uniform vec3 LightAttenuation[8];
uniform float NumberOfLights;

out vec4 finalColor;

void main(void)
{
    vec3 totalDiffuse = vec3(0.0);

    for (int i = 0; i < NumberOfLights; i++)
    {
        float distance = length(toLightVector[i]);
        float attFactor = LightAttenuation[i].x + (LightAttenuation[i].y * distance) + (LightAttenuation[i].z * distance * distance);
        vec3 diffuse = LightColour[i] / attFactor;
        totalDiffuse += diffuse;
    }

    finalColor = vec4(totalDiffuse, 1.0) * texture(MyTexture0, frag_texcoord);
}

Point-Zoom on Mandelbrot Set in C# - It works, except when the mouse has moved

I'm able to point-zoom on the Mandelbrot set, as long as the mouse doesn't move after zooming has begun. I've tried calculating a normalized delta (new coordinate - old coordinate) * (old zoom), but what happens is the image appears to jump around to a new location. I've seen this issue before. I'm struggling more here because I have to somehow convert this mouse position delta back to the -2..2 coordinate space of the Mandelbrot set.
Here's my code. What's important is the GetZoomPoint method, and then the lines of code that define x0 and y0. Also, I use the Range class to scale values from one range to another. I WAS using deltaTrans (that's the thing I was talking about earlier where I normalize the mouse delta with the old scale).
using OpenTK.Graphics.OpenGL;
using SpriteSheetMaker;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Fractal.Fractal
{
    public class Mandelbrot : BaseTexture
    {
        private static Transform GlobalTransform = SpriteSheetMaker.Global.Transform;
        private static Vector3 GlobalScale = GlobalTransform.Scale;
        private static Vector3 GlobalTrans = GlobalTransform.Translation;
        private static Vector3 LastWindowPoint = null;
        private static Vector3 ZoomFactor = Vector3.ONE * 1.2f;
        private static Vector3 Displacement = Vector3.ZERO;
        private static int WindowSize = 100;

        public static Vector3 GetZoomPoint()
        {
            var zP = OpenGLHelpers.LastZoomPoint.Clone();
            if (LastWindowPoint == null)
            {
                LastWindowPoint = zP.Clone();
            }
            var delta = zP - LastWindowPoint;
            var oldZoom = GlobalScale / ZoomFactor;
            var deltaTrans = delta.XY * oldZoom.XY;
            var factor = ZoomFactor.Clone();

            Range xR = new Range(0, WindowSize);
            Range yR = new Range(0, WindowSize);
            Range complexRange = new Range(-2, 2);

            // Calculate displacement of zooming position.
            var dx = (zP.X - Displacement.X) * (factor.X - 1f);
            var dy = (zP.Y - Displacement.Y) * (factor.Y - 1f);

            // Compensate for displacement.
            Displacement.X -= dx;
            Displacement.Y -= dy;

            zP -= Displacement;
            var x = complexRange.ScaleValue(zP.X, xR);
            var y = complexRange.ScaleValue(zP.Y, yR);
            var rtn = new Vector3(x, y);

            LastWindowPoint = zP.Clone();
            return rtn;
        }

        public static Mandelbrot Generate()
        {
            var size = new Size(WindowSize, WindowSize);
            var radius = new Size(size.Width / 2, size.Height / 2);
            Bitmap bmp = new Bitmap(size.Width, size.Height);
            LockBitmap.LockBitmapUnsafe lbm = new LockBitmap.LockBitmapUnsafe(bmp);
            lbm.LockBits();

            var pt = Mandelbrot.GetZoomPoint();
            Parallel.For(0, size.Width, i =>
            {
                // float x0 = complexRangeX.ScaleValue(i, xRange);
                float x0 = ((i - radius.Width) / GlobalScale.X) + pt.X;
                Parallel.For(0, size.Height, j =>
                {
                    // float y0 = complexRangeY.ScaleValue(j, yRange);
                    float y0 = ((j - radius.Height) / GlobalScale.Y) + pt.Y;
                    float value = 0f;
                    float x = 0.0f;
                    float y = 0.0f;
                    int iteration = 0;
                    int max_iteration = 100;
                    while (x * x + y * y <= 4.0 && iteration < max_iteration)
                    {
                        float xtemp = x * x - y * y + x0;
                        y = 2.0f * x * y + y0;
                        x = xtemp;
                        iteration += 1;
                        if (iteration == max_iteration)
                        {
                            value = 255;
                            break;
                        }
                        else
                        {
                            value = iteration * 50f % 255f;
                        }
                    }
                    int v = (int)value;
                    lbm.SetPixel(i, j, new ColorLibrary.HSL(v / 255f, 1.0, 0.5).ToDotNetColor());
                });
            });
            lbm.UnlockBits();

            var tex = new BaseTextureImage(bmp);
            var rtn = new Mandelbrot(tex);
            return rtn;
        }

        public override void Draw()
        {
            base._draw();
        }

        private Mandelbrot(BaseTextureImage graphic)
        {
            var topLeft = new Vector3(0, 1);
            var bottomLeft = new Vector3(0, 0);
            var bottomRight = new Vector3(1, 0);
            var topRight = new Vector3(1, 1);
            this.Vertices = new List<Vector3>()
            {
                topLeft, bottomLeft, bottomRight, topRight
            };
            this.Size.X = WindowSize;
            this.Size.Y = WindowSize;
            this.Texture2D = graphic;
        }
    }
}
I refactored my code and also figured out a solution to this problem: two big wins in one. I found a solution on CodeProject written in C# which I was readily able to adapt to my project. I'm not sure why I didn't realize this when I posted the question, but what I needed to solve this issue was to think in terms of a zoom 'window' rather than a 'point zoom'. Even when zooming directly into a point, that point is just the center of some window.
Here is the method I have; it expects start and end mouse-down coordinates (in screen space) and adjusts the Mandelbrot set window accordingly.
public void ApplyZoom(double x0, double y0, double x1, double y1)
{
    if (x1 == x0 && y0 == y1)
    {
        // This was just a click, no movement occurred
        return;
    }

    /*
     * XMin, YMin and XMax, YMax are the current extent of the set
     * mx0,my0 and mx1,my1 are the part we selected
     * do the math to draw the selected rectangle
     * */
    double scaleX, scaleY;
    scaleX = (XMax - XMin) / (float)BitmapSize;
    scaleY = (YMax - YMin) / (float)BitmapSize;
    XMax = (float)x1 * scaleX + XMin;
    YMax = (float)y1 * scaleY + YMin;
    XMin = (float)x0 * scaleX + XMin;
    YMin = (float)y0 * scaleY + YMin;

    this.Refresh(); // force the Mandelbrot control to redraw
}
Basically, what's happening is that we calculate the ratio between the Mandelbrot window size and the screen size we are drawing to. Then, using that scale, we convert our mouse-down coordinates to Mandelbrot set coordinates (x1 * scaleX, etc.) and use them to adjust the current Min and Max coordinates, with the Min values as the pivot point.
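As a concrete example with made-up numbers: with BitmapSize = 500 and a current extent of XMin = -2, XMax = 2, we get scaleX = (2 - (-2)) / 500 = 0.008 set-units per pixel. Dragging a selection from x0 = 100 to x1 = 300 then gives XMax = 300 * 0.008 + (-2) = 0.4 and XMin = 100 * 0.008 + (-2) = -1.2, so the new window spans -1.2 to 0.4 on the real axis.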
Here's the link to the CodeProject I used as a reference: CodeProject link

Mandelbrot Fractal not working

I tried making this Mandelbrot fractal generator, but when I run it, I get an output that looks like a circle.
I'm not sure exactly why this happens. I think something may be wrong with my coloring, but even if so, the shape is also incorrect.
public static Bitmap Generate(
    int width,
    int height,
    double realMin,
    double realMax,
    double imaginaryMin,
    double imaginaryMax,
    int maxIterations,
    int bound)
{
    var bitmap = new FastBitmap(width, height);
    var planeWidth = Math.Abs(realMin) + Math.Abs(realMax);            // Total width of the plane.
    var planeHeight = Math.Abs(imaginaryMin) + Math.Abs(imaginaryMax); // Total height of the plane.
    var realStep = planeWidth / width;        // Amount to step by on the real axis per pixel.
    var imaginaryStep = planeHeight / height; // Amount to step by on the imaginary axis per pixel.
    var realScaling = width / planeWidth;
    var imaginaryScaling = height / planeHeight;
    var boundSquared = bound ^ 2;

    for (var real = realMin; real <= realMax; real += realStep) // Loop through the real axis.
    {
        for (var imaginary = imaginaryMin; imaginary <= imaginaryMax; imaginary += imaginaryStep) // Loop through the imaginary axis.
        {
            var z = Complex.Zero;
            var c = new Complex(real, imaginary);
            var iterations = 0;
            for (; iterations < maxIterations; iterations++)
            {
                z = z * z + c;
                if (z.Real * z.Real + z.Imaginary * z.Imaginary > boundSquared)
                {
                    break;
                }
            }

            if (iterations == maxIterations)
            {
                bitmap.SetPixel(
                    (int)((real + Math.Abs(realMin)) * realScaling),
                    (int)((imaginary + Math.Abs(imaginaryMin)) * imaginaryScaling),
                    Color.Black);
            }
            else
            {
                var nsmooth = iterations + 1 - Math.Log(Math.Log(Complex.Abs(z))) / Math.Log(2);
                var color = MathHelper.HsvToRgb(0.95f + 10 * nsmooth, 0.6, 1.0);
                bitmap.SetPixel(
                    (int)((real + Math.Abs(realMin)) * realScaling),
                    (int)((imaginary + Math.Abs(imaginaryMin)) * imaginaryScaling),
                    color);
            }
        }
    }

    return bitmap.Bitmap;
}
Here's one error:
var boundSquared = bound ^ 2;
This should be:
var boundSquared = bound * bound;
In C#, the ^ operator is bitwise XOR, not exponentiation.
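A quick way to see the difference:
int bound = 2;
Console.WriteLine(bound ^ 2);     // prints 0 (bitwise XOR: binary 10 ^ 10 = 00)
Console.WriteLine(bound * bound); // prints 4 (what was intended)
// Math.Pow(bound, 2) also works, but returns a double.
With bound = 2 the escape threshold becomes 0, so almost every point "escapes" immediately, which is why the output shape is wrong.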

Convert GLSL to C#

Hello, can someone help me with converting this GLSL to C#? I'm new to GLSL and I really need a lot of help. I'd gladly appreciate any help! :)
#version 120
uniform sampler2D tex;

void main()
{
    vec4 pixcol = texture2D(tex, gl_TexCoord[0].xy);
    vec4 colors[3];
    colors[0] = vec4(0., 0., 1., 1.);
    colors[1] = vec4(1., 1., 0., 1.);
    colors[2] = vec4(1., 0., 0., 1.);
    float lum = (pixcol.r + pixcol.g + pixcol.b) / 3.;
    int ix = (lum < 0.5) ? 0 : 1;
    vec4 thermal = mix(colors[ix], colors[ix + 1], (lum - float(ix) * 0.5) / 0.5);
    gl_FragColor = thermal;
}
You don't need to convert GLSL to C# to use it, as it is consumed by the OpenGL API directly. There are several OpenGL wrappers for C#; I'm not sure whether all of them support shaders, but OpenTK does for sure. An example:
using (StreamReader sr = new StreamReader("vertex_shader.glsl"))
{
    GL.ShaderSource(m_shader_handle, sr.ReadToEnd());
}
You can load the shader either from a file or directly from a string:
string shader = "void main() { /* your shader code */ }";
GL.ShaderSource(m_shader_handle, shader);
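Loading the source is only half of the work; the shader still has to be compiled and linked into a program before it can be used. A rough OpenTK sketch (handle and uniform names are placeholders, error checking omitted):

int shaderHandle = GL.CreateShader(ShaderType.FragmentShader);
using (StreamReader sr = new StreamReader("fragment_shader.glsl"))
{
    GL.ShaderSource(shaderHandle, sr.ReadToEnd());
}
GL.CompileShader(shaderHandle);

int program = GL.CreateProgram();
GL.AttachShader(program, shaderHandle);
GL.LinkProgram(program);

GL.UseProgram(program);                                     // activate before drawing
int texLocation = GL.GetUniformLocation(program, "tex");
GL.Uniform1(texLocation, 0);                                // sampler "tex" reads from texture unit 0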
The C# code that would be analogous to your reference code is roughly the following (pseudo-code):
// Declares the texture/picture, i.e. uniform sampler2D tex;
// ("input.png" is just a placeholder for whatever image you want to process.)
Bitmap x = new Bitmap("input.png");

// px/py stand in for gl_TexCoord[0].xy, i.e. the pixel being processed.
float[] ShaderMain(int px, int py)
{
    // Look up the color of the pixel at the given coordinates; as in GLSL, the color is
    // normalized to 0..1, i.e. vec4 pixcol = texture2D(tex, gl_TexCoord[0].xy);
    Color pixcolUnnormalized = x.GetPixel(px, py);
    float[] pixcol = new float[]
    {
        pixcolUnnormalized.R / 255.0f,
        pixcolUnnormalized.G / 255.0f,
        pixcolUnnormalized.B / 255.0f,
        pixcolUnnormalized.A / 255.0f
    };

    // Set up the coefficients, i.e. vec4 colors[3];
    // colors[0] = vec4(0.,0.,1.,1.); colors[1] = vec4(1.,1.,0.,1.); colors[2] = vec4(1.,0.,0.,1.);
    float[] color1 = new float[] { 0.0f, 0.0f, 1.0f, 1.0f };
    float[] color2 = new float[] { 1.0f, 1.0f, 0.0f, 1.0f };
    float[] color3 = new float[] { 1.0f, 0.0f, 0.0f, 1.0f };
    float[][] colors = new float[][] { color1, color2, color3 };

    // Obtain a luminance value from the pixel.
    float lum = (pixcol[0] + pixcol[1] + pixcol[2]) / 3.0f;
    int ix = (lum < 0.5f) ? 0 : 1;

    // Interpolate the color values, i.e. vec4 thermal = mix(colors[ix], colors[ix+1], (lum - float(ix)*0.5)/0.5);
    float t = (lum - ix * 0.5f) / 0.5f;
    float[] thermal = new float[4];
    for (int i = 0; i < 4; i++)
    {
        thermal[i] = colors[ix][i] * (1 - t) + colors[ix + 1][i] * t;
    }

    // Return the value.
    return thermal;
}
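To mimic the shader on the CPU for a whole image (slow, but it shows what the GPU does per fragment), you could call the method above for every pixel; a minimal sketch:

Bitmap output = new Bitmap(x.Width, x.Height);
for (int py = 0; py < x.Height; py++)
{
    for (int px = 0; px < x.Width; px++)
    {
        float[] thermal = ShaderMain(px, py);
        output.SetPixel(px, py, Color.FromArgb(
            (int)(thermal[3] * 255),   // alpha
            (int)(thermal[0] * 255),   // red
            (int)(thermal[1] * 255),   // green
            (int)(thermal[2] * 255))); // blue
    }
}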

Box2D & XNA Rendering the data

I have been reading that, with Box2D, your physics information should not be your rendering information,
so you can't do things like:
spriteBatch.Draw(mazeBox, mazeBody.Position / 0.01f, Color.White)
Instead, you should create transforms of the physics info and use those for rendering.
So what does that mean exactly? I have been trying to find info on how to use transforms for rendering, but I am drawing blanks.
That means that the physics can be rendered however you wish: bigger, smaller, rotated, translated and so on. You just need to find out at which proportions your renderer (in our case XNA) will draw your physics bodies. Just do the following: draw a line at the ground position and a 1x1 box at the ball/box position using a "hello box2d" application (this doesn't exist as such, but you can simply create a new Box2D application that does nothing but simulate a ball/box falling onto the floor. And do not forget about stepping your physics!).
If you're interested, here's my SFML application with Box2D and some character controller basics:
#include <stdio.h>
#include <Box2D/Box2D.h>
#include <SFML/Window.hpp>
#include <SFML/Graphics.hpp>
#include "Animation.h"

#pragma comment(lib, "Box2D.lib")
#pragma comment(lib, "sfml-system.lib")
#pragma comment(lib, "sfml-window-s.lib")
#pragma comment(lib, "sfml-graphics.lib")

#define M_PI 3.14f
#define PIXELS_PER_METER 64.f
#define METERS_PER_PIXEL (1.f / PIXELS_PER_METER)
#define PPM PIXELS_PER_METER
#define MPP METERS_PER_PIXEL
#define x_cor 2.f * METERS_PER_PIXEL
#define y_cor METERS_PER_PIXEL

// Thanks to bobasaurus =)
class DebugDraw : public b2DebugDraw
{
public:
    DebugDraw(sf::RenderWindow *renderWindow)
    {
        window = renderWindow;
    }

    void DrawPolygon(const b2Vec2 *vertices, int32 vertexCount, const b2Color &color)
    {
        sf::Shape polygon;
        for (int32 i = 0; i < vertexCount; i++)
        {
            b2Vec2 vertex = vertices[i];
            polygon.AddPoint(vertex.x * PIXELS_PER_METER, window->GetHeight() - (vertex.y * PIXELS_PER_METER), sf::Color(0, 0, 0, 0), B2SFColor(color));
        }
        window->Draw(polygon);
    }

    void DrawSolidPolygon(const b2Vec2 *vertices, int32 vertexCount, const b2Color &color)
    {
        sf::Shape polygon;
        for (int32 i = 0; i < vertexCount; i++)
        {
            b2Vec2 vertex = vertices[i];
            polygon.AddPoint(vertex.x * PIXELS_PER_METER, window->GetHeight() - (vertex.y * PIXELS_PER_METER), B2SFColor(color)); // need transparent outline?
        }
        window->Draw(polygon);
    }

    void DrawCircle(const b2Vec2& center, float32 radius, const b2Color& color)
    {
        sf::Shape circle = sf::Shape::Circle(center.x * PPM, window->GetHeight() - (center.y * PPM), radius * PPM, sf::Color(0, 0, 0, 0), 1.0f, B2SFColor(color));
        window->Draw(circle);
    }

    void DrawSolidCircle(const b2Vec2& center, float32 radius, const b2Vec2& axis, const b2Color& color)
    {
        sf::Shape circle = sf::Shape::Circle(center.x * PPM, window->GetHeight() - (center.y * PPM), radius * PPM, B2SFColor(color));
        window->Draw(circle);
    }

    void DrawSegment(const b2Vec2& p1, const b2Vec2& p2, const b2Color& color) {}

    void DrawTransform(const b2Transform& xf) {}

private:
    sf::RenderWindow *window;

    sf::Color B2SFColor(const b2Color &color)
    {
        sf::Color result((sf::Uint8) (color.r * 255), (sf::Uint8) (color.g * 255), (sf::Uint8) (color.b * 255));
        return result;
    }
};
int main()
{
    sf::RenderWindow *App = new sf::RenderWindow(sf::VideoMode(800, 600, 32), "SFML + Box2D Test");
    App->UseVerticalSync(true);

    // ================= Init Physics ====================
    b2World *world = new b2World(b2Vec2(0.0f, -10.0f), true);

    DebugDraw *debugDraw = new DebugDraw(App);
    debugDraw->SetFlags(b2DebugDraw::e_shapeBit);
    world->SetDebugDraw(debugDraw);

    // Define the ground body.
    b2BodyDef groundBodyDef;
    groundBodyDef.position.Set(0.0f * x_cor, 0.0f * y_cor);
    b2Body* groundBody = world->CreateBody(&groundBodyDef);
    b2PolygonShape groundBox;
    groundBox.SetAsBox(500.f * x_cor, 10.0f * y_cor);
    groundBody->CreateFixture(&groundBox, 0.0f);
    // ====================================================

    // ====================================
    /*b2PolygonShape shape;
    shape.SetAsBox(5.f * x_cor, 5.f * x_cor);

    b2FixtureDef fd;
    fd.shape = &shape;
    fd.density = 1.0f;
    fd.friction = 0.3f;
    fd.restitution = 0.7f;

    b2BodyDef bd;
    bd.type = b2_dynamicBody;
    bd.angle = M_PI / 4.f;
    bd.position.Set(10.f * x_cor, 80.f * x_cor);
    b2Body* body = world->CreateBody(&bd);
    body->CreateFixture(&fd);*/

    b2BodyDef bd;
    bd.position.Set(3.0f, 5.0f);
    bd.type = b2_dynamicBody;
    bd.fixedRotation = true;
    bd.allowSleep = false;
    b2Body* body = world->CreateBody(&bd);

    b2PolygonShape shape;
    shape.SetAsBox(0.25f, 0.25f);

    b2FixtureDef fd;
    fd.shape = &shape;
    fd.friction = 20.0f;
    fd.density = 20.0f;
    body->CreateFixture(&fd);
    // ====================================

    sf::Image Image;
    if (!Image.LoadFromFile("moo.jpg"))
        return 1;
    //Image.Copy(Image, 0, 0, sf::IntRect(0, 0, 67 * 5, 68));
    sf::Animation Sprite(Image, 45, 50, 5);
    Sprite.SetLoopSpeed(20);
    Sprite.Play(0, 4);
    Sprite.SetBlendMode(sf::Blend::Alpha);
    Sprite.SetCenter(Sprite.GetSize().x / 2, Sprite.GetSize().y / 2);

    while (App->IsOpened())
    {
        sf::Event Event;
        static std::vector<sf::Vector2f> points;
        static sf::Color cl;
        bool nonConvex = false;

        while (App->GetEvent(Event))
        {
            if (Event.Type == sf::Event::Closed)
                App->Close();

            if (Event.Type == sf::Event::KeyPressed)
            {
                if (Event.Key.Code == sf::Key::Escape)
                    App->Close();

                if (Event.Key.Code == sf::Key::W && abs(body->GetLinearVelocity().y) < 1.0f)
                    body->ApplyLinearImpulse(b2Vec2(0, 5 * body->GetMass()), body->GetWorldCenter());
            }
        }

        {
            if (App->GetInput().IsKeyDown(sf::Key::A) && abs(body->GetLinearVelocity().x) < 5.0f)
            {
                body->ApplyForce(b2Vec2(-30 * body->GetMass(), 0), body->GetPosition());
            }

            if (App->GetInput().IsKeyDown(sf::Key::D) && abs(body->GetLinearVelocity().x) < 5.0f)
            {
                body->ApplyForce(b2Vec2(30 * body->GetMass(), 0), body->GetPosition());
            }

            if (App->GetInput().IsKeyDown(sf::Key::D))
            {
                //if (Sprite.IsStopped())
                {
                    Sprite.FlipX(false);
                    Sprite.Play(0, 5);
                }
            }
            else if (App->GetInput().IsKeyDown(sf::Key::A))
            {
                //if (Sprite.IsStopped())
                {
                    Sprite.FlipX(true);
                    Sprite.Play(0, 5);
                }
            }
            else
            //if (!Sprite.IsStopped())
            {
                Sprite.Play(12, 22);
            }
        }

        world->Step(App->GetFrameTime(), 1024, 1024);
        world->ClearForces();

        App->Clear();

        // And draw all the stuff
        world->DrawDebugData();

        Sprite.Update();
        Sprite.SetPosition(body->GetPosition().x * PPM, App->GetHeight() - (body->GetPosition().y * PPM));
        App->Draw(Sprite);

        App->Display();
    }

    return 0;
}
Given a Body that represents a rectangle, the following C# code will render a texture according to the Body's physics:
spritebatch.Begin();
spritebatch.Draw(texture,
    Body.Position * Scale,
    textureSourceRectangle,
    Color.White,
    Body.Rotation,
    new Vector2(textureWidth / 2f, textureHeight / 2f),
    1f,
    SpriteEffects.None,
    0);
spritebatch.End();
Scale is defined for me as 100.0f, meaning that a Body with Height set to 0.1f is equal to 0.1f * 100.0f = 10 pixels.
The same goes for Body.Position: (0.1f, 0.1f) in Box2D is equal to (10, 10) in screen coordinates.
It's also important to set the Origin to the center of the rectangle when drawing. This way the rotation happens around the center of your texture.
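In other words, converting between the two spaces is just a multiplication or division by Scale; a small helper (a sketch, assuming the Scale = 100f mentioned above) makes that explicit:

const float Scale = 100f; // pixels per Box2D meter

// Physics (meters) -> screen (pixels), e.g. for drawing.
Vector2 ToScreen(Vector2 simPosition)
{
    return simPosition * Scale;
}

// Screen (pixels) -> physics (meters), e.g. for mouse picking.
Vector2 ToSim(Vector2 screenPosition)
{
    return screenPosition / Scale;
}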
