I'm using a simple attenuation algorithm to darken walls based on their distance from light sources.
The end goal is to develop a light-mapping system in which the brightness for each wall is calculated in a pre-pass (including shadowing from other walls), and then that light-map image is blended with the wall texture.
(Image: wall texture + light map = blended result)
Aside from shadowing, I have the light maps working, and the result replicates the shader code exactly. The problem is that it is slow, and adding raycast shadow checks is only going to make it slower.
My question is this: how can I perform these calculations on the GPU? Is a third-party library/module required, or can it be done natively through OpenGL (OpenTK in my case)?
Alternatively, I'd be happy to switch to deferred rendering/lighting with cube shadow mapping, but I have yet to come across any information I can get my head around.
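For context, the usual native-OpenGL route is render-to-texture: draw each wall's light map into an off-screen framebuffer with a fragment shader like the one further down, so the per-pixel loop below runs on the GPU. A minimal OpenTK sketch of the framebuffer setup, offered as an illustration rather than drop-in code (w and h are the light-map dimensions from createLightMap):

// Sketch: render-to-texture setup in OpenTK. The lighting fragment shader
// writes into lightMapTex instead of the screen.
int fbo = GL.GenFramebuffer();
int lightMapTex = GL.GenTexture();

GL.BindTexture(TextureTarget.Texture2D, lightMapTex);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,
              w, h, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0,
                        TextureTarget.Texture2D, lightMapTex, 0);

if (GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer) != FramebufferErrorCode.FramebufferComplete)
    throw new Exception("Framebuffer is not complete");

// Bind the lighting shader, draw a quad covering the wall's UV space,
// then return to the default framebuffer.
GL.Viewport(0, 0, w, h);
// ... draw call here ...
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);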
C# light map (run once for each wall):
public void createLightMap()
{
    // Determine light map dimensions
    int LightMapSize = 300;
    int w = (int)(this.Width * LightMapSize);
    int h = (int)(this.Height * LightMapSize);

    // Create bitmap
    Bitmap bitmap = new Bitmap(w, h);

    // Fragment testing
    Vector3 fragmentPosition = new Vector3(this.x2, this.Height, this.z2);
    float xIncrement = (1f / LightMapSize) * ((x2 - x) / this.Width);
    float zIncrement = (1f / LightMapSize) * ((z2 - z) / this.Width);
    float yIncrement = (1f / LightMapSize);

    // Calculate the light value for each pixel
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++)
        {
            // Update fragment position
            fragmentPosition.X = this.x2 - xIncrement - (xIncrement * x);
            fragmentPosition.Z = this.z2 - (zIncrement * x);
            fragmentPosition.Y = this.Height - (yIncrement * y);

            Vector3 totalDiffuse = Vector3.Zero;

            // Iterate through the lights (count hard-coded to 2 here)
            for (int n = 0; n < 2; n++)
            {
                Light light = Game.lights[n];
                Vector3 LightPosition = new Vector3(light.Position);
                Vector3 Attenuation = new Vector3(light.Attenuation);
                Vector3 Colour = new Vector3(light.Colour);
                Vector3 toLightVector = LightPosition - fragmentPosition;

                // Skip this light if the wall is facing away from it
                if (Vector3.Dot(this.normalVector, toLightVector.Normalized()) < 0)
                    continue;

                // Vector length, i.e. distance from the light source
                float distance = toLightVector.Length;

                // Attenuation
                float attFactor = Attenuation.X + (Attenuation.Y * distance) + (Attenuation.Z * distance * distance);
                Vector3 diffuse = Colour / attFactor;
                totalDiffuse += diffuse;
            }

            // Convert to 0-255 colour channels
            var r = (int)(totalDiffuse.X * 256);
            var g = (int)(totalDiffuse.Y * 256);
            var b = (int)(totalDiffuse.Z * 256);
            r = Math.Min(r, 255);
            g = Math.Min(g, 255);
            b = Math.Min(b, 255);

            // Set pixel
            bitmap.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }

    this.LightMapTextureID = Texture.loadImage(bitmap);
}
Fragment shader (an alternative to the above light mapping, producing the same effect):
#version 330

precision highp float;

in vec2 frag_texcoord;
in vec3 toLightVector[8];

uniform sampler2D MyTexture0;
uniform vec3 LightColour[8];
uniform vec3 LightAttenuation[8];
uniform float NumberOfLights;

out vec4 finalColor;

void main(void)
{
    vec3 totalDiffuse = vec3(0.0);

    // NumberOfLights is a float uniform; the implicit compare works,
    // but an int uniform would be cleaner.
    for (int i = 0; i < NumberOfLights; i++)
    {
        float distance = length(toLightVector[i]);
        float attFactor = LightAttenuation[i].x + (LightAttenuation[i].y * distance) + (LightAttenuation[i].z * distance * distance);
        vec3 diffuse = LightColour[i] / attFactor;
        totalDiffuse += diffuse;
    }

    finalColor = vec4(totalDiffuse, 1.0) * texture(MyTexture0, frag_texcoord);
}
With the code below I created a function to draw a 3D chain model in C# using the Helix Toolkit. This works exactly how I wanted, but now I'm breaking my head over a good approach to draw the chain links in a specific direction, from a start point to an end point, and I didn't get much further over the last week. I know I need to work with vector multiplication or scalars, but I need some guidance to the right topic to solve my problem.
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using CommunityToolkit.Mvvm.ComponentModel; // assumed source of [ObservableProperty]
using HelixToolkit.SharpDX.Core;
using HelixToolkit.Wpf.SharpDX;
using SharpDX;

namespace RapiD.Geometry.Models
{
    public partial class ChainLink3D : GeometryBase3D
    {
        [ObservableProperty]
        float radius;

        [ObservableProperty]
        float width;

        [ObservableProperty]
        float diameter;

        [ObservableProperty]
        float length;

        [ObservableProperty]
        int copies;

        [ObservableProperty]
        ObservableCollection<Element3D> elements;

        public ChainLink3D(float diameter, float width, float length, int copies)
        {
            this.width = width;
            this.length = length;
            this.diameter = diameter;
            this.copies = copies;
            this.elements = new ObservableCollection<Element3D>();
            OriginalMaterial = PhongMaterials.Chrome;
            DrawChainLink();
        }

        public void DrawChainLink()
        {
            MeshBuilder meshBuilder = new MeshBuilder();
            float radius = (width - diameter) / 2;
            float trans = 0f;
            float translate = length + (radius * 2) - diameter;
            float yoffset = 0;
            int segments = 10;
            float interval = 180f / segments;
            int numOfCopies = copies;
            float startPoint = radius - (diameter / 2);
            float endPoint = -length - radius + (diameter / 2);
            Vector3 startVector = new Vector3(-300, 200f, 0);
            Vector3 endVector = new Vector3(300, 500, 0);
            Vector3 direction = Vector3.Normalize(endVector - startVector);

            // This loop draws the chain links
            for (int j = 0; j < numOfCopies; j++)
            {
                List<Vector3> single_chain_link = new List<Vector3>();

                for (float i = 0; i <= 360; i += interval)
                {
                    if (i > 180)
                        yoffset = -length;
                    else
                        yoffset = 0;

                    float a = i * MathF.PI / 180;
                    float x = radius * MathF.Cos(a);
                    float y = radius * MathF.Sin(a) + yoffset + trans;
                    Vector3 vec = new Vector3(x, y, 0);

                    // Rotates every second chain link
                    if (j % 2 == 1)
                        vec = new Vector3(0, y, x);

                    vec += startVector;
                    //vec *= direction;
                    single_chain_link.Add(vec);
                }

                // These three spheres are a reference for a new example direction
                // in which I want to draw the chain links
                meshBuilder.AddSphere(Vector3.Zero, 5, 10, 10);
                meshBuilder.AddSphere(startVector, 5, 10, 10);
                meshBuilder.AddSphere(endVector, 5, 10, 10);

                meshBuilder.AddTube(single_chain_link, diameter, 10, true);
                meshBuilder.AddArrow(new Vector3(0, startPoint + trans, 0), new Vector3(0, endPoint + trans, 0), 2, 10);
                elements.Add(new Element3D(new Vector3(0, startPoint + trans, 0), new Vector3(0, endPoint + trans, 0)));
                //single_chain_link.OrderByDescending(x => x.X);

                MeshGeometry = meshBuilder.ToMeshGeometry3D();
                trans -= translate;
            }
        }
    }
}
I did successfully draw the chain from a specific start point, but I want to draw the elements from the given start point to an end position.
You should be using a transformation to rotate and/or move your model to the correct orientation.
To create a rotation matrix from a direction it is useful to know some linear algebra. Notably, the cross product of two vectors results in a vector orthogonal to both, and a rotation matrix is not really anything more than three orthogonal axes. So you can do something like the following pseudo code:
var x = myDirection;
Vector3 y;
// Use a different helper axis if the direction is (almost) parallel to UnitY
if (x.AlmostEqual(Vector3.UnitY) || x.AlmostEqual(-Vector3.UnitY)) {
    y = x.CrossProduct(Vector3.UnitZ);
}
else {
    y = x.CrossProduct(Vector3.UnitY);
}
var z = y.CrossProduct(x);
// Normalize x, y and z, then create a matrix from those axes
If you are using System.Numerics there is Matrix4x4.CreateLookAt, which does more or less this (Matrix4x4.CreateWorld is the equivalent for orienting a model).
Once you have a matrix you can just transform your model to rotate it in whatever direction you want. Note that it is common, at least for me, to mix up directions and end up with something that is off by 90 degrees, or some other error. It does not help that different libraries can use different conventions. My best solution is to work in small steps and verify that each result is what you expect.
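To make that concrete, here is a minimal C# sketch using System.Numerics; the method name FromDirection is an assumption, and the choice of X as the "along the direction" axis follows the pseudo code above:

using System;
using System.Numerics;

// Sketch only: build a rotation/translation matrix whose local X axis points
// along `direction`, following the cross-product construction above.
static Matrix4x4 FromDirection(Vector3 direction, Vector3 position)
{
    Vector3 x = Vector3.Normalize(direction);

    // Pick a helper axis that is not (anti-)parallel to the direction.
    Vector3 helper = MathF.Abs(Vector3.Dot(x, Vector3.UnitY)) > 0.99f
        ? Vector3.UnitZ
        : Vector3.UnitY;

    Vector3 y = Vector3.Normalize(Vector3.Cross(x, helper));
    Vector3 z = Vector3.Cross(x, y); // already unit length

    // In System.Numerics the matrix rows hold the basis axes and the translation.
    return new Matrix4x4(
        x.X, x.Y, x.Z, 0,
        y.X, y.Y, y.Z, 0,
        z.X, z.Y, z.Z, 0,
        position.X, position.Y, position.Z, 1);
}

The same idea applies with HelixToolkit's SharpDX types: each generated chain-link point can then be transformed by the matrix (for example via Vector3.TransformCoordinate) instead of only being offset by startVector.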
With the help of this link I can apply a projection to my texture.
Now I want to cut/remove an equal area from the top and bottom of my GLControl and then apply the same projection to the remaining area. I have tried the code below, but as shown in the image the top and bottom curves are missing from the projection.
How can I bring them back in the remaining area?
precision highp float;

uniform sampler2D sTexture;
varying vec2 vTexCoord;

void main()
{
    float img_h_px = 432.0;  // height of the image in pixels
    float area_h_px = 39.0;  // area height in pixels
    float w = area_h_px / img_h_px;

    if (vTexCoord.y < w || vTexCoord.y > (1.0 - w))
    {
        gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }
    else
    {
        vec2 pos = vTexCoord.xy * 2.0 - 1.0;
        float b = 0.5;
        float v_scale = (1.0 + b) / (1.0 + b * sqrt(1.0 - pos.x * pos.x));
        float u = asin(pos.x) / 3.1415 + 0.5;
        float v = (pos.y * v_scale) * 0.5 + 0.5;
        if (v < 0.0 || v > 1.0)
            discard;
        vec3 texColor = texture2D(sTexture, vec2(u, v)).rgb;
        gl_FragColor = vec4(texColor.rgb, 1.0);
    }
}
The size of the top and bottom areas (the sum of both), relative to the size of the control, is 2.0*area_h_px/img_h_px = 2.0*w.
The ratio (h_ratio) of the control size to the "visible" area is:
float w = area_h_px/img_h_px;
float h_ratio = 1.0 - 2.0*w;
You have to scale the y coordinate for the texture lookup by the ratio of the "visible" area to the control size, which is the reciprocal of h_ratio (1.0/h_ratio):
float v = (pos.y * v_scale / h_ratio) * 0.5 + 0.5;
Final shader:
precision highp float;

uniform sampler2D sTexture;
varying vec2 vTexCoord;

void main()
{
    float img_h_px = 432.0;  // height of the image in pixels
    float area_h_px = 39.0;  // area height in pixels
    float w = area_h_px / img_h_px;
    float h_ratio = 1.0 - 2.0 * w;

    vec2 pos = vTexCoord.xy * 2.0 - 1.0;
    float b = 0.5;
    float v_scale = (1.0 + b) / (1.0 + b * sqrt(1.0 - pos.x * pos.x));
    float u = asin(pos.x) / 3.1415 + 0.5;
    float v = (pos.y * v_scale / h_ratio) * 0.5 + 0.5;

    vec3 texColor = texture2D(sTexture, vec2(u, v)).rgb;
    vec4 color = vec4(texColor.rgb, 1.0);

    if (vTexCoord.y < w || vTexCoord.y > (1.0 - w))
        color = vec4(1.0, 0.0, 1.0, 1.0);
    else if (v < 0.0 || v > 1.0)
        discard;

    gl_FragColor = color;
}
If you want to tint the entire area purple instead, set the color rather than discarding the fragments:
if (v < 0.0 || v > 1.0)
    color = vec4(1.0, 0.0, 1.0, 1.0);
I'm trying to implement a ray/mesh intersection method on the GPU using SharpDX (Direct3D 11). I've seen from an older post that this can be done with a compute shader, but I need help creating and defining the buffers outside the .hlsl code.
My HLSL code is the following:
struct rayHit
{
    float3 intersection;
};

cbuffer cbRaySettings : register(b0)
{
    float3 rayFrom;
    float3 rayDir;
    uint TriangleCount;
};

StructuredBuffer<float3> positionBuffer : register(t0);
StructuredBuffer<uint3> indexBuffer : register(t1);
AppendStructuredBuffer<rayHit> appendRayHitBuffer : register(u0);

void TestTriangle(float3 p1, float3 p2, float3 p3, out bool hit, out float3 intersection)
{
    // Perform ray/triangle intersection (Moeller-Trumbore style).
    hit = false;
    intersection = float3(0, 0, 0);

    // Compute vectors along two edges of the triangle.
    float3 edge1 = p2 - p1;
    float3 edge2 = p3 - p1;

    // Cross product of ray direction and edge2 - first part of the determinant.
    float3 directioncrossedge2 = cross(rayDir, edge2);

    // Determinant: dot product of edge1 and the first part of the determinant.
    float determinant = dot(edge1, directioncrossedge2);

    // If the ray is parallel to the triangle plane, there is no collision.
    // This also means that we are not culling: the ray may hit both the
    // back and the front of the triangle.
    if (determinant == 0.0f)
        return;

    float inversedeterminant = 1.0f / determinant;

    // Calculate the U parameter of the intersection point.
    float3 distanceVector = rayFrom - p1;
    float triangleU = dot(distanceVector, directioncrossedge2) * inversedeterminant;

    // Make sure it is inside the triangle.
    if (triangleU < 0.0f || triangleU > 1.0f)
        return;

    // Calculate the V parameter of the intersection point.
    float3 distancecrossedge1 = cross(distanceVector, edge1);
    float triangleV = dot(rayDir, distancecrossedge1) * inversedeterminant;

    // Make sure it is inside the triangle.
    if (triangleV < 0.0f || triangleU + triangleV > 1.0f)
        return;

    // Compute the distance along the ray to the triangle.
    float raydistance = dot(edge2, distancecrossedge1) * inversedeterminant;

    // Is the triangle behind the ray origin?
    if (raydistance < 0.0f)
        return;

    intersection = rayFrom + (rayDir * raydistance);
    hit = true;
}

[numthreads(64, 1, 1)]
void CS_RayAppend(uint3 tid : SV_DispatchThreadID)
{
    if (tid.x >= TriangleCount)
        return;

    uint3 indices = indexBuffer[tid.x];
    float3 p1 = positionBuffer[indices.x];
    float3 p2 = positionBuffer[indices.y];
    float3 p3 = positionBuffer[indices.z];

    bool hit;
    float3 p;
    TestTriangle(p1, p2, p3, hit, p);

    if (hit)
    {
        rayHit hitData;
        hitData.intersection = p;
        appendRayHitBuffer.Append(hitData);
    }
}
The following is part of my C# implementation, but I'm not able to understand how to load the buffers for the compute shader.
int count = obj.Mesh.Triangles.Count;
int size = 12; // sizeof(rayHit) = one float3 = 12 bytes per hit

BufferDescription bufferDesc = new BufferDescription()
{
    BindFlags = BindFlags.UnorderedAccess | BindFlags.ShaderResource,
    Usage = ResourceUsage.Default,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.BufferStructured,
    StructureByteStride = size,
    SizeInBytes = size * count
};
Buffer buffer = new Buffer(device, bufferDesc);

UnorderedAccessViewDescription uavDescription = new UnorderedAccessViewDescription()
{
    // Append flag is required for an AppendStructuredBuffer
    Buffer = new UnorderedAccessViewDescription.BufferResource() { FirstElement = 0, Flags = UnorderedAccessViewBufferFlags.Append, ElementCount = count },
    Format = SharpDX.DXGI.Format.Unknown,
    Dimension = UnorderedAccessViewDimension.Buffer
};
UnorderedAccessView uav = new UnorderedAccessView(device, buffer, uavDescription);
context.ComputeShader.SetUnorderedAccessView(0, uav);

var code = HLSLCompiler.CompileFromFile(@"Shaders\TestTriangle.hlsl", "CS_RayAppend", "cs_5_0");
ComputeShader _shader = new ComputeShader(device, code);

Buffer positionsBuffer = new Buffer(device, Utilities.SizeOf<Vector3>(), ResourceUsage.Default, BindFlags.None, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
context.UpdateSubresource(ref data, positionsBuffer);

context.ComputeShader.Set(_shader);
Inside my C# implementation I'm considering only one ray (with its origin and direction) and I would like to use the shader to check the intersection with all the triangles of the mesh. I'm already able to do that using the CPU, but for 20k+ triangles the computation takes too long even though I'm already using parallel code.
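For reference, here is a hedged sketch of how the missing pieces could look in SharpDX. It is an assumption based on the shader above rather than working code from the question: positions, triangles, rayFrom and rayDir are placeholder inputs, and the constant-buffer struct mirrors HLSL's 16-byte packing rules.

using System.Runtime.InteropServices;
using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Mirrors cbRaySettings; HLSL packs each float3 into its own 16-byte register,
// with TriangleCount fitting into the last 4 bytes of the second register.
[StructLayout(LayoutKind.Explicit, Size = 32)]
struct RaySettings
{
    [FieldOffset(0)] public Vector3 RayFrom;
    [FieldOffset(16)] public Vector3 RayDir;
    [FieldOffset(28)] public uint TriangleCount;
}

// Mirrors one StructuredBuffer<uint3> element (12 bytes).
[StructLayout(LayoutKind.Sequential)]
struct TriangleIndices { public uint I0, I1, I2; }

// Helper: create an immutable structured buffer plus the SRV the shader reads.
static ShaderResourceView CreateStructuredSrv<T>(Device device, T[] data, out Buffer buffer) where T : struct
{
    int stride = Utilities.SizeOf<T>();
    buffer = Buffer.Create(device, data, new BufferDescription
    {
        BindFlags = BindFlags.ShaderResource,
        Usage = ResourceUsage.Default,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.BufferStructured,
        StructureByteStride = stride,
        SizeInBytes = stride * data.Length
    });
    return new ShaderResourceView(device, buffer);
}

// Usage, with positions (Vector3[]) and triangles (TriangleIndices[]) taken from the mesh:
var positionSrv = CreateStructuredSrv(device, positions, out Buffer positionBuffer);
var indexSrv = CreateStructuredSrv(device, triangles, out Buffer indexBuffer);

var settings = new RaySettings { RayFrom = rayFrom, RayDir = rayDir, TriangleCount = (uint)triangles.Length };
Buffer settingsBuffer = Buffer.Create(device, ref settings, new BufferDescription
{
    BindFlags = BindFlags.ConstantBuffer,
    Usage = ResourceUsage.Default,
    SizeInBytes = 32
});

context.ComputeShader.Set(_shader);
context.ComputeShader.SetConstantBuffer(0, settingsBuffer);
context.ComputeShader.SetShaderResource(0, positionSrv);
context.ComputeShader.SetShaderResource(1, indexSrv);
context.ComputeShader.SetUnorderedAccessView(0, uav, 0); // the trailing 0 resets the append counter

// One thread per triangle, 64 threads per group to match [numthreads(64, 1, 1)].
context.Dispatch((triangles.Length + 63) / 64, 1, 1);

To read the results back, context.CopyStructureCount can copy the number of appended hits into a small buffer, and copying the hit buffer into a staging resource makes the data mappable on the CPU.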
I want to draw the Mandelbrot set taken from the Win2D Example Gallery and tweak it a little.
At first I had all my code to generate the Mandelbrot inside the CreateResources method of CanvasAnimatedControl, but due to performance issues I moved it to a shader (HLSL, via PixelShaderEffect) and CanvasVirtualControl:
public PixelShaderEffect _effectMandel;
CanvasVirtualImageSource _sdrc;

public async Task CreateResources(CanvasVirtualControl sender)
{
    _sdrc = new CanvasVirtualImageSource(sender, new Size(_width, _height));
    var arr = await FileHelper.ReadAllBytes("Shaders/Mandelbrot.bin");
    if (arr != null)
    {
        _effectMandel = new PixelShaderEffect(arr);
        using (CanvasDrawingSession drawingSession = sender.CreateDrawingSession(new Rect(0, 0, _width, _height)))
        {
            drawingSession.DrawImage(_effectMandel);
        }
    }
}
When I run the application, I get a System.Runtime.InteropServices.COMException right in the using section and the 'App.g.i.cs' file opens up telling me:
The shader code I use is this:
// Copyright (c) Microsoft Corporation. All rights reserved.
//
// Licensed under the MIT License. See LICENSE.txt in the project root for license information.

// This shader has no input textures.
// It generates a mandelbrot fractal.

#define D2D_INPUT_COUNT 0
#define D2D_REQUIRES_SCENE_POSITION

#include "d2d1effecthelpers.hlsli"

float scale;
float2 translate;

static const float4 tapOffsetsX = float4(-0.25, 0.25, -0.25, 0.25);
static const float4 tapOffsetsY = float4(-0.25, -0.25, 0.25, 0.25);

static const int iterations = 100;

D2D_PS_ENTRY(main)
{
    float2 pos = D2DGetScenePosition().xy;

    // Improve visual quality by supersampling inside the pixel shader, evaluating four separate
    // versions of the fractal in parallel, each at a slightly different position offset.
    // The x, y, z, and w components of these float4s contain the four simultaneous computations.
    float4 c_r = (pos.x + tapOffsetsX) * scale + translate.x;
    float4 c_i = (pos.y + tapOffsetsY) * scale + translate.y;

    float4 value_r = 0;
    float4 value_i = 0;

    // Evaluate the Mandelbrot fractal.
    for (int i = 0; i < iterations; i++)
    {
        float4 new_r = value_r * value_r - value_i * value_i + c_r;
        float4 new_i = value_r * value_i * 2 + c_i;

        value_r = new_r;
        value_i = new_i;
    }

    // Adjust our four parallel results to range 0:1.
    float4 distanceSquared = value_r * value_r + value_i * value_i;
    float4 vectorResult = isfinite(distanceSquared) ? saturate(1 - distanceSquared) : 0;

    // Resolve the supersampling to produce a single scalar result.
    float result = dot(vectorResult, 0.25);

    if (result < 1.0 / 256)
        return 0;
    else
        return float4(result, result, result, 1);
}
If you know why this happens, please answer. Thanks!
I needed to set up a timer to regularly invalidate the canvas and get 60 fps.
I had another look at the Microsoft examples and finally worked it out using this code:
DispatcherTimer timer;

internal void Regions_Invalidated(CanvasVirtualControl sender, CanvasRegionsInvalidatedEventArgs args)
{
    // Configure the Mandelbrot effect to position and scale its output.
    float baseScale = 0.005f;
    float scale = (baseScale * 96 / sender.Dpi) / (helper._modifiers[1] / 1000f);
    var controlSize = baseScale * sender.Size.ToVector2() * scale;
    Vector2 translate = (baseScale * sender.Size.ToVector2() * new Vector2(-0.5f, -0f));

    _effectMandel.Properties["scale"] = scale;
    _effectMandel.Properties["translate"] = (Microsoft.Graphics.Canvas.Numerics.Vector2)translate;

    // Draw the effect to whatever regions of the CanvasVirtualControl have been invalidated.
    foreach (var region in args.InvalidatedRegions)
    {
        using (var drawingSession = sender.CreateDrawingSession(region))
        {
            drawingSession.DrawImage(_effectMandel);
        }
    }

    // Start the timer that paces redraws (1000 ms / fps per tick)
    this.timer = new DispatcherTimer();
    int fps = 60;
    this.timer.Interval = new TimeSpan(0, 0, 0, 0, 1000 / fps);
    this.timer.Tick += timer_Tick;
    this.timer.Start();
}

private void timer_Tick(object sender, object e)
{
    this.timer.Stop();
    _canvas.Invalidate();
}
Hope this is helpful to someone.
I'm able to point zoom on the Mandelbrot set, as long as the mouse doesn't move after zooming has begun. I've tried calculating a normalized delta (new coordinate - old coordinate) * (old zoom), but what happens is the image appears to jump around to a new location. I've seen this issue before. I'm struggling more here because I have to somehow convert this mouse-position delta back to the -2,2 coordinate space of the Mandelbrot set.
Here's my code. What's important is the GetZoomPoint method, and then the lines of code that define x0 and y0. Also, I use the Range class to scale values from one range to another. I was using deltaTrans (that's the thing I was talking about earlier, where I normalize the mouse delta with the old scale).
using OpenTK.Graphics.OpenGL;
using SpriteSheetMaker;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Fractal.Fractal
{
    public class Mandelbrot : BaseTexture
    {
        private static Transform GlobalTransform = SpriteSheetMaker.Global.Transform;
        private static Vector3 GlobalScale = GlobalTransform.Scale;
        private static Vector3 GlobalTrans = GlobalTransform.Translation;
        private static Vector3 LastWindowPoint = null;
        private static Vector3 ZoomFactor = Vector3.ONE * 1.2f;
        private static Vector3 Displacement = Vector3.ZERO;
        private static int WindowSize = 100;

        public static Vector3 GetZoomPoint()
        {
            var zP = OpenGLHelpers.LastZoomPoint.Clone();
            if (LastWindowPoint == null)
            {
                LastWindowPoint = zP.Clone();
            }
            var delta = zP - LastWindowPoint;
            var oldZoom = GlobalScale / ZoomFactor;
            var deltaTrans = delta.XY * oldZoom.XY;
            var factor = ZoomFactor.Clone();

            Range xR = new Range(0, WindowSize);
            Range yR = new Range(0, WindowSize);
            Range complexRange = new Range(-2, 2);

            // Calculate displacement of zooming position.
            var dx = (zP.X - Displacement.X) * (factor.X - 1f);
            var dy = (zP.Y - Displacement.Y) * (factor.Y - 1f);

            // Compensate for displacement.
            Displacement.X -= dx;
            Displacement.Y -= dy;

            zP -= Displacement;

            var x = complexRange.ScaleValue(zP.X, xR);
            var y = complexRange.ScaleValue(zP.Y, yR);
            var rtn = new Vector3(x, y);

            LastWindowPoint = zP.Clone();
            return rtn;
        }

        public static Mandelbrot Generate()
        {
            var size = new Size(WindowSize, WindowSize);
            var radius = new Size(size.Width / 2, size.Height / 2);
            Bitmap bmp = new Bitmap(size.Width, size.Height);
            LockBitmap.LockBitmapUnsafe lbm = new LockBitmap.LockBitmapUnsafe(bmp);
            lbm.LockBits();

            var pt = Mandelbrot.GetZoomPoint();

            Parallel.For(0, size.Width, i =>
            {
                // float x0 = complexRangeX.ScaleValue(i, xRange);
                float x0 = ((i - radius.Width) / GlobalScale.X) + pt.X;

                Parallel.For(0, size.Height, j =>
                {
                    // float y0 = complexRangeY.ScaleValue(j, yRange);
                    float y0 = ((j - radius.Height) / GlobalScale.Y) + pt.Y;

                    float value = 0f;
                    float x = 0.0f;
                    float y = 0.0f;
                    int iteration = 0;
                    int max_iteration = 100;

                    while (x * x + y * y <= 4.0 && iteration < max_iteration)
                    {
                        float xtemp = x * x - y * y + x0;
                        y = 2.0f * x * y + y0;
                        x = xtemp;
                        iteration += 1;

                        if (iteration == max_iteration)
                        {
                            value = 255;
                            break;
                        }
                        else
                        {
                            value = iteration * 50f % 255f;
                        }
                    }

                    int v = (int)value;
                    lbm.SetPixel(i, j, new ColorLibrary.HSL(v / 255f, 1.0, 0.5).ToDotNetColor());
                });
            });

            lbm.UnlockBits();

            var tex = new BaseTextureImage(bmp);
            var rtn = new Mandelbrot(tex);
            return rtn;
        }

        public override void Draw()
        {
            base._draw();
        }

        private Mandelbrot(BaseTextureImage graphic)
        {
            var topLeft = new Vector3(0, 1);
            var bottomLeft = new Vector3(0, 0);
            var bottomRight = new Vector3(1, 0);
            var topRight = new Vector3(1, 1);

            this.Vertices = new List<Vector3>()
            {
                topLeft, bottomLeft, bottomRight, topRight
            };

            this.Size.X = WindowSize;
            this.Size.Y = WindowSize;
            this.Texture2D = graphic;
        }
    }
}
I refactored my code and also figured out a solution to this problem: two big wins in one. I found a solution on CodeProject written in C# which I was readily able to adapt to my project. I'm not sure why I didn't realize this when I posted the question, but what I needed to solve this issue was to think in terms of a zoom 'window' rather than a 'point zoom'. Even if I am trying to zoom directly into a point, that point is just the center of some sort of window.
Here is the method I have, which expects start and end mouse-down coordinates (screen space) and converts the Mandelbrot set window size accordingly.
public void ApplyZoom(double x0, double y0, double x1, double y1)
{
    if (x1 == x0 && y0 == y1)
    {
        // This was just a click; no movement occurred
        return;
    }

    /*
     * XMin, YMin and XMax, YMax are the current extent of the set.
     * mx0,my0 and mx1,my1 are the part we selected.
     * Do the math to draw the selected rectangle.
     */
    double scaleX, scaleY;
    scaleX = (XMax - XMin) / (float)BitmapSize;
    scaleY = (YMax - YMin) / (float)BitmapSize;
    XMax = (float)x1 * scaleX + XMin;
    YMax = (float)y1 * scaleY + YMin;
    XMin = (float)x0 * scaleX + XMin;
    YMin = (float)y0 * scaleY + YMin;

    this.Refresh(); // force the Mandelbrot to redraw
}
Basically, what's happening is we calculate the ratio between the Mandelbrot window size and the screen size we are drawing to. Then, using that scale, we convert our mouse-down coordinates to Mandelbrot set coordinates (x1 * scaleX, etc.) and manipulate the current Min and Max coordinates with them, using the Min values as the pivot point.
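As a small illustrative extension (PointZoom and zoomFactor are assumed names, not part of the CodeProject sample), even a plain point zoom can be expressed as a window handed to ApplyZoom:

// Sketch: express a point zoom as a selection window centered on the clicked
// screen-space point (cx, cy). A zoomFactor > 1 zooms in, because the selected
// window shrinks to BitmapSize / zoomFactor pixels per side.
public void PointZoom(double cx, double cy, double zoomFactor)
{
    double half = (BitmapSize / zoomFactor) / 2.0;
    ApplyZoom(cx - half, cy - half, cx + half, cy + half);
}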
Here's the link to the CodeProject I used as a reference: CodeProject link