Applying fragment shaders to images - C#

My goal is to apply any shader written in SKSL to an image. I am aware that this feature is available in Skia but not in SkiaSharp, but I see that there has been some recent effort to port the functionality. I've been trying to replicate a solution that one of the contributors posted for an earlier pre-release version here: https://github.com/mono/SkiaSharp/issues/1319#issuecomment-640529824 but I had to make a few changes to my code and shader, as they didn't seem compatible with the latest pre-release version (2.88.0-preview.232). Here is mine for reference:
// input values
SKBitmap bmp = new SKBitmap(1000, 1000);
SKCanvas canvas = new SKCanvas(bmp);
float threshold = 1.05f;
float exponent = 1.5f;
// shader
var src = #"
in fragmentProcessor color_map;
uniform float scale;
uniform half exp;
uniform float3 in_colors0;
half4 main(float2 p) {
half4 texColor = sample(color_map, p);
if (length(abs(in_colors0 - pow(texColor.rgb, half3(exp)))) < scale)
discard;
return texColor;
}";
using var effect = SKRuntimeEffect.Create(src, out var errorText);
// input values
var inputs = new SKRuntimeEffectUniforms(effect);
inputs["scale"] = threshold;
inputs["exp"] = exponent;
inputs["in_colors0"] = new[] {1f, 1f, 1f};
// shader values
using var blueShirt = SKImage.FromEncodedData(Path.Combine(PathToImages, "blue-shirt.jpg"));
using var textureShader = blueShirt.ToShader();
var children = new SKRuntimeEffectChildren(effect);
children["color_map"] = textureShader;
// create actual shader
using var shader = effect.ToShader(true, inputs, children);
// draw as normal
canvas.Clear(SKColors.Black);
using var paint = new SKPaint { Shader = shader };
canvas.DrawRect(SKRect.Create(400, 400), paint);
However, I get a System.Runtime.InteropServices.SEHException: 'External component has thrown an exception.' on canvas.DrawRect(), which I believe is either a problem with my shader or a sign that this feature is not yet fully supported. Besides manually modifying individual pixels in C#, which should be a lot slower than a shader operation, is the shader route simply not possible within the current realm of SkiaSharp? If not, I think I'll need to load an image into some library with OpenGL bindings (like OpenTK), apply the shader, take a screenshot of the canvas, then load that image into SkiaSharp, which I imagine is not a great solution. I'd like to see possible solutions even outside of Skia.
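(Note: later Skia milestones changed the runtime-effect dialect: the in fragmentProcessor child type was replaced by shader children, and discard is not supported in runtime effects, so the first thing to check is whether errorText comes back non-null before the shader is ever used. Below is a minimal sketch of the adjusted source under those assumptions; depending on the exact milestone the package wraps, the child may need to be declared in shader rather than uniform shader, and newer SkiaSharp versions sample children with color_map.eval(p) instead of sample(color_map, p).)
var src = @"
uniform shader color_map;
uniform float scale;
uniform half exp;
uniform float3 in_colors0;
half4 main(float2 p) {
    half4 texColor = sample(color_map, p);
    // runtime effects have no discard; return transparent instead
    if (length(abs(in_colors0 - pow(texColor.rgb, half3(exp)))) < scale)
        return half4(0);
    return texColor;
}";
using var effect = SKRuntimeEffect.Create(src, out var errorText);
if (errorText != null)
    throw new InvalidOperationException(errorText); // fail fast on SkSL compile errors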

Related

C#: SharpDX - Draw Basic Stars

The Issue
I've been searching for an answer to this issue for a few days now. I need help finding a way to generate a basic star. I have the code to randomly generate the locations done; however, I am new to DirectX and came from the world of XNA and Unity. DirectX development seems overly complicated at the best of times. I have found a few tutorials, but I am finding them difficult to follow. I have been unable to render anything to the screen once I've cleared it. I'm using the basic setup as far as rendering goes; I haven't created any special classes or structs. I have been trying to follow the Richard's Software tutorials, which port to C# the C++ samples from the book 3D Game Programming With DirectX 11 by Frank D. Luna. The farthest I have been able to get is clearing the screen to Color.CornflowerBlue.
Question(s)
1. Are there any simple methods to draw/render objects to the screen? I'm able to render text just fine, but images (sprites) and 3D meshes seem to be giving me issues. Is there a simple method to draw basic geometric shapes? For example: Primitives.DrawSphere(float radius, Vector3 location, Color c);
2. If there aren't any simple methods available to draw primitives, what is going to be the simplest approach to rendering stars? I can do spheres, sprites with alpha blending to simulate distance, billboards, etc. What will be the simplest method to implement?
3. How do I implement the simplest method revealed by question 2 above? Code samples, tutorials (no videos), articles, etc. are greatly appreciated, as I am having a hard time tracking down good C# references; it would appear that most people are utilizing Unity and Unreal these days, but I don't have those options.
Notes
I work in a government environment and am unable to utilize third party tools that haven't been approved. The approval process is a nightmare so third party tools are typically a no go. All supplied answers, documentation, samples, etc. should be strictly utilizing SharpDX.
My Code
My project is a Windows Forms application whose primary form derives from RenderForm. I have created a single class called Engine that handles the DirectX code.
Engine.cs:
internal class Engine : IDisposable {
#region Fields
private Device device;
private SwapChain swapChain;
private DeviceContext context;
private Texture2D backBuffer;
private RenderTargetView renderView;
private SynchronizationContext syncContext;
private RenderForm renderForm; // used by Initialize(); this field was missing from the original listing
#endregion
#region Events
public event EventHandler Draw;
public event EventHandler Update;
private void SendDraw(object data) { Draw?.Invoke(this, EventArgs.Empty); } // null-safe in case nothing has subscribed yet
private void SendUpdate(object data) { Update?.Invoke(this, EventArgs.Empty); }
#endregion
#region Constructor(s)
public Engine(RenderForm form) {
SwapChainDescription description = new SwapChainDescription() {
ModeDescription = new ModeDescription(form.Width, form.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
SampleDescription = new SampleDescription(1, 0),
Usage = Usage.RenderTargetOutput,
BufferCount = 1,
OutputHandle = form.Handle,
IsWindowed = !form.IsFullscreen
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, description, out device, out swapChain);
backBuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
renderView = new RenderTargetView(device, backBuffer);
context = device.ImmediateContext;
context.OutputMerger.SetRenderTargets(renderView);
context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
renderForm = form;
}
#endregion
#region Public Methods
public void Initialize() {
if (SynchronizationContext.Current != null)
syncContext = SynchronizationContext.Current;
else
syncContext = new SynchronizationContext();
RenderLoop.Run(renderForm, delegate() {
context.ClearRenderTargetView(renderView, Color.CornflowerBlue);
syncContext.Send(SendUpdate, null);
syncContext.Send(SendDraw, null);
swapChain.Present(0, 0);
});
}
public void Dispose() { }
#endregion
}
Form1.cs:
public partial class Form1: RenderForm {
private Engine gameEngine;
int count = 0;
public Form1() {
InitializeComponent();
gameEngine = new Engine(this);
gameEngine.Update += GameEngine_Update;
gameEngine.Draw += GameEngine_Draw;
gameEngine.Initialize();
}
private void GameEngine_Update(object sender, EventArgs e) => Debug.WriteLine("Updated.");
private void GameEngine_Draw(object sender, EventArgs e) => Debug.WriteLine($"I've drawn {++count} times.");
}
Final Remarks
Any help is appreciated at this point because it's going on day 4 and I am still struggling to understand most of the DirectX 11 code. I am by no means new to C# or development; I am just used to Windows Forms, ASP.NET, Unity, XNA, WPF, etc. This is my first experience with DirectX and it's definitely over the top. Even worse than when I tried OpenGL ten years ago with hardly any development experience at all.
A few things to start with.
First, DirectX is a very low level API. The only way to get a lower level API on Windows is to go talk to the graphics driver directly, which would be even more of a nightmare. As a result, things tend to be extremely generic, which allows for high flexibility at the cost of being fairly complicated. If you ever wondered what Unity or Unreal were doing under the hood, this is it.
Second, DirectX, and Direct3D in particular, is written in and for C++. C# resources are hard to come by because the API wasn't really intended for use from C# (not that that's a good thing). As a result, discarding the documentation and answers written for C++ is a really bad idea. All the caveats and restrictions on the C++ API also apply to you in the C# world, and you will need to know them.
Third, I will not be able to provide you an entirely C#/SharpDX answer, since I don't use DirectX from C#, but from C++. I'll do what I can to provide accurate mappings, but be aware you are using an API wrapper, which can and will hide some of the details from you. Best option to discover those details would be to have the source code of SharpDX up as you go through the C++ documentation.
Now on to the questions you have. Strap in, this will be long.
First up: there's no simple way to render a primitive object in Direct3D 11. Rendering a six-faced cube takes the same steps as rendering a 200-million-vertex mesh of New York City.
In the rendering loop, we need to do several actions to render anything. In this list, you've already done steps 1 and 7, and partially done step 2:
1. Clear the back buffer and depth/stencil buffers.
2. Set the input layout, shaders, pipeline state objects, render targets, and viewports used in the current rendering pass.
3. Set the vertex buffer, index buffer, constant buffers, shader resources and samplers used by the current mesh being drawn.
4. Issue the draw call for the given mesh.
5. Repeat steps 3 and 4 for all meshes that must be drawn in the current rendering pass.
6. Repeat steps 2 through 5 for all passes defined by the application.
7. Present the swap chain.
Fairly complex, just to render something as simple as a cube. This process needs several objects, of which we already have a few:
A Device object instance, for creating new D3D objects
A DeviceContext object instance, for issuing drawing operations and setting pipeline state
A DXGI.SwapChain object instance, to manage the back buffer(s) and present the next buffer in the chain to the desktop
A Texture2D object instance, to represent the back buffer owned by the swap chain
A RenderTargetView object instance, to allow the graphics card to use a texture as the destination for a rendering operation
A DepthStencilView object instance, if we are using the depth buffer
VertexShader and PixelShader object instances, representing the shaders used by the GPU during the vertex and pixel shader stages of the graphics pipeline
An InputLayout object instance, representing the exact layout of one vertex in our vertex buffer
A set of Buffer object instances, representing the vertex buffers and index buffers containing our geometry and the constant buffers containing parameters for our shaders
A set of Texture2D object instances with associated ShaderResourceView object instances, representing any textures or surface maps to be applied to our geometry
A set of SamplerState object instances, for sampling the above textures from our shaders
A RasterizerState object instance, to describe the culling, depth biasing, multisampling, and antialiasing parameters the rasterizer should use
A DepthStencilState object instance, to describe how the GPU should conduct the depth test, what causes a depth test fail, and what a fail should do
A BlendState object instance, to describe how the GPU should blend multiple render targets together
Now, what does this look like as actual C# code?
Probably something like this (for rendering):
//Step 1 - Clear the targets
// Clear the back buffer to blue
context.ClearRenderTargetView(BackBufferView, Color.CornflowerBlue);
// Clear the depth buffer to the maximum value.
context.ClearDepthStencilView(DepthStencilBuffer, DepthStencilClearFlags.Depth, 1.0f, 0);
//Step 2 - Set up the pipeline.
// Input Assembler (IA) stage
context.InputAssembler.InputLayout = VertexBufferLayout;
// Vertex Shader (VS) stage
context.VertexShader.Set(SimpleVertexShader);
// Rasterizer (RS) stage
context.Rasterizer.State = SimpleRasterState;
context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
// Pixel Shader (PS) stage
context.PixelShader.Set(SimplePixelShader);
// Output Merger (OM) stage
context.OutputMerger.SetRenderTargets(DepthStencilBuffer, BackBufferView);
context.OutputMerger.SetDepthStencilState(SimpleDepthStencilState);
context.OutputMerger.SetBlendState(SimpleBlendState);
//Step 3 - Set up the geometry
// Vertex buffers
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<Vertex>(), 0));
// Index buffer
context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R16_UInt, 0);
// Constant buffers
context.VertexShader.SetConstantBuffer(0, TransformationMatrixBuffer);
context.PixelShader.SetConstantBuffer(0, AmbientLightBuffer);
// Shader resources
context.PixelShader.SetShaderResource(0, MeshTexture);
// Samplers
context.PixelShader.SetSampler(0, MeshTextureSampler);
//Step 4 - Draw the object
context.DrawIndexed(IndexCount, 0, 0); // buffers don't expose a count; use the length captured at creation
//Step 5 - Advance to the next object and repeat.
// No next object currently.
//Step 6 - Advance to the next pipeline configuration
// No next pipeline configuration currently.
//Step 7 - Present to the screen.
swapChain.Present(0, 0);
The vertex and pixel shaders in this example code expect:
A model with position, normal, and texture coordinates per vertex
The position of the camera in world space, the world-view-projection matrix, world inverse transpose matrix, and world matrix as a vertex shader constant buffer
The ambient, diffuse, and specular colors of the light, as well as its position in the world, as a pixel shader constant buffer
The 2D texture to apply to the surface of the model in the pixel shader, and
The sampler to use when accessing the pixels of the above texture.
Now the rendering code itself is fairly simple - the setup is the harder part of it:
//Create the vertex buffer
VertexBuffer = new Buffer(device, RawVertexInfo, new BufferDescription {
SizeInBytes = RawVertexInfo.Length * Utilities.SizeOf<Vertex>(), // Utilities.SizeOf avoids needing an unsafe context for sizeof
Usage = ResourceUsage.Default,
BindFlags = BindFlags.VertexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
StructureByteStride = Utilities.SizeOf<Vertex>()
});
//Create the index buffer
IndexCount = (int)RawIndexInfo.Length;
IndexBuffer = new Buffer(device, RawIndexInfo, new BufferDescription {
SizeInBytes = IndexCount * sizeof(ushort),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.IndexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
StructureByteStride = sizeof(ushort)
});
//Create the Depth/Stencil view.
Texture2D DepthStencilTexture = new Texture2D(device, new Texture2DDescription {
Format = Format.D32_Float,
BindFlags = BindFlags.DepthStencil,
Usage = ResourceUsage.Default,
Height = renderForm.Height,
Width = renderForm.Width,
ArraySize = 1,
MipLevels = 1,
SampleDescription = new SampleDescription {
Count = 1,
Quality = 0,
},
CpuAccessFlags = 0,
OptionFlags = 0
});
DepthStencilBuffer = new DepthStencilView(device, DepthStencilTexture);
SimpleDepthStencilState = new DepthStencilState(device, new DepthStencilStateDescription {
IsDepthEnabled = true,
DepthComparison = Comparison.Less,
});
//default blend state - can be omitted from the application if defaulted.
SimpleBlendState = new BlendState(device, new BlendStateDescription {
});
//Default rasterizer state - can be omitted from the application if defaulted.
SimpleRasterState = new RasterizerState(device, new RasterizerStateDescription {
CullMode = CullMode.Back,
IsFrontCounterClockwise = false,
});
// Input layout.
VertexBufferLayout = new InputLayout(device, VertexShaderByteCode, new InputElement[] {
new InputElement {
SemanticName = "POSITION",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32B32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = 0,
InstanceDataStepRate = 0,
},
new InputElement {
SemanticName = "NORMAL",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32B32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = InputElement.AppendAligned,
InstanceDataStepRate = 0,
},
new InputElement {
SemanticName = "TEXCOORD0",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = InputElement.AppendAligned,
InstanceDataStepRate = 0,
},
});
//Vertex/Pixel shaders
SimpleVertexShader = new VertexShader(device, VertexShaderByteCode);
SimplePixelShader = new PixelShader(device, PixelShaderByteCode);
//Constant buffers
TransformationMatrixBuffer = new Buffer(device, new BufferDescription {
SizeInBytes = Utilities.SizeOf<TransformationMatrixParameters>(),
BindFlags = BindFlags.ConstantBuffer,
Usage = ResourceUsage.Default,
CpuAccessFlags = CpuAccessFlags.None,
});
AmbientLightBuffer = new Buffer(device, new BufferDescription {
SizeInBytes = Utilities.SizeOf<AmbientLightParameters>(),
BindFlags = BindFlags.ConstantBuffer,
Usage = ResourceUsage.Default,
CpuAccessFlags = CpuAccessFlags.None,
});
// Mesh texture
MeshTexture = new Texture2D(device, new Texture2DDescription {
Format = Format.B8G8R8A8_UNorm,
BindFlags = BindFlags.ShaderResource,
Usage = ResourceUsage.Default,
Height = MeshImage.Height,
Width = MeshImage.Width,
ArraySize = 1,
MipLevels = 0,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SampleDescription = new SampleDescription {
Count = 1,
Quality = 0,
}
});
//Shader view for the texture
MeshTextureView = new ShaderResourceView(device, MeshTexture);
//Sampler for the texture
MeshTextureSampler = new SamplerState(device, new SamplerStateDescription {
AddressU = TextureAddressMode.Clamp,
AddressV = TextureAddressMode.Clamp,
AddressW = TextureAddressMode.Border,
BorderColor = new SharpDX.Mathematics.Interop.RawColor4(1f, 0f, 1f, 1f), // RawColor4 components are 0-1 floats
Filter = Filter.MaximumMinMagMipLinear,
ComparisonFunction = Comparison.Never,
MaximumLod = float.MaxValue,
MinimumLod = float.MinValue,
MaximumAnisotropy = 1,
MipLodBias = 0,
});
As you can see, there's a lot of stuff to get through.
As this has already gotten a lot longer than most people have the patience for, I'd recommend getting and reading the book by Frank D. Luna, as he does a much better job of explaining the pipeline stages and the expectations Direct3D has of your application.
I'd also recommend reading through the C++ documentation for the Direct3D API, as, again, everything there will apply to SharpDX.
In addition, you'll want to look into HLSL, as you'll need to define and compile a shader to make any of the above code even work, and if you want any texturing, you'll need to figure out how to get the image data into Direct3D.
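The compile step itself is exposed through SharpDX's D3DCompiler wrapper. A minimal sketch, assuming an HLSL file named shaders.hlsl with VSMain and PSMain entry points (both names are placeholders):
using SharpDX.D3DCompiler;
// Compile the HLSL source once at startup; the byte code feeds the
// VertexShader/PixelShader constructors and the InputLayout shown earlier.
var vsResult = ShaderBytecode.CompileFromFile("shaders.hlsl", "VSMain", "vs_5_0");
var psResult = ShaderBytecode.CompileFromFile("shaders.hlsl", "PSMain", "ps_5_0");
if (vsResult.HasErrors || psResult.HasErrors)
    throw new InvalidOperationException(vsResult.Message + psResult.Message);
VertexShaderByteCode = vsResult.Bytecode;
PixelShaderByteCode = psResult.Bytecode;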
On the bright side, if you manage to implement all of this in a clean, extensible manner, you'll be able to render practically anything with little additional effort.

Multiple textures on the same object with opengl4csharp

I have a C# program using the opengl4csharp library, which creates a 3D cube, movable in space with the mouse and keyboard.
Currently I apply only one texture to the cube uniformly. My problem is that I want to apply a different texture to each face.
I tried to initialize a texture array and a texture-ID array as follows:
diceTextures = new Texture[6];
diceTextures[0] = new Texture("top.jpg");
diceTextures[1] = new Texture("bottom.jpg");
diceTextures[2] = new Texture("left.jpg");
diceTextures[3] = new Texture("right.jpg");
diceTextures[4] = new Texture("front.jpg");
diceTextures[5] = new Texture("back.jpg");
diceUint = new uint[diceTextures.Length];
for (uint ui = 0; ui < diceTextures.Length; ui++)
{
diceUint[ui] = diceTextures[ui].TextureID;
}
Then, in the OnRenderFrame method, I bind them with:
Gl.UseProgram(program);
Gl.ActiveTexture(TextureUnit.Texture0);
Gl.BindTextures(0, diceTextures.Length, diceUint);
But nothing changes, only the first texture of the array is displayed on the cube, as previously when binding only one texture.
How can I get each texture applied to its corresponding face?
Gl.BindTextures(0, diceTextures.Length, diceUint);
This binds 6 textures to 6 separate texture units, 0 through diceTextures.Length - 1. Indeed, if you're going to use glBindTextures, you don't need the glActiveTexture call.
In any case, if your goal is to give each face a different texture, you first have to be able to identify a specific face from your shader. That means each face needs to be given a per-vertex value which is separate from those for the vertices for other faces. This also means that such faces cannot share positions with other faces, since one of their attributes is not shared from face to face.
So you need a new vertex attribute which contains the index for the texture you want that face to use.
From there, you can employ array textures. These are single textures which contain an array of images. When sampling from array textures, you can specify (as part of the texture coordinate) which index in the array to sample from.
Of course, this changes your texture building code, as you must use GL_TEXTURE_2D_ARRAY for your texture type and allocate multiple array layers when creating the texture's storage.
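On the C# side, the allocation might look roughly like this; this is a sketch assuming opengl4csharp exposes the usual raw GL entry points and enums (the wrapper's exact names and signatures may differ slightly), with texWidth, texHeight, and facePixels (an IntPtr per decoded face image) as placeholders:
// Allocate a 2D array texture with one layer per cube face.
uint arrayTex = Gl.GenTexture();
Gl.BindTexture(TextureTarget.Texture2DArray, arrayTex);
Gl.TexImage3D(TextureTarget.Texture2DArray, 0, PixelInternalFormat.Rgba8,
    texWidth, texHeight, 6, 0, PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero);
// Upload each face's pixels into its own layer.
for (int i = 0; i < 6; i++)
    Gl.TexSubImage3D(TextureTarget.Texture2DArray, 0, 0, 0, i,
        texWidth, texHeight, 1, PixelFormat.Bgra, PixelType.UnsignedByte, facePixels[i]);
// Only one mip level is allocated here, so the min filter must not use
// mipmaps or the texture will be incomplete.
Gl.TexParameteri(TextureTarget.Texture2DArray, TextureParameterName.TextureMinFilter, TextureParameter.Linear);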
Overall, the shader code would look something like this:
#version 330
layout(location = 0) in vec3 position;
layout(location = 2) in vec2 vertTexCoord;
layout(location = 6) in float textureLayer;
out vec2 texCoord;
flat out float layer;
void main()
{
gl_Position = //your usual stuff.
texCoord = vertTexCoord;
layer = textureLayer;
}
Fragment shader:
#version 330
in vec2 texCoord;
flat in float layer;
uniform sampler2DArray arrayTexture;
out vec4 outColor;
void main()
{
outColor = texture(arrayTexture, vec3(texCoord, layer));
}

How to apply barrel distortion (lens correction) with SharpDX?

I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything is working fine.
Now I want to implement lens correction (barrel distortion) and I'm looking for the fastest way to realize it. From what I've read, the fastest way is to use a shader, but I'm very new to SharpDX (with no knowledge of shaders) and I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube) rather than to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
isInCapture = true;
try
{
// init
bool captureDone = false;
bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
// the capture needs some time
for (int i = 0; !captureDone; i++)
{
try
{
//capture
duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);
// only for wait
if (i > 0)
{
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
ImageLockMode.WriteOnly, bitmap.PixelFormat);
sourcePtr = mapSource.DataPointer;
destPtr = mapDest.Scan0;
// set x position offset to rect.x
int rowPitch = mapSource.RowPitch - offsetX;
// set pointer to y position
sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);
for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
{
// set pointer to x position
sourcePtr = IntPtr.Add(sourcePtr, offsetX);
// copy pixel to bmp
Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
// increment pointer to next line
sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
destPtr = IntPtr.Add(destPtr, mapDest.Stride);
}
bitmap.UnlockBits(mapDest);
device.ImmediateContext.UnmapSubresource(screenTexture, 0);
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch//(Exception ex) // catch (SharpDXException e)
{
//if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
//{
// // throw e;
//}
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
}
}
catch
{
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
isInCapture = false;
return bitmap;
}
It would be really great to get a little starting assistance from someone willing to help.
I've found some shaders on the net, but they're written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I use those with DirectX (SharpDX)?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (they should be fairly similar though). The idea is that you'll have to load your "screenshot" into a texture buffer as an input to your fragment shader (pixel shader). Fragment shaders are deceptively easy to understand: it's just a piece of code (written in GLSL or HLSL) looking very much like a subset of C to which a few math functions have been added (mostly vector and matrix manipulation), executed for every single pixel to be rendered.
The code should be fairly simple: you take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
vec2 uv; // centered coordinates (not initialized in this snippet; derive from the pixel position, e.g. uv = texCoord - 0.5)
/// Barrel Distortion ///
float d=length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r*cos(phi)+.5,r*sin(phi)+.5);
Here's a shadertoy link if you wanna play with it and figure out how it works
I have no idea how HLSL handles texture filtering (which pixel you'll get when using floating point values for coordinates), but I'd put my money on bilinear filtering, which may very well give an unpleasant pixelation to your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for the fisheye effect. Of course, the two are pretty much identical, barrel distortion being only on one axis. I believe what you need is the fisheye effect though; it's what is commonly used for HMDs, if I'm not mistaken.
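On the SharpDX side, the overall approach would be: copy the duplicated frame into a GPU texture that can be bound as a shader input, then render a fullscreen quad with the distortion pixel shader sampling that texture. A rough sketch of the texture plumbing, reusing device and screenResource from the capture code above (the values are illustrative; CopyResource requires the destination to match the duplicated output's size and format, otherwise use CopySubresourceRegion):
// A GPU-side texture the pixel shader can sample; the staging texture used
// for CPU copies cannot be bound as a shader resource.
var desc = new Texture2DDescription {
    Width = outputWidth,    // dimensions of the duplicated output (placeholders)
    Height = outputHeight,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm, // desktop duplication's usual format
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};
using (var gpuTexture = new Texture2D(device, desc))
using (var frame = screenResource.QueryInterface<Texture2D>())
using (var srv = new ShaderResourceView(device, gpuTexture))
{
    device.ImmediateContext.CopyResource(frame, gpuTexture);
    // Bind the frame as input to the distortion pixel shader, then draw a
    // fullscreen quad with that shader active.
    device.ImmediateContext.PixelShader.SetShaderResource(0, srv);
}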

Monogame Shader Porting Issues

OK, so I ported a game I have been working on over to MonoGame, but I'm having a shader issue now that it's ported. It's an odd bug, since it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple one that looks at a greyscale image and, based on the grey value, picks a color from a lookup texture. Basically, I'm using this to randomize a sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but after that it just gives a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; //calculate in program, pass to shader (between 0 and 1)
sampler colorTableSampler =
sampler_state
{
Texture = <colorTable>;
};
float4 PixelShaderFunction(float2 c: TEXCOORD0) : COLOR0
{
//get current pixel of the texture (greyscale)
float4 color = tex2D(input, c);
//set the values to compare to.
float hair = 139.0/255.0; float hairless = 140.0/255.0; // float literals: integer division would yield 0 on SM4 profiles
float shirt = 181.0/255.0; float shirtless = 182.0/255.0;
//var to hold the new color; default to the source pixel so unmatched pixels aren't undefined
float4 swap = color;
//pixel coordinate for lookup
float2 i;
i.y = 1;
//compare and swap
if (color.r >= hair && color.r <= hairless)
{
i.x = ((0.5 + seed + 96)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r >= shirt && color.r <= shirtless)
{
i.x = ((0.5 + seed + 64)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r == 1)
{
i.x = ((0.5 + seed + 32)/128);
swap = tex2D(colorTableSampler,i);
}
if (color.r == 0)
{
i.x = ((0.5 + seed)/128);
swap = tex2D(colorTableSampler, i);
}
return swap;
}
technique ColorSwap
{
pass Pass1
{
// TODO: set renderstates here.
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader, I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
//get a random number to pass into the shader.
Random r = new Random();
float seed = (float)r.Next(0, 32);
//create the texture to copy color data into
Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);
//create a render target to draw a character to.
RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
gd.SetRenderTarget(rendTarget);
//set background of new render target to transparent.
//gd.Clear(Microsoft.Xna.Framework.Color.Black);
//start drawing to the new render target
sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);
//send the random value to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
//send the palette texture to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
//apply the effect
Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();
//draw the texture (now with color!)
sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);
//end drawing
sb.End();
//reset rendertarget
gd.SetRenderTarget(null);
//copy the drawn and colored enemy to a non-volatile texture (instead of render target)
//create the color array the size of the texture.
Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
//get all color data from the render target
rendTarget.GetData<Color>(cs);
//move the color data into the texture.
enemyTex.SetData<Color>(cs);
//return the finished texture.
return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have the 'at' (@) sign in front of the string when you have also escaped the backslashes - unless you actually want \\ in your string, but that looks strange in a file path.
You wrote in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just reading through it, but really this could be tricky for someone to figure out just looking at your code.
I'd recommend doing a graphics profile (via visual studio) and capturing the frame which renders correctly then the frame rendering incorrectly and comparing the state of the two.
Eg, is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
Monogame cannot use shaders built on HLSL 2.
Also the built in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3, you will get issues if you try to replace the pixel portion of a shader with a different level shader.
This is the only issue I can see with your code.
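As a side note, the more typical MonoGame route is to add the .fx file to the MGCB content project and load the compiled effect through the ContentManager, which also handles platform-specific shader compilation. A sketch, assuming the effect is added to the content project under shaders/test (the path is a placeholder):
// In LoadContent, instead of the BinaryReader approach:
colorSwapEffect = Content.Load<Effect>("shaders/test");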

MonoTouch: Doubling Appearance Image size when Hue adjusted on Retina display

I am setting the NavBar's background with this code, which works great on both Retina and non-Retina displays. There is a @2x and a normal image. So, all good:
UINavigationBar.Appearance.SetBackgroundImage(
GetImage(ImageTheme.menubar), UIBarMetrics.Default);
Now, when I apply this ChangeHue() transformation to the image to adjust its hue, on Retina displays the image is twice the size. Non-Retina displays are fine:
UINavigationBar.Appearance.SetBackgroundImage(
ChangeHue(GetImage(ImageTheme.menubar)), UIBarMetrics.Default);
...
UIImage ChangeHue(UIImage originalImage){
var hueAdjust = new CIHueAdjust() {
Image = CIImage.FromCGImage(originalImage.CGImage),
Angle = hue * (float)Math.PI / 180f // degrees to radians
};
var output = hueAdjust.OutputImage;
var context = CIContext.FromOptions(null);
var cgimage = context.CreateCGImage(output, output.Extent);
var i = UIImage.FromImage(cgimage);
return i;
}
Here is the result in Non-Retina and Retina displays after the Hue is applied:
Ignore those HACKs, and edit this line in your ChangeHue method:
var i = UIImage.FromImage(cgimage);
to do this instead:
float scale = 1f;
if (UIScreen.MainScreen.RespondsToSelector (new MonoTouch.ObjCRuntime.Selector ("scale"))) {
scale = UIScreen.MainScreen.Scale; // will be 2.0 for Retina
}
var i = new UIImage(cgimage, scale, UIImageOrientation.Up);
This should return a UIImage object that has the correct 'scale' information to be properly displayed in the UINavigationBar.
I never tried for CoreImage but, for CoreGraphics, you need to use UIGraphics.BeginImageContextWithOptions and specify 0 for scaling (so it will be done automagically for both Retina and non-Retina displays).
So the first thing I would try is to replace your:
var context = CIContext.FromOptions(null);
with the following block:
UIGraphics.BeginImageContextWithOptions (new SizeF (size, size), false, 0);
using (var c = UIGraphics.GetCurrentContext ()) {
var context = CIContext.FromContext (c);
...
}
UIGraphics.EndImageContext ();
UPDATE: FromContext is not available in iOS (it's OSX specific) so the above code won't work.
Filed a bug with the MonoTouch team. Will post solution shortly.
I've got three 'hacks' to suggest for now:
Hack #1
To force the extent to trim the image by replacing this line:
var cgimage = context.CreateCGImage(output, output.Extent);
with this:
var extent = output.Extent;
if (UIScreen.MainScreen.RespondsToSelector (new MonoTouch.ObjCRuntime.Selector("scale"))) {
if (UIScreen.MainScreen.Scale == 2f) {
extent = new System.Drawing.RectangleF(extent.X, extent.Y, extent.Width / 2f, extent.Height / 2f);
}
}
var cgimage = context.CreateCGImage(output, extent);
HOWEVER, you 'lose' the Retina resolution on the adjusted image (it uses the @2x image as the source, but only displays the bottom-left quadrant of it after applying the filter, thanks to the image origin starting at the bottom-left).
Hack #2
Along the same lines, you can scale the image returned from the ChangeHue method so that it doesn't expand beyond the navigation bar:
var hued = ChangeHue (navBarImage);
if (hued.RespondsToSelector(new MonoTouch.ObjCRuntime.Selector("scale")))
hued = hued.Scale (new System.Drawing.SizeF(320, 47));
UINavigationBar.Appearance.SetBackgroundImage (hued, UIBarMetrics.Default);
UNFORTUNATELY you 'lose' the Retina resolution again, but at least the image is displayed correctly (just downsampled to 320 wide).
Hack #3
You could save the filtered image to disk and then set the UIAppearance using the image file on 'disk'. The code would look like this:
bool retina = false;
if (UIScreen.MainScreen.RespondsToSelector (new MonoTouch.ObjCRuntime.Selector ("scale"))) {
if (UIScreen.MainScreen.Scale == 2f) {
retina = true;
}
}
if (retina) {
NSError err; // uninitialized
UIImage img = ChangeHue (navBarImage);
img.AsPNG ().Save ("tempNavBar#2x.png", true, out err);
if (err != null && err.Code != 0) {
// error handling
}
UINavigationBar.Appearance.SetBackgroundImage (UIImage.FromFile ("tempNavBar.png"), UIBarMetrics.Default);
} else {
UINavigationBar.Appearance.SetBackgroundImage (ChangeHue (navBarImage), UIBarMetrics.Default);
}
The benefit of this final hack is that the image looks correct (ie. Retina resolution is preserved).
I'm still looking for the "perfect" solution, but at least these ideas 'fix' your problem one way or another...
