Render H.264 video frames in DirectX 11 - C#

I am new to DirectX. I am trying to write a custom IP camera video player, for which I am using DirectX 11 to render the decoded images, with a WPF GUI as my front end.
I am a C# developer and previously used Managed DirectX, which is no longer updated by Microsoft, hence the move to WPF and DirectX 11.
All parts of my application up to the rendering of the frames are working fine.
I have managed to create a D3DImage source to be used in the WPF app, and have successfully created my viewports and my device, including the shared resource, since D3DImage only works with DirectX 9. I am using SharpDX as the wrapper for the DirectX API.
Now my problem is that I can't seem to find a way to create or update a texture from the decoded image bytes, or work out what the correct way would be to render the decoded image from the bytes received.
Any help with this would be great, or if someone could point me in the right direction as to how this should be approached.
Thanks.

After nearly 2 weeks of searching and trying to find a solution to my stated problem, I have finally found one, shown below.
However, while this does display the image, it is not displayed as expected; still, I believe it is a start, as the code below answers my original question.
Device.ImmediateContext.ClearRenderTargetView(this.m_RenderTargetView, Color4.Black);

// Dynamic, CPU-writable texture that receives each decoded frame.
Texture2DDescription colordesc = new Texture2DDescription
{
    BindFlags = BindFlags.ShaderResource,
    Format = m_PixelFormat,
    Width = iWidth,
    Height = iHeight,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Dynamic,
    OptionFlags = ResourceOptionFlags.None,
    CpuAccessFlags = CpuAccessFlags.Write,
    ArraySize = 1
};
Texture2D newFrameTexture = new Texture2D(this.Device, colordesc);

// Map the texture and copy the decoded bytes row by row, honouring the row pitch.
DataStream dtStream = null;
DataBox dBox = Device.ImmediateContext.MapSubresource(newFrameTexture, 0, MapMode.WriteDiscard, MapFlags.None, out dtStream);
if (dtStream != null)
{
    int iRowPitch = dBox.RowPitch;
    for (int iHeightIndex = 0; iHeightIndex < iHeight; iHeightIndex++)
    {
        // Copy one row of image bytes to the texture.
        dtStream.Position = iHeightIndex * iRowPitch;
        Marshal.Copy(decodedData, iHeightIndex * iWidth * 4,
                     new IntPtr(dtStream.DataPointer.ToInt64() + iHeightIndex * iRowPitch), iWidth * 4);
    }
}
Device.ImmediateContext.UnmapSubresource(newFrameTexture, 0);

// Copy the frame into the shared render target, bind it to the pixel shader and draw a quad.
Device.ImmediateContext.CopySubresourceRegion(newFrameTexture, 0, null, this.RenderTarget, 0);
var shaderRescVw = new ShaderResourceView(this.Device, this.RenderTarget);
Device.ImmediateContext.PixelShader.SetShaderResource(0, shaderRescVw);
Device.ImmediateContext.Draw(6, 0);
Device.ImmediateContext.Flush();
this.D3DSurface.InvalidateD3DImage();

Disposer.SafeDispose(ref newFrameTexture);
With the code above I am now able to populate the texture with the new image data I receive, but the images are not being rendered with the correct colors/pixels, as shown within the red box in the image below.
Screenshot of the rendered image:
The image bytes are received from the decoder in the BGRA32 pixel format.
Any suggestion to resolve this would be very helpful.
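For reference, BGRA32 frames correspond to the DXGI format B8G8R8A8_UNorm, so the frame texture (and the render target it is copied into) would normally be created with that format. Below is a minimal sketch, reusing m_PixelFormat, iWidth and iHeight from the code above; treat it as an assumption about the color problem, not a confirmed fix:

m_PixelFormat = SharpDX.DXGI.Format.B8G8R8A8_UNorm;   // matches the decoder's BGRA32 byte layout

Texture2DDescription colordesc = new Texture2DDescription
{
    BindFlags = BindFlags.ShaderResource,
    Format = m_PixelFormat,
    Width = iWidth,
    Height = iHeight,
    MipLevels = 1,
    ArraySize = 1,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Dynamic,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None
};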

Related

Converting an OpenCVSharp4 Rectangle to an IronOCR CropRectangle (System.Drawing.Rectangle)

I have a project in which I'm using IronOCR to read an area defined with OpenCVSharp4. The problem I'm encountering is with IronOCR's CropRectangle, which uses System.Drawing.Rectangle, and for some reason my OpenCvSharp.Rect cannot be converted to it; by this I mean that when I finally call IronOCR's Input.Add(Image, ContentArea), the results I get are not what I expect.
Below the code I have attached a picture of what the code currently produces.
Don't worry about IronOCR not getting the correct letters; I believe that has to do with it creating a weird box and some letters getting cut off. It works if I make the area larger via the crop rectangle's width and height.
var Ocr = new IronTesseract();
String[] splitText;
using (var Input = new OcrInput())
{
    // OpenCV: the region of the image to read
    OpenCvSharp.Rect rect = new OpenCvSharp.Rect(55, 107, 219, 264);

    // IronOCR: build the crop rectangle from the OpenCV rect
    Rectangle ContentArea = new Rectangle() { X = rect.TopLeft.X, Y = rect.TopLeft.Y, Height = rect.Height, Width = rect.Width };
    CropRectangle r = new CropRectangle(ContentArea);
    CordBox.Text = r.Rectangle.ToString();

    // OpenCV: draw the rectangles for debugging
    resizedMat.Rectangle(rect.TopLeft, rect.BottomRight, Scalar.Blue, 3);
    resizedMat.Rectangle(new OpenCvSharp.Point(55, 107), new OpenCvSharp.Point(219, 264), Scalar.Brown, 3);
    Cv2.ImShow("resizedMat", resizedMat);

    // IronOCR: read the cropped area
    Input.Add(@"C:\Projects\AnExperiment\WpfApp1\Images\TestSave.PNG", r);
    Input.EnhanceResolution();
    var Result = Ocr.Read(Input);
    ResultBox.Text = Result.Text;
    splitText = ResultBox.Text.Split('\n');
}
So here is the solution I came up with.
This problem is an OpenCvSharp4 one: OpenCvSharp4's rectangle for some reason does not have coordinates that match System.Drawing.Rectangle. I have posted this on the OpenCvSharp4 GitHub and the maintainer says it's fine, but it's not.
So I switched over to the Emgu NuGet package; it is better suited to C# applications and is an OpenCV wrapper made for C# (I was just scared of giving it a try before because I never really understood it).
Emgu uses System.Drawing.Rectangle by default instead of something like an OpenCvSharp4 rectangle, so everything matches up nicely.
Mat testMat = new Mat();
System.Drawing.Rectangle roi = CvInvoke.SelectROI("main", testMat);
After finding this out the rest was pretty easy, so the final code below shows how it was transformed. (For reference, Emgu.CV.CvInvoke is how it is called, and Emgu.CV.BitmapExtension is its own separate NuGet package.)
// Get the original image
fullPage = CvInvoke.Imread(@"C:\Projects\AnExperiment\WpfApp1\Images\TestImageFinalFilled.png");

// Resize it so it works with the coordinates stored previously in a JSON file
CvInvoke.Resize(fullPage, resizedMat, EmguSetResolution(fullPage, dpi));

// Save the small version so IronOCR doesn't mess up
var bitmap = Emgu.CV.BitmapExtension.ToBitmap(resizedMat);
bitmap.Save(@"C:\Projects\AnExperiment\WpfApp1\Images\Test.PNG");

// Let the user select a box
System.Drawing.Rectangle roi = CvInvoke.SelectROI("main", resizedMat);
CvInvoke.DestroyWindow("main");

// Draw the rect for debugging
CvInvoke.Rectangle(resizedMat, roi, new MCvScalar(0, 0, 255), 2);

// Read the section we highlighted, using the saved resized image as the reference
var Ocr = new IronTesseract();
IronOcr.OcrResult ocrResult;
Ocr.UseCustomTesseractLanguageFile(@"C:\Projects\AnExperiment\WpfApp1\tessdata_best-main\eng.traineddata");
using (var Input = new OcrInput())
{
    CvInvoke.Rectangle(resizedMat, roi, new MCvScalar(0, 0, 255), 2);
    IronOcr.CropRectangle contentArea = new CropRectangle(roi);
    Input.AddImage(@"C:\Projects\AnExperiment\WpfApp1\Images\Test.PNG", contentArea);
    Input.EnhanceResolution();
    Input.Sharpen();
    Input.Contrast();
    ocrResult = Ocr.Read(Input);
}
File.Delete(@"C:\Projects\AnExperiment\WpfApp1\Images\Test.PNG");
CvInvoke.Imshow("m", resizedMat);
After all this I have some functions that push ocrResult.Text into the textbox and pull out the specific pieces I needed from it.
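Those functions aren't shown here; as a rough illustration only (the names are made up), the result text can be split into lines the same way the earlier snippet did:

// Hypothetical post-processing, mirroring the Split('\n') used in the first snippet.
ResultBox.Text = ocrResult.Text;
string[] splitText = ocrResult.Text.Split('\n');
foreach (string line in splitText)
{
    // ...pick out whichever fields are needed from each line...
}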

SharpDX Texture2D from DataStream

I want to show one of my WPF applications in VR and ran into a little problem when rendering the overlay texture directly from a data stream. The texture isn't displayed correctly.
First of all, I render my WPF application into a bitmap and save it to a stream:
RenderTargetBitmap bitmap = new RenderTargetBitmap((int)visual.ActualWidth, (int)visual.ActualHeight, 96, 96, PixelFormats.Pbgra32);
bitmap.Render(visual);
BitmapFrame frame = BitmapFrame.Create(bitmap);
encoder.Frames.Add(frame);    // encoder is a BitmapEncoder created elsewhere
m_vrStream.Position = 0;
encoder.Save(m_vrStream);
I don't want to use Unity for the VR part, so I switched to SharpDX and created a new Texture2D.
var description2D = new Texture2DDescription()
{
    Width = 1920,
    Height = 1080,
    ArraySize = 1,
    MipLevels = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
};
m_texture = new Texture2D(m_device, description2D,
    new[] { new SharpDX.DataBox(m_dataStream.DataPointer,
                                description2D.Width * (int)FormatHelper.SizeOfInBytes(description2D.Format),
                                0) });
This always throws a System.AccessViolationException. If I lower the width and height in the Texture2DDescription, the texture is displayed, but not correctly (random colored pixels).
If I first save the data stream into a .png file and load it with
m_texture = TextureLoader.CreateTexture2DFromBitmap(m_device, TextureLoader.LoadBitmap(new SharpDX.WIC.ImagingFactory2(), "test.png"));
everything looks fine.
I'm new to graphics programming and believe that I have a misunderstanding in the Texture2DDescription part. The width and height are hard-coded for testing purposes and fit the size of the image I'm generating.
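For comparison, a Texture2DDescription that is initialized from CPU data usually also spells out the usage and bind flags, and the DataBox row pitch has to describe raw, uncompressed pixel rows (width * 4 bytes for a 32-bit format) rather than an encoded image stream. A sketch under those assumptions (the raw pixel buffer and the B8G8R8A8 format choice are assumptions; WPF's Pbgra32 is BGRA-ordered):

using System.Runtime.InteropServices;
using SharpDX.Direct3D11;

// Sketch: create a shader-visible texture from raw 32-bit pixel data.
var desc = new Texture2DDescription
{
    Width = 1920,
    Height = 1080,
    ArraySize = 1,
    MipLevels = 1,
    Format = SharpDX.DXGI.Format.B8G8R8A8_UNorm,          // BGRA to match Pbgra32
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Immutable,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};

// Raw BGRA pixels, one byte per channel; fill this from the rendered bitmap
// (an encoded PNG stream would first have to be decoded back to pixels).
byte[] pixels = new byte[desc.Width * desc.Height * 4];

GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
try
{
    int rowPitch = desc.Width * 4;   // bytes per row of uncompressed 32-bit pixels
    var box = new SharpDX.DataBox(handle.AddrOfPinnedObject(), rowPitch, 0);
    m_texture = new Texture2D(m_device, desc, new[] { box });
}
finally
{
    handle.Free();
}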
Hope someone can give me some hints with that.
Thanks and regards

C#: SharpDX - Draw Basic Stars

The Issue
I've been searching for an answer to this issue for a few days now. I need help finding a way to render a basic star. I have the code to randomly generate the locations done; however, I am new to DirectX and came from the world of XNA and Unity. DirectX development seems overly complicated at the best of times. I have found a few tutorials, but I am finding them difficult to follow. I have been unable to render anything to the screen once I've cleared it. I'm using the basic setup as far as rendering goes; I haven't created any special classes or structs. I have been trying to follow the Richard's Software tutorials, which were converted to C# from the C++ in the book 3D Game Programming With DirectX 11 by Frank D. Luna. The farthest I have been able to get successfully is clearing to Color.CornflowerBlue.
Question(s)
Are there any simplistic methods to draw/render objects to the screen? I'm able to render text just fine, but images (sprites) and 3D meshes seem to be giving me issues. Is there a simplistic method to draw basic geometric shapes? For example: Primitives.DrawSphere(float radius, Vector3 location, Color c);
If there aren't any simplistic methods available to draw primitives, what is going to be the simplest approach to rendering stars? I can do spheres, sprites with alpha blending to simulate distance, billboards, etc. What will be the simplest method to implement?
How do I implement the simplest method identified by question 2 above? Code samples, tutorials (no videos), articles, etc. are greatly appreciated, as I am having a hard time tracking down good C# references; it would appear that most people are using Unity and Unreal these days, but I don't have those options.
Notes
I work in a government environment and am unable to use third-party tools that haven't been approved. The approval process is a nightmare, so third-party tools are typically a no-go. All supplied answers, documentation, samples, etc. should use SharpDX only.
My Code
My project is a Windows Forms application whose primary form is derived from RenderForm. I have created a single class called Engine that handles the DirectX code.
Engine.cs:
internal class Engine : IDisposable {
    #region Fields
    private Device device;
    private SwapChain swapChain;
    private DeviceContext context;
    private Texture2D backBuffer;
    private RenderTargetView renderView;
    private SynchronizationContext syncContext;
    private RenderForm renderForm;
    #endregion

    #region Events
    public event EventHandler Draw;
    public event EventHandler Update;
    private void SendDraw(object data) { Draw(this, new EventArgs()); }
    private void SendUpdate(object data) { Update(this, new EventArgs()); }
    #endregion

    #region Constructor(s)
    public Engine(RenderForm form) {
        SwapChainDescription description = new SwapChainDescription() {
            ModeDescription = new ModeDescription(form.Width, form.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
            SampleDescription = new SampleDescription(1, 0),
            Usage = Usage.RenderTargetOutput,
            BufferCount = 1,
            OutputHandle = form.Handle,
            IsWindowed = !form.IsFullscreen
        };
        // Create the device and swap chain, grab the back buffer and bind it as the render target.
        Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, description, out device, out swapChain);
        backBuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
        renderView = new RenderTargetView(device, backBuffer);
        context = device.ImmediateContext;
        context.OutputMerger.SetRenderTargets(renderView);
        context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
        renderForm = form;
    }
    #endregion

    #region Public Methods
    public void Initialize() {
        if (SynchronizationContext.Current != null)
            syncContext = SynchronizationContext.Current;
        else
            syncContext = new SynchronizationContext();
        RenderLoop.Run(renderForm, delegate() {
            context.ClearRenderTargetView(renderView, Color.CornflowerBlue);
            syncContext.Send(SendUpdate, null);
            syncContext.Send(SendDraw, null);
            swapChain.Present(0, 0);
        });
    }
    public void Dispose() { }
    #endregion
}
Form1.cs:
public partial class Form1 : RenderForm {
    private Engine gameEngine;
    int count = 0;

    public Form1() {
        InitializeComponent();
        gameEngine = new Engine(this);
        gameEngine.Update += GameEngine_Update;
        gameEngine.Draw += GameEngine_Draw;
        gameEngine.Initialize();
    }

    private void GameEngine_Update(object sender, EventArgs e) => Debug.WriteLine("Updated.");
    private void GameEngine_Draw(object sender, EventArgs e) => Debug.WriteLine($"I've drawn {++count} times.");
}
Final Remarks
Any help is appreciated at this point because it's going on day 4 and I am still struggling to understand most of the DirectX 11 code. I am by no means new to C# or development; I am just used to Windows Forms, ASP.NET, Unity, XNA, WPF, etc. This is my first experience with DirectX and it is definitely over the top - even worse than when I tried OpenGL ten years ago with hardly any development experience at all.
A few things to start with.
First, DirectX is a very low-level API. The only way to get a lower-level API on Windows is to talk to the graphics driver directly, which would be even more of a nightmare. As a result, things tend to be extremely generic, which allows for high flexibility at the cost of being fairly complicated. If you ever wondered what Unity or Unreal were doing under the hood, this is it.
Second, DirectX, and Direct3D in particular, is written in and for C++. C# resources are hard to come by because the API wasn't really intended for use from C# (not that that's a good thing). As a result, discarding the documentation and answers written for C++ is a really bad idea. All the caveats and restrictions on the C++ API also apply to you in the C# world, and you will need to know them.
Third, I will not be able to provide you with an entirely C#/SharpDX answer, since I don't use DirectX from C#, but from C++. I'll do what I can to provide accurate mappings, but be aware you are using an API wrapper, which can and will hide some of the details from you. The best option for discovering those details is to have the SharpDX source code open as you go through the C++ documentation.
Now on to the questions you have. Strap in, this will be long.
First up: there's no simple way to render a primitive object in Direct3D 11. Rendering a six-faced cube takes the same steps as rendering a 200-million-vertex mesh of New York City.
In the rendering loop, we need to do several things to render anything. In the list below, you've already done steps 1 and 7, and partially done step 2:
1. Clear the back buffer and depth/stencil buffers.
2. Set the input layout, shaders, pipeline state objects, render targets, and viewports used in the current rendering pass.
3. Set the vertex buffer, index buffer, constant buffers, shader resources, and samplers used by the current mesh being drawn.
4. Issue the draw call for the given mesh.
5. Repeat steps 3 and 4 for all meshes that must be drawn in the current rendering pass.
6. Repeat steps 2 through 5 for all passes defined by the application.
7. Present the swap chain.
Fairly complex, just to render something as simple as a cube. This process needs several objects, of which we already have a few:
A Device object instance, for creating new D3D objects
A DeviceContext object instance, for issuing drawing operations and setting pipeline state
A DXGI.SwapChain object instance, to manage the back buffer(s) and present the next buffer in the chain to the desktop
A Texture2D object instance, to represent the back buffer owned by the swap chain
A RenderTargetView object instance, to allow the graphics card to use a texture as the destination for a rendering operation
A DepthStencilView object instance, if we are using the depth buffer
VertexShader and PixelShader object instances, representing the shaders used by the GPU during the vertex and pixel shader stages of the graphics pipeline
An InputLayout object instance, representing the exact layout of one vertex in our vertex buffer
A set of Buffer object instances, representing the vertex buffers and index buffers containing our geometry and the constant buffers containing parameters for our shaders
A set of Texture2D object instances with associated ShaderResourceView object instances, representing any textures or surface maps to be applied to our geometry
A set of SamplerState object instances, for sampling the above textures from our shaders
A RasterizerState object instance, to describe the culling, depth biasing, multisampling, and antialiasing parameters the rasterizer should use
A DepthStencilState object instance, to describe how the GPU should conduct the depth test, what causes a depth test fail, and what a fail should do
A BlendState object instance, to describe how the GPU should blend multiple render targets together
Now, what does this look like as actual C# code?
Probably something like this (for rendering):
//Step 1 - Clear the targets
// Clear the back buffer to blue
context.ClearRenderTargetView(BackBufferView, Color.CornflowerBlue);
// Clear the depth buffer to the maximum value.
context.ClearDepthStencilView(DepthStencilBuffer, DepthStencilClearFlags.Depth, 1.0f, 0);
//Step 2 - Set up the pipeline.
// Input Assembler (IA) stage
context.InputAssembler.InputLayout = VertexBufferLayout;
// Vertex Shader (VS) stage
context.VertexShader.Set(SimpleVertexShader);
// Rasterizer (RS) stage
context.Rasterizer.State = SimpleRasterState;
context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
// Pixel Shader (PS) stage
context.PixelShader.Set(SimplePixelShader);
// Output Merger (OM) stage
context.OutputMerger.SetRenderTargets(DepthStencilBuffer, BackBufferView);
context.OutputMerger.SetDepthStencilState(SimpleDepthStencilState);
context.OutputMerger.SetBlendState(SimpleBlendState);
//Step 3 - Set up the geometry
// Vertex buffers
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, sizeof(Vertex), 0));
// Index buffer
context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R16_UInt, 0);
// Constant buffers
context.VertexShader.SetConstantBuffer(0, TransformationMatrixBuffer);
context.PixelShader.SetConstantBuffer(0, AmbientLightBuffer);
// Shader resources
context.PixelShader.SetShaderResource(0, MeshTexture);
// Samplers
context.PixelShader.SetSampler(0, MeshTextureSampler);
//Step 4 - Draw the object
context.DrawIndexed(IndexBuffer.Count, 0, 0);
//Step 5 - Advance to the next object and repeat.
// No next object currently.
//Step 6 - Advance to the next pipeline configuration
// No next pipeline configuration currently.
//Step 7 - Present to the screen.
swapChain.Present(0, 0);
The vertex and pixel shaders in this example code expect:
A model with position, normal, and texture coordinates per vertex
The position of the camera in world space, the world-view-projection matrix, world inverse transpose matrix, and world matrix as a vertex shader constant buffer
The ambient, diffuse, and specular colors of the light, as well as its position in the world, as a pixel shader constant buffer
The 2D texture to apply to the surface of the model in the pixel shader, and
The sampler to use when accessing the pixels of the above texture.
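The answer does not include the CPU-side definitions of those structures. As a rough sketch, the names Vertex, TransformationMatrixParameters and AmbientLightParameters match the identifiers used in the setup code further down, but the exact fields are assumptions and have to mirror whatever the HLSL declares:

using System.Runtime.InteropServices;
using SharpDX;

// One vertex: position, normal and texture coordinates - must line up with the InputLayout below.
[StructLayout(LayoutKind.Sequential)]
struct Vertex
{
    public Vector3 Position;
    public Vector3 Normal;
    public Vector2 TexCoord;
}

// Vertex shader constant buffer; a cbuffer must be a multiple of 16 bytes in size.
[StructLayout(LayoutKind.Sequential)]
struct TransformationMatrixParameters
{
    public Matrix World;
    public Matrix WorldInverseTranspose;
    public Matrix WorldViewProjection;
    public Vector3 CameraPosition;
    private float padding;            // pads the Vector3 out to 16 bytes
}

// Pixel shader constant buffer describing a single light.
[StructLayout(LayoutKind.Sequential)]
struct AmbientLightParameters
{
    public Color4 Ambient;
    public Color4 Diffuse;
    public Color4 Specular;
    public Vector3 LightPosition;
    private float padding;
}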
Now the rendering code itself is fairly simple - the setup is the harder part of it:
//Create the vertex buffer
VertexBuffer = new Buffer(device, RawVertexInfo, new BufferDescription {
SizeInBytes = RawVertexInfo.Length * sizeof(Vertex),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.VertexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
StructureByteStride = sizeof(Vertex)
});
//Create the index buffer
IndexCount = (int)RawIndexInfo.Length;
IndexBuffer = new Buffer(device, RawIndexInfo, new BufferDescription {
SizeInBytes = IndexCount * sizeof(ushort),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.IndexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
StructureByteStride = sizeof(ushort)
});
//Create the Depth/Stencil view.
Texture2D DepthStencilTexture = new Texture2D(device, new Texture2DDescription {
Format = Format.D32_Float,
BindFlags = BindFlags.DepthStencil,
Usage = ResourceUsage.Default,
Height = renderForm.Height,
Width = renderForm.Width,
ArraySize = 1,
MipLevels = 1,
SampleDescription = new SampleDescription {
Count = 1,
Quality = 0,
},
CpuAccessFlags = 0,
OptionFlags = 0
});
DepthStencilBuffer = new DepthStencilView(device, DepthStencilTexture);
SimpleDepthStencilState = new DepthStencilState(device, new DepthStencilStateDescription {
IsDepthEnabled = true,
DepthComparison = Comparison.Less,
});
//default blend state - can be omitted from the application if defaulted.
SimpleBlendState = new BlendState(device, new BlendStateDescription {
});
//Default rasterizer state - can be omitted from the application if defaulted.
SimpleRasterState = new RasterizerState(device, new RasterizerStateDescription {
CullMode = CullMode.Back,
IsFrontCounterClockwise = false,
});
// Input layout.
VertexBufferLayout = new InputLayout(device, VertexShaderByteCode, new InputElement[] {
new InputElement {
SemanticName = "POSITION",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32B32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = 0,
InstanceDataStepRate = 0,
},
new InputElement {
SemanticName = "NORMAL",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32B32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = InputElement.AppendAligned,
InstanceDataStepRate = 0,
},
new InputElement {
SemanticName = "TEXCOORD0",
Slot = 0,
SemanticIndex = 0,
Format = Format.R32G32_Float,
Classification = InputClassification.PerVertexData,
AlignedByteOffset = InputElement.AppendAligned,
InstanceDataStepRate = 0,
},
});
//Vertex/Pixel shaders
SimpleVertexShader = new VertexShader(device, VertexShaderByteCode);
SimplePixelShader = new PixelShader(device, PixelShaderByteCode);
//Constant buffers
TransformationMatrixBuffer = new Buffer(device, new BufferDescription {
SizeInBytes = sizeof(TransformationMatrixParameters),
BindFlags = BindFlags.ConstantBuffer,
Usage = ResourceUsage.Default,
CpuAccessFlags = CpuAccessFlags.None,
});
AmbientLightBuffer = new Buffer(device, new BufferDescription {
SizeInBytes = sizeof(AmbientLightParameters),
BindFlags = BindFlags.ConstantBuffer,
Usage = ResourceUsage.Default,
CpuAccessFlags = CpuAccessFlags.None,
});
// Mesh texture
MeshTexture = new Texture2D(device, new Texture2DDescription {
Format = Format.B8G8R8A8_UNorm,
BindFlags = BindFlags.ShaderResource,
Usage = ResourceUsage.Default,
Height = MeshImage.Height,
Width = MeshImage.Width,
ArraySize = 1,
MipLevels = 0,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SampleDescription = new SampleDescription {
Count = 1,
Quality = 0,
}
});
//Shader view for the texture
MeshTextureView = new ShaderResourceView(device, MeshTexture);
//Sampler for the texture
MeshTextureSampler = new SamplerState(device, new SamplerStateDescription {
AddressU = TextureAddressMode.Clamp,
AddressV = TextureAddressMode.Clamp,
AddressW = TextureAddressMode.Border,
BorderColor = new SharpDX.Mathematics.Interop.RawColor4(255, 0, 255, 255),
Filter = Filter.MaximumMinMagMipLinear,
ComparisonFunction = Comparison.Never,
MaximumLod = float.MaxValue,
MinimumLod = float.MinValue,
MaximumAnisotropy = 1,
MipLodBias = 0,
});
As you can see, there's a lot of stuff to get through.
As this has already gotten a lot longer than most people have the patience for, I'd recommend getting and reading the book by Frank D. Luna, as he does a much better job of explaining the pipeline stages and the expectations Direct3D has of your application.
I'd also recommend reading through the C++ documentation for the Direct3D API, as, again, everything there will apply to SharpDX.
In addition, you'll want to look into HLSL, as you'll need to define and compile a shader to make any of the above code even work, and if you want any texturing, you'll need to figure out how to get the image data into Direct3D.
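A minimal sketch of that compile step using SharpDX.D3DCompiler follows; the file name and entry-point names are placeholders, and inputElements stands for the InputElement[] array shown in the setup code above:

using SharpDX.D3DCompiler;

// Compile the HLSL source once at startup ("VSMain"/"PSMain" are assumed entry points).
var vsByteCode = ShaderBytecode.CompileFromFile("shaders.hlsl", "VSMain", "vs_5_0");
var psByteCode = ShaderBytecode.CompileFromFile("shaders.hlsl", "PSMain", "ps_5_0");

SimpleVertexShader = new VertexShader(device, vsByteCode);
SimplePixelShader = new PixelShader(device, psByteCode);

// The vertex shader bytecode is also what the InputLayout is validated against.
VertexBufferLayout = new InputLayout(device, ShaderSignature.GetInputSignature(vsByteCode), inputElements);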
On the bright side, if you manage to implement all of this in a clean, extensible manner, you'll be able to render practically anything with little additional effort.

Reverse Byte Array when converting an animated gif to a Unity Texture2D Array

I am currently trying to load animated GIFs into a Unity application (at runtime), but I have run into a bit of a snag:
I am using System.Drawing to load a byte array from each GIF frame and then use Unity's LoadRawTextureData function to create the texture. The problem I have now is that the order of the bytes in the array is not what Unity expects (even though I specify the format as ARGB32 for both). Apparently, Unity either expects ABGR and System.Drawing gives me RGBA, or vice versa. Also, the image is flipped vertically (but that can be easily remedied).
This is my current code, which works, but I have to reverse the array, which increases the time it takes by a factor of 10-20. It is still at least twice as fast as copying each pixel separately, but I would prefer to get closer to the performance I get without reversing.
GraphicsUnit graphicsUnit = GraphicsUnit.Pixel;
RectangleF rect1 = gifImage.GetBounds(ref graphicsUnit);
Rectangle rect2 = new Rectangle((int)rect1.X, (int)rect1.Y, (int)rect1.Width, (int)rect1.Height);
byte[] rgbValues = new byte[rect2.Width * rect2.Height * 4];

for (int i = 0; i < frameCount; i++) {
    // Select and redraw the current GIF frame into a fresh bitmap.
    gifImage.SelectActiveFrame(dimension, i);
    Bitmap frame = new Bitmap(gifImage.Width, gifImage.Height);
    System.Drawing.Graphics.FromImage(frame).DrawImage(gifImage, Point.Empty);
    frame.RotateFlip(RotateFlipType.RotateNoneFlipX);

    Texture2D frameTexture = new Texture2D(frame.Width, frame.Height, TextureFormat.ARGB32, false);

    // Copy the raw pixel data out of the bitmap.
    BitmapData data = frame.LockBits(rect2, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    Marshal.Copy(data.Scan0, rgbValues, 0, rgbValues.Length);
    frame.UnlockBits(data);

    // Reversing the whole array fixes the channel order, but is the slow part.
    frameTexture.LoadRawTextureData(rgbValues.Reverse().ToArray());
    frameTexture.Apply();
    gifFrames[i] = frameTexture;
}
Is there something else I could try to get the bytes in the 'correct' order?
You can skip the .Reverse() if you construct your Texture2D using TextureFormat.BGRA32. However, this doesn't really address why it is necessary.
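A sketch of what that change would look like inside the loop above (untested; the rest of the loop is assumed to stay the same):

// Let Unity interpret the GDI+ bytes (B, G, R, A in memory) directly instead of reversing the array.
Texture2D frameTexture = new Texture2D(frame.Width, frame.Height, TextureFormat.BGRA32, false);

BitmapData data = frame.LockBits(rect2, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(data.Scan0, rgbValues, 0, rgbValues.Length);
frame.UnlockBits(data);

frameTexture.LoadRawTextureData(rgbValues);   // no Reverse() needed
frameTexture.Apply();
gifFrames[i] = frameTexture;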

OpenNI show RGB camera feed

I have spent the last week trying to make my C# program show both the depth feed and the RGB feed (similar to how /Samples/Bin64/Release/NiViewer64.exe shows both feeds in a window).
Project specs: C#, VS2013 Express; OpenNI, using a modified SimpleViewer.net (which has two depth feeds); Asus Xtion Pro Live.
I would like one of the feeds to become a normal camera feed instead of the depth feed.
I'm guessing it has something to do with this:
MapOutputMode mapMode = this.depth.MapOutputMode;
this.bitmap = new Bitmap((int)mapMode.XRes, (int)mapMode.YRes, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
Any ideas?
Finally figured it out, thanks to another programmer.
image = context.FindExistingNode(NodeType.Image) as ImageGenerator;
ImageMetaData imd = image.GetMetaData();
lock (this)
{
    //**************************************//
    //***********RGB Camera Feed************//
    //**************************************//
    Rectangle rect = new Rectangle(0, 0, this.bitmap.Width, this.bitmap.Height);
    BitmapData data = this.camera_feed.LockBits(rect, ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    byte* pDest = (byte*)data.Scan0.ToPointer();
    byte* imstp = (byte*)image.ImageMapPtr.ToPointer();

    // Set pixels: the OpenNI image map is RGB, the bitmap is BGR, so swap the red and blue channels.
    for (int i = 0; i < imd.DataSize; i += 3, pDest += 3, imstp += 3)
    {
        pDest[0] = imstp[2];
        pDest[1] = imstp[1];
        pDest[2] = imstp[0];
    }

    this.camera_feed.UnlockBits(data);
}
and declare this somewhere:
public ImageGenerator image;
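For context, in a SimpleViewer.net-style reader loop this block would typically run after the OpenNI context has been updated for the current frame. A hedged sketch of that framing, with field names taken from the stock sample (shouldRun, depth), which may differ in a modified project:

while (this.shouldRun)
{
    // Block until new data is available on the registered nodes.
    this.context.WaitOneUpdateAll(this.depth);

    lock (this)
    {
        // ...the depth processing and the RGB block shown above...
    }

    this.Invalidate();   // repaint the form so the updated bitmaps are drawn
}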
