I'm trying to get an image in SkiaSharp that's rotated left by 90 degrees to be centered and fit perfectly on the canvas. I've tried two ways: my own custom way, and another that seems like a popular solution, but maybe I'm not understanding how it works correctly?
My own way.
Here is the code:
SKSurface surf = e.Surface;
SKCanvas canvas = surf.Canvas;
SKSize size = canvasView.CanvasSize;
canvas.Clear();
SKRect rect = SKRect.Create(0.0f, 0.0f, size.Height, size.Width);
canvas.RotateDegrees(85);
canvas.DrawBitmap(m_bm, rect);
"m_bm" is a bitmap that was retrieved in a separate function. That function is:
// Let user take a picture.
var result = await MediaPicker.CapturePhotoAsync(new MediaPickerOptions
{
    Title = "Take a picture"
});
// Save stream.
var stream = await result.OpenReadAsync();
// Create the bitmap.
m_bm = SKBitmap.Decode(stream);
// Set to true because the image will be prepared soon.
m_displayedImage = true;
I only put 85 instead of 90 because I wanted to visually watch it getting closer, but when I do that, it goes off screen. I'm coming from a game programming background, where this is normally solved by taking the width of whatever we're working with (like the player in a game) and adding it to the x position, and boom. But in Xamarin that didn't work. That's my own attempt. Then I hit the internet, of course, to find help, and a different implementation was suggested to me.
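For what it's worth, the game-programming intuition does carry over to SkiaSharp; the catch is ordering, since canvas transforms apply to subsequent draw calls and compose in the order they're issued. A minimal sketch of the translate-then-rotate version (assuming a PaintSurface handler's event args e and a decoded m_bm; not code from the original post):

// Hedged sketch: rotate 90 degrees to the left and keep the result on screen.
SKCanvas canvas = e.Surface.Canvas;
canvas.Clear();
canvas.Save();
canvas.Translate(0, m_bm.Width);   // push the origin down by the rotated image's height
canvas.RotateDegrees(-90);         // negative = counter-clockwise ("left") in Skia
canvas.DrawBitmap(m_bm, 0, 0);     // the bitmap now lands with its top-left at (0, 0)
canvas.Restore();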
Popular solution.
See here for this popular solution; it's the FIRST answer to that user's question. The code I used is SLIGHTLY different because I didn't see the point in returning an image from that user's function. Here it is:
// Save stream.
var stream = await result.OpenReadAsync();
using (var bitmap = SKBitmap.Decode(stream))
{
    var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
    using (var surface = new SKCanvas(rotated))
    {
        surface.Clear();
        surface.Translate(rotated.Height, rotated.Width);
        surface.RotateDegrees(90);
        surface.DrawBitmap(bitmap, 0, 0);
    }
}
I'm drawing the bitmap with the canvas, and I thought that would work because in other code samples I tested it did exactly that, so I must not be rotating properly, or something else is off?
The link that @Cheesebaron gave me in reply to the original post ended up working out. A new issue has come up, but I'll google that myself. Here's my own code:
namespace BugApp
{
    public partial class MainPage : ContentPage
    {
        // Save bitmaps for later use.
        static SKBitmap m_bm;
        static SKBitmap m_editedBm;

        // Boolean for displaying the image captured with the camera.
        bool m_displayedImage;

        public MainPage()
        {
            // Set to explicit default values to always be in control of the assignments.
            m_bm = null;
            m_editedBm = null;

            // No picture has been taken yet.
            m_displayedImage = false;

            InitializeComponent();
        }

        // Assigned to the button in the xaml page.
        private async void SnapPicture(Object sender, System.EventArgs e)
        {
            // Let user take a picture.
            var result = await MediaPicker.CapturePhotoAsync(new MediaPickerOptions
            {
                Title = "Take a picture"
            });

            // Save stream.
            var stream = await result.OpenReadAsync();

            // Create the bitmap.
            m_bm = SKBitmap.Decode(stream);

            // Get the rotated image.
            m_editedBm = Rotate();

            // Set to true because the image will be prepared soon.
            m_displayedImage = true;
        }

        public static SKBitmap Rotate()
        {
            using (var bitmap = m_bm)
            {
                var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
                using (var surface = new SKCanvas(rotated))
                {
                    surface.Translate(bitmap.Width, 0);
                    surface.RotateDegrees(90);
                    surface.DrawBitmap(bitmap, 0, 0);
                }
                return rotated;
            }
        }

        private void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs e)
        {
            if (m_bm != null && m_displayedImage == true)
            {
                e.Surface.Canvas.Clear();

                // Draw in a new rect space.
                e.Surface.Canvas.DrawBitmap(m_editedBm, new SKRect(0.0f, 0.0f, 300.0f, 300.0f));

                // ---Testing.
                // e.Surface.Canvas.DrawBitmap(m_editedBm, new SKRect(112, 238, 184, 310), new SKRect(0, 0, 9, 9));

                // Avoid having this function launch again for now.
                m_displayedImage = false;
            }
        }
    }
}
The main portion of code that matters is the Rotate function, which came from here: Link.
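For the remaining issue of centering and fitting, one hedged approach is to compute an aspect-fit rectangle before drawing. A sketch, assuming the rotated bitmap from Rotate() and the canvas size from the paint event; the scaling math is standard letterboxing, not code from the original post:

// Hedged sketch: scale uniformly so the whole bitmap fits, then center it.
SKCanvas canvas = e.Surface.Canvas;
SKImageInfo info = e.Info;
canvas.Clear();
float scale = Math.Min((float)info.Width / m_editedBm.Width,
                       (float)info.Height / m_editedBm.Height);
float w = m_editedBm.Width * scale;
float h = m_editedBm.Height * scale;
SKRect dest = SKRect.Create((info.Width - w) / 2f, (info.Height - h) / 2f, w, h);
canvas.DrawBitmap(m_editedBm, dest);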
Thanks to everyone that replied.
The Issue
I've been searching for an answer to this issue for a few days now. I need help finding a way to generate a basic star. I have the code to randomly generate the locations done; however, I am new to DirectX, coming from the world of XNA and Unity. DirectX development seems overly complicated at the best of times. I have found a few tutorials, but I am finding them difficult to follow. I have been unable to render anything to the screen once I've cleared it. I'm using the basic setup as far as rendering goes; I haven't created any special classes or structs. I have been trying to follow the Richards Software tutorials, which port the C++ samples from Frank D. Luna's book 3D Game Programming with DirectX 11 to C#. The farthest I have gotten is clearing the screen to Color.CornflowerBlue.
Question(s)
1. Are there any simple methods to draw/render objects to the screen? I'm able to render text just fine, but images (sprites) and 3D meshes seem to be giving me issues. Is there a simple method to draw basic geometric shapes? For example: Primitives.DrawSphere(float radius, Vector3 location, Color c);
2. If there aren't any simple methods available to draw primitives, what is going to be the simplest approach to rendering stars? I can do spheres, sprites with alpha blending to simulate distance, billboards, etc. What will be the simplest method to implement?
3. How do I implement the simplest method identified by question 2 above? Code samples, tutorials (no videos), articles, etc. are greatly appreciated, as I am having a hard time tracking down good C# references; it would appear that most people are using Unity and Unreal these days, but I don't have those options.
Notes
I work in a government environment and am unable to use third-party tools that haven't been approved. The approval process is a nightmare, so third-party tools are typically a no-go. All supplied answers, documentation, samples, etc. should strictly use SharpDX.
My Code
My project is a Windows Forms application where the primary form has been derived from RenderForm. I have created a single class called Engine that handles the DirectX code.
Engine.cs:
internal class Engine : IDisposable {
    #region Fields
    private Device device;
    private SwapChain swapChain;
    private DeviceContext context;
    private Texture2D backBuffer;
    private RenderTargetView renderView;
    private RenderForm renderForm;
    private SynchronizationContext syncContext;
    #endregion

    #region Events
    public event EventHandler Draw;
    public event EventHandler Update;

    private void SendDraw(object data) { Draw(this, new EventArgs()); }
    private void SendUpdate(object data) { Update(this, new EventArgs()); }
    #endregion

    #region Constructor(s)
    public Engine(RenderForm form) {
        SwapChainDescription description = new SwapChainDescription() {
            ModeDescription = new ModeDescription(form.Width, form.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
            SampleDescription = new SampleDescription(1, 0),
            Usage = Usage.RenderTargetOutput,
            BufferCount = 1,
            OutputHandle = form.Handle,
            IsWindowed = !form.IsFullscreen
        };

        Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug, description, out device, out swapChain);
        backBuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
        renderView = new RenderTargetView(device, backBuffer);
        context = device.ImmediateContext;
        context.OutputMerger.SetRenderTargets(renderView);
        context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
        renderForm = form;
    }
    #endregion

    #region Public Methods
    public void Initialize() {
        if (SynchronizationContext.Current != null)
            syncContext = SynchronizationContext.Current;
        else
            syncContext = new SynchronizationContext();

        RenderLoop.Run(renderForm, delegate() {
            context.ClearRenderTargetView(renderView, Color.CornflowerBlue);
            syncContext.Send(SendUpdate, null);
            syncContext.Send(SendDraw, null);
            swapChain.Present(0, 0);
        });
    }

    public void Dispose() { }
    #endregion
}
Form1.cs:
public partial class Form1 : RenderForm {
    private Engine gameEngine;
    int count = 0;

    public Form1() {
        InitializeComponent();
        gameEngine = new Engine(this);
        gameEngine.Update += GameEngine_Update;
        gameEngine.Draw += GameEngine_Draw;
        gameEngine.Initialize();
    }

    private void GameEngine_Update(object sender, EventArgs e) => Debug.WriteLine("Updated.");
    private void GameEngine_Draw(object sender, EventArgs e) => Debug.WriteLine($"I've drawn {++count} times.");
}
Final Remarks
Any help is appreciated at this point because it's going on day 4 and I am still struggling to understand most of the DirectX 11 code. I am by no means new to C# or development; I am just used to Windows Forms, ASP.NET, Unity, XNA, WPF, etc. This is my first experience with DirectX, and it's definitely over the top. Even worse than when I tried OpenGL ten years ago, with hardly any development experience at all.
A few things to start with.
First, DirectX is a very low level API. The only way to get a lower level API on Windows is to go talk to the graphics driver directly, which would be even more of a nightmare. As a result, things tend to be extremely generic, which allows for high flexibility at the cost of being fairly complicated. If you ever wondered what Unity or Unreal were doing under the hood, this is it.
Second, DirectX, and Direct3D in particular, is written in and for C++. C# resources are hard to come by because the API wasn't really intended for use from C# (not that that's a good thing). As a result, discarding the documentation and answers written for C++ is a really bad idea. All the caveats and restrictions on the C++ API also apply to you in the C# world, and you will need to know them.
Third, I will not be able to provide you an entirely C#/SharpDX answer, since I don't use DirectX from C#, but from C++. I'll do what I can to provide accurate mappings, but be aware you are using an API wrapper, which can and will hide some of the details from you. Best option to discover those details would be to have the source code of SharpDX up as you go through the C++ documentation.
Now on to the questions you have. Strap in, this will be long.
First up: there's no simple way to render a primitive object in Direct3D 11. Rendering a six-faced cube takes the same steps as rendering a 200-million-vertex mesh of New York City.
In the rendering loop, we need to do several things to render anything. In the list below, you've already done steps 1 and 7, and partially done step 2:
1. Clear the back buffer and depth/stencil buffers.
2. Set the input layout, shaders, pipeline state objects, render targets, and viewports used in the current rendering pass.
3. Set the vertex buffer, index buffer, constant buffers, shader resources, and samplers used by the current mesh being drawn.
4. Issue the draw call for the given mesh.
5. Repeat steps 3 and 4 for all meshes that must be drawn in the current rendering pass.
6. Repeat steps 2 through 5 for all passes defined by the application.
7. Present the swap chain.
Fairly complex, just to render something as simple as a cube. This process needs several objects, of which we already have a few:
A Device object instance, for creating new D3D objects
A DeviceContext object instance, for issuing drawing operations and setting pipeline state
A DXGI.SwapChain object instance, to manage the back buffer(s) and present the next buffer in the chain to the desktop
A Texture2D object instance, to represent the back buffer owned by the swap chain
A RenderTargetView object instance, to allow the graphics card to use a texture as the destination for a rendering operation
A DepthStencilView object instance, if we're using the depth buffer
VertexShader and PixelShader object instances, representing the shaders used by the GPU during the vertex and pixel shader stages of the graphics pipeline
An InputLayout object instance, representing the exact layout of one vertex in our vertex buffer
A set of Buffer object instances, representing the vertex buffers and index buffers containing our geometry and the constant buffers containing parameters for our shaders
A set of Texture2D object instances with associated ShaderResourceView object instances, representing any textures or surface maps to be applied to our geometry
A set of SamplerState object instances, for sampling the above textures from our shaders
A RasterizerState object instance, to describe the culling, depth biasing, multisampling, and antialiasing parameters the rasterizer should use
A DepthStencilState object instance, to describe how the GPU should conduct the depth test, what causes a depth test fail, and what a fail should do
A BlendState object instance, to describe how the GPU should blend multiple render targets together
Now, what does this look like as actual C# code?
Probably something like this (for rendering):
//Step 1 - Clear the targets
// Clear the back buffer to blue
context.ClearRenderTargetView(BackBufferView, Color.CornflowerBlue);
// Clear the depth buffer to the maximum value.
context.ClearDepthStencilView(DepthStencilBuffer, DepthStencilClearFlags.Depth, 1.0f, 0);
//Step 2 - Set up the pipeline.
// Input Assembler (IA) stage
context.InputAssembler.InputLayout = VertexBufferLayout;
// Vertex Shader (VS) stage
context.VertexShader.Set(SimpleVertexShader);
// Rasterizer (RS) stage
context.Rasterizer.State = SimpleRasterState;
context.Rasterizer.SetViewport(new Viewport(0, 0, form.Width, form.Height));
// Pixel Shader (PS) stage
context.PixelShader.Set(SimplePixelShader);
// Output Merger (OM) stage
context.OutputMerger.SetRenderTargets(DepthStencilBuffer, BackBufferView);
context.OutputMerger.SetDepthStencilState(SimpleDepthStencilState);
context.OutputMerger.SetBlendState(SimpleBlendState);
//Step 3 - Set up the geometry
// Vertex buffers
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, sizeof(Vertex), 0));
// Index buffer
context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R16_UInt, 0);
// Constant buffers
context.VertexShader.SetConstantBuffer(0, TransformationMatrixBuffer);
context.PixelShader.SetConstantBuffer(0, AmbientLightBuffer);
// Shader resources
context.PixelShader.SetShaderResource(0, MeshTextureView); // bind the view, not the Texture2D itself
// Samplers
context.PixelShader.SetSampler(0, MeshTextureSampler);
//Step 4 - Draw the object
context.DrawIndexed(IndexCount, 0, 0);
//Step 5 - Advance to the next object and repeat.
// No next object currently.
//Step 6 - Advance to the next pipeline configuration
// No next pipeline configuration currently.
//Step 7 - Present to the screen.
swapChain.Present(0, 0);
The vertex and pixel shaders in this example code expect:
A model with position, normal, and texture coordinates per vertex
The position of the camera in world space, the world-view-projection matrix, world inverse transpose matrix, and world matrix as a vertex shader constant buffer
The ambient, diffuse, and specular colors of the light, as well as its position in the world, as a pixel shader constant buffer
The 2D texture to apply to the surface of the model in the pixel shader, and
The sampler to use when accessing the pixels of the above texture.
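For reference, the two constant buffers used here (TransformationMatrixBuffer and AmbientLightBuffer) need matching C# structs describing their contents. A sketch of plausible layouts with SharpDX types; the field names are assumptions, and the explicit padding keeps each struct a multiple of 16 bytes, as HLSL constant buffers require:

using System.Runtime.InteropServices;
using SharpDX;

// Hypothetical layout for the vertex shader's constant buffer.
[StructLayout(LayoutKind.Sequential)]
struct TransformationMatrixParameters {
    public Matrix World;                 // object space -> world space
    public Matrix WorldInverseTranspose; // for transforming normals
    public Matrix WorldViewProjection;   // object space -> clip space
    public Vector3 CameraPosition;       // world-space eye position
    private float padding;               // pad the Vector3 out to 16 bytes
}

// Hypothetical layout for the pixel shader's constant buffer.
[StructLayout(LayoutKind.Sequential)]
struct AmbientLightParameters {
    public Color4 Ambient;    // ambient light color
    public Color4 Diffuse;    // diffuse light color
    public Color4 Specular;   // specular light color
    public Vector3 Position;  // world-space light position
    private float padding;    // keep the total size a multiple of 16 bytes
}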
Now the rendering code itself is fairly simple - the setup is the harder part of it:
//Create the vertex buffer
VertexBuffer = Buffer.Create(device, RawVertexInfo, new BufferDescription {
    SizeInBytes = RawVertexInfo.Length * sizeof(Vertex),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.VertexBuffer,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    StructureByteStride = sizeof(Vertex)
});

//Create the index buffer
IndexCount = RawIndexInfo.Length;
IndexBuffer = Buffer.Create(device, RawIndexInfo, new BufferDescription {
    SizeInBytes = IndexCount * sizeof(ushort),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.IndexBuffer,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    StructureByteStride = sizeof(ushort)
});
//Create the Depth/Stencil view.
Texture2D DepthStencilTexture = new Texture2D(device, new Texture2DDescription {
    Format = Format.D32_Float,
    BindFlags = BindFlags.DepthStencil,
    Usage = ResourceUsage.Default,
    Height = renderForm.Height,
    Width = renderForm.Width,
    ArraySize = 1,
    MipLevels = 1,
    SampleDescription = new SampleDescription {
        Count = 1,
        Quality = 0,
    },
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
});
DepthStencilBuffer = new DepthStencilView(device, DepthStencilTexture);

SimpleDepthStencilState = new DepthStencilState(device, new DepthStencilStateDescription {
    IsDepthEnabled = true,
    DepthWriteMask = DepthWriteMask.All,
    DepthComparison = Comparison.Less,
});

//Default blend state - can be omitted from the application if defaulted.
SimpleBlendState = new BlendState(device, BlendStateDescription.Default());

//Default rasterizer state - can be omitted from the application if defaulted.
SimpleRasterState = new RasterizerState(device, new RasterizerStateDescription {
    FillMode = FillMode.Solid,
    CullMode = CullMode.Back,
    IsFrontCounterClockwise = false,
});
// Input layout.
VertexBufferLayout = new InputLayout(device, VertexShaderByteCode, new InputElement[] {
    new InputElement {
        SemanticName = "POSITION",
        Slot = 0,
        SemanticIndex = 0,
        Format = Format.R32G32B32_Float,
        Classification = InputClassification.PerVertexData,
        AlignedByteOffset = 0,
        InstanceDataStepRate = 0,
    },
    new InputElement {
        SemanticName = "NORMAL",
        Slot = 0,
        SemanticIndex = 0,
        Format = Format.R32G32B32_Float,
        Classification = InputClassification.PerVertexData,
        AlignedByteOffset = InputElement.AppendAligned,
        InstanceDataStepRate = 0,
    },
    new InputElement {
        // The index lives in SemanticIndex, not the name: "TEXCOORD" + 0 maps to TEXCOORD0 in HLSL.
        SemanticName = "TEXCOORD",
        Slot = 0,
        SemanticIndex = 0,
        Format = Format.R32G32_Float,
        Classification = InputClassification.PerVertexData,
        AlignedByteOffset = InputElement.AppendAligned,
        InstanceDataStepRate = 0,
    },
});
//Vertex/Pixel shaders
SimpleVertexShader = new VertexShader(device, VertexShaderByteCode);
SimplePixelShader = new PixelShader(device, PixelShaderByteCode);

//Constant buffers
TransformationMatrixBuffer = new Buffer(device, new BufferDescription {
    SizeInBytes = sizeof(TransformationMatrixParameters),
    BindFlags = BindFlags.ConstantBuffer,
    Usage = ResourceUsage.Default,
    CpuAccessFlags = CpuAccessFlags.None,
});
AmbientLightBuffer = new Buffer(device, new BufferDescription {
    SizeInBytes = sizeof(AmbientLightParameters),
    BindFlags = BindFlags.ConstantBuffer,
    Usage = ResourceUsage.Default,
    CpuAccessFlags = CpuAccessFlags.None,
});
// Mesh texture
MeshTexture = new Texture2D(device, new Texture2DDescription {
    Format = Format.B8G8R8A8_UNorm,
    BindFlags = BindFlags.ShaderResource,
    Usage = ResourceUsage.Default,
    Height = MeshImage.Height,
    Width = MeshImage.Width,
    ArraySize = 1,
    MipLevels = 0,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    SampleDescription = new SampleDescription {
        Count = 1,
        Quality = 0,
    }
});

//Shader view for the texture
MeshTextureView = new ShaderResourceView(device, MeshTexture);

//Sampler for the texture
MeshTextureSampler = new SamplerState(device, new SamplerStateDescription {
    AddressU = TextureAddressMode.Clamp,
    AddressV = TextureAddressMode.Clamp,
    AddressW = TextureAddressMode.Border,
    BorderColor = new SharpDX.Mathematics.Interop.RawColor4(1.0f, 0.0f, 1.0f, 1.0f), // RawColor4 takes 0..1 floats
    Filter = Filter.MinMagMipLinear,
    ComparisonFunction = Comparison.Never,
    MaximumLod = float.MaxValue,
    MinimumLod = 0.0f,
    MaximumAnisotropy = 1,
    MipLodBias = 0,
});
As you can see, there's a lot of stuff to get through.
As this has already gotten a lot longer than most people have the patience for, I'd recommend getting and reading the book by Frank D. Luna, as he does a much better job of explaining the pipeline stages and the expectations Direct3D has of your application.
I'd also recommend reading through the C++ documentation for the Direct3D API, as, again, everything there will apply to SharpDX.
In addition, you'll want to look into HLSL, as you'll need to define and compile a shader to make any of the above code even work, and if you want any texturing, you'll need to figure out how to get the image data into Direct3D.
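As a starting point for that, the VertexShaderByteCode and PixelShaderByteCode used above can be produced at runtime through SharpDX's D3DCompiler wrapper. A minimal sketch; the file name shader.hlsl and the VS/PS entry points are assumptions, not part of the code above:

using SharpDX.D3DCompiler;

// Compile once at startup; a real application should check for compile errors.
var vsResult = ShaderBytecode.CompileFromFile("shader.hlsl", "VS", "vs_5_0");
var psResult = ShaderBytecode.CompileFromFile("shader.hlsl", "PS", "ps_5_0");
VertexShaderByteCode = vsResult.Bytecode; // also needed by InputLayout for validation
PixelShaderByteCode = psResult.Bytecode;
SimpleVertexShader = new VertexShader(device, VertexShaderByteCode);
SimplePixelShader = new PixelShader(device, PixelShaderByteCode);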
On the bright side, if you manage to implement all of this in a clean, extensible manner, you'll be able to render practically anything with little additional effort.
I'm trying to create a Screenshot of all Screens on my PC. In the past I've been using the GDI Method, but due to performance issues I'm trying the DirectX way.
I can take a Screenshot of a single Screen without issues, with a code like this:
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;
using System.Windows.Forms;
using System.Drawing;
class Capture : Form
{
    private Device device;
    private Surface surface;

    public Capture()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;
        device = new Device(0, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}
(Let's ignore deleting the Bitmap from memory for this question.)
With that Code I can take a Screenshot of my Primary Screen. Changing the first parameter of the Device constructor to a different number corresponds to a different Screen. If I have 3 Screens and I pass 2 as a parameter, I get a Screenshot of my third Screen.
The issue I have is how to handle capturing all Screens. I came up with the following:
class CaptureScreen : Form
{
    private int index;
    private Screen screen;
    private Device device;
    private Surface surface;

    public Rectangle ScreenBounds { get { return screen.Bounds; } }
    public Device Device { get { return device; } }

    public CaptureScreen(int index, Screen screen, PresentParameters p)
    {
        this.screen = screen;
        this.index = index;
        device = new Device(index, DeviceType.Hardware, this, CreateFlags.HardwareVertexProcessing, p);
        surface = device.CreateOffscreenPlainSurface(screen.Bounds.Width, screen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
    }

    public Bitmap Frame()
    {
        device.GetFrontBufferData(0, surface);
        GraphicsStream gs = SurfaceLoader.SaveToStream(ImageFileFormat.Jpg, surface);
        return new Bitmap(gs);
    }
}

class CaptureDirectX : Form
{
    private CaptureScreen[] screens;
    private int width = 0;
    private int height = 0;

    public CaptureDirectX()
    {
        PresentParameters p = new PresentParameters();
        p.Windowed = true;
        p.SwapEffect = SwapEffect.Discard;
        screens = new CaptureScreen[Screen.AllScreens.Length];

        for (int i = 0; i < Screen.AllScreens.Length; i++)
        {
            screens[i] = new CaptureScreen(i, Screen.AllScreens[i], p);
            //reset previous devices
            if (i > 0)
            {
                for (int j = 0; j < i; j++)
                {
                    screens[j].Device.Reset(p);
                }
            }
            width += Screen.AllScreens[i].Bounds.Width;
            if (Screen.AllScreens[i].Bounds.Height > height)
            {
                height = Screen.AllScreens[i].Bounds.Height;
            }
        }
    }

    public Bitmap Frame()
    {
        Bitmap result = new Bitmap(width, height);
        using (var g = Graphics.FromImage(result))
        {
            for (int i = 0; i < screens.Length; i++)
            {
                Bitmap frame = screens[i].Frame();
                g.DrawImage(frame, screens[i].ScreenBounds);
            }
        }
        return result;
    }
}
As you can see, I iterate through the available Screens and create multiple devices and surfaces in a separate class. But calling Frame() on the CaptureDirectX class throws the following error:
An unhandled exception of type 'Microsoft.DirectX.Direct3D.InvalidCallException' occurred in Microsoft.DirectX.Direct3D.dll
At the line
device.GetFrontBufferData(0, surface);
I've been researching this a bit but didn't have a whole lot of success. I'm not really sure what the issue is.
I've found a link that offers a solution that's talking about resetting the Device Objects. But as you can see in my code above, I've been trying to reset all previously created Device objects, sadly without success.
So my questions are:
Is what I'm trying to achieve even possible through this method (i.e. GetFrontBufferData) ?
What am I doing wrong? What am I missing?
Do you see any performance issues when capturing the screen at a high rate, say 30 fps? (Capturing a single screen with a target of 30 fps gave me about 25-30 fps, compared with the GDI method, which sometimes sinks to around 15 fps.)
FYI it's a WPF application, i.e. .NET 4.5
Edit: I should mention that I'm aware of the DXGI Desktop Duplication API, but sadly it doesn't fit my requirements. As far as I know, that API is only available from Windows 8 onwards, and I need a solution that works from Windows 7 onwards because of my clients.
Well, in the end the solution was something completely different. The System.Windows.Forms.Screen class doesn't play nicely with the DirectX classes. Why? Because the indexes don't match up. The first object in AllScreens does not necessarily have to be index 0 in the Device instantiation.
Now usually this isn't a problem, except when you have a "strange" monitor setup like mine. On the desk I have 3 screens, one vertical (1200,1920), one horizontal (1920, 1200) and another horizontal laptop screen (1920, 1080).
What happened in my case: the first object in AllScreens was the vertical monitor on the left. I tried to create a device for index 0 with 1200 width and 1920 height. Index 0 corresponds to my main monitor though, i.e. the horizontal monitor in the middle. So I was essentially going out of the screen bounds with my instantiation. The instantiation doesn't throw an exception, and at some point later I try to read the front buffer data. Bam, exception, because I'm trying to take a 1200x1920 screenshot of a monitor that's 1920x1200.
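A hedged sketch of how the adapter ordinal can be matched to the right Screen by display device name rather than by array position (Managed DirectX member names as I recall them; verify against your version):

// Hypothetical helper: find the Direct3D adapter that drives a given Screen.
private static int AdapterIndexFor(Screen screen)
{
    foreach (AdapterInformation adapter in Manager.Adapters)
    {
        // Both sides use names like \\.\DISPLAY1.
        if (adapter.Information.DeviceName == screen.DeviceName)
            return adapter.Adapter;
    }
    return 0; // fall back to the default adapter
}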
Sadly, even after I got this working, the performance was no good. A single frame of all 3 monitors takes about 300 to 500ms. Even with a single monitor, the execution time was something like 100ms. Not good enough for my usecase.
Didn't get the Backbuffer to work either, it just produces black images.
I went back to the GDI method and enhanced it so that each Frame() call only updates specific chunks of the bitmap. Say you want to capture a 1920x1200 region: it gets cut into 480x300 rectangles.
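For illustration, a hedged sketch of that chunked GDI idea; the dirty-cell detection (comparing against the previous frame and skipping unchanged cells) is left out, and the 1920x1200 and 480x300 numbers are just the ones from the description above:

// Hypothetical sketch: capture the screen cell by cell with GDI.
Bitmap target = new Bitmap(1920, 1200);
using (var g = Graphics.FromImage(target))
{
    for (int y = 0; y < 1200; y += 300)
    {
        for (int x = 0; x < 1920; x += 480)
        {
            // In the real version, only re-capture cells that changed.
            g.CopyFromScreen(x, y, x, y, new Size(480, 300));
        }
    }
}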
I've done a lot of research, but I can't find a suitable solution that works with Unity3D/C#. I'm using a FOVE HMD and would like to record/make a video of the integrated camera. So far I've managed to take a snapshot of the camera every update, but I can't find a way to merge these snapshots into a video. Does someone know a way of converting them? Or can someone point me in the right direction, in which I could continue my research?
public class FoveCamera : SingletonBase<FoveCamera>
{
    private bool camAvailable;
    private WebCamTexture foveCamera;
    private List<Texture2D> snapshots;

    void Start()
    {
        //-------------just checking if webcam is available
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.LogError("FoveCamera could not be found.");
            camAvailable = false;
            return;
        }
        foreach (WebCamDevice device in devices)
        {
            if (device.name.Equals("FOVE Eyes"))
                foveCamera = new WebCamTexture(device.name); //screen.width and screen.height
        }
        if (foveCamera == null)
        {
            Debug.LogError("FoveCamera could not be found.");
            return;
        }

        //-------------camera found, start with the video
        snapshots = new List<Texture2D>();
        foveCamera.Play();
        camAvailable = true;
    }

    void Update()
    {
        if (!camAvailable)
        {
            return;
        }

        //loading snap from camera
        Texture2D snap = new Texture2D(foveCamera.width, foveCamera.height);
        snap.SetPixels(foveCamera.GetPixels());
        snapshots.Add(snap);
    }
}
The code works so far. The first part of the Start-Method is just for finding and enabling the camera. In the Update-Method I'm taking every update a snapshot of the video.
After I "stop" the Update-Method, I would like to convert the gathered Texture2D object into a video.
Thanks in advance
1. Create a MediaEncoder:
using UnityEditor;       // VideoBitrateMode
using UnityEditor.Media; // MediaEncoder

var vidAttr = new VideoTrackAttributes
{
    bitRateMode = VideoBitrateMode.Medium,
    frameRate = new MediaRational(25),
    width = 320,
    height = 240,
    includeAlpha = false
};
var audAttr = new AudioTrackAttributes
{
    sampleRate = new MediaRational(48000),
    channelCount = 2
};
var enc = new MediaEncoder("sample.mp4", vidAttr, audAttr);
2. Convert each snapshot to a Texture2D.
3. Call AddFrame for each snapshot to add it to the MediaEncoder:
enc.AddFrame(tex);
4. Once done, call Dispose to close the file:
enc.Dispose();
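Put together, a hedged end-to-end sketch (the snapshots list name is taken from the question; note that MediaEncoder lives in UnityEditor.Media, so this works in the editor only, and each texture's size must match the width/height set in vidAttr):

// Write all gathered snapshots to sample.mp4; Dispose is handled by 'using'.
using (var encoder = new MediaEncoder("sample.mp4", vidAttr, audAttr))
{
    foreach (Texture2D tex in snapshots)
    {
        encoder.AddFrame(tex);
    }
}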
I see two methods here: one is fast to implement, dirty, and not for all platforms; the second is harder but prettier. Both rely on FFmpeg.
1) Save every frame into an image file (snap.EncodeToPNG()) and then call FFmpeg to create a video from the images (FFmpeg create video from images) - slow due to the many disk operations.
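A hedged sketch of option 1 (file names and frame rate are assumptions; the ffmpeg command line is the usual image-sequence form):

// Dump numbered PNGs, then stitch them externally with, e.g.:
//   ffmpeg -framerate 25 -i frame_%04d.png -c:v libx264 out.mp4
for (int i = 0; i < snapshots.Count; i++)
{
    System.IO.File.WriteAllBytes($"frame_{i:D4}.png", snapshots[i].EncodeToPNG());
}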
2) Use FFMPEG via wrapper implemented in AForge and supply its VideoFileWriter class with images that you have.
Image sequence to video stream?
The problem here is that it uses System.Drawing.Bitmap, so in order to convert a Texture2D to a Bitmap you can use: How to create bitmap from byte array?
So you end up with something like:
Bitmap bmp;
Texture2D snap;
using (var ms = new MemoryStream(snap.EncodeToPNG()))
{
    bmp = new Bitmap(ms);
}
vFWriter.WriteVideoFrame(bmp);
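Note that vFWriter has to be created and opened before the frame loop; a hedged sketch with AForge's VideoFileWriter (the output name, codec, and frame rate are assumptions):

// Open the writer once, write frames, then close it.
var vFWriter = new VideoFileWriter();
vFWriter.Open("output.mp4", snap.width, snap.height, 25, VideoCodec.MPEG4);
// ... vFWriter.WriteVideoFrame(bmp) for each converted frame ...
vFWriter.Close();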
Neither method is particularly fast, though, so if performance is an issue you might want to operate on lower-level data like DirectX or OpenGL textures.