I'm trying to capture a screenshot and show it on a Windows Form (or on a Panel) using SharpDX, but I'm having some trouble. I capture the screenshot using OutputDuplication.AcquireNextFrame, but when I try to draw the captured frame using WindowRenderTarget.DrawBitmap it fails with a WrongResourceDomain exception. I added a Form and modified the code of the ScreenCapture sample to illustrate my problem:
using System;
using SharpDX;
using SharpDX.Direct2D1;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
namespace MiniTri
{
/// <summary>
/// Screen capture of the desktop using DXGI OutputDuplication.
/// </summary>
internal static class Program
{
[STAThread]
private static void Main()
{
var Form1 = new ScreenCapture.Form1();
// # of graphics card adapter
const int numAdapter = 0;
// # of output device (i.e. monitor)
const int numOutput = 0;
// Create DXGI Factory1
var dxgiFactory = new SharpDX.DXGI.Factory1();
var dxgiAdapter = dxgiFactory.GetAdapter1(numAdapter);
// Create device from Adapter
var d3dDevice = new SharpDX.Direct3D11.Device(dxgiAdapter, DeviceCreationFlags.BgraSupport);
// Create Direct2D1 Factory1
var d2dFactory = new SharpDX.Direct2D1.Factory1();
// Create Direct2D device
SharpDX.Direct2D1.Device d2dDevice;
using (var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device>())
d2dDevice = new SharpDX.Direct2D1.Device(d2dFactory, dxgiDevice);
// Get the Direct2D1 DeviceContext
var d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, DeviceContextOptions.None);
// Create Direct2D1 WindowRenderTarget
var WindowRenderTarget = new WindowRenderTarget(d2dFactory,
new RenderTargetProperties
{
Type = RenderTargetType.Default
},
new HwndRenderTargetProperties
{
Hwnd = Form1.Handle,
PixelSize = new Size2(Form1.Width, Form1.Height),
PresentOptions = PresentOptions.Immediately
});
// Get DXGI.Output
var output = dxgiAdapter.GetOutput(numOutput);
var output1 = output.QueryInterface<Output1>();
// Create RawRectangleF with the Form size
var FormRectangle = new SharpDX.Mathematics.Interop.RawRectangleF(0, 0, Form1.Width, Form1.Height);
// Duplicate the output
var duplicatedOutput = output1.DuplicateOutput(d3dDevice);
bool captureDone = false;
for (int i = 0; !captureDone; i++)
{
try
{
SharpDX.DXGI.Resource screenResource;
OutputDuplicateFrameInformation duplicateFrameInformation;
// Try to get duplicated frame within given time
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);
if (i > 0)
{
// Create a Direct2D1 Bitmap1 from screenResource
var dxgiSurface = screenResource.QueryInterface<Surface>();
var d2dBitmap1 = new Bitmap1(d2dContext, dxgiSurface);
// Should draw the bitmap on the form, but fails with "WrongResourceDomain" exception
WindowRenderTarget.BeginDraw();
WindowRenderTarget.DrawBitmap(d2dBitmap1, FormRectangle, 1.0f, BitmapInterpolationMode.Linear);
WindowRenderTarget.EndDraw();
// Capture done
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch (SharpDXException e)
{
if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
{
throw;
}
}
}
// TODO: We should clean up all allocated COM objects here
}
}
}
I know I can convert the frame to a System.Drawing.Bitmap (like in the original sample) and then use System.Drawing.Graphics.DrawImage to draw the bitmap onto the Form, but I want to avoid this and draw the frame using SharpDX.
I've also tried creating a SwapChain and drawing the frame using Direct2D1.RenderTarget.DrawBitmap (similar to the MiniRect sample), but it fails with the same exception. The same happens with Direct2D1.DeviceContext.DrawBitmap.
I read here that this exception means that I am mixing resources from different resource domains, but I don't understand exactly what that means in my code. I am not experienced with DirectX.
I tried using a SwapChain again, instead of the WindowRenderTarget, and this time I was successful. The steps are listed below, followed by a rough sketch of the code.
Create a SwapChainDescription specifying the Width, Height and Handle of your target control (for example a Form or a Panel).
Create a new SwapChain using the dxgiFactory, d3dDevice and SwapChainDescription.
Get the back buffer of the SwapChain.
Get a DXGI Surface from the back buffer.
(more about creation of the SwapChain, back buffer and DXGI Surface in the MiniRect sample)
Use the d2dContext to create a Direct2D1.Bitmap1 from the DXGI Surface.
Use the d2dContext to create another Direct2D1.Bitmap1 from the screenResource (as illustrated in the code above).
Set the first Direct2D1.Bitmap1 as the target of the d2dContext.
Use the d2dContext to draw the second Direct2D1.Bitmap1.
Present the SwapChain.
Done.
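Putting those steps together, the drawing part looks roughly like the sketch below. This is untested and only a sketch: it reuses the d3dDevice, d2dContext, dxgiFactory, screenResource, Form1 and FormRectangle objects from the code above, and the remaining names are mine.
var swapChainDescription = new SwapChainDescription
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(Form1.Width, Form1.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
    IsWindowed = true,
    OutputHandle = Form1.Handle,
    SampleDescription = new SampleDescription(1, 0),
    SwapEffect = SwapEffect.Discard,
    Usage = Usage.RenderTargetOutput
};
var swapChain = new SwapChain(dxgiFactory, d3dDevice, swapChainDescription);
// Back buffer -> DXGI Surface -> Direct2D1 Bitmap1 used as the render target
using (var backBufferSurface = swapChain.GetBackBuffer<Surface>(0))
{
    var targetProperties = new BitmapProperties1(
        new SharpDX.Direct2D1.PixelFormat(Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied),
        96, 96, BitmapOptions.Target | BitmapOptions.CannotDraw);
    d2dContext.Target = new Bitmap1(d2dContext, backBufferSurface, targetProperties);
}
// Captured frame -> DXGI Surface -> Direct2D1 Bitmap1 used as the source
using (var frameSurface = screenResource.QueryInterface<Surface>())
using (var frameBitmap = new Bitmap1(d2dContext, frameSurface))
{
    d2dContext.BeginDraw();
    d2dContext.DrawBitmap(frameBitmap, FormRectangle, 1.0f, BitmapInterpolationMode.Linear);
    d2dContext.EndDraw();
}
swapChain.Present(1, PresentFlags.None);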
I've been working on a small Microsoft Teams extension that takes the incoming audio/video feed from each participant and records the data into an mp4 file for each participant. All of the MS Teams documentation focuses on getting access to the raw data itself and glosses over any kind of persistence of it into a usable video container.

The video comes in as an NV12-encoded byte array and the audio is in wav format. I've been trying to use GStreamer to take the raw video data, push it to a videoconvert and then on to a filesink to try to save the data as an h264-encoded mp4. I've pulled together a mix of code examples and think I'm close, but I'm missing something in the process. My pipeline is created successfully, but when pushing buffers into the appsrc I get the following in my console:
pausing after gst_pad_push() = not-linked
error: Internal data stream error.
error: streaming stopped, reason not-linked (-1)
The code I use to create my pipeline, pads, sinks, etc. is as follows:
private Gst.App.AppSrc CreatePipeline(Guid identifier, string identity, int width, int height, string directory)
{
Directory.CreateDirectory($"{directory}\\temporary\\{identifier}");
var pipeline = new Gst.Pipeline($"pipeline_{identity}");
var VideoQueue = Gst.ElementFactory.Make("queue", $"video_queue_{identity}");
var VideoConvert = Gst.ElementFactory.Make("videoconvert", $"video_convert_{identity}");
var VideoSink = Gst.ElementFactory.Make("autovideosink", $"video_sink_{identity}");
var AppQueue = Gst.ElementFactory.Make("queue", $"app_queue_{identity}");
var AppSink = new Gst.App.AppSink($"app_sink_{identity}");
var AppSource = new Gst.App.AppSrc($"app_src_{identity}");
var Tee = Gst.ElementFactory.Make("tee", $"tee_{identity}");
var FileSink = Gst.ElementFactory.Make("filesink", $"file_sink_{identity}");
AppSource.Caps = Gst.Caps.FromString($"video/x-raw,format=NV12,width={width},height={height},framerate={this.fixedFps}");
AppSource.IsLive = true;
AppSink.EmitSignals = true;
AppSink.Caps = Gst.Caps.FromString($"video/x-raw,format=NV12,width={width},height={height},framerate={this.fixedFps}");
AppSink.NewSample += NewSample;
Console.WriteLine("Setting Filesink location");
FileSink["location"] = $"{directory}\\temporary\\{identifier}\\{identity}.mp4";
pipeline.Add(AppSource, Tee, VideoQueue, VideoConvert, VideoSink, AppQueue, AppSink, FileSink);
var teeVideoPad = Tee.GetRequestPad("src_%u");
var queueVideoPad = VideoQueue.GetStaticPad("sink");
var teeAppPad = Tee.GetRequestPad("src_%u");
var queueAppPad = AppQueue.GetStaticPad("sink");
if ((teeVideoPad.Link(queueVideoPad) != Gst.PadLinkReturn.Ok) ||
(teeAppPad.Link(queueAppPad) != Gst.PadLinkReturn.Ok))
{
Console.WriteLine("Tee could not be linked.");
throw new Exception("Tee could not be linked.");
}
AppSource.PadAdded += new Gst.PadAddedHandler(this.OnVideoPadAdded);
var bus = pipeline.Bus;
bus.AddSignalWatch();
bus.Connect("message::error", HandleError);
pipeline.SetState(Gst.State.Playing);
return AppSource;
}
This gets called every time a new participant joins the call. Then, each time a participant sends a new video frame to the call, the following code is executed:
private bool NV12toMP4(byte[] array, string identity, int width, int height, long timestamp, int length)
{
var buffer = new Gst.Buffer(null, (ulong)length, Gst.AllocationParams.Zero)
{
Pts = (ulong)timestamp,
Dts = (ulong)timestamp
};
buffer.Fill(0, array);
var ret = this.participantSources[identity].PushBuffer(buffer);
buffer.Dispose();
if (ret != Gst.FlowReturn.Ok)
{
return false;
}
return true;
}
I was expecting my PadAdded method to get called on the AppSrc when it first detects the type of input, but it never gets triggered, so I'm not sure where to look next.
I need to perform a Source In composition on two images.
For example, this image:
and a mask image (tested with both black/transparent and black/white):
should produce this result:
I am trying to do this with ImageSharp:
img.Mutate(imgMaskIn =>
{
using (var mask = Image.Load(maskImageFileName))
{
imgMaskIn.DrawImage(mask, new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn});
}
});
but the result is the mask image. It should work based on this merge request.
Did I use the library incorrectly, or is there a bug?
Is there any other way to do this in ASP.NET Core?
Unfortunately, the syntax to do this with ImageSharp is changing between the current preview version and the development version, which should be the final API for this.
With 1.0.0-beta0005, you can blend these images like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
var options = new GraphicsOptions { BlenderMode = PixelBlenderMode.In };
using (var result = pattern.Clone(x => x.DrawImage(options, texture)))
{
result.Save("img_out.png");
}
}
Note that you have to use a pattern image with alpha transparency for this. You cannot use a keyed transparency (at least not with this solution).
I’ve made the pattern transparent for that purpose (you can get the one I used here) and got this result:
In the final release, it will look like this:
using (var pattern = Image.Load("img_pattern.png"))
using (var texture = Image.Load("img_texture.png"))
{
var options = new GraphicsOptions { AlphaCompositionMode = PixelAlphaCompositionMode.SrcIn };
using (var result = pattern.Clone(x => x.DrawImage(texture, options)))
{
result.Save("img_out.png");
}
}
A good way to figure that out, by the way, is to look at the PorterDuffCompositorTests file, which contains the tests for this feature and as such will always reflect the current API.
As of March 2022, the accepted answer doesn't work anymore with ImageSharp 2. The following code doesn't solve the exact problem in the question, but it's how I got something similar working:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing.Processors.Drawing;
...
using (var inImg = Image.Load<RgbaVector>(imagePath)) //has some transparent pixels
using (var background = new Image<RgbaVector>(inImg.Width, inImg.Height, new RgbaVector(1, 0, 0, 1))) //this is just a solid red image of the same size as the loaded image, but it could be any image
{
var processorCreator = new DrawImageProcessor(
inImg,
Point.Empty,
PixelColorBlendingMode.Normal,
PixelAlphaCompositionMode.SrcAtop, //this is the setting you want to play with to get the behavior from the original question
1f
);
var pxProcessor = processorCreator.CreatePixelSpecificProcessor(
Configuration.Default,
background,
inImg.Bounds());
pxProcessor.Execute(); //writes to the image passed into CreatePixelSpecificProcessor, in this case background
background.Save("some_path.png");
}
}
I am trying to use the Intel RealSense SDK to acquire some guided frames and then stitch them using the Intel stitching algorithm.
However, the SetFileName function is not writing the file to the directory.
Can you please help?
PXCMSenseManager pp = PXCMSenseManager.CreateInstance();
RaiseMessage("Starting...");
pp.EnableFace();
pp.Enable3DScan();
if (!string.IsNullOrEmpty(StreamOutputFilename))
{
    if (File.Exists(StreamOutputFilename)) throw new Exception("File already exists");
    System.Diagnostics.Debug.WriteLine(StreamOutputFilename);
    pp.QueryCaptureManager().SetFileName(StreamOutputFilename, true);
}
Please refer to the solution here
int wmain(int argc, WCHAR* argv[]) {
// Create a SenseManager instance
PXCPointF32 *uvmap = 0;
pxcCHAR *file = L"C:\\new.rssdk";
PXCSenseManager *sm = PXCSenseManager::CreateInstance();
// Set file recording or playback
sm->QueryCaptureManager()->SetFileName(file, 1);
// Select the depth stream
sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480, 0);
// Initialize and Record 300 frames
sm->Init();
for (int i = 0; i<300; i++) {
// This function blocks until a depth sample is ready
if (sm->AcquireFrame(true)<PXC_STATUS_NO_ERROR) break;
// Retrieve the sample
PXCCapture::Sample *sample = sm->QuerySample();
sm->ReleaseFrame();
}
// close down
sm->Release();
return 0;
}
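For reference, the same recording setup in C# with the PXCM managed wrapper looks roughly like this. This is an untested sketch; the file path and stream parameters are placeholders, and the calls mirror the ones already used in the question above.
PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
// Set file recording (true) rather than playback (false)
sm.QueryCaptureManager().SetFileName(@"C:\new.rssdk", true);
// Select the depth stream
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 0);
// Initialize and record 300 frames
sm.Init();
for (int i = 0; i < 300; i++)
{
    // Blocks until a sample is ready
    if (sm.AcquireFrame(true) < pxcmStatus.PXCM_STATUS_NO_ERROR) break;
    // Retrieve the sample; recording to the file happens as frames are acquired
    PXCMCapture.Sample sample = sm.QuerySample();
    sm.ReleaseFrame();
}
// Close down
sm.Dispose();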
I want to use DirectX in C# and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world position, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is the code I have, which I think is ready to render something:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;
namespace CurrencyConverter
{
static class Program
{[STAThread]
static void Main()
{
// Enable object tracking
SharpDX.Configuration.EnableObjectTracking = true;
SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();
#region Direct3D Initialization
// Create the window to render to
Form1 form = new Form1();
form.Text = "D3DRendering - EmptyProject";
form.Width = 640;
form.Height = 480;
// Declare the device and swapChain vars
Device device;
SwapChain swapChain;
// Create the device and swapchain
// First create a regular D3D11 device
using (var device11 = new Device(
SharpDX.Direct3D.DriverType.Hardware,
DeviceCreationFlags.None,
new[] {
SharpDX.Direct3D.FeatureLevel.Level_11_1,
SharpDX.Direct3D.FeatureLevel.Level_11_0,
}))
{
// Query device for the Device1 interface (ID3D11Device1)
device = device11.QueryInterfaceOrNull<Device1>();
if (device == null)
throw new NotSupportedException(
"SharpDX.Direct3D11.Device1 is not supported");
}// Rather than create a new DXGI Factory we reuse the
// one that has been used internally to create the device
using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
using (var adapter = dxgi.Adapter)
using (var factory = adapter.GetParent<Factory2>())
{
var desc1 = new SwapChainDescription1()
{
Width = form.ClientSize.Width,
Height = form.ClientSize.Height,
Format = Format.R8G8B8A8_UNorm,
Stereo = false,
SampleDescription = new SampleDescription(1, 0),
Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
BufferCount = 1,
Scaling = Scaling.Stretch,
SwapEffect = SwapEffect.Discard,
};
swapChain = new SwapChain1(factory,
device,
form.Handle,
ref desc1,
new SwapChainFullScreenDescription()
{
RefreshRate = new Rational(60, 1),
Scaling = DisplayModeScaling.Centered,
Windowed = true
},
// Restrict output to specific Output (monitor)
adapter.Outputs[0]);
}
// Create references for backBuffer and renderTargetView
var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain,
0);
var renderTargetView = new RenderTargetView(device,
backBuffer);
#endregion
// Setup object debug names
device.DebugName = "The Device";
swapChain.DebugName = "The SwapChain";
backBuffer.DebugName = "The Backbuffer";
renderTargetView.DebugName = "The RenderTargetView";
#region Render loop
// Create and run the render loop
RenderLoop.Run(form, () =>
{
// Clear the render target with...
var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
SharpDX.Color.DarkBlue,
(float)((timer.Time) / 10.0 % 1.0));
device.ImmediateContext.ClearRenderTargetView(
renderTargetView,
lerpColor);
// Execute rendering commands here...
//...
//I DO NOT HAVE ANY IDEA
//...
// Present the frame
swapChain.Present(0, PresentFlags.RestrictToOutput);
});
#endregion
#region Direct3D Cleanup
// Release the device and any other resources created
renderTargetView.Dispose();
backBuffer.Dispose();
device.Dispose();
swapChain.Dispose();
#endregion
}
}
}
Generally speaking, with Direct3D you need a substantial amount of code before you have anything happening on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend looking in particular at:
Depth buffer creation (DepthStencilView)
The fx file, as you need shaders to have anything on the screen (there is no fixed-function pipeline anymore)
How the Vertex Buffer is created; you need to split geometry into triangles (in the common case; there are other possibilities)
Don't forget the SetViewport (it's really common to have it omitted)
The calls referring to the Input Assembler, which assign the geometry to be drawn (see the sketch after this list)
Constant buffer creation: this is how you pass matrices and changing data (like diffuse) to the shaders
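As a rough illustration only (this is not code from the book or the sample; it assumes a device, a form, a vertexBuffer with known vertexStride and vertexCount, and a renderTargetView already exist, as they do in MiniCube), the viewport and Input Assembler setup looks something like this:
var context = device.ImmediateContext;
// Viewport: without it nothing is rasterized into the render target
context.Rasterizer.SetViewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f);
// Input Assembler: describe the geometry to draw and how to read the vertex buffer
// (an InputLayout and the vertex/pixel shaders must also be bound; see the fx file point above)
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, vertexStride, 0));
// Bind the render target and draw
context.OutputMerger.SetRenderTargets(renderTargetView);
context.Draw(vertexCount, 0);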
Also make sure to pass DeviceCreationFlags.Debug to the device creation call (for example Device.CreateWithSwapChain), and in the Visual Studio debug options enable "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason in case any resource creation fails (instead of "Invalid Args", which is quite pointless).
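For example (illustrative only; any of the device creation overloads accepts the flag):
// Create the device with the debug layer enabled so the native debugging
// output contains detailed D3D11 errors and warnings
var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware, DeviceCreationFlags.Debug);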
As another recommendation: all the Direct3D11 resource creation parameters are incredibly error prone and tedious (many options are not compatible with each other), so it is quite important to wrap them in easier-to-use helper functions (and write a small number of unit tests to validate them once and for all). The old Toolkit has quite a few examples of those; a hypothetical helper is sketched below.
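For example, a helper of that kind (not taken from the Toolkit, just an illustration of the idea) might look like this:
// Wraps the error-prone Texture2DDescription into one call for a typical
// render-target texture; all parameter choices here are illustrative
static Texture2D CreateRenderTargetTexture(SharpDX.Direct3D11.Device device, int width, int height)
{
    return new Texture2D(device, new Texture2DDescription
    {
        Width = width,
        Height = height,
        MipLevels = 1,
        ArraySize = 1,
        Format = Format.R8G8B8A8_UNorm,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None
    });
}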
The SharpDX wrapper is relatively close to its C++ counterpart, so anything in the C++ documentation applies to it too.
I use DirectX 9 to build a model.
I want to save it to a bmp file. I found the function D3DXSaveSurfaceToFile(), but I do not know how to use it in C#.
How can I use it?
Unfortunately, there is no such function in C#.
Try the following code instead:
try
{
    // initialize the D3D device
    PresentParameters presentParams = new PresentParameters();
    presentParams.Windowed = true;
    presentParams.SwapEffect = SwapEffect.Discard;
    Device myDevice = new Device(0, DeviceType.Hardware, this, CreateFlags.SoftwareVertexProcessing, presentParams);
    // create a surface the size of the screen;
    // the format has to be A8R8G8B8, as that is what GetFrontBufferData returns,
    // and the only memory pool types allowed are Pool.Scratch or Pool.SystemMemory
    Surface mySurface = myDevice.CreateOffscreenPlainSurface(SystemInformation.PrimaryMonitorSize.Width, SystemInformation.PrimaryMonitorSize.Height, Format.A8R8G8B8, Pool.SystemMemory);
    // get the front buffer
    myDevice.GetFrontBufferData(0, mySurface);
    // save the surface to a file
    SurfaceLoader.Save("surface.bmp", ImageFileFormat.Bmp, mySurface);
}
catch
{
    //whatever
}