Vidyo.IO sharing screen and monitors error - c#

I use Vidyo.IO for communication, using the following code:
ConnectorPKG.Initialize();
var _connector = new Connector(Handle, Connector.ConnectorViewStyle.ConnectorviewstyleDefault, 8, "all#VidyoClient", "VidyoClient.log", 0);
// This should be called on each window resize.
_connector.ShowViewAt(Handle, 0, 0, Width, Height);
// Register for the events we want to handle.
_connector.RegisterLocalCameraEventListener(new LocalCameraListener(this));
_connector.RegisterLocalWindowShareEventListener(new LocalWindowShareListener(this));
_connector.RegisterLocalMicrophoneEventListener(new LocalMicrophoneListener(this));
_connector.RegisterLocalSpeakerEventListener(new LocalSpeakerListener(this));
_connector.RegisterParticipantEventListener(new ParticipantListener(this));
_connector.RegisterLocalMonitorEventListener(new LocalMonitorListener(this));
_connector.RegisterMessageEventListener(new ChatListener(this));
_connector.DisableDebug();
Then, after joining a room, I share a window using code like this:
var winToShare = LocalWindows.FirstOrDefault();
if (winToShare != null)
{
    winToShare.IsSelected = true;
    //SetSelectedLocalWindow(winToShare);
    SharingInProgress = _connector.SelectLocalWindowShare(winToShare.Object);
}
and the same for monitors. Now I always get this error:
can't share overconstrained frame interval
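For reference, the monitor path presumably mirrors the window path via SelectLocalMonitor; a hypothetical sketch (the LocalMonitors collection name is an assumption mirroring LocalWindows above):
var monToShare = LocalMonitors.FirstOrDefault();
if (monToShare != null)
{
    monToShare.IsSelected = true;
    // SelectLocalMonitor is the VidyoClient counterpart of SelectLocalWindowShare.
    SharingInProgress = _connector.SelectLocalMonitor(monToShare.Object);
}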

What platform are you developing for? It seems like you may be building a mobile app using Xamarin, and if that's the case, you will not be able to do window/app share. That feature is available only on desktop and web clients.

Related

Error when trying to capture desktop using DXGI and DirectX11 on intel 630 HD

I get the error below when trying to use DXGI to capture the built-in screen on my laptop, which runs on an Intel 630 HD with the latest driver. The code works when I capture the external screen running on my GTX 1070.
SharpDX.SharpDXException
HResult=0x80070057
Message=HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.
The code in my form:
desktopDuplicator = new DesktopDuplicatorD11(1,0, DesktopDuplicatorD11.VSyncLevel.None);
The section of the code that errors:
private bool RetrieveFrame()
{
    if (desktopImageTexture == null)
        desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
    frameInfo = new OutputDuplicateFrameInformation();
    try
    {
        mDeskDuplication.AcquireNextFrame(500, out frameInfo, out desktopResource);
    }
    catch (SharpDXException ex)
    {
        // A timeout just means no new frame was presented within 500 ms.
        if (ex.ResultCode.Code == SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
        {
            return true;
        }
        if (ex.ResultCode.Failure)
        {
            throw new DesktopDuplicationException("Failed to acquire next frame.");
        }
    }
    // Copy the acquired frame into the staging texture.
    using (var tempTexture = desktopResource.QueryInterface<Texture2D>())
    {
        mDevice.ImmediateContext.CopyResource(tempTexture, desktopImageTexture);
    }
    return false;
}
It errors specifically on the line:
desktopImageTexture = new Texture2D(mDevice, mTextureDescription);
What is causing the error when using the internal display and the Intel 630?
Edit #1:
mTextureDescription creation:
this.mTextureDescription = new Texture2DDescription()
{
    CpuAccessFlags = CpuAccessFlags.Read,
    BindFlags = BindFlags.None,
    Format = Format.B8G8R8A8_UNorm,
    Width = this.mOutputDescription.DesktopBounds.Right,
    Height = this.mOutputDescription.DesktopBounds.Bottom,
    OptionFlags = ResourceOptionFlags.None,
    MipLevels = 1,
    ArraySize = 1,
    SampleDescription = { Count = 1, Quality = 0 },
    Usage = ResourceUsage.Staging
};
The whole Desktop Duplication process is done on the same thread.
Update #2:
On the Intel 630, Width = this.mOutputDescription.DesktopBounds.Right returns 0, whereas on my 1070 it returns 1920.
The simplest reason is usually the actual problem.
Intel's final WDDM 2.6 drivers do not work properly with switchable graphics; update to the DCH WDDM 2.7 driver.
First, to get hints from the API about the invalid argument (which is exactly what you have), you need to enable the Direct3D debug layer. The article explains it for C++, and a similar trick is possible with C# as well.
Second, what matters is the effective arguments in the failing call, not just the code.
The code is about right, but if the coordinates in mOutputDescription are zero or invalid, the mentioned API call will fail as well. You need to set a breakpoint and inspect the variable.
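For illustration, a minimal sketch of enabling the debug layer from SharpDX, assuming you create the device yourself (the flag requires the D3D SDK layers / Graphics Tools to be installed):
using SharpDX.Direct3D;
using SharpDX.Direct3D11;

// Creating the device with the debug layer enabled makes D3D11 report the
// exact reason for E_INVALIDARG in the native debug output.
var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
And a defensive sketch of the size calculation, computing width and height from the full bounds instead of assuming the output's origin is (0, 0):
// DesktopBounds is a rectangle in desktop coordinates, so Right alone is not
// necessarily the width; subtract Left/Top and validate before creating the texture.
int width = mOutputDescription.DesktopBounds.Right - mOutputDescription.DesktopBounds.Left;
int height = mOutputDescription.DesktopBounds.Bottom - mOutputDescription.DesktopBounds.Top;
if (width <= 0 || height <= 0)
    throw new InvalidOperationException("Output reports empty desktop bounds; check the output index and driver.");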

C# Take window/desktop screenshot using SlimDX

I am new to SlimDX and I've heard that there is a way to capture screenshots using this library. The reason I want to use SlimDX is that I want to capture screenshots much faster than Graphics.CopyFromScreen(), so that I can make a livestream app running at higher framerates.
I have some code I found on the internet which should capture the desktop, but it always crashes at the line where I create an instance of Device.
I tried changing the DeviceType parameter to Software and the CreateFlags to Multithreaded just to see if anything changes, but nothing did, and this is what it says every time:
SlimDX.Direct3D9.Direct3D9Exception: 'D3DERR_INVALIDCALL: Invalid call (-2005530516)'
Here's the code I have:
var pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
var d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, pp);
var surface = Surface.CreateOffscreenPlain(d, Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height, Format.A8R8G8B8, Pool.Scratch);
d.GetFrontBufferData(0, surface);
//not sure if this will work
var ds = Surface.ToStream(surface, ImageFileFormat.Jpg);
var img = Image.FromStream(ds);
I've also read that it could be a result of BackBuffer not being supported by the graphics card, but in that case I really don't know how to fix this.
My graphics card is AMD R270X.
Any ideas?
Setting pp.BackBufferCount to 0 worked. The capture time is still pretty long, though.
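For completeness, a sketch of the device creation above with that fix applied:
var pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
pp.BackBufferCount = 0; // the fix: D3D9 treats 0 as one implicit back buffer
var d = new Device(new Direct3D(), 0, DeviceType.Hardware, IntPtr.Zero, CreateFlags.SoftwareVertexProcessing, pp);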

Live Video Streaming using Raspberry Pi and C#

I'm working on a uni project to live-stream video (taken from a webcam) to the desktop using C# (UWP, Windows 10 IoT Core). Even though I found some projects doing the server-side implementation in Java (for the Pi) and the client side using UWP, I couldn't find any projects covering server-side programming in C#.
Plus, is it really possible to do such server-side programming in C# for live streaming? This Microsoft link says it isn't.
View the Microsoft Link
Any help would be deeply appreciated.
Regards,
T.S.
Here is another project I have coded and tested successfully. You could take it as a reference if it helps.
In the MyVideoServer app, the important part is getting the camera ID and the preview frame of the video with previewFrame = await MyMediaCapture.GetPreviewFrameAsync(videoFrame);, and then sending the video stream to the client through streamSocketClient with await streamSocketClient.sendBuffer(buffer);.
public MainPage()
{
    this.InitializeComponent();
    InitializeCameraAsync();
    InitSocket();
}

MediaCapture MyMediaCapture;
VideoFrame videoFrame;
VideoFrame previewFrame;
IBuffer buffer;
DispatcherTimer timer;
StreamSocketListenerServer streamSocketSrv;
StreamSocketClient streamSocketClient;

private async void InitializeCameraAsync()
{
    var allVideoDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
    DeviceInformation cameraDevice = allVideoDevices.FirstOrDefault();
    var mediaInitSettings = new MediaCaptureInitializationSettings { VideoDeviceId = cameraDevice.Id };
    MyMediaCapture = new MediaCapture();
    try
    {
        await MyMediaCapture.InitializeAsync(mediaInitSettings);
    }
    catch (UnauthorizedAccessException)
    {
    }
    PreviewControl.Height = 180;
    PreviewControl.Width = 240;
    PreviewControl.Source = MyMediaCapture;
    await MyMediaCapture.StartPreviewAsync();
    videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, 240, 180, 0);
    buffer = new Windows.Storage.Streams.Buffer((uint)(240 * 180 * 8));
}
Then the key server code creates a server and connects the client via socket communication in the InitSocket function. A StreamSocketListenerServer object should be created and started, which also sets up the server port: streamSocketSrv = new StreamSocketListenerServer(); followed by await streamSocketSrv.start("22333");. Last but not least, Timer_Tick will send the video stream to the client every 100 ms.
private async void InitSocket()
{
    streamSocketSrv = new StreamSocketListenerServer();
    await streamSocketSrv.start("22333");
    streamSocketClient = new StreamSocketClient();
    timer = new DispatcherTimer();
    timer.Interval = TimeSpan.FromMilliseconds(100);
    timer.Tick += Timer_Tick;
    timer.Start();
}
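The post does not show Timer_Tick itself; a minimal sketch of what it might look like, assuming the custom StreamSocketClient.sendBuffer helper from the linked sample:
private async void Timer_Tick(object sender, object e)
{
    // Grab the current preview frame and copy its pixels into the buffer.
    previewFrame = await MyMediaCapture.GetPreviewFrameAsync(videoFrame);
    previewFrame.SoftwareBitmap.CopyToBuffer(buffer);
    // Push the raw frame to the connected client (sendBuffer is the sample's
    // custom helper, not a built-in UWP API).
    await streamSocketClient.sendBuffer(buffer);
}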
Then you can deploy the MyVideoServer app on a Raspberry Pi 3 and the MyVideoClient app on a PC. Enter the Raspberry Pi 3's IP address and click the Connect button; the video stream will display in the app.
This is the sample code, which you could take as a reference.

SharpDx direct3d11 how to start rendering

I want to use DirectX in C#, and I am using the SharpDX wrapper. I got a book called Direct3D Rendering Cookbook and took the basic code from it. I want to create a 3D world view. For that I will need a camera view and a grid that helps to recognize world positions, just like in Autodesk Maya, but I do not know how to do that. My mind is really mixed up; what should I do to start?
Here is the code I have, which I think is ready to render something:
using System;
using SharpDX.Windows;
using SharpDX.DXGI;
using SharpDX.Direct3D11;
using Device = SharpDX.Direct3D11.Device;
using Device1 = SharpDX.Direct3D11.Device1;

namespace CurrencyConverter
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Enable object tracking
            SharpDX.Configuration.EnableObjectTracking = true;
            SharpDX.Animation.Timer timer = new SharpDX.Animation.Timer();

            #region Direct3D Initialization
            // Create the window to render to
            Form1 form = new Form1();
            form.Text = "D3DRendering - EmptyProject";
            form.Width = 640;
            form.Height = 480;

            // Declare the device and swapChain vars
            Device device;
            SwapChain swapChain;

            // Create the device and swapchain
            // First create a regular D3D11 device
            using (var device11 = new Device(
                SharpDX.Direct3D.DriverType.Hardware,
                DeviceCreationFlags.None,
                new[] {
                    SharpDX.Direct3D.FeatureLevel.Level_11_1,
                    SharpDX.Direct3D.FeatureLevel.Level_11_0,
                }))
            {
                // Query device for the Device1 interface (ID3D11Device1)
                device = device11.QueryInterfaceOrNull<Device1>();
                if (device == null)
                    throw new NotSupportedException(
                        "SharpDX.Direct3D11.Device1 is not supported");
            }

            // Rather than create a new DXGI Factory we reuse the
            // one that has been used internally to create the device
            using (var dxgi = device.QueryInterface<SharpDX.DXGI.Device2>())
            using (var adapter = dxgi.Adapter)
            using (var factory = adapter.GetParent<Factory2>())
            {
                var desc1 = new SwapChainDescription1()
                {
                    Width = form.ClientSize.Width,
                    Height = form.ClientSize.Height,
                    Format = Format.R8G8B8A8_UNorm,
                    Stereo = false,
                    SampleDescription = new SampleDescription(1, 0),
                    Usage = Usage.BackBuffer | Usage.RenderTargetOutput,
                    BufferCount = 1,
                    Scaling = Scaling.Stretch,
                    SwapEffect = SwapEffect.Discard,
                };
                swapChain = new SwapChain1(factory,
                    device,
                    form.Handle,
                    ref desc1,
                    new SwapChainFullScreenDescription()
                    {
                        RefreshRate = new Rational(60, 1),
                        Scaling = DisplayModeScaling.Centered,
                        Windowed = true
                    },
                    // Restrict output to specific Output (monitor)
                    adapter.Outputs[0]);
            }

            // Create references for backBuffer and renderTargetView
            var backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
            var renderTargetView = new RenderTargetView(device, backBuffer);
            #endregion

            // Setup object debug names
            device.DebugName = "The Device";
            swapChain.DebugName = "The SwapChain";
            backBuffer.DebugName = "The Backbuffer";
            renderTargetView.DebugName = "The RenderTargetView";

            #region Render loop
            // Create and run the render loop
            RenderLoop.Run(form, () =>
            {
                // Clear the render target with...
                var lerpColor = SharpDX.Color.Lerp(SharpDX.Color.White,
                    SharpDX.Color.DarkBlue,
                    (float)((timer.Time) / 10.0 % 1.0));
                device.ImmediateContext.ClearRenderTargetView(
                    renderTargetView,
                    lerpColor);

                // Execute rendering commands here...
                //...
                //I DO NOT HAVE ANY IDEA
                //...

                // Present the frame
                swapChain.Present(0, PresentFlags.RestrictToOutput);
            });
            #endregion

            #region Direct3D Cleanup
            // Release the device and any other resources created
            renderTargetView.Dispose();
            backBuffer.Dispose();
            device.Dispose();
            swapChain.Dispose();
            #endregion
        }
    }
}
Generally speaking, with Direct3D you need a substantial amount of code before anything happens on the screen.
In the SharpDX repository you have the MiniCube sample, which contains enough to really get you started, as it has all the elements required to draw a 3D scene.
I recommend particularly looking at the following (a short sketch of two of these steps follows the list):
Depth buffer creation (DepthStencilView)
The fx file, as you need shaders to get anything on the screen (no more fixed function)
How the vertex buffer is created; you need to split geometry into triangles (in common cases; there are other possibilities)
Don't forget SetViewport (it's really common to have it omitted)
The calls referring to the input assembler, which assign the geometry to be drawn
Constant buffer creation: this is to pass matrices and changing data (like a diffuse color)
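To make two of those items concrete, here is a minimal sketch (not from the cookbook) of creating a depth buffer plus DepthStencilView and setting the viewport, assuming the device, form and renderTargetView from the question's code:
var depthDesc = new Texture2DDescription
{
    Format = Format.D24_UNorm_S8_UInt, // 24-bit depth + 8-bit stencil
    Width = form.ClientSize.Width,
    Height = form.ClientSize.Height,
    ArraySize = 1,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
    BindFlags = BindFlags.DepthStencil,
    Usage = ResourceUsage.Default
};
var depthBuffer = new Texture2D(device, depthDesc);
var depthView = new DepthStencilView(device, depthBuffer);
var context = device.ImmediateContext;
// Bind both views and set the viewport; forgetting SetViewport is the
// common omission mentioned above.
context.OutputMerger.SetRenderTargets(depthView, renderTargetView);
context.Rasterizer.SetViewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f);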
Also make sure to use DeviceCreationFlags.Debug with the Device.CreateWithSwapChain call, and in the Visual Studio debug options, use "Enable Native Code Debugging". This will give you errors and warnings if something is not set properly, plus a meaningful reason in case any of the resource creations fails (instead of "Invalid Args", which is quite pointless).
As another recommendation, all the Direct3D11 resource creation parameters are incredibly error prone and tedious (many options are not compatible with each other), so it is quite important to wrap them in easier-to-use helper functions (and write a small set of unit tests to validate them once and for all). The old Toolkit has quite a few examples of those.
The SharpDX wrapper is relatively close to its C++ counterpart, so anything in the C++ documentation applies to it too.

.Net Application to capture image from pda camera

I need a .NET application to interact with the PDA's camera so that, with a Save button (in my application), I can save the image to SQL Server, and with a Zoom button (in my application) I can zoom the image.
mmm, at first glance your question reads like you are after an app to do this (which is why I voted to close the question as non-programming-related),
but... if you are in fact after the Compact Framework code for this, then this may help (and I'll try to reverse my vote...):
CameraCaptureDialog cameraCapture = new CameraCaptureDialog();
cameraCapture.Owner = null;
cameraCapture.InitialDirectory = @"\My Documents";
cameraCapture.DefaultFileName = @"test.3gp";
cameraCapture.Title = "Camera Demo";
cameraCapture.VideoTypes = CameraCaptureVideoTypes.Messaging;
cameraCapture.Resolution = new Size(176, 144);
cameraCapture.VideoTimeLimit = new TimeSpan(0, 0, 15); // Limited to 15 seconds of video.
cameraCapture.Mode = CameraCaptureMode.VideoWithAudio;
if (DialogResult.OK == cameraCapture.ShowDialog())
{
    Console.WriteLine("The picture or video has been successfully captured to:\n{0}", cameraCapture.FileName);
}
This code is snipped from the MSDN article on the CameraCaptureDialog.
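The question also asks about saving to SQL Server; a hypothetical sketch of the save button using plain ADO.NET (the connectionString and the Images table with a varbinary column are assumptions, and this presumes SqlClient is available on the device):
byte[] imageBytes;
using (var fs = File.OpenRead(cameraCapture.FileName))
{
    imageBytes = new byte[fs.Length];
    fs.Read(imageBytes, 0, imageBytes.Length);
}
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("INSERT INTO Images (Data) VALUES (@data)", conn))
{
    // Store the captured file's raw bytes in a varbinary column.
    cmd.Parameters.Add("@data", SqlDbType.VarBinary).Value = imageBytes;
    conn.Open();
    cmd.ExecuteNonQuery();
}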
