Change the stream resolution interactively in Klak Syphon on Unity - C#

I'm very new to Unity and C#. I'm creating an application that streams the camera output from desktop Unity to a web browser with WebRTC. I'm using this pipeline:
Unity camera → Syphon → CamTwist → broadcast client → smartphone.
First, I send the window size from the smartphone to Unity (done). Next, I have to stream video from Unity to Syphon at a resolution that matches the smartphone's.
My question is: how can I adjust the resolution of the Syphon stream from Unity? Is there another good approach?
I know I could instead change the resolution in CamTwist, so I'm also open to a different system.

I found a solution in the Klak Syphon test project.
Setting the camera size in Klak Syphon is a little trickier than in other plugins such as Funnel or Syphon for Unity.
To set the camera size for Klak Syphon:
1. Create a Render Texture and set its size.
2. Add a Camera object.
3. Assign the Render Texture to the Target Texture field of the Camera.
4. Add an empty GameObject, attach the Syphon Server (script) component, and assign the Render Texture to its Source Texture field.
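The same setup can also be done from script. A minimal sketch, assuming the Klak Syphon `SyphonServer` component exposes a public `sourceTexture` member (in some plugin versions the texture can only be assigned in the Inspector, so check your version):

```csharp
using UnityEngine;
using Klak.Syphon; // assumes the Klak Syphon package is installed

public class SyphonResolutionSetup : MonoBehaviour
{
    // Resolution received from the smartphone (example values).
    public int streamWidth = 1280;
    public int streamHeight = 720;
    public Camera streamCamera;

    void Start()
    {
        // Create a RenderTexture with the requested size.
        var rt = new RenderTexture(streamWidth, streamHeight, 24);

        // Render the camera into the texture instead of the screen.
        streamCamera.targetTexture = rt;

        // Feed the texture to the Syphon server.
        // NOTE: the member name is an assumption; it may differ between versions.
        var server = gameObject.AddComponent<SyphonServer>();
        server.sourceTexture = rt;
    }
}
```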


VideoLAN VLC media player showing green texture in Unity application on different screens

I am currently developing an interactive standalone application in Unity 2020.3.30f1.
I am using the VLC player plugin (https://assetstore.unity.com/packages/tools/video/vlc-for-unity-windows-133979#description) to stream from an external device through a Makito Haivision encoder.
The application runs on a Dell R7610 with an Nvidia Quadro K5000 graphics card driving six display screens, which are combined through NVIDIA Mosaic into a single display.
Unity sprites, textures, and the video player all work perfectly, except the VLC player:
it shows a green texture on some screens and works fine on others.
I have attached an image of the issue.
Upgrading the Mosaic driver solved the issue, but the application became slow.

Access the RGB camera (or Locatable Camera) at 30fps on HoloLens

I am trying to access the HoloLens locatable camera at 30 FPS with the minimum possible latency. I have tried using WebCamTexture, but it has significant latency along with frame drops. I also tried the MediaCapture example (which seems significantly faster), but it displays on a 2D CaptureElement. Is there any way to get a byte array for each frame using the MediaCapture API so that I can render it onto a cube's texture in Unity3D?
We made an open-source project called CameraStream to address this need. As a Unity plugin, it uses MediaCapture to feed the byte array into Unity (along with the locatable matrices). From there you can assign the bytes to a Texture2D, as shown in the provided Unity example.
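As a rough illustration of that last step, a raw per-frame byte array can be uploaded to a Texture2D and shown on a cube like this. This is a minimal sketch, not CameraStream's actual API: the `OnFrame` callback name, frame size, and BGRA32 format are assumptions you would match to whatever the capture pipeline actually delivers.

```csharp
using UnityEngine;

public class FrameReceiver : MonoBehaviour
{
    Texture2D _texture;

    void Start()
    {
        // Assumed frame size and pixel format; match the capture pipeline.
        _texture = new Texture2D(1280, 720, TextureFormat.BGRA32, false);
        GetComponent<Renderer>().material.mainTexture = _texture;
    }

    // Call this with each raw frame (e.g. from the plugin's frame callback).
    public void OnFrame(byte[] frameData)
    {
        _texture.LoadRawTextureData(frameData);
        _texture.Apply(); // upload the new pixel data to the GPU
    }
}
```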

Ask about Kinect Fusion for AR

I want to create an AR application using Kinect Fusion.
I want to insert a 3D model into my Kinect Fusion reconstruction. The expected output is like these two videos:
https://www.youtube.com/watch?v=gChlRebNloA
https://www.youtube.com/watch?v=MvP4cHfUD5g
How can I overlay the 3D object onto the 3D reconstruction?
Is there a function in the SDK that can be used to achieve my goal?
Thanks.
PS: I use C#.
You should probably start by exploring the KinectFusionExplorer-WPF KinectFusion sample available on CodePlex:
http://kinectforwindows.codeplex.com/SourceControl/latest#v1.x/ToolkitSamples1.8.0
KinectFusionExplorer-WPF has some nice utility methods for matrix transformations, (virtual) camera configuration, etc.
The process for doing the AR augmentation is:
1. Viewport: set your viewport to the same resolution as the camera image.
2. Video background: retrieve the video image from the Kinect and display it in the background (either as done in the SDK examples or, more traditionally, using a textured quad with the video image on it).
3. Geometric registration: use WPF 3D and set up a Camera object using the intrinsic parameters of the Kinect camera (or the same ones used by KinectFusion) and the camera pose you get via GetCurrentWorldToVolumeTransform() from your volume.
4. Rendering: apply any local transformation you want to your model (XAML) and render it the way it's generally done with WPF 3D.
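The geometric-registration step boils down to converting the pose matrix that Kinect Fusion gives you into a WPF matrix. A minimal sketch, assuming the `Microsoft.Kinect.Fusion` `Matrix4` field naming (M11…M44) and a `MatrixCamera` in the WPF 3D scene; the `volume` and `myMatrixCamera` names are placeholders for your own objects:

```csharp
using System.Windows.Media.Media3D; // WPF 3D types
using Microsoft.Kinect.Fusion;      // Kinect Fusion SDK

static class FusionCameraHelper
{
    // Convert the Fusion Matrix4 pose into a WPF Matrix3D so it can
    // drive a MatrixCamera in the WPF 3D scene.
    public static Matrix3D ToMatrix3D(Matrix4 m)
    {
        return new Matrix3D(
            m.M11, m.M12, m.M13, m.M14,
            m.M21, m.M22, m.M23, m.M24,
            m.M31, m.M32, m.M33, m.M34,
            m.M41, m.M42, m.M43, m.M44);
    }
}

// Usage inside your processing loop (names are placeholders):
// Matrix4 pose = volume.GetCurrentWorldToCameraTransform();
// myMatrixCamera.ViewMatrix = FusionCameraHelper.ToMatrix3D(pose);
```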

How to use Vuforia in a PC application

I'm new to Vuforia and I need some help with this topic.
Can someone tell me if it's possible to use Vuforia with a webcam in a PC application?
I get the webcam frames on a plane's texture, and I tried to pass those textures to Vuforia, but I haven't gotten it to work.
I use WebCamTexture to get the webcam frames.
Although I'm not familiar with Vuforia (it looks interesting; I'm going to try it out myself this weekend!), I might have a suggestion.
Is it possible Vuforia doesn't know what to do because the images are a WebCamTexture instead of something like Texture2D?
WebCamTexture is a Texture (Texture2D, WebCamTexture, and RenderTexture all extend Texture, but they aren't interchangeable).
So, try converting your texture and passing that on to Vuforia:
var webcamTex = go.GetComponent<Renderer>().material.mainTexture as WebCamTexture;
var tx2d = new Texture2D(webcamTex.width, webcamTex.height); // Texture2D needs explicit dimensions
tx2d.SetPixels(webcamTex.GetPixels());
tx2d.Apply(); // upload the copied pixels to the GPU
EDIT: Here is what I found, under the section "Running in the editor":
There is a specific Web Cam Behaviour script.
To use Play Mode for Vuforia in Unity Pro, simply select the attached, or built-in, webcam that you want to use from the Camera Device menu, and then activate Play Mode using the Play button at the top of the Editor UI.
You can also use the standard Unity Play Mode with non-Pro Unity versions and by setting 'Don't use for Play Mode' in the Web Cam Behaviour component.
To use standard Play Mode, adjust the transform of the ARCamera object to get your entire scene in view, and then run the application in the Unity editor. There is no live camera image or tracking in standard Play Mode; instead, all Targets are assumed to be visible. This allows you to test the non-AR components of your application, such as scripts and animations, without having to deploy to the device each time.
Have you tried the $250 plugin at http://holographi.space/? I haven't paid the $250 for it yet myself.

Programmatically enable stereoscopic 3D on the graphics card

We have an NVIDIA Quadro 5000 and want our C# program to apply the following graphics-card settings at startup, so that the screen automatically detects the 3D application.
The following settings need to be set:
Stereoscopic settings: Enable stereoscopic 3D
Stereo display mode: Generic Active Stereo
Enable stereo: On
Vertical sync: On
Is this possible, maybe even with XNA?
I had the same problem a while ago and found that you can use an API called NVAPI, provided by Nvidia. Please see this topic:
How do you prevent Nvidia's 3D Vision from kicking in? (HOW TO CALL NVAPI FROM .NET)
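Calling NVAPI from .NET is a little unusual because nvapi64.dll exports only a single function, nvapi_QueryInterface, through which every other entry point is obtained by a numeric interface ID. A rough sketch of enabling stereo this way; the interface IDs below come from community-published NVAPI headers and should be verified against your NVAPI SDK version before use:

```csharp
using System;
using System.Runtime.InteropServices;

static class NvApiStereo
{
    // The DLL's single real export; everything else is fetched through it.
    [DllImport("nvapi64.dll", EntryPoint = "nvapi_QueryInterface",
               CallingConvention = CallingConvention.Cdecl)]
    static extern IntPtr QueryInterface(uint id);

    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate int NvApiFunc();

    // Interface IDs from community NVAPI documentation (verify these!).
    const uint IdInitialize   = 0x0150E828; // NvAPI_Initialize
    const uint IdStereoEnable = 0x239C4545; // NvAPI_Stereo_Enable

    static int Call(uint id)
    {
        var ptr = QueryInterface(id);
        if (ptr == IntPtr.Zero)
            throw new EntryPointNotFoundException($"NVAPI id 0x{id:X8} not found");
        var fn = (NvApiFunc)Marshal.GetDelegateForFunctionPointer(ptr, typeof(NvApiFunc));
        return fn(); // NVAPI functions return 0 (NVAPI_OK) on success
    }

    public static void EnableStereo()
    {
        Call(IdInitialize);
        Call(IdStereoEnable);
    }
}
```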
