I am trying to access the HoloLens locatable camera at 30 FPS with the minimum possible latency. I have tried using WebCamTexture, but it has significant latency along with frame drops. I also used the MediaCapture example (which seems to be significantly faster), but it displays on a 2D CaptureElement. Is there any way to get a byte array of each frame using the MediaCapture API so that I can render it onto a cube's texture in Unity3D?
We made an open source project called CameraStream to address this need. As a Unity plugin, it uses MediaCapture to feed the byte array into Unity (along with locatable matrices). From there you can assign the bytes to a Texture2D, as shown in the provided Unity example.
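A minimal sketch (Unity C#) of that last step: pushing the byte array the plugin hands you into a Texture2D on a cube's material. The callback name, frame dimensions, and the BGRA32 layout are assumptions for illustration; check the plugin's sample scene for the exact hookup.

```csharp
using UnityEngine;

public class FrameToCubeTexture : MonoBehaviour
{
    public Renderer cubeRenderer;   // the cube's MeshRenderer
    Texture2D _videoTexture;

    // Call this from the plugin's frame-arrived callback, on the main thread.
    public void OnFrame(byte[] frameData, int width, int height)
    {
        if (_videoTexture == null || _videoTexture.width != width || _videoTexture.height != height)
        {
            // Assumed BGRA32 layout; adjust to whatever the capture pipeline delivers.
            _videoTexture = new Texture2D(width, height, TextureFormat.BGRA32, false);
            cubeRenderer.material.mainTexture = _videoTexture;
        }

        _videoTexture.LoadRawTextureData(frameData); // copy the raw bytes into the texture
        _videoTexture.Apply();                       // upload to the GPU
    }
}
```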
I'm very new to Unity and C#. I'm creating an application that streams the camera output from Unity on the desktop to a web browser with WebRTC. I'm using this solution:
Unity camera → Syphon → CamTwist → broadcast client → smart phone.
First, I've sent the window size from the smart phone to Unity (done). Next, I have to stream video from Unity to Syphon at a resolution that matches the smart phone's.
My question is: how can I adjust the resolution of Syphon from Unity? Is there another good approach?
I know I could instead work out how to change the resolution in CamTwist, but I'm hoping for a different system.
I found a solution in the Klack Syphon test project.
Setting the camera size in Klack Syphon is a little trickier than in other versions such as Funnel or Syphon for Unity.
If you want to set the camera size for Klack Syphon:
Create a Render Texture and set its size
Add a Camera object
Set the Render Texture in the Camera's Target Texture field
Add an empty GameObject, attach the Syphon Server (script) component, and set the Render Texture in its Source Texture field
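The same setup can be done from a script, which is handy when the resolution comes from the phone at runtime. This is only a sketch: the SyphonServer component and its sourceTexture field are assumptions about what the Klack Syphon plugin exposes, so adjust the names to match the actual component.

```csharp
using UnityEngine;

public class SyphonResolution : MonoBehaviour
{
    public Camera streamCamera;   // the camera whose output should be streamed

    public void SetStreamResolution(int width, int height)
    {
        var rt = new RenderTexture(width, height, 24);  // 24-bit depth buffer
        streamCamera.targetTexture = rt;                // camera now renders into the RT

        // Then point the Syphon Server at the same RenderTexture, e.g.:
        // GetComponent<SyphonServer>().sourceTexture = rt;  // hypothetical field name
    }
}
```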
I'm currently developing using Unity 3D; however, my background is Apple development. In iOS/OS X development I can use AVFoundation to load an MP4 and get frame-by-frame data from it, in BGRA format, as it plays.
I was wondering what the equivalent of this is for .NET (Unity uses .NET 2.0)?
I know it's possible to call functions in an Objective-C++ file from Unity, but I need a way to do this on any platform, not just iOS.
AFAIK there is no built-in way to access the video frame data on mobile in Unity. Playing videos in Unity on mobile devices is just sad. All they offer out of the box basically only works for full-screen videos, like cut scenes.
If you want to do more complex things, like pipe a video to a texture, you have two options:
Mobile Movie Texture
Easy Movie Texture
Note: there are more available on the asset store, but these are the two we use.
The best option?..
Easy Movie Texture
We use Easy Movie Texture for our VR apps (Gear VR & Cardboard). There we decode up to 4k videos on an S6.
For stereo videos we use a component that comes with the plugin, VideoCopyTexture, which copies the texture data from the video into a material (to avoid duplicate decoding & rendering). It references MediaPlayerCtrl.GetVideoTexture(), which returns a Unity Texture2D, so from there you should have access to all the data you need!
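A small sketch (Unity C#) of that idea: grab the video texture once and feed it to the material of another renderer, so the frame is decoded only once. MediaPlayerCtrl and GetVideoTexture() are the Easy Movie Texture names mentioned above; the exact API may differ between plugin versions.

```csharp
using UnityEngine;

public class VideoCopyTextureLike : MonoBehaviour
{
    public MediaPlayerCtrl player;   // Easy Movie Texture's controller component
    public Renderer target;          // e.g. the second eye's quad for stereo

    void Update()
    {
        Texture2D frame = player.GetVideoTexture();
        if (frame != null)
        {
            target.material.mainTexture = frame;  // reuse the already-decoded frame
        }
    }
}
```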
The not so great option, but we kind of use it so I thought I would mention it anyway...
Mobile Movie Texture
Mobile Movie Texture only works with Ogg-encoded videos. I don't think it gives you access to the raw frame data; I would invite you to contact the dev directly or check the docs.
But that can be "hacked": use a render texture and a separate camera that looks at a plane/quad with the video playing on it, then grab that texture's bitmap data. It's not ideal, but it should do the trick.
Long story short: I use a similar render-to-texture mechanism to pipe the video data to Scaleform in Unity and didn't see any performance loss. Mobile Movie Texture is not the fastest to start with, and in our case it even decodes in software on some devices. But it works everywhere, so we only use it for small videos.
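Here is a rough sketch (Unity C#) of the render-texture "hack": a second camera renders the quad playing the video into a RenderTexture, and the pixels are read back on the CPU. ReadPixels stalls the GPU, so keep it for small videos.

```csharp
using UnityEngine;

public class VideoFrameGrabber : MonoBehaviour
{
    public Camera videoCamera;       // camera that only sees the video quad
    public RenderTexture videoRT;    // assigned as that camera's Target Texture
    Texture2D _readback;

    public Color32[] GrabFrame()
    {
        if (_readback == null)
            _readback = new Texture2D(videoRT.width, videoRT.height, TextureFormat.RGBA32, false);

        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = videoRT;
        _readback.ReadPixels(new Rect(0, 0, videoRT.width, videoRT.height), 0, 0);
        _readback.Apply();
        RenderTexture.active = previous;

        return _readback.GetPixels32();  // raw bitmap data of the current frame
    }
}
```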
Cheers,
J.
I want to create an AR application using Kinect Fusion.
I want to insert a 3D model into my Kinect Fusion reconstruction. The expected output is like these two videos:
https://www.youtube.com/watch?v=gChlRebNloA
https://www.youtube.com/watch?v=MvP4cHfUD5g
How do I overlay the 3D object onto the 3D reconstruction?
Is there a function in the SDK that can be used to achieve my goal?
Thanks
ps: I use C#
You should probably start by exploring the KinectFusionExplorer-WPF Kinect Fusion sample available on CodePlex:
http://kinectforwindows.codeplex.com/SourceControl/latest#v1.x/ToolkitSamples1.8.0
KinectFusionExplorer-WPF has some nice utility methods for matrix transformations, (virtual) camera configuration, etc.
The process for doing the AR augmentation is:
viewport: set your viewport using the same resolution as the camera image,
video background: retrieve the video image from the Kinect and display it in the background (either as done in the SDK examples or, more traditionally, using a textured quad with the video image),
geometric registration: use WPF3D and set up a Camera object using the intrinsic parameters of the Kinect camera (or the same ones used by KF) and the camera pose you get via GetCurrentWorldToVolumeTransform() from your volume,
rendering: set any local transformation you want for your model (XAML) and render it the way it's generally done with WPF3D.
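A very rough sketch (C#/WPF3D) of the geometric-registration step above: take the current Kinect Fusion camera transform and drive a WPF MatrixCamera with it. The row-major Matrix4-to-Matrix3D copy and the externally supplied projection matrix are simplifying assumptions; the KinectFusionExplorer-WPF sample contains the full, correct matrix utilities.

```csharp
using Microsoft.Kinect.Fusion;
using System.Windows.Media.Media3D;

static class FusionCameraHelper
{
    // Copy a Kinect Fusion Matrix4 into a WPF Matrix3D (assumed row-major layout).
    public static Matrix3D ToMatrix3D(Matrix4 m)
    {
        return new Matrix3D(
            m.M11, m.M12, m.M13, m.M14,
            m.M21, m.M22, m.M23, m.M24,
            m.M31, m.M32, m.M33, m.M34,
            m.M41, m.M42, m.M43, m.M44);
    }

    // cameraPose: the transform retrieved from your volume as described above.
    // projection: built from the Kinect camera intrinsics (not shown here).
    public static MatrixCamera MakeCamera(Matrix4 cameraPose, Matrix3D projection)
    {
        return new MatrixCamera(ToMatrix3D(cameraPose), projection);
    }
}
```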
I'm developing a game like "Bubble Bobble". So far I have done physics and collision detection.
Now I want to make my hero (a rectangle sprite) animated. I would be glad if someone could explain simple scripting for simple animated characters, or point me to some nice links about animation.
The XNA Documentation includes an entire article on Animating a Sprite. The basic technique is to use an AnimatedTexture class, which is included within the Animated sprite sample code.
The high level idea is that you load a texture into memory using a graphics API. Since you're using C#, this is most likely done through XNA.
The texture you have loaded contains each frame of animation that is required, and it may span multiple textures. When you render your 'sprite' object, you pass the XNA API the texture you want to use and the source rectangle coordinates that surround the specific frame of animation within that texture.
It's up to you to manage this process. I create tools that assemble these source rectangles and store metadata about each specific animation a sprite has: which rectangles, the duration of each frame, etc.
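A minimal sketch (C#/XNA) of that technique: the source rectangle selects the current frame inside the sprite sheet, and a timer advances it. The frame size, count, and duration are made-up values for illustration.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class AnimatedHero
{
    Texture2D heroSheet;          // loaded in LoadContent via Content.Load<Texture2D>("hero")
    int frame = 0;
    const int FrameCount = 8;     // frames laid out in one row of the sheet
    const int FrameWidth = 32, FrameHeight = 48;
    float frameTimer = 0f;
    const float FrameDuration = 0.1f;   // seconds per frame

    public AnimatedHero(Texture2D sheet) { heroSheet = sheet; }

    public void Update(GameTime gameTime)
    {
        frameTimer += (float)gameTime.ElapsedGameTime.TotalSeconds;
        if (frameTimer >= FrameDuration)
        {
            frameTimer -= FrameDuration;
            frame = (frame + 1) % FrameCount;   // loop the animation
        }
    }

    public void Draw(SpriteBatch spriteBatch, Vector2 position)
    {
        // Rectangle of the current frame inside the sheet.
        var source = new Rectangle(frame * FrameWidth, 0, FrameWidth, FrameHeight);
        spriteBatch.Draw(heroSheet, position, source, Color.White);
    }
}
```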
I want to calculate the depth of an image so that I can eliminate far objects from it.
Are there any methods to do so in C# with a single camera?
This website shows how to get a webcam image using C#. However, just like a photo, it is flat so there is no way to distinguish objects at different distances from the camera. In general, with just one camera and a single photo/image, what you want is impossible.
With one or two cameras that snap two images/photos with some distance in between, you can distinguish depth (just like you do using your two eyes). However, this requires very complex mathematics to first identify the objects and second determine their approximate distance from the camera.
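For reference, the geometry behind the two-image approach is the standard stereo triangulation relation: a point's depth Z = f * B / d, where f is the focal length in pixels, B the baseline (distance between the two camera positions) and d the disparity (horizontal pixel shift of the same point between the two images). Finding those matching points is the complex part mentioned above; the final arithmetic is trivial, as this hedged C# sketch shows.

```csharp
static class StereoDepth
{
    // Depth from disparity: Z = f * B / d.
    public static double DepthFromDisparity(double focalLengthPixels, double baselineMeters, double disparityPixels)
    {
        if (disparityPixels <= 0)
            return double.PositiveInfinity;  // no shift means the point is effectively at infinity
        return focalLengthPixels * baselineMeters / disparityPixels;
    }
}
```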
Kinect uses an infrared camera that creates a low-resolution image to measure the distance to objects in front of the camera, so that it can distinguish the player from the background. I read somewhere that Kinect cameras can be attached to a normal computer, but I don't know about the software or mathematics you'll need.
If you illuminate a straight line with a laser at an angle to the scene, the displacement of the line will correspond exactly to the height of the object. This only gives the height along a single line, subject to the resolution of your camera. If you need a complete 3D scan you'll need to move the laser and take multiple pictures.
A C# reference would be needed for each frame as the streaming video comes in. At the start of the stream, the subject would need to turn their head and spin so that a series of measurements can be captured. That could then be fed to a second, virtual camera (for example in Unity 3D) that transposes a 3D image over the top of the streamed image. There are a lot of mobile phone apps that can capture 3D objects from a series of still frames; I had one on my Galaxy S6. Also, the Galaxy S6 and up have a depth chip in their cameras, which they sold to iPhone and which is used in the Apple 3D camera. I have been thinking about how to do this as well and would love to email you about it. Note that it would be a similar concept to facial recognition software.