WP7: Render a video from image sequence - c#

I am currently developing a Windows Phone 7 application. What I want to do is render a video file (AVI, WMV, whatever...) from a sequence of images. So I just need something like a frame writer for video files (e.g. to create an in-game video by writing every Xth frame to a video stream).
I searched the whole internet and also Stack Overflow but didn't find anything. As far as I know, there are a lot of APIs and interfaces in the Windows Phone 7 stack for handling audio and video, so I think there must be a solution for this somehow.
BTW: I already had a look at the C# Splicer library and at ffmpeg. Splicer isn't available for Windows Phone 7 and I wasn't able to port it, and ffmpeg isn't an option because it would mean integrating an unmanaged library, which Microsoft doesn't allow.
I hope you can help me.

Would you consider doing this operation on a server? You could upload the images to your host, invoke ffmpeg to create the movie, then push the movie back to the phone. It might even be faster than having the phone do the encoding (assuming you don't have thousands of users hitting the host at the same time, of course!)
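If you go the server route, a rough sketch of the server side (not a finished implementation; the paths, frame rate and frame-naming pattern below are assumptions) could simply shell out to ffmpeg once the frames have been uploaded:

```csharp
// Rough server-side sketch: stitch uploaded frames (frames/frame_0001.png, frame_0002.png, ...)
// into an MP4 by invoking ffmpeg. Paths, frame rate and naming pattern are assumptions.
using System.Diagnostics;

class FrameEncoder
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // -framerate sets the input rate; %04d matches the zero-padded frame counter
            Arguments = "-framerate 30 -i frames/frame_%04d.png " +
                        "-c:v libx264 -pix_fmt yuv420p output.mp4",
            UseShellExecute = false
        };

        using (var ffmpeg = Process.Start(psi))
        {
            ffmpeg.WaitForExit();   // output.mp4 can then be pushed back to the phone
        }
    }
}
```

The phone would then only be responsible for uploading the frames and downloading the finished file.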

Related

Xamarin Android live stream video/audio from device in c#

I am challenged with the task of streaming video and audio (camera and mic) from my Android API 19 devices directly to a server. Displaying it on the device is optional.
Following this link I learned that you can use MediaRecorder with a socket instead of an output file, but they describe this method as too slow (about a 3-second delay) because of the encoding. They also show a second method using preview frames (as MJPEG), but only for video.
Some apps from the store use the same method with an additional audio recorder, which causes a big delay between image and sound. I was hoping someone knows how to live stream from Android to a server over Wi-Fi. C# code would be much appreciated. Third-party libraries are also welcome as long as they are free for commercial use. Any help/ideas are welcome, and thanks in advance.
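For reference, a minimal sketch of the MediaRecorder-over-socket approach described above, written against the Xamarin.Android bindings. The server address, port and encoder settings are placeholders, and the encoding delay mentioned in the question still applies:

```csharp
// Sketch of MediaRecorder writing to a socket instead of a file (Xamarin.Android).
// Host, port and encoder choices are placeholders, not a recommendation.
using Android.Media;
using Android.OS;

public class SocketStreamer
{
    MediaRecorder recorder;
    Java.Net.Socket socket;

    public void Start(Android.Hardware.Camera camera)
    {
        socket = new Java.Net.Socket("192.168.1.10", 5000);        // assumed server endpoint
        var pfd = ParcelFileDescriptor.FromSocket(socket);          // expose the socket as a file descriptor

        camera.Unlock();                                            // hand the camera over to MediaRecorder
        recorder = new MediaRecorder();
        recorder.SetCamera(camera);
        recorder.SetAudioSource(AudioSource.Mic);
        recorder.SetVideoSource(VideoSource.Camera);
        // 3GP is used because the MP4 container wants a seekable output, which a socket is not.
        recorder.SetOutputFormat(OutputFormat.ThreeGpp);
        recorder.SetAudioEncoder(AudioEncoder.AmrNb);
        recorder.SetVideoEncoder(VideoEncoder.H264);
        recorder.SetOutputFile(pfd.FileDescriptor);                 // stream into the socket
        recorder.Prepare();
        recorder.Start();
    }

    public void Stop()
    {
        recorder.Stop();
        recorder.Release();
        socket.Close();
    }
}
```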

Intercept video frames in a WebRTC session for effects processing?

I have a C#/C++ app that captures a video stream from the camera connected to a user's PC. It then does user segmentation using the Intel RealSense SDK to automatically remove the background from the user. What I need to know is how to
insert myself into the video delivery chain so I get each frame, process it, and send it on to the WebRTC module.
The whole desired effect is to make the user look like they are superimposed over the web page. Note, the only browser I need to support is Chrome since I am running the Chromium DLLs in an embedded browser, thanks to the CefSharp project.
The one piece I can't figure out is how to insert myself into the video pipeline so I get notified when a new video frame is available, modify it, and then pass it on to WebRTC in Chromium. I downloaded the Chromium source and can't find the keyword getUserMedia anywhere.
Since you are using a port of Chromium, this means that you should have access to WebGL calls through WebKit.
Instead of doing it on the app-side, try doing it on your HTML.
Since the question is "Intercept video frames in a WebRTC session for effects processing?", I think this post and the demo it links to on GitHub do exactly what you are asking for:
Using WebGL to apply effects to WebRTC video frames
And since WebGL shaders are written in GLSL, a C-like language, you could create a fragment/pixel shader that removes the background. Odds are there's already one out there; it'd be worth googling for it.
Also, if performance concerns you, doing it this way is fast because the shaders run on the GPU, even on mobile.

Create a livestream by XSplit Encoder (RTMP Server)

I'm making a program to do a livestream with the XSplit Encoder (RTMP server). I need a site to watch that stream, and the player needs a button so the spectator can choose their video quality; the stream has to be fluid and of good quality. Can someone explain this or send me a link on how to do that? Please.
(C#)
This is a VERY large undertaking, and an impossible question to answer unless you narrow the scope. You need an ingest server that takes in RTMP. You need a machine with enough CPU power to do all the transcodes. You need a site to play back on. You also need enough bandwidth (a CDN) for all your viewers. How many viewers do you need to support? What platforms do you want to play back on? iOS? Then you need HLS. Web? Then you need RTMP. Or you can use DASH if you're OK with limiting it to modern browsers. Do you want it to play in Firefox? Then it must be Flash, because Firefox does not have native support for the H.264 codec. But Flash won't play on iOS. You can use JWPlayer Premium, which will play HLS in Flash. Actually, is H.264 the codec you intend to use? Have you looked into services such as Zencoder live transcoding, or Wowza with the transcoding module? Amazon offers preconfigured Wowza instances. What is your budget for this project? Why not just use Twitch?
Edit: You can probably string something together using ffmpeg:
https://trac.ffmpeg.org/wiki/StreamingGuide
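As a starting point only, here is a hedged C# sketch that launches ffmpeg to pull the RTMP feed and repackage it as HLS; the ingest URL, output path and encoder settings are assumptions you would need to adapt:

```csharp
// Minimal sketch: relay an RTMP ingest (e.g. from XSplit) to an HLS playlist via ffmpeg.
// Ingest URL, output path and encoding settings are placeholders.
using System.Diagnostics;

class HlsRelay
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-i rtmp://localhost/live/stream " +        // assumed RTMP ingest point
                        "-c:v libx264 -preset veryfast -c:a aac " +
                        "-f hls -hls_time 4 -hls_list_size 6 " +
                        "/var/www/stream/index.m3u8",                // playlist served by your web server
            UseShellExecute = false,
            RedirectStandardError = true                             // ffmpeg writes its log to stderr
        };

        using (var ffmpeg = Process.Start(psi))
        {
            ffmpeg.WaitForExit();
        }
    }
}
```

Multiple quality levels would mean adding extra transcoded renditions and a player that can switch between them, which is where the CPU and CDN questions above come in.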

Simulating Screen capturing as a webcam?

Do we have a way to simulate a webcam driver that will provide the real-time captured screen (30 frames per second) as its output?
This is one of several features of ManyCam (free). It is a virtual webcam driver through which you can stream your real webcam video (with optional real-time video effects), video or image files, or your full/partial desktop.
Yes, just google video2webcam. It works quite well and will loop a video or picture as output.
The driver's job is to provide a level of abstraction between the software and the hardware. The driver is supposed to issue commands to the hardware; it's not responsible for taking pictures and turning them into an animated GIF, for instance. It's going to do low-level stuff like turning the device on and off and sending raw data to a socket.
That being said, if you need to create a virtual device driver, here's an overview of VDDs: Windows Programming/Device Driver Introduction
Generally these are not written in higher-level languages such as C#. Rather, they are written in languages such as C/C++. You will need the KMDF, or Kernel-Mode Driver Framework.
If you just need to access a webcam from a .NET application on a system with a webcam, you just need an API.
Open your browser, go to google.com, and type ".NET webcam API".
You will see something like this:
Webcam in your own application
It appears that this is a wrapper around the DirectShow API.
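As an illustration only (the linked article uses its own wrapper), a hedged sketch of grabbing webcam frames from .NET through the AForge.Video.DirectShow wrapper; the device index and output file name are placeholders:

```csharp
// Illustrative sketch: capture webcam frames in .NET via AForge.Video.DirectShow.
// The first detected device and the output file name are assumptions.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using AForge.Video;
using AForge.Video.DirectShow;

class WebcamGrabber
{
    static void Main()
    {
        // Enumerate video input devices and open the first one.
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        var camera = new VideoCaptureDevice(devices[0].MonikerString);

        camera.NewFrame += (sender, args) =>
        {
            // args.Frame is reused by the capture thread, so clone it before use.
            using (Bitmap frame = (Bitmap)args.Frame.Clone())
            {
                frame.Save("latest.jpg", ImageFormat.Jpeg);   // hand off to your own processing here
            }
        };

        camera.Start();
        Console.ReadLine();                                   // capture until Enter is pressed
        camera.SignalToStop();
        camera.WaitForStop();
    }
}
```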

c# webcam controls?

I am looking to put a camera on top of my lab, which is in the process of being built, and stream it to a website.
How can I do this with only C#? How do I get the video stream and send it live to a server PC, so that instant photos can be taken from there?
Modern web cameras support WIA and DirectShow. WIA has a scripting interface which is more friendly to C#, but it is designed for cameras and scanners and is not that fast for streaming. If you just need to push the image to a server, you don't need to write code: use Windows Media Encoder to push to a Windows Media Server publishing point. You can then get the image from the server's publishing point using DirectShow or the Windows Media Format SDK. None of these are easy in C#, though; you are better off using COM class libraries like ATL for extensive COM programming like this.
If you really want to write this in C#, I've had a lot of success with Emgu CV.
Capturing images is very straightforward - see this question. After that, it'd be FTP to the server as usual.
I'm curious about Sheng Jiang's Media Encoder solution though. Let me know how you get on.
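For completeness, a minimal sketch of the capture-then-upload flow described above, assuming Emgu CV for the capture and a plain FTP upload; the camera index, file name and FTP address/credentials are placeholders:

```csharp
// Rough sketch: grab one frame with Emgu CV, save it as JPEG, then FTP it to the server.
// Camera index, file name and FTP address/credentials are placeholders.
using System.Net;
using Emgu.CV;

class SnapshotUploader
{
    static void Main()
    {
        using (var capture = new VideoCapture(0))             // first attached webcam
        using (Mat frame = capture.QueryFrame())              // grab a single frame
        {
            CvInvoke.Imwrite("snapshot.jpg", frame);          // encode the frame to JPEG on disk

            using (var client = new WebClient())
            {
                client.Credentials = new NetworkCredential("user", "password");
                client.UploadFile("ftp://example.com/snapshots/snapshot.jpg", "snapshot.jpg");
            }
        }
    }
}
```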
