Recording SWF and Converting to FLV - C#

I have tons of videos in a database. They can't be accessed directly, but I can play them one by one and record them. Now I want to write a program (probably in C#) that takes a URL and starts Internet Explorer, or any other default browser, to open the link. Once the link is opened, the video starts playing.
Now my job is to record the video for "x" seconds along with the audio. I can record the video by taking screenshots very frequently, but what about the audio and its quality? Do I need to put a microphone in a soundproof room next to the speaker to record it, or can I pull the audio directly off the sound card before it reaches the speakers?
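A minimal sketch of the "pull the audio off the sound card" idea, assuming the third-party NAudio library (an assumption, it is not mentioned anywhere in this thread), which can capture the default output device via WASAPI loopback:

// Sketch only: assumes the NAudio NuGet package is referenced.
// Records whatever plays through the default render device for a fixed duration.
using System;
using System.Threading;
using NAudio.Wave;

class LoopbackRecorder
{
    static void Main()
    {
        var capture = new WasapiLoopbackCapture();
        var writer = new WaveFileWriter("capture.wav", capture.WaveFormat);

        capture.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);
        capture.RecordingStopped += (s, e) => { writer.Dispose(); capture.Dispose(); };

        capture.StartRecording();
        Thread.Sleep(TimeSpan.FromSeconds(30)); // record "x" seconds
        capture.StopRecording();
    }
}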
Any ideas?
Umair

This is not wise at all. You will have a huge quality loss recording video from the screen and re-encoding it, not to mention the time this will take.
You should find a way to access those videos directly from the database, and run them through a converter like ffmpeg.
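A minimal sketch of driving ffmpeg from C#, assuming ffmpeg is installed and on the PATH; the input/output paths are placeholders:

// Sketch only: shells out to ffmpeg to convert one file to FLV.
using System.Diagnostics;

class Converter
{
    static void ConvertToFlv(string inputPath, string outputPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-i \"" + inputPath + "\" \"" + outputPath + "\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}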

What about using rtmpdump to retrieve the video stream as a .flv? Of course, you will need to parse the stream information from the respective web pages, but that should be manageable.
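A rough sketch of invoking rtmpdump from C# once the rtmp:// URL has been parsed out of the page; the flag names (-r, -o, --stop) should be double-checked against the rtmpdump version in use:

// Sketch only: assumes rtmpdump is installed and the stream URL was already
// extracted from the page. -r = stream URL, -o = output file, --stop = seconds to capture.
using System.Diagnostics;

class StreamDownloader
{
    static void Download(string rtmpUrl, string outputFlv, int seconds)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "rtmpdump",
            Arguments = "-r \"" + rtmpUrl + "\" -o \"" + outputFlv + "\" --stop " + seconds,
            UseShellExecute = false
        };
        var process = Process.Start(psi);
        process.WaitForExit();
    }
}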

Related

Send a slice of a Video File through HttpHandler

I am a .NET developer who has written a CMS system for the intranet of a specific company.
Our client can upload videos and other media there and let his employees and customers view them alongside other information.
We use a standard HttpHandler to fetch the uploaded video from the server's hard disk and context.Response.TransmitFile() it to the browser.
So we can use this handler as the target of an HTML5 src attribute.
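For context, a minimal sketch of such a handler; the path lookup and MIME type are assumptions, not the actual CMS code:

// Sketch only: serves an uploaded video file verbatim through an IHttpHandler.
using System.Web;

public class VideoHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical lookup: resolve the requested video id against the upload folder.
        string path = context.Server.MapPath(
            "~/App_Data/videos/" + context.Request.QueryString["id"] + ".mp4");

        context.Response.ContentType = "video/mp4";
        context.Response.TransmitFile(path);
    }
}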
Now I've gotten a request to sort of "emulate" a video stream. The idea is that the client uploads the video as a file, sets a specific start date from which the video should be viewable, and every request to the video should then return only the slice of the video from that start date to now.
Sort of pretending this video were a live stream that moves forward on its own.
I tried adapting the HttpHandler to calculate the number of seconds between the start date and the current request time, multiply it by the bitrate of the video, and then simply cut off x bytes from the stream (for example using Stream.Seek), but the resulting data does not get recognized by browsers as a valid video stream. I guess this is because of missing (cut away) header information, key frames, etc.
Does anybody know a library that allows me to do this (cutting the video into slices without writing them to disk; I don't want a video file lying around for every request that lands on my HttpHandler)?
The video is in MP4 format and I would like to avoid the additional overhead of having to transcode it (like VLC requires when you use it for streaming).
Any ideas on this topic? I'm kind of lost...!?
Thanks in advance.
Chris
Clarification:
I do not know beforehand how much to cut off the video; that depends on the moment the stream is requested.
The formula is easy: date of the request (DateTime.Now) minus the configured start time of the video. This timespan has to be "skipped" from the start of the video.
Ideally I would like a library which allows me to load the file as a FileStream, skip x seconds, and write the remaining bytes/frames to the output of the HttpHandler. But I have no idea how to do this, as VLC and FFmpeg seem to only support slicing by writing files, not giving me the sliced data as a stream...
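For reference, the byte-offset seek described above might look like the sketch below (the bitrate constant, paths, and helper are assumptions); as noted, the output is rejected by browsers because the MP4 headers and keyframe boundaries are cut away:

// Sketch of the naive approach from the question: skip bytes based on elapsed
// time multiplied by an assumed constant bitrate. Illustrates the calculation
// only; the result is NOT a valid MP4 stream.
using System;
using System.IO;
using System.Web;

public class PseudoLiveHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/App_Data/videos/stream.mp4"); // hypothetical path
        DateTime startTime = GetConfiguredStartTime();                        // hypothetical config lookup
        const long bytesPerSecond = 250000;                                   // assumed average bitrate

        TimeSpan elapsed = DateTime.Now - startTime;
        long offset = (long)(elapsed.TotalSeconds * bytesPerSecond);

        using (var fs = File.OpenRead(path))
        {
            fs.Seek(Math.Min(Math.Max(offset, 0), fs.Length), SeekOrigin.Begin);
            context.Response.ContentType = "video/mp4";
            fs.CopyTo(context.Response.OutputStream); // remaining bytes: not playable
        }
    }

    private DateTime GetConfiguredStartTime()
    {
        return DateTime.Today; // placeholder for the configured start date
    }
}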

Grab Audio Sessions as they appear in the Windows audio mixer (C# or C++)

I'm trying to figure out how to grab the individual audio streams as they appear in the audio mixer in order to reroute them to an aggregate audio device. I'm specifically looking to keep them as discrete streams for the purposes of the program I'm making (if they're mixed down to a 2-channel mix, that defeats the purpose of what I'm trying to achieve).
For example (as I've just made this account, I apparently am not able to post images, so here's a link to the image):
windows audio mixer
In this, I'm hoping to grab "System Sounds" and "Stream Client Bootstrapper" as discrete audio streams to route elsewhere, while maintaining their original destination as well (essentially copying the audio going to the original audio device to another device simultaneously).
I'm looking to do this in either C# or C++. I've perused the audio APIs that Microsoft has published, and while some things look close to what I'm trying to do, nothing has hit the nail on the head. I appreciate any help. Thanks.
The sessions can be enumerated using IAudioSessionManager2::GetSessionEnumerator and friends (sample C++ code is here and there). The standard Windows volume mixer application uses this API as well.
The API, however, has no access to the data streams, and neither will you (you certainly don't get the data, downmixed or not). Nor can you reroute streams to another device; applications are not allowed to interfere that deeply. The best you can do is create your own device, interactively select it as the default output device, and then accept data from applications playing audio through this device.
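As a rough illustration of the enumeration part only (not the rerouting, which, as described above, is not possible at this level), a sketch using the third-party NAudio wrapper around the same Core Audio APIs; NAudio itself is an assumption, the answer only refers to the native interfaces:

// Sketch only: lists the audio sessions on the default render device,
// roughly what the Windows volume mixer shows. Requires the NAudio package.
using System;
using NAudio.CoreAudioApi;

class SessionLister
{
    static void Main()
    {
        var enumerator = new MMDeviceEnumerator();
        var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
        var sessions = device.AudioSessionManager.Sessions;

        for (int i = 0; i < sessions.Count; i++)
        {
            var session = sessions[i];
            Console.WriteLine("PID {0}: {1}", session.GetProcessID, session.DisplayName);
        }
    }
}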

Implement media monitoring (like radio ad monitoring)

I want to develop audio monitoring software, for example to know how many ads of a certain company were broadcast on x radio station.
Is there any way to analyse the audio stream in "realtime" and detect when any version of an ad is played on the radio?
Or is the best way to analyse an audio fragment every x seconds? If that is the way to go, how can I tell whether a segment of an audio file contains the sample audio (for example, analyse 20 minutes of radio and return true if the spot (ad) was played in that audio sample)?
(Sorry for my English, I hope it is understandable)
I guess realtime could be difficult, due to the fact that you have to analyze the radio stream as it arrives. For that you need to cache, analyze/fingerprint it and run it against an existing database.
But take a look on these questions:
https://stackoverflow.com/questions/2462410/acoustic-fingerprint-opensource
Musicbrainz fingerprinting
More Links:
http://acoustid.org
https://musicbrainz.org/doc/Fingerprinting
http://echoprint.me // service by spotify / echonest
https://www.audiblemagic.com/broadcast-infrastructure
Good luck.
An excellent open-source audio fingerprinting library in Python can be found here:
http://github.com/worldveil/dejavu
It allows you to fingerprint an audio file once, store the fingerprints in a database, and do continual recognition and adding fingerprints as time goes on.
You can even fingerprint small sections of a song to save disk storage if you are just doing on-disk deduplication.

How to get sound portion of an MP4 (video file)?

I am developing a Windows Phone 7 application that does video recording. I would like to get the sound portion of the video file (MP4) and do some enhancements on the sound. I believe the sound is saved as AAC frames in the MP4. (Right?) How can I extract the sound from a video MP4 file?
Since this is a video file, it can be huge, so uploading it to the cloud and processing it there is not a good option. Since this is a WP7 application I cannot use unmanaged DLLs :( Is there a way to do it in pure C#? Any open source tools/samples?
Thanks!
MP4 is a container format and realistically the sound portion isn't always AAC. It could be MP3 or any other number of different audio formats. You may be thinking of M4A, which I believe requires either AAC or ALAC.
On the subject of audio extraction, it should be possible to extract the audio from an MP4 using just managed code. You'll have to read up on the MP4 format (here, for example - this question is also worth reading) and then search through the file for the location of the audio, then either copy it to its own buffer or do your manipulations in chunks. Even then, you'll have to be able to recognize when the audio is in a format that your app doesn't support.
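As a starting point for the "read up on the MP4 format" part, a minimal managed-code sketch that walks the top-level boxes (atoms) of an MP4 file; the file name is a placeholder, and locating the actual audio track inside moov/mdat takes considerably more parsing than this:

// Sketch only: lists top-level MP4 boxes (size + four-character type).
// Each box header is a 4-byte big-endian size followed by a 4-byte type;
// a size of 1 means a 64-bit extended size follows, 0 means "to end of file".
using System;
using System.IO;
using System.Text;

class Mp4BoxWalker
{
    static void Main()
    {
        using (var fs = File.OpenRead("video.mp4")) // hypothetical file name
        using (var reader = new BinaryReader(fs))
        {
            while (fs.Position + 8 <= fs.Length)
            {
                long boxStart = fs.Position;
                byte[] header = reader.ReadBytes(8);

                uint size32 = ((uint)header[0] << 24) | ((uint)header[1] << 16)
                            | ((uint)header[2] << 8) | header[3];
                string type = Encoding.ASCII.GetString(header, 4, 4);

                long size = size32;
                if (size32 == 1)
                    size = ReadBigEndianInt64(reader);   // extended 64-bit size
                else if (size32 == 0)
                    size = fs.Length - boxStart;         // box extends to end of file

                Console.WriteLine("{0} at {1}, {2} bytes", type, boxStart, size);
                fs.Position = boxStart + size;           // jump to the next top-level box
            }
        }
    }

    static long ReadBigEndianInt64(BinaryReader reader)
    {
        byte[] bytes = reader.ReadBytes(8);
        Array.Reverse(bytes);
        return BitConverter.ToInt64(bytes, 0);
    }
}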
It's possible that there already exists a .NET library that can do all of this, but I don't know of any. It's probably not very popular because managed code is definitely not the best angle to approach this from, but considering this is Windows Phone, it is, as you noted, your only avenue of approach.
Good luck!

Audio Sync problems using DirectShow.NET

I have started a thread on this in the DirectShow.NET forum; here is the link: http://sourceforge.net/projects/directshownet/forums/forum/460697/topic/5194414/index/page/1
but unfortunately the problem still persists...
I have an application that captures video from a webcam and audio from the microphone and saves them to a file. For some reason the audio and video are never in sync. I tried the following:
1. Started with the ffdshow encoder and changed to AVI Mux - the problem persists: the audio is delayed, and at the end of the video the picture remains frozen while the audio continues.
2. Changed from AVI Mux to WM ASF Writer - the video is frozen at the beginning (2 seconds) and the rest of the video is in sync (but the first two seconds are not usable).
3. Created a SampleGrabber that prints the timestamps for both audio and video - saw that the audio timestamp is 500 ms earlier, but I have no idea what to do with this fact...
4. Tried manually setting the reference clock to one of the capture filters (audio/video), but neither will cast to IReferenceClock.
5. Created a SystemClock and set it as the reference clock - made no difference.
6. Set SyncUsingStreamOffset(true) on the graph - the timestamps are much closer now but the final result is the same.
7. Tried saving the audio and video to two different files and used VirtualDub to check whether they match; they still don't...
Oh, I forgot to mention I also tried building the graph in GraphEditPlus but the problem still remains. Here's a link to the graph: http://www2.picturepush.com/photo/a/8030745/img/8030745.png
Currently I am testing all my changes on the CapWMV sample from DirectShow.NET's samples.
Please, any advice would be highly appreciated, I am hopeless :/
Thanks,
Eran.
Update:
It seems there's a constant 500 ms gap between the audio and video. If I use VirtualDub to delay the audio by 500 ms it looks fine; how can I set this in the graph?
You are seeing latency on the audio stream equal to the size of the capture buffer. That is, you obtain the full buffer, which started being captured 0.5 seconds earlier. You need to use smaller buffers and/or apply an offset to the buffers to adjust for the latency.
See:
Minimizing audio capture latency in DirectShow
How to eliminate 1 second delay in DirectShow filter chain? (Using Delphi and DSPACK)
IAMBufferNegotiation is the keyword.
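A rough sketch of suggesting smaller capture buffers on the audio capture pin with DirectShow.NET; the 50 ms figure and the assumed wave format are placeholders, and this has to run before the pins are connected:

// Sketch only: asks the audio capture pin for smaller buffers so the audio
// latency shrinks from ~500 ms towards ~50 ms. Uses DirectShowLib types.
using DirectShowLib;

static class AudioLatency
{
    public static void SuggestSmallBuffers(IBaseFilter audioCaptureFilter)
    {
        // Find the output (capture) pin of the audio capture filter.
        IPin pin = DsFindPin.ByDirection(audioCaptureFilter, PinDirection.Output, 0);
        var negotiation = pin as IAMBufferNegotiation;
        if (negotiation == null)
            return; // pin does not support buffer negotiation

        // Assumed format: 44.1 kHz, 16-bit, stereo => 176400 bytes per second.
        const int bytesPerSecond = 44100 * 2 * 2;
        var props = new AllocatorProperties
        {
            cBuffers = 4,
            cbBuffer = bytesPerSecond / 20, // roughly 50 ms per buffer
            cbAlign = -1,                   // -1 = no preference
            cbPrefix = -1
        };

        int hr = negotiation.SuggestAllocatorProperties(props);
        DsError.ThrowExceptionForHR(hr);
    }
}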
Just wanted to add the solution for my situation, maybe it will help someone.
I was trying to record video from a webcam together with audio from a microphone. The video is HD (1080p), so I wanted to save an AVI file encoded in MPEG-4, and I used ffdshow tryouts (a free MPEG-4 encoder) together with an AVI Mux filter. The problem was that some (well, most of them :) ) of my videos had sync issues.
What I discovered was that AVI Mux does not handle synchronization; it assumes the data arrives at the appropriate time (documented here - http://msdn.microsoft.com/en-us/library/dd407208(v=vs.85).aspx). So I tried using WM ASF Writer, which does handle synchronization, and it worked fine (the 2-second freeze I mentioned above was just a glitch in VLC player), but it doesn't work well with high resolutions and I had trouble using it with custom profiles (filters won't connect).
I also tried Roman's suggestion, and although the links were very interesting and promising (I really recommend reading them - can't give +1 to a post yet...) it just didn't make any difference :/
My final solution was to give up on MPEG-4 and just use MPEG-2: I switched from AVI Mux to the Microsoft MPEG-2 Encoder, which works great. Should have thought about that a long time ago :)
Hopefully this will help someone else.
Thanks,
Eran.
I had the same problem rendering video from WMV to AVI using the Xvid MPEG-4 decoder.
My final solution, without giving up MPEG-4, was to configure the AVI Mux filter by setting the master stream via IConfigAviMux::SetMasterStream.
As explained in the Capturing Video to an AVI File article on MSDN:
If you are capturing audio and video from two separate devices, it is a good idea to make the audio stream the master stream. This helps to prevent drift between the two streams, because the AVI Mux filter adjusts the playback rate on the video stream to match the audio stream.
Example code:
IConfigAviMux _filterAVIMuxerCfg = (IConfigAviMux)_filterAVIMuxer;
_filterAVIMuxerCfg.SetMasterStream(0); // the audio stream was added to the mux first, so it is stream 0 ;)
