I want to play a live video stream from the internet by providing a URL to the MediaElement XAML control. Everything works fine if the video is .mp4. But if the video was taken on an iPhone and saved as .mov, the MediaElement shows nothing and only the Loading event is fired. (About 5 minutes after the download completes, the other events fire too.)
Here is the MediaElement initialization:
MediaElement media = new MediaElement();
media.MediaFailed += Media_MediaFailed;
media.PartialMediaFailureDetected += Media_PartialMediaFailureDetected;
media.MediaOpened += Media_MediaOpened;
media.DownloadProgressChanged += Media_DownloadProgressChanged;
media.Loading += Media_Loading;
media.MediaEnded += Media_MediaEnded;
media.Source = new Uri(URL);
If I download the video manually via HttpClient and then provide Stream.AsRandomAccessStream() to the MediaElement, the video starts playing immediately. It seems like MediaElement does not support live streaming for .mov/QuickTime videos. Is that true?
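For reference, the manual-download workaround described above can be sketched roughly like this (a minimal sketch, not the exact code from the question; the "video/quicktime" MIME type and buffering the whole file into memory are assumptions):

// Sketch: download the whole file first, then hand MediaElement a random-access stream.
// Assumes a UWP/WinRT app; "video/quicktime" is an assumed MIME type for the .mov file.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.UI.Xaml.Controls;

public static class MovWorkaround
{
    public static async Task PlayDownloadedAsync(MediaElement media, string url)
    {
        using (var http = new HttpClient())
        {
            byte[] bytes = await http.GetByteArrayAsync(url); // full download, no live streaming
            var memory = new MemoryStream(bytes);
            media.SetSource(memory.AsRandomAccessStream(), "video/quicktime");
            media.Play();
        }
    }
}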
Unfortunately, .mov is not supported by the built-in MediaElement control.
The .mov video type is a proprietary format from Apple Inc. for the QuickTime player, which means it will probably never be supported natively.
The complete list of supported formats and codecs is here on MSDN.
Edit: It seems the MediaElement control can actually use codecs installed on the PC to directly play otherwise unsupported formats, including .mov.
The real problem was in the order of the container atoms. As explained here:
In order to start playing a movie file right away, its metadata contained in the "moov" atom is paramount to the player. If the movie file atoms are ordered as previously described, everything works as expected... but most video editors (ffmpeg, QuickTime, Flash video) generate atoms in the wrong order (as seen on the right): with the "moov" atom last.
And the provided solution:
The only solution is to fix those files and reorder the atoms inside. This can be done:
The oldskool way, using some C code from FFMPEG's qt-faststart.c. The code moves the "moov" atom to the top of the file and updates all the "stco" sub-atom pointers by adding the proper offset.
Using the iOS AV Foundation framework and a few lines of Objective-C (you can also convert from MOV to MP4, since Android cannot read MOV):
#import <AVFoundation/AVAsset.h>
#import <AVFoundation/AVAssetExportSession.h>
#import <AVFoundation/AVMediaFormat.h>

+ (void)convertVideoToMP4AndFixMooV:(NSString *)filename toPath:(NSString *)outputPath
{
    NSURL *url = [NSURL fileURLWithPath:filename];
    AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
    AVAssetExportSession *exportSession = [AVAssetExportSession
        exportSessionWithAsset:avAsset
        presetName:AVAssetExportPresetPassthrough];
    exportSession.outputURL = [NSURL fileURLWithPath:outputPath];
    exportSession.outputFileType = AVFileTypeAppleM4V;
    // This should move the moov atom before the mdat atom,
    // hence allow playback before the entire file is downloaded
    exportSession.shouldOptimizeForNetworkUse = YES;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            // export finished successfully
        }
        else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusFailed");
        }
        else {
            NSLog(@"Export Session Status: %ld", (long)exportSession.status);
        }
    }];
}
As I am also developing an iOS client, this solved my issue.
I have very little experience with DirectShow. So far I have managed to play a .wav file over a particular output device while being able to control its volume and get/set its track position. Basically I'm able to create a very simple sound player application.
Here is the code I’m currently using:
//select an output device
DsDevice[] devices = DsDevice.GetDevicesOfCat(FilterCategory.AudioRendererCategory);
DsDevice device = (DsDevice)devices[xxx];

//define a source
Guid iid = typeof(IBaseFilter).GUID;
object source = null;
device.Mon.BindToObject(null, null, ref iid, out source);

//build a basic graph to play the sound
IGraphBuilder player_gra = (IGraphBuilder)new FilterGraph();
player_gra.AddFilter((IBaseFilter)source, "Audio Render");
player_gra.RenderFile(@"test.wav", "");

//start the sound
(player_gra as IMediaControl).Run();

//control the volume
(player_gra as IBasicAudio).put_Volume(volume);

//get/set position on the track
(player_gra as IMediaPosition).get_Duration(out out1); //how long the track is
(player_gra as IMediaPosition).get_CurrentPosition(out out2);
(player_gra as IMediaPosition).put_CurrentPosition(yyy);
What I would like to do now is to play several .wav files simultaneously while being able to control the volume and track-position of each file at runtime.
My first attempt was to create several IGraphBuilder instances and run them at the same time, but it seems that only one can play at a time while the others wait until the currently playing one is terminated via:
Marshal.ReleaseComObject(player_gra);
My second attempt was to give the IGraphBuilder several files to render before starting it.
…
player_gra.RenderFile(@"testA.wav", "");
player_gra.RenderFile(@"testB.wav", "");
player_gra.RenderFile(@"testC.wav", "");
…
This way the files are played simultaneously but I see no way to control the volume of each individual sound, much less its position on the audio track.
Thank you in advance ;-)
In these lines
DsDevice[] devices = DsDevice.GetDevicesOfCat(FilterCategory.AudioRendererCategory);
DsDevice device = (DsDevice)devices[0];
you enumerate the audio output devices and pick the first "random" device, which appears to be the Default WaveOut Device or one of its instances.
It has a legacy behavior where only one active instance actually sends data to playback. There is no fundamental limitation in the system that prevents simultaneous playback; it is just legacy behavior.
That is, you have both graphs playing, but the audio from the second one is muted.
Audio mixing is disabled in the waveOut Audio Renderer, so if you need to mix multiple audio streams during playback, use the DirectSound renderer.
If you use the Default DirectSound Device instead (which you can quickly pick by using a different index in devices[NNN] in the code), you'll hear what you expect to hear.
DirectShow.NET does the enumeration somewhat confusingly: the Default DirectSound Device normally has the highest merit and is listed first, but you seem to be given the devices in a different order.
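A minimal sketch of picking the DirectSound renderer by its friendly name instead of a hard-coded index (this assumes the DirectShowLib / DirectShow.NET bindings; the name check is an assumption about how the devices are reported on your system):

//pick the default DirectSound renderer by name instead of devices[xxx]
DsDevice[] devices = DsDevice.GetDevicesOfCat(FilterCategory.AudioRendererCategory);
DsDevice dsoundDevice = null;
foreach (DsDevice d in devices)
{
    if (d.Name != null && d.Name.Contains("Default DirectSound Device"))
    {
        dsoundDevice = d;
        break;
    }
}

//bind it to an IBaseFilter exactly as before; build one graph per .wav file
Guid iid = typeof(IBaseFilter).GUID;
object source = null;
dsoundDevice.Mon.BindToObject(null, null, ref iid, out source);

IGraphBuilder graph = (IGraphBuilder)new FilterGraph();
graph.AddFilter((IBaseFilter)source, "Audio Render");
graph.RenderFile(@"testA.wav", "");
(graph as IMediaControl).Run(); //a second graph built the same way will now mix with this one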
Problem summarized
We're using Azure Media Services v3 for streaming media files with Azure Media Player. Currently I'm trying to add an option to stream .mp3 with AES encryption.
Uploading and encoding the file works fine, I can see the generated .mp4 file and I can even play it if I download it from the blob. The problem is that when I try to stream it in Azure Media Player, I get the following error in the player:
[screenshot of the Azure Media Player error]
This is because Azure Media Player doesn't support streaming audio-only files, as stated in its documentation:
May work in some techs for Adaptive Streaming but is currently not supported and does not work in AzureHtml5JS.
What I've tried
So I've tried to add a blank/empty H264Layer to the transform job so the encoded file wouldn't be audio-only. I've found some questions regarding this problem, like this, however my problem is still unresolved.
I've also found this suggestion, which led me to the Azure Media Services v2 documentation page, but I think it isn't what I need.
I've also seen this note:
Many of the advanced Media Services v2 features of the Media Encoder Standard are currently not available in v3. For more information, see feature gaps.
I set the encoding preset to "EncoderNamedPreset.AACGoodQualityAudio", then to "EncoderNamedPreset.H264SingleBitrate720p" just to see if it creates an empty video layer, but the result was the same: the video file was created in its own asset but I wasn't able to stream it.
I've tried to create my own encoder preset with empty H264Layer like this:
new StandardEncoderPreset(
    codecs: new Codec[]
    {
        new AacAudio(
            channels: 2,
            samplingRate: 48000,
            bitrate: 128000,
            profile: AacAudioProfile.AacLc
        ),
        new H264Video(
            keyFrameInterval: TimeSpan.FromSeconds(2),
            sceneChangeDetection: true,
            layers: new H264Layer[]
            {
                new H264Layer(
                    bitrate: 1, // Units are in bits per second
                    width: "1",
                    height: "1",
                    label: "SD" // This label is used to modify the file name in the output formats
                )
            }
        )
    },
    formats: new Format[]
    {
        new Mp4Format(filenamePattern: "Video-{Basename}-{Label}-{Bitrate}{Extension}")
    }
)
Result while creating Transform:
Exception: Operation returned an invalid status code 'BadRequest'
With a blank layer
new H264Layer (
bitrate: 0, // Units are in bits per second
width: "0",
height: "0",
label: "SD" // This label is used to modify the file name in the output formats
)
Result while creating transform:
Exception: Operation returned an invalid status code 'BadRequest'
H264Layer without constructor parameters
new H264Layer()
Result:
Transform is created.
Same error in AMP when trying to stream it.
The video playback was aborted due to a corruption problem or because the video used features your browser did not support. (0x20400003)
Expected result
The expected result would be any kind of file that could be streamed by Azure Media Player. I guess this should be achieved by somehow adding a blank/empty H264 layer to the encoded file, but I can't figure out how to do it in Azure Media Services v3.
Adding the response that was conveyed offline to this post:
You are correct that Azure Media Player currently requires a video track.
When using the v2 APIs, there is a way to insert a video track when encoding an audio-only source.
That flag is currently not exposed in our v3 schema.
However, you should be able to use a v3 preset like AdaptiveStreaming – it encodes the audio and generates a ‘blank video’ track.
Thanks,
Anil
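For completeness, creating such a Transform in v3 looks roughly like this (a minimal sketch assuming the Microsoft.Azure.Management.Media .NET SDK; client, resourceGroup and accountName are placeholders for values you already have):

// Sketch: a v3 Transform using the built-in AdaptiveStreaming preset, which encodes the
// audio and also generates a "blank video" track that Azure Media Player can play.
using Microsoft.Azure.Management.Media;
using Microsoft.Azure.Management.Media.Models;

string transformName = "AudioWithBlankVideo"; // hypothetical transform name

TransformOutput[] outputs =
{
    new TransformOutput(new BuiltInStandardEncoderPreset(EncoderNamedPreset.AdaptiveStreaming))
};

Transform transform = await client.Transforms.CreateOrUpdateAsync(
    resourceGroup, accountName, transformName, outputs);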
For a new project, I used the Windows Media Player component. It should play a livestream, and this works fine for me, but after 10 seconds the stream loads again and begins at 0 seconds (just like a 10-second video clip).
There are two solutions I can see, but I don't know how to implement either of them. The code itself is pretty simple.
private void tbutton_Click(object sender, EventArgs e)
{
tvplayer.currentPlaylist.name = "TV-Stream";
tvplayer.URL = (stream-url);
}
The first would be to "let the player know" that the video source is a stream and not a video, but I don't know how I should do that.
The second solution would be to modify the duration of the "video" the Media Player plays to... maybe two hours or 24 hours. I know this is somehow possible, as I read about it in the Metafile Elements Reference (https://msdn.microsoft.com/de-de/library/windows/desktop/dd564668(v=vs.85).aspx), but I don't see how.
Can someone give me a hint how I could do that?
I tried both HLS and HDS versions of the livestream; there is no difference. The problem is the same. The stream itself is H.264 in an MP4 container.
I guess the problem is that the livestream is loaded in 10-second segments.
I've written a video generator that writes a video in H.264 format (MP4). When I stream the video from my Azure service, I'm seeing the following network traffic:
The AVCodecContext setup I'm using is as follows:
AVCodec* videoCodec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
AVCodecContext* videoCodecContext = ffmpeg.avcodec_alloc_context3(videoCodec);
videoCodecContext->bit_rate = 400000;
videoCodecContext->width = 1280;
videoCodecContext->height = 720;
videoCodecContext->gop_size = 12;
videoCodecContext->max_b_frames = 1;
videoCodecContext->pix_fmt = videoCodec->pix_fmts[0];
videoCodecContext->codec_id = videoCodec->id;
videoCodecContext->codec_type = videoCodec->type;
videoCodecContext->time_base = new AVRational
{
num = 1,
den = 30
};
ffmpeg.av_opt_set(videoCodecContext->priv_data, "preset", "ultrafast", 0);
I've also tried setting the "movflags" option for avformat_write_header() via an AVDictionary, but then av_write_trailer() returns -2, causing the file to not finish writing.
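For reference, that attempt would look roughly like this (a sketch assuming the FFmpeg.AutoGen bindings and an already-configured AVFormatContext* named formatContext; as noted above, this path ended with av_write_trailer() failing):

// Sketch of passing "movflags" to the MP4 muxer via an AVDictionary.
AVDictionary* options = null;
ffmpeg.av_dict_set(&options, "movflags", "+faststart", 0);

// Hand the dictionary to the muxer when writing the header, then release it.
int ret = ffmpeg.avformat_write_header(formatContext, &options);
ffmpeg.av_dict_free(&options);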
I cannot figure out how to solve this problem. Videos generated using Windows Movie Maker stream perfectly.
I know this has something to do with the mdat and moov atom positions.
Also, this appears to only be happening in Google Chrome.
OK, I figured this out. I had been writing all the video frames first and the audio frames afterwards. Instead, you have to write them interleaved, side by side, in order for faststart to actually work and allow the video to stream.
So, write a specific amount of audio and then determine whether a video frame should be written by checking the time bases against the current writing indexes.
This example will show you how it's done.
Also, to get the video and audio streams to have accurate PTS/DTS values, look at this question.
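The interleaving decision itself boils down to comparing the next timestamp of each stream in its own time base, for example with av_compare_ts (a sketch assuming FFmpeg.AutoGen; videoStream/audioStream are the muxer's AVStream* values and nextVideoPts/nextAudioPts are hypothetical counters you maintain while encoding):

// Sketch: decide which stream to write next so audio and video stay interleaved in the MP4.
bool WriteVideoNext(AVStream* videoStream, AVStream* audioStream,
                    long nextVideoPts, long nextAudioPts)
{
    // av_compare_ts returns -1 if the first timestamp is earlier, 1 if later, 0 if equal.
    int cmp = ffmpeg.av_compare_ts(
        nextVideoPts, videoStream->time_base,
        nextAudioPts, audioStream->time_base);

    return cmp <= 0; // the video frame is due (or tied), so encode and write video next
}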
I'm trying to create a media player for the Windows Store. I am using MediaElement to play the file, but I would also like to be able to load multiple songs/videos in a playlist and to display the title of the media file and the length of the file in seconds.
Something like this:
TimeSpan duration = GetFileDuration(mFile.PathToFile);
or
int durationOfFileInSeconds = GetFileDuration(mFile.PathToFile);
How can I do that? I tried a lot of methods, but they don't work because I'm trying to develop for the Windows Store. I tried:
System.Windows.Media.MediaPlayer
Microsoft.WindowsAPICodecPack
Shell() and GetDetailsOf()
many other solutions that I find online
I've also tried to create a new page with a MediaElement just to load files into it, but the MediaElement.MediaOpened and MediaElement.MediaFailed events are never fired, and while ((mPlayer.NaturalDuration.TimeSpan == new TimeSpan())) ; would run indefinitely.
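For reference, the event-based version of that attempt looks roughly like this (a minimal sketch for a Windows Store XAML app; DurationHelper is a hypothetical helper, and one common gotcha is that the MediaElement usually has to be part of the visual tree before MediaOpened will fire):

// Sketch: read a file's duration by waiting for MediaOpened instead of busy-waiting
// on NaturalDuration. Uses Windows Store (WinRT) APIs only.
using System;
using Windows.Storage;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Controls;

public static class DurationHelper // hypothetical helper
{
    public static void LoadAndReportDuration(MediaElement media, StorageFile file)
    {
        media.MediaOpened += (s, e) =>
        {
            // NaturalDuration is only meaningful once MediaOpened has fired.
            TimeSpan duration = media.NaturalDuration.TimeSpan;
            System.Diagnostics.Debug.WriteLine("Duration: " + (int)duration.TotalSeconds + "s");
        };

        media.MediaFailed += (s, e) =>
            System.Diagnostics.Debug.WriteLine("Could not open: " + e.ErrorMessage);

        OpenAsync(media, file);
    }

    private static async void OpenAsync(MediaElement media, StorageFile file)
    {
        IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read);
        media.SetSource(stream, file.ContentType);
    }
}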
Really need some help with this. I've seen media players on the Windows Store that can display the duration of the file, so it's possible. I just don't know how.