Transcoding mxf video file with CopyAudio in Azure Media Services v3 - c#

We are using the Azure Media Services v3 API to transcode video files of various input formats into MP4 output. When we have an MXF input file, we receive the following exception when trying to transcode video with the audio codec 'CopyAudio': Azure Media ReEncode error message: An error has occurred. Stage: ApplyEncodeCommand. Code: 0x00000001.
This is the same issue as mentioned here (Copy audio codec throws exception when transcoding mxf video file), but that question is about the v2 API of Azure Media Services.
The answer given there is indeed the solution for Azure Media Services v2.
I'm having trouble porting it to the v3 API, though. In code we create an instance of StandardEncoderPreset (Microsoft.Azure.Management.Media.Models.StandardEncoderPreset) and try to use the CopyAudio codec. Currently I am unable to figure out how to specify the MOVFormat there.
StandardEncoderPreset preset = new StandardEncoderPreset(
    codecs: new List<Codec>()
    {
        new H264Video
        {
            KeyFrameInterval = TimeSpan.FromSeconds(2),
            SceneChangeDetection = true,
            //PreserveResolutionAfterRotation = true,
            Layers = new[]
            {
                new H264Layer
                {
                    Profile = H264VideoProfile.Auto,
                    Level = "Auto",
                    Bitrate = bitrate,
                    MaxBitrate = bitrate,
                    BufferWindow = TimeSpan.FromSeconds(5),
                    Width = width.ToString(),
                    Height = height.ToString(),
                    BFrames = 3,
                    ReferenceFrames = 3,
                    FrameRate = "0/1",
                    AdaptiveBFrame = true
                }
            }
        },
        new CopyAudio()
    },
    // Specify the format for the output files
    formats: new List<Format>()
    {
        new Mp4Format()
        {
            FilenamePattern = "{Basename}_" + width + "x" + height + "_{Bitrate}.mp4"
        }
    });
With the preset configured like that I get the same error as mentioned in the original post. CopyAudio only has a 'Label' property.
I have also been thinking that we need to specify an extra format in the list of formats, but I can't find a MOVFormat (or PCMFormat) class.

Our v3 APIs do not yet support writing to the MOV output file format. You would need to use the v2 APIs for such jobs.

Related

How to use Google Cloud Speech (V1 API) for speech to text - need to be able to process over 3 hours audio files properly and efficiently

I have been looking through the documentation but could not find a solution yet. I have installed the NuGet package and also generated an API key, but I can't find proper documentation on how to use the API key. Moreover, I want to be able to upload very long audio files. So what would be the proper way to upload audio files of up to 3 hours and get their results? I have a $300 budget, so that should be enough.
Here is my code so far. It currently fails since I have not set the credentials correctly, which I don't know how to do. I also have a service account file ready to use.
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        var speech = SpeechClient.Create();
        var config = new RecognitionConfig
        {
            Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
            SampleRateHertz = 48000,
            LanguageCode = LanguageCodes.English.UnitedStates
        };
        // Note: FromStorageUri expects a Cloud Storage URI of the form "gs://bucket/object"
        var audio = RecognitionAudio.FromStorageUri("1m.flac");
        var response = speech.Recognize(config, audio);
        foreach (var result in response.Results)
        {
            foreach (var alternative in result.Alternatives)
            {
                Debug.WriteLine(alternative.Transcript);
            }
        }
    }
}
I don't want to set an environment variable. I have both an API key and a service account JSON file. How can I set them manually?
You need to use the SpeechClientBuilder to create a SpeechClient with custom credentials, if you don't want to use the environment variable. Assuming you've got a service account file somewhere, change this:
var speech = SpeechClient.Create();
to this:
var speech = new SpeechClientBuilder
{
    CredentialsPath = "/path/to/your/file"
}.Build();
Note that to perform a long-running recognition operation, you should also use the LongRunningRecognize method - I strongly suspect your current RPC will fail, either explicitly because it's trying to run on a file that's too large, or it'll just time out.
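Putting the builder and the long-running call together, a minimal sketch for multi-hour audio might look like the following. The credentials path and the gs:// URI are placeholders; the audio must be uploaded to a Cloud Storage bucket first, since LongRunningRecognize requires a storage URI for anything beyond short clips.

```csharp
// Sketch (untested): long-running recognition for multi-hour audio files.
var speech = new SpeechClientBuilder
{
    CredentialsPath = "/path/to/your/service-account.json" // placeholder
}.Build();

var config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
    SampleRateHertz = 48000,
    LanguageCode = LanguageCodes.English.UnitedStates
};

// The audio must live in Cloud Storage; "gs://your-bucket/your-audio.flac" is a placeholder.
var audio = RecognitionAudio.FromStorageUri("gs://your-bucket/your-audio.flac");

// Start the server-side operation and block until it completes
// (this can take a long time for a 3-hour file).
var operation = speech.LongRunningRecognize(config, audio);
var completed = operation.PollUntilCompleted();

foreach (var result in completed.Result.Results)
{
    foreach (var alternative in result.Alternatives)
    {
        Debug.WriteLine(alternative.Transcript);
    }
}
```

In a WPF button handler you would more likely use LongRunningRecognizeAsync and PollUntilCompletedAsync so the UI thread isn't blocked while polling.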
You need to set the environment variable before creating the Speech client instance:
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", "text-tospeech.json");
Where the second parameter (text-tospeech.json) is your credentials file generated by the Google API.

Recreating thumbnail from existing asset in Azure Media Services

I am using Azure Media Services and Azure Functions to build a VOD element for a website. Basically, when the source video is uploaded a blob trigger starts off a DurableOrchestration to create an asset and then encode the video. It also generates 3 different size thumbnails using the default {Best} frame. So far so good.
What I want to do now is allow the user to select a frame from the encoded video and choose that to be the poster thumbnail.
I have an HttpTrigger which takes the asset id and the frame timestamp and kicks off another durable function which should recreate the thumbnails at the specified frame.
But it isn't working.
I originally got 3 blank images in a new asset and when I tried to force it to put the images back into the original asset, I got nothing.
This is the code I'm using to try to achieve this. It's pretty much the same as the code that creates the original asset. The only real differences are that the JSON preset only has instructions for generating thumbnails, the asset already contains 6 encoded videos, 3 thumbnails and the associated meta files, and I'm not passing the original source video file to it (because I delete that as part of the clean-up once the original encoding is complete).
PostData data = inputs.GetInput<PostData>();
IJob job = null;
ITask taskEncoding = null;
IAsset outputEncoding = null;
int OutputMES = -1;
int taskindex = 0;
bool useEncoderOutputForAnalytics = false;
MediaServicesCredentials amsCredentials = new MediaServicesCredentials();
try
{
    AzureAdTokenCredentials tokenCredentials = new AzureAdTokenCredentials(amsCredentials.AmsAadTenantDomain,
        new AzureAdClientSymmetricKey(amsCredentials.AmsClientId, amsCredentials.AmsClientSecret),
        AzureEnvironments.AzureCloudEnvironment);
    AzureAdTokenProvider tokenProvider = new AzureAdTokenProvider(tokenCredentials);
    _context = new CloudMediaContext(amsCredentials.AmsRestApiEndpoint, tokenProvider);
    IAsset asset = _context.Assets.Where(a => a.Id == data.assetId).FirstOrDefault();

    // Declare a new encoding job with the Standard encoder
    int priority = 10;
    job = _context.Jobs.Create("CMS encoding job", priority);
    foreach (var af in asset.AssetFiles)
    {
        if (af.Name.Contains(".mp4"))
            af.IsPrimary = true;
        else
            af.IsPrimary = false;
    }

    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processorMES = MediaServicesHelper.GetLatestMediaProcessorByName(_context, "Media Encoder Standard");

    string preset = "MesThumbnails.json"; // the default preset
    string start = data.frame;
    if (preset.ToUpper().EndsWith(".JSON"))
    {
        // Build the folder path to the preset
        string presetPath = Path.Combine(System.IO.Directory.GetParent(data.execContext.FunctionDirectory).FullName, "presets", preset);
        log.Info("presetPath= " + presetPath);
        preset = File.ReadAllText(presetPath).Replace("{Best}", start);
    }

    taskEncoding = job.Tasks.AddNew("rebuild thumbnails task",
        processorMES,
        preset,
        TaskOptions.None);

    // Specify the input asset to be encoded.
    taskEncoding.InputAssets.Add(asset);
    OutputMES = taskindex++;
    string _storageAccountName = amsCredentials.StorageAccountName;
    outputEncoding = taskEncoding.OutputAssets.AddNew(asset.Name + " MES encoded", _storageAccountName, AssetCreationOptions.None);
    asset = useEncoderOutputForAnalytics ? outputEncoding : asset;

    job.Submit();
    await job.GetExecutionProgressTask(CancellationToken.None);
My question is whether what I am trying to do is actually possible, and if so, what is wrong with my approach.
I've searched quite a bit on this topic but can only ever find references to generating thumbnails while encoding a video, never to generating thumbnails from already-encoded videos after the fact.
I'm not passing the original source video file to it
That's likely why you are running into the problem. The output of your adaptive streaming job, as you have seen, contains multiple files. Some additional flags are needed to tell the thumbnail-generation job to focus on just one file (typically the highest-bitrate file). The preset below should do the trick.
Note how the preset starts with a Sources/Streams section, which tells the encoder to pick the top bitrate for both video and audio.
Also note that Step is set to 2 but Range is 1, ensuring only one image is generated in the output.
{
  "Version": 1.0,
  "Sources": [
    {
      "Streams": [
        {
          "Type": "AudioStream",
          "Value": "TopBitrate"
        },
        {
          "Type": "VideoStream",
          "Value": "TopBitrate"
        }
      ]
    }
  ],
  "Codecs": [
    {
      "Start": "00:00:03:00",
      "Step": "2",
      "Range": "1",
      "Type": "JpgImage",
      "JpgLayers": [
        {
          "Quality": 90,
          "Type": "JpgLayer",
          "Width": "100%",
          "Height": "100%"
        }
      ]
    }
  ],
  "Outputs": [
    {
      "FileName": "{Basename}_{Index}{Extension}",
      "Format": {
        "Type": "JpgFormat"
      }
    }
  ]
}

Play audio url using xamarin MediaPlayer

Why can the Xamarin MediaPlayer (on Xamarin.Android) play audio as a stream from a link like this (mediaUrl1):
https://ia800806.us.archive.org/15/items/Mp3Playlist_555/AaronNeville-CrazyLove.mp3
But not from a link like this (mediaUrl2)?
http://api-streaming.youscribe.com/v1/products/2919465/documents/3214936/audio/stream
private MediaPlayer player;
//..
player = new MediaPlayer();
player.SetAudioStreamType(Stream.Music);
//..
await player.SetDataSourceAsync(ApplicationContext, Android.Net.Uri.Parse(mediaUrl));
//..
player.PrepareAsync();
//..
Is there a way to play the link above (mediaUrl2) without downloading the file first?
Here is the full source of the sample I am using. Any help would be appreciated.
http://api-streaming.youscribe.com/v1/products/2919465/documents/3214936/audio/stream
That is an HTTP mpga stream and is not directly supported by any of the Android APIs that I know of and thus is not supported by MediaPlayer (consult the Android Support Media Formats for further reading).
You can review the logcat output of your MediaPlayer code and you will see output like:
[MediaPlayerNative] start called in state 4, mPlayer(0x8efb7240)
[MediaPlayerNative] error (-38, 0)
[MediaPlayer] Error (-38,0)
[MediaHTTPConnection] readAt 1161613 / 32768 => java.net.ProtocolException
[MediaHTTPConnection] readAt 1161613 / 32768 => java.net.ProtocolException
[MediaPlayerNative] error (1, -2147483648)
[MediaPlayer] Error (1,-2147483648)
Google's Android ExoPlayer can stream that media format properly.
This is a really simple and very crude example of ExoPlayer, but it will show you that it does play that stream:
ExoPlayer Example:
var mediaUrl = "http://api-streaming.youscribe.com/v1/products/2919465/documents/3214936/audio/stream";
var mediaUri = Android.Net.Uri.Parse(mediaUrl);
var userAgent = Util.GetUserAgent(this, "ExoPlayerDemo");
var defaultHttpDataSourceFactory = new DefaultHttpDataSourceFactory(userAgent);
var defaultDataSourceFactory = new DefaultDataSourceFactory(this, null, defaultHttpDataSourceFactory);
var extractorMediaSource = new ExtractorMediaSource(mediaUri, defaultDataSourceFactory, new DefaultExtractorsFactory(), null, null);
var defaultBandwidthMeter = new DefaultBandwidthMeter();
var adaptiveTrackSelectionFactory = new AdaptiveTrackSelection.Factory(defaultBandwidthMeter);
var defaultTrackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
exoPlayer = ExoPlayerFactory.NewSimpleInstance(this, defaultTrackSelector);
exoPlayer.Prepare(extractorMediaSource);
exoPlayer.PlayWhenReady = true;
Note: exoPlayer is a class-level variable of SimpleExoPlayer type
Note: this is using the Xamarin.Android binding libraries from the Xam.Plugins.Android.ExoPlayer package
ExoPlayer Docs:
https://developer.android.com/guide/topics/media/exoplayer

Print using Google cloud printer API in C# MVC

I am trying to print multiple files using the Google Cloud Print API.
I have integrated a local printer via chrome://devices/ and tried to implement the printer API from the link below:
https://github.com/lppkarl/GoogleCloudPrintApi
var provider = new GoogleCloudPrintOAuth2Provider(ClientId, SecreteKey);
var url = provider.BuildAuthorizationUrl(redirectUri);
var token = await provider.GenerateRefreshTokenAsync(url, redirectUri);
var client = new GoogleCloudPrintClient(provider, token);
var request = new ListRequest { Proxy = proxy };
var response = await client.ListPrinterAsync(request);
With this method I am not getting the token or authorization code from the request URL.
I also tried to implement printing using Print to Google Cloud Print with a dynamic URL:
for (var i = 0; i < url.length; i++) {
    var currentUrl = url[i];
    var gadget = new cloudprint.Gadget();
    gadget.setPrintDocument("application/pdf", "pdf");
    gadget.setPrintDocument("url", "document title1", currentUrl);
    gadget.openPrintDialog();
}
The code above works fine for multiple files, but it asks for confirmation for every file.
I want to print multiple files with the dialog shown only the first time, and the rest of the queued files added to the print queue automatically.
I have tried many ways, but none of them succeeded.
If anyone has an idea, it would be helpful. Thank you.

AWS Elastic Transcoder Endpoint cannot be resolved

I'm working on a project that requires video to be transcoded and thumbnails extracted through use of AWS Elastic Transcoder. I have followed the API to the best of my abilities and have what seems to me correct code. However, I still get an error with NameResolutionFailure thrown and an inner exception saying that The remote name could not be resolved: 'elastictranscoder.us-west-2.amazonaws.com'. My code is:
var transcoder = new AmazonElasticTranscoderClient(Constants.AmazonS3AccessKey,
    Constants.AmazonS3SecretKey, RegionEndpoint.USWest2);
var ji = new JobInput
{
    AspectRatio = "auto",
    Container = "mov",
    FrameRate = "auto",
    Interlaced = "auto",
    Resolution = "auto",
    Key = filename
};
var output = new CreateJobOutput
{
    ThumbnailPattern = filename + "_{count}",
    Rotate = "auto",
    PresetId = "1351620000001-000010",
    Key = filename + "_enc.mp4"
};
var createJob = new CreateJobRequest
{
    Input = ji,
    Output = output,
    PipelineId = "1413517673900-39qstm"
};
transcoder.CreateJob(createJob);
I have my S3 buckets configured in Oregon and added policies to make the files public.
Apparently my virtual machine was not connecting to the internet, which is why the NameResolutionFailure was thrown. Everything is fine now.