EmguCV & OpenCV: Whitelist RTP protocol - c#

I am trying to receive some video from a Parrot Bebop 2 drone.
I am using this Bebop.sdp file, which is supplied by Parrot.
I previously got this working in Python by setting the environment variable OPENCV_FFMPEG_CAPTURE_OPTIONS to protocol_whitelist;file,rtp,udp.
Since then we have ported most of that project to C#, but when trying to connect to the stream we get this error: Protocol 'rtp' not on whitelist 'file,crypto'!
I have seen other examples where -protocol_whitelist "file,rtp,udp" is passed as an ffmpeg argument, but that does not seem to be a solution here, since I have no way to pass it on.
First I started with a simple test:
VideoCapture videoCapture = new VideoCapture(0);
var frame = videoCapture.QueryFrame();
while (frame != null)
{
    using (frame)
    {
        CvInvoke.Imshow("frame", frame);
        CvInvoke.WaitKey(1);
    }
    frame = videoCapture.QueryFrame();
}
This code gets the stream from the webcam and works.
I get the error when I run it with the SDP file instead:
VideoCapture videoCapture = new VideoCapture(@"./bebop.sdp");
var frame = videoCapture.QueryFrame();
while (frame != null)
{
    using (frame)
    {
        CvInvoke.Imshow("frame", frame);
        CvInvoke.WaitKey(1);
    }
    frame = videoCapture.QueryFrame();
}
I have tried to add both:
Environment.SetEnvironmentVariable("OPENCV_FFMPEG_CAPTURE_OPTIONS", "protocol_whitelist;file,rtp,udp");
And for a more aggressive approach:
Environment.SetEnvironmentVariable("OPENCV_FFMPEG_CAPTURE_OPTIONS", "protocol_whitelist;file,rtp,udp", EnvironmentVariableTarget.User);
Neither of them seems to have any impact, since I get the same error.
I would expect that setting the environment variable would make OpenCV whitelist the needed protocols, so the stream would come through and be displayed in the frame.

The environment variable approach was working all along; however, Visual Studio has to be restarted before it picks up updated environment variables, which is why this was not discovered until today.
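For anyone who lands here: Environment.SetEnvironmentVariable without a target only affects the current process, so setting it as the very first thing in Main sidesteps the restart problem entirely (assuming, as appears to be the case, that the FFmpeg backend reads the variable when the capture is opened). A minimal sketch:

using System;
using Emgu.CV;

class Program
{
    static void Main()
    {
        // Process-scoped: visible to this process's native FFmpeg backend,
        // no Visual Studio or machine-level change needed.
        Environment.SetEnvironmentVariable(
            "OPENCV_FFMPEG_CAPTURE_OPTIONS",
            "protocol_whitelist;file,rtp,udp");

        using (var videoCapture = new VideoCapture(@"./bebop.sdp"))
        {
            var frame = videoCapture.QueryFrame();
            while (frame != null)
            {
                using (frame)
                {
                    CvInvoke.Imshow("frame", frame);
                    CvInvoke.WaitKey(1);
                }
                frame = videoCapture.QueryFrame();
            }
        }
    }
}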

Related

C# WPF application crashing when playing 16 players

I am running into an issue with my C# WPF application crashing silently when I try to play on 16 VideoViews. I did not see any error message pop up, nor did I see anything in Windows Event Viewer.
Each player instance has a WindowsFormsHost hosting a VideoView, and I am playing RTSP streams on them.
The crash time is not fixed; sometimes it crashes after 2 hours, and sometimes after 7-8 hours.
Core.Initialize(AppInfo.VlcDir.FullName);

private LibVLC libVlc = null;
private LibVLCSharp.Shared.MediaPlayer mediaPlayer = null;

this.libVlc = new LibVLC(this.GetParsedPlayerOptions().ToArray());
this.mediaPlayer = new LibVLCSharp.Shared.MediaPlayer(this.libVlc);
this.videoPlayer.MediaPlayer = this.mediaPlayer;
this.mediaPlayer.Volume = 0;
this.mediaPlayer.EnableKeyInput = false;
this.mediaPlayer.EnableMouseInput = false;
// Then I added a bunch of event handlers for VideoView and MediaPlayer.

// Then I have a different function which plays videos
if (this.mediaPlayer != null)
{
    var media = new Media(this.libVlc, GetPlaybackStreamUrl(this.Server), FromType.FromLocation);
    this.mediaPlayer.Media = media;
    this.mediaPlayer.Play();
    try
    {
        media.Dispose();
    }
    catch
    {
    }
}
Please let me know if you need any more information.
Any suggestions of what I could be doing wrong, or anything missing?
I am running on Windows 10, using Visual Studio 2019, and the application is compiled as x86.
I am not able to find the option to upload a log file, but I did attach it to the issue on the VideoLAN tracker, which can be found here: https://code.videolan.org/videolan/LibVLCSharp/-/issues/564
Thanks.
I was not able to find the problem in the code, or a crash stack for where it dies.
But I was able to fix the problem by increasing the address space, using editbin to add the /LARGEADDRESSAWARE flag to the executable post-build.
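For reference, a post-build event along these lines applies the flag automatically (assuming editbin from the VC++ build tools is on the path; quote $(TargetPath) in case the output path contains spaces):

editbin /LARGEADDRESSAWARE "$(TargetPath)"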

How to read text from 'simple' screenshot fast and effectively?

I'm working on a small personal application that should read some text (2 sentences at most) from a really simple Android screenshot. The text is always the same size, same font, and in approx. the same location. The background is very plain, usually a few shades of 1 color (think like bright orange fading into a little darker orange). I'm trying to figure out what would be the best way (and most importantly, the fastest way) to do this.
My first attempt involved the IronOcr C# library, and to be fair, it worked quite well! But I've noticed a few issues with it:
It's not 100% accurate
Despite having a community/trial version, it sometimes throws exceptions telling you to get a license
It takes ~400ms to read a ~600x300 pixel image, which in the case of my simple image, I consider to be rather long
As strange as it sounds, I have a feeling that libraries like IronOcr and Tesseract may just be too advanced for my needs. To improve speed I have even written a piece of code to "threshold" my image first, making it completely black and white.
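Roughly, the idea is this (a simplified sketch using System.Drawing, not my exact code; GetPixel/SetPixel is slow but fine for a ~600x300 image):

using System.Drawing;

static Bitmap Threshold(Bitmap source, int cutoff = 128)
{
    var result = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < source.Height; y++)
    {
        for (int x = 0; x < source.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            // Simple average luminance; anything brighter than the cutoff becomes white.
            int luma = (c.R + c.G + c.B) / 3;
            result.SetPixel(x, y, luma > cutoff ? Color.White : Color.Black);
        }
    }
    return result;
}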
My current IronOcr settings look like this:
ImageReader = new AdvancedOcr()
{
    CleanBackgroundNoise = false,
    EnhanceContrast = false,
    EnhanceResolution = false,
    Strategy = AdvancedOcr.OcrStrategy.Fast,
    ColorSpace = AdvancedOcr.OcrColorSpace.GrayScale,
    DetectWhiteTextOnDarkBackgrounds = true,
    InputImageType = AdvancedOcr.InputTypes.Snippet,
    RotateAndStraighten = false,
    ReadBarCodes = false,
    ColorDepth = 1
};
And I could totally live with the results I've been getting using IronOcr, but the licensing exceptions ruin it. I also don't have $399 USD to spend on a private hobby project that won't even leave my own PC :(
But my main goal with this question is to find a better, faster or more efficient way to do this. It doesn't necessarily have to be an existing library, I'd be more than willing to make my own kind of letter-detection code that would work (only?) for screenshots like mine if someone can point me in the right direction.
I have researched this topic, and the best solution I could find is Azure Cognitive Services. You can use the Computer Vision API to read text from an image. Here is the complete documentation.
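As a rough illustration (the endpoint version, resource name, and JSON shape below are assumptions on my part; the documentation above is authoritative), calling the OCR REST endpoint directly from C# might look like:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AzureOcrSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical key and resource name - substitute your own.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");

            var url = "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/ocr";
            var body = new StringContent(
                "{\"url\":\"https://example.com/screenshot.png\"}",
                Encoding.UTF8,
                "application/json");

            // The response JSON contains the recognized words and their bounding boxes.
            var response = await client.PostAsync(url, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}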
How fast does it have to be?
If you are using C#, I recommend the Google Cloud Vision API. You pay per request, but the first 1000 per month are free (check pricing here). It does require a web request, but I find it to be very quick.
using Google.Cloud.Vision.V1;
using System;

namespace GoogleCloudSamples
{
    public class QuickStart
    {
        public static void Main(string[] args)
        {
            // Instantiates a client
            var client = ImageAnnotatorClient.Create();
            // Load the image file into memory
            var image = Image.FromFile("wakeupcat.jpg");
            // Performs text detection on the image file
            var response = client.DetectText(image);
            foreach (var annotation in response)
            {
                if (annotation.Description != null)
                    Console.WriteLine(annotation.Description);
            }
        }
    }
}
I find it works well for pictures and scanned documents, so it should work perfectly for your situation. The SDK is also available in other languages, like Java, Python, and Node.

How to pass parameters of my external C# code to the Unity3D exe file?

I have a C# solution that solves a dynamic problem, and I want to simulate the motion of masses in real time. I prepared a Unity3D exe with some objects that are controlled by Unity's internal scripts, and it works internally, which is not useful for me. So, in my external C# code, after solving the problem, I launch the exe file using Process.Start().
The problem is that I can't pass the position values to the Unity3D process. How is it possible to send my C# parameters to the running Unity process?
I have read many tutorials in which Unity gets input from the keyboard, mouse, or other devices, but I want to send my code's parameters to Unity as input.
Is this possible?
Use Environment.GetCommandLineArgs() to get the arguments.
In Unity:
using System;
using UnityEngine;

class YourScript : MonoBehaviour
{
    void Start()
    {
        string[] args = Environment.GetCommandLineArgs();
    }
}
Your code:
Process.Start("your_program.exe", "arg1 arg2 ...");
If you're looking to just parse data when you launch with arguments, using Environment.GetCommandLineArgs() is good enough.
If you need to send and receive data both ways (or one way), you can use memory mapped files (docs).
Example:
// creates a memory-mapped file in system memory with 1024 bytes of space
var mmf = MemoryMappedFile.CreateOrOpen("memory", 1024);
var accessor = mmf.CreateViewAccessor();
// writes 3.14 as a double at byte offset 8
accessor.Write(8, 3.14);
// reads the double back from the same offset
double pi = accessor.ReadDouble(8);
This is certainly doable. I use this method of communication between my network process and the game engine to avoid assembly reloads when the editor compiles scripts during run time, so that I don't lose any connections.
And there is also another problem! The code I am using in Unity is:
void Update()
{
    // reads a float from byte offset 1
    using (var mmf = MemoryMappedFile.OpenExisting("memory"))
    {
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Read(1, out float omegay);
            float omegayy = Convert.ToSingle(omegay);
            transform.Rotate(new Vector3(0, omegayy, 0) * Time.deltaTime);
        }
    }
}
My parameter (omegay) is a negative value, but Unity animates it as a positive number!
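A likely explanation (an assumption, since the writer side of this setup isn't shown): the earlier example writes a double at byte offset 8, while this reader pulls a float from offset 1, so it reinterprets unrelated bytes, and that can easily flip the sign. Reader and writer must agree on both offset and type, along these lines:

using System.IO.MemoryMappedFiles;

static class MmfSketch
{
    // Writer side (the external C# process); keep the map alive while Unity reads.
    public static void WriteOmega(MemoryMappedFile mmf, double omegaY)
    {
        using (var accessor = mmf.CreateViewAccessor())
            accessor.Write(8, omegaY);          // a double at byte offset 8
    }

    // Reader side (inside Unity) - same offset, same type.
    public static double ReadOmega()
    {
        using (var mmf = MemoryMappedFile.OpenExisting("memory"))
        using (var accessor = mmf.CreateViewAccessor())
            return accessor.ReadDouble(8);      // sign comes through intact
    }
}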

IVideoWindow::put_WindowStyle throwing "No such interface supported"

I've got a C# control wrapped around the DirectShow libraries. Though I'm not certain it's relevant, I'm running on Windows CE 6.0R3. When trying to play a WMA audio file using the control, the following code throws an exception of "No such interface supported":
m_graph = new DShowGraph(mediaFile);
m_graphBuilder = m_graph.Open();
m_videoWindow = (IVideoWindow)m_graph.GetVideoWindow();
if (m_videoWindow == null)
{
    // this is not hit
}

try
{
    m_videoWindow.put_WindowStyle((int)(WS.CHILD | WS.VISIBLE | WS.CLIPSIBLINGS));
}
catch (Exception ex)
{
    // I end up here
}
The Open call looks like this (error handling, etc. trimmed):
private IGraphBuilder _graphBuilder;

internal IGraphBuilder Open()
{
    object filterGraph = ClassId.CoCreateInstance(ClassId.FilterGraph);
    _graphBuilder = (IGraphBuilder)filterGraph;
    _graphBuilder.RenderFile(_input, null);
    return _graphBuilder;
}
The GetVideoWindow call simply looks like this:
public IVideoWindow GetVideoWindow()
{
    if (_graphBuilder == null)
        return null;
    return (IVideoWindow)_graphBuilder;
}
Strangely, this all works just fine with the same control DLL, same application and same media file when run under Windows CE 5.0.
My suspicion is that it might have something to do with the fact that we're playing an audio-only file (I'm checking now whether the same problem occurs with a video file), but I'm not overly versed in DirectShow, so I'd like to understand exactly what's going on here.
One of the large challenges in debugging this is that I don't have the failing hardware in my office - it's at a customer's site, so I have to make changes, send them and wait for a reply. While that doesn't affect the question, it does affect my ability to quickly follow up with suggestions or follow on questions anyone might have.
EDIT 1:
Playing a WMV file works fine, so it is related to the file being audio-only. We can't test MP3 to see if it's a WMA codec issue because the device OEM does not include the MP3 codec in the OS, due to their concerns over licensing.
The graph's IVideoWindow does nothing but forward to the underlying IVideoWindow of the video rendering filter. With an audio-only pipeline there is (obviously) no video renderer, so IVideoWindow does not make much sense. The interface is still available, but once you call its methods there is nothing to forward to, hence the error.
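In practice that means tolerating (or skipping) the video-window setup for audio-only graphs. A minimal sketch against the question's own code, assuming you are happy to treat the failure as "no video":

m_videoWindow = (IVideoWindow)m_graph.GetVideoWindow();
try
{
    m_videoWindow.put_WindowStyle((int)(WS.CHILD | WS.VISIBLE | WS.CLIPSIBLINGS));
}
catch (Exception)
{
    // Audio-only graph: there is no video renderer to forward to,
    // so skip all further video window configuration.
    m_videoWindow = null;
}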

gStreamer-Sharp - pipeline failing to link

We've just started trying out gStreamer-Sharp, to see if we can create pipelines, with a view to writing a media player component for our .NET software. We're running on Windows/.NET, not Linux/Mono.
Some pipelines can be created, linked and run no problem, but others fail. There's a lack of documentation and support out there for this, so I'm hoping on the off-chance someone with some knowledge in this area will drive by my question and give me some hints.
Anyway, without further ado, I have a repro-case below that fails to link from an avidemux element into an mpeg4 element.
using System.Windows.Forms;
using Gst;
using Gst.Interfaces;

namespace gStreamerTest
{
    public partial class MainForm : Form
    {
        // Pipeline.
        private Gst.Pipeline MyPipeline;

        // Elements.
        private Gst.Element MyFileSource, MyDemux, MyMpeg4, MyDrawSink;

        // Overlay adapter.
        private XOverlayAdapter MySinkAdapter;

        public MainForm()
        {
            InitializeComponent();

            // Initialise gStreamer.
            Gst.Application.Init();

            // Create new pipeline.
            MyPipeline = new Gst.Pipeline("pipeline");

            // Construct pipeline filesrc -> avidemux -> mpeg4 -> directdrawsink
            MyFileSource = ElementFactory.Make("filesrc", "filesrc");
            MyFileSource["location"] = "c:\\test.mp4";
            MyDemux = ElementFactory.Make("avidemux", "avidemux");
            MyMpeg4 = ElementFactory.Make("ffdec_mpeg4", "ffdec_mpeg4");
            MyDrawSink = ElementFactory.Make("directdrawsink", "directdrawsink");

            // Output to our window.
            MySinkAdapter = new XOverlayAdapter(MyDrawSink.Handle);
            MySinkAdapter.XwindowId = (ulong)this.Handle;

            // Add and link pipeline.
            MyPipeline.Add(MyFileSource, MyDemux, MyMpeg4, MyDrawSink);
            if (!MyFileSource.Link(MyDemux))
            {
            }
            if (!MyDemux.Link(MyMpeg4))
            {
                // FAILS HERE
            }
            if (!MyMpeg4.Link(MyDrawSink))
            {
            }

            // Play video.
            MyPipeline.SetState(Gst.State.Playing);
        }
    }
}
Interestingly, the above pipeline works OK when we launch it from the command line. We have a vague feeling we might be doing something incorrectly here when setting up the pipeline. It seems to fail at linking the demux to the mpeg4 elements.
As I suggested, some pipelines do work. We can also play the test.mp4 in media player and load it no problem elsewhere (e.g. with gStreamer from the command line).
We're also not sure how to switch on logging for gStreamer-Sharp, or if it's even possible. If anyone can help me out here I would really appreciate it.
Thanks.
After some hints from the project maintainer, I can see there are a couple of mistakes in my code. The first is that the demux cannot be linked to the mpeg4 element yet, because the demuxer only gets its pads when the pipeline starts. This means I simply need to handle the PadAdded event on the demux element and do the link to the mpeg4 element there, as sketched below.
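Something along these lines (a sketch reusing my element fields; exact event-args member names can vary between gstreamer-sharp versions):

MyDemux.PadAdded += (o, args) =>
{
    // The demuxer only exposes its source pads once the stream is parsed,
    // so the demux -> decoder link has to happen here, not at construction time.
    Pad sinkPad = MyMpeg4.GetStaticPad("sink");
    args.Pad.Link(sinkPad);
};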
The second problem is that I need to convert the colour space from YUV to RGB for the directdrawsink to accept the input. This involves adding an ffmpegcolorspace element in between, for example:
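Roughly (again a sketch with my naming; the decoder now links into the converter, and the converter into the sink):

Element MyColorspace = ElementFactory.Make("ffmpegcolorspace", "ffmpegcolorspace");
MyPipeline.Add(MyColorspace);
MyMpeg4.Link(MyColorspace);
MyColorspace.Link(MyDrawSink);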
Finally, I couldn't get debug output at all. The solution to this is to redirect stderr to the output window in Visual Studio 2010. To do this I went to project properties -> debug and put the following command line argument in: "2 > ErrorLog.txt" (without the quotes). Now I can see gStreamer debug output, which is controlled with the GST_DEBUG environment variable.
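For example (a sketch; higher GST_DEBUG values are more verbose):

// Set the log level before the native library initialises so it is picked up.
Environment.SetEnvironmentVariable("GST_DEBUG", "3");
Gst.Application.Init();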
Brilliant!
