I have an HD network camera that I am trying to grab frames from over RTSP, using the following code:
// in Form_Load
Application.Idle += getNextFrame;
And the event handler:
private void getNextFrame(object sender, EventArgs args)
{
    // _imgCount is the running total of image grabs
    lbl_Count.Text = (_imgCount++).ToString();
    // ibLive is an Emgu ImageBox
    ibLive.Image = capAxis.QueryFrame().Resize(640, 480, INTER.CV_INTER_AREA);
}
When I start the program, it grabs 20-40 frames before "streakiness" appears at the bottom of the image. It always starts at the bottom, but sometimes it takes up half the screen.
The stream resolution is 1920x1080 and it's using MJPEG. I tried switching to H.264 but got the same results.
I am using Emgu version x86-2.4.0.1717.
Any ideas?
Thanks.
I know this is an old question, but I ran into the same problem recently.
I would recommend using another streaming library, e.g.:
http://net7mma.codeplex.com/
http://www.fluorinefx.com/
If you really need to stream using Emgu, create a stream profile with a lower resolution or higher compression. I set compression to 30, kept the same resolution, and then provided the stream profile name in the RTSP URL (assuming you're using an Axis camera like me, given the name capAxis):
Capture cap = new Capture(@"rtsp://10.0.0.1/axis-media/media.amp?videocodec=h264&streamprofile=rtspstream");
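If polling QueryFrame from Application.Idle proves jittery, a hedged alternative is the event-driven grabbing that Emgu 2.4-era builds expose (names per that API; verify against your version):
using Emgu.CV;
using Emgu.CV.Structure;

// Sketch: event-driven grabbing instead of polling QueryFrame on Application.Idle.
var cap = new Capture(@"rtsp://10.0.0.1/axis-media/media.amp?videocodec=h264&streamprofile=rtspstream");
cap.ImageGrabbed += (s, e) =>
{
    Image<Bgr, byte> frame = cap.RetrieveBgrFrame();
    // hand the frame to the UI / processing pipeline here
};
cap.Start();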
I had the same problem and solved it myself. I used iSpy to find the URL of my ONVIF IP camera. My IP camera's URL is rtsp://192.168.1.xxx:554//user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream
With stream=0 my IP cam runs at HD resolution (1280x720), and that resolution produced the streaky images. iSpy gave two URL options that differ only in the stream parameter, so I changed to stream=1 for the low resolution (352x288) and the image result is good: no streaks at all. What I learned from this problem is that over RTSP you may need to use a lower resolution; a high resolution can ruin the image. Hope it helps with your problem.
Regards,
Alfonsus Dhani
At the end of the capture string, add "?tcp":
Capture cap = new Capture(@"rtsp://10.0.0.1/axis-media/media.amp?videocodec=h264&streamprofile=rtspstream?tcp");
EDIT
This is my code, and yes, it works; I'm using a Dahua IP cam.
Capture cap = new Capture(@"rtsp://admin:12345#10.0.0.01:554/cam/realmonitor?channel=1&subtype=01?tcp");
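For what it's worth, newer OpenCV/FFmpeg-based builds let you force TCP transport via an environment variable instead of a URL suffix; a hedged sketch (the variable is from OpenCV 4.x documentation and may not apply to Emgu 2.4-era builds; the URL is a placeholder):
// Assumption: an Emgu build on OpenCV 4.x with the FFmpeg backend.
// The variable must be set before the capture is created.
Environment.SetEnvironmentVariable("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp");
var cap = new VideoCapture("rtsp://10.0.0.1/cam/realmonitor?channel=1");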
A late reply, but it may help someone facing similar challenges.
Emgu's capabilities for dealing with RTSP streams are limited and unstable. I was facing issues similar to those discussed in this question,
Unable to use EMGU CV to grab images from RTSP stream continuously
The solution was to use RtspClientSharp, which works like a charm
(https://github.com/BogdanovKirill/RtspClientSharp)
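A minimal connect-and-receive sketch, adapted from that library's README (the URL is a placeholder; treat details as approximate):
using System;
using System.Threading;
using System.Threading.Tasks;
using RtspClientSharp;

// Raw (still encoded) frames arrive via FrameReceived; decoding is up to you.
static async Task ReceiveFramesAsync(CancellationToken token)
{
    var connectionParameters = new ConnectionParameters(new Uri("rtsp://10.0.0.1/axis-media/media.amp"));
    using (var rtspClient = new RtspClient(connectionParameters))
    {
        rtspClient.FrameReceived += (sender, frame) =>
            Console.WriteLine("Frame received: " + frame.GetType().Name);

        await rtspClient.ConnectAsync(token);
        await rtspClient.ReceiveAsync(token);   // runs until the token is cancelled
    }
}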
I use Accord.Video.FFMPEG to create a video from 200 images with the H264 codec. For some reason the video is very poor quality; its size is less than 1 MB. When choosing VideoCodec.Raw the quality is high, but I am not happy with the huge file size.
I do something like this:
using (var vFWriter = new VideoFileWriter())
{
    vFWriter.Open(video_name, 1920, 1080, 24, VideoCodec.H264);
    for (int i = 0; i < 200; ++i)
    {
        var img_name_src = ...
        using (Bitmap src_jpg = new Bitmap(img_name_src))
        {
            vFWriter.WriteVideoFrame(src_jpg);
        }
    }
    vFWriter.Close();
}
When I run the program, these messages appear:
[swscaler # 06c36d20] deprecated pixel format used, make sure you did set range correctly
[swscaler # 06e837a0] deprecated pixel format used, make sure you did set range correctly
[avi # 06c43980] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[avi # 06c43980] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
I don't know if they affect anything.
This is what one source frame looks like, compared with the same frame from the video (screenshots omitted).
How to fix it?
Is there any other way in C# to create a video from individual frames?
Usually, video quality comes down to the bitrate, which can be changed with this overload:
writer.Open(fileName, width, height, frameRate, VideoCodec, BitRate);
With bitrates in the millions, the video still has artifacts on high-detail frames but is mostly fine. In the billions, artifacts disappear entirely, but the file size skyrockets and playback speed suffers from retrieval times off the disk.
Try experimenting with different VideoCodecs, bitrates and file types (mp4, avi, webm, etc.) to find a suitable balance for your project.
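As a rough sketch of the original loop with an explicit bitrate (the 10 Mbit/s value and the frames/frame_N.png naming are illustrative, not recommendations):
using System.Drawing;
using Accord.Video.FFMPEG;

using (var vFWriter = new VideoFileWriter())
{
    // same dimensions/framerate as before, plus an explicit bitrate in bits per second
    vFWriter.Open("out.avi", 1920, 1080, 24, VideoCodec.H264, 10000000);
    for (int i = 0; i < 200; ++i)
    {
        using (var srcImg = new Bitmap("frames/frame_" + i + ".png"))
        {
            vFWriter.WriteVideoFrame(srcImg);
        }
    }
}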
For a new project I used the Windows Media Player component. It should play a livestream, and this works fine for me, but after 10 seconds the stream reloads and begins again at 0 seconds (just like a 10-second video clip).
There are two solutions I can see, but I don't know how to implement either. The code itself is pretty simple:
private void tbutton_Click(object sender, EventArgs e)
{
    tvplayer.currentPlaylist.name = "TV-Stream";
    tvplayer.URL = (stream-url);
}
The first would be to "let the player know" that the video source is a stream and not a video file, but I don't know how to do that.
The second would be to modify the duration of the "video" that Media Player plays, to maybe two hours or 24 hours. I know this is somehow possible, as I read about it in the Metafile Elements Reference (https://msdn.microsoft.com/de-de/library/windows/desktop/dd564668(v=vs.85).aspx), but I don't see how.
Can someone give me a hint on how I could do that?
I tried both the HLS and HDS versions of the livestream; there is no difference, the problem is the same. The stream itself is H.264 in MP4 format.
I guess the problem is that the livestream is loaded in 10-second segments.
I have an IDS uEye webcam and want to take a snapshot via uEyeDotNet.dll (version 1.6.4.2).
At the moment I'm using this piece of code:
var camera = new Camera();
camera.Init(_deskCamInfo.UEyeId);
camera.Memory.Allocate();
camera.Acquisition.Capture();

Thread.Sleep(500);

// fetch the active image memory and convert it to a Bitmap
int s32MemID;
camera.Memory.GetActive(out s32MemID);
Bitmap image;
camera.Memory.ToBitmap(s32MemID, out image);

// serialise the Bitmap to a byte array
var converter = new ImageConverter();
var imageData = (byte[])converter.ConvertTo(image, typeof(byte[]));
With the Thread.Sleep(500) in place I get the snapshot as expected and everything works fine. But if I remove the Thread.Sleep(500) I get a black image, and I really don't know why.
I don't want to wait 500 ms for each snapshot and would like to solve this problem without the sleep.
In my original code I check each result value from the uEye methods, and they always report success. I just removed those checks here because the code is hard to read with all the if statements.
I solved the problem. Maybe someone else is having the same issue and this can help.
As I suspected, the solution was really simple. I had to change
status = camera.Acquisition.Capture();
to
status = camera.Acquisition.Capture(DeviceParameter.Wait);
and then the camera waits until an image has actually been captured.
You could also subscribe to the camera's EventFrame before starting the camera with Capture, and then read the camera memory in the subscribed handler, like this:
Int32 s32MemID;
uEye.Defines.Status statusRet = Camera.Memory.GetLast(out s32MemID);

System.Drawing.Bitmap image = null;
Camera.Memory.ToBitmap(s32MemID, out image);
...
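A rough sketch of the wiring (handler and event names per the uEye .NET API this answer refers to; verify against your version):
// hypothetical wiring: subscribe before starting acquisition
camera.EventFrame += onFrameEvent;
camera.Acquisition.Capture();   // frames now arrive via EventFrame

private void onFrameEvent(object sender, EventArgs e)
{
    var cam = (uEye.Camera)sender;
    // read the latest frame with GetLast/ToBitmap as shown above
}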
P.S. DeviceParameter.Wait is deprecated according to IDS, but if it solves your issue, who gives a damn :-)
Edit: SOLVED! Please see my answer down below for details.
I was unable to find an answer to the original question, but I found an alternate solution.
This question may have been asked elsewhere, but I have been searching for days and can't find anything that helps.
Question: I need to convert a Stream to an Image<Bgr, byte> in one go. Is there a way to convert directly from System.Drawing.Image.FromStream to Emgu.CV.Image<Bgr, byte> without going from stream to Image to Bitmap to Image<Bgr, byte>?
Information: I'm coding in C# in Visual Studio 2010 as part of my dissertation project.
I am taking an image stream from an IP camera on the network and applying many algorithms to detect faces, extract facial features and recognise an individual's face. With my laptop's local camera I can achieve about 25 FPS (give or take) including the algorithms, because I don't have to convert the image. For an IP camera stream I need to convert it many times to achieve the desired format, and the result is around 5-8 FPS.
(I know my current method is extremely inefficient, which is why I'm here; I'm actually converting each image 5 times in total (even grayscaling too), while only using about half of my processor's capacity (i7, 8 GB RAM).) It does have to be Image<Bgr, byte>, as that is the only format the algorithms will work with.
The code I'm using to get the image:
// headers
using System.IO;
using System.Threading;
using System.Net;

// request a connection
req = (HttpWebRequest)HttpWebRequest.Create(cameraUrl);

// gives a chance for timeouts, errors or loss of connection to surface
req.AllowWriteStreamBuffering = true;
req.Timeout = 20000;

// retrieve response (if successful)
res = req.GetResponse();

// image returned
stream = res.GetResponseStream();
I have a lot of stuff in the background managing connections, data, security etc., which I have shortened to the code above.
My current code to convert the image to the desired output:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for other uses
gray = currentFrame.Convert<Gray, Byte>();
I understand there is a method to save an image to a temporary local file, but I would need to avoid that for security reasons. I'm looking for a direct conversion to help save processing power.
Am I overlooking something? All help is appreciated.
Thanks for reading. (I will update this if anyone requests any more details)
-Dave
You've got a couple of potential bottlenecks, not the least of which is that you're probably JPEG-decoding the stream into an Image, then converting that into a Bitmap, and then into an OpenCV image.
One way around this is to bypass .NET imaging entirely, by using libjpeg directly. There's a free port of it in C#, and IIRC you can hook into it to get called on a per-scanline basis to fill up a buffer.
The downside is that you're decoding JPEG data in managed code, which will run at least 1.5x slower than the equivalent C, although quite frankly I would expect network speed to dwarf this immensely.
OpenCV should be able to read JPEG images directly (want to guess what it uses under the hood? Survey says: libjpeg), which means you can buffer up the entire stream and hand it to OpenCV, bypassing the .NET layer entirely.
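A hedged sketch of that last suggestion, using the Imdecode call that newer Emgu versions expose (the OP's 2.4-era API differs; 'stream' is the HTTP response stream from the question):
using System.IO;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Buffer the whole response, then let OpenCV decode the JPEG itself,
// skipping System.Drawing entirely.
byte[] jpegBytes;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);
    jpegBytes = ms.ToArray();
}

var mat = new Mat();
CvInvoke.Imdecode(jpegBytes, ImreadModes.Color, mat);   // decodes to 8-bit BGR
Image<Bgr, byte> currentFrame = mat.ToImage<Bgr, byte>();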
I believe I found the answer to my problem. I dabbled with Vano Maisuradze's idea of processing in memory, which improved the FPS by a tiny margin (not immediately noticeable without testing). And thanks to Plinth's answer I now have an understanding of multi-threading, which I can use to optimise as I progress, since I can split the algorithms up to work in parallel.
What I think is the cause is the network speed, not the algorithm delay. As Vano pointed out, timing with a stopwatch showed the algorithms didn't actually consume that much, so with and without the algorithms the speed is about the same, provided I optimise with threading so the next frame is being collected while the previous one finishes processing.
I did some testing on some physical Cisco routers and got the same result, if a bit slower; messing around with clock speeds and bandwidths made a noticeable difference. So I need to find a way to retrieve frames over the network faster. A very big thank you to everyone who answered and helped me understand better!
Conclusion:
- Multi-threading to optimise
- Processing in memory instead of converting constantly
- Better networking solutions (higher bandwidth and speeds)
Edit: here is the code to retrieve an image and process it in memory, for anyone who finds this while looking for help:
public void getFrames(object sender, EventArgs e)
{   // gets a frame from the IP cam
    // replace "IPADDRESS", "USERNAME", "PASSWORD"
    // with the respective data for your camera
    string sourceURL = "http://IPADDRESS/snapshot.cgi?user=USERNAME&pwd=PASSWORD";

    // used to store the image retrieved in memory;
    // make sure the buffer is large enough for one full JPEG
    byte[] buffer = new byte[640 * 480];
    int read, total = 0;

    // send a request to the peripheral via HTTP
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
    WebResponse resp = req.GetResponse();

    // get the image capture after receiving a request
    // Note: just a snapshot, not a steady stream
    Stream stream = resp.GetResponseStream();

    // read in chunks, never past the end of the buffer
    while (total < buffer.Length &&
           (read = stream.Read(buffer, total, Math.Min(1000, buffer.Length - total))) != 0)
    {
        total += read;
    }

    // convert memory (bytes) to a bitmap and show it in a picturebox
    pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
}

private void button1_Click(object sender, EventArgs e)
{   // run getFrames whenever the UI thread is idle
    Application.Idle += new EventHandler(getFrames);
}
You can save several images in memory (a buffer) and then start processing from the buffer.
Something like this:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for later use
gray = currentFrame.Convert<Gray, Byte>();
SaveToBuffer(gray);
Queue<Emgu.CV.Image<Gray, Byte>> buffer = new Queue<Emgu.CV.Image<Gray, Byte>>();
bool canProcess = false;

// ...

private void SaveToBuffer(Emgu.CV.Image<Gray, Byte> img)
{
    buffer.Enqueue(img);
    canProcess = buffer.Count > 100;
}

private void Process()
{
    if (canProcess)
    {
        var img = buffer.Dequeue();
        // processing logic goes here...
    }
    else
    {
        // buffer is still loading...
    }
}
But note that you will need enough RAM to store the images in memory, and you should adjust the buffer size to meet your requirements.
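One caveat worth adding: if frames are enqueued on a network thread while Process runs on another thread, a plain Queue is not thread-safe. A minimal adjustment (assuming .NET 4+) would be:
using System.Collections.Concurrent;

// thread-safe drop-in replacement for the plain Queue above
ConcurrentQueue<Emgu.CV.Image<Gray, Byte>> buffer = new ConcurrentQueue<Emgu.CV.Image<Gray, Byte>>();

// producer: buffer.Enqueue(img);
// consumer:
Emgu.CV.Image<Gray, Byte> img;
if (buffer.TryDequeue(out img))
{
    // processing logic goes here...
}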
I am using the following command to grab an image from an IP camera using GStreamer:
gst-launch-0.10 -v rtspsrc location="rtsp://ipaddress:554/user=&password=&channel=1&stream=0.sdp?real_stream--rtp-caching=100" do-timestamp=true is_live=true timeout=5 ! multipartdemux ! ffmpegcolorspace ! jpegenc ! filesink location=test.jpeg
But I only get an empty file. Kindly help me.
First of all, you should use a GStreamer 1.x version; the 0.10 versions are no longer supported and you're missing basically 3+ years of bugfixes, new features and other improvements.
But the problem in your pipeline is that you feed the output of rtspsrc into multipartdemux. rtspsrc outputs one or more RTP streams that have to be depayloaded, decoded, etc.; it does not output multipart-encoded data.
What you probably want is
rtspsrc uri=... ! decodebin2 ! ffmpegcolorspace ! jpegenc ! filesink location=test.jpg
Note however that this will not just stop after the first JPEG picture; it will append every received frame as a JPEG picture to that single file. Use multifilesink instead of filesink if you want to create one file per frame.
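For example, a hedged GStreamer 1.x equivalent that writes one numbered JPEG per frame (the URI is a placeholder; decodebin and videoconvert are the 1.x counterparts of decodebin2 and ffmpegcolorspace):
gst-launch-1.0 rtspsrc uri=rtsp://ipaddress:554/... ! decodebin ! videoconvert ! jpegenc ! multifilesink location=frame-%05d.jpg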