I'm trying to send images over a socket as fast as possible. I tried compressing them and comparing the images, but it's still working pretty slowly.
By the way, I tried saving the image before and after compression, and the size was almost the same, only 1 or 2 KB less.
Have a look at the client-side code:
Bitmap pre;

private void Form2_Load(object sender, EventArgs e)
{
    pre = GetDesktopImage();
    prev = Compress(ImageToByte(pre)).Length;
    theThread = new Thread(new ThreadStart(startSend));
    theThread.Start();
}
Bitmap curr;
byte[] compressed;

private void startSend()
{
    sck = client.Client;
    s = new NetworkStream(sck);
    while (true)
    {
        curr = GetDesktopImage();
        compressed = Compress(ImageToByte(curr));
        if (Math.Abs(compressed.Length - prev) > 500)
        {
            bFormat.Serialize(s, compressed);
            prev = compressed.Length;
            count++;
        }
    }
}
compression methods:
byte[] Compress(byte[] b)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (GZipStream z = new GZipStream(ms, CompressionMode.Compress, true))
            z.Write(b, 0, b.Length);
        return ms.ToArray();
    }
}
byte[] ImageToByte(Image img)
{
    ImageConverter converter = new ImageConverter();
    return (byte[])converter.ConvertTo(img, typeof(byte[]));
}
and this is the server side:
while (true)
{
    try
    {
        bFormat = new BinaryFormatter();
        inBytes = bFormat.Deserialize(stream) as byte[];
        inImage = ByteToImage(Decompress(inBytes));
        theImage.Image = (Image)inImage;
        count++;
        label1.Invoke(new Action(() => label1.Text = count.ToString()));
    }
    catch { } // note: an empty catch silently swallows every error; at least log the exception
}
By the way, I've seen some people use socket.Send without saving the image to a stream. Can you explain the difference, tell me what's wrong in my code, and suggest how I can improve my algorithm?
Your question is really pushing the limits in terms of "too broad" as a close reason. The general problem of sending image data over a network is a very broad area of research, with a large number of different techniques, the specific application/user-scenario determining which technique is actually best.
That said, there is one very obvious change you should make to your code, which might speed it up, depending on where the bottleneck is.
Specifically, you are using ImageConverter.ConvertTo() to convert the Bitmap object to a byte[], and then you are using GzipStream to compress that array of bytes. The problem with this is that ConvertTo() is already compressing the data; the byte[] it returns contains the original bitmap represented as PNG format, which is a fairly good, lossless compression algorithm for images.
So not only does compressing it again accomplish practically nothing, it costs you a lot of CPU to do that nothing. Don't do that. Just send the byte[] data as-is, without running it through GzipStream.
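To make that concrete, here is a minimal sketch of the question's send loop with the GZip pass removed. The surrounding fields (`client`, `bFormat`, `prev`, `count`) are assumed to be declared as in the original code; only the redundant compression changes:

```csharp
private void startSend()
{
    Socket sck = client.Client;
    NetworkStream s = new NetworkStream(sck);
    while (true)
    {
        Bitmap curr = GetDesktopImage();
        // ImageToByte() already returns PNG-encoded (i.e. compressed) bytes,
        // so there is nothing to gain from a second GZip pass.
        byte[] data = ImageToByte(curr);
        if (Math.Abs(data.Length - prev) > 500)
        {
            bFormat.Serialize(s, data);
            prev = data.Length;
            count++;
        }
    }
}
```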
Now, all that said…
As I mentioned, whether that change will really help all that much depends on other things, including how large the bitmaps are, and how fast the network you are using is. If you are already saturating the network even with the inefficient code you posted in your question, then speeding that code up isn't going to help.
Techniques that are used to deal with network bandwidth as a bottleneck include (but are not limited to):
Using lossy compression (e.g. JPEG, MPEG, etc.), and so simply discarding information that costs too much to send.
Using a differential compression technique (e.g. MPEG, MP4, Quicktime, etc.), which takes advantage of the fact that when dealing with motion picture video, most of the pixels from one frame to the next are unchanged or at least are very similar.
Sending rendering commands instead of bitmap data. This is commonly used for things like VNC or Microsoft's Remote Desktop/Terminal Server APIs, and takes advantage of the fact that on-screen drawing is very commonly affecting a large number of pixels using relatively simple drawing commands (filling/outlining rectangles, drawing text, painting small bitmaps, etc.).
In many cases, these techniques are combined in varying ways to achieve maximum performance.
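As a concrete illustration of the first technique (lossy compression), here is a sketch using System.Drawing's JPEG encoder with an explicit quality setting. The quality value 40 is an arbitrary example, not a recommendation; tune it per application:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

static class JpegHelper
{
    // Encode a Bitmap as JPEG at the given quality (0-100).
    // Lower quality discards more information but yields fewer bytes to send.
    public static byte[] ToJpeg(Bitmap bitmap, long quality)
    {
        ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
        using (var parameters = new EncoderParameters(1))
        using (var ms = new MemoryStream())
        {
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
            bitmap.Save(ms, jpegCodec, parameters);
            return ms.ToArray();
        }
    }
}
```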
If you want to use these kinds of techniques, you need to do a bit more than just asking a question on Stack Overflow. It is well beyond the scope of this site to provide broad documentation and tutorials on those techniques. You'll need to research them yourself, or even better just use existing implementations to achieve your goals.
Related
I am working on a screen-sharing project. I am sending only the screen differences over the socket, comparing the previous and current buffers. It is working.
I am sending 8 to 9 FPS to the client, using Format16bppRgb555 to reduce the overall byte size of the Bitmap:
byte[] wholescreensize = new byte[1360 * 768 * 2]; // it's around 2 MB
My problem is when the full screen changes.
I get about 45-60 KB of PNG image using the function below:
45 KB * 10 (FPS) = 450 KB per second
Is it possible to reduce beyond 45 KB?
I am not interested in reducing the FPS, as it is a live screen-sharing app.
JPEG compression or LZ4/GZIP also doesn't make much difference, as the PNG image is already compressed.
private void SendImgDiffToClient(byte[] contents, Rectangle rectangle)
{
    // Converting the small portion to a Bitmap, because Image.FromStream
    // fails here with "Parameter is not valid"
    byte[] byteArrayout = new byte[contents.Length];
    var bitmap = new Bitmap(rectangle.Width, rectangle.Height, PixelFormat.Format16bppRgb555);
    var bitmap_data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
        ImageLockMode.WriteOnly, PixelFormat.Format16bppRgb555);
    Marshal.Copy(contents, 0, bitmap_data.Scan0, byteArrayout.Length);
    bitmap.UnlockBits(bitmap_data);

    // Converting the small bitmap to a PNG byte array and sending to the client
    using (MemoryStream ms = new MemoryStream())
    {
        Image msImage = (Image)bitmap;
        msImage.Save(ms, ImageFormat.Png);
        msImage.Dispose();
        byteArrayout = ms.ToArray();
    }
    SendtoClient(byteArrayout);
}
My question is: what is the best approach to reduce the byte count in such a scenario?
Video streaming is essentially what you're doing; and modern video compression algorithms have lots of enhancements. Perhaps they can track or move an artifact, or otherwise distort said artifact as part of their functionality. Perhaps they can stream the data in a progressively building manner, so that static items eventually acquire more detail (similar to progressive jpeg images.) They do lots of things all at the same time. You can try to research them further, and take inspiration from them, or you could pick and use one.
This is to say that many people here seem to prefer the solution of using a readily available video compression library. Especially if you are worried about streaming bandwidth.
If you don't want to use an existing video library, then you have to decide how much effort you want to put in, versus how sloppy you want to be with consuming more bandwidth than otherwise necessary.
Edit: SOLVED! Please see my answer below for details.
I was unable to find an answer to the original question, but I found an alternate solution.
This question may be asked somewhere else but I have been searching for days and can't find anything that helps.
Question: I need to convert "Stream" to "image(bgr, byte)" in one go, Is there a way/command to convert directly from System.Drawing.Image.FromStream to Emgu.CV.Image(Bgr, Byte) without converting from stream to image to bitmap to image(bgr, byte)?
Information: I'm coding in c# in Visual Studio 2010 as part of my dissertation project.
I am taking an image stream from an IP camera on a network and applying many algorithms to detect faces/extract facial features and recognise an individual's face. On my laptop's local camera I can achieve about 25 FPS (give or take), including the algorithms, because I don't have to convert the image. For an IP camera stream I need to convert it many times to achieve the desired format, and the result is around 5-8 FPS.
(I know my current method is extremely inefficient, which is why I'm here; I'm actually converting an image 5 times in total (even grayscaling too), and only using about half of my processor's resources (i7, 8 GB RAM).) It does have to be Image(Bgr, Byte), as that is the only format the algorithms will function with.
The code I'm using to get the image:
//headers
using System.IO;
using System.Threading;
using System.Net;
//request a connection
req = (HttpWebRequest)HttpWebRequest.Create(cameraUrl);
//gives chance for timeout for errors to occur or loss of connection
req.AllowWriteStreamBuffering = true;
req.Timeout = 20000;
//retrieve response (if successful)
res = req.GetResponse();
//image returned
stream = res.GetResponseStream();
I have a lot of stuff in the background managing connections, data, security, etc., which I have shortened to the above code.
My current code to convert the image to the desired output:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for other uses
gray = currentFrame.Convert<Gray, Byte>();
I understand there is a method to save an image locally temporarily but I would need to avoid that for security purposes. I'm looking more for a direct conversion to help save processing power.
Am I overlooking something? All help is appreciated.
Thanks for reading. (I will update this if anyone requests any more details)
-Dave
You've got a couple potential bottlenecks, not the least of which is that you're probably jpeg decoding the stream into an image and then converting that into a bitmap and then into an openCV image.
One way around this is to bypass the .NET imaging entirely. This would involve trying to use libjpeg directly. There's a free port of it here in C#, and IIRC you can hook into it to get called on a per-scanline basis to fill up a buffer.
The downside is that you're decoding JPEG data in managed code, which will run at least 1.5x slower than the equivalent C, although quite frankly I would expect network speed to dwarf this immensely.
OpenCV should be able to read jpeg images directly (wanna guess what they use under the hood? Survey says: libjpeg), which means that you can buffer up the entire stream and hand it to OpenCV and bypass the .NET layer entirely.
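To sketch that idea (buffering the whole stream and letting OpenCV decode it directly), assuming an Emgu CV build that exposes `CvInvoke.Imdecode` and `Mat.ToImage` (present in Emgu 3.x and later; `stream` is the response stream from the question):

```csharp
// Buffer the encoded JPEG bytes from the network...
byte[] encoded;
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);
    encoded = ms.ToArray();
}

// ...then let OpenCV decode them directly, bypassing System.Drawing.
var mat = new Mat();
CvInvoke.Imdecode(encoded, ImreadModes.Color, mat);   // decodes to 8-bit BGR
Emgu.CV.Image<Bgr, byte> currentFrame = mat.ToImage<Bgr, byte>();
Emgu.CV.Image<Gray, byte> gray = currentFrame.Convert<Gray, byte>();
```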
I believe I found the answer to my problem. I have dabbled with Vano Maisuradze's idea of processing in memory, which improved the FPS by a tiny margin (not immediately noticeable without testing). And also, thanks to Plinth's answer, I have an understanding of multi-threading, and I can optimise this as I progress, since I can split the algorithms up to work in parallel.
What I think is the cause is the network speed, not the actual algorithm delay. As Vano pointed out, timing with a stopwatch showed the algorithms didn't actually consume that much. So with and without the algorithms the speed is about the same, if I optimise using threading so the next frame is being collected while the previous one finishes processing.
I did some testing on some physical Cisco routers and got the same result, if a bit slower; messing around with clock speeds and bandwidths made a noticeable difference. So I need to find a way to retrieve frames over the network faster. A very big thank you to everyone who answered and helped me understand better!
Conclusion:
Multi-threading to optimise
Processing in memory instead of converting constantly
Better networking solutions (Higher bandwidth and speeds)
Edit: The code to retrieve an image and process it in memory, for anyone who finds this looking for help:
public void getFrames(object sender, EventArgs e)
{
    // Gets a frame from the IP cam.
    // Replace "IPADDRESS", "USERNAME", "PASSWORD"
    // with the respective data for your camera.
    string sourceURL = "http://IPADDRESS/snapshot.cgi?user=USERNAME&pwd=PASSWORD";

    // Used to store the image retrieved, in memory.
    // Note: this fixed-size buffer assumes the snapshot never exceeds
    // 640 * 480 bytes; a larger frame would overflow it.
    byte[] buffer = new byte[640 * 480];
    int read, total = 0;

    // Send a request to the peripheral via HTTP
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
    WebResponse resp = req.GetResponse();

    // Get the image capture after receiving a response.
    // Note: just a snapshot, not a steady stream.
    Stream stream = resp.GetResponseStream();
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }

    // Convert the bytes in memory to a bitmap and show it in a PictureBox
    pictureBox1.Image = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
}

private void button1_Click(object sender, EventArgs e)
{
    // Trigger the handler to run whenever the UI is idle
    Application.Idle += new EventHandler(getFrames);
}
You can save several images in memory (a buffer) and then start processing from the buffer.
Something like this:
//Convert stream to image then to bitmap
Bitmap bmpImage = new Bitmap(System.Drawing.Image.FromStream(stream));
//Convert to emgu image (desired goal)
currentFrame = new Emgu.CV.Image<Bgr, Byte>(bmpImage);
//gray scale for later use
gray = currentFrame.Convert<Gray, Byte>();
SaveToBuffer(gray);

Queue<Emgu.CV.Image<Gray, Byte>> buffer = new Queue<Emgu.CV.Image<Gray, Byte>>();
bool canProcess = false;
// ...

private void SaveToBuffer(Emgu.CV.Image<Gray, Byte> img)
{
    buffer.Enqueue(img);
    canProcess = buffer.Count > 100;
}

private void Process()
{
    if (canProcess)
    {
        buffer.Dequeue();
        // Processing logic goes here...
    }
    else
    {
        // Buffer is still loading...
    }
}
But note that you will need enough RAM to store the images in memory, and you should also adjust the buffer size to meet your requirements.
I want to add images to a running video using C#.
My code is below, but it is not working:
byte[] mainAudio = System.IO.File.ReadAllBytes(Server.MapPath(image path));   // uploaded by user
byte[] intreAudio = System.IO.File.ReadAllBytes(Server.MapPath(video path));  // file selected for interruption
List<byte> int1 = new List<byte>(mainAudio);
int1.AddRange(intreAudio);
byte[] gg = int1.ToArray();
using (FileStream fs =
    System.IO.File.Create(Server.MapPath(@"\TempBasicAudio\myfile1.mp3")))
{
    if (gg != null)
    {
        fs.Write(gg, 0, gg.Length);
    }
}
Did it ever occur to you that a video file is not just a mindless "array of images", so you cannot simply append another byte range at the end?
Depending on the video type, there is a quite complex management structure that you are simply ignoring. Video is a highly complex encoding.
You may have to add the images in a specific form while updating the management information, or you may even have to transcode (decode all frames, then re-encode the whole video stream).
Maybe a book about the basics of video processing is in order now? You are like the guy asking why you cannot get more horsepower out of your car by running it on rocket fuel, totally ignoring the realities of how cars operate.
EDIT: I keep getting "OutOfMemoryException was unhandled".
I think it's how I am saving the image to isolated storage; I think this is where I can solve my problem. How do I reduce the size of the image before I save it? (I've added the code where I save the image.)
I am opening images from isolated storage, sometimes over 100 of them, and I want to loop over those images, but I get an OutOfMemoryException when around 100 to 150 images are loaded into a storyboard. I have already brought down the resolution of the images. How can I handle this exception and stop my app from crashing?
I get the exception at this line here
image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read));//images from isolated storage
here's my code
private void OnLoaded(object sender, RoutedEventArgs e)
{
    IsolatedStorageFile isStoreTwo = IsolatedStorageFile.GetUserStoreForApplication();
    try
    {
        storyboard = new Storyboard
        {
            //RepeatBehavior = RepeatBehavior.Forever
        };
        var animation = new ObjectAnimationUsingKeyFrames();
        Storyboard.SetTarget(animation, projectImage);
        Storyboard.SetTargetProperty(animation, new PropertyPath("Source"));
        storyboard.Children.Add(animation);
        for (int i = 1; i <= savedCounter; i++)
        {
            BitmapImage image = new BitmapImage();
            // images from isolated storage
            image.SetSource(isStoreTwo.OpenFile(projectFolder + "\\MyImage" + i + ".jpg", FileMode.Open, FileAccess.Read));
            var keyframe = new DiscreteObjectKeyFrame
            {
                KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(100 * i)),
                Value = image
            };
            animation.KeyFrames.Add(keyframe);
        }
    }
    catch (OutOfMemoryException exc)
    {
        //throw;
    }
    Resources.Add("ProjectStoryBoard", storyboard);
    storyboard.Begin();
}
EDIT: This is how I am saving the image to isolated storage. I think this is where I can solve my problem: how do I reduce the size of the image when saving it to isolated storage?
void cam_CaptureImageAvailable(object sender, Microsoft.Devices.ContentReadyEventArgs e)
{
    string fileName = folderName + "\\MyImage" + savedCounter + ".jpg";
    try
    {
        // Save picture to the library camera roll.
        //library.SavePictureToCameraRoll(fileName, e.ImageStream);

        // Set the position of the stream back to start.
        e.ImageStream.Seek(0, SeekOrigin.Begin);

        // Save picture as JPEG to isolated storage.
        using (IsolatedStorageFile isStore = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (IsolatedStorageFileStream targetStream = isStore.OpenFile(fileName, FileMode.Create, FileAccess.Write))
            {
                // Initialize the buffer for 4KB disk pages.
                byte[] readBuffer = new byte[4096];
                int bytesRead = -1;

                // Copy the image to isolated storage.
                while ((bytesRead = e.ImageStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
                {
                    targetStream.Write(readBuffer, 0, bytesRead);
                }
            }
        }
    }
    finally
    {
        // Close image stream
        e.ImageStream.Close();
    }
}
I would appreciate if you could help me thanks.
It doesn't matter how large your images are on disk, because when you load them into memory they're going to be uncompressed. The memory required for the image will be approximately stride * height, where stride is (width * bitsPerPixel) / 8, rounded up to the next multiple of 4 bytes. So an image that's 1024x768 at 24 bits per pixel will take up about 2.25 MB.
You should figure out how large your images are, uncompressed, and use that number to determine the memory requirements.
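The arithmetic above can be written out as a small helper; the stride is rounded up to a 4-byte boundary, following the usual Windows bitmap row-alignment convention:

```csharp
static class BitmapMemory
{
    // Approximate memory needed for an uncompressed bitmap:
    // stride = bytes per row, rounded up to a multiple of 4;
    // total  = stride * height.
    public static long UncompressedBytes(int width, int height, int bitsPerPixel)
    {
        int stride = ((width * bitsPerPixel + 31) / 32) * 4;
        return (long)stride * height;
    }
}

// The example from the text: 1024 x 768 at 24 bpp gives
// UncompressedBytes(1024, 768, 24) == 2359296 bytes, i.e. about 2.25 MB.
```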
You are getting the OutOfMemory Exception because you are storing all the images in memory at the same time in order to create your StoryBoard. I don't think you will be able to overcome the uncompressed bitmap size that the images require to be displayed on screen.
So to get past this we must think about your goal rather than trying to fix the error. If your goal is to show a new image in sequence every X ms then you have a few options.
Keep using StoryBoards but chain them using the OnCompleted event. This way you don't have to create them all at once but can just generate the next few. It might not be fast enough though if you're changing images every 100ms.
Use CompositionTarget.Rendering as mentioned in my answer here. This would probably take the least amount of memory if you just preload the next one (as opposed to having them all preloaded as your current solution does). You'd need to manually check the elapsed time though.
Rethink what you're doing. If you state what you are going after people might have more alternatives.
To answer the edit at the top of your post, try ImageResizer. There's a NuGet package, and a HanselBlog episode on it. Obviously, this is ASP.NET-based, but I'm sure you could butcher it to work in your scenario.
Tackling these kinds of problems at the design layer usually works better.
Making the application smart about its running environment via some configuration makes it more robust. For example, you can define some variables like image size, image count, image quality... based on available memory, and set these variables at run-time in your app. That way your application always works: fast on high-memory machines and slow on low-memory ones, but it never crashes. (Don't believe that working in a managed environment means never worrying about the environment... design always matters.)
Also there are some known design patterns like Lazy Loading you can benefit from.
I don't know about Windows Phone in particular, but in .NET WinForms you need to use a separate thread when doing a long-running task. Are you using a BackgroundWorker or equivalent? The finalizer thread can become blocked, which will prevent the resources for the images from being disposed. Using a separate thread from the UI thread will allow the Dispose method to be run automatically.
OK, an image (1024x768) has a memory size of at least 3 MB (ARGB).
I don't know how ObjectAnimationUsingKeyFrames works internally. Maybe you can force the GC by destroying the instances of BitmapImage (and KeyFrames) without losing their data in the animation.
(not possible, see comments!)
Based on one of your comments, you are building a Time Lapse app. Commercial time-lapse apps for WP7 compress the images to video, not stills. e.g. Time Lapse Pro
The whole point of video playback is to reduce similar, or time-related, images to highly compressed stream that do not require massive amounts of memory to play back.
If you can add the ability to encode to video, in your app, you will avoid the problem of trying to emulate a video player (using 100s of single full-resolution frames as a flick-book).
Processing the images into video server-side may be another option (but not as friendly as in-camera).
A friend and I spent the better part of last night nearly tearing our hair out trying to work with some images in a metro app. We got images into the app with the share charm, and then I wanted to do some other work with them, cropping the images and saving them back into the appdata folder. This proved extremely frustrating.
My question, at the end of all this, is going to be "What's the proper way of doing this, without feeling like I'm hammering together a bunch of mismatched jigsaw puzzle pieces?"
When sharing multiple images with the app, they come in as a list of Windows.Storage.StorageFiles. Here's some code used to handle that.
var storageItems = await _shareOperation.Data.GetStorageItemsAsync();
foreach (StorageFile item in storageItems)
{
    var stream = await item.OpenReadAsync();
    var properties = await item.Properties.GetImagePropertiesAsync();
    var image = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
    image.SetSource(stream);
    images.Add(image);
}
Some searching online has indicated that currently, a Windows.UI.Xaml.Media.Imaging.WriteableBitmap is the only thing capable of letting you access the pixel data in the image. This question includes a helpful answer full of extension methods for saving images to a file, so we used those.
Our problems were the worst when I tried opening the files again later. I did something similar to before:
var files = await ApplicationData.Current.LocalFolder.GetFilesAsync();
foreach (var file in files)
{
    var fileStream = await file.OpenReadAsync();
    var properties = await file.Properties.GetImagePropertiesAsync();
    var bitmap = new WriteableBitmap((Int32)properties.Width, (Int32)properties.Height);
    bitmap.SetSource(fileStream);
    System.IO.Stream stream = bitmap.PixelBuffer.AsStream();
Here comes a problem. How long is this stream, if I want the bytes out of it?
// CRASH! Length isn't supported on an IRandomAccessStream.
var pixels = new byte[fileStream.Length];
Ok try again.
var pixels = new byte[stream.Length];
This works, except... if the image is compressed, the stream is shorter than you would expect, so you will eventually get an out of bounds exception. For now pretend it's an uncompressed bitmap.
await _stream.ReadAsync(pixels, 0, pixels.Length);
Well guess what. Even though I said bitmap.SetSource(fileStream); in order to read in the data, my byte array is still full of zeroes. I have no idea why. If I pass this same bitmap into a my UI through the sample data group, the image shows up just fine. So it has clearly got the pixel data in that bitmap somewhere, but I can't read it out of bitmap.PixelBuffer? Why not?
Finally, here's what ended up actually working.
var decoder = await BitmapDecoder.CreateAsync(BitmapDecoder.PngDecoderId, fileStream);
var data = await decoder.GetPixelDataAsync();
var bytes = data.DetachPixelData();
/* process my data, finally */
} // end of that foreach I started a while ago
So now I have my image data, but I still have a big problem. In order to do anything with it, I have to make assumptions about its format. I have no idea whether it's RGBA, RGB, ABGR, BGRA, or whatever else it might be. If I guess wrong, my processing just fails. I've had dozens of test runs spit out zero-byte and corrupted images, upside-down images (???), wrong colors, etc. I would have expected to find some of this info in the properties that I got from calling await file.Properties.GetImagePropertiesAsync();, but no luck. That only contains the image width and height, plus some other useless things. Minimal documentation here.
So, why is this process so painful? Is this just reflecting the immaturity of the libraries right now, and can I expect it to get better? Or is there already some standard way of doing this? I wish it were as easy as in System.Drawing. That gave you all the data you ever needed, and happily loaded any image type correctly, without making you deal with streams yourself.
From what I have seen - when you are planning on loading the WriteableBitmap with a stream - you don't need to check the image dimensions - just do new WriteableBitmap(1,1), then call SetSource().
Not sure why you were thinking var pixels = new byte[fileStream.Length]; would work, since the fileStream has the compressed image bytes and not a pixel array.
You might need to seek to the beginning of the stream to get the pixels array:
var pixelStream = pixelBuffer.AsStream();
var bytes = new byte[pixelStream.Length];
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Read(bytes, 0, bytes.Length);
I had started working on a WinRT port of WriteableBitmapEx - maybe it could help you: http://bit.ly/WriteableBitmapExWinRT. I have not tested it well and it is based on an older version of WBX, but it is fairly complete in terms of feature support. Might be a tad slower than is possible, too.
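On the pixel-format guessing in the question: one way to avoid it is to request an explicit layout from the decoder. A sketch, assuming the WinRT BitmapDecoder API (`fileStream` being the IRandomAccessStream from the question's loop):

```csharp
// Let the decoder auto-detect the codec instead of hard-coding PngDecoderId...
var decoder = await BitmapDecoder.CreateAsync(fileStream);

// ...and ask for a known pixel layout, so no guessing about byte order is needed.
var pixelProvider = await decoder.GetPixelDataAsync(
    BitmapPixelFormat.Bgra8,              // bytes come back as B, G, R, A
    BitmapAlphaMode.Premultiplied,
    new BitmapTransform(),
    ExifOrientationMode.IgnoreExifOrientation,
    ColorManagementMode.DoNotColorManage);
byte[] bytes = pixelProvider.DetachPixelData();
```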