Send image from iOS application to Web API service - C#

We have an iOS application which sends images to an ASP.NET Web API application. We convert the images to Base64 and then send them to the web service as a string.
The problem is that the images are large, so the Base64 conversion takes a lot of time and the resulting string is bigger than the original image.
I need to know:
whether a better way exists, instead of converting to Base64, to encode the image before calling the web service.
I used GZip to compress/decompress an array of bytes like this:
static byte[] Compress(byte[] data)
{
    using (var compressedStream = new MemoryStream())
    using (var zipStream = new GZipStream(compressedStream, CompressionMode.Compress))
    {
        zipStream.Write(data, 0, data.Length);
        zipStream.Close(); // flush the compressed data before reading the buffer
        return compressedStream.ToArray();
    }
}
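For reference, the matching decompressor might look like this (a sketch; the question only shows Compress). Keep in mind that GZip will not shrink already-compressed formats such as JPEG or PNG by much.
static byte[] Decompress(byte[] data)
{
    using (var compressedStream = new MemoryStream(data))
    using (var zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (var resultStream = new MemoryStream())
    {
        // Inflate the GZip data back into a plain byte array
        zipStream.CopyTo(resultStream);
        return resultStream.ToArray();
    }
}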
Is it possible to convert the image to a byte array on the iOS side and then call the web service? Or can an object like compressedStream or GZipStream be exposed as a service argument?
Thanks,

It is possible to convert the image to a byte array; here's an SO answer which touches on that: how to convert byte array to image in ios
The biggest question, however, is this: do you actually need the image that big? Consider that the service will slow down once you have multiple users doing this, and it will more than likely grind to a halt, which will make your app difficult and slow to use.
You might want to consider reducing the image before sending it over. You can reduce the size and the quality to make it smaller, then send the result over the wire.
Here is another SO post which touches on this: What's the easiest way to resize/optimize an image size with the iPhone SDK?
Of course, if you are using Xamarin and C# to build your app, it's even easier, and you can find code samples doing both of these things.
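For illustration, a rough Xamarin.iOS (C#) sketch of the resize-then-JPEG idea could look like the following; the helper name and the quality/size parameters are just examples, not taken from either linked post.
using System;
using CoreGraphics;
using Foundation;
using UIKit;

static class ImageResizer
{
    // Draw the UIImage into a smaller context, then re-encode it as JPEG bytes for upload.
    public static byte[] ResizeToJpegBytes(UIImage image, nfloat maxDimension, nfloat jpegQuality)
    {
        nfloat largestSide = (nfloat)Math.Max((double)image.Size.Width, (double)image.Size.Height);
        nfloat scale = maxDimension / largestSide;
        var newSize = new CGSize(image.Size.Width * scale, image.Size.Height * scale);

        UIGraphics.BeginImageContextWithOptions(newSize, false, 1f);
        image.Draw(new CGRect(CGPoint.Empty, newSize));
        UIImage resized = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();

        using (NSData jpeg = resized.AsJPEG(jpegQuality))
            return jpeg.ToArray();
    }
}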

Related

How to extract image from TCP Stream

I need your help.
I am creating an application in C# that converts the data from an IP camera to an image (JPEG).
I was able to convert the image using the code below:
hex = "FFD8FFDB008400130D0F1.........";/// supply this with the attached hex dump.
byte[] image = HexString2Bytes(hex);
File.WriteAllBytes("visio.png", image);
Process.Start("visio.png");
private static byte[] HexString2Bytes(string hexString)
{
    int bytesCount = hexString.Length / 2;
    byte[] bytes = new byte[bytesCount];
    for (int x = 0; x < bytesCount; ++x)
    {
        bytes[x] = Convert.ToByte(hexString.Substring(x * 2, 2), 16);
    }
    return bytes;
}
Sometimes I get a proper image, as expected: https://ibb.co/pxrwn6p
but sometimes I get a distorted image after converting: https://ibb.co/9twx5ZT.
I was wondering if there is a problem with the conversion or with the way I save the image,
because as per the supplier, all I need to do is save the image directly from the stream.
But since I receive it as bytes and still need to convert it, maybe there is something wrong with my code.
The image also starts with ÿØÿÛ (FF D8 FF DB) and ends with ÿÙÿÿÿÿ (FF D9 FF FF FF FF).
here's the hex dump from their sample app:
https://drive.google.com/file/d/1CMlQ0xaVjM0jfU5A4MB-_HwK54dUMTOr/view?usp=sharing
Using their test application, the image can be captured and converted perfectly.
Captured image using their application: https://ibb.co/2KgyLTc
Using the hex from the sniff and converting it with my code:
Converted image using my code: https://ibb.co/G0WMjht
Sample source code:
Please bear with my code; currently this is only a test app before I integrate this feature into another app.
https://drive.google.com/file/d/1Ux7zsR39IVNyd1wrBxQPQKA6yM4YnwJN/view?usp=sharing
Thank you in advance.
Looking at the hex dump, it looks like some kind of XML file with embedded image data. Trying to convert this directly to an image will most likely not work; you would need to parse the XML data to extract the actual image file. It looks like you have a valid JPEG header, so I would guess you have found the start of the image at least, but you probably also need to check the length property from the XML data to find the length of the image-data block.
However, the data block looks like it contains large sections of zeros, which should not be present in a JPEG file, so it might indicate some data corruption, possibly from the way the network data is captured.
I would expect cameras to use some higher-level protocol than raw TCP, like Real Time Streaming Protocol, GigE Vision, or MJPEG over HTTP. I have not seen any camera that requires you to process a raw TCP stream. But since you do not show how the data is fetched, it is difficult to tell if there are any mistakes in that code.
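If you do end up working from a raw capture anyway, one rough approach (my own sketch, not a substitute for parsing the camera's actual protocol) is to scan the buffer for the JPEG start marker FF D8 and end marker FF D9 and save only that slice:
static byte[] ExtractJpeg(byte[] raw)
{
    int start = -1;
    for (int i = 0; i < raw.Length - 1; i++)
    {
        if (start < 0 && raw[i] == 0xFF && raw[i + 1] == 0xD8)
        {
            start = i; // SOI (start of image) marker
        }
        else if (start >= 0 && raw[i] == 0xFF && raw[i + 1] == 0xD9)
        {
            int length = i + 2 - start; // include the EOI (end of image) marker
            byte[] jpeg = new byte[length];
            Buffer.BlockCopy(raw, start, jpeg, 0, length);
            return jpeg;
        }
    }
    throw new InvalidDataException("No complete JPEG found in the buffer.");
}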

Raspberry Pi and framebuffer input with mono

I'm trying to render a bitmap in memory using Mono. The image should be displayed on Adafruit's 2.8" touch TFT (320*240). The program is developed with Visual Studio 2013 Community Edition. I want to host an ASP.NET Web API and show some data on the display. The ASP.NET part is working fine and the image is rendered. My idea was to write the image to the framebuffer device, but doing this I get an exception saying that the file is too large. I'm just writing raw data without the BMP header. Has someone managed to do this? Maybe the creation of the image is wrong.
It seems as if something is happening, because the display changes and I can see white areas which might be from my image.
I don't want to use any extra libraries, to keep it simple, so my idea is to use the framebuffer directly. Does anyone know this problem and the solution?
Here is some of my code:
using (Bitmap bmp = new Bitmap(240, 320, PixelFormat.Format16bppRgb555))
{
    [...]
    Byte[] image = null;
    using (MemoryStream memoryStream = new MemoryStream())
    {
        bmp.Save(memoryStream, ImageFormat.Bmp);
        Byte[] imageTemp = memoryStream.GetBuffer();
        // Remove the 54-byte BMP header
        image = new Byte[imageTemp.Length - 54];
        Buffer.BlockCopy(imageTemp, 54, image, 0, image.Length);
        // 153600 bytes (240 * 320 * 2)
        using (FileStream fb1 = new FileStream("/dev/fb1", FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            fb1.Write(image, 0, image.Length);
            fb1.Close();
        }
    }
}
Take a look at http://computerstruggles.blogspot.de/2013/02/how-to-program-directfb-in-c-on.html - the idea is to install the directfb library and use it from C# with P/Invoke. The blog's author uses a mini wrapper in C to make using it even easier. By the way, why don't you want to install additional libraries and profit from the work others have done for you?
You may be running out of memory when the MemoryStream reallocates memory. When it needs to grow, it doubles in size. With this large of a write, the internal buffer is probably exceeding available memory. See Why does C# memory stream reserve so much memory? for more information.
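A possible alternative, purely as a sketch of my own (untested on the Pi, and note the display may expect RGB565 rather than RGB555): copy the raw pixel rows out of the Bitmap with LockBits instead of saving a BMP and stripping the header, which also sidesteps the growing MemoryStream.
// Needs System.Drawing, System.Drawing.Imaging, System.IO and System.Runtime.InteropServices
static void WriteToFramebuffer(Bitmap bmp, string device = "/dev/fb1")
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, bmp.PixelFormat);
    try
    {
        int bytesPerPixel = Image.GetPixelFormatSize(bmp.PixelFormat) / 8;
        int rowBytes = bmp.Width * bytesPerPixel;
        byte[] row = new byte[rowBytes];

        using (FileStream fb = new FileStream(device, FileMode.Open, FileAccess.Write))
        {
            for (int y = 0; y < bmp.Height; y++)
            {
                // Stride can be larger than rowBytes, so copy the bitmap row by row
                IntPtr rowPtr = (IntPtr)(data.Scan0.ToInt64() + (long)y * data.Stride);
                Marshal.Copy(rowPtr, row, 0, rowBytes);
                fb.Write(row, 0, rowBytes);
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}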

I want to save captured image in an Object (Windows Phone App)

I am working on a Windows Phone 7 application. Using this code I captured the image and saved it to the media library:
myCamera.Show();
and this is for saving to the media library:
mediaLibrary.SavePicture("TestPhoto", imageBits);
My question is: I want to save my captured image into an object that I can send directly to the server.
imageBits is already an object (of type Stream), so what you're asking for doesn't really make sense. Presumably you're trying to convert it to a byte array in order to send it to the server:
MemoryStream ms = new MemoryStream();
//if you've manipulated stream before this call, reset position
e.ChosenPhoto.Position = 0;
e.ChosenPhoto.CopyTo(ms);
byte[] imageByteArray = ms.ToArray();
ms.Dispose();
imageByteArray then contains your image as a byte array. Alternatively, you could convert the image into a Base64-encoded string and send that, but that depends on whether your server can decode it.
string base64 = Convert.ToBase64String(imageByteArray);
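If the server is also .NET, decoding that string back into the original bytes is just the reverse call:
// Reverse of Convert.ToBase64String on the receiving end
byte[] decodedBytes = Convert.FromBase64String(base64);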

C# Video Streaming

I'm trying to build an application with video streaming, and so far I can send only one image from the server to the client. When I try to send more than one image, the client gets the following error at pictureBox1.Image = new Bitmap(ms): "Parameter is not valid."
Client side code:
while ((data = cliente.receiveImage()) != null)
{
    ms = new MemoryStream(data);
    pictureBox1.Image = new Bitmap(ms);
    ms.Close();
}
Server side code (this code is repeated continuously):
servidor.sendImage(ms.GetBuffer());
ms.GetBuffer() returns the entire buffer of the memory stream, including any extra unused portion.
You should call ToArray(), which only returns actual contents.
(Or, your data might be invalid for some other reason, such as an issue in sendImage or receiveImage)
Images are nit-picky things, and you have to have the entire set of bytes that comprise the image in order to reconstruct an image.
I would bet my left shoe that the issue is that when the client object is receiving data, it's getting it in chunks that contain partial images, not the whole image at once. This would cause the line that says
pictureBox1.Image = new Bitmap(ms);
to fail because it simply doesn't have a whole image's bytes.
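One common remedy, sketched below with hypothetical names (networkStream, SendFrame, ReceiveFrame are mine, not from the question), is to length-prefix each frame so the client reads exactly one complete image before constructing the Bitmap.
static class FrameProtocol
{
    // Server side: write a 4-byte length prefix, then the encoded image bytes.
    public static void SendFrame(Stream networkStream, byte[] image)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(image.Length);
        networkStream.Write(lengthPrefix, 0, lengthPrefix.Length);
        networkStream.Write(image, 0, image.Length);
    }

    // Client side: read exactly 'length' bytes before building the Bitmap.
    public static Bitmap ReceiveFrame(Stream networkStream)
    {
        int length = BitConverter.ToInt32(ReadExactly(networkStream, 4), 0);
        byte[] imageBytes = ReadExactly(networkStream, length);
        return new Bitmap(new MemoryStream(imageBytes));
    }

    // TCP delivers a byte stream, not messages, so keep reading until 'count' bytes arrive.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-frame.");
            offset += read;
        }
        return buffer;
    }
}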
Alternatives
Rather than having the server push images out to the client, perhaps another approach would be to have the client pull images from the server.
Use an existing streaming mechanism. I personally think that streaming video manually from C# may be more complex than you're bargaining for, and I'd humbly recommend using an existing component or application to stream the video rather than writing your own. There are already so many different options out there (WMV, Flash, and a hundred more) that you're reinventing a wheel that really doesn't need to be reinvented.

C# - How to use Jpeg to compress images and send to a server?

I want to build a screen-sharing program in C# (over TCP).
I sniffed around the web and found out that the most efficient way to do it is by sending a lot of screenshots from the client to the server.
The point is: how can I compress a Bitmap to JPEG, receive it on the server, and decompress it back to a Bitmap (so I can show it in a form)?
I've tried using JpegBitmapEncoder with no luck; here's my code:
Bitmap screen = TakeScreenshot();
MemoryStream ms = new MemoryStream();
byte[] Bytes = BmpToBytes_Unsafe(screen);
ms.Write(Bytes, 0, Bytes.Length);

JpegBitmapEncoder Jpeg = new JpegBitmapEncoder();
Jpeg.Frames.Add(BitmapFrame.Create(ms)); // NotSupportedException thrown here
Jpeg.QualityLevel = 40;
Jpeg.Save(ms);

BinaryReader br = new BinaryReader(ms);
SendMessage(br.ReadBytes((int)ms.Length));
It throws a NotSupportedException at Jpeg.Frames.Add(BitmapFrame.Create(ms)):
"No imaging component suitable to complete this operation was found."
So I need a way to convert a Bitmap to JPEG, then to byte[], then send it over TCP.
And on the other end, do the exact opposite. Any suggestions?
Thank you.
JPEG was designed for photographs, not for screen captures. Also, most of the screen doesn't change between frames, so it's better to send only the changed portions, and a full screen only when much of it has changed.
Unless you're just doing this for fun, you are going about this all wrong. VNC has been doing this for years and the source code is free, so you could look at how it's done there.
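That said, if you do want to JPEG-encode each frame yourself, here is a minimal System.Drawing sketch of my own (quality 40 just mirrors the question; it is not from the answer above):
// Needs System.Drawing, System.Drawing.Imaging, System.IO and System.Linq
static class JpegHelper
{
    // Encode a Bitmap to JPEG bytes at the given quality (0-100).
    public static byte[] ToJpegBytes(Bitmap bitmap, long quality)
    {
        ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
        using (var parameters = new EncoderParameters(1))
        using (var ms = new MemoryStream())
        {
            parameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
            bitmap.Save(ms, jpegCodec, parameters);
            return ms.ToArray();
        }
    }

    // Decode JPEG bytes back into a Bitmap on the receiving side.
    public static Bitmap FromJpegBytes(byte[] data)
    {
        // The stream must stay alive for the Bitmap's lifetime, so don't dispose it here.
        return new Bitmap(new MemoryStream(data));
    }
}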
