I have a particular problem that I need help with. I am working with complex proteomics data, and one of our plots is a heatmap of the raw data. I generate these heatmaps as a raw image that I then resize to fit my chart canvas. The image files produced this way are usually very unbalanced in width vs. height.
Usually, these images are around 10 to 100 pixels wide and 5000 to 8000 pixels high (this is the size of my raw 2D data array that I have to convert into an image). The target resolution afterwards would be something like 1300 x 600 pixels.
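For context, this is roughly how I build such a raw image (a sketch, assuming a double[,] array normalized to [0, 1] and a simple grayscale palette; the real color mapping is more involved):
// Sketch: turn a raw 2D data array into a Bitmap, one pixel per data point.
// SetPixel is slow but keeps the example short; LockBits would be faster.
public static Bitmap ToHeatmap(double[,] data) {
    int w = data.GetLength(0), h = data.GetLength(1);
    Bitmap bmp = new Bitmap(w, h);
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            int v = (int)(255 * Math.Max(0.0, Math.Min(1.0, data[x, y])));
            bmp.SetPixel(x, y, Color.FromArgb(v, v, v));
        }
    }
    return bmp;
}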
I usually use this function to resize my image to a target size:
public static Image Resize(Image img, int width, int height) {
    Bitmap bmp = new Bitmap(width, height);
    using (Graphics graphic = Graphics.FromImage(bmp)) {
        // NearestNeighbor keeps the hard cell boundaries of the heatmap
        graphic.InterpolationMode = InterpolationMode.NearestNeighbor;
        graphic.PixelOffsetMode = PixelOffsetMode.Half;
        graphic.DrawImage(img, 0, 0, width, height);
    }
    return bmp;
}
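For reference, typical usage (file names are illustrative; both images are owned and disposed by the caller):
using (var raw = Image.FromFile("raw.png"))
using (var resized = Resize(raw, 1300, 600)) {
    resized.Save("resized.png");
}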
This usually works fine for the dimensions described above. But now I have a new dataset with dimensions of 6 x 54343 pixels.
When using the same code on this image, the resized image is half blank.
Original Image:
http://files.biognosys.ch/FileSharing/20170427_StackOverflow/raw.png
(the original image does not show properly in most browsers, so use "save link as...")
How it should look (using Photoshop):
http://files.biognosys.ch/FileSharing/20170427_StackOverflow/photoshop_resize.png
How it looks when I use the code snippet above:
http://files.biognosys.ch/FileSharing/20170427_StackOverflow/code_resized.png
Please keep in mind that this has worked for years without problems for images of 6 x 8000, so I guess I am not doing anything fundamentally wrong here.
It is also important that I use NearestNeighbor interpolation for the resizing, so any solution that involves other interpolation modes that do not produce the "How it should look" image is ultimately not useful for me.
Oli
It looks like you've hit some legacy limitation from the 16-bit Windows era. The obvious way to work around it is to pre-split the source image into smaller chunks using just memory operations, and then draw all those chunks with resizing using Graphics. This method assumes your source image is a Bitmap rather than just an Image, but this doesn't seem to be a limitation for you. Here is a sketch of the code:
[DllImport("kernel32.dll", EntryPoint = "CopyMemory", SetLastError = true)]
public static extern void CopyMemoryUnmanaged(IntPtr dest, IntPtr src, int count);
// in case you can't use P/Invoke, copy via intermediate .Net buffer
static void CopyMemoryNet(IntPtr dst, IntPtr src, int count)
{
byte[] buffer = new byte[count];
Marshal.Copy(src, buffer, 0, count);
Marshal.Copy(buffer, 0, dst, count);
}
static Image CopyImagePart(Bitmap srcImg, int startH, int endH)
{
var width = srcImg.Width;
var height = endH - startH;
var srcBitmapData = srcImg.LockBits(new Rectangle(0, startH, width, height), ImageLockMode.ReadOnly, srcImg.PixelFormat);
var dstImg = new Bitmap(width, height, srcImg.PixelFormat);
var dstBitmapData = dstImg.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, srcImg.PixelFormat);
int bytesCount = Math.Abs(srcBitmapData.Stride) * height;
CopyMemoryUnmanaged(dstBitmapData.Scan0, srcBitmapData.Scan0, bytesCount);
// in case you can't use P/Invoke, copy via intermediate .Net buffer
//CopyMemoryNet(dstBitmapData.Scan0, srcBitmapData.Scan0, bytesCount);
srcImg.UnlockBits(srcBitmapData);
dstImg.UnlockBits(dstBitmapData);
return dstImg;
}
public static Image ResizeInParts(Bitmap srcBmp, int width, int height)
{
int srcStep = srcBmp.Height;
int dstStep = height;
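// keep each chunk below the apparent GDI+ coordinate limit (~32767 pixels per dimension)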
while (srcStep > 30000)
{
srcStep /= 2;
dstStep /= 2;
}
var resBmp = new Bitmap(width, height);
using (Graphics graphic = Graphics.FromImage(resBmp))
{
graphic.InterpolationMode = InterpolationMode.NearestNeighbor;
graphic.PixelOffsetMode = PixelOffsetMode.Half;
for (int srcTop = 0, dstTop = 0; srcTop < srcBmp.Height; srcTop += srcStep, dstTop += dstStep)
{
int srcBottom = srcTop + srcStep;
int dstH = dstStep;
if (srcBottom > srcBmp.Height)
{
srcBottom = srcBmp.Height;
dstH = height - dstTop;
}
using (var imgPart = CopyImagePart(srcBmp, srcTop, srcBottom))
{
graphic.DrawImage(imgPart, 0, dstTop, width, dstH);
}
}
}
return resBmp;
}
Here is what I get for your example image:
It is not the same as your photoshop_resize.png but is quite similar to your code_resized.png
This code can be improved to better handle various edge cases, such as when srcBmp.Height is odd, or the seams between different parts (pixels at the seams are interpolated using only half of the pixels they should be), but this is not easy to do without assuming some "good" size for both the source and the resized image, or re-implementing the interpolation logic yourself. Still, this code might already be good enough for your usage, given your scaling factors.
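One cheap mitigation for the rounding drift, sketched under the assumption that the ~30000-pixel chunk limit is the only hard constraint, is to derive srcStep from dstStep on every halving instead of halving both independently:
static void ComputeSteps(int srcHeight, int dstHeight, out int srcStep, out int dstStep)
{
    srcStep = srcHeight;
    dstStep = dstHeight;
    while (srcStep > 30000 && dstStep > 1)
    {
        dstStep = (dstStep + 1) / 2;
        // derive the source step from the destination step instead of halving
        // both independently, so integer rounding drift does not accumulate
        srcStep = (int)((long)dstStep * srcHeight / dstHeight);
    }
}
The resulting values can then be used in ResizeInParts in place of the halving loop.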
Here is a solution that seems to work. It's based on Windows WIC ("Windows Imaging Component"), the native component that Windows (and WPF) uses for all imaging operations.
I have provided a small .NET interop layer for it. It does not cover all WIC features, but it will allow you to load, scale, and save an image from a file or a stream. The Scale method has a scaling option similar to the GDI+ one.
It seems to work OK with your sample, although the result is not strictly equivalent to the Photoshop one. This is how you can use it:
using (var bmp = WicBitmapSource.Load("input.png"))
{
bmp.Scale(1357, 584, WicBitmapInterpolationMode.NearestNeighbor);
bmp.Save("output.png");
}
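For comparison, other interpolation modes can be swapped into the same call (Fant is WIC's high-quality downscaler, though for this heatmap use case NearestNeighbor is the mode that preserves the hard cell edges):
using (var bmp = WicBitmapSource.Load("input.png"))
{
    bmp.Scale(1357, 584, WicBitmapInterpolationMode.Fant);
    bmp.Save("output_fant.png");
}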
...
public enum WicBitmapInterpolationMode
{
NearestNeighbor = 0,
Linear = 1,
Cubic = 2,
Fant = 3,
HighQualityCubic = 4,
}
public sealed class WicBitmapSource : IDisposable
{
private IWICBitmapSource _source;
private WicBitmapSource(IWICBitmapSource source, Guid format)
{
_source = source;
Format = format;
Stats();
}
public Guid Format { get; }
public int Width { get; private set; }
public int Height { get; private set; }
public double DpiX { get; private set; }
public double DpiY { get; private set; }
private void Stats()
{
if (_source == null)
{
Width = 0;
Height = 0;
DpiX = 0;
DpiY = 0;
return;
}
int w, h;
_source.GetSize(out w, out h);
Width = w;
Height = h;
double dpix, dpiy;
_source.GetResolution(out dpix, out dpiy);
DpiX = dpix;
DpiY = dpiy;
}
private void CheckDisposed()
{
if (_source == null)
throw new ObjectDisposedException(null);
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
~WicBitmapSource()
{
Dispose(false);
}
private void Dispose(bool disposing)
{
if (_source != null)
{
Marshal.ReleaseComObject(_source);
_source = null;
}
}
public void Save(string filePath)
{
Save(filePath, Format, Guid.Empty);
}
public void Save(string filePath, Guid pixelFormat)
{
Save(filePath, Format, pixelFormat);
}
public void Save(string filePath, Guid encoderFormat, Guid pixelFormat)
{
if (filePath == null)
throw new ArgumentNullException(nameof(filePath));
if (encoderFormat == Guid.Empty)
{
string ext = Path.GetExtension(filePath).ToLowerInvariant();
// we support only png & jpg
if (ext == ".png")
{
encoderFormat = new Guid(0x1b7cfaf4, 0x713f, 0x473c, 0xbb, 0xcd, 0x61, 0x37, 0x42, 0x5f, 0xae, 0xaf);
}
else if (ext == ".jpeg" || ext == ".jpe" || ext == ".jpg" || ext == ".jfif" || ext == ".exif")
{
encoderFormat = new Guid(0x19e4a5aa, 0x5662, 0x4fc5, 0xa0, 0xc0, 0x17, 0x58, 0x02, 0x8e, 0x10, 0x57);
}
}
if (encoderFormat == Guid.Empty)
throw new ArgumentException();
using (var file = File.OpenWrite(filePath))
{
Save(file, encoderFormat, pixelFormat);
}
}
public void Save(Stream stream)
{
Save(stream, Format, Guid.Empty);
}
public void Save(Stream stream, Guid pixelFormat)
{
Save(stream, Format, pixelFormat);
}
public void Save(Stream stream, Guid encoderFormat, Guid pixelFormat)
{
if (stream == null)
throw new ArgumentNullException(nameof(stream));
CheckDisposed();
Save(_source, stream, encoderFormat, pixelFormat, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache, null);
}
public void Scale(int? width, int? height, WicBitmapInterpolationMode mode)
{
if (!width.HasValue && !height.HasValue)
throw new ArgumentException();
int neww;
int newh;
if (width.HasValue && height.HasValue)
{
neww = width.Value;
newh = height.Value;
}
else
{
int w = Width;
int h = Height;
if (w == 0 || h == 0)
return;
if (width.HasValue)
{
neww = width.Value;
newh = (width.Value * h) / w;
}
else
{
newh = height.Value;
neww = (height.Value * w) / h;
}
}
if (neww <= 0 || newh <= 0)
throw new ArgumentException();
CheckDisposed();
_source = Scale(_source, neww, newh, mode);
Stats();
}
// we support only 1-framed files (unlike TIF for example)
public static WicBitmapSource Load(string filePath)
{
if (filePath == null)
throw new ArgumentNullException(nameof(filePath));
return LoadBitmapSource(filePath, 0, WICDecodeOptions.WICDecodeMetadataCacheOnDemand);
}
public static WicBitmapSource Load(Stream stream)
{
if (stream == null)
throw new ArgumentNullException(nameof(stream));
return LoadBitmapSource(stream, 0, WICDecodeOptions.WICDecodeMetadataCacheOnDemand);
}
private static WicBitmapSource LoadBitmapSource(string filePath, int frameIndex, WICDecodeOptions metadataOptions)
{
var wfac = (IWICImagingFactory)new WICImagingFactory();
IWICBitmapDecoder decoder = null;
try
{
decoder = wfac.CreateDecoderFromFilename(filePath, null, GenericAccessRights.GENERIC_READ, metadataOptions);
return new WicBitmapSource(decoder.GetFrame(frameIndex), decoder.GetContainerFormat());
}
finally
{
Release(decoder);
Release(wfac);
}
}
private static WicBitmapSource LoadBitmapSource(Stream stream, int frameIndex, WICDecodeOptions metadataOptions)
{
var wfac = (IWICImagingFactory)new WICImagingFactory();
IWICBitmapDecoder decoder = null;
try
{
decoder = wfac.CreateDecoderFromStream(new ManagedIStream(stream), null, metadataOptions);
return new WicBitmapSource(decoder.GetFrame(frameIndex), decoder.GetContainerFormat());
}
finally
{
Release(decoder);
Release(wfac);
}
}
private static IWICBitmapScaler Scale(IWICBitmapSource source, int width, int height, WicBitmapInterpolationMode mode)
{
var wfac = (IWICImagingFactory)new WICImagingFactory();
IWICBitmapScaler scaler = null;
try
{
scaler = wfac.CreateBitmapScaler();
scaler.Initialize(source, width, height, mode);
Marshal.ReleaseComObject(source);
return scaler;
}
finally
{
Release(wfac);
}
}
private static void Save(IWICBitmapSource source, Stream stream, Guid containerFormat, Guid pixelFormat, WICBitmapEncoderCacheOption cacheOptions, WICRect rect)
{
var wfac = (IWICImagingFactory)new WICImagingFactory();
IWICBitmapEncoder encoder = null;
IWICBitmapFrameEncode frame = null;
try
{
encoder = wfac.CreateEncoder(containerFormat, null);
encoder.Initialize(new ManagedIStream(stream), cacheOptions);
encoder.CreateNewFrame(out frame, IntPtr.Zero);
frame.Initialize(IntPtr.Zero);
if (pixelFormat != Guid.Empty)
{
frame.SetPixelFormat(pixelFormat);
}
frame.WriteSource(source, rect);
frame.Commit();
encoder.Commit();
}
finally
{
Release(frame);
Release(encoder);
Release(wfac);
}
}
private static void Release(object obj)
{
if (obj != null)
{
Marshal.ReleaseComObject(obj);
}
}
[ComImport]
[Guid("CACAF262-9370-4615-A13B-9F5539DA4C0A")]
private class WICImagingFactory
{
}
[StructLayout(LayoutKind.Sequential)]
private class WICRect
{
public int X;
public int Y;
public int Width;
public int Height;
}
[Flags]
private enum WICDecodeOptions
{
WICDecodeMetadataCacheOnDemand = 0x0,
WICDecodeMetadataCacheOnLoad = 0x1,
}
[Flags]
private enum WICBitmapEncoderCacheOption
{
WICBitmapEncoderCacheInMemory = 0x0,
WICBitmapEncoderCacheTempFile = 0x1,
WICBitmapEncoderNoCache = 0x2,
}
[Flags]
private enum GenericAccessRights : uint
{
GENERIC_READ = 0x80000000,
GENERIC_WRITE = 0x40000000,
GENERIC_EXECUTE = 0x20000000,
GENERIC_ALL = 0x10000000,
GENERIC_READ_WRITE = GENERIC_READ | GENERIC_WRITE
}
[Guid("ec5ec8a9-c395-4314-9c77-54d7a935ff70"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICImagingFactory
{
IWICBitmapDecoder CreateDecoderFromFilename([MarshalAs(UnmanagedType.LPWStr)] string wzFilename, [MarshalAs(UnmanagedType.LPArray, SizeConst = 1)] Guid[] pguidVendor, GenericAccessRights dwDesiredAccess, WICDecodeOptions metadataOptions);
IWICBitmapDecoder CreateDecoderFromStream(IStream pIStream, [MarshalAs(UnmanagedType.LPArray, SizeConst = 1)] Guid[] pguidVendor, WICDecodeOptions metadataOptions);
void NotImpl2();
void NotImpl3();
void NotImpl4();
IWICBitmapEncoder CreateEncoder([MarshalAs(UnmanagedType.LPStruct)] Guid guidContainerFormat, [MarshalAs(UnmanagedType.LPArray, SizeConst = 1)] Guid[] pguidVendor);
void NotImpl6();
void NotImpl7();
IWICBitmapScaler CreateBitmapScaler();
// not fully impl...
}
[Guid("00000120-a8f2-4877-ba0a-fd2b6645fb94"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapSource
{
void GetSize(out int puiWidth, out int puiHeight);
Guid GetPixelFormat();
void GetResolution(out double pDpiX, out double pDpiY);
void NotImpl3();
void NotImpl4();
}
[Guid("00000302-a8f2-4877-ba0a-fd2b6645fb94"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapScaler : IWICBitmapSource
{
#region IWICBitmapSource
new void GetSize(out int puiWidth, out int puiHeight);
new Guid GetPixelFormat();
new void GetResolution(out double pDpiX, out double pDpiY);
new void NotImpl3();
new void NotImpl4();
#endregion IWICBitmapSource
void Initialize(IWICBitmapSource pISource, int uiWidth, int uiHeight, WicBitmapInterpolationMode mode);
}
[Guid("9EDDE9E7-8DEE-47ea-99DF-E6FAF2ED44BF"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapDecoder
{
void NotImpl0();
void NotImpl1();
Guid GetContainerFormat();
void NotImpl3();
void NotImpl4();
void NotImpl5();
void NotImpl6();
void NotImpl7();
void NotImpl8();
void NotImpl9();
IWICBitmapFrameDecode GetFrame(int index);
}
[Guid("3B16811B-6A43-4ec9-A813-3D930C13B940"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapFrameDecode : IWICBitmapSource
{
// not fully impl...
}
[Guid("00000103-a8f2-4877-ba0a-fd2b6645fb94"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapEncoder
{
void Initialize(IStream pIStream, WICBitmapEncoderCacheOption cacheOption);
Guid GetContainerFormat();
void NotImpl2();
void NotImpl3();
void NotImpl4();
void NotImpl5();
void NotImpl6();
void CreateNewFrame(out IWICBitmapFrameEncode ppIFrameEncode, IntPtr encoderOptions);
void Commit();
// not fully impl...
}
[Guid("00000105-a8f2-4877-ba0a-fd2b6645fb94"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
private interface IWICBitmapFrameEncode
{
void Initialize(IntPtr pIEncoderOptions);
void SetSize(int uiWidth, int uiHeight);
void SetResolution(double dpiX, double dpiY);
void SetPixelFormat([MarshalAs(UnmanagedType.LPStruct)] Guid pPixelFormat);
void NotImpl4();
void NotImpl5();
void NotImpl6();
void NotImpl7();
void WriteSource(IWICBitmapSource pIBitmapSource, WICRect prc);
void Commit();
// not fully impl...
}
private class ManagedIStream : IStream
{
private Stream _stream;
public ManagedIStream(Stream stream)
{
_stream = stream;
}
public void Read(byte[] buffer, int count, IntPtr pRead)
{
int read = _stream.Read(buffer, 0, count);
if (pRead != IntPtr.Zero)
{
Marshal.WriteInt32(pRead, read);
}
}
public void Seek(long offset, int origin, IntPtr newPosition)
{
long pos = _stream.Seek(offset, (SeekOrigin)origin);
if (newPosition != IntPtr.Zero)
{
Marshal.WriteInt64(newPosition, pos);
}
}
public void SetSize(long newSize)
{
_stream.SetLength(newSize);
}
public void Stat(out System.Runtime.InteropServices.ComTypes.STATSTG stg, int flags)
{
const int STGTY_STREAM = 2;
stg = new System.Runtime.InteropServices.ComTypes.STATSTG();
stg.type = STGTY_STREAM;
stg.cbSize = _stream.Length;
stg.grfMode = 0;
if (_stream.CanRead && _stream.CanWrite)
{
const int STGM_READWRITE = 0x00000002;
stg.grfMode |= STGM_READWRITE;
return;
}
if (_stream.CanRead)
{
const int STGM_READ = 0x00000000;
stg.grfMode |= STGM_READ;
return;
}
if (_stream.CanWrite)
{
const int STGM_WRITE = 0x00000001;
stg.grfMode |= STGM_WRITE;
return;
}
throw new IOException();
}
public void Write(byte[] buffer, int count, IntPtr written)
{
_stream.Write(buffer, 0, count);
if (written != IntPtr.Zero)
{
Marshal.WriteInt32(written, count);
}
}
public void Clone(out IStream ppstm) { throw new NotImplementedException(); }
public void Commit(int grfCommitFlags) { throw new NotImplementedException(); }
public void CopyTo(IStream pstm, long cb, IntPtr pcbRead, IntPtr pcbWritten) { throw new NotImplementedException(); }
public void LockRegion(long libOffset, long cb, int dwLockType) { throw new NotImplementedException(); }
public void Revert() { throw new NotImplementedException(); }
public void UnlockRegion(long libOffset, long cb, int dwLockType) { throw new NotImplementedException(); }
}
}
Related
I'm looking for .NET code which does the same as the Snipping Tool - capturing a screen area.
I believe it uses hooks. It would be interesting to know how it highlights the selected fragment.
Update:
Found http://www.codeproject.com/KB/vb/Screen_Shot.aspx . Though people say it's missing some important files needed for proper compilation.
The snipping tool effect isn't difficult to implement in Windows Forms. Add a new form to your project and name it "SnippingTool". Make the code look like this:
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;
namespace WindowsFormsApplication1 {
public partial class SnippingTool : Form {
public static Image Snip() {
var rc = Screen.PrimaryScreen.Bounds;
using (Bitmap bmp = new Bitmap(rc.Width, rc.Height, System.Drawing.Imaging.PixelFormat.Format32bppPArgb)) {
using (Graphics gr = Graphics.FromImage(bmp))
gr.CopyFromScreen(0, 0, 0, 0, bmp.Size);
using (var snipper = new SnippingTool(bmp)) {
if (snipper.ShowDialog() == DialogResult.OK) {
return snipper.Image;
}
}
return null;
}
}
public SnippingTool(Image screenShot) {
InitializeComponent();
this.BackgroundImage = screenShot;
this.ShowInTaskbar = false;
this.FormBorderStyle = FormBorderStyle.None;
this.WindowState = FormWindowState.Maximized;
this.DoubleBuffered = true;
}
public Image Image { get; set; }
private Rectangle rcSelect = new Rectangle();
private Point pntStart;
protected override void OnMouseDown(MouseEventArgs e) {
// Start the snip on mouse down
if (e.Button != MouseButtons.Left) return;
pntStart = e.Location;
rcSelect = new Rectangle(e.Location, new Size(0, 0));
this.Invalidate();
}
protected override void OnMouseMove(MouseEventArgs e) {
// Modify the selection on mouse move
if (e.Button != MouseButtons.Left) return;
int x1 = Math.Min(e.X, pntStart.X);
int y1 = Math.Min(e.Y, pntStart.Y);
int x2 = Math.Max(e.X, pntStart.X);
int y2 = Math.Max(e.Y, pntStart.Y);
rcSelect = new Rectangle(x1, y1, x2 - x1, y2 - y1);
this.Invalidate();
}
protected override void OnMouseUp(MouseEventArgs e) {
// Complete the snip on mouse-up
if (rcSelect.Width <= 0 || rcSelect.Height <= 0) return;
Image = new Bitmap(rcSelect.Width, rcSelect.Height);
using (Graphics gr = Graphics.FromImage(Image)) {
gr.DrawImage(this.BackgroundImage, new Rectangle(0, 0, Image.Width, Image.Height),
rcSelect, GraphicsUnit.Pixel);
}
DialogResult = DialogResult.OK;
}
protected override void OnPaint(PaintEventArgs e) {
// Draw the current selection
using (Brush br = new SolidBrush(Color.FromArgb(120, Color.White))) {
int x1 = rcSelect.X; int x2 = rcSelect.X + rcSelect.Width;
int y1 = rcSelect.Y; int y2 = rcSelect.Y + rcSelect.Height;
e.Graphics.FillRectangle(br, new Rectangle(0, 0, x1, this.Height));
e.Graphics.FillRectangle(br, new Rectangle(x2, 0, this.Width - x2, this.Height));
e.Graphics.FillRectangle(br, new Rectangle(x1, 0, x2 - x1, y1));
e.Graphics.FillRectangle(br, new Rectangle(x1, y2, x2 - x1, this.Height - y2));
}
using (Pen pen = new Pen(Color.Red, 3)) {
e.Graphics.DrawRectangle(pen, rcSelect);
}
}
protected override bool ProcessCmdKey(ref Message msg, Keys keyData) {
// Allow canceling the snip with the Escape key
if (keyData == Keys.Escape) this.DialogResult = DialogResult.Cancel;
return base.ProcessCmdKey(ref msg, keyData);
}
}
}
Usage:
var bmp = SnippingTool.Snip();
if (bmp != null) {
// Do something with the bitmap
//...
}
This is a modified version of @Hans's answer that is compatible with multiple monitors and works well with DPI scaling (tested on Windows 7 and Windows 10).
public sealed partial class SnippingTool : Form
{
public static event EventHandler Cancel;
public static event EventHandler AreaSelected;
public static Image Image { get; set; }
private static SnippingTool[] _forms;
private Rectangle _rectSelection;
private Point _pointStart;
public SnippingTool(Image screenShot, int x, int y, int width, int height)
{
InitializeComponent();
BackgroundImage = screenShot;
BackgroundImageLayout = ImageLayout.Stretch;
ShowInTaskbar = false;
FormBorderStyle = FormBorderStyle.None;
StartPosition = FormStartPosition.Manual;
SetBounds(x, y, width, height);
WindowState = FormWindowState.Maximized;
DoubleBuffered = true;
Cursor = Cursors.Cross;
TopMost = true;
}
private void OnCancel(EventArgs e)
{
Cancel?.Invoke(this, e);
}
private void OnAreaSelected(EventArgs e)
{
AreaSelected?.Invoke(this, e);
}
private void CloseForms()
{
for (int i = 0; i < _forms.Length; i++)
{
_forms[i].Dispose();
}
}
public static void Snip()
{
var screens = ScreenHelper.GetMonitorsInfo();
_forms = new SnippingTool[screens.Count];
for (int i = 0; i < screens.Count; i++)
{
int hRes = screens[i].HorizontalResolution;
int vRes = screens[i].VerticalResolution;
int top = screens[i].MonitorArea.Top;
int left = screens[i].MonitorArea.Left;
var bmp = new Bitmap(hRes, vRes, PixelFormat.Format32bppPArgb);
using (var g = Graphics.FromImage(bmp))
{
g.CopyFromScreen(left, top, 0, 0, bmp.Size);
}
_forms[i] = new SnippingTool(bmp, left, top, hRes, vRes);
_forms[i].Show();
}
}
#region Overrides
protected override void OnMouseDown(MouseEventArgs e)
{
// Start the snip on mouse down
if (e.Button != MouseButtons.Left)
{
return;
}
_pointStart = e.Location;
_rectSelection = new Rectangle(e.Location, new Size(0, 0));
Invalidate();
}
protected override void OnMouseMove(MouseEventArgs e)
{
// Modify the selection on mouse move
if (e.Button != MouseButtons.Left)
{
return;
}
int x1 = Math.Min(e.X, _pointStart.X);
int y1 = Math.Min(e.Y, _pointStart.Y);
int x2 = Math.Max(e.X, _pointStart.X);
int y2 = Math.Max(e.Y, _pointStart.Y);
_rectSelection = new Rectangle(x1, y1, x2 - x1, y2 - y1);
Invalidate();
}
protected override void OnMouseUp(MouseEventArgs e)
{
// Complete the snip on mouse-up
if (_rectSelection.Width <= 0 || _rectSelection.Height <= 0)
{
CloseForms();
OnCancel(new EventArgs());
return;
}
Image = new Bitmap(_rectSelection.Width, _rectSelection.Height);
var hScale = BackgroundImage.Width / (double)Width;
var vScale = BackgroundImage.Height / (double)Height;
using (Graphics gr = Graphics.FromImage(Image))
{
gr.DrawImage(BackgroundImage,
new Rectangle(0, 0, Image.Width, Image.Height),
new Rectangle((int)(_rectSelection.X * hScale), (int)(_rectSelection.Y * vScale), (int)(_rectSelection.Width * hScale), (int)(_rectSelection.Height * vScale)),
GraphicsUnit.Pixel);
}
CloseForms();
OnAreaSelected(new EventArgs());
}
protected override void OnPaint(PaintEventArgs e)
{
// Draw the current selection
using (Brush br = new SolidBrush(Color.FromArgb(120, Color.White)))
{
int x1 = _rectSelection.X;
int x2 = _rectSelection.X + _rectSelection.Width;
int y1 = _rectSelection.Y;
int y2 = _rectSelection.Y + _rectSelection.Height;
e.Graphics.FillRectangle(br, new Rectangle(0, 0, x1, Height));
e.Graphics.FillRectangle(br, new Rectangle(x2, 0, Width - x2, Height));
e.Graphics.FillRectangle(br, new Rectangle(x1, 0, x2 - x1, y1));
e.Graphics.FillRectangle(br, new Rectangle(x1, y2, x2 - x1, Height - y2));
}
using (Pen pen = new Pen(Color.Red, 2))
{
e.Graphics.DrawRectangle(pen, _rectSelection);
}
}
protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
{
// Allow canceling the snip with the Escape key
if (keyData == Keys.Escape)
{
Image = null;
CloseForms();
OnCancel(new EventArgs());
}
return base.ProcessCmdKey(ref msg, keyData);
}
#endregion
}
Usage:
SnippingTool.AreaSelected += OnAreaSelected;
SnippingTool.Snip();
private static void OnAreaSelected(object sender, EventArgs e)
{
var bmp = SnippingTool.Image;
// Do something with the bitmap
//...
}
Note you need a helper class to get the actual monitor resolution and avoid problems with DPI scaling.
This is the code:
public class DeviceInfo
{
public string DeviceName { get; set; }
public int VerticalResolution { get; set; }
public int HorizontalResolution { get; set; }
public Rectangle MonitorArea { get; set; }
}
public static class ScreenHelper
{
private const int DesktopVertRes = 117;
private const int DesktopHorzRes = 118;
[StructLayout(LayoutKind.Sequential)]
internal struct Rect
{
public int left;
public int top;
public int right;
public int bottom;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
internal struct MONITORINFOEX
{
public int Size;
public Rect Monitor;
public Rect WorkArea;
public uint Flags;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
public string DeviceName;
}
private delegate bool MonitorEnumDelegate(IntPtr hMonitor, IntPtr hdcMonitor, ref Rect lprcMonitor, IntPtr dwData);
[DllImport("user32.dll")]
private static extern bool EnumDisplayMonitors(IntPtr hdc, IntPtr lprcClip, MonitorEnumDelegate lpfnEnum, IntPtr dwData);
[DllImport("gdi32.dll")]
private static extern IntPtr CreateDC(string lpszDriver, string lpszDevice, string lpszOutput, IntPtr lpInitData);
[DllImport("user32.dll", CharSet = CharSet.Unicode)]
private static extern bool GetMonitorInfo(IntPtr hMonitor, ref MONITORINFOEX lpmi);
[DllImport("User32.dll")]
private static extern int ReleaseDC(IntPtr hwnd, IntPtr dc);
[DllImport("gdi32.dll")]
private static extern int GetDeviceCaps(IntPtr hdc, int nIndex);
private static List<DeviceInfo> _result;
public static List<DeviceInfo> GetMonitorsInfo()
{
_result = new List<DeviceInfo>();
EnumDisplayMonitors(IntPtr.Zero, IntPtr.Zero, MonitorEnum, IntPtr.Zero);
return _result;
}
private static bool MonitorEnum(IntPtr hMonitor, IntPtr hdcMonitor, ref Rect lprcMonitor, IntPtr dwData)
{
var mi = new MONITORINFOEX();
mi.Size = Marshal.SizeOf(typeof(MONITORINFOEX));
bool success = GetMonitorInfo(hMonitor, ref mi);
if (success)
{
var dc = CreateDC(mi.DeviceName, mi.DeviceName, null, IntPtr.Zero);
var di = new DeviceInfo
{
DeviceName = mi.DeviceName,
MonitorArea = new Rectangle(mi.Monitor.left, mi.Monitor.top, mi.Monitor.right - mi.Monitor.left, mi.Monitor.bottom - mi.Monitor.top),
VerticalResolution = GetDeviceCaps(dc, DesktopVertRes),
HorizontalResolution = GetDeviceCaps(dc, DesktopHorzRes)
};
// a DC created with CreateDC must be freed with DeleteDC, not ReleaseDC
DeleteDC(dc);
_result.Add(di);
}
return true;
}
}
Here is the complete source code.
It takes a full-screen screenshot, then (probably) copies it, applies the translucent effect, and displays it. When you click and drag, it can then overlay the corresponding region from the original capture.
You can get a screenshot using CopyFromScreen() or using the GDI API.
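A minimal CopyFromScreen sketch for the primary screen (single monitor, no DPI handling; the output path is illustrative):
var bounds = Screen.PrimaryScreen.Bounds;
using (var bmp = new Bitmap(bounds.Width, bounds.Height))
using (var g = Graphics.FromImage(bmp))
{
    // copy the whole primary screen into the bitmap
    g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
    bmp.Save("screenshot.png");
}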
I am reading a TiledExr file as follows, using C++ and OpenEXR:
std::vector<double> TiledExrDepthMapExtractor::extractDepthMap(const char* filePath,
int& width, int& height, ProgressFunc& progressCallback, ErrorFunc& errorCallback)
{
// Do some stuff then...
return pixelVector;
}
This gets called from C#
public class ExrDepthMapExtractor : IDepthMapExtractor
{
public RawDepthMap GetDepthMap(string filePath)
{
return ExtractRawDepthMap(filePath, Callbacks.ProgressCallback, Callbacks.ErrorCallback);
}
private unsafe RawDepthMap ExtractRawDepthMap(string filePath,
Callbacks.ProgressFunc progressCallback, Callbacks.ErrorFunc errorCallback)
{
using (InternalExtractRawDepthMapWrapper(filePath, out RawDepthMap rdm, progressCallback, errorCallback))
{
rdm.Source = filePath;
return rdm;
}
}
private unsafe PixelVectorSafeHandle InternalExtractRawDepthMapWrapper(
string filePath, out RawDepthMap rdm, Callbacks.ProgressFunc progressCallback, Callbacks.ErrorFunc errorCallback)
{
if (!NativeMethods.ExtractDepthMapAs1DArray(out PixelVectorSafeHandle pixelVectorHandle,
out double* pixels, out int width, out int height, filePath, progressCallback))
throw new FormatException($"Depth map could not be extracted from \"{filePath}\"");
var pixelList = new List<double>();
for (int i = 0; i < width * height; i++)
pixelList.Add(pixels[i]);
rdm = new RawDepthMap()
{
Source = filePath,
DepthMapArray = pixelList,
Height = height,
Width = width
};
return pixelVectorHandle;
}
}
Via an API
[SuppressUnmanagedCodeSecurity()]
internal static class NativeMethods
{
[DllImport("Blundergat.OpenExr.Adapter.dll", CallingConvention = CallingConvention.Cdecl)]
[return: MarshalAs(UnmanagedType.I1)]
internal static unsafe extern bool ExtractDepthMapAs1DArray(
out PixelVectorSafeHandle vectorHandle,
out double* points,
out int width,
out int height,
string filePath,
Callbacks.ProgressFunc progressCallback);
[DllImport("Blundergat.OpenExr.Adapter.dll", CallingConvention = CallingConvention.Cdecl)]
internal static unsafe extern bool Release(IntPtr itemsHandle);
}
and converted to a RawDepthMap
public class RawDepthMap
{
public string Source { get; set; }
public List<double> DepthMapArray { get; set; }
public int Height { get; set; }
public int Width { get; set; }
public override string ToString()
{
string source = !String.IsNullOrEmpty(Source) ? Source : "N/A";
StringBuilder builder = new StringBuilder($"RawDepthMap: Source \"{source}\"");
if (DepthMapArray != null)
builder.Append($", Array size {DepthMapArray.Count:N0}");
builder.Append($", Width {Width:N0}, Height {Height:N0}");
return builder.ToString();
}
}
This is a spherical depth map (it is a spherical panorama image), essentially a 1D array of depth measurements. From this, I convert to a Cartesian PointCloud using:
public class DepthMapToPointCloudAdapter : IDepthMapToPointCloudAdapter
{
public DepthMapToPointCloudAdapter() { }
public IPointCloud GetPointCloudFromDepthMap(RawDepthMap rdm)
{
// Do some transformations.
return new PointCloud(dataPoints.ToArray(), rdm.Source);
}
}
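The transformation is essentially the standard spherical-to-Cartesian mapping; a simplified sketch (assuming an equirectangular layout with rows spanning the polar angle [0, pi] and columns the azimuth [0, 2*pi); Point3D stands in for whatever point type PointCloud uses):
var dataPoints = new List<Point3D>();
for (int row = 0; row < rdm.Height; row++)
{
    double theta = Math.PI * row / (rdm.Height - 1); // polar angle
    for (int col = 0; col < rdm.Width; col++)
    {
        double phi = 2.0 * Math.PI * col / rdm.Width; // azimuth
        double r = rdm.DepthMapArray[row * rdm.Width + col];
        dataPoints.Add(new Point3D(
            r * Math.Sin(theta) * Math.Cos(phi),
            r * Math.Sin(theta) * Math.Sin(phi),
            r * Math.Cos(theta)));
    }
}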
This gives the following point cloud:
Now I need to read an image EXR file. I do this, again as above, using C++:
std::vector<double> ImageExrDepthMapExtractor::extractDepthMap(const char* filePath,
int& width, int& height, ProgressFunc& progressCallback, ErrorFunc& errorCallback)
{
// Do some reading...
return pixelVector;
}
The same conversion routine to a Cartesian PointCloud is used, but this time I get a spherical geometry.
This is clearly down to the way I am reading the image (RGBA) .exr file, but what exactly is causing it?
I need to rename the field "Overlay", not remove it.
I am trying to bind a native .jar library to a Xamarin DLL.
I created a new Binding Library project and included the jar file in it.
But when I try to build the solution, the output window shows this error:
'Overlay': member names cannot be the same as their enclosing type.
I tried to configure the Metadata.xml file in the following way:
<metadata>
<remove-node path="/api/package[#name='Com.Cdcom.Naviapps.Progorod']/class[#name='Overlay']/method[#name='Overlay']" />
</metadata>
or
<remove-node path="/api/package[#name='Com.Cdcom.Naviapps.Progorod']/class[#name='Overlay']" />
or I tried changing the enum method mappings:
<enum-method-mappings>
<mapping jni-class="/api/package[@name='com.cdcom.naviapps.progorod']/class[@name='Overlay']">
<method jni-name="Overlay" parameter="return" clr-enum-type="Android.OS.Overlay" />
</mapping>
</enum-method-mappings>
but then I get another error:
"generator.exe" exited with code -532462766.
You can see the class from the first error below:
// Metadata.xml XPath class reference: path="/api/package[@name='com.cdcom.naviapps.progorod']/class[@name='Overlay']"
[global::Android.Runtime.Register ("com/cdcom/naviapps/progorod/Overlay", DoNotGenerateAcw=true)]
public partial class Overlay : global::Java.Lang.Object {
//region "Event implementation for Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListener"
public event EventHandler<global::Com.Cdcom.Naviapps.Progorod.Overlay.OverlayEventArgs> Overlay {
add {
global::Java.Interop.EventHelper.AddEventHandler<global::Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListener, global::Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListenerImplementor>(
ref weak_implementor___SetOnOverlayListener,
__CreateIOnOverlayListenerImplementor,
__v => OnOverlayListener = __v,
__h => __h.Handler += value);
}
remove {
global::Java.Interop.EventHelper.RemoveEventHandler<global::Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListener, global::Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListenerImplementor>(
ref weak_implementor___SetOnOverlayListener,
global::Com.Cdcom.Naviapps.Progorod.Overlay.IOnOverlayListenerImplementor.__IsEmpty,
__v => OnOverlayListener = null,
__h => __h.Handler -= value);
}
}
//endregion
}
Original Java code:
public class Overlay
{
public List<OverlayItem> getItems()
{
return this.mOverlayItems;
}
public OnOverlayListener getOnOverlayListener()
{
return this.mOverlayListener;
}
public void populate()
{
double[] latlon = new double[this.mOverlayItems.size() * 2];
int d = 0;
for (OverlayItem oi : this.mOverlayItems)
{
latlon[(d++)] = oi.getGeoPoint().getLatitude();
latlon[(d++)] = oi.getGeoPoint().getLongitude();
}
Native.populateOverlay(this.mId, latlon);
}
public void setBitmap(Bitmap bitmap, float xOffset, float yOffset, boolean isPlain, int sizeInMeters)
{
int width = 0;
int height = 0;
int[] pixels = null;
if (bitmap != null)
{
width = bitmap.getWidth();
height = bitmap.getHeight();
pixels = new int[width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
}
Native.setOverlayBitmap(this.mId, width, height, pixels, xOffset, yOffset, isPlain, sizeInMeters);
}
public void setOnOverlayListener(OnOverlayListener listener)
{
this.mOverlayListener = listener;
}
public static int SPECIAL_OVERLAY_START_ROUTE = -1;
public static int SPECIAL_OVERLAY_FINISH_ROUTE = -2;
public static int SPECIAL_OVERLAY_ROUTE_SPRITE = -3;
public static int SPECIAL_OVERLAY_GEOBLOG_SPRITE = -4;
private static int SPECIAL_OVERLAYS_COUNT = 4;
public static Overlay specialOverlay(int id)
{
if (mSpecialOverlays[(id + SPECIAL_OVERLAYS_COUNT)] == null)
{
mSpecialOverlays[(id + SPECIAL_OVERLAYS_COUNT)] = new Overlay();
mSpecialOverlays[(id + SPECIAL_OVERLAYS_COUNT)].mId = id;
}
return mSpecialOverlays[(id + SPECIAL_OVERLAYS_COUNT)];
}
protected int getId()
{
return this.mId;
}
protected int mId = mNextId++;
private static int mNextId = 1;
private List<OverlayItem> mOverlayItems = new ArrayList();
private OnOverlayListener mOverlayListener;
private static Overlay[] mSpecialOverlays = new Overlay[SPECIAL_OVERLAYS_COUNT];
public static abstract interface OnOverlayListener
{
public abstract void onOverlayEvent(Overlay paramOverlay, OverlayItem paramOverlayItem);
}
}
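For reference, renaming a generated member or class (rather than removing it) is normally done in Metadata.xml with an attr rule that sets managedName. A sketch, assuming the XPath actually matches the generated api.xml:
<metadata>
  <!-- illustrative: rename the generated C# class so that members named
       "Overlay" no longer collide with their enclosing type -->
  <attr path="/api/package[@name='com.cdcom.naviapps.progorod']/class[@name='Overlay']"
        name="managedName">ProgorodOverlay</attr>
</metadata>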
I want to build an application with Monodroid to show a live video stream from an IP camera (in MJPEG format) on my tablet. After digging through the internet I found an MJPEG library project written in Java, from here. It has two files, MjpegView.java and MjpegInputStream.java, which I both include here:
MjpegView.java
package de.mjpegsample.MjpegView;
import java.io.IOException;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.graphics.Rect;
import android.graphics.Typeface;
import android.util.AttributeSet;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
public class MjpegView extends SurfaceView implements SurfaceHolder.Callback {
public final static int POSITION_UPPER_LEFT = 9;
public final static int POSITION_UPPER_RIGHT = 3;
public final static int POSITION_LOWER_LEFT = 12;
public final static int POSITION_LOWER_RIGHT = 6;
public final static int SIZE_STANDARD = 1;
public final static int SIZE_BEST_FIT = 4;
public final static int SIZE_FULLSCREEN = 8;
private MjpegViewThread thread;
private MjpegInputStream mIn = null;
private boolean showFps = false;
private boolean mRun = false;
private boolean surfaceDone = false;
private Paint overlayPaint;
private int overlayTextColor;
private int overlayBackgroundColor;
private int ovlPos;
private int dispWidth;
private int dispHeight;
private int displayMode;
public class MjpegViewThread extends Thread {
private SurfaceHolder mSurfaceHolder;
private int frameCounter = 0;
private long start;
private Bitmap ovl;
public MjpegViewThread(SurfaceHolder surfaceHolder, Context context) { mSurfaceHolder = surfaceHolder; }
private Rect destRect(int bmw, int bmh) {
int tempx;
int tempy;
if (displayMode == MjpegView.SIZE_STANDARD) {
tempx = (dispWidth / 2) - (bmw / 2);
tempy = (dispHeight / 2) - (bmh / 2);
return new Rect(tempx, tempy, bmw + tempx, bmh + tempy);
}
if (displayMode == MjpegView.SIZE_BEST_FIT) {
float bmasp = (float) bmw / (float) bmh;
bmw = dispWidth;
bmh = (int) (dispWidth / bmasp);
if (bmh > dispHeight) {
bmh = dispHeight;
bmw = (int) (dispHeight * bmasp);
}
tempx = (dispWidth / 2) - (bmw / 2);
tempy = (dispHeight / 2) - (bmh / 2);
return new Rect(tempx, tempy, bmw + tempx, bmh + tempy);
}
if (displayMode == MjpegView.SIZE_FULLSCREEN) return new Rect(0, 0, dispWidth, dispHeight);
return null;
}
public void setSurfaceSize(int width, int height) {
synchronized(mSurfaceHolder) {
dispWidth = width;
dispHeight = height;
}
}
private Bitmap makeFpsOverlay(Paint p, String text) {
Rect b = new Rect();
p.getTextBounds(text, 0, text.length(), b);
int bwidth = b.width()+2;
int bheight = b.height()+2;
Bitmap bm = Bitmap.createBitmap(bwidth, bheight, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bm);
p.setColor(overlayBackgroundColor);
c.drawRect(0, 0, bwidth, bheight, p);
p.setColor(overlayTextColor);
c.drawText(text, -b.left+1, (bheight/2)-((p.ascent()+p.descent())/2)+1, p);
return bm;
}
public void run() {
start = System.currentTimeMillis();
PorterDuffXfermode mode = new PorterDuffXfermode(PorterDuff.Mode.DST_OVER);
Bitmap bm;
int width;
int height;
Rect destRect;
Canvas c = null;
Paint p = new Paint();
String fps = "";
while (mRun) {
if(surfaceDone) {
try {
c = mSurfaceHolder.lockCanvas();
synchronized (mSurfaceHolder) {
try {
bm = mIn.readMjpegFrame();
destRect = destRect(bm.getWidth(),bm.getHeight());
c.drawColor(Color.BLACK);
c.drawBitmap(bm, null, destRect, p);
if(showFps) {
p.setXfermode(mode);
if(ovl != null) {
height = ((ovlPos & 1) == 1) ? destRect.top : destRect.bottom-ovl.getHeight();
width = ((ovlPos & 8) == 8) ? destRect.left : destRect.right -ovl.getWidth();
c.drawBitmap(ovl, width, height, null);
}
p.setXfermode(null);
frameCounter++;
if((System.currentTimeMillis() - start) >= 1000) {
fps = String.valueOf(frameCounter)+"fps";
frameCounter = 0;
start = System.currentTimeMillis();
ovl = makeFpsOverlay(overlayPaint, fps);
}
}
} catch (IOException e) {}
}
} finally { if (c != null) mSurfaceHolder.unlockCanvasAndPost(c); }
}
}
}
}
private void init(Context context) {
SurfaceHolder holder = getHolder();
holder.addCallback(this);
thread = new MjpegViewThread(holder, context);
setFocusable(true);
overlayPaint = new Paint();
overlayPaint.setTextAlign(Paint.Align.LEFT);
overlayPaint.setTextSize(12);
overlayPaint.setTypeface(Typeface.DEFAULT);
overlayTextColor = Color.WHITE;
overlayBackgroundColor = Color.BLACK;
ovlPos = MjpegView.POSITION_LOWER_RIGHT;
displayMode = MjpegView.SIZE_STANDARD;
dispWidth = getWidth();
dispHeight = getHeight();
}
public void startPlayback() {
if(mIn != null) {
mRun = true;
thread.start();
}
}
public void stopPlayback() {
mRun = false;
boolean retry = true;
while(retry) {
try {
thread.join();
retry = false;
} catch (InterruptedException e) {}
}
}
public MjpegView(Context context, AttributeSet attrs) { super(context, attrs); init(context); }
public void surfaceChanged(SurfaceHolder holder, int f, int w, int h) { thread.setSurfaceSize(w, h); }
public void surfaceDestroyed(SurfaceHolder holder) {
surfaceDone = false;
stopPlayback();
}
public MjpegView(Context context) { super(context); init(context); }
public void surfaceCreated(SurfaceHolder holder) { surfaceDone = true; }
public void showFps(boolean b) { showFps = b; }
public void setSource(MjpegInputStream source) { mIn = source; startPlayback();}
public void setOverlayPaint(Paint p) { overlayPaint = p; }
public void setOverlayTextColor(int c) { overlayTextColor = c; }
public void setOverlayBackgroundColor(int c) { overlayBackgroundColor = c; }
public void setOverlayPosition(int p) { ovlPos = p; }
public void setDisplayMode(int s) { displayMode = s; }
}
MjpegInputStream.java
package de.mjpegsample.MjpegView;
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.util.Properties;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
public class MjpegInputStream extends DataInputStream {
private final byte[] SOI_MARKER = { (byte) 0xFF, (byte) 0xD8 };
private final byte[] EOF_MARKER = { (byte) 0xFF, (byte) 0xD9 };
private final String CONTENT_LENGTH = "Content-Length";
private final static int HEADER_MAX_LENGTH = 100;
private final static int FRAME_MAX_LENGTH = 40000 + HEADER_MAX_LENGTH;
private int mContentLength = -1;
public static MjpegInputStream read(String url) {
HttpResponse res;
DefaultHttpClient httpclient = new DefaultHttpClient();
try {
res = httpclient.execute(new HttpGet(URI.create(url)));
return new MjpegInputStream(res.getEntity().getContent());
} catch (ClientProtocolException e) {
} catch (IOException e) {}
return null;
}
public MjpegInputStream(InputStream in) { super(new BufferedInputStream(in, FRAME_MAX_LENGTH)); }
private int getEndOfSeqeunce(DataInputStream in, byte[] sequence) throws IOException {
int seqIndex = 0;
byte c;
for(int i=0; i < FRAME_MAX_LENGTH; i++) {
c = (byte) in.readUnsignedByte();
if(c == sequence[seqIndex]) {
seqIndex++;
if(seqIndex == sequence.length) return i + 1;
} else seqIndex = 0;
}
return -1;
}
private int getStartOfSequence(DataInputStream in, byte[] sequence) throws IOException {
int end = getEndOfSeqeunce(in, sequence);
return (end < 0) ? (-1) : (end - sequence.length);
}
private int parseContentLength(byte[] headerBytes) throws IOException, NumberFormatException {
ByteArrayInputStream headerIn = new ByteArrayInputStream(headerBytes);
Properties props = new Properties();
props.load(headerIn);
return Integer.parseInt(props.getProperty(CONTENT_LENGTH));
}
public Bitmap readMjpegFrame() throws IOException {
mark(FRAME_MAX_LENGTH);
int headerLen = getStartOfSequence(this, SOI_MARKER);
reset();
byte[] header = new byte[headerLen];
readFully(header);
try {
mContentLength = parseContentLength(header);
} catch (NumberFormatException nfe) {
mContentLength = getEndOfSeqeunce(this, EOF_MARKER);
}
reset();
byte[] frameData = new byte[mContentLength];
skipBytes(headerLen);
readFully(frameData);
return BitmapFactory.decodeStream(new ByteArrayInputStream(frameData));
}
}
So I converted that (actually created a C# wrapper) with a Binding Library project, and I followed the sample code tutorial of this project, as shown below.
The sample itself:
public class MjpegSample extends Activity {
private MjpegView mv;
public void onCreate(Bundle icicle) {
super.onCreate(icicle);
//sample public cam
String URL = "http://webcam5.hrz.tu-darmstadt.de/axis-cgi/mjpg/video.cgi?resolution=320x240";
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
mv = new MjpegView(this);
setContentView(mv);
mv.setSource(MjpegInputStream.read(URL));
mv.setDisplayMode(MjpegView.SIZE_BEST_FIT);
mv.showFps(true);
}
What I have done in Monodroid:
namespace AndroidApplication8
{
[Activity(Label = "AndroidApplication8", MainLauncher = true, Icon = "#drawable/icon")]
public class Activity1 : Activity
{
int count = 1;
protected override void OnCreate(Bundle bundle)
{
base.OnCreate(bundle);
String URL = "rtsp://192.168.1.3/Mjpeg/video.cgi";
var mv = new MjpegView(this);
SetContentView(mv);
**mv.SetSource(MjpegInputStream.Read(URL));
mv.SetDisplayMode(MjpegView.SizeBestFit);
mv.StartPlayback();
}
}
}
but it gives me an error on the line indicated with ** when it tries to execute MjpegInputStream.Read(),
and it just jumps into the class converted from the native Java files without any more information.
You should check your video type. For example, if your video stream is compressed at the source (before it gets to your Android device), you need to decode it before displaying it. You could, for example, write code in Java to verify the incoming stream from the camera first (don't use the built-in browser of Android) and then decode it manually.
Good luck!
I have an IP camera that sends a char buffer containing an image over the network. I can't access it until I set up the connection to it in a program. I am trying to dissect the Windows source filter code, and I'm not getting very far, so I thought I'd ask if it is possible to just take a buffer like that and cast it to something that could then connect a pin to an AVI Splitter or such in DirectShow/.NET.
(video buffer from IP Cam) -> (???) -> (AVI Splitter) -> (Profit)
Update
I have my program capturing video in one namespace, and I have this code from the GSSF in its own namespace. I pass a pointer with an image from the camera namespace to the GSSF namespace. This only occurs once, so the graph streams from this single image while the camera keeps streaming from the network. Is there a way to continually pass the buffer from the camera to the GSSF, or should I combine the namespaces somehow? I tried sending the main camera pointer to the GSSF, but it crashed, because the GSSF reads the pointer while it is being written. Maybe I should grab an image, pass the pointer, and wait to grab a new one?
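Something like this lock-protected handoff is what I have in mind (a sketch; the names and sizes are illustrative):
static readonly object frameLock = new object();
static byte[] latestFrame = new byte[1280 * 720 * 2];

// camera thread: publish a consistent copy of the freshly filled buffer
static void PublishFrame(IntPtr camBuffer, int byteCount)
{
    lock (frameLock)
    {
        Marshal.Copy(camBuffer, latestFrame, 0, byteCount);
    }
}

// graph thread (e.g. inside GetImage): read a consistent snapshot
static void ReadFrame(byte[] dest)
{
    lock (frameLock)
    {
        Buffer.BlockCopy(latestFrame, 0, dest, 0, dest.Length);
    }
}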
Update
I shrank my code, and I don't believe I'm handling the namespaces correctly either, now that I look at it.
namespace Cam_Controller
{
static byte[] mainbyte = new byte[1280*720*2];
static IntPtr main_ptr = new IntPtr();
//(this function is threaded)
static void Trial(NPvBuffer mBuffer, NPvDisplayWnd mDisplayWnd, VideoCompression compressor)
{
Functions function = new Functions();
Defines define = new Defines();
NPvResult operationalResult = new NPvResult();
VideoCompression mcompressor = new VideoCompression();
int framecount = 0;
while (!Stopping && AcquiringImages)
{
Mutex lock_video = new Mutex();
NPvResult result = mDevice.RetrieveNextBuffer(mBuffer, operationalResult);
if(result.isOK())
{
framecount++;
wer = (int)mDisplayWnd.Display(mBuffer, wer);
main_ptr = (IntPtr)mBuffer.GetMarshalledBuffer();
Marshal.Copy(main_ptr, mainbyte, 0, 720 * 2560);
}
}
}
private void button7_Click(object sender, EventArgs e)
{
IntPtr dd = (IntPtr)mBuffer.GetMarshalledBuffer();
Marshal.Copy(dd, main_byte1, 0, 720 * 2560);
play = new VisiCam_Controller.DxPlay.DxPlay("", panel9, main_byte1);
play.Start();
}
namespace DxPlay
{
public class DxPlay
{
public DxPlay(string sPath, Control hWin, byte[] color)
{
try
{
// pick one of our image providers
//m_ImageHandler = new ImageFromFiles(sPath, 24);
m_ImageHandler = new ImageFromPixels(20, color);
//m_ImageHandler = new ImageFromMpg(#"c:\c1.mpg");
//m_ImageHandler = new ImageFromMpg(sPath);
//m_ImageHandler = new ImageFromMP3(#"c:\vss\media\track3.mp3");
// Set up the graph
SetupGraph(hWin);
}
catch
{
Dispose();
throw;
}
}
}
abstract internal class imagehandler
internal class imagefrompixels
{
private int[] mainint = new int[720 * 1280];
unsafe public ImageFromPixels(long FPS, byte[] x)
{
long fff = 720 * 1280 * 3;
mainptr = new IntPtr(fff);
for (int p = 0; p < 720 * 640; p++)
{
U = (x[ p * 4 + 0]);
Y = (x[p * 4 + 1]);
V = (x[p * 4 + 2]);
Y2 = (x[p * 4 + 3]);
int one = V << 16 | Y << 8 | U;
int two = V << 16 | Y2 << 8 | U;
mainint[p * 2 + 0] = one;
mainint[p * 2 + 1] = two;
}
m_FPS = UNIT / FPS;
m_b = 211;
m_g = 197;
}
}
}
}
There's also GetImage, but that's relatively the same: copy the buffer into the pointer. What happens is that I grab a buffer of the image and send it to the DxPlay class. It is able to process it and put it on the DirectShow line with no problems; but it never updates, nor gets updated, because it's just a single buffer. If I instead send DxPlay an IntPtr holding the address of the image buffer, the code crashes for accessing memory, because I assume the ImageFromPixels code (which isn't there now; change
(x[p * 4 + #])
to
(IntPtr)((x-passed as an IntPtr).toInt64()+p*4 + #)
)
is accessing the memory of the pointer while the Cam_Controller class is editing it. I make and pass copies of the IntPtrs, and new IntPtrs, but they fail halfway through the conversion.
If you want to do this in .NET, the following steps are needed:
Use the DirectShow.NET Generic Sample Source Filter (GSSF.AX) from the Misc/GSSF directory within the sample package. A source filter is always a COM module, so you need to register it too using "RegSvr32 GSSF.ax".
Implement a bitmap provider in .NET
Setup a graph, and connect the pin from the GSSF to the implementation of the bitmap provider.
Pray.
I am using the following within a project, and have made it reusable for future usage.
The code (not the best, and not finished, but a working start; this takes an IVideoSource, which is below):
public class VideoSourceToVideo : IDisposable
{
object locker = new object();
public event EventHandler<EventArgs> Starting;
public event EventHandler<EventArgs> Stopping;
public event EventHandler<EventArgs> Completed;
/// <summary> graph builder interface. </summary>
private DirectShowLib.ICaptureGraphBuilder2 captureGraphBuilder = null;
DirectShowLib.IMediaControl mediaCtrl = null;
IMediaEvent mediaEvent = null;
bool stopMediaEventLoop = false;
Thread mediaEventThread;
/// <summary> Dimensions of the image, calculated once in constructor. </summary>
private readonly VideoInfoHeader videoInfoHeader;
IVideoSource source;
public VideoSourceToVideo(IVideoSource source, string destFilename, string encoderName)
{
try
{
this.source = source;
// Set up the capture graph
SetupGraph(destFilename, encoderName);
}
catch
{
Dispose();
throw;
}
}
/// <summary> release everything. </summary>
public void Dispose()
{
StopMediaEventLoop();
CloseInterfaces();
}
/// <summary> build the capture graph for grabber. </summary>
private void SetupGraph(string destFilename, string encoderName)
{
int hr;
// Get the graphbuilder object
captureGraphBuilder = new DirectShowLib.CaptureGraphBuilder2() as DirectShowLib.ICaptureGraphBuilder2;
IFilterGraph2 filterGraph = new DirectShowLib.FilterGraph() as DirectShowLib.IFilterGraph2;
mediaCtrl = filterGraph as DirectShowLib.IMediaControl;
IMediaFilter mediaFilt = filterGraph as IMediaFilter;
mediaEvent = filterGraph as IMediaEvent;
captureGraphBuilder.SetFiltergraph(filterGraph);
IBaseFilter aviMux;
IFileSinkFilter fileSink = null;
hr = captureGraphBuilder.SetOutputFileName(MediaSubType.Avi, destFilename, out aviMux, out fileSink);
DsError.ThrowExceptionForHR(hr);
DirectShowLib.IBaseFilter compressor = DirectShowUtils.GetVideoCompressor(encoderName);
if (compressor == null)
{
throw new InvalidCodecException(encoderName);
}
hr = filterGraph.AddFilter(compressor, "compressor");
DsError.ThrowExceptionForHR(hr);
// Our data source
IBaseFilter source = (IBaseFilter)new GenericSampleSourceFilter();
// Get the pin from the filter so we can configure it
IPin ipin = DsFindPin.ByDirection(source, PinDirection.Output, 0);
try
{
// Configure the pin using the provided BitmapInfo
ConfigurePusher((IGenericSampleConfig)ipin);
}
finally
{
Marshal.ReleaseComObject(ipin);
}
// Add the filter to the graph
hr = filterGraph.AddFilter(source, "GenericSampleSourceFilter");
Marshal.ThrowExceptionForHR(hr);
hr = filterGraph.AddFilter(source, "source");
DsError.ThrowExceptionForHR(hr);
hr = captureGraphBuilder.RenderStream(null, null, source, compressor, aviMux);
DsError.ThrowExceptionForHR(hr);
IMediaPosition mediaPos = filterGraph as IMediaPosition;
hr = mediaCtrl.Run();
DsError.ThrowExceptionForHR(hr);
}
private void ConfigurePusher(IGenericSampleConfig ips)
{
int hr;
source.SetMediaType(ips);
// Specify the callback routine to call with each sample
hr = ips.SetBitmapCB(source);
DsError.ThrowExceptionForHR(hr);
}
private void StartMediaEventLoop()
{
mediaEventThread = new Thread(MediaEventLoop)
{
Name = "Offscreen Vid Player Medialoop",
IsBackground = false
};
mediaEventThread.Start();
}
private void StopMediaEventLoop()
{
stopMediaEventLoop = true;
if (mediaEventThread != null)
{
mediaEventThread.Join();
}
}
public void MediaEventLoop()
{
MediaEventLoop(x => PercentageCompleted = x);
}
public double PercentageCompleted
{
get;
private set;
}
// FIXME this needs some work, to be completely in-tune with needs.
public void MediaEventLoop(Action<double> UpdateProgress)
{
mediaEvent.CancelDefaultHandling(EventCode.StateChange);
//mediaEvent.CancelDefaultHandling(EventCode.Starvation);
while (stopMediaEventLoop == false)
{
try
{
EventCode ev;
IntPtr p1, p2;
if (mediaEvent.GetEvent(out ev, out p1, out p2, 0) == 0)
{
switch (ev)
{
case EventCode.Complete:
Stopping.Fire(this, null);
if (UpdateProgress != null)
{
UpdateProgress(source.PercentageCompleted);
}
return;
case EventCode.StateChange:
FilterState state = (FilterState)p1.ToInt32();
if (state == FilterState.Stopped || state == FilterState.Paused)
{
Stopping.Fire(this, null);
}
else if (state == FilterState.Running)
{
Starting.Fire(this, null);
}
break;
// FIXME add abort and stuff, and propagate this.
}
// Trace.WriteLine(ev.ToString() + " " + p1.ToInt32());
mediaEvent.FreeEventParams(ev, p1, p2);
}
else
{
if (UpdateProgress != null)
{
UpdateProgress(source.PercentageCompleted);
}
// FiXME use AutoResetEvent
Thread.Sleep(100);
}
}
catch (Exception e)
{
Trace.WriteLine("MediaEventLoop: " + e);
}
}
}
/// <summary> Shut down capture </summary>
private void CloseInterfaces()
{
int hr;
try
{
if (mediaCtrl != null)
{
// Stop the graph
hr = mediaCtrl.Stop();
mediaCtrl = null;
}
}
catch (Exception ex)
{
Debug.WriteLine(ex);
}
if (captureGraphBuilder != null)
{
Marshal.ReleaseComObject(captureGraphBuilder);
captureGraphBuilder = null;
}
GC.Collect();
}
public void Start()
{
StartMediaEventLoop();
}
}
IVideoSource:
public interface IVideoSource : IGenericSampleCB
{
double PercentageCompleted { get; }
int GetImage(int iFrameNumber, IntPtr ip, int iSize, out int iRead);
void SetMediaType(global::IPerform.Video.Conversion.Interops.IGenericSampleConfig psc);
int SetTimeStamps(global::DirectShowLib.IMediaSample pSample, int iFrameNumber);
}
ImageVideoSource (mostly taken from DirectShow.NET examples):
// A generic class to support easily changing between my different sources of data.
// Note: You DON'T have to use this class, or anything like it. The key is the SampleCallback
// routine. How/where you get your bitmaps is ENTIRELY up to you. Having SampleCallback call
// members of this class was just the approach I used to isolate the data handling.
public abstract class ImageVideoSource : IDisposable, IVideoSource
{
#region Definitions
/// <summary>
/// 100 ns - used by a number of DS methods
/// </summary>
private const long UNIT = 10000000;
#endregion
/// <summary>
/// Number of callbacks that returned a positive result
/// </summary>
private int m_iFrameNumber = 0;
virtual public void Dispose()
{
}
public abstract double PercentageCompleted { get; protected set; }
abstract public void SetMediaType(IGenericSampleConfig psc);
abstract public int GetImage(int iFrameNumber, IntPtr ip, int iSize, out int iRead);
virtual public int SetTimeStamps(IMediaSample pSample, int iFrameNumber)
{
return 0;
}
/// <summary>
/// Called by the GenericSampleSourceFilter. This routine populates the MediaSample.
/// </summary>
/// <param name="pSample">Pointer to a sample</param>
/// <returns>0 = success, 1 = end of stream, negative values for errors</returns>
virtual public int SampleCallback(IMediaSample pSample)
{
int hr;
IntPtr pData;
try
{
// Get the buffer into which we will copy the data
hr = pSample.GetPointer(out pData);
if (hr >= 0)
{
// Set TRUE on every sample for uncompressed frames
hr = pSample.SetSyncPoint(true);
if (hr >= 0)
{
// Find out the amount of space in the buffer
int cbData = pSample.GetSize();
hr = SetTimeStamps(pSample, m_iFrameNumber);
if (hr >= 0)
{
int iRead;
// Get copy the data into the sample
hr = GetImage(m_iFrameNumber, pData, cbData, out iRead);
if (hr == 0) // 1 == End of stream
{
pSample.SetActualDataLength(iRead);
// increment the frame number for next time
m_iFrameNumber++;
}
}
}
}
}
finally
{
// Release our pointer to the media sample. THIS IS ESSENTIAL! If
// you don't do this, the graph will stop after about 2 samples.
Marshal.ReleaseComObject(pSample);
}
return hr;
}
}
RawVideoSource (an example of a concrete managed source generator for a DirectShow pipeline):
internal class RawVideoSource : ImageVideoSource
{
private byte[] buffer;
private byte[] demosaicBuffer;
private RawVideoReader reader;
public override double PercentageCompleted
{
get;
protected set;
}
public RawVideoSource(string sourceFile)
{
reader = new RawVideoReader(sourceFile);
}
override public void SetMediaType(IGenericSampleConfig psc)
{
BitmapInfoHeader bmi = new BitmapInfoHeader();
bmi.Size = Marshal.SizeOf(typeof(BitmapInfoHeader));
bmi.Width = reader.Header.VideoSize.Width;
bmi.Height = reader.Header.VideoSize.Height;
bmi.Planes = 1;
bmi.BitCount = 24;
bmi.Compression = 0;
bmi.ImageSize = (bmi.BitCount / 8) * bmi.Width * bmi.Height;
bmi.XPelsPerMeter = 0;
bmi.YPelsPerMeter = 0;
bmi.ClrUsed = 0;
bmi.ClrImportant = 0;
int hr = psc.SetMediaTypeFromBitmap(bmi, 0);
buffer = new byte[reader.Header.FrameSize];
demosaicBuffer = new byte[reader.Header.FrameSize * 3];
DsError.ThrowExceptionForHR(hr);
}
long startFrameTime;
long endFrameTime;
unsafe override public int GetImage(int iFrameNumber, IntPtr ip, int iSize, out int iRead)
{
int hr = 0;
if (iFrameNumber < reader.Header.NumberOfFrames)
{
reader.ReadFrame(buffer, iFrameNumber, out startFrameTime, out endFrameTime);
Demosaic.DemosaicGBGR24Bilinear(buffer, demosaicBuffer, reader.Header.VideoSize);
Marshal.Copy(demosaicBuffer, 0, ip, reader.Header.FrameSize * 3);
PercentageCompleted = ((double)iFrameNumber / reader.Header.NumberOfFrames) * 100.0;
}
else
{
PercentageCompleted = 100;
hr = 1; // End of stream
}
iRead = iSize;
return hr;
}
override public int SetTimeStamps(IMediaSample pSample, int iFrameNumber)
{
reader.ReadTimeStamps(iFrameNumber, out startFrameTime, out endFrameTime);
DsLong rtStart = new DsLong(startFrameTime);
DsLong rtStop = new DsLong(endFrameTime);
int hr = pSample.SetTime(rtStart, rtStop);
return hr;
}
}
And the interop declarations for the GSSF.AX COM component:
namespace IPerform.Video.Conversion.Interops
{
[ComImport, Guid("6F7BCF72-D0C2-4449-BE0E-B12F580D056D")]
public class GenericSampleSourceFilter
{
}
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown),
Guid("33B9EE57-1067-45fa-B12D-C37517F09FC0")]
public interface IGenericSampleCB
{
[PreserveSig]
int SampleCallback(IMediaSample pSample);
}
[Guid("CE50FFF9-1BA8-4788-8131-BDE7D4FFC27F"),
InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IGenericSampleConfig
{
[PreserveSig]
int SetMediaTypeFromBitmap(BitmapInfoHeader bmi, long lFPS);
[PreserveSig]
int SetMediaType([MarshalAs(UnmanagedType.LPStruct)] AMMediaType amt);
[PreserveSig]
int SetMediaTypeEx([MarshalAs(UnmanagedType.LPStruct)] AMMediaType amt, int lBufferSize);
[PreserveSig]
int SetBitmapCB(IGenericSampleCB pfn);
}
}
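For completeness, a hypothetical usage sketch (the input file name and the encoder name are assumptions; use whatever encoder is actually installed on your machine):
var source = new RawVideoSource("input.raw");
using (var conversion = new VideoSourceToVideo(source, "output.avi", "MJPEG Compressor"))
{
    var done = new ManualResetEvent(false);
    conversion.Stopping += (s, e) => done.Set();
    conversion.Start();
    done.WaitOne(); // wait until the graph reports that it has stopped
}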
Good luck, and try to get it working using this, or comment with further questions so we can iron out other issues.