SharpDX 3 loading .DDS to apply onto a 3d model (C#) - c#

I'm attempting to create a model viewer for a game to try and learn SharpDX, but the game uses .DDS files and my viewer can only read .BMPs. I've looked far and wide on the web, and the loaders I can find don't seem to work with SharpDX (I don't know, I'm a noob :D).
using SharpDX.Direct3D11;
using SharpDX.WIC;

namespace ModelViewer.Programming.GraphicClasses
{
    public class TextureClass
    {
        public ShaderResourceView TextureResource { get; private set; }

        public bool Init(Device device, string fileName)
        {
            try
            {
                using (var texture = LoadFromFile(device, new ImagingFactory(), fileName))
                {
                    ShaderResourceViewDescription srvDesc = new ShaderResourceViewDescription()
                    {
                        Format = texture.Description.Format,
                        Dimension = SharpDX.Direct3D.ShaderResourceViewDimension.Texture2D,
                    };
                    srvDesc.Texture2D.MostDetailedMip = 0;
                    srvDesc.Texture2D.MipLevels = -1;
                    TextureResource = new ShaderResourceView(device, texture, srvDesc);
                    device.ImmediateContext.GenerateMips(TextureResource);
                }
                return true;
            }
            catch
            {
                return false;
            }
        }

        public void Shutdown()
        {
            TextureResource?.Dispose();
            TextureResource = null;
        }

        public Texture2D LoadFromFile(Device device, ImagingFactory factory, string fileName)
        {
            using (var bs = LoadBitmap(factory, fileName))
                return CreateTextureFromBitmap(device, bs);
        }

        public BitmapSource LoadBitmap(ImagingFactory factory, string filename)
        {
            var bitmapDecoder = new BitmapDecoder(factory, filename, DecodeOptions.CacheOnDemand);
            var result = new FormatConverter(factory);
            result.Initialize(bitmapDecoder.GetFrame(0), SharpDX.WIC.PixelFormat.Format32bppPRGBA, BitmapDitherType.None, null, 0.0, BitmapPaletteType.Custom);
            return result;
        }

        public Texture2D CreateTextureFromBitmap(Device device, BitmapSource bitmapSource)
        {
            int stride = bitmapSource.Size.Width * 4;
            using (var buffer = new SharpDX.DataStream(bitmapSource.Size.Height * stride, true, true))
            {
                bitmapSource.CopyPixels(stride, buffer);
                return new Texture2D(device, new Texture2DDescription()
                {
                    Width = bitmapSource.Size.Width,
                    Height = bitmapSource.Size.Height,
                    ArraySize = 1,
                    BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
                    Usage = ResourceUsage.Default,
                    CpuAccessFlags = CpuAccessFlags.None,
                    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
                    MipLevels = 1,
                    OptionFlags = ResourceOptionFlags.GenerateMipMaps,
                    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
                },
                new SharpDX.DataRectangle(buffer.DataPointer, stride));
            }
        }
    }
}
I have a feeling this will need to be completely redone to utilize the DDS format. Is it easier to simply read one and then convert it to a bitmap?
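For what it's worth, DDS is a container around (often block-compressed) texture data, so the usual options are a helper library (e.g. a DirectXTex/DDSTextureLoader port) or reading the container yourself and handing the payload to a Texture2D with the matching DXGI format, bypassing WIC entirely. Here is a minimal sketch of the first step, parsing the header; the DdsHeader type is hypothetical (not part of SharpDX), and the offsets come from the DDS file layout (4-byte magic followed by a 124-byte header):

```csharp
using System;

// Hypothetical helper, not part of SharpDX: parses just enough of a DDS
// header to decide how to create the texture. Offsets follow the DDS file
// layout: 4-byte "DDS " magic, then the 124-byte DDS_HEADER structure.
public static class DdsHeader
{
    public static (int Width, int Height, int MipCount, string FourCC) Parse(byte[] dds)
    {
        if (dds.Length < 128 || BitConverter.ToUInt32(dds, 0) != 0x20534444) // "DDS "
            throw new ArgumentException("Not a DDS file");

        int height   = BitConverter.ToInt32(dds, 12);  // DDS_HEADER.dwHeight
        int width    = BitConverter.ToInt32(dds, 16);  // DDS_HEADER.dwWidth
        int mipCount = BitConverter.ToInt32(dds, 28);  // DDS_HEADER.dwMipMapCount
        // ddspf.dwFourCC sits at byte offset 84 ("DXT1", "DXT5", "DX10", ...)
        string fourCC = System.Text.Encoding.ASCII.GetString(dds, 84, 4);
        return (width, height, Math.Max(mipCount, 1), fourCC);
    }
}
```

From the FourCC you could then map, for example, "DXT1" to Format.BC1_UNorm and "DXT5" to Format.BC3_UNorm, and create the Texture2D from the raw payload starting at byte offset 128; this is only a sketch of the idea, not a complete loader.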

Related

Face Recognition in C# (using EigenFaceRecognizer) recognizing an unknown face as a trained face

I was trying to do face recognition in C# with EigenFaceRecognizer. The problem is that the recognizer identifies an unknown face as a known one. Once the recognizer is trained on that unknown face, it recognizes the face correctly, but it never shows "Unknown" as written in the code below.
This is the full code to recognize, capture, save and train faces:
using System;
using System.Collections.Generic;
using System.Windows.Forms;
using System.IO;
using System.Threading;
using System.Drawing;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Face;
using Emgu.CV.Structure;
namespace FaceRecognition
{
class FaceRecognition:Form
{
private double distance = 50;
private CascadeClassifier CascadeClassifier = new CascadeClassifier(Environment.CurrentDirectory + "/Resources/Haarcascade/haarcascade_frontalface_alt2.xml");
private Image<Bgr, byte> Frame = (Image<Bgr, byte>)null;
private Capture camera;
private Mat mat = new Mat();
private List<Image<Gray, byte>> trainedFaces = new List<Image<Gray, byte>>();
private List<int> PersonLabs = new List<int>();
private bool isEnable_SaveImage = false;
private string ImageName;
private PictureBox PictureBox_Frame;
private PictureBox PictureBox_smallFrame;
private string setPersonName;
public bool isTrained = false;
private List<string> Names = new List<string>();
private EigenFaceRecognizer eigenFaceRecognizer;
private IContainer components = (IContainer)null;
private List<String> retNames = new List<string>();
public FaceRecognition()
{
this.InitializeComponent();
if (Directory.Exists(Environment.CurrentDirectory + "\\Training_Data\\Faces\\Image"))
return;
Directory.CreateDirectory(Environment.CurrentDirectory + "\\Training_Data\\Faces\\Image");
}
public void getPersonName(Control control)
{
System.Windows.Forms.Timer timer = new System.Windows.Forms.Timer();
timer.Tick += new EventHandler(timer_getPersonName_Tick);
timer.Interval = 100;
timer.Start();
void timer_getPersonName_Tick(object sender, EventArgs e) => control.Text = this.setPersonName;
}
public void openCamera(PictureBox pictureBox_Camera, PictureBox pictureBox_Trained)
{
this.PictureBox_Frame = pictureBox_Camera;
this.PictureBox_smallFrame = pictureBox_Trained;
this.camera = new Capture();
this.camera.ImageGrabbed += new EventHandler(this.Camera_ImageGrabbed);
this.camera.Start();
}
public void Save_IMAGE(string imageName)
{
this.ImageName = imageName;
this.isEnable_SaveImage = true;
}
private void Camera_ImageGrabbed(object sender, EventArgs e)
{
this.camera.Retrieve((IOutputArray)this.mat, 0);
this.Frame = this.mat.ToImage<Bgr, byte>(false).Resize(this.PictureBox_Frame.Width, this.PictureBox_Frame.Height, (Inter)2);
this.detectFace();
this.PictureBox_Frame.Image = (Image)this.Frame.Bitmap;
}
private void detectFace()
{
Image<Bgr, byte> resultImage = this.Frame.Convert<Bgr, byte>();
Mat mat = new Mat();
CvInvoke.CvtColor((IInputArray)this.Frame, (IOutputArray)mat, (ColorConversion)6, 0);
CvInvoke.EqualizeHist((IInputArray)mat, (IOutputArray)mat);
Rectangle[] rectangleArray = this.CascadeClassifier.DetectMultiScale((IInputArray)mat, 1.1, 4, new Size(), new Size());
if ((uint)rectangleArray.Length > 0U)
{
foreach (Rectangle face in rectangleArray)
{
Image<Bgr, byte> frame = this.Frame;
Rectangle rectangle = face;
Bgr bgr = new Bgr(Color.SpringGreen);
MCvScalar mcvScalar = ((Bgr)bgr).MCvScalar;
CvInvoke.Rectangle((IInputOutputArray)frame, rectangle, mcvScalar, 2, (LineType)8, 0);
this.SaveImage(face);
resultImage.ROI = face;
this.trainedIamge();
String name = this.CheckName(resultImage, face);
if (!retNames.Contains(name))
{
retNames.Add(name);
}
}
}
else
{
this.setPersonName = "";
retNames.Clear();
}
}
private void SaveImage(Rectangle face)
{
if (!this.isEnable_SaveImage)
return;
Image<Bgr, byte> image = this.Frame.Convert<Bgr, byte>();
image.ROI = face;
Task.Factory.StartNew(() =>
{
for(int i = 0; i < 40; i++)
{
((CvArray<byte>)image.Resize(100, 100, (Inter)2)).Save(Environment.CurrentDirectory + "\\Training_Data\\Faces\\Image\\" + this.ImageName + "_" + DateTime.Now.ToString("dd-MM-yyyy-hh-mm-ss") + ".jpg");
Thread.Sleep(1000);
}
});
this.isEnable_SaveImage = false;
this.trainedIamge();
}
private void trainedIamge()
{
try
{
int num = 0;
this.trainedFaces.Clear();
this.PersonLabs.Clear();
this.Names.Clear();
foreach (string file in Directory.GetFiles(Directory.GetCurrentDirectory() + "\\Training_Data\\Faces\\Image", "*.jpg", SearchOption.AllDirectories))
{
this.trainedFaces.Add(new Image<Gray, byte>(file));
this.PersonLabs.Add(num);
String name = file.Split('\\').Last().Split('_')[0];
this.Names.Add(name);
++num;
}
this.eigenFaceRecognizer = new EigenFaceRecognizer(num, this.distance);
((FaceRecognizer)this.eigenFaceRecognizer).Train<Gray, byte>(this.trainedFaces.ToArray(), this.PersonLabs.ToArray());
}
catch
{
}
}
private string CheckName(Image<Bgr, byte> resultImage, Rectangle face)
{
retNames.Clear();
try
{
if (!this.isTrained)
return null;
Image<Gray, byte> image = resultImage.Convert<Gray, byte>().Resize(100, 100, (Inter)2);
//);
CvInvoke.EqualizeHist((IInputArray)image, (IOutputArray)image);
//.Predict((IInputArray)image)
FaceRecognizer.PredictionResult predictionResult = ((FaceRecognizer)this.eigenFaceRecognizer).Predict(image);
if (predictionResult.Label != -1 && predictionResult.Distance < 5000)
{
this.PictureBox_smallFrame.Image = (Image)this.trainedFaces[(int)predictionResult.Label].Bitmap;
this.setPersonName = this.Names[(int)predictionResult.Label].Replace(Environment.CurrentDirectory + "\\Training_Data\\Faces\\Image\\", "").Replace(".jpg", "");
Image<Bgr, byte> frame = this.Frame;
string setPersonName = this.setPersonName;
Point point = new Point(face.X - 2, face.Y - 2);
Bgr bgr = new Bgr(Color.Gold);
MCvScalar mcvScalar = ((Bgr)bgr).MCvScalar;
CvInvoke.PutText((IInputOutputArray)frame, setPersonName, point, (FontFace)1, 1.0, mcvScalar, 1, (LineType)8, false);
return setPersonName;
}
else
{
Image<Bgr, byte> frame = this.Frame;
Point point = new Point(face.X - 2, face.Y - 2);
Bgr bgr = new Bgr(Color.OrangeRed);
MCvScalar mcvScalar = ((Bgr)bgr).MCvScalar;
CvInvoke.PutText((IInputOutputArray)frame, "Unknown", point, (FontFace)1, 1.0, mcvScalar, 1, (LineType)8, false);
return "Unknown";
}
}
catch
{
return null;
}
}
protected override void Dispose(bool disposing)
{
if (disposing && this.components != null)
this.components.Dispose();
base.Dispose(disposing);
}
private void InitializeComponent()
{
this.SuspendLayout();
this.AutoScaleDimensions = new SizeF(8f, 16f);
this.AutoScaleMode = AutoScaleMode.Font;
this.ClientSize = new Size(800, 450);
this.Name = nameof(FaceRecognition);
this.Text = nameof(FaceRecognition);
this.ResumeLayout(false);
}
public List<String> getRetNames { get => retNames; }
private String setRetNames { set => retNames.Add(value); }
}
}
This is the main piece of the code (if you are in a hurry) where it recognizes the face:
private string CheckName(Image<Bgr, byte> resultImage, Rectangle face)
{
retNames.Clear();
try
{
if (!this.isTrained)
return null;
Image<Gray, byte> image = resultImage.Convert<Gray, byte>().Resize(100, 100, (Inter)2);
CvInvoke.EqualizeHist((IInputArray)image, (IOutputArray)image);
FaceRecognizer.PredictionResult predictionResult = ((FaceRecognizer)this.eigenFaceRecognizer).Predict((IInputArray)image);
if (predictionResult.Label != -1 && predictionResult.Distance < 5000)
{
this.PictureBox_smallFrame.Image = (Image)this.trainedFaces[(int)predictionResult.Label].Bitmap;
this.setPersonName = this.Names[(int)predictionResult.Label].Replace(Environment.CurrentDirectory + "\\Training_Data\\Faces\\Image\\", "").Replace(".jpg", "");
Image<Bgr, byte> frame = this.Frame;
string setPersonName = this.setPersonName;
Point point = new Point(face.X - 2, face.Y - 2);
Bgr bgr = new Bgr(Color.Gold);
MCvScalar mcvScalar = ((Bgr)bgr).MCvScalar;
CvInvoke.PutText((IInputOutputArray)frame, setPersonName, point, (FontFace)1, 1.0, mcvScalar, 1, (LineType)8, false);
return setPersonName;
}
else
{
Image<Bgr, byte> frame = this.Frame;
Point point = new Point(face.X - 2, face.Y - 2);
Bgr bgr = new Bgr(Color.OrangeRed);
MCvScalar mcvScalar = ((Bgr)bgr).MCvScalar;
CvInvoke.PutText((IInputOutputArray)frame, "Unknown", point, (FontFace)1, 1.0, mcvScalar, 1, (LineType)8, false);
return "Unknown";
}
}
catch
{
return null;
}
}
Now, no matter which face it is, FaceRecognizer.PredictionResult predictionResult = ((FaceRecognizer)this.eigenFaceRecognizer).Predict((IInputArray)image); always returns predictionResult.Label = 0 and predictionResult.Distance = 0.
What I tried:
Changing private double distance = 50;. Initially it was 1E+19, then I made it 5000, then 2000, and tweaked it with many other values.
Using all of the CascadeClassifier xml files.
But in every case, the values of predictionResult.Label and predictionResult.Distance were always 0.
P.S.: This question may look like a duplicate of one or two others, but in those questions the asker neither provided sufficient information nor received an answer.
I had a similar problem. What fixed it for me was making sure that your labels do not contain a zero; 0 is reserved for errors and similar cases.
Also, get a public image dataset and train an "unknown" class on it. This is what I used: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Then the Eigen recognizer will classify unknown faces as unknown most of the time.
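A related detail worth checking: the question's trainedIamge method assigns a fresh label to every image file (++num per file), so two photos of the same person end up with different labels. Below is a hedged sketch of building one label per person instead, starting at 1 so that label 0 stays free as suggested above. LabelBuilder is a made-up helper; file names are assumed to follow the "<name>_<timestamp>.jpg" pattern that Save_IMAGE produces:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: assigns one label per person name (1-based, so 0
// stays reserved), instead of one label per image file as in the question.
public static class LabelBuilder
{
    public static (int[] Labels, string[] Names) Build(IEnumerable<string> files)
    {
        var labelByName = new Dictionary<string, int>();
        var labels = new List<int>();
        var names = new List<string>();
        foreach (string file in files)
        {
            // Same name extraction as the question's trainedIamge method.
            string name = file.Split('\\').Last().Split('_')[0];
            if (!labelByName.TryGetValue(name, out int label))
            {
                label = labelByName.Count + 1;   // 1-based: 0 is reserved
                labelByName[name] = label;
            }
            labels.Add(label);
            names.Add(name);
        }
        return (labels.ToArray(), names.ToArray());
    }
}
```

The resulting Labels array would then be passed to FaceRecognizer.Train in place of PersonLabs, with Names indexed by label rather than by file.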

displaying Point3DCollection with Helixtoolkit c#

I have made my point cloud using the code from the librealsense:
var points = pc.Process(depthFrame).As<Points>();
//float depth = depthFrame.GetDistance(x, y);
//bbox = (287, 23, 86, 320);
// We colorize the depth frame for visualization purposes
var colorizedDepth = colorizer.Process<VideoFrame>(depthFrame).DisposeWith(frames);
//var org = Cv2.ImRead(colorFrame);
// CopyVertices is extensible, any of these will do:
//var vertices = new float[points.Count * 3];
var vertices = new Intel.RealSense.Math.Vertex[points.Count];
// var vertices = new UnityEngine.Vector3[points.Count];
// var vertices = new System.Numerics.Vector3[points.Count]; // SIMD
// var vertices = new GlmSharp.vec3[points.Count];
//var vertices = new byte[points.Count * 3 * sizeof(float)];
points.CopyVertices(vertices);
And I have converted the point cloud to a Point3DCollection from Media3D:
Point3DCollection pointss = new Point3DCollection();
foreach (var vertex in vertices)
{
var point3D = new Point3D(vertex.x, vertex.y, vertex.z);
pointss.Add(point3D);
}
I want to display those points using this line in the XAML file:
<h:HelixViewport3D Grid.ColumnSpan="1" Grid.Column="1" Margin="2.4,1,0,-0.4" >
<h:DefaultLights/>
<h:PointsVisual3D Points="{Binding pointss}" Color="Red" Size ="2"/>
</h:HelixViewport3D>
But I don't see my point cloud. Is there something wrong with my code?
The code that I am using right now looks like this. I have added what was given in the answer, but I get the error "Object reference not set to an instance of an object". The code I am using is below:
namespace Intel.RealSense
{
/// <summary>
/// Interaction logic for Window.xaml
/// </summary>
public partial class CaptureWindow : System.Windows.Window
{
private Pipeline pipeline;
private Colorizer colorizer;
private CancellationTokenSource tokenSource = new CancellationTokenSource();
private Pipeline pipe = new Pipeline();
private PointCloud pc = new PointCloud();
private ThresholdFilter threshold;
private Point3DCollection _pointss;
public Point3DCollection pointss
{
get => _pointss;
set
{
if (_pointss == value)
return;
_pointss = value;
OnPropertyChanged();
}
}
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName = null)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
//static CvTrackbar Track;
//static OpenCvSharp.Point[][] contours;
//static HierarchyIndex[] hierarchy;
static Action<VideoFrame> UpdateImage(Image img)
{
var wbmp = img.Source as WriteableBitmap;
return new Action<VideoFrame>(frame =>
{
var rect = new Int32Rect(0, 0, frame.Width, frame.Height);
wbmp.WritePixels(rect, frame.Data, frame.Stride * frame.Height, frame.Stride);
});
}
public CaptureWindow()
{
InitializeComponent();
ModelImporter import = new ModelImporter();
try
{
Action<VideoFrame> updateDepth;
Action<VideoFrame> updateColor;
// The colorizer processing block will be used to visualize the depth frames.
colorizer = new Colorizer();
// Create and config the pipeline to strem color and depth frames.
pipeline = new Pipeline();
var cfg = new Config();
cfg.EnableStream(Stream.Depth, 640, 480);
cfg.EnableStream(Stream.Color, Format.Rgb8);
var pp = pipeline.Start(cfg);
PipelineProfile selection = pp;
var depth_stream = selection.GetStream<VideoStreamProfile>(Stream.Depth);
Intrinsics i = depth_stream.GetIntrinsics();
float[] fov = i.FOV;
SetupWindow(pp, out updateDepth, out updateColor);
Task.Factory.StartNew(() =>
{
while (!tokenSource.Token.IsCancellationRequested)
{
threshold = new ThresholdFilter();
threshold.Options[Option.MinDistance].Value = 0.0F;
threshold.Options[Option.MaxDistance].Value = 0.1F;
using (var releaser = new FramesReleaser())
{
using (var frames = pipeline.WaitForFrames().DisposeWith(releaser))
{
var pframes = frames
.ApplyFilter(threshold).DisposeWith(releaser);
}
}
// We wait for the next available FrameSet and using it as a releaser object that would track
// all newly allocated .NET frames, and ensure deterministic finalization
// at the end of scope.
using (var frames = pipeline.WaitForFrames())
{
var colorFrame = frames.ColorFrame.DisposeWith(frames);
var depthFrame = frames.DepthFrame.DisposeWith(frames);
var points = pc.Process(depthFrame).As<Points>();
//float depth = depthFrame.GetDistance(x, y);
//bbox = (287, 23, 86, 320);
// We colorize the depth frame for visualization purposes
var colorizedDepth = colorizer.Process<VideoFrame>(depthFrame).DisposeWith(frames);
//var org = Cv2.ImRead(colorFrame);
// CopyVertices is extensible, any of these will do:
//var vertices = new float[points.Count * 3];
var vertices = new Intel.RealSense.Math.Vertex[points.Count];
// var vertices = new UnityEngine.Vector3[points.Count];
// var vertices = new System.Numerics.Vector3[points.Count]; // SIMD
// var vertices = new GlmSharp.vec3[points.Count];
//var vertices = new byte[points.Count * 3 * sizeof(float)];
points.CopyVertices(vertices);
//Point3DCollection pointss = new Point3DCollection();
foreach (var vertex in vertices)
{
var point3D = new Point3D(vertex.x, vertex.y, vertex.z);
pointss.Add(point3D);
}
// Render the frames.
Dispatcher.Invoke(DispatcherPriority.Render, updateDepth, colorizedDepth);
Dispatcher.Invoke(DispatcherPriority.Render, updateColor, colorFrame);
Dispatcher.Invoke(new Action(() =>
{
String depth_dev_sn = depthFrame.Sensor.Info[CameraInfo.SerialNumber];
txtTimeStamp.Text = depth_dev_sn + " : " + String.Format("{0,-20:0.00}", depthFrame.Timestamp) + "(" + depthFrame.TimestampDomain.ToString() + ")";
}));
//HelixToolkit.Wpf.
}
}
}, tokenSource.Token);
}
catch (Exception ex)
{
System.Windows.MessageBox.Show(ex.Message);
System.Windows.Application.Current.Shutdown();
}
}
private void control_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
tokenSource.Cancel();
}
private void SetupWindow(PipelineProfile pipelineProfile, out Action<VideoFrame> depth, out Action<VideoFrame> color)
{
using (var p = pipelineProfile.GetStream(Stream.Depth).As<VideoStreamProfile>())
imgDepth.Source = new WriteableBitmap(p.Width, p.Height, 96d, 96d, PixelFormats.Rgb24, null);
depth = UpdateImage(imgDepth);
using (var p = pipelineProfile.GetStream(Stream.Color).As<VideoStreamProfile>())
imgColor.Source = new WriteableBitmap(p.Width, p.Height, 96d, 96d, PixelFormats.Rgb24, null);
color = UpdateImage(imgColor);
}
}
You can only bind to public properties, not to fields, so you have to define it like this:
public Point3DCollection pointss { get; } = new Point3DCollection();
If you want to reassign the collection at runtime, you should also implement INotifyPropertyChanged, otherwise assigning a new collection will not trigger a binding update and the change will not be reflected in the UI.
public class YourViewModel : INotifyPropertyChanged
{
    private Point3DCollection _pointss;

    public Point3DCollection pointss
    {
        get => _pointss;
        set
        {
            if (_pointss == value)
                return;
            _pointss = value;
            OnPropertyChanged();
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
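To see the notification mechanism in isolation (without any WPF dependency), here is a stripped-down sketch; PointsViewModel is a placeholder name and object stands in for Point3DCollection so the example stays self-contained:

```csharp
using System;
using System.ComponentModel;

// Placeholder view model: demonstrates that assigning a *new* collection
// through the property raises PropertyChanged, which is what the
// {Binding pointss} in the XAML listens for.
public class PointsViewModel : INotifyPropertyChanged
{
    private object _pointss; // Point3DCollection in the real code

    public object pointss
    {
        get => _pointss;
        set
        {
            if (_pointss == value)
                return; // same reference: no notification
            _pointss = value;
            OnPropertyChanged(nameof(pointss));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```

Note also that in the question's CaptureWindow, the backing field _pointss is never initialized, so the very first pointss.Add(point3D) in the capture loop dereferences null; that alone would explain the "object reference not set" error. Assigning a fresh Point3DCollection (on the UI thread, since WPF bindings observe collections owned by that thread) before or inside the loop is the first thing to fix.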

How can I get and wait for external process to complete?

I have rewritten a paginator to display a header and footer for my documents. Everything seems to work until I save it as an XPS document:
public void SaveAsXps(string fileName, FlowDocument document, string DocumentTitle, string DocumentFooter)
{
document.PageHeight = 1122.5 - 30;
document.PageWidth = 793.7 - 30;
using (Package container = Package.Open(fileName + ".xps", FileMode.Create))
{
using (XpsDocument xpsDoc = new XpsDocument(container, CompressionOption.Maximum))
{
XpsSerializationManager rsm = new XpsSerializationManager(new XpsPackagingPolicy(xpsDoc), false);
DocumentPaginator paginator = ((IDocumentPaginatorSource)document).DocumentPaginator;
paginator = new VisualPaginator(paginator, new Size(793.7, 1122.5), new Size(30, 30), DocumentTitle, DocumentFooter);
rsm.SaveAsXaml(paginator);
}
}
}
Here is the paginator:
public class VisualPaginator : DocumentPaginator
{
string m_DocumentTitle;
string m_DocumentFooter;
private Size pageSize;
private Size margin;
private readonly DocumentPaginator paginator;
private Typeface typeface;
public override Size PageSize
{
get { return pageSize; }
set { pageSize = value; }
}
public override bool IsPageCountValid
{
get
{
return paginator.IsPageCountValid;
}
}
public override int PageCount
{
get
{
return paginator.PageCount;
}
}
public override IDocumentPaginatorSource Source
{
get
{
return paginator.Source;
}
}
public VisualPaginator(DocumentPaginator paginator, Size pageSize, Size margin, string DocumentTitle, string DocumentFooter)
{
PageSize = pageSize;
this.margin = margin;
this.paginator = paginator;
m_DocumentTitle = DocumentTitle;
m_DocumentFooter = DocumentFooter;
this.paginator.PageSize = new Size(PageSize.Width - margin.Width * 2,
PageSize.Height - margin.Height * 2);
}
public void DrawFunction(DrawingVisual content, string drawContent, Point point)
{
try
{
using (DrawingContext ctx = content.RenderOpen())
{
if (typeface == null)
{
typeface = new Typeface("Times New Roman");
}
FormattedText text = new FormattedText(drawContent, CultureInfo.CurrentCulture,
FlowDirection.LeftToRight, typeface, 14, Brushes.Black,
VisualTreeHelper.GetDpi(content).PixelsPerDip);
Thread.Sleep(300);
ctx.DrawText(text, point);
}
}
catch (Exception)
{
throw;
}
}
public override DocumentPage GetPage(int pageNumber)
{
DocumentPage page = paginator.GetPage(pageNumber);
// Create a wrapper visual for transformation and add extras
ContainerVisual newpage = new ContainerVisual();
//Title
DrawingVisual pagetitle = new DrawingVisual();
DrawFunction(pagetitle, m_DocumentTitle, new Point(paginator.PageSize.Width / 2 - 100, -96 / 4));
//Page Number
DrawingVisual pagenumber = new DrawingVisual();
DrawFunction(pagenumber, "Page " + (pageNumber + 1), new Point(paginator.PageSize.Width - 200, paginator.PageSize.Height - 100));
//Footer
DrawingVisual pagefooter = new DrawingVisual();
DrawFunction(pagefooter, m_DocumentFooter, new Point(paginator.PageSize.Width / 2 - 100, paginator.PageSize.Height - 100));
DrawingVisual background = new DrawingVisual();
using (DrawingContext ctx = background.RenderOpen())
{
ctx.DrawRectangle(new SolidColorBrush(Color.FromRgb(240, 240, 240)), null, page.ContentBox);
}
newpage.Children.Add(background); // Scale down page and center
ContainerVisual smallerPage = new ContainerVisual();
smallerPage.Children.Add(page.Visual);
//smallerPage.Transform = new MatrixTransform(0.95, 0, 0, 0.95,
// 0.025 * page.ContentBox.Width, 0.025 * page.ContentBox.Height);
newpage.Children.Add(smallerPage);
newpage.Children.Add(pagetitle);
newpage.Children.Add(pagenumber);
newpage.Children.Add(pagefooter);
newpage.Transform = new TranslateTransform(margin.Width, margin.Height);
return new DocumentPage(newpage, PageSize, page.BleedBox, page.ContentBox);
}
}
The code execution breaks unless I use a Thread.Sleep(300) instruction. I think the program has to wait for some external process to complete, but I have no idea which processes are involved here or how to wait for them, so that I can fix the problem without using Thread.Sleep(), which is very bad practice.
Any help or clues would be gladly appreciated.

How to set up a SwapChain1 for 2D rendering in SharpDX?

So I've been using SharpDX, a C# DirectX wrapper, to program in Direct3D11 and Direct2D and draw to a RenderForm window in my program. However, the SwapChain.Present documentation states I should use SwapChain1.Present1 instead, and I cannot figure out how to change my code to work with SwapChain1. The CreateWithSwapChain method on Device (and even Device1) only works with a plain SwapChain, and I don't know how else to set this up.
These are the namespaces being used.
using SharpDX.Direct2D1;
using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using SharpDX.Windows;
using Device = SharpDX.Direct3D11.Device;
using Factory = SharpDX.Direct2D1.Factory;
using Resource = SharpDX.Direct3D11.Resource;
And this is the code I'm using to setup the RenderTarget.
SwapChainDescription desc = new SwapChainDescription() {
BufferCount = 1,
ModeDescription = new ModeDescription(
DXWindow.GetWindow().ClientSize.Width,
DXWindow.GetWindow().ClientSize.Height,
new Rational(60, 1),
Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = DXWindow.GetWindow().Handle,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.Discard,
Usage = Usage.RenderTargetOutput
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport, new SharpDX.Direct3D.FeatureLevel[] { SharpDX.Direct3D.FeatureLevel.Level_10_0 }, desc, out Device device, out swapchain);
Texture2D backbuffer = Resource.FromSwapChain<Texture2D>(swapchain, 0);
RenderTargetView renderview = new RenderTargetView(device, backbuffer);
renderView = new RenderTarget(new Factory(), backbuffer.QueryInterface<Surface>(), new RenderTargetProperties(new PixelFormat(Format.Unknown, SharpDX.Direct2D1.AlphaMode.Premultiplied)));
RenderLoop.Run(DXWindow.GetWindow(), CoreLoop);
DXWindow.GetWindow() returns a RenderForm, and CoreLoop is the render loop where I make Draw calls and Present. How do I use SwapChain1 instead of SwapChain here?
I've posted my code that works for SwapChain2; you can interchange the QueryInterface with SwapChain1 where needed. I've included code for both Windows Forms and UWP initialisation, since I am not sure which one you are doing here; the #define lets you swap between them. I haven't done much more work on UWP, though, and it may have been broken since I last touched it.
// Function Create()
DXGI.Device1 dxgiDevice1_ = _d3dDevice.QueryInterface<DXGI.Device1>();
DXGI.Adapter dxgiAdapter_ = dxgiDevice1_.Adapter;
DXGI.Factory2 dxgiFactory2_ = dxgiAdapter_.GetParent<DXGI.Factory2>();
ReleaseAllDeviceContexts(true);
if (_swapChain != null)
{
_swapChain.Dispose();
}
if (_swapChainBuffer != null)
{
_swapChainBuffer.Dispose();
}
if (_parentSwapchain != null)
{
_parentSwapchain.Dispose();
}
#if !WINDOWS_UWP
DXGI.SwapChainDescription1 swapChainDescription_ = new DXGI.SwapChainDescription1()
{
AlphaMode = DXGI.AlphaMode.Ignore,
Width = _modeDescription.Width,
Height = _modeDescription.Height,
Format = DXGI.Format.R8G8B8A8_UNorm,
Scaling = DXGI.Scaling.None,
BufferCount = _swapChainBufferCount,
SwapEffect = SharpDX.DXGI.SwapEffect.FlipDiscard,
Flags = SharpDX.DXGI.SwapChainFlags.AllowModeSwitch,
Usage = DXGI.Usage.BackBuffer | DXGI.Usage.RenderTargetOutput,
SampleDescription = new DXGI.SampleDescription() { Count = 1, Quality = 0 },
Stereo = false,
};
_parentSwapchain1 = new DXGI.SwapChain1(dxgiFactory2_, _d3dDevice, _parentContainer.WindowHandle, ref swapChainDescription_, new DXGI.SwapChainFullScreenDescription()
{
RefreshRate = _modeDescription.RefreshRate,
Scaling = SharpDX.DXGI.DisplayModeScaling.Unspecified,
Windowed = isFullscreen == false,
ScanlineOrdering = DXGI.DisplayModeScanlineOrder.Unspecified,
}
);
_swapChain = _parentSwapchain1.QueryInterface<DXGI.SwapChain2>();
#else
DXGI.SwapChainDescription1 swapChainDescription = new DXGI.SwapChainDescription1()
{
AlphaMode = DXGI.AlphaMode.Ignore,
Width = _modeDescription.Width,
Height = _modeDescription.Height,
Format = DXGI.Format.R8G8B8A8_UNorm,
Scaling = DXGI.Scaling.Stretch,
BufferCount = _swapChainBufferCount,
SwapEffect = SharpDX.DXGI.SwapEffect.FlipDiscard,
Flags = SharpDX.DXGI.SwapChainFlags.AllowModeSwitch | DXGI.SwapChainFlags.AllowTearing,
Usage = DXGI.Usage.BackBuffer | DXGI.Usage.RenderTargetOutput,
SampleDescription = new DXGI.SampleDescription() { Count = 1, Quality = 0 },
Stereo = false,
};
ComObject obj = new ComObject(_parentContainer.WindowHandle);
_parentSwapchain1 = new DXGI.SwapChain1(dxgiFactory2_, _d3dDevice, ref swapChainDescription, null);
_swapChain2 = _parentSwapchain1.QueryInterface<DXGI.SwapChain2>();
using (DXGI.ISwapChainPanelNative nativeObject = ComObject.As<DXGI.ISwapChainPanelNative>(_parentContainer.WindowHandle))
{
// Set its swap chain.
nativeObject.SwapChain = _swapChain2;
}
#endif
_swapChainBuffer = D3D11.Texture2D.FromSwapChain<D3D11.Texture2D>(_swapChain, 0);
dxgiDevice1_.Dispose();
dxgiAdapter_.Dispose();
dxgiFactory2_.Dispose();
The resize of the buffer is also included here for reference. Ignore some of the custom code, though; it's all internal to my game, so some of it won't make sense.
public void ResizeBuffers(bool isFullscreen)
{
try
{
if (_swapChainBuffer != null)
{
// if (_parentSwapchain1.IsFullScreen != isFullscreen)
{
ReleaseAllDeviceContexts(true);
SharpDX.Utilities.Dispose(ref _swapChain2);
SharpDX.Utilities.Dispose(ref _swapChainBuffer);
#if !WINDOWS_UWP
_parentSwapchain1.IsFullScreen = isFullscreen;
_parentSwapchain1.ResizeBuffers(0, _modeDescription.Width, _modeDescription.Height, DXGI.Format.Unknown, SharpDX.DXGI.SwapChainFlags.AllowModeSwitch);
if (isFullscreen)
{
_parentSwapchain1.ResizeTarget(ref _modeDescription);
}
#else
_parentSwapchain1.ResizeBuffers(0, _modeDescription.Width, _modeDescription.Height, DXGI.Format.Unknown, DXGI.SwapChainFlags.AllowTearing);
#endif
_swapChain2 = _parentSwapchain1.QueryInterface<DXGI.SwapChain2>();
_swapChainBuffer = D3D11.Texture2D.FromSwapChain<D3D11.Texture2D>(_swapChain2, 0);
}
_renderViewport = new Viewport(0, 0, _modeDescription.Width, _modeDescription.Height);
_d3dDevice.ImmediateContext1.Rasterizer.SetViewport(_renderViewport);
}
}
catch (Exception ex)
{
ErrorHandler.DoErrorHandling(ex, ErrorHandler.GetCurrentMethod(ex));
}
}

Using iText 7, what's the proper way to export a Flate encoded image?

I am trying to write code to export the images within a PDF using iText 7.1.9. I'm having some issues with Flate encoded images: all the Flate encoded images from the free Microsoft book I'm using as an example (see Moving to Microsoft Visual Studio 2010) come out pink, and depending on how I try to copy the bytes they can also come out distorted.
If I attempt to copy all the image bytes at once (see the SaveFlateEncodedImage2 method in the code below), they come out distorted like this one:
If I attempt to copy them row by row (see the SaveFlateEncodedImage method in the code below), they are pink like this one
Here is the code that I'm using to export them:
using iText.Kernel;
using iText.Kernel.Pdf;
using iText.Kernel.Pdf.Filters;
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Runtime.InteropServices;

namespace ITextPdfStuff
{
    public class MyPdfImageExtractor
    {
        private readonly string _pdfFileName;

        public MyPdfImageExtractor(string pdfFileName)
        {
            _pdfFileName = pdfFileName;
        }

        public void ExtractToDirectory(string directoryName)
        {
            using (var reader = new PdfReader(_pdfFileName))
            {
                // Avoid iText.Kernel.Crypto.BadPasswordException: https://stackoverflow.com/a/48065052/97803
                reader.SetUnethicalReading(true);
                using (var pdfDoc = new PdfDocument(reader))
                {
                    ExtractImagesOnAllPages(pdfDoc, directoryName);
                }
            }
        }

        private void ExtractImagesOnAllPages(PdfDocument pdfDoc, string directoryName)
        {
            Console.WriteLine($"Number of pdf objects: {pdfDoc.GetNumberOfPdfObjects()}");

            // Extract objects https://itextpdf.com/en/resources/examples/itext-7/extracting-objects-pdf
            for (int objNumber = 1; objNumber <= pdfDoc.GetNumberOfPdfObjects(); objNumber++)
            {
                PdfObject currentObject = pdfDoc.GetPdfObject(objNumber);
                if (currentObject != null && currentObject.IsStream())
                {
                    try
                    {
                        ExtractImagesOneImage(currentObject as PdfStream, Path.Combine(directoryName, $"image{objNumber}.png"));
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine($"Object number {objNumber} is NOT an image! -- error: {ex.Message}");
                    }
                }
            }
        }

        private void ExtractImagesOneImage(PdfStream someStream, string fileName)
        {
            var pdfDict = (PdfDictionary)someStream;
            string subType = pdfDict.Get(PdfName.Subtype)?.ToString() ?? string.Empty;
            bool isImage = subType == "/Image";
            if (isImage == false)
                return;

            string filter = pdfDict.Get(PdfName.Filter)?.ToString() ?? string.Empty;
            if (filter == "/FlateDecode")
            {
                SaveFlateEncodedImage(fileName, pdfDict, someStream.GetBytes(false));
            }
            else
            {
                byte[] imgData;
                try
                {
                    // Try the raw stream bytes first; fall back to the decoded bytes.
                    imgData = someStream.GetBytes(false);
                }
                catch (PdfException)
                {
                    imgData = someStream.GetBytes(true);
                }
                SaveNormalImage(fileName, imgData);
            }
        }

        private void SaveNormalImage(string fileName, byte[] imgData)
        {
            using (var memStream = new System.IO.MemoryStream(imgData))
            using (var image = System.Drawing.Image.FromStream(memStream))
            {
                image.Save(fileName, ImageFormat.Png);
                Console.WriteLine($"{Path.GetFileName(fileName)}");
            }
        }

        private void SaveFlateEncodedImage(string fileName, PdfDictionary pdfDict, byte[] imgData)
        {
            int width = int.Parse(pdfDict.Get(PdfName.Width).ToString());
            int height = int.Parse(pdfDict.Get(PdfName.Height).ToString());
            int bpp = int.Parse(pdfDict.Get(PdfName.BitsPerComponent).ToString());

            // Example that helped: https://stackoverflow.com/a/8517377/97803
            PixelFormat pixelFormat;
            switch (bpp)
            {
                case 1:
                    pixelFormat = PixelFormat.Format1bppIndexed;
                    break;
                case 8:
                    pixelFormat = PixelFormat.Format8bppIndexed;
                    break;
                case 24:
                    pixelFormat = PixelFormat.Format24bppRgb;
                    break;
                default:
                    throw new Exception("Unknown pixel format " + bpp);
            }

            // .NET docs https://api.itextpdf.com/iText7/dotnet/7.1.9/classi_text_1_1_kernel_1_1_pdf_1_1_filters_1_1_flate_decode_strict_filter.html
            // Java docs have more detail: https://api.itextpdf.com/iText7/java/7.1.7/com/itextpdf/kernel/pdf/filters/FlateDecodeFilter.html
            imgData = FlateDecodeStrictFilter.FlateDecode(imgData, true);
            // byte[] streamBytes = FlateDecodeStrictFilter.DecodePredictor(imgData, pdfDict);

            // Copy the image one row at a time
            using (var bmp = new Bitmap(width, height, pixelFormat))
            {
                BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, pixelFormat);
                int length = (int)Math.Ceiling(width * bpp / 8.0);
                for (int i = 0; i < height; i++)
                {
                    int offset = i * length;
                    int scanOffset = i * bmpData.Stride;
                    Marshal.Copy(imgData, offset, new IntPtr(bmpData.Scan0.ToInt64() + scanOffset), length);
                }
                bmp.UnlockBits(bmpData);
                bmp.Save(fileName, ImageFormat.Png);
            }
            Console.WriteLine($"FlateDecode! {Path.GetFileName(fileName)}");
        }

        /// <summary>This method distorts the image badly</summary>
        private void SaveFlateEncodedImage2(string fileName, PdfDictionary pdfDict, byte[] imgData)
        {
            int width = int.Parse(pdfDict.Get(PdfName.Width).ToString());
            int height = int.Parse(pdfDict.Get(PdfName.Height).ToString());
            int bpp = int.Parse(pdfDict.Get(PdfName.BitsPerComponent).ToString());

            // Example that helped: https://stackoverflow.com/a/8517377/97803
            PixelFormat pixelFormat;
            switch (bpp)
            {
                case 1:
                    pixelFormat = PixelFormat.Format1bppIndexed;
                    break;
                case 8:
                    pixelFormat = PixelFormat.Format8bppIndexed;
                    break;
                case 24:
                    pixelFormat = PixelFormat.Format24bppRgb;
                    break;
                default:
                    throw new Exception("Unknown pixel format " + bpp);
            }

            // .NET docs https://api.itextpdf.com/iText7/dotnet/7.1.9/classi_text_1_1_kernel_1_1_pdf_1_1_filters_1_1_flate_decode_strict_filter.html
            // Java docs have more detail: https://api.itextpdf.com/iText7/java/7.1.7/com/itextpdf/kernel/pdf/filters/FlateDecodeFilter.html
            imgData = FlateDecodeStrictFilter.FlateDecode(imgData, true);
            // byte[] streamBytes = FlateDecodeStrictFilter.DecodePredictor(imgData, pdfDict);

            // Copy the entire image in one go
            using (var bmp = new Bitmap(width, height, pixelFormat))
            {
                BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, pixelFormat);
                Marshal.Copy(imgData, 0, bmpData.Scan0, imgData.Length);
                bmp.UnlockBits(bmpData);
                bmp.Save(fileName, ImageFormat.Png);
            }
            Console.WriteLine($"FlateDecode! {Path.GetFileName(fileName)}");
        }
    }
}
The code can be instantiated and called like this from within a .NET Core console application:
string existingFileName = @"c:\temp\ReallyLongBook1.pdf";
var imageExtractor = new MyPdfImageExtractor(existingFileName);
imageExtractor.ExtractToDirectory(@"c:\temp\images");
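For context, a fuller console entry point might look like the sketch below. Note that ExtractToDirectory writes files into the target directory but never creates it, so the Directory.CreateDirectory call (my addition, not part of the class above) is needed if c:\temp\images does not already exist:

```csharp
using System.IO;
using ITextPdfStuff;

class Program
{
    static void Main()
    {
        string existingFileName = @"c:\temp\ReallyLongBook1.pdf";
        string outputDirectory = @"c:\temp\images";

        // The extractor writes image files into this directory but does
        // not create it, so ensure it exists up front.
        Directory.CreateDirectory(outputDirectory);

        var imageExtractor = new MyPdfImageExtractor(existingFileName);
        imageExtractor.ExtractToDirectory(outputDirectory);
    }
}
```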
I'm running the following free Microsoft book through this code:
Moving to Microsoft Visual Studio 2010
The image in question is on page 10 and it's black and white (not pink).
I'm no PDF expert, and I've been banging on this code for a couple of days now, picking apart a number of examples to piece this together. Any help getting me past my pink images would be greatly appreciated.
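For what it's worth, the pink cast is characteristic of Format8bppIndexed bitmaps: GDI+ initializes them with a default halftone palette rather than a grayscale ramp, so grayscale pixel data gets mapped to colored palette entries. A sketch of a workaround (my own addition, not part of the code above) is to overwrite the palette before saving:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static class BitmapPaletteHelper
{
    // Hypothetical helper: replace the default halftone palette of an
    // indexed bitmap with a linear grayscale ramp, so palette index N
    // is rendered as gray level N instead of an arbitrary color.
    public static void ApplyGrayscalePalette(Bitmap bmp)
    {
        ColorPalette palette = bmp.Palette;   // the Palette getter returns a copy
        for (int i = 0; i < palette.Entries.Length; i++)
        {
            palette.Entries[i] = Color.FromArgb(i, i, i);
        }
        bmp.Palette = palette;                // assign the modified copy back
    }
}
```

Calling BitmapPaletteHelper.ApplyGrayscalePalette(bmp) right after constructing the Bitmap in SaveFlateEncodedImage should turn the pink output gray, assuming the image really is 8-bit grayscale rather than palette-indexed color (the PDF's /ColorSpace entry tells you which).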
-------Update Feb 4, 2020------
Here is the revised version after MKL's suggested changes. His version extracted more images than mine did and produced proper-looking images that appear in the book I mentioned above:
using iText.Kernel.Pdf;
using iText.Kernel.Pdf.Canvas.Parser;
using iText.Kernel.Pdf.Canvas.Parser.Data;
using iText.Kernel.Pdf.Canvas.Parser.Listener;
using iText.Kernel.Pdf.Xobject;
using System;
using System.Collections.Generic;
using System.IO;

namespace ITextPdfStuff
{
    public class MyPdfImageExtractor
    {
        private readonly string _pdfFileName;

        public MyPdfImageExtractor(string pdfFileName)
        {
            _pdfFileName = pdfFileName;
        }

        public void ExtractToDirectory(string directoryName)
        {
            using (var reader = new PdfReader(_pdfFileName))
            {
                // Avoid iText.Kernel.Crypto.BadPasswordException: https://stackoverflow.com/a/48065052/97803
                reader.SetUnethicalReading(true);
                using (var pdfDoc = new PdfDocument(reader))
                {
                    ExtractImagesOnAllPages(pdfDoc, directoryName);
                }
            }
        }

        private void ExtractImagesOnAllPages(PdfDocument pdfDoc, string directoryName)
        {
            Console.WriteLine($"Number of pdf objects: {pdfDoc.GetNumberOfPdfObjects()}");
            IEventListener strategy = new ImageRenderListener(Path.Combine(directoryName, @"image{0}.{1}"));
            PdfCanvasProcessor parser = new PdfCanvasProcessor(strategy);
            for (var i = 1; i <= pdfDoc.GetNumberOfPages(); i++)
            {
                parser.ProcessPageContent(pdfDoc.GetPage(i));
            }
        }
    }

    public class ImageRenderListener : IEventListener
    {
        public ImageRenderListener(string format)
        {
            this.format = format;
        }

        public void EventOccurred(IEventData data, EventType type)
        {
            if (data is ImageRenderInfo imageData)
            {
                try
                {
                    PdfImageXObject imageObject = imageData.GetImage();
                    if (imageObject == null)
                    {
                        Console.WriteLine("Image could not be read.");
                    }
                    else
                    {
                        File.WriteAllBytes(string.Format(format, index++, imageObject.IdentifyImageFileExtension()), imageObject.GetImageBytes());
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Image could not be read: {0}.", ex.Message);
                }
            }
        }

        public ICollection<EventType> GetSupportedEvents()
        {
            return null;
        }

        string format;
        int index = 0;
    }
}
PDFs internally support a very flexible bitmap image format, in particular as far as different color spaces are concerned.
iText's parsing API supports export of a subset thereof, essentially the subset of images that can easily be exported as regular JPEGs or PNGs.
Thus, it makes sense to try and export using the iText parsing API first. You can do that as follows:
Directory.CreateDirectory(@"extract\");
using (PdfReader reader = new PdfReader(@"Moving to Microsoft Visual Studio 2010 ebook.pdf"))
using (PdfDocument pdfDocument = new PdfDocument(reader))
{
    IEventListener strategy = new ImageRenderListener(@"extract\Moving to Microsoft Visual Studio 2010 ebook-i7-{0}.{1}");
    PdfCanvasProcessor parser = new PdfCanvasProcessor(strategy);
    for (var i = 1; i <= pdfDocument.GetNumberOfPages(); i++)
    {
        parser.ProcessPageContent(pdfDocument.GetPage(i));
    }
}
with the helper class ImageRenderListener:
public class ImageRenderListener : IEventListener
{
    public ImageRenderListener(string format)
    {
        this.format = format;
    }

    public void EventOccurred(IEventData data, EventType type)
    {
        if (data is ImageRenderInfo imageData)
        {
            try
            {
                PdfImageXObject imageObject = imageData.GetImage();
                if (imageObject == null)
                {
                    Console.WriteLine("Image could not be read.");
                }
                else
                {
                    File.WriteAllBytes(string.Format(format, index++, imageObject.IdentifyImageFileExtension()), imageObject.GetImageBytes());
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Image could not be read: {0}.", ex.Message);
            }
        }
    }

    public ICollection<EventType> GetSupportedEvents()
    {
        return null;
    }

    string format;
    int index = 0;
}
In the case of your example document it exports nearly 400 images successfully, among them your example image above.
But there are also fewer than 30 images it cannot export; on standard out you'll find "Image could not be read: The color space /DeviceN is not supported.."
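One small optional refinement to the listener above: returning null from GetSupportedEvents subscribes it to every event type the processor emits. If only images matter, you can narrow the subscription (the null version works fine too, so this is purely a tidy-up, not part of the fix):

```csharp
public ICollection<EventType> GetSupportedEvents()
{
    // Subscribe only to image render events; returning null means
    // "all event types", which also works but delivers more events.
    return new List<EventType> { EventType.RENDER_IMAGE };
}
```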