Update PDF image in-place - C#

I am trying to replace an image stream within an SDF document, using PDFNet 7.0.4 and netcoreapp3.1. As much as possible, I want to maintain the original object and its metadata; same dimensions, color system, compression, etc. Ideally object number and even generation would remain the same as well - the goal is that a before and after comparison would show only the changed pixels within the stream.
I'm getting the raw pixel data as a Stream object with this method:
private Stream GetImageData(int objectNum)
{
var image = new PDF.Image(sdfDoc.GetObj(objectNum));
var bits = image.GetBitsPerComponent();
var channels = image.GetComponentNum();
var bytesPerChannel = bits / 8;
var height = image.GetImageHeight();
var width = image.GetImageWidth();
var data = image.GetImageData();
var len = height * width * channels * bytesPerChannel;
using (var reader = new pdftron.Filters.FilterReader(data))
{
var buffer = new byte[len];
reader.Read(buffer);
return new MemoryStream(buffer);
}
}
After manipulating the image data, I want to update it before saving the underlying SDFDoc object. I've tried using the following method:
private void SetImageData(int objectNum, Stream stream)
{
var image = new PDF.Image(sdfDoc.GetObj(objectNum));
var bits = image.GetBitsPerComponent();
var channels = image.GetComponentNum();
var bytesPerChannel = bits / 8;
var height = image.GetImageHeight();
var width = image.GetImageWidth();
var len = height * width * channels * bytesPerChannel;
if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
using (var ms = new MemoryStream())
using (var writer = new pdftron.Filters.FilterWriter(image.GetImageData()))
{
stream.CopyTo(ms);
writer.WriteBuffer(ms.ToArray());
}
}
This runs without error, but nothing actually appears to get updated. I've tried playing around with SDFObj.SetStreamData(), but haven't been able to make that work either. What is the lowest impact, highest performance way to directly replace just the raw pixel data within an image stream?
Edit:
I have this halfway working with this method:
private void SetImageData(int objectNum, Stream stream)
{
var sdfObj = sdfDoc.GetObj(objectNum);
var image = new PDF.Image(sdfObj);
var bits = image.GetBitsPerComponent();
var channels = image.GetComponentNum();
var bytesPerChannel = bits / 8;
var height = image.GetImageHeight();
var width = image.GetImageWidth();
var len = height * width * channels * bytesPerChannel;
if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
var buffer = new byte[len];
stream.Read(buffer, 0, len);
sdfObj.SetStreamData(buffer);
sdfObj.Erase("Filters");
}
This works as expected, but with the obvious caveat that it just ignores any existing compression and turns the image into a raw uncompressed stream.
I've tried sdfObj.SetStreamData(buffer, image.GetImageData()); and sdfObj.SetStreamData(buffer, image.GetImageData().GetAttachedFilter());
and this does update the object in the file, but the resulting image fails to render.

The following code shows how to retain an Image object, but change the actual stream data.
static private Stream GetImageData(Obj o)
{
var image = new pdftron.PDF.Image(o);
var bits = image.GetBitsPerComponent();
var channels = image.GetComponentNum();
var bytesPerChannel = bits / 8;
var height = image.GetImageHeight();
var width = image.GetImageWidth();
var data = image.GetImageData();
var len = height * width * channels * bytesPerChannel;
using (var reader = new pdftron.Filters.FilterReader(data))
{
var buffer = new byte[len];
reader.Read(buffer);
return new MemoryStream(buffer);
}
}
static private void SetImageData(PDFDoc doc, Obj o, Stream stream)
{
var image = new pdftron.PDF.Image(o);
var bits = image.GetBitsPerComponent();
var channels = image.GetComponentNum();
var bytesPerChannel = bits / 8;
var height = image.GetImageHeight();
var width = image.GetImageWidth();
var len = height * width * channels * bytesPerChannel;
if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
o.Erase("DecodeParms"); // Important: this won't be accurate after SetStreamData
// now we actually do the stream swap
o.SetStreamData((stream as MemoryStream).ToArray(), new FlateEncode(null));
}
static private void InvertPixels(Stream stream)
{
// This function is for DEMO purposes
// this code assumes 3 channel 8bit
long length = stream.Length;
long pixels = length / 3;
for(int p = 0; p < pixels; ++p)
{
int c1 = stream.ReadByte();
int c2 = stream.ReadByte();
int c3 = stream.ReadByte();
stream.Seek(-3, SeekOrigin.Current);
stream.WriteByte((byte)(255 - c1));
stream.WriteByte((byte)(255 - c2));
stream.WriteByte((byte)(255 - c3));
}
stream.Seek(0, SeekOrigin.Begin);
}
And then here is sample code to use it:
static void Main(string[] args)
{
PDFNet.Initialize();
var x = new PDFDoc(@"2002.04610.pdf");
x.InitSecurityHandler();
var o = x.GetSDFDoc().GetObj(381);
Stream source = GetImageData(o);
InvertPixels(source);
SetImageData(x, o, source);
x.Save(@"2002.04610-MOD.pdf", SDFDoc.SaveOptions.e_remove_unused);
}

Related

ANT+ FE-C Writing User Configuration data Page 55 (0x37) to Smart Trainer - no change in resistance noticed

I am trying to write user configuration data (Data Page 55, 0x37) to a smart fitness device (bicycle trainer) via FE-C over Bluetooth. Whether I use different values or the maximum values, I do not notice any change in resistance. Is there a mistake in how the data is processed? The second method (private static byte[] CreateFECUserConfiguration()) returns a byte array which will be written to the device.
private async Task WriteUserConfiguration(GattCharacteristic characteristic)
{
DataWriter writer = new DataWriter();
byte[] bytes = CreateFECUserConfiguration();
writer.WriteBytes(bytes);
var valResult = await characteristic.WriteValueAsync(writer.DetachBuffer());
if (valResult == GattCommunicationStatus.Success)
{
Debug.WriteLine("Write UserConfiguration Successful");
}
}
// create values for testing, will be provided by the user later on
private static byte[] CreateFECUserConfiguration()
{
byte[] bytes = new byte[13]; // size of message
UInt16 userWeight = (ushort)(655.34 / 0.01); // 0-655.34
byte[] userWeightBytes = BitConverter.GetBytes(userWeight);
byte bicycleWheelDiameterOffset = 10; // 0-10, 0.5 byte
UInt16 bicycleWeight = 50 * 20; // 0 – 50, * 20, 1.5 byte
// start merging bicycle wheel diameter offset and bicycle weight + putting them in the right order
byte[] tempWheelDiameterOffset = new byte[1] { bicycleWheelDiameterOffset };
BitArray bicycleWheelDiameterOffsetBits = new BitArray(tempWheelDiameterOffset);
byte[] testbicycleWeightBytes = BitConverter.GetBytes(bicycleWeight);
BitArray testBicycleWeight = new BitArray(testbicycleWeightBytes);
bool[] tempBicycleWeightPartTwo = new bool[8] { testBicycleWeight[4], testBicycleWeight[5], testBicycleWeight[6], testBicycleWeight[7], testBicycleWeight[8], testBicycleWeight[9], testBicycleWeight[10], testBicycleWeight[11] };
BitArray bicycleWeightBitsTwo = new BitArray(tempBicycleWeightPartTwo);
bool[] mergeBitsAsBools = new bool[8] { testBicycleWeight[0], testBicycleWeight[1], testBicycleWeight[2], testBicycleWeight[3], bicycleWheelDiameterOffsetBits[0], bicycleWheelDiameterOffsetBits[1], bicycleWheelDiameterOffsetBits[2], bicycleWheelDiameterOffsetBits[3] };
BitArray tempMergeWheelDiameterOffsetPlusBicycleWeight = new BitArray(mergeBitsAsBools);
byte[] wheelDiameterOffsetPlusBicycleWeight = new byte[1];
byte[] bicycleWeightByteTwo = new byte[1];
tempMergeWheelDiameterOffsetPlusBicycleWeight.CopyTo(wheelDiameterOffsetPlusBicycleWeight, 0);
bicycleWeightBitsTwo.CopyTo(bicycleWeightByteTwo, 0);
//end merging
byte bicycleWheelDiameter = (byte)(0.5 / 0.01); // 0 – 2.54m
byte gearRatio = (byte)(1 / 0.03); // 0.03 – 7.65
bytes[0] = 0xA4;
bytes[1] = 0x09; // length
bytes[2] = 0x4F; // message type
bytes[3] = 0x05; // channel
bytes[4] = 0x37; // Page Number 55
bytes[5] = userWeightBytes[0]; // User Weight LSB
bytes[6] = userWeightBytes[1]; // User Weight MSB
bytes[7] = 0xFF; // Reserved for future use
bytes[8] = wheelDiameterOffsetPlusBicycleWeight[0]; // Bicycle Wheel Diameter Offset 0,5 byte + Bicycle Weight LSN (probably typo in documentation -> LSB?) 0,5 byte
bytes[9] = bicycleWeightByteTwo[0]; // BicycleWeight MSB
bytes[10] = bicycleWheelDiameter; // Bicycle Wheel Diameter
bytes[11] = gearRatio; // Gear Ratio
bytes[12] = ComputeChecksum(bytes); // Method to calculate checksum
return bytes;
}
Ant+ FE-C Data Page 55 (0x37)
Ant+ message format structure
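For what it's worth, the BitArray juggling above can be expressed with plain bit shifts. The sketch below reflects my reading of the field layout (wheel diameter offset in the low nibble of byte 8, the low nibble of the bicycle weight in its high nibble), which is worth double-checking against the ANT+ FE-C specification before relying on it:

```csharp
// Hypothetical helper: packs page-55 bytes 8 and 9 with bitwise ops.
// Assumed layout (verify against the ANT+ FE-C spec!):
//   byte 8, bits 0-3: bicycle wheel diameter offset
//   byte 8, bits 4-7: bicycle weight LSN (least significant nibble)
//   byte 9:           remaining 8 bits of the 12-bit bicycle weight
static (byte b8, byte b9) PackWeightAndOffset(ushort bicycleWeight, byte wheelDiameterOffset)
{
    byte b8 = (byte)((wheelDiameterOffset & 0x0F) | ((bicycleWeight & 0x0F) << 4));
    byte b9 = (byte)((bicycleWeight >> 4) & 0xFF);
    return (b8, b9);
}
```

For example, a 50 kg bicycle (stored as 50 * 20 = 1000 = 0x3E8) with offset 10 yields b8 = 0x8A and b9 = 0x3E. If the device still ignores the page, comparing a byte dump of the message against a known-good capture is usually the quickest way to find which nibble is swapped.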

Create Dicom Video from mp4 [duplicate]

This question already has answers here:
Creating Dicom file out of video
(1 answer)
How to encapsulate the H.264 bitstream of video file in C++
(1 answer)
Closed 7 months ago.
I'm trying to create a DICOM video from an MP4 file and add my own tags. I can create it, but when I try to play the video back in a DICOM viewer such as MicroDicom, I only see a black image.
public async void VideoToDicom(string srcFilePath, string videoName, string videoId)
{
// var dataset = new DicomDataset(DicomTransferSyntax.MPEG4AVCH264BDCompatibleHighProfileLevel41);
var dataset = new DicomDataset(DicomTransferSyntax.MPEG4AVCH264HighProfileLevel42For2DVideo);
var metadata = await VideoUtils.Instance.GetVideoMetaData(srcFilePath);
dataset.Add(DicomTag.SOPClassUID, DicomUID.CTImageStorage);
dataset.Add(DicomTag.SOPInstanceUID, DicomUID.Generate());
dataset.Add(DicomTag.StudyInstanceUID, DicomUID.Generate());
dataset.Add(DicomTag.SeriesInstanceUID, DicomUID.Generate());
var numberOfFrames = metadata.Duration.TotalSeconds * metadata.VideoData.Fps;
var size = metadata.VideoData.FrameSize.Split("x");
dataset.Add(DicomTag.NumberOfFrames, (int)numberOfFrames);
dataset.Add(DicomTag.Columns, Convert.ToUInt16(size[1]));
dataset.Add(DicomTag.Rows, Convert.ToUInt16(size[0]));
dataset.Add(DicomTag.BitsAllocated, "8");
dataset.Add(DicomTag.FrameAcquisitionDuration, "40");
dataset.Add(DicomTag.PixelRepresentation, (ushort)0);
dataset.Add(DicomTag.VideoImageFormatAcquired, "MPEG4");
dataset.Add(DicomTag.LossyImageCompressionMethod, "ISO_14496_10");
dataset.Add(DicomTag.LossyImageCompression, "01");
dataset.Add(DicomTag.PhotometricInterpretation, PhotometricInterpretation.YbrPartial420.Value);
DicomPixelData pixelData = CreateDicomVideoPixelData(metadata, dataset);
byte[] videoBytes = GetVideoData(srcFilePath);
MemoryByteBuffer buffer = new MemoryByteBuffer(videoBytes);
pixelData.AddFrame(buffer);
var dicomFile = new DicomFile(dataset);
dicomFile.FileMetaInfo.TransferSyntax = DicomTransferSyntax.MPEG4AVCH264HighProfileLevel42For2DVideo;
dicomFile.Save("C:\\testvideos\\test.dcm");
}
private DicomPixelData CreateDicomVideoPixelData(MetaData metadata, DicomDataset dataset)
{
DicomPixelData pixelData = DicomPixelData.Create(dataset, true);
var numberOfFrames = metadata.Duration.TotalSeconds * metadata.VideoData.Fps;
var size = metadata.VideoData.FrameSize.Split("x");
pixelData.Width = Convert.ToUInt16(size[0]);
pixelData.Height = Convert.ToUInt16(size[1]);
pixelData.NumberOfFrames = (int)numberOfFrames;
pixelData.HighBit = 7;
pixelData.BitsStored = 8;
//pixelData.BitsAllocated = 8; Readonly?
pixelData.SamplesPerPixel = 3;
pixelData.PlanarConfiguration = 0;
pixelData.PixelRepresentation = 0;
//pixelData.PhotometricInterpretation = PhotometricInterpretation.YbrPartial420;
return pixelData;
}
private static byte[] GetVideoData(string videoFile)
{
byte[] buffer;
FileStream fileStream = new FileStream(videoFile, FileMode.Open, FileAccess.Read);
try
{
int length = (int)fileStream.Length; // get file length
buffer = new byte[length]; // create buffer
int count; // actual number of bytes read
int sum = 0; // total number of bytes read
// read until Read method returns 0 (end of the stream has been reached)
while ((count = fileStream.Read(buffer, sum, length - sum)) > 0)
sum += count; // sum is a buffer offset for next reading
}
finally
{
fileStream.Close();
}
return buffer;
}
Even if I cannot see a video, I would at least expect to get all the frames from it.
Thanks for the help.
My file on dicom viewer
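As an aside, the manual read loop in GetVideoData above is equivalent to a framework one-liner, assuming the whole file fits in memory:

```csharp
// Reads the entire video file into a byte array in one call;
// equivalent to the manual FileStream read loop above.
private static byte[] GetVideoData(string videoFile)
{
    return System.IO.File.ReadAllBytes(videoFile);
}
```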

C# check images are equal with tolerance

I have this code where I send images from a thermal camera. getImage() returns the actual image provided by the camera. There is no way to check directly whether the camera can provide a 'new' image, so I wrote this method to compare two images:
class ImageCompare
{
public enum CompareResult
{
CompareOK,
SizeMismatch,
PixelMismatch
};
public static CompareResult CompareImages(Image i1, Image i2)
{
CompareResult cr = CompareResult.CompareOK;
if (i1.Size != i2.Size)
{
cr = CompareResult.SizeMismatch;
}
else
{
ImageConverter ic = new ImageConverter();
byte[] btImage1 = new byte[1];
btImage1 = (byte[])ic.ConvertTo(i1, btImage1.GetType());
byte[] btImage2 = new byte[1];
btImage2 = (byte[])ic.ConvertTo(i2, btImage2.GetType());
//compute hashes
SHA256Managed shaM = new SHA256Managed();
byte[] hash1 = shaM.ComputeHash(btImage1);
byte[] hash2 = shaM.ComputeHash(btImage2);
for (int i = 0; i < hash1.Length && i < hash2.Length
&& cr == CompareResult.CompareOK; i++)
{
if (hash1[i] != hash2[i])
cr = CompareResult.PixelMismatch;
}
}
return cr;
}
}
and here is how I use this class:
private static void HandleImageSending(Socket client, Socket s)
{
int sent;
int imageCount = 0;
long totalSize = 0;
try
{
while (true)
{
Console.WriteLine("Starting sending...");
Image old = getImage();
byte[] bmpBytes;
using (Image bmp = getImage())
using (MemoryStream ms = new MemoryStream())
{
if (ImageCompare.CompareImages(bmp, old) == ImageCompare.CompareResult.CompareOK)
{
bmp.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
bmpBytes = ms.ToArray();
sent = SendVarData(client, bmpBytes);
imageCount++;
totalSize += sent;
old = bmp;
}
}
}
}
catch (Exception e)
{ ... }
So my problem is that comparing by hash marks the images as 'different' in about 19 of 20 cases. Since the camera provides only 8 fps, there must be something wrong.
Is there a possibility of comparing with a kind of tolerance, so that, say, 5 or 10 percent of the compared new image may differ from the old?
Since this is used on a mini PC, I would like to use as little CPU load as possible.
Is there anyone who can help me out here?
Indexing the image (and decreasing its size) should give the same result for similar images, using
Bitmap imgtarget = imgsource.Clone(
new Rectangle(0, 0, imgsource.Width, imgsource.Height),
PixelFormat.Format8bppIndexed);
from another Stack Overflow answer.
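A hash can never express "almost equal": changing a single pixel changes the digest completely, so any sensor noise defeats it. One possible sketch of a tolerance-based compare, assuming both frames are already available as raw byte buffers of equal length (e.g. obtained via Bitmap.LockBits), counts the byte values that differ by more than a threshold and checks the ratio against an allowed percentage:

```csharp
// Sketch: returns true when at most maxDiffPercent of the byte values
// differ by more than valueTolerance. Assumes equal-length raw pixel
// buffers (same dimensions and pixel format).
static bool AreSimilar(byte[] a, byte[] b, int valueTolerance, double maxDiffPercent)
{
    if (a.Length != b.Length) return false;
    int differing = 0;
    for (int i = 0; i < a.Length; i++)
    {
        if (Math.Abs(a[i] - b[i]) > valueTolerance)
            differing++;
    }
    return 100.0 * differing / a.Length <= maxDiffPercent;
}
```

This is a single linear pass with no allocations, so it should stay cheap on a mini PC; the two knobs let you absorb per-pixel noise (valueTolerance) separately from localized changes (maxDiffPercent).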

get image pixels into array

I am trying to rewrite the following code from Silverlight to WPF; I found it here: https://slmotiondetection.codeplex.com/
My problem is that WriteableBitmap.Pixels is missing in WPF. How can I achieve the same thing? I understand how the code works, but I started with C# about a week ago.
Could you please point me in the right direction?
public WriteableBitmap GetMotionBitmap(WriteableBitmap current)
{
if (_previousGrayPixels != null && _previousGrayPixels.Length > 0)
{
WriteableBitmap motionBmp = new WriteableBitmap(current.PixelWidth, current.PixelHeight);
int[] motionPixels = motionBmp.Pixels;
int[] currentPixels = current.Pixels;
int[] currentGrayPixels = ToGrayscale(current).Pixels;
for (int index = 0; index < current.Pixels.Length; index++)
{
byte previousGrayPixel = BitConverter.GetBytes(_previousGrayPixels[index])[0];
byte currentGrayPixel = BitConverter.GetBytes(currentGrayPixels[index])[0];
if (Math.Abs(previousGrayPixel - currentGrayPixel) > Threshold)
{
motionPixels[index] = _highlightColor;
}
else
{
motionPixels[index] = currentPixels[index];
}
}
_previousGrayPixels = currentGrayPixels;
return motionBmp;
}
else
{
_previousGrayPixels = ToGrayscale(current).Pixels;
return current;
}
}
public WriteableBitmap ToGrayscale(WriteableBitmap source)
{
WriteableBitmap gray = new WriteableBitmap(source.PixelWidth, source.PixelHeight);
int[] grayPixels = gray.Pixels;
int[] sourcePixels = source.Pixels;
for (int index = 0; index < sourcePixels.Length; index++)
{
int pixel = sourcePixels[index];
byte[] pixelBytes = BitConverter.GetBytes(pixel);
byte grayPixel = (byte)(0.3 * pixelBytes[2] + 0.59 * pixelBytes[1] + 0.11 * pixelBytes[0]);
pixelBytes[0] = pixelBytes[1] = pixelBytes[2] = grayPixel;
grayPixels[index] = BitConverter.ToInt32(pixelBytes, 0);
}
return gray;
}
In order to get the bitmap's raw pixel data you may use one of the BitmapSource.CopyPixels methods, e.g. like this:
var bytesPerPixel = (source.Format.BitsPerPixel + 7) / 8;
var stride = source.PixelWidth * bytesPerPixel;
var bufferSize = source.PixelHeight * stride;
var buffer = new byte[bufferSize];
source.CopyPixels(buffer, stride, 0);
Writing to a WriteableBitmap can be done by one of its WritePixels methods.
Alternatively you may access the bitmap buffer by the WriteableBitmap's BackBuffer property.
For converting a bitmap to grayscale, you might use a FormatConvertedBitmap like this:
var grayscaleBitmap = new FormatConvertedBitmap(source, PixelFormats.Gray8, null, 0d);
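Putting CopyPixels and WritePixels together, a round trip might look like this minimal sketch (the tiny 2x2 Bgra32 bitmap is just a placeholder for illustration; substitute your own source):

```csharp
// Copy pixels out of a BitmapSource, optionally modify them, and write
// them into a new WriteableBitmap. Bgra32 means 4 bytes per pixel.
var source = BitmapSource.Create(2, 2, 96, 96, PixelFormats.Bgra32, null,
    new byte[16], 8); // placeholder 2x2 source for illustration

var bytesPerPixel = (source.Format.BitsPerPixel + 7) / 8;
var stride = source.PixelWidth * bytesPerPixel;
var buffer = new byte[source.PixelHeight * stride];
source.CopyPixels(buffer, stride, 0);

// ... manipulate buffer here ...

var target = new WriteableBitmap(source.PixelWidth, source.PixelHeight,
    source.DpiX, source.DpiY, source.Format, null);
target.WritePixels(
    new Int32Rect(0, 0, source.PixelWidth, source.PixelHeight),
    buffer, stride, 0);
```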

Windows Phone Encoding and Decoding audio using NSpeex. Having issue with decoding?

I am trying to encode recorded audio using NSpeex, transfer it over the internet, and decode it on the other end. I am doing all this in Windows Phone 7/8. To encode and decode I am using the following code, but after decoding I do not get a result that I can play back correctly. Can anyone provide me with encoding and decoding code that works with WP7/8 recorded audio?
private static Microphone mic = Microphone.Default;
private static byte[] EncodeSpeech(byte[] buf, int len)
{
BandMode mode = GetBandMode(mic.SampleRate);
SpeexEncoder encoder = new SpeexEncoder(mode);
// set encoding quality to lowest (which will generate the smallest size in the fastest time)
encoder.Quality = 1;
int inDataSize = len / 2;
// convert to short array
short[] data = new short[inDataSize];
int sampleIndex = 0;
for (int index = 0; index < len; index += 2, sampleIndex++)
{
data[sampleIndex] = BitConverter.ToInt16(buf, index);
}
// note: the number of samples per frame must be a multiple of encoder.FrameSize
inDataSize = inDataSize - inDataSize % encoder.FrameSize;
var encodedData = new byte[len];
int encodedBytes = encoder.Encode(data, 0, inDataSize, encodedData, 0, len);
if (encodedBytes != 0)
{
// each chunk is laid out as follows:
// | 4-byte total chunk size | 4-byte encoded buffer size | <encoded-bytes> |
byte[] inDataSizeBuf = BitConverter.GetBytes(inDataSize);
byte[] sizeBuf = BitConverter.GetBytes(encodedBytes + inDataSizeBuf.Length);
byte[] returnBuf = new byte[encodedBytes + sizeBuf.Length + inDataSizeBuf.Length];
sizeBuf.CopyTo(returnBuf, 0);
inDataSizeBuf.CopyTo(returnBuf, sizeBuf.Length);
Array.Copy(encodedData, 0, returnBuf, sizeBuf.Length + inDataSizeBuf.Length, encodedBytes);
return returnBuf;
}
else
return buf;
}
private byte[] DecodeSpeech(byte[] buf)
{
BandMode mode = GetBandMode(mic.SampleRate);
SpeexDecoder decoder = new SpeexDecoder(mode);
byte[] inDataSizeBuf = new byte[4];
byte[] sizeBuf = new byte[4];
byte[] encodedBuf = new byte[buf.Length - 8];
Array.Copy(buf, 0, sizeBuf, 0, 4);
Array.Copy(buf, 4, inDataSizeBuf, 0, 4);
Array.Copy(buf, 8, encodedBuf, 0, buf.Length - 8);
int inDataSize = BitConverter.ToInt32(inDataSizeBuf, 0);
int size = BitConverter.ToInt32(sizeBuf, 0);
short[] decodedBuf = new short[inDataSize];
int decodedSize = decoder.Decode(encodedBuf, 0, encodedBuf.Length, decodedBuf, 0, false);
byte[] returnBuf = new byte[inDataSize * 2];
for (int index = 0; index < decodedBuf.Length; index++)
{
byte[] temp = BitConverter.GetBytes(decodedBuf[index]);
Array.Copy(temp, 0, returnBuf, index * 2, 2);
}
return returnBuf;
}
private static BandMode GetBandMode(int sampleRate)
{
if (sampleRate <= 8000)
return BandMode.Narrow;
if (sampleRate <= 16000)
return BandMode.Wide;
return BandMode.UltraWide;
}
I think your problem may be that you are newing up a new SpeexEncoder every time you want to encode audio. You should try making it a member of your class and reusing it.
I looked at the code for NSpeex and noticed that SpeexEncoder uses NbEncoder for the narrow band. That class appears to keep a history of some previous audio data in order to perform the encoding, which means the output from different encoder instances would not fit together.
private static Microphone mic = Microphone.Default;
private static SpeexEncoder encoder = CreateEncoder();
private static SpeexEncoder CreateEncoder()
{
BandMode mode = GetBandMode(mic.SampleRate);
SpeexEncoder encoder = new SpeexEncoder(mode);
// set encoding quality to lowest (which will generate the smallest size in the fastest time)
encoder.Quality = 1;
return encoder;
}
private static byte[] EncodeSpeech(byte[] buf, int len)
{
int inDataSize = len / 2;
...
