I'm using Ghostscript.NET, a handy C# wrapper for Ghostscript functionality. I have a batch of PDFs being sent from the client side to be converted to images on the ASP.NET Web API server and returned to the client.
public static IEnumerable<Image> PdfToImagesGhostscript(byte[] binaryPdfData, int dpi)
{
    List<Image> pagesAsImages = new List<Image>();
    GhostscriptVersionInfo gvi = new GhostscriptVersionInfo(AppDomain.CurrentDomain.BaseDirectory + @"\bin\gsdll32.dll");
    using (var pdfDataStream = new MemoryStream(binaryPdfData))
    using (var rasterizer = new Ghostscript.NET.Rasterizer.GhostscriptRasterizer())
    {
        rasterizer.Open(pdfDataStream, gvi, true);
        for (int i = 1; i <= rasterizer.PageCount; i++)
        {
            Image pageAsImage = rasterizer.GetPage(dpi, dpi, i); // Out of Memory Exception on this line
            pagesAsImages.Add(pageAsImage);
        }
    }
    return pagesAsImages;
}
This generally works fine (I usually use 500 DPI, which I know is high, but even dropping to 300 DPI I can reproduce this error). But if I give it many PDFs from the client side (150 one-page PDFs, for example), it will often hit an OutOfMemoryException in the Ghostscript.NET rasterizer. How can I overcome this? Should this be threaded? If so, how would that work? Would it help to use the 64-bit version of Ghostscript? Thanks in advance.
I'm new to this myself and am on here looking for techniques.
The example in the documentation shows this:
for (int page = 1; page <= _rasterizer.PageCount; page++)
{
    var docName = String.Format("Page-{0}.pdf", page);
    var pageFilePath = Path.Combine(outputPath, docName);
    var pdf = _rasterizer.GetPage(desired_x_dpi, desired_y_dpi, page); // use the loop variable as the page number
    pdf.Save(pageFilePath);
    pagesAsImages.Add(pdf);
}
It looks like you aren't saving your files.
I am still working on getting something similar to work on my end as well. Currently, I have two methods that I'm going to try, starting with the GhostscriptProcessor:
private static void GhostscriptNetProcess(String fileName, String outputPath)
{
    var version = Ghostscript.NET.GhostscriptVersionInfo.GetLastInstalledVersion();
    var source = (fileName.IndexOf(' ') == -1) ? fileName : String.Format("\"{0}\"", fileName);
    var gsArgs = new List<String>();
    gsArgs.Add("-q");
    gsArgs.Add("-dNOPAUSE");
    gsArgs.Add("-dNOPROMPT");
    gsArgs.Add("-sDEVICE=pdfwrite");
    gsArgs.Add(String.Format(@"-sOutputFile={0}", outputPath));
    gsArgs.Add(source);
    var processor = new Ghostscript.NET.Processor.GhostscriptProcessor(version, false);
    processor.Process(gsArgs.ToArray());
}
The version below is similar to yours, and is what I started out using until I found other code examples:
private static void GhostscriptNetRaster(String fileName, String outputPath)
{
    var version = Ghostscript.NET.GhostscriptVersionInfo.GetLastInstalledVersion();
    using (var rasterizer = new Ghostscript.NET.Rasterizer.GhostscriptRasterizer())
    {
        rasterizer.Open(File.Open(fileName, FileMode.Open, FileAccess.Read), version, false);
        for (int page = 1; page <= rasterizer.PageCount; page++) // page numbers are 1-based
        {
            var img = rasterizer.GetPage(96, 96, page);
            img.Save(Path.Combine(outputPath, String.Format("Page-{0}.png", page))); // one file per page, so pages aren't overwritten
        }
    }
}
Does that get you anywhere?
You don't have to rasterize all pages with the same GhostscriptRasterizer instance. Use a disposable rasterizer for each page and collect the results in a List<Image> or List<byte[]>.
Example that collects the results as a list of JPEG-encoded byte arrays:
List<byte[]> result = new List<byte[]>();
for (int i = 1; i <= pdfPagesCount; i++)
{
    using (var pageRasterizer = new GhostscriptRasterizer())
    {
        pageRasterizer.Open(stream, gsVersion, true);
        using (Image tempImage = pageRasterizer.GetPage(dpiX, dpiY, i))
        {
            var encoder = ImageCodecInfo.GetImageEncoders().First(c => c.FormatID == System.Drawing.Imaging.ImageFormat.Jpeg.Guid);
            var encoderParams = new EncoderParameters() { Param = new[] { new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 95L) } };
            using (MemoryStream memoryStream = new MemoryStream())
            {
                tempImage.Save(memoryStream, encoder, encoderParams);
                result.Add(memoryStream.ToArray());
            }
        }
    }
}
If you don't know the number of pages in the PDF, you can open a rasterizer once just to read the PageCount property.
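For instance, a minimal sketch of that approach (assuming the PDF bytes are already in memory and gsVersion points at a valid Ghostscript installation; GetPageCount is just an illustrative helper name, not part of the library):
// requires: using System.IO; using Ghostscript.NET; using Ghostscript.NET.Rasterizer;
private static int GetPageCount(byte[] pdfData, GhostscriptVersionInfo gsVersion)
{
    using (var stream = new MemoryStream(pdfData))
    using (var rasterizer = new GhostscriptRasterizer())
    {
        rasterizer.Open(stream, gsVersion, true);
        return rasterizer.PageCount; // open once, read the page count, dispose
    }
}
// then feed the result into the per-page loop above:
// int pdfPagesCount = GetPageCount(binaryPdfData, gsVersion);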
The question Resizing a DXGI Resource or Texture2D in SharpDX answers how to resize a screenshot taken with SharpDX by a power of two. I'm trying to resize the screenshot by a variable amount (e.g. 80% of the original size, not necessarily a power of two). Right now I have found "a way to make my goal work" by resizing the bitmap generated from the screenshot. I achieve this by first converting it into a WIC image:
private void button1_Click(object sender, EventArgs e)
{
    Stopwatch stopWatchInstance = Stopwatch.StartNew();
    //or Bitmap.save(new filestream)
    var stream = File.OpenRead("c:\\test\\pc.png");
    var test = DrawResizedImage(stream);
    stopWatchInstance.Stop();
    File.WriteAllBytes("c:\\test\\result.png", test.ToArray());
    int previousCalculationTimeServer = (int)(stopWatchInstance.ElapsedMilliseconds % Int32.MaxValue);
}

MemoryStream DrawResizedImage(Stream fileName)
{
    ImagingFactory wic = new WIC.ImagingFactory();
    D2D.Factory d2d = new D2D.Factory();
    FormatConverter image = CreateWicImage(wic, fileName);
    var wicBitmap = new WIC.Bitmap(wic, image.Size.Width, image.Size.Height, WIC.PixelFormat.Format32bppPBGRA, WIC.BitmapCreateCacheOption.CacheOnDemand);
    var target = new D2D.WicRenderTarget(d2d, wicBitmap, new D2D.RenderTargetProperties());
    var bmpPicture = D2D.Bitmap.FromWicBitmap(target, image);
    target.BeginDraw();
    {
        target.DrawBitmap(bmpPicture, new SharpDX.RectangleF(0, 0, target.Size.Width, target.Size.Height), 1.0f, D2D.BitmapInterpolationMode.Linear);
    }
    target.EndDraw();
    var ms = new MemoryStream();
    SaveD2DBitmap(wic, wicBitmap, ms);
    return ms;
}

void SaveD2DBitmap(WIC.ImagingFactory wicFactory, WIC.Bitmap wicBitmap, Stream outputStream)
{
    var encoder = new WIC.BitmapEncoder(wicFactory, WIC.ContainerFormatGuids.Png);
    encoder.Initialize(outputStream);
    var frame = new WIC.BitmapFrameEncode(encoder);
    frame.Initialize();
    frame.SetSize(wicBitmap.Size.Width, wicBitmap.Size.Height);
    var pixelFormat = wicBitmap.PixelFormat;
    frame.SetPixelFormat(ref pixelFormat);
    frame.WriteSource(wicBitmap);
    frame.Commit();
    encoder.Commit();
}

WIC.FormatConverter CreateWicImage(WIC.ImagingFactory wicFactory, Stream stream)
{
    var decoder = new WIC.PngBitmapDecoder(wicFactory);
    var decodeStream = new WIC.WICStream(wicFactory, stream);
    decoder.Initialize(decodeStream, WIC.DecodeOptions.CacheOnLoad);
    var decodeFrame = decoder.GetFrame(0);
    var scaler = new BitmapScaler(wicFactory);
    scaler.Initialize(decodeFrame, 2000, 2000, SharpDX.WIC.BitmapInterpolationMode.Fant);
    var test = (BitmapSource)scaler;
    var converter = new WIC.FormatConverter(wicFactory);
    converter.Initialize(test, WIC.PixelFormat.Format32bppPBGRA);
    return converter;
}
Upon clicking the button, the above code resizes a bitmap (containing the screenshot) to 2000x2000. However, the code is very slow: it takes about 200 ms (not counting the file read and write time). I use BitmapScaler to do the resizing.
Does anyone know how to variably resize the output produced in the Resizing a DXGI Resource or Texture2D in SharpDX question, so that the resizing becomes much faster? I tried to find documentation on applying the BitmapScaler directly to any of the objects in the answered code, but didn't succeed.
I've uploaded the above code as a small Visual Studio project which compiles.
Here is a rewritten and commented version of your program that grabs a video frame from the desktop using DXGI Output Duplication, resizes it by an arbitrary ratio using Direct2D, and saves it to a .jpeg file using WIC.
Everything stays on the GPU until the image is saved to a file (stream) with WIC. On my PC, I get something like 10-15 ms for the capture and resize, and 30-40 ms for the WIC save to file.
I haven't used the D2D Scale effect I mentioned in my comment, because the ID2D1DeviceContext::DrawBitmap method can do the resize with various interpolation modes, without using any effect. But you can use the same code to apply hardware-accelerated effects.
Note that some objects I create and dispose in button1_Click (factories, etc.) could be created once in the constructor and reused.
using System;
using System.Windows.Forms;
using System.IO;
using DXGI = SharpDX.DXGI;
using D3D11 = SharpDX.Direct3D11;
using D2D = SharpDX.Direct2D1;
using WIC = SharpDX.WIC;
using Interop = SharpDX.Mathematics.Interop;

namespace WindowsFormsApp1
{
    public partial class Form1 : Form
    {
        private readonly D3D11.Device _device;
        private readonly DXGI.OutputDuplication _outputDuplication;

        public Form1()
        {
            InitializeComponent();
            var adapterIndex = 0; // adapter index
            var outputIndex = 0; // output index
            using (var dxgiFactory = new DXGI.Factory1())
            using (var dxgiAdapter = dxgiFactory.GetAdapter1(adapterIndex))
            using (var output = dxgiAdapter.GetOutput(outputIndex))
            using (var dxgiOutput = output.QueryInterface<DXGI.Output1>())
            {
                _device = new D3D11.Device(dxgiAdapter,
#if DEBUG
                    D3D11.DeviceCreationFlags.Debug |
#endif
                    D3D11.DeviceCreationFlags.BgraSupport); // for D2D support
                _outputDuplication = dxgiOutput.DuplicateOutput(_device);
            }
        }

        protected override void Dispose(bool disposing) // remove from Designer.cs
        {
            if (disposing && components != null)
            {
                components.Dispose();
                _outputDuplication?.Dispose();
                _device?.Dispose();
            }
            base.Dispose(disposing);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            var ratio = 0.8; // resize ratio
            using (var dxgiDevice = _device.QueryInterface<DXGI.Device>())
            using (var d2dFactory = new D2D.Factory1())
            using (var d2dDevice = new D2D.Device(d2dFactory, dxgiDevice))
            {
                // acquire frame
                _outputDuplication.AcquireNextFrame(10000, out var _, out var frame);
                using (frame)
                {
                    // get DXGI surface/bitmap from resource
                    using (var frameDc = new D2D.DeviceContext(d2dDevice, D2D.DeviceContextOptions.None))
                    using (var frameSurface = frame.QueryInterface<DXGI.Surface>())
                    using (var frameBitmap = new D2D.Bitmap1(frameDc, frameSurface))
                    {
                        // create a GPU resized texture/surface/bitmap
                        var desc = new D3D11.Texture2DDescription
                        {
                            CpuAccessFlags = D3D11.CpuAccessFlags.None, // only GPU
                            BindFlags = D3D11.BindFlags.RenderTarget, // to use D2D
                            Format = DXGI.Format.B8G8R8A8_UNorm,
                            Width = (int)(frameSurface.Description.Width * ratio),
                            Height = (int)(frameSurface.Description.Height * ratio),
                            OptionFlags = D3D11.ResourceOptionFlags.None,
                            MipLevels = 1,
                            ArraySize = 1,
                            SampleDescription = { Count = 1, Quality = 0 },
                            Usage = D3D11.ResourceUsage.Default
                        };
                        using (var texture = new D3D11.Texture2D(_device, desc))
                        using (var textureDc = new D2D.DeviceContext(d2dDevice, D2D.DeviceContextOptions.None)) // create a D2D device context
                        using (var textureSurface = texture.QueryInterface<DXGI.Surface>()) // this texture is a DXGI surface
                        using (var textureBitmap = new D2D.Bitmap1(textureDc, textureSurface)) // we can create a GPU bitmap on a DXGI surface
                        {
                            // associate the DC with the GPU texture/surface/bitmap
                            textureDc.Target = textureBitmap;

                            // this is where we draw on the GPU texture/surface
                            textureDc.BeginDraw();

                            // this will automatically resize
                            textureDc.DrawBitmap(
                                frameBitmap,
                                new Interop.RawRectangleF(0, 0, desc.Width, desc.Height),
                                1,
                                D2D.InterpolationMode.HighQualityCubic, // change this for quality vs speed
                                null,
                                null);

                            // commit draw
                            textureDc.EndDraw();

                            // now save the file, create a WIC (jpeg) encoder
                            using (var file = File.OpenWrite("test.jpg"))
                            using (var wic = new WIC.ImagingFactory2())
                            using (var jpegEncoder = new WIC.BitmapEncoder(wic, WIC.ContainerFormatGuids.Jpeg))
                            {
                                jpegEncoder.Initialize(file);
                                using (var jpegFrame = new WIC.BitmapFrameEncode(jpegEncoder))
                                {
                                    jpegFrame.Initialize();

                                    // here we use the ImageEncoder (IWICImageEncoder)
                                    // that can write any D2D bitmap directly
                                    using (var imageEncoder = new WIC.ImageEncoder(wic, d2dDevice))
                                    {
                                        imageEncoder.WriteFrame(textureBitmap, jpegFrame, new WIC.ImageParameters(
                                            new D2D.PixelFormat(desc.Format, D2D.AlphaMode.Premultiplied),
                                            textureDc.DotsPerInch.Width,
                                            textureDc.DotsPerInch.Height,
                                            0,
                                            0,
                                            desc.Width,
                                            desc.Height));
                                    }

                                    // commit
                                    jpegFrame.Commit();
                                    jpegEncoder.Commit();
                                }
                            }
                        }
                    }
                }
                _outputDuplication.ReleaseFrame();
            }
        }
    }
}
I need to process a very large text file (6-8 GB). I wrote the code attached below. Unfortunately, every time the output file (created next to the source file) reaches ~2 GB, I observe a sudden jump in memory consumption (from ~100 MB to a few GBs) and, as a result, an out-of-memory exception.
The debugger indicates that the OOM occurs at while ((tempLine = streamReader.ReadLine()) != null).
I am targeting .NET 4.7 and x64 architecture only.
A single line is at most 50 characters long.
I could work around this by splitting the original file into smaller parts so that processing doesn't hit the problem, and then merging the results back into one file at the end, but I would prefer not to do that.
Code:
public async Task PerformDecodeAsync(string sourcePath, string targetPath)
{
    var allLines = CountLines(sourcePath);
    long processedlines = default;
    using (File.Create(targetPath));
    var streamWriter = File.AppendText(targetPath);
    var decoderBlockingCollection = new BlockingCollection<string>(1000);
    var writerBlockingCollection = new BlockingCollection<string>(1000);

    var producer = Task.Factory.StartNew(() =>
    {
        using (var streamReader = new StreamReader(File.OpenRead(sourcePath), Encoding.Default, true))
        {
            string tempLine;
            while ((tempLine = streamReader.ReadLine()) != null)
            {
                decoderBlockingCollection.Add(tempLine);
            }
            decoderBlockingCollection.CompleteAdding();
        }
    });

    var consumer1 = Task.Factory.StartNew(() =>
    {
        foreach (var line in decoderBlockingCollection.GetConsumingEnumerable())
        {
            short decodeCounter = 0;
            StringBuilder builder = new StringBuilder();
            foreach (var singleChar in line)
            {
                var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
                if (positionInDecodeKey > 0)
                    builder.Append(model.Substring(positionInDecodeKey, 1));
                else
                    builder.Append(singleChar);

                if (decodeCounter > 18)
                    decodeCounter = 0;
                else ++decodeCounter;
            }
            writerBlockingCollection.TryAdd(builder.ToString());
            Interlocked.Increment(ref processedlines);
            if (processedlines == (long)allLines)
                writerBlockingCollection.CompleteAdding();
        }
    });

    var writer = Task.Factory.StartNew(() =>
    {
        foreach (var line in writerBlockingCollection.GetConsumingEnumerable())
        {
            streamWriter.WriteLine(line);
        }
    });

    Task.WaitAll(producer, consumer1, writer);
}
Solutions, as well as advice on how to optimize it a little more, are greatly appreciated.
Like I said, I'd probably go for something simpler first, unless or until it's demonstrated that it's not performing well. As Adi said in their answer, this work appears to be I/O bound - so there seems little benefit in creating multiple tasks for it.
public void PerformDecode(string sourcePath, string targetPath)
{
    File.WriteAllLines(targetPath, File.ReadLines(sourcePath).Select(line =>
    {
        short decodeCounter = 0;
        StringBuilder builder = new StringBuilder();
        foreach (var singleChar in line)
        {
            var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
            if (positionInDecodeKey > 0)
                builder.Append(model.Substring(positionInDecodeKey, 1));
            else
                builder.Append(singleChar);

            if (decodeCounter > 18)
                decodeCounter = 0;
            else ++decodeCounter;
        }
        return builder.ToString();
    }));
}
Now, of course, this code actually blocks until it's done, which is why I've not marked it async. But then, so did yours, and the compiler should already have been warning you about that.
(You could try using PLINQ instead of LINQ for the Select portion, but honestly, the amount of processing we're doing here looks trivial; profile first before applying any such change.)
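For reference, a rough sketch of what the PLINQ variant might look like, where Decode(line) stands in for the per-line transform shown above and AsOrdered() keeps the output lines in the same order as the input:
// Illustrative only; profile before adopting this.
File.WriteAllLines(targetPath,
    File.ReadLines(sourcePath)
        .AsParallel()
        .AsOrdered() // preserve input line order in the output
        .Select(line => Decode(line)));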
As the work you are doing is mostly I/O bound, you aren't really gaining anything from parallelization. It also looks to me (correct me if I'm wrong) like your transformation algorithm doesn't depend on reading the file line by line, so I would recommend doing something like this instead:
void Main()
{
    //Setup streams for testing
    using (var inputStream = new MemoryStream())
    using (var outputStream = new MemoryStream())
    using (var inputWriter = new StreamWriter(inputStream))
    using (var outputReader = new StreamReader(outputStream))
    {
        //Write test string and rewind stream
        inputWriter.Write("abcdefghijklmnop");
        inputWriter.Flush();
        inputStream.Seek(0, SeekOrigin.Begin);

        var inputBuffer = new byte[5];
        var outputBuffer = new byte[5];
        int inputLength;
        while ((inputLength = inputStream.Read(inputBuffer, 0, inputBuffer.Length)) > 0)
        {
            for (var i = 0; i < inputLength; i++)
            {
                //transform each character
                outputBuffer[i] = ++inputBuffer[i];
            }
            //Write to output
            outputStream.Write(outputBuffer, 0, inputLength);
        }

        //Read for testing
        outputStream.Seek(0, SeekOrigin.Begin);
        var output = outputReader.ReadToEnd();
        Console.WriteLine(output);
        //Outputs: "bcdefghijklmnopq"
    }
}
Obviously, you would be using FileStreams instead of MemoryStreams, and you can increase the buffer length to something much larger (this was just a demonstrative example). Also, since your original method is async, you would use the async variants of Stream.Write and Stream.Read.
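For illustration, an async variant of the same chunked loop could look roughly like this. The method name, buffer size, and the Transform call are placeholders, not part of the original code:
// requires: using System.IO; using System.Threading.Tasks;
public async Task TransformFileAsync(string sourcePath, string targetPath)
{
    var buffer = new byte[81920]; // pick a buffer size that suits you
    using (var input = new FileStream(sourcePath, FileMode.Open, FileAccess.Read))
    using (var output = new FileStream(targetPath, FileMode.Create, FileAccess.Write))
    {
        int read;
        while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            for (var i = 0; i < read; i++)
            {
                buffer[i] = Transform(buffer[i]); // placeholder for your per-character transform
            }
            await output.WriteAsync(buffer, 0, read);
        }
    }
}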
So I'm trying to use TweetSharp in VB.NET 2012 with C# to post a tweet with an image.
I found a code example of how to do it:
service.SendTweetWithMedia(new SendTweetWithMediaOptions
{
    Status = "message",
    Images = dictionary
});
However I'm not sure how to create the "dictionary" with the picture stream.
I tried this:
Dictionary<string, Stream> imageDict = new Dictionary<string, Stream>();
then referenced that later:
Images = imageDict
But it gives the error:
Error Screenshot
Anyone have any ideas of how this is supposed to work?
Another block of code I found and tried is:
using (var stream = new FileStream("Image.jpg", FileMode.Open))
{
    var result = tservice.SendTweetWithMedia(new SendTweetWithMediaOptions
    {
        Status = "Message",
        Images = new Dictionary<string, Stream> { { "john", stream } }
    });
    lblResult.Text = result.Text.ToString();
}
But it gives the same error about "FileStream".
You need to add a using directive for the System.IO namespace (using System.IO;); the missing directive is why you receive the error shown in the image you posted.
Here is an example:
var thumb = "http://somesite.net/imageurl";
var service = new TwitterService(key, secret);
service.AuthenticateWith(token, tokenSecret);
var req = WebRequest.Create(thumb);
using (var stream = req.GetResponse().GetResponseStream())
{
    var response = service.SendTweetWithMedia(new SendTweetWithMediaOptions
    {
        Status = tweet.Trim(),
        Images = new Dictionary<string, Stream> { { fullname, stream } }
    });
}
A GIF may fail during Tweet creation even if it is within the file size limit. Adhere to the following constraints to improve success rates (a rough pre-upload check is sketched after this list):
Resolution should be <= 1280x1080 (width x height)
Number of frames <= 350
Number of pixels (width * height * num_frames) <= 300 million
File size <= 15 MB
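As a rough illustration, a pre-upload check against those limits could look like this (width, height, and frameCount are assumed to come from whatever GIF decoder you use; this helper is not part of TweetSharp):
static bool IsTweetableGif(int width, int height, int frameCount, long fileSizeBytes)
{
    const long maxPixels = 300000000L;       // width * height * frames
    const long maxBytes = 15L * 1024 * 1024; // 15 MB
    return width <= 1280
        && height <= 1080
        && frameCount <= 350
        && (long)width * height * frameCount <= maxPixels
        && fileSizeBytes <= maxBytes;
}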
I am beginning to develop for Windows Phone with the Nokia Imaging SDK. I have two functions.
First, I call the function below to convert the image to grayscale:
private async void PickImageCallback(object sender, PhotoResult e)
{
    if (e.TaskResult != TaskResult.OK || e.ChosenPhoto == null)
    {
        return;
    }
    using (var source = new StreamImageSource(e.ChosenPhoto))
    {
        using (var filters = new FilterEffect(source))
        {
            var sampleFilter = new GrayscaleFilter();
            filters.Filters = new IFilter[] { sampleFilter };
            var target = new WriteableBitmap((int)CartoonImage.ActualWidth, (int)CartoonImage.ActualHeight);
            var renderer = new WriteableBitmapRenderer(filters, target);
            {
                await renderer.RenderAsync();
                _thumbnailImageBitmap = target;
                CartoonImage.Source = target;
            }
        }
    }
    SaveButton.IsEnabled = true;
}
Then I call the function below to convert the image to binary (two-tone) color:
private async void Binary(WriteableBitmap bm_image)
{
    var target = new WriteableBitmap((int)CartoonImage.ActualWidth, (int)CartoonImage.ActualHeight);
    MemoryStream stream = new MemoryStream();
    bm_image.SaveJpeg(stream, bm_image.PixelWidth, bm_image.PixelHeight, 0, 100);
    using (var source = new StreamImageSource(stream))
    {
        using (var filters = new FilterEffect(source))
        {
            var sampleFilter = new StampFilter(5, 0.7);
            filters.Filters = new IFilter[] { sampleFilter };
            var renderer1 = new WriteableBitmapRenderer(filters, target);
            {
                await renderer1.RenderAsync();
                CartoonImage.Source = target;
            }
        }
    }
}
But when it reaches "await renderer1.RenderAsync();" in the second function, it doesn't work. How can I solve this? Could you also explain how "await" and "async" work?
Thank you very much!
I'm mostly guessing here since I do not know what error you get, but I'm pretty sure your problem lies in setting up the source. Have you made sure the memory stream position is set to the beginning (0) before creating a StreamImageSource?
Try adding:
stream.Position = 0;
before creating the StreamImageSource.
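In context, the start of your Binary method would then look roughly like this:
MemoryStream stream = new MemoryStream();
bm_image.SaveJpeg(stream, bm_image.PixelWidth, bm_image.PixelHeight, 0, 100);
stream.Position = 0; // rewind so StreamImageSource reads the JPEG data from the start
using (var source = new StreamImageSource(stream))
{
    // ... rest of the method unchanged
}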
Alternatively, instead of creating a memory stream from the writeable bitmap, I suggest doing:
using Nokia.InteropServices.WindowsRuntime;
...
using (var source = new BitmapImageSource(bm_image.AsBitmap()))
{
...
}