I've got hundreds (maybe thousands) of products, each with 10-30 images, coming to an online store I've put together. I need to optimize the images' file sizes as much as possible without losing image quality.
I haven't used jpegtran, jpegoptim, or any other JPEG optimizer directly, but I have noticed that punypng losslessly shrinks the larger JPEG images by about 4-6%.
Metadata is already stripped from the images during upload (via JumpLoader), so that is not an option/problem anymore.
Is there any way to get one of the jpeg optimizers to run from C# code?
Note: I'm using shared Godaddy hosting with IIS7 and .Net 3.5
It might be 7 years too late, but I came across this question while trying to solve this problem. I eventually managed to do it and this is the solution.
For PNG you first need to install nQuant using NuGet.
include:
using System.Web.Hosting;
using System.IO;
using System.Diagnostics;
using nQuant;
using System.Drawing;
using System.Drawing.Imaging;
Methods:
public void optimizeImages()
{
    string folder = Path.Combine(HostingEnvironment.ApplicationPhysicalPath, @"assets\temp");
    var files = Directory.EnumerateFiles(folder);
    foreach (var file in files)
    {
        switch (Path.GetExtension(file).ToLower())
        {
            case ".jpg":
            case ".jpeg":
                optimizeJPEG(file);
                break;
            case ".png":
                optimizePNG(file);
                break;
        }
    }
}
private void optimizeJPEG(string file)
{
    string pathToExe = HostingEnvironment.MapPath("~\\adminassets\\exe\\") + "jpegtran.exe";
    var jpegTranProcess = new Process
    {
        StartInfo =
        {
            Arguments = "-optimize \"" + file + "\" \"" + file + "\"",
            FileName = pathToExe,
            UseShellExecute = false,
            CreateNoWindow = true, // suppress the console window
            WindowStyle = ProcessWindowStyle.Hidden,
            RedirectStandardError = true,
            RedirectStandardOutput = true
        }
    };
    jpegTranProcess.Start();
    jpegTranProcess.WaitForExit();
}
private void optimizePNG(string file)
{
    string tempFile = Path.GetDirectoryName(file) + @"\temp-" + Path.GetFileName(file);
    int alphaTransparency = 10;
    int alphaFader = 70;
    var quantizer = new WuQuantizer();
    using (var bitmap = new Bitmap(file))
    {
        using (var quantized = quantizer.QuantizeImage(bitmap, alphaTransparency, alphaFader))
        {
            quantized.Save(tempFile, ImageFormat.Png);
        }
    }
    File.Delete(file);
    File.Move(tempFile, file);
}
It takes all files from the /assets/temp folder and optimizes the JPEGs and PNGs.
I followed this question for the PNG part; the JPEG part I pieced together from several sources, including PicJam and Image Optimizer. The way I use it is to upload all the user's files to the temp folder, run this method, upload the files to Azure blob storage, and delete the local files. I downloaded jpegtran here.
If you don't want to mess with temporary files, I'd advise using C++/CLI.
Create a C++/CLI DLL project in Visual Studio. Create one static managed class and declare the functions as you want to call them from C#:
public ref class JpegTools
{
public:
    static array<byte>^ Optimize(array<byte>^ input);
};
The functions you declare here can be called directly from C#, and you can implement them with everything C++ offers.
array<byte>^ corresponds to a C# byte array. You'll need pin_ptr<> to pin the byte array in memory so you can pass the data to the unmanaged JPEG helper function of your choice. C++/CLI has ample support for marshalling managed types to native types. You can also allocate a new array with gcnew to return CLI-compatible types. If you have to marshal strings from C# to C++ as part of this exercise, use MFC/ATL's CString type.
You can statically link all the JPEG code into the DLL. A C++ DLL can mix pure native and C++/CLI code. In our C++/CLI projects, typically only the interface source files know about CLI types; all the rest work with plain C++ types.
There's some overhead to getting going this way, but the upside is that your code is compile-time type-checked, and all the dealings with unmanaged code and memory stay on the C++ side. It actually works so well that I used C++/CLI to unit test native C++ code almost directly with NUnit.
Good luck!
I would batch-process the images before uploading them to your web server, rather than trying to process them while serving them. This puts less load on the web server and lets you use any batch image-processing tools you wish.
I'm sure I'm totally late to answer this question, but I recently faced the lossless JPEG optimization problem and couldn't find any suitable C# implementation of the jpegtran utility. So I decided to implement lossless JPEG size-reduction routines myself, based on a C wrapper of a modified jpegtran, which you can find here. It turns out that a similar implementation using the pure-.NET LibJpeg.NET is far slower than the C-wrapped solution, so I haven't included it in the repo. Using the wrapper is quite simple:
if (JpegTools.Transform(jpeg, out byte[] optimized, copy: false, optimize: true))
{
//do smth useful
}
else
{
//report on error or use original jpeg
}
Hope someone finds it useful.
Why not call punypng.com with Process.Start()? There is no reason your .NET code can't run external programs, provided the processing is done at upload time (rather than when serving the images).
E.g.
upload into an "upload" folder,
have a Windows service that watches for new files in the "upload" folder,
when you get a new file, start punypng.com to process it and put the output into the correct image folder.
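The watcher in step 2 could be sketched with a FileSystemWatcher. The class name and the processing callback below are my own placeholders, and a real service would also scan the folder at startup for files that arrived while it was stopped:

```csharp
using System;
using System.IO;

static class UploadWatcher
{
    // Start watching uploadDir and hand each newly created file to process().
    // The caller owns the returned watcher and should dispose it on shutdown.
    public static FileSystemWatcher Watch(string uploadDir, Action<string> process)
    {
        var watcher = new FileSystemWatcher(uploadDir);
        watcher.Created += (sender, e) => process(e.FullPath);
        watcher.EnableRaisingEvents = true;
        return watcher;
    }
}
```

Note that Created fires when the file first appears, which for large uploads can be before the writer has finished; production code usually waits until the file can be opened exclusively before processing it.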
Can a running .NET .EXE append data to itself? What's stopping it?
I could launch a separate process to do it just fine.
But I can't figure out how to write to the file itself while it's running. Is there any way to do this in .NET?
EDIT: And preferably no hacky solutions like writing it out somewhere else and then copying/renaming.
EDIT2: Clarifying type of executable
EDIT3: Purpose: writing a binary stream to my running EXE file lets me parse the .EXE file on disk for those bytes and use them in the program, without having to create any new files, registry entries, or anything like that. It is self-contained, which is extremely convenient.
EDIT4: For those against this idea, please think about file zipping, DLL linking, and portable applications before trying to discredit it.
There are a lot of bad consequences to storing data this way, as said in the comments, but there's a bigger problem: the answer to the "What's stopping it?" question. The Windows PE loader locks the image file for writing while it is executing, so you can't get a HANDLE to the file with write permissions: the NtCreateFile and NtOpenFile system calls with the FILE_WRITE_DATA option will fail, as will any attempt to delete the file. This block is implemented at kernel level and set during the NtCreateProcess system call, before the process and its modules' entry points are actually called.
The only dirty trick possible without writing data to disk, sending it to a remote server, or having kernel privileges is to use another process, via a helper executable, code injection, or command-line scripts (e.g. with PowerShell), which can kill your process to release the lock, append data to the end of the file, and restart it. Of course these options have even worse consequences; I wrote this only to make clear the OS limitations (there by design) and why no professional software uses this technique to store data.
EDIT: since you are so determined to accomplish this, I'll post a proof of concept for appending data via a helper executable (a file copy). The method relies on executing a new copy of the image from the TEMP folder, passing it the path to the original executable so the original can be "written", because it isn't running and locked. READERS: I SUGGEST YOU DON'T USE THIS IN PRODUCTION.
using System;
using System.IO;
using System.Reflection;
using System.Diagnostics;

namespace BadStorage
{
    class Program
    {
        static void Main(string[] args)
        {
            var temp = Path.GetTempPath();
            var exePath = Assembly.GetExecutingAssembly().Location;
            if (exePath.IndexOf(temp, StringComparison.OrdinalIgnoreCase) >= 0 && args.Length > 0)
            {
                // "Real" main
                var originalExe = args[0];
                if (File.Exists(originalExe))
                {
                    // Your program code...
                    byte[] data = { 0xFF, 0xEE, 0xDD, 0xCC };

                    // Write
                    using (var fs = new FileStream(originalExe, FileMode.Append, FileAccess.Write, FileShare.None))
                        fs.Write(data, 0, data.Length);

                    // Read
                    using (var fs = new FileStream(originalExe, FileMode.Open, FileAccess.Read, FileShare.Read))
                    {
                        fs.Seek(-data.Length, SeekOrigin.End);
                        fs.Read(data, 0, data.Length);
                    }
                }
            }
            else
            {
                // Self-copy
                var exeCopy = Path.Combine(temp, Path.GetFileName(exePath));
                File.Copy(exePath, exeCopy, true);
                var p = new Process()
                {
                    StartInfo = new ProcessStartInfo()
                    {
                        FileName = exeCopy,
                        Arguments = $"\"{exePath}\"",
                        UseShellExecute = false
                    }
                };
                p.Start();
            }
        }
    }
}
Despite all the negativity, there is a clean way to do this:
The approach I found only requires that the program be executed from an NTFS drive.
The trick is to have your app copy itself to an alternate data stream as soon as it's launched, then execute that image and immediately close itself. This can easily be done with the commands:
type myapp.exe > myapp.exe:image
forfiles /m myapp.exe /c myapp.exe:image
Once your application is running from the alternate stream (myapp.exe:image), it is free to modify the original file (myapp.exe) and read the data stored within it. The next time the program starts, the modified application is copied to the alternate stream and executed.
This gives you the effect of an executable writing to itself while running, without dealing with any extra files, and lets you store all settings within a single .exe file.
The file must be executed from an NTFS partition, but that is not a big restriction, since Windows system drives are NTFS. You can still copy the file to other filesystems; you just cannot execute it there.
I have a plugin-based web app that allows an administrator to assign various small pieces of functionality (the actual functionality is unimportant here) to users.
This functionality is configurable, and it's important that the administrator understands the plugins. The administrator is technical enough to read the very simple source code for these plugins (mostly just arithmetic).
I'm aware there are a couple of questions already about accessing source code from within a DLL built from that source:
How to include source code in dll? for example.
I've played with getting the .cs files into a /Resources folder, but doing this with a pre-build event obviously doesn't include those files in the project, so VS never copies them and I'm unable to access them on the Resources object.
Is there a way to reference the source for a particular class from the same assembly that I'm missing? Or, alternatively, a way to extract COMPLETE source from the PDB for that assembly? I'm quite happy to deploy detailed PDB files; there's no security risk for this portion of the solution.
As I have access to the source code, I don't want to decompile it to display it. That seems wasteful and overly complicated.
The source code isn't included in the DLLs, and it isn't in the PDBs either (PDBs only contain a link between the addresses in the DLL and the corresponding lines of code in the sources, as well as other trivia like variable names).
A pre-build event is a possible solution - just make sure that it produces a single file that's included in the project. A simple zip archive should work well enough, and it's easy to decompress when you need to access the source. Text compresses very well, so it might make sense to compress it anyway. If you don't want to use zip, anything else will do fine as well - an XML file, for example. It might even give you the benefit of using something like Roslyn to provide syntax highlighting with all the necessary context.
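A sketch of that round trip (the class and method names are mine; for illustration the archive lives in a MemoryStream, but the Unpack side reads any Stream, including what GetManifestResourceStream returns):

```csharp
using System.IO;
using System.IO.Compression;

static class SourceArchive
{
    // Write one source file into a new zip archive and return its bytes.
    public static byte[] Pack(string entryName, string sourceText)
    {
        using (var ms = new MemoryStream())
        {
            using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
            using (var writer = new StreamWriter(zip.CreateEntry(entryName).Open()))
                writer.Write(sourceText);
            return ms.ToArray();
        }
    }

    // Read a named source file back out of a zip archive stream.
    public static string Unpack(Stream zipStream, string entryName)
    {
        using (var zip = new ZipArchive(zipStream, ZipArchiveMode.Read))
        using (var reader = new StreamReader(zip.GetEntry(entryName).Open()))
            return reader.ReadToEnd();
    }
}
```

The pre-build step would call something like Pack for each .cs file and write the result to a single archive that is included in the project as an embedded resource.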
Decompilation isn't necessarily a terrible approach. You're trading memory for CPU, basically. You'll lose comments, but that shouldn't be a problem if your code is very simple. Method arguments keep their names, but locals don't - you'd need the PDBs for that, which is a massive overkill. It actually does depend a lot on the kind of calculations you're doing. For most cases, it probably isn't the best solution, though.
A bit roundabout way of handling this would be a multi-file T4 template. Basically, you'd produce as many files as there are source code files, and have them be embedded resources. I'm not sure how simple this is, and I'm sure not going to include the code :D
Another (a bit weird) option is to use file links. Simply have the source code files in the project as usual, but also make a separate folder where the same files will be added using "Add as link". The content will be shared with the actual source code, but you can specify a different build action - namely, Embedded Resource. This requires a (tiny) bit of manual work when adding or moving files, but it's relatively simple. If needed, this could also be automated, though that sounds like an overkill.
The cleanest option I can think of is adding a new build action, Compile + Embed. This requires you to add a build-target file to your project, but that's actually quite simple: the target file is just an XML file, and you manually edit your SLN/CSPROJ file to include that target in the build, and you're good to go. The tricky part is that you'll also need to force Microsoft.CSharp.Core.targets to treat your Compile + Embed items as both source code and embedded resources. That's easily done by manually editing the target file, but that's a terrible way of handling it, and I'm not sure what the best way is. Maybe there's a way to redefine @(Compile) to mean @(Compile;MyCompileAndEmbed)? I know it's possible with the usual property groups, but I'm not sure if something like this can be done with item lists.
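For what it's worth, the simplest version of "same files, two item types" can be sketched directly in an old-style csproj. This is an untested sketch, and `PluginSource` is a made-up item name:

```xml
<ItemGroup>
  <!-- Hypothetical: gather the plugin sources once... -->
  <PluginSource Include="Plugins\**\*.cs" />
  <!-- ...then feed the same files to both the compiler and the resource embedder. -->
  <Compile Include="@(PluginSource)" />
  <EmbeddedResource Include="@(PluginSource)" />
</ItemGroup>
```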
Taking @Luaan's suggestion of using a pre-build step to create a single zipped folder, I created a basic console app to package the source files into a zip file at a specific location.
static void Main(string[] args)
{
    Console.WriteLine("Takes a folder of .cs files and flattens and compacts them into a .zip.\n" +
                      "Arg 1 : Source folder to be recursively searched\n" +
                      "Arg 2 : Destination zip file\n" +
                      "Arg 3 : Semicolon-separated list of folders to ignore");
    if (args.Length < 2)
    {
        Console.Write("Args 1 or 2 missing");
        return;
    }
    string SourcePath = args[0];
    string ZipDestination = args[1];
    List<String> ignoreFolders = new List<string>();
    if (args.Length > 2)
    {
        ignoreFolders = args[2].Split(';').ToList();
    }
    var files = DirSearch(SourcePath, "*.cs", ignoreFolders);
    Console.WriteLine($"{files.Count} files found to zip");
    if (File.Exists(ZipDestination))
    {
        Console.WriteLine("Destination exists. Deleting zip file first");
        File.Delete(ZipDestination);
    }
    int zippedCount = 0;
    using (FileStream zipToOpen = new FileStream(ZipDestination, FileMode.OpenOrCreate))
    {
        using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Create))
        {
            foreach (var filePath in files)
            {
                Console.WriteLine($"Writing {Path.GetFileName(filePath)} to zip {Path.GetFileName(ZipDestination)}");
                archive.CreateEntryFromFile(filePath, Path.GetFileName(filePath));
                zippedCount++;
            }
        }
    }
    Console.WriteLine($"Zipped {zippedCount} files;");
}
static List<String> DirSearch(string sDir, string filePattern, List<String> excludeDirectories)
{
    List<String> filePaths = new List<string>();
    // Include files in this folder itself, not just in its subfolders.
    filePaths.AddRange(Directory.GetFiles(sDir, filePattern));
    foreach (string d in Directory.GetDirectories(sDir))
    {
        if (excludeDirectories.Any(ed => ed.ToLower() == d.ToLower()))
        {
            continue;
        }
        filePaths.AddRange(DirSearch(d, filePattern, excludeDirectories));
    }
    return filePaths;
}
It takes three parameters: a source dir, an output zip file, and a ";"-separated list of paths to exclude. I built this as a binary, committed it to source control for simplicity, and included it in the pre-build step for the projects whose source I want.
There's not much error checking, but if anyone wants it, here it is! Thanks again to @Luaan for clarifying that PDBs aren't all that useful here!
How, in C#, can I read a named attribute (e.g. Title) from a WMA file on Windows XP or later?
One method to read (and modify/write) WMA metadata attributes is to use the Windows Media Format SDK. In particular the IWMHeaderInfo interface has the functions you want: GetAttributeByName, GetAttributeCount and GetAttributeByIndex. You will have to write P/Invoke code in C# in order to use this COM-based API.
Another option which may be easier would be to use a library such as NAudio which has a WindowsMediaFormat assembly for reading and writing WMA files. With NAudio the task of reading attributes becomes pretty simple.
using (var wmaStream = new NAudio.WindowsMediaFormat.WmaStream(fileName))
{
titleAttribute = wmaStream["Title"];
authorAttribute = wmaStream["Author"];
// ...
// read other meta tag attributes
}
You can find some more details about reading and writing WMA meta tags using NAudio in a post I wrote.
There is a simple solution that doesn't require any extra frameworks.
So I propose just reading the file bytewise using pure .NET:
using System.IO;
using System.Text;
...
string metaStr = string.Empty;
using (FileStream fs = File.OpenRead(wmaUrl))
{
byte[] b = new byte[3000];
fs.Read(b, 0, b.Length);
metaStr = Encoding.UTF8.GetString(b, 0, 3000);
metaStr = metaStr.Replace("\0", "");
int metaStart = metaStr.IndexOf("<?xml version");
metaStr = metaStr.Substring(metaStart);
int metaEnd = metaStr.IndexOf("</recordingDetails>");
metaStr = metaStr.Substring(0, metaEnd) + "</recordingDetails>";
}
Now metaStr contains the Comments field of the WMA file description, which is usually called the audio file's metadata.
Just remember that this Comments field can be updated by other users and can contain other tags (not "recordingDetails" as shown above), so you should use your own custom substrings to define the necessary metadata boundaries.
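Since IndexOf returns -1 when a marker is absent, the slicing above is safer behind a small helper that checks both markers first (the helper name is mine):

```csharp
using System;

static class MetaSlice
{
    // Returns the text from startTag through endTag inclusive,
    // or null if either marker is missing.
    public static string Between(string haystack, string startTag, string endTag)
    {
        int start = haystack.IndexOf(startTag, StringComparison.Ordinal);
        if (start < 0) return null;
        int end = haystack.IndexOf(endTag, start, StringComparison.Ordinal);
        if (end < 0) return null;
        return haystack.Substring(start, end - start + endTag.Length);
    }
}
```

With this, a file whose Comments field holds different tags simply yields null instead of an ArgumentOutOfRangeException.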
I have been trying to read a file and calculate the hash of its contents to find duplicates. The problem is that in Windows 8 (or WinRT, or Windows Store applications, or whatever it is called; I'm completely confused), System.IO has been replaced with Windows.Storage, which behaves differently and is very confusing. The official documentation is not useful at all.
First I need to get a StorageFile object, which in my case, I get from browsing a folder from a file picker:
var picker = new Windows.Storage.Pickers.FolderPicker();
picker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.MusicLibrary;
picker.FileTypeFilter.Add("*");
var folder = await picker.PickSingleFolderAsync();
var files = await folder.GetFilesAsync(Windows.Storage.Search.CommonFileQuery.OrderByName);
Now in files I have the list of files I need to index. Next, I need to open that file:
foreach (StorageFile file in files)
{
var filestream = await file.OpenAsync(Windows.Storage.FileAccessMode.Read);
Now is the most confusing part: getting the data from the file. The documentation was useless, and I couldn't find any code example. Apparently, Microsoft thought getting pictures from the camera is more important than opening a file.
The file stream has a member ReadAsync which I think reads the data. This method needs a buffer as a parameter and returns another buffer (???). So I create a buffer:
var buffer = new Windows.Storage.Streams.Buffer(1024 * 1024 * 10); // 10 mb should be enough for an mp3
var resultbuffer = await filestream.ReadAsync(buffer, 1024 * 1024 * 10, Windows.Storage.Streams.InputStreamOptions.ReadAhead);
I am wondering... what happens if the file doesn't have enough bytes? I haven't seen any info in the documentation.
Now I need to calculate the hash for this file. To do that, I need to create an algorithm object...
var alg = Windows.Security.Cryptography.Core.HashAlgorithmProvider.OpenAlgorithm("MD5");
var hashbuff = alg.HashData(resultbuffer);
// Cleanup
filestream.Dispose();
I also considered reading the file in chunks, but how can I calculate the hash like that? I looked everywhere in the documentation and found nothing about this. Could it be the CryptographicHash class with its Append method?
Now I have another issue. How can I get the data from that weird buffer thing to a byte array? The IBuffer class doesn't have any 'GetData' member, and the documentation, again, is useless.
So all I could do now is wonder about the mysteries of the universe...
// ???
}
So the question is... how can I do this? I am completely confused, and I wonder why Microsoft chose to make reading a file so... so... so... impossible! Even in assembly I could figure things out more easily than... this thing.
WinRT, or Windows Runtime, should not be confused with .NET: it is not .NET. WinRT has access to only a subset of the Win32 API, unlike .NET. Here is a pretty good article on the rules and restrictions in WinRT.
WinRT in general does not have access to the file system. It works with capabilities, and you can request the file-access capability, but this restricts your app's access to certain areas. Here is a good example of how to do file access via WinRT.
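As for the asker's chunked-hash question: in WinRT the pattern is HashAlgorithmProvider.CreateHash(), then CryptographicHash.Append per chunk, then GetValueAndReset at the end. WinRT can't be run here, but classic .NET has the same shape in IncrementalHash, which makes it easy to verify that chunked hashing matches the one-shot result:

```csharp
using System;
using System.Security.Cryptography;

static class ChunkedHash
{
    // Hash the data chunk by chunk; the result is identical to hashing
    // the whole buffer in one call.
    public static byte[] Md5InChunks(byte[] data, int chunkSize)
    {
        using (var hasher = IncrementalHash.CreateHash(HashAlgorithmName.MD5))
        {
            for (int offset = 0; offset < data.Length; offset += chunkSize)
            {
                int length = Math.Min(chunkSize, data.Length - offset);
                hasher.AppendData(data, offset, length);
            }
            return hasher.GetHashAndReset();
        }
    }
}
```

The chunk size only affects memory use, not the digest, so there is no need to guess a "big enough" buffer up front.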
I keep getting the error "Stream was not writable" whenever I try to execute the following code. I understand that there's still a reference to the stream in memory, but I don't know how to solve the problem. The two blocks of code are called in sequential order. I think the second one might be a function call or two deeper in the call stack, but I don't think this should matter, since I have "using" statements in the first block that should clean up the streams automatically. I'm sure this is a common task in C#, I just have no idea how to do it...
string s = "";
using (Stream manifestResourceStream =
Assembly.GetExecutingAssembly().GetManifestResourceStream("Datafile.txt"))
{
using (StreamReader sr = new StreamReader(manifestResourceStream))
{
s = sr.ReadToEnd();
}
}
...
string s2 = "some text";
using (Stream manifestResourceStream =
Assembly.GetExecutingAssembly().GetManifestResourceStream("Datafile.txt"))
{
using (StreamWriter sw = new StreamWriter(manifestResourceStream))
{
sw.Write(s2);
}
}
Any help will be very much appreciated. Thanks!
Andrew
Embedded resources are compiled into your assembly, you can't edit them.
As stated above, embedded resources are read-only. My recommendation, should this be applicable (say, for example, your embedded resource is a database file, XML, CSV, etc.), would be to extract a blank resource to the same location as the program and read/write to the extracted resource.
Example Pseudo Code:
if(!Exists(new PhysicalResource())) //Check to see if a physical resource exists.
{
PhysicalResource.Create(); //Extract embedded resource to disk.
}
PhysicalResource pr = new PhysicalResource(); //Create physical resource instance.
pr.Read(); //Read from physical resource.
pr.Write(); //Write to physical resource.
Hope this helps.
Additional:
Your embedded resource may be entirely blank, or it may contain data structure and/or default values.
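The extract-on-first-run pseudocode above might translate to something like this; the resource name and target path are placeholders:

```csharp
using System.IO;
using System.Reflection;

static class ResourceExtractor
{
    // Copy an embedded resource to disk if it isn't there yet.
    // Returns false when the assembly contains no such resource.
    public static bool ExtractIfMissing(Assembly asm, string resourceName, string targetPath)
    {
        if (File.Exists(targetPath)) return true; // already extracted earlier
        using (Stream res = asm.GetManifestResourceStream(resourceName))
        {
            if (res == null) return false;
            using (var file = File.Create(targetPath))
                res.CopyTo(file);
            return true;
        }
    }
}
```

Call it once at startup, then read and write only the extracted file.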
A bit late, but for posterity =)
About the embedded .txt:
Yes, at runtime you can't edit an embedded resource, because it's embedded. You can play a bit with a disassembler, but only with outer assemblies that you load into the current context.
There is a hack if you want to write some current information into a resource before the program starts, without keeping the data in a separate file.
I used to work a bit with WinCE and the .NET Compact Framework, where you can't store strings at runtime with ResourceManager. I needed some dynamic information in order to catch a DllNotFoundException before it was actually thrown on start.
So I made an embedded .txt file, which I fill in a pre-build event,
like this:
cd $(ProjectDir)
dir ..\bin\Debug /a-d /b > assemblylist.txt
This lists the files in the debug folder.
and the reading:
using (var f = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("Market_invent.assemblylist.txt")))
{
str = f.ReadToEnd();
}
So you can drive all of this from the pre-build event and run whatever executables you need there.
Enjoy! It's very useful for storing some important information, and it helps avoid redundant actions.