Programmatic truncation of long file names (zipped) for different Windows editions - C#

So I'm bugfixing on a program that allows a user to create an Excel document, which it first zip compresses before serving to the customer. One of the client complaints is that, while the zip file can always be downloaded, sometimes the user must copy the Excel file out of the archive (or extract it) before opening it, using the standard Windows compressed tools. Standard "File name too long" error.
The algorithm is, of course, setting the .zip archive name to the same as the report's - which can be a string of 100+ characters.
The solution I'm implementing is to check the length of the potential report name to see if it violates MAX_PATH, naturally, and truncate the .zip name as needed.
Testing it on Windows 7, this works perfectly. But something odd happens when testing it under 8.1.
It still throws an error when opening the file from the archive - but this error is a bit more enigmatic.
"Sorry, we couldn't find C:\Users{My user name}\AppData\Local\Temp\Temp1_{Rest of the truncated archive name}.zip{Full file name}.xlsx. Is it possible it was moved, renamed or deleted?"
This error keeps popping up, regardless of whether the file is "Open"ed or "Save"d from the browser.
Normally, I'd just try further tweaking, but testing on the Windows 8.1 platform involves a lot of overhead at the moment, and it doesn't look like the problem is the path name length.
What is going on? Does Windows 8 have a problem with the length of names within archives, or...?
Also, the reason I am posting this here, and not, say, User Experience, is because I feel like the solution will be programmatic - something in the code of the program. I readily concede that "allow the end user to skip hitting the Extract button" is about providing an easy user experience, but truncating the length of the entire path solved the problem on Windows 7.
Just not 8.1, and googling/searching the SO site family provides no help - ironically, because of the keyword Excel.

So, as I found in the linked thread (http://answers.microsoft.com/en-us/office/forum/office_2013_release-excel/sorry-unable-to-find/595333d0-1463-499f-967e-4da8ac2e2047?auth=1), the crux seems to be that, although MAX_PATH is 260 characters, Excel 2013 can't handle anything over 212.
I still haven't been able to give this the rigorous testing it deserves, but if anyone else encounters this problem, and finds this page in their quest for a fix, just truncate until the entire path is under 212 and you should be good.
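For a concrete starting point, here is a minimal C# sketch of that truncation. The 212-character budget, the Temp1_ prefix the built-in extractor uses, and the shape of the temp path are assumptions taken from the error message and thread above, not documented constants, so adjust them to whatever you actually observe.

    using System.IO;

    static class ArchiveNaming
    {
        // Observed limit from the Microsoft Answers thread above, not a documented constant.
        const int ExcelPathLimit = 212;
        // Folder name the built-in Windows extractor creates when opening from the archive.
        const string ExtractPrefix = "Temp1_";

        public static string BuildArchiveName(string reportName, string tempFolder)
        {
            // The path Excel ends up opening looks like:
            //   <tempFolder>\Temp1_<zipBase>.zip\<reportName>.xlsx
            string fixedPart = Path.Combine(tempFolder, ExtractPrefix) + ".zip\\" + reportName + ".xlsx";
            int budget = ExcelPathLimit - fixedPart.Length;

            // If the budget is negative, the inner .xlsx name alone is already too long
            // and truncating the zip name cannot help; the report name itself must shrink.
            string zipBase = (budget > 0 && reportName.Length > budget)
                ? reportName.Substring(0, budget)
                : reportName;

            return zipBase + ".zip";
        }
    }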

Related

Printer spooler API number of copies

I really could use some help; this is a question that a lot of people are asking on the internet. I have different setups and have tried different ways of testing, and it's very frustrating.
First setup:
local printers
local running code
print from PDF or Notepad: SUCCESS (number of copies is 2)
print from Word: FAILED (number of copies is 1)
Second setup:
local printers that are shared
local running code
print from another computer to the shared printers
number of copies is always 1
So what is everyone missing? Why are some fields missing when the printer should still know what to print? What does Word do that also happens when you print from another computer? Can someone tell me why some things in Windows are so terrible? Everything should pass through the spooler, so why is the data wrong?
Kind regards!
A printer prints sheets and pages, so copies is converted to pages at some stage.
The notification data you get depends on both the application that is printing and the system and driver components handling the spooling and rendering. In my experience the data cannot be relied on, and the best data is obtained by parsing the spool file. This may or may not contain the number of copies.
Word has had the "copies problem" for a long time. There was a patch to supposedly fix this, but another opinion is that it's because it uses an unusual way of printing. I'll quote some of the link contents here:
With the infamous Word Copy Count bug… the dmCopies field is 1 in the
SHD. The correct value is found in the DEVMODE record in the SPL file
(if it's an EMF spool).
The only other way I found was to monitor the PrintedPages field of
the JOB_INFO_2 structure, when the job has been sent to the printer,
and see if it is a multiple of TotalPages.
[...]
What happens is not a Word bug, but a Windows bug. Word calls StartDoc
always with copies set to 1. After that it calls DocumentProperties,
makes the change in dmCopies and calls ResetDC to make the update. It
is a strange way of printing, but not wrong. The problem is that the
SHD file and PRINTER_INFO are not updated with this information; they
just keep the DEVMODE info set on the StartDoc call.
But the call to the ResetDC generating a new DevMode is kept on the
SPL file. You can get that info too if you hook DocumentProperties
API calls.
Thank you for the answer. Is there a way of catching the document properties when they change?
The JOB_INFO_2 structure has TotalPages equal to PagesPrinted, so that is not a solution.
The SPL file does contain the QTY for the printer I tested on, which is correct. But we tested on a lot of printers and we see the QTY is not always set, so it's not a 100% solution - but already a good fallback.
So if I can catch the document properties without parsing the SPL file, that would be wonderful, because I guess that's where everything is correct. Isn't it?
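For anyone picking this up later, here is a rough C# sketch of the PrintedPages / TotalPages heuristic from the quote above, using the managed System.Printing API rather than raw winspool/JOB_INFO_2 calls. The queue name is a placeholder, and as the discussion shows the counters are driver-dependent, so treat it as a fallback probe rather than a reliable answer.

    using System;
    using System.Printing;   // reference System.Printing.dll (ships with WPF)

    class CopyCountProbe
    {
        static void Main()
        {
            using (var server = new LocalPrintServer())
            using (PrintQueue queue = server.GetPrintQueue("YourSharedPrinter")) // placeholder name
            {
                foreach (PrintSystemJobInfo job in queue.GetPrintJobInfoCollection())
                {
                    int total = job.NumberOfPages;          // pages in one copy of the document
                    int printed = job.NumberOfPagesPrinted; // pages actually pushed to the device

                    // Heuristic from the quoted workaround: once the job has finished,
                    // printed pages being an exact multiple of the document's page count
                    // hints at the real copy count. Driver-dependent, so verify per printer.
                    if (total > 0 && printed > 0 && printed % total == 0)
                        Console.WriteLine("{0}: probable copies = {1}", job.Name, printed / total);
                }
            }
        }
    }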

How to merge 2 zip files together into 1 zip

I am trying to make a custom launcher for Minecraft in C# but I have come across a bump.
I want to add something into it, Minecraft Forge, but the only way I could think of is to change the extension of minecraft.jar to minecraft.zip, extract the contents of the Minecraft Forge.zip and the minecraft.zip into the same folder and then zip that entire folder up into minecraft.jar.
However minecraft.jar has a file named aux.class, so whenever my extract script (made in Java) tries to extract it, it simply says:
Unable to find file G:\Programming\C#\Console\Forge Installer\Forge Installer\bin\Debug\Merge\aux.class.
The only other way I can think of is to merge minecraft_forge.zip into minecraft.zip. I have spent around 2 hours looking on Google (watch as someone finds it within a couple of minutes), but it always shows me results for "How to zip multiple files", "How to make a zip file in C#" etc.
So I have come here looking for my answer. Sorry if this is a lot to read, but I always see comments on here saying "You didn't give us enough information to help you".
EDIT: The question in case it wasn't clear is: How am I able to put the contents of minecraft_forge.zip into minecraft.zip?
In your case, if you cannot unzip the files due to OS limitations, you need to skip extracting to temporary files altogether. Instead, only handle input and output streams, as suggested in the answers found here: How can I add entries to an existing zip file in Java?
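Since the launcher itself is in C#, a minimal sketch of that stream-only idea with .NET 4.5's System.IO.Compression might look like this; no temp files are created, so the reserved aux.class name never touches the filesystem. The MergeZip name and the overwrite-on-clash rule are my own choices for illustration.

    using System.IO;
    using System.IO.Compression; // add references: System.IO.Compression + System.IO.Compression.FileSystem

    static class ZipMerger
    {
        // Copies every entry of sourceZipPath into targetZipPath without extracting to disk.
        public static void MergeZip(string sourceZipPath, string targetZipPath)
        {
            using (ZipArchive source = ZipFile.OpenRead(sourceZipPath))
            using (ZipArchive target = ZipFile.Open(targetZipPath, ZipArchiveMode.Update))
            {
                foreach (ZipArchiveEntry entry in source.Entries)
                {
                    if (entry.FullName.EndsWith("/"))   // directory entries carry no data
                        continue;

                    // Let the Forge files win on name clashes by replacing existing entries.
                    ZipArchiveEntry existing = target.GetEntry(entry.FullName);
                    if (existing != null)
                        existing.Delete();

                    ZipArchiveEntry copy = target.CreateEntry(entry.FullName);
                    using (Stream from = entry.Open())
                    using (Stream to = copy.Open())
                        from.CopyTo(to);
                }
            }
        }
    }

    // e.g. ZipMerger.MergeZip("minecraft_forge.zip", "minecraft.zip");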
As you pointed out, "aux" is a protected keyword within Windows and it does not matter what the file suffix may be; Windows won't let you use it. Here are a couple of threads that discuss this in general.
Ref 1: Windows reserved words.
Ref 2: Windows reserved words.
If you are typing in commands to perform the copy or unzip, there is a chance you can get this to work by using a path prefix of the following \\.\ or \\?\. When I tested this, it worked with either a single or double back-slash following the period or question mark. Such that the following work:
\\.\c:\paths\etc
\\.\\c:\paths\etc
\\?\c:\path\etc
\\?\\c:\path\etc
I used the following commands to test this. When trying to rename through Windows Explorer it gave a "The specified device name is invalid." error message. From the command line it worked just fine. I should point out that once you create these files, you will have to delete them manually using the same technique. Windows Explorer reports that these 0-byte text files are "too large for the destination file system", i.e. the recycle bin.
rename "\.\c:\temp\New Text Document.txt" aux.txt
del "\.\c:\temp\aux.txt"
As far as copying directly from zip or jar files, I tried this myself and it appeared to work. I used 7-Zip and opened the jars directly using the "Open archive..." Windows Explorer context menu. I then dragged and dropped the contents from forge.jar to the minecraft jar file. Since it is the minecraft jar file that contains the offending file name, the chance of needing to create a temporary file on the filesystem is reduced. I did see someone mention that 7-Zip may extract to a temporary file when copying between jars and zips.
7-zip reference on copying between archives
I should point out that my copy of the minecraft jar (minecraft_server.1.8.7.jar) did not contain a file named aux.class. I also did not try to use the jar after the copy/merge, nor did I spend much time checking how well it merged the two contents, since it appears there may be a conflict with com\google\common\base\ - there are similar class names but with different $ variable suffixes on them.
I hope these two possible suggestions could give you some room to work with to find a solution for your needs... if you're still looking.

Visual Studio 2010 empties the file on crash

I ran into a really bad problem while working in Visual Studio 2010. The power plug was accidentally switched off, and when I started the computer again the file was completely empty. I tried the following things:
I opened it in Notepad and a couple of other editors and it was empty.
I then opened it in a hex editor, which shows that all bytes are set to 0.
I programmatically read the file and it also showed all bytes set to 0.
Checked "Documents\Visual Studio 2010\Backup Files\" for my project and it was empty.
The file size is still showing in KBs, but the code is completely gone.
Is there any possible way by which I can recover my code?
If there is not, can anyone suggest a setting/patch that should be there so that it never happens again?
Note: I already have Autorecover option set for every 5 minutes in IDE.
Update:
As suggested by Henok, if you have compiled and built the code at least once, you can reverse engineer the binary through Reflector.
Doesn't look like it. To prevent this in future though, save and save often. Also look at using version control such as SVN or Git.
IIS has DLLs cached under C:\Windows\Microsoft.NET\v4.0.xyz\Temporary ASP.NET Files. Look for the DLL and use a decompiler. I use ILSpy.
Save often and use source control. I use CVS, personally.
It sounds like the IDE had the file(s) open for writing at byte 0 when the computer went down, clearing everything out.
Beyond your software problems, I suggest you manage your power plug in such a way that it won't be accidentally switched off.
Same thing happened to me and thought I would post it here for those who would come here for answers.
If you have compiled and built the code at least once, you can reverse engineer the binary. Reflector did the trick for me.
Visual Studio still makes source files empty on sudden crashes, so I think I should share my solution.
Use any cloud file syncing service that supports file versions (for deleted files, too). Dropbox and Google Drive are the ones I can name. I randomly preferred Google Drive, though Dropbox can do all the same things.
I simply put my source tree in Google Drive, because it has file versions. My builds happen in Google Drive too, so there's a lot of unwanted traffic for big projects, but you can exclude some subfolders from syncing.
The drawback is that sometimes (in rare cases) Google Drive locks files and Visual Studio pops up a "Save As..." dialog or some messages. You can usually close it, then save again successfully. In very rare cases I had "The file is used by another process" errors, and I had to restart Google Drive.

Directory.SetCurrentDirectory throws PathTooLongException

There are several related questions on Stack Overflow, but either my situation is different or I am too dumb to relate those to my situation. I am hoping someone can help me with this. Further, I am not even much of a .NET developer, so I apologize in advance for any wrong terminology.
My scenario is as follows: the tool that is used to deploy our .NET application (ClickOnce?) puts it in a directory whose full name exceeds 300 characters. The application uses a third party component -- let's call it dbstore -- that processes the specified file that resides in the application deployment directory.
So far we were using Assembly.GetExecutingAssembly().GetName().CodeBase to construct the fully qualified name of the file to pass to dbstore. But dbstore uses old style APIs and fails when it tries to open the file.
Since dbstore is not expected to change soon, it was recommended that the application chdir to the deployment directory and pass it a path relative to the current directory. This is also the approach described in the accepted answer to PathTooLongException in C# code.
However, I find that Directory.SetCurrentDirectory also throws PathTooLongException. This happens even when I am using a UNC-style path name, e.g. a name starting with \\?\0000000000000\...
Am I doing something fundamentally wrong? Is there another function to use?
EDIT: It seems there is no way to achieve what I am looking for. Far as I can tell there is no way to set current directory to a long path.
Do you get a similar result when setting Environment.CurrentDirectory?
If so, you may want to change your directory subfolder by subfolder.
EDIT:
Windows actually sets a limitation of 255 chars for a file path (WinXP) or 260 chars (Vista).
Note that this limitation does not apply for the filesystem, so you can have a file stored in such a long directory path, but Windows Explorer and many Windows services cannot read from such path.
Actually it seems to also apply to .NET Framework methods, since you cannot access such files. You may need to write your own filesystem API, but that's a bit too much overhead. Can't you just shorten the file path? Does Windows offer a shortened way to address a file (like 8.3 short file names)?
Source: http://labnol.blogspot.com/2006/10/limitations-with-long-file-names-on.html
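For completeness, the subfolder-by-subfolder suggestion above would look roughly like the sketch below; each individual call receives a short string, but .NET still normalizes the cumulative path internally, so - consistent with the asker's edit - it may still throw PathTooLongException on the Framework. This is a sketch of the idea, not a verified fix, and it assumes a drive-rooted path.

    using System.IO;

    static class LongPathNavigator
    {
        // Attempts to reach a very deep directory one segment at a time so that no
        // single call receives a string longer than MAX_PATH.
        public static void ChangeToLongDirectory(string fullPath)
        {
            string[] parts = fullPath.Split(Path.DirectorySeparatorChar);

            // Root first ("C:" + separator), then descend relatively, one level per call.
            Directory.SetCurrentDirectory(parts[0] + Path.DirectorySeparatorChar);

            for (int i = 1; i < parts.Length; i++)
            {
                if (parts[i].Length == 0)
                    continue;
                Directory.SetCurrentDirectory(parts[i]);
            }
        }
    }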

Responding to interactions with files in C#?

I want to write a program that will encrypt an entire folder and its sub-folders. I have no problem doing this, but I would like to make the entire encryption process rather transparent by letting a user double click a file and have it open as if it weren't encrypted - say, if it were a picture or a Word document, it would open in its respective application.
How can a running program of mine be notified about the opening of a target file, stop the file from opening, do what it needs to do (decrypt), and then run the resulting decrypted file?
How can I watch a file and do this in C#? Can I watch for other interactions, like the user copying a watched file (since it won't be in a watched folder, it should be decrypted, e.g. when it's dragged to a USB device), or deleting a watched file (say if I want to shred a file before deletion)?
P.S. The FileSystemWatcher doesn't quite meet my needs. EDIT: What I mean is that FileSystemWatcher will tell me when a file is being opened, deleted and all those events, but it won't let me step in real quick, decrypt the file, and hand it back to the process that normally opens that file.
You can rename files, adding your own extension, like thepicture.jpg.encrypted. Set your program as the default program for this extension and handle opening them.
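As a rough illustration of that approach (not a real API - Decrypt() is a placeholder for whatever cipher you actually use): the program registered for the .encrypted extension receives the file path on the command line, writes a decrypted copy next to it, launches the default application for the inner extension, and cleans up afterwards.

    using System;
    using System.Diagnostics;
    using System.IO;

    class EncryptedFileHandler
    {
        static void Main(string[] args)
        {
            string encryptedPath = args[0];                               // e.g. thepicture.jpg.encrypted
            string plainPath = Path.ChangeExtension(encryptedPath, null); // strips ".encrypted" -> thepicture.jpg

            File.WriteAllBytes(plainPath, Decrypt(File.ReadAllBytes(encryptedPath)));

            // Hand the plaintext to whatever application owns the inner extension.
            Process viewer = Process.Start(plainPath);
            if (viewer != null)            // Start returns null when an existing process is reused
                viewer.WaitForExit();

            File.Delete(plainPath);        // best effort; the viewer may still hold the file open
        }

        // Placeholder -- substitute your real decryption routine here.
        static byte[] Decrypt(byte[] cipherText)
        {
            throw new NotImplementedException();
        }
    }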
It's impossible in C#. The bare minimum would need you to use user-mode hooks on NtCreateFile, NtOpenFile, etc. You can't achieve that in C#. That wouldn't even work properly due to kernel-mode code which may try to access your files. The proper way of doing this would be to write an I/O minifilter (in C, of course).
EDIT: If you're really desperate, try EasyHook - it allows you to hook functions from C#. I haven't tried it though, and it does seem risky hooking vital functions like NtCreateFile. Plus you need a fair bit of Native API knowledge.
Are you using Windows? If so, why not use the built-in BitLocker?
See this link:
BitLocker drive encryption
If you are thinking about a competitive application to BitLocker, add a comment, as I can point you in that direction as well.
Instead of trying to reinvent the wheel, use NTFS file encryption. You can encrypt single files or entire folders or drives. Plus it's completely transparent to the user and does exactly what you ask (e.g. automatically decrypts when copying to a USB drive, etc.). Just use System.IO.File.Encrypt(string) - there couldn't be anything easier.
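For the folder-and-subfolders part of the question, a minimal sketch of that EFS approach could look like the following; it assumes an NTFS volume and a Windows edition where EFS is available (it isn't on Home editions, for example).

    using System.IO;

    static class EfsHelper
    {
        // Encrypts every file under root with NTFS EFS for the current Windows account.
        // Reads stay transparent for that account afterwards; copying to FAT-formatted
        // USB media decrypts the data as part of the copy.
        public static void EncryptTree(string root)
        {
            foreach (string file in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
                File.Encrypt(file);
        }
    }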
You can't do this from usermode.
Unfortunately the only way to do this is to write a minifilter driver. Minifilter drivers allow you to intercept IO requests to files, you can then encrypt/decrypt the files you care about on the fly.
It sounds simple, but encryption minifilter drivers are very, very, difficult to get right. You will have to end up shadowing file objects which is a real challenge. Check with www.osr.com, they have a ton of information on doing exactly what you want to do.
If you choose to go this route I would recommend getting a copy of VMware Workstation and downloading VirtualKD. It will let you debug into a VM at near FireWire speeds. I would start with x64 Win7 and get remote shares working first.
