json files in a resx are binary instead of text - c#

I have some test data (JSON files) that I use while testing some software. It is static data, and I need the tests to run both locally and on build machines that I don't have much control over. To get uniform access to the test data (JSON files), I have put them into a RESX file, and that is working nicely, except that I had to change the extension of the files from .json to .txt.
If I left it as .json, the file was added to the resx as "Binary" instead of "Text File". That by itself wasn't the end of the road: I simply read out the bytes and converted them back to a string, but when I tried to deserialize the string (after the conversion from byte[]) I got an exception for an unexpected character at position 0, line 0.
The only real downside to the .txt extension is that I lose the color coding in the IDE for a JSON file.
Is there a way to force the RESX to treat the .json extension as a "Text File"?

Maybe it's too late, but there is a very simple way to achieve what you want. Just select the desired file in the resources window, hit F4 (or open the Properties window) and set the proper FileType there. It has two options: Binary and Text.
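If the resource does stay as a byte[], the "unexpected character at position 0" error is often just a UTF-8 byte order mark surviving the byte[]-to-string conversion; here is a minimal sketch of a decode that consumes the BOM (the resource name TestJson is illustrative):

using System.IO;
using System.Text;

byte[] raw = Properties.Resources.TestJson;   // the byte[] resource (name assumed)
string json;
using (var reader = new StreamReader(new MemoryStream(raw), Encoding.UTF8, detectEncodingFromByteOrderMarks: true))
{
    json = reader.ReadToEnd();   // any BOM is consumed here instead of leaking into the string
}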

A colleague of mine just encountered a similar issue with a .cshtml file. The actual content of the file didn't seem to matter; renaming the file to an extension that was known to work caused it to be added as Text.
Also, Axm's suggestion worked to fix it, but we initially missed it because you can't right-click on the resource and select properties.

Related

OpenXmlSDK can't read manually created xlsx file: 'The specified package is invalid. The main part is missing.'

I have a third-party library that creates the xlsx file. It doesn't use the OpenXmlSDK; it assembles the file from fragments of XML markup and zips them using the ZipArchive class.
But when I try to open it with the OpenXmlSDK:
var document = SpreadsheetDocument.Open(fileStream, false);
it fails with error:
DocumentFormat.OpenXml.Packaging.OpenXmlPackageException: 'The specified package is invalid. The main part is missing.'
MS Excel opens this file normally, and resaving it from Excel fixes the problem.
Also, if I unzip the file and then zip it again (without any changes), the code above works.
Where is the problem? How should the xlsx file be zipped so that it is ready for the OpenXmlSDK?
SOLUTION
The problem was with how the third-party library saved the file: entries added to the zip had names using \ instead of /. The library's code was edited to fix that, and now everything works.
After some research I found people complaining about this exception in two scenarios:
the document uses or references a font that is not installed (as described here:
https://github.com/OfficeDev/Open-XML-SDK/issues/561)
an invalid file name extension (anything other than .xlsx, as described here: https://social.msdn.microsoft.com/Forums/office/en-US/6e7e27d4-cd97-46ae-9eca-bfd618dde301/openxml-sdk20-the-specified-package-is-invalid-the-main-part-is-missing?forum=oxmlsdk)
Since you open the file from a stream, the second cause probably does not apply in this case.
If font usage is not the cause, try to manually compare file versions before and after saving with Excel in Open XML Productivity Tool (https://www.microsoft.com/en-us/download/details.aspx?id=30425).
If there are no differences in documents' contents, try to compare archive compression settings.
UPDATE
It seems I've found some more information about the issue that can help to find the solution.
I was able to reproduce the 'The main part is missing.' error by creating the archive with ZipFile.CreateFromDirectory(@"C:\DirToCompress", destFilePath, CompressionLevel.Fastest, false);.
Then, I've checked that opening the file with Package.Open(destFilePath, FileMode.Open, FileAccess.Read) actually listed 0 parts found in the file.
After verifying some differences, I noticed that in the correct xlsx file, entries nested within folders in the archive have FullName paths presented using / character, for example: _rels/.rels. In the corrupted file, the names were written with \ character, for example: _rels\.rels.
You can investigate it by opening a file using ZipArchive class (for example: new ZipArchive(archiveStream, ZipArchiveMode.Read, false, UTF8Encoding.UTF8);) and inspecting the Entries collection.
The important thing to note is that there are naming rules for parts described in the Office Open XML specification: https://www.ecma-international.org/news/TC45_current_work/Office%20Open%20XML%20Part%202%20-%20Open%20Packaging%20Conventions.pdf
As a test, I wrote code that opens the corrupted xlsx file using the ZipArchive class and rewrites each entry, copying its contents and replacing \ with / in the name of the recreated entry. After this operation, the resulting file opens correctly with the SpreadsheetDocument.Open(...) method.
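For reference, a rough sketch of that kind of rewrite (file paths are illustrative):

using System.IO;
using System.IO.Compression;

static void FixEntryNames(string corruptedPath, string fixedPath)
{
    using (var source = ZipFile.OpenRead(corruptedPath))
    using (var target = ZipFile.Open(fixedPath, ZipArchiveMode.Create))
    {
        foreach (var entry in source.Entries)
        {
            // Recreate the entry under a name that uses '/' as the separator.
            var newEntry = target.CreateEntry(entry.FullName.Replace('\\', '/'));
            using (var from = entry.Open())
            using (var to = newEntry.Open())
            {
                from.CopyTo(to);
            }
        }
    }
}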
Please note that the name-fixing method I used was very simple and may not be sufficient or work correctly in every scenario. However, these notes may help you find the right solution for the issue.

What's the best structure to preserve file-related information?

I am building an interface whose primary function would be to act as a file renaming tool (the underlying task here is to manually classify each file within a folder according to rules that describe their content). So far, I have implemented a customized file explorer and a preview window for the files.
I now have to find a way to inform the user whether a file has already been renamed (this will show up in the file explorer's ListView). The program should be able to both read and modify that state as files are renamed. I simply do not know which method is optimal for saving this kind of information, as I am not yet fully used to C#'s potential. My initial solution involved text files, but again, I do not know whether there should be one text file for all files and folders or a text file per folder indicating the state of its contained items.
A colleague suggested that I use an Excel spreadsheet and then simply import the row or columns corresponding to my query. I tried to find more direct data structures, but again I would feel a lot more comfortable with some outside opinion.
So, what do you think would be the best way to store this kind of data?
PS: There are many thousands of files, all of them TIFF images, located on a remote server to which I have complete access.
I'm not sure what you're asking for, but if you simply want to keep some of a file's information, such as name, date, size, etc., you could use the FileInfo class. It is marked as serializable, so you could easily write an array of them to an XML file by invoking the Serialize method of an XmlSerializer.
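A minimal sketch of that idea, copying the fields you need into a small serializable type first (the class and property names below are purely illustrative):

using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class FileRecord                 // illustrative DTO, not a BCL type
{
    public string Name { get; set; }
    public long Length { get; set; }
    public DateTime LastWriteTimeUtc { get; set; }
    public bool Renamed { get; set; }   // the "has this file been processed" flag
}

public static class MetadataStore
{
    public static void Save(string path, List<FileRecord> records)
    {
        var serializer = new XmlSerializer(typeof(List<FileRecord>));
        using (var stream = File.Create(path))
            serializer.Serialize(stream, records);
    }

    public static List<FileRecord> Load(string path)
    {
        var serializer = new XmlSerializer(typeof(List<FileRecord>));
        using (var stream = File.OpenRead(path))
            return (List<FileRecord>)serializer.Deserialize(stream);
    }
}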
I am not sure I understand your question, but what I gather is that you basically want to store metadata about each file. If that's the case, I can make two suggestions.
Store the metadata in a simple XML file, one per folder if you have multiple folders; the XML file could be hidden. Your custom application can then load the file, if it exists, when you navigate to the folder and present the data to the user.
If you are using NTFS and you know this will always be the case, you can store the metadata for the file in a file stream (an NTFS alternate data stream). This is not a .NET stream, but an extra stream of data that is stored and moved around with each file without affecting the actual file content. The nice thing about this is that no matter where you move the file, the metadata moves with it, as long as it stays on NTFS.
Here is more info on file streams:
http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
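A small sketch of that alternate-data-stream approach; note that the "file:streamname" path form is accepted by FileStream on .NET Core / .NET 5+, while the classic .NET Framework rejects the colon (the stream name renameState is illustrative):

using System.IO;
using System.Text;

static void WriteState(string filePath, string state)
{
    // Writes the state into an NTFS alternate data stream attached to the file.
    using (var stream = new FileStream(filePath + ":renameState", FileMode.Create, FileAccess.Write))
    {
        var bytes = Encoding.UTF8.GetBytes(state);
        stream.Write(bytes, 0, bytes.Length);
    }
}

static string ReadState(string filePath)
{
    using (var stream = new FileStream(filePath + ":renameState", FileMode.Open, FileAccess.Read))
    using (var reader = new StreamReader(stream, Encoding.UTF8))
        return reader.ReadToEnd();
}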
You could create an object-oriented structure and then serialize the root object to a binary file or to an XML file. You could represent just about any structure this way, so you wouldn't have to struggle with the
"I do not know if there should be only one text file for all files and folders or simply a text file per folder indicating the state of its contained items."
design issue. You would just have one file containing all of the metadata that you need to store. If you want faster opening/saving and a smaller file, go with binary; if you want something that other people could open, view, and potentially write their own software against, use XML.
There's lots of variations on how to do this, but to get you started here is one article from a quick Google:
http://www.codeproject.com/KB/cs/objserial.aspx

images added into resx file

This has been an interesting file format for me.
I found that whenever an image is added to it, the image is actually converted into bytes and stored as data rather than as a file.
I just want to know the benefit of this. Also, is there any chance of the bytes getting corrupted when the file is put into version control (most unlikely)?
A file is nothing but data written to disk, and it usually contains the actual data plus some header information (e.g. the image type, so you know how to interpret the data). If it helps you understand it, think of the resx file as a "mini" file system from which the image data can be retrieved.
If put in version control, resx files should be no problem.
You might want to take a look at ResourceManager.
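A small sketch of reading an image back out at runtime (MyApp.Properties.Resources, Program and MyImage are illustrative names; the designer-generated Properties.Resources wrapper does the same thing for you):

using System.Drawing;
using System.Resources;

// Via ResourceManager directly:
var manager = new ResourceManager("MyApp.Properties.Resources", typeof(Program).Assembly);
var image = (Bitmap)manager.GetObject("MyImage");

// Or via the generated wrapper class:
// Bitmap image = Properties.Resources.MyImage;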

Getting the type of file from the ASP.NET FileUpload control?

I want to get the type of a file uploaded using the ASP.NET FileUpload control. When a file is uploaded, I want to be able to determine its type so I can assign an icon to it (such as a Word, Excel or PDF icon).
Here is the problem: I can't go by the file extension, because a file could be called test.xxxxxxxx and still be a valid PDF, or a file might not have an extension at all.
The other option is to read the content type, but some of these appear not to be standard or in a simple-to-read format (Excel files, for example), so is there another option for determining the file type?
I would review the process that names a PDF file "test.xxxxxxxx" without a .pdf extension. Most workflows ensure that files follow some kind of naming convention (manual or automated).
If you cannot rely on the extension, the file format will need to be detected by interrogating markers that identify a known file format: e.g. if the stream starts with or contains "%PDF", it's a PDF.
If you don't know the extension, then you would have to know the file format of popular file types and then read the file and see if it matches one of your known formats. That doesn't sound optimal, though.
An easy way to check the true type of a file server side, using System.IO.BinaryReader, is described here:
http://forums.asp.net/post/2680667.aspx
and VB version here:
http://forums.asp.net/post/2681036.aspx
You'll need to know the binary 'codes' for the file type(s) you're checking for, but you can get those by implementing this solution and debugging the code.
Also note that when the BinaryReader is closed with the r.Close() statement, FileUploader.HasFile will become False.
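A rough sketch of such a check using FileUpload.FileBytes, which avoids having to close any reader (the signature list is illustrative and far from exhaustive):

using System.Linq;

static string DetectType(byte[] bytes)
{
    // A few well-known "magic number" prefixes.
    if (bytes.Length >= 4 && bytes[0] == 0x25 && bytes[1] == 0x50 && bytes[2] == 0x44 && bytes[3] == 0x46)
        return "pdf";                                   // "%PDF"
    if (bytes.Length >= 2 && bytes[0] == 0x50 && bytes[1] == 0x4B)
        return "zip container (xlsx, docx, ...)";       // "PK"
    if (bytes.Length >= 8 && bytes.Take(8).SequenceEqual(new byte[] { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 }))
        return "OLE2 compound file (legacy xls, doc)";
    return "unknown";
}

// Usage in the upload handler:
// if (FileUploader.HasFile) { string kind = DetectType(FileUploader.FileBytes); }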

how to create a custom file extension in C#?

I need help creating a custom file extension in my C# app. I made a basic notes-management app, and right now I'm saving my notes as .rtf (note1.rtf). I want to create a file extension that only my app understands (like note.not, maybe).
As a deployment point, you should note that ClickOnce supports file extensions (as long as it isn't in "online only" mode). This makes it a breeze to configure the system to recognise new file extensions.
You can find this in project properties -> Publish -> Options -> File Associations in VS2008. If you don't have VS2008 you can also do it manually, but it isn't fun.
File extensions are an arbitrary choice for your format; what matters is that your application registers a certain file extension as a file of a certain type in Windows, typically upon installation.
Coming up with your own file format usually means you save your data in a form that only your application can parse. It can be plain text or binary, and it can even use XML or whatever format you like; the point is that your app should be able to parse it easily.
There are two possible interpretations of your question:
What should be the file format of my documents?
You are currently saving your notes in the RTF format. No matter what file name extension you choose, any application that understands RTF will be able to open your notes, as long as the user knows they are RTF and points that application at the file.
If you want to save your documents in a custom file format so that other applications cannot read them, you need to write code that takes the RTF stream produced by the Rich Edit control (I assume that's what you use as the editor in your app) and serializes it into a binary stream using your own format.
I personally would not consider this worth the effort...
What should be the file name extension of my documents?
You are currently saving your documents in RTF format with the .rtf file name extension. Other applications are associated with that extension, so double-clicking such a file in Windows Explorer opens that application instead of yours.
If you want double-clicking your file in Windows Explorer to open your app, you need to change the file name extension you are using AND create the proper association for that extension.
The file extension associations are defined by entries in the registry. You can create these per-machine (in HKLM\Software\Classes) or per-user (in HKCU\Software\Classes), though per-machine is the most common case. For more details about the actual registry entries and links to MSDN documentation and samples, check my answer to this SO question on Vista document icon associations.
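A minimal per-user sketch of those registry entries (the .not extension, the MyNotesApp.Note ProgID and the paths are all illustrative):

using Microsoft.Win32;

// Map the extension to a ProgID under HKCU\Software\Classes.
using (var ext = Registry.CurrentUser.CreateSubKey(@"Software\Classes\.not"))
    ext.SetValue("", "MyNotesApp.Note");

// Describe the ProgID: display name, icon and the open command.
using (var progId = Registry.CurrentUser.CreateSubKey(@"Software\Classes\MyNotesApp.Note"))
{
    progId.SetValue("", "My Notes Document");
    using (var icon = progId.CreateSubKey("DefaultIcon"))
        icon.SetValue("", @"C:\Program Files\MyNotesApp\MyNotesApp.exe,0");
    using (var cmd = progId.CreateSubKey(@"shell\open\command"))
        cmd.SetValue("", "\"C:\\Program Files\\MyNotesApp\\MyNotesApp.exe\" \"%1\"");
}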
I think it's a matter of creating the right registry values,
or check this CodeProject article.
You can save a file with whatever extension you want; just use it in the file name when saving.
I sense that your real problem is "How can I save a file in something other than RTF?". You would have to invent your own format, but you actually do not want that. You can still save RTF into a file named mynote.not.
I would advise you to keep using a format that other programs can read. Your users will be thankful when they want to do something with their notes that is not supported by your program.
