Edit web.config using get-content and replace in powershell - c#

I'm new to PowerShell commands and I use Get-Content and then .Replace() to edit part of web.config on a server. I wanted to know if this method of editing web.config is safe or not?
thanks

In a word, no.
web.config is an XML file, and should be edited as an XML document. The file contents are text (rather than binary), but the catch is that the XML element structure must be maintained. Editing the file as text is prone to mistakes, such as mixing up open/close element order, dropping elements, or failing to escape reserved characters.
If one makes a single mistake when editing XML as if it were an ordinary text file, any application that expects a valid XML document will throw an error because the file is no longer valid.
To be honest, it is possible to update an XML config as if it were a text file. Why take the risk, though? Since .NET, and thus PowerShell, is capable of parsing and processing XML, load the config file as an XML document, update the contents and save it. By processing it as an XML document, the .NET libraries take care of writing the file so that the result is always valid XML.
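As a rough sketch (in C#, since the question is tagged c#; the same idea works from PowerShell through the [xml] type accelerator, which also gives you an XmlDocument), something along these lines keeps the file well formed. The path, XPath and setting name below are placeholders for illustration only:

using System.Xml;

class ConfigPatcher
{
    static void Main()
    {
        // Hypothetical path and setting name, purely for illustration.
        const string configPath = @"C:\inetpub\wwwroot\MyApp\web.config";

        var doc = new XmlDocument();
        doc.Load(configPath);   // parsing fails loudly here if the file is not well-formed XML

        // Update an existing <add key="..." value="..."/> element under <appSettings>.
        XmlNode node = doc.SelectSingleNode("/configuration/appSettings/add[@key='MySetting']");
        if (node?.Attributes?["value"] != null)
        {
            node.Attributes["value"].Value = "newValue";
            doc.Save(configPath);   // the saved result is well-formed XML again
        }
    }
}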
There are existing questions on SO; try searching for "powershell web.config $setting-you-want-to-change" for similar cases.
Also, the config can be updated via the WebAdministration cmdlets, for example when adding SSL settings.

Related

OpenXmlSDK can't read manually created xlsx file: 'The specified package is invalid. The main part is missing.'

I have a third-party library which creates the xlsx file. It doesn't use OpenXmlSDK; it assembles the file from fragments of XML markup. The ZipArchive class is used for zipping.
But when I try to open it with OpenXmlSDK:
var document = SpreadsheetDocument.Open(fileStream, false);
it fails with the error:
DocumentFormat.OpenXml.Packaging.OpenXmlPackageException: 'The specified package is invalid. The main part is missing.'
MS Excel opens this file normally. Resaving from Excel helps.
Also, if I unzip the files, then zip them again (without any changes) and call the above code again, it works.
Where is the problem? How do I zip an xlsx file so that it is ready for OpenXmlSDK?
SOLUTION
The problem was with how the third-party library saved the file. The entries included in the zip had names with \ instead of /. The code of that library was edited to fix that and all is OK.
After some research I found people complaining about this exception in two scenarios:
the document uses or references a font that is not installed (as described here: https://github.com/OfficeDev/Open-XML-SDK/issues/561)
an invalid file name extension (anything other than xlsx, as described here: https://social.msdn.microsoft.com/Forums/office/en-US/6e7e27d4-cd97-46ae-9eca-bfd618dde301/openxml-sdk20-the-specified-package-is-invalid-the-main-part-is-missing?forum=oxmlsdk)
Since you open the file from a stream, the second cause is rather unlikely to apply in this case.
If font usage is not the cause, try to manually compare file versions before and after saving with Excel in Open XML Productivity Tool (https://www.microsoft.com/en-us/download/details.aspx?id=30425).
If there are no differences in documents' contents, try to compare archive compression settings.
UPDATE
It seems I've found some more information about the issue that can help to find the solution.
I was able to reproduce the 'The main part is missing.' error by creating an archive with ZipFile.CreateFromDirectory(@"C:\DirToCompress", destFilePath, CompressionLevel.Fastest, false);.
Then I checked that opening the file with Package.Open(destFilePath, FileMode.Open, FileAccess.Read) actually listed 0 parts found in the file.
After verifying some differences, I noticed that in the correct xlsx file, entries nested within folders in the archive have FullName paths presented using / character, for example: _rels/.rels. In the corrupted file, the names were written with \ character, for example: _rels\.rels.
You can investigate it by opening the file using the ZipArchive class (for example: new ZipArchive(archiveStream, ZipArchiveMode.Read, false, UTF8Encoding.UTF8)) and inspecting the Entries collection.
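For instance, a minimal sketch of that inspection (the file name is a placeholder for the file produced by the third-party library):

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class ListEntryNames
{
    static void Main()
    {
        using var archiveStream = File.OpenRead("broken.xlsx");
        using var archive = new ZipArchive(archiveStream, ZipArchiveMode.Read, false, UTF8Encoding.UTF8);

        // A valid package shows nested names with '/', e.g. "_rels/.rels";
        // the corrupted one shows "_rels\.rels" instead.
        foreach (var entry in archive.Entries)
            Console.WriteLine(entry.FullName);
    }
}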
The important thing to note is that there are naming rules for parts described in the Office Open XML specification: https://www.ecma-international.org/news/TC45_current_work/Office%20Open%20XML%20Part%202%20-%20Open%20Packaging%20Conventions.pdf
As a test, I wrote code that opens the corrupted xlsx file using the ZipArchive class and rewrites each entry by copying its contents and replacing \ with / in the name of the recreated entry. After this operation, the resulting file seems to be opened correctly by the SpreadsheetDocument.Open(...) method.
Please note that the name-fixing method I used was very simple and may not be sufficient or work correctly in every scenario. However, these notes may help you find the desired solution for the issue.
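A rough sketch of that test follows; the file names are placeholders, and the naive character replacement carries the caveat above:

using System.IO;
using System.IO.Compression;

class FixEntryNames
{
    static void Main()
    {
        using var source = ZipFile.OpenRead("broken.xlsx");
        using var targetStream = File.Create("fixed.xlsx");
        using var target = new ZipArchive(targetStream, ZipArchiveMode.Create);

        foreach (var entry in source.Entries)
        {
            // Recreate every entry under a '/'-separated name and copy its contents over.
            var newEntry = target.CreateEntry(entry.FullName.Replace('\\', '/'), CompressionLevel.Optimal);
            using var input = entry.Open();
            using var output = newEntry.Open();
            input.CopyTo(output);
        }
    }
}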

XML file generated in windows not loading in linux environment

We are generating an XML file in C# using XmlSerializer and UTF-8 encoding. We check the output, and the XML is well formed and passes XSD validation.
We send this XML to a customer who loads it in a UNIX environment. They keep telling us that the XML is not valid and has invalid characters. We don't have a UNIX environment to test with.
The question being: is there any difference when loading XML files in UNIX?
What can we ask the customer to provide to better understand this situation?
You might have a UTF-8 BOM as the first three bytes of your file:
<?xml version="1.0" encoding="utf-8"?>
It is not part of the XML document, so a file reader should not pass it on to be interpreted by the XML parser. If you have it, you could try to remove it and see if your users have the same complaint. Most editors will not show it to you, so you might have to use a hex editor. (Hex: EF BB BF).
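If it does turn out to be the BOM, one way to avoid writing it in the first place is to hand XmlSerializer an XmlWriter configured with a BOM-less UTF-8 encoding. A minimal sketch, where the Order type and file name are placeholders:

using System.Text;
using System.Xml;
using System.Xml.Serialization;

public class Order      // placeholder type standing in for whatever you serialize
{
    public int Id { get; set; }
}

class Program
{
    static void Main()
    {
        var settings = new XmlWriterSettings
        {
            // new UTF8Encoding(false) = UTF-8 without the EF BB BF preamble;
            // the default Encoding.UTF8 writes the BOM when the writer targets a file.
            Encoding = new UTF8Encoding(false),
            Indent = true
        };

        using var writer = XmlWriter.Create("order.xml", settings);
        new XmlSerializer(typeof(Order)).Serialize(writer, new Order { Id = 1 });
    }
}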
If the problem remains, you'd need to know at what byte offset the purported invalid characters are and which section of the XML specification they violate. Which program and version they use and what feedback it gives might be helpful, too.
You might also consider that the file is getting damaged in delivery. A round trip transmission might help detect that.

What's the best structure to conserve file related information?

I am building an interface whose primary function would be to act as a file renaming tool (the underlying task here is to manually classify each file within a folder according to rules that describe their content). So far, I have implemented a customized file explorer and a preview window for the files.
I now have to find a way to inform a user if a file has already been renamed (this will show up in the file explorer's listView). The program should be able to read as well as modify that state as the files are renamed. I simply do not know what method is optimal to save this kind of information, as I am not fully used to C#'s potential yet. My initial solution involved text files, but again, I do not know if there should be only one text file for all files and folders or simply a text file per folder indicating the state of its contained items.
A colleague suggested that I use an Excel spreadsheet and then simply import the row or columns corresponding to my query. I tried to find more direct data structures, but again I would feel a lot more comfortable with some outside opinion.
So, what do you think would be the best way to store this kind of data?
PS: There are many thousands of files, all of them TIFF images, located on a remote server to which I have complete access.
I'm not sure what you're asking for, but if you simply want to keep some of each file's information such as name, date, size etc., you could start from the FileInfo class. Note that XmlSerializer cannot serialize FileInfo directly (it has no parameterless constructor), so in practice you would copy the properties you need into a small serializable class of your own and write an array of those to an XML file with XmlSerializer.
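A minimal sketch of that idea, assuming a hypothetical FileRecord class and placeholder paths; the Renamed flag is the per-file state from the question:

using System;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

// A small serializable stand-in for the FileInfo properties you care about.
public class FileRecord
{
    public string Name { get; set; }
    public long Size { get; set; }
    public DateTime LastWriteTimeUtc { get; set; }
    public bool Renamed { get; set; }   // has this file already been renamed?
}

class Program
{
    static void Main()
    {
        var records = new DirectoryInfo(@"\\server\share\images")   // placeholder path
            .GetFiles("*.tif")
            .Select(f => new FileRecord
            {
                Name = f.Name,
                Size = f.Length,
                LastWriteTimeUtc = f.LastWriteTimeUtc,
                Renamed = false
            })
            .ToArray();

        var serializer = new XmlSerializer(typeof(FileRecord[]));
        using var stream = File.Create("filestate.xml");
        serializer.Serialize(stream, records);
    }
}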
I am not sure I understand your question, but from what I gather you basically want to store metadata about each file. If this is the case, I can make two suggestions.
Store the metadata in a simple XML file, one XML file per folder if you have multiple folders; the XML file could be a hidden file. Then your custom application can load the file, if it exists, when you navigate to the folder and present the data to the user.
If you are using NTFS and you know this will always be the case, you can store the metadata for the file in a file stream (an NTFS alternate data stream). This is not a .NET stream, but an extra stream of data that can be stored and moved around with each file without impacting the actual file's content. The nice thing about this is that no matter where you move the file, the metadata will move with it, as long as it stays on NTFS.
Here is more info on file streams:
http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
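As a rough sketch of the second suggestion, assuming a recent .NET (Core) runtime on Windows where FileStream accepts the file:streamname syntax (on .NET Framework this throws NotSupportedException and you would need the Win32 CreateFile API from the linked article); the file and stream names are placeholders:

using System;
using System.IO;

class AdsSketch
{
    static void Main()
    {
        // Alternate data stream "renamestate" attached to the image file; NTFS only.
        const string adsPath = @"image0001.tif:renamestate";

        // Write the per-file state into the alternate stream.
        using (var writer = new StreamWriter(new FileStream(adsPath, FileMode.Create, FileAccess.Write)))
        {
            writer.Write("renamed");
        }

        // Read it back later.
        using var reader = new StreamReader(new FileStream(adsPath, FileMode.Open, FileAccess.Read));
        Console.WriteLine(reader.ReadToEnd());
    }
}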
You could create an object-oriented structure and then serialize the root object to a binary file or to an XML file. You could represent just about any structure this way, so you wouldn't have to struggle with the "one text file for all files and folders or a text file per folder" design issue from the question. You would just have one file containing all of the metadata that you need to store. If you want speedier opening/saving and smaller size, go with binary, and if you want something that other people could open and view and potentially write their own software against, use XML.
There are lots of variations on how to do this, but to get you started, here is one article from a quick Google search:
http://www.codeproject.com/KB/cs/objserial.aspx

Manipulating a file that is like XML

I have a file that I want to read and manipulate. It is XML-like but is not an actual XML file. It does reference a DTD, however. What part of the .NET Framework can I use to do the above? Will the XML APIs somehow work with this file?
Based on your reply to my comments, it sounds like, if all that is different from a standard XML document is the lack of a header, the tools should work perfectly fine with that data. I would give System.Xml a try, and if that doesn't work, try prepending the header.
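One thing worth knowing when trying System.Xml here: XmlReader rejects documents that declare a DTD by default (DtdProcessing.Prohibit), so the DTD reference has to be allowed or ignored explicitly. A minimal sketch, with the file name as a placeholder:

using System.Xml;

class LoadXmlLikeFile
{
    static void Main()
    {
        var settings = new XmlReaderSettings
        {
            // Skip over the DOCTYPE instead of throwing; use DtdProcessing.Parse to honour it.
            DtdProcessing = DtdProcessing.Ignore
        };

        var doc = new XmlDocument();
        using (var reader = XmlReader.Create("input.dat", settings))
        {
            doc.Load(reader);   // still throws XmlException if the content is not well formed
        }
    }
}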

How can I append text to a start and end of a file as header and footer?

I'm filling an XML document manually using C# and I need to have <data> as a header and </data> as a footer for the whole XML file. Is there an easy way to do that? I know it can be done but I couldn't find a way to do it. Keep in mind that I'm updating the entries, so I need to make sure that they always come between the header and the footer.
Thanks
Example
<data>
Entry
New Entry 1
New Entry 2
</data>
As for appending text to a file, this is easy: just open the file in append mode and write the text.
For inserting, there is no POSIX or Windows way to insert text at the start of an existing file, so you need to write a new file with the header and then write the rest of the content after it.
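A rough sketch of that rebuild, assuming the entries live as plain lines in a working file; both file names are placeholders:

using System.IO;

class RebuildWithHeaderFooter
{
    static void Main()
    {
        using var output = new StreamWriter("data.xml");

        output.WriteLine("<data>");                     // header
        foreach (var line in File.ReadLines("entries.txt"))
            output.WriteLine(line);                     // existing entries, copied as-is
        output.WriteLine("</data>");                    // footer
    }
}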
I'm not sure what you are trying to do makes sense, and I agree with Brian that there is no way to prepend to a file without writing a new one. A better approach would be to store the Entry items (in a separate file or other storage medium) and then generate the final version as Brian stipulates.
If you are manipulating XML content generally, you might consider using LINQ to XML or the native .NET XML libraries.
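For the LINQ to XML route, a minimal sketch; it assumes each entry becomes an <entry> element under the <data> root (the question's example shows bare text lines, so adjust to taste), and the file name is a placeholder:

using System.IO;
using System.Xml.Linq;

class LinqToXmlSketch
{
    static void Main()
    {
        const string path = "data.xml";

        // Load the existing document, or start a new one with an empty <data> root.
        var doc = File.Exists(path)
            ? XDocument.Load(path)
            : new XDocument(new XElement("data"));

        // New entries always go inside the root, so <data>...</data> stays wrapped around them.
        doc.Root.Add(new XElement("entry", "New Entry 3"));
        doc.Save(path);
    }
}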
Well, the simplest approach is to use these three commands in the following order. Of course, this doesn't check whether you have already added the tags or not; for that, you should use XML-related libraries.
Assume the file is ab.txt. Run these DOS commands:
echo ^<data^> > new_ab.txt
type ab.txt >> new_ab.txt
echo ^<^/data^> >> new_ab.txt
:)
There are many ways to do it. Do you want to do it via C# code? It will be a bit lengthy to put it here :) I'm sure you know how to read/write files via C#.
Cheers!
Assumption: you are working on Windows. For Unix, the commands are much easier. :)
