I have two code refactorings: one that, amongst other things, generates a CSV file and adds it to the Project in the solution, and another that is supposed to edit it.
I am generating and attaching the CSV file to the solution's Project, apparently successfully (it shows in the Solution Explorer afterwards), through the following code:
var doc = editor.GetChangedDocument();
SourceText st = SourceText.From(csv.GetStringDocument(), Encoding.Default);
var additionalDoc = doc.Project.AddAdditionalDocument(filename, st, new List<string> { "_TestData" });
return additionalDoc.Project.Solution;
Before returning, if I check additionalDoc.Project.AdditionalDocuments.Count(), its value is 1.
And, in the other refactoring, when I try to access the additional files in the Project, the IEnumerable is empty.
How is one supposed to access additional documents added by code refactorings/code fix providers?
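To illustrate the kind of access I mean, a stripped-down provider that tries to read the additional documents would look something like this (a sketch only, not my exact code; the provider name is a placeholder):
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CodeRefactorings;

internal class EditCsvRefactoringProvider : CodeRefactoringProvider
{
    public override Task ComputeRefactoringsAsync(CodeRefactoringContext context)
    {
        var project = context.Document.Project;

        // Expected to contain the CSV added by the first refactoring, but comes back empty
        var csvDoc = project.AdditionalDocuments
            .FirstOrDefault(d => d.Name.EndsWith(".csv"));

        // ... register a refactoring that edits csvDoc here
        return Task.CompletedTask;
    }
}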
My goal is exactly the same as stated in this issue on github:
how to read an existing .proto file and get a FileDescriptor from it
I cannot use the suggested "workaround", for two reasons:
1. I have "plain" .proto files, i.e. they are text files, just like good old addressbook.proto, and they are not self-describing.
2. I do not want to invoke the protoc compiler as an external application.
According to Marc, this is possible with the protobuf-net library:
Without a compiled schema, you would need a runtime .proto parser. [...] protobuf-net includes one (protobuf-net.Reflection)
I found Parsers.cs
Thanks Marc, but how do I use/do this?
Is this the right entry point?
Is there a minimal working example somewhere?
var set = new FileDescriptorSet();
set.Add("my.proto", true);
set.Process();
That's all you need; note that if you want to provide the actual contents (rather than having the library do the file access), there is an optional TextReader parameter. If you need imports:
set.AddImportPath(...);
Once you've called Process, the .Files should be populated along with the .MessageTypes of each file, etc.
For a more complete example:
var http = new HttpClient();
var proto = await http.GetStringAsync(
"https://raw.githubusercontent.com/protocolbuffers/protobuf/master/examples/addressbook.proto");
var fds = new FileDescriptorSet();
fds.Add("addressbook.proto", true, new StringReader(proto));
fds.Process();
var errors = fds.GetErrors();
Console.WriteLine($"Errors: {errors.Length}");
foreach(var file in fds.Files)
{
Console.WriteLine();
Console.WriteLine(file.Name);
foreach (var topLevelMessage in file.MessageTypes)
{
Console.WriteLine($"{topLevelMessage.Name} has {topLevelMessage.Fields.Count} fields");
}
}
Which outputs:
addressbook.proto
Person has 5 fields
AddressBook has 1 fields
google/protobuf/timestamp.proto
Timestamp has 2 fields
Notice that you didn't have to provide timestamp.proto or an import path for it - the library embeds a number of the common imports, and makes them available automatically.
(each file is a FileDescriptorProto; the group of files in a logical parse operation is the FileDescriptorSet - which is the root object used from descriptor.proto; note that all of the objects in this graph are also protobuf serializable, if you need a compiled/binary schema)
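For example, if you do want a compiled/binary schema (roughly what protoc --descriptor_set_out produces), the parsed set can be serialized directly; a minimal sketch, writing to an arbitrary file name:
// FileDescriptorSet is itself protobuf-serializable, so the parse result
// can be written out as a binary descriptor
using (var output = System.IO.File.Create("addressbook.pb"))
{
    ProtoBuf.Serializer.Serialize(output, fds);
}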
I'm experiencing an issue with adding a file to a DICOMDIR. Based on this example, I've successfully created an image from a series and saved it to disk. Then I tried adding that file to the DICOMDIR so that the directory references the new file; although the save is successful, when I try to open the DICOMDIR and its series again, I get a "Tag: (0088,0200) not found in dataset" exception.
Code is as follows:
var dataset = new DicomDataset();
this.FillDataset(dataset); //this function copies several Tag values of an already existing DICOM Series file, such as Patient information
dataset.Add(DicomTag.PhotometricInterpretation, PhotometricInterpretation.Rgb.Value);
dataset.Add(DicomTag.Rows, (ushort)rows);
dataset.Add(DicomTag.Columns, (ushort)columns);
var pixelData = DicomPixelData.Create(dataset, true);
pixelData.AddFrame(buffer);
var dicomfile = new DicomFile(dataset);
var pathImage = Path.Combine(dirImages.FullName, imageFileName);
dicomfile.Save(pathImage); //Image is saved fine and it's well formed, I've checked opening it with an online DICOM viewer
var dicomdirPath = Path.Combine(studyPath, Constants.DICOMDIRFileName);
var dicomdir = DicomDirectory.Open(dicomdirPath);
dicomdir.AddFile(dicomfile, $@"Images\{imageFileName}");
dicomdir.Save(dicomdirPath); //this executes without problems and the DICOMDIR is saved
And this is the series opening method:
var dicomDirectory = await DicomDirectory.OpenAsync(dicomdirPath);
foreach (var patientRecord in dicomDirectory.RootDirectoryRecordCollection)
{
foreach (var studyRecord in patientRecord.LowerLevelDirectoryRecordCollection)
{
foreach (var seriesRecord in studyRecord.LowerLevelDirectoryRecordCollection)
{
foreach (var imageRecord in seriesRecord.LowerLevelDirectoryRecordCollection)
{
var dicomDataset = imageRecord.GetSequence(DicomTag.IconImageSequence).Items.First(); //This line works fine before saving the image in the method above, but throws when opening the same study
//Load data and series from dataset
}
}
}
}
I don't know if I'm missing something regarding saving a DICOMDir file, or if it's an error.
You are trying to access the IconImageSequence (0088,0200), which is obviously not present. A DICOMDIR only contains some of the main data about each image; when adding an image to the DICOMDIR, it is up to you to add any additional information.
One of those optional pieces of information, which fo-dicom does not add automatically, is the icon. A DICOMDIR is allowed to contain a small icon so that previews can be displayed quickly.
Actually, imageRecord should contain all the information you might need, such as the instance UID, the file name, etc.
I don't know why that line of code worked before you stored the file with fo-dicom. I assume there already was a DICOMDIR, created with some other application, that included the icon; the foreach then crashes when it reaches the newly added entry.
You could either add an icon yourself when adding the new instance to the DICOMDIR, or you could add a check like "if (imageRecord.TryGetSequence(DicomTag.IconImageSequence, out seq)) .." to handle the cases where there is no icon.
I recommend adding the check anyway, because some day you might read a DICOMDIR that references a structured report, and structured reports do not have pixel data and therefore will not have an icon included.
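A defensive version of the inner loop could look roughly like this (a sketch; check the exact method names against the fo-dicom version you use):
foreach (var imageRecord in seriesRecord.LowerLevelDirectoryRecordCollection)
{
    // Only use the icon when the record actually carries one; records written
    // without an icon (or non-image records such as structured reports) won't have this sequence
    if (imageRecord.TryGetSequence(DicomTag.IconImageSequence, out var iconSequence)
        && iconSequence.Items.Count > 0)
    {
        var dicomDataset = iconSequence.Items.First();
        // Load preview data from dicomDataset
    }

    // The record itself still gives you the essentials, e.g. the referenced file name,
    // whether or not an icon is present
}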
I am trying to do what I thought would be a simple task, but can't figure it out.
I have created a new Xamarin Forms project (using .NET Standard as the sharing strategy).
I created a text file and I want to read it into an array so I can use it in my app.
I added the text file to my project by adding it to the top project (the one that doesn't have an OS associated with it).
In my app I have the following code:
InitializeComponent ();
//Define our array variable
string[] quotations = new string[10];
//Read the text from text file and populate array
StreamReader SR = new StreamReader(@"Quotations.txt");
for (int i = 0; i < 9; i++)
{
quotations[i] = SR.ReadLine();
}
Quote.Text = quotations[0];
//Close text file
SR.Close();
...
I have checked the properties for the file, and set them to 'build action: embedded resource', in the top project.
(I haven't added the file into the individual OS projects...)
When I run my application on iOS, it generates an exception and exits the app almost as soon as it starts:
Unhandled Exception: System.IO.FileNotFoundException:
How can I attach a file to my project and read its contents into an array on iOS?
Thanks
new StreamReader(@"Quotations.txt"); loads a file from the file system, but your data isn't in the file system - it is embedded in the assembly.
Embedded resources need to be accessed in a special way - in particular via yourAssembly.GetManifestResourceStream(resourceName). Note that the embedded name might not be quite what you expect, so the best thing to do is to (as a quick test) use yourAssembly.GetManifestResourceNames() and write out the names that are embedded. That'll tell you the one to actually include as resourceName. Once you have the Stream, you can use a new StreamReader(theResourceStream) on it.
Note: the easiest way to get yourAssembly is something like typeof(SomeTypeInYourAssembly).Assembly.
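Putting that together, a minimal sketch (MainPage and the "MyApp.Quotations.txt" resource name are placeholders; print the manifest names first and use whatever actually shows up, typically "DefaultNamespace.Quotations.txt"):
var assembly = typeof(MainPage).Assembly; // any type from the project that holds Quotations.txt

// Quick diagnostic: list the embedded resource names so you know the exact one to use
foreach (var name in assembly.GetManifestResourceNames())
    System.Diagnostics.Debug.WriteLine(name);

// Read the embedded text file line by line
using (var stream = assembly.GetManifestResourceStream("MyApp.Quotations.txt"))
using (var reader = new StreamReader(stream))
{
    var quotations = new List<string>();
    string line;
    while ((line = reader.ReadLine()) != null)
        quotations.Add(line);

    Quote.Text = quotations[0];
}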
I have a plugin-based web app that allows an administrator to assign various small pieces of functionality (the actual functionality is unimportant here) to users.
This functionality is configurable, and it is important that the administrator understands the plugins. The administrator is technical enough to be able to read the very simple source code for these plugins (mostly just arithmetic).
I'm aware there are a couple of questions already about accessing source code from within a DLL built from that source:
How to include source code in dll? for example.
I've played with getting the .cs files into a /Resources folder. However, doing this with a pre-build event obviously doesn't include these files in the project, so VS never copies them and I'm unable to access them on the Resources object.
Is there a way to reference the source for a particular class... from the same assembly, that I'm missing? Or, alternatively, a way to extract the COMPLETE source from the PDB for that assembly? I'm quite happy to deploy detailed PDB files; there's no security risk for this portion of the solution.
As I've access to the source code, I don't want to go about decompiling it to display it. This seems wasteful and overly complicated.
The source code isn't included in the DLLs, and it isn't in the PDBs either (PDBs only contain a link between the addresses in the DLL and the corresponding lines of code in the sources, as well as other trivia like variable names).
A pre-build event is a possible solution - just make sure that it produces a single file that's included in the project. A simple zip archive should work well enough, and it's easy to decompress when you need to access the source. Text compresses very well, so it might make sense to compress it anyway. If you don't want to use zip, anything else will do fine as well - an XML file, for example. It might even give you the benefit of using something like Roslyn to provide syntax highlighting with all the necessary context.
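For example, getting one file's source back out of the embedded zip at runtime could look roughly like this (the resource, plugin, and entry names are placeholders):
// Requires System.IO.Compression.
// Open the zip that the pre-build step produced and that was embedded as a resource,
// then pull out a single plugin's source text.
var assembly = typeof(SomePlugin).Assembly;
using (var zipStream = assembly.GetManifestResourceStream("MyApp.Resources.PluginSource.zip"))
using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Read))
using (var reader = new StreamReader(archive.GetEntry("SomePlugin.cs").Open()))
{
    string sourceCode = reader.ReadToEnd();
    // show sourceCode to the administrator
}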
Decompilation isn't necessarily a terrible approach. You're trading memory for CPU, basically. You'll lose comments, but that shouldn't be a problem if your code is very simple. Method arguments keep their names, but locals don't - you'd need the PDBs for that, which is a massive overkill. It actually does depend a lot on the kind of calculations you're doing. For most cases, it probably isn't the best solution, though.
A bit of a roundabout way of handling this would be a multi-file T4 template. Basically, you'd produce as many files as there are source code files and have them be embedded resources. I'm not sure how simple this is, and I'm sure not going to include the code :D
Another (a bit weird) option is to use file links. Simply have the source code files in the project as usual, but also make a separate folder where the same files will be added using "Add as link". The content will be shared with the actual source code, but you can specify a different build action - namely, Embedded Resource. This requires a (tiny) bit of manual work when adding or moving files, but it's relatively simple. If needed, this could also be automated, though that sounds like an overkill.
The cleanest option I can think of is adding a new build action - Compile + Embed. This requires you to add a build targets file to your project, but that's actually quite simple. The targets file is just an XML file, and then you just manually edit your SLN/CSPROJ file to include that target in the build, and you're good to go. The tricky part is that you'll also need to force Microsoft.CSharp.Core.targets to treat your Compile + Embed items as both source code and embedded resources. That is of course easily done by manually changing that targets file, but that's a terrible way of handling it. I'm not sure what the best way of doing that is, though. Maybe there's a way to redefine @(Compile) to mean @(Compile;MyCompileAndEmbed)? I know it's possible with the usual property groups, but I'm not sure if something like this can be done with the item lists.
Taking up @Luaan's suggestion of using a pre-build step to create a single zipped file, I created a basic console app to package the source files into a zip file at a specific location.
static void Main(string[] args)
{
Console.WriteLine("Takes a folder of .cs files and flattens and compacts them into a .zip." +
"Arg 1 : Source Folder to be resursively searched" +
"Arg 2 : Destination zip file" +
"Arg 3 : Semicolon List of folders to ignore");
if (args[0] == null || args[1] == null)
{
Console.Write("Args 1 or 2 missing");
return;
};
string SourcePath = args[0];
string ZipDestination = args[1];
List<String> ignoreFolders = new List<string>();
if (args[2] != null)
{
ignoreFolders = args[2].Split(';').ToList();
}
var files = DirSearch(SourcePath, "*.cs", ignoreFolders);
Console.WriteLine($"{files.Count} files found to zip");
if (File.Exists(ZipDestination))
{
Console.WriteLine("Destination exists. Deleting zip file first");
File.Delete(ZipDestination);
}
int zippedCount = 0;
using (FileStream zipToOpen = new FileStream(ZipDestination, FileMode.OpenOrCreate))
{
using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Create))
{
foreach (var filePath in files)
{
Console.WriteLine($"Writing {Path.GetFileName(filePath)} to zip {Path.GetFileName(ZipDestination)}");
archive.CreateEntryFromFile(filePath, Path.GetFileName(filePath));
zippedCount++;
}
}
}
Console.WriteLine($"Zipped {zippedCount} files;");
}
static List<String> DirSearch(string sDir, string filePattern, List<String> excludeDirectories)
{
List<String> filePaths = new List<string>();
//Pick up matching files in this folder, then recurse into sub-folders that aren't excluded
filePaths.AddRange(Directory.GetFiles(sDir, filePattern));
foreach (string d in Directory.GetDirectories(sDir))
{
if (excludeDirectories.Any(ed => ed.ToLower() == d.ToLower()))
{
continue;
}
filePaths.AddRange(DirSearch(d, filePattern, excludeDirectories));
}
return filePaths;
}
It takes 3 parameters: the source dir, the output zip file, and a ";"-separated list of paths to exclude. I've just built this as a binary, committed it to source control for simplicity, and included it in the pre-build event for projects I want the source for.
There's no real error checking and I'm certain it will fail for missing args, but if anyone wants it, here it is! Again, thanks to @Luaan for clarifying that PDBs aren't all that useful!
I have a VSTO document-level customization that performs specific functionality when opened from within our application. Basically, we open normal documents from inside our application, and I copy the content from the normal docx file into the VSTO document file, which is stored inside our database.
var app = new Microsoft.Office.Interop.Word.Application();
var docs = app.Documents;
var vstoDoc = docs.Open(vstoDocPath);
var doc = docs.Open(currentDocPath);
doc.Range().Copy();
vstoDoc.Range().PasteAndFormat(WdRecoveryType.wdFormatOriginalFormatting);
Everything works great; however, using the above code leaves out certain formatting in the document. The code below fixes these issues, but there will most likely be more issues, and as I come across them I could address them one by one...
for (int i = 0; i < doc.Sections.Count; i++)
{
var footerFont = doc.Sections[i + 1].Footers.GetEnumerator();
var headerFont = doc.Sections[i + 1].Headers.GetEnumerator();
var footNoteFont = doc.Footnotes.GetEnumerator();
foreach (HeaderFooter foot in vstoDoc.Sections[i + 1].Footers)
{
footerFont.MoveNext();
foot.Range.Font.Name = ((HeaderFooter)footerFont.Current).Range.Font.Name;
}
foreach (HeaderFooter head in vstoDoc.Sections[i + 1].Headers)
{
headerFont.MoveNext();
head.Range.Font.Name = ((HeaderFooter)headerFont.Current).Range.Font.Name;
}
foreach (Footnote footNote in vstoDoc.Footnotes)
{
footNoteFont.MoveNext();
footNote.Range.Font.Name = ((Footnote)footNoteFont.Current).Range.Font.Name;
}
}
I need a foolproof, safe way of copying the content of one docx file to another docx file while preserving formatting and eliminating the risk of corrupting the document. I've tried using reflection to set the properties of the two documents to one another, but the code starts to look a bit ugly and I always worry that certain properties I'm setting may have undesirable side effects. I've also tried unzipping the docx files, editing the XML manually and then rezipping them afterwards; this hasn't worked too well, and I've ended up corrupting a few of the documents in the process.
If anyone has dealt with a similar issue in the past, could you please point me in the right direction?
Thank you for your time
This code copies and keeps source formatting.
// 'bookmark' here is a Bookmark in the source document (any Range to copy from works the same way),
// and 'WordInstance' is the running Word Application.
bookmark.Range.Copy();
Document newDocument = WordInstance.Documents.Add();
newDocument.Activate();
newDocument.Application.CommandBars.ExecuteMso("PasteSourceFormatting");
There is a more elegant way to manage it, based upon
Globals.ThisAddIn.Application.ActiveDocument.Range().ImportFragment(filePath);
or you can do the following
Globals.ThisAddIn.Application.Selection.Range.ImportFragment(filePath);
in order to use the current selection's range; in both cases filePath is the path to the document you are copying from.
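Applied to the scenario in the question, that would be something along these lines (a sketch, reusing the vstoDoc and currentDocPath variables from the question's code):
// Instead of Copy + PasteAndFormat, import the source document straight into the VSTO document
vstoDoc.Range().ImportFragment(currentDocPath);
vstoDoc.Save();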