I am writing a tool that, given a (compilable) piece of C# code, generates another source file which, using Mono.Cecil, produces an assembly equivalent to the one obtained by compiling the original code. This is achieved by parsing the C# code, visiting the resulting AST, and generating calls to the equivalent Mono.Cecil APIs (I guess this is somewhat similar to what the code-generation part of Roslyn does, but emitting calls to Mono.Cecil instead of IL).
Given that, processing the lowered version of a given AST would make the code easier to implement, more robust, and so on, but looking into the Roslyn sources it does not look like there is a way to access it.
In the best case my code would need to call into the various types in charge of doing the lowering in https://github.com/dotnet/roslyn/blob/main/src/Compilers/CSharp/Portable/Lowering which, AFAICS, are all internal.
Whence the question: is it really not possible to get the lowered version of a given AST?
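To make the target concrete, here is a hand-written sketch of the kind of Mono.Cecil calls such a generator would emit for an almost empty class; this illustrates the Cecil API only and is not actual output of the tool described above:

using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

class CecilEmitSketch
{
    static void Main()
    {
        // Roughly what the generated source would contain for
        // "class C { public void M() { } }".
        var assembly = AssemblyDefinition.CreateAssembly(
            new AssemblyNameDefinition("Generated", new Version(1, 0)),
            "Generated", ModuleKind.Dll);
        var module = assembly.MainModule;

        var type = new TypeDefinition("", "C",
            TypeAttributes.Class | TypeAttributes.Public, module.TypeSystem.Object);
        module.Types.Add(type);

        // A real generator would also emit a default constructor that calls
        // System.Object::.ctor; omitted here for brevity.
        var method = new MethodDefinition("M",
            MethodAttributes.Public | MethodAttributes.HideBySig, module.TypeSystem.Void);
        type.Methods.Add(method);
        method.Body.GetILProcessor().Emit(OpCodes.Ret);

        assembly.Write("Generated.dll");
    }
}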
The nearest thing we have that is somewhat "lowered" is our IOperation APIs which are a bit lower level than the syntax/semantic APIs. We don't have an API to give you the fully lowered representation.
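A minimal sketch of what the IOperation view looks like in practice, assuming the Microsoft.CodeAnalysis.CSharp package; SemanticModel.GetOperation returns the bound operation node for a piece of syntax where one exists, which is as close to "lowered" as the public API currently gets:

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class OperationDump
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText("class C { int M() { return 1 + 2; } }");
        var compilation = CSharpCompilation.Create("demo",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        // Ask for the IOperation behind every syntax node; most nodes have
        // none and return null, bound statements and expressions do not.
        foreach (var node in tree.GetRoot().DescendantNodes())
        {
            IOperation operation = model.GetOperation(node);
            if (operation != null)
                Console.WriteLine(operation.Kind + ": " + node.ToString().Trim());
        }
    }
}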
Related
I use NDepend to analyze static code dependencies. However, it does not recognize dependencies introduced by constants, because constants are inlined by the compiler, and so the dependency is not visible to the reflection-based analysis NDepend uses.
I am at a loss here. I cannot replace the constants with enums: there is too much code and too many non-integer constants (like strings).
Theoretically, using the Roslyn API should help here, but I do not understand what exactly I am supposed to do. Given a source code file, do I need to build the file's syntax tree and scan every node looking for a constant? I have not seen any node type dedicated to constants, so it must be more complicated than just filtering root.DescendantNodes().
Maybe there is an undocumented compiler option that helps somehow here. I could not find anything.
The context of this request is refactoring a big monolithic application and part of this work is identifying compile time dependencies.
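For what it is worth, here is a minimal sketch of the syntax-tree approach hinted at in the question, assuming the Microsoft.CodeAnalysis.CSharp package: there is indeed no dedicated "constant" node, but you can bind each identifier through the semantic model and keep the ones whose symbol is a const field, which is exactly the dependency that gets inlined away:

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class ConstDependencyFinder
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(@"
class Config { public const string Name = ""demo""; }
class Consumer { string Use() { return Config.Name; } }");
        var compilation = CSharpCompilation.Create("deps",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        // Every identifier whose symbol is a const field is a compile-time
        // dependency that reflection-based tools cannot see after inlining.
        foreach (var name in tree.GetRoot().DescendantNodes().OfType<IdentifierNameSyntax>())
        {
            if (model.GetSymbolInfo(name).Symbol is IFieldSymbol field && field.IsConst)
            {
                var usingType = name.Ancestors().OfType<TypeDeclarationSyntax>().First();
                Console.WriteLine(usingType.Identifier.Text + " depends on "
                                  + field.ContainingType.Name + "." + field.Name);
            }
        }
    }
}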
I want to convert asm to C (assembly to C).
I saw the http://www.textmaestro.com/InfoEx_17_Convert_Assembly.htm page on the web (please see the page), and after that I tried to do this job using find-and-replace with regex in C#.
I am not a computer science student, so I am not proficient with regex.
I have been working on this for 5 days, and by now I know that I can't do it. I wrote a lot of code but without any success.
sample program:
mov r1,1;
mov r2,2;
convert to:
r1=1;
r2=2;
Please help me to do this correctly.
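For the two-line sample above, a single Regex.Replace call along these lines already produces the desired output; as the answer below explains, though, this kind of string rewriting does not scale beyond such trivial, rigidly formatted input:

using System;
using System.Text.RegularExpressions;

class MovRewriter
{
    static void Main()
    {
        string asm = "mov r1,1;\nmov r2,2;";

        // Rewrite "mov <dest>,<src>;" into "<dest>=<src>;".
        string c = Regex.Replace(asm, @"\bmov\s+(\w+)\s*,\s*(\w+)\s*;", "$1=$2;",
                                 RegexOptions.IgnoreCase);

        Console.WriteLine(c);
        // Output:
        // r1=1;
        // r2=2;
    }
}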
OP has (painfully) learned that regexps are not a good solution to problems that involve analysis or translation of software. Processing strings simply is not the same as building context-sensitive analyses of text with complex structure.
People keep re-learning this lesson. It is true that you can use repeated regex to simulate Post rewriting systems, and Post systems, being Turing capable, can technically do anything. It is also true that nobody really wants to, or more importantly, nobody can write a very complex program for a real Turing machine [or an equivalent Post system]. This is why we have all these other computer languages and tools. [The TextMaestro system to which OP refers is trying to be exactly that Post system.]
However, the task he wants to do is possible and practical with the proper tools: program transformation systems (PTS).
In particular, he should see this technical paper for a description of precisely how this has been done with one particular PTS: See Pigs from sausages? Reengineering from assembler to C via FermaT transformations. Such a tool in effect is a custom compiler from assembly source code to the target language, and includes parsing, name (label) resolution, often data flow analysis and complex code generation and optimization. A PTS is used because they make it relatively easy to build that kind of compiler. This tool has been used for at least Intel assembly to C, and mainframe (System 360/370/Z) assembly to C, for large-scale tasks. (I have no relationship to this tool but do have huge respect for the authors).
The naysayers in the comments seem to think this is impossible to do except for extremely constrained circumstances. It is true that the more one knows about the assembly code in terms of idioms, the somewhat easier this gets, but the technical approach in the paper is not limited to specific compiler output by any means. It is also true that truly arcane assembler code (especially self-modifying or having runtime code generation) is extremely difficult to translate.
Let's say I have a WinForm App...written in C#.
Is it possible?
After all, I have put my eye on IronPython.
C# is not interpreted, so unlike JavaScript or other interpreted languages you can't do that natively. You can go four basic routes, listed here in order from least to most complex...
1) Provide a fixed set of operations that the user can apply. Parse the user's input, or provide checkboxes or other UI elements to indicate that a given operation should be applied (a minimal sketch of this route follows the list below).
2) Provide a plugin-based or otherwise dynamically defined set of operations. Like #1, this has the advantage of not needing special permissions like full trust. MEF might come in handy for this approach: http://mef.codeplex.com/
3) Use a dynamic C# compilation framework like paxScript: http://eco148-88394.innterhost.net/paxscriptnet/. This would, in theory, allow you to compile small C# snippets on demand.
4) Use IL Emit statements to parse code and generate your operations on the fly. This is by far the most complex solution, likely requires full trust, and is extremely error prone. I don't recommend it unless you have some very obscure requirements and sophisticated users.
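A minimal sketch of route 1, using a whitelisted dictionary of named operations (all names here are hypothetical):

using System;
using System.Collections.Generic;

class OperationRunner
{
    // The fixed, whitelisted set of operations the user may pick by name.
    static readonly Dictionary<string, Func<double, double>> Operations =
        new Dictionary<string, Func<double, double>>(StringComparer.OrdinalIgnoreCase)
        {
            { "negate", x => -x },
            { "square", x => x * x },
            { "half",   x => x / 2 },
        };

    static void Main()
    {
        // In a WinForms app this would come from a TextBox or ComboBox.
        string choice = "square";

        Func<double, double> operation;
        if (Operations.TryGetValue(choice, out operation))
            Console.WriteLine(operation(7));   // 49
        else
            Console.WriteLine("Unknown operation: " + choice);
    }
}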
The CSharpCodeProvider class will do what you want. For a (VERY outdated, but still working with a few tweaks) example of its use, check out CSI.
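A minimal, self-contained sketch of CSharpCodeProvider in use; this is the classic .NET Framework CodeDOM route (on modern .NET you would reach for Roslyn instead):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class SnippetCompiler
{
    static void Main()
    {
        string source = @"
public static class UserCode
{
    public static int Run() { return 6 * 7; }
}";
        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);

        if (results.Errors.HasErrors)
        {
            foreach (CompilerError error in results.Errors)
                Console.WriteLine(error.ErrorText);
            return;
        }

        var run = results.CompiledAssembly.GetType("UserCode").GetMethod("Run");
        Console.WriteLine(run.Invoke(null, null));   // 42
    }
}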
If you are willing to consider targeting the Mono runtime, the type Mono.CSharp.Evaluator provides an API for evaluating C# expressions and statements at runtime.
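A rough sketch of that API; the exact constructor arguments have changed between Mono.CSharp versions, so treat this as an outline rather than a drop-in snippet:

using System;
using Mono.CSharp;

class MonoEvalDemo
{
    static void Main()
    {
        // Compiler-as-a-service from Mono.CSharp.dll; the reporting setup
        // below matches recent versions of the library.
        var evaluator = new Evaluator(
            new CompilerContext(new CompilerSettings(), new ConsoleReportPrinter()));

        object result = evaluator.Evaluate("1 + 2 * 3;");
        Console.WriteLine(result);   // 7
    }
}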
I wrote an application which makes use of a meta-parser generated using CSharpCC (a port of JavaCC). Everything works fine and very well, I can say.
Given the nature of the project, I would like to have more flexibility in extending the syntax of the meta-language used by the application.
Do you know of any existing libraries (or articles describing the implementation process) for Java or C# which I could use to programmatically implement my own parser, without being forced to rely on a static syntax?
Thank you very much for the support.
Would Scala's combinator parsers do the trick for you? Since Scala compiles to Java bytecode, anything you write could be called from your Java code however you please.
Take a look at the way that the JNode command-line interface handles parsing of command line arguments. Each command 'registers' descriptors for the arguments it is expecting. The command line syntax is specified separately in XML descriptors, allowing users to tailor a command's syntax to meet their needs.
This is underpinned by a framework of Argument classes that are basically context sensitive token recognizers, and a two level grammar / parser. The parser 'prepares' a user-friendly form of a command syntax into something like BNF, then does a naive backtracking parse, accepting the first complete parse that it finds.
The downside of the current implementation is that the parser is inefficient, and probably impractical for parsing input that is more than 20 or so tokens, depending on the syntax. (We have ideas for improving performance, but a real fix is probably not possible without a major redesign ... and banning potentially ambiguous command syntaxes.)
(Aside: one motivation for this is to support intelligent command argument completion. To do this, the parser runs in a "completion" mode in which it explores all possible (partial) parses, noting its state when it encounters the token / position that the user is trying to complete. Where appropriate, the corresponding Argument classes are then asked to provide context sensitive completions for the current "word".)
The parser (written in C#) used in the Heron language (a simple object-oriented language) is relatively simple and stable, and should be easy to modify for your needs. You can download the source here.
I recently happened upon a brief discussion on another site about C# runtime compilation while searching for something else, and thought the idea was interesting. Have you ever used this? I'm trying to determine how/when one might use this and what problem it solves. I'd be very interested in hearing how you've used it or in what context it makes sense.
Thanks much.
Typically, I see this used in cases where you are currently using Reflection and need to optimize for performance.
For example, instead of using reflection to call method X, you generate a DynamicMethod at runtime to do this for you.
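For example, a cached DynamicMethod that forwards to a method discovered via reflection might look roughly like this (type and method names are illustrative):

using System;
using System.Reflection;
using System.Reflection.Emit;

class Demo
{
    public string Greet(string name) { return "Hello, " + name; }

    static void Main()
    {
        MethodInfo target = typeof(Demo).GetMethod("Greet");

        // Build a DynamicMethod that simply forwards to Greet, avoiding the
        // per-call overhead of MethodInfo.Invoke once the delegate is cached.
        var dm = new DynamicMethod("GreetCaller",
            typeof(string), new[] { typeof(Demo), typeof(string) }, typeof(Demo));
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);              // the Demo instance
        il.Emit(OpCodes.Ldarg_1);              // the string argument
        il.Emit(OpCodes.Callvirt, target);
        il.Emit(OpCodes.Ret);

        var caller = (Func<Demo, string, string>)dm.CreateDelegate(
            typeof(Func<Demo, string, string>));
        Console.WriteLine(caller(new Demo(), "world"));   // Hello, world
    }
}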
You can use this to add scripting support to your application. For examples look here or here.
It is quite easy to expose parts of your internal object model to the scripting side, so you could with relative ease add something to your application that has the same effect as, for example, VBA for Office.
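A minimal sketch of that idea: the host exposes a small interface plus an API class, compiles the user's script against its own assembly, and hands the script a live host object. All type names here are hypothetical, and the CodeDOM route shown is .NET Framework only:

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

// The slice of the host's object model deliberately published to scripts.
public interface IScript { void Run(HostApi host); }

public class HostApi
{
    public void ShowMessage(string text) { Console.WriteLine("[host] " + text); }
}

class ScriptHost
{
    static void Main()
    {
        string userScript = @"
public class MyScript : IScript
{
    public void Run(HostApi host) { host.ShowMessage(""Hello from a script""); }
}";
        var parameters = new CompilerParameters { GenerateInMemory = true };
        // Let the script compile against the host's own assembly so that it
        // can see IScript and HostApi.
        parameters.ReferencedAssemblies.Add(typeof(IScript).Assembly.Location);

        var results = new CSharpCodeProvider().CompileAssemblyFromSource(parameters, userScript);
        if (results.Errors.HasErrors) { Console.WriteLine("Script failed to compile"); return; }

        var script = (IScript)Activator.CreateInstance(
            results.CompiledAssembly.GetType("MyScript"));
        script.Run(new HostApi());
    }
}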
I've seen this (runtime compilation / use of the System.Reflection.Emit classes) used for generating dynamic proxies (CodeProject sample) or as another means of optimizing reflection calls (time-wise).
At least one case where you might use it is when generating dynamic code. For example, the framework uses this internally to generate XML serializers on the fly. After looking into a class at runtime, it can generate the code to serialize/deserialize the class. It then compiles that code and uses it as needed.
In the same way you can generate code to handle arbitrary DB tables etc. and then compile and load the generated assembly.
Well, all C# code is compiled at runtime, since the CLR uses a JIT (just-in-time) compiler. I assume you are referring to using Reflection.Emit to create classes etc. on the fly. Here's an example I saw recently in the Xml-Rpc.Net library.
I create a C# interface that has the same signatures as the XML-RPC service's method calls, e.g.:
public interface IMyProxy : IXmlRpcProxy
{
    [XmlRpcMethod]
    int Add(int a, int b);
}
Then in my code I call something like
IMyProxy proxy = (IMyProxy)XmlRpcFactory.Create(typeof(IMyProxy));
This uses run-time code generation to create a fully functional proxy for me, so I can use it like this:
int result = proxy.Add(1, 2);
This then handles the XML-RPC call for me. Pretty cool.
I used the runtime compiler services of .NET in my diploma thesis. Basically, it was about visually creating graphical components for a process visualization; each component was generated as C# code, compiled into an assembly, and could then be used on the target system without being interpreted, making it faster and more compact. And, as a bonus, the generated images could be packaged into the very same assembly as resources.
The other place I used this was in Java. Back at university, I had an application that had to plot a potentially expensive, user-entered function using some numerical algorithm. I put the entered function into a class, compiled and loaded it, and it was then available for relatively fast execution.
So, these are my two experiences where runtime code generation was a good thing.
Something I used it for was allowing C# and VB code to be run by the user ad hoc. They could type in a line of code (or a couple of lines) and it would be compiled, loaded into an app domain, executed, and then unloaded. This probably isn't the best example of its usage, but it is an example nonetheless.
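A rough sketch of that compile/load/execute/unload cycle on the classic .NET Framework (AppDomain.Unload does not exist on .NET Core); all names are illustrative:

using System;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class AdHocRunner
{
    static void Main()
    {
        // Wrap the user's line(s) in a Main method, compile to a temporary
        // exe, run it in its own AppDomain, then unload the domain so the
        // generated assembly does not stay loaded in the host process.
        string userCode = "System.Console.WriteLine(\"Hello from user code\");";
        string source = "class UserProgram { static void Main() { " + userCode + " } }";
        string exePath = Path.Combine(Path.GetTempPath(), "UserSnippet.exe");

        var parameters = new CompilerParameters
        {
            GenerateExecutable = true,
            OutputAssembly = exePath
        };
        var results = new CSharpCodeProvider().CompileAssemblyFromSource(parameters, source);
        if (results.Errors.HasErrors) { Console.WriteLine("Compile failed"); return; }

        AppDomain sandbox = AppDomain.CreateDomain("UserSnippetDomain");
        try
        {
            sandbox.ExecuteAssembly(exePath);
        }
        finally
        {
            AppDomain.Unload(sandbox);   // the compiled assembly goes with it
        }
    }
}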