What is __argvalue?

Also, there is one other thing that is an lvalue in VC#, though it's a language extension - __argvalue().
Source
That was the only Google result for __argvalue.
I tried it in LINQPad and it doesn't seem to exist.

I can definitively state that there is no __argvalue in C# as of .NET Framework 4.0. The compiler binary contains a table of tokens; you can find the other hidden __-prefixed keywords starting at offset 0x00009840. However, a search of the entire binary shows that there is no __argvalue token.
The author of that comment may have been referring to __arglist, which can be an lvalue.
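For reference, here is a minimal sketch of how __arglist is used (this targets the .NET Framework; ArgIterator is not supported on all runtimes). Note also the related hidden keyword __refvalue, whose expressions are assignable:

using System;

class ArglistDemo
{
    // __arglist declares a CLR varargs parameter list (undocumented C#).
    static void Sum(__arglist)
    {
        var it = new ArgIterator(__arglist);
        int total = 0;
        while (it.GetRemainingCount() > 0)
        {
            TypedReference tr = it.GetNextArg();
            // __refvalue reads through a TypedReference; it is also
            // assignable, e.g. "__refvalue(tr, int) = 0;" compiles.
            total += __refvalue(tr, int);
        }
        Console.WriteLine(total); // prints 6
    }

    static void Main() => Sum(__arglist(1, 2, 3));
}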

How to filter values based on a property of the node in gremlin

So I understand how to filter values in the Gremlin console, but things like filter, gt, etc. don't work in Gremlin.Net; I continuously get errors.
I would like to know how to use Filter in Gremlin.Net to filter out nodes or edges. I can't find documentation on how to do this in C# using the Gremlin.Net library.
I tried writing the code I use in the Gremlin console, but some of those functions were not recognized.
I am trying to filter out all the nodes that have the property idnum greater than 5:
g.V().Has("idnum", gt(5));
It keeps saying gt is not found in the current context.
Gremlin is largely the same regardless of the programming language you use. There are typically only minor differences in syntax related to the idioms of the language itself (e.g. in Java we typically see method names starting lower-cased, whereas in C# they are upper-cased). So the general step documentation, though demonstrated in Groovy/Java style, typically gives you enough information on how the steps work to translate to your language of choice. Where necessary, that same documentation also carries notes on language-specific differences that may be relevant.
That said, I assume your issue is related to the importing of P.gt() for C#:
using static Gremlin.Net.Process.Traversal.P;
You can read more about other Common Imports in the Reference Documentation here.
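Putting it together, a minimal sketch (the server address is an assumption about your setup; the essential parts are the static import of P and the upper-cased step names):

using Gremlin.Net.Driver;
using Gremlin.Net.Driver.Remote;
using static Gremlin.Net.Process.Traversal.AnonymousTraversalSource;
using static Gremlin.Net.Process.Traversal.P;

// Assumed: a Gremlin Server listening on localhost:8182.
using var client = new GremlinClient(new GremlinServer("localhost", 8182));
var g = Traversal().WithRemote(new DriverRemoteConnection(client));

// C# upper-cases step and predicate names: Has instead of has, Gt instead of gt.
var vertices = g.V().Has("idnum", Gt(5)).ToList();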

Folding case to speed up comparisons

"strasse".Equals("STRAße",StringComparison.InvariantCultureIgnoreCase)
This returns true, which is correct. Unfortunately, when I store one of these in Postgres, it thinks they are not the same when doing a case-insensitive match (for example, with ~*). I've also tested with citext.
So one solution would be to pre-fold the case, thus storing strasse for either of these values, in another column. I could then index and search on that for matches.
I've been looking for how to fold case in C# for a while and haven't been able to find a solution. Obviously the knowledge is in there somewhere, because the runtime can compare these strings properly; I just can't find where to get at it.
One solution would be to spawn a perl process perl -E "binmode STDOUT, ':utf8'; binmode STDIN, ':utf8'; while (<>) { print fc }", set the C# side of those pipes to UTF-8 as well, and just send the text through perl to fold the case. But there has to be a better way than that.
There is string.Normalize(), which takes a NormalizationForm parameter. Michael Kaplan goes into detail on this. He claims it does a better job than FoldStringW.
It does not, however, normalize the case to upper or lower; it only folds to the canonical form. I would suggest you just apply ToUpper or ToLower afterwards.
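A minimal sketch of that suggestion (with one caveat: .NET's ToUpper/ToLower apply simple one-to-one case mappings, so "ß" stays "ß" rather than expanding to "SS"; this is normalization plus casing, not full Unicode case folding):

using System;
using System.Text;

static string NormalizeAndUpper(string s) =>
    s.Normalize(NormalizationForm.FormKC).ToUpperInvariant();

Console.WriteLine(NormalizeAndUpper("strasse")); // STRASSE
Console.WriteLine(NormalizeAndUpper("STRAße"));  // STRAßE (ß survives simple casing)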
Looking through the sources, I eventually found that most of this implementation lives in the CompareInfo class.
You can find it at github.com/dotnet/runtime
That led me to a page that explains the inner workings of the .NET culture machinery: .NET globalization and ICU
It seems that .NET actually relies completely on native libraries for everything except ordinal operations.
I would assume from this that the .NET Framework is probably using NLS from Win32. There, the FoldStringW function looks promising.
For ICU there is documentation on Case Mappings, and I found the u_strFoldCase function.
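Short of P/Invoking those native functions, CompareInfo can already produce something storable: a binary sort key computed with IgnoreCase. Strings that compare equal under the culture-aware comparison yield identical keys, so a sketch of a pre-folded column value (assuming invariant-culture semantics match the comparison above) could be:

using System;
using System.Globalization;

static byte[] CaseInsensitiveKey(string s) =>
    CultureInfo.InvariantCulture.CompareInfo
        .GetSortKey(s, CompareOptions.IgnoreCase).KeyData;

byte[] a = CaseInsensitiveKey("strasse");
byte[] b = CaseInsensitiveKey("STRAße");
Console.WriteLine(a.AsSpan().SequenceEqual(b)); // True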

Implement Language Auto-Completion based on ANTLR4 Grammar

I am wondering if there are any examples (googling, I haven't found any) of TAB auto-complete solutions for a command line interface (console) that use ANTLR4 grammars for predicting the next term (as in a REPL model).
I've written a PL/SQL grammar for an open source database, and now I would like to implement a command line interface to the database that offers the user completion of statements according to the grammar, or discovery of the proper database object name to use (e.g. a table name, a trigger name, the name of a column, etc.).
Thanks for pointing me in the right direction.
Actually it is possible! (Depending, of course, on the complexity of your grammar.) The problem with auto-completion and ANTLR is that you do not have a complete expression, yet you want to parse it. If you had a complete expression, it would be no big problem to know what kind of element sits at each place and what can be used there. But you do not have a complete expression, and you cannot parse an incomplete one. So what you need to do is wrap the input in some wrapper/helper that completes the expression to create a parseable one. Notice that nothing added merely to complete the expression matters to you; you will only ask for members up to the last character that was really written.
So:
A) Create a wrapper that changes this (Excel formula) '=If(' into '=If()'
B) Parse the wrapped input
C) Realize that you are in the IF function at the first parameter
D) Return all that can go into that place.
It actually works; I have built complete intellisense editors for several simple languages this way. There is much more infrastructure than this, but the basic idea is as I wrote it. Just be careful: writing the wrapper is hard, if not impossible, if the grammar is really complex. In that case look at the Papa Carlo project: http://lakhin.com/projects/papa-carlo/
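As a toy illustration of step A, a wrapper that only knows how to close open brackets could look like this (hypothetical helper; a real wrapper also has to close string literals, insert placeholder operands, and so on):

using System;
using System.Collections.Generic;

static string CloseOpenBrackets(string input)
{
    var pending = new Stack<char>();
    foreach (char c in input)
    {
        if (c == '(') pending.Push(')');
        else if (c == '[') pending.Push(']');
        else if (pending.Count > 0 && c == pending.Peek()) pending.Pop();
    }
    // Append the missing closers, innermost first, so the result is parseable.
    return input + string.Concat(pending);
}

Console.WriteLine(CloseOpenBrackets("=If(Sum(A1:A3")); // =If(Sum(A1:A3))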
As already mentioned, auto-completion is based on the follow set at a given position, simply because that is what we defined in the grammar to be the valid language. But that's only a small part of the task. What you need is context (as Sam Harwell wrote: it's a semantic process, not a syntactic one), and this information is independent of the parser. And since a parser is made to parse valid input (while during auto-completion you have, most of the time, invalid input), it's not the right tool for this task.
Knowing which token can follow at a given position is useful for controlling the overall process (e.g. you don't want to show suggestions if only a string can appear), but it is most of the time not what you actually want to suggest (except for keywords). If an ID is possible at the current position, that doesn't tell you which IDs are actually allowed (a variable name? a namespace? etc.). So what you need is essentially 3 things:
A symbol table that provides you with all possible names sorted by scope. Creating this depends heavily on the parsed language. But this is a task where a parser is very helpful. You may want to cache this info as it is time consuming to run this analysis step.
Determine in which scope you are when invoking auto completion. You could use a parser as well here (maybe in conjunction with step 1).
Determine what type of symbol(s) you want to show. Many people think this is where a parser can give you all necessary information (the follow set). But as mentioned above that's not true (keywords aside).
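As a rough illustration of step 1, a minimal scoped symbol table could look like the following (a hypothetical shape; real implementations also track types, source positions, and richer symbol kinds than a plain string):

using System;
using System.Collections.Generic;

var global = new Scope { Symbols = { ["customers"] = "table" } };
var local = new Scope { Parent = global, Symbols = { ["c"] = "variable" } };
foreach (var (name, kind) in local.Visible())
    Console.WriteLine($"{name}: {kind}"); // c: variable, then customers: table

class Scope
{
    public Scope? Parent;
    public Dictionary<string, string> Symbols = new(); // name -> kind

    // Everything visible from this scope, innermost scope first.
    public IEnumerable<KeyValuePair<string, string>> Visible()
    {
        for (Scope? s = this; s != null; s = s.Parent)
            foreach (var entry in s.Symbols)
                yield return entry;
    }
}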
In my blog post Universal Code Completion using ANTLR3 I especially addressed the 3rd step. There I don't use a parser but simulate one, except that I don't stop where a parser would, but where the caret position is reached (so it is essential that the input be syntactically valid up to that point). After reaching the caret, the collection process starts, which not only collects terminal nodes (for keywords) but also looks at rule names to learn what needs to be collected. Using specific rule names is my way of putting context into the grammar: when the collection code finds a rule table_ref, it knows it doesn't need to go further down the rule chain (to the ultimate ID token), but can instead use this information to provide a list of tables as the suggestion.
With ANTLR4 things might become even simpler. I haven't used it myself yet, but the parser interpreter could be a big help here, as it essentially does what I do manually in my implementation (with the ANTLR3 backend).
This is probably pretty hard to do.
Fundamentally you want to use some parser to predict "what comes next" to display as auto-completion. This has to at least predict what the FIRST token is at the point where the user's input stops.
For ANTLR, I think this will be very difficult. The reason is that ANTLR generates essentially procedural, recursive descent parsers. So at runtime, when you need to figure out what FIRST tokens are, you have to inspect the procedural source code of the generated parser. That way lies madness.
This blog entry claims to achieve autocompletion by collecting error reports rather than inspecting the parser code. It's sort of an interesting idea, but I do not understand how the method really works, and I cannot see how it would offer all possible FIRST tokens; it might acquire some of them. This SO answer confirms my intuition.
Sam Harwell discusses how he has tackled this; he is one of the ANTLR4 implementers and if anybody can make this work, he can. It wouldn't surprise me if he reached inside ANTLR to extract the information he needs; as an ANTLR implementer he would certainly know where to tap in. You are not likely to be so well positioned. Even so, he doesn't really describe what he did in detail. Good luck replicating. You might ask him what he really did.
What you want is a parsing engine for which that FIRST-token information is either directly available (the parser generator could produce it) or computable from the parser state. This is actually possible with bottom-up parsers such as LALR(k); you can build an algorithm that walks the state tables and computes this information. (We do this with our DMS Software Reengineering Toolkit for its GLR parser, precisely to produce syntax error reports that say "missing token, could be any of these [set]".)
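To make the "walk the tables and compute what can come next" idea concrete, here is a toy FIRST-set computation by fixed-point iteration over a hypothetical three-rule grammar (nothing like the real DMS machinery, just the core idea):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical grammar: S -> E ; E -> T '+' E | T ; T -> ID | '(' E ')'
var productions = new Dictionary<string, string[][]>
{
    ["S"] = new[] { new[] { "E" } },
    ["E"] = new[] { new[] { "T", "+", "E" }, new[] { "T" } },
    ["T"] = new[] { new[] { "ID" }, new[] { "(", "E", ")" } },
};

var first = productions.Keys.ToDictionary(n => n, _ => new HashSet<string>());
bool changed = true;
while (changed) // iterate until no FIRST set grows anymore
{
    changed = false;
    foreach (var (nonterminal, alternatives) in productions)
        foreach (var alt in alternatives)
        {
            string head = alt[0]; // no epsilon rules in this toy grammar
            var candidates = productions.ContainsKey(head)
                ? first[head]                   // head is a nonterminal
                : new HashSet<string> { head }; // head is a terminal
            foreach (var token in candidates)
                changed |= first[nonterminal].Add(token);
        }
}

foreach (var (nt, set) in first)
    Console.WriteLine($"FIRST({nt}) = {{ {string.Join(", ", set)} }}");
// FIRST(S) = FIRST(E) = FIRST(T) = { ID, ( }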

TinyPG doesn't properly parse this grammar, bug or bad grammar?

I need to parse a simple language that I didn't design, so I can't change the language. I need the results in C#, so I've been using TinyPG because it's so easy to use, and doesn't require external libraries to run the parser.
Things had been going pretty well until I ran into this construct in the language (this is a simplified version, but it shows the problem):
EOF -> #"^\s*$";
[Skip] WHITESPACE -> #"\s+";
LIST -> "LIST";
END -> "END";
IDENTIFIER -> #"[a-zA-Z_][a-zA-Z0-9_]*";
Expr -> LIST IDENTIFIER+ END;
Start -> (Expr)+ EOF;
The resulting parser cannot parse this:
LIST foo BAR Baz END
because it greedily lexes END as an IDENTIFIER, instead of properly as the END keyword.
So, here are my questions:
Is this grammar ambiguous or wrong for LL(1) parsing? Or is this a bug in TinyPG?
Is there any way to redesign the grammar such that TinyPG will properly parse the example line?
Are there any other suggestions for a simple parser that outputs code in C# and doesn't require additional libraries? I've looked at LLLPG and ANTLR4, but found them much more troublesome than TinyPG.
You might be the same person I answered on GitHub, since the issue looks identical, but here it is again for anyone who googles this issue.
Here is an example from the Simple-CIL-compiler project.
The identifier has to catch single words except the ones listed, which means you have to include the exception tokens in the identifier:
IDENTIFIER-> #"[a-zA-Z_][a-zA-Z0-9_]*(?<!(^)(end|else|do|while|for|true|false|return|to|incby|global|or|and|not|write|readnum|readstr|call))(?!\w)";
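Adapted to the grammar in the question, where only LIST and END need excluding, that would be something like (untested sketch, same lookaround pattern):

IDENTIFIER -> #"[a-zA-Z_][a-zA-Z0-9_]*(?<!(^)(LIST|END))(?!\w)";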
Hope that helps.
(Link to Original post)

Detect a code fragment from looking at a binary

Without working on the source, just on the basis of a binary: is there a way (there surely must be using CodeDom, but it would be nice if it were possible without CodeDom) to tell whether a method's body has an if construct, using reflection?
If it's .NET, grab Reflector.
Update
After seeing your comment, I think there's a lot of information missing from your question. In particular, what language is the binary written in? Are you asking how to decompile a given .NET binary, or are you asking how to use .NET to decompile a binary written in some other language not based on the .NET Framework?
If the latter, then no, reflection won't allow you to determine what code exists.
If the former, then I'm puzzled. The purpose of Reflector is to decompile .NET binaries... at which point you could just visually inspect whether an if statement does in fact exist in the method in question.
Decompile (as advised by Chris)
Run the decompiled code through a code parser (see for example CS Parser for C# 2.0: http://csparser.codeplex.com/).
Use the parser output to obtain the info required, such as the presence of token Y within the body of method Z.
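For the pure-reflection route the question asks about, the closest you can get without decompiling is scanning the method's raw IL for conditional branch opcodes. A rough sketch:

using System;
using System.Linq;
using System.Reflection;

static bool HasConditionalBranch(MethodInfo method)
{
    byte[]? il = method.GetMethodBody()?.GetILAsByteArray();
    if (il == null) return false;
    // brfalse.s = 0x2C, brtrue.s = 0x2D, brfalse = 0x39, brtrue = 0x3A.
    // Naive byte scan: operand bytes can false-positive, loops and ?: and &&
    // compile to the same opcodes, and beq/blt/etc. are conditional branches
    // too; a real version would decode instructions one by one.
    return il.Any(b => b == 0x2C || b == 0x2D || b == 0x39 || b == 0x3A);
}

Console.WriteLine(HasConditionalBranch(typeof(string).GetMethod("IsNullOrEmpty")!)); // likely True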
