I'd like a preprocessing language for metaprogramming - C#

I'm looking for a language sort of like PHP, but more brief -- I'm tempted to call it a "templating engine" but I'm pretty sure that's the wrong term. What is the right term? A text preprocessor?
Anyway I'd like it to be .NET-based because I want to use it to help write .NET code. Because .NET generics are unsuited for writing fast numeric code (the known workaround is too cumbersome and limited for my needs), I'd like to write a math library using some sort of preprocessing language that allows me to output C# code. For example, I'd like to generate a series of "Point" classes made from various data types (PointF, PointD, PointI, etc.):
#foreach(($T, $Type) in {(F, float), (D, double), (I, int), ...}) #{
public struct Point$T {
public $Type X, Y;
...
}
#}
What can you fine people suggest?

Have you had a chance to try T4 templates? That should be sufficient for what you are trying to achieve. http://msdn.microsoft.com/en-us/library/bb126445.aspx

The T4 code generation and templating engine comes with Visual Studio.
Understanding T4: Preprocessed Text Templates
There's also StringTemplate, which has a C# port.
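For the Point example in the question, a T4 text template is close to the #foreach sketch. Below is a rough, hedged sketch of such a template; the list of (suffix, type) pairs is my own placeholder, not part of T4 itself:

```t4
<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#@ import namespace="System" #>
<#
    // (name suffix, C# type) pairs to generate; extend as needed.
    var types = new[] {
        Tuple.Create("F", "float"),
        Tuple.Create("D", "double"),
        Tuple.Create("I", "int")
    };
    foreach (var t in types) {
#>
public struct Point<#= t.Item1 #> {
    public <#= t.Item2 #> X, Y;
}
<#
    }
#>
```

In Visual Studio, saving this as a .tt file with the TextTemplatingFileGenerator custom tool regenerates the .cs output each time the template is saved.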


How do I convert a String to runnable code? (in Unity)

I'm making a little game in unity and right now I'm trying to build the quest system.
This game is going to be a simulation, so there's a huge amount of different data classes / systems the quests are going to have to interact with.
I can just make a bunch of utility classes... or even make a fake "database" to handle data calls... but that's inelegant.
Surely there's gotta be a way where I can just denote actual code from a string?
Example:
String questText = Hello player.getFullName();, how are you?
questResults<String>[1] = player.inventory.add(GameObjectBuilder.Create(new WhateverObject()));
I am using Unity's ScriptableObject to make quests, so that I can fill in text data via the editor rather than do it on IDE side (especially since unity doesn't support interpolated & composite strings as far as I know).
I know Java has an API called "Reflection" which from what I understand does something like this, but I was never able to fully wrap my head around it.
So how do I convert elements from a string into runnable code?
If that is possible, will it cause performance issues with an indefinite number of objects that might be encountering scripts that need to be converted?
Are there any other alternative methods that achieve a similar goal? (this one is just a curiosity)
As an alternative method, you can use a keyword that you search for and replace, rather than write the actual code directly into your string. I would suggest using this approach as it's cleaner to read and easier to maintain. I have used this approach in a similar system.
It works well if there is only a small number of possibilities that you will need to resolve (or you don't mind adding 'handlers' for all keywords). You can include a sort of keyword in your text, then pass the text through a method before using it.
For example..
Hello {PLAYER_NAME}, how are you? - this is the raw string.
public string ParseQuestText(string input)
{
    // String.Replace returns a new string (strings are immutable),
    // so the result must be assigned back.
    if (input.Contains("{PLAYER_NAME}"))
        input = input.Replace("{PLAYER_NAME}", player.getFullName());
    // Add other replacers here
    return input;
}

How to avoid writing repetitive code for different numeric types in .NET

I am trying to write a generic Vector2 type which would suit float, double, etc. and support arithmetic operations. Is there any chance to do it in C#, F#, Nemerle or any other more or less mature .NET language?
I need a solution with
(1) good performance (the same as I would have writing separate
Vector2Float, Vector2Double, etc. classes),
(2) which would allow the
code to look nice (I do not want to emit code for each class at
run-time),
(3) and which would do as much compile-time checking as possible.
For reasons 1 and 3 I would not like to use dynamics. Right now I am checking F# and Nemerle.
UPD: I expect to have a lot of mathematical code for this type. However, I would prefer to put the code in extension methods if it is possible.
UPD2: 'etc' types include int(which I actually doubt I would use) and decimal(which I think I might use, but not now). Using extension methods is just a matter of taste - if there are good reasons not to, please tell.
As mentioned by Daniel, F# has a feature called statically resolved type parameters which goes beyond what you can do with normal .NET generics in C#. The trick is that if you mark a function as inline, F# generates specialized code automatically (a bit like C++ templates) and then you can use more powerful features of the F# type system to write generic math.
For example, if you write a simple add function and make it inline:
let inline add x y = x + y;;
The type inference prints the following type:
val inline add :
x: ^a -> y: ^b -> ^c
when ( ^a or ^b) : (static member ( + ) : ^a * ^b -> ^c)
You can see that the inferred type is fairly complex - it specifies a member constraint that requires one of the two arguments to define a + member (and this is also supported by standard .NET types). The good thing is that this can be fully inferred, so you will rarely have to write the ugly type definitions.
As mentioned in the comments, I wrote an article Writing generic numeric code that goes into more details of how to do this in F#. I don't think this can be easily done in C# and the inline functions that you write in F# should only be called from F# (calling them from C# would essentially use dynamic). But you can definitely write your generic numerical computations in F#.
This more directly addresses your previous question. You can't put a static member constraint on a struct, but you can put it on a static Create method.
[<Struct>]
type Vector2D<'a> private (x: 'a, y: 'a) =
static member inline Create<'a when 'a : (static member (+) : 'a * 'a -> 'a)>(x, y) = Vector2D<'a>(x, y)
C# alone will not help you in achieving that, unfortunately. Emitting structs at run-time wouldn't help you much either since your program couldn't statically refer to them.
If you really can't afford to duplicate the code, then as far as I know, "offline" code generation is the only way to go about this. Instead of generating the code at runtime, use AssemblyBuilder and friends to create an on-disk assembly with your Vector2 types, or generate a string of C# code to be fed to the compiler. I believe some of the native library wrappers take this route (e.g. OpenTK, SharpDX). You can then use ILMerge if you want to merge those types into one of your hand-coded libraries.
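As a rough sketch of the "generate a string of C# code" route (the MakeVector2Source helper below is my own invention, not from any library), you can stamp out the specialized structs with a plain string builder and feed the result to the compiler or check it into your project:

```csharp
using System;
using System.Text;

static class VectorCodeGen
{
    // Emit a specialized Vector2 struct for one element type.
    // suffix: name suffix (e.g. "Float"); type: the C# element type.
    public static string MakeVector2Source(string suffix, string type)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"public struct Vector2{suffix}");
        sb.AppendLine("{");
        sb.AppendLine($"    public {type} X, Y;");
        sb.AppendLine($"    public Vector2{suffix}({type} x, {type} y) {{ X = x; Y = y; }}");
        sb.AppendLine($"    public static Vector2{suffix} operator +(Vector2{suffix} a, Vector2{suffix} b)");
        sb.AppendLine($"        => new Vector2{suffix}(a.X + b.X, a.Y + b.Y);");
        sb.AppendLine("}");
        return sb.ToString();
    }

    static void Main()
    {
        foreach (var (suffix, type) in new[] { ("Float", "float"), ("Double", "double"), ("Int", "int") })
            Console.WriteLine(MakeVector2Source(suffix, type));
    }
}
```

Because the generated code is ordinary C#, the resulting structs get the same performance and compile-time checking as hand-written ones.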
I'm assuming you must be coming from a C++ background where this is easily achieved using templates. However, you should ask yourself whether you actually need Vector2 types based on integral, decimal and other "exotic" numeric types. You probably won't be able to parameterize the rest of your code based on a specific Vector2 either so the effort might not be worth it.
Look into inline functions and Statically Resolved Type Parameters.
As I understand it, you want a strict type at compile time, but you don't care what happens at runtime.
The Nemerle language currently doesn't support this construction the way you want.
But it supports macros and allows you to write DSLs that generate arbitrary code.
For instance, you can write a macro which analyzes this code and transforms it to the correct type.
def vec = vector { [1,2] };
Assuming we have or create a type VectorInt the code could be translated to
def vec = VectorInt(1,2);
Of course you can write any code inside and transform it to any code you want :)
Operators can be implemented as usual operators of the class.
Nemerle also allows you to define any operators like F#.
Make use of generics; this also makes it type safe.
More info on generics: http://msdn.microsoft.com/en-us/library/512aeb7t.aspx
You also have generic data structures available, such as List and Dictionary.
Sounds like you want operator overloading; there are a lot of examples for this. There is not really a good way to only allow decimal, float and such. The only thing you can do is restrict to struct, but that's not exactly what you want.

Proper way to limit possible string argument values and see them in intellisense

Environment: Visual Studio 2012, .NET 4 Framework, ASP.NET web application (C#)
I'd like to know the best, most advisable approach to accomplish limiting incoming arguments (of any type...int, string, etc...) to a predefined set of desired values. I'd like to know the industry-accepted best way.
(the following code does not work - it's just to better illustrate the question)
Lets say I have a utilities class something like this:
public class Utilities
{
public string ConvertFromBytes(int intNumBytes, string strOutputUnit)
{
//determine the desired output
switch(strOutputUnit.ToUpper())
{
case "KB":
//Kilobytes - do something
break;
case "MB":
//Megabytes - do something
break;
case "GB":
//Gigabytes - do something
break;
}
//Finish converting and return file size string in desired format...
}
}
Then, in one of my pages, I have something like this:
Utilities ut = new Utilities();
strConvertedFilesize = ut.ConvertFromBytes(1024,
What I'd like to know is, in the line of code immediately above, what is the best way for me to make it such that "KB", "MB", or "GB" are the only possible string values that can be entered for the "strOutputUnit" parameter? (And) Preferably with intellisense showing the possible available options?
Update: JaredPar I'm using your suggestion and came up with the following reworked code:
public class Utilities
{
public enum OutputUnit {
KB,
MB,
GB
}
public string ConvertFromBytes(int intNumBytes, OutputUnit ou)
{
//determine the desired output
switch (ou)
{
case OutputUnit.KB:
//Kilobytes - do something
break;
case OutputUnit.MB:
//Megabytes - do something
break;
case OutputUnit.GB:
//Gigabytes - do something
break;
default:
//do something crazy
break;
}
//Finish converting and return file size string in desired format...
return "";
}
}
and to call it:
Utilities ut = new Utilities();
string strConvertedFilesize = ut.ConvertFromBytes(1024, Utilities.OutputUnit.MB);
Is this above approach the most efficient way of using the enums as arguments? In the line directly above, having to type in the "Utilities.OutputUnit." part in my method call feels a little clunky...of course I could always assign shorter names, but are there any ways to better streamline the above?
BTW Thanks everyone for the answers. JaredPar i will choose yours since it's correct and came in first. The other answers were very informative and helpful however - thanks to all.
In this case instead of using a String you should use an enum.
enum OutputUnit {
KB,
MB,
GB
}
This will make the choice of input arguments much more explicit for developers. Of course it is not a foolproof operation, because a developer can always create an invalid enum value by casting directly from int:
OutputUnit u = (OutputUnit)10000;
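One way to catch such invalid casts at the boundary (a sketch; the validation policy is up to you) is to check the incoming value with Enum.IsDefined before using it:

```csharp
using System;

enum OutputUnit { KB, MB, GB }

static class UnitGuard
{
    public static OutputUnit Validate(OutputUnit unit)
    {
        // Rejects values like (OutputUnit)10000 that bypass the enum's named members.
        if (!Enum.IsDefined(typeof(OutputUnit), unit))
            throw new ArgumentOutOfRangeException(nameof(unit));
        return unit;
    }

    static void Main()
    {
        Console.WriteLine(Validate(OutputUnit.MB)); // prints MB
        try { Validate((OutputUnit)10000); }
        catch (ArgumentOutOfRangeException) { Console.WriteLine("rejected"); }
    }
}
```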
In addition to using an enum in your specific case, from a more general perspective, you might take a look at these (by no means inclusive -- there's more than one way to do it):
Data Annotations. Attribute-based validation.
Microsoft Research's Design by Contract extensions for C#: https://stackoverflow.com/a/267671/467473.
You can install the Visual Studio extension at http://visualstudiogallery.msdn.microsoft.com/1ec7db13-3363-46c9-851f-1ce455f66970
Here's an article by Jon Skeet on code contracts: http://www.infoq.com/articles/code-contracts-csharp.
The ur-text for design-by-contract — and arguably the best overall book on O-O design — is Bertrand Meyer's Object-Oriented Software Construction, 2nd ed.. The Eiffel language has design-by-contract at its core: Eiffel/Eiffel Studio is a full-fledged Eiffel IDE that produces CLR-compliant assemblies.
This is exactly what enums are for:
public enum SizeUnit { KB, MB, GB }
Enums, as suggested by several others, are by far the most common and recognized way to accomplish this.
There is one other method you can look at that is less common, but quite powerful:
Code Contracts
Code Contracts provide a structure similar to many unit tests. Where they excel is that they also have some IDE support within Visual Studio, to help someone calling your method know what the expected values(contracts) are. Code Contracts are useful when the potential range of allowed values is very large, such that an enum would be cumbersome. The downside is that Code Contracts are not yet universally supported or recognized.
I am going to suggest you don't use an enumeration, mostly because your switch statement has a section to "do something," which makes it seems like you might want polymorphism. It might be overkill, but you should consider the tradeoffs, at least.
You'll find lots of examples of enumeration classes in the .NET Framework. The example I'd give is the Unit type (which is technically a struct, but still works for the point). It uses static members to enumerate {Point, Pixel, Percentage}. It also implements some methods, so you can use polymorphism instead of a switch statement. It also implements a Parse method, which can be used as a factory to get the object you want from a string.
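A sketch of such an enumeration class (the names below are mine, modeled loosely on Unit, not taken from the framework): static members enumerate the values, a method replaces the switch at each call site, and Parse acts as a factory from a string.

```csharp
using System;

sealed class SizeUnit
{
    public static readonly SizeUnit KB = new SizeUnit("KB", 1024L);
    public static readonly SizeUnit MB = new SizeUnit("MB", 1024L * 1024);
    public static readonly SizeUnit GB = new SizeUnit("GB", 1024L * 1024 * 1024);

    public string Name { get; }
    private readonly long bytesPerUnit;

    private SizeUnit(string name, long bytesPerUnit)
    {
        Name = name;
        this.bytesPerUnit = bytesPerUnit;
    }

    // Behavior lives on the value itself instead of in a switch statement.
    public double FromBytes(long bytes) => (double)bytes / bytesPerUnit;

    // Factory from a string, like Unit.Parse.
    public static SizeUnit Parse(string s)
    {
        switch (s.ToUpperInvariant())
        {
            case "KB": return KB;
            case "MB": return MB;
            case "GB": return GB;
            default: throw new FormatException("Unknown unit: " + s);
        }
    }
}
```

The private constructor guarantees that the three static members are the only instances, so an invalid value cannot be created by casting, unlike with a plain enum.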

Why aren't there macros in C#?

When learning C# for the first time, I was astonished that they had no support for macros in the same capacity that exists in C/C++. I realize that the #define keyword exists in C#, but it is greatly lacking compared to what I grew to love in C/C++. Does anyone know why real macros are missing from C#?
I apologize if this question is already asked in some form or another - I promise I spent a solid 5 minutes looking for duplicates before posting.
from the C# faq.
http://blogs.msdn.com/CSharpFAQ/archive/2004/03/09/86979.aspx
Why doesn't C# support #define macros?
In C++, I can define a macro such as:
#define PRODUCT(x, y, z) x * y * z
and then use it in code:
int a = PRODUCT(3, 2, 1);
C# doesn't allow you to do this. Why?
There are a few reasons why. The first is one of readability.
One of our main design goals for C# is to keep the code very readable. Having the ability to write macros gives the programmer the ability to create their own language - one that doesn't necessarily bear any relation to the code underneath. To understand what the code does, the user must not only understand how the language works, but also all of the #define macros that are in effect at that point in time. That makes code much harder to read.
In C#, you can use methods instead of macros, and in most cases the JIT will inline them, giving you the same performance.
There's also a somewhat more subtle issue. Macros are done textually, which means if I write:
int y = PRODUCT (1 + 2, 3 + 4, 5 + 6)
I would expect to get something that gives me 3 * 7 * 11 = 231, but in fact, the expansion as I've defined it gives:
int y = 1 + 2 * 3 + 4 * 5 + 6;
which gives me 33. I can get around that by a judicious application of parentheses, but it's very easy to write a macro that works in some situations and not in others.
Although C# doesn't strictly speaking have a pre-processor, it does have conditional compilation symbols which can be used to affect compilation. These can be defined within code or with parameters to the compiler. The "pre-processing" directives in C# (named solely for consistency with C/C++, despite there being no separate pre-processing step) are (text taken from the ECMA specification):
#define and #undef
Used to define and undefine conditional compilation symbols
#if, #elif, #else and #endif
Used to conditionally skip sections of source code
#line
Used to control line numbers emitted for errors and warnings.
#error and #warning
Used to issue errors and warnings.
#region and #endregion
Used to explicitly mark sections of source code.
See section 9.5 of the ECMA specification for more information on the above. Conditional compilation can also be achieved using the Conditional attribute on a method, so that calls to the method will only be compiled when the appropriate symbol is defined. See section 24.4.2 of the ECMA specification for more information on this.
Author: Eric Gunnerson
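To illustrate the Conditional attribute mentioned above, here is a minimal sketch; the Logger class and the VERBOSE symbol are made up for the example:

```csharp
#define VERBOSE
using System;
using System.Diagnostics;

static class Logger
{
    public static int Calls;

    // The compiler removes call sites entirely when the symbol is not
    // defined in the calling file, so in a build without VERBOSE the
    // Calls counter would stay at 0.
    [Conditional("VERBOSE")]
    public static void Trace(string message)
    {
        Calls++;
        Console.WriteLine(message);
    }

    static void Main()
    {
        Logger.Trace("compiled in, because VERBOSE is defined above");
        Console.WriteLine(Logger.Calls); // 1
    }
}
```

Unlike an `#if`/`#endif` block around every call, the attribute lives in one place and the call sites stay clean.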
So that you can have fun typing THIS over and over and over again.
// Windows Presentation Foundation dependency property.
public class MyStateControl : ButtonBase
{
public MyStateControl() : base() { }
public Boolean State
{
get { return (Boolean)this.GetValue(StateProperty); }
set { this.SetValue(StateProperty, value); }
}
public static readonly DependencyProperty StateProperty = DependencyProperty.Register(
"State", typeof(Boolean), typeof(MyStateControl),new PropertyMetadata(false));
}
Obviously the designers of C# and .NET never actually use any of the libraries or frameworks they create. If they did, they would realize that some form of hygienic syntactic macro system is definitely in order.
Don't let the shortcomings of C and C++'s lame macros sour you on the power of compile time resolved code. Compile time resolution and code generation allows you to more effectively express the MEANING and INTENT of code without having to spell out all of the niggling details of the source code. For example, what if you could replace the above with this:
public class MyStateControl : ButtonBase
{
public MyStateControl() : base() { }
[DependencyProperty(DefaultValue=true)]
bool State { get; set; }
}
Boo has them, OCaml (at least MetaOCaml) has them, and C and C++ have them (in a nasty form, but better than not having them at all). C# doesn't.
C++-style macros add a huge amount of complexity without corresponding benefit, in my experience. I certainly haven't missed them either in C# or Java. (I rarely use preprocessor symbols at all in C#, but I'm occasionally glad they're there.)
Now various people have called for Lisp-style macros, which I know little about but certainly sound rather more pleasant than C++-style ones.
What do you particularly want to do with macros? We may be able to help you think in a more idiomatically C# way...
C# is aimed at a wider audience (or, in other terms, a larger consumer base) than C++, C or ASM. The only way of achieving this goal is reaching programmers who are considerably less skilled. Therefore, all the powerful but dangerous tools are taken away: macros, multiple inheritance, control over object lifetime and type-agnostic programming.
In the very same way, matches, knives and nail guns are useful and necessary, but they have to be kept out of the reach of children. (Sadly, arson, murder, memory leaks and unreadable code still happen.)
And before accusing me of not thinking in C#, how many times have you written this:
protected int _PropOne;
public int PropOne
{
get
{
return _PropOne;
}
set
{
if(value == _PropOne) { return; }
NotifyPropertyChanging("PropOne");
_PropOne = value;
NotifyPropertyChanged("PropOne");
}
}
With macros, each of those 16-line blocks would collapse to a single line:
DECLARE_PROPERTY(int, PropOne)
DECLARE_PROPERTY(string, PropTwo)
DECLARE_PROPERTY(BitmapImage, PropThree)
Macros in C / C++ were used to define constants, produce small inline functions, and for various things directly related to compiling the code (#ifdef).
In C#, you have strongly typed constants, a compiler smart enough to inline functions when necessary, and one that knows how to compile stuff the right way (no precompiled header nonsense).
But there's no particular reason why you couldn't run your CS file through the C preprocessor first if you really wanted to :)
As a long time C# programmer who went off to learn C++ for a while, I now miss rich support for metaprogramming in C#. At least, I now have a more expansive appreciation for what metaprogramming can mean.
I would really like to see the kind of macro support that Nemerle has brought into C#. It seems to add a very natural and powerful extension capability to the language. If you haven't looked at it, I really recommend doing so.
There are some great examples on Wikipedia.
Macros are overused in C++, but they still have their uses; however, most of these uses are not relevant in C# due to reflection and the better-integrated use of exceptions for error reporting.
This article compares Perl and Lisp macros, but the point is still the same: text-level macros (Perl/C++) cause massive problems compared to source-level macros (Lisp):
http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.html
Braver people than me have rolled their own macro-like system in C#: http://www.codeproject.com/KB/recipes/prepro.aspx
Macros are a tool for the days when most programmers were smarter than the compiler. In C/C++, there are still some cases where this is true.
Nowadays, most programmers aren't as smart as the C# compiler/runtime.
You can do some of the things you would do with macros, such as PropertyChanged, in other ways.
Is that better than macros?
That's a question YOU must decide :)
Anyone who agrees with the idea that macros are bad should read the book, "With Folded Hands." http://en.wikipedia.org/wiki/With_Folded_Hands It tells a story about how we can keep people from doing stupid things all the way to the point of preventing them from doing very wise things.
While I like C#, I do really hate that it contributes to the stupidification of actual software engineers. So, yes, leave macros to the professionals. While we're at it, leave the naming of variables to professionals, too. That can make for some really unreadable code. To follow the full statement of "code must be ultimately readable" all variables should be named A-Z, followed by a-z (or some other arbitrary construct like only nouns). Because some unskilled person may name their variable "SomethingUsefulButNotAllowedByTheCompilerBecauseSomeUsersMayDoDumbThings".

What problems does reflection solve?

I went through all the posts on reflection but couldn't find the answer to my question.
What were the problems in the programming world before .NET reflection
came and how it solved those problems?
Please explain with an example.
It should be stated that .NET reflection isn't revolutionary - the concepts have been around in other frameworks.
Reflection in .NET has 2 facets:
Investigating type information
Without some kind of reflection / introspection API, it becomes very hard to perform things like serialization. Rather than having this provided at runtime (by inspecting the properties/fields/etc), you often need code-generation instead, i.e. code that explicitly knows how to serialize each of your types. Tedious, and painful if you want to serialize something that doesn't have a twin.
Likewise, there is nowhere to store additional metadata about properties etc, so you end up having lots of additional code, or external configuration files. Something as simple as being able to associate a friendly name with a property (via an attribute) is a huge win for UI code.
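A sketch of both points above: reading property values generically, and attaching a friendly name via an attribute that UI code can pick up with reflection. The FriendlyName attribute and Person class are hypothetical names for the example:

```csharp
using System;
using System.Reflection;

// Hypothetical attribute for attaching a UI label to a property.
[AttributeUsage(AttributeTargets.Property)]
class FriendlyNameAttribute : Attribute
{
    public string Name { get; }
    public FriendlyNameAttribute(string name) => Name = name;
}

class Person
{
    [FriendlyName("Full name")]
    public string Name { get; set; } = "Ada";
    public int Age { get; set; } = 36;
}

static class Demo
{
    static void Main()
    {
        var person = new Person();
        foreach (PropertyInfo prop in typeof(Person).GetProperties())
        {
            // Prefer the friendly name when the attribute is present.
            var attr = prop.GetCustomAttribute<FriendlyNameAttribute>();
            string label = attr?.Name ?? prop.Name;
            Console.WriteLine($"{label}: {prop.GetValue(person)}");
        }
    }
}
```

The same loop works for any type, which is exactly what a generic serializer or property grid needs; without reflection, each type would need hand-written (or generated) code.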
Metaprogramming
.NET reflection also provides a mechanism to create types (etc) at runtime, which is hugely powerful for some specific scenarios; the alternatives are:
essentially running a parser/logic tree at runtime (rather than compiling the logic at runtime into executable code) - much slower
yet more code generation - yay!
I think to understand the need for reflection in .NET, we need to go back to before .NET. After all, modern languages like Java and C# do not have a history BF (before reflection).
C++ arguably has had the most influence on C# and Java. But C++ did not originally have reflection, and we coded without it and managed to get by. Occasionally we had a void pointer and would use a cast to force it into whatever type we wanted. The problem here was that the cast could fail with terrible consequences:
double CalculateSize(void* rectangle) {
    return ((Rect*)rectangle)->getWidth() * ((Rect*)rectangle)->getHeight();
}
Now there are plenty of arguments why you shouldn't have coded yourself into this problem in the first place. But the problem is not much different from .NET 1.1 with C# when we didn't have generics:
Hashtable shapes = new Hashtable();
....
double CalculateSize(object shape) {
return ((Rect)shape).Width * ((Rect)shape).Height;
}
However, when the C# example fails, it does so with an exception rather than a potential core dump.
When reflection was added to C++ (known as Run Time Type Identification, or RTTI), it was hotly debated. In Stroustrup's book The Design and Evolution of C++, he lists the following arguments against RTTI, noting that some people:
Declared the support unnecessary
Declared the new style inherently evil ("against the spirit of C++")
Deemed it too expensive
Thought it too complicated and confusing
Saw it as the beginning of an avalanche of new features
But it did allow us to query the type of objects, or features of objects. For example (using C#)
Hashtable shapes = new Hashtable();
....
double CalculateSize(object shape) {
if(shape is Rect) {
return ((Rect)shape).Width * ((Rect)shape).Height;
}
else if(shape is Circle) {
return Math.Pow(((Circle)shape).Radius, 2.0) * Math.PI;
}
}
Of course, with proper planning this example should never need to occur.
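For completeness, the "proper planning" alternative is ordinary polymorphism; a minimal sketch, with the Shape hierarchy invented for this example:

```csharp
using System;

abstract class Shape
{
    public abstract double CalculateSize();
}

class Rect : Shape
{
    public double Width, Height;
    public override double CalculateSize() => Width * Height;
}

class Circle : Shape
{
    public double Radius;
    public override double CalculateSize() => Math.Pow(Radius, 2.0) * Math.PI;
}

static class ShapeDemo
{
    static void Main()
    {
        Shape[] shapes = { new Rect { Width = 3, Height = 7 }, new Circle { Radius = 1 } };
        foreach (Shape s in shapes)
            Console.WriteLine(s.CalculateSize()); // no type tests needed
    }
}
```

Each shape carries its own size calculation, so the caller never needs `is`/cast chains; reflection earns its keep only when you cannot design the types up front.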
So, real world situations where I've needed it include:
Accessing objects from shared memory, all I have is a pointer and I need to decide what to do with it.
Dynamically loading assemblies, think about NUnit where it loads every assembly and uses reflection to determine which classes are test fixtures.
Having a mixed bag of objects in a Hashtable and wanting to process them differently in an enumerator.
Many others...
So, I would go as far as to argue that reflection has not enabled the ability to do something that couldn't be done before. However, it does make some types of problems easier to code, clearer to read, shorter to write, etc.
Of course that's just my opinion, I could be wrong.
I once wanted to have unit tests in a text file that could be modified by a non-technical user, in this format (in C++):
MyObj Function args //textfile.txt
But I couldn't find a way to read in a string and then have the code create an object instance of the type represented by the string, without reflection, which C++ doesn't support.
char *str;     // read in some type from a text file; say the string is "MyObj"
str *obj;      // (not valid C++) declare a pointer whose type is named by the string
obj = new str; // (not valid C++) create a MyObj
Another use might be to have a generic copy function that could copy the members of a class without knowing them in advance.
It helps a lot when you are using C# attributes like [Obsolete] or [Serializable] in your code. Frameworks like NUnit use reflection on classes and their methods to understand which methods are tests, setup, teardown, etc.
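A sketch of the NUnit-style discovery described above, using a hypothetical [MyTest] marker attribute (NUnit's real attributes and runner are more elaborate): scan a type's methods for the attribute and invoke the matches.

```csharp
using System;
using System.Reflection;

// Hypothetical marker attribute, standing in for NUnit's [Test].
[AttributeUsage(AttributeTargets.Method)]
class MyTestAttribute : Attribute { }

class MathTests
{
    [MyTest] public void AdditionWorks() { if (1 + 1 != 2) throw new Exception("fail"); }
    public void NotATest() { }
}

static class Runner
{
    // Returns how many [MyTest] methods were found and run.
    public static int RunTests(Type fixture)
    {
        object instance = Activator.CreateInstance(fixture);
        int run = 0;
        foreach (MethodInfo method in fixture.GetMethods())
        {
            if (method.GetCustomAttribute<MyTestAttribute>() == null) continue;
            method.Invoke(instance, null); // throws (wrapped) if the test fails
            run++;
        }
        return run;
    }

    static void Main() => Console.WriteLine(Runner.RunTests(typeof(MathTests))); // prints 1
}
```

The runner needs no compile-time knowledge of MathTests at all; that is precisely the problem reflection solves.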
