I need to provide a copy of the source code to a third party, but given that it's a nifty extensible framework that could easily be repurposed, I'd rather provide a less OO version (a 'procedural' version, for want of a better term) that would allow minor tweaks to values and so on, but not reimplementation using the full flexibility of how it is currently structured.
The code makes use of the usual stuff: classes, constructors, etc. Is there a tool or method for 'simplifying' this into something that is still the 'source', but uses only plain variables?
For example, if I had a class instance 'myclass' which initialised this.blah in the constructor, the same could be done with a variable called myclass_blah, which would then be manipulated in a more 'flat' way. I realise some things like polymorphism would probably not be possible in such a situation. Perhaps an obfuscator set to a 'super mild' setting would achieve it?
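To illustrate the kind of transformation I'm after, a rough before/after sketch (all names invented):

// What I ship today:
public class MyClass
{
    private int blah;
    public MyClass() { blah = 42; }
    public void Bump() { blah++; }
}

// Roughly what I'd like to hand over instead - the same logic flattened
// into statics, with no class structure left to extend:
public static class MyClassFlat
{
    public static int myclass_blah = 42;
    public static void Myclass_Bump() { myclass_blah++; }
}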
Thanks
My experience with nifty extensible frameworks has been that most shops have their own nifty extensible frameworks (usually more than one) and are not likely to steal them from vendor-provided source code. If you are under an obligation to provide source code (due to some business relationship), then, at least in my mind, there's an ethical obligation to provide the actual source code, in a maintainable form. How you protect the source code is a legal matter and I can't offer legal advice, but really you should be including some license with your release and dealing with clients who are not going to outright steal your IP (assuming it's actually yours under the terms you're developing it under).
As has already been said, if this is a requirement based on contractual restrictions, then don't do it. In short, providing a version of the source that differs from what they're actually running becomes a liability, and I doubt it is one that your company should be willing to take; proving whether the code provided matches the code they are running is simple. This is also true if you're trying to avoid the license restrictions of libraries your application uses (e.g. the GPL).
If that isn't the case, then why not provide a limited version of your extensibility framework that only works with internal types, and statically compile any required extensions into your application? This would allow the application to function the same as what they currently run while remaining maintainable, without giving up your sacred framework. I've never done it myself, but this sounds like something ILMerge could help with.
If you don't want to give out your framework - just don't. Provide only the source you think is required. Otherwise, most likely you'll need to either support both versions in the future OR never work/interact with these people (and the people they know) again.
Don't forget that non-obfuscated .NET assemblies carry IL in an easily decompilable form. It is often easier to use ILSpy/Reflector to read someone else's code than to look at the sources.
If the reason for providing the code is some sort of inspection (even simply looking at the code), you'd better have semi-decent code. I would seriously consider throwing away a tool if its code looked like FORTRAN-style C# ( http://www.nikhef.nl/~templon/fortran/fortran_style ).
Side note: I believe "nifty extensible frameworks" are one of the roots of "not invented here" syndrome - I'd be more worried about comments on the framework (like "this code is ##### because it does not use the YYY pattern and the spacing is wrong") than about reuse.
What is the simplest way to identify if a given method is reading or writing a member variable or property?
I am writing a tool to assist in an RPC system, in which access to remote objects is expensive. Being able to detect that a given object is not used in a method would allow us to avoid serializing its state. Doing it on source code is perfectly reasonable (but being able to do it on compiled code would be amazing).
I think I can either write my own simple parser, or I can try to use one of the existing C# parsers and work with the AST. I am not sure whether it is possible to do this on assemblies using reflection. Are there any other ways? What would be the simplest?
EDIT: Thanks for all the quick replies. Let me give some more information to make the question clearer. I definitely prefer correct, but it shouldn't be extremely complex. What I mean is that we can't go too far checking for extremes or impossibilities (such as the passed-in delegates that were mentioned, which is a great point). It would be enough to detect those cases, assume everything could be used, and not optimize there. I would assume those cases are relatively uncommon.
The idea is for this tool to be handed to developers outside our team, who should not have to be concerned with this optimization. The tool takes their code and generates proxies for our own RPC protocol. (We are using protobuf-net for serialization only; no WCF or .NET Remoting.) For this reason, anything we use has to be free, or we wouldn't be able to deploy the tool because of licensing issues.
You can have simple or you can have correct - which do you prefer?
The simplest way would be to parse the class and the method body, then identify the set of tokens which are properties and field names of the class. The subset of those tokens which appears in the method body is the set of properties and field names you care about.
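For what it's worth, here is a deliberately naive sketch of that token intersection, written against Roslyn's syntax API purely for convenience (the Microsoft.CodeAnalysis.CSharp package - an assumption on my part; any tokenizer would do). The next paragraph shows why it is not correct:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class NaiveFieldUsage
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class C { int Length; void M() { int x = \"\".Length; } }");
        var cls = tree.GetRoot().DescendantNodes()
                      .OfType<ClassDeclarationSyntax>().Single();

        // The token set: every field name declared by the class.
        var fieldNames = new HashSet<string>(
            cls.Members.OfType<FieldDeclarationSyntax>()
               .SelectMany(f => f.Declaration.Variables)
               .Select(v => v.Identifier.Text));

        // Intersect with the identifiers appearing in the method body.
        var method = cls.Members.OfType<MethodDeclarationSyntax>().Single();
        var used = method.DescendantNodes().OfType<IdentifierNameSyntax>()
                         .Select(id => id.Identifier.Text)
                         .Where(fieldNames.Contains)
                         .Distinct();

        Console.WriteLine(string.Join(", ", used)); // prints "Length" - wrongly, as shown next
    }
}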
This trivial analysis of course is not correct. If you had
class C
{
    int Length;
    void M() { int x = "".Length; }
}
Then you would incorrectly conclude that M references C.Length. That's a false positive.
The correct way to do it is to write a full C# compiler, and use the output of its semantic analyzer to answer your question. That's how the IDE implements features like "go to definition".
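Today, Roslyn exposes exactly such a semantic analyzer, so a sketch of the correct check (again assuming the Microsoft.CodeAnalysis.CSharp package) might look like this:

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class SemanticFieldUsage
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class C { int Length; void M() { int x = \"\".Length; } }");
        var compilation = CSharpCompilation.Create("demo",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
        var model = compilation.GetSemanticModel(tree);

        var method = tree.GetRoot().DescendantNodes()
                         .OfType<MethodDeclarationSyntax>().Single();
        foreach (var id in method.DescendantNodes().OfType<IdentifierNameSyntax>())
        {
            // Ask the semantic model what the identifier actually binds to.
            var field = model.GetSymbolInfo(id).Symbol as IFieldSymbol;
            if (field != null)
                Console.WriteLine("M uses field {0}.{1}",
                    field.ContainingType.Name, field.Name);
        }
        // Prints nothing: "".Length binds to the string.Length property,
        // not to the field C.Length, so the false positive disappears.
    }
}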
Before attempting to write this kind of logic yourself, I would check to see if you can leverage NDepend to meet your needs.
NDepend is a code dependency analysis tool ... and much more. It implements a sophisticated analyzer for examining relationships between code constructs and should be able to answer that question. It also operates on both source and IL, if I'm not mistaken.
NDepend exposes CQL - Code Query Language - which allows you to write SQL-like queries against the relationships between structures in your code. NDepend has some support for scripting and is capable of being integrated with your build process.
To complete LBushkin's answer on NDepend (disclaimer: I am one of the developers of this tool), NDepend can indeed help you with that. The Code LINQ Query (CQLinq) below matches methods that...
shouldn't provoke any RPC calls, but
are reading/writing any fields of any RPC types,
or are reading/writing any properties of any RPC types.
Notice how we first define the four sets - typesRPC, fieldsRPC, propertiesRPC, methodsThatShouldntUseRPC - and then match the methods that violate the rule. Of course, this CQLinq rule needs to be adapted to match your own typesRPC and methodsThatShouldntUseRPC:
warnif count > 0

// First define the types whose calls are RPC
let typesRPC = Types.WithNameIn("MyRpcClass1", "MyRpcClass2")

// Define instance fields of RPC types
let fieldsRPC = typesRPC.ChildFields()
   .Where(f => !f.IsStatic).ToHashSet()

// Define instance property getters and setters of RPC types
let propertiesRPC = typesRPC.ChildMethods()
   .Where(m => !m.IsStatic && (m.IsPropertyGetter || m.IsPropertySetter))
   .ToHashSet()

// Define methods that shouldn't provoke RPC calls
let methodsThatShouldntUseRPC =
   Application.Methods.Where(m => m.NameLike("XYZ"))

// Match methods that shouldn't do any RPC call
// but that use any RPC field (reading or writing) or property
from m in methodsThatShouldntUseRPC.UsingAny(fieldsRPC).Union(
          methodsThatShouldntUseRPC.UsingAny(propertiesRPC))
let fieldsRPCUsed = m.FieldsUsed.Intersect(fieldsRPC)
let propertiesRPCUsed = m.MethodsCalled.Intersect(propertiesRPC)
select new { m, fieldsRPCUsed, propertiesRPCUsed }
My intuition is that detecting which member variables will be accessed is the wrong approach. My first guess at a way to do this would be to just request serialized objects on an as-needed basis (preferably at the beginning of whatever function needs them, not piecemeal). Note that TCP/IP (specifically, Nagle's algorithm) should coalesce these requests if they are made in rapid succession and are small.
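A minimal sketch of the as-needed idea, using Lazy&lt;T&gt; from .NET 4 (the fetch delegate stands in for whatever RPC call would really be made):

using System;

public class RemoteAccount
{
    private readonly Lazy<decimal> balance;

    public RemoteAccount(Func<decimal> fetchFromServer)
    {
        // One round-trip, made only if and when the value is first needed.
        balance = new Lazy<decimal>(fetchFromServer);
    }

    public decimal Balance
    {
        get { return balance.Value; }  // no access, no round-trip
    }
}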
Eric has it right: to do this well, you need what amounts to a compiler front end. What he didn't emphasize enough is the need for strong flow analysis capabilities (or a willingness to accept very conservative answers, possibly alleviated by user annotations). Maybe he meant that by the phrase "semantic analysis", although his example of "go to definition" just needs a symbol table, not flow analysis.
A plain C# parser could only be used to get very conservative answers (e.g., if method A in class C mentions identifier X at all, assume it reads class member X; if A contains no calls and never mentions X, then you know it can't read member X).
The first step beyond this is having a compiler's symbol table and type information (if method A refers to class member X directly, then assume it reads member X; if A contains no calls and mentions identifier X only in the context of accesses to objects which are not of this class type, then you know it can't read member X). You have to worry about qualified references, too; Q.X may read member X if Q is compatible with C.
The sticky point is calls, which can hide arbitrary actions. An analysis based on just parsing and symbol tables could determine that the calls are harmless if the arguments refer only to constants or to objects which are not of the class type which A might represent (possibly inherited).
If you find an argument that has a C-compatible class type, now you have to determine whether that argument can be bound to this, requiring control and data flow analysis:
method A()
{
    Object q = this;
    ...
    ... q = that; ...
    ...
    foo(q);
}
foo might hide an access to X. So you need two things: flow analysis to determine whether the initial assignment to q can reach the call to foo (it might not; q = that may dominate all the calls to foo), and call graph analysis to determine which methods foo might actually invoke, so that you can analyze those for accesses to member X.
You can decide how far you want to go with this, simply making the conservative assumption "A reads X" any time you don't have enough information to prove otherwise. This will give you a "safe" answer (if not a "correct" one, or what I'd prefer to call a "precise" one).
Of the frameworks that might be helpful, you might consider Mono, which surely parses and builds symbol tables. I don't know what support it provides for flow analysis or call graph extraction; I would not expect the Mono-to-IL front-end compiler to do much of that, as people usually hide that machinery in the JIT part of JIT-based systems. A downside is that Mono may be behind the "modern C#" curve; last I heard, it handled only C# 2.0, but my information may be stale.
An alternative is our DMS Software Reengineering Toolkit and its C# Front End (not an open source product).
DMS provides general source code parsing, tree building/inspection/analysis, general symbol table support, and built-in machinery for implementing control flow analysis, data flow analysis, points-to analysis (needed for "What does object O point to?"), and call graph construction. This machinery has all been tested by fire with DMS's Java and C front ends, and the symbol table support has been used to implement full C++ name and type resolution, so it's pretty effective. (You don't want to underestimate the work it takes to build all that machinery; we've been working on DMS since 1995.)
The C# Front End provides full C# 4.0 parsing and full tree building. It presently does not build symbol tables for C# (we're working on this), and that's a shortcoming compared to Mono. With such a symbol table, however, you would have access to all that flow analysis machinery (which has been tested with DMS's Java and C front ends), and that might be a big step up from Mono if it doesn't provide that.
If you want to do this well, you have a considerable amount of work in front of you. If you want to stick with "simple", you'll have to make do with just parsing the tree and being OK with very conservative answers.
You didn't say much about knowing whether a method writes to a member. If you are going to minimize traffic the way you describe, you want to distinguish the "read", "write" and "update" cases and optimize messages in both directions. The analysis is obviously pretty similar for the various cases.
Finally, you might consider processing MSIL directly to get the information you need; you'll still have the flow analysis and conservative analysis issues. You might find the following technical paper interesting; it describes a fully distributed Java object system that has to do the same basic analysis you want to do, and does so, IIRC, by analyzing class files and doing massive bytecode rewriting:
Java Orchestra System
By RPC, do you mean .NET Remoting? Or DCOM? Or WCF?
All of these offer the opportunity to monitor cross-process communication and serialization via sinks and other constructs, but they are all platform-specific, so you'll need to specify the platform...
You could listen for an event raised when a property is read or written, via an interface similar to INotifyPropertyChanged (although you obviously won't know which method performed the read/write).
I think the best you can do is explicitly maintain a dirty flag.
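A minimal sketch of the dirty-flag idea (names invented): the setter records that state changed, and the RPC layer serializes only when IsDirty is true.

public class RemoteStateHolder
{
    private int balance;
    public bool IsDirty { get; private set; }

    public int Balance
    {
        get { return balance; }
        set { balance = value; IsDirty = true; }  // every write marks the object dirty
    }

    public void MarkClean()
    {
        IsDirty = false;  // call after a successful sync
    }
}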
Omar Al Zabir is looking for "a simpler way to do AOP style coding".
He created a framework called AspectF, which is "a fluent and simple way to add Aspects to your code".
It is not true AOP, because it doesn't do any compile-time or run-time weaving, but does it accomplish the same goals as AOP?
Here's an example of AspectF usage:
public void InsertCustomerTheEasyWay(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    AspectF.Define
        .Log(Logger.Writer, "Inserting customer the easy way")
        .HowLong(Logger.Writer, "Starting customer insert", "Inserted customer in {1} seconds")
        .Retry()
        .Do(() =>
        {
            CustomerData data = new CustomerData();
            data.Insert(firstName, lastName, age, attributes);
        });
}
Here are some posts by the author that further clarify the aim of AspectF:
AspectF fluent way to put Aspects into your code for separation of concern (Blog)
AspectF (google code)
AspectF Fluent Way to Add Aspects for Cleaner Maintainable Code (CodeProject)
From the author's posts, I gather that AspectF is designed not so much as an AOP replacement, but as a way to achieve "separation of concern and keep code nice and clean".
Some thoughts/questions:
Any thoughts on using this style of coding as a project grows?
Is it a scalable architecture?
Are there performance concerns?
How does maintainability compare against a true AOP solution?
I don't mean to bash the project, but
IMHO this is abusing AOP. Aspects are not suitable for everything, and used like this they only hamper readability.
Moreover, I think this misses one of the main points of AOP, which is being able to add/remove/redefine aspects without touching the underlying code.
Aspects should be defined outside of the affected code in order to make them truly cross-cutting concerns. In AspectF's case, the aspects are embedded in the affected code, which violates SoC/SRP.
Performance-wise there is no penalty (or it's negligible) because there is no runtime IL manipulation, just as explained in the CodeProject article. Then again, I've never had any performance problems with Castle DynamicProxy either.
On a recent project, it was recommended to me that I give AspectF a try.
I took to heart the idea of laying out all the concerns up front, and having the code that does the real work blissfully unaware of all the checks and balances that happen outside of it.
I actually took it a little further and added a security "concern" that required credentials received as part of a WCF request. It went off to the database and did what it had to. I did the obvious validations and the security check before running the actual code that would return the required data.
I found this approach quite a refreshing change, and I certainly liked that I had the source of AspectF to walk through as I was debugging and testing the service calls.
In the office, some argued that these concerns should be implemented as a decoration on a class/method. But it doesn't really matter where you decorate it; at some point, somewhere, you need to say you wish to perform certain actions/checks. I like the fact that it's all laid out in place: not in another code file, not in some kind of configuration file, and, for once, not as yet another decoration on a class/method.
I'm not saying it's true AOP - and I certainly think there are solutions and situations where it really isn't the best way of implementing your objectives - but given that it's just a couple of K of source files, it makes for a very lightweight implementation.
AspectF is basically a very clever way of chaining delegates together.
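By that I mean something like the following toy sketch (invented names, not AspectF's actual code): each aspect wraps the eventual unit of work in another delegate, and Do() runs the composed chain.

using System;

public class AspectChain
{
    // The chain starts as the identity: run the work unmodified.
    private Func<Action, Action> chain = work => work;

    public AspectChain Log(string message)
    {
        var previous = chain;
        chain = work => previous(() =>
        {
            Console.WriteLine(message);  // the aspect runs, then the wrapped work
            work();
        });
        return this;
    }

    public AspectChain Retry(int attempts)
    {
        var previous = chain;
        chain = work => previous(() =>
        {
            for (var i = 0; ; i++)
            {
                try { work(); return; }
                catch (Exception)
                {
                    if (i >= attempts - 1) throw;  // out of retries: let it propagate
                }
            }
        });
        return this;
    }

    public void Do(Action work)
    {
        chain(work)();  // compose and execute
    }
}

Usage then reads much like the example earlier in this thread: new AspectChain().Log("Inserting customer").Retry(3).Do(() => { /* real work */ });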
I don't think every developer is going to look at the code and say how wonderful it is to look at; indeed, in our office it confused some of us! But once you understand what's going on, it's an inexpensive way of achieving much of what can be done by other approaches too.
I have heard people state that code generators and T4 templates should not be used. The logic behind that is that if you are generating code with a generator, then there is a better, more efficient way to build the code, through generics and templating.
While I slightly agree with this statement, I have not really found effective ways to build templates that can, for instance, instantiate themselves. In other words, I can never do:
return new T();
Additionally, if I want to generate code based on database values, I have found that using Microsoft.SqlServer.Management.SMO in conjunction with T4 templates is wonderful for generating mass amounts of code without having to copy/paste or use ReSharper.
Another problem I have found with generics is that, to my shock, there are a lot of developers who do not understand them. When I do examine generics for a solution, there are times where it gets complicated, because C# says I cannot do something that seems logical in my mind.
What are your thoughts? Do you prefer to build a generator, or do you prefer to use generics? Also, how far can generics go? I know a decent amount about generics, but there are traps and pitfalls that always cause me to resort to a T4 template.
What is the more proper way to handle scenarios where you need a large amount of flexibility? Oh, and as a bonus to this question: what are good resources on C# and generics?
You can do new T(); if you do this:

public class Meh<T>
    where T : new()
{
    public static T CreateOne()
    {
        return new T();
    }
}
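Usage is then just the following (StringBuilder picked arbitrarily as a type with a public parameterless constructor):

var one = Meh<System.Text.StringBuilder>.CreateOne();  // the new() constraint lets the compiler verify this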
As for code generators: I use one every day without any problems. I'm using one right now, in fact :-)
Generics solve one problem, code generators solve another. For example, creating a business model using a UML editor and then generating your classes with persistence code, as I do all the time using this tool, couldn't be achieved with generics, because each persistent class is completely different.
As for a good source on generics: the best has got to be Jon Skeet's book, of course! :-)
As the originator of T4, I've had to defend this question quite a few times as you can imagine :-)
My belief is that at its best code generation is a step on the way to producing equivalent value using reusable libraries.
As many others have said, the key concept for maintaining DRY is to never, ever change generated code manually, but rather to preserve your ability to regenerate when the source metadata changes or you find a bug in the code generator. At that point the generated code has many of the characteristics of object code, and you don't run into copy/paste-type problems.
In general, it's much less effort to produce a parameterized code generator (especially with template-based systems) than it is to correctly engineer a high quality base library that gets the usage cost down to the same level, so it's a quick way to get value from consistency and remove repetition errors.
However, I still believe that the finished system would most often be improved by having less total code. If nothing else, its memory footprint would almost always be significantly smaller (although folks tend to think of generics as cost free in this regard, which they most certainly are not).
If you've realised some value using a code generator, then this often buys you some time or money or goodwill to invest in harvesting a library from the generated codebase. You can then incrementally reengineer the code generator to target the new library and hopefully generate much less code. Rinse and repeat.
One interesting counterpoint that has been made to me and that comes up in this thread is that rich, complex, parametric libraries are not the easiest thing in terms of learning curve, especially for those not deeply immersed in the platform. Sticking with code generation onto simpler basic frameworks can produce verbose code, but it can often be quite simple and easy to read.
Of course, where you have a lot of variance and extremely rich parameterization in your generator, you might just be trading off complexity in your product for complexity in your templates. This is an easy path to slide into and can make maintenance just as much of a headache - watch out for that.
Generating code isn't evil and it doesn't smell! The key is to generate the right code at the right time. I think T4 is great - I only use it occasionally, but when I do it is very helpful. To say, unconditionally, that generating code is bad is unconditionally crazy!
It seems to me code generators are fine as long as the code generation is part of your normal build process, rather than something you run once and then keep its output. I add this caveat because if you just use the code generator once and discard the data that created it, you're automatically creating a massive DRY violation and maintenance headache; whereas generating the code every time effectively means that whatever you are using to do the generating is the real source code, and the generated files are just intermediate compile stages that you should mostly ignore.
Lex and yacc are classic examples of tools that allow you to specify functionality in an efficient manner and generate efficient code from it. Trying to do their jobs by hand will lengthen your development time and probably produce less efficient and less readable code. And while you could certainly incorporate something like lex and yacc directly into your code and do their jobs at run time instead of at compile time, that would certainly add considerable complexity to your code and slow it down. If you actually need to change your specification at run time it might be worth it, but in most normal cases using lex/yacc to generate code for you at compile time is a big win.
A good percentage of what is in Visual Studio 2010 would not be possible without code generation. Entity Framework would not be possible. The simple act of dragging and dropping a control onto a form would not be possible, nor would LINQ. To say that code generation should not be used is strange, as so many people use it without even thinking about it.
Maybe it is a bit harsh, but for me code generation smells.
The use of code generation means that there are numerous underlying common principles which could instead be expressed in a "don't repeat yourself" fashion. It may take a bit longer, but it is satisfying when you end up with classes that only contain the bits that really change, based on an infrastructure that contains the mechanics.
As to generics... no, I don't have too many issues with them. The only thing that currently doesn't work is saying that

List<Animal> a = new List<Animal>();
List<object> o = a;

The next version of C# addresses this with covariance, though only for interfaces and delegates: IEnumerable<object> o = a; will compile, while the List<object> assignment still won't.
Code generation is, for me, a workaround for many problems found in languages, frameworks, etc. It is not evil by itself; I would say it is very, very bad (i.e. evil) to release a language (C#) and framework which force you to copy and paste (swap on properties, event triggering, lack of macros) or to use magic strings (WPF binding).
So, I cry, but I use them, because I have to.
I've used T4 for code generation and also generics. Both are good, have their pros and cons, and are suited to different purposes.
In my case, I use T4 to generate entities, DAL and BLL based on a database schema. However, the DAL and BLL reference a mini-ORM I built, based on generics and reflection. So I think you can use them side by side, as long as you stay in control and keep things small and simple.
T4 generates static code, while generics is dynamic. Generics used my way mean reflection, which is said to be less performant than a "hard-coded" solution; of course, you can cache the reflection results.
Regarding "return new T();", I use dynamic methods like this:
using System;
using System.Reflection;
using System.Reflection.Emit;

public class ObjectCreateMethod
{
    delegate object MethodInvoker();
    MethodInvoker methodHandler = null;

    public ObjectCreateMethod(Type type)
    {
        CreateMethod(type.GetConstructor(Type.EmptyTypes));
    }

    public ObjectCreateMethod(ConstructorInfo target)
    {
        CreateMethod(target);
    }

    void CreateMethod(ConstructorInfo target)
    {
        // Emit a tiny dynamic method equivalent to: object Create() { return new T(); }
        DynamicMethod dynamic = new DynamicMethod(string.Empty,
                                                  typeof(object),
                                                  new Type[0],
                                                  target.DeclaringType);
        ILGenerator il = dynamic.GetILGenerator();
        il.DeclareLocal(target.DeclaringType);
        il.Emit(OpCodes.Newobj, target);   // call the constructor
        il.Emit(OpCodes.Stloc_0);
        il.Emit(OpCodes.Ldloc_0);
        il.Emit(OpCodes.Ret);

        methodHandler = (MethodInvoker)dynamic.CreateDelegate(typeof(MethodInvoker));
    }

    public object CreateInstance()
    {
        return methodHandler();  // one delegate call per instance, no reflection per call
    }
}
Then, I call it like this:
ObjectCreateMethod _MetodoDinamico = new ObjectCreateMethod(info.PropertyType);
object _nuevaEntidad = _MetodoDinamico.CreateInstance();
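For comparison, here is a sketch of the same cached-constructor trick using expression trees instead of hand-written IL (FastActivator is an invented name; System.Linq.Expressions has been in the framework since .NET 3.5):

using System;
using System.Linq.Expressions;

public static class FastActivator
{
    public static Func<object> ForType(Type type)
    {
        // Compiles to roughly: () => (object)new T()
        NewExpression ctorCall = Expression.New(type);
        Expression body = Expression.Convert(ctorCall, typeof(object));
        return Expression.Lambda<Func<object>>(body).Compile();
    }
}

// Usage, mirroring the snippet above:
// Func<object> create = FastActivator.ForType(info.PropertyType);
// object nuevaEntidad = create();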
More code means more complexity. More complexity means more places for bugs to hide, which means longer fix cycles, which in turn means higher costs throughout the project.
Whenever possible, I prefer to minimize the amount of code to provide equivalent functionality; ideally using dynamic (programmatic) approaches rather than code generation. Reflection, attributes, aspects and generics provide lots of options for a DRY strategy, leaving generation as a last resort.
Generics and code generation are two different things. In some cases you could use generics instead of code generation, and for those cases I believe you should. For the other cases, code generation is a powerful tool.
For all the cases where you simply need to generate code based on some data input, code generation is the way to go. The most obvious, but by no means the only, example is the forms editor in Visual Studio. Here the input is the designer data and the output is the code. In this case generics are really no help at all, but it is very nice that VS simply generates the code based on the GUI layout.
Code generators can be considered a code smell indicating a flaw or lack of functionality in the target language.
For example, while it has been said here that "objects that persist cannot be generalized", it would be better to think of it as "objects in C# that automatically persist their data cannot be generalized in C#", because I surely can do so in Python through the use of various methods.
The Python approach could, however, be emulated in static languages through the use of operator[](method_name as string), which either returns a functor or a string, depending on requirements. Unfortunately that solution is not always applicable, and returning a functor can be inconvenient.
The point I am making is that code generators indicate a flaw in a chosen language that could be addressed by providing a more convenient specialised syntax for the specific problem at hand.
The copy/paste type of generated code (like ORMs make) can also be very useful...
You can create your database, and then have the ORM generate a copy of that database definition expressed in your favorite language.
The advantage comes when you change your original definition (the database), press compile, and the ORM (if you have a good one) regenerates your copy of the definition. Now all references to your database can be checked by the compiler's type checker, and your code will fail to compile when you're using tables or columns that do not exist anymore.
Think about this: if I call a method a few times in my code, am I not referring to the name I gave to this method originally? I keep repeating that name over and over... Language designers recognized this problem and came up with "type safety" as the solution: not removing the copies (as DRY suggests we should do), but checking them for correctness instead.
The ORM-generated code brings the same solution when referring to table and column names: not removing the copies/references, but bringing the database definition into your (type-safe) language, where you can refer to classes and properties instead. Together with the compiler's type checking, this solves a similar problem in a similar way: guaranteeing compile-time errors instead of runtime ones when you refer to outdated or misspelled tables (classes) or columns (properties).
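For illustration, a hypothetical fragment of what such a generator might emit for a Customers table (names invented): rename or drop the LastName column and regenerate, and every stale reference becomes a compile-time error instead of a runtime one.

public partial class Customer   // generated from the Customers table
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}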
Quote:
I have not really found effective ways to build templates that can, for instance, instantiate themselves. In other words, I can never do:
return new T();
public abstract class MehBase<TSelf, TParam1, TParam2>
    where TSelf : MehBase<TSelf, TParam1, TParam2>, new()
{
    public static TSelf CreateOne()
    {
        return new TSelf();
    }
}

public class Meh<TParam1, TParam2> : MehBase<Meh<TParam1, TParam2>, TParam1, TParam2>
{
    public void Proof()
    {
        Meh<TParam1, TParam2> instanceOfSelf1 = Meh<TParam1, TParam2>.CreateOne();
        Meh<int, string> instanceOfSelf2 = Meh<int, string>.CreateOne();
    }
}
Why does being able to copy/paste really, really fast make it any more acceptable?
That's the only justification for code generation that I can see.
Even if the generator provides all the flexibility you need, you still have to learn how to use that flexibility - which is yet another layer of learning and testing required.
And even if it runs in zero time, it still bloats the code.
I rolled my own data access class. It knows everything about connections, transactions, stored procedure parms, etc., and I only had to write all the ADO.NET stuff once.
It's now been so long since I had to write (or even look at) anything with a connection object in it that I'd be hard-pressed to remember the syntax offhand.
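A minimal sketch of the kind of wrapper described, under invented names (MyDb, a placeholder connection string) and with real transaction and error policy omitted:

using System.Data;
using System.Data.SqlClient;

public static class MyDb
{
    private const string ConnectionString = "...";  // would come from config

    public static DataTable ExecuteProc(string procName, params SqlParameter[] parms)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
        {
            cmd.Parameters.AddRange(parms);
            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);  // Fill opens and closes the connection itself
            return table;
        }
    }
}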
Code generation, like generics, templates, and other such shortcuts, is a powerful tool. And as with most powerful tools, it amplifies the capability of its user for good and for evil - they can't be separated.
So if you understand your code generator thoroughly, anticipate everything it will produce, and why, and intend it to do so for valid reasons, then have at it. But don't use it (or any of the other techniques) to get you past a place where you're not too sure where you're headed, or how to get there.
Some people think that if you get your current problem solved and some behavior implemented, you're golden. It's not always obvious how much cruft and opaqueness you leave in your trail for the next developer (who might be yourself).
I'm just starting out with C#, and to me it seems like Microsoft called their new system .NET because you have to use the Internet to look everything up to find useful functions and which class they stashed them in.
To me it seems nonsensical to require procedures/functions written and designed to stand alone (non-instantiated static objects) to have their class not also function as their namespace. That is: why can't I use Write or WriteLine instead of Console.WriteLine?
Then, when I start to get used to the idea that the objects I am using (like string) know how to perform the operations I am used to calling external functions for (like ToUpper, ToLower, Substring, etc.), they change the rules with numbers: numbers don't know how to convert themselves from one numeric type to another for some reason; instead you have to invoke the Convert class's static functions to change a double to an int, and the Math class's static functions for rounding and truncating, which quickly turns your simple (in other languages) statement into a gazillion-character line in C#.
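For example, something as trivial as turning a double into an int (values invented) already forces a choice between three spellings:

double d = 9.75;
int truncated = (int)d;              // 9  - a cast truncates toward zero
int rounded   = (int)Math.Round(d);  // 10 - Math.Round, then a cast
int converted = Convert.ToInt32(d);  // 10 - rounds (banker's rounding) and checks overflow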
It also seems obsessed with strong typing, which interferes somewhat with my thought process when I code. I understand that type safety reduces errors, but I think it also increases complexity, sometimes unnecessarily. It would be nice if you could choose context-driven types when you wish, without the explicit casting, converting or ToString-ing that seems to be a basic necessity in C# to get anything done.
So... is it even possible to write meaningful code in Notepad and use the command-line compiler (csc) without Internet access? What reference book would you use without recourse to autocomplete and network access?
Any suggestions on smoothing the process towards grokking this language and using it more naturally?
I think you're suffering a bit from the fact that you've been used to working in one way for some years, and now must take time to get comfortable using and developing on a new platform.
I do not agree that MS has been inconsistent in that a string knows how to convert itself to another type while other datatypes (like ints) do not.
This is simply not the case: strings do not know for themselves how they should be converted to another type either (you can use the Convert class to convert types to other types).
It is true that every type in .NET has a ToString() method, but you should not rely on that method to convert whatever you have to a string.
I think you have never worked in an OO language before, and therefore you're having some difficulties with the paradigm shift.
Think of it this way: it's all about responsibilities and behaviour. A class is (if it is well designed) responsible for doing one thing, and it does that one thing well.
There is no excuse for using Notepad to code in a modern language. SharpDevelop or Visual C# Express provide the functionality to work with C# in a productive way.
And no, given the complexity, avoiding the Internet as a source of information is not a good option either.
You could buy a book that introduces you to the concepts of the language in a structured way, but for up-to-date information, the Internet is necessary.
Yes, there are drawbacks in C#, like in any other language. I can only give you the advice to get used to the language. Many of the drawbacks become understandable after that, even if some of them don't become less annoying. I recommend that you ask clear, direct questions with example code if you want to know how some language constructs work or how you can solve specific problems more efficiently. That makes it easier to answer those questions.
As for Notepad, I have no useful advice; however, I would advise you to use one of the free IDEs, Microsoft's Express Editions or SharpDevelop.
The IDE will speed up the grokking of the language, at which point you can switch back to Notepad.
Reading your post, I was thinking that you mostly worked with C or dynamic languages previously. Maybe C# is just the wrong choice for you; there are IronPython, F# and a bunch of other languages that have the functionality you want (like functions outside of classes, etc.).
I disagree with you about consistency. In fact, there are small inconsistencies between some components of .NET, but most of the framework is very consistent and predictable.
Strong typing is a huge factor in a low defect count. Dynamic typing plays nicely in small and intermediate projects (like scripts); in more complex programs, dynamism can introduce a lot of complexity.
Regarding the Internet and autocomplete: I can hardly imagine any technology the size of .NET that doesn't require a lot of knowledge sources.
Programming in C# using Notepad is like buying a Ferrari to drive on dirt roads.
At least use Visual Studio Express Edition. From what you wrote, I understand that you come from a non-OO background; try to learn the OO concepts and use them. You will eventually understand most of the design decisions made for .NET.
http://en.wikipedia.org/wiki/Object-oriented_programming
Oh boy, where do I start with you? (This will be a long post, hahaha.) Well, let's go little by little:
"Microsoft called their system .NET because you have to use the Internet...": no, the name refers to the framework, within which the whole suite of Microsoft languages (and now other ones too, like Python and Ruby) can call into each other's libraries: you can use a DLL that was built in Visual Basic, F# or C++ from within C#, and from any of those languages you can also call C# libraries. OK, one down!
Next: "it seems nonsensical to require... their class not also function as their namespace". That's because a namespace can hold as many classes as you wish. As for "Why can't I use Write or WriteLine instead of Console.WriteLine?": Write and WriteLine live as static methods on the Console class (System.Console, hence the using statement at the beginning of your program; you can also fully qualify the call as System.Console.WriteLine). All of this suggests to me that you need to study C# syntax. OK, next:
"When I start to get used to the idea that the objects...": in simple words, C# is a strongly typed, type-safe language, and that should tell you what you are getting into; otherwise, stay with weakly typed languages like PHP or C. That does not mean strong typing is bad; it just means it is your job to be explicit. As I tell my students: if you need an int, then declare an int instead of letting the compiler decide for you, otherwise you will have a lot of bad bugs. In other words, do your homework before designing a piece of software.
Note: C# also offers implicit typing and even unsafe code if you want it, but from then on it is your job to make sure things are right, so don't complain later (for being lazy) when bugs arrive at runtime (often when the customer is already using your software).
And last but not least: why would you shoot yourself in the foot by using Notepad? Visual Studio Express is free, and even the database, SQL Server Express, is free too. Unless you work for a company, all the "extra" stuff in the paid editions is for large companies, teams, etc. You can do 99% of the work with the free versions, and you can still upgrade to a full version once you want to scale up to distributed software or a large project, or if your software becomes a big hit (say, you need millions of queries or hits per second from your database, or 100 people working on the same code). For the majority of cases - two or three "normal" developers working at home or in a small office - the free editions are enough.
Cheers!!! (PS: software developer since the '80s)