A full operating system in C#

I saw this thread here. I was wondering if this is legit (it sounds like it is) and what the drawbacks of doing this are. What would it entail to run it stand-alone on some architecture?
Thanks

Trying to create an operating system in a managed language is currently an "interesting research problem". This means that it seems possible, but there are still quite a few important issues that need to be resolved (for example, I wouldn't expect "managed windows" anytime soon).
For example, take a look at the Singularity project (also available on CodePlex). It still has some native parts, but very few of them. As far as I know, even the garbage collector is written in managed code (with some language extension that allows safe manipulation of pointers).
The trick is that even managed code is eventually compiled to native code. In .NET, the compilation is usually done by the JIT compiler when you start the application. In Singularity, this is done in advance, so you run native code (but generated from managed code). Singularity has some other interesting aspects - for example, processes communicate via messages (and cannot dynamically load code), which makes it possible to do some aggressive optimizations when generating native code.
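Singularity's channel contracts are written in Sing#, so they can't be reproduced exactly here, but the flavor of isolated processes that share no state and communicate only by exchanging typed messages can be sketched in ordinary modern C#. Everything below (the Request/Reply types, the channel setup) is an illustrative stand-in, not Singularity's actual API:

    // Sketch: two isolated "processes" that share no mutable state and talk
    // only through typed messages, loosely modeled on Singularity's channels.
    // All names here are invented for the example.
    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    record Request(int Value);
    record Reply(int Result);

    class Program
    {
        static async Task Main()
        {
            var requests = Channel.CreateUnbounded<Request>();
            var replies  = Channel.CreateUnbounded<Reply>();

            // "Server" process: sees only the messages it receives.
            var server = Task.Run(async () =>
            {
                await foreach (var req in requests.Reader.ReadAllAsync())
                    await replies.Writer.WriteAsync(new Reply(req.Value * 2));
                replies.Writer.Complete();
            });

            // "Client" process: sends a request, awaits the reply.
            await requests.Writer.WriteAsync(new Request(21));
            requests.Writer.Complete();
            Console.WriteLine((await replies.Reader.ReadAsync()).Result); // 42

            await server;
        }
    }

Unlike real Singularity channels, nothing here stops the two sides from sharing objects; the point is only the message-passing shape that enables those aggressive optimizations.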

There's an open source project that's trying to achieve exactly that.
It's called the "Managed Operating System Alliance". Mainly targeted as a framework (supplying users with a compiler, libraries, interfaces, tools and an example kernel), it will also feature a complete operating system kernel and small apps.
For further information:
Website: http://mosa-project.org/projects/mosa
IRC: #mosa on freenode

It is legit. The drawbacks are clear: this is a microkernel. It is going to be a while before your video adapter driver is fully managed as well. That takes acquiring critical mass, with many devs and manufacturers jumping on the bandwagon. Difficult, but it has happened before, with Linux as the obvious example.
This is being pursued by Microsoft as well. Singularity has been well publicized. It has evolved into a secret research project named Midori. There have been enough leaks about it to know its goal; Wikipedia has an article about it. I think lots of the devs that worked on the original CLR joined this project. Whether it will come to a good end is an open question. If it does, the project's backer is probably enough to get that critical mass rolling.

Microsoft's Singularity project is an operating-system architecture framework which will allow people to write customizable operating systems, and Microsoft's new operating system will probably be based on Singularity.
.NET is a very powerful framework; it has evolved to contain everything from metadata attributes to LINQ, and it certainly frees us from bad pointer errors.
Just like with Windows Phone and iPhone, people will be able to write customizable operating systems for devices.
Today most firewalls and (hardware) routers contain a customized Linux; that could be replaced with a Singularity kernel and your own business process.
The Singularity kernel is small, so it looks like a perfect alternative to embedded Windows/Linux.
I don't think there is any drawback, except that it is a totally new system and it will take time for hardware vendors to supply compatible devices, but that will happen in the future.


Track Data Input Through Application Code and System Libraries

I am a security dude, and I have done extensive research on this one, and at this point I am looking for guidance on where to go next.
Also, sorry for the long post, I bolded the important parts.
What I am trying to do at a high level is simple:
I am trying to input some data into a program, and "follow" this data, and track how it's processed, and where it ends up.
For example, if I input my login credentials to FileZilla, I want to track every memory reference that accesses them, and initiate traces to follow where that data went, which libraries it was sent to, and bonus points if I can even correlate it down to the network packet.
Right now I am focusing on the Windows platform, and I think my main question comes down to this:
Are there any good APIs for remote-controlling a debugger that understand Windows Forms and system libraries?
Here are the key attributes I have found so far:
The name of this analysis technique is "Dynamic Taint Analysis"
It's going to require a debugger or a profiler
Inspect.exe is a useful tool to find Windows UI elements that take input
The Windows automation framework in general may be useful
Automating debuggers seems to be a pain. The IDebugClient interface allows for richer data, but debuggers like IDA Pro or even Cheat Engine have better memory-analysis utilities.
I am going to need to place memory breakpoints, and track the references and registers that are associated with the input.
Here is a collection of tools I have played with: WinDbg (awesome tool), IDA Pro, Cheat Engine, x64dbg, vdb (a Python debugger), Intel's Pin, Valgrind, etc.
Next, I tried a few dynamic taint analysis tools, but they don't support detecting .NET components or the other conveniences that the Windows debugging framework provides natively via utilities like Inspect.exe:
https://github.com/wmkhoo/taintgrind
http://bitblaze.cs.berkeley.edu/temu.html
I then tried writing my own C# program using the IDebugClient interface, but it's poorly documented, and the best project I could find was from this fellow, and is 3 years old:
C# app to act like WINDBG's "step into" feature
I am willing to contribute code to an existing project that fits this use case, but at this point I don't even know where to start.
I feel like dynamic program analysis and debugging tools as a whole could use some love... I feel kind of stuck, and don't know where to move from here. There are so many different tools and approaches to solving this problem, and all of them are lacking in some manner or another.
Anyway, I appreciate any direction or guidance. If you made it this far thanks!!
-Dave
If you insist on doing this at runtime, Valgrind or Pin might be your best bet. As I understand it (having never used them), you can configure these tools to interpret each machine instruction in an arbitrary way. You want to trace dataflows through machine instructions to track tainted data (reads of such data, followed by writes to registers or condition-code bits). A complication will likely be tracing the origin of an offending instruction back to a program element (DLL? link module? named subroutine?) so that you can complain appropriately.
This is a task you might succeed at as an individual, in terms of effort.
This should work for applications.
I suspect one of your problems will be tracing where data goes in the OS. That's a lot harder, although the same principle applies; your difficulty will be getting the OS supplier to let you track instructions executed in the OS.
Doing this as runtime analysis has the downside that if a malicious application doesn't do anything bad on your particular execution, you won't find any problems. That's the classic shortcoming of dynamic analysis.
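Whichever engine you choose, the bookkeeping at the heart of dynamic taint analysis is the same: a shadow store marks which locations hold tainted data, and a propagation rule runs at every operation. Real tools (Pin, Taintgrind) apply this per machine instruction; the toy C# sketch below shows only the bookkeeping, and every name in it is invented for the example:

    // Toy illustration of taint propagation (not a real analysis engine).
    // A shadow set records which "locations" are tainted; any operation that
    // reads a tainted source marks its destination as tainted too.
    using System;
    using System.Collections.Generic;

    class TaintTracker
    {
        private readonly HashSet<string> tainted = new HashSet<string>();

        public void MarkTainted(string location) { tainted.Add(location); }

        // Propagation rule: dst = f(srcs) is tainted if any src is tainted.
        public void OnAssign(string dst, params string[] srcs)
        {
            bool any = false;
            foreach (string s in srcs)
                if (tainted.Contains(s)) { any = true; break; }
            if (any) tainted.Add(dst); else tainted.Remove(dst);
        }

        public bool IsTainted(string location) { return tainted.Contains(location); }
    }

    class Demo
    {
        static void Main()
        {
            var t = new TaintTracker();
            t.MarkTainted("password");                  // input from the login box
            t.OnAssign("buffer", "password");           // buffer = password
            t.OnAssign("packet", "header", "buffer");   // packet = header + buffer
            Console.WriteLine(t.IsTainted("packet"));   // True: credential reached a packet
        }
    }

The hard part in a real tool is not this logic but doing it at instruction granularity (registers, flags, individual memory bytes) without making the target unusably slow.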
You could consider tracking the data at the source code level using classic compiler techniques. This requires that you have access to all the source code that might be involved (that's actually really hard if your application depends on a wide variety of libraries), that you have tools that can parse and track dataflows through source modules, and that these tools talk to each other for different languages (assembler, C, Java, SQL, HTML, even CSS...).
As static analysis, this has the chance of detecting an undesired dataflow no matter which execution occurs. Turing limitations mean that you likely cannot detect all such issues. That's the shortcoming of static analysis.
Building your own tools, or even integrating individual ones, to do this is likely outside what you can reasonably do as an individual. You'll need to find a uniform framework for building such tools. [Check my bio for one].

Migrating MFC (VC6) application to .NET 2008

I want a very specific answer from developers who have done this, and I want to know how you resolved the problems you faced.
We have a very big MFC (VC6) 32-bit application that has existed for 10 years. Now we would like to migrate it to an unmanaged .NET 64-bit application. We have some constraints: our UI should not change; we may need some managed .NET classes for easier development; we need to know how to add managed code alongside unmanaged code without affecting the architecture; lots of Win32 APIs might have changed to new APIs; it should run on XP, Vista, and Windows 7 machines without any change; these activities should not be time-consuming; and new-technology analysis needs to be done, as we are MFC programmers...
Please share your experience; if you have any clear documents, they would be very helpful...
Note:
For clearer understanding, I am re-phrasing some points again. We want to migrate our VC6 native-code 32-bit application to VS2008 (or VS2010) native code (unmanaged C++) with 64-bit support. The primary requirement is that there should not be any change in the existing UI. In addition, if .NET supports combining managed code with unmanaged code, we can try using some features like .NET Remoting in an unmanaged C++ environment. One more important thing I would like to convey to all is that we are not going to start any coding in C# or from scratch.
We've done this in steps (VC6 -> VS2005 -> VS2008 -> (soon) VS2010) and most issues were related to changes in the API.
Unsafe string operations (strcpy vs. strcpy_s) that give out a TON of warning messages (use the _CRT_SECURE_NO_WARNINGS preprocessor define to silence them if you do not want to fix them all)
Changes in the prototypes for message handlers (LRESULT return type, changes in WPARAM and LPARAM, ...)
Deprecated APIs (you'll find them soon enough; I think there's a page on MSDN about that)
The compiler might be a bit more strict with regard to standard C++.
It's hard to go to specifics...
but have a look at this blog entry for some more info : http://insidercoding.com/post/2008/08/20/Migrating-from-VC6-to-VC9.aspx
Good luck.
Max.
What you're asking for isn't really a migration -- it's a nearly complete rewrite, at least of the entire UI. Unless your "very big ... application" has a very small, simple UI, I'd sit back and think hard. Read Joel's Things You Should Never Do.
Bottom line: this is almost certainly a really bad idea. If you do it anyway, you need to start out aware that it will be quite time consuming.
If you decide to go ahead with this, you nearly need to do it incrementally. To do that, I'd probably start by finding discrete "pieces" of functionality in your existing code, and turning those pieces into ActiveX controls. Once you've gotten to the point that your main program is basically a fairly minimal framework that mostly instantiates and uses ActiveX controls, it becomes fairly easy to create a new framework in managed code that does roughly the same thing, delegating most of the real work to the existing ActiveX controls. Once you've done that, you can start to migrate individual controls to managed code as you see fit.
The point of doing things this way is to avoid having a long interval during which you're just writing new code that duplicates the old code, but can't do any updates for your customers. Nearly the only reasonable alternative I've seen work reasonably well is to have two separate development teams: one continues to work on and update the old code base at the same time as the second team starts the total rewrite from the ground up. If you have enough money, that approach can work out quite well.
Microsoft, for one example, has done things like this a number of times. Just for a couple of examples, years ago when there were rumors that Borland (then Microsoft's biggest competitor in programming language tools) was going to produce a TurboBASIC, Microsoft decided QuickBASIC (V2 at the time) really couldn't compete. They responded by setting up two teams: one to do as much upgrading as reasonable to QuickBASIC 2 to produce QuickBASIC 3. The second team did a total rewrite from the ground up, producing what became QuickBASIC 4.
Another example is Windows 95/98/... vs. Windows NT. They continued development of the existing Windows code base at the same time as they did a total rewrite from the ground up to produce Windows NT. Though they kept the two teams synchronized so UIs looked similar and such, the two were developed almost entirely separately from each other. Only after years of overlap between the two did they finally quit working on the old code base (and I sometimes wonder whether the crappiness of Windows Me wasn't at least partly intentional, to more or less force users to migrate to the NT code base).
Unless you can do that, however, nearly your only chance of success is to use an incremental approach.
As Jerry has mentioned, Joel talks about this in his blog post Things You Should Never Do.
Moreover there are other things to be considered with this conversion.
What would you do with the existing VC++ 6.0 code base?
Qualification time (different OS, SQL, etc.) needed to test the entire product after the changes are made.
How do you manage two code bases, with and without 64-bit support?
PS: Most importantly, by the time you fix and qualify your product in VS2010, I guess VS2012 will have been released :)

Should I go with C# / .Net OR C++ for my application?

I am working on a project which talks to SQL Server, and most of the back-end code is in C++.
This is an application which controls the flow of a few fluids while loading them into carriers. Some of the back-end modules, which talk to the controllers that in turn control the flow of fluids, are in C++. Since they have memory leaks and some other bugs, there has been an attempt to migrate them to .NET.
What I understand is that performance goes down when we use .NET for back-end modules. So my opinion was NOT to convert these back-end modules to .NET, but to fix the issues in C++ itself.
The code in question is an application which interacts with the firmware of the controllers. It basically sends some commands and gets responses from the controllers. This code does not have a UI, and the same code interacts with SQL as well, to update the data. Both are part of one exe.
.NET is believed to be good when performance does not need to be rigorous. It would have been suitable if new code had to be written, especially code involving UI design. Another school of thought is that .NET is good for higher layers, but not for the lower layers in a multi-tier architecture.
I would like to know opinions of others from different perspective. Some of the aspects to consider are:
speed
maintainability of code
migration related risks in future
etc.
Please comment from the angle of rewriting existing code. It will be a one-to-one C++-to-C# line conversion if we decide to go for it.
Quick Answer:
If you have capable C++ programmers who can use debuggers and understand their application domain and how to use the controllers, it would probably be easier to do a careful review and fix the memory bugs and issues. After all, time and effort has already been spent on the code, and unless it is trivial code, rewriting in C# could introduce new logic errors.
Questions for the OP:
The code in discussion is driver code which interacts with firmware of controllers. It basically takes some commands and gets response from controllers. This code does not have UI and the same driver code interacts with SQL as well to update the data
Are you talking about user-mode software that you have named the "driver", or are you talking about a kernel-mode device driver?
It would help if you could provide more information about these controllers running firmware that control fluid flow. Does the C++ back-end connect to the controllers through RS232 (Serial)? Ethernet? USB? TCP/IP? PCI?
If you're connecting to the controller hardware via TCP/IP or RS232 (Serial), C#/.NET is well equipped to handle the task. For anything else like USB, PCI, Ethernet, etc., you're going to need a device driver which has to be programmed in C or C++ depending on the requirements of the driver. Of course you can encapsulate the user-mode part that is in C++ or encapsulate direct calls to Win32, but it will add more development tasks to your project.
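If the link does turn out to be RS232, a minimal command/response exchange in C# is short. Note that the port settings and the "FLOW?" command below are invented placeholders for whatever your controllers actually speak:

    // Minimal sketch: send one command to a controller over a serial port and
    // read the reply. COM1, 9600 baud, and "FLOW?" are made-up placeholders.
    using System;
    using System.IO.Ports;

    class ControllerLink
    {
        static void Main()
        {
            using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
            {
                port.NewLine = "\r\n";    // depends on the firmware's framing
                port.ReadTimeout = 2000;  // fail fast instead of hanging forever
                port.Open();

                port.WriteLine("FLOW?");  // hypothetical query command
                try
                {
                    Console.WriteLine("Controller replied: " + port.ReadLine());
                }
                catch (TimeoutException)
                {
                    Console.WriteLine("No response from controller.");
                }
            }
        }
    }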
Apparently the only problem with existing C++ code is memory leaks.
That seems to me an insufficient reason to rewrite it all in C#.
Instead I'd suggest running memory leak detection software to find the leaks.
Every language is special in its own way; you should find out which language suits the scenario best.
Don't rewrite a whole program in a different language because of a few bugs -- there will just be different ones in the new product, and the QA cycle will have to be restarted. I'd fix the bugs in the C++ program. If the issue is memory management, I'd strongly look into std::auto_ptr or std::tr1::shared_ptr, which will automatically delete memory for you when finished. If that's not an option, I'm sure that running something through Valgrind, or even paying for a commercial memory checker, would be cheaper than rewriting the whole thing (in both time and money).
"language is special in its own way" man I need a hug, for real. Don't change languages because code is not written well...write better code and use resources available to you.

What can be done in VC++ (native) that can't be done with VC#?

What can be done in VC++ (native) that can't be done with VC#?
From what I can tell, the only thing worth using native VC++ for is when you need to manage memory yourself instead of relying on the CLR garbage collector, and I haven't seen a purpose in doing that either (but that's for another question, to be asked later).
Cross-platform development. Yes, Mono exists, and Java is somewhat more predictable at functioning EXACTLY the same on more platforms, but you can find a C/C++ compiler for just about any platform out there, which you can't with C#.
Also, linking into 3rd-party libraries: while I'm sure there's a way to leverage them in C#, in C++ you'll be able to take advantage of them without interop (marshaling, etc.).
Edit: one last thing: RELIABLE memory management. Yes, you can use Dispose() and try-finally, but there's nothing quite like KNOWING the memory is gone when it's popped off the stack. Through techniques like RAII, when you use well-constructed classes, you will KNOW when your classes release resources, rather than waiting around for the GC to happen.
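For contrast, C#'s nearest equivalent to RAII is the using statement: Dispose runs deterministically when the block exits, though the memory itself is still reclaimed later by the GC. A minimal sketch:

    // C#'s closest analogue to RAII: Dispose() runs deterministically when
    // the using block exits, even if an exception is thrown inside it.
    using System;

    class TempResource : IDisposable
    {
        public TempResource() { Console.WriteLine("resource acquired"); }
        public void Dispose() { Console.WriteLine("resource released"); }
    }

    class Program
    {
        static void Main()
        {
            using (var r = new TempResource())
            {
                Console.WriteLine("working with resource");
            } // Dispose() is guaranteed to run here
            Console.WriteLine("after block");
        }
    }

The difference from C++ is that only the resource release is deterministic; the object's memory is freed whenever the GC gets around to it.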
With P/Invoke there is very little that is impossible in .NET (most obviously device drivers).
There are also things where the advice is to not use .NET (e.g. shell extensions, which get loaded into any process that opens a file dialog [1]).
Finally there are things which will be much harder in .NET, if possible at all (e.g. creating a COM component that aggregates the FTM).
[1] This can create a problem if that process is already using a different version of .NET. This should be alleviated in the future, with .NET 4 having the ability to support side-by-side instances of the runtime.
I'm not sure if you're talking about language features or applications. My answer though is for applications / components.
Really there are only 2 things you cannot do in C# that you can do in C++.
You cannot use C#, or any other .Net language, to write a component for a system that only accepts native components
You cannot use C#, or any other .Net language, to alter certain properties of a CCW for which the CLR does not allow customization
The most notable item here is Device Drivers. This is a framework that only accepts native components and there is no way to plug in a managed component.
For everything else it's possible to do the same thing in C# as it is in C++. There are just a lot of cases where you simply don't want to and a native solution is better. It's possible for instance to manage and manipulate memory in C# via unsafe code or IntPtr. It's just not nearly as easy and generally there's no reason.
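To illustrate that last point, here is a minimal sketch of the unsafe-code route just mentioned; it must be compiled with the /unsafe switch:

    // Direct pointer use in C# (compile with /unsafe). 'fixed' pins the
    // array so the GC cannot move it while we hold a raw pointer to it.
    using System;

    class UnsafeDemo
    {
        static unsafe void Main()
        {
            int[] data = { 1, 2, 3, 4 };
            fixed (int* p = data)
            {
                for (int i = 0; i < data.Length; i++)
                    p[i] *= 10;   // write through the raw pointer
            }
            Console.WriteLine(string.Join(", ", data)); // 10, 20, 30, 40
        }
    }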
You can't write device drivers for one.
I think there are several important points:
You can do anything in C#/C++/Java/Python/Lisp or almost any other language; after all, they are all Turing complete ;)... The question is: does it suit your needs?
There is one big and extremely important limitation of C#... It runs on only a single platform: Windows... (Mono is still not mature enough.)
There are many applications where GC is just a waste of resources - applications that can't afford to throw away half of their memory until the next GC cycle: games, databases, video/audio processing, and many other mission-critical applications.
Real-time applications (again: games, video processing, and so on). A non-deterministic GC makes life much harder for them.
In fact, most desktop applications - web browsers, word processors, the desktop environment itself (like Windows Explorer, KDE, or GNOME) - are written in compiled languages with careful thinking about resources... Otherwise, they would just be terribly bloated applications.
Whereas writing shell extensions for Windows XP was possible in C#, it is next to impossible to write shell extensions for Vista and Windows 7. Shell extensions and namespace extensions (and anything else that uses the new properties system) (kind of) must be done in C++, unless you're into pain.
There are two obvious answers:
VC# can never run without the .NET framework. Native C++ can. That may be necessary in some areas (others have mentioned device drivers, but more common examples might simply be clients where the .NET framework is not installed. Perhaps you're distributing an application and you know not all of your customers are willing to install .NET, so your sales would go up if you made an app that just worked without the dependency on .NET. Or perhaps you're working on some mobile device where the couple of megabytes taken up by the .NET CF cannot be justified. Or shell extensions, where using .NET can cause nasty problems for the user.)
And VC# can never use C++ language features. Native C++ can. (Managed C++ can too, of course, but that's a different issue.) There are, believe it or not, things that can be done more conveniently or elegantly in C++. And they're only accessible if you're programming in C++.
System calls are no problem, however. P/Invoke lets you do those from C#, almost as easily as you could from C++.
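For instance, calling a Win32 API from C# takes little more than a DllImport declaration; this sketch calls the real user32 MessageBox:

    // P/Invoke: declare the native signature once, then call it like any
    // other static method.
    using System;
    using System.Runtime.InteropServices;

    class PInvokeDemo
    {
        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            MessageBox(IntPtr.Zero, "Hello from P/Invoke", "Demo", 0);
        }
    }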
inline assembler
You cannot use C++ libraries that export classes with P/Invoke (it only binds C-style exported functions, AFAIK).
Callbacks, however, do work with P/Invoke: you pass a delegate where the native code expects a function pointer.
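A standard illustration of such a callback, using the real Win32 EnumWindows API, which calls our delegate once per top-level window:

    // P/Invoke callback: the native EnumWindows invokes our delegate, which
    // the marshaler exposes to Windows as an ordinary function pointer.
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    class CallbackDemo
    {
        delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

        [DllImport("user32.dll")]
        static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

        static void Main()
        {
            EnumWindows((hWnd, lParam) =>
            {
                var title = new StringBuilder(256);
                if (GetWindowText(hWnd, title, title.Capacity) > 0)
                    Console.WriteLine(title);
                return true; // keep enumerating
            }, IntPtr.Zero);
        }
    }

(One real pitfall: if the native side stores the callback for later, you must keep the delegate alive yourself so the GC doesn't collect it.)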
Are C# in particular and .NET in general self-compiling yet (this is not a troll; I genuinely don't know)? If not, you can use VC++ to write C# and .NET, but you can't use C# to do the same job.
This is tongue-in-cheek, but it is also an answer to your question... you can screw things up much more severely in VC++ than you can in VC#. Not that you can't manage to screw things up severely in VC#, but in general, you can screw them up more easily and more thoroughly in VC++.
Again, kind of tongue-in-cheek, but also an answer to your question. Perhaps not what you were hoping for, but... :-)
There are also hard real-time applications. Any language with a GC cannot be used, just in case it decides to collect during a time-constrained part of the code. Java was notorious for not even allowing you to try (hence the EULA about not using it for software "intended for use in the design, construction, operation or maintenance of any nuclear facility").
(Yes, I know they've since made a modified version of Java for real-time systems.)
For example, it makes sense to use C++ if it's harder to translate the header files for existing libraries than it is to give up the existing managed libraries.
The Main difference is:
C++ is a core language with which you can build stand-alone programs. These programs communicate directly with the operating system and nothing else. C++ compilers exist for more or less all platforms (operating systems).
C# is a language that conforms to the CLS. A program written in C# cannot start without a CLI engine (.NET Framework, Mono, etc.). A program written in C# communicates with the .NET Framework AND with the operating system. You have a man in the middle. Like all service personnel, this man can help, but he will cause additional trouble. If you want to port, you have a different man in the middle, etc. CLI implementations do not exist for all platforms.
In my opinion, every additional framework is an additional source of problems.
Using SSE instructions seems to be one of these cases. Some .NET runtimes will use some SSE instructions, depending on your code. But in VC++, you can use the SSE intrinsics directly. So, if you're writing multimedia code, you'd probably want C++. (C++/CLI might work as well, presumably.)

What are the advantages of c# over, say, delphi/realbasic for windows applications [closed]

Has anyone ever written an application bigger than its .NET luggage?
People used to criticize VB6 for its 2 MB runtime, but it rarely dwarfed the app it accompanied.
Today, despite having Vista on my machine, I had to download 35 MB of the 3.5 framework and reboot, just to try out an app half that size.
When you factor in the decreased source-code security, I wonder why anyone would develop a Windows application in .NET rather than in a language that allows for building native executables.
What is superior about .NET that outweighs these drawbacks when it comes to writing applications to run on Windows?
PEOPLE: Please note that this was written in February, 2009, and what is said was appropriate at that time - yelling at me in late 2012 (3+ years later) is meaningless. :-)
Delphi has some considerable advantages for Win32. Not that .NET apps are inherently bad, but try:
running a .NET app (any version) on Win95/ME, where .NET doesn't exist (AFAIK)
distributing any small ( < 1.5 MB) .NET app (yes, floppy drives still exist)
providing any .NET app on a system that has no Internet access (yes, they exist)
distributing your .NET apps in countries without widespread high bandwidth
keep people from seeing your source code without spending a ton of dough (Reflection, anyone?)
Garbage collection in .NET might be really nice, but anyone who knows anything about programming can also handle manual allocation/deallocation of memory easily in Delphi, and GC is available with reference-counted interfaces. Isn't one of the things that brought all the non-programmers to proliferation the pseudo-GC of VB? IMO, GC is one of the things that makes .NET dangerous, in the same way VB was dangerous - it makes things too easy and allows people who really have no clue what they're doing to write software that ends up making a mess. (And, before I get flamed to death here: it's great for the people who do know what they're doing as well, and so was VB; I'm just not so sure that the advantage to the skilled outweighs the hazards to us from the unskilled.)
Delphi Prism (AKA RemObjects Oxygene, formerly Chrome) provides the GC version of Delphi that those who need it are looking for, along with ASP.NET and WPF/Silverlight/CE, with the readability (and lack of curly braces) of Delphi. For those (like me) for whom Unicode support isn't a major factor, Delphi 2007 provides ASP.NET and VCL.NET, as well as native Win32 support. And, at a place like the one where I work, where workstations are only upgraded (at a minimum) every three years, and where we just got rid of the last Win95 machine because it wasn't a priority to upgrade, the .NET framework is an issue. (Especially with company proxy requirements only allowing Internet access to a handful of people, limiting bandwidth and download capabilities, and proper non-admin accounts with no USB devices allowed, all still running across a Netware network - no such thing as Windows Update, and never a virus so far because nothing gets in.)
I work some in .NET languages (C#, Delphi Prism), but the bread and butter both full-time and on the side, comes from Win32 and Delphi.
Okay, I doubt this will persuade you as you don't want to be persuaded, but here's my view of the advantages of .NET over older technologies. I'm not going to claim that every advantage applies to every language you mentioned in the question, or that .NET is perfect, but:
A managed environment catches common errors earlier and gives new opportunities:
Strong typing avoids treating one type as another improperly
Garbage collection largely removes memory management concerns (not totally, I'll readily admit)
"Segmentation fault" usually translates to "NullReferenceException" - but in a much easier to debug manner!
No chance of buffer overruns (aside from the potential for CLR bugs, of course) - that immediately removes a big security concern
A declarative security model and a well-designed runtime allows code to be run under a variety of trust levels
JITting allows the CLR to take advantage of running on a 64 bit machine with no recompilation necessary (other than for some interop situations)
Future processor developments can also be targeted by the JITter, giving improvements with no work on the part of the developer (including no need to rebuild or distribute multiple versions).
Reflection allows for all kinds of things which are either impossible or hard in unmanaged environments
A modern object-oriented framework:
Generics with execution time knowledge (as opposed to type erasure in Java)
Reasonable threading support, with a new set of primitives (Parallel Extensions) coming in .NET 4.0
Internationalisation and Unicode support from the very start - just one string type to consider, for one thing :)
Windows Presentation Foundation provides a modern GUI framework, allowing for declarative design, good layout and animation support, etc.
Good support for interoperating with native libraries (P/Invoke is so much nicer than JNI, for example)
Exceptions are much more informative (and easier to deal with) than error codes
LINQ (in .NET 3.5) provides a lovely way of working with data in-process, as well as giving various options for working with databases, web services, LDAP, etc.
Delegates allow a somewhat-functional style of coding from VB and C#; this is better in C# 3.0 and VB9 due to lambda expressions (a short sketch appears later in this answer).
LINQ to XML is the nicest XML library I've used
Using Silverlight as an RIA framework allows you to share a lot of code between your lightweight client and other access methods
A mostly-good versioning story, including binding redirection, runtime preference etc
One framework targeted by multiple languages:
Simpler to share components than with (say) COM
Language choice can be driven by task and team experience. This will be particularly significant as of .NET 4.0: where a functional language is more appropriate, use F#; where a dynamic language is more appropriate, use IronRuby or IronPython; interoperate easily between all languages
Frankly, I just think C# is a much cleaner language than VB or C++. (I don't know Delphi and I've heard good things about it though - and hey, you can target .NET with Delphi now anyway.)
The upshot of most of this - and the soundbite, I guess - is that .NET allows faster development of more robust applications.
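As a small taste of the LINQ and lambda points above (a generic in-memory sketch, not tied to any particular data source):

    // Declarative, strongly typed in-process querying with LINQ and lambdas.
    using System;
    using System.Linq;

    class LinqDemo
    {
        static void Main()
        {
            var words = new[] { "delphi", "csharp", "realbasic", "cpp" };

            var query = words
                .Where(w => w.Length > 3)           // lambda as a predicate
                .OrderBy(w => w)
                .Select(w => w.ToUpperInvariant());

            foreach (var w in query)
                Console.WriteLine(w);               // CSHARP, DELPHI, REALBASIC
        }
    }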
To address the two specific issues you mentioned in the question:
If your customer is running Vista, they already have .NET 3.0. If they're running XP SP2 or SP3, they probably have at least .NET 2.0. Yes, you have to download a newer version of the framework if you want to use it, but I view that as a pretty small issue. I don't think it makes any sense to compare the size of your application with the size of the framework. Do you compare the size of your application with the size of the operating system, or the size of your IDE?
I don't view decompilation as nearly such a problem as most people. You really need to think about what you're afraid of:
If you're afraid of people copying your actual code, it's usually a lot easier to code from scratch if you're aware of the basic design. Bear in mind that a decompiler won't give local variable names (assuming you don't distribute your PDB) or comments. If your original source code is only as easy to understand as the decompiled version, you have bigger problems than piracy.
If you're afraid of people bypassing your licensing and pirating your code, you should bear in mind how much effort has gone into stopping people from pirating native applications - and how ineffective it's been.
A lot of the use of .NET is on the server or for internal applications - in neither of these cases is decompilation an issue.
I've written more on this topic in this article about decompilation and obfuscation.
To name a few:
Automatic memory management, garbage collection
Type safety
Bounds checking
Access to thousands of classes that you will not have to create
OK, first up: no one language/platform is ever going to be universally superior.
Specialization will always provide a better use case in certain areas but general purpose languages will be applicable to more domains.
Multi-paradigm languages will suffer from complex boundary cases between paradigms, e.g.:
Type inference in any functional language that also allows OOP, when presented with subclasses
The grammar of C++ is astonishingly complex; this has a direct effect on the abilities of its tool chain.
The complexities of co/contra variance coupled with generics in C# are very hard to understand.
Older languages have existing code bases that work; this is both a positive (experience, well tested, extensive supporting literature) and a negative (the resulting inertia against change, and multiple different ways of doing things, leading to confusion for new entrants).
The selection/use of both languages and platforms is, as with most things, a balancing of the pros and cons.
In the following lists, Delphi shares some of the same pros and cons, but differs on many too.
Potential negatives of .NET (if they are not an issue for you, they aren't negatives)
Yes, you need the runtime deployed (and installed), and it's big.
If you want a language with multiple inheritance, you're not going to get it.
The BCL collections library has some serious flaws
Not widely supported outside the MS universe (Mono is great, but it lags the official implementation significantly)
Potential patent/copyright encumbrance
JITted (ignoring NGen) start-up time is always going to be slower, and more memory will be needed.
There are more, but these are the highlights.
Potential positives (again, if they don't matter to you)
A universal GC: no reference counting that prevents certain data structures from being usable. I know of no widely used functional language without GC, and I can't think of a significant language from the last 10 years without at least optional GC. If you believe this is not a big deal, you would appear to be in a minority.
A large BCL (some parts not so good as others but it's very broad)
Vast numbers of languages (and a surprising number of paradigms) can be used and can interact with each other (I use C#, F#, and C++/CLI within one wider application, using each where it makes the most sense, but able to easily use aspects of one from another).
Full-featured introspection combined with declarative programming support. A wide variety of frameworks become much simpler and easier to use in this manner.
JITted - alterations in the underlying CPU architecture can be largely transparent, and sophisticated runtime optimizations not available to pre-compiled languages are possible (Java is currently doing rather better on this).
Memory-access safety
Fusion DLL loading and the GAC for system DLLs
Likewise, specifically for C#:
Con:
Syntax based on C underpinnings
(Pre-4.0) late binding solely via inheritance
More verbose than some imperative languages
Poor handling of complex embedded literals (regexes / XML / multi-line strings)
Variable capture within closures can be confusing
Nested generators are both cumbersome and perform appallingly
Pro:
Syntax based on C underpinnings
Much functional support through lambdas
Expressions, allowing compile-time validation of non-code areas such as LINQ to SQL
Strongly typed, but with some type inference to make this easier
If you really need it, unsafe is there for you
Interaction with existing C++ ABI code via P/Invoke is both simple and clear
A multicast event model built in
Lightweight runtime code generation (a sketch follows below)
The C underpinnings really are a pro and a con. It is understandable by a vast number of programmers (compared to Pascal-based style), but has a certain amount of cruft (switch statements being a clear example).
Strong/weak/static/dynamic type systems are a polarising debate, but it is certainly not contentious to say that, where the type system is more constraining, it should strive not to require excessive verbosity as a result; C# is certainly better than many in that regard.
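That "lightweight runtime code generation" bullet is easy to demonstrate: expression trees (one of several routes, alongside System.Reflection.Emit) let you build and compile a function at runtime. A minimal sketch:

    // Lightweight runtime code generation via expression trees: construct
    // (x, y) => x * y + 1 at runtime and compile it to a real delegate.
    using System;
    using System.Linq.Expressions;

    class CodegenDemo
    {
        static void Main()
        {
            var x = Expression.Parameter(typeof(int), "x");
            var y = Expression.Parameter(typeof(int), "y");
            var body = Expression.Add(Expression.Multiply(x, y),
                                      Expression.Constant(1));

            Func<int, int, int> f =
                Expression.Lambda<Func<int, int, int>>(body, x, y).Compile();

            Console.WriteLine(f(6, 7)); // 43
        }
    }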
For many internal line-of-business applications, a vast number of the .NET platform cons are absolutely immaterial (controlled deployment being a common and well-solved problem within corporations).
As such, using .NET (and this does largely mean C#; sorry, VB.NET guys) is a pretty obvious choice for new development within a Windows architecture.
The "simplicity" of developing complex(and simple) applications using it. A lot of basic stuff is already coded for you in the framework and you can just use it. And downloading 35mb file today is much easier than 2mb file 8-6 years ago.
There are a lot of reasons. I don't know much about RealBasic, but as far as Delphi goes:
Less widespread than .NET, with a smaller development community. Many of the Delphi resources on the net are ancient and outdated.
Until Delphi 2009, Delphi didn't have full Unicode support.
I don't know about Delphi 2009, but 2007 didn't have very good garbage collection. It had some sort of clunky reference counting that required some intervention on behalf of the developer. .NET has a much more advanced GC that does virtually everything for you.
.NET has a larger standard library and more up-to-date 3rd party libraries.
.NET languages like C# are arguably better, and certainly easier to understand for those new to the language.
There are a lot of supposed advantages cited by .NET developers here that shouldn't be in this comparison, simply because Delphi has them as well:
Type safety
Bounds checking
Access to thousands of classes (components) that you will not have to create
There are, however, some things in .NET that Delphi doesn't have out of the box, and only some of them can be added by libraries and your own code. To name a few:
Support for multiple languages working on top of the same runtime - allowing to choose the matching language for the problem (e.g. functional programming using F#)
Dynamic source code generation and compilation - this is something so alien to Delphi programmers that they probably don't even see how it could be useful [1]
Multicast events (see the sketch below)
Better multi-threading support (for example BackgroundWorker class, asynchronous delegates)
Seamless support for both 32 and 64 bit processes
Weak references
[1] If you don't know but are interested, check out the home page of Marc Clifton, especially the articles about declarative programming.
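To ground two of those bullets, here is what multicast events and weak references look like in C# with nothing but the base library (a minimal sketch):

    // Multicast events (several subscribers on one event) and weak
    // references, both out of the box in C#.
    using System;

    class Publisher
    {
        public event EventHandler Ping;   // events are multicast by default

        public void Raise()
        {
            EventHandler handler = Ping;  // copy to guard against unsubscribe races
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }

    class Demo
    {
        static void Main()
        {
            var pub = new Publisher();
            pub.Ping += (s, e) => Console.WriteLine("subscriber 1");
            pub.Ping += (s, e) => Console.WriteLine("subscriber 2");
            pub.Raise();                  // both handlers run, in subscription order

            // A WeakReference does not keep its target alive.
            var weak = new WeakReference(new object());
            GC.Collect();
            Console.WriteLine("target alive: " + weak.IsAlive); // typically False after a collect
        }
    }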
Edit: I'd like to respond to the comment by Mason Wheeler:
Re dynamic code: I know that there are solutions for embedding Pascal scripting in an application. There is, however, a distinct difference between making parts of your internal object hierarchy available to the scripting engine, and having the same compiler that is used for your code available at runtime as well. There are always differences between the Delphi compiler and the compiler of the scripting engine. Anyway, what you get with .NET goes far beyond anything that is available for Delphi. And the point is not whether one would be able to code similar infrastructure for Delphi; the point is that with .NET it's already there for you, when you need it.
Re multicast events: Exactly - there are ways to code it, but it's not part of Delphi / the VCL out of the box. That's what I was saying above.
Re weak references: You are sadly mistaken. Try to use interfaces in a non-trivial way, creating circular references along the way. Then you have to start using typecasts, and you will wish for weak references.
Well, the .NET Framework is shared by all .NET applications, so you have it only once on your machine, and 35 MB is nothing today (compare it to the size of your Vista installation). For your second .NET application you don't have to download it again.
For Windows app, .NET (using C# or whatever) gives you more direct access to the latest and greatest Windows features. It's also very well supported by Microsoft, has a huge community and lots of books written about it.
REALbasic (now Xojo) is for cross-platform apps. Using it just on Windows can sometimes be useful, but that would not be its strength (which is that it's amazingly easy to use).
I don't know much about Delphi.
From what I can see, REALbasic doesn't have much (if anything) in the way of object-relational tools, and probably wouldn't be as good a choice for n-tier, database-centric applications.
