I am about to start developing an edge detection system (once I've read through a couple of books, which I'm getting through at a good pace), but one thing I am wondering about is the speed of an app built with MATLAB (which can compile code to C++) versus one built with AForge.NET for edge detection.
Is unmanaged code generally faster?
Thanks
Unmanaged code can be faster for some things, but whether it will make a difference in your case is impossible to say without testing it. You could try coding a small, but representative, subset of the problem in both C# and C++ to measure the difference. Of course, making it "representative" (in terms of performance characteristics) is the challenge.
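For example, a minimal all-C# micro-benchmark along those lines might look like the sketch below (the buffer size, pass count, and the Sobel kernel itself are arbitrary stand-ins for your real workload, not anything prescribed); port the same loop to C++ and compare the timings:

```csharp
using System;
using System.Diagnostics;

class EdgeBench
{
    // Representative workload: a plain 3x3 Sobel pass over a grayscale buffer.
    static void SobelPass(byte[] src, byte[] dst, int width, int height)
    {
        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                int i = y * width + x;
                int gx = -src[i - width - 1] + src[i - width + 1]
                         - 2 * src[i - 1] + 2 * src[i + 1]
                         - src[i + width - 1] + src[i + width + 1];
                int gy = -src[i - width - 1] - 2 * src[i - width] - src[i - width + 1]
                         + src[i + width - 1] + 2 * src[i + width] + src[i + width + 1];
                int mag = Math.Abs(gx) + Math.Abs(gy);
                dst[i] = (byte)(mag > 255 ? 255 : mag);
            }
        }
    }

    static void Main()
    {
        const int w = 1024, h = 1024;
        var src = new byte[w * h];
        var dst = new byte[w * h];
        new Random(42).NextBytes(src);   // deterministic fake image

        var sw = Stopwatch.StartNew();
        for (int run = 0; run < 100; run++)
            SobelPass(src, dst, w, h);
        sw.Stop();
        Console.WriteLine("100 Sobel passes: " + sw.ElapsedMilliseconds + " ms");
    }
}
```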
Your first consideration should be to select the most productive language/environment from the set of languages and environments that provide acceptable performance. Productivity thoroughly trumps performance for the vast majority of computing problems.
Another contender, which is highly productive, is Python with numpy and scipy extensions. It can make use of LAPACK, which largely mitigates any performance concerns over using Python.
Firstly, yes, unmanaged code is generally faster. But how much faster? It depends; in your case, it shouldn't make much difference.
I can say this because a friend of mine recently developed an edge detection system (to implement a motion detection system, I believe) using the AForge.NET library, and I don't remember him complaining about performance.
I have to make a simulator. Its required functionality is described below:
Inputs: Unchangeable C source files
Function: Compile and run those files and produce outputs. Each C file may contain variables, and a time-based input function is given for each of those variables. So if an input C file has a simple adder, int add(a, b), where a and b change with time according to a specified function (e.g. at t=0, a=1 and b=2; at t=4, b may change to 6; and so on), I have to run the code at different times, generate the outputs, and check those outputs too. For all these purposes I would also need a GUI.
I need suggestions for both the backend and the GUI toolkit.
P.S.: My research tells me that a C backend with GTK for the GUI is a workable solution, but GTK is lengthy and tedious. I am unsure about Qt/C++, as it may not work all that well with a C backend. A C++ backend would make it difficult for me to import and run those input files (extern doesn't always work). I also looked at C# for the GUI and linking to DLL files, but many blogs describe that as not very feasible. Any suggestions? Thanks in advance.
P.P.S.: Please suggest only open-source tools with no licensing fees.
C++ can and does interface trivially with any C library or "code" out there, so that's not your concern.
I'd suggest using Qt. It comes with a nice, free IDE: Qt Creator. It offers high-level functionality that often surpasses GTK's. Granted, the GTK library ecosystem is very "hackable" -- the underlying object model allows all sorts of creative uses and abuses -- but it has, IMHO, a very shallow learning curve: it'll take you much longer to "fully" understand GTK than Qt.
That's simply because GTK's underlying low-level libraries offer a lot of functionality that's either taken over by the C++ language or simply hidden from view in Qt. I've found GTK's underpinnings to be more flexible, but it all seems a solution in search of a problem. Qt's/C++'s much simpler and less flexible object model works just as well and is easier to learn, because there's much less of it.
Think also in terms of investing your time. Learning to be proficient in C++ carries over to other projects that merely use the same language. Learning to be proficient in glib only helps you with C projects that specifically use glib, and I'd think there are far fewer of those than C++ projects.
Sometimes too much of a good thing is a bad thing, and that seems -- to me -- to be the case with GTK. It's a library collection that's bottom-heavy with low-level functionality, but seems to be light on higher-level abstractions. Never mind that the individual libraries are all designed with slightly different guiding principles and are essentially a loosely bound collection. That's good in theory for decoupling the components, but it makes learning much harder, as there's really little in the way of common design elements across the various libraries that make up GTK.
Qt, in contrast, is pretty much a single, uniform large library split into modules that you can elect not to use.
PS. Most people use "steep learning curve" incorrectly to mean that something is hard to learn. A learning curve shows "knowledge" vs. time. If it's steep, that means you are learning fast. A shallow one means you're learning slowly.
Why do Actionscript, Java, C#, etc. compile into intermediate code?
I'm aware of cross-platform benefits of using intermediate code.
The question is: What is the benefit of compiling to intermediate code, compared to scripts (JS, Python, PHP, Perl, etc.) that are interpreted?
Is it just for code obfuscation? Or what?
In addition what is the benefit compared to compiling to native code?
It is much faster to parse and JIT-compile IL code than to parse a high-level language like Java or especially C# (which has more features).
It also allows developers to use new language features without updating anything on the end-users' machines. (eg, LINQBridge)
First of all, historically Java targeted mobile phones and embedded devices with very limited CPU and memory, and the JVM on such devices can't do the kind of intensive optimization that modern JITs do now. So Java's engineers decided to move optimization to the compile phase.
Next, having "byte code"/IL allows you to have many compilers from different languages targeting that byte code, and only one JVM/JIT. That is much more efficient than creating a separate VM+JIT for each language. Remember, you have Java, JRuby, Jython, Groovy, etc. in the JVM world, and C#, VB.NET, F#, etc. in the .NET world, with only one runtime/VM for each.
Intermediate code is very similar to assembly in that it contains a limited set of instructions. The runtime is then able to reliably and consistently operate on this (somewhat) small set of instructions without having to worry about parsing the language, etc. It can therefore gain performance improvements and make optimizations.
In .NET, compilation to MSIL is also what enables interoperability between different languages such as C#, VB.NET, etc.
Binary code instructions are very simple and atomic, much more so than programming statements can be.
For example, z = a + b - (c * d) would need to be translated into loading four values into registers, then an add, multiply, and subtract, then write into another memory location. So we have at least 8 instructions or so just for that line!
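To make that concrete, here is roughly what such a line becomes in .NET. CIL is a stack machine rather than register-based, so values are pushed onto an evaluation stack instead of loaded into registers; the opcodes below are simplified, and the exact output depends on the compiler:

```csharp
int z = a + b - (c * d);

// compiles to stack-machine IL along these lines (simplified):
//   ldloc.0     // push a
//   ldloc.1     // push b
//   add         // pop two, push a + b
//   ldloc.2     // push c
//   ldloc.3     // push d
//   mul         // pop two, push c * d
//   sub         // pop two, push (a + b) - (c * d)
//   stloc.s z   // pop result into z
```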
Intermediate code is the best of both worlds: cross-platform, but also closer to the way machine code is normally written.
In addition to SLaks's answer, compiling to IL enables a degree of cross-language interoperability that generally doesn't exist in interpreted languages.
This advantage can be huge for new languages. Scala's only been around since 2003, and it's already gained an immense amount of traction. Ruby, on the other hand, hasn't spread very far beyond being used for Rails apps in its 1.5 decades of existence. This is at least in part because Scala is bytecode-compatible with all pre-existing Java code and libraries, which gives it a huge leg up: Its community can focus most of its effort on the language itself, and potential adopters don't have to worry about going through any special contortions (or, worse yet, replacing their entire codebase) in order to start using Scala. F#'s story is almost identical, but for the other major managed environment.
Meanwhile Ruby doesn't speak with code from other languages quite so easily, so its community has to sink a lot more effort into developing Ruby-specific libraries and frameworks, and its potential users have to be a lot more willing to commit to a large-scale platform shift in order to use it.
Good morning,
I am writing a spell checker which, in this case, is performance-critical. Since I am planning to connect to a DB and build the GUI in C#, I wrote an edit-distance calculation routine in C and compiled it to a DLL, which I call from C# using DllImport. The problem is that I think (though I may be wrong) that marshalling words one by one from String to char* is causing a lot of overhead. So I thought about using C++/CLI, so that I can work with the .NET String type directly... My question, then, is how C++/CLI performance compares to native C code for heavy mathematical calculations and array access.
Thank you very much.
C++/CLI will have to do some kind of marshaling too.
As with all performance problems, you should measure before you optimize. Are you sure C# is not going to be fast enough for your purposes? Don't underestimate the optimizations the JIT compiler will do, and don't assume a language implementation carries heavy overhead just because it's managed, without trying it. If it's not enough, have you considered unsafe C# code (with pointers) before trying unmanaged code?
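For instance, before touching interop at all, you could benchmark a purely managed edit distance; here is a classic two-row Levenshtein sketch (class and method names are mine, purely illustrative):

```csharp
using System;

static class SpellCheck
{
    // Classic two-row dynamic-programming Levenshtein distance, all managed.
    public static int EditDistance(string a, string b)
    {
        int[] prev = new int[b.Length + 1];
        int[] curr = new int[b.Length + 1];

        for (int j = 0; j <= b.Length; j++)
            prev[j] = j;                      // distance from the empty prefix of a

        for (int i = 1; i <= a.Length; i++)
        {
            curr[0] = i;
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                curr[j] = Math.Min(Math.Min(
                    curr[j - 1] + 1,          // insertion
                    prev[j] + 1),             // deletion
                    prev[j - 1] + cost);      // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.Length];
    }
}
```

If that turns out fast enough for your word lengths and query volume, the marshalling question becomes moot.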
Regarding the performance profile of C++/CLI, it really depends on the way it's used. If you compile to managed code (CIL) with /clr:pure, it's not going to be very different from C#. Native C++ functions in C++/CLI will have similar performance characteristics to plain C++. Passing objects between the native C++ and CLI environments will have some overhead.
I would not expect the bottleneck to be DllImport.
I have written programs that call DllImport-ed functions several hundred times per second, and it works just fine. You will pay a small performance penalty, but it is small.
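For reference, the kind of declaration being discussed looks something like this (the DLL and function names are hypothetical; the point is the per-call string copy noted in the comment):

```csharp
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hypothetical native routine: int edit_distance(const char *a, const char *b);
    // On every call the marshaller copies each System.String into a temporary
    // ANSI buffer -- that per-call copy is the overhead being discussed.
    [DllImport("editdist.dll", CharSet = CharSet.Ansi,
               CallingConvention = CallingConvention.Cdecl)]
    public static extern int edit_distance(string a, string b);
}
```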
Don't assume you know what needs to be optimized. Let sampling tell you.
I've done a couple of spelling correctors, and the way I did it (outlined here) was to organize the dictionary as a trie in memory and search on that.
If the number of words is large, the size of the trie can be much reduced by sharing common suffixes.
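A minimal sketch of that in-memory layout (a plain trie only; collapsing shared suffixes turns it into a DAWG, which would be a separate compression step):

```csharp
using System.Collections.Generic;

// Minimal trie for dictionary lookups.
class TrieNode
{
    public readonly Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public bool IsWord;
}

class Trie
{
    private readonly TrieNode root = new TrieNode();

    public void Insert(string word)
    {
        TrieNode node = root;
        foreach (char c in word)
        {
            TrieNode next;
            if (!node.Children.TryGetValue(c, out next))
            {
                next = new TrieNode();
                node.Children[c] = next;
            }
            node = next;
        }
        node.IsWord = true;
    }

    public bool Contains(string word)
    {
        TrieNode node = root;
        foreach (char c in word)
            if (!node.Children.TryGetValue(c, out node))
                return false;
        return node.IsWord;
    }
}
```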
I am thinking of making a simple game engine for my course final year project. I want it to be modular and expandable so that I can add new parts if I have time. For example I would make a graphics engine that would be completely independent of the other systems, once that was finished I could add a physics engine etc.
I would also want to make a tool set to go with this engine. For the tools I would like to use C#, but I am not sure about the libraries. My question is: if I want a C# GUI program, can I reference a library written in C++? Also, would there be any performance problems if I made some of the libraries in C# but wanted to use them from a C++ game?
I would like to avoid C++ as much as possible; my experience has shown that development time can be a lot higher than with C# or Java. My graphics development would be in OpenGL, as this is all I have been taught. We have only done this in C++, but I have seen that projects such as SharpGL allow development in C#. Are there any performance issues with this? I am not looking for a blindingly fast game with top-end graphics. It will most likely be something simple to show my engine working. My engine probably won't be that great either, as I only have a year and am working on my own.
Any advice on this would be appreciated. I am still really in the planning stages so it wouldn't be too much work to completely change what I want to do. I would just like to preempt any major problems I might have.
Thanks
If you can pull this off even under a relaxed set of goals, you're set for a great project. First, you need to get a grasp of scope:
How long do you have to work on the project?
How many people are working on the project?
If you can only create one "piece of the pie" on your own, which one would you pick? Use this to establish a working plan to make sure if you don't get as far as you'd like, you still have enough to make the work show as a great project.
A game engine is a big development task. A game engine with a toolchain is an enormous development task. In a lot of ways, choosing a smaller but more challenging task is preferable because it shows higher-level thinking about problem solving, which is greatly preferred by academics - double that if you are CS and not [Area of] Engineering. Since you are working in managed languages, things you may want to consider are:
Expressing gameplay logic (rules of the game) in a clean manner so as to provide an efficient and reliable path from designer->developer->tester. If you want, this could absolutely include the manner in which you describe the rules (Custom editor? Code API? DSL?)
Game AI has no shortage of extremely challenging problems.
Physics and graphics are interesting and I believe managed languages will eventually be used in these areas, but you may find yourself a bit more limited in your ability to solve these problems. If I were going to work in this area now, I would be trying to answer: "If I could use a managed language for writing [graphics|physics] code without hurting performance, what kinds of [language|runtime] features make the most difference in improving the expressiveness, correctness, maintainability, and reliability of the resulting programs." This goes way past simply having garbage collection and pointer safety.
You should look into XNA. It performs quite well, and from what I've heard, is quite easy to work with.
About referencing C++ code from C#: it's perfectly doable, though it will take some effort on your side to get it right. C++/CLI can work as an intermediate wrapper, or you can use P/Invoke. Just remember that C++ is unmanaged and that you will need to do manual memory management, which can feel somewhat icky coming from a managed environment like .NET. Performance in the C++ -> C# bridge is okayish, but I'm unsure how it will hold up if you need to make 100,000 calls to a math library each second. I guess a small test would be good.. I'll see if I get time to do this later today, though I somewhat doubt it :)
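In the spirit of that small test, a throwaway harness might look like this (the native library and its export are hypothetical; the idea is to time many tiny managed->native crossings against a managed equivalent):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class BridgeTest
{
    // Hypothetical native export: double dot3(double x1, double y1, double z1,
    //                                         double x2, double y2, double z2);
    [DllImport("mathlib.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern double dot3(double x1, double y1, double z1,
                              double x2, double y2, double z2);

    static double ManagedDot3(double x1, double y1, double z1,
                              double x2, double y2, double z2)
    {
        return x1 * x2 + y1 * y2 + z1 * z2;
    }

    static void Main()
    {
        const int calls = 100000;
        double acc = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < calls; i++)
            acc += dot3(1, 2, 3, 4, 5, 6);       // one managed->native crossing per call
        Console.WriteLine("P/Invoke: " + sw.ElapsedMilliseconds + " ms (" + acc + ")");

        sw.Restart();
        acc = 0;
        for (int i = 0; i < calls; i++)
            acc += ManagedDot3(1, 2, 3, 4, 5, 6);
        Console.WriteLine("managed:  " + sw.ElapsedMilliseconds + " ms (" + acc + ")");
    }
}
```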
When creating a graphics engine (I'll only really speak about graphics as that is my area of expertise), these are a few of the choices you need to make EARLY in the project:
1) Is it an indoor and/or outdoor engine?
2) What sort of visibility system are you going to use?
3) Forward or deferred renderer?
4) How will you handle animation hierarchies?
5) Dynamic or static lighting?
6) How are you going to handle transparency?
7) How will you handle a 2D overlay?
8) What mesh formats will you use? Roll your own?
Bear in mind you need a GOOD, solid vector/matrix library, and preferably a full maths library that will help with everything from Euler angles to quaternions to axis-aligned bounding boxes.
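As a taste of the smallest building block such a library needs (a rough sketch only; a real maths library adds matrices, quaternions, AABBs, and much more):

```csharp
using System;

// Minimal value-type vector: the kind of primitive everything else builds on.
struct Vector3
{
    public float X, Y, Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static float Dot(Vector3 a, Vector3 b)
    {
        return a.X * b.X + a.Y * b.Y + a.Z * b.Z;
    }

    public static Vector3 Cross(Vector3 a, Vector3 b)
    {
        return new Vector3(a.Y * b.Z - a.Z * b.Y,
                           a.Z * b.X - a.X * b.Z,
                           a.X * b.Y - a.Y * b.X);
    }

    public float Length()
    {
        return (float)Math.Sqrt(Dot(this, this));
    }
}
```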
Do a lot of research as well. Try to find out what potential problems you are going to face. Remember that things like changing a texture or a shader can have HUGE impacts on your pipeline. You need to minimise these as much as possible.
Bear in mind that getting a rotating bump-mapped mesh on the screen is simple compared to getting an engine running. Don't let this put you off. It should be no problem to write a fairly simple game rendering engine in a matter of months :)
Also ... find communities specific to this area. You will get a lot of good (though non-OpenGL-centric) information off DirectXDev. There is a fair bit of general game development algorithm information available at GDAlgorithms. There is also an OpenGL specific mailing list here.
It's worth noting that DirectXDev and GDAlgorithms, at least (I don't know so much about the GL mailing list), are populated by some VERY experienced 3D engine and game developers. Don't post lots of "beginner" questions, as this does tend to breed contempt among the members. Though the odd query or two at whatever level (beginner to advanced) will get amazing answers.
Good luck! I wish I'd had the chance to do this at University. I might not have gone off and joined the games industry and enjoyed an extra year of sleeping late ;) hehehe
Why do you need any C++ code?
There are already several wrapper libraries exposing OpenGL or DirectX to .NET, such as SlimDX (which, as the name implies, is a much thinner, more light-weight wrapper than something like XNA)
If you're more comfortable with C#, there's no reason why you couldn't write your entire game in that.
Performance generally won't be a problem. In most cases, the performance of C# code is comparable to C++. Sometimes it's faster, sometimes it's slower. But there are few cases where C# isn't fast enough. (However, there is a significant cost to crossing between native and .NET code, so doing it too often hurts performance. The trick, if you use native code at all, is to make each native operation big enough that the jump to/from .NET happens rarely.)
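To illustrate "sufficiently big native operations", compare these two hypothetical bindings (the DLL and functions are made up); the second crosses the boundary once per batch rather than once per element:

```csharp
using System.Runtime.InteropServices;

static class Native
{
    // One managed->native transition per vertex: the transition cost dominates.
    [DllImport("engine.dll")]
    public static extern void transform_vertex(ref float x, ref float y, ref float z);

    // One transition for the whole buffer: the native side loops internally,
    // so the transition cost is amortised across all vertices.
    [DllImport("engine.dll")]
    public static extern void transform_vertices(float[] xyz, int vertexCount);
}
```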
Apart from that, a word of advice: Don't bother writing an "engine".
You'll be wasting your time producing a big monolithic chunk of code which, ultimately, doesn't work, because it was never tested against the requirements of an actual game, only what you thought your future game would need.
If you want to experiment with game development, make a game. And then, by all means, refactor it and clean it up and try to extract reusable parts of the code. But if the code hasn't already been used in a game, you won't be able to use it to build a game in the future either.
The engines used in commercial games are just this, code extracted from previous games, code which has been tested, and which works.
By contrast, hobbyist engines pretty much always end up taking 2+ years of the developer's time, without ever offering anything usable.
The whole concept of a "game engine" is flawed. In every other field of software development, you'd frown at the idea of one vaguely-defined component doing basically "everything I need to make my product". You'd be especially suspicious of the idea that it is a separate entity that can be developed separate from the actual product it's supposed to support.
Only in game development, which is by and large stuck in 80's methodologies, is it a common approach. Even though it doesn't actually work.
If this is a school project, I'm sure whoever is supposed to grade it will appreciate it if you apply common good software practices to the field. Just because many newcomers to game development don't do that (and prefer to stick to some kind of myth about "engines"), there's no reason why you shouldn't do better.
Make sure your scope is feasible.
As a teenager I went through the build-my-own-engine phase. A few years later I realized I would have gotten a whole lot more done if I had just used pygame and pyopengl instead of wasting the effort.
Check out the forums at gamedev.net.
Sounds to me like you have a pretty good plan for your project. (I get your point about XNA in your comment to #Meeh)
You can interop with C++ via P/Invoke directly or via COM; I suppose you could also come up with some SOA way of doing it. But to be honest, as yucky as it sounds, I would be inclined to target COM as your API layer of choice. Why? Because then you open up your API to a lot of common client languages: not just C# and VB.NET, but also Delphi, VBA, PowerBuilder, etc.
Performance should not be a problem, as the API entry points are just there to kick off the work and transport data structures; the real work is done in your library in native code, so don't worry too much about the perf. ATL will be your friend for creating COM classes that provide entry points to your library.
While it's certainly doable to bridge between C++ and C# (managed C++ is a good way to go if the interface is complex; P/Invoke for a very simple one), for tool <-> engine communication I might instead suggest a network-based interface. This is ideal for a high-ish level interface, such as you might want for a level editor or model viewer. An actual object modeller is not such a good target. What tools do you envision?
If you do it this way you gain the ability to connect to remote instances of the engine, or multiple instances, or even instances running on different hardware platforms. It'll also teach you a bit about sockets if you don't know them already.
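A bare-bones sketch of the engine side of such a link (the port number and line protocol are made up for illustration); the tool side is just a TcpClient connecting to the same port:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

// Engine side: accept one tool connection and answer simple line-based commands.
class EngineListener
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5555);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        using (var reader = new StreamReader(client.GetStream()))
        using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // e.g. "load-level castle.lvl" or "reload-shader water"
                writer.WriteLine("ok: " + line);
            }
        }
        listener.Stop();
    }
}
```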
Recently I was talking with a friend of mine who had started a C++ class a couple months ago (his first exposure to programming). We got onto the topic of C# and .NET generally, and he made the point to me that he felt it was 'doomed' for all of the commonly-cited issues (low speed, breakable bytecode, etc). I agreed with him on all those issues, but I held back in saying it was doomed, only because I felt that, in time, languages like C# could instead become native code (if Microsoft so chose to change the implementation of .NET from a bytecode, JIT runtime environment to one which compiles directly to native code like your C++ program does).
My question is, am I out to lunch here? I mean, it may take a lot of work (and may break too many things), but there isn't some type of magical barrier which prevents C# code from being compiled natively (if one wanted to do it), right? There was a time where C++ was considered a very high-level language (which it still is, but not as much as in the past) yet now it's the bedrock (along with C) for Microsoft's native APIs. The idea that .NET could one day be on the same level as C++ in that respect seems only to be a matter of time and effort to me, not some fundamental flaw in the design of the language.
EDIT: I should add that if native compilation of .NET is possible, why does Microsoft choose not to go that route? Why have they chosen the JIT bytecode path?
Java uses bytecode. C#, while it uses IL as an intermediate step, has always compiled to native code. IL is never directly interpreted for execution as Java bytecode is. You can even pre-compile the IL before distribution, if you really want to (hint: performance is normally better in the long run if you don't).
The idea that C# is slow is laughable. Some of the winforms components are slow, but if you know what you're doing C# itself is a very speedy language. In this day and age it generally comes down to the algorithm anyway; language choice won't help you if you implement a bad bubble sort. If C# helps you use more efficient algorithms from a higher level (and in my experience it generally does) that will trump any of the other speed concerns.
Based on your edit, I also want to explain the (typical) compilation path again.
C# is compiled to IL. This IL is distributed to local machines. A user runs the program, and that program is then JIT-compiled to native code for that machine once. The next time the user runs the program on that machine they're running a fully-native app. There is also a JIT optimizer that can muddy things a bit, but that's the general picture.
The reason you do it this way is to allow individual machines to make compile-time optimizations appropriate to that machine. You end up with faster code on average than if you distributed the same fully-compiled app to everyone.
Regarding decompilation:
The first thing to note is that you can pre-compile to native code before distribution if you really want to. At this point you're close to the same level as if you had distributed a native app. However, that won't stop a determined individual.
It also largely misunderstands the economics at play. Yes, someone might perhaps reverse-engineer your work. But this assumes that all the value of the app is in the technology. It's very common for a programmer to over-value the code, and undervalue the execution of the product: interface design, marketing, connecting with users, and on-going innovation. If you do all of that right, a little extra competition will help you as much as it hurts by building up demand in your market. If you do it wrong, hiding your algorithm won't save you.
If you're more worried about your app showing up on warez sites, you're even more misguided. It'll show up there anyway. A much better strategy is to engage those users.
At the moment, the biggest impediment to adoption (imo) is that the framework redistributable has become mammoth in size. Hopefully they'll address that in a relatively near release.
Are you suggesting that the fact that C# is managed code is a design flaw??
C# can be natively compiled using a tool such as NGen, and the Mono (open-source .NET framework) team has developed full AOT (ahead-of-time) compilation, which allows C# to run on the iPhone. However, full compilation is cumbersome because it destroys cross-platform compatibility, and some machine-specific optimizations cannot be done. It is also important to note that .NET is not an interpreted language but a JIT (just-in-time) compiled one, which means it ends up running natively on the machine.
Dude, FYI, you can always compile your C# assemblies into a native image using ngen.exe.
And are you suggesting .NET is a flawed design? It was .NET that brought MS back into the game from their crappy VB5, VB6, COM days. It was one of their biggest bets.
Java does the same thing -- so are you suggesting Java too is a mistake?
Regarding big vendors: please note that .NET has been hugely successful across companies of all sizes (except for the open-source crowd -- nothing wrong with that). All these companies have made significant investments in the .NET framework.
And comparing C# speed with C++ is a crazy idea, in my view. Does C++ give you a managed environment along with a world-class, powerful framework?
And you can always obfuscate your assemblies if you are so paranoid about decompilation.
It's not about C++ vs. C#, managed vs. unmanaged. Both are equally good and equally powerful in their own domains.
C# could be natively compiled but it is unlikely the base class library will ever go there. On the flip side, I really don't see much advantage to moving beyond JIT.
It certainly could, but the real question is why? I mean, sure, it can be slow(er), but most of the time any major differences in performance come down to design problems (wrong algorithms, thread contention, hogging resources, etc.) rather than issues with the language. As for the "breakable" bytecode, it doesn't really seem to be a huge concern of most companies, considering adoption rates.
What it really comes down to is, what's the best tool for the job? For some, it's C++; for others, Java; for others, C#, or Python, or Erlang.
Doomed? Because of supposed performance issues?
How about comparing the price of:
programmer's hour
hardware components
If you have performance issues with an application, it's much cheaper to just buy better hardware than to give up the productivity of a higher-abstraction language by switching to a lower-level one (and I don't have anything against C++; I was a C++ developer for a long time).
How about comparing maintenance problems when trying to find memory leaks in C++ code compared to garbage-collected C# code?
"Hardware is Cheap, Programmers are Expensive": http://www.codinghorror.com/blog/archives/001198.html