I've read a lot about the differences between the x86/x64, ARM, and ECMA memory models for C#. What is the real-world best practice: developing against the stronger x86/x64 model, or against the weaker ECMA model? Should I consider possible reordering, stale values, and safe publication for applications that run only on x86/x64 hardware?
The best practice is to write your code so that it is correct.
I choose to write correct code by not writing multithreaded shared memory code. That's what I would encourage you to do as well.
If you must write multithreaded shared memory code then I would recommend that you use high-level libraries such as the Task Parallel Library, rather than trying to understand the complexities of the memory model.
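For example, a minimal sketch of letting the TPL handle the synchronization for you (the array and the computation are made-up placeholders):

    using System;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            // Parallel.For inserts the barriers needed to publish each
            // iteration's writes; no volatile fields or fences to reason about.
            long[] squares = new long[100];
            Parallel.For(0, squares.Length, i =>
            {
                squares[i] = (long)i * i;   // each iteration owns its own slot
            });
            Console.WriteLine(squares[99]);   // prints 9801
        }
    }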
If you want to write low-level shared memory multithreaded code that is correct only on strong memory models, well, I can't stop you, but that seems like an enormous amount of work to go to in order to create a program that has subtle bugs when you try to run it on ARM.
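To make the hazard concrete, here is a classic sketch of code that often appears to work on x86-64 but is broken under the ECMA model (the class and field names are made up for illustration):

    class Worker
    {
        // Without 'volatile', the ECMA model allows the JIT to cache _stop
        // in a register, so Work() may spin forever on ARM while appearing
        // to behave correctly on x86-64.
        private volatile bool _stop;

        public void Work()
        {
            while (!_stop)
            {
                // do one unit of work
            }
        }

        public void RequestStop()
        {
            _stop = true;
        }
    }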
I am engaged in a project that works mainly in AutoCAD to design and manufacture prefabricated building components such as roofing trusses. One of our goals is to redesign a program, written in LISP, that is used to design roofing trusses. We are to rewrite the LISP code in C# and incrementally fold it into the libraries the team has already set up.
My problem is that I have been tasked with building a rudimentary LISP-to-C# converter. After some research (Google results quickly show that no such tool readily exists), I have come to the question of which approach to converting this legacy code would be more efficient. Would it be better to take chunks of the LISP code, analyze them, and rewrite them in C#, or should I continue developing a rudimentary converter for the AutoLISP code?
You should take chunks of the LISP code and rewrite them in C#.
Even if it were less effort to write a general-purpose LISP interpreter in C# than to rewrite the LISP in C# (which is highly improbable), the LISP is probably running AutoCAD commands the way you would type them at the AutoCAD command line instead of doing things the ObjectARX way. So you would also need to convert those commands to use the ObjectARX API.
C# is a compiled object-oriented programming language whereas AutoLISP is an interpreted expression-oriented language. Therefore there is never going to be a really straightforward way of converting one to the other without a monumental effort.
It's worth noting that AutoLISP has the flexibility to be modified quickly without recompilation. The benefit of native in-process C# is that it is extremely fast compared with a similar LISP approach. I've found there's a nice middle ground that keeps the flexibility of LISP alongside the speed and power of C#: it leverages the LispFunction attribute and the ResultBuffer type in the AutoCAD .NET API.
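For illustration, a minimal sketch of that middle ground; the LISP-callable function name and its arguments are hypothetical, but the LispFunction/ResultBuffer pattern is the standard AutoCAD .NET mechanism:

    using Autodesk.AutoCAD.Runtime;
    using Autodesk.AutoCAD.DatabaseServices;

    public class TrussBridge
    {
        // Callable from AutoLISP as (truss-area 7.5 2.0).
        // "truss-area" and its arguments are made-up examples.
        [LispFunction("truss-area")]
        public static double TrussArea(ResultBuffer args)
        {
            TypedValue[] values = args.AsArray();
            // Assumes the LISP caller passes two reals; integers would
            // arrive boxed as Int32 and need their own handling.
            double width  = (double)values[0].Value;
            double height = (double)values[1].Value;
            return width * height;   // marshalled back to LISP as a real
        }
    }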
What's the main difference between using MonoGame with C# and SDL with C++?
Which of them is easier to use? Which is recommended for multi-platform support?
It's important for us to have structure and pervasive OOP. It should be performant, but not at the cost of productivity (e.g. not reinventing the wheel or managing memory). We are a small team, so we need a structured, simple, and clear framework that allows us to concentrate on the actual work.
C++ with SDL is native and can run on almost any platform (cross-platform), including those with limited system specifications.
C# with MonoGame is great for prototyping a concept, but you could run into unavoidable bottlenecks for large games. Additionally, SDL is just a graphics layer, whereas MonoGame is a complete API for interactive media. MonoGame could be cross-platform too, but I am unsure of its complete audience.
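To give a feel for what "complete API" means, a minimal MonoGame skeleton (the standard template, trimmed): the framework supplies the window, game loop, input, audio, and content pipeline, and you override Update and Draw:

    using Microsoft.Xna.Framework;

    public class MyGame : Game
    {
        private readonly GraphicsDeviceManager _graphics;

        public MyGame()
        {
            _graphics = new GraphicsDeviceManager(this);
        }

        protected override void Update(GameTime gameTime)
        {
            // game logic goes here
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // rendering goes here
            base.Draw(gameTime);
        }
    }

    // Entry point:  using (var game = new MyGame()) game.Run();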
Is MonoGame really cross-platform?
You will do a little work from scratch when using C++ with SDL, but there are many C++ game-development libraries out there that will make it a breeze. If productivity is a concern, you could have problems using C++, unless you use an existing framework for your game that handles memory management for you. That is the trade-off you take with C++: more efficient code, written over a longer time frame.
Irrlicht is a great library for rendering. Simple and clean.
http://irrlicht.sourceforge.net/
I'm writing a TCP server for a game in C# with the TPL. I came across Node.js on the internet, and it seems to have much better performance and to be generally better for servers than .NET.
Is that true?
PS: I have to say I hate JS's scripting conventions, and I would be relieved to hear that my C# server concept is not in danger.
Node.js will sometimes be faster than C#, and only sometimes much faster. For a game server it might be a good choice, but C# will be faster in some cases too.
You might just continue your C# work.
If you understand the difference between an event-based server and a threading server (the TPL is still threading, although it schedules tasks with the machine's cores in mind), you should be able to estimate whether Node.js would be faster in your case. If you haven't yet, read http://nodejs.org/about/ for example.
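To make the contrast concrete, a minimal sketch of an event-style server written in C# itself (the port number and echo behaviour are arbitrary choices, and async Main needs C# 7.1 or later): a handful of threads can serve many connections, much like Node.js's event loop:

    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class EchoServer
    {
        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000);
            listener.Start();
            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                _ = HandleAsync(client);   // fire-and-forget per connection
            }
        }

        static async Task HandleAsync(TcpClient client)
        {
            using (client)
            {
                NetworkStream stream = client.GetStream();
                var buffer = new byte[4096];
                int read;
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                    await stream.WriteAsync(buffer, 0, read);   // echo back
            }
        }
    }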
If you want to know which one is faster, you will have to implement both and measure.
As cool as Node.js is (and I have always preferred single-threaded solutions and often argued against thread fanatics), do not believe the hype that everything non-Node.js is bad and that Node.js is always much faster. Understanding the architectural difference is a must-have for a developer, so dig into it.
Can anyone tell me which would be more efficient? A large program that was written in Visual C++ years ago is now intended to be rewritten in C#. Would it be better to rewrite the whole Visual C++ codebase in C#, or to build C++ DLLs that the C# program uses via DllImport?
I guess it depends on how data-centric your code is. If you can easily separate out the functionality that does not require a user interface, then you'd most likely be better off putting that functionality in a DLL and rewriting the interface in C#.
If the program is rather interface-heavy and you do not want to go through separating out all of the data functions, then I'd just go ahead and rewrite the whole thing in C#, although I'd expect to lose some performance.
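For the DLL route, a minimal sketch of the P/Invoke side (the DLL name and the exported function are hypothetical): the computational core stays in C++ and the C# side just declares the entry points:

    using System;
    using System.Runtime.InteropServices;

    class NativeBridge
    {
        // Hypothetical export from the legacy C++ code, rebuilt as Legacy.dll:
        //   extern "C" __declspec(dllexport) double ComputeLoad(double span, int members);
        [DllImport("Legacy.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern double ComputeLoad(double span, int members);

        static void Main()
        {
            Console.WriteLine(ComputeLoad(12.5, 4));
        }
    }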
Visual C++ is still a very widely used language. Is this your only reason for wanting to move to C# (i.e. finding it hard to recruit people, or lacking the skills to continue development)?
There is only a single answer to this: "it depends". We cannot possibly know this; it's something you must decide.
Check what you need in terms of time and other resources for both options. Check what benefit you gain from each. Weigh cost against benefit. Decide.
I am looking to make a data-visualization tool for biological data. I am used to being a C# and .NET coder. However, as I understand it, you can run into trouble running a C# app on Ubuntu. Any suggestions for a language to use with these specifications in mind? I was thinking Java, but am happy to take suggestions.
C# is a solid choice, especially if you already know the language. C# and the .NET framework have a good cross-platform port in the Mono project, and you can create Gnome UIs using the Gtk# bindings.
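For instance, a minimal Gtk# sketch (the window title and size are placeholders) that runs on Ubuntu under Mono:

    using Gtk;

    class Visualizer
    {
        static void Main()
        {
            Application.Init();
            var window = new Window("Biological data viewer");   // placeholder title
            window.SetDefaultSize(800, 600);
            window.DeleteEvent += (o, args) => Application.Quit();
            window.ShowAll();
            Application.Run();
        }
    }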
As an alternative, Java is used for a lot of bioinformatics applications. Though personally I have to say that most of those have horrible user interfaces and Java’s memory management seems ill-suited to deal with the data sizes that are common in bioinformatics – tools routinely run out of memory or become extremely slow. This isn’t necessarily an inherent problem of Java as much as sloppy programming, but Java certainly doesn’t help.
Another alternative to Java is Python with a suitable GUI library (there are some good ones), especially since Python offers a much nicer, more polished syntax.
Yet another alternative, worthwhile especially if you're really dealing with big data or if performance is important, is C++ with Qt to build the GUI. Note that this will make development vastly more complicated if you're not already proficient in C++.