I'm writing a TCP server for a game in C# using the TPL. I came across Node.js on the internet, and it seems to have much better performance and to be generally better suited to servers than .NET.
Is that true?
PS. I have to say I hate JS's scripting conventions, and I would be relieved if you said my C# server concept is not in danger.
Node.js will sometimes be faster than C#, and only sometimes much faster. For a game server it might be a good choice; then again, C# will be faster in some cases too.
You might as well continue your C# work.
Once you understand the difference between an event-based server and a threading server (the TPL is still threading, although it schedules threads with the machine's cores in mind), you will be able to estimate whether Node.js would be faster in your case. If you haven't yet, read http://nodejs.org/about/ for example.
If you want to know which one is faster, you will have to implement both and measure.
As cool as Node.js is (I have always preferred single-threaded solutions and often argued against thread fanatics), do not believe the hype that everything non-Node.js is bad and that Node.js is always "much faster". Understanding the architectural difference is a must-have for a developer, so dig into it.
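For context, here is a minimal sketch (not from the question) of what the event-based style looks like in C# itself: an async/await accept loop over sockets is non-blocking in the same spirit as Node.js while still running on .NET. The port number and the echo behaviour are placeholders for illustration only.

```csharp
// Minimal sketch (not production code): an asynchronous accept loop in C#.
// async/await over sockets gives a non-blocking, event-driven style similar
// in spirit to Node.js. Port 9000 and the echo logic are placeholders.
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class EchoServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();

        while (true)
        {
            // Await the next connection without blocking a thread.
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client);   // fire-and-forget per connection
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            var stream = client.GetStream();
            var buffer = new byte[4096];
            int read;
            // Echo whatever the client sends until it disconnects.
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await stream.WriteAsync(buffer, 0, read);
            }
        }
    }
}
```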
Of course, it depends on the application in question. If I am using, for example, a grammar checker to catch mistakes and make the code more readable, I don't think that is a bad practice (though tell me if it is).
But I am thinking about bigger extensions like ReSharper, which adds so much that I don't even know 95% of what it does.
My big question is: is it bad practice to use ReSharper or similar tools that I mostly don't understand (even though the few bits I do understand do help me), while I don't even know how most of the basic Visual Studio features work?
A productivity tool (like R# or others) is supposed to enhance your productivity.
That means you should be able to do your job, just do it faster (or cheaper or whatever other metric you use) with the tool.
If you catch yourself not being able to do the job without the tool, because you don't understand what the tool does or cannot replicate it without the tool, that is a problem.
Just keep in mind that a tool can vanish for any reason at any time. Your employer may not want to pay for it, may not like it, or may use a different product; or maybe the product no longer supports your preferred environment, or simply has bugs. You cannot tell an employer that you cannot do something because a $100 tool broke when you are paid $100K. It's acceptable that you take longer, but not that you have to give up.
I've read a lot about the differences between the x86-64, ARM, and ECMA memory models for C#. What is the real-world best practice: developing against the stronger x86-64 model or against the weaker ECMA model? Should I consider possible reordering, stale values, and safe publication for applications that run only on x86-64 hardware?
The best practice is to write your code so that it is correct.
I choose to write correct code by not writing multithreaded shared memory code. That's what I would encourage you to do as well.
If you must write multithreaded shared memory code then I would recommend that you use high-level libraries such as the Task Parallel Library, rather than trying to understand the complexities of the memory model.
If you want to write low-level shared memory multithreaded code that is correct only on strong memory models, well, I can't stop you, but that seems like an enormous amount of work to go to in order to create a program that has subtle bugs when you try to run it on ARM.
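To make that contrast concrete, here is a small hypothetical sketch (not from the answer): the first method hand-publishes a result through plain fields and depends on memory-model details, while the second lets the Task Parallel Library handle publication. The names and the workload are invented for illustration.

```csharp
// Sketch only: contrasts fragile low-level publication with a TPL approach.
using System;
using System.Threading;
using System.Threading.Tasks;

class Publishing
{
    // Fragile: relies on low-level memory-model details. Without volatile or
    // explicit barriers the JIT may hoist the _ready read out of the loop,
    // and on a weak memory model such as ARM the reader may also observe a
    // stale _report even after seeing _ready == true.
    static string _report;
    static bool _ready;

    static void LowLevelPublish()
    {
        new Thread(() => { _report = ComputeReport(); _ready = true; }).Start();
        while (!_ready) { /* spin */ }
        Console.WriteLine(_report);
    }

    // Robust: let the Task Parallel Library do the publication. Task.Run and
    // awaiting / reading Result include the necessary synchronization on
    // every architecture, so no memory-model reasoning is required here.
    static void TplPublish()
    {
        Task<string> work = Task.Run(() => ComputeReport());
        Console.WriteLine(work.Result);
    }

    static string ComputeReport() => "done";   // stand-in workload

    static void Main()
    {
        TplPublish();   // the TPL version is the one to prefer
    }
}
```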
Can anyone tell me which would be more efficient? A large program written in Visual C++ years ago is now intended to be moved to C#. What would be better: rewriting the whole Visual C++ code base in C#, or writing C++ DLLs to be used in the C# program via DllImport?
I guess it depends on how data-centric your code is. If you can easily separate out the functionality that does not require an interface, then you'd most likely be better off writing a DLL to expose that functionality and rewriting only the interface in C#.
If the program is rather interface-heavy and you do not want to go through separating out all of the data functions, then I'd just go ahead and rewrite the whole thing in C#, although I'd expect to lose some performance.
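For reference, a minimal sketch of what the DllImport route might look like. The DLL name and the exported function below are hypothetical stand-ins for whatever non-UI functionality you factor out of the existing Visual C++ code and export with extern "C" __declspec(dllexport).

```csharp
// Sketch of the P/Invoke approach. "LegacyEngine.dll" and ComputeChecksum
// are hypothetical placeholders for the real native exports.
using System;
using System.Runtime.InteropServices;

static class LegacyEngine
{
    [DllImport("LegacyEngine.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern int ComputeChecksum(byte[] data, int length);

    public static int Checksum(byte[] data) => ComputeChecksum(data, data.Length);
}

class Program
{
    static void Main()
    {
        byte[] payload = { 1, 2, 3 };
        // The C# interface layer calls into the unchanged C++ data layer.
        Console.WriteLine(LegacyEngine.Checksum(payload));
    }
}
```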
Visual C++ is still a very widely used language. Is this your only reason for wanting to move to C# (e.g. finding it hard to recruit people, or lacking the skills to continue development)?
There is only a single answer to this: "it depends". We cannot possibly know this; it's something you must decide.
Check what you need in terms of time and other resources for both. Check what benefit you gain from both. Weigh cost against benefit. Decide.
I'm currently developing an application where it's rather crucial to keep users from decompiling the code. Now, I'm aware that most .exes can be decompiled by an experienced programmer. However, my goal is simply to keep it safe from the "regular" user with basic programming knowledge.
I've come across several obfuscators, and the one I'm using right now is Confuser on CodePlex, which you can find here. Since I'm no obfuscation guru, nor a particularly experienced programmer, I'm asking whether you know anything about the safety of this obfuscator.
From my experience, Confuser is one of the hardest (free) obfuscators to reverse at the moment, at least with one-click tools.
Personally, I had a few issues with it: a few false positives when using the maximum settings, and a few cases where it left my .exes unable to run.
Keep in mind that even though it's a bit harder to reverse than other free alternatives, it's still very possible for someone to do so if they devote a bit of time.
Using an obfuscator will make it a lot harder to decompile the code, but it's still not safe.
The only way to keep your code safe is to keep it out of the hands of the users. You can put critical code in a web service and let the application call it. Unless the user can actually hack the server and get at the code, it's completely safe from decompiling.
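As a rough illustration of that idea, here is a hypothetical client-side sketch: the sensitive logic lives behind a web endpoint (the host and route below are made up), so only requests and responses ever reach the user's machine.

```csharp
// Sketch of the "keep critical code on the server" approach. The URL and the
// /api/license/validate endpoint are hypothetical; the point is that the
// validation algorithm never ships inside the .exe.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LicenseClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://example.com/")   // placeholder host
    };

    public static async Task<bool> IsLicenseValidAsync(string key)
    {
        // Only the request and response cross the wire; the logic behind the
        // endpoint stays on the server where it cannot be decompiled.
        HttpResponseMessage response =
            await Http.GetAsync($"api/license/validate?key={Uri.EscapeDataString(key)}");
        return response.IsSuccessStatusCode;
    }
}

class Program
{
    static async Task Main()
    {
        bool ok = await LicenseClient.IsLicenseValidAsync("DEMO-KEY");
        Console.WriteLine(ok ? "Licensed" : "Not licensed");
    }
}
```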
I am looking to make a data visualization tool that will visualize biological data. I am used to being a C# and .NET coder. However, as I understand it, you can run into trouble running a C# app on Ubuntu. Any suggestions for a language to use with these specifications in mind? I was thinking Java, but am happy to take suggestions.
C# is a solid choice, especially if you already know the language. C# and the .NET Framework have a solid cross-platform port in the Mono project, and you can create GNOME UIs using the Gtk# bindings.
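To give an idea of what that looks like, a minimal Gtk# sketch (assuming the gtk-sharp bindings that ship with Mono on most Linux distributions). It only opens an empty window; a real visualization tool would fill it with a DrawingArea or a charting widget.

```csharp
// Minimal Gtk# window, runnable under Mono with the gtk-sharp bindings.
using Gtk;

class VisualizerApp
{
    static void Main()
    {
        Application.Init();

        var window = new Window("Bio Data Visualizer");   // placeholder title
        window.SetDefaultSize(800, 600);
        window.DeleteEvent += (o, args) => Application.Quit();   // close cleanly
        window.ShowAll();

        Application.Run();
    }
}
```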
As an alternative, Java is used for a lot of bioinformatics applications. Though personally I have to say that most of those have horrible user interfaces and Java’s memory management seems ill-suited to deal with the data sizes that are common in bioinformatics – tools routinely run out of memory or become extremely slow. This isn’t necessarily an inherent problem of Java as much as sloppy programming, but Java certainly doesn’t help.
An alternative to Java would also be Python with a suitable GUI library (there are some good ones), especially since Python offers a much nicer, more polished syntax.
Yet another alternative that’s worthwhile especially if you’re really dealing with big data or if performance is important, would be C++ with Qt to build the GUI. Note that this will make development vastly more complicated if you’re not already proficient in C++.