Did rewriting Roslyn in C# make it slower? [closed] - c#

I read this article: https://medium.com/microsoft-open-source-stories/how-microsoft-rewrote-its-c-compiler-in-c-and-made-it-open-source-4ebed5646f98
Since C# has a built-in garbage collector, is Roslyn slower than the previous compiler, which was written in C++? Did they perform any benchmarks?

Let me first address a question that you didn't explicitly ask but that underlies yours.
Question: Is explicit (manual) memory deallocation faster than implicit garbage collection?
Answer: As you may already know, C and C++ use explicit deallocation: free() (or delete) must be called for each block of memory allocated on the heap. C#, on the other hand, uses implicit garbage collection: heap memory is reclaimed in the background. The key point is that an implicit collector can defer work and reclaim many dead objects in one batch at opportune times, whereas explicit management always pays to deallocate each object individually (if done correctly). The .NET collector also makes allocation itself very cheap: it manages large blocks of memory obtained from the OS and allocates by simply bumping a pointer. So, in many situations, implicit garbage collection will actually perform better than explicit deallocation.
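To make the batching point concrete, here is a minimal C# sketch (the class name GcTimingDemo is made up for illustration); in C or C++, each of these allocations would need its own free()/delete call:

using System;

class GcTimingDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: false);

        // Allocate many short-lived objects. In C/C++ each one would need
        // an explicit free()/delete; here we simply drop the references.
        for (int i = 0; i < 100_000; i++)
        {
            _ = new byte[64];  // becomes garbage immediately
        }

        // The GC reclaims them in batches at a time of its choosing,
        // not with one deallocation call per object. Forced here only
        // so the demo prints a stable number.
        GC.Collect();
        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Before: {before:N0} bytes, after: {after:N0} bytes");
    }
}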
Answer to your question: Because I have not seen any benchmarks myself, it is almost impossible to say for sure whether one would be faster than the other. Many features other than garbage collection affect the speed of each implementation. To clarify, C# compiles to bytecode that is turned into machine code at run time by the JIT (Just-In-Time) compiler. If I had to guess, I would expect the C++ implementation to be faster, because JIT optimizations lag behind what a C++ optimizing compiler can do in some cases. Again, how fast these two languages perform will depend on the situation. For example, there are some optimizations the JIT can perform that are impossible for a C++ compiler (see the sketch below).
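One commonly cited example, as a hedged sketch (whether the branch is actually folded depends on the runtime version and JIT tiering): because the JIT compiles a method at run time, after static initializers have run, it can treat a static readonly field as a constant and delete dead branches, something an ahead-of-time C++ compiler cannot do for a value known only at run time.

using System;

class JitFoldingDemo
{
    // Initialized at run time, before UseFlag is JIT-compiled.
    static readonly bool Verbose = Environment.ProcessorCount > 0; // always true here

    static int UseFlag(int x)
    {
        // The JIT already knows Verbose is true when it compiles this
        // method, so it can fold the branch away entirely. A C++
        // compiler sees only the source, where the value is unknown.
        if (Verbose)
            return x * 2;
        return x;
    }

    static void Main() => Console.WriteLine(UseFlag(21)); // 42
}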

Related

C++ pointer vs C# pointer [closed]

I am an experienced C# developer, and I just started asking myself something.
I've read a lot about differences between C++ and C# in game development.
Almost everyone says that C++ is better in game dev because it can directly access memory through a pointer. As far as I know, C# can also use pointers if the unsafe keyword is used.
So that means C# can also directly access memory. Which raises the question: what is the difference between a C++ pointer and a C# pointer? Is one better than the other? If there are no differences, why would C++ be better than C#?
(I know from my own experience that I've had problems with the C# garbage collection, so I thought that this might be the reason C++ is preferred)
I'll say that the weak point of C# is that, given a void* pointer, you can't always cast it to a MyStruct* pointer, and you sure as hell can't cast it to a MyStruct[] or a byte[] (and the array type is one of the basic types of .NET, used pretty much everywhere). This makes interop quite difficult and slow, because you often have to first copy from a void* to a newly created MyStruct[] just to be able to use the data in .NET.
The alternative, clearly, is to work with pointers everywhere in C# (you can probably do it) and minimize the use of [] arrays, but the language isn't built for that. For example, the generics subsystem (List<T>) doesn't accept pointers (you can't have a List<int*>). You can use IntPtr instead... but then you have to cast it back to int* whenever you need an int*... it is a pain.
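A hedged sketch of what C# pointers look like in practice (MyStruct as in the answer above; requires compiling with unsafe code enabled):

using System;

struct MyStruct { public int A; public int B; }

class PointerDemo
{
    // Requires <AllowUnsafeBlocks>true</AllowUnsafeBlocks> in the project file.
    static unsafe void Main()
    {
        MyStruct[] managed = new MyStruct[4];

        // 'fixed' pins the array so the GC cannot move it while we hold
        // a raw pointer into it -- this is the direct memory access the
        // question asks about.
        fixed (MyStruct* p = managed)
        {
            void* raw = p;                     // lose the type...
            MyStruct* typed = (MyStruct*)raw;  // ...cast it back: fine
            typed[2].A = 42;                   // pointer indexing, as in C++
        }

        Console.WriteLine(managed[2].A);       // 42

        // What you cannot do is cast a pointer to a MyStruct[]: to get an
        // array back from unmanaged memory you must copy element by
        // element, which is the interop cost described above.
    }
}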

What is an F# tail call? Why is it a performance boost over C#? [closed]

This article spells out some reasons F#'s performance is occasionally better than C#'s. It says in its "Firstly" section that only F# generates tail calls.
What exactly does that mean? And why is it a performance boost? This one thing may actually make or break the choice between F# and C# for my chess app, which uses a ton of recursion.
Performance will depend more on the way you implement your program than on the language. F# may generate better IL for some things, while the C# compiler will be better for others. When choosing between the languages, you should consider other things besides performance.
If you're writing your chess program to learn F#, give it a try, it's an awesome language, just don't expect super blazing fast programs just because you're using a functional language.
Edit to answer the new question:
The F# compiler indeed generates IL containing the tail. opcode, whereas the C# compiler doesn't. That by itself doesn't make F# faster or more performant than C#, as noted in my original answer above, but it can indeed make a difference in your specific chess app, since you state that recursion is heavily used.
As a side note, the CLR's JIT may perform some simpler tail-call optimizations on its own at run time, so for simple functions in an x64 environment, even the IL generated by the C# compiler may end up with tail calls eliminated (see the sketch below).
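A hedged C# sketch of the shape of the problem (the method is made up for illustration; F# would compile the equivalent recursion with the tail. prefix so the call reuses one stack frame):

using System;

class TailCallDemo
{
    // Tail-recursive in shape: the recursive call is the last thing the
    // method does. F# would emit the IL 'tail.' prefix here, letting the
    // call reuse the current stack frame. The C# compiler does not, so
    // each call normally adds a stack frame (though, as noted above, the
    // x64 JIT may still eliminate simple tail calls on its own).
    static long CountDown(long n, long acc) =>
        n == 0 ? acc : CountDown(n - 1, acc + 1);

    static void Main()
    {
        Console.WriteLine(CountDown(1_000, 0));  // fine
        // With a large enough n, the C#-compiled version may throw
        // StackOverflowException where the F# version would not:
        // Console.WriteLine(CountDown(100_000_000, 0));
    }
}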

Structs on stack classes on heap [closed]

Everyone knows that structs are value types and classes are reference types, and that therefore structs are allocated on the stack while objects are allocated on the heap.
What I'd like to know is what is the implication of something being allocated on the stack as opposed to something being allocated on the heap?
One general implication of allocating memory on the stack is that it is reclaimed as soon as it goes out of scope (e.g., when the function/method returns). Heap memory can persist longer and is not tied to scope like that.
Update:
Another important item that I didn't mention is that heap memory must be managed by someone. Depending on the language, this can be the programmer (C, C++, etc.) or a garbage collector (Java, C#, etc.). If heap memory isn't cleaned up when it's done being used, you end up with memory leaks.
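A minimal C# sketch of the practical consequence (type names made up for illustration): struct assignment copies the value, while class assignment copies a reference to a heap object that the garbage collector must later reclaim:

using System;

struct PointStruct { public int X; }
class PointClass  { public int X; }

class StackHeapDemo
{
    static void Main()
    {
        // Value type: assignment copies the data itself.
        var s1 = new PointStruct { X = 1 };
        var s2 = s1;              // independent copy
        s2.X = 99;
        Console.WriteLine(s1.X);  // 1

        // Reference type: assignment copies the reference; the single
        // object lives on the GC heap until the collector reclaims it.
        var c1 = new PointClass { X = 1 };
        var c2 = c1;              // both variables point at one object
        c2.X = 99;
        Console.WriteLine(c1.X);  // 99
    }
}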

Disposable resources in library code [closed]

Imagine you're writing a library. Say this library is going to be used in a 24/7 server application. There are some unmanaged resources wrapped in your public API, so you implement the Disposable pattern (you may even implement finalizers).
Normally, you would use a using statement to free the unmanaged resources. But you are writing just a library, not a final application. What if another programmer forgets to call Dispose()? You are going to get a resource leak in your lib!
We could rely on finalizers, but there is no guarantee that a finalizer will ever be called.
So, is there a way to guarantee that the unmanaged resources will somehow be freed? Any ideas?
There is no solution except documenting your classes. Write explicitly in your documentation how your classes are meant to be used (i.e. they are meant to be disposed at the earliest possible time, possibly with using, or with an explicit call to Dispose).
You are no more responsible for memory leaks when your consumer does not properly dispose of its objects than manufacturers are responsible for pollution when people dump their trash in the wild.
You could hope that the server application has the code analysis rule CA2213 ("Disposable fields should be disposed") enabled.
Otherwise, I don't know of a way to guarantee that consumers call your Dispose() method.
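What you can do is make the finalizer a best-effort safety net, as in the standard dispose pattern. A minimal sketch (the NativeResourceWrapper name and its handle are made up for illustration):

using System;

public sealed class NativeResourceWrapper : IDisposable
{
    private IntPtr _handle = new IntPtr(1);  // stand-in for a real unmanaged handle
    private bool _disposed;

    public void Dispose()
    {
        Free();
        GC.SuppressFinalize(this);  // the finalizer is no longer needed
    }

    // The safety net if the caller forgets Dispose(). As noted above,
    // there is no guarantee about when (or, at process exit, whether)
    // it runs -- so it limits the damage, it doesn't remove the problem.
    ~NativeResourceWrapper() => Free();

    private void Free()
    {
        if (_disposed) return;
        _disposed = true;
        // Release _handle here, e.g. via the matching P/Invoke close call.
        _handle = IntPtr.Zero;
    }
}

In practice, wrapping the handle in a class derived from System.Runtime.InteropServices.SafeHandle gives you this safety net with better guarantees than a hand-written finalizer.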

Stack vs. Heap in .NET [closed]

In your actual programming experience, how did this knowledge of the STACK and the HEAP actually rescue you in real life? Any stories from the trenches? Or is this concept just good for filling up programming books and theory?
The distinction in .NET between the semantics of reference types and value types is a much more important concept to grasp.
Personally, I have never bothered thinking about the stack or heap in all my years of coding (just CLR based).
To me it is the difference between being a "developer/programmer" and a "craftsman". Anyone can learn to write code and watch things just "magically happen" without knowing why or how. To really be valuable at what you do, I think it is important to find out as much as you can about the framework you're using. Remember, it's not just a language; it's a framework that you leverage to create the best application you can.
I've analyzed many memory dumps over the years and found it extremely helpful knowing the internals and differences between the two. Most of these have been OutOfMemory conditions and unstable applications. This knowledge is absolutely necessary to use WinDbg when looking at dumps. When investigating a memory dump, knowing how memory is allocated between the kernel/user-mode process and the CLR can at least tell you where to begin your analysis.
For example, let's take an OOM case:
The allocated memory you see in the Heap Sizes, Working Set, Private Memory, Shared Memory, Virtual Memory, Committed Memory, Handles, and Threads can be a big indicator of where to start.
There are about eight different heaps that the CLR uses:
Loader Heap: contains CLR structures and the type system
High Frequency Heap: statics, MethodTables, FieldDescs, interface map
Low Frequency Heap: EEClass, ClassLoader and lookup tables
Stub Heap: stubs for CAS, COM wrappers, P/Invoke
Large Object Heap: memory allocations that require more than 85k bytes
GC Heap: user allocated heap memory private to the app
JIT Code Heap: memory allocated by mscoree.dll (the Execution Engine) and the JIT compiler for managed code
Process/Base Heap: interop/unmanaged allocations, native memory, etc
Finding what heap has high allocations can tell me if I have memory fragmentation, managed memory leaks, interop/unmanaged leaks, etc.
Knowing that each thread your app uses has 1 MB (on x86) / 4 MB (on x64) of stack space allocated reminds you that if you have 100 threads, you are using an additional 100 MB of virtual memory on x86.
I had a client whose Citrix servers were crashing with OutOfMemory problems, becoming unstable, and responding slowly when their app was running on them in multiple sessions. After looking at the dump (I didn't have access to the server), I saw that there were over 700 threads being used by that instance of the app! Knowing the per-thread stack allocation allowed me to correlate the OOMs with the high thread usage.
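If you really do need many threads, the per-thread stack reservation can be tuned at creation time. A hedged sketch (the numbers are for illustration only):

using System;
using System.Threading;

class StackSizeDemo
{
    static void Main()
    {
        // Each thread reserves its own stack (commonly 1 MB by default),
        // so 700 threads reserve on the order of 700 MB of virtual
        // address space before doing any useful work.
        for (int i = 0; i < 10; i++)
        {
            // The maxStackSize overload shrinks the reservation to 256 KB.
            var t = new Thread(() => Thread.Sleep(1000), maxStackSize: 256 * 1024);
            t.Start();
        }
    }
}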
In short, because of what I do for my "role", it is invaluable knowledge to have. Of course even if you're not debugging memory dumps it never hurts either!
It certainly is helpful to understand the distinction when one is building compilers.
Here are a few articles I've written about how various issues in memory management impact the design and implementation of the C# language and the CLR:
http://blogs.msdn.com/ericlippert/archive/tags/Memory+Management/default.aspx
I don't think it matters if you're just building average business applications, which I think most .NET programmers are.
The books I've seen just mention stack and heap in passing as if memorizing this fact is something of monumental importance.
Personally, this is one of the very few technical questions that I ask every person I'm going to hire.
I feel that it is critical to understanding how to use the .NET framework (and most other languages). I never hire somebody who doesn't have a clear understanding of memory usage on the stack vs. the heap.
Without understanding this, it's almost impossible to understand the garbage collector, understand .NET performance characteristics, and many other critical development issues.
The important distinction is between reference types and value types. It's not true that "value types go on the stack, reference types go on the heap". Jon Skeet has written about this and so has Eric Lippert.
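A quick sketch of why the slogan fails (type names made up for illustration): a struct stored in a class field lives on the heap, and boxing copies a struct to the heap too.

using System;

struct Point { public int X; }

class Holder
{
    // A value-type field inside a reference type: this Point lives
    // inside the Holder object, on the GC heap -- not on the stack.
    public Point Position;
}

class SloganDemo
{
    static void Main()
    {
        var h = new Holder();        // Holder, including its Point, is heap-allocated
        object boxed = new Point();  // boxing also copies a Point onto the heap
        Console.WriteLine(h.Position.X + ", " + boxed);
    }
}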
We had a Claim Entity (business object) which contained data for an entire claim. One of the requirements of the application was to create an audit trail of every single value changed by the user. In order to do this without hitting the database twice, we would maintain the original claim entity in the form alongside a working claim entity. The working claim entity would get updated when the user clicked Save, and we would then compare the original claim entity's properties with the corresponding working claim entity's properties to determine what had changed. One day we noticed, hey, our compare method is never finding a difference. This is where my understanding of the stack and the heap saved my rear end (specifically, value types vs. reference types). Because we needed to maintain two copies of the same object in memory, the developer simply declared two variables:
Dim originalClaim As ClaimBE
Dim workingClaim As ClaimBE
then called the business-layer method to return the claim object and assigned the same ClaimBE instance to both variables:
originalClaim = BLL.GetClaim()
workingClaim = originalClaim
hence two reference variables pointing to the same object. Nightmare averted.
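A minimal C# reconstruction of the bug and the fix (ClaimBE reduced to one field for illustration):

using System;

class ClaimBE
{
    public decimal Amount;
    public ClaimBE Clone() => (ClaimBE)MemberwiseClone();  // shallow copy is enough here
}

class AuditDemo
{
    static void Main()
    {
        var original = new ClaimBE { Amount = 100m };

        var workingAliased = original;       // the bug: two variables, one object
        workingAliased.Amount = 250m;
        Console.WriteLine(original.Amount);  // 250 -- compare never finds a difference

        original.Amount = 100m;
        var workingCopy = original.Clone();  // the fix: a genuine second object
        workingCopy.Amount = 250m;
        Console.WriteLine(original.Amount);  // 100 -- compare now works
    }
}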
