I want to set up an automatic memory-cleaning method for a desktop application, because it throws an "out of memory" error.
Is there any way to do this?
There is already an "automatic cleaning method": the GC. You should virtually never need to tell it what to do - it understands memory better than most people do. If your code is throwing OOM, you need to investigate why; for example:
Are you leaking objects? (Static event handlers are notorious for this; see the sketch below.)
Are you asking for huge slabs of contiguous memory (huge arrays, etc.)?
Are you asking for an array of more than 2 GiB (without large array support enabled)?
Are you running on 32-bit and simply using lots of memory?
Is it actually not an OOM condition at all, but GDI+ handle exhaustion (which manifests in the same way)?
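As a hedged illustration of the first point, here is a minimal sketch of how a static event keeps subscribers alive; the `Publisher` and `Subscriber` names are made up for the example:

```csharp
using System;

static class Publisher
{
    // A static event lives for the lifetime of the process.
    public static event EventHandler SomethingHappened;

    public static void Raise()
    {
        EventHandler handler = SomethingHappened;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

class Subscriber : IDisposable
{
    public Subscriber()
    {
        // Subscribing makes the static event hold a reference to this instance,
        // so the GC cannot collect it even after the caller drops its reference.
        Publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        // Unsubscribing removes that reference and lets the GC reclaim the object.
        Publisher.SomethingHappened -= OnSomethingHappened;
    }
}
```

Forgetting the unsubscribe (or the Dispose call) in a loop that creates many subscribers is exactly the kind of leak that eventually surfaces as OOM.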
The first thing to check is how much memory your process is using - and how much free memory the OS has - when it throws OOM. If there is plenty of free memory, it isn't actually OOM (unless you're using over 1 GiB on a 32-bit system, in which case all bets are off).
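A minimal sketch of that first check, using the standard System.Diagnostics.Process API (the console output format is just for illustration):

```csharp
using System;
using System.Diagnostics;

class MemoryCheck
{
    static void Main()
    {
        using (Process current = Process.GetCurrentProcess())
        {
            // Private bytes: memory committed specifically to this process.
            long privateMb = current.PrivateMemorySize64 / (1024 * 1024);
            // Working set: physical memory currently in use by this process.
            long workingSetMb = current.WorkingSet64 / (1024 * 1024);

            Console.WriteLine("Private bytes: " + privateMb + " MB, working set: " + workingSetMb + " MB");

            // A 32-bit process has roughly 2 GiB of usable address space by default,
            // so OOM there is usually address-space exhaustion, not a lack of free RAM.
        }
    }
}
```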
I want to write a C# application that compares the performance of two PCs and determines which one will perform a task faster.
So is there an algorithm for doing this?
For example: ((NumberOfProcessesCurrentlyRunning * AvailableRAM) + CPUUsage).
Assume that we have two computers with the same computing and hardware power.
While I agree that there is no general-purpose algorithm to determine the overall performance of a computer, there are algorithms that scientists use to create more reliable benchmarks for their papers. So if you implement another solution to the same problem, you can tell whether it is better than previously published ones, even though you are working on a different machine than the previous teams.
One example is the benchmark algorithm dfmax. In a short time it will give you a rough idea of how fast the current machine is, though it won't take available RAM into account. Still, it could be a starting point for you.
No, there isn't an algorithm to determine overall computer performance. There are lots of things that can affect overall performance.
You should first decide what to compare - memory access, CPU cache efficiency, GC efficiency - and then implement a function for each of them.
If you need a blind test and don't care about any specific metric, you can run the same function (e.g. quicksort) on both machines, log the elapsed time, and compare the milliseconds.
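A minimal sketch of such a blind test, assuming a simple quicksort as the workload; the QuickSort function below is only the illustrative work being timed:

```csharp
using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        Random rng = new Random(42);          // fixed seed so both PCs sort identical data
        int[] data = new int[1000000];
        for (int i = 0; i < data.Length; i++) data[i] = rng.Next();

        Stopwatch sw = Stopwatch.StartNew();  // high-resolution timer
        QuickSort(data, 0, data.Length - 1);
        sw.Stop();

        Console.WriteLine("Sorted " + data.Length + " ints in " + sw.ElapsedMilliseconds + " ms");
    }

    static void QuickSort(int[] a, int lo, int hi)
    {
        if (lo >= hi) return;
        int pivot = a[(lo + hi) / 2];
        int i = lo, j = hi;
        while (i <= j)
        {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j)
            {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++; j--;
            }
        }
        QuickSort(a, lo, j);
        QuickSort(a, i, hi);
    }
}
```

Run the same build on both machines and compare the reported milliseconds; averaging several runs reduces noise from whatever else the OS is doing.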
I would like to know if there's a way to see what is in memory while debugging.
E.g. see which data tables are still active, where most of the memory is allocated, etc.
I know about the "Watch" feature in VS2013, but it's not really what I'm looking for; I want to know which objects I did not dispose of correctly, and going through it manually is hard! I'm after something like a memory overview. I have tried Process Explorer too, but it only shows how much the process is using in total at the moment; it is good for checking for a memory leak, however.
If something like this exists, is there a tutorial you can point me to?
ANTS Memory Profiler will allow you to analyze which objects are in memory when you expect them not to be, as well as let you investigate why that is.
I'm sure there are other, similar tools.
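As a side note on the "did not dispose of correctly" part of the question: independent of the tooling, the usual way to guarantee deterministic disposal is a using block. A minimal reminder sketch, with FileStream standing in for any IDisposable type:

```csharp
using System.IO;

class DisposalExample
{
    static void CopyFirstByte(string sourcePath, string targetPath)
    {
        // 'using' guarantees Dispose() is called even if an exception is thrown,
        // so the handles don't linger until the finalizer eventually runs.
        using (FileStream source = File.OpenRead(sourcePath))
        using (FileStream target = File.Create(targetPath))
        {
            int firstByte = source.ReadByte();
            if (firstByte >= 0)
            {
                target.WriteByte((byte)firstByte);
            }
        }
    }
}
```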
Just trying to understand the OutOfMemoryException in .NET.
Suppose I create an infinite while loop, and in the loop I create a new object that writes something to a file.
Will this application run out of memory? Will this cause an OutOfMemoryException?
An OutOfMemoryException is thrown whenever the application tries and fails to allocate memory to perform an operation. According to Microsoft's documentation, the following operations can potentially throw an OutOfMemoryException:
Boxing (i.e., wrapping a value type in an Object)
Creating an array
Creating an object
If you try to create an infinite number of objects, then it's pretty reasonable to assume that you're going to run out of memory sooner or later.
(Note: don't forget about the garbage collector. Depending on the lifetimes of the objects being created, it will delete some of them if it determines they're no longer in use.)
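A hedged sketch of the two scenarios described above - whether the loop runs out of memory depends on whether the created objects stay reachable (the file name and sizes are illustrative only):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class OomDemo
{
    static void Main()
    {
        // Scenario 1: the object becomes unreachable after each iteration,
        // so the GC can reclaim it and the loop does not exhaust memory.
        for (int i = 0; i < 1000; i++)               // capped here; the question's loop is infinite
        {
            string entry = new string('x', 1024);
            File.AppendAllText("log.txt", entry);     // nothing keeps 'entry' alive afterwards
        }

        // Scenario 2: every object stays reachable via the list, so nothing can be
        // collected and this loop eventually throws OutOfMemoryException.
        List<string> kept = new List<string>();
        try
        {
            while (true)
            {
                kept.Add(new string('x', 10000000));  // roughly 20 MB per string, never released
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("OutOfMemoryException after keeping " + kept.Count + " strings alive.");
        }
    }
}
```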
I've read a lot about differences between x86-x64, ARM, ECMA memory models for C#. What is the real world's best practice: developing according with stronger x86-x64 model or with weaker ECMA? Should I consider possible reordering, stale values, safe publishing for applications run only on x86-x64 hardware?
The best practice is to write your code so that it is correct.
I choose to write correct code by not writing multithreaded shared memory code. That's what I would encourage you to do as well.
If you must write multithreaded shared memory code then I would recommend that you use high-level libraries such as the Task Parallel Library, rather than trying to understand the complexities of the memory model.
If you want to write low-level shared memory multithreaded code that is correct only on strong memory models, well, I can't stop you, but that seems like an enormous amount of work to go to in order to create a program that has subtle bugs when you try to run it on ARM.
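To make "subtle bugs" concrete, here is a hedged sketch of the classic flag-publication pattern that happens to work on x86/x64 but is not guaranteed by the ECMA model; the class and field names are made up:

```csharp
class FlagPublication
{
    private int _value;
    private bool _ready;               // not volatile: reads and writes may be reordered or cached

    public void Publish()
    {
        _value = 42;
        _ready = true;                 // on a weak memory model this store may become visible
                                       // to other threads before the store to _value
    }

    public int Consume()
    {
        while (!_ready) { }            // may also spin forever if the JIT hoists the read out of the loop
        return _value;                 // may observe a stale 0 on ARM; declaring _ready as
                                       // 'volatile bool' addresses both problems
    }
}
```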
I have been told to make a process that inserts data for clients using multithreading.
I need to update a client database in a short period of time. There is an application that does the job, but it's single-threaded; I need to make it multithreaded.
The idea is to insert the data in batches using the existing application.
E.g.:
Process 50,000 records
Assign 5,000 records to each thread
The idea is to fire up 10-20 threads, or even multiple instances of the same application, to do the job.
Any ideas, suggestions, or examples of how to approach this?
It's .NET 2.0, unfortunately.
Are there any good examples of how to do this that you have come across, e.g. ThreadPool, etc.?
I'm reading up on multithreading in the meantime.
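A minimal sketch of the batching described above, using ThreadPool.QueueUserWorkItem, which is available in .NET 2.0; the Record type, LoadRecords, and the per-record insert are placeholders standing in for whatever the existing application does:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical record type standing in for the real client data.
class Record { public int Id; }

class BatchInserter
{
    const int BatchSize = 5000;

    static int _pendingBatches;
    static ManualResetEvent _allDone = new ManualResetEvent(false);

    static void Main()
    {
        List<Record> records = LoadRecords();          // e.g. 50,000 records from the source

        // Split the work into batches of 5,000 and queue each one on the thread pool.
        List<List<Record>> batches = new List<List<Record>>();
        for (int i = 0; i < records.Count; i += BatchSize)
        {
            int count = Math.Min(BatchSize, records.Count - i);
            batches.Add(records.GetRange(i, count));
        }

        _pendingBatches = batches.Count;
        foreach (List<Record> batch in batches)
        {
            ThreadPool.QueueUserWorkItem(InsertBatch, batch);
        }

        _allDone.WaitOne();                            // block until every batch has finished
        Console.WriteLine("All batches inserted.");
    }

    static void InsertBatch(object state)
    {
        List<Record> batch = (List<Record>)state;
        foreach (Record record in batch)
        {
            // Call into the existing single-threaded insert logic here.
        }

        // Signal completion when the last batch finishes.
        if (Interlocked.Decrement(ref _pendingBatches) == 0)
        {
            _allDone.Set();
        }
    }

    static List<Record> LoadRecords()
    {
        // Placeholder: produce 50,000 dummy records for the sketch.
        List<Record> list = new List<Record>(50000);
        for (int i = 0; i < 50000; i++) list.Add(new Record { Id = i });
        return list;
    }
}
```

Note that whether this actually helps depends on where the time goes, which is exactly the point of the answer below: parallel inserts only pay off if the database, not the algorithm, is the bottleneck.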
I'll bet dollars to donuts the problem is that the existing code just uses an absurdly inefficient algorithm. Making it multi-threaded won't help unless you fix the algorithm too. And if you fix the algorithm, it likely will not need to be multi-threaded. This doesn't sound like the type of problem that typically benefits from multi-threading itself.
The only possible scenario I could see where this matters is if latency to the database is an issue. But if it's on the same LAN or in the same datacenter, that won't be an issue.