Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I want to write a C# application that compares the performance of two PCs and tells me which of them will perform a given task faster.
Is there an algorithm for doing this? For example, something like ((NumberOfProcessesCurrentlyRunning * AvailableRAM) + CPUUsage).
Assume that the two computers have the same computing and hardware power.
While I agree that there is no general-purpose algorithm for determining the overall performance of a computer, there are algorithms that scientists use to create more reliable benchmarks for their papers. So if you implement a new solution to the same problem, you can tell whether it is better than previously published ones, even though you are working on a different machine than the previous teams.
One example is the dfmax benchmark algorithm. It will quickly give you a rough idea of how fast the current machine is, but it won't take available RAM into account. Still, I think it could be a starting point for you.
No, there isn't a general algorithm to determine overall computer performance; too many different factors affect it.
You should first decide what to compare: memory access speed, CPU cache efficiency, GC efficiency, and so on, and then implement a benchmark function for each.
If you want a blind test and don't care about any specific metric, you can run the same function (e.g. a quicksort) on both machines, log the elapsed time, and compare the milliseconds.
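To make that blind test concrete, here is a minimal sketch, assuming a fixed-seed random array and a hand-rolled quicksort (both arbitrary choices for illustration); run the same program on each PC and compare the printed times:

    using System;
    using System.Diagnostics;

    class BlindBenchmark
    {
        static void Main()
        {
            // Fixed seed so every machine sorts the identical input.
            Random rng = new Random(42);
            int[] data = new int[5000000];
            for (int i = 0; i < data.Length; i++)
                data[i] = rng.Next();

            // Time the identical workload; compare milliseconds across PCs.
            Stopwatch sw = Stopwatch.StartNew();
            QuickSort(data, 0, data.Length - 1);
            sw.Stop();
            Console.WriteLine("Sorted {0} ints in {1} ms", data.Length, sw.ElapsedMilliseconds);
        }

        static void QuickSort(int[] a, int lo, int hi)
        {
            if (lo >= hi) return;
            int pivot = a[(lo + hi) / 2];
            int i = lo, j = hi;
            while (i <= j)
            {
                while (a[i] < pivot) i++;
                while (a[j] > pivot) j--;
                if (i <= j)
                {
                    int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                    i++; j--;
                }
            }
            QuickSort(a, lo, j);
            QuickSort(a, i, hi);
        }
    }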
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I want to set up an automatic memory-cleaning method for a desktop application, because it throws an "out of memory" error.
Is there any way to do this?
There is already an "automatic cleaning method": the GC. You should virtually never need to tell it what to do; it understands memory better than most people do. If your code is throwing OOM, you need to investigate why. For example:

- Are you leaking objects? (Static event handlers are notorious for this.)
- Are you asking for huge slabs of contiguous memory (huge arrays, etc.)?
- Are you asking for an array of more than 2 GiB (without large-array support enabled)?
- Are you running on 32-bit and simply using lots of memory?
- Is it actually not an OOM condition at all, but GDI+ handle exhaustion (which manifests the same way)?
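As a minimal sketch of the static event handler leak mentioned above (all type names here are hypothetical):

    using System;

    static class AppEvents
    {
        // A static event lives for the life of the process, so it roots
        // every subscriber that never unsubscribes.
        public static event EventHandler SomethingHappened;

        public static void Raise()
        {
            EventHandler handler = SomethingHappened;
            if (handler != null) handler(null, EventArgs.Empty);
        }
    }

    class HeavyView
    {
        private readonly byte[] _buffer = new byte[10 * 1024 * 1024]; // 10 MB

        public HeavyView()
        {
            // Subscribing an instance method to a static event keeps this
            // whole object (and its 10 MB buffer) alive indefinitely.
            AppEvents.SomethingHappened += OnSomethingHappened;
        }

        private void OnSomethingHappened(object sender, EventArgs e)
        {
            Console.WriteLine(_buffer.Length);
        }

        // The fix: unsubscribe when the view is closed/disposed.
        public void Close()
        {
            AppEvents.SomethingHappened -= OnSomethingHappened;
        }
    }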
The first thing to check is how much memory your process is using - and how much free memory the OS has - when it throws OOM. If there is plenty of free memory, it isn't actually OOM (unless you're using over 1 GiB on a 32-bit system, in which case all bets are off).
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Is there a case when a hashcode collision would be beneficial?
(Other than when the objects are identical, of course.)
EDIT: "beneficial" meaning calculating the hash code in fewer CPU cycles, or using less memory in the calculation.
I guess a clarification would be: if a certain GetHashCode() is 10 times faster but also causes twice as many collisions (for example), is it worth it?
'Beneficial' is a difficult term to quantify, especially in this case; it depends on your definition of beneficial.
If you're checking for object equality and they collide but the objects are not the same, then that would not be beneficial.
If you're building a hashmap, then you might have specific mechanisms built into your implementation to handle these cases. I'm fairly certain most (if not all) modern hashmap implementations do this.
You could also argue there are some fringe benefits: maybe you're a mathematician or a security researcher looking to show the strength (or lack thereof) of the algorithm used in GetHashCode(), or maybe you want to give an excellent proof-of-concept for why Microsoft should hire you for the .NET team.
Overall, your question is pretty vague. If there's something specific you're wondering, you should rethink/edit your question.
To answer your question, you first need to understand what a hash code is used for: it is a fast "pre-test" for checking the equality of two objects.
So is there a case where a collision is beneficial?
Yes. If generating a more unique hash code takes a relatively long time, the overhead of that generation can outweigh the benefit you get from having fewer collisions.
To address your latest edit: the only way to tell whether it is worth it is to try both methods in place, with your real data, and compare. An artificial head-to-head benchmark will not give you meaningful information, because things like hash code lookups depend too much on the data they are working with.
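A minimal sketch of that kind of in-place comparison, assuming a made-up key type with two interchangeable hash strategies (everything here is invented for illustration):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // Hypothetical key type whose hash strategy can be toggled for the experiment.
    struct Point : IEquatable<Point>
    {
        public static bool UseFastHash;
        public int X, Y;

        public bool Equals(Point other) { return X == other.X && Y == other.Y; }
        public override bool Equals(object obj) { return obj is Point && Equals((Point)obj); }

        public override int GetHashCode()
        {
            return UseFastHash
                ? X                          // very cheap, but all points in a column collide
                : unchecked(X * 397 ^ Y);    // slightly costlier, far fewer collisions
        }
    }

    class Program
    {
        static void Main()
        {
            Random rng = new Random(1);
            Point[] keys = new Point[1000000];
            for (int i = 0; i < keys.Length; i++)
                keys[i] = new Point { X = rng.Next(1000), Y = rng.Next(1000) };

            foreach (bool fast in new[] { true, false })
            {
                Point.UseFastHash = fast;
                Dictionary<Point, int> dict = new Dictionary<Point, int>();
                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < keys.Length; i++) dict[keys[i]] = i; // inserts
                int hits = 0;
                for (int i = 0; i < keys.Length; i++) if (dict.ContainsKey(keys[i])) hits++;
                sw.Stop();
                Console.WriteLine("UseFastHash={0}: {1} ms ({2} hits)", fast, sw.ElapsedMilliseconds, hits);
            }
        }
    }

Whichever variant wins on your real key distribution is the one worth keeping.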
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
In one of my projects, I want to be able to select a range of products that a customer might want to buy. The range of products should be selected based on the buying patterns of other customers.
I am currently thinking about neural networks, but I am not sure if that is the way to go. Since I am still learning about this, I am looking for good examples/tutorials.
So far, I have been thinking that a multi-layer feed-forward neural network would do the trick, but the articles usually talk about predicting a single value, while I am looking for a range of values. The idea I had is to use the error to calculate the range. Is that how it is done?
My other approach is more statistical, using probability.
Can anyone point me in the right direction, preferably with C# examples, as that is the chosen language for the project?
I can't point you at any tutorials. However, I used to work on the periphery of this area alongside people who were very experienced in this sort of thing, and their opinion was, overwhelmingly, that a probability-based approach is the more cost-effective of the two.
Machine learning of one kind or another is undoubtedly the more powerful technique, but it requires a comparatively colossal amount of time and effort, and the quality of the results may not prove to be worth the additional resources.
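For a flavour of the probability-based route, here is a minimal sketch of an item co-occurrence recommender ("customers who bought X also bought Y"); the class name and data shapes are invented for illustration:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical recommender: scores candidate products by how often they
    // co-occur with products the customer already owns. The raw counts
    // approximate P(also buys B | bought A) up to normalization.
    class CoOccurrenceRecommender
    {
        // (productA, productB) -> number of customers who bought both
        private readonly Dictionary<Tuple<int, int>, int> _pairCounts =
            new Dictionary<Tuple<int, int>, int>();

        public void Train(IEnumerable<int[]> customerHistories)
        {
            foreach (int[] history in customerHistories)
                for (int i = 0; i < history.Length; i++)
                    for (int j = 0; j < history.Length; j++)
                        if (i != j)
                        {
                            var key = Tuple.Create(history[i], history[j]);
                            int count;
                            _pairCounts.TryGetValue(key, out count);
                            _pairCounts[key] = count + 1;
                        }
        }

        public IEnumerable<int> Recommend(int[] owned, int topN)
        {
            var scores = new Dictionary<int, int>();
            foreach (var pair in _pairCounts)
            {
                int a = pair.Key.Item1, b = pair.Key.Item2;
                if (owned.Contains(a) && !owned.Contains(b))
                {
                    int score;
                    scores.TryGetValue(b, out score);
                    scores[b] = score + pair.Value;
                }
            }
            return scores.OrderByDescending(kv => kv.Value)
                         .Take(topN)
                         .Select(kv => kv.Key);
        }
    }

Training it on all past orders and calling Recommend with a customer's current basket yields the range of products, ranked by likelihood.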
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I've read a lot about the differences between the x86-x64, ARM, and ECMA memory models for C#. What is the real-world best practice: developing against the stronger x86-x64 model, or against the weaker ECMA model? Should I consider possible reordering, stale values, and safe publication for applications that run only on x86-x64 hardware?
The best practice is to write your code so that it is correct.
I choose to write correct code by not writing multithreaded shared memory code. That's what I would encourage you to do as well.
If you must write multithreaded shared memory code then I would recommend that you use high-level libraries such as the Task Parallel Library, rather than trying to understand the complexities of the memory model.
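For instance, a minimal sketch of that high-level route (ExpensiveWork is a placeholder for whatever per-item computation you need): no shared mutable fields, no volatile, no fences; the TPL inserts the necessary barriers when results are observed:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            int[] inputs = Enumerable.Range(0, 1000).ToArray();

            // Each task returns its own result; nothing is published
            // through shared memory by hand.
            Task<long>[] tasks = inputs
                .Select(i => Task.Run(() => ExpensiveWork(i)))
                .ToArray();

            Task.WaitAll(tasks);
            long total = tasks.Sum(t => t.Result);
            Console.WriteLine(total);
        }

        // Placeholder for the real per-item work.
        static long ExpensiveWork(int i)
        {
            return (long)i * i;
        }
    }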
If you want to write low-level shared memory multithreaded code that is correct only on strong memory models, well, I can't stop you, but that seems like an enormous amount of work to go to in order to create a program that has subtle bugs when you try to run it on ARM.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I have been told to build a process that inserts data for clients using multithreading.
I need to update a client database in a short period of time. There is an application that does the job, but it's single-threaded, and I need to make it multithreaded.
The idea is to insert the data in batches using the existing application. For example:
- process 50,000 records
- assign 5,000 records to each thread
The plan is to fire 10-20 threads, and possibly even multiple instances of the same application, to do the job.
Any ideas, suggestions, or examples of how to approach this? It's .NET 2.0, unfortunately. Are there any good examples you have come across, e.g. using ThreadPool? I'm reading up on multithreading in the meantime.
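To make the batching idea concrete, here is a minimal .NET 2.0-compatible sketch using ThreadPool; InsertRecord is a hypothetical stand-in for whatever the existing application does per record:

    using System;
    using System.Threading;

    class BatchRunner
    {
        // Hypothetical stand-in for the existing per-record insert logic.
        static void InsertRecord(int recordId) { }

        static void Main()
        {
            const int totalRecords = 50000;
            const int batchSize = 5000;
            int batchCount = totalRecords / batchSize;

            // .NET 2.0 has no CountdownEvent, so track pending batches manually.
            int pending = batchCount;
            ManualResetEvent allDone = new ManualResetEvent(false);

            for (int b = 0; b < batchCount; b++)
            {
                int start = b * batchSize; // fresh variable per iteration for the closure
                ThreadPool.QueueUserWorkItem(delegate
                {
                    for (int i = start; i < start + batchSize; i++)
                        InsertRecord(i);

                    if (Interlocked.Decrement(ref pending) == 0)
                        allDone.Set();
                });
            }

            allDone.WaitOne(); // block until every batch has finished
            Console.WriteLine("All batches inserted.");
        }
    }

Whether this actually speeds things up depends on where the bottleneck is.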
I'll bet dollars to donuts the problem is that the existing code just uses an absurdly inefficient algorithm. Making it multi-threaded won't help unless you fix the algorithm too. And if you fix the algorithm, it likely will not need to be multi-threaded. This doesn't sound like the type of problem that typically benefits from multi-threading itself.
The only possible scenario I could see where this matters is if latency to the database is an issue. But if it's on the same LAN or in the same datacenter, that won't be an issue.
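If it does turn out to be the insert strategy, one common fix (a sketch, assuming SQL Server and a DataTable-shaped load; the connection string and table name are hypothetical) is a single SqlBulkCopy pass instead of row-by-row INSERTs:

    using System.Data;
    using System.Data.SqlClient;

    class BulkInsert
    {
        // Hypothetical connection string.
        const string ConnStr = "Data Source=.;Initial Catalog=Clients;Integrated Security=true";

        static void Load(DataTable records)
        {
            using (SqlConnection conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "dbo.ClientRecords"; // hypothetical table
                    bulk.BatchSize = 5000;       // rows sent per round trip
                    bulk.WriteToServer(records); // one streamed load instead of 50,000 INSERTs
                }
            }
        }
    }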