I'm looking for all partitions of a set. For a set containing 12 elements it works, but for 13 I get an OutOfMemoryException.
I'm confident my algorithm is correct: it produces the right subsets and the right number of them for 1, 2, 3, ... 12 elements; only at 13 does the problem appear.
That's how many there are: the Bell numbers (see Wolfram's page on Stirling numbers of the second kind).
Is there any way to increase the available memory? Or, failing that, to write a method that returns a dynamically allocated number of out parameters?
I'm using the code from the second post:
Code of partitioning
If I understand correctly what you are trying to do:
You are trying to allocate roughly 630 MB on the large object heap (LOH). If you are running a 32-bit application there is no chance this will work, as .NET can use at most about 1.4 GB in total.
On a 64 bit process you should get further: .Net Why can't I get more than 11GB of allocated memory in a x64 process?
Hope that makes things a little clearer!
EDIT
What you are dealing with is called "Curse of Dimensionality"
http://en.wikipedia.org/wiki/Curse_of_dimensionality
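To see how fast the partition counts grow, here is a small sketch (my own, not from the linked post) that computes the Bell numbers B(n), i.e. the number of partitions of an n-element set, via the Bell triangle:

```csharp
using System;
using System.Numerics;

class BellNumbers
{
    // Computes the n-th Bell number (count of set partitions) with the
    // Bell triangle recurrence; BigInteger avoids integer overflow.
    public static BigInteger Bell(int n)
    {
        var row = new BigInteger[] { 1 };
        for (int i = 1; i <= n; i++)
        {
            var next = new BigInteger[i + 1];
            next[0] = row[i - 1];              // row starts with previous row's last entry
            for (int j = 1; j <= i; j++)
                next[j] = next[j - 1] + row[j - 1];
            row = next;
        }
        return row[0];
    }

    static void Main()
    {
        Console.WriteLine(Bell(12)); // 4213597
        Console.WriteLine(Bell(13)); // 27644437
    }
}
```

B(12) is about 4.2 million partitions while B(13) is about 27.6 million, so memory use jumps by a factor of roughly 6.5 between the two sizes, which matches the sudden failure at 13.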
Related
Normally, when initializing a collection such as a dictionary, if we know ahead of time how many key-value pairs it will hold, we can pass a capacity:
var dict = new Dictionary<string, int>(capacity);
Is it possible to set a capacity for the C# string intern pool? (I assume it is internally something like a dictionary.)
Edits:
I am profiling a Unity game.
There are about 20 MB of strings, roughly 400,000 of them.
After interning these strings, memory drops to about 7 MB and roughly 98,000 distinct strings remain, so I expect interning to reduce memory use.
The reason I use interning instead of a dictionary-like data structure is architectural: there is no easy way to inject such a structure.
The strings are all config data, so they stay alive for the lifetime of the game once it starts.
No, it is not possible to restrict the size of the intern pool.
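If capacity control matters more than using the built-in pool, one workaround is an application-level pool whose backing dictionary you can size up front and discard when no longer needed. A minimal sketch (StringPool is my own name, not a BCL type):

```csharp
using System;
using System.Collections.Concurrent;

// An application-level intern pool. Unlike string.Intern, you control
// its capacity and its lifetime: drop the pool and every canonical
// instance it holds becomes collectible again.
sealed class StringPool
{
    private readonly ConcurrentDictionary<string, string> _pool;

    public StringPool(int capacity)
    {
        _pool = new ConcurrentDictionary<string, string>(
            concurrencyLevel: Environment.ProcessorCount,
            capacity: capacity);
    }

    // Returns the canonical instance for value, adding it if absent.
    public string Intern(string value) => _pool.GetOrAdd(value, value);

    public int Count => _pool.Count;
}

class Demo
{
    static void Main()
    {
        var pool = new StringPool(capacity: 100_000);
        string a = pool.Intern(new string('x', 3));
        string b = pool.Intern(new string('x', 3));
        Console.WriteLine(object.ReferenceEquals(a, b)); // True
    }
}
```

Intern returns the same reference for equal strings, like string.Intern does, but the question's architectural constraint still applies: every string-producing call site has to be routed through the pool for the deduplication to happen.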
int[] age = new int[5];
After reading many posts and blogs, I am still not clear on why arrays are reference types. I know this question is old and has been asked plenty of times, but I still couldn't find a direct answer.
Known: since arrays are allocated on the heap, they are reference types.
Need to know: what is the reason for the language making arrays reference types?
Suppose age[0] = 5 and age[1] = 25. What would be the difficulty in storing these on the stack, or keeping a reference to the heap only when the element type is object? Why choose the heap, given that access to it is comparatively slower?
Why the heap, and not the stack as for structs?
Several reasons:
Value types are passed by value (as in copied). So if you called a method with a value-type array, you'd be copying the entire contents.
Stack memory is limited and primarily designed for quick access to values currently in use (as implied by the term stack). Putting a large object onto the stack would slow down lookups of other local state, because it would no longer all fit on the same cache line in the CPU cache.
Array contents are modifiable. So you'd have all the issues you have with mutable structs: you try to set a value, only to find that a copy, not the original, was modified.
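The first and third points can be demonstrated with a struct standing in for a hypothetical value-type array (AgePair is an illustrative type of my own, not anything from the BCL):

```csharp
using System;

// A struct wrapping a fixed pair of fields stands in for a
// "value-type array": assignment and parameter passing copy the whole thing.
struct AgePair
{
    public int First, Second;
}

class CopySemantics
{
    public static void Mutate(AgePair p)
    {
        p.First = 99;            // modifies the copy passed in, not the caller's value
    }

    public static void MutateArray(int[] a)
    {
        a[0] = 99;               // modifies the shared heap buffer
    }

    static void Main()
    {
        var ages = new AgePair { First = 5, Second = 25 };
        Mutate(ages);
        Console.WriteLine(ages.First);     // 5  - the original is untouched

        int[] heapAges = { 5, 25 };
        MutateArray(heapAges);
        Console.WriteLine(heapAges[0]);    // 99 - the reference shares the buffer
    }
}
```

If arrays were value types, the Mutate behavior is what every method call and assignment would give you, silently copying the contents each time.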
EDIT:
Yes, you can use stackalloc in unsafe code; that doesn't make it a good idea.
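For completeness, since C# 7.2 stackalloc can even be used without the unsafe keyword by assigning to a Span&lt;int&gt;; the buffer still dies when the method returns, which is exactly why arrays in general cannot live on the stack. A small sketch:

```csharp
using System;

class StackAllocDemo
{
    static void Main()
    {
        // Span<int> over stackalloc: the 5 ints live on the stack and are
        // gone when this method returns - no GC pressure, but the span
        // must not escape the method (the compiler enforces this).
        Span<int> age = stackalloc int[5];
        age[0] = 5;
        age[1] = 25;

        int sum = 0;
        foreach (int a in age)
            sum += a;
        Console.WriteLine(sum); // 30
    }
}
```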
I wanted to ask what you think would be the best way to denoise an array of coordinate points.
I've got something like on the bottom drawing and need to convert it into what's on the top drawing.
Thanks :)
Median filter
A typical way of doing signal noise removal is a median filter.
If you have a noisy signal f(x), you can get a denoised signal g(x) as follows:
g(x) = median { f(z) : z in R(x) }
where R(x) = [x - w/2, x + w/2] and w is some window width.
Example
Wikipedia has a concrete example.
Here's an example of denoising using a median filter. The first image is the source, the second image is the noisy version, the third image is the denoised version, and the fourth image is the difference between the source and the denoised versions. Notice that most of the error is near boundaries, whereas error in regions without sudden jumps is very low.
For a broader look at the topic, look at Noise reduction.
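Here is a minimal sliding-window median in C# (my own sketch of the technique above; for 2D coordinate points, apply it to the x and y series separately):

```csharp
using System;
using System.Linq;

class MedianFilter
{
    // Sliding-window median: for each sample i, take the median of the
    // window [i - w/2, i + w/2], clamped at the signal's edges.
    public static double[] Denoise(double[] f, int w)
    {
        var g = new double[f.Length];
        for (int i = 0; i < f.Length; i++)
        {
            int lo = Math.Max(0, i - w / 2);
            int hi = Math.Min(f.Length - 1, i + w / 2);
            var window = f.Skip(lo).Take(hi - lo + 1).OrderBy(x => x).ToArray();
            g[i] = window[window.Length / 2];   // middle element of the sorted window
        }
        return g;
    }

    static void Main()
    {
        // A step signal with one noise spike: the spike is rejected,
        // but the genuine step edge survives.
        double[] noisy = { 1, 1, 9, 1, 1, 5, 5, 5 };
        Console.WriteLine(string.Join(", ", Denoise(noisy, 3)));
        // 1, 1, 1, 1, 1, 5, 5, 5
    }
}
```

Note the behavior the answer describes: the isolated spike is removed while the sharp jump from 1 to 5 is preserved, which is why median filters beat simple averaging near boundaries.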
I have two tables:
**Users**
UserID | UserName | Password
**Task**
TaskId | Hours | UserID (Empty as of now)
I need to assign a UserID to each row of the Task table so that the users end up with an even share of the total hours. I have about 5,000 tasks in the database, and the Hours column values range from 1 to 30.
How can it be done with SQL Server query OR LINQ?
This sounds like the partition problem (see here), where you are trying to assign the tasks so the sum of the hours for each user is the same.
In some situations, the problem is easily solvable (for instance, if all tasks are 1 hour in length). In other situations, the problem has no solution (for instance, if there are more users than tasks). As a hint, when a problem has such extreme variations, it probably cannot be solved by using a SQL query.
Of course, you can represent the data in tables, and you can use a cursor over some query with complicated logic and call that SQL.
The Wikipedia page has descriptions of several different possible algorithms. Good luck.
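As a starting point in application code rather than SQL, one of the simple heuristics for this problem is greedy LPT (longest processing time): sort tasks by hours descending and always hand the next task to the least-loaded user. A sketch (the tuple shapes mirror the Task/Users columns; this is a heuristic, not an exact solver):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class TaskBalancer
{
    // Greedy LPT: biggest tasks first, each assigned to the user whose
    // running total of hours is currently lowest.
    public static Dictionary<int, int> Assign(
        (int TaskId, int Hours)[] tasks, int[] userIds)
    {
        var totals = userIds.ToDictionary(u => u, _ => 0);
        var assignment = new Dictionary<int, int>();   // TaskId -> UserId
        foreach (var t in tasks.OrderByDescending(t => t.Hours))
        {
            int user = totals.OrderBy(kv => kv.Value).First().Key;
            assignment[t.TaskId] = user;
            totals[user] += t.Hours;
        }
        return assignment;
    }

    static void Main()
    {
        var tasks = new[] { (1, 30), (2, 20), (3, 20), (4, 15), (5, 15) };
        var result = Assign(tasks, new[] { 10, 11 });
        // LPT gives totals 45 and 55 here; the optimal split would be
        // 50/50 (30+20 vs 20+15+15), illustrating that it is only a heuristic.
        foreach (var kv in result)
            Console.WriteLine($"Task {kv.Key} -> User {kv.Value}");
    }
}
```

The resulting TaskId-to-UserId pairs could then be written back to the Task table in one UPDATE per batch. For an exact split you would need one of the partition-problem algorithms from the Wikipedia page, such as dynamic programming over the hour sums.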
Can someone help me out with implementing this sequence of calculations in C#?
This problem essentially describes a CRC with a 24-bit polynomial.
You can solve the problem simply using shift and XOR operations and a 24-bit (or larger) variable; no bigint required.
Recommended introductory reading:
http://en.wikipedia.org/wiki/Cyclic_redundancy_check
http://www.mathpages.com/home/kmath458.htm
http://www.ross.net/crc/download/crc_v3.txt
I took the opportunity to dabble with this. Interpreting the equations in the context of a software implementation is tricky, because there are many ways in which the polynomials can be mapped to data structures in memory, and, I assume, you'll want the solution you produce to inter-operate seamlessly with other implementations.

In this context, it matters whether your byte ordering is MSB-first or LSB-first; it also matters whether you align bit-strings that aren't a multiple of 8 to the left or to the right.

It is worth noting that the polynomials are denoted in ascending powers of x. One might assume, because the leftmost bit in a byte has the maximum index, that the leftmost bit should correspond to the maximum power of x, but that is not the convention in use.
Essentially, there are two very different approaches to calculating CRCs using generator polynomials. The first, and least efficient, is to use arbitrary precision arithmetic and modulo - as the posted extract suggests. A faster approach involves successive application of the polynomial and exclusive-or.
An implementation in Pascal can be found here: http://jetvision.de/sbs/adsb/crc.htm; translation to C# should prove trivial.
A more direct approach might involve encoding the message and the generator polynomial as System.Numerics.BigInteger objects (using C#/.Net 4.0) and calculate the parity bits exactly as the text above suggests - by finding the message modulo the polynomial - simply using the "%" operator on suitably encoded BigIntegers. The only challenge here is in converting your message and parity bits to/from a format suitable for your application.
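A bit-at-a-time sketch of the shift-and-XOR approach described above (the generator 0xFFF409 is the Mode S / ADS-B polynomial used at the jetvision link; substitute whichever 24-bit generator your spec actually defines):

```csharp
using System;

class Crc24
{
    // 24-bit generator with the implicit x^24 term dropped.
    // 0xFFF409 is the Mode S / ADS-B polynomial; adjust for your spec.
    const uint Poly = 0xFFF409;

    // MSB-first, zero-initialized CRC of the message; mathematically this
    // is message(x) * x^24 mod generator(x).
    public static uint Compute(byte[] data)
    {
        uint crc = 0;
        foreach (byte b in data)
        {
            crc ^= (uint)b << 16;               // feed the next byte into the top 8 of 24 bits
            for (int i = 0; i < 8; i++)
            {
                bool top = (crc & 0x800000) != 0;
                crc = (crc << 1) & 0xFFFFFF;    // shift, staying within 24 bits
                if (top) crc ^= Poly;           // reduce when a bit falls off the top
            }
        }
        return crc;
    }

    static void Main()
    {
        uint crc = Compute(new byte[] { 0x8D, 0x4B, 0x1E, 0x9E });
        Console.WriteLine(crc.ToString("X6"));
    }
}
```

A handy sanity check for this zero-init, zero-xorout form: append the three CRC bytes to the message and recompute, and the result is 0, exactly the "message plus parity bits is divisible by the generator" property the quoted text describes.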