Extreme Memory Conditions Testing: How to saturate RAM? - C#

I would like to write a small program that launches threads and consumes available RAM in a roughly linear fashion, up to a certain level, and then stops (ideally, it pauses until "enough" memory has been freed, continues creating threads after that, and so on.)
I tried the following, but the list.Add(new byte[]) requires contiguous RAM space and throws an OutOfMemoryException, which is NOT what I am trying to simulate.
EDIT:
I have a multi-threaded, memory-hungry application that eats up many GBs of RAM. All I want is to isolate/reproduce that situation in "lab conditions" so I can tackle it, i.e. write a draft of an adaptive memory monitor / thread limiter. I am using an x64 OS and an x64 platform target.
To make it clear: the result I want to see is the Task Manager memory graph climbing steadily because of the program.
// MAX_MEM_LEVEL is a constant defined elsewhere in my program,
// and D() is just a shorthand for Console.WriteLine.
static void Main(string[] args)
{
    ComputerInfo ci = new ComputerInfo(); // from Microsoft.VisualBasic.Devices
    D("TOTAL PHYSICAL MEMORY : " + Math.Round(ci.TotalPhysicalMemory / Math.Pow(10, 9), 3) + " GB");

    //########### Fill Memory ###############
    var list = new List<byte[]>();
    Thread FillMem = new Thread(delegate ()
    {
        while (Process.GetCurrentProcess().PrivateMemorySize64 < MAX_MEM_LEVEL)
        {
            list.Add(new byte[1024 * 10000]); // <- I need to change this
            Thread.Sleep(100);
        }
    });
    FillMem.Start();

    //########### Show used Memory ###############
    Thread MonitorMem = new Thread(delegate ()
    {
        while (true)
        {
            D("PROCESS MEMORY : " + Math.Round(Process.GetCurrentProcess().PrivateMemorySize64 / Math.Pow(10, 6), 3) + " MB");
            Thread.Sleep(1000);
        }
    });
    MonitorMem.Start();

    Console.Read();
}

The question is still quite confusing; it is not clear to me what you are trying to do here and why.
If you genuinely want to consume physical memory -- that is, tell the operating system "no, really, do not use this portion of the physical RAM chip that is installed in the machine for anything other than what I say" -- then I would probably use the aptly-named AllocateUserPhysicalPages function from unmanaged code.
That will then reduce the amount of physical memory that is available for other uses, forcing more virtual memory pages to go out to the page file.
Other than making all the programs running on your machine a whole lot slower, I'm not sure what you intend to accomplish by this. Can you clarify?
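For reference, a minimal P/Invoke sketch of that call - a sketch under assumptions, not production code: the process needs the "Lock pages in memory" right (SeLockMemoryPrivilege), and actually using the pages would additionally require VirtualAlloc with MEM_RESERVE | MEM_PHYSICAL plus MapUserPhysicalPages, which I omit here.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class Awe
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AllocateUserPhysicalPages(
        IntPtr hProcess,             // handle of the process that receives the pages
        ref UIntPtr numberOfPages,   // in: pages requested; out: pages actually granted
        UIntPtr[] pageFrameNumbers); // receives the physical page frame numbers

    public static void GrabPhysicalPages(uint pageCount)
    {
        var frames = new UIntPtr[pageCount];
        var granted = (UIntPtr)pageCount;
        if (!AllocateUserPhysicalPages(Process.GetCurrentProcess().Handle, ref granted, frames))
            throw new System.ComponentModel.Win32Exception(); // wraps GetLastError
        Console.WriteLine("Pinned " + granted + " physical pages of RAM.");
    }
}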

The thing is that with C# you cannot grow beyond approximately 1.2 GB of RAM on the 32-bit .NET Framework. You can have even 8 GB of RAM on your 64-bit machine, but if the process was compiled for a 32-bit architecture, it will hit an OutOfMemoryException as soon as it reaches approximately 1.2 GB.
For this kind of testing I would suggest choosing other languages/frameworks.
EDIT
A good link on the subject:
is-there-a-memory-limit-for-a-single-net-process

If the problem you're running into is that your process runs out of virtual memory space before the hardware runs out of physical memory space, then you could just spin up a number (5 maybe?) of processes running your code (and something to stop them at, say, 1-2 GB so they don't OOM themselves). It's probably not as good a solution as an unmanaged call to allocate memory, but it would be easy enough to do.
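A rough sketch of that multi-process idea - the "--child" switch is hypothetical; each copy would recognise it and run the memory-filling loop up to its own per-process cap:

using System.Diagnostics;

// Launch several copies of the current executable; each child runs the
// FillMem loop from the question up to its own MAX_MEM_LEVEL cap.
for (int i = 0; i < 5; i++)
{
    Process.Start(Process.GetCurrentProcess().MainModule.FileName, "--child");
}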

Related

KafkaConsumer.PlainSource Method (in C#.Net) Using large amount of CPU

The method below uses a large amount of CPU. Can anyone help me minimize the CPU usage with a proper solution?
KafkaConsumer.PlainSource(consumerSettings, subscription)
    .RunForeach(result =>
    {
        _ActorRef.Tell(result.Message.Value);
    }, materializer);
I'm running the SimpleProducer and SimpleConsumer samples baked into the Akka.Streams.Kafka repository - and the PlainSource is designed in a nearly identical fashion to yours:
KafkaConsumer.PlainSource(consumerSettings, subscription)
    .RunForeach(result =>
    {
        Console.WriteLine($"Consumer: {result.Topic}/{result.Partition} {result.Offset}: {result.Message.Value}");
    }, materializer);
My CPU utilization - bearing in mind that the producer is continuously producing new events for my consumer - stays extremely low, which is what Akka.Streams and all of its plugins (such as Kafka) provide out of the box.
Your setup has no backpressure support (since IActorRef.Tell is non-blocking) and therefore this stream is going to run at full blast inside your system. Whatever your actors are doing is probably what's responsible for high CPU utilization.
Your other ticket is asking about how to add backpressure support to your Akka.Streams.Kafka application, so I'll help answer that too.
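As a hedged sketch of that backpressure change (the usual Akka.Streams pattern, not necessarily your exact fix): replace the fire-and-forget Tell with SelectAsync plus Ask, so the stream waits for the actor to acknowledge each element before pulling the next. This assumes the actor replies, e.g. with Sender.Tell(Done.Instance), when it has finished a message.

KafkaConsumer.PlainSource(consumerSettings, subscription)
    .SelectAsync(1, result =>
        // Ask completes only when the actor replies, so at most one
        // element is in flight at a time - that is the backpressure.
        _ActorRef.Ask<Done>(result.Message.Value, TimeSpan.FromSeconds(5)))
    .RunWith(Sink.Ignore<Done>(), materializer);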

Get application pool's memory usage

I want to get my application pool's memory usage using the Process class. This is my code so far:
ServerManager serverManager = new ServerManager();
ApplicationPoolCollection applicationPoolCollection = serverManager.ApplicationPools;
Process process = Process.GetProcessById(
    applicationPoolCollection.Where(p => p.Name == "mywebsite.com").First().WorkerProcesses[0].ProcessId);
I can get the Process instance, but how do I know how much RAM it is currently using? I do see properties like PagedMemorySize64 and NonpagedSystemMemorySize64. Which one would give me the correct RAM that it is occupying at the moment?
I cannot use PerformanceCounter like so:
PerformanceCounter performanceCounter = new PerformanceCounter();
performanceCounter.CategoryName = "Process";
performanceCounter.CounterName = "Working Set";
performanceCounter.InstanceName = process.ProcessName;
Console.WriteLine(((uint)performanceCounter.NextValue() / 1024).ToString("N0"));
The reason is that ProcessName would return w3wp, and PerformanceCounter would therefore write the total IIS RAM usage to the console, which is not what I want.
I just need the application pool's RAM usage; therefore I think my answer lies in the Process class and not PerformanceCounter.
Any help is appreciated.
If you simply need the amount of RAM visible to the process - meaning the amount of RAM assigned to the process, though not necessarily with every single bit of it actually used by the process - you can use WorkingSet (choose the appropriate version for your .NET).
Let me just draw your attention to the fact that there is much more to process memory diagnostics, in case you are concerned about allocated virtual memory, committed memory, mapped files and whatnot.
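To make that concrete, a minimal sketch built on the question's own code (assuming the pool really is named "mywebsite.com"; WorkingSet64 is the 64-bit-safe variant):

using System;
using System.Diagnostics;
using System.Linq;
using Microsoft.Web.Administration;

using (ServerManager serverManager = new ServerManager())
{
    int pid = serverManager.ApplicationPools
        .First(p => p.Name == "mywebsite.com")
        .WorkerProcesses[0].ProcessId;

    using (Process process = Process.GetProcessById(pid))
    {
        // Physical memory currently assigned to this worker process, in KB.
        Console.WriteLine((process.WorkingSet64 / 1024).ToString("N0") + " KB");
    }
}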

Loading many large photos into a Panel efficiently

How do I load many large photos from a directory and its sub-directories in such a way as to prevent an OutOfMemoryException?
I have been using:
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
    this.Controls.Add(pic);
}
which has worked until now. The photos that I need to work with now are anywhere between 15 and 40 MB each, and there could be hundreds of them.
You're attacking the garbage collector with this approach. Loading 15-40 MB objects in a loop will always invite an OutOfMemoryException, because the objects go straight onto the large object heap (LOH) - all objects larger than 85K do. Large objects become Gen 2 objects immediately, and LOH memory is not compacted automatically: as of .NET 4.5.1 you must request compaction, and in earlier versions it will not be compacted at all.
Therefore, even if you get away with initially loading the objects and the app keeps running, there is every chance that these objects, even when dereferenced completely, will hang around and fragment the large object heap. Once fragmentation occurs - if, for example, the user closes the control to do something else for a minute or two and then opens it again - it is much more likely that the new objects will not be able to slot into the LOH, because the memory must be contiguous when allocation occurs. The GC runs collections on Gen 2 and the LOH much less often for performance reasons: the GC uses memcpy in the background, and this is expensive on larger blocks of memory.
Also, the memory consumed will not be released while all of these images are referenced from a control that is still in use - imagine tabs. The whole idea of doing this is misconceived. Use thumbnails, or load full-scale images as needed by the user, and be careful with the memory consumed.
UPDATE
Rather than telling you what you should and should not do I have decided to try to help you do it :)
I wrote a small program that operates on a directory containing 440 jpeg files with a total size of 335 megabytes. When I first ran your code I got the OutOfMemoryException and the form remained unresponsive.
Step 1
The first thing to note is that if you are compiling as x86 or AnyCPU, you need to change this to x64. Right-click the project, go to the Build tab and set the platform target to x64.
This is because the amount of memory that can be addressed on a 32-bit x86 platform is limited. All .NET processes run within a virtual address space, and the CLR heap size will be whatever the OS allows the process - it is not really within the control of the developer. However, it will allocate as much memory as is available - I am running on 64-bit Windows 8.1, so changing the target platform gives me an almost unlimited amount of memory space to use, right up to the limit of physical memory your process will be allowed.
After doing this, running your code did not cause an OutOfMemoryException.
Step 2
I changed the target framework from the default 4.5 to 4.5.1 in VS 2013. I did this so I could use GCSettings.LargeObjectHeapCompactionMode, as it is only available in 4.5.1. I noticed that closing the form took an age because the GC was doing a crazy amount of work releasing memory. Basically, I would set this at the end of the LoadAllPics code, as it lets the large object heap be compacted on the next blocking garbage collection instead of staying fragmented. I believe this will be essential for your app, so if possible try to use this version of the framework. You should test it on earlier versions too, to see the difference when interacting with your app.
Step 3
As the app was still unresponsive, I made the code run asynchronously.
Step 4
As the code now runs on a separate thread from the UI thread, accessing the form caused a cross-thread exception, so I had to use Invoke, which posts a message back to the UI thread from the worker thread. This is because UI controls can only be accessed from the UI thread.
Code
private async void button1_Click(object sender, EventArgs e)
{
    await LoadAllPics();
}

private async Task LoadAllPics()
{
    IEnumerable<string> files = Directory.EnumerateFiles(@"C:\Dropbox\Photos", "*.JPG", SearchOption.AllDirectories);

    await Task.Run(() =>
    {
        foreach (string file in files)
        {
            Invoke((MethodInvoker)(() =>
            {
                PictureBox pic = new PictureBox() { Image = Image.FromFile(file) };
                this.Controls.Add(pic);
            }));
        }
    });

    // Compact the LOH on the next blocking Gen 2 collection (requires .NET 4.5.1+).
    GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
}
You can try resizing the image when putting it on the UI.
foreach (string file in files)
{
    PictureBox pic = new PictureBox() { Image = Image.FromFile(file).resizeImage(new Size(50, 50)) };
    this.Controls.Add(pic);
}

// Extension methods must be declared in a static class.
public static class ImageExtensions
{
    public static Image resizeImage(this Image imgToResize, Size size)
    {
        return (Image)(new Bitmap(imgToResize, size));
    }
}
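One caveat with the snippet above: the full-size image returned by Image.FromFile is never disposed, so the original bitmaps still pile up in memory. A small variation (same assumption of a 50x50 thumbnail) that releases each original as soon as its thumbnail exists:

foreach (string file in files)
{
    using (Image full = Image.FromFile(file))
    {
        // Only the small thumbnail keeps memory alive after this block.
        this.Controls.Add(new PictureBox { Image = new Bitmap(full, new Size(50, 50)) });
    }
}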

Program doesn't use all hardware resources

I'm working on a program that takes information from files and then stores it in a MySQL database. The MySQL database is located on another dedicated server, which is much more powerful than this one. Data is sent over the LAN using a 1 Gbps connection.
It uses 8 threads because my server has 8 cores, but somehow it runs very slowly.
CPU: Intel Xeon E3-1270 v3 @ 3.50GHz
RAM: 16 GB ECC
HDD: SATA 3 1TB
My program's CPU usage is only 0-5%
CPU affinity is all 8 cores
So, do you have any ideas what's wrong, or how I can increase the speed of my program?
UPDATE:
I updated my code and it appears to be faster:
Parallel.For(0, this.data_files.Count, new ParallelOptions { MaxDegreeOfParallelism = this.MaxThreads }, i =>
{
    this.ThreadCount++;
    this.ParseFile(this.GetSource());
});
Here's a code snippet that deploys threads:
while (true)
{
    if (this.ThreadCount < this.MaxThreads)
    {
        Task.Factory.StartNew(() =>
            this.ParseFile(this.GetFile())
        );
        this.ThreadCount++;
    }
    else
    {
        Thread.Sleep(1);
    }

    this.UpdateConsole();
}
GetFile function:
private string GetFile()
{
    string file = "";
    string source = "";

    while (true)
    {
        if (this.data_files.Count() != 0)
        {
            file = this.data_files[0];
            this.data_files.RemoveAt(0);

            if (File.Exists(file) == true)
            {
                source = File.ReadAllText(file);
                File.Delete(file);
                break;
            }
        }
    }

    return source;
}
I'm working on one program that takes information from files and then stores them in MySQL database.
Clearly your program is not CPU-bound; it's IO-bound. The bottlenecks are going to be your hard disk(s) and your network connection. Odds are that even a single thread will be able to ensure proper utilization of these resources (in a well-designed application). Adding extra threads generally won't help; it'll just create a bunch of threads that spend their time waiting on various IO operations.
Using all the hardware resources is not the right goal for a program to have.
Instead, a better goal is to be as fast as possible. This is significantly different. While using more hardware resources can help, it is not always sufficient.
Sometimes, adding more resources to a problem doesn't help. In those cases, don't. Adding threads makes your program more complex, but not necessarily faster as you've seen.
C# already has good Asynchronous programming features with the TPL (which you are already using), so why not take advantage of that?
This will mean that the .NET framework will automatically manage the threads for you in an efficient way.
Here's what I propose:
foreach (var file in GetFilesToRead()) {
    var task = PerformOperation(file);
    // Keep a list of tasks, if you wish.
}
...
async Task PerformOperation(string filename) {
    var contents = await ReadFile(filename);
    await ParseFile(contents);
    DoSomething();
}
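ReadFile, ParseFile and DoSomething above are placeholders. Purely as an assumption of what one of them might look like, here is a ReadFile that performs truly asynchronous IO:

static async Task<string> ReadFile(string filename)
{
    // ReadToEndAsync does the IO without blocking a thread-pool thread.
    using (var reader = new StreamReader(filename))
        return await reader.ReadToEndAsync();
}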
Note that even in CPU-bound programs, threads (and tasks) may not help you if you're using locks.
Although locks help keep programs well-behaved, they come at a significant performance cost.
Within a lock, only one thread may be executing at a time.
This means that the first thread is locking your _lock instance, and then the other threads are waiting for that lock to be released.
In your program, only one thread is active at a time.
To solve this, don't use locks. Instead, write programs that do not need locks at all: copy variables instead of sharing them, use immutable collections instead of mutable ones, and so on.
My program above uses exactly zero locks and, as such, will make better use of your threads.
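To make the "immutable collections" point concrete, here is a hedged sketch using the System.Collections.Immutable package (an assumption on my part, not something your code already references): threads publish results by atomically swapping in a new immutable list rather than locking a shared mutable one.

using System.Collections.Immutable;

class ResultCollector
{
    private ImmutableList<string> _results = ImmutableList<string>.Empty;

    public void Add(string item)
    {
        // Retries a compare-and-swap until it wins; no lock is taken, and
        // readers always observe a complete, consistent list.
        ImmutableInterlocked.Update(ref _results, list => list.Add(item));
    }

    public ImmutableList<string> Snapshot() => _results;
}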

64bit method call slow C#

Hi, I have a 32-bit application being ported to 64-bit. Somehow, method calls in the 64-bit build are a lot slower than in the 32-bit one.
Code example:
class huge_class
{
    class subclass0 { }
    class subclass1 { }
    class subclass2 { }
    class subclass3 { }
    class subclass4 { }
    class subclass5 { }
    class subclass6 { }
    class subclass7 { }
    //so on... say 300

    public object[] GetClassObj(Stopwatch x)
    {
        Console.WriteLine(x.ElapsedMilliseconds.ToString()); //<- the latency can be observed here; reaching this line takes a big amount of time
        object[] retObj = new object[300];
        retObj[0] = new subclass0();
        retObj[1] = new subclass1();
        retObj[2] = new subclass2();
        retObj[3] = new subclass3();
        retObj[4] = new subclass4();
        retObj[5] = new subclass5();
        retObj[6] = new subclass6();
        //so on... to 299
        return retObj;
    }
}
class CallingClass
{
    static void Main(string[] args)
    {
        Console.WriteLine("Ready");
        Console.ReadKey();

        huge_class bigClass = new huge_class();
        Console.WriteLine("Init Done");
        Console.ReadKey();

        Stopwatch tmr = Stopwatch.StartNew();
        object[] WholeLottaObj = bigClass.GetClassObj(tmr);
        Console.WriteLine(tmr.ElapsedMilliseconds.ToString());
        Console.WriteLine("Done");
        Console.ReadKey();
    }
}
For some odd reason, on 32-bit, GetClassObj is entered faster than in its 64-bit version. What am I doing wrong?
This may be due to cache coherency. Don't forget that each reference will be twice as large on a 64-bit machine as it is on a 32-bit machine. That means:
Each of your instance objects is going to be bigger, so they'll be spread out further in memory (there's more per-object overhead in x64 anyway, and any reference fields will be twice the size)
The array itself will be about twice as big
Now it could easily be that in the 32-bit CLR you were just within one of the fastest caches on your CPU - whereas on the 64-bit CLR you've gone outside it so it's having to swap memory in and out of that cache, either to another cache or to main memory.
That's why x86 is the default for executable projects in VS2010 (and possibly 2008; not sure). This blog post goes into a lot more detail.
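A quick way to verify which pointer width a process is actually running with (it is the build's platform target, not the OS, that decides this):

// References are 4 bytes in a 32-bit process and 8 bytes in a 64-bit one,
// which is why the same object graph occupies roughly twice the space.
Console.WriteLine((Environment.Is64BitProcess ? "64-bit" : "32-bit")
    + " process, reference size " + IntPtr.Size + " bytes");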
Why should it be faster in the first place? 64-bit pointer operations are twice as heavy (in memory terms), so it's natural for a 64-bit app to be slower.
