We have a SQL Server 2008 machine. For some reason, which we still haven't figured out, every two weeks this server stops responding at around the same time on around the same day, either Saturday or Sunday. We have checked the logs, and the only message we have found is this:
A significant part of sql server process memory has been paged out.
In the operating system log we also found a message:
Application popup: Windows - Virtual Memory Minimum Too Low : Your system is low on virtual memory. Windows is increasing the size of your virtual memory paging file. During this process, memory requests for some applications may be denied. For more information, see Help.
So it looks like the operating system is running out of physical memory. We do not understand why this happens every two weeks; it seems as if memory never gets freed, and two weeks is the time it takes to fill up. Is there a way we could diagnose this better? We are also wondering whether it is related to how we are using NHibernate, or whether there is some other cause.
SQL Server consumes more and more memory over time; this is normal. We faced this issue after the server had been up for a couple of MONTHS. SQL Server's memory consumption went up to several GB, and Windows eventually cut it down...
Setting a "max server memory" limit for your SQL Server should help. On our 8 GB server we've set it to 5.5 GB.
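In case it helps, here's a minimal sketch of applying that cap from C#; the connection string is a placeholder, and 5632 MB corresponds to the 5.5 GB mentioned above:

using System.Data.SqlClient;

class MaxMemoryConfig
{
    static void Main()
    {
        // Placeholder connection string - point it at your own server.
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            // sp_configure requires 'show advanced options' to be enabled first.
            // 5632 MB = 5.5 GB, as in the example above.
            var sql = @"EXEC sp_configure 'show advanced options', 1;
                        RECONFIGURE;
                        EXEC sp_configure 'max server memory (MB)', 5632;
                        RECONFIGURE;";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}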
PS. Setting up a "low memory" email alert is good practice. It will let you know just before things are about to go wrong. This blog post explains how to set one up.
1) Identify the process that consumes the memory. Use the Performance Monitor Process object to find the process that is consuming memory (large Private Bytes and Virtual Bytes); a quick programmatic equivalent is sketched below.
2) If the process turns out to be SQL Server, follow the SQL Server memory diagnostics steps. See Monitoring Memory Usage and Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage.
Depending on the identified memory-consuming process, and possibly on the identified SQL Server memory clerk, appropriate actions and remedies can be recommended, but until you do the due diligence and find the cause, no advice can possibly be given.
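As a starting point for step 1, a hedged C# sketch that lists the top processes by private and virtual bytes (System.Diagnostics exposes these as PrivateMemorySize64 and VirtualMemorySize64); Perfmon remains the more precise tool:

using System;
using System.Diagnostics;
using System.Linq;

class MemoryTopList
{
    static void Main()
    {
        // Top 10 processes by private bytes; virtual bytes shown alongside.
        // Access to some system processes may be denied without elevation.
        var top = Process.GetProcesses()
            .OrderByDescending(p => p.PrivateMemorySize64)
            .Take(10);

        foreach (var p in top)
        {
            Console.WriteLine("{0,-30} private: {1,7:N0} MB  virtual: {2,9:N0} MB",
                p.ProcessName,
                p.PrivateMemorySize64 / (1024 * 1024),
                p.VirtualMemorySize64 / (1024 * 1024));
        }
    }
}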
I have written a C# Windows service that collects syslog messages and inserts them into a SQL Server database.
The average rate is 5-10 inserts per second, and each time I perform both a read operation and an insert operation.
I should note that I use the "using" statement for managing SQL Server connections, roughly as in the pattern sketched below.
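For reference, a minimal sketch of that pattern (the SyslogWriter class, table, and column names are invented for illustration):

using System.Data.SqlClient;

class SyslogWriter
{
    private readonly string _connectionString;

    public SyslogWriter(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Insert(string host, string message)
    {
        // 'using' guarantees the connection and command are disposed and the
        // connection is returned to the pool even if the insert throws.
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO SyslogEntries (Host, Message, ReceivedAt) " +
            "VALUES (@host, @msg, GETUTCDATE())", conn))
        {
            cmd.Parameters.AddWithValue("@host", host);
            cmd.Parameters.AddWithValue("@msg", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}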
The only problem I face is that the memory usage of Windows increases steadily and even reaches 6 GB!
Is this normal?
And if it is, what's the solution for decreasing memory usage?
The only problem I face is that the memory usage of Windows increases steadily and even reaches 6 GB! Is this normal?
This is normal, until you face issues with memory pressure.
How do you identify and troubleshoot if you have memory pressure?
How SQL Server manages memory is a complex topic, but there are many links on SO that will help you; one quick check is sketched below.
And if it is, what's the solution for decreasing memory usage?
It's good to follow best practices: leave about 10% of memory to the OS and leave the rest to SQL Server by setting a max server memory limit.
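For the quick check mentioned above, a hedged sketch that queries the sys.dm_os_process_memory DMV (available from SQL Server 2008); the connection string is a placeholder:

using System;
using System.Data.SqlClient;

class MemoryPressureCheck
{
    static void Main()
    {
        // Placeholder connection string - adjust for your environment.
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            var sql = @"SELECT physical_memory_in_use_kb,
                               memory_utilization_percentage,
                               process_physical_memory_low,
                               process_virtual_memory_low
                        FROM sys.dm_os_process_memory;";
            using (var cmd = new SqlCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The *_memory_low flags are set when the OS signals
                    // low-memory conditions to the SQL Server process.
                    Console.WriteLine("In use: {0} KB, utilization: {1}%, physical low: {2}, virtual low: {3}",
                        reader["physical_memory_in_use_kb"],
                        reader["memory_utilization_percentage"],
                        reader["process_physical_memory_low"],
                        reader["process_virtual_memory_low"]);
                }
            }
        }
    }
}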
I have an asp.net/c# 4.0 website on a shared server.
The OS, Windows Server 2012, is running IIS 7 and has 32 GB of RAM and a 3.29 GHz processor.
The server is running into difficulty now and again, with symptoms such as problems RDP'ing in and other PHP websites running slowly.
Our sysadmin has suggested my website as a possible memory hog and cause of these issues.
At any given time the site's "Memory (private working set)" is 2 GB, and it can peak as high as 15 GB.
I ran a trial version of JetBrains dotMemory on the server, attached to the website's w3wp.exe process. This is my first time using this program; I am a complete novice here.
I took two memory snapshots using dotMemory.
And the basic snapshot comparison can be seen here: http://i.imgur.com/0Jk8yYE.jpg
From the above comparison we can see that System.String and BusinessObjects.Item are the two items with the most survived bytes.
Drilling down on System.String, I could see that the main dominating survived object was System.Web.Caching.CacheEntry, at 135 MB. See the screengrab: http://i.imgur.com/t1hs8nd.jpg
This leads me to suspect that maybe I cache too much?
I cache all data that comes out of my database (HTML page content, nav-menu items, relationships between pages and children, articles, etc.) using HttpContext.Current.Cache.Insert, with the cache timeout set to 10080 minutes (7 days).
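For illustration, roughly what that insert looks like; the key format and the GetPageContentFromDb helper are invented:

using System;
using System.Web;
using System.Web.Caching;

public static class ContentCache
{
    public static string GetPageContent(int pageId)
    {
        string key = "PageContent:" + pageId;
        var cached = HttpContext.Current.Cache[key] as string;
        if (cached != null)
            return cached;

        string content = GetPageContentFromDb(pageId); // hypothetical DB call

        // 10080 minutes = 7 days, matching the timeout described above.
        // Everything inserted this way stays referenced by the cache until
        // it expires or memory pressure forces evictions.
        HttpContext.Current.Cache.Insert(
            key,
            content,
            null,                               // no cache dependency
            DateTime.UtcNow.AddMinutes(10080),  // absolute expiration
            Cache.NoSlidingExpiration);

        return content;
    }

    private static string GetPageContentFromDb(int pageId)
    {
        // Placeholder for the real database lookup.
        return "<p>page " + pageId + "</p>";
    }
}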
My Questions:
1 - Is 2 GB "Memory (private working set)", peaking as high as 15 GB, too much for a server with 32 GB of RAM?
2 - Have I used dotMemory correctly to identify an issue?
3 - Is my caching an issue?
4 - Possible other causes?
5 - Possible solutions?
Thanks
Using a large amount of memory cannot, by itself, slow down a program. A program can be slowed down:
if Windows is actively using the swap file (ask an admin to check whether this is the case);
if the .NET program triggers garbage collection too often (produces too much memory traffic).
You can see memory-traffic information in dotMemory. Unfortunately, it can't gather that information when attaching to an already running program; the only way to collect object-creation stack traces and memory-traffic information is to launch your program under the dotMemory profiler with the corresponding settings enabled.
BUT!
The problem is that you can't judge whether a given amount of memory traffic is high or not; the only way to find the root of a performance problem is to use a performance profiler.
For example, JetBrains dotTrace can show you how much time your program spends in garbage collection, and only if that turns out to be the bottleneck should you use a memory profiler to find the source of the traffic.
Conclusion: try to find the bottleneck using a performance profiler first.
Then, if you still have questions about dotMemory, ask me and I'll try to help you :)
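As a rough first check before reaching for a profiler, you can sample the CLR's "% Time in GC" performance counter; a hedged sketch (the "w3wp" instance name is a placeholder, and instances get #N suffixes when several processes share a name):

using System;
using System.Diagnostics;
using System.Threading;

class GcTimeCheck
{
    static void Main()
    {
        // ".NET CLR Memory \ % Time in GC" for one process instance.
        using (var counter = new PerformanceCounter(
            ".NET CLR Memory", "% Time in GC", "w3wp", readOnly: true))
        {
            counter.NextValue();     // the first sample always reads 0
            Thread.Sleep(2000);      // sample over an interval
            // The counter only updates at the end of each collection, so
            // read it several times under load for a meaningful picture.
            Console.WriteLine("% Time in GC: {0:F1}", counter.NextValue());
        }
    }
}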
As the load on our Azure website has increased (along with the complexity of the work that it's doing), we've noticed that we're running into CPU utilization issues. CPU utilization gradually creeps upward over the course of several hours, even as traffic levels remain fairly steady. Over time, if the Azure stats are correct, we'll somehow manage to get > 60 seconds of CPU per instance (not quite clear how that works), and response times will start increasing dramatically.
If I restart the web server, CPU drops immediately, and then begins a slow creep back up. For instance, in the image below, you can see CPU creeping up, followed by the restart (with the red circle) and then a recovery of the CPU.
I'm strongly inclined to suspect that this is a problem somewhere in my own code, but I'm scratching my head as to how to figure it out. So far, any attempts to reproduce this on my dev or testing environments have proven ineffectual. Nearly all the suggestions for profiling IIS/C# performance seem to presume either direct access to the machine in question or at least a "Cloud Service" instance rather than an Azure Website.
I know this is a bit of a long shot, but... any suggestions, either for what it might be, or how to troubleshoot it?
(We're using C# 5.0, .NET 4.5.1, ASP.NET MVC 5.2.0, WebAPI 2.2, EF 6.1.1, Azure System Bus, Azure SQL Database, Azure redis cache, and async for every significant code path.)
Edit 8/5/14 - I've tried some of the suggestions below. But when the website is truly busy, i.e., ~100% CPU utilization, any attempt to download a mini-dump or GC dump results in a 500 error, with the message, "Not enough storage." The various times that I have been able to download a mini-dump or GC dump, they haven't shown anything particularly interesting, at least, so far as I could figure out. (For instance, the most interesting thing in the GC dump was a half dozen or so >100KB string instances - those seem to be associated with the bundling subsystem in some fashion, so I suspect that they're just cached ScriptBundle or StyleBundle instances.)
Try remote debugging your site from Visual Studio.
Try https://{sitename}.scm.azurewebsites.net/ProcessExplorer/ - there you can take memory dumps and GC dumps of your w3wp process.
Then you can compare two GC dumps to find memory leaks, and open the memory dump with WinDbg/VS for further "offline" debugging.
I have a problem with an application that I wrote in .NET/C#. It consists of a server which manages a few other machines and runs tests on them. It is a Windows Forms application. In order to run tests with proper error handling, I have two threads for each machine: one for running tests and one that pings it continuously. Each machine has a run queue in which the tasks to be executed on that particular machine are stored.
The issue is that after some time, when more than a few tasks are present in the queue, the memory the application consumes (per Process Explorer and Task Manager) gradually increases from about 50-100 MB to 1.6-1.8 GB. At about this limit, almost every transaction with the remote machines (file copy to a share, remote WMI access) fails with either "Not enough storage" or "Out of memory". I tried some tools in order to localize the leak, and the closest I got was .NET Memory Profiler. That wasn't of great help, because the largest amount of memory was residing in "Private Data - Unidentified". I'm guessing this is unmanaged data, because I can account for every other piece of data (from my program) down to each string and int, and every instance of them.
Can anyone suggest a tool I can use to properly localize the leak and maybe fix it? It would help me a lot to know which DLL (from my app) or thread is using that memory, or at least to be able to see somehow what is in that memory.
Note: a lot of posts are out there about the two exceptions, "Not enough storage" and "Out of memory". Most of them suggest increasing IRPStackSize on the 'server' machine (in my case, the clients). I have an IRPStackSize of 50 (0x32) on all of the machines, including the server.
EDIT
Regarding the comments: yes, I do maintain a log, but nothing strange shows up in it. Using a memory profiler I discovered that the .NET side of my application uses about 20 MB of memory, while the unmanaged part is well over 1 GB. With the help of WinDbg I found out what resides in most of that extra memory. In order to access the machines and run different tests on them I use WMI, for which I have a wrapper. Everything I use is being disposed (with using statements, and in some cases by actually calling the Dispose method). Strangely, though, the memory is filled with clones of this wrapper class. Does anyone know why a class would clone itself in memory?
Note: the rate at which the memory usage increases is about 5 MB/s, so it's not really over a long period of time. I also wonder why it is not being freed by the garbage collector. I am using C# classes to work with WMI, not COM or unmanaged code. Also, among the objects on the heap I see a lot of data belonging to wmiutils and CWbemError. Oddly enough, Google doesn't even know the word (no results for CWbemError).
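For what it's worth, with System.Management the searcher, its result collection, and each individual ManagementObject all hold COM resources and are IDisposable; a minimal sketch of the disposal pattern to compare the wrapper against (the host and query are placeholders):

using System;
using System.Management;

class WmiQuery
{
    static void Main()
    {
        var scope = new ManagementScope(@"\\remotehost\root\cimv2"); // placeholder host
        scope.Connect();

        var query = new ObjectQuery("SELECT Name, ProcessId FROM Win32_Process");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        using (var results = searcher.Get())
        {
            foreach (ManagementObject mo in results)
            {
                // Each result wraps a COM object; dispose it too, or the
                // unmanaged side accumulates until finalization catches up.
                using (mo)
                {
                    Console.WriteLine("{0} ({1})", mo["Name"], mo["ProcessId"]);
                }
            }
        }
    }
}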
I've got a code base with lots of this:
byte[] contents = FileUtils.FileToByteArray(FileOfGartantuanProportions);
I don't control my IIS server, so I can't see the system log or do instrumentation; I just get to see my request fail to return (white page of death), and sometimes a YSOD with an Out of Memory error.
Does anyone have a rule of thumb for the most data you can load into memory before IIS5 or IIS6 kills the worker process or just keels over and dies?
Or better yet, is there an API call I can make, something like:
if (!IsEnoughMemoryFor(FileOfGartantuanProportion.Length)) throw new SomeException();
On my XP Pro workstation I can get an ASP.NET page to successfully deal with a very large byte array in memory, but those results obviously aren't applicable to a real shared server.
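As it happens, the CLR offers something close to that hypothetical check: System.Runtime.MemoryFailPoint (available since .NET 2.0). A hedged sketch, with the loading helper invented; note it predicts whether an allocation of that size is likely to succeed rather than reserving the memory:

using System;
using System.IO;
using System.Runtime;

class GuardedFileLoad
{
    static byte[] LoadWholeFile(string path)
    {
        int sizeMb = (int)(new FileInfo(path).Length / (1024 * 1024)) + 1;

        // Throws InsufficientMemoryException up front if the CLR predicts
        // the allocation could not be satisfied, instead of dying with an
        // OutOfMemoryException halfway through the request.
        using (new MemoryFailPoint(sizeMb))
        {
            return File.ReadAllBytes(path);
        }
    }

    static void Main(string[] args)
    {
        try
        {
            byte[] contents = LoadWholeFile(args[0]);
            Console.WriteLine("Loaded {0:N0} bytes", contents.Length);
        }
        catch (InsufficientMemoryException ex)
        {
            Console.WriteLine("Refusing to load: " + ex.Message);
        }
    }
}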
According to Tess Ferrandez's talk at TechEd, you can start seeing Out of Memory exceptions on a 32-bit server when you have about 800 MB in Private Bytes or 1.4 GB in Virtual Bytes.
She also had a good post about why this is here:
A restaurant analogy
Other points she made included thinking about what you're serialising into session state - for example, serialising a 1 MB dataset can result in 15-20 MB of memory used on the server for every page request, as that data is serialised and deserialised.
With IIS6 in native mode you can configure the limits for each Application Pool.
With IIS5 it's configured using the <processModel> element's memoryLimit attribute in machine.config, as a percentage of total system memory - the default is 60%.
For IIS 6, you'll likely run into the memory recycling limits PeriodicRestartPrivateMemory and PeriodicRestartMemory. I think on XP it's 60% of physical memory; at least that's what I remember from ASP.NET 1.1, and I'm not sure about 2.0.
The YSOD is probably best handled with try/catch around the large allocations.
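A minimal sketch of that approach; the wrapper is invented, and catching OutOfMemoryException is only reasonable around a single large, known allocation like this one:

using System;
using System.IO;

static class SafeLoad
{
    // Wraps one large allocation so an OutOfMemoryException becomes a
    // controlled failure instead of an unhandled YSOD.
    public static byte[] TryLoad(string path, out string error)
    {
        try
        {
            error = null;
            return File.ReadAllBytes(path);
        }
        catch (OutOfMemoryException)
        {
            error = "File is too large to load into memory on this server.";
            return null;
        }
    }
}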