Is a 300MB memory footprint bad? [closed] - c#

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
We have a C# .NET app that has a footprint of about 300 MB.
My questions:
Do you monitor the memory footprint of your applications?
Is this 300MB footprint bad?
Are there guidelines out there?

Short answer:
We have only once monitored the memory usage of a WPF application, and that was when its usage became pretty serious, caused by a bug in one of our third-party controls.
Since .NET offers a managed framework, the only guideline is probably not to worry about memory until it actually becomes a concern. The GC handles itself pretty well, and as long as there is still memory available, why not use it?
So when does it become a concern? Getting OutOfMemoryExceptions is the point at which you need to start worrying. I personally have never seen that happen, though.

As a rule, a static memory footprint is not a problem however large it is, unless it is causing problems on the target computer. The real problem is memory leaks, where usage keeps increasing over time.
The reason I never bother is that I have no idea what a good or bad memory footprint is for a particular application. I think there are better routes to identifying issues in your code than focusing on memory. In simplistic terms, if your code is well written and you have removed unneeded references, then your memory footprint will turn out fine.

The most important thing is that it does not grow over time. To assess this correctly, it can be useful to do an occasional forced garbage collection, preferably at a time when you expect memory use to be low.
In our GUI app, we do this when a session is closed, which is every 10 minutes or so. If people oppose forcing a GC in a release app, you can make this a debug-build only feature.
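A minimal sketch of that debug-build-only idea, assuming a hypothetical MemoryCheck helper that is called when a session closes; the point is only that a full collection is forced before the number is read, so it reflects live objects rather than garbage that has not been collected yet:

    using System;
    using System.Diagnostics;

    static class MemoryCheck
    {
        [Conditional("DEBUG")]   // compiled away in release builds
        public static void LogManagedMemory(string label)
        {
            // Force a full, blocking collection so the figure reflects live objects only.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            long bytes = GC.GetTotalMemory(forceFullCollection: false);
            Debug.WriteLine("{0}: {1:N0} bytes in managed heap", label, bytes);
        }
    }

    // Called when a session is closed, e.g.:
    // MemoryCheck.LogManagedMemory("After session close");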

I would only worry if the memory grows over a period of time - you may need to use automated testing to repeat operations over and over again in order to expose such memory leaks (see How to test a WPF user interface?)
If you are really worried, I would look into using a memory profiler such as Redgate ANTS Profiler or, if you want a free option, CLR Profiler 4; the latter is hard work to use, but with patience it will still find memory leaks.
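A rough sketch of the "repeat the operation and watch memory" idea mentioned above. DoOneSessionCycle is a placeholder for whatever operation you suspect is leaking; if the reported figure keeps climbing well past the first iteration, something is holding references:

    using System;

    class LeakSmokeTest
    {
        static void Main()
        {
            long baseline = 0;
            for (int i = 0; i < 100; i++)
            {
                DoOneSessionCycle();

                GC.Collect();
                GC.WaitForPendingFinalizers();
                long after = GC.GetTotalMemory(true);

                if (i == 0) baseline = after;
                Console.WriteLine("Iteration {0}: {1:N0} bytes (baseline {2:N0})", i, after, baseline);
            }
        }

        static void DoOneSessionCycle()
        {
            // open a window / load a document / run the suspect operation, then tear it down
        }
    }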

Related

When writing an application, finish the functions first, then optimize? [closed]

Closed 10 years ago.
Donald Knuth said that "premature optimization is the root of all evil", and I have gradually come to believe it.
So is it fair to say that when writing an application, we should concentrate on completing the functionality, without worrying about performance, until we can no longer bear how slow it is?
I'm afraid that if I repeatedly use a wrong pattern that slows down the application, fixing the issue later may consume a considerable amount of time. Should I test the performance of a pattern before using it widely?
The patterns I mention include things like choosing between LINQ and a for-loop, using Delegate.BeginInvoke, Parallel.For, or Task<T>, and disposing of IDisposable objects versus just ignoring them.
Any reference materials are welcome.
I agree with the spirit of Knuth's quote about premature optimization, as it can cause code to become overly complex and unwieldy too early in development, impacting both the quality of the code and the time needed to complete a project.
Two concerns I have about your post:
You should have a sense of whether or not your functions/algorithms can theoretically scale and perform well enough to meet your requirements (e.g. the big-O complexity of your solution -> http://en.wikipedia.org/wiki/Analysis_of_algorithms)
The patterns you mention are actually concrete implementation items, only some of which are related to performance (a short sketch follows this list), e.g.
Parallel.For/Task - these are useful for gaining performance on multi-core systems
IDisposable - this is related to resource management, and not something to be avoided
Linq vs. for-loop - this can be a point-optimization, but you'll need to benchmark/assess your use case to determine which is best for you (e.g. Linq can be moved to PLinq in some cases for parallelism)
Delegate.BeginInvoke - a non-optional component of thread synchronization
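A minimal illustration of two of the items above: Parallel.For spreading CPU-bound work across cores, and a using block so an IDisposable is released deterministically (which is about resource cleanup, not speed). The file name is just an example:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    class PatternsDemo
    {
        static void Main()
        {
            var results = new double[1000];

            // CPU-bound loop spread across cores.
            Parallel.For(0, results.Length, i =>
            {
                results[i] = Math.Sqrt(i) * Math.PI;
            });

            // IDisposable: deterministic cleanup of the file handle.
            using (var writer = new StreamWriter("results.txt"))
            {
                foreach (var value in results)
                    writer.WriteLine(value);
            }
        }
    }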
Never code without concern for performance.
Code for performance up until the code would get more complex (e.g. parallel).
But with C# 5.0 even parallel is not complex.
If you have some calls you think might need to optimize then design for that.
Design the code so optimization does not change the interface to the method.
There is speed, memory, concurrency (if a server app), reliability, security, and support.
Clean code is often the best code.
Don't get crazy until you know you have a performance problem but don't get sloppy.
In answering another question on SO, I told the person they did not need a DataTable and that a DataReader would be faster and use less memory. Their response was, "It still runs in half a second; I don't care." To me, that is sloppy code.
@JonhSanders I disagree that "code for performance up until the code would get more complex" will cause either bugs or incomplete code. For me, coding for performance is not the same as optimizing. On a first pass of anything but throwaway code, I code for performance - nothing exotic, just best practices. Where I see potential hot spots that I might need to come back and optimize, I write with optimization in mind. P.S. I agree on closing the question.
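A hedged sketch of the DataTable vs. DataReader point above: streaming rows with SqlDataReader instead of buffering the whole result set in a DataTable. The connection string and query are hypothetical:

    using System.Data.SqlClient;

    class ReaderExample
    {
        static void PrintNames(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Name FROM Customers", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Rows are read one at a time; nothing is held in a DataTable.
                    while (reader.Read())
                        System.Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }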

Consuming a SOAP Web Service with the lowest overhead [closed]

Closed 10 years ago.
I'm implementing a SOAP web service consumer for sending thousands of emails and storing thousands of XML response records in a local database (C# .NET, Visual Studio 2012).
I would like to make my service consumer as fast and lightweight as possible.
I need to know some of the considerations. I always have a feeling that my code should run faster than it does.
E.g.
I've read that using DataSets increases overhead. So should I use lists of objects instead?
Does using an ORM introduce slowness into my code?
Is a console application faster than a WinForms one, given that the user needs no GUI to deal with? There are simply some parameters sent to the app that invoke some methods.
What are the most efficient ways to deal with a SOAP Web Service?
Make it work, then worry about making it fast. If you try to guess where the bottlenecks will be, you will probably guess wrong. The best way to optimize something is to measure real code before and after.
DataSets, ORMs, WinForms apps, and console apps can all run plenty fast. Use the technologies that suit you, then tune the speed if you actually need it.
Finally if you do have a performance problem, changing your choice of algorithms to better suit your problem will likely yield much greater performance impact than changing any of the technologies you mentioned.
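A small sketch of "measure real code before and after": time the actual operation with Stopwatch rather than guessing. SendBatch is a placeholder for whatever code you are considering optimizing:

    using System;
    using System.Diagnostics;

    class Timing
    {
        static void Main()
        {
            var stopwatch = Stopwatch.StartNew();

            SendBatch();   // the real work under test

            stopwatch.Stop();
            Console.WriteLine("SendBatch took {0} ms", stopwatch.ElapsedMilliseconds);
        }

        static void SendBatch()
        {
            // call the web service, write to the database, etc.
        }
    }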
Considering my personal experience with SOAP, in this scenario I would say your main concern should be how you retrieve this information from your database (procedures, views, triggers, indexes, etc.).
The difference between a console app, a WinForms app, and a web app isn't that relevant.
After the app is done, you should run a heavy stress test on it to see where your performance problem lies, if it exists.

multithreading for file load in C# 4.0 [closed]

Closed 10 years ago.
I need to zip each text file and copy it to another server. File sizes may vary from 500 MB to 8 GB. There is no dependency between the files. I have approximately 35 files.
My regular code takes approximately 3-4 hours for this. To reduce the time, I am thinking of implementing threading. Do you think threading will reduce the time, or is there a better way to do this?
.NET 4.0 has a new System.Threading.Tasks namespace that makes it a lot easier to schedule tasks without having to get deep into thread scheduling.
It allows you to queue up subsequent tasks to run once the previous one has completed (regardless of success or failure).
http://msdn.microsoft.com/en-us/library/system.threading.tasks.aspx
http://www.codethinked.com/net-40-and-systemthreadingtasks
But, as previous commenters have suggested, if the bottleneck isn't the CPU doing the file compression, but rather the network transfer then it may not help much.
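A hedged sketch of chaining tasks as described above: compress a file, then copy it once the compression step has finished (ContinueWith runs regardless of success or failure by default). CompressFile, the share path, and the helper names are placeholders:

    using System.IO;
    using System.Threading.Tasks;

    class ZipAndCopy
    {
        static Task ProcessAsync(string path)
        {
            return Task.Factory.StartNew(() => CompressFile(path))
                       .ContinueWith(t => CopyToServer(path + ".zip"));
        }

        static void CompressFile(string path)
        {
            // compress 'path' to path + ".zip" (e.g. with GZipStream or an external tool)
        }

        static void CopyToServer(string zipPath)
        {
            File.Copy(zipPath, Path.Combine(@"\\otherserver\share", Path.GetFileName(zipPath)), true);
        }
    }

    // Usage: kick one chain off per file and wait for them all, e.g.
    // Task.WaitAll(files.Select(ProcessAsync).ToArray());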
I would recommend using Task.Factory.StartNew because, by default, it runs work on the thread pool, which uses roughly one thread per core and queues up the remaining tasks.
In my experience in working with large files, multi-threading does not speed up the process due to the limitations on the Hard Drive read/write itself and/or network.
You are not only doing a lot of reading and writing with your hard drive, but also copying large files to another computer over the network.
If your average file size is 4.25 GB, that comes out to 148.75 GB of data (at a count of 35 files). That is a lot of space, and not only are you reading all of it into memory (hopefully not all at once, otherwise virtual memory will kick in and write even more out to your hard drive), you are also writing some of it back out as zip files.
Add to that file transfer over a network, and I am not at all surprised at the times you are getting, if your network is typical of the networks I have to deal with. Megabit and gigabit speeds are never what they claim to be.
If you are using an external utility for zipping (i.e. 7-Zip), and process spin-up is not a concern for your application, I would keep it simple and just Process.Start() as many 7-Zip EXEs as you need to do the tasks in (quasi) parallel, or do some number at a time, like 5. Up to you.
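A rough sketch of that "just spawn 7-Zip" approach, assuming 7z.exe is on the PATH and the files live in a single folder (both assumptions); MaxDegreeOfParallelism caps how many compressions run at once, five here as suggested:

    using System.Diagnostics;
    using System.IO;
    using System.Threading.Tasks;

    class ExternalZip
    {
        static void ZipAll(string folder)
        {
            var files = Directory.GetFiles(folder, "*.txt");

            Parallel.ForEach(files,
                new ParallelOptions { MaxDegreeOfParallelism = 5 },
                file =>
                {
                    // "7z a <archive> <file>" adds the file to a new archive.
                    var info = new ProcessStartInfo("7z", "a \"" + file + ".zip\" \"" + file + "\"")
                    {
                        UseShellExecute = false,
                        CreateNoWindow = true
                    };
                    using (var process = Process.Start(info))
                    {
                        process.WaitForExit();
                    }
                });
        }
    }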

Is C# a viable language for a major project? [closed]

Closed 12 years ago.
I've been using C# for a while now and I love it for its great integration with Windows. The Win32 API in C++ is a monster, but that's another story. Anyway, I was wondering: is C# a "good enough" language to use for larger projects? Does Microsoft use C# in any of their applications? I've always assumed C++ was the only choice for large projects because of its speed and because it has no need for the CLR.
What is your opinion on C#?
EDIT: By large I mean applications like Microsoft Project (first example that came to my mind). It could also mean mission-critical applications, as well.
Unless you're after extreme performance (e.g. gaming), C# is perfectly acceptable for almost any application - I've been developing enterprise-scale apps in it for years, and as a general rule the advantages far outweigh any (negligible) performance losses compared to C++ - especially when you factor in development time and the relatively low cost of improving CPU speed.
Since I've started using C#, it would take a VERY good reason to make me go back to something lower-level like C++ - There are simply so many advantages in terms of ease of development, memory management, a huge library (.Net framework), WCF, LINQ, etc.
I would personally consider C# before any other language when starting a new project.
I've been using C# since it came out, and so far there have been only a few things I couldn't do with it, such as print drivers. Other than that, I have developed fairly complex multithreaded server-side applications that are both reliable and fast.
Also - you may want to define "large".
Yes, but I'd encourage you not to follow a vendor lock-in pattern. You could argue that doesn't really apply to .NET, but you would lose that argument.
SharePoint and Dynamics (Microsoft products), and several others, are written almost exclusively in .NET (C#). They would certainly be considered large, enterprise-scale applications. Almost all internal projects at Microsoft, with the exception of Windows and Office, are supposedly written in C# these days (I think it's even a requirement for new projects).
You will incur larger startup times, as you have to load the .NET runtime. Other than that, performance is actually quite good for most things.
And with garbage collection, memory leaks are less of an issue, so Java and C# both become good options for mission-critical applications (memory leaks grow over time and can kill things that run for weeks or months at a time; with a more reliable memory footprint, your application can be more stable).
so... yes.

Best .NET memory and performance profiler? [closed]

Closed 11 years ago.
We are using JetBrains' dotTrace. What other profiling tools can be recommended that are better for profiling C# Windows Forms applications?
No. I have tried pretty much every .NET profiler on the market (ANTS, vTune, OptimizeIt, DevPartner, YourKit), and in my opinion dotTrace is the best of the lot. It is one of only two profilers I have used (the other being YourKit) that has low enough overhead to handle a highly CPU-intensive application.
If and only if your application is relatively light, I could recommend ANTS Profiler. Its line-by-line stats are sometimes quite useful, but they come at a price in profiling efficiency.
I have used the EQATEC Profiler. It is free and is a code profiler, not a memory profiler.
For memory profiling you have both the free CLR profiler and the commercial .NET memory profiler. Both are excellent but the latter is a bit more polished.
We've got on really well with AQTime. The great thing from our point of view is that it does the unmanaged parts of our code too.
It hasn't been mentioned yet, but for memory analysis Windbg is about as thorough and low-level as you can get. Using it in combination with sos.dll is incredibly powerful, but there is a fairly steep learning curve.
It's a free tool though, and Tess Ferrandez' blog is a great place to start with it. ANTS and other profilers are much more user-friendly, but Windbg can slice and dice the managed heap like none other in my opinion.
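For orientation, a few of the standard SOS commands used in that workflow; treat this as a sketch (the type name is hypothetical, and on .NET 2.0 the extension is loaded from mscorwks rather than clr):

    .loadby sos clr                       load the SOS extension for .NET 4
    !dumpheap -stat                       summarize the managed heap by type and total size
    !dumpheap -type MyApp.SuspectType     list instances of a suspect type (hypothetical name)
    !gcroot <address>                     show what is keeping a given object alive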
Ants Profiler just released version 4.
We use it, and are quite happy with it. There's a 14 day trial to evaluate (as is true for most offerings).
We use DotTrace like you, but in the past we used Ants Profiler by RedGate. It is a nice tool also.
I am very happy with RedGate ANTS. The only other one I tried was the one that comes with Visual Studio Team, and it sucks.
You should check out SpeedTrace. We are pleased with the software, and it has helped us a lot in resolving the root causes of our problems.
nProf is a good tool if you're looking for something free. It's kind of finicky at points, and a little buggy, but if you're on a tight budget, it'll do the job.
I've been using the free SlimTune since its recent release. Although it has a minimal interface, it is super easy to use and provides good diagnostics which have already helped me a lot. It currently supports two kinds of displays, one of which is similar to nProf. It is from the same developer as SlimDX, so I expect the tool to become even better in the short term.
EDIT: As far as I know, it does not support memory profiling yet.
