Efficient temporary dataset - C#

I have a class in C# that is storing information on a stack to be used by other pieces of the application later. The information is currently stored in a class (without any methods) and consists of some ints, shorts, floats and a couple of boolean values.
I can be processing 10-40 of these packets every second - potentially more - and the information on the stack will be removed when it's needed by another part of the program; however, this isn't guaranteed to occur at any definite interval. The information also has a chance of being incomplete (I'm processing packets from a network connection).
Currently I have this represented as such:
public class PackInfo
{
public bool active;
public float f1;
public float f2;
public int i1;
public int i2;
public int i3;
public int i4;
public short s1;
public short s2;
}
Is there a better way that this information can be represented? There's no chance of the stack getting too large (most of the information will be cleared if it starts getting too big), but I'm worried that there will be a needless amount of memory overhead involved in creating so many instances of a class that acts as little more than a container for this information. Even though this is neither a computationally complex nor a memory-consuming task, I don't see it scaling well should it become either.

Using a generic Queue<PackInfo> sounds like a good fit for storing these, assuming you're handling the "messages" in order.
As for the overhead of instantiating these classes: I don't think that creating 10-40 instances per second will have any visible impact on performance; the actual processing you do afterwards is likely to be a much better candidate for optimization than the cost of instantiation.
Also, I would recommend only optimizing when you can actually measure performance in your application; otherwise you may be wasting your time on premature optimization.
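A minimal sketch of what that buffering could look like, assuming the PackInfo class from the question and a single producer and consumer (Queue<T> is not thread-safe; if packets arrive on a different thread than the one consuming them, a lock or ConcurrentQueue<T> would be needed):
using System.Collections.Generic;
public class PackInfoBuffer
{
    private readonly Queue<PackInfo> queue = new Queue<PackInfo>();
    // Called by the network-reading code for each parsed packet.
    public void Push(PackInfo info)
    {
        queue.Enqueue(info);
    }
    // Called by the consuming code whenever it is ready; returns false if nothing is waiting.
    public bool TryPop(out PackInfo info)
    {
        if (queue.Count > 0)
        {
            info = queue.Dequeue();
            return true;
        }
        info = null;
        return false;
    }
    // Drop buffered packets if the queue starts getting too big, as described in the question.
    public void Clear()
    {
        queue.Clear();
    }
}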

Related

Is it faster to get a property or to pass a value to a method

I have a static class with properties to store user's inputs:
public static class UserData
{
public static double UserInput1 { get; set; }
}
And I have nested methods that need the user's inputs
public static double Foo()
{
[...]
var input1 = UserData.UserInput1;
var bar = Bar();
[...]
}
private static double Bar()
{
var input1 = UserData.UserInput1;
[...]
}
The positive thing is that I do not have to pass all user inputs to Foo(), then to Bar() (and to further nested methods within Bar()).
The negative thing is that I have to get UserData.UserInput1 and other user inputs very often. I could change the code to get the user inputs only once:
public static double Foo()
{
[...]
var input1 = UserData.UserInput1;
var bar = Bar(input1);
[...]
}
private static double Bar(double input1)
{
[...]
}
Which one is faster?
The second one is faster than the first one, because you avoid reading the static property from UserData each time.
Working through static state has a small performance cost, since every access goes through the property getter to a shared memory location. By passing the input values as parameters, this is avoided and slightly better performance is achieved.
But both options are fine. It's more important to focus on code readability and maintainability than on performance, unless you are working on a performance-critical issue.
Which one is faster?
Using static mutable state in this way will be way slower in the long run, because you will spend a bunch of time trying to find and fix bugs. That time could be better spent doing things that will actually help performance, like profiling and optimizing code.
Try to make methods that compute anything take the required input as parameters. Try to make input fields properties of the associated UI class. This should help keep the code simple and understandable.
Accessing a static property will be translated to an indirect memory access. Passing a parameter to a method might be free if the parameter is already in a register, or might involve a bit more work if it needs to be loaded, moved or passed on the stack. But we are talking about single-digit cycles here; optimization at this level should only be done in super-tight loops that run many millions of times each second, and then you should typically ensure that all methods can be inlined, sidestepping the problem.
If you're worried about such micro-optimizations (which you generally wouldn't need), consider using inlining.
[MethodImpl(MethodImplOptions.AggressiveInlining)]
https://learn.microsoft.com/en-us/dotnet/api/system.runtime.compilerservices.methodimploptions?view=net-7.0
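For illustration, this is roughly how the attribute would be applied to the Bar helper from the question; the containing class name and the placeholder body are invented for this sketch, and UserData is the static class shown above:
using System.Runtime.CompilerServices;
public static class Calculations
{
    // Hint to the JIT that this small helper should be inlined at its call sites.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private static double Bar(double input1)
    {
        return input1 * 2; // placeholder body
    }
    public static double Foo()
    {
        return Bar(UserData.UserInput1);
    }
}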
PS: using one over the other, or using AggressiveInlining, will not save you anywhere close to the 1ms you are hoping for, under non-extreme/farfetched scenarios.

C# - how does variable scope and disposal impact processing efficiency?

I was having a discussion with a colleague the other day about this hypothetical situation. Consider this pseudocode:
public void Main()
{
MyDto dto = Repository.GetDto();
foreach(var row in dto.Rows)
{
ProcessStrings(row);
}
}
public void ProcessStrings(DataRow row)
{
string string1 = GetStringFromDataRow(row, 1);
string string2 = GetStringFromDataRow(row, 2);
// do something with the strings
}
Then this functionally identical alternative:
public void Main()
{
string string1 = null;
string string2 = null;
MyDto dto = Repository.GetDto();
foreach(var row in dto.Rows)
{
ProcessStrings(row, string1, string2);
}
}
public void ProcessStrings(DataRow row, string string1, string string2)
{
string1 = GetStringFromDataRow(row, 1);
string2 = GetStringFromDataRow(row, 2);
// do something with the strings
}
How will these differ in processing when running the compiled code? Are we right in thinking the second version is marginally more efficient because the string variables will take up less memory and only be disposed once, whereas in the first version, they're disposed of on each pass of the loop?
Would it make any difference if the strings in the second version were passed by ref or as out parameters?
When you're dealing with "marginally more efficient" level of optimizations you risk not seeing the whole picture and end up being "marginally less efficient".
This answer here risks the same thing, but with that caveat, let's look at the hypothesis:
Storing a string into a variable creates a new instance of the string
No, not at all. A string is an object, what you're storing in the variable is a reference to that object. On 32-bit systems this reference is 4 bytes in size, on 64-bit it is 8. Nothing more, nothing less. Moving 4/8 bytes around is overhead that you're not really going to notice a lot.
So neither of the two examples, with the very little information we have about the makings of the methods being called, creates more or less strings than the other so on this count they're equivalent.
So what is different?
Well, in one example you're storing the two string references in local variables. These will most likely end up in CPU registers, but they could be memory on the stack; it's hard to say, as it depends on the rest of the code. Does it matter? Highly unlikely.
In the other example you're passing in two parameters as null and then reusing those parameters locally. These parameters can likewise be passed in CPU registers or in stack memory. Does it matter? Not at all.
So most likely there is going to be absolutely no difference at all.
Note one thing, you're mentioning "disposal". This term is reserved for the usage of objects implementing IDisposable and then the act of disposing of these by calling IDisposable.Dispose on those objects. Strings are not such objects, this is not relevant to this question.
If, instead, by disposal you mean "garbage collection", then since I already established that neither of the two examples creates more or less objects than the others due to the differences you asked about, this is also irrelevant.
This is not important, however. It isn't important what you or I or your colleague thinks is going to have an effect. Knowing is quite different, which leads me to...
The real tip I can give about optimization:
Measure
Measure
Measure
Understand
Verify that you understand it correctly
Change, if possible
You measure, use a profiler to find the real bottlenecks and real time spenders in your code, then understand why those are bottlenecks, then ensure your understanding is correct, then you can see if you can change it.
In your code I will venture a guess that if you were to profile your program you would find that those two examples will have absolutely no effect whatsoever on the running time. If they do have effect it is going to be on order of nanoseconds. Most likely, the very act of looking at the profiler results will give you one or more "huh, that's odd" realizations about your program, and you'll find bottlenecks that are far bigger fish than the variables in play here.
In both of your alternatives, GetStringFromDataRow creates a new string every time. Whether you store a reference to that string in a local variable or in a parameter variable (which in your case is essentially no different from a local variable) does not matter. Imagine you didn't even assign the result of GetStringFromDataRow to any variable: the string instance would still be created and stored somewhere in memory until it is garbage collected. Passing your strings by reference wouldn't make much difference either. You would be able to reuse the memory location that holds the reference to the created string (you can think of it as the memory address of the string instance), but not the memory location for the string contents.

.NET small class with many instances optimization scenario?

I have millions of instances of the class Data and I'm seeking optimization advice.
Is there any way to optimize it - for example, saving memory by serializing it somehow, even though that would hurt retrieval speed, which is also important? Maybe turning the class into a struct - but the class seems pretty large for a struct.
Queries can touch hundreds of millions of these objects at a time. They sit in a list and are queried by DateTime. The results are aggregated in different ways, and many calculations can be applied.
[Serializable]
[DataContract]
public abstract class BaseData {}
[Serializable]
public class Data : BaseData {
public byte member1;
public int member2;
public long member3;
public double member4;
public DateTime member5;
}
Unfortunately, while you did specify that you want to "optimize", you did not specify what the exact problem is you mean to tackle. So I cannot really give you more than general advice.
Serialization will not help you. Your Data objects are already stored as bytes in memory. Nor will turning it into a struct help. The difference between a struct and a class lies in their addressing and referencing behaviour, not in their memory footprint.
The only way I can think of to reduce the memory footprint of a collection with "hundreds-millions" of these objects would be to serialize and compress the entire thing. But that is not feasible. You would always have to decompress the entire thing before accessing it, which would shoot your performance to hell AND actually almost double the memory consumption on access (compressed and decompressed data both lying in memory at that point).
The best general advice I can give you is not to try to optimize this scenario yourself, but use specialized software for that. By specialized software, I mean a (in-memory) database. One example of a database you can use in-memory, and for which you already have everything you need on-board in the .NET framework, is SQLite.
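A minimal sketch of that idea, assuming the Microsoft.Data.Sqlite package (or a similar ADO.NET SQLite provider) is available; the table layout and the sample query are invented for illustration:
using Microsoft.Data.Sqlite;
// An in-memory SQLite database lives only as long as its connection, so keep one connection open.
using (var connection = new SqliteConnection("Data Source=:memory:"))
{
    connection.Open();
    var create = connection.CreateCommand();
    create.CommandText = "CREATE TABLE Data (Member1 INTEGER, Member2 INTEGER, Member3 INTEGER, Member4 REAL, Member5 TEXT)";
    create.ExecuteNonQuery();
    // In practice the inserts would go inside a transaction, with parameters reused for speed.
    var insert = connection.CreateCommand();
    insert.CommandText = "INSERT INTO Data VALUES (1, 2, 3, 4.5, '2021-01-01T00:00:00')";
    insert.ExecuteNonQuery();
    // Let the database do the filtering and aggregation instead of walking an object graph.
    var query = connection.CreateCommand();
    query.CommandText = "SELECT AVG(Member4) FROM Data WHERE Member5 >= '2021-01-01'";
    object average = query.ExecuteScalar();
}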
I assume, as you seem to imply, that you have a class with many members, have a large number of instances, and need to keep them all in memory at the same time to perform calculations.
I ran a few tests to see if you could actually get different sizes for the classes you described.
I used this simple method for finding the in-memory size of an object:
private static void MeasureMemory()
{
int size = 10000000;
object[] array = new object[size];
long before = GC.GetTotalMemory(true);
for (int i = 0; i < size; i++)
{
array[i] = new Data();
}
long after = GC.GetTotalMemory(true);
double diff = after - before;
Console.WriteLine("Total bytes: " + diff);
Console.WriteLine("Bytes per object: " + diff / size);
}
It may be primitive, but I find that it works fine for situations like this.
As expected, almost nothing you can do to that class (turning it into a struct, removing the inheritance, or removing the attributes) influences the memory used by a single instance. As far as memory usage goes, they are all equivalent. However, do try to fiddle with your actual classes and run them through the code above.
The only way you could actually reduce the memory footprint of an instance would be to use smaller data types for keeping your data (an int instead of a long, for example). If you have a large number of booleans, you could group them into a byte or an integer and have simple property wrappers to work with them (a boolean on its own takes a byte of memory). These may be insignificant things in most situations, but for a hundred million objects, removing a boolean could make a difference of a hundred MB of memory. Also, be aware that the platform target you choose for your application can have an impact on the memory footprint of an object (x64 builds take up more memory than x86 ones).
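As a hedged sketch of packing several booleans into one byte (the flag names here are invented):
public class PackedFlags
{
    // One byte backs up to eight boolean flags.
    private byte flags;
    private const byte IsActiveMask = 1 << 0;
    private const byte IsDeletedMask = 1 << 1;
    public bool IsActive
    {
        get { return (flags & IsActiveMask) != 0; }
        set { flags = value ? (byte)(flags | IsActiveMask) : (byte)(flags & ~IsActiveMask); }
    }
    public bool IsDeleted
    {
        get { return (flags & IsDeletedMask) != 0; }
        set { flags = value ? (byte)(flags | IsDeletedMask) : (byte)(flags & ~IsDeletedMask); }
    }
}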
Serializing the data is very unlikely to help. An in-memory database has its upsides, especially if you are doing complex queries. However, it is unlikely to actually reduce the memory usage for your data. Unfortunately, there just aren't many ways to reduce the footprint of basic data types. At some point, you will just have to move to a file-based database.
However, here are some ideas. Please be aware that they are hacky and highly conditional, that they decrease computation performance, and that they will make the code harder to maintain.
It is often the case in large data structures that objects in different states will have only some properties filled, while the others are set to null or a default value. If you can identify such groups of properties, perhaps you could move them to a sub-class and keep one reference that could be null, instead of having several properties take up space. Then you only instantiate the sub-class once it is needed. You could write property wrappers that hide this from the rest of the code. Keep in mind that the worst-case scenario here would have you keeping all the properties in memory, plus several object headers and pointers.
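A rough sketch of that first idea, with an invented group of "rarely used" members:
public class Data
{
    public byte member1;
    public int member2;
    // ... frequently used members ...
    // Rarely populated members live in a separate object that is usually null.
    private RareData rare;
    public double RareValue1
    {
        get { return rare == null ? 0.0 : rare.Value1; }
        set { EnsureRare().Value1 = value; }
    }
    private RareData EnsureRare()
    {
        if (rare == null) rare = new RareData();
        return rare;
    }
    private class RareData
    {
        public double Value1;
        public double Value2;
    }
}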
You could perhaps turn members that are likely to take a default value into binary representations, and then pack them into a byte array. You would know which byte positions represent which data member, and could write properties that could read them. Position the properties that are most likely to have a default value at the end of the byte array (a few longs that are often 0 for example). Then, when creating the object, adjust the byte array size to exclude the properties that have the default value, starting from the end of the list, until you hit the first member that has a non-default value. When the outside code requests a property, you can check if the byte array is large enough to hold that property, and if not, return the default value. This way, you could potentially save some space. Best case, you will have a null pointer to a byte array instead of several data members. Worst case, you will have full byte arrays taking as much space as the original data, plus some overhead for the array. The usefulness depends on the actual data, and assumes that you do relatively few writes, as the re-computation of the array will be expensive.
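And a condensed sketch of the second idea, reduced to two invented members to keep it short; real code would cover all members, fix a layout convention, and rebuild the array on writes:
using System;
public class PackedData
{
    // Fixed layout: bytes 0-7 hold member3 (a long), bytes 8-15 hold member4 (a double that is often 0).
    // Trailing default values are simply not stored, so the array can be shorter - or null - for sparse objects.
    private readonly byte[] packed;
    public long Member3
    {
        get { return packed != null && packed.Length >= 8 ? BitConverter.ToInt64(packed, 0) : 0L; }
    }
    public double Member4
    {
        get { return packed != null && packed.Length >= 16 ? BitConverter.ToDouble(packed, 8) : 0.0; }
    }
    public PackedData(long member3, double member4)
    {
        if (member4 != 0.0)
        {
            packed = new byte[16];
            Array.Copy(BitConverter.GetBytes(member3), 0, packed, 0, 8);
            Array.Copy(BitConverter.GetBytes(member4), 0, packed, 8, 8);
        }
        else if (member3 != 0L)
        {
            packed = new byte[8];
            Array.Copy(BitConverter.GetBytes(member3), 0, packed, 0, 8);
        }
        // If both members have default values, packed stays null and no array is allocated.
    }
}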
Hope any of this helps :)

In memory representation of large data

Currently, I am working on a project where I need to bring GBs of data onto the client machine to do some work, and the work needs the whole dataset because it performs analysis on the data to support a decision-making process.
So the question is: what are the best practices and suitable approaches for managing that much data in memory without hampering the performance of the client machine and the application?
Note: at application-load time we can spend time bringing the data from the database to the client machine; that's totally acceptable in our case. But once the data is loaded into the application at startup, performance is very important.
This is a little hard to answer without a problem statement, i.e. what problems you are currently facing, but the following is just some thoughts, based on some recent experiences we had in a similar scenario. It is, however, a lot of work to change to this type of model - so it also depends how much you can invest trying to "fix" it, and I can make no promise that "your problems" are the same as "our problems", if you see what I mean. So don't get cross if the following approach doesn't work for you!
Loading that much data into memory is always going to have some impact, however, I think I see what you are doing...
When loading that much data naively, you are going to have many (millions?) of objects and a similar-or-greater number of references. You're obviously going to want to be using x64, so the references will add up - but in terms of performance the biggest problem is going to be garbage collection. You have a lot of objects that can never be collected, but the GC knows you're using a ton of memory and is going to try anyway, periodically. This is something I looked at in more detail here; graphing the memory behaviour makes the impact obvious - in particular, the "spikes" are all GC killing performance.
For this scenario (a huge amount of data loaded, never released), we switched to using structs, i.e. loading the data into:
struct Foo {
private readonly int id;
private readonly double value;
public Foo(int id, double value) {
this.id = id;
this.value = value;
}
public int Id {get{return id;}}
public double Value {get{return value;}}
}
and stored those directly in arrays (not lists):
Foo[] foos = ...
the significance of that is that because some of these structs are quite big, we didn't want them copying themselves lots of times on the stack, but with an array you can do:
private void SomeMethod(ref Foo foo) {
if(foo.Value == ...) {blah blah blah}
}
// call ^^^
int index = 17;
SomeMethod(ref foos[index]);
Note that we've passed the value directly - it was never copied; foo.Value is actually looking directly inside the array. The tricky bit starts when you need relationships between objects. You can't store a reference to another Foo here, because Foo is a struct sitting inside an array, so there is no object reference to store. What you can do, though, is store the index into the array. For example:
struct Customer {
... more not shown
public int FooIndex { get { return fooIndex; } }
}
Not quite as convenient as customer.Foo, but the following works nicely:
Foo foo = foos[customer.FooIndex];
// or, when passing to a method, SomeMethod(ref foos[customer.FooIndex]);
Key points:
we're now using half the size for "references" (an int is 4 bytes; a reference on x64 is 8 bytes)
we don't have several-million object headers in memory
we don't have a huge object graph for GC to look at; only a small number of arrays that GC can look at incredibly quickly
but it is a little less convenient to work with, and needs some initial processing when loading
additional notes:
strings are a killer; if you have millions of strings, then that is problematic; at a minimum, if you have strings that are repeated, make sure you do some custom interning (not string.Intern, that would be bad) to ensure you only have one instance of each repeated value, rather than 800,000 strings with the same contents - a sketch of such a pool follows after these notes
if you have repeated data of finite length, rather than sub-lists/arrays, you might consider a fixed array; this requires unsafe code, but avoids another myriad of objects and references
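For the string point above, a minimal sketch of what custom interning could look like: a simple pool that, unlike string.Intern, can be discarded once loading is finished (the class name is invented):
using System.Collections.Generic;
public sealed class StringPool
{
    private readonly Dictionary<string, string> pool = new Dictionary<string, string>();
    // Returns one shared instance per distinct value seen during loading.
    public string Intern(string value)
    {
        if (value == null) return null;
        string existing;
        if (pool.TryGetValue(value, out existing)) return existing;
        pool.Add(value, value);
        return value;
    }
}
During deserialization every repeated string field is run through pool.Intern(...); the pool itself can be dropped once loading completes, so it doesn't pin strings for the lifetime of the process.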
As an additional footnote, with that volume of data, you should think very seriously about your serialization protocols, i.e. how you're sending the data down the wire. I would strongly suggest staying far away from things like XmlSerializer, DataContractSerializer or BinaryFormatter. If you want pointers on this subject, let me know.

Do I have a serious memory leak problem?

I'm building a Windows Forms application in C# that reads from hundreds of files and creates an object hierarchy. In particular:
DEBUG[14]: Imported 129 system/s, 6450 query/s, 6284293 document/s.
The sum is the total number of objects I've created. The objects are really simple, by the way: just some int/string properties and strongly typed lists inside.
Question: is it normal that my application is consuming about 700MB of memory (in debug mode)? What can I do to reduce memory usage?
EDIT: here is why I have 6284293 objects, if you're curious. Imagine a search engine, called a "system". A system has multiple queries inside it.
public class System
{
public List<Query> Queries;
}
Each query object refers to a "topic", that is, the main subject of the search (e.g. a search for "italy weekend"). It has a list of retrieved documents inside:
public class Query
{
public Topic Topic; // Maintain only a reference to the topic
public List<RetrievedDocument> RetrievedDocuments;
public System System; // Maintain only a reference to the system
}
Each retrieved document has a score and a rank and has a reference to the topic document:
public class RetrievedDocument
{
public string Id;
public int Rank;
public double Score;
public Document Document;
}
Each topic has a collection of documents inside, which can be relevant or not relevant; each document keeps a reference to its parent topic:
public class Topic
{
public int Id;
public List<Document> Documents;
public List<Document> RelevantDocuments
{
get { return Documents.Where(d => d.IsRelevant).ToList(); }
}
}
public class Document
{
public string Id;
public bool IsRelevant;
public Topic Topic; // Maintain only a reference to the topic
}
There are 129 systems, 50 main topics (129*50 = 6450 query objects), each query has a different number of retrieved documents, 6284293 in total. I need this hierarchy for doing some calculations (average precision, topic ease, system mean average precision, relevancy). This is how TREC works...
If you're reading 6284293 documents and are holding on to these in an object hierarchy, then obviously your application is going to use a fair amount of memory. It is hard to say whether you're using more than could be expected, given that we don't know the size of these objects.
Also, remember that the CLR allocates and frees memory on behalf of your application. So even though your application has released memory, this may not be immediately reflected in the process's memory usage. If the application is not leaking, this memory will be reclaimed at some point, but you shouldn't expect to see managed memory usage immediately reflected in process memory usage, as the CLR may hold on to memory to reduce the number of allocations/frees.
Go get the SciTech memory profiler (it has a two-week free trial) and find out.
Watch out for empty lists, they take 40 bytes each.
It's hard to say what's going on without knowing more about your code, but here are some ideas and suggestions:
Make sure you close files after you finish reading from them.
Make sure you're not maintaining references to objects that are no longer being used.
Look at what data structures you're using. Sometimes there's a more memory-efficient way to arrange your data.
Look at your data types: are you using a long or a double in places where a byte would suffice?
Every program will use more memory in Debug mode than outside Debug mode, but the difference should be on the order of single or tens of megabytes, not hundreds. Can you use Task Manager to check how much memory you're using outside of Debug mode?
