ILNumerics: ILArray<T> as instance variables - C#

I am using ILNumerics to represent some time series.
Ideally I would like to have all data encapsulated in an object-oriented fashion and, therefore, to use instance variables and instance methods to process them.
I have several questions, all related to the best way to use ILArray in a class efficiently and, possibly, as instance variables. I have gone through the relevant documentation and checked previous SO examples, but none seems to explicitly address these issues.
First: the example proposed on the website for 'Array Utilization Class'
[source: http://ilnumerics.net/ClassRules.html] does not seem to compile, at least with the ILNumerics trial edition and VS 2013 Professional (.NET 4.5). Am I missing something?
Or is it because this part of the code:
public ILRetArray<double> A
{
    get
    {
        // lazy initialization
        if (m_a.IsEmpty)
        {
            m_a.a = ILMath.rand(100, 100);
        }
    }
    set { m_a.a = value; }
}
does not have a return statement?
In the mentioned example, the m_a array may then be modified through the following instance method:
public void Do()
{
    using (ILScope.Enter())
    {
        // assign via .a property only!
        m_a.a = m_a + 2;
    }
}
How can one access a specific component of the vector? Suppose we want something like
m_a[0] = 2.2; would this get in the way of the memory management?
As a general observation, it would seem to me that the natural way of using ILNumerics is through static methods, much as one would write the code in Fortran (or possibly in R/Matlab): this is how I have used it so far. Am I right, or is defining a class with ILArray types as instance variables and the relevant methods just as efficient and straightforward?
Alternatively, would you recommend adopting System.Array instances as instance variables and then importing/exporting to ILArray only in static methods that perform the array operations? I would tend to avoid this path, or at least keep it as confined as possible.

The documentation section 'ILArray and Classes' has been updated. As you stated, there was a mistake in the sample code.
Modifying ILArray Instances as Class Members
By following the rules described in the documentation, all array members will be of type ILArray (or ILLogical or ILCell). These types are mutable types. You can alter them freely during their lifetime. m_a[0] = 2.2; works as expected. You may also decide to replace the array completely:
m_a.a = ILMath.rand(2,3,5);
Just keep in mind not to simply assign to the array reference but to use the .a property or the .Assign() method on the array. The compiler will prevent you from mistakenly assigning anyway, since you have declared your array as readonly.
Such alterations work smoothly with the memory management.
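For illustration, here is a minimal sketch of a class following those rules (readonly member, ILMath.localMember<T> initialization, assignment via the .a property); the class name, member name, and array sizes are placeholders:
public class TimeSeries
{
    // declared readonly and initialized via ILMath.localMember<T>, per the class rules
    private readonly ILArray<double> m_a = ILMath.localMember<double>();

    public void Fill()
    {
        using (ILScope.Enter())
        {
            // replace the whole content via the .a property
            m_a.a = ILMath.rand(1, 100);
        }
    }

    public void TweakFirst()
    {
        // element-wise modification works as expected
        m_a[0] = 2.2;
    }
}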
Mixing Static Methods and Class Instances
As long as you keep an eye on the rules for both functions (ILScope blocks, distinct input parameter array types, assignments via the .a property) and classes (readonly ILArray<T> declaration, ILMath.localMember<T> initialization), you can freely mix both schemes. It will work both ways and immediately reuse all memory that is no longer needed.
Mixing intensive use of System.Array with ILArray<T>, on the other hand, may lead to disadvantageous allocation patterns. In general, it is easy to create an ILArray from a System.Array: the System.Array will be used directly by the ILArray if it fits the storage scheme (i.e. if it is one-dimensional). But the other way around is not very efficient; it generally involves a copy of the data, and the ILNumerics memory management cannot work efficiently either.
That's why we recommend staying with ILArray and the like. As you see, there are some rules to keep in mind, but usually you will internalize them very quickly.
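As a sketch of that mixing, a static helper written according to the function rules could be called from an instance method of the class above; the ILInArray<T> parameter type and the exact signatures follow the documented conventions, but treat the details as an assumption rather than a sample verified against your ILNumerics version:
// static helper following the function rules: ILInArray<T> input, ILScope block, ILRetArray<T> return
public static ILRetArray<double> Smooth(ILInArray<double> A)
{
    using (ILScope.Enter(A))
    {
        return (A + 2) / 2; // any expression built from the input
    }
}

// instance method of a class written according to the class rules
public void SmoothInPlace()
{
    using (ILScope.Enter())
    {
        m_a.a = Smooth(m_a); // assign the result via .a, as required for array members
    }
}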

Related

Force function input parameters to be immutable?

I've just spent the best part of two days trying to track down a bug; it turns out I was accidentally mutating the values that were provided as input to a function.
IEnumerable<DataLog> FilterIIR(IEnumerable<DataLog> buffer)
{
    double notFilter = 1.0 - FilterStrength;
    double filteredVal = buffer.FirstOrDefault()?.oilTemp ?? 0.0f;
    foreach (var item in buffer)
    {
        filteredVal = (item.oilTemp * notFilter) + (filteredVal * FilterStrength);
        /* Mistake here!
        item.oilTemp = (float)filteredVal;
        yield return item;
        */
        // Correct version!
        yield return new DataLog()
        {
            oilTemp = (float)filteredVal,
            ambTemp = item.ambTemp,
            oilCond = item.oilCond,
            logTime = item.logTime
        };
    }
}
My programming language of preference is usually C# or C++ depending on what I think suits the requirements better (this is part of a larger program that suits C# better)...
Now, in C++ I would have been able to guard against such a mistake by accepting constant iterators, which prevent you from modifying the values as you retrieve them (though I might need to build a new container for the return value). I've done a little searching and can't find any simple way to do this in C#; does anyone know of one?
I was thinking I could make an IReadOnlyEnumerable<T> class which takes an IEnumerable as a constructor argument, but then I realized that unless it makes a copy of the values as you retrieve them it won't actually have any effect, because the underlying values can still be modified.
Is there any way I might be able to protect against such errors in future? Some wrapper class, or even if it's a small code snippet at the top of each function I want to protect, anything would be fine really.
The only sort of reasonable approach I can think of at the moment that'll work is to define a ReadOnly version of every class I need, then have a non-readonly version that inherits and overloads the properties and adds functions to provide a mutable version of the same class.
The problem here isn't really about the IEnumerable. IEnumerables are actually immutable: you can't add or remove things from them. What's mutable is your DataLog class.
Because DataLog is a reference type, item holds a reference to the original object, instead of a copy of the object. This, plus the fact that DataLog is mutable, allows you to mutate the parameters passed in.
So, at a high level, you can either:
make a copy of DataLog, or
make DataLog immutable,
or both...
What you are doing now is "making a copy of DataLog". Another way of doing this is changing DataLog from a class to a struct. This way, you'll always create a copy of it when passing it to methods (unless you mark the parameter with ref). So be careful when using this method, because it might silently break existing methods that assume pass-by-reference semantics.
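A quick sketch of why that change can surprise you; DataLogStruct and Mutate are purely hypothetical names used to show the difference in behaviour:
using System;

struct DataLogStruct { public float oilTemp; }

static class StructDemo
{
    static void Mutate(DataLogStruct d)
    {
        d.oilTemp = 99f; // modifies a copy; the caller's value is untouched
    }

    static void Demo()
    {
        var log = new DataLogStruct { oilTemp = 20f };
        Mutate(log);
        Console.WriteLine(log.oilTemp); // still 20 - the struct was passed by value
    }
}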
You can also make DataLog immutable. This means removing all the setters. Optionally, you can add methods named WithXXX that return a copy of the object with only one property changed. If you choose to do this, your FilterIIR would look like:
yield return item.WithOilTemp(filteredVal);
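A sketch of what such an immutable DataLog might look like, assuming the four fields used in the question (the constructor and WithOilTemp are illustrative, not an existing API, and the field types are guesses):
using System;

public class DataLog
{
    public float oilTemp { get; }
    public float ambTemp { get; }
    public float oilCond { get; }
    public DateTime logTime { get; }

    public DataLog(float oilTemp, float ambTemp, float oilCond, DateTime logTime)
    {
        this.oilTemp = oilTemp;
        this.ambTemp = ambTemp;
        this.oilCond = oilCond;
        this.logTime = logTime;
    }

    // returns a copy with only oilTemp changed
    public DataLog WithOilTemp(double newOilTemp) =>
        new DataLog((float)newOilTemp, ambTemp, oilCond, logTime);
}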
The only sort of reasonable approach I can think of at the moment that'll work is to define a ReadOnly version of every class I need, then have a non-readonly version that inherits and overloads the properties and adds functions to provide a mutable version of the same class.
You don't actually need to do this. Notice how List<T> implements IReadOnlyList<T>, even though List<T> is clearly mutable. You could write an interface called IReadOnlyDataLog. This interface would only have the getters of DataLog. Then, have FilterIIR accept an IEnumerable<IReadOnlyDataLog> and have DataLog implement IReadOnlyDataLog. This way, you will not accidentally mutate the DataLog objects in FilterIIR.
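A minimal sketch of that approach, again assuming the field names from the question (IReadOnlyDataLog is an interface you would write yourself, not a framework type):
using System;

public interface IReadOnlyDataLog
{
    float oilTemp { get; }
    float ambTemp { get; }
    float oilCond { get; }
    DateTime logTime { get; }
}

public class DataLog : IReadOnlyDataLog
{
    public float oilTemp { get; set; }
    public float ambTemp { get; set; }
    public float oilCond { get; set; }
    public DateTime logTime { get; set; }
}

// FilterIIR then accepts the read-only view; since IEnumerable<T> is covariant,
// an IEnumerable<DataLog> can still be passed in directly:
// IEnumerable<DataLog> FilterIIR(IEnumerable<IReadOnlyDataLog> buffer) { ... }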

Adding a reference to a list c# struct

I'm having a problem with BoundingSpheres in XNA. I'm wanting to add a BoundingSphere to a list of BoundingSpheres. At the moment it's along the lines of:
Aircraft(Vector3 pos, float radius, CollisionManager colMan)
{
    BoundingSphere sphere = new BoundingSphere(pos, radius);
    colMan.AddSphere(sphere);
}

List<BoundingSphere> spheres = new List<BoundingSphere>();

CollisionManager()
{
    spheres = new List<BoundingSphere>();
}

void AddSphere(BoundingSphere boundingSphere)
{
    spheres.Add(boundingSphere);
}
Rather than a reference being added, it seems to be adding the values. I believe this is because BoundingSphere is a struct? How can I get round this? I tried the ref keyword, but the values still aren't being updated in the list.
To be straightforward, you can't, at least not directly. Structs are value types, and are thus passed and stored by value. Even judicious use of the ref keyword won't get around it, because the List<T> indexer can't return a reference to a value type.
The workarounds are to either turn your struct into a class, embed the struct inside a class, or just deal with the fact that it's a value type and treat it appropriately (i.e., don't try to modify local copies, but replace values in the list when they change; see the sketch below). The last option is, IMO, the best.
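A short sketch of that last option, using the CollisionManager from the question; MoveSphere and newPosition are hypothetical names, and BoundingSphere.Center is assumed to be the XNA field holding the sphere's position:
// Hypothetical helper on CollisionManager: update a stored sphere by replacing it.
public void MoveSphere(int index, Vector3 newPosition)
{
    BoundingSphere sphere = spheres[index]; // copies the value out of the list
    sphere.Center = newPosition;            // modify the local copy
    spheres[index] = sphere;                // write the whole value back into the list
}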
Value types are passed by value (this means that you're getting a fresh new copy in the method or in the container). To avoid this you can change your struct to a class, or add an interface to the struct declaration and box your struct by storing a reference to the interface instead.
But it seems that you're using a mutable struct, and that is very dangerous because you can face really surprising behavior (see "mutable structs considered harmful" for more details).
You'd have to change the definition of BoundingSphere from a struct to a class. This is impossible, since it's defined in an assembly outside of your control.
You can't box the structure, as every time you unbox it, you're going to get a copy of the structure you're holding.
That said, the only way you can do this (and this isn't a good idea, in my opinion) is by creating a class wrapper for the structure, and delegating all of the calls from the properties to the structure internally:
public class BoundingSphereWrapper
{
    // Set through constructor.
    private readonly BoundingSphere _boundingSphere = ...;

    // One of the forwarded calls.
    public ContainmentType Contains(BoundingBox box)
    {
        // Forward the call.
        return _boundingSphere.Contains(box);
    }

    // And so on...
}
Of course, you can't pass these class instances to members that expect a BoundingSphere, and you'd have to try to detect changes (which is nearly impossible, unless the instances are passed by reference) when you expose the underlying structure.
Mainly, though, you don't really want to do this; the designers of the type probably chose a structure for the following reasons:
While mutable (which is a no-no when dealing with structures), the lifetime is intended to be limited
There could be many of these instantiated at the same time, and it's more efficient to do this on the stack than to do it on the heap (that would cause lots of first generation garbage collections, which can definitely have an impact on performance on a gaming platform)

What's the method representation in memory?

While thinking a little bit about programming in Java/C#, I wondered how methods which belong to objects are represented in memory and how this relates to multithreading.
1. Is a method instantiated for each object in memory separately, or do all objects of the same type share one instance of the method?
2. If the latter, how does the executing thread know which object's attributes to use?
3. Is it possible to modify the code of a method in C# with reflection for one, and only one, object of many objects of the same type?
4. Is a static method which does not use class attributes always thread safe?
I tried to make up my mind about these questions, but I'm very unsure about their answers.
Each method in your source code (in Java, C#, C++, Pascal, I think every OO and procedural language...) has only one copy in binaries and in memory.
Multiple instances of one class have separate fields but all share the same method code. Technically there is a procedure that takes a hidden this parameter to provide the illusion of executing a method on an object. In reality you are calling a procedure and passing a structure (a bag of fields) to it along with the other parameters. Here is a simple Java class and more-or-less equivalent pseudo-C code:
class Foo {
    private int x;

    int mulBy(int y) {
        return x * y;
    }
}

Foo foo = new Foo();
foo.mulBy(3);
is translated to this pseudo-C code (the encapsulation is enforced by the compiler and runtime/VM):
struct Foo {
    int x = 0;
};

int Foo_mulBy(Foo *this, int y) {
    return this->x * y;
}

Foo *foo = new Foo();
Foo_mulBy(foo, 3);
You have to distinguish between code and the local variables and parameters it operates on (the data). Data is stored on the call stack, local to each thread. Code can be executed by multiple threads; each thread has its own copy of the instruction pointer (the place in the method it currently executes). Also, because this is a parameter, it is thread-local, so each thread can operate on a different object concurrently, even though it runs the same code.
That being said, you cannot modify a method of only one instance, because the method code is shared among all instances.
The Java specifications don't dictate how to do memory layout, and different implementations can do whatever they like, providing it meets the spec where it matters.
Having said that, the mainstream Oracle JVM (HotSpot) works off of things called oops - Ordinary Object Pointers. These consist of two words of header followed by the data which comprises the instance member fields (stored inline for primitive types, and as pointers for reference member fields).
One of the two header words - the class word - is a pointer to a klassOop. This is a special type of oop which holds pointers to the instance methods of the class (basically, the Java equivalent of a C++ vtable). The klassOop is kind-of a VM-level representation of the Class object corresponding to the Java type.
If you're curious about the low-level detail, you can find out a lot more by looking in the OpenJDK source for the definition of some of the oop types (klassOop is a good place to start).
tl;dr Java holds one blob of code for each method of each type. The blobs of code are shared among each instance of the type, and hidden this pointers are used to know which instance's members to use.
I am going to try to answer this in the context of C#. There are basically 3 different types of methods:
virtual
non-virtual
static
When your code is executed, you basically have two kinds of objects that are formed on the heap.
The object corresponding to the type of the object. This is called the Type Object. It holds the type object pointer, the sync block index, the static fields, and the method table.
The object corresponding to the object itself, which contains all the non static fields.
In response to your questions,
1. Is a method instantiated for each object in memory separately, or do all objects of the same type share one instance of the method?
This is a wrong way of understanding objects. All methods are per type only. Look at it this way. A method is just a set of instructions. The first time you call a particular method, the IL code is JITed into native instructions and saved in memory. The next time this is called, the address is picked up from the method table and the same instructions are executed again.
2. If the latter, how does the executing thread know which object's attributes to use?
Each static method call on a type results in looking up the method table of the corresponding Type Object and finding the address of the JITed instructions. In the case of methods that are not static, the relevant object on which the method is called is maintained on the thread's local stack; basically, you get the nearest object reference on the stack, and that is always the object on which we want the method to be called.
3. Is it possible to modify the code of a method in C# with reflection for one, and only one, object of many objects of the same type?
No, it is not possible (and I am thankful for that). The reason is that reflection only allows code inspection: even if you figure out what some method actually does, there is no way you are going to be able to change the code in the same assembly.

C# to C++ 'Gotchas'

I have been developing a project that I absolutely must develop part-way in C++. I need develop a wrapper and expose some C++ functionality into my C# app. I have been a C# engineer since the near-beginning of .NET, and have had very little experience in C++. It still looks very foreign to me when attempting to understand the syntax.
Is there anything that is going to knock me off my feet that would prevent me from just picking up C++ and going for it?
C++ has so many gotchas that I can't enumerate them all. Do a search for "C# vs C++". A few basic things to know:
In C++:
A struct and a class are basically the same thing (default visibility for a struct is public; it's private for a class).
Both struct and class can be created either on the heap or the stack.
You have to manage the heap yourself. If you create something with "new", you have to delete it manually at some point.
If performance isn't an issue and you have very little data to move around, you can avoid the memory management issue by having everything on the stack and using references (& operator).
Learn to deal with .h and .cpp files. Unresolved externals can be your worst nightmare.
You shouldn't call a virtual method from a constructor. The compiler will never tell you, so I'm telling you now.
A switch case doesn't enforce "break" and falls through by default.
There is no such thing as an interface. Instead, you have classes with pure virtual methods.
C++ aficionados are dangerous people living in cave and surviving on the fresh blood of C#/java programmers. Talk with them about their favorite language carefully.
Garbage collection!
Remember that every time you new an object, you are responsible for calling delete.
There are a lot of differences, but the biggest one I can think of that programmers coming from Java/C# always get wrong, and which they never realize they've got wrong, is C++'s value semantics.
In C#, you're used to using new any time you wish to create an object. And whenever we talk about a class instance, we really mean "a reference to the class instance". Foo x = y doesn't copy the object y, it simply creates another reference to whatever object y references.
In C++, there's a clear distinction between local objects, allocated without new (Foo f or Foo f(x, y)), and dynamically allocated ones (Foo* f = new Foo() or Foo* f = new Foo(x, y)). And in C# terms, everything is a value type: Foo x = y actually creates a copy of the Foo object itself.
If you want reference semantics, you can use pointers or references: Foo& x = y creates a reference to the object y. Foo* x = &y creates a pointer to the address at which y is located. And copying a pointer does just that: it creates another pointer, which points to whatever the original pointer pointed to. So this is similar to C#'s reference semantics.
Local objects have automatic storage duration -- that is, a local object is automatically destroyed when it goes out of scope. If it is a class member, then it is destroyed when the owning object is destroyed. If it is a local variable inside a function, it is destroyed when execution leaves the scope in which it was declared.
Dynamically allocated objects are not destroyed until you call delete.
So far, you're probably with me. Newcomers to C++ are taught this pretty soon.
The tricky part is in what this means, how it affects your programming style:
In C++, the default should be to create local objects. Don't allocate with new unless you absolutely have to.
If you do need dynamically allocated data, make it the responsibility of a class. A (very) simplified example:
class IntArrayWrapper {
public:
    // allocate memory in the constructor, and set arr to point to it
    explicit IntArrayWrapper(int size) : arr(new int[size]) {}
    // deallocate memory in the destructor
    ~IntArrayWrapper() { delete[] arr; }

    int* arr; // hold the pointer to the dynamically allocated array
};
this class can now be created as a local variable, and it will internally do the necessary dynamic allocations. And when it goes out of scope, it'll automatically delete the allocated array again.
So say we needed an array of x integers, instead of doing this:
void foo(int x) {
    int* arr = new int[x];
    // ... use the array ...
    // If the middle of the function throws an exception, delete will never be called,
    // so technically we should add a try/catch as well, and also call delete there.
    // Messy and error-prone.
    delete[] arr;
}
you can do this:
void foo(int x) {
    IntArrayWrapper arr(x);
    // ... use the array ...
    // no delete necessary
}
Of course, this use of local variables instead of pointers or references means that objects are copied around quite a bit:
Bar Foo() {
    Bar bar;
    // ... do something with bar ...
    return bar;
}
in the above, what we return is a copy of the bar object. We could return a pointer or a reference, but as the instance created inside the function goes out of scope and is destroyed the moment the function returns, we couldn't point to that. We could use new to allocate an instance that outlives the function, and return a pointer to that -- and then we get all the memory management headaches of figuring out whose responsibility it is to delete the object, and when that should happen. That's not a good idea.
Instead, the Bar class should simply be designed so that copying it does what we need. Perhaps it should internally call new to allocate an object that can live as long as we need it to. We could then make copying or assignment "steal" that pointer. Or we could implement some kind of reference-counting scheme where copying the object simply increments a reference counter and copies the pointer -- which should then be deleted not when the individual object is destroyed, but when the last object is destroyed and the reference counter reaches 0.
But often, we can just perform a deep copy, and clone the object in its entirety. If the object includes dynamically allocated memory, we allocate more memory for the copy.
It may sound expensive, but the C++ compiler is good at eliminating unnecessary copies (and is in fact in most cases allowed to eliminate copy operations even if they have side effects).
If you want to avoid copying even more, and you're prepared to put up with a little more clunky usage, you can enable "move semantics" in your classes as well as (or instead of) "copy semantics". It's worth getting into this habit because (a) some objects can't easily be copied, but they can be moved (e.g. a Socket class), (b) it's a pattern established in the standard library and (c) it's getting language support in the next version.
With move semantics, you can use objects as a kind of "transferable" container. It's the contents that move. In the current approach, it's done by calling swap, which swaps the contents of two objects of the same type. When an object goes out of scope, it is destructed, but if you swap its contents into a reference parameter first, the contents escape being destroyed when the scope ends. Therefore, you don't necessarily need to go all the way and use reference counted smart pointers just to allow complex objects to be returned from functions. The clunkiness comes from the fact that you can't really return them - you have to swap them into a reference parameter (somewhat similar to a ref parameter in C#). But the language support in the next version of C++ will address that.
So the biggest C# to C++ gotcha I can think of: don't make pointers the default. Use value semantics, and instead tailor your classes to behave the way you want when they're copied, created and destroyed.
A few months ago, I attempted to write a series of blog posts for people in your situation:
Part 1
Part 2
Part 3
I'm not 100% happy with how they turned out, but you may still find them useful.
And when you feel that you're never going to get a grip on pointers, this post may help.
No run-time checks
One C++ pitfall is the behaviour when you try to do something that might be invalid, but which can only be checked at runtime - for example, dereferencing a pointer that could be null, or accessing an array with an index that might be out of range.
The C# philosophy emphasises correctness; all behaviour should be well-defined and, in cases like this, it performs a run-time check of the preconditions and throws well-defined exceptions if they fail.
The C++ philosophy emphasises efficiency, and the idea that you shouldn't pay for anything you might not need. In cases like this, nothing will be checked for you, so you must either check the preconditions yourself or design your logic so that they must be true. Otherwise, the code will have undefined behaviour, which means it might (more or less) do what you want, it might crash, or it might corrupt completely unrelated data and cause errors that are horrendously difficult to track down.
Just to throw in some others that haven't been mentioned yet by other answers:
const: C# has a limited idea of const. In C++, 'const-correctness' is important. Functions that don't modify their reference parameters should take const references, e.g.:
void func(const MyClass& x)
{
    // x cannot be modified, and you can't call non-const methods on x
}
Member functions that don't modify the object should be marked const, ie.
int MyClass::GetSomething() const // <-- here
{
    // Doesn't modify the instance of the class
    return some_member;
}
This might seem unnecessary, but is actually very useful (see the next point on temporaries), and sometimes required, since libraries like the STL are fully const-correct, and you can't cast const things to non-const things (don't use const_cast! Ever!). It's also useful for callers to know something won't be changed. It is best to think about it this way: if you omit const, you are saying the object may be modified.
Temporary objects: As another answer mentioned, C++ is much more about value-semantics. Temporary objects can be created and destroyed in expressions, for example:
std::string str = std::string("hello") + " world" + "!";
Here, the first + creates a temporary string with "hello world". The second + combines the temporary with "!", giving a temporary containing "hello world!", which is then copied to str. After the statement is complete, the temporaries are immediately destroyed. To further complicate things, C++0x adds rvalue references to solve this, but that's way out of the scope of this answer!
You can also bind temporary objects to const references (another useful part of const). Consider the previous function again:
void func(const MyClass& x)
This can be called explicitly with a temporary MyClass:
func(MyClass()); // create temporary MyClass - NOT the same as 'new MyClass()'!
A MyClass instance is created on the stack, func accesses it, and then the temporary MyClass is destroyed automatically after func returns. This is convenient and also usually very fast, since the heap is not involved. Note that 'new' returns a pointer - not a reference - and requires a corresponding 'delete'. You can also directly assign temporaries to const references:
const int& blah = 5; // 5 is a temporary
const MyClass& myClass = MyClass(); // creating temporary MyClass instance
// The temporary MyClass is destroyed when the const reference goes out of scope
Const references and temporaries are frequent in good C++ style, and the way these work is very different to C#.
RAII, exception safety, and deterministic destructors. This is actually a useful feature of C++, possibly even an advantage over C#, and it's worth reading up on since it's also good C++ style. I won't cover it here.
Finally, I'll just throw in that 'this' is a pointer, not a reference :)
The traditional stumbling blocks for people coming to C++ from C# or Java are memory management and polymorphic behavior:
While objects always live on the heap and are garbage collected in C#/Java, in C++ you can have objects in static storage, on the stack, or on the heap ('free store' in standard speak). You have to clean up the stuff you allocate from the heap (new/delete). An invaluable technique for dealing with that is RAII.
Inheritance/polymorphism only work through a pointer or reference in C++.
There are many others, but these will probably get you first.
Virtual destructors.
Header files! You'll find yourself asking, "so why do I need to write method declarations twice every time?"
Pointers and Memory Allocation
...I'm a C# guy too and I'm still trying to wrap my head around proper memory practices in C/C++.
There is a brief overview of Managed C++ available, as well as an article about writing an unmanaged wrapper using Managed C++ and another article about mixing unmanaged with managed C++ code.
Using Managed C++ would, IMHO, make it easier to bridge to the C# world and vice versa.
Hope this helps,
Best regards,
Tom.
The biggest difference is C#'s reference semantics (for most types) vs. C++'s value semantics. This means that objects are copied far more often than they are in C#, so it's important to ensure that objects are copied correctly. This means implementing a copy constructor and operator= for any class that has a destructor.
Raw memory twiddling. Unions, memsets, and other direct memory writes. Anytime someone writes to memory as a sequence of bytes (as opposed to as objects), you lose much of the ability to reason about the code.
Linking
Linking with external libraries is not as forgiving as it is in .NET; $DEITY help you if you mix things compiled against different flavors of the same msvcrt (debug, multithreaded, Unicode...).
Strings
And you'll have to deal with Unicode vs. ANSI strings; they are not exactly the same.
Have fun :)
The following isn't meant to dissuade in any way :D
C++ is a minefield of gotchas. It's relatively tame if you don't use templates and the STL -- and just use object orientation -- but even then it is a monster. In that case, object-based programming (rather than object-oriented programming) makes it even tamer; this form of C++ is often enforced in certain projects (i.e., don't use any features that have even a chance of being used naively).
However, you should learn all those things, as it's a very powerful language if you do manage to traverse the minefield. If you want to learn about the gotchas, you had better get the books by Herb Sutter, Scott Meyers, and Bjarne Stroustrup. Also, systematically going over the C++ FAQ Lite will help you realize that it indeed does require 10 or so books to turn into a good C++ programmer.

Global member vs. passing parameters

I have an ASP.NET project in which I have a method with 10 local variables. This method calls about 10 other methods; 3 of the called methods need all the variables. Is it considered good practice to turn all those variables into global members so that they don't have to be passed as parameters?
If you want to pass complex state around, package it in an object - i.e.
public class Foo {
    public string Key { get; set; }
    public decimal Quantity { get; set; }
    // etc
}
And have the methods accept this object as an argument. Then you just create an instance of this and pass it along.
Global is a big no-no; ASP.NET is highly threaded - this would be a nightmare. Per-request state is possible, but a bit messy.
Do these variables relate to each other, either all of them or perhaps into a few groups? If so, encapsulate them into a type. So you might have half of your variables relating to a user, and half relating to a requested operation - and suddenly your method taking 10 variables only takes 2.
Making things global is almost always the wrong way to go.
Create a structure instead and pass that structure instead of passing those 10 parameters.
E.g.:
public struct user
{
    public string FirstName;
    public string LastName;
    public string zilionotherproperties;
    public bool SearchByLastNameOnly;
    public DateTime date1;
}
Well, it depends entirely on what you mean by "global members".
If, considering you're writing an ASP.NET application, you mean session- or application-based cache values, then it depends. There are performance implications, so you should measure to see whether it has any impact on your app.
If you mean static variables, then no. Static state is per application, and will thus be shared by all users of your web application, not just one person. Thread-static is not a good idea either, as a single user may float between threads during their lifetime in the application.
If you have methods that truly do act upon a large number of variables, such as you mention, you can also consider designing a class that has the purpose of acting as a data container. Once populated, you can then pass the class to the functions that require the data instead of ten parameters.
I cannot remember the exact example offhand, but in Microsoft's "Framework Design Guidelines" book, they explicitly describe a scenario like yours, as well as how they have followed the same approach in the .NET Framework.
Also, if you have the need to pass that many parameters, take a step back and make sure that the code in question does not need to be refactored. There are legitimate cases where a lot of data is needed by a method, but I use long method signatures as a sign that I need to look inside the method to make sure it is doing only what it needs to.
Just be conscious of where your data ends up living. If you are passing 10 reference types around, it comes down to personal preference.
However, if you are dealing with 10 value types, declaring them as member variables of a class means they are stored on the heap as part of that object, rather than on the stack. (Strictly speaking this is not boxing - boxing only happens when a value type is converted to object or to an interface type - but the data does move to the heap.)
If you leave them confined as local variables within the method (passing them as parameters), they will remain purely on the stack.
For a purely mechanical refactoring, packaging the values together (as suggested) is probably the best solution.
However, you have a large series of dependent methods, each of which acts on common state (at least 10 values). It sounds like you should design a class to handle this operation.
The class would encapsulate the behavior and relevant state, rather than be a simple property bag (see Anemic Domain Model).
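For example, here is a rough sketch of that direction (OrderProcessor, the field names, and the helper methods are purely illustrative):
// Encapsulates the shared state and the behaviour that acts on it,
// instead of passing ten parameters between helper methods.
public class OrderProcessor
{
    private readonly string key;
    private readonly decimal quantity;
    // ... the other values the helper methods need ...

    public OrderProcessor(string key, decimal quantity /*, ... */)
    {
        this.key = key;
        this.quantity = quantity;
    }

    public void Process()
    {
        Validate();
        Calculate();
        Save();
    }

    private void Validate() { /* uses key, quantity, ... */ }
    private void Calculate() { /* ... */ }
    private void Save() { /* ... */ }
}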
