Imagine that I have several Viewer components that are used for displaying text, and they have a few modes the user can switch between (different font presets for viewing text/binary/hex).
What would be the best approach for managing shared objects - for example fonts, the find dialog, etc.? I figured a static class with lazily initialized objects would be OK, but this might be the wrong idea.
static class ViewerStatic
{
    private static Font monospaceFont;

    public static Font MonospaceFont
    {
        get
        {
            if (monospaceFont == null)
            {
                //TODO read font settings from configuration
                monospaceFont = new Font(FontFamily.GenericMonospace, 9, FontStyle.Bold);
            }
            return monospaceFont;
        }
    }

    private static Font sansFont;

    public static Font SansFont
    {
        get
        {
            if (sansFont == null)
            {
                //TODO read font settings from configuration
                sansFont = new Font(FontFamily.GenericSansSerif, 9, FontStyle.Bold);
            }
            return sansFont;
        }
    }
}
For items you wish to create once and then re-use, there are two relevant patterns: Singleton and Cache. If you will re-use the item forever, a Singleton is fine; the memory allocated to that instance will never be reclaimed. If you will re-use the item for a while, but that feature might then go unused for a few days, I suggest using a cache, so the memory can be reclaimed when the item is no longer in use.
If you are using the Singleton, you probably want to initialize the fonts directly rather than using the lazy-init pattern. To me, fonts sound pretty simple and unlikely to fail. However, if the item might fail during construction (perhaps due to a missing font file or something), then the lazy pattern at least allows a retry next time. A static initializer cannot be re-run later if it fails, short of restarting the whole application. Be careful to limit those retries!
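As a hedged sketch of that lazy-init idea, .NET's Lazy<T> can replace the hand-rolled null check and is thread-safe; with LazyThreadSafetyMode.PublicationOnly a failed factory can be retried on the next access (with the default mode the exception is cached):

private static readonly Lazy<Font> monospaceFont = new Lazy<Font>(
    () => new Font(FontFamily.GenericMonospace, 9, FontStyle.Bold), // TODO: read font settings from configuration
    System.Threading.LazyThreadSafetyMode.PublicationOnly);         // a failed construction is not cached, so it can be retried

public static Font MonospaceFont
{
    get { return monospaceFont.Value; }
}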
Finally, the name of your class, "ViewerStatic", raises a concern. There is an anti-pattern known as the "God" object. I call it the "bucket": if you create it, stuff will come. You will soon find all kinds of things being dumped into the bucket, and your ViewerStatic class will become huge. It would be better to have one class called "FontFlyWeights", another called "ConstantStrings" or "SystemDialogFactory", and so on.
That seems fine to me, but is it really necessary? The simple approach would be to just create new fonts and dialogs when you need them, then Dispose them if necessary and let the garbage collector clean them up.
Have you measured to see if the simple approach has a noticeable cost that makes it worth adding the complexity of caching shared objects?
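For example, the simple approach might look like this inside a Paint handler (the handler, the displayedText field, and the drawing bounds are assumptions here, not from the question):

private void Viewer_Paint(object sender, PaintEventArgs e)
{
    // Create the font on demand and let using/Dispose clean it up.
    using (var mono = new Font(FontFamily.GenericMonospace, 9, FontStyle.Bold))
    {
        e.Graphics.DrawString(displayedText, mono, Brushes.Black, ClientRectangle);
    }
}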
I am tasked with writing a system to process result files created by a different process (which I have no control over), and I am trying to modify my code to make use of Parallel.ForEach. The code works fine when just calling a foreach, but I have some concerns about thread safety when using the parallel version. The base question I need answered here is "Is the way I am doing this going to guarantee thread safety?", or is this going to cause everything to go sideways on me?
I have tried to make sure all calls are to instances and have removed every static anything except the initial static void Main. It is my current understanding that this will do a lot toward assuring thread safety.
I have basically the following, edited for brevity
static void Main(string[] args)
{
    MyProcess process = new MyProcess();
    process.DoThings();
}
And then in the actual process to do stuff I have
public class MyProcess
{
    public void DoThings()
    {
        // Get some list of things
        List<Thing> things = getThings();

        Parallel.ForEach(things, item =>
        {
            // based on some criteria, take actions from MyActionClass
            MyActionClass myAct = new MyActionClass(item);
            string tempstring = myAct.DoOneThing();
            if (somecondition)
            {
                myAct.DoOtherThing();
            }
            // ...other similar calls to myAct below here
        });
    }
}
And over in the MyActionClass I have something like the following:
public class MyActionClass
{
    private Thing _thing;

    public MyActionClass(Thing item)
    {
        _thing = item;
    }

    public string DoOneThing()
    {
        return _thing.GetSubThings().FirstOrDefault();
    }

    public void DoOtherThing()
    {
        _thing.property1 = "Somenewvalue";
    }
}
If I can explain this any better I'll try, but I think that's the basics of my needs
EDIT:
Something else I just noticed. If I change the value of a property of the item I'm working with while inside the Parallel.ForEach (in this case, a string value that gets written to a database inside the loop), will that have any effect on the rest of the loop iterations or just the one I'm on? Would it be better to create a new instance of Thing inside the loop to store the item I'm working with in this case?
There is no shared mutable state between actions in the Parallel.ForEach that I can see, so it should be thread-safe: each item, and the MyActionClass instance created for it, is only touched by the single thread processing that iteration.
But, as mentioned, that only covers what can be seen here. It doesn't mean that everything in the actual code you run is as good as it looks in this excerpt.
Nor does it guarantee that nothing will later be changed by you or a coworker that makes some state both shared and mutable (inside Thing, for example), at which point you start getting hard-to-reproduce crashes at best, or plain wrong behaviour at worst, which can go undetected for a long time.
So, perhaps you should try to go fully immutable near threading code?
Perhaps.
Immutability is good, but it is not a silver bullet: it is not always easy to use and implement, and not every task can reasonably be expressed through immutable objects. And even an immutable design can suffer the same accidental "make it shared and mutable" change, though that is much less likely.
It should at least be considered as a possible option/alternative.
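As a rough sketch of what that could look like for Thing (the property and method names are assumptions based on the question's code), state changes would produce a new instance instead of mutating a shared one:

public sealed class Thing
{
    public string property1 { get; }                    // read-only after construction; name mirrors the question
    private readonly IReadOnlyList<string> _subThings;

    public Thing(string property1, IReadOnlyList<string> subThings)
    {
        this.property1 = property1;
        _subThings = subThings;
    }

    public IReadOnlyList<string> GetSubThings()
    {
        return _subThings;
    }

    // "Changing" property1 yields a new Thing, leaving the original untouched.
    public Thing WithProperty1(string newValue)
    {
        return new Thing(newValue, _subThings);
    }
}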
About the EDIT
If I change the value of a property of the item I'm working with while inside the Parallel.ForEach (in this case, a string value that gets written to a database inside the loop), will that have any effect on the rest of the loop iterations or just the one I'm on?
If you change a property and that object is not used anywhere else, and it doesn't rely on some global mutable state (for example, a public static Int32 ChangesCount that increments with each state change), then you should be safe.
As for "a string value that gets written to a database inside the loop" - depending on the data access technology and how you use it, you may be in trouble, because most of them are not designed for a multithreaded environment - EF's DbContext, for example. And obviously do not forget that dealing with concurrent access in a database is not always easy, though that is a bit outside our original topic.
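If Entity Framework were the data access technology here (an assumption; the question doesn't say), a common way around DbContext not being thread-safe is to give each iteration its own short-lived context. MyDbContext, Results and ResultRow below are hypothetical names:

Parallel.ForEach(things, item =>
{
    MyActionClass myAct = new MyActionClass(item);

    using (var db = new MyDbContext())                   // one context per iteration, never shared across threads
    {
        db.Results.Add(new ResultRow { Value = myAct.DoOneThing() });
        db.SaveChanges();                                // each thread commits through its own context
    }
});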
As for "Would it be better to create a new instance of Thing inside the loop to store the item I'm working with" - if there is no risk of external concurrent changes, then it is just unnecessary work. And if there is a chance of other threads (not the Parallel.ForEach ones) making changes to the objects that are being persisted, then you already have bigger problems than Parallel.ForEach.
Objects should always have an observably consistent state (not one where half the properties were set by one thread and half by another while you try to persist who-knows-what), and if they are used by many threads, they should already be thread-safe - there should be no way to put them into an inconsistent state.
And if such objects are to be persisted by external code, they should probably provide:
Either a SyncRoot property to synchronize property-reading code.
Or some current state snapshot DTO that is created internally by some thread-safe method like ThingSnapshot Thing.GetCurrentData() { lock() {} }.
Or something more exotic.
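A hedged sketch of that snapshot option (ThingSnapshot and the property names are illustrative, not from the question):

public sealed class ThingSnapshot
{
    public string Property1 { get; }

    public ThingSnapshot(string property1)
    {
        Property1 = property1;
    }
}

public class Thing
{
    private readonly object _sync = new object();
    private string _property1;

    // Build an immutable snapshot under the lock so persisting code
    // never observes a half-updated object.
    public ThingSnapshot GetCurrentData()
    {
        lock (_sync)
        {
            return new ThingSnapshot(_property1);
        }
    }
}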
I am working on a project where individual regions of a map are either generated dynamically, or loaded from a file if they have already been generated and saved. Regions are only loaded/generated as needed, and saved and discarded when they aren't needed anymore.
There are several different tasks that will be using one or more regions of this map for various purposes. For instance, one of these tasks will be to draw all currently visible regions (about 9 at any given time). Another is to get information about, or even modify regions.
The problem is that these tasks may or may not be working with the same regions as other tasks.
Since these regions are rather large, and are costly to generate, it would be problematic (for these and other reasons) to use different copies for each task.
Rather, I think it would be a good idea to create and manage a pool of currently loaded regions. New tasks will first check the pool for their required region. They can then use it if it exists, or else create a new one and add it to the pool.
Provided that works, how would I manage this pool? How would I determine if a region is no longer needed by any tasks and can be safely discarded? Am I being silly and overcomplicating this?
I am using C#, if that matters to anyone.
Edit:
Now that I'm more awake, would it be as simple as incrementing a counter in each region for each place it's used, then discarding it when the counter reaches 0?
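That counter idea is essentially reference counting; a minimal sketch (the Acquire/Release names are made up here) could look like:

public class Region
{
    private int _refCount;

    // Call when a task starts using the region.
    public void Acquire()
    {
        Interlocked.Increment(ref _refCount);
    }

    // Call when a task is done; returns true when nobody uses the region anymore,
    // so the caller can save and discard it.
    public bool Release()
    {
        return Interlocked.Decrement(ref _refCount) == 0;
    }
}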
Provided that works, how would I manage this pool? How would I determine if a region is no longer needed by any tasks and can be safely discarded?
A simple way of doing this can be to use weak references:
using System;
using System.Collections.Generic;
using System.Linq;

public class RegionStore
{
    // I'm using int as the identifier for a region.
    // Obviously this must be some type that can serve as
    // an ID according to your application's logic.
    private Dictionary<int, WeakReference<Region>> _store = new Dictionary<int, WeakReference<Region>>();

    private const int TrimThreshold = 1000; // Profile to find a good value here.
    private int _addCount = 0;

    public bool TryGetRegion(int id, out Region region)
    {
        WeakReference<Region> wr;
        if (!_store.TryGetValue(id, out wr))
        {
            region = null;
            return false;
        }

        if (wr.TryGetTarget(out region))
            return true;

        // Clean up space in dictionary.
        _store.Remove(id);
        return false;
    }

    public void AddRegion(int id, Region region)
    {
        if (++_addCount >= TrimThreshold)
            Trim();

        _store[id] = new WeakReference<Region>(region);
    }

    public void Remove(int id)
    {
        _store.Remove(id);
    }

    private void Trim()
    {
        // Remove dead keys.
        // Profile to test if this is really necessary.
        // If you were fully implementing this, rather than delegating to Dictionary,
        // you'd likely see if this helped prior to an internal resize.
        _addCount = 0;
        var keys = _store.Keys.ToList();
        Region region;
        foreach (int key in keys)
            if (!_store[key].TryGetTarget(out region))
                _store.Remove(key);
    }
}
Now you have a store of your Region objects, but that store doesn't prevent them from being garbage collected if no other references to them exist.
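Usage might look roughly like this (LoadOrGenerateRegion is a placeholder for however you already load or generate regions):

public Region GetRegion(RegionStore store, int id)
{
    Region region;
    if (!store.TryGetRegion(id, out region))
    {
        region = LoadOrGenerateRegion(id);   // hypothetical: load from file or generate anew
        store.AddRegion(id, region);
    }
    return region;
}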
Certain tasks will be modifying regions. In this case I will likely raise an "update" flag in the region object, and from there update all other tasks using it.
Do note that this will be a definite potential source of bugs in the application as a whole. Mutability complicates any sort of caching. If you can move to an immutable model, it will likely simplify things, but then the use of outdated objects brings its own complications.
OK, I don't know how you have your app designed, but I suggest you have a look at this.
You can also use a static to share your variable with other tasks, but then you may want to use lock blocks to prevent writing to or reading from that variable while other tasks are using it. (here)
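A minimal sketch of that lock-block idea (the shared Region variable here is just illustrative):

public static class SharedRegions
{
    private static readonly object _sync = new object();
    private static Region _current;

    public static Region Current
    {
        get { lock (_sync) { return _current; } }   // readers wait for any writer in progress
        set { lock (_sync) { _current = value; } }  // writers wait for readers and other writers
    }
}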
In my App.cs I have the following:
private static LayoutManager layoutManager;

public static LayoutManager LayoutManager
{
    get { return layoutManager ?? (layoutManager = new LayoutManager()); }
    set { layoutManager = value; }
}
I need to access this variable from another library, so I defined it in the App XAML so I could use Application.Current.FindResource("LayoutManager") without having to reference the project that contains the App, because I would get a circular dependency:
<Managers:LayoutManager x:Key="LayoutManager"/>
Is adding an object to the resources the best option?
What are the best programming practices in such a case?
The two methods are essentially the same; the difference is mainly semantic.
As for your first question, adding an entry to a resource dictionary creates a new object and places it in a dictionary of that scope (App, window, panel, etc.). This applies to anything you place in a resource dictionary, so the real question is what to place there. Resources placed in XAML are usually used by the XAML code (or something that affects it); a resource dictionary usually keeps styles, animations and so forth. You can, of course, place anything you like there, but it's less common.
As for the best practice in this situation, I think you'll do better to place the object in a static property, since you access it from C#, not from XAML. It gives you a small performance boost over locating the resource at runtime, and you don't have to rely on a magic string that won't give you a compile-time error if the property changes.
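For comparison, the two access paths look like this (assuming the static property can live on a class that both sides are able to reference, which sidesteps the circular dependency mentioned in the question):

// Lookup through the resource dictionary: a magic string, checked only at runtime.
var viaResource = (LayoutManager)Application.Current.FindResource("LayoutManager");

// Lookup through the static property: checked at compile time.
var viaProperty = App.LayoutManager;   // or a static on some shared "managers" class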
Introduction
I just thought of a new design pattern. I'm wondering if it exists, and if not, why not (or why I shouldn't use it).
I'm creating a game using OpenGL. In OpenGL, you often want to "bind" things - i.e., make them the current context for a little while, and then unbind them. For example, you might call glBegin(GL_TRIANGLES), then draw some triangles, then call glEnd(). I like to indent all the stuff in between so it's clear where it starts and ends, but my IDE likes to unindent it because there are no braces. Then I thought we could do something clever! It basically works like this:
using (GL.Begin(BeginMode.Triangles)) {
    // draw stuff
}
GL.Begin returns a special DrawBind object (with an internal constructor) that implements IDisposable, so it automatically calls GL.End() at the end of the block. This way everything stays nicely aligned, and you can't forget to call the end function.
Is there a name for this pattern?
Usually when I see using used, it looks like this:
using (var x = new Whatever()) {
    // do stuff with `x`
}
But in this case, we don't need to call any methods on our 'used' object, so we don't need to assign it to anything and it serves no purpose other than to call the corresponding end function.
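A minimal sketch of the wrapper being described (the names are assumptions; NativeGL stands in for whatever binding library actually exposes Begin/End):

public sealed class DrawBind : IDisposable
{
    internal DrawBind(BeginMode mode)       // internal constructor, as described above
    {
        NativeGL.Begin(mode);               // enter the drawing context
    }

    public void Dispose()
    {
        NativeGL.End();                     // runs automatically when the using block exits
    }
}

// The GL.Begin shown earlier would then simply be:
// public static DrawBind Begin(BeginMode mode) { return new DrawBind(mode); }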
Example
For Anthony Pegram, who wanted a real example of code I'm currently working on:
Before refactoring:
public void Render()
{
    _vao.Bind();
    _ibo.Bind(BufferTarget.ElementArrayBuffer);

    GL.DrawElements(BeginMode.Triangles, _indices.Length, DrawElementsType.UnsignedInt, IntPtr.Zero);

    BufferObject.Unbind(BufferTarget.ElementArrayBuffer);
    VertexArrayObject.Unbind();
}
After refactoring:
public void Render()
{
using(_vao.Bind())
using(_ibo.Bind(BufferTarget.ElementArrayBuffer))
{
GL.DrawElements(BeginMode.Triangles, _indices.Length, DrawElementsType.UnsignedInt, IntPtr.Zero);
}
}
Notice that there's a second benefit: the object returned by _ibo.Bind also remembers which BufferTarget I want to unbind. It also draws your attention to GL.DrawElements, which is really the only significant statement in that function (the one that does something noticeable), and hides away those lengthy unbind statements.
I guess the one downside is that I can't interleave buffer targets with this method. I'm not sure when I would ever want to, but I would have to keep a reference to the bind object and call Dispose manually, or call the end function manually.
Naming
If no one objects, I'm dubbing this the Disposable Context Object (DCO) idiom.
Problems
JasonTrue raised a good point: in this scenario (OpenGL buffers), nested using statements would not work as expected, as only one buffer can be bound at a time. We can remedy this, however, by expanding the "bind object" to use stacks:
public class BufferContext : IDisposable
{
    private readonly BufferTarget _target;
    private static readonly Dictionary<BufferTarget, Stack<int>> _handles;

    static BufferContext()
    {
        _handles = new Dictionary<BufferTarget, Stack<int>>();
    }

    internal BufferContext(BufferTarget target, int handle)
    {
        _target = target;

        if (!_handles.ContainsKey(target))
            _handles[target] = new Stack<int>();

        _handles[target].Push(handle);
        GL.BindBuffer(target, handle);
    }

    public void Dispose()
    {
        // Pop our own handle, then rebind whatever was bound before us (or 0 for "nothing").
        _handles[_target].Pop();
        int handle = _handles[_target].Count > 0 ? _handles[_target].Peek() : 0;
        GL.BindBuffer(_target, handle);
    }
}
Edit: Just noticed a problem with this. Before, if you didn't Dispose() of your context object there wasn't really any consequence; the context just wouldn't switch back to whatever it was. Now, if you forget to Dispose of it inside some kind of loop, you'll wind up with an ever-growing stack. Perhaps I should limit the stack size...
A similar tactic is used in ASP.NET MVC with the HtmlHelper. See http://msdn.microsoft.com/en-us/library/system.web.mvc.html.formextensions.beginform.aspx (using (Html.BeginForm()) { ... })
So there's at least one precedent for using this pattern for something other than the obvious "need" for IDisposable for unmanaged resources like file handles, database or network connections, fonts, and so on. I don't think there's a special name for it, but in practice, it seems to be the C# idiom that serves as the counterpart to the C++ idiom, Resource Acquisition is Initialization.
When you're opening a file, you're acquiring, and guaranteeing the disposal of, a file context; in your example, the resource you're acquiring is a "binding context", in your words. While I've heard "Dispose pattern" or "using pattern" used to describe the broad category, essentially "deterministic cleanup" is what you're talking about: you're controlling the lifetime of the object.
I don't think it's really a "new" pattern, and the only reason it stands out in your use case is that apparently the OpenGL implementation you're depending on didn't make a special effort to match C# idioms, which requires you to build your own proxy object.
The only thing I'd worry about is if there are any non-obvious side effects, if, for example, you had a nested context where there were similar using constructs deeper in your block (or call stack).
ASP.NET/MVC uses this (optional) pattern to render the beginning and ending of a <form> element like this:
@using (Html.BeginForm()) {
    <div>...</div>
}
This is similar to your example in that you are not consuming the value of your IDisposable other than for its disposal semantics. I've never heard a name for this, but I've used this sort of thing before in other similar scenarios, and never considered it as anything other than understanding how to leverage the using block with IDisposable, similar to how we can tap into the foreach semantics by implementing IEnumerable.
I would say this is more an idiom than a pattern. Patterns usually are more complex, involving several moving parts, while idioms are just clever ways to do things in code.
In C++ it is used quite a lot. Whenever you want to acquire something or enter a scope, you create an automatic variable (i.e., on the stack) of a class whose constructor begins or creates or does whatever needs to be done on entry. When you leave the scope where the automatic variable is declared, the destructor is called. The destructor should then end or delete or do whatever is required to clean up.
class Lock {
private:
    CriticalSection* criticalSection;
public:
    Lock() {
        criticalSection = new CriticalSection();
        criticalSection->Enter();
    }
    ~Lock() {
        criticalSection->Leave();
        delete criticalSection;
    }
};

void F() {
    Lock lock;
    // Everything in here is executed in a critical section and it is exception safe.
}
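For comparison, a rough C# counterpart of that Lock class, sketched with using and Monitor (illustrative only, not anyone's production code):

using System;
using System.Threading;

public sealed class LockGuard : IDisposable
{
    private readonly object _sync;

    public LockGuard(object sync)
    {
        _sync = sync;
        Monitor.Enter(_sync);    // acquire on construction
    }

    public void Dispose()
    {
        Monitor.Exit(_sync);     // release when the using block exits
    }
}

// usage:
// using (new LockGuard(_sync)) { /* critical section */ }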
I created a class a while back. I used List.Add(this) inside the class so I could access the controls I created later. It seemed very useful, and I don't know another way to create controls (more than one in the same parent control, without a predefined limit) and access them later.
I was looking for Add(this) on the internet and couldn't find any more information on it.
Is this a large resource hog or ineffective? Why can't I find more information on it? It seems very useful.
public class GlobalData
{
    private static List<Member> _Members;

    public partial class ChildrenPanel
    {
        private static List<ChildrenPanel> _ListCP = new List<ChildrenPanel>();

        // (Control fields such as _pnlStudent, _lblCLastName, etc. are declared elsewhere in the class.)

        // X and Y position the panel | Container is the control receiving the control
        public void CreatePanel(int X, int Y, Panel Container)
        {
            //
            // pnlStudent
            //
            _pnlStudent.BorderStyle = System.Windows.Forms.BorderStyle.Fixed3D;
            _pnlStudent.Controls.Add(_lblCLastName);
            _pnlStudent.Controls.Add(_lblCFirstName);
            _pnlStudent.Controls.Add(_lblGrade);
            _pnlStudent.Controls.Add(_lblSelected);
            _pnlStudent.Controls.Add(_lblSeason);
            _pnlStudent.Controls.Add(_lblAvailable);
            _pnlStudent.Controls.Add(_lblGender);
            _pnlStudent.Controls.Add(_ddlGrade);
            _pnlStudent.Controls.Add(_ddlSelectedSports);
            _pnlStudent.Controls.Add(_ddlAvailableSports);
            _pnlStudent.Controls.Add(_ddlSeason);
            _pnlStudent.Controls.Add(_rdbFemale);
            _pnlStudent.Controls.Add(_rdbMale);
            _pnlStudent.Controls.Add(_btnRemoveChild);
            _pnlStudent.Controls.Add(_btnRemoveSport);
            _pnlStudent.Controls.Add(_btnAddSport);
            _pnlStudent.Controls.Add(_txtCLastName);
            _pnlStudent.Controls.Add(_txtCFirstName);
            _pnlStudent.Location = new System.Drawing.Point(X, Y);
            _pnlStudent.Name = "pnlStudent";
            _pnlStudent.Size = new System.Drawing.Size(494, 105);
            // Still playing with the tab index
            _pnlStudent.TabIndex = 10;

            // Adds the panel to the selected form's container
            Container.Controls.Add(_pnlStudent);

            // Keeps a list of created panels inside the class
            _ListCP.Add(this);
        }
    }
}
Just make sure that you Remove the instance again when it's no longer needed, otherwise the List holding a reference to it will keep it in memory forever (Welcome to memory leaks in .NET after all).
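For instance, a matching cleanup method might look something like this (hypothetical; mirror it to however the panel is actually torn down):

public void DestroyPanel(Panel container)
{
    container.Controls.Remove(_pnlStudent);
    _pnlStudent.Dispose();

    // Without this line the static list keeps a reference and the
    // ChildrenPanel instance can never be garbage collected.
    _ListCP.Remove(this);
}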
I may revise this answer once I see some code, but my initial response is that it is not a resource hog. As to whether it is effective or not, some example code will be required.
Adding an object to a collection does not take up a large amount of resources, because you are simply adding a reference to the object to the collection. You still only have a single object, but two (or more) variables that point to it, so the only extra resource you are using is the minimal memory taken by the references.
If your List is static or otherwise globally available, then you're doing something very bad.
ASP.NET is structured such that every request to your page - including postbacks - from every user results in a new instance of the page class. That's a lot of page instances. If references to all these instances are saved somewhere, the instances can never be garbage collected. You've created something analogous to a memory leak, and you'll quickly find yourself running out of resources after you deploy to production.
The really dangerous thing here is that if you only do functional testing and no load testing the problem will likely not show up during your tests at all, because it will work fine for a few hundred (maybe even thousand) requests before blowing up on you.
If you're worried about dynamic controls, there are several better ways to handle this:
Put a fixed limit on the maximum number of controls you will allow, and add all of them to the page up front. Then only show/render them (toggled via the .Visible property) as you need them.
Make it data-driven. Rather than dynamically adding a control, insert a row into a database table and then bind a query on that table to a repeater or other data control (my preferred method; see the sketch after this list).
Just make sure you're recreating every dynamic control you need at the right place (Pre-Init) in the page lifecycle.
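A rough sketch of that data-driven option (the Repeater ID, the markup, and the data source method are all assumptions):

// Markup (assumed): <asp:Repeater ID="rptStudents" runat="server"> ...item template... </asp:Repeater>
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Each row describes one "dynamic" panel; the Repeater renders one item per row.
        rptStudents.DataSource = GetStudentRows();   // hypothetical query over a Students table
        rptStudents.DataBind();
    }
}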