Just a simple question on data updating.
Suppose I have a TextBox called txtBox1 and I want to update the value of a string variable called foo.
Which gives the best performance and is the better thing to do?
// The lengthier code but will check if the value is the same before updating.
if (foo != txtBox1.Text)
foo = txtBox1.Text;
or
// The shorter code, but will update it regardless of whether it's the same value
foo = txtBox1.Text;
It really depends on what you do with the foo variable.
If updating foo involves updating other parts of your application (via data binding for example) then yes, you should only update it when necessary.
Original Answer
Warning: I messed up... this answer applies to the opposite case, that is:
txtBox1.Text = foo
It may depend on what TextBox you are using...
I haven't reviewed all the classes with that name in the .NET Framework from Microsoft, but I can tell you that for System.Windows.Forms.TextBox the check is done internally, so doing it yourself is a waste. This is probably the case for the others as well.
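To illustrate, the guard inside the setter conceptually looks something like this (a simplified sketch, not the actual framework source; GetWindowText and SetWindowText are hypothetical stand-ins for the internals):
// Conceptual illustration only, NOT the actual framework source.
public string Text
{
    get { return GetWindowText(); }
    set
    {
        if (GetWindowText() != value)   // redundant assignments are skipped here
        {
            SetWindowText(value);
        }
    }
}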
New Answer
Note: This is an edit based on the comments. It is taken for granted that the objective is to keep track of the modifications of the textbox and that we are working in Windows Forms or a similar desktop forms solution (that may be WinForms, WPF, GTK#, etc.).
IF you need every value...
TextChanged is the way to go if you want a log or undo feature where you need to offer every value the textbox has held.
Take note, though, that the event runs on the same thread that assigned the text, and that thread ought to be the thread that created the textbox. Meaning that if you take any kind of lock or do an expensive operation, it will heavily^1 impact the performance of the form, causing it to react slowly because the thread that must update the form is busy in the TextChanged handler.
^1: heavily compared to the alternative presented below.
If you need to do an expensive operation, what you should do is add the values to a ConcurrentQueue<T> (or similar), and then have an async^2 operation run in the background that takes the values from it and processes them (see the sketch after these notes). Make sure to add to the queue the necessary parameters^3, so that the expensive operation can happen entirely in the background.
^2: It doesn't need to be using the async keyword, it can be a ThreadPool, a Timer, a dedicated Thread or something like that.
^3: For example the text, and the time in the case of a log. If you have to monitor multiple controls, you could also consider using a POCO (Plain Old CLR Object) class or struct to store all the state that needs to be kept.
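A minimal sketch of that idea, assuming a WinForms form with textBox1 (the tuple layout and the AppendToLog helper are just illustrative, and it needs using System.Collections.Concurrent):
// Cheap handler plus a background consumer.
private readonly ConcurrentQueue<(string Text, DateTime Time)> _pending =
    new ConcurrentQueue<(string Text, DateTime Time)>();

private void textBox1_TextChanged(object sender, EventArgs e)
{
    // Only capture what the background worker needs, then return.
    _pending.Enqueue((textBox1.Text, DateTime.UtcNow));
}

private void ProcessPending()   // run this from a Timer, the ThreadPool or a dedicated Thread
{
    while (_pending.TryDequeue(out var entry))
    {
        AppendToLog(entry.Text, entry.Time);   // the expensive work, off the UI thread
    }
}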
IF you can miss some values...
Using the event
Use the event to update a version number instead of reading the value (a sketch follows the notes below).
That is, you are going to keep two integer variables:
The current version number, which you will increment when there has been a change. Use Thread.VolatileWrite for this (there is no need for Interlocked).
The last checked version number, which you will update when you read the values from the form (this is done from an async operation), and which you will use to verify whether there have been any updates recently. Use Interlocked.Exchange to update the value and proceed if the old value is different from the one you read.
Note: Test the case of arithmetic overflow and make sure the version wraps from MaxValue to MinValue. No, it will not happen often, but that's no excuse.
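A sketch of those two counters (field names are illustrative; it needs using System.Threading):
// Version tracking without reading the textbox from the worker.
private int _currentVersion;      // bumped by the TextChanged handler (UI thread)
private int _lastCheckedVersion;  // owned by the background reader

private void textBox1_TextChanged(object sender, EventArgs e)
{
    // unchecked makes the increment wrap from MaxValue to MinValue instead of throwing
    Thread.VolatileWrite(ref _currentVersion, unchecked(_currentVersion + 1));
}

private void BackgroundCheck()    // called periodically from the async operation
{
    int current = Thread.VolatileRead(ref _currentVersion);
    int previous = Interlocked.Exchange(ref _lastCheckedVersion, current);
    if (previous != current)
    {
        // Something changed since the last check: read textBox1.Text
        // via Invoke/BeginInvoke and process it here.
    }
}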
Again, under the assumption that it is OK to miss some values: if you are using a dedicated Thread for this, you may want to use a WaitHandle (ManualResetEvent or AutoResetEvent, and preferably their slim counterparts) to have the thread sleep when there have been no modifications instead of having it spin waiting. You will then set the WaitHandle in the event, as in the sketch below.
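For example (again, names are illustrative):
// The dedicated thread sleeps until the event signals a change.
private readonly AutoResetEvent _changed = new AutoResetEvent(false);

private void textBox1_TextChanged(object sender, EventArgs e)
{
    _changed.Set();   // wake the worker; cheap to call repeatedly
}

private void WorkerLoop()   // body of the dedicated Thread
{
    while (_changed.WaitOne())   // blocks instead of spin waiting
    {
        // read the latest value (plus the version counters above) and process it
    }
}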
Related
I'm working on a project that was made by another developer, and I've been assigned the job of adding extra functionality. This question isn't about one application, though; it's about the language.
In C#, I find myself running across this probably around 50 times a day: I need to grab a value from a method and store it in a variable, or I need to store a variable, or I just need to hard-code a variable to something.
Do I go with my head or my heart? My head says store it in a variable in case I need to use it more than once in the future, but my heart says let's be lazy and just add it to the if check instead of using a variable. Let me give you an example...
Example 1:
var name = SomeClass.GetName();
if (name.Contains("something"))
{
// do something
}
Example 2:
if (SomeClass.GetName().Contains("something"))
{
// do something
}
I guess what I am asking is, does it have any sort of advantage? Or does it not really matter?
Am I using memory by storing these? Especially if I'm storing hundreds across a solution in all different kinds of methods?
Is it worth using it inside the if directly for an advantage, or should I keep a variable just in case? Can anyone explain the difference, if there is any?
I'm talking about cases where I only ever use the variable once, so don't worry about the "having to change in multiple locations" issue, although if anyone does want to go into that as well, I would appreciate it.
I think there will not be any notable advantage, performance-wise or memory-wise. But when we look into the following scenarios, storing return values does have some advantages.
The called method (SomeClass.GetName() in this case) may return null
Consider that SomeClass.GetName() may return null under some conditions; then calling .Contains() on the result will throw a NullReferenceException (this is the same in both examples you listed). In that case you can do something like the following:
var name = SomeClass.GetName();
if (name != null && name.Contains("something"))
{
// do something
}
Need to use the return value more than one time:
Here you are using the return value only for the .Contains("something") check. If you want to use the return value later in the calling method, it's always better to store it in a local variable instead of calling the method repeatedly (see the sketch below). If it's only for checking Contains, then change the return type to boolean and finish the job within the method.
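For instance, storing the result means GetName() is only called once even if you need the value again:
var name = SomeClass.GetName();
if (name != null && name.Contains("something"))
{
    // do something
}
Console.WriteLine(name);   // reused later without a second GetName() call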
Ask yourself this question about this line of code:
var name = SomeClass.GetName();
How expensive is the GetName() method? Is it going over the internet and downloading a file, taking seconds to minutes? Is it doing some heavy computation that takes seconds to minutes? Is it getting data from the database? These answers will help you decide whether you should store the result in a variable and then reuse the variable.
The next question, even if the answer to the above was "Nah, it is pretty quick and does nothing fancy", is to ask yourself: how many places in the current class make this call? 1? 10? 100? If your boss comes one day and says, "You know that method GetName()? Well, we are not going to use it anymore. We will use another method named GetName2()", how long will it take? Imagine having to make the change in 100 different places; a wrapper like the one sketched below keeps that to a single place.
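A small wrapper illustrates the idea (GetDisplayName is a hypothetical name):
// The rest of the class calls GetDisplayName(),
// so switching from GetName() to GetName2() is a change in exactly one place.
private static string GetDisplayName()
{
    return SomeClass.GetName();   // later: return SomeClass.GetName2();
}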
So my point is simple: It all depends.
I'm writing a simple code generator in C# for automating common tasks in business applications such as data binding, model and viewmodel generation, and record updating.
The generated code uses a data mapper that implements equality by reference comparison (without id) and flag properties for transient state (if the object was created but not persisted).
For updating the object properties I have 3 options:
On the property setter, call an UPDATE for that one column immediately. This would provide instant persistence without any other mechanism managed by the final programmer, but it would require an unnecessary number of UPDATE calls.
Maintain a Frozen state on all entities which would prevent any property set, plus BeginModification and EndModification methods which would enable the property setters and UPDATE all modified columns on EndModification. This would require the programmer to call these methods, which is undesirable for the code generator, because code simplicity and minimizing programmer intervention is its primary goal.
Maintain a timer for each entity (which can be implemented as a global timer and local counters) and give each entity a certain "dirty time": when a property is set, its dirty time is reset to 0, and when its local clock reaches a certain value, the column UPDATE is made. This wouldn't require any external code from the final programmer and would group several property sets into a single UPDATE, because contiguous property sets have almost no time between them.
The timer approach can be combined with a CommitChanges method that will call the UPDATE immediately if desired.
My preferred way is the local dirty timer because of the possibility of zero programmer intervention beyond property sets. The question is: is it possible that this timer approach would lead to data inconsistency?
If you're writing this as an educational exercise or as a means for further honing your design skills, then great! If you're writing this because you actually need an ORM, I would suggest that looking at one of the many existing ORM's would be a much wiser idea. These products--Entity Framework, NHibernate, etc.--already have people dedicated to maintaining them, so they provide a much more viable option than trying to roll your own ORM.
That said, I would shy away from any automatic database updates. Most existing ORM's follow a pattern of storing state information at the entity level (typically an entity represents a single row in a table, though entities can relate to other entities, of course), and changes are committed by the developer explicitly calling a function to do so. This is similar to your timer approach, but without the...well...timer. It can be nice to have changes committed automatically if you're writing something like a Winforms application and the user is updating properties through data binding, but that is generally better accomplished by having a utility class (such as a custom binding list implementation) that detects changes and commits them automatically.
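To make the explicit-commit idea concrete, here is a minimal sketch (the entity, the column names, and the CommitChanges signature are purely illustrative; it needs using System.Collections.Generic and System.Data):
// Dirty-column tracking with an explicit commit, no timer.
public class CustomerEntity
{
    private readonly HashSet<string> _dirtyColumns = new HashSet<string>();
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            _dirtyColumns.Add("Name");   // remember what changed; no DB call yet
        }
    }

    public void CommitChanges(IDbConnection connection)
    {
        if (_dirtyColumns.Count == 0) return;
        // Build and execute a single UPDATE covering only the dirty columns here,
        // then clear the flags.
        _dirtyColumns.Clear();
    }
}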
As you know, it is a good idea to call Task.ConfigureAwait(false) when you are awaiting a task in code that does not need to capture the synchronization context, because it can cause deadlocks otherwise.
Well, how often do you need to capture a synchronization context? In my practice, very rarely. In most situations I am working with "library" code that pretty much forces me to use Task.ConfigureAwait(false) all the time.
So my question is pretty simple: why is Task.ConfigureAwait(false) not the default option for a task? Wouldn't it be much better to force "high-level" code to use Task.ConfigureAwait(true)? Is there a historical reason for it, or am I missing something?
Most code that works with .ConfigureAwait(false) also works, although suboptimally, with .ConfigureAwait(true). Yes, not all code, but still most. The current default lets the highest percentage of code work without tinkering with settings that an average programmer might not understand.
A different default would just lead to thousands of questions about why the code does not work, and worse yet, thousands of answers in the form of "Microsoft sucks, they make you write Control.CheckForIllegalCrossThreadCalls = false; in every program. Why isn't that the default?" rather than actually adding the appropriate .ConfigureAwait(true) calls.
Look at the second example solution from that link:
public async void Button1_Click(...)
{
var json = await GetJsonAsync(...);
textBox1.Text = json;
}
public class MyController : ApiController
{
public async Task<string> Get()
{
var json = await GetJsonAsync(...);
return json.ToString();
}
}
If the default behaviour was ConfigureAwait(false), the textBox1.Text = json; statement would execute on a random thread pool thread instead of the UI thread.
Both snippets look like code someone could reasonably write, and by default one of them has to be broken. Since deadlocks are a lot less dangerous and easier to detect than thread-unsafe accesses, picking ConfigureAwait(true) as the default is the more conservative choice.
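For completeness, this is roughly what GetJsonAsync could look like as "library" code that opts out of context capture internally, while the UI handler above keeps the default and so resumes on the UI thread (a sketch, assuming System.Net.Http is available):
// No UI work happens inside this method, so the captured context isn't needed.
public static async Task<string> GetJsonAsync(Uri uri)
{
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync(uri).ConfigureAwait(false);
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}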
Just because your typical use case requires ConfigureAwait(false), it doesn't mean that it is the "correct" or most used option.
One of the things async/await is designed for, is to write responsive GUI programs. In such cases, returning to the UI thread after offloading some work to a Task is critical, since UI updates can only happen from the main thread on most Windows GUI platforms. Async/await helps GUI developers do the right thing.
This is not the only example where the default option makes better sense. I can only speculate, but I would suspect that the decision for the ConfigureAwait default is based on making sure async works with as little friction as possible, for the use cases that Microsoft anticipates it will be used for the most. Not everyone writes frameworks.
The docs don't explain it. They only say what should be locked on and what not.
From here it seems like the same object should be used by all threads for the lock to work, while from here it seems that that is exactly what should be avoided to prevent deadlock.
Keep in mind that I might be misunderstanding this whole matter of lock, because I just asked a question about how to "lock" a variable and got an answer that, as far as I can tell, doesn't achieve that at all (it only locks code).
Think of a lock as a "talking stick" that is used in some meetings. Whoever is holding the stick can talk. Anyone that wants to talk must wait until the speaker relinquishes the stick.
When a piece of code acquires a lock on an object, any other piece of code that requests a lock on that same object must wait until the original code releases the lock.
So which object should you lock? It depends greatly on the context. The rule of thumb is that you lock on an object that anyone else who could affect the code block can lock on as well. If you're updating a collection, you can use ICollection.SyncRoot, for example.
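For example, a common pattern is a single private lock object shared by every piece of code that touches the same data (a minimal sketch; it needs using System.Collections.Generic):
// Every piece of code that touches _items locks on the same object.
private static readonly object _sync = new object();
private static readonly List<int> _items = new List<int>();

public static void Add(int value)
{
    lock (_sync)   // a second thread calling Add waits here until the first releases _sync
    {
        _items.Add(value);
    }
}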
EDIT by OP (Hopefully correct):
"Anyone that wants to talk" - As the speaker "of that stick". (Anyone can just talk.)
As for the second link in the question - it's referring to a problem of one lock waiting for a second, while the second is waiting for the first.
lock should be used around any shared resource. By "shared resource" I mean anything that is accessed by more than one thread.
All a lock does is:
Incoming thread wants access to a piece of code, encounters lock
Lock is empty, thread is allowed in
Thread gets switched out
Another thread wants access to the same code (or code locked on the same variable), encounters lock
Variable is already locked, thread has to wait
Original thread is switched back in, exits locked code
Second thread is switched back in, executes the locked code
If it is possible for a thread inside one lock to wait on another lock while a thread inside that second lock waits on the first, you have a deadlock condition. Typically you don't "nest" your locks, to avoid this problem. Also, for performance if nothing else, you rarely lock on the same variable as another piece of code unless both pieces actually rely on not executing concurrently (and it is probably a bad design if they do :) )
Locking something is intended to protect a piece of shared memory. So, you have to use the same SyncRoot for a specific element that you are protecting... However, say you have 3 objects that need to be protected, and they are in no way related:
A a = new A();
B b = new B();
C c = new C();
Then there is NO reason to use the same SyncRoot for all 3 of them. In fact, if they are truly separate, it would be inefficient.
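Continuing that example, each unrelated object gets its own lock (a sketch):
// One private lock per unrelated resource.
private readonly object _lockA = new object();
private readonly object _lockB = new object();
private readonly object _lockC = new object();

public void UseA() { lock (_lockA) { /* work with a only */ } }
public void UseB() { lock (_lockB) { /* work with b only */ } }   // never blocks UseA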
I am building an ASP.NET web application in which I use several classes containing static functions for retrieving database values and such (based on the user's session, so their results are session-specific, not application-wide).
These functions can also be called from markup, which makes developing my GUI fast and easy.
Now I am wondering: is this the right way of doing things, or is it better to create a class, containing these functions and create an instance of the class when needed?
What will happen when there are a lot of visitors to this website? Will a visitor have to wait until the function is 'ready' if it's also called by another session? Or will IIS spread the workload over multiple threads?
Or is this just up to personal preferences and one should test what works best?
EDIT AND ADDITIONAL QUESTION:
I'm using code like this:
public class HandyAdminStuff
{
public static string GetClientName(Guid clientId)
{
Client client = new ClientController().GetClientById(clientId);
return client.Name;
}
}
Will the Client and ClientController instances be disposed of after this function completes? Will the garbage collector collect them? Or will they continue to 'live' and bulk up memory every time the function is called?
** Please, I don't need answers like 'measure instead of asking', I know that. I'd like to get feedback from people who can give a good answer and maybe some pros and cons, based on their experience. Thank you.
"Will a visitor have to wait until the function is 'ready' if it's also called by another session?"
Yes. It may happen if the function body is synchronized for thread safety, or if you perform DB operations within a transaction that locks the database.
Take a look at these threads:
http://forums.asp.net/t/1933971.aspx?THEORY%20High%20load%20on%20static%20methods%20How%20does%20net%20handle%20this%20situation%20
Does IIS give each connected user a thread?
It would be better to have instance-based objects, because they can also be easily disposed (connections, possibly?) and you wouldn't have to worry about multithreading issues, in addition to all the problems "peek" mentioned.
For example, each and every function of your static DAL layer should be atomic. That is, no variables should be shared between calls inside the DAL. It is a common mistake in ASP.NET to think that [ThreadStatic] data is safe to use inside static functions. The only safe place for storing per-request data is the Context.Items collection; everything else is unsafe.
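For example (a sketch; RequestCache and the key name are illustrative, and it needs a reference to System.Web):
// Per-request data goes into Context.Items, not into static or [ThreadStatic] fields.
public static class RequestCache
{
    public static void SetClientName(string name)
    {
        HttpContext.Current.Items["ClientName"] = name;   // scoped to the current request only
    }

    public static string GetClientName()
    {
        return HttpContext.Current.Items["ClientName"] as string;
    }
}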
Edit:
I forgot to answer your question regarding IIS threads. Each and every request from your customers will be handled by a different thread. As long as you are not using Session State, concurrent requests from the same user will also be handled concurrently by different threads.
I would not recommend using static functions for retrieving data, because they will make your code harder to test and harder to maintain, and they can't take advantage of any OO principles in your design. You will end up with more duplicate code, etc.