Update .NET desktop application in real-time - c#

I have no experience building a .NET desktop application, all my experience is with the web. A friend of mine has asked me to do a quick estimate for a small desktop application.
The application just displays a list of items from the database. When rows are added to or deleted from the database, they need to be added to or removed from the list on the user's desktop.
Is this done pretty easily in a desktop application, or do I need to do any sort of "reload" every X seconds?

The simplest design would involve polling the database every so often to look for new records. Adjust the number of seconds between polls to balance the appearance of real time against performance.
Any design that would allow the database management system to broadcast updates to a desktop application would be quite complicated and (depending on your needs) would most likely be overkill.
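To make that concrete, here is a minimal sketch of the polling approach, assuming a WinForms app; the ListBox, the 5-second interval and the LoadItems() method are all placeholders you would replace with your own UI and query:

```csharp
// Minimal polling sketch (WinForms). The interval, the ListBox and LoadItems()
// are placeholders; LoadItems() stands in for your real database query.
using System.Collections.Generic;
using System.Windows.Forms;

public class ItemListForm : Form
{
    private readonly ListBox _itemList = new ListBox { Dock = DockStyle.Fill };
    private readonly Timer _pollTimer = new Timer { Interval = 5000 }; // tune for "real time" feel vs. load

    public ItemListForm()
    {
        Controls.Add(_itemList);
        _pollTimer.Tick += (s, e) => RefreshList();
        _pollTimer.Start();
        RefreshList(); // initial load
    }

    private void RefreshList()
    {
        // Re-query the database and rebind; adds/deletes show up on the next tick.
        _itemList.DataSource = LoadItems();
    }

    private List<string> LoadItems()
    {
        // Hypothetical data access; replace with your "SELECT ... FROM Items" via ADO.NET or an ORM.
        return new List<string> { "example row" };
    }
}
```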

Elaborating on Andrew Hare's design slightly, I'd suggest that you include some sort of mechanism to 'short-circuit' the refresh cycle when user interaction occurs, i.e.
Refresh every x seconds
AND
Immediately if the user clicks a control that is deemed to be a critical one AND the required update is fewer than x records
EXCEPT
where this would increase the refresh rate beyond a certain throttle value
Basically, you want to give the impression of high performance. Perceived performance isn't about accomplishing tasks quickly; it's about doing the slow work during periods when you expect the user to be thinking, faffing around or typing something, rather than when they're waiting for a response. Very few applications are busy for even a small fraction of the time they are running - perceived slowness usually comes from a design where the program does too much work at the point the user asks for it, forcing them to wait. Caching in the background lets you do only the bare minimum of work in direct response to user input, improving the user's perception of performance.
Trying to be directly helpful:
You state you're using .Net - which is handy. .Net databinding is very rich and powerful, and is quite likely to make this job a breeze.
However - read on...
There is a chance that it won't do exactly what you want. This is where databinding becomes a massive pain. Databinding requires certain things to be set up the way .Net wants it, and if they aren't, it's quite a lot of work reimplementing the basic functionality in the way you require. In this case, don't hesitate to reach for the MSDN documentation, and StackOverflow. Ask Early, Ask Often.
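As an illustration of how far the built-in databinding can take you when it does fit, here is a minimal sketch; the Item class is made up, and BindingList<T> raises change notifications so the bound grid updates itself when the list changes (e.g. after each poll):

```csharp
// Minimal databinding sketch (WinForms). Item is a made-up type; BindingList<T>
// raises list-changed notifications, so the bound grid redraws automatically
// when items are added or removed.
using System.ComponentModel;
using System.Windows.Forms;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BoundForm : Form
{
    private readonly BindingList<Item> _items = new BindingList<Item>();

    public BoundForm()
    {
        var grid = new DataGridView { Dock = DockStyle.Fill, DataSource = _items };
        Controls.Add(grid);

        // After each refresh, reconcile _items with the rows read from the database;
        // the grid updates itself as the list changes.
        _items.Add(new Item { Id = 1, Name = "First row" });
    }
}
```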

What to make parallel? What will make me better? (.net Web Business Application, MVC+SL)

I'm working on a web application framework, which uses MSSQL for data storage, mostly just does CRUD operations (but on arbitrarily complex structures), provides a WCF interface for rich Silverlight admin and has an MVC3 display (and some basic forms like user settings, etc).
It's getting quite good at being able to load, display, edit and save any (reasonably) complex data structure, in a user-friendly way.
But, I'm looking towards the future, and want to expand my capabilities (and it would be fun to learn new things along the way as well...) - so I've decided (in the light of what's coming for C# 5...) to try to get some parallel/async optimization... Now, I haven't even learned TPL and PLINQ yet, so I'm happy for any advice there as well.
So my question is, what are the possible areas where parallel processing may be of help, and how do TPL and PLINQ help me with that?
My gut tells me I could try saving branches of a data structure to the database in parallel (this is where I'd expect the biggest performance optimization), and I could perform some complex operations (file upload, mail sending maybe?) in a multithreaded environment, etc. Can I build complex SL UI views in parallel on the client? (Creating 60 data-bound fields on a view can cause "blinking"...) Can I create partial views (menus, category trees, search forms, etc.) in MVC at once?
ps: If this turns into a "Tell me everything about parallel stuff" thread, I'm happy to make it community wiki...
Remember that an asp.net web application is intrinsically a parallel application in any case. Requests can be serviced in parallel and this will all be managed by the asp.net framework. So there are two cases:
You have lots of users all hitting the site at once. In which case the parallel processing capability of the server is probably being used to capacity in any case.
You don't have lots of users all hitting the site at once. In which case the server is probably quite capable of dealing with the requests, without parallel processing, in a suitably fast response time.
Any time you start thinking about optimising something just because it might be fun, or because you just think you should make stuff faster, then you are almost certainly guilty of premature optimization. Your efforts could almost certainly be better spent enriching the functionality of the framework, rather than making what is probably a plenty-fast-enough solution a little bit faster (at the cost of significantly increased complexity).
In answer to the question of where TPL and PLINQ can really help: in my opinion, the main advantage of these technologies is in places in the application where you really do have a lot of long-running blocking processes. For example, if you have a situation where you call out several times to an external web service, it can be a significant advantage to make these calls in parallel. I would strongly question whether writing to a local database - or even a database on a different box on a local network - would count as being a long-running blocking process to the extent that this kind of parallelisation is of any significant value.
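As a rough sketch of that scenario, here is how several independent external calls might be issued concurrently, using the async support that was then arriving with C# 5 / .NET 4.5 (the URLs are placeholders):

```csharp
// Rough sketch: several independent web-service calls issued concurrently.
// Total wall-clock time is roughly that of the slowest call rather than the
// sum of all of them.
using System.Net.Http;
using System.Threading.Tasks;

static class ParallelCalls
{
    static async Task<string[]> FetchAllAsync(string[] urls)
    {
        using (var client = new HttpClient())
        {
            var tasks = new Task<string>[urls.Length];
            for (int i = 0; i < urls.Length; i++)
                tasks[i] = client.GetStringAsync(urls[i]); // start all calls without awaiting

            return await Task.WhenAll(tasks); // wait for them all to complete
        }
    }
}
```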
Pretty much all the examples you list fall into the category of getting the PC to do something in parallel that it was previously doing in sequence. How many CPUs are on your server, and how many are really free when the website is under load? Making something parallel does not necessarily equate to making it faster unless the process involved has some measure of time when your PC is sitting around doing nothing, waiting for an external event.
First question is to ask the users / testers which bits seem slow. The only way to know for sure what's slowing you down is to use a profiler like dottrace. The results are sometimes surprising.
If you do find something, parallel processing may not be the answer. You need to remember that there is an overhead in splitting tasks up, so if the task is fairly quick in the first place, it could end up being slower. You also have to consider the added complexity, e.g. what happens if half a task succeeds and half fails? (Although TPL and PLINQ shield you from this to an extent.)
Have fun, but I wonder whether this is a case of 1) a solution chasing a problem, and 2) premature optimization.

What algorithm to use for Dynamic Scheduling System?

I'm planning to develop an expert system that automatically fits the school faculty's workload (time, teaching load, etc.) and generates class sections and room assignments that are at least 90% accurate with respect to what the Director of a given department wants to assign for a given semester.
What algorithm to use? Heuristics? Optimization? Any suggestions or help is highly appreciated!
Two friends of mine did something similar for a class project. They used the simulated annealing heuristic. They concluded that it might not be the best tool for the job.
Hey, knowing what not to do can be useful, right? :)
Here are some general observations:
1) Manual scheduling is rarely attempted from scratch. Instead, somebody starts with the schedule for the previous year and alters it to take account of changes in requirements. One way of mimicking this with a computer is to use a hill-climbing algorithm, which repeatedly tries a number of small changes to improve the solution so far (a minimal sketch follows after this list). This can then be started off at the current schedule.
2) Does the manual process ever terminate with the conclusion that the requirements are collectively unachievable and that some of them must be dropped? In that case your algorithm must be transparent enough that failures can be understood, or at least capable of proposing such changes (e.g. by minimising a penalty function, which allows it to produce a "least bad" solution that does not satisfy all of the original constraints). I know of one case where a sophisticated constraint-based approach was replaced by a much simpler algorithm because failures of the constraint-based system did not give enough user feedback.
3) Curiously enough, the next-generation system did not use sophisticated scheduling at all. It turned out - roughly speaking - that at the time the decisions had to be made, not all of the consequences of sophisticated scheduling decisions could be foreseen, and, in the long run, a simple predictable schedule that could be maintained indefinitely was more productive than constantly rearranging schedules to grab small momentary advantages.
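To make point 1 concrete, here is a minimal, generic hill-climbing skeleton; the schedule type, the mutate step and the penalty function are all assumptions to be supplied by your own timetable model and constraints:

```csharp
// Generic hill-climbing skeleton: repeatedly apply a small random change and
// keep it only if it lowers the penalty. The schedule type, mutate step and
// penalty function are placeholders for your own model.
using System;

static class HillClimber
{
    public static TState Improve<TState>(
        TState start,                          // e.g. last year's schedule
        Func<TState, Random, TState> mutate,   // e.g. swap two class slots or rooms
        Func<TState, double> penalty,          // weighted sum of constraint violations
        int iterations,
        Random rng)
    {
        TState best = start;
        double bestPenalty = penalty(best);

        for (int i = 0; i < iterations; i++)
        {
            TState candidate = mutate(best, rng);
            double candidatePenalty = penalty(candidate);
            if (candidatePenalty < bestPenalty)   // keep only improvements
            {
                best = candidate;
                bestPenalty = candidatePenalty;
            }
        }
        return best;
    }
}
```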
Take a look at the curriculum course lesson scheduling example of Drools Planner (open source, Java I'm afraid). It uses meta-heuristics such as simulated annealing and tabu search.
Here is a paper on dynamic scheduling using genetic algorithms... you might find some of the ideas here useful... even if the domain isn't the same, I think the idea is more generally applicable.

using C# for real time applications

Can C# be used for developing a real-time application that involves taking input from web cam continuously and processing the input?
You cannot use any mainstream garbage-collected language for “hard real-time systems”, as the garbage collector will sometimes stop the system from responding within a defined time. Avoiding allocating objects can help; however, you need a way to prove you are not creating any garbage and that the garbage collector will not kick in.
However, most “real time” systems don’t in fact need to always respond within a hard time limit, so it all comes down to what you mean by “real time”.
Even when parts of the system needs to be “hard real time” often other large parts of the system like the UI don’t.
(I think your app needs to be fast rather than “real time”, if 1 frame is lost every 100 years how many people will get killed?)
I've used C# to create multiple realtime, high speed, machine vision applications that run 24/7 and have moving machinery dependent on the application. If something goes wrong in the software, something immediately and visibly goes wrong in the real world.
I've found that C#/.NET provide pretty good functionality for doing so. As others have said, definitely stay on top of garbage collection. Break up the processing into several logical steps, and have separate threads working on each. I've found the producer-consumer programming model to work well for this; perhaps ConcurrentQueue for starters.
You could start with something like:
Thread 1 captures the camera image, converts it to some format, and puts it into an ImageQueue
Thread 2 consumes from the ImageQueue, processing the image and comes up with a data object that is put onto a ProcessedQueue
Thread 3 consumes from the ProcessedQueue and does something interesting with the results.
If Thread 2 takes too long, Threads 1 and 3 are still chugging along. If you have a multicore processor you'll be throwing more hardware at the math. You could also use several threads in place of any thread that I wrote above, although you'd have to take care of ordering the results manually.
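A rough sketch of that three-stage pipeline, using BlockingCollection<T> (which wraps ConcurrentQueue<T>); the capture, analysis and output steps are placeholders for the real camera and processing code:

```csharp
// Rough sketch of the three-stage pipeline. BlockingCollection<T> wraps
// ConcurrentQueue<T> and blocks consumers until an item arrives, keeping the
// stages decoupled. CaptureFrame/Analyze/Act are placeholders.
using System.Collections.Concurrent;
using System.Threading.Tasks;

class VisionPipeline
{
    private readonly BlockingCollection<byte[]> _images = new BlockingCollection<byte[]>(boundedCapacity: 10);
    private readonly BlockingCollection<string> _results = new BlockingCollection<string>(boundedCapacity: 10);

    public void Run()
    {
        // Stage 1: capture frames and queue them.
        Task.Run(() => { while (true) _images.Add(CaptureFrame()); });

        // Stage 2: process each frame into a result object.
        Task.Run(() => { foreach (var image in _images.GetConsumingEnumerable()) _results.Add(Analyze(image)); });

        // Stage 3: act on the results (update UI, send targets to motion controllers, log, ...).
        Task.Run(() => { foreach (var result in _results.GetConsumingEnumerable()) Act(result); });
    }

    private byte[] CaptureFrame() { return new byte[0]; } // placeholder for the camera API
    private string Analyze(byte[] frame) { return ""; }   // placeholder for the vision math
    private void Act(string result) { }                   // placeholder for the output step
}
```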
Edit
After reading other people's answers, you could probably argue with my definition of "realtime". In my case, the computer produces targets that it sends to motion controllers, which do the actual realtime motion. The motion controllers provide their own safety layers for things like timing, max/min ranges, smooth accelerations/decelerations and safety sensors. These controllers read sensors across an entire factory with a cycle time of less than 1 ms.
Absolutely. The key will be to avoid garbage collection and memory management as much as possible. Try to avoid new-ing objects as much as possible, using buffers or object pools when you can.
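One common way to do that is a small pool of reusable buffers; a minimal sketch follows (on newer runtimes you would likely reach for ArrayPool<byte>.Shared instead):

```csharp
// A very small buffer pool: rent a buffer, use it, return it, so the garbage
// collector rarely runs during the hot path. Just a sketch, not a tuned pool.
using System.Collections.Concurrent;

public sealed class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        return _pool.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        _pool.Add(buffer); // caller promises not to touch the buffer after returning it
    }
}
```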
Of course, someone has even developed a library to do that: AForge.NET
As with any real-time application and not just C#, you'll have to manage the buffers well as David suggested.
Not only that, there's also the XNA Framework (for things like 3D games), and you can program DirectX using C# as well, both of which are very real-time.
And did you know that, if you want, you can do pointer manipulations in C# too?
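For instance, here is a tiny illustration of pointer access over a buffer, the sort of thing that can help in tight per-pixel loops; it has to be compiled with unsafe code enabled, and the method itself is just a made-up example:

```csharp
// Tiny illustration of pointer access in C# (compile with /unsafe). This sort
// of thing can help in tight loops over a raw image buffer.
static class UnsafeSum
{
    static unsafe int SumBytes(byte[] buffer)
    {
        int sum = 0;
        fixed (byte* p = buffer)            // pin the array so the GC cannot move it
        {
            for (int i = 0; i < buffer.Length; i++)
                sum += p[i];                // pointer indexing, no bounds check
        }
        return sum;
    }
}
```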
It depends on how 'real-time' it needs to be; ie, what your timing constraints are, and how quickly you need to 'do something'.
If you can handle 'doing something' maybe every 300ms or so in .NET, say on a timer event, I've found Windows to work okay. Note that this is something I found true on multiple systems of different ages and different speeds. As always, YMMV.
But that number is awfully long for a lot of applications. Maybe not for yours.
Do some research and make sure your app responds quickly enough for your requirements.

ASP.NET MVC Caching scenario

I'm still yet to find a decent solution to my scenario. Basically I have an ASP.NET MVC website which has a fair bit of database access to make the views (2-3 queries per view) and I would like to take advantage of caching to improve performance.
The problem is that the views contain data that can change irregularly, like it might be the same for 2 days or the data could change several times in an hour.
The queries are quite simple (select... from where...) and not huge joins, each one returns on average 20-30 rows of data (with about 10 columns).
The queries are quite simple at the site's current stage, but over time the owner will be adding more data and the visitor numbers will increase. They aren't large at the moment, and I would be looking at caching as traffic will mostly be coming from Google AdWords etc. and fast-loading pages will be a benefit (apparently).
The site will be backed by a Microsoft SQL Server 2005 database (but can upgrade to 2008 if required).
Do I either:
Set the cache duration to the minimum time an item doesn't change for (e.g. cache for, say, 3 minutes) and tell the owner that any changes will take up to 3 minutes to appear?
Find a way to force the cache to clear and reprocess on changes (e.g. if the owner adds an item in the administration panel, it clears the relevant caches)
Forget caching altogether
Or is there another option that would better suit this scenario?
If you are using SQL Server, there's also another option to consider:
Use the SqlCacheDependency class to have your cache invalidated when the underlying data is updated. Obviously this achieves a similar outcome to option 2.
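A rough sketch of how that might look; "SiteDb" and "Products" are placeholders, and the database, table and web.config all need to be set up for SQL cache notifications first (via aspnet_regsql or SqlCacheDependencyAdmin):

```csharp
// Rough sketch: cache the query result until SQL Server signals a change to
// the table. "SiteDb" and "Products" are placeholders; notifications must be
// enabled on the database/table and a matching <sqlCacheDependency> entry
// added to web.config.
using System;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static object GetProducts(Func<object> loadFromDatabase)
    {
        const string key = "products";
        object cached = HttpRuntime.Cache[key];
        if (cached == null)
        {
            cached = loadFromDatabase(); // hit the database only on a cache miss
            HttpRuntime.Cache.Insert(key, cached, new SqlCacheDependency("SiteDb", "Products"));
        }
        return cached;
    }
}
```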
I might actually have to agree with Agileguy though - your query descriptions seem pretty simplistic. Thinking forward and keeping caching in mind while you design is a good idea, but have you proven that you actually need it now? Option 3 seems a heck of a lot better than option 1, assuming you aren't actually dealing with significant performance problems right now.
Premature optimization is the root of all evil ;)
That said, if you are going to Cache I'd use a solution based around option 2.
You have less opportunity for "dirty" data in that manner.
Kindness,
Dan
The 2nd option is the best. It shouldn't be so hard if the same app edits/caches the data. It can be more tricky if there is more than one app.
If you can't go that way, the 1st might be acceptable too. With some tweaks (e.g. I would try to update the cache silently on another thread when it hits the timeout) it might work well enough (if the data is allowed to be a bit old).
Never drop caching if it's possible to use it. Everyone knows the "premature optimization..." verse, but caching is one of those things that can increase the scalability/performance of an application dramatically.

What is wrong with polling?

I have heard a few developers recently say that they are simply polling stuff (databases, files, etc.) to determine when something has changed and then run a task, such as an import.
I'm really against this idea and feel that utilising available technology such as Remoting, WCF, etc. would be far better than polling.
However, I'd like to identify the reasons why other people prefer one approach over the other and more importantly, how can I convince others that polling is wrong in this day and age?
Polling is not "wrong" as such.
A lot depends on how it is implemented and for what purpose. If you really care about immediate notification of a change, it is very efficient. Your code sits in a tight loop, constantly polling (asking) a resource whether it has changed/updated. This means you are notified as soon as you can be that something is different. But your code is not doing anything else, and there is overhead in terms of many, many calls to the object in question.
If you are less concerned with immediate notification you can increase the interval between polls, and this can also work well, but picking the correct interval can be difficult. Too long and you might miss critical changes, too short and you are back to the problems of the first method.
Alternatives, such as interrupts or messages, can provide a better compromise in these situations. You are notified of a change as soon as is practically possible, but this delay is not something you control; it depends on the component itself being timely about passing on changes in state.
What is "wrong" with polling?
It can be resource hogging.
It can be limiting (especially if you have many things you want to know about / poll).
It can be overkill.
But...
It is not inherently wrong.
It can be very effective.
It is very simple.
Examples of things that use polling in this day and age:
Email clients poll for new messages (even with IMAP).
RSS readers poll for changes to feeds.
Search engines poll for changes to the pages they index.
StackOverflow users poll for new questions, by hitting 'refresh' ;-)
Bittorrent clients poll the tracker (and each other, I think, with DHT) for changes in the swarm.
Spinlocks on multi-core systems can be the most efficient synchronisation between cores, in cases where the delay is too short for there to be time to schedule another thread on this core, before the other core does whatever we're waiting for.
Sometimes there simply isn't any way to get asynchronous notifications: for example to replace RSS with a push system, the server would have to know about everyone who reads the feed and have a way of contacting them. This is a mailing list - precisely one of the things RSS was designed to avoid. Hence the fact that most of my examples are network apps, where this is most likely to be an issue.
Other times, polling is cheap enough to work even where there is async notification.
For a local file, notification of changes is likely to be the better option in principle. For example, you might (might) prevent the disk spinning down if you're forever poking it, although then again the OS might cache. And if you're polling every second on a file which only changes once an hour, you might be needlessly occupying 0.001% (or whatever) of your machine's processing power. This sounds tiny, but what happens when there are 100,000 files you need to poll?
In practice, though, the overhead is likely to be negligible whichever you do, making it hard to get excited about changing code that currently works. Best thing is to watch out for specific problems that polling causes on the system you want to change - if you find any then raise those rather than trying to make a general argument against all polling. If you don't find any, then you can't fix what isn't broken...
There are two reasons why polling could be considered bad in principle.
It is a waste of resources. It is very likely that you will check for a change while no change has occurred. The CPU cycles/bandwidth spent on this action do not result in a change and thus could have been better spent on something else.
Polling is done at a certain interval. This means that you won’t know that a change has occurred until the next time the interval has passed.
It would be better to be notified of changes. This way you’re not polling for changes that haven’t occurred and you’ll know of a change as soon as you receive the notification.
Polling is easy to do, very easy; it's as easy as any procedural code. Not polling means you enter the world of asynchronous programming, which isn't as brain-dead easy, and might even become challenging at times.
And as with everything in any system, the path of least resistance is normally the one more commonly taken, so there will always be programmers using polling, even great programmers, because sometimes there is no need to complicate things with asynchronous patterns.
I for one always strive to avoid polling, but sometimes I do polling anyway, especially when the actual gains of asynchronous handling aren't that great, such as when acting against some small local data (of course you get a bit faster, but users won't notice the difference in a case like this). So there is room for both methodologies IMHO.
Client polling doesn't scale as well as server notifications. Imagine thousands of clients asking the server "any new data?" every 5 seconds. Now imagine the server keeping a list of clients to notify of new data. Server notification scales better.
I think people should realize that in most cases, at some level there is polling being done, even in event- or interrupt-driven situations, but you're isolated from the actual code doing the polling. Really, this is the most desirable situation ... isolate yourself from the implementation, and just deal with the event. Even if you must implement the polling yourself, write the code so that it's isolated, and the results are dealt with independently of the implementation.
The thing about polling is that it works! It's reliable and simple to implement.
The costs of polling can be high - if you are scanning a database for changes every minute when there are only two changes a day, you are consuming a lot of resources for a very small result.
However, the problem with notification technologies is that they are much more complex to implement, and not only can they be unreliable but (and this is a big BUT) you cannot easily tell when they are not working.
So if you do drop polling for some other technology, make sure it is usable by average programmers and is ultra reliable.
It's simple - polling is bad: inefficient, a waste of resources, etc. There is always some form of connectivity in place that is monitoring for an event of some sort anyway, even if 'polling' is not chosen.
So why go the extra mile and put additional polling in place?
Callbacks are the best option - you just need to worry about tying the callback in with your current process. Underneath, there is polling going on to see that the connection is still in place anyhow.
If you keep phoning/ringing your girlfriend and she never answers, then why keep calling? Just leave a message, and wait until she 'calls back' ;)
I use polling occasionally for certain situations (for example, in a game, I would poll the keyboard state every frame), but never in a loop that ONLY does polling, rather I would do polling as a check (has resource X changed? If yes, do something, otherwise process something else and check again later). Generally speaking though, I avoid polling in favor of asynchronous notifications.
The reasons being that I do not spend resources (CPU time, whatever) waiting for something to happen (especially if those resources could speed up that thing happening in the first place). The cases where I use polling, I don't sit idle waiting, I use the resources elsewhere, so it's a non-issue (for me, at least).
If you are polling for changes to a file, then I agree that you should use the filesystem notifications that are available for when this happens, which are available in most operating systems now.
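For example, a minimal sketch using FileSystemWatcher instead of polling a folder (the path and filter are placeholders):

```csharp
// Minimal sketch: react to filesystem notifications instead of polling a
// folder for changes. The path and filter are placeholders.
using System;
using System.IO;

class WatchFolder
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\data", "*.csv")
        {
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
            EnableRaisingEvents = true
        };

        watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
        watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
        watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);

        Console.ReadLine(); // keep the process alive while events arrive
    }
}
```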
In a database you could trigger on update/insert and then call your external code to do something. However it might just be that you don't have a requirement for instant actions. For instance you might only need to get data from Database A to Database B on a different network within 15 minutes. Database B might not be accessible from Database A, so you end up doing the polling from, or as a standalone program running near, Database B.
Also, polling is a very simple thing to program. It is often a first-step implementation done when time constraints are tight, and because it works well enough, it remains.
I see many answers here, but I think the simplest answer is the answer itself:
Because it is (usually) much simpler to code a polling loop than to build the infrastructure for callbacks.
Then you get simpler code which, if it turns out to be a bottleneck later, can be easily understood and redesigned/refactored into something else.
This is not answering your question. But realistically, especially in this "day and age" where processor cycles are cheap, and bandwidth is large, polling is actually a pretty good solution for some tasks.
The benefits are:
Cheap
Reliable
Testable
Flexible
I agree that avoiding polling is a good policy. However, in reference to Robert's post, I would say that the simplicity of polling can make it a better approach in instances where the issues mentioned here are not such a big problem, as the asynchronous approach is often considerably less readable and harder to maintain, not to mention the bugs that can creep into its implementation.
As with everything, it depends. A large, high-transaction system I work on currently uses notification with SQL Server (a DLL loaded within SQL Server that is called by an extended stored procedure from triggers on certain tables; the DLL then notifies other apps that there is work to do).
However we're moving away from this because we can practically guarantee that there will be work to do continuously. Therefore in order to reduce the complexity and actually speed things up a bit, the apps will process their work and immediately poll the DB again for new work. Should there be none it'll try again after a small interval.
This seems to work quicker and is much simpler. However, another part of the application which is much lower volume does not benefit from a speed increase using this method - unless the polling interval is very small, which leads to performance problems. So we're leaving it as is for this part. Therefore it's a good thing when it's appropriate, but everybody's needs are different.
Here is a good summary of relative merits of push and pull:
https://stpeter.im/index.php/2007/12/14/push-and-pull-in-application-architectures/
I wish I could summarize it further into this answer but some things are best left unabridged.
When thinking about SQL polling, back in the day of VB6 you used to be able to create recordsets using the WithEvents keyword which was an early incarnation of async "listening".
I personally would always look for a way of using an event-driven implementation before polling. Failing that, a manual implementation of any of the following might help:
SQL Service Broker / the SqlDependency class (see the sketch after this list)
Some kind of queue technology (RabbitMQ or similar)
UDP broadcast - an interesting technique that can be built with multiple node listeners. Not always possible on some networks though.
Some of these may require a slight redesign of your project, but in an enterprise world might be the better route to go rather than a polling service.
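To illustrate the first option in the list above, here is a rough sketch using SqlDependency (which relies on Service Broker being enabled on the database); the connection string, table and columns are placeholders, and note the query rules: an explicit column list and two-part table names:

```csharp
// Rough sketch of the SqlDependency route. The connection string, table and
// columns are placeholders; Service Broker must be enabled on the database.
using System;
using System.Data.SqlClient;

class ChangeListener
{
    const string ConnectionString = "Data Source=.;Initial Catalog=SiteDb;Integrated Security=True"; // placeholder

    static void Main()
    {
        SqlDependency.Start(ConnectionString);
        Subscribe();
        Console.ReadLine();
        SqlDependency.Stop(ConnectionString);
    }

    static void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (s, e) =>
            {
                Console.WriteLine("Change detected: " + e.Info);
                Subscribe(); // a subscription fires only once, so re-register after each notification
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the subscription
        }
    }
}
```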
Agree with most responses that async/messaging is usually better. I absolutely agree with Robert Gould's answer. But I'd like to add one more point.
One addition is that polling can kill two birds with one stone. In one particular use case, a project I was involved with used a message queue between databases but polling from an application server to one of the databases. Because the network from app server to DB was occasionally down, polling was additionally used to notify the app of network issues.
In the end, use what makes the most sense for the use case, with scalability in mind.
I'm using polling to check for updates on a file because I'm getting information about that file across a heterogeneous system with different OS types, one of which is very old. The notifications for Linux won't work if the file is on a remote system with a different OS, because that information is not transmitted, but polling works. It's a low bandwidth check, so it doesn't hurt anything.
