I have about 20 grid views that I have to create. All of them are pretty standard across the board: just take an IEnumerable<T> and display it in a grid view, that's it.
I would prefer to create one aspx page and have the grid view generated dynamically using ITemplate, and I guess use IEnumerable<Object> for the data source.
Are there significant performance considerations between doing it the way I'd like to and building the 20 or more grid views on separate aspx pages?
An example of a concern I have is taking a List<T> and casting it to IEnumerable<T> where T is Object.
Build just the one and performance-test it. It will be easier to apply lessons learned.
If the data is large, turn response buffering off to improve time to first byte.
Having one generic view page is preferable where it is possible, which it sounds like it is in your case.
Secondly, there is no performance hit going from List<T> to IEnumerable<T>, as IEnumerable<T> is an interface that List<T> already implements; the cast costs nothing at runtime.
However, you will get a performance hit building a List if you don't already have one. It is much better to pass the IEnumerable from your LINQ statements directly, as it is only realized when used, which can be a major benefit with long lists and when sorting or filtering (because you can modify the IEnumerable before it is realized).
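For illustration, here is a minimal sketch of that deferred execution (the names and data are made up for the example):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DeferredDemo
    {
        static void Main()
        {
            // Source sequence; in a real app this would be your LINQ query.
            IEnumerable<int> numbers = Enumerable.Range(1, 1000000);

            // Nothing executes here: the query is only being described.
            IEnumerable<int> query = numbers
                .Where(n => n % 2 == 0)   // filtering composes onto the query
                .OrderBy(n => n)          // so does sorting
                .Take(10);

            // The sequence is realized only when enumerated (here, or by the
            // grid's DataBind). Calling ToList() earlier would materialize
            // everything up front.
            foreach (int n in query)
                Console.WriteLine(n);
        }
    }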
As with anything related to performance, build it and profile it to see if performance is actually an issue. No amount of opinion, however well informed, is a substitute for profiling; optimise only when necessary and always avoid premature optimisation.
20 grid views? OK.
Just make sure you disable the ViewState of the controls that do not require it.
That will considerably reduce your page size and, in turn, the page load time.
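For example (a sketch; the grid name and data call are placeholders, and the same setting can go in the .aspx markup as EnableViewState="false"):

    using System;
    using System.Web.UI.WebControls;

    public partial class ReportPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // The grid is rebound on every request, so it doesn't need
            // ViewState; turning it off keeps the hidden __VIEWSTATE small.
            ReportGrid.EnableViewState = false;  // ReportGrid declared in markup
            ReportGrid.DataSource = GetRows();   // hypothetical data access call
            ReportGrid.DataBind();
        }

        private System.Collections.IEnumerable GetRows()
        {
            return new[] { new { Name = "A" }, new { Name = "B" } };
        }
    }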
If there is nothing custom and you only have to show default data for all 20 tables/lists, then I think you should use one page.
I have "successfully" implemented a non recombining trinomial tree to price certain fixed-income derivatives. (Something like shown in the picture below - but with three branches that don't reconnect)
Unfortunately it turned out that the number of nodes I can use was severely limited by the available memory. If I build a tree with 20 time-steps this results in 3^19 nodes (so 1,1 Billion nodes)
The nodes of each time step are saved in List<Node> and these arrays are stored in a Dictionary<double,List<Node>>
Each node is instantiated via new Node(...). I also instantiate each of the lists and the dictionary via new Class() Perhaps this is the source of my error.
Also System.OutOfMemoryException isn't thrown because of the Dictionary/List-Object being to large (as is often the case) but because I seem to have too many Nodes - after a while new Node(...) can't allocate any further memory. Eventually the 2GB max List-Capacity will also kick in I think - seeing as how List grows exponentially larger with each time step.
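A rough back-of-envelope count makes the problem concrete (the 48 bytes per node is an assumption: a 16-byte object header plus a double and three child references on 64-bit):

    using System;

    class NodeCountEstimate
    {
        static void Main()
        {
            long leaves = (long)Math.Pow(3, 19);              // last level only
            long total  = (long)((Math.Pow(3, 20) - 1) / 2);  // sum of 3^0..3^19

            // Assumed per-node cost: 16-byte header + 8-byte double
            // + 3 x 8-byte child references = 48 bytes.
            const long bytesPerNode = 16 + 8 + 3 * 8;

            Console.WriteLine("Leaves:      {0:N0}", leaves); // ~1.16 billion
            Console.WriteLine("Total nodes: {0:N0}", total);  // ~1.74 billion
            Console.WriteLine("Memory:      ~{0} GB", total * bytesPerNode / (1L << 30));
        }
    }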
Perhaps my data structure is too wasteful or not really suited to the task at hand.
A possible solution could be to save the tree to a text file, avoiding the memory problem completely. This, however, would necessitate a HUGE workaround.
Edit:
To add some more background: I need the tree to price path-dependent products, which means that unfortunately I will have to access all the nodes. What's more, after the tree has been built, I start from the leaves and go backwards in time to determine the price. I also already generate only the nodes I need.
Edit2:
I have given the topic some thought and also considered the various responses. Could it be that I just need to serialize the respective tree levels to the hard drive? So basically, I create one time step (a List<Node>), write it to disk, etc. Later on, when I start from the leaves, I will just have to load the levels in reverse order.
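Something like this minimal sketch, perhaps (assuming Node is serializable; note that a single level near the leaves may itself already be too large to fit in memory):

    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    [System.Serializable]
    public class Node
    {
        public double Value;
        // ... remaining fields must also be serializable
    }

    public static class LevelStore
    {
        static readonly BinaryFormatter formatter = new BinaryFormatter();

        // Write one completed time step to its own file, then drop the
        // reference so the GC can reclaim those nodes.
        public static void SaveLevel(int timeStep, List<Node> level)
        {
            using (FileStream fs = File.Create("level_" + timeStep + ".bin"))
                formatter.Serialize(fs, level);
        }

        // During backward induction, load the levels one at a time,
        // starting from the leaves.
        public static List<Node> LoadLevel(int timeStep)
        {
            using (FileStream fs = File.OpenRead("level_" + timeStep + ".bin"))
                return (List<Node>)formatter.Deserialize(fs);
        }
    }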
You basically have two choices: evaluate only the branches you care about (Andrew's yield suggestion) and don't store results, or build up your tree and save it to disk, implementing a custom collection interface on top of it that accesses the right part of the disk. In that case you still keep only a minimal amount of data in your process memory and rely on the OS to do proper disk caching to make access fast. If you start working with large data sets, the second option is a good tool to have in your tool belt, so you should probably write it with reuse in mind.
What we have here is a classic problem of doing an enormous amount of processing up front... and then storing EVERYTHING into memory to be processed at a later time.
While simple, given harsh enough conditions (like having a billion entries), it will eat up all the memory.
Now, the OP didn't really specify what the intention of the tree was or how it was going to be used... but I would propose that instead of building it all at once... build it as you need it.
Lazy Evaluation with yield
Instead of doing everything all at once and having to store it... it might be ideal to do it ONLY when you actually require it. Check out this post for more info and examples of using yield.
This won't work great, though, if you need to traverse the tree a bunch of times... but it might still allow you to use a greater depth than you currently can.
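A sketch of the idea, with made-up up/flat/down factors standing in for a real trinomial step:

    using System;
    using System.Collections.Generic;

    class LazyTrinomial
    {
        // Lazily yields every leaf value reachable from `value` after `steps`
        // branchings. The tree is never materialized: only one root-to-leaf
        // path's worth of state is alive at any moment.
        static IEnumerable<double> Leaves(double value, int steps)
        {
            if (steps == 0) { yield return value; yield break; }

            // Hypothetical up/flat/down moves of one trinomial step.
            foreach (double next in new[] { value * 1.1, value, value * 0.9 })
                foreach (double leaf in Leaves(next, steps - 1))
                    yield return leaf;
        }

        static void Main()
        {
            double sum = 0; long count = 0;
            foreach (double leaf in Leaves(100.0, 10)) { sum += leaf; count++; }
            Console.WriteLine("{0:N0} leaves visited with O(depth) memory", count);
        }
    }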
I don't think serializing to disk will help much. For one, when you attempt to deserialize the list you will still run out of memory (as, to the best of my knowledge, there is no way to partially deserialize an object).
Have you considered changing your data structure into a relational database model and storing it in a SQL Server Express database?
This would give you the added benefit of performing queries with indexes instead of your custom tree traversal logic.
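The schema could look something like this (a hypothetical shape, not tied to the OP's actual Node fields):

    // One row per node, linked to its parent, indexed by time step so the
    // backward pass can pull one level at a time.
    public static class TreeSchema
    {
        public const string Sql = @"
            CREATE TABLE Nodes (
                Id       BIGINT IDENTITY PRIMARY KEY,
                ParentId BIGINT NULL REFERENCES Nodes(Id),
                TimeStep INT    NOT NULL,
                Value    FLOAT  NOT NULL
            );
            CREATE INDEX IX_Nodes_TimeStep ON Nodes (TimeStep);";
    }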
In my application we have multilingual language strings which are stored in custom tables, as the user can edit, delete, import new languages, etc. via a UI.
Currently, what I'm doing at the beginning of each request is going off and getting all the language strings (from our database) for the currently selected language and sticking them in a dictionary.
I then have an Html Helper extension method which I use in the Razor views (see below), which looks up the correct string in that per-request dictionary based on the key supplied in the helper.
Html.LanguageString("MyLanguage.KeyHere")
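For reference, a hypothetical reconstruction of what such a helper might look like (assuming the per-request dictionary is stashed in HttpContext.Items under a made-up key):

    using System.Collections.Generic;
    using System.Web.Mvc;

    public static class LanguageHelpers
    {
        // Looks the key up in the per-request dictionary loaded from the
        // database at the start of the request.
        public static string LanguageString(this HtmlHelper html, string key)
        {
            var strings = html.ViewContext.HttpContext.Items["LanguageStrings"]
                          as IDictionary<string, string>;

            string value;
            if (strings != null && strings.TryGetValue(key, out value))
                return value;
            return key; // fall back to the key when no translation exists
        }
    }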
Now this works fine. However, as the application gets bigger we are getting more and more language strings. It's not an issue right now, as it's still very fast with only around 200 strings to fetch.
But it also means I'm getting all of them even if a page only uses, say, one. I'd ideally like a way of processing the LanguageString("") calls beforehand and doing a query at the beginning of the request to fetch just the ones that are needed. Or maybe my own LINQ-based approach that can be processed to produce a more efficient call.
I'm looking for some advice on how to do this, as I'd like the application to be as efficient as possible. Any advice, help, or tips are greatly appreciated. Thanks.
I'd suggest caching language strings at the application level rather than fetching them for every request. For example, this can be done by maintaining a static dictionary and invalidating the cache only when the user makes changes to these strings. This will make your application more responsive and save you from implementing the (imho) more complex and not necessarily more efficient technique of loading this data on demand.
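A minimal sketch of that idea (LoadAllFromDatabase is a placeholder for your actual query):

    using System.Collections.Generic;

    public static class LanguageCache
    {
        static readonly object Sync = new object();
        static Dictionary<string, Dictionary<string, string>> _byLanguage;

        // Returns the string table for one language, loading everything once
        // per application lifetime instead of once per request.
        public static Dictionary<string, string> Get(string language)
        {
            lock (Sync)
            {
                if (_byLanguage == null)
                    _byLanguage = LoadAllFromDatabase(); // hypothetical DB call

                Dictionary<string, string> table;
                return _byLanguage.TryGetValue(language, out table) ? table : null;
            }
        }

        // Call from the edit/delete/import UI so changes become visible.
        public static void Invalidate()
        {
            lock (Sync) { _byLanguage = null; }
        }

        static Dictionary<string, Dictionary<string, string>> LoadAllFromDatabase()
        {
            // Placeholder for the real database query.
            return new Dictionary<string, Dictionary<string, string>>();
        }
    }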
As a side note I'd add the following: it's usually a good practice to address these kinds of problems when they arise (rather than fixing something that is not broken) and focus on more important things. I totally agree that performance implications of a given solution must always be taken into consideration, I'm just saying that premature optimizations are not always a good idea.
For my new web app, I'm debating between using multiple views or conditionals within views.
An example scenario would be showing different info to users who are authenticated vs non-authenticated. This could be handled a couple ways.
In the controller, check IsAuthenticated and return a view based on that
In the view, check IsAuthenticated and show blocks of info based on that (both options are sketched below)
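Roughly (a sketch using ASP.NET MVC's Request.IsAuthenticated; view names are made up):

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // Option 1: branch in the controller and return different views.
        public ActionResult Index()
        {
            return Request.IsAuthenticated
                ? View("IndexAuthenticated")
                : View("IndexAnonymous");
        }

        // Option 2: return one view and branch inside it, e.g. in Index.cshtml:
        //   @if (Request.IsAuthenticated) { <p>Welcome back!</p> }
        //   else { <p>Please sign in.</p> }
    }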
Pros of multiple views: Smaller, less complicated view - next to no logic in the view
Pros of single views: fewer view files to maintain
The obvious cons are the opposites of the pros: more files to maintain or more complicated view files.
Which do you prefer? Why? Any pros/cons I haven't outlined here?
Update: Assume each view uses a layout page and partial views to abstract the obviously repetitive code.
This sounds like a nice venue to discuss the merits of avoiding premature generalization. As the cousin to premature optimization, PG can be just as crippling. I say this because I often prematurely generalize and it tends to dissuade the ladies from flirting with me, laughing at my hilarious jokes, etc.
See: http://ryanfarley.com/blog/archive/2004/04/30/570.aspx
My general rule of thumb is this:
Repeat yourself twice.
When you're about to repeat yourself a third time, create an abstraction
I tend to follow this principle in my Views and my Partials:
I create my first View -- no partials.
I create my second View -- no partials.
I create my third View by abstracting pieces of code from the first and second View into reusable partials.
I repeat until the Mountain Dew is all gone.
Though my answer to your question may seem overt, I think the point I'm trying to make is that, as developers, we tend to enjoy wasting a great deal of time contemplating the different ways that we can abstract away more and more layers from our individuated instantiations. Ironically, an abstraction is only valuable insofar as it reduces the necessity of repetition, and repetition is harmful only insofar as it reduces the likeliness that you'll accomplish anything, so a repetitive desire to over-abstract is just as detrimental as coding with a bunch of ON ERROR RESUME NEXT's.
I doubt that helped. But, alas.
I prefer a single view if it's simply an "if x display y" situation. Anything more than that and it can get out of control pretty easily. Reducing the duplicate html is worth the tradeoff of a small amount of simple logic, though.
I suspect the answers on this will be pretty much split down the middle because each side has its own merits.
I'd say it depends on how different the two scenarios are. If it's a major difference, do a separate view. If it's a difference that appears on multiple pages (like showing login controls vs. a sign-out button), make it into a separate partial view. For a couple of tiny differences, an if block is OK.
I would say start with a single view... then, depending on how complicated the difference between the authenticated and unauthenticated views gets, you can create multiple views.
This guide on optimizing DataBinding says:
There is a significant performance impact when you data bind to a single CLR object with thousands of properties. You can minimize this impact by dividing the single object into multiple CLR objects with fewer properties.
What does this mean? I am still trying to get familiar with DataBinding, but my analogy here is that properties are like SQL table fields, and objects are rows. This advice then translates to "to avoid problems with a large number of fields, use less fields and create more rows". As this doesn't make any sense to me, possibly my understanding of databinding is completely askew?
Does this advice actually apply? I am unsure if it is specific to .NET 4/WPF, while I am using 3.5 and a custom WinForms-based control library (DevExpress).
As an aside: am I correct in thinking data binding uses reflection when using an IList-style data source?
This is not just an academic question. I am currently trying to speed up loading an XtraGridView (a DevExpress control) with ~100,000 objects of 50 or so properties each.
This advice then translates to "to avoid problems with a large number of fields, use less fields and create more rows"
I think it should translate to "use less fields and create smaller tables" (i.e. with less fields). And the original advice should read "[...]dividing the single class into multiple classes", with fewer properties. As you correctly noted, it wouldn't make sense to create more "rows"...
Anyway, if you do have a class that exposes hundreds or thousands of properties, you have a far more serious problem than binding performance... This is a serious design flaw that you should fix after reading some OO principles.
Does this advice actually apply? I am unsure if it is specific to .NET 4/WPF, while I am using 3.5 and a custom WinForms-based control library (DevExpress)
Well, the page you mentioned is about WPF, but I think the idea of binding to smaller objects can apply to WinForms too (because the more properties need to be watched, the slower it will be)
As an aside: am I correct in thinking data binding uses reflection when using an IList-style data source?
You're partially correct... it actually uses TypeDescriptor, which in turn uses reflection to examine regular CLR objects. But this mechanism is much more flexible than raw reflection: a type can implement ICustomTypeDescriptor to provide its own description, list of members, etc. (DataTable is one example of such a type).
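A small demonstration of that lookup path (Person is just an example type):

    using System;
    using System.ComponentModel;

    class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            var p = new Person { Name = "Ada", Age = 36 };

            // This is roughly what list data binding does for each row type:
            // it asks TypeDescriptor (not typeof(...).GetProperties()) for the
            // members, so a type can override the answer via ICustomTypeDescriptor.
            PropertyDescriptorCollection props = TypeDescriptor.GetProperties(p);

            foreach (PropertyDescriptor prop in props)
                Console.WriteLine("{0} = {1}", prop.Name, prop.GetValue(p));
        }
    }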
You are solving the wrong problem. It will take a typical user well over a week to find what she is looking for when she's got 5 million fields to search through. The speed of your UI becomes irrelevant; only a machine can do a better job of finding the data.
And you've got one. Help the user narrow down what she is searching for by letting her enter search terms, so that the total query result doesn't contain more than, say, a hundred rows. The database engine helps you make that fast, and it automatically solves your grid performance problem.
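For instance, something along these lines (table and column names are made up):

    using System.Data.SqlClient;

    class NarrowedSearch
    {
        // Hypothetical narrowed query: let the database engine do the
        // filtering and cap the result, instead of binding 100,000 rows.
        public static SqlDataReader FindRows(SqlConnection conn, string term)
        {
            var cmd = new SqlCommand(
                "SELECT TOP 100 * FROM Items WHERE Name LIKE @term + '%'", conn);
            cmd.Parameters.AddWithValue("@term", term);
            return cmd.ExecuteReader();
        }
    }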
Which one is better from a performance point of view: a user control or a custom control?
Right now I am using a user control, and in a specific scenario I am creating around 200 (approx.) different instances of this control, but it is a bit slow while loading and I need to wait at least 20-30 seconds for the operation to complete. What should I do to increase the performance?
Edit:
The scenario is:
In my Window I have a TreeView, each item of which represents a different user-defined type, so I have defined a DataTemplate for each type. These DataTemplates use user controls, and those user controls are bound to properties of the user-defined types. Put simply, the TreeView maps a hierarchical data structure of user-defined types. I read the hierarchy from XML, build it, and assign it to the TreeView, and it takes a lot of time to load. Any help?
I have an application that is loading around 500 small controls. We originally built these as user controls, but loading the BAML seems to cause the controls to load slowly (each one is really fast, but when we get to around 300, the total of all of them together adds up). The user controls also seem to use a good amount of memory. We switched to custom controls and the app launches almost twice as fast and takes up about a third of the RAM. Not saying this will always be the case, but custom controls made a big difference for us.
FYI: Here's a link on using a VirtualizingPanel with the TreeView: http://msdn.microsoft.com/en-us/library/cc716882.aspx
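For example, virtualization can be enabled in code as well as in XAML (VirtualizationMode.Recycling requires .NET 3.5 SP1 or later):

    using System.Windows.Controls;

    static class TreeViewSetup
    {
        // Turns on UI virtualization for a TreeView so item containers are
        // only generated for visible nodes; same effect as setting the
        // attached properties in XAML.
        public static void EnableVirtualization(TreeView tree)
        {
            VirtualizingStackPanel.SetIsVirtualizing(tree, true);
            VirtualizingStackPanel.SetVirtualizationMode(tree, VirtualizationMode.Recycling);
        }
    }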
Make sure to SuspendLayout while adding controls en masse. Try to completely configure the control before adding it to any container.
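A sketch of that WinForms pattern (the container and controls are placeholders):

    using System.Windows.Forms;

    class PanelLoader
    {
        // Suspend layout while adding many controls so the container performs
        // a single layout pass at the end instead of one per Add.
        static void AddMany(Panel panel, Control[] controls)
        {
            panel.SuspendLayout();
            try
            {
                foreach (Control c in controls)
                    panel.Controls.Add(c);
            }
            finally
            {
                panel.ResumeLayout(true); // true = perform the pending layout now
            }
        }
    }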
Here is the follow-up article to my issues with WPFs Virtualizing Stack Panel and TreeView. I hope this helps you.
http://lucisferre.net/2010/04/21/virtualizing-stack-panel-wpf-part-duex/
Long story short: It is possible to do the navigation with the current VSP, but it is a bit of a hack. The current VSP design needs a rework, as the way it currently virtualizes the View breaks the coupling between the View and ViewModel which, in turn, breaks the whole concept of MVVM.
I worked at Microsoft and was not allowed to use the UserControl because of its poor performance. We always created controls in C#. Not sure about the performance of DataTemplates, but am interested in knowing if it is better. I suspect that it is.