It's a simple config app with 4 checkboxes and 5 textboxes, and all values must persist across sessions.
Do I have to serialize the fields and restore them by hand? I really have no idea of the best way to approach this.
You could use the User settings, reading the values on load and saving on exit.
You can find info about the settings and how to retrieve and save them at runtime here.
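Roughly, a minimal sketch of that (the setting and control names here are hypothetical; each setting must first be defined as user-scoped in the project's Settings designer):

    // Hypothetical user-scoped settings: OptionA (bool) and ServerName (string).
    private void Form1_Load(object sender, EventArgs e)
    {
        chkOptionA.Checked = Properties.Settings.Default.OptionA;
        txtServerName.Text = Properties.Settings.Default.ServerName;
    }

    private void Form1_FormClosing(object sender, FormClosingEventArgs e)
    {
        Properties.Settings.Default.OptionA    = chkOptionA.Checked;
        Properties.Settings.Default.ServerName = txtServerName.Text;
        Properties.Settings.Default.Save(); // persists to the user.config file
    }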
If you are talking about an ideal solution, you should think about using MVC (which WinForms does NOT encourage), so that all the data you care about lives in an encapsulated, non-UI-bound object (i.e., the model). The UI form populates itself from the data, and the app can retrieve the data when the form is torn down. If the data object implements ISerializable, then you're pretty much done.
If you're talking about expedience and are absolutely sure that it will never need to grow or change (which never happens - I only do this in one-off apps), then I would scrape the form contents and write them to an appropriate place (user settings, data files, etc.).
If you're talking about building something that makes this as easy as possible, without ever, ever having to worry about it again, I would look at using data binding, or at creating mapping objects that understand how to map data from objects onto UI elements and back out again (for example, you could subclass the main form elements to include a name field, or use the initial text field to figure out a key to look up in a serializable hashtable or a property name in a serializable object).
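As a rough sketch of the model-centric approach (all names here are invented; XmlSerializer is used for brevity, though ISerializable or any other persistence mechanism works just as well):

    using System.IO;
    using System.Xml.Serialization;

    // Encapsulated, non-UI-bound model: one property per control on the form.
    public class AppConfig
    {
        public bool OptionA { get; set; }
        public bool OptionB { get; set; }
        public string ServerName { get; set; }
        // ...and so on for the remaining checkboxes/textboxes
    }

    public static class AppConfigStore
    {
        public static void Save(AppConfig config, string path)
        {
            var serializer = new XmlSerializer(typeof(AppConfig));
            using (var stream = File.Create(path))
                serializer.Serialize(stream, config);
        }

        public static AppConfig Load(string path)
        {
            var serializer = new XmlSerializer(typeof(AppConfig));
            using (var stream = File.OpenRead(path))
                return (AppConfig)serializer.Deserialize(stream);
        }
    }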
I want to save edited values from a WPF mobile app, via a Web API, as the user tabs out of each field. So on the LostFocus event.
When using EF, the whole entity graph is posted (PUT) to the Web API each time a field is updated. Even if I just make a DTO for the basic fields on the form, I would still be posting unnecessary data each time.
I was thinking of forgetting about EF in the Web API and simply posting the entity ID, field name and new value. Then in the controller, create my own SQL update statement and use good old ADO.Net to update the database.
This sounds like going back to the noughties or even the nineties, but is there any reason why I should not do that?
I have read this post which makes me lean towards my proposed solution.
Thanks for any comments or advice.
Sounds like you are trying to move away from having a RESTful Web API and towards something a little more RPC-ish. Which is fine, as long as you are happy that the extra hassle of implementing this is worth it in terms of bandwidth saved.
In terms of tech level, you're not regressing by doing what you proposed; I use EF every day, but I still need plain old ADO.NET every now and then, and there is a reason why it's still well supported in .NET. So there is no reason not to, as long as you are comfortable with writing SQL, etc.
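For illustration, a minimal sketch of that controller-side update (table, column, and member names are invented; note that a column name cannot be passed as a SQL parameter, so it has to be validated against a whitelist to avoid SQL injection):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class FieldUpdater
    {
        // Hypothetical whitelist of columns the client is allowed to update.
        private static readonly HashSet<string> AllowedFields =
            new HashSet<string> { "Name", "Email", "Phone" };

        private readonly string connectionString;

        public FieldUpdater(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public void UpdateField(int entityId, string fieldName, string newValue)
        {
            if (!AllowedFields.Contains(fieldName))
                throw new ArgumentException("Unknown field", "fieldName");

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = conn.CreateCommand())
            {
                // The column name is interpolated only after the whitelist check.
                cmd.CommandText =
                    "UPDATE Customers SET [" + fieldName + "] = @value WHERE Id = @id";
                cmd.Parameters.AddWithValue("@value", newValue);
                cmd.Parameters.AddWithValue("@id", entityId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }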
However, I'd advise against your current proposal for a couple of reasons:
Bandwidth isn't necessarily all that precious
Even for mobile devices, sending 20 or 30 fields back at a time probably isn't a lot of data. Of course, only you can know for your specific scenario whether that's too much, but considering the widespread availability of 3G and 4G networks, I wouldn't see this as a concern unless those fields contain huge amounts of data - it's your use case, so you know best :)
Concurrency
Unless the form is actually a representation of several discrete objects which can be updated independently, then by sending back individual changes every time you update a field, you run the risk of ending up with invalid state on the device.
Consider for example if User A and User B are both looking at the same object on their devices. This object has 3 fields A, B, C thus:
A-"FOO"
B-"42"
C-"12345"
Now suppose User A changes field "A" to "BAR" and tabs out of the field, and then User B changes field "C" to "67890" and tabs.
Your back-end now has this state for the object:
A - "BAR"
B - "42"
C - "67890"
However, User A and User B now both have an incorrect state for the object!
It gets worse if you also have a facility to re-send the entire object from either client because if User A re-sends the entire form (for whatever reason) User B's changes will be lost without any warning!
Typically this is why the RESTful mechanism of exchanging full state works so well; you send the entire object back to the server, and get to decide based on that full state, if it should override the latest version, or return an error, or return some state that prompts the user to manually merge changes, etc.
In other words, it allows you to handle conflicts meaningfully. Entity Framework, for example, will give you concurrency checking for free just by including a specially typed column (a rowversion/timestamp); you can handle a concurrency exception to decide what to do.
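A minimal sketch of that, assuming EF Code First (the entity and property names are invented; the [Timestamp] attribute marks a SQL Server rowversion column as the concurrency token):

    using System.ComponentModel.DataAnnotations;
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        [Timestamp] // maps to a rowversion column; EF compares it on every UPDATE
        public byte[] RowVersion { get; set; }
    }

    public static class SaveHelper
    {
        public static void Save(DbContext context)
        {
            try
            {
                context.SaveChanges();
            }
            catch (DbUpdateConcurrencyException)
            {
                // Someone else changed the row since it was read:
                // reload, merge, or prompt the user to resolve the conflict.
            }
        }
    }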
Now, if it's the case that the form is composed of several distinct entities that can be independently updated, you have more of a task-based scenario, so you can model your solution accordingly: by all means send a single model to the client representing all the properties of all of the individual entities on the form, but have separate POST-back models, and a handler for each.
For example, if the form shows Customer Master data and their corresponding Address record, you can send the client a single model to populate the form, but only send the Customer Master model when a Customer Master field changes, and only the Address model when an address field changes, etc. This way you can have your cake and eat it because you have a smaller POST payload and you can manage concurrency.
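Sketched out, that might look like this (Web API 2 style; all type and action names are invented):

    using System.Web.Http;

    // One read model populates the whole form...
    public class CustomerFormModel
    {
        public CustomerMasterModel Master { get; set; }
        public AddressModel Address { get; set; }
    }

    // ...but separate, smaller models are posted back independently.
    public class CustomerMasterModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        // ...other master fields
    }

    public class AddressModel
    {
        public int CustomerId { get; set; }
        public string Street { get; set; }
        // ...other address fields
    }

    public class CustomerController : ApiController
    {
        public IHttpActionResult PutMaster(CustomerMasterModel model)
        {
            // update only the Customer Master fields, with their own concurrency check
            return Ok();
        }

        public IHttpActionResult PutAddress(AddressModel model)
        {
            // update only the Address fields
            return Ok();
        }
    }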
I am a little confused as to how ASP.NET works. To my understanding, each time a webpage is created, it is an instance of the ASP.NET program. First of all, is this correct? For my website I have a class called 'Control' which inherits from System.Web.UI.Page, and from which every other class (e.g. the aspx pages and their code-behind pages) inherits. I need to maintain a list of customers etc. somewhere it can be accessed by every user currently accessing the website, and thought that this may be a good place; but if every user is accessing a different instance of the program, this list will be different for every user (as only they will be communicating with it).
If my thoughts are correct, to keep this list updated would I have to synchronize it in every instance of the program somehow (possibly using threading)? Or would I have to connect to an external program which maintains this list? Or am I wrong about everything?
Thanks in advance, and sorry if this sounds like a load of nonsense; I am very confused!
Edit:
Thank you to all who have answered. I already have a database to which this data is being stored, but I also wanted to represent some of the data in the program.
I am making a booking system with a big input form. My plan is to load the data into objects (bookings, customers, etc.) as it comes into the program (so that I don't lose the data during successive postbacks), and have these objects write it to the database (it is a requirement of my client to write all data to the database as soon as it comes into the program, to minimize loss if the system goes down). I then want to retain those objects on the software side, because the program has to put constraints on what users can book (check that these services are available to them), and that requires some logic which would be easier with objects instead of having to go back to the database a lot.
I therefore had the idea of storing this data in a place which was accessible to every website instance, and this is what I was confused about how to do.
It sounds like you are looking for the Cache property of the HttpContext class. The Cache shares data across the application domain, as opposed to the Items collection, which is per-request. See MSDN. Note that you will still need to store the data in a database, as commented above.
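A minimal sketch of that (the cache key and helper method are made up, and Customer stands in for your own type; the lock guards against two requests populating the cache at once):

    using System.Collections.Generic;
    using System.Web;

    public static class CustomerStore
    {
        private static readonly object CacheLock = new object();

        public static List<Customer> GetCustomers(HttpContext context)
        {
            var customers = context.Cache["Customers"] as List<Customer>;
            if (customers == null)
            {
                lock (CacheLock) // avoid two requests loading the list at once
                {
                    customers = context.Cache["Customers"] as List<Customer>;
                    if (customers == null)
                    {
                        customers = LoadCustomersFromDatabase();
                        context.Cache.Insert("Customers", customers);
                    }
                }
            }
            return customers;
        }

        private static List<Customer> LoadCustomersFromDatabase()
        {
            // stand-in for the real database query
            return new List<Customer>();
        }
    }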
You want to store your data in an external place like a database. Your application can then load the data each user needs from that same database. Your application will grow, and if you have to edit the data at a later point in time, you will already have all the needed pieces in place.
Most of the examples I've seen online show object change tracking in a WinForms/WPF context. Or, if it's on the web, connected objects are used, so the changes made to each object can be tracked.
In my scenario, the objects are disconnected once they leave the data layer (Mapped into business objects in WCF, and mapped into DTO on the MVC application)
When the users make changes to the object on MVC (e.g., changing 1 field property), how do I send that change from the View, all the way down to the DB?
I would like to have an audit table that saves the changes made to a particular object. What I would like to save is the before and after values of the object, but only for the properties that were modified.
I can think of a few ways to do this:
1) Implement an IsDirty flag for each property for all models in the MVC layer (or in the JavaScript?). Propagate that information all the way back down to the service layer, and finally the data layer.
2) Having this change tracking mechanism within the service layer would be great, but how would I then keep track of the "original" values after the modified values have been passed back from MVC?
3) Database triggers? But I'm not sure how to get started. Is this even possible?
Are there any known object change tracking implementations out there for an n-tier mvc-wcf solution?
Example of the audit table:

    Id  Object    Property  OldValue  NewValue
    --  --------  --------  --------  --------
    1   Customer  Name      Bob       Joe
    2   Customer  Age       21        22
Possible solutions to this problem will depend in large part on what changes you allow in the database while the user is editing the data.
In other words, once it "leaves" the database, is it locked exclusively for the user, or can other users or processes update it in the meantime?
For example, if the user can get the data and sit on it for a couple of hours or days, but the database continues to allow updates to the data, then you really want to track the changes the user has made to the version currently in the database, not the changes that the user made to the data they are viewing.
The way that we handle this scenario is to start a transaction, read the entire existing object, and then use reflection to compare the old and new values, logging the changes into an audit log. This gets a little complex when dealing with nested records, but is well worth the time spent to implement.
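A simplified sketch of that comparison (flat properties only; nested records need recursion, as noted; the AuditEntry type is invented to match the audit table above):

    using System.Collections.Generic;

    public class AuditEntry
    {
        public string Object { get; set; }
        public string Property { get; set; }
        public string OldValue { get; set; }
        public string NewValue { get; set; }
    }

    public static class AuditDiff
    {
        // Compares the public properties of two instances of the same type and
        // returns one audit entry per property whose value differs.
        public static List<AuditEntry> Diff<T>(T original, T updated)
        {
            var entries = new List<AuditEntry>();
            foreach (var prop in typeof(T).GetProperties())
            {
                object oldValue = prop.GetValue(original, null);
                object newValue = prop.GetValue(updated, null);
                if (!object.Equals(oldValue, newValue))
                {
                    entries.Add(new AuditEntry
                    {
                        Object   = typeof(T).Name,
                        Property = prop.Name,
                        OldValue = oldValue == null ? null : oldValue.ToString(),
                        NewValue = newValue == null ? null : newValue.ToString()
                    });
                }
            }
            return entries;
        }
    }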
If, on the other hand, no other users or processes are allowed to alter the data, then you have a couple of different options that vary in complexity, data storage, and impact to existing data structures.
For example, you could modify each property in each of your classes to record when it has changed and keep a running tally of these changes in the class (obviously a base class implementation helps substantially here).
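For instance, a hedged sketch of such a base class (names invented; every derived setter routes through SetProperty, which keeps the running tally):

    using System.Collections.Generic;

    public abstract class TrackedObject
    {
        private readonly Dictionary<string, object> _changes =
            new Dictionary<string, object>();

        public IDictionary<string, object> Changes { get { return _changes; } }
        public bool IsDirty { get { return _changes.Count > 0; } }

        // Records the change whenever a property actually gets a new value.
        protected void SetProperty<T>(ref T field, T value, string propertyName)
        {
            if (!EqualityComparer<T>.Default.Equals(field, value))
            {
                field = value;
                _changes[propertyName] = value; // running tally of what changed
            }
        }
    }

    public class Customer : TrackedObject
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set { SetProperty(ref _name, value, "Name"); }
        }
    }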
However, depending on the point at which you capture the user's changes (every time they update the field in the form, for example), this could generate a substantial amount of non-useful log information because you probably only want to know what changed from the database perspective, not from the UI perspective.
You could also deep clone the object and pass that around the layers. Then, when it is time to determine what has changed, you can again use reflection. However, depending on the size of your business objects, this approach can impose a hefty performance penalty since a complete copy has to be moved over the wire and retained with the original record.
You could also implement the same approach as the "updates allowed while editing" approach. This, in my mind, is the cleanest solution because the original data doesn't have to travel with the edited data, there is no possibility of tampering with the original data and it supports numerous clients without having to support the change tracking in the UI level.
There are two parts to your question:
How to do it in MVC:
The usual way: you send the changes back to the server, a controller handles them, etc.
There is nothing unusual in your use case that mandates a change in the way MVC usually works.
It is better for your use case for the changes to be encoded as individual change operations, rather than as a modified object where you need to use reflection to find out what changes, if any, the user made.
How to do it on the database:
This is probably your intended question:
First of all, stay away from ORM frameworks; life is too complex as it is.
On the last step of the save operation you should have the following information:
The objects and fields that need to change and their new values.
You need to keep track of the following information:
What the last change was to the object you intend to modify in the database.
This can be obtained from the Audit table and needs to be saved in a Session (or Session like object).
Then you need to do the following in a transaction:
Obtain the last change to the object(s) being modified from the database.
If the objects have changed, abort and inform the user of the collision.
If not, obtain the current values of the fields being changed.
Save the new values.
Update the Audit table.
I would use a stored procedure for this to make the process less chatty, and for greater separation of concerns between the database code and the application code.
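For illustration, the same steps sketched from the application side (table and column names are invented; a stored procedure would wrap steps 1-4 in a single round trip):

    using System;
    using System.Data.SqlClient;

    public class AuditedSaver
    {
        private readonly string connectionString;

        public AuditedSaver(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // lastChangeFromSession is the audit id remembered when the object was loaded.
        public void SaveWithAudit(object lastChangeFromSession)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    // 1. Obtain the last change to the object from the Audit table.
                    var check = new SqlCommand(
                        "SELECT MAX(Id) FROM Audit WHERE Object = @obj", conn, tx);
                    check.Parameters.AddWithValue("@obj", "Customer");
                    object lastChange = check.ExecuteScalar();

                    // 2. If it differs from the remembered value, someone else
                    //    changed the object: abort and report the collision.
                    if (!object.Equals(lastChange, lastChangeFromSession))
                    {
                        tx.Rollback();
                        throw new InvalidOperationException(
                            "Collision: the object was modified by someone else.");
                    }

                    // 3./4. Read current values, save the new ones, and append
                    //        the audit rows, all on this transaction (elided).
                    tx.Commit();
                }
            }
        }
    }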
I would like the admins to control default values, and determine whether an input field is defaulted / can be written/seen by users.
A couple of ideas I had were to:
Include one 'default' record that the admins can update, and then grab the values every time a user creates a new entry. In this scenario I'm not sure how to control readonly/view.
Create a structure that uses 'field' objects, and in the 'field', include bools for read-only/viewable, and a field for the actual field type and default. The downside is that the table that holds the users' entries would be a subset of this set of objects. Also I am not sure how complex this structure is going to end up being, with regard to client/server validations, etc.
If it matters, we are using ASP.net MVC3, with Code-First Entity Framework 4.1. Another idea was to change the annotations at runtime, which seems complicated and maybe hard to maintain/easy to screw up.
This is something that I will be implementing soon, so I have been thinking about it some. Here are my ideas. I haven't implemented anything yet or researched which (if any) of these ideas will work, so please take them that way.
First, I figured I would have a stored procedure that would read the data from the security tables in the database and return it in a standardized format. This data could then be put into an object stored in Application state (somewhere that will persist between requests) to be used on future requests.
Next, I would create either editor templates or HTML helpers that would use the stored security information to determine whether to render the field read-only or editable, and whether to display the default value or not.
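As a hedged sketch of the helper idea (FieldSecurity is an invented type representing the security information loaded above):

    using System.Web.Mvc;

    // Invented type holding the per-field security info from the database.
    public class FieldSecurity
    {
        public bool Visible { get; set; }
        public bool ReadOnly { get; set; }
        public string DefaultValue { get; set; }
    }

    public static class SecurityHelpers
    {
        public static MvcHtmlString SecuredTextBox(this HtmlHelper html,
            string name, FieldSecurity security, object currentValue)
        {
            if (!security.Visible)
                return MvcHtmlString.Empty; // hidden from this user entirely

            var tag = new TagBuilder("input");
            tag.MergeAttribute("type", "text");
            tag.MergeAttribute("name", name);
            tag.MergeAttribute("value",
                (currentValue ?? security.DefaultValue ?? "").ToString());
            if (security.ReadOnly)
                tag.MergeAttribute("readonly", "readonly");

            return MvcHtmlString.Create(tag.ToString(TagRenderMode.SelfClosing));
        }
    }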
Again, please remember that these are just my initial thoughts that have not been researched or implemented yet.
Hope this helps out.
This is a general architecture question, hopefully for folks out there already using EF in production applications.
We have a typical N-Tier application:
WPF Client
WCF Services
EF STE DTOs
EF Data Layer
The application loads all known business types at load time (at the same time as the user logs in), then loads a very large "work batch" on demand; this batch is around 4-8 MB and is composed of over 1,000 business objects. When we finish loading this batch we then link everything with the previously loaded business types, etc...
In the end we have around 2K-5K business objects in memory, all correctly referenced, so we can use and abuse LINQ on the client side. We also do some complex math on all these objects on the client side, so we really need the large graph.
The issue comes when we want to save changes to the database. With such a large object graph, we hardly want to send everything over the network again.
Our current approach, which I dislike given the complexity of the T4 templates so far, is to detach and attach everything on update. We basically want to update a given object, detach it from the rest of the graph, send it over the network, update it on the WCF side, and then reattach it again on the client side. The main problem is when you want to update linked objects: let's say you add something that has a reference to something that is also added, then another reference to something modified, etc. This forces a lot of client code to make sure we don't break anything.
All this is done with generated code, so we are talking about 200-800 lines of T4 code per template.
What I'm looking at right now is a way to customize serialization and deserialization of the STEs, so that I can control what is sent over the network or not, and be able to update batches instead of just a single STE: checking references to see whether they are Unchanged; if they are, don't serialize them; if not, serialize and update everything just by attaching it to the context on the WCF side.
After some studying I found two ways to approach this.
One is by writing a custom DataContractSerializer.
The second one is by changing the STE template created by EF and playing around with the KnownTypeAttribute: instead of generating it for each reference type, have it reference a method that inspects the object and only marks for serialization references that are not unchanged.
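For reference, the runtime-method flavor of KnownTypeAttribute looks like this (type names invented; note that the method controls which types the serializer knows about, so per-instance filtering of what actually gets serialized still has to happen elsewhere, e.g. in the template-generated code):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract]
    [KnownType("GetKnownTypes")] // resolved at runtime instead of one attribute per type
    public class CustomerSte
    {
        [DataMember]
        public string Name { get; set; }

        private static IEnumerable<Type> GetKnownTypes()
        {
            // In the modified T4 template, this is where the inspection logic
            // would decide which related types to expose to the serializer.
            return new[] { typeof(AddressSte) };
        }
    }

    [DataContract]
    public class AddressSte { }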
Has anyone ever come across this issue before?
What solutions did you use?
What problems did you encounter down the line?
How easy was it to maintain the templates created?
I don't know the whole application design, but if you generally load the work batch on the service and then send it to the client to play with, it looks like the service layer is somewhat unnecessary and you could load data directly from the database (you would get much better performance). Depending on the complexity of the computation, you could also do some of it directly in the database and again get much better performance.
Your approach of saving only part of the graph is an abuse of the STE concept. STEs work in this manner: you load the graph, modify the graph, and save the same graph. If you want to have a big data set for reading and save only small chunks, it is probably better to load the data set for reading and, once you decide to update a chunk, load only that chunk again, modify it, and send it back.
Interfering with the internal STE behavior is, imho, the best way to lose changes in some corner / unexpected scenarios.
Btw. this somehow looks like a scenario for syncing a local database with a global one - I have never done that, but it is quite common in smart clients.