Change the table that Orchard reads from - C#

I have followed the tutorial on how to write a content part in Orchard CMS:
http://docs.orchardproject.net/Documentation/Writing-a-content-part
My content part writes data from the backend to the record table I wanted, but the backend isn't reading the saved custom content from the same table: when I manually change the record value in the database and refresh the Orchard admin, I don't see the change.
How can I change this?

That documentation article is slightly misleading: while the code it provides does store your data in the table you created in the database, it also stores the data in Orchard's document storage (XML kept in the ContentVersionRecord table, in a column called Data, I believe). So for fetching data Orchard will use the document storage, and for any querying/filtering it will use the data stored in your record. You can change your code so it only stores the data in your table if you'd prefer, for example:
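// Proxy the property straight to the record so reads and writes both go to your table: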
public double Latitude
{
    get { return Record.Latitude; }
    set { Record.Latitude = value; }
}
So yeah, I shall try to update the documentation tonight, because that article is particularly confusing. Have a look at Bertrand's article on Orchard's document storage model, The Shift; it's a useful read.
And I know it's annoying to hear this, but when you are playing with Orchard, it's best to play by its rules. Is there a particular reason you need to modify data directly in the db? Or just playing around?

Related

Regarding WPF, need advice on data sources and data binding

I am not new to WPF, but I am still a rookie. Let's say I want to build an application which stores data about a person in a unique, separate file rather than in a database, sort of like Notepad. My application should do the following things.
It should be able to save a person's info in a unique file.
It should be able to open a user-specified file and auto-fill the properties/form.
How do I achieve this? Is XML binding the only way, or is there another alternative? What I mean is, if I use XML binding I can write code which will enable the user to open and save different XML files, but I also read that binding to XML should be avoided from an architecture perspective. So, is there an alternative solution for my problem?
If you are not planning to use a database, reading from and writing to a CSV (comma-separated values) file can achieve what you want.
Also, if you are planning to have a separate file for each user, that is not a good idea at all.
It's not possible to explain everything here, so please have a look at the link posted below, which explains in detail how to read from and write to a CSV file.
The example has been posted from there; for full details please see the following link: Reading and writing to a csv file
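For illustration, a minimal sketch of the idea (the Person shape, file name, and field order are my assumptions, and it presumes the values contain no commas):
// Requires: using System.IO; using System.Linq;
// Write one person per line as comma-separated values
var lines = people.Select(p => string.Join(",", p.Name, p.Email, p.Age));
File.WriteAllLines("people.csv", lines);

// Read the file back and rebuild the objects
var loaded = File.ReadLines("people.csv")
    .Select(line => line.Split(','))
    .Select(parts => new Person { Name = parts[0], Email = parts[1], Age = int.Parse(parts[2]) })
    .ToList();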
Apparently your requirement is to save person details into a unique file. If you really want to use that approach, one option is XML serialization.
You can create your normal Person object for data binding.
When you want to save data into a specific person's file, you can serialize the object and save the file with a proper name (a person id or so).
When you want to get the Person data back from the file, you can deserialize it directly to a Person object.
// Requires: using System.Xml; and using System.Xml.Serialization;

// Serialize and write to file
Person person = myPerson;
var serializer = new XmlSerializer(person.GetType());
using (var writer = XmlWriter.Create("person1.xml"))
{
    serializer.Serialize(writer, person);
}

// Deserialize back to an instance
var deserializer = new XmlSerializer(typeof(Person));
using (var reader = XmlReader.Create("person1.xml"))
{
    var loadedPerson = (Person)deserializer.Deserialize(reader);
}
For saving user data, such as sessions and settings, there are plenty of ways you can do this.
Saving data to txt files. See here.
Saving data to a database. See here.
My personal favourite, saving to the Settings file. See here.
These are only some of the ways you can save data locally.
Note that I mentioned saving data to a database because it is something you shouldn't completely knock, especially if you will be saving lots of data.
To answer your question more directly, I would suggest that you go with option 3. For relatively small sets of data, like user info and user settings, your best bet is to save them to the built-in Settings file. It's dead easy.
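For example, with a user-scoped setting defined under Project > Properties > Settings (the LastOpenedFile name here is just an illustration):
// Save a value to the user-scoped settings file
Properties.Settings.Default.LastOpenedFile = @"C:\data\person1.xml";
Properties.Settings.Default.Save();

// Read it back on the next run
string lastFile = Properties.Settings.Default.LastOpenedFile;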
Good luck!

What is the most efficient way of monitoring changes to data on an ASP.NET form?

I have an ASP.Net form and I want to send an email when the user changes their data. The email should only include data that has changed, and there are about 15 data fields total.
I don't want to use an ORM since I am updating a website that a 3rd party built for us, and all their data access calls go through a custom library of theirs.
The only ways to do this I can think of are:
Make another database call to get the old values and compare the form values one by one. If they're different, append to the email.
Store the original data somewhere when it's first loaded (hidden field, session, etc.), and once again compare the data one field at a time and append the differences to an email.
Have someone on SO tell me there's an easier and/or simpler way that I haven't thought of
All the text boxes will have a TextChanged event, so you can have them mark themselves as modified. ComboBoxes have a SelectedIndexChanged event, and so on.
Edit: all the changed events can check their initial values (even on reverted changes) and either mark themselves as still modified or, on a revert, as unmodified.
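As a rough sketch of that idea in Web Forms code-behind (the control and field names are hypothetical): on postback, the changed events fire only for controls whose posted value differs from the one saved in view state, and they run before the button's Click handler, so you can collect the differences and build the email in one pass.
// Requires: using System.Collections.Generic;
private readonly List<string> changedFields = new List<string>();

protected void FirstName_TextChanged(object sender, EventArgs e)
{
    changedFields.Add("First name: " + FirstName.Text);
}

protected void Country_SelectedIndexChanged(object sender, EventArgs e)
{
    changedFields.Add("Country: " + Country.SelectedValue);
}

protected void Save_Click(object sender, EventArgs e)
{
    if (changedFields.Count > 0)
    {
        string body = string.Join(Environment.NewLine, changedFields);
        // send the email containing only the changed fields
    }
}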
Here are some suggestions that may / may not be useful:
A trigger on the database table: the trigger compares the old values (using the DELETED table) and the updated values (using the INSERTED table) and then sends an email. This may or may not be viable, and I am not a big advocate of triggers.
As you have already said, you could make another database call, which would be my recommended approach.
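A rough sketch of that second call (hedged, because the custom data-access library mentioned in the question is unknown here, so the method and field names are made up): re-read the stored record on submit and compare it with the posted values field by field.
// Requires: using System.Text;
var oldCustomer = CustomDataAccess.GetCustomer(customerId);   // hypothetical call into the 3rd-party library

var differences = new StringBuilder();
if (oldCustomer.FirstName != FirstNameTextBox.Text)
{
    differences.AppendLine("First name: " + oldCustomer.FirstName + " -> " + FirstNameTextBox.Text);
}
if (oldCustomer.Email != EmailTextBox.Text)
{
    differences.AppendLine("Email: " + oldCustomer.Email + " -> " + EmailTextBox.Text);
}
// ...repeat for the other form fields...

if (differences.Length > 0)
{
    // append differences.ToString() to the notification email body
}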
From what you've said I think that the only way forward is to create a duplicate dataset on the form to store the old data and run a comparison at the point where you want to produce the email.
You can use DataSet.Copy to copy structure and data.
However, now that I think about it, there's always the DataSet.GetChanges() method and DataSet.AcceptChanges(), along with DataSet.HasChanges().
Example code from this link:
if (dataSet.HasChanges(DataRowState.Modified | DataRowState.Added) && dataSet.HasErrors)
{
    // Use GetChanges to extract subset.
    changesDataSet = dataSet.GetChanges(DataRowState.Modified | DataRowState.Added);
    PrintValues(changesDataSet, "Subset values");

    // Insert code to reconcile errors. In this case, reject changes.
    foreach (DataTable changesTable in changesDataSet.Tables)
    {
        if (changesTable.HasErrors)
        {
            foreach (DataRow changesRow in changesTable.Rows)
            {
                //Console.WriteLine(changesRow["Item"]);
                if ((int)changesRow["Item", DataRowVersion.Current] > 100)
                {
                    changesRow.RejectChanges();
                    changesRow.ClearErrors();
                }
            }
        }
    }

    // Add a column to the changesDataSet.
    changesDataSet.Tables["Items"].Columns.Add(new DataColumn("newColumn"));
    PrintValues(changesDataSet, "Reconciled subset values");

    // Merge changes back to first DataSet.
    dataSet.Merge(changesDataSet, false, System.Data.MissingSchemaAction.Add);
}
PrintValues(dataSet, "Merged Values");

Storing Data from Forms without creating 100's of tables: ASP.NET and SQL Server

Let me first describe the situation. We host many Alumni events over the course of each year and provide online registration forms for each event. There is a large chunk of data that is common for each event:
An Event with dates, times, managers, internal billing info, etc.
A Registration record with info about the payment and total amount charged per form submission
Bio/Demographic and alumni data about the 1 or more attendees (name, address, degree, etc.)
We store all of the above data within columns in tables as you would expect.
The trouble comes with the 'extra' fields we are asked to put on the forms. Maybe it is a dinner and there is a Veggie or Carnivore option, perhaps there is lodging and there are bed or smoking options, or perhaps there is an optional transportation option. There are tons of weird little "can you add this to the form?" types of requests we receive.
Currently, we JSONify any non-standard data and store it all in one column (per attendee) called 'extras'. We can read this data out in code but it is not well suited to querying. Our internal staff would like to generate a quick report on Veggie dinners needed for instance.
Other than creating a separate table for each form to hold the specific 'extra' data items, are there any other approaches that could make my life (and reporting) easier? Is anyone working in a similar environment?
This is actually one of the toughest problems to solve efficiently. The SQL Server Customer Advisory Team has dedicated a white paper to the topic, which I highly recommend you read: Best Practices for Semantic Data Modeling for Performance and Scalability.
You basically have 3 options:
semantic database (entity-attribute-value)
XML column
sparse columns
Each solution comes with ups and downs. Off the top of my head I'd say XML is probably the one that gives you the best balance of power and flexibility, but the optimal solution really depends on lots of factors: data set sizes, the frequency at which new attributes are created, the actual process (human operators) that creates, populates, and uses these attributes, and not least your team's skill set (some teams might fare better with an EAV solution, some with an XML solution). If the attributes are created/managed under a central authority and adding new attributes is a reasonably rare event, then sparse columns may be a better answer.
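As a small sketch of why the XML column helps with the quick-report case (the table, column, and element names here are assumptions, not from the question): SQL Server's value() method lets you filter on a field inside the XML straight from a query.
// Requires: using System.Data.SqlClient;
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT COUNT(*)
      FROM AttendeeRegistrations
      WHERE Extras.value('(/extras/dinner)[1]', 'varchar(20)') = 'Veggie'", conn))
{
    conn.Open();
    int veggieDinners = (int)cmd.ExecuteScalar();
}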
Well, you could also have the following DB structure:
Have a table to store custom attributes
AttributeID
AttributeName
Have a mapping table between events and attributes with:
AttributeID
EventID
AttributeValue
This means you will be able to store custom information per event, and you will be able to reuse your attributes. You can include some metadata such as
AttributeType
AllowBlankValue
on each attribute so it can be handled easily afterwards.
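For example, with that structure the extras for a given event come back from a single join (the column names follow the answer above; the mapping table name and the rest are a sketch):
// Requires: using System.Data.SqlClient;
const string sql = @"
    SELECT a.AttributeName, m.AttributeValue
    FROM EventAttributeMap m
    INNER JOIN Attributes a ON a.AttributeID = m.AttributeID
    WHERE m.EventID = @eventId";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@eventId", eventId);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("{0} = {1}", reader.GetString(0), reader.GetString(1));
        }
    }
}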
Have you considered using XML instead of JSON? Difference: XML is supported (special data type) and has query integration ;)
Quick and dirty, but actually nice for querying: simply add new columns. It's not like the empty entries in the previous table should cost a lot.
A more database-y solution: you'll have something like an event ID in your table. You can link this to an n:m table connecting events to additional fields, and then store the additional field data in a table with additional_field_id, record_id (from the original table) and the actual value. Probably creates ugly queries, but seems politically correct in terms of database design.
I understand "NoSQL" (not only SQL ;)) databases like CouchDB let you store arbitrary fields per record, but since you're already on SQL Server, I guess that's not an option.
This is the solution we first proposed in ASP.NET Forums (which later became Community Server), and that the ASP.NET team built a similar version of into ASP.NET 2.0 Membership when they released it:
Property Bags on your domain objects
For example:
Event.Profile() or in your case, Event.Extras().
Basically, a property bag is a serialized collection of data stored as name/value pairs in a column (or columns). The ASP.NET 2.0 Membership went the route of storing the names in a semicolon-delimited list, and the values in the same way:
Table: aspnet_Profile
Column: PropertyNames (separated by semi-colons, and has start index and end index)
Column: PropertyValues (separated by semi-colons, and only stores the string value)
The downside to that approach is that it is all strings and has to be parsed manually (even though the membership system does it for you automatically).
More recently, my method has been to build FormCollection and NameValueCollection C# extension methods that automatically serialize the collections to an XML result, and I store that XML in its own column in the table, associated with that entity. I also have a deserializer C# extension method on XElement that deserializes the data back to the collection at runtime.
This gives you the power of actually querying those properties in the XML via SQL (though that can be slow, so always flatten out your read-only data).
The final note is on runtime querying: the general rule we follow is that if you are going to query a property of an entity in normal application logic, you move that property to an actual column on the table and create the appropriate indexes. If that data will never be queried directly (for example, via Linq-to-Sql or EF), then leave it in the XML property bag.
Property bags give you the power of extending your domain models however you like, without having to modify the DB schema.
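To give a rough idea of what such extension methods can look like (the names below are mine, not the original code): serialize the collection to an XElement for the XML column, and rebuild the collection when reading it back at runtime.
// Requires: using System.Collections.Specialized; using System.Xml.Linq;
public static class PropertyBagExtensions
{
    // Serialize a name/value collection to <properties><property name="...">value</property></properties>
    public static XElement ToXml(this NameValueCollection values)
    {
        var root = new XElement("properties");
        foreach (string key in values)
        {
            root.Add(new XElement("property",
                new XAttribute("name", key),
                values[key] ?? string.Empty));
        }
        return root;
    }

    // Deserialize the XML back into a collection at runtime
    public static NameValueCollection ToNameValueCollection(this XElement root)
    {
        var values = new NameValueCollection();
        foreach (var property in root.Elements("property"))
        {
            values.Add((string)property.Attribute("name"), property.Value);
        }
        return values;
    }
}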

How to insert data into a DB as a serialized object

My basic question is how to insert data into the DB as a serialized object, and how to extract and use it afterwards. Any suggestions?
e.g.:
{id:1, userId:1, type:PHOTO, time:2008-10-15 12:00:00, data:{photoId:2089, photoName:A trip to the beach}}
As you see, how could I insert data into the Data column and then use it?
Another question: if I store the photoName inside Data, instead of using JOINs to get the name from its own table (photos) according to its id, then a later update to the photoName will not be reflected (right?). Besides that, I won't be able to make a relation between the photos table and the current table (Id => photoId) if I store data like that. Part of the problem is that I don't know exactly what kind of information is going to be stored in the Data column, so I can't create a separate column for every type of information.
Typically I see two options for you here.
You can store an XML-serialized object in the database using standard XML serialization; here is an example that you can adapt for your needs.
You can create a true table for this object, and do things the "Standard" way.
With option 1, filtering/joining/searching on the information in the "data" column, although still technically possible, is NOT something I would recommend; it is more suited to a static storage process in my opinion, something like a user settings entity or some other item that is VERY unlikely to be needed in a backend query.
With option 2, yes, you have to do more work, but if you define the object well, it will be possible.
Clarification
With regards to my example in #1 above, you would write out to a memory stream or a string for the serialization rather than a file.
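For example, a minimal sketch of that clarification (the Photo type and myPhoto instance are placeholders):
// Requires: using System.IO; using System.Xml.Serialization;
var serializer = new XmlSerializer(typeof(Photo));
string xml;
using (var writer = new StringWriter())
{
    serializer.Serialize(writer, myPhoto);
    xml = writer.ToString();
}
// xml can now be stored in an xml or nvarchar(max) column via a command parameter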
If you don't want to store the data relationally, you're really better off not using a relational database. Several object databases speak JSON and would be able to handle this kind of problem pretty easily.
You can store it as a JSON string and use the JSONSerializer of a JSON library,
http://json-lib.sourceforge.net/apidocs/index.html
to convert a JavaBean into a JSON string/object and vice versa.
Generally we use this to store configuration where the number of config parameters is unknown.
Regarding saving an object in your database: you can serialize your object into XML using XDocument.ToString() and save it in a column of the database's xml data type.
cmd.Parameters.AddWithValue("@Value", xmldoc.ToString());
Check out Work with XML Data Type in SQL Server.

Mid-Tier Help Needed

In one sentence, what I ultimately need to know is how to share objects between mid-tier functions without requiring the application tier to pass the data model objects around.
I'm working on building a mid-tier layer in our current environment for the company I am working for. Currently we are using primarily .NET for programming and have built custom data models around all of our various database systems (ranging from Oracle, OpenLDAP, MSSQL, and others).
I'm running into issues trying to pull our model from the application tier and move it into a series of mid-tier libraries. The main issue I'm running into is that the application tier has the ability to hang on to a cached object throughout the duration of a process and make updates based on the cached data, but the Mid-Tier operations do not.
I'm trying to keep the model objects out of the application as much as possible, so that when we make a change to the underlying database structure we can edit and redeploy the mid-tier easily and multiple applications will not need to be rebuilt. I'll give a brief illustration of the issue in pseudo-code, since that is what us developers understand best :)
main
{
    MidTierServices.UpdateCustomerName("testaccount", "John", "Smith");

    // Since the data takes up to 4 seconds to be replicated from the
    // write server to the read server, the function below is going to
    // grab old data that does not contain the first name and last
    // name update... John Smith will be overwritten with previous data.
    MidTierServices.UpdateCustomerPassword("testaccount", "jfjfjkeijfej");
}

MidTierServices
{
    void UpdateCustomerName(string username, string first, string last)
    {
        Customer custObj = DataRepository.GetCustomer(username);

        /*******************
         validation checks and business logic go here...
        *******************/

        custObj.FirstName = first;
        custObj.LastName = last;
        DataRepository.Update(custObj);
    }

    void UpdateCustomerPassword(string username, string password)
    {
        // does not contain first and last updates
        Customer custObj = DataRepository.GetCustomer(username);

        /*******************
         validation checks and business logic go here...
        *******************/

        custObj.Password = password;

        // overwrites changes made by other functions since data is stale
        DataRepository.Update(custObj);
    }
}
On a side note, the options I've considered are: building a home-grown caching layer, which takes a lot of time and is a very difficult concept to sell to management; or using a different modeling layer that has built-in caching support, such as NHibernate. The latter would also be hard to sell to management, because it would take a very long time to tear apart our entire custom model and replace it with a third-party solution. Additionally, not a lot of vendors support our large array of databases. For example, .NET has LINQ to ActiveDirectory, but not a LINQ to OpenLDAP.
Anyway, sorry for the novel, but it's more of an enterprise architecture type question, and not a simple code question such as "How do I get the current date and time in .NET?"
Edit
Sorry, I forgot to add some very important information in my original post. I feel very bad, because Cheeso went through a lot of trouble to write a very in-depth response which would have fixed my issue were there not more to the problem (which I stupidly did not include).
The main reason I'm facing the current issue is in concern to data replication. The first function makes a write to one server and then the next function makes a read from another server which has not received the replicated data yet. So essentially, my code is faster than the data replication process.
I could resolve this by always reading and writing to the same LDAP server, but my admins would probably murder me for that. They specifically set up one server that is only used for writing and then four other servers, behind a load balancer, that are only used for reading. I'm in no way an LDAP administrator, so I'm not aware whether that is standard procedure.
You are describing a very common problem.
The normal approach to address it is through the use of Optimistic Concurrency Control.
If that sounds like gobbledegook, it's not. It's a pretty simple idea. The concurrency part of the term refers to the fact that there are updates happening to the data of record, and those updates are happening concurrently, possibly from many writers. (Your situation is a degenerate case where a single writer is the source of the problem, but it's the same basic idea.) The optimistic part I'll get to in a minute.
The Problem
It's possible, when there are multiple writers, for the read+write portions of two updates to become interleaved. Suppose you have A and B, both of whom read and then update the same row in a database. A reads the database, then B reads the database, then B updates it, then A updates it. If you take a naive approach, the "last write" wins, and B's writes may be destroyed.
Enter optimistic concurrency. The basic idea is to presume that the update will work, but check. Sort of like the "trust but verify" approach to arms control from a few years back. The way to do this is to include a field in the database table, which must also be included in the domain object, that provides a way to distinguish one "version" of the DB row or domain object from another. The simplest is a timestamp field, named lastUpdate, which holds the time of the last update. There are other, more complex ways to do the consistency check, but a timestamp field is good for illustration purposes.
Then, when the writer or updater wants to update the DB, it can only update the row for which the key matches (whatever your key is) and for which lastUpdate also matches. This is the verify part.
Since developers understand code, I'll provide some pseudo-SQL. Suppose you have a blog database, with an index, a headline, and some text for each blog entry. You might retrieve the data for a set of rows (or objects) like this:
SELECT ix, Created, LastUpdated, Headline, Dept FROM blogposts
WHERE CONVERT(Char(10), Created, 102) = @targdate
This sort of query might retrieve all the blog posts in the database for a given day, or month, or whatever.
With simple optimistic concurrency, you would update a single row using SQL like this:
UPDATE blogposts SET Headline = @NewHeadline, LastUpdated = @NewLastUpdated
WHERE ix = @ix AND LastUpdated = @PriorLastUpdated
The update can only happen if the index matches (and we presume that's the primary key) and the LastUpdated field is the same as what it was when the data was read. Also note that you must ensure the LastUpdated field is set on every update to the row.
A more rigorous update might insist that none of the columns had been updated. In this case there's no timestamp at all. Something like this:
UPDATE Table1 SET Col1 = @NewCol1Value,
                  Col2 = @NewCol2Value,
                  Col3 = @NewCol3Value
WHERE Col1 = @OldCol1Value AND
      Col2 = @OldCol2Value AND
      Col3 = @OldCol3Value
Why is it called "optimistic"?
OCC is used as an alternative to holding database locks, which is a heavy-handed approach to keeping data consistent. A DB lock might prevent anyone from reading or updating the DB row while it is held. This obviously has huge performance implications. So OCC relaxes that and acts "optimistically", by presuming that when it comes time to update, the data in the table will not have been updated in the meantime. But of course it's not blind optimism: you have to check right before the update.
Using Optimistic Concurrency in practice
You said you use .NET. I don't know if you use DataSets for your data access, strongly typed or otherwise. But .NET DataSets, or specifically DataAdapters, include built-in support for OCC. You can specify and hand-code the UpdateCommand for any DataAdapter, and that is where you can insert the consistency checks. This is also possible within the Visual Studio design experience.
If you get a violation, the update will return a result showing that ZERO rows were updated. You can check this in the DataAdapter.RowUpdated event. (Be aware that in the ADO.NET model, there's a different DataAdapter for each sort of database. The link there is for SqlDataAdapter, which works with SQL Server, but you'll need a different DA for different data sources.)
In the RowUpdated event, you can check for the number of rows that have been affected, and then take some action if the count is zero.
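For a plainer sketch of the same check without a DataAdapter (reusing the blogposts example above; connectionString and the local variables are assumed to exist): run the UPDATE with the consistency condition in the WHERE clause and treat zero affected rows as a conflict.
// Requires: using System.Data.SqlClient;
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"UPDATE blogposts SET Headline = @NewHeadline, LastUpdated = @NewLastUpdated
      WHERE ix = @ix AND LastUpdated = @PriorLastUpdated", conn))
{
    cmd.Parameters.AddWithValue("@NewHeadline", newHeadline);
    cmd.Parameters.AddWithValue("@NewLastUpdated", DateTime.UtcNow);
    cmd.Parameters.AddWithValue("@ix", ix);
    cmd.Parameters.AddWithValue("@PriorLastUpdated", priorLastUpdated);

    conn.Open();
    int affected = cmd.ExecuteNonQuery();
    if (affected == 0)
    {
        // Concurrency violation: the row changed since it was read.
        // Re-read the current data and retry, or report the conflict.
    }
}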
Summary
Verify the contents of the database have not been changed, before writing updates. This is called optimistic concurrency control.
Other links:
MSDN on Optimistic Concurrency Control in ADO.NET
Tutorial on using SQL Timestamps for OCC
