I'm currently trying to implement a table within my SQL database. I'm looking to create a table that can be used to check if a user on my website has liked a post. The idea is to have a table with one axis listing the posts on the website and the other axis listing the userID values. Each cell would then hold a binary value indicating whether that user has liked that post. I'm just wondering how I would implement this. I have been doing this in C# by creating classes and converting these into server-side code using Entity Framework 6.4.0.
Any help would be great.
What you are suggesting is not a normalized structure for your use case; it would, for example, require adding more columns to the table every time a post is added to the database (or a user, depending on whether you use rows or columns).
A typical database solution would be a bridge table that represents the many-to-many relationship between posts and users.
Say table user_like_posts, with the following columns:
user_id -- foreign key to the "users" table
post_id -- foreign key to the "posts" table
You may want to add additional columns to the bridge table, like the timestamp when the user liked the post, or the like.
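A minimal sketch in T-SQL (assuming the users and posts tables have integer id keys; liked_at is just an example of such an extra column):

CREATE TABLE user_like_posts (
    user_id  INT NOT NULL REFERENCES users (id),
    post_id  INT NOT NULL REFERENCES posts (id),
    liked_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),  -- when the user liked the post
    PRIMARY KEY (user_id, post_id)                          -- at most one like per user per post
);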
Will every user have an opinion on every post? If not, then you don't have the grid of data you described. If users and posts are not related one-to-one, then you have a simple relation. For each post that a user likes (or dislikes?) there is an entry for that user:
Likes/Dislikes Table:
User identifier
Post identifier
The binary value that indicates like or dislike
If the table only indicates 'likes' then you don't need the last column.
A design like this would work even if every user and every post is in this table. The table might get large in a hurry and keep growing every time you introduce a new post. But if this table only includes actual 'likes' (and/or 'dislikes') it should be manageable.
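A sketch of the lookup this design gives you, assuming a hypothetical likes table shaped as above:

-- has user 42 liked post 7? EXISTS stops at the first matching row
SELECT CASE WHEN EXISTS (
        SELECT 1 FROM likes WHERE user_id = 42 AND post_id = 7
    ) THEN 1 ELSE 0 END AS has_liked;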
For a class, you just have an enumerable that holds the posts 'liked' (and possibly another that holds the posts 'disliked').
Think about what you are trying to represent. Ask yourself questions. Don't just latch on to an idea and try to 'do' it.
Will every user have an opinion of every post?
Do you need to store both 'likes' and 'dislikes?'
Can there be a 'neutral' opinion on a post?
Can users change their opinions?
You can only discover the correct data structure by asking and answering all the questions that matter to your situation (my list is not exhaustive - it is only an example.)
Related
I have built a CRUD app using C# which is used to input cases against employees.
I'm trying to figure out a way to link cases together if necessary.
e.g. a big fight breaks out:
Tom was involved and has a case raised against him (caseID: 1)
Mark was also involved and has a case raised against him (caseID: 2)
Steve was also involved and has a case raised against him (caseID: 3)
(All the above stored in a single table)
As this was the same fight we want to link all the cases together.
How would you suggest I store this in another table?
Yes! You apparently need an "incident id" of some sort. This would be a different table with one row per incident. If the only incidents are fights, then this would be a fights table.
The incident id would then be related to each of the "cases" that you have.
Information about the incident would be in the incident table. That would probably include information such as type, date/time, location, and so on. In fact, what you are calling "cases" might simply be incidentParticipants or something like that.
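A minimal sketch, with hypothetical table and column names:

CREATE TABLE incidents (
    incident_id   INT IDENTITY PRIMARY KEY,
    incident_type VARCHAR(50) NOT NULL,   -- e.g. 'fight'
    occurred_at   DATETIME2 NULL,
    location      VARCHAR(100) NULL
);

CREATE TABLE cases (
    case_id     INT IDENTITY PRIMARY KEY,
    employee_id INT NOT NULL,                                    -- who the case was raised against
    incident_id INT NOT NULL REFERENCES incidents (incident_id)  -- links related cases together
);

-- all cases arising from the same incident (the fight)
SELECT case_id FROM cases WHERE incident_id = 1;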
Looking for examples/tutorials for custom user fields, not via EAV.
EAV is going to be problematic for various reasons, such as performance.
There are many base entities/tables, with over 100,000 records each.
There will likely be over a dozen attributes.
The records are to be displayed in a flat UI grid including custom fields, so flattening them would be an issue while maintaining performance.
Looking at enabling this via DDL, where all custom fields would go into a matching table such as
<tablename>_custom_<userid>
and all user attributes would map to a column each, with all their metadata stored in a metadata table.
The retrieval would be simple; the query would just be:
select *
from <tablename> A
join <tablename>_custom_<userid> B
    on B.KeyField = A.KeyField -- (perhaps using an outer join; haven't gone that far yet)
Wondering if there are any gotchas down the road that I need to be aware of?
Of course, any samples/pointers would be helpful to kickstart the effort.
Specifically, I would appreciate any advice on using DDL with SQL Server Compact 4.
One technique I have seen used is a sort of 'hard-coded' EAV pattern. Don't hang up! It worked well with the dataset sizes you're talking about and didn't actually use EAV; it was only EAV-esque.
The idea is to have a set of tables to store these custom attributes, with some triggers (described below) on them. The custom attribute tables store metadata about each attribute (what table it goes with, data type, constraints, etc.). You can get very fancy with this, but I did not have the need.
The triggers on your meta-tables are there to re-generate views that rollup base+extension into first class objects within the DB. So instead of table person + employee extension table, you have an employee view that includes both. When you drop a new value into the custom attributes tables, the triggers will re-roll the views and include the new stuff. If you wanted to go nuts, you could also have the triggers re-write stored procedures as well. Depending on how your mid-tier code is structured, you would still be forced to re-code some, however this would be the case anyway should you be applying rules that read the data.
In testing, I found that for the relatively small # of records you're talking about, performance was somewhat slower but followed roughly the same pattern of degradation (2x the number of records, ~2x as slow).
-- edits --
How I saw it done, you had a table that represented your first class objects, so a row for 'person' and a row for 'employee,' etc. We'll call that FCO. Then you had a secondary table that stored which tables represented each FCO. We'll call that Srcs. For Person, there would be one row, which is the person table. For Employee, there would be two rows, the person table and the employee extension. There is a third table, called Attribs, which stores the columns from the tables that constitute the FCO. For simplicity, we'll say Person has ID, Name, and Address, and Employee adds Hire Date and Department, and obviously PersonID referring back to the Person table. So: 2 rows in the FCO table (person and employee), 3 rows in the Srcs table, 8 rows in Attribs.
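A minimal sketch of those three meta-tables (the exact column layout is an assumption):

CREATE TABLE FCO (
    fco_id   INT IDENTITY PRIMARY KEY,
    fco_name SYSNAME NOT NULL            -- 'Person', 'Employee', ...
);

CREATE TABLE Srcs (
    src_id     INT IDENTITY PRIMARY KEY,
    fco_id     INT NOT NULL REFERENCES FCO (fco_id),
    table_name SYSNAME NOT NULL          -- a physical table contributing to the FCO
);

CREATE TABLE Attribs (
    attrib_id   INT IDENTITY PRIMARY KEY,
    src_id      INT NOT NULL REFERENCES Srcs (src_id),
    column_name SYSNAME NOT NULL         -- a column exposed through the FCO's view
);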
The view, we'll call it vw_Employee, selects PersonID, Name, Address, Hire Date, Department from the two tables. It is built by a SQL stored procedure we'll call OnMetadataChange.
This SP is fired (by trigger or batch process), and its purpose is to generate the CREATE VIEW statements. It will iterate through every First Class Object, collect which fields from which tables constitute the view, and issue a CREATE statement based on that. So OnMetadataChange produces a DROP and a CREATE for each view; it generates a dynamic SQL statement that is executed once per entry in the FCO table. It is preferable to do this with triggers, but not necessary. Hopefully your FCO definitions won't change too often, and when they do, there will probably be a code release as well. You can run your OnMetadataChange SP at that time.
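For the Person/Employee example, the statements OnMetadataChange emits might look like this (a sketch; the names and the exact rollup are assumptions):

-- each batch is built as a string from the meta-tables and run as dynamic SQL
IF OBJECT_ID('vw_Employee', 'V') IS NOT NULL
    EXEC('DROP VIEW vw_Employee');
EXEC('CREATE VIEW vw_Employee AS
      SELECT e.PersonID, p.Name, p.Address, e.HireDate, e.Department
      FROM Person AS p
      JOIN Employee AS e ON e.PersonID = p.ID');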
The end result is a 2-layer database. The views constitute the First Class Object layer, which is meaningful to the application. The application only uses views. The tables constitute the 'physical' layer, which the application shouldn't care about. The meta-tables are essentially your mapping between the FCO layer and the physical layer. It takes some time to set it up, but it's quite effective, and gives you many of the benefits of EAV, while at the same time giving you the concrete benefits of 3NF tables (indexability, etc).
If you'd like I can throw some sample SQL out there.
Part of the problem you are having is that you are trying to store schema-less data in a SQL database, which is not its strength. There are three approaches that would make your life far easier:
1) Have a column which stores the serialized custom fields, in whatever format is most convenient. For example, this column could store XML. Upsides are that you can use SQL Server Compact and pulling back a record is trivial. Downsides are that you always have to pull/push the entire XML blob to do an update, and it is difficult to impossible to query on any custom fields.
2) Upgrade to SQL Server Express, and use XML columns. This is nearly the same as the first suggestion, except that any server-ready version of SQL Server has native support for XML data. These columns can have indexes added, and fields within the data can be used in queries (see the sketch after this list).
3) Use a Schema-less Database, like MongoDB or CouchDB. These databases are all about storing schemaless data, so your custom fields will be no different than any other field. As such, you can index and query custom fields. Upsides are that custom data is incredibly easy to work with, downsides are that you would have to spend some time rethinking how you store data to fit within their model.
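A sketch of option 2, with a hypothetical custom_fields XML column added to one of the base tables:

ALTER TABLE base_table ADD custom_fields XML NULL;

-- pull a custom field out of the XML, assuming a shape like <fields><priority>High</priority></fields>
SELECT b.KeyField,
       b.custom_fields.value('(/fields/priority)[1]', 'varchar(50)') AS priority
FROM base_table AS b
WHERE b.custom_fields.exist('/fields/priority[. = "High"]') = 1;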
If you do not need to query based on custom fields, or if you can query custom fields within business logic, then the first option can work for you. In any other case, I would err towards something with more capabilities than Compact. If cost is the deciding factor, both SQL Server Express and MongoDB are free.
Before inserting a new value into a table, I need to change one field in all rows of that table.
What is the best way to do this: in C# code, or with a trigger? If C#, can you show me the code?
UPD
*NEW VERSION of Question*
Hello. Before inserting a new value into a table, I need to change one field in all rows of that table that have a specific ID (it is an FK to another table).
What is the best way to do this: in C# code, or with a trigger? If C#, can you show me the code?
You should probably consider changing your design; this doesn't sound like it will scale well. I would probably do it with a trigger if it is always required, but if not, I'd use ExecuteCommand:
using (var ctx = new MyDataContext())
{
    ctx.ExecuteCommand("UPDATE myTable SET foo = 'bar'");
}
Looking at your comment on Paul's answer, I feel like I should chime in here. We have a few tables where we need to keep a history of each entry in that table. We implement this by creating a separate table for each. For example, we may have a Comment table, and then a CommentArchive table with a foreign key reference to the CommentId in the Comment table.
A trigger on the Comment table ensures that each time certain fields in the Comment table are updated, the "old" version (which is accessible via the deleted table in the trigger) gets pushed to the CommentArchive table. Obviously, this means several CommentArchive entries may exist for each Comment, but if you're only looking for the "active" comments, you just look in the Comment table. And if you need information about the history of a comment, you can easily use LINQ to SQL to jump from the Comment you're interested in to the CommentArchives that reference it.
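A minimal sketch of such a trigger, assuming hypothetical columns on Comment and CommentArchive:

CREATE TRIGGER trg_Comment_Archive
ON Comment
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-update rows; copy them into the archive
    INSERT INTO CommentArchive (CommentId, Body, ModifiedAt)
    SELECT CommentId, Body, ModifiedAt
    FROM deleted;
END;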
Because the triggers we use in the above example only insert a single value into the archive table for each update, they run very quickly and we get good performance. We had issues recently when I tried making the triggers more complex, and we started getting deadlocks with as few as 15 concurrent transactions. So the lesson is that you should make these triggers simple, and make them touch as few rows in as few tables as possible.
Let me first describe the situation. We host many Alumni events over the course of each year and provide online registration forms for each event. There is a large chunk of data that is common for each event:
An Event with dates, times, managers, internal billing info, etc.
A Registration record with info about the payment and total amount charged per form submission
Bio/Demographic and alumni data about the 1 or more attendees (name, address, degree, etc.)
We store all of the above data within columns in tables as you would expect.
The trouble comes with the 'extra' fields we are asked to put on the forms. Maybe it is a dinner and there is a Veggie or Carnivore option, perhaps there is lodging and there are bed or smoking options, or perhaps there is an optional transportation choice. There are tons of weird little "can you add this to the form?" requests we receive.
Currently, we JSONify any non-standard data and store it all in one column (per attendee) called 'extras'. We can read this data out in code but it is not well suited to querying. Our internal staff would like to generate a quick report on Veggie dinners needed for instance.
Other than creating a separate table for each form that holds the specific 'extra' data items, are there any other approaches that could make my life (and reporting) easier? Anyone working in a similar environment?
This is actually one of the toughest problems to solve efficiently. The SQL Server Customer Advisory Team has dedicated a white paper to the topic, which I highly recommend you read: Best Practices for Semantic Data Modeling for Performance and Scalability.
You basically have 3 options:
semantic database (entity-attribute-value)
XML column
sparse columns
Each solution comes with ups and downs. Off the top of my head, I'd say XML is probably the one that gives you the best balance of power and flexibility, but the optimal solution really depends on lots of factors, like data set sizes, the frequency at which new attributes are created, the actual process (human operators) that creates, populates, and uses these attributes, and not least your team's skill set (some might fare better with an EAV solution, some might fare better with an XML solution). If the attributes are created/managed under a central authority and adding new attributes is a reasonably rare event, then sparse columns may be the better answer.
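If sparse columns fit your case (option 3 above), a minimal sketch with hypothetical names:

CREATE TABLE attendee_extras (
    attendee_id     INT PRIMARY KEY,
    meal_choice     VARCHAR(20) SPARSE NULL,  -- NULLs in sparse columns take no storage
    smoking_room    BIT SPARSE NULL,
    needs_transport BIT SPARSE NULL,
    all_extras      XML COLUMN_SET FOR ALL_SPARSE_COLUMNS  -- optional rolled-up view of all sparse values
);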
Well, you could also have the following DB structure:
Have a table to store custom attributes
AttributeID
AttributeName
Have a mapping table between events and attributes with:
AttributeID
EventID
AttributeValue
This means you will be able to store custom information per event, and you will be able to reuse your attributes. You can include some metadata, such as:
AttributeType
AllowBlankValue
on the attribute, to make it easy to handle afterwards.
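In SQL, that structure might look like this (a sketch, assuming an existing Event table keyed by EventID):

CREATE TABLE Attribute (
    AttributeID     INT IDENTITY PRIMARY KEY,
    AttributeName   VARCHAR(100) NOT NULL,
    AttributeType   VARCHAR(20) NOT NULL,    -- metadata, e.g. 'text' or 'bool'
    AllowBlankValue BIT NOT NULL DEFAULT 1
);

CREATE TABLE EventAttribute (
    AttributeID    INT NOT NULL REFERENCES Attribute (AttributeID),
    EventID        INT NOT NULL REFERENCES Event (EventID),
    AttributeValue VARCHAR(MAX) NULL,
    PRIMARY KEY (AttributeID, EventID)       -- one value per attribute per event
);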
Have you considered using XML instead of JSON? Difference: XML is supported (special data type) and has query integration ;)
Quick and dirty, but actually nice for querying: simply add new columns. It's not like the empty entries in the previous table should cost a lot.
A more database-y solution: you'll have something like an event ID in your table. You can link this to an n:m table connecting events to additional fields, and then store the additional field data in a table with additional_field_id, record_id (from the original table), and the actual value. Probably creates ugly queries, but seems politically correct in terms of database design.
I understand "NoSQL" (not only SQL ;) databases like CouchDB let you store arbitrary fields per record, but since you're already on SQL Server, I guess that's not an option.
This is the solution that we first proposed in ASP.NET Forums (that later became Community Server), and that the ASP.NET team built a similar version of in the ASP.NET 2.0 Membership when they released it:
Property Bags on your domain objects
For example:
Event.Profile() or in your case, Event.Extras().
Basically, a property bag is a serialized collection of data stored as name/value pairs in a column (or columns). The ASP.NET 2.0 Membership went the route of storing names in a semicolon-delimited list, and values in the same way:
Table: aspnet_Profile
Column: PropertyNames (separated by semi-colons, and has start index and end index)
Column: PropertyValues (separated by semi-colons, and only stores the string value)
The downside to that approach is that it is all strings and has to be parsed manually (even though the membership system does it for you automatically).
More recently, my method has been to build FormCollection and NameValueCollection C# extension methods that automatically serialize the collections to an XML result. I store that XML in the table, in its own column associated with that entity. I also have a deserializer C# extension on XElement that deserializes that data back to the collection at runtime.
This gives you the power of actually querying those properties in XML via SQL (though that can be slow; always flatten out your read-only data).
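For illustration, if the serialized bag were stored in an XML column named Extras on the Registration table and shaped like <extras><item name="Meal" value="Veggie"/></extras> (an assumed shape, not necessarily what my extensions emit), a report query could be:

SELECT COUNT(*) AS veggie_dinners
FROM Registration AS r
WHERE r.Extras.exist('/extras/item[@name="Meal" and @value="Veggie"]') = 1;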
The final note is runtime querying: The general rule we follow is, if you are going to query a property of an entity in normal application logic, then you move that property to an actual column on the table - and create the appropriate indexes. If that data will never be queried directly (for example, Linq-to-Sql or EF), then leave it in the XML Property Bag.
Property Bags gives you the power of extending your domain models however you like, without having to modify the db schema.
I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to determine whether other users fit into the countries a certain user chose.
The question is: what would be the most efficient approach to this problem?
I have one idea: save the user's selection as a delimited list like Canada,USA,France ... in a single varchar(max) field. But the problem with that would come when a user from, say, Germany enters a page I perform this check on: to search for Germany I would need to fetch all items and un-delimit each field to check against the value, or use SQL LIKE, which again is pretty damn slow.
If you have a better solution or some tips, I would be glad to hear them.
Just to make sure: many users will have their own selections of countries from which, and only from which, they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better.
Technology: MSSQL and ASP.NET.
Thanks.
You should not store a list of values in one cell. Consider having a separate table that stores each of the selected countries with a foreign key reference to the user table. This is standard Database Normalization.
PLEASE don't go down the route you're thinking of, storing multiple entries in one field. I've had to re-write more applications because of bad database design than for any other reason, and that is a bad design.
Added
I have this poster on my wall at work: http://www.informationqualitysolutions.com/FreeStuff/rettigNormalizationPoster.pdf
One of my predecessors was a newbie to DB Design, and this helped her a lot. I keep it for any new hires that may need it. It explains normalization very nicely, with examples.
Do not save delimited fields into your database. Your database will not be normalized.
You need a many-to-many table for users and countries:
UserId
CountryId
If you do start using a delimited field, you end up needing to parse it (either in SQL or in your code), and it is more difficult to query and optimize.
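A minimal sketch, assuming Users and Countries tables with integer keys:

CREATE TABLE UserCountries (
    UserId    INT NOT NULL REFERENCES Users (UserId),
    CountryId INT NOT NULL REFERENCES Countries (CountryId),
    PRIMARY KEY (UserId, CountryId)
);

-- may a visitor from country 49 land on user 7's page? (an indexed seek, no LIKE scan)
SELECT CASE WHEN EXISTS (
        SELECT 1 FROM UserCountries WHERE UserId = 7 AND CountryId = 49
    ) THEN 1 ELSE 0 END AS allowed;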
In this case, you will want to create a table called UserCountries (or some such) which would store the UserID and CountryID. This is a standard relational construct. To beginners it seems strange and too involved, but this structure makes it very easy and very fast to write flexible queries against this type of data. No delimiting required!
I think it would be better to use a UserCountry table, which contains a link to the User and the Country table. This creates a lot more possibilities to query against the database. Example queries that are much simpler this way (sketched after the list):
Number of Countries per user
All users who selected a particular country
Countries sorted by popularity
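Sketches of those three queries against a hypothetical UserCountry table:

-- number of countries per user
SELECT UserId, COUNT(*) AS country_count
FROM UserCountry
GROUP BY UserId;

-- all users who selected a particular country (e.g. country 49)
SELECT UserId FROM UserCountry WHERE CountryId = 49;

-- countries sorted by popularity
SELECT CountryId, COUNT(*) AS picks
FROM UserCountry
GROUP BY CountryId
ORDER BY picks DESC;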
Do not store multiple countries in a single field. Add 2 additional tables - Countries (ID, Name) and UserCountries (UserID, CountryID)