Storing user access level in a database - C#

I am storing a list of "Users" in a table. The business logic of the application will have a reference to an object with all the data from this table for the currently logged-in user, and will allow the user to perform operations only if they have the correct access.
I'm wondering what is the best way to store "access levels?"
One way I'm thinking of storing the access level is as an integer, using a C# [Flags] enum to combine multiple access levels without requiring a bunch of fields. Is this wise?
Create = 1
Read = 2
Update = 4
Delete = 8
FullAcc = 16
The other option I'm thinking of feels less elegant, but I've seen it done a lot:
Read/Write = 1
R/W + Delete = 2
Full Access = 3
The reason I'm wondering is that it seems like it would be simpler to add additional items with the second method, but at some point it would become a pain in the ass to maintain. What are your thoughts?

I've always preferred the first approach using flags. The danger is that you get too many levels of permissions, have to keep extending your enum, and start using huge numbers, and therefore maybe have to change the data type in your database to a larger integer type. However, for something like permissions the number of options should be fairly limited. The one suggestion I would make is to have FullAcc defined as the sum of Create, Read, Update and Delete instead of as a separate entity. That way you won't have to check whether a user has Update OR FullAcc permissions when they are trying to update something.
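For example, a minimal sketch of that suggestion (the enum and helper names here are illustrative, not from the question):

using System;

[Flags]
public enum AccessRights
{
    None = 0,
    Create = 1,
    Read = 2,
    Update = 4,
    Delete = 8,
    FullAcc = Create | Read | Update | Delete   // 15 - the combination, not a separate bit
}

public static class AccessChecks
{
    // True whether the right was granted individually or as part of FullAcc.
    public static bool Can(AccessRights granted, AccessRights required)
    {
        return (granted & required) == required;
    }
}

// e.g. AccessChecks.Can(currentUserRights, AccessRights.Update)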

I would go with Option #1 because it gives me individual flags for each type of access.
I would also recommend that you store a history of changes with timestamps.

I'd go the enum route. It's strongly typed, it transfers reasonably well between the db and code (ints and enums cast well), you can use the FlagsAttribute to combine security rights, and enums are pretty flexible when it comes to versioning issues (as long as you don't remove or rename previously defined enum values).
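For instance, a small sketch of that round trip, reusing the AccessRights enum sketched above (the AccessLevel column name is an assumption):

using System.Data;

public static class AccessRightsMapper
{
    // Read the int column back into the enum...
    public static AccessRights Load(IDataRecord row)
    {
        return (AccessRights)row.GetInt32(row.GetOrdinal("AccessLevel"));
    }

    // ...and cast it back to an int when saving.
    public static int ToDbValue(AccessRights rights)
    {
        return (int)rights;
    }
}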

Your 'flags' idea is more flexible, allowing you any combination of rights if that ever becomes necessary. The 'FullAcc' item should not be defined as a specific number in your enum, however - it should be a combination of other flags or'd together (like this, with a few left out):
[Flags] enum Rights { Create = 1, Read = 2, Update = 4, FullAcc = Create | Read | Update }
The only pain I see with this is that if you add more items to the enum, you have to modify the FullAcc item, and then identify your FullAcc records in the db and update the flag value.

"Smart" SQL Update using ListBox

I am developing a project which accesses a SQL Server 2012 database through C# and performs CRUD operations on it. Here is the main form:
Both listboxes on the right are used to deal with information contained in intermediate tables (many-to-many relationships). Here is how they work: basically, you choose types and abilities from the comboboxes, then click 'add' and they are added to the respective listboxes. To delete items from the listboxes, you just select an item and click 'delete'.
Here's another screenshot to clear up any doubts:
In the first screenshot I provided, you will see Bulbasaur's data. PokémonID = 1 is 'Bulbasaur'; TypeID = 1 and 12 are 'Grass' and 'Poison', respectively; and AbilityID = 1 is 'Overgrow'.
I was trying to create an update function (update_click) using SQL queries (SqlCommand, SqlDataReader and so on), but without deleting all of a pokémon's associations to its types (and abilities) and then re-adding them based on the new state of the listboxes. I want to avoid that in order to save some memory in cases where a pokémon may hold thousands of types and abilities...
Is it possible? If necessary, I can send you my C# project for more details.
I would suggest a combination of:
1) Use table-valued parameters to send all the data (in its present state in your listboxes) to your T-SQL query or stored procedure at once
2) Consider using the EXCEPT and/or INTERSECT operators (as well as any necessary LEFT or RIGHT JOIN) to compare the contents of your table-valued parameter (essentially a table itself) with the data currently in the underlying tables
3) UPDATE/DELETE/INSERT accordingly
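Something along these lines might work - a rough sketch only, where the dbo.IdList table type (CREATE TYPE dbo.IdList AS TABLE (Id INT)), the PokemonAbility junction table, and the column names are assumptions rather than anything from your actual project:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class AbilityUpdater
{
    public static void SaveAbilities(string connectionString, int pokemonId, IEnumerable<int> abilityIds)
    {
        // Shape the listbox contents like the dbo.IdList table type.
        var ids = new DataTable();
        ids.Columns.Add("Id", typeof(int));
        foreach (var id in abilityIds)
            ids.Rows.Add(id);

        const string sql = @"
            -- remove associations that are no longer in the listbox
            DELETE pa FROM PokemonAbility pa
            WHERE pa.PokemonID = @PokemonID
              AND pa.AbilityID NOT IN (SELECT Id FROM @Ids);

            -- add associations that are new (EXCEPT filters out rows already present)
            INSERT INTO PokemonAbility (PokemonID, AbilityID)
            SELECT @PokemonID, i.Id FROM @Ids AS i
            EXCEPT
            SELECT PokemonID, AbilityID FROM PokemonAbility WHERE PokemonID = @PokemonID;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@PokemonID", pokemonId);
            var tvp = cmd.Parameters.Add("@Ids", SqlDbType.Structured);
            tvp.TypeName = "dbo.IdList";
            tvp.Value = ids;

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}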
Essentially, it sounds like what you'd like to do is send only the changes to the database:
add any abilities that were not there before;
remove any abilities that were in the database but are no longer in the list.
If that's the case, then what you need are simple set operations:
Set Union
Set Intersect
Set Difference
While you can perform these operations using simple arrays or lists, it is much more efficient to use an actual set implementation such as the generic HashSet<T>. With a correct implementation using sets or hash tables you can achieve linear-time performance.
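For example, a sketch of the diff (the method and variable names are illustrative):

using System.Collections.Generic;

public static class AbilityDiff
{
    // Computes which IDs to INSERT and which to DELETE using set difference.
    public static void Diff(IEnumerable<int> inListbox, IEnumerable<int> inDatabase,
                            out HashSet<int> toAdd, out HashSet<int> toRemove)
    {
        var current = new HashSet<int>(inListbox);    // what the user sees now
        var stored = new HashSet<int>(inDatabase);    // what the db currently holds

        toAdd = new HashSet<int>(current);
        toAdd.ExceptWith(stored);                     // in the UI but not yet in the db

        toRemove = new HashSet<int>(stored);
        toRemove.ExceptWith(current);                 // in the db but removed from the UI
    }
}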
I hope this helps point you in the right direction.

Most lightweight collection/generic/object to store data in session

I have a scenario where the user can select checkboxes to select multiple records which need to be processed on a separate page. I have decided to use Session to store this data. The session mode will be InProc.
I wanted to know: which is the most lightweight collection/generic/object to store, say, 30-40 IDs (most probably Guids/uniqueidentifiers) in the session?
Any alternate approaches/design patterns are also welcome.
If the list of possible records is a closed list, you can use an enum as a bit field.
This way you can store up to 64 boolean values in a single long.
If, on the other hand, you don't have a closed list of possible values, then an array is the lightest collection there is, as most other collections are simply extensions of an array.
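For example, a sketch only - the session key name and the helper that gathers the checked IDs are made up:

// Web Forms code-behind snippet: a plain Guid[] in InProc session; 30-40 items is tiny.
Guid[] selectedIds = GetSelectedIdsFromCheckBoxes();   // hypothetical helper
Session["SelectedRecordIds"] = selectedIds;            // "SelectedRecordIds" is an assumed key

// On the processing page:
Guid[] ids = Session["SelectedRecordIds"] as Guid[] ?? new Guid[0];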
However, are you absolutely sure you have to use GUIDs? GUIDs are heavy and cumbersome.
The only case where you must use them is when you have data coming from multiple sources and you have to retain the IDs as they come.
I'd consider switching to int or long if it's possible.

Strategies for modeling large (50~) number of properties

Scenario
I'm parsing emails and inserting them into a database using an ORM (NHibernate, to be exact). While my current approach does technically work, I'm not very fond of it, but I can't think of a better solution. The email contains 50~ fields, is sent by a third party, and looks like this (obviously a very short dummy sample):
Field #1: Value 1 Field #2: Value 2
Field #3: Value 3 Field #4: Value 4 Field #5: Value 5
Problem
My problem is that with this many fields to parse, the database table is an absolute monster. AFAIK I can't create proper models employing any kind of relationships either, because each email is all static data and doesn't rely on any other sources.
The only idea I have is to find commonalities between the fields and split them into more manageable chunks. Say 10~ fields per entity, so 5 entities total. However, I'm not terribly in love with that idea either, seeing as all I'd be doing is creating one-to-one relationships.
What is a good way of managing a large number of properties that are out of your control?
Any thoughts?
Create 2 tables: one for the main object, and the other for the fields. That way you can programmatically access each field as necessary, and the object model doesn't look too nasty.
But this is just off the top of my head; you have a weird problem.
If the data is coming back in a file that you can parse easily, then you might be able to get away with creating a command line application that will produce scripts and C# that you can then execute and copy/paste into your program. I've done that when creating properties out of tables from HTML pages (like one I had to do recently).
If the 50 properties are actually unique and discrete pieces of data regarding this one entity, I don't see a problem with having those 50 properties (even though that sounds like a lot) on one object. For example, the Type class has a large number of boolean properties relating to its data (IsPublic, etc.).
Alternatives:
Well, one option that comes to mind immediately is using a dynamic object and overriding TryGetMember to look up the 'property' name as a key in a dictionary of key/value pairs (where your real set of 50 key/value pairs lives). Of course, figuring out how to map that from your ORM into your entity is the other problem, and you'd lose IntelliSense support.
However, just throwing the idea out there.
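A rough sketch of that idea (the class name and field dictionary are made up):

using System.Collections.Generic;
using System.Dynamic;

public class ParsedEmail : DynamicObject
{
    private readonly Dictionary<string, string> _fields;

    public ParsedEmail(Dictionary<string, string> fields)
    {
        _fields = fields;   // the ~50 parsed "Field #n" values
    }

    // Property reads fall through to the dictionary lookup.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        string value;
        bool found = _fields.TryGetValue(binder.Name, out value);
        result = value;
        return found;
    }
}

// dynamic email = new ParsedEmail(parsedFields);
// string v = email.Field1;   // resolved as parsedFields["Field1"]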
Use a dictionary instead of separate fields. In the database, you just have a table for the field name and its value (and what object it belongs to).

Storing Data from Forms without creating 100's of tables: ASP.NET and SQL Server

Let me first describe the situation. We host many Alumni events over the course of each year and provide online registration forms for each event. There is a large chunk of data that is common for each event:
An Event with dates, times, managers, internal billing info, etc.
A Registration record with info about the payment and total amount charged per form submission
Bio/Demographic and alumni data about the 1 or more attendees (name, address, degree, etc.)
We store all of the above data within columns in tables as you would expect.
The trouble comes with the 'extra' fields we are asked to put on the forms. Maybe it is a dinner and there is a Veggie or Carnivore option, perhaps there is lodging and there are bed or smoking options, or perhaps there is an optional transportation option. There are tons of weird little "can you add this to the form?" types of requests we receive.
Currently, we JSONify any non-standard data and store it all in one column (per attendee) called 'extras'. We can read this data out in code, but it is not well suited to querying. Our internal staff would like to generate a quick report on veggie dinners needed, for instance.
Other than creating a separate table for each form that holds the specific 'extra' data items, are there any other approaches that could make my life (and reporting) easier? Is anyone working in a similar environment?
This is actually one of the toughest problems to solve efficiently. The SQL Server Customer Advisory Team has dedicated a white paper to the topic, which I highly recommend you read: Best Practices for Semantic Data Modeling for Performance and Scalability.
You basically have 3 options:
semantic database (entity-attribute-value)
XML column
sparse columns
Each solution comes with ups and downs. Off the top of my head I'd say XML is probably the one that gives you the best balance of power and flexibility, but the optimal solution really depends on lots of factors, like data set sizes, the frequency at which new attributes are created, the actual process (human operators) that creates, populates and uses these attributes, and not least your team's skill set (some might fare better with an EAV solution, some might fare better with an XML solution). If the attributes are created/managed under a central authority and adding new attributes is a reasonably rare event, then sparse columns may be a better answer.
Well, you could also have the following db structure:
Have a table to store custom attributes
AttributeID
AttributeName
Have a mapping table between events and attributes with:
AttributeID
EventID
AttributeValue
This means you will be able to store custom information per event, and you will be able to reuse your attributes. You can include some metadata, such as:
AttributeType
AllowBlankValue
on the attribute to handle it easily afterwards.
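If it's easier to picture in code, the two tables might map to entity classes roughly like this (the property names mirror the columns above; everything else is assumed):

public class EventAttribute
{
    public int AttributeID { get; set; }
    public string AttributeName { get; set; }
    public string AttributeType { get; set; }      // e.g. "bool", "text", "choice"
    public bool AllowBlankValue { get; set; }
}

public class EventAttributeValue
{
    public int AttributeID { get; set; }
    public int EventID { get; set; }
    public string AttributeValue { get; set; }     // stored as text, interpreted via AttributeType
}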
Have you considered using XML instead of JSON? The difference: XML is natively supported (it has a dedicated data type) and has query integration ;)
Quick and dirty, but actually nice for querying: simply add new columns. It's not like the empty entries in the previous table should cost a lot.
A more database-y solution: you'll have something like an event ID in your table. You can link this to an n:m table connecting events to additional fields, and then store the additional field data in a table with additional_field_id, record_id (from the original table) and the actual value. It probably creates ugly queries, but it seems politically correct in terms of database design.
I understand "NoSQL" (not only SQL ;) databases like CouchDB let you store arbitrary fields per record, but since you're already on SQL Server, I guess that's not an option.
This is the solution we first proposed in ASP.NET Forums (which later became Community Server), and the ASP.NET team built a similar version of it into the ASP.NET 2.0 Membership system when they released it:
Property Bags on your domain objects
For example:
Event.Profile() or in your case, Event.Extras().
Basically, a property bag is a serialized collection of data stored in a name/value pair in a column (or columns). The ASP.NET 2.0 Membership went the route of storing names in a semi-colon delimited list, and values in the same:
Table: aspnet_Profile
Column: PropertyNames (separated by semi-colons, and has start index and end index)
Column: PropertyValues (separated by semi-colons, and only stores the string value)
The downside to that approach is that it is all strings and has to be parsed manually (even though the membership system does it for you automatically).
More recently, my current method is a set of C# extension methods I've built for FormCollection and NameValueCollection that automatically serialize the collections to an XML result. I store that XML in the table in its own column associated with that entity. I also have a C# deserializer extension on XElement that deserializes that data back to the collection at runtime.
This gives you the power of actually querying those properties in the XML via SQL (though that can be slow - always flatten out your read-only data).
A final note on runtime querying: the general rule we follow is, if you are going to query a property of an entity in normal application logic, then move that property to an actual column on the table and create the appropriate indexes. If that data will never be queried directly (for example, via Linq-to-Sql or EF), then leave it in the XML property bag.
Property bags give you the power to extend your domain models however you like, without having to modify the db schema.
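A rough sketch of that serialization idea - the method and element names here are made up, not the actual extensions described above:

using System.Collections.Specialized;
using System.Xml.Linq;

public static class PropertyBagExtensions
{
    // Serialize the "extras" to a small XML document for a single column.
    public static string ToXmlBag(this NameValueCollection extras)
    {
        var root = new XElement("extras");
        foreach (string key in extras.Keys)
            root.Add(new XElement("item",
                new XAttribute("name", key),
                new XAttribute("value", extras[key] ?? "")));
        return root.ToString(SaveOptions.DisableFormatting);
    }

    // Deserialize the XML back into a collection at runtime.
    public static NameValueCollection FromXmlBag(this XElement root)
    {
        var extras = new NameValueCollection();
        foreach (var item in root.Elements("item"))
            extras.Add((string)item.Attribute("name"), (string)item.Attribute("value"));
        return extras;
    }
}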

Persisting Enums in database tables

I have an order which has a status (which in code is an Enum). The question is how to persist this. I could:
Persist the string in a field and then map back to enum on data retrieval.
Persist this as an integer and then map back to enum on data retrieval.
Create separate table for enum value and do a join on data retrieval.
Thoughts?
If this is a fixed list (which it seems it is, or else you shouldn't store it as an enum), I wouldn't use #1.
The main reason to use #3 over #2 is ease of use with self-service querying utilities. However, I'd actually go with a variant of #2: store the value as an integer and map it to the enum on data retrieval, but also create a table representing the enum type, with the value as the PK and the name as another column. That way it's simple, quick, and efficient to use from your code, but it's also easy to get the logical value with self-service querying and other uses that don't go through your data access code.
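A sketch of that variant (the OrderStatus values and the dbo.OrderStatus table/column names are just illustrative):

using System;
using System.Text;

public enum OrderStatus { Pending = 0, Paid = 1, Shipped = 2, Cancelled = 3 }

public static class EnumLookupSync
{
    // Emits idempotent T-SQL that keeps the dbo.OrderStatus lookup table in line with the enum,
    // so self-service queries can join on it while code keeps using the int column.
    public static string BuildSyncScript()
    {
        var sql = new StringBuilder();
        foreach (OrderStatus status in Enum.GetValues(typeof(OrderStatus)))
        {
            sql.AppendFormat(
                "IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusId = {0}) " +
                "INSERT INTO dbo.OrderStatus (StatusId, Name) VALUES ({0}, '{1}');\n",
                (int)status, status);
        }
        return sql.ToString();
    }
}

// Reading in code stays a cast: var status = (OrderStatus)reader.GetInt32(statusOrdinal);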
#3 is the most "proper" from a database/normalization standpoint. Your status is, in effect, a domain entity that's linked to the order entity.
Hibernate uses integers by default.
If your enum is not going to change very often, this is not a bad idea, I think.
I'd use an integer mapped to a value in another table that holds the values.
You could also then map the enum to the same values, but then you'd have to update in both spots.
I suppose it depends on where the data will be retrieved. With #3, you could retrieve the data without relying on your .NET front end. But it is also possible for your database table to get out of sync with the enum code.
Option #2 is certainly the most efficient way to do it for storage... but storage is cheap.
