Update Command - Architecture Question - C#

Scenario:
You need to update 2 fields of a customer, and you don't have an UpdateCustomer method anywhere in your project yet.
Should you create a method called UpdateCustomer(Customer customer) that takes in a full customer object and simply updates every field,
or
create, for example, an UpdateCustomer2Fields(string month, string year) (the name is obviously just for posting here) that updates only those 2 fields?
I could imagine ending up with a pile of UpdateCustomerThis / UpdateCustomerThat methods,
but if I just expose one UpdateCustomer, I can pass it a customer object, have it update anything, and use it anywhere.
Good, bad? Which way to go?

You should just update all fields, unless there is a reason not to; it reduces your maintenance headaches.
Reasons not to update all fields: these are all very specific to the environment, and are only valid if you've observed them in your situation.
The vast majority of your transactions are updates, resulting in a heavy burden from unnecessary data being passed over the network.
A business or legal compliance requirement obliges you to log exactly which users changed which data with every transaction (these do exist). However, depending on the environment, it may be best to log this at the database server.
Some users should not have update access to some fields. This is architecture specific and relates to how you expose your functions. If someone must have certain credentials to update specific information, then generally, you will not want to have every transaction update everything. This can be dynamic, and may result in passing in a dictionary of fields to update (or a myriad of other choices). I typically run into this with tiered architectures using services with multiple consumers that have different access rights.
Did I miss any?
Generally, the answer is just pass the entire object.

Create a method called UpdateCustomer(Customer customer) that takes in a full customer object and updates all fields.
Keep it simple. Now you can spend your time on bigger problems.
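For illustration, a minimal sketch of that single-method approach, assuming a simple repository abstraction (the CustomerService, ICustomerRepository and Customer shapes below are mine, not from the question):

using System;

// Hypothetical shapes, only for illustration.
public class Customer { public Guid Id { get; set; } public string Month { get; set; } public string Year { get; set; } }
public interface ICustomerRepository { void Save(Customer customer); }

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository ?? throw new ArgumentNullException(nameof(repository));
    }

    // One update method: take the whole object and persist every field.
    public void UpdateCustomer(Customer customer)
    {
        if (customer == null) throw new ArgumentNullException(nameof(customer));
        _repository.Save(customer); // the repository writes all columns for this row
    }
}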

Related

How to query aggregate root by some other property apart from Id?

For clarification: BackupableThing is some hardware device with a program written into it (which is backed up).
Updated clarification: This question is more about CQRS/ES implementation than about DDD modelling.
Say I have 3 aggregate roots:
class BackupableThing
{
    Guid Id { get; }
}

class Project
{
    Guid Id { get; }
    string Description { get; }
    byte[] Data { get; }
}

class Backup
{
    Guid Id { get; }
    Guid ThingId { get; }
    Guid ProjectId { get; }
    DateTime PerformedAt { get; }
}
Whenever I need to back up a BackupableThing, I need to create a new Project first and then create a new Backup with ProjectId set to this new Project's Id. Everything works as long as there's a new Project for each new Backup.
But really I need to create a Project only if it doesn't already exist, where the unique id of an existing Project should be its Data property (some kind of hash of the byte[] array). So when any other BackupableThing gets backed up and the system sees that another BackupableThing has already been backed up with the same result (Data), it should show the already created and working Project with all descriptions and everything set.
First I thought of approaching this problem by encoding the hash in the Guid somehow, but this seems hacky and not straightforward, and it also increases the chance of collision with randomly generated Guids.
Then I came up with the idea of a separate table (with a separate repository) which holds two columns: the hash of the data (some int/long) and the PlcProjectId (Guid). But this looks very much like a projection, and it is in fact going to be a kind of projection, so in theory I could rebuild it using my domain events from the event store. I've read that it's bad to query the read side from domain services / aggregates / repositories (from the write side), and I couldn't come up with anything else after some time.
Update
So basically I create a read side inside the domain to which only the domain has access, and I query it before adding a new Project so that if it already exists I just use the existing one? Yes, I thought about it overnight, and it seems that not only do I have to create such domain storage and query it before creating a new aggregate, I also have to introduce some compensating action. For example, if multiple requests to create the same Project are sent simultaneously, two identical Projects would be created. So I need my domain storage to be an event handler, and if a user created the same Project, I need to fire a compensating command to remove/move/recreate this Project using the existing one...
Update 2
I'm also thinking of creating another aggregate for this purpose - an aggregate for the scope of uniqueness of my Project (in this specific scenario a GlobalScopeAggregate or DomainAggregate) which will hold a {name, Guid} key-value reference. A separate GlobalScopeHandler will be responsible for ProjectCreated, ProjectArchived and ProjectRenamed events and will ultimately fire compensating actions if a ProjectCreated event occurs with a name that has already been used. But I am confused about compensating actions. How should I react if the user has already made a backup and has a view in his interface related to the Project? He could change the description, name, etc. of the wrong Project, which has already been removed by the compensating action. Also, my compensating action would remove the Project and Backup aggregates and create a new Backup aggregate with the existing ProjectId, because my Backup aggregate doesn't have a setter on the ProjectId field (it is an immutable record of a performed backup). Is this normal?
Update 3 - DOMAIN clarification
There is a number of industrial devices (BackupableThings, programmable controllers) on a wide network which have some firmware programmed into them. Customers update the firmware and upload it into the controllers (backupable things). It is this program that gets backed up. There are a lot of controllers of the same type, and it's very likely that customers will upload the same program over and over again to multiple controllers, as well as to the same controller (as a means to reverse some changes). Users need to repeatedly back up all those controllers. A Backup is some binary data (the program stored in the controller) and the date the backup occurred. A Project is an entity that encapsulates the binary data as well as all information related to the backup. Given that I can't back up the program in the state in which it was previously uploaded (I can only get unreadable raw binary data, which I can also upload back into the controller again), I require a separate Project aggregate which holds the Data property as well as a number of attached files (for example, firmware project files), a description, a name and other fields. Now, whenever some controller is backed up, I don't want to show "just binary data without any description" and force the user to fill in all the descriptive fields again. I want to look up whether a backup with the same binary data has already been made, and then just link that Project to this Backup, so that the user who backed up another controller would instantly see lots of information about what lives in that controller right now :)
So I guess this is a case of set-based validation which occurs very often (as opposed to regular unique constraints), and I would also have lots of backups, so a separate aggregate which holds it all in memory would be unwise.
Also, I just realized another problem arises. I can't compute a hash of the binary data and tolerate even a small risk of two different backups being considered the same Project. This is an industrial domain which needs a precise and robust solution. At the same time, I can't enforce a unique constraint on the binary data column (varbinary in SQL), because my binary data could be relatively big. So I guess I need to create a separate table for [int (hash of binary data), Guid (id of the project)] relations, and if the hash of the binary data of a new backup is found, I need to load the related aggregate and make sure the binary data is actually the same. And if it's not, I also need some kind of mechanism to store more than one relation with the same hash.
Current implementation
I ended up creating a separate table with two columns: DataHash (int) and AggregateId (Guid). Then I created a domain service with a factory method GetOrCreateProject(Guid id, byte[] data). This method gets aggregate ids by the calculated data hash (it gets multiple values if there are multiple rows with the same hash), loads those aggregates and compares the data parameter with each aggregate's Data property. If they are equal, the existing, loaded aggregate is returned. If they are not equal, a new hash entry is added to the hash table and a new aggregate is created.
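A rough sketch of that factory method as described (the IProjectHashIndex and IProjectRepository abstractions and the Project constructor below are placeholder names of mine, assuming Project exposes Id and Data; they are not the actual implementation):

using System;
using System.Linq;

// Placeholder abstractions, named here only for illustration.
public interface IProjectHashIndex
{
    Guid[] GetAggregateIds(int dataHash);
    void Add(int dataHash, Guid aggregateId);
}

public interface IProjectRepository
{
    Project Load(Guid id);
    void Save(Project project);
}

public class ProjectFactoryService
{
    private readonly IProjectHashIndex _index;
    private readonly IProjectRepository _repository;

    public ProjectFactoryService(IProjectHashIndex index, IProjectRepository repository)
    {
        _index = index;
        _repository = repository;
    }

    public Project GetOrCreateProject(Guid id, byte[] data)
    {
        int hash = ComputeHash(data);

        // Several projects can share a hash, so compare the actual bytes.
        foreach (var candidateId in _index.GetAggregateIds(hash))
        {
            var candidate = _repository.Load(candidateId);
            if (candidate.Data.SequenceEqual(data))
                return candidate; // same binary data: reuse the existing project
        }

        var project = new Project(id, data); // hypothetical constructor
        _index.Add(hash, project.Id);
        _repository.Save(project);
        return project;
    }

    private static int ComputeHash(byte[] data)
    {
        // Any stable hash works here; collisions are resolved by the byte comparison above.
        unchecked
        {
            int hash = 17;
            foreach (var b in data) hash = hash * 31 + b;
            return hash;
        }
    }
}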
This hash table is now part of the domain, which means part of the domain is no longer event sourced. Any future need for uniqueness validation (the name of a BackupableThing, for example) would imply creating more such tables, adding state-based storage to the domain side. This increases overall complexity and couples the domain tightly. This is the point where I'm starting to wonder whether event sourcing even applies here, and if not, where it applies at all. I tried to apply it to a simple system as a means to increase my knowledge and fully understand the CQRS/ES patterns, but now I'm fighting the complexities of set-based validation and can see that simple state-based relational tables with some kind of ORM would be a much better fit (since I don't even need an event log).
You are prematurely shoehorning your problem into DDD patterns when major aspects of the domain haven't been fully analyzed or expressed. This is a dangerous mix.
What is a Project, if you ask an expert of your domain? (hint: probably not "Project is some entity to encapsulate binary data")
What is a Backup, if you ask an expert of your domain?
What constraints about them should be satisfied in the real world?
What is a typical use case around backing up?
We're progressively finding out more about some of these as you add updates and comments to your question, but it's the wrong way around.
Don't take Aggregates and Repositories and projections and unique keys as a starting point. Instead, first write clear definitions of your domain terms. What business processes are users carrying out? Since you say you want to use Event Sourcing, what events are happening? Figure out if your domain is rich enough for DDD to be a relevant modelling approach. When all of this is clearly stated, you will have the words to describe your backup uniqueness problem and approach it from a more relevant angle. I don't think you have them now.
There's no need to "query the read side" - that is indeed a bad idea. What you do instead is create a domain storage model just for the domain.
So you'll have the domain objects saved to the event store and some special things saved somewhere else (SQL, key-value store, etc.), and then a read consumer building your read models in SQL.
For instance, in my app my domain instances listen to events to build domain query models which I save to Riak KV.
A simple example which should illustrate my meaning; queries are handled via a query processor, a popular pattern:
class Handler :
    IHandleMessages<Events.Added>,
    IHandleMessages<Events.Removed>,
    IHandleQueries<Queries.ObjectsByName>
{
    // Keep the domain query model in sync as events arrive.
    public void Handle(Events.Added e) {
        _orm.Add(new { ObjectId = e.ObjectId, Name = e.Name });
    }
    public void Handle(Events.Removed e) {
        _orm.Remove(x => x.ObjectId == e.ObjectId && x.Name == e.Name);
    }
    // Answer queries from the model those events built up; the query
    // processor dispatches the result back to the caller.
    public void Handle(Queries.ObjectsByName q) {
        _orm.Query(x => x.Name == q.Name);
    }
}
My answer is quite generic as I'm not sure I fully understand your problem domain, but there are only 2 main ways to tackle set validation problems.
1. Enforce strong consistency
Enforcing strong consistency means that the invariant will be protected transactionally and therefore can never be violated.
Enforcing strong consistency will most likely limit the scalability of your system, but if you can afford it then it may be the simplest way to go: preventing the conflict from occurring rather than dealing with it after the fact is usually easier.
There are numerous ways strong consistency can be enforced, but here's two common ones:
Rely on a database unique constraint: if you have a datastore that supports them, and both your event store and this datastore can participate in the same transaction, then you can use this approach.
E.g. (pseudo-code)
transaction {
    uniquenessService.reserve(uniquenessKey); // writes to a DB unique index
    // save the aggregate that holds uniquenessKey
}
Use an aggregate root: This approach is very similar to the one described above, but one difference is that the rule lives explicitly in the domain rather than in the DB. The aggregate will be responsible for maintaining an in-memory set of uniqueness keys.
Given that the entire set of keys will have to be brought into memory every time you need to record a new one, you should probably cache these kinds of aggregates in memory at all times.
I usually use this approach only when there's a very small set of potential unique keys. It could also be useful in scenarios where the uniqueness rule is very complex in itself and not a simple key lookup.
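A minimal sketch of that aggregate-based approach, assuming one aggregate owns the whole set of keys (the ProjectNameRegistry and ProjectNameReserved names and event shape are invented for illustration):

using System;
using System.Collections.Generic;

// Hypothetical aggregate that owns the set of uniqueness keys.
public class ProjectNameRegistry
{
    private readonly HashSet<string> _reservedNames = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
    private readonly List<object> _uncommittedEvents = new List<object>();

    // Command: throws if the invariant would be violated, otherwise records an event.
    public void ReserveName(string name, Guid projectId)
    {
        if (_reservedNames.Contains(name))
            throw new InvalidOperationException($"Project name '{name}' is already taken.");

        Apply(new ProjectNameReserved(name, projectId));
    }

    private void Apply(ProjectNameReserved e)
    {
        _reservedNames.Add(e.Name);   // mutate in-memory state
        _uncommittedEvents.Add(e);    // to be persisted by the repository / event store
    }

    public IReadOnlyList<object> UncommittedEvents => _uncommittedEvents;
}

public class ProjectNameReserved
{
    public ProjectNameReserved(string name, Guid projectId) { Name = name; ProjectId = projectId; }
    public string Name { get; }
    public Guid ProjectId { get; }
}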
Please note that even when enforcing strong consistency the UI should probably prevent invalid commands from being sent. Therefore, you could also have the uniqueness information available through a read model which would be consumed by the UI to detect conflicts early.
2. Eventual consistency
Here you would allow the rule to get violated, but then perform some compensating actions (either automated or manual) to resolve the problem.
Sometimes it's just overly limiting or challenging to enforce strong consistency. In these scenarios, you can ask the business whether they would accept resolving the broken rule after the fact. Duplicates are usually extremely rare, especially if the UI validates the command before sending it, as it should (hackers could bypass the client-side check, but that is another story).
Events are great hooks when it comes to resolving consistency problems. You could listen to an event such as SomeThingThatShouldBeUniqueCreated and then issue a query to check whether there are duplicates.
Duplicates would be handled in whatever way the business wants them to be. For instance, you could send a message to an administrator so that they can manually resolve the problem.
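As a sketch of that eventual-consistency route (the read model and notifier interfaces below are placeholders I've made up, not from any particular framework):

using System;
using System.Threading.Tasks;

// Placeholder interfaces, for illustration only.
public interface IProjectReadModel { Task<int> CountByDataHashAsync(int dataHash); }
public interface IAdminNotifier { Task NotifyAsync(string message); }

public class DuplicateProjectDetector
{
    private readonly IProjectReadModel _readModel;
    private readonly IAdminNotifier _notifier;

    public DuplicateProjectDetector(IProjectReadModel readModel, IAdminNotifier notifier)
    {
        _readModel = readModel;
        _notifier = notifier;
    }

    // Invoked whenever a ProjectCreated-style event is published.
    public async Task Handle(Guid projectId, int dataHash)
    {
        var count = await _readModel.CountByDataHashAsync(dataHash);
        if (count > 1)
        {
            // The rule was violated; hand the conflict to a human (or to an automated
            // compensating command), whichever the business prefers.
            await _notifier.NotifyAsync($"Duplicate project detected for hash {dataHash} (new project {projectId}).");
        }
    }
}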
Even though we may think that strong consistency is always needed, in many scenarios it is not. You have to explore, with business experts, the risks of allowing a rule to be violated for a period of time and determine how often that would occur. Sometimes you may realize that there is no real risk for the business and that the strong consistency was artificially imposed by the developer.

Why would I use Entity Framework in a mobile situation?

I want to save edited values from a WPF mobile app, via a Web API, as the user tabs out of each field. So on the LostFocus event.
When using EF, the whole entity graph is posted (PUT) to the Web API each time a field is updated. Even if I just make a DTO for the basic fields on the form, I would still be posting unnecessary data each time.
I was thinking of forgetting about EF in the Web API and simply posting the entity ID, field name and new value. Then, in the controller, I would create my own SQL UPDATE statement and use good old ADO.NET to update the database.
This sounds like going back to the noughties or even the nineties, but is there any reason why I should not do that?
I have read this post which makes me lean towards my proposed solution.
Thanks for any comments or advice
Sounds like you are trying to move away from having a RESTful Web API and towards something a little more RPC-ish. Which is fine, as long as you are happy that the extra hassle of implementing this is worth it in terms of bandwidth saved.
In terms of tech level, you're not regressing by doing what you proposed; I use EF every day, but I still need to fall back to plain old ADO.NET every now and then, and there is a reason why it's still well supported in the CLR. So there is no reason not to, as long as you are comfortable with writing SQL, etc.
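If you do go down that road, a minimal sketch of the controller-side update might look like the following. The table name, the whitelist of updatable columns and the connection handling are all assumptions of mine; the whitelist matters because a column name cannot be passed as a SQL parameter:

using System;
using System.Collections.Generic;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks

public static class CustomerFieldUpdater
{
    // Whitelist the columns a client may touch; never splice the raw field name into SQL.
    private static readonly Dictionary<string, string> AllowedColumns = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        ["Name"] = "Name",
        ["Email"] = "Email",
        ["Phone"] = "Phone"
    };

    public static void UpdateField(string connectionString, int entityId, string fieldName, string newValue)
    {
        if (!AllowedColumns.TryGetValue(fieldName, out var column))
            throw new ArgumentException($"Field '{fieldName}' cannot be updated.", nameof(fieldName));

        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            // The column name comes from the whitelist above; the value is parameterized.
            command.CommandText = $"UPDATE Customers SET [{column}] = @value WHERE Id = @id";
            command.Parameters.AddWithValue("@value", (object)newValue ?? DBNull.Value);
            command.Parameters.AddWithValue("@id", entityId);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}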
However, I'd advise against your current proposal for a couple of reasons.
Bandwidth isn't necessarily all that precious
Even for mobile devices, sending 20 or 30 fields back at a time probably isn't a lot of data. Of course, only you can know for your specific scenario if that's too much, but considering the widespread availability of 3G and 4G networks, I wouldn't see this as a concern unless those fields contain huge amounts of data - of course, it's your use case, so you know best :)
Concurrency
Unless the form is actually a representation of several discrete objects which can be updated independently, sending back individual changes every time you update a field runs the risk of ending up with invalid state on the device.
Consider, for example, that User A and User B are both looking at the same object on their devices. This object has 3 fields, A, B and C, thus:
A - "FOO"
B - "42"
C - "12345"
Now suppose User A changes field "A" to "BAR" and tabs out of the field, and then User B changes field "C" to "67890" and tabs.
Your back-end now has this state for the object:
A - "BAR"
B - "42"
C - "67890"
However, User A and User B now both have an incorrect state for the Object!
It gets worse if you also have a facility to re-send the entire object from either client because if User A re-sends the entire form (for whatever reason) User B's changes will be lost without any warning!
Typically this is why the RESTful mechanism of exchanging full state works so well; you send the entire object back to the server, and based on that full state you get to decide whether it should override the latest version, return an error, or return some state that prompts the user to manually merge the changes, etc.
In other words, it allows you to handle conflicts meaningfully. Entity Framework, for example, will give you concurrency checking for free just by including a specially typed column; you can handle the concurrency exception to decide what to do.
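For instance, a sketch of that concurrency token approach (EF Core namespaces shown; the entity and helper names are made up, and EF6 offers the same idea via System.Data.Entity):

using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Mapped to a rowversion column; EF adds it to the WHERE clause of UPDATEs.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public static class CustomerUpdater
{
    public static void Save(DbContext context, Customer edited)
    {
        context.Update(edited);
        try
        {
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone else changed the row since this client loaded it:
            // reload, return an error, or ask the user to merge - your choice.
        }
    }
}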
Now, if it's the case that the form is comprised of several distinct entities that can be independently updated, you have more of a task-based scenario, so you can model your solution accordingly - by all means send a single model to the client representing all the properties of all of the individual entities on the form, but have separate POST-back models and a handler for each.
For example, if the form shows Customer Master data and the corresponding Address record, you can send the client a single model to populate the form, but only send the Customer Master model back when a Customer Master field changes, and only the Address model when an address field changes, etc. This way you can have your cake and eat it, because you have a smaller POST payload and you can still manage concurrency.
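A rough sketch of that split, assuming ASP.NET Core attribute routing (the model and controller names are invented for illustration; persistence is elided):

using Microsoft.AspNetCore.Mvc;

// Hypothetical per-entity POST-back models.
public class CustomerMasterModel { public int CustomerId { get; set; } public string Name { get; set; } public byte[] RowVersion { get; set; } }
public class AddressModel { public int CustomerId { get; set; } public string Street { get; set; } public string City { get; set; } public byte[] RowVersion { get; set; } }

[ApiController]
[Route("api/customers")]
public class CustomerController : ControllerBase
{
    // Only the master fields travel when a master field changes...
    [HttpPut("{id}/master")]
    public IActionResult UpdateMaster(int id, CustomerMasterModel model)
    {
        // persist the master fields, using RowVersion for the concurrency check
        return NoContent();
    }

    // ...and only the address fields when an address field changes.
    [HttpPut("{id}/address")]
    public IActionResult UpdateAddress(int id, AddressModel model)
    {
        // persist the address fields, using RowVersion for the concurrency check
        return NoContent();
    }
}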

Implementing object change tracking in an N-Tier WCF MVC application

Most of the examples I've seen online show object change tracking in a WinForms/WPF context, or, if it's on the web, connected objects are used, so the changes made to each object can be tracked.
In my scenario, the objects are disconnected once they leave the data layer (mapped into business objects in WCF, and mapped into DTOs in the MVC application).
When a user makes a change to an object in MVC (e.g., changing one field property), how do I send that change from the view all the way down to the DB?
I would like to have an audit table that saves the changes made to a particular object. What I would like to save are the before and after values, only for the properties that were modified.
I can think of a few ways to do this:
1) Implement an IsDirty flag for each property for all models in the MVC layer (or in the JavaScript?). Propagate that information all the way back down to the service layer, and finally the data layer.
2) Having this change-tracking mechanism within the service layer would be great, but how would I then keep track of the "original" values after the modified values have been passed back from MVC?
3) Database triggers? But I'm not sure how to get started. Is this even possible?
Are there any known object change tracking implementations out there for an n-tier MVC-WCF solution?
Example of the audit table:
Id   Object     Property   OldValue   NewValue
----------------------------------------------
1    Customer   Name       Bob        Joe
2    Customer   Age        21         22
Possible solutions to this problem will depend in large part on what changes you allow in the database while the user is editing the data.
In other words, once it "leaves" the database, is it locked exclusively for that user, or can other users or processes update it in the meantime?
For example, if the user can get the data and sit on it for a couple of hours or days, but the database continues to allow updates to the data, then you really want to track the changes the user has made to the version currently in the database, not the changes that the user made to the data they are viewing.
The way that we handle this scenario is to start a transaction, read the entire existing object, and then use reflection to compare the old and new values, logging the changes into an audit log. This gets a little complex when dealing with nested records, but is well worth the time spent to implement.
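A minimal sketch of that reflection-based comparison, assuming flat objects with scalar properties (nested records, as noted, need extra handling):

using System;
using System.Collections.Generic;

public class AuditEntry
{
    public string ObjectName { get; set; }
    public string Property { get; set; }
    public string OldValue { get; set; }
    public string NewValue { get; set; }
}

public static class AuditComparer
{
    // Compare the freshly loaded database copy against the incoming edited copy.
    public static List<AuditEntry> Diff<T>(T original, T edited)
    {
        var changes = new List<AuditEntry>();
        foreach (var property in typeof(T).GetProperties())
        {
            var oldValue = property.GetValue(original);
            var newValue = property.GetValue(edited);

            if (!Equals(oldValue, newValue))
            {
                changes.Add(new AuditEntry
                {
                    ObjectName = typeof(T).Name,
                    Property = property.Name,
                    OldValue = oldValue?.ToString(),
                    NewValue = newValue?.ToString()
                });
            }
        }
        return changes;
    }
}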
If, on the other hand, no other users or processes are allowed to alter the data, then you have a couple of different options that vary in complexity, data storage, and impact to existing data structures.
For example, you could modify each property in each of your classes to record when it has changed and keep a running tally of these changes in the class (obviously a base class implementation helps substantially here).
However, depending on the point at which you capture the user's changes (every time they update the field in the form, for example), this could generate a substantial amount of non-useful log information because you probably only want to know what changed from the database perspective, not from the UI perspective.
You could also deep clone the object and pass that around the layers. Then, when it is time to determine what has changed, you can again use reflection. However, depending on the size of your business objects, this approach can impose a hefty performance penalty since a complete copy has to be moved over the wire and retained with the original record.
You could also implement the same approach as the "updates allowed while editing" approach. This, in my mind, is the cleanest solution because the original data doesn't have to travel with the edited data, there is no possibility of tampering with the original data and it supports numerous clients without having to support the change tracking in the UI level.
There are two parts to your question:
How to do it in MVC:
The usual way: you send the changes back to the server, a controller handles them, etc. etc..
There is nothing unusual in your use case that mandates a change in the way MVC usually works.
It is better for your use case for the changes to be encoded as individual change operations, not as a modified object where you need to use reflection to find out what changes, if any, the user made.
How to do it on the database:
This is probably your intended question:
First of all, stay away from ORM frameworks; life is too complex as it is.
On the last step of the save operation you should have the following information:
The objects and fields that need to change, and their new values.
You also need to keep track of the following information:
What the last change to the object you intend to modify in the database was.
This can be obtained from the Audit table and needs to be saved in the Session (or a Session-like object).
Then you need to do the following in a transaction:
Obtain the last change to the object(s) being modified from the database.
If the objects have changed since then, abort and inform the user of the collision.
If not, obtain the current values of the fields being changed.
Save the new values.
Update the Audit table.
I would use a stored procedure for this to make the process less chatty, and for greater separation of concerns between the database code and the application code.
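A rough sketch of those steps in plain ADO.NET, with hypothetical table and column names; in practice the same logic could live inside the stored procedure recommended above:

using System;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks

public static class AuditedUpdater
{
    public static void UpdateName(string connectionString, int customerId, string newName, DateTime lastChangeSeenByUser)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // 1. Obtain the last change recorded for this object (DBNull if it was never audited).
                var raw = ExecuteScalar(connection, transaction,
                    "SELECT MAX(ChangedAt) FROM Audit WHERE ObjectName = 'Customer' AND ObjectId = @id",
                    ("@id", customerId));
                var lastChange = raw is DateTime changedAt ? changedAt : DateTime.MinValue;

                // 2. If someone changed it after the user loaded it, abort and report the collision.
                if (lastChange > lastChangeSeenByUser)
                    throw new InvalidOperationException("The customer was modified by someone else.");

                // 3. Read the current value so the audit row can record old and new values.
                var oldName = (string)ExecuteScalar(connection, transaction,
                    "SELECT Name FROM Customers WHERE Id = @id", ("@id", customerId));

                // 4. Save the new value.
                Execute(connection, transaction,
                    "UPDATE Customers SET Name = @name WHERE Id = @id",
                    ("@name", newName), ("@id", customerId));

                // 5. Update the audit table.
                Execute(connection, transaction,
                    "INSERT INTO Audit (ObjectName, ObjectId, Property, OldValue, NewValue, ChangedAt) " +
                    "VALUES ('Customer', @id, 'Name', @old, @new, @now)",
                    ("@id", customerId), ("@old", oldName), ("@new", newName), ("@now", DateTime.UtcNow));

                transaction.Commit();
            }
        }
    }

    private static object ExecuteScalar(SqlConnection connection, SqlTransaction transaction, string sql, params (string Name, object Value)[] args)
    {
        using (var command = new SqlCommand(sql, connection, transaction))
        {
            foreach (var (name, value) in args) command.Parameters.AddWithValue(name, value);
            return command.ExecuteScalar();
        }
    }

    private static void Execute(SqlConnection connection, SqlTransaction transaction, string sql, params (string Name, object Value)[] args)
    {
        using (var command = new SqlCommand(sql, connection, transaction))
        {
            foreach (var (name, value) in args) command.Parameters.AddWithValue(name, value);
            command.ExecuteNonQuery();
        }
    }
}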

Exposing Database IDs to the UI

This is a beginner pattern question for a web forms-over-data sort of thing. I read Exposing database IDs - security risk? and the accepted answer has me thinking that this is a waste of time, but wait...
I have an MVC project referencing a business logic library, and an assembly of NHibernate SQL repositories referencing the same. If something forced my hand to go and reference those repositories directly from my controller codebase, I'd know what went wrong. But when those controllers talk in URL parameters with the database record IDs, does it only seem wrong?
I can't conceive of those IDs ever becoming un-consumable (by MVC actions). I don't think I'd ever need two UI entities corresponding to the same row in the database. I don't intend for the controller to interpret the ID in any way. Surrogate keys would make zero difference. Still, I want to have the problem, because assumptions about the relational design aren't any better than layer-skipping dependencies.
How would you make a web application that only references the business logic assembly and talks in BL objects and GUIDs that only have meaning for that session, while the assembly persists transactions using database IDs?
You can encrypt or hash your ids if you want, using the session id as a salt. It depends on the context. On a public shopping site you want the catalog pages to be clear and easily copyable. For user account administration it's fine to encrypt the ids, so users can't URL-hack into someone else's account.
I would not consider this to be security by obscurity. If a malicious user has one compromised account, they can look at all the form fields, URL ids, and cookie values set while logged in as that user. They can then try using those when logged in as a different user to escalate permissions. But by protecting them with the session id as a salt, you have locked that data down so it's only useful within one session. The pages can't even be bookmarked. Could they figure out your protection? Possibly. But more likely they'd just move on to another site. Locking your car door doesn't actually keep anyone out of your car if they really want to get in, but it makes it harder, so everyone does it.
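One possible shape of that protection, an HMAC over the id keyed by the session id, purely as an illustration of the idea (all names here are mine):

using System;
using System.Security.Cryptography;
using System.Text;

public static class IdProtector
{
    // Produce a per-session token for an id; a token copied into another session won't verify.
    public static string Protect(int id, string sessionId)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sessionId)))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(id.ToString()));
            // Ship the id together with its proof; verify them together on the way back in.
            return $"{id}-{Convert.ToBase64String(hash)}";
        }
    }

    public static bool TryUnprotect(string token, string sessionId, out int id)
    {
        id = 0;
        var parts = token.Split(new[] { '-' }, 2);
        if (parts.Length != 2 || !int.TryParse(parts[0], out id)) return false;
        return Protect(id, sessionId) == token; // tampered ids won't match
    }
}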
I'm no security expert, but I have no problem exposing certain IDs to the user, such as product IDs, user IDs, and anything else the user could normally read; if I display a product to the user, displaying its product ID is not a problem.
Things that are internal to the system and that users do not directly interact with, like transaction IDs, I do not display to the user - not for fear of them editing it somehow, but simply because it is not information that is useful to them.
Quite often in forms, I have the action point to "mysite.com/messages/view/5", where 5 is the message they want to view. In all of these actions, I always ensure that the user has access to view it (or modify or delete, whichever functionality is required) by doing a simple database check that the logged-in user is the message's owner.
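Roughly like this (ASP.NET Core MVC shown; the repository, model and controller names are invented for the sketch - the point is that the query itself filters by owner):

using Microsoft.AspNetCore.Mvc;

public class MessagesController : Controller
{
    private readonly IMessageRepository _messages; // hypothetical repository

    public MessagesController(IMessageRepository messages)
    {
        _messages = messages;
    }

    // GET /messages/view/5
    public IActionResult View(int id)
    {
        var currentUserId = User.Identity?.Name;

        // Look the message up by id AND owner, so a tampered id simply finds nothing.
        var message = _messages.FindByIdAndOwner(id, currentUserId);
        if (message == null)
            return NotFound();

        return View(message);
    }
}

public interface IMessageRepository
{
    Message FindByIdAndOwner(int id, string ownerId);
}

public class Message
{
    public int Id { get; set; }
    public string OwnerId { get; set; }
    public string Body { get; set; }
}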
Be very, very careful, as parameter tampering can lead to data modification. Rules on 'who can access which ids' must be very carefully built into your application when exposing these ids.
For instance, if you are updating an Order based on OrderId, include in your WHERE clause for loads and updates something like:
where Order.OrderId = passedInOrderId and Order.CustomerId = <the logged-in user's customer id>
I developed an extension to help with stored ids in MVC available here:
http://mvcsecurity.codeplex.com/
Also I talk about this a bit in my security course at: Hack Proofing your ASP.NET MVC and Web Forms Applications
Other than those responses, sometimes it's good to use obvious ids so people can hack the URL for the information they want. For example, www.music.com\artist\acdc or www.music.com\artist\smashing-pumpkins. If it's meaningful to your users, and if you can increase the information the user understands from the page through the URL, then all the better - and especially if your market segment is young or tech-savvy, use the id to your advantage. This will also boost your SEO.
I would say when it's not of use, then encode it. It only takes one developer making one mistake - not checking a customer id against the session - to expose your entire customer base.
But of course, your unit tests should catch that!
While you will find some people who say that IDs are just an implementation detail, in most systems you need a way of uniquely identifying a domain entity, and most likely you will generate an ID for that identifier. The fact that the ID is generated by the database is an implementation detail; but once it has been generated it becomes an attribute of the domain entity, and it is therefore perfectly reasonable to use it wherever you need to reference the entity.

Options to retrieve role based content pieces from the database

I need to bring some role-based content (several pieces resulting in one object) from the database to a service layer. Access to each content piece will depend on the role the user has. If the user's role has no access to a piece, that piece will be empty. This is not a huge application, just a web service method exposing some content based on roles.
I could do this in one of three ways.
1) Based on the user role, make database calls for each piece of content.
Pros - The roles are managed in the code, helping the business logic (?) stay in the code. Only brings back data that is needed.
Cons - Multiple DB calls. Code needs to be modified when a new role is added (unless some really complicated business logic is used).
2) Make a single DB call and bring back all the content pieces as separate result sets. Loop through the sets and obtain pieces based on the user role.
Pros - Single DB call. Roles are managed within the code.
Cons - Code needs to be modified for new roles. Extra data is brought back even though it may not be needed. Those unneeded queries can add a couple of seconds.
3) Send the roles to the DB and get each piece based on the role's access.
Pros - Single DB call. Only brings back what is needed. No need to change code for new roles, as only the stored procedure needs to change.
Cons - Business logic in the database?
It looks to me that #3 > #2 > #1 (overloading > to mean "better than").
Does anyone have any insights into which approach may be better?
Update - based on some comments, some more details are below.
The user role is obtained from another system. #3 would ideally pass it to the DB, and, in crude terms for the DB, return data as: if user_role = "admin", get all pieces; for "editor", get content pieces 1, 3 and 55. Again, this is not a big application where role management is done in the DB. It's a web service method to expose some data for several companies.
We obviously cannot use this model for managing roles across an application. But for method-level access control, as in this scenario, I believe #3 is the best way. Since the roles come from a different system than the one where the content resides, the logic to control access to the different content pieces has to live somewhere. The database looks like the right place for a maintainable, scalable, low-hassle solution in this particular scenario. Perhaps I would even create a lookup table in the content DB to hold role-to-content-piece access, to give a sense of "data" and "logic" separation, rather than having a UDF perform the logic.
If no one can think of a valid case against #3, I think I'll go ahead with it.
I would always pick option 3 and enforce it in the database itself.
Security is best handled at the closest point to the actual data itself, for a lot of reasons. Look at it this way: it is more common for an additional application to be added in a different language than it is to toss out a database model. When this happens, all of your role-handling code would have to be duplicated.
Or let's say the application is completely bypassed during a hack. The database should still enforce its security.
Finally, although people like separating "business logic" from their data, the reality is that most data has no meaning without said logic. Further "security logic" isn't the same thing as regular "business logic" anyway. It is there to protect you and your clients. But that's my $0.02.
Looking at your other options:
2) You are sending too much data back to the client. This is both a security and a performance no-no. What if your app isn't the one making the data request? What if you have a slight bug in your app that shows too much to the user?
1 and 2) Both require a redeploy for even slight logic changes (such as fixing the mythical bug above). This might not be desirable. Personally, I prefer making minor adjustments to stored procedures over redeploying code. On a sizeable enough project it can be difficult to know exactly what is being deployed, which generally carries a higher potential for problems.
UPDATE
Based on your additional info, I still suggest sticking with #3.
It depends on how your database is structured.
If you can manage access rights in the database, you might have a table design along the lines of
Content table:       ContentId, Content
Role table:          RoleId, RoleName
ContentAccess table: ContentId, RoleId
Then passing in the role as a query parameter is absolutely not "business logic in the database". You would obviously write a query to join the "content" and "contentaccess" tables to retrieve those rows in the content table where there's a matching record in ContentAccess for the current user's role.
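For example, something like this on the data-access side (table and column names follow the sketch above; the role name is just a query parameter, not logic):

using System.Collections.Generic;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks

public static class ContentReader
{
    public static List<string> GetContentForRole(string connectionString, string roleName)
    {
        const string sql = @"
            SELECT c.Content
            FROM   Content c
            JOIN   ContentAccess ca ON ca.ContentId = c.ContentId
            JOIN   Role r           ON r.RoleId     = ca.RoleId
            WHERE  r.RoleName = @roleName";

        var results = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@roleName", roleName);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    results.Add(reader.GetString(0)); // one row per content piece the role may see
            }
        }
        return results;
    }
}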
If your application uses code to determine if a user is allowed to see a specific piece of content, that doesn't work. The crudest example of this would be "if user_role = "admin" then get all content, if user_role = "editor" get items 1, 3 and 55". I'd argue that this is not really a maintainable design - but you say the application is not really that big to begin with, so it might not be a huge deal.
Ideally, I'd want to refactor the application to "manage access rights as data, not code", because you do mention maintainability as a requirement.
If you don't want to do that, option 1 is the way to go; you could perhaps refine it to an "in" query, rather than multiple different queries. So, you run whatever logic determines whether a user role can see the content, and then execute a query along the lines of "select * from content where content_id in (1, 3, 55)".
Different people have different feelings about stored procedures; my view is to avoid using stored procedures unless you have a proven, measurable performance requirement that can only be met by using stored procedures. They are hard to test, hard to debug, it's relatively rare to find developers who are great at both Transact SQL and C# (or whatever), and version control etc. is usually a pain.
How many new roles per year do you anticipate? If few roles, then stick everything in code if it makes the code simpler. If a lot, use option #3.
If you really dislike multiple calls, you can always do a SELECT ... UNION or defer the retrieval to a simple stored procedure.
Alternatively, consider just picking up one of the myriad RBAC frameworks and letting it take care of the problem.
