Is there a pattern or recommended method in ASP.NET MVC for the situation where I am editing one object and need to create a related object on the fly (which may itself need another object created on the fly)? Perhaps a library/jQuery combo package that makes this easy?
Let's say I am on a page called JournalEntries/Edit/1234 and I realize I need to create a different Account object for the JournalEntry object... and maybe that Account object needs a Vendor object that doesn't yet exist. I wouldn't want to leave the page and lose everything that was already done, but would rather nest creation forms and pass the state to the parent window when the object was successfully created, so that the workflow would be, essentially, uninterrupted.
Does such a thing exist, or are the business requirements too vague and variable to make that a realistic creation? Are there any pitfalls or issues I would need to worry about, building this sort of model?
You could consider delegating creation of the object (and its dependencies) off to a business service, which would in turn use a unit of work and repositories to create the object in the data store. The business service would return the ID of the newly created object if it could create one successfully.
Now you can create a controller action which would invoke the business service. Your front-end code can call the controller action via AJAX when you need to create the dependent object.
Since the above approach is unobtrusive, your workflow will not be interrupted and you won't need any special library other than jQuery.
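A rough sketch of what such a controller action could look like, assuming a hypothetical IVendorService and VendorViewModel (none of these names come from the question):

using System.Web.Mvc;

// Hypothetical business service and view model, named for illustration only.
public interface IVendorService
{
    int Create(string name); // returns the ID of the newly created vendor
}

public class VendorViewModel
{
    public string Name { get; set; }
}

public class VendorsController : Controller
{
    private readonly IVendorService _vendorService;

    public VendorsController(IVendorService vendorService)
    {
        _vendorService = vendorService;
    }

    // Called via jQuery from a dialog nested inside the JournalEntry edit page.
    [HttpPost]
    public ActionResult QuickCreate(VendorViewModel model)
    {
        if (!ModelState.IsValid)
            return Json(new { success = false });

        int newId = _vendorService.Create(model.Name);
        return Json(new { success = true, id = newId });
    }
}

The dialog on the edit page would post to this action with $.post, read the returned id, and add the new item to the parent form's dropdown without a full page reload.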
The short answer here, apparently, is "no"... no such library or pattern exists at this point.
Summary
This question is about methodology. The answer should be a link to the "holy grail" of working with contexts for the scenario described below.
We have been experiencing different problems in our MVC web application project, related to the use of dbContext.
After reading many question-and-answer threads, blogs, and articles... including proposals involving repositories and injection patterns, OWIN, Entity Framework, and Ninject, we are still not clear about the right way to work with dbContexts.
Is there any article or demo showing "The Way" to do it in an application more complex than plain CRUD, using separation between the MVC presentation / Domain Entities / Logic / DataAccess layers, and including Identity security handling users and role permissions?
Description
Previously, our approach was to create dbContext objects when needed in each repository.
Soon we discovered errors like “dbContext is disposed”, since the connection dies together with the repository function. This makes the retrieved objects only “partially available” to the upper layers of the app (we use the .ToList() trick, which is limited because we can access the loaded collections and attributes but cannot navigate later into the objects' child tables, and so on). Also, when using two contexts from different repositories, we got an exception saying that two contexts were trying to track changes to the same object.
Due to time constraints for delivering prototypes, we created a single static dbContext shared by the whole application, which is used from everywhere when needed (Controllers, Models, Logic, DataAccess, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
Still we have problems: a dbContext can handle only one async method call at a time, and we can have many concurrent calls (e.g. userManager.FindByNameAsync, for which only async versions exist). Exception: “A second operation started on this context before a previous asynchronous operation completed”.
We were thinking about creating the context as the very first step when an action is called in the controller, and then carrying this object like a “relay race baton” to every other layer or function called. That way the connection would live from the “click in the browser” until the response is loaded back into it. But we don’t like the idea that every single function must have an extra “context” parameter just to share the connection across the layers for the entire operation route.
We are sure that we are not the first ones wondering about what is the right way to use contexts.
Application layers
We have these (logical) layers, in different workspaces but in the same MVC web app project, top to bottom:
Views: HTML + Razor + jQuery + CSS. Code here is restricted to the layout, but some HTML might depend on the Role. Method calls go to controllers only, plus utils (like formatting).
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
Controllers: Actions called from the browser result in calls to functions in the Logic layer. Authentication here restricts access to actions or limits behavior inside an action. Controllers work with ViewModels rather than Domain entities, so to communicate with the Logic layer the ViewModel translation functions are called.
Domain Entities: Used by the Logic layer, and used by Entity Framework to create the database tables.
Logic Classes: Each Domain entity has an EntityLogic class with all the operations. These are the core, where all the rules live that are common and abstracted from specific consumer clients (ViewModels are unknown here).
Repositories: To access the database. Not sure if we need this layer, since Domain entities are already mapped to database objects by Entity Framework.
Typical scenario
The browser calls an action (POST) in the Products controller to edit a product. The ProductViewModel is used as the container for the data.
The controller action is restricted to a collection of roles. Inside the action, depending on the role, a different Logic function is called, and the ProductViewModel is translated to a ProductDomainEntity and passed as a parameter.
The logic EditProduct function calls other functions in different logic classes and also uses localization and security to restrict or filter. The logic may or may not call a Repository to access the data (or use a global context for everything), and the resulting domain entity collections are delivered back to the Logic.
Based on the results, the logic may or may not try to navigate the results’ child collections. The results are given back to the controller action as a domain entity (or a collection of them), and depending on these results the controller may call more Logic, redirect to another action, or respond with a View, translating the results to the right ViewModel.
Where, when, and how should we create the dbContext to support the whole operation in the best way?
UPDATE: All classes within the Logic layer are static. The methods are called from controllers simply like this:
UserLogic.GetCompanyUserRoles(user)
, or
user.GetCompanyRoles()
where GetCompanyRoles() is an extension method for User implemented in UserLogic. Thus, there are no instances of the Logic classes, which means there are no constructors to receive a dbContext to use inside their methods.
I want a static method inside a static class to know where to get the dbContext instance that is active for the current HttpRequest.
Could Ninject and OnePerRequestHttpModule help with this? Has anyone tried them?
I don't believe there is a "Holy Grail" or magic-bullet answer to this or any other problem with EF / DbContexts. Because of that, I also don't believe there is one definitive answer to your question; any answers will be primarily opinion-based. However, I have personally found that using a CQRS pattern rather than a repository pattern allows for more control and fewer problems when dealing with EF semantics and quirks. Here are a few links that you may (or may not) find helpful:
https://stackoverflow.com/a/21352268/304832
https://stackoverflow.com/a/21584605/304832
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92
http://github.com/danludwig/tripod
Some more direct answers:
...This makes the retrieved objects only “partially available” to the upper layers of the app (we use the .ToList() trick, which is limited because we can access the loaded collections and attributes but cannot navigate later into the objects' child tables, and so on). Also, when using two contexts from different repositories, we got an exception saying that two contexts were trying to track changes to the same object.
The solutions to these problems are to 1) eager load all of the child and navigation properties that you will need when initially executing the query instead of lazy loading, and 2) only work with 1 DbContext instance per HTTP request (inversion of control containers can help with this).
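A minimal sketch of point 1, assuming hypothetical Product, Category, Supplier, and MyDbContext types (EF6 syntax): eager load the navigation properties the use case needs up front, instead of relying on lazy loading after the context has been disposed.

using System.Collections.Generic;
using System.Data.Entity;   // EF6: provides the lambda Include() extension method
using System.Linq;

public class ProductQueries
{
    public List<Product> GetProductsWithChildren(MyDbContext db)
    {
        return db.Products
            .Include(p => p.Category)    // eager load the navigation properties...
            .Include(p => p.Supplier)    // ...this use case actually needs
            .ToList();                   // materialize before the context is disposed
    }
}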
Due to time constraints for delivering prototypes, we created a single static dbContext shared by the whole application, which is used from everywhere when needed (Controllers, Models, Logic, DataAccess, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
This is actually much worse than a "dirty workaround", as you will start to see very strange and hard to debug errors when you have a static DbContext instance. I am very surprised to hear that this is working better than your previous approach, but it only points out that there are more problems with your previous approach if this one works better.
We were thinking about creating the context as the very first step when an action is called in the controller, and then carrying this object like a “relay race baton” to every other layer or function called. That way the connection would live from the “click in the browser” until the response is loaded back into it. But we don’t like the idea that every single function must have an extra “context” parameter just to share the connection across the layers for the entire operation route.
This is what an Inversion of Control container can do for you, so that you don't have to keep passing instances around. If you register your DbContext as one instance per HTTP request, you can use the container (and constructor injection) to get at that instance without having to pass it around in method arguments (or worse).
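For example, with Ninject (one of the containers mentioned in this thread), a per-request binding might look roughly like this; MyDbContext and ProductsController are placeholder names:

using System.Web.Mvc;
using Ninject.Modules;
using Ninject.Web.Common;   // provides the InRequestScope() extension

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // One DbContext instance per HTTP request, disposed when the request ends.
        Bind<MyDbContext>().ToSelf().InRequestScope();
    }
}

// Anything the container resolves can now receive that same instance via its constructor:
public class ProductsController : Controller
{
    private readonly MyDbContext _db;

    public ProductsController(MyDbContext db)
    {
        _db = db;
    }
}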
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
Little piece of advice: Don't declare functions like this on your ViewModels. ViewModels should be dumb data containers, void of behavior, even translation behavior. Do the translation in your controllers, or in another layer (like a Query layer). ViewModels can have functions to expose derived data properties that are based on other data properties, but without behavior.
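For instance, a "dumb" ViewModel might look like this (property names are illustrative): data only, plus at most a derived read-only property, and no translation methods.

public class ProductViewModel
{
    public string Name { get; set; }
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    // Derived data based on other properties is fine; behavior is not.
    public decimal TotalPrice
    {
        get { return UnitPrice * Quantity; }
    }
}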
Logic Classes: Each Domain entity has an EntityLogic class with all the operations. These are the core, where all the rules live that are common and abstracted from specific consumer clients (ViewModels are unknown here).
This could be the fault in your current design. Boiling all of your business rule and logic into entity-specific classes can get messy, especially when dealing with repositories. What about business rules and logic that span entities or even aggregates? Which entity logic class would they belong to?
A CQRS approach pushes you out of this mode of thinking about rules and logic, and more into a paradigm of thinking about use cases. Each "browser click" is probably going to boil down to some use case that the user wants to invoke or consume. You can find out what the parameters of that use case are (for example, which child / navigation data to eager load) and then write 1 (one) query handler or command handler to wrap the entire use case. When you find common subroutines that are part of more than one query or command, you can factor those out into extension methods, internal methods, or even other command and query handlers.
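A very rough sketch of what one query and its handler could look like; the names are invented and this is not the Tripod library's API, just an illustration of the shape:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// The "query" object carries the parameters of one use case.
public class ProductsForEditing
{
    public int CategoryId { get; set; }
}

public interface IHandleQuery<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

// One handler wraps the whole use case, including which children to eager load.
public class HandleProductsForEditing : IHandleQuery<ProductsForEditing, List<Product>>
{
    private readonly MyDbContext _db; // injected, one instance per HTTP request

    public HandleProductsForEditing(MyDbContext db)
    {
        _db = db;
    }

    public List<Product> Handle(ProductsForEditing query)
    {
        return _db.Products
            .Include(p => p.Category)
            .Where(p => p.CategoryId == query.CategoryId)
            .ToList();
    }
}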
If you are looking for a good place to start, I think that you will get the most bang for your buck by first learning how to properly use a good Inversion of Control container (like Ninject or SimpleInjector) to register your EF DbContext so that only 1 instance gets created for each HTTP request. This should help you avoid your disposal and multi-context exceptions at the very least.
I always use a BaseController that contains a dbContext and passes it to the logic functions (extension methods, as I call them). That way you only use one context per call, and if something fails it will do a rollback.
Example:
Controller1 inherits BaseController.
Controller1 now has access to the db property, which is a context.
Controller1 contains an action "Action1".
Action1 will call the function "LogicFunctionX(db, value1, Membership.CurrentUserId, true)".
In Action1 you can call other logic functions, or even call them inside "LogicFunctionX", always passing the db property through the functions.
I save the context inside the controller (mostly), after calling all the logic functions.
Note: the true argument that I pass to LogicFunctionX controls whether the context is saved inside the function or not, like:
if (Save)
    db.SaveChanges();
I had several problems before doing this.
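A minimal sketch of this pattern, assuming illustrative names (MyDbContext, ProductLogic, and LogicFunctionX are not from the original answer):

using System.Web.Mvc;

public abstract class BaseController : Controller
{
    // One context per controller instance, shared with the logic methods.
    protected MyDbContext db = new MyDbContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            db.Dispose();
        base.Dispose(disposing);
    }
}

public static class ProductLogic
{
    public static void LogicFunctionX(MyDbContext db, int value1, bool save)
    {
        // ... business logic that reads and writes through db ...
        if (save)
            db.SaveChanges();
    }
}

public class Controller1 : BaseController
{
    public ActionResult Action1(int value1)
    {
        // Pass the controller's context into every logic call.
        ProductLogic.LogicFunctionX(db, value1, save: false);

        // Save once, in the controller, after all the logic calls.
        db.SaveChanges();
        return View();
    }
}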
My project group and I are to develop a generic workflow system, and have decided to implement a single Node (a task in the workflow) as a C# Visual Studio Web API project (Using the ASP.NET MVC structure).
In the process of implementing a Node's logic, we've run into the problem of how to store data in our Node. Our Node specifically consists of a few lists of Uris leading to other Nodes, as well as some status/state boolean values. These values are currently stored in a regular class, but with all the values as internal static fields.
We're wondering if there's a better way to do this. In particular, as we'd like to apply a locking mechanism later, it'd be preferable to have an object that we can interact with; however, we are unsure how we can access this "common" object in various controllers - or rather in the single controller that handles the HTTP requests we receive for our Node.
Is there a way to make the Controller class use a modified constructor which takes this object? And if so, the next step: Where can we provide that the Controller will receive the object in this constructor? There appears to be no code which instantiates Web API controllers.
Accessing static fields in some class seems to do the trick, data-wise, but it forces us to implement our own locking mechanism using a boolean value or similar, instead of simply being able to lock the object when it is altered.
If I am not making any sense, do tell. Any answers that might help are welcome! Thanks!
Based on your comments, I would say the persistence mechanism you are after is probably one of the server-side caching options (System.Runtime.Caching or System.Web.Caching).
System.Runtime.Caching is the newer of the two technologies and provides an abstract ObjectCache type that could potentially be extended to be file-based. Alternatively, there is a built-in MemoryCache type.
Unlike static fields, caches will persist state for all users based on a timeout (either fixed or rolling), and can potentially have cache dependencies that will cause the cache to be immediately invalidated. The general idea is to reload the data from a store (file or database) after the cache expires. The cache protects the store from being hit by every request - the store is only hit after the timeout is reached or the cache is otherwise invalidated.
In addition, you can specify that items are "Not Removable", which prevents them from being evicted when memory runs low (note that an in-memory cache will still be lost if the application pool restarts, which is another reason to back it with a file or database store).
More info: http://bartwullems.blogspot.com/2011/02/caching-in-net-4.html
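A brief sketch using System.Runtime.Caching, where NodeState and LoadNodeStateFromStore are placeholders for this Node's data and its backing store:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class NodeState
{
    public List<Uri> NextNodes { get; set; }
    public bool IsCompleted { get; set; }
}

public static class NodeStateCache
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public static NodeState GetNodeState()
    {
        var state = Cache["NodeState"] as NodeState;
        if (state == null)
        {
            state = LoadNodeStateFromStore(); // reload from a file or database after expiry

            var policy = new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(20),
                Priority = CacheItemPriority.NotRemovable // not evicted under memory pressure
            };
            Cache.Set("NodeState", state, policy);
        }
        return state;
    }

    private static NodeState LoadNodeStateFromStore()
    {
        // Placeholder: read the real state from the backing store here.
        return new NodeState { NextNodes = new List<Uri>(), IsCompleted = false };
    }
}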
I am currently developing a Microsoft Word Add-In that communicates with a backend via web services. The dialogs in this Add-In are created with WPF, and I make use of the MVVM pattern. The viewmodels communicate with the repository through services. In order to decouple the viewmodels and the services, I use the DI container framework Unity.
There is some kind of state (I call it "Context", similar to the http-context) that depends on the active document at the time a window/viewModel was created. This context contains stuff like the active user.
Since a picture is worth more than a thousand words, I prepared a diagram to illustrate the design.
Now my problem is that when a service method is called, the service needs to know what the active context is in order to process the request.
At the moment I am avoiding this problem by having one service per document. But since this works against the statelessness of services, I don't consider it a durable solution.
I also considered passing the context to the viewModel, so that I can pass it back to the service when calling a method there. But I see three problems here:
Technical Problems:
Each time I want to resolve a window with Unity I would have to pass a ParameterOverride object with the context, which creates dependencies on the concrete viewModel implementation (see the sketch after this list).
=> Or is there a better way to achieve this with Unity?
Cosmetic Problems:
I would have to pass the Context as an object, since its class is part of the startup project and the ViewModels are not. Whenever I want to obtain data from the context, I'd have to cast it.
Consider a windowViewModel that contains data for a TreeView with hundreds of TreeViewItems. I would have to pass the Context to each of these "TreeItemViewModels" if I wanted to call a service method in one of them.
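For reference, here is a minimal sketch of the ParameterOverride approach from the first point, using the classic Microsoft.Practices.Unity API; DocumentContext and DocumentWindowViewModel are hypothetical names:

using Microsoft.Practices.Unity;

public class DocumentContext
{
    public string ActiveUser { get; set; }
}

public class DocumentWindowViewModel
{
    private readonly DocumentContext _context;

    public DocumentWindowViewModel(DocumentContext context)
    {
        _context = context;
    }
}

public static class WindowFactory
{
    public static DocumentWindowViewModel Create(IUnityContainer container,
                                                 DocumentContext activeContext)
    {
        // The override is tied to the constructor parameter name "context",
        // which is exactly the coupling to the concrete viewModel I dislike.
        return container.Resolve<DocumentWindowViewModel>(
            new ParameterOverride("context", activeContext));
    }
}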
So I'm wondering if there is a way of automatically "injecting" (maybe reflection?) the context into the viewModel at runtime without the viewModel knowing anything about it. This is probably impossible to achieve with Unity, but I'm always open to being convinced.
On the other side, when a method on a service is called, the injected context would be automatically extracted (maybe by some kind of layer in front of the actual service) and saved into some globally accessible property.
I hope that some of you can help me. I appreciate any kind of idea.
I am using MVVM with WPF, but I am having a hard time understanding the concepts behind this design pattern.
I have a "myclass" object that is the state of the application (it stores data loaded from the repository). All pages of my application will use this data, and the object should be synchronized between them all.
My first approach was to store this data in the service layer, using a singleton class. So all ViewModels should call this service to get the data. Any modification should also go through this service, and an event would be fired to synchronize all views.
I am wondering now if it would be better to store this data in the model layer...
What's the best option?
EDIT:
Adding more information:
The data stored is a list of projects loaded into a solution. Since there must be only one solution, I implemented it as a singleton. The user can interactively load, change, or remove any project.
A Service, to my understanding, is just something that abstracts a piece of functionality (accessing the file system, accessing a database...) and that, say, implements a given interface which the VMs can then use when they require that functionality.
A Model, however, holds the business logic of your application and anything that assists in performing that business logic (and it may or may not implement INPC, as desired).
So in essence you use a service to get something done and then let go of it; a Model is more ingrained in your application.
For your given use-case, I'd have the stored info in the Model and implement INPC on it so that the ViewModels get notified of changes automatically if another ViewModel makes a change to the Model.
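A minimal sketch of such a Model, with Solution and Project as illustrative names for the shared state:

using System.Collections.ObjectModel;
using System.ComponentModel;

public class Project
{
    public string Title { get; set; }
}

public class Solution : INotifyPropertyChanged
{
    private string name;

    public Solution()
    {
        Projects = new ObservableCollection<Project>();
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // ObservableCollection raises its own collection-change notifications.
    public ObservableCollection<Project> Projects { get; private set; }

    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            OnPropertyChanged("Name");
        }
    }

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

All ViewModels can hold a reference to the same Solution instance and bind to it; WPF picks up the PropertyChanged and collection notifications automatically.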
I am currently studying towards my final year of a Computer Science degree, and working on my final project and dissertation. I will be using ASP.NET Web Forms and C# to create a 3-Layer project - I can't really call it 3-Tier as it will most likely never be hosted on anything other than my local PC for testing as it is for uni purposes only.
My main question is this:
From my understanding, the idea of 3-Layer is that the BLL references the DAL, and the UI references the BLL, to create complete separation of concerns. However, I have made a small mock-up project following a few tutorials to get the hang of 3-Layer, and most basic tutorials still require a reference between the UI and the DAL.
For example in the project I have created, which is a very basic Products and Categories type e-commerce system, I have created the Product and ProductDAL classes in the DAL, then the ProductBLL class in the BLL. With this setup, of only using one database table (forget categories for now), the BLL seems to only serve as a sort of interface between the UI and DAL, the methods are exactly the same as those in the DAL and only call the DAL version.
The problem is that to access the DAL via the BLL, I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
I can get around simple cases like this by creating a method in the BLL that takes the appropriate fields, e.g. strings and ints, creates the Product object, and passes it on to the AddProduct method. However, when it comes to binding different product attributes to labels in the UI, I still need access to the Product object.
So essentially, do I need to make a load of methods in the BLL to access the properties of the Product object? If not, what kind of methods would actually go there? Can you give me any examples of methods that might go in the BLL in this kind of Product scenario?
Thanks in advance, and apologies if this has been asked before - I did read through a lot of posts about 3-Layer architecture but most are very basic and only access one table.
the BLL seems to only serve as a sort of interface between the UI and DAL
This is only because this application is very simple - just a CRUD interface at the moment. More complex applications have business rules that would be encapsulated in the BLL (and not be in the UI or DAL).
I have to pass in a Product object to the BLL method arguments, which means creating a Product object in the UI first, which means referencing the DAL from the UI. Is this the correct way of doing things?
Well, there are several different options here:
You can have a Product data transfer object that is shared between the different layers (see the sketch after this list). This object is not a DAL object, but the DAL uses it. It is called a DTO - Data Transfer Object.
You can have several different Product objects - one consumed by the UI, one by the BLL, and one by the DAL - and have mapping layers to translate between the different objects.
Some combination of the above.
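A rough sketch of the first option, with invented project/namespace names: the shared Product DTO lives in its own Entities project, so the UI only ever references the Entities and BLL projects, never the DAL.

using System;

namespace YourProject.Entities
{
    // The DTO shared by the UI, BLL and DAL.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }
}

namespace YourProject.DAL
{
    using YourProject.Entities;

    public class ProductDAL
    {
        public void Insert(Product product)
        {
            // ADO.NET or EF code to persist the product would go here.
        }
    }
}

namespace YourProject.BLL
{
    using YourProject.Entities;
    using YourProject.DAL;

    public class ProductBLL
    {
        private readonly ProductDAL dal = new ProductDAL();

        // Business rules live here; the UI never needs a reference to the DAL project.
        public void AddProduct(Product product)
        {
            if (product.Price <= 0)
                throw new ArgumentException("Price must be positive.", "product");

            dal.Insert(product);
        }
    }
}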
A common way of separating concerns is to start by having a project called YourProject.Entities or something of the like. This contains the main class definitions and you reference it when you need to get a large entity like a customer or a product or something of the like. Alongside, you have another project which acts as a repository. Depending on the technology that you are using this can either implement something like EF to get your objects from your DB or can contain methods which query your DB directly using straight SQL or stored procedures.
What you have to keep in mind is that these projects are primarily going to function based on user input. Your users will act and your program will respond. The idea though is that the actual business logic is separated from your UI and your data access. You can mix and match these ideas as you wish, but what I have tended to see in my professional experience is basic data constraint enforcement done on the DB access side of things, and data validation either done directly when creating your objects in the Entities project or in a separate EntitiesValidation project which takes entities as a parameter.
If you don't want to have a separate validation project, something to keep in mind is that you can implement business logic directly in objects using constructors and properties. Constructors can enforce logic on inputs before creating objects, and using full properties--that is to say this...
private string myProp;

public string MyProp
{
    get
    {
        // Some code (e.g. format or guard the value before returning it)
        return myProp;
    }
    set
    {
        // Some code (e.g. validate the value before storing it)
        myProp = value;
    }
}
instead of this...
public string MyProp { get; set; }
allows you to implement rules when accessing the data associated with those properties.
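To illustrate the constructor half of that advice, here is a small hypothetical example (the Customer class and its rule are invented for this sketch):

using System;

public class Customer
{
    private readonly string email;

    // The constructor enforces the rule before an instance can exist.
    public Customer(string email)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            throw new ArgumentException("A customer needs a valid email address.", "email");

        this.email = email;
    }

    public string Email
    {
        get { return email; }
    }
}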
In the end, these questions can be answered many different ways and I am sure that every response to this question will give you different ideas and opinions on the best way to do things. For me, the two rules I always follow are DRY (do not repeat yourself) and code maintainability. By separating logic from data access from object design from UI, you will have a much easier time maintaining and updating your program when that time comes... even if it is just a school project ;).