I have a question about the standard way to perform a RESTful update.
We have a RESTful API, with an update URL like the following:
put /jobs/{jobUid:guid}
The signature in the RESTful controller is:
UpdateJob(Guid jobUid, [FromBody] UpdateJobOperation job)
Our UpdateJobOperation class has all the same properties as our Job class, except that the Id (Guid) is not in the UpdateJobOperation class.
Inside this update method, we map the UpdateJobOperation to our Job business object, then we call update on the service layer and pass in the job. The Job object has a Guid Id property on it. So my question is the following:
Should the signatures of our update methods on the service layer and on the repository layer (the service will do business logic, then call update on the repository) be like:
UpdateJob(Job job)
OR
UpdateJob(Guid jobUid, Job job)
If we use the single Job parameter, we obviously need to set the JobUid property on the Job before calling UpdateJob on the service.
Both methods obviously work, but I have been unable to find whether there is a best practice for service/repository updates.
What are your recommendations?
Thanks!
Without risking a religious argument...
Strictly from a RESTful API point of view, a PUT is for updating a resource that you have an id for. In that sense your API interface is fine. At your service layer I would be tempted to use the Update(Job job) signature, as this can be reused for your POST operation.
Your current implementation is correct. In particular, if you were to get rid of the jobUid parameter you would end up with the endpoint PUT /jobs, which could be mistaken for an endpoint that updates multiple jobs as opposed to a single one.
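To make the single-parameter option concrete, here is a minimal sketch (the `_mapper` and `_jobService` fields and the attribute-routing style are illustrative assumptions, not taken from the original code):

```csharp
// Illustrative sketch: the controller maps the body onto a Job, stamps the
// route id onto it, and calls the single-parameter service method.
[HttpPut]
[Route("jobs/{jobUid:guid}")]
public IHttpActionResult UpdateJob(Guid jobUid, [FromBody] UpdateJobOperation job)
{
    Job model = _mapper.Map<Job>(job); // UpdateJobOperation carries no id
    model.Id = jobUid;                 // the id from the route is authoritative
    _jobService.UpdateJob(model);      // the same signature can be reused after a POST
    return Ok();
}
```

The service and repository then only ever see a fully populated Job, so neither needs a separate id parameter.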
I have created a service that allows me to access the controller name, action name, and header values from HttpContext.Current.
It's working currently, but as I try to test the service I am discovering that using HttpContext within it might be a bad idea, since my service layer ends up fully aware of HttpContext.
e.g.
public virtual string GetControllerName()
{
return HttpContext.Current.Request.RequestContext.RouteData.Values["controller"].ToString();
}
I then thought of passing the request through as a parameter, but again this doesn't feel right.
public virtual string GetActionName(HttpRequest request)
{
return request.RequestContext.RouteData.Values["action"].ToString();
}
Can anyone provide any guidance on how my service should be set up to allow me to achieve what I need?
If your service class is "aware" that there is a controller and an action, then it knows it's being called during an HTTP request to a controller. If that's the case then it's not decoupled from the controller; in other words, that service can't function unless it's called while servicing a request, because outside of a request there is no controller or action.
So from that point of view, it doesn't matter too much that the service is depending directly on the HttpContext, because it's coupled to it either way.
Does the service class actually need those specific values, or is it just performing some logging and you want to know what called it?
If it depends on those values then you could create an interface like
interface IControllerContextProvider
{
    string ControllerName { get; }
    string ActionName { get; }
}
Then have an implementation like
public class HttpContextControllerContextProvider : IControllerContextProvider
{
    // Implements the interface by returning values from HttpContext.Current
    public string ControllerName
    { get { return HttpContext.Current.Request.RequestContext.RouteData.Values["controller"].ToString(); } }
    public string ActionName
    { get { return HttpContext.Current.Request.RequestContext.RouteData.Values["action"].ToString(); } }
}
If the service itself doesn't need those values in order to function (maybe it's just logging), then you could use an interceptor (see PostSharp, Castle Windsor, and others): when you inject your service into your controller, the interceptor "intercepts" the call, does your logging, and then continues with the original method call.
The interceptor is a class written for that specific purpose, so it makes sense for it to be coupled to your HttpContext. But the benefit is that if the details about the controller and action aren't really relevant to the main purpose of your service class then the interceptor provides a way to keep that code out of your service class.
That's done a lot with exception logging. If a class isn't going to actually handle exceptions then there's no need for it to contain code to log errors, since that's not the purpose of that class. Interceptors let us create those behaviors but keep them separate from classes that don't care about them.
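As a rough sketch of the interceptor idea using Castle DynamicProxy (in the same family as the libraries mentioned above; container registration and the actual log sink are omitted, and names are illustrative):

```csharp
using System.Web;
using Castle.DynamicProxy;

// Intercepts calls to an injected service and logs the current
// controller/action before letting the original call proceed.
public class ControllerContextLoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var routeValues = HttpContext.Current.Request.RequestContext.RouteData.Values;
        Log(string.Format("{0}/{1} -> {2}",
            routeValues["controller"], routeValues["action"], invocation.Method.Name));

        invocation.Proceed(); // continue with the original method call
    }

    private static void Log(string message)
    {
        // write to your logging framework of choice
    }
}
```

Because the interceptor is the only class that touches HttpContext, the service class itself stays free of that dependency.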
I am working on a system design. I have implemented multiple layers in my application, where the web layer calls the business layer and the business layer calls the data layer.
I want to keep a common correlation id for every call, so that I can log the input to any method, log any exception into the database with the same correlation id, and finally show the correlation id on the user's screen in case of an error.
I have implemented this in a WCF service, where I take the message id as the correlation id and use it throughout the request life cycle. But I am not sure how to implement this with normal libraries. I don't want to pass the correlation id as a parameter to every method or to the constructor of every class.
Can anyone point me to an article or an implementation approach for this?
Thanks
You should provide more information about how your layers are structured.
That said, if all your business services are stateless, one can assume you instantiate a new XXXService class for each request (say, for example, CustomerService).
What you could then do is to pass your correlation ID to every service class you instantiate, for example using a dependency injection framework. So inside the CustomerService class, you could have access to the correlation ID that was generated for your request.
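A minimal sketch of that idea (all type names here are illustrative): generate the correlation id once per request, register the context with a per-request lifetime in your DI container, and let the container hand the same instance to every service it builds for that request:

```csharp
using System;

// One instance per request; every service built for that request shares it.
public interface ICorrelationContext
{
    Guid CorrelationId { get; }
}

public class CorrelationContext : ICorrelationContext
{
    public Guid CorrelationId { get; private set; }

    public CorrelationContext()
    {
        CorrelationId = Guid.NewGuid();
    }
}

public class CustomerService
{
    private readonly ICorrelationContext _correlation;

    public CustomerService(ICorrelationContext correlation)
    {
        _correlation = correlation;
    }

    public void UpdateCustomer()
    {
        // Log using _correlation.CorrelationId; the container supplied it,
        // so no call site ever passes the id explicitly.
    }
}
```

The container does the plumbing, so no business method signature has to carry the id.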
I would have all my domain services' methods require an executionContext parameter (it could be a Dictionary&lt;string, object&gt; or a domain class if you want). This allows for more extensibility as requirements change in the future.
Is there a pattern or recommended method using ASP.NET MVC where I could be editing one object and need to create a related object on the fly (which may need another object created on the fly)? Perhaps a library/jQuery combo package that makes this easy?
Let's say I am on a page called JournalEntries/Edit/1234 and I realize I need to create a different Account object for the JournalEntry object... and maybe that Account object needs a Vendor object that doesn't yet exist. I wouldn't want to leave the page and lose everything that was already done, but instead nest creation forms and pass the state to the parent window when the object was successfully created, so that the workflow would be essentially uninterrupted.
Does such a thing exist, or are the business requirements too vague and variable to make that a realistic creation? Are there any pitfalls or issues I would need to worry about, building this sort of model?
You could consider delegating creation of the object (and its dependencies) to a business service, which would in turn use a unit of work and repositories to create the object in the data store. The business service would return the ID of the newly created object if it could create one successfully.
Now you can create a controller action which invokes the business service. Your front-end code can call the controller action via AJAX when you need to create the dependent object.
Since the above approach is unobtrusive, your workflow will not be interrupted, and you won't need any special library other than jQuery.
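A minimal sketch of such a controller action (the service, model, and route names are hypothetical); the nested creation form posts to it via AJAX and receives the new object's id back as JSON, so the parent edit page never unloads:

```csharp
using System.Web.Mvc;

public class VendorsController : Controller
{
    private readonly IVendorService _vendorService; // hypothetical business service

    public VendorsController(IVendorService vendorService)
    {
        _vendorService = vendorService;
    }

    [HttpPost]
    public ActionResult Create(CreateVendorModel model)
    {
        if (!ModelState.IsValid)
            return new HttpStatusCodeResult(400);

        int vendorId = _vendorService.CreateVendor(model); // returns the new ID
        return Json(new { id = vendorId });
    }
}
```

The jQuery side simply posts the nested form's fields to /Vendors/Create and drops the returned id into the parent form's selection list.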
The short answer here, apparently, is "no"... no such library or pattern exists at this point.
I have a WCF service with the service file Service.svc.
Here I can read incoming headers using WebOperationContext.Current.
The code in the service file accesses a data access utility layer which makes further calls; I need to do some work in the data access layer based on the header that was passed in.
However, WebOperationContext.Current is null there.
How do I get around this?
From your question, it seems your "data access utility layer" depends on information that was passed to the service through the headers. Make this explicit, preferably through an interface so it's easily testable. Something like this:
public class DataAccessLayer
{
    public DataAccessLayer(IMetaInfoFromHeaders requiredInfo) { /* store for later use */ }
    /* implementation */
}
(Alternatively you could have IMetaInfoFromHeaders be an argument to just one or a few methods in the DAL, if that seems better; this depends on the specifics.)
Your service is responsible for processing the message. It should extract the information from the headers, and pass it to the DAL using an object implementing IMetaInfoFromHeaders.
Bottom line: don't make the DAL dependent on the WebOperationContext.
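A small sketch of that separation (the header name and member names are assumptions): the service reads the header while WebOperationContext.Current is still available, copies the value into a plain object, and hands that to the DAL:

```csharp
using System.ServiceModel.Web;

// Plain carrier object: the DAL sees this, never the WCF context.
public class HeaderInfo : IMetaInfoFromHeaders
{
    public string TenantId { get; set; }
}

public class Service
{
    public void DoWork()
    {
        // Runs inside the service operation, where the context exists.
        var headers = WebOperationContext.Current.IncomingRequest.Headers;
        var info = new HeaderInfo { TenantId = headers["X-Tenant-Id"] };

        // The DAL stays testable: any IMetaInfoFromHeaders can be passed in.
        var dal = new DataAccessLayer(info);
        // ... use dal ...
    }
}
```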
I am using MVC 3 and am trying to get my head around the service layer and the repository. I am currently working through the sample app that comes with the DoFactory source code. This question is based on the sample application, but applies in general.
There is a service layer (WCF) that exposes a set of service methods. The service layer implements a single point of entry (the Façade pattern) through which all communication with the layers below must occur. The Façade is the entry point into the business layer and exposes a very simple, coarse-grained API.
Let's say I am trying to get a list of clients; the MVC controller will call the repository's GetCustomers method, which will then call the service layer's GetCustomers method.
I think I am a bit confused here. Is this application architecture correct? Shouldn't the controller call the service layer's method, which then calls the repository's method? I always thought that the repository was the last layer called to get data.
Please can someone help clarify this?
Your architecture is correct.
I always thought that the repository was always the last method called to get data?
Yes. In your case the data comes from a WCF service, but it could come from anywhere: a SQL database, an XML file, ...
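For example (names are illustrative), a repository in this architecture is just a thin wrapper whose implementation happens to call a WCF client rather than a database:

```csharp
using System.Collections.Generic;

public class CustomerRepository : ICustomerRepository
{
    public IList<Customer> GetCustomers()
    {
        // The "data source" behind this repository is the WCF service;
        // swapping in SQL or an XML file would not change the interface.
        using (var client = new CustomerServiceClient())
        {
            return client.GetCustomers();
        }
    }
}
```

The controller only ever talks to ICustomerRepository, so where the data ultimately comes from is invisible to it.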