I'm writing a WCF service for a customer to send part information to our application. We have multiple customers that will have one or many locations, and part information is scoped to each location for the customer. When the customer calls our service they will need to specify the location.
Options that we have considered are:
1) Placing one or more location IDs in a custom header. All part information would apply to all locations listed.
2) Adding a "context" node to the message body. All part information would apply to all locations listed.
3) Adding a location node in the message body that would contain the part information. Each location would have its own list of parts.
I'm looking for best practice/standards help in determining how this should be handled. We will have to create other services that will have the customer/location scope as well, and would like to handle this in a consistent manner.
I would say if it's only one or two operations that need it, make it part of the data contract - sort of like making it a parameter to a method call. If every operation requires it, put it in the header, since it's just as much context as username, roles, tenant, or other authentication information - sort of like something you'd put in a request context (e.g., HttpContext).
Do you need to use a message contract? I use data contracts unless I need to stream something back, so everything just ends up in the body. But even for a message contract I would put that information in the body; I tend to reserve the header for authentication information.
We plan to send a response with processing summary information and details about any part that could not be processed.
The message contract has a collection of parts, and the parts are defined in a data contract. There is also a flag in the message contract to control processing of the parts collection. This may or may not be the right place for this flag.
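For what it's worth, here is a minimal sketch of option 1 in WCF, with the location scope carried as a SOAP header and the parts kept in the body. The names (SubmitPartsRequest, LocationIds, Part, ReplaceExisting) are invented for illustration, not taken from your contracts:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[MessageContract]
public class SubmitPartsRequest
{
    // Location scope travels as a SOAP header, separate from the payload.
    [MessageHeader(MustUnderstand = true)]
    public List<int> LocationIds { get; set; }

    // The parts themselves stay in the body.
    [MessageBodyMember]
    public List<Part> Parts { get; set; }

    // Flag controlling how the parts collection is processed (as in your message contract).
    [MessageBodyMember]
    public bool ReplaceExisting { get; set; }
}

[DataContract]
public class Part
{
    [DataMember] public string PartNumber { get; set; }
    [DataMember] public string Description { get; set; }
}

The same header member could then be reused on the other services that share the customer/location scope, which keeps the contracts consistent.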
Related
Is it recommended practice to implement the below endpoint using 'PUT' verb to create & update a resource?
PUT /jobs/{jobid}
(or)
POST /jobs - to create a resource
PUT /jobs/{jobid} - only to update the existing record
Mixing up create & update logic in a PUT endpoint may create issues on the endpoint consumer side, as PUT is idempotent while POST is NOT idempotent.
What are the other consequences if I mix up create & update resource logic within the 'PUT' endpoint?
Point me to any relevant RFCs, if any.
The HTTP PUT verb is used to update a resource, but it can also be used to create a resource if it does not already exist. However, it's bad practice, as it goes against its meaning.
POST is not idempotent, while PUT is idempotent. This means that multiple identical POST requests may create multiple resources, while multiple identical PUT requests should update the same resource each time.
If you want to support both creating and updating a resource using the same endpoint, you can use the POST verb for both operations and include an additional parameter or field in the request to indicate whether you are creating or updating the resource.
You can refer to the HTTP/1.1 specification (RFC 7231): https://tools.ietf.org/html/rfc7231#section-4.3
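As a rough illustration of that split (a minimal ASP.NET Core sketch; the JobDto and IJobStore types are made up for the example):

using Microsoft.AspNetCore.Mvc;

public record JobDto(string Title);

public interface IJobStore                      // hypothetical persistence abstraction
{
    string Add(JobDto job);
    bool Exists(string jobId);
    void Replace(string jobId, JobDto job);
}

[ApiController]
[Route("jobs")]
public class JobsController : ControllerBase
{
    private readonly IJobStore _store;
    public JobsController(IJobStore store) => _store = store;

    // POST /jobs - create a new resource; the server assigns the id.
    [HttpPost]
    public IActionResult Create(JobDto job)
    {
        var id = _store.Add(job);
        return CreatedAtAction(nameof(Update), new { jobId = id }, job);
    }

    // PUT /jobs/{jobId} - update an existing resource only; 404 if it isn't there.
    [HttpPut("{jobId}")]
    public IActionResult Update(string jobId, JobDto job)
    {
        if (!_store.Exists(jobId)) return NotFound();
        _store.Replace(jobId, job);
        return NoContent();
    }
}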
Mixing up create & update logic in a PUT endpoint may create issues on the endpoint consumer side, as PUT is idempotent while POST is NOT idempotent.
It shouldn't introduce any client issues.
An important constraint in REST is the uniform interface, which means (among other things) that everybody understands message semantics the same way. In the context of HTTP, that means that everybody agrees that HTTP PUT means... whatever the current standard says it means.
The current registered reference for HTTP PUT is RFC 9110:
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message content.
A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response.
In other words, PUT is a lot like "save file"; it's the HTTP method we would use if we were using HTTP to publish a new page to our website.
The uniform interface constraint tells us that our HTTP APIs should understand messages exactly the same way that a general purpose web server would understand them.
The power that gives us is that it allows us to use general purpose components (browsers, caches, proxies) without needing to know anything about the semantics of the resource or its representation.
(Note: the important thing to recognize is that agreeing on what the messages mean doesn't mean the server needs to do a specific thing. See Fielding 2002 on the semantic constraints of HTTP GET; the principle generalizes to all standardized HTTP methods.)
Now, you can use HTTP POST if you prefer (see Fielding, 2009). The problem is that POST semantics allow a lot more freedom, which restricts a general purpose component from doing intelligent things because it doesn't know enough about what is going on.
For example, on an unreliable network an HTTP response may be delayed or lost. Because the semantics of PUT describe an idempotent action, general purpose clients can know that it is safe to try sending the request again. POST, on the other hand, doesn't imply that constraint, and therefore general purpose components shouldn't automatically retry those requests.
But it's a trade off - POST limits what a general purpose component can do in response to a contingency, but maybe it is worth it if that means your API is more familiar to the human developers who are going to use it, or if it makes life easier for the operators keeping your API running, or whatever.
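To make that trade-off concrete, here is a minimal sketch (my own illustration, not anything mandated by the standard) of a client-side handler that retries transient failures only for idempotent methods, which is exactly the kind of decision a general purpose component can make for PUT but not for POST:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class IdempotentRetryHandler : DelegatingHandler
{
    private static readonly HashSet<string> IdempotentMethods =
        new(StringComparer.OrdinalIgnoreCase) { "GET", "HEAD", "PUT", "DELETE", "OPTIONS" };

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        const int maxAttempts = 3;
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await base.SendAsync(request, cancellationToken);
            }
            catch (HttpRequestException) when (
                attempt < maxAttempts && IdempotentMethods.Contains(request.Method.Method))
            {
                // The response was lost; because the method is idempotent, resending the
                // same request cannot change the intended outcome. (The request content
                // must be re-readable, e.g. buffered, for this to work.)
                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt), cancellationToken);
            }
        }
    }
}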
If PUT can create or update a record, then how is it idempotent?
Because idempotent, in HTTP, means:
the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
It's a lot like how we use maps/dictionaries/associative-arrays to store information
dict["readme.txt"] = "Hello World"
and
dict["readme.txt"] = "Hello World"
dict["readme.txt"] = "Hello World"
dict["readme.txt"] = "Hello World"
Call it once, call it twice, call it thrice, the end result is the same: we have this specific value stored under this specific key.
That's really what PUT means; the target URI is the key, the request body is the value. "Please make your document look like my document".
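On the server side, that "make your document look like my document" semantic usually comes out as an upsert keyed by the URI. A minimal sketch (the in-memory store and the documents route are invented for the example):

using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("documents")]
public class DocumentsController : ControllerBase
{
    // The request URI is the key, the request body is the value.
    private static readonly ConcurrentDictionary<string, string> Store = new();

    // PUT /documents/{name} - create if absent, replace if present.
    // Sending the same request once or five times leaves the store in the same state.
    [HttpPut("{name}")]
    public IActionResult Put(string name, [FromBody] string content)
    {
        var created = Store.TryAdd(name, content);
        if (!created) Store[name] = content;
        return created ? StatusCode(201) : NoContent();
    }
}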
I'm developing a service that'll provide messages to a calling UI based on conditions that exist per message.
I have a json file that stores all the different possible messages the UI can receive and I expect that file to grow throughout the application's lifetime (i.e. I plan to add new message configurations)
The problem is that each message has a different condition that determines whether it should be included in the response. These conditions rely on downstream calls to some DB or other service, plus logic that processes the results of those calls to resolve whether the condition is true or false.
I want each message in my configuration file to have a value that resolves to a class in my code, so that when the endpoint is called I can just call a "resolve" method, passing that "message configuration"; i.e. the resolve method calls the class associated with that message (and its condition resolution method) and makes all the necessary calls to downstream services.
Is there a way to have each of my messages have classes associated to them in my json configuration file? Am I missing a much easier way to implement this?
As far as I can see, when you need to add a new message to the application, you have to add it to the configuration file and additionally define a separate class with the conditions. Right?
If your answer is yes, the message file (JSON) won't save you from repeated builds and deployments.
You can move your messages into classes that implement a common interface, then simply register them as a collection and process messages through that collection.
The message and conditions will be resolved without any "magic" tricks.
Of course, for a new message type you have to declare a new class with the same interface, the new message, and the appropriate logic.
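A minimal sketch of that shape (the IMessageRule, Message, and IInvoiceService names are all invented for illustration, as is the example rule):

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public record Message(string Key, string Text);

// Each message lives in a class that carries its own text and its own inclusion logic.
public interface IMessageRule
{
    Task<Message?> ResolveAsync(CancellationToken ct);
}

public interface IInvoiceService                       // hypothetical downstream dependency
{
    Task<bool> HasOverdueInvoicesAsync(CancellationToken ct);
}

public class OverdueInvoiceRule : IMessageRule         // hypothetical example rule
{
    private readonly IInvoiceService _invoices;
    public OverdueInvoiceRule(IInvoiceService invoices) => _invoices = invoices;

    public async Task<Message?> ResolveAsync(CancellationToken ct) =>
        await _invoices.HasOverdueInvoicesAsync(ct)
            ? new Message("overdue-invoice", "You have overdue invoices.")
            : null;
}

// Registration: every rule is just another implementation of the same interface, e.g.
//   services.AddScoped<IMessageRule, OverdueInvoiceRule>();
// The endpoint then resolves the whole collection:
public class MessageResolver
{
    private readonly IEnumerable<IMessageRule> _rules;
    public MessageResolver(IEnumerable<IMessageRule> rules) => _rules = rules;

    public async Task<IReadOnlyList<Message>> ResolveAllAsync(CancellationToken ct)
    {
        var results = new List<Message>();
        foreach (var rule in _rules)
        {
            var message = await rule.ResolveAsync(ct);
            if (message is not null) results.Add(message);
        }
        return results;
    }
}

Adding a new message still means a new class and a deployment, but the message text, its condition, and the downstream calls all live in one place.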
I'm trying to stay RESTful and follow protocols. I have an "Organisation" domain object, and have the usual POST/GET/PUT/DELETE operations:
POST https://www.example.com/api/organisation saves a new org.
GET https://www.example.com/api/organisation/{id} gets an org by its ID
etc
Something a user can do from the client side (Website, mobile, etc) is set their default organisation. On the database side, we're just setting a default flag against the org they want to be defaulted.
But on the API side, I'm not sure how to do this and keep it within good practice. At the moment, I have a method in my code:
[HttpPost]
public async Task<IActionResult> SetDefaultOrganisation()
I am not sure how to expose that to the API.
https://www.example.com/api/organisation/setdefault/{id}
That doesn't seem right. I don't want to do a PATCH, as the API must describe what's happening; it's not an any-item-can-change update.
Is https://www.example.com/api/organisation/{id}/setdefault a more acceptable option?
One important point for RESTful endpoints is to be resource-centric and not process-centric.
That means if you have a verb (an action, like save or add) in your endpoint, you have a problem in your design.
In your question you wrote
Something a user can do from the client side (Website, mobile, etc) is set their default organisation
So from a resource point of view, the main resource here is the user.
With this in mind, the endpoint can look like this (a POST or PUT):
/api/user/{userId}/organizations/default
And the body:
{ "orgId": 1234 }
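A minimal sketch of that endpoint in ASP.NET Core (the IUserService abstraction and DTO name are invented for illustration):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record SetDefaultOrganisationDto(int OrgId);

public interface IUserService                           // hypothetical application service
{
    Task SetDefaultOrganisationAsync(int userId, int orgId);
}

[ApiController]
[Route("api/user/{userId}/organizations")]
public class UserOrganisationsController : ControllerBase
{
    private readonly IUserService _users;
    public UserOrganisationsController(IUserService users) => _users = users;

    // PUT /api/user/{userId}/organizations/default
    // Replaces the user's "default organisation" with the one named in the body.
    [HttpPut("default")]
    public async Task<IActionResult> SetDefault(int userId, SetDefaultOrganisationDto body)
    {
        await _users.SetDefaultOrganisationAsync(userId, body.OrgId);
        return NoContent();
    }
}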
Restful way to perform a data changing action
In REST, the way we pass information to the web server is by editing the website.
This is normally done in one of two ways
We GET a representation of the resource, make edits to our copy, and then deliver the revised representation to the server
We fill in a form, and deliver the representation of the form data to the server.
From the perspective of REST, these two patterns are the same ones that you use for editing any resource (that's the uniform interface constraint at work: our general purpose tools don't need to know anything about the meaning of the edits we are making).
Therefore, the design problem is to identify which resource (document) includes the representation of the information you want, and what representation (schema) you will use to communicate that change of information.
From the perspective of general purpose REST components, we don't care what the document is -- that's the server's problem; we only care about the string literal used to identify the document.
General purpose components also don't care about spelling conventions; /a472a51b-7e36-404b-a985-1fc79d1b7464 is a perfectly satisfactory identifier. What this degree of freedom means is that you can choose identifiers that are easy for human beings to understand, and identifiers that are easy for your implementation to route.
/api/organisation/{id}/setdefault
From the point of view of general purpose components, this is fine. Human beings, however, are likely to object -- because this looks like it identifies an action, and what we really want is an identifier that tells us what the document is.
So that might be /api/organization/{id}, if somewhere in the document that describes the organization we have a list of users for whom this is the default organization.
But I think it would be more common to have something like /api/profiles/{userId}, where the profile document includes information about what the default organization is for each user.
If you needed a fine grain resource, then it might instead be something like /api/profiles/{userId}/defaultOrganization or even /api/defaultOrganizations/{userId}
(The ordering of path segments is something you decide upon based on how you want to take advantage of the relative references, as defined by RFC 3986).
In the realm of DDD I like the idea of avoiding getters and setters to fully encapsulate a component, so the only interaction that is allowed is the interaction which has been built through behavior. Combining this with Event Sourcing, I can get a nice history of what has been actioned on a component and when.
One thing I have been thinking about is when I want to create, for example, a RESTful gateway to the underlying service. For the purposes of example, let's say I have a Task object with the following methods:
ChangeDueDate(DateTime date)
ChangeDescription(string description)
AddTags(params string[] tags)
Complete()
Now obviously I will have instance variables inside this object for controlling state and events which will be fired when the relevant methods are invoked.
Going back to the REST Service, the way I see it there are 3 options:
Make RPC style urls e.g. http://127.0.0.1/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields to find differences, and fire the relevant event whenever a difference in a particular property is found.
Looking at this now, I feel that the second option looks to be the best, but I am wondering what other people's thoughts on this are, and whether there is a known, truly RESTful way of dealing with this kind of problem. I know with the second option that it would be a really nice experience from a TDD point of view and also from a performance point of view, as I could combine changes in behavior into a single request whilst still tracking change.
The first option would definitely be explicit but would result in more than 1 request if many behaviors needed to be invoked.
The third option does not sound bad to me, but I realise it would require some thought to come up with a clean implementation that could account for different property types, nesting, etc.
Thanks for your help in this, really bending my head through analysis paralysis. Would just like some advice on what others think would be the best way from the options or whether I am missing a trick.
I would say option 1. If you want your service to be RESTful then option 2 is not an option; you'd be tunneling requests.
POST /api/tasks/{taskid}/changeduedate is easy to implement, but you can also do PUT /api/tasks/{taskid}/duedate.
You can create controller resources if you want to group several procedures into one, e.g. POST /api/tasks/{taskid}/doThisAndThat; I would do that based on client usage patterns.
Do you really need to provide the ability to call any number of "behaviors" in one request? (does order matter?)
If you want to go with option 3 I would use PATCH /api/tasks/{taskid}, that way the client doesn't need to include all members in the request, only the ones that need to change.
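A rough sketch of what those routes could look like (the DTO names and the ITaskService abstraction are invented; this is not a definitive layout):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record DueDateDto(DateTime DueDate);
public record TaskPatchDto(string? Description, DateTime? DueDate);

public interface ITaskService                           // hypothetical application service
{
    Task ChangeDueDateAsync(Guid taskId, DateTime dueDate);
    Task ChangeDescriptionAsync(Guid taskId, string description);
}

[ApiController]
[Route("api/tasks/{taskId}")]
public class TasksController : ControllerBase
{
    private readonly ITaskService _tasks;
    public TasksController(ITaskService tasks) => _tasks = tasks;

    // Option 1, resource-flavoured: PUT api/tasks/{taskId}/duedate replaces just the due date.
    [HttpPut("duedate")]
    public async Task<IActionResult> PutDueDate(Guid taskId, DueDateDto body)
    {
        await _tasks.ChangeDueDateAsync(taskId, body.DueDate);
        return NoContent();
    }

    // Option 3 via PATCH: the client sends only the members that should change.
    [HttpPatch]
    public async Task<IActionResult> Patch(Guid taskId, TaskPatchDto body)
    {
        if (body.Description is not null) await _tasks.ChangeDescriptionAsync(taskId, body.Description);
        if (body.DueDate is not null) await _tasks.ChangeDueDateAsync(taskId, body.DueDate.Value);
        return NoContent();
    }
}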
Let's define a term: operation = command or query from a domain perspective, for example ChangeTaskDueDate(int taskId, DateTime date) is an operation.
By REST you can map operations to resource and method pairs. So calling an operation means applying a method to a resource. The resources are identified by URIs and are described by nouns, like task or date, etc... The methods are defined in the HTTP standard and are verbs, like get, post, put, etc... The URI structure does not really mean anything to a REST client, since the client is concerned with machine-readable stuff, but for developers it makes it easier to implement the router and the link generation, and you can use it to verify whether you bound URIs to resources and not to operations like RPC does.
So for our current example ChangeTaskDueDate(int taskId, DateTime date), the verb is change and the nouns are task, due-date. So you can use the following solutions:
PUT /api{/tasks,id}/due-date "2014-12-20 00:00:00" or you can use
PATCH /api{/tasks,id} {"dueDate": "2014-12-20 00:00:00"}.
The difference is that PATCH is for partial updates and it is not necessarily idempotent.
Now this was a very easy example, because it is plain CRUD. For non-CRUD operations you have to find the proper verb and probably define a new resource. This is why you can map resources to entities only for CRUD operations.
Going back to the REST Service, the way I see it there are 3 options:
Make RPC style urls e.g. http://example.com/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://example.com/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://example.com/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields to find differences, and fire the relevant event whenever a difference in a particular property is found.
The URI structure does not mean anything. We can talk about semantics, but REST is very different from RPC. It has some very specific constraints, which you have to read before doing anything.
This has the same problem as your first answer. You have to map operations to HTTP methods and URIs. They cannot travel in the message body.
This is a good beginning, but you don't want to apply REST operations on your entities directly. You need an interface to decouple the domain logic from the REST service. That interface can consist of commands and queries. So REST requests can be transformed into those commands and queries which can be handled by the domain logic.
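For example, a sketch of that decoupling, using the ChangeDueDate behaviour from the question (the command, handler, and repository types are invented for illustration):

using System;
using System.Threading;
using System.Threading.Tasks;

// A command expresses the intent in domain terms, independent of HTTP.
public record ChangeTaskDueDate(Guid TaskId, DateTime DueDate);

public interface ICommandHandler<TCommand>
{
    Task HandleAsync(TCommand command, CancellationToken ct);
}

public class TaskAggregate                              // stand-in for the Task domain object
{
    public void ChangeDueDate(DateTime date) { /* raises the domain event; omitted here */ }
}

public interface ITaskRepository                        // hypothetical repository
{
    Task<TaskAggregate> GetAsync(Guid id, CancellationToken ct);
    Task SaveAsync(TaskAggregate task, CancellationToken ct);
}

// The domain side handles the command against the aggregate; nothing HTTP-specific leaks in.
public class ChangeTaskDueDateHandler : ICommandHandler<ChangeTaskDueDate>
{
    private readonly ITaskRepository _tasks;
    public ChangeTaskDueDateHandler(ITaskRepository tasks) => _tasks = tasks;

    public async Task HandleAsync(ChangeTaskDueDate command, CancellationToken ct)
    {
        var task = await _tasks.GetAsync(command.TaskId, ct);
        task.ChangeDueDate(command.DueDate);
        await _tasks.SaveAsync(task, ct);
    }
}

// The REST layer's only job is translation, e.g. inside a PUT or PATCH action:
//   await handler.HandleAsync(new ChangeTaskDueDate(taskId, body.DueDate), ct);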
From .NET (C#) code, we are invoking a Java web service.
It's a SOAP request. The Java web services are developed using Axis 1.4.
Following is the sample code that makes the Java web service request:
private string GetUserInfo(string strEID)
{
    string strEmpData = string.Empty;
    GetEmpInfo.EmpProxyService objEmp = null;
    try
    {
        objEmp = new GetEmpInfo.EmpProxyService();
        strEmpData = objEmp.searchByEID(strEID);
    }
    catch (WebException ex)
    {
        // exception is swallowed in this sample
    }
    if (objEmp != null)
    {
        objEmp.Dispose();
    }
    return strEmpData;
}
Now, we have a change request which requires passing some additional information, a name/value pair, to the Java web service.
How can we achieve this?
Can I pass the information in HTTP/SOAP headers?
Changing the method signature to pass the additional info is not a good idea, I guess.
EDIT: Basically, we want to log information about who is consuming the web services. Once the Java web service request is processed successfully, we will log the usage information along with the source of the request (web application/Windows application/Flex client).
We want each client to send a unique id to identify itself. Since this has nothing to do with the business logic, can we add it as metadata, say in headers?
If you have control over the service signature, I would actually suggest that you change the signature of this web service, or add another method that takes the additional arguments. When you're using a high-level language like C# or Java, the tendency is for the web service framework to abstract the entire SOAP stack away from you and leave you dealing with just the plain objects that eventually get serialized to make the method call. With only the argument objects exposed, it can be tricky to try to inject additional stuff into the SOAP message if it's not part of the actual method signature.
There are usually ways to manipulate the SOAP message by hand, but I would probably shy away from that if possible, as editing the SOAP message by hand goes against the point of using a serialization-driven framework. That said, if you have no control over the service method, and the group in control of it needs you to pass additional data outside of the soap objects, you might be stuck messing with the SOAP message by hand.
If you want to add some future-proofing to your services, I would suggest passing a full-fledged object rather than a single string or primitive value. In your object, you could include a key-value data store like a HashMap or Dictionary so that additional data can be passed without changing the signature or schema of the web service. With key-value data, documentation becomes important because the data types and parameters are no longer clearly specified.
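A rough sketch of that idea on the .NET side (the type and member names are invented; the actual contract would have to be agreed with the Java service):

using System.Collections.Generic;

// Request object with an open-ended key-value bag, so new data can be added later
// without changing the service signature or schema.
public class EmpInfoRequest
{
    public string EmployeeId { get; set; }

    // e.g. { "clientId", "webappln" } or { "source", "flex" }; document the allowed keys.
    public Dictionary<string, string> Metadata { get; set; } = new Dictionary<string, string>();
}

// Hypothetical new proxy signature:
//   EmpInfoResponse searchByEID(EmpInfoRequest request);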
You can use SOAP headers but I would rather not go that route since the headers have no business meaning. Rather change the signature and use request and response objects.
SearchByEIDResponse GetEmpInfo.EmpProxyService.searchByEID(SearchByEIDRequest)
This makes any changes less painful and prevents huge parameter lists.
How you pass information to a web service depends on the methods that web service exposes. What language the service is written in is inconsequential to you as the consumer. If the Java web service requires a name/value pair to retrieve some data, then a method signature will expose that.
objEmp.searchByEID(strEID, strVal1, strVal2);
That said, as Eben indicates, you are better off using request and response objects to keep your parameter lists short. Knowing when to use these more complex types comes with experience, i.e. don't use a request object from the get-go if you only need to pass a single string value, but do use one if you need to pass 50 string values.
If you have multiple web services and don't want to change all the methods (which is reasonable), a SoapExtension is the way to go: http://msdn.microsoft.com/en-us/library/system.web.services.protocols.soapextension.aspx
You write your SoapExtension class on the client, declare it in the web.config, and you're done.
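For reference, a minimal sketch of what such an extension can look like on the client (it only intercepts the serialized request and response streams, e.g. to log which client made the call; injecting data into the message itself would need more work and agreement with the Axis service):

using System;
using System.IO;
using System.Web.Services.Protocols;

public class ClientLoggingExtension : SoapExtension
{
    private Stream _wireStream;
    private Stream _workingStream;

    public override Stream ChainStream(Stream stream)
    {
        // Keep the original network stream and hand the framework our own buffer.
        _wireStream = stream;
        _workingStream = new MemoryStream();
        return _workingStream;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute) => null;
    public override object GetInitializer(Type serviceType) => null;
    public override void Initialize(object initializer) { }

    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.AfterSerialize:
                // The outgoing request XML is now in our buffer: inspect/log it here,
                // then copy it on to the real network stream.
                _workingStream.Position = 0;
                _workingStream.CopyTo(_wireStream);
                break;

            case SoapMessageStage.BeforeDeserialize:
                // Copy the incoming response into the buffer the framework will read from.
                _wireStream.CopyTo(_workingStream);
                _workingStream.Position = 0;
                break;
        }
    }
}

// Registered on the client in web.config/app.config under
// <webServices><soapExtensionTypes><add type="..." /></soapExtensionTypes>.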