REST API can't receive encoded parameters in URLs - C#

We have some endpoints that receive a username, and the username may contain special characters. For example:
POST /api/users/{userName}/cars/
If the username is "joe+doe", we get a 404 error. We have encoded it and sent
POST /api/users/joe%2Bdoe/cars/
but it still doesn't work.
We could either change to an id or change the route into a format like "api/users/cars?username={username}", but we would like to follow the logic we have.
So the question is: is there a way to configure REST API endpoints to accept encoded characters?
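For reference, here is a minimal sketch of the kind of endpoint described, assuming ASP.NET Core attribute routing (the controller and action names are made up for illustration):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    // POST /api/users/{userName}/cars/
    // ASP.NET Core decodes route values before binding, so "joe%2Bdoe" arrives
    // here as "joe+doe". A 404 on such requests often comes from the host
    // (e.g. IIS request filtering) rejecting the path before it reaches MVC,
    // so that is usually the layer to look at.
    [HttpPost("{userName}/cars")]
    public IActionResult AddCar(string userName)
    {
        return Ok(userName);
    }
}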

Related

ServiceStack Axios URL special characters

What is the best way to deal with special characters in URLs with ServiceStack and a JavaScript Axios client, or any other client?
Example:
URL path: /MasterItems/{Code} - Code can have any character in it (/ \ & etc.)
The above URL could be generated by the API (backend), so it will come back from a previous request as part of the response _links.
Example of the response _links:
/MasterItems/A1200G/FA (the code is A1200G/FA)
My frontend code (VueJS, JavaScript, Axios) will simply take the _links resource and call GET on it.
How should I treat this? Turn my GET into a POST and pass a parameter?
Encode the URL?
Note:
I noticed that when using the built-in Swagger feature of ServiceStack, the resulting call works.
The URL will be: /MasterItems/%7BCode%7D?Code=A1200G%2FA
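On the ServiceStack side, routes like this are declared on a request DTO. Here is a rough sketch (the DTO and response type names are invented for illustration); a wildcard route is one option when the code itself can contain slashes:

using ServiceStack;

// Hypothetical request DTO. A plain {Code} segment cannot match a value that
// contains "/", so a wildcard ({Code*}) lets the remainder of the path,
// slashes included, bind to Code. The client still needs to percent-encode
// characters such as "&" and "\" before issuing the GET.
[Route("/MasterItems/{Code*}", "GET")]
public class GetMasterItem : IReturn<GetMasterItemResponse>
{
    public string Code { get; set; }
}

public class GetMasterItemResponse
{
    public string Code { get; set; }
}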

How to Submit String with 536000 Characters to API

I interpret a G-code file (CNC language), serialize it into a class, and try to send it over HTTP to my API, which has a GET method.
However, it is too long a string to be sent over HTTP.
Is there any solution to this problem? Something like compression?
Request URL Too Long
HTTP Error 414. The request URL is too long.
Using ASP.NET Web API
Try using POST on both sides. Doing so also makes more sense from a REST point of view, as you are submitting data, not doing a query.
Note that GET requests are usually very limited in size, see e.g. here: maximum length of HTTP GET request?
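As a rough sketch of the POST approach, here is one way to send the serialized G-code as a gzip-compressed request body with HttpClient (the endpoint URL is a placeholder; the server would also need to decompress gzip request bodies, and a plain uncompressed POST is usually enough to get past the 414 limit):

using System;
using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task SendGCodeAsync(string json)
{
    // Compress the (potentially very large) serialized payload with gzip.
    byte[] raw = Encoding.UTF8.GetBytes(json);
    using (var buffer = new MemoryStream())
    {
        using (var gzip = new GZipStream(buffer, CompressionMode.Compress, leaveOpen: true))
        {
            gzip.Write(raw, 0, raw.Length);
        }

        var content = new ByteArrayContent(buffer.ToArray());
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        content.Headers.ContentEncoding.Add("gzip");

        using (var client = new HttpClient())
        {
            // POST: the data travels in the request body, not the URL.
            var response = await client.PostAsync("https://example.com/api/gcode", content);
            response.EnsureSuccessStatusCode();
        }
    }
}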

mvc encoding/decoding the querystring

I am developing an ASP.NET MVC 6 application, and as part of the application we will send out emails containing a link that a user can click to reach a particular action method.
An example of an emailed link would be
http://identity.platform:7000/account/register?emailinvitation=true&email=yerg@test.com
Which would then go to the AccountController Register action method:
Register(string userName, bool emailInvitation=false, string email="" )
What I would like is to Base64-encode the URL, so the user is not tempted to manually change any of the parameters, giving a link like
http://identity.platform:7000/account/register?url=ZW1haWxpbnZpdGF0aW9uPXRydWUmZW1haWw9eWVyZ0B0ZXN0LmNvbQ==
So the flow in my MVC application would be:
receive the request
check if there is a url parameter that needs decoding
if so decode and send on to the appropriate controller/action method
My question is, where should I be intercepting the request and decoding it? Should this happen in the routing, or somewhere later? How do I then redirect to the action method with the appropriate parameters?
On the server, generate a GUID for the specific invite, and send that in the email instead of the params.
You will also need an overload for the Register action method which accepts the GUID string instead.
public IActionResult Register(string guid)
{
}
It will fetch the linked details (e.g. email address) from the data store and then proceed as per your normal process.
Unlike base64, there's no way for anyone to reverse it and discover the parameters, and it's hard for the user to guess another valid GUID. There's no need for you to worry about encoding and decoding them, and you can easily make them one-time-only tokens, which may be helpful to your business process. Another bonus is that you don't end up with sensitive data like email addresses in your server logs or user's browsing history, or transmitted in the clear over HTTP (as per your example URL).
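A minimal sketch of that flow, assuming ASP.NET Core MVC (the in-memory dictionary stands in for whatever data store you use, and all names are illustrative):

using System;
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    // In-memory stand-in for a real data store keyed by the invite GUID.
    private static readonly ConcurrentDictionary<string, string> Invites =
        new ConcurrentDictionary<string, string>();

    // Called when the invitation is created: persist the email against a GUID
    // and put only the GUID in the emailed link.
    public static string CreateInviteLink(string email)
    {
        var token = Guid.NewGuid().ToString("N");
        Invites[token] = email;
        return $"http://identity.platform:7000/account/register?invite={token}";
    }

    // Overload of Register that accepts the opaque token instead of raw parameters.
    [HttpGet]
    public IActionResult Register(string invite)
    {
        if (invite == null || !Invites.TryRemove(invite, out var email))
            return NotFound();   // unknown or already-used token (one-time use)

        // Proceed with the normal registration flow using the looked-up email.
        return View("Register", (object)email);
    }
}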

How safe or dangerous is it to use the "UserAgent" of the request to identify the origin of the request?

I am developing an SMS service which sends SMS to the destination numbers using Twilio as the SMS provider. Twilio is supposed to send a POST request to my web service as and when the status of the message is updated (e.g. sent, delivered, etc.).
In order to make sure that the POST request is not sent by anyone other than Twilio, I am validating the UserAgent of the request as below.
if (((HttpRequest)request.OriginalRequest).UserAgent.StartsWith("TwilioProxy/"))
{
    return true;
}
Currently I am getting "TwilioProxy/1.0" as the UserAgent on each POST, but since I believe the version number may change in the future, I have excluded it from the validation.
Is it possible to receive a request with the same user agent (something starting with "TwilioProxy/") from any origin other than Twilio? Is it safe to rely on the UserAgent for this type of verification?
Any input/suggestions on this would be very helpful to me.
Thanks
Twilio developer evangelist here.
As the comments have mentioned, it is trivial to spoof a header, and since the UserAgent header Twilio sends is very simple, it is not safe to rely on it.
However, if you are interested in validating that requests are made by Twilio then you need to check out how we sign requests to ensure they are not malicious.
Here's how it works:
Turn on TLS on your server and configure your Twilio account to use HTTPS urls.
Twilio assembles its request to your application, including the final URL and any POST fields (if the request is a POST).
If your request is a POST, Twilio takes all the POST fields, sorts them alphabetically by their name, and concatenates the parameter name and value to the end of the URL (with no delimiter).
Twilio takes the resulting string (the full URL with query string and all POST parameters) and signs it using HMAC-SHA1 and your AuthToken as the key.
Twilio sends this signature in an HTTP header called X-Twilio-Signature
Then to verify that this X-Twilio-Signature contains a valid signature, you need to do the following in your application:
Take the full URL of the request URL you specify for your phone number or app, from the protocol (https...) through the end of the query string (everything after the ?).
If the request is a POST, sort all of the POST parameters alphabetically (using Unix-style case-sensitive sorting order).
Iterate through the sorted list of POST parameters, and append the variable name and value (with no delimiters) to the end of the URL string.
Sign the resulting string with HMAC-SHA1 using your AuthToken as the key (remember, your AuthToken's case matters!).
Base64 encode the resulting hash value.
Compare your hash to ours, submitted in the X-Twilio-Signature header. If they match, then you're good to go.
Within our official libraries, we include a request validator that can do all of this for you. There is an example of doing this in C# in the documentation.
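For reference, here is a hand-rolled sketch of those verification steps in C# (the method name and parameters are illustrative; in practice the request validator from the official helper library is the safer choice):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static bool IsValidTwilioSignature(
    string url,                              // full request URL, including the query string
    IDictionary<string, string> postParams,  // POST form fields (empty for a GET)
    string authToken,                        // your Twilio AuthToken
    string signatureHeader)                  // value of the X-Twilio-Signature header
{
    // Append the sorted POST parameter names and values to the URL, no delimiters.
    var data = new StringBuilder(url);
    foreach (var pair in postParams.OrderBy(p => p.Key, StringComparer.Ordinal))
    {
        data.Append(pair.Key).Append(pair.Value);
    }

    // HMAC-SHA1 over the assembled string, keyed with the AuthToken, then Base64.
    using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(authToken)))
    {
        var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(data.ToString()));
        var computed = Convert.ToBase64String(hash);
        return computed == signatureHeader;
    }
}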
Let me know if this helps at all.

Secure WCF REST Webservice and headers

I'm writing a secure WCF REST webservice using C#.
My code is something like this:
public class MyServiceAuthorizationManager : ServiceAuthorizationManager
{
protected override bool CheckAccessCore(OperationContext operationContext)
{
base.CheckAccessCore(operationContext);
var ctx = WebOperationContext.Current;
var apikey = ctx.IncomingRequest.Headers[HttpRequestHeader.Authorization];
var hash = ctx.IncomingRequest.Headers["Hash"];
var datetime = ctx.IncomingRequest.Headers["DateTime"];
...
I use headers (Authorization, Hash, DateTime) to carry information about the API key, the current datetime, and the hashed request URL, while the request body contains only the URL and web service parameters.
Example:
http://127.0.0.1:8081/helloto/daniele
Is this the right way, or do I have to pass and retrieve those parameters from the URL like this:
http://127.0.0.1:8081/helloto/daniele?apikey=123&datetime=20120101&hash=ddjhgf764653ydhgdhgfjiutu56
Are there differences between those two methods?
I think both methods would work for simple cases. However, if you want to make maximum use of native HTTP behaviours, you should go with the headers approach, not the URL query parameters one.
This will allow you to (for example) use HTTP response codes to indicate to client that a resource has been permanently moved (response code 301) so the client can automatically update links. If the URL included the authentication information, it is not clear to a client that two different URLs are actually referring to the same resource. In other redirect scenarios, the headers will be automatically included so you don't have to worry about appending parameters to redirect URLs.
Also, it should allow better caching behaviour on clients (if that is relevant in your scenario).
As another example, using headers would allow you to authenticate a request based just on the headers without requiring the client to send the message body. The idea is that you authenticate with the headers, then send the client an HTTP 100 Continue response. The client should not send the message body until it gets the 100. This could be an important optimisation if you are doing POSTs or PUTs with large message bodies.
There are other examples, but whether any given one is relevant depends on your scenarios and on the clients you expect to serve.
In summary, I would say it is better to make use of elements of the protocol as they were explicitly intended - this gives you the best chance of behaving as a client expects and should make your service more accessible, efficient and usable in the longer term.
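To illustrate the headers approach from the client side, here is a rough sketch with HttpClient (the Hash and DateTime header names come from the question above; how the hash and API key are actually produced is left out):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> CallServiceAsync(string apiKey, string hashOfUrl)
{
    using (var client = new HttpClient())
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "http://127.0.0.1:8081/helloto/daniele");

        // Credentials and request metadata travel in headers, not in the URL,
        // so redirects and caching behave as described in the answer above.
        request.Headers.TryAddWithoutValidation("Authorization", apiKey);
        request.Headers.TryAddWithoutValidation("Hash", hashOfUrl);
        request.Headers.TryAddWithoutValidation("DateTime", DateTime.UtcNow.ToString("yyyyMMddHHmmss"));

        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}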
Based on your implementation, your required parameters would have to be passed in the HTTP Headers of the request, which would most certainly not be on the query string.
