What does the phrase "URL reservation in HTTP.SYS" mean? - c#

I can't understand the meaning of this phrase. People on forums suggest to each other that they should reserve a URL in HTTP.sys, but what does that mean? What is it for? How does it work?
It all comes up in the context of HttpWebRequest UAC problems.

Several Win32 APIs and .NET Framework components (such as WCF) use the HTTP Server API when they need to receive HTTP requests targeted at the local machine. The HTTP Server API provides this functionality in a manner managed by the OS, without the need to deploy a standalone web server such as IIS on the machine.
At this point it's probably best to quote the Dev Center page linked above:
A reservation persistently allocates a portion of the URL namespace to
individual users allowing them to reserve or "own" that part of
namespace. Reservations give the user the right to register to service
requests for the namespace. The HTTP Server API ensures that users do
not register URLs from portions of the namespace that they do not own.
In order to ensure namespace security, ACLs (Access Control List) are
applied to the portion of the namespace reserved for each user.
Reserved namespaces are identified by URL prefix strings, formatted in
the same fashion as URL prefixes used for registrations. This means
that all the various host specifier categories are also available for
reservations.
Namespace reservations are persisted across reboots, and changes take
effect dynamically so there is no need to stop and restart the
machine.
What this means is that before the HTTP Server API allows you to listen for incoming requests to a particular URL namespace (think of it as a "URL path"), you have to register for it. Registration is performed on a user-account basis as stated above, so what matters here is the user account under which the process that wants to listen for requests runs, which may be different from the account of the currently logged-in user.
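In practical terms, a reservation is usually created once with the netsh tool from an elevated prompt, after which a process running under that account can register the prefix, for example with HttpListener. A minimal sketch; the prefix, port, and account name below are placeholders:

    // A minimal sketch: listening on a URL prefix with HttpListener.
    // Without a matching reservation (and without running elevated),
    // Start() throws HttpListenerException ("Access is denied").
    //
    // The reservation is created once from an elevated prompt, e.g.:
    //   netsh http add urlacl url=http://+:8080/MyService/ user=MYDOMAIN\myuser
    using System.Net;

    class ReservationDemo
    {
        static void Main()
        {
            var listener = new HttpListener();
            // The prefix must fall inside a namespace reserved for this account.
            listener.Prefixes.Add("http://+:8080/MyService/");
            listener.Start();

            HttpListenerContext context = listener.GetContext(); // blocks for one request
            context.Response.StatusCode = 200;
            context.Response.Close();
            listener.Stop();
        }
    }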

Related

Mitigating the risk of Server-Side Request Forgery when downloading files with the .NET Framework

Question: If I have an untrusted, user-supplied URL to a file, how do I protect myself against server-side request forgery when I download that file? Are there tools in the .NET Framework (4.8) base class library that help me, or is there some canonical reference implementation for this use case?
Details: Our web application (an online product database) allows users to upload product images. We have the requirement that users should be allowed to supply the URL to a (self-hosted) image instead of uploading an image.
So far, so good. However, sometimes our web application will have to fetch the image from the (external, user-supplied) URL to do something with it (for example, to include the image in a PDF product data sheet).
This exposes my web application to the risk of Server-Side Request Forgery. The OWASP Cheat Sheet documents this use case as "Case 2" and suggests mitigations such as validating URLs and blacklisting known internal IP addresses.
This means that I cannot use the built-in methods for downloading files such as WebClient or HttpWebRequest, since those classes take care of DNS resolution, and I need to validate IP addresses after DNS resolution but before performing the HTTP request. I could perform DNS resolution myself and then create a web request with the (validated) IP address and a custom Host header, but that might mess up TLS certificate checking.
To make a long story short, I feel like I am reinventing the wheel here, for something that sounds like a common-enough use case. (I am surely not the first web developer who has to fetch files from user-supplied URLs.) The .NET Framework has tools for protection against CSRF built-in, so I'm wondering if there are similar tools available for SSRF that I just haven't found.
Note: There are similar questions (such as this one) in the ssrf tag, but, contrary to them, my goal is not to "get rid of a warning" but to actually protect my system against SSRF.
Confirm the requirement with the business stakeholders. It's very possible they don't care how the file is obtained; they just want the user to be able to specify a URL rather than a local file. If that is the case, your application can use JavaScript to download the file in the browser and then upload it from there. This avoids the server-side problem completely.
If you have to do it server-side, ask for budget for a dedicated server. Locate it in your DMZ (between the perimeter firewall and the firewall that isolates your web servers from the rest of your network). Use this server to run a program that downloads the URLs and puts the data where your main application can get it, e.g. a database.
If you have to host it on your existing hardware, use a dedicated process, running in a dedicated application pool with a dedicated user identity. The proper location for this service is on your web server (not application or database servers).
Audit and monitor the security logs for the dedicated user.
Revoke any permission to private keys or local resources such as the filesystem.
Validate the protocol (http or https only).
To the extent possible, validate the IP address after DNS resolution, and maintain a blacklist of internal addresses (a sketch follows this list).
Validate the domain name to ensure it is a public URL and not something within your network. If possible, use a proxy server with public DNS.
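As a rough illustration of the resolve-then-validate step described above, here is a minimal sketch for the .NET Framework; the private ranges listed are illustrative, not exhaustive:

    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Sockets;

    static class SsrfGuard
    {
        // Returns true only if the scheme is http(s) and every resolved
        // address looks publicly routable. The ranges below are illustrative;
        // a real implementation should cover all reserved/internal ranges.
        public static bool IsProbablySafe(Uri url)
        {
            if (url.Scheme != Uri.UriSchemeHttp && url.Scheme != Uri.UriSchemeHttps)
                return false;

            IPAddress[] addresses = Dns.GetHostAddresses(url.DnsSafeHost);
            return addresses.Length > 0 && addresses.All(ip => !IsInternal(ip));
        }

        private static bool IsInternal(IPAddress ip)
        {
            if (IPAddress.IsLoopback(ip) || ip.IsIPv6LinkLocal || ip.IsIPv6SiteLocal)
                return true;
            if (ip.AddressFamily != AddressFamily.InterNetwork)
                return false; // simplification: other IPv6 treated as public here

            byte[] b = ip.GetAddressBytes();
            return b[0] == 10                                // 10.0.0.0/8
                || (b[0] == 172 && b[1] >= 16 && b[1] <= 31) // 172.16.0.0/12
                || (b[0] == 192 && b[1] == 168)              // 192.168.0.0/16
                || (b[0] == 169 && b[1] == 254);             // 169.254.0.0/16 (incl. cloud metadata)
        }
    }

Note that a check-then-fetch sequence is still vulnerable to DNS rebinding: the attacker's DNS server can return a different address on the second lookup, which is why the question's point about pinning the validated IP for the actual request matters.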

How to Indicate Name of Application in HTTP Request using Embedded Browser

Our application uses the Chromium Embedded Framework. We need a way to communicate to our servers within our requests that they are talking to a Chrome browser embedded in our application. Changing the user agent isn't really an option because some sites do not play well with browsers they don't recognize. I suppose we could get around this by appending the application name to the end of the default Chromium user agent header. Our server could then check whether the user agent header contains the name of our application. I'm unsure, though, whether some sites will still have an issue recognizing our application with this method. I'm also unsure whether there is a better way to indicate this, maybe through the use of a cookie or by setting a custom field on the request header?
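For what it's worth, appending a product token to the default UA is straightforward if the embedding is done through the CefSharp bindings. A sketch, assuming CefSharp is in use; the "MyApp/1.0" token and the UA string are placeholders:

    // A minimal sketch using the CefSharp bindings (assumption: the app
    // embeds CEF via CefSharp). Appending a product token keeps the UA
    // recognizable to sites while letting your server detect the embedded
    // browser by checking for "MyApp/" in the User-Agent header.
    using CefSharp;
    using CefSharp.WinForms;

    static class Bootstrapper
    {
        public static void InitBrowser()
        {
            var settings = new CefSettings();
            // "MyApp/1.0" is a placeholder product token for your application.
            settings.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) " +
                                 "AppleWebKit/537.36 (KHTML, like Gecko) " +
                                 "Chrome/124.0.0.0 Safari/537.36 MyApp/1.0";
            Cef.Initialize(settings);
        }
    }

Because the familiar Chrome product tokens still lead the string, sites that sniff the UA should keep working, while your server only needs a substring check.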

OAuth 2 with JWT in Web API with 1-N callers from the same customer

Yes, the title of this question is horrible but after 5 minutes of trying to figure out how to word it I moved on!
I am starting the design phases of a stand-alone authorization server (C#, Web API, Azure) for a software platform that is receiving a significant overhaul. I read the always great articles by Taiseer Joudeh (http://bitoftech.net/2014/06/01/token-based-authentication-asp-net-web-api-2-owin-asp-net-identity) on the subject and have the implementation mostly mapped out in my head. The external users of the authorization server (app developers who will be consuming the services from various end points) will have a portal that they can log into and manage their accounts, white list IP addresses, select their API version (similar to how Stripe does it) and other various administrative tasks.
What has me hung up are the two different real-world scenarios (they happen today) where a caller has more than one unit on their account accessing the API and processing transactions. Below is a summary of the current requirements (these could change; nothing is set in stone if there is a good argument I have not considered) as well as the two scenarios I am struggling with.
Question: Do I allow my authorized customers (customers with access to an account on the auth server) to define 1-N "apps" that will each have their own refresh/auth tokens...
Facts:
Many of my clients run multiple servers which process transactions through the platform. These may be dedicated on-prem servers, worker roles in Azure, or anything in between.
The implementation must support long periods of time between API calls without the need to fully reauthenticate. Some clients make calls once a month.
I like Box's implementation: Refresh tokens are valid for 60 days, access tokens are valid for 1 hour. Each new access token provides a new refresh token.
The customers within the portal will have to "whitelist" their IP addresses and new access tokens (and with them new refresh tokens) will only be handed out to a request coming from one of their whitelisted IPs.
The token must contain the version number of the application for which the token is being requested, for use in the destination app (controller selection); a sketch follows this list. See SO for more details.
Every "refresh" of the access token issues a new refresh token.
The refresh tokens are not "burned" until the first successful call with the new access token (in case the new access token is lost in transit, the client can obtain a new access token with the previous refresh token).
I have not committed to this yet, but it seems like a value-add with little downside given the whitelisted IP addresses.
Access tokens will have a very short life (15 mins). This is both for security purposes as well as API version migrations since the version will be in the token.
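As a sketch of how the version requirement in the list above could be carried in the token, here is a minimal example using the System.IdentityModel.Tokens.Jwt NuGet package; the claim name "api_version", the issuer/audience URLs, and the key handling are placeholder assumptions:

    using System;
    using System.Collections.Generic;
    using System.IdentityModel.Tokens.Jwt;
    using System.Security.Claims;
    using Microsoft.IdentityModel.Tokens;

    static class TokenIssuer
    {
        public static string IssueAccessToken(string clientId, string apiVersion, byte[] signingKey)
        {
            var claims = new List<Claim>
            {
                new Claim("client_id", clientId),
                new Claim("api_version", apiVersion) // drives controller selection downstream
            };

            var token = new JwtSecurityToken(
                issuer: "https://auth.example.com",      // placeholder
                audience: "https://api.example.com",     // placeholder
                claims: claims,
                notBefore: DateTime.UtcNow,
                expires: DateTime.UtcNow.AddMinutes(15), // short-lived, per the facts above
                signingCredentials: new SigningCredentials(
                    new SymmetricSecurityKey(signingKey),
                    SecurityAlgorithms.HmacSha256));

            return new JwtSecurityTokenHandler().WriteToken(token);
        }
    }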
Concerns:
Two different configurations in the multiple-caller scenario:
On-Prem Servers: To me it makes complete sense in this scenario to allow the customer, within the authorization server portal, to define 1-N callers, each representing one of their servers that will be making requests. In this scenario there is no shared cache, and it would be very difficult (if not impossible) to share a refresh/access token. What would happen the first time multiple servers received a 401?
Worker Role (Azure): In contrast to the on-prem server config, it would be very difficult to support multiple refresh/access tokens without some config-file wizardry where each scalable unit was aware of its place in line (i.e. which instance number it happened to be). In this scenario the customer would have access to a shared cache and could implement a concurrent collection or the like to keep track of the current refresh and access token, and the first caller to hit a 401 could block the others while the new access token was obtained (see the sketch below).
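For the shared-cache case, the "first caller to hit a 401 refreshes while the others wait" idea might look roughly like this; the names and the token-endpoint call are hypothetical:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class TokenCache
    {
        private readonly SemaphoreSlim _refreshLock = new SemaphoreSlim(1, 1);
        private string _accessToken;

        // Called by an agent whose request just failed with 401.
        public async Task<string> GetTokenAfter401Async(string failedToken)
        {
            await _refreshLock.WaitAsync();
            try
            {
                // If another caller already refreshed while we waited, reuse its token.
                if (_accessToken != failedToken)
                    return _accessToken;

                _accessToken = await RequestNewTokensAsync(); // hypothetical token-endpoint call
                return _accessToken;
            }
            finally
            {
                _refreshLock.Release();
            }
        }

        private Task<string> RequestNewTokensAsync()
        {
            // Placeholder: exchange the current refresh token at the token endpoint.
            return Task.FromResult(Guid.NewGuid().ToString("N"));
        }
    }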
* Update based on Brice's Answer *
With respect to client/application credentials, there are in fact both, but I wanted to clarify my current plans, which now include Thinktecture. I used it back in the WS-Fed (v2, I believe) days and have not looked at it for OAuth2, but I will.
Each customer that has been granted access to the API (think of these as the parent accounts who then have many child accounts under them who do not have API access) will have a login to the API portal (likely Identity 2.0 based w/an MVC or Angular front end). In this portal they will be able to manage their account (set API version for a particular application amongst other things). Here is where they will be able to "create" application credentials assuming that dynamic registration is not implemented. They will not be using the Identity 2.0 portal account creds for their clientID. I will most likely be generating these credentials dynamically (clientId and secret) with some strong random password generation code.
The heart of my initial question, which you touched on, was whether I should allow the creation of one and only one client ID/secret and not enforce a one-active-refresh/auth-token-per-clientID constraint, OR allow customers to create 1-N client ID/secret pairs with a 1:1 relationship between each pair and its tokens.
It is very unlikely that I would ever need multiple different permissions/roles for a specific customer against a specific application. In other words, if CustomerA has the ability to call POST against a specific URI, that will always be true, and if it changes it will change for that customer across the board. As a result, my plan was to set application-specific permissions at the customer level (the Identity 2.0 portal account I referred to above), which creates an abstraction of sorts if I go with 1-N clientID/secrets under the account. Each of those clientID/secrets would inherit those permissions.
I am ashamed to admit that, as I started crawling down the proverbial rabbit hole, I neglected to consider the possibility of allowing a 1:Many relationship between the clientID/secret and active refresh/auth tokens. I had it in my head that they would be tied together and would only ever exist as a pair. In that scenario, if a customer had three active agents running against the API, the third one to obtain an auth token (and, as a result, a new refresh token) would break the ability of the other two agents to obtain a new auth token without fully reauthenticating, since agent 3's refresh token was the last one granted.
In the 1:Many scenario, each and every authenticating agent coming from a whitelisted IP and using the clientID/secret would get a valid refresh/auth token for its own use. This really solves all of the problems I described, but it reduces visibility a little into "who" the caller is, or how many callers there really are if they are behind a load balancer.
Of course the balance I am trying to strike is that of solid security coupled with a relatively painless implementation for the end user who today is familiar with a key-value-pair in a json object implementation for authentication and authorization.
Having developed a similar architecture I can see you supporting one of two scenarios (or possibly both):
Allow customers to share application/client credentials across multiple instances (servers or worker roles) of the application.
Require customers to use a unique set of application/client credentials for each instance of an installed application.
You don't mention use of client- or application-level credentials, but I am assuming these are in use rather than supporting unregistered clients. In other words, when a customer registers their application (or application instance) and supplies their IP addresses, you return a unique client ID (and often a client secret). This client ID and secret are then used by the customer's application to request an access token and refresh token combination.
An access token and refresh token combination should be unique to each application instance. It is also acceptable (and somewhat common) for a single application instance to request multiple active access/refresh tokens. An example is when you have two API endpoints that require different authorization levels: the application could request two access/refresh tokens, each with a different set of scope values (authorization), to be used with those endpoints. In your scenario, each application instance (server or worker role) would request its own access/refresh token combination and independently recycle it. The more interesting question, in my opinion, is whether these application instances share the same client credentials or use a unique set.
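For reference, the recycling itself is the standard OAuth2 refresh-token grant (RFC 6749, section 6); a minimal sketch with a placeholder token endpoint:

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class RefreshGrant
    {
        public static async Task<string> RefreshAsync(
            HttpClient http, string clientId, string clientSecret, string refreshToken)
        {
            // Standard refresh_token grant; the endpoint URL is a placeholder.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "refresh_token",
                ["refresh_token"] = refreshToken,
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret
            });

            HttpResponseMessage response =
                await http.PostAsync("https://auth.example.com/token", form);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync(); // JSON with new access + refresh token
        }
    }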
The OAuth 2.0 Threat Model encourages use of installation specific client secrets (see https://www.rfc-editor.org/rfc/rfc6819#section-5.2.3.4). It uses the wording "An authorization server may issue separate client identifiers and corresponding secrets to the different installations of a particular client" (emphasis mine). However, I have found in practice that there is little to stop a client from reusing client credentials for multiple instances with native and mobile applications.
With the additional protection of whitelisting customer IP addresses, I think you would be fine without using unique client credentials per application instance. That being said, you should fully consider all threats unique to your system that may attempt to exploit this as a weakness. One possible way to address it would be to use a dynamic client registration process to automatically issue client credentials for each new application instance. See OpenID Connect Dynamic Client Registration 1.0 (http://openid.net/specs/openid-connect-registration-1_0.html) or the draft OAuth 2.0 Dynamic Client Registration Protocol (https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dyn-reg-21) as examples. Again, it may be difficult to enforce that a customer use dynamic client registration rather than simply reusing a single set of application/client credentials.
In closing, may I suggest rather than designing an OAuth2 authorization server from scratch that you look at the excellent open source Thinktecture IdentityServer (https://github.com/thinktecture/Thinktecture.IdentityServer.v3) (no affiliation) platform as an extensible starting point.

Securing a Subscription WCF Webservice

I'm working on a subscription data-delivery web service using C# and WCF. Customers will sign up to use the service at different usage levels for a monthly fee. The project requirements call for the service to be accessible from a web app hosted on the same server, from a desktop app and a Windows service distributed to customers, and from a WordPress plugin. In the future, support may be added for other CMS systems and mobile (Apple/Android) apps.
The security requirements include a standard user ID and password authentication for each call to the service to verify subscription status and type and to track user activity. That's easy enough to do but there's more and that's what I'm looking for advice on.
First of all, there's the need to track IP addresses and use this information to control access. One part of this is to restrict the number of different IP addresses the service can be called from within a specific time period, per ID and subscription type. The second part is to prevent access to the service from certain countries entirely. I've read some other answers here about how to implement IP address detection/tracking in general, but I am more concerned about the potential difficulties with this that others have encountered. What should we watch out for here?
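On the mechanics, WCF exposes the caller's address through RemoteEndpointMessageProperty; a minimal sketch, with the usual caveat that behind a load balancer or reverse proxy this is the proxy's address, not the end user's, which is one of the difficulties to watch out for:

    using System.ServiceModel;
    using System.ServiceModel.Channels;

    static class CallerInfo
    {
        // Call from inside a WCF operation to get the remote caller's address.
        public static string GetCallerIp()
        {
            MessageProperties properties = OperationContext.Current.IncomingMessageProperties;
            var endpoint = (RemoteEndpointMessageProperty)properties[RemoteEndpointMessageProperty.Name];
            return endpoint.Address; // e.g. "203.0.113.17"
        }
    }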
The second major security requirement is to restrict access to the service to our provided desktop/service applications or to authorized domains using our CMS plugins. I'm not sure how this can be implemented other than by using some sort of authentication token, which of course could be easily hacked. Perhaps in combination with the login and IP address requirements this will be enough, though. Are there any alternative methods that might be a better approach to take?
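One technique sometimes used for this (an assumption worth weighing, not something from the question) is to ship each distributed application a secret and have it sign requests with an HMAC. It raises the bar, though as noted above, a secret embedded in a distributed desktop app can ultimately be extracted. A sketch; the canonical-string layout is an illustrative choice, not a standard:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class RequestSigner
    {
        // Signs method + path + timestamp; the server recomputes and compares,
        // and a timestamp check limits replay of captured signatures.
        public static string Sign(string secret, string method, string path, string timestamp)
        {
            string canonical = method + "\n" + path + "\n" + timestamp;
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
            {
                byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                return Convert.ToBase64String(hash);
            }
        }
    }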

how to handle http requests from a browser using c#

I have a Windows application developed in C#/.NET that is used as a website blocker for a network. I have done this by modifying the hosts file. It works fine when URLs like "www.yahoo.com" are blocked. Now my requirement is to block URLs based on keywords, i.e. when the user just types "yahoo" in the browser, I should detect the keyword and block the corresponding website. How can I track the website typed by the user in the browser and block or allow the user access to a particular site based on the URL? I should not allow the user to view the page if the keyword is present. How can I do this? Can someone help me?
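For context, the hosts-file approach described in the question amounts to appending entries of the form "127.0.0.1 hostname". A minimal sketch (it must run elevated, and it only blocks exact host names, which is why keyword-based blocking pushes you toward a proxy, as the answer below explains):

    using System;
    using System.IO;

    class HostsBlocker
    {
        // Appends a blocking entry for one exact host name to the hosts file.
        static void Block(string hostName)
        {
            string hostsPath = Path.Combine(
                Environment.SystemDirectory, @"drivers\etc\hosts");
            File.AppendAllText(hostsPath, "\r\n127.0.0.1 " + hostName);
        }
    }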
There are plenty of code samples out there that will act as proxies (e.g. http://code.cheesydesign.com/?p=393), but I would strongly suggest following the advice in the comments you've been given and going with an existing application.
Building a proxy that will not interfere with the complicated web apps of today is not trivial. You also need to be careful about blocking based on keywords: web apps I've worked on have failed in spectacular ways because proxies doing this rejected requests for important JavaScript files (often due to minification or compression), rendering our app useless.
Also consider that your proxy won't be able to inspect SSL traffic (which is increasing all the time) without serving up your own certs and acting as a man-in-the-middle.
