I have an odd issue that I am trying to track down.
If I deploy my client and Identity Server to Azure, using a self-signed certificate, then the code works.
I have now moved it to our UAT environment, where the identity server is configured to use a purchased certificate. This certificate has been provided for a single domain: identity.mydomain.com.
The client has the password for this certificate so it can do what it needs to.
When I browse to the identity server I can log in to the admin section, so that is all running correctly. If I browse to the client, it redirects to the identity service where I can log in. But as soon as I log in and am redirected back to my website, I get the following error:
Bad Request - Request Too Long
HTTP Error 400. The size of the request headers is too long.
Looking at the cookies, I can see a whole load of cookies created. I have deleted those and restarted, but I still have the same issue.
If I increase the size of the buffers by using:
<httpRuntime maxRequestLength="2097151" executionTimeout="2097151" />
Then it works, but I am concerned that I am masking a problem rather than fixing it.
Has anyone else had to do this to get IdentityServer to work on IIS?
I've had this issue recently. The solution was to downgrade the NuGet package Microsoft.Owin.Security.OpenIdConnect: I was using 3.0.1, and you must downgrade to 3.0.0. This is an issue with the Owin/Katana middleware. A description of the issue can be found here. Note that the page states how to fix the actual issue in the library. I haven't tried that; it could also work and is worth a try.
Note that you must clear your cookies the first time you redeploy with the fix in place. As a temporary fix, you can always clear your cookies and just visit the site again. At some point, however, it will always stick a bunch of nonce strings in the cookie. A similar issue can be found here.
What solved the problem for me was using AdamDotNet's Custom OpenIdConnectAuthenticationHandler to delete old nonce cookies.
using System.Collections.Generic;
using System.Linq;
using Microsoft.IdentityModel.Protocols;
using Microsoft.Owin.Security.OpenIdConnect;

public static class OpenIdConnectAuthenticationPatchedMiddlewareExtension
{
    public static Owin.IAppBuilder UseOpenIdConnectAuthenticationPatched(this Owin.IAppBuilder app, Microsoft.Owin.Security.OpenIdConnect.OpenIdConnectAuthenticationOptions openIdConnectOptions)
    {
        if (app == null)
        {
            throw new System.ArgumentNullException("app");
        }
        if (openIdConnectOptions == null)
        {
            throw new System.ArgumentNullException("openIdConnectOptions");
        }
        System.Type type = typeof(OpenIdConnectAuthenticationPatchedMiddleware);
        object[] objArray = new object[] { app, openIdConnectOptions };
        return app.Use(type, objArray);
    }
}

/// <summary>
/// Patched to fix the issue with too many nonce cookies described here: https://github.com/IdentityServer/IdentityServer3/issues/1124
/// Deletes all nonce cookies that weren't the current one.
/// </summary>
public class OpenIdConnectAuthenticationPatchedMiddleware : OpenIdConnectAuthenticationMiddleware
{
    private readonly Microsoft.Owin.Logging.ILogger _logger;

    public OpenIdConnectAuthenticationPatchedMiddleware(Microsoft.Owin.OwinMiddleware next, Owin.IAppBuilder app, Microsoft.Owin.Security.OpenIdConnect.OpenIdConnectAuthenticationOptions options)
        : base(next, app, options)
    {
        this._logger = Microsoft.Owin.Logging.AppBuilderLoggerExtensions.CreateLogger<OpenIdConnectAuthenticationPatchedMiddleware>(app);
    }

    protected override Microsoft.Owin.Security.Infrastructure.AuthenticationHandler<OpenIdConnectAuthenticationOptions> CreateHandler()
    {
        return new SawtoothOpenIdConnectAuthenticationHandler(_logger);
    }

    public class SawtoothOpenIdConnectAuthenticationHandler : OpenIdConnectAuthenticationHandler
    {
        public SawtoothOpenIdConnectAuthenticationHandler(Microsoft.Owin.Logging.ILogger logger)
            : base(logger) { }

        protected override void RememberNonce(OpenIdConnectMessage message, string nonce)
        {
            // Delete every previously issued nonce cookie before remembering the new one,
            // so the cookies cannot pile up and blow the request header size limit.
            var oldNonces = Request.Cookies.Where(kvp => kvp.Key.StartsWith(OpenIdConnectAuthenticationDefaults.CookiePrefix + "nonce"));
            if (oldNonces.Any())
            {
                Microsoft.Owin.CookieOptions cookieOptions = new Microsoft.Owin.CookieOptions
                {
                    HttpOnly = true,
                    Secure = Request.IsSecure
                };
                foreach (KeyValuePair<string, string> oldNonce in oldNonces)
                {
                    Response.Cookies.Delete(oldNonce.Key, cookieOptions);
                }
            }
            base.RememberNonce(message, nonce);
        }
    }
}
And use:
app.UseOpenIdConnectAuthenticationPatched(new OpenIdConnectAuthenticationOptions(){...});
As detailed here:
https://github.com/IdentityServer/IdentityServer3/issues/1124#issuecomment-226519073
Just clearing cookies worked for me. It is the easiest answer to try first.
I am trying to push a commit I made on my local repository to a remote counterpart, hosted on a private Azure DevOps server, using LibGit2Sharp programmatically.
As per the Azure documentation, the HTTPS OAuth enabled Personal Access Token needs to be sent with the request in a custom Authorization header, as 'Basic' with the Base64-encoded token:
var personalaccesstoken = "PATFROMWEB";

using (HttpClient client = new HttpClient()) {
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
        Convert.ToBase64String(Encoding.ASCII.GetBytes($":{personalaccesstoken}")));

    using (HttpResponseMessage response = client.GetAsync(
               "https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=5.0").Result) {
        response.EnsureSuccessStatusCode();
    }
}
The LibGit2Sharp.CloneOptions class has a FetchOptions field which in turn has a CustomHeaders array that can be used to inject the authentication header during the clone operation, like the following (as mentioned in this issue):
CloneOptions cloneOptions = new() {
    CredentialsProvider = (url, usernameFromUrl, types) => new UsernamePasswordCredentials {
        Username = $"{USERNAME}",
        Password = $"{ACCESSTOKEN}"
    },
    FetchOptions = new FetchOptions {
        CustomHeaders = new[] {
            $"Authorization: Basic {encodedToken}"
        }
    }
};

Repository.Clone(AzureUrl, LocalDirectory, cloneOptions);
And the clone process succeeds (I tested it as well as checked the source code :))
However, the LibGit2Sharp.PushOptions does not have any such mechanism to inject authentication headers. I am limited to the following code:
PushOptions pushOptions = new()
{
    CredentialsProvider = (url, usernameFromUrl, types) => new UsernamePasswordCredentials
    {
        Username = $"{USERNAME}",
        Password = $"{PASSWORD}"
    }
};
This is making my push operation fail with the following message:
Too many redirects or authentication replays
I checked the source code for Repository.Network.Push() on GitHub:
public virtual void Push(Remote remote, IEnumerable<string> pushRefSpecs, PushOptions pushOptions)
{
    Ensure.ArgumentNotNull(remote, "remote");
    Ensure.ArgumentNotNull(pushRefSpecs, "pushRefSpecs");

    // Return early if there is nothing to push.
    if (!pushRefSpecs.Any())
    {
        return;
    }

    if (pushOptions == null)
    {
        pushOptions = new PushOptions();
    }

    // Load the remote.
    using (RemoteHandle remoteHandle = Proxy.git_remote_lookup(repository.Handle, remote.Name, true))
    {
        var callbacks = new RemoteCallbacks(pushOptions);
        GitRemoteCallbacks gitCallbacks = callbacks.GenerateCallbacks();
        Proxy.git_remote_push(remoteHandle,
                              pushRefSpecs,
                              new GitPushOptions()
                              {
                                  PackbuilderDegreeOfParallelism = pushOptions.PackbuilderDegreeOfParallelism,
                                  RemoteCallbacks = gitCallbacks,
                                  ProxyOptions = new GitProxyOptions { Version = 1 },
                              });
    }
}
As we can see above, the Proxy.git_remote_push method call inside the Push() method is passing a new GitPushOptions object, which indeed seems to have a CustomHeaders field implemented. But it is not exposed to a consumer application and is being instantiated in the library code directly!
It is an absolute necessity for me to use the LibGit2Sharp API, and our end-to-end testing needs to be done on Azure DevOps repositories, so this issue is blocking me from progressing further.
My questions are:
Is it possible to use some other way to authenticate a push operation on Azure from LibGit2Sharp? Can we leverage the PushOptions.CredentialsProvider handler so that it is compatible with the authentication method that Azure insists on?
Can we cache the credentials by calling Commands.Fetch with the header injected in a FetchOptions object before carrying out the Push command? I tried it, but it fails with the same error.
To address the issue, is there a modification required in the library to make it compatible with Azure Repos? If yes, I can step up and contribute if someone could give me pointers on how the binding to the native code is made :)
I will provide an answer to my own question, as we have fixed the problem.
The solution is really simple: I just needed to remove the CredentialsProvider delegate from the PushOptions object, that is:
var pushOptions = new PushOptions();
instead of:
PushOptions pushOptions = new()
{
    CredentialsProvider = (url, usernameFromUrl, types) => new UsernamePasswordCredentials
    {
        Username = $"{USERNAME}",
        Password = $"{PASSWORD}"
    }
};
¯\_(ツ)_/¯
I don't know why it works, but it does. (Maybe some folks from Azure can clarify it for us.)
It turns out that this works on Windows (push options with no credentials provider), perhaps because somewhere in a native call the OS resolves the credentials by some other means. But in a Linux/container environment, the issue persists:
"There was a problem pushing the repo: remote authentication required but no callback set"
I think, as you mentioned, at minimum the CustomHeaders implementation must be exposed for this to work.
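For what it's worth, here is a hypothetical sketch of what exposing it might look like, modeled on how FetchOptions already surfaces CustomHeaders. This is not the actual LibGit2Sharp API, just an illustration of the change being discussed:

// Hypothetical addition to LibGit2Sharp.PushOptions, mirroring FetchOptions.CustomHeaders:
public sealed class PushOptions
{
    // ... existing members (CredentialsProvider, PackbuilderDegreeOfParallelism, ...) ...

    // Hypothetical: extra headers to send with the push request,
    // e.g. "Authorization: Basic <base64 PAT>".
    public string[] CustomHeaders { get; set; }
}

// Inside Repository.Network.Push(), the headers would then be forwarded
// to the native options instead of being dropped:
new GitPushOptions()
{
    PackbuilderDegreeOfParallelism = pushOptions.PackbuilderDegreeOfParallelism,
    RemoteCallbacks = gitCallbacks,
    ProxyOptions = new GitProxyOptions { Version = 1 },
    // hypothetical: marshal pushOptions.CustomHeaders into the native
    // GitPushOptions.CustomHeaders field, the same way the fetch path
    // marshals FetchOptions.CustomHeaders
};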
I've tried searching for my problems but nothing seems to ask what I wanted to ask.
I'm working on a web service that generates and sends a kind of token from server to client; currently I'm using Glav CacheAdapter (the web cache kind).
When someone requests a data call, the server generates a token, saves it to a cache, and sends the key to the client. The client then sends the same token back to the server, where it should be checked against the one in the cache. Somehow, when the server generates the key, it successfully creates and saves one (I verified this while debugging), but when the client call sends the (same) token back, the cache does not contain any data.
>>>> Project A
>> Service
public string Generate()
{
    AppServices.Cache.InnerCache.Add($"AuthenticationTokenCache:{xxx}", DateTime.Now.AddDays(1), new StringValue() { Value = xxx });
    return key;
}

public bool Validate(string token)
{
    return AppServices.Cache.InnerCache.Get<StringValue>($"AuthenticationTokenCache:{xxx}") != null;
}

>> WebAPI
public bool CallValidate(string token)
{
    var xService = new Service();
    return xService.Validate(token);
}

>>>> Project B
>> WebAPI
protected override bool RequestValidation(string token)
{
    var client = new HttpClient();
    var authURL = $"/api/CallValidate?token={token}";
    var response = client.GetAsync(authURL).Result.Content;
    string jsonContent = response.ReadAsStringAsync().Result;
    var authResult = JsonConvert.DeserializeObject<bool>(jsonContent);
    if (authResult)
    {
        return true;
    }
    return false;
}
Is the cache type I use wrong, or is there something wrong that I don't realize?
And when I create a new instance of the same class, does the cache get shared between those objects or not?
I'm not really sure about the details of how caching works; any pointers to reference reading material would be helpful too.
Thank you.
Background
I have a web API server (ASP.NET Core v2.1) that serves some basic operations, like managing entities on the server. This is the interface:
[HttpPost]
[Route("create")]
public async Task<ActionResult<NewEntityResponse>> Create(CreateEntityModel model)
{
    // 1) Validate the request.
    // 2) Create a new row on the database.
    // 3) Return the new entity in response.
}
The user invokes this REST method in this way:
POST https://example.com/create
Content-Type: application/json
{
    "firstName": "Michael",
    "lastName": "Jorden"
}
And gets a response like this:
Status 200
{
    "id": "123456" // The newly created entity id
}
The Problem
When sending thousands of requests like this, at some point a request will fail because of the network. When a connection fails, it can leave us in two different situations:
The network call ended on the way to the server. In this case the server doesn't know about the request, so the entity wasn't created, and the user just has to send the same message again.
The response was sent from the server back to the client but never reached its destination. In this case the request was fulfilled completely, but the client isn't aware of it. The natural reaction is to send the same request again, but that creates the same entity twice, and this is the problem.
The Requested Solution
I want to create a generic solution for Web API that "remembers" which commands it has already completed. If it gets the same request twice, it returns HTTP status code Conflict.
Where I got so far
I thought of giving the client the option to add a unique id to the request, in this way:
POST https://example.com/create?call-id=XXX
Add a filter to my server that checks whether the key XXX has already been fulfilled. If yes, return Conflict; otherwise, continue.
Add another server filter that checks the response of the method and marks the request as "completed" for further checks.
The problem with this solution is concurrent calls. If my method takes 5 seconds to return and the client sends the same message again after 1 second, it will create two entities with the same data (see the sketch below).
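For that concurrency gap, the check and the reservation have to be one atomic operation, so that of two concurrent requests with the same call-id exactly one wins. A minimal single-server sketch of the idea; the names here are mine, not from any library, and a multi-server setup would need the same atomicity in the shared store:

using System.Collections.Concurrent;

public static class CallRegistry
{
    // call-id -> serialized response (empty string = request still in flight)
    private static readonly ConcurrentDictionary<string, string> Seen =
        new ConcurrentDictionary<string, string>();

    // TryAdd is atomic: it returns true only for the first caller with this id,
    // so a duplicate request loses the race instead of slipping past the check.
    public static bool TryReserve(string callId) => Seen.TryAdd(callId, string.Empty);

    public static void StoreResponse(string callId, string responseBody) =>
        Seen[callId] = responseBody;
}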
The Questions:
Do you think this is a good approach to solving this problem?
Are you familiar with ready-to-use solutions that do this?
How do I solve my "concurrency" problem?
Any other tips would be great!
Thanks.
Isn't the easiest solution to make the REST action idempotent?
What I mean by that: the call should check whether the resource already exists, and either create a new resource if it doesn't or return the existing one if it does. A minimal sketch of that idea follows.
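For illustration, here is a hedged sketch of that suggestion applied to the Create action from the question; the repository and its methods are hypothetical placeholders:

[HttpPost]
[Route("create")]
public async Task<ActionResult<NewEntityResponse>> Create(CreateEntityModel model)
{
    // Hypothetical lookup: treat the entity's natural key (here: the name)
    // as its identity, so repeating the call cannot create a duplicate.
    var existing = await _repository.FindByNameAsync(model.FirstName, model.LastName);
    if (existing != null)
    {
        // Idempotent: the retry gets the same response as the original call.
        return Ok(new NewEntityResponse { Id = existing.Id });
    }

    var created = await _repository.CreateAsync(model);
    return Ok(new NewEntityResponse { Id = created.Id });
}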
OK, I just figured out how to make it right, so I implemented it myself and I am sharing it with you.
In order to sync all requests between different servers, I used Redis as the cache service. If you have only one server, you can use a Dictionary<string, string> instead.
This filter does the following:
Before processing the request: add a new empty key to Redis.
After the server has processed the request: store the server response in Redis. This data is used when the user asks again with the same request id.
public class ConflictsFilter : ActionFilterAttribute
{
    const string CONFLICT_KEY_NAME = "conflict-checker";
    static readonly TimeSpan EXPIRE_AFTER = TimeSpan.FromMinutes(30);

    private static bool ShouldCheck(ActionDescriptor actionDescription, IQueryCollection queries)
    {
        return queries.ContainsKey(CONFLICT_KEY_NAME);
    }

    private string BuildKey(string uid, string requestId)
    {
        return $"{uid}_{requestId}";
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (ShouldCheck(context.ActionDescriptor, context.HttpContext.Request.Query))
        {
            using (var client = RedisConnectionPool.ConnectionPool.GetClient())
            {
                string key = BuildKey(context.HttpContext.User.GetId(), context.HttpContext.Request.Query[CONFLICT_KEY_NAME]);
                string existing = client.Get<string>(key);
                if (existing != null)
                {
                    // The request id is already known: short-circuit with 409
                    // and return the stored response body.
                    var conflict = new ContentResult();
                    conflict.Content = existing;
                    conflict.ContentType = "application/json";
                    conflict.StatusCode = 409;
                    context.Result = conflict;
                    return;
                }
                else
                {
                    // First time we see this request id: reserve it with an empty value.
                    client.Set(key, string.Empty, EXPIRE_AFTER);
                }
            }
        }
        base.OnActionExecuting(context);
    }

    public override void OnResultExecuted(ResultExecutedContext context)
    {
        base.OnResultExecuted(context);
        if (ShouldCheck(context.ActionDescriptor, context.HttpContext.Request.Query) && context.HttpContext.Response.StatusCode == 200)
        {
            string key = BuildKey(context.HttpContext.User.GetId(), context.HttpContext.Request.Query[CONFLICT_KEY_NAME]);
            using (var client = RedisConnectionPool.ConnectionPool.GetClient())
            {
                var responseBody = string.Empty;
                if (context.Result is ObjectResult)
                {
                    // Store the serialized response so a duplicate request can return it.
                    ObjectResult result = context.Result as ObjectResult;
                    responseBody = JsonConvert.SerializeObject(result.Value);
                }
                if (responseBody != string.Empty)
                    client.Set(key, responseBody, EXPIRE_AFTER);
            }
        }
    }
}
The code is executed only if the query string ?conflict-checker=XXX is present.
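One caveat, tying back to the concurrency question: the Get followed by Set in OnActionExecuting is not atomic, so two concurrent requests with the same id can both see no existing key and both run the action. If your Redis client supports it, the reservation can be made atomic with SET NX (set only if the key does not exist). A sketch using StackExchange.Redis, which is an assumption here and not necessarily the client used above:

// SET key value NX EX: atomically claims the key only if it does not exist yet.
IDatabase db = connectionMultiplexer.GetDatabase();
bool reserved = db.StringSet(key, string.Empty, expiry: EXPIRE_AFTER, when: When.NotExists);
if (!reserved)
{
    // Another request already claimed this id: return 409 Conflict immediately.
}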
This code is provided to you under the MIT license.
Enjoy the ride :)
TL;DR: I am grasping at straws here; has anybody got SSO with CefSharp working and can point me to what I am doing wrong? I try to connect to an SSL SSO page through CefSharp but it won't work; neither does it in the Chrome browser. With IE it just works. I added the URL to the trusted sites (Proxy/Security), I tried to whitelist-policy the URL for Chrome in the registry, and I tried different CefSharp settings. Nothing helped.
I am trying (to no avail) to connect to an SSO-enabled page via CefSharp offscreen browsing.
Browsing with normal IE it just works:
I get a 302 answer
the redirected site gives me a 401 (Unauthorized) with NTLM, Negotiate
IE automagically sends the NTLM auth and receives an NTLM WWW-Authenticate
after some more 302s it ends in 200 and a logged-in state on the website
Browsing with Chrome 69.0.3497.100 fails:
I guess this is probably due to the fact that the web server is set up on a co-worker's PC and uses a self-signed cert.
F12 debugging in IE/Chrome:
In IE I see a 302, followed by two 401 answers, and I end on the logged-in site.
In Chrome I see only 302 and 200 answers and end on the "fallback" login site for user/PW entry.
The main difference in (one of the 302) request headers is NEGOTIATE vs NTLM:
// IE:
Authorization: NTLM TlRMT***==
// Chrome:
Authorization: Negotiate TlRMT***==
Upgrade-Insecure-Requests: 1
DNT: 1
No luck connecting through CefSharp so far; I simply land in its RequestHandler.GetAuthCredentials(). I do not want to pass any credentials with that.
What I tried to get it working inside Windows / Chrome:
installed the self-signed cert as "trusted certificate authorities"
added the co-worker's host to the Windows Internet proxy settings as a trusted site
added the co-worker's host to the Software\Policies\Google\Chrome\ registry as per
https://dev.chromium.org/administrators/policy-list-3#AuthServerWhitelist
https://dev.chromium.org/administrators/policy-list-3#AuthNegotiateDelegateWhitelist
which all in all did nothing: I still do not get any SSO using Chrome.
What I tried to get it working inside CefSharp:
deriving from CefSharp.Handler.DefaultRequestHandler, overriding:
OnSelectClientCertificate -> never gets called
OnCertificateError -> no longer gets called
GetAuthCredentials -> gets called, but I do not want to pass login credentials this way; I already have a working solution for the http:// case when calling the site's normal login page.
providing a settings object to Cef.Initialize(...) that contains:
var settings = new CefSettings { IgnoreCertificateErrors = true, ... };
settings.CefCommandLineArgs.Add ("auth-server-whitelist", "*host-url*");
settings.CefCommandLineArgs.Add ("auth-delegate-whitelist", "*host-url*");
on creation of the browser, providing a RequestContext:
var browser = new CefSharp.OffScreen.ChromiumWebBrowser (
    "", requestContext: CreateNewRequestContext (webContext.Connection.Name));

CefSharp.RequestContext CreateNewRequestContext (string connName)
{
    var subDirName = Helper.Files.FileHelper.MakeValidFileSystemName (connName);
    var contextSettings = new RequestContextSettings
    {
        PersistSessionCookies = false,
        PersistUserPreferences = false,
        CachePath = Path.Combine (Cef.GetGlobalRequestContext ().CachePath, subDirName),
        IgnoreCertificateErrors = true,
    };
    // ...
    return new CefSharp.RequestContext (contextSettings);
}
I am aware that some of those changes are redundant (e.g. three ways to set whitelists, of which at least two should work for CefSharp; not sure about the registry one affecting it) and, in the case of IgnoreCertificateErrors, dangerous, so it can't stay in. I just want it to work somehow, and then trim back what is needed to make it work in production.
Research:
https://learn.microsoft.com/en-us/windows/desktop/SecAuthN/microsoft-ntlm
https://www.chromium.org/developers/design-documents/http-authentication
https://www.magpcss.org/ceforum/viewtopic.php?f=6&t=11085 leading to
https://bitbucket.org/chromiumembedded/cef/issues/1150/ntlm-authentication-issue (fixed 2y ago)
https://sysadminspot.com/windows/google-chrome-and-ntlm-auto-logon-using-windows-authentication/
https://productforums.google.com/forum/#!msg/chrome/1594XUaOVKY/8ChGCBrwYUYJ
and others... still none the wiser.
Question: I am grasping at straws here; has anybody got SSO with CefSharp working and can point me to what I am doing wrong?
TL;DR: I faced (at least) two problems: invalid SSL certificates and Kerberos token problems. My test setup has local computers set up with a web server I call into. These local computers are mostly Windows client OS VMs with self-signed certificates. Some are Windows servers. The latter worked, the former did not. With IE both worked.
Browsing to the site in question using https://... leads CefSharp to encounter the self-signed certificate (which is not part of a trusted chain of certs); it therefore calls the browser's RequestHandler (if set), specifically its
public override bool OnCertificateError (IWebBrowser browserControl, IBrowser browser,
                                         CefErrorCode errorCode, string requestUrl,
                                         ISslInfo sslInfo, IRequestCallback callback)
{
    Log.Logger.Warn (sslInfo.CertStatus.ToString ());
    Log.Logger.Warn (sslInfo.X509Certificate.Issuer);
    if (CertIsTrustedEvenIfInvalid (sslInfo.X509Certificate))
    {
        Log.Logger.Warn ("Trusting: " + sslInfo.X509Certificate.Issuer);
        if (!callback.IsDisposed)
            using (callback)
            {
                callback?.Continue (true);
            }
        return true;
    }
    else
    {
        return base.OnCertificateError (browserControl, browser, errorCode, requestUrl,
                                        sslInfo, callback);
    }
}
For testing purposes I hardcoded certain tests into CertIsTrustedEvenIfInvalid (sslInfo.X509Certificate) that return true for my test environment. This might be replaced by a simple return false, by a UI popup presenting the cert and asking the user whether she wants to proceed, or it might take certain user-provided cert files into account; dunno yet:
bool CertIsTrustedEvenIfInvalid (X509Certificate certificate)
{
    var debug = new Dictionary<string, HashSet<string>> (StringComparer.OrdinalIgnoreCase)
    {
        ["cn"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "some", "data" },
        ["ou"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "other", "stuff" },
        ["o"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "..." },
        ["l"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "Atlantis" },
        ["s"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "Outer Space" },
        ["c"] = new HashSet<string> (StringComparer.OrdinalIgnoreCase) { "whatsnot" },
    };
    // Parse the issuer string ("CN=..., OU=..., ...") into key/value pairs
    // and accept the cert only if every pair is in the allow-list above.
    var x509issuer = certificate.Issuer
        .Split (",".ToCharArray ())
        .Select (part => part.Trim ().Split ("=".ToCharArray (), 2).Select (p => p.Trim ()))
        .ToDictionary (t => t.First (), t => t.Last ());
    return x509issuer.All (kvp => debug.ContainsKey (kvp.Key) &&
                                  debug[kvp.Key].Contains (kvp.Value));
}
Only if the SSL step works will SSO be tried.
After solving the SSL issue at hand, I ran into different behaviours of Chrome versus IE/Firefox etc., as described here # Choosing an authentication scheme; the gist of it is:
if multiple auth schemes are reported by the server, IE/Firefox use the first one they know, as delivered by the server (preference by order)
Chrome uses the one it deems of highest priority (in order: Negotiate -> NTLM -> Digest -> Basic), ignoring the server's ordering of alternate schemes.
My servers reported NTLM,Negotiate (in that order); with IE it simply worked.
With Chrome this led to Kerberos tokens being exchanged, which only worked when the web server was hosted on a Windows Server OS, not on a Windows client OS. Probably some kind of failed configuration for client-OS computers in the AD used. Not sure though, but against a Server OS it works.
Additionally, I implemented the
public override bool GetAuthCredentials (IWebBrowser browserControl, IBrowser browser,
                                         IFrame frame, bool isProxy, string host,
                                         int port, string realm, string scheme,
                                         IAuthCallback callback)
{
    // pseudo code - asks for user & pw
    (string UserName, string Password) = UIHelper.UIOperation (() =>
    {
        // UI to ask for user && password:
        // return (user, pw) if input ok else return (null, null)
    });
    if (UserName.IsSet () && Password.IsSet ())
    {
        if (!callback.IsDisposed)
        {
            using (callback)
            {
                callback?.Continue (UserName, Password);
            }
            return true;
        }
    }
    return base.GetAuthCredentials (browserControl, browser, frame, isProxy,
                                    host, port, realm, scheme, callback);
}
to allow for a fallback if the SSO does not work out. (After providing the AD credentials in this dialog, login is possible as well.)
For good measure I also whitelisted the hosts in the CEF browser context on creation of a new browser, like so:
CefSharp.RequestContext CreateNewRequestContext (string subDirName, string host,
                                                 WebConnectionType conType)
{
    var contextSettings = new RequestContextSettings
    {
        PersistSessionCookies = false,
        PersistUserPreferences = false,
        CachePath = Path.Combine (Cef.GetGlobalRequestContext ().CachePath, subDirName),
    };
    var context = new CefSharp.RequestContext (contextSettings);
    if (conType == WebConnectionType.Negotiate) // just an enum for UserPW + Negotiate
        Cef.UIThreadTaskFactory.StartNew (() =>
        {
            // see https://cs.chromium.org/chromium/src/chrome/common/pref_names.cc for names
            var settings = new Dictionary<string, string>
            {
                ["auth.server_whitelist"] = $"*{host}*",
                ["auth.negotiate_delegate_whitelist"] = $"*{host}*",
                // only set-able via policies/registry :/
                // ["auth.schemes"] = "ntlm" // "basic", "digest", "ntlm", "negotiate"
            };
            // set the settings - we *trust* the host with this and allow negotiation
            foreach (var s in settings)
                if (!context.SetPreference (s.Key, s.Value, out var error))
                    Log.Logger.Debug?.Log ($"Error setting '{s.Key}': {error}");
        });
    return context;
}
I made BasicAuth and WindowsAuth work in my SignalR project.
Now I am looking for other ways of authenticating (without needing a Win/AD Account).
While reading the SignalR documentation I stumbled upon the possibility of providing auth tokens in the connection header:
http://www.asp.net/signalr/overview/security/hub-authorization#header
It states "Then, in the hub, you would verify the user's token."
I could make the OnConnected method accessible anonymously and get the token like the following, and then verify it:
var test = Context.Request.Headers["mytoken"];
But what would be the next step? I would need to set the connected user to be an authenticated user, but how can I do that manually?
My overall goal is to have a very simple method of authentication, i.e. a "hardcoded" token validated on the server side, and to grant access to the other methods which have authorization enabled.
Any help would be appreciated.
I have had a similar problem. I found a kind of workaround by creating a new AuthorizeAttribute. Then I decorated the methods with this attribute. When a request is made, the attribute checks the token and grants or denies access.
Here is the code:
[AttributeUsage(AttributeTargets.Method)]
internal class CustomAuthorizeAttribute : AuthorizeAttribute
{
    public override bool AuthorizeHubMethodInvocation(Microsoft.AspNet.SignalR.Hubs.IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
    {
        string token = hubIncomingInvokerContext.Hub.Context.Headers["AuthenticationToken"];
        if (string.IsNullOrEmpty(token))
            return false;
        else
        {
            string decryptedValue = Encryptor.Decrypt(token, Encryptor.Password);
            string[] values = decryptedValue.Split(';');
            string userName = values[0],
                   deviceId = values[1],
                   connectionId = values[2];
            bool b = ...CanAccess()...;
            return b;
        }
    }
}
To have a username, you can simply add a property to your Hub that reads the token, parses it, and returns the username; a sketch of that follows below.
Still can't use Context.User.Identity, though. I hope it helps.
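For illustration, a minimal sketch of such a property, assuming the same userName;deviceId;connectionId token format that the attribute above decrypts (the hub name is hypothetical):

public class MyHub : Hub
{
    // Hypothetical convenience property: parses the username out of the same
    // encrypted header token that CustomAuthorizeAttribute validates.
    protected string UserName
    {
        get
        {
            string token = Context.Headers["AuthenticationToken"];
            if (string.IsNullOrEmpty(token))
                return null;

            string decrypted = Encryptor.Decrypt(token, Encryptor.Password);
            return decrypted.Split(';')[0]; // format: userName;deviceId;connectionId
        }
    }
}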