Here's the deal: I have a site where multiple people will be sharing the same account and should each be able to be on a page that uploads files and keeps a list of the files they've uploaded that session. The controller for the file uploading page looks like
public class FileUploadController : Controller
{
    // ...
    private List<ThreePartKey> uploadedFiles = new List<ThreePartKey>();

    public ActionResult Index()
    {
        // ...
    }

    [HttpPost]
    public ActionResult Index(HttpPostedFileBase file)
    {
        // ...
        if (!errorOccured)
        {
            uploadedFiles.Add(new ThreePartKey { orgname = selectedOrgName, catname = selectedCatName, filename = fileNameNoExtension, fullfilepath = newFileUrlPathAndName });
        }
        // ...
    }
}
and the problem is that uploadedFiles keeps getting re-initialized whenever [HttpPost] public ActionResult Index (HttpPostedFileBase file) is called, meaning the user's list of uploaded files only shows the last uploaded one. So I instead tried
private static List<ThreePartKey> uploadedFiles = new List<ThreePartKey> ();
and that screwed up everything because all the signed-in users are sharing the same list.
Is there any easy way to do what I'm trying to do?
Controllers are instantiated and destroyed on every request. If you want to persist information on the web server, it is strongly advised to use a permanent backing store such as a database.
You can use static state in ASP.NET applications (WebForms, MVC, OWIN, etc.), but it is only recommended as a performance cache. It cannot be relied upon, because static state is local to the current AppDomain in the current application pool (w3wp.exe instance): if your website runs in multiple pools or AppDomains, or if your application is restarted (or killed due to inactivity), the stored state is lost.
One option is to provide a 'session' code/id with each request. When a user first connects to your site, they are given a session-code (I use 'code' to indicate it has nothing to do with what we would normally call 'session').
Every link has that session-code as part of the URL and every post includes the session-code. Then your upload cache can be:
private static ILookup<int, ThreePartKey> uploadedFiles;
(or dictionary if you prefer)
private static IDictionary<int, IList<ThreePartKey>> uploadedFiles;
Whether this is workable depends on the size of the rest of your site - in most cases probably not exactly as described... but it could be managed, e.g. by using the IP address as the 'code', or if you're using AngularJS or a single-page application.
As pointed out, any static/singleton cache will still be lost if the app pool is reset, eg via the inactivity timeout setting in IIS.
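If you do go the session-code route, a minimal sketch of a thread-safe version of that static cache might look like the following (a ConcurrentDictionary keyed by the session-code, with a lock around the non-thread-safe List<T>; requires System.Collections.Concurrent and System.Linq):
private static readonly ConcurrentDictionary<int, List<ThreePartKey>> uploadCache =
    new ConcurrentDictionary<int, List<ThreePartKey>>();

private static void RememberUpload(int sessionCode, ThreePartKey key)
{
    var list = uploadCache.GetOrAdd(sessionCode, _ => new List<ThreePartKey>());
    lock (list) // List<T> itself is not thread safe
    {
        list.Add(key);
    }
}

private static IList<ThreePartKey> GetUploads(int sessionCode)
{
    List<ThreePartKey> list;
    if (!uploadCache.TryGetValue(sessionCode, out list))
        return new List<ThreePartKey>();
    lock (list)
    {
        return list.ToList(); // return a copy so callers don't need the lock
    }
}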
Another option is to persist the files in subfolders based on the user's IP address.
You've only stipulated that they all use the same login, not how the files are stored etc, so maybe this would work for you.
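A rough sketch of that approach (assuming the uploads live under App_Data and that you are not behind a proxy that hides the client IP):
[HttpPost]
public ActionResult Index(HttpPostedFileBase file)
{
    // Folder name derived from the caller's IP; ':' is replaced so IPv6 addresses make valid folder names
    var clientKey = Request.UserHostAddress.Replace(':', '_');
    var userFolder = Server.MapPath("~/App_Data/Uploads/" + clientKey);
    Directory.CreateDirectory(userFolder); // no-op if it already exists

    if (file != null && file.ContentLength > 0)
    {
        file.SaveAs(Path.Combine(userFolder, Path.GetFileName(file.FileName)));
    }

    // "Their" list of uploads is then just the contents of their folder
    var uploadedFiles = Directory.GetFiles(userFolder).Select(Path.GetFileName).ToList();
    return View(uploadedFiles);
}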
Background
I've created a working bot in C# but I'm failing to expand it to be a multi-tenant bot. I have created multiple bots in the Microsoft portal using this technique to identify themselves from the messaging endpoint:
https://example.com/api/messages/bot1
https://example.com/api/messages/bot2
https://example.com/api/messages/bot3
I can grab the LastSegment from the URL while in the MessagesController and store it in PrivateConversationData so I know which bot is talking in the current conversation. I intended to use this stored 'bot id' to retrieve the Microsoft AppId & Password from the web.config (the bot's credentials are stored as a series of custom entries, not the standard appSettings, since that only works for a single bot).
Credentials Problem
The authentication works well (nearly) as described here, except that when using async code with .ConfigureAwait(false) I can't get HttpContext.Current - it becomes null once execution resumes on a different thread. This means I can't get the authenticated user's credentials, either by looking them up in the web.config or by calling GetCredentialsFromClaims(), since I've lost the authenticated user. If I use .ConfigureAwait(true) I just get deadlocks all over the place.
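Roughly the pattern I'm hitting, stripped right down (Task.Delay stands in for the real connector calls):
[HttpPost]
public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
{
    // Still on the ASP.NET request context here, so both of these work:
    var botId = Request.RequestUri.Segments.Last();
    var contextBefore = HttpContext.Current;    // not null

    await Task.Delay(10).ConfigureAwait(false); // stand-in for the real async work

    // The continuation may run on a plain thread-pool thread:
    var contextAfter = HttpContext.Current;     // typically null here

    return Request.CreateResponse(HttpStatusCode.OK);
}
Anything captured before the first await (like botId above) survives; it's HttpContext.Current after that point that I can't rely on.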
I have the credentials in the web.config but they are stored per bot and I need the 'bot id' from the URL above in order to get the credentials.
Question
The crux of the problem is: I need the URL to get the 'bot id' and I need the 'bot id' to get the credentials from the web.config but I can never reliably get access to the URL once I've passed a .ConfigureAwait(false) in the code. On the flip side, I can't get the 'bot id' from the PrivateConversationData since I need the bot's credentials in order to load it. A bit chicken and egg :-(
If anyone has any ideas about what I may be doing wrong, or an alternative approach for knowing which 'bot id' is currently executing, I'd very much appreciate it.
Thanks
Please find the sample code below.
public class StartUp
{
    public void Configuration(IAppBuilder app)
    {
        var builder = new ContainerBuilder();
        // Note: Initialize / register the metadata service that can bring the tenant details from the corresponding store
        builder.RegisterType<TenantMetadataService>().As<ITenantMetadataService>();
        // Note: This helps you in accessing the TenantMetadata from any constructor going forward after the registration below
        builder.Register(ti => TenantMetadata.GetTenantMetadataFromRequest()).InstancePerRequest();
        // TODO: Register the various services / controllers etc. which may require the tenant details here
        // Build the container and plug it into the Web API dependency resolver so the per-request registration is actually used
        var container = builder.Build();
        GlobalConfiguration.Configuration.DependencyResolver = new AutofacWebApiDependencyResolver(container);
    }
}
public class TenantMetadata
{
    public Guid TenantId { get; set; }
    public Uri TenantUrl { get; set; }
    public string TenantName { get; set; }

    public static TenantMetadata GetTenantMetadataFromRequest()
    {
        var context = HttpContext.Current;
        // TODO: If you have any header like TenantId coming in with the request, you can read and use it here
        var tenantIdFromRequestHeader = "";
        // TODO: There will be a lazy cache that keeps building up as new tenants log in or use the application
        if (TenantCache.Contains(...)) return TenantCache[key];
        // TODO: Do a look-up (e.g. via the registered ITenantMetadataService) and construct the metadata
        var tenantMetadata = metadataSvc.GetTenantMetadata(...);
        // TODO: If the cache did not contain the tenant, add the freshly built metadata and return it
        TenantCache.Add(key, tenantMetadata);
        return tenantMetadata;
    }
}
Note
The above code snippet uses various service placeholders, a cache and other methods that will need to be filled in based on your application's services. If you would rather not cache the tenant metadata, for example because it may contain sensitive data, you can remove the caching parts of the implementation.
This implementation can be shared across all your web-facing portals (Web UI, Web API, WebJobs, etc.) so that it is the same across all apps and easy to test and consume.
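For example, once the registration above is in place, a controller could take the per-request metadata through its constructor (the appSettings key naming below is purely illustrative):
public class MessagesController : ApiController
{
    private readonly TenantMetadata tenant;

    // Autofac resolves the per-request TenantMetadata registered in StartUp
    public MessagesController(TenantMetadata tenant)
    {
        this.tenant = tenant;
    }

    [HttpPost]
    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        var appId = ConfigurationManager.AppSettings[tenant.TenantName + ":AppId"];
        var appPassword = ConfigurationManager.AppSettings[tenant.TenantName + ":AppPassword"];
        var credentials = new MicrosoftAppCredentials(appId, appPassword);

        // Placeholder for the real dialog / connector work that would use the per-tenant credentials;
        // tenant (and hence the credentials) stays available even after ConfigureAwait(false)
        await Task.Delay(0).ConfigureAwait(false);

        return Request.CreateResponse(HttpStatusCode.OK);
    }
}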
HTH.
Assume some intranet WebAPI endpoint like:
public class AttachmentDto
{
    public String Path { get; set; }
}

public class AttachmentsApiController : ApiController
{
    public void Post(AttachmentDto attachment)
    {
        var attachmentsStorage = new AttachmentsStorage();
        attachmentsStorage.Add(attachment.Path);
    }
}
where AttachmentsStorage, in one way or another, reads the file at attachment.Path (one network share or another) and saves the contents at some more or less publicly available and known place.
That is basically equivalent to simply
public String Post(AttachmentDto attachment)
{
    return File.ReadAllText(attachment.Path);
}
That, in my opinion, amounts to a security vulnerability, even though the system is intranet-only, because any file on the server that is accessible to the service account in use can technically be read.
Am I correct?
If it is so, then what can be done to mitigate this issue?
I've considered:
Pass the file contents - possible, though not desired for this particular system because of the assumed design and possible size of the files.
Prohibit any addresses that are not network shares. Something like:
private Boolean IsNetworkShareFile(String path)
{
    var uri = new Uri(path);
    return
        uri.IsFile &&
        uri.IsUnc &&
        uri.IsAbsoluteUri;
}
It seems to work, but at best it only prevents local file access (though a file share can actually point to a local path), and it doesn't restrict access to private shares.
Try impersonation/delegation - probably the best solution with authentication mode="Windows", though it will require changing account settings in Active Directory (a rough sketch of what I have in mind is below).
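Something along these lines (assuming Windows authentication is enabled and that delegation to the file servers is configured in AD):
public String Post(AttachmentDto attachment)
{
    var windowsIdentity = User.Identity as WindowsIdentity;
    if (windowsIdentity == null)
        throw new HttpResponseException(HttpStatusCode.Unauthorized);

    // Read the share under the caller's own account rather than the service account
    using (windowsIdentity.Impersonate())
    {
        return File.ReadAllText(attachment.Path);
    }
}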
What you are describing is known as an Insecure Direct Object Reference, item A4 in the OWASP Top 10.
You can guess the mitigations from the title. You can either
Secure the references, or
Use indirect object references instead
(or both)
Secure the reference
The server should validate Path, ideally against a white list.
Paths can be a little tricky to validate because they can contain escape characters. Be sure to use Path.Combine and MapPath instead of performing any path computation yourself.
Also, since this is a string that is being input into your system, always check for injection.
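A minimal sketch of such a check against a whitelist of allowed share roots (the roots listed here are just examples):
private static readonly string[] AllowedRoots =
{
    @"\\fileserver01\public\",
    @"\\fileserver02\exports\"
};

private static bool IsAllowedPath(string path)
{
    string fullPath;
    try
    {
        // Normalize first so "..\" segments cannot escape an allowed root
        fullPath = Path.GetFullPath(path);
    }
    catch (ArgumentException)
    {
        return false;
    }

    return AllowedRoots.Any(root =>
        fullPath.StartsWith(root, StringComparison.OrdinalIgnoreCase));
}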
Use an indirect object reference
Modify the API's interface so the client submits a PathID instead of a Path, and make PathID discoverable via some other service call which lists only those specific files that the client has the right to access. If the system has per-user permissions (i.e. ACL), then bind the PathID namespace to the user session, so that one user cannot guess another user's PathIDs.
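A sketch of the indirect variant, reusing the controller from the question (PathLookup is a hypothetical per-user lookup populated by the listing service):
public class AttachmentReferenceDto
{
    public Guid PathId { get; set; }
}

public class AttachmentsApiController : ApiController
{
    public void Post(AttachmentReferenceDto attachment)
    {
        // Resolve the opaque id to a real path, scoped to the current user;
        // ids the user was never given simply won't resolve
        var path = PathLookup.Resolve(User.Identity.Name, attachment.PathId);
        if (path == null)
            throw new HttpResponseException(HttpStatusCode.Forbidden);

        var attachmentsStorage = new AttachmentsStorage();
        attachmentsStorage.Add(path);
    }
}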
I am administrator of a small practice project web application, AngularJS front-end pulling its back-end data from a C#/.NET WebAPI, and I'm handling security using the SimpleMembershipProvider.
I suspect that the way I implemented said security is not the best (I'm told ASP.NET Identity is now the way to go?) but that's another question altogether.
The issue I'm (rather bewilderingly) running into is that I get occasional reports that a page load meant to display a particular user's data returns somebody else's instead. Reloading the page fixes it (evidently), and I haven't been able to duplicate the scenario myself, or find anything particularly consistent about the users to which this happens.
None of the information being displayed is at all sensitive in nature (the app's just a friendly front end for an already public third-party API) so I'm not in panic mode about this, but I am both concerned and confused and want it fixed.
Here is what one of my API controller endpoints looks like:
[Authorize]
public class UserController : ApiController
{
    private static int _userId;
    private readonly IUserProfileRepository _userProfileRepository;

    public UserController()
    {
        _userProfileRepository = new UserProfileRepository(new DatabaseContext());
        _userId = WebSecurity.GetUserId(User.Identity.Name);
    }

    public UserProfileDto Get()
    {
        return _userProfileRepository.GetUserProfileById(_userId).ToDto();
    }
}
Any feedback on where I might be going wrong here or what might be causing the intermittent inconsistency would be very much appreciated. (Laughter also acceptable if the way I handled this is just really bad. :P )
Static class fields are shared by all instances/threads in the same AppDomain (in your case, the same process). Different HTTP requests are processed by threads running in parallel, so any two threads running at [almost] the same time may (and will) change the value of _userId. You are assigning _userId in the constructor of your controller, and a new instance of that controller is created for every HTTP request handled by UserController, so this assignment happens again and again.
You will have a hard time replicating this problem, since you are a single user testing the code, so there are no overlapping request threads.
Remove the static specifier from the _userId field declaration in the controller class.
Note: make sure the DatabaseContext is disposed of. One place to do this is the overridden Controller.Dispose.
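For example (a sketch, keeping a reference to the context so it can be disposed):
public class UserController : ApiController
{
    private readonly DatabaseContext _context;
    private readonly IUserProfileRepository _userProfileRepository;

    public UserController()
    {
        _context = new DatabaseContext();
        _userProfileRepository = new UserProfileRepository(_context);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _context.Dispose(); // dispose the context created in the constructor
        }
        base.Dispose(disposing);
    }
}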
Change Get to retrieve the user id directly rather than reading it from a static variable:
public UserProfileDto Get()
{
    return _userProfileRepository.GetUserProfileById(WebSecurity.GetUserId(User.Identity.Name)).ToDto();
}
I am creating a login system and I want a way to cache information without repeatedly retrieving the same data from the database.
For example, I would have a static class called tokenData that stores the login token, username, expireDate, etc. Every time the user visits another page, it would check the static class for the data; the token itself is stored in a session/cookie to drive the lookup. If the data is not in the static class (e.g. after an application pool restart), it would check the database for the record when the user logs in and create another entry based on the data in the token table.
Can someone tell me whether this is acceptable practice, or point out anything to improve and issues that can arise?
An example is:
public class userToken
{
    private string name;
    private string tokenId;
    private static List<userToken> userData = new List<userToken>();

    public void Add(userToken token) { userData.Add(token); }
    public userToken Find(string tokenId) { return userData.Find(t => t.tokenId == tokenId); }
}
Never ever ever use static for user or session specific data. static is shared across ALL sessions! You might end up with user sessions sharing confidential data.
Use HttpContext.Session or HttpContext.Cache.
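For example, a minimal sketch using Session (the key name and the TokenRepository helper are illustrative):
public static class TokenSession
{
    private const string TokenKey = "UserToken";

    public static userToken GetCurrentToken(string tokenIdFromCookie)
    {
        var session = HttpContext.Current.Session;

        var token = session[TokenKey] as userToken;
        if (token == null)
        {
            // Session is empty (new session, app pool restart, ...), so fall back
            // to the database and cache the result for the rest of this session
            token = TokenRepository.LoadByTokenId(tokenIdFromCookie);
            session[TokenKey] = token;
        }
        return token;
    }
}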
Your solution can introduce errors when run with more than a single user on a single server, because the cache you are building is not thread safe. It will also introduce errors when your app runs across 2+ servers in a cluster (load balanced).
I would look into using a proper caching toolset (memcached, etc.)
I'm creating a controller that will serve the combined/minified versions of my JavaScript and CSS. I need to somewhere along the line define which scripts/styles to be loaded.
When a request is made, for example for style.css?VersionNumberHere, it checks whether the combined/minified data is already in the HttpContext.Cache and, if so, spits it out. Otherwise, I need to look up the definition that makes up style.css.
I created a Script/StyleBuilder (that inherits from ContentBuilder) that will store all the paths that need to be combined and then squished (so this would be the definition of style.css).
Where should I be storing these references to the "builders"? Should they be in a static class or a singleton that implements an interface so that it can be tested?
Here's the interface that the abstract class ContentBuilder implements (you can easily imagine the implementation):
public interface IContentBuilder : IEnumerable<string>
{
    string Name { get; }
    int Count { get; }
    string[] ValidExtensions { get; }
    void Add(string path);
    bool ValidatePath(string path);
    string GetHtmlReference(); // Spits out <script> or <link> depending on the implementation.
    string Build();            // Minifies, combines etc.
}
And here is ideally what I'd like to be able to do with these:
ContentBuilderContainer.Current.Add("main.js", c => new ScriptBuilder()
{
    "/path/to/test.js",
    "/path/to/test2.js",
    "/path/to/test3.js"
});

ContentBuilderContainer.Current.Add("style.css", c => new StyleBuilder()
{
    "/path/to/style.css",
    "/path/to/test.less"
});
Then to output all the HTML for all registered IContentBuilder:
ContentBuilderContainer.Current.BuildHtml();
Maybe you should check out SquishIt. Some more info on it in this blog post. We use it in production.
Attach caching attributes to your controller actions and cache by parameter like this:
// Minifies and compresses JavaScript files, replaces tildes "~" with the input serverHost address
// (for correct rewriting of paths inside JS files) and caches the response on the client (browser) for two hours
[OutputCache(Duration = 7200, Location = OutputCacheLocation.Client, VaryByParam = "jsPath;ServerHost")]
[CompressFilter]
[ActionName("tildajs")]
public virtual JavaScriptResult ResolveTildasJavaScript(string jsPath, string serverHost)
...
I made the following interface:
public interface IContentBuilderContainer
{
    int Count { get; }
    bool Add(string name, Func<IContentBuilder> contentBuilder);
    string RenderHtml();
}
And then in the implementation of ContentBuilderContainer:
public class ContentBuilderContainer : IContentBuilderContainer
{
    // Other members removed for simplicity.

    #region Static Properties

    /// <summary>
    /// Gets or sets the current content builder container.
    /// </summary>
    public static IContentBuilderContainer Current { get; set; }

    #endregion

    #region Static Constructors

    static ContentBuilderContainer()
    {
        ContentBuilderContainer.Current = new ContentBuilderContainer();
    }

    #endregion
}
This way there's a single ContentBuilderContainer living at one time.
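A rough sketch of what the instance side of that container might look like (a plain dictionary keyed by name; just one way the members omitted above might be filled in):
private readonly IDictionary<string, IContentBuilder> builders =
    new Dictionary<string, IContentBuilder>(StringComparer.OrdinalIgnoreCase);

public int Count
{
    get { return builders.Count; }
}

public bool Add(string name, Func<IContentBuilder> contentBuilder)
{
    if (builders.ContainsKey(name))
        return false; // already registered under that name

    builders.Add(name, contentBuilder());
    return true;
}

public string RenderHtml()
{
    var html = new StringBuilder();
    foreach (var builder in builders.Values)
        html.AppendLine(builder.GetHtmlReference());
    return html.ToString();
}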
I helped write some code that did this recently. Here's a high level overview of the solution that was implemented. Hopefully it will give you some good ideas.
Configuration: We created custom configuration elements that define a key and a corresponding list of directories. So the key JS is linked to our /Content/Scripts folder, and CSS is linked to our /Content/Styles folder. I have seen other solutions where the configuration allowed individual files to be listed.
Controller: The controller was set up to receive requests something along the lines of /Content/Get/JS and /Content/Get/CSS. The controller uses the configuration key and client request headers to come up with a cache key that identifies the content we want to serve: JS-MSIE-ZIP, CSS-FFX, etc. The controller then checks our cache service. If the content is not there, it gets concatenated, minified, compressed, cached and then served. Handy fallout is that the content is compressed before going into the cache instead of every time it's served.
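A rough sketch of the shape of that controller action (the directory mapping, cache key and content types are simplified, and minification/compression are left out for brevity):
public class ContentController : Controller
{
    // Illustrative stand-in for the custom configuration section described above
    private static readonly IDictionary<string, string> ConfiguredDirectories =
        new Dictionary<string, string>
        {
            { "JS", "~/Content/Scripts" },
            { "CSS", "~/Content/Styles" }
        };

    public ActionResult Get(string key)
    {
        // Cache key combines the config key with client hints (simplified here)
        var cacheKey = key + "-" + (Request.Browser != null ? Request.Browser.Browser : "ANY");

        var content = HttpContext.Cache[cacheKey] as string;
        if (content == null)
        {
            var directory = Server.MapPath(ConfiguredDirectories[key]);
            var pattern = key == "CSS" ? "*.css" : "*.js";

            // Concatenate every file in the configured directory; the real version
            // also minifies and compresses before caching
            content = string.Join(Environment.NewLine,
                Directory.GetFiles(directory, pattern).Select(System.IO.File.ReadAllText));

            HttpContext.Cache.Insert(cacheKey, content);
        }

        return Content(content, key == "CSS" ? "text/css" : "application/javascript");
    }
}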
View: In the View, the links are set up like this:
<link href="<%: Url.Action("Get", "Content", new { key = "CSS" }) %>" rel="stylesheet" type="text/css" />
Cache Service: We're using an existing cache service we have that just wraps the application cache. At some point we'll probably move that to Velocity or something similar. If the amount of CSS and JS we cache keeps growing, we'll probably change the format of the key to a proper filename and move the content to the file system. But, memory's pretty cheap, we'll see what happens.
Reasoning: (if it matters)
We did this in order to keep the JavaScript for different features in separate files in source control without having to link to all of the files individually in the HTML. Because we configure our content by directory and not individual files, we can also run a full minification during production builds to speed up the whole run time process somewhat. Yet we still get the benefit of determining which content to serve based on the client browser, and cached compressed versions.
In development, the system can be set up with a quick configuration change so that every request rebuilds the JS. The files are concatenated with file names injected in comments for easy searching, but the content is not minified and nothing is cached anywhere. This allows us to change, test and debug the JS without recompiling the application.
Couldn't quite find all these features in a package out there so we spent a couple of days and built it. Admittedly some features were just for fun, but that's why we like doing what we do. =)