I have a requirement to control whether a client can download images/files based on who they are. We call an action with a parameter that lets me substitute session data to complete a path, without ever putting the real path on the client. We then return a FileActionResult from the controller.
It was working: if we found the image, we created a stream and served the file, and if we could not, we returned a default image. But we found we could easily hit a situation where the stream was still open when a new request came in, which resulted in an error. That could then escalate to the point where even the default image could not be downloaded.
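Roughly, the action looks something like this (a simplified sketch; the names, paths, and session key are placeholders, not our real code):

// Simplified sketch of the action described above; names and paths are placeholders.
public ActionResult Image(string name)
{
    // Complete the real path from session data so the client never sees it.
    string folder = Session["UserFolder"] as string;
    string path = Path.Combine(Server.MapPath("~/App_Data/Secure"), folder ?? "", name);

    if (!System.IO.File.Exists(path))
        path = Server.MapPath("~/Content/default.png");

    // Open a stream and return it; this is roughly where the "stream still open
    // when the next request arrives" conflicts have been showing up.
    var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
    return File(stream, "image/png");
}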
I have looked around a lot and we have tested several methods, and these conflicts can still occur. I have started to think that maybe this is not an IO issue, but more of an "I'm trying to do something wrong" issue.
Is there a way to intercept a call for a static resource, and then adapt the path if it is looking in a specific location, without imposing the rule on all requests?
The closest thing I have found is creating a View Expander, where I can reinterpret the path of a requested resource, but it is not the same: one is compiled, the other is not.
I don't have any code to show because the approach is uncertain and unknown. Searching has proven difficult because the terms collide with well-known solutions to topics that do not apply.
I am hoping that someone who is more knowledgeable can point me to a method that will treat files in a secure folder as if they are static resources once I have determined they are authorized to access the static resource.
I am using Identity, but I do not extend that identity to system access, nor will I. The only user allowed is IISUsr, per my client.
Any help would be greatly appreciated!
In my laravel application, I want to provide the users with the opportunity to download a copy of their stored data in the form of a Word document. I found that certain parts of this can only be accomplished using C#/.NET.
For this, I wrote a C# application with a method called GetWordProfile(User user) which returns a FileInfo pointing to the actual path of the output file (this is always within Laravel's storage folder, so Laravel has access to it). I only need the path and everything's done and dusted, since from this point on I can have my Laravel application serve the download for the user.
However, the question is: how do I get there? I must not forget about potential errors that may occur and I need to display them (inside my C# application, errors are handled by log4net and written to a file as well as to the console; the same goes for all other output).
I tried to run my application using shell_exec and exec respectively, but both returned nothing (null), despite my having set $output for exec, so neither seems suitable. Also, I'd rather avoid loops inside PHP/Laravel, since they use a lot of computing power unnecessarily for this sort of task, and I don't want users waiting more than, say, 5 seconds staring at a blank page while the script executes (during execution there's no content, obviously).
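For reference, the console entry point wrapping GetWordProfile looks roughly like this (simplified; the user lookup and argument handling are illustrative, not my exact code):

using System;

// Simplified sketch of the console wrapper around GetWordProfile.
class Program
{
    static readonly log4net.ILog Log = log4net.LogManager.GetLogger(typeof(Program));

    static int Main(string[] args)
    {
        try
        {
            var user = FindUser(int.Parse(args[0]));           // illustrative user lookup
            System.IO.FileInfo result = GetWordProfile(user);  // the method described above
            Console.WriteLine(result.FullName);                // what PHP's exec() should capture in $output
            return 0;
        }
        catch (Exception ex)
        {
            Log.Error("Export failed", ex);                    // log4net: file and console
            Console.Error.WriteLine(ex.Message);
            return 1;                                          // non-zero exit code signals failure
        }
    }
}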
EDIT: I also tried using COM, which ultimately did not work out properly either.
What is an appropriate approach towards this?
I did something similar with Python + C# a while back, using IPC (inter-process communication) over named pipes.
EDIT: URL is broken. Here's the question someone asked previously on this topic.
Interprocess Communication using Named Pipes in C# + PHP
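In case that link breaks too, the C# end of the pipe is only a few lines; here's a rough sketch (the pipe name, message format, and the export helper are made up for illustration):

// Rough sketch of the C# side of the named pipe; pipe name and protocol are assumptions.
using (var server = new System.IO.Pipes.NamedPipeServerStream("wordexport", System.IO.Pipes.PipeDirection.InOut))
{
    server.WaitForConnection();                        // blocks until the PHP client connects
    using (var reader = new System.IO.StreamReader(server))
    using (var writer = new System.IO.StreamWriter(server) { AutoFlush = true })
    {
        string userId = reader.ReadLine();             // request line sent by PHP
        writer.WriteLine(DoExportAndGetPath(userId));  // hypothetical helper; reply with the file path
    }
}

On the PHP side the pipe can be opened like an ordinary file via \\.\pipe\wordexport, which is essentially what the linked question walks through.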
Thank you for looking into this! My boss asked me about the following: We are in a library and we have online access to journals. When someone requests access to a journal, we log them on. If this has to be done for a whole class of students, it takes quite some time.
Let's assume we have a C# application. The application is in the C:/Program Files/ folder together with some kind of configuration file that contains the credentials and URLs and so forth. Since the files are in the C:/Program Files/ directory, a regular user will not have access to copy/manipulate any of the files. Using the C# SecureString class, the credentials would be safe. However, as soon as the application opens the browser and uses HttpWebRequest to send a POST request to log us in, the data would not be safe anymore.
Is this correct? A regular user can start an executable and could gain access to the POST data in the browser or can maybe impersonate the browser to get the POST request data.
If this is the case, I have two questions. The second one may be a question about opinions but the first one shouldn't be.
Is there any way to do what my boss wants me to do safely without ever giving anyone access to the credentials?
Is this a bad idea and should not be done at all?
I am also happy about "You should not do this, because..." answers, because this would also solve the problem for me if I can convince her of this.
Thank you!
Edit:
Sorry for the lack of information: Different accounts are used. Most of the time, it would be the student's own domain account. We also have a generic domain account we sometimes use in the library for classes to have the computers already logged in when the class arrives to speed things up. So this is a well known account. Of course entering the credentials in front of the patron as we do now is in no shape, way or form secure either.
It is a provably unsolvable problem. Since the user's machine, in your setup, needs to know the sensitive information, there is no way for you to prevent that machine's user from also knowing that sensitive information. The only way to prevent the user from accessing it is to ensure that the sensitive data is never on the client's machine.
Pretty much any "good" solution is going to require some sort of cooperation with the site in question, which you presumably won't have. Good solutions would involve having a server only you control (with the "real" credentials) log in, and then provide some sort of temporary token or session ID to the user to use for a period of time, and that would expire after a short while.
Another option is to never have the user directly access the site, but rather always access a server you control, which will redirect all traffic (that you consider valid) over to the other system. While this option would be possible without any cooperation from the 3rd party, it likely wouldn't be trivial to implement.
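As a very rough sketch of the token idea, assuming a small server-side endpoint that only you control (every name below is a placeholder, and the journal-site login is hypothetical):

// Placeholder sketch: the real credentials stay on your server; the student only ever sees the token.
public string IssueTemporaryToken(string studentId)
{
    var journalSession = JournalSite.LogIn(realUser, realPassword);            // hypothetical login helper

    string token = Guid.NewGuid().ToString("N");
    tokenStore.Save(token, journalSession.Id, DateTime.UtcNow.AddMinutes(15)); // short expiry
    return token;                                                              // hand only this to the student's machine
}

Whether anything like this is feasible depends entirely on what the journal site will accept in place of a normal login.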
While going through MVC concepts, I read that it is not good practice to have code inside a 'GET' action that changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This is per the HTTP standard. The GET verb should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar mechanisms. So if caching is turned on for a GET controller action, clicking a link to /Home/Index may not actually run the Index method on the server at all; the browser may serve the page from its own cache instead.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
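For example, something along these lines (the duration is illustrative):

// Illustrative: repeat requests for /Home/Index within the hour may be served
// from cache without this action running on the server.
[OutputCache(Duration = 3600, VaryByParam = "none")]
public ActionResult Index()
{
    return View();
}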
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, therefore they will show up in search results.
Suppose your /Home/Index is implemented as a GET but, let's say, it deletes a row in your database. Every time this link shows up in a search engine and somebody clicks it, a row gets deleted, and soon you have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be idempotent, i.e. they should not change server state.
One practical aspect of this, is that search robots will issue GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being idempotent has the added benefit that clients can cache the result of a GET (use HTTP headers to control this).
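For instance, in ASP.NET you might let clients cache a GET response with something like this (the values are illustrative):

// Illustrative: allow clients to cache this GET response for ten minutes.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));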
I manage a software download site, and we've been trying to find a good way to present the downloads to students. Due to licensing restrictions, there are a large number of downloads that should only be accessible to certain students or staff, and many of the files are DVD ISOs or other large files. We started out by pushing all the downloads through code, but we found that files over 500 MB would just time out and die halfway through. (I think part of this problem is related to using AFS for a storage system instead of CIFS, but I won't go into that...)
What I was looking at doing was giving users a temporary url to the file that is only good for x number of minutes. I've seen this used on other sites before, but I wasn't sure what was involved with setting it up.
So first off, is this a workable solution for my scenario? Or will we still run into problems? And what is the best method for going about doing this? Thanks!
Something you could do is randomly generate a string in a database that corresponds to a file and do some sort of stealthed redirect to the actual file. This parameter would be passed as part of the query string and would allow you to invalidate URLs however you like by performing any kind of checking before sending the file.
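A rough sketch of the idea, assuming an MVC-style controller and some token table (the names, expiry, and data access below are all placeholders):

// Placeholder sketch: issue a random token for a file, then validate it before sending anything.
public class FilesController : Controller
{
    public string CreateDownloadUrl(int fileId)
    {
        string token = Guid.NewGuid().ToString("N");
        SaveToken(token, fileId, DateTime.UtcNow.AddMinutes(30));    // hypothetical persistence
        return "/Files/Download?t=" + token;                         // what you hand to the student
    }

    public ActionResult Download(string t)
    {
        var entry = LookupToken(t);                                  // hypothetical lookup
        if (entry == null || entry.ExpiresUtc < DateTime.UtcNow)
            return new HttpStatusCodeResult(403);

        // The real location never appears in the URL the student sees.
        return File(GetRealPath(entry.FileId), "application/octet-stream");
    }
}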
Well, as you haven't mentioned the IIS version, you may want to take a look at
http://learn.iis.net/page.aspx/389/configuring-ftp-with-net-membership-authentication/
This article explains how to configure an FTP server for ASP.NET Membership authentication. If you set this up, you can restrict the files based on roles.
Also, I'm not sure how you would implement an anonymous URL solution without pushing the downloads through code.
I am wondering which way is the fastest to deliver images via ASP.NET:
//get file path
string filepath = GetFilePath();
Response.TransmitFile(filepath);
or:
string filepath = GetFilePath();
context.Response.WriteFile(filepath);
or
Bitmap bmp = GetBitmap();
bmp.Save(Response.OutputStream, System.Drawing.Imaging.ImageFormat.Bmp);
or any other method you can think of
TransmitFile scales better since it does not load the file into Application memory.
You'll need to test with large image files to see a visible difference, but TransmitFile will outperform WriteFile.
In either case, you should use an ashx handler rather than an aspx page to serve the image. aspx has extra overhead which is not needed.
One more thing-- set the ContentType when sending the file or the browser may render it as binary gibberish.
In the case of BMP:
context.Response.ContentType="image/bmp";
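Putting those pieces together, a minimal handler might look like this (the query string parameter and folder are illustrative, and you'd want to validate the requested name properly):

// Minimal .ashx handler sketch along the lines suggested above.
public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string name = context.Request.QueryString["name"];
        string path = context.Server.MapPath("~/images/" + name);

        context.Response.ContentType = "image/bmp";
        context.Response.TransmitFile(path);   // streamed without buffering in application memory
    }

    public bool IsReusable { get { return true; } }
}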
This doesn't really answer your question, but ASP.NET is not a file server. If you want to serve files, use IIS and have ASP.NET link to those files; or, if you must use ASP.NET, have it redirect to the appropriate place.
I am not saying that it can't be done but if you are worried about performance, you may consider going down another route.
Of the methods you list, I would think the bitmap one would be the slowest, as it creates a more complex object.
MS seems to have a decent solution if you must do it through ASP.NET.
It's easy enough to test: I recommend you set up three different URLs that exercise the different mechanisms and then have a client (an HttpWebRequest/HttpWebResponse or WebClient instance) download the content from all of them. Use a Stopwatch instance to time each download.
I imagine that it's not going to matter, that network latency is going to trump IO latency (unless you are thrashing the hard drive all the time) most of the time.
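Something along these lines would do it (the three handler URLs are placeholders for the three approaches):

// Crude timing harness for comparing the approaches.
var client = new System.Net.WebClient();
foreach (var url in new[] { "http://host/transmit.ashx", "http://host/write.ashx", "http://host/bitmap.ashx" })
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    byte[] data = client.DownloadData(url);
    sw.Stop();
    Console.WriteLine("{0}: {1} ms for {2} bytes", url, sw.ElapsedMilliseconds, data.Length);
}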
Well, one thing is for sure: this will NOT be as fast as letting IIS serve the file. If you route the request through ASP.NET instead of letting IIS serve it, you are introducing a bunch of overhead you don't need.
The only reason I can imagine routing through asp.net is for security purposes; is that the case?
In my experience, use TransmitFile(), as long as that's the only thing you intend to send, and it sounds like it is.
Note this is incompatible with AJAX-enabled ASPX files.
I imagine you want to do this either out of security concerns, or to record some type of metrics (e.g. recording each hit to the database to find out what image is most popular, or who is viewing the image, etc.), or for URL rewriting purposes. If there is no particular reason to use ASP.NET to serve the image, then you should just let IIS take care of it as others have noted.
Also - this doesn't answer your question of which method is most efficient when reading an image file from disk, but I thought I should point this out:
If you already have a Stream or Bitmap containing the image, use that to write directly to Response.OutputStream. You definitely want to avoid writing it to disk and then reading from disk if you already have the Stream.
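For example, something like this (names are illustrative; Stream.CopyTo needs .NET 4 or later):

// Illustrative: stream straight to the response instead of touching the disk again.
using (Stream image = GetImageStream())    // hypothetical source stream
{
    context.Response.ContentType = "image/png";
    image.CopyTo(context.Response.OutputStream);
}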