I'm looking for creative ways to solve a difficult problem. And I need to do this via C# code only, no website configurations.
My users upload a "package" of files. These are usually HTML files with relative paths to images and other resources. Currently I store these in a folder in an S3 bucket. So far there is no problem.
The problem appears when I need to serve this file back to the client. I need a way to give them access to the HTML file for X amount of time while also keeping the integrity of the URL links.
For instance - File.html has a reference to fish.png -
<img src="fish.png"/>
If I grant them access to File.html the fish image is broken because they do not have access to "fish.png". If I grant them access to both the link is still broken because the src doesn't have the security token. I've even tried granting access to the folder and both files but still the image is broken. I also can't download the contents because that would defeat the purpose of only having the resource available for X amount of time.
I hope my problem is clear. I am very new at S3 development in general. Any help is appreciated.
EDIT - I wanted to add that modifying the HTML document's links is not an option. They don't always upload HTML; it could be Flash files or other file types. I need the documents' link references to be maintained.
You first need to figure out why it doesn't work when they have access to the entire folder.
If they have access to the folder, both files are in that folder, and the link to the image has no path, it should work. Until you get that use case solved, getting the time-based URLs working is a non-starter.
You may want to make sure the capitalization is correct - it matters in S3, whereas if you are used to running in IIS it usually doesn't.
One potential way to do this would be to make the IMAGE files public and make the HTML files "signed URLs". You can set an expiry time on the signed URLs that reference the image files. The image files obviously wouldn't expire, but access to the HTML files that reference them WOULD expire.
Not the prettiest solution, but the problem isn't very pretty either. ;-)
You can read more about Pre-Signed URLS here: Generate a Pre-signed Object URL using AWS SDK for .NET
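For reference, a minimal sketch of generating such a URL with the AWS SDK for .NET (the bucket name, key, and expiry window below are placeholders):

    using System;
    using Amazon.S3;
    using Amazon.S3.Model;

    class PreSignedUrlExample
    {
        static void Main()
        {
            // credentials are resolved through the usual SDK chain
            var client = new AmazonS3Client();

            var request = new GetPreSignedUrlRequest
            {
                BucketName = "my-bucket",                 // placeholder
                Key = "packages/user1/File.html",         // placeholder
                Expires = DateTime.UtcNow.AddMinutes(30)  // the access window
            };

            // returns a URL with the security token baked into the query string
            string url = client.GetPreSignedURL(request);
            Console.WriteLine(url);
        }
    }

This is also exactly why the relative links break: the token lives in the query string of that one URL, so sibling resources like fish.png are not covered by it.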
This might be overkill but I think it would work.
When a user requests the file, you copy the file and any files it references to a public area (perhaps into a directory using a GUID as its name, so it's not easily guessable by other users).
Then when the time has expired you can simply delete the new directory.
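A hedged sketch of that idea with the AWS SDK for .NET - copy everything under the package's prefix to a GUID-named public prefix, and let a scheduled job (not shown) delete that prefix when the time expires. Bucket and prefix names are placeholders, and pagination is omitted:

    using System;
    using Amazon.S3;
    using Amazon.S3.Model;

    class CopyToPublicArea
    {
        static string PublishPackage(IAmazonS3 client, string bucket, string sourcePrefix)
        {
            // GUID-named prefix so the public copy isn't easily guessable
            string publicPrefix = "public/" + Guid.NewGuid() + "/";

            // list everything in the package (check IsTruncated on real data)
            var listing = client.ListObjectsV2(new ListObjectsV2Request
            {
                BucketName = bucket,
                Prefix = sourcePrefix
            });

            foreach (var obj in listing.S3Objects)
            {
                // copy each object, preserving the relative layout so that
                // relative links like <img src="fish.png"/> keep working
                client.CopyObject(new CopyObjectRequest
                {
                    SourceBucket = bucket,
                    SourceKey = obj.Key,
                    DestinationBucket = bucket,
                    DestinationKey = publicPrefix + obj.Key.Substring(sourcePrefix.Length),
                    CannedACL = S3CannedACL.PublicRead
                });
            }

            return publicPrefix;
        }
    }

Because the whole folder structure is duplicated, no per-file tokens are needed and the HTML never has to be modified.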
I understand this folder, App_Data, is normally for database files etc., but I now want to use it for images: the idea is that users upload images into this folder and they can be accessed from the website. I basically want App_Data to be used/thought of as a normal folder now. Does anyone know how to do this? Is it just permission settings, or can this folder not be used like a normal folder? Thanks in advance :)
ApplicationData is a folder for application data. What kind of data you store there is up to you. Note that there are three such folders on Windows:
ApplicationData
CommonApplicationData
LocalApplicationData
Generally, data in there should be specific to the current user - except for CommonApplicationData, of course; being shared across users is what the "Common"-prefixed folder is there for.
However, the rest of the question makes no sense. You want the user to manually put stuff there so a website can upload it? You also seem to think it is somehow not a "normal" folder?
Websites do not have random access to the file system, so it would really just be annoying for the user to navigate there. And if there is another program in the loop, you have not told us about it.
And the folder is quite normal. The OS stores a path to it which can be changed (with the files moved automagically), but beyond that it is as normal as can be. That its location is not fixed is exactly why you should always retrieve the real values from the OS via https://learn.microsoft.com/en-us/dotnet/api/system.environment.specialfolder
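For example:

    using System;

    class SpecialFolders
    {
        static void Main()
        {
            // ask the OS for the real locations instead of hard-coding paths
            Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData));
            Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData));
            Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData));
        }
    }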
Edit:
Based on your comment, I understand now. You are writing a website and wondering why the server has no access to the App_Data folder. Of course, I only now properly noticed the MVC tag.
Web servers are uniquely vulnerable to hacking: they are online 24/7, built on a few well-known frameworks, and widespread reachability is a core goal. As a result they generally run under the most restrictive user rights possible.
Read access to the server's program directory and the instance's content directory is all that can be expected - any more should never be granted. Maybe write access to a subfolder of the content directory for temp files - but there are better solutions, involving databases and HTTP handlers.
Solution:
If you want your images to be available, put them into a subfolder of the Content directory for this instance. However, you should really consider database storage with HTTP handlers: https://www.red-gate.com/simple-talk/sql/learn-sql-server/an-introduction-to-sql-server-filestream/ Some even go as far as having a separate, dedicated web server just for images. But I doubt you are at that scale yet.
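A rough sketch of the upload side (ASP.NET MVC; the controller, action, and folder names are just examples):

    using System;
    using System.IO;
    using System.Web;
    using System.Web.Mvc;

    public class UploadController : Controller
    {
        [HttpPost]
        public ActionResult Upload(HttpPostedFileBase image)
        {
            // map the virtual path to this instance's physical Content directory
            string folder = Server.MapPath("~/Content/UserImages");

            // never reuse the client-supplied name; generate one server-side
            string name = Guid.NewGuid() + Path.GetExtension(image.FileName);
            image.SaveAs(Path.Combine(folder, name));

            return RedirectToAction("Index");
        }
    }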
I am allowing users to upload files to my server. What possible security threats do I face and how can I eliminate them?
Let's say I am allowing users to upload images to my server either from their system or from net. Now to check even the size of these images I have to store them in my /tmp folder. Isn't it risky? How can I minimize the risk?
Also let's say I am using wget to download the images from the link that the users upload in my form. I first have to save those files in my server to check if they actually are images. Also what if a prankster gives me a URL and I end up downloading an entire website full of malware?
First of all, realize that uploading a file means that the user is giving you a lot of data in various formats, and that the user has full control over that data. That's a concern even for a normal form text field; file uploads raise the same concerns and a lot more. The first rule is: don't trust any of it.
What you get from the user with a file upload:
the file data
a file name
a MIME type
These are the three main components of the file upload, and none of it is trustable.
Do not trust the MIME type in $_FILES['file']['type']. It's an entirely arbitrary, user supplied value.
Don't use the file name for anything important. It's an entirely arbitrary, user supplied value. You cannot trust the file extension or the name in general. Do not save the file to the server's hard disk using something like 'dir/' . $_FILES['file']['name']. If the name is '../../../passwd', you're overwriting files in other directories. Always generate a random name yourself to save the file as. If you want you can store the original file name in a database as meta data.
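The thread is PHP-flavored, but the idea is language-independent; a C# illustration (all names are made up):

    using System;
    using System.IO;

    static class SafeNames
    {
        public static string StorageNameFor(string untrustedClientName)
        {
            // the only thing taken from the client's name is the extension,
            // and even that should be checked against a whitelist
            string extension = Path.GetExtension(untrustedClientName);

            // random, collision-resistant name chosen by the server
            return Guid.NewGuid().ToString("N") + extension;
        }
    }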
Never let anybody or anything access the file arbitrarily. For example, if an attacker uploads a malicious.php file to your server and you're storing it in the webroot directory of your site, a user can simply go to example.com/uploads/malicious.php to execute that file and run arbitrary PHP code on your server.
Never store arbitrary uploaded files anywhere publicly, always store them somewhere where only your application has access to them.
Only allow specific processes access to the files. If it's supposed to be an image file, only allow a script that reads images and resizes them to access the file directly. If this script has problems reading the file, it's probably not an image file, flag it and/or discard it. The same goes for other file types. If the file is supposed to be downloadable by other users, create a script that serves the file up for download and does nothing else with it.
If you don't know what file type you're dealing with, detect the MIME type of the file yourself and/or try to let a specific process open the file (e.g. let an image resize process try to resize the supposed image). Be careful here as well, if there's a vulnerability in that process, a maliciously crafted file may exploit it which may lead to security breaches (the most common example of such attacks is Adobe's PDF Reader).
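As a sketch of the "let a specific process try to open it" approach, here is a C# version that attempts to decode an upload as an image and rejects it on failure (System.Drawing on .NET Framework, System.Drawing.Common elsewhere):

    using System;
    using System.Drawing;
    using System.IO;

    static class ImageCheck
    {
        public static bool LooksLikeImage(Stream upload)
        {
            try
            {
                // GDI+ actually parses the data; garbage won't decode
                using (var img = Image.FromStream(upload))
                {
                    return img.Width > 0 && img.Height > 0;
                }
            }
            catch (ArgumentException)
            {
                // thrown when the stream is not a valid image format
                return false;
            }
        }
    }

Keep in mind the caveat above: the decoder itself can have vulnerabilities, so this is a validation aid, not a guarantee.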
To address your specific questions:
[T]o check even the size of these images I have to store them in my /tmp folder. Isn't it risky?
No. Just storing data in a file in a temp folder is not risky if you're not doing anything with that data. Data is just data, regardless of its contents. It's only risky if you're trying to execute the data or if a program is parsing the data which can be tricked into doing unexpected things by malicious data if the program contains parsing flaws.
Of course, having any sort of malicious data sitting around on the disk is more risky than having no malicious data anywhere. You never know who'll come along and do something with it. So you should validate any uploaded data and discard it as soon as possible if it doesn't pass validation.
What if a prankster gives me a url and I end up downloading an entire website full of malware?
It's up to you what exactly you download. One URL will result in at most one blob of data. If you are parsing that data and downloading the content of more URLs based on that initial blob, that's your problem. Don't do it. But even if you did, well, then you'd have a temp directory full of stuff. Again, this is not dangerous if you're not doing anything dangerous with that stuff.
One simple scenario would be:
If you use an upload interface with no restrictions on the types of files allowed for upload, an attacker can upload a PHP or .NET file with malicious code that can lead to a server compromise.
Refer to:
http://www.acunetix.com/websitesecurity/upload-forms-threat.htm
The above link discusses the common issues.
Also refer to:
http://php.net/manual/en/features.file-upload.php
Here are some of them:
When a file is uploaded to the server, PHP will set the variable $_FILES['uploadedfile']['type'] to the MIME type provided by the web browser the client is using. However, file upload form validation cannot depend on this value alone. A malicious user can easily upload files using a script or some other automated application that allows sending of HTTP POST requests, which allows him to send a fake MIME type.
It is almost impossible to compile a list that includes all possible extensions an attacker can use. For example, if the code is running in a hosted environment, such environments usually allow a large number of scripting languages, such as Perl, Python, Ruby etc., and the list can be endless.
A malicious user can easily bypass such a check by uploading a file called ".htaccess", which contains a line of code similar to the one below: AddType application/x-httpd-php .jpg
There are common rules for avoiding general issues with file uploads:
Store uploaded files outside your website root folder - so users won't be able to overwrite your application files or directly access uploaded files (for example, in /var/uploads while your app is in /var/www).
Store sanitized file names in the database and name the physical files after their hash values (this also resolves the issue of duplicate files - they'll have equal hashes).
To avoid filesystem issues when there are too many files in the /var/uploads folder, consider storing files in a folder tree like the following (see the sketch after this list):
file hash = 234wffqwdedqwdcs -> store it in /var/uploads/23/234wffqwdedqwdcs
common rule: /var/uploads/<first 2 hash letters>/<hash>
Install nginx if you haven't already - it serves static content like magic, and its 'X-Accel-Redirect' header will allow you to serve files with permissions checked first by a custom script.
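A sketch of that hash-based layout in C# (the upload root is a placeholder):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class UploadStore
    {
        public static string PathForFile(byte[] fileData)
        {
            using (var sha = SHA256.Create())
            {
                // hex-encode the content hash; equal files collapse to one path
                string hash = BitConverter.ToString(sha.ComputeHash(fileData))
                                          .Replace("-", "").ToLowerInvariant();

                // /var/uploads/<first 2 hash letters>/<hash>
                return Path.Combine("/var/uploads", hash.Substring(0, 2), hash);
            }
        }
    }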
I have a little payments webApp, our customers can install it on their IIS and work with it. They can upload their own logotype.
We are using wyBuild to update these apps, but it replaces all files in the web folder with the new version, so the logotypes are deleted. That's why we placed the customers' files in Program Files, where the updater can't delete them.
The problem is that I can't load the images from the following path:
C:\Program Files\MyApp\ImageFoder\logo.jpg
I don't know how to do it, and I'm almost sure it's not possible to load them this way.
My web application is at
C:\inetpub\wwwroot\MyApp\
I can't have the images in the web folder because wyBuild deletes them when I'm updating. I already tried paths like this (they don't work):
///file:c:/program files/ .... etc
So, the question is:
How can I load an image into an asp:Image control using its Windows path?
You need to configure an IIS Virtual Folder to point to the alternate location where the images are stored.
I wouldn't put them in Program Files, though, a sibling folder in wwwroot would be better.
Remember NTFS permissions are easy to mess up and it's easier to manage them in a single place.
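Once the virtual directory exists (say it's called "LogoImages" and points at the external folder), the control only ever sees the virtual path. A minimal code-behind sketch, assuming the page contains <asp:Image ID="LogoImage" runat="server" />:

    using System;
    using System.Web.UI;

    public partial class LogoPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // resolves through the virtual directory; the physical
            // location of the folder no longer matters here
            LogoImage.ImageUrl = "~/LogoImages/logo.jpg";
        }
    }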
Update - for locally installed, localhost-only sites: alternatively (and this is only a good idea if you have minimal amounts of traffic - NOT for public websites), you can serve files from an arbitrary location using a VirtualPathProvider. It sounds like this 'web app' is installed like a desktop app for some reason? If you want to store user data externally, the user's App Data folder would be appropriate, but ONLY if the web app refuses external connections and can only be accessed from the machine.
Since you're dealing with images, I'd grab the imageresizing.net library and use the VirtualFolder plugin to serve the files dynamically. It's 200KB more in your project, but you get free dynamic image resizing and/or processing if you need it, and you save a few days making a VirtualPathProvider subclass work (they're a nightmare).
Wouldn't it be better to use isolated storage?
Added: I mean on the user's machine, and upload the files again if they are not found. This takes away your overhead completely.
I have a folder with content in my solution; it contains lots of subdirectories. I need to copy this folder into IsolatedStorage. I read an MSDN forum message which said that there is no way to get folder content from code. How do I solve this problem?
The forum is right. There is no way to enumerate "Content" resources.
You could set the build action to "Resource" or "Embedded Resource" as some answers suggest - then there are ways to enumerate the resources using ResourceManager or similar means. But I wouldn't recommend it, as this will embed all resources into your assembly, bloating it and making your app slow to load.
Here is a similar question (about enumerating image files). No solution though. Matt's answer contains the only workaround: prepare a list of filenames at design time and build this list into the app. Then, instead of enumerating the files at runtime, you read the filenames one by one from this list.
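A hedged sketch of that workaround on Windows Phone (the list file name is hypothetical; it would be generated at design time with build action Content, one relative path per line):

    using System;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Windows;

    static class ContentCopier
    {
        public static void CopyContentToIsolatedStorage()
        {
            // the design-time list of files, since folders can't be enumerated
            var list = Application.GetResourceStream(
                new Uri("FileList.txt", UriKind.Relative)).Stream;

            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var reader = new StreamReader(list))
            {
                string relativePath;
                while ((relativePath = reader.ReadLine()) != null)
                {
                    // recreate the subdirectory structure in isolated storage
                    string dir = Path.GetDirectoryName(relativePath);
                    if (!string.IsNullOrEmpty(dir) && !store.DirectoryExists(dir))
                        store.CreateDirectory(dir);

                    var source = Application.GetResourceStream(
                        new Uri(relativePath, UriKind.Relative)).Stream;
                    using (var target = store.CreateFile(relativePath))
                        source.CopyTo(target);
                }
            }
        }
    }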
If you need this just for developing and testing, then like the others I also recommend looking at the ISETool. You set up your app once with a reference storage and use the tool to save the state of the isolated storage. When you need to restore the state of isolated storage, you can use the tool to copy the saved one back to the phone or emulator. An example of doing this can be found in this blog post.
Do you have a lot of files in the subdirectories as well? I see three solutions:
Set the files' build action to Resource so they are embedded in the DLL. You can retrieve the folder name from the file name of the resources (MyAssembly.MyFolder.Filename.extension). But it will slow down the loading of your assembly and therefore the startup time of your application.
Set the files' build action to Content so they are included in the XAP file, but I'm not sure you can iterate the content without knowing the paths.
You can put the content in a zipped file on a remote server, fetch it on first startup, and use http://slsharpziplib.codeplex.com/ to unzip the content into Isolated Storage.
Mighter,
If I understand you right, you need the contents of your folder, along with the subdirectories, in Isolated Storage, right? Putting them in your solution only gets them into the XAP.
You could use the Isolated Storage Explorer that comes bundled with the Windows Phone SDK 7.1 to manipulate file storage in Isolated Storage. This would be the easiest way to get your folder's content into Iso.
You can start learning about the Isolated Storage Explorer [ISETool.exe] here.
Hope this helps!
I'm wondering how to get globally unique IDs for files and folders in Windows (XP, Vista and 7), and also be able to get the full path of the file or folder just by having the ID, something like getFileByGUID. I'm trying to do this in C++, C# and PHP.
The globally unique IDs should stay the same even if the file is moved, so using the full path of the file or folder wouldn't work.
Any help would be much appreciated, thanks!
You may consider using the Distributed Link Tracking Service.
Subject to the caveats mentioned in the page for BY_HANDLE_FILE_INFORMATION, GetFileInformationByHandle might be helpful, depending on what the goal is.
This won't let one retrieve the file's name, though. Due to NTFS hard links, there may be more than one path to the same file contents anyway...
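A hedged P/Invoke sketch: on an NTFS volume, the volume serial number plus the 64-bit file index from GetFileInformationByHandle identifies a file even after renames or moves within that volume (the path below is a placeholder):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class FileIdExample
    {
        [StructLayout(LayoutKind.Sequential)]
        struct BY_HANDLE_FILE_INFORMATION
        {
            public uint FileAttributes;
            public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
            public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
            public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
            public uint VolumeSerialNumber;
            public uint FileSizeHigh;
            public uint FileSizeLow;
            public uint NumberOfLinks;
            public uint FileIndexHigh;
            public uint FileIndexLow;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool GetFileInformationByHandle(
            SafeFileHandle handle, out BY_HANDLE_FILE_INFORMATION info);

        static void Main()
        {
            using (var fs = File.OpenRead(@"C:\temp\example.txt")) // placeholder
            {
                BY_HANDLE_FILE_INFORMATION info;
                if (!GetFileInformationByHandle(fs.SafeFileHandle, out info))
                    throw new IOException("GetFileInformationByHandle failed");

                // the (volume serial, file index) pair is the stable identity
                ulong fileIndex = ((ulong)info.FileIndexHigh << 32) | info.FileIndexLow;
                Console.WriteLine("Volume {0:X8}, FileIndex {1:X16}",
                                  info.VolumeSerialNumber, fileIndex);
            }
        }
    }

Note this identity is unique per volume, not globally; for a truly global ID you would combine it with a volume or machine identifier, which is part of what the Distributed Link Tracking Service does.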
You could hash together information about the file, such as its metadata and/or contents. It would be difficult to do this for an entire file system without collisions, but I assume you're not trying to index the whole file system. This wouldn't work if you need files to retain their IDs when they're modified, though.