I have an Elastic Beanstalk application that we have just added photo upload functionality to. Currently this saves the images in a folder within the deployment directory on the instance, saving the URL to a DynamoDB table. Everything works fine. Whenever a user wants to see an image, we simply attach the URL to the src attribute of an <img> tag in the UI and the browser downloads the image directly.
However, it occurred to me that if the underlying EC2 instance is terminated and restarted, we will most likely lose all the photos. Am I correct in this assumption? If so, what's the best practice here? Should uploads always be saved in an S3 bucket? Any guidance hugely appreciated.
Should uploads always be saved in an S3 bucket?
TL;DR
In an elastic cloud environment, yes: always move your static content to reliable external storage (in this case S3). It will make your app scale better. See the S3 question here.
Resources:
- IAM Credentials: giving your app keys to access S3 right out of the box in Beanstalk.
- S3 Getting Started
- Media Reference Architecture: describes at least part of what you're looking for. Look at the S3/datastore/web server interaction there. More here.
Longer Description
In a traditional architecture you might have a drive attached to a web server or two, and you store the files there. You always expect those to be up. If you run out of disk space, you have a problem. If your server craps out, you've also got a problem. Even if you have a backup, you run the risk of both going down, leaving you to restore all your data AND bring up a server manually.
In a cloud architecture you're basically admitting that the "machine" is fallible and no longer relying on it to store any application state. It should be used for things you need on disk to launch the app and/or for temporary storage, but if you need something long term, that's why services like S3 exist! By eliminating state from your app servers you can scale them automatically (however you see fit) without worrying about your users' content. If you had other services that needed that content, they could get it from there as well, with the proper permissions.
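For the upload path itself, here's a minimal sketch, assuming a .NET app and the AWS SDK for .NET (v3). The bucket name and key scheme are made up; on Beanstalk the parameterless client picks up credentials from the instance role mentioned above.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public class PhotoStore
{
    // On Beanstalk the parameterless client resolves credentials from the instance profile.
    private readonly IAmazonS3 _s3 = new AmazonS3Client();

    public async Task<string> SavePhotoAsync(Stream photo, string fileName)
    {
        var key = $"photos/{Guid.NewGuid()}/{fileName}";

        await _s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = "my-app-uploads", // hypothetical bucket
            Key = key,
            InputStream = photo
        });

        // Persist this URL (or just the key) in DynamoDB, exactly as before.
        return $"https://my-app-uploads.s3.amazonaws.com/{key}";
    }
}

The <img> src then points at the S3 URL (or a pre-signed one if the bucket is private), and instance termination no longer matters.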
Related
I have an old ASP.NET 2.0 site that I really have no interest in rebuilding or updating at this point. I'd like to move it to Windows Azure but I'm not all that familiar with Azure so I'm wondering if it's easily portable.
The biggest potential roadblock is the fact that users can upload multiple photos. Upon upload, I create several copies of the image in pre-defined dimensions and store them on the local file system using Server.MapPath("{location}") to indicate where it should be stored.
Can I have a site hosted on Azure (using their Free or Shared tier) and continue to use this method of uploading and storing files or do I have to switch to blob storage? There are only about 400MB of images.
Basically I'm looking for a low/no-cost way to easily host this site on Azure that doesn't involve changes to the code (or at the very least, extremely minimal changes such as changing Server.MapPath to some relative location) so that I can move my other, more current ones to Azure as well. My situation is such that if I can't move this site, it doesn't make sense to move the others because I'll have to keep paying for the server for this one anyways (they're all hosted on the same server for now).
Azure drives are not guaranteed to persist between reboots of the virtual machine, so you will probably need to use blob storage. But you can mount a blob as an NTFS volume and store the files there. This would make the transition quite simple in the code.
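If you do end up touching the code, going straight to blob storage is also a small change. A rough sketch using the Windows Azure storage client of that era; the connection string and container name are placeholders:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ImageStore
{
    public static void Save(string name, Stream image)
    {
        var account = CloudStorageAccount.Parse("<your-connection-string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("site-images"); // hypothetical container
        container.CreateIfNotExists();

        // Replaces File.Create(Server.MapPath(...)) -- same stream, different destination.
        container.GetBlockBlobReference(name).UploadFromStream(image);
    }
}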
I'm writing a windows service that will have a 'LocalService' account type. I have a file that stores what it has to do.
I also have a Windows Forms GUI where that file is also accessed to add/remove instances of the action for the service to perform. (Don't know if it's relevant, but the service downloads tables from a web service and exports them to any database the user has access to; these downloads are scheduled to happen regularly.)
The service will only be installed on a user account.
I was planning on storing the file in the user's AppData folder, however while debugging the service I got the error "Access to path [path] is denied".
Where would you recommend storing this file so it is accessible from both programs?
Thanks
EDIT: Looking a bit more, I've realised that
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
returns a different path for the service and the Windows Forms app.
And the app can't access the service's AppData, just as the service can't seem to access the user's AppData. So the same question stands!
ANOTHER EDIT:
So it turns out
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
is accessible from a local service and a user program - doh
...but some places seem to be read only...
Three options as I see it:
Run the service under the user's login id
Upside - both processes will have identical access to the various parts of the file system, so should remove your immediate problem
Downside - if the user changes their password the two will get out of sync.
Write to some "neutral" part of the file system (or perhaps the registry) where shared access won't be a problem; see the sketch after these options. The trouble with AppData is that, as you've found, Windows sets up all kinds of protection around it in order to ring-fence different users from each other.
Upside - no problems writing
Downside - you're effectively inventing your own standard. 15 years ago this would have been a no-brainer (the registry), but these days I get the impression that the registry is frowned upon (even though MS still relies on it!). If you do go down the registry route, make sure you're aiming at HKLM, not HKCU, else you'll have the same problem!
During your setup, do some tricks to set up access to the relevant folders. But this is basically tearing down the protection that Windows sets up. Doesn't sound too sensible to me.
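Tying the second option to the CommonApplicationData discovery in the question, a sketch of the setup-time trick follows. Run it once with elevation (e.g. from your installer), since changing ACLs needs admin rights; the folder name is hypothetical:

using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;

public static class SharedDataFolder
{
    public static string Ensure()
    {
        string path = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            "MyDownloadService"); // hypothetical app folder

        DirectoryInfo dir = Directory.CreateDirectory(path);

        // ProgramData subfolders are writable only by their creator by default;
        // grant Modify to all local users so the GUI and the service can both write.
        DirectorySecurity security = dir.GetAccessControl();
        security.AddAccessRule(new FileSystemAccessRule(
            new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null),
            FileSystemRights.Modify,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));
        dir.SetAccessControl(security);

        return path;
    }
}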
I'm creating a C# Metro/Modern UI app, and I need a way to handle some user data (mostly just small strings, but a fair amount of them); specifically, I'd like the data to 'roam' with a user's Microsoft Account. I know that you can handle this with roamingSettings, but it seems like that's meant more for storing things like user IDs and other one-time settings, whereas I would be using it to store all of my app's data, and there seems to be a limit to the amount of space I get with that. I was thinking about using SkyDrive to host a "MyApp Data" folder, but I can't seem to figure out how to upload a simple text file to it :(
It seems like the best way to handle it would be to set up an account on Azure or EC2 and then make a simple PHP API so I could access the SQL database from my app, but I'd rather not have to pay for hosting.
I've seen other questions about Metro app storage on StackExchange and Microsoft's own forums, but most of those are in reference to local storage and using SQL servers to handle the storage.
So should I just use roamingSettings and keep an eye on the quota, should I try to use cloud hosting, or is there a better solution I just haven't thought of yet?
Thanks!
A few things about roaming settings:
- They are intended for exactly that: settings, not a data replication scheme, hence the quota.
- They are not immediate. You can create a setting named "HighPriority" that will replicate in less than a minute, but other settings can take several minutes to replicate. If you need data to be available immediately, roaming settings are not an option.
- If you exceed the quota, all your data will stop replicating, which is a bad thing. :)
- They will not replicate between different versions of your app, even if the settings are the same.
- If you do not use the app for a period of time (the default is 30 days), the roaming data will be deleted from the cloud.
- I am pretty sure roaming data can also be turned off via group policy in enterprise settings.
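For reference, writing a roaming setting is a one-liner against the WinRT API; the value stored here is hypothetical:

using Windows.Storage;

public static class RoamingDemo
{
    public static void SaveProgress()
    {
        // "HighPriority" is the one key that roams fastest; everything else is best-effort.
        var roaming = ApplicationData.Current.RoamingSettings;
        roaming.Values["HighPriority"] = "last-read-position:42"; // hypothetical value
    }
}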
You can leverage SkyDrive. Make sure you download the Live SDK. An overview of using SkyDrive is here: http://msdn.microsoft.com/en-us/library/live/hh826521.aspx It is, fundamentally, just a collection of REST APIs. See the SkyDrive photo sample for an app that uploads files to SkyDrive: http://code.msdn.microsoft.com/windowsapps/Live-SDK-Windows-Developer-8ad35141
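As a rough sketch of what that upload looks like with the Live SDK (the scopes are the documented wl.* ones; the file name and target folder are assumptions):

using System.Threading.Tasks;
using Microsoft.Live;
using Windows.Storage;

public static class SkyDriveBackup
{
    public static async Task UploadAppDataAsync()
    {
        var auth = new LiveAuthClient();
        LiveLoginResult login = await auth.LoginAsync(
            new[] { "wl.signin", "wl.skydrive_update" });
        if (login.Status != LiveConnectSessionStatus.Connected) return;

        var client = new LiveConnectClient(login.Session);
        StorageFile file =
            await ApplicationData.Current.LocalFolder.GetFileAsync("appdata.txt"); // hypothetical file

        // "me/skydrive" targets the SkyDrive root; pass a folder id to target a subfolder.
        await client.BackgroundUploadAsync("me/skydrive", file.Name, file, OverwriteOption.Overwrite);
    }
}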
I would go for a cloud-based solution. An MS employee told me that roaming data is "best effort": there is no guarantee it actually works; sometimes it works, sometimes it just doesn't.
Personally, I'd try the SkyDrive option.
I am trying to create a document manager for my winforms application. It is not web-based.
I would like to be able to allow users to "attach" documents to various entities (personnel, companies, work orders, tasks, batch parts etc) in my application.
After lots of research I have made the decision to use the file system to store the files instead of a blob in SQL. I will set up a folder to store all the files, but I will store the document information (file path, uploaded by, changed by, revision etc.) in a parent-child relationship with the entity in a SQL database.
I only want users to be able to work with the documents through the application, to prevent the files and database records getting out of sync. I somehow need to protect the document folder from normal users but at the same time allow the application to work with it. My original thought was to set up the application with the only username and password that has access to the folder, and use impersonation to log in and work with the files. From feedback in a recent thread I started, I now believe this was not a good idea, and working with impersonation has been a headache.
I also thought about using a web service, but some of our clients just run the application on their laptops with no Windows server. Most are using Windows Server or Citrix/Windows Server.
What would be the best way to set this up so that only the application handles the documents?
I know you said you read about blobs, but are you aware of the FILESTREAM option in SQL Server 2008 onwards? Basically, rather than saving blobs into your database (which isn't always a good idea), you can instead save them to the NTFS file system using transactional NTFS. This sounds to me like exactly what you are trying to achieve.
All the file access security would be handled through SQL Server (as it would be the only thing needing access to the folder), and you don't need to write your own logic for adding and removing files from the file system. To remove a file from the file system, you just delete the related record in the SQL Server table and it handles the rest.
See:
http://technet.microsoft.com/en-us/library/bb933993.aspx
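As a rough illustration of the client side (the Documents table, FileData column, and Id key here are made up), reading a FILESTREAM column from C# goes through SqlFileStream inside a transaction:

using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

public static class DocumentReader
{
    // Assumes conn is already open.
    public static byte[] Read(SqlConnection conn, int docId)
    {
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            string path;
            byte[] txContext;

            // PathName() and the transaction context are exactly what SqlFileStream needs.
            var cmd = new SqlCommand(
                @"SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
                  FROM Documents WHERE Id = @id", conn, tx);
            cmd.Parameters.AddWithValue("@id", docId);

            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SingleRow))
            {
                reader.Read();
                path = reader.GetString(0);
                txContext = reader.GetSqlBytes(1).Value;
            }

            byte[] data;
            // Direct, streamed NTFS access to the blob, scoped to the transaction.
            using (var fs = new SqlFileStream(path, txContext, FileAccess.Read))
            using (var ms = new MemoryStream())
            {
                fs.CopyTo(ms);
                data = ms.ToArray();
            }

            tx.Commit();
            return data;
        }
    }
}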
Option 1 (Easy): Security through Obscurity
Give everyone read (and write, as appropriate) access to your document directories. Save your document 'path' as the full UNC path (\\servername\dir1\dir2\dir3\file.ext) so that your users can access the files, but they're not immediately visible if someone goes wandering through their mapped drives.
Option 2 (Harder): Serve the File from SQL Server
You can use either a CLR function or SQLDMO to read the file from disk, present it as a varbinary field, and reconstruct it at the client side. The upside is that your users will see a copy, not the real thing; that makes viewing safer, editing and saving harder.
Enjoy! ;-)
I'd go with these options, in no particular order.
Create a folder on the server that's not accessible to users. Have a web service running on the server (either using IIS, or a standalone WCF app) that has methods to upload and download files; a minimal contract is sketched after these options. Your web service should manage the directory where the files are stored, and the SQL database should hold all the necessary metadata to find the documents. In this manner, only your app can get at these files, so users can only see the docs via the app.
I can see that you chose to store the documents on the file system. I wrote a similar system (e.g. attachments to customers/orders/salespeople/etc...) except that I store them in SQL Server, and it actually works pretty well. I initially worried that so much data was going to slow down the database, but that turned out not to be the case. It's working great. The only advice I can give if you take this route is to create a separate database for all your attachments. Why? Because if you want a copy of the RDBMS for your local testing, you do not want to be copying a 300GB database that's made up of 1GB of actual data and 299GB of attachments.
You mentioned that some of your users will be carrying laptops. In that case, they might not be connected to the LAN. If that is the case, I'd consider storing the files (and maybe metadata itself) in the cloud (EC2, Azure, Rackspace, etc...).
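For the first option, a hypothetical WCF contract would be as small as this (names are placeholders; for large files you'd want streaming instead of byte arrays):

using System.ServiceModel;

// The service process is the only identity with rights to the document folder;
// clients never touch the file system directly.
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    string Upload(string fileName, byte[] contents); // returns the stored document's id

    [OperationContract]
    byte[] Download(string documentId);
}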
Here's the scenario.
I have to access a web service on the local LAN to obtain a list of files, which I then must retrieve from the machine running the web service. The question has arisen whether to use a mapped drive or just retrieve the files via HTTP from the web service (or web server, if the service is self-hosted).
All machines are running Windows XP or later.
I am leaning towards the web server approach because it has the fewest unknowns as far as having the necessary permissions to access the files.
So basically the question is which is the better approach - web server or network share?
I would go the web service route because it reduces the number of variables in the equation. Based on your current setup, you already need a web service in order to get the list of files to download. At that point you know access to the web service isn't a problem, so serving the files from there too removes a lot of unknowns.
If you put the files onto another machine, you run the risk of hitting at least the following problems that do not exist with the web service (since you already know you have access):
- Permission issues
- Firewall issues
I would think it depends on various factors you haven't mentioned: will lots of clients be trying to access these files at a given time? Will the app be distributed across multiple servers in the future? Might you need to implement a caching system in the future?
If the answer is no to all of these, then you should probably pick what's easiest.
I would lean towards plain old HTTP. Doing it via the web service would probably involve marshalling the file as a byte array, for example, which makes it larger. A file share means needing to worry about permissions.
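For what it's worth, the client side of the plain-HTTP route stays trivial (the URL and local path below are placeholders):

using System.Net;

public static class FileFetcher
{
    public static void Fetch()
    {
        using (var client = new WebClient())
        {
            // Pulls one file from the web server over HTTP; no share permissions involved.
            client.DownloadFile("http://fileserver/files/report.pdf", @"C:\temp\report.pdf");
        }
    }
}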