I am wondering which way is the fastest to deliver images via ASP.net:
// get the file path
string filepath = GetFilePath();
Response.TransmitFile(filepath);
or:
string filepath = GetFilePath();
context.Response.WriteFile(filepath);
or
Bitmap bmp = GetBitmap();
bmp.Save(Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg);
or any other method you can think of
TransmitFile scales better since it does not load the file into application memory.
You'll need to test with large image files to see a visible difference, but TransmitFile will outperform WriteFile.
In either case, you should use an ashx handler rather than an aspx page to serve the image. aspx has extra overhead which is not needed.
One more thing-- set the ContentType when sending the file or the browser may render it as binary gibberish.
In the case of BMP:
context.Response.ContentType = "image/bmp";
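To tie these pieces together, here is a minimal sketch of an .ashx handler that sets the content type and uses TransmitFile. The class name, the path, and the extension map are placeholders, not from the question:

```csharp
// Sketch of an .ashx handler serving an image via TransmitFile.
// The path below is a placeholder for your own lookup logic.
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // placeholder path; in practice you would resolve this from the query string
        string filepath = context.Server.MapPath("~/images/sample.bmp");
        context.Response.ContentType =
            GetContentType(System.IO.Path.GetExtension(filepath));
        context.Response.TransmitFile(filepath);
    }

    // Map a file extension to a MIME type; extend as needed.
    public static string GetContentType(string extension)
    {
        switch (extension.ToLowerInvariant())
        {
            case ".bmp": return "image/bmp";
            case ".png": return "image/png";
            case ".jpg":
            case ".jpeg": return "image/jpeg";
            case ".gif": return "image/gif";
            default: return "application/octet-stream";
        }
    }

    public bool IsReusable { get { return true; } }
}
```

The default of application/octet-stream is a conservative fallback so unknown types download rather than render as gibberish.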
This doesn't really answer your question, but ASP.NET is not a file server. If you want to serve files, use IIS and have ASP.NET link to those files, or, if you must use ASP.NET, have it redirect to the appropriate place.
I am not saying that it can't be done but if you are worried about performance, you may consider going down another route.
Of the methods you list, I would expect the bitmap one to be the slowest, as it creates a more complex object.
Microsoft seems to have a decent solution if you must do it through ASP.NET.
It's easy enough to test, I recommend you set up three different URLs that will test the different mechanisms and then have a client (HttpWebRequest/HttpWebResponse or WebClient instance) download the content from all of them. Use a Stopwatch instance to time the download.
I imagine it's not going to matter: most of the time, network latency will trump I/O latency (unless you are thrashing the hard drive constantly).
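A rough sketch of that test harness, assuming three hypothetical test URLs (the addresses are placeholders for your own pages):

```csharp
// Rough benchmark sketch: time each delivery mechanism with a Stopwatch.
using System;
using System.Diagnostics;
using System.Net;

public static class DownloadTimer
{
    // Times a single download and returns elapsed milliseconds.
    public static long TimeDownload(Func<byte[]> download)
    {
        Stopwatch sw = Stopwatch.StartNew();
        download();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    public static void Main()
    {
        string[] urls =
        {
            "http://localhost/test/transmitfile.ashx", // placeholder
            "http://localhost/test/writefile.ashx",    // placeholder
            "http://localhost/test/bitmapsave.ashx"    // placeholder
        };
        using (WebClient client = new WebClient())
        {
            foreach (string url in urls)
            {
                long ms = TimeDownload(() => client.DownloadData(url));
                Console.WriteLine("{0}: {1} ms", url, ms);
            }
        }
    }
}
```

Run each URL several times and discard the first hit, so JIT compilation and file-system caching don't skew the comparison.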
Well, one thing is for sure: this will NOT be as fast as letting IIS serve the file. If you route the request through ASP.NET instead of letting IIS serve it, you are introducing a bunch of overhead you don't need.
The only reason I can imagine routing through asp.net is for security purposes; is that the case?
In my experience, use TransmitFile(), as long as that's the only thing you intend to send, and it sounds like it is.
Note this is incompatible with AJAX-enabled ASPX files.
I imagine you want to do this either out of security concerns, or to record some type of metrics (e.g. recording each hit to the database to find out what image is most popular, or who is viewing the image, etc.), or for URL rewriting purposes. If there is no particular reason to use ASP.NET to serve the image, then you should just let IIS take care of it as others have noted.
Also - this doesn't answer your question of which method is most efficient when reading an image file from disk, but I thought I should point this out:
If you already have a Stream or Bitmap containing the image, use that to write directly to Response.OutputStream. You definitely want to avoid writing it to disk and then reading from disk if you already have the Stream.
I have a requirement to control a client's ability to download images/files based on who they are. We call an action with a parameter that lets me substitute data from session to complete a path, without exposing the real path to the client. We then return a FileActionResult from the controller.
It was working: if we found the image, we could create a stream and serve the file, and if we could not, we returned a default image. But we found we could easily hit an issue where the stream was still open when a new request came in, which resulted in an error. This could then escalate until even the default image could not be downloaded.
I have looked around a lot and we have tested several methods, and these conflicts can still occur. I have started to think that maybe this is not an IO issue, but more of an, "I'm trying to do something wrong" issue.
Is there a way to intercept a call for a static resource, and then adapt the path if it is looking in a specific location, without imposing the rule on all requests?
The closest I have found is creating a View Expander, where I can reinterpret the path of a called resource, but it is not the same: one is compiled, the other is not.
I don't have any code to show because the approach is uncertain and unknown. Searches have proven difficult because the terms collide with well-known solutions to topics that do not apply.
I am hoping that someone who is more knowledgeable can point me to a method that will treat files in a secure folder as if they are static resources once I have determined they are authorized to access the static resource.
I am using Identity, but I do not extend that identity to system access, nor will I. The only user allowed can be the IISUsr, per my client.
Any help would be greatly appreciated!
I have a C# application and I need to store pictures in it, but I cannot use resources (resx).
Is there an alternative method?
update:
ANSWER: My images are static, so I will use an Embedded Resource.
How to create an embedded resource
As mentioned above, it really depends on whether or not they are static or dynamic. If static, you could use an Embedded Resource (instead of encoding them as constant byte arrays, as others have mentioned). If dynamic, you could store them in isolated storage.
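For the static/embedded-resource route, a minimal sketch of reading the resource back at runtime. The resource name in the usage example is made up; real names follow the pattern &lt;default namespace&gt;.&lt;folder&gt;.&lt;filename&gt;:

```csharp
// Sketch: load an image that was compiled in as an Embedded Resource.
using System.IO;
using System.Reflection;

public static class EmbeddedImages
{
    // Returns the resource stream, or null if no such resource exists.
    public static Stream Open(string resourceName)
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        return asm.GetManifestResourceStream(resourceName);
    }
}
```

Usage would look like `using (Stream s = EmbeddedImages.Open("MyApp.Images.logo.png")) { ... }` (the name here is a hypothetical example); if the name is wrong, GetManifestResourceStream returns null rather than throwing, so check for that.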
Is this a dynamic set of pictures or static? If static, you can compile the resource file as a separate assembly and then consume it. If dynamic, you need some type of data store to store the files. If file system does not work, a database is an option.
One idea comes to mind, though I would avoid it if at all possible: put the images into your code as arrays of bytes.
You could create byte[] fields for the images and initialize them in code. You could dynamically generate the code using either CodeDOM or Reflection.Emit.
If images are not too large you can encode them into huge constant arrays in code.
This is not a very elegant solution, but what else is there if there is no file, no resource, and no database?
Given the options you gave us, that doesn't leave much to work with. Depending on the image size, you can encode your images to Base64 and store them in a string. Then you can use System.Convert.FromBase64String to convert them from Base64 back to a byte array.
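A minimal sketch of that Base64 round trip, using the standard Convert methods:

```csharp
// Round-trip sketch: image bytes -> Base64 string -> bytes again.
using System;

public static class ImageCodec
{
    // Encode raw image bytes as a Base64 string (e.g. to store in code or config).
    public static string ToBase64(byte[] imageBytes)
    {
        return Convert.ToBase64String(imageBytes);
    }

    // Decode the stored string back into the original bytes.
    public static byte[] FromBase64(string encoded)
    {
        return Convert.FromBase64String(encoded);
    }
}
```

Keep in mind Base64 inflates the data by about a third, which is part of the memory cost mentioned below.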
Another option would be to store your images on a web server and create a web service on the server that you can consume to retrieve the images.
These are not good methods, though. The first idea consumes a lot more memory than it needs to, and the second method is just kind of ridiculous if you think about it. It is much, much better to use a database. You can use a database such as MySQL that works with both Windows and Mono, or use an SQLite database so that the database is portable. There really isn't a good reason not to store this in a database.
I would go another way: you can save to disk using any Mono ZIP library that provides secure encryption.
Zip: because you will potentially save space.
Encryption: if you need to store some private data (family images, documents...).
You can group them based on your own app logic, or save them individually, with some metafile attached if you also need to provide information about an image.
Microsoft uses this kind of "technology" for DOCX files, for example. Try changing the extension of a DOCX file to ZIP and unzipping it into a folder; you will see the content.
Regards.
I have a site which is akin to SVN, but without the version control. Users can upload and download to Projects, where each Project has a directory (with subdirs and files) on the server. What I'd like to do is attach further information to files, like who uploaded them, how many times they've been downloaded, and so on. Is there a way to do this with FileInfo, or should I store this in a table that associates the metadata with an absolute path or something? That way sounds dodgy and error-prone :\
It is possible to append data to arbitrary files with NTFS (the default Windows filesystem, which I'm assuming you're using). You'd use alternate data streams. Microsoft uses this for extended metadata like author and summary information in Office documents.
Really, though, the database approach is reasonable, widely used, and much less error-prone, in my opinion. It's not really a good idea to be modifying the original file unless you're actually changing its content.
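A sketch of that database approach, keyed by the file's project-relative path so nothing in the file itself is touched. The field names are illustrative, and a Dictionary stands in here for the real database table:

```csharp
// Per-file metadata kept outside the file, keyed by relative path.
using System;
using System.Collections.Generic;

public class FileMetadata
{
    public string UploadedBy;
    public int DownloadCount;
    public DateTime UploadedAt;
}

public class MetadataStore
{
    private readonly Dictionary<string, FileMetadata> table =
        new Dictionary<string, FileMetadata>(StringComparer.OrdinalIgnoreCase);

    public void RecordUpload(string relativePath, string user)
    {
        table[relativePath] = new FileMetadata
        {
            UploadedBy = user,
            DownloadCount = 0,
            UploadedAt = DateTime.UtcNow
        };
    }

    // Increments and returns the download count for a known file.
    public int RecordDownload(string relativePath)
    {
        FileMetadata meta;
        if (!table.TryGetValue(relativePath, out meta))
            throw new KeyNotFoundException(relativePath);
        return ++meta.DownloadCount;
    }

    public FileMetadata Get(string relativePath)
    {
        FileMetadata meta;
        table.TryGetValue(relativePath, out meta);
        return meta; // null when unknown
    }
}
```

Keying by project-relative path (rather than absolute path) means moving the project root on disk doesn't orphan the metadata, which addresses the "dodgy and error-prone" worry somewhat.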
As Michael Petrotta points out, alternate data streams are a nifty idea. Here's a C# tutorial with code. Really though, a database is the way to go. SQL Compact and SQLite are fairly low-impact and straightforward to use.
I manage a software download site, and we've been trying to find a good way to present the downloads to students. Due to licensing restrictions, there are a large number of downloads that should only be accessible to certain students or staff, and many of the files are DVD ISOs or other large files. We started out by pushing all the downloads through code, but we found that files over 500 megs would just time out and die halfway through. (I think part of this problem is related to using AFS for a storage system instead of CIFS, but I won't go into that...)
What I was looking at doing was giving users a temporary url to the file that is only good for x number of minutes. I've seen this used on other sites before, but I wasn't sure what was involved with setting it up.
So first off, is this a workable solution for my scenario? Or will we still run into problems? And what is the best method for going about doing this? Thanks!
Something you could do is randomly generate a string in a database that corresponds to a file and do some sort of stealthed redirect to the actual file. This parameter would be passed as part of the query string and would allow you to invalidate urls however you like by performing any kind of checking before sending the file.
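A minimal sketch of that idea: a random, URL-safe token mapped to a file path plus an expiry time, checked before serving. The in-memory dictionary stands in for a database table, and the class and method names are made up:

```csharp
// Expiring download tokens: token -> (file path, expiry), validated on redeem.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

public class TokenStore
{
    private class Entry { public string FilePath; public DateTime ExpiresUtc; }

    private readonly Dictionary<string, Entry> tokens = new Dictionary<string, Entry>();

    // Issues a token valid for the given lifetime.
    public string Issue(string filePath, TimeSpan lifetime)
    {
        byte[] raw = new byte[24];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(raw);
        // Make the token URL-safe.
        string token = Convert.ToBase64String(raw)
            .Replace('+', '-').Replace('/', '_').TrimEnd('=');
        tokens[token] = new Entry
        {
            FilePath = filePath,
            ExpiresUtc = DateTime.UtcNow + lifetime
        };
        return token;
    }

    // Returns the file path if the token is valid and unexpired, else null.
    public string Redeem(string token)
    {
        Entry entry;
        if (!tokens.TryGetValue(token, out entry)) return null;
        if (DateTime.UtcNow > entry.ExpiresUtc)
        {
            tokens.Remove(token);
            return null;
        }
        return entry.FilePath;
    }
}
```

The download URL would then carry only the token (e.g. `?t=...`), and the handler redeems it and hands the real path to whatever actually streams the file.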
Well, as you haven't mentioned the IIS version, you may take a look at
http://learn.iis.net/page.aspx/389/configuring-ftp-with-net-membership-authentication/
This article explains how to configure FTP server for ASP.NET Membership authentication. If you set this up, you can restrict the files based on roles.
Also, I'm not sure how you would implement an anonymous URL solution without pushing the downloads through code.
The URLs on my site can become very long, and it's my understanding that URLs are transmitted with the HTTP requests. So the idea came to compress the string in the URL.
From my searching on the internet, I found suggestions to use short URLs and link each one to the long URL. I'd prefer not to use this solution because I would have to do an extra database lookup to convert between long and short URLs.
That leaves in my head 3 options:
Hashing: I don't think this is an option. If you want a safe hashing algorithm, it's going to be long.
Compressing the URL string: basically having the server decompress the string when it gets the URL parameters.
Changing the URL so it's not descriptive: this is bad because it would make development harder for me (this is a one-man project).
Considering the vast number of OS/browser combinations out there, I figured I'd ask if anyone else has tried this or has some clever suggestions.
If it matters, the URL parameters can reach 100+ chars.
Example:
mysite.com/Reports/Ability.aspx?PlayerID=7737&GuildID=132&AbilityID=1140&EventID=1609&EncounterID=-1&ServerID=17&IsPlayer=True
EDIT:
Let me clarify: at the moment this is NOT breaking the site. It's more about me learning to find a good solution (I'm well aware this is micro-optimization; my site is very fast at the moment) and making my site even faster (to challenge myself and become a better coder).
There is also a cosmetic issue: I personally think that a URL longer than the address bar looks bad.
You have some conflicting requirements as you want to shorten/compress the url without making it less descriptive. By the very nature of shortening the URL, you will, to a certain extent, make it less descriptive.
As I understand it, your goal is to optimise by sending less over the request. You mention 100+ characters, instead of 1000+ which I assume means they don't get that big? In which case, I'd see this as an unnecessary micro-optimisation.
To add to previous suggestions of using POST, a simple thing would be to just shorten the keys instead of using full names if you don't want to do full url shortening e.g.:
mysite.com/Reports/Ability.aspx?pid=7737&GID=132&AID=1140&EID=1609&EnID=-1&SID=17&IsP=True
These are obviously less descriptive.
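That key-shortening can be sketched as a fixed, reversible mapping on the server. The long names come from the example URL; the short forms are arbitrary:

```csharp
// Reversible query-string key shortening via a fixed lookup table.
using System.Collections.Generic;
using System.Linq;

public static class QueryKeys
{
    private static readonly Dictionary<string, string> LongToShort =
        new Dictionary<string, string>
        {
            { "PlayerID", "pid" }, { "GuildID", "gid" }, { "AbilityID", "aid" },
            { "EventID", "eid" }, { "EncounterID", "enid" }, { "ServerID", "sid" },
            { "IsPlayer", "isp" }
        };

    private static readonly Dictionary<string, string> ShortToLong =
        LongToShort.ToDictionary(kv => kv.Value, kv => kv.Key);

    // Unknown keys pass through unchanged in both directions.
    public static string Shorten(string key)
    {
        string s;
        return LongToShort.TryGetValue(key, out s) ? s : key;
    }

    public static string Expand(string key)
    {
        string l;
        return ShortToLong.TryGetValue(key, out l) ? l : key;
    }
}
```

Passing unknown keys through unchanged means you can migrate pages one at a time without breaking existing links.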
But like I said, are you having a real problem with having long URLs?
I'm not sure I understand what your problem with long URLs is. Generally I'd try to avoid them, but if they're necessary, you won't depend on the user remembering them anyway, so why go through all the compressing trouble? Even with a URL of 1000 chars, the page request won't be slow.
I would, however, consider using POST instead of GET if possible, to prettify the URL, but that of course depends on your implementation/environment.
It is recommended a few times here to use POST instead of GET. I would strongly recommend AGAINST picking your HTTP action by what the URL looks like. There is more to this choice than how it is displayed in the browser.
A quick overview:
http://www.w3.org/2001/tag/doc/whenToUseGet.html#checklist
A few options to add to the other answers:
Using a subclassed LinkButton for your navigation. This holds the extra data (PlayerId for example) inside its viewstate as properties. This won't be much help though if you're giving URLs to people via emails.
Use the MVC routing engine to produce slightly improved URLs - no keys for the querystring. e.g. mysite.com/Reports/Ability/7737/132/1140/1609/-1/17/True
Create your own URL shortener like tinyurl.com. Store the url in the database along with each of the querystring values to lookup.
Simply setup some friendly URLs for the most popular reports, for example mysite.com/Reports/JanuaryReport. You can also do this using the MVC routing engine.
The MVC routing engine is standalone and can work without your site being an MVC site.
With my scheme, one could encode the params section of a URL as a base64 string which is ~50% shorter than a direct base64 representation. So for your case you get:
~50% shorter params section
a base 64 string which hides a lot of the detail
see http://blog.alivate.com.au/packed-url/
Most browsers can handle up to 2048 characters in a URL; if you don't feel like using a long parameter list, you can always pass parameters through POST requests.
There are practical limits with extended URLs. The exact limit varies across browsers (roughly 2k in some versions of IE) and servers (4-8k in Apache, varying by version and configuration), and isn't officially specified in any RFC that I am aware of.
I would agree with synhershko, and replace the URL with form POST parameters instead if you are concerned that your URLs are growing too long.
I've encountered similar situations in the past, although my reasons for optimisation were SEO. To me it depends on what you're doing with the page URL variables: are they appended on all/most pages? If so, there is almost always a much better way, although if you're far down the development path it's probably too late now.
I like being able to 'read' a URL. Especially when I land on an unknown site two or more layers deep in the navigation and the site is designed poorly, it's often the easiest and fastest way for an advanced user to work out where they are on the site.
If you're interested in it from an SEO point of view, it's normally best to have a hierarchy which only contains: / - _
Search engines will try to read URLs; see this video by Matt Cutts (I can't remember how far into the video he mentions it, but it's a good watch anyway...)
Any form of compression of the URL (hashing, compressing, non-descriptive) is going to:
make the URLs harder to read, remember, and type in correctly
have a performance impact, as you will have to decrypt/decompress/convert the URL before you can work with it.
Also, hashing is usually considered to be non-reversible - given a hashed value you shouldn't be able to work out what generated it, but you could use it to look up a value in a database, which gets you back to your first issue of short-long lookups.
You could easily just remove the redundant "ID" at the end of each parameter, and possibly strip out vowels or similar to "shorten" the URL without losing too much of the semantics of the request.
But to be honest, the length of your URL is one of the least things to worry about in terms of performance - look at the size of any cookies you're sending back and forth between the browser and the server, and the page size you're sending back.