Upload files without full page postback - C#

The Plan
I need to upload a file without causing a full page refresh. The uploaded files should be stored in a temporary place (session or cookies). I will only save the files on the server if the user successfully fills in all the form fields.
Note: This is one of the slides of a jQuery slider, so a full refresh would ruin the user experience.
The Problem
If I place a FileUpload control inside an AJAX UpdatePanel, I won't be able to access the file on the server side.
Note: From what I have found so far, this happens for safety reasons:
Can't be done without co-operating binaries being installed on the client. There is no safe mechanism for an AJAX framework to read the contents of a file and therefore be able to send it to the server. The browser supports that only as a multipart form post from a file input box.
The Questions
When storing the files in a temporary location, should I use session or cookies? (What if the user has cookies disabled?)
If preventing a postback really is against user-safety standards, will it harm my website's reputation (regarding SEO and such)?
Which road to take?
C# ASP.NET with AJAX? (Is there a workaround?)
C# ASP.NET + AJAX Control Toolkit? Does it help? (using the AsyncFileUpload control)
C# ASP.NET + jQuery control? (Won't I have problems fetching the data from the JavaScript?)
C# ASP.NET + iframe? (not the most elegant solution)

The total amount of cookie storage you get is limited to a few kilobytes, so cookies are not a viable option for storing a file; sessions are the only remaining choice of the two. Consider also saving the file to the file system and removing it if it ends up unused, as storing files in memory (session) will limit how many users you can handle at once.
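A minimal sketch of that approach, assuming a standard FileUpload control; the folder name and session key below are illustrative, not prescribed:

// Requires: using System.IO; using System.Collections.Generic;
protected void UploadButton_Click(object sender, EventArgs e)
{
    if (!FileUpload1.HasFile) return;

    // Save to disk instead of session memory; one temp folder per session.
    string tempDir = Server.MapPath("~/App_Data/PendingUploads/" + Session.SessionID);
    Directory.CreateDirectory(tempDir);

    string path = Path.Combine(tempDir, Path.GetFileName(FileUpload1.FileName));
    FileUpload1.SaveAs(path);

    // Keep only the paths in session; move or delete the files once the
    // form is completed (or abandoned).
    var pending = Session["PendingFiles"] as List<string> ?? new List<string>();
    pending.Add(path);
    Session["PendingFiles"] = pending;
}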
No, for functions like uploading files you don't have to worry about that. Search engines don't try to use such functions when scanning the page.
You can use an AJAX upload in browsers that support direct file access, but there is no way around doing a post if you need to support all browsers. However, the post doesn't have to end up loading a new page: you can put a form in an iframe, or point the target of a form to an iframe.
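On the server, the hidden-iframe post can land in a plain generic handler. A sketch, assuming an Upload.ashx endpoint (the handler name and temp folder are made up for illustration):

using System.IO;
using System.Web;
using System.Web.SessionState;

public class Upload : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // The iframe-targeted form posts here as a normal multipart request,
        // so Request.Files works even though the main page never reloads.
        if (context.Request.Files.Count > 0)
        {
            HttpPostedFile file = context.Request.Files[0];
            string tempDir = context.Server.MapPath("~/App_Data/PendingUploads/" + context.Session.SessionID);
            Directory.CreateDirectory(tempDir);
            file.SaveAs(Path.Combine(tempDir, Path.GetFileName(file.FileName)));
        }
        context.Response.Write("OK"); // this response loads inside the hidden iframe
    }

    public bool IsReusable { get { return false; } }
}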

Related

C# WebBrowser not loading full web page - page loaded event handler

I have a webpage with stock market information that I want to monitor, read and store. The gathered information is to be stored somewhere, say a .csv file or similar, for later analysis.
The first problem I have is detecting when this page has fully loaded. The time taken to load can vary enormously. The event handlers I have tried all fire multiple times (I know this has been covered and I have tried the various techniques, but to no avail). Perhaps it is something specific to do with this web-page? Anyway, I need to know when this page has fully loaded and is sitting pretty with all graphics displayed properly.
The second problem is that I cannot get the true source of the page into the WebBrowser. As a consequence, all access to the DOM fails, as the HTML representation inside the WebBrowser control appears not to match what is actually happening on the webpage. I have dumped the text (webBrowser2.DocumentText) and it looks nothing like what you see when I check the source in a browser, Chrome for example. (I also use the Firebug extension in Firefox to double-check things.) How can I get the correct page into the WebBrowser so I can start to manipulate things?
Essentially, in terms of the data, I need the GMT time, strike rate and expiration time. My process will monitor with a timer control. Being able to read all the other element data on screen is a nice-to-have.
Can this be done?
I am an experienced programmer new to web programming and C#.
I think you want this AJAX request.
As a review, the web works by first loading the web page, then scanning the web page for additional files it needs to load (js, css, images, etc). When those finish, the onload event is triggered and some AJAX functions may run.
In this case, only some of the page is loaded up front, and AJAX functions update the data in the graph later. As you've seen, "Show Source" only shows the original file that was downloaded; it is not a dump of the page's current state.
The easiest way to get the data is to find the URL of the AJAX request that loads the graph data. It is already conveniently formatted in JSON for you to scrape.
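A rough sketch of that route; the endpoint URL is a placeholder you'd replace with the real request URL found via Firebug or the browser's network tab:

using System.IO;
using System.Net;

class QuoteGrabber
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Hypothetical endpoint - substitute the real AJAX URL here.
            string json = client.DownloadString("http://example.com/api/strikerates");

            // Parse the JSON (e.g. with JavaScriptSerializer or Json.NET),
            // pull out the GMT time, strike rate and expiration time, and
            // append them to your .csv file.
            File.WriteAllText("snapshot.json", json);
        }
    }
}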

securely show images on website

I currently store a number of document preview images (jpg/tif) outside of my web root. There are 100s of them, so having this work efficiently is important.
The reason they are stored outside of the web root is that they contain data that only specific users/user groups may view (but each user can have 100s of documents they can view).
My current implementation is: when the user selects ‘view image’, an AJAX call is triggered that moves the image in question to a specific folder within the web root. The location is passed back and used to display the image to the user.
When the next image is clicked, the call deletes any existing images and copies over the requested image. At session logout / timeout the users image folder is emptied.
This has a few problems, but mainly:
Files are constantly being copied and deleted
There is the risk of images being left in the folder (issues with log off scripts)
The whole time an image is in the folder it could be viewed by another user (unlikely, but possible)
Is there a better way of doing this? I looked at trying to combine the BinaryReader with the ajax call (as I hoped this would cut out the need to copy the files), but can’t see how to get the data back to be used by the JS in the calling page.
Alternatively is there a way of making selected Folders only accessible to given users based on some session criteria? (I can’t imagine there is but I thought it’s worth asking.)
So if anyone has any ideas on how this can be improved that would be great.
This is a C# ASP.NET app using jQuery.
Edit:
The image is displayed using AJAX; this allows for preloading and also means the rest of the page does not need to be reloaded when they select the next/previous image.
It can almost be thought of as a javascript image swapper type situation, where the images are stored outside of the web root.
Thanks.
My current implementation is, when the user selects ‘view image’ an ajax call is triggered and this moves the image in question to a specific folder within the web root.
This is a horrible idea. You realize you can just access the image data and pass it to the web as a stream with a specific MIME type, right?
Try writing a method that checks the user's credentials via cookies. If they are not OK, load and send back some standard image saying the user must log in to view the file; if they are OK, load and serve the proper file from a path outside the web root based on a URL parameter (with proper headers like Content-Type, often referred to as the MIME type, of course). Then link URLs to that method with the proper parameter(s).
You can easily find examples of code (like here) for displaying an image in binary form from a DB. You would just need to load the files from some path outside the web root instead of the DB.
Also, you don't need to load it by AJAX - just add an IMG with SRC pointing to the URL of the handler. Or redirect / open a window if it needs to be downloaded rather than shown.
The issue was how to show an image via JavaScript when it is not in the web root.
I created a generic handler (.ashx file) that, based on the session values (authentication) and submitted parameters, returns an image.
That in turn is called via AJAX.
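A sketch of what such a handler might look like; the session key, query parameter and image folder below are assumptions, not the poster's actual code:

using System.IO;
using System.Web;
using System.Web.SessionState;

public class ImageHandler : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Refuse requests that don't carry an authenticated session.
        if (context.Session["UserId"] == null)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Strip any path components so the parameter can't escape the folder.
        string name = Path.GetFileName(context.Request.QueryString["img"] ?? "");
        string path = Path.Combine(@"D:\SecureImages", name); // outside the web root

        if (!File.Exists(path))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = name.EndsWith(".tif") ? "image/tiff" : "image/jpeg";
        context.Response.WriteFile(path);
    }

    public bool IsReusable { get { return true; } }
}

The page then just points at the handler, e.g. <img src="ImageHandler.ashx?img=doc42.jpg" />, and the file itself never enters the web root.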

asp:FileUpload - Keep track of files to save later

I am working on a form on a page that uses an asp:FileUpload to allow users to upload files to a server. I'm new to ASP.NET and am using C# for my code-behind. The plan is to have the user "attach" files one at a time, adding them to an asp:ListBox. Finally, when the form is submitted, the files in the ListBox get saved to the server.
While it seems pretty easy to save files from the FileUpload by using
myFileUpload.SaveAs("path");
I am running into some difficulty figuring out how to keep track of the files independent of the FileUpload. I can get the file names really easily using
Path.GetFileName(myFileUpload.PostedFile.FileName);
but really I need to have some way of keeping track of more than just the names. My first thought was maybe to use a temporary folder of some sort, but the files are going to potentially be pretty large so I don't want to do that because saving might take a while.
How can I keep the file around so that I can save it on the server later independent of the FileUpload?
Rather than using a ListBox, I would use actual <asp:FileUpload> controls so you have access to all of the methods of that control, such as SaveAs.
You can put a bunch of these on your page and simply hide all but the first one. Then have a button that says "Add Another"; clicking it shows the next <asp:FileUpload> control that is currently hidden. jQuery would be a nice choice for that.
Then in your postback you can loop through all of your <asp:FileUpload> controls, and if HasFile - which is a property on the control - is true, perform your saving etc.
Save them into a temporary folder if needed, perhaps renaming each file with a GUID, and store the list of GUIDs in the user's Session so you can grab those when needed.
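Pulled together, the postback loop might look like this; the control IDs, temp folder and session key are illustrative:

// Requires: using System.IO; using System.Collections.Generic;
protected void Submit_Click(object sender, EventArgs e)
{
    var savedNames = new List<string>();

    foreach (FileUpload upload in new[] { FileUpload1, FileUpload2, FileUpload3 })
    {
        if (!upload.HasFile) continue;

        // Rename with a GUID so concurrent users can't collide.
        string guidName = Guid.NewGuid() + Path.GetExtension(upload.FileName);
        upload.SaveAs(Path.Combine(Server.MapPath("~/App_Data/Temp"), guidName));
        savedNames.Add(guidName);
    }

    // Grab this list later, when the whole form is finally submitted.
    Session["UploadedFiles"] = savedNames;
}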
Once you call SaveAs, you're saving the file. The base FileUpload control won't allow you to cache the file somewhere without actually uploading it to the server first. If you want the user to queue multiple files without anything being uploaded until the end, you may need to look into dynamically generating FileUpload controls (as many as the user wants). That way they can select the files one at a time, then hit an "Upload" button at the end.
It's a little clunky to do it that way, though. I'd look for some third-party multiple upload controls. I've used PLUpload in the past.
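If you do go the dynamic route mentioned above, the controls have to be recreated early on every postback so the posted files bind to them. A rough sketch, with the panel name and session key as assumptions:

// Recreate one FileUpload per requested slot on every request;
// "Add Another" increments the count and adds the new slot.
protected void Page_Init(object sender, EventArgs e)
{
    int count = (int)(Session["UploadSlots"] ?? 1);
    for (int i = 0; i < count; i++)
    {
        UploadPanel.Controls.Add(new FileUpload { ID = "Upload" + i });
    }
}

protected void AddAnother_Click(object sender, EventArgs e)
{
    int count = (int)(Session["UploadSlots"] ?? 1) + 1;
    Session["UploadSlots"] = count;
    // Add the new slot now so it shows up on this render, not the next one.
    UploadPanel.Controls.Add(new FileUpload { ID = "Upload" + (count - 1) });
}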

MVC 3 partial caching

I know this question has been asked before but I'm confused as to the best approach so please forgive me asking again...
I have an MVC3 application that will be an extranet, allowing users to log in, via Forms Authentication. The users will be accessing confidential information so, in order to prevent somebody from hitting Back after they log out (and I SignOut of FormsAuthentication), I have disabled all caching, forcing the redirection to the logon page.
Everything works well from a security point of view, but my problem is that I'd like to cache the non-secure elements of the page, such as images, backgrounds, logos, etc.
At the moment, each page renders with an ugly flicker, because all my artwork is being downloaded each time.
Of course, this also has a negative impact on bandwidth.
How can I control the caching such that the artwork, css, scripts, etc. get cached whilst preventing the dreaded Back button after FormsAuthentication SignOut problem?
Thank you in advance,
Simon.
Assuming the images are not dynamically generated, you can do it either internally via MVC or using IIS.
Internally, you'd need to serve all your images yourself and set the Expires header.
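For the internal option, something along these lines would do it; the controller, action and folder are assumptions, not part of the original answer:

// Requires: using System.IO; using System.Web.Mvc; using System.Web.UI;
public class AssetsController : Controller
{
    // Cache on the client for a day; the forms-auth pages keep their
    // no-cache headers, so the Back-button fix is unaffected.
    [OutputCache(Duration = 86400, Location = OutputCacheLocation.Client, VaryByParam = "name")]
    public ActionResult Artwork(string name)
    {
        // GetFileName stops the parameter escaping the artwork folder.
        string path = Path.Combine(Server.MapPath("~/Content/Artwork"), Path.GetFileName(name));
        return File(path, "image/png");
    }
}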
If you're using IIS it becomes much, much simpler: you just edit the Expires header in the IIS custom headers section to a date in the future (a date in the past auto-expires it). If you wish to ensure an image is not cached, add a query string to it:
<img src="image.png?dummy=8sn7ahh2" />
Then that image won't be cached, so you basically want to cache all images and then blacklist (using a query string) the ones you don't want cached.
Here's a nice example of how to switch it on/off for IIS7.

Download multiple files from one link

I would like to open multiple download dialog boxes after the user clicks on a link.
Essentially what I am trying to do is allow the user to download multiple files. I don't want to zip up the files and deliver one zipped file, because that would require a lot of server resources given that some of the files are somewhat large.
My guess is that there may be some way with javascript to kick off multiple requests when the user clicks on a certain link. Or maybe there might be a way on the server side to start off another request.
Unless the client is configured to automatically download files, you can't accomplish this without packaging the files in a single response (like the ZIP solution you mentioned). It would be a security issue if a web site were able to put an arbitrarily large number of files on your disk without telling you.
By the way, you might be overestimating the cost of packaging into a single file. Streaming files is usually an I/O-bound operation. There should be enough CPU cycles to spare for piping the data through some storage (tar) or compression (zip) method.
If you absolutely, positively cannot zip at the server level, this would probably be a good instance for creating some sort of custom "download manager" client-side plugin that you would have the user install and then you could have complete control over how many files you downloaded, where they went, etc.
I suppose you could link to a frameset document or a document containing iframes. Set the src of each frame to one of the files you want to download.
That said, a zipped version would be better. If you are concerned about the load then either:
zip the files with compression set to none (see the sketch after this list)
use caching on the server so you zip each group of files only once
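A sketch of the store-only option, assuming .NET 4.5's System.IO.Compression is available (older frameworks would need SharpZipLib or similar):

using System.IO;
using System.IO.Compression;

static void BundleFiles(string[] paths, string zipPath)
{
    using (ZipArchive zip = ZipFile.Open(zipPath, ZipArchiveMode.Create))
    {
        foreach (string path in paths)
        {
            // NoCompression just stores the bytes, so the CPU cost of
            // building the bundle stays close to a plain file copy.
            zip.CreateEntryFromFile(path, Path.GetFileName(path), CompressionLevel.NoCompression);
        }
    }
}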
Present a page with a form of checkboxes for the available files to download, with multiple selection enabled.
The user selects multiple files and submits the form.
The server accepts the request and creates a page with serially-triggered file-download JavaScript.
The page with the embedded JavaScript is presented to the user's browser, listing the files to be serially downloaded and asking for confirmation.
The user clicks the [yes - serially swamp my hard disk with these files] button.
For each file, a download-completed listener triggers the next download, until the end of the list.
I only know how to do this using Google GWT, where I had set up GWT RPC between browser and server. It took me two weeks to understand GWT RPC and perfect the download. Now it seems rather simple.
Basically (do you know "basically" is one of the most used non-technical words among the geek community?), you have to declare a server service class specifying the datatype/class of the transfer, where the datatype must implement Serializable. Then on the browser side, the GWT client declares a corresponding receiver class specifying the same serializable datatype. The browser side implements listeners for onSuccess and onFailure.
Hey, I even managed to augment GWT service base class so that I could use JSP rather than plain servlets to implement the service interface.
Actually, I was not downloading a series of files but streams that conditionally serially triggered the next stream, because my onSuccess routine would inspect the current stream to decide what content to request for on the next stream.
OK, two weeks was an exaggeration; it took me a week to do it. A genius would have taken only half a day.
I don't see what the big deal is with this. Why not something like this:
<a id="myLink" href="#">Click me</a>
<script type="text/javascript">
$('a#myLink').click(function() {
    window.open('http://www.mysite.com/file1.pdf', 'file1');
    window.open('http://www.mysite.com/file2.pdf', 'file2');
    window.open('http://www.mysite.com/file3.pdf', 'file3');
    return false; // stop the href="#" from scrolling the page
});
</script>
