I'm creating a mockup file upload tool for a community site using Fine Uploader.
I've got the session set up to retrieve the initial files from the server along with a thumbnail URL.
It all works great; however, the rendering of the thumbnails is really slow.
I can't work out why, so I hard-coded it to use a very small thumbnail for each of the four files. This made no difference.
The server side is not the issue; the information is coming back very quickly.
Am I doing something wrong? Why is Fine Uploader so slow? Here's a screen grab: it's taking four seconds to render the four thumbnails.
I'm using the latest Chrome. It's a NancyFX project on a fairly powerful machine. Rendering other pages with big images on them is snappy.
Client side code:
thumbnails: {
    placeholders: {
        waitingPath: '/Content/js/fine-uploader/placeholders/waiting-generic.png',
        notAvailablePath: '/Content/js/fine-uploader/placeholders/not_available-generic.png'
    }
},
session: {
    endpoint: "/getfiles/FlickaId/342"
},
Server side code:
// Fine uploader makes session request to get existing files
Get["/getfiles/FlickaId/{FlickaId}"] = parameters =>
{
//get the image files from the server
var i = FilesDatabase.GetFlickaImagesById(parameters.FlickaId);
// list to hold the files
var list = new List<UploadedFiles>();
// build the response data object list
foreach (var imageFile in i)
{
var f = new UploadedFiles();
f.name = "test-thumb-small.jpg"; // imageFile.ImageFileName;
f.size = 1;
f.uuid = imageFile.FileGuid;
f.thumbnailUrl = "/Content/images/flickabase/thumbnails/" + "test-thumb-small.jpg"; // imageFile.ImageFileName;
list.Add(f);
}
return Response.AsJson(list); // our model is serialised by Nancy as Json!
};
This is by design, and was implemented both to prevent the UI thread from being flooded with the image scaling logic and to prevent a memory leak issue specific to Chrome. This is explained in the thumbnails and previews section of the documentation, specifically in the "performance considerations" area:
For browsers that support client-generated image previews (qq.supportedFeatures.imagePreviews === true), a configurable pause between template-generated previews is in effect. This is to prevent the complex process of generating previews from overwhelming the client machine's CPU for a lengthy amount of time. Without this limit in place, the browser's UI thread runs the risk of blocking, preventing any user interaction (scrolling, etc) until all previews have been generated.
You can adjust or remove this pause via the thumbnails option, but I suggest you not do this unless you are sure users will not drop a large number of complex image files.
Related
I'm working on a simple Blazor application that receives a file upload and stores it. I am using BlazorInputFile, and I can't work out why copying the stream to a MemoryStream is causing the browser to freeze.
The details of how to use BlazorInputFile (and how it's implemented) are explained in this blog post: Uploading Files in Blazor.
var ms = new MemoryStream();
await file.Data.CopyToAsync(ms); // With a 1MB file, this line took 3 seconds, and froze the browser
status = $"Finished loading {file.Size} bytes from {file.Name}";
Sample project/repo: https://github.com/paulallington/BlazorInputFileIssue
(this is just the default Blazor app, with BlazorInputFile implemented as per the article)
Use await Task.Delay(1); as mentioned in Zhi Lv's comment on this post: blazor-webassembly upload file can't show progress? The brief await yields control back to the renderer, so the UI can update before the heavy read work continues.
var buffer = new byte[imageFile.Size];
await Task.Delay(1);
await imageFile.OpenReadStream(Int64.MaxValue).ReadAsync(buffer);
pratica.Files.Add(new FilePraticaRequest()
{
    Contenuto = buffer,
    Nome = imageFile.Name,
});
StateHasChanged();
I've experienced the same issue. I've tried predefined components such as Steve Sanderson's file upload and the MatBlazor file upload, as well as my own component for handling file uploads. Small files are not a problem, but once the files get a bit larger the UI hangs (MemoryOutOfBoundsException or similar). So no, async/await won't keep the UI responsive.
I have put a lot of effort into this issue. One solution, which I am currently using, is to do all file uploads with JavaScript instead of Blazor: just use JavaScript to get the file and post it up to the server. No JS interop.
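The JavaScript side of that workaround is just a regular multipart/form-data POST; for completeness, here is a minimal sketch of a server endpoint such a POST could target, assuming ASP.NET Core (the controller and route names are illustrative, not from the original project):

// Plain multipart/form-data endpoint; nothing Blazor-specific is involved.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/upload")]
public class UploadController : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> Post(IFormFile file)
    {
        // Stream the upload straight to disk instead of buffering it in memory
        var targetPath = Path.Combine("uploads", Path.GetFileName(file.FileName));
        using (var stream = System.IO.File.Create(targetPath))
        {
            await file.CopyToAsync(stream);
        }
        return Ok(new { file.FileName, file.Length });
    }
}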
However, it seems to be a memory issue in WebAssembly Mono.
Read more here: https://github.com/dotnet/aspnetcore/issues/15777
Note: I haven't tried this on the latest Blazor version, so I'm not sure whether it has been fixed or not.
You wait for the copy result, so the app freezes. You can refactor your code like this:
var ms = new MemoryStream();
file.Data.CopyToAsync(ms).ContinueWith(async task =>
{
    if (task.Exception != null)
    {
        throw task.Exception; // Update this at your convenience
    }
    status = $"Finished loading {file.Size} bytes from {file.Name}";
    await InvokeAsync(StateHasChanged).ConfigureAwait(false); // informs the component the status changed
}); // With a 1MB file, this line took 3 seconds, and should not freeze the browser
In Xamarin.Forms we can create images like this:
Image i = new Image { Source = "http://www.foo.com/foo.jpg" };
After adding this to the layout, if the URL returns an image it will be displayed. What I want to know is: is there a way to tell whether the URL is an actual image? Otherwise I am going to show a default image.
Regards.
Edit
I have created a function:
public string GetImageSourceOrDefault(string orgUrl)
{
    var req = (HttpWebRequest)WebRequest.Create(orgUrl);
    req.Method = "HEAD";

    try
    {
        using (var resp = req.GetResponse())
        {
            bool res = resp.ContentType.ToLower(CultureInfo.InvariantCulture)
                           .StartsWith("image/");
            if (res)
                return orgUrl;
            else
                return "default_logo.jpg";
        }
    }
    catch
    {
        return "default_logo.jpg";
    }
}
This function does the trick. However, it makes a request for every image. I have a ListView which shows around 220 entries, and using this method badly slows down how long the ListView takes to load.
Note: this function is called natively using dependency injection.
Maybe further improvements would help. Any ideas?
FFImageLoading's CachedImage supports Loading and Error Placeholders (and much more). It's basically an API-compatible replacement for Image with additional properties. You could try that.
var cachedImage = new CachedImage() {
    LoadingPlaceholder = "Loading.png",
    ErrorPlaceholder = "Error.png"
};
https://github.com/molinch/FFImageLoading
With the Xamarin.Forms UriImageSource you can specify the caching length, and whether caching is used, via the CacheValidity and CachingEnabled properties.
By default it will automatically cache results for 1 day on the local storage of the device.
In your function, as you mention, you are downloading the image every single time.
You have no functionality that stores and caches the result for later re-use.
Implementing something like this at the platform-specific layer would get around your current solution of re-downloading the image every single time.
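For illustration, a cached source set up in code-behind might look something like this (the URL is just the placeholder from the question):

// Let Xamarin.Forms download and cache the image itself; the result is kept in
// local storage for the given validity period instead of being re-fetched.
var image = new Image
{
    Source = new UriImageSource
    {
        Uri = new Uri("http://www.foo.com/foo.jpg"),
        CachingEnabled = true,
        CacheValidity = TimeSpan.FromDays(5)
    }
};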
Alternatively, as a workaround, if you didn't want to implement the above, you could try putting two Image controls stacked upon each other, perhaps in a Grid, with the bottom Image showing a default placeholder and, on top of it, another Image control showing the intended image (if successfully downloaded) using the UriImageSource.
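A rough sketch of that stacking idea in code-behind, assuming a Grid (both children share the same cell, so the remote image simply covers the placeholder once it loads):

// Bottom layer: a bundled placeholder image. Top layer: the remote image; if
// the download fails, the placeholder underneath stays visible.
var placeholder = new Image { Source = ImageSource.FromFile("default_logo.jpg") };
var remote = new Image
{
    Source = new UriImageSource
    {
        Uri = new Uri(orgUrl), // the URL being tested in the question
        CachingEnabled = true
    }
};

var grid = new Grid();
grid.Children.Add(placeholder); // defaults to row 0, column 0
grid.Children.Add(remote);      // same cell, rendered on top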
You could also possibly hook into the PropertyChanged notification of Image.Source and detect it being set, with the image then being displayed. Upon detection you could then release the image from the temporary placeholder Image control, perhaps.
I have tried several .NET PDF libraries to create a PDF page from an HTML page.
In Azure it's not working for a website, because I'm receiving a timeout.
I found on the web that people are talking about running the PDF conversion as a worker role.
Does anyone know how to configure a worker role to work with an Azure website?
I cannot find much info on the web about this. Is it possible?
You're conflating things. Azure Websites is a service; an Azure worker role is a stateless virtual machine running in a Cloud Service. They are two separate things. Plus, you do not need a worker role to generate PDFs (though it's certainly a viable option). You simply need the ability to install your PDF-rendering software, whether that's in Windows or in Linux.
You will not be able to install such software in Azure Websites, but you can install software in Azure web/worker roles (via startup scripts) or Virtual Machines (via ssh/rdp). Which you choose, as well as the PDF library you choose, is completely up to you (and out of scope here, since that level of architecture is subjective).
EVO has the solution for converting HTML to PDF in Azure Websites. Here is the code snippet for creating PDF from HTML in Azure Websites:
protected void convertToPdfButton_Click(object sender, EventArgs e)
{
    // Get the server IP and port
    String serverIP = textBoxServerIP.Text;
    uint serverPort = uint.Parse(textBoxServerPort.Text);

    // Create a HTML to PDF converter object with default settings
    HtmlToPdfConverter htmlToPdfConverter = new HtmlToPdfConverter(serverIP, serverPort);

    // Set optional service password
    if (textBoxServicePassword.Text.Length > 0)
        htmlToPdfConverter.ServicePassword = textBoxServicePassword.Text;

    // Set HTML viewer width in pixels, which is the converter's equivalent of the browser window width
    htmlToPdfConverter.HtmlViewerWidth = int.Parse(htmlViewerWidthTextBox.Text);

    // Set HTML viewer height in pixels to convert only the top part of a HTML page
    // Leave it not set to convert the entire HTML
    if (htmlViewerHeightTextBox.Text.Length > 0)
        htmlToPdfConverter.HtmlViewerHeight = int.Parse(htmlViewerHeightTextBox.Text);

    // Set PDF page size, which can be a predefined size like A4 or a custom size in points
    // Leave it not set to have a default A4 PDF page
    htmlToPdfConverter.PdfDocumentOptions.PdfPageSize = SelectedPdfPageSize();

    // Set PDF page orientation to Portrait or Landscape
    // Leave it not set to have a default Portrait orientation for the PDF page
    htmlToPdfConverter.PdfDocumentOptions.PdfPageOrientation = SelectedPdfPageOrientation();

    // Set the maximum time in seconds to wait for the HTML page to be loaded
    // Leave it not set for a default 60 seconds maximum wait time
    htmlToPdfConverter.NavigationTimeout = int.Parse(navigationTimeoutTextBox.Text);

    // Set an additional delay in seconds to wait for JavaScript or AJAX calls after page load completed
    // Set this property to 0 if you don't need to wait for such asynchronous operations to finish
    if (conversionDelayTextBox.Text.Length > 0)
        htmlToPdfConverter.ConversionDelay = int.Parse(conversionDelayTextBox.Text);

    // The buffer to receive the generated PDF document
    byte[] outPdfBuffer = null;

    if (convertUrlRadioButton.Checked)
    {
        string url = urlTextBox.Text;

        // Convert the HTML page given by a URL to a PDF document in a memory buffer
        outPdfBuffer = htmlToPdfConverter.ConvertUrl(url);
    }
    else
    {
        string htmlString = htmlStringTextBox.Text;
        string baseUrl = baseUrlTextBox.Text;

        // Convert a HTML string with a base URL to a PDF document in a memory buffer
        outPdfBuffer = htmlToPdfConverter.ConvertHtml(htmlString, baseUrl);
    }

    // Send the PDF as the response to the browser

    // Set the response content type
    Response.AddHeader("Content-Type", "application/pdf");

    // Instruct the browser to open the PDF file as an attachment or inline
    Response.AddHeader("Content-Disposition", String.Format("{0}; filename=Getting_Started.pdf; size={1}",
        openInlineCheckBox.Checked ? "inline" : "attachment", outPdfBuffer.Length.ToString()));

    // Write the PDF document buffer to the HTTP response
    Response.BinaryWrite(outPdfBuffer);

    // End the HTTP response and stop the current page processing
    Response.End();
}
I'm the author of the Rotativa NuGet package. For security policy reasons and OS limitations it can't be used on Azure Websites. To address this issue I created a SaaS version on Azure.
It's an API that's really easy to use; just install the dedicated NuGet package:
PM> Install-Package RotativaHQ
And create PDF files from Razor Views, as easy as:
return new ViewAsPdf(model);
No need for anything special in the View/HTML. No need for absolute URLs in images/css/js links. Works on localhost (dev machine) too.
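For context, that call typically sits inside an ordinary MVC controller action, roughly like the sketch below (InvoiceController, Details and the service call are placeholders, assuming RotativaHQ keeps the familiar Rotativa-style ViewAsPdf constructor that takes a view name and a model):

public class InvoiceController : Controller
{
    public ActionResult Details(int id)
    {
        var model = _invoiceService.GetById(id); // however you build your view model
        // Renders the existing Razor view "Details" and returns it to the browser as a PDF
        return new ViewAsPdf("Details", model);
    }
}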
Currently the service has endpoints in 4 Azure regions: US East, US West, EU North, Southeast Asia.
It's fast since it uses a proprietary protocol to send the web page contents to the API for conversion to PDF.
It's reliable because all endpoints are load balanced.
Details on the web site:
https://rotativahq.com
I'm trying to get all the photos from the device's pictures library and show them in the app using a <GridView/> with an <Image/> inside its item template, but I haven't found a way to do that without issues.
I need to create BitmapImages from the StorageFiles that I get.
First I tried creating the BitmapImages and setting the UriSource to new Uris with the files' paths, like this:
var picsLib = await KnownFolders.PicturesLibrary.GetFilesAsync(CommonFileQuery.OrderByDate);
var picsList = new List<BitmapImage>();

foreach (StorageFile pic in picsLib)
{
    var imgSrc = new BitmapImage();
    imgSrc.UriSource = new Uri(pic.Path, UriKind.Absolute);
    picsList.Add(imgSrc);
}

PhotosView.ItemsSource = picsList;
But the images don't show up.
Right after, I tried using streams:
var imgSrc = new BitmapImage();
var picStream = await pic.OpenReadAsync();
imgSrc.SetSource(picStream);
picsList.Add(imgSrc);
Of course, I got System.OutOfMemoryException.
Next, I tried using thumbnails:
var imgSrc = new BitmapImage();
var picThumb = await pic.GetThumbnailAsync(Windows.Storage.FileProperties.ThumbnailMode.PicturesView,
200, Windows.Storage.FileProperties.ThumbnailOptions.ResizeThumbnail);
imgSrc.SetSource(picThumb);
picsList.Add(imgSrc);
But I realized that it's just like the stream: OutOfMemory again. If I limit it to only get the thumbnails of 10 or 20 images, it works nicely, but I really need to show all the photos.
XAML isn't the problem as it does the job fine when I limit the number of images to load.
The app is meant to be used by anyone who downloads it from the Windows Phone Store once it's finished, so the sizes of the images vary, as the pictures library on a Windows Phone device contains almost any photo stored on the user's phone, including photos from the device's camera, saved images, etc.
There is absolutely no way to ever guarantee that you won't run out of memory with any of the above approaches. And the reality is that unless you are resizing the images on the fly to a standard size, you will never really control how much memory you are using, even for just the visible images.
You MUST make the containing grid virtualized so that the byte arrays are only allocated for the images that are actually visible to the user.
Yes, there will be some lag on most systems as you scroll, as byte arrays are discarded and created, but that is the price you pay for being able to view them 'all'.
All of that being said, here is a blog to help get you started.
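As a rough illustration of "only pay for what is visible": keep the item source as lightweight wrappers around each StorageFile and only decode a small, fixed-size thumbnail when an item is actually realized by the virtualizing panel. PhotoItem and LoadThumbnailAsync below are made-up names, not an existing API, and the GridView/XAML wiring is omitted:

// Hypothetical per-photo wrapper: a thumbnail is decoded on demand and capped
// at a fixed pixel width, so memory roughly tracks the number of realized
// (visible) containers rather than the size of the whole pictures library.
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.FileProperties;
using Windows.UI.Xaml.Media.Imaging;

public class PhotoItem
{
    private readonly StorageFile _file;

    public PhotoItem(StorageFile file)
    {
        _file = file;
    }

    public async Task<BitmapImage> LoadThumbnailAsync()
    {
        var image = new BitmapImage { DecodePixelWidth = 200 }; // cap the decoded size
        using (StorageItemThumbnail thumb = await _file.GetThumbnailAsync(
            ThumbnailMode.PicturesView, 200, ThumbnailOptions.ResizeThumbnail))
        {
            await image.SetSourceAsync(thumb);
        }
        return image;
    }
}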
I was wondering if someone could give me some guidance here. I'd like to be able to programmatically get every image on a webpage as quickly as possible. This is what I'm currently doing (note that clear is a WebBrowser control):
if (clear.ReadyState == WebBrowserReadyState.Complete)
{
    doc = (IHTMLDocument2)clear.Document.DomDocument;
    sobj = doc.selection;
    body = doc.body as HTMLBody;
    sobj.clear();
    range = body.createControlRange() as IHTMLControlRange;

    for (int j = 0; j < clear.Document.Images.Count; j++)
    {
        img = (IHTMLControlElement)clear.Document.Images[j].DomElement;
        HtmlElement ele = clear.Document.Images[j];
        string test = ele.OuterHtml;
        string test2 = ele.InnerHtml;

        range.add(img);
        range.select();
        range.execCommand("Copy", false, null);

        Image image = Clipboard.GetImage();
        if (image != null)
        {
            temp = new Bitmap(image);
            Clipboard.Clear();
            ......Rest of code ...........
        }
    }
}
However, I find this can be slow for a lot of images, and additionally it hijacks my clipboard. I was wondering if there is a better way?
I suggest using HttpWebRequest and HttpWebResponse. In your comment you asked about efficiency/speed.
From the standpoint of data being transferred using HttpWebRequest will be at worst the same as using a browser control, but almost certainly much better. When you (or a browser) makes a request to a web server, you initially only get the markup for the page itself. This markup may include image references, objects like flash, and resources (like scripts and css files) that are referenced, but not actually included in the page itself. A web browser will then proceed to request all the associated resources needed to render the page, but using HttpWebRequest you can request only those things that you actually want (the images).
From the standpoint of resources or processing power required to extract entities from a page, there is no comparison: using a browser control is far more resource-intensive than scanning an HttpWebResponse. Scanning some data using C# code is extremely fast. Rendering a web page involves JavaScript, graphics rendering, CSS parsing, layout, caching, and so on. It's a pretty intensive operation, actually. Using a browser under programmatic control, this will quickly become apparent: I doubt you could process more than a page every second or so.
On the other hand, a C# program dealing directly with a web server (with no rendering engine involved) could probably handle dozens if not hundreds of pages per second. For all practical purposes, you'd really be limited only by the response time of the server and your internet connection.
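To make the direct-request point concrete, here is a minimal sketch: given an image URL pulled from the page markup, ask the server for just that resource and write the raw bytes to disk. No rendering engine or clipboard is involved (the method and parameter names are illustrative):

using System.IO;
using System.Net;

static void SaveImage(string imageUrl, string targetPath)
{
    var request = (HttpWebRequest)WebRequest.Create(imageUrl);
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var stream = response.GetResponseStream())
    using (var file = File.Create(targetPath))
    {
        stream.CopyTo(file); // raw image bytes straight to disk
    }
}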
There are multiple approaches here.
If it's a one-time thing, just browse to the site, select File > Save Page As..., and let the browser save all the images locally for you.
If it's a recurring thing, there are lots of different ways:
Buy a program that does this. I'm sure there are hundreds of implementations.
Use the HTML Agility Pack to grab the page and compile a list of all the images you want, then spin up a thread for each image to download and save it (see the sketch after this list). You might limit the number of threads depending on various factors like your (and the site's) bandwidth and local disk speed. Note that some sites have arbitrary limits on the number of concurrent requests per connection they will handle; depending on the site this might be as few as 3.
This is by no means exhaustive; there are lots of other ways. I probably wouldn't do it through a WebBrowser control, though. That code looks brittle.
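For what it's worth, here is a rough sketch of the HTML Agility Pack option from the list above (it assumes the HtmlAgilityPack NuGet package; the concurrency cap of 3 mirrors the per-connection limits some sites impose):

using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using HtmlAgilityPack;

static void DownloadAllImages(string pageUrl, string targetFolder)
{
    // Grab the page markup once and compile the list of image URLs
    var doc = new HtmlWeb().Load(pageUrl);
    var imageNodes = doc.DocumentNode.SelectNodes("//img[@src]");
    if (imageNodes == null)
        return; // no images on the page

    var imageUris = imageNodes
        .Select(img => new Uri(new Uri(pageUrl), img.GetAttributeValue("src", ""))) // resolve relative paths
        .Distinct()
        .ToList();

    // Download with a small, fixed degree of parallelism
    Parallel.ForEach(imageUris, new ParallelOptions { MaxDegreeOfParallelism = 3 }, uri =>
    {
        using (var client = new WebClient())
        {
            var fileName = Path.Combine(targetFolder, Path.GetFileName(uri.LocalPath));
            client.DownloadFile(uri, fileName);
        }
    });
}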