I tried to find the answer to my question, but it seems like I am either missing the correct terminology or it really is a bit tricky to do.
I am trying to see whether it is possible to use either lazy loading, or data sent from the API in portions periodically, so that it does not take as long to reach first render. My current setup, where an array of over 1000 objects is fetched from a .NET API into a React UI, just does not perform as I would like.
I would like to skip pagination if possible.
Just implement the endpoint GET /data?show=X&skip=Y.
For the first request, get data from /data?show=10,
then whenever you want (for example, when the user reaches the bottom of the page), request /data?show=10&skip=10.
I'm not sure if I understood you correctly, but I hope it helps somehow :)
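In case it's useful, here's a minimal sketch of what that endpoint could look like in a .NET Web API controller. The controller, the Item type and the in-memory list are all illustrative, not from the question (and the exact route depends on your route config):
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class DataController : ApiController
{
    // Illustrative in-memory source; in practice this would be a database query.
    private static readonly List<Item> _items =
        Enumerable.Range(1, 1000).Select(i => new Item { Id = i }).ToList();

    // GET /data?show=10&skip=20
    public IHttpActionResult Get(int show = 10, int skip = 0)
    {
        var page = _items
            .OrderBy(i => i.Id) // a stable order keeps pages consistent across requests
            .Skip(skip)
            .Take(show)
            .ToList();
        return Ok(page);
    }
}

public class Item
{
    public int Id { get; set; }
}
On the React side, each response can simply be appended to the already-rendered list, so only the first 10 items block the first render.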
Fairly new to coding, and I want a project to work on that could help me advance my skills. I'm not sure what language would be best for this sort of undertaking, but I would definitely prefer to use C++ or C#.
For the first part of the program I basically would like to try and take all my Pandora likes and put them on a spreadsheet, with song name in one column and artist in the other. I don't see the formatting being too hard once I actually get the data I need, but I'm not really sure how to communicate with a server at all at this point in time. I'm guessing I probably won't be able to grab a raw list of likes, so I'm thinking my best course of action will be to first expand the likes list all the way, and then I need to read the text on the screen or in the source code.
For the first step, expanding my likes, I found the HTML source code that actually does this:
<div class="show_more tracklike" data-nextLikeStartIndex="0" data-nextThumbStartIndex="5">Show more</div>
Not sure if this is something I can work with, but I was thinking that if I could set data-nextThumbStartIndex="5" to be equal to the # of likes minus 5 (the amount it shows by default), it would be fairly easy to expand the list. If not, I would probably have to click the "show more" link repeatedly until I have all the likes on the page.
For the next step, getting the data I want, I think my best option would be to basically just grab the text that I physically see on the screen and worry about filtering and manipulating the data afterwards. The other option is looking at the source code, where I actually found the pieces of code the info I want is stored in. If I could retrieve the page's source code, I think it would be relatively easy to pick out the data I actually want from that.
So yeah, that's about it. I know I'm pretty noob atm, and what I'm saying is probably wrong and/or much more complicated than I think, but I'm a pretty quick learner, and at the very least, if someone could point me in the right direction to communicate with a server, that would be much appreciated.
This question is quite "wide" (and I have absolutely no knowledge of Pandora itself - can't access it from where I live).
In general, there are several different ways to solve this type of problem:
Screen Scraping - basically access the website as if you were a web browser, and from the HTML string that comes back, dig out the information you need. The problem here is that the data is not very suitable for "machine reading": it often has no distinct points for the "reader" to find the relevant information, and it's difficult to sort the data from the "chaff".
AJAX API - "Asynchronous JavaScript and XML", where the provider of the website has an interface to fetch certain data into the web browser - and of course, you can "pretend" to be the web browser and request the same type of information. You are relying on the website having such an interface, but if it exists, the data is generally in a form more suitable for machine reading (typically XML, but not always).
JSON API - "JavaScript Object Notation" is a similar solution to AJAX - like XML, JSON is a human- and machine-readable format.
The latter two are definitely preferable, as the data coming back is meant for machine reading. The drawback is that you need to have "server side cooperation". The good thing here is that Pandora does have a JSON API. The bad thing is that it seems to be hard to use... Here's one discussion on the subject:
Making JSON calls to Unoffical Pandora API
The main principle here is that you send some stuff to the webserver, and receive a reply with the requested information. Exactly how this is done depends on the language/programming environment. A popular C++ solution is libcurl.
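Since the question mentions C#, here's a minimal sketch of fetching a page over HTTP with HttpClient; the URL is just a placeholder, not Pandora's actual endpoint:
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Fetcher
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder URL; a real call would target whatever endpoint
            // the unofficial Pandora API expects, with its parameters.
            string body = await client.GetStringAsync("https://www.example.com/");

            // Print the first 200 characters of the response.
            Console.WriteLine(body.Substring(0, Math.Min(200, body.Length)));
        }
    }
}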
There is a Ruby Client here, using the JSON interface
https://github.com/nixme/pandora_client
A C# implementation to interface with Pandora is here:
http://pandoraunleashed.googlecode.com/svn/trunk/PandoraUnleashed/Pandora.cs
Unfortunately, I can't find any direct reference to "listing likes".
I'm in the process of converting an ASP.NET application from MVC controllers to ApiController. So far everything is going pretty smoothly, except I've had a few hiccups.
The problem I'm having right now is that a few methods are receiving requests of the form:
sort:FieldName
dir:DESC
filter[0][field]:FieldName
filter[0][data][type]:string
filter[0][data][value]:deadeawd
(the content type is application/x-www-form-urlencoded;)
The sort and the dir can easily be captured within a model class, and I've done so, but I don't know how to capture the filter[0] fields. (There could be filter[1] and so on as well; just how many there are is not known ahead of time, i.e. the data structure is dynamic.)
Currently the application grabs the form data, and a method builds a query string based on the data there, but in Web API we no longer have access to the form data directly.
I could use a dynamic object, or a NameValueCollection, but I'm just trying to figure out what's the best option, what's the intended usage, and what's the best practice.
(in case you're wondering, the request data can't be changed, it's from a framework that we are using and don't have an easy way to override how it does things)
The answer we ended up going with was to change the framework so it sent a proper JSON object.
It was quite a bit of work, and it means it might be difficult to update it, but we're looking at moving away from the framework soon anyways, and we don't want the trivial little details of this framework to alter our API in a negative way.
If anyone is interested, the dynamic or NameValueCollection probably would've worked, and I don't think there is a best practice, because passing data like this is already bad practice.
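For anyone curious what the NameValueCollection route might have looked like, here's a minimal sketch, assuming a Web API action that reads the urlencoded body; the controller name and the filter-walking loop are illustrative:
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class SearchController : ApiController
{
    public async Task<IHttpActionResult> Post()
    {
        // ReadAsFormDataAsync parses an application/x-www-form-urlencoded body.
        NameValueCollection form = await Request.Content.ReadAsFormDataAsync();

        string sort = form["sort"];
        string dir = form["dir"];

        // Walk filter[0], filter[1], ... until an index is missing,
        // since the number of filters isn't known ahead of time.
        var filters = new List<Dictionary<string, string>>();
        for (int i = 0; ; i++)
        {
            string field = form["filter[" + i + "][field]"];
            if (field == null) break;
            filters.Add(new Dictionary<string, string>
            {
                { "field", field },
                { "type",  form["filter[" + i + "][data][type]"] },
                { "value", form["filter[" + i + "][data][value]"] }
            });
        }

        return Ok(new { sort, dir, filterCount = filters.Count });
    }
}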
Basically I have a class called Asset which holds all the information for an Asset in my system. This can get quite big (Assets have Thumbnails, Filenames, Metadata, Ratings, Comments, etc).
On my results page, I list all the Assets that match a particular criteria which then can be filtered using jQuery.
I was finding performance issues in IE8, so the first thing I did was look at the Asset class and see what was not needed for displaying an Asset on the page. (Later I revisited my jQuery and found that it was what was actually causing the performance issues.)
So, when I stripped my class right down to basics, I made that the BaseAsset and derived Asset from that.
My question is, did I need to do that? was there any need?
I shall provide examples if necessary, but I am refraining at the moment, because the post could become quite big :)
I don't think you need a base class per se; I think what you need is to send only what you need. It seems that the problem was simply that you didn't need all of the data all of the time.
I know it's tedious, but only send what you need, when you need it, and you won't have any problems. When you need more data, then either load it asynchronously with an AJAX call, or even make another page that the user navigates to.
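To sketch the idea in code (all field names here are made up, not from the question): keep the full Asset, and give the results page a slim projection with only what the list view renders.
using System.Collections.Generic;

// Full entity with everything the system knows about an asset.
public class Asset
{
    public int Id { get; set; }
    public string Filename { get; set; }
    public byte[] Thumbnail { get; set; }
    public string Metadata { get; set; }
    public double Rating { get; set; }
    public List<string> Comments { get; set; }
}

// Slim shape for the results page: only what the list actually displays.
public class AssetSummary
{
    public int Id { get; set; }
    public string Filename { get; set; }
    public double Rating { get; set; }
}
Whether the slim shape is a base class that Asset derives from, or just a separate projection, matters less than the principle: the results page should only ever receive the slim shape.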
I think this question is like clay pigeon shooting: "pull... bang!"... shot down. But nevertheless, it's worth asking, I believe.
Lots of JS frameworks etc use JSON these days, and for good reason I know. The classic question is "where to transform the data to JSON".
I understand that at some point in the pipeline, you have to convert the data to JSON, be it in the data access layer (I am looking at JSON.NET) or I believe in .NET 4.x there are methods to output/serialize as JSON.
So the question is:
Is it really a bad idea to contemplate a SQL function to output as JSON?
Qualifier:
I understand that trying to output thousands of rows like that isn't a good idea - in fact, not really a good idea for web apps either way, unless you really have to.
For my requirement, I need possibly 100 rows at a time...
The answer really is: it depends.
If your application is a small one that doesn't receive much use, then by all means do it in the database. The thing to bear in mind though is, what happens when your application is being used by 10x as many users in 12 months time?
If it makes it quick, simple and easy to implement JSON encoding in your stored procedures, rather than in your web code and allows you to get your app out and in use, then that's clearly the way to go. That said, it really doesn't take that much work to do it "properly" with solutions that have been suggested in other answers.
The long and short of it is, take the solution that best fits your current needs, whilst thinking about the impact it'll have if you need to change it in the future.
This is why [WebMethod] (WebMethodAttribute) exists.
Best to load the data into your application and then return it as JSON.
.NET 4 has support for returning JSON, and I did it as part of one ASP.NET MVC site; it was fairly simple and straightforward.
I recommend moving the transformation out of SQL Server.
I agree with the other respondents that this is better done in your application code. However... it is theoretically possible using SQL Server's ability to host CLR assemblies in the database, using the CREATE ASSEMBLY syntax. The choice is really yours. You could create an assembly that does the translation in .NET, register that assembly with SQL Server, and then use its method(s) to serialize to JSON as return values from your stored procedures...
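For illustration only, a minimal sketch of that CLR route: a scalar function that hand-builds JSON from a couple of columns. The function name and columns are made up, and real code would need proper string escaping and NULL handling:
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class JsonFunctions
{
    // Deploy with CREATE ASSEMBLY / CREATE FUNCTION, then call from T-SQL,
    // e.g. SELECT dbo.ToJson(Name, Rating) FROM SomeTable.
    [SqlFunction]
    public static SqlString ToJson(SqlString name, SqlDouble rating)
    {
        // Hand-built JSON for brevity; a real implementation must escape
        // quotes and handle NULL values.
        return new SqlString(
            "{\"name\":\"" + name.Value + "\",\"rating\":" + rating.Value + "}");
    }
}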
Better to load it using your standard data access technique and then convert to JSON. You can then use it in standard objects in .NET as well as your client side javascript.
If you're using ASP.NET MVC, you serialize your results in your controllers and output a JsonResult; there's a Controller.Json() method that does this for you. If you're using WebForms, an HTTP handler and the JavaScriptSerializer class would be the way to go.
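A minimal sketch of the MVC route; the controller, action and data source are illustrative:
using System.Linq;
using System.Web.Mvc;

public class AssetsController : Controller
{
    // GET /Assets/List - returns ~100 rows as JSON.
    public JsonResult List()
    {
        // Illustrative data source; in practice this comes from your data layer.
        var rows = Enumerable.Range(1, 100)
            .Select(i => new { Id = i, Name = "Item " + i })
            .ToList();

        // AllowGet is required when returning JSON from a GET request.
        return Json(rows, JsonRequestBehavior.AllowGet);
    }
}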
Hey thanks for all the responses.. it still amazes me how many people out there have the time to help.
All very good points, and certainly confirmed my feeling of letting the app/layer do the conversion work - as the glue between the actual data and frontend. I guess I haven't kept up too much with MVC or SQL-2008, and so was unsure if there were some nuggets worth tracking down.
As it worked out (following some links posted here, and further fishing) I have opted to do the following for the time being (stuck back using .NET 3.5 and no MVC right now..):
Getting the SQL data as a DataTable/DataReader
Using a simple DataTable-to-collection (dictionary) conversion for a serializable list
Because right now I am using an ASHX page to act as the broker to the JavaScript (i.e. via a jQuery AJAX call), within my ASHX page I have:
context.Response.ContentType = "application/json";
System.Web.Script.Serialization.JavaScriptSerializer json = new System.Web.Script.Serialization.JavaScriptSerializer();
I can then issue: json.Serialize(<>)
Might seem a bit backward, but it works fine.. and the main caveat is that it is not ever returning huge amounts of data at a time.
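Put together, a minimal sketch of what such a handler might look like; the handler name and the row data are illustrative, with the real rows coming from the DataTable conversion described above:
using System.Collections.Generic;
using System.Web;
using System.Web.Script.Serialization;

public class DataBroker : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";

        // Illustrative stand-in for the DataTable-to-dictionary conversion.
        var rows = new List<Dictionary<string, object>>
        {
            new Dictionary<string, object> { { "id", 1 }, { "name", "example" } }
        };

        var json = new JavaScriptSerializer();
        context.Response.Write(json.Serialize(rows));
    }

    public bool IsReusable
    {
        get { return false; }
    }
}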
Once again, thanks for all the responses!
I'm currently building a single page AJAX application. It's a large "sign-up form" that's been built as a multi-step wizard with multiple branches and different verbiage based on what choices the user makes. At the end of the form is an editable review page. Once the user submits the form, it sends a rather large email to us, and a small email to them. It's sort of like a very boring choose your own adventure book.
Feature creep has pushed the size of this app beyond the abilities of the current architecture, and it's too slow to use on slower computers (not good for a web app), especially those running Internet Explorer. It currently has 64 individual steps, 5400 DOM elements, and the .aspx file alone weighs in at 300 KB (4206 LOC). Loading the app takes anywhere from 1.5 seconds on a fast machine running Firefox 3 to 20 seconds on a slower machine running IE7. Moving between steps takes about the same amount of time.
So let's recap the features:
Multi-step, multi-path wizard-style form (64 steps)
Current step is shown in a fashion similar to this: http://codylindley.com/CSS/325/css-step-menu
Multiple validated fields
Changing verbiage based on user choices
Final, editable review page
I'm using jQuery 1.3.2 and the following plugins:
jQuery Form Wizard Plugin
jQuery clueTip plugin
jQuery sexycombo
jQuery meioMask plugin
As well as some custom script for loading the verbiage from an XML file, running the review page and some aesthetic accoutrements.
I don't have this posted anywhere public, but I'm mostly looking for some tips on how to approach this sort of project and make it lightweight and extensible. If anyone has any ideas as far as tools, tutorials or technologies go, that's what I'm looking for. I'm a pretty novice programmer (I'm mostly a CSS/xHTML/design guy), so speak gently. I just need a good plan of attack to make this app faster. Any ideas?
One way would be to break apart the steps into multiple pages / requests. To do this you would have to store the state of the previous pages somewhere. You could use a database or some other method to do this.
Another way would be to dynamically load the parts you need via AJAX. This won't help with the 5400 DOM elements, though, but it would help with the initial page load.
Based on the question comments, a quick way to "solve" this problem is to make a C# class that mirrors all the fields in your form. Something like this:
public class MySurvey
{
    public string FirstName { get; set; }
public string LastName { get; set; }
// and so on...
}
Then you would store this in the session (to keep it easy... I know it's not the "best" way), like this:
public MySurvey Survey
{
get
{
var survey = Session["MySurvey"] as MySurvey;
if (survey == null)
{
survey = new MySurvey();
Session["MySurvey"] = survey;
}
return survey;
}
}
This way you'll always have a non-null Survey object you can work with.
The next step would be to break that big form into smaller pages, let's say step1.aspx, step2.aspx, step3.aspx, etc. All these pages would inherit from a common base page that includes the property above. After this, all you'd need to do is post the request from step1.aspx back and save it to Survey, similar to what you're doing now but for each small piece. When done, redirect (Response.Redirect("~/stepX.aspx")) to the next page. The info from the previous pages is saved in the session object. If they close the browser, though, they won't be able to get back to it.
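As a sketch, one step's code-behind might look like this; SurveyBasePage, the control names and the handler are all illustrative:
using System;

// Code-behind for a hypothetical step1.aspx; SurveyBasePage is the common
// base page exposing the Survey property shown above.
public partial class Step1 : SurveyBasePage
{
    protected void Next_Click(object sender, EventArgs e)
    {
        // firstNameBox and lastNameBox are assumed TextBox controls on the page.
        Survey.FirstName = firstNameBox.Text;
        Survey.LastName = lastNameBox.Text;

        Response.Redirect("~/step2.aspx");
    }
}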
Rather than saving it to the session you could save it in a database or in a cookie, but you're limited to 4K for cookies so it may not fit.
I agree with PBZ; saving the individual steps would be ideal. You can, however, do this with AJAX. It would require some work that sounds like it might be outside your skillset of mostly front-end development: you'd probably need to create a new database row, tie it to the user's session ID, and every time they click to the next step, have it update that row. You could even tie it to their IP address, so if the whole thing blows up they can come back and hit "remember me?" for your application to retrieve it.
As far as optimizing the existing structure goes, jQuery is fairly heavy from a performance standpoint, and adding a lot of jQuery plugins doesn't help that. I'm not saying it's bad, because it saves you a lot of time, but there are some instances where you are using a module for just one of its many functionalities, and you could replace that entire module with a few lines of plain JavaScript.
As far as minimizing the individual DOM elements goes, the step I mentioned above could help slim that down, because you're probably loading a lot of extra functionality for those modules that you may or may not need.
On the back end, I'd have to see the source to tell you how to optimize it, but it sounds like there's a lot of redundancy in the individual steps; some of that can probably be trimmed down into functions, perhaps with a little recursion, or at the least with steps delegating some of the tasks to one another.
I wish I could help more but without digging through your source I can only suggest basic strategies. Best of luck, though!
Agree, break up the steps. 5400 elements is too many.
There are a few options if you need to keep it on one page.
AJAX requests to get back either raw HTML, or an array of objects to parse into HTML or DOM
Frames or Iframes
JavaScript to set innerHTML or manipulate the DOM based on the current step. Note with this option IE7 and especially IE6 will have memory leaks. Google IE6 JavaScript memory leaks for more info.
Use document.write to include only the .js file(s) needed for the current step.
HTH.
Sounds like mostly a jQuery optimization problem.
My first suggestion would be to switch as many selectors to ID selectors as you can. I've had speedups of 200-300x by moving to ID attribute selection only.
My second suggestion is more of a plan of attack. Since IE is your main problem area, I suggest using the IE8 debugger: just hit F12 in IE8. Tabs 3 and 4 are Script and Profiler respectively.
Once you've done as much of #1 as you think you can, to get a starting point, go to the Profiler, hit "start profiling", perform some slow action on the page, and then stop profiling. You will see your longest method calls and can just work your way through them.
For finer testing/dev work, go to the Script tab. Breakpoints, locals, etc. are there for analysis. You can develop and test changes via the immediate window: put a breakpoint where you want to change a function, trigger the function, and then execute your JavaScript in the immediate window instead of the defined JavaScript.
When you think you have something figured out, profile your changes to make sure they really are improvements. Just start the profiler, run the old code, stop it and note your benchmark. Then restart the profiler and use the immediate window to execute your altered function.
That's about it. If that flow can't take you far enough, then as mentioned above, jQuery itself (and hence its plugins) is not terribly performant, and replacing it with standard JavaScript will speed everything up. If your plugins benchmark slowly, look at replacing them with other plugins.