Open xAPI content at last bookmarked slide - C#

I am currently opening xAPI content in our own LMS. We do not use an LRS but store statements in our own database. This will change soon, as we want to build our own LRS.
When opening content, I build a string comprising the content, endpoint, auth token and actor.
This will always open the content at the beginning.
If I connect to an LRS, the content opens at the last bookmarked slide and shows the percentage of progression.
Looking at the calls made in Fiddler, I can see three main calls being made to the LRS, where the stateId passed in is suspend_data, cumulative_time or bookmark.
Bookmark returns the Id of the last slide, and suspend_data returns a load of numbers in JSON format.
My issue is that I can easily get the last slide Id from my database, but I cannot get the percentage or set the completed items in the package as complete (with the tick). I'm guessing the returned values from suspend_data may have something to do with setting these.
Can anyone advise on what I should be doing to open the content properly at its bookmark?

The content itself is in charge of opening itself to the right place based on the values returned from those queries. The content is using the State API document resources to capture those values (see https://xapi.com/blog/deep-dive-state-activity/) and then reads them back at launch so that it can restore the correct state for the learner. The exact format is specific to the type of content being run, in this case likely from a major authoring tool that has decided how it wants to store those values. Content from other authoring tools will not necessarily use the same methods. xAPI does not specify any of these details; it only defines what the LRS must support, so you will be best off implementing the LRS endpoints the way they were intended, or you will end up customizing for every kind of content you have. Also, you should likely be providing more information to the content at launch time, for instance a base activity id and a registration value.
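On the launch side, here is a minimal sketch of building that launch string in C#, assuming the common xAPI query-string launch convention (endpoint, auth, actor) plus the extra activity_id and registration values mentioned above. Every value passed in is a placeholder and the method name is made up:

using System;
using System.Text;

// A sketch only: builds a launch URL using the common xAPI query-string
// launch convention. All parameter values are placeholders.
static string BuildLaunchUrl(string contentUrl, string endpoint, string basicAuthToken,
                             string actorJson, string activityId, Guid registration)
{
    var sb = new StringBuilder(contentUrl);
    sb.Append(contentUrl.Contains("?") ? "&" : "?");
    sb.Append("endpoint=").Append(Uri.EscapeDataString(endpoint));
    sb.Append("&auth=").Append(Uri.EscapeDataString("Basic " + basicAuthToken));
    sb.Append("&actor=").Append(Uri.EscapeDataString(actorJson));
    // Extra context so the content can resume and report consistently:
    sb.Append("&activity_id=").Append(Uri.EscapeDataString(activityId));
    sb.Append("&registration=").Append(registration.ToString());
    return sb.ToString();
}

The content can then call back to your State endpoints with those same identifiers, which is how bookmark and suspend_data get scoped to the right learner and attempt.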
https://xapi.com/building-a-learning-record-store/ may be a good resource to review before creating an LRS.

Load testing using VSTS. Searching from a CSV file and then clicking on the first element

I have a question regarding a performance/load testing scenario.
I understand that it's best practice not to combine more than one variable when doing a load test. However, the management is insisting on it.
Scenario: User searches for an item and then clicks on a specific item from the search results to bring up details in an iframe.
Validation: Make sure the search is performant and the details open in the iframe as expected, without crashing the iframe.
I have recorded the scenario using VSTS. I am using a CSV file for the search criteria. However, how do I configure the test to click on the "first" element in the search results every time?
Thank you very much, and I apologize if I missed anything.
Correlate for the first returned item on the page. You may have to post-process the returned data.
You will likely find a POST operation which follows, where the results of your selection are captured. Once you have captured the data from the returned stream on the page, use that captured data to replace the data in the POST related to the item being queried.
You can also use this same captured data to validate that the returned page contains the data that is expected, rather than a potentially random unrelated item which still returns an HTTP 200 as a valid page. Search for an appropriate unique string in the returned page data to ensure not only that you have requested the proper page, but that the page you expect is being returned.
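If you prefer to do that check in code rather than with the built-in rules, a custom validation rule is one option. A minimal sketch, assuming the Microsoft.VisualStudio.TestTools.WebTesting extensibility API; the context parameter name is hypothetical:

using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch of a custom validation rule: checks that the detail page actually
// contains the item id captured from the search results earlier in the test.
// "FirstResultId" (set via ContextParameterName) is a made-up parameter name.
public class ExpectedItemValidationRule : ValidationRule
{
    public string ContextParameterName { get; set; }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        string expected = e.WebTest.Context[ContextParameterName].ToString();
        e.IsValid = e.Response.BodyString.Contains(expected);
        if (!e.IsValid)
            e.Message = "Returned page does not contain the expected item: " + expected;
    }
}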

Web Performance Testing - Code behind value as dynamic parameter

I've been put in charge of developing a performance test for one aspect of the system my company develops. The idea is that you upload a document, click a button, which then does a whole bunch of steps behind the scenes. The document is given a unique ID once it's uploaded, which I need to capture as a dynamic parameter so that the test can be repeated. The application is written in ASP.NET/C#, and I have no ability to view the source code or modify it in any way.
Visual Studio does not automatically pick this up as a dynamic parameter, despite the fact that it needs to be one.
Process Information
The id is generated when the document is uploaded. It's stored in the link that you click to process the document.
The id is grabbed from the code-behind at run time, through Razor code.
It is not passed through the referer to the next page, but somehow appears there anyway.
Things I've Tried
Extracting the value from the first page. This just results in the C# expression being put in the id location instead of the actual value, e.g. 'DataItem.GetMember("ObjectId").Value' instead of '600-562949958140473'.
Creating an intermediate step to get the id before moving on to the real page. I'm not sure how to do this successfully; my attempt just resulted in an additional failed step.
This is my first time working with Web Performance Testing, though I have used CodedUI. So for any explanation, please assume I'm an idiot.
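One way around the recorder not picking this up: a custom extraction rule runs against the rendered response, so a regular expression can capture the real id rather than the Razor expression. A minimal sketch, assuming the Microsoft.VisualStudio.TestTools.WebTesting extensibility API; the id pattern is a guess based on the example value above:

using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch of a custom extraction rule that captures the document id from the
// rendered HTML (e.g. '600-562949958140473') into a context parameter that
// later requests can reference as {{DocumentId}}.
public class DocumentIdExtractionRule : ExtractionRule
{
    // Hypothetical pattern matching ids like 600-562949958140473.
    public string IdPattern { get; set; }

    public override void Extract(object sender, ExtractionEventArgs e)
    {
        Match m = Regex.Match(e.Response.BodyString, IdPattern ?? @"\b\d{3}-\d{12,}\b");
        if (m.Success)
        {
            e.WebTest.Context[this.ContextParameterName] = m.Value;
            e.Success = true;
        }
        else
        {
            e.Success = false;
            e.Message = "No document id found in the response.";
        }
    }
}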

Real time editor for Cloud Storage System

I am working on a cloud storage system in ASP.NET MVC5, in which I made a file manager that handles cut, copy, download of multiple files, and edit and preview of files. Now I want to edit documents like Word files in real time (collaborative editing). Is there any API that can help me with this?
Thank you in advance.
You should use SignalR for real-time applications. It may be possible with the help of an existing API, but it's better to write your own code according to your needs.
http://signalr.net/
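For the real-time transport specifically, here is a minimal sketch of a SignalR 2 hub (the version that fits MVC5) that relays edits to everyone else viewing the same document; the hub, group, and method names are all assumptions:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Sketch of a hub that groups clients by document and relays edits.
// DocumentEditHub, JoinDocument, SendEdit and receiveEdit are made-up names.
public class DocumentEditHub : Hub
{
    public Task JoinDocument(string documentId)
    {
        // One SignalR group per open document.
        return Groups.Add(Context.ConnectionId, documentId);
    }

    public void SendEdit(string documentId, int position, int deletedLength, string insertedText)
    {
        // Forward the change to every other client editing this document.
        Clients.OthersInGroup(documentId).receiveEdit(position, deletedLength, insertedText);
    }
}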
DevExpress and Syncfusion may be your solution; try those.
This is turning into a huge comment, so I'll just explain my point of view in an answer. I'll remove it if an actual answer appears.
I am suggesting you start writing your own code for collaborative editing, and the reason is quite simple: you need at least slightly different processing for almost every file type, which suggests there will never be a single API supporting collaborative editing for all file types, unless somebody makes it their goal to maintain one and keep up with every format created.
Start simple: text (or hex) editing. Define how changes are made and implemented on other clients, and then work your way up to as many file types (and the methods that go with them) as you need.
You could use the source code of one of these open source collaborative text editors (you'll have to find the download / GitHub links on their websites) to get a general idea of how to do it, but you will still have to put in some work and won't get far without writing your own code.
Collaborative editing requires the client of user 1 (who just started editing) to send either of these:
- Data pointing to the changes made in the file, or
- The full file, in which case user 2's client (or a central "server") must be able to calculate the changes from there and implement them.
One of the problems is overwriting only the portion of the file the changes were made to (and avoiding overwriting the other user's work).
And the biggest problem (the reason you can't have a "one for all" method/API) is that each file type has its own structure, meaning different file types will have different data representing changes in the file. If you try to write raw data it might work, but you'd still need to calculate and lock away the specific portions of the file that contain general information, rather than the data of your file.
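To make the first option above concrete, here is a minimal sketch of a plain-text "change" payload a client could send instead of the whole file. The field names are assumptions, and real document formats (Word etc.) would need a far richer model:

// Sketch of a minimal edit delta for plain text. Applying it on the other
// clients replaces DeletedLength characters at Position with InsertedText.
public class TextEdit
{
    public int Position { get; set; }        // where the change starts
    public int DeletedLength { get; set; }   // characters removed at Position
    public string InsertedText { get; set; } // replacement text (may be empty)

    public string ApplyTo(string document)
    {
        return document.Remove(Position, DeletedLength)
                       .Insert(Position, InsertedText ?? "");
    }
}

Handling concurrent edits then comes down to ordering or transforming these deltas, which is where most of the real work in collaborative editing lives.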

Securely show images on website

I currently store a number of document preview images (jpg/tif) outside of my web root. There are 100s of them, so having this work efficiently is important.
The reason they are stored outside of the web root is that they contain data that only specific users/user groups may view (but each user can have 100s of documents they can view).
My current implementation is: when the user selects ‘view image’, an AJAX call is triggered, which moves the image in question to a specific folder within the web root. The location is passed back and used to display the image to the user.
When the next image is clicked, the call deletes any existing images and copies over the requested image. At session logout/timeout the user's image folder is emptied.
This has a few problems, but mainly:
- Files are constantly being copied and deleted
- There is the risk of images being left in the folder (issues with log-off scripts)
- The whole time an image is in the folder it could be viewed by another user (unlikely but possible)
Is there a better way of doing this? I looked at trying to combine the BinaryReader with the AJAX call (as I hoped this would cut out the need to copy the files), but can’t see how to get the data back to be used by the JS in the calling page.
Alternatively is there a way of making selected Folders only accessible to given users based on some session criteria? (I can’t imagine there is but I thought it’s worth asking.)
So if anyone has any ideas on how this can be improved that would be great.
This is a C# ASP.NET app using jQuery.
Edit:
The image is displayed using AJAX; this allows for preloading and also means the rest of the page does not need to be reloaded when the user selects the next/previous image.
It can almost be thought of as a JavaScript image-swapper type situation, where the images are stored outside of the web root.
Thanks.
My current implementation is: when the user selects ‘view image’, an AJAX call is triggered, which moves the image in question to a specific folder within the web root.
This is a horrible idea. You realize you can just access the image data and pass it to the web as a stream with a specific MIME type, right?
Try writing a method that checks the user's credentials via cookies. If they are not OK, load and send back some standard image saying the user must log in to view the file; if they are OK, load and return the proper file from a path outside of the web root, based on a URL parameter (with proper headers like Content-Type, often also referred to as MIME type, of course). Then link URLs to that method with the proper parameter(s).
You can easily find examples of code (like here) to display an image in binary form from a DB. You would just need to load the images from a path outside of the web root instead of a DB.
Also, you don't need to load it by AJAX - just add an IMG with its SRC pointing to the URL of the handler. Or redirect / open a window if it needs to be downloaded rather than shown.
The issue was how to show an image that is not in the web root via JavaScript.
I created a generic handler (ashx file) that, based on the session values (authentication) and the submitted parameters, would return an image.
That in turn is being called via AJAX.
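For reference, here is a minimal sketch of such a handler, assuming session-based authentication and a hypothetical id parameter and image folder; the path handling guards against directory traversal:

using System.IO;
using System.Web;
using System.Web.SessionState;

// Sketch of a generic handler that streams an image from outside the web
// root only when the session is authenticated. "UserId", the "id" parameter
// and the folder path are all placeholders; a real implementation would also
// check that this particular user may view this particular document.
public class ImageHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Session["UserId"] == null)
        {
            context.Response.StatusCode = 403;
            return;
        }

        string id = context.Request.QueryString["id"];
        string root = @"D:\SecureImages"; // outside the web root
        // Path.GetFileName strips any directory parts a caller might inject.
        string path = Path.Combine(root, Path.GetFileName(id ?? "") + ".jpg");

        if (!File.Exists(path))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "image/jpeg";
        context.Response.WriteFile(path);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

The page (or the AJAX-swapped img element) then just points at ImageHandler.ashx?id=12345, and no file ever has to be copied into the web root.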

Full HTML code from iframes using WebBrowser

I need to get the HTML code of this site (with C#):
http://urbs-web.curitiba.pr.gov.br/centro/defmapalinhas.asp?l=n (only works with IE8)
Using the WebClient class, or HttpWebRequest, or any other library, I do not have access to the HTML code generated dynamically.
So my only solution (I guess) would be to use the WebBrowser Control (WPF).
I was trying and trying, using mshtml.HTMLDocument and SHDocVw.IWebBrowser2,
but it is a mess and I cannot find what I want in it.
It seems there are many "iframe" elements, and inside them there are more "iframe" elements.
I do not know; I tried:
IHTMLElementCollection elcol = htmlDoc.getElementsByTagName("iframe");
var test = htmlDoc.getElementsByTagName("HTML");
var test2 = doc.all;
but I had no progress. Does anyone know how to help me?
Observation / trivia: this is the site that shows where all the buses pass in my city. The site is horrible, only works in IE8, and has serious problems. I would like to use this information to try to create a better service, using Google Maps or Bing Maps later.
The site that I was trying to get the information from is no longer available; the idea of getting the dynamically generated HTML source code was abandoned, and I could not find a solution using a WebBrowser control for WPF.
I believe that today there are other ways to solve this problem.
You need to use the "Frames" object in the WebBrowser control. This collection returns all frames and iframes, if I recall correctly, and you need to look at the Frames collection of each newly discovered frame on the page. It is a recursive discovery loop: you add each frame you find to your array or collection, and for each "unsearched" frame you look at that frame's ".Frames" collection (they will all have a .Count etc., just a typical collection). You do this for every newly discovered frame until, of course, there are no longer any newly discovered frames that haven't had their ".Frames" collection searched.
A function written as above will allow infinitely nested frames to be discovered; I've done this in a VB6 project (I'm happy to give you the source for it if you would like it). The nesting is not preserved in my example, but that is OK, since the nesting structure isn't important and you can work out which frame was where from the order in which frames are added to the collection, as that order follows the hierarchy.
Once you do that, getting the HTML source is pretty straightforward, and I'm sure you know how to do it - probably a .DocumentText, depending on the version of the WB control you are using.
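As a concrete illustration, here is a sketch of that recursive discovery using the WinForms WebBrowser control (an assumption: the WPF control does not expose a managed Frames collection, so hosting the WinForms control is one workaround):

using System;
using System.Collections.Generic;
using System.Windows.Forms;

// Sketch: walk the frame tree recursively and collect each frame's HTML.
// Call with webBrowser.Document.Window after DocumentCompleted has fired.
static void CollectFrameHtml(HtmlWindow window, List<string> results)
{
    try
    {
        if (window.Document != null && window.Document.Body != null)
            results.Add(window.Document.Body.OuterHtml);

        if (window.Frames != null)
            foreach (HtmlWindow child in window.Frames)
                CollectFrameHtml(child, results); // recurse into nested frames
    }
    catch (UnauthorizedAccessException)
    {
        // Cross-domain frames throw; their content is not readable this way.
    }
}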
Also, you say it is not possible to use the HTTP clients to directly grab the source code? I must disagree: once you have the frame objects, you can get the URL from each frame and fetch it as a string with any HTTP-client-like class or framework. The only way they could prevent this is if they accept requests only from a particular referrer (i.e. the referrer must be from their own domain for some of their files), or based on the USER_AGENT, where if it isn't one of the specified browsers they may reject the request and not return data - unlikely, but possible.
However, both the referrer and the user agent can be changed in the HTTP client you are using, so if they are imposing limits based on this sort of thing, you can spoof them very easily and give them the data they expect. Once again, this is low-probability stuff, but it is possible they have set things up this way, especially if their data is proprietary.
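For instance, here is a sketch of fetching one frame's URL directly while supplying a referrer and user agent, in case the server checks them (the header values are placeholders):

using System.IO;
using System.Net;

// Sketch: download a frame's HTML directly, spoofing referrer and user agent.
static string DownloadFrame(string frameUrl)
{
    var request = (HttpWebRequest)WebRequest.Create(frameUrl);
    request.Referer = "http://urbs-web.curitiba.pr.gov.br/centro/defmapalinhas.asp?l=n";
    request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)";

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}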
PS: My first visit to the site ended with IE crashing and reopening the tab :) - terrible site, I agree.
