I am relatively new to ColdFusion (using ColdFusion 10) and I have a question regarding creating a real-time updated table.
Currently I have a C# application that writes stock prices to a CSV (text) file every 2 seconds, and I would like to reflect these changes as they happen in a table on a web page. I know I could have the entire table refresh every 2 seconds, but this would generate a lot of requests to the server, and I would like to know if there is a better way of doing it. Could this be easily achieved using ColdFusion 10's new HTML5 WebSockets functionality?
Any advice/guidance on which way to proceed or how to achieve this would be greatly appreciated!
Thanks, AlanJames.
I think you could rewrite your question and get at least 5 answers in the first hour.
Now to answer it, if I've understood what you're asking correctly.
IMHO websockets aren't there yet. If your website is aimed at a wide population and you are not 100% sure that visitors are coming with a recent Chrome or Firefox, forget it.
You could use a JavaScript websocket library which gracefully falls back to Flash or AJAX HTTP polling, like http://socket.io/ , or a cloud service like pusher.com . But this will complicate your life, because you'll have two to three times more backend work if you implement both polling and websockets.
Regarding the number of requests: if you want real-time data on screen, you have to have the server traffic to support it.
You could optimize by requesting once and refreshing data for the whole table rather than per cell. You'd get all the new data at once and update only the cells that changed with jQuery, so you're not pulling all the data again, or the whole table's HTML, just the minimal amount of data.
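That per-cell update can be sketched in plain JavaScript (the data shape and function name here are made up for illustration):

```javascript
// Return only the entries whose price changed between two snapshots,
// so the page can update just those cells instead of redrawing the table.
function changedPrices(oldPrices, newPrices) {
  var changes = {};
  for (var symbol in newPrices) {
    if (newPrices[symbol] !== oldPrices[symbol]) {
      changes[symbol] = newPrices[symbol];
    }
  }
  return changes;
}

// With jQuery you would then touch only the affected cells, e.g.:
// $.each(changedPrices(prev, curr), function (symbol, price) {
//   $('#price-' + symbol).text(price);
// });
```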
AJAX polling would certainly help with the number of requests, though how long each request stays open is another potential problem. You could do polling with BlazeDS, which is available as far back as ColdFusion 9.
Some pages to look at:
http://www.bennadel.com/blog/2351-ColdFusion-10-Using-WebSockets-To-Push-A-Message-To-A-Target-User.htm
http://www.bennadel.com/blog/1956-Very-Simple-Pusher-And-ColdFusion-Powered-Chat.htm
http://nil.checksite.co.uk/index.cfm/2010/1/28/CF-BlazeDS-AJAX-LongPolling-Part1
There isn't a way to get live updates every 2 seconds without making some kind of request from your page to your server, otherwise how would it know if anything has changed?
Personally I would write a CFC method to read in your text file and see if it's changed, then poll that method every few seconds using jQuery, returning whether it has changed and passing back any updated content.
Without knowing the details of your text file etc. it's hard to write anything accurate. Fundamentally your CFC method would have to store (in a SESSION var probably) a copy of the text file data, so it could compare it with the latest read-in data and tell if anything has changed. If it has changed then send a structure back with the updates, or return a response saying it's unchanged.
Your CFC code would look something like this:
<cffunction name="check_update" access="remote" output="false">
    <cfset var response = structNew()>
    <cffile action="read"
        file="path\to\your\textfile.txt"
        variable="file_content"
    >
    <!--- initialise the stored copy on the first call --->
    <cfif NOT structKeyExists(SESSION, "file_content")>
        <cfset SESSION.file_content = "">
    </cfif>
    <cfif file_content NEQ SESSION.file_content>
        <cfset response.updated = true>
        <cfset SESSION.file_content = file_content>
        <cfset response.content = structNew()>
        <!--- code here to populate 'content' variable with updated info --->
    <cfelse>
        <cfset response.updated = false>
    </cfif>
    <cfreturn response>
</cffunction>
Then the jQuery code to poll that data would look like this:
var update_interval;
var update_pause = 3000;

function check_update() {
    var request = {
        returnformat: 'json',
        queryformat: 'column',
        method: 'check_update'
    };
    $.getJSON("path/to/your/service.cfc", request, function(data) {
        if (data.UPDATED == true) {
            /* code here to iterate through data.CONTENT */
            /* and render out your updated info to your table */
        }
    });
}

$(document).ready(function () {
    // Pass the function reference itself; writing check_update() here
    // would call it once and hand its return value to setInterval.
    update_interval = setInterval(check_update, update_pause);
});
So once the DOM is ready we create an interval that in this case fires every 3 seconds (3000ms) and calls the check_update() function. That function makes a call to your CFC, and checks the response. If the response UPDATED value is true then it runs whatever code to render your updates.
That's the most straightforward method of achieving what you need, and it should work regardless of browser. In my experience the overhead of polling a CFC like that is really very small indeed, and the amount of data you're transferring will be tiny, so it should be no problem to handle.
I don't think there's any other method that could be more lightweight / easy to put together. The benefits of long polling or SSE (with dodgy browser support) are negligible and not worth the programming overhead.
Thanks, Henry
I have a C# .NET Core 2.2 web server process that exposes an API. When a request comes in, the server needs to make its own HTTP request to a database API. Depending on the query, the response from the database can be very large, and in some cases large enough that my .NET process crashes with (Memory quota exceeded) in the logs.
The code that sends off the request looks like this:
string endpoint_url = "<database service url>";
var request_body = new StringContent(query, Encoding.UTF8, "<content type>");
request_body.Headers.ContentType.CharSet = "";
try {
    var request_task = Http.client.PostAsync(endpoint_url, request_body);
    if (await Task.WhenAny(request_task, Task.Delay(timeoutSeconds * 1000)) == request_task) {
        request_task.Result.EnsureSuccessStatusCode();
        var response = await request_task.Result.Content.ReadAsStringAsync();
        JObject json_result = JObject.Parse(response);
        if (json_result["errors"] is null) {
            return json_result;
        } else {
            // return error
        }
    } else {
        // return timeout error
    }
} catch (Exception e) {
    // return error
}
My question is, what is the best way of protecting against my web service going down when a query returns a large response like this? The .NET Core best practices suggest that I shouldn't be loading the response body into a string wholesale, but doesn't really suggest an alternative.
I want to fail gracefully and return an error to the client rather than causing an outage of the .NET service, so setting some kind of limit on the response size would work. Unfortunately the database service in question does not return a Content-Length header so I can't just check that.
My web server currently has 512MB of memory available, which I know is not much, but I'm concerned that this error could happen for a large enough response regardless of the amount of memory I have available. My main concern is guaranteeing that my .NET service won't crash regardless of the size of the response from the database service.
If Http.client is an HttpClient, you can restrict the maximum amount of data it will read before aborting the operation and throwing an exception via its MaxResponseContentBufferSize property. By default it's set to 2GB, which explains why your server goes away when it only has 512MB of RAM, so you can set it to something like 10-20MB and handle the exception when that limit is exceeded.
The simplest approach you could use is to make a decision based on the returned row count.
If you are using ExecuteReader then it will not report the affected rows up front, but you can overcome this limitation by returning two result sets. The first result set would have a single row with a single column that tells you the row count; based on that you can decide whether or not to call NextResult and process the requested data.
If you are using stored procedures then you can use an out parameter to report the retrieved row count, using either the @@ROWCOUNT variable or the ROWCOUNT_BIG() function. Yet again you can branch on that value.
The pro side of these solutions is that you don't have to read any records if they would outgrow your available space.
The con side is that determining the threshold can be hard, because it could depend on the query itself, on one (or more) of its parameters, on the table size, etc.
Well, you definitely shouldn't be creating an unbounded string that could be larger than your heap size, but it's more complicated than just that advice. As others are pointing out, the entire system needs to work together to be able to return large results with a limited memory footprint.
The simplest answer to your direct question - how can I send back an error if the response won't fit in memory - would be to create a buffer of some limited "max" size and read only that much data from the response. If it doesn't fit in your buffer then it's too large and you can return an error.
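The bounded-buffer idea is language-agnostic; here is a minimal sketch in JavaScript (function name and limit are made up; the same logic applies to reading an HTTP response stream chunk by chunk in C#):

```javascript
// Accumulate response chunks, but bail out as soon as the running total
// exceeds a fixed limit, instead of buffering an unbounded string.
function collectWithLimit(chunks, maxBytes) {
  var total = 0;
  var parts = [];
  for (var i = 0; i < chunks.length; i++) {
    total += chunks[i].length;
    if (total > maxBytes) {
      throw new Error("response exceeded " + maxBytes + " bytes");
    }
    parts.push(chunks[i]);
  }
  return parts.join("");
}
```

The caller catches the error and turns it into a clean "response too large" reply to the client instead of an out-of-memory crash.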
But in general that's a poor design because the "max" is impossible to statically derive - it depends on server load.
The better answer is to avoid buffering the entire result before sending it to the client and instead stream the results to the client - read in a buffer full of data and write out that buffer - or some processed form of that buffer - to the client. But that requires some synergy between the back-end API, your service and possibly the client.
If your service has to parse a complete object, as you're showing with JObject.Parse, then you'll likely need to re-think your design in general.
I am currently facing an issue where it takes quite a while to process information from a server, and I would like to provide active feedback for the user to know what is going on while the application appears to be just sitting around.
A little about the application: it allows the user to pull all databases from a specified server along with all content within those databases. This can take quite a while sometimes, since some of our databases can reach 2TB in size. Each server can contain hundreds of databases and so as a result, if I try to load a server with 100 databases, and 30% of those databases are over 100GB in size, it takes a good couple of minutes before the application is able to run effectively again.
Currently I just have a simple loading message that says: "Please wait, this could take a while...". However, in my opinion, this is not really sufficient for something that can take a few minutes.
So as a result, I am wondering if there is a way to track the progress of the SqlConnection object as it is executing the specified query? If so, what kind of details would I be able to provide and are there any readily available resources to look over and better understand the solution?
I am hopeful that there is a way to do this without having to recreate the SqlConnection object altogether.
Thank you all for your help!
EDIT
Also, as another note; I am NOT looking for handouts of code here. I am looking for resources that will help me in this situation if any are available. I have been looking into this issue for a few days already, I figure the best place to ask for help is here at this point.
Extra
A more thorough explanation: I am wondering if there is a way to provide the user with names of databases, tables, views, etc that are currently being received.
For example:
Loading Table X From Database Y On Server Z
Loading Table A From Database B On Server Z
The SqlConnection has an InfoMessage event to which you can attach a handler.
Furthermore, you have to set the FireInfoMessageEventOnUserErrors property to true.
Now you can do something like:
private void OnInfoMessage(object sender, SqlInfoMessageEventArgs e)
{
    for (var index = 0; index < e.Errors.Count; index++)
    {
        var message = e.Errors[index];
        // use the message object
    }
}
Note that you should only evaluate messages with an error code lower than 11 (everything above is a 'real' error).
Depending on what command you are using, sometimes the server already generates such info messages itself (for example when verifying a backup).
Anyway, you can also use this to report progress from a stored procedure or a query.
Pseudocode (I'm not an SQL guy):
RAISERROR('START', 1, 1) WITH NOWAIT; -- fires your OnInfoMessage handler
WHILE
    -- Execute something
    RAISERROR('Done something', 1, 1) WITH NOWAIT; -- fires your OnInfoMessage handler
    -- etc.
END
Have fun with the parsing, because remember: such messages can also be generated by the server itself (so it is probably a good idea to start your own, relevant "errors" with a static sequence which cannot occur under normal circumstances).
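To illustrate that sentinel idea (sketched in JavaScript for brevity; the prefix and function name are made up), the handler would only treat messages carrying your own marker as progress updates:

```javascript
// Messages raised by your own RAISERROR calls carry a fixed prefix;
// anything else is assumed to be a server-generated message and ignored.
var PROGRESS_PREFIX = "MYAPP-PROGRESS:";

function parseProgress(message) {
  if (message.indexOf(PROGRESS_PREFIX) === 0) {
    return message.slice(PROGRESS_PREFIX.length).trim();
  }
  return null; // not one of ours
}
```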
I have a controller that returns a json that populates the grid in my view. Depending on the filters, the user can retrieve large amount of data in one call so I set the MaxJsonLength to max:
var jsonResult = Json(result, JsonRequestBehavior.AllowGet);
jsonResult.MaxJsonLength = int.MaxValue;
My question is: is it safe to always set the MaxJsonLength to the max value? What are its drawbacks, if any?
I found this related post but it didn't answer my question.
What is MaxJSONlength good for?
I need your expertise here. Thanks in advance!
I don't think it is a good idea to set it to MaxValue on each call. It does not mean it will break your application, but it may make your application appear broken.
I've had the same problem once: in some situations a user might request a bigger dataset, like 10-50 megabytes, over an internet connection rather than a LAN. Nothing impossible; you can send such data sets. But your application will be dead slow. The browser will be waiting for the data, and users will wait a long time before the page is usable, which in turn causes them to do silly stuff like clicking everywhere, cursing, and reporting bugs in the application. Is it really a bug? Depends on your requirements, but I would say yes.
What you can and should do is provide pagination. Send small sets of data to users, display them immediately, let users work with them, and then send additional data as needed. Or, if it will always be needed, send it automatically in packages in the background, but in smaller sets that will be displayed quickly. Users will get their page ready quickly, and most of the time they won't notice that not all the data is there yet; by the time they need it, it will already be downloaded.
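The server-side half of that pagination is just an offset/limit slice; a minimal sketch (the names are made up):

```javascript
// Return one page of rows plus the total count, so the client can
// render a pager while fetching further pages in the background.
function getPage(rows, pageSize, pageNumber) {
  var start = (pageNumber - 1) * pageSize;
  return {
    total: rows.length,
    rows: rows.slice(start, start + pageSize)
  };
}
```

Each AJAX call then returns one small, fast payload instead of the full dataset.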
With today's support for AJAX, jQuery and the like, doing this should not be any more difficult than getting and displaying the whole data set at once.
I want to store data from a server side variable in a client side one; is this good practice?
I have an app which uses a web service, and I don't want to expose the IP in the source code, so it would be good if I could set a client side variable with that IP.
Now someone told me that, for example, getting values from the Session and storing them in a JS variable could be considered a "bad thing", as it represents an XSS hole, and I don't want my website to be marked as an "unsafe" one.
The reason I want to store the value in a client side variable is so that I can use jQuery AJAX, so that the client does not have to reload the page for every request.
Can you help me out?
There's nothing inherently wrong with saving server side data on the client side.
Here's an easy way, for PHP:
<script>var someValue = "<?php echo $some_value; ?>";</script>
This saves a PHP value in the someValue JavaScript variable.
For more complicated data structures, here's a nice way to convert between the two languages:
<script>var someValue = <?php echo json_encode($some_value); ?>;</script>
The output of json_encode is already a valid JavaScript literal, so there's no need to wrap it in quotes or decode it on the client.
This only becomes a bad idea when the data you want to save client side is sensitive. There's no way to tell what your intentions are, so you have to use your best judgment.
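One concrete precaution when inlining data like this: a value containing `</script>` would terminate the script block early. Escaping `<` in the serialized output closes that hole (this mirrors what PHP's json_encode does with its JSON_HEX_TAG flag); a sketch in JavaScript, with a made-up function name:

```javascript
// Serialize a value so it is safe to inline inside a <script> block:
// "<" is escaped, so a "</script>" in the data cannot close the tag early.
function embedJson(value) {
  return JSON.stringify(value).replace(/</g, "\\u003c");
}
```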
Code Generating in JS
<script >
<%= SomeMethodFromCodeBehind() %> //JS goes here.
</script >
Pros: You get access to everything you want on the server.
Cons: You are writing JS using C#, and C# string methods aren't exactly a templating engine.
You will think you have access to Session, but only at code generation time!
If you mix user input into the results, then this is "JavaScript injection", which gets lumped in with XSS.
Code Generating Just the Data
<script >
var myNumber = parseInt(<%= Convert.ToInt32(Session["TheNumber"]) %>,10);
var id = '<%= TextBox.ClientID %>';
</script >
The above just generates the data, not all the code, so it's easier to see what is potential user input (maybe the Session part) and then force it to a suitable datatype, or do some JS escaping on the strings.
In my experience, I've only needed to ever do this for ClientIDs and sometimes properties of controls that don't have a clientside API.
Calling Services
If you don't code-generate a function that returns 42, maybe you can leave the function on the server and just have the JS gather the parameters. For one, calling a web service isn't going to be subject to XSS: the user would have to attack the parameters of the web service, whereas with code generation the user can perform all sorts of attacks if only he can figure out how to get his code output (especially on someone else's browser). So now you get to create a JS-friendly web API. Here are some things that can go wrong:
<script >
//Things that used to be somewhat secret are potentially public now.
PageMethods.GetSession("Blah", onSuccess);
//Wow, something that used to be hard for the user to tamper with is now convenient to
//tamper with. Now Session is ALL user input (and hence evil)
PageMethods.SetSession("Blah",42);
//Okay, now the user has the ability to read your secrets, conveniently
PageMethods.Encrypt("Blah", onSuccess);
PageMethods.Decrypt("Blah", onSuccess);
if(password=="xyzzy")
{
PageMethods.ShipTheGoods(address,goods); //User can call this directly.
}
</script >
(I'm not promoting PageMethods, but they sure are shorter for sample code than the alternatives)
The above sort of code is another common issue when you write JS code when you're used to writing server side code. One book called this overly granular APIs and attacks on control of flow (that if statement that doesn't really protect ShipTheGoods from being called without a password)
Anyhow, in a JS page, the way to track state (i.e. Session) is just a variable, e.g. var customerInFocus = 'joe'. You can use code generation to output a value; then, when you are ready to save, send it back to the server in a page method (or web service) call and treat all of those parameters as user input that has possibly (probably) been tampered with along the way.
I am away from my development workstation, so I thought I'd ask this in hopes of getting an answer when I try it tomorrow. I have a two part question relating to a web application I built using C#, jQuery and jQuery DataTables:
1) I know that we can set the value of fnFilter as mentioned on their page using something like:
var oTable;
$(document).ready(function() {
oTable = $('#example').dataTable();
/* Filter immediately */
oTable.fnFilter( 'test string' );
} );
However, is there a way to retrieve the value entered by the user in the search bar? I was thinking along the lines of
var aContainer= oTable.fnFilter()
or
var aContainer= oTable.fnFilter($(this).html())
2) My application has to retrieve values from another source on the web. These are the values displayed in the datatable. Most of my processing (counting, etc.) is done client side and has drastically slowed down the web app. Does anyone know of any suggestions to increase the performance of client side scripts, specifically DataTables?
If your datatable is really instantiated as oTable = $('#example').dataTable(); then doing this:
var textEntered = $('#example_filter input:text')[0].value;
Should return whatever the user entered on the field for filtering.
In answer to #1, you can get the value of the text entered into the search box by doing
// Assume the table's id attribute is 'blah'
var search_string = $('#blah_filter>input').val();
As far as #2, have you considered server-side processing of the data and sending the result to the client?
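With server-side processing, the search and paging happen before anything reaches the browser. A minimal simulation of that endpoint logic in plain JavaScript (the parameter and field names loosely follow DataTables' server-side protocol; treat the exact shape as an assumption):

```javascript
// Apply a global search plus offset/limit paging to a row set, returning
// the counts a DataTables client expects alongside the visible page.
function processRequest(rows, params) {
  var term = (params.search || "").toLowerCase();
  var filtered = term
    ? rows.filter(function (row) {
        return Object.keys(row).some(function (key) {
          return String(row[key]).toLowerCase().indexOf(term) !== -1;
        });
      })
    : rows;
  return {
    recordsTotal: rows.length,
    recordsFiltered: filtered.length,
    data: filtered.slice(params.start, params.start + params.length)
  };
}
```

The client then only ever receives one page of rows, which keeps the counting and rendering work off the browser.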
This article might be a big help if you decide to write server side code. I'm researching it myself now (and not looking forward to implementing custom filtering!).