I have a controller that returns JSON to populate the grid in my view. Depending on the filters, the user can retrieve a large amount of data in one call, so I set the MaxJsonLength to the maximum:
var jsonResult = Json(result, JsonRequestBehavior.AllowGet);
jsonResult.MaxJsonLength = int.MaxValue;
My question is: is it safe to always set MaxJsonLength to the maximum value? What are its drawbacks, if any?
I found this related post, but it didn't answer my question:
What is MaxJSONlength good for?
I need your expertise here. Thanks in advance!
I don't think it is a good idea to set it to MaxValue on each call. That doesn't mean it will break your application, but it may make your application appear broken.
I've had the same problem once - in some situations a user might request a bigger dataset, like 10-50 megabytes, over an internet connection rather than a LAN. Nothing impossible - you can send such data sets - but your application will be dead slow. The browser will be waiting for the data, users will wait a long time before the page is usable, which in turn causes them to do silly stuff like clicking everywhere, cursing and reporting bugs in the application. Is it really a bug? Depends on your requirements, but I would say yes.
What you can and should do is provide pagination. Send small sets of data to users, display them immediately, allow users to work with them, and then send additional data as needed. Or, if it will always be needed, send it automatically in the background, but in smaller packages that will be quickly displayed. Users will get their page ready quickly, and most of the time they won't notice that not all the data is there yet - by the time they need it, it will already be downloaded.
With today's support for AJAX, jQuery and the like, doing this should not be any more difficult than getting and displaying the whole data set at once.
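A minimal sketch of such a paged action (assuming ASP.NET MVC; `GridController`, `Page` and `LoadFilteredRows` are illustrative names, not from the question):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class GridController : Controller {
    // The grid requests one page at a time instead of the whole result set.
    public JsonResult Page(int page = 0, int pageSize = 100) {
        List<object> rows = LoadFilteredRows()   // however the data is queried today
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();
        // Each payload stays small, so the default MaxJsonLength is never an issue.
        return Json(rows, JsonRequestBehavior.AllowGet);
    }

    private IQueryable<object> LoadFilteredRows() {
        // stand-in for the real, filtered data access
        return Enumerable.Range(0, 1000).Cast<object>().AsQueryable();
    }
}
```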
Related
I have a C# .NET Core 2.2 web server process that exposes an API. When a request comes in, the server needs to make its own HTTP request to a database API. Depending on the query, the response from the database can be very large, and in some cases this is large enough that my .NET process crashes with (Memory quota exceeded) in the logs.
The code that sends off the request looks like this:
string endpoint_url = "<database service url>";
var request_body = new StringContent(query, Encoding.UTF8, "<content type>");
request_body.Headers.ContentType.CharSet = "";
try {
    var request_task = Http.client.PostAsync(endpoint_url, request_body);
    if (await Task.WhenAny(request_task, Task.Delay(timeoutSeconds*1000)) == request_task) {
        request_task.Result.EnsureSuccessStatusCode();
        var response = await request_task.Result.Content.ReadAsStringAsync();
        JObject json_result = JObject.Parse(response);
        if (json_result["errors"] is null) {
            return json_result;
        } else {
            // return error
        }
    } else {
        // return timeout error
    }
} catch(Exception e) {
    // return error
}
My question is: what is the best way of protecting my web service from going down when a query returns a large response like this? The .NET Core best practices suggest that I shouldn't be loading the response body into a string wholesale, but they don't really suggest an alternative.
I want to fail gracefully and return an error to the client rather than cause an outage of the .NET service, so setting some kind of limit on the response size would work. Unfortunately, the database service in question does not return a Content-Length header, so I can't just check that.
My web server currently has 512 MB of memory available, which I know is not much, but I'm concerned that this error could happen for a large enough response regardless of the amount of memory available. My main concern is guaranteeing that my .NET service won't crash regardless of the size of the response from the database service.
If Http.client is an HttpClient, you can restrict the maximum amount of data it will read before aborting the operation and throwing an exception via its MaxResponseContentBufferSize property. By default it's set to 2 GB, which explains why your server goes away when it only has 512 MB of RAM, so you can set it to something like 10-20 MB and handle the exception when the limit is exceeded.
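For instance (a minimal, illustrative setup - not your actual Http.client; the 20 MB cap is an arbitrary example):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class BoundedClient {
    // Cap buffered responses at 20 MB instead of the ~2 GB default.
    static readonly HttpClient client = new HttpClient {
        MaxResponseContentBufferSize = 20 * 1024 * 1024
    };

    public static async Task<string> QueryAsync(string url) {
        try {
            // GetAsync buffers the body by default; exceeding the cap throws HttpRequestException.
            var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        } catch (HttpRequestException) {
            return null; // too large or transport failure: report an error instead of crashing
        }
    }
}
```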
The simplest approach you could use is to make the decision based on the returned row count.
If you are using ExecuteReader, it will not return the number of affected rows, but you can overcome this limitation by simply returning two result sets. The first result set has a single row with a single column that tells you the row count; based on that, you can decide whether or not to call NextResult and process the requested data.
If you are using stored procedures, you can use an out parameter to indicate the retrieved row count, via either the @@ROWCOUNT variable or the ROWCOUNT_BIG() function. Yet again, you can branch on that data.
The pro of these solutions is that you don't have to read a single record if the result would outgrow your available space.
The con is that determining the threshold can be hard, because it may depend on the query itself, on one or more of its parameters, on the table size, etc.
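A sketch of the two-result-set variant (assuming SQL Server via ADO.NET; the table, columns and the 100000-row threshold are made up for illustration):

```csharp
using System;
using System.Data.SqlClient;

static class SafeQuery {
    const long MaxRows = 100000; // picking this threshold is the hard part, as noted above

    public static bool TooLarge(long rowCount) { return rowCount > MaxRows; }

    public static void Run(string connectionString, int categoryId) {
        const string sql =
            @"SELECT COUNT_BIG(*) FROM dbo.Records WHERE CategoryId = @cat;
              SELECT Id, Payload FROM dbo.Records WHERE CategoryId = @cat;";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn)) {
            cmd.Parameters.AddWithValue("@cat", categoryId);
            conn.Open();
            using (var reader = cmd.ExecuteReader()) {
                reader.Read();
                long rowCount = reader.GetInt64(0);   // first result set: just the count
                if (TooLarge(rowCount))
                    throw new InvalidOperationException("Result too large: " + rowCount + " rows");
                reader.NextResult();                  // advance to the data only when it is safe
                while (reader.Read()) { /* process the row */ }
            }
        }
    }
}
```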
Well, you definitely shouldn't be creating an unbounded string that could be larger than your heap size, but it's more complicated than just that advice. As others have pointed out, the entire system needs to work together to be able to return large results with a limited memory footprint.
The simplest answer to your direct question - how can I send back an error if the response won't fit in memory - would be to create a buffer of some limited "max" size and read only that much data from the response. If it doesn't fit in your buffer then it's too large and you can return an error.
But in general that's a poor design because the "max" is impossible to statically derive - it depends on server load.
The better answer is to avoid buffering the entire result before sending it to the client, and instead stream the results: read in a buffer full of data and write that buffer - or some processed form of it - out to the client. But that requires some synergy between the back-end API, your service and possibly the client.
If your service has to parse a complete object - as you're doing with JObject.Parse - then you'll likely need to re-think your design in general.
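One way to sketch the bounded-buffer idea (illustrative names; this uses GET where your code POSTs, but the same pattern applies): stream the body with ResponseHeadersRead and stop as soon as a size cap is exceeded.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class CappedReader {
    // Read at most maxBytes from a stream; fail fast instead of buffering an arbitrary amount.
    public static async Task<string> ReadCappedAsync(Stream stream, int maxBytes) {
        var buffer = new MemoryStream();
        var chunk = new byte[8192];
        int read;
        while ((read = await stream.ReadAsync(chunk, 0, chunk.Length)) > 0) {
            if (buffer.Length + read > maxBytes)
                throw new InvalidOperationException("Response exceeded " + maxBytes + " bytes");
            buffer.Write(chunk, 0, read);
        }
        return Encoding.UTF8.GetString(buffer.ToArray());
    }

    // ResponseHeadersRead returns once headers arrive, so the body is streamed, not buffered.
    public static async Task<string> GetCappedAsync(HttpClient client, string url, int maxBytes) {
        using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead)) {
            response.EnsureSuccessStatusCode();
            using (var stream = await response.Content.ReadAsStreamAsync())
                return await ReadCappedAsync(stream, maxBytes);
        }
    }
}
```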
I'm working on a web app project and trying to figure out how to display my answer on the second web page.
I have put a text box on my first web page, and I have verified the calculation in my application, since I see the correct answers in the textbox when I debug it.
Ideally I want to remove this textbox and have the answers, which I currently display in the textbox, shown on a label on the next web page. Here is the calculation part of my code:
var cost = ((int)duration.TotalMinutes) * 0.35m;
txtCost.Text = cost.ToString("c");
I'd like my answer to appear on my second web page and not on the first. In the button click event handler of the first page I tried:
Session["Cost"] = cost;
and on the second page:
double cost = (double)(Session["Cost"]);
lblDisplay.Text = cost.ToString("c");
but every time I debug and run it, I always get $0.00 displayed on my label. Can someone help me fix this?
To share a value between two views in an MVC application, try the following:
// To save into the Cache
System.Web.HttpContext.Current.Cache["CostKey"] = cost;
// To retrieve the cached value (Cache stores object; 'as double' won't compile for a value type)
var cachedValue = (double?)System.Web.HttpContext.Current.Cache["CostKey"];
For Session State, have a look at this link
In an ASP.NET WebForms application, you can pass data around in various ways:
Cache
See the Learning Curve answer for examples.
However, an object put in the cache is not guaranteed to still be there when you look for it again: if the server experiences a memory shortage or the like, ASP.NET manages the cache and evicts objects on its own to maintain memory availability. This is in contrast with ApplicationState and SessionState, where objects are kept until they are removed manually, or until the Application ends or the Session expires.
Session and Application states
You can put any object into SessionState and retrieve it elsewhere in your code. However, you need to cast it appropriately, as SessionState stores everything as object. E.g. if you store a number, you must do the casting yourself when retrieving it, just as you already did.
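One pitfall worth noting with that casting (a self-contained illustration of the boxing behavior SessionState relies on, not SessionState itself): a value stored as decimal - which your 0.35m calculation produces - cannot be unboxed as double.

```csharp
using System;
using System.Globalization;

class Program {
    static void Main() {
        // SessionState stores values as object, so a decimal goes in boxed.
        object stored = 0.35m * 95;   // what Session["Cost"] would hold: a boxed decimal

        // (double)stored would throw InvalidCastException: unboxing must use the exact stored type.
        decimal cost = (decimal)stored;
        Console.WriteLine(cost.ToString("0.00", CultureInfo.InvariantCulture)); // prints 33.25
    }
}
```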
The reason it doesn't work is perhaps that you're trying to retrieve it from within another user's SessionState - yes, SessionState is a per-user structure. If you need to set the value from one device and use it on another, use ApplicationState:
Application["cost"] = cost;
Redirecting Response
Using this technique, you force the browser to request another page from the server, and you specify the full query string, including the variables you need. E.g.:
var destination = "/someOtherPage.aspx?cost=" + Server.UrlEncode("34.65");
Response.Redirect(destination);
As an alternative, you can use Server.Transfer("someOtherPage.aspx") to save the round trip. However, in that case the browser doesn't change the address in the address bar, so the user is misled into thinking they are browsing one page when in fact it is someOtherPage.aspx.
I want to build a grid. I get 1000 rows of data from SQL Server through WCF, then at first I put 10 rows into the grid in the view; as the user scrolls, the controller provides rows 10-20, then 20-30, and so on up to rows 990-1000. But I must go to SQL Server through WCF only once for all 1000 rows (I cannot go to SQL Server every time, e.g. for 0-10, 10-20, 20-30). I put 10 rows into the grid in the view; the problem is the remaining 990 rows in the controller.
How do I keep those 990 rows of data in the controller?
You can make use of Caching for this
Either use the System.Web.Caching
Or use MemoryCache
Depending on your setup, you might also be able to use OutputCache:
[OutputCache(Duration=10, VaryByParam="none")]
public ActionResult Result()
{
    return Data();
}
See http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/improving-performance-with-output-caching-cs for more around this.
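A minimal sketch of the MemoryCache route (the class, key and 10-minute lifetime are made up for illustration): fetch the 1000 rows once, park them in the cache, and serve a slice per scroll request.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

static class GridCache {
    static readonly MemoryCache cache = MemoryCache.Default;

    // Store all 1000 rows once, e.g. right after the single WCF call.
    public static void StoreRows(string key, List<string> rows) {
        cache.Set(key, rows, DateTimeOffset.Now.AddMinutes(10)); // hypothetical lifetime
    }

    // Serve one page at a time as the user scrolls.
    public static List<string> GetPage(string key, int skip, int take) {
        var rows = cache.Get(key) as List<string>;
        if (rows == null) return new List<string>(); // evicted or expired: caller must re-fetch
        return rows.Skip(skip).Take(take).ToList();
    }
}
```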
Your description is quite confusing - sorry if I misunderstood your requirement.
If it involves 1000+ rows of data, session is not a good option, especially if your program makes other use of session.
Since you are using MVC, you can take advantage of options such as ViewData and TempData. You can read more about them here.
I have used TempData before, and it can handle a large amount of data (I did not count how much, but it was quite large), so it should be a much better option than session.
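As a rough sketch of the TempData route (a hypothetical controller; GridController, First, Next and the use of int rows are illustrative, not from the question):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class GridController : Controller {
    // First request: the single WCF call, then park the remaining rows in TempData.
    public ActionResult First() {
        List<int> all = FetchAllRowsFromWcf();
        TempData["Rest"] = all.Skip(10).ToList();     // rows 10..999 stay server-side
        return Json(all.Take(10).ToList(), JsonRequestBehavior.AllowGet);
    }

    // Scroll requests: slice from TempData without touching SQL Server again.
    public ActionResult Next(int skip, int take) {
        var rest = TempData["Rest"] as List<int> ?? new List<int>();
        TempData.Keep("Rest");                        // TempData normally survives only one request
        return Json(rest.Skip(skip - 10).Take(take).ToList(), JsonRequestBehavior.AllowGet);
    }

    private List<int> FetchAllRowsFromWcf() {
        return Enumerable.Range(0, 1000).ToList();    // stand-in for the real WCF call
    }
}
```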
I am looking at developing a SPA application probably with Angular.
One of the challenges we face is that we have a considerable amount of financial calculation that comes into play while the user is entering values on a form. Here's a simplified example:
The user is entering a detail line on a sales transaction entry form. As they enter the Net amount, the system should calculate the Sales Tax amount and Gross value based on the net value entered (as I say, it does get more complex than this).
The important thing to note here, is that as the user tabs out of the Net field, they should see the Tax and Gross fields update.
So I see two high-level options here: either code this calculation in JavaScript, or make a service call to perform the calculation.
Either way, I want the Angular style model to be updated with the result of the calculation which will cause the view to update.
Where possible, I would prefer to do this through a service call, as that opens the door to re-using this logic from other clients in the future. I also think that coding this sort of logic in C# should be faster to develop and more maintainable (and it keeps the logic in one place).
Ideally I would like this logic in the C# entity in the service that models the transaction.
How should I therefore go about calling such server side logic?
Should I somehow pass the whole client-side representation of the model back up to the service and have it calculate the other values? I'm not sure how I would do this in terms of telling the service which values actually need calculating. Or should I have (lots of) individual service methods named things like CalculateTax(net, taxPercentage) that return the Tax amount?
Or is there some other method or pattern that I am missing altogether here?
Many thanks
I would create an API endpoint that receives the calculation you need and the values, and returns the result. This would be the same as getting a single record from a normal CRUD API, making your Angular service quite simple:
angular.module('fin', []).service('calculation', function($http) {
    return {
        getResult: function(calcMethod, values) {
            return $http({
                url: 'http://backend.com/calculate',
                method: 'GET',
                params: {
                    calcMethod: calcMethod,
                    values: values
                }
            });
        }
    };
});
And then you could call it from your controller, something like this:
calculation.getResult('Sales Tax', [$scope.value1, $scope.value2]).success(function(res) {
    $scope.result = res;
});
With all my love for JavaScript, I would never trust financial calculations to it, especially on the client side.
But it depends on what exactly you need to calculate. I mean:
1) (your case) The server gives you "source" values (like the user's amount etc.) and a percentage (or the user enters it), AND you don't pass this data back to the server - then you definitely can, and I think should, do this on the client side.
2) If you have something like a price and a quantity of items, you should calculate it on the server (you can do a pre-calculation in the UI), but confirm it from the server too.
Now, according to the business scenario you have provided, it seems the calculations are such that you would not want others to know how they are done. The calculations are best kept on the server side.
Whenever you tab out of the net amount field, you can call the server to fetch the sales tax and the other values you calculate.
You can go about it like this:
You have the methods to calculate the sales tax, which need a net amount. So whenever you enter the net amount and tab out of the field, you can call the individual service to get the sales tax amount and the gross amount. Here your input to the service is not the whole client-side model, but just the net amount that is bound to the controller.
I am relatively new to ColdFusion (using ColdFusion 10), and I have a question regarding creating a table that updates in real time.
Currently I have a C# application that writes stock prices to a CSV (text) file every 2 seconds, and I would like to reflect these changes in a table on a web page as they happen. I know I could refresh the entire table every 2 seconds, but this would generate a lot of requests to the server, and I would like to know if there is a better way of doing it. Could this be easily achieved using ColdFusion 10's new HTML5 WebSockets functionality?
Any advice/guidance on which way to proceed or how to achieve this would be greatly appreciated!
Thanks, AlanJames.
I think you could rewrite your question and get at least 5 answers in the first hour.
Now, to answer it, if I understood well what you're asking:
IMHO websockets aren't there yet; if your website is for the general population and you are not 100% sure they're coming with a recent Chrome or FF, forget it.
You could use a javascript websocket library which gracefully falls back to Flash or AJAX HTTP polling, like http://socket.io/, or a cloud service like pusher.com. But this will complicate your life, because you have 2-3 times more work in the backend if you implement both polling and websockets.
Regarding the number of requests: if you want real-time data on screen, you've got to have a server that supports it.
You could optimize by requesting once and refreshing the data for the whole table, not per cell. You'd get all the new data at once and update only the cells that changed with jQuery - not pulling all the data again, or the whole table HTML, just the minimal amount of data.
AJAX polling would certainly help with the number of requests, though how long each request stays open is another possible problem. You could do polling with BlazeDS, which is available even in ColdFusion 9.
Some pages to look at:
http://www.bennadel.com/blog/2351-ColdFusion-10-Using-WebSockets-To-Push-A-Message-To-A-Target-User.htm
http://www.bennadel.com/blog/1956-Very-Simple-Pusher-And-ColdFusion-Powered-Chat.htm
http://nil.checksite.co.uk/index.cfm/2010/1/28/CF-BlazeDS-AJAX-LongPolling-Part1
There isn't a way to get live updates every 2 seconds without making some kind of request from your page to your server - otherwise, how would it know whether anything has changed?
Personally, I would write a CFC method to read in your text file and see if it has changed, then poll that method every few seconds using jQuery and pass back any updated content.
Without knowing the details of your text file etc. it's hard to write anything exact. Fundamentally, your CFC method would have to store a copy of the text file data (in a SESSION variable, probably), so it could compare it with the latest read-in data and tell if anything has changed. If it has changed, it sends back a structure with the updates; otherwise it returns a response saying it's unchanged.
Your CFC code would look something like this:
<cffunction name="check_update" access="remote" output="false">
    <!--- first call of the session: nothing stored yet, so default to an empty string --->
    <cfparam name="SESSION.file_content" default="">
    <cfset response = structNew()>
    <cffile action="read"
        file="path\to\your\textfile.txt"
        variable="file_content">
    <cfif file_content NEQ SESSION.file_content>
        <cfset response.updated = true>
        <cfset SESSION.file_content = file_content>
        <cfset response.content = structNew()>
        <!--- code here to populate 'content' variable with updated info --->
    <cfelse>
        <cfset response.updated = false>
    </cfif>
    <cfreturn response>
</cffunction>
Then the jQuery code to poll that data would look like this:
var update_interval;
var update_pause = 3000;

function check_update() {
    var request = {
        returnformat: 'json',
        queryformat: 'column',
        method: 'check_update'
    };
    $.getJSON("path/to/your/service.cfc", request, function(data) {
        if (data.UPDATED == true) {
            /* code here to iterate through data.CONTENT */
            /* and render out your updated info to your table */
        }
    });
}

$(document).ready(function () {
    // pass the function itself, not the result of calling it
    update_interval = setInterval(check_update, update_pause);
});
So once the DOM is ready, we create an interval that in this case fires every 3 seconds (3000 ms) and calls the check_update() function. That function makes a call to your CFC and checks the response. If the response's UPDATED value is true, it runs whatever code renders your updates.
That's the most straightforward method of achieving what you need, and it should work regardless of browser. In my experience the overhead of polling a CFC like that is really very small, and the amount of data you're transferring will be tiny, so it should be no problem to handle.
I don't think there's any other method that would be more lightweight or easier to put together. The benefits of long polling or SSE (with dodgy browser support) are negligible and not worth the programming overhead.
Thanks, Henry