Complex financial calculations/business rules in Angular - C#

I am looking at developing a SPA, probably with Angular.
One of the challenges we face is that we have a considerable amount of financial calculations that come into play while the user is entering values on a form. Here's a simplified example:
The user is entering a detail line on a sales transaction entry form.
As they enter the Net amount, the system should calculate the Sales
Tax amount and Gross value based on the net value entered (as I say,
it does get more complex than this).
The important thing to note here, is that as the user tabs out of the Net field, they should see the Tax and Gross fields update.
So I see two high-level options here: either code this calculation in JavaScript, or make a service call to perform the calculation.
Either way, I want the Angular style model to be updated with the result of the calculation which will cause the view to update.
Where possible, I would prefer to do this through a service call, as that way it opens the door to re-using this logic from other clients in the future. I also think that coding this sort of logic in C# should be faster to develop and more maintainable (and keeps the logic in one place).
Ideally I would like this logic in the C# entity in the service that models the transaction.
How should I therefore go about calling such server-side logic?
Should I somehow pass the whole client-side representation of the model back up to the service and have it calculate the other values? I am not sure how I would do this in terms of telling the service which values actually need calculating.
Or should I have (lots of) individual service methods named things like CalculateTax(net, taxPercentage) that return the Tax amount?
Or is there some other method or pattern that I am missing altogether here?
Many thanks

I would create an API endpoint that receives the calculation you need and the values, and returns the result. This would be the same as getting a single record from a normal CRUD API, making your Angular service quite simple:
angular.module('fin', []).service('calculation', function ($http) {
    return {
        getResult: function (calcMethod, values) {
            return $http({
                url: 'http://backend.com/calculate',
                method: 'GET',
                params: {
                    calcMethod: calcMethod,
                    values: values
                }
            });
        }
    };
});
And then you could just call it from your controller, something like this:
calculation.getResult('Sales Tax', [$scope.value1, $scope.value2]).success(function (res) {
    $scope.result = res;
});
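For completeness, the server side could be a single endpoint that dispatches on the calculation name. Here is a minimal C# Web API sketch of that idea; the route, the controller name and the hard-coded 20% default tax rate are illustrative assumptions, not anything from the question:

using System;
using System.Web.Http;

// Hypothetical endpoint for the Angular service above (assumes attribute routing is enabled).
public class CalculationController : ApiController
{
    [HttpGet]
    [Route("calculate")]
    public IHttpActionResult Calculate(string calcMethod, [FromUri] decimal[] values)
    {
        switch (calcMethod)
        {
            case "Sales Tax":
            {
                var net = values[0];
                var taxRate = values.Length > 1 ? values[1] : 0.20m; // assumed default rate
                var tax = Math.Round(net * taxRate, 2);
                return Ok(new { Tax = tax, Gross = net + tax });
            }
            default:
                return BadRequest("Unknown calculation: " + calcMethod);
        }
    }
}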

With all my love for JavaScript, I would never trust financial calculations to it, especially on the client side.
But it depends on what exactly you need to calculate. I mean:
1) (Your case) The server gives you "source" values (like the user's amount etc.) and a percentage (or the user enters it), AND you don't pass this data back to the server. Then you definitely can, and I think should, do this on the client side.
2) If you have something like a price and a quantity of items, you should calculate it on the server (you can do a pre-calculation in the UI), but confirm it from the server too, as sketched below.
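To illustrate point 2, here is a minimal C# sketch of that server-side confirmation step; the OrderLine shape and the one-cent tolerance are assumptions made purely for illustration:

using System;

// Hypothetical line item sent up from the client.
public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
    public decimal ClientTotal { get; set; } // what the UI pre-calculated
}

public static class OrderValidator
{
    // Recompute the total on the server and reject the request if the
    // client-side pre-calculation does not match (one-cent tolerance assumed).
    public static bool ConfirmTotal(OrderLine line)
    {
        var serverTotal = Math.Round(line.UnitPrice * line.Quantity, 2);
        return Math.Abs(serverTotal - line.ClientTotal) < 0.01m;
    }
}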

Now, according to the business scenario you have provided, it seems the calculations are such that you would not want others to know how they are done. The calculations are best kept on the server side.
Whenever you tab out of the net amount field, you can call the server to fetch the sales tax and the other values you calculate.
You can go about it like this.
You have the methods to calculate the sales tax. Now this needs a net amount. So whenever you enter the net amount and tab out of the field, you can call the individual service to get the sales tax amount and the gross amount. Here your input to the service is not the whole client-side model but just the net amount that is bound to the controller, as in the sketch below.
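A minimal sketch of such an individual endpoint might look like the following; the controller name, the route and the response shape are assumptions for illustration only:

using System;
using System.Web.Http;

// Hypothetical "individual method" style endpoint: the client sends only the net
// amount (and a tax percentage) and gets the calculated figures back.
public class SalesTaxController : ApiController
{
    [HttpGet]
    [Route("api/calculate-tax")]
    public IHttpActionResult CalculateTax(decimal net, decimal taxPercentage)
    {
        var tax = Math.Round(net * taxPercentage / 100m, 2);
        return Ok(new { Net = net, Tax = tax, Gross = net + tax });
    }
}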

Related

Equivalence of ABAP SELECT-OPTIONS in C#

I am writing a report in C# that will generate an SQL statement to call data in SAP. In SAP ABAP, there is a command, SELECT-OPTIONS, which automatically places a field on a screen that offers a number of different ways to input data. For example, if you wanted to query a customer master database, you could enter a single customer number, multiple customer numbers, or multiple ranges of customer numbers, and set criteria to include the customer numbers, exclude them, etc.
It is really nice functionality that users are asking me to duplicate but with a C# front end.
I am trying to replicate a portion of this functionality by using lookup buttons, DataGridViews, internal lists, etc.
I was wondering if anyone has done anything similar or if there is a custom class that already exists that does the equivalent.
You probably need to understand SAP ABAP and C# to fully understand the question, as it is hard to explain without showing a lot of pictures and using a lot of words.
Thanks
Stephen
Most likely there is no generic finished product that will do it. In ABAP, this relies on the fact that a select-option is bound to a variable, data element and domain, which, in turn, has either a list of valid values (fixed or via a table) and/or various search helps. So if you need to enter an employee number, you will be able to select the number by name or by email or by department or other criteria. So basically, for each "type of object" that you want to enter there is some sort of input help that has intrinsic knowledge of the entered data.
If you are only interested in an "input field" that is able to collect an arbitrary number of the following inputs at the same time (without value help dialogs):
include/exclude single values
include/exclude range (for sortable values) (42-50 or Bob-Mike)
include/exclude open ranges (>= 42)
include/exclude values by pattern (ash*)
Then: I have never seen anything like that in any UI other than SAP's DynPro or Web Dynpro.
In the end, you end up with a so-called range table, which has four values per line:
include/exclude
operation (equals, not equals, less than, between, etc)
value1
value2 (only relevant for operations like “between”)
So if you build a UI for that, the user will need to enter something which will end up in this construct.
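If you do build it yourself in C#, a small class that mirrors one line of such a range table is a reasonable starting point. The following is only a sketch under the assumptions above; the names and the evaluation logic are illustrative, not an existing library:

using System;
using System.Collections.Generic;
using System.Linq;

// One line of a "range table" style selection, mirroring ABAP's sign/option/low/high.
public class SelectionRow
{
    public bool Include { get; set; }   // sign: include or exclude
    public string Option { get; set; }  // "EQ", "GE", "BT", "CP", ... (subset sketched below)
    public string Low { get; set; }     // value1
    public string High { get; set; }    // value2, only used for options like "BT"

    // Does a single value satisfy this row? Only a few options are shown.
    public bool Matches(string value)
    {
        switch (Option)
        {
            case "EQ": return value == Low;
            case "GE": return string.CompareOrdinal(value, Low) >= 0;
            case "BT": return string.CompareOrdinal(value, Low) >= 0
                           && string.CompareOrdinal(value, High) <= 0;
            case "CP": return value.StartsWith(Low.TrimEnd('*')); // crude "ash*" pattern
            default: throw new NotSupportedException(Option);
        }
    }
}

public static class SelectionFilter
{
    // ABAP-like evaluation: a value passes if it matches at least one include row
    // (or there are no include rows) and matches no exclude row.
    public static bool Passes(string value, IList<SelectionRow> rows)
    {
        var includes = rows.Where(r => r.Include).ToList();
        var excludes = rows.Where(r => !r.Include);
        return (includes.Count == 0 || includes.Any(r => r.Matches(value)))
               && !excludes.Any(r => r.Matches(value));
    }
}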
Try ERPConnect from Theobald Software:
https://theobald-software.com/en/erpconnect/
I didn't find a mention of a SELECT-OPTIONS control in the brochures, but they claim they have a .NET API for core SAP/ABAP tools and interfaces, so you can give it a try.

Performance issue using an in-memory database with TypeScript/JavaScript

I have a server side developed in C# with Entity Framework as a provider for SQL Server. My server manages a many-to-many relation between students and courses.
public class Student
{
    public List<Course> Courses { get; set; }
    // ...
}

public class Course
{
    public List<Student> Students { get; set; }
    // ...
}
My client side is developed in AngularJS with TypeScript.
To stay synchronized with the server, each change on the server is pushed to the clients with push notifications (SignalR).
For faster response times, my client keeps a sort of database in memory (since the amount of data is not that big, less than 500 records).
I keep an array of students and for each one of them I also keep an array of courses:
Students: { [studentId: number]: Courses } = {};
And in that object I keep track of all the students and their courses on the client side.
In my application I have the option to remove multiple courses from the entire system. However, when doing such a thing, once the action has successfully finished on the server side, the processing on the client side becomes heavy when there are many students. That is because I need to iterate through all the removed courses and, for each one of them, iterate through the entire students array to locate and remove those courses from the students array (or iterate the removed courses first and, within that, iterate through the students). Both are heavy and take a while.
Is there a different better design for this? Should I approach this in a different way maybe?
There are a couple of things that come to mind:
Check if you really need the "client side database". Is your backend that slow? Isn't it a premature optimization? I think the complexity of your client program will drop drastically if you remove this part... Just fetch the latest data directly from the server when you need it.
Reload the whole "client database" if big changes happen.
Optimize your "client database". Removing a few hundred items shouldn't take too long... Are you using angular.forEach? It could slow you down significantly. Do you have a few hundred students, but only a few deleted courses? Iterate over the students and only then (inside the iteration) iterate over the deleted courses. Pseudo code:
for (student in students) {
    for (deletedCourse in deletedCourses) {
        student.removeCourse(deletedCourse);
    }
}
And not like this:
for (deletedCourse in deletedCourses) {
    for (student in students) {
        student.removeCourse(deletedCourse);
    }
}
This way, you would iterate much more and waste time. But it's hard to know without your source code.
In general, you should profile your code and pin down the performance problems. Log to the console how long you needed for different approaches and choose accordingly.
Rebuild the array of Students using the updated data in the database after the course is deleted.
Try storing just the courseIDs inside the student object. That way the performance of the removal part might be manageable.
Have the courses in a separate object and use the map function to get the courses for the student.
Also as the comments suggested, your code for course removal might play a role. So if you can edit your question, it would be helpful.
Of course there is a better design than just looping through the arrays :)
Essentially, you need to filter your client-side array - i.e. the in-memory database - to exclude (drop) the courses deleted by the server.
So let's say that the server did its job, deleted some courses, and sent the deletedOnServer array through SignalR. Then you can call a client-side function - e.g. PurgeCourses - and clean your in-memory database. Here is a possible implementation, using native JavaScript's Array.prototype.filter():
// Assuming your client side courses are stored here
var clientSideCoursesArray = [];

function PurgeCourses(deletedOnServer) {
    // Get an array of deleted course ids:
    var deletedCoursesIds = $.map(deletedOnServer, function (x) { return x.id; });
    clientSideCoursesArray = clientSideCoursesArray.filter(function (course) {
        var isDeletedByServer = deletedCoursesIds.indexOf(course.id) !== -1;
        return !isDeletedByServer;
    });
}
However, jQuery.grep() is the recommended way as it is optimized to perform better:
function PurgeCourses(deletedOnServer) {
    // Get an array of deleted course ids:
    var deletedCoursesIds = $.map(deletedOnServer, function (x) { return x.id; });
    clientSideCoursesArray = $.grep(clientSideCoursesArray, function (course, index) {
        var isDeletedByServer = $.inArray(course.id, deletedCoursesIds) !== -1;
        return isDeletedByServer;
    }, true);
}
Note the use of the invert flag - the third argument of $.grep() - which keeps the elements for which the function returns false.
Instead of storing it in client memory, you can store it in a singleton object in Web API server memory and access that. That singleton object can keep checking the database for changes at a pre-defined interval (a few lines of code can do that if you use a last-modified field with a datetime comparison). It can then fetch the entire dataset into Web API memory again as a singleton.
This would also ensure that each client is working with consistent copy of data.
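A minimal sketch of such a server-side cache, assuming a "last modified" column and a polling interval (all names here are assumptions, not from the question):

using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative singleton-style cache: re-reads the dataset only when a
// "last modified" timestamp in the database has advanced.
public sealed class DataCache<T>
{
    private readonly Func<DateTime> _getLastModified;   // e.g. SELECT MAX(LastModified) ...
    private readonly Func<IReadOnlyList<T>> _loadAll;   // e.g. the full EF query
    private readonly Timer _timer;
    private DateTime _lastSeen = DateTime.MinValue;
    private volatile IReadOnlyList<T> _items = new List<T>();

    public DataCache(Func<DateTime> getLastModified, Func<IReadOnlyList<T>> loadAll,
                     TimeSpan pollInterval)
    {
        _getLastModified = getLastModified;
        _loadAll = loadAll;
        _timer = new Timer(_ => Refresh(), null, TimeSpan.Zero, pollInterval);
    }

    public IReadOnlyList<T> Items => _items;

    private void Refresh()
    {
        var latest = _getLastModified();
        if (latest > _lastSeen)          // something changed since the last poll
        {
            _items = _loadAll();
            _lastSeen = latest;
        }
    }
}

Held in a single static instance, it would then serve every request the same consistent snapshot.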
You shouldn't store complex data client-side; there is no reason for the client to give up memory and disk space when you have a server. If your server isn't fast enough, you should provide a better one.
The answer is that you have to make a choice: repeat the process (slowly) on the client side (considering that the end user can have a bad PC and be a lot slower) or come up with a good server-side strategy to process your data.
If you want to reduce server work, you can query the data and cache the results until something changes. When a user changes something, you query the database again and refresh the cache.
One last piece of advice: when you develop something, think as though you are going to provide your product to billions of users. Considering a large number of users, are you sure that asking for all the data is a good idea?

Coldfusion - How to update Table Cells in Real time?

I am relatively new to ColdFusion (using ColdFusion 10) and I have a question regarding creating a real-time updated table.
Currently I have a C# application writing stock prices to a CSV (text) file every 2 seconds, and I would like to reflect these changes as they happen in a table on a web page. I know I could have the entire table refresh every 2 seconds, but this would generate a lot of requests to the server, and I would like to know if there is a better way of doing it. Could this be easily achieved using ColdFusion 10's new HTML5 WebSockets functionality?
Any advice/guidance on which way to proceed or how to achieve this would be greatly appreciated!
Thanks, AlanJames.
I think you could rewrite your question and get at least 5 answers in the first hour.
Now, to answer it, if I understood correctly what you're asking:
IMHO WebSockets aren't there yet. If your website is for a wide population and you are not 100% sure that visitors are coming with the most recent Chrome or FF, forget it.
You could use some JavaScript WebSocket library which gracefully falls back to Flash or AJAX HTTP polling, like http://socket.io/, or a cloud service like pusher.com. But this will complicate your life, because you have 2-3 times more work in the backend if you implement both polling and WebSockets.
Regarding the amount of requests: if you want real-time data on screen, you've got to have a server that supports it.
You could optimize by requesting once and refreshing data for the whole table, not per cell. You'd get all the new data at once and update the cells which changed with jQuery. So you are not pulling all the data again, or the whole table HTML, just a minimal amount of data.
AJAX polling would certainly help with the amount of requests; the time the request stays open is another possible problem, though. You could do polling with BlazeDS, which is available even in ColdFusion 9.
some pages to look at:
http://www.bennadel.com/blog/2351-ColdFusion-10-Using-WebSockets-To-Push-A-Message-To-A-Target-User.htm
http://www.bennadel.com/blog/1956-Very-Simple-Pusher-And-ColdFusion-Powered-Chat.htm
http://nil.checksite.co.uk/index.cfm/2010/1/28/CF-BlazeDS-AJAX-LongPolling-Part1
There isn't a way to get live updates every 2 seconds without making some kind of request from your page to your server, otherwise how would it know if anything has changed?
Personally I would write a CFC method to read in your text file and see if it's changed, then poll that method every few seconds using jQuery to return whether it has changed or not, and pass back any updated content.
Without knowing the details of your text file etc. it's hard to write anything accurate. Fundamentally your CFC method would have to store (in a SESSION var probably) a copy of the text file data, so it could compare it with the latest read-in data and tell if anything has changed. If it has changed then send a structure back with the updates, or return a response saying it's unchanged.
Your CFC code would look something like this:
<cffunction name="check_update" access="remote" output="false">
    <cfset response = structNew()>
    <cffile action="read"
        file="path\to\your\textfile.txt"
        variable="file_content">
    <cfif file_content NEQ SESSION.file_content>
        <cfset response.updated = true>
        <cfset SESSION.file_content = file_content>
        <cfset response.content = structNew()>
        <!--- code here to populate 'content' variable with updated info --->
    <cfelse>
        <cfset response.updated = false>
    </cfif>
    <cfreturn response>
</cffunction>
Then the jQuery code to poll that data would look like this:
var update_interval;
var update_pause = 3000;
function check_update() {
var request = {
returnformat : 'json',
queryformat : 'column',
method: 'check_update'
}
$.getJSON("path/to/your/service.cfc", request, function(data) {
if(data.UPDATED == true) {
/* code here to iterate through data.CONTENT */
/* and render out your updated info to your table */
}
});
}
$(document).ready(function () {
update_interval = setInterval(check_update(), update_pause);
});
So once the DOM is ready we create an interval that in this case fires every 3 seconds (3000 ms) and calls the check_update() function. That function makes a call to your CFC and checks the response. If the response's UPDATED value is true then it runs whatever code renders your updates.
That's the most straightforward method of achieving what you need, and it should work regardless of browser. In my experience the overhead of polling a CFC like that is really very small indeed, and the amount of data you're transferring will be tiny, so it should be no problem to handle.
I don't think there's any other method that could be more lightweight / easy to put together. The benefits of long polling or SSE (with dodgy browser support) are negligible and not worth the programming overhead.
Thanks, Henry

Is MaxJsonLength safe to always set to max value?

I have a controller that returns JSON that populates the grid in my view. Depending on the filters, the user can retrieve a large amount of data in one call, so I set the MaxJsonLength to the maximum:
var jsonResult = Json(result, JsonRequestBehavior.AllowGet);
jsonResult.MaxJsonLength = int.MaxValue;
My question is, is it safe to always set the MaxJsonLength to the maximum value? What are its drawbacks (if there are any)?
I found this related post but it didn't answer my question.
What is MaxJSONlength good for?
I need your expertise here. Thanks in advance!
I don't think it is a good idea to set it to MaxValue on each call. It does not mean it will break your application, but it may make your application appear broken.
I've had the same problem once: in some situations a user might request a bigger dataset, like 10-50 megabytes, over an internet connection rather than a LAN. Nothing impossible, you can send such data sets, but your application will be dead slow. The browser will be waiting for the data, and users will wait a long time before the page is usable, which in turn causes them to do silly stuff like clicking everywhere, cursing and reporting bugs in the application. Is it really a bug? Depends on your requirements, but I would say yes.
What you can and should do is provide pagination. Send small sets of data to users, display them immediately, allow users to work with them and then send additional data as needed. Or, if it will always be needed, send it automatically in the background, but in smaller sets that will be displayed quickly. Users will get their page ready quickly and most of the time they won't notice that not all the data is there yet; by the time they need it, it will already have been downloaded.
With today's support for AJAX, jQuery and similar tools, doing it should not be any more difficult than getting and displaying the whole data set at once.
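A minimal sketch of a paged controller action in ASP.NET MVC follows; the row shape, the LoadRows placeholder and the default page size of 100 are assumptions for illustration only:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Illustrative row shape; in the real application this would be the entity
// or view model the grid already uses.
public class GridRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ReportController : Controller
{
    // Hypothetical paged endpoint: returns one page at a time instead of the
    // whole result set, so MaxJsonLength never needs to be raised.
    public ActionResult GetRows(int page = 1, int pageSize = 100)
    {
        IQueryable<GridRow> all = LoadRows();   // stand-in for the current EF query

        var rows = all.OrderBy(r => r.Id)       // paging needs a stable order
                      .Skip((page - 1) * pageSize)
                      .Take(pageSize)
                      .ToList();

        return Json(new { total = all.Count(), rows }, JsonRequestBehavior.AllowGet);
    }

    private IQueryable<GridRow> LoadRows()
    {
        // Placeholder for the query that currently feeds the grid.
        return new List<GridRow>().AsQueryable();
    }
}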

Mid-Tier Help Needed

In one sentence, what I ultimately need to know is how to share objects between mid-tier functions without requiring the application tier to pass the data model objects.
I'm working on building a mid-tier layer in our current environment for the company I am working for. Currently we are using primarily .NET for programming and have built custom data models around all of our various database systems (ranging from Oracle, OpenLDAP, MSSQL, and others).
I'm running into issues trying to pull our model from the application tier and move it into a series of mid-tier libraries. The main issue I'm running into is that the application tier has the ability to hang on to a cached object throughout the duration of a process and make updates based on the cached data, but the Mid-Tier operations do not.
I'm trying to keep the model objects out of the application as much as possible so that when we make a change to the underlying database structure, we can edit and redeploy the mid-tier easily and multiple applications will not need to be rebuilt. I'll give a brief illustration of the issue in pseudo-code, since that is what we developers understand best :)
main
{
    MidTierServices.UpdateCustomerName("testaccount", "John", "Smith");

    // since the data takes up to 4 seconds to be replicated from the
    // write server to the read server, the function below is going to
    // grab old data that does not contain the first name and last
    // name update.... John Smith will be overwritten w/ previous data
    MidTierServices.UpdateCustomerPassword("testaccount", "jfjfjkeijfej");
}

MidTierServices
{
    void UpdateCustomerName(string username, string first, string last)
    {
        Customer custObj = DataRepository.GetCustomer(username);

        /*******************
         validation checks and business logic go here...
        *******************/

        custObj.FirstName = first;
        custObj.LastName = last;
        DataRepository.Update(custObj);
    }

    void UpdateCustomerPassword(string username, string password)
    {
        // does not contain first and last name updates
        Customer custObj = DataRepository.GetCustomer(username);

        /*******************
         validation checks and business logic go here...
        *******************/

        custObj.Password = password;

        // overwrites changes made by other functions since data is stale
        DataRepository.Update(custObj);
    }
}
On a side note, options I've considered include building a home-grown caching layer, which takes a lot of time and is a very difficult concept to sell to management, or using a different modelling layer that has built-in caching support, such as NHibernate. The latter would also be hard to sell to management, because it would take a very long time to tear apart our entire custom model and replace it with a third-party solution. Additionally, not a lot of vendors support our large array of databases. For example, .NET has LINQ to ActiveDirectory, but not a LINQ to OpenLDAP.
Anyway, sorry for the novel, but it's more of an enterprise architecture type question, and not a simple code question such as "How do I get the current date and time in .NET?"
Edit
Sorry, I forgot to add some very important information in my original post. I feel very bad because Cheeso went through a lot of trouble to write a very in-depth response which would have fixed my issue were there not more to the problem (which I stupidly did not include).
The main reason I'm facing the current issue is data replication. The first function makes a write to one server and then the next function makes a read from another server which has not received the replicated data yet. So essentially, my code is faster than the data replication process.
I could resolve this by always reading and writing to the same LDAP server, but my admins would probably murder me for that. They specifically set up a server that is only used for writing and then 4 other servers, behind a load balancer, that are only used for reading. I'm in no way an LDAP administrator, so I'm not aware whether that is standard procedure.
You are describing a very common problem.
The normal approach to address it is through the use of Optimistic Concurrency Control.
If that sounds like gobbledegook, it's not. It's a pretty simple idea. The concurrency part of the term refers to the fact that there are updates happening to the data of record, and those updates are happening concurrently - possibly from many writers. (Your situation is a degenerate case where a single writer is the source of the problem, but it's the same basic idea.) The optimistic part I'll get to in a minute.
The Problem
It's possible when there are multiple writers that the read+write portion of two updates become interleaved. Suppose you have A and B, both of whom read and then update the same row in a database. A reads the database, then B reads the database, then B updates it, then A updates it. If you have a naive approach, then the "last write" will win, and B's writes may be destroyed.
Enter optimistic concurrency. The basic idea is to presume that the update will work, but check. Sort of like the trust but verify approach to arms control from a few years back. The way to do this is to include a field in the database table, which must be also included in the domain object, that provides a way to distinguish one "version" of the db row or domain object from another. The simplest is to use a timestamp field, named lastUpdate, which holds the time of last update. There are other more complex ways to do the consistency check, but timestamp field is good for illustration purposes.
Then, when the writer or updater wants to update the DB, it can only update the row for which the key matches (whatever your key is) and also when the lastUpdate matches. This is the verify part.
Since developers understand code, I'll provide some pseudo-SQL. Suppose you have a blog database, with an index, a headline, and some text for each blog entry. You might retrieve the data for a set of rows (or objects) like this:
SELECT ix, Created, LastUpdated, Headline, Dept FROM blogposts
WHERE CONVERT(Char(10),Created,102) = #targdate
This sort of query might retrieve all the blog posts in the database for a given day, or month, or whatever.
With simple optimistic concurrency, you would update a single row using SQL like this:
UPDATE blogposts Set Headline = #NewHeadline, LastUpdated = #NewLastUpdated
WHERE ix=#ix AND LastUpdated = #PriorLastUpdated
The update can only happen if the index matches (and we presume that's the primary key) and the LastUpdated field is the same as it was when the data was read. Also note that you must ensure that the LastUpdated field is updated on every update to the row.
A more rigorous update might insist that none of the columns had been updated. In this case there's no timestamp at all. Something like this:
UPDATE Table1 SET Col1 = #NewCol1Value,
                  Col2 = #NewCol2Value,
                  Col3 = #NewCol3Value
WHERE Col1 = #OldCol1Value AND
      Col2 = #OldCol2Value AND
      Col3 = #OldCol3Value
Why is it called "optimistic"?
OCC is used as an alternative to holding database locks, which is a heavy-handed approach to keeping data consistent. A DB lock might prevent anyone from reading or updating the db row, while it is held. This obviously has huge performance implications. So OCC relaxes that, and acts "optimistically", by presuming that when it comes time to update, the data in the table will not have been updated in the meantime. But of course it's not blind optimism - you have to check right before update.
Using Optimistic Concurrency in practice
You said you use .NET. I don't know if you use DataSets for your data access, strongly typed or otherwise. But .NET DataSets, or specifically DataAdapters, include built-in support for OCC. You can specify and hand-code the UpdateCommand for any DataAdapter, and that is where you can insert the consistency checks. This is also possible within the Visual Studio design experience.
If you get a violation, the update will return a result showing that ZERO rows were updated. You can check this in the DataAdapter.RowUpdated event. (Be aware that in the ADO.NET model, there's a different DataAdapter for each sort of database. The link there is for SqlDataAdapter, which works with SQL Server, but you'll need a different DA for different data sources.)
In the RowUpdated event, you can check for the number of rows that have been affected, and then take some action if the count is zero.
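For illustration, here is a minimal sketch of this with a SqlDataAdapter; the table and column names reuse the blogposts example above, while the connection string and the fixed new timestamp are assumptions:

using System;
using System.Data;
using System.Data.SqlClient;

class OccSketch
{
    static void Main()
    {
        // Connection string is an assumption for illustration.
        using (var conn = new SqlConnection("Server=.;Database=Blog;Integrated Security=true"))
        {
            var adapter = new SqlDataAdapter(
                "SELECT ix, Created, LastUpdated, Headline, Dept FROM blogposts", conn);

            // Hand-coded UpdateCommand with the optimistic concurrency check:
            // the row is only updated if LastUpdated still holds its original value.
            var update = new SqlCommand(
                "UPDATE blogposts SET Headline = @Headline, LastUpdated = @NewLastUpdated " +
                "WHERE ix = @ix AND LastUpdated = @PriorLastUpdated", conn);
            update.Parameters.Add("@Headline", SqlDbType.NVarChar, 200, "Headline");
            update.Parameters.Add("@NewLastUpdated", SqlDbType.DateTime).Value = DateTime.UtcNow;
            update.Parameters.Add("@ix", SqlDbType.Int, 4, "ix");
            var prior = update.Parameters.Add("@PriorLastUpdated", SqlDbType.DateTime, 8, "LastUpdated");
            prior.SourceVersion = DataRowVersion.Original;   // compare against the value as read
            adapter.UpdateCommand = update;

            // Zero affected rows means another writer got there first.
            adapter.RowUpdated += (sender, e) =>
            {
                if (e.RecordsAffected == 0)
                {
                    e.Row.RowError = "Concurrency violation: row was changed by another writer.";
                    e.Status = UpdateStatus.SkipCurrentRow;
                }
            };

            var table = new DataTable();
            adapter.Fill(table);
            // ... modify rows in 'table' here ...
            adapter.Update(table);
        }
    }
}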
Summary
Verify the contents of the database have not been changed, before writing updates. This is called optimistic concurrency control.
Other links:
MSDN on Optimistic Concurrency Control in ADO.NET
Tutorial on using SQL Timestamps for OCC
