Construct and evaluate an equation in .NET - C#

I'm trying to come up with a good way to have a WCF web service construct an equation that can then be passed back to a client, which would then evaluate the equation by plugging the numbers into the appropriate locations.
We have a form that has numerous types of fields, but all of which are ultimately used to calculate section subtotals, and then a final total. We're trying to keep to SOA practices, so we don't want the javascript or UI C# to make any decisions about how calculations should be done, but we want to try to avoid having to hit up the web service every single time a field on the form changes. So we're trying to come up with a way to have the calculation equation still dictated by the web service, but then provided back to the client in such a way that it can then be evaluated and have the appropriate properties plugged into it. We can even use the javascript eval() for this.
The other complication is that we may have 3rd parties that choose to have all calculations handled by the web service, so they would just pass a request object and let it determine the amounts every time. Our thought for this is to have a second operation contract that would utilize the first to determine the appropriate equation, and then simply evaluate it, similar to how our client UI code would. I was thinking of having the equation be some kind of array of arithmetic enumerations interleaved with property names, which could then be serialized if we want. The whole thing is still very much a work in progress, and I wanted to put it out there early on to see if I could get some opinions.
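To make that concrete, here is roughly the shape I'm imagining (all names are placeholders, and the evaluation is a naive left-to-right pass with no operator precedence):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // Hypothetical token-based equation the service could return.
    [DataContract]
    public enum ArithmeticOp
    {
        [EnumMember] Add,
        [EnumMember] Subtract,
        [EnumMember] Multiply,
        [EnumMember] Divide
    }

    [DataContract]
    public class EquationToken
    {
        // Either a field name the client looks up, or an operator.
        [DataMember] public string FieldName { get; set; }
        [DataMember] public ArithmeticOp? Op { get; set; }
    }

    public static class EquationEvaluator
    {
        // Evaluates an alternating operand/operator list:
        // field, op, field, op, field, ...
        public static decimal Evaluate(IList<EquationToken> tokens,
                                       IDictionary<string, decimal> values)
        {
            decimal result = values[tokens[0].FieldName];
            for (int i = 1; i < tokens.Count - 1; i += 2)
            {
                decimal operand = values[tokens[i + 1].FieldName];
                switch (tokens[i].Op.Value)
                {
                    case ArithmeticOp.Add:      result += operand; break;
                    case ArithmeticOp.Subtract: result -= operand; break;
                    case ArithmeticOp.Multiply: result *= operand; break;
                    case ArithmeticOp.Divide:   result /= operand; break;
                }
            }
            return result;
        }
    }

The same token list could be serialized to the JavaScript client, which would walk it the same way.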
Thanks in advance for any advice/suggestions/criticism you may be able to provide.

Best way to handle large amount of permanent data

I'm developing a PC app in Visual Studio where I'm showing the status of hundreds of sensors that are connected via WiFi. The thing is that I need to hold on to the sensor data even after I close the app, so I'm considering some form of permanent storage. These are the options I've considered:
1) My Sensor object is relatively compact with only a few properties. I could serialize all the objects before closing the app and load them every time the app starts anew.
2) I could throw all the properties (which are mostly strings and doubles) into a simple text file and create a custom protocol for storage and retrieval.
3) I could integrate a database with my app. Someone told me this is the best way to go about it, but I'm a bit hesitant seeing as I'm not familiar with DBs.
Which method would yield the best results in terms of resource usage and speed? Or is there some other, better way to go about this?
The first thing you need to do is understand your problem. For example, when the program is running, do you need to have everything in memory at the same time, or do you work with your sensors one at a time?
What is a "large amount of data"? To me, that will never be less than a million records (or a billion in some cases).
Once you know that, you shouldn't be scared of using something just because you aren't familiar with it. Otherwise you aren't looking for the best solution to your problem, you're just hacking around it in a way that feels comfortable.
That being said, you have several ways of doing this. As you said, you can serialize the data, store it as JSON, and a few other alternatives, but if we are talking about a "large amount of data that we want to persist", I would always call for a database (the name says a lot). If you don't need to have everything in memory at the same time, then I believe that is your best option.
I personally don't like them (again, a personal choice), but one way of not having to learn much SQL while still working with your objects is to use an ORM like NHibernate (you will also need to learn how to use it properly so you don't make things slower).
If you need to have everything loaded at the same time (most often that is not the case, so be sure of this), you need to know what you want to keep and serialize it. If you want that data to be readable by another tool or organized in a given way, consider a data format like XML or JSON.
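A minimal sketch of the serialize-to-JSON route (assuming the Json.NET package is available; the Sensor shape is invented for illustration):

    using System.Collections.Generic;
    using System.IO;
    using Newtonsoft.Json; // assumes the Json.NET NuGet package

    public class Sensor
    {
        public string Id { get; set; }          // made-up properties
        public double LastReading { get; set; }
    }

    public static class SensorStorage
    {
        public static void Save(List<Sensor> sensors, string path)
        {
            // Writes the whole list as human-readable JSON.
            File.WriteAllText(path,
                JsonConvert.SerializeObject(sensors, Formatting.Indented));
        }

        public static List<Sensor> Load(string path)
        {
            if (!File.Exists(path)) return new List<Sensor>();
            return JsonConvert.DeserializeObject<List<Sensor>>(File.ReadAllText(path));
        }
    }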
Also, you can use a memory-mapped file. The file is permanent and keeps the data between program runs, so you just keep your data structs in the mmap-ed area and nothing more.
MSDN manual here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366556%28v=vs.85%29.aspx
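In .NET you don't need to call the Win32 API from that page directly; here is a rough sketch using the managed System.IO.MemoryMappedFiles wrapper (the record layout is invented):

    using System.IO;
    using System.IO.MemoryMappedFiles;
    using System.Runtime.InteropServices;

    // Plain value types (no reference-type fields) can be written directly.
    struct SensorRecord
    {
        public int Id;
        public double LastReading;
    }

    class Program
    {
        static void Main()
        {
            const int recordCount = 1000;
            long capacity = recordCount * Marshal.SizeOf(typeof(SensorRecord));

            // Backed by a real file, so data survives program restarts.
            using (var mmf = MemoryMappedFile.CreateFromFile(
                       "sensors.dat", FileMode.OpenOrCreate, "sensors", capacity))
            using (var accessor = mmf.CreateViewAccessor())
            {
                var record = new SensorRecord { Id = 42, LastReading = 21.5 };
                accessor.Write(0, ref record);   // persist one record at offset 0

                SensorRecord readBack;
                accessor.Read(0, out readBack);  // read it back later
            }
        }
    }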
Since you need to load all the data once at the start of the program, the database case seems doubtful; a DB is necessary when you need to load a bit of data many times.
So the first two cases seem preferable. I would advise hiding the specific solution behind an interface; then you can change it later.
Standard .NET serialization of the sensor array is probably simpler, and it will be easier to extend.
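A minimal sketch of that interface idea, reusing the hypothetical Sensor and SensorStorage types from the earlier snippet:

    using System.Collections.Generic;

    public interface ISensorStore
    {
        List<Sensor> Load();
        void Save(List<Sensor> sensors);
    }

    // Start with the simplest implementation; a DatabaseSensorStore or
    // MemoryMappedSensorStore can be swapped in later without touching
    // the rest of the app.
    public class JsonFileSensorStore : ISensorStore
    {
        private readonly string _path;

        public JsonFileSensorStore(string path) { _path = path; }

        public List<Sensor> Load() { return SensorStorage.Load(_path); }

        public void Save(List<Sensor> sensors) { SensorStorage.Save(sensors, _path); }
    }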

Web API get dynamic Form Data

I'm in the process of converting an ASP.NET application from MVC controllers to ApiController. So far everything is going pretty smoothly, except I've had a few hiccups.
The problem I'm having right now is a few methods are having requests of the form:
sort:FieldName
dir:DESC
filter[0][field]:FieldName
filter[0][data][type]:string
filter[0][data][value]:deadeawd
(the content type is application/x-www-form-urlencoded)
The sort and dir values can easily be captured within a model class, and I've done so, but I don't know how to capture the filter[0] fields. (There could be filter[1] and so on as well; how many there are is not known ahead of time, i.e. the data structure is dynamic.)
Currently the application grabs the form data, and a method builds a query string based on the data there, but in Web API we no longer have access to the form data directly.
I could use a dynamic object, or a NameValueCollection, but I'm just trying to figure out what's the best option, what's the intended usage, and what's the best practice.
(in case you're wondering, the request data can't be changed, it's from a framework that we are using and don't have an easy way to override how it does things)
The answer we ended up going with was to change the framework so it sent a proper JSON object.
It was quite a bit of work, and it means the framework might be difficult to update, but we're looking at moving away from it soon anyway, and we don't want the trivial little details of this framework to alter our API in a negative way.
If anyone is interested, the dynamic or NameValueCollection probably would've worked, and I don't think there is a best practice, because passing data like this is already bad practice.
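For reference, a rough sketch of what the NameValueCollection route might look like in Web API (the controller and key names are illustrative):

    using System.Collections.Specialized;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class GridController : ApiController
    {
        // Reads application/x-www-form-urlencoded content without a model class.
        public async Task<IHttpActionResult> Post()
        {
            NameValueCollection form = await Request.Content.ReadAsFormDataAsync();

            string sort = form["sort"];
            string dir = form["dir"];

            // Walk the indexed filter keys until one is missing.
            for (int i = 0; form[string.Format("filter[{0}][field]", i)] != null; i++)
            {
                string field = form[string.Format("filter[{0}][field]", i)];
                string type  = form[string.Format("filter[{0}][data][type]", i)];
                string value = form[string.Format("filter[{0}][data][value]", i)];
                // ... build the query from each filter here
            }

            return Ok();
        }
    }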

Validating your site

My question is: what else needs to be validated apart from what I have below?
It is important that any input to a site is properly validated:
Textboxes, etc – use .NET validators (or custom code if the validators aren’t appropriate)
Querystring or Form values – use manual validation (casting to specific types, boundary checking, etc – see the sketch below)
This ties into the problems which XSS can reveal.
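For the querystring item above, here is a minimal sketch of the kind of manual validation I mean (the key name and bounds are just examples):

    using System.Web;

    public static class QueryStringValidation
    {
        // Cast to the expected type and enforce boundaries; fall back to a
        // safe default instead of trusting the raw input.
        public static int GetPageSize(HttpRequest request)
        {
            int pageSize;
            string raw = request.QueryString["pageSize"]; // example key

            if (!int.TryParse(raw, out pageSize) || pageSize < 1 || pageSize > 100)
            {
                pageSize = 25;
            }
            return pageSize;
        }
    }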
Basically you have to validate any input that someone could potentially tamper with:
Form postbacks (mainly .NET controls – these can be validated with .NET validation controls; also, having Request Validation turned on for all pages reduces the risk)
QueryString Values
Cookie values
HTTP Headers
Viewstate (automatically done for you as long as you have ViewState MAC enabled)
JavaScript (all JS can be viewed and changed, so you need to ensure no crucial functionality is handled by JavaScript alone – i.e. always enable server-side validation)
There is a lot that can go wrong with a web application. Your list is pretty comprehensive, although it contains some duplication: the HTTP spec only defines GET, POST, cookies, and headers. There are many different types of POST, but it's all in the same part of the request.
To your list I would also add everything having to do with file upload, which is a type of POST: for instance, the file name, MIME type, and the contents of the file. Fire up a network monitoring application like Wireshark; everything in the request should be considered potentially harmful.
There will never be a one-size-fits-all validation function. If you are merging SQL injection and XSS sanitization functions, then you may be in trouble. I recommend testing your site using automation; a free service like Sitewatch or an open source tool like skipfish will detect methods of attack that you have missed.
Also, on a side note: passing the view state around with a MAC and/or encryption is a gross misuse of cryptography. Cryptography is a tool to be used when there is no other solution. By using a MAC or encryption you are opening the door for an attacker to brute-force this value or to use something like a padding oracle attack against you. View state should be tracked by the server, period, end of story.
I would suggest a different way of looking at the problem that is orthogonal to what you have here (and hence not incompatible, there's no reason why you can't examine it both ways in case you catch with one what you miss with another).
The two things that are important in any validation are:
Things you pay attention to.
Things you pass to another layer untouched.
Now, most of the things you've mentioned so far fit into the first category. Cookies that you ignore fit into the second, as would query and post information if you passed it to another handler with Server.Execute or similar.
The second category is the most debatable.
On the one hand, if a given handler (.aspx page, IHttpHandler, etc.) ignores a cookie that may be used by another handler at some point in the future, it's mostly up to that other handler to validate it.
On the other hand, it's always good to have an approach that assumes other layers have security holes and you shouldn't trust them to be correct, even if you wrote them yourself (especially if you wrote them yourself!)
A middle-ground position is that if there are perhaps 5 different states some persistent data could validly be in, but only 3 make sense when a particular piece of code is hit, that code might verify the data is in one of those 3 states, even if the others don't pose a risk to that particular code.
That done, we'll concentrate on the first category.
Querystrings, form-data, post-backs, headers and cookies all fall under the same category of stuff that came from the user (whether they know it or not). Indeed, they are sometimes different ways of looking at the same thing.
Of this, there is a subset that we will actually work upon in any way.
Of that there is a range of legal values for each such item.
Of that, there is a range of legal combinations of values for the items as a whole.
Validation therefore becomes a matter of:
Identify what input we will act upon.
Make sure that each component of that input is valid in its own right.
Make sure that the combinations are valid (e.g. it may be valid to not send a credit card number, but invalid to set the payment type to "credit card" and not send one).
Now, when we come to this, it's generally best not to try to catch certain attacks. For example, it's not so good to avoid ' in values that will be passed to SQL. Rather, we have three possibilities:
It's invalid to have ' in the value because it doesn't belong there (e.g. a value that can only be "true" or "false", or from a set list of values in which none of them contain '). Here we catch the fact that it isn't in the set of legal values, and ignore the precise nature of the attack (thus being protected also from other attacks we don't even know about!).
It's valid as human input, but not as what we will use. An example here is a large number (in some cultures ' is used to separate thousands). Here we canonicalise both "123,456,789" and "123'456'789" to 123456789 and don't care what it was like before that, as long as we can meaningfully do so (the input wasn't "fish" or a number that is out of the range of legal values for the case in hand).
It's valid input. If your application blocks apostrophes in name fields in an attempt to block SQL injection, then it's buggy, because there are real names with apostrophes out there. In this case we consider "d'Eath" and "O'Grady" to be valid input and deal with the fact that ' is significant in SQL by escaping properly (ideally by using an API for data access that will do this for us).
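For that third case, a minimal sketch of letting the data-access API handle the apostrophe (the table and column names are invented):

    using System.Data.SqlClient;

    public static class CustomerLookup
    {
        public static object FindId(string connectionString, string lastName)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Id FROM Customers WHERE LastName = @lastName", conn))
            {
                // "d'Eath" arrives as data, never spliced into the SQL text,
                // so the apostrophe needs no special handling.
                cmd.Parameters.AddWithValue("@lastName", lastName);
                conn.Open();
                return cmd.ExecuteScalar();
            }
        }
    }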
A classic example of the third point with ASP.NET is code that blocks "suspicious" input with < and > - something that makes a great number of ASP.NET pages buggy. Granted, it's better to be buggy in blocking that inappropriately than buggy by accepting it inappropriately, but the defaults are for people who haven't thought about validation and trying to stop them from hurting themselves too badly. Since you are thinking about validation, you should consider whether it's appropriate to turn that automatic validation off and then treat < and > in a manner appropriate for your given use.
Note also that I haven't said anything about JavaScript. I don't validate JavaScript (unless perhaps I was actually receiving it); I ignore it. I pretend it doesn't exist, and then I won't miss a case where its validation could be tampered with. Pretend yours doesn't exist at this layer too. Ultimately, client-side validation is there to save the good guys time on honest mistakes, not to thwart the bad guys.
For similar reasons, this is best not tested through a browser. Use Fiddler to construct requests that hit the validation points you want to examine. This way all client-side validation is by-passed, and you're looking at the server the same way an attacker will.
Finally, remember that a page with 100% perfect validation is not necessarily secure. For example, if your validation is perfect but your authentication is poor, someone can send it "valid" input that is just as nasty as the more classic SQL injection or XSS, perhaps more so. That strays into other topics that belong to other questions; the point is that validation as discussed here is only part of the puzzle.

Web Service Client Architecture - C#

Morning all,
I have been tasked with developing a client tool for a cloud web service API (a simple WSDL). I am not a seasoned or even qualified developer; I have an intermediate knowledge of C# and, I believe, enough to make this work. But I don't want a solution that just works; I want to build something clean and well coded, which another dev can read and understand, and which is intuitive.
You may want to stop me there and say "that is something you can only learn through experience". If that is the case, then I can accept it and move on, but if you do have some advice, the rest of the details are below.
The solution will be a C# Console application. I have produced a spec for this, it is below:
1.) Create a console application in .NET which has the following capabilities:
2.) Consume CSV file containing Processed Data OR ODBC Connection to staging SQL database and read records directly out of load table
3.) Make the following calls to Zuora Webservice (Asynchronous):
· SubscribeWithExisitingAccount()
· Create()
· Login()
· Subscribe()
· Update()
· Delete()
(*) Calls marked with this are possibly avoidable:
* it is possible to create a subscription, account and contact with a single call (Subscribe())
* Create() may be the exception, as a scenario may occur where we need to create an instance of an object with no corresponding subscription.
4.) Report back the success and errors of every record into a CSV file.
Mappings will be done on a 1 to 1 basis, where the input file will have the same column names as the target.
Where I lack knowledge is in choosing a design that will make this app make sense and work efficiently. I am not looking for someone to do this for me; what I am looking for is tips on how I can improve on what I am already doing.
Currently I am just building the solution organically, due to a lack of foresight on jobs like this, so I am also interested in things I can do post-development.
ALL Advice and criticism is welcomed.
Thanks in advance,
Matt
Design principles are a big subject, and how to apply them correctly is only something that comes with experience. There are a lot more of them out there than you'd ever use in a given project, and in some cases using them correctly means not using them at all (or only choosing specific ones that suit the project). The first step is wanting to write good code though, so you're starting in the right place. :) A couple of things did stand out to me:
2.) Consume CSV file containing Processed Data OR ODBC Connection to staging SQL database and read records directly out of load table
What you're going to want to do here is build the logic that does something with this data only once. The most direct way to achieve that is to have your logic expect the data in a certain format (probably business classes that hold the parsed data and that your logic can use).
So what you'll do is take the input data (CSV/SQL Table/Whatever) and first parse it into your internal business classes. Then you feed the parsed data to your logic that does whatever your app does with it. The advantage here is that you can change the logic once and it will work with both data types, AND if someone comes along later and says "now we need it to read this Excel file" all you'll have to do is add another parser to get the Excel data into your internal format. No changes to the logic will be required.
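A rough sketch of that separation (all names are invented, and the CSV parsing assumes no header row or quoted commas):

    using System.Collections.Generic;

    // Internal business class your logic works against.
    public class SubscriptionRecord
    {
        public string AccountName { get; set; }
        public decimal Amount { get; set; }
    }

    // Each input source gets its own parser, but the processing logic
    // only ever sees SubscriptionRecord objects.
    public interface IRecordSource
    {
        IEnumerable<SubscriptionRecord> ReadRecords();
    }

    public class CsvRecordSource : IRecordSource
    {
        private readonly string _path;
        public CsvRecordSource(string path) { _path = path; }

        public IEnumerable<SubscriptionRecord> ReadRecords()
        {
            foreach (var line in System.IO.File.ReadLines(_path))
            {
                var parts = line.Split(',');
                yield return new SubscriptionRecord
                {
                    AccountName = parts[0],
                    Amount = decimal.Parse(parts[1])
                };
            }
        }
    }

    // An OdbcRecordSource would implement the same interface; adding an
    // Excel source later means adding one class, not touching the logic.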
4.) Report back the success and errors of every record into a CSV file. Mappings will be done on a 1 to 1 basis, where the input file will have the same column names as the target
Same as above. Don't assume that you'll be exporting to CSV forever; make a simple "ReportError" class or some such that holds error details, and stick instances into a List while doing your processing. At the end, when it's time to output your errors, you can convert that list into a CSV. So if this requirement changes and you instead report errors to a web service, you only have to change a small part of the code (and none of it is your processing logic).
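Something along these lines (a sketch; the fields are just examples):

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // Collect results in a neutral shape first...
    public class ReportError
    {
        public int RecordNumber { get; set; }
        public bool Success { get; set; }
        public string Message { get; set; }
    }

    public static class ErrorReporter
    {
        // ...and only commit to CSV at the very end. If the requirement
        // changes to a web service, only this method changes.
        public static void WriteCsv(IEnumerable<ReportError> results, string path)
        {
            var lines = new List<string> { "RecordNumber,Success,Message" };
            lines.AddRange(results.Select(r => string.Format(
                "{0},{1},\"{2}\"",
                r.RecordNumber, r.Success,
                (r.Message ?? "").Replace("\"", "\"\""))));
            File.WriteAllLines(path, lines);
        }
    }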
There's a theme here. :) Try to encapsulate logical bits so that if something changes it's easy to find where that something is in code. If you can learn to do that, you'll wind up with maintainable code even if you don't follow any other process or pattern (particularly since as one person you won't be making huge projects).
3.) Make the following calls to Zuora Webservice (Asynchronous) · SubscribeWithExisitingAccount() · Create() · Login() · Subscribe() · Update() · Delete()
As a console app, I'm going to question if you actually need these to be asynchronous or not. What do you hope to gain from an async call to Login()? Can your program do anything while waiting for Login() to return?
It's not that async is terribly difficult, but it IS more to manage than synchronous calls. For a console app from someone who's not that experienced with the technology yet, I'm not sure what benefit you're gaining to weigh against the extra effort it requires of you.
I would recommend you read a book on web services (this is a good one). They aren't really something you can just pick up from playing about, and they can be quite frustrating if you don't know what you're doing.
As for development, I recommend you prototype it first. Hammer something out that's messy but lets you get an idea of how to do things. You can then use that as a reference when you're actually building your app.

Custom Validators in ASP.NET Development - Clean vs Efficient

I'm working on a page that has a significant number of textboxes/dropdowns/etc to fill out. The majority of these are going to be performing some sort of custom validation. I should note that it's nothing of substantial size - all just string or integer values.
I always hear (and have typically always agreed) that as much validation should be performed on the client rather than on the server, but in this case I am unsure. The difference here is that this project will be passed on to an IT guy who knows about computers but is still new to programming - he will be the one in charge of making the minor updates and changes to the way these custom validations work in the future.
My idea shifted from being as efficient as possible to being a bit less efficient but much more readable. I created a new class specifically for all of my validations, which will be used throughout the website. By forcing all of my custom validation code into this class, though, I eliminate any client-side validations I might be able to perform. I should also note that each page that requires custom validation will generally need to perform at least one server-side validation, so I will never be able to use client-side validation 100% of the time.
Considering the relatively low level of activity on the website (currently and in the future), would you consider this as an acceptable solution? Or would you ALWAYS prefer to have as much validation on the client as possible in order to increase the responsiveness, even if it makes things a bit more messy for whoever may be working on it in the future?
The benefit of client-side validation is that the user doesn't have to wait for a page to postback.
Validation constraints are best declared server-side. Otherwise, someone could disable JavaScript on their browser and send corrupt data to your database.
If you want the speed of client-side validation but want to keep the client clean for maintenance, you can subscribe to the onblur event of each form input to make an AJAX call that validates the model, then constrain the form not to submit if it is invalid. This could all be factored into an external .js file, so all your IT guy has to do is include it, and from there it's just HTML.
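Server-side, that AJAX call only needs a small endpoint that runs the same rules as the shared validation class; a rough Web API sketch (all names are invented, and the shared class is stubbed):

    using System.Web.Http;

    // Stand-in for the shared validation class mentioned in the question.
    public static class CustomValidators
    {
        public static bool Validate(string field, string value)
        {
            // route to per-field rules here
            return !string.IsNullOrEmpty(value);
        }
    }

    public class ValidationController : ApiController
    {
        // Called via AJAX from each input's onblur handler; the same rules
        // run again on full postback, so disabling JavaScript never
        // bypasses them.
        [HttpGet]
        public IHttpActionResult ValidateField(string field, string value)
        {
            return Ok(new { field = field, valid = CustomValidators.Validate(field, value) });
        }
    }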
You always want to aim for a better user experience, in my opinion. Generally speaking, if your code doesn't add value to the user experience, it doesn't really matter how you implement it in the back end. Having said that, you should always try to write "maintainable" code; if "messy" code is the best that you can do for the time being, add documentation that explains why that is.
