Domain-Driven Design and a Rich GUI - C#

I have a philosophical question about applying DDD to the development of a rich GUI application. As a programmer I have experience creating both DDD and DB-oriented systems, so I know the basics. Now I am facing a complete redesign of a large point-of-sale application and I have a problem.
Usually the DDD approach means "99% of the logic is in the domain and 1% of the logic is in the GUI", and the logic in the GUI is only validation. Such an approach works well when you have simple forms where users can enter something and then press 'Save' to send the data to a server, or something like that.
One of the main features of the existing application is that it's quick. Working at a POS means a salesperson does everything quickly. The business logic that the POS must follow is highly complicated. Roughly speaking, every time a user changes a price, tax, discount, etc., other prices, discounts, and taxes change; so it's a kind of domain that resides on the client.
Technically I can, obviously, move the logic to a remote domain that lives on a server, but that would make the system very slow: I'd need to make a remote call every time the user changes something in the UI.
Are there any ideas of how to preserve purity of DDD and at the same time make the system quick?
Thank you!
P.S. The only way I see now is using a downloadable assembly that contains the domain, but it definitely looks like a hack...

There is a definite trade-off that needs to be carefully managed to find the right balance between UI responsiveness and enforcing the purity of DDD.
Personally, I like to take a default position of starting "pure" and only allowing compromises to the DDD pattern where real-world performance testing proves it necessary.
I often find it surprising how much of the logic can be kept on the server without adversely affecting client responsiveness, since the bottleneck is not necessarily where you expect it to be.

One concept is to have some quick validation on the client, which does not try to be 100% accurate but can detect maybe 95% of invalid input.
In your example this quick validation could check things like:
- the discount is greater than 0 and less than the price
- the tax is somewhere between 0% and 25%
The input is sent to the server for full validation only if it has passed the quick client test.
For example - assume we have a quick client side validation which is 95% accurate. This means that when the user inputs invalid data, in 95% of cases the UI will display the error with no server contact necessary.
Only 5% of invalid input data will result in an error being displayed only after contacting the server - which is probably OK if the user usually does not supply invalid data, and that partly depends on how well designed the UI is.
Critical - the quick validation must never say that valid data is invalid.
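For illustration, a quick client-side check along these lines might look like the sketch below. The field names and the bounds are assumptions taken from the example checks above, not real business rules:

public static class QuickLineItemValidator
{
    // A minimal sketch of the "quick but not 100% accurate" client-side check.
    // The bounds are deliberately generous: it must never reject data the
    // server-side domain would accept.
    public static bool LooksValid(decimal price, decimal discount, decimal taxRate)
    {
        if (price <= 0m) return false;                         // price must be positive
        if (discount < 0m || discount >= price) return false;  // discount within [0, price)
        if (taxRate < 0m || taxRate > 0.25m) return false;     // tax somewhere between 0 and 25%
        return true;
    }
}

Only when LooksValid returns true would the input be sent on to the server for the full domain validation; when it returns false, the UI can show the error immediately with no server contact.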

You could probably try splitting the system into two separate applications (two modules), each built along DDD lines:
- customer services in the POS
- shop services on the "server"
The two modules then have to be integrated, e.g. via the network.
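A very rough sketch of what that split could look like in code, purely for illustration (all names here are assumptions):

// Module 1: runs inside the POS client and owns the fast pricing/discount logic.
public interface IPosSaleService
{
    decimal CalculateLineTotal(decimal price, decimal discount, decimal taxRate);
}

// Module 2: runs on the server and owns the shop-wide concerns.
public interface IShopService
{
    void SubmitCompletedSale(System.Guid saleId, decimal total);
}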

Related

Custom Validators in ASP.NET Development - Clean vs Efficient

I'm working on a page that has a significant number of textboxes/dropdowns/etc to fill out. The majority of these are going to be performing some sort of custom validation. I should note that it's nothing of substantial size - all just string or integer values.
I always hear (and have typically always agreed) that as much validation as possible should be performed on the client rather than on the server, but in this case I am unsure. The difference here is that this project will be passed on to an IT guy who knows about computers but is still new to programming - he will be the one in charge of making the minor updates and changes to the way these custom validations work in the future.
My idea shifted from being as efficient as possible to being a bit less efficient but much more readable. I created a new class specifically for all of my validations which will be used throughout the website. By forcing all of my custom validation code into this class, though, I eliminate any client-side validations I might be able to perform. I should also note that each page that requires a custom validation will generally need to perform at least one server-side validation, so I will never be able to go 100% client-side.
Considering the relatively low level of activity on the website (currently and in the future), would you consider this an acceptable solution? Or would you ALWAYS prefer to have as much validation on the client as possible in order to increase responsiveness, even if it makes things a bit more messy for whoever may be working on it in the future?
The benefit of client-side validation is that the user doesn't have to wait for a page to postback.
Validation constraints are best declared server-side. Otherwise, someone could disable JavaScript on their browser and send corrupt data to your database.
If you want the speed of client-side validation but want to keep the client clean for maintenance, you can subscribe to the onblur event of each form input to do an AJAX call and validate the model, then prevent the form from submitting if it is invalid. This could all be factored into an external .js file, so all your IT guy has to do is include it, and from there it's just HTML.
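One possible shape for the server side of that AJAX call, assuming ASP.NET page methods and the shared validation class from the question (ValidationRules and its members are invented names, not a real API from the post):

using System.Web.Services;
using System.Web.UI;

public partial class CustomerForm : Page
{
    // Called from the external .js via an AJAX POST to CustomerForm.aspx/ValidateField.
    [WebMethod]
    public static string ValidateField(string fieldName, string value)
    {
        // ValidationRules is the hypothetical shared class holding all custom validations.
        var result = ValidationRules.Validate(fieldName, value);
        return result.IsValid ? string.Empty : result.ErrorMessage;
    }
}

The external script can then show the returned message next to the field on blur, while the same ValidationRules class still runs on postback as the authoritative check.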
You always want to aim for a better user experience, in my opinion. Generally speaking, if your code doesn't add value to the user experience, it doesn't really matter how you implement it on the back end. Having said that, you should always try to write "maintainable" code. If "messy" code is the best you can do for the time being, add documentation that explains why that is.

SOA/WCF: dissecting the system & service boundaries

I'm building a system which will have a few channels feeding different clients (MonoDroid, MonoTouch, ASP.NET MVC, REST API).
I'm trying to adopt an SOA architecture and also trying to adopt the persistence-by-reachability pattern (http://www.udidahan.com/2009/06/29/dont-create-aggregate-roots/)
My question relates to the design of the architecture. How best to split the system into discrete chunks to benefit from SOA?
In my model I have a SystemImplementation, which represents an installation of the system itself, and also an Account entity.
The way I initially thought about designing this was to create the services as:
SystemImplementationService - responsible for managing things related to the actual installation itself such as branding, traffic logging etc
AccountService - responsible for managing the users assets (media, network of contacts etc)
Logically the registration of a new user account would happen in AccountService.RegisterAccount, where the service can take care of validating the new account (duplicate username check etc.), hashing the password, and so on.
However, in order to achieve persistence by reachability I'd need to add the new Account to the SystemImplementation.Accounts collection for it to be saved by the SystemImplementation service automatically (using NHibernate I can use lazy="extra" to ensure that when I add the new account to the collection it doesn't automatically load all accounts).
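Roughly, the aggregate described here would be shaped something like this (a simplified sketch only; member names are taken from the description above or assumed):

using System;
using System.Collections.Generic;

public class Account
{
    public virtual Guid Id { get; protected set; }
    public virtual string Username { get; set; }
}

public class SystemImplementation
{
    public virtual Guid Id { get; protected set; }

    // Mapped in NHibernate with cascade on the collection and lazy="extra",
    // so adding an item does not force the whole Accounts set to load.
    public virtual ICollection<Account> Accounts { get; protected set; }
        = new List<Account>();

    public virtual void AssociateAccount(Account account)
    {
        // Persistence by reachability: the new Account is saved on flush
        // because it is now reachable from this root.
        Accounts.Add(account);
    }
}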
For this to happen I'd probably need to create the Account in AccountService, pass back the unsaved entity to the client and then have the client call SystemImplementation.AssociateAccountWithSystemImplementation
That way I don't need to call the SystemImplementation service from the AccountService (as this - correct me if I'm wrong - is bad practice).
My question is then - am I splitting the system incorrectly? If so, how should I be splitting a system? Is there any methodology for defining the way a system should be split for SOA? Is it OK to call a WCF service from within a service:
AccountService.RegisterAccount --> SystemImplementation.AssociateAccountWithSystemImplementation
I'm worried I'm going to start building the system based on some antipatterns which will come back to catch me later :)
You have a partitioning issue, but you are not alone, everyone who adopts SOA comes up against this problem. How best to organize or partition my system into relevant pieces?
For me, Roger Sessions is talking the most sense around this topic, and guys like Microsoft are listening in a big way.
The papers that changed my thinking on this can be found at http://www.objectwatch.com/whitepapers/ABetterPath-Final.pdf, but I really recommend his book Simple Architectures for Complex Enterprises.
In that book he introduces equivalence relations from set theory and how they relate to the partitioning of service contracts.
In a nutshell, the rules for formulating partitions can be summarized in five laws:
1. Partitions must be true partitions.
   a. Items live in one partition only, ever.
2. Partitions must be appropriate to the problem at hand.
   a. Partitions only minimize complexity when they are appropriate to the problem at hand, e.g. a clothing store organized by color would have little value to customers looking for what they want.
3. The number of subsets must be appropriate.
   a. Studies show that there seems to be an optimum number of items in a subset; adding more subsets, thus reducing the number of items in each subset, has very little effect on complexity, but reducing the number of subsets, thus increasing the number of elements in each subset, seems to add to complexity. The number seems to sit in the range 3–12, with 3–5 being optimal.
4. The size of the subsets must be roughly equal.
   a. The size of the subsets and their importance in the overall partition must be roughly equivalent.
5. The interaction between the subsets must be minimal and well defined.
   a. A reduction in complexity is dependent on minimizing both the number and nature of interactions between subsets of the partition.
Do not stress too much if at first you get it wrong; the SOA Manifesto tells us we should value evolutionary refinement over pursuit of initial perfection.
Good luck
With SOA, the hardest part is deciding on your vertical slices of functionality.
The general principles are...
1) You shouldn't have multiple services talking to the same table. You need to create one service that encompasses an area of functionality and then be strict by preventing any other service from touching those same tables.
2) In contrast to this, you also want to keep each vertical slice as narrow as it can be (but no narrower!). If you can avoid complex, deep object graphs, all the better.
How you slice your functionality depends very much on your own comfort level. For example, if you have a relationship between your "Article" and your "Author", you will be tempted to create an object graph that represents an "Author" which contains a list of "Articles" written by the author. You would actually be better off having an "Author" object delivered by the "AuthorService", and the ability to get "Article" objects from the "ArticleService" based simply on the AuthorId. This means you don't have to construct a complete author object graph with lists of articles, comments, messages, permissions and loads more every time you want to deal with an Author. Even though NHibernate would lazy-load the relevant parts of this for you, it is still a complicated object graph.
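In code, that narrow-slice idea might look roughly like this (the class names follow the example above; the service interfaces themselves are assumptions for illustration):

using System.Collections.Generic;

public class Author
{
    public int AuthorId { get; set; }
    public string Name { get; set; }
    // Deliberately no Articles collection: the graph stays shallow.
}

public class Article
{
    public int ArticleId { get; set; }
    public int AuthorId { get; set; }
    public string Title { get; set; }
}

public interface IAuthorService
{
    Author GetAuthor(int authorId);
}

public interface IArticleService
{
    // Articles are fetched separately, keyed only by AuthorId.
    IList<Article> GetArticlesByAuthorId(int authorId);
}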

How can I make a "greenscreen" web app?

In informal conversations with our customer service department, they have expressed dissatisfaction with our web-based CSA (customer service application). In a call center, calls per hour are critical, and lots of time is wasted mousing around, clicking buttons, selecting values in dropdown lists, etc. What the director of customer service has wistfully asked for is a return to the good old days of keyboard-driven applications with very little visual detail - just what's necessary to present data to the CSR and process the call.
I can't help but be reminded of the greenscreen apps we all used to use (and the more seasoned among us used to make). Not only would such an application be more productive, it would also be healthier for the reps to use, as they must be risking injury doing data entry through a web app all day.
I'd like to keep the convenience of browser-based deployment and preserve our existing investment in the Microsoft stack, but how can I deliver this keyboard-driven ultra-simple greenscreen concept to the web?
Good answers will link to libraries, other web applications with a similar style, or best practices for organizing and prioritizing keyboard shortcut data (not how to add the shortcuts, but how to store and maintain them and automatically resolve conflicts, etc.).
EDIT: accepted answers will not be mini-lectures on how to do UI on the web. I do not want any links, buttons or anything to click on whatsoever.
EDIT2: this application has 500 users, spread out in call centers around North America. I cannot retrain them all to use the TAB key
I make web-based CSR apps. What your manager is forgetting is that the application is now MUCH more complex. We are asking more from our reps than we did 15 years ago. We collect more information and record more data than before.
Instead of a "greenscreen" application, you should focus on making the web application behave better. For example, don't have a dropdown for the year when it can be an input field. Make sure the tab order is correct and sane; you can even put little numbers next to each field grouping to indicate tab order. Assign different screens/tabs to F keys and denote them on the screen.
You should be able to use your web app without a mouse at all, with no loss of productivity, if it is done correctly.
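For example, in an ASP.NET Web Forms code-behind you can enforce a sane tab order and add keyboard access keys directly on the controls. The control names below are invented; the real controls would be wired up from your .aspx markup:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class CallScreen : Page
{
    // In a real page these fields are bound to controls declared in the markup.
    protected TextBox txtCustomerName;
    protected TextBox txtAccountNumber;
    protected DropDownList ddlCallReason;
    protected Button btnSave;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Explicit, predictable tab order for keyboard-only data entry.
        txtCustomerName.TabIndex = 1;
        txtAccountNumber.TabIndex = 2;
        ddlCallReason.TabIndex = 3;

        // Alt+S (the modifier varies by browser) jumps to / activates Save.
        btnSave.AccessKey = "S";
    }
}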
Leverage the use of AJAX so a round trip to the server doesn't change the focus of their cursor.
On a CSR app you often have several defaults. You should assign each default a button and allow the CSR to push one button to get the default they want. This will reduce the amount of clicking and mousing around.
Also, very important: you need to sit with the CSRs and watch them for a while to get a feel for how they use the app. If you haven't done this, you are probably overlooking simple changes that would greatly enhance their productivity.
body { background: #000; color: #0F0; }
More seriously, it's entirely possible to bind keyboard shortcuts to actions in a web app.
You might consider teaching your users to just use the Tab key - that's how I fill out most web forms: Tab to a select list and type the first few letters of the option I'm attempting to select. If the page doesn't do goofy things with structure and tab indexes, I can usually fill out most web forms with just the keyboard.
As I have had to use some of these apps over time, I'll give my feedback as a user, FWIW - maybe it helps you to help your users :-) Sorry it's a bit long, but the topic is rather close to my heart, as I myself had to prototype the "improved" interface for such a system (which, according to our calculations, saves very nontrivial amounts of money and avoids user dissatisfaction) and then lead the team that implemented it.
There is one common issue that I noticed with quite a few CRMs: there are 20+ fields on the screen, of which one typically uses 4-5 for 90% of operations. But one needs to click through the unnecessary fields anyway.
I might be wrong with this assumption, of course (as in my case there was a wide variety of users with different functions using the system). But do try to sit down with the users and see how they are using the application, and see if you can optimize something UI-wise - or, if it really is a matter of not knowing how to use TAB (and they really do need to use each and every one of those 20 fields every time), you will be able to coach a few of them, check whether that is sufficient for them, and then roll out the training for the entire organization. Ensure you have intuitive hotkey support, and that if a list contains 2000 items the users do not have to scroll it manually to find the right one, but can instead use Firefox's feature of selecting an item by typing the start of its text.
You might learn a lot by looking at the usage patterns of the application and then optimizing the UI accordingly. If you have multiple organizational functions that use the system - then the "ideal UI" for each of them might be different, so the question of which to implement, and if, becomes a business decision.
There are also some other little details that matter to the users - sometimes what you thought would be the main input field for them in reality is not, and they have an empty textarea eating up half of the screen while they have to enter the really important data into a small text field somewhere in the corner. Or at their screen resolution they need horizontal scrolling (or scrolling at all).
Again, sitting down with the users and observing should reveal this.
One more issue: the "too fast developer hardware" phenomenon. A lot of web developers tend to use large, high-resolution displays showing the output of a very powerful PC. When the result is shown on the CSR's year-old laptop at 1024x768, the layout looks quite different from what was anticipated, as does the rendering performance. Tune, tune, tune.
And, finally - if your organization is geographically dispersed, always test with the equivalent of the longest-latency/smallest-bandwidth link. These issues are not seen when testing locally, but they add a lot of annoyance when using the system over the WAN. In short, try to use the worst-case scenario when doing any testing/development of your application - then the problems will become annoying to you and you will optimize for them, and the users who are in a better situation will jump for joy over the app's performance.
If you are in for the "greenscreen app", then maybe for the power users provide a single long text input field where they can type all the information in a CLI-type fashion and just hit "Submit" or the ENTER key (though this design decision is not something to be taken lightly, as it is a lot of work). But everyone needs to realize that "greenscreen" applications have a rather steep learning curve - this is another factor to consider from the business point of view, along with the attrition rate, etc. Ask the boss how long the typical agent stays in the same position and how productivity would be affected if they needed three months to come up to full speed. :) There's a balance that is not decided by the programmers alone, nor by the management alone, but requires a joint effort.
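If you do go down that single-input route, the server side can start out as simple as splitting the text into a command and its arguments. A toy sketch, with an invented command format:

using System;
using System.Linq;

public static class QuickCommandParser
{
    // e.g. "refund 12345 19.99" -> command "refund", args ["12345", "19.99"]
    public static Tuple<string, string[]> Parse(string input)
    {
        var parts = (input ?? string.Empty)
            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        return parts.Length == 0
            ? Tuple.Create(string.Empty, new string[0])
            : Tuple.Create(parts[0].ToLowerInvariant(), parts.Skip(1).ToArray());
    }
}

The hard (and expensive) part is everything after the parse: mapping commands to actions, validation, and the help/discoverability the agents will need while learning it.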
And finally a side note in case you have "power users": you might want to take a look at conkeror as a browser - though fairly slow in itself, it looks quite flexible in what it can offer from the keyboard-only control perspective.
I can't agree with the others more when they say the first priority of the redesign should be going and talking to / observing your users and see where they have problems. I think you would see far more ROI if you find out the most common tasks and the most common errors your users make and streamline those within the bounds of your existing UI. I realize this isn't an easy thing to do, but if you can pull it off you'll have much happier users (since you've solved their workflow issues) and much happier bosses (since you saved the company money by not having to re-train all the users on a completely new UI).
After reading everyone else's answers and comments, I wanted to address a few other things:
EDIT: accepted answers will not be mini-lectures on how to do UI on the web. I do not want any links, buttons or anything to click on whatsoever.
I don't mean to be argumentative, but this sounds like you've already made up your mind without having thought through the implications for the users. I can immediately see a couple of pitfalls with this approach:
A greenscreen-esque UI may not be more productive for your users. For example, what's the average age of your users? Most people 25 and younger have had little to no exposure to these types of UIs. Suddenly imposing this sort of interface on them could cause a major backlash. As an example, look at what happened when Facebook decided to change its UI to the "stream" concept - huge outrage from the users!
The web wasn't really designed with this sort of interface in mind. What I mean is that people are not used to having command-line-like interfaces when they visit a website. They expect a visual medium (images, buttons, links, etc.) in addition to text. Changing too drastically from this could confuse your users.
Programming this type of interface will be tough. As in my last point, the web doesn't play well with command-line-like or text-only interfaces. Things like function keys, keyboard shortcuts (like ctrl- and alt-) are all poorly and inconsistently supported which means you'll have to come up with your own ways of accessing standard things like help (since F1 will map to the web browser's help, not your app's).
EDIT2: this application has 500 users, spread out in call centers around North America. I cannot retrain them all to use the TAB key
I think this argument is really just a strawman. If you are introducing a wholly new UI, you're going to have to train your users on it. Really, it should be assumed that any change to your UI will require training in one form or another. Something simple like adding tab-navigation to the UI is actually comparatively small in the training department. If you did this it would be very easy to send out a "handy new feature in the UI" email, or even better, have some sort of "tip of the day" (that users can toggle off, of course) which tells them about cool timesaving features like tab navigation.
I can't speak for the other posters here, but I did want to say that I hope you don't think we're being too argumentative here as that's not our (well OK, my) intent. Rather the reaction comes from us hearing the idea for your UI and not being convinced that it is necessarily the best thing for your users. You are fully welcome to say I'm wrong and that this is what your users will benefit most from; but before you do, just remember that at the end of the day it's your users who matter most and if they don't buy in to your new UI, no one will.
It's really more of a keyboard-centric mentality when developing. I use the keyboard for as much as possible and the apps I build tend to show that (so I can quickly go through my use cases).
Something as simple as getting the tab order correct could be all your app needs (I guess I'm not sure if you can set this in ASP.NET...). A lot of controls will auto-complete for the rest.

Localizing data that is generated dynamically

This was a hard question for me to summarize so we may need to edit this a bit.
Background
About four years ago, we had to translate our asp.net application for our clients in Mexico. Extensibility and scalability were not that much of a concern at the time (oh yes, I just said those dreadful words) because we only have U.S. and Mexican customers.
Rather than use resource files, we replaced every single piece of static text in our application with some type of server control (an ASP.NET label, for example). We store each and every English word in a SQL database. We have added the ability to translate the English text into another language and can also add cultural overrides. For example, hello can be translated to ¡hola! in one language and overridden to ¡bueno! in a different culture. The business has full control over these translations because we built management utilities for them to control everything. The translation kicks in when we detect that the user has a browser culture other than en-US. Every form descends from a base form that iterates through each server control and executes a translation (translation data is stored as a DataTable in an application variable per culture). I'm still amazed at how fast the control iteration is.
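For readers unfamiliar with that pattern, the base-form translation pass looks roughly like this (a simplified sketch; TranslationCache is a stand-in for the per-culture DataTable held in the application variable):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class TranslatedBasePage : Page
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        TranslateControls(this);
    }

    private static void TranslateControls(Control root)
    {
        foreach (Control child in root.Controls)
        {
            var label = child as Label;
            if (label != null)
                label.Text = TranslationCache.Lookup(label.Text);

            TranslateControls(child);   // recurse into nested controls
        }
    }
}

public static class TranslationCache
{
    // Stand-in: the real implementation reads the per-culture DataTable
    // cached in an application variable and falls back to the English text.
    public static string Lookup(string english)
    {
        return english;
    }
}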
The problem
The business is very happy with how the translations work. In addition to the static content that I mentioned above, the business now wants to have certain data translated as well. System notes are a good example of a translation they want. Example "Sent Letter #XXXX to Customer" - the business wants the "Sent Letter to Customer" text translated based on their browser culture.
I have read a couple of other posts on SO that talk about localization but they don't address my problem. How do you translate a phrase that is dynamically generated? I could easily read the English text and translate "Sent", "Letter", "to" and "Customer", but I guarantee that it will look stupid to the end user because it's a phrase. The dynamic part of the system-generated note would screw up any look-ups that we perform on the phrase if we stored the phrase in English, less the dynamic text.
One thought I had... We don't have a table of system-generated note types. I suppose we could create one that had placeholders for dynamic data, and the translation engine would ignore the placeholder markers. The problem with this approach is that our SQL Server database is a replication of an old Pick database and we don't really know all the types of system-generated phrases (they are deep in the Pick code base, in subroutines, control files, etc.). Things like notes, ticklers, and payment rejection reasons are all stored differently. Trying to normalize this data has proven difficult. It would be a huge effort to go back and identify and change every Pick program that generated a message.
This question is very close; but I'm not dealing with just system-generated status messages but rather an infinite number of phrases and types of phrases with no central generation mechanism.
Any ideas?
The lack of a "bottleneck" -- what you identify as the (missing) "central generation mechanism" -- is the architectural problem in this situation. Ideally, rearchitecting to put such a bottleneck in place (so you can keep using your general approach with a database of culture-appropriate renditions of messages, just with "placeholders" for e.g. the #XXXX in your example) would be best.
If that's just unfeasible, you can place the "bottleneck" at the other end of the pipe - when a message is about to be emitted. At that point, or those few points, you need to try to match the (English) string that's about to be emitted against a series of well-crafted regular expressions (with "placeholders" typically like (.*?)) and thereby identify the appropriate key for the DB lookup. Yes, that is still a lot of work, but at least it should be feasible without the issues you mention with regard to the old Pick code.
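A sketch of that emission-point matching, with invented patterns and keys just to show the shape:

using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class MessageClassifier
{
    // Each known English message shape maps to a translation key in the database.
    private static readonly Dictionary<string, Regex> KnownPatterns =
        new Dictionary<string, Regex>
        {
            { "SentLetterToCustomer", new Regex(@"^Sent Letter #(?<num>\d+) to Customer$") },
            { "PaymentRejected",      new Regex(@"^Payment rejected: (?<reason>.+)$") }
        };

    // Returns the translation key (or null) and exposes the captured dynamic parts.
    public static string FindKey(string englishMessage, out Match match)
    {
        foreach (var entry in KnownPatterns)
        {
            match = entry.Value.Match(englishMessage);
            if (match.Success)
                return entry.Key;
        }
        match = Match.Empty;
        return null;
    }
}

The captured groups (the letter number, the rejection reason, and so on) are then re-inserted into the culture-specific rendition fetched by that key.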
We use the technique you propose, with insertion points:
"Sent letter #{0:Letter Num} to Customer {1:Customer Full Name}"
Which might be (in reverse Pig Latin, say):
"Ustomercay {1:Customer Full Name} asway entsay etterlay #{0:Letter Num}"
Note that this handles cases where the particular target language reverses the order of insertion, etc. It does not handle subtleties like first, second, etc., which have to be handled with application logic/more phrases:
"This is your {0:first, second, third} warning"
In a pinch I suppose you could try something like foisting the job off onto Google if you don't have a translation on hand for a particular phrase, and stashing the translation for later.
Stashing the translations for later provides both a data collection point for building a message catalog and a rough (if sometimes laughably wonky) dynamically built starter set of translations. Once you begin the process, track which translations have been reviewed and how frequently each has been hit. Frequently hit machine translations can then be reviewed and refined.
Dynamic machine translation is not suitable for a product that you actually expect people to pay money for. The only way to do it is with static templates containing insertion points (as Cade Roux has demonstrated in his answer).
There's no getting around a thorough refactoring of your code to make this feasible. The alternative is to do nothing with those phrases (which is what you're doing now, and it's working out okay, right?). Usually no translation is better than embarrassingly bad translation.

Should an internal C# app be compiled with business logic?

[background]
So, I've got a C# application that was written before I got here. I'm not in the dev org, at this time, but I am the tech lead in my sub-group within the internet marketing org. My responsibility is process automation, minimal desktop support, and custom apps that make our lives easier.
[/background]
[app details]
We've got an app that creates a custom database file from a list of URLs. It was designed to have one input file and two output files for the two applications that use this sort of db file. The rule for the difference between the two output files is compiled into the code.
[/app details]
Should an internal C# app be compiled with business logic that can't be changed without it being re-built?
Internal applications have one goal: support the process.
If the rules for creating the output are simple, change every day, and are set by a user, compiling them into the binary is totally wrong, and an investment in a GUI and a new set of programmers could do much good. If the rules are complex, change once a year, and are mandated by management, having them compiled into the binary is a simple, cost-effective way to maintain them and keep users from fiddling with the internals.
As always, the answer has to be "it depends".
If the logic is changed on a regular basis, you should avoid building it into the program. On the other hand, since it is internal, I'm guessing that the process required to rebuild the app is minimal or non-existent, so it may not make much of a difference.
How long does it take to alter business logic and then recompile?
How long will it take to alter business logic without recompiling in new version?
How long will it take to recode it?
How will this affect maintenance in terms of extra hours spent in the future?
Are any of the people who need the app unable to alter the business logic because it is in code form?
Answering those 5 questions will yield an answer.
If the logic does not need to be changed, then yes, it should probably be compiled along with the code.
On the other hand, if there are certain factors that could change the behavior of this business logic, then you should probably provide a means of changing it, such as XML configuration files that alter its behavior.
Sure, if you know that the utility will only be used within your organization and for a single purpose there is nothing wrong with mixing your business rules with the logic. Over-designing (in this case making code reusable when it will never be reused) would not be an efficient use of resources.
I usually employ multiple configuration strategies based on probability of change.
First of all, never put business rules in code without documenting them in some way. Code has a lot of variables and only some of them can be changed safely while still maintaining the correct behaviour. I normally put a constant at the start of the class to identify what behaviour can be changed, i.e.
// Prefer this
const int AllowDownloadAttempts = 2;
if (AttemptDownload() > AllowDownloadAttempts) RegisterAndAllowDownload();
// Over this
if (AttemptDownload() > 2) RegisterAndAllowDownload();
A basic rule I follow is anything other than [-1, 0, 1] must be documented.
If it's not critical and not likely to change often, then I would place it in the application's configuration file (e.g. App.config) and access it via a strongly-typed configuration class so you can keep track of its usages and know when it's safe to remove or change.
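A minimal example of that strongly-typed wrapper idea (the setting name reuses the download-attempts constant from above and is only illustrative):

using System.Configuration;

// App.config:
//   <appSettings>
//     <add key="AllowedDownloadAttempts" value="2" />
//   </appSettings>
public static class AppConfig
{
    public static int AllowedDownloadAttempts
    {
        get
        {
            string raw = ConfigurationManager.AppSettings["AllowedDownloadAttempts"];
            return string.IsNullOrEmpty(raw) ? 2 : int.Parse(raw);
        }
    }
}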
If it needs to be changed frequently, or changed by business users, then I would store it in a database and provide a simple GUI to edit it, then load it into a strongly-typed configuration class when the application loads.
