Sorry for the naive question; I'm still working through understanding DDD. Let's say I have an IOrderService. Is this where I would have a method GetNewOrderID? Or, I guess more generally, what is the right way to allocate new orders in a DDD scenario?
Unless I've misunderstood DDD, it's not a naive question - rather, when it's not clear where the responsibility lies, not enough of the domain has been investigated / understood. Things like:
What is the format of the Order ID, and what information goes into a single Order ID?
Is there a requirement to save anything at the point of getting a new Order ID - like who requested it, etc.?
Is there a requirement that requested, but unused, Order IDs be freed up?
One or more of these requirements might clarify the situation.
I like to listen to the words that the business use. If they talk about, for instance, catalogue orders and phone orders both being sources of orders, I might have an OrderSource or IOrderSource instead of an OrderService - the last is code-speak instead of business-speak. I might have two implementations, or just one which uses an identifier to say "this is from a phone", depending on how different the processes are.
If the business people talk about getting IDs from their service, then yes, do that. It's more likely though that they talk about receiving an order from an OrderSource, dispatching it to the Warehouse or creating an OrderForm and receiving a ReferenceNumber. That ReferenceNumber may well be the primary key, but it's probably not an Id.
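For illustration only, a sketch of what that might look like in code - every name here is hypothetical, lifted from the examples above, and would change to whatever terms your domain experts actually use:

// Hypothetical sketch: types named after the business language rather than
// after technical patterns such as "service" or "manager".
public interface IOrderSource
{
    // "We receive an order form and get back a reference number."
    ReferenceNumber Receive(OrderForm orderForm);
}

public class ReferenceNumber
{
    public ReferenceNumber(string value) { Value = value; }
    public string Value { get; private set; }
}

public class OrderForm
{
    // Whatever the business says goes on an order form: customer details,
    // order lines, where it came from (phone, catalogue), and so on.
}

A phone implementation and a catalogue implementation could both sit behind IOrderSource, or a single implementation could carry a source identifier, depending on how different the processes are.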
The language that the business use can guide you to write software which matches their process well. It keeps things easy to change and helps you spot if there's an aspect of the domain which could use some learning. The design patterns are all the same ones you're used to; I just don't name my code after them if the business have some better terms. See DDD's Ubiquitous Language, and good luck!
I have the following question:
What is the preferred way to use the status in code, an enum or a singleton?
I have the status values stored in a DB with their IDs. If the status changes in the DB, it would also need some changes in the code.
Does anyone know which is preferred, based on conventions?
I've been looking on the internet but couldn't find a clear answer.
It depends in part on whether the ids for your statuses have guaranteed values, or whether the ids could change per-database (via an IDENTITY). Personally, for statuses I prefer fixed - which gives you the most flexibility, and least overhead - you can choose to use an enum (or maybe some consts if more convenient), and you never have to add an indirection, i.e. "get the id that is open".
This isn't always possible, though, and when it isn't it is still definitely useful to cache and re-use them (to avoid hitting the DB for that lookup). However, I would avoid a singleton, not least because it won't play nicely if you ever need to talk to more than one database - the ids in each could well be different. However, any suitable cache implementation (or maybe IoC/DI) should allow you to store the appropriate data (probably some kind of dictionary). Singletons are also just a bit of a pain generally if you like testing etc.
But: an enum and fixed id values is a lot simpler.
Note that under any implementation, changing the status list is a non-trivial operation, not least because it will be a big UPDATE (or several if you are denormalized).
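For example, a minimal sketch of the fixed-id approach - the names and values here are made up and just have to mirror whatever is actually in your status table:

// Sketch only: assumes the ids in the status table are fixed and will never
// differ between databases, so the enum values can mirror them directly.
public enum Status
{
    Open = 1,
    Pending = 2,
    Closed = 3,
    Deferred = 4
}

// Usage: no indirection needed - the enum value is the id.
// command.Parameters.AddWithValue("@StatusId", (int)Status.Open);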
If you intend to use the Status across the application and it is standardised throughout, then an Enum is the best fit:
enum Status
{
    Open, Pending, Closed, Deferred
}
This also makes the code more readable.
I work in a development team of about 12 and we build a reasonable set of APIs that we use on a strictly in-house-only basis. Typically all classes and interfaces are public because that is just how they get done. I have often considered the value of making some of the constructors internal so that the consumers of the API (albeit internal) have to use the factory, or for some other reason that I can't think of right now.
Is this something that you and your team practice?
How does this affect your unit tests? Do you find that it is okay to unit test a class through its factory, or do you access the constructor through something like PrivateObject?
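To make the question concrete, this is roughly the shape I'm talking about - the names are invented purely for illustration:

using System;

// The constructor is internal, so consumers of the API (in other assemblies)
// must go through the factory; unit tests either do the same or the test
// assembly is granted access via [assembly: InternalsVisibleTo("...")].
public class Order
{
    internal Order(int id)
    {
        Id = id;
    }

    public int Id { get; private set; }
}

public static class OrderFactory
{
    public static Order Create(int id)
    {
        // One central place for validation before an Order ever exists.
        if (id <= 0) throw new ArgumentOutOfRangeException("id");
        return new Order(id);
    }
}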
The answer is yes; my current project has exactly one developer working on it - me - and yet I still use visibility and other access modifiers, along with more complex design patterns, as necessary. Here's why:
The compiler, if you let it, can be one of your greatest tools to enforce good design patterns. Making everything public can cause a lot of headaches down the road, when you have to maintain an object whose normal state during program execution is the object-oriented equivalent of a human living his life on an operating table with his chest cracked open and everyone who knows him from his children to his electric company poking around in his vital organs to get him to do what they want. Adding another hand, or removing one, can cause the patient to go into cardiac arrest.
Code should be self-documenting. A class or class member that is marked as internal means it probably should be; if everything's public, you don't know if there's anything you shouldn't touch when interfacing with the object. Again, you've got your patient sitting on the operating table, and all of a sudden a new guy comes in, grabs the liver and goes "hey, what does this do?". Objects should have their hands shaken, be told to do something and let go to do it, and the function of their liver is of no concern to anyone but them.
Code should be maintainable by your posterity. This ties back to the first two rules, but basically, someone should be able to open up your codebase, find the entrance point and trace through basic execution flow, all the while looking at objects utilized along the way and determining their general form and function. Back to our patient on the operating table, let's say five years from now someone walks in on this scene; a guy split open on a table with 50 guys in his guts. It's not going to look like any polite social custom he's ever seen; it'll probably look most like a ritual human sacrifice, and most people's first instinct when encountering such a situation is to run.
However, the flip side of the coin is that a design pattern implemented for its own sake is generally a Very Bad Thing. Once you graduate college and get your first job, nobody really cares that you know how to implement a Strategy pattern, and you shouldn't do so at the first opportunity just to say you did. Every pattern has a set of circumstances in which it applies. If you were a doctor, would you perform an angioplasty on the next patient who walked in just to say you were able to do it?
I'm building a system which will have a few channels feeding different clients (MonoDroid, MonoTouch, ASP.NET MVC, REST API).
I'm trying to adopt an SOA architecture and also trying to adopt the persistence by reachability pattern (http://www.udidahan.com/2009/06/29/dont-create-aggregate-roots/).
My question relates to the design of the architecture: how best to split the system into discrete chunks to benefit from SOA?
In my model I have a SystemImplementation, which represents an installation of the system itself, and also an Account entity.
The way I initially thought about designing this was to create the services as:
SystemImplementationService - responsible for managing things related to the actual installation itself such as branding, traffic logging etc
AccountService - responsible for managing the users assets (media, network of contacts etc)
Logically the registration of a new user account would happen in AccountService.RegisterAccount, where the service can take care of validating the new account (duplicate username check etc.), hashing the password and so on.
However, in order to achieve persistence by reachability I'd need to add the new Account to the SystemImplementation.Accounts collection for it to be saved by the SystemImplementation service automatically (using NHibernate I can use lazy=extra to ensure that when I add the new account to the collection it doesn't automatically load all accounts).
For this to happen I'd probably need to create the Account in AccountService, pass back the unsaved entity to the client and then have the client call SystemImplementation.AssociateAccountWithSystemImplementation
This is so that I don't need to call the SystemImplementation service from the AccountService (as this - correct me if I'm wrong - is bad practice).
My question is then - am I splitting the system incorrectly? If so, how should I be splitting a system? Is there any methodology for defining the way a system should be split for SOA? Is it OK to call a WCF service from within a service:
AccountService.RegisterAccount --> SystemImplementation.AssociateAccountWithSystemImplementation
I'm worried I'm going to start building the system on some antipatterns which will come back to bite me later :)
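Just to make the intended split concrete, this is roughly what I have in mind (all names and signatures are provisional):

using System.ServiceModel;

// Provisional sketch of the two service contracts described above.
[ServiceContract]
public interface IAccountService
{
    // Validates the new account (duplicate username check, password hashing)
    // and returns the not-yet-persisted Account entity.
    [OperationContract]
    Account RegisterAccount(string username, string password);
}

[ServiceContract]
public interface ISystemImplementationService
{
    // Adds the account to SystemImplementation.Accounts so that persistence
    // by reachability saves it when the SystemImplementation is saved.
    [OperationContract]
    void AssociateAccountWithSystemImplementation(int systemImplementationId, Account account);
}

public class Account
{
    // Virtual members so NHibernate can proxy the entity for lazy loading.
    public virtual int Id { get; set; }
    public virtual string Username { get; set; }
}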
You have a partitioning issue, but you are not alone; everyone who adopts SOA comes up against this problem: how best to organize or partition the system into relevant pieces?
For me, Roger Sessions is talking the most sense around this topic, and guys like Microsoft are listening in a big way.
The papers that changed my thinking on this can be found at http://www.objectwatch.com/whitepapers/ABetterPath-Final.pdf, but I really recommend his book Simple Architectures for Complex Enterprises.
In that book he introduces equivalence relations from set theory and how they relate to the partitioning of service contracts.
In a nutshell, the rules for formulating partitions can be summarized in five laws:
1. Partitions must be true partitions.
a. Items live in one partition only, ever.
2. Partitions must be appropriate to the problem at hand.
a. Partitions only minimize complexity when they are appropriate to the problem at hand, e.g. a clothing store organized by color would have little value to customers looking for what they want.
3. The number of subsets must be appropriate.
a. Studies show that there seems to be an optimum number of items in a subset: adding more subsets, thus reducing the number of items in each subset, has very little effect on complexity, but reducing the number of subsets, thus increasing the number of elements in each subset, seems to add to complexity. The number seems to sit in the range 3 – 12, with 3 – 5 being optimal.
4. The size of the subsets must be roughly equal.
a. The size of the subsets and their importance in the overall partition must be roughly equivalent.
5. The interaction between the subsets must be minimal and well defined.
a. A reduction in complexity is dependent on minimizing both the number and nature of interactions between subsets of the partition.
Do not stress too much if at first you get it wrong; the SOA Manifesto tells us we should value evolutionary refinement over pursuit of initial perfection.
Good luck
With SOA, the hardest part is deciding on your vertical slices of functionality.
The general principles are...
1) You shouldn't have multiple services talking to the same table. You need to create one service that encompasses an area of functionality and then be strict by preventing any other service from touching those same tables.
2) In contrast to this, you also want to keep each vertical slice as narrow as it can be (but no narrower!). If you can avoid complex, deep object graphs, all the better.
How you slice your functionality depends very much on your own comfort level. For example, if you have a relationship between your "Article" and your "Author", you will be tempted to create an object graph that represents an "Author", which contains a list of "Articles" written by the author. You would actually be better off having an "Author" object, delivered by the "AuthorService", and the ability to get "Article" objects from the "ArticleService" based simply on the AuthorId. This means you don't have to construct a complete author object graph with lists of articles, comments, messages, permissions and loads more every time you want to deal with an Author. Even though NHibernate would lazy-load the relevant parts of this for you, it is still a complicated object graph.
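As a rough sketch of that shape (the names are only illustrative):

using System.Collections.Generic;

// Sketch only: each service owns its own narrow slice; callers compose by id
// instead of walking one deep object graph.
public interface IAuthorService
{
    Author GetAuthor(int authorId);
}

public interface IArticleService
{
    IList<Article> GetArticlesByAuthorId(int authorId);
}

public class Author
{
    public int AuthorId { get; set; }
    public string Name { get; set; }
    // Deliberately no Articles collection here - ask the ArticleService instead.
}

public class Article
{
    public int ArticleId { get; set; }
    public int AuthorId { get; set; }
    public string Title { get; set; }
}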
I must develop a simple web application to produce reports. I have a single table, "contract", and I must return very simple aggregated values: the number of documents produced in a time range, the average number of pages per document and so on. The table gets filled by a batch application; users will have roles that will allow them to see only a part of the reports (if they may be called that).
My purpose is to:
1) develop a class which generates the so-called reports, open to future extension (adding new methods to generate new reports for different roles must be easy);
2) decouple the web graphical interface from the database access.
I'm evaluating various patterns: decorator, visitor, ... but the returned data is so simple that I cannot tell which applies, or even whether it's worth using one at all. Moreover, I must do it in less than 5 days. It could be done as a so-called "smart GUI", but as stated in point 1, I don't want to get into trouble when new roles or methods are added.
Thank you for your answers.
I'm sorry, I realize I haven't provided much info. I live in a Dilbert world. At the moment I've got the following info: the DB will be Oracle (the concrete DB doesn't exist yet), so no EF; maybe LINQ to DataSet (but I'm new to LINQ). About new features of the application: due to previous experiences, the only thing I wish is not to be obliged to propagate changes across the whole application, even if it's simple. That is the reason I've thought about design patterns (note I said "if it's the case" in my question).
I'll KISS it and then refactor if needed, as suggested by Ladislav Mrnka, but I'd still appreciate any suggestions on how to keep the data-gathering class open to extension.
KISS - keep it simple, stupid. You have five days. Create a working application and, if you have time, refactor it into a better solution.
The road to good code is not paved with design patterns.
Good code is code that is readable, maintainable, robust, compatible and future-proof.
Don’t get me wrong: Design patterns are a great thing because they help categorise, and thus teach, the experience that earlier generations of programmers have accrued. Each design pattern once solved a problem in a way that was not only novel and creative, but also good. The corollary is not that a design pattern is necessarily good code when applied to any other problem.
Good code requires experience and insight. This includes experience with design patterns, and insight into their applicability and their downsides and pitfalls.
That said, my recommendation in your specific case is to learn about the recommended practice regarding web interfaces, database access, etc. Most C# programmers write web applications in ASP.NET; tend to use LINQ-to-Entities or LINQ-to-SQL for database access; and use Windows Forms or WPF for a desktop GUI. Each of these may or may not fulfill the requirements of your particular project. Only you can tell.
How about using the strategy pattern for retrieving the data? Use interfaces like the following to keep it extensible at all times:
IReportFilter: Report filter/criteria set
IReportParams: Gets report parameters
IReportData: Gets the report data in a result set
IReportFormat: Report formatting
IReportRender: Renders the report
Just thinking out loud.
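To be slightly more concrete, a rough sketch - only the interface names come from the list above; every member is hypothetical:

using System.Collections.Generic;

// Sketch only: each piece is an interchangeable strategy, so a report for a
// new role means new implementations rather than edits to existing classes.
public interface IReportFilter          // filter / criteria set
{
    string Role { get; }
}

public interface IReportParams          // parameters for a single run
{
    IReportFilter Filter { get; }
}

public interface IReportData            // the report data as a result set
{
    IEnumerable<IDictionary<string, object>> GetRows(IReportParams parameters);
}

public interface IReportFormat          // formatting of individual values
{
    string FormatValue(string column, object value);
}

public interface IReportRender          // renders the formatted rows
{
    string Render(IReportData data, IReportFormat format, IReportParams parameters);
}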
When working with service-oriented applications we often use system types to identify / query our business entities:
IList<Product> GetProductsByUserAndCategoryId(int userId, int categoryId);
However, we can't prevent developers from passing some other identifier which is not a "User Identifier" or a "Category Identifier", or from swapping the ids in the method call.
So a solution is to use Strongly Typed Identifiers, like this:
IList<Product> GetProductsByUserAndCategoryId(UserId userId, CategoryId categoryId);
GetProductsByUserAndCategoryId(new UserId(123), new CategoryId(456));
What do you think about this? Pros and cons?
Pros and cons?
Well, first off, this only shifts the moment of validation; it still has to happen, preferably as soon as a UserId (…) is instantiated. You also have to see whether this really has any benefits in your system at all.
On the other hand, I do think that it prevents bugs by disambiguating between inherently ambiguous numbers. Letting the same type int stand for two completely unrelated things can actually be dangerous.
In a recent code review (for a course at University) the no. 1 error students had made was to use an integer in the wrong way. Having used different types as in your example would effectively have prevented this source of errors altogether.
So, in summary, I don’t think there’s a blanket answer but I am generally in favour of such idioms. This is one of the real benefits of a strict type system.
The only real con for me is having to type extra code. Your code becomes more tightly defined, which is a good thing, but now there is extra friction to actually getting it written. It is just a matter of whether spending the extra time and effort up front will pay off in saved maintenance and dependability later.
It is the same with any methodology. The TDD guys spend time building scaffolding and tests up front with the hope it will make their code more reliable and more easily maintainable. -- Many of them say it saves time upfront as well...but I have my doubts :) --
Ultimately I agree with Mr. Rudolph. This is the strength of strict type systems; use it to your advantage.
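For completeness, a minimal sketch of what such an identifier type might look like - just one way of doing it; a CategoryId would follow the same pattern:

using System;

// Sketch only: a small value type wrapping the raw int, so a UserId can never
// be passed where a CategoryId is expected (or the other way round).
public struct UserId : IEquatable<UserId>
{
    private readonly int id;

    public UserId(int id)
    {
        if (id <= 0) throw new ArgumentOutOfRangeException("id");
        this.id = id;
    }

    public int Value { get { return id; } }

    public bool Equals(UserId other) { return id == other.id; }
    public override bool Equals(object obj) { return obj is UserId && Equals((UserId)obj); }
    public override int GetHashCode() { return id; }
    public override string ToString() { return id.ToString(); }
}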