I've been driving myself crazy for hours trying to figure this one out, and I'm not getting anywhere with it.
I'm creating a checkout till for a cashier: specifically, I need to sum the items, then apply the promotional discounts. I'm trying to do it without violating any design principles (impossible, I know; I can let things slide when it makes sense).
Promotional discounts could be anything, from a flat Black Friday discount, to 'Orders over £100 save 10%', to '3 for 2 on these items', or 'Buy at least two cans of coke, and the price for them all drops to £0.50!'
I cannot see how to fit the promotional deals in. Each may require a different set of data from different locations. For instance, one of the big problems is the '3 for 2' deal: getting access to the items in the Checkout has been a plague on my mind.
So far, my best approach has been to use the Decorator pattern: wrap the checkout up in a bunch of promotional deals when the price is calculated. Since each decorator holds an instance of the checkout, we'll have access to the original checkout with the list of items.
In the future, the only thing I'd need to do is write the new rule, add it to the factory, and update any DB data, which is perfect: the minimal change.
This kind of works. I can justify it in my head that it's still a checkout, so all the rules being able to access the checkout makes sense, and it gives me a nice way to chain the discounts. But there is a problem, in that I'm sure I shouldn't be using the pattern the way I'm suggesting. For instance, if one of the promotions wears off, you shouldn't really 'unwrap' it; and realistically, while it's nice to be able to add promotions dynamically to extend an instance, it's not necessary.
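To make that concrete, here's roughly what I mean (a minimal sketch; all the names are made up, and a '3 for 2' rule would work the same way by walking Inner.Items):

using System.Collections.Generic;
using System.Linq;

public class Item
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public interface ICheckout
{
    IList<Item> Items { get; }
    decimal CalculateTotal();
}

public class Checkout : ICheckout
{
    private readonly List<Item> items = new List<Item>();

    public IList<Item> Items { get { return items; } }

    public void Scan(Item item) { items.Add(item); }

    public decimal CalculateTotal() { return items.Sum(i => i.Price); }
}

// Each promotion wraps a checkout, so decorators can be chained.
public abstract class PromotionDecorator : ICheckout
{
    protected readonly ICheckout Inner;

    protected PromotionDecorator(ICheckout inner) { Inner = inner; }

    public IList<Item> Items { get { return Inner.Items; } }

    public abstract decimal CalculateTotal();
}

// 'Orders over £100 save 10%'.
public class OverHundredDiscount : PromotionDecorator
{
    public OverHundredDiscount(ICheckout inner) : base(inner) { }

    public override decimal CalculateTotal()
    {
        decimal total = Inner.CalculateTotal();
        return total > 100m ? total * 0.9m : total;
    }
}

So pricing becomes something like new OverHundredDiscount(checkout).CalculateTotal().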
I've read through more design patterns but can't seem to find anything that applies. I saw the following article:
https://levelup.gitconnected.com/rules-design-pattern-in-c-6c62f0e20ee0
This is basically what I want to do, but the implementation feels clumsy to me.
public bool IsValid(FileInfo fileInfo)
{
    var rules = new List<IFileValidationRule> { new FileExtensionRule(new string[] { "txt", "html" }) };

    if (AdminConfig.CheckFileSize)
    {
        rules.Add(new FileSizeRule("txt", 5 * 1024 * 1024));
        rules.Add(new FileSizeRule("html", 10 * 1024 * 1024));
    }

    if (User.Status != UserStatus.Premium)
    {
        rules.Add(new MaxFileLengthRule(50));
    }

    bool isValid = rules.All(rule => rule.IsValid(fileInfo));
    return isValid;
}
Specifically, this part seems to violate a few key principles: the Open-Closed Principle, Dependency Inversion, etc.
The other big problem I can't wrap my head around is as below:
Imagine that, for the above example, a new rule needs to be added that reads the file data and checks whether there are any bad characters in there; the specifics don't matter.
Implementing this is easy: you inject the file (or the file data) into the 'FileValidator' class, instantiate your rule, and pass the file data into it. You then run the rule and return the result. Great! But is this OK?
Reading this says no: http://wiki.c2.com/?TellDontAsk
"Tell, don't ask" - It is okay to use accessors to get the state of an object, as long as you don't use the result to make decisions outside the object.
That would be exactly what the code is doing! I guess the alternative is to update the 'FileData' object to essentially take a list of bad characters, check the file data itself, and return false to the rule, which then fails the whole process. But this would start to throw a bunch more principles out the door: you're now breaking the Open-Closed Principle and the Single Responsibility Principle, and it feels like you're building a rod for your own back, adding these custom methods for singular rules and bloating your object. (The link does discuss how you can pass a function into the method, which is pretty nice, but still not perfect; at the end of the day, aren't you just indirectly handing control to the caller?)
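For reference, the 'pass a function in' variant would look something like this (a sketch with made-up names); the contents stay private, though as I said, you're arguably still handing control to the caller:

using System;

public class FileData
{
    private readonly string contents;

    public FileData(string contents) { this.contents = contents; }

    // Tell, don't ask: the raw contents never leave the object.
    public bool Satisfies(Func<string, bool> check)
    {
        return check(contents);
    }
}

public class NoBadCharactersRule
{
    private static readonly char[] BadCharacters = { '\0', '\u0001' };

    public bool IsValid(FileData file)
    {
        return file.Satisfies(text => text.IndexOfAny(BadCharacters) < 0);
    }
}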
The above alone wouldn't be enough to stop me, but I'm struggling to justify making a private set of items public so one rule out of the bunch can make use of that data.
I'm in OOP recursion, tumbling towards a stack overflow. Can anyone pull me out and help me consolidate my thoughts? None of the design patterns seem to work, but I'm sure this is a basic problem solved many times in the past. What am I missing?
I would like to write a very simple test for the constructor of my class:
[Test]
public void InitLensShadingPluginTest()
{
    _lensShadingStory.WithScenario("Init Lens Shading plug-in")
        .Given(InitLensShadingPlugin)
        .When(Nothing)
        .Then(PluginIsCreated)
        .Execute();
}
This could go in either Given() or When(); I think it should be in When(), but it doesn't really matter.
private void InitLensShadingPlugin()
{
    _plugin = new LSCPlugin(_imagesDatabaseProvider, n_iExternalToolImageViewerControl);
}
Since the constructor is the thing being tested, I do not have anything to do inside the When() step, and in Then() I assert that the plugin was created.
private void PluginIsCreated()
{
    Assert.NotNull(_plugin);
}
My question is about StoryQ: since I do not want to do anything inside When(), I tried to use When(() => {}), but this is not supported by StoryQ. This means I need to implement something like
private void Nothing()
{
}
and call When(Nothing). Is there a better practice?
It's strange that StoryQ doesn't support empty steps; your scenario is actually pretty typical of other examples I've used for starting up applications, games, etc.:
Given the chess program is running
Then the pieces should be in the starting positions
for instance. So your desire to use a condition followed by an outcome is perfectly valid.
Looking at StoryQ's API, it doesn't look as if it supports these empty steps. You could always make your own method and call both the Given and When steps inside it, returning the operation from the When:
.GivenIStartedWith(InitLensShadingPlugin)
.Then(PluginIsCreated)
If that seems too clunky, I'd do as you suggested and move the Given to a When, initializing the Given with an empty method with a more meaningful name instead:
Given(NothingIsInitializedYet)
    .When(InitLensShadingPlugin)
    .Then(PluginIsCreated)
Either of these will solve your problem.
However, if all you're testing is a class, rather than an entire application, using StoryQ is probably overkill. The natural-language BDD frameworks like StoryQ, Cucumber, JBehave etc. are intended to help business and development teams collaborate in their exploration of requirements. They incur significant setup and maintenance overhead, so if the audience of your class-level scenarios / examples is technical, there may be an easier way.
For class-level examples of behaviour I would just go with a plain unit testing tool like NUnit or MSpec. I like using NUnit and putting my "Given / When / Then" in comments:
// Given I initialized the lens shading plugin on startup
_plugin = new LSCPlugin(_imagesDatabaseProvider, n_iExternalToolImageViewerControl);
// Then the plugin should have been created
Assert.NotNull(_plugin);
Steps at a class level aren't reused in the same way they are in full-system scenarios, because classes have much smaller, more encapsulated responsibilities; and developers benefit from reading the code rather than having it hidden away in the step definitions.
Your Given/When/Then comments here might still echo scenarios at a higher level, if the class is directly driving the functionality that the user sees.
Normally for full-system scenarios we would derive the steps from conversations with the "3 amigos":
a business representative (PO, SME, someone who has a problem to be solved)
a tester (who spots scenarios we might otherwise miss)
the dev (who's going to solve the problem).
There might be a pair of devs. UI designers can get involved if they want to. Matt Wynne says it's "3 amigos, where 3 is any number between 3 and 7". The best time to have the conversations is right before the devs pick up the work to begin coding it.
However, if you're working on your own, whether it's a toy or a real application, you might benefit just from having imaginary conversations. I use a pixie called Thistle for mine.
I am writing a log file decoder which should be capable of reading many different structures of files. My question is how best to represent this data. I am using C#, but am new to OOP.
An example:
The log files have a range of sensor values. One sensor reading can be called A, another B. Obviously, there are many more than 2 entry types.
In different log files, they could be stored either as ABABABABAB or AAAAABBBBB.
I was thinking of describing this as blocks of entries. So in the first case, a block would be 'AB', with 5 blocks. In the second case, the first block is 'A', read 5 times. This is followed by a block of 'B', read 5 times.
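To illustrate, something like this is what I have in mind (a rough sketch; all names made up):

using System.Collections.Generic;

// A block is an ordered list of entry types plus a repeat count.
public class BlockDescriptor
{
    public List<string> EntryTypes { get; private set; }
    public int RepeatCount { get; private set; }

    public BlockDescriptor(List<string> entryTypes, int repeatCount)
    {
        EntryTypes = entryTypes;
        RepeatCount = repeatCount;
    }
}

public static class Layouts
{
    // ABABABABAB: the block (A, B) repeated 5 times.
    public static readonly BlockDescriptor[] Interleaved =
    {
        new BlockDescriptor(new List<string> { "A", "B" }, 5)
    };

    // AAAAABBBBB: a block of A read 5 times, then a block of B read 5 times.
    public static readonly BlockDescriptor[] Sequential =
    {
        new BlockDescriptor(new List<string> { "A" }, 5),
        new BlockDescriptor(new List<string> { "B" }, 5)
    };
}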
This is quite a simplification (there are actually 40 different types of log file, each with up to 40 sensor values in a block). No log has more than 300 blocks.
At the moment, I store all of this in a datatable. I have a column for each entry, with a property of how many to read. If this is set to -1, it continues to the next column in the block. If not, it will assume that it has reached the end of the block.
This all seems quite clumsy. Can anyone suggest a better way of doing this?
I think you should first start here, and then here to learn a little bit about what object oriented programming is. Don't worry about your current problem while learning about OOP.
As you are learning about OO concepts, you should begin to understand that code is not data, and data is not code. It does not matter how you represent your data from an OOP stance. You can write OO code to consume your data, or you could write procedural code to consume your data; that part is irrelevant to the format of the data.
So then getting back to your question
My question is how best to represent this data
It depends on your needs. What is writing the log file? Do you have control over the writer and reader? If I did, I would rely on the built-in serialization methods to minimize the amount of code I need to write. Is the log file going to be really long? If so, the "datatable" approach you described is usually better. If the log file isn't going to be huge in file size, XML is really easy to work with.
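For example, if XML fits, the built-in XmlSerializer keeps the reader and writer tiny. A sketch, assuming a made-up Entry shape (XmlSerializer needs public properties and a parameterless constructor):

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Entry
{
    public string Block { get; set; }  // e.g. "AB" or "A"
    public int Count { get; set; }     // how many times the block repeats
}

public static class LogFormat
{
    private static readonly XmlSerializer Serializer =
        new XmlSerializer(typeof(List<Entry>));

    public static void Save(string path, List<Entry> entries)
    {
        using (FileStream stream = File.Create(path))
        {
            Serializer.Serialize(stream, entries);
        }
    }

    public static List<Entry> Load(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            return (List<Entry>)Serializer.Deserialize(stream);
        }
    }
}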
Very basic and straightforward:
Define an interface IEntry with properties like string EntryBlock and int Count
Define a class which represents an entry and implements IEntry
Code which does binary serialization should be aware of interfaces; for instance, it should refer to IEnumerable<IEntry>
The Entry class could override ToString() to return something like [ABAB-2], if that would be helpful during serialization
The interface IEntry could provide a method void CreateFromRawString(string rawDataFromLog) if that would be helpful; decide yourself (a sketch of these pieces follows below)
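Put together, a sketch of those pieces might look like this (CreateFromRawString omitted; add it if it earns its keep):

using System.Collections.Generic;

public interface IEntry
{
    string EntryBlock { get; }
    int Count { get; }
}

public class Entry : IEntry
{
    public string EntryBlock { get; private set; }
    public int Count { get; private set; }

    public Entry(string entryBlock, int count)
    {
        EntryBlock = entryBlock;
        Count = count;
    }

    // e.g. [ABAB-2]
    public override string ToString()
    {
        return string.Format("[{0}-{1}]", EntryBlock, Count);
    }
}

// Consuming code then depends on the interface, e.g.:
// void Write(IEnumerable<IEntry> entries) { ... }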
If you want more info, please share the code you are using for serialization/deserialization.
In addition to what Bob has offered, I highly recommend Head First Design Patterns as a gentle, but robust introduction to OO for a C# programmer. The samples are in Java, which translate easily to C#.
As for OOP, you want to learn SOLID.
I would suggest you build this using Test Driven Development.
Start small, with a simple fragment of your log data and write a test like (you'll find a better way to do this with experience and apply it to your situation):
[Test]
public void ReadSequence_FiveA_ReturnsProperList()
{
    // Arrange
    string sequenceStub = "AAAAA";

    // Act
    MyFileDecoder decoder = new MyFileDecoder();
    List<string> results = decoder.ReadSequence(sequenceStub);

    // Assert
    Assert.AreEqual(5, results.Count);
    Assert.AreEqual("A", results[0]);
}
That test code snippet is just a starting point, and I've tried to be rather verbose in the assertions. You can come up with more creative ways over time. The point is to start small. Once this test passes, add another test where you mix "AB" and change your decoder to handle this properly. Eventually, you'll have a large set of tests that handle your different formats. Using TDD, you'll be on the path to using SOLID properly. Whenever you find something you can't test, you should review the rules and see if you can't make it simpler and inject dependencies.
Eventually you'll get into mocking. For example, you might find that you'd rather INJECT the ability for your MyFileDecoder class to have a dependency that will read your log file. In that case, you would create a mock object and pass that into the constructor and set the mock to return the sequenceStub when a method is called.
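One way that injection might look, with a hand-rolled stub (ILogReader, StubLogReader, and the MyFileDecoder constructor that accepts a reader are all hypothetical; a mocking library would generate the stub for you):

public interface ILogReader
{
    string ReadAll();
}

public class StubLogReader : ILogReader
{
    private readonly string data;

    public StubLogReader(string data) { this.data = data; }

    public string ReadAll() { return data; }
}

[Test]
public void ReadSequence_UsesInjectedReader()
{
    // Arrange: the stub plays the role of the real file reader.
    MyFileDecoder decoder = new MyFileDecoder(new StubLogReader("AAAAA"));

    // Act
    List<string> results = decoder.ReadSequence();

    // Assert
    Assert.AreEqual(5, results.Count);
}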
For my software development programming class we were supposed to make a "Feed Manager" type program for RSS feeds. Here is how I handled the implementation of FeedItems.
Nice and simple:
struct FeedItem {
    string title;
    string description;
    string url;
};
I got marked down for that, the "correct" example answer is as follows:
class FeedItem
{
public:
    FeedItem(string title, string description, string url);

    inline string getTitle() const { return this->title; }
    inline string getDescription() const { return this->description; }
    inline string getURL() const { return this->url; }

    inline void setTitle(string title) { this->title = title; }
    inline void setDescription(string description) { this->description = description; }
    inline void setURL(string url) { this->url = url; }

private:
    string title;
    string description;
    string url;
};
Now to me, this seems stupid. I honestly can't believe I got marked down, when this does the exact same thing that mine does with a lot more overhead.
It reminds me of how in C# people always do this:
public class Example
{
    private int _myint;

    public int MyInt
    {
        get
        {
            return this._myint;
        }
        set
        {
            this._myint = value;
        }
    }
}
I mean I GET why they do it, maybe later on they want to validate the data in the setter or increment it in the getter. But why don't you people just do THIS UNTIL that situation arises?
public class Example
{
    public int MyInt;
}
Sorry this is kind of a rant and not really a question, but the redundancy is maddening to me. Why are getters and setters so loved, when they are unneeded?
It's an issue of "best practice" and style.
You don't ever want to expose your data members directly. You always want to be able to control how they are accessed. I agree, in this instance, it seems a bit ridiculous, but it is intended to teach you that style so you get used to it.
It helps to define a consistent interface for classes. You always know how to get to something: by calling its get method.
Then there's also the reusability issue. Say, down the road, you need to change what happens when somebody accesses a data member. You can do that without forcing clients to recompile code. You can simply change the method in the class and guarantee that the new logic is utilized.
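A small illustration of that point (a sketch with a hypothetical class): the signature clients compiled against stays the same, and only the body changes.

using System;

public class FeedItem
{
    private string url;

    public void SetUrl(string value) { url = value; }

    // v1 simply returned the field:
    //   public string GetUrl() { return url; }

    // v2 adds a check later; callers keep working unchanged.
    public string GetUrl()
    {
        if (url == null)
        {
            throw new InvalidOperationException("URL has not been set");
        }
        return url;
    }
}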
Here's a nice long SO discussion on the subject: Why use getters and setters.
The question you want to ask yourself is "What's going to happen 3 months from now when you realize that FeedItem.url does need to be validated but it's already referenced directly from 287 other classes?"
The main reason to do this before its needed is for versioning.
Fields behave differently than properties, especially when used as an lvalue (where it's often not allowed, especially in C#). Also, if you later need to add property get/set routines, you'll break your API: users of your class will need to rewrite their code to use the new version.
It's much safer to do this up front.
C# 3, btw, makes this easier:
public class Example
{
    public int MyInt { get; set; }
}
I absolutely agree with you. But in life you should probably do The Right Thing: in school, it's to get good marks. In your workplace it's to fulfill specs. If you want to be stubborn, then that's fine, but do explain yourself -- cover your bases in comments to minimize the damage you might get.
In your particular example above I can see you might want to validate, say, the URL. Maybe you'd even want to sanitize the title and the description, but either way I think this is the sort of thing you can tell early on in the class design. State your intentions and your rationale in comments. If you don't need validation then you don't need a getter and setter, you're absolutely right.
Simplicity pays, it's a valuable feature. Never do anything religiously.
If something's a simple struct, then yes it's ridiculous because it's just DATA.
This is really just a throwback to the beginning of OOP, where people still didn't get the idea of classes at all. There's no reason to have hundreds of get and set methods just in case you might change getId() to be a remote call to the Hubble telescope some day.
You really want that functionality at the TOP level; at the bottom it's worthless. I.e., you would have a complex method that is handed a pure virtual class to work on, guaranteeing it can still work no matter what happens below. Just placing it randomly in every struct is a joke, and it should never be done for a POD.
Maybe both options are a bit wrong, because neither version of the class has any behaviour. It's hard to comment further without more context.
See http://www.pragprog.com/articles/tell-dont-ask
Now let's imagine that your FeedItem class has become wonderfully popular and is being used by projects all over the place. You decide you need (as other answers have suggested) to validate the URL that has been provided.
Happy days: you have written a setter for the URL. You edit it, validate the URL, and throw an exception if it is invalid. You release your new version of the class and everyone using it is happy. (Let's ignore checked vs unchecked exceptions to keep this on track.)
Except then you get a call from an angry developer. They were reading a list of feed items from a file when their application starts up. And now, if someone makes a little mistake in the configuration file, your new exception is thrown and the whole system doesn't start up, just because one frigging feed item was wrong!
You may have kept the method signature the same, but you have changed the semantics of the interface, and so it breaks dependent code. Now, you can either take the high ground and tell them to rewrite their program right, or you humbly add setURLAndValidate.
Keep in mind that coding "best practices" are often made obsolete by advances in programming languages.
For example, in C# the getter/setter concept has been baked into the language in the form of properties. C# 3.0 made this easier with the introduction of automatic properties, where the compiler automatically generates the getter/setter for you. C# 3.0 also introduced object initializers, which means that in most cases you no longer need to declare constructors which simply initialize properties.
So the canonical C# way to do what you're doing would look like this:
class FeedItem
{
    public string Title { get; set; } // automatic properties
    public string Description { get; set; }
    public string Url { get; set; }
}
And the usage would look like this (using object initializer):
FeedItem fi = new FeedItem() { Title = "Some Title", Description = "Some Description", Url = "Some Url" };
The point is that you should try and learn what the best practice or canonical way of doing things are for the particular language you are using, and not simply copy old habits which no longer make sense.
As a C++ developer I make my members always private simply to be consistent. So I always know that I need to type p.x(), and not p.x.
Also, I usually avoid implementing setter methods. Instead of changing an object I create a new one:
p = Point(p.x(), p.y() + 1);
This preserves encapsulation as well.
There absolutely is a point where encapsulation becomes ridiculous.
The more abstraction that is introduced into code, the greater your up-front education and learning-curve costs will be.
Everyone who knows C can debug a horribly written 1000-line function that uses just the basic language and C standard library. Not everyone can debug the framework you've invented. Every introduced level of encapsulation/abstraction must be weighed against its cost. That's not to say it's not worth it, but as always you have to find the optimal balance for your situation.
One of the problems that the software industry faces is the problem of reusable code. It's a big problem. In the hardware world, hardware components are designed once, then the design is reused later when you buy the components and put them together to make new things.
In the software world, every time we need a component we design it again and again. It's very wasteful.
Encapsulation was proposed as a technique for ensuring that modules that are created are reusable. That is, there is a clearly defined interface that abstracts the details of the module and make it easier to use that module later. The interface also prevents misuse of the object.
The simple classes that you build in class do not adequately illustrate the need for a well-defined interface. Saying "But why don't you people just do THIS UNTIL that situation arises?" will not work in real life. What you are learning in your software engineering course is to engineer software that other programmers will be able to use. Consider that the creators of libraries such as those provided by the .NET Framework and the Java API absolutely require this discipline. If they decided that encapsulation was too much trouble, these environments would be almost impossible to work with.
Following these guidelines will result in high quality code in the future. Code that adds value to the field because more than just yourself will benefit from it.
One last point: encapsulation also makes it possible to adequately test a module and be reasonably sure that it works. Without encapsulation, testing and verification of your code would be that much more difficult.
Getters/Setters are, of course, good practice but they are tedious to write and, even worse, to read.
How many times have we read a class with half a dozen member variables and accompanying getters/setters, each with the full-hog @param/@return boilerplate and a famously useless comment like 'get the value of X', 'set the value of X', 'get the value of Y', 'set the value of Y', 'get the value of Z', 'set the value of Zzzzzzzzzzzzz. thump!
This is a very common question: "But why don't you people just do THIS UNTIL that situation arises?".
The reason is simple: usually it is much cheaper to do it right the first time than to fix, retest, and redeploy it later.
Old estimates say that maintenance costs are 80%, and much of that maintenance is exactly what you are suggesting: doing the right thing only after someone had a problem. Doing it right the first time allows us to concentrate on more interesting things and to be more productive.
Sloppy coding is usually very unprofitable: your customers are unhappy because the product is unreliable, and they are not productive when they are using it. Developers are not happy either; they spend 80% of their time doing patches, which is boring. Eventually you can end up losing both customers and good developers.
I agree with you, but it's important to survive the system. While in school, pretend to agree. In other words, being marked down is detrimental to you and it is not worth it to be marked down for your principles, opinions, or values.
Also, while working on a team or at an employer, pretend to agree. Later, start your own business and do it your way. While you try the ways of others, be calmly open-minded toward them -- you may find that these experiences re-shape your views.
Encapsulation is theoretically useful in case the internal implementation ever changes. For example, if the per-object URL became a calculated result rather than a stored value, then the getUrl() encapsulation would continue to work. But I suspect you already have heard this side of it.
We have a lot of code that passes about “Ids” of data rows; these are mostly ints or guids. I could make this code safer by creating a different struct for the id of each database table. Then the type checker will help to find cases when the wrong ID is passed.
E.g. the Person table has a column called PersonId, and we have code like:
DeletePerson(int personId)
DeleteCar(int carId)
Would it be better to have:
struct PersonId
{
    private int id;
    // GetHashCode etc....
}
DeletePerson(PersonId personId)
DeleteCar(CarId carId)
Has anyone got real-life experience of doing this? Is it worth the overhead, or is it more pain than it is worth?
(It would also make it easier to change the data type of the primary key in the database; that is why I thought of this idea in the first place.)
Please don't say "use an ORM" or suggest some other big change to the system design, as I know an ORM would be a better option, but that is not in my power at present. However, I can make minor changes like the above to the module I am working on at present.
Update:
Note this is not a web application, and the Ids are kept in memory and passed about with WCF, so there is no conversion to/from strings at the edge. There is no reason the WCF interface can't use the PersonId type etc. The PersonId type etc. could even be used in the WPF/WinForms UI code.
The only inherently "untyped" bit of the system is the database.
This seems to be down to the cost/benefit of spending time writing code that the compiler can check better, or spending the time writing more unit tests. I am coming down more on the side of spending the time on testing, as I would like to see at least some unit tests in the code base.
It's hard to see how it could be worth it: I recommend doing it only as a last resort and only if people are actually mixing identifiers during development or reporting difficulty keeping them straight.
In web applications in particular it won't even offer the safety you're hoping for: typically you'll be converting strings into integers anyway. There are just too many cases where you'll find yourself writing silly code like this:
int personId;
if (Int32.TryParse(Request["personId"], out personId)) {
    this.person = this.PersonRepository.Get(new PersonId(personId));
}
Dealing with complex state in memory certainly improves the case for strongly-typed IDs, but I think Arthur's idea is even better: to avoid confusion, demand an entity instance instead of an identifier. In some situations, performance and memory considerations could make that impractical, but even those should be rare enough that code review would be just as effective without the negative side-effects (quite the reverse!).
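In sketch form (hypothetical types), the 'demand an entity' version makes the mix-up impossible to compile:

public class Person { public int Id { get; set; } }
public class Car { public int Id { get; set; } }

public class PersonRepository
{
    // Passing a Car here is now a compile error, with no wrapper structs needed.
    public void Delete(Person person)
    {
        // DELETE FROM Person WHERE PersonId = person.Id
    }
}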
I've worked on a system that did this, and it didn't really provide any value. We didn't have ambiguities like the ones you're describing, and in terms of future-proofing, it made it slightly harder to implement new features without any payoff. (No ID's data type changed in two years, at any rate; it could certainly happen at some point, but as far as I know, the return on investment for that is currently negative.)
I wouldn't make a special id for this. This is mostly a testing issue. You can test the code and make sure it does what it is supposed to.
You can create a standard way of doing things in your system that helps future maintenance (similar to what you mention) by passing in the whole object to be manipulated. Of course, if you named your parameter (int personID) and had documentation, then any non-malicious programmer should be able to use the code effectively when calling that method. Passing a whole object will do the type matching that you are looking for, and that should be enough of a standardized way.
I just see having a special structure made to guard against this as adding more work for little benefit. Even if you did this, someone could come along and find a convenient way to make a 'helper' method and bypass whatever structure you put in place anyway so it really isn't a guarantee.
You can just opt for GUIDs, like you suggested yourself. Then, you won't have to worry about passing a person ID of "42" to DeleteCar() and accidentally delete the car with ID of 42. GUIDs are unique; if you pass a person GUID to DeleteCar in your code because of a programming typo, that GUID will not be a PK of any car in the database.
You could create a simple Id class which can help differentiate in code between the two:
public class Id<T>
{
    private int RawValue { get; set; }

    public Id(int value)
    {
        this.RawValue = value;
    }

    public static explicit operator int(Id<T> id) { return id.RawValue; }

    // this cast is optional and can be excluded for further strictness
    public static implicit operator Id<T>(int value) { return new Id<T>(value); }
}
Used like so:
class SomeClass
{
    public Id<Person> PersonId { get; set; }
    public Id<Car> CarId { get; set; }
}
Assuming your values would only be retrieved from the database, unless you explicitly cast the value to an integer, it is not possible to use the two in each other's place.
I don't see much value in custom checking in this case. You might want to beef up your testing suite to check that two things are happening:
Your data access code always works as you expect (i.e., you aren't loading inconsistent Key information into your classes and getting misuse because of that).
That your "round trip" code is working as expected (i.e., that loading a record, making a change and saving it back isn't somehow corrupting your business logic objects).
Having a data access (and business logic) layer you can trust is crucial to being able to address the bigger-picture problems you will encounter when implementing the actual business requirements. If your data layer is unreliable, you will spend a lot of effort tracking (or worse, working around) problems at that level that surface when you put load on the subsystem.
If instead your data access code is robust in the face of incorrect usage (what your test suite should be proving to you) then you can relax a bit on the higher levels and trust they will throw exceptions (or however you are dealing with it) when abused.
The reason you hear people suggesting an ORM is that many of these issues are dealt with in a reliable way by such tools. If your implementation is far enough along that such a switch would be painful, just keep in mind that your low-level data access layer needs to be as robust as a good ORM if you really want to be able to trust (and thus forget about, to a certain extent) your data access.
Instead of custom validation, your testing suite could inject code (via dependency injection) that does robust tests of your Keys (hitting the database to verify each change) as the tests run and that injects production code that omits or restricts such tests for performance reasons. Your data layer will throw errors on failed keys (if you have your foreign keys set up correctly there) so you should also be able to handle those exceptions.
My gut says this just isn't worth the hassle. My first question to you would be whether you actually have found bugs where the wrong int was being passed (a Car ID instead of a Person ID in your example). If so, it is probably more of a case of worse overall architecture in that your Domain objects have too much coupling, and are passing too many arguments around in method parameters rather than acting on internal variables.
Is it bad practice to do potentially long-running work in a property accessor? If so, why? And what constitutes "long running"?
Doing magic in a property accessor seems like my prerogative as a class designer. I always thought that is why the designers of C# put those things in there - so I could do what I want.
Of course it's good practice to minimize surprises for users of a class, and so embedding truly long-running things (e.g., a 10-minute Monte Carlo analysis) in a method makes sense.
But suppose a prop accessor requires a db read. I already have the db connection open. Would db access code be "acceptable", within the normal expectations, in a property accessor?
Like you mentioned, it's a surprise for the user of the class. People are used to being able to do things like this with properties (contrived example follows:)
foreach (var item in bunchOfItems)
    foreach (var slot in someCollection)
        slot.Value = item.Value;
This looks very natural, but if item.Value actually is hitting the database every time you access it, it would be a minor disaster, and should be written in a fashion equivalent to this:
foreach (var item in bunchOfItems)
{
    var temp = item.Value;
    foreach (var slot in someCollection)
        slot.Value = temp;
}
Please help steer people using your code away from hidden dangers like this, and put slow things in methods so people know that they're slow.
There are some exceptions, of course. Lazy-loading is fine as long as the lazy load isn't going to take some insanely long amount of time, and sometimes making things properties is really useful for reflection- and data-binding-related reasons, so maybe you'll want to bend this rule. But there's not much sense in violating the convention and violating people's expectations without some specific reason for doing so.
In addition to the good answers already posted, I'll add that the debugger automatically displays the values of properties when you inspect an instance of a class. Do you really want to be debugging your code and have database fetches happening in the debugger every time you inspect your class? Be nice to the future maintainers of your code and don't do that.
Also, this question is extensively discussed in the Framework Design Guidelines; consider picking up a copy.
A db read in a property accessor would be fine; that's actually the whole point of lazy-loading. I think the most important thing would be to document it well, so that users of the class understand that there might be a performance hit when accessing that property.
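A typical lazy-load shape, as a sketch (names made up): the first access hits the database, and subsequent accesses return the cached value.

public class Order { }

public class Customer
{
    private readonly int id;
    private Order[] orders; // null until first requested

    public Customer(int id) { this.id = id; }

    public Order[] Orders
    {
        get
        {
            if (orders == null)
            {
                // Worth documenting: the first read may hit the database.
                orders = LoadOrdersFromDatabase(id);
            }
            return orders;
        }
    }

    private static Order[] LoadOrdersFromDatabase(int id)
    {
        // ... db query elided ...
        return new Order[0];
    }
}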
You can do whatever you want, but you should keep the consumers of your API in mind. Accessors and mutators (getters and setters) are expected to be very light weight. With that expectation, developers consuming your API might make frequent and chatty calls to these properties. If you are consuming external resources in your implementation, there might be an unexpected bottleneck.
For consistency sake, it's good to stick with convention for public APIs. If your implementations will be exclusively private, then there's probably no harm (other than an inconsistent approach to solving problems privately versus publicly).
It is just "good practice" not to make property accessors take a long time to execute.
That's because properties look like fields to the caller, and hence the caller (a user of your API, that is) usually assumes there is nothing more behind them than just a "return smth;".
If you really need some "action" behind the scenes, consider creating a method for that...
I don't see what the problem is with that, as long as you provide XML documentation so that IntelliSense notifies the object's consumer of what they're getting themselves into.
I think this is one of those situations where there is no one right answer. My motto is "Saying always is almost always wrong." You should do what makes the most sense in any given situation without regard to broad generalizations.
A database access in a property getter is fine, but try to limit the number of times the database is hit by caching the value.
There are many times that people use properties in loops without thinking about the performance, so you have to anticipate this use. Programmers don't always store the value of a property when they are going to use it many times.
Cache the value returned from the database in a private variable, if it is feasible for this piece of data. This way the accesses are usually very quick.
This isn't directly related to your question, but have you considered going with a load once approach in combination with a refresh parameter?
class Example
{
    private bool userNameLoaded = false;
    private string userName = "";

    public string UserName(bool refresh)
    {
        userNameLoaded = !refresh;
        return UserName();
    }

    public string UserName()
    {
        if (!userNameLoaded)
        {
            // userName = SomeDBMethod();
            userNameLoaded = true;
        }
        return userName;
    }
}