I'm wondering whether it's insane to (almost) always use custom data types in C# rather than relying on built-in types such as System.Int32 and System.String.
For instance, to represent a person's first name, the idea is to use a data type called PersonFirstName rather than System.String (of course, the PersonFirstName data type would have to contain a System.String). Another example is to have a PersonID class which represents the database identifier for a person, rather than to have a System.Int32.
There would be some benefits here:
Today, if a function takes an int as a parameter, it's easy to pass in the ID of a Company object rather than the ID of a Person object, because both are of type int. If the function took a CompanyID, I would get a compilation error if I tried to pass in a PersonID.
If I want to change the database column data type from int to uniqueidentifier for a Person, I would only have to make the change in the PersonID class. Today, I would have to make changes in all the places which take an int and are supposed to represent a person's ID.
It may be easier to implement validation in the right places. " " is never a correct first name, which PersonFirstName can take care of.
Yes, I would have to write more constructors. I could implement implicit conversion operators in these types to make them easy to work with, though.
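To make the idea concrete, here is a minimal sketch of what such a wrapper might look like (the PersonId name, the validation rule, and the explicit conversion are illustrative choices, not an established pattern):

using System;

public struct PersonId
{
    private readonly int value;

    public PersonId(int value)
    {
        // Illustrative assumption: database identities are positive integers.
        if (value <= 0)
            throw new ArgumentOutOfRangeException(nameof(value));
        this.value = value;
    }

    // An explicit (rather than implicit) conversion back to int keeps
    // accidental mixing of PersonId and CompanyId visible at call sites.
    public static explicit operator int(PersonId id)
    {
        return id.value;
    }

    public override string ToString()
    {
        return value.ToString();
    }
}

A method declared as void Load(PersonId id) will then reject a CompanyID at compile time, which is exactly the first benefit described above.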
Is this madness?
Yes, utter madness - to sum up your idea, and to paraphrase Blackadder:
It's mad! It's mad. It's madder than Mad Jack McMad, the winner of this year's Mr Madman competition
I don't think that's madness. I think using strongly typed objects in your business logic is a very good thing.
No, you're not getting any real benefit from that. For some things it makes sense, perhaps an Email class or maybe, maybe an ID class. However, having a "PersonID" or "ClientID" class seems to go too far. You could have a "typedef" or alias or whatever, but I would not go too far with this in most circumstances. You can go overboard very quickly and end up with a lot of work for no benefit.
Yes... it is! You will lose more than you gain.
Yes, madness AND OVERKILL...
It sounds like a maintenance nightmare to me. What would the CompanyID constructor take? An integer? Sooner or later you are going to have to use native types, whether you like it or not.
So what I see here at first glance is a question within a question. Basically:
How do I mitigate complexity and change in my code base?
I would say that you need to look at the problem you are trying to solve and first see what the best solution is going to be. If you are dealing with something that is potentially going to be pervasive throughout your code base, then you might want to see if you are violating SOLID design principles. Chances are that if you have one type that is being used in a lot of different places, your design is way too coupled and you have a poor separation of concerns.
On the other hand, if you know that this type is going to be used in a lot of places, and it also happens to be very volatile (change is certain), then the approach you mention above is probably the right way to go.
Ask yourself "What problem am I trying to solve?" and then choose a solution based on that answer. Don't start with the answer.
As extracted from Code Complete by Steve McConnell:
The object-oriented design would ask, "Should an ID be treated as an object?" Depending on the project's coding standards, a "Yes" answer might mean that the programmer has to write a constructor, comment it all, and place it under configuration control. Most programmers would decide, "No, it isn't worth creating a whole class just for an ID. I'll just use ints."
Note what just happened. A useful design alternative, that of simply hiding the ID's data type, was not even considered...
For me, this passage is a great answer to your question. I'm all for it.
It seems like a good design decision that the System.Object class, and hence all classes, in .NET provide a ToString() method which, unsurprisingly, returns a string representation of the object. Additionally in C# this method is implemented for native types so that they integrate nicely with the type system.
This often comes in handy when user interaction is required. For example, objects can directly be held in GUI widgets like lists and are "automatically" displayed as text.
What is the rationale in the language design to not provide a similarly general object.FromString(string) method?
Other questions and their answers discuss possible objections, but I find them not convincing.
The parse could fail, while a conversion to string is always possible.
Well, that does not keep Parse() methods from existing, does it? If exception handling is considered an undesirable design, one could still define a TryParse() method whose standard implementation for System.Object simply returns false, but which is overridden for concrete types where it makes sense (e.g. the types where this method exists today anyway).
Alternatively, at a minimum it would be nice to have an IParseable interface which declares a ParseMe() or TryParse() method, along the lines of ICloneable.
Comment by Tim Schmelter ("Roll your own"): That works, of course. But I cannot write general code for native types or, say, IPAddress if I must parse the values; instead I have to resort to type introspection, or write wrappers which implement a self-defined interface, which is either maintenance-unfriendly or tedious and error-prone.
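To make this objection concrete, here is a minimal sketch of the kind of self-defined interface and wrapper meant above (IParseable and IPAddressParser are hypothetical names, not BCL types):

using System.Net;

// Hypothetical interface; nothing like this exists in the BCL under this name.
public interface IParseable<T>
{
    bool TryParse(string input, out T result);
}

// One hand-written wrapper per type to parse: exactly the tedium complained about.
public class IPAddressParser : IParseable<IPAddress>
{
    public bool TryParse(string input, out IPAddress result)
    {
        return IPAddress.TryParse(input, out result);
    }
}

(As an aside, .NET 7 later added a System.IParsable<TSelf> interface along these lines, once C# 11 introduced static abstract interface members.)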
Comment by Damien: An interface can only declare non-static functions for reasons discussed here by Eric Lippert. This is a very valid objection. A static TryParse() method cannot be specified in an interface. A virtual ParseMe(string) method though needs a dummy object, which is a kludge at best and impossible at worst (with RAII). I almost suspect that this is the main reason such an interface doesn't exist. Instead there is the elaborate type conversion framework, one of the alternatives mentioned as solutions to the "static interface" oxymoron.
But even given the objections listed, the absence of a general parsing facility in the type system or language appears to me as an awkward asymmetry, given that a general ToString() method exists and is extremely useful.
Was that ever discussed during language/CLR design?
It seems like a good design decision that the System.Object class, and hence all classes, in .NET provide a ToString() method
Maybe to you. It's always seemed like a really bad idea to me.
which, unsurprisingly, returns a string representation of the object.
Does it though? For the vast majority of types, ToString returns the name of the type. How is that a string representation of the object?
No, ToString was a bad design in the first place. It has no clear contract. There's no clear guidance on what its semantics should be, aside from having no side effects and producing a string.
Since ToString has no clear contract, there is practically nothing you can safely use it for except for debugger output. I mean really, think about it: when was the last time you called ToString on object in production code? I never have.
The better design therefore would have been methods static string ToString<T>(T) and static string ToString(object) on the Debug class. Those could have then produced "null" if the object is null, or done some reflection on T to determine if there is a debugger visualizer for that object, and so on.
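A rough sketch of that alternative design (the class and helpers here are hypothetical; the real System.Diagnostics.Debug has no such methods):

using System;

public static class DebugText // hypothetical helper class
{
    public static string ToString<T>(T item)
    {
        // A real implementation could also reflect on T to find a
        // debugger visualizer, as suggested above.
        return item == null ? "null" : item.ToString();
    }

    public static string ToString(object item)
    {
        return item == null ? "null" : item.ToString();
    }
}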
So now let's consider the merits of your actual proposal, which is a general requirement that all objects be deserializable from string. Note that first, obviously this is not the inverse operation of ToString. The vast majority of implementations of ToString do not produce anything that you could use even in theory to reconstitute the object.
So is your proposal that ToString and FromString be inverses? That then requires that every object not just be "represented" as a string, but that it actually be round trip serializable to string.
Let's think of an example. I have an object representing a database table. Does ToString on that table now serialize the entire contents of the table? Does FromString deserialize it? Suppose the object is actually a wrapper around a connection that fetches the table on demand; what do we serialize and deserialize then? If the connection needs my password, does it put my password into the string?
Suppose I have an object that refers to another object, such that I cannot deserialize the first object without also having the second in hand. Is serialization recursive across objects? What about objects where the graph of references contains loops; how do we deal with those?
Serialization is difficult, and that's why there are entire libraries devoted to it. Making it a requirement that all types be serializable and deserializable is onerous.
Even supposing that we wanted to do so, why string of all things? Strings are a terrible serialization data type. They can't easily hold binary data, they have to be entirely present in memory at once, they can't be more than a billion characters tops, they have no structure to them, and so on. What you really want for serialization is a structured binary storage system.
But even given the objections listed, the absence of a general parsing facility in the type system or language appears to me as an awkward asymmetry, given that a general ToString() method exists and is extremely useful.
Those are two completely different things that have nothing to do with each other. One is a super hard problem best solved by libraries devoted to it, and the other is a trivial little debugging aid with no specification constraining its output.
Was that ever discussed during language/CLR design?
Was ToString ever discussed? Obviously it was; it got implemented. Was a generalized serialization library ever discussed? Obviously it was; it got implemented. I'm not sure what you're getting at here.
Why is there no inverse to object.ToString()?
Because object should hold the bare minimum functionality required by every object. Comparing equality and converting to string (for a lot of reasons) are two of them. Converting from a string isn't. The problem is: how should it convert? Using JSON? Binary? XML? Something else? There isn't one uniform way to convert from a string. Hence, this would unnecessarily bloat the object class.
Alternatively, at a minimum it would be nice to have an IParseable interface
There is: IXmlSerializable for example, or one of the many alternatives.
I've read some things about this, and I even found a similar question, but it didn't really answer this. To me it seems that making something private only makes my life so much harder when I need to find a private variable in a class to use it elsewhere. So what would the problem be if everything was public? Would it somehow slow down the program itself?
You must consider the maintainability of the code. Making all the variables accessible everywhere in your solution is good only if you are the only one on the project and you will be the only one who maintains and uses the code. If someone else enters the project to do completely different things, they will be able to access your methods and variables and set them to unexpected values. You should think in terms of OOP design and design your classes that way.
FYI I don't believe you are supposed to ask discussion-based questions like this on SO... But the simplistic answer is this: don't limit your thinking to the logic of the code. We all know there are ten thousand ways to accomplish the same thing. You can probably rewrite a bunch of your code to avoid encapsulation. However, data encapsulation provides a few benefits when you start working on larger projects or with larger teams that go beyond just writing functional code:
(1) organization by concept: if you're coding a bike you would code a class for the wheel, a class for the frame, a class for the handlebars, etc., and you'd know where to go to resolve an issue, perhaps even after months of time away from the code;
(2) separation of implementation and interface: you can tell the world about your public interface and handle the actual implementation privately, so people don't have to know how things work in your code, they just know that it works, like a black box; and if later you have to change your private implementation you can do so freely as long as the public interface still works;
(3) simplification for humans: remember, humans read your code, so would you like to slam them with every bit of data and logic in your project? That would just make for a bunch of angry programmers.
So that's a gentle introduction to encapsulation.
This comes from the principle that a class should not expose its members directly, but must provide a proxy through which the members are accessed (like getters/setters, or properties in C#).
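For example, a minimal sketch in C# (the Person class and its validation rule are purely illustrative):

using System;

public class Person
{
    private string name; // private field: not directly accessible from outside

    public string Name   // public property: the proxy through which the field is accessed
    {
        get { return name; }
        set
        {
            if (string.IsNullOrWhiteSpace(value))
                throw new ArgumentException("Name must not be blank.");
            name = value;
        }
    }
}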
See this question for more info: Why it is recommended to declare instance variables as private?
Before down-voting let me explain my question. I have a little experience in designing architectures and am trying to progress. Once, when I was fixing a bug, I came to the conclusion that we needed to make a private method public and then use it. That was the fastest way to get my job done and have the bug fixed. I went to my team leader and said so. After I got a grimace from him, it was explained to me that every public method is a very expensive pleasure. I was told that every public method should be supported throughout the lifetime of a project. And much more..
I was wondering. Indeed! Why wasn't it so clear when I was looking at the code? It also wasn't so evident when I designed my own architectures. I remember my thoughts about it:
Ahh, I will leave this method public; who knows, maybe it will come in useful when the system grows.
I was confused, and thought that I made scalable systems, but in fact got tons of garbage in my interfaces.
My question:
How can you tell whether a method is really important and worthy of being public? Are there any counterexamples for checking it? How do you get trained to make the private/public choice without spending hours in the astral plane?
I suggest you read up on YAGNI http://c2.com/cgi/wiki?YouArentGonnaNeedIt
You should write code to suit actual requirements because writing code to suit imagined requirements leads to bloated code which is harder to maintain.
My favourite quote
Perfection is achieved, not when there is nothing more to add, but
when there is nothing left to take away.
-- Antoine de Saint-Exupery French writer (1900 - 1944)
This question needs a deep and thorough discussion of OOP design, but my simple answer is: anything with public visibility can be used by other classes. Hence, if you're not building a method for others to use, do not make it public.
One pitfall of unnecessarily making a private method public is that once other classes use it, it becomes harder for you to refactor or change the method; you have to maintain everything downstream (think of this happening with hundreds of classes).
But nevertheless, maybe this discussion will never end. You should spend more time reading OOP design pattern books; they will give you heaps more ideas.
There are a few questions you can ask yourself about the domain in which the object exists:
Does this member (method, property, etc.) need to be accessed by other objects?
Do other objects have any business accessing this member?
Encapsulation is often referred to as "data hiding" or "hiding members" which I believe leads to a lot of confusion. Inexperienced developers would rightfully ask, "Why would I want to hide anything from the rest of my code? If it's there, I should be able to use it. It's my code after all."
And while I'm not really convinced by the way your team leader worded his response, he has a very good point. When you have too many connection points between your objects, they become more and more tightly coupled and fuse into one big unsupportable mess.
Clearly and strictly maintaining a separation of concerns throughout the architecture can significantly help prevent this. When you design your objects, think in terms of what their public interfaces would look like. What kind of outwardly-visible attributes and functionality would they have? Anything which wouldn't reasonably be expected as part of that functionality shouldn't be public.
For example, consider an object called a Customer. You would reasonably expect some attributes which describe a Customer, such as:
Name
Address
Phone Number
List of orders processed
etc.
You might also expect some functionality available:
Process Payment
Hold all Orders
etc.
Suppose you also have some technical considerations within that Customer. For example, maybe the methods on the Customer object directly access the database via a class-level connection object. Should that connection object be public? Well, in the real world, a customer doesn't have a database connection associated with it. So, clearly, no it should not be public. It's an internal implementation concern which isn't part of the outwardly-visible interface for a Customer.
This is a pretty obvious example, of course, but illustrates the point. Whenever you expose a public member, you add to the outwardly-visible "contract" of functionality for that object. What if you need to replace that object with another one which satisfies the same contract? In the above example, suppose you wanted to create a version of the system which stores data in XML files instead of a database. If other objects outside of the Customer are using its public database connection, that's a problem. You'd have to change a lot more about the overall design than just the internal implementation of the Customer.
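A condensed sketch of that separation (the member names are illustrative only):

using System;
using System.Collections.Generic;
using System.Data;

public class Order { }

public class Customer
{
    // Internal implementation concern: no part of the outwardly-visible contract.
    private readonly IDbConnection connection;

    public Customer(IDbConnection connection)
    {
        this.connection = connection;
    }

    // Attributes you would reasonably expect of a Customer.
    public string Name { get; set; }
    public string Address { get; set; }
    public List<Order> Orders { get; } = new List<Order>();

    // Functionality you would reasonably expect to be available.
    public void ProcessPayment(decimal amount)
    {
        // ...uses the private connection internally...
    }
}

Swapping the database for XML files now only touches the private parts of Customer; the public contract stays intact.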
As a general rule it's usually best to prefer the strictest member visibilities first and open them up as needed. Combine that guideline with an approach of thinking of your objects in terms of what real-world entities they represent and what functionality would be visible on those entities and you should be able to determine the correct course of action for any given situation.
A question was raised in a discussion I had around whether an interface method should return a custom object vs a primitive type.
e.g.
public interface IFoo
{
    bool SomeMethod();
}

vs

public interface IFoo
{
    MyFooObj SomeMethod();
}

Where MyFooObj is:

public class MyFooObj
{
    public bool SomeProp { get; set; }
}
The argument being that you can easily add properties to the object in the future without needing to change the interface contract.
I am unsure what the standard guidelines on this are.
IMHO, changing MyFooObj is the same as changing or adding methods to the IFoo interface, so no, I don't think it's a good idea to add just another abstraction. Remember YAGNI.
My standard response is - YAGNI.
You can always change things if it turns out that way, in particular if you control the full source of the application and how the interface is used.
Wrapping a boolean just in order to forecast the future is only adding complication and additional layers of abstraction when they are not currently needed.
If you are using DDD and specific modelling techniques in your codebase, it can make sense to have such aliases for booleans, if they are meaningful in your domain (but I can't see this being the case for a single boolean value).
I don't see the point of encapsulating primitive types in a custom object.
If you change the definition of this custom object, then you actually change the contract because the function doesn't return the same thing.
I think it's again an over-engineered "pattern".
There are no general guidelines regarding this.
As you pointed out, if you have semantics around the return type that you strongly believe may change or may need to be extended in the future, it may be better to return the complex type.
But the reality is that in most circumstances it is better to keep things simple and return the primitive type.
That depends somewhat on what you like. In my opinion, in your sample case, I would stick with the simple bool in the interface definition for these reasons:
it is the simplest possibility to read
no one looks for methods that aren't available
IMHO, an object makes sense only when a certain amount of complexity/grouping is required as a result.
If it's not required to begin with, you should not wrap it. Changing what is returned inside the object is simply the same as changing the interface, which breaks rule number one of programming with interfaces.
It's right up there with designing for extension: YAGNI (you ain't gonna need it).
As a side note, I got told off for stuff like this early in my career.
If you ever need to return something more than a boolean, it is extremely likely that you are going to modify other parts of the interface as well. Do not make things more complex than they need to be: simplicity is prerequisite of reliability.
In addition to the other answers, adding a new field to a custom class is technically still a potential breaking change to the interface's consumers. Link
Just thought I'd mention that if MyFooObj is in an assembly which is strong-named and you update it, its version gets updated, and old clients will immediately break (e.g. InvalidCastException) due to a version mismatch (it won't attempt a partial bind because of the strong naming) unless they recompile with the new version. You've still changed the interface contract. So it's best to keep things simple, return the primitive type, and declare your change of contract more explicitly.
I do not want to ask candidates questions, but rather give them several problems to resolve. The reason for this is that I've seen people be excellent with theory, but when confronted by a real-world C# issue, they just couldn't hack it.
These C# problems should be simple enough that they won't take more than 1-20 minutes to resolve, yet complicated enough that I'd be able to weed out candidates who can't code.
Right now, I typically ask the applicants to reverse a string and remove duplicates from a List. This alone weeds out a large number of people.
Any other examples I could use?
Edit: I should have mentioned that this is for a standard C# gig, where they'll be writing business code rather than finding the most optimal way to implement a linked list.
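For reference, here is a minimal sketch of the kind of answers the two warm-up problems above are looking for (one of many acceptable variants):

using System;
using System.Collections.Generic;
using System.Linq;

class WarmUps
{
    static string Reverse(string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }

    static List<int> RemoveDuplicates(List<int> items)
    {
        return items.Distinct().ToList(); // keeps the first occurrence of each value
    }

    static void Main()
    {
        Console.WriteLine(Reverse("hello")); // olleh
        Console.WriteLine(string.Join(",", RemoveDuplicates(new List<int> { 1, 2, 2, 3, 1 }))); // 1,2,3
    }
}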
I like picking simple problems that I actually had to solve at some point; it doesn't get more relevant to the job than that.
When I worked on VBScript I'd ask college candidates how to write a simplified version of DateDiff, since doing so was what I did on my first real day of work at Microsoft. More advanced candidates I would ask how to build a device which tracks the relationship between 32-bit handles and an associated 64-bit pointer, which again I actually had to do when working on VBScript.
More recently I tend to ask questions about tree manipulation algorithms, since the compiler is all about tree manipulation. Or about how to codegen new operators using monads, since that's how LINQ works.
My point is not that you should use questions in these areas, my point is that surely you must have had problems that you had to solve in your day-to-day work. Ask the candidates about those problems -- then you'll learn how they solve a realistic problem, and they'll learn what sorts of problems they'd be solving if they came to work with you.
Don't ask for knowledge of class libraries or obscure corners of the language (unsafe, dynamic, ...); smart people can pick these up or look them up.
I would ask them to design a class hierarchy to represent something real-world (vehicles, animals, ...). This usually flushes out the people who don't get objects. Make them do it with interfaces too. Also make them reverse a string; no harm in oldies but goldies.
I agree with you, it is surprising how many people claim to be experienced and you find out that all that they did was read the box…
I don’t know if testing for C# is as valuable as it first seems… sure you could ask them to describe an example of when they needed to use inheritance, or why casting might have a performance problem, etc. But these are easy to study for. You would be surprised at how many interviewees give the example using “car” or “color” when giving their real world example of inheritance…. Guess they are in a book somewhere.
When looking at this problem it helps me when I compare experience in development to learning Spanish. A short time into the class everyone is conjugating verbs and can pass a test on this… but nobody speaks Spanish yet. You want the guy that claims to speak Spanish and can actually do it.
So I like to be more specific with the other technologies that will tell me if they have traveled the well-worn path of development. If they say they are an ASP.Net developer I ask them simple questions, but ones that are on the path
EXAMPLES: Give me an example of where the connection string could live. If you need to pass an ID from one page to another, what are your options? If a page takes 5 minutes to load, tell me how you would go about troubleshooting it. If I had a web page that had a single button on it, how would I center that button? Tell me the difference between storing variables in the viewstate versus session state.
You don’t have to know everything, but eighty percent of the people interviewing for a senior level position will get 10% of these types of questions right. (And on 70% of the phone interviews you will hear them Googling for the answers – good thing these aren’t the types of questions you can easily Google for.)
SQL Server is about the same. They say they would rate themselves an 8 or 9 in SQL Server development, but then get 10% of the questions right. The questions again are to see if you have been on the well-worn path.
EXAMPLES: If you had a table of customers and a table of orders, how would you find the customers that had no orders? What is a clustered index? If I had a table of developers and a table of projects, how would I set it up so that projects could have multiple developers on it and developers could be on multiple projects?
How could you develop in SQL Server for “years” and not have hit these concepts? A high percentage of candidates get almost none of these answers right!! (I guess the SQL Server box isn’t as informative.)
So if you say you are a senior-level guy and you can say "Soy un desarrollador de software" (I am a software developer), but can't say "He hecho eso antes" (I have done that before), I don't think you are the senior-level person you are claiming to be.
Now this tells you if they have been on the well-worn path, but not if they are smart and have good problem-solving skills. Having gone through a ton of these types of interviews, I can tell you that by the time the process is done you will be satisfied with having enough information to have a strong opinion on both of these issues. You might also see that by then giving them a problem set to solve is unnecessary.
Show them a small section of code or architecture diagram from one of your own projects and ask them to suggest how they would refactor it. Even if you don't wind up hiring them, you might get some interesting suggestions on ways to improve your code.
Building on Eric's and other answers here, but answering as someone who has so far only ever been an interviewee: what I would like in an interview is a kind of pair-programming 'test', where you sit down together facing the screen and talk through a real-world problem.
I think there would be many advantages:
For the interviewee, being in front of a screen instead of facing the interviewer makes it easier to think about the problem rather than the interview.
For the interviewer, being with the interviewee while they look through the code and ask questions about the problem space would give a much greater insight into how the interviewee thinks, how they approach problems, and how they communicate and interact with others.
I would expect that it's more important and interesting to see a candidate thinking around the edges of your real-world problem, even if they don't completely solve it, than to have them get 10 out of 10 on some algorithmic test.
Something mildly algorithmic.
Write a method that returns true if a string is a palindrome, and false otherwise (one possible solution is sketched after this list).
Re-implement the String.Substring(int, int) method.
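One possible palindrome solution, for illustration (any equivalent approach is fine):

public static class StringChecks
{
    // Compares characters pairwise from both ends, meeting in the middle.
    public static bool IsPalindrome(string s)
    {
        for (int i = 0, j = s.Length - 1; i < j; i++, j--)
        {
            if (s[i] != s[j])
                return false;
        }
        return true;
    }
}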
Something about object-oriented design too.
Design a checkers game (i.e., define the classes and some of the methods).
One question I was asked, and subsequently ask interviewees, is "Describe how you would make this phone into an application". Have them describe the classes, their properties, methods, interfaces, etc. Then question them on why they chose to implement them in that specific way. It gives you a good idea of whether they understand how to code, and gives you some insight into how they approach and solve problems.
Also, if you offer a suggestion of how they could have implemented it a different way, it may show you whether they are open to new ideas, criticism, or if they are a team player or not.
Fizz Buzz
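For completeness, one classic FizzBuzz solution in C# (many variants are equally valid):

using System;

class FizzBuzzExample
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");      // divisible by both 3 and 5
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}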