I want to learn how to make a layered architecture correctly, and for that I need some advice.
As an example project, I started writing a news website, and I split it into layers. Is this the best way to do it? I'll use Angular in the web project.
And one more question: should I make a separate layer for dependency injection?
I would not call it NewsWebSite.BLL because it sounds like the BLL can only be used for web applications.
I would have it like this. If the company name is Contoso:
// This is where you can put all your common code.
// I do not mean cross-cutting concerns here. By common I mean
// constants or enums that are shared by all DLLs.
Contoso
Contoso.Business
Contoso.Api
Contoso.WebApp
Contoso.Data
// The names of the test projects are exactly the same as the names of
// the assemblies they test, but with the word "Tests" at the end
Contoso.Business.Tests
Contoso.Api.Tests
Furthermore, note the Pascal casing naming convention I am using. This way I do not have to deal with Contoso.BLL.SomeClass.
Also, my Contoso.Business.Tests will reside in a namespace that matches my Contoso.Business namespace. Here is a class in Contoso.Business:
namespace Contoso.Business
{
public class Foo
{
}
}
As for the test for that class, I would not put it into a Contoso.Business.Tests namespace (I am not talking about the DLL here). I would write the test class for Foo like this:
// See the namespace here; I am not using Contoso.Business.Tests
namespace Contoso.Business
{
// The name of the class is identical to the name of the class being tested, but with the word "Tests" appended
public class FooTests
{
}
}
That way they share the same namespace and I can relate them easily.
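For illustration, here is a minimal sketch of what a test inside FooTests might look like (assuming xUnit; the test itself is hypothetical):
using Xunit;

namespace Contoso.Business
{
    public class FooTests
    {
        [Fact]
        public void Foo_CanBeConstructed()
        {
            // Foo and FooTests share the Contoso.Business namespace,
            // so no extra using directive is needed for Foo itself.
            var foo = new Foo();
            Assert.NotNull(foo);
        }
    }
}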
I often use that architectural structure in the same situations, that is, a Web API plus Angular.
But it's important that you consider all the needs of your project, including its size. For example, if you don't really need a layer to manage business logic, a BLL may simply not be relevant.
I am trying to figure out the best way to change an existing class.
So the class is called ExcelReport, and it has one method, Create(data, headings). It is live and used in many places. Recently I wanted to change the method so that I can format columns in Excel:
Create(data, headings, columnformats)
So as not to upset my existing programs, the best I could come up with was to add another method, Create2(data, headings, columnformats), to the class.
I got a lot of suggestions saying I should modify the existing class with an overloaded method, which I did. But does this not break the Open/Closed Principle, as my existing class was in production?
Should I have created a new class, ExcelReport2 (and an interface), with the new, improved method, and passed this into my new program using dependency injection?
OCP
In object-oriented programming, the open–closed principle states "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification";[1] that is, such an entity can allow its behaviour to be extended without modifying its source code.
Your solution
You will most likely want to add more options for this later on.
And since you asked for an open/closed principle answer, we need to take that into account (open for extension, closed for modification).
A more robust alternative is to create a new overload:
void Create(CreationOptions options);
Looks trivial, right? The thing is that any subclass can introduce its own options, like MyPinkThemedFormattedCellsCreationOptions.
So your new options class would look like this as of now:
public class CreationOptions
{
public SomeType Data { get; set; }
public SomeType Headings { get; set; }
public SomeType[] ColumnFormats { get; set; }
}
That's open for extension and closed for modification: new features don't touch the existing API, since now you only have to create subclasses of CreationOptions for them.
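To make that concrete, here is a hedged sketch of how a new feature could arrive purely by extension (the class and property names are made up):
// A subclass adds a new capability without touching CreationOptions
// or the existing Create(CreationOptions) overload.
public class PinkThemedFormattedCellsCreationOptions : CreationOptions
{
    public string ThemeColor { get; set; } = "Pink";
}
// Existing callers are unaffected; new callers simply pass the richer options:
// report.Create(new PinkThemedFormattedCellsCreationOptions { ThemeColor = "Magenta" });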
I like the DI feature of ASP.NET Core, but am finding that some of my classes end up with huge constructor parameter signatures...
public class Foo {
private IBar1 _bar1;
private IBar2 _bar2;
// lots more here...
public Foo(IBar1 bar1, IBar2 bar2, lots more here...) {
_bar1 = bar1;
_bar2 = bar2;
// ...
}
public void DoSomething() {
// Use _bar1
}
}
In case this looks like a code smell, it's worth pointing out that any controller is going to use AutoMapper, an email service and 2 or 3 managers related to ASP.NET Identity, so I have 4 or 5 dependencies before I start injecting a single repository. Even if I only use 2 repositories, I can end up with 6 or 7 dependencies without actually violating any SOLID principles.
I was wondering about using a parameter object instead. I could create a class that has a public property for every injected dependency in my application, takes a constructor parameter for each one, and then just inject this into each class instead of all the individual Bars...
public class Foo {
private IAllBars _allBars;
public Foo(IAllBars allBars) {
_allBars = allBars;
}
public void DoSomething() {
// Use _allBars.Bar1
}
}
The only disadvantage I can see is that every class would have every dependency injected into it via the parameter object. In theory, this sounds like a bad idea, but I can't find any evidence that it would cause any problems.
Does anyone have any comments? Am I letting myself in for potential trouble by trying to make my constructor code neater?
What you're describing sounds like the service locator pattern, and while it seems tempting to simplify your code by eliminating all those constructor parameters, it usually ends up hurting maintainability in the long run. Check out Mark Seemann's post Service Locator violates encapsulation for more details about why it should be avoided.
Generally, when you find yourself with a class with dozens of constructor parameters, it means that class might have too many responsibilities. Can it be decomposed into a number of smaller classes with narrower goals? Rather than introducing a "catch-all" class that knows about everything, maybe there's a complex part of your application that you can abstract behind a facade.
Sometimes, you do end up with large coordinator classes that have many dependencies and that's okay in certain circumstances. However, if you have many of these it's usually a design smell.
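As a rough sketch of the facade idea (IAccountFacade, IEmailService, and the method names here are hypothetical, not part of ASP.NET Core):
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;

// Hypothetical abstraction over your mail sender.
public interface IEmailService
{
    Task SendWelcomeAsync(string email);
}

// Bundle the Identity-related dependencies a typical controller needs,
// so the controller takes one constructor parameter instead of several.
public interface IAccountFacade
{
    Task RegisterAsync(string email, string password);
}

public class AccountFacade : IAccountFacade
{
    private readonly UserManager<IdentityUser> _users;
    private readonly IEmailService _email;

    public AccountFacade(UserManager<IdentityUser> users, IEmailService email)
    {
        _users = users;
        _email = email;
    }

    public async Task RegisterAsync(string email, string password)
    {
        // Create the user, then send a welcome mail; the controller no
        // longer needs to know about either dependency directly.
        await _users.CreateAsync(new IdentityUser(email), password);
        await _email.SendWelcomeAsync(email);
    }
}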
I made a static partial class Utils and put each method in a separate file in the Utils folder.
But then I looked up partial classes, and everywhere it says I shouldn't use them except for separating auto-generated code.
So, should I merge everything back together, or is it OK to keep it split across the folder?
First of all, a Utils class with many methods tends to become a huge pile of largely unrelated code, because nearly all "helper" methods get placed there. By dividing them into single files you fight a symptom, not the root cause: you merely transform the pile of code into a pile of files.
You should cluster the methods into topics and divide the Utils class into meaningful units, as sketched below. Keep an eye on the Single Responsibility Principle.
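For example, a hedged sketch of that clustering (the helper methods are made-up examples):
using System;

// Instead of one catch-all Utils, each class owns a single topic.
public static class StringUtils
{
    public static string Truncate(string value, int maxLength) =>
        value.Length <= maxLength ? value : value.Substring(0, maxLength);
}

public static class DateUtils
{
    public static bool IsWeekend(DateTime date) =>
        date.DayOfWeek == DayOfWeek.Saturday || date.DayOfWeek == DayOfWeek.Sunday;
}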
The idea of having one class per file makes a lot of sense as it is a self-contained unit.
If you separate every method into its own file, you are setting yourself up for an organisational nightmare: methods within one class tend to be related, and instead of having a quick overview in a single file and being able to code easily, you have now spread yourself over many files.
You're doing nothing but shooting yourself in the foot.
EDIT: If you're worried about source control (your question doesn't say anything about it, but I thought I'd add it): today's source-control systems are very good at merging, even when people are working on the same file. There may be issues if two developers are working in the same locality (e.g. the same function) and a manual merge is required, but in a well-organised team this is a rare occurrence.
From MSDN:
There are several situations when splitting a class definition is desirable:
- When working on large projects, spreading a class over separate files enables multiple programmers to work on it at the same time.
- When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when it creates Windows Forms, Web service wrapper code, and so on. You can create code that uses these classes without having to modify the file created by Visual Studio.
To split a class definition, use the partial keyword modifier, as shown here:
public partial class Employee
{
public void DoWork()
{
}
}
public partial class Employee
{
public void GoToLunch()
{
}
}
With that said, I rarely see any reason why one would want to use partial. According to the SRP:
The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.
Now one may argue that you're still using the same class, since you're using partial. However, it is an indication that you're doing too much in one class. Consider moving the methods into separate classes instead. Personally, I think that splitting a util into several partial classes just moves the problem from a big class to a problem of several files. I believe you will get more maintainable code if you follow the Single Responsibility Principle; if you feel the need to split a class into several partial classes, you're probably doing too much and not following SRP.
Is there any preference on either appending DTO or Entity to a class name?
Is there any standard around this?
One class is used by the ORM (Entity Framework) and the other class is used for serialization.
The reason for this is to avoid duplicating all the fields, as the Entity Framework class is a wrapper around the DTO class (most, but not all, properties).
The DTO class is in a shared library, and decoupled from EF.
E.g. Which of these is the most common/standard approach?
// 1.
MyNamespace.Entities.MyClass
MyNamespace.Models.MyClassDto
// 2.
MyNamespace.Entities.MyClassEntity
MyNamespace.Models.MyClass
// 3.
MyNamespace.Entities.MyClassEntity
MyNamespace.Models.MyClassDto
In my personal experience, your third example is the only implementation I have worked with, and it is the one I would argue for: the intent of the object you are working with is always clear, whereas with the other two it only becomes clear when looking at both objects together.
That being said, as long as your team comes to an agreement on which to use, any of them would work.
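To illustrate option 3 (Article is a made-up example), the suffix keeps the intent clear even when you see only one of the two types:
namespace MyNamespace.Entities
{
    public class ArticleEntity
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }
}

namespace MyNamespace.Models
{
    public class ArticleDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }
}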
In my opinion, you typically don't want to put implementation details into class names for similar reasons to why you don't want to use Hungarian Notation.
If there's a bit of code that needs to work with both types and differentiate between them, another option is to use aliased using directives, like this:
using entities = MyNamespace.Entities;
using dto = MyNamespace.Models;
//in code
var myClassEntity = new entities.MyClass();
var myClassDto = new dto.MyClass();
//work with both
My assumption is that the code that needs to work with both types is limited to an isolated library, and that client code typically works with one, not both types.
I have an interface:
public interface IMyObject
{
}
I have an abstract class:
public abstract class MyObject : IMyObject
{
}
And I have a class:
public class MyExtendedObject : MyObject
{
}
There are many interfaces, abstract classes, and concrete classes like this in my project. I wonder what the best way is to organize the code from a namespace (folders in the project) point of view. Should I put all related types in the same folder, or should I create, for example, a Base namespace for abstract classes, an Interfaces namespace for interfaces, and another namespace for the extended objects?
The best way is subjective and project-dependent.
As a suggestion, I would say:
move the interfaces and abstract classes into a separate folder, so they are separated from the concrete implementation classes:
+ Abstracts
-> IMyObject.cs
-> MyObject.cs
+ Concrete
-> MyExtendedObject.cs
Robert C. Martin (one of the founding fathers of Agile and now of the Software Craftsmanship movement) has a whole talk on this that is really worth watching.
It's based on Ivar Jacobson's Object-Oriented Software Engineering: A Use Case Driven Approach.
To summarize it in a few sentences: your project structure should reflect what it models, not the technology or the particular language constructs you use. In the case of your abstract/interface/concrete classes, this means that a structure where you put all your abstract classes in one folder/namespace/assembly and your concrete classes in another is not the way to go (even though it is very common to find projects that take this approach).
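To illustrate (with hypothetical feature names), a structure that reflects what the project models might look like this, in contrast to the Abstracts/Concrete split above:
+ Orders
-> IOrderRepository.cs
-> OrderService.cs
+ Shipping
-> IShippingCalculator.cs
-> ShippingCalculator.cs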