Refactoring namespaces or packages in C# or Java really annoys me. If many of your classes reference a class in a common package used by many independent projects, and you decide to move that package so it becomes a child of its current parent package, you have to modify all clients, just because you cannot use a generic import like this:
import mypackage.*
which would allow refactoring without impacting clients.
So how do you manage refactoring when the impact of such a small change can be so big?
And what if it's client code that's not under my control? Am I stuck?
Use an IDE with support for refactoring. If you move a Java file in Eclipse, all references are updated. The same goes for renames, package name changes, etc. Very handy.
It sounds like you're asking about packages that are compiled and deployed to other projects as, for instance, a jar file. This is one reason why getting your API as correct as possible is so important.
How to Design a Good API and Why it Matters
I think that you could deprecate the existing structure and modify each class to be a wrapper or facade over the new refactored class. This might give you the flexibility to keep improving the new structure while slowly migrating the projects that use the old code.
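A rough sketch of what that could look like in C# (all of the names here are hypothetical): the class stays in its old namespace as an obsolete shell that just forwards to the relocated one.

```csharp
using System;

// Old location: kept only as a deprecated facade for existing clients.
namespace MyCompany.Common
{
    [Obsolete("Moved to MyCompany.Core.Common.TextHelper; this wrapper will be removed in a future release.")]
    public static class TextHelper
    {
        // Forward every member to the relocated class.
        public static string Normalize(string s)
        {
            return MyCompany.Core.Common.TextHelper.Normalize(s);
        }
    }
}

// New location after the refactoring.
namespace MyCompany.Core.Common
{
    public static class TextHelper
    {
        public static string Normalize(string s)
        {
            return s == null ? string.Empty : s.Trim();
        }
    }
}
```

Existing clients keep compiling against the old namespace but get a compiler warning pointing at the new location, so they can migrate at their own pace.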
Imagine someone doing an import like import com.*. If it worked the way you want it to, it would pull in anything and everything under a com package, which means zillions of classes would be imported, and then you would complain about why it is so slow and why it requires so much memory...
In your case, if you use an IDE, it will take care of most of the work and will be very easy, but you will still need to deploy new executables to your clients if your application architecture requires it.
Hello everyone
Currently I have the following reference structure:
DSFinalProject has a reference to the "DAL" project and to the "DataStructure" project.
DataStructure has a reference to the DAL project as well…
Now I want DSFinalProject not to have a reference to the DAL layer, but still be able to use the interfaces from that project.
Is there any way to "tunnel" the interfaces that are in the DAL project to DSFinalProject without actually making references between them?
Maybe using the DataStructure project? Or anything else?
Thanks in advance for any help :)
The easiest way is to put them in DataStructure, which isn't too bad since anything that references the interfaces will need to reference DataStructure as well.
My vote would be to put them there until you run into a scenario when you need to have the interfaces in a separate assembly.
I don't know of any way to reference interfaces (or anything else) inside the DAL project from DSFinalProject without having a reference to the project (or assembly).
You can move them to another project if you think it makes the dependencies cleaner. If you put the interfaces in the DataStructure project, you'd run into a circular reference: it would need DAL, and DAL would need it.
I don't believe that there is any way to do what you ask. If you think about what happens when you serialise objects, you still need the assembly to provide the low-level structure of how the fields are laid out inside the stream of data. It needs the code behind the interface to say that the first 4 bytes are a double, etc.
So the only way to do this is to move your interfaces into a new interfaces.dll which can be referenced by everything. You will see this pattern repeated in many examples, including the Enterprise Library.
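A minimal sketch of that pattern, with hypothetical type names: the contract lives in interfaces.dll, the implementation stays in DAL, and DSFinalProject codes only against the interface.

```csharp
// interfaces.dll - referenced by DAL, DataStructure and DSFinalProject alike.
namespace Interfaces
{
    public interface ICustomerRepository
    {
        string GetCustomerName(int id);
    }
}

// DAL project - implements the contract. Only code that constructs the
// concrete type needs a reference to DAL; everyone else sees the interface.
namespace DAL
{
    public class CustomerRepository : Interfaces.ICustomerRepository
    {
        public string GetCustomerName(int id)
        {
            return "customer " + id; // real data access would go here
        }
    }
}
```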
However...
you are making a classic mistake. Why are you splitting your code into so many projects? Projects really should be thought of as the run-time packaging of our code, not a design-time segregation mechanism. By splitting into so many assemblies, you do three things.
You slow your build system down, as the compiler does more work fetching the other assemblies.
You slow down Visual Studio as it works harder to load up all the projects and keep the references between them. I once worked on a solution with 140 projects that took 15 minutes just to open (but I always got my morning coffee).
You slow down the run-time performance, as .NET has to search around for another 4k DLL (that's the minimum, even for just one line of code). Try looking at the Fusion logs or use SysMon to see just how much work is involved in this simple operation.
Take a look at this example, Hints on how to optimise code, to see what's going to happen as your solutions get more complicated.
Instead of splitting it like this, use namespaces. You will still have the separation, but instead of having to juggle so many references, you now have control through the using statements inside your classes. You will easily see if you are using a DAL reference in a class designed to sit in the DSFinalProject tier. You can just create a folder under the project and add your classes there instead. Get rid of all the projects and still have a properly tiered system.
As your solution grows, wait until you have at least two executables before you start introducing projects, and then consider the run-time implications. If you are always going to load two assemblies together, merge them into one (I've seen some open-source projects these days that use ILMerge to merge in third-party libraries too).
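For illustration, here is how the same tiers might live as folders and namespaces inside a single project (the names are made up):

```csharp
// One project, tiered by namespace instead of by assembly.
namespace DSFinalProject.DAL
{
    public class OrderRepository { /* data access code */ }
}

namespace DSFinalProject.DataStructure
{
    public class OrderTree { /* structure code */ }
}

namespace DSFinalProject.UI
{
    // A stray "using DSFinalProject.DAL;" in this tier is easy to spot
    // in code review - that is the visibility control described above.
    public class OrderScreen { }
}
```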
I'm working on a large C# project, and I wonder why people use DLLs in their apps. I know that a DLL file (please correct me if I'm wrong) contains some functions, but why don't we put those functions inside our main C# app?
Thanks
Most of it is summed up in the answer to this question, but the basic reasoning is "so you don't have to duplicate code".
Code reuse. Usually DLL files contain functions that are useful in more than one app, and having them in a single compiled file is a lot easier than copying all of that code around.
Portability, Reusability, Modularity.
Splitting types and the like into separate assemblies allows you to reuse those types in different projects, maintain those types in a modular fashion (e.g. update just one assembly instead of the whole app), and share parts of your code with others.
It also allows you to group common functionality into a single package.
Maintainability. When you need to fix a bug, you can release just the DLL containing the fix, instead of having to re-release the entire application.
This is an interesting question in modern computing.
Back in the 16-bit days, DLLs cut down on the amount of code in memory.
This was a big issue when computers with 16 MB of memory were considered fully loaded.
I find many of the answers interesting, as though a DLL were the only way to have a reusable, maintainable and portable library.
A good reason for DLLs is that you want to share code with an external party.
Just as Visual Studio and other library vendors give you DLLs, this makes their code available to an external consumer. However, at one time they distributed it in another way.
Patchability: this is true, but how often does it really happen? Every company I've worked for has tested products as a unit. I suppose if you need to do incremental patching because of bandwidth or something, that would be a reason.
As for all the other reasons, including reusability, maintainability and modularity:
I guess most of you don't remember .LIB files, which were statically linked libraries.
You can even distribute .LIB files, but they have to be brought in at compile time, not at run time. They can facilitate reusability, maintainability and modularity just like a DLL.
The big difference is that they are linked when the program is compiled, not when it is executed.
I'm really beginning to wonder if we shouldn't return to .LIB files for many things and reduce the number of DLL files. Memory is plentiful, and there is overhead in load time when you have to load and runtime-link a bunch of DLL files.
Sadly, .LIB files are only an option if you're a C++ guy. Maybe they will consider them for C# in the future. I'm just not sure the reasons for DLLs still exist in the broad context they are used in today.
In big software, you have many teams working on several different modules of the program, and they can pursue their goals without needing to know what the others are doing! So one of the best solutions is for each team to produce its own code in parallel. That's where DLLs come onto the scene...
Extensibility - a lot of plugin frameworks use DLLs/Assemblies for plugins.
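A minimal sketch of that idea, assuming a hypothetical IPlugin contract shipped in a shared assembly: the host scans a folder for DLLs and instantiates anything that implements the interface.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

// The contract lives in an assembly both the host and the plugins reference.
public interface IPlugin
{
    string Name { get; }
    void Execute();
}

public static class PluginLoader
{
    // Loads every DLL in the folder and yields one instance per plugin type.
    public static IEnumerable<IPlugin> LoadPlugins(string folder)
    {
        foreach (var file in Directory.GetFiles(folder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);
            foreach (var type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                    yield return (IPlugin)Activator.CreateInstance(type);
            }
        }
    }
}
```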
DLL: a dynamic link library.
It is a library.
It contains some functions and data.
Where do we use these functions?
We use the functions and data that live inside the DLL in another application or program.
The most important thing is that the DLL does not get loaded into memory up front; when it is required or called, it is loaded into RAM.
One of the best uses is that you can integrate many third-party pieces of functionality into your application just by referencing the DLLs; there is no need to pull every third-party tool or application into your system.
For example, say you need to send a meeting invite via MS Outlook through code. For this, simply reference the DLLs provided by MS Outlook in your application, and you can start coding your way to success!
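A rough sketch of what that might look like, assuming the application references the Microsoft.Office.Interop.Outlook assembly (the subject and recipient address are placeholders):

```csharp
using System;
using Outlook = Microsoft.Office.Interop.Outlook;

class MeetingSender
{
    static void Main()
    {
        var app = new Outlook.Application();

        // An appointment becomes a meeting request once it has recipients
        // and its status is set to olMeeting.
        var meeting = (Outlook.AppointmentItem)app.CreateItem(Outlook.OlItemType.olAppointmentItem);
        meeting.MeetingStatus = Outlook.OlMeetingStatus.olMeeting;
        meeting.Subject = "Project sync";
        meeting.Start = DateTime.Now.AddDays(1);
        meeting.Duration = 30; // minutes
        meeting.Recipients.Add("colleague@example.com");
        meeting.Send();
    }
}
```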
I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API.
The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps.
I have a way to solve this problem, but I'm not sure it's the ideal way; I was looking for other ideas.
My plan would be to essentially have three dlls.
APIServer_1_0.dll - this would be the dll with all of the dependencies.
APIClient_1_0.dll - this would be the dll our developers would actually reference. No references to any of the mess in our solution.
APISupport_1_0.dll - this would contain the interfaces which would allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to.
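Roughly, I'm imagining something like this (the assembly and type names below are just placeholders):

```csharp
using System;
using System.Reflection;

// In APISupport_1_0.dll - the one assembly both sides reference.
public interface IApiServer
{
    string DoWork(string request);
}

// In APIClient_1_0.dll - locates and loads the dependency-heavy server at
// run time, so third-party developers never reference our internals directly.
public static class ApiClient
{
    public static IApiServer Connect()
    {
        var assembly = Assembly.Load("APIServer_1_0");
        var type = assembly.GetType("ApiServer.ServerFacade");
        return (IApiServer)Activator.CreateInstance(type);
    }
}
```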
I initially arrived at this design because the way in which we do inter-process communication between Windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading DLLs).
While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
You may wish to take a look at the Microsoft Managed Add-in Framework [MAF] and the Managed Extensibility Framework [MEF] (links courtesy of Kent Boogaart). As Kent states, the former is concerned with isolation of components, while the latter is primarily concerned with extensibility.
In the end, even if you do not leverage either, some of the concepts regarding API versioning are very useful - i.e. versioning interfaces and then providing inter-version support through adapters.
Perhaps a little overkill, but definitely worth a look!
Hope this helps! :)
Why not just use the Assembly versioning built into .NET?
When you add a reference to an assembly, just be sure to check the 'Require specific version' checkbox on the reference. That way you know exactly which version of the Assembly you are using at any given time.
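For example, the shared assembly carries an explicit version in its AssemblyInfo.cs, and you bump it whenever the contract changes (the version numbers here are illustrative):

```csharp
using System.Reflection;

// Clients built with 'Require specific version' bind only to this exact version.
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```

In the referencing project this shows up as a SpecificVersion setting on the reference, so the build complains loudly if someone drops in a different version of the assembly.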
I'm a newbie to SSIS / C# (I'm generally a Java developer) so apologies if this is a really stupid question.
Essentially the problem is this: I have two Data Flow tasks which load data and export it to a legacy flat-file format. The formatting is done by a Script Task (C#).
What I'd like to do is share some common code between the two. e.g. I could create a common base class and then extend it for my two different script tasks.
However it seems that SSIS doesn't really make provision for this.
Does anyone know if there is a way of accomplishing what I want to do?
You're correct that there is not a straightforward way to do this directly from SSIS.
In a recent project, we took two different approaches, which both worked fairly well depending on what you need to do:
1. Create a utility class (as a simple class library) and reference it from your script tasks (a minimal sketch of this appears after this list). This is done pretty much the same way as any other reference. If you use .NET 3.5, remember that you'll have to update the version manually in the script tasks, since SSIS defaults to 2.0. We also found that if we wanted some measure of reusability in the utility assembly (not relying on hard-coded variable names, etc.), the package still needed a fairly large amount of "setup" boilerplate to use the utility scripts.
2. Create a custom data flow component. This is a much more involved process, but it ultimately does the best job of avoiding code duplication. Generally, coding the actual data flow is fairly simple and not that much different from a script component, but the various setup code you'll need can tend to make things complicated. There's also not a lot of support in SSIS for when something goes wrong, which led to a lot of detective work on our project.
If you plan on using something a whole lot, and are committed to getting rid of as much boilerplate code as possible, option 2 is preferred. If it's only used in a few places here and there, consider the simpler option 1.
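As a small sketch of option 1, the utility library is just an ordinary class library (all names here are hypothetical), referenced from each script task:

```csharp
// SsisUtilities.dll - a plain class library both script tasks reference.
namespace SsisUtilities
{
    public static class LegacyFlatFileFormatter
    {
        // Pads or truncates a field to the fixed width the legacy format expects.
        public static string FormatField(string value, int width)
        {
            return (value ?? string.Empty).PadRight(width).Substring(0, width);
        }
    }
}
```

Inside either script task, the shared logic then becomes a one-liner such as string line = SsisUtilities.LegacyFlatFileFormatter.FormatField(name, 20);.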
I am pretty sure it's possible to access .NET assemblies in SSIS scripts. So you could do it this way. See the article "Accessing .NET assemblies with SSIS" on SQL Server Central.
I believe you will have to create an assembly or webservice for this to work.
This does not completely solve your issue, but it does help in not having to recreate all the classes every time you need them (I also do not want to deploy referenced assemblies for my current project). First, you need a master copy of your classes; you can copy them from an existing Script Task using the same process below, but in reverse.
Open the editor for the Script Task and, in the Project Explorer, click on the project file (the st_[Guid]); in the Properties window you'll see the Project Folder location. (This location gets recreated every time you edit the script task.)
In Explorer, copy your classes to this folder.
In the Project Explorer, click on the "Show All Files" icon.
Right-click on your files and add them to the project.
Probably way too late to answer this, but you can click on the solution and add a class there. Then, when you go into your scripts, you can choose Add Existing Item and browse to the class you created earlier. For me it was located next to the solution for the project. I haven't gone through deployment or anything for this, but at least you can access the class from the individual scripts.
In my team we have hundreds of shared DLLs, many of which also reference other DLLs that themselves reference yet more DLLs, and so on. We have started to use a 'Shared' directory for all the DLLs that we feel are generic enough to use in other projects, such as a database comms DLL.
The problem is that if one of the dlls all the way down the tree is changed, then everything that references it needs to be recompiled to avoid versioning issues (which occur at runtime).
To avoid this, there is now talk of combining all our 'shared' DLLs into one big assembly, so that anyone creating a new app simply references that, and that alone.
This will obviously get bigger and bigger, and I'm not sure if this is the best way or not. Any thoughts, please?
What we do is treat the maintenance of the shared DLLs as a project in itself, with its own source-control and everything. Then about twice a year, we do a 'release' of the shared DLLs to the public, with its own version number and everything. As long as you always use the DLLs as a 'set' (meaning all the ones you reference are from the same release) you're guaranteed not to have any dependency issues.
It's most definitely not the best way to do it. I have a few "shared" DLLs at my job that are kind of like that. They get unwieldy and difficult (read: impossible) to make meaningful changes to because it becomes too difficult to ensure that changes don't break apps downstream, which seems like the exact opposite of what you're trying to do.
It sounds like what you really need to do is separate your concerns a little bit better. If all of these DLLs are referencing each other, they're probably too tightly coupled. A true "shared" DLL should be able to stand on its own, or as part of a packet of three or four that travel as a group. If your dependencies are actually preventing you from making changes, then your coupling strategy has gone horribly wrong.
Putting everything in one large DLL certainly isn't going to make anything better. In fact, probably the opposite. Once you've got everything in one DLL, the temptation will be there to couple everything within it even more tightly together, which will make it impossible to pull things apart later.
You can make one solution that includes all the connected projects, and when you need to release, just build that solution.
Update:
As you say, a solution can't hold that many DLLs. On the other hand, you can write an external MSBuild script, or use CruiseControl.NET, which is capable of such complicated tasks.
To quote the GoF book, "Program to an interface, not an implementation." This could apply here to some of your libraries. You are already aware of how brittle your development becomes when you have tight coupling. What needs to be addressed now is how to give yourself breathing room.
You can create an interface. This will provide a contract that any application can use to specify that a minimum set of functionality is available.
You can create a Service that implements an interface. This will allow you to provide what would be thought of as an add-on or a plugin. It lets you design against a versioned contract that your tools are expected to adhere to.
You can create a Service that only uses an interface. This will allow your application to send in any concrete implementation that adheres to a contract of design.
Products like development editors and web browsers use this approach to make some code reuse possible. Thank you. Good day.
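To make the three options above concrete, here is a small sketch (the spell-checker names are invented for illustration):

```csharp
using System;

// The contract an application programs against.
public interface ISpellChecker
{
    bool IsCorrect(string word);
}

// A service that implements the interface - the swappable add-on/plugin.
public class EnglishSpellChecker : ISpellChecker
{
    public bool IsCorrect(string word)
    {
        return true; // a real dictionary lookup would go here
    }
}

// A service that only uses the interface - any conforming implementation
// can be handed in, so the editor never couples to a concrete checker.
public class Editor
{
    private readonly ISpellChecker _spellChecker;

    public Editor(ISpellChecker spellChecker)
    {
        _spellChecker = spellChecker;
    }

    public void Check(string word)
    {
        if (!_spellChecker.IsCorrect(word))
            Console.WriteLine("'" + word + "' may be misspelled.");
    }
}
```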
Design Principles from Design Patterns
Plugin