Converting multiple VFP9 classes to C#

Let me start by saying that I have not programmed in a heavily OO programming language since Java in college (I graduated in December 2005). Since I've graduated I have been programming with FoxPro 2.5 - VFP9. Now, the company I work for is pushing to convert all of our FoxPro applications to C#.
My part of this project is converting our report parsing application. In VFP9 it consists of 5-6 forms (none of which will be carried over as we have created a new C# front-end to replace it), a single Base class that contains all of our standard methods, and approximately 575 individual parser classes (some of which do nothing but set a few parser specific variables/properties and call the needed base classes). Some of the parsers contain their own custom methods which still use and interact with the base methods and global properties.
Now for my question...
From a design standpoint, we would like for our new C# front-end to spawn multiple executables (3-5 EXEs) that will call our new C# base/parser class libraries (DLLs). My original thought was that I would have one solution/project with a Base_Code.cs and the other 575 parser.cs files (H1.cs, H2.cs, H3.cs, etc). However, we need the capability to build each .cs file independently of the others as I may be updating the Base_Code.cs while my co-worker is updating the H1.cs.
How do I best structure this? Do I keep one solution but create 576 projects or do I create 576 solutions all using the same Namespace as another team is attempting currently?
There are several global variables/properties that we use throughout the base code and each parser (these will be passed in from the front-end application), such as file paths and file names. These will be static, so they need to be taken into consideration as well when thinking about the design.
EDIT FOR EXAMPLE:
The C# front-end is basically a queueing system and file/status viewer. This front-end "queues" the reports we pick up throughout the day. The report at the top of the list determines what DLL will be needed. The front-end application and the DLLs are completely separate.
Example: H00001_2342318.MSG - this will call the H00001 DLL
H00002_3422551.MSG - this will call the H00002 DLL
Each H00001, H00002, etc (575 DLLs in total) will use methods that are in the BASE DLL.
If I have to update the H00001 DLL, I need to do so without having to rebuild all 575 DLLs.

It sounds like what you want is a "plugin" kind of architecture. This lets you drop in / update dlls (assemblies) without recompiling the main app.
E.g. http://code.msdn.microsoft.com/windowsdesktop/Creating-a-simple-plugin-b6174b62
and related.
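To make that concrete, here is a rough sketch of how the front-end could discover and invoke parser DLLs at runtime. It assumes a shared IParser interface defined in the base DLL; the interface, member names, and folder convention are hypothetical:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Defined in the shared Base DLL that every parser assembly references.
public interface IParser
{
    string ReportCode { get; }          // e.g. "H00001"
    void Parse(string messageFilePath); // parse one .MSG report
}

public static class ParserLoader
{
    // Scan a plugin folder, load each assembly, and return the parser
    // whose ReportCode matches the report at the top of the queue.
    public static IParser LoadParserFor(string reportCode, string pluginFolder)
    {
        foreach (string dll in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            Type parserType = assembly.GetTypes()
                .FirstOrDefault(t => typeof(IParser).IsAssignableFrom(t) && !t.IsAbstract);
            if (parserType == null) continue;

            var parser = (IParser)Activator.CreateInstance(parserType);
            if (parser.ReportCode == reportCode)
                return parser;
        }
        throw new InvalidOperationException("No parser found for " + reportCode);
    }
}

With this arrangement, replacing H00001.dll in the plugin folder updates that one parser without rebuilding the front-end or the other 574 assemblies.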

576 projects and/or solutions is a maintenance nightmare. I have far fewer (75 or so) projects across less than a dozen solutions for an entire suite of products. This includes tests, framework components, etc. and the only way I keep on top of it is via strict naming/path conventions, source control, and scripts for automation.
Speaking of tests, you should be planning for unit tests (greenfield development presents an excellent opportunity for this; don't miss it). Tests go in a separate project; this is another reason why each class shouldn't be given its own assembly: you could theoretically double your project count.
I would start with a single solution with logically separated projects (e.g. break things out by function/dependency, and add a test project for each library).
Source control eliminates any concerns about conflicts between team members. If you have internal challenges getting source control up and running, look into Team Foundation Services Online: http://tfs.visualstudio.com/. Setup is incredibly easy and it's free for up to 5 users.
If you do end up needing greater disconnection between projects, you may want to consider using NuGet packages with a local repository to isolate/version different, discrete components. This wouldn't be my first step, but it is a worthwhile option to keep in mind.
From the comments, it sounds like you are currently performing daily deployments of autonomous units.
This seems risky. Can this be driven by configuration/data rather than code changes?
576 completely autonomous units seems like a problem with application design (e.g. reuse?)
Assuming that new code must be deployed every day, perhaps a scripting language + DLR could make this easier. Dynamic compilation of C# is also a possibility.
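If dynamic compilation were pursued, a minimal sketch using CodeDOM might look like this (the class and parameter names are made up, and error handling is kept to a minimum):

using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

public static class DynamicParserCompiler
{
    // Compile one parser's source code in memory and return the resulting assembly.
    public static Assembly Compile(string sourceCode, string baseDllPath)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters
            {
                GenerateInMemory = true,
                GenerateExecutable = false
            };
            options.ReferencedAssemblies.Add("System.dll");
            options.ReferencedAssemblies.Add(baseDllPath); // the shared base DLL

            CompilerResults results = provider.CompileAssemblyFromSource(options, sourceCode);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Compile failed: " + results.Errors[0].ErrorText);

            return results.CompiledAssembly;
        }
    }
}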

Related

Is it possible to set up a base project for use across multiple ASP.NET MVC projects?

My team lead handed this one to me, and I'm a bit stumped. We have just started using ASP.NET MVC for web development in our shop, and there is common design and functionality that we would like to be able to use across multiple sites.
So far, I have looked at creating a custom template with the common elements, but the downside to that is that updates to the template (as far as I can tell) do not automatically get pushed to projects created using that template. As having changes automatically update to the consuming projects is a requirement, custom templates won't work for me.
My question is, is it possible to set up a base project for use across multiple ASP.NET MVC projects, where updates to the base get propagated to the consuming projects? If you have any experience in this field, I would certainly appreciate some direction. My apologies if this question seems elementary to you, but this is my first real foray into ASP.NET MVC.
I've found that the best method for sharing resources between disparate projects is to create your own NuGet packages. These can contain anything from a class library with reusable classes, enums, extension methods, etc. to entire web applications complete with controllers, views, JavaScript, CSS, etc. The scope is entirely up to how much commonality you can abstract from your projects. You can then set up your own private NuGet repository to hold these so you don't have to publish them to the whole world. (Although, if you do create something that would benefit others as well, by all means do share on the official NuGet repo.)
Setting everything up is pretty trivial. I learned how to create NuGet packages and set up a private repo in a day. Here are some resources to get you started:
Official Nuget documentation for creating and deploying packages
Using the Package Explorer application to create packages via a GUI
Official Nuspec (the package manifest file) reference.
Hosting your own Nuget feeds
Alternate method for creating your own repository with SymbolSource integration
SymbolSource also offers private repos, remotely hosted on their servers, gratis. Some enterprise environments may not like having their code "in the cloud", but if you can get by with it, this is by far the easiest way to get going.
From experience, the company I work for has found that whilst there are common design and functionality elements across our projects, the uncommon elements can be broad enough to outweigh the need for some form of base project. Using custom project templates can also become a maintenance nightmare, so avoid those.
Instead we've opted to document how a project should be set up for particular designs, and it's up to the Team Lead to decide which bits are needed for the particular project they are working on.
Where there are functional overlaps, we've considered (but not actually done yet) creating a common library (or libraries) with its own development lifecycle, and then setting up our own NuGet server to distribute the common library to the other projects. We haven't done this yet, mainly because the differences between the projects we have worked on tend to be large enough for this not to be warranted.
But from the sound of what you're describing, NuGet packages or something similar could be the way to go in your case.
While I don't think there's a way to set up a base project that everything else inherits from, you could quite easily set up a common library project that all others reference. It could include base classes for all the common things you'll be using (eg ControllerBase).
This way, updating the library project will allow new functionality to be added to all other projects. You could configure templates so that the common base classes are used by default when adding new elements.
Depending on how you reference the common library (compiled dll/linked project reference) you either get a stable link to a specific version or instant updates across all projects. I'd personally prefer to reference the common dll rather than the project, since this allows project A to be using an older version than project B. Updating A to the new version is trivial, but it gives you a level of separation so that if B requires breaking changes, you don't have to waste resources to keep A working.
Another added bonus is that checking out an old version from source control would still be guaranteed to work as it would be tied to the version of the library in use at the time it was created.
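To illustrate (the names here are purely illustrative), the common library could expose a base controller that each consuming MVC project inherits from:

// In the shared class library, e.g. MyCompany.Web.Common.dll
using System.Web.Mvc;

public abstract class SiteControllerBase : Controller
{
    // Behaviour every site needs, e.g. shared logging or auditing.
    protected void LogAction(string message)
    {
        System.Diagnostics.Trace.WriteLine(GetType().Name + ": " + message);
    }
}

// In a consuming MVC project that references the common DLL
public class HomeController : SiteControllerBase
{
    public ActionResult Index()
    {
        LogAction("Index requested");
        return View();
    }
}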

How to include 3rd party code in separate versions in one project

I've got an interesting problem on my hands and I can't quite figure out the right way of handling it. This is specific to sitecore, but I would imagine the fix to the issue would be one that could be applied to anyone that has multiple websites running different versions of a framework.
Right now I have 3 separate websites running Sitecore as the framework and CMS for the sites. One website is running code from Sitecore 6.5, another is on 7.0, and another is on 7.0 but will be 7.2 soon enough.
One of the core principles of programming is do not repeat yourself. I want to set up a separate C# project to include handling of Sitecore specific logic and classes. It would mostly include utility like classes that do simple functions to make my life easier checking many kinds of things. These base features are included in each version of Sitecore I am using.
Basically there is a ton of shared functionality between the Sitecore DLLs despite the differences, and I want to be able to write version agnostic code in one place.
I don't care if it needs to build out 3 separate DLLs for each set of Sitecore DLLs I need to compile with, as long as I can keep one base source. Is this sort of thing possible?
How I would handle it:
Set up an independent project and make use of configurations/symbols. A lot of the simple .NET code can probably be universally shared; however, given you're working with different versions of SC, you would most likely deal with deprecated functionality, API changes, etc. One example I can think of is UIFilterHelpers.ParseDatasourceString (which is deprecated in 7.2 in favor of SearchStringModel.ParseDatasourceString). There are a lot of ways to approach this, but for example:
Inline Versions
#if SC7
IEnumerable<SearchStringModel> searchStringModel = UIFilterHelpers.ParseDatasourceString(Attributes["sc_datasource"]);
#else //SC72
IEnumerable<SearchStringModel> searchStringModel = SearchStringModel.ParseDatasourceString(Attributes["sc_datasource"]);
#endif
Another approach is to use partial classes and define version-specific implementations (then only include those in the correct project). Maybe you have:
Common.sln
  Common.SC65.csproj
    MyClass.cs [shared]
    MyClass.SC65.cs
  Common.SC7.csproj
    MyClass.cs [shared]
    MyClass.SC7.cs
  Common.SC72.csproj
    MyClass.cs [shared]
    MyClass.SC72.cs
In the above, MyClass.cs resides in the root and is included in every project. However, the .SC#.cs files are only included in the project targeting the specific Sitecore version.
This pattern is used a lot by libraries that target different .NET platforms or various configurations. To use an MVC example, you'd have MyProject.csproj, MyProject.MVC3.csproj, MyProject.MVC4.csproj, MyProject.MVC5.csproj (each with different references and possibly framework versions).
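A minimal sketch of the partial-class layout (the member names are invented for illustration): the shared file holds the version-agnostic logic, and each project includes only its own version-specific file:

// MyClass.cs - included in every project
public partial class MyClass
{
    public string BuildQuery(string datasource)
    {
        // Version-agnostic code calls into the version-specific piece.
        return "query for " + ParseDatasource(datasource);
    }
}

// MyClass.SC7.cs - only included in Common.SC7.csproj
public partial class MyClass
{
    private string ParseDatasource(string datasource)
    {
        // The Sitecore 7.0-specific implementation would go here.
        return datasource.Trim();
    }
}

// MyClass.SC65.cs and MyClass.SC72.cs would each define their own
// ParseDatasource for the corresponding Sitecore version.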

c#: why use DLLs?

I'm working on a large C# project, and I wonder why people use DLLs in their apps. I know that a DLL file (please correct me if I'm wrong) contains some functions, but why don't we put those functions inside our main C# app?
Thanks
Most of it is summed up in the answer to this question, but the basic reasoning is "so you don't have to duplicate code".
Code reuse. Usually dll files contain functions that are useful in more than one app, and to have them in a single compiled file is a lot easier than copying over all that code.
Portability, Reusability, Modularity.
Splitting types and the like into separate assemblies allows you to reuse those types in different projects, maintain those types in a modular fashion (e.g. update just one assembly instead of the whole app), and share parts of your code with others.
It also allows you to group common functionality into a single package.
Maintainability. When you need to fix a bug, you can release just the DLL containing the fix, instead of having to re-release the entire application.
This is an interesting question in modern computing.
Back in the 16-bit days, DLLs cut down on the amount of code in memory.
This was a big issue when 16 MB computers were considered fully loaded.
I find many of the answers interesting, as though a DLL is the only way to have a reusable, maintainable and portable library.
A good reason for DLLs is that you want to share code with an external party.
Just as Visual Studio and other library vendors give you DLLs, this makes their code available to an external consumer. However, at one time they did distribute them in another way.
Patchability: this is true, but how often does this really happen? Every company I've worked for has tested products as a unit. I suppose if you need to do incremental patching because of bandwidth or something, this would be a reason.
As for all the other reasons, including reusability, maintainability and modularity:
I guess most of you don't remember .LIB files, which were statically linked libraries.
You can even distribute .LIB files, but they have to be introduced at compile time and not runtime. They can facilitate reusability, maintainability and modularity just like a DLL.
The big difference is that they are linked when the program is compiled not when it is executed.
I'm really beginning to wonder if we shouldn't return to .LIB files for many things and reducing the number of DLL files. Memory is plentiful and there is overhead in load time when you have to load and runtime link a bunch of DLL files.
Sadly, .LIB files are only an option if you're a C++ guy. Maybe they will consider them for C# in the future. I'm just not sure the reasons for DLLs still exist in the broad context they are used for today.
In big software, you have many teams working on several different modules of the program, and they can pursue their goals without needing to know what the others are doing. So one of the best solutions is for each team to produce its own code in parallel. That's where DLLs come into the picture.
Extensibility - a lot of plugin frameworks use DLLs/Assemblies for plugins.
DLL: a dynamic link library.
It is a library.
It contains some functions and data.
Where do we use these functions?
We use the functions and data that are inside the DLL in another application or program.
The most important thing is that a DLL is not loaded into memory up front; it is loaded into RAM only when it is required and called.
One of the best uses is that you can integrate many third-party functionalities into your application just by referencing the DLLs; there is no need to install every third-party tool/application on your system.
For example, say you need to send a meeting invite via MS Outlook through code. For this, simply reference the DLLs provided by MS Outlook in your application and you can start coding your way to success!
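As a rough sketch of that Outlook scenario (assuming a reference to the Microsoft.Office.Interop.Outlook assembly and Outlook installed locally; the helper class and its members are made up):

using System;
using Outlook = Microsoft.Office.Interop.Outlook;

public static class MeetingSender
{
    // Create and send a one-hour meeting request through the locally installed Outlook.
    public static void SendInvite(string recipient, string subject, DateTime start)
    {
        var outlook = new Outlook.Application();
        var appointment = (Outlook.AppointmentItem)outlook.CreateItem(Outlook.OlItemType.olAppointmentItem);

        appointment.Subject = subject;
        appointment.Start = start;
        appointment.Duration = 60; // minutes
        appointment.MeetingStatus = Outlook.OlMeetingStatus.olMeeting;
        appointment.Recipients.Add(recipient);
        appointment.Send();
    }
}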

.NET solution - many projects vs one project

We currently have a rapidly growing C# codebase. Currently we have about 10 projects, split up in the usual categories, common/util stuff, network layer, database, ui components/controls etc.
We run into the occasional circular dependency where project x depends on something in y and vice-versa. We are looking at maybe collapsing the projects down to one and just managing the structure using folders/namespaces. We have a Java project which of course organises things just using folders/packages, so we're not sure what, if any, benefit having multiple projects brings. None of our projects require special project properties, except the main run project, which we may keep separate (and very thin).
Does anyone have any prior experience of why one project is better/worse than multiple projects, and could you suggest the best approach? Any insight into issues with circular dependencies under either approach would also be useful.
Any input appreciated.
In my experience, separating code which creates a single executable into multiple projects can be useful if you want to
use different programming languages in different parts,
develop libraries that are also used by other applications, or
conceptually separate multiple layers (i.e., let Visual Studio ensure that there are no direct references from project Lib to project App).
Personally, I base most of my decisions on the second point. Do I think that part of the application can be a more general library that I am likely to need in other application? Put it in a separate project. Otherwise, as you point out, having a single project usually makes development easier.
About the circular dependencies: The recommended way to solve this is to put interfaces of the referenced stuff into a third project. For example, if you have two applications both sharing some objects through remoting, you put interfaces of the shared objects in a library project to ensure that they are available to both applications.
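For example (a rough sketch with made-up names), rather than letting two projects reference each other, both reference a small contracts project:

// Project: Shared.Contracts (referenced by both sides)
public interface IOrderNotifier
{
    void OrderShipped(int orderId);
}

// Project: Warehouse (references Shared.Contracts only)
public class ShippingService
{
    private readonly IOrderNotifier _notifier;

    public ShippingService(IOrderNotifier notifier)
    {
        _notifier = notifier;
    }

    public void Ship(int orderId)
    {
        // ...ship the order, then notify whoever implements the interface.
        _notifier.OrderShipped(orderId);
    }
}

// Project: WebApp (references Shared.Contracts and Warehouse)
public class EmailNotifier : IOrderNotifier
{
    public void OrderShipped(int orderId)
    {
        System.Console.WriteLine("Order " + orderId + " shipped.");
    }
}

The cycle is broken because Warehouse no longer needs to know anything about WebApp; it only depends on the interface.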
Without knowing the exact design of your application, it's difficult to give more concrete advice.
If you've got projects with circular dependencies, that indicates a problem with the design of the code, not with the solution/project model.
When making dependencies between projects, it helps to always think of one as "Lower" and the other as "Higher".
A higher level project (such as a web interface) should only depend on lower projects. A lower project (such as a utility) should never depend on something higher, such as a web interface. If this happens, it either means your higher level project has something that really should be in the lower project, or vice versa.
Generally speaking, having multiple VS projects (within a VS solution) only makes sense in these cases:
You can potentially reuse the produced DLL in another project (a class library)
You want to separate things, as in a layered architecture, where you may drop the DAO DLL and exchange it for another
There are different front-end projects (e.g. ASP.NET MVC apps) which need to be deployed in different physical locations but use the same BL and DAL.
If you're saying you have the problem of circular dependencies, then you have a problem in your code design. You could probably put the logic that is used by multiple projects inside a class library designed to be reused across many projects.
Generally I'd say you shouldn't add more projects if you don't really need it. Splitting up into projects means adding more complexity, so when you're doing so, you should gain a reasonable benefit from it.
We've noticed that Visual Studio's performance degrades significantly as the number of projects grows. Something as simple as switching from 'Debug' to 'Release' configurations can take upwards of 15 seconds for solutions with around a dozen C# projects in them.
Also, as a counter point to Reed's comment about build times, I've seen build times grow because Visual Studio seems to be spending a lot of time on the project overhead. The actual compile times seem fast, but the total time from hitting build to being able to run is significant.
My advice would be keep the number of projects to the minimum you can get away with. If you need multiple projects for good reasons then use them as necessary, but prefer to keep things together. You can also refactor to split a project into two if necessary.
Multiple projects allows better reuse of specific types within multiple applications. It can also improve build time, since certain projects will not need to be rebuilt for all code changes.
A single project makes life easier, since you don't have to worry about dependencies. Just realize that the ease comes at a cost - it also makes it easier to let poor design decisions creep into the code base. Circular dependencies, whether in one project or multiple, are typically a design flaw, not a requirement.
There are several reasons for separating a solution into different projects (and thus assemblies), and it mainly comes down to re-usability and separation of responsibilities.
Now your goal should be to ensure that an assembly (aka project) has the minimum number of dependencies on other assemblies in your solution; otherwise you may as well have everything in fewer assemblies. If, for example, your UI components have a strong dependency on your data access code, then there is probably something wrong.
Really, this comes down to programming against common interfaces.
Note, however:
When I say "otherwise you may as well have everything in fewer assemblies", I wasn't necessarily suggesting this is the wrong thing to do. In order to achieve true separation of concerns, you're going to be writing a lot more code and having to think about your design a lot more. All this extra work may not be very beneficial to you, so think about it carefully.
You might find the following Martin article worthwhile: Design Principles and Design Patterns (PDF)(Java).
A revised version in C# specifically is available in Agile Principles, Patterns, and Practices in C# also by Martin.
Both express different guidelines that will help you decide what belongs where. As pointed out, however, cyclic dependencies indicate that there are either problems with design or that something is in a component that belongs in a different one.
Where I work, we opted for an approach where the aim is to have a single project per solution. All code library projects also have a test harness application and/or a unit test app.
As long as the code libraries pass testing, the release versions (with Xml Documentation file of course) get transferred into a “Live” folder.
Any projects that requires functionality from these other projects have to reference them from the “Live” folder.
The advantages are pretty clear. Any project always accesses known working code. There is never a chance of referencing a work in progress assembly. Code gets tested per assembly, making it far easier to understand where a bug originates. Smaller solutions are easier to manage.
Hope this helps!
Shad
Start with a single project. The only benefit in splitting your codebase into more projects is simply to improve build time.
When I have some reusable functionality that I really want to isolate from main project, I'll just start brand new solution for it.

Common C# source code for Windows and Windows Mobile

I have a goal to build an application with UI that would run on both Windows Mobile and "normal" desktop Windows. The priority is for it to "look good" under Windows Mobile, and for desktop Windows it is OK if it distorted. Before I invest days trying, I would like to hear if that is possible to begin with. There are several parts to this question:
Is .NET Compact Framework a subset of "normal" (please, edit) .NET Framework? If not, does MSDN have any information anywhere on classes that are in .NET Compact Framework, but not in "normal" (again, please, edit) framework?
Is behavior of shared classes same in both frameworks?
Is it possible to have a single Visual Studio 2005 solution / project for both platforms? If yes, how do to set it up?
Any other comments and advice? Any relevant links?
The CF contains a subset of the full framework (FFx), but it is not a pure subset. There are actually several things available in the CF that aren't in the FFx, which makes it a bit more difficult. CF apps also, except in the most rudimentary cases, use P/Invoke. Those calls are never the same from the desktop to the device, so they are not directly portable (though with a little abstraction you can have a platform-agnostic interface).
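To illustrate that platform-agnostic abstraction (a sketch only; note that on CE-based devices most Win32 entry points live in coredll.dll rather than kernel32.dll), each project would compile only the implementation that applies to it:

using System.Runtime.InteropServices;

// Shared file: the rest of the code talks only to this interface.
public interface ITickProvider
{
    uint GetTicks();
}

// Desktop-only file, included in the full-framework project.
public class DesktopTickProvider : ITickProvider
{
    [DllImport("kernel32.dll")]
    private static extern uint GetTickCount();

    public uint GetTicks() { return GetTickCount(); }
}

// Device-only file, included in the Compact Framework project.
public class DeviceTickProvider : ITickProvider
{
    [DllImport("coredll.dll")]
    private static extern uint GetTickCount();

    public uint GetTicks() { return GetTickCount(); }
}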
For the most part, behavior is the same. I've seen some cases where it's not, and I recall some event ordering not always being identical though, so trust but verify.
It's possible through very careful massaging of the configurations, but I certainly don't recommend it. It's difficult to maintain and very fragile. Instead have two project files, one for CF and one for FFx. You're likely to have code file differences anyway. Add the code files as links to each project so they both use the same physical source file. I'd recommend using some form of CI to help ensure they both build at all times.
Take a look at Dan Moth's MSDN article and blog entries on sharing code assets.
P.S. I found the poster online - it'll show you all the classes that are CF. I ordered it from Microsoft because Kinkos wanted $65 to print it out in color for me! Microsoft sent me a couple of copies free - all I had to do was ask:
http://www.microsoft.com/downloads/details.aspx?familyid=7B645F3A-6D22-4548-A0D8-C2A27E1917F8&displaylang=en
I have it hanging in my cubicle and it's a godsend when trying to remember which namespaces classes can be found in.
Nice multi-part question:
Differences between the Full Framework and the Compact Framework
The article above has links to relevant documentation about how class behavior differs (it definitely DOES differ in some situations)
Very simple! Create a single solution with a set of base functionality in a Class Library, then create two client projects (one for your desktop app and one for the windows mobile app). Finally, add references to the class library to both client projects.
Depending on the breadth of the project you are working on, you may want to check out the Model View Controller pattern. It may be a bit much for your project, but if you want to share UI behavior between projects, it can be a life saver.
Hope that helps!
CF, in general, contains a subset of the classes from the regular framework - but you can't directly execute code from one on t'other. Additionally, rather than just being a subset, there are probably a few things in compact that aren't in the regular version, such as the GUI things specific to mobile devices (soft keys, etc) - assuming you are writing a winform exe, and not a web page (which might be the simplest way to get compatibility).
With some effort, it is possible to share logic code, in particular utility dlls - but they need different csproj files (since they have completely different compile-time "targets"). To reduce maintenance, you can often cheat by hacking the csproj to use wildcards, like from here:
<ItemGroup>
<Compile Include="..\protobuf-net\**\*.cs" />
</ItemGroup>
For UI, things get a lot trickier. In general the expectation would be to have shared business logic and separate UIs for different target devices.
1) There is a Compact Framework, so yes; and it is a subset of the full .NET Framework. I've got a poster on my wall at the office that denotes a whole bunch of classes that work in CF... I don't recall off the top of my head if there are any that are purely CF, but I suppose there must be some. There are a couple of good books on the subject - one by Paul Yao that I have and another by Andy Wigley - both are available on Amazon.
2) As far as I'm aware, the classes that exist in both CF and the full framework behave the same, but need to be compiled for different targets.
3) I would hazard a guess that, providing you only use classes that are common to both, you could use the same solution. I don't know how far you would have to go to make it compile for both the compact device and the full version, though, nor can I say with complete certainty that it can be done. I'd hazard a guess that the process isn't simple.
4) Go to your local book store and have a flick through those two books I mentioned. Like I said, I have the one by Paul Yao and it seems to cover most of what I could imagine needing on a compact device.
