I am cleaning up some code in a C# app that I wrote, really trying to focus on best practices and coding style. As such, I am running my assembly through FxCop and trying to research each message it gives me to decide what should and shouldn't be changed. What I am currently focusing on are locale settings. For instance, the two errors that I currently have are that I should be specifying the IFormatProvider parameter for Convert.ToString(int), and setting the DataSet and DataTable Locale. This is something that I've never done and never put much thought into; I've always just left that overload out.
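To make it concrete, this is roughly the kind of change those rules are asking for (a minimal sketch; the names are placeholders, and whether you pick InvariantCulture or CurrentCulture depends on whether the string is for storage or for display):

using System;
using System.Data;
using System.Globalization;

static class LocaleExamples
{
    static void Demo(int value)
    {
        // Specify the IFormatProvider so the conversion doesn't silently
        // depend on whatever culture the current thread happens to have.
        string text = Convert.ToString(value, CultureInfo.InvariantCulture);

        // Set the Locale on the DataSet/DataTable so sorting, comparing and
        // formatting inside them are predictable regardless of the OS culture.
        DataSet ds = new DataSet("Orders");
        ds.Locale = CultureInfo.InvariantCulture;

        DataTable dt = new DataTable("OrderLines");
        dt.Locale = CultureInfo.InvariantCulture;
        ds.Tables.Add(dt);
    }
}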
The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems unnecessary and could hinder readability to a degree.
I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with DataSets and DataTables. I wonder what the practice with regard to globalization and localization on small-scale internal apps is "in the wild."
I usually ignore those kinds of warnings for small internal apps. Remember that FxCop is meant to make sure your code is good for use as part of a framework; not all of its rules will be relevant to you. I always disable the rules that don't fit the application I'm building.
Though I would call Dispose on any class that implements IDisposable. It doesn't matter if it does nothing now; an upgraded version of the class might start leaking something essential, and it's a good habit to get into.
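For what it's worth, a using block keeps that habit essentially free (a trivial sketch):

using System.Data;

class DisposeHabit
{
    static void Scratch()
    {
        // using guarantees Dispose() runs even if an exception is thrown, so if a
        // later version of these classes ever holds onto something, you're covered.
        using (DataSet ds = new DataSet("Scratch"))
        using (DataTable dt = new DataTable("Temp"))
        {
            ds.Tables.Add(dt);
            // ... fill and use the table ...
        }
    }
}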
Related
I have been exploring reinventing the DataTable, and I am wondering what the uses are for MarshalByValueComponent. I believe it is used for .NET Remoting (and maybe WinForms and WebForms), but that was superseded by WCF. I cannot find any notable usages of it across GitHub or Google. Is MarshalByValueComponent still used?
This is hard to answer; the entire concept of wanting to remote a component is a mystifying one. These design decisions were made 13+ years ago, and clearly the designers had a very different idea of how practical remoting was going to be. It didn't pan out that well and was heavily re-engineered in .NET 3.0.
Just noodling about this a bit without knowing the thinking behind it. MarshalByValueComponent exists as the antipode to Component, which inherits from MarshalByRefObject. By far most components and controls are not serializable; they have way too much runtime state associated with them that can never properly deserialize in another runtime environment. Take an OpenFileDialog: the odds that a verbatim copy of it on another machine can operate correctly are zilch. Again, you have to suspend the wonder at why you'd want to do this at all. Same for any Control; it has dozens of properties whose values depend on operating system state.
But MBRO isn't that desirable either; the many round-trips take a heavy hit from network latency. There are a few components that don't have runtime state and still make a bit of sense in a remoting scenario. That they are components at all is in itself a quirk; it's been a long time since I dropped a DataSet on a Form. But they inherit MBVC as a result. Just ignore this, it isn't practical.
I am working on a C# project and have two programmers helping me on parts of it. The problem is that I don't trust these programmers, as they joined recently, and I need to protect my company's property.
I need to hide some parts of the code from the two programmers so they don't see it and they should still be able to work on their parts and run the full application to test it.
Is there such a thing? :)
Know a few things:
You Can't Hide Code Users Compile Against.
C# makes it incredibly easy to see what you're compiling against, but this is actually true for all programming languages: if they are required to compile against your code, either as a DLL or as raw C#, or if they can run it, they can get at the logic behind it. There's no way around that. If the computer can run the program and it all resides on their machine, then a human can look it over and learn how to do it too.
HOWEVER! You can design your program in such a way that they don't need to compile against it.
Use Interfaces.
Make the code that the other employees must write a plug-in. Have them write their code as an entirely separate project to an interface that the core part of your API loads dynamically at run time.
Take a look at The Managed Extensibility Framework for a tool to do this.
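A bare-bones sketch of the idea without MEF, just to show the shape of it (the interface name, the folder scanning, and all the types are invented for this example):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// The contract lives in a small assembly that both sides reference.
// The other developers only ever see this interface, never your implementation.
public interface IReportModule
{
    string Name { get; }
    void Run();
}

public static class PluginLoader
{
    // Scan a folder for DLLs and instantiate every concrete type that
    // implements the contract.
    public static IReportModule[] LoadModules(string pluginFolder)
    {
        return Directory.GetFiles(pluginFolder, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .Where(t => typeof(IReportModule).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => (IReportModule)Activator.CreateInstance(t))
            .ToArray();
    }
}

The core application only ever works with IReportModule, so the people writing modules never need your source, or even your compiled internals, to build their part.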
Use Web or Remote Services.
Components of particular secrecy can be abstracted away so the details of how they work are hidden, and then invoked via a web call. This only works in situations where the core details you want to protect are not time-sensitive. It also doesn't protect the idea behind the feature: the employee will need to understand its purpose to be able to use it, and that alone is enough to rebuild it from scratch.
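As a rough illustration (the URL, route, and parsing are all invented for the example), the sensitive calculation lives on a server you control and the client only sees a question and an answer:

using System;
using System.Globalization;
using System.Net;

public static class PricingClient
{
    // The "secret sauce" runs server-side; the client only sends inputs
    // and parses the result. Nothing about the algorithm ships in this DLL.
    public static decimal GetQuote(int productId, int quantity)
    {
        string url = string.Format(
            CultureInfo.InvariantCulture,
            "https://pricing.internal.example.com/quote?productId={0}&quantity={1}",
            productId, quantity);

        using (var client = new WebClient())
        {
            string response = client.DownloadString(url);
            return decimal.Parse(response, CultureInfo.InvariantCulture);
        }
    }
}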
Build Trust Through Code Reviews.
If you don't currently trust your employees, you need to develop that trust. You will not be able to know everything that everyone does at all times. This is a key skill, not just in programming but in life. If you feel you can never trust them, then you either need to hire new employees you can trust, or build trust in the ones you have.
One way to build trust in their capabilities is through code reviews. First, make sure you're using a version control system that allows for easy branching. If you aren't, switch immediately to Mercurial*. Have an "integration" area and individual development areas, usually through cloned branches or named branches. Before they commit code, get together with the employee and review the changes. If you're happy with them, then have them commit it. This will consume a little bit of time on each commit, but if you do quick iterations on changes, then the reviews will also be quick.
Build Trust Through Camaraderie.
If you don't trust your employees, chances are they won't trust you either. Mutual distrust will not breed loyalty. Without loyalty, you have no protection. If they have access to your repository, and you don't trust them, there's a good chance they can get at the code you want anyway with a little bit of effort.
Most people are honest most of the time. Work with them. Learn about them. If one turns out to be working for a hostile entity, they've probably already obtained what they wanted and you're screwed anyway. If one turns out to be a pathological liar or incompetent, replace them immediately. Neither of these issues will be solved by "protecting" your code from their eyes.
Perform Background Checks.
A further way to improve trust in your employee, from a security standpoint, is a background check. A couple hundred bucks and a few days, and you can find out all sorts of information about them. If you're ready to hide code from them, and you're the employer, you might as well do due diligence before they steal the secrets to the universe.
Your Code is Not That Important.
I hate to break it to you, but there's almost a 100% chance that your code is not special. Trying to protect it through obscurity is a waste of time and a notoriously poor protection method.
Good luck!
**Why Mercurial? Just because it's one option that's easy to get started with. Feel free to use any other, like Git, if that suits your fancy. Which one you use is entirely beside the point and irrelevant to this overall discussion.*
You can't do it.
Even if you only give them a DLL with your code, they can extract the code with reflection tools, e.g. .NET Reflector.
Keep a separate backup and submit dummy placeholders to source control.
The complicated way: set up an application server with VS2010 and all the files they need, lock everything down so they cannot access any files directly and can only run VS2010 and the built application, and provide only DLLs for the protected code.
Theoretically, they would be able to work on the code they need to but would never have direct access to the DLLs, nor would they have the ability to install or use a tool such as .NET Reflector to disassemble the files... there might still be some holes you'd need to look for, though.
The right way: Hire trustworthy programmers. ;)
Put your code into a DLL and use Dotfuscator to obfuscate the internal workings.
The only way I can see is to give them compiled and obfuscated assemblies to reference. Because you can only obfuscate private members, you may need to modify your code so that public methods do little or nothing at all. If there is any interesting code in a public method, you should rearrange it like this:
public bool ProcessSomething()
{
    // The public method stays a trivial wrapper that just delegates.
    return this.DoProcessSomething();
}

private bool DoProcessSomething()
{
    // your code goes here; private members can be renamed by the obfuscator
    return true; // placeholder so the sketch compiles
}
Even the obfuscator that comes free with VS will do enough to make it non-trivial to look into your code. If you require more protection, you will need a better obfuscator, of course.
But in the long run it is impractical, and it sends a bad signal to those developers: that you do not trust them. Nothing good can come of this. If you're not the boss (or the owner of the code), I would not worry that much; after all, it's not your property. You can talk to your boss to express your concerns. If you are the boss, you should not have employed people you do not trust in the first place.
Having just gone through a small experimenting session to try to see how much work it would take to bring our .NET class library, or at least portions of it, into Silverlight so that we can reuse business logic between the two worlds, I'm wondering if others have experience with this sort of thing.
The things I noticed, off the top of my head:
Lots of attributes missing (Browsable(false) for instance)
Lots of interfaces missing, or present, but empty (ICloneable is hidden, ITypedList missing)
Reflection differences (everything reachable needs to be public)
Some base class differences (no Component?)
So I'm wondering, is it really feasible for me to even look at this as a possibility?
I got the initial code running, but I had to comment out a whole lot of the base functionality, mostly around handling lists, since they are based on ITypedList and some base classes. Apparently I need to change to ObservableCollection in Silverlight, so a whole lot of base code needs to be changed in order to cope.
The actual business test class I created is 99.5% identical to the one I would've made for .NET, only some minor changes that would easily be usable in .NET as well, just not as I would've made it before looking at Silverlight. In other words, it looks feasible to share business logic, provided I can make the base classes compatible.
Just so I'm clear, what I'm talking about is that I would basically have two project files, one for .NET, and one for Silverlight, but the actual C# source code would be the same, shared between the two.
So does anyone have any experience with this? Any tips or guidelines?
Will it be worth it? It certainly warrants more looking into.
It is definitely feasible.
It's done on a project here; the Silverlight project includes the C# ones, there are some #if statements handling certain things (like log4net declarations), and other times things are just re-implemented. But in general it's a huge win, and you should definitely attempt it (we certainly have, successfully).
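The #if branches look roughly like this (log4net only because it's the example mentioned above; the class name and the Silverlight fallback logger are invented):

public partial class OrderProcessor
{
#if !SILVERLIGHT
    // Full .NET build: the real log4net logger.
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger(typeof(OrderProcessor));
#else
    // Silverlight build: log4net isn't available, so fall back to a tiny shim
    // (DebugLogger is a made-up stand-in you would write yourself).
    private static readonly DebugLogger Log = new DebugLogger();
#endif

    public void Process()
    {
        Log.Debug("Processing order...");
    }
}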
-- Edit:
One point, though: our OR/M (LLBLGen) didn't have built-in support for sending 'simple' objects down through Silverlight, but someone had written a plugin that handled it, which helped. So it may be worth considering what sort of DAL you're using and how well it supports Silverlight.
What I've done to facilitate this is:
Frequent use of partial classes and #if !SILVERLIGHT to separate code into parts that Silverlight can handle.
Use of code generation whenever possible. For example I've been experimenting with T4 templates that generate Silverlight equivalent attributes (DisplayAttribute instead of DescriptionAttribute for example)
Whenever there's an interface/attribute that isn't implemented by Silverlight (such as IDeserializationCallback, ICloneable, INotifyPropertyChanging), I create a dummy interface of the same name in the Silverlight application, as long as I know it isn't a problem that the implementation will never actually be used. For example:
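A dummy stand-in can be as small as this; the file is compiled into the Silverlight project only, and one way to do it is to declare it in the System namespace so shared code that references ICloneable compiles unchanged (nothing ever calls it):

// Compiled into the Silverlight project only.
namespace System
{
    public interface ICloneable
    {
        object Clone();
    }
}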
Finally, it's worth noting that in Silverlight 4, the assembly format does allow for sharing of binaries between Silverlight and .NET as long as there are no dependencies that Silverlight does not support.
One more note about the separate base classes - it may be worthwhile to create an abstract class that derives from ObservableCollection in Silverlight and BindingList (or whatever you're using in .NET) to minimize the impact on your typed collections.
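Something along these lines (a sketch with made-up names):

// Typed collections in the shared code derive from this one base class;
// only the base it ultimately sits on differs per platform.
#if SILVERLIGHT
public abstract class DomainCollection<T>
    : System.Collections.ObjectModel.ObservableCollection<T>
{
}
#else
public abstract class DomainCollection<T>
    : System.ComponentModel.BindingList<T>
{
}
#endif

// Shared code only ever sees DomainCollection<T>:
public class CustomerCollection : DomainCollection<Customer>
{
}

public class Customer
{
    public string Name { get; set; }
}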
UPDATE
Today I was working on porting some .NET code to Silverlight that made heavy use of the System.Diagnostics APIs like TraceSource, SourceSwitch, etc., which do not exist in Silverlight. I created very minimal implementations of these in the Silverlight project and put them in the Einstein.Diagnostics namespace. In doing so I decided I needed a convention to easily identify code that was mimicking the .NET Framework versus my own code, so I renamed the placeholder files to prefix them with an @ sign, and prefixed the class names in those files as well. The nice thing about that is that the @ sign does not actually change the class names as far as the C# compiler is concerned, so @SourceSwitch still compiles to Einstein.Diagnostics.SourceSwitch, but in the code I can easily see something is up. I've also decorated these classes with a [SilverlightPlaceholder] attribute.
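A stripped-down version of one of those placeholder files might look like this (the behaviour here is made up and deliberately minimal; only the naming trick and the marker attribute are the point):

namespace Einstein.Diagnostics
{
    // The @ prefix is only an escape for the compiler; the type's real name is
    // still SourceSwitch, so shared code compiles against it unchanged.
    [SilverlightPlaceholder]
    public class @SourceSwitch
    {
        public @SourceSwitch(string name)
        {
            Name = name;
        }

        public string Name { get; private set; }
    }

    // Marker used to flag files that merely mimic missing framework types.
    [System.AttributeUsage(System.AttributeTargets.Class)]
    public sealed class SilverlightPlaceholderAttribute : System.Attribute
    {
    }
}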
I do this with protobuf-net, and I use a few approaches:
conditional compilation symbols in the project file to trigger subtle code-branches (yes, it isn't perfect, but it works)
re-introduction of some things; attributes might be an example here - your code can still use re-introduced attributes, even if the framework code doesn't (see the sketch just after this list); as a more extreme example of this, for the Compact Framework I had to re-introduce a good chunk of the Expression API, which was fun
just drop some things ;-p
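For the attribute case (point two above), the re-introduction is just a type with the right name and shape, declared only in the builds that lack it. BrowsableAttribute is used here purely as an example of the technique, and SILVERLIGHT is the stock conditional symbol from the project template:

#if SILVERLIGHT
// Re-introduce an attribute the target framework lacks so shared code decorated
// with [Browsable(false)] still compiles. The framework never reads this copy;
// it only has to exist with the right name.
namespace System.ComponentModel
{
    [AttributeUsage(AttributeTargets.All)]
    public sealed class BrowsableAttribute : Attribute
    {
        public BrowsableAttribute(bool browsable)
        {
            Browsable = browsable;
        }

        public bool Browsable { get; private set; }
    }
}
#endif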
However if you are using ITypedList (which you mention), I can see that whole approach falling apart pretty messily; component-model is complex enough already, without having to force your way through the hacks too. It really depends quite how far you've gone down this road. Maybe 4.0 / dynamic will open up some of these options again?
One possible fix to your issue is to copy the missing code from the Mono project. Back in the day, I did a small project with the Compact Framework and it was missing the entire System.Xml namespace. I just copied the entire thing from Mono into my project, compiled it, and it worked great with minimal changes, IIRC.
I'm going to write an application, but I've never designed an application programming interface for other people to use before. I mean, what kind of design should I use to let people call my methods from the outside world, like an API?
Please, someone show me the way. I'm kinda new to this.
Expose as little as you can. Every bit you publish will come back to you x100 in the next version; keeping compatibility is very hard.
Create abstractions for everything you publish. You will definitely change your internals, but your existing users should stay compatible. (A sketch of this public-abstraction/internal-implementation split follows this list.)
Mark everything as internal, even the main method of your application. Every single method that could be used, will be used.
Test your public API the same way you would for interfaces. Integration tests and so on. Note that your API will be used in unpredictable ways.
Maximize convention over configuration. This is required even if your API is a single method; you will still need to support it, and convention just makes your life easier.
Sign and strong-name your assemblies; this is good practice.
Resolve as many FxCop and StyleCop errors as possible.
Check your API is compatible with the Naming Guidelines of your platform.
Provide as many examples as you can, and remember that most of the usage of your API will be Ctrl+C and Ctrl+V from these examples.
Try to provide documentation. Check that you do not have GhostDoc-style auto-generated documentation. Everybody hates this.
Include information on how to find you.
Do not bother with obfuscation. This will help you and your users.
ADDED
Your API should have as few dependencies as possible. For example, dependencies on IoC containers should be prohibited; if your code uses one internally, just ILMerge it into your assemblies.
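A tiny sketch of what "abstractions plus internal everything else" can look like in practice (all of the names here are invented):

using System;

namespace MyProduct.Api
{
    // The entire public surface: one small interface and one factory method.
    public interface IThumbnailGenerator
    {
        void Generate(string sourcePath, string targetPath, int maxWidth);
    }

    public static class ThumbnailGenerators
    {
        public static IThumbnailGenerator CreateDefault()
        {
            return new DefaultThumbnailGenerator();
        }
    }

    // Everything else stays internal, so you can rewrite it in the next
    // version without breaking a single caller.
    internal sealed class DefaultThumbnailGenerator : IThumbnailGenerator
    {
        public void Generate(string sourcePath, string targetPath, int maxWidth)
        {
            // ... implementation details hidden from consumers ...
            throw new NotImplementedException();
        }
    }
}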
It may not be the most entertaining reading, and certainly not the only reading to do on the subject, but while designing your class library (your API), do check in with the Design Guidelines for Developing Class Libraries every now and then; it's a good idea to have a design that corresponds a bit with the .NET Framework itself.
Make the methods you want to expose to the outside world public.
I found this presentation to be particularly insightful:
How to Design a Good API and Why it Matters
http://lcsd05.cs.tamu.edu/slides/keynote.pdf
One way to do it is to create a DLL for your main functionality that others will use and an EXE that calls the methods in the DLL. If you want your application to support plug-ins, have a look at the System.AddIn namespace.
If you want to see what's new in this area, check out the Managed Extensibility Framework. It's a new/"unified (see the comments...)" method for exposing features for add-ins and other extensibility/modularity.
In my team we have hundreds of shared DLLs, many of which also reference other DLLs that themselves reference other DLLs, and so on. We have started to use a 'Shared' directory for all the DLLs that we feel are generic enough to use in other projects, such as a database comms DLL.
The problem is that if one of the dlls all the way down the tree is changed, then everything that references it needs to be recompiled to avoid versioning issues (which occur at runtime).
To avoid this, there is now talk of adding all our 'shared' dlls into one big assembly, and anyone creating new apps simply reference that, and that alone.
This obviously will get bigger and bigger, and I'm not sure if this is the best way or not. Any thoughts, please?
What we do is treat the maintenance of the shared DLLs as a project in itself, with its own source-control and everything. Then about twice a year, we do a 'release' of the shared DLLs to the public, with its own version number and everything. As long as you always use the DLLs as a 'set' (meaning all the ones you reference are from the same release) you're guaranteed not to have any dependency issues.
It's most definitely not the best way to do it. I have a few "shared" DLLs at my job that are kind of like that. They get unwieldy and difficult (read: impossible) to make meaningful changes to because it becomes too difficult to ensure that changes don't break apps downstream, which seems like the exact opposite of what you're trying to do.
It sounds like what you really need to do is separate your concerns a little bit better. If all of these DLLs are referencing each other, they're probably too tightly coupled. A true "shared" DLL should be able to stand on its own, or as part of a packet of three or four that travel as a group. If your dependencies are actually preventing you from making changes, then your coupling strategy has gone horribly wrong.
Putting everything in one large DLL certainly isn't going to make anything better. In fact, probably the opposite. Once you've got everything in one DLL, the temptation will be there to couple everything within it even more tightly together, which will make it impossible to pull things apart later.
You can make one solution that includes all the connected projects, and when you need to release, just build that solution.
Update:
As you say, a solution can't hold that many DLLs. On the other hand, you can write an external MSBuild script, or use CruiseControl.NET, which can handle such complicated build tasks.
To quote from the GoF book, "Program to an interface, not an implementation." This applies here to some of your libraries. You are already aware of how brittle your development becomes when you have tight coupling. What needs to be addressed now is how to give yourself some breathing room.
You can create an interface. This will provide a contract that any application can use to specify that a minimum set of functionality is available.
You can create a Service that implements an interface. This will allow you to provide what would be thought of as an addon or a plugin. This allows you to design towards a contract version with expectations that your tools will adhere to.
You can create a Service that only uses an interface. This will allow your application to accept any concrete implementation that adheres to the design contract (see the sketch just below).
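To illustrate the three points with a small, made-up example (a spell checker, the kind of extension point editors and browsers expose):

// The interface: a contract guaranteeing a minimum set of functionality.
public interface ISpellChecker
{
    bool IsCorrect(string word);
}

// A service that implements the interface: the add-on/plugin side.
public class EnglishSpellChecker : ISpellChecker
{
    public bool IsCorrect(string word)
    {
        // A real implementation would consult a dictionary; this is a placeholder.
        return !string.IsNullOrEmpty(word);
    }
}

// A service that only uses the interface: the editor never names a concrete
// checker, so any implementation honouring the contract can be handed in.
public class Editor
{
    private readonly ISpellChecker spellChecker;

    public Editor(ISpellChecker spellChecker)
    {
        this.spellChecker = spellChecker;
    }

    public bool CheckWord(string word)
    {
        return spellChecker.IsCorrect(word);
    }
}

Swapping in a different checker (or a test double) is then just a matter of passing a different implementation to the Editor constructor, with no change to the Editor itself.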
Products like development editors and web browsers use this approach to make some code reuse possible. Thank you. Good day.
Design Principles from Design Patterns
Plugin