I'm working on a huge C# + VB.NET solution that has about 210 projects.
Over time, some developers have been using a method that only works in a specific context (an HttpContext needs to be present), which means that console applications that use this method somewhere in their call tree will fail.
Other than throwing an exception (which might break running solutions), is there a way to check whether this context-dependent method is called by a specific 'parent'?
In Visual Studio it's possible to "Find All References", so I'm looking for some tool that could do this recursively to give me a list of, e.g., all projects that somehow call this broken code.
Take a look at the following solution:
MySolution.sln
  MyApp.csproj
  MyClassLib.csproj
    MyClass.cs
The MyClassLib project is referenced by the MyApp project and contains MyClass.
MyClass is used only in MyApp, so it can be moved there.
Is there a way to determine such cases with some tool? Maybe Roslyn or ReSharper inspections?
For a complex solution with a long history and many projects, this is a much-needed feature.
No, there is no such tool for this.
Why? Easy: what if, sometime in the future, you create a MyApp2 that also needs MyClass? Then it would be better if MyClass were not in the MyApp assembly.
Now you, as the human developing this, might know that there will never (although never say never) be a MyApp2, but a tool cannot possibly know this.
I have limited experience with ReSharper, but in my experience it cannot automatically detect these cases where a file can be moved; it can, however, visualize these hierarchies.
Going back to your earlier example, the hierarchy tool would show that your MyClass.cs file is only used by a file in MyApp.csproj. (It would not explicitly say this, but you would be able to tell based on the hierarchy.)
You can use CodeLens in Visual Studio to check where the class is used, or right-click the class (or press Shift+F12) to "Find All References" and check where it is used. Given that you know your project structure, this gives you a quick overview of whether a class needs to move somewhere else. Alternatively, use code analysis tools or other tooling to check for redundancy, etc.
You cannot determine these cases automatically unless you fiddle with such tools; it's an edge case where only you know whether a class belongs in a particular place, and no tool can replace that judgment, unless you write your own custom code analysis tool that does that particular task.
Edit: Since the author seems so driven and determined to dig into this problem, I suggest taking a shot at T4 code generation, DSLs, or CodeDOM to see whether you can actually generate or analyze the code you want.
Or create custom code analysis rule sets, or check whether the ones already available suit your needs.
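If you do go down the custom-tool route, a rough Roslyn sketch along the following lines could list the projects that reference a given type. It assumes the Microsoft.CodeAnalysis.Workspaces.MSBuild and Microsoft.Build.Locator packages; the solution path, project name, and type name are placeholders, not something from the original post:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Build.Locator;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;
using Microsoft.CodeAnalysis.MSBuild;

class FindReferencingProjects
{
    static async Task Main()
    {
        MSBuildLocator.RegisterDefaults();                      // locate an installed MSBuild
        using var workspace = MSBuildWorkspace.Create();
        var solution = await workspace.OpenSolutionAsync(@"C:\src\MySolution.sln");

        // Resolve the type in question from the project that declares it.
        var libProject = solution.Projects.First(p => p.Name == "MyClassLib");
        var compilation = await libProject.GetCompilationAsync();
        var symbol = compilation.GetTypeByMetadataName("MyClassLib.MyClass");

        // Find every reference across the whole solution and list the referencing projects.
        var references = await SymbolFinder.FindReferencesAsync(symbol, solution);
        var projectNames = references
            .SelectMany(r => r.Locations)
            .Select(l => l.Document.Project.Name)
            .Distinct();

        foreach (var name in projectNames)
            Console.WriteLine(name);    // if only MyApp is listed, MyClass could move there
    }
}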
@MindSwipe is right. However, if you really need to do this then here's a hack:
1. Ensure your solution is under version control; this can help later.
2. Select the MyClassLib project and run a find and replace in all files of the current project, replacing public class with internal class.
3. Build your solution to get a bunch of errors.
4. Open the Error List pane and sort it by Description.
You should see error messages such as:
The type or namespace name 'MyClass' could not be found (are you missing a using directive or an assembly reference?).
If you see exactly 1 message per class then it means that class can be moved from the library project to the project that yielded this error. Otherwise it means it is shared by at least 2 projects; in this case you have to make it public again (undo the change made by the global replace for this class).
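To illustrate the effect of the replace (using the example solution above):

// In MyClassLib, before the replace:
public class MyClass { /* ... */ }

// After replacing "public class" with "internal class":
internal class MyClass { /* ... */ }

// Every project outside MyClassLib that uses MyClass now fails to build,
// and the Error List shows which project each error came from.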
My library has some methods whose return value should never be discarded. Leaking them is a very common mistake, even for me, the author. So I want the compiler to alert the programmer when this happens.
Such a value may be either stored or used as an argument to another method. It's not strictly required to use the stored value, but if the result is simply discarded it's 100% an error.
Is there any easy-to-set-up way to enforce this for my library's users?
var x = instance.Method(); // ok
field = instance.Method(); // ok
instance.OtherMethod(instance.Method()); // ok
MyMethod(instance.Method()); // ok, no need to check inside MyMethod
instance.Method(); // callvirt and pop - error!
I thought about writing an IL analyzer to run as a post-build event, but that feels so overcomplicated...
If you implement Code Analysis / FxCop, the rule CA1806 - Do not ignore method results would cover this case.
See: How to Enable / Disable Code Analysis for Managed Code
Basically, it's as simple as going to the project properties, opening the Code Analysis tab, checking a box, and selecting which rules to error/warn on.
Tick the "Enable Code Analysis on Build" checkbox, then use the rule set selector to get to a window where you can configure a rule set file. This can either be one you share between libraries or something more global; if you have a build server, make sure it's stored somewhere the build can get to, i.e. with the source, not on a local machine.
Here's a ruleset with the rule I mean:
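For illustration, a minimal .ruleset file that treats CA1806 as an error might look like this (the rule set name and description are arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Library rules" Description="Flag ignored method results" ToolsVersion="10.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <!-- CA1806: Do not ignore method results -->
    <Rule Id="CA1806" Action="Error" />
  </Rules>
</RuleSet>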
Nicolai's answer enables the rule set for all types, but I needed this check only for my library's types (I don't want to force my library's users to apply a rule set to all of their code).
Using out everywhere, as suggested in the comments, makes the library too hard to use.
Therefore I've chosen another approach.
In the finalizer I check whether any method was called (that's enough for me to confirm usage). If not, I throw an InvalidOperationException. The object-creation stack trace is optionally recorded and appended to the error message.
The user may call SetNotLeaked() to disable the check for a particular object and, recursively, all of its internal objects.
This is not a compile-time check but it will surely be noticed.
This is not a very elegant solution and it breaks some guidelines, but it does what I need, doesn't make the user wade through unnecessary warnings (the rule set solution), and doesn't affect code cleanliness (the out approach).
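Roughly, the approach looks like this (a simplified sketch, not the actual library code; Resource, Method, and the always-recorded stack trace are illustrative):

using System;

public class Resource
{
    private readonly string _creationStackTrace = Environment.StackTrace; // optional diagnostics
    private bool _used;       // set by any "real" method call
    private bool _suppressed; // set by SetNotLeaked()

    public void Method()
    {
        _used = true;
        GC.SuppressFinalize(this);   // once used, the leak check is no longer needed
        // ... actual work ...
    }

    public void SetNotLeaked()
    {
        _suppressed = true;
        GC.SuppressFinalize(this);
        // ... also propagate to internal objects recursively ...
    }

    ~Resource()
    {
        // Runs on the finalizer thread; the exception surfaces via AppDomain.UnhandledException.
        if (!_used && !_suppressed)
            throw new InvalidOperationException(
                "Result was discarded without being used. Created at: " + _creationStackTrace);
    }
}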
For tests I had to make a base class that sets up an AppDomain.UnhandledException handler in the SetUp method and checks in TearDown (after GC.Collect) whether any exception was thrown, because the finalizer runs on another thread and NUnit would otherwise show the test as passed.
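That test base class might look something like this (a sketch assuming NUnit; names are illustrative):

using System;
using NUnit.Framework;

public abstract class FinalizerCheckFixture
{
    private Exception _unhandled;

    [SetUp]
    public void RegisterHandler()
    {
        _unhandled = null;
        AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;
    }

    private void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        _unhandled = e.ExceptionObject as Exception;
    }

    [TearDown]
    public void CheckForLeakedObjects()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();   // let pending finalizers run before checking
        AppDomain.CurrentDomain.UnhandledException -= OnUnhandledException;
        if (_unhandled != null)
            Assert.Fail("A leaked object was detected: " + _unhandled.Message);
    }
}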
I'm working on a Visual Studio solution that currently has two projects in it (with more to come later). One project is a mature C#/Winforms application that I built last year (think of it as the prototype). The other one is a DLL that is going to do the same thing as the prototype but through a different application. I'd like to re-use code from the prototype (let's call the method in question SyncInvoices() ) in the DLL because the prototype code works perfectly b/c I've hammered the bugs out of it. The class that contains SyncInvoices is baked into the prototype application instead of being its own DLL.
I've added the class that contains SyncInvoices() to the DLL's project (as a linked file, since it already exists elsewhere in the solution). I can instantiate that class in the DLL project and call SyncInvoices() but the compiler throws errors related to GUI elements.
The problem is that SyncInvoices() has some thread-safe calls to the prototype application's GUI in it, basically used to pass messages/errors back to the interface.
The DLL doesn't have a GUI, so it doesn't need to run that code. It still builds the rest of the methods in that class, even though they aren't used. Is there a way I can tell the compiler to ignore those lines when building the DLL? I'd rather not maintain two sets of nearly identical code, especially when the two projects are part of the same solution.
I thought about using #define/#if blocks to partition off the code, but I'm not sure if C# works that way; most of the time I've seen those used to keep debug code from ending up in production. If it is possible to tell the app to include/exclude code through #if blocks, how do I set the values?
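For reference, conditional compilation in C# works roughly like this; GUI_BUILD is an arbitrary symbol you define per project (Project Properties > Build > Conditional compilation symbols, or with #define at the top of a file), so the prototype project could define it while the DLL project does not:

using System;

class ConditionalCompilationDemo
{
    static void Main()
    {
#if GUI_BUILD
        // Only compiled when the GUI_BUILD symbol is defined for this project/configuration.
        Console.WriteLine("GUI-specific code path compiled in.");
#else
        Console.WriteLine("GUI-specific code path compiled out.");
#endif
    }
}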
Should I just bite the bullet and make a copy of the method without the offending code in it?
Without more specifics it's hard to give the correct answer, but I'd say generally you'd handle this with events. The calls into the GUI that are happening in the prototype would typically become some form of event, which you would subscribe to in the prototype when you instantiate your new class.
Are there any particularly problematic cases you could give more specifics on?
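As a sketch of what that could look like (InvoiceSyncer and ProgressReported are illustrative names, not the actual prototype types):

using System;

public class InvoiceSyncer
{
    // Raised instead of touching any WinForms controls directly.
    public event Action<string> ProgressReported;

    public void SyncInvoices()
    {
        // ... shared sync logic ...
        ProgressReported?.Invoke("Synced 10 invoices");
    }
}

// In the WinForms prototype, subscribe and update the UI:
//   var syncer = new InvoiceSyncer();
//   syncer.ProgressReported += msg => statusLabel.Text = msg;
//   syncer.SyncInvoices();
//
// In the DLL there is simply no subscriber, so no GUI code is involved.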
I have the following code. The CustomControlHelper generates an instance of an object via reflection. At this stage we don't know what type of object we are dealing with. We do know it will be a CustomControl, but we don't know if it implements any particular interface or if it extends any other classes. The following code is trying to establish whether the loaded control implements the IRichAdminCustomControl interface.
Object obj = CustomControlHelper.GetControl(cc.Id, cc.ControlClass);
if (obj != null)
{
    bool isWhatWeWant = (obj is IRichAdminCustomControl);
    return isWhatWeWant;
}
That's all fine, but I've noticed that when I know I have an object that implements IRichAdminCustomControl, the expression evaluates to false.
Okay, this is where it gets really weird. If I inspect the code when debugging, the expression evaluates to true, but then if I immediately let the code run and inspect the result, it evaluates to false (I've attached an animated gif below to illustrate).
Has anyone come across weirdness like this before and if so, what on earth is causing it?
Incidentally, I believe the product I'm using uses Spring.NET to provide dependency injection in the CustomControlHelper.
If you are using Visual Studio 2010 SP1, I came across this bug:
Misreporting of variable values when debugging x64 code
There is a workaround on that page, posted by Microsoft:
You can either set all projects to compile to x86, or create an intermediate initialised variable declaration to ensure the debugger reports the correct value of the variable being examined.
Try this as a workaround:
bool isWhatWeWant = true;
isWhatWeWant &= (obj is IRichAdminCustomControl);
bool finalValue = isWhatWeWant; // this line should fix isWhatWeWant too in the debugger
return finalValue;
EDIT: it seems VS2012 also encounters similar problems under specific conditions.
Two possibilities come to mind. The first is that your interface name is generic enough that it could already exist in the namespace somewhere; try fully qualifying the interface in the is clause. The second possibility is that you might be running the code as part of a constructor, or being called indirectly by a constructor; any reflection-like work needs to be done after you are certain the application has fully loaded.
So I found the answer. It was because I had two copies of the dll in different locations. I had one copy in the bin of my back-end application and one in a shared external directory that gets dynamically loaded by the backend app.
I should explain; this application consists of two apps running in tandem, a frontend app and a backend app. Ordinarily, you place "Custom Controls" into your frontend app. These controls are then copied on application start to an external directory that is accessible to the backend app.
In this case, I had logic in my Custom Control library that needed to be accessed in the backend app - so I had to make a reference to it... which ended up with the backend app having two references to the same class. D'oh! And OF COURSE that's going to work when you're debugging.
Solution was to split my extra logic into its own project and reference THAT in the backend app.
I'm not going to "accept" my own answer here because, although it solved my specific problem, the solution is a bit TOO specific to the apps I'm working with and would be unlikely to help anyone else.
This happened to me once, and even though I never came to a conclusion as to why it was happening, I believed the PDB files being loaded with the debugging symbols were out of sync. By "cleaning" and then rebuilding the solution, this weird issue went away.
Is it possible to get a list or a specific instance of IDebugEngine2 (MSDN) from a Visual Studio Package without using IVsLoader approach (described here)?
Normally I would expect most services to be available through GetService, either directly or through some other service, but I cannot easily find anything that provides debug engines.
What are you trying to do with it? The debugger interfaces are extremely fragile. Often there are 2, 3, or maybe more possible ways to perform an action with the debugger interfaces, but the particular DE implementation only supports 1 of them. Debug engine implementers are not expecting any direct calls to their debug engine interfaces from anywhere except Visual Studio itself, and the risk of breaking debugger functionality if you attempt it lies somewhere between very high and guaranteed.
For example, here are some of the potential ways to tell a DE to launch and/or attach to a process:
IDebugEngineLaunch2.LaunchSuspended
IDebugPortEx2.LaunchSuspended
IDebugProgramEx2.Attach
IDebugProgramNode2.Attach_V7
IDebugProgramNodeAttach2.OnAttach
IDebugEngine2.Attach
IVsDebuggableProjectCfg.DebugLaunch
VsShellUtilities.LaunchDebugger
IVsDebugger2.LaunchDebugTargets
IVsDebugger2.LaunchDebugTargets2
Edit 1: In the case of my Java debugger, the debug engine is created by the session manager with the following stack:
1. My code calls IVsDebugger2.LaunchDebugTargets2.
2. The environment calls back to my implementation of IDebugProgramProvider2.WatchForProviderEvents.
3. After creating a new instance of IDebugProgram2 (a copy of the IDebugProcess2 obtained from the IDebugDefaultPort2 that VS passed to WatchForProviderEvents is passed to the IDebugProgram2 constructor), my code calls IDebugPortNotify2.AddProgramNode.
4. The environment calls back to the constructor of my debug engine.
I was recently investigating the same question, and eventually found that you can easily do this via ILocalRegistry3.CreateInstance!
Please see my post here for more info.
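For what it's worth, a rough sketch of that approach from inside a package might look like the following; the engine CLSID is a placeholder, error handling is omitted, and CreateInstance here is the method inherited from ILocalRegistry:

using System;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Debugger.Interop;
using Microsoft.VisualStudio.Shell.Interop;

static class DebugEngineFactory
{
    // serviceProvider would typically be the Package itself; engineClsid is the CLSID
    // of the debug engine you want to instantiate (placeholder, engine-specific).
    public static IDebugEngine2 CreateEngine(IServiceProvider serviceProvider, Guid engineClsid)
    {
        var localRegistry = (ILocalRegistry)serviceProvider.GetService(typeof(SLocalRegistry));

        Guid iid = typeof(IDebugEngine2).GUID;
        const uint CLSCTX_INPROC_SERVER = 1;   // COM CLSCTX flag

        IntPtr unknown;
        localRegistry.CreateInstance(engineClsid, null, ref iid, CLSCTX_INPROC_SERVER, out unknown);
        try
        {
            return (IDebugEngine2)Marshal.GetObjectForIUnknown(unknown);
        }
        finally
        {
            if (unknown != IntPtr.Zero)
                Marshal.Release(unknown);
        }
    }
}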