I have inherited a large C# codebase. It's very messy and I suspect it might have various thread-safety issues. One thing I want to do is find all static data in it. I know I could craft various regex searches etc., but it seems that an IL inspection tool could do a better job.
Does anybody know if such a tool exists? I have dotPeek and it doesn't do this, and I don't think Reflector does either. More generally, I'm after a tool for finding all sorts of info about IL. Maybe an FxCop rule could be written; I don't know enough about FxCop.
EDIT: the code has > 2000 static methods, so searching for 'static' is a pain, and I am worried that fancy regexes might miss something.
I also could absolutely write my own mini tool to do it; I just wondered if there is already a tool that does.
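For the record, the kind of mini tool I have in mind would be something along these lines, using plain reflection (the assembly path is just a command-line argument, and it only looks at fields):

    using System;
    using System.Reflection;

    class StaticFieldFinder
    {
        static void Main(string[] args)
        {
            // args[0]: path to the assembly to inspect.
            // Naive sketch: GetTypes() will throw if dependencies can't be resolved.
            var assembly = Assembly.LoadFrom(args[0]);

            const BindingFlags flags = BindingFlags.Static | BindingFlags.Public |
                                       BindingFlags.NonPublic | BindingFlags.DeclaredOnly;

            foreach (var type in assembly.GetTypes())
            {
                foreach (var field in type.GetFields(flags))
                {
                    // Skip compile-time constants; they aren't shared mutable state.
                    if (field.IsLiteral) continue;

                    Console.WriteLine("{0}.{1} : {2}{3}",
                        type.FullName,
                        field.Name,
                        field.FieldType.Name,
                        field.IsInitOnly ? " (readonly)" : "");
                }
            }
        }
    }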
It's pricey, but NDepend might suit your needs. I've used it before and have nothing but nice things to say about it.
It offers many kinds of code inspection and a domain-specific language (CQL) for querying code. If it's not already built-in, I'm certain you could write a simple query to find static methods.
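From memory (so double-check the exact syntax against the NDepend documentation), a query for mutable static fields in the LINQ-based flavour of the query language (CQLinq) would look roughly like:

    // Syntax from memory: all static fields that are neither constants nor readonly.
    from f in Application.Fields
    where f.IsStatic && !f.IsLiteral && !f.IsInitOnly
    select f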
Is there any tool or way I can check how I can optimize my code and remove redundancy? I am using VS 2010.
Thanks.
I don't know about removing redundancy, but ReSharper has some nice code analysis features that can help to identify unused code blocks. It can also make suggestions for cleaner code, but it's not always 100% accurate.
Such tools, even if they existed, wouldn't be reliable. The best would be to perform a code review by a good developer or architect.
No tool can replace experience and expertise. There are a number of productivity tools that can help, a popular one being ReSharper for example, but it's not going to fix everything for you. At some point you just have to rely on your abilities and the abilities of your team members. Learning how to code well takes time.
It often helps to step back and look at your code with the mindset of certain design principles. S.O.L.I.D. can be a great place to start. Some other questions you can ask yourself are:
Are your classes and types properly encapsulated?
Is your code test-driven or behavior-driven in any way?
Do your tests define discrete units of behavior, or are they just tailored to the implementation that's being tested?
To specifically address redundancy, quite simply, do you have copied/pasted code doing the same thing in two places?
A profiler will give you a good idea of where your application spends most of its time. Knowing what and how to optimize, though, requires experience as well as knowledge of both the code base in general and the problem domain.
What you want is Code Coverage tools. These keep a record of which lines of code are executing. In order for this to be effective, a complete test suite, or manual test run, is required. This will show up the lines of code that are never used, and will help you make decisions.
Static analysis can also help you with code paths and give you information about how and where your code is called.
A couple of good questions about code coverage:
What can I use for good quality Code Coverage for C#/.NET?
C# Code Coverage metrics
Also look at Microsoft's FxCop for static analysis:
http://msdn.microsoft.com/en-us/library/bb429476(VS.80).aspx
There is http://clonedetectivevs.codeplex.com/, which is a VS plugin. It uses http://conqat.cs.tum.edu/ under the hood. I've not really used it, but it does what you ask. Couple that with code reviews and it might help.
There is a (commercial, 249€) solution checking for duplicate code, even in large projects.
http://www.solidsourceit.com/products/SolidSDD-code-duplication-cloning-analysis.html
For that purpose we use the built-in duplicates finder in our TeamCity build server.
Question:
I want code for: syntax highlighting (of programming languages)
Language: C# or assembly x86 (preferably C#)
Platform: Windows
Qualifications: most efficient implementation possible / most professional / the way that big corporations like Microsoft do it
Rephrased: How do I implement syntax highlighting in C# for Windows in the most efficient way presently known?
Elaboration (feel free to skip - not needed to answer question :)):
I don't want just any way of implementing it - I've already seen several.
What I'd like to know is how Microsoft does it so well on Visual Studio (whichever version).
People keep trying to reinvent the wheel when it comes to syntax highlighting. I don't understand why.
Is this considered a very hard problem? I've seen implementations that only highlight what's currently showing on the screen, and I think that's the way to go... (one used some clever API to know which lines of a textbox were actually visible).
I've also seen implementations using RichTextBox, and I think that's not the way to go (maybe I'm wrong here) - something like subclassing the routine that draws text on the regular textbox and changing its brushes might be better (maybe I've seen that somewhere; I doubt I'd think of it myself).
Also, I've heard that some people implement it with an AST, just like a compiler would be coded (the lexer part, I think?) - I'd hope that's overkill; I don't see that as being efficient (uneducated guess).
If it's indeed a hard problem, then how do the big corps always get it right? I've never heard of a way to break the syntax highlighting in Visual Studio, for example.
But any other tool that implements it does so poorly, or worse than the big guys.
What's the official "this is the best way and any other way is less efficient" way of doing it?
I really don't have any evidence that Microsoft's way is better, but seeing that they probably know more about the Windows API than anybody else, I'd guess that their way of implementing it is the best (I would love to be wrong - imagine being able to say that my implementation of syntax highlighting is better than MS's!)
Sorry for the disjointed elaboration.
Also I apologize in advance for any faux-pas - this is my first question.
I don't think there is a "this is the best way and any other way is less efficient" way to do it. In reality, I don't think efficiency is the major problem; complexity is.
A good syntax highlighter is based on a good parser. As long as you can parse the code, you can highlight every part of it in any way you like. But what happens when the code is not well-formed? A lot of syntax highlighters just highlight keywords and a few block structures to get around this problem. By doing this, they can use simple regular expressions instead of a full-fledged, syntax-error-tolerant parser (which is what Visual Studio has).
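To illustrate the lightweight end of that spectrum, here is a minimal sketch of regex-based keyword highlighting (the keyword list and the [[...]] markers are placeholders; a real highlighter would map the matches onto colored text runs in the editor control):

    using System;
    using System.Text.RegularExpressions;

    class KeywordHighlighter
    {
        // A tiny, incomplete keyword list, purely illustrative.
        static readonly Regex Keywords = new Regex(
            @"\b(class|static|void|int|string|if|else|return)\b",
            RegexOptions.Compiled);

        // Wraps each keyword in markers instead of coloring it, just to show the matching.
        static string Highlight(string code)
        {
            return Keywords.Replace(code, "[[$1]]");
        }

        static void Main()
        {
            Console.WriteLine(Highlight("static int Add(int a, int b) { return a + b; }"));
            // -> [[static]] [[int]] Add([[int]] a, [[int]] b) { [[return]] a + b; }
        }
    }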
The best way is probably to reuse something existing, such as ScintillaNET.
As with anything in code... there rarely is a "best" way. There are multiple ways of doing things, and each has benefits and drawbacks.
That said, some form of the Interpreter pattern is probably the most common way. According to the GoF book:
The Interpreter pattern is widely used in compilers implemented with object-oriented languages, as the Smalltalk compilers are. SPECTalk uses the pattern to interpret descriptions of input file formats. The QOCA constraint-solving toolkit uses it to evaluate constraints.
It also goes on to talk about its limitations in the applicability section:
the grammar is simple. For complex grammars, the class hierarchy for the grammar becomes large and unmanageable. Tools such as parser generators are a better alternative in such cases.
efficiency is not a critical concern. The most efficient interpreters are usually not implemented by interpreting parse trees directly but by first translating them into another form. For example, regular expressions are often transformed into state machines. But even then, the translator can be implemented by the Interpreter pattern, so the pattern is still applicable.
Understanding this, you should now see why it's better to pre-compile a reusable regex before performing many matches with it. If you don't, it has to do both steps (translation and interpretation) every time, rather than building the state machine once and applying it efficiently many times over.
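In .NET terms, that is roughly the difference between rebuilding a Regex on every use and constructing it once with RegexOptions.Compiled; the pattern below is just an example:

    using System;
    using System.Text.RegularExpressions;

    class RegexReuse
    {
        // Built once: RegexOptions.Compiled translates the pattern into a reusable
        // matcher up front, so later matches skip the re-translation step.
        static readonly Regex StaticKeyword = new Regex(@"\bstatic\b", RegexOptions.Compiled);

        static void Main()
        {
            string[] lines = { "public static int Counter;", "void Run() { }" };

            foreach (var line in lines)
            {
                // Wasteful: constructing a new Regex inside the loop repeats the
                // pattern-translation work on every iteration.
                bool slow = new Regex(@"\bstatic\b").IsMatch(line);

                // Reuses the pre-built matcher.
                bool fast = StaticKeyword.IsMatch(line);

                Console.WriteLine("{0}: {1} / {2}", line, slow, fast);
            }
        }
    }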
Specifically for the type of interpretation you are describing, Microsoft exposes the Microsoft.VisualStudio namespace and all of its powerful features as part of the Visual Studio SDK. You can also look at System.CodeDom for dynamic code generation and compilation.
If you are able to embed a web page you could look at Prism.js
Are there any tools that can find any private functions without any references? (Redundant functions)
The reason is that a function may have been created and called from a couple of areas, but as the project expands and grows, those calls may have been removed and swapped for a better alternative, while the method itself remains. I was wondering if there are any handy tools that would look through the code, spot private functions, check whether they have any references, and if not, inform the user of the situation.
It wouldn't be too tricky to create one myself, but I was wondering if there were any accessible apps that could do this with the files containing the code?
My code is in c#, but I can imagine that this question covers a variety of coding languages.
ReSharper does the job.
If your code has Unit Tests (it does, right? ;-) then running NCover will allow you to identify methods that aren't being called from anywhere. If you haven't got any unit tests then it's a good excuse to use for starting to build them.
In the general case, I'd suspect that code coverage tools are a good fit in most languages.
Eclipse does it automatically for Java; I'm not sure if you can have the same thing for C#.
Another question might even be, "Does the C# compiler remove private methods that aren't actually used?"
My guess would be no, but you never know!
EDIT:
Actually, I think it might be hard to tell where a method is used. It might be private, but it can still be used as an event handler. Not impossible to check, but I'm sure that aspect makes it a little more difficult.
Is there a quick way to detect classes in my application that are never used? I have just taken over a project and I am trying to do some cleanup.
I do have ReSharper if that helps.
I don't recommend deleting old code on a new-to-you project. That's really asking for trouble. In the best case, it might tidy things up for you, but isn't likely to help the compiler or your customer much. In all but the best case, something will break.
That said, I realize it doesn't really answer your question. For that, I point you to this related question:
Is there a custom FxCop rule that will detect unused PUBLIC methods?
NDepend
Resharper 4.5 (4.0 merely detects unused private members)
Build your own code quality unit tests with Mono.Cecil (some samples can be found in the Lokad.Quality open source project); a rough sketch of the idea is shown below.
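For the Mono.Cecil route, something like this would list private methods that nothing in the same assembly calls (the assembly path is a placeholder; it only scans top-level types and will not see anything invoked via reflection, designer-wired events, and so on):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Mono.Cecil;

    class DeadPrivateMethodFinder
    {
        static void Main(string[] args)
        {
            // args[0]: path to the assembly to inspect.
            var assembly = AssemblyDefinition.ReadAssembly(args[0]);

            // Top-level types only, for brevity; nested types are skipped.
            var allMethods = assembly.MainModule.Types
                .SelectMany(t => t.Methods)
                .ToList();

            // Record every method referenced from some method body in this assembly.
            var referenced = new HashSet<string>();
            foreach (var method in allMethods.Where(m => m.HasBody))
            {
                foreach (var instruction in method.Body.Instructions)
                {
                    var callee = instruction.Operand as MethodReference;
                    if (callee != null)
                        referenced.Add(callee.FullName);
                }
            }

            // Report private methods that are never called.
            foreach (var candidate in allMethods.Where(m => m.IsPrivate && !m.IsConstructor))
                if (!referenced.Contains(candidate.FullName))
                    Console.WriteLine(candidate.FullName);
        }
    }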
Review the code carefully before you do this. Check for any uses of reflection, as classes can be loaded and methods can be dynamically invoked at runtime without knowing at compile time which ones they are.
It seems that this is one of the proposed features for the next version of ReSharper. That doesn't help yet, but hopefully the EAP is just around the corner.
Be careful with this - it is possible that you may remove things that are not needed in the immediate vicinity of the code you are working on but you run the risk of deleting interface members that other applications may rely on without your knowledge.
Recently I was given the task of exploring a C# solution I have never seen before and giving suggestions on refactoring it. I think I will use NDepend (for the first time ever) to see the overall picture, and also to check a lot of code metrics to figure out what could be refactored. NDepend is pretty good at showing the structure of a project, I think.
My question is a more general one: what do you think is the best way to explore code that you are seeing for the first time and need to understand structurally?
(Unfortunately there is no logical design documentation and the code is poorly commented.)
Code discovery is much easier with NDepend. The tool provides a top-down view of dependencies and layering between assemblies, namespaces and classes. This is done with graphs and dependency matrices generated from the code.
You'll also get dependencies on tier (third-party) code assemblies, which is really useful for knowing which part of the code does what.
Also, graphical representations of volume metrics such as the number of lines of code help a lot to get a clear idea of where the effort was spent in the code.
I frequently use Reflector to study third-party assemblies as well as .NET assemblies. Not so much for a bird's-eye view of the relations between classes, but more for close-up detail of exactly what is going on.