How to compare FxCop warnings against Veracode results - C#

I have to make a choice between Veracode and FxCop for application security testing.
Obviously Veracode comes with a price and FxCop is free.
But to judge the effectiveness of FxCop, I need to compare its results with the free analysis results provided by Veracode. Both tests are run against the same DLL.
How will I know which FxCop warning corresponds to a cross-site scripting error or a CRLF injection?
Is there any guide available? Any way to tell whether I am looking at the same errors in both?
Any help is appreciated.

FxCop is not specifically geared towards security testing. Though it has a couple of rules that check for specific security issues, it's far less advanced than Veracode, Coverity or Fortify in this respect. It's not meant to replace them on this front; it's meant to provide basic checks.
Code Analysis also checks other aspects like localization and globalization issues, memory leaks and other generally bad things that have nothing to do with security.
Your baseline solution should at least use Code Analysis inside Visual Studio. Whether you want to use additional security checks from 3rd-party vendors is up to you. There are a number of (open source) rule sets available that extend Code Analysis with additional security rules, though these are not standard rules that ship with Visual Studio (and haven't been updated in ages).
To see which types of check are built into Code Analysis (FxCop), look at the documentation. You'll see that there are no cross-site scripting warnings present, which makes sense, as you're likely to make such mistakes in HTML and JavaScript rather than primarily in C#. Code Analysis and FxCop target issues in your managed .NET code, not in your client-side scripts or HTML.
Other tools like JSHint/JSLint and tools recommended by the OWASP group may provide free alternatives.
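To make the comparison concrete, here is a hedged, hypothetical ASP.NET Web Forms snippet (page, parameter and header names invented) showing the kind of dataflow finding a scanner like Veracode reports and that FxCop/Code Analysis has no built-in rule for:

```csharp
using System;
using System.Web;

// Hypothetical Web Forms page, only to illustrate the patterns.
public partial class SearchPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string term = Request.QueryString["q"]; // untrusted input

        // A dataflow-based scanner would flag this as reflected XSS:
        // untrusted input flows into the response without encoding.
        Response.Write("You searched for: " + term);

        // Encoding the value before output is the usual fix.
        Response.Write("You searched for: " + HttpUtility.HtmlEncode(term));

        // Writing untrusted input into a header is the typical CRLF
        // injection pattern, since "\r\n" in the value can split headers.
        Response.AddHeader("X-Search-Term", term);
    }
}
```

If findings like these show up in the Veracode report, don't expect a one-to-one FxCop counterpart; the closest built-in FxCop material is the general Security rules category, which is mostly about API usage patterns rather than tainted dataflows.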

Related

Track Data Input Through Application Code and System Libraries

I am a security dude, and I have done extensive research on this one, and at this point I am looking for guidance on where to go next.
Also, sorry for the long post, I bolded the important parts.
What I am trying to do at a high level is simple:
I am trying to input some data into a program, and "follow" this data, and track how it's processed, and where it ends up.
For example, if I input my login credentials to FileZilla, I want to track every memory reference that accesses them, initiate traces to follow where that data went and which libraries it was sent to, and get bonus points if I can even correlate it down to the network packet.
Right now I am focusing on the Windows platform, and I think my main question comes down to this:
Are there any good APIs to remote-control a debugger that understands Windows Forms and system libraries?
Here are the key attributes I have found so far:
The name of this analysis technique is "Dynamic Taint Analysis"
It's going to require a debugger or a profiler
Inspect.exe is a useful tool to find Windows UI elements that take input
The Windows automation framework in general may be useful
Automating debuggers seems to be a pain. The IDebugClient interface allows for richer data, but debuggers like IDA Pro or even Cheat Engine have better memory analysis utilities
I am going to need to place memory breakpoints and track the references and registers that are associated with the input.
Here are a collection of tools I have tried:
I have played with all the following tools: WinDBG (awesome tool), IDA Pro, CheatEngine, x64dbg, vdb (python debugger), Intel's PIN, Valgrind, etc...
Next, I tried a few dynamic taint analysis tools, but they don't support detecting .NET components or offer the other conveniences that the Windows debugging framework provides natively through utilities like Inspect.exe:
https://github.com/wmkhoo/taintgrind
http://bitblaze.cs.berkeley.edu/temu.html
I then tried writing my own C# program using the IDebugClient interface, but it's poorly documented, and the best project I could find is from this fellow and is 3 years old:
C# app to act like WINDBG's "step into" feature
I am willing to contribute code to an existing project that fits this use case, but at this point I don't even know where to start.
I feel like dynamic program analysis and debugging tools as a whole could use some love... I feel kind of stuck, and don't know where to move from here. There are so many different tools and approaches to solving this problem, and all of them are lacking in some manner or another.
Anyway, I appreciate any direction or guidance. If you made it this far thanks!!
-Dave
If you insist on doing this at runtime, Valgrind or Pin might be your best bet. As I understand it (having never used them), you can configure these tools to interpret each machine instruction in an arbitrary way. You want to trace dataflows through machine instructions to track tainted data (reads of such data, followed by writes to registers or condition code bits). A complication will likely be tracing the origin of an offending instruction back to a program element (DLL? Link module? Named subroutine?) so that you can complain appropriately.
This is a task you might succeed at as an individual, in terms of effort.
This should work for applications.
I suspect one of your problems will be tracing where the data goes in the OS. That's a lot harder, although the same principle applies; your difficulty will be getting the OS supplier to let you track instructions executed in the OS.
Doing this as runtime analysis has the downside that if a malicious application doesn't do anything bad on your particular execution, you won't find any problems. That's the classic shortcoming of dynamic analysis.
You could consider tracking the data at the source code level using classic compiler techniques. This requires that you have access to all the source code that might be involved (which is actually really hard if your application depends on a wide variety of libraries), that you have tools that can parse and track dataflows through source modules, and that these tools talk to each other for different languages (assembler, C, Java, SQL, HTML, even CSS...).
As static analysis, this has the chance of detecting an undesired dataflow no matter which execution occurs. Turing limitations mean that you likely cannot detect all such issues. That's the shortcoming of static analysis.
Building your own tools, or even integrating individual ones, to do this is likely outside what you can reasonably do as an individual. You'll need to find a uniform framework for building such tools. [Check my bio for one.]
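If you want to experiment with the source-level idea cheaply before committing to heavy tooling, one lightweight variant is to push the taint into the type system. This is not the compiler-based dataflow analysis described above - just a minimal C# sketch (all names invented) of how taint propagation looks when the type checker does the bookkeeping:

```csharp
using System;

// Wrap untrusted values in Tainted<T> so the compiler forces every consumer
// to either keep the taint or go through an explicit sanitization step.
public readonly struct Tainted<T>
{
    private readonly T value;

    public Tainted(T value) => this.value = value;

    // The only way out is an explicit, auditable sanitization call.
    public T Sanitize(Func<T, T> sanitizer) => sanitizer(value);

    // Taint propagates through transformations.
    public Tainted<TResult> Map<TResult>(Func<T, TResult> f) =>
        new Tainted<TResult>(f(value));
}

public static class Demo
{
    public static void Main()
    {
        var password = new Tainted<string>(Console.ReadLine());

        // Passing 'password' where a plain string is expected is a compile
        // error; we have to go through Map/Sanitize, which is visible in review.
        var masked = password.Map(p => new string('*', p?.Length ?? 0));
        Console.WriteLine(masked.Sanitize(s => s));
    }
}
```

The compiler then does the "tracking" inside your own code, but it stops at the boundary of libraries you don't control, which is exactly the limitation described above.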

Does FxCop in C# cover MISRA?

For programs written in .NET/C#, does FxCop (and its Roslyn equivalents) cover the relevant rules in MISRA? Has anybody gone through and ticked them off?
Or is there a compliance standard for .NET similar to MISRA?
No. By default, FxCop (now Code Analysis in Visual Studio) only watches for spelling/casing corrections and Microsoft's own guidelines. You are free to come up with your own rules, of course.
Note that most of the static-analysis tools only look at the compiled CIL - so you won't be able to watch for safety-critical style violations (such as non-braced if and unintentional switch-case fallthroughs).
Given that MISRA is specifically for C and C++ (and not C#/CIL) you won't find it under FxCop. Though I imagine if you did implement MISRA for C# you would make a tidy bit of money from it - I'd pay for it!
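That said, nothing stops you from writing individual C#-level rules yourself as Roslyn analyzers, which see the syntax tree rather than the compiled CIL. A minimal, hedged sketch (diagnostic ID and names invented) that flags the non-braced if case mentioned above:

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class BracesRequiredAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "MY0001",
        title: "If statement body must be braced",
        messageFormat: "Wrap the body of this 'if' statement in braces",
        category: "Style",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(AnalyzeIf, SyntaxKind.IfStatement);
    }

    private static void AnalyzeIf(SyntaxNodeAnalysisContext context)
    {
        var ifStatement = (IfStatementSyntax)context.Node;

        // Report when the 'if' body is a bare statement rather than a block.
        if (!(ifStatement.Statement is BlockSyntax))
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, ifStatement.IfKeyword.GetLocation()));
        }
    }
}
```

Packaged as an analyzer NuGet package, something like this runs both in the IDE and on the build server. It's a long way from full MISRA coverage, but it shows the mechanism.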
After some googling I did find http://www.sonarlint.org/visualstudio/rules/index.html#sonarLintVersion=2.0.0&ruleId=S2291&tags=misra
This tool looks quite interesting, taking the Roslyn analyzers to the next level. I will investigate it further.

TFS Check-in Policy - Best Practices

Are there any best practices for enforcing a TFS check-in policy? Are there any good guides on how to implement various types of policies as well as their pros and cons?
Things I'd particularly like to do are ensure that code compiles (note that compilation can take up to five minutes) and that obvious bits of the coding standards are followed (summary tags must exist, naming conventions are followed, etc).
TFS 2010 (and 2008, but I have not used 2008) allows a gated check-in, which forces a build before the code is checked in.
Activating this is a (reasonably) straightforward process; see, for example, these guides:
http://blogs.msdn.com/b/patcarna/archive/2009/06/29/an-introduction-to-gated-check-in.aspx
http://intovsts.net/2010/04/18/the-gated-check-in-build-in-tfs2010/
There is a prerequisite step required to make all this happen: a TFS build server setup. That can be a complex process depending on infrastructure, etc. Here is an MSDN guide:
http://msdn.microsoft.com/en-us/library/ms181712.aspx
The pros are that the code in the repository can be reasonably stable. For a large team this can save a LOT of time.
There are a lot of cons worth considering for this benefit. Firstly, there is the installation and maintenance of an extra build server. This includes disk space allocation, patches, etc.
Secondly, there is the extra time required for each person to check in a file. Waiting for a build to succeed before the code is checked in (and available for others to get) can take a while.
Thirdly, when (not if) the build server is not available, a contingency plan needs to be in place to allow developers to continue their work.
There is a lot of extra process required to reap the rewards of gated check-ins. However, if this process is governed properly, it can lead to a much smoother development cycle.
Although we do not use gated check-ins, we do use a TFS build server for continuous integration with scheduled builds. This minimises the minute-to-minute dependency on the build server while ensuring (with reasonable effectiveness) that when a build has broken, we are notified and can rectify it ASAP. This method empowers the developers to understand integrating code and how to avoid breaking the code in the repository.
I think the premise of this question is somewhat wrong. A good question of this nature should be something along the lines of: my team is having a problem with code stability, conflicting change-sets, developers not running tests, poor coverage, or other metrics reported to management, and we'd like to use TFS to help solve that issue (or those issues). Yes, I do realize that the OP stated that ensuring compilation is considered a goal, but that comes part and parcel with having an automated build server.
I would question any feature that adds friction to a developer's work cycle without a clearly articulated purpose. Although I've never used them, gated check-ins sound like a feature in search of a problem. If the stability of your codebase is impacting developer productivity and you can't fix it by changing the componentization of your software, the dev team structure, or the branching strategy, then I guess it's a solution. I've worked in a large shop on a global project where ClearCase was the mandated tool, and I've encountered that kind of corporate-induced failure, but the team I worked on didn't go there quietly or willingly.
The ideal policy is not to have one. Let developers work uninhibited and with as little friction as possible. Code reviews do much more than a set of rules enforced by a soulless server ever will. A team that supports testing and is properly structured will do more for stability than a gated check-in will ever achieve. Tools that support branching and local check-ins, by making it easier for developers to try new things without fear of breaking the build, help mitigate the kind of technical debt that kills large projects.
You should look at chapter 8 of "Patterns & practices: Team Development with Visual Studio Team Foundation Server"
http://tfsguide.codeplex.com/
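If you do end up enforcing the lighter-weight checks in the question (summary tags, naming conventions) at check-in time, the classic extension point is a custom check-in policy derived from PolicyBase in the TFS client object model (Microsoft.TeamFoundation.VersionControl.Client). This is a rough, hedged sketch from memory - member names vary slightly between TFS versions, and the standards check itself is just a placeholder for your own rule:

```csharp
using System;
using System.Linq;
using Microsoft.TeamFoundation.VersionControl.Client;

[Serializable]
public class CodingStandardsPolicy : PolicyBase
{
    public override string Description =>
        "Blocks check-ins containing files that fail our basic standards checks.";

    public override string Type => "Coding standards policy";

    public override string TypeDescription =>
        "Evaluates pending changes against simple, fast coding-standard rules.";

    // Invoked when the policy is added in Team Project settings; no UI here.
    public override bool Edit(IPolicyEditArgs policyEditArgs) => true;

    public override PolicyFailure[] Evaluate()
    {
        // PendingCheckin is supplied by the base class at evaluation time.
        var failures =
            from change in PendingCheckin.PendingChanges.CheckedPendingChanges
            where change.LocalItem != null
               && change.LocalItem.EndsWith(".cs", StringComparison.OrdinalIgnoreCase)
               && FailsLocalRules(change.LocalItem)
            select new PolicyFailure(
                "Coding standards check failed for " + change.LocalItem, this);

        return failures.ToArray();
    }

    // Placeholder: plug in whatever cheap checks you want to run client-side
    // (missing summary tags, naming conventions, ...). Keep it fast.
    private static bool FailsLocalRules(string path) => false;
}
```

Keep whatever runs in Evaluate() fast; anything expensive (like the five-minute compile mentioned in the question) belongs in a gated or CI build rather than in a client-side policy. Also bear in mind the policy assembly has to be deployed to every developer's machine, which is another maintenance cost to weigh against the benefit.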

Visual Studio Code Analysis - Does Microsoft follow it themselves?

Did a quick search but could not find anything about this.
I guess all of you know that Visual Studio Code Analysis is quite nitpicking and gives warnings about a lot of things. Does anybody know how well Microsoft follows this themselves? That is, if I were to run code analysis on their assemblies, would the warnings be none or very few (perhaps suppressed with a justification)?
Most of the things that code analysis (or FXCop) check are closely based on the ".NET Framework Library Design Guidelines" (either the book, or on MSDN).
However those guidelines (especially in the book) have caveats, not all apply in all circumstances.
There are plenty of indications that MS do use these tools, but I assume they do have reasons to not apply all the rules all the time, as any other project does.
There are two core tools used widely at Microsoft for Code Analysis: FXCop for managed code and PreFast for native C++.
Historically, while not every team has thoroughly enforced the use of CA when building their products, there's been a significant upswing over the last 3-4 years in particular in the number of teams that now enforce pretty stringent CA requirements on their feature teams and on the product as a whole.
For example, in Vista, the Windows team essentially took 3 months off product development and SAL-annotated the vast majority of their key method and function declarations. In Win7, they mandated that all new code had to comply with a set of requirements for SAL-annotating key scenarios (primarily to reduce the likelihood of buffer overruns). In Win8 they're going further still and are incorporating new SAL annotations for a number of key scenarios. Combined with improved compilers and tools like PreFast (now built into VS 2010 Pro and up), they (and you) can find and eliminate potential issues before the product is released.
Note that the warnings issued by CA (whichever CA tool you choose to use) will always require overrides - sometimes there's a really good reason why the code has to do what it does. But you should only override if you're ABSOLUTELY sure it's necessary and appropriate. NEVER turn off a warning because you don't understand it, and never turn off a warning just because you can't be bothered to fix it.
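On the "suppress with a justification" point: the standard mechanism is SuppressMessageAttribute, scoped as narrowly as possible and always with a reason recorded. A small, hedged sketch (the class and the rule chosen here are only an example):

```csharp
using System.Diagnostics.CodeAnalysis;

public static class ReportWriter
{
    // Suppress one specific rule, on one specific member, and say why.
    // CA1031 "Do not catch general exception types" (Microsoft.Design) is
    // used here purely as an example ID. The attribute is only compiled
    // into the assembly when the CODE_ANALYSIS symbol is defined.
    [SuppressMessage("Microsoft.Design",
        "CA1031:DoNotCatchGeneralExceptionTypes",
        Justification = "Report generation must never take the host process down; failures are logged.")]
    public static void TryWrite(string path)
    {
        try
        {
            System.IO.File.WriteAllText(path, "report");
        }
        catch (System.Exception ex)
        {
            System.Console.Error.WriteLine(ex);
        }
    }
}
```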

Static Code Analysis - Which ones to turn on first?

We're using VS2008 with the built in static code analysis rule set.
We've got a relatively large C# solution (150+ projects) and while some of the projects (< 20) are using static code analysis religiously, most are not. We want to start enforcing static code analysis on all projects, but enabling all rules would create a massive distraction to our current projects. Which of the many static code analysis rules that are available should we turn on first? Which rules have the biggest bang for the buck? If you could give me your prioritized top 20, I'd greatly appreciate it.
Thanks in advance,
--Ed.S.
The very first rules you should activate for a project are those for which you don't yet have any violations in that project. This will allow you to avoid introducing new problems without costing you any additional clean-up effort.
As for the rest, given that you're already using code analysis on other projects, your best input for which rules are most likely to be broken with serious consequences is probably the developers who work on those projects. If you don't have enough overlap between projects to get meaningful feedback from developers, you might want to consider starting with the rules that are included in the Microsoft Minimum Recommended Rules rule set in Visual Studio 2010.
If you are planning on actually cleaning up existing violations in any given project, you may want to consider using FxCop instead of VS Code Analysis until the clean-up is complete. This would allow you to activate rules immediately while keeping "for clean-up" exclusions of existing violations outside your source code.
Given that the Studio ones are similar to FxCop's rules, I can tell you which ones I'd turn on last.
If internationalization is not on the horizon, turn off Globalization Rules.
Turn off Performance Rules initially. Optimize when you need to.
Fit the others to your team and your projects. Turn off individual rules that aren't applicable. In particular, Naming Rules may need to be adjusted.
EDIT: The most important thing is to reduce noise. If every project has 200 warnings and stays that way for months, everyone will ignore them. Turn on the rules that matter to your team, clean up the code to get 100% passing (or suppress the exceptions - and there will be exceptions; these are guidelines), then enforce keeping the code clean.
If you are going to localize your project, or it is going to be used in different countries, then definitely enable the Globalization rules. They will find all calls to the various Format/Parse functions that do not specify a CultureInfo. Bugs involving an unspecified CultureInfo are hard to find in testing, but they will really bite you when your French client asks why your program doesn't work (or crashes) on numbers with "," as the decimal separator.
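As a minimal, hedged illustration of the class of bug those rules (CA1304/CA1305 in the Globalization category) catch - the helper class here is invented:

```csharp
using System.Globalization;

static class PriceParser
{
    // decimal.Parse(raw) alone uses the current thread culture: "1.50" is
    // fine on an en-US machine, but on a machine whose culture uses ',' as
    // the decimal separator it either throws or parses to a different value.
    // CA1305 flags the missing IFormatProvider.
    public static decimal ParseInvariant(string raw) =>
        decimal.Parse(raw, CultureInfo.InvariantCulture);

    // Same in the other direction: be explicit about the culture when
    // formatting values that are stored or sent over the wire.
    public static string FormatInvariant(decimal value) =>
        value.ToString(CultureInfo.InvariantCulture);
}
```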
In my experience code analysis warnings of all types show 'hidden' bugs or flaws in your code. Fixing these can solve some real problems. I have not found a list of warnings that I would like to disable.
Instead, I would turn them on one project at a time and fix all the warnings in that project before moving to the next.
If you want to turn things off, I would consider not checking the Naming rules (unless you are shipping a library, APIs or other externally exposed methods) and the Globalization rules (unless your applications make active use of globalization). It depends a bit on your situation which makes sense.
I somewhat agree with Jeroen Huinink's answer.
I would turn on all the rules that you think a project should follow and fix them as soon as possible. You don't have to fix them all now, but as you go through and fix a defect or refactor a method in a module, you can always clean up the issues found by static analysis in that method or module. New code should adhere to your rules and existing code should be transformed into adherence as quickly as possible, but you don't need to drop everything to make that happen.
Your development team can also look at the issues for a project and prioritize them, perhaps filing defects in your issue tracking system for the most critical problems so that they are addressed quickly and by the appropriate developer.
