We have a project that may generate a lot of exceptions (because it interacts with a protocol that is widely used but not widely respected).
When it was implemented, all the "sensitive" methods/constructors were marked with DebuggerStepThrough.
Since VS2015, DebuggerStepThrough is ignored (we now have VS 2017). I know that we can go to the Exception Settings of Visual Studio, specify which kinds of exceptions we want to break on, and add conditions on project types, but this has 2 issues:
It's not persisted with the project when we push it to our Git server.
We often change those exception settings to track down a very particular issue that should not be interrupted, then we reset them, so it would mean that we regularly lose those changes.
So, is there some compilation setting, or any other way, to make sure that the debugger doesn't break on those exceptions, and that we can share this across the team (i.e. commit it to our Git server)?
NB: This question is NOT about whether we should handle those exceptions or not.
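For context, the pattern being described looks roughly like this (a minimal sketch; the class, method, and exception type here are illustrative, not taken from the actual project):

using System;
using System.Diagnostics;

public class ProtocolClient
{
    // Marked so the debugger is not supposed to break inside this method
    // when the (expected and handled) exception is thrown.
    [DebuggerStepThrough]
    public string TryReadFrame(string rawFrame)
    {
        try
        {
            return ParseFrame(rawFrame);
        }
        catch (FormatException)
        {
            return null; // malformed frames are expected from this protocol
        }
    }

    private string ParseFrame(string rawFrame)
    {
        if (string.IsNullOrEmpty(rawFrame))
            throw new FormatException("Malformed frame.");
        return rawFrame.Trim();
    }
}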
Would migrating that code to a separate assembly help? I'm not sure, but you might be able to reference it without PDBs, and it should fail silently if you have proper catches in place.
I hope this isn't a dumb question, but I am writing a .NET MVC application and I was going to make heavy use of System.Diagnostics during development. I was then just going to leave all of those calls (mostly Trace.WriteLine) in the application once it goes to production.
Is there any reason (performance, security, etc.) not to leave System.Diagnostics calls in an application? Maybe certain parts of that library? I had heard that with some libraries (such as ones that use reflection) you have to be careful how you use them, because you could potentially expose your application to certain types of attacks.
Any thoughts on what I should beware of would be helpful (if there is anything).
All calls use CPU and memory.
You don't need to remove these calls from your code; you can prevent the System.Diagnostics.Trace calls from being compiled at all.
The System.Diagnostics.Trace methods are marked with ConditionalAttribute.
https://msdn.microsoft.com/en-us/library/system.diagnostics.Trace.aspx:
In Visual Studio projects, by default, the "DEBUG" conditional compilation symbol is defined for debug builds, and the "TRACE" symbol is defined for both debug and release builds. For information about how to disable this behavior, see the Visual Studio documentation.
How to disable:
Compilers that support ConditionalAttribute ignore calls to these methods unless "TRACE" is defined as a conditional compilation symbol. To define the "TRACE" conditional compilation symbol in C#, add the /d:TRACE option to the compiler command line when you compile your code using a command line, or add #define TRACE to the top of your file. In Visual Basic, add the /d:TRACE=True option to the compiler command line or add #Const TRACE=True to the file.
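As a concrete illustration (not from the quoted documentation): both calls below are marked with [Conditional(...)], so the compiler removes them entirely when the corresponding symbol is not defined, e.g. after unchecking "Define TRACE constant" / "Define DEBUG constant" on the project's Build tab.

using System.Diagnostics;

class TraceDemo
{
    static void Main()
    {
        Trace.WriteLine("compiled in only when TRACE is defined");  // [Conditional("TRACE")]
        Debug.WriteLine("compiled in only when DEBUG is defined");  // [Conditional("DEBUG")]
    }
}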
Happy to help you!
This is why you should look into ETW and use EventSource calls for logging.
[Event(2, Level = EventLevel.Error)]
public void TaskCreate(long TaskId)
{
    if (IsEnabled())
        WriteEvent(2, TaskId);
}
This demo event only gets written if an active listener is subscribed to the ETW provider. So the events are written ONLY if you use PerfView or Windows Performance Recorder (part of the WPT/Windows SDK) to capture a trace; if nothing is listening, the event is never raised.
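A slightly fuller sketch of how such an event source is typically wired up and called (the class name, provider name, and the Log singleton are illustrative conventions, not taken from the answer above):

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyApp")]
public sealed class MyAppEventSource : EventSource
{
    public static readonly MyAppEventSource Log = new MyAppEventSource();

    [Event(2, Level = EventLevel.Error)]
    public void TaskCreate(long TaskId)
    {
        if (IsEnabled())
            WriteEvent(2, TaskId);
    }
}

class Demo
{
    static void Main()
    {
        // Effectively free unless an ETW session (PerfView, WPR, ...) has
        // enabled the "MyCompany-MyApp" provider.
        MyAppEventSource.Log.TaskCreate(42);
    }
}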
Leaving any unnecessary code in could have implications. The first that comes to mind is performance: if those Diagnostics calls use up system resources, even just a little, they can have an impact, such as longer running times.
The second part, which you already covered, is security. Any time you expose internal parts of your code, you allow the potential for exploitation. It might be minor, but the tighter the better where security is concerned...
If you can easily remove the Diagnostic/development code, I'd recommend doing so, then testing again before deploying to production.
I have to work with an old version of Mono in Unity projects. I find myself recreating some classes and extension methods that exist in later versions of .NET. Should I mark these with an attribute that will make them easy to take out at a later point, should I just wait for the inevitable errors and then delete the duplicate code, or should I take some other approach I'm not familiar with yet? If the attribute route is the way to go, is there already an appropriate attribute for this kind of thing?
Here's what I'd like:
[PresentInDotNET(3.5)]
I fill in the version and get alerted when the framework is at that level or higher.
Split them off to a separate assembly, and change the set of assemblies that make up the final delivery based on the .NET version. You need to rebuild your main assembly to refer to the correct assemblies (depending on whether Foo is in MySystem or System), but as long as you keep namespaces identical, that's all. If you are not even interested in keeping compatibility with older versions, you can simply delete classes from this assembly as they become available.
Alternatively, if the classes/extension methods you are recreating are not interesting (in the sense that you gain nothing by having .NET supply them for you), simply put them in their separate namespace and accept that you are duplicating code already present in newer versions. It doesn't matter a whole lot which assembly gets the job done, after all, as long as it happens.
Whatever you do, try to avoid going the route of #ifdefs, runtime discovery, and other conditional code, as this is much harder to maintain.
How about adding "// TODO" comments for places like this? Visual Studio will display these in the Task window and you can get at them pretty easily.
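If the attribute route from the question is preferred, note that no such attribute ships with the framework, but a minimal marker is easy to hand-roll; the name and shape below simply mirror the question's example and are purely illustrative:

using System;

// Hypothetical marker: tags backported members with the framework version
// that provides an equivalent, so they can be searched for and deleted once
// the project targets that version.
[AttributeUsage(AttributeTargets.All, AllowMultiple = false)]
public sealed class PresentInDotNETAttribute : Attribute
{
    private readonly double version;

    public PresentInDotNETAttribute(double version)
    {
        this.version = version;
    }

    public double Version
    {
        get { return version; }
    }
}

// Usage, as in the question:
// [PresentInDotNET(3.5)]
// public static class EnumExtensions { /* backported helpers */ }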
At my workplace we deploy internal applications by replacing only the assemblies that have changed (not my idea).
We can tell which assemblies we need to deploy by looking at whether the source files that are compiled into them have changed. Most of the time we don't need to redeploy assemblies that depend on assemblies that have changed. However, we have found some cases where, even though no source files in an assembly have changed, we need to redeploy it.
So far we know that any of these changes in an assembly will require all dependent assemblies to be recompiled and redeployed (the first case is illustrated in the sketch after this list):
Constant changes
Enum definition changes (order of values)
Return type of a function changes and caller uses var (sometimes)
Namespace of a class changes to another already referenced namespace.
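To make the first item concrete: const values are baked into every assembly that references them at compile time, so changing a constant in the defining assembly does not affect already-deployed consumers until they are rebuilt (the names below are made up for illustration):

// Defining assembly:
public static class Limits
{
    public const int MaxRetries = 3;   // later changed to 5
}

// Consuming assembly: the compiler embeds the literal 3 here, not a field
// reference, so the consumer keeps using 3 until it is recompiled.
public class Worker
{
    public int Retries = Limits.MaxRetries;
}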
Are there any other cases that we're missing? I'm also open to arguments as to why this entire approach is flawed (although it's been used for years).
Edit: To be clear, we're always recompiling everything, but only deploying assemblies whose source files have changed.
So anything that breaks compilation will be picked up (method name changes, etc.), since those require changes in the calling code.
Here is another one:
Changes to optional parameter values.
The default values get compiled directly into the assembly that uses them (when the caller doesn't pass the argument).
public void MyOptMethod(int optInt = 5) {}
Any calling code such as this:
theClass.MyOptMethod();
Will end up compiled to:
theClass.MyOptMethod(5);
If you change the method to:
public void MyOptMethod(int optInt = 10) {}
You will need to recompile all dependent assemblies if you want the new default to apply.
Additional changes that will require recompilation (thanks Polynomial):
Changes to generic type parameter constraints
Changes to method names (especially problematic when using reflection, as private methods may also be inspected)
Changes to exception handling (different exception type being thrown)
Changes to thread handling
Etc... etc... etc...
So - always recompile everything.
First off, we have sometimes deployed only a few assemblies in an application instead of the complete app. However, this is by no means the norm and has ONLY been done in our test environments when the developer had very recently (as in within the last few minutes) published the whole site and was just making a minor tweak. However, once the dev is satisfied they will go ahead and do a full recompile and republish.
The final push to testing is always based off a full recompile / deploy. The pushes to staging and ultimately production are based off of that full copy.
Besides repeatability, one reason is that you really can't be 100% positive that a human didn't miss something in the comparisons. Next, the amount of time to deploy 100 assemblies versus 5 is trivial and quite frankly not worth the amount of human time it takes to try and figure out what really changed.
Quite frankly, the list you have in combination with Oded's answer ought to be enough to convince others of the potential for failure. However, the very fact that you have already run into failures due to this lackadaisical approach should be enough of a warning flag to stop it from continuing.
At the end of the day, it really boils down to a question of professionalism. Standardization and repeatability of the process of moving code out of development, through the various hoops, and ultimately into production are extremely important in creating robust, mission-critical applications. If your deployment process is fraught with the potential for failure due to these types of risk-inducing shortcuts, it raises questions about the quality of the code being produced.
I have a very theoretical question: is there a way to ban the use of certain methods, objects, etc. inside my application/project in C#, .NET and/or Visual Studio?
To be more specific: I'm developing a DMS system where it should never be possible to delete files from an archive. The archived files are just files inside a Windows folder structure.
So, whenever someone tries to call System.IO.File.Delete(), this should be forbidden. Instead I would force the use of a custom-made FileDelete() method which always ensures that the file to delete is not a file from inside an archive.
(This doesn't have to happen automatically. It's OK if there is an error/exception that informs the developer of a banned method call.)
Another way to implement this could be to intercept all calls to System.IO.File.Delete() at runtime, catch them, and execute my own FileDelete() method instead.
Of course this is a really theoretical question, but I would just like to know whether there is a way to implement it.
P.S.: I'm using C# with Visual Studio 2005, so it doesn't matter whether I can achieve this through the programming language or through Visual Studio (or any other way I forgot).
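For illustration, a minimal sketch of the kind of guarded wrapper the question describes (the archive path, class name, and exception choice are all assumptions, not part of the question):

using System;
using System.IO;

public static class SafeFile
{
    // Illustrative archive location; in the real DMS this would come from configuration.
    private static readonly string ArchiveRoot = Path.GetFullPath(@"D:\DMS\Archive");

    public static void Delete(string path)
    {
        string full = Path.GetFullPath(path);
        if (full.StartsWith(ArchiveRoot, StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException("Deleting archived files is not allowed: " + full);

        File.Delete(full);
    }
}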
Wouldn't it be simpler to control delete permissions on the archived files?
You can define methods and adorn them with declarative security attributes:
http://msdn.microsoft.com/en-us/library/dswfd229.aspx
HTH
The closest I can come to a solution is to write your own System.IO.File class and keep it in the exe project. That way you'll get an ambiguity compile error that has to be resolved with an explicit using alias picking the intended File class (for example, using File = System.IO.File; in the few places where the BCL type really is wanted). If you're unsure about the exact type name, set a breakpoint and evaluate something like ?typeof(System.IO.File).AssemblyQualifiedName in the Immediate window.
It's not bullet-proof, but at least it forces the developer to make a conscious decision, and you could even (though I personally wouldn't do it) change the default class template to include the using directive for every class.
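A rough sketch of that shadowing idea (the namespace and behaviour are assumptions; the point is only that an unqualified File reference becomes ambiguous and has to be resolved deliberately):

// In the application's own, routinely imported namespace:
namespace MyDms.IO
{
    public static class File
    {
        public static void Delete(string path)
        {
            // archive check would go here before delegating
            System.IO.File.Delete(path);
        }
    }
}

// In a consumer that imports both System.IO and MyDms.IO, a bare
// "File.Delete(...)" no longer compiles (CS0104: ambiguous reference);
// the developer must alias the type they really mean, e.g.:
// using File = MyDms.IO.File;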
Not for existing library functions.
For your own code, you could apply code access security to methods, but code running as "full trust" will breeze past this; so to check for abuse via reflection you would probably have to check the caller manually (Assembly.GetCallingAssembly), which is painful and still not 100% robust...
There are specific file/IO permissions, but again, full trust will ignore them.
I think "no" is a safer answer.
One way you could go about doing this is to create a special user account and only grant that account the permissions necessary to remove the files.
Just keep in mind that the user is in control of his computer (if he has administrative privileges ;) and while you can put some obstacles in his way there really is nothing you can do about it (and that's the way it should be).
What about writing your own FxCop rule for that case?
With such a rule it will be impossible to compile if you treat warnings as errors.
I have a .NET 2.0 C# ClickOnce app and it connects to its data via web services. I've been told that one way to potentially speed up the application is to generate a serialization assembly beforehand. I have several questions on this front.
The default setting for whether to generate a serialization assembly is Auto. What criteria does VS2005 use to decide whether to generate a serialization assembly or not? It seems like it does not generate one under the Debug configuration but does under the Release configuration, but I can't tell for sure and can't find the information anywhere.
Does a serialization assembly actually improve the startup of the application? Specifically, what does it improve? Do I actually need one?
The setting is really asking "Shall I pre-generate the serialization assemblies and include them in the deployed project, or shall I fall back on the default of generating the assemblies on the fly?" Generally, generating on the fly won't hurt too much after the first hit, perf-wise. Where it can come into play is that the serialization assemblies are generated in %SYSTEMROOT%\TEMP, which in some cases the process can't access, leading to fatal exceptions.
This is not relevant to your situation, but there's another good reason for pre-generating the serialization assembly - it's necessary when hosting your code in SQL Server (i.e. SQLCLR). SQL Server doesn't allow these assemblies to be generated dynamically, so your serialization code would fail inside SQL Server.
In most cases, you aren't likely to see a huge benefit from this, especially if you keep the app open for a while. Pre-generating a serialization assembly mainly helps the first time (in an exe lifetime) that you serialize a specific type as XML.
According to IntelliTrace, the first time you XML-serialize a type, a FileNotFoundException is thrown and then caught: the CLR expects to load an assembly containing all the XML serializers for that specific assembly, and when it's not found, the FileNotFoundException signals the XmlSerializer: "Hey! Generate the darn assembly!" That generation happens during the catch, and afterwards the previously missing file exists.
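A small illustration of that probing behaviour (the type names are made up; the probed file follows the standard <AssemblyName>.XmlSerializers.dll naming convention):

using System.Xml.Serialization;

public class Order
{
    public int Id;
}

class Demo
{
    static void Main()
    {
        // First construction for this type: the runtime probes for a
        // pre-generated MyApp.XmlSerializers.dll; if it is missing, a
        // (caught) FileNotFoundException shows up as a first-chance
        // exception and a serializer assembly is generated on the fly.
        var first = new XmlSerializer(typeof(Order));

        // Subsequent constructions reuse the cached generated assembly.
        var second = new XmlSerializer(typeof(Order));
    }
}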
I've read somewhere that using try-catch for control flow is bad practice. I don't know why Microsoft chose this approach...