I have a SQL CLR project with a few functions and stored procedures. The project is set to EXTERNAL_ACCESS and is signed with a key - it works just fine.
I have added another function to the project that uses ICSharpCode.SharpZipLib. I initially got an incompatibility error for the versions, which I think I resolved by following instructions in another post.
The project builds OK, but now I get the following error during the last phase of deploying the project (a SQL Server database project). This is on my local machine, where I have admin privileges.
Creating [ICSharpCode.SharpZipLib]...
(47,1): SQL72014: .Net SqlClient Data Provider: Msg 6211, Level 16, State 1, Line 1 CREATE ASSEMBLY failed because type '<PrivateImplementationDetails>' in safe assembly 'ICSharpCode.SharpZipLib' has a static field '$$method0x6000014-1'. Attributes of static fields in safe assemblies must be marked readonly in Visual C#, ReadOnly in Visual Basic, or initonly in Visual C++ and intermediate language.
(47,0): SQL72045: Script execution error. The executed script:
CREATE ASSEMBLY [ICSharpCode.SharpZipLib]
AUTHORIZATION [dbo]
FROM 0x4D5A90000300000004000000FFFF0000B800000000000000400000000000000000000000000000000000000000000000000000000000000000000000800000000E1FBA0E00B409CD21B8014CCD21546869732070726F6772616D2063616E6E6F742062652072756E20696E20444F53206D6F64652E0D0D0A2400000000000000504500004C0103003877BE5A0000000000000000E00002210B010B000000020000200000000000009E1B02000020000000200200000040000020000000100000040000000000000004000000000000000060020000100000000000000300408500001000001000000000100000100000000000001000000000000000000000004C1B02004F000000002002003804000000000000000000000000000000000000004002000C00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000200000080000000000000000000000082000004800000000000000000000002E74657874000000A4FB0100002000000000020000100000000000000000000000000000200000602E7273726300000038040000002002000010000000100200000000000000000000000000400000402E72656C6F6300000C000000004002000
An error occurred while the batch was being executed.
Appreciate any help!
Thx,
Satya
I guess the first thing would be to see if you could do without ICSharpCode.SharpZipLib. If not, then:
If you have access to the source for ICSharpCode.SharpZipLib, you could change the static fields to be readonly.
The last option is to deploy the assembly with PERMISSION_SET = UNSAFE.
The CLR host that runs within SQL Server is highly restricted as compared to the CLR host running on the OS. One reason for the restrictions is that the App Domains are shared across sessions. So everyone executing a particular SQLCLR method (be it a Stored Procedure, Function, User-Defined Type, User-Defined Aggregate, or Trigger) is executing the same method in the same static class in the same App Domain. Hence, static class variables are shared resources, and unless you are very careful and deliberate in using them, they can quite easily lead to race conditions and odd (and difficult-to-debug) behavior.
The error message is about this, but it is also a bit misleading since it says that SAFE assemblies do not allow such things. More accurately, it is non-UNSAFE assemblies that do not allow such things (i.e. neither SAFE nor EXTERNAL_ACCESS).
So, as Niels mentioned in his answer, you can mark the Assembly as UNSAFE and it will load and probably work. However, unless you know how that variable (and any others that are marked as static but were not yet mentioned) is used, it could lead to race conditions if one session overwrites the value that another session was still using. Or there is potential for a previous value to be left there that could adversely impact the next caller. You would need to look through the code to ensure that this isn't an issue prior to attempting to set the Assembly to UNSAFE.
While not as quick and easy, you really do need to start with updating the code to mark those static variables as readonly and try recompiling to make sure that there are no attempts to write to that variable throughout the code. And if other parts of the code do write to that static variable, then you need to refactor the code or find other code to do the same thing. I ran into this years ago and opted to use DotNetZip for my SQL# project, though I still did need to make minor modifications for things such as static variables.
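To illustrate the rule being enforced, here is a minimal sketch of the pattern CREATE ASSEMBLY objects to (the class and field names are made up, not SharpZipLib's actual code):
public static class ZipHelper
{
    // Rejected under SAFE / EXTERNAL_ACCESS: a writable static field is a
    // shared resource across every session using the App Domain.
    public static int CompressionLevel = 6;

    // Accepted: readonly (initonly in IL) fields cannot be reassigned after
    // the type initializer runs, so non-UNSAFE assemblies allow them.
    public static readonly int DefaultCompressionLevel = 6;
}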
I got error CA2122 DoNotIndirectlyExposeMethodsWithLinkDemands on this function:
internal static string GetProcessID()
{
    return Process.GetCurrentProcess().Id.ToString(CultureInfo.CurrentCulture);
}
How to fix it?
I got error CA2122
It is not an error, just a warning. The code analysis tool you are using checks for lots of obscure corner cases, the kind that the C# compiler does not complain about but that might be bad practice, and the kind that programmers are often unaware of. It was originally designed as an internal tool for Microsoft programmers working on framework code. The rules they must follow are pretty draconian, since they can't predict how their code is going to be used.
...WithLinkDemands
A link demand is a Code Access Security (CAS) detail. It ensures that code has sufficient rights to execute. Link demands are very cheap because they are checked only once, when the code is just-in-time compiled. That "only once" clause is what the warning is talking about: it is technically possible for code that has sufficient rights to execute first, causing the method to be jitted, and for the method then to be called later by non-trusted code, bypassing the check. The tool just assumes that this might happen because the method is public; it doesn't know for a fact that it actually happens in your program.
return Process.GetCurrentProcess()...
It is the Process class that has the link demand. You can tell from the MSDN article which demands it makes: it verifies that the calling code runs in full trust, that it doesn't run in a restrictive unmanaged host like SQL Server, and that a derived class meets these demands as well. The Process class is a bit risky: untrusted code could do naughty things by starting a process to bypass CAS checks, or could learn too much about the process it runs in and tinker with its configuration.
How to fix it?
There is more than one possible approach. Roughly in order:
There are always high odds that the warning simply doesn't apply to your program. In other words, there is no risk of it ever executing code that you don't trust. Your program would have to support plug-ins written by programmers you don't know, who nevertheless have access to the machine and can tell your program to load their plug-in. Not very common. The proper approach then is to configure the tool to match your program's behavior: you'd disable the rule.
Evaluate the risk of untrusted code using this method. That risk ought to be low for this specific method; exposing the process ID does not give away any major secrets. It is just a number, and it doesn't become a risky number until it is used by code that calls Process.GetProcessById(). So you'd consider suppressing the warning by applying the [SuppressMessage] attribute to the method. This is a common outcome; the framework source code has lots and lots of them.
Follow the tool's advice and apply the CAS attributes to this method as well. This is simply a copy-paste of the link demands you saw in the MSDN article. It closes the "only once" loophole: the untrusted code will now fail to JIT and can't execute. The last two options are sketched below.
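As a rough illustration of those last two options (the rule id string and justification text are examples; copy the exact id from your analysis output):
using System.Diagnostics;
using System.Diagnostics.CodeAnalysis;
using System.Globalization;
using System.Security.Permissions;

internal static class ProcessInfo
{
    // Option 2: suppress the warning after judging the exposure harmless.
    [SuppressMessage("Microsoft.Security", "CA2122:DoNotIndirectlyExposeMethodsWithLinkDemands",
        Justification = "Only the process id is exposed; callers never get the Process object.")]
    internal static string GetProcessID()
    {
        return Process.GetCurrentProcess().Id.ToString(CultureInfo.CurrentCulture);
    }

    // Option 3: repeat the link demand so partially trusted callers fail to JIT.
    [PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
    internal static string GetProcessIDFullTrust()
    {
        return Process.GetCurrentProcess().Id.ToString(CultureInfo.CurrentCulture);
    }
}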
At my workplace we deploy internal applications by replacing only the assemblies that have changed (not my idea).
We can tell which assemblies we need to deploy by looking at whether the source files that are compiled into them have changed. Most of the time we don't need to redeploy assemblies that depend on assemblies that have changed. However, we have found some cases where, even though no source files in an assembly have changed, we still need to redeploy it.
So far we know that any of these changes in an assembly will require all dependent assemblies to be recompiled and redeployed:
Constant changes
Enum definition changes (order of values)
Return type of a function changes and caller uses var (sometimes)
Namespace of a class changes to another already referenced namespace.
Are there any other cases that we're missing? I'm also open to arguments why this entire approach is flawed (although it's been used for years).
Edit: To be clear, we're always recompiling, but only deploying assemblies whose source files have changed.
So anything that breaks compilation will be picked up (method name changes, etc.), since they require changes in the calling code.
Here is another one:
Changes to optional parameter values.
The default values get compiled directly into the assembly that uses them (if not specified at the call site).
public void MyOptMethod(int optInt = 5) {}
Any calling code such as this:
theClass.MyOptMethod();
Will end up compiled to:
theClass.MyOptMethod(5);
If you change the method to:
public void MyOptMethod(int optInt = 10) {}
You will need to recompile all dependent assemblies if you want the new default to apply.
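Constants from the question's list behave the same way: a public const is baked into the call site at compile time, so only recompiling the dependent assembly picks up a new value. A minimal illustration (all names are made up):
// In the changed assembly:
public static class Limits
{
    public const int MaxRetries = 3;            // callers inline the literal 3 at compile time
    public static readonly int MaxTimeout = 30; // callers load this field at run time
}

// In a dependent assembly compiled against the old version of Limits:
public static class Caller
{
    public static void Report()
    {
        System.Console.WriteLine(Limits.MaxRetries); // prints 3 until this assembly is recompiled
        System.Console.WriteLine(Limits.MaxTimeout); // picks up a new value when only Limits is redeployed
    }
}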
Additional changes that will require recompilation (thanks Polynomial):
Changes to generic type parameter constraints
Changes to method names (especially problematic when using reflection, as private methods may also be inspected)
Changes to exception handling (different exception type being thrown)
Changes to thread handling
Etc... etc... etc...
So - always recompile everything.
First off, we have sometimes deployed only a few assemblies in an application instead of the complete app. However, this is by no means the norm and has ONLY been done in our test environments when the developer had very recently (as in within the last few minutes) published the whole site and was just making a minor tweak. However, once the dev is satisfied they will go ahead and do a full recompile and republish.
The final push to testing is always based off a full recompile / deploy. The pushes to staging and ultimately production are based off of that full copy.
Besides repeatability, one reason is that you really can't be 100% positive that a human didn't miss something in the comparisons. Next, the difference in time between deploying 100 assemblies and 5 is trivial, and quite frankly not worth the human time it takes to figure out what really changed.
Quite frankly, the list you have in combination with Oded's answer ought to be enough to convince others of the potential for failure. However, the very fact that you have already run into failures due to this lackadaisical approach should be enough of a warning flag to stop it from continuing.
At the end of the day, it really boils down to a question of professionalism. Standardization and repeatability of the process of moving code out of development, through the various hoops, and ultimately into production are extremely important in creating robust, mission-critical applications. If your deployment process is fraught with the potential for failure due to these types of risk-inducing shortcuts, it raises questions about the quality of the code being produced.
I've written a multi-threaded windows service in C#. For some reason, csc.exe is being launched each time a thread is spawned. I doubt it's related to threading per se, but the fact that it is occurring on a per-thread basis, and that these threads are short-lived, makes the problem very visible: lots of csc.exe processes constantly starting and stopping.
Performance is still pretty good, but I expect it would improve if I could eliminate this. However, what concerns me even more is that McAfee is attempting to scan the csc.exe instances and eventually kills the service, apparently when one of the instances exits mid-scan. I need to deploy this service commercially, so changing McAfee settings is not a solution.
I assume that something in my code is triggering dynamic compilation, but I'm not sure what. Anyone else encounter this problem? Any ideas for resolving it?
Update 1:
After further research based on the suggestion and links from @sixlettervariables, the problem appears to stem from the implementation of XML serialization, as indicated in Microsoft's documentation on XmlSerializer:
To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types.
Microsoft notes an optimization further on in the same doc:
The infrastructure finds and reuses those assemblies. This behavior occurs only when using the following constructors:
XmlSerializer.XmlSerializer(Type)
XmlSerializer.XmlSerializer(Type, String)
which appears to indicate that the codegen and compilation would occur only once, at first use, as long as one of the two specified constructors is used. However, I don't benefit from this optimization because I am using another form of the constructor, specifically:
public XmlSerializer(Type type, Type[] extraTypes)
Reading a bit further, it turns out that this also happens to be a likely explanation for a memory leak that I have been observing when my code executes. Again, from the same doc:
If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance. The easiest solution is to use one of the previously mentioned two constructors. Otherwise, you must cache the assemblies in a Hashtable.
The two workarounds that Microsoft suggests above are a last resort for me. Switching to another form of the constructor is not preferred (I am using the "extraTypes" form for serialization of derived classes, which is a supported use per Microsoft's docs), and I'm not sure I like the idea of managing a cache of assemblies for use across multiple threads.
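For reference, the cache Microsoft alludes to is usually kept at the level of the XmlSerializer instances themselves, created once per root type and reused from all threads. A rough sketch of that workaround (not the route taken here, and it assumes the extraTypes set is fixed for a given root type):
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

internal static class SerializerCache
{
    private static readonly Dictionary<Type, XmlSerializer> cache = new Dictionary<Type, XmlSerializer>();
    private static readonly object sync = new object();

    internal static XmlSerializer For(Type rootType, Type[] extraTypes)
    {
        lock (sync)
        {
            XmlSerializer serializer;
            if (!cache.TryGetValue(rootType, out serializer))
            {
                // The dynamic codegen (and the csc.exe launch) happens here, once per root type.
                serializer = new XmlSerializer(rootType, extraTypes);
                cache.Add(rootType, serializer);
            }
            return serializer;
        }
    }
}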
So I have run sgen and can see the resulting assembly of serializers for my types being produced as expected, but when my code executes, the sgen-produced assembly is not loaded (per observation in the Fusion log viewer and Process Monitor). I'm currently exploring why this is the case.
Update 2:
The sgen'd assembly loads fine when I use one of the two "friendlier" XmlSerializer constructors (see Update 1, above). When I use XmlSerializer(Type), for example, the sgen'd assembly loads and no run-time codegen/compilation is performed. However, when I use XmlSerializer(Type, Type[]), the assembly does not load. Can't find any reasonable explanation for this.
So I'm reverting to using one of the supported constructors and sgen'ing. This combination eliminates my original problem (the launching of csc.exe), plus one other related problem (the XmlSerializer-induced memory leak mentioned in Update 1 above). It does mean, however, that I have to revert to a less optimal form of serialization for derived types (the use of XmlInclude on the base type, sketched below) until something changes in the framework to address this situation.
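For reference, that fallback looks roughly like this (the type names are hypothetical):
using System.Xml.Serialization;

// The derived types are declared on the base type, so the plain
// XmlSerializer(Type) constructor - and the sgen'd assembly - can handle them.
[XmlInclude(typeof(Circle))]
[XmlInclude(typeof(Square))]
public abstract class Shape
{
    public string Name;
}

public class Circle : Shape { public double Radius; }
public class Square : Shape { public double Side; }

// Elsewhere: new XmlSerializer(typeof(Shape)) now finds the pre-generated
// serializers assembly instead of launching csc.exe at run time.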
Psychic debugging:
Your Windows Service does XML serialization/deserialization
To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types.
If this is the case, you can build these XML serializer assemblies a priori.
Apologies for the shortness of the question, however I don't think it needs much elaboration.
Are there any security implications caused by using the CSharpCodeProvider, and could it open a server up to attack?
It depends on how you use it. Here is a summary, sorted from safe uses to a use that you certainly don't want to allow (when running the code on a server or in some other environment that you want to control):
If you use CSharpCodeProvider just for generating C# source code, then you only need permission to save the generated files to some directory, or no permission at all (if it is possible to generate the code into a memory stream).
If you use it for compiling the generated C# source, then you need permission to run csc.exe (which may not be available in some limited environments such as shared hosting).
If you just generate files and compile them, then it probably won't be harmful (although someone could probably abuse your application to generate many, many files and attack the server with some kind of DoS attack).
If you also load & execute the generated code, then it depends on how you generate it. If you assume that there are no bugs in C#/CodeDOM and can guarantee that the generated code is safe, then you should be fine.
If your code contains things such as CodeSnippetExpression that can be provided by the user (in some way), then the user can write and run anything he or she wants on your server, so this would be potentially quite dangerous.
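To make the middle of that list concrete, a compile-only use looks roughly like this (a sketch; with classic CodeDOM, CompileAssemblyFromSource launches csc.exe behind the scenes):
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CompileOnlyDemo
{
    static void Main()
    {
        const string source = "public static class Generated { public static int Answer() { return 42; } }";
        using (CSharpCodeProvider provider = new CSharpCodeProvider())
        {
            CompilerParameters options = new CompilerParameters();
            options.GenerateInMemory = true;    // keep the output out of the file system
            options.GenerateExecutable = false; // produce a library, not an exe

            // Compiling alone is the comparatively low-risk step; the dangerous part is
            // loading and invoking results.CompiledAssembly with untrusted input.
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            Console.WriteLine(results.Errors.HasErrors ? "Compilation failed" : "Compiled OK");
        }
    }
}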
Sort of. On the surface it's not a direct risk, because you're not running code, just compiling it. However, there's nothing that says that the C# compiler doesn't contain some sort of bug that, given the right malicious input, would cause it to bail out and start executing commands directly.
However, if you later execute the compiled code (and presumably you do - otherwise why would you compile it to begin with?), it will be running in the same context as you are. Obviously, that has all kinds of unpleasant security implications, much like using the quasi-analogous eval() feature of other languages.
It depends on the source that you are compiling. If you have enough control over the source, then it might be an acceptable risk. If you are allowing someone outside of your sphere of trust supply code to the compiler, it might be an unacceptable risk.
I have a .NET 2.0 C# ClickOnce app and it connects to its data via web services. I've been told that one way to potentially speed up the application is to generate a serialization assembly beforehand. I have several questions on this front.
The default setting for whether to generate a serialization assembly is Auto. What criteria does VS2005 use to decide whether to generate a serialization assembly or not? It seems like it does not generate one under the Debug configuration but does under the Release configuration, but I can't tell for sure and can't find the information anywhere.
Does the serialization assembly actually improve the startup of the application? Specifically, what does it improve? Do I actually need a serialization assembly?
The setting is really asking "Shall I pre-generate the serialization assemblies and include them in the deployed project, or shall I fall back on the default of generating the assemblies on the fly?" Generally the on-the-fly approach won't hurt too much after the first hit, perf-wise. Where it can come into play is that the serialization assemblies are generated in %SYSTEMROOT%\TEMP, which in some cases the process can't access, leading to fatal exceptions.
This is not relevant to your situation, but there's another good reason for pre-generating the serialization assembly - it's necessary when hosting your code in SQL Server (i.e. SQLCLR). SQL Server doesn't allow these assemblies to be generated dynamically, so your serialization code would fail inside SQL Server.
In most cases, you aren't likely to see a huge benefit from this, especially if you keep the app open for a while. Pre-generating a serialization assembly mainly helps the first time (in an exe's lifetime) that you serialize a specific type as XML.
According to IntelliTrace, only the first time you XML-serialize a type, a FileNotFoundException is thrown and then caught. The CLR expects to load an assembly containing all the XML serializers for that specific assembly, and when it's not found, the FileNotFoundException signals the XmlSerializer: "Hey! Generate the darn assembly!" That generation is what happens inside the catch, after which the previously missing file exists.
I've read somewhere that using try-catch for control flow is a bad practice. I don't know why Microsoft used this approach...