How do I properly manage the lifetime of a Qubit in C#?

I'm playing around with Q#, which uses C# as a driver. I'd like to pass a Qubit object to the Q# code but it isn't working as expected.
C# Driver
using Microsoft.Quantum.Simulation.Core;
using Microsoft.Quantum.Simulation.Simulators;

namespace Quantum.QSharpApplication1 {
    class Driver {
        static void Main(string[] args) {
            using (var sim = new QuantumSimulator()) {
                var x = new Microsoft.Quantum.Simulation.Common.QubitManager(10);
                Qubit q1 = x.Allocate();
                Solve.Run(sim, q1, 1);
            }
            System.Console.WriteLine("Press any key to continue...");
            System.Console.ReadKey();
        }
    }
}
Q#
namespace Quantum.QSharpApplication1
{
    open Microsoft.Quantum.Primitive;
    open Microsoft.Quantum.Canon;

    operation Solve (q : Qubit, sign : Int) : ()
    {
        body
        {
            let qp = M(q);
            if (qp != Zero)
            {
                X(q);
            }
            H(q);
        }
    }
}
When I run this, it runs without error until it reaches the System.Console.* lines at which point it throws the following exception in the Q# code
System.AccessViolationException
HResult=0x80004003
Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Source=<Cannot evaluate the exception source>
StackTrace:
<Cannot evaluate the exception stack trace>
The debugger associates this with the "let qp = M(q);" line in Q#.
Note that this does not happen in the Solve.Run call; the real code has multiple Solve calls and the output appears correct. The exception only seems to occur after the using QuantumSimulator scope is left. I recall reading that a Qubit must be reset to zero before it is released. I'm not sure whether that is the problem here, but I don't see a way to do that in C#. Interestingly, if I remove the Console lines, the program runs without error (timing?).

The QubitManager instance you used to create the qubit is not a singleton (each simulator has its own QubitManager), so the simulator is not aware of the Qubit you're trying to manipulate in the Q# code; hence the AccessViolationException.
In general, creating qubits in the driver is not supported; you can only allocate qubits with the using and borrowing statements inside Q#. The recommendation is to create an entry point in Q# which does the qubit allocation, and call that from the driver, for example:
// MyOp.qs
operation EntryPoint() : ()
{
    body
    {
        using (register = Qubit[2])
        {
            myOp(register);
        }
    }
}
// Driver.cs
EntryPoint.Run().Wait();
Finally, note that in your driver code you have this:
Solve.Run(sim, q1, 1);
The Run method returns a Task that executes asynchronously. You typically must add a Wait() to make sure it finishes execution:
EntryPoint.Run(sim, 1).Wait();
If you do this, you will notice that the failure happens during Run, not at the Console.WriteLine.
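The difference Wait() makes can be seen with plain tasks, independent of Q# itself (a minimal sketch): without Wait(), an exception thrown inside the task surfaces later, which is why the crash appeared at the Console lines rather than at the call site.

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Stand-in for a generated Solve.Run/EntryPoint.Run: a task that fails.
    static Task Run() => Task.Run(() => throw new InvalidOperationException("boom"));

    static void Main()
    {
        try
        {
            // Wait() rethrows the task's exception (wrapped in an
            // AggregateException) right here, at the call site.
            Run().Wait();
        }
        catch (AggregateException ex)
        {
            Console.WriteLine(ex.InnerException.Message); // prints "boom"
        }
    }
}
```

Without the Wait(), Main would return before the fault is observed and the exception would only show up later, on a finalizer thread or at an unrelated line.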

Related

InvalidOperationException "NoGCRegion mode was already in progress" on call to GC.TryStartNoGCRegion

I'm doing performance testing comparing various algorithms and want to eliminate the scatter in the results due to garbage collection perhaps being done during the critical phase of the test. I'm turning off the garbage collection using GC.TryStartNoGCRegion(long) (see https://learn.microsoft.com/en-us/dotnet/api/system.gc.trystartnogcregion?view=net-6.0) before the critical test phase and reactivating it immediately afterwards.
My code looks like this:
long allocatedBefore;
int collectionsBefore;
long allocatedAfter;
int collectionsAfter;
bool noGCSucceeded;

try
{
    // Just in case, end any "no GC region"
    if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
    {
        GC.EndNoGCRegion();
    }
    // Exception is thrown sometimes in this line
    noGCSucceeded = GC.TryStartNoGCRegion(solveAllocation);
    allocatedBefore = GC.GetAllocatedBytesForCurrentThread();
    collectionsBefore = getTotalGCCollectionCount();
    stopwatch.Restart();
    doMyTest();
    stopwatch.Stop();
    allocatedAfter = GC.GetAllocatedBytesForCurrentThread();
    collectionsAfter = getTotalGCCollectionCount();
}
finally
{
    // Reactivate garbage collection
    if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
    {
        GC.EndNoGCRegion();
    }
}

// ...

private int getTotalGCCollectionCount()
{
    int collections = 0;
    for (int i = 0; i < GC.MaxGeneration; i++)
    {
        collections += GC.CollectionCount(i);
    }
    return collections;
}
The following exception is thrown from time to time (about once in 1500 tests):
System.InvalidOperationException
The NoGCRegion mode was already in progress
at System.GC.StartNoGCRegionWorker(Int64 totalSize, Boolean hasLohSize, Int64 lohSize, Boolean disallowFullBlockingGC)
at MyMethod.cs: line 409
at MyCaller.cs: line 155
The test might start a second thread that creates some objects it needs in a pool.
As far as I can see, the finally should always turn the GC back on, and the (theoretically unnecessary) check at the beginning should also do it in any case, but nevertheless, there is an error that NoGCRegion was already active.
The question "C# TryStartNoGCRegion 'The NoGCRegion mode was already in progress' exception when the GC is in LowLatency mode" got the same error message, but there a clear code path existed that could activate NoGCRegion more than once. I can't see how that could happen here.
The test itself is not accessing GC operations except for GC.SuppressFinalize in some Dispose() methods.
The test itself does not run in parallel with any other test; my Main() method loops over a set of input files and calls the test method for each one.
The test method uses an external C++ library, which means unmanaged memory in the .NET context.
What could be causing the exception, and why doesn't the call to GC.EndNoGCRegion(); prevent the problem?
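One defensive pattern worth noting (a sketch, not a confirmed diagnosis: checking GCSettings.LatencyMode is inherently racy, since the runtime can still report NoGCRegion while a region is being torn down, or flip state between the check and the call) is to tolerate the InvalidOperationException on both entry and exit rather than trust the mode check:

```csharp
using System;
using System.Runtime;

static class NoGCScope
{
    // Try to enter a no-GC region; returns true if we own one afterwards.
    public static bool TryEnter(long budgetBytes)
    {
        TryExit(); // best effort: end a leftover region from a previous run
        try
        {
            return GC.TryStartNoGCRegion(budgetBytes);
        }
        catch (InvalidOperationException)
        {
            // A region was (still) in progress; we don't own it.
            return false;
        }
    }

    // End the region if one is active; swallow the race where it already ended.
    public static void TryExit()
    {
        try
        {
            if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
        catch (InvalidOperationException)
        {
            // The region ended between the check and the call; ignore.
        }
    }
}
```

This does not explain the root cause, but it keeps an occasional race from aborting a 1500-test run.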

C# are field reads guaranteed to be reliable (fresh) when using multithreading?

Background
My colleague thinks reads in multithreaded C# are reliable and will always give you the current, fresh value of a field, but I've always used locks because I was sure I'd experienced problems otherwise.
I spent some time googling and reading articles, but I must not have given Google the right search terms, because I didn't find exactly what I was after.
So I wrote the below program without locks in an attempt to prove why that's bad.
Question
I'm assuming the below is a valid test, then the results show that the reads aren't reliable/fresh.
Can someone explain what this is caused by? (reordering, staleness or something else)?
And link me to official Microsoft documentation/section explaining why this happens and what is the recommended solution?
If the below isn't a valid test, what would be?
Program
With two threads, one calling SetA and the other calling SetB: if reads are unreliable without locks, then intermittently Foo's field "c" will be false.
using System;
using System.Threading.Tasks;

namespace SetASetBTestAB
{
    class Program
    {
        class Foo
        {
            public bool a;
            public bool b;
            public bool c;

            public void SetA()
            {
                a = true;
                TestAB();
            }

            public void SetB()
            {
                b = true;
                TestAB();
            }

            public void TestAB()
            {
                if (a && b)
                {
                    c = true;
                }
            }
        }

        static void Main(string[] args)
        {
            int timesCWasFalse = 0;
            for (int i = 0; i < 100000; i++)
            {
                var f = new Foo();
                var t1 = Task.Run(() => f.SetA());
                var t2 = Task.Run(() => f.SetB());
                Task.WaitAll(t1, t2);
                if (!f.c)
                {
                    timesCWasFalse++;
                }
            }
            Console.WriteLine($"timesCWasFalse: {timesCWasFalse}");
            Console.WriteLine("Finished. Press Enter to exit");
            Console.ReadLine();
        }
    }
}
Output
Release mode. Intel Core i7 6700HQ:
Run 1: timesCWasFalse: 8
Run 2: timesCWasFalse: 10
Of course it is not fresh. The average CPU nowadays has three layers of cache between each core's registers and RAM, and it can take quite some time for a write to one cache to be propagated to all of them.
And then there is the JIT compiler. Part of its job is dead-code detection, and one of the first things it will do is cut out "useless" variables. For example, this code tries to force an OutOfMemoryException by running into the 2 GiB limit on 32-bit systems:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace OOM_32_forced
{
    class Program
    {
        static void Main(string[] args)
        {
            // Each short is 2 bytes; Int32.MaxValue is 2^31 - 1.
            // So this requires just under 2^32 bytes, about 4 GiB,
            // well over the limit.
            short[] Array = new short[Int32.MaxValue];

            /* We need to actually access the array; otherwise the JIT
               compiler and its optimisations will just skip the array
               definition and creation. */
            foreach (short value in Array)
                Console.WriteLine(value);
        }
    }
}
The thing is that if you cut out the output part, there is a decent chance that the JIT will remove the variable Array, including its instantiation. The JIT has a decent chance of reducing this program to doing nothing at all at runtime.
volatile first of all prevents the JIT from performing such optimisations on that value. And it may even have some effect on how the CPU processes it.
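As an illustration of the standard fix (a sketch, and just one of several valid approaches; Volatile.Read/Write or Interlocked would also work), guarding the fields with a lock provides both mutual exclusion and the memory barriers that publish the writes, so the counter becomes reliably zero:

```csharp
using System;
using System.Threading.Tasks;

class Foo
{
    private readonly object gate = new object();
    private bool a, b;
    public bool C { get; private set; }

    public void SetA() { lock (gate) { a = true; if (a && b) C = true; } }
    public void SetB() { lock (gate) { b = true; if (a && b) C = true; } }
}

class Program
{
    static void Main()
    {
        int timesCWasFalse = 0;
        for (int i = 0; i < 100000; i++)
        {
            var f = new Foo();
            Task.WaitAll(Task.Run(() => f.SetA()), Task.Run(() => f.SetB()));
            // Whichever SetX acquires the lock second is guaranteed to see
            // the other thread's write, so C is always true here.
            if (!f.C) timesCWasFalse++;
        }
        Console.WriteLine($"timesCWasFalse: {timesCWasFalse}"); // prints "timesCWasFalse: 0"
    }
}
```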

Should a TraceSource be guarded with "if" statements?

Before tracing to a TraceSource, should the "Trace Level" be checked prior to issuing the trace itself?
var ts = new TraceSource("foo");
ts.Switch.Level = SourceLevels.Warning;
if (/* should there be a guard here? and if so, what? */) {
ts.TraceEvent(TraceEventType.Warning, 0, "bar");
}
While there is SourceSwitch.ShouldTrace(TraceEventType), the documentation indicates
Application code should not call this method; it is intended to be called only by methods in the TraceSource class.
It appears that pre-TraceSource model employed the TraceSwitch (not SourceSwitch) class which had various TraceXYZ methods (for this purpose?), but such appears to be not needed/used/mentioned with the TraceSource model.
(Having the guard outside the trace method affects evaluation of expressions used in/for the call - of course side-effects or computationally expensive operations in such are "bad" and "ill-advised", but I'd still like focus on the primary question.)
As per expensive trace parameters computation I came up with the following:
internal sealed class LazyToString
{
    private readonly Func<object> valueGetter;

    public LazyToString(Func<object> valueGetter)
    {
        this.valueGetter = valueGetter;
    }

    public override string ToString()
    {
        return this.valueGetter().ToString();
    }
}
The usage would be
traceSource.TraceEvent(TraceEventType.Verbose, 0, "output: {0}", new LazyToString(() =>
{
    // The code here is executed only when needed by the TraceSource,
    // so it can contain some expensive computations.
    return "1";
}));
Any better idea?
I know that in NLog you generally just trace at whatever level you want, and it takes care of deciding whether that level should actually be logged.
To me it looks like TraceSource works the same way, so I would say "no", you probably shouldn't check.
Test it out by setting different trace levels, tracing messages at different levels, and seeing what gets traced.
In terms of performance, I think you are generally OK if you use the methods defined on the class:
Based on an example from: http://msdn.microsoft.com/en-us/library/sdzz33s6.aspx
This is good:
ts.TraceEvent(TraceEventType.Verbose, 3, "File {0} not found.", "test");
This would be bad:
string potentialErrorMessageToDisplay = string.Format( "File {0} not found.", "test" );
ts.TraceEvent(TraceEventType.Verbose, 3, potentialErrorMessageToDisplay );
In the first case the library probably avoids the call to string.Format if the error level won't be logged anyway. In the second case, string.Format is always called.
Are the strings you provide to the message argument expensive? A constant or literal is pretty cheap. If that is the case, don't worry about it; use the trace switch / trace listener filters etc. to reduce the amount of trace processed (and the perf cost of trace). (By the way, the default trace listener is very expensive; always clear the trace listeners before adding the ones you want.)
System.Diagnostics doesn't have anything to make an inactive TraceSource invocation costless. Even if you use listener filters, or set the trace switch to zero (turn it off), TraceEvent will still be invoked and the message string will still be constructed.
Imagine that the trace string is expensive to calculate; for example, it iterates across all the rows in a dataset and dumps them to a string. That could take a non-trivial number of milliseconds.
To get around this, you can wrap the string-building in a method marked with a Conditional attribute so it is compiled out in release builds, or use a wrapper method that takes a lambda expression or a Func<string> that creates the string (and isn't executed when not needed).
As #nexuzzz suggests, there could be situations where calculating an event parameter is expensive. Here is what I could come up with.
The suggestion to developers would be: if you don't have the string argument readily available, use the lambda version of TraceInformation or TraceWarning.
public class TraceSourceLogger : ILogger
{
    private TraceSource _traceSource;

    public TraceSourceLogger(object that)
    {
        _traceSource = new TraceSource(that.GetType().Namespace);
    }

    public void TraceInformation(string message)
    {
        _traceSource.TraceInformation(message);
    }

    public void TraceWarning(string message)
    {
        _traceSource.TraceEvent(TraceEventType.Warning, 1, message);
    }

    public void TraceError(Exception ex)
    {
        _traceSource.TraceEvent(TraceEventType.Error, 2, ex.Message);
        _traceSource.TraceData(TraceEventType.Error, 2, ex);
    }

    public void TraceInformation(Func<string> messageProvider)
    {
        if (_traceSource.Switch.ShouldTrace(TraceEventType.Information))
        {
            TraceInformation(messageProvider());
        }
    }

    public void TraceWarning(Func<string> messageProvider)
    {
        if (_traceSource.Switch.ShouldTrace(TraceEventType.Warning))
        {
            TraceWarning(messageProvider());
        }
    }
}
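A self-contained sketch of the same guard pattern (ExpensiveDump is a hypothetical costly formatting call; note the documentation's caveat quoted above that ShouldTrace is intended for TraceSource's own use, which this deliberately ignores, just as the logger class does):

```csharp
using System;
using System.Diagnostics;

class Demo
{
    public static int DumpCalls = 0;

    // Hypothetical expensive formatting work (e.g. dumping a dataset).
    static string ExpensiveDump()
    {
        DumpCalls++;
        return "lots of state";
    }

    public static void Main()
    {
        var ts = new TraceSource("Demo") { Switch = { Level = SourceLevels.Warning } };

        // The guard means ExpensiveDump() is never invoked while
        // Information events are filtered out by the switch.
        if (ts.Switch.ShouldTrace(TraceEventType.Information))
        {
            ts.TraceInformation("state: {0}", ExpensiveDump());
        }
    }
}
```

With the switch at Warning, the guard short-circuits and the expensive call never runs, which is exactly what the Func<string> overloads above achieve without cluttering every call site.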

Performance Counter - Instances; Create/Update without error but not visible in PerfMon

When invoking UpdatePerformanceCounters: In this updater all the counter names for the category and instance counters are the same - they are always derived from an Enum. The updater is passed a "profile" typically with content such as:
{saTrilogy.Core.Instrumentation.PerformanceCounterProfile}
_disposed: false
CategoryDescription: "Timed function for a Data Access process"
CategoryName: "saTrilogy<Core> DataAccess Span"
Duration: 405414
EndTicks: 212442328815
InstanceName: "saTrilogy.Core.DataAccess.ParameterCatalogue..ctor::[dbo].[sp_KernelProcedures]"
LogFormattedEntry: "{\"CategoryName\":\"saTrilogy<Core> DataAccess ...
StartTicks: 212441923401
Note the "complexity" of the Instance name.
The toUpdate.AddRange() of the VerifyCounterExistence method always succeeds and produces the "expected" output so the UpdatePerformanceCounters method continues through to the "successful" incrementing of the counters.
Despite the "catch" this never "fails" - except, when viewing the Category in PerfMon, it shows no instances or, therefore, any "successful" update of an instance counter.
I suspect my problem may be that my instance name is being rejected, without exception, because of its "complexity" - when I run this through a console tester via PerfView it does not show any exception stack and the ETW events associated with counter updates are successfully recorded in an out-of-process sink. Also, there are no entries in the Windows Logs.
This is all being run "locally" via VS2012 on a Windows 2008R2 server with NET 4.5.
Does anyone have any ideas of how else I may try this - or even test if the "update" is being accepted by PerfMon?
public sealed class Performance {
    private enum ProcessCounterNames {
        [Description("Total Process Invocation Count")]
        TotalProcessInvocationCount,
        [Description("Average Process Invocation Rate per second")]
        AverageProcessInvocationRate,
        [Description("Average Duration per Process Invocation")]
        AverageProcessInvocationDuration,
        [Description("Average Time per Process Invocation - Base")]
        AverageProcessTimeBase
    }

    private readonly static CounterCreationDataCollection ProcessCounterCollection = new CounterCreationDataCollection {
        new CounterCreationData(
            Enum<ProcessCounterNames>.GetName(ProcessCounterNames.TotalProcessInvocationCount),
            Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.TotalProcessInvocationCount),
            PerformanceCounterType.NumberOfItems32),
        new CounterCreationData(
            Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationRate),
            Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationRate),
            PerformanceCounterType.RateOfCountsPerSecond32),
        new CounterCreationData(
            Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationDuration),
            Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationDuration),
            PerformanceCounterType.AverageTimer32),
        new CounterCreationData(
            Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessTimeBase),
            Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessTimeBase),
            PerformanceCounterType.AverageBase),
    };

    private static bool VerifyCounterExistence(PerformanceCounterProfile profile, out List<PerformanceCounter> toUpdate) {
        toUpdate = new List<PerformanceCounter>();
        bool willUpdate = true;
        try {
            if (!PerformanceCounterCategory.Exists(profile.CategoryName)) {
                PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
            }
            toUpdate.AddRange(Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(profile.CategoryName, counterName, profile.InstanceName, false) { MachineName = "." }));
        }
        catch (Exception error) {
            Kernel.Log.Trace(Reflector.ResolveCaller<Performance>(), EventSourceMethods.Kernel_Error, new PacketUpdater {
                Message = StandardMessage.PerformanceCounterError,
                Data = new Dictionary<string, object> { { "Instance", profile.LogFormattedEntry } },
                Error = error
            });
            willUpdate = false;
        }
        return willUpdate;
    }

    public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
        List<PerformanceCounter> toUpdate;
        if (profile.Duration <= 0 || !VerifyCounterExistence(profile, out toUpdate)) {
            return;
        }
        foreach (PerformanceCounter counter in toUpdate) {
            if (Equals(PerformanceCounterType.RateOfCountsPerSecond32, counter.CounterType)) {
                counter.IncrementBy(profile.Duration);
            }
            else {
                counter.Increment();
            }
        }
    }
}
From MSDN .Net 4.5 PerformanceCounter.InstanceName Property (http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecounter.instancename.aspx)...
Note: Instance names must be shorter than 128 characters in length.
Note: Do not use the characters "(", ")", "#", "\", or "/" in the instance name. If any of these characters are used, the Performance Console (see Runtime Profiling) may not correctly display the instance values.
The instance name of 79 characters that I use above satisfies these conditions, so unless ".", ":", "[" and "]" are also "reserved", the name would not appear to be the issue. I also tried a 64-character substring of the instance name, just in case, as well as a plain "test" string, all to no avail.
Changes...
Apart from the Enum and the ProcessCounterCollection I have replaced the class body with the following:
private static readonly Dictionary<string, List<PerformanceCounter>> definedInstanceCounters = new Dictionary<string, List<PerformanceCounter>>();

private static void UpdateDefinedInstanceCounterDictionary(string dictionaryKey, string categoryName, string instanceName = null) {
    definedInstanceCounters.Add(
        dictionaryKey,
        !PerformanceCounterCategory.InstanceExists(instanceName ?? "Total", categoryName)
            ? Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(categoryName, counterName, instanceName ?? "Total", false) { RawValue = 0, MachineName = "." }).ToList()
            : PerformanceCounterCategory.GetCategories().First(category => category.CategoryName == categoryName).GetCounters().Where(counter => counter.InstanceName == (instanceName ?? "Total")).ToList());
}

public static void InitialisationCategoryVerify(IReadOnlyCollection<PerformanceCounterProfile> etwProfiles) {
    foreach (PerformanceCounterProfile profile in etwProfiles) {
        if (!PerformanceCounterCategory.Exists(profile.CategoryName)) {
            PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
        }
        UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName);
    }
}

public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
    if (!definedInstanceCounters.ContainsKey(profile.DictionaryKey)) {
        UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName, profile.InstanceName);
    }
    definedInstanceCounters[profile.DictionaryKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
    definedInstanceCounters[profile.TotalInstanceKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
}
}
In the PerformanceCounter Profile I've added:
internal string DictionaryKey {
    get {
        return String.Concat(CategoryName, " - ", InstanceName ?? "Total");
    }
}

internal string TotalInstanceKey {
    get {
        return String.Concat(CategoryName, " - Total");
    }
}
The ETW EventSource now does the initialisation for the "pre-defined" performance categories whilst also creating an instance called "Total".
PerformanceCategoryProfile = Enum<EventSourceMethods>.GetValues().ToDictionary(esm => esm, esm => new PerformanceCounterProfile(String.Concat("saTrilogy<Core> ", Enum<EventSourceMethods>.GetName(esm).Replace("_", " ")), Enum<EventSourceMethods>.GetDescription(esm)));
Performance.InitialisationCategoryVerify(PerformanceCategoryProfile.Values.Where(v => !v.CategoryName.EndsWith("Trace")).ToArray());
This creates all of the categories, as expected, but in PerfMon I still cannot see any instances, not even the "Total" instance, and the update always, apparently, runs without error.
I don't know what else I can change; I'm probably "too close" to the problem and would appreciate comments/corrections.
These are the conclusions and the "answer", insofar as it explains, to the best of my ability, what I believe is happening. Posted by myself; given my recent helpful use of Stack Overflow, this, I hope, will be of use to others...
Firstly, there is essentially nothing wrong with the code displayed, with one proviso mentioned later. Putting a Console.ReadKey() before program termination, after having done a PerformanceCounterCategory(categoryKey).ReadCategory(), makes it quite clear that not only are the registry entries correct (for this is where ReadCategory sources its results) but that the instance counters have all been incremented by the appropriate values. If one looks at PerfMon before the program terminates, the instance counters are there and they do contain the appropriate raw values.
This is the crux of my "problem" - or, rather, my incomplete understanding of the architecture: INSTANCE COUNTERS ARE TRANSIENT - INSTANCES ARE NOT PERSISTED BEYOND THE TERMINATION OF A PROGRAM/PROCESS. This, once it dawned on me, is "obvious" - for example, try using PerfMon to look at an instance counter of one of your IIS AppPools - then stop the AppPool and you will see, in PerfMon, that the Instance for the stopped AppPool is no longer visible.
Given this axiom about instance counters, the code above has another completely irrelevant section: in the method UpdateDefinedInstanceCounterDictionary, assigning the list from an existing counter set is pointless. Firstly, the "else" code shown will fail, since we are attempting to return a collection of (instance) counters for which this approach will not work; secondly, GetCategories() followed by GetCounters() and/or GetInstanceNames() is an extraordinarily expensive and time-consuming process, even if it were to work. The appropriate method to use is the one mentioned earlier, PerformanceCounterCategory(categoryKey).ReadCategory(). However, this returns an InstanceDataCollectionCollection which is effectively read-only, so as a provider (as opposed to a consumer) of counters it is pointless. In fact, it doesn't matter if you just use the Enum-generated new PerformanceCounter list; it works regardless of whether the counters already exist or not.
Anyway, the InstanceDataCollectionCollection (essentially what is demonstrated by the Win32 SDK for .NET 3.5 "Usermode Counter Sample") uses a "Sample" counter which is populated and returned, as per the usage of the System.Diagnostics.PerformanceData namespace. That looks like part of the version 2.0 usage, which is incompatible with the System.Diagnostics.PerformanceCounterCategory usage shown.
Admittedly, the fact of non-persistence may seem obvious and may well be stated in documentation, but if I were to read all the documentation about everything I need to use beforehand, I'd probably end up not actually writing any code! Furthermore, even if such pertinent documentation were easy to find (as opposed to experiences posted on, for example, Stack Overflow), I'm not sure I trust all of it. For example, I noted above that the instance name in the MSDN documentation has a 128-character limit. Wrong; it is actually 127, since the underlying string must be null-terminated. Also, for ETW, I wish it were made more obvious that keyword values must be powers of 2 and that opcodes with a value of less than 12 are used by the system; at least PerfView was able to show me this.
Ultimately this question has no "answer" other than a better understanding of instance counters, especially their persistence. Since my code is intended for use in a Windows-Service-based Web API, persistence is not an issue (especially with daily use of LogMan etc.). The confusing thing is that the damn things didn't appear until I paused the code and checked PerfMon; I could have saved myself a lot of time and hassle had I known this beforehand. In any event, my ETW event source logs all elapsed execution times and instances of what the performance counters "monitor" anyway.

Can an object be declared above a using statement instead of in the brackets

Most of the examples of the using statement in C# declare the object inside the brackets like this:
using (SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", connection))
{
    // Code goes here
}
What happens if I use the using statement in the following way with the object declared outside the using statement:
SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", connection);
using (cmd)
{
    // Code goes here
}
Is it a bad idea to use the using statement in the way I have in the second example and why?
Declaring the variable inside the using statement's control expression limits its scope to the using statement. In your second example, the variable cmd can still be used after the using statement (by which point it will have been disposed).
Generally it is recommended to use a variable for only one purpose; limiting its scope allows another variable with the same name later in scope (perhaps in another using expression). Perhaps more importantly, it tells a reader of your code (and maintenance takes more effort than initial writing) that cmd is not used beyond the using statement: your code becomes a little more understandable.
Yes, that is valid: the object will still be disposed in the same manner, i.e., at the end of the block or when execution flow leaves it (return / exception).
However, if you try to use the object again after the using, it will have been disposed, and you cannot know whether the instance is safe to continue using, since Dispose doesn't have to reset the object's state. Also, if an exception occurs during construction, execution will never have reached the using block.
I'd declare and initialize the variable inside the statement to define its scope. Chances are very good you won't need it outside the scope if you are using a using anyway.
MemoryStream ms = new MemoryStream(); // Initialisation not compiled into the using.
using (ms) { }
int i = ms.ReadByte(); // Will fail on closed stream.
Below is valid, but somewhat unnecessary in most cases:
MemoryStream ms = null;
using (ms = new MemoryStream())
{ }
// Do not continue to use ms unless re-initializing.
I wrote a little code along with some unit tests. I like it when I can validate statements about the question at hand. My findings:
Whether an object is created before or in the using statement doesn't matter. It must implement IDisposable and Dispose() will be called upon leaving the using statement block (closing brace).
If the constructor throws an exception when invoked in the using statement Dispose() will not be invoked. This is reasonable as the object has not been successfully constructed when an exception is thrown in the constructor. Therefore no instance exists at that point and calling instance members (non-static members) on the object doesn't make sense. This includes Dispose().
To reproduce my findings, please refer to the source code below.
So bottom line you can - as pointed out by others - instantiate an object ahead of the using statement and then use it inside the using statement. I also agree, however, moving the construction outside the using statement leads to code that is less readable.
One more item that you may want to be aware of is the fact that some classes can throw an exception in the Dispose() implementation. Although the guideline is not to do that, even Microsoft has cases of this, e.g. as discussed here.
So here is my source code include a (lengthy) test:
public class Bar : IDisposable {
    public Bar() {
        DisposeCalled = false;
    }

    public void Blah() {
        if (DisposeCalled) {
            // The object was disposed; you shouldn't use it anymore.
            throw new ObjectDisposedException("Object was already disposed.");
        }
    }

    public void Dispose() {
        // Give back / free up resources that were used by the Bar object.
        DisposeCalled = true;
    }

    public bool DisposeCalled { get; private set; }
}

public class ConstructorThrows : IDisposable {
    public ConstructorThrows(int argument) {
        throw new ArgumentException("argument");
    }

    public void Dispose() {
        Log.Info("Constructor.Dispose() called.");
    }
}

[Test]
public void Foo() {
    var bar = new Bar();
    using (bar) {
        bar.Blah(); // OK to call.
    } // Upon hitting this closing brace, Dispose() is invoked on bar.

    try {
        bar.Blah(); // Throws ObjectDisposedException.
        Assert.Fail();
    }
    catch (ObjectDisposedException) {
        // This exception is expected here.
    }

    using (bar = new Bar()) { // The variable can be reused, though.
        bar.Blah(); // Fine to call, as this is a second instance.
    }

    // The following code demonstrates that Dispose() won't be called if
    // the constructor throws an exception:
    using (var throws = new ConstructorThrows(35)) {
    }
}
The idea behind using is to define a scope, outside of which an object or objects will be disposed.
If you declare the object you are about to use ahead of the using, there's little point in the using statement's scoping benefit at all.
It has been answered, and the answer is: yes, it's possible. However, from a programmer's viewpoint, don't do it! It will confuse any programmer working on this code who doesn't expect such a construction. Basically, if you hand the code to someone else to work on, that person could end up very confused if they use the cmd variable after the using. This becomes even worse if there are more lines of code between the creation of the object and the using part.
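For completeness, newer C# (8.0 and later) offers a middle ground worth knowing about: a using declaration keeps the disposal guarantee while scoping the variable to the enclosing block, so the "use after dispose" mistake cannot compile. A minimal sketch:

```csharp
using System.IO;

class Demo
{
    public static int FirstByte(string path)
    {
        // C# 8 "using declaration": fs is disposed when control leaves
        // this block, and fs cannot be referenced after the block ends.
        using var fs = new FileStream(path, FileMode.Open);
        return fs.ReadByte();
    }
}
```

This gives the readability of declaring the variable "up front" without the hazard discussed above, since the compiler, not the reader, enforces that the variable is never used after disposal.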
