Configuring resiliency settings in Entity Framework 6.0.2 - C#

I'm trying to configure resiliency settings for EF 6.0.2. I have an application which at one point of execution writes a log entry to the database. The application does not depend on the SQL Server, so if the server does not respond, I want the application to abandon the INSERT query (through SaveChanges on the DbContext) and continue execution immediately.
Using default settings, the debug log outputs ten occurrences of
"A first chance exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll"
After the ten tries, it throws an exception and my code continues. But I want just one try and, for example, a 2 second command timeout. According to the documentation on MSDN, the default resiliency method for SQL Server is:
DefaultSqlExecutionStrategy: this is an internal execution strategy that is used by default. This strategy does not retry at all, however, it will wrap any exceptions that could be transient to inform users that they might want to enable connection resiliency.
As the documentation mentions, this strategy does not retry at all. But I still get ten retries. I have tried to create a class which inherits from DbConfiguration, but I have not found any documentation on how to change this.
Can anyone help me reduce the number of retries?
UPDATE: Below is code based on the suggestions:
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;
using System.Data.Entity.Infrastructure;
using System.Runtime.Remoting.Messaging;

namespace MyDbLayer
{
    public class MyConfiguration : DbConfiguration
    {
        public MyConfiguration()
        {
            this.SetExecutionStrategy("System.Data.SqlClient", () => SuspendExecutionStrategy
                ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
                : new SqlAzureExecutionStrategy());
        }

        public static bool SuspendExecutionStrategy
        {
            get
            {
                return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false;
            }
            set
            {
                CallContext.LogicalSetData("SuspendExecutionStrategy", value);
            }
        }
    }
}
And the code writing to SQL:
try
{
    using (MyEntities context = new MyEntities())
    {
        Log logEntry = new Log();
        logEntry.TS = DateTime.Now;
        MyConfiguration.SuspendExecutionStrategy = true;
        context.Log.Add(logEntry);
        context.SaveChanges();
    }
}
catch (Exception ex)
{
    logger.Warn("Connection error with database server.", ex);
}
finally
{
    // Enable retries again...
    MyConfiguration.SuspendExecutionStrategy = false;
}

Did you try execution strategy suspension?
Your own DB configuration would be like this one:
public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        this.SetExecutionStrategy("System.Data.SqlClient", () => SuspendExecutionStrategy
            ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
            : new SqlAzureExecutionStrategy());
    }

    public static bool SuspendExecutionStrategy
    {
        get
        {
            return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false;
        }
        set
        {
            CallContext.LogicalSetData("SuspendExecutionStrategy", value);
        }
    }
}
Then you may define a non-retriable command like this:
public static void ExecWithoutRetry(System.Action action)
{
    var restoreExecutionStrategyState = MyConfiguration.SuspendExecutionStrategy;
    try
    {
        MyConfiguration.SuspendExecutionStrategy = true;
        action();
    }
    catch (Exception)
    {
        // ignore any exception if we want to
    }
    finally
    {
        MyConfiguration.SuspendExecutionStrategy = restoreExecutionStrategyState;
    }
}
And finally, your regular DB code with retriable and non-retriable commands may look like this:
using (var db = new MyContext())
{
    ExecWithoutRetry(() => db.WriteEvent("My event without retries"));
    db.DoAnyOperationWithRetryStrategy();
}

I found it. I just needed to add "Connection Timeout=X" (where X is in seconds) to the connection string, and everything worked without needing to modify execution strategies etc.
Adding this before executing the query solved it:
context.Database.Connection.ConnectionString = context.Database.Connection.ConnectionString + ";Connection Timeout=2;";
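Note that Connection Timeout only bounds how long establishing the connection may take; how long a command may run is a separate setting. A minimal sketch of capping that as well, assuming the MyEntities context from the question and EF6's Database.CommandTimeout property:
using (MyEntities context = new MyEntities())
{
    // Cap how long a single command may run, in seconds
    // (the connection timeout above only covers opening the connection).
    context.Database.CommandTimeout = 2;

    context.Log.Add(new Log { TS = DateTime.Now });
    context.SaveChanges();
}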


Store LiteDB documents in the cloud (Azure) blob storage

I am using LiteDB as a local database in my iOS and Android app implemented in Xamarin.Forms. I am trying to store my local LiteDB file in the cloud using Azure. We have implemented a REST API which can receive a byte[], but I am having problems getting the LiteDB documents into a byte[]. If I try to read the file using File.ReadAllBytes(LiteDbPath), where we have stored the LiteDB file, I get a System.IO.IOException: Sharing violation on path. I assume this is not the way to do this, but I am unable to figure out how. Does anyone have any suggestions?
It is possible I am using this the wrong way; I am quite inexperienced in this area.
Update: More details to make it clearer what I have done and what I want to do.
This is our DataStore class (where we use LiteDB):
[assembly: Dependency(typeof(DataStore<Zystem>))]
namespace AirGlow_App.Services
{
    class DataStore<T>
    {
        public void Close()
        {
            var db = new LiteRepository(LiteDbPath);
            db.Dispose();
        }

        public LiteQueryable<T> Get()
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    return db.Query<T>();
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Get. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                    return null;
                }
            }
        }

        public T Get(BsonValue id)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    return db.Query<T>().SingleById(id);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Get. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                    return default(T);
                }
            }
        }

        public void Add(T obj)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    db.Insert<T>(obj);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Add. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }

        public void Delete(Guid Id)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    var o = new BsonValue(Id);
                    db.Delete<T>(o);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Delete. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }

        public void Save(T obj)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    db.Update<T>(obj);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Save. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }
    }
}
Then we are using it like this:
public class ZystemsViewModel : ObservableObject
{
    private DataStore<Zystem> DB = DependencyService.Get<DataStore<Zystem>>();

    public ZystemsViewModel()
    {
        MessagingCenter.Subscribe<ZystemAddViewModel, Zystem>(this, "Add", (obj, item) =>
        {
            var newItem = item as Zystem;
            Debug.WriteLine($"Will add {newItem.Name} to local database.", TypeDescriptor.GetClassName(this));
            DB.Add(newItem);
        });
    }
}
A colleague who no longer works here wrote these parts. I think the reasoning for using a DependencyService was to be able to access it in all classes, pretty much as a singleton. Should this be changed to a singleton class instead?
Using the database works fine in the app. But I want to upload the entire database (file) to Azure, and I am unable to get it into a byte[]. When I do
byte[] liteDbFile = File.ReadAllBytes(LiteDbPath);
I get a System.IO.IOException: Sharing violation on path. As some have suggested, this is probably because the file is locked. Any suggestions on how to solve this?
LiteDB is a plain-file database and has no running service to access the data. If you create a REST API, you should locate your data file on the same machine (local disk) that runs your IIS.
Azure Blob Storage is another service and delivers files on request.
Think of LiteDB as a simple FileStream class (that works with a local file) with "super-powers" :)
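If the goal is just to get the raw bytes while the app still holds the database open, one workaround is to open the file with a permissive share mode. This is a sketch, not a LiteDB API, and it is an assumption that it helps here: it only works if LiteDB opened the file with a compatible share mode, so disposing the repository first remains the reliable route.
using System.IO;

public static byte[] ReadAllBytesShared(string path)
{
    // FileShare.ReadWrite requests read access while allowing the
    // handle LiteDB already holds to keep its own read/write access.
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var ms = new MemoryStream())
    {
        fs.CopyTo(ms);
        return ms.ToArray();
    }
}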

Better way than try-catch to pass failure messages back to Web API caller?

I've got many Web API calls that delegate to methods in data-layer classes that call my ORM (Entity Framework) and look like this:
public OperationResult DeleteThing(Guid id)
{
    var result = new OperationResult() { Success = true };
    using (var context = this.GetContext())
    {
        try
        {
            context.Things.Where(x => x.Id == id).Delete();
            context.SaveChanges();
        }
        catch (Exception ex)
        {
            Logger.Instance.LogException(ex);
            result.AddError("There was a database error deleting the thing. Check log for details.");
        }
        return result;
    }
}
(You may recognize the return value as similar to the Notification pattern.)
So I have many of these identical try-catch blocks, and it smells bad to me. I'd like to get rid of them all and use a global exception handler to log errors instead. But in addition to logging, I also need to pass back a message specific to each service method, so that the consumer can surface the message as the result of the service call. Web service consumers, e.g. our web site, can ultimately display the message generated here to clue the user in to the nature of the error.
Can anyone suggest a better way? My instinct is to go through and replace the blocks with catches of specific exception types, but that seems like a lot of work for zero practical benefit and a harm to my code's maintainability.
Similar to Stuart's answer, you can also use a Filter attribute inherited from ExceptionFilterAttribute to modify the response based on any input you require.
Here's a full working example that accomplishes:
- Custom message per exception type
- Modifying the operation result
- Fall-through generic message for all other exception types
ValuesController.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;
using Demo.Models;
namespace Demo.Controllers
{
public class ValuesController : ApiController
{
// DELETE api/values/5
[OperationError("The operation failed to delete the entity")]
public OperationResult Delete(int id)
{
throw new ArgumentException("ID is bad", nameof(id));
}
// DELETE api/values/5?specific=[true|false]
[OperationError("The operation tried to divide by zero", typeof(DivideByZeroException))]
[OperationError("The operation failed for no specific reason")]
public OperationResult DeleteSpecific(int id, bool specific)
{
if (specific)
{
throw new DivideByZeroException("DBZ");
}
else
{
throw new ArgumentException("ID is bad", nameof(id));
}
}
}
public class OperationErrorAttribute : ExceptionFilterAttribute
{
public Type ExceptionType { get; }
public string ErrorMessage { get; }
public OperationErrorAttribute(string errorMessage)
{
ErrorMessage = errorMessage;
}
public OperationErrorAttribute(string errorMessage, Type exceptionType)
{
ErrorMessage = errorMessage;
ExceptionType = exceptionType;
}
public override void OnException(HttpActionExecutedContext actionExecutedContext)
{
// Exit early for non OperationResult action results
if (actionExecutedContext.ActionContext.ActionDescriptor.ReturnType != typeof(OperationResult))
{
base.OnException(actionExecutedContext);
return;
}
OperationResult result = new OperationResult() {Success = false};
// Add error for specific exception types
Type exceptionType = actionExecutedContext.Exception.GetType();
if (ExceptionType != null)
{
if (exceptionType == ExceptionType)
{
result.AddError(ErrorMessage);
}
else
{
// Fall through
base.OnException(actionExecutedContext);
return;
}
}
else if (ErrorMessage != null)
{
result.AddError(ErrorMessage);
}
// TODO: Log exception, generate correlation ID, etc.
// Set new result
actionExecutedContext.Response =
actionExecutedContext.Request.CreateResponse(HttpStatusCode.InternalServerError, result);
base.OnException(actionExecutedContext);
}
}
}
(Screenshots of the resulting responses for the specific and generic exception cases omitted.)
You could move your logic up the stack into a custom ExceptionHandler. This is a simple example, but the basic idea is to handle specific exceptions and control the status code and (not pictured below) normalize error messages for the caller.
public class ApiExceptionHandler : ExceptionHandler
{
    public override void Handle(ExceptionHandlerContext context)
    {
        if (context == null) throw new ArgumentNullException("context");

        LogManager.GetLoggerForCurrentClass().Error(context.Exception, "Captured in ExceptionHandler");

        if (context.Exception.GetType() == typeof(NotFoundException))
        {
            context.Result = new NotFoundResult(context.Request);
        }
        else if (context.Exception.GetType() == typeof(ArgumentException))
        {
            // no-op - probably a routing error, which will return a bad request with info
        }
        else if (context.Exception.GetType() == typeof(ArgumentNullException))
        {
            context.Result = new BadRequestResult(context.Request);
        }
        else
        {
            context.Result = new InternalServerErrorResult(context.Request);
        }
    }
}
Hook this up in the WebApiConfig:
config.Services.Replace(typeof(IExceptionHandler), new ApiExceptionHandler());
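For context, in a standard Web API project that Replace call sits inside WebApiConfig.Register; a sketch, with the route setup being just the default template:
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Route unhandled exceptions through the custom handler.
        config.Services.Replace(typeof(IExceptionHandler), new ApiExceptionHandler());

        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}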

ServiceStack Redis problems with simultaneous read requests

I'm using the ServiceStack.Redis implementation for caching events delivered over a Web API interface. Those events should be inserted into the cache and automatically removed after a while (e.g. 3 days):
private readonly IRedisTypedClient<CachedMonitoringEvent> _eventsCache;

public EventMonitorCache([NotNull]IRedisTypedClient<CachedMonitoringEvent> eventsCache)
{
    _eventsCache = eventsCache;
}

public void Dispose()
{
    //Release connections again
    _eventsCache.Dispose();
}

public void AddOrUpdate(MonitoringEvent monitoringEvent)
{
    if (monitoringEvent == null)
        return;

    try
    {
        var cacheExpiresAt = DateTime.Now.Add(CacheExpirationDuration);

        CachedMonitoringEvent cachedEvent;
        string eventKey = CachedMonitoringEvent.CreateUrnId(monitoringEvent);
        if (_eventsCache.ContainsKey(eventKey))
        {
            cachedEvent = _eventsCache[eventKey];
            cachedEvent.SetExpiresAt(cacheExpiresAt);
            cachedEvent.MonitoringEvent = monitoringEvent;
        }
        else
            cachedEvent = new CachedMonitoringEvent(monitoringEvent, cacheExpiresAt);

        _eventsCache.SetEntry(eventKey, cachedEvent, CacheExpirationDuration);
    }
    catch (Exception ex)
    {
        Log.Error("Error while caching MonitoringEvent", ex);
    }
}

public List<MonitoringEvent> GetAll()
{
    IList<CachedMonitoringEvent> allEvents = _eventsCache.GetAll();
    return allEvents
        .Where(e => e.MonitoringEvent != null)
        .Select(e => e.MonitoringEvent)
        .ToList();
}
The StructureMap 3 registry looks like this:
public class RedisRegistry : Registry
{
    private static readonly RedisConfiguration RedisConfiguration = Config.Feeder.Redis;

    public RedisRegistry()
    {
        For<IRedisClientsManager>().Singleton().Use(BuildRedisClientsManager());
        For<IRedisTypedClient<CachedMonitoringEvent>>()
            .AddInstances(i => i.ConstructedBy(c => c.GetInstance<IRedisClientsManager>()
                .GetClient().GetTypedClient<CachedMonitoringEvent>()));
    }

    private static IRedisClientsManager BuildRedisClientsManager()
    {
        return new PooledRedisClientManager(RedisConfiguration.Host + ":" + RedisConfiguration.Port);
    }
}
The first scenario is to retrieve all cached events (several hundred) and deliver them over OData V3 and OData V4 to Excel PowerTools for visualization. This works as expected:
public class MonitoringEventsODataV3Controller : EntitySetController<MonitoringEvent, string>
{
    private readonly IEventMonitorCache _eventMonitorCache;

    public MonitoringEventsODataV3Controller([NotNull]IEventMonitorCache eventMonitorCache)
    {
        _eventMonitorCache = eventMonitorCache;
    }

    [ODataRoute("MonitoringEvents")]
    [EnableQuery(AllowedQueryOptions = AllowedQueryOptions.All)]
    public override IQueryable<MonitoringEvent> Get()
    {
        var allEvents = _eventMonitorCache.GetAll();
        return allEvents.AsQueryable();
    }
}
But what I'm struggling with is the OData filtering which Excel PowerQuery does. I'm aware that I'm not doing any server-side filtering yet, but that doesn't matter currently. When I filter on any property and click refresh, PowerQuery sends multiple requests (I saw up to three) simultaneously. I believe it's fetching the whole dataset first and then executing the following requests with filters. This results in various exceptions from ServiceStack.Redis:
An exception of type 'ServiceStack.Redis.RedisResponseException' occurred in ServiceStack.Redis.dll but was not handled in user code
With additional information like:
Additional information: Unknown reply on multi-request: 117246333|company|osdmonitoringpreinst|2014-12-22|113917, sPort: 54980, LastCommand:
Or
Additional information: Invalid termination, sPort: 54980, LastCommand:
Or
Additional information: Unknown reply on multi-request: 57, sPort: 54980, LastCommand:
Or
Additional information: Type definitions should start with a '{', expecting serialized type 'CachedMonitoringEvent', got string starting with: u259447|company|osdmonitoringpreinst|2014-12-18|1
All of those exceptions happen on _eventsCache.GetAll().
There must be something I'm missing. I'm sure Redis is capable of handling a LOT of requests "simultaneously" on the same set, but apparently I'm doing it wrong. :)
Btw: Redis 2.8.12 is running on a Windows Server 2008 machine (soon 2012).
Thanks for any advice!
The error messages are indicative of using a non-thread-safe instance of the RedisClient across multiple threads, since it's getting responses to requests it didn't expect/send.
To ensure you're using it correctly, I would only pass in the thread-safe IRedisClientsManager singleton, e.g.:
public EventMonitorCache([NotNull]IRedisClientsManager redisManager)
{
    this.redisManager = redisManager;
}
Then explicitly resolve and dispose of the redis client in your methods, e.g:
public void AddOrUpdate(MonitoringEvent monitoringEvent)
{
    if (monitoringEvent == null)
        return;

    try
    {
        using (var redis = this.redisManager.GetClient())
        {
            var _eventsCache = redis.As<CachedMonitoringEvent>();
            var cacheExpiresAt = DateTime.Now.Add(CacheExpirationDuration);

            CachedMonitoringEvent cachedEvent;
            string eventKey = CachedMonitoringEvent.CreateUrnId(monitoringEvent);
            if (_eventsCache.ContainsKey(eventKey))
            {
                cachedEvent = _eventsCache[eventKey];
                cachedEvent.SetExpiresAt(cacheExpiresAt);
                cachedEvent.MonitoringEvent = monitoringEvent;
            }
            else
                cachedEvent = new CachedMonitoringEvent(monitoringEvent, cacheExpiresAt);

            _eventsCache.SetEntry(eventKey, cachedEvent, CacheExpirationDuration);
        }
    }
    catch (Exception ex)
    {
        Log.Error("Error while caching MonitoringEvent", ex);
    }
}
And in GetAll():
public List<MonitoringEvent> GetAll()
{
    using (var redis = this.redisManager.GetClient())
    {
        var _eventsCache = redis.As<CachedMonitoringEvent>();
        IList<CachedMonitoringEvent> allEvents = _eventsCache.GetAll();
        return allEvents
            .Where(e => e.MonitoringEvent != null)
            .Select(e => e.MonitoringEvent)
            .ToList();
    }
}
This will work irrespective of the lifetime your EventMonitorCache dependency is registered with, e.g. it's safe to hold as a singleton since EventMonitorCache no longer holds onto a redis server connection.
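With that change, the StructureMap registration from the question can shrink to just the manager; a sketch based on the registry above:
public class RedisRegistry : Registry
{
    private static readonly RedisConfiguration RedisConfiguration = Config.Feeder.Redis;

    public RedisRegistry()
    {
        // Register only the thread-safe manager; each method resolves a
        // short-lived client from it and disposes it when done.
        For<IRedisClientsManager>().Singleton()
            .Use(new PooledRedisClientManager(RedisConfiguration.Host + ":" + RedisConfiguration.Port));
    }
}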

Asynchronous insert in Azure Table

How to asynchronously save an entity to Windows Azure Table Service?
The code below works synchronously but raises an exception when trying to save asynchronously.
This statement:
context.BeginSaveChangesWithRetries(SaveChangesOptions.Batch,
    (asyncResult => context.EndSaveChanges(asyncResult)), null);
Results in System.ArgumentException: "The current object did not originate the async result. Parameter name: asyncResult".
Additionally, what's the correct pattern for creating the service context when saving asynchronously? Should I create a separate context for each write operation? Is it too expensive (e.g. requiring a call over the network)?
TableStorageWriter.cs:
using System;
using System.Data.Services.Client;
using System.Diagnostics;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
namespace WorkerRole1
{
public class TableStorageWriter
{
private const string _tableName = "StorageTest";
private readonly CloudStorageAccount _storageAccount;
private CloudTableClient _tableClient;
public TableStorageWriter()
{
_storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
_tableClient = _storageAccount.CreateCloudTableClient();
_tableClient.CreateTableIfNotExist(_tableName);
}
public void Write(string message)
{
try
{
DateTime now = DateTime.UtcNow;
var entity = new StorageTestEntity
{
Message = message,
PartitionKey = string.Format("{0:yyyy-MM-dd}", now),
RowKey = string.Format("{0:HH:mm:ss.fff}-{1}", now, Guid.NewGuid())
};
// Should I get this context before each write? It is efficient?
TableServiceContext context = _tableClient.GetDataServiceContext();
context.AddObject(_tableName, entity);
// This statement works but it's synchronous
context.SaveChangesWithRetries();
// This attempt at saving asynchronously results in System.ArgumentException:
// The current object did not originate the async result. Parameter name: asyncResult
// context.BeginSaveChangesWithRetries(SaveChangesOptions.Batch,
// (asyncResult => context.EndSaveChanges(asyncResult)), null);
}
catch (StorageClientException e)
{
Debug.WriteLine("Error: {0}", e.Message);
Debug.WriteLine("Extended error info: {0} : {1}",
e.ExtendedErrorInformation.ErrorCode,
e.ExtendedErrorInformation.ErrorMessage);
}
}
}
internal class StorageTestEntity : TableServiceEntity
{
public string Message { get; set; }
}
}
Called from WorkerRole.cs:
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using log4net;

namespace WorkerRole1
{
    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            var storageWriter = new TableStorageWriter();
            while (true)
            {
                Thread.Sleep(10000);
                storageWriter.Write("Working...");
            }
        }

        public override bool OnStart()
        {
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }
}
Examples using Windows Azure SDK for .NET 1.8.
You should call EndSaveChangesWithRetries instead of EndSaveChanges, as otherwise the IAsyncResult object returned by BeginSaveChangesWithRetries cannot be used by EndSaveChanges. So, could you please try changing your End method call as below?
context.BeginSaveChangesWithRetries(SaveChangesOptions.Batch,
    (asyncResult => context.EndSaveChangesWithRetries(asyncResult)),
    null);
And for your other question, I would recommend creating a new TableServiceContext for each call, as DataServiceContext is not stateless (MSDN) and the way you implemented TableStorageWriter.Write with the asynchronous call might allow concurrent operations. Actually, in Storage Client Library 2.0, we explicitly prevented concurrent operations that use a single TableServiceContext object. Moreover, creating a TableServiceContext does not result in a request to Azure Storage.
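Putting both points together, the asynchronous part of Write might look like this (a sketch against the SDK 1.8 types from the question; the fresh context per call is cheap since no network request is made when creating it):
public void Write(string message)
{
    DateTime now = DateTime.UtcNow;
    var entity = new StorageTestEntity
    {
        Message = message,
        PartitionKey = string.Format("{0:yyyy-MM-dd}", now),
        RowKey = string.Format("{0:HH:mm:ss.fff}-{1}", now, Guid.NewGuid())
    };

    // A fresh context per call: DataServiceContext is stateful, and this
    // avoids concurrent operations sharing one instance.
    TableServiceContext context = _tableClient.GetDataServiceContext();
    context.AddObject(_tableName, entity);

    // Begin/End must come from the same *WithRetries pair.
    context.BeginSaveChangesWithRetries(SaveChangesOptions.Batch,
        asyncResult => context.EndSaveChangesWithRetries(asyncResult),
        null);
}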

Rhino Mocks throws "Callback arguments didn't match the method arguments delegate" exception on the Do method

I'm using Rhino Mocks to change the behaviour of an NHibernate DAL so that when the commit transaction is called by the code, the mock framework changes the behaviour so the transaction is rolled back. I am doing this for integration testing, but I don't want to add any data to the database.
Here is the method/class under test:
public class NHibernateDALSave<T> : IBaseDALSave<T> where T : class
{
    protected ISession _session;
    protected ISessionFactory _sessionFactory;

    public NHibernateDALSave()
    {
        _sessionFactory = new Configuration().Configure().BuildSessionFactory();
    }

    public NHibernateDALSave(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void OpenSession()
    {
        if (_sessionFactory == null)
        {
            _sessionFactory = new Configuration().Configure().BuildSessionFactory();
        }
        _session = _sessionFactory.OpenSession();
    }

    public virtual int Save(T objectToSave)
    {
        this.OpenSession();
        using (_session)
        {
            using (ITransaction tx = _session.BeginTransaction())
            {
                try
                {
                    Int32 NewId = Convert.ToInt32(_session.Save(objectToSave));
                    _session.Flush();
                    tx.Commit();
                    return NewId;
                }
                catch (Exception)
                {
                    tx.Rollback();
                    throw;
                }
            }
        }
    }
}
This is the test code:
public void SaveEmployee_Blank_Success()
{
    //setup employee object to save
    EmployeeDataContext employee = new EmployeeDataContext();
    employee.Person = new PersonDataContext();
    employee.PayRollNo = "12345";
    employee.Person.Surname = "TEST";

    //stub classes
    ISessionFactory SessionFactoryStub = MockRepository.GenerateMock<ISessionFactory>();
    ISession SessionStub = MockRepository.GenerateMock<ISession>();
    ITransaction TranStub = MockRepository.GenerateMock<ITransaction>();

    //Actual classes
    ISessionFactory sessionFactory = new Configuration().Configure().BuildSessionFactory();
    ISession Session = sessionFactory.OpenSession();
    ITransaction Tran = Session.BeginTransaction();

    try
    {
        //Configure to prevent commits to the database
        SessionStub.Stub(ss => ss.BeginTransaction()).Return(TranStub);
        SessionStub.Stub(ss => ss.Save(Arg<EmployeeDataContext>.Is.Anything)).Do((Action)delegate { Session.Save(employee); });
        SessionStub.Stub(ss => ss.Flush()).Do((Action)delegate { Session.Flush(); });
        TranStub.Stub(ts => ts.Commit()).Do((Action)delegate { Tran.Rollback(); });
        TranStub.Stub(ts => ts.Rollback()).Do((Action)delegate { Tran.Rollback(); });
        SessionFactoryStub.Stub(sf => sf.OpenSession()).Return(SessionStub);

        NHibernateDALSave<EmployeeDataContext> target = new NHibernateDALSave<EmployeeDataContext>(SessionFactoryStub);
        target.Save(employee);
    }
    catch
    {
        Tran.Rollback();
        throw;
    }
}
The error I am getting is "Callback arguments didn't match the method arguments delegate", which occurs on the second line after the start of the try/catch block.
Can anyone help me with the meaning of this error message and what I can do to resolve it? Or does anyone have any suggestions on how to carry out integration testing with NHibernate?
Al
I haven't used Rhino Mocks, but I have used other mock frameworks. I think the problem is that your Save method takes a single parameter, but the delegate that you've supplied to the Do callback does not take an argument.
That line should probably be like this:
SessionStub.Stub(ss => ss.Save(Arg<EmployeeDataContext>.Is.Anything))
    .Do((Func<object, object>)(arg => Session.Save(employee))); // delegate must match Save's signature: one object argument, object return value
Matt's answer is correct, but also consider using WhenCalled instead of Do. It's much easier to use when you don't actually need the parameters passed in, as in your case.
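A sketch of the WhenCalled variant, reusing the stubs from the question; since ISession.Save returns a value, the stub still needs a Return, and setting invocation.ReturnValue supplies the actual result:
SessionStub.Stub(ss => ss.Save(Arg<EmployeeDataContext>.Is.Anything))
    .WhenCalled(invocation => invocation.ReturnValue = Session.Save(employee))
    .Return(null); // placeholder; overwritten by ReturnValue above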
