I am using Xamarin Forms and the WindowsAzure.MobileServices.SQLiteStore NuGet package to handle synchronization between my SQLite database and my Azure database. Everything works fine, but when I log my user out I want to clean the database, so that on the next login it regenerates the database and synchronizes from scratch. I have tried purging the tables, but that only removes local data, and when you log back in it synchronizes only new data.
Currently my dispose does the following:
// Globally declared and initialized in my init() method
MobileServiceSQLiteStore store { get; set; }
public MobileServiceClient MobileService { get; set; }
public async Task Dispose()
{
    try
    {
        initialized = false;
        // await this.userTable.PurgeAsync<AzureUser>("CleanUsers", this.userTable.CreateQuery(), CancellationToken.None);
        store.Dispose();
        MobileService.Dispose();
        store = null;
        MobileService = null;
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
Any idea of how I can clean my sqlite database on logout using this component? Thanks!
If you want to purge all items, use:
this.userTable.PurgeAsync(null, null, true, CancellationToken.None);
See the code.
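For example, a minimal logout cleanup using that call could look like this (a sketch only, assuming userTable and any other sync tables are the IMobileServiceSyncTable fields initialized in your init() method):

public async Task CleanLocalStoreAsync()
{
    // Force-purge (the 'true' argument) discards pending operations and deletes all local rows,
    // so the next login triggers a full re-sync from the server.
    await this.userTable.PurgeAsync(null, null, true, CancellationToken.None);
    // Repeat for every other sync table before disposing the store in Dispose().
}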
I have created a service which communicates with my database. The service's GetAvailableUserId method cannot be allowed to run simultaneously, because I don't want to return the same user's id for two different calls. So far I have this:
public class UserService : IUserService
{
    public int GetAvailableUserId()
    {
        using (var context = new UsersEntities())
        {
            using (var transaction = context.Database.BeginTransaction())
            {
                var availableUser = context.User
                    .Where(x => x.Available)
                    .FirstOrDefault();
                if (availableUser == null)
                {
                    throw new Exception("No available users.");
                }
                availableUser.Available = false;
                context.SaveChanges();
                transaction.Commit();
                return availableUser.Id;
            }
        }
    }
}
I wanted to test whether the service would work as intended, so I created a simple console application to simulate concurrent requests:
Parallel.For(1, 100, (i, state) => {
    var service = new UserServiceReference.UserServiceClient();
    var id = service.GetAvailableUserId();
});
Unfortunately, it failed that simple test. I can see that it returned the same id for different iterations.
What's wrong there?
If I understood you correctly, you want to lock the method from other threads. If yes, then use lock:
public class UserService : IUserService
{
    static object lockObject = new object();

    public int GetAvailableUserId()
    {
        lock (lockObject)
        {
            // your code is omitted for brevity
        }
    }
}
You need to spend some time and delve into the intricacies of SQL Server and EntityFramework.
Basically:
You need a transaction isolation level strict enough that two concurrent calls cannot both read the same row as available (for example, serializable).
You need to wrap the EntityFramework interactions in one transaction so that multiple instances do not return the same row from the query and then conflict on the save.
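As a hedged sketch of that transaction approach (assuming EF6 and the UsersEntities context from the question; IsolationLevel comes from System.Data):

public int GetAvailableUserId()
{
    using (var context = new UsersEntities())
    using (var transaction = context.Database.BeginTransaction(IsolationLevel.Serializable))
    {
        // Under a serializable transaction, two concurrent calls cannot both read
        // the same row as available and then both mark it unavailable.
        var availableUser = context.User.FirstOrDefault(x => x.Available);
        if (availableUser == null)
        {
            throw new Exception("No available users.");
        }
        availableUser.Available = false;
        context.SaveChanges();
        transaction.Commit();
        return availableUser.Id;
    }
}

Under contention one of the callers may block or be chosen as a deadlock victim and have to retry, but two calls can no longer both return the same id.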
An alternative way to achieve this is to catch DbUpdateConcurrencyException when you try to save, to check whether values in the row have changed since they were retrieved.
So if, for example, the same record is retrieved twice, the first caller to update the Available value in the database will cause the other to throw a concurrency exception when it tries to save, because the value has changed since it was retrieved.
Microsoft - handling Concurrency Conflicts.
Add the ConcurrencyCheck attribute above the Available property in your entity.
[ConcurrencyCheck]
public bool Available { get; set; }
Then:
public int GetAvailableUserId()
{
    using (var context = new UsersEntities())
    {
        try
        {
            var availableUser = context.User
                .Where(x => x.Available)
                .FirstOrDefault();
            if (availableUser == null)
            {
                throw new Exception("No available users.");
            }
            availableUser.Available = false;
            context.SaveChanges();
            return availableUser.Id;
        }
        catch (DbUpdateConcurrencyException)
        {
            // If the same row was already retrieved and updated to false, do not save;
            // instead call the method again to get the next available row.
            return GetAvailableUserId();
        }
    }
}
What is a good way to bubble up a DbUpdateConcurrencyException to the view from the grain?
I'm currently working on an Orleans prototype that has a custom state; I'm using Entity Framework Core to communicate with the DB and the optimistic concurrency patterns built into EF Core to manage the concurrency issues.
Where I'm having an issue is that I want to bubble up my Exception from the grain to the view and am not receiving it on the view end.
I'm trying to accomplish this because I want to deal with some of the concurrency issues that are more pressing on the view so that the user can decide or at least be alerted to the issue.
I brought this up on the Orleans Gitter, but didn't get many ideas from it.
Example of my code for updating:
public Task UpdateUser(User user)
{
    // Reason for second try/catch is to bubble the error to controller
    try
    {
        userState = new User
        {
            Username = this.GetPrimaryKeyString(),
            accountType = user.accountType,
            FName = user.FName,
            LName = user.LName,
            RowVersion = user.RowVersion,
            CreatedDate = user.CreatedDate
        };
        UpdateState();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        throw ex;
    }
    return Task.CompletedTask;
}
public Task UpdateState()
{
    using (var context = new OrleansContext())
    {
        context.users.Update(userState);
        try
        {
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            var entry = ex.Entries.Single();
            var clientValues = (User)entry.Entity;
            var databaseEntry = entry.GetDatabaseValues();
            // Make sure the row wasn't deleted
            if (databaseEntry != null)
            {
                var databaseValues = (User)databaseEntry.ToObject();
                if (clientValues.accountType != databaseValues.accountType)
                {
                    // Bubble up the exception to controller for proper handling
                    throw ex;
                }
                // Update Row Version to allow update
                userState.RowVersion = databaseValues.RowVersion;
                context.SaveChanges();
            }
        }
    }
    return Task.CompletedTask;
}
I'm open to any suggestions on this as long as it allows the user to be alerted to the Exception and can view their data and the current DB values.
There is a chance that the exception is not being serialized or deserialized correctly. The primary reasons for this could be:
The Exception class does not correctly implement the ISerializable pattern.
The assembly which contains the Exception class is not present on the client, so the client does not understand how to create the Exception type.
In this case, I would lean towards the second reason, because most (but not all!) Exception classes do correctly implement the ISerializable pattern.
In either case, you can catch your exception and turn it into a generic exception.
You could create a helper method to do this using the LogFormatter.PrintException(Exception) method from Orleans to format the exception as a string.
public static void ThrowPlainException(Exception e) =>
    throw new Exception(Orleans.Runtime.LogFormatter.PrintException(e));
The solution I came to was to create a custom exception class that is serializable, add the database values object to it, and bubble that up to the views.
[Serializable]
public class UpdateException : Exception
{
    public object databaseValues { get; set; }
    public UpdateException(object databaseValues)
    {
        this.databaseValues = databaseValues;
    }
    public UpdateException(string message, object databaseValues) : base(message)
    {
        this.databaseValues = databaseValues;
    }
    // Serialization constructor and GetObjectData (System.Runtime.Serialization) so the
    // databaseValues field survives serialization across the grain/client boundary.
    protected UpdateException(SerializationInfo info, StreamingContext context) : base(info, context)
    {
        databaseValues = info.GetValue(nameof(databaseValues), typeof(object));
    }
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        base.GetObjectData(info, context);
        info.AddValue(nameof(databaseValues), databaseValues);
    }
}
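As an illustration of consuming it, a hedged sketch of the controller side (the grain reference, the assumption that the grain throws UpdateException(databaseValues), and the ModelState handling are illustrative, not from the original post):

try
{
    await userGrain.UpdateUser(user);
}
catch (UpdateException ex)
{
    // Surface the conflict to the user together with the current database values,
    // so they can decide whether to overwrite or reload.
    var databaseValues = (User)ex.databaseValues;
    ModelState.AddModelError(string.Empty, "The record was modified by another user.");
}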
I have a scenario that requires adding a record to a table, then creating a resource in the cloud if the record was added, then updating the record in the table with the resource identifier once the resource has been created in the cloud. So there are 3 operations, and I want to revert all of them when any one of them does not succeed.
We have TransactionScope for multiple DB operations in one go, but I'm wondering how to achieve this. Appreciate your help!
Edit
PS: There could be any number of operations like that - say 10 or more in a sequence - and they may not even be related to DB operations. They could just be creating 10 files in a sequence, so when any of the file creations fails, all the previous files should be deleted/undone.
How about going the command pattern way? It may not be a perfect command pattern implementation, but it is something very close. See below:
public interface ICommand {
    ICommandResult Execute();
    ICommandResult Rollback();
}

public interface ICommandResult {
    bool Success { get; set; }
    object Data { get; set; }
    Exception Error { get; set; }
}

public class CommandResult : ICommandResult {
    public bool Success { get; set; }
    public object Data { get; set; }
    public Exception Error { get; set; }
}

public class AddToDBCommand : ICommand {
    private ICommandResult result;
    private int newRecordId;

    public AddToDBCommand(<params_if_any>) {
        result = new CommandResult();
    }

    public ICommandResult Execute() {
        try {
            // insert record into db
            result.Success = true;
            result.Data = 10; // new record id
        }
        catch (Exception ex) {
            result.Success = false;
            result.Error = ex;
        }
        return result;
    }

    public ICommandResult Rollback() {
        try {
            // delete record inserted by this command instance
            // use ICommandResult.Data to get the 'record id' for deletion
            Console.WriteLine("Rolling back insertion of record id: " + result.Data);
            // set Success
        }
        catch (Exception ex) {
            // set Success and Error
            // I'm not sure what you want to do in such case
        }
        return result;
    }
}
Similarly you would create commands for creating the cloud resource and updating the record in the db. In the main code you can hold a collection of ICommand objects and execute each one.
var commands = new List<ICommand>
{
    new AddToDBCommand(<params_if_any>),
    new AddToCloudCommand(<params_if_any>),
    new UpdateInDBCommand(<param_if_any>)
};
Then in a loop you can call Execute on each command; if it returns Success = false, record the index of the current command in the collection and loop backward from there, calling Rollback on each command that has already run.
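A minimal sketch of that driver loop, assuming the commands list declared above:

var executed = new List<ICommand>();
foreach (var command in commands)
{
    var result = command.Execute();
    if (result.Success)
    {
        executed.Add(command);
        continue;
    }
    // A command failed: undo the commands that already ran, most recent first.
    executed.Reverse();
    foreach (var done in executed)
    {
        done.Rollback();
    }
    break;
}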
I assume you are using Azure as your cloud.
So to support transactions you need:
1. An elastic database on Azure, which supports transactions.
2. .NET Framework 4.6.1 or higher to utilize distributed transactions.
I encourage you to go through https://learn.microsoft.com/en-us/azure/sql-database/sql-database-elastic-transactions-overview
Now in your case, let's break down the 3 steps, assuming a transaction scope is applied.
Add the record to the table -
If this fails then no worries, I guess.
Create the resource in the cloud -
If this fails then the added record will be rolled back.
Update the record in the table with the resource id created -
If this fails then step 1 will be rolled back.
After the transaction scope finishes, check that the record actually exists with the resource id from step 3. If it does not, you need to manually roll back the resource creation by deleting the resource.
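As a hedged sketch of that flow (the AddRecord, CreateCloudResource, UpdateRecord and DeleteCloudResource helpers are hypothetical names, not from the question; TransactionScope comes from System.Transactions):

string resourceId = null;
try
{
    using (var scope = new TransactionScope())
    {
        var recordId = AddRecord();                  // step 1: insert row
        resourceId = CreateCloudResource(recordId);  // step 2: not covered by the transaction
        UpdateRecord(recordId, resourceId);          // step 3: store resource id
        scope.Complete();                            // commit steps 1 and 3 together
    }
}
catch
{
    // The database work rolls back automatically; the cloud resource (step 2)
    // must be compensated for manually.
    if (resourceId != null)
    {
        DeleteCloudResource(resourceId);
    }
    throw;
}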
I may be going about this incorrectly, but this is the class in which I wrap my entity object:
using System;
using System.Linq;

namespace SSS.ServicesConfig.data
{
    public partial class GlobalSetting
    {
        private static GlobalSetting _globalSettings;

        public static GlobalSetting GlobalSettings
        {
            get
            {
                if (_globalSettings == null)
                {
                    GetGlobalSetting();
                }
                return _globalSettings;
            }
        }

        private static void GetGlobalSetting()
        {
            try
            {
                using (var subEntities = PpsEntities.DefaultConnection())
                {
                    _globalSettings = (from x in subEntities.GlobalSettings
                                       select x).FirstOrDefault();
                    if (_globalSettings == null)
                    {
                        _globalSettings = new GlobalSetting();
                        _globalSettings.GlobalSettingId = Guid.NewGuid();
                        _globalSettings.CompanyCode = string.Empty;
                        _globalSettings.CorporationId = Guid.Empty;
                        _globalSettings.DefaultBranch = "01";
                        _globalSettings.SourceId = Guid.Empty;
                        _globalSettings.TokenId = Guid.Empty;
                        subEntities.AddToGlobalSettings(_globalSettings);
                        subEntities.SaveChanges();
                    }
                }
            }
            catch (Exception ex)
            {
                Logging.Log("An error occurred.", "GetGlobalSetting", Apps.ServicesConfig, ex);
                throw new Exception(string.Format("Unable to retrieve data: [{0}].", ex.Message));
            }
        }

        internal static void SaveGlobalSettings()
        {
            using (var entities = PpsEntities.DefaultConnection())
            {
                entities.Attach(_globalSettings);
                entities.SaveChanges();
            }
        }
    }
}
I'm trying to make it so that callers have to go through my class to get the settings record and save it through the same class. This is in a separate project that several other projects are going to import.
My save isn't saving to the database, and I see no errors or changes on the record. In this particular table there is only one record, so it's not adding another record either.
Any suggestions?
First, your save is never called after the initial value is assigned to _globalSettings.
Second, you should not be trying to change values inside a get accessor. It is bad form.
http://msdn.microsoft.com/en-us/library/w86s7x04.aspx
I recommend that you separate the responsibility of saving to the database into its own method (you could expose the SaveGlobalSettings method by making it public). If you are determined to hide the save from the user, then I would recommend you remove the save to the database from the get accessor of the GlobalSettings property, create a set accessor for the GlobalSettings property, and put the save to the database in that set accessor.
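A minimal sketch of that recommendation, assuming GetGlobalSetting is trimmed to only load the record and the existing SaveGlobalSettings method does the persisting:

public static GlobalSetting GlobalSettings
{
    get
    {
        if (_globalSettings == null)
        {
            GetGlobalSetting();   // load only; no longer inserts or saves here
        }
        return _globalSettings;
    }
    set
    {
        _globalSettings = value;
        SaveGlobalSettings();     // the save to the database now happens in the set accessor
    }
}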
One other note, you are killing your stack trace.
throw new Exception(string.Format("Unable to retrieve data: [{0}].", ex.Message));
You can still catch and log the exception the way that you are doing it, but re-throw the exception like this:
catch (Exception ex)
{
    Logging.Log("An error occurred.", "GetGlobalSetting", Apps.ServicesConfig, ex);
    throw;
}
This will preserve the original exception.
I am currently in the process of creating a prototype of a custom Log4Net appender, which is going to store information on all exceptions that occur within the project in an Azure table. The table is to be created based on the model defined in the 'LogEntry' class. Since this is a prototype web application, at the moment I have created a button that throws an exception to start the logger and I have been following this as a guide:
http://www.kongsli.net/nblog/2011/08/15/log4net-writing-log-entries-to-azure-table-storage/
However, when the exception is thrown and the logger is instantiated, the table is not created correctly. Instead of creating the table based on my LogEntry class, it only generates what I assume to be the TableServiceContext defaults: 'PartitionKey', 'RowKey' and 'Timestamp'. As a result, the logger is failing and no entries are being created in the table.
Below are some extracts from my project:
LogEntry.cs
public class LogEntry : TableServiceEntity
{
    public LogEntry()
    {
        var now = DateTime.UtcNow;
        // PartitionKey is the current year and month, while RowKey is a combination of the date, time and a GUID.
        // This is so that we are able to query our log entries more efficiently.
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }

    // This region of the class represents each entry in our log table.
    #region Table Columns
    // ...all columns defined here...
    #endregion
}
LogServiceContext.cs
internal class LogServiceContext : TableServiceContext
{
    public LogServiceContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
    }

    internal void Log(LogEntry logEntry)
    {
        AddObject("LogEntries", logEntry);
        SaveChanges();
    }

    public IQueryable<LogEntry> LogEntries
    {
        get
        {
            return CreateQuery<LogEntry>("LogEntries");
        }
    }
}
And an extract from the appender class itself:
// Create a new LogEntry and store all necessary details.
// All writing to the log is done asynchronously to prevent the write slowing down request handling.
Action doWriteToLog = () => {
    try
    {
        _ctx.Log(new LogEntry
        {
            CreatedDateTime = DateTime.Now,
            UserName = loggingEvent.UserName,
            IPAddress = userIPAddress,
            Culture = userCulture,
            OperatingSystem = userOperatingSystem,
            BrowserVersion = userCulture,
            ExceptionLevel = loggingEvent.Level,
            ExceptionDateTime = loggingEvent.TimeStamp,
            ExceptionMessage = loggingEvent.RenderedMessage,
            ExceptionStacktrace = Environment.StackTrace,
            AdditionalInformation = loggingEvent.RenderedMessage
        });
    }
    catch (DataServiceRequestException e)
    {
        ErrorHandler.Error(string.Format("{0}: Could not write log entry to {1}: {2}",
            GetType().AssemblyQualifiedName, _tableEndpoint, e.Message));
    }
};
doWriteToLog.BeginInvoke(null, null);
I am happy to provide any additional information and can package the solution should anyone wish to see the classes in their full form. Any help would be greatly appreciated!
After writing the blog post that you refer to, I have made some small changes to the code. You can see the change in my github repo: https://github.com/vidarkongsli/azuretablestorageappender
Essentially, what I did was replace SaveChanges() with BeginSaveChanges(SaveChangesOptions.Batch, null, null) and remove the BeginInvoke statement from AzureTableStorageAppender.Append(LoggingEvent).
I think this might help the situation.
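Based on that description, the Log method in LogServiceContext would end up looking roughly like this (a sketch of the described change, not code copied from the repo):

internal void Log(LogEntry logEntry)
{
    AddObject("LogEntries", logEntry);
    // Start the save asynchronously; the doWriteToLog.BeginInvoke call in the
    // appender's Append method is removed.
    BeginSaveChanges(SaveChangesOptions.Batch, null, null);
}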