Store LiteDB documents in the cloud (Azure) blob storage - C#

I am using LiteDB as a local database in my iOS and Android app implemented in Xamarin Forms. I am trying to store my local LiteDB file in the cloud using Azure. We have implemented a REST API which can receive a byte[], but I am having problems getting the LiteDB documents into a byte[]. If I try to read the file using File.ReadAllBytes(LiteDbPath), where LiteDbPath is where we have stored the LiteDB file, I get a System.IO.IOException: Sharing violation on path. I assume this is not the way to do this, but I am unable to figure out how. Does anyone have any suggestions?
It is possible I am using this the wrong way; I am quite inexperienced in this area.
Update: More details to make it clearer what I have done and what I want to do.
This is our DataStore class (where we use LiteDB):
[assembly: Dependency(typeof(DataStore<Zystem>))]
namespace AirGlow_App.Services {
    class DataStore<T> {
        public void Close()
        {
            var db = new LiteRepository(LiteDbPath);
            db.Dispose();
        }

        public LiteQueryable<T> Get()
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    return db.Query<T>();
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Get. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                    return null;
                }
            }
        }

        public T Get(BsonValue id)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    return db.Query<T>().SingleById(id);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Get. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                    return default(T);
                }
            }
        }

        public void Add(T obj)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    db.Insert<T>(obj);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Add. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }

        public void Delete(Guid Id)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    var o = new BsonValue(Id);
                    db.Delete<T>(o);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Delete. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }

        public void Save(T obj)
        {
            using (var db = new LiteRepository(LiteDbPath))
            {
                try
                {
                    db.Update<T>(obj);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine($"Exception when doing Save. Exception = {ex.Message}.", TypeDescriptor.GetClassName(this));
                    //TODO : General Error Handling
                }
            }
        }
    }
}
Then we are using it like this:
public class ZystemsViewModel : ObservableObject
{
    private DataStore<Zystem> DB = DependencyService.Get<DataStore<Zystem>>();

    public ZystemsViewModel()
    {
        MessagingCenter.Subscribe<ZystemAddViewModel, Zystem>(this, "Add", (obj, item) =>
        {
            var newItem = item as Zystem;
            Debug.WriteLine($"Will add {newItem.Name} to local database.", TypeDescriptor.GetClassName(this));
            DB.Add(newItem);
        });
    }
}
These parts were written by a colleague who no longer works here. I think the reasoning for registering it as a DependencyService was to make it accessible from all classes, pretty much like a singleton. Should this be changed to a proper singleton class instead?
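For reference, a minimal sketch (an assumption on my part, not the original code) of how DataStore<T> could expose a lazily created singleton instead of going through DependencyService:

using System;

class DataStore<T>
{
    // One lazily created, thread-safe instance per entity type.
    private static readonly Lazy<DataStore<T>> lazyInstance =
        new Lazy<DataStore<T>>(() => new DataStore<T>());

    public static DataStore<T> Instance => lazyInstance.Value;

    // Private constructor so the instance above is the only one.
    private DataStore() { }

    // The existing Get/Add/Delete/Save methods would stay unchanged.
}

Usage would then be DataStore<Zystem>.Instance.Add(newItem) instead of resolving DB through DependencyService.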
Using the database works fine in the app. But I want to upload the entire database file to Azure, and I am unable to get it into a byte[]. When I do
byte[] liteDbFile = File.ReadAllBytes(LiteDbPath);
I get a System.IO.IOException: Sharing violation on path. As some are suggesting, this is probably because the file is locked. Any suggestions on how to solve this?

LiteDB is a plain file database and has no running service to access the data. If you create a REST API, you should locate your data file on the same machine (local disk) that runs your IIS.
Azure Blob Storage is a different service and delivers files on request.
Think of LiteDB as a simple FileStream class (that works with a local file) with "super-powers" :)
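A possible workaround (my assumption, not part of the answer above): either dispose every open LiteRepository before reading the file, or open the data file with shared read/write access so the open database connection does not cause a sharing violation. A minimal sketch:

using System.IO;

// Copy the LiteDB data file into a byte[] even if another handle still has it open.
// LiteDbPath comes from the question; the call that posts the bytes to the REST API is omitted.
byte[] liteDbFile;
using (var fileStream = new FileStream(LiteDbPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var memory = new MemoryStream())
{
    fileStream.CopyTo(memory);
    liteDbFile = memory.ToArray();
}
// liteDbFile can now be sent to the REST API that stores it in Azure Blob Storage.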

Related

Here I'm trying to get all of a particular employee's data using Xero with .NET.

I'm using the Xero SDK for my .NET project (using Xero.NetStandard.OAuth2.Api; using Xero.NetStandard.OAuth2.Model.PayrollAu;):
public XeroAuth _xeroAuth;
public PayrollAuApi ipayrollAUApi = new PayrollAuApi(); //*************
private Xero.NetStandard.OAuth2.Model.PayrollAu.Employee _GetEmpLeave;

// get all data for a particular employee
private async Task<Xero.NetStandard.OAuth2.Model.PayrollAu.Employee> GetEmployeeAsync(Guid syncedEmpID)
{
    var tokens = await _xeroAuth.GetXeroTokenAuthorization();
    _tenantUniqueId = tokens.Item3;
    try
    {
        return ipayrollAUApi.GetEmployeeAsync(tokens.Item1, tokens.Item2, syncedEmpID).Result._Employees;
    }
    catch (Exception ex)
    {
        logging.LoggingToText(ex, await _Context.GetAssemblyInfoAsync(4));
        return null;
    }
}
This GetEmployeeAsync method is showing a casting error.
How can I resolve this issue?
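A hedged sketch of one possible fix (assuming from the code above that _Employees is a list of Employee objects, which the cast error suggests): await the API call instead of using .Result, and return the list rather than a single Employee.

// Hypothetical revision of the method above; only the return type and the awaiting changed.
private async Task<List<Xero.NetStandard.OAuth2.Model.PayrollAu.Employee>> GetEmployeesAsync(Guid syncedEmpID)
{
    var tokens = await _xeroAuth.GetXeroTokenAuthorization();
    _tenantUniqueId = tokens.Item3;
    try
    {
        var response = await ipayrollAUApi.GetEmployeeAsync(tokens.Item1, tokens.Item2, syncedEmpID);
        return response._Employees;
    }
    catch (Exception ex)
    {
        logging.LoggingToText(ex, await _Context.GetAssemblyInfoAsync(4));
        return null;
    }
}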

Unable to delete Realm database in Realm dotnet 2.1.0

Using Realm dotnet 2.1.0 in a Xamarin Forms project, I'm unable to delete the database due to the "Unable to delete Realm because it is still open" message.
This has only been an issue since upgrading to 2.1.0; previously we were using Realm 0.80.
Here's our full database removal method:
public void RemoveDatabase()
{
    // Remove all content.
    try
    {
        ClearCachedDataItems();
    }
    catch (Exception ex)
    {
        //Logging removed
    }

    try
    {
        // Close all connections
        using (var realm = GetRealmInstance())
        {
            if (realm != null)
            {
                realm.Dispose();
            }
        }
    }
    catch (Exception ex)
    {
        //Logging removed
    }

    try
    {
        // Drop the nuke.
        var config = GetRealmConfiguration();
        var delete = false;
        using (var realm = Realms.Realm.GetInstance(config))
        {
            if (config != null)
            {
                delete = true;
            }
            realm.Dispose();
        }
        if (delete)
        {
            Realms.Realm.DeleteRealm(config); // fails here
        }
    }
    catch (Exception ex)
    {
        //Logging removed
    }
}

public Realms.Realm GetRealmInstance()
{
    var config = GetRealmConfiguration();
    if (config == null)
    {
        return null;
    }
    return Realms.Realm.GetInstance(config);
}
I added the using statements based on this GitHub issue: https://github.com/realm/realm-dotnet/issues/1545
Is there a way to get all running instances of local Realms?
I'm also concerned that I'm creating a new instance of Realm in the final steps when I just want to check a local instance.

Xamarin app crash when attempting to sync SyncTable

I am making an app using Xamarin and Azure Mobile Services. I am attempting to add offline sync capabilities, but I am stuck. I have a service which looks like this:
class AzureService
{
    public MobileServiceClient Client;
    AuthHandler authHandler;
    IMobileServiceTable<Subscription> subscriptionTable;
    IMobileServiceSyncTable<ShopItem> shopItemTable;
    IMobileServiceSyncTable<ContraceptionCenter> contraceptionCenterTable;
    IMobileServiceTable<Member> memberTable;
    const string offlineDbPath = @"localstore.db";
    static AzureService defaultInstance = new AzureService();

    private AzureService()
    {
        this.authHandler = new AuthHandler();
        this.Client = new MobileServiceClient(Constants.ApplicationURL, authHandler);
        if (!string.IsNullOrWhiteSpace(Settings.AuthToken) && !string.IsNullOrWhiteSpace(Settings.UserId))
        {
            Client.CurrentUser = new MobileServiceUser(Settings.UserId);
            Client.CurrentUser.MobileServiceAuthenticationToken = Settings.AuthToken;
        }
        authHandler.Client = Client;

        //local sync table definitions
        //var path = "syncstore.db";
        //path = Path.Combine(MobileServiceClient.DefaultDatabasePath, path);

        //setup our local sqlite store and intialize our table
        var store = new MobileServiceSQLiteStore(offlineDbPath);

        //Define sync table
        store.DefineTable<ShopItem>();
        store.DefineTable<ContraceptionCenter>();

        //Initialize file sync context
        //Client.InitializeFileSyncContext(new ShopItemFileSyncHandler(this), store);

        //Initialize SyncContext
        this.Client.SyncContext.InitializeAsync(store);

        //Tables
        contraceptionCenterTable = Client.GetSyncTable<ContraceptionCenter>();
        subscriptionTable = Client.GetTable<Subscription>();
        shopItemTable = Client.GetSyncTable<ShopItem>();
        memberTable = Client.GetTable<Member>();
    }

    public static AzureService defaultManager
    {
        get { return defaultInstance; }
        set { defaultInstance = value; }
    }

    public MobileServiceClient CurrentClient
    {
        get { return Client; }
    }

    public async Task<IEnumerable<ContraceptionCenter>> GetContraceptionCenters()
    {
        try
        {
            await this.SyncContraceptionCenters();
            return await contraceptionCenterTable.ToEnumerableAsync();
        }
        catch (MobileServiceInvalidOperationException msioe)
        {
            Debug.WriteLine(@"Invalid sync operation: {0}", msioe.Message);
        }
        catch (Exception e)
        {
            Debug.WriteLine(@"Sync error: {0}", e.Message);
        }
        return null;
    }

    public async Task SyncContraceptionCenters()
    {
        ReadOnlyCollection<MobileServiceTableOperationError> syncErrors = null;
        try
        {
            //await this.Client.SyncContext.PushAsync();
            await this.contraceptionCenterTable.PullAsync(
                //The first parameter is a query name that is used internally by the client SDK to implement incremental sync.
                //Use a different query name for each unique query in your program
                "allContraceptionCenters",
                this.contraceptionCenterTable.CreateQuery());
        }
        catch (MobileServicePushFailedException exc)
        {
            if (exc.PushResult != null)
            {
                syncErrors = exc.PushResult.Errors;
            }
        }

        // Simple error/conflict handling. A real application would handle the various errors like network conditions,
        // server conflicts and others via the IMobileServiceSyncHandler.
        if (syncErrors != null)
        {
            foreach (var error in syncErrors)
            {
                if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
                {
                    //Update failed, reverting to server's copy.
                    await error.CancelAndUpdateItemAsync(error.Result);
                }
                else
                {
                    // Discard local change.
                    await error.CancelAndDiscardItemAsync();
                }
                Debug.WriteLine(@"Error executing sync operation. Item: {0} ({1}). Operation discarded.", error.TableName, error.Item["id"]);
            }
        }
    }
}
I am getting this error when SyncContraceptionCenters() is run:
System.NullReferenceException: Object reference not set to an instance of an object.
As far as I can tell I reproduced the coffeeItems example in my service, but I am stuck.
I think I found the solution. The issue was the way the tables were being synced.
By calling SyncContraceptionCenters() and SyncShop() at the same time, shopItemTable.PullAsync and contraceptionCenterTable.PullAsync were running concurrently, which is apparently bad. By putting them in the same method and awaiting them, they run one after the other and work as expected.
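A minimal sketch of that combined method (the second query name is an assumption; the table fields come from the service above):

// Pull the sync tables one after the other instead of starting both pulls concurrently.
public async Task SyncAllTables()
{
    await contraceptionCenterTable.PullAsync(
        "allContraceptionCenters",
        contraceptionCenterTable.CreateQuery());

    await shopItemTable.PullAsync(
        "allShopItems",
        shopItemTable.CreateQuery());
}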

ServiceStack Redis problems with simultaneous read requests

I'm using the ServiceStack.Redis implementation for caching events delivered over a Web API interface. Those events should be inserted into the cache and automatically removed after a while (e.g. 3 days):
private readonly IRedisTypedClient<CachedMonitoringEvent> _eventsCache;

public EventMonitorCache([NotNull]IRedisTypedClient<CachedMonitoringEvent> eventsCache)
{
    _eventsCache = eventsCache;
}

public void Dispose()
{
    //Release connections again
    _eventsCache.Dispose();
}

public void AddOrUpdate(MonitoringEvent monitoringEvent)
{
    if (monitoringEvent == null)
        return;
    try
    {
        var cacheExpiresAt = DateTime.Now.Add(CacheExpirationDuration);
        CachedMonitoringEvent cachedEvent;
        string eventKey = CachedMonitoringEvent.CreateUrnId(monitoringEvent);
        if (_eventsCache.ContainsKey(eventKey))
        {
            cachedEvent = _eventsCache[eventKey];
            cachedEvent.SetExpiresAt(cacheExpiresAt);
            cachedEvent.MonitoringEvent = monitoringEvent;
        }
        else
            cachedEvent = new CachedMonitoringEvent(monitoringEvent, cacheExpiresAt);
        _eventsCache.SetEntry(eventKey, cachedEvent, CacheExpirationDuration);
    }
    catch (Exception ex)
    {
        Log.Error("Error while caching MonitoringEvent", ex);
    }
}

public List<MonitoringEvent> GetAll()
{
    IList<CachedMonitoringEvent> allEvents = _eventsCache.GetAll();
    return allEvents
        .Where(e => e.MonitoringEvent != null)
        .Select(e => e.MonitoringEvent)
        .ToList();
}
The StructureMap 3 registry looks like this:
public class RedisRegistry : Registry
{
    private readonly static RedisConfiguration RedisConfiguration = Config.Feeder.Redis;

    public RedisRegistry()
    {
        For<IRedisClientsManager>().Singleton().Use(BuildRedisClientsManager());
        For<IRedisTypedClient<CachedMonitoringEvent>>()
            .AddInstances(i => i.ConstructedBy(c => c.GetInstance<IRedisClientsManager>()
                .GetClient().GetTypedClient<CachedMonitoringEvent>()));
    }

    private static IRedisClientsManager BuildRedisClientsManager()
    {
        return new PooledRedisClientManager(RedisConfiguration.Host + ":" + RedisConfiguration.Port);
    }
}
The first scenario is to retrieve all cached events (several hundred) and deliver this over ODataV3 and ODataV4 to Excel PowerTools for visualization. This works as expected:
public class MonitoringEventsODataV3Controller : EntitySetController<MonitoringEvent, string>
{
    private readonly IEventMonitorCache _eventMonitorCache;

    public MonitoringEventsODataV3Controller([NotNull]IEventMonitorCache eventMonitorCache)
    {
        _eventMonitorCache = eventMonitorCache;
    }

    [ODataRoute("MonitoringEvents")]
    [EnableQuery(AllowedQueryOptions = AllowedQueryOptions.All)]
    public override IQueryable<MonitoringEvent> Get()
    {
        var allEvents = _eventMonitorCache.GetAll();
        return allEvents.AsQueryable();
    }
}
But what I'm struggling with is the OData filtering that Excel PowerQuery does. I'm aware that I'm not doing any server-side filtering yet, but that doesn't matter currently. When I filter on any property and click refresh, PowerQuery sends multiple requests (I saw up to three) simultaneously. I believe it's fetching the whole dataset first and then executing the following requests with filters. This results in various exceptions from ServiceStack.Redis:
An exception of type 'ServiceStack.Redis.RedisResponseException' occurred in ServiceStack.Redis.dll but was not handled in user code
With additional information like:
Additional information: Unknown reply on multi-request: 117246333|company|osdmonitoringpreinst|2014-12-22|113917, sPort: 54980, LastCommand:
Or
Additional information: Invalid termination, sPort: 54980, LastCommand:
Or
Additional information: Unknown reply on multi-request: 57, sPort: 54980, LastCommand:
Or
Additional information: Type definitions should start with a '{', expecting serialized type 'CachedMonitoringEvent', got string starting with: u259447|company|osdmonitoringpreinst|2014-12-18|1
All of those exceptions happen on _eventsCache.GetAll().
There must be something I'm missing. I'm sure Redis is capable of handling a LOT of requests "simultaneously" on the same set but apparently I'm doing it wrong. :)
Btw: Redis 2.8.12 is running on a Windows Server 2008 machine (soon 2012).
Thanks for any advice!
The error messages are indicative of using a non-thread-safe instance of the RedisClient across multiple threads, since it's getting responses to requests it didn't expect/send.
To ensure you're using it correctly, I would only pass in the thread-safe IRedisClientsManager singleton, e.g.:
public EventMonitorCache([NotNull]IRedisClientsManager redisManager)
{
    this.redisManager = redisManager;
}
Then explicitly resolve and dispose of the Redis client in your methods, e.g.:
public void AddOrUpdate(MonitoringEvent monitoringEvent)
{
    if (monitoringEvent == null)
        return;
    try
    {
        using (var redis = this.redisManager.GetClient())
        {
            var _eventsCache = redis.As<CachedMonitoringEvent>();
            var cacheExpiresAt = DateTime.Now.Add(CacheExpirationDuration);
            CachedMonitoringEvent cachedEvent;
            string eventKey = CachedMonitoringEvent.CreateUrnId(monitoringEvent);
            if (_eventsCache.ContainsKey(eventKey))
            {
                cachedEvent = _eventsCache[eventKey];
                cachedEvent.SetExpiresAt(cacheExpiresAt);
                cachedEvent.MonitoringEvent = monitoringEvent;
            }
            else
                cachedEvent = new CachedMonitoringEvent(monitoringEvent, cacheExpiresAt);
            _eventsCache.SetEntry(eventKey, cachedEvent, CacheExpirationDuration);
        }
    }
    catch (Exception ex)
    {
        Log.Error("Error while caching MonitoringEvent", ex);
    }
}
And in GetAll():
public List<MonitoringEvent> GetAll()
{
    using (var redis = this.redisManager.GetClient())
    {
        var _eventsCache = redis.As<CachedMonitoringEvent>();
        IList<CachedMonitoringEvent> allEvents = _eventsCache.GetAll();
        return allEvents
            .Where(e => e.MonitoringEvent != null)
            .Select(e => e.MonitoringEvent)
            .ToList();
    }
}
This will work irrespective of the lifetime your EventMonitorCache dependency is registered with, e.g. it's safe to hold as a singleton since EventMonitorCache no longer holds onto a Redis server connection.
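With that approach, the StructureMap registry from the question would only need the thread-safe manager singleton; the per-call typed-client registration can be dropped. A sketch based on the registry above (my assumption, not part of the original answer):

public class RedisRegistry : Registry
{
    private static readonly RedisConfiguration RedisConfiguration = Config.Feeder.Redis;

    public RedisRegistry()
    {
        // Register only the pooled manager; typed clients are resolved and disposed per call.
        For<IRedisClientsManager>()
            .Singleton()
            .Use(new PooledRedisClientManager(RedisConfiguration.Host + ":" + RedisConfiguration.Port));
    }
}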

Configuring resiliency settings Entity Framework 6.02

I'm trying to configure resiliency settings for EF 6.02. I have an application which at one point of execution writes a log entry to the database. The application does not depend on the SQL Server, so if the server does not respond, I want the application to abandon the INSERT query (through SaveChanges on the DbContext) and continue execution immediately.
Using the default settings, the debug log outputs ten occurrences of
"A first chance exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll"
After the ten tries, it throws an exception and my code continues. But I want just one try and, for example, a 2-second command timeout. According to the documentation on MSDN, the default resiliency strategy for SQL Server is:
DefaultSqlExecutionStrategy: this is an internal execution strategy that is used by default. This strategy does not retry at all, however, it will wrap any exceptions that could be transient to inform users that they might want to enable connection resiliency.
As the documentation mentions, this strategy does not retry at all, yet I still get ten retries. I have tried to create a class which inherits DbConfiguration, but I have not found any documentation on how to change this.
Can anyone help me reduce the number of retries?
UPDATE: Below is code based on the suggestions:
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;
using System.Data.Entity.Infrastructure;
using System.Runtime.Remoting.Messaging;

namespace MyDbLayer
{
    public class MyConfiguration : DbConfiguration
    {
        public MyConfiguration()
        {
            this.SetExecutionStrategy("System.Data.SqlClient", () => SuspendExecutionStrategy
                ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
                : new SqlAzureExecutionStrategy());
        }

        public static bool SuspendExecutionStrategy
        {
            get
            {
                return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false;
            }
            set
            {
                CallContext.LogicalSetData("SuspendExecutionStrategy", value);
            }
        }
    }
}
And the code that writes to SQL:
try
{
    using (MyEntities context = new MyEntities())
    {
        Log logEntry = new Log();
        logEntry.TS = DateTime.Now;

        MyConfiguration.SuspendExecutionStrategy = true;

        context.Log.Add(logEntry);
        context.SaveChanges();
    }
}
catch (Exception ex)
{
    logger.Warn("Connection error with database server.", ex);
}
finally
{
    //Enable retries again...
    MyConfiguration.SuspendExecutionStrategy = false;
}
Did you try execution strategy suspension?
Your own DB configuration would be like this one:
public class MyConfiguration : DbConfiguration
{
    public MyConfiguration()
    {
        this.SetExecutionStrategy("System.Data.SqlClient", () => SuspendExecutionStrategy
            ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
            : new SqlAzureExecutionStrategy());
    }

    public static bool SuspendExecutionStrategy
    {
        get
        {
            return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false;
        }
        set
        {
            CallContext.LogicalSetData("SuspendExecutionStrategy", value);
        }
    }
}
Then you may define a non-retriable command like this:
public static void ExecWithoutRetry(System.Action action)
{
    var restoreExecutionStrategyState = MyConfiguration.SuspendExecutionStrategy;
    try
    {
        MyConfiguration.SuspendExecutionStrategy = true;
        action();
    }
    catch (Exception)
    {
        // ignore any exception if we want to
    }
    finally
    {
        MyConfiguration.SuspendExecutionStrategy = restoreExecutionStrategyState;
    }
}
And finally, your regular DB code with retriable and non-retriable commands may look like this:
using (var db = new MyContext())
{
    ExecWithoutRetry(() => db.WriteEvent("My event without retries"));
    db.DoAnyOperationWithRetryStrategy();
}
I found it. I just needed to add "Connection Timeout=X", where X is the timeout in seconds, to the connection string, and everything worked without needing to modify execution strategies etc.
Adding this before executing the query solved it:
context.Database.Connection.ConnectionString = context.Database.Connection.ConnectionString + ";Connection Timeout=2;";
