I'm using Entity Framework Core with SQLite as the relational database, and ASP.NET Core to display the data on a website.
What I want is that every time something happens to the database (insert, update, delete), a SignalR method is executed as well, so that every client gets notified about the changes.
Is it possible to observe the SQLite database?
Or should I call the hub methods manually after every statement to the db?
My Hub Class
public class GeneralHub : Hub
{
    public async Task ServerSendUpdate()
    {
        List<int> list = await CountTicketsAsync();
        await Clients.All.SendAsync("ClientReceiveUpdate", list);
    }

    private Task<List<int>> CountTicketsAsync()
    {
        // Await the task instead of blocking on .Result, which risks deadlocks.
        var namedRestClient = new NamedRestClient();
        return namedRestClient.CountTickets();
    }
}
At the moment, the number of objects in my database is displayed on the website by invoking ServerSendUpdate().
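For what it's worth, SQLite has no built-in change-notification feed that EF Core observes, so the usual answer is the second option: trigger the broadcast yourself at the single point where your app writes. A minimal sketch, assuming an injected IHubContext<GeneralHub> (the context name is a placeholder), overrides SaveChangesAsync; note it will not see edits made outside the app:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    private readonly IHubContext<GeneralHub> _hubContext;

    public AppDbContext(DbContextOptions<AppDbContext> options,
                        IHubContext<GeneralHub> hubContext) : base(options)
    {
        _hubContext = hubContext;
    }

    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
    {
        // Persist first, then broadcast, so clients never see phantom changes.
        int result = await base.SaveChangesAsync(cancellationToken);

        if (result > 0)
        {
            await _hubContext.Clients.All.SendAsync("ClientReceiveUpdate",
                cancellationToken: cancellationToken);
        }

        return result;
    }
}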
I created a generic Add method that searches the database for a given id, makes changes to the table, and saves them back to the database. But when I run it inside a transaction scope it doesn't work and throws a request-timeout exception. It works on .NET Framework but not on .NET Core. I tried Mock<> but it gives me an error that the platform doesn't support it.
// Generic Add helper as originally written: note that it uses its own
// static context (Db), separate from the context that owns the transaction.
public static void Add<T>(Func<T, bool> condition) where T : class
{
    var entity = Db.Set<T>().Where(condition).FirstOrDefault();
    Db.Set<T>().Add(entity);
    Db.SaveChanges();
}

using (Y db = new Y())
{
    using (var transaction = db.Database.BeginTransaction())
    {
        db.Table.Add(new Table());
        db.SaveChanges();

        // The helper queries through a second context while this context
        // still holds the transaction, which is what times out.
        Add<Table>(t => t.Id == id);

        transaction.Commit();
    }
}
I made some changes to the question to represent the real situation.
I found the solution.
The method takes parameters and creates a new instance of MyDbContext.
I called this method inside the transaction, and if you work with .NET Core you can't create two DbContext instances there; you can work with only one DbContext. So I passed my DbContext instance into the method, and the problem was solved.
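A minimal sketch of that fix, with hypothetical names (Repository, the where-clause): the helper now receives the caller's context instead of newing up its own, so everything runs on one context inside one transaction:

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class Repository
{
    // Hypothetical sketch: reuse the caller's context so the query joins
    // the ambient transaction instead of opening a second connection.
    public static void Add<T>(DbContext db, Func<T, bool> condition) where T : class
    {
        var entity = db.Set<T>().Where(condition).FirstOrDefault();
        if (entity == null) return;

        db.Set<T>().Add(entity);
        db.SaveChanges();
    }
}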
Background
I have a central database that my MVC EF web app interacts with, following best practices. Here is the offending code:
// GET: HomePage
public ActionResult Index()
{
    using (var db = new MyDbContext())
    {
        return View(new CustomViewModel()
        {
            ListOfStuff = db.TableOfStuff
                .Where(x => x.Approved)
                .OrderBy(x => x.Title)
                .ToList()
        });
    }
}
I also modify the data in this database's table manually, completely outside the web app.
I am not keeping an instance of the DbContext around any longer than is necessary to get the data I need. A new one is constructed per-request.
Problem
The problem I am having is that if I delete a row or modify any data in this table manually, outside the web app, the data served by the above code does not reflect these changes.
The only way to get these manual edits picked up by the above code is either to restart the web app, or to use the web app to make a modification to the database that calls SaveChanges.
Log Results
After logging the query being executed and doing some manual tests there is nothing wrong with the query being generated that would make it return bad data.
However, in the logs I saw something confusing in the query completion times. The first query on app start-up:
-- Completed in 86 ms with result: CachingReader
Then any subsequent queries had the following completion time:
-- Completed in 0 ms with result: CachingReader
What is this CachingReader and how do I disable this?
Culprit
I discovered the error was introduced elsewhere in my web app by something that replaced the underlying DbProviderServices to provide caching; more specifically, I am using MVCForum, which uses EF Cache.
This forum's CachingConfiguration uses the default CachingPolicy, which caches everything unless told otherwise through EF, which is exactly the behavior I was observing. More Info
Solution
I provided my own custom CachingPolicy that does not allow caching for entities where this behavior is undesirable.
public class CustomCachingPolicy : CachingPolicy
{
    protected override bool CanBeCached(ReadOnlyCollection<EntitySetBase> affectedEntitySets, string sql, IEnumerable<KeyValuePair<string, object>> parameters)
    {
        foreach (var entitySet in affectedEntitySets)
        {
            var table = entitySet.Name.ToLower();
            if (table.StartsWith("si_") ||
                table.StartsWith("dft_") ||
                table.StartsWith("tt_"))
            {
                return false;
            }
        }

        return base.CanBeCached(affectedEntitySets, sql, parameters);
    }
}
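For reference, here is a hedged sketch of how such a policy is typically plugged in through EF Cache's DbConfiguration (following the EFCache README; your MVCForum setup may wire this differently):

using System.Data.Entity;
using System.Data.Entity.Core.Common;
using EFCache;

public class CacheConfiguration : DbConfiguration
{
    public CacheConfiguration()
    {
        var transactionHandler = new CacheTransactionHandler(new InMemoryCache());
        AddInterceptor(transactionHandler);

        // Swap in the caching provider services using the custom policy above.
        Loaded += (sender, args) => args.ReplaceService<DbProviderServices>(
            (services, _) => new CachingProviderServices(services, transactionHandler,
                new CustomCachingPolicy()));
    }
}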
With this in place, the database logging now always shows:
-- Completed in 86 ms with result: SqlDataReader
Thanks everyone!
I have cached my database using the following code for Redis operations:
public bool InitialiseCache()
{
    try
    {
        _cache = Connection.GetDatabase();
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}
I tried to debug and inspect the value of _cache, but it does not display the cached data (tables). I wanted to confirm whether the GetDatabase() method caches all tables. Is there any way to preview all Redis keys or values?
Short Answer:
No, Redis's GetDatabase() method DOES NOT cache all database tables.
Long Answer:
As per StackExchange.Redis on GitHub:
Using a redis database
Accessing a redis database is as simple as:
IDatabase db = redis.GetDatabase();
The object returned from GetDatabase is a cheap pass-thru object, and does not need to be stored. Note that redis supports multiple databases (although this is not supported on "cluster"); this can be optionally specified in the call to GetDatabase. Additionally, if you plan to make use of the asynchronous API and you require the Task.AsyncState to have a value, this can also be specified:
int databaseNumber = ...
object asyncState = ...
IDatabase db = redis.GetDatabase(databaseNumber, asyncState);
Once you have the IDatabase, it is simply a case of using the redis API. Note that all methods have both synchronous and asynchronous implementations. In line with Microsoft's naming guidance, the asynchronous methods all end ...Async(...), and are fully await-able etc.
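As for previewing keys, one option (a sketch; the endpoint is an assumption) is to enumerate them through the IServer API rather than IDatabase:

using System;
using StackExchange.Redis;

class RedisKeyDump
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        IServer server = redis.GetServer("localhost", 6379);

        // Keys() uses SCAN on modern servers, so it is safe on large databases.
        foreach (RedisKey key in server.Keys(database: 0, pattern: "*"))
        {
            Console.WriteLine(key);
        }
    }
}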
I'm using offline sync in a Xamarin app. I have the following scenario:
I have a couple of tables which sync fine, and one table called LocalOnlyTable which I don't want to sync; I just want to read/write it locally.
The problem appears when I pull one of my tables like so:
await exerciseTable.PullAsync(string.Format("{0}ItemByFK", typeof(Exercise).Name), exerciseTable.CreateQuery());
I get a MobileServicePushFailedException saying 404 LocalOnlyTable does not exist.
I'm wondering why Mobile Services tries to push/pull LocalOnlyTable, and
how I can prevent Mobile Services from trying to sync it.
Just came across your issue here and thought I'd share my solution.
1) Create a custom TableSyncHandler to block off local-only tables:
public class TableSyncHandler : IMobileServiceSyncHandler
{
    private readonly IMobileServiceClient _client;
    private readonly HashSet<string> _excludedTables = new HashSet<string>();

    public TableSyncHandler(IMobileServiceClient client)
    {
        _client = client;
    }

    // Resolve the table name the same way the client does and remember it.
    public void Exclude<T>()
    {
        _excludedTables.Add(_client.SerializerSettings.ContractResolver.ResolveTableName(typeof(T)));
    }

    public Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        return Task.FromResult(0);
    }

    public Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        // Returning null swallows the operation for excluded tables,
        // so nothing is pushed to (or pulled from) the server for them.
        if (_excludedTables.Contains(operation.Table.TableName))
        {
            return Task.FromResult((JObject)null);
        }

        return operation.ExecuteAsync();
    }
}
2) When initializing the MobileServiceClient's SyncContext, register the tables you want to exclude with the sync handler, then initialize the SyncContext using that handler:
_store = new MobileServiceSQLiteStore("YourStore");
_store.DefineTable<User>();
_store.DefineTable<LocalOnlyTable>();
_syncHandler = new TableSyncHandler(client);
// LocalOnlyTable is excluded from sync operations
_syncHandler.Exclude<LocalOnlyTable>();
await client.SyncContext.InitializeAsync(_store, _syncHandler);
Disclaimer:
This has not gone to production yet, so I don't know if there will be a performance impact, but it seems to be working fine in testing so far.
This solution is based on the Azure Mobile Services client v1.3.2 source code: it does nothing (pull/push) when the sync handler returns a null result. This behavior could change in the future.
All actions taken using the MSSyncTable APIs are tracked to be sent to the server. If you have a table you do not want to track, you shouldn't use the MSSyncTable APIs to insert/update its records.
You should be able to use either the SQLiteStore methods (like upsert) or execute SQL directly against your SQLite DB for your untracked tables.
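As a hedged sketch of that second route, reusing the _store from the answer above ("LocalOnlyTable" is per the question; the note column is invented), an upsert against the local store bypasses the sync operation queue entirely:

using System;
using Newtonsoft.Json.Linq;

// Writing through the store keeps the row out of the tracked operations.
var item = new JObject
{
    ["id"] = Guid.NewGuid().ToString(),
    ["note"] = "kept on the device only" // hypothetical column
};

await _store.UpsertAsync("LocalOnlyTable", new[] { item }, ignoreMissingColumns: false);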
I have used a lot of models for connecting to a DB. In my last project, which used C# and Entity Framework, I created a static class for the DB connection, but I had problems with opening and closing the connection: it gave me errors when more than 10-15 requests came in together. I solved it by switching to a connection per request and removing all the static methods and classes.
Now I want to know:
What is the best model for making a connection?
Should I close it after every query and open it before using it, or ...?
Is a connection in a static class a good model (so that I don't need to create it every time)?
Is there a good design pattern for this problem?
All of it comes down to the same question: what is the best method for making a database connection (static, abstract, per request, ...)?
For example, I am working on an SMS-sender web panel. I should send 100K SMS per second; these SMS are collected with others into packages, where every package has 1~20 SMS. I then need to send 5K~100K packages per second, and when I send a package I should do these steps:
Update each single SMS to delivered or not delivered
Update the user balance: if delivered, decrease the balance in the useraccounts table
Update the SMS send count in the user table
Update the SMS send count in the mobile number table
Update the SMS send count in the sender number table
Update the package for delivered and failed SMS in the package table
Update the package with which thread sent it in the package table
Update the thread table with how many SMS were sent by this thread and how many failed
Add an account document for these transactions in the AccountDocument table
All these steps, and a lot of other things like logs, the user interface, and monitoring widgets, have to be done, and I need a DB connection for every single one of these transactions.
Now, what is the best model for connecting to the DB: per human request, per thread request, or per single transaction?
Answers to your questions:
Close it. .NET does connection pooling for you under the hood.
Create it: use the using (Connection conn = new ...) pattern each time; this way you'll make the most of the .NET pooling mechanism (see the sketch after this list).
You can use the .NET ThreadPool (or your own custom one): define the pool to use only 10 threads in parallel and enqueue work items one after another. This way no more than 10 connections will be used at the same time, and it'll probably work faster.
More about custom thread pools: Custom ThreadPool Implementation
Per instance.
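A minimal sketch of that open-late/close-early pattern (the connection string, table, and status value are placeholders):

using System.Data.SqlClient;

public static class SmsQueries
{
    public static int CountPending(string connectionString)
    {
        // Dispose returns the connection to the pool, so opening and
        // closing around every query is cheap.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Sms WHERE Status = @status", conn))
        {
            cmd.Parameters.AddWithValue("@status", 0); // 0 = pending (assumption)
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}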
Here's my suggestion for an architecture:
Create a database table (queue) for pending SMS to be sent out. Each row will contain all the information needed for the SMS plus its current status.
Create a worker process, perhaps a Windows service, which will sample this table constantly, say every 5 seconds. It will select the TOP ~20 SMS with status = 'pending to be sent' (the status should be represented as an int) and update them to 'sending'.
Each SMS will be sent out using a custom thread pool on the Windows service side.
At the end of the process, the status of ALL the processed SMS will be updated to 'done' using a CTE (common table expression): you can send a CTE with the ids of all the SMS rows that have just been processed to do a 'bulk update' to the 'done' status.
You could make the status-update stored procedure the same one as the 'get pending' procedure. This way, you could select-for-update with nolock and make the database work faster.
This also means you can have more than just one processor service running (but then you'll have to lose the nolock).
Remember to avoid as much locking as possible.
By the way, this is also good because you can send an SMS from any place in your system by simply adding a row to the pending-SMS table.
And one more thing: I would not recommend using Entity Framework for this, as it has too much going on under the hood. All you need for this kind of task is to call 3-4 stored procedures, and that's it. Maybe take a look at Dapper-dot-NET; it's a very lightweight micro-DAL framework which in most cases works more than 10 times faster than EF (Entity Framework).
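To make the suggestion concrete, here is a hedged sketch of the worker's data access with Dapper (the proc and table names are invented):

using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public static class SmsWorker
{
    public static void ProcessBatch(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            // Claim a batch; the proc would also flip the status to 'sending'.
            var batch = conn.Query<long>("dbo.ClaimPendingSms", new { BatchSize = 20 },
                commandType: CommandType.StoredProcedure).ToList();

            // ... send each SMS via the thread pool, collect the ids that succeeded ...
            var doneIds = batch;

            // Dapper expands IN @ids for text commands: one bulk 'done' update.
            conn.Execute("UPDATE dbo.Sms SET Status = 2 WHERE Id IN @ids", new { ids = doneIds });
        }
    }
}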
1. Should I close it after every query?
.NET does that for you, so let it handle it. So don't bother disposing your objects manually; this is a good answer by Jon Skeet: https://stackoverflow.com/a/1998600/544283. However, you can use the using (IDisposable) { } statement to have Dispose called deterministically rather than waiting on the garbage collector. Here is a nice article about resource deallocation: http://www.codeproject.com/Articles/29534/IDisposable-What-Your-Mother-Never-Told-You-About.
2. Is a connection in a static class good?
Never make a data context static! Data contexts are not thread-safe or concurrency-safe.
3. Is there a good design pattern for this problem?
As Belogix mentioned, the dependency-injection and unit-of-work patterns are great; in fact, Entity Framework is itself a unit of work. DI and UoW are a bit overrated though: they're not easy to implement if it's your first time handling an IoC container, and if you're going that path I'd recommend Ninject. One other thing: you don't really need DI if you're not going to run tests; the awesomeness of these patterns is decoupling, so you can test and mock without sweat.
In short: if you're going to run tests against your code, go for these patterns. If not, here is an example of how you could share your data context among the services you'd like. This is the answer to your fourth question.
4. What is the best method for making a database connection (static, per request)?
Your context service:
public class FooContextService {
    private readonly FooContext _ctx;

    public FooContext Context { get { return _ctx; } }

    public FooContextService() {
        _ctx = new FooContext();
    }
}
Other services:
public class UnicornService {
    private readonly FooContext _ctx;

    public UnicornService(FooContextService contextService) {
        if (contextService == null)
            throw new ArgumentNullException("contextService");

        _ctx = contextService.Context;
    }

    public ICollection<Unicorn> GetList() {
        return _ctx.Unicorns.ToList();
    }
}

public class DragonService {
    private readonly FooContext _ctx;

    public DragonService(FooContextService contextService) {
        if (contextService == null)
            throw new ArgumentNullException("contextService");

        _ctx = contextService.Context;
    }

    public ICollection<Dragon> GetList() {
        return _ctx.Dragons.ToList();
    }
}
Controller:
public class FantasyController : Controller {
    private readonly FooContextService _contextService = new FooContextService();
    private readonly UnicornService _unicornService;
    private readonly DragonService _dragonService;

    public FantasyController() {
        _unicornService = new UnicornService(_contextService);
        _dragonService = new DragonService(_contextService);
    }

    // Controller actions
}
Second thoughts (almost an edit):
If you need your context not to create proxies for your entities, and therefore not to have lazy loading either, you can overload your context service as follows:
public class FooContextService {
    private readonly FooContext _ctx;

    public FooContext Context { get { return _ctx; } }

    public FooContextService() : this(true) { }

    public FooContextService(bool proxyCreationEnabled) {
        _ctx = new FooContext();
        _ctx.Configuration.ProxyCreationEnabled = proxyCreationEnabled;
    }
}
NOTE:
If you set proxy creation to false, you will not have lazy loading out of the box.
If you have API controllers, you don't want to deal with a full-blown object graph.
EDIT:
Some reading first:
This link relates to a pre-release version of EF6: Entity Framework and Async.
Scott Allen posted about this in his blog: Async in Entity Framework 6.0.
If you're going to use Unit of Work, I'd recommend reading this: Make the DbContext Ambient with UnitOfWorkScope.
Darin Dimitrov's answer on Do asynchronous operations in ASP.NET MVC use a thread from ThreadPool on .NET 4.
Get this done:
(_context as IObjectContextAdapter).ObjectContext.Connection.Open();
This is a great article about Managing Connections and Transactions.
Entity Framework exposes EntityConnection through the Connection property. Read as: public sealed class EntityConnection : DbConnection.
Considerations for managing connections: (taken from previous link)
The object context will open the connection if it is not already open before an operation. If the object context opens the connection during an operation, it will always close the connection when the operation is complete.
If you manually open the connection, the object context will not close it. Calling Close or Dispose will close the connection.
If the object context creates the connection, the connection will always be disposed when the context is disposed.
In a long-running object context, you must ensure that the context is disposed when it is no longer required.
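A short sketch of those rules in practice (FooContext and its sets are from the example above):

using System.Data.Entity.Infrastructure;
using System.Linq;

public static class ConnectionDemo
{
    public static void Run()
    {
        using (var context = new FooContext())
        {
            var objectContext = ((IObjectContextAdapter)context).ObjectContext;
            objectContext.Connection.Open();          // opened manually: EF will not auto-close it

            var unicorns = context.Unicorns.ToList(); // both queries reuse the open connection
            var dragons = context.Dragons.ToList();   // instead of opening/closing per query
        } // the context created the connection, so disposing the context disposes it
    }
}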
Hope it helps.
I think per request scales best. Use a thread-safe connection pool and make the connection scope coincide with the unit of work. Let the service responsible for transactional behavior and units of work check out the connection, use it, and return it to the pool when the unit of work is either committed or rolled back.
UPDATE:
10-12 seconds to commit a status update? You've done something else wrong. Your question as written is not sufficient to provide a suitable answer.
Daily NASDAQ volume is 1.3B transactions, which over an 8-hour day works out to ~45K transactions per second. Your volume is 2X that of NASDAQ, and if you're trying to do it with one machine, I'd point out that NASDAQ is using more than one server.
I'd also wonder if you could do without that status being updated with full ACID guarantees. After all, Starbucks doesn't use two-phase commit. Maybe a better solution would be a producer/consumer pattern with a blocking queue that updates those statuses when you can, after they're sent.
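A hedged sketch of that producer/consumer idea with a BlockingCollection (the batch size and the bulk-update step are assumptions):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class StatusUpdater
{
    private readonly BlockingCollection<long> _deliveredIds =
        new BlockingCollection<long>(boundedCapacity: 100000);

    // Producers: sender threads enqueue ids and never touch the database.
    public void MarkDelivered(long smsId) => _deliveredIds.Add(smsId);

    // Single consumer: drain the queue and write statuses in batches.
    public Task StartConsumer() => Task.Run(() =>
    {
        var batch = new List<long>(1000);
        foreach (long id in _deliveredIds.GetConsumingEnumerable())
        {
            batch.Add(id);
            if (batch.Count >= 1000)
            {
                // one bulk UPDATE ... WHERE Id IN (...) instead of 1000 round-trips
                batch.Clear();
            }
        }
    });
}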