EF6 Repository pattern in a Windows Service - C#

I have implemented a Windows service that fetches a list of emails from the database every 60 seconds and sends them using System.Net.Mail.
I have designed it using the repository pattern, with the solution split into separate folders and projects.
Questions:
In terms of the pattern, am I on the right track with the structure of the separate projects, an interface for each repository, and a service per repository?
If some of the business logic has nothing to do with the database, do I still need to create a repository for it, or is a service enough?
I have an SMTP service class that wraps System.Net.Mail and sends the emails. While sending each email I need to update the database; is putting that update logic in the SMTP service class good practice? It looks something like this:
public class SMTPService : ISMTPService
{
    SmtpClient client;
    MailMessage newMessage;
    EmailService emailService;
    IEventLoggerService MailCheckerLog;

    public async Task SendEmail(tb_Email email)
    {...}

    void SendCompletedCallback(object sender, System.ComponentModel.AsyncCompletedEventArgs e, tb_Email email)
    {
        if (e.Cancelled)
        {
            // cancelled: currently nothing is logged or persisted
        }
        if (e.Error != null)
        {
            // failed: currently nothing is logged or persisted
        }
        else
        {
            // success: mark the email as sent in the database
            email.DateSent = DateTime.Now;
            emailService.Update(email);
        }

        client.Dispose();
        newMessage.Dispose();
    }
}

I'm not sure what you'd like to achieve exactly and what business requirements you have.
The structure is OK for now, but consider this: you have a user entity and a photo entity. Each user must have a photo associated with them. How would you handle that scenario?
You have to create a UserRepository and a PhotoRepository. Of course, you first have to insert the user record so that the photo record can reference it later.
So you call the Insert() method of the UserRepository. When the user is inserted, you call the Insert() method of the PhotoRepository. But what if that Insert() fails? Now you have a user in the db who doesn't have a photo.
You have to insert the user and the photo together, in one transaction. That's where the unit of work pattern comes in. You have to use that if you have business logic that concerns more than one entity type. If all you do is handle emails, then this is fine. If not, you have to add that pattern to your application. See an example here.
A service, again, is something that handles business transactions, and a business transaction can touch multiple types of entities. Again, the unit-of-work pattern can help you with this. But usually repositories are created based on your entities, and services based on your business logic. In the example, you could have a UserService that uses both the UserRepository and the PhotoRepository (usually through the unit of work), along the lines of the sketch below.
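To make that concrete, here is a rough sketch of what such a UserService might look like; IUnitOfWork, IUserRepository, IPhotoRepository and the entity types are hypothetical names, and the exact shape of the unit of work depends on how you wire it to EF:

// Hypothetical unit of work: exposes the repositories and commits them in one go.
public interface IUnitOfWork
{
    IUserRepository Users { get; }
    IPhotoRepository Photos { get; }
    void Commit(); // one SaveChanges, so one transaction
}

public class UserService
{
    private readonly IUnitOfWork _unitOfWork;

    public UserService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void CreateUserWithPhoto(User user, Photo photo)
    {
        // Both inserts go through the same context, so they either both
        // succeed or both fail when Commit calls SaveChanges.
        _unitOfWork.Users.Insert(user);
        photo.User = user; // EF fixes up the foreign key on save
        _unitOfWork.Photos.Insert(photo);
        _unitOfWork.Commit();
    }
}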
In the case of sending the e-mails, I would again design the services based on my business logic. And probably the business logic is to 'send e-mails', not to 'send e-mails via SMTP'. What happens if you decide to use a service like SendGrid? Then it is not an SMTPService anymore.
I would probably create an EmailService (which you also have), and this would have a SendEmails() method. This would use the EmailRepository to fetch the emails, send them using SMTP, then update them and save through the unit of work.
Or, if you want to be really abstract, you can create an IEmailSenderService interface with one method, SendEmail(Email email). Then you can create an SmtpEmailSenderService, which implements this interface, wraps the SmtpClient class and sends the email using SMTP. If you decide to move to SendGrid, you can create a SendGridEmailSenderService that uses HttpClient to issue requests to SendGrid. The update is still done in the EmailService using the repository and the unit of work, but now the EmailService itself does not use SmtpClient, only the IEmailSenderService interface.
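A rough sketch of that abstraction (the tb_Email members, IEmailRepository and IUnitOfWork here are assumptions, not your actual types):

using System;
using System.Net.Mail;
using System.Threading.Tasks;

public interface IEmailSenderService
{
    Task SendEmail(tb_Email email);
}

// Knows about SMTP and nothing else.
public class SmtpEmailSenderService : IEmailSenderService
{
    public async Task SendEmail(tb_Email email)
    {
        using (var client = new SmtpClient())
        using (var message = new MailMessage(email.From, email.To, email.Subject, email.Body))
        {
            await client.SendMailAsync(message);
        }
    }
}

// Knows about persistence and the business rule, but not about SMTP.
public class EmailService
{
    private readonly IEmailSenderService _sender;
    private readonly IEmailRepository _emailRepository;
    private readonly IUnitOfWork _unitOfWork;

    public EmailService(IEmailSenderService sender, IEmailRepository emailRepository, IUnitOfWork unitOfWork)
    {
        _sender = sender;
        _emailRepository = emailRepository;
        _unitOfWork = unitOfWork;
    }

    public async Task SendEmails()
    {
        foreach (var email in _emailRepository.GetUnsent())
        {
            await _sender.SendEmail(email);
            email.DateSent = DateTime.Now;
            _emailRepository.Update(email);
        }
        _unitOfWork.Commit();
    }
}

Switching to SendGrid then means writing a SendGridEmailSenderService and changing the container registration, nothing else.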

Related

How to structure communication between TcpListener and TcpClient after initial socket connection is made

This is probably pretty simple for people used to network programming, but I'm a little confused about it. Let me try to ask with an example of usage.
Say I were to create a chat server with clients being able to join with a username. No accounts or passwords. Just username.
I have a working Echo Server example running that uses async calls so all that is good. Now I need to define some message handling between the client and the server.
Let's say the client connects and it wants to get the list of connected users. How would I go about doing that?
My thought is to create a function called BeginGetConnectedUsers that sends a message to the server. The server then replies with a list of users, but because I'm using async calls, my receive code now has to look at the message, figure out that it is a reply to BeginGetConnectedUsers, and pass the data to an EndGetConnectedUsers.
But I have no idea whether this is a good way to do something like this.
Also, with this design I'll have to pair every BeginGet function with an EndGet function. This could be made more readable with the async/await style, though that may not be preferable either.
I have no experience with how to structure the subsequent communication between client and server once they start exchanging all the interesting data...
Any advice? Places to look? All my Google searches that include the word TcpListener show me examples of echo servers, and I already know how to do that.
There are many possible implementations here.
I would implement a strategy pattern or a controller pattern. Other options are state machines (deterministic automata) or even a single big switch statement.
So basically you have only one function that receives the messages from the wire.
All the messages implement the same interface, maybe something like:
interface IMessage<T>
{
    string Type { get; set; }
    T Data { get; set; }
}
So when you get a message, you use the Type property to route the Data to the actual method you want.
In a simple implementation using a controller, you annotate the controller methods with an attribute indicating the message type you want to handle:
class Controller
{
    [MessageType("GetConnectedUsersResponse")]
    Response GetConnectedUsers(IEnumerable<User> users)
    {
        //...
    }

    [MessageType("AnotherMessageType")]
    Response OtherStuffToDo(....)
    {
        //...
    }
}
When you receive a message, you use some simple reflection to dynamically call the method whose attribute matches the message type.
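A minimal sketch of that dispatch, assuming a [MessageType] attribute like the one above and that each handler method takes the message payload as its single argument:

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
class MessageTypeAttribute : Attribute
{
    public string Type { get; private set; }
    public MessageTypeAttribute(string type) { Type = type; }
}

static class MessageRouter
{
    // Finds the controller method whose [MessageType] matches and invokes it
    // with the message payload as its only argument.
    public static object Route(object controller, string messageType, object data)
    {
        var method = controller.GetType()
            .GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
            .FirstOrDefault(m => m.GetCustomAttributes(typeof(MessageTypeAttribute), false)
                .Cast<MessageTypeAttribute>()
                .Any(a => a.Type == messageType));

        if (method == null)
            throw new InvalidOperationException("No handler for message type " + messageType);

        return method.Invoke(controller, new[] { data });
    }
}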
If you don't like reflection, another option (among dozens of others) is to use a strategy pattern.
For example, you can register message handlers by key in your IoC container.
All handlers implement the same interface, let's say:
interface MessageHandler<T>
{
    Response Handle(T data);
}
When you receive the message, you just resolve the handler using your favourite IoC container (resolving by name is nowadays considered an antipattern, so take this with a pinch of salt):
var handler = container.Resolve(message.Type);
var response = handler.Handle(message.Data);
In both implementations you should define how you respond (if you respond at all) and adjust the Response return type accordingly (maybe you have no response, in which case it is void).
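If you'd rather avoid the container entirely, a plain dictionary gives you the same register-by-key dispatch. A rough sketch using the MessageHandler<T> interface above (Response and the payload types stay as loosely defined as in the snippets above):

using System;
using System.Collections.Generic;

class HandlerRegistry
{
    private readonly Dictionary<string, Func<object, Response>> _handlers =
        new Dictionary<string, Func<object, Response>>();

    // Wrap each typed handler so every entry can be stored behind one delegate type.
    public void Register<T>(string messageType, MessageHandler<T> handler)
    {
        _handlers[messageType] = data => handler.Handle((T)data);
    }

    // Called from the single receive function: route by the Type property.
    public Response Dispatch(string messageType, object data)
    {
        Func<object, Response> handler;
        if (!_handlers.TryGetValue(messageType, out handler))
            throw new InvalidOperationException("No handler registered for " + messageType);

        return handler(data);
    }
}

On receive it is then just registry.Dispatch(message.Type, message.Data).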

Best architecture design using service layer and interacting services?

I have several services that are currently highly decoupled. Now I have to extend them, and they need access to each other.
Let's say I have 4 services: EmailService, HouseService, UserService, PriceService. Each user has an email address and each user belongs to a house.
I want to send an email to each user about the price of the house that they are connected to. So in the EmailService I have SendEmailToAddress(string email, string text), in PriceService I have GetHousePrice(int id), in HouseService I have GetUsersInHouse(int id) and in UserService I have GetEmailOfUser(int id).
What would be the best approach to send an email to all the users from the HouseController? Should I just instantiate all the services in the controller action and call each one in order, or should I use the Mediator pattern? If I used it, the mediator would probably contain only one method, so it seems a bit of an overkill. Also, if I use it everywhere, should I create a different mediator for each service combination, or a single class that has all my services as private properties and uses only the ones it needs for a specific action? And if I go with the Mediator pattern, should I use it in every controller, or stick with the bare services where they don't need to interact (e.g. if I only need a list of houses, it's probably best to get them directly from the service object rather than through the mediator)?
Given that your services don't actually need to communicate with each other (you just need to call various methods on each and use the return values to complete a higher-level task), I don't think the Mediator pattern is appropriate here.
For example, it's not like you need the HouseService to manipulate the state of objects managed by the PriceService; you just need data from the PriceService, with the HouseService providing the input:
var houseId = houseService.GetIdOfHouse(someCriteria);
var price = priceService.GetPriceOfHouse(houseId);
Instead, I think what you need to implement is the Facade pattern, which will:
Provide a unified interface to a set of interfaces in a subsystem. Façade defines a higher-level interface that makes the subsystem easier to use.
Good example of Facade pattern can be found on the dofactory.com site:
http://www.dofactory.com/net/facade-design-pattern
Here's what I would consider doing:
public class NotificationFacade
{
    private IPriceService _priceService;
    private IHouseService _houseService;
    private IUserService _userService;
    private IEmailService _emailService;

    public NotificationFacade(IPriceService priceService, IHouseService houseService, IUserService userService, IEmailService emailService)
    {
        _priceService = priceService;
        _houseService = houseService;
        _userService = userService;
        _emailService = emailService;
    }

    public void NotifyUsersAboutPriceForHouse(int houseId)
    {
        var price = _priceService.GetHousePrice(houseId);
        var users = _houseService.GetUsersInHouse(houseId);

        foreach (var user in users)
        {
            var emailAddress = _userService.GetEmailOfUser(user);
            _emailService.SendEmailToAddress(emailAddress, "Your house price is: " + price);
        }
    }
}
In your controller:
public class HouseController
{
    private NotificationFacade _notificationFacade;

    public HouseController(NotificationFacade notificationFacade)
    {
        _notificationFacade = notificationFacade;
    }

    public void SomeActionMethod(int houseId)
    {
        _notificationFacade.NotifyUsersAboutPriceForHouse(houseId);
    }
}
The dependencies should be resolved using dependency injection with a container such as Unity, Ninject, StructureMap or something similar; with Unity, for example, the registration could look like the sketch below.
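For illustration only, a possible registration with the classic Microsoft.Practices.Unity API; the concrete implementation class names are assumptions, and this would typically live in your composition root (e.g. Global.asax):

using Microsoft.Practices.Unity;

public static class Bootstrapper
{
    public static IUnityContainer BuildContainer()
    {
        var container = new UnityContainer();
        container.RegisterType<IPriceService, PriceService>();
        container.RegisterType<IHouseService, HouseService>();
        container.RegisterType<IUserService, UserService>();
        container.RegisterType<IEmailService, EmailService>();

        // HouseController and NotificationFacade now resolve with all four services injected.
        return container;
    }
}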
You could create a workflow service that contains the actual logic to look up the information and send the mail using the existing services.
This service is then called from your HouseController. You could use the service directly as a class library or expose it as a WCF service; but it depends on your requirements.
This way your entity services remain loosely coupled, and all of your cross-service logic is in a dedicated component.
I have been looking into ASP.NET MVC best practices for the past couple of days, and I concluded that services should contain all the business logic (using repositories of the different domain models) and expose public methods that are accessible from the controllers.
In your case you should create a new service and put the whole logic of the calculation and the email sending in a method of that service, so that the service works like a black box. Other developers on your project don't need to know how things are managed inside that method; all they need to do is call it with the required parameters and handle the response.
Just create a HouseServiceFacade that contains the services you need. In this facade you can put all the methods the controller needs.

What is the best method for making database connection (static, abstract, per request, ...)?

I have used a lot of models for connecting to the database. In my last project, built with C# and Entity Framework, I created a static class for the database connection, but I had problems with opening and closing the connection: it gave me errors when more than 10-15 requests came in together. I solved it by switching to a per-request connection and removing all the static methods and classes.
Now I want to know:
What is the best model for making a connection?
Should I close it after every query and open it before each use, or ...?
Is a connection in a static class a good model (so that I don't need to create it every time)?
Is there a good design pattern for this problem?
All of this comes down to the same question: what is the best method for making a database connection (static, abstract, per request, ...)?
For example, I am working on an SMS sender web panel. I have to send 100K SMS per second; these SMS are collected with others into packages, where every package has 1~20 SMS, so I need to send 5K~100K packages per second, and when I send a package I have to do these steps:
Update each single SMS as delivered or not delivered
If delivered, decrease the user balance in the useraccounts table
Update the SMS send count in the user table
Update the SMS send count in the mobile number table
Update the SMS send count in the sender number table
Update the package with its delivered and failed SMS in the package table
Update the package with which thread sent it in the package table
Update the thread table with how many SMS were sent by this thread and how many failed
Add an account document for these transactions in the AccountDocument table
All of these steps, plus a lot of other things like logs, the user interface and monitoring widgets, have to be done, and I need a DB connection for every single one of these transactions.
Now, what is the best model for connecting to the DB: per user request, per thread, or per single transaction?
Answers to your questions:
Close it. .NET does connection pooling for you under the hood.
Create it each time. Use using (Connection conn = new ....) every time; this way you'll make the most of the .NET pooling mechanism.
You can use the .NET ThreadPool (or your own custom one), limit it to 10 threads in parallel and enqueue work items one after another. This way no more than 10 connections will be used at the same time, and it will probably be faster (see the sketch after this list).
More about custom thread pools: Custom ThreadPool Implementation
Per instance.
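Regarding the thread-pool suggestion above, here is one way (certainly not the only one) to cap concurrent database work without writing a whole pool, using SemaphoreSlim; the Package type and the connection string are placeholders:

using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class ThrottledSender
{
    private const string ConnectionString = "..."; // placeholder

    // At most 10 packages are processed (and so at most 10 pooled
    // connections are in use) at any one time.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(10);

    public static async Task ProcessPackageAsync(Package package)
    {
        await Throttle.WaitAsync();
        try
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                await connection.OpenAsync();
                // send the package and run its status updates here
            }
        }
        finally
        {
            Throttle.Release();
        }
    }
}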
Here's my suggestion for an architecture:
Create a database table (queue) for the pending SMS to be sent out.
Each row will contain all the information needed for the SMS plus its current status.
Create a worker process, perhaps a Windows service, which will sample this table constantly, let's say every 5 seconds. It will select the TOP ~20 SMS with status = 'pending to be sent' (the status should be represented as an int) and update their status to 'sending'.
Each SMS will be sent out using a custom thread pool on the Windows service side.
At the end of the process, ALL the processed SMS statuses will be updated to 'done' using a CTE (common table expression); you can send a CTE with the ids of all the SMS rows that have just been processed to do a bulk update to the 'done' status.
You could make the status-update stored procedure the same one as the 'get pending' procedure. This way you can select-for-update with NOLOCK and make the database work faster.
This way you can also have more than one processor service running (but then you'll have to lose the NOLOCK).
Remember to avoid as much locking as possible.
By the way, this is also good because you can send an SMS from any place in your system simply by adding a row to the pending-SMS table.
One more thing: I would not recommend Entity Framework for this, as it has too much going on under the hood. All you need for this kind of task is to call 3-4 stored procedures, and that's it. Maybe take a look at Dapper-dot-NET; it's a very lightweight micro-ORM which in most cases works more than 10 times faster than EF (Entity Framework).
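To give a feel for the Dapper suggestion, a rough sketch; the stored procedure name, the PendingSms type, table, columns and status values are all made up:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

static class SmsQueue
{
    // Selects the TOP ~20 'pending' rows and flips them to 'sending'
    // inside the (hypothetical) stored procedure, as described above.
    public static List<PendingSms> GetPendingBatch(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            return connection.Query<PendingSms>(
                "dbo.GetPendingSms",
                commandType: CommandType.StoredProcedure).ToList();
        }
    }

    // Bulk status update for all processed ids in a single round trip.
    public static void MarkBatchDone(string connectionString, IEnumerable<int> smsIds)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Execute(
                "UPDATE dbo.Sms SET Status = @Done WHERE Id IN @Ids",
                new { Done = 3, Ids = smsIds });
        }
    }
}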
1. Should I close it after every query?
.NET does that for you, so let it handle it; connection cleanup is taken care of under the hood. Don't try to manage the lifetime manually; this is a good answer by Jon Skeet: https://stackoverflow.com/a/1998600/544283. However, you should wrap the connection in a using (IDisposable) { } statement so that Dispose is called deterministically. Here is a nice article about releasing resources: http://www.codeproject.com/Articles/29534/IDisposable-What-Your-Mother-Never-Told-You-About.
2. Is a connection in a static class good?
Never make a data context static! Data contexts are not thread-safe or safe for concurrent access.
3. Is there a good design pattern for this problem?
As Belogix mentioned, the dependency injection and unit-of-work patterns are great; in fact, Entity Framework is itself a unit of work. DI and UoW are a bit overrated though; they are not easy to implement if it's your first time handling an IoC container (if you go down that path, I'd recommend Ninject). One other thing: you don't really need DI if you're not going to run tests; the whole point of these patterns is to decouple, so you can test and mock without breaking a sweat.
In short: if you're going to run tests against your code, go for these patterns. If not, here is an example of how you could share your data context among the services you'd like. This is also the answer to your fourth question.
4. What is the best method for making a database connection (static, per request)?
Your context service:
public class FooContextService {
private readonly FooContext _ctx;
public FooContext Context { get { return _ctx; } }
public FooContextService() {
_ctx = new FooContext();
}
}
Other services:
public class UnicornService {
private readonly FooContext _ctx;
public UnicornService(FooContextService contextService) {
if (contextService == null)
throw new ArgumentNullException("contextService");
_ctx = contextService.Context;
}
public ICollection<Unicorn> GetList() {
return _ctx.Unicorns.ToList();
}
}
public class DragonService {
private readonly FooContext _ctx;
public DragonService(FooContextService contextService) {
if (contextService == null)
throw new ArgumentNullException("contextService");
_ctx = contextService.Context;
}
public ICollection<Dragon> GetList() {
return _ctx.Dragons.ToList();
}
}
Controller:
public class FantasyController : Controller {
private readonly FooContextService _contextService = new FooContextService();
private readonly UnicornService _unicornService;
private readonly DragonService _dragonService;
public FantasyController() {
_unicornService = new UnicornService(_contextService);
_dragonService = new DragonService(_contextService);
}
// Controller actions
}
Second thoughts (almost an edit):
If you need your context not to create proxies for your entities (and therefore not to have lazy loading either), you could overload your context service as follows:
public class FooContextService {
private readonly FooContext _ctx;
public FooContext Context { get { return _ctx; } }
public FooContextService() : this(true) { }
public FooContextService(bool proxyCreationEnabled) {
_ctx = new FooContext();
_ctx.Configuration.ProxyCreationEnabled = proxyCreationEnabled;
}
}
NOTE:
If you set proxy creation to false you will not have lazy loading out of the box.
If you have API controllers, you don't want to deal with a full-blown object graph anyway.
EDIT:
Some reading first:
This link relates to a pre-release version of EF6: Entity Framework and Async.
Scott Allen posted about this in his blog: Async in Entity Framework 6.0.
If you're going to use the unit of work, I'd recommend reading this: Make the DbContext Ambient with UnitOfWorkScope.
Darin Dimitrov's answer on Do asynchronous operations in ASP.NET MVC use a thread from ThreadPool on .NET 4.
To open the connection manually, you can do this:
(_context as IObjectContextAdapter).ObjectContext.Connection.Open();
This is a great article about Managing Connections and Transactions.
Entity Framework exposes an EntityConnection through the Connection property; its declaration reads public sealed class EntityConnection : DbConnection.
Considerations for managing connections: (taken from previous link)
The object context will open the connection if it is not already open before an operation. If the object context opens the connection during an operation, it will always close the connection when the operation is complete.
If you manually open the connection, the object context will not close it. Calling Close or Dispose will close the connection.
If the object context creates the connection, the connection will always be disposed when the context is disposed.
In a long-running object context, you must ensure that the context is disposed when it is no longer required.
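To make the first consideration concrete, a small EF6 sketch reusing the FooContext from above: open the connection yourself and it stays open for every operation inside the using block, instead of being opened and closed around each one:

// IObjectContextAdapter lives in System.Data.Entity.Infrastructure (EF6).
using (var context = new FooContext())
{
    // Manually opened, so EF reuses this one connection for all the work below.
    ((IObjectContextAdapter)context).ObjectContext.Connection.Open();

    var unicorns = context.Unicorns.ToList();
    var dragons = context.Dragons.ToList();
    context.SaveChanges();

    // Disposing the context closes and disposes the connection it created.
}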
Hope it helps.
I think per request scales the best. Use a thread-safe connection pool and make the connection scope coincide with the unit of work. Let the service that's responsible for transactional behavior and units of work check out the connection, use it, and return it to the pool when the unit of work is either committed or rolled back.
UPDATE:
10-12 seconds to commit a status update? You've done something else wrong. Your question as written is not sufficient to provide a suitable answer.
Daily NASDAQ volume is 1.3B transactions, which on an 8 hour day works out to ~45K transactions per second. Your volume is 2X that of NASDAQ. If you're trying to do it with one machine, I'd say that NASDAQ is using more than one server.
I also wonder whether you could do without updating that status with full ACID guarantees. After all, Starbucks doesn't use two-phase commit. Maybe a better solution would be a producer/consumer pattern with a blocking queue, updating those statuses when you can after the messages are sent.
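A bare-bones sketch of that producer/consumer idea with BlockingCollection; SmsStatus and SaveBatch are placeholders, and the batch size is arbitrary:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class StatusUpdateQueue
{
    private readonly BlockingCollection<SmsStatus> _queue = new BlockingCollection<SmsStatus>();

    // Producers: called from the sending threads, never waits on the database.
    public void Enqueue(SmsStatus status)
    {
        _queue.Add(status);
    }

    // Single consumer: drains the queue and persists statuses in batches.
    public Task StartConsumer()
    {
        return Task.Run(() =>
        {
            var batch = new List<SmsStatus>(100);
            foreach (var status in _queue.GetConsumingEnumerable())
            {
                batch.Add(status);
                if (batch.Count == 100 || _queue.Count == 0)
                {
                    SaveBatch(batch); // one bulk write instead of 100 round trips
                    batch.Clear();
                }
            }
            if (batch.Count > 0)
                SaveBatch(batch);
        });
    }

    public void Complete()
    {
        _queue.CompleteAdding(); // lets the consumer finish and flush
    }

    private void SaveBatch(List<SmsStatus> batch)
    {
        // bulk update / table-valued parameter, outside the send path
    }
}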

Raising Domain Events For Multiple Subscribers

I am starting to look into the Domain Events pattern and have read a lot of resources on the subject, but I cannot find a good way of implementing it for our requirements. Basically we have a service/domain layer which wraps the repository layer for reads/writes with a simplistic CQRS implementation. We have an ASP.NET MVC application which consumes this service/domain layer. The whole application is tied together with Autofac, and what I would like to happen is the following:
When a news item is created by calling, say, "CreateNews" on the service layer, register that an event will need to be raised, like so:
public void CreateNews(Domain.Entities.News.NewsBO news)
{
    ValidateBusinessObject(news);
    var entityNews = AutoMapper.Mapper.Map<Repositories.Entities.News.News>(news);
    NewsCommandRepository.Create(entityNews);
    _domainEventManager.Register<NewsCreatedDomainEvent>(x => x.News = news);
}
This all happens in a transaction, and I don't want to actually raise the event until the save has completed, so in our SaveChanges method I want to do this:
public void SaveChanges()
{
    _repoCommandManager.SaveChanges();
    _domainEventManager.RaiseEvents();
}
Then in our ASP.NET MVC application I want to have a handler implementation (IHandles<T>) which looks like this:
public class NewsCreatedDomainEventCacheHandler : IHandles<Project.Services.Domain.Events.News.NewsCreatedDomainEvent>
{
    public void Handle(Services.Domain.Events.News.NewsCreatedDomainEvent @event)
    {
        // In here we would update the cache or something else particular to the web layer
    }
}
I cannot figure out how to go about raising this event from the save method and calling the implementation in the Web.Mvc application.
Any suggestions would be appreciated.
I think I have an example of how to do this for you and I happen to be using MVC and AutoFac also! In my specific example I am concentrating on Command/Query separation but in doing so I had to implement a domain event pattern.
First have a read of this blog post so you get an overview of how things hang together and what the code looks like:
http://www.nootn.com.au/2013/03/command-query-separation-to-better.html
So I would recommend installing the DotNetAppStarterKit.Web.Mvc NuGet package, then take a look at the Global.asax file for how to register all the components you will need. You can peruse the SampleMvc application for things like Event Subscribers.
I hope this helps and gets you up and going quickly. You can just use the event publisher/subscriber parts of DotNetAppStarterKit without using commands and queries.
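If you end up rolling your own instead, one possible shape for the _domainEventManager from the question is sketched below: Register only buffers a factory for the event, and RaiseEvents (called after SaveChanges) resolves every IHandles<T> from Autofac and invokes it. This is only a sketch of the idea, not the DotNetAppStarterKit implementation.

using System;
using System.Collections.Generic;
using Autofac;

public interface IHandles<in TEvent>
{
    void Handle(TEvent @event);
}

public class DomainEventManager
{
    private readonly ILifetimeScope _scope;
    private readonly List<Action> _pendingEvents = new List<Action>();

    public DomainEventManager(ILifetimeScope scope)
    {
        _scope = scope;
    }

    // Buffer the event; nothing is dispatched until RaiseEvents runs.
    public void Register<TEvent>(Action<TEvent> init) where TEvent : new()
    {
        _pendingEvents.Add(() =>
        {
            var @event = new TEvent();
            init(@event);

            // The MVC project registers its handlers (e.g. the cache handler)
            // with Autofac; resolve and invoke every IHandles<TEvent>.
            foreach (var handler in _scope.Resolve<IEnumerable<IHandles<TEvent>>>())
            {
                handler.Handle(@event);
            }
        });
    }

    // Called after SaveChanges has committed.
    public void RaiseEvents()
    {
        foreach (var raise in _pendingEvents)
        {
            raise();
        }
        _pendingEvents.Clear();
    }
}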

Asp.net MVC membership design

Some Background to begin:
I've implemented a custom MembershipProvider, called "WebMemberShipProvider", that validates a user via my service layer.
Currently I have a service called "MembershipService"; this service implements the IMembershipService interface in the service layer.
This MembershipService queries the DAL and validates a user based on username/password.
I've also created a custom AuthorizeAttribute named "AuthorizeOwnerAttribute", and this is where I'm having design issues.
Each controller has a dependency on a service, e.g. UsersController takes an IUserService in its constructor.
How can I apply the AuthorizeAttribute to an ActionResult so that it checks that the currently logged-in user and the user being edited have the same "StudioId"? Note: I want to use the AuthorizeAttribute with multiple controllers, not just "UsersController".
So my questions to you are:
What should I do to store the current authenticated user's "StudioId", since it will be used across multiple controllers?
How should I pass authentication down to the service layer? I want to validate that requests are valid in the service and data access layers, not just on the client (if that is advisable; I'm just not sure that client-side validation alone is enough if I want to re-use the BLL and DAL later on in a stand-alone application).
Technologies used:
- LINQ to SQL via the repository pattern
- ASP.NET MVC Preview 2
Any recommendations or code examples would be very welcomed.
I basically did my security mostly at the controller level for something like this. I decided not to pass things too far down the chain just to find out whether a person had access; where I did need a check further down, I made sure IPrincipal.IsInRole() was enough to satisfy it.
I did something else that feels somewhat hackier. I needed to make sure that only people who were registered and had this piece of content assigned to them were able to access it from this section.
So I created an attribute filter that works much like this:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    var thingToView = filterContext.ActionParameters[_thingToView] as Thing;
    var registration = filterContext.ActionParameters[_registration] as Registration;

    if (!registration.CanSeeThing(thingToView))
    {
        throw new RegistrationCannotViewThing(registration, thingToView);
    }

    base.OnActionExecuting(filterContext);
}
Now the thing that felt somewhat hacky in this implementation is that I did this on my controller method:
[AuthFilter(ThingToView = "thingToView", Registration = "registration")]
public ActionResult Method(Thing thingToView, Registration registration)
{
    ....
}
The actual parameter assignments occurred in the model binder. The security happens through the filter which checks the parameters passed to the method. I then reused this filter in a lot of places.
I did something similar to a Scott Hanselman post with a model binder, in order to pass in which user is calling a method: http://www.hanselman.com/blog/IPrincipalUserModelBinderInASPNETMVCForEasierTesting.aspx
I suppose you can use the approach from that blog post to get your user object into your controller method and then pass it down to your service layer.
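For reference, the model binder from that Hanselman post boils down to something like this (ASP.NET MVC), after which any action can take an IPrincipal parameter and hand it on to the service layer:

using System.Security.Principal;
using System.Web.Mvc;

public class IPrincipalModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        // Hands the current user to any action parameter typed as IPrincipal,
        // which also keeps the action easy to test with a fake IPrincipal.
        return controllerContext.HttpContext.User;
    }
}

// In Global.asax Application_Start:
// ModelBinders.Binders[typeof(IPrincipal)] = new IPrincipalModelBinder();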
