We have developed a Vehicle Tracking Application in MVC5 using EF6.
The app has a dashboard that displays the current status of each vehicle, i.e. Moving, Stopped, etc.
To load the current status, the dashboard view fires an async AJAX request every 20 seconds to fetch the latest data.
Now, rather than making calls from the client machine, I want the client to automatically receive an update as soon as new data is available for its vehicle. It should not poll every 20 seconds.
I've read about SignalR and tried implementing the chat sample. That works well, but somehow I'm unable to figure out how to use it in my scenario.
I also read about SqlDependency to detect changes in the DB, but again couldn't reach a solution.
I'll be glad if someone can point me in the right direction.
Thanks.
Some time ago I experimented with replacing polling with SignalR too. It was quite straightforward and I mainly used the SignalR website as a source of information.
I remember I dealt with some serialization issues, but that was more related to the message contract, as we used a hierarchy of interfaces and implemented some inheritance in the contracts (my question from that time).
Just a suggestion - plan well for scalability: how will your scenario work when you have to scale out (if applicable) to multiple servers? For my high-frequency messaging it was a no-go reason (my question from that time).
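A minimal sketch of what the push side could look like with a SignalR 2.x hub; the names (VehicleHub, VehicleStatusNotifier, the updateStatus client callback) and the per-vehicle group idea are illustrative assumptions, not taken from the posts above:

```csharp
// Assumes the Microsoft.AspNet.SignalR NuGet package (SignalR 2.x).
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical hub the dashboard connects to.
public class VehicleHub : Hub
{
    // Each dashboard subscribes to the vehicle(s) it displays.
    public Task SubscribeToVehicle(string vehicleId)
    {
        // One SignalR group per vehicle, so a client only gets its own updates.
        return Groups.Add(Context.ConnectionId, vehicleId);
    }
}

// Call this from wherever the new tracking data is saved (hypothetical class name),
// instead of waiting for the dashboard to poll every 20 seconds.
public class VehicleStatusNotifier
{
    public void NotifyStatusChanged(string vehicleId, string status)
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<VehicleHub>();

        // "updateStatus" is the JavaScript callback the dashboard registers on the hub proxy.
        hubContext.Clients.Group(vehicleId).updateStatus(status);
    }
}
```

On the dashboard, the client joins its vehicle's group after the connection starts and simply updates the status element when updateStatus fires, so the 20-second AJAX poll goes away.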
Maybe this link can help.
It is SOAP-based, but it also illustrates the fact that you will have to implement a web-service-like part on the device.
Related
There is an ASP.NET C# web application through which we get the recipients' emails, time zones and SMTP server details into the database. I have two requirements:
1. Consider a table in the database. Whenever there is a change in the table, an email has to be sent. It is OK if we just check the database every 5 minutes; it would be great if we could send it instantly, but a delay is fine.
2. Sending emails automatically at 12 AM in each recipient's time zone.
I'm familiar with C# programming, but I'm kind of new to automatic scheduling. This could sound like a basic question, but it would be great if you can help. What is the best way to implement this - Web API, web services, WCF, Windows services, or a combination of Web API and Task Scheduler? Please let me know your thoughts. A small tip on how to implement it would also be great.
You have the option of setting up a trigger, but I hate that approach as it adds overhead to every row insertion on your table and isn't actually needed. I think you are on the right path by thinking about polling. There is a nice little library in .NET called Hangfire which I find very useful for scheduled tasks. It has pretty sophisticated reporting and almost always works really well. You can give it a try. But if you want to control things better, writing a small Windows service wouldn't be bad either. I think building a web service, whether with Web API or WCF, is a bit of overkill here and might not fit the purpose.
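A rough sketch of how both requirements could be wired up with Hangfire, assuming SQL Server storage; the job names, the CheckTableForChanges/SendMidnightEmails methods and the hourly time-zone check are illustrative assumptions, not something from the question:

```csharp
// Assumes the Hangfire and Hangfire.SqlServer NuGet packages; a BackgroundJobServer
// (or app.UseHangfireServer() in an OWIN Startup class) must be running to execute the jobs.
using Hangfire;

public static class EmailJobs
{
    public static void Configure()
    {
        // Hangfire keeps its job state in your existing database.
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("DefaultConnection"); // hypothetical connection string name

        // Requirement 1: poll the table for changes every 5 minutes.
        RecurringJob.AddOrUpdate("check-table", () => CheckTableForChanges(), "*/5 * * * *");

        // Requirement 2: run hourly and send to recipients whose local time just reached midnight.
        RecurringJob.AddOrUpdate("midnight-emails", () => SendMidnightEmails(), Cron.Hourly());
    }

    public static void CheckTableForChanges()
    {
        // Hypothetical: compare against the last processed timestamp/row version
        // and email whatever changed since then.
    }

    public static void SendMidnightEmails()
    {
        // Hypothetical: for each recipient, convert DateTime.UtcNow to their stored
        // time zone and send the email if the local hour is 0.
    }
}
```

Since the Hangfire server has to be hosted somewhere, running it inside a small Windows service (or the existing web app) fits the "polling every 5 minutes" approach without needing a separate web service layer.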
Basically, I have a new desktop application my team and I are working on that will run on Windows 7 desktops on our manufacturing floor. This program will be used fairly heavily as it gets introduced and will need to interact with our manufacturing database. I would estimate there will (eventually) be around 100 - 200 machines running this application at the same time.
We're lucky here, we get to do everything from scratch, so we define the database, any web services, the program design, and any interaction between the aforementioned.
As it is right now, our legacy applications just have direct access to a database, which is icky. We want to not do that with the new application.
So my question is, how do I do this? Vague, I know, but basically I have a lot at my disposal here, and I'm not entirely sure what the right direction to go is.
My initial thought, based on what I've perceived others doing, is to basically wall off the database behind web services, i.e. all database interactions from the floor MUST occur through the web services, providing a layer of security by keeping much of the database logic behind closed doors. Web service calls are then secured to individual users via Active Directory.
As I've found, though, that has some implications of its own... We have to abstract the data before it reaches the application. There's still potential for malicious abuse by using web service calls repeatedly to ruin or spam data. We've looked at Entity Framework and really like what it provides, but as best I can tell, it won't be available to us at the application level in this setup.
It just seems like I can't come to a conclusion on what is "right". So, what is right?
Web services sound like the right approach. Implementing an SOA-oriented layer at the web service level gives you a lot of control over what happens to the data at the database server.
I don't quite share your doubts about repeated calls doing any damage - first, you can have an audit log of every single call, so detecting possible misuse is straightforward. You could also implement role-based security so that web service methods are exposed to users in roles, which means that not everyone will be able to call just any method.
You could even secure your web services with forms authentication so that authentication is done against any data source, not only Active Directory.
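As a minimal sketch of the role-based part, assuming a WCF service with Windows/AD principals; the service and role names (OrderService, FloorOperator, Supervisor) are hypothetical:

```csharp
// Hypothetical WCF service; with the default principalPermissionMode
// ("UseWindowsGroups" in serviceAuthorization) the roles map to AD groups.
using System.Security.Permissions;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void SubmitOrder(int orderId);

    [OperationContract]
    void CancelOrder(int orderId);
}

public class OrderService : IOrderService
{
    // Only members of the hypothetical "FloorOperator" role may submit orders.
    [PrincipalPermission(SecurityAction.Demand, Role = "FloorOperator")]
    public void SubmitOrder(int orderId)
    {
        // Audit the call, then do the database work behind the service boundary.
    }

    // Cancelling requires the hypothetical "Supervisor" role.
    [PrincipalPermission(SecurityAction.Demand, Role = "Supervisor")]
    public void CancelOrder(int orderId)
    {
        // ...
    }
}
```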
And one last thing: the application itself could be published as a ClickOnce application so that it is downloaded and executed from a web page and automatically updates itself as you publish new versions.
If you need some technical guidance, I've blogged on that years ago:
http://netpl.blogspot.com/2008/02/clickonce-webservice-and-shared-forms.html
My suggestion, since you are greenfield, is to use an API wrapper approach with ServiceStack.
Check out: http://www.servicestack.net/ServiceStack.Northwind/
Doing that, you can use ServiceStack authentication, abstract away your DB layer (because you could move to a different DB provider, change its location, provide queues for work items, etc.) and in time perhaps move your whole infrastructure to an internal intranet app.
Plus, ServiceStack is incredibly fast, interoperable with almost any protocol you throw at it, and runs under Mono, so you are not stuck with an MS back end that could be very expensive.
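A small sketch of the message-based ServiceStack style this implies; the request DTO, route and Part type are hypothetical, and the exact namespaces depend on the ServiceStack version you use:

```csharp
// A ServiceStack sketch (v4-style namespaces); GetParts, Part and LoadPartsFor
// are made-up names used only for illustration.
using System.Collections.Generic;
using ServiceStack;

[Route("/parts")]                          // the clients call a URL, never the database
public class GetParts : IReturn<List<Part>>
{
    public string MachineId { get; set; }
}

public class Part
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PartsService : Service
{
    public object Get(GetParts request)
    {
        // Data access (EF, OrmLite, ...) lives here, behind the API,
        // so the desktop clients never see a connection string.
        return LoadPartsFor(request.MachineId); // hypothetical data-access call
    }

    private List<Part> LoadPartsFor(string machineId)
    {
        return new List<Part>();
    }
}
```

The desktop app can then use the typed JsonServiceClient that ships with ServiceStack instead of hand-rolling HTTP calls.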
My two cents. :)
First of all, this question is not appropriate for Stack Overflow; you might get close votes really quickly.
Second, you may want to have a look at WCF RIA Services for this.
It will allow you to create basic CRUD operations for all your entities, and stuff like that.
I never used it myself, so I'm not sure what the potential issues might be.
Otherwise, just do what we did:
Create generic (<T>) interfaces, services, contracts and everything. This will allow you to adapt the CRUD functionality in your services, DAOs, ViewModels and so on to any entity type.
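A bare-bones sketch of that generic (<T>) idea; the member names are illustrative, not taken from the answer above:

```csharp
// A minimal illustration of generic CRUD interfaces reused across entity types.
using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Update(T entity);
    void Delete(int id);
}

// The same shape can be repeated at the service/contract and ViewModel levels,
// so the CRUD plumbing is written once and reused for every entity type.
public interface ICrudService<T> where T : class
{
    T Get(int id);
    IEnumerable<T> List();
    int Create(T entity);
    void Save(T entity);
    void Remove(int id);
}
```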
I've been looking into using SignalR for a while now, and I think I have a good candidate for it.
I have a page which allows users of the system to leave comments, and at the moment it uses jQuery to periodically refresh the list of comments. I think SignalR would replace this nicely, i.e. if two users were looking at the list and one wrote a comment, I would like it to appear instantly for the second.
All well and good, I have a sort of template where this works.
However
My system itself can sometimes add automatic notifications to the list - these are put into the database directly by a non-web-based application.
How can I get SignalR to see the new information from the database and send it to the users?
In SignalR, the hub is a static part of your application. You can spin up a System.Threading.Timer in your web application to periodically check your database for new notifications and add those to the data used by the hub.
This can even be improved by using a SqlCacheDependency.
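A rough sketch of that timer approach with SignalR 2.x; the hub name, the newComments client method and GetCommentsSince are hypothetical placeholders:

```csharp
// Assumes SignalR 2.x (Microsoft.AspNet.SignalR); CommentsHub, newComments and
// GetCommentsSince are hypothetical names.
using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

public class CommentsHub : Hub { }

// Started once, e.g. from Application_Start in Global.asax.
public class CommentPoller
{
    private static Timer _timer;
    private static DateTime _lastCheck = DateTime.UtcNow;

    public static void Start()
    {
        // Check every few seconds for rows written by the non-web application.
        _timer = new Timer(CheckForNewComments, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
    }

    private static void CheckForNewComments(object state)
    {
        var newComments = GetCommentsSince(_lastCheck); // hypothetical data-access call
        _lastCheck = DateTime.UtcNow;

        if (newComments.Length == 0) return;

        // Push to every connected browser instead of waiting for the jQuery refresh.
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<CommentsHub>();
        hubContext.Clients.All.newComments(newComments);
    }

    private static string[] GetCommentsSince(DateTime since)
    {
        return new string[0];
    }
}
```

A SqlDependency/SqlCacheDependency subscription can replace the fixed interval so the broadcast only fires when the table actually changes.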
A subjective side note: I do agree this type of functionality is a very good candidate for SignalR.
We are looking into a better way to deliver data update notifications to a web front end.
These notifications trigger events that execute business logic and update elements via JavaScript (JS) to dynamically refresh the page without reloading.
Currently this is done with a server-side thread, which periodically fires an async JS event to notify the web front end(s) to check whether the data has changed or not.
This mechanism works, but the feeling within the team is that it could be a lot more efficient.
The tool is written in C# / ASP.NET combined with JS, and we use the PokeIn library for the async JS/C# calls.
Any suggestions for improved functionality are welcome! Including radically different approaches still maintaining the JS/C#/ASP.NET usage.
Is this a real question? I would like to add this as a comment but I don't have enough reputation. Anyway, if you need what PokeIn does for you (object translation between the parties), that is the only option you have. Although there are solutions like WebSync and SignalR, they don't handle the object translation and offer no different approach, etc. Better still, you benefit from PokeIn's WebSocket feature: both of the others need Windows Server 8 for WebSockets, while PokeIn lets you use WebSockets on any server version or platform.
It sounds like SignalR would help you. This blog post gives a good introduction.
I was trying to solve something similar recently (reporting real-time updates triggered by external services communicating with the server), and it turned out SignalR is a perfect fit for this situation.
Basically, it is a library wrapping long polling, WebSockets and a few other techniques, transparently using whatever is available on the server and client.
I only have good experience with it so far.
My assignment is to create an app for a mobile device (like iPhone/Android/BlackBerry/etc.); the purpose of this app is to tell users there is something new on the website and then show a list (inside the app) of the latest updates.
The company insisted I use ASP.NET/C#/Visual Studio and the SOAP protocol.
I've started working with C# and then with WCF.
I've already got some stuff working (like consuming the WCF service from an Android app and getting data sent back).
My question is what the best architecture would be for the mobile app development. I was thinking about having only one WCF service and then calling a general function like Do() (or some other name :)) and adding a SOAP header where you can define what you want the service to do, like getting a record from the database, a ping, or something else - whatever the company may need in the future :)
How this would work:
The client (mobile app) would make a call to the WCF service, and in the SOAP header it states, let's say, that it wants to register the phone with the device ID. The WCF service will receive the SOAP request, extract the header and use some sort of switch to decide what it needs to do. Once it knows what to do, the WCF service will then, for example, access some local classes to insert/retrieve database data or do something else, and when it's done it will simply return what is needed (like an OK sign, some data, or something else :)).
Is this the right approach? Because the way I'm looking at it, it makes it very easy to change the back end without updating the app.
Sorry if this is a naive question, but I am new to WCF and mobile app development, and I'm trying to deliver a great product at the end of my internship. I was just wondering what sort of architecture you would suggest for this sort of assignment.
EDIT
I already told them SOAP is too heavy for mobile development and showed them some graphs, but they insisted on using techniques they already know.
After doing some research I indeed think the contract-based approach is the better way to go. But can you maybe answer a few questions regarding it?
- Can I have one WCF service that gets consumed and holds all the different operations?
- Can I authenticate the client (using required SOAP headers) at the beginning of the call and after that invoke the desired operation?
SOAP is generally regarded as a little too heavy for mobile development. Since users may incur data charges and generally have lower bandwidth, it would be preferable to take a REST/JSON approach. You can still use WCF to do this at the server.
You can use a generic catch-all operation (Action="*"), but you will then need to handle the serialisation/deserialisation of messages yourself. However, unless you have a pressing reason to do this, I would suggest properly structured operations are the better way to go. They are much more maintainable. You can still make implementation changes at the server without affecting the client, as long as the message contract does not change. The reality is that if you want to change the message or operation contracts, you will have to make changes to the clients anyway. After considering this, the 'contract'-based approach has only upsides and no real downsides.
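For illustration, a minimal sketch of what structured, JSON-friendly WCF operations might look like; the contract, operation and DTO names are hypothetical, and it assumes a webHttpBinding endpoint with the webHttp behavior on the server:

```csharp
// A sketch only: hypothetical contract, operations and DTO.
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class UpdateItem
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}

[ServiceContract]
public interface IMobileService
{
    // GET /updates/{deviceId} returning JSON keeps the payload small for mobile clients.
    [OperationContract]
    [WebGet(UriTemplate = "/updates/{deviceId}", ResponseFormat = WebMessageFormat.Json)]
    UpdateItem[] GetLatestUpdates(string deviceId);

    // Registration is its own explicit operation rather than a branch inside a generic Do().
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/devices",
               RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    bool RegisterDevice(string deviceId);
}
```

Each operation has an explicit contract, so the switch inside a single Do() method disappears, and the Android client just issues plain HTTP requests with small JSON payloads.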