I'm really new to the concept of RESTful Web APIs, so please don't go hard on me, but I really can't figure this out.
Basically, the problem I can't find a solution for is making my console application run without my manual 'start' command, so that the DB stays up to date while the API is only responsible for getting data from the DB and other CRUD operations.
I have a console application that parses data from a website and stores the necessary values in the database. In the same solution, I also created a Web API - MVC project and wrote CRUD operations for all the entity types. They both work without any problem; however, unless I manually run the console application to update the DB, the DB keeps old values and the GET operations return stale data.
All I want is to somehow run the console application's code (possibly at certain times) so that the DB stays up to date.
I hope I didn't ask this in a confusing way; if I did, it's because I can't figure out or understand the process.
I'd appreciate it if someone could briefly explain what I have to do.
Thanks!
What you're asking isn't really a programming question per se, but more of a scheduling question. You could turn your code into a service and install it, but that is really overkill for what you need. (Some people would argue that a Windows service isn't appropriate for this anyway, since all your code does is run at scheduled times rather than wait to intercept information, but that is a whole other discussion.)
What you need is a Scheduled Task in Windows: http://windows.microsoft.com/en-US/windows/schedule-task#1TC=windows-7
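If it helps, the console side can stay trivially simple: the scheduled task just launches the exe, and the process exit code tells Task Scheduler whether the run succeeded. A minimal sketch, where `UpdateDatabase()` is a placeholder for your existing parse-and-store code:

```csharp
using System;

// The scheduled task runs the console exe; all it has to do is one
// update pass, reporting success or failure via the exit code.
public static class Updater
{
    public static int RunOnce()
    {
        try
        {
            UpdateDatabase();
            return 0;           // success - visible in Task Scheduler's run history
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex);
            return 1;           // non-zero marks the run as failed
        }
    }

    static void UpdateDatabase()
    {
        // ... your existing parse-and-store logic goes here ...
    }
}
// In Program.Main: Environment.Exit(Updater.RunOnce());
```

You can then schedule the exe to run every hour (or however fresh the data needs to be) and your Web API never has to know the updater exists.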
So this is more of an advice question.
We have a project that involves an RSS feed whose reports are saved to a database. This will need a job service, so either Quartz or cron.
My question is that there are many types of project we use as developers, but in my line of work these are normally a Web API with MVC hooked up to an Angular front end.
With this project we don't need any endpoints, so there's no need for MVC. I'm just after some advice as to what others would recommend.
The flow will be:
1. C# call to the RSS feed with a parameter (5 per second max)
2. XML returned
3. XML mapped to a DTO/model
4. DTO/model saved to the database
5. External reporting tool handles the data.
Any help is appreciated.
Thanks in advance
I would recommend a console application for the following reasons:
Requirement for a job. Console apps can easily be run via scheduled tasks/jobs.
Lack of requirement for a user interface. Perhaps you might need to pass in a few parameters; perfect for a console application.
The retrieval and storage of the RSS feed's XML can all be handled in C#; there's nothing special needed from a framework perspective that a console app can't easily do.
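To give a feel for steps 2-3 of your flow, here is a minimal sketch of mapping returned XML onto a DTO with the BCL's XmlSerializer. The `FeedItem` shape is invented and would need to match the real feed's schema:

```csharp
using System.IO;
using System.Xml.Serialization;

// Hypothetical DTO for one feed entry (step 3 of the flow); the property
// names and XML shape are assumptions - adjust to the actual RSS schema.
public class FeedItem
{
    public string Title { get; set; }
    public string Link { get; set; }
}

public static class FeedParser
{
    // Map the returned XML string to the DTO using XmlSerializer.
    public static FeedItem Parse(string xml)
    {
        var serializer = new XmlSerializer(typeof(FeedItem));
        using (var reader = new StringReader(xml))
        {
            return (FeedItem)serializer.Deserialize(reader);
        }
    }
}
```

From there, step 4 is just handing the DTO to whatever data access code you use, and the 5-per-second cap can be enforced with a simple delay between requests.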
You may also consider a windows service project.
Make it a purely background process.
You don't have to worry about console windows appearing in a user session or taking measures to keep them hidden.
There are also a few operational features - such as managing the service through the service management console, and out-of-the-box support for logon and failure recovery - which can be leveraged.
There is an ASP.NET C# web application through which we get the recipient emails, time zones and SMTP server details into the database. I have two requirements:
1. Consider a table in the database. Whenever there is a change in the table, an email has to be sent. It is OK if we just check the database every 5 minutes; sending instantly would be great, but a delay is fine.
2. Sending emails automatically at 12 AM in each recipient's time zone.
I'm familiar with C# programming, but I'm kind of new to automatic scheduling. This may sound like a basic question, but it would be great if you could help. What is the best way to implement this - Web API, web services, WCF, Windows services, or a combination of Web API and the task scheduler? Please let me know your thoughts. A small tip on how to implement it would also be great.
You have the option of setting up a trigger, but I hate that approach as it adds overhead to your table row insertion and isn't actually needed. I think you're on the right path by thinking about polling. There is a nice little library in .NET called Hangfire which I find very useful for scheduled tasks. It has pretty sophisticated reporting and almost always works really well. You can give it a try. But if you want to control things better, writing a small Windows service wouldn't be that bad either. I think doing a web service, whether using Web API or WCF, is a bit of overkill here and might not fit the purpose.
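Whichever scheduler you pick (Hangfire, a service timer, etc.), requirement 2 boils down to working out when "12 AM in the recipient's time zone" falls in UTC. A small sketch using only the BCL - the class and method names are mine:

```csharp
using System;

public static class SendTimeCalculator
{
    // Given the current UTC time and a recipient's time zone, return the
    // next local midnight expressed in UTC - i.e. when that recipient's
    // 12 AM email is due.
    public static DateTime NextLocalMidnightUtc(DateTime utcNow, TimeZoneInfo zone)
    {
        DateTime local = TimeZoneInfo.ConvertTimeFromUtc(utcNow, zone);
        DateTime nextMidnightLocal = local.Date.AddDays(1); // upcoming 00:00 local time
        return TimeZoneInfo.ConvertTimeToUtc(nextMidnightLocal, zone);
    }
}
```

Note that on daylight-saving transition days a local midnight can be skipped or ambiguous (ConvertTimeToUtc throws for invalid times), so production code would need to handle that case.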
I (or rather the company) have a simple watchdog application that monitors memory, disk space, some connections and other things related to a real-time database server. We use this application to detect errors and notify system administrators by email about faults. We also send a daily report on some KPIs and other data.
The problem is that with this solution someone needs to be logged in to the server at all times (it was created just for monitoring some problems we had, but it has become an application we'd like to develop further), so we'd like to convert it to a service to avoid that.
The application is written in C# on .NET Framework 3.5. I know there is WCF and other stuff now. I looked in VS 2012 and see they even removed the Windows Service project template that used to be there; there is now only WCF.
I don't have much experience in that field (.NET technology) since I've only done legacy C++ programming for the last 5 years at work.
Does anyone have a recommendation for the best way to do this?
If you move all of your logic into a class library, you can just create a new Service Project, and use that library within the service that's generated.
MSDN has a walkthrough on creating Windows services that covers the process step by step.
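As a sketch of the class-library idea: keep the periodic work behind a small class that either the existing console app or the service's OnStart/OnStop can drive. Everything here is illustrative, not the walkthrough's code:

```csharp
using System;
using System.Threading;

// All of the watchdog logic lives in the class library so the same code
// can be driven by the console app today and a Windows Service tomorrow.
public class WatchdogRunner : IDisposable
{
    private readonly Action _check;
    private Timer _timer;

    public WatchdogRunner(Action check)
    {
        _check = check;
    }

    // In the service project, call this from ServiceBase.OnStart.
    public void Start(TimeSpan interval)
    {
        _timer = new Timer(delegate { _check(); }, null, TimeSpan.Zero, interval);
    }

    // In the service project, call this from ServiceBase.OnStop.
    public void Stop()
    {
        if (_timer != null)
        {
            _timer.Dispose();
            _timer = null;
        }
    }

    public void Dispose()
    {
        Stop();
    }
}
```

The service wrapper then stays a few lines long, and you can keep testing the logic from the console app while you migrate.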
Basically, I have a new desktop application my team and I are working on that will run on Windows 7 desktops on our manufacturing floor. This program will be used fairly heavily as it gets introduced and will need to interact with our manufacturing database. I would estimate there will (eventually) be around 100 - 200 machines running this application at the same time.
We're lucky here, we get to do everything from scratch, so we define the database, any web services, the program design, and any interaction between the aforementioned.
As it is right now, our legacy applications just have direct access to a database, which is icky. We want to not do that with the new application.
So my question is, how do I do this? Vague, I know, but basically I have a lot at my disposal here, and I'm not entirely sure what the right direction to go is.
My initial thought, based on what I've perceived others doing, is to basically wall off the database by using webservices. i.e. all database interactions from the floor MUST occur through the webservices, providing a layer of security by doing much of the database logic behind closed doors. Webservice calls are then secured to individual users via Active Directory.
As I've found though, that has some implications of its own... We have to abstract the data before it reaches the application. There's still potential for malicious abuse by using webservice calls repeatedly to ruin or spam data. We've looked at Entity Framework and really like what it provides, but as best I can tell, that's going to be unavailable by the time we're at the application level in this instance.
It just seems like I can't come to a conclusion on what is "right". So, what is right?
Web services sound like the right approach. Implementing an SOA-oriented layer at the web service level gives you a lot of control over what happens to the data at the database server.
I don't quite share your doubts about repeated calls doing damage. First, you can keep an audit log of every single call, so detecting possible misuse is straightforward. You could also implement role-based security so that web service methods are exposed to users by role, which means that not everyone will be able to call just any method.
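For illustration, per-method role restrictions in ASP.NET Web API 2 look roughly like this; the controller and role names are invented:

```csharp
using System.Web.Http;

// Sketch of role-based exposure of web service methods; hypothetical
// controller for the manufacturing-floor scenario.
public class PartsController : ApiController
{
    // Read access for anyone on the floor.
    [Authorize(Roles = "FloorOperator,Supervisor")]
    public IHttpActionResult Get(int id)
    {
        return Ok(); // look up and return the part here
    }

    // Destructive operations restricted to supervisors.
    [Authorize(Roles = "Supervisor")]
    public IHttpActionResult Delete(int id)
    {
        return Ok(); // remove the part here
    }
}
```

With Windows authentication enabled, those roles can map straight onto Active Directory groups.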
You could even secure your web services with forms authentication, so that authentication is done against any data source, not only Active Directory.
And one last thing: the application itself could be published as a ClickOnce application, so that it is downloaded and executed from a web page and automatically updates itself as you publish new versions.
If you need some technical guidance, I've blogged on that years ago:
http://netpl.blogspot.com/2008/02/clickonce-webservice-and-shared-forms.html
My suggestion, since you are greenfield, is to use an API wrapper approach with ServiceStack.
Check out: http://www.servicestack.net/ServiceStack.Northwind/
Doing that, you can use ServiceStack's authentication and abstract away your DB layer (because you could move to a different DB provider, change its location, provide queues for work items, etc.), and in time perhaps move your whole infrastructure to an internal intranet app.
Plus ServiceStack is incredibly fast, interoperable with almost any protocol you throw at it, and runs on Mono, so you are not stuck with an MS backend that could be very expensive.
My two cents. :)
First of all, this question is not appropriate for Stack Overflow; you might get close votes really quickly.
Second, you may want to have a look at WCF RIA Services for this.
These will allow you to create basic CRUD operations for all your entities, and things like that.
I've never used them myself, so I'm not sure what the potential issues might be.
Otherwise, just do what we did:
Create generic (<T>) interfaces, services, contracts and everything. This will allow you to adapt the CRUD functionality in your services, DAOs, ViewModels and so on to any entity type.
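For example, the generic contract might look like the following; the shape is illustrative, not the exact interface we used, with a trivial in-memory implementation thrown in for testing:

```csharp
using System.Collections.Generic;
using System.Linq;

// A generic CRUD contract that any entity type can reuse.
public interface IRepository<T>
{
    void Add(T entity);
    IEnumerable<T> GetAll();
    void Remove(T entity);
}

// Simple in-memory implementation, handy for unit tests or prototyping
// before the real DAO is wired up.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> _items = new List<T>();

    public void Add(T entity) { _items.Add(entity); }

    public IEnumerable<T> GetAll() { return _items.ToList(); }

    public void Remove(T entity) { _items.Remove(entity); }
}
```

Your services and ViewModels then depend only on `IRepository<T>`, so swapping the storage backend never touches the CRUD plumbing.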
Imagine a site where the user logs in and can view their IP webcam (I believe I can do this bit). The problem is I want the site to do some processing on the incoming images/video even when the user is not logged in, i.e. run a motion detection algorithm and, if there is motion, log the incident in a database.
What would I need to learn to implement this project? I want to use ASP.NET and C#, so I assume:
Learn ASP.NET.
Learn C# (I'm a pretty competent desktop application developer).
MySQL database (is this the best kind of database to use in this situation?).
I've not used ASP.NET before, hence I have no idea what it can and can't do. I think I can get an ASP.NET site up and displaying a live feed, but how do I implement the bit that is always running in the background, processing stills from the live feed and logging incidents?
Any help is appreciated. Thanks in advance.
You probably want to use something like a Windows service to do the continuous processing, with the ASP.NET site talking to the database and displaying the feed.
ASP.NET is not really suited to doing background tasks.
MySQL should work fine and is free, so if this is not a work-related task it might be a good choice. I have a MySQL database here that contains close to 100GB of text, so it should handle what you are suggesting.
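Just to make the "processing stills" part concrete, here is a deliberately naive frame-comparison check of the kind the background service could run on successive grayscale frames; the thresholds and names are invented:

```csharp
using System;

public static class MotionDetector
{
    // Naive motion check: the fraction of pixels whose grayscale value
    // changed by more than pixelThreshold between two equal-sized frames.
    public static bool HasMotion(byte[] previous, byte[] current,
                                 int pixelThreshold = 25,
                                 double minChangedFraction = 0.01)
    {
        if (previous.Length != current.Length)
            throw new ArgumentException("Frames must be the same size.");

        int changed = 0;
        for (int i = 0; i < previous.Length; i++)
        {
            if (Math.Abs(previous[i] - current[i]) > pixelThreshold)
                changed++;
        }
        return (double)changed / previous.Length >= minChangedFraction;
    }
}
```

When `HasMotion` returns true, the service would write an incident row to the database; real detectors also smooth out noise and lighting changes, which this sketch ignores.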
For the web site and database you're on the right track; ASP.NET and MySQL will work just fine for the type of project you are describing. However, the processing bit doesn't fit very well into the ASP.NET model.
I would recommend that you think about creating a Windows service to do whatever processing you need. It sounds like you want your processor to work on remote video streams, so you'll need to consider how you'll get those live streams to your service and how many concurrent streams you could realistically process.
Perhaps it may make sense to have a client application or service that your users run locally, which would ping your hosted service when it detects movement? In that case you'll likely want to look at hosting a WCF service, which can be done in IIS or in any standalone application (such as the aforementioned Windows service).