How to get data generated by a biometric machine into ASP.NET - C#

I have a client requirement in which the client wants to get the output data of a biometric machine and process it in ASP.NET C#. How do I get data from the biometric machine and process it in ASP.NET?

Your question is very generic, so here's a very generic answer.
Generally speaking, processing data in ASP.NET consists of:
Creating controllers with HttpGet and HttpPost methods
Saving data into a data store (e.g. a database)
There are many questions you need to ask before you come up with a good design. For example:
How much data?
How often?
How would the data be consumed (via an API by a 3rd-party app? published somewhere?)
Would you be using an existing datastore?
What authentication and authorization requirements need to be met?
(Many, many more questions.)
One place to find samples for ASP.NET is https://github.com/aspnet/samples
As an example, if your data comes in as JSON, you could look at JsonUploadSample:
This sample illustrates how to upload and download JSON to and from an ApiController.
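In the same spirit, here is a minimal sketch of an ASP.NET Core controller that accepts a JSON reading posted by a device. The model and route names are hypothetical placeholders, not from the sample; adapt them to whatever format your biometric machine actually emits.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Hypothetical shape of one reading; real devices will differ.
public class BiometricReading
{
    public string DeviceId { get; set; }
    public DateTime Timestamp { get; set; }
    public string Payload { get; set; }   // raw reading, format depends on the machine
}

[ApiController]
[Route("api/[controller]")]
public class ReadingsController : ControllerBase
{
    [HttpPost]
    public IActionResult Post([FromBody] BiometricReading reading)
    {
        // TODO: validate the reading and save it to your data store here
        return Ok();
    }
}
```

The `[FromBody]` binder deserializes the incoming JSON into the model for you, so the controller only has to decide where the data goes next.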
Good luck!

Related

How to use the data from get request in a post request?

I’m kind of new to APIs and currently working on a project using the Spoonacular API. I’m trying to build a meal planner website. At the moment, I’m working on the “Add To Meal Plan” feature, which is a POST method. To get the information for the POST method, I think I have to use a GET method to retrieve recipe information from the API using a recipe ID. I’m using TempData to store the information I get back from the GET method so I can use it in my POST method. Is this the most efficient way to be doing this? Or is it better to have my GET and POST requests in the same method so I don’t have to store anything?
Currently, I’m using TempData to store the recipe information. It works but just not sure if this is the most efficient way to do this. I’m storing an object that I’ve serialized.
Using TempData is an acceptable approach for storing temporary data that needs to be passed from one request to another. It's part of the ASP.NET Core framework and provides a simple way to store values between multiple requests.
However, depending on your specific use case, there might be other ways to store this information that are more efficient. For example, if you want to store the recipe information for multiple users and persist it between sessions, you could consider using a database or a caching solution.
If the information you're storing is specific to a single user session, then TempData might be the best choice as it's relatively lightweight and easy to use.
In terms of combining the GET and POST requests into a single method, it's not necessary to do so. However, it could be more efficient in terms of network requests and server load, since it reduces the number of round trips needed to retrieve the required information and removes the need to store intermediate state.
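Since TempData can only hold simple types, the serialize-then-store pattern you describe is the usual approach. A small sketch of that round trip, assuming a hypothetical `Recipe` model and `System.Text.Json`:

```csharp
using System.Text.Json;

// Hypothetical recipe model -- substitute your own fields.
public class Recipe
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class TempDataDemo
{
    // In the GET action:  TempData["Recipe"] = Pack(recipe);
    public static string Pack(Recipe r) => JsonSerializer.Serialize(r);

    // In the POST action: var recipe = Unpack((string)TempData["Recipe"]);
    public static Recipe Unpack(string json) => JsonSerializer.Deserialize<Recipe>(json);
}
```

One thing to watch: TempData is cleared after it is read, so if the POST can fail and be retried, use `TempData.Peek("Recipe")` to read without removing the entry.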
Tell me if this helps :)

Retrieve data from API of different source systems using C#

I am here for your help. This is my first time handling a task involving APIs, SAP/SOAP, and web services. Sorry, as this is a long post. Let me explain the workflow.
XYZ Server calls the API Application (which I need to develop) to obtain the UserIDs.
These UserIDs will be used to retrieve the data from API of 3 different source systems (ABC, PGS, KGT). ABC and PGS are using RestAPI while the KGT is using SOAP.
The retrieved data will be stored in memory.
The API application will then insert all the in-memory data into the XYZ server.
Here's my question:
Is it possible to use only VS Code for the development?
There's no database provided. How am I able to store the retrieved data into the memory?
Based on the workflow, is it possible to develop the API application in just a week? Given that I am only a newbie?
I just need your ideas on how I am going to start the development. Currently, I already have a method to get the user ID and to insert the data back into the XYZ server.
Note: The API application has no UI. It acts as a middleman, transferring data from other systems to the XYZ system.
Appreciate your response.
I don't know if it helps but I will try to help you.
First, I think it is totally possible to use only VS Code, as you are just coding an API.
Second, have you thought about using an in-memory database like Redis?
Third, I think it depends. It is possible to do that in a week, at least for more experienced people, but as a "newbie", as you said, some more time might be needed.
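On the in-memory point: if the data only needs to live for the duration of one run of the application, you may not need Redis at all. A sketch of a minimal in-process store, with illustrative names (for anything that must survive a restart or be shared across instances, an external store like Redis is the better fit):

```csharp
using System.Collections.Concurrent;

// Thread-safe in-process store keyed by user ID; names are illustrative.
public class InMemoryStore
{
    private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _data = new();

    // Append one retrieved record (e.g. a raw API response) for a user.
    public void Add(string userId, string record) =>
        _data.GetOrAdd(userId, _ => new ConcurrentQueue<string>()).Enqueue(record);

    // Read back everything collected for a user so far.
    public IReadOnlyCollection<string> Get(string userId) =>
        _data.TryGetValue(userId, out var records) ? records : new ConcurrentQueue<string>();
}
```

`ConcurrentDictionary` and `ConcurrentQueue` keep this safe even if the three source systems are queried in parallel.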
Hope it helps ^^

Best practice for pulling bulk data from api and storing in database [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have written a small application in ASP.NET Core to create and manage collections of cards for a collectable card game. I currently have a version that successfully downloads bulk card data via API call, parses the JSON response, and loads it into local SQL Server database. I then use the local data to add the cards to the collections, lookup prices, etc. As an academic exercise, I'm deliberately overcomplicating the design of this as if it were a large scale enterprise application since I really have no reason to build it other than to learn more about programming, so I'm wondering about best practices for something like this?
Currently I have the application broken into four projects: an API client for pulling the card data from the external API, a data access/domain layer using EF Core and SQL server, a service layer that orchestrates everything, and a Blazor Server UI. The main thing that I'm struggling with is that my service layer is dependent on both the API client and the DBContext, so I'm wondering if there's a good way to consolidate the dependencies, since the data from both sources are mapped to the same domain objects.
From what I've been reading, it seems like setting up a repository would be a good option and is common when there are multiple data sources being utilized. I have a 2nd version with repositories for accessing the local database but I'm not sure how to introduce the external API calls into this version. I think I could create separate implementations of the Card Repository Interface, one to access my local SQL database and another to access the external API but I'm not sure how the application would know when I need one or the other if I'm using dependency injection.
For example, periodically I want to check for updated card data from the external API and update my database with the new data, but for the most part I'll be reading the card data from the local database for managing the collections. Any advice on how to approach this? I can give code examples if needed. Thanks.
Your service layer should not be dependent on the API or DBContext, instead (as you have researched already) you would have a repository that accesses the data for your service layer. The calls to the database and to other external APIs should also go within your repositories, since your service layer should not know anything about how the data is accessed.
Your repositories should be separated by business objects, not necessarily tables themselves - however I would definitely separate each external 3rd party API into its own repository.
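One way to resolve the "how does DI know which one I need" problem from the question is to not share a single interface at all: give the read path and the sync path separate abstractions. A sketch of that separation, with all names illustrative:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public record Card(string Id, string Name);

// Local SQL data -- what the collection-management features read from.
public interface ICardRepository
{
    Task<IReadOnlyList<Card>> GetAllAsync();
}

// External API -- only the periodic sync job depends on this.
public interface ICardSourceRepository
{
    Task<IReadOnlyList<Card>> FetchLatestAsync();
}

// Registration (in Program.cs):
//   services.AddScoped<ICardRepository, SqlCardRepository>();
//   services.AddScoped<ICardSourceRepository, ExternalApiCardRepository>();
//
// The sync service takes both interfaces; everything else takes only
// ICardRepository, so the container never has to guess which one you meant.
```

Since the two sources genuinely serve different purposes (authoritative reads vs. periodic refresh), modeling them as different interfaces is usually clearer than swapping implementations behind one interface.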
For a logistics company I worked at, I had implemented a dynamic design where I was able to create a well structured repository pattern which would pull data either from databases internal to our company (transportation, and accounting systems), or from 3rd party APIs (such as flight or freight data) and store queried or response data into either our SQL Server or Oracle databases. This isn't too hard to accomplish.
For the repositories that make calls to a 3rd-party API, an important lesson I learned was to design a versatile pattern for the API request calls. This was done by creating a separate APIClient project in which you create the HttpClient and add headers, query parameters, or the POST/PUT/PATCH body, plus the other items required for your different types of requests. You also have to account for the logic needed to obtain authentication tokens for subsequent API calls.
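A minimal sketch of what such a versatile request pattern might look like; the builder shape and names here are one possible design, not a prescribed one:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;

// Illustrative request builder: one place to attach headers, query
// parameters, and an auth token before handing off to HttpClient.
public class ApiRequestBuilder
{
    private readonly HttpMethod _method;
    private readonly string _path;
    private readonly Dictionary<string, string> _query = new();
    private readonly Dictionary<string, string> _headers = new();

    public ApiRequestBuilder(HttpMethod method, string path)
    {
        _method = method;
        _path = path;
    }

    public ApiRequestBuilder WithQuery(string key, string value) { _query[key] = value; return this; }
    public ApiRequestBuilder WithHeader(string name, string value) { _headers[name] = value; return this; }
    public ApiRequestBuilder WithBearerToken(string token) => WithHeader("Authorization", $"Bearer {token}");

    public HttpRequestMessage Build()
    {
        var uri = _path;
        if (_query.Count > 0)
            uri += "?" + string.Join("&",
                _query.Select(kv => $"{Uri.EscapeDataString(kv.Key)}={Uri.EscapeDataString(kv.Value)}"));
        var request = new HttpRequestMessage(_method, uri);
        foreach (var (name, value) in _headers)
            request.Headers.TryAddWithoutValidation(name, value);
        return request;
    }
}
```

Each 3rd-party repository can then compose its calls from the same builder instead of repeating header and auth plumbing.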
Another thing to consider, which I also took advantage of was the webhooks that the 3rd party APIs offered, that way I didn't have to constantly poll the data to see what changed, instead they sent me the data when things changed on their side.
There are many ways to approach your pet project, hopefully I've given you some useful ideas.
Try limiting the number of sources you are pulling from that require this logic to execute. Could you combine your data?
Try streamlining your logic.
Check this link out. It's for scanning changes in an SQL server.
Check for changes to an SQL Server table?

Storing and Reporting on data retrieved from an API

I have a very high-level, generic question related to the retrieval of data with an API, the storage of that data, and the ability to report off of the data. My background is primarily on the database side with a specific focus on reporting out of Crystal. That being said, I'm fairly green when it comes to APIs, SDKs, .NET, and Visual Studio, so feel free to respond as if I'm 5.
I've attached a quick mock-up of the application architecture for context. The vendor we're working with touts their APIs as the best way to retrieve data for reporting purposes, but I'm struggling with visualizing the layer between raw API data retrieval and a reporting environment. Having not worked with API data retrieval in the past, can someone explain to me in layman's terms how this process would work?
1.) How would I go about retrieving data from the app server via the vendor's API? Is it as simple as creating a Visual Studio project and coding the API call?
2.) Let's say I'm able to retrieve the data with an API call; what is the best method for storing / reporting against that data? Is it possible to develop real-time reports out of Visual Studio with API call data?
3.) If #2 is not possible, the data pulled from the API calls will have to be stored somewhere. Is it possible to code the API calls to write results directly to a separate reporting datamart?
Again, I apologize if these questions are extremely elementary. I'm basically looking for context around the scenario to identify how close or rather far off I am in my understanding.
Any help is appreciated.
Thanks!
I'm going to try to answer each question at as high a level as possible:
1) Retrieving the data via the API is fairly simple: you need to code the call with a proper request and handle the response. For example, if the API is exposed as a REST web service, all you need to do is make an HTTP call to the endpoint according to the service definition.
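As a concrete illustration, that HTTP call can be just a few lines of C#. The URL below is a placeholder for the vendor's actual endpoint:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiFetch
{
    // Reuse one HttpClient for the application's lifetime.
    private static readonly HttpClient Client = new();

    // Fetch the raw response body (e.g. JSON) from a REST endpoint.
    public static async Task<string> GetAsync(string url)
    {
        using var response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();   // throw on non-2xx status
        return await response.Content.ReadAsStringAsync();
    }
}

// Usage: var json = await ApiFetch.GetAsync("https://vendor.example.com/api/data");
```

From there, the returned string is typically deserialized into objects before being written to your reporting store.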
2) You said that the report requirement is probably an SSRS report. How to proceed depends on how you want to handle this data. One approach could be to store it in a database and then create a report server project that generates reports from this data.
3) Yes; depending on the data format you can do whatever you want with it, from exporting a CSV file to storing it in a dedicated database.
I hope this was useful in some way, as I'm not super experienced in report generation but have worked with different APIs handling data.

WCF service (REST or SOAP or Web API) that accepts any data in any structure

I am trying to build a service/API. In my scenario there will be lots of devices that monitor health. For example, a heartbeat monitor will monitor heartbeats and send out heartbeat data. A pulse rate monitor will monitor pulses and send out data, and similarly you can imagine a lot of devices that send data to my service in their own format. The service/API I am trying to build should be able to accept data in any format and store it in an Azure table or blob storage. Any pointers on how to get started with this design would be helpful. I am also not sure whether ASP.NET Web API or WCF would be a better choice.
Also, there is no business logic in the service I have mentioned. From the service's perspective, I don't have to know the data's structure. All I have to do is persist it and retrieve it when asked for. Nothing more.
Considering that you're (eventually) going to have to know what that data is, I would recommend to define each of them as you encounter them and then process that data.
Pay now or pay later.
Some things to think about ...
You are still going to need to be able to identify which data is which, so the minimum you can get away with is a structure with an Id or Name field and then the string data you mention which would contain the rest of the any format data.
So you either need the submitter to understand that format and send it in with the Id or Name field, or you need to set up multiple methods for the users to call when submitting data to the service like:
.addHeartBeat()
.addPulseRate()
etc., and have that method add the Id or Name to your data. Then you would need a corresponding way to request that data back, using the same structure (either the user needs to know the Id, or they call a specific method). You also need to consider what other criteria the requester might use, like date ranges or a particular patient; all those possible filter fields need to be kept separate from the blob data in order to query it efficiently.
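Either way, the stored record ends up looking like a small envelope: the fields you will filter on are strongly typed, while the device-specific payload stays opaque. A sketch with hypothetical names:

```csharp
using System;

// Illustrative storage envelope: metadata you will query on
// (device type, patient, timestamp) is kept separate from the
// any-format payload the device sent.
public class DeviceReading
{
    public string Id { get; set; } = Guid.NewGuid().ToString();
    public string DeviceType { get; set; }   // e.g. "HeartBeat", "PulseRate"
    public string PatientId { get; set; }
    public DateTime RecordedAt { get; set; }
    public string RawData { get; set; }      // opaque device data, stored as-is
}

// Hypothetical .addHeartBeat(...) / .addPulseRate(...) methods would each
// set DeviceType before persisting, so queries by type, patient, or date
// range never need to parse RawData.
```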
Looks like you want to build a file upload service that accepts any kind of file and stores it in blob storage (literally, you can treat everything you want to monitor as a file). First of all, consider generating a SAS (shared access signature) and allowing the client to upload the files to blob storage directly. This can save some effort and give better performance. If that's not an option, then use ASP.NET Web API. While WCF also supports REST, in the long run it is recommended to use ASP.NET Web API to build new RESTful services. WCF can be used to build SOAP services, but SOAP requires the client to send SOAP envelopes rather than arbitrary data.
Best Regards,
Ming Xu.
