Expose SQL Server tables and their data as OData - C#

Is there a tool or framework to expose SQL Server tables and their data as OData? Consider that the tables are generated dynamically, so using an OR mapper such as Entity Framework is not an option.
We need a mechanism to expose the data as OData without generating C# classes.

There are a number of options here.
From a coding perspective, you can build something generic. .NET (C#) wraps OData support around the IQueryable interface and the [EnableQuery] attribute. The pseudocode below demonstrates how you can do this generically with Web API 2; a working demo can be stood up in minutes:
[EnableQuery(PageSize = 100)]
public IHttpActionResult Get()
{
    IQueryable data = /* get any data from the DB as IQueryable */;
    return Ok(data, data.GetType());
}
// Helper: builds an OkNegotiatedContentResult<T> for a type known only at runtime
private IHttpActionResult Ok(object content, Type type)
{
    var resultType = typeof(OkNegotiatedContentResult<>).MakeGenericType(type);
    return (IHttpActionResult)Activator.CreateInstance(resultType, content, this);
}
Keep in mind that the filtering etc. can end up being performed in memory, so pushing as much of the filtering as possible back to the database will give better performance. I have mainly used this with strongly typed objects, where Entity Framework pushes all the filtering to the DB - very powerful and very quick. Keep in mind that OData is very flexible, so you need to optimise your database indexes and queries for your common use cases.
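For contrast, here is a minimal strongly-typed sketch (the Product entity and _context EF context are assumptions): because the returned IQueryable is deferred, [EnableQuery] composes $filter/$orderby/$top into the expression tree and the database performs the filtering.
[EnableQuery(PageSize = 100)]
public IQueryable<Product> Get()
{
    // Deferred EF query: the OData options are translated into SQL, not applied in memory
    return _context.Products;
}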
From a database perspective, if you are running in Azure, you have OData a few clicks away. See this article. Further, Azure Table Storage's raw format is OData from the get-go. Beware that there may be limitations; for example, I think OData results from SQL Azure are paged to 50 rows to avoid denial-of-service scenarios that thrash your database, especially for OData queries over non-indexed data.
If your SQL Server is on premises, I don't think there is anything out of the box; however, there are a number of vendors that offer connectors. Here is an example from a quick Google search. I have no affiliation with them.


Opening up OData

I'm reviewing OData again as I would like to use it in a new REST project with EF, but I have the same concerns I had a few years back.
Exposing a general IQueryable can be quite dangerous. Restricting potentially expensive queries must be done elsewhere, at the DB or connection level.
OData doesn't allow any interception/customisation of its behaviour by developers, as it sits outside the interface.
OData doesn't play well with DI in general. While it is possible to DI an alternative IQueryable, you can't intercept the OData calls to check, amend or modify them.
My suggestion is that the tool be broken down into more distinct elements to allow far greater customisation and re-use. Break open the black box :) It would also be better in terms of single responsibility. Would it be possible to have components that did the following?
Expression generators from URLs. These would convert OData URL extensions into typed expressions usable with an IQueryable but independent of it - generating an Expression<Func<T, bool>> for a where clause, for example. This would be a super useful standalone component and would support OData URL formats being used more widely as a standard.
An EF adaptor to attach the expressions to an EF context, or for use in any other DI'ed code. Rather than exposing a public IQueryable, the service can encapsulate an interface and still get the benefits of OData functionality: REST GET -> expression generation -> map to IQueryable.
This approach would allow developers to intercept the query calls and customise the behaviour if required while maintaining the ease of use for simple cases. We could embed OData and EF within repository patterns where we add our own functionality.
There is a lot of misunderstanding in your post; it's not really well suited to this site, but it is a recurring line of speculation that does need to be addressed.
OData doesn't play well with DI in general. While it is possible to DI an alternative IQueryable, you can't intercept the OData calls to check, amend or modify them.
This statement is just not accurate, on both the DI topic and query interception. Going into detail is too far out of scope, as there are many different ways to achieve this; it would be better to post a specific scenario that you are challenged by, and we can post a specific solution.
Exposing a general IQueryable can be quite dangerous. Restricting potentially expensive queries must be done elsewhere, at the DB or connection level.
Exposing a raw IQueryable as a concept has some inherent dangers if you do not put in any restrictions, but in OData we are not exposing the IQueryable to the public at all, not in the traditional SDK or direct API sense. Yes, your controller method can (and should) return an IQueryable, but OData parses the Path and Query from the incoming HTTP request to compose the final query to serve the request without pre-loading data into memory.
The inherent risk with IQueryable arises when you allow external logic to compose or execute a query that is attached to your internal data context, but in OData the HTTP host boundary prevents external operators from interacting with your query or code directly, so this risk is not present due to the hosting model.
OData gives you granularity over which fields are available for projecting, filtering or sorting, and although there is rich support for composing extended queries including functions and aggregates, the IQueryable expression itself does not pass the boundary of the executable interface. The IQueryable method response is itself fundamental to many of the features that drive us to choose OData in the first place.
However, you do not need to expose IQueryable at all if you really do not want to! You can return IEnumerable instead, but by doing so you will need to load enough data into memory to satisfy the query request, if you want to fulfil it at all. There are extension points to help you do this, as well as tools to parse the URL query parameters into simple strings or an expression tree that you can apply to your own data models if you need to.
The EnableQueryAttribute is an Action Filter that composes a LINQ query over the results from your controller endpoints to apply any $filter criteria, $select/$expand projections, or even $apply aggregations.
OData doesn't allow any interception/customisation of its behaviour by developers, as it sits outside the interface.
EnableQueryAttribute is about as close to a black box as you can find in OData, but the OData libraries are completely open source; you can extend or override the implementation, or omit the attribute altogether. If you omit it, you will then need to process and format the response to be OData compliant yourself. The specification allows for a high degree of flexibility; the major caveat is that you need to make sure the $metadata document describes the inputs and outputs.
The very nature of the ASP.NET request processing pipeline means that we can inject all sorts of middleware implementations at many different points; we can even implement our own custom query options, or pass the query through the request body if we need to.
If your endpoints do NOT return IQueryable, then the LINQ composition in the EnableQueryAttribute can only operate over the data in the IEnumerable feed. A simple example of the implication of this is when the URL query includes a $select parameter for a single field, something like this:
http://my.service.net/api/products(101)?$select=Description
If you are only exposing IEnumerable, then you must manually load the data from the underlying store. You can use the ODataQueryOptions class to access the OData arguments through a structured interface; the specific syntax will vary based on your DAL, ORM and the actual model, of course. However, like most Repository or MVC implementations, many implementations that do not use IQueryable will default to simply loading the entire object into memory instead of only the specifically requested fields; they might end up loading the results of this comparative SQL query:
SELECT * FROM Product WHERE Id = @Id
If this Product has 20 fields, then all that data will be materialised into memory to service the request, even though only 1 field was requested. Even without using IQueryable, OData still has significant benefits here by reducing the bytes being sent across the wire to the client application. This reduces costs and also the time it takes to fulfil a request.
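As a hedged sketch of that manual route (LoadProducts is a hypothetical DAL helper; the property names are assumed from the Microsoft.AspNet.OData ODataQueryOptions API):
public IHttpActionResult Get(ODataQueryOptions<Product> options)
{
    // Read the parsed query through a structured interface
    string rawFilter = options.Filter?.RawValue;        // e.g. "Id eq 101"
    string rawSelect = options.SelectExpand?.RawSelect; // e.g. "Description"
    // Translate the arguments into your own data access call
    IEnumerable<Product> data = LoadProducts(rawFilter, rawSelect);
    return Ok(data);
}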
By comparison, if the controller method returned an IQueryable expression that had been deferred or not yet materialised, then the final SQL that gets executed could be something much more specific:
SELECT Description FROM Product WHERE Id = @Id
This can have significant performance benefits, not just in the SQL execution but in the transport between the data store and the service layer as well as the serialization of the data that is received.
Serialization is often taken for granted as a necessary aspect of API development, but that doesn't mean there is no room to improve the process. In the cloud age, where we pay for individual CPU cycles, there is a lot of wasted processing that we can reclaim by only loading the information that we need, when we need it.
Fully realising the performance gains requires selective data calls from the client. If the end client makes a call that explicitly requests all fields, then there should be no difference between OData and a traditional API approach, but with OData the potential is there to be realised.
If the controller is exposing a complex view rather than a traditional table, there is even more significance in supporting IQueryable. For custom business DTOs (views) that do not match the underlying storage model, we are often forced to compromise between performance practicalities and data structures. Without OData allowing the caller to trim the data schema, it is common for APIs to either implement some fully dynamic endpoints, or to see a sprawl of similar DTO models that have restricted scope or are potentially single purpose. OData provides a mechanism to expose a single common view that has more metadata than all callers need, while still allowing individual callers to retrieve only the sub-set that they need.
In aggregate views you can end up with individual columns that add significant impact to the overall query execution; in traditional REST APIs this becomes a common justification for having similar DTO models. With OData we can define the view once and give the callers the flexibility to choose when the extra data, which comes with a longer response wait time, should be queried and when it should not.
OData provides a way to balance between being 100% generic with your DTOs or resorting to single use DTOs.
The flexibility provided by OData can significantly reduce the overall time to market by reducing the iterative evolution of views and complex types that often comes up as the front-end development teams start to consume your services. The nature of IQueryable and the conventions offered by the OData standard mean that there is potential for front-end work to begin before the API is fully implemented.
This was a very simple and contrived example; we didn't even cover $expand or $apply, which can lead to very memory-intensive operations to support. I will however quickly mention $count: it is a seemingly simple requirement to return a count of all records for specific criteria, or for no criteria at all. An OData IQueryable implementation requires no additional code and almost zero processing to service this request, as it can be passed entirely to the underlying data store in the form of a SELECT COUNT(*) FROM...
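For example, a request like the following (hypothetical URL) composes down to a single aggregate query at the database:
GET http://my.service.net/api/Products/$count?$filter=Price gt 100
-- roughly equivalent SQL:
SELECT COUNT(*) FROM Product WHERE Price > 100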
With OData and the OData libraries, we get a lot of functionality and flexibility OOTB, but the default functionality is just the start; you can extend your controllers with additional functions, actions and views as you need to.
Regarding the Dangers of IQueryable...
A key argument against exposing IQueryable from the DbContext is that it might allow callers to access more of your database than you intended. OData has a number of protections against this. The first is that, for each field in the entire schema, you can specify whether the field is available at all, can be filtered, or can be sorted.
The next level of protection is that for each endpoint we can specify the overall expansion depth; by default this is 2.
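A hedged sketch of these controls, using the model-bound attributes from the Microsoft.AspNet.OData libraries (the type, fields and limits here are illustrative assumptions):
[Filter("Name", "Price")]  // only these fields may appear in $filter
[OrderBy("Name")]          // only Name may appear in $orderby
[Expand(MaxDepth = 2)]     // cap $expand nesting for this type
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}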
It is worth mentioning that it is not necessary to expose your data model directly through OData. If your domain model is not in line with your data model, it may be practical to expose only selected views or DTOs through the OData API, or only a sub-set of the tables in your schema.
For more discussion on DTOs and over/under-posting protection, have a read over this post: How to deal with overposting/underposting when your setup is OData with Entity Framework
Opening the Black Box
Expression generators from URLs. These would convert OData URL extensions into typed expressions usable with an IQueryable but independent of it - generating an Expression<Func<T, bool>> for a where clause, for example.
This is a problematic concept if you're not open to IQueryable... That being said, you can use open types and have a completely dynamic schema that you validate in real time, or one derived entirely from the query routes without validation. There is not a lot of published documentation on this, mainly because the scenarios where you would want to implement it are highly specific, but it's not hard to sort out. While out of scope for this post, if you post a question to SO with a specific scenario in mind, we can post specific implementation advice...
An EF adaptor to attach the expressions to an EF context, or for use in any other DI'ed code. Rather than exposing a public IQueryable, the service can encapsulate an interface and still get the benefits of OData functionality: REST GET -> expression generation -> map to IQueryable.
What you are describing is pretty close to how the OData context works. To configure OData, you need to specify the structure of the entities that the OData model exposes. There are convention-based mappers provided OOTB that can help you expose an OData model that is close to a 1:1 representation of an Entity Framework DbContext model with minimal code, but OData is not dependent on EF at all. The only requirement is that you define the DTO models, including the actions and functions; from this model the OData runtime is able to validate and parse the incoming HTTP request into queryable expressions composed from the base expressions that your controllers provide.
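A minimal sketch of that convention-based configuration in Web API 2 (the route name, prefix and Product DTO are placeholders):
// In WebApiConfig.Register(HttpConfiguration config)
var builder = new ODataConventionModelBuilder();
builder.EntitySet<Product>("Products"); // conventions infer keys and navigation properties
config.MapODataServiceRoute("odata", "api", builder.GetEdmModel());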
I don't recommend it, but I have seen many implementations that use AutoMapper to map the EF model to DTOs, and then the DTOs are mapped to the OData entity model. The OData model is itself an ORM that maps between your internal model and the model that you want to expose through the API. If this model has a significantly different structure or involves different relationships, then AutoMapper can be justified.
You don't have to implement the whole OData runtime, including the OData Entity Model configuration and inheriting from ODataController, if you don't want to.
The usual approach when you want to support OData query options in ASP.NET Web API 2 without fully implementing the OData API is to use the EnableQueryAttribute in your standard API; it is, after all, just an Action Filter, and an example of how the OData libraries are already packaged in a way that lets you implement OData query conventions within other API patterns.
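For example, a sketch on a plain ApiController (no OData route or $metadata document involved; Product and _context are assumptions):
public class ProductsController : ApiController
{
    [EnableQuery] // applies $filter/$orderby/$top/$select to the returned IQueryable
    public IQueryable<Product> Get()
    {
        return _context.Products; // deferred EF query
    }
}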
OData Url "$filter=id eq 1"
becomes:
Func<TestModel, bool> filterLambda = x => x.id == 1;
where TestModel is the 'implied' in some way (maybe this is the problem you refer to)
in terms of generating the code it's something like
Expression.Equal(Expression.PropertyOrField(ExpressionParam, "id"), Expression.Constant(1))
generalised into an OData expression parser
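Spelled out, a hand-rolled sketch of what such a parser would emit for that URL (TestModel is the hypothetical model from the comment, with an int id property):
ParameterExpression param = Expression.Parameter(typeof(TestModel), "x");
BinaryExpression body = Expression.Equal(
    Expression.PropertyOrField(param, "id"),
    Expression.Constant(1));
Expression<Func<TestModel, bool>> filter =
    Expression.Lambda<Func<TestModel, bool>>(body, param);
// usable against any source: queryable.Where(filter)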
Thanks for the reply. Your time is much appreciated. This isn't really an answer, rather a design discussion. Apologies if it's not appropriate here.
I'm no expert with this tech, so I may be missing options. Manually exposing IEnumerables instead of IQueryable seems to require much more coding, if I'm reading that suggestion correctly. It would also lead to service-side processing of the data after the database queries. The idea of custom actions on a custom IQueryable may also be worth some investigation.
An example of not playing well with DI: suppose we have an HTTP context user whose token imposes some limits on the queries they can run. We would like to take the user info from the HTTP context and restrict the DB context queries accordingly. Maybe the user can only see certain business units or certain clients. There are many other use cases.
It should be possible to append/amend queries with the extra user context before they are presented to the database. This is where the decomposition idea comes in. If OData could generate a structure (a lambda expression may not be good here either) that can be manipulated, we could have the best of both by manipulating the query before execution.
The $filter, $expand concepts could be added more generally to interfaces that would allow the database to be better encapsulated. The interface applies the filter behind the scenes. The OData implementation could make the filters available on the context rather than applying them to the results 'outside' of the controller.
It would be interesting to know what you mean by "a problematic concept". This model seems so natural to me. I'm amazed it doesn't work this way already.
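For what it's worth, the business-unit restriction described above can be composed before the OData options are applied - a minimal sketch, assuming a hypothetical GetBusinessUnitFromToken helper and EF entities:
[EnableQuery]
public IQueryable<Client> Get()
{
    // Restrict first, using the authenticated caller's context...
    int businessUnitId = GetBusinessUnitFromToken(User); // hypothetical helper
    IQueryable<Client> restricted =
        _context.Clients.Where(c => c.BusinessUnitId == businessUnitId);
    // ...then [EnableQuery] composes the caller's OData options on top and
    // EF translates the combined expression into a single SQL query
    return restricted;
}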

Can EF Core return IQueryable from Stored Procedure / Views / Table Valued Function?

We need to pass OData V4 query search and order-by clauses directly to the database.
Here is the case:
There are joins among tables, and we invoke (inline) table-valued functions using SQL to get the desired records.
OData where clauses need to be applied on the result set; then we apply pagination with Skip, Take and Order By.
We started with Dapper; however, Dapper supports only IEnumerable. Thus Dapper brings the entire record set back from the DB, and only then does OData pagination (Query Options ApplyTo) get applied, spoiling the performance gain :-(
[ODataRoute("LAOData")]
[HttpGet]
public IQueryable<LAC> GetLAOData(ODataQueryOptions<LAC> queryOptions)
{
using (IDbConnection connection = new SqlConnection(RoutingConstants.CascadeConnectionString))
{
var sql = "<giant sql query";
IQueryable<LAC> iqLac = null;
IEnumerable<LAC> sqlRes = connection.Query<LAC>(sql, commandTimeout: 300);
**IQueryable<LAC> iq = sqlRes.AsQueryable();
iqLac = queryOptions.ApplyTo(iq) as IQueryable<LAC>;
return iqLac;**
}
}
Most of the examples we see of stored procedure and view support apparently return List.
https://hackernoon.com/execute-a-stored-procedure-that-gets-data-from-multiple-tables-in-ef-core-1638a7f010c
Can we configure EF Core 2.2 to return IQueryable so that OData can further filter and then yield only the desired count, say 10?
Well, yes and no. You can certainly return an IQueryable, and it seems you're already doing so. And you can certainly further query that IQueryable via LINQ, in memory.
I think what you're really asking is whether you can further query at the database level, such that only the ultimate result set you're after is returned from the database. The answer to that is a hard no. The stored procedure must be evaluated first. Once you've done that, all the results have been returned from the database. You can further filter in memory, but it's already too late for the database.
That said, you should understand that OData is fundamentally incompatible with the idea of using something like a stored procedure. The entire point is to describe the query via URL parameters - the entire query. You could use a view instead, but stored procedures should not be used alongside OData.
EF cannot return IQueryable from a stored procedure because the database engine itself doesn't provide a mechanism for selectively querying or manipulating the execution of the script; you can't, for instance, do the following in SQL:
SELECT Field1, Field2
EXEC dbo.SearchForData_SP()
WHERE Field2 IS NOT NULL
ORDER BY Field3
The stored procedure is a black box to the engine, and because of this there are certain types of expressions and operations that you can use in SPs that you cannot use in normal set-based SQL queries or expressions; for instance, you can execute other stored procedures. SPs must be executed in their entirety before you can process the results.
If the database engine itself cannot do anything to optimise the execution of stored procedures, it's going to be hard for your ORM framework to do so.
This is why most documentation and examples around executing SPs via EF return a List, as that makes it clear that the entire contents of the list are in memory; casting that List to IQueryable with .AsQueryable() doesn't change the fact that the data is held within that List object.
There are joins among tables, and we invoke (inline) table-valued functions using SQL to get the desired records.
What you are describing here is similar to what OData and EF try to offer you: mechanisms for composing complex queries. To take full advantage of OData and EF, you should consider replicating or replacing your TVFs with LINQ statements. EF is RDBMS agnostic, so it tries to use and enforce generic standards that can be applied to many database engines, not just SQL Server. When it comes to CTEs, TVFs and SPs, the implementation and syntax in each database engine become a lot more specific, even to specific versions in some cases. Rather than trying to be everything to everyone, the EF team has to enforce some limits so they can maintain the quality of the services they offer us.
There is a happy medium that can be achieved, however, where you can leverage the power of the two engines:
Design your SPs so that the filtering variables are passed through as parameters, and restrict dependence on stored procedures to scenarios where the structure of the output is as efficient as you would normally need. You can then expose the SP as an Action endpoint in OData, and the caller can pass the parameter values directly through to the SP (see the sketch after this list).
You can still wrap the response in an IQueryable<T> and decorate the action with the EnableQuery attribute; this will perform in-memory $select, $expand and simple $filter operations, but the service will still load the entire recordset into memory before constructing the response payload. This mechanism can still reduce bandwidth between the server and the client, just not between the database and the service layer.
Make different versions of your SP if you need to have different result structures for different use cases.
Use TVFs or views only when the query is too complex to express easily in LINQ, or when you need Table Hints, CTEs, recursive CTEs or window functions that cannot be easily replicated in LINQ.
In many cases where (non-recursive) CTEs are used, the expression can be easier to construct in LINQ.
To squeeze the most performance from indexes you can use Table Hints in SQL; because we don't have tight control over how our LINQ expressions are composed into SQL, it can take a lot of work to construct some queries in a way the database can optimise for us. In many scenarios, as with CTEs above, going through the process of rewriting your query in LINQ can help avoid situations where you would traditionally have used table hints.
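A hedged sketch of points 1 and 2, reusing the Dapper approach from the question (the SP name, its parameter, and the EDM action registration are assumptions):
[HttpPost]
[EnableQuery]
public IQueryable<LAC> Search(ODataActionParameters parameters)
{
    using (IDbConnection connection = new SqlConnection(RoutingConstants.CascadeConnectionString))
    {
        // Formal parameters go to the SP, so the database does the heavy filtering
        IEnumerable<LAC> rows = connection.Query<LAC>(
            "dbo.SearchLAC_SP", // hypothetical stored procedure
            new { term = (string)parameters["term"] },
            commandType: CommandType.StoredProcedure,
            commandTimeout: 300);
        // Dapper buffers results by default; [EnableQuery] can still apply
        // $select/$filter/$top over the in-memory set
        return rows.AsQueryable();
    }
}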
There are limits: when you want or need to take control using specialised SQL Server concepts that EF doesn't support, you are making a conscious decision to have one and not the other.
I do not agree that OData and stored procedures are fundamentally incompatible; there are many use cases where the two get along really well, though you have to find the balance. If you feel the need to pass query options such as $select, $expand, $filter, $top, $skip... through to your stored procedure, either change your implementation so it is constructed purely in LINQ (so no SP), or change the client implementation so that you pass formal parameters that can be handled directly in the SP.

Dynamic datatypes, Cosmos DB and C#

We have a requirement where a user can create a questionnaire, and in the questionnaire the user can select any type: it could be date, number, list or choice. To store questionnaire responses in a relational database we keep everything as strings, but that brings a lot of typecasting problems when we report on the data, fetch it back through the API, or feed it into a reporting system.
I was thinking that if we go the NoSQL route, where the store is schemaless, we can create dynamic types and store them as JSON, so that the types are maintained when we retrieve the data.
The question is: what is the best way to define objects in C# to read from and insert into a NoSQL database, or is there a better approach?
If you are planning to use the SQL API of CosmosDB then you have two options.
The Azure CosmosDB SDK for .NET has all the API logic you need in order to do all your CRUD operations in CosmosDB. It also supports mapping from the NoSQL document to your POCO objects.
Your other option is to use Cosmonaut. It is an ORM for CosmosDB that makes all of the above really easy for you. It supports everything the .NET SDK supports, plus some extra features around collection sharing and easy async operations that make integration with CosmosDB really simple.
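As a hedged sketch of the first option (the database, container and property names are made up, and partition key configuration is omitted), keeping the answer value typed as object lets the JSON document preserve the native type:
public class QuestionResponse
{
    public string id { get; set; }    // Cosmos documents require a lowercase id
    public string QuestionId { get; set; }
    public object Value { get; set; } // serialised as its actual JSON type
}

// Inside an async method:
var client = new CosmosClient(endpoint, key);
var container = client.GetContainer("surveys", "responses");
await container.CreateItemAsync(new QuestionResponse
{
    id = Guid.NewGuid().ToString(),
    QuestionId = "q1",
    Value = 42 // or "red", or new[] { "a", "b" }, or DateTime.UtcNow
});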

REST web service - is entity framework an overkill?

I'm familiar with EF and it looks pretty cool. As far as I can see, it is basically LINQ to SQL with extra functions (like caching, automatic connection handling and so on). However, in my opinion EF is useful for those applications that directly communicate with the model data (~persistence).
In the case of writing a RESTful web service, we are reading and writing objects (for example) in JSON format. The application calls the web service with some data, and it returns data back.
That's why I'm actually thinking of not using EF, because it looks like overkill to me. Since I'm not planning to expose the actual model, I would use DTOs instead (both as input and output of a web service call). This means that I have to do the mapping to the underlying model anyway, so EF would be used as a LINQ to SQL wrapper.
Is there anything I'm missing? Is there any feature that would be useful when writing a RESTful web service? Is there any benefit to using EF instead of LINQ to SQL?
So the logic here is that you aren't exposing your entities past the data layer, so EF is pointless.
I never expose my EF entities past the business layer, just one layer up from the data layer. I always project them to ViewModels and Models, which are just POCOs. I've seen this approach in lots of projects.
Rarely do I actually use the entity change-tracking features. By the time a GET/POST round trip has occurred, it doesn't make sense to re-query the entities on the POST just so you can update them via change tracking. Instead, a direct update makes more sense and avoids an unnecessary round trip to the database.
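A minimal sketch of that direct update, assuming an EF context with a Cars set and using EF's attach-and-mark-modified idiom:
// Map the incoming DTO straight onto a stub entity...
var car = new Car { Id = dto.Id, Name = dto.Name };
// ...then attach it and mark it modified: one UPDATE, no preliminary SELECT
db.Cars.Attach(car);
db.Entry(car).State = EntityState.Modified;
db.SaveChanges();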
My point is that, in the usage I've most commonly seen, the EF models are not exposed past more than one layer. This ensures View/UI layers don't accidentally modify EF state or trigger lazy loading (which is usually disabled).
However, I still get to leverage the great EF/DB mapping layer and EF LINQ queries, which are by far the greatest features of EF.
Most alternatives, such as Dapper, are just that: a framework for executing queries.
So I wouldn't fall back to plain ADO.NET or an older query technology just because you aren't using all the features of EF. You should still use a modern query framework such as EF or Dapper, simply because you are still executing queries. Not exposing the entities doesn't change that.

How to make data access layer for SQL Server and MySql

I'm building a data access layer and need to be able to switch between two providers in different environments.
How do I structure this? I'm using a repository pattern and have e.g. a CarRepository class and a Car class. The CarRepository class is responsible for saving, deleting and loading from the database.
I have a Database class responsible for connecting to the database and executing the query (sending a SqlCommand for SQL Server). The SQL syntax for the underlying databases differs, and the parameter syntax is also different (SQL Server uses @ and MySQL uses ?).
I would like an approach where I can make the least effort in making my application run on both platforms.
The obvious method is making a MySqlCarRepository and a SqlServerCarRepository, but that introduces a heavy maintenance burden. Are there any good approaches to this scenario? Maybe keeping a static class with static strings containing the SQL statements for the different SQL flavours? (And what about parameter syntax then?)
Any advice is welcome
(Please note that ORM (Nhibernate, Linq2Sql etc) is not an option)
The approach I follow is to first of all use the ADO.NET provider factories (DbProviderFactories) to abstract the data access implementation, so I use IDbConnection and so forth in the code.
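A minimal sketch of that abstraction (the provider invariant name and connection string would come from per-environment configuration):
var factory = DbProviderFactories.GetFactory("System.Data.SqlClient"); // or "MySql.Data.MySqlClient"
using (DbConnection connection = factory.CreateConnection())
{
    connection.ConnectionString = connectionString; // assumed to come from config
    connection.Open();
    using (DbCommand command = connection.CreateCommand())
    {
        command.CommandText = "SELECT Id, Name FROM Car WHERE Id = @id"; // parameter prefix still differs per provider
        DbParameter p = command.CreateParameter();
        p.ParameterName = "@id";
        p.Value = 42;
        command.Parameters.Add(p);
        using (DbDataReader reader = command.ExecuteReader())
        {
            // map rows to Car instances here
        }
    }
}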
Then I have an abstraction for a query. I can then use Query objects that contain the actual SQL statements. These Query objects are created from a RawQuery or from various query builders (insert/update/delete/etc.) that have implementations for each provider type. The specific raw queries will need to be coded per database, since there is no getting past that.
There is quite a bit of legwork involved in coding this 'plumbing', and I have not had a situation where I actually required different platforms, so I have not bothered coding some small bits that I know need ironing out, but you are welcome to contact me if you are interested in seeing some code.
Can you use any code generation tools?
I used to use CodeSmith in another life and had templates that would generate POCO objects from DB tables, repository classes for each object, and stored procedures. It worked alright after fine-tuning the templates, and there were plenty of examples on the net.
But this was way before I saw the light with NHibernate!
A pattern for accessing multiple database types is the DAO (Data Access Object) pattern. This could suit your particular need if you can't or don't want to use an ORM. The following article explains the pattern for Java, but it is still very relevant to C#:
http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html
