C#: creating methods with some kind of sub-methods

I want to build my own MySQL class, but I'm relatively new to C#.
So in my thoughts I would like to build something like:
mySqlTool.select("a,b,c").from("foo").where("x=y");
I don't know what this is called, and I don't really know if it is even possible. My Google search ended with no real result.
So my question is:
Is it possible to do something like my sample above?

Creating a fluent API isn't overly complicated. You simply return the current instance of the class from each method, which allows them to be chained the way you specify in your post:
public class MySqlTool
{
    private string _fields, _table, _filters;

    public MySqlTool Select(string fields)
    {
        _fields = fields;
        return this;
    }

    public MySqlTool From(string table)
    {
        _table = table;
        return this;
    }

    public MySqlTool Where(string filters)
    {
        _filters = filters;
        return this;
    }

    public Results Execute()
    {
        // build your query from _fields, _table, _filters and run it;
        // Results stands in for whatever type your query returns
        return results;
    }
}
This would allow you to run a query like:
var tool = new MySqlTool();
var results = tool.Select("a, b, c").From("someTable").Where("a > 1").Execute();
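A hedged sketch of how Execute might build and run the query, assuming the MySql.Data ADO.NET provider (the connection argument is my addition, not part of the answer's class); real code should pass WHERE values as command parameters rather than concatenating strings:
using System.Data;
using MySql.Data.MySqlClient;

public DataTable Execute(MySqlConnection connection)
{
    // naive concatenation for illustration only -- use parameters in real code
    string sql = $"SELECT {_fields} FROM {_table}" +
                 (string.IsNullOrEmpty(_filters) ? "" : $" WHERE {_filters}");

    using (var command = new MySqlCommand(sql, connection))
    using (var adapter = new MySqlDataAdapter(command))
    {
        var table = new DataTable();
        adapter.Fill(table);
        return table;
    }
}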

It looks like you are trying to do something similar to LINQ to SQL, but with MySQL instead. There is a project called LinqConnect that does this. I have no affiliation with them.
You can use it like this (from a LinqConnect tutorial):
CrmDemoDataContext context = new CrmDemoDataContext();

var query = from it in context.Companies
            orderby it.CompanyID
            select it;

foreach (Company comp in query)
    Console.WriteLine("{0} | {1} | {2}", comp.CompanyID, comp.CompanyName, comp.Country);
Console.ReadLine();

I'm not a big fan of it personally, but it can be useful if you're just learning. LINQ to SQL can give you that functionality somewhat out of the box.
If you're looking to accomplish this yourself, you'll want to turn to extension methods: a C# feature that lets you extend existing classes with static methods that operate on the instance they're called on.
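For illustration, a hedged sketch of an extension method (the names here are hypothetical): a static method in a static class whose first parameter carries the this modifier, so it can be called as if it were an instance method:
using System.Text;

public static class SqlStringExtensions
{
    // "adds" an AppendWhere method to StringBuilder without touching the class
    public static StringBuilder AppendWhere(this StringBuilder sql, string filter)
    {
        return sql.Append(" WHERE ").Append(filter);
    }
}

// usage: new StringBuilder("SELECT a, b, c FROM foo").AppendWhere("x = y")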
My recommendation: if you're writing your own data-access layer, use something like Dapper (built and used by the Stack Overflow team), which offers a simple wrapper around the base connection to the database, simple ORM functionality, and parameterization.
I would also strongly recommend that you encapsulate your intensive queries in stored procedures.

What you're looking to do is design a fluent interface, and it's somewhat of an advanced concept for someone who is just learning C#. I would suggest you stick to the basics until you have a little more experience.
More importantly, there are already data adapters built into the .NET FCL, as well as third-party adapters like this one, that are more suitable and already do what you're trying to do using "LINQ to (database)". I wouldn't reinvent the wheel if I were you.

Related

Refactoring method for getting Json from a stored procedure in ASP.NET Core

Please, please don't close or mark this question as a duplicate; I have already looked on Stack Overflow and online but couldn't find a solution.
The code below works fine: I receive data from SQL Server via a stored procedure, assign it to a list of book models, and return JSON:
public IActionResult GetAllBooks()
{
    List<BookViewModel> book = new List<BookViewModel>();
    DataTable dataTable = new DataTable();

    using (SqlConnection sqlConnection = new SqlConnection(_configuration.GetConnectionString("xxx")))
    {
        sqlConnection.Open();
        SqlDataAdapter sqlData = new SqlDataAdapter("proc_GetBookList", sqlConnection);
        sqlData.SelectCommand.CommandType = CommandType.StoredProcedure;
        sqlData.Fill(dataTable);

        foreach (DataRow dr in dataTable.Rows)
        {
            book.Add(new BookViewModel
            {
                Name = dr["Name"].ToString(),
                Stock = Convert.ToInt32(dr["Stock"]),
            });
        }
    }

    return Json(book);
}
But I am trying to find a better way or best practice, e.g. serialization or some other technique, so that I don't need to create the view model and assign the values manually like below. This is a small example with only two properties, but sometimes I need to map 20 or more. Do you guys see any problem in the above code? I am new to the software development world; any suggestion would be appreciated.
new BookViewModel
{
    Name = dr["Name"].ToString(),
    Stock = Convert.ToInt32(dr["Stock"]),
};
I have used Newtonsoft's Json.NET (NuGet package) for this purpose.
Example:
using System.Data;
using Newtonsoft.Json;

public string DataTableToJsonWithJsonNet(DataTable table)
{
    // Json.NET's built-in DataTableConverter handles the serialization
    return JsonConvert.SerializeObject(table);
}
You can find this Newtonsoft example and a few other methods here.
Using a query like you are using is pretty much going to force this style of assignment. Switching to Entity Framework to query your DB is going to be your best bet, since it does the assignment to objects/classes automatically. But I get that doing so after a project is started can be a PITA or nearly impossible (or a very significant amount of work) to do. There's also a bit of a learning curve if you've never used it before.
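For reference, a minimal sketch of the Entity Framework approach, assuming EF Core with the Microsoft.EntityFrameworkCore.SqlServer provider; the Book entity and LibraryContext names are hypothetical stand-ins for your schema:
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public int Stock { get; set; }
}

public class LibraryContext : DbContext
{
    public DbSet<Book> Books => Set<Book>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer("<connection string>");
}

// usage: EF materializes the entities for you, no DataRow mapping required
// using var db = new LibraryContext();
// var books = db.Books.Where(b => b.Stock > 0).ToList();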
What you can do to make things easier is to create a constructor for your model that takes in a DataRow and assigns the data in a single place.
public BookViewModel(DataRow dr)
{
    Name = dr["Name"].ToString();
    Stock = Convert.ToInt32(dr["Stock"]);
}
Then you just call book.Add(new BookViewModel(dr)); in your foreach loop. This works well if you have to do this in multiple places in your code, so you don't have to repeat the assignments when you import rows.
You might also be able to use Reflection to automatically assign the values for you. This also has a bit of a learning curve, but it can make conversions much simpler, when you have it set up.
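As a hedged sketch of the reflection idea (a hypothetical helper, assuming column names match property names):
using System;
using System.Data;

public static class DataRowMapper
{
    // maps a DataRow onto any model whose writable public properties
    // share names with the row's columns
    public static T MapRow<T>(DataRow dr) where T : new()
    {
        var item = new T();
        foreach (DataColumn col in dr.Table.Columns)
        {
            var prop = typeof(T).GetProperty(col.ColumnName);
            if (prop != null && prop.CanWrite && dr[col] != DBNull.Value)
                prop.SetValue(item, Convert.ChangeType(dr[col], prop.PropertyType));
        }
        return item;
    }
}

// usage in the loop above: book.Add(DataRowMapper.MapRow<BookViewModel>(dr));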
Something similar to Reflection is AutoMapper, but that's not as popular as it used to be.
I was going to suggest using a JSON package like Newtonsoft or the built in package for C#, but it looks I got beat to that punchline.
Another option is using Dapper. It's sort of a half-step between your current system and Entity Framework: it takes plain SQL or a stored procedure and casts the results directly to a model. This might be the easiest and most straightforward way to refactor your code.
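A hedged sketch of the Dapper version of your action, reusing the names from your question; Dapper's Query<T> maps result columns to BookViewModel properties by name:
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public IActionResult GetAllBooks()
{
    using (var conn = new SqlConnection(_configuration.GetConnectionString("xxx")))
    {
        var books = conn.Query<BookViewModel>(
            "proc_GetBookList",
            commandType: CommandType.StoredProcedure).ToList();
        return Json(books);
    }
}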
Dapper and Entity are examples of object relational mappers (ORMs). There are others around you can check out.
I've only listed methods I've actually used and there are many other ways to get the same thing done, even without an ORM. They all have their pros and cons, so do your research to figure out what you're willing to commit to.
Simply replace your return Json(book) with:
return Ok(book)

Recursion in Fluent API

I am designing a fluent API for writing SQL. Keep in mind that one of my goals is to have the API not suggest functions that can't be called at that point in the chain. For instance, if you have just finished defining a field in the select clause, you can't call Where until you have called From first. A simple query looks like this:
string sql = SelectBuilder.Create()
    .Select()
        .Fld("field1")
    .From("table1")
    .Where()
        .Whr("field1 > field2")
        .Whr("CURRENT_TIMESTAMP > field3")
    .Build()
    .SQL;
My problem comes with recursion in SQL code. Say you wanted to have a field contain another SQL statement like below:
string sql = SelectBuilder.Create()
    .Select()
        .Fld("field1")
        .SQLFld()
            .Select()
                .Count("field6")
            .From("other table")
        .EndSQLFld()
        .Fld("field2")
    .From("table1")
    .Where()
        .Whr("field1 > field2")
        .Whr("CURRENT_TIMESTAMP > field3")
    .Build()
    .SQL;
I am using method chaining to build my fluent API. In many ways it is a state machine strewn across many classes, each of which represents a state. To add this functionality I would need to copy essentially every state I already have and wrap the copies between the two SQLFld and EndSQLFld states. I would need yet another copy if you were one more level down and embedding a SQL statement into a field of the already-embedded SQL statement. This goes on to infinity, so with an infinitely deep embedded SQL query I would need an infinite number of classes to represent the infinite states.
I thought about writing a SelectBuilder query that was taken to the point of the Build method and then embedding that SelectBuilder into another SelectBuilder, which fixes my infinity problem, but it is not very elegant, and elegance is the point of this API.
I could also throw out the idea that the API only offers functions when they are appropriate, but I would really hate to do that. I feel like that helps you discover how to use the API. In many fluent APIs it doesn't matter in which order you call what, but I want the API to appear as close to the actual SQL statement as possible and to enforce its syntax.
Anyone have any idea how to solve this issue?
Glad to see you are trying fluent interfaces; I think they are very elegant and expressive.
The builder pattern is not the only implementation for fluent interfaces. Consider this design, and let us know what you think =)
This is an example and I leave to you the details of your final implementation.
Interface design example:
public class QueryDefinition
{
    // The members don't need to be strings; they can be whatever you use
    // to handle the construction of the query.
    private string select;
    private string from;
    private string where;

    public QueryDefinition AddField(string select)
    {
        this.select = select;
        return this;
    }

    public QueryDefinition From(string from)
    {
        this.from = from;
        return this;
    }

    public QueryDefinition Where(string where)
    {
        this.where = where;
        return this;
    }

    public QueryDefinition AddFieldWithSubQuery(Action<QueryDefinition> definitionAction)
    {
        var subQueryDefinition = new QueryDefinition();
        definitionAction(subQueryDefinition);
        // Add here any action needed to incorporate the sub query, which is
        // defined in subQueryDefinition.
        return this;
    }
}
Example usage:
static void Main(string[] args)
{
    // 1 query deep
    var def = new QueryDefinition();
    def
        .AddField("Field1")
        .AddField("Field2")
        .AddFieldWithSubQuery(subquery =>
        {
            subquery
                .AddField("InnerField1")
                .AddField("InnerField2")
                .From("InnerTable")
                .Where("<InnerCondition>");
        })
        .From("Table")
        .Where("<Condition>");

    // 2 queries deep
    var def2 = new QueryDefinition();
    def2
        .AddField("Field1")
        .AddField("Field2")
        .AddFieldWithSubQuery(subquery =>
        {
            subquery
                .AddField("InnerField1")
                .AddField("InnerField2")
                .AddFieldWithSubQuery(subsubquery =>
                {
                    subsubquery
                        .AddField("InnerInnerField1")
                        .AddField("InnerInnerField2")
                        .From("InnerInnerTable")
                        .Where("<InnerInnerCondition>");
                })
                .From("InnerTable")
                .Where("<InnerCondition>");
        })
        .From("Table")
        .Where("<Condition>");
}
You can't "have only applicable methods available" without either sub-APIs for the substructures or clear bracketing/ending of all inner structural levels (SELECT columns, expressions in WHERE clause, subqueries).
Even then, running it all through a single API will require it to be stateful & "modal" with "bracketing" methods, to track whereabouts in the decl you are. Error reporting & getting these right will be tedious.
Ending bracketing by "fluent" methods, to me, seems non-fluent & ugly. This would result in a ugly appearence of EndSelect, EndWhere, EndSubquery etc. I'd prefer to build substructures (eg SUBQUERY for select) into a local variable & add that.
I don't like the EndSQLFld() idiom, which terminates the Subquery implicitly by terminating the Field. I'd prefer & guess it would be better design to terminate the subquery itself which is the complex part of the nested structure -- not the field.
To be honest, trying to enforce ordering of a "declarative" API for a "declarative" language (SQL) seems to be a waste of time.
Probably what I'd consider closer to an ideal usage:
SelectBuilder select = SelectBuilder.Create("CUSTOMER")
    .Column("ID")
    .Column("NAME")
    /*.From("CUSTOMER")*/ // look, I'm just going to promote this onto the constructor.
    .Where("field1 > field2")
    .Where("CURRENT_TIMESTAMP > field3");

SelectBuilder countSubquery = SelectBuilder.Create("ORDER")
    .Formula("count(*)")
    .Where("ORDER.FK_CUSTOMER = CUSTOMER.ID")
    .Where("STATUS = 'A'");

select.Formula(countSubquery, "ORDER_COUNT");

string sql = select.SQL;
Apologies to the Hibernate Criteria API :)

Using Dynamic LINQ (or Generics) to query/filter Azure tables

So here's my dilemma. I'm trying to utilize Dynamic LINQ to parse a search filter for retrieving a set of records from an Azure table. Currently, I'm able to get all records by using a GenericEntity object defined as below:
public class GenericEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }

    Dictionary<string, object> properties = new Dictionary<string, object>();

    /* "Property" property and indexer property omitted here */
}
I'm able to get this completely populated by utilizing the ReadingEntity event of the TableServiceContext object (the OnReadingGenericEntity handler). The following code is what actually pulls all the records and, hopefully, filters them (once I get it working):
public IEnumerable<T> GetTableRecords(string tableName, int numRecords, string filter)
{
    ServiceContext.IgnoreMissingProperties = true;
    ServiceContext.ReadingEntity -= LogType.GenericEntity.OnReadingGenericEntity;
    ServiceContext.ReadingEntity += LogType.GenericEntity.OnReadingGenericEntity;

    var result = ServiceContext.CreateQuery<GenericEntity>(tableName).Select(c => c);
    if (!string.IsNullOrEmpty(filter))
    {
        result = result.Where(filter);
    }

    var query = result.Take(numRecords).AsTableServiceQuery<GenericEntity>();
    IEnumerable<GenericEntity> res = query.Execute().ToList();
    return res;
}
I have TableServiceEntity-derived types for all the tables that I have defined, so I can get all properties/types using reflection. The problem with using the GenericEntity class in the Dynamic LINQ query for filtering is that the GenericEntity object does NOT have any of the properties that I'm trying to filter by; they're really just dictionary entries, so the dynamic query errors out. I can parse the filter for all the property names of that particular type and wrap
"Property[" + propName + "]"
around each property (found by using a type resolver function and reflection). However, that seems a little... overkill. I'm trying to find a more elegant solution, but since I actually have to provide a type to ServiceContext.CreateQuery<>, it makes it somewhat difficult.
So I guess my ultimate question is this: How can I use dynamic classes or generic types with this construct to be able to utilize dynamic queries for filtering? That way I can just take in the filter from a textbox (such as "item_ID > 1023000") and just have the TableServiceEntity types dynamically generated.
There ARE other ways around this that I can utilize, but I figured since I started using Dynamic LINQ, might as well try Dynamic Classes as well.
Edit: So I've got the dynamic class being generated by the initial select using some reflection, but I'm hitting a roadblock in mapping the types of GenericEntity.Properties into the various associated table record classes (TableServiceEntity-derived classes) and their property types. The primary issue is still that I have to use a specific data type just to create the query, so I'm using the GenericEntity type, which only contains key/value pairs. This ultimately prevents me from filtering, as I'm not able to use comparison operators (>, <, =, etc.) with object types.
Here's the code I have now to do the mapping into the dynamic class:
var properties = newType./* omitted */.GetProperties(
    System.Reflection.BindingFlags.Instance |
    System.Reflection.BindingFlags.Public);

string newSelect = "new(" + properties.Aggregate("", (seed, reflected) =>
    seed += string.Format(", Properties[\"{0}\"] as {0}", reflected.Name)).Substring(2) + ")";

var result = ServiceContext.CreateQuery<GenericEntity>(tableName).Select(newSelect);
Maybe I should just modify the properties.Aggregate call to prefix the "Properties[...]" section with the reflected.PropertyType? So the new select string would be built like:
string newSelect = "new(" + properties.Aggregate("", (seed, reflected) => seed += string.Format(", ({1})Properties[\"{0}\"] as {0}", reflected.Name, reflected.PropertyType)).Substring(2) + ")";
Edit 2: So now I've hit quite the roadblock. I can generate the anonymous types for all tables to pull all the values I need, but LINQ craps out on me no matter what I do for the filter. I've stated the reason above (no comparison operators on objects), but the issue I've been battling with now is trying to specify a type parameter to the Dynamic LINQ extension method to accept the schema of the new object type. Not much luck there, either... I'll keep you all posted.
I've created a simple System.Reflection.Emit based solution to create the class you need at runtime.
http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/
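For a flavour of what that entails, a hedged sketch of emitting a class with read/write properties at runtime. It assumes modern .NET, where AssemblyBuilder.DefineDynamicAssembly is the entry point (on classic .NET Framework the equivalent hangs off AppDomain); the helper name is hypothetical:
using System;
using System.Reflection;
using System.Reflection.Emit;

// Builds a type at runtime with one read/write property per (name, type) pair.
static Type BuildEntityType(string typeName, (string Name, Type Type)[] props)
{
    var asm = AssemblyBuilder.DefineDynamicAssembly(
        new AssemblyName("DynamicEntities"), AssemblyBuilderAccess.Run);
    var module = asm.DefineDynamicModule("Main");
    var tb = module.DefineType(typeName, TypeAttributes.Public | TypeAttributes.Class);

    foreach (var (name, type) in props)
    {
        var field = tb.DefineField("_" + name, type, FieldAttributes.Private);
        var prop = tb.DefineProperty(name, PropertyAttributes.None, type, null);

        var getter = tb.DefineMethod("get_" + name,
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            type, Type.EmptyTypes);
        var gil = getter.GetILGenerator();
        gil.Emit(OpCodes.Ldarg_0);      // this
        gil.Emit(OpCodes.Ldfld, field); // load backing field
        gil.Emit(OpCodes.Ret);

        var setter = tb.DefineMethod("set_" + name,
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            null, new[] { type });
        var sil = setter.GetILGenerator();
        sil.Emit(OpCodes.Ldarg_0);      // this
        sil.Emit(OpCodes.Ldarg_1);      // value
        sil.Emit(OpCodes.Stfld, field); // store into backing field
        sil.Emit(OpCodes.Ret);

        prop.SetGetMethod(getter);
        prop.SetSetMethod(setter);
    }

    return tb.CreateType();
}

// usage: var t = BuildEntityType("MyEntity", new[] { ("item_ID", typeof(int)) });
//        dynamic e = Activator.CreateInstance(t); e.item_ID = 1023001;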
I have run into exactly the same problem (with almost the same code :-)). I have a suspicion that the ADO.NET classes underneath somehow do not cooperate with dynamic types, but I haven't found exactly where yet.
So I've found a way to do this, but it's not very pretty...
Since I can't really do what I want within the framework itself, I utilized a concept used within the AzureTableQuery project. I pretty much just have a large C# code string that gets compiled on the fly with the exact object I need. If you look at the code of the AzureTableQuery project, you'll see that a separate library is compiled on the fly for whatever table we have, which builds all the properties and whatever else we need as we query the table. Not the most elegant or lightweight solution, but it works nevertheless.
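A hedged sketch of the compile-a-code-string idea, using the classic CSharpCodeProvider API from the .NET Framework (Roslyn is the modern equivalent); the type and namespace names are hypothetical:
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

string source = @"
    namespace Dynamic
    {
        public class MyEntity
        {
            public int item_ID { get; set; }
        }
    }";

using (var provider = new CSharpCodeProvider())
{
    var options = new CompilerParameters { GenerateInMemory = true };
    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
    if (results.Errors.HasErrors)
        throw new InvalidOperationException("compilation failed");

    Type entityType = results.CompiledAssembly.GetType("Dynamic.MyEntity");
    object instance = Activator.CreateInstance(entityType);
}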
Seriously wish there was a better way to do this, but unfortunately it's not as easy as I had hoped. Hopefully someone will be able to learn from this experience and possibly find a better solution, but I have what I need already so I'm done working on it (for now).

LINQ to SQL business object creation best practices

I've been using LINQ extensively in my recent projects; however, I have not been able to find a way of dealing with objects that doesn't seem either sloppy or impractical.
I'll also note that I primarily work with ASP.net.
I hate the idea of exposing my data context or LINQ-returned types to my UI code. I prefer finer-grained control over my business objects, and it also seems too tightly coupled to the DB to be good practice.
Here are the approaches I've tried:
Project items into a custom class
dc.TableName.Select(λ => new MyCustomClass(λ.ID, λ.Name, λ.Monkey)).ToList();
This obviously tends to result in a lot of wireup code for creating, updating etc...
Creating a wrapper around returned object
public class MyCustomClass
{
    LinqClassName _core;

    internal MyCustomClass(LinqClassName blah)
    {
        _core = blah;
    }

    int ID { get { return _core.ID; } }
    string Name { get { return _core.Name; } set { _core.Name = value; } }
}

...

dc.TableName.Select(λ => new MyCustomClass(λ)).ToList();
Seems to work pretty well, but reattaching for updates seems to be nigh impossible, somewhat defeating the purpose.
I also tend to like using LINQ queries for transformations and such throughout my code, and I'm worried about a speed hit with this method, although I haven't tried it with large enough sets to confirm yet.
Creating a wrapper around returned object while persisting data context
public class MyCustomClass
{
    LinqClassName _core;
    MyDataContext _dc;
    ...
}
Persisting the data context within my object greatly simplifies updates but seems like a lot of overhead especially when utilizing session state.
A quick note: I know the usage of λ is not mathematically correct here - I tend to use it for my bound variable because it stands out visually, and in most lambda statements it is the transformation that is important, not the variable - not sure if that makes any sense, but blah.
Sorry for the extremely long question.
Thanks in advance for your input and Happy New Years!
I create "Map" extension functions on the tables returning from the LINQ queries. The Map function returns a plain old CLR object. For example:
public static MyClrObject Map(this MyLinqObject o)
{
    MyClrObject myObject = new MyClrObject()
    {
        stringValue = o.String,
        secondValue = o.Second
    };
    return myObject;
}
You can then add the Map function to the select list in the LINQ query and have LINQ return the CLR Object like:
return (from t in dc.MyLinqObject
        select t.Map()).FirstOrDefault();
If you are returning a list, you can use ToList to get a List<> back. If you prefer to create your own list types, you need to do two things. First, create a constructor that takes an IEnumerable<> of the underlying type as its one argument; that constructor should copy the items from the IEnumerable<> collection. Second, create a static extension method to call that constructor from the LINQ query:
public static MyObjectList ToMyObjectList(this IEnumerable<MyClrObject> collection)
{
    return new MyObjectList(collection);
}
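For completeness, a hedged sketch of the matching constructor described above; MyObjectList and MyClrObject are the same hypothetical types used in this answer:
using System.Collections.Generic;

public class MyObjectList : List<MyClrObject>
{
    // copies the items from the LINQ query's result into this list type
    public MyObjectList(IEnumerable<MyClrObject> collection)
    {
        AddRange(collection);
    }
}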
Once these methods are created, they kind of hide in the background. They don't clutter up the LINQ queries and they don't limit what operations you can perform in the query.
This blog entry has a more thorough explanation.

Pulling the WHERE clause out of LINQ to SQL

I'm working with a client who wants to mix LINQ to SQL with their in-house DAL. Ultimately they want to be able to query their layer using typical LINQ syntax. The point where this gets tricky is that they build their queries dynamically. So ultimately what I want is to be able to take a LINQ query, pull it apart, and inspect the pieces to pull the correct objects out, but I don't really want to build a piece that translates the 'where' expression into SQL. Is this something I can just generate using Microsoft code? Or is there an easier way to do this?
(you mean just LINQ, not really LINQ-to-SQL)
Sure, you can do it - but it is a massive amount of work. Here's how; I recommend "don't". You could also look at the source code for DbLinq to see how they do it.
If you just want Where, it is a bit easier - but as soon as you start getting joins, groupings, etc., it will be very hard to do.
Here's Where-only support on a custom LINQ implementation (not a full queryable provider, but enough to get LINQ with Where working):
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

namespace YourLibrary
{
    public static class MyLinq
    {
        public static IEnumerable<T> Where<T>(
            this IMyDal<T> dal,
            Expression<Func<T, bool>> predicate)
        {
            BinaryExpression be = predicate.Body as BinaryExpression;
            var me = be.Left as MemberExpression;
            if (me == null) throw new InvalidOperationException("don't be silly");
            if (me.Expression != predicate.Parameters[0]) throw new InvalidOperationException("direct properties only, please!");

            string member = me.Member.Name;
            object value;
            switch (be.Right.NodeType)
            {
                case ExpressionType.Constant:
                    value = ((ConstantExpression)be.Right).Value;
                    break;
                case ExpressionType.MemberAccess:
                    var constMemberAccess = ((MemberExpression)be.Right);
                    var capture = ((ConstantExpression)constMemberAccess.Expression).Value;
                    switch (constMemberAccess.Member.MemberType)
                    {
                        case MemberTypes.Field:
                            value = ((FieldInfo)constMemberAccess.Member).GetValue(capture);
                            break;
                        case MemberTypes.Property:
                            value = ((PropertyInfo)constMemberAccess.Member).GetValue(capture, null);
                            break;
                        default:
                            throw new InvalidOperationException("simple captures only, please");
                    }
                    break;
                default:
                    throw new InvalidOperationException("more complexity");
            }
            return dal.Find(member, value);
        }
    }

    public interface IMyDal<T>
    {
        IEnumerable<T> Find(string member, object value);
    }
}

namespace MyCode
{
    using YourLibrary;

    static class Program
    {
        class Customer
        {
            public string Name { get; set; }
            public int Id { get; set; }
        }

        class CustomerDal : IMyDal<Customer>
        {
            public IEnumerable<Customer> Find(string member, object value)
            {
                Console.WriteLine("Your code here: " + member + " = " + value);
                return new Customer[0];
            }
        }

        static void Main()
        {
            var dal = new CustomerDal();
            var qry = from cust in dal
                      where cust.Name == "abc"
                      select cust;

            int id = int.Parse("123");
            var qry2 = from cust in dal
                       where cust.Id == id // capture
                       select cust;
        }
    }
}
Technically, if your DAL exposes IQueryable<T> instead of IEnumerable<T>, you can also implement an IQueryProvider and do exactly what you describe. However, this is not for the faint of heart.
But if you expose the LINQ to SQL tables themselves in the DAL, they will do exactly this for you. There is a (big) risk, though, since you'll be handing the client code total control over how to express SQL queries, and the usual result is some complex query that joins everything and slaps pagination on top of it, with less than spectacular run-time performance.
I think you should consider carefully what is actually needed from the DAL and expose only that.
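For reference, a hedged skeleton of the IQueryProvider route (DalQuery and DalQueryProvider are hypothetical names); it only captures the expression tree, and all the real work of translating the tree into DAL calls happens where the comment indicates:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class DalQuery<T> : IQueryable<T>
{
    public DalQuery(DalQueryProvider provider)
    {
        Provider = provider;
        Expression = Expression.Constant(this); // root of the query
    }

    public DalQuery(DalQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression;
    }

    public Type ElementType => typeof(T);
    public Expression Expression { get; }
    public IQueryProvider Provider { get; }

    public IEnumerator<T> GetEnumerator() =>
        Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public class DalQueryProvider : IQueryProvider
{
    public IQueryable<TElement> CreateQuery<TElement>(Expression expression) =>
        new DalQuery<TElement>(this, expression);

    public IQueryable CreateQuery(Expression expression)
    {
        // non-generic path: resolve the element type, defer to the generic overload
        var elementType = expression.Type.GetGenericArguments()[0];
        return (IQueryable)Activator.CreateInstance(
            typeof(DalQuery<>).MakeGenericType(elementType), this, expression);
    }

    public TResult Execute<TResult>(Expression expression) => (TResult)Execute(expression);

    public object Execute(Expression expression)
    {
        // this is where a real provider walks the expression tree and
        // translates Where/Select/etc. into calls against the DAL
        throw new NotImplementedException("translate the expression tree here");
    }
}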
I just read an interesting article on Expression Trees, LINQ to SQL uses these to translate the query into SQL and send it over the wire.
Maybe that's something you could use?
Just some thoughts. I know some languages support building a string that the code itself can then execute; I never tried it with .NET, but this is common in functional languages like LISP. Since .NET supports lambdas, maybe this is possible.
Since F# is coming to .NET soon, maybe it will be possible then if it is not right now.
What I am trying to say is: if you can do this, then maybe you can build the string that will be used as the LINQ statement and then execute it. Since it is a string, it is possible to analyse the string and pull out the information you want.
Try Dynamic Linq
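With Dynamic LINQ the predicate arrives as a string, which fits the dynamic-query scenario; a minimal sketch, assuming the System.Linq.Dynamic.Core NuGet package:
using System.Linq;
using System.Linq.Dynamic.Core;

var customers = new[]
{
    new { Name = "abc", Id = 1 },
    new { Name = "def", Id = 2 }
}.AsQueryable();

// the where clause is parsed from a string at runtime
var filtered = customers.Where("Id > @0", 1).ToList();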
To anyone else with the same question out there: pulling the where clause out of LINQ to SQL isn't quite as straightforward as one would have hoped, and doing that by itself is probably meaningless. There are a couple of options, depending on the requirements. One is to grab it from the generated string, but then it would contain parameter references and object property mappings that would also have to be resolved, so those would have to be pulled out of the original provider somehow; otherwise this would be pointless. Another is to find a modular provider that can do that and also makes member mappings easily accessible, but once again, without the rest of the query, I see little utility in doing that, because the where clause would reference table/column aliases from the select statement.
I had a similar task: writing a full-blown provider for a custom ORM/DAL a couple of years ago. While it qualifies as the most complex thing I've worked on, being an experienced developer I can say it's not as bad as some people claim once you wrap your head around the concepts that lie at the foundation of such a component. Some solutions that I've seen go the wrong way about it, add redundant functionality, and have extra code addressing problems introduced by the underlying logic, e.g. an "optimization" stage that attempts to refactor the bloated, nested SQL produced by the main parser. If the latter were designed in such a way that it output clean SQL from the start, then no clean-up phase would be needed. I've seen providers that create a new level of nesting for each where and join call; that's a bad strategy. By breaking a query down into its three or four main parts - select, from, where and orderby - which are built individually as the tree is being visited, this problem is avoided altogether. I've developed an object-to-data (aka LINQ-to-SQL) provider based on these principles for a custom ORM/DAL, and it produces nice, clean SQL with excellent performance, as each statement is compiled to IL and cached.
For anyone that is looking to do something similar, please see my posts that include a sample project with a tutorial/barebones implementation that makes it easy to see how it works. Included is also the full solution:
How to write a LINQ to SQL provider in C# Part 1 - Introduction
How to write a LINQ to SQL provider in C# Part 2 - Expression Visitor
How to write a LINQ to SQL provider in C# Part 3 - Where Clause Visitor
How to write a LINQ to SQL provider in C# Part 4 - Compiling Expression Trees
