I have an application that stores its data in a database (Oracle).
I have a simple model:
public class FileTemplate
{
public string Xml { get; set; }
...
}
and a class map:
public class FileTemplateMap : ClassMap<FileTemplate>
{
public FileTemplateMap()
{
Table("FILE_TEMPLATE");
Map(f => f.Xml, "XML").CustomSqlType("NCLOB");
...
}
}
I want to add PostgreSQL support, but PostgreSQL doesn't have an NCLOB data type, so I modified my mapping:
public class FileTemplateMap : ClassMap<FileTemplate>
{
public FileTemplateMap()
{
Table("FILE_TEMPLATE");
#if POSTGRE
Map(f => f.Xml, "XML").CustomSqlType("TEXT");
#else
Map(f => f.Xml, "XML").CustomSqlType("NCLOB");
#endif
}
}
Now I have to produce different builds for Oracle and PostgreSQL by defining a conditional compilation symbol (for PostgreSQL), and the application built with the POSTGRE symbol doesn't work with Oracle.
Is there another way to do this without conditional compilation symbols? I want a single build that works with both databases.
I'd do something like this
public static class CustomSqlTypeHelpers
{
static readonly string _ClobSqlType;
static CustomSqlTypeHelpers()
{
// Checks to validate the config file setting omitted
_ClobSqlType = ConfigurationManager.AppSettings["ClobSqlType"];
}
public static PropertyPart LargeTextColumn(this PropertyPart pp)
{
return pp.CustomSqlType(_ClobSqlType);
}
}
public FileTemplateMap()
{
Table("FILE_TEMPLATE");
Map(f => f.Xml, "XML").LargeTextColumn();
}
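For completeness, the omitted validation in the static constructor could look something like this (a sketch: the "ClobSqlType" key name comes from the snippet above, while the NCLOB fallback is my own assumption):

static CustomSqlTypeHelpers()
{
    // Expect e.g. <add key="ClobSqlType" value="NCLOB" /> (Oracle) or value="TEXT" (PostgreSQL).
    var configured = ConfigurationManager.AppSettings["ClobSqlType"];

    // Fall back to Oracle's NCLOB when nothing is configured rather than failing at mapping time.
    _ClobSqlType = string.IsNullOrWhiteSpace(configured) ? "NCLOB" : configured.Trim();
}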
I've done it a little differently.
Here is an article about my solution: http://beamyplum.blogspot.ru/2013/08/nhibernate.html
I'm using the MongoDB C# driver to talk to a Mongo Atlas instance.
I'm restructuring the schema of a few documents and I want to use ISupportInitialize to read some extra elements and convert them to the new expected schema.
This is the old document definition:
public class ImageDocument : DocumentBase, ISupportInitialize
{
[BsonExtraElements]
public Dictionary<string, object> ExtraElements;
//Other elements omitted for brevity.
public string AzureImageId { get; set; }
public string AzureImageUrl { get; set; }
public void BeginInit()
{
}
public void EndInit()
{
}
}
Here is the new document definition:
public class ImageDocument : DocumentBase, ISupportInitialize
{
[BsonExtraElements]
public Dictionary<string, object> ExtraElements;
//Other elements omitted for brevity
public AzureImageInformationPage Original { get; set; } //Original, as uploaded
public void BeginInit()
{
}
public void EndInit()
{
if (Original == null)
{
Original = new AzureImageInformationPage {
AzureImageId = ExtraElements.GetValueOrDefault("AzureImageId").ToString(),
ImageUrl = ExtraElements.GetValueOrDefault("ImageUrl").ToString()
};
}
}
}
Now, for some reason the EndInit method is never called, even though the MongoDB documentation states it should happen automagically.
I'm using the following code to interact with the MongoDB C# driver:
public async Task<IList<T>> RetrieveAll<T>() where T : DocumentBase
{
return await GetCollection<T>().AsQueryable().ToListAsync();
}
public async Task<IList<T>> RetrieveWhere<T>(Expression<Func<T, bool>> query) where T : DocumentBase
{
return await GetCollection<T>().AsQueryable().Where(query).ToListAsync();
}
public async Task<T> RetrieveSingle<T>(Expression<Func<T, bool>> query) where T : DocumentBase
{
return await GetCollection<T>().AsQueryable().SingleOrDefaultAsync(query);
}
private IMongoCollection<T> GetCollection<T>() where T : DocumentBase
{
//Slightly modified from the real code, so it's easy to read.
var collectionName = typeof(T).Name.Replace("Document", string.Empty);
//Database name is hardcoded for now.
var database = mongoClient.GetDatabase("MyDb");
return database.GetCollection<T>(collectionName);
}
How do I get the MongoDB driver to call the ISupportInitialize methods?
Thanks in advance for helping me out.
I've found the issue.
As of writing, initialization is only supported when compiling against .NET 4.5.
I'm using .NET Core 2.0.
See this issue on the MongoDB Jira, and lines 131 to 150 of the BsonClassMapSerializer class.
Hopefully the MongoDB team will add support for ISupportInitialize on .NET Core soon.
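Until then, one workaround (my own sketch, not from the driver documentation) is to run the EndInit conversion logic yourself after the documents have been materialized, for example in the repository methods from the question:

public async Task<IList<T>> RetrieveAll<T>() where T : DocumentBase
{
    var results = await GetCollection<T>().AsQueryable().ToListAsync();

    // The driver doesn't invoke ISupportInitialize on this target framework,
    // so trigger the EndInit migration logic manually after deserialization.
    foreach (var doc in results.OfType<System.ComponentModel.ISupportInitialize>())
    {
        doc.EndInit();
    }
    return results;
}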
I have been searching around for this but couldn't find a single good article that would work. There are bits and pieces everywhere but not a complete piece of code for this.
It's frustrating that Microsoft makes it so hard to connect EF to MySQL, whereas for SQL Server it takes only a few minutes.
Could anyone please make it easier and just illustrate, in simple steps, how to set up EF with MySQL? I am only interested in the connection part; I have been working with EF for a couple of years and know the rest. Here's the code and the related libraries.
Data Context Class
public ReservationDataContext() : base(#"server=localhost;port=3306;database =xxx;uid=xx;pwd=xx") { }
partial void OnCreated();
public ReservationDataContext(string connectionString): base(connectionString)
{
this.OnCreated();
}
public ReservationDataContext(string connection, MappingSource mappingSource) :
base(connection,mappingSource)
{
this.OnCreated();
}
public Table<Reservation> Reservations
{
get
{
return this.GetTable<Reservation>();
}
}
Entity Class:
[Table(Name = "reservations")]
public class Reservation
{
[Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
public int Id { get; set; }
[Column]
public bool IsReserved { get; set; }
[Column]
public string Status { get; set; }
The DLLs I reference are shown in a screenshot (image not reproduced here).
I was getting errors where it was picking up the SQL Server Entity Framework provider instead of the MySQL DLL. I resolved that; it no longer gives an error, but it returns null and doesn't fetch the data. Here's the code for that.
In the query handler class:
private ReservationDataContext reservationContext;
public QueryHandler()
{
reservationContext = new ReservationDataContext(ConfigurationManager.ConnectionStrings["ReservationDataContext"].ConnectionString);
}
public List<Reservation> GetAllReservations()
{
List<Reservation> resList = reservationContext.Reservations.ToList();
return resList;
}
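For reference, the context code above is LINQ to SQL-style rather than Entity Framework; the usual EF6-with-MySQL wiring looks roughly like this (a sketch on my part, assuming the MySql.Data.Entity EF6 provider package is installed; ReservationDbContext is a made-up name):

using System.Data.Entity;
using MySql.Data.Entity; // from the MySql.Data.Entity (EF6) NuGet package

// Point EF6 at the MySQL provider so it stops defaulting to SQL Server.
[DbConfigurationType(typeof(MySqlEFConfiguration))]
public class ReservationDbContext : DbContext
{
    public ReservationDbContext()
        : base("server=localhost;port=3306;database=xxx;uid=xx;pwd=xx") { }

    public DbSet<Reservation> Reservations { get; set; }
}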
I'm using Entity Framework and .Net Core 2.0 for the first time (I'm also pretty new to C#, but I've been using the traditional .Net Framework & VB since version 1... so I'm no newbie to .Net development), and I've already run into a problem creating my database.
Take this simple scenario: I want to store some information about some electric pumps. Two of the properties are a min/max type range, so I've implemented these as a simple class, thus:
public class Pump
{
[Key]
public int pumpId { get; set; }
public string pumpName { get; set; }
public int pumpControlChannel { get; set; }
public MinMax normalCurrent { get; set; }
public MinMax normalFlowRate { get; set; }
}
[ComplexType]
public class MinMax
{
public int min { get; set; }
public int max { get; set; }
}
As you can see, I've tried the [ComplexType] attribute, to no avail.
Anyway, now create a dead simple DbContext class to manage my Pump entities. I'm using SQLite:
public class EFDB : DbContext
{
public DbSet<Pump> pumps { get; private set; }
private static DbContextOptions GetOptions(string connectionString)
{
var modelBuilder = new DbContextOptionsBuilder();
return modelBuilder.UseSqlite(connectionString).Options;
}
public EFDB(string connectionString) : base(GetOptions(connectionString)) { }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
try
{
// modelBuilder.ComplexType<MinMax>(); // ComplexType not recognised
base.OnModelCreating(modelBuilder);
}
catch (Exception ex)
{
System.Diagnostics.Debugger.Break();
}
}
}
and lastly a simple static class to call it (I embedded it in a bigger program... to duplicate this problem you could just stick the code lines into Program.cs):
public static class TryMe
{
public static void MakeMeFail()
{
using (var db = new EFDB("FileName=C:\\temp\\test_effail.db"))
{
try
{
db.Database.EnsureCreated();
}
catch (Exception ex)
{
System.Diagnostics.Debugger.Break(); // If we hit this line, it fell over
}
}
System.Diagnostics.Debugger.Break(); // If we hit this line, it worked.
}
}
Just call TryMe.MakeMeFail(); the code fails at db.Database.EnsureCreated().
From everything I've read, [ComplexType] should do what I want... but it Just Doesn't. Nor can I find modelBuilder.ComplexType<T> anywhere.
It may just be a library reference I'm missing...? The above code uses the following:
using System;
using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
However, NONE of the documentation/examples I can find anywhere show which libraries need referencing!
Thanks in advance.
[PS: Apologies to those who already saw this question, I'm using EF Core 2.0, NOT EF6]
Typical... it's always the way, isn't it? 5 minutes after posting, you discover the answer to your own question....
The answer, in this case, can be found here:
https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities
EF Core calls this sort of entity an "owned" entity, rather than a "complex type".
Simply adding these lines to `OnModelCreating` fixed the issue:
modelBuilder.Entity<Pump>().OwnsOne(p => p.normalCurrent);
modelBuilder.Entity<Pump>().OwnsOne(p => p.normalFlowRate);
The database now gets created (correctly, I think; I haven't verified that yet).
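As a follow-up sketch (an assumption on my part, since it needs a later version than the EF Core 2.0 used here): from EF Core 2.1 onwards the [Owned] attribute can replace the per-owner OwnsOne calls:

// EF Core 2.1+ only: marking the type as owned replaces the
// modelBuilder.Entity<Pump>().OwnsOne(...) lines in OnModelCreating.
[Owned]
public class MinMax
{
    public int min { get; set; }
    public int max { get; set; }
}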
I use Dapper and TableAttribute:
using Dapper.Contrib.Extensions;
namespace MyCompany.Entities
{
[Table(Config.TABLE_ARCHIVO_CLIENTE)]
public partial class ArchivoCliente
{
This works:
public const string TABLE_ARCHIVO_CLIENTE = "Archivo_Cliente";
It does not work when it is not a const string. I tried using a static property so I could read the value from appSettings:
public static string TABLE_ARCHIVO_CLIENTE
{
get
{
return ConfigurationManager.AppSettings.Get(KeyTable);
}
}
Any suggestions for using AppSettings?
Attribute parameters require constants.
Checking the Dapper.Contrib code, it appears (very unusually) to access the attribute by name. If it were by type, you could do something like:
class ConfigTableAttribute : TableAttribute {
public ConfigTableAttribute(string configSetting)
: base(LookupTableNameFromConfig(configSetting)) { }
private static string LookupTableNameFromConfig(string configSetting)
{
// TODO: your code here
throw new NotImplementedException();
}
}
and annotate your code with:
[ConfigTable(nameof(Config.TABLE_ARCHIVO_CLIENTE))]
class Foo {}
It would then be your job to implement the TODO which would fetch the actual value via reflection or an indexer, etc. In the code shown, the input configSetting would be TABLE_ARCHIVO_CLIENTE.
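For instance, a minimal sketch of that TODO reading from appSettings (my own implementation, simply reusing the configSetting string as the key):

private static string LookupTableNameFromConfig(string configSetting)
{
    // Expects e.g. <add key="TABLE_ARCHIVO_CLIENTE" value="Archivo_Cliente" /> in the config file.
    var tableName = ConfigurationManager.AppSettings[configSetting];
    if (string.IsNullOrWhiteSpace(tableName))
        throw new ConfigurationErrorsException($"No table name configured for '{configSetting}'.");
    return tableName;
}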
However, since it accesses the attribute by name via dynamic, all you actually need is something called TableAttribute that has a Name property. You could do the same thing as above, but in a different namespace:
namespace MyEvilness {
class TableAttribute : Attribute {
public TableAttribute(string configSetting) {
Name = LookupTableNameFromConfig(configSetting);
}
public string Name { get; }
// etc as before (LookupTableNameFromConfig as in the previous example)
}
}
and use:
[MyEvilness.Table(nameof(Config.TABLE_ARCHIVO_CLIENTE))]
class Foo {}
A word of caution: I consider the current implementation to be a bug! I understand why it is done that way (i.e. so it works with EF), but I'm tempted to make it work for either approach.
I've written a small package to overcome this issue. It assigns the configured value as the table name when the key in the configuration file matches the type's FullName, with some effort spent to avoid SQL injection.
One can register it via dependency injection:
// Startup.cs or Program.cs
// ...
services.ReadTablenamesFromConfig(configuration.GetSection("MySectionName"));
// ...
with configuration:
// appsettings.json
...
"MySectionName": {
"TableNames": {
"Demo.Sale": "sale_2020"
}
},
...
For the model:
// Sale.cs
namespace Demo
{
//[Table("sale_2020")]
public class Sale
{
public string Product { get; set; }
public int Quantity { get; set; }
}
}
See a better example here.
For the time being, the implementation is as follows:
// TablenameExtensions.cs
using Dapper.Contrib.Extensions;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using System;
namespace Dapper.Contrib.Extensions.Tablename
{
public static class TablenameExtensions
{
private static TablenameConfig _config;
public static IServiceCollection ReadTablenamesFromConfig(this IServiceCollection services, IConfigurationSection configSection)
{
services.Configure<TablenameConfig>(configSection);
_config = configSection.Get<TablenameConfig>();
SqlMapperExtensions.TableNameMapper = TableName;
return services;
}
private static string TableName(Type type) => _config.TableNames[type.FullName].Replace("`", "");
public static string TableName<T>() => TableName(typeof(T));
}
}
where:
// TablenameConfig.cs
using System.Collections.Generic;
namespace Dapper.Contrib.Extensions.Tablename
{
internal class TablenameConfig
{
public IDictionary<string, string> TableNames { get; set; }
}
}
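Once ReadTablenamesFromConfig has run, Dapper.Contrib resolves table names through SqlMapperExtensions.TableNameMapper, and the TableName&lt;T&gt;() helper above can feed hand-written SQL too. A small usage sketch (the connection string and SQL Server client are my assumptions; Dapper.Contrib's own Get/GetAll helpers additionally expect a [Key] or Id property on the entity):

using Dapper;
using Dapper.Contrib.Extensions.Tablename;
using Microsoft.Data.SqlClient;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class SaleQueries
{
    public static async Task<IEnumerable<Demo.Sale>> LoadSales(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // TableName<T>() returns the configured name, e.g. "sale_2020" for Demo.Sale.
            var table = TablenameExtensions.TableName<Demo.Sale>();
            return await connection.QueryAsync<Demo.Sale>($"SELECT * FROM {table}");
        }
    }
}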
I am very new to C# and ServiceStack and I am working on a small project that consists of calling a third-party API and loading the data I get back from the API into a relational database via ServiceStack's OrmLite.
The idea is to have a reusable model for each endpoint of the API that determines both how the data is received in the API's response and how it is inserted into the database.
So I have something like the following:
[Route("/api/{ApiEndpoint}", "POST")]
public class ApiRequest : IReturn<ApiResponse>
{
public Int32 OrderId { get; set; }
public DateTime PurchaseDate { get; set; }
public String ApiEndpoint { get; set; }
}
public class ApiResponse
{
public Endpoint1[] Data { get; set; }
public String ErrorCode { get; set; }
public Int32 ErrorNumber { get; set; }
public String ErrorDesc { get; set; }
}
public class Endpoint1
{
[AutoIncrement]
public Int32 Id { get; set; }
[CustomField("DATETIME2(7)")]
public String PurchaseDate { get; set; }
[CustomField("NVARCHAR(50)")]
public String Customer { get; set; }
[CustomField("NVARCHAR(20)")]
public String PhoneNumber { get; set; }
public Int32 Amount { get; set; }
}
My first class represents the API's request with its route, the second class represents the API's response. The API's response is the same for all endpoints, but the only thing that varies is the structure of the Data field that comes back from that endpoint. I've defined the structure of one of my endpoints in my Endpoint1 class, and I am using it in my API's response class. As you can see, I am also defining a few attributes on my Endpoint1 class to help the ORM make better decisions later when inserting the data.
Ok, so the issue is that I have about 15 endpoints and I don't want to create 15 ApiResponse classes when I know the only thing that changes is that first Data field in the class.
So I made something like this:
public class DataModels
{
public Type getModel(String endpoint)
{
Dictionary<String, Type> models = new Dictionary<String, Type>();
models.Add("Endpoint1", typeof(Endpoint1));
// models.Add("Endpoint2", typeof(Endpoint2));
// models.Add("Endpoint3", typeof(Endpoint3));
// and so forth...
return models[endpoint];
}
}
I would like getModel() to be called when the request is made, so that I can pass in the ApiEndpoint field from the ApiRequest class and get back the type I want my Data field to have, changing it dynamically in my ApiResponse class.
In addition, there is the ORM part where I iterate over every endpoint and create a different table using the model/type of each endpoint. Something like this:
endpoints.ForEach(
(endpoint) =>
{
db.CreateTableIfNotExists<Endpoint1>();
// inserting data, doing other work etc
}
);
But again, I'd like to be able to call getModel() in here and with that define the model of the specific endpoint I am iterating on.
I've attempted calling getModel() in both places but I always get errors back like "cannot use a variable as a type" and others... so I am definitely doing something wrong.
Feel free to suggest a different approach to getModel(). This is just what I came up with but I might be ignoring a much simpler approach.
If I understood you correctly, you have different API calls which all return the same object; the only difference is that the field Data can have different types.
In that case you can simply change the type of Data to object:
public object Data { get; set; }
And later simply cast this to the required object:
var data1 = (Endpoint1[])response.Data;
You're going to have a very tough time trying to create .NET types dynamically, which requires advanced usage of Reflection.Emit. It's self-defeating to try to dynamically create Request DTOs with ServiceStack, since the client and metadata services need the concrete Types to be able to call the Service with a typed API.
I can't really follow your example, but my initial approach would be to see whether you can use a single Service (i.e. instead of trying to dynamically create multiple of them). Likewise with OrmLite: if the schema of the POCOs is the same, it sounds like you would be able to flatten your data model and use a single database table.
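For illustration, a sketch of that flattened single-table idea (my interpretation, not code from the original answer): one POCO and one table for every endpoint, with a column recording which endpoint a row came from:

public class EndpointData
{
    [AutoIncrement]
    public int Id { get; set; }

    [CustomField("NVARCHAR(50)")]
    public string ApiEndpoint { get; set; } // e.g. "Endpoint1"

    [CustomField("DATETIME2(7)")]
    public string PurchaseDate { get; set; }

    [CustomField("NVARCHAR(50)")]
    public string Customer { get; set; }

    [CustomField("NVARCHAR(20)")]
    public string PhoneNumber { get; set; }

    public int Amount { get; set; }
}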
AutoQuery is an example of a feature which dynamically creates Service Implementations from just a concrete Request DTO, which is effectively the minimum Type you need.
So whilst it's highly recommended to have explicit DTOs for each Service, you can use inheritance to reuse the common properties, e.g:
[Route("/api/{ApiEndpoint}/1", "POST")]
public class ApiRequest1 : ApiRequestBase<Endpoint1> {}
[Route("/api/{ApiEndpoint}/2", "POST")]
public class ApiRequest2 : ApiRequestBase<Endpoint1> {}
public abstract class ApiRequestBase<T> : IReturn<ApiResponse<T>>
{
public int OrderId { get; set; }
public DateTime PurchaseDate { get; set; }
public string ApiEndpoint { get; set; }
}
And your Services can return the same generic Response DTO:
public class ApiResponse<T>
{
public T[] Data { get; set; }
public String ErrorCode { get; set; }
public Int32 ErrorNumber { get; set; }
public String ErrorDesc { get; set; }
}
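A minimal sketch of a Service built on these DTOs (the service name and body are my assumptions, not part of the original answer; Db assumes OrmLite's IDbConnectionFactory is registered in the IoC):

public class Endpoint1Service : Service
{
    public ApiResponse<Endpoint1> Post(ApiRequest1 request)
    {
        // Call the third-party API for this endpoint and map its payload to Endpoint1 rows...
        var rows = new Endpoint1[0]; // placeholder for the mapped API data

        // ...then persist them with OrmLite and hand them back in the shared generic response.
        Db.InsertAll(rows);
        return new ApiResponse<Endpoint1> { Data = rows };
    }
}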
I can't really understand the purpose of what you're trying to do, so the API design is going to need modifications to suit your use-case.
You're going to have similar issues with OrmLite, which is a typed, code-first POCO ORM: you'll run into friction trying to use dynamic types that don't exist until runtime, and you'll likely have an easier time executing dynamic SQL, since it's far easier to generate a string than a .NET Type.
With that said, GenericTableExpressions.cs shows an example of changing the table name that OrmLite saves a POCO to at runtime:
const string tableName = "Entity1";
using (var db = OpenDbConnection())
{
db.DropAndCreateTable<GenericEntity>(tableName);
db.Insert(tableName, new GenericEntity { Id = 1, ColumnA = "A" });
var rows = db.Select(tableName, db.From<GenericEntity>()
.Where(x => x.ColumnA == "A"));
Assert.That(rows.Count, Is.EqualTo(1));
db.Update(tableName, new GenericEntity { ColumnA = "B" },
where: q => q.ColumnA == "A");
rows = db.Select(tableName, db.From<GenericEntity>()
.Where(x => x.ColumnA == "B"));
Assert.That(rows.Count, Is.EqualTo(1));
}
Which uses these extension methods:
public static class GenericTableExtensions
{
static object ExecWithAlias<T>(string table, Func<object> fn)
{
var modelDef = typeof(T).GetModelMetadata();
lock (modelDef)
{
var hold = modelDef.Alias;
try
{
modelDef.Alias = table;
return fn();
}
finally
{
modelDef.Alias = hold;
}
}
}
public static void DropAndCreateTable<T>(this IDbConnection db, string table)
{
ExecWithAlias<T>(table, () => {
db.DropAndCreateTable<T>();
return null;
});
}
public static long Insert<T>(this IDbConnection db, string table, T obj, bool selectIdentity = false)
{
return (long)ExecWithAlias<T>(table, () => db.Insert(obj, selectIdentity));
}
public static List<T> Select<T>(this IDbConnection db, string table, SqlExpression<T> expression)
{
return (List<T>)ExecWithAlias<T>(table, () => db.Select(expression));
}
public static int Update<T>(this IDbConnection db, string table, T item, Expression<Func<T, bool>> where)
{
return (int)ExecWithAlias<T>(table, () => db.Update(item, where));
}
}
But it's not an approach I'd take personally. If I absolutely needed to save the same schema in multiple tables (and I'm struggling to think of a valid use-case outside of table-based multitenancy or sharding), I'd just be using inheritance again, e.g:
public class Table1 : TableBase {}
public class Table2 : TableBase {}
public class Table3 : TableBase {}
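where the shared base holds the common schema, e.g. (a sketch that reuses the question's Endpoint1 columns; TableBase isn't shown in the original answer):

public abstract class TableBase
{
    [AutoIncrement]
    public int Id { get; set; }

    [CustomField("DATETIME2(7)")]
    public string PurchaseDate { get; set; }

    [CustomField("NVARCHAR(50)")]
    public string Customer { get; set; }

    [CustomField("NVARCHAR(20)")]
    public string PhoneNumber { get; set; }

    public int Amount { get; set; }
}

// Each subclass then gets its own table with the same columns:
db.CreateTableIfNotExists<Table1>();
db.CreateTableIfNotExists<Table2>();
db.CreateTableIfNotExists<Table3>();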