I've found a bottleneck in my application to be the insert operation for one particular entity (into three tables, through navigation properties). The classes are defined as follows:
public class TrackerState
{
    public int Id { get; set; }
    [Index]
    public int TrackerId { get; set; }
    [Index]
    public DateTime DateRecorded { get; set; }
    public DateTime DatePublished { get; set; }
    public DateTime DateReceived { get; set; }
    public LocationStatus LocationStatus { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public double Altitude { get; set; }
    public double Accuracy { get; set; }
    public string Source { get; set; }
    public double Speed { get; set; }
    public double Heading { get; set; }
    public int PrimaryOdometer { get; set; }
    public int SecondaryOdometer { get; set; }
    public int OperationalSeconds { get; set; }
    public virtual IList<AnalogState> AnalogStates { get; set; }
    public virtual IList<DigitalState> DigitalStates { get; set; }
}
public class AnalogState
{
    public int TrackerStateId { get; set; }
    public virtual TrackerState TrackerState { get; set; }
    public int Index { get; set; }
    public int Value { get; set; }
}
public class DigitalState
{
    public int TrackerStateId { get; set; }
    public virtual TrackerState TrackerState { get; set; }
    public int Index { get; set; }
    public bool Value { get; set; }
}
The AnalogState and DigitalState classes use the TrackerStateId and their Index as a composite primary key.
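For reference, that composite key would typically be configured along these lines (a sketch, assuming EF6's fluent API, since the actual mapping code isn't shown here):
// Composite primary keys for the two child tables (EF6 fluent API sketch).
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<AnalogState>()
        .HasKey(a => new { a.TrackerStateId, a.Index });
    modelBuilder.Entity<DigitalState>()
        .HasKey(d => new { d.TrackerStateId, d.Index });
}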
The tables are currently very small:
TrackerStates: 2719
AnalogStates: 0
DigitalStates: 32604
When I insert into the tables manually through SQL Server Management Studio, the operation runs in a fraction of a second. When I insert through Entity Framework, using the following code, it can take up to 15 seconds, and the time taken depends heavily on the number of digital values included in the tracker state - e.g. a tracker state with 0 digital values takes between 0.1 and 0.5 seconds, while a tracker state with 64 digital values takes between 10 and 15 seconds.
public async Task<int> AddAsync(TrackerState trackerState)
{
    using (var context = ContextFactory.CreateContext())
    {
        context.TrackerStates.Add(trackerState);
        await context.SaveChangesAsync();
        return trackerState.Id;
    }
}
Based on this, it seems like Entity Framework is doing something very slow in the background, but I can't figure out why. 0.5 seconds is pretty slow for a transaction, considering how often this is going to be done. 15 seconds is just too damn slow. Things I have tried so far, without success:
Disabling change tracking. I didn't expect this to do much, as I am using a separate context for each transaction anyway.
Inserting the tracker state first, then the digital states in a separate step. Entity Framework is probably doing this internally anyway.
Update 1
I'm using Entity Framework 6.1.3. I couldn't figure out how to view the SQL being executed, but I updated the repository's store method to use raw SQL instead of EF:
context.Database.ExecuteSqlCommand(
    "INSERT INTO DigitalStates ([TrackerStateId], [Index], [Value]) VALUES (@Id, @Index, @Value)",
    new SqlParameter("Id", entity.Id),
    new SqlParameter("Index", digital.Index),
    new SqlParameter("Value", digital.Value));
This part alone is accounting for the majority of the time. It takes 3 seconds to insert 7 entries.
Saving all the digital states in one transaction made a huge difference:
if (trackerState.DigitalStates.Count > 0)
{
    var query = "INSERT INTO DigitalStates ([TrackerStateId], [Index], [Value]) VALUES "
        + string.Join(",", trackerState.DigitalStates.Select(state =>
            String.Format("({0}, {1}, {2})", entity.Id, state.Index, state.Value ? 1 : 0)));
    context.Database.ExecuteSqlCommand(query);
}
For some reason, letting Entity Framework add the collection automatically seemed to make a request to the database for each digital state added, although I was under the impression that it should all have been one transaction, triggered by the context's SaveChanges() method. This fix has changed it from linear time to approximately constant time, relative to the size of the collection. Now my next question is: why?
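As an aside, a parameterized variant of that batched insert keeps the single round trip while avoiding building raw values into the SQL string. This is only a sketch, assuming EF6's ExecuteSqlCommand and SQL Server:
if (trackerState.DigitalStates.Count > 0)
{
    var values = new List<string>();
    var parameters = new List<object>();
    var i = 0;
    foreach (var state in trackerState.DigitalStates)
    {
        // One placeholder triple per row: (@id0, @index0, @value0), ...
        values.Add(String.Format("(@id{0}, @index{0}, @value{0})", i));
        parameters.Add(new SqlParameter("id" + i, trackerState.Id));
        parameters.Add(new SqlParameter("index" + i, state.Index));
        parameters.Add(new SqlParameter("value" + i, state.Value));
        i++;
    }
    var sql = "INSERT INTO DigitalStates ([TrackerStateId], [Index], [Value]) VALUES "
        + string.Join(",", values);
    context.Database.ExecuteSqlCommand(sql, parameters.ToArray());
}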
Related
I'm currently developing an application on ASP.NET Core with Angular, using Code First Migrations and SQL Server. Now I have the following "problem": I have data models with properties which should always be refreshed on any change. The difficulty is that they are often calculated based on data from other models.
As an example:
I have these models (this is a little bit simplified):
public class Dinner {
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Recipe> Recipes { get; set; }
    public Dinner()
    {
        Recipes = new Collection<Recipe>();
    }
}
public class Recipe {
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Ingredient> Ingredients { get; set; }
    public Recipe()
    {
        Ingredients = new Collection<Ingredient>();
    }
}
public class Ingredient {
    public int Id { get; set; }
    public Product Product { get; set; }
    public int ProductId { get; set; }
    public Recipe Recipe { get; set; }
    public int RecipeId { get; set; }
    public decimal Quantity { get; set; }
}
public class Product {
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Price> Prices { get; set; }
    public Product()
    {
        Prices = new Collection<Price>();
    }
}
public class Price {
    public int Id { get; set; }
    public decimal PricePerUnit { get; set; }
    public Product Product { get; set; }
    public int ProductId { get; set; }
}
I want to have:
- a calculated property for Ingredient (the price for that specific quantity, based on the price of the product)
- a calculated property for Recipe (the sum of all costs for all the ingredients)
- a calculated property for Dinner (the sum of all used recipes)
My question is: where, as a matter of best practice, should I add these properties?
Currently I calculate these properties in the app component, by calculating the property of the used interface during the onInit() process. But this requires, for example, loading all the data down to the prices just to calculate the sum property of a Dinner.
My goal is to have these sum properties as up-to-date as possible, and I would like to have the calculation (if possible) on SQL Server so that I need to load less data. Does this approach make sense? And how can I achieve that goal?
Looking at the model, it looks like you have three tables in your DB.
Ideally, you should keep these calculated values stored in DB.
This means that, when you are inserting a record for a dinner, you would add the ingredients first, then calculate the total of all ingredients and insert the recipe. Likewise, calculate the total of all recipes and use that while adding a Dinner. All of this calculation should ideally happen inside the controller (inside the repository, to be precise).
Then, whenever you read the Dinner, you get the calculated values from the DB into your API.
What do you say?
You can add your calculations as computed columns in SQL Server. The computed columns would be marked as such in the EF Core model. When EF retrieves a Dinner, for example, the computed dinner-cost column is calculated in SQL Server and returned to EF Core without needing to retrieve the related tables.
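As a rough illustration of that mapping (a sketch only, assuming EF Core; Cost would be a new property on Ingredient, and dbo.IngredientCost is a hypothetical scalar function you would have to create yourself, since a plain computed column cannot reference another table directly):
// EF Core mapping for a SQL Server computed column (sketch).
// Ingredient would also need: public decimal Cost { get; private set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Ingredient>()
        .Property(i => i.Cost)
        .HasComputedColumnSql("dbo.IngredientCost([ProductId], [Quantity])");
}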
I have the following requirement whilst using EF Core 3. I have Exercises with (Id = 1 Pushup), (Id = 2 Squat), (Id = 3 Weight Lift) and users. Once one exercise is completed, one or more new exercises are created. A multilevel hierarchy can be created, up to n levels.
I have an issue, though, in that this can cause cyclic dependencies between the exercises.
e.g. Pushup (1) is completed, so Squat (2) is created. Squat (2) is completed, then Weight Lift (3) is created. There is also a dependency on Weight Lift (3): once it is completed it creates Pushup (1), which leads to a cyclic dependency, and I need to avoid that.
I gave a very simple example. In my project, dependencies can go up to level 50 and I don't know how to check for cyclic references and avoid them.
Below is the code:
public class Exercise
{
    public Exercise()
    {
        ExerciseDependencyExercises = new HashSet<ExercisesDependency>();
        ExerciseDependencyTargetExercises = new HashSet<ExercisesDependency>();
        UserExercises = new HashSet<UserExercise>();
    }
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public long Id { get; set; }
    [Required]
    public string Name { get; set; }
    public virtual ICollection<ExercisesDependency> ExerciseDependencyExercises { get; set; }
    public virtual ICollection<ExercisesDependency> ExerciseDependencyTargetExercises { get; set; }
    public virtual ICollection<UserExercise> UserExercises { get; set; }
}
public class UserExercise
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public long Id { get; set; }
    public long UserId { get; set; }
    public long ExerciseId { get; set; }
}
public class ExercisesDependency
{
    public long ExerciseId { get; set; }
    public virtual Exercise Exercise { get; set; }
    public long TargetExerciseId { get; set; }
    public virtual Exercise TargetExercise { get; set; }
}
The easiest solution I can think of is to not have your requirements dictate primary key values (which should only be used to uniquely identify a record).
I don't know/understand why you have this requirement. But I would suggest just continuing the numbering, and while creating the new records calculating whether each should be an exercise of type 1, 2 or 3, e.g. 1=>1, 2=>2, 3=>3, 4=>1, 5=>2, 6=>3, 7=>1.
Then a simple var type = ((id - 1) % 3) + 1; returns that list of numbers.
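If you do need to keep arbitrary dependencies and just guard against loops, a depth-first reachability check before saving a new edge is one option. This is a sketch of my own, not part of the suggestion above; the names follow the question's model:
using System.Collections.Generic;
using System.Linq;

public static class DependencyGuard
{
    // Returns true if adding exerciseId -> targetExerciseId would close a loop,
    // i.e. if the target can already reach the source through existing edges.
    public static bool WouldCreateCycle(
        IEnumerable<ExercisesDependency> existingDependencies,
        long exerciseId, long targetExerciseId)
    {
        var edges = existingDependencies
            .GroupBy(d => d.ExerciseId)
            .ToDictionary(g => g.Key, g => g.Select(d => d.TargetExerciseId).ToList());
        var visited = new HashSet<long>();
        var stack = new Stack<long>();
        stack.Push(targetExerciseId);
        while (stack.Count > 0)
        {
            var current = stack.Pop();
            if (current == exerciseId)
                return true; // the new edge would complete a cycle
            if (!visited.Add(current) || !edges.ContainsKey(current))
                continue;
            foreach (var next in edges[current])
                stack.Push(next);
        }
        return false;
    }
}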
I'm using Entity Framework and I have a string field I'm using to hold IDs separated by commas. There will never be more than ~30 IDs in any string, but I might have millions of rows to search.
ex. "435,6789,1231,232"
I know I can search through the string for specific IDs with .Contains(), like this:
var events = dbContext.YogaSpaceEvents.Where(i => i.RegisteredStudentIds.Contains("12345") && i.EventDateTime >= DateTime.Now);
But will this run really slowly with millions of records?
If so, is there a better approach or column type I can use, like xml that will run much faster when I have millions of rows?
It's hard for me to test because I would need to populate millions of rows!
It will probably run slower than needed. Just create a table named Registrations with EventId and StudentId as the primary key and you'll be rocket fast :)
On the code side, it will look something like:
var student = dbContext.Students.Find(12345);
var events = student.Events.Where(i => i.EventDateTime >= DateTime.Now);
Here is what I have so far. I have an event, and I want to keep records of the students registered, so I can ask "what events is student '12345' registered for?" and then return those events.
public class YogaSpaceEvent
{
    public YogaSpaceEvent() {}
    [Key]
    public int YogaSpaceEventId { get; set; }
    [Index]
    public int YogaSpaceRefId { get; set; }
    [ForeignKey("YogaSpaceRefId")]
    public virtual YogaSpace YogaSpace { get; set; }
    [Required]
    [Index]
    public DateTime EventDateTime { get; set; }
    [Required]
    public YogaTime Time { get; set; }
    [Required]
    public YogaSpaceDuration Duration { get; set; }
    public virtual ICollection<RegisteredStudents> RegisteredStudents { get; set; }
}
Here is RegisteredStudents
public class RegisteredStudents
{
    public RegisteredStudents() {}
    [Key]
    public int RegisteredStudentsId { get; set; }
    [Index]
    public int YogaSpaceEventRefId { get; set; }
    [ForeignKey("YogaSpaceEventRefId")]
    public virtual YogaSpaceEvent YogaSpaceEvent { get; set; }
    public int StudentId { get; set; }
}
and now I can say
var events = dbContext.RegisteredStudents.Where(i => i.StudentId == 12345).Select(j => j.YogaSpaceEvent);
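One further tweak worth considering (an assumption on my part, since the [Index] attributes suggest EF6): a composite unique index across the event and student columns mirrors the earlier "EventId and StudentId as the primary key" advice and keeps the StudentId lookup fast:
// In RegisteredStudents, replace the single-column [Index] with a composite
// unique index spanning both columns (EF6 IndexAttribute sketch):
[Index("IX_Event_Student", 1, IsUnique = true)]
public int YogaSpaceEventRefId { get; set; }
[Index("IX_Event_Student", 2, IsUnique = true)]
public int StudentId { get; set; }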
I'm having an issue with a site that I'm writing in C# ASP.NET, using Entity Framework for the database. One of the data models I'm using to store and retrieve data, called DowntimeEvent, contains two lists: AffectedSystems and AffectedDepartments. While I'm running the application in Visual Studio, those lists store and retrieve just fine. But if I stop and restart the application, the DowntimeEvents are still stored in my database; however, the lists for AffectedDepartments and AffectedSystems are null when I try to retrieve them.
Here's the Model I'm using to store the data
public class DowntimeEventModel
{
    [Key]
    public int ID { get; set; }
    [Required]
    public DateTime StartDateTime { get; set; }
    [Required]
    public DateTime EndDateTime { get; set; }
    public int LocationID { get; set; }
    [Required]
    public string Description { get; set; }
    //public int DepartmentID { get; set; }
    //public int SystemID { get; set; }
    public virtual ICollection<int> AffectedSystems { get; set; }
    public virtual ICollection<int> AffectedDepartments { get; set; }
    //public virtual ICollection<SystemModel> AffectedSystems { get; set; }
    //public virtual ICollection<DepartmentModel> AffectedDepartments { get; set; }
}
Here's an example controller showing how I'm saving the data; by the way, this seems to be working just fine for storing the lists.
[HttpPost]
public ActionResult DowntimeEvent(DowntimeEventModel downtimeEvent)
{
    PowerteqContext.DowntimeEvents.Add(downtimeEvent);
    PowerteqContext.SaveChanges();
    return View(SetupDowntimeEventViewModel());
}
It was this method that tipped me off to there being an issue with data retrieval, after I tried to write this report and couldn't figure out why AffectedSystems was sometimes null and sometimes not. In the inner foreach loop I tried accessing the AffectedSystems list directly, just to see whether it might not be null that way: it is null after a restart, but it's not if I add the items and don't restart.
public ActionResult ReportUptimeBySystems()
{
    var EndTime = DateTime.Now;
    var StartTime = DateTime.Now.AddDays(-28);
    var uptimeHours = new TimeSpan(1);
    if (EndTime != StartTime)
        uptimeHours = EndTime - StartTime;
    List<ReportUptimeBySystem> SysUps = new List<ReportUptimeBySystem>();
    var DownTimes = PowerteqContext.DowntimeEvents.AsEnumerable();
    var Systems = PowerteqContext.Systems.AsEnumerable();
    foreach (var x in Systems)
    {
        ReportUptimeBySystem sys = new ReportUptimeBySystem();
        sys.SystemTimeUP = uptimeHours;
        sys.SystemName = x.SystemName;
        foreach (var y in DownTimes)
        {
            if (PowerteqContext.DowntimeEvents.Find(y.ID).AffectedSystems.Contains(x.ID))
            {
                // subtract the downtime duration (end - start) from the uptime
                sys.SystemTimeUP -= y.EndDateTime - y.StartDateTime;
            }
        }
        SysUps.Add(sys);
    }
    return View(SysUps);
}
Another developer suggested that the issue may be in my Entity Framework configuration, but I don't know where to even look to try to fix that.
For reference, the whole application can be found here. The database I'm using is Microsoft SQL Server.
Entity Framework will only automatically load relationships if it finds properties representing collections of another entity. It also must be able to identify the foreign keys. By convention, SystemModel and DepartmentModel should each have a DowntimeEventID property; otherwise you'll have to tell EF how to map the relationship.
You should also ensure that lazy loading isn't disabled.
https://msdn.microsoft.com/en-us/data/jj574232.aspx
Disable lazy loading by default in Entity Framework 4
Here is a good example from a related question.
Many-to-many mapping table
public class DowntimeEventModel
{
    [Key]
    public int ID { get; set; }
    [Required]
    public DateTime StartDateTime { get; set; }
    [Required]
    public DateTime EndDateTime { get; set; }
    public int LocationID { get; set; }
    [Required]
    public string Description { get; set; }
    public virtual ICollection<SystemModel> AffectedSystems { get; set; }
    public virtual ICollection<DepartmentModel> AffectedDepartments { get; set; }
}
Assuming AffectedSystems and AffectedDepartments are also EF entities that DowntimeEventModel is linked to with foreign keys, you could try to explicitly include them when you fetch your DowntimeEventModel results, like so:
PowerteqContext.DowntimeEvents.Include("AffectedSystems").Include("AffectedDepartments").ToList();
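The same eager load with the strongly typed overload (assuming EF6; the lambda Include extension lives in System.Data.Entity) avoids the magic strings:
using System.Data.Entity;

var events = PowerteqContext.DowntimeEvents
    .Include(d => d.AffectedSystems)
    .Include(d => d.AffectedDepartments)
    .ToList();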
I am a fairly novice enthusiast programmer. This is my first question, but I have been using Stack Overflow for valuable information for several months now.
First, some context:
My current, somewhat jack-of-all-trades job in an extremely small (<10 employees) specialist physician's office has put me in a unique position where I have free rein (and moderate resource backing) to develop and implement any kind of application/tool without any kind of demand or pressure, as the current system functions well enough.
We currently run a fairly dated (circa ~2008) medical office management system that takes care of patient business accounts, billing, and insurance claim submissions. The office itself is fairly haphazardly networked. We have a couple of diagnostic machines that use standard DICOM, but the majority of medical records remain on paper.
I may have bit off more than I can currently chew with my aspirations, but I have plans to slowly develop a more all-encompassing, domain-driven application that couples electronic medical records (evaluation/management and DICOM diagnostics) with the medical office management end. Once I have the framework of such an application laid, the sandbox aspect of my situation entices me in that I can explore and develop any kind of automation or tool I can dream.
Some honesty:
My experience is very limited in such things, but I am a very determined individual and I love application-driven learning.
My question:
I am working on first integrating with the current database of patient accounts.
Everything is currently stored in FoxPro 2.5 free tables. The relationships between the tables are not explicitly defined, but quite a few are implied. Ideally, I would like to have my new application's POCOs mapped with EF to a SQL database. That part is straightforward enough, but I would also like to have the same POCOs mapped to the current FoxPro 2.5 .dbfs (which won't necessarily have the same schema) in a separate DbContext.
Is such a thing possible as I am envisioning it? I have been testing the waters of deriving all of the fluent API mappings (.ToTable(), .HasColumnName(), etc.), but it is a fairly daunting task and I would like some more experienced insight before I dive in head-first. I have been unable to find any relevant examples of anyone pulling off what I am attempting, which is somewhat discouraging as well.
Perhaps my approach is wrong. I am willing to adjust accordingly, but I like the idea of working with POCOs in my application and it is pretty important for my new application to be able to talk to the old database without implementing its unintuitive schema.
The key purpose of all the headache is to keep the current application fully functional while concurrently allowing me to run and develop my new application.
So in a nutshell:
Is it possible to use EF to integrate a new OOR/Domain-driven application with an old database of mildly different schema? If so, any tips or examples to get me started? If not, do I have any other functionally-similar alternatives? Thanks
Edit 1:
I'm going to refer to the currently used application as App X from now on.
App X's predecessor ran on Unix and sat on FoxPro/xBASE tables as well, so App X was presumably built on top of that to make upgrading trivial for customers. The App X directory also contains Visual FoxPro 6 .dlls and an application file with a FoxPro .ico and the name "fbaseeng", which brings up a command prompt window titled "DTS Command Prompt". I don't know for sure how App X ticks, but 'DTS' stuck out to me, and I spent some time seeing if there was a way I could use whatever data transformation they had already implemented, but I eventually gave up.
The current database is a collection of 231 .dbf tables. Luckily, a good portion of them are seemingly either unused altogether, or only used in some round-about, temporary way where they don't store records outside the run-time of App X.
Several of the tables seem to be linking tables, and another portion of them contain reference data, like type qualifier properties:
public partial class Accttype
{
    public decimal Acct_Type { get; set; }
    public string Acct_Desc { get; set; }
    public string Sb { get; set; }
    public decimal Fee { get; set; }
    public bool Acptasgn { get; set; }
    public decimal Insclass1 { get; set; }
    public decimal Insclass2 { get; set; }
    public decimal Insclass3 { get; set; }
    public decimal Insclass4 { get; set; }
    public decimal Insclass5 { get; set; }
    public string Acct_Grp { get; set; }
}
and static reference values like zip codes:
public partial class Zip
{
    public string Zipcode { get; set; }
    public string City { get; set; }
    public string St { get; set; }
    public string Areacode { get; set; }
    public decimal Ext_From { get; set; }
    public decimal Ext_To { get; set; }
}
So far, I am most concerned with the following tables:
- Patdemo.dbf: Contains records for all of the patients to visit the practice. It has around 100 columns and contains a plethora of information including name, address, insurance type(s), running account balance totals, etc. Straightforward primary key that is the patient ID, but the format is '0.0'.
- Billing.dbf: Contains alpha-numeric ID-indexed bills associated with patients on a specific date of service. Has about 80 columns, including mostly foreign keys/type qualifiers and status indicators (i.e. INS1_SENT). Has an FK of patient ID.
- Charges.dbf: Contains line items that fall under each bill. It is either table-per-inheritance or a join of Charges and Postings/Payments, as it contains both record types, indicated by C or P in a Type column. There doesn't seem to be a simple primary key, but Charges have a ChargeID and Postings/Payments have a PostID, both with format BillID+"000N". To throw a curveball, however, voiding adjustments have no ChargeID/PostID. Has an FK of BillID.
- Insur.dbf: Contains insurance providers and information from address to electronic billing ID. Primary key is an alpha-numeric ICode (example: BC01).
- Patins.dbf: Seems to be a linking table, but also contains information like the patient's insurance ID number for the policy. Has FKs of patient ID and ICode.
There are various other referential tables I will want to keep concurrency with (like diagnoses, referring doctors, and CPT codes), but they are lower priority right now.
I haven't completely designed my new application's schema yet, but I do know it will be far more logical, in that things like Addresses will be a concrete type regardless of whether they are associated with a patient or an insurance company.
For the sake of this example, let's look at the pre-existing Patins.dbf POCO (it is one of the smallest tables):
public partial class Patins
{
    public decimal Custid { get; set; }
    public decimal Inskey { get; set; }
    public string Insurcode { get; set; }
    public string Insurnum { get; set; }
    public string Groupnum { get; set; }
    public string Guarlname { get; set; }
    public string Guarfname { get; set; }
    public string Guarmi { get; set; }
    public string Guargen { get; set; }
    public string Guaraddr { get; set; }
    public string Guaraddr2 { get; set; }
    public string Guarcity { get; set; }
    public string Guarst { get; set; }
    public string Guarzip { get; set; }
    public string Guarcountr { get; set; }
    public string Guarphone { get; set; }
    public string Guaremail { get; set; }
    public System.DateTime Guardob { get; set; }
    public string Guarsex { get; set; }
    public string Guaremp { get; set; }
    public decimal Relation { get; set; }
    public System.DateTime Startdate { get; set; }
    public System.DateTime Enddate { get; set; }
    public bool Active { get; set; }
    public string Bcpc { get; set; }
    public string Auth1 { get; set; }
    public string Auth2 { get; set; }
    public string Auth3 { get; set; }
    public decimal Billcnt { get; set; }
    public string Desc1 { get; set; }
    public string Desc2 { get; set; }
    public string Desc3 { get; set; }
    public decimal Visits1 { get; set; }
    public decimal Visits2 { get; set; }
    public decimal Visits3 { get; set; }
    public System.DateTime From1 { get; set; }
    public System.DateTime From2 { get; set; }
    public System.DateTime From3 { get; set; }
    public System.DateTime To1 { get; set; }
    public System.DateTime To2 { get; set; }
    public System.DateTime To3 { get; set; }
    public string Insnote { get; set; }
    public string Char1 { get; set; }
    public string Char2 { get; set; }
    public string Char3 { get; set; }
    public string Char4 { get; set; }
    public string Char5 { get; set; }
    public string Char6 { get; set; }
    public string Char7 { get; set; }
    public string Char8 { get; set; }
    public string Char9 { get; set; }
    public string Char10 { get; set; }
    public System.DateTime Date1 { get; set; }
    public System.DateTime Date2 { get; set; }
    public decimal Num1 { get; set; }
    public decimal Num2 { get; set; }
    public string Createby { get; set; }
    public System.DateTime Createdt { get; set; }
    public string Modifyby { get; set; }
    public System.DateTime Modifydt { get; set; }
    public string Cobmemo { get; set; }
    public System.DateTime Dinju { get; set; }
    public System.DateTime Dsimbd { get; set; }
    public System.DateTime Dsimed { get; set; }
    public string Createtm { get; set; }
    public string Modifytm { get; set; }
    public bool Archive { get; set; }
    public bool Delflag { get; set; }
    public decimal Coinsded { get; set; }
    public decimal Outpoc { get; set; }
    public System.DateTime Lastupd { get; set; }
    public decimal Coins { get; set; }
    public decimal Msp { get; set; }
}
In the real world, a patient is associated with an insurance company by way of an insurance policy. There is a FK_PatientID, a FK_InsuranceCarrierID, and a unique ID, PK_PolicyNumber (possibly PolicyNumber+InsuranceCarrierID to be safe?). Policies have benefits information that dictates payment, and spouses/families can share policies (usually appending -0n to the policy number).
I will probably let Patient objects contain a collection of InsurancePolicy objects, along these lines:
class Patient : Person
{
    public int PatientID { get; set; }
    public virtual IEnumerable<InsurancePolicy> InsurancePolicies { get; set; }
}
class InsurancePolicy
{
    public int PatientID { get; set; }
    public string PolicyNumber { get; set; }
    public string GroupNumber { get; set; }
    public bool IsActive { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public int InsuranceCarrierID { get; set; }
    public virtual Person Guarantor { get; set; } // all guarantor information accessible via the Person aggregate root, i.e. Guarantor.FirstName
    public string GuarantorRelation { get; set; }
    public string[] Benefits { get; set; } // delineated set of benefit descriptions... automatically parse from EDI benefits message?... separate value object class?... could contain copay/deduc/OoP
    public decimal Deductible { get; set; }
    public decimal Copay { get; set; }
    public decimal OutofPocket { get; set; }
    public virtual IEnumerable<Bill> AssociatedBills { get; set; } // all bills associated with the policy... could also be InsuranceClaim objects... AutoMapper Bill->Claim?
}
There are a few other things that need to be represented either in InsurancePolicy or elsewhere like pay rate percentage, but I am leaving them off for now.
My question finally arrives when looking for ways to map the data to/from the old FoxPro tables. Specifically, looking at Guarantor: as a Person object, it will be stored in an inheritance table in the SQL schema, so what is the best way to go about mapping? Simply .ToTable("Patins") in the InsurancePolicyMap with (t => t.Guarantor.FirstName).HasColumnName("Guarfname") seems logical, but does EF take care of the separate relationship patterns automatically? Possibly better worded: does it have trouble navigating the relationship/inheritance that lies between the physical Person.FirstName, the SQL map, InsurancePolicy.Guarantor.FirstName, the VFP map, and the physical Patins.Guarfname?
What about Billcnt? It is luckily not implemented in App X, for whatever reason, but what would the mapping be for what I assume would be AssociatedBills.Count()? Would you just validate the value pulled from the FoxPro table?
Since you have "free rein", I would take the chance to upgrade the whole system. This is a very, very poor data model and there's really no use in dragging this legacy along any longer. These nondescript, repetitively numbered fields will be a continuous source of confusion and bugs, and it will be virtually impossible to build a decent domain from them. Entity Framework has plenty of options to shape the class model differently than the data model, but this would be too much. Really, it's useless to rebuild the application while keeping the data model.
You should definitely normalize the data model. Create tables like Insclass with a foreign key to Accttype, and so on and so forth. Also create a Benefit table with an FK to InsurancePolicy, as you can't map a string array to a database column.
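For example, the Benefit table might look something like this (a sketch; the names are my assumptions, not part of the original schema):
public class Benefit
{
    public int Id { get; set; }
    public int InsurancePolicyId { get; set; } // FK back to InsurancePolicy
    public virtual InsurancePolicy InsurancePolicy { get; set; }
    public string Description { get; set; }
}
InsurancePolicy would then expose public virtual ICollection<Benefit> Benefits { get; set; } in place of the string array.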
But first (and foremost) get the requirements clear. It sounds good to have total liberty to build "any kind of application", but users always have something in mind. I would take ample time to pick their brains before you even type one line of code. Agree upon the things to do first, then start building the application use case by use case, and let them test each use case. This will give them time to get used to the new system and slowly detach from the old one, and to fine-tune requirements as you go. (This is agile development in a nutshell.)