Not sure if this question is suitable for Stack Overflow as it's much more 'general'. Basically, I have a database-driven business application written in ASP.NET and C#, which will be used by around 20 members of a company. A crucial aspect of this is auditing - I need to log any changes to any of the tables and have them viewable by senior members of staff.
My current solution uses SQL triggers, but I need something much more robust and user-friendly. The database is huge, with many related tables, and the audit entries are currently very uninformative to the users - telling staff that user X modified an order to have a customer ID of 837 is all but useless. I need to be able to dictate which field is displayed in the audit log.
My idea is to create a class in my code that handles all of this: it would map out which fields to display to the user, and also record which table and which record were modified.
Can anyone offer any general advice on how to do what I want, and whether it's actually possible? I'm a heavy user of LINQ-to-SQL in my code, so I'm hoping that'll help...
You could also try using DoddleAudit for your needs. It provides automatic auditing of all inserts/updates/deletes for any table in your database with a single line of code, including:
What table was modified?
What fields changed?
Who made the change?
When did it occur?
You can find it here: http://doddleaudit.codeplex.com/
I've had similar audit requirements for a healthcare application, which used LINQ-to-SQL for data access.
One way to do it centrally in LINQ-to-SQL is to override SubmitChanges in the data context class. Before submitting the changes, call GetChangeSet() to get data about the pending changes. Then add change-tracking information as appropriate to a relevant log table before calling base.SubmitChanges(). In my application I used an XML column to store change data for different tables in a structured manner, without having to create special history tables for each table in the system.
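A minimal sketch of that override, assuming a LINQ-to-SQL context subclass and a hypothetical AuditLog entity/table (the names here are illustrative, not from my actual application):

using System;
using System.Data.Linq;

public partial class AppDataContext   // your generated DataContext; name is hypothetical
{
    public override void SubmitChanges(ConflictMode failureMode)
    {
        ChangeSet changeSet = GetChangeSet();

        // Log every pending update; Inserts and Deletes can be handled similarly.
        foreach (object entity in changeSet.Updates)
        {
            ITable table = GetTable(entity.GetType());
            foreach (ModifiedMemberInfo member in table.GetModifiedMembers(entity))
            {
                // AuditLogs is assumed to be a Table<AuditLog> on this context.
                AuditLogs.InsertOnSubmit(new AuditLog
                {
                    TableName = entity.GetType().Name,
                    FieldName = member.Member.Name,
                    OldValue = member.OriginalValue == null ? null : member.OriginalValue.ToString(),
                    NewValue = member.CurrentValue == null ? null : member.CurrentValue.ToString(),
                    ChangedAt = DateTime.UtcNow
                });
            }
        }

        base.SubmitChanges(failureMode);
    }
}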
You could also try using SQL Server 2008's Change Data Capture feature. It captures inserts, updates and deletes on the desired tables, and stores the changes in a separate set of relational tables.
http://www.mssqltips.com/sqlservertip/1474/using-change-data-capture-cdc-in-sql-server-2008/
How to retrieve all active inventories from all tenants and companies into one custom screen selector in Acumatica?
In an ISV solution I am trying to do this. How should I approach this scenario?
Disclaimer: What you are asking appears to venture into a use case that is contrary to a foundational design element of Acumatica ERP - isolation of tenants within a single database. While I have a similar use case in mind myself, any of these suggestions should be weighed seriously against your business requirements and operating constraints.
After discussing this with some Acumatica Developer MVPs and Acumatica staff, I can say there is no SUPPORTED method for doing this directly with tables already containing a CompanyID field. However, here are a few possible ways to achieve your desired results.
1. Create alternate SQL tables / Acumatica DACs without CompanyID and CompanyMask so that the data can be shared. This will require keeping the supplemental table data in sync.
2. UNSUPPORTED (this will fail ISV certification) - Create a SQL view that excludes CompanyID and CompanyMask, plus an Acumatica DAC for that view (see the sketch after this answer). This allows visibility into the data, regardless of tenant, without having to create or maintain duplicate data. Again, I was strongly advised against this approach in the same breath as being told it was possible.
3. Use an alternative tool to access the data in the SQL database, if you have direct access to it. This tool would have to access the database completely independently of PXData, so that queries do not automatically receive the "CompanyID = X" predicate in the WHERE clause.
It was mentioned that you might do something with impersonating other users, but I don't have any experience with that. I'm not sure it would allow cross-tenant data access, and even if it does let you view another tenant, I don't believe it would allow viewing more than one tenant at a time.
My understanding is that the first option is the preferred method to stay within Acumatica in a supported way. The third option could be valid if you already use other analytics tools like Cognos.
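For illustration only, a DAC over such a view (option 2) might look roughly like this - the view name and columns are assumptions, and again, this approach is unsupported:

using System;
using PX.Data;

// Bound by name to a SQL view (CrossTenantInventory) created without
// CompanyID/CompanyMask; hypothetical columns for illustration.
[Serializable]
[PXCacheName("Cross-Tenant Inventory")]
public class CrossTenantInventory : IBqlTable
{
    [PXDBInt(IsKey = true)]
    [PXUIField(DisplayName = "Inventory ID")]
    public virtual int? InventoryID { get; set; }
    public abstract class inventoryID : PX.Data.BQL.BqlInt.Field<inventoryID> { }

    [PXDBString(256)]
    [PXUIField(DisplayName = "Description")]
    public virtual string Descr { get; set; }
    public abstract class descr : PX.Data.BQL.BqlString.Field<descr> { }
}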
I am working on a project where I need to create a database to track the status of units throughout the production process. My current roadblock is getting users to interact with a DataGridView that is supplied by a Microsoft Access query instead of a Microsoft Access table.
What I want to do is create a query in Microsoft Access and bind it to the DataGridView, so end users interact with the query instead of the actual tables, while all parent tables are still populated.
I am not sure if what I am attempting to do is possible or advised. This is the first time I have built a database in the professional world and want to make sure I am doing things properly. I have also never built a C# application for business use and have very limited experience with the language itself.
I have tried creating the query in Access and linking it to the application the same way you would add a table from a data source. That let me view the query's data, but it displayed as read-only and didn't allow any data to be altered (the query builder in the TableAdapter Query Configuration Wizard indicated it was read-only). I have tried adding all related table adapters to the TableAdapterManager, and it still didn't help.
I apologize if this question sounds disjointed as I am trying to overcome one obstacle at a time and do not want to overload one question with multiple issues. I can supply my ERD if it will make things easier and I have it normalized to at least 2NF.
Introduction:
I'm refactoring (pretty much rewriting) a legacy application in my current internship. The part this question is concerned with is the database it uses and the way data is retrieved from it.
The database structure is:
There's a table that holds the main records. Let's say each record is a measurement: it has some info about the measured material, plus the measurement values themselves.
There's a view they use that has the same columns, plus some extra columns that contain data calculated from the given measurements. It also filters out some of the data from the table.
So let's say we have the main table with columns:
Measurement ID
Measurement A
Measurement B
The view has something like this:
Measurement ID
Measurement A
Measurement B
Some extra data (for example Measurement A * Measurement B)
The guy leading the development only knows some SQL, so for experimenting he likes adding new view columns calculated from columns in the main table. And this is definitely a need at the moment.
Requirements are:
Different types of databases should be supported (like SQL Server, Oracle, and probably some others).
The frontend should be able to show the view, which means that even though the main columns will always stay the same, new columns containing newly calculated values may appear.
My question is:
What kind of system should I use to accommodate the needs of this application? I wanted to use Entity Framework, but the fact that the view may gain new columns in the future is, I think, a problem. As far as I understand, I have to map my classes to the database before compiling.
The other thing I'm considering is using Entity Framework to get data from the main table only, doing the calculations and filtering currently done in the view directly in the frontend, and skipping the view altogether. That sounds fine, though I don't know if they will allow me to do that.
What would you do in my case? Please take into account that I have virtually no experience with databases and ORMs.
You are correct in that using Entity Framework will be a problem if the underlying DB schema is always changing. It will require you to update the EF model on your end every time to grab those new columns.
Ideally, all of your database access is hidden behind the interface to your DAL, so that your application doesn't need to know about which ORM is being used -- if any -- or which database it's connecting to.
I hate to say it, but given your requirements, an ORM might not make sense. You might want to go with something more generic, without any strong typing. You could simply always return a DataTable to your application layer, which could loop through the columns and values to display whatever is returned. If there are fields you know will never change, you could create a manual mapping for those fields only into your application object(s).
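For example, a schema-agnostic read could look something like this (SQL Server shown; the connection string handling and view name are placeholders):

using System.Data;
using System.Data.SqlClient;

public static class GenericDal
{
    public static DataTable GetView(string connectionString, string viewName)
    {
        // viewName must come from a known whitelist, never from user input
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT * FROM " + viewName, connection))
        {
            var table = new DataTable();
            adapter.Fill(table);   // columns are discovered at runtime, so new view columns just show up
            return table;
        }
    }
}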
You might have a look at NoSQL systems, which are a lot more flexible about schema, or at a document database like RavenDB. These systems allow the schema to change dynamically. You'll need to check the pros and cons to see whether one can meet your requirements.
(This answer is a bit off-topic, as it's about replacing the SQL server rather than creating a DAL, but the other answers cover that subject well and I would like to propose another way that may help.)
If your schema is unstable, then using Entity Framework as a beginner is going to be a headache. The assumption is that you can just refresh the design canvas periodically to let the tool handle database table changes. You can try that for a time to see when it becomes too much of a pain, but without any prior experience using ORMs or Entity Framework it may not be worth the effort.
I would probably use something like Rob Conery's Massive ORM (https://github.com/robconery/massive). It gives you more flexibility with the underlying database schema and is a very small library - I remember it being ~300 lines of code and very easy to use. It uses C# dynamics, so you'll need C# 4.0 or later and to be comfortable with that one concept, but IMO it's worth it for the low overhead. A full-fledged ORM like Entity Framework or NHibernate is going to cost a lot of learning cycles.
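A rough sketch of what that looks like (the connection string name, view name, and key column are placeholders):

using System;
using Massive;   // the single-file library from the repo above

class Program
{
    static void Main()
    {
        // "MainDb" must match a <connectionStrings> entry in app.config
        var measurements = new DynamicModel("MainDb",
                                            tableName: "MeasurementView",
                                            primaryKeyField: "MeasurementID");
        foreach (dynamic row in measurements.All())
        {
            // rows are dynamic, so newly added view columns need no remapping
            Console.WriteLine(row.MeasurementID);
        }
    }
}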
You could, of course, just stick to ADO.NET DataTables. They're a bit ugly and verbose, but they'll do the job.
You can use Entity Framework Database First if the DB is changing. Of course, you will have to regenerate your classes whenever the DB schema changes and you want access to the new columns.
If you need to accommodate different database servers, then you should take a look at implementing the repository pattern and abstracting all your data access that way.
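Something along these lines, where the entity and its members are just an example based on the question:

using System.Collections.Generic;

public class Measurement
{
    public int MeasurementID { get; set; }
    public double MeasurementA { get; set; }
    public double MeasurementB { get; set; }
}

// The application codes against this interface; an EF implementation,
// an Oracle implementation, etc. can be swapped in behind it.
public interface IMeasurementRepository
{
    Measurement GetById(int id);
    IEnumerable<Measurement> GetAll();
    void Add(Measurement measurement);
    void Update(Measurement measurement);
    void Delete(int id);
}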
Your comment
it involves write operations to the main table but the main table never changes
confirms what I was hoping for. It means you can use Entity Framework as the core of your application and a different route to display data.
Suppose that for display (of the view) you use a classic DataTable (because all common grids support them, unlike dynamic objects). I don't know how create/update/delete will be done, but saving changes will at some point involve mapping a DataRow to a MainEntity object. You can write one method for that, like:
MainEntity DataRowToEntity(DataRow row)
{
    var entity = new MainEntity();
    entity.PropertyA = (string)row["PropertyA"];   // cast each column to its property type
    // ... map the remaining fixed columns the same way
    return entity;
}
The MainEntity can be attached to a context, its status changed to Modified, and saved.
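With the DbContext API that last step could look like this (the context and set names are assumptions):

using System.Data.Entity;   // EF 4.1+ DbContext API

MainEntity entity = DataRowToEntity(editedRow);   // editedRow comes from the grid's DataTable
using (var db = new MainContext())
{
    db.MainEntities.Attach(entity);
    db.Entry(entity).State = EntityState.Modified;   // mark the entity dirty
    db.SaveChanges();
}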
Using only Microsoft-based technologies (MS SQL Server, C#, EAB, etc.), if you needed to keep track of changes made to records in a database, which strategy would you use? Triggers, AOP on the DAL, something else? And how would you display the collected data? Is there a pattern for this? Is there a tool or framework that helps implement this kind of solution?
The problem with Change Data Capture is that it isn't flexible enough for real auditing. You can't add the columns you need. Also, it purges the records every three days by default (you can change this, but I don't think you can store them forever), so you have to have a job dumping the records into a real audit table if you need to keep the data for a long time, which is typical of the need to audit records (we never purge our audit records).
I prefer the trigger approach. You have to be careful when you write the triggers to ensure that they will capture the data if multiple records are changed. We have two tables for each table audited, one to store the datetime and id of the user or process that took the action and one to store the old and new data. Since we do a lot of multiple record processes this is critical for us. If someone reports one bad record, we want to be able to see if it was a process that made the change and if so, what other records might have been affected as well.
At the time you create the audit process, create the scripts to restore a set of audited data to the old values. It's a lot easier to do this when under the gun to fix things, if you already have this set up.
SQL Server 2008 R2 has this built in - look up Change Data Capture in Books Online.
This is probably not a popular opinion, but I'm going to throw it out there anyhow.
I prefer stored procedures for all database writes. If auditing is required, it's right there in the stored procedure. There's no magic happening outside the code, everything that happens is documented right at the point where writes occur.
If, in the future, a table needs to change, one has to go to the stored procedure to make the change. The need to update the audit is documented right there. And because we used a stored procedure, it's simpler to "version" both the table and its audit table.
I am currently evaluating the Microsoft sync framework as a possible solution to sync data between two SQL databases. The examples I have seen so far rely on "tracking tables" containing the information used to track changes to be synced, with triggers on the main tables to keep them up to date.
My database already contains lots of this information (for an existing feature of the software), so it would be good to make use of that instead of having to migrate it all to the new tracking tables. I also don't like the idea of doubling up each table into a data table and a tracking table, and adding three triggers to each table - that sounds likely to be a performance issue.
Is there any way of customising the tracking mechanism used by the Sync Framework (i.e. the way in which changes are tracked)?
Yes, it is entirely possible to write your own logic to track changes and use it. For example, one of the DB sync providers I have used requires you to define a SelectIncrementalInserts command. Which table(s) that data comes from and how you filter out the latest records is immaterial - you just need to define a query or a stored procedure that returns that data. The same applies to all the other incremental commands (which deal with the change tracking).
Along with that, you need an anchor value to define when the last sync happened. I think there is no point in avoiding this one, since it is used exclusively for synchronization and your existing tracking tables will not contain a replacement for it.
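For reference, wiring custom queries into the server-side provider looks roughly like this (the table, command text, and anchor query are placeholders - point them at your existing tracking data):

using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

// Each incremental command can be any query or sproc over your existing tracking tables.
var adapter = new SyncAdapter("Orders");
adapter.SelectIncrementalInsertsCommand = new SqlCommand(
    "SELECT * FROM Orders WHERE CreatedAnchor > @sync_last_received_anchor " +
    "AND CreatedAnchor <= @sync_new_received_anchor");

var provider = new DbServerSyncProvider();
provider.SyncAdapters.Add(adapter);

// The anchor command supplies the "last sync" watermark mentioned above.
var anchorCommand = new SqlCommand(
    "SELECT @" + SyncSession.SyncNewReceivedAnchor + " = MIN_ACTIVE_ROWVERSION() - 1");
anchorCommand.Parameters.Add("@" + SyncSession.SyncNewReceivedAnchor,
    SqlDbType.Timestamp).Direction = ParameterDirection.Output;
provider.SelectNewAnchorCommand = anchorCommand;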