How to remove entity from local copy without removing from Linq DB - c#

I have a problem in LINQ. When I query, I get all the contents of a table, but I don't want to load certain rows that are marked with a special key Y.
So I'm iterating over my local copy and removing the rows that have the special key Y.
Later, when I submit changes, I get an error:
"An attempt was made to remove a relationship between a priceTable and a dataTable. However, one of the relationship's foreign keys (P.Id) cannot be set to null."
Why is that? How can I alter the contents of a particular entity set without touching some of the rows?
I want the rows that are marked as Y not to be returned from the DB at all; I don't want to use them in my C# code at runtime.

Dennis is correct. If you don't need the records marked as 'Y', exclude them with a where clause instead of deleting them from the local copy. Then you are free to modify the remaining records and update them back to the database without any issues.
A sample where clause looks like this (note the != comparison: you want to exclude the 'Y' rows, not select them):
var data = from p in context.Persons
where p.Required != "Y"
select p;
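To make the whole flow concrete, here is a minimal sketch (the DataContext name and the updated column are assumptions for illustration, not the asker's real schema):
// MyDataContext and SomeColumn are hypothetical names.
using (var context = new MyDataContext())
{
    // Filter in the query so the 'Y' rows are never loaded locally.
    var people = from p in context.Persons
                 where p.Required != "Y"
                 select p;

    foreach (var p in people)
    {
        p.SomeColumn = "updated";
    }

    // Only the loaded rows are tracked, so SubmitChanges never tries to
    // delete relationships for the excluded 'Y' rows.
    context.SubmitChanges();
}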

Related

migrating an access multi valued field column to c#

I am attempting to use the Microsoft.ACE.OLEDB.12.0 driver to read data from an Access database and came upon an odd situation: one of the columns in the Access database shows as a comma-delimited list of ids.
Wells
________
345,456,7
6,387
When I looked at the column definition in Access, I thought it would say string, but it does not; it says number. So I guess it is storing an array of integers in a single column?
I'm having a tough time getting a data reader to pick this up.
using
var w = DB_Reader.GetValue(DB_Reader.GetOrdinal("Wells"));
results in the error
The provider could not determine the Object value. For example, the
row was just created, the default for the Object column was not
available, and the consumer had not yet set a new Object value.
Well, at the end of the day, you can think of the multi-value column as in fact a child table.
So, if you're looking to migrate a master and child table, then in YOUR database you need a relational set of tables to re-create what Access is doing behind the scenes.
So, let's take a multi-value example and query.
Say we have this sql query in Access:
SELECT ID, Person_Name, FavorateColors FROM tPerson;
But, "favorite colors" is one of those MV columns. (and I should point out with the HUGE movement towards no-sql databases - they also often work this way also - same for XML or JSON data for that matter. However, be it some XML, JSON or Access mutli-value features? Well, you need that child table if you going to adopt a relational data model to represent this data.
Ok, so we run the above query, and the output shows each person's favorite colors as a comma-separated list in the MV column (the original answer included a screenshot here).
In fact, when I used the lookup wizard, I picked a child table called tblColors.
But how can we explode the above query to dig out the data?
Change the above query to this:
SELECT ID, Person_Name, FavorateColors.Value FROM tPerson
Note how we added ".Value" after the MV column name. Now, when you run the query, you get the SAME result as if you had two tables and did a left join: as in any relational database, the parent table rows simply repeat for each child table value (again shown as a screenshot in the original answer).
Note how the PK value and the rest of the row now repeat for each child MV value.
So you are pretty much free to query as per above: you get what amounts to a left-joined table, and of course the parent record repeats.
So, just like XML, JSON, or in fact any query or table of data with repeating parent rows and child rows, you are pretty much forced to write code to split out this data, or to re-normalize it. This is of course far more common when receiving, say, JSON/XML data, or data from an Excel sheet.
So you have to process out the child record data and create a relation for that data.
And thus the question becomes how to import JSON/XML/Excel data that really should have used two relational database tables.
So, assuming we want to process this data: you process it the same way as any data you have that should have been two related tables in the first place.
It really depends on whether this is a one-time import or something you have to do all the time.
If it is a one-time deal, I would use Access and a make-table query based on the above query. You would in fact have to pluck up the PK ID from the child table. In the above there is a child table called tblColors; we're just missing the "junction" table in between that Access automatically created. The hidden tables are not exposed, so I would simply use a make-table query in Access and then add a FK column that is the PK value from tblColors.
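If this has to run from code rather than as a one-time Access job, a rough C# sketch of reading the exploded query and re-normalizing it might look like this (the table and column names come from the example above; the ACE connection string, and whether the .Value syntax is accepted through your provider version, are assumptions to verify):
using System;
using System.Collections.Generic;
using System.Data.OleDb;

var connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\people.accdb";
var parents = new Dictionary<int, string>();              // ID -> Person_Name
var children = new List<(int PersonId, string Color)>();  // re-normalized child rows

using (var conn = new OleDbConnection(connStr))
using (var cmd = new OleDbCommand(
    "SELECT ID, Person_Name, FavorateColors.Value FROM tPerson", conn))
{
    conn.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            int id = rdr.GetInt32(0);
            parents[id] = rdr.GetString(1);   // parent repeats per child; dictionary de-dupes
            if (!rdr.IsDBNull(2))
                children.Add((id, Convert.ToString(rdr.GetValue(2))));
        }
    }
}
// parents and children can now be written to two related tables
// (parent table + child/junction table) in the target database.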

c# - Failed to enable constraints when filling datatable [duplicate]

I have an outer join that executes successfully in the Informix database, but I get the following exception in my code:
DataTable dt = TeachingLoadDAL.GetCoursesWithEvalState(i, bat);
Failed to enable constraints. One or more rows contain values
violating non-null, unique, or foreign-key constraints.
I know the problem, but I don't know how to fix it.
The second table I make the outer join on has a composite primary key, whose columns come back null in the outer join query.
EDIT:
SELECT UNIQUE a.crs_e, a.crs_e || '/ ' || a.crst crs_name, b.period,
b.crscls, c.crsday, c.from_lect, c.to_lect,
c.to_lect - c.from_lect + 1 Subtraction, c.lect_kind, e.eval, e.batch_no,
e.crsnum, e.lect_code, e.prof_course
FROM rlm1course a, rfc14crsgrp b, ckj1table c, mnltablelectev d,
OUTER(cc1assiscrseval e)
WHERE a.crsnum = b.crsnum
AND b.crsnum = c.crsnum
AND b.crscls = c.crscls
AND b.batch_no = c.batch_no
AND c.serial_key = d.serial_key
AND c.crsnum = e.crsnum
AND c.batch_no = e.batch_no
AND d.lect_code= e.lect_code
AND d.lect_code = ....
AND b.batch_no = ....
The problem happens with the table cc1assiscrseval. The primary key is (batch_no, crsnum, lect_code).
How to fix this problem?
EDIT:
Following @PaulStock's advice, I did what he said and got:
? dt.GetErrors()[0] {System.Data.DataRow} HasErrors: true ItemArray:
{object[10]} RowError: "Column 'eval' does not allow DBNull.Value."
So I solved my problem by replacing e.eval with NVL(e.eval,'') eval, and this fixed it.
Thanks a lot.
This problem is usually caused by one of the following:
null values being returned for columns not set to AllowDBNull
duplicate rows being returned with the same primary key
a mismatch in column definition (e.g. size of char fields) between the database and the dataset
Try running your query natively and look at the results, if the result set is not too large. If you've eliminated null values, then my guess is that a primary key column is being duplicated.
Or, to see the exact error, you can manually add a try/catch block around the generated Fill call, like the sketch below, and break when the exception is raised:
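A minimal sketch (the adapter, dataset, and table names are placeholders for whatever your generated code uses):
try
{
    dataAdapter1.Fill(dataSet, "TableA"); // the generated Fill call that throws
}
catch (ConstraintException) // from System.Data
{
    // Break here, then inspect the table's errors in the command window
    // as described below.
    System.Diagnostics.Debugger.Break();
    throw;
}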
Then within the command window, call the GetErrors method on the table getting the error.
For C#, the command would be ? dataTable.GetErrors()
For VB, the command is ? dataTable.GetErrors
This will show you all the data rows that have an error. You can then look at the RowError for each of these, which should tell you the column that's invalid along with the problem. So, to see the error of the first datarow in error, the command is:
? dataTable.GetErrors(0).RowError
or in C# it would be ? dataTable.GetErrors()[0].RowError
You can disable the constraints on the dataset. It will allow you to identify bad data and help resolve the issue.
e.g.
dataset.TableA.Clear();
dataset.EnforceConstraints = false;
dataAdapter1.Fill(dataset, "TableA");
The fill method might be slightly different for you.
This will find all rows in the table that have errors, print out the row's primary key and the error that occurred on that row...
This is in C#, but converting it to VB should not be hard.
foreach (DataRow dr in dataTable.Rows)
{
if (dr.HasErrors)
{
Debug.Write("Row ");
foreach (DataColumn dc in dataTable.PKColumns)
Debug.Write(dc.ColumnName + ": '" + dr.ItemArray[dc.Ordinal] + "', ");
Debug.WriteLine(" has error: " + dr.RowError);
}
}
Oops - sorry PKColumns is something I added when I extended DataTable that tells me all the columns that make up the primary key of the DataTable. If you know the Primary Key columns in your datatable you can loop through them here. In my case, since all my datatables know their PK cols I can write debug for these errors automatically for all tables.
The output looks like this:
Row FIRST_NAME: 'HOMER', LAST_NAME: 'SIMPSON', MIDDLE_NAME: 'J', has error: Column 'HAIR_COLOR' does not allow DBNull.Value.
If you're confused about the PKColumns section above: it prints out column names and values, and is not necessary, but adds helpful troubleshooting info for identifying which column values may be causing the issue. Removing this section and keeping the rest will still print the row error being generated, which will note the column that has the problem.
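If you don't have a custom PKColumns extension, the built-in DataTable.PrimaryKey property (a DataColumn array) gives the same information when the dataset defines the key:
foreach (DataRow dr in dataTable.Rows)
{
    if (dr.HasErrors)
    {
        Debug.Write("Row ");
        foreach (DataColumn dc in dataTable.PrimaryKey) // built-in, no extension needed
            Debug.Write(dc.ColumnName + ": '" + dr[dc] + "', ");
        Debug.WriteLine(" has error: " + dr.RowError);
    }
}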
Ensure the fields named in the table adapter query match those in the query you have defined. The DAL does not seem to like mismatches. This will typically happen to your sprocs and queries after you add a new field to a table.
If you have changed the length of a varchar field in the database and the XML contained in the XSD file has not picked it up, find the field name and attribute definition in the XML and change it manually.
Remove primary keys from select lists in table adapters if they are not related to the data being returned.
Run your query in SQL Management Studio and ensure there are not duplicate records being returned. Duplicate records can generate duplicate primary keys which will cause this error.
SQL unions can spell trouble. I modified one table adapter by adding a ‘please select an employee’ record preceding the others. For the other fields I provided dummy data including, for example, strings of length one. The DAL inferred the schema from that initial record. Records following with strings of length 12 failed.
This worked for me.
I had this error and it wasn't related to the DB constraints (at least in my case). I have an .xsd file with a GetRecord query that returns a group of records. One of the columns of that table was nvarchar(512), and in the middle of the project I needed to change it to nvarchar(MAX).
Everything worked fine until a user entered more than 512 characters in that field, and we began to get the famous error message "Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."
Solution: check the MaxLength property of every column in your DataTable.
The column that I changed from nvarchar(512) to nvarchar(MAX) still had 512 in its MaxLength property, so I changed it to -1 and it works!
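A quick way to spot stale caps like this is to scan every string column's MaxLength (a small sketch using only the standard DataColumn API; dataTable is whatever table your adapter fills):
foreach (DataColumn col in dataTable.Columns)
{
    // MaxLength of -1 means unlimited; anything else is a hard cap
    // that Fill enforces against incoming values.
    if (col.DataType == typeof(string) && col.MaxLength > -1)
        Debug.WriteLine(col.ColumnName + " is capped at " + col.MaxLength);
}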
The problem is with the Data Access designer. In Visual Studio, when you pull a view from Server Explorer onto the designer window, it sometimes adds a primary key to a column arbitrarily, or marks a column NOT NULL even though it is actually nullable. Even though the actual view in the SQL server has no primary key or NOT NULL defined, the VS designer adds this key/constraint.
You can see this in the designer: it is shown with a key icon to the left of the column name.
Solution: right-click the key icon and select 'Delete Key'. This should solve the problem. You can also right-click a column and select "Properties" to see the list of properties of a column in the VS data access designer and change the values appropriately.
This error was also showing in my project. I tried all the proposed solutions posted here, but had no luck, because the problem had nothing to do with field sizes, table key definitions, constraints, or the EnforceConstraints dataset variable.
In my case I also have a .xsd object which I put there during the project design time (the Data Access Layer). As you drag your database table objects into the Dataset visual item, it reads each table definition from the underlying database and copies the constraints into the Dataset object exactly as you defined them when you created the tables in your database (SQL Server 2008 R2 in my case). This means that every table column created with the constraint of "not null" or "foreign key" must also be present in the result of your SQL statement or stored procedure.
After I included all the key columns and the columns defined as "not null" into my queries the problem disappeared completely.
Mine started working when I set AllowDBNull to True on a date field on a data table in the xsd file.
It sounds like possibly one or more of the columns being selected with:
e.eval, e.batch_no, e.crsnum, e.lect_code, e.prof_course
has AllowDBNull set to False in your Dataset definition.
It is not clear why running a SELECT statement should involve enabling constraints. I don't know C# or the related technologies, but I do know the Informix database. Something odd is going on if your querying code is enabling (and presumably also disabling) constraints.
You should also avoid the old-fashioned, non-standard Informix OUTER join notation. Unless you are using an impossibly old version of Informix, you should be using the SQL-92 style of joins.
Your question seems to mention two outer joins, but you only show one in the example query. That, too, is a bit puzzling.
The joining conditions between 'e' and the rest of the tables are:
AND c.crsnum = e.crsnum
AND c.batch_no = e.batch_no
AND d.lect_code= e.lect_code
This is an unusual combination. Since we do not have the relevant subset of the schema with the relevant referential integrity constraints, it is hard to know whether this is correct or not, but it is a little unusual to join between 3 tables like that.
None of this is a definitive answer to your problem; however, it may provide some guidance.
Thank you for all the input so far. I just want to add that while one may have successfully normalized the DB and propagated any schema changes to the application (e.g. to the dataset), there is another cause: a SQL CARTESIAN product (when joining tables in queries).
A Cartesian query result will cause duplicate records in the primary (or first key) table of the two or more tables being joined.
Even if you specify a WHERE clause in the SQL, a Cartesian product may still occur if the JOIN with a secondary table contains an unequal join (useful for getting data from two or more UNrelated tables):
FROM tbFirst INNER JOIN
tbSystem ON tbFirst.reference_str <> tbSystem.systemKey_str
Solution for this:
tables should be related.
Thanks. chagbert
I solved the same problem by changing AllowDBNull from false to true. In the end I went into the database and changed my bit field to allow null, then refreshed my xsd, my wsdl, and reference.cs, and now all is well.
this.columnAttachPDFToEmailFlag.AllowDBNull = true;
Short and easy solution:
Go to SQL Server Management Studio.
Run the query that causes this error. In my case I saw that the id value was null, because I forgot to set the identity specification to increment by 1.
So I entered 1 for the id field (since it is auto-increment) and set it to not allow NULLs in design view.
That was the error that caused my binding source and table adapter to throw an error at this code:
this.exchangeCheckoutReportTableAdapter.Fill(this.sbmsDataSet.ExchangeCheckouReportTable);
DirectCast(dt.Rows(0),DataRow).RowError
This directly gives the error (VB.NET).
If you are using the Visual Studio dataset designer to get the data table and it is throwing a 'Failed to enable constraints' error: I faced the same problem. Try previewing the data from the dataset designer itself and match it with the table inside your database.
The best way to solve this issue is to delete the table adapter and create a new one instead.
Secondary way:
If you don't need [id] to be a primary key, remove its primary key attribute:
in your DataSet > TableAdapter > right-click the [id] column > select Delete key ...
The problem will be fixed.
I also had this issue and it was resolved after modifying the *.xsd to reflect the revised size of the column changed in the underlying SQL server.
To fix this error, I removed the troubling table adapter from the Dataset designer, saved the dataset, then dragged a fresh copy of the table adapter in from Server Explorer, and that fixed it.
I resolved this problem by opening the .xsd file with an XML editor and deleting a constraint placed on one of my views. For whatever reason, when I added the view to the dataset it added a primary key constraint to one of the columns when there shouldn't have been one.
The other way is to open the .xsd file normally, look at the table/view causing the issue and delete any keys (right click column, select delete key) that should not be there.
Just want to add another possible reason for the exception to those listed above (especially for people who like to define dataset schemas manually):
when your dataset has two tables and a relationship (DataSet.Relations.Add()) is defined from the first table's field (chfield) to the second table's field (pfield), an implicit unique constraint is added to that parent field, even though you never explicitly declared it unique or made it a primary key.
As a consequence, if you have rows with repeated values in that parent field (pfield), you'll get this exception too.
In my case this error was provoked by the size of a string column. What was weird was that when I executed the exact same query in a different tool, there were no repeated or null values.
Then I discovered that the size of the string column in the dataset was 50, so when I called the Fill method the value was truncated, throwing this exception.
I clicked on the column, set its size to 200 in the properties, and the error was gone.
Hope this helps.
I solved this problem by wrapping the query in a subselect, like this:
string newQuery = "select * from (" + query + ") as temp";
When you do this on MySQL, all column properties (unique, non-null, ...) are cleared.
Another option is to load through DataTable.Load and, if a ConstraintException is thrown, clear the constraints and load again:
using (var tbl = new DataTable())
using (var rdr = cmd.ExecuteReader())
{
tbl.BeginLoadData();
try
{
tbl.Load(rdr);
}
catch (ConstraintException ex)
{
rdr.Close();
tbl.Clear();
// clear constraints, source of exceptions
// note: column schema already loaded!
tbl.Constraints.Clear();
tbl.Load(cmd.ExecuteReader());
}
finally
{
tbl.EndLoadData();
}
}
I received the same error type, and in my case I solved it by removing the select fields and replacing them with a *. No idea why it was happening; the query had no typos or anything fancy.
Not the best solution but nothing else worked and I was getting exhausted.
In my search for a clear answer I found this:
https://www.codeproject.com/questions/45516/failed-to-enable-constraints-one-or-more-rows-cont
Solution 8
This error was also showing in my project, using Visual Studio 2010. I tried other solutions posted on other blogs, but had no luck, because the problem had nothing to do with field sizes, table key definitions, constraints, or the EnforceConstraints dataset variable.
In my case I have a .xsd object which I put there during the project design time (in the Data Access Layer). As you drag your database table objects into the Dataset visual item, it reads each table definition from the underlying database and copies the constraints into the Dataset object exactly as you defined them when you created the tables in your database (SQL Server 2008 R2 in my case). This means that every table column created with the constraint of "not null" or "foreign key" must also be present in the result of your SQL statement or stored procedure.
After I included all the constrained columns (not null, primary key, foreign key, etc) into my queries the problem disappeared completely.
Perhaps you don't need all the table columns to be present in the query/stored procedure result, but because the constraints are still applied the error is shown if some constrained column does not appear in the result.
Hope this helps someone else.
If you have a failing DataSet (not just a DataTable):
if (dataSet.HasErrors)
foreach (DataTable table in dataSet.Tables)
if (table.HasErrors)
foreach (var row in table.GetErrors())
Debug.Write($"Error in DataTable {table.TableName}: {row.RowError}")
If _sample_DataSet is the name of the dataset that encountered the error while filling, you can put the Fill call inside a try/catch, then put the following code in the catch block to find exactly which rows are erroneous:
foreach (DataTable _dtable in _sample_DataSet.Tables)
{
    foreach (DataRow dr in _dtable.Rows)
    {
        if (dr.HasErrors)
        {
            Debug.Write("Row error=" + dr.RowError);
        }
    }
}

Why does MS Access 2007 not allow a row insert, but then allow it on the next insert attempt?

My insert statement is:
INSERT INTO myTable (inst_id,user_id,app_id,type,accessed_on)
VALUES (3264,2580,'MyApp','Renew',Now);
...where all of the values are formatted correctly. The table has the above fields and one other, a long int auto-increment key field. The foreign keys are 'inst_id', 'user_id', and 'app_id'.
I am getting an error dialog from Access (the screenshot is not included here), and the following error from VS 2005 when it errors out:
System.Data.OleDb.OleDbException: The changes you requested to the table
were not successful because they would
create duplicate values in the index,
primary key, or relationship. Change
the data in the field or fields that
contain duplicate data, remove the
index, or redefine the index to permit
duplicate entries and try again.
When making this insert query I can look into the database and see that each of the foreign key values exists in its respective table, and has for months (for the particular example I am using). These fields are also set to allow duplicates, so that is not the issue. Calls of this nature on other tables work great. I do not need to supply the auto-increment key value in the insert query; it adds it for me automatically (like it should).
The weird thing is that if I do this in my code:
try
{
//Execute the query here...
}
catch
{
//Execute the same query again
}
...or if I just try and execute this within Access twice, it works.
Has anyone encountered this before? Again, this type of insert works for other tables, all foreign keys are present in their respective tables, the primary key of this table is set as 'Auto-increment', and all fields (other than the primary key field of course) are set to allow duplicates.
Any ideas?
EDIT: Largest key before inserting: 343085. Largest key after inserting: 343086. The format is:
id: AutoNumber (Field Size=Long Integer, New Values=Increment, Indexed=Yes - No Duplicates)
inst_id: Number (Field Size=Long Integer, Required=Yes, Indexed=Yes - Duplicates OK)
user_id: Number (Field Size=Long Integer, Required=Yes, Indexed=Yes - Duplicates OK)
app_id: Text (Field Size=255, Required=Yes, Indexed=Yes - Duplicates OK)
type: Text (Field Size=50, Required=Yes, Indexed=No)
accessed_on: Date/Time (Default Value=Now(), Required=Yes, Indexed=No)
Going by some old memory here...
Try putting a timestamp field in your table.
I can't remember exactly why that works -- something to do with Access having difficulty identifying records / maybe some kind of locking or indexing quirk. I did some research on that several years ago when it happened to one of my tables.
The key violation the error refers to isn't a missing key in another table; it's a duplicate key in the same table. Sometimes Access gets its wires crossed and thinks that the key it's assigning to the new record is already assigned to another record in the table. I don't know what causes that to happen. But putting a timestamp field in the table causes Access to think differently.
It's a frustrating fix, because I don't know why it works. And now I have an otherwise useless timestamp field in my table. But so be it.
MS-Access has been known to barf up spurious errors that have nothing to do with the problem they report. It wouldn't hurt to surround the column called "type" with brackets, [type].
http://office.microsoft.com/en-us/access-help/access-2007-reserved-words-and-symbols-HA010030643.aspx#_Toc272229038
Is the value of Now changing between attempts, so that there is no longer a duplicate key error?
INSERT INTO myTable (inst_id,user_id,app_id,type,accessed_on)
VALUES (3264,2580,'MyApp','Renew',Now);
Can you check the accessed_on data type against the type that Now produces?
Try changing the DateTime value to a String while inserting; that should work.
Do let me know if this works for you.
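For what it's worth, a hedged OleDb sketch that passes the timestamp as a parameter instead of embedding Now in the SQL text (and brackets the type column, per the reserved-words advice above; the connection string is a placeholder):
using System;
using System.Data.OleDb;

var connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\app.accdb";
using (var conn = new OleDbConnection(connStr))
using (var cmd = new OleDbCommand(
    "INSERT INTO myTable (inst_id, user_id, app_id, [type], accessed_on) " +
    "VALUES (?, ?, ?, ?, ?)", conn))
{
    // OleDb parameters are positional; the order must match the ? markers.
    cmd.Parameters.AddWithValue("@inst_id", 3264);
    cmd.Parameters.AddWithValue("@user_id", 2580);
    cmd.Parameters.AddWithValue("@app_id", "MyApp");
    cmd.Parameters.AddWithValue("@type", "Renew");
    cmd.Parameters.AddWithValue("@accessed_on", DateTime.Now); // client-side timestamp
    conn.Open();
    cmd.ExecuteNonQuery();
}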
Thanks
rAfee
I believe Jet/ACE will not understand the NOW() method.
When I worked with the ACE version, that syntax did not work.
You may need to find another way rather than using that syntax directly.
I know that a long time ago I had a similar issue. In my case I was getting the same error, but I didn't have any unique indexes in the table. I finally solved it by repairing and compacting the database.

How to get the primary key from a table without making a second trip?

How would I get the primary key ID number from a Table without making a second trip to the database in LINQ To SQL?
Right now, I submit the data to a table, and make another trip to figure out what id was assigned to the new field (in an auto increment id field). I want to do this in LINQ To SQL and not in Raw SQL (I no longer use Raw SQL).
Also, the second part of my question: I am always careful to know the ID of a user that's online, because I'd rather look up their information in various tables using their ID as opposed to a GUID or a username, which are long strings. I do this because I think a numeric compare in SQL Server is much (?) more efficient than a username (string) or GUID (very long string) compare. My question is: am I more concerned than I should be? Is the difference worth always keeping the user id (int32) in, say, session state?
@RedFilter provided some interesting/promising leads for the first question. I am at this stage unable to try them; can anyone confirm the changes he recommended in the comments section of his answer?
If you have a reference to the object, you can just use that reference and read the primary key after you call db.SubmitChanges(). The LINQ object will automatically update its (identity) primary key field to reflect the new one assigned to it by SQL Server.
Example (vb.net):
Dim db As New NorthwindDataContext
Dim prod As New Product
prod.ProductName = "cheese!"
db.Products.InsertOnSubmit(prod)
db.SubmitChanges()
MessageBox.Show(prod.ProductID)
You could probably include the above code in a function and return the ProductID (or equivalent primary key) and use it somewhere else.
EDIT: If you are not doing atomic updates, you could add each new product to a separate Collection and iterate through it after you call SubmitChanges. I wish LINQ provided a 'database sneak peek' like a dataset would.
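For reference, the same flow in C# (a sketch; NorthwindDataContext and Product are the usual sample names, not something from the question):
using (var db = new NorthwindDataContext())
{
    var prod = new Product { ProductName = "cheese!" };
    db.Products.InsertOnSubmit(prod);
    db.SubmitChanges();                  // LINQ to SQL fetches the new identity here
    Console.WriteLine(prod.ProductID);   // already populated after SubmitChanges
}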
Unless you are doing something out of the ordinary, you should not need to do anything extra to retrieve the primary key that is generated.
When you call SubmitChanges on your Linq-to-SQL datacontext, it automatically updates the primary key values for your objects.
Regarding your second question - there may be a small performance improvement by doing a scan on a numeric field as opposed to something like varchar() but you will see much better performance either way by ensuring that you have the correct columns in your database indexed. And, with SQL Server if you create a primary key using an identity column, it will by default have a clustered index over it.
Linq to SQL automatically sets the identity value of your class with the ID generated when you insert a new record. Just access the property. I don't know if it uses a separate query for this or not, having never used it, but it is not unusual for ORMs to require another query to get back the last inserted ID.
Two ways you can do this independent of Linq To SQL (that may work with it):
1) If you are using SQL Server 2005 or higher, you can use the OUTPUT clause:
Returns information from, or
expressions based on, each row
affected by an INSERT, UPDATE, or
DELETE statement. These results can be
returned to the processing application
for use in such things as confirmation
messages, archiving, and other such
application requirements.
Alternatively, results can be inserted
into a table or table variable.
2) Alternately, you can construct a batch INSERT statement like this:
insert into MyTable
(field1)
values
('xxx');
select scope_identity();
which works at least as far back as SQL Server 2000.
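From C#, that batch can be executed with ExecuteScalar (a sketch; the connection string and table are placeholders):
using System;
using System.Data.SqlClient;

using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
using (var cmd = new SqlCommand(
    "insert into MyTable (field1) values (@v); select scope_identity();", conn))
{
    cmd.Parameters.AddWithValue("@v", "xxx");
    conn.Open();
    // scope_identity() comes back as numeric (decimal), so convert explicitly.
    int newId = Convert.ToInt32(cmd.ExecuteScalar());
    Console.WriteLine(newId);
}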
In T-SQL, you could use the OUTPUT clause, saying:
INSERT table (columns...)
OUTPUT inserted.ID
SELECT columns...
So if you can configure LINQ to use that construct for doing inserts, then you can probably get it back easily. But whether LINQ can get a value back from an insert, I'll let someone else answer that.
Calling a stored procedure from LINQ that returns the ID as an output parameter is probably the easiest approach.
