How to use Dapper's ExecuteAsync method to insert into a table - C#

I am trying to insert data into one of my database tables using the Dapper ORM, but I can't seem to get it right using the ExecuteAsync method.
When I use the ExecuteScalarAsync method, everything works as intended:
RequestEvent ev = new();
ev.Title = string.Empty;
ev.Description = string.Empty;
ev.NewValue = string.Empty;
ev.MemberId = 1;
ev.RequestId = 1;
long requestEventInsertedId = await context.Connection.ExecuteScalarAsync<long>(@"
INSERT INTO RequestEvent (Title, Description, MemberId, RequestId, NewValue)
VALUES (@Title, @Description, @MemberId, @RequestId, @NewValue)
RETURNING Id", ev);
// it works, requestEventInsertedId's value is the inserted autoincremented id
I am trying to use ExecuteAsync with a similar approach, without success:
RequestEvent ev = new();
ev.Title = string.Empty;
ev.Description = string.Empty;
ev.NewValue = string.Empty;
ev.MemberId = 1;
ev.RequestId = 1;
await context.Connection.ExecuteAsync(@"
INSERT INTO RequestEvent (Title, Description, MemberId, RequestId, NewValue)
VALUES (@Title, @Description, @MemberId, @RequestId, @NewValue)", ev);
Executing this code throws the following exception, without further explanation:
Microsoft.Data.Sqlite.SqliteException : 'SQLite Error 19: 'FOREIGN KEY constraint failed'.'
The MemberId and RequestId properties are foreign keys, but they are clearly provided in this code.
Why do I get this error when using the ExecuteAsync method?
RequestEvent model:
public class RequestEvent
{
public long Id { get; set; }
public DateTime DateTime { get; set; }
public string Title { get; set; }
public string Description { get; set; }
public string NewValue { get; set; }
public long MemberId { get; set; }
public long RequestId { get; set; }
}
RequestEvent table schema from my migration (I am using FluentMigrator):
Create.Table("RequestEvent")
.WithColumn("Id").AsInt64().PrimaryKey().NotNullable().Identity()
.WithColumn("DateTime").AsDateTime().WithDefaultValue(SystemMethods.CurrentDateTime)
.WithColumn("Title").AsString().Nullable()
.WithColumn("Description").AsString().Nullable()
.WithColumn("NewValue").AsString().Nullable()
.WithColumn("MemberId").AsInt64().Nullable().ForeignKey("Member", "Id").OnDeleteOrUpdate(Rule.SetNull)
.WithColumn("RequestId").AsInt64().Nullable().ForeignKey("Request", "Id").OnDeleteOrUpdate(Rule.SetNull);
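Neither ExecuteScalarAsync nor ExecuteAsync changes how SQLite checks foreign keys, so a likely explanation is that the two snippets ran against different database states: the error fires whenever the referenced Member/Request rows are absent on that connection. A minimal sketch with Python's stdlib sqlite3 (table names borrowed from the question, everything else illustrative) reproduces the same error class:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
conn.execute("CREATE TABLE Member (Id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE RequestEvent (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    Title TEXT,
    MemberId INTEGER REFERENCES Member(Id))""")

# Inserting a RequestEvent that references a Member row that does not exist
# fails with exactly the error text from the question
fk_error = None
try:
    conn.execute("INSERT INTO RequestEvent (Title, MemberId) VALUES (?, ?)", ("t", 1))
except sqlite3.IntegrityError as e:
    fk_error = str(e)
print(fk_error)

# Once the parent row exists, the identical insert succeeds
conn.execute("INSERT INTO Member (Id) VALUES (1)")
conn.execute("INSERT INTO RequestEvent (Title, MemberId) VALUES (?, ?)", ("t", 1))
inserted = conn.execute("SELECT COUNT(*) FROM RequestEvent").fetchone()[0]
```

So the first thing worth verifying is whether Member.Id = 1 and Request.Id = 1 actually exist in the database the ExecuteAsync call runs against.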

Related

DynamoDB - How to implement Optimistic Locking using ServiceStack.Aws

Currently, I am using ServiceStack.Aws v5.9.0 to communicate with DynamoDB. I have used PutItem for both creating and updating an item, without anticipating data loss under concurrent updates.
public class Customer
{
[HashKey]
public int CustomerId { get; set; }
[AutoIncrement]
public int SubId { get; set; }
public string CustomerType { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
...//and hundreds of fields here
}
public class CustomerDynamo
{
private readonly IPocoDynamo db;
//Constructor
public CustomerDynamo()
{
var dynamoClient = new AmazonDynamoDBClient(_region);
var entityType = typeof(Customer);
var tableName = entityType.Name;
entityType.AddAttributes(new AliasAttribute(name: tableName));
db = new PocoDynamo(dynamoClient) { ConsistentRead = true }.RegisterTable(tableType: entityType);
}
public Customer Update(Customer customer)
{
customer.ModifiedDate = DateTime.UtcNow;
db.PutItem(customer);
return customer;
}
}
The above Update method is called in every service/async task that needs to update the customer's data.
Referring to this AWS article, I decided to implement optimistic locking to protect myself from concurrent-request issues:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBContext.VersionSupport.html
Assume that VersionNumber will be the key for optimistic locking, so I added VersionNumber to the Customer model:
public class Customer
{
[HashKey]
public int CustomerId { get; set; }
[AutoIncrement]
public int SubId { get; set; }
public string CustomerType { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
...//and hundreds of fields here
[DynamoDBVersion]
public int? VersionNumber { get; set; }
}
The result is that VersionNumber is not updated, while it should be automatically incremented. I think this is because PutItem overwrites the whole existing item. Is that correct?
I think I need to change from PutItem to UpdateItem in the Update method. The question is: how can I generate the expression dynamically to use with UpdateItem?
Thanks in advance for any help!
Updates:
Thanks @mythz for the useful information about the DynamoDBVersion attribute. I then tried removing DynamoDBVersion and using PocoDynamo's UpdateExpression as below:
public Customer Update(Customer customer)
{
customer.ModifiedDate = DateTime.UtcNow;
var expression = db.UpdateExpression<Customer>(customer.CustomerId).Set(() => customer);
expression.ExpressionAttributeNames = new Dictionary<string, string>()
{
{ "#Version", "VersionNumber" }
};
expression.ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
{
{ ":incr", new AttributeValue { N = "1" } },
{ ":zero", new AttributeValue { N = "0" } }
};
expression.UpdateExpression = "SET #Version = if_not_exists(#Version, :zero) + :incr";
if (customer.VersionNumber.HasValue)
{
expression.Condition(c => c.VersionNumber == customer.VersionNumber);
}
var success = db.UpdateItem(expression);
return customer;
}
But none of the changes are saved, except the VersionNumber.
The [DynamoDBVersion] attribute is an AWS Object Persistence Model attribute for use with AWS's DynamoDBContext, not with PocoDynamo. That is, the only [DynamoDB*] attributes PocoDynamo utilizes are [DynamoDBHashKey] and [DynamoDBRangeKey]; all other [DynamoDB*] attributes are intended for AWS's Object Persistence Model libraries.
When needed, you can access AWS's IAmazonDynamoDB from a PocoDynamo instance with:
var db = new PocoDynamo(dynamoClient);
var awsDb = db.DynamoDb;
Here are docs on PocoDynamo's UpdateItem APIs that may be relevant.
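For reference, the version-check idea that the Condition(...) call above implements can be sketched language-agnostically. Here is a minimal, hypothetical in-memory sketch in Python (the Store class and method names are made up; in DynamoDB the same check is a ConditionExpression on the version attribute):

```python
class VersionConflict(Exception):
    pass

class Store:
    """Hypothetical in-memory store standing in for a DynamoDB table."""
    def __init__(self):
        self._items = {}

    def put_conditional(self, key, attrs, expected_version):
        # Mirrors a conditional write: the update applies only if the stored
        # version matches the version the caller last read.
        current = self._items.get(key)
        current_version = current["version"] if current else None
        if current_version != expected_version:
            raise VersionConflict(f"expected {expected_version}, found {current_version}")
        item = dict(attrs, version=(expected_version or 0) + 1)
        self._items[key] = item
        return item

store = Store()
v1 = store.put_conditional("cust#1", {"name": "Ann"}, expected_version=None)   # create
v2 = store.put_conditional("cust#1", {"name": "Anne"}, expected_version=1)     # update
conflict = False
try:
    # A writer holding a stale version loses the race instead of silently
    # overwriting the newer data
    store.put_conditional("cust#1", {"name": "stale"}, expected_version=1)
except VersionConflict:
    conflict = True
```

The caller that receives the conflict can re-read the item, reapply its change, and retry with the fresh version number.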

Azure C# v2 Function: Data not being inserted to Azure Table Storage

I am writing a C# Azure Functions v2 timer function that copies ten rows from one table (OneAuthZRoleAssignments) and inserts them into another table (OneAuthZPreviousRoleAssignments). My function reads from the OneAuthZRoleAssignments table correctly (the print statements show the correct values); however, it fails to insert the rows it read into OneAuthZPreviousRoleAssignments (note: the accountName and accountKey variables are defined; I have omitted their actual values for security reasons). Below is my code:
using System;
using System.Buffers.Text;
using System.Collections.Generic;
using System.Data;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;
namespace AccessChangeMonitoring
{
public static class Function1
{
// Authenticate access into the database's Azure Table Storage
static StorageCredentials creds = new StorageCredentials(accountName, accountKey);
static CloudStorageAccount account = new CloudStorageAccount(creds, useHttps: true);
[FunctionName("Function1")]
// Function that reads a small portion of the role assignments table (OneAuthZRoleAssignments) every
// configurable number of times
public static async System.Threading.Tasks.Task RunAsync([TimerTrigger("%TimerTriggerPeriod%")]TimerInfo myTimer, ILogger log)
{
// Retrieve the role assignments table
CloudTableClient client = account.CreateCloudTableClient();
// Current role assignments
CloudTable roleAssignmentsTable = client.GetTableReference("OneAuthZRoleAssignments");
// LKG (Last Known Good) role assignments
CloudTable previousRoleAssignmentsTable = client.GetTableReference("OneAuthZPreviousRoleAssignments");
// Test out retrieving a small portion from the role assignments table (10 rows)
var tablePortion = new List<RoleAssignment>(); // Stores the query of role assignment
TableContinuationToken token = null; // Allows our query to iterate to the next role row
// Define our query (in this case, set the number of rows we want to retrieve at 10)
TableQuery<RoleAssignment> tableQuery = new TableQuery<RoleAssignment>();
tableQuery.Take(10);
// Retrieve the rows from Azure Table Storage
var queryResult = await roleAssignmentsTable.ExecuteQuerySegmentedAsync(tableQuery, token);
tablePortion.AddRange(queryResult.Results);
// Copy the rows to the LKG (Last Known Good) table
CopyRows(tablePortion, previousRoleAssignmentsTable);
}
[FunctionName("CopyRows")]
// Copies a list of rows to another table
public static void CopyRows(List<RoleAssignment> queriedRows, CloudTable destinationTable)
{
// Iterate through all of the rows
foreach (RoleAssignment row in queriedRows)
{
// Define the insertion operation
TableOperation insert = TableOperation.Insert(row);
// Execute the insertion operation
destinationTable.ExecuteAsync(insert);
// OUTPUTS THE CORRECT VALUES
Console.WriteLine("------------------------------");
Console.WriteLine("Row: {0}\n{1}\n{2}\n{3}\n{4}\n{5}\n{6}\n{7}\n{8}\n{9}\n{10}\n{11}\n{12}" +
"\n{13}\n{14}\n{15}\n{16}\n{17}", row.PartitionKey, row.RowKey, row.Timestamp,
row.AppId, row.ApplicationName, row.AssignedAlias, row.AssignedName, row.AssignedUPN, row.Condition,
row.Id, row.IsBuiltIn, row.PrincipalId, row.RoleDefinitionId, row.RoleDefintionName, row.Scope, row.UpdatedBy,
row.UpdatedByAlias, row.UpdatedByName
);
Console.WriteLine("------------------------------");
}
}
}
}
And for reference, this is the entity class RoleAssignment which represents a row in the table:
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Collections.Generic;
using System.Text;
namespace AccessChangeMonitoring
{
// RoleAssignment represents a row in the Azure Table Storage OneAuthZRoleAssignments
public class RoleAssignment:TableEntity
{
// Constructors
public RoleAssignment()
{
}
public RoleAssignment(string PartitionKey, string RowKey, DateTime Timestamp, Guid AppId, string ApplicationName,
string AssignedAlias, string AssignedName, string AssignedUPN, string Condition,
string Id, bool IsBuiltIn, string PrincipalId, string RoleDefinitionId, string RoleDefintionName,
string Scope, string UpdatedBy, string UpdatedByAlias, string UpdatedByName)
{
this.PartitionKey = PartitionKey;
this.RowKey = RowKey;
this.TimeStamp = Timestamp;
this.AppId = AppId;
this.ApplicationName = ApplicationName;
this.AssignedAlias = AssignedAlias;
this.AssignedName = AssignedName;
this.AssignedUPN = AssignedUPN;
this.Condition = Condition;
this.Id = Id;
this.IsBuiltIn = IsBuiltIn;
this.PrincipalId = PrincipalId;
this.RoleDefinitionId = RoleDefinitionId;
this.RoleDefintionName = RoleDefintionName;
this.Scope = Scope;
this.UpdatedBy = UpdatedBy;
this.UpdatedByAlias = UpdatedByAlias;
this.UpdatedByName = UpdatedByName;
}
// The row properties
public DateTime TimeStamp { get; set; }
public Guid AppId { get; set; }
public string ApplicationName { get; set; }
public string AssignedAlias { get; set; }
public string AssignedName { get; set; }
public string AssignedUPN { get; set; }
public string Condition { get; set; }
public string Id { get; set; }
public Boolean IsBuiltIn { get; set; }
public string PrincipalId { get; set; }
public string RoleDefinitionId { get; set; }
public string RoleDefintionName { get; set; }
public string Scope { get; set; }
public string UpdatedBy { get; set; }
public string UpdatedByAlias { get; set; }
public string UpdatedByName { get; set; }
}
}
Below is the OneAuthZRoleAssignments table:
This seems strange, considering that I can read the data from the OneAuthZRoleAssignments table, and both the table I am reading from (OneAuthZRoleAssignments) and the table I am writing to (OneAuthZPreviousRoleAssignments) are in the same storage account.
There is probably an exception happening in destinationTable.ExecuteAsync(insert); which you are not seeing because you are missing an await. See how these changes to your code affect the result:
// Copy the rows to the LKG (Last Known Good) table
CopyRows(tablePortion, previousRoleAssignmentsTable);
}
public static void CopyRows(List<RoleAssignment> queriedRows, CloudTable destinationTable)
{
//...
destinationTable.ExecuteAsync(insert);
Change that to:
// Copy the rows to the LKG (Last Known Good) table
await CopyRows(tablePortion, previousRoleAssignmentsTable);
}
public static async Task CopyRows(List<RoleAssignment> queriedRows, CloudTable destinationTable)
{
//...
await destinationTable.ExecuteAsync(insert);
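The failure mode can be reproduced outside Azure. Python coroutines differ from C# tasks (a C# method body starts running immediately; a Python coroutine body does not run until awaited), but the observable symptom of a missing await is the same: the caller never sees the exception. An illustrative sketch:

```python
import asyncio

failures = []

async def insert_row(row):
    # Stand-in for destinationTable.ExecuteAsync(insert) hitting a storage error
    raise RuntimeError(f"insert failed for {row}")

async def copy_rows_without_await(rows):
    for row in rows:
        coro = insert_row(row)
        coro.close()  # never awaited: like discarding the Task, the error is lost
    return "done"     # reports success even though every insert failed

async def copy_rows_with_await(rows):
    for row in rows:
        try:
            await insert_row(row)  # awaiting surfaces the exception to the caller
        except RuntimeError as e:
            failures.append(str(e))

silent = asyncio.run(copy_rows_without_await(["a", "b"]))
asyncio.run(copy_rows_with_await(["a", "b"]))
```

With the await in place, the real storage exception (bad entity, missing table, auth failure, and so on) becomes visible and can actually be diagnosed.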

Exception when trying to make bulk insert in xamarin app with sqlite

I developed code to receive the JSON sent from my REST service.
I receive and process the data and generate the SQL insert statements.
I got this error:
[0:] SQLite.SQLiteException: Constraint
at SQLite.SQLiteCommand.ExecuteNonQuery () [0x000ca] in <84b9c9e630fa45bd8ac799333976ebbf>:0
at GSAN_Mobile.Repository.GsanMobileRepository`1+<>c__DisplayClass9_1[T].<BulkInsert>b__0 (Newtonsoft.Json.Linq.JToken register) [0x0002c] in D:\Projetos\Gsan\mobile\front\GSAN_Mobile\GSAN_Mobile\GSAN_Mobile\Repository\GsanMobileRepository.cs:185
at System.Collections.Generic.List`1[T].ForEach (System.Action`1[T] action) [0x0001e] in <6de48997d0c0445dbea8d4d83492d8c6>:0
at GSAN_Mobile.Repository.GsanMobileRepository`1[T].BulkInsert (Newtonsoft.Json.Linq.JArray array, System.String tableName) [0x00062] in D:\Projetos\Gsan\mobile\front\GSAN_Mobile\GSAN_Mobile\GSAN_Mobile\Repository\GsanMobileRepository.cs:180
My Bulk Insert Method
public void BulkInsert(JArray array, string tableName = "")
{
try
{
if (string.IsNullOrEmpty(tableName))
{
Type typeParameterType = typeof(T);
tableName = typeParameterType.Name;
}
using (SQLiteConnection connection = new SQLiteConnection(DataBaseUtil.GetDataBasePath()))
{
connection.BeginTransaction();
array.ToList().ForEach(register =>
{
string sql = DataBaseUtil.GenerateInsertStatement(register, tableName);
System.Diagnostics.Debug.WriteLine(sql);
var command = connection.CreateCommand(sql);
command.ExecuteNonQuery();
});
connection.Commit();
DataBaseUtil.CloseConnection(connection);
}
}
catch (Exception e)
{
LogUtil.WriteLog(e);
}
}
My utils methods
public static string GenerateInsertStatement(JToken register, string tableName)
{
var data = JsonConvert.DeserializeObject<Dictionary<string, string>>(register.ToString());
string columns = string.Join(",", data.Keys.ToList());
string values = string.Join(",", data.Values.Select(v => string.Format(@"'{0}'", v.Trim())));
return string.Format("INSERT INTO {0} ({1}) VALUES ({2}); ", tableName, columns, values);
}
public static void CloseConnection(SQLiteConnection connection)
{
connection.Dispose();
connection.Close();
}
And my ViewModel class
This is the method that I call when synchronization starts:
private async Task RegistrarDados()
{
try
{
_logs.Add("Realizando o Download: ");
GenerateAtendimentoMotivosEncerramento();
GenerateHidrometrosLocalInstalacao();
GenerateHidrometrosProtecao();
GenerateFuncionarios();
GenerateGrupoFaturamento();
GenerateLigacaoAguaSituacoes();
GenerateLigacaoEsgotoSituacoes();
GenerateServicosTipo();
GenerateSistemParametros();
GenerateOrdensServico();
//GenerateContas();
int contador = _ordemServicoRepository.Count<OrdemServico>();
_logs.Add("Sincronização encerrada com sucesso!");
await App.Current.MainPage.DisplayAlert("Atenção", "Foram importados " + contador + " Ordens de Serviços!", "OK");
PodeSincronizar = true;
}
catch (Exception e)
{
LogUtil.WriteLog(e);
}
}
And this is the method where the error happens:
private async void GenerateOrdensServico()
{
try
{
_logs.Add("ORDENS DE SERVIÇO");
int? records = await _ordemServicoRest.GetCount();
int? limit = _sistemaParametroRepository.GetTamanhoPaginaSincMobile();
int? pages = (records / limit);
for (int i = 0; i <= pages; i++)
{
JArray ordensServico = await _ordemServicoRest.GetAllInJsonFormatPaginated(DataBaseUtil.GetPagination(i, limit.Value));
if (ordensServico == null)
{
_logs.Add("Não Contem O.S de Corte para importar!");
await App.Current.MainPage.DisplayAlert("Atenção", "Não tem O.S para importar!", "OK");
continue;
}
_ordemServicoRepository.BulkInsert(ordensServico);
}
}
catch (Exception e)
{
LogUtil.WriteLog(e);
}
}
I receive the data paginated because there are 8,500 records. Sometimes the error doesn't happen, but other times it does, and I don't understand why.
And my model class
[Serializable]
public class Persistent
{
[AutoIncrement]
[PrimaryKey]
[NotNull]
[JsonProperty("id")]
public int? Id { get; set; }
}
[Table("OrdemServico")]
public class OrdemServico : Persistent
{
[JsonProperty("situacaoOS")]
public int? SituacaoOS { get; set; }
[JsonProperty("idServicoTipo")]
public int? IdServicoTipo { get; set; }
[JsonProperty("dataGeracao")]
public string DataGeracao { get; set; }
[JsonProperty("idRegistroAtendimento")]
public int? IdRegistroAtendimento { get; set; }
[JsonProperty("idgrupo")]
public int? IdGrupo { get; set; }
[JsonProperty("matriculaCliente")]
public int? MatriculaCliente { get; set; }
[JsonProperty("nomeCliente")]
public string NomeCliente { get; set; }
[JsonProperty("tipoLogradouro")]
public string TipoLogradouro { get; set; }
[JsonProperty("logradouro")]
public string Logradouro { get; set; }
[JsonProperty("numeroImovel")]
public int? NumeroImovel { get; set; }
[JsonProperty("numeroCep")]
public int? NumeroCep { get; set; }
[JsonProperty("bairro")]
public string Bairro { get; set; }
[JsonProperty("numeroHidrometro")]
public string NumeroHidrometro { get; set; }
[JsonProperty("idHidrometroProtecao")]
public int? IdHidrometroProtecao { get; set; }
[JsonProperty("idHidrometroLocalInstalacao")]
public int? IdHidrometroLocalInstalacao { get; set; }
[JsonProperty("imovel")]
public int? Imovel { get; set; }
[JsonProperty("ligacaoAguaSituacao")]
public int? LigacaoAguaSituacao { get; set; }
[JsonProperty("ligacaoEsgotoSituacao")]
public int? LigacaoEsgotoSituacao { get; set; }
[JsonProperty("sincronizada")]
public int? Sincronizada { get; set; }
}
I send the Id most of the time, when one exists.
My API does not throw an exception.
The SQLite.SQLiteException: Constraint exception, and the fact that it occurs intermittently, is the key: you are violating a table constraint somewhere.
Your SQLite table is expecting a value that it's not receiving. For example, if one of your data fields is marked "NOT NULL" in the table and a field in your data is NULL, that constraint will fail and you will receive the exception.
You need to determine which fields have constraints and then sanitize the received data to preserve constraint integrity: for example, convert null TEXT values to String.Empty, or null INTEGER values to 0, wherever the table has a NOT NULL constraint. You will have to look at the constraints defined in your data structure.
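Both likely culprits in this model, a NULL Id and a repeated Id, surface in SQLite as the same generic Constraint exception. A sketch with Python's stdlib sqlite3 (table and column names borrowed from the model, data made up) shows the distinct messages underneath:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Schema mirroring the model's constraints on Persistent.Id: primary key, not null
conn.execute("CREATE TABLE OrdemServico (Id INTEGER NOT NULL UNIQUE, Nome TEXT)")

errors = []
for row in ({"Id": None, "Nome": "a"},   # NULL into a NOT NULL column
            {"Id": 1, "Nome": "b"},      # fine
            {"Id": 1, "Nome": "c"}):     # duplicate of an existing key
    try:
        conn.execute("INSERT INTO OrdemServico (Id, Nome) VALUES (?, ?)",
                     (row["Id"], row["Nome"]))
    except sqlite3.IntegrityError as e:
        errors.append(str(e))

print(errors)
```

Logging the full underlying message rather than just the exception type should reveal which constraint the intermittent failures are hitting.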
With that said, you should really look into parameterized queries. Your current INSERT statement is asking for trouble, as it's vulnerable to injection. For example, if one of the value strings contains a quote, it can break your statement, because SQLite doesn't know the value is just a value and treats it as part of the query text. Parameters fix this. This could also be what is causing your issue, even if you are correctly following your table constraints.
Here's a small example of how your queries should be built using parameters instead:
cmd.CommandText = "INSERT INTO MyTable (CompanyName, Address) VALUES (@CompanyName, @Address)";
cmd.Parameters.AddWithValue("@CompanyName", myCompanyNameString);
cmd.Parameters.AddWithValue("@Address", myAddressString);
Since the dynamic query values are parameterized, the values can contain special characters or statements like "DROP TABLE" and will still be handled correctly, protecting against injection attacks and against exceptions from query-breaking characters or words.
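The same contrast can be demonstrated with Python's stdlib sqlite3 (illustrative only; the table and value are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (CompanyName TEXT, Address TEXT)")

name = "O'Reilly & Sons"   # a quote inside the data breaks string-built SQL

broken = None
try:
    # String concatenation: the quote in the value terminates the literal early
    conn.execute(f"INSERT INTO MyTable (CompanyName, Address) VALUES ('{name}', 'x')")
except sqlite3.OperationalError as e:
    broken = str(e)        # SQLite rejects the malformed statement

# A bound parameter carries the same value safely, regardless of its contents
conn.execute("INSERT INTO MyTable (CompanyName, Address) VALUES (?, ?)", (name, "x"))
stored = conn.execute("SELECT CompanyName FROM MyTable").fetchone()[0]
```

This is why GenerateInsertStatement, which builds the VALUES list by string formatting, can fail only on the subset of pages whose data happens to contain such characters.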

Non-frozen UDTs are not allowed inside collections CassandraCSharpDriver

I am using Cassandra for custom logging in my .NET Core project, with CassandraCSharpDriver.
Problem:
I have created a UDT for the params in my log, and added a list of the param UDT to the Log table as frozen.
But I am getting the error: Non-frozen UDTs are not allowed inside collections. I don't know why I am getting this error, because I am using the Frozen attribute on the list in my Log model.
logSession.Execute($"CREATE TYPE IF NOT EXISTS {options.Keyspaces.Log}.{nameof(LogParamsCUDT)} (Key text, ValueString text);");
Here is model:
public class Log
{
public int LoggingLevel { get; set; }
public Guid UserId { get; set; }
public string TimeZone { get; set; }
public string Text { get; set; }
[Frozen]
public IEnumerable<LogParamsCUDT> LogParams { get; set; }
}
My question is where I am going wrong: is my UDT script incorrect, or does the model need to change?
Thanks in advance.
I've tried using that model and Table.CreateIfNotExists ran successfully.
Here is the code:
public class Program
{
public static void Main()
{
var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect();
session.CreateKeyspaceIfNotExists("testks");
session.ChangeKeyspace("testks");
session.Execute($"CREATE TYPE IF NOT EXISTS testks.{nameof(LogParamsCUDT)} (Key text, ValueString text);");
session.UserDefinedTypes.Define(UdtMap.For<LogParamsCUDT>($"{nameof(LogParamsCUDT)}", "testks"));
var table = new Table<Log>(session);
table.CreateIfNotExists();
table.Insert(new Log
{
LoggingLevel = 1,
UserId = Guid.NewGuid(),
TimeZone = "123",
Text = "123",
LogParams = new List<LogParamsCUDT>
{
new LogParamsCUDT
{
Key = "123",
ValueString = "321"
}
}
}).Execute();
var result = table.First(l => l.Text == "123").Execute();
Console.WriteLine(JsonConvert.SerializeObject(result));
Console.ReadLine();
table.Where(l => l.Text == "123").Delete().Execute();
}
}
public class Log
{
public int LoggingLevel { get; set; }
public Guid UserId { get; set; }
public string TimeZone { get; set; }
[Cassandra.Mapping.Attributes.PartitionKey]
public string Text { get; set; }
[Frozen]
public IEnumerable<LogParamsCUDT> LogParams { get; set; }
}
public class LogParamsCUDT
{
public string Key { get; set; }
public string ValueString { get; set; }
}
Note that I had to add the PartitionKey attribute or else it wouldn't run.
Here is the CQL statement that it generated:
CREATE TABLE Log (
LoggingLevel int,
UserId uuid,
TimeZone text,
Text text,
LogParams frozen<list<"testks"."logparamscudt">>,
PRIMARY KEY (Text)
)
If I remove the Frozen attribute, then this error occurs: Cassandra.InvalidQueryException: 'Non-frozen collections are not allowed inside collections: list<testks.logparamscudt>'.
If your intention is to have a column like this LogParams frozen<list<"testks"."logparamscudt">> then the Frozen attribute will work. If instead you want only the UDT to be frozen, i.e., LogParams list<frozen<"testks"."logparamscudt">>, then AFAIK the Frozen attribute won't work and you can't rely on the driver to generate the CREATE statement for you.
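If the per-element frozen form is what you actually want, one option is to create the table by hand instead of via Table.CreateIfNotExists. A sketch, reusing the schema the driver generated above but moving frozen inside the list (hand-written, not produced by the driver):

```
CREATE TABLE Log (
    LoggingLevel int,
    UserId uuid,
    TimeZone text,
    Text text,
    LogParams list<frozen<"testks"."logparamscudt">>,
    PRIMARY KEY (Text)
)
```

The mapper can still read and write the column either way; only the DDL generation is affected.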
All my testing was done against cassandra 3.0.18 using the latest C# driver (3.10.1).

Update Unique property entity framework

public class Person
{
[Required]
public int? KupaId { get; set; }
[ForeignKey("KupaId")]
public Kupa Kupa { get; set; }
public int? newKupaId { get; set; }
[ForeignKey("newKupaId")]
public Kupa NewKupa { get; set; }
}
public class Kupa
{
public int Id { get; set; }
[Index("Ix_uniqueId", IsUnique = true)]
public int ? uniqueId { get; set; }
}
public class MyController:Controller
{
public Json EditKupa(Expression<Func<Person,bool>> criteria )
{
using (IKupotRepository<Person> _IPersonRepository = new SQlRepository<Person>())
{
Person personToEdit=_IPersonRepository.SingleOrDefault(criteria,GetIncludeProperties());
// Getting the new Kupa object from the db
newKupa = GetKupa(UniqueId);
// Changing the unique property to null
personToEdit.Kupa.ToremId = null;
personToEdit.Kupa.State = State.Modified;
personToEdit.NewKupa = newKupa;
// Assigning the unique id property the value that was in the first Kupa
personToEdit.NewKupa.ToremId = 1;
personToEdit.newKupaId = newKupa.Id;
personToEdit.NewKupa.State = State.Modified;
_IPersonRepository.SaveChanges();
}
}
When calling SaveChanges() I get an exception: unique key violation. Looking at the SQL profiler, I can see that EF 6 generates an update query for both Kupa objects, but it tries to update NewKupa.uniqueId before updating Kupa.uniqueId. Why?
Assuming you are using SQL Server as your database server, this is happening because you allow NULL values in that column, and SQL Server unique indexes treat NULL as a value like any other, so as soon as you have multiple rows with NULL in that column you get the violation. The standard workaround is a filtered unique index that only covers non-NULL values:
CREATE UNIQUE NONCLUSTERED INDEX Idx_UniqueId_NotNull
ON Kupa(uniqueId)
WHERE uniqueId IS NOT NULL;
However, there is no easy way to do this in EF; there is a workaround in this SO answer.
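The filtered-index behavior can be sanity-checked with Python's stdlib sqlite3, since SQLite's partial indexes mirror SQL Server's filtered indexes for this purpose (illustrative sketch, index name borrowed from the SQL above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Kupa (Id INTEGER PRIMARY KEY, uniqueId INTEGER)")
# Uniqueness is enforced only where uniqueId IS NOT NULL
conn.execute("""CREATE UNIQUE INDEX Ix_uniqueId_NotNull
                ON Kupa(uniqueId) WHERE uniqueId IS NOT NULL""")

conn.execute("INSERT INTO Kupa (uniqueId) VALUES (NULL)")  # allowed
conn.execute("INSERT INTO Kupa (uniqueId) VALUES (NULL)")  # also allowed
conn.execute("INSERT INTO Kupa (uniqueId) VALUES (1)")

dup_error = None
try:
    conn.execute("INSERT INTO Kupa (uniqueId) VALUES (1)") # rejected by the index
except sqlite3.IntegrityError as e:
    dup_error = str(e)

rows = conn.execute("SELECT COUNT(*) FROM Kupa").fetchone()[0]
```

Any number of NULL rows coexist, while a duplicate non-NULL value is rejected, which is exactly what the EF model's [Index(IsUnique = true)] on a nullable column cannot express on its own.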
