C# OData V4 batch request

We are implementing OData batch requests in our OData endpoint so that we can send multiple operations in a single request. One of the reasons for implementing batch requests is to be able to create an Invoice and its InvoiceDetails in a single request. If any operation produces an error or a validation message in the context, the entire set of operations should be rolled back (all changes undone).
I have a unit test that creates two entities in a single batch request. The first entity has correct values, so it gets saved to the DB, but the second entity has an error (a message in the context). In the response I see that the first entity is created and the second is not. What should I change so that the entire request is rolled back and nothing is saved to the DB if any operation fails?
I tried with and without EnableContinueOnErrorHeader, but the first entity still gets saved to the DB.
config.EnableContinueOnErrorHeader();
Thanks in advance,
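One approach often suggested for this (a sketch under assumptions, not part of the original question) is to wrap execution of the batch sub-requests in a System.Transactions.TransactionScope inside a custom batch handler and only complete the scope when every response succeeded. The type and member names below follow the classic Web API OData batching API (System.Web.OData.Batch / Microsoft.AspNet.OData.Batch, depending on package version), so verify them against the version you use:
// Sketch only: roll back the whole batch when any sub-request fails.
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Transactions;
using System.Web.Http;

public class TransactionalBatchHandler : DefaultODataBatchHandler
{
    public TransactionalBatchHandler(HttpServer httpServer) : base(httpServer) { }

    public override async Task<IList<ODataBatchResponseItem>> ExecuteRequestMessagesAsync(
        IEnumerable<ODataBatchRequestItem> requests, CancellationToken cancellation)
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            IList<ODataBatchResponseItem> responses =
                await base.ExecuteRequestMessagesAsync(requests, cancellation);

            // Flatten changeset responses and single-operation responses,
            // and commit only when all of them report success.
            bool allSucceeded = responses
                .OfType<ChangeSetResponseItem>().SelectMany(c => c.Responses)
                .Concat(responses.OfType<OperationResponseItem>().Select(o => o.Response))
                .All(r => r.IsSuccessStatusCode);

            if (allSucceeded)
            {
                scope.Complete();
            }

            return responses;
        }
    }
}
The handler would then be registered on the OData route in place of the default batch handler. One caveat: if each sub-request opens its own database connection, the ambient transaction can escalate to a distributed transaction, so sharing a single DbContext (and a single SaveChanges) per changeset is often the more robust option.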

Related

Using Efcore's DbContext with Websockets in ASP.NET (Long living DbContext)

We have an API written in ASP.NET with .NET 6. We use EF Core to communicate with a MySQL database. This works great for HTTP requests, but not for websockets.
We have a websocket endpoint which can receive messages from a client.
Those messages can trigger an action in the controller, which accesses the database.
For example a write action: add an entity x that I just created in my code to a table.
This works great the first time, but because the DbContext tracks changes, it results in an error if I try to add the same entity again (the entity was removed from the database by some other service or whatever).
This was to be expected, since we still have the same DbContext from the action that was executed before, and it still has the tracked changes.
The error looks like this; you're probably familiar with it.
System.InvalidOperationException: The instance of entity type 'Block' cannot be tracked because another instance with the key value '{Blocksowner: 3ae66710-5307-022f-2c31-5bcb1f468af2, Blockspartner: SYSTEM}' is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
The issue is that the DbContext should only be used for a single unit of work, which is fine for HTTP requests, where the scoped DbContext is disposed after the controller is disposed. With a websocket this obviously doesn't happen until the connection is closed and the controller is disposed, so the same DbContext is used for everything.
I'm stuck on how to handle this correctly.
My dirty ideas, which I don't like myself, would be:
Make the DbContext transient and manually wrap the service retrieved from DI in a using block
Same as above, but leave it scoped (I don't actually know if that works)
Use a DbContext Factory?
Clear the tracked entities every time I execute a different action.
I've searched for a solution to this kind of thing for a long time now, but couldn't find a clear answer on what the "right way" of doing this is.
You can scope the DbContext within the action executed when messages are received, and it will be disposed automatically when leaving the method.
(using new)
var contextOptions = new DbContextOptionsBuilder<ApplicationDbContext>()
    .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=Test")
    .Options;
using (var context = new ApplicationDbContext(contextOptions))
{
// action based on message received on socket
}
(using factory)
using (var context = _contextFactory.CreateDbContext())
{
// action based on message received on socket
}
See EF Core docs
Note: if you have a very high throughput of messages, you should look into context pooling.
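If you go with the factory, a minimal registration sketch (assuming EF Core on .NET 6; the context type, connection string name, and provider are illustrative, so swap UseSqlServer for your MySQL provider) could look like this:
// Program.cs: register a pooled DbContext factory so every incoming websocket
// message can create and dispose a short-lived context cheaply.
builder.Services.AddPooledDbContextFactory<ApplicationDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

// In the message handler, inject IDbContextFactory<ApplicationDbContext> and
// create one context per message:
// using var context = _contextFactory.CreateDbContext();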

How to implement an idempotency key in an ASP.NET Web API?

An app makes an HTTP POST with an idempotency key in the API request header.
On the server side, you want to check whether a request with that idempotency key has already been processed for this client.
If the request has not been processed, then we proceed with the method to CREATE, UPDATE or DELETE.
If the idempotency key has been used in a previous request, then we respond to the client with an error message.
How do we track the API request, the request count, the idempotency key used in the request, etc.?
By logging all API requests in the database and making a round trip to the database to check this information every time a new request is made? Or is there a better way?
You can try this open-source component on GitHub to solve your problem: IdempotentAPI
What I like doing in a fairly standard setup (database, EF Core, Web API) is to use a middleware to add (Context.Add()) the idempotency key to the database without committing.
Later on, in the controller, a service or some sort of handler, I make sure Context.SaveChanges() (or UnitOfWork.Commit()) is called only once (which should normally be the case since you’re supposed to update only 1 aggregate root per transaction).
This way you’re sure you’re saving atomically, your idempotency key will only be saved if your insert/update/delete is successful. If the idempotency key already exists in the database your insert/update/delete will fail.
Finally, what you can also do is cache your successful responses, so that in case of idempotency exception, you can simply return the cached response.
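A rough sketch of that approach (assuming ASP.NET Core with EF Core; the type, context, and header names are illustrative rather than taken from the answer):
using System;
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// The key itself is the primary key, so inserting a reused key violates a
// unique constraint and makes the single SaveChanges fail.
public class IdempotencyKeyRecord
{
    [Key]
    public string Key { get; set; } = default!;
    public DateTime CreatedUtc { get; set; }
}

public class IdempotencyMiddleware
{
    private readonly RequestDelegate _next;
    public IdempotencyMiddleware(RequestDelegate next) => _next = next;

    // AppDbContext stands in for the application's scoped DbContext.
    public async Task InvokeAsync(HttpContext httpContext, AppDbContext db)
    {
        if (httpContext.Request.Headers.TryGetValue("Idempotency-Key", out var key))
        {
            // Tracked, but not saved here: the single SaveChanges further down
            // the pipeline commits the key together with the business data, so
            // a duplicate key fails the whole unit of work atomically.
            db.Add(new IdempotencyKeyRecord { Key = key.ToString(), CreatedUtc = DateTime.UtcNow });
        }

        await _next(httpContext);
    }
}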

RESTful URL for RPC-like operation

I'm implementing a RESTful API for a DVD rental website using ASP.NET Web API. The domain model (simplified) consists of Customer and Subscription entities. A customer has an associated subscription.
Most of the operations exposed by the API are simple CRUD operations, which are easy enough to model according to RESTful principles. E.g.
GET /api/subscriptions/1 - get subscription with id 1
POST /api/subscriptions - add a new subscription
PUT /api/customers/2 - update customer with id 2 with contents of PUT body
There is a requirement to periodically check for expired subscriptions, by comparing the EndDate field on each Subscription entity read from our database with the current date. For each subscription that has expired, the CustomerStatus field of the associated customer should be set to Archived and an email sent to the customer. The operation will be exposed through our REST API and invoked daily from an external service.
What URL scheme should I use to expose this operation according to RESTful principles? My first thought is that it's a PUT operation on api/customers/{SomeResource} as it potentially involves updating the CustomerStatus field of zero or more customers and is also an idempotent operation.
For example:
PUT /api/customers/expired
Does this sound reasonable?
Note that there is no body sent in this request, as the customers whose statuses are being updated are queried from a database rather than being supplied by the end user. My understanding is that a PUT request doesn't have to include a body.
This is almost certainly a POST operation.
However, I question the design of your service. Why does the behaviour you describe need to be externally-controlled by way of a RESTful API? If the exact timing and nature of the operation is known beforehand, why not use some other means of scheduling the job...a means that is more straightforward and wouldn't raise these kinds of questions?
Ref: Stack Overflow
Edit: note that the operation described by the OP is not idempotent and thus not a qualifying PUT operation.
Additional edit: note that the .Net framework uses the POST method by default for service endpoints marked with the WebInvoke attribute. Per the documentation for this attribute, it represents an endpoint that "is logically an invoke operation". To me, this reads like a remote procedure call (i.e. RPC).
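For illustration, the operation as a POST action in classic ASP.NET Web API (a sketch only; the route and service call are hypothetical, not from the answer):
// The expiry sweep is modelled as an RPC-style operation invoked with POST,
// rather than as a PUT against a resource.
[HttpPost]
[Route("api/subscriptions/expire-check")]
public IHttpActionResult RunExpiryCheck()
{
    _subscriptionService.ArchiveExpiredCustomers(); // hypothetical domain service
    return Ok();
}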

OData v4 Changeset behaviour

I'm implementing an OData v4 Service with WebApi and I also implemented support for OData Changesets using the code from https://damienbod.wordpress.com/2014/08/14/web-api-odata-v4-batching-part-10/
Basically this is working, but now I'm wondering about the correct behaviour when modifying the same entity from multiple requests in one changeset.
Consider this example:
Content-ID: 1 - POST ~/Entity
-> Create new Entity.
Content-ID: 2 - PUT ~/Entity($Entity-ContentID1)/Company/$ref?$id=URI
-> Create link from new Entity to existing Company using the ContentID.
Content-ID: 3 - POST ~/Entity($Entity-ContentID1)/ChangeState
-> Execute action to change the state of the newly created Entity.
ChangeState can only be executed if a Company is linked. If a client sends all the requests in this order and I execute them in order, everything is fine.
But according to the OData spec, requests in a changeset are unordered.
What is the expected result if a client sends request 3 before request 2? With my current implementation this changeset will fail, but is this really okay?
It's quite hard for me to understand the correct semantic of changesets...
Batch requests are submitted as a single HTTP POST request to the batch endpoint of a service, and the requests in a changeset should be sent to the service in their original order.
I think what the spec means is that requests in a changeset may be processed out of order. In our lab we process them in the original order, but a different service may use a different order. What you are concerned about is not necessary: a client shouldn't expect the same result for a changeset like the one in your scenario; otherwise the requests should be put in different changesets, and then the result will be the same.

Asp.Net Web API preventing record duplication

I am using a C#.NET Web API for my iOS application, but I have concerns about multiple requests arriving at the same time.
Let's assume I try to prevent duplicating records while inserting a new record into Users table by:
Check if xxx#example.com exists in the Users table.
Insert if not exists.
Return OK.
Actually it's that simple, unless the Web API handles requests concurrently.
What if the related Web API method gets two requests at the same time (with the same e-mail)? When the first request reaches step 2 (but has not executed it yet), the second request will get a "not exists" answer, since step 2 for the first request has not been executed yet. Then two e-mail addresses will be saved and I will have duplicated records.
Using a lock on a static object seems like it would solve the problem, but it would create performance issues.
If I don't want the DB to get duplicated rows, how can I overcome that problem?
UPDATE:
I can't use a unique constraint on the e-mail column because I already have one on the Id column.
If you make the email address column in your table have a unique constraint, then all you have to do is insert the email address; if it is already there, the insert will fail, and if not, you will have inserted a new record.
You need to handle the failure, maybe responding with an appropriate code to the client so it knows the email already exists.
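As a sketch of that answer (assuming EF Core and ASP.NET Core; the entity and member names are illustrative), a table can carry a unique index on Email in addition to the primary key on Id:
// In the DbContext: a second unique constraint, independent of the Id primary key.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<User>()
        .HasIndex(u => u.Email)
        .IsUnique();
}

// In the controller: let the database enforce uniqueness and translate the
// constraint violation into an API response.
[HttpPost]
public async Task<IActionResult> Register(string email)
{
    _db.Users.Add(new User { Email = email });
    try
    {
        await _db.SaveChangesAsync();
        return Ok();
    }
    catch (DbUpdateException)
    {
        // A concurrent request (or an earlier one) already inserted this address.
        return Conflict("Email already exists.");
    }
}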
