Rollback SQL without transaction - C#

I have a Windows service that uploads data to a database and an MVC app that uses said service. The way it works today is something like this:
Upload(someStuff);
WriteLog("Uploaded someStuff");
ReadData(someTable);
WriteLog("Reading someTable-data");
Drop(oldValues);
WriteLog("Dropping old values");
private void Upload(SomeStuff someStuff) // "SomeStuff" stands in for the real payload type
{
    using (var conn = new SqlConnection(connectionString))
    {
        // perform the upload query
    }
}

private void WriteLog(string message)
{
    using (var conn = new SqlConnection(connectionString))
    {
        // insert the message into the log table
    }
}

private string ReadData(string table)
{
    using (var conn = new SqlConnection(connectionString))
    {
        // run the query and return the result
    }
}

// You get the gist.
The client can then see the current status of the upload through a query to the log-table.
I want to be able to perform a rollback if something fails. My first thought was to use a BeginTransaction() and then, at the very end, a transaction.Commit(), but that would make my status messages behave badly. The log would just show "starting upload" and then fast-forward to the last step, where it would wait for a long time before "Done".
I want the user to be able to see if the process is stuck on some specific step, but I still want to be able to perform a full rollback if something unexpected happens.
How do I achieve this?
Edit:
I don't seem to have been clear in my question. If I use a separate connection for the logging, that would indeed work, more or less. The problem is that the actual code executes super-fast, so the status messages would fly by too quickly for the user to read them before the final "committing" message, which takes 99% of the upload time.
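For illustration, a minimal sketch of that separate-connection logging (LogTable and connectionString are placeholder names; the point is that TransactionScopeOption.Suppress keeps the log insert outside the ambient transaction, so each status row commits immediately even if the upload later rolls back):

private void WriteLog(string message)
{
    using (var suppress = new TransactionScope(TransactionScopeOption.Suppress))
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var cmd = new SqlCommand("INSERT INTO LogTable (Message) VALUES (@msg)", conn);
        cmd.Parameters.AddWithValue("@msg", message);
        cmd.ExecuteNonQuery();
        suppress.Complete();
    }
}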

Design your table so that it has a (P)ending / (A)ctive / (D)eleted status flag. To perform an upload, new records are created as status P (pending); your very final step is to flip the current Active rows to Deleted and the Pending rows to Active (you could do that in a transaction). At your leisure, you can then purge the status D (deleted) records some time later.
In the event of an error, the pending records simply become Deleted.
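For illustration only, a minimal sketch of that final swap using a single SqlTransaction (MyTable, the Status column and the flag values are placeholder names):

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // The only step that must be atomic: retire the active rows
        // and promote the pending ones.
        new SqlCommand("UPDATE MyTable SET Status = 'D' WHERE Status = 'A'", conn, tx).ExecuteNonQuery();
        new SqlCommand("UPDATE MyTable SET Status = 'A' WHERE Status = 'P'", conn, tx).ExecuteNonQuery();
        tx.Commit();
    }
}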

Related

RESTful service to make a request to foreign API and update SQL Server database

I want to make a RESTful API (or anything else that can get it done, really) that works in a loop, performing a specified task every day at the same hour.
Specifically, I want it to access a foreign API, let's say, at midnight every day, request the specified data and update the database accordingly. I know how to make a request to an API and have it do something. But I want it to run automatically, so I don't have to interact with it at all, not even to make requests.
The reason for this is that I'm working on a project that spans multiple platforms (and even if it were only one platform, there would be several users), and I can't call the foreign API (mainly because it's a trial account; it's a school project) every time a user logs in or clicks a button on each platform.
I don't know how to do that (or whether it's even possible) with a web service. I've tried a web form doing it asynchronously with BackgroundWorker, but no luck.
I thought I might have better luck here with more experienced people.
Hope you can help me out.
Thanks in advance,
Fábio.
I don't know if I've got this right, but it seems to me that the easiest way to do what you want (have a program scheduled to run at a given time, every day) is to use the Windows Task Scheduler to schedule your application to run at the specific time you want.
I managed to get there, thanks to the help of @Pedro Gaspar - LoboFX.
I didn't want the Windows Scheduler, as I want the scheduling reflected in the code, and I don't exactly have access to the server where it's going to run. That said, what got me there was something like this:
private static string LigacaoBD = "something";
private static Perfil perfil = new Perfil(LigacaoBD);

protected void Page_Load(object sender, EventArgs e)
{
    Task.Factory.StartNew(() => teste());
}

private void teste()
{
    bool verif = false;
    while (true)
    {
        // UTC+1: reset the flag at 22:12:00 local time each day.
        if (DateTime.UtcNow.Hour + 1 == 22 && DateTime.UtcNow.Minute == 12 && DateTime.UtcNow.Second == 0)
            verif = false;
        if (!verif)
        {
            // Insert once, then hold off until the next reset.
            int resposta = perfil.Guardar(DateTime.UtcNow.ToString());
            verif = true;
        }
        Thread.Sleep(1000);
    }
}
It inserts into the database through a class library. With this loop, the bool guarantees that it only inserts once per day; when the clock reaches the specified hour, minute and second, the flag resets, allowing it to insert again. If the server goes down, it inserts as soon as it comes back up. The only problem is that if it has already inserted and the server then goes down, it will insert again on restart; but stored procedures can guard against that. Well, not for the DateTime.UtcNow.ToString() value, but that was just a test.
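A variation on the same idea that avoids waking up every second: compute the delay until the next scheduled time and sleep until then (a sketch only; RunDailyAsync and DoWork are hypothetical names):

private static async Task RunDailyAsync(TimeSpan timeOfDay, CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        // Work out how long until the next occurrence of timeOfDay (UTC).
        DateTime now = DateTime.UtcNow;
        DateTime next = now.Date + timeOfDay;
        if (next <= now)
            next = next.AddDays(1);

        await Task.Delay(next - now, ct);
        DoWork(); // e.g. perfil.Guardar(...)
    }
}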

Snapshot isolation in SQL / code blocking reads

I suspect I don't fully understand what's going on, or something weird is happening. (The first case is more likely, I guess.)
The big picture:
I'm trying to have a web service perform certain operations asynchronously, as they can be time-consuming and I don't want the client to wait for the operations to finish (it just queries for the results every now and again to see whether the operation is done).
The async code is wrapped in a transaction - in case something goes wrong, I want to be able to roll back any changes.
Unfortunately, the last step of the async code is to call a DIFFERENT service which queries the same database.
Despite wrapping the whole thing in a Snapshot transaction, the last step fails because the service cannot read from the database.
For that matter, while the async operation is underway I cannot perform simple SELECT statements against the database either.
Here's a sample of the code which I'm currently using to test out the transactions (using Entity Framework 5 model first):
using (var transaction = new System.Transactions.TransactionScope(
    System.Transactions.TransactionScopeOption.RequiresNew,
    new System.Transactions.TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.Snapshot
    }))
{
    var db = new DataModelContainer();
    Log test = new Log();
    test.Message = "TEST";
    test.Date = DateTime.UtcNow;
    test.Details = "asd";
    test.Type = "test";
    test.StackTrace = "asd";
    db.LogSet.Add(test);
    db.SaveChanges();

    using (var suppressed = new System.Transactions.TransactionScope(
        System.Transactions.TransactionScopeOption.Suppress))
    {
        var newDb = new DataModelContainer();
        var log = newDb.LogSet.ToArray(); // deadlock here... WHY?
    }

    test = db.LogSet.Where(l => l.Message == "TEST").Single();
    db.LogSet.Remove(test);
    db.SaveChanges();
    transaction.Complete();
}
The code creates a simple Log entry in the database (yeah, I'm playing around at the moment, so the values are rubbish). I've set the SQL database to allow snapshot isolation, and to my knowledge reads should still be permitted (these are tested in this code using a new, suppressed transaction and a new DataModelContainer). However, I cannot query the LogSet in the suppressed transaction or in SQL Server Management Studio - the whole table is locked!
So... why? Why is it locked if the transaction scope is defined as such? I've also tried other isolation levels (like ReadUncommitted) and I still cannot query the table.
Could someone please provide an explanation for this behavior?
In SSMS, set your current isolation level to SNAPSHOT and see if that corrects the problem - it's probably set to READ COMMITTED and would therefore still block on the pending updates.
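For example, in an SSMS query window (plain T-SQL; the table name depends on what your Log entities map to):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM Logs; -- placeholder table name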
Update:
You can allow READ COMMITTED to access versioned rows DB-wide by altering the following option (and avoiding having to constantly set the current isolation level to SNAPSHOT):
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON

Database error after DeleteObject(...) / SaveChanges()

Context: web application / ASP.NET 4.0 / EF 4.0 / MSSQLEXPRESS2012
I'm trying to do a simple delete as follows:
if (someObject != null)
{
    if (!context.IsAttached(someObject))
        context.SomeObjects.Attach(someObject);
    context.DeleteObject(someObject);
    context.SaveChanges();
}
The code executes without problems and SaveChanges returns the expected number of rows. But when I try to read the table afterwards, it times out (this happens in SSMS as well as in code).
It also seems to disrupt other processes, which return '8501 MSDTC on server is unavailable' and "No process is on the other end of the pipe" on subsequent site accesses. Which of the three errors the application shows seems somewhat random, but the timeout in SSMS always happens.
Inserts and updates on the same table execute without problems.
Apart from reading through a zillion posts, I tried wrapping it in try/catch and in a transaction with SaveChanges(System.Data.Objects.SaveOptions.None) and a manual AcceptAllChanges(); the debugger shows the correct object just before the delete - everything appears normal until the table read or until accessing another page.
I suspect the lock is somehow not being released, but other objects in different tables delete successfully with the exact same logic. I'm not sure where best to look next - any ideas are greatly appreciated!
I must have made a mistake with the transaction before - the following worked:
var transactionScope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    });
try
{
    using (transactionScope)
    { ...etc
The following link explains more about the whys of explicit transactions:
http://blogs.u2u.be/diederik/post/2010/06/29/Transactions-and-Connections-in-Entity-Framework-40.aspx
Still not sure why it works without a transaction for the other deletes, though.
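Putting the question's delete code together with that scope, a complete version of the working pattern might look like this (a sketch only; note the scope.Complete() call, which the truncated snippet above leaves out):

using (var scope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    if (!context.IsAttached(someObject))
        context.SomeObjects.Attach(someObject);
    context.DeleteObject(someObject);
    context.SaveChanges();
    scope.Complete(); // without this, the transaction rolls back on dispose
}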

Multiple asynchronous method calls to method while in a loop

I have spent a whole day trying various approaches using AddOnPreRenderCompleteAsync and RegisterAsyncTask, but no success so far.
I succeeded in making the call to the DB asynchronous using BeginExecuteReader and EndExecuteReader, but that misses the point. The async handling should not be the call to the DB, which in my case is fast; it should be afterwards, during the while loop, while calling an external web service.
I think the simplified pseudo code will explain best:
(Note: the connection string is using 'MultipleActiveResultSets')
private void MyFunction()
{
    // The connection string enables MultipleActiveResultSets, so the
    // UPDATE below can run while the DataReader is still open.
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var select = new SqlCommand("SELECT ID, UserName FROM MyTable", conn);
        using (SqlDataReader DR = select.ExecuteReader())
        {
            while (DR.Read())
            {
                // Call the external web service to get the current
                // temperature for this user - slow, and synchronous.
                string temperature = GetTemperatureFromWebService(DR["UserName"].ToString());

                // Update the local DB.
                var CmdUpdate = new SqlCommand(
                    "UPDATE MyTable SET Temperature = @temp WHERE UserName = @user", conn);
                CmdUpdate.Parameters.AddWithValue("@temp", temperature);
                CmdUpdate.Parameters.AddWithValue("@user", DR["UserName"].ToString());
                CmdUpdate.ExecuteNonQuery();
            }
        }
        // Connection is closed by the using block.
    }
}
Accessing the DB is fast. Getting the result back from the external web service is slow, and that, at least, should be handled asynchronously.
If each call to the web service takes just one second, then even with only 100 users it will take at least 100 seconds for the DB update to complete, which obviously is not an option.
There should eventually be thousands of users (currently there are only 2).
Currently everything works, just very synchronously :)
Thoughts to myself:
Maybe my way of approaching this is wrong?
Maybe the entire process should be run asynchronously?
Many thanks
Have you considered spinning this whole thing off into its own thread?
What is really your concern?
Avoiding the long task blocking your application?
If so, you can use a thread (see BackgroundWorker).
Processing several calls to the web service in parallel to speed up the whole thing?
If so, maybe the web service can be called asynchronously, providing a callback. You could also use the ThreadPool or Tasks, but you'll have to arrange to wait for all your calls or tasks to complete before proceeding to the DB update.
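For instance, a minimal Task-based sketch (assumes .NET 4.5; GetTemperatureAsync and UpdateTemperature are placeholder names):

private async Task UpdateAllUsersAsync(IEnumerable<string> users)
{
    // Start one task per user so the slow web-service calls overlap.
    var tasks = users.Select(async user => new
    {
        User = user,
        Temperature = await GetTemperatureAsync(user)
    }).ToArray();

    // Wait for every call to finish before touching the DB.
    var results = await Task.WhenAll(tasks);

    foreach (var r in results)
        UpdateTemperature(r.User, r.Temperature); // plain ADO.NET UPDATE
}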
You should keep the database connection open for as short of a time as possible. Therefore, don't do stuff while iterating through a DataReader. Most application developers prefer to put their actual database access code on a separate layer, and in a case like this, you would return a DataTable or a typed collection to the calling code. Furthermore, if you are updating the same table you are reading from, this could result in locks.
How many users will be executing this method at once, and how often does it need to be refreshed? Are you sure you need to do this from inside the web app? You may consider using a singleton for this, in which case spinning off a couple of worker threads is totally appropriate even if it's in the web app. Another thing to consider is using a Windows Service, which I think would be more appropriate for periodically updating data from a web service that doesn't even have anything to do with the current user's session.
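If the Windows Service route appeals, a minimal sketch might look like this (OnStart/OnStop are the standard ServiceBase overrides; RefreshTemperatures is a placeholder):

public partial class TemperatureService : ServiceBase
{
    private Timer _timer; // System.Threading.Timer

    protected override void OnStart(string[] args)
    {
        // Refresh once per hour, starting immediately.
        _timer = new Timer(_ => RefreshTemperatures(), null,
                           TimeSpan.Zero, TimeSpan.FromHours(1));
    }

    protected override void OnStop()
    {
        _timer.Dispose();
    }
}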
I'd say, create a thread for each web request and do something like this:
extra functions:
int completeThreads = 0;
int openThreads = 0;

void OperationCompleted()
{
    // Interlocked avoids losing increments when several threads finish at once.
    if (Interlocked.Increment(ref completeThreads) == openThreads)
    {
        // done!
    }
}
in the main program:
foreach (var request in requestsToMake) // one pass per request you need to open
{
    openThreads = openThreads + 1;
    // create and start the thread here
}
inside the threaded function:
// do your other stuff here
// then, when the operation is done:
OperationCompleted();
Now, I'm not sure how reliable this approach would be; that's up to you. But a normal web request shouldn't take a second - your browser doesn't take a second to load this page, does it? Mine loads it as fast as I can hit F5. It's just opening a stream; you could also try opening the web request once and reusing the same instance over and over, and see if that speeds things up at all.
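A standard-library alternative to the hand-rolled counter above is CountdownEvent (.NET 4), which does the same bookkeeping safely (a sketch only; GetRequests and DoRequest are placeholders):

var requests = GetRequests();
using (var countdown = new CountdownEvent(requests.Count))
{
    foreach (var request in requests)
    {
        var req = request; // avoid captured-variable surprises on older compilers
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { DoRequest(req); }
            finally { countdown.Signal(); }
        });
    }
    countdown.Wait(); // blocks until every request has signalled
}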

"The operation is not valid for the state of the transaction" error and transaction scope

I am getting the following error when I try to call a stored procedure that contains a SELECT statement:
The operation is not valid for the state of the transaction
Here is the structure of my calls:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            // do my first add update statement
            // do my call to the select statement sp
            bool DoesRecordExist = this.SelectStatementCall(id);
        }
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring)) // breaks on this line
    {
        // create parameters
    }
}
Is the problem with me creating another connection to the same database within the transaction?
After doing some research, it seems I cannot have two connections open to the same database within the TransactionScope block (apparently the second open connection promotes the lightweight transaction to a distributed MSDTC one). I needed to modify my code to look like this:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            // do my first add update statement
        }
        // moved the method call out of the first SQLServer using statement
        bool DoesRecordExist = this.SelectStatementCall(id);
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring))
    {
        // create parameters
    }
}
When I encountered this exception, there was an InnerException of "Transaction Timeout". Since this was during a debug session in which I halted my code for some time inside the TransactionScope, I chose to ignore the issue.
When this specific exception with a timeout appears in deployed code, I think the following section in your .config file will help you out:
<system.transactions>
  <machineSettings maxTimeout="00:05:00" />
</system.transactions>
I also came across the same problem; I changed the transaction timeout to 15 minutes and it worked. Note that the machine-wide maxTimeout (10 minutes by default) caps whatever you set per scope, so it may also need raising as shown above.
I hope this helps.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = new TimeSpan(0, 15, 0);

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    sp1();
    sp2();
    ...
    scope.Complete(); // without this, the scope rolls back when disposed
}
For any wanderer who comes across this in the future: if your application and database are on different machines and you are getting the above error, especially when using TransactionScope, enable network DTC access. The steps are:
Add firewall rules to allow your machines to talk to each other.
Ensure the Distributed Transaction Coordinator service is running.
Enable network DTC access: run dcomcnfg, go to Component Services > My Computer > Distributed Transaction Coordinator > Local DTC, right-click and choose Properties, then enable network DTC access.
Important: do not edit or change the user account and password in the DTC Logon account field; leave it as is, or you will end up reinstalling Windows.
I've encountered this error when my transaction was nested within another. Is it possible that the stored procedure declares its own transaction, or that the calling function declares one?
For me, this error came up when I tried to roll back a transaction block after encountering an exception inside another transaction block.
All I had to do to fix it was to remove my inner transaction block.
Things can get quite messy when using nested transactions; it's best to avoid them and just restructure your code.
In my case, the solution was neither to increase the timeout of the TransactionScope nor to increase the maxTimeout in the system.transactions section of machine.config.
Something strange was going on, because the error only appeared when the volume of data was very high.
The problem turned out to be that the code inside the transaction contained many foreach loops performing updates against different tables (I had to fix this in code developed by other people). With few records in the tables the error did not appear, but as the number of records increased, it did.
In the end, the solution was to change from a single transaction to several separate ones, one per foreach loop that had been inside the original transaction.
You can't have two transactions open at the same time. What I do is call transaction.Complete() before returning the results that will be used by the second transaction ;)
I updated some proprietary third-party libraries that handled part of the database process, and got this error on every call to save. I had to change a value in the web.config transaction key section because (it turned out) the referenced transaction-scope method had moved from one library to another.
There had been other errors connected with that key earlier, but I had got rid of them by commenting the key out (I know I should have been suspicious at that point, but I assumed it was now redundant, as it wasn't in the shiny new library).
