Tutorial: SqlTransaction has completed


I have an application which potentially does thousands of inserts to a SQL Server 2005 database. If an insert fails for any reason (foreign key constraint, field length, etc.) the application is designed to log the insert error and continue.

Each insert is independent of the others, so transactions aren't needed for database integrity. However, we do wish to use them for the performance gain. When I use transactions, I get the following error on about 1 out of every 100 commits.

    This SqlTransaction has completed; it is no longer usable.
        at System.Data.SqlClient.SqlTransaction.ZombieCheck()
        at System.Data.SqlClient.SqlTransaction.Commit()

To try to track down the cause I put trace statements at every transaction operation so I could ensure the transaction wasn't being closed before calling commit. I've confirmed that my app wasn't closing the transaction. I then ran the app again using the exact same input data and it succeeds.

If I turn the logging off it fails again. Turn it back on and it succeeds. This on/off toggle is done via the app.config without the need to recompile.

Obviously the act of logging changes the timing and causes it to work. This would indicate a threading issue. However, my app isn't multi-threaded.

I've seen one MS KB entry indicating a bug with .Net 2.0 framework could cause similar issues (http://support.microsoft.com/kb/912732). However, the fix they provided doesn't solve this issue.


Thanks for all the feedback. I've been working with someone from MSFT on the MSDN forums to figure out what's going on. It turns out the issue is caused by one of the inserts failing because of a datetime conversion problem.

The strange part is that this error only shows up when the failure is a datetime conversion error. Another error, such as a field being too long, doesn't cause this issue. In either case I would expect the transaction to still exist so I can call Rollback on it.

I have a full sample program to replicate this issue. If anyone wishes to see it or the exchange with MSFT you can find the thread on MSFT's newsgroups in microsoft.public.dotnet.framework.adonet under the SqlTransaction.ZombieCheck error thread.


Difficult to help without seeing code. I assume from your description you are using a transaction to commit after every N inserts, which will improve performance vs committing each insert provided N is not too big.

But the downside is: if an insert fails, any other inserts within the current batch of N will be rolled back when you rollback the transaction.

In general you should dispose a transaction before closing the connection (which will rollback the transaction if it hasn't been committed). The usual pattern looks something like the following:

    using (SqlConnection connection = ...)
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            ... do stuff ...
            transaction.Commit(); // commit if all is successful
        } // transaction.Dispose will be called here and will rollback if not committed
    } // connection.Dispose called here
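Since the speed-up comes from committing once per batch rather than once per row, that pattern extends naturally to a commit-every-N loop. The sketch below is illustrative only: `MyTable`, `MyColumn`, and the `BatchInserter` helpers are hypothetical names, and it uses the provider-agnostic `System.Data` interfaces rather than `SqlConnection` directly. A failed insert dooms every row in the current batch, which is logged and skipped:

```csharp
using System;
using System.Collections.Generic;
using System.Data;

static class BatchInserter
{
    // Pure helper: split the range [0, count) into (start, end) pairs
    // covering at most batchSize rows each.
    public static IEnumerable<Tuple<int, int>> Batches(int count, int batchSize)
    {
        for (int start = 0; start < count; start += batchSize)
            yield return Tuple.Create(start, Math.Min(start + batchSize, count));
    }

    // Sketch: insert values in batches, one transaction (and one Commit) per batch.
    public static void BulkInsert(IDbConnection connection, IList<string> values, int batchSize)
    {
        foreach (Tuple<int, int> batch in Batches(values.Count, batchSize))
        {
            using (IDbTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    for (int i = batch.Item1; i < batch.Item2; i++)
                    {
                        using (IDbCommand command = connection.CreateCommand())
                        {
                            command.Transaction = transaction;
                            // Hypothetical table and column names.
                            command.CommandText = "INSERT INTO MyTable (MyColumn) VALUES (@v)";
                            IDbDataParameter p = command.CreateParameter();
                            p.ParameterName = "@v";
                            p.Value = values[i];
                            command.Parameters.Add(p);
                            command.ExecuteNonQuery();
                        }
                    }
                    transaction.Commit(); // one round trip per batch, not per row
                }
                catch (Exception)
                {
                    // The failed insert dooms every row in this batch; log it and
                    // roll back only if the transaction is still alive (not zombied).
                    if (transaction.Connection != null)
                        transaction.Rollback();
                }
            }
        }
    }
}
```

Note the check on `transaction.Connection` before `Rollback`: once the server has already rolled the transaction back (the zombie case from the question), `Connection` returns null and calling `Rollback` would throw.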

Please post code if you need more help.


Keep in mind that your application isn't the only participant in the transaction - the SQL Server is involved as well.

The error you quote:

This SqlTransaction has completed; it is no longer usable. at System.Data.SqlClient.SqlTransaction.ZombieCheck() at System.Data.SqlClient.SqlTransaction.Commit()

doesn't indicate that the transaction has committed, only that it is complete.

My first suggestion is that your server has killed off the transaction because it either took too long (elapsed wall time) or got too large (too many changes or too many locks).

My second suggestion is to check that you're cleaning up connections and transactions appropriately. It's possible that you're running into problems because you are occasionally exhausting a pool of some resource before things get automatically recycled.

For example, DbConnection implements IDisposable, so you need to ensure you clean up appropriately - with a using statement if you can, or by calling Dispose() directly if you can't. 'DbCommand' is similar, as it also implements IDisposable.


This exception is thrown because the actual DB transaction has already been rolled back, so the .NET object representing it on the client side is already a "zombie".

A more detailed explanation is here. That post explains how to write correct transaction rollback code for such scenarios.
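The gist of that advice can be sketched as follows. This is not the linked post's exact code, just an illustrative helper (`TransactionHelper.TryRollback` is a hypothetical name): before rolling back, check whether the transaction has already been zombied (its `Connection` property returns null once the server has rolled it back), and tolerate the race where it dies between the check and the call:

```csharp
using System;
using System.Data;

static class TransactionHelper
{
    // Sketch: roll back only if the transaction is still alive.
    // Returns true if a rollback was actually issued.
    public static bool TryRollback(IDbTransaction transaction)
    {
        // A zombied transaction reports a null Connection; calling Rollback on it
        // throws "This SqlTransaction has completed; it is no longer usable."
        if (transaction == null || transaction.Connection == null)
            return false;
        try
        {
            transaction.Rollback();
            return true;
        }
        catch (InvalidOperationException)
        {
            // The server rolled it back between our check and the call.
            return false;
        }
    }
}
```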


Well, first, IMO you shouldn't expect your app to deal with "hard" errors like these. Your application should understand what the business rules are and account for them. DON'T force your database to be the business rule cop or the constraint rule cop. It should only be a data rule cop, and at that, it should be graceful about telling the interface with RETURNs and other methods. Schema stuff shouldn't be forced up that far.

On to answer your question: I suspect, without seeing any of your code, that you are trying to commit when an error has occurred and you don't know it yet. This is the reasoning behind my first statement ... trying to trap the errors the database gives, without having your application understand and participate in those rules, is giving yourself a headache.

Try this. Insert rows that won't fail with a database constraint or other errors, and process the inserts with a commit. See what happens. Then shove in some records that will fail, process a commit, and see if you get your lovely error. Third, run the failing records again, do a forced rollback, and see if you succeed.

Just some ideas. Again, as a summary, I think it has to do with not trapping "hard" errors from the database in a graceful way and expecting the front end to deal with them. Again, my expert opinion: NOPE, don't do it. It's a mess. Your app needs to overlap in knowledge with the rules on the back end. Those constraints are in place as a safety net for testing and for the occasional bug like this that surfaces, at which point the front end should be updated to handle, say, the forgotten lookup to a foreign key table.

Hope it helps.


My problem was similar, and it turned out that I was doing it to myself. I was running a bunch of scripts in a VS2008 project to create stored procedures. In one of the procs, I used a transaction. The script was executing the creation of the proc inside a transaction, rather than including the transaction code as part of the procedure. So when it got to the end without an error, it committed the transaction for the script.

The .Net code was also using and then committing a transaction, and the zombie effect came when it tried to close the transaction that had already been closed inside the SQL script. I removed the transaction from the SQL script, relying on the transaction opened and checked in the .Net code, and this solved the problem.


According to this post: http://blogs.msdn.com/b/dataaccesstechnologies/archive/2010/08/24/zombie-check-on-transaction-error-this-sqltransaction-has-completed-it-is-no-longer-usable.aspx

A temporary workaround is to wrap the rollback or commit operation in a try/catch. The following code is enough to stop the error from being thrown:

    public static void TryRollback(this System.Data.IDbTransaction t)
    {
        try
        {
            t.Rollback();
        }
        catch (Exception ex)
        {
            // log error in my case
        }
    }

    public static void TryCommit(this System.Data.IDbTransaction t)
    {
        try
        {
            t.Commit();
        }
        catch (Exception ex)
        {
            // log error in my case
        }
    }

Check this example from msdn website: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.aspx


You have proved your data is OK beyond all reasonable doubt.
FWIW, I would prefer to move the insert into a SPROC and not use the transaction at all.
If you need the UI to be responsive, use a background worker to do the database grunt.
To me, a transaction is for interrelated activities, not a time saving device. The insertion cost has to be paid somewhere along the line.

I recently used ANTS profiler on a database application and was amazed to see intermittent SqlClient exceptions showing in solidly performing code. The errors are deep in the framework when opening a connection. They never make it to the surface and aren't detectable by the client code. So... The point?
It is not all rock solid out there; move the heavy work off the UI and accept the cost. HTH Bob


I also faced the same problem.

This SqlTransaction has completed; it is no longer usable  

After doing some research, I found that my database log file was around 40 GB.
That large data volume was causing my SQL transaction to fail to commit.

So, my solution was to shrink the log file size, following this reference link.

    CREATE PROCEDURE sp_ShrinkLog_ByDataBaseName(
        @DataBaseName VARCHAR(200)
    )
    AS
    BEGIN
        DECLARE @DBName VARCHAR(200)
        DECLARE @LogFileName VARCHAR(200)
        SET @DBName = @DataBaseName

        SELECT @LogFileName = [Logical File Name]
        FROM
        (
            SELECT
                DB_NAME(database_id)  AS "Database Name",
                type_desc             AS "File Type",
                name                  AS "Logical File Name",
                physical_name         AS "Physical File",
                state_desc            AS "State"
            FROM SYS.MASTER_FILES
            WHERE database_id = DB_ID(@DBName)
              AND type_desc = 'LOG'
        ) AS SystemInfo

        SELECT LogFileName = @LogFileName

        EXECUTE(
        'USE ' + @DBName + ';

        -- Truncate the log by changing the database recovery model to SIMPLE.
        ALTER DATABASE ' + @DBName + ' SET RECOVERY SIMPLE;

        -- Shrink the truncated log file to 1 MB.
        DBCC SHRINKFILE (' + @LogFileName + ', 1);

        -- Reset the database recovery model.
        ALTER DATABASE ' + @DBName + ' SET RECOVERY FULL;
        ')
    END

Then execute it: EXEC sp_ShrinkLog_ByDataBaseName 'Your_DB_NAME'.

If this is not working for you, here is another solution which I think will work.
