DbDeploy (and its .NET counterparts) works fine when everyone is working on, and checking into, a single master branch.  Here is an example of the normal use case, so that we can compare it to the more complicated use cases that follow.

DeveloperA needs to make a change to the database and sees that the delta scripts are up to 77, so they make delta script 78 and continue testing the code they wrote to work with those database changes. Meanwhile DeveloperB also sees that the scripts are up to 77, so they make a script 78 and start testing with their code changes. Let's say that DeveloperA finishes first and checks in. DeveloperB goes to check in, sees they are out of date, pulls the latest, runs the unit tests again, and blamo! – a failure (two script 78s). At this point they are faced with an annoyance that can be worked around: they need to roll back their script 78, run the other script 78, rename their own script to 79, then re-run the unit tests and check in.
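For concreteness, after DeveloperB pulls the latest their delta directory might look something like this (the file names are invented for illustration; the tool orders and tracks scripts by the leading number):

077_add_customer_table.sql
078_add_gender_column.sql (DeveloperA's, already applied on master)
078_add_order_tracking.sql (DeveloperB's, which has to be rolled back and renamed to 079)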

Let's take the same scenario and use date-based numbering, or timestamping. The last delta script checked in on master is 140901.1047. Notice I have switched to using decimals as the script numbers, with the number being yyMMdd.hhmm. DeveloperA wants to make script change 140907.1532 and DeveloperB wants to make 140908.0854. When DeveloperB goes to check in, they pull the latest and run the unit tests. At this point the tool could roll back 140908.0854, apply 140907.1532, then re-apply 140908.0854. Or, if you are "feeling lucky", the tool could just "fill in the hole" and apply 140907.1532, leaving the other script alone. The determination of whether or not to roll back could be made by whether there are undo scripts for all of the scripts that would need to be rolled back. If there are, roll back; if not, just apply the missing script.
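To make that concrete, here is a rough sketch of what the two timestamped deltas might look like (the table and columns are invented for illustration; the --//@UNDO marker is the same DbDeploy-style undo separator used in the examples further down):

-- 140907.1532 (DeveloperA's delta)
alter table Sales.Orders add ShippedDate datetime null;
--//@UNDO
alter table Sales.Orders drop column ShippedDate;

-- 140908.0854 (DeveloperB's delta)
alter table Sales.Orders add TrackingNumber varchar(50) null;
--//@UNDO
alter table Sales.Orders drop column TrackingNumber;

Because both deltas carry undo sections, the tool can safely roll back 140908.0854, apply 140907.1532, and re-apply 140908.0854; if either undo were missing, filling in the hole is the only option left.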

The problem gets much more complicated when there are multiple developers working on multiple feature branches.  This is more like the Git branching model.  In this scenario let’s say there are two teams, TeamA and TeamB.  Each of the teams develops a set of scripts to support their particular feature. 

Let’s say TeamA develops:
140903.1242
140909.1117
140909.1734

And TeamB develops:
140904.1512
140905.0933
140911.1802

Assuming TeamA checks into master first, when TeamB gets the latest they merge in TeamA's scripts.  *This time*, after TeamB checks in, TeamA will *also* need to merge in TeamB's scripts.  However, both teams should end up with a database that has the correct scripts applied to it. The possibility exists that one team's scripts will force another team's scripts to change.  Let's say TeamA's script 140903.1242 renames TableX.ColumnN to ColumnA, and TeamB's script 140904.1512 uses ColumnN (in a new view that they have created, for example). When TeamB gets the latest and tries to run the unit tests, blamo! – an error in a script.  If we were "filling in the holes" it would actually be 140903.1242 that caused the error, and if we rolled back it would be 140904.1512 that would cause the error.  The point is that one or more scripts that have already been applied may need to change to resolve the incompatibility.
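To make the incompatibility concrete, the two scripts might look something like this (the view is invented for illustration; the table and column names come from the scenario above):

-- TeamA, 140903.1242: rename the column
exec sp_rename 'TableX.ColumnN', 'ColumnA', 'COLUMN';
--//@UNDO
exec sp_rename 'TableX.ColumnA', 'ColumnN', 'COLUMN';

-- TeamB, 140904.1512: written against the old column name,
-- so it fails once ColumnN has become ColumnA
create view TableXByColumnN as
	select ColumnN from TableX;
--//@UNDO
drop view TableXByColumnN;

TeamB's 140904.1512 has to be edited to use ColumnA before the merged set of deltas will run cleanly, even though it has already been applied to TeamB's own database.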

Timestamping doesn't solve everything, but it comes pretty close. One use case that isn't supported by timestamping is solved by hashing. Take the case of a single developer or team working on their own machine, trying to figure out the right way to make a script change.  They may try a first version of a script that adds a column with an integer type, then realize that it must also allow NULLs.  They *could* create scripts to support every little change that they make, but that feels cumbersome, verbose, and confusing to humans trying to follow the chain of scripts.  It would be nice if the tool helped in this scenario.  So in the case of a single developer, they change the script that has already been run (which may not even be the last script that was run) to allow NULLs.  The tool then sees that the script has changed, rolls back to just before that script, and runs it again to create the column with the correct type.
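A minimal sketch of how hashing could support this, assuming the changelog table the tool already keeps is extended with a content-hash column (this layout is hypothetical; the real DbDeploy changelog records roughly the change number, when and by whom it was applied, and a description):

-- hypothetical changelog layout with a hash column
create table ChangeLog (
	ChangeNumber decimal(10,4) not null primary key,  -- e.g. 140907.1532
	CompleteDate datetime not null,
	AppliedBy varchar(100) not null,
	Description varchar(500) not null,
	ContentHash char(64) not null  -- e.g. a SHA-256 of the script file's contents
);

On each run the tool re-hashes every script it has already applied; a mismatch means the script was edited after being applied, so the tool runs that script's undo (plus the undo of anything applied after it) and then re-applies from that point forward.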

One last point. Now that we have scripts being applied and reapplied due to the two scenarios mentioned above, there is another change we need to make when authoring the scripts so that we don't lose data unnecessarily. Everyone knows to use RENAME instead of DROP and ADD. However, let's take the simple case of adding a new column. The script might look like this:

alter table Sales.Customers add Gender char(1);

Of course we would also want an undo operation:

alter table Sales.Customers add Gender char(1);
--//@UNDO
alter table Sales.Customers drop column Gender;

After the script has been applied, if we have been using the database and adding gender information, and for some reason we need to roll back and forward again, we lose all of our gender data. All we need to do is place the information into a temporary table before dropping the column; we need to save both the primary key and the dropped column. So a solution for SQL Server might look like this:

alter table Sales.Customers add Gender char(1);
-- if a previous undo stashed the gender data, restore it and remove the stash
if object_id('tempdb..#Sales_Customers_Gender') is not null
begin
	update c set
		c.Gender = tmp.Gender
	from Sales.Customers c
	inner join #Sales_Customers_Gender tmp on c.CustomerId = tmp.CustomerId
	drop table #Sales_Customers_Gender
end
--//@UNDO
-- stash the primary key and the column we are about to drop
select CustomerId, Gender into #Sales_Customers_Gender from Sales.Customers;
alter table Sales.Customers drop column Gender;

It is a little more work, but it maintains the data, which is the primary goal of tools like DbDeploy anyway.

There are two types of database deployment tools; they are generally categorized as automatic and manual. The problem with the automatic kind is that it can't always figure out what to do. Here are some examples of things that automatic database migrations can't figure out but that are fairly easy to code up manually (a sketch of handling the first case by hand follows the list).
1) When adding columns without defaults, what data should be used to fill in the values?
2) When splitting or moving columns, where does the existing data go?
3) When renaming a column, how does the tool detect that it isn’t just a drop and an add?
4) When adding a constraint that existing data violates, how should the script alter that data?
5) When changing the data type of a column how should the existing data be converted?
6) What if data needs to be changed outside of or because of a schema change?
These reasons were paraphrased from this article.
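For example, the first case is straightforward to handle by hand. A rough SQL Server sketch (the LoyaltyTier column, the TotalPurchases column, and the back-fill rule are all invented for illustration; like the Gender example above, it assumes the tool runs each statement separately):

-- add the column as nullable, back-fill it with values no tool could guess,
-- then tighten it to NOT NULL
alter table Sales.Customers add LoyaltyTier varchar(10) null;
update Sales.Customers set LoyaltyTier =
	case when TotalPurchases >= 10000 then 'Gold' else 'Standard' end;
alter table Sales.Customers alter column LoyaltyTier varchar(10) not null;
--//@UNDO
alter table Sales.Customers drop column LoyaltyTier;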

Because of these issues, for large, complicated databases or databases with a lot of critical data, most developers end up choosing explicit manual migrations. There are several tools of this nature out there, but the most widely known is DbDeploy.

DbDeploy itself is a Java program (http://dbdeploy.com/, https://code.google.com/p/dbdeploy/wiki/GettingStarted). It is one of the few database deployment programs that support Oracle as well as moving both forwards and backwards through the deltas. It is old (the second version came out in early 2007), but well used and respected by the community.

In 2007 DbDeploy.Net 1.0 was released (http://sourceforge.net/projects/dbdeploy-net/). It is called 1.0 even though it comes from the 2.0 version of the Java code. It was released and then kind of sat there on SourceForge because:
1) .NET developers weren't heavily contributing to Open Source in general.
2) SourceForge as a code hosting service was becoming less popular.
3) It did what it was designed to do, and no changes were necessary unless someone dreamed up a new feature.

Anyway, fast forward to 2009 and DbDeploy 3.0 for Java is released. Also during these years we start to see GitHub emerge as the Open Source market leader. Now here is where the problem comes in. Without notifying anyone, GitHub repositories are created for the DbDeploy (https://github.com/tackley/dbdeploy) and DbDeploy.Net (https://github.com/brunomlopes/dbdeploy.net) projects. However, there is no mention of them anywhere, so unless you specifically go and look there, you wouldn't know they exist.

In 2012, Robert May aka rakker91 ports the Java 3.0 version to .NET, calls it DbDeploy 2 (even though it came from the Java version 3.0) and posts it on CodePlex (a Microsoft open source host). But again, unless you know to look there, nothing.

In 2013 Gregg Jensen makes the first significant outside contribution to DbDeploy.Net 1 in a while. As he does this he notes on the original SourceForge page:
“dbdeploy.NET has been updated by the community on GitHub. New documentation and features have been added at github.com/brunomlopes/dbdeploy.net. I have used dbdeploy.NET for a while, and I like how it works so I contributed there. Gregg”
This is the first breadcrumb left so that someone from the outside world could actually discover something is happening to the project.

Soon thereafter (July 2013), DbDeploy.NET 2 (the new code base) is formally released on CodePlex.

So in short we have two different independently evolving code bases to choose from. This is a problem in itself, because it means there has to be time spent investigating which code base is the correct one to start from. I downloaded both versions and started poking around. Here is what I found out:
1) DbDeploy.Net 1: The unit tests did not run, and it uses a schema that is harder to support on older versions of Oracle (an identity column, which would need to be a sequence).
2) DbDeploy.Net 2 (from Java 3): Dropped Oracle support. This is something that is often done accidentally in a rewrite, but here it was due to the lack of an Oracle database to test against.

I think Oracle is actually one of the most important use cases for DbDeploy.NET, for several reasons. The first is that there are a ton of legacy Oracle databases, whereas a lot of SQL Server applications were written or rewritten with Entity Framework, which provides migrations out of the box. Also, there are a multitude of similar tools for SQL Server, but the database deployment tools for Oracle are lacking in many ways.

Anyway, I started tackling the problems in both code bases, and believe it or not, it was actually easier to add Oracle support to the new program than it was to fix the unit tests and schema problems in the old one. So expect check-ins soon.