Managing the Database Development Lifecycle Part 1

Posted in Programming by Brian Russell on November 8, 2006

Finally, connected to the Internet… 

So first of all, I want to make sure I rub it in that we are in Vegas right now.  Sure, we have to attend the conference and learn, but that still beats being in rainy Washington right now!  I will have more on that later.  This post is about what I picked up from the pre-conference workshop I attended, titled "Managing the Database Development Lifecycle".  Let me say, the title and the content are really two different things.  Sure, you could twist it enough to believe the title covered the workshop contents, but everyone now knows it was a detailed description of Microsoft's new Visual Studio Team System for Database Professionals tool.  Don't get me wrong, the tool does a lot, but please MS, name the workshop for what it is.

So, right off the bat the speaker told us the handouts we got were only half of the slides (way to be prepared).  To download the full deck of slides, point your browser to http://sqldev.net/datadudefullday.zip.

The first thing the speaker wanted us to understand was what exactly the new tool does for you.  Its sole purpose is to track the schema of your database.  It is really that simple.  You need to switch your thinking: the database project, not the current 'live' database, holds the authoritative schema.  The normal trend in the industry is to operate the way we do today, with the full 'live' database as our 'truth'.  He used the term truth, and it stuck with me like a fly on flypaper.  I will get to this more later on.

This tool, at least in V1, does not create pretty graphics for you; it is basically a way to source control your schemas just as we do with our C# code.  This lets us version the schema, apply labels to it, compare previous versions, and keep a history of changes and who made them.  The tool supports any source control system that implements an MSSCCI (Microsoft Source Code Control Interface) provider.  This means SVN, CVS, TFS, VSS, etc.  So it is pretty flexible.

To start a new project, you take one of three actions.  You can create it from scratch.  Not very many people do this, so let's move to the next one.  You can import a schema from an existing database creation script.  Since some people already source control their scripts, the tool lets them import those directly into the new project.  The last way to start a new project is to point the tool at an existing database; it will scan the objects and create the schema scripts.  This option is probably the most obvious one.

When you point it at a database to load from, it parses every object in the database out to the lowest level into individual .SQL files.  This means that primary key constraints are scripted in a separate file from the table itself, which allows the tool to version changes at the lowest possible level.  At this point, I guess an explanation of the .SQL files is in order.  First, they are just SQL statements that can be run; nothing new as far as content is concerned.  Each file can hold only one object, so your table script can't also add a constraint in the same file; the schema won't build if it does.  The files are UTF-encoded by default, so make sure your source control system treats them as text.
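To make that concrete, here is a rough sketch of how one table might get split up (the table, constraint, and file names are my own, not from the workshop):

-- dbo.Customer.table.sql -- the table by itself; one object per file
CREATE TABLE dbo.Customer
(
    CustomerID int NOT NULL,
    Name       nvarchar(100) NOT NULL
);

-- dbo.PK_Customer.pkey.sql -- the primary key constraint lives in its own file
ALTER TABLE dbo.Customer
    ADD CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID);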

After you first create your project, you have something you can work with.  Even if someone updates the live database without using the tool, you can run a schema compare to diff the live database against the latest schema in source control.  If there are changes, you can bring them into the project.

You do need a local instance of SQL Server running on your machine; the tool uses it for some of the features included with the product.  No getting around this.  So just install SQL Server Developer Edition and move on…no arguing like some of the people in the crowd did.  It provides you a sandbox to build and test in before pushing changes into source control anyway, so it isn't a big deal.

Someone asked the question: "Since this tool only cares about schema, how is lookup data treated, since it is part of the base database install?"  The answer was pretty easy: there is no support for lookup data (aka insert statements).  The reason is that they couldn't figure out how to keep the lookup insert statements in sync with the schema.
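So something like this (my own made-up table and values, not from the talk) would have to live in a hand-maintained script outside the project:

-- Lookup data stays in a hand-maintained script; the tool versions schema only.
INSERT INTO dbo.OrderStatus (StatusID, StatusName) VALUES (1, N'Open');
INSERT INTO dbo.OrderStatus (StatusID, StatusName) VALUES (2, N'Shipped');
INSERT INTO dbo.OrderStatus (StatusID, StatusName) VALUES (3, N'Closed');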

Once you make changes to the schema through the tool, you can build the project, which generates a change script.  Changes are tracked at the object level, so only what changed ends up in the script, and the statements are placed in the correct order so the script will work.
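To give you a feel for it, here is my own mock-up of what a small change script might look like (illustrative only, not actual tool output); note the new column is added before the view that references it:

-- Mock-up of a generated change script: dependent objects come in order.
ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
GO

-- This view depends on the new column, so it comes after the ALTER.
CREATE VIEW dbo.CustomerEmails
AS
    SELECT CustomerID, Email FROM dbo.Customer;
GO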

Once you are satisfied with the change script…and please always review your change script, it is generated by a tool after all…you can deploy it to the server.  There are options to automatically create a backup before deploying.  Also, if an error occurs while the change script runs, everything is rolled back, because the whole script runs in a transaction.  He mentioned that you should always do a test deployment to a staging database first to make sure everything works as expected.
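Conceptually, the deployment behaves something like this sketch (mine, not the tool's actual output):

-- Conceptual sketch of the transactional deploy, not actual tool output.
SET XACT_ABORT ON;  -- any runtime error aborts and rolls back the transaction
BEGIN TRANSACTION;

ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
-- ...the rest of the change-script statements...

COMMIT TRANSACTION;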

At this point, someone asked a stupid question.  It was something like: what if I screw up, is there a rollback?  The answer from the speaker was great, though; he said (paraphrased): "The tool can't think for you; you have to own your actions."  Even with that answer, he went on to say that you can pull an earlier version of your schema from source control and build it against the recently updated database.  This creates a change script that changes it back.

When deploying your script, if any action will cause data loss, you will have to manually accept it before the tool will continue.  Nice safeguard.
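The classic cases are dropping a column or shrinking one (again, my own examples):

-- Both of these can destroy data, so the tool makes you confirm them first.
ALTER TABLE dbo.Customer DROP COLUMN Email;               -- existing values are gone
ALTER TABLE dbo.Customer ALTER COLUMN Name nvarchar(50);  -- shrinking risks losing data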

After going over the basic operations, we started getting into some of the nicer features of the tool.  The first one we talked about was the fact that the tool tracks dependencies between objects in real time.  He went very in-depth on this feature, and it has some limitations, but overall it would be very welcome, especially on large databases.  To explain what this means, consider this scenario: you are looking through your database and think a table is no longer used.  Normally, you might just delete the table and go on about your way.  Well, if any views or stored procedures relied on that table, they are going to blow up the next time they run.  Unless you go through every stored procedure and view in your database before dropping a table, you can't guarantee you won't miss something.

This is where the tool helps you out.  You just exclude the table from the build, and the tool immediately shows you an error/warning list containing any broken references.  That table you were about to drop does, in fact, have things depending on it.  Now you find these things out at "compile" time instead of run time, which is huge for the database world in my opinion.  I used a table as the example just now, but this works down to the column level.
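For example, say your schema contains this view (my own example).  Exclude dbo.Customer from the build, and the view's reference breaks:

-- If dbo.Customer is excluded from the build, this view no longer resolves,
-- and the tool's error list lights up at "compile" time.
CREATE VIEW dbo.CustomerNames
AS
    SELECT CustomerID, Name
    FROM dbo.Customer;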

Another feature he showed us was how the tool won't allow ambiguous statements.  There are statements in the SQL world that will run, but that are ambiguous when you read them.  The tool won't allow these because it needs to be able to resolve every reference correctly itself.
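Here is the kind of thing he meant (my own example, not his).  SQL Server will happily run the first query as long as OrderDate exists in only one of the tables, but the tool wants the second form, where every column reference is qualified:

-- Runs fine if OrderDate exists in only one table, but reads as ambiguous:
SELECT c.Name, OrderDate
FROM dbo.Customer AS c
JOIN dbo.[Order] AS o ON o.CustomerID = c.CustomerID;

-- The tool insists on qualified references so it can resolve them itself:
SELECT c.Name, o.OrderDate
FROM dbo.Customer AS c
JOIN dbo.[Order] AS o ON o.CustomerID = c.CustomerID;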

All of a sudden a lady walked into the room; it was kind of quiet, so everyone looked at her.  Most of the guys did a double take.  It was at this point I realized that this session had a lower guy-to-girl ratio than I am normally used to at these types of conferences.  If you are a guy looking to get chicks but still want to be a geek (because normally the guy geeks don't look to pick up women), you might want to focus your career on the SQL world.  Of course, this is a small sample and may not represent real life.

Part 2 of Day 1 coming soon…
