Our current scenario is like this:
we have an existing database that needs to be updated for each new release that we install
we have to do this from individual SQL scripts (we can't use DB compare/diff tools)
the installation should run the SQL scripts as automatically as possible
the installation should only run those SQL scripts that haven't been run before
the installation should dump a report of which scripts ran and which did not
installation is done by the customer's IT staff, who aren't awfully SQL-knowledgeable
Whether this is a good setup or not is beyond this question - right now, take it as an absolute given - we can't change it this year or next, for sure.
Right now, we're using a homegrown "DB Update Manager" to do this - it works, most of the time, but the amount of effort needed to make it truly automatic seems like too much.
Yes, I know about SQLCMD - but that seems a bit "too basic" - or not?
Does anyone out there do the same thing? If so - how? Are you using some tool and if so - which one?
Thanks for any ideas, input, thoughts, pointers to tools or methods you use!
Marc
I have a similar situation. We maintain the database object scripts in version control. For a release the appropriate versions are tagged and pulled from version control. A custom script concatenates the individual object scripts into a set of Create_DB, Create_DB_Tables, Create_DB_Procs, ...
At a prior job I used manually crafted batch files and OSQL to run the database create/update scripts.
In my current position we have an InstallShield setup with a custom "Install Helper" written in C++ to invoke the database scripts using SqlCmd.
Also, like CK, we have a SchemaVersion table in each database. The table contains both app and database version information. The schema version is just an integer that gets incremented with each release.
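For illustration only, here's a rough sketch of what such a table might look like - the names and columns are illustrative guesses, not necessarily the exact schema we use:

CREATE TABLE dbo.SchemaVersion
(
    SchemaVersionId int IDENTITY(1,1) PRIMARY KEY,
    AppVersion      varchar(20) NOT NULL,   -- e.g. '2.3.1'
    SchemaVersionNo int         NOT NULL,   -- the integer that gets incremented each release
    AppliedOn       datetime    NOT NULL DEFAULT GETDATE()
);

-- each release's update script finishes by bumping the version
INSERT INTO dbo.SchemaVersion (AppVersion, SchemaVersionNo)
VALUES ('2.3.1', 42);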
Sounds complicated but it works pretty well.
I have a similar setup to this and this is my solution:
Have a dbVersion table that stores a version number and a datetime stamp.
Have a folder where scripts are stored with a numbering system, e.g. x[000]
Have a console / GUI app that runs as part of the installation and compares the dbVersion number with the numbers of the files.
Run each new file in order, in a transaction.
This has worked for us for quite a while.
Part of our GUI app allows the user to choose which database to update, then a certain string #dbname# in the script is replaced by the database name they choose.
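To sketch the idea (the table and column names here are only illustrative, not our exact implementation), each numbered script effectively gets wrapped like this by the updater, so it only runs once and the version is bumped in the same transaction:

BEGIN TRANSACTION;

-- only apply script number 12 if the database is still behind it
IF ISNULL((SELECT MAX(VersionNumber) FROM dbo.dbVersion), 0) < 12
BEGIN
    -- ... the actual contents of script x012 go here ...

    INSERT INTO dbo.dbVersion (VersionNumber, AppliedOn)
    VALUES (12, GETDATE());
END

COMMIT TRANSACTION;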
You might try Wizardby: it allows you to specify database changes incrementally, and it will apply these changes in a very controlled manner. You'll have to write an MDL file and distribute it with your application along with the Wizardby binaries; during installation, setup will check whether the database version is up to date and, if not, it will apply all necessary changes in a transaction.
Internally it maintains a SchemaInfo table, which tracks which migrations (versions) were applied to a particular instance of the database, so it can reliably run only required ones.
If you maintain your changes in source control (e.g. SVN) you could use a batch file with SQLCMD to deploy only the latest changes from a particular SVN branch.
For example,
rem Code Changes
sqlcmd -U username -P password -S %1 -d DatabaseName -i "..\relativePath\dbo.ObjectName.sql"
Say you maintain an SVN branch specifically for deployment. You would commit your code to that branch, then execute a batch file which would deploy the desired objects.
Drawbacks include the inability to check on the fly for table-related changes. For example, if one of your SQL scripts adds a column and you rerun the script, it won't be smart enough to see that the column was already added in a previous run. To mitigate that, you could build an app that generates the batch file for you, with some logic to interact with the destination database, check which changes have or haven't already been applied, and act accordingly.
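One way to soften that drawback at the script level (just a sketch - the table and column names are hypothetical) is to write each change defensively so rerunning it is harmless:

-- only add the column if it isn't already there, so the script can be rerun safely
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customers')
                 AND name = 'MiddleName')
BEGIN
    ALTER TABLE dbo.Customers ADD MiddleName nvarchar(50) NULL;
END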
Related
I am starting to build a new database for my project using PostgreSQL. (I am new to PostgreSQL and database by the way.)
I think my development workflow is very bad, and here is a part of it:
create table/view/function with pgAdmin.
determine the name of the file before saving the code.
The goal is to be able to recreate the database automatically by running all the saved scripts, and I need to know the order to run these scripts in for dependency reasons. So I add a number to each file indicating the order.
for example: 001_create_role_user.ddl, 002_create_database_project.ddl, 013_user_table.ddl
save the code.
commit the file to repository using GIT.
Here are some of the drawbacks I can think of:
I can easily forget what changes I made. For example, I created a new type or edited a comment.
It is hard to determine a name (order) for the file.
Changing the code would be a pain in the ass, especially when the new code changes the order.
So my workflow is bad. I was wondering what other Postgres developers' workflows look like.
Are there any good tools (free or cheap) for editing and saving scripts? good IDE maybe?
It would be great if I can create automated unit tests for the database.
Any tool for recreating the database? CI server tool?
Basically I am looking for any advice, good practice, or good tool for database development.
(Sorry, this question may not fit for the Q&A format, but I do not know where else to ask this question.)
Check out liquibase. We use it in the company I work at to setup our PostgreSQL database. It's open source, easy to use and the changelog file you end up with can be added to source control. Each changeset gets an id, so that each changeset is only run once. You end up with two extra tables for tracking the changes to the database when it's run.
While it's DB-agnostic, you can use PostgreSQL SQL directly in each changeset, and each changeset can have its own comments.
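For example, a SQL-formatted changelog can look something like this (the author and table names are just placeholders):

--liquibase formatted sql

--changeset alice:1
CREATE TABLE account (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
--rollback DROP TABLE account;

--changeset alice:2
CREATE VIEW active_accounts AS
    SELECT id, name FROM account;
--rollback DROP VIEW active_accounts;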
The only caveat from having used it is that you have to caution yourself and others not to re-use a changeset once it's been applied to a database. Any change to an already applied changeset (even whitespace) results in a different checksum, which can cause liquibase to abort its updates. This can lead to failed DB updates in the field, so each update to any of the changelogs should be tested locally first. Instead, all changes, however minor, should go into a new changeset with a new id. There is a changeset sub-tag called "validCheckSum" to let you work around this, but I think it's better to enforce always making a new changeset.
Here are the doc links for creating a table and creating a view for example.
Well, your question is actually quite relevant to any database developer, and, if I understand it correctly, there is another way to get to your desired results.
One interesting thing to mention is that your idea of separating different changes into different files is the concept of migrations in Ruby on Rails. You might even be able to use the rake utility to keep track of a workflow like yours.
But now to what I think might be your solution. PostgreSQL - and other databases too, to be honest - has specific utilities to handle data and schemas in the way you probably need.
The pg_dumpall command-line executable will dump the whole database into a file or the console, in a way that the psql utility can simply "reload" into the same, or into another (virgin) database.
So, if you want to keep only the current schema (no data!) of a running database cluster, you can, as the postgres-process owner user:
$ pg_dumpall --schema-only > schema.sql
Now the schema.sql will hold exactly the same users/databases/tables/triggers/etc, but not data. If you want to have a "full-backup" style dump (and that's one way to take a full backup of a database), just remove the "--schema-only" option from the command line.
You can reload the file into another cluster (it should be a virgin one - you might mess up a database that already holds other data by doing this):
$ psql -f schema.sql postgres
Now if you only want to dump one database, one table, etc. you should use the pg_dump utility.
$ pg_dump --schema-only <database> > database-schema.sql
And then, to reload the database into a running postgresql server:
$ psql <database> < database-schema.sql
As for version control, you can just keep the schema.sql file under it, and dump the database into the file again before every commit. So at any particular version control state you will have the code and the working database schema that goes with it.
Oh, and all the tools I mentioned are free, and pg_dump and pg_dumpall come with the standard PostgreSQL installation.
Hope that helps,
Marco
You're not far off. I'm a Java developer, not a DBA, but building out the database as a project grows is an important task for the teams I've been on, and here's how I've seen it done best:
All DB changes are driven by DDL (SQL create, alter, or delete statements) plain text scripts. No changes through the DB client. Use a text editor that supports syntax highlighting like vim or notepad++, as the highlighting can help you find errors before you run the script.
Use a number at the beginning of each DDL script to define the order that scripts are run in. Base scripts have lower numbers.
Use a script and the psql client to load the DDL scripts from lowest to highest. Here's the bash script we use. You can use it as a base for a .bat script on windows.
#!/bin/bash
export PGDATABASE=your_db
export PGUSER=your_user
export PGPASSWORD=your_password
for SQL_SCRIPT in $( find ./ -name "*.sql" -print | sort);
do
echo "**** $SQL_SCRIPT ****"
psql -q < $SQL_SCRIPT
done
As the project grows, use new alter scripts to change the table, don't redefine the table in the initial script.
All scripts are checked into source control. Each release is tagged so you can regenerate that version of the database in the future.
For unit testing and CI, most CI servers can run a script to drop and recreate a schema. One oft-cited framework for PostgreSQL unit testing is pgTAP.
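A tiny pgTAP test, just to give a flavor (assumes the pgTAP extension is installed; the table and column names are made up):

-- run with pg_prove or plain psql
BEGIN;
SELECT plan(2);

SELECT has_table('public', 'users', 'users table should exist');
SELECT has_column('public', 'users', 'email', 'users should have an email column');

SELECT * FROM finish();
ROLLBACK;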
I'm a DBA and my workflow is almost the same as the one suggested by @Ireeder... but instead of using a shell script to keep the DDL scripts applied, I use a tool called DbMaintain.
DbMaintain needs some configuration, but it is not a pain... It keeps track of which scripts have been executed and in which order.
The principal benefit is that if an SQL script that has already been executed changes, it complains by default, or executes just that script (if configured to do so)... The behavior is similar when you add a new script to the environment: it executes just that new script.
It's perfect for deploying and keeping development and production environments up to date... there's no need to execute all the scripts every time (like the shell script suggested by Ireeder), or to execute each new script manually.
If the changes are slotted into releases, you can create scripts that make the DDL changes and then dump the expected new database state (version).
pg_dump -f database-dump-production-yesterday.sql      # all the commands to create and populate the starting state
Today we need to introduce a new table for a new feature:
psql -f change-production-for-today.sql                # DDL and DML commands that bring the database to the new state
pg_dump --schema-only -f dump-production-today.sql     # all the new commands to create the database for today's app
psql -i sql-append-table-needed-data-into-dump.sql -f dump-production-today.sql
All developers should use the new database create script for development from now on.
I have Git Source Control Provider setup and running well.
For Visual Studio projects, that is.
The problem is that each such project is tightly tied to a SQL Server database.
I know how to version control a database in its own .git repository but this is neither convenient nor truly robust because ideally I would want the same ADD, COMMIT, TAG and BRANCH commands to operate on both directory trees simultaneously, in a synchronized manner.
Is there a way to Git a SQL Server database with Visual Studio's Git Source Control Provider in the manner I described?
You can install the SQL Server Data Tools if you want to, but you don't have to: you can use the Database Publishing Wizard to script your table data right from Visual Studio into the solution's folder, then Git it just like you do with the other project files in that folder.
You can store your database schema as Visual studio project using SQL Server Data Tools and then version control this project using Git.
Being in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat the database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC and the changes s/he makes do not affect other developers. Likewise, the developer is not affected by changes made by her colleague. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository. A developer who wants to get the latest code needs to request it from the source control tool. In a database, the change already exists and impacts other developers even if it was not checked in to the repository.
During a file check-in, the source control tool performs a conflict check to see if the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database. If you alter a procedure from your local PC and at the same time I modify the same procedure with code from my local PC, then we override each other's changes.
The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build - a compile. The output is binaries, with which we copy over and replace the existing ones. We don't care what was there before. With a database we cannot recreate the database, as we need to preserve the data! Also, the deployment executes SQL scripts which were generated in the build process.
When executing the SQL scripts (with the DDL, DCL and DML (for static content) commands) you assume the current structure of the environment matches the structure at the time you created the scripts. If not, your scripts can fail - for example, you may be trying to add a new column which already exists.
Treating SQL scripts as code and manually generating them will cause syntax errors, database dependency errors, and scripts that are not reusable, which complicates the task of developing, maintaining and testing those scripts. In addition, those scripts may run on an environment which is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
What I found that works is the following:
Use an enforced version control system that enforces check-out/check-in operations on the database objects. This will make sure the version control repository matches the code that was checked in, since it reads the metadata of the object as part of the check-in operation and not as a separate step done manually. This also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use an impact analysis that utilizes baselines as part of the comparison to identify conflicts, and to identify whether a difference (when comparing the object's structure between the source control repository and the database) is a real change that originates from development, or a difference that originates from a different path and should therefore be skipped, such as a different branch or an emergency fix.
Use a solution that knows how to perform impact analysis for many schemas at once, using a UI or an API, in order to eventually automate the build & deploy process.
An article I wrote on this was published here, you are welcome to read it.
I'm building a sort of self-checking install page in an asp.net site where the existing database needs to be updated to the latest schema. What's the best way to do it? Previously we used one of Red Gate's Windows tools to compare the databases and generate an update script to be executed on the existing client databases.
Red Gate offers an API so it can be run without the GUI. Not sure how this affects licensing though.
Otherwise, some ideas:
test for missing objects and CREATE
run ALTERs on code where possible
run a script if the version (in a udf or table) is lower
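A rough T-SQL sketch of the first and third ideas - the object names and the SchemaVersion table are assumptions for illustration, not a prescribed layout:

-- create the object only if it is missing
IF OBJECT_ID('dbo.OrderArchive', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.OrderArchive (OrderId int NOT NULL PRIMARY KEY);
END

-- run the upgrade only if the stored version is lower than this script's version
IF ISNULL((SELECT MAX(Version) FROM dbo.SchemaVersion), 0) < 42
BEGIN
    -- ... ALTER statements for this release ...
    INSERT INTO dbo.SchemaVersion (Version) VALUES (42);
END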
If you have a complete transactional upgrade script generated, say by Red Gate, you can use that: all you need is a test to decide whether to run it or not.
One point: making DDL changes requires db_owner or ddl_admin rights - does your app run with these rights day to day?
I have read lots of posts about the importance of database version control. However, I could not find a simple solution for checking whether a database is in the state it should be in.
For example, I have a database with a table called "Version" (the version number is stored there). But the database can be accessed and edited by developers without changing the version number. If, for example, a developer updates a stored procedure and does not update Version, the database state is no longer in sync with the version value.
How do I track those changes? I do not need to track what changed; I only need to check whether the database tables, views, procedures, etc. are in sync with the database version saved in the Version table.
Why do I need this? When doing a deployment I need to check that the database is "correct". Also, not all tables or other database objects should be tracked. Is it possible to check without using triggers? Is it possible without 3rd party tools? Do databases have checksums?
Let's say that we use SQL Server 2005.
Edited:
I think I should provide a bit more information about our current environment - we have a "baseline" with all the scripts needed to create the base version (it includes data objects and "metadata" for our app). However, there are many installations of this "base" version with some additional database objects (additional tables, views, procedures, etc.). When we make some change in the "base" version we also have to update some installations (not all) - at that time we have to check that the "base" is in a correct state.
Thanks
You seem to be breaking the first and second rule of "Three rules for database work". Using one database per developer and a single authoritative source for your schema would already help a lot. Then, I'm not sure that you have a Baseline for your database and, even more important, that you are using change scripts. Finally, you might find some other answers in Views, Stored Procedures and the Like and in Branching and Merging.
Actually, all these links are mentioned in this great article from Jeff Atwood: Get Your Database Under Version Control. A must read IMHO.
We use DBGhost to version control the database. The scripts to create the current database are stored in TFS (along with the source code) and then DBGhost is used to generate a delta script to upgrade an environment to the current version. DBGhost can also create delta scripts for any static/reference/code data.
It requires a mind shift from the traditional method but is a fantastic solution which I cannot recommend enough. Whilst it is a 3rd party product it fits seamlessly into our automated build and deployment process.
I'm using a simple VBScript file based on this codeproject article to generate drop/create scripts for all database objects. I then put these scripts under version control.
So to check whether a database is up-to-date or has changes which were not yet put into version control, I do this:
get the latest version of the drop/create scripts from version control (subversion in our case)
execute the SqlExtract script for the database to be checked, overwriting the scripts from version control
now I can check with my subversion client (TortoiseSVN) which files don't match with the version under version control
now either update the database or put the modified scripts under version control
You have to restrict access to all databases and only give developers access to a local database (where they develop) and to the dev server where they can do integration. The best thing would be for them to only have access to their dev area locally and to perform integration tasks with an automated build. You can use tools like Red Gate's SQL Compare to do diffs on databases. I suggest that you keep all of your changes under source control (.sql files) so that you will have a running history of who did what and when, and so that you can revert db changes when needed.
I also like the devs to be able to run a local build script to re-initialize their local dev box. This way they can always roll back. More importantly, they can create integration tests that test the plumbing of their app (repository and data access) and the logic stashed away in stored procedures in an automated way. Initialization is run (resetting the db), integration tests are run (creating fluff in the db), then reinitialization puts the db back into a clean state, and so on.
If you are an SVN/nant style user (or similar) with a single branch concept in your repository then you can read my articles on this topic over at DotNetSlackers: http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Build-automation-with-NAnt.aspx and http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Continuous-integration-with-CruiseControl-NET.aspx.
If you are a perforce multi branch sort of build master then you will have to wait till I write something about that sort of automation and configuration management.
UPDATE
#Sazug: "Yep, we use some sort of multi branch builds when we use base script + additional scripts :) Any basic tips for that sort of automation without full article?" There are most commonly two forms of databases:
you control the db in a new non-production type environment (active dev only)
a production environment where you have live data accumulating as you develop
The first setup is much easier and can be fully automated from dev to prod, including rolling back prod if need be. For this you simply need a scripts folder where every modification to your database is maintained in a .sql file. I don't suggest that you keep a tablename.sql file and then version it like you would a .cs file, where updates to that SQL artifact are made in the same file over time - SQL objects are so heavily dependent on each other that when you build up your database from scratch your scripts may encounter a breaking change. For this reason I suggest that you keep a separate, new file for each modification, with a sequence number at the front of the file name. For example, something like 000024-ModifiedAccountsTable.sql. Then you can use a custom task, something out of NAntContrib, or a direct execution of one of the many ??SQL.exe command line tools to run all of your scripts against an empty database, from 000001-fileName.sql through to the last file in the updateScripts folder. All of these scripts are then checked in to your version control. And since you always start from a clean db you can always roll back if someone's new SQL breaks the build.
In the second environment automation is not always the best route, given that you might impact production. If you are actively developing against/for a production environment then you really need a multi-branch/environment setup so that you can test your automation well before you actually push against a prod environment. You can use the same concepts as stated above. However, you can't really start from scratch on a prod db, and rolling back is more difficult. For this reason I suggest using Red Gate SQL Compare or similar in your build process. The .sql scripts are checked in for updating purposes, but you need to automate a diff between your staging db and prod db prior to running the updates. You can then attempt to sync changes and roll back prod if problems occur. Also, some form of backup should be taken prior to an automated push of SQL changes. Be careful when doing anything without a watchful human eye in production! If you do true continuous integration in all of your dev/qual/staging/performance environments and then have a few manual steps when pushing to production... that really isn't that bad!
First point: it's hard to keep things in order without "regulations".
Or for your example - developers changing anything without a notice will bring you to serious problems.
Anyhow - you say "without using triggers".
Any specific reason for this?
If not - check out DDL Triggers. Such triggers are the easiest way to check if something happened.
And you can even log WHAT was going on.
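A minimal sketch of such a trigger (the log table and trigger names are only examples):

CREATE TABLE dbo.SchemaChangeLog
(
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    LoginName sysname  NOT NULL,
    EventData xml      NOT NULL
);
GO

-- logs every database-level DDL event, including who ran it and the full statement (inside EVENTDATA)
CREATE TRIGGER trgLogSchemaChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.SchemaChangeLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
GO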
Hopefully someone has a better solution than this, but I do this using a couple methods:
Have a "trunk" database, which is the current development version. All work is done here as it is being prepared to be included in a release.
Every time a release is done:
The last release's "clean" database is copied to the new one, eg, "DB_1.0.4_clean"
SQL-Compare is used to copy the changes from trunk to the 1.0.4_clean - this also allows checking exactly what gets included.
SQL Compare is used again to find the differences between the previous and new releases (changes from DB_1.0.4_clean to DB_1.0.3_clean), which creates a change script "1.0.3 to 1.0.4.sql".
We are still building the tool to automate this part, but the goal is that there is a table to track every version the database has been at, and if the change script was applied. The upgrade tool looks for the latest entry, then applies each upgrade script one-by-one and finally the DB is at the latest version.
I don't have this problem, but it would be trivial to protect the _clean databases from modification by other team members. Additionally, because I use SQL Compare after the fact to generate the change scripts, there is no need for developers to keep track of them as they go.
We actually did this for a while, and it was a HUGE pain. It was easy to forget, and at the same time, there were changes being done that didn't necessarily make it - so the full upgrade script created using the individually-created change scripts would sometimes add a field, then remove it, all in one release. This can obviously be pretty painful if there are index changes, etc.
The nice thing about SQL Compare is that the script it generates is wrapped in a transaction - and if it fails, it rolls the whole thing back. So if the production DB has been modified in some way, the upgrade will fail, and then the deployment team can actually use SQL Compare on the production DB against the _clean db, and manually fix the changes. We've only had to do this once or twice (damn customers).
The .SQL change scripts (generated by SQL Compare) get stored in our version control system (subversion).
If you have Visual Studio (specifically the Database edition), there is a Database Project that you can create and point at a SQL Server database. The project will load the schema and basically offer you a lot of other features. It behaves just like a code project. It also gives you the ability to script entire tables and their contents so you can keep them under Subversion.
When you build the project, it validates that the database has integrity. It's quite smart.
On one of our projects we stored the database version inside the database.
Each change to the database structure was scripted into a separate SQL file which, in addition to all the other changes, incremented the database version. This was done by the developer who changed the db structure.
The deployment script checked the current db version against the latest change scripts and applied those SQL scripts if necessary.
Firstly, your production database should either not be accessible to developers, or the developers (and everyone else) should be under strict instructions that no changes of any kind are made to production systems outside of a change-control system.
Change-control is vital in any system that you expect to work (Where there is >1 engineer involved in the entire system).
Each developer should have their own test system; if they want to make changes to that, they can, but system testing should be done on a more controlled system test system which has the same changes applied as production - if you don't do this, you can't rely on releases working, because they're being tested in an incompatible environment.
When a change is made, the appropriate scripts should be created and tested to ensure that they apply cleanly on top of the current version, and that the rollback works*
*you are writing rollback scripts, right?
I agree with other posts that developers should not have permissions to change the production database. Either the developers should be sharing a common development database (and risk treading on each others' toes) or they should have their own individual databases. In the former case you can use a tool like SQL Compare to deploy to production. In the latter case, you need to periodically sync up the developer databases during the development lifecycle before promoting to production.
Here at Red Gate we are shortly going to release a new tool, SQL Source Control, designed to make this process a lot easier. It will integrate into SSMS and enable adding and retrieving objects to and from source control at the click of a button. If you're interested in finding out more or signing up to our Early Access Program, please visit this page:
http://www.red-gate.com/Products/SQL_Source_Control/index.htm
I have to agree with the rest of the posts. Database access restrictions would solve the issue on production. Then using a versioning tool like DBGhost or DVC would help you and the rest of the team maintain the database versioning.
I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well?
I have a VS database project in SVN containing create scripts which I re-generate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly.
I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions?
There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon.
Database Projects in Visual Studio .NET
Data Schema - How Changes are to be Implemented
Is Your Database Under Version Control?
Get Your Database Under Version Control
Also look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control
However, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following:
Red Gate SQL Compare
Red Gate SQL Data Compare
Holy cow! Talk about making life easy! I had a project get away from me, with multiple people making schema changes, and I had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases, see the differences, and then sync them up.
In addition to your database CREATE script, why don't you maintain a default data or sample data script as well?
This is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to recreate bugs using the same data that you have.
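A sketch of such a default data script, written so it can be rerun safely (the table and values are invented for illustration):

-- insert reference data only if it is not already present
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'NEW')
    INSERT INTO dbo.OrderStatus (StatusCode, Description) VALUES ('NEW', 'New order');

IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIPPED')
    INSERT INTO dbo.OrderStatus (StatusCode, Description) VALUES ('SHIPPED', 'Order shipped');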
You might also want to take a look at a question I posted some time ago:
Best tool for auto-generating SQL change scripts
You can store a backup (.bak file) of your database rather than the .MDF & .LDF files.
You can restore your db easily using the following script:
use master
go

-- drop the existing copy of the database, if any
if exists (select * from master.dbo.sysdatabases where name = 'your_db')
begin
    alter database your_db set SINGLE_USER with rollback IMMEDIATE
    drop database your_db
end

-- restore from the backup, relocating the data and log files
restore database your_db
from disk = 'path\to\your\bak\file'
with move 'Name of dat file' to 'path\to\mdf\file',
     move 'Name of log file' to 'path\to\ldf\file'
go
You can put the above script in a text file, restore.sql, and call it from a batch file using the following command:
osql -E -i restore.sql
That way you can create a script file to automate the whole process:
Get the latest db backup from the SVN repository or any suitable storage
Restore the current db using the .bak file
We use a combination of approaches: taking backups from higher environments down, as well as using ApexSQL to handle the initial setup of the schema.
Recently we've been using SubSonic migrations as a coded, source-controlled, run-through-CI way to get change scripts in; there is also the "tarantino" project developed by Headspring out of Texas.
Most of these approaches, especially the latter, are safe to use on top of most test data. I particularly like the last two automated ones because I can make a change, and the next time someone gets latest, they just run the "updater" and they are brought up to the latest version.