Git, robots and diverging branches - database

I am trying to use git as something it wasn't made for - a database. Please feel free to tell me that this is a stupid idea.
Setup
One branch (let's call it robot) is being updated automatically by a script on a daily basis. The data comes from some other publicly available database.
Initially the master branch is the same as the robot branch.
Some of the data in the publicly available database is wrong, so I'll make a commit to the master branch and correct the error.
When the script detects any changes on the public database in the future, it will add those to the robot branch as a new commit (there's one commit per file).
Keeping track of differences
Now, I've obviously lost the ability to do a fast-forward merge once I've modified the same file. But I could still cherry-pick the good changes in the robot branch and import them into the master branch. The problem is that this might get rather messy after a while, when almost all the files have diverged.
How can I keep track of the difference between the different branches in a systematic way?

It sounds like you're looking for the git rebase command, which replays the changes you've made on your master branch on top of the new head of the robot branch:
git checkout master
git rebase robot
There may be conflicts if the database has been updated with a change to something you've also changed in master. You must resolve those conflicts manually.
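If a conflict does come up mid-rebase, the flow looks roughly like this (a sketch; the file name is a placeholder):
git rebase robot
# rebase stops at the conflicting commit; see which files are affected
git status
# edit the file to resolve the conflict markers, then mark it resolved
git add data/source.csv
git rebase --continue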

Related

Switching between branches in VS Code

I'm still getting the hang of VS Code.
I want to make a react app using 2 different GET API URL endpoints but the exact same UI. In essence, I want to change just the base URLs between the 2.
I've tried creating a new branch in VS code to make 2 separate files but once I make edits in the master branch, the changes reflect in the new branch also.
Is there a way of making a different stand-alone branch from the VS code?
I've searched through the forums to no avail, and I'm not that good at git. Thanks.
It is likely that your changes are still being shown when you switch branches because you haven't committed them before switching.
Let's say you are on the master branch and make some changes. You can create a new branch new-feature and change your current working branch to new-feature, bringing your existing changes across. This is useful because sometimes you will start to carry out some work before realising the scope is a bit too big and should be its own branch.
If you want to keep the changes you have made on your current branch, you need to "stage" your changes with git add your_filename.here (or git rm for deletions). Once you have added and removed all the changed files you want to keep on that branch, you need to git commit them. This is the step that finally adds the changes to the version history.
Now when you change to the new-feature branch, your changes committed on master will not be there.
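Put together, the flow might look like this (a minimal sketch; file and branch names are placeholders):
git checkout -b new-feature     # create the branch, carrying any uncommitted edits across
git add src/api.js              # stage the file with the second base URL
git commit -m "Point the app at the second API endpoint"
git checkout master             # master no longer shows those changes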
There are a number of GUI applications that make the git model more intuitive, such as SourceTree, GitHub Desktop, and SmartGit.

Git Database - Everyone Pushing to Same Branch in Different Files

Our database team is implementing code changes, using Visual Studio SSDT database projects with Git source control. Everyone is pushing to the main Release branch with code review (only 5 developers on the team). The coworkers are only allowed to push to different files (tables, sprocs, functions, etc.). The way work is assigned, none of us pushes to or works on the same SQL file. Eventually all good changes from Release (currently in work) are merged into the Master branch (production ready).
Code Review ---> Push to Release Branch (Currently in work during Sprint) ---> Merge to Master Production Ready Branch
(a) What are the negative consequences of utilizing this strategy in Git?
(b) For cleaner history, should everyone rebase the remote Release branch into their local Release branch, or do a pull (fetch/merge)? I would think rebase is the answer for cleaner history.
Note: I agree it would be annoying only if we were working on the same file and pushing changes. An alternative strategy is to create different feature branches and then merge into the main branch. We are refraining from this strategy since each developer has 10 DBA admin-related changes a day; creating many branches and merges is time-consuming and cumbersome.
https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow
The bottom line is if it's working for you then it's working.
The purpose of the source control solution is to assist you in producing software.
Use this setup until it doesn't work and then adjust. (Please note that all 5 of the devs could start using feature branches, if they want/need, without interfering with the other team members' flow.)
Having said that, there are consequences of using a single branch. Here are two examples:
Releasable main/release branch
One of them shows up in the following scenario:
A release happens
A commit with a bug in file A is committed
A commit to file B is committed
You want to release the changes made to file B
Now you don't have a releasable snapshot of the repo
If the changes to file A had been tested in a feature branch, then the master/release branch would be in a releasable state more often.
Pull requests
Having feature branches lets you use pull requests (which are a layer on top of git) more effectively: your code reviews become easier to enforce and track.
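As for question (b): yes, a rebase-style pull keeps the shared history linear. A minimal sketch, assuming the shared branch is called release on the origin remote:
git checkout release
git fetch origin
git rebase origin/release    # replay your local commits on top of the remote tip
# or the equivalent single command:
git pull --rebase origin release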

Git detached head state

I'm going through the tutorial for angularjs. In step 0 of the tutorial it instructs me to run:
git checkout -f step-0
When I ran this I got "You are in 'detached HEAD' state" which I take to mean that I just created a new feature branch called "step-0" that did not previously exist. Then I reloaded my web page and saw that all the cool stuff from the master branch is now gone. Instead I get this basic page that says "Nothing here yet!".
Now this is all correct behavior. My question is why did git checkout change the state of my code?
I could understand if the branch existed and I switched branches then yea that would change my code. But in this case it looks like the branch does not exist and I don't see a branch by the name "step-0" on github.
So in my mind I just branched off Master and created a new feature branch. Any changes in code don't make sense to me in this case.
What happened?
The "detached HEAD" message means that you're not at the tip of any branch. You've gone back to an earlier commit on the master branch, but you haven't created a new branch that begins there.
You got there by switching to the "step-0" tag. A tag in Git is basically an alias for a commit ID: "step-0" refers to the commit 96a9b5b7fa5e5667e099d25c20a4bb19992c0f72. Tags are commonly used for naming releases (e.g. a "v1.0" tag on the revision that was released as version 1.0), but in this case they're just used to make it easy to switch to specific revisions as you follow the tutorial. A branch isn't needed because you aren't going to be committing a divergent set of changes from there.
Branches and tags are similar in that they're both names that refer to commits. The difference is that a branch changes as you create new commits — the branch name is updated to refer to the new commit ID each time — whereas tags are typically used to record points in history that don't change, such as the contents of a specific released version. (It's possible to change a tag, but only in the same sense that it's possible to change the history of a branch: if you've already pushed the tag to others, you'll have to ask them to delete it and pull the new one instead. You can't force tag changes on others.)
If you want to start a new branch from the commit you've just moved to, you can do so: git checkout -b my-branch.
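And if you've already committed while detached, the same flag still rescues the work (a sketch; the branch name is a placeholder):
git checkout step-0            # detached HEAD at the tag's commit
# ...edit files and commit as usual...
git checkout -b my-branch      # a real branch now points at those commits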

How to handle multiple db alter scripts coming from different Git feature branches?

A bit complex to describe, but I'll do my best. Basically we're using the Git workflow, meaning we have the following branches:
production, which is the live branch. Everything in production is running in the live web environment.
integration, in which all new functionality is integrated. This branch is merged to production every week.
one or more feature branches, in which developers or development teams develop new functionality. After this is done, developers merge their feature branch to integration.
So, nothing really complex here. But since our application is a web application running against a MySQL database, new functionality often requires changes to the database schema. To automate this, we're using dbdeploy, which allows us to create alter scripts, each given a number, e.g. 00001.sql, 00002.sql, etc. Upon merging to the integration branch, dbdeploy will check which alter scripts have a higher number than the latest one executed on that specific database, and will execute those.
Now assume the following.
- integration has alter scripts up until 00200.sql. All of these are executed on the integration database.
- developer John has a feature branch featureX, which was created when integration still had 00199.sql as the highest alter script.
- John creates 00200.sql because of some required db schema changes.
Now, at some point John will merge his modifications back to the integration branch. John will get a merge conflict and will see that his 00200.sql already exists in integration. This means he needs to open the conflicting file, extract his contents, reset that file back to 'mine' (the original state as in integration) and put his own contents in a new file.
Now, since we're working with ten developers, we get this situation daily. And while we do understand the reasons behind this, it's sometimes very cumbersome. John renames his script, does a merge commit to integration, pushes the changes to the upstream, only to see that somebody else has already created a 00201.sql, requiring John to do the process again.
Surely there must be more teams using the Git workflow and using a database change management tool for automating database schema changes?
So, in short, my questions are:
How do we automate database schema changes when working on different feature branches that operate on different instances of the same db?
How do we prevent constant merge conflicts while still keeping a fixed order for the executed alter scripts? E.g. 00199.sql must be executed before 00200.sql, because 00200.sql might depend on something done in 00199.sql.
Any other tips are most welcome, of course.
Rails used to do this, with exactly the problems you describe. It changed to the following scheme: the files (Rails calls them migrations) are labelled with a UTC timestamp of when the file was created, e.g.
20140723069701_add_foo_to_bar
(The second part of the name doesn't contribute to the ordering).
Rails records the timestamps of all the migrations that have been run. When you ask it to run pending migrations it selects all the migration files whose timestamp isn't in the list of already run migrations and runs them in numerical order.
You'll no longer get merge conflicts unless two people create one at exactly the same point in time.
Files still get executed in the order you wrote them, but possibly interleaved with someone else's work. In theory you can still have problems: e.g. developer A decides to rename a table that I had decided to add a column to. But that is much less common than two developers both making changes to the db, and you would have problems even without considering the schema changes (presumably I have just written code that queries a no-longer-existent table). At some point, developers working on related stuff will have to talk to each other!
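The bookkeeping behind that scheme is simple enough to sketch in shell (a hypothetical layout: migrations live in db/migrations/, and applied timestamps are recorded one per line in db/applied.txt):
for f in db/migrations/*.sql; do
  ts=$(basename "$f" | cut -d_ -f1)           # leading UTC timestamp orders the run
  if ! grep -qx "$ts" db/applied.txt; then    # skip migrations already applied
    mysql mydb < "$f" && echo "$ts" >> db/applied.txt
  fi
done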
A few suggestions:
1 - have a look at Liquibase: each version gets a file that references the changes that need to happen, and the change files can be named using a meaningful string rather than a number.
2 - have a central location for getting the next available number, then people use the latest number.
I've used Liquibase in the past, pretty successfully, and we didn't have the problem you describe.
As Frederick Cheung suggested, use timestamps rather than a serial number. Applying schema changes by order of datestamp should work, because schema changes can only depend on changes of a prior date.
In addition, include the name of the developer in the name of the alter script. This will prevent merge conflicts 100%.
Your merge hook should just look for newly added alter scripts (present in the merged branch but not in the upstream branch) and execute them by order of timestamp.
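A hypothetical post-merge hook along those lines, assuming the alter scripts live in db/alter/ with timestamped names, and that ORIG_HEAD still points at the pre-merge tip:
# run the scripts this merge added, oldest timestamp first
for f in $(git diff --name-only --diff-filter=A ORIG_HEAD HEAD -- db/alter/ | sort); do
  mysql mydb < "$f"
done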
I've used two different approaches to overcome your problem in the past.
The first is to use an ORM which can handle the schema updates.
The other approach is to create a script which incrementally builds the database schema. This way, if a developer needs an additional row in a table, he should add the appropriate SQL statement after the table is created. Likewise, if he needs a new table, he should add the SQL statement for that. Then merging becomes a question of making sure things happen in the correct order. This is basically what the database update process in an ORM does. Such a script needs to be coded very defensively, and each statement should check whether its prerequisites exist.
For the dbvc command-line tool, I use git log to determine the order of the update scripts.
git log -c --no-merges --pretty="format:" --name-status -p dev/db/updates/ | \
grep '^A' | awk '{print $2}' | tac
In this case the order of your commits will determine the sequence in which the updates are run, which is most likely what you want.
If you run git merge B, the updates from master will be run first and then those from B.
If you run git rebase B, the updates from B will run first and then those from master.

How can I put a database under git (version control)?

I'm doing a web app, and I need to make a branch for some major changes, the thing is, these changes require changes to the database schema, so I'd like to put the entire database under git as well.
How do I do that? Is there a specific folder that I can keep under a git repository? How do I know which one? How can I be sure that I'm putting the right folder under version control?
I need to be sure, because these changes are not backward compatible; I can't afford to screw up.
The database in my case is PostgreSQL
Edit:
Someone suggested taking backups and putting the backup file under version control instead of the database. To be honest, I find that really hard to swallow.
There has to be a better way.
Update:
OK, so there's no better way, but I'm still not quite convinced, so I will change the question a bit:
I'd like to put the entire database under version control, what database engine can I use so that I can put the actual database under version control instead of its dump?
Would sqlite be git-friendly?
Since this is only the development environment, I can choose whatever database I want.
Edit2:
What I really want is not to track my development history, but to be able to switch from my "new radical changes" branch to the "current stable branch" and be able for instance to fix some bugs/issues, etc, with the current stable branch. Such that when I switch branches, the database auto-magically becomes compatible with the branch I'm currently on.
I don't really care much about the actual data.
Take a database dump, and version control that instead. This way it is a flat text file.
Personally, I suggest that you keep both a data dump and a schema dump. This way, using diff, it becomes fairly easy to see what changed in the schema from revision to revision.
If you are making big changes, you should have a secondary database that you make the new schema changes to and not touch the old one since as you said you are making a branch.
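Since the question mentions PostgreSQL, the split dump could look like this (a minimal sketch; mydb is a placeholder):
pg_dump --schema-only mydb > schema.sql    # DDL only: tables, sequences, indexes
pg_dump --data-only mydb > data.sql        # table contents only
# commit both; git diff schema.sql then shows schema changes between revisions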
I'm starting to think of a really simple solution, don't know why I didn't think of it before!!
Duplicate the database (both the schema and the data).
In the branch for the new-major-changes, simply change the project configuration to use the new duplicate database.
This way I can switch branches without worrying about database schema changes.
EDIT:
By duplicate, I mean create another database with a different name (like my_db_2); not doing a dump or anything like that.
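One way to make that switch automatic is to derive the database name from the current branch (a hypothetical sketch; branch and database names are placeholders):
# e.g. in the app's startup script
branch=$(git rev-parse --abbrev-ref HEAD)
if [ "$branch" = "new-major-changes" ]; then
  DB_NAME=my_db_2
else
  DB_NAME=my_db
fi
export DB_NAME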
Use something like Liquibase; this lets you keep revision control of your Liquibase files. You can tag changes for production only, and have lb keep your DB up to date for either production or development (or whatever scheme you want).
There are now several databases with native version-control features:
Irmin (branching + time travel)
Flur.ee (immutable + time travel + graph query)
XTDB (formerly called 'CruxDB') (time travel + query)
TerminusDB (immutable + branching + time travel + Graph Query!)
DoltDB (branching + time-travel + SQL query)
Quadrable (branching + remote state verification)
EdgeDB (no real time travel, but migrations derived by the compiler after schema changes)
Migra (diffing for Postgres schemas/data. Auto-generate migration scripts, auto-sync db state)
ImmuDB (immutable + time-travel)
I've come across this question as I've got a similar problem, where something approximating a DB-based directory structure stores 'files', and I need git to manage it. It's distributed across a cloud, using replication, hence its access point will be via MySQL.
The gist of the above answers seems to be to suggest an alternative solution to the problem asked, which kind of misses the point of using Git to manage something in a database, so I'll attempt to answer that question.
Git is a system which in essence stores a database of deltas (differences) that can be reassembled, in order, to reproduce a context. The normal usage of git assumes that context is a filesystem, and those deltas are diffs in that file system, but really all git is, is a hierarchical database of deltas (hierarchical because in most cases each delta is a commit with at least one parent, arranged in a tree).
As long as you can generate a delta, in theory git can store it. The problem is that git normally expects the context on which it's generating deltas to be a file system, and similarly, when you check out a point in the git hierarchy, it expects to generate a filesystem.
If you want to manage change in a database, you have 2 discrete problems, and I would address them separately (if I were you). The first is schema, the second is data (although in your question, you state data isn't something you're concerned about). A problem I had in the past was a Dev and Prod database, where Dev could take incremental changes to the schema, and those changes had to be documented in CVS and propagated to live, along with additions to one of several 'static' tables. We did that by having a 3rd database, called Cruise, which contained only the static data. At any point the schema from Dev and Cruise could be compared, and we had a script to take the diff of those 2 files and produce an SQL file containing ALTER statements, to apply it. Similarly any new data could be distilled to an SQL file containing INSERT commands. As long as fields and tables are only added, and never deleted, the process could automate generating the SQL statements to apply the delta.
The mechanism by which git generates deltas is diff, and the mechanism by which it combines one or more deltas with a file is called merge. If you can come up with a method for diffing and merging from a different context, git should work, but as has been discussed you may prefer a tool that does that for you. My first thought towards solving that is this: https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#External-Merge-and-Diff-Tools which details how to replace git's internal diff and merge tools. I'll update this answer as I come up with a better solution to the problem, but in my case I expect to only have to manage data changes, in so far as a DB-based filestore may change, so my solution may not be exactly what you need.
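For reference, wiring in a custom diff driver takes only a couple of lines (a hypothetical sketch; the driver name dbdiff and the wrapper script are placeholders):
git config diff.dbdiff.command 'db-diff-wrapper'   # script that renders a readable diff of two DB states
echo '*.db diff=dbdiff' >> .gitattributes          # route database files through it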
There is a great project called Migrations under Doctrine that was built just for this purpose.
It's still in alpha state and built for PHP.
http://docs.doctrine-project.org/projects/doctrine-migrations/en/latest/index.html
Take a look at RedGate SQL Source Control.
http://www.red-gate.com/products/sql-development/sql-source-control/
This tool is a SQL Server Management Studio snap-in which will allow you to place your database under Source Control with Git.
It's a bit pricey at $495 per user, but there is a 28 day free trial available.
NOTE
I am not affiliated with RedGate in any way whatsoever.
I've released a tool for sqlite that does what you're asking for. It uses a custom diff driver leveraging the SQLite project's 'sqldiff' tool, UUIDs as primary keys, and leaves off the sqlite rowid. It is still in alpha, so feedback is appreciated.
Postgres and mysql are trickier, as the binary data is kept in multiple files and may not even be valid if you were able to snapshot it.
https://github.com/cannadayr/git-sqlite
I want to do something similar, i.e. add my database changes to my version control system.
I am going to follow the ideas in this post from Vladimir Khorikov, "Database versioning best practices". In summary, I will:
store both its schema and the reference data in a source control system.
for every modification we will create a separate SQL script with the changes
In case it helps!
You can't do it without atomicity, and you can't get atomicity without either using pg_dump or a snapshotting filesystem.
My postgres instance is on zfs, which I snapshot occasionally. It's approximately instant and consistent.
I think X-Istence is on the right track, but there are a few more improvements you can make to this strategy. First, use:
$ pg_dump --schema-only ...
to dump the tables, sequences, etc and place this file under version control. You'll use this to separate the compatibility changes between your branches.
Next, perform a data dump for the set of tables that contain configuration required for your application to operate (you should probably skip user data, etc.), like form defaults and other non-user-modifiable data. You can do this selectively by using:
$ pg_dump --table=.. or --exclude-table=..
This is a good idea because the repo can get really clunky when your database gets to 100MB+ if you're doing a full data dump. A better idea is to back up a more minimal set of data that you require to test your app. If your default data is very large, though, this may still cause problems.
If you absolutely need to place full backups in the repo, consider doing it in a branch outside of your source tree. An external backup system with some reference to the matching svn rev is likely best for this though.
Also, I suggest using text format dumps over binary for revision purposes (for the schema at least) since these are easier to diff. You can always compress these to save space prior to checking in.
Finally, have a look at the postgres backup documentation if you haven't already. The way you're commenting on backing up 'the database' rather than a dump makes me wonder if you're thinking of file system based backups (see section 23.2 for caveats).
What you want, in spirit, is perhaps something like Post Facto, which stores versions of a database in a database. Check this presentation.
The project apparently never really went anywhere, so it probably won't help you immediately, but it's an interesting concept. I fear that doing this properly would be very difficult, because even version 1 would have to get all the details right in order to have people trust their work to it.
This question is pretty much answered but I would like to complement X-Istence's and Dana the Sane's answer with a small suggestion.
If you need revision control with some degree of granularity, say daily, you could couple the text dump of both the tables and the schema with a tool like rdiff-backup which does incremental backups. The advantage is that instead of storing snapshots of daily backups, you simply store the differences from the previous day.
With this you have both the advantage of revision control and you don't waste too much space.
In any case, using git directly on big flat files which change very frequently is not a good solution. If your database becomes too big, git will start to have some problems managing the files.
Here is what I am trying to do in my projects:
Separate the data, the schema, and the default data.
The database configuration is stored in a configuration file that is not under version control (.gitignore).
The database defaults (for setting up new projects) are a simple SQL file under version control.
For the database schema, create a database schema dump under version control.
The most common way is to have update scripts that contain SQL statements (ALTER TABLE ... or UPDATE). You also need a place in your database where you save the current version of your schema.
Take a look at other big open source database projects (Piwik, or your favorite CMS); they all use update scripts (1.sql, 2.sql, 3.sh, 4.php, 5.sql).
But this is a very time-intensive job: you have to create and test the update scripts, and you need to run a common update script that compares the version and runs all necessary update scripts.
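A sketch of that version bookkeeping, assuming MySQL, zero-padded script names in updates/, and a hypothetical schema_version table holding the current number:
current=$(mysql -N -e 'SELECT version FROM schema_version' mydb)
for f in updates/*.sql; do              # zero-padded names keep glob order correct
  n=$((10#$(basename "$f" .sql)))       # strip leading zeros
  if [ "$n" -gt "$current" ]; then
    mysql mydb < "$f"                   # each script's last statement bumps schema_version
  fi
done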
So theoretically (and that's what I am looking for) you could
dump the database schema after each change (manually, via cronjob, or via git hooks (maybe before commit))
(and only in some very special cases create update scripts)
After that, your common update script would run the normal update scripts for the special cases, then compare the schemas (the dump and the current database) and automatically generate the necessary ALTER statements. There are some tools that can do this already, but I haven't found a good one yet.
What I do in my personal projects is: I store my whole database on Dropbox and then point my MAMP or WAMP workflow to use it right from there. That way the database is always up to date wherever I need to do some developing. But that's just for dev! Live sites use their own server for that, of course! :)
Storing each level of database changes under git versioning control is like pushing your entire database with each commit and restoring your entire database with each pull.
If your database is so prone to crucial changes and you cannot afford to lose them, you can just update your pre-commit and post-merge hooks.
I did the same with one of my projects and you can find the directions here.
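For instance, a hypothetical pre-commit hook that keeps a schema dump in sync with the code (paths and database name are placeholders):
#!/bin/sh
# .git/hooks/pre-commit: refresh the schema dump so every commit carries it
pg_dump --schema-only mydb > db/schema.sql
git add db/schema.sql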
Here's how I do it:
Since you have free choice of DB type, use a file-based DB, e.g. Firebird.
Create a template DB which has the schema that fits your actual branch, and store it in your repository.
When executing your application, programmatically create a copy of your template DB, store it somewhere else, and just work with that copy.
This way you can put your DB schema under version control without the data. And if you change your schema, you just have to change the template DB.
We used to run a social website, on a standard LAMP configuration. We had a Live server, Test server, and Development server, as well as the local developers machines. All were managed using GIT.
On each machine, we had the PHP files, but also the MySQL service and a folder with images that users would upload. The Live server grew to have some 100K (!) recurrent users, the dump was about 2GB (!), and the image folder was some 50GB (!). By the time I left, our server was reaching the limit of its CPU, RAM, and most of all, the concurrent network connection limits (we even compiled our own version of the network card driver to max out the server, lol). We could not (nor should you assume you can with your website) put 2GB of data and 50GB of images in Git.
To manage all this under Git easily, we would ignore the binary folders (the folders containing the images) by adding their paths to .gitignore. We also had a folder called SQL outside the Apache document-root path. In that SQL folder, we would put our SQL files from the developers with incremental numbering (001.florianm.sql, 001.johns.sql, 002.florianm.sql, etc). These SQL files were managed by Git as well. The first SQL file would indeed contain a large set of DB schema. We don't add user data in Git (e.g. the records of the users table, or the comments table), but data like configs or topology or other site-specific data was maintained in the SQL files (and hence by Git). Mostly it's the developers (who know the code best) who determine what is and is not maintained by Git with regard to SQL schema and data.
When it got to a release, the administrator logged in to the dev server, merged the live branch with all developers' and needed branches on the dev machine into an update branch, and pushed it to the test server. On the test server, he checked whether the updating process for the Live server was still valid and, in quick succession, pointed all traffic in Apache to a placeholder site, created a DB dump, pointed the working directory from 'live' to 'update', executed all new SQL files into MySQL, and repointed the traffic back to the correct site. When all stakeholders agreed after reviewing the test server, the administrator did the same thing from the test server to the live server. Afterwards, he merged the live branch on the production server to the master branch across all servers, and rebased all live branches. The developers were responsible themselves for rebasing their branches, but they generally knew what they were doing.
If there were problems on the test server, e.g. the merges had too many conflicts, then the code was reverted (pointing the working branch back to 'live') and the SQL files were never executed. The moment the SQL files were executed, this was considered a non-reversible action at the time. If the SQL files were not working properly, then the DB was restored using the dump (and the developers were told off for providing ill-tested SQL files).
Today, we maintain both a sql-up and a sql-down folder with equivalent filenames, where the developers have to test that the upgrading SQL files can be equally downgraded. This could ultimately be executed with a bash script, but it's a good idea if human eyes keep monitoring the upgrade process.
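That pairing is easy to police with a few lines of shell (a sketch, assuming the two folders sit side by side):
# flag any upgrade script that lacks a matching downgrade
for f in sql-up/*.sql; do
  [ -f "sql-down/$(basename "$f")" ] || echo "missing downgrade for $f"
done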
It's not great, but it's manageable. Hope this gives an insight into a real-life, practical, relatively high-availability site. Though a bit outdated, the approach is still followed.
Update Aug 26, 2019:
Netlify CMS is doing it with GitHub; an example implementation can be found here, with all the information on how they implemented it: netlify-cms-backend-github
I say don't. Data can change at any given time. Instead you should only commit data models in your code, schema and table definitions (create database and create table statements) and sample data for unit tests. This is kinda the way that Laravel does it, committing database migrations and seeds.
I would recommend neXtep (link removed; the domain was taken over by an NSFW website) for version controlling the database. It has a good set of documentation and forums that explain how to install it and deal with the errors encountered. I have tested it for PostgreSQL 9.1 and 9.3; I was able to get it working for 9.1, but for 9.3 it doesn't seem to work.
Use a tool like iBatis Migrations (manual, short tutorial video) which allows you to version control the changes you make to a database throughout the lifecycle of a project, rather than the database itself.
This allows you to selectively apply individual changes to different environments, keep a changelog of which changes are in which environments, create scripts to apply changes A through N, rollback changes, etc.
I'd like to put the entire database under version control, what database engine can I use so that I can put the actual database under version control instead of its dump?
This is not database-engine dependent. For Microsoft SQL Server there are lots of version control programs. I don't think this problem can be solved with git; you have to use a pgsql-specific schema version control system. I don't know whether such a thing exists or not...
Use a version-controlled database, of which there are now several.
https://www.dolthub.com/blog/2021-09-17-database-version-control/
These products don't apply version control on top of another type of database -- they are their own database engines that support version control operations. So you need to migrate to them or start building on them in the first place.
I write one of them, DoltDB, which combines the interfaces of MySQL and Git. Check it out here:
https://github.com/dolthub/dolt
I wish it were simpler. Checking in the schema as a text file is a good start to capture the structure of the DB. For the content, however, I have not found a cleaner, better method for git than CSV files. One per table. The DB can then be edited on multiple branches and merges extremely well.
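A sketch of the per-table export, assuming PostgreSQL and a hypothetical users table:
# \copy runs client-side, so the CSV lands in the working tree ready to commit
psql mydb -c "\copy users TO 'db/users.csv' CSV HEADER"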
