We have two main Git branches at our company (during our Sprint, everyone also works on their own local branches):
Release branch: contains prepared/developed code that is ready for testing.
Master branch: final, production-ready code.
I have read that cherry-picking in Git is considered bad practice:
https://blogs.msdn.microsoft.com/oldnewthing/20180312-00/?p=98215
http://www.draconianoverlord.com/2013/09/07/no-cherry-picking.html
After items are placed in master, they are ready to be deployed.
Sometimes we do not move all Release items into Master, for many reasons: schedule delays, the need for more testing, late-breaking issues. What is the proper Git/DevOps strategy to move only certain items into Master? Should we back out commits so we can do a clean merge?
Databases are different, as we apply change scripts rather than overwriting binaries as with applications.
Example:
Release branch          -------------> Master
Commits A, B, C, D, E   -------------> Commits B, D
Since your requirement is to apply non-sequential commits (for example commits B and D while skipping commit C) from the release branch to the master branch, you can check out (“copy”) file versions from the release branch into the master branch.
For example, to apply the changes from commit B on the release branch to the master branch, you can use the commands below:
git checkout master
git checkout <commit B> -- .
git commit -m 'get the file version from commit B to master branch'
Now the latest commit on the master branch contains the changes from commit B, and you can apply the changes from commit D to the master branch in the same way (see the sketch below).
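A minimal sketch of that second step, using <commit D> as a placeholder for the actual SHA of commit D, with a git diff --cached added only to review what is about to be committed:
git checkout master
git checkout <commit D> -- .
# review the staged result before committing
git diff --cached
git commit -m 'get the file version from commit D to master branch'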
After running YugabyteDB for some time, I see that the logs are never deleted. How can I configure the server to delete old logs?
Currently YugabyteDB doesn't provide an automatic way to delete old logs.
You can achieve this by running a bash script from crontab that deletes old files (see the sketch after the links below).
Examples:
https://askubuntu.com/questions/589210/removing-files-older-than-7-days
Delete files older than specific date in linux
https://askubuntu.com/questions/413529/delete-files-older-than-one-year-on-linux
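For instance, a minimal sketch along those lines, assuming the logs live under /home/yugabyte/yb-data (a hypothetical path; adjust the location, file name pattern, and retention period to your setup):
# crontab entry: every night at 02:00, delete rotated log files older than 7 days
0 2 * * * find /home/yugabyte/yb-data -type f -name "*.log.*" -mtime +7 -delete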
Our application uses a single code base backed by client-specific databases.
What we are trying to achieve is code deployment using the usual code push to the IIS website, and DB deployments using a SQL DACPAC for schema-only changes on Azure DevOps.
The issue here is that some of the changes do not go to all of the clients' databases simultaneously. What we need is the capability to select which databases are the targets of the current release.
Sometimes we will be releasing changes (schema only) to all of them, sometimes to only a few of them.
One way is to create separate release pipelines for all the databases and release them one by one.
Is there a way to include checkboxes in the release itself, so that every release asks me which databases these changes should go to?
Another possible solution would be a way to call 5-10 release pipelines (each for a different DB release) when creating a release from my main pipeline, with some kind of checkboxes that let me pick which ones to run and which ones to skip for this release.
I need suggestions/best industry practices for this scenario.
Yes, there is. You can configure one release pipeline with a SQL Server Database Deploy task for each database project. When you use that pipeline to create a release, DevOps gives you the flexibility to enable or disable each task for that specific release. Once you have the release pipeline created, the process is:
Select your release pipeline
Create release
Edit release (not pipeline)
Right-click on each SQL Server database deploy task and Enable or Disable as needed
Save
Deploy
You could do it by adding a condition to each task step that represents a deployment to one of your databases:
steps:
- task: PowerShell@2
  condition: eq(variables['deployToDb1'], true)
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Release to DB 1"
- task: PowerShell@2
  condition: eq(variables['deployToDb2'], true)
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Release to DB 2"
The variables deployToDb1 and deployToDb2 are defined through the UI on the Edit Pipeline page and can be overridden later when the run is queued.
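If you queue runs from the command line, the same toggles can be overridden there as well; a sketch assuming the Azure DevOps CLI extension is installed, organization/project defaults are set, the pipeline is named db-release (a hypothetical name), and both variables were marked as settable at queue time:
az pipelines run --name db-release --variables deployToDb1=true deployToDb2=false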
I work with Bazaar bound to the server.
Our Bazaar server was down, so I unbound myself with bzr unbind and committed locally with bzr commit. The server is up again, so I ran bzr bind, and now my commits show as a pending merge and appear in the local diff.
I would like to push them to the server on the main (only) branch so that the Bazaar history looks as if the server never had any issue.
When I try bzr rebase <mybranch> it says that I have uncommitted changes, and with --pending-merges it says I have no revision to rebase...
Do you know how I can get back a straight history?
If there are no other changes on the master branch, you can simply bzr push to the master and then bind again.
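A minimal sketch of that sequence, assuming the server branch lives at bzr+ssh://server/path/to/main (a hypothetical URL):
bzr unbind                                # detach from the server copy again, if currently bound
bzr push bzr+ssh://server/path/to/main    # your local commits become the branch history on the server
bzr bind bzr+ssh://server/path/to/main    # re-bind so future commits go to the server as before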
My database is running in NOARCHIVELOG mode. Control files and redo log files are multiplexed on two drives, say A and B. Drive C has the rest of the files.
I did a cold backup, but only of A and C (not B).
Now, when restoring, I copied the control file from A to B, which was enough to get further as far as control files are concerned. But what to do with the redo logs that are missing on B? I actually have more recent logs on B, left over from the scrapped database; they don't match the cold backup, so I am sure I can drop them, but I haven't done so yet.
I can get as far as MOUNT. When trying to open the database I get an end-of-file on communication channel error. ALTER DATABASE OPEN RESETLOGS wants me to do recovery first, and recovery is not working.
Is this the result of keeping the non-matching redo logs on B, or would it be something different? I thought I would be able to open a database with one member of each redo group corrupted (the B drive) and one intact (the redo logs on drive A).
Is there any option other than re-creating the control files (which I have) and opening with RESETLOGS?
It is Oracle 12.1.0.1. I will check my trace file, but I was thinking there is something obvious I am missing here.
I'm trying to find the fastest way to move a PostgreSQL 9.1.20 database from one server to another server running PostgreSQL 9.3.10.
The scenario is as follows:
Production server: Ubuntu 12.04, PostgreSQL 9.1.20, database size approx. 250 GB.
Target server we are trying to relocate to: Ubuntu 14.04, PostgreSQL 9.3.10.
The first thing we tried was to dump the database (pg_dump) from the old server and restore it on the new server (pg_restore). It worked just fine, but the time spent relocating was about 4 hours (pg_dump takes 3 hours and pg_restore takes 1 hour, over a 1 Gb network link with SSD disks on both servers). A total downtime of 4 hours is not acceptable.
The next attempt was to use pg_basebackup instead of pg_dump. This method reduced the backup time from 3 hours to about 40 minutes, which is acceptable. However, we cannot use the backup produced by pg_basebackup due to the version incompatibility.
I have read many articles on in-place database upgrades, but they all seem to refer to an upgrade on the SAME server.
So my question is: how can I upgrade the database backup produced by pg_basebackup on the new server without having the previous PostgreSQL server binaries installed?
Thanks.
You can perform the upgrade using repmgr and pg_upgrade with minimal downtime (several minutes).
On the master (PostgreSQL 9.1), enable streaming replication; a DB restart is required:
hot_standby = on
wal_keep_segments = 4000 # must be high enough for the standby to catch up
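Depending on how the master is currently configured, streaming replication on 9.1 usually also needs something along these lines (the standby address and replication user below are placeholders, not from the original answer):
wal_level = hot_standby
max_wal_senders = 5
# and in pg_hba.conf on the master, allow the standby to connect for replication:
host replication repmgr 192.0.2.10/32 md5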
On the standby server, install both PostgreSQL 9.1 and 9.3. On both servers, install repmgr 2.0, because the repmgr 3.x versions only work with PostgreSQL 9.3 or higher.
Synchronize master and standby server:
repmgr -D /var/lib/postgresql/9.1/main -p 5432 -d repmgr -U repmgr --verbose standby clone -h psql01a.example.net --fast-checkpoint
Underneath, pg_basebackup is used, so this method is pretty much the same as the one you've described.
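Roughly, what runs under the hood corresponds to something like the following (assuming a replication user named repmgr; this is only an illustration, repmgr assembles the exact call itself):
pg_basebackup -h psql01a.example.net -U repmgr -D /var/lib/postgresql/9.1/main -c fast -x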
Once cloning is finished, you can register and start the standby:
repmgr -f /etc/repmgr.conf standby register
service postgresql start
This allows the standby to catch up on changes committed on the master during replication.
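To confirm the standby really is in sync before proceeding, you can compare WAL positions; for example (a quick check, not part of the original answer):
psql -c "SELECT * FROM pg_stat_replication;"                                        # on the master: connected standbys and their WAL positions
psql -c "SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();"   # on the standby (9.1/9.3 function names)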
With both databases running (and in sync), check whether the upgrade is possible:
/usr/lib/postgresql/9.3/bin/pg_upgrade --check \
--old-datadir=/var/lib/postgresql/9.1/main \
--new-datadir=/var/lib/postgresql/9.3/main \
--old-bindir=/usr/lib/postgresql/9.1/bin \
--new-bindir=/usr/lib/postgresql/9.3/bin -v
If the upgrade checks pass, you have to stop the master; this is where the downtime period begins. Promote the standby to master:
postgres#: repmgr standby promote
service postgresql start
# if everything looks ok, just stop the server
service postgresql stop
The fastest upgrade method is using links:
/usr/lib/postgresql/9.3/bin/pg_upgrade --link \
--old-datadir=/var/lib/postgresql/9.1/main \
--new-datadir=/var/lib/postgresql/9.3/main \
--old-bindir=/usr/lib/postgresql/9.1/bin \
--new-bindir=/usr/lib/postgresql/9.3/bin
Even with 200 GB of data the upgrade shouldn't take longer than a few minutes (typically less than one minute). If things go south, it is hard to revert the changes when using --link (but we would still have a functional old master server). Once the upgrade is finished, start the new server. Verify everything is OK, and then you can safely delete the old cluster:
./delete_old_cluster.sh
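If pg_upgrade also generated an analyze_new_cluster.sh script alongside delete_old_cluster.sh, running it after the upgrade regenerates planner statistics, which are not carried over (a side note, not part of the original answer):
./analyze_new_cluster.sh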