sync repo.

This commit is contained in:
root 2022-10-31 09:37:42 -07:00
parent d10d71fbef
commit 7d5213f213
17 changed files with 49 additions and 934 deletions

.gitignore

@@ -1,3 +1,5 @@
.idea
bitbucket.diy-backup.vars.sh
*.iml
tmp
./clean.sh

README.md

@@ -1,154 +1,15 @@
# Bitbucket Server DIY Backup #
# Confluence Server DIY Backup #
This repository contains a set of example scripts that demonstrate best practices for backing up a Bitbucket Server/Data
This repository contains a set of example scripts that demonstrate best practices for backing up a Confluence Server/Data
Center instance using a curated set of vendor technologies.
This version only supports the technologies below:
- DB: PostgreSQL
- Disk: LVM
## For example only
## Usage
### Backup:
`sudo ./backup.sh`
These scripts are provided as a _working example_, which should be extended/enhanced/optimized as necessary based on the
environment in which they will be used. These examples are intended as a jumping-off point for you to create _your own_
backup strategy; they are not a drop-in solution for all potential configurations of Bitbucket Server/Data Center.
System administrators are expected to know their environment, and what technology is in use in that environment, and to
use these example scripts to help them build their own backup solution that makes the best use of the available tools.
To report bugs in the examples, [create a support request](https://support.atlassian.com/bitbucket-server/).
To report suggestions for how the examples could be improved, [create a suggestion](https://jira.atlassian.com/browse/BSERV).
## About these scripts
The scripts contained within this repository demonstrate two types of backup:
* Backups with downtime. This is the only type of backup available if your Bitbucket Server/Data Center instance is
older than 4.8, or if you are using the ``rsync`` strategy (described below).
* Zero downtime backups. To enable Zero Downtime Backup, set the variable `BACKUP_ZERO_DOWNTIME` to
`true`. When enabled, the scripts back up the filesystem and database **without** locking the application.
**NOTE:** This is only supported when used with Bitbucket Server/Data Center 4.8 or newer. It also requires a
compatible strategy for taking atomic block level snapshots of the home directory.
These scripts have been changed significantly with the release of Bitbucket 6.0. If updating from an older version of
the scripts, a number of configured variables will need updating. See the **Updating** section below for a list of
considerations when updating to a newer version of the backup scripts.
### Strategies ###
In order to use these example scripts you must specify a `BACKUP_DISK_TYPE` and `BACKUP_DATABASE_TYPE` strategy, and
optionally a `BACKUP_ARCHIVE_TYPE` and/or `BACKUP_ELASTICSEARCH_TYPE` strategy. These strategies can be set within the
`bitbucket.diy-backup.vars.sh`.
For each `BACKUP_DISK_TYPE`, `BACKUP_DATABASE_TYPE`, `BACKUP_ARCHIVE_TYPE` and `BACKUP_ELASTICSEARCH_TYPE` strategy,
additional variables need to be set in `bitbucket.diy-backup.vars.sh` to configure the details of your Bitbucket
instance's home directory, database, and other options. Refer to `bitbucket.diy-backup.vars.sh.example` for a complete
description of all the various variables and their definitions.
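As an illustration, a minimal strategy selection in `bitbucket.diy-backup.vars.sh` might look like the following (the values shown are examples only; consult `bitbucket.diy-backup.vars.sh.example` for the authoritative list of variables and valid values):

```shell
# Illustrative strategy selection; adjust to the technologies in your environment.
INSTANCE_NAME=bitbucket           # appears in archive names and snapshot tags
BACKUP_DISK_TYPE=zfs              # amazon-ebs | rsync | zfs
BACKUP_DATABASE_TYPE=postgresql   # amazon-rds | mysql | postgresql | postgresql-fslevel
BACKUP_ARCHIVE_TYPE=tar           # <leave blank> | aws-snapshots | gpg-zip | tar
BACKUP_ELASTICSEARCH_TYPE=        # <leave blank> | amazon-es | s3 | fs
```

Each chosen strategy then requires its own sub-options (home directory paths, database credentials, and so on) further down the same file.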
`BACKUP_DISK_TYPE` Strategy for backing up the Bitbucket home directory and any configured data stores, valid values are:
* `amazon-ebs` - Amazon EBS snapshots of the volume(s) containing the home directory and data stores.
* `rsync` - "rsync" of the home directory and data store contents to a temporary location. **NOTE:** This
can NOT be used with `BACKUP_ZERO_DOWNTIME=true`.
* `zfs` - ZFS snapshot strategy for home directory and data store backups.
`BACKUP_DATABASE_TYPE` Strategy for backing up the database, valid values are:
* `amazon-rds` - Amazon RDS snapshots.
* `mysql` - MySQL using "mysqldump" to backup and "mysql" to restore.
* `postgresql` - PostgreSQL using "pg_dump" to backup and "pg_restore" to restore.
* `postgresql-fslevel` - PostgreSQL with data directory located in the file system volume as home directory (so that
it will be included implicitly in the home volume snapshot).
`BACKUP_ARCHIVE_TYPE` Strategy for archiving backups and/or copying them to an offsite location, valid values are:
* `<leave-blank>` - Do not use an archiving strategy.
* `aws-snapshots` - AWS EBS and/or RDS snapshots, with optional copy to another region.
* `gpg-zip` - "gpg-zip" archive
* `tar` - Unix "tar" archive
`BACKUP_ELASTICSEARCH_TYPE` Strategy for backing up Elasticsearch, valid values are:
* `<leave blank>` - No separate snapshot and restore of Elasticsearch state (default).
- recommended for Bitbucket Server instances configured to use the (default) bundled
Elasticsearch instance. In this case all Elasticsearch state is stored under
${BITBUCKET_HOME}/shared and therefore already included in the home directory snapshot
implicitly. NOTE: If Bitbucket is configured to use a remote Elasticsearch instance (which
all Bitbucket Data Center instances must be), then its state is NOT included implicitly in
home directory backups, and may therefore take some time to rebuild after a restore UNLESS one of
the following strategies is used.
* `amazon-es` - Amazon Elasticsearch Service - uses an S3 bucket as a snapshot repository. Requires both
python and the python package 'boto' to be installed in order to sign the requests to AWS ES.
Once python has been installed run 'sudo pip install boto' to install the python boto package.
* `s3` - Amazon S3 bucket - requires the Elasticsearch Cloud plugin to be installed. See
https://www.elastic.co/guide/en/elasticsearch/plugins/2.3/cloud-aws.html
* `fs` - Shared filesystem - requires all data and master nodes to mount a shared file system to the
same mount point and that it is configured in the elasticsearch.yml file. See
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
`STANDBY_DISK_TYPE` Strategy for Bitbucket home directory disaster recovery, valid values are:
* `zfs` - ZFS snapshot strategy for disk replication.
`STANDBY_DATABASE_TYPE` Strategy for replicating the database, valid values are:
* `amazon-rds` - Amazon RDS Read replica
* `postgresql` - PostgreSQL replication
### Configuration ###
You will need to configure the script variables found in `bitbucket.diy-backup.vars.sh` based on your chosen strategies.
**Note** that not all options need to be configured. The backup strategy you choose together with your vendor tools will
determine which options should be set. See `bitbucket.diy-backup.vars.sh.example` for a complete set of all
configuration options.
`BACKUP_ZERO_DOWNTIME` If set to true, the home directory and database will be backed up **without** locking Bitbucket
by placing it in maintenance mode. **NOTE:** This can NOT be used with Bitbucket Server versions older than 4.8. For
more information, see [Zero downtime backup](https://confluence.atlassian.com/display/BitbucketServer/Using+Bitbucket+Zero+Downtime+Backup).
Make sure you read and understand this document before uncommenting this variable.
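Enabling the option is a single assignment in `bitbucket.diy-backup.vars.sh`; a sketch, with the preconditions from above restated as comments:

```shell
# Back up disk and database without locking the application.
# Requires Bitbucket Server/Data Center 4.8+ and an atomic snapshot-capable
# disk strategy (e.g. amazon-ebs or zfs); NOT compatible with rsync.
BACKUP_ZERO_DOWNTIME=true
```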
### Upgrading ###
In order to support Bitbucket Server 6.0, significant changes have been made to these scripts. If moving from an older
version of the DIY scripts, you will need to change certain variables in the `bitbucket.diy-backup.vars.sh` file. These
changes have been noted in `bitbucket.diy-backup.vars.sh.example`.
* `BACKUP_HOME_TYPE` has been renamed to `BACKUP_DISK_TYPE`
* `STANDBY_HOME_TYPE` has been renamed to `STANDBY_DISK_TYPE`
#### `amazon-ebs` strategy ####
* A new `EBS_VOLUME_MOUNT_POINT_AND_DEVICE_NAMES` variable has been introduced, which is an array of all EBS volumes
(the shared home directory, and any configured data stores). It needs to contain the details for the shared home that
were previously stored in `HOME_DIRECTORY_MOUNT_POINT` and `HOME_DIRECTORY_DEVICE_NAME`.
* The `HOME_DIRECTORY_DEVICE_NAME` variable is no longer needed.
* The `HOME_DIRECTORY_MOUNT_POINT` variable should still be set.
* `RESTORE_HOME_DIRECTORY_VOLUME_TYPE` has been renamed to `RESTORE_DISK_VOLUME_TYPE`.
* `RESTORE_HOME_DIRECTORY_IOPS` has been renamed to `RESTORE_DISK_IOPS`.
* `ZFS_HOME_TANK_NAME` has been replaced with `ZFS_FILESYSTEM_NAMES`, an array containing filesystem names for the
shared home, as well as any data stores. This is only required if `FILESYSTEM_TYPE` is set to `zfs`.
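As a sketch, the new EBS array variable might be populated like this. The `"<mount_point>:<device_name>"` entry format and the values shown are assumptions for illustration; confirm the exact format against `bitbucket.diy-backup.vars.sh.example`:

```shell
# Hypothetical values: one "<mount_point>:<device_name>" entry per EBS volume
# (shared home first, then any configured data stores). Entry format is an
# assumption here; check bitbucket.diy-backup.vars.sh.example before use.
EBS_VOLUME_MOUNT_POINT_AND_DEVICE_NAMES=("/media/atl:/dev/xvdf")

# Still required, as noted above.
HOME_DIRECTORY_MOUNT_POINT=/media/atl
```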
**Note:** EBS snapshots are now tagged with the device name they are a snapshot of. If snapshots were taken previously,
they will not have this tag, and as a result:
* Old EBS snapshots without a "Device" tag won't be cleaned up automatically
* Restoring from an old EBS snapshot without a "Device" tag will fail
Both of these issues can be mitigated by adding the "Device" tag manually in the AWS console. For any EBS snapshots,
add a tag with "Device" as the key and `"<device_name>"` as the value, where `<device_name>` is the device name of the
EBS volume holding the shared home directory (e.g. `"Device" : "/dev/xvdf"`).
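The manual tagging step can also be done with the AWS CLI instead of the console; a guarded sketch with placeholder IDs (substitute your own snapshot ID and device name):

```shell
# Placeholder snapshot ID and device name; substitute your own values.
SNAPSHOT_ID="snap-0123456789abcdef0"
DEVICE_NAME="/dev/xvdf"

# Add the "Device" tag so the scripts can clean up and restore this snapshot.
# Guarded so the sketch is a no-op where the AWS CLI is not available.
if command -v aws >/dev/null 2>&1; then
    aws ec2 create-tags --resources "${SNAPSHOT_ID}" \
        --tags "Key=Device,Value=${DEVICE_NAME}"
fi
```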
#### `rsync` strategy ####
* If any data stores are configured on the instance, `BITBUCKET_DATA_STORES` should be specified as an array of paths to
data stores.
* If any data stores are configured on the instance, `BITBUCKET_BACKUP_DATA_STORES` should specify a location for
storing data store backups.
#### `zfs` strategy ####
* A new `ZFS_FILESYSTEM_NAMES` variable has been introduced, which is an array of ZFS filesystems (the shared home
directory, and any configured data stores). It needs to contain the filesystem name of the shared home directory,
which was previously stored in `ZFS_HOME_TANK_NAME`.
* If using these scripts for disaster recovery, a new variable `ZFS_HOME_FILESYSTEM` needs to be set. This should
contain the name of the ZFS filesystem storing the shared home directory - the same value that was previously stored
in `ZFS_HOME_TANK_NAME`.
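Putting the two `zfs` variables together (the filesystem name is illustrative; `tank/atlassian-home` mirrors the value used in the example vars file):

```shell
# All ZFS filesystems to snapshot: the shared home first, then any data stores.
ZFS_FILESYSTEM_NAMES=(tank/atlassian-home)

# Disaster recovery only: the filesystem holding the shared home directory,
# i.e. the value previously kept in ZFS_HOME_TANK_NAME.
ZFS_HOME_FILESYSTEM=tank/atlassian-home
```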
### Further reading ###
* [Zero Downtime Backup](https://confluence.atlassian.com/display/BitbucketServer/Using+Bitbucket+Zero+Downtime+Backup)
* [Using Bitbucket Server DIY Backup](https://confluence.atlassian.com/display/BitbucketServer/Using+Bitbucket+Server+DIY+Backup)
* [Using Bitbucket Server DIY Backup in AWS](https://confluence.atlassian.com/display/BitbucketServer/Using+Bitbucket+Server+DIY+Backup+in+AWS)
### Restore
* `sudo ./restore.sh` -- To list the available backups.
* `sudo ./restore.sh 2022-10-28` -- To restore a specific backup.


@@ -3,11 +3,10 @@ set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
# source "${SCRIPT_DIR}/common.sh"
source "${SCRIPT_DIR}/func.sh"
source "${SCRIPT_DIR}/vars.sh"
source "${SCRIPT_DIR}/lvm.sh"
source "${SCRIPT_DIR}/dbase.sh"
source "${SCRIPT_DIR}/disk-lvm.sh"
source "${SCRIPT_DIR}/database-postgresql.sh"
readonly DB_BACKUP_JOB_NAME="Database backup"
readonly DISK_BACKUP_JOB_NAME="Disk backup"
@@ -28,5 +27,3 @@ prepare_backup_disk
info "Backing up the database and filesystem in parallel"
run_in_bg backup_db "$DB_BACKUP_JOB_NAME"
run_in_bg backup_disk "$DISK_BACKUP_JOB_NAME"
# perform_cleanup_tmp


@@ -4,10 +4,11 @@
# docker stop confluence_wiki-db_1
sudo su - mali -c "cd /home/mali/confluence/;docker-compose stop"
rm -rf /data1/*
mkdir -p /data1/snapshot
source "./vars.sh"
# exit 1
docker start confluence_wiki-db_1
sleep 3
psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "template1" -tqc 'DROP DATABASE IF EXISTS confluence'
./confluence.diy-restore.sh 2022-10-28
sudo su - mali -c "cd /home/mali/confluence/;docker-compose start"
./restore.sh 2022-10-28
sudo su - mali -c "cd /home/mali/confluence/;docker-compose start"


@@ -5,7 +5,7 @@
# The name of the product
PRODUCT=Confluence
BACKUP_VARS_FILE=${BACKUP_VARS_FILE:-"${SCRIPT_DIR}"/confluence.diy-backup.vars.sh}
BACKUP_VARS_FILE=${BACKUP_VARS_FILE:-"${SCRIPT_DIR}"/vars.sh}
PATH=$PATH:/sbin:/usr/sbin:/usr/local/bin
TIMESTAMP=$(date +"%Y%m%d-%H%M%S")
@@ -34,22 +34,6 @@ function backup_start {
return
fi
# local backup_response=$(run curl ${CURL_OPTIONS} -u "${CONFLUENCE_BACKUP_USER}:${CONFLUENCE_BACKUP_PASS}" -X POST -H \
# "X-Atlassian-Maintenance-Token: ${CONFLUENCE_LOCK_TOKEN}" -H "Accept: application/json" \
# -H "Content-type: application/json" "${CONFLUENCE_URL}/mvc/admin/backups?external=true")
# if [ -z "${backup_response}" ]; then
# bail "Unable to enter backup mode. POST to '${CONFLUENCE_URL}/mvc/admin/backups?external=true' \
# returned '${backup_response}'"
# fi
# CONFLUENCE_BACKUP_TOKEN=$(echo "${backup_response}" | jq -r ".cancelToken" | tr -d '\r')
# if [ -z "${CONFLUENCE_BACKUP_TOKEN}" ]; then
# bail "Unable to enter backup mode. Could not find 'cancelToken' in response '${backup_response}'"
# fi
# info "Confluence server is now preparing for backup. If the backup task is cancelled, Confluence Server should be notified that backup was terminated by executing the following command:"
# info " curl -u ... -X POST -H 'Content-type:application/json' '${CONFLUENCE_URL}/mvc/maintenance?token=${CONFLUENCE_BACKUP_TOKEN}'"
# info "This will also terminate the backup process in Confluence Server. Note that this will not unlock Confluence Server from maintenance mode."
info "Confluence backup started."
}


@@ -1,124 +0,0 @@
#!/bin/bash
# -------------------------------------------------------------------------------------
# The DIY backup script.
#
# This script is invoked to perform the backup of a Bitbucket Server,
# or Bitbucket Data Center instance. It requires a properly configured
# bitbucket.diy-backup.vars.sh file, which can be copied from
# bitbucket.diy-backup.vars.sh.example and customized.
# -------------------------------------------------------------------------------------
# Ensure the script terminates whenever a required operation encounters an error
set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
source "${SCRIPT_DIR}/common.sh"
source_archive_strategy
source_database_strategy
source_disk_strategy
check_command "jq"
##########################################################
readonly DB_BACKUP_JOB_NAME="Database backup"
readonly DISK_BACKUP_JOB_NAME="Disk backup"
# Started background jobs
declare -A BG_JOBS
# Successfully completed background jobs
declare -a COMPLETED_BG_JOBS
# Failed background jobs
declare -A FAILED_BG_JOBS
# Run a command in the background and record its PID so we can wait for its completion
function run_in_bg {
($1) &
local PID=$!
BG_JOBS["$2"]=${PID}
debug "Started $2 (PID=${PID})"
}
# Wait for all tracked background jobs (i.e. jobs recorded in 'BG_JOBS') to finish. If one or more jobs return a
# non-zero exit code, we log an error for each and return a non-zero value to fail the backup.
function wait_for_bg_jobs {
for bg_job_name in "${!BG_JOBS[@]}"; do
local PID=${BG_JOBS[${bg_job_name}]}
debug "Waiting for ${bg_job_name} (PID=${PID})"
{
wait ${PID}
} && {
debug "${bg_job_name} finished successfully (PID=${PID})"
COMPLETED_BG_JOBS+=("${bg_job_name}")
update_backup_progress 50
} || {
FAILED_BG_JOBS["${bg_job_name}"]=$?
}
done
if ((${#FAILED_BG_JOBS[@]})); then
for bg_job_name in "${!FAILED_BG_JOBS[@]}"; do
error "${bg_job_name} failed with status ${FAILED_BG_JOBS[${bg_job_name}]} (PID=${PID})"
done
return 1
fi
}
# Clean up after a failed backup
function cleanup_incomplete_backup {
debug "Cleaning up after failed backup"
for bg_job_name in "${COMPLETED_BG_JOBS[@]}"; do
case "$bg_job_name" in
"$DB_BACKUP_JOB_NAME")
cleanup_incomplete_db_backup
;;
"$DISK_BACKUP_JOB_NAME")
cleanup_incomplete_disk_backup
;;
*)
error "No cleanup task defined for backup type: $bg_job_name"
;;
esac
done
}
##########################################################
info "Preparing for backup"
prepare_backup_db
prepare_backup_disk
# If necessary, lock Bitbucket, start an external backup and wait for instance readiness
backup_start
# backup_wait
info "Backing up the database and filesystem in parallel"
run_in_bg backup_db "$DB_BACKUP_JOB_NAME"
run_in_bg backup_disk "$DISK_BACKUP_JOB_NAME"
{
wait_for_bg_jobs
} || {
cleanup_incomplete_backup || error "Failed to cleanup incomplete backup"
error "Backing up ${PRODUCT} failed"
exit 1
}
# If necessary, report 100% progress back to the application, and unlock Bitbucket
update_backup_progress 100
success "Successfully completed the backup of your ${PRODUCT} instance"
# Cleanup backups retaining the latest $KEEP_BACKUPS
cleanup_old_db_backups
cleanup_old_disk_backups
if [ -n "${BACKUP_ARCHIVE_TYPE}" ]; then
info "Archiving backups and cleaning up old archives"
archive_backup
cleanup_old_archives
fi


@@ -1,263 +0,0 @@
# Name used to identify the CONFLUENCE/Mesh instance being backed up. This appears in archive names and AWS snapshot tags.
# It should not contain spaces and must be under 100 characters long.
INSTANCE_NAME=confluence
# Type of instance being backed up:
# - <leave blank> or CONFLUENCE-dc - The instance being backed up is a CONFLUENCE DC instance.
# - CONFLUENCE-mesh - The instance being backed up is a CONFLUENCE Mesh instance.
INSTANCE_TYPE=confluence-dc
# Owner and group of ${CONFLUENCE_HOME}:
CONFLUENCE_UID=confluence
CONFLUENCE_GID=confluence
# Strategy for backing up the CONFLUENCE/Mesh home directory and data stores (if configured):
# - amazon-ebs - Amazon EBS snapshots of the volumes containing data for CONFLUENCE Server/Mesh
# - rsync - "rsync" of the disk contents to a temporary location. NOTE: This can NOT be used
# with BACKUP_ZERO_DOWNTIME=true.
# - zfs - ZFS snapshot strategy for disk backups.
# - none - Do not attempt to backup the home directory or data stores.
# Note: this config var was previously named BACKUP_HOME_TYPE
BACKUP_DISK_TYPE=lvm
# Strategy for backing up the database:
# - amazon-rds - Amazon RDS snapshots
# - mysql - MySQL using "mysqldump" to backup and "mysql" to restore
# - postgresql - PostgreSQL using "pg_dump" to backup and "pg_restore" to restore
# - postgresql-fslevel - PostgreSQL with data directory located in the file system volume as home directory (so
# that it will be included implicitly in the home volume snapshot)
# - none - Do not attempt to backup the database.
#
# Note: This property is ignored while backing up Mesh nodes.
BACKUP_DATABASE_TYPE=postgresql
# Strategy for archiving backups and/or copying them to an offsite location:
# - <leave blank> - Do not use an archiving strategy
# - aws-snapshots - AWS EBS and/or RDS snapshots, with optional copy to another region
# - gpg-zip - "gpg-zip" archive
# - tar - Unix "tar" archive
BACKUP_ARCHIVE_TYPE=tar
# Strategy for CONFLUENCE/Mesh disk disaster recovery:
# - zfs - ZFS snapshot strategy for disk replication.
# - none - Do not attempt to replicate data on disk.
STANDBY_DISK_TYPE=none
# Strategy for replicating the database:
# - amazon-rds - Amazon RDS Read replica
# - postgresql - PostgreSQL replication
# - none - Do not attempt to replicate the database.
#
# Note: This property is ignored while backing up Mesh nodes.
STANDBY_DATABASE_TYPE=none
# If BACKUP_ZERO_DOWNTIME is set to true, data on disk and the database will be backed up WITHOUT locking CONFLUENCE
# in maintenance mode. NOTE: This can NOT be used with CONFLUENCE Server versions older than 4.8. For more information,
# see https://confluence.atlassian.com/display/CONFLUENCEServer/Using+CONFLUENCE+Zero+Downtime+Backup.
# Make sure you read and understand this document before uncommenting this variable.
#BACKUP_ZERO_DOWNTIME=true
# Sub-options for each disk backup strategy
case ${BACKUP_DISK_TYPE} in
rsync)
# The path to the CONFLUENCE/Mesh home directory (with trailing /)
CONFLUENCE_HOME=/var/atlassian/application-data/CONFLUENCE/
# Paths to all configured data stores (with trailing /)
# Only required if one or more data stores is attached to the instance.
CONFLUENCE_DATA_STORES=()
# Optional list of repo IDs which should be excluded from the backup. For example: (2 5 88)
# Note: This property is ignored while backing up Mesh nodes.
CONFLUENCE_BACKUP_EXCLUDE_REPOS=()
;;
lvm)
# The path to the CONFLUENCE home directory (with trailing /)
CONFLUENCE_HOME=/data2/confluence
CONFLUENCE_BACKUP_HOME=/backup/confluence
# Paths to all configured data stores (with trailing /)
# Only required if one or more data stores is attached to the instance.
CONFLUENCE_DATA_STORES=()
# Optional list of repo IDs which should be excluded from the backup. For example: (2 5 88)
CONFLUENCE_BACKUP_EXCLUDE_REPOS=()
;;
zfs)
# The name of each filesystem that holds file server data for CONFLUENCE Server/Mesh. This should, at a minimum,
# include the home directory filesystem, and if configured, the filesystems for each data store.
# This must be the same name(s) on the standby if using replication.
# Note: this config var should contain the value previously in ZFS_HOME_TANK_NAME
ZFS_FILESYSTEM_NAMES=(tank/atlassian-home)
# ==== DISASTER RECOVERY VARS ====
# The name of the ZFS filesystem containing the shared home directory. This is needed for disaster recovery so
# that the home directory can be promoted.
ZFS_HOME_FILESYSTEM=
# The user for SSH when running replication commands on the standby file server.
# Note this user needs password-less sudo on the standby to run zfs commands and password-less ssh from
# the primary file server to the standby file server.
STANDBY_SSH_USER=
# (Optional) Append flags to the SSH commands. e.g. "-i private_key.pem"
# Useful flags for unattended ssh commands: -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
STANDBY_SSH_OPTIONS=
# The hostname of the standby file server
STANDBY_SSH_HOST=
;;
esac
# Sub-options for each database backup strategy
#
# Note: This property is ignored while backing up Mesh nodes.
case ${BACKUP_DATABASE_TYPE} in
mysql)
CONFLUENCE_DB=CONFLUENCE
MYSQL_HOST=
MYSQL_USERNAME=
MYSQL_PASSWORD=
MYSQL_BACKUP_OPTIONS=
;;
mssql)
CONFLUENCE_DB=CONFLUENCE
;;
postgresql)
# The connection details for your primary instance's PostgreSQL database. The pg_hba.conf file must
# be configured to allow the backup and restore scripts full access as POSTGRES_USERNAME with the
# specified PGPASSWORD. When Disaster Recovery is used, POSTGRES_HOST must also be accessible from
# the standby system with the same level of access.
CONFLUENCE_DB=confluence
POSTGRES_HOST=localhost
POSTGRES_USERNAME=database1user
export PGPASSWORD=database1password
POSTGRES_PORT=5432
# ==== DISASTER RECOVERY VARS ====
# The full path to the standby server's PostgreSQL data folder, e.g. "/var/lib/pgsql94/data"
# Note: Attempt auto-detection based on major version (Works with CentOS, RHEL and Amazon Linux, override if unsure)
STANDBY_DATABASE_DATA_DIR="/var/lib/pgsql${psql_major}/data"
# The user which runs the PostgreSQL system service. This is normally "postgres"
STANDBY_DATABASE_SERVICE_USER=postgres
# The name of the replication slot
STANDBY_DATABASE_REPLICATION_SLOT_NAME=CONFLUENCE
# The username and password of the user that will be used to execute the replication.
STANDBY_DATABASE_REPLICATION_USER_USERNAME=
STANDBY_DATABASE_REPLICATION_USER_PASSWORD=
# The postgres service name for stopping / starting it.
# Note: Attempt auto-detection based on major version (Works with CentOS, RHEL and Amazon Linux, override if unsure)
STANDBY_DATABASE_SERVICE_NAME="postgresql${psql_major}"
;;
postgresql-fslevel)
# The postgres service name for stopping / starting it at restore time.
POSTGRESQL_SERVICE_NAME="postgresql${psql_major}"
;;
esac
case ${BACKUP_ARCHIVE_TYPE} in
*)
# The path to working folder for the backup
CONFLUENCE_BACKUP_ROOT=
CONFLUENCE_BACKUP_DB=${CONFLUENCE_BACKUP_ROOT}/CONFLUENCE-db/
CONFLUENCE_BACKUP_HOME=${CONFLUENCE_BACKUP_ROOT}/CONFLUENCE-home/
CONFLUENCE_BACKUP_DATA_STORES=${CONFLUENCE_BACKUP_ROOT}/CONFLUENCE-data-stores/
# The path to where the backup archives are stored
CONFLUENCE_BACKUP_ARCHIVE_ROOT=
# Options for the gpg-zip archive type
CONFLUENCE_BACKUP_GPG_RECIPIENT=
;;
esac
# Options to pass to every "curl" command
CURL_OPTIONS="-L -s -f"
# === AWS Variables ===
if [ "amazon-ebs" = "${BACKUP_DISK_TYPE}" -o "amazon-rds" = "${BACKUP_DATABASE_TYPE}" ]; then
AWS_INFO=$(curl ${CURL_OPTIONS} http://169.254.169.254/latest/dynamic/instance-identity/document)
# The AWS account ID of the instance. Used to create Amazon Resource Names (ARNs)
AWS_ACCOUNT_ID=$(echo "${AWS_INFO}" | jq -r .accountId)
# The availability zone in which volumes will be created when restoring an instance.
AWS_AVAILABILITY_ZONE=$(echo "${AWS_INFO}" | jq -r .availabilityZone)
# The region for the resources CONFLUENCE is using (volumes, instances, snapshots, etc)
AWS_REGION=$(echo "${AWS_INFO}" | jq -r .region)
# The EC2 instance ID
AWS_EC2_INSTANCE_ID=$(echo "${AWS_INFO}" | jq -r .instanceId)
# Additional AWS tags for EBS and RDS snapshot, tags needs to be in JSON format without enclosing square brackets:
# Example: AWS_ADDITIONAL_TAGS='{"Key":"example_key", "Value":"example_value"}'
AWS_ADDITIONAL_TAGS=
# Ensure we fsfreeze while snapshots of ebs volumes are taken
FSFREEZE=true
fi
# Used by the scripts for verbose logging. If not true only errors will be shown.
CONFLUENCE_VERBOSE_BACKUP=${CONFLUENCE_VERBOSE_BACKUP:-true}
# HipChat options
HIPCHAT_URL=https://api.hipchat.com
HIPCHAT_ROOM=
HIPCHAT_TOKEN=
# The number of backups to retain. After backups are taken, all old snapshots except for the most recent
# ${KEEP_BACKUPS} are deleted. Set to 0 to disable cleanup of old snapshots.
# This is also used by Disaster Recovery to limit snapshots.
KEEP_BACKUPS=0
# ==== Elasticsearch VARS ====
# The CONFLUENCE search index (default is CONFLUENCE-search-v1). Most users will NOT need to change this.
ELASTICSEARCH_INDEX_NAME=CONFLUENCE-search-v1
# The hostname (and port, if required) for the Elasticsearch instance
ELASTICSEARCH_HOST=localhost:7992
ELASTICSEARCH_REPOSITORY_NAME=CONFLUENCE-snapshots
case ${BACKUP_ELASTICSEARCH_TYPE} in
amazon-es)
# Configuration for the Amazon Elasticsearch Service
ELASTICSEARCH_S3_BUCKET=
ELASTICSEARCH_S3_BUCKET_REGION=us-east-1
# The IAM role that can be used to snapshot AWS Elasticsearch, used to configure the S3 snapshot repository
ELASTICSEARCH_SNAPSHOT_IAM_ROLE=
;;
s3)
# Configuration for the Amazon S3 snapshot repository (s3)
ELASTICSEARCH_S3_BUCKET=
ELASTICSEARCH_S3_BUCKET_REGION=us-east-1
# Elasticsearch credentials
ELASTICSEARCH_USERNAME=
ELASTICSEARCH_PASSWORD=
;;
fs)
# Configuration for the shared filesystem snapshot repository (fs)
ELASTICSEARCH_REPOSITORY_LOCATION=
# Elasticsearch credentials
ELASTICSEARCH_USERNAME=
ELASTICSEARCH_PASSWORD=
;;
esac
# ==== DISASTER RECOVERY VARS ====
# Only used on a CONFLUENCE Data Center primary instance which has been configured with a Disaster Recovery standby system.
# See https://confluence.atlassian.com/display/CONFLUENCEServer/Disaster+recovery+guide+for+CONFLUENCE+Data+Center for more information.
# The JDBC URL for the STANDBY database server.
# WARNING: It is imperative that you set this to the correct JDBC URL for your STANDBY database.
# During fail-over, 'promote-home.sh' will write this to your 'CONFLUENCE.properties' file so that
# your standby CONFLUENCE instance will connect to the right database. If this is incorrect, then
# in a fail-over scenario your standby CONFLUENCE instance may fail to start or even connect to the
# incorrect database.
#
# Example for PostgreSQL:
# "jdbc:postgres://standby-db.my-company.com:${POSTGRES_PORT}/${CONFLUENCE_DB}"
# Example for PostgreSQL running in Amazon RDS
# jdbc:postgres://${RDS_ENDPOINT}/${CONFLUENCE_DB}
#
# Note: This property is ignored while backing up Mesh nodes.
STANDBY_JDBC_URL=


@@ -13,7 +13,7 @@ if [[ ${psql_majorminor} -ge 9003 ]]; then
fi
function prepare_backup_db {
check_config_var "CONFLUENCE_BACKUP_DB"
check_config_var "CONFLUENCE_BACKUP_DB_TMP"
check_config_var "POSTGRES_USERNAME"
check_config_var "POSTGRES_HOST"
check_config_var "POSTGRES_PORT"
@@ -21,10 +21,10 @@ function prepare_backup_db {
}
function backup_db {
[ -d "${CONFLUENCE_BACKUP_DB}" ] && rm -r "${CONFLUENCE_BACKUP_DB}"
mkdir -p "${CONFLUENCE_BACKUP_DB}"
run pg_dump -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} ${PG_PARALLEL} -Fd \
-d "${CONFLUENCE_DB}" ${PG_SNAPSHOT_OPT} -f "${CONFLUENCE_BACKUP_DB}"
[ -d "${CONFLUENCE_BACKUP_DB_TMP}" ] && rm -r "${CONFLUENCE_BACKUP_DB_TMP}"
mkdir -p "${CONFLUENCE_BACKUP_DB_TMP}"
run pg_dump -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} ${PG_PARALLEL} -Fd -d "${CONFLUENCE_DB}" ${PG_SNAPSHOT_OPT} -f "${CONFLUENCE_BACKUP_DB_TMP}"
perform_rsync_compress_db
}
function prepare_restore_db {
@@ -51,7 +51,7 @@ function restore_db {
function cleanup_incomplete_db_backup {
info "Cleaning up DB backup created as part of failed/incomplete backup"
rm -r "${CONFLUENCE_BACKUP_DB}"
rm -r "${CONFLUENCE_BACKUP_DB_TMP}"
}
function cleanup_old_db_backups {
@@ -59,124 +59,8 @@ function cleanup_old_db_backups {
no_op
}
# ----------------------------------------------------------------------------------------------------------------------
# Disaster recovery functions
# ----------------------------------------------------------------------------------------------------------------------
# Promote a standby database to take over from the primary, as part of a disaster recovery failover process
function promote_db {
check_command "pg_ctl"
check_config_var "STANDBY_DATABASE_SERVICE_USER"
check_config_var "STANDBY_DATABASE_REPLICATION_USER_USERNAME"
check_config_var "STANDBY_DATABASE_REPLICATION_USER_PASSWORD"
check_config_var "STANDBY_DATABASE_DATA_DIR"
local is_in_recovery=$(PGPASSWORD="${STANDBY_DATABASE_REPLICATION_USER_PASSWORD}" \
run psql -U "${STANDBY_DATABASE_REPLICATION_USER_USERNAME}" -d "${CONFLUENCE_DB}" -tqc "SELECT pg_is_in_recovery()")
case "${is_in_recovery/ }" in
"t")
;;
"f")
bail "Cannot promote standby PostgreSQL database, because it is already running as a primary database."
;;
"")
bail "Cannot promote standby PostgreSQL database."
;;
*)
bail "Cannot promote standby PostgreSQL database, got unexpected result '${is_in_recovery}'."
;;
esac
info "Promoting standby database instance"
# Run pg_ctl in the root ( / ) folder to avoid warnings about user directory permissions.
# Also ensure the command is run as the same user that is running the PostgreSQL service.
# Because we have password-less sudo, we use su to execute the pg_ctl command
run sudo su "${STANDBY_DATABASE_SERVICE_USER}" -c "cd / ; pg_ctl -D '${STANDBY_DATABASE_DATA_DIR}' promote"
success "Promoted PostgreSQL standby instance"
}
# Configures a standby PostgreSQL database (which must be accessible locally by the "pg_basebackup" command) to
# replicate from the primary PostgreSQL database specified by the POSTGRES_HOST, POSTGRES_DB, etc. variables.
function setup_db_replication {
check_command "pg_basebackup"
check_config_var "POSTGRES_HOST"
# Checks to see if the primary instance is set up for replication
info "Checking primary PostgreSQL server '${POSTGRES_HOST}'"
validate_primary_db
debug "Primary checks were successful"
# Checks and configures standby instance for replication
info "Setting up standby PostgreSQL instance"
check_config_var "STANDBY_DATABASE_SERVICE_NAME"
check_config_var "STANDBY_DATABASE_SERVICE_USER"
check_config_var "STANDBY_DATABASE_REPLICATION_USER_USERNAME"
check_config_var "STANDBY_DATABASE_REPLICATION_USER_PASSWORD"
check_config_var "STANDBY_DATABASE_DATA_DIR"
# Run command from the root ( / ) folder and ensure the command is run as the same user that is running the
# PostgreSQL service. Because we have password-less sudo, we use su to execute the pg_basebackup command, and
# ensure we pass the correct password to the shell that is executing it.
info "Transferring base backup from primary to standby PostgreSQL, this could take a while depending on database size and bandwidth available"
run sudo su "${STANDBY_DATABASE_SERVICE_USER}" -c "cd / ; PGPASSWORD='${STANDBY_DATABASE_REPLICATION_USER_PASSWORD}' \
pg_basebackup -D '${STANDBY_DATABASE_DATA_DIR}' -R -P -x -h '${POSTGRES_HOST}' -U '${STANDBY_DATABASE_REPLICATION_USER_USERNAME}'"
local slot_config="primary_slot_name = '${STANDBY_DATABASE_REPLICATION_SLOT_NAME}'"
debug "Appending '${slot_config}' to '${STANDBY_DATABASE_DATA_DIR}/recovery.conf'"
sudo su "${STANDBY_DATABASE_SERVICE_USER}" -c "echo '${slot_config}' >> '${STANDBY_DATABASE_DATA_DIR}/recovery.conf'"
run sudo service "${STANDBY_DATABASE_SERVICE_NAME}" start
success "Standby setup was successful"
}
#-----------------------------------------------------------------------------------------------------------------------
# Private functions
#-----------------------------------------------------------------------------------------------------------------------
function get_config_setting {
local var_name="$1"
local var_value=$(run psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} \
-d "${CONFLUENCE_DB}" -tqc "SHOW ${var_name}")
# "psql -t" pads the value with a leading space; strip it before returning
echo "${var_value/ }"
}
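Note that `${var_value/ }` removes only the *first* space; for settings whose values can contain interior spaces, trimming just the leading and trailing whitespace is more robust. A small pure-bash sketch (the `trim` helper is introduced here for illustration, it is not part of these scripts):

```shell
#!/bin/bash
# Trim leading and trailing whitespace without touching interior spaces.
trim() {
    local s="$1"
    s="${s#"${s%%[![:space:]]*}"}"   # drop leading whitespace
    s="${s%"${s##*[![:space:]]}"}"   # drop trailing whitespace
    printf '%s' "${s}"
}
```

For example, `trim '  hot_standby  '` yields `hot_standby` with no surrounding spaces.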
function validate_primary_db {
if [ "$(get_config_setting wal_level)" != "hot_standby" ]; then
bail "Primary instance is not configured correctly. Update postgresql.conf, set 'wal_level' to 'hot_standby'"
fi
if [ "$(get_config_setting max_wal_senders)" -lt 1 ]; then
bail "Primary instance is not configured correctly. Update postgresql.conf with valid 'max_wal_senders'"
fi
if [ "$(get_config_setting wal_keep_segments)" -lt 1 ]; then
bail "Primary instance is not configured correctly. Update postgresql.conf with valid 'wal_keep_segments'"
fi
if [ "$(get_config_setting max_replication_slots)" -lt 1 ]; then
bail "Primary instance is not configured correctly. Update postgresql.conf with valid 'max_replication_slots'"
fi
local replication_slot=$(run psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "${CONFLUENCE_DB}" -tqc \
"SELECT * FROM pg_create_physical_replication_slot('${STANDBY_DATABASE_REPLICATION_SLOT_NAME}')")
if [[ "${replication_slot}" =~ "already exists" ]]; then
info "Replication slot '${STANDBY_DATABASE_REPLICATION_SLOT_NAME}' already exists, skipping creation"
else
info "Replication slot '${STANDBY_DATABASE_REPLICATION_SLOT_NAME}' created successfully"
fi
local replication_user=$(run psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "${CONFLUENCE_DB}" -tqc \
"\du ${STANDBY_DATABASE_REPLICATION_USER_USERNAME}")
if [ -z "${replication_user}" ]; then
run psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "${CONFLUENCE_DB}" -tqc \
"CREATE USER ${STANDBY_DATABASE_REPLICATION_USER_USERNAME} REPLICATION LOGIN CONNECTION \
LIMIT 1 ENCRYPTED PASSWORD '${STANDBY_DATABASE_REPLICATION_USER_PASSWORD}'"
info "Replication user '${STANDBY_DATABASE_REPLICATION_USER_USERNAME}' has been created"
else
info "Replication user '${STANDBY_DATABASE_REPLICATION_USER_USERNAME}' already exists, skipping creation"
fi
}
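The `[[ ... =~ "already exists" ]]` test above relies on a bash quirk worth knowing: quoting the right-hand side of `=~` makes it a literal substring match rather than a regular expression. A quick self-contained illustration:

```shell
#!/bin/bash
# A quoted right-hand side of =~ means a literal substring match, not a regex.
contains_already_exists() {
    [[ "$1" =~ "already exists" ]]
}

if contains_already_exists 'ERROR: replication slot "main" already exists'; then
    echo "matched"
fi
```

An unquoted right-hand side would instead be interpreted as a regex, so special characters in the pattern would change meaning.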
function perform_rsync_compress_db {
[ -d "${CONFLUENCE_BACKUP_DB}" ] && rm -r "${CONFLUENCE_BACKUP_DB}"
mkdir -p "${CONFLUENCE_BACKUP_DB}"
rsync --remove-source-files -h "${CONFLUENCE_BACKUP_DB_TMP}"/* "${CONFLUENCE_BACKUP_DB}/"
}

# -------------------------------------------------------------------------------------
# A backup and restore strategy for PostgreSQL with "pg_dump" and "pg_restore" commands.
# -------------------------------------------------------------------------------------
check_command "pg_dump"
check_command "psql"
check_command "pg_restore"
# Make use of PostgreSQL 9.3+ options if available
if [[ ${psql_majorminor} -ge 9003 ]]; then
PG_PARALLEL="-j 5"
PG_SNAPSHOT_OPT="--no-synchronized-snapshots"
fi
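`psql_majorminor` is assumed to be set elsewhere in these scripts; a common way to derive such a combined major/minor number is to parse `psql --version` output. A sketch under that assumption (the `psql_version_number` helper is illustrative, not part of the original scripts):

```shell
#!/bin/bash
# Turn a version string like "psql (PostgreSQL) 9.6.24" into 9006,
# so numeric comparisons like "-ge 9003" work.
psql_version_number() {
    local version_line="$1"
    local major minor
    major=$(echo "${version_line}" | sed -E 's/^[^0-9]*([0-9]+)\.([0-9]+).*/\1/')
    minor=$(echo "${version_line}" | sed -E 's/^[^0-9]*([0-9]+)\.([0-9]+).*/\2/')
    echo $(( major * 1000 + minor ))
}

# In the real scripts this would be: psql_majorminor=$(psql_version_number "$(psql --version)")
psql_version_number "psql (PostgreSQL) 9.6.24"   # prints 9006
```

Encoding major*1000+minor keeps the comparison purely numeric, so `9.2` (9002) correctly sorts below `9.3` (9003).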
function prepare_backup_db {
check_config_var "CONFLUENCE_BACKUP_DB_TMP"
check_config_var "POSTGRES_USERNAME"
check_config_var "POSTGRES_HOST"
check_config_var "POSTGRES_PORT"
check_config_var "CONFLUENCE_DB"
}
function backup_db {
[ -d "${CONFLUENCE_BACKUP_DB_TMP}" ] && rm -r "${CONFLUENCE_BACKUP_DB_TMP}"
mkdir -p "${CONFLUENCE_BACKUP_DB_TMP}"
run pg_dump -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} ${PG_PARALLEL} -Fd -d "${CONFLUENCE_DB}" ${PG_SNAPSHOT_OPT} -f "${CONFLUENCE_BACKUP_DB_TMP}"
perform_rsync_compress_db
}
function prepare_restore_db {
check_config_var "POSTGRES_USERNAME"
check_config_var "POSTGRES_HOST"
check_config_var "POSTGRES_PORT"
check_var "CONFLUENCE_RESTORE_DB"
if run psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "${CONFLUENCE_DB}" -c "" 2>/dev/null; then
local table_count=$(psql -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} -d "${CONFLUENCE_DB}" -tqc '\dt' | grep -v "^$" | wc -l)
if [ "${table_count}" -gt 0 ]; then
error "Database '${CONFLUENCE_DB}' already exists and contains ${table_count} tables"
else
error "Database '${CONFLUENCE_DB}' already exists"
fi
bail "Cannot restore over existing database '${CONFLUENCE_DB}', please ensure it does not exist before restoring"
fi
}
function restore_db {
# The backup is created with "pg_dump -Fd" (directory format), so restore with the matching format
run pg_restore -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" --port=${POSTGRES_PORT} ${PG_PARALLEL} \
-d postgres -C -Fd "${CONFLUENCE_RESTORE_DB}"
}
function cleanup_incomplete_db_backup {
info "Cleaning up DB backup created as part of failed/incomplete backup"
rm -r "${CONFLUENCE_BACKUP_DB_TMP}"
}
function cleanup_old_db_backups {
# Not required as old backups with this strategy are typically cleaned up in the archiving strategy.
no_op
}
function perform_rsync_compress_db {
[ -d "${CONFLUENCE_BACKUP_DB}" ] && rm -r "${CONFLUENCE_BACKUP_DB}"
mkdir -p "${CONFLUENCE_BACKUP_DB}"
rsync --remove-source-files -h "${CONFLUENCE_BACKUP_DB_TMP}"/* "${CONFLUENCE_BACKUP_DB}/"
}

function prepare_backup_disk {
## Bit from the rsync script that does some sanity checks and sets up Data Store backups if present
check_config_var "CONFLUENCE_BACKUP_HOME"
check_config_var "CONFLUENCE_HOME"
# CONFLUENCE_BACKUP_DATA_STORES needs to be set if any data stores are configured
if [ -n "${CONFLUENCE_DATA_STORES}" ]; then
check_var "CONFLUENCE_BACKUP_DATA_STORES"
fi
##
# Create the mount point if it does not already exist
if [ ! -d "${CONFLUENCE_HOME_SNAP}" ]; then
mkdir -p "${CONFLUENCE_HOME_SNAP}"
fi
# Confirm there are no existing snapshots, exit if present
if [ $(lvs | grep -ic snap) -gt 0 ]; then
echo "Snapshot already exists. Stopping backup"
exit 1
fi
}
function backup_disk {
# take lvm snapshot for backup
volume=$(df | grep data1 | cut -d" " -f1)
snapshot_name=snapbackup_$(date +"%m-%d_%H%M")
lvcreate --size 4G --snapshot --name $snapshot_name $volume
# Mount snapshot before rsync
vg=$(lvs | grep $snapshot_name | cut -d" " -f4)
snap_volume=/dev/$vg/$snapshot_name
mount -onouuid,ro $snap_volume ${CONFLUENCE_HOME_SNAP}
# rsync home from snapshot
# perform_rsync_home_directory
# perform_rsync_data_stores
perform_compress_data
perform_rsync_compress_data
# unmount and remove lvm snapshot
umount ${CONFLUENCE_HOME_SNAP}
lvremove -f $snap_volume
}
## Functions copied from the rsync script since we are essentially using rsync but utilizing an lvm snap as the source
## instead. Changed CONFLUENCE_HOME to CONFLUENCE_HOME_SNAP where appropriate and tagged each change in case we ever
## need to redo. We still want restores to work the same way (from the archive file), so nothing changes there,
## only in the backup rsyncs - JM
function perform_compress_data {
day_new_format=$(date +%Y-%m-%d)
tar -czPf "${CONFLUENCE_TMP}/${day_new_format}.tgz" "${CONFLUENCE_HOME_SNAP}"
}
function perform_rsync_compress_data {
rsync --remove-source-files -h "${CONFLUENCE_TMP}/${day_new_format}.tgz" "${CONFLUENCE_BACKUP_HOME}/"
}
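The compress-then-move pattern above (tar the snapshot into a temp area, then move the archive into the backup destination, removing the source) can be sketched end-to-end with throwaway directories. All paths here are illustrative stand-ins for `CONFLUENCE_HOME_SNAP`, `CONFLUENCE_TMP` and `CONFLUENCE_BACKUP_HOME`, and `mv` stands in for `rsync --remove-source-files` so the sketch depends only on `tar`:

```shell
#!/bin/bash
set -e
# Illustrative stand-ins for CONFLUENCE_HOME_SNAP / CONFLUENCE_TMP / CONFLUENCE_BACKUP_HOME
work=$(mktemp -d)
src="${work}/home-snapshot"; tmp="${work}/tmp"; dest="${work}/backup"
mkdir -p "${src}" "${tmp}" "${dest}"
echo "page data" > "${src}/content.txt"

# Compress the snapshot into a date-stamped archive in the temp area
day_new_format=$(date +%Y-%m-%d)
tar -czPf "${tmp}/${day_new_format}.tgz" "${src}"

# Move (not copy) the archive into the backup destination,
# mirroring rsync --remove-source-files
mv "${tmp}/${day_new_format}.tgz" "${dest}/"
```

After this runs, the temp area is empty and the destination holds a single `YYYY-MM-DD.tgz` archive.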
function prepare_restore_disk {
lvm.sh
function prepare_backup_disk {
## Bit from the rsync script that does some sanity checks and sets up Data Store backups if present
check_config_var "CONFLUENCE_BACKUP_HOME"
check_config_var "CONFLUENCE_HOME"
# CONFLUENCE_BACKUP_DATA_STORES needs to be set if any data stores are configured
if [ -n "${CONFLUENCE_DATA_STORES}" ]; then
check_var "CONFLUENCE_BACKUP_DATA_STORES"
fi
##
# Confirm there are no existing snapshots, exit if present
if [ $(lvs | grep -ic snap) -gt 0 ]; then
echo "Snapshot already exists. Stopping backup"
exit 1
fi
}
function backup_disk {
# take lvm snapshot for backup
volume=$(df | grep data1 | cut -d" " -f1)
snapshot_name=snapbackup_$(date +"%m-%d_%H%M")
lvcreate --size 4G --snapshot --name $snapshot_name $volume
# Mount snapshot before rsync
vg=$(lvs | grep $snapshot_name | cut -d" " -f4)
snap_volume=/dev/$vg/$snapshot_name
mount -onouuid,ro $snap_volume ${CONFLUENCE_HOME_SNAP}
# Create new variable to define source of backup as snapshot
# CONFLUENCE_HOME_SNAP=/data2/snapshot/CONFLUENCE-home/
# rsync home from snapshot
# perform_rsync_home_directory
# perform_rsync_data_stores
perform_compress_data
perform_rsync_compress_data
# unmount and remove lvm snapshot
umount ${CONFLUENCE_HOME_SNAP}
lvremove -f $snap_volume
}
function perform_compress_data {
day_new_format=$(date +%Y-%m-%d)
tar -czPf $CONFLUENCE_TMP/$day_new_format.tgz $CONFLUENCE_HOME_SNAP
}
function perform_rsync_compress_data {
rsync --remove-source-files -h $CONFLUENCE_TMP/$day_new_format.tgz $CONFLUENCE_BACKUP_HOME/
}

#!/bin/bash
# -------------------------------------------------------------------------------------
# The Disaster Recovery script to promote a standby Bitbucket Data Center database server.
#
# Ensure you are using this script in accordance with the following document:
# https://confluence.atlassian.com/display/BitbucketServer/Disaster+recovery+guide+for+Bitbucket+Data+Center
#
# It requires the following configuration file:
# bitbucket.diy-backup.vars.sh
# which can be copied from bitbucket.diy-backup.vars.sh.example and customized.
# -------------------------------------------------------------------------------------
# Ensure the script terminates whenever a required operation encounters an error
set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
source "${SCRIPT_DIR}/common.sh"
if [ "${INSTANCE_TYPE}" = "bitbucket-mesh" ]; then
# Mesh nodes don't run with an external database, so a standby database isn't required during disaster recovery
STANDBY_DATABASE_TYPE="none"
fi
source_disaster_recovery_database_strategy
##########################################################
promote_db

#!/bin/bash
# -------------------------------------------------------------------------------------
# The Disaster Recovery script to promote a standby Bitbucket Data Center file server.
#
# Ensure you are using this script in accordance with the following document:
# https://confluence.atlassian.com/display/BitbucketServer/Disaster+recovery+guide+for+Bitbucket+Data+Center
#
# It requires the following configuration file:
# bitbucket.diy-backup.vars.sh
# which can be copied from bitbucket.diy-backup.vars.sh.example and customized.
# -------------------------------------------------------------------------------------
# Ensure the script terminates whenever a required operation encounters an error
set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
source "${SCRIPT_DIR}/common.sh"
source_disaster_recovery_disk_strategy
##########################################################
promote_home
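Both DR scripts rely on `set -e` to stop at the first failing command. When a script also creates temporary state (snapshots, mounts, scratch directories), pairing `set -e` with an `EXIT` trap guarantees cleanup runs whether the script finishes normally or aborts mid-way. A self-contained sketch of that pattern:

```shell
#!/bin/bash
# Abort on the first failing command, like the DR scripts above
set -e

scratch=$(mktemp -d)
cleanup() {
    # Runs on normal exit AND when set -e aborts the script
    rm -rf "${scratch}"
}
trap cleanup EXIT

echo "working in ${scratch}"
# ... commands that may fail; cleanup still runs either way ...
```

Without the trap, an early abort under `set -e` would leave the scratch state behind, which for these scripts could mean a stale LVM snapshot blocking the next backup run.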

# -------------------------------------------------------------------------------------
# The DIY restore script.
#
# This script is invoked to perform a restore of a Confluence Backup.
# -------------------------------------------------------------------------------------
# Ensure the script terminates whenever a required operation encounters an error
set -e
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
source "${SCRIPT_DIR}/common.sh"
source "${SCRIPT_DIR}/func.sh"
source "${SCRIPT_DIR}/vars.sh"
source "${SCRIPT_DIR}/disk-lvm.sh"
source "${SCRIPT_DIR}/database-postgresql.sh"
source_archive_strategy
source_database_strategy
source_disk_strategy
# # Ensure we know which user:group things should be owned as
# if [ -z "${CONFLUENCE_UID}" -o -z "${CONFLUENCE_GID}" ]; then
# error "Both CONFLUENCE_UID and CONFLUENCE_GID must be set in '${BACKUP_VARS_FILE}'"
# bail "See 'vars.sh' for the defaults."
# fi
check_command "jq"

function check_command {
type -P "$1" &> /dev/null || bail "Unable to find $1, please install it and run this script again"
}
# Log a debug message to the console if CONFLUENCE_VERBOSE_BACKUP=true
function debug {
if [ "${CONFLUENCE_VERBOSE_BACKUP}" = "true" ]; then
print "$(script_ctx)[$(hostname)] DEBUG: $*"
fi
}
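The `debug` pattern above (a log gate controlled by a single environment variable) is easy to exercise in isolation. In this sketch, `VERBOSE` is a stand-in name for `CONFLUENCE_VERBOSE_BACKUP`:

```shell
#!/bin/bash
# Print debug output only when VERBOSE=true (stand-in for CONFLUENCE_VERBOSE_BACKUP)
debug_msg() {
    if [ "${VERBOSE}" = "true" ]; then
        echo "[$(hostname)] DEBUG: $*"
    fi
}

VERBOSE=true  debug_msg "replication check passed"   # prints the message
VERBOSE=false debug_msg "this line is suppressed"    # prints nothing
```

Note the assignment-prefix form (`VERBOSE=true debug_msg ...`) scopes the variable to that one invocation, which is convenient for testing both branches without exporting anything.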
function error {
# Set the following to have log statements print contextual information
echo "$(script_ctx)[$(hostname)] ERROR: $*" 1>&2
}
# Log an info message to the console
function info {
# Set the following to have log statements print contextual information
print "$(script_ctx)[$(hostname)] INFO: $*"
}
# Checks if a variable is zero length, if so it prints the supplied error message and bails
# Log then execute the provided command
function run {
if [ "${CONFLUENCE_VERBOSE_BACKUP}" = "true" ]; then
local cmdline=
for arg in "$@"; do
case "${arg}" in
# Log a success message to the console
function success {
print "[$(hostname)] SUCC: $*"
}

CONFLUENCE_RESTORE_ROOT=/tmp/confluence-restore
CONFLUENCE_HOME_SNAP=/data1/snapshot
CONFLUENCE_HOME_SNAP_DATA=${CONFLUENCE_HOME_SNAP}/confluence
CONFLUENCE_VERBOSE_BACKUP="true"
#db
CONFLUENCE_BACKUP_DB=${CONFLUENCE_BACKUP_ROOT}/${DATE_TIMESTAMP}/CONFLUENCE-db