#!/bin/bash
#==============================================================================
# Copyright and license info is available in the LICENSE file included with
# the Server Deployment Package (SDP), and also available online:
# https://swarm.workshop.perforce.com/projects/perforce-software-sdp/view/main/LICENSE
#------------------------------------------------------------------------------
# This script is designed to rebuild an Edge server from a seed checkpoint from
# the master WHILE KEEPING THE EXISTING EDGE SPECIFIC DATA.
#
# You have to first copy the seed checkpoint from the master, created with
# edge_dump.sh, to the edge server before running this script. (Alternately,
# a full checkpoint from the master can be used so long as the edge server
# spec does not specify any filtering, e.g. does not use ArchiveDataFilter.)
# Then run this script on the Edge server with the instance number and full
# path of the master seed checkpoint as parameters.
#
# Run example:
# ./recover_edge.sh 1 /p4/1/checkpoints/p4_1.edge_syd.seed.ckp.9188.gz

function usage () {
    echo -e "Usage:\n\t${0##*/} <SDP_Instance> <EdgeSeedCheckpoint>\n"
    exit 1
}

[[ $# -ne 2 || ${1:-Unset} == -h ]] && usage

export SDP_INSTANCE=${SDP_INSTANCE:-Undefined}
export SDP_INSTANCE=${1:-$SDP_INSTANCE}
if [[ $SDP_INSTANCE == Undefined ]]; then
    echo -e "Usage Error: Instance parameter not supplied."
    usage
fi

declare EdgeSeedCheckpoint=${2:-Unset}
if [[ $EdgeSeedCheckpoint == Unset ]]; then
    echo -e "Usage Error: EdgeSeedCheckpoint parameter not supplied. Usage:\n\t${0##*/} <SDP_Instance> <EdgeSeedCheckpoint>\n"
    echo "You must supply the full path of the master seed checkpoint as the second parameter to this script."
    exit 1
fi

source /p4/common/bin/p4_vars $SDP_INSTANCE
source /p4/common/bin/backup_functions.sh

LOGFILE="$LOGS/recover_edge.$(date +'%Y%m%d-%H%M').log"

declare ExcludedTables=db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex
declare CheckpointTables=$ExcludedTables,db.view,db.label,db.revsx,db.revux
declare Cmd=
declare MasterCheckpointsDir=${CHECKPOINTS}
declare EdgeCheckpointsDir=${CHECKPOINTS}.${SERVERID#p4d_}
declare EdgeDump="$EdgeCheckpointsDir/${P4SERVER}.$(date +'%Y%m%d-%H%M').edge_dump"
declare NewEdgeCheckpoint=

######### Start of Script ##########

echo "Processing. This may take a while depending on checkpoint duration."
echo "Log file is: $LOGFILE"

check_vars
set_vars
ckp_running

log "Remove offline db"
rm -f $OFFLINE_DB/db.* > $LOGFILE 2>&1

# With -K filter out the various Edge-specific tables to be replaced with
# current live versions.
log "Recover checkpoint from master into offline_db skipping tables not used on the edge."
Cmd="$P4DBIN -r $OFFLINE_DB -K $ExcludedTables -z -jr $EdgeSeedCheckpoint"
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1 || die "Failed to recover from $EdgeSeedCheckpoint."

log "Stopping the edge server."
$RC stop >> $LOGFILE 2>&1

# With -k we filter and only checkpoint the specified tables from the current live Edge DB.
Cmd="$P4DBIN -r $P4ROOT -k $CheckpointTables -jd $EdgeDump"
log "Creating a dump of the edge specific data from P4ROOT."
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1 ||\
    die "Failed to dump to $EdgeDump"

log "Recover the edge dump into offline_db"
Cmd="$P4DBIN -r $OFFLINE_DB -jr $EdgeDump"
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1 ||\
    die "Failed to recover from $EdgeDump"

log "Reset the replication state and clear the P4ROOT folder db files."
rm -f $P4ROOT/db.* >> $LOGFILE 2>&1
rm -f $P4ROOT/state >> $LOGFILE 2>&1
rm -f $P4ROOT/rdb.lbr >> $LOGFILE 2>&1
rm -f $P4JOURNAL >> $LOGFILE 2>&1

log "Move the rebuilt database to P4ROOT"
mv $OFFLINE_DB/db.* $P4ROOT/. >> $LOGFILE 2>&1

log "Start the edge server back up."
$RC start >> $LOGFILE 2>&1

log "Recreate the offline_db"
Cmd="$P4DBIN -r $OFFLINE_DB -K $ExcludedTables -jr -z $EdgeSeedCheckpoint"
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1

Cmd="$P4DBIN -r $OFFLINE_DB -jr $EdgeDump"
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1

log "Create a new edge checkpoint from offline_db"
get_offline_journal_num
NewEdgeCheckpoint="$EdgeCheckpointsDir/${P4SERVER}.${SERVERID#p4d_}.ckp.$((OFFLINEJNLNUM+1)).gz"
Cmd="$P4DBIN -r $OFFLINE_DB -jd -z $NewEdgeCheckpoint"
log "Running: $Cmd"
$Cmd >> $LOGFILE 2>&1

ckp_complete

log "End $P4SERVER Recover Edge"

mail_log_file "$HOSTNAME $P4SERVER Recover Edge log."
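#------------------------------------------------------------------------------
# Illustrative end-to-end example of the workflow described in the header
# comment above (a sketch only; the edge ServerID, the edge hostname, and the
# exact edge_dump.sh arguments shown are assumptions to be adapted to your
# topology, not part of this script):
#
#   # On the master/commit server, create an edge seed checkpoint for the edge:
#   /p4/common/bin/edge_dump.sh 1 p4d_edge_syd
#
#   # Copy the resulting seed checkpoint to the edge server host, e.g.:
#   scp /p4/1/checkpoints/p4_1.edge_syd.seed.ckp.9188.gz edge-host:/p4/1/checkpoints/
#
#   # On the edge server, run this script with the instance and seed path:
#   /p4/common/bin/recover_edge.sh 1 /p4/1/checkpoints/p4_1.edge_syd.seed.ckp.9188.gz
#------------------------------------------------------------------------------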
Change history:

#1 | 23960 | noe_gonzalez
    "Forking branch Dev of perforce-software-sdp to noe_gonzalez-sdp."

//guest/perforce_software/sdp/dev/Server/Unix/p4/common/bin/recover_edge.sh

#10 | 23297 | C. Thomas Tyler
    Added safety checks to avoid running commands that will certainly fail in
    upgrade.sh. Generally, /p4/common/bin will be the same on all hosts in a
    Helix topology. However, on any given machine, the /p4/<N>/bin/<EXE>_<N>_init
    scripts should exist only for executables that run on that machine. This
    change to upgrade.sh should work on machines even where only a proxy or
    broker runs. Also, it will not generate errors in cases where there is,
    say, a p4p_N_bin symlink in /p4/common/bin but no /p4/N/bin/p4p_N_init
    script, which will be a common situation since /p4/common/bin will contain
    all executables used anywhere, while /p4/N/bin is host-specific.

    Also made cosmetic fixes and style convergence changes. In dump_edge.sh
    and recover_edge_dump.sh, just fixed cosmetic typos.

#9 | 23266 | C. Thomas Tyler
    Fixes and Enhancements:
    * Enabled daily_checkpoint.sh to operate on edge servers, to keep
      /p4/N/offline_db current on those hosts for site-local recovery without
      requiring a site-local replica (though having a site-local replica can
      still be useful).
    * Disabled live_checkpoint.sh for edge servers.
    * More fully support topologies using edge servers, in both geographically
      distributed and horizontal scaling "workspace server" solutions.
    * Fix broken EDGESERVER value definition.
    * Modified the name of the SDP counter that gets set when a checkpoint is
      taken to incorporate ServerID, so now the counter name will look like
      lastSDPCheckpoint.master.1 or lastSDPCheckpoint.p4d_edge_sfo, rather than
      just lastSDPCheckpoint. There will be multiple such counters in a
      topology that uses edge servers, and/or which takes checkpoints on
      replicas.
    * Added comments for all functions.

    For the master server, journalPrefix remains: /p4/N/checkpoints/p4_N
    The /p4/N/checkpoints folder is reserved for writing by the master/commit
    server only.

    For non-standby (possibly filtered) replicas and edge servers,
    journalPrefix is: /p4/N/checkpoints.<ShortServerID>/p4_N.<ShortServerID>

    Here, ShortServerID is just the ServerID with the 'p4d_' prefix trimmed,
    since it is redundant in this context. See mkrep.sh, which enshrines a
    ServerID (server spec) naming standard, with values like 'p4d_fr_bos'
    (forwarding replica in Boston) and 'p4d_edge_blr' (edge server in
    Bangalore). So the journalPrefix for the p4d_edge_bos replica would be:
    /p4/N/checkpoints.edge_bos/p4_N.edge_bos

    For "standby" (aka journalcopy) replicas, journalPrefix is set to
    /p4/N/journals.rep, which is written to the $LOGS volume, due to the
    nature of standby replicas using journalPrefix to write active server
    logs to pre-rotated journals.

    Some take-aways to be updated in docs:
    * The /p4/N/checkpoints folder must be reserved for checkpoints that
      originate on the master. It should be safe to rsync this folder (with
      --delete if desired) to any replica or edge server. This is consistent
      with the current SDP.
    * I want to change 'journals.rep' to 'checkpoints.<ShortServerID>' for
      non-standby replicas, to ensure that checkpoints and journals taken on
      those hosts are written to a volume where they are backed up.
    * In sites with multiple edge servers, some sharing archive files
      ("workspace servers"), multiple edge servers will share the same SAN.
      So we want one checkpoints dir per ServerID, and we want that dir to be
      on the /hxdepots volume.

    Note that the journalPrefix for replicas was a fixed /p4/N/journals.rep.
    This was on the /hxlogs volume - a presumably fast-for-writes volume, but
    typically NOT backed up and not very large. This change puts it under
    /p4/N/checkpoints.* for edge servers and non-standby replicas, ensuring
    those servers can generate checkpoints to a location that is backed up and
    has plenty of storage capacity. For standby replicas only (which cannot be
    filtered), the journalPrefix remains /p4/N/journals.rep on the /hxlogs
    volume.

    (See the illustrative journalPrefix sketch after this change history.)

#8 | 22889 | Russell C. Jackson (Rusty)
    Enhanced to mark when it is running so that a checkpoint doesn't stomp on
    the offline_db, and also made it just go ahead and create the correct
    checkpoint name.

#7 | 21280 | Russell C. Jackson (Rusty)
    Added standard logging and use of SDP_INSTANCE.

#6 | 19113 | Russell C. Jackson (Rusty)
    Changed the name of daily_backup.sh to daily_checkpoint.sh.
    Changed the name of weekly_backup.sh to recreate_db_checkpoint.sh.
    Updated crontabs with the new names, and changed to run
    recreate_db_checkpoint on the 1st Sat. of Jan. and July. For most
    companies, this is a better practice than recreating weekly, per
    discussion with Anton.
    Removed the Solaris crontab since Solaris is pretty much dead, and we
    don't test on it.
    Updated docs to reflect the name changes, and did a little cleanup of
    other sections while I was in there.

#5 | 17293 | Robert Cowham
    Clarifications in comments - no functional change.

#4 | 17219 | C. Thomas Tyler
    Routine Merge Down to dev from main.

#3 | 16029 | C. Thomas Tyler
    Routine merge to dev from main using:
    p4 merge -b perforce_software-sdp-dev

#2 | 15778 | C. Thomas Tyler
    Routine Merge Down to dev from main.

#1 | 15753 | C. Thomas Tyler
    Routine Merge Down to dev from main.

//guest/perforce_software/sdp/main/Server/Unix/p4/common/bin/recover_edge.sh

#1 | 15716 | Russell C. Jackson (Rusty)
    Script for rebuilding an Edge server.
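
Illustrative sketch of the checkpoints-directory and journalPrefix naming
convention described in change 23266 above, and used by this script's
EdgeCheckpointsDir variable (a sketch only; the instance number and ServerID
value below are hypothetical examples, not taken from this script):

    SDP_INSTANCE=1
    SERVERID=p4d_edge_bos              # edge ServerID per the mkrep.sh naming standard
    ShortServerID=${SERVERID#p4d_}     # trim the redundant 'p4d_' prefix -> edge_bos

    # Master/commit server journalPrefix:
    #   /p4/1/checkpoints/p4_1
    # Edge server and non-standby replica journalPrefix:
    #   /p4/1/checkpoints.${ShortServerID}/p4_1.${ShortServerID}
    #   i.e. /p4/1/checkpoints.edge_bos/p4_1.edge_bos
    # Standby (journalcopy) replica journalPrefix:
    #   /p4/1/journals.rep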