# The form data below was edited by tom_tyler
# Perforce Workshop Jobs
#
#  Job:           The job name. 'new' generates a sequenced job number.
#
#  Status:        Job status; required field.  There is no enforced or
#                 promoted workflow for transition of jobs from one
#                 status to another, just a set of job status values
#                 for users to apply as they see fit.  Possible values:
#
#                 open - Issue is available to be worked on.
#
#                 inprogress - Active development is in progress.
#
#                 blocked - Issue cannot be implemented for some reason.
#
#                 fixed - Fixed, optional status to use before closed.
#                 
#                 closed - Issue has been dealt with definitively.
#
#                 punted - Decision made not to address the issue,
#                    possibly not ever.
#
#                 suspended - Decision made not to address the issue
#                    in the immediate future, but noting that it may
#                    have some merit and may be revisited later.
#
#                 duplicate - Duplicate of another issue.
#
#                 obsolete - The need behind the request has been
#                    overcome by events.
#
#  Project:       The project this job is for. Required.
#
#  Severity:      [A/B/C] (A is highest)  Required.
#
#  ReportedBy:    The user who created the job. Can be changed.
#
#  ReportedDate:  The date the job was created.  Automatic.
#
#  ModifiedBy:    The user who last modified this job. Automatic.
#
#  ModifiedDate:  The date this job was last modified. Automatic.
#
#  OwnedBy:       The owner, responsible for doing the job. Optional.
#
#  Description:   Description of the job.  Required.
#
#  DevNotes:      Developer's comments.  Optional.  Can be used to
#                 explain a status, e.g. for blocked, punted,
#                 obsolete or duplicate jobs.  May also provide
#                 additional information such as the earliest release
#                 in which a bug is known to exist.
#
#  Component:     Projects may use this optional field to indicate
#                 which component of the project a given job is
#                 associated with.
#
#                 For the SDP, the list of components is defined in:
#                 //guest/perforce_software/sdp/tools/components.txt
#
#  Type:          Type of job [Bug/Feature/Problem].  Required.
#                 Feature and Bug are common terms.
#                 A Problem is a suspected bug, or one where it is
#                 not yet clear exactly what is broken.
#
#  Release:       Release in which job is intended to be fixed.

Job:	SDP-721

Status:	open

Project:	perforce-software-sdp

Severity:	C

ReportedBy:	lee_marzke

ReportedDate:	2021/12/03 09:24:38

ModifiedBy:	tom_tyler

ModifiedDate:	2022/02/04 07:12:11

OwnedBy:	tom_tyler

Description:
	Document steps for installing SDP on a replica with NFS-shared /p4/common.
	
	This includes:
	* Getting /hxdepots/sdp correct.
	* Getting /hxdepots/p4/common correct.
	* Getting /p4/N (N=instance name) correct, containing
	  - /p4/N/bin
	  - /p4/N symlinks for root, offline_db, tmp, logs, depots, checkpoints, etc.
	* Creating /etc/systemd/system/*.service files (a sketch follows this list).
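	
	As a rough sketch of the target layout (hypothetical: instance 1,
	single /hxdepots, /hxmetadata, and /hxlogs volumes; exact names and
	symlink targets vary per site and SDP version):
	
	  mkdir -p /p4/1/bin
	  ln -s /hxmetadata/p4/1/root        /p4/1/root
	  ln -s /hxmetadata/p4/1/offline_db  /p4/1/offline_db
	  ln -s /hxlogs/p4/1/logs            /p4/1/logs
	  ln -s /hxlogs/p4/1/tmp             /p4/1/tmp
	  ln -s /hxdepots/p4/1/depots        /p4/1/depots
	  ln -s /hxdepots/p4/1/checkpoints   /p4/1/checkpoints
	  ln -s /hxdepots/p4/common          /p4/common
	
	And a minimal p4d_1.service along the lines of the unit files SDP
	provides (illustrative only; assumes the standard SDP init script
	for instance 1 and an OS user named perforce):
	
	  [Unit]
	  Description=Helix Server (p4d) instance 1
	  After=network.target
	
	  [Service]
	  Type=forking
	  User=perforce
	  ExecStart=/p4/1/bin/p4d_1_init start
	  ExecStop=/p4/1/bin/p4d_1_init stop
	
	  [Install]
	  WantedBy=multi-user.target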
	
	A bit of a preview of what I intend to document:
	
	There are also gotchas to be aware of, of the sort one would expect with NFS sharing in general.  For example, if you're on the backup server and you edit a config file in an NFS-shared directory, you have just edited the same file used by the primary server!  Even experienced admins can forget the implications of NFS sharing, as they are a bit counter-intuitive if your experience is mainly with fully duplicated environments.
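	
	To make that concrete (hypothetical host names, and assuming the
	standard SDP instance vars file): because /p4/common lives on a
	single NFS export mounted on both machines, the "two" files are in
	fact one file:
	
	  # Same export, same file, same inode on both hosts:
	  ssh p4primary stat -c %i /p4/common/config/p4_1.vars
	  ssh p4backup  stat -c %i /p4/common/config/p4_1.vars
	
	An edit made on p4backup is live on p4primary the moment it is
	saved.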
	
	NFS sharing is suitable for High Availability (HA) solutions only, not Disaster Recovery (DR) solutions, as DR solutions imply distances over which NFS sharing is not practical (and usually not possible).
	
	Docs will also discuss some of the pros/cons.
	
	Pros:
	* You typically get other benefits from NFS hardware (e.g. snapshot capability).
	* HA failover is simpler because you have zero chance of commits that didn't replicate.
	* You don't need an extra full copy of whatever goes on /hxdepots (versioned files, checkpoints).
	
	Cons:
	* With NFS, there is now a single point of failure you didn't have before: the NIC on the NFS device. That risk is usually mitigated to some extent, as NFS devices are generally deemed "failure tolerant" because the vendors who produce them (e.g. NetApp) invest a lot to make them so.
	* If you do suffer a failure of the NFS device, or of the data on it, that cannot be recovered easily, you must do a DR failover rather than an HA failover.  (For many sites, HA failover is more likely to work than DR failover, because DR involves new network paths from users to servers, servers to integrated systems, and other complexities that should be accounted for in a comprehensive failover plan).
	* NFS environments have a slightly higher incidence of "operator error" until admins get comfortable dealing with the aforementioned counter-intuitive implications of NFS sharing.

Component:	doc

Type:	Feature