Thursday, June 25, 2015

Compiling the writable mongo_fdw extension on a binary-format PostgreSQL installation

A short post on enabling the writable mongo_fdw extension in PostgreSQL 9.4. PostgreSQL provides a powerful feature called Foreign Data Wrappers (FDW), which allows DBAs to connect to other data sources from within PostgreSQL. The Foreign Data Wrapper implementation is based on SQL/MED, supported since PostgreSQL 9.1, which means we can now access remote databases through PostgreSQL seamlessly. Today a variety of FDWs are available; in this post, we will compile the latest version of the writable FDW "mongo_fdw" to access MongoDB.

The latest mongo_fdw extension is based on mongo-c-driver and libbson. To build mongo_fdw, we first need to compile all the dependencies required by the extension. Below is the step-by-step execution on my CentOS 7 (64-bit) machine with PostgreSQL 9.4 installed.

Step 1. First, install the dependency packages required by mongo-c-driver and libbson.
yum install git automake autoconf libtool gcc pkgconfig
Step 2. Clone the mongo_fdw repository from GitHub.
git clone https://github.com/EnterpriseDB/mongo_fdw.git
Step 3. Pre-compilation requires pkgconfig/pkg-config (installed in Step 1) and the PostgreSQL pg_config location set in the PATH.
[root@localhost ~]# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
[root@localhost ~]# export PATH=/opt/PostgreSQL/9.4/bin:$PATH

[root@localhost mongo_fdw]# type pg_config
pg_config is /opt/PostgreSQL/9.4/bin/pg_config
Step 4. mongo_fdw can be compiled manually or with the help of the auto-compilation script (autogen.sh) provided in the bundle. Here, I will use the auto-compilation script, which downloads and installs the required mongo-c-driver and libbson libraries in the default location (/usr/local/lib). For more details on the compilation script, refer to the documentation here.
cd mongo_fdw/
./autogen.sh --with-master
make 
make install
After compilation, we can see the files created in the PostgreSQL home directory.
-bash-4.2$ find $PWD -name "mongo*"
/opt/PostgreSQL/9.4/lib/postgresql/mongo_fdw.so
/opt/PostgreSQL/9.4/share/postgresql/extension/mongo_fdw.control
/opt/PostgreSQL/9.4/share/postgresql/extension/mongo_fdw--1.0.sql
Fine, now we can create the extension in the database.
-bash-4.2$ psql
Password:
psql.bin (9.4.4)
Type "help" for help.

postgres=# create extension mongo_fdw;
ERROR:  could not load library "/opt/PostgreSQL/9.4/lib/postgresql/mongo_fdw.so": libmongoc-1.0.so.0: cannot open shared object file: No such file or directory
Oops... it seems I forgot to set the library path for the newly created mongo_fdw.so and the MongoDB libraries. To pick up the libraries, the PostgreSQL server should be restarted after setting the library path.
-bash-4.2$ export LD_LIBRARY_PATH=/opt/PostgreSQL/9.4/lib:/usr/local/lib
-bash-4.2$ /opt/PostgreSQL/9.4/bin/pg_ctl -D /opt/PostgreSQL/9.4/data/ start
server starting
Hopefully, this time there won't be any errors.
-bash-4.2$ psql
Password:
psql.bin (9.4.4)
Type "help" for help.

postgres=# create extension mongo_fdw;
CREATE EXTENSION

postgres=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 mongo_fdw | 1.0     | public     | foreign data wrapper for MongoDB access
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)
That's cool... we have the mongo_fdw extension created in the PostgreSQL server.
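A side note: exporting LD_LIBRARY_PATH this way only lasts for the current shell session. If you want the mongo-c-driver/libbson libraries in /usr/local/lib to be found across restarts, one common approach is to register the path with the dynamic linker (a sketch, assuming root access; the .conf file name under /etc/ld.so.conf.d/ is just a placeholder):
# run as root; any file name under /etc/ld.so.conf.d/ works
echo "/usr/local/lib" > /etc/ld.so.conf.d/mongo-c-driver.conf
ldconfig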

To play with the extension, you can refer to the documentation [1],[2].
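For a quick feel of it, the usual FDW objects look roughly like this. This is only a sketch based on the mongo_fdw README: the address, port, database, collection and column names are placeholders to adjust for your MongoDB instance.
-- sketch only: values below are placeholders, see the mongo_fdw README for the full option list
CREATE SERVER mongo_server
       FOREIGN DATA WRAPPER mongo_fdw
       OPTIONS (address '127.0.0.1', port '27017');

CREATE USER MAPPING FOR postgres SERVER mongo_server;

CREATE FOREIGN TABLE warehouse (
       _id NAME,
       warehouse_id INT,
       warehouse_name TEXT
) SERVER mongo_server
  OPTIONS (database 'db', collection 'warehouse');

-- being the writable flavour of the FDW, INSERT/UPDATE/DELETE are supported as well as SELECT
SELECT * FROM warehouse;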

--Raghav

Tuesday, May 26, 2015

Compiling the pg_repack extension on a binary-format PostgreSQL installation

This post is about compiling the pg_repack extension against a binary-format PostgreSQL installation. As most of you know, pg_repack is one of the well-known PostgreSQL extensions; it is especially used for reclaiming space (bloat) ONLINE without holding an EXCLUSIVE LOCK on tables/indexes.

To enable the pg_repack extension in a PostgreSQL database, it has to be compiled from source. That is quite easy and simple on any installed variant (source, RPM, binary) of PostgreSQL; however, it is slightly different with the binary format of PostgreSQL [One Click Installer], since that is a pre-built binary bundle shipped with its own dependency libraries. Let's compile and see.

On a CentOS 7 virtual machine, I have installed the binary format of PostgreSQL 9.4 (download link), with its home directory at "/opt/PostgreSQL/9.4/". Next, we need to download the pg_repack source from its official site.
[root@localhost ~]# git clone https://github.com/reorg/pg_repack.git
Before compiling, pg_config of PostgreSQL 9.4 should be set in the PATH.
[root@localhost pg_repack]# export PATH=/opt/PostgreSQL/9.4/bin:$PATH
[root@localhost pg_repack]# type pg_config
pg_config is /opt/PostgreSQL/9.4/bin/pg_config
Now we are good to execute the source installation commands "make" & "make install". Let's execute "make":
[root@localhost ~]# cd pg_repack/
[root@localhost pg_repack]# make
make[1]: Entering directory `/root/pg_repack/bin'
....
....
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 pg_repack.o pgut/pgut.o pgut/pgut-fe.o -L/opt/PostgreSQL/9.4/lib -lpq -L/opt/PostgreSQL/9.4/lib -L/opt/local/Current/lib -Wl,--as-needed -Wl,-rpath,'/opt/PostgreSQL/9.4/lib',--enable-new-dtags  -lpgcommon -lpgport -lssl -lcrypto -lgssapi_krb5 -lz -ledit -lrt -lcrypt -ldl -lm -o pg_repack
/usr/bin/ld: cannot find -ledit
collect2: ld returned 1 exit status
make[1]: *** [pg_repack] Error 1
make[1]: Leaving directory `/root/pg_repack/bin'
make: *** [all] Error 2
Oops 1...!! It seems there's an error related to the -ledit (libedit) library, which the linker cannot find. Let's run the same lookup "ld -ledit" in verbose mode (-verbose) for more information on what "make" tried and failed to do.
[root@localhost pg_repack]# ld -ledit -verbose
GNU ld version 2.20.51.0.2-5.42.el6 20100205
...
... 
...
==================================================
attempt to open /usr/x86_64-redhat-linux/lib64/libedit.so failed
attempt to open /usr/x86_64-redhat-linux/lib64/libedit.a failed
attempt to open /usr/local/lib64/libedit.so failed
attempt to open /usr/local/lib64/libedit.a failed
attempt to open /lib64/libedit.so failed
attempt to open /lib64/libedit.a failed
attempt to open /usr/lib64/libedit.so failed
attempt to open /usr/lib64/libedit.a failed
attempt to open /usr/x86_64-redhat-linux/lib/libedit.so failed
attempt to open /usr/x86_64-redhat-linux/lib/libedit.a failed
attempt to open /usr/lib64/libedit.so failed
attempt to open /usr/lib64/libedit.a failed
attempt to open /usr/local/lib/libedit.so failed
attempt to open /usr/local/lib/libedit.a failed
attempt to open /lib/libedit.so failed
attempt to open /lib/libedit.a failed
attempt to open /usr/lib/libedit.so failed
attempt to open /usr/lib/libedit.a failed
ld: cannot find -ledit
Okay, now it's clear that it's looking for the libedit.so library; since the link line includes -L/opt/PostgreSQL/9.4/lib, let's check for the library in the PostgreSQL lib directory [/opt/PostgreSQL/9.4/lib].
[root@localhost pg_repack]# cd /opt/PostgreSQL/9.4/lib
[root@localhost lib]# ls -l libedit*
-rwxr-xr-x. 1 root daemon 254702 Mar 22 23:32 libedit.so.0
Ah, we have "libedit.so.0" but not the "libedit.so" required by the "make" command. Creating a symbolic link should be a quick fix.
[root@localhost lib]# ln -s libedit.so.0 libedit.so
[root@localhost lib]# ls -l libedit*
lrwxrwxrwx. 1 root root       12 May 19 22:25 libedit.so -> libedit.so.0
-rwxr-xr-x. 1 root daemon 254702 Mar 22 23:32 libedit.so.0
Re-run "make" command.
[root@localhost pg_repack]# make
make[1]: Entering directory `/root/pg_repack/bin'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 pg_repack.o pgut/pgut.o pgut/pgut-fe.o -L/opt/PostgreSQL/9.4/lib -lpq -L/opt/PostgreSQL/9.4/lib -L/opt/local/Current/lib -Wl,--as-needed -Wl,-rpath,'/opt/PostgreSQL/9.4/lib',--enable-new-dtags  -lpgcommon -lpgport -lssl -lcrypto -lgssapi_krb5 -lz -ledit -lrt -lcrypt -ldl -lm -o pg_repack
/usr/bin/ld: warning: libssl.so.1.0.0, needed by /opt/PostgreSQL/9.4/lib/libpq.so, may conflict with libssl.so.10
/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'
collect2: ld returned 1 exit status
make[1]: *** [pg_repack] Error 1
make[1]: Leaving directory `/root/pg_repack/bin'
make: *** [all] Error 2
Oops 2...!!! Interesting, it got past the -ledit error and is now reporting on the libldap library [libldap_r-2.4.so.2]. I am not sure why it's looking in the /lib64 directory when my pg_config points to /opt/PostgreSQL/9.4/lib. Let's check what we have in both locations.
[root@localhost pg_repack]# ls -l /lib64/libldap*
lrwxrwxrwx. 1 root root     21 Jan  6 22:05 libldap-2.4.so.2 -> libldap-2.4.so.2.10.2
-rwxr-xr-x. 1 root root 329696 Oct 15  2014 libldap-2.4.so.2.10.2
lrwxrwxrwx. 1 root root     23 May 19 06:43 libldap_r-2.4.so.2 -> libldap_r-2.4.so.2.10.2
-rwxr-xr-x. 1 root root 351920 Oct 15  2014 libldap_r-2.4.so.2.10.2

[root@localhost pg_repack]# ls -l /opt/PostgreSQL/9.4/lib/libldap*
-rwxr-xr-x. 1 root daemon 404761 Mar 22 23:32 /opt/PostgreSQL/9.4/lib/libldap-2.4.so.2
-rwxr-xr-x. 1 root daemon 442657 Mar 22 23:32 /opt/PostgreSQL/9.4/lib/libldap_r-2.4.so.2
It seems there are two copies of "libldap_r-2.4.so.2": one as a symbolic link in /lib64 and another as a real file in the PostgreSQL lib directory. I am guessing the conflict is due to the multiple library copies; let's remove the symbolic link, retain the other copy of the library, and try again.
[root@localhost lib64]# unlink libldap_r-2.4.so.2

[root@localhost pg_repack]# make
make[1]: Entering directory `/root/pg_repack/bin'
....
....
....
sed 's,REPACK_VERSION,1.3.1,g' pg_repack.sql.in > pg_repack--1.3.1.sql;
sed 's,REPACK_VERSION,1.3.1,g' pg_repack.control.in > pg_repack.control
make[1]: Leaving directory `/root/pg_repack/lib'
make[1]: Entering directory `/root/pg_repack/regress'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/pg_repack/regress'
Wow... finally, it has compiled. Run "make install" to install the pg_repack binaries and libraries.
[root@localhost pg_repack]# make install
make[1]: Entering directory `/root/pg_repack/bin'
/bin/mkdir -p '/opt/PostgreSQL/9.4/bin'
/usr/bin/install -c  pg_repack '/opt/PostgreSQL/9.4/bin'
make[1]: Leaving directory `/root/pg_repack/bin'
make[1]: Entering directory `/root/pg_repack/lib'
/bin/mkdir -p '/opt/PostgreSQL/9.4/lib/postgresql'
/bin/mkdir -p '/opt/PostgreSQL/9.4/share/postgresql/extension'
/bin/mkdir -p '/opt/PostgreSQL/9.4/share/postgresql/extension'
/usr/bin/install -c -m 755  pg_repack.so '/opt/PostgreSQL/9.4/lib/postgresql/pg_repack.so'
/usr/bin/install -c -m 644 pg_repack.control '/opt/PostgreSQL/9.4/share/postgresql/extension/'
/usr/bin/install -c -m 644 pg_repack--1.3.1.sql pg_repack.control '/opt/PostgreSQL/9.4/share/postgresql/extension/'
make[1]: Leaving directory `/root/pg_repack/lib'
make[1]: Entering directory `/root/pg_repack/regress'
make[1]: Nothing to be done for `install'.
make[1]: Leaving directory `/root/pg_repack/regress'
After compilation, there will be a pg_repack utility in $PGHOME/bin and a pg_repack.so library in the $PGHOME/lib/postgresql/ directory.
[root@localhost pg_repack]# ls -l /opt/PostgreSQL/9.4/bin/pg_rep*
-rwxr-xr-x. 1 root root 84030 May 20 00:07 /opt/PostgreSQL/9.4/bin/pg_repack

[root@localhost postgresql]# ls -l /opt/PostgreSQL/9.4/lib/postgresql/pg_rep*
-rwxr-xr-x. 1 root root 31028 May 20 00:07 /opt/PostgreSQL/9.4/lib/postgresql/pg_repack.so
Now we are set to create the pg_repack extension inside the database.
-bash-4.1$ psql
Password:
psql.bin (9.4.1)
Type "help" for help.

postgres=# select * from pg_available_extensions where name='pg_repack';
   name    | default_version | installed_version |                           comment
-----------+-----------------+-------------------+--------------------------------------------------------------
 pg_repack | 1.3.1           | 1.3.1             | Reorganize tables in PostgreSQL databases with minimal locks
(1 row)

postgres=# create extension pg_repack;
CREATE EXTENSION
Likewise, I have attempted to compile against the EnterpriseDB product Postgres Plus Advanced Server 9.4 [PPAS], another pre-built binary package. I faced similar library issues, hence I used the linker option "LDFLAGS" pointing to its library directory and compiled.
[root@localhost ~]# export PATH=/opt/PostgresPlus/9.4AS/bin:$PATH
[root@localhost ~]# export LDFLAGS=-L/opt/PostgresPlus/9.4AS/lib
[root@localhost ~]# cd pg_repack/
[root@localhost pg_repack]# make
[root@localhost pg_repack]# make install


[root@localhost pg_repack]# ls -l /opt/PostgresPlus/9.4AS/bin/pg_rep*
-rwxr-xr-x. 1 root root 201877 May 15 11:06 /opt/PostgresPlus/9.4AS/bin/pg_repack
[root@localhost pg_repack]# ls -l /opt/PostgresPlus/9.4AS/lib/pg_rep*
-rwxr-xr-x. 1 root root 94516 May 15 11:06 /opt/PostgresPlus/9.4AS/lib/pg_repack.so
Cool, this too compiled smoothly. Now create the extension in PPAS 9.4:
-bash-4.1$ psql
Welcome
psql.bin (9.4.1.4)
Type "help" for help.

edb=# create extension pg_repack;
CREATE EXTENSION
edb=#
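To actually reclaim space, you run the pg_repack client against a database and table. A minimal sketch (the table name is a placeholder, the extension must already exist in the target database, and exact options vary by version, so check pg_repack --help):
pg_repack --dry-run -d postgres -t my_table    # only reports what would be repacked
pg_repack -d postgres -t my_table              # rebuilds the table online
# add -h/-p/-U connection options as needed for your instance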

Thank you.

--Raghav

Tuesday, January 13, 2015

Configuring Automatic failover using Replication Manager 2.0 on PostgreSQL 9.3.5

In PostgreSQL high availability (Streaming Replication/Hot Standby), one thing requires human interference and has no automation: in the event of a master database crash, initiating the failover procedure (trigger file creation or the pg_ctl promote command) on the standby. That is because core PostgreSQL has no built-in functionality to identify a failure of the master and notify the standby. Thus, we either need some automated tool to take over the manual failover work, or we have to dive into scripting land and write our own script to do it.

Today, we have very good external tools to handle automatic failover, like Replication Manager (repmgr), EDB Failover Manager (EFM), pgHA and HandyRep. Thanks to all of them for filling the gap of automatic failover in PostgreSQL.

In this post, I am demonstrating Replication Manager on a single node (localhost) on RHEL 6.5 with PostgreSQL 9.3.5. For an easy and clear understanding of the concept, I have compiled repmgr against the EnterpriseDB One Click Installer (a pre-built binary package) instead of PG source.

To compile repmgr, we need to install a few mandatory dependency packages (gcc, postgresql-devel, libxslt-devel, pam-devel, libopenssl-devel, krb5-devel and libedit-devel) using yum or rpm. After installing the dependencies, download repmgr 2.0 from here, set pg_config in your PATH, and start compiling.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ type pg_config
pg_config is hashed (/opt/PostgreSQL/9.3/bin/pg_config)

export PATH=/opt/PostgreSQL/9.3/bin:$PATH
export LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:$LD_LIBRARY_PATH

tar xvzf repmgr-2.0.tar.gz
cd repmgr
make USE_PGXS=1
make USE_PGXS=1 install
Mostly, repmgr compiles smoothly without any hiccups if all the dependency packages are installed. However, since I am compiling against PG pre-built binaries, there can be diverse variants of libraries between the pre-built bundle and the RPMs, which might throw compilation errors like the one you see here:
/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'
collect2: ld returned 1 exit status
make: *** [repmgrd] Error 1
To fix it, check the reported library, /lib64/libldap_r-2.4.so.2.
[root@localhost repmgr-2.0]# cd /lib64/
[root@localhost lib64]# ls -l libldap*
lrwxrwxrwx. 1 root root     20 Dec  8 09:23 libldap-2.4.so.2 -> libldap-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 317200 Apr 29  2013 libldap-2.4.so.2.5.6
lrwxrwxrwx. 1 root root     22 Dec  8 09:23 libldap_r-2.4.so.2 -> libldap_r-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 335264 Apr 29  2013 libldap_r-2.4.so.2.5.6
Ok, there are two copies; retain one and unlink the other.
[root@localhost lib64]# unlink libldap_r-2.4.so.2
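With the duplicate link removed, clean the previous compilation attempt and rebuild; the sequence is the same as before, preceded by a clean:
make USE_PGXS=1 clean
make USE_PGXS=1
make USE_PGXS=1 install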
This time it compiles without any issues. Similarly, you may need to fix other library-related errors in the same way. After installation, you can find the repmgr binaries, libraries and .sql files in the PostgreSQL locations.
[postgres@localhost:/opt/PostgreSQL/9.3/bin~]$ ls repmgr*     (Two utility commands)
repmgr  repmgrd

[postgres@localhost:/opt/PostgreSQL/9.3/lib/postgresql~]$ ls rep*  
repmgr_funcs.so

[postgres@localhost:/opt/PostgreSQL/9.3/share/postgresql/contrib~]$ ls
repmgr_funcs.sql  repmgr.sql  uninstall_repmgr_funcs.sql  uninstall_repmgr.sql
We are all set to set up automatic failover with the super-simple Replication Manager tool. As a first step, we need streaming replication configured (refer to the wiki), which I have already done on my localhost between two ports, 5432 (Master) and 5433 (Standby); let's use them. You can also try building the standby using the repmgr STANDBY CLONE command; refer to the repmgr documentation for more details.
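For reference, a STANDBY CLONE on this localhost layout would look roughly like the following. This is only a sketch: the option set and the positional master host follow repmgr 2.0 examples and are assumptions here, so verify against the repmgr documentation before using it.
repmgr -D /opt/PostgreSQL/9.3/data_slave -d postgres -p 5432 -U postgres --verbose standby clone 127.0.0.1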

Step 1. Enable the repmgr libraries on both PostgreSQL instances; they are required for its backend functions.
Master Data Directory : /opt/PostgreSQL/9.3/data
Standby Data Directory: /opt/PostgreSQL/9.3/data_slave

Edit $PGDATA/postgresql.conf 
shared_preload_libraries = 'repmgr_funcs'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "show shared_preload_libraries;"
 shared_preload_libraries
--------------------------
 repmgr_funcs
(1 row)
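The standby instance (running out of data_slave on port 5433) needs the same setting; a sketch mirroring the master steps above:
Edit /opt/PostgreSQL/9.3/data_slave/postgresql.conf
shared_preload_libraries = 'repmgr_funcs'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave restart
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "show shared_preload_libraries;"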
Step 2. Repmgr needs a repmgr.conf file for each node; since we are working on localhost, we need to keep each node's repmgr.conf in a separate directory.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ pwd
/opt/PostgreSQL/9.3
[postgres@localhost:/opt/PostgreSQL/9.3~]$ mkdir -p repmgr/master repmgr/slave
Step 3. Create a repmgr.conf file for the Master (5432) and the Standby (5433) in the directories we created in Step 2.
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ pwd
/opt/PostgreSQL/9.3/repmgr/master
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ more repmgr.conf
cluster=test
node=1
node_name=master
conninfo='host=127.0.0.1 port=5432 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60 
reconnect_attempts=6
reconnect_interval=10
failover=automatic
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'


[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ pwd
/opt/PostgreSQL/9.3/repmgr/slave
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ more repmgr.conf
cluster=test
node=2
node_name=slave
conninfo='host=127.0.0.1 port=5433 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60
reconnect_attempts=6
reconnect_interval=10
failover=automatic
loglevel=DEBUG
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'
What you mainly have to observe in repmgr.conf is "master_response_timeout", the total wait duration in seconds before declaring that the master has disappeared. Within that duration, 6 reconnect attempts are made at 10-second intervals (6 x 10 = 60 seconds, matching the timeout). If there is no response from the master within the "master_response_timeout" duration, automatic failover takes place via the promote_command script. The script consists of the standby promotion steps; the one I created for this setup is shared below.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ more repmgr/auto_failover.sh
#!/bin/bash
echo "Promoting Standby at `date '+%Y-%m-%d %H:%M:%S'`" >>/tmp/repsetup.log
/opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave promote >>/tmp/repsetup.log
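One detail worth noting: promote_command is executed as a shell command, so the script needs to be executable by the postgres user:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ chmod +x repmgr/auto_failover.sh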
Step 4. Register the Master and Standby nodes with repmgr using the "repmgr" utility.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf --verbose master register >/tmp/repsetup.log 2>&1
Master register logs (/tmp/repsetup.log):
[2015-01-12 01:28:55] [INFO] repmgr connecting to master database
[2015-01-12 01:28:55] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:28:55] [INFO] master register: creating database objects inside the repmgr_test schema
[2015-01-12 01:28:55] [DEBUG] master register: CREATE SCHEMA repmgr_test
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_nodes (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_monitor (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE VIEW repmgr_test.repl_status AS
[2015-01-12 01:28:55] [DEBUG] master register: CREATE INDEX idx_repl_status_sort ON repmgr_test.repl_monitor (last_monitor_time, standby_node)
[2015-01-12 01:28:55] [DEBUG] master register: INSERT INTO repmgr_test.repl_nodes (id, cluster, name, conninfo, priority) VALUES (1, 'test', 'master', 'host=127.0.0.1 port=5432 dbname=postgres', 0)
[2015-01-12 01:28:55] [NOTICE] Master node correctly registered for cluster test with id 1 (conninfo: host=127.0.0.1 port=5432 dbname=postgres)
Opening configuration file: repmgr/master/repmgr.conf
Standby:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/slave/repmgr.conf --verbose standby register >>/tmp/repsetup.log 2>&1
Standby register logs (/tmp/repsetup.log):
[2015-01-12 01:30:37] [INFO] repmgr connecting to standby database
[2015-01-12 01:30:37] [INFO] repmgr connected to standby, checking its state
[2015-01-12 01:30:37] [DEBUG] standby register: SELECT 1 FROM pg_namespace WHERE nspname = 'repmgr_test'
[2015-01-12 01:30:37] [INFO] repmgr connecting to master database
[2015-01-12 01:30:37] [INFO] finding node list for cluster 'test'
[2015-01-12 01:30:37] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:30:37] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:30:37] [INFO] repmgr registering the standby
[2015-01-12 01:30:37] [DEBUG] standby register: INSERT INTO repmgr_test.repl_nodes(id, cluster, name, conninfo, priority) VALUES (2, 'test', 'slave', 'host=127.0.0.1 port=5433 dbname=postgres', 0)
[2015-01-12 01:30:37] [INFO] repmgr registering the standby complete
[2015-01-12 01:30:37] [NOTICE] Standby node correctly registered for cluster test with id 2 (conninfo: host=127.0.0.1 port=5433 dbname=postgres)
Opening configuration file: repmgr/slave/repmgr.conf
Looking at the logs, you can easily notice that repmgr creates its own schema in the database, named "repmgr_$CLUSTER", with some tables/views/functions in it; replication-lag information is recorded as rows in the repl_monitor table. Repmgr also has a utility called "repmgrd", a daemon process that runs from the standby node to monitor master availability; it is a management and monitoring daemon that watches the cluster status and can trigger standby promotion. We need to start the "repmgrd" daemon process after registering the master/standby nodes with repmgr.
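Before starting the daemon, the registrations can be cross-checked directly from the repmgr_test schema; for example:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -d postgres -c "SELECT id, cluster, name, conninfo FROM repmgr_test.repl_nodes;"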

Step 5. Let's start the repmgr daemon process to watch the master. In our case the standby is not running on a separate node, so we start the daemon with the repmgr/slave/repmgr.conf file.
repmgrd -f repmgr/slave/repmgr.conf --verbose --monitoring-history >>/tmp/repsetup.log 2>&1 &

Logs (/tmp/repsetup.log)

[2015-01-12 01:42:13] [INFO] repmgrd Connecting to database 'host=127.0.0.1 port=5433 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Connected to database, checking its state
[2015-01-12 01:42:13] [INFO] repmgrd Connecting to primary for cluster 'test'
[2015-01-12 01:42:13] [INFO] finding node list for cluster 'test'
[2015-01-12 01:42:13] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Checking cluster configuration with schema 'repmgr_test'
[2015-01-12 01:42:13] [INFO] repmgrd Checking node 2 in cluster 'test'
[2015-01-12 01:42:13] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 01:42:13] [INFO] repmgrd Starting continuous standby node monitoring
[2015-01-12 01:42:14] [DEBUG] standby_monitor: INSERT INTO repmgr_test.repl_monitor VALUES(1, 2, '2015-01-12 09:42:14.457287+00'::timestamp with time zone,  '2015-01-12 09:42:13.950172+00'::timestamp with time zone, '2/C84DAB08', '2/C84DAB08',  0, 0)
Step 6. Monitor the nodes registered with repmgr and the running daemon process.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -d postgres -xc "SELECT * FROM repmgr_test.repl_status"
-[ RECORD 1 ]-------------+------------------------------
primary_node              | 1
standby_node              | 2
standby_name              | slave
last_monitor_time         | 2015-01-12 09:42:29.514056+00
last_wal_primary_location | 2/C84DB7A8
last_wal_standby_location | 2/C84DB7A8
replication_lag           | 0 bytes
replication_time_lag      | 00:04:01.960772
apply_lag                 | 0 bytes
communication_time_lag    | 00:03:59.45349

[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf cluster show
[2015-01-12 01:47:00] [INFO] repmgr connecting to database
Role      | Connection String
* master  | host=127.0.0.1 port=5432 dbname=postgres
  standby | host=127.0.0.1 port=5433 dbname=postgres
Step 7. Let's simulate a failover scenario by taking the master down. I am doing it in a rather brutal way, by killing the postmaster process (using the PID from postmaster.pid). Please don't do the same in production if you want to stay in the same company for a long time :).
[postgres@localhost:/opt/PostgreSQL/9.3~]$ kill `head -n1 data/postmaster.pid`
Or, you can take the master down safely:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf
Now check the logs to see how repmgr has promoted the standby:
[2015-01-12 02:14:11] [WARNING] Can't stop current query: PQcancel() -- connect() failed: Connection refused
[2015-01-12 02:14:11] [WARNING] repmgrd: Connection to master has been lost, trying to recover... 10 seconds before failover decision
...
[2015-01-12 02:14:21] [ERROR] repmgrd: We couldn't reconnect for long enough, exiting...
[2015-01-12 02:14:21] [DEBUG] repmgrd: there are 2 nodes registered
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=1 conninfo="host=127.0.0.1 port=5432 dbname=postgres" witness=false
[2015-01-12 02:14:21] [ERROR] Connection to database failed: could not connect to server: Connection refused
        Is the server running on host "127.0.0.1" and accepting
        TCP/IP connections on port 5432?
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=2 conninfo="host=127.0.0.1 port=5433 dbname=postgres" witness=false
[2015-01-12 02:14:21] [DEBUG] Total nodes counted: registered=2, visible=1
[2015-01-12 02:14:21] [DEBUG] XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:21] [DEBUG] Last XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:26] [INFO] repmgrd: This node is the best candidate to be the new primary, promoting...
[2015-01-12 02:14:26] [DEBUG] promote command is: "/opt/PostgreSQL/9.3/repmgr/auto_failover.sh"
Promoting Standby at 2015-01-12 02:14:26
server promoting
[2015-01-12 02:14:29] [INFO] repmgrd Checking cluster configuration with schema 'repmgr_test'
[2015-01-12 02:14:29] [INFO] repmgrd Checking node 2 in cluster 'test'
[2015-01-12 02:14:29] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 02:14:29] [INFO] repmgrd Starting continuous primary connection check
Perfect. The repmgr daemon recognised the master failure, and before promoting the standby via the "auto_failover.sh" script it properly verified the WAL locations as well. Let's verify whether the standby was promoted:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 f
(1 row)
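The repmgr cluster view from Step 6, now run with the surviving node's configuration, should also reflect the role change:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/slave/repmgr.conf cluster show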
Incredible; without any human intervention, the manual failover procedure was taken care of by repmgr. You can rebuild the dead master as a standby either with repmgr itself or by following the switchback procedure shown in my previous post. In the same way you can set this up across two nodes, by executing the standby steps on the standby node instead of localhost.

Thanks for reading.
Raghav
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License