Currently I have 3 servers: one master and 2 clients. I installed Redmine 3.3.1.stable with PostgreSQL 9.6, and installed Pacemaker on all 3 servers. To synchronize the database, I followed the documentation. Everything works fine until I stop the active server; then Redmine on server2 shows an authentication error.
This is the Redmine error I get when I try to log in from a client after the failover:
Completed 500 Internal Server Error in 11ms (ActiveRecord: 3.5ms)
ActiveRecord::StatementInvalid (PG::ReadOnlySqlTransaction: ERROR: cannot execute UPDATE in a read-only transaction
: UPDATE "users" SET "last_login_on" = '2020-08-17 13:05:11.001886' WHERE "users"."type" IN ('User', 'AnonymousUser') AND "users"."id" = $1):
app/models/user.rb:238:in `try_to_login'
app/controllers/account_controller.rb:204:in `password_authentication'
app/controllers/account_controller.rb:199:in `authenticate_user'
app/controllers/account_controller.rb:40:in `login'
lib/redmine/sudo_mode.rb:63:in `sudo_mode'
As far as I understand, Redmine is only able to write through server1. Why can't Redmine get write access through server2 or server3?
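A quick way to confirm this is to check on each server whether the local PostgreSQL instance is running as a read-only hot standby (a minimal check sketch):

psql -U postgres -c "SELECT pg_is_in_recovery();"   # 't' = read-only standby, 'f' = writable primary

If the node the virtual IP currently points at answers 't', any UPDATE (such as the last_login_on write above) will fail exactly as shown.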
Below I give more information about my steps so far.
pcs config
Cluster Name: mycluster
Corosync Nodes:
server1 server2 server3
Pacemaker Nodes:
server1 server2 server3
Resources:
Resource: MasterVip (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=101.226.189.208 nic=lo cidr_netmask=32 iflabel=pgrepvip
Meta Attrs: target-role=Started
Operations: start interval=0s timeout=20s (MasterVip-start-interval-0s)
stop interval=0s timeout=20s (MasterVip-stop-interval-0s)
monitor interval=90s (MasterVip-monitor-interval-90s)
Resource: Apache (class=ocf provider=heartbeat type=apache)
Attributes: configfile=/etc/apache2/apache2.conf statusurl=http://localhost/server-status
Operations: start interval=0s timeout=40s (Apache-start-interval-0s)
stop interval=0s timeout=60s (Apache-stop-interval-0s)
monitor interval=1min (Apache-monitor-interval-1min)
Stonith Devices:
Fencing Levels:
Location Constraints:
Resource: Apache
Enabled on: server1 (score:INFINITY) (role: Started) (id:cli-prefer-Apache)
Ordering Constraints:
Colocation Constraints:
Apache with MasterVip (score:INFINITY) (id:colocation-Apache-MasterVip-INFINITY)
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
migration-threshold: 5
resource-stickiness: 10
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: mycluster
dc-version: 1.1.16-94ff4df
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false
Quorum:
Options:
master postgresql.conf
# Add settings for extensions here
listen_addresses = '*'
wal_level = hot_standby
synchronous_commit = local
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/9.6/main/archive/%f'
max_wal_senders = 10
wal_keep_segments = 30
synchronous_standby_names = 'server2'
synchronous_standby_names = 'server3'
hot_standby = on
master pg_hba.conf
# Localhost
host replication postgres 127.0.0.1/32 md5
# PostgreSQL Master IP address
host replication postgres 101.226.189.205/32 md5
# PostgreSQL Slave IP addresses
host replication postgres 101.226.189.206/32 md5
host replication postgres 101.226.189.207/32 md5
copy the data directory from the master to the clients
pg_basebackup -h server1 -U postgres -D /var/lib/postgresql/9.6/main -X stream -P
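For reference, each standby also needs a recovery.conf in its data directory so it streams from the primary; roughly like this (host, user and application_name follow the configuration above, the password is a placeholder, and application_name must match the entry in synchronous_standby_names):

standby_mode = 'on'
primary_conninfo = 'host=server1 port=5432 user=postgres password=yourpassword application_name=server2'
recovery_target_timeline = 'latest'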
Database connection status
postgres@oreo:/etc/postgresql/9.6/main$ psql -x -c "select * from pg_stat_replication;"
-[ RECORD 1 ]----+------------------------------
pid | 18174
usesysid | 10
usename | postgres
application_name | server3
client_addr | 101.226.189.207
client_hostname |
client_port | 35236
backend_start | 2020-08-17 15:56:40.687282+02
backend_xmin |
state | streaming
sent_location | 0/7005430
write_location | 0/7005430
flush_location | 0/7005430
replay_location | 0/7005430
sync_priority | 1
sync_state | sync
-[ RECORD 2 ]----+------------------------------
pid | 18175
usesysid | 10
usename | postgres
application_name | server2
client_addr | 101.226.189.206
client_hostname |
client_port | 45862
backend_start | 2020-08-17 15:56:40.717087+02
backend_xmin |
state | streaming
sent_location | 0/7005430
write_location | 0/7005430
flush_location | 0/7005430
replay_location | 0/7005430
sync_priority | 0
sync_state | async
I found the answer to my question. I missed one step in the Pacemaker resources.
The pgsqld resource defines the properties of a PostgreSQL instance: where it is located, where its binaries and configuration files are, how to monitor it, and so on.
The pgsql-ha resource controls all the pgsqld PostgreSQL instances in your cluster and decides where the primary is promoted and where the standbys are started.
pcs resource create pgsqld ocf:heartbeat:pgsqlms \
bindir="/usr/lib/postgresql/9.6/bin" \
pgdata="/etc/postgresql/9.6/main" \
datadir="/var/lib/postgresql/9.6/main" \
pghost="/var/run/postgresql" \
recovery_template="/etc/postgresql/9.6/main/recovery.conf.pcmk" \
op start timeout=60s \
op stop timeout=60s \
op promote timeout=30s \
op demote timeout=120s \
op monitor interval=15s timeout=10s role="Master" \
op monitor interval=16s timeout=10s role="Slave" \
op notify timeout=60s
pcs resource master pgsql-ha pgsqld notify=true
pcs resource cleanup
pcs status
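In addition, the virtual IP (and therefore Apache) has to follow the promoted PostgreSQL instance instead of staying pinned to server1. Constraints along these lines do that (a sketch based on the PAF quick start; adjust scores and kinds to your needs, and the cli-prefer-Apache location constraint shown above probably has to go as well):

pcs constraint remove cli-prefer-Apache
pcs constraint colocation add MasterVip with master pgsql-ha INFINITY
pcs constraint order promote pgsql-ha then start MasterVip symmetrical=false kind=Mandatory
pcs constraint order demote pgsql-ha then stop MasterVip symmetrical=false kind=Mandatory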
When using TDengine in a k8s environment, you will often encounter restarts of the pod where the cluster's mnode is located, as well as cluster reinstallation, so I tested both scenarios.
With "create database test replica 3;" (i.e. 3 replicas), the TDengine cluster cannot be used normally in either scenario.
Details are as follows:
k8s environment helm deployment tdengine-3.0.2.4 cluster (1mnode-3dnodes-3replica)
After deleting the pod tdengine3-0 (the mnode) and waiting for the new pod to be built, I used the client to check the database and found that tdengine3-1 and tdengine3-2 are reported as offline.
Error when checking data: DB error: Fail to get table info, error: Sync not leader
To simulate the restart of the pod hosting the mnode in the k8s cluster, the operation is as follows:
[root@node01 tdengine]# kubectl delete pod tdengine3-0
pod "tdengine3-0" deleted
[root@node01 tdengine]# kubectl get pod -w|grep tdengine3
tdengine3-0 0/1 Running 0 8s
tdengine3-1 1/1 Running 0 62s
tdengine3-2 1/1 Running 0 2m10s
tdengine3-0 1/1 Running 0 10s
[root@node01 tdengine]# kubectl exec -it tdengine3-0 -- /bin/bash
root@tdengine3-0:~# taos
Welcome to the TDengine Command Line Interface, Client Version:3.0.2.4
Copyright (c) 2022 by TDengine, all rights reserved.
****************************** Tab Completion **********************************
* The TDengine CLI supports tab completion for a variety of items, *
* including database names, table names, function names and keywords. *
* The full list of shortcut keys is as follows: *
* [ TAB ] ...... complete the current word *
* ...... if used on a blank line, display all valid commands *
* [ Ctrl + A ] ...... move cursor to the st[A]rt of the line *
* [ Ctrl + E ] ...... move cursor to the [E]nd of the line *
* [ Ctrl + W ] ...... move cursor to the middle of the line *
* [ Ctrl + L ] ...... clear the entire screen *
* [ Ctrl + K ] ...... clear the screen after the cursor *
* [ Ctrl + U ] ...... clear the screen before the cursor *
**********************************************************************************
Server is Community Edition.
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
1 | tdengine3-0.tdengine3.defau... | 2 | 8 | ready | 2023-01-30 17:42:13.682 | |
2 | tdengine3-1.tdengine3.defau... | 2 | 0 | offline | 2023-01-30 17:43:35.428 | status not received |
3 | tdengine3-2.tdengine3.defau... | 2 | 0 | offline | 2023-01-30 17:44:50.947 | status not received |
Query OK, 3 row(s) in set (0.002463s)
taos> show databases;
name |
=================================
information_schema |
performance_schema |
test |
Query OK, 3 row(s) in set (0.002456s)
taos> use test;
Database changed.
taos> select * from demo;
DB error: Fail to get table info, error: Sync not leader (10.288545s)
taos> select * from demo;
DB error: Fail to get table info, error: Sync not leader (10.289980s)
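To narrow it down further, it also helps to check the mnodes and whether each vgroup still has a leader (diagnostic statements only; output omitted):

taos> show mnodes;
taos> use test;
taos> show vgroups;

With replica 3, a vgroup needs a majority of its vnodes online to elect a leader, so two offline dnodes leave the database unusable, which matches the "Sync not leader" error.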
Please help with ODBC connection from Clickhouse to SQL Server databases.
I configured ODBC on the Clickhouse server.
Connection from clients such as isql and tsql is successful.
But it is not possible to connect from clickhouse-client.
Operation system – Ubuntu 20.04
Clickhouse Server – version 22.1.3
Clickhouse Client – version 18.16.1
MS SQL Server 2016 on Windows Server.
/etc/freetds/freetds.conf
[TSQL_NE]
host = 10.72.82.72
port = 1433
tds version = 7.4
client charset = UTF-8
/etc/odbcinst.ini
[FreeTDS]
Description=FreeTDS
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
FileUsage=1
UsageCount=8
/etc/odbc.ini
[TSQL_NE]
Description=FreeTDS NE
Driver=FreeTDS
Server=10.72.82.72
Database=ASU
UID=user
PWD=password
Port=1433
Checking the connection to the MSSQL database via ODBC
root@srv:/# isql -v TSQL_NE "user" "password"
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL> SELECT top 10 v1 from asu.dbo.data
+-------------------------+
| V1 |
+-------------------------+
| 1.45 |
| 1.5062500000000001 |
| 1.385 |
| 1.4237500000000001 |
| 1.3712500000000001 |
| 1.425 |
| 1.39625 |
| 1.6487499999999999 |
| 1.28 |
| 1.2037500000000001 |
+-------------------------+
SQLRowCount returns 10
10 rows fetched
root@srv:/# tsql -v -S TSQL_NE -U user -P password
locale is "C.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
1> SELECT top 10 v1 from asu.dbo.data
…
10 rows fetched
Connection with clickhouse-client and the error
root@srv:~# clickhouse-client
ClickHouse client version 18.16.1.
Password for user :
Connecting to localhost:9000.
Connected to ClickHouse server version 22.2.2 revision 54455.
b7e1d742cbd0 :) SELECT top 10 v1 from odbc('DSN=TSQL_NE;Uid=user;Pwd=password;', 'asu', 'dbo.data')
0 rows in set. Elapsed: 0.290 sec.
Received exception from server (version 22.2.2):
> Code: 86. DB::Exception: Received from localhost:9000, 127.0.0.1.
DB::Exception: Received error from remote server /columns_info?connection_string=DSN%3DTSQL_NE%3B%20Uid%3Duser%3BPwd%3Dpassword%3B&table=dbo.data&external_table_functions_use_nulls=true.
HTTP status code: 500 Internal Server Error, body: Error getting columns from ODBC
'Code: 36. DB::Exception: Table dbo.data not found. (BAD_ARGUMENTS) (version 22.2.2.1)'
Your command in clickhouse-client should be
SELECT top 10 v1 from odbc('DSN=TSQL_NE;Uid=user;Pwd=password;Database=asu', 'dbo', 'data')
SELECT top 10 v1 from odbc('DSN=TSQL_NE;Uid=user;Pwd=password;', '', 'data')
When you remove the schema and the database, as in the second query, it succeeds. I saw this method here.
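If in doubt, you can also ask ClickHouse to describe what the ODBC bridge resolves before selecting any data (same placeholder credentials as above):

DESCRIBE TABLE odbc('DSN=TSQL_NE;Uid=user;Pwd=password;Database=asu', 'dbo', 'data')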
I have an Ubuntu 20 server on DreamCompute (which is a cloud computing service).
I created a user and a database. Here is the list of databases and users (for some reason, I can't see the database under the matt username).
I went into:
nano /etc/postgresql/13/main/postgresql.conf and
nano /etc/postgresql/13/main/pg_hba.conf and did the whole '*' and '0.0.0.0/0' change.
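To be concrete, the '*' and '0.0.0.0/0' changes are along these lines (a rough sketch of the usual remote-access settings, not a verbatim copy of my files), followed by a restart of PostgreSQL:

# postgresql.conf
listen_addresses = '*'

# pg_hba.conf
host    all    all    0.0.0.0/0    md5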
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+---------+-----------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
strapi | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | hossein=CTc/postgres
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
As you can see, there is no Superuser attribute and no strapi database listed under the matt username.
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
matt | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
ubuntu | | {}
I'm using my DreamCompute instance's IP address as the host, together with my database name, user, and password, but I get the error message: connection attempt timed out.
Could someone please give me a pointer on why this is happening? I have been working on this for 2 weeks now and I can't get it to work.
The error message is a connection timeout. That usually means the port is blocked by a firewall. Check your cloud provider's firewall settings and iptables on your Linux box in case you have it installed.
If there were a problem with permissions, the error message would be something else.
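A few quick checks can confirm whether it is a listening/firewall problem (the server IP is a placeholder):

# on the server: is PostgreSQL listening on all interfaces?
sudo ss -tlnp | grep 5432

# on the server: open the port if ufw is active
sudo ufw allow 5432/tcp

# from your client machine: is the port reachable at all?
nc -vz your-server-ip 5432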
As ex4 mentioned above, I needed to reach out to the company I was renting my cloud computer from, but I still could not connect to the database.
The way I got around it is that you can SSH into your cloud computer and then connect to your database as localhost, since you are already on that machine.
In DBeaver there is an SSH tab where you set up that connection; then you go back to the PostgreSQL tab, fill in localhost, the user, the database name, and the user's password, and simply click Connect.
Sadly it took weeks to get to this point :/
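For reference, the command-line equivalent of DBeaver's SSH tab is roughly this (the user names, database name and IP are placeholders for my setup):

# forward local port 5432 to PostgreSQL on the cloud box
ssh -L 5432:localhost:5432 ubuntu@your-dreamcompute-ip

# then, in another terminal, connect as if the database were local
psql -h localhost -p 5432 -U your_db_user strapi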
I have a CentOS server running a local MemSQL cluster (aggregator and leaf on the same machine). I have a database named offers. For some reason, I cannot execute any queries against tables in my database.
Everything was working fine until I tried to add another machine to the cluster. I had the IT team at my place replicate the server I was working on (completely). I went over to the replicated server, deleted the database in question and then registered the server using the memsql-toolbox-config register-node command. Then the database showed it was in the transition state. I restarted MemSQL using memsql-ops and ended up in this situation.
Running a simple query yields:
memsql> select * from table;
ERROR 2261 (HY000): Query `select * from table` couldn't be executed because of an in progress failover operation. Check the status of the leaf nodes in the cluster (error 1049:'Leaf Error (172.26.32.20:3307): Unknown database 'offers_5'')
The output of the cluster status command is:
memsql> show cluster status;
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
| Node ID | Host | Port | Database | Role | State | Position | Master Host | Master Port | Metadata Master Node ID | Metadata Master Host | Metadata Master Port | Metadata Role | Details |
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
| 1 | 172.26.32.20 | 3306 | cluster | master | online | 0:181 | NULL | NULL | NULL | NULL | NULL | Reference | |
| 1 | 172.26.32.20 | 3306 | offers | master | online | 0:156505 | NULL | NULL | NULL | NULL | NULL | Reference | |
| 2 | 172.26.32.20 | 3307 | cluster | async slave | replicating | 0:180 | 172.26.32.20 | 3306 | 1 | 172.26.32.20 | 3306 | Reference | stage: packet wait, state: x_streaming, err: no |
| 2 | 172.26.32.20 | 3307 | offers | sync slave | replicating | 0:156505 | 172.26.32.20 | 3306 | 1 | 172.26.32.20 | 3306 | Reference | |
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
4 rows in set (0.00 sec)
So it seems that the second node is replicating. Also note the Details column saying:
stage: packet wait, state: x_streaming, err: no
Running the replication status command gives:
memsql> show replication status;
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role | Database | Master_URI | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| master | cluster | NULL | online | 0:181 | 0:181 | 0:177 | 0:181 | 86 | yes | 172.26.32.20:3307/cluster | replicating | 0:180 | 0:181 | 0:180 | 0:181 | 84 |
| master | offers | NULL | online | 0:156505 | 0:156505 | 0:156505 | 0:156505 | 183 | yes | 172.26.32.20:3307/offers | replicating | 0:156505 | 0:156505 | 0:156505 | 0:156505 | 183 |
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
2 rows in set (0.00 sec)
I never initiated any fail over or replication. Anyone knows why this is happening? How could I solve this?
EDIT:
Using memsql-ops I get:
[me@memsql ~]$ memsql-ops memsql-list
ID Agent Id Process State Cluster State Role Host Port Version
33829AF Af13af7 RUNNING CONNECTED MASTER 172.26.32.20 3306 6.5.18
BBA1B61 Af13af7 RUNNING CONNECTED LEAF 172.26.32.20 3307 6.5.18
But with memsql-admin, with the new memsql tools:
[me@memsql ~]$ memsql-admin list-nodes
✘ Failed to list nodes on all hosts: failed to list nodes on 1 host:
172.26.32.20
No nodes found
To make my question a bit clearer: how can I get my server to respond to queries again? And after I do, how should I go about adding another host? Should I completely clean the replicated server of any MemSQL data?
2nd EDIT:
I managed to solve this problem by deleting my database and cluster data and setting up a new cluster using the new MemSQL tools, dropping MemSQL Ops. Read my answer below.
It looks like there are a couple things that might be causing problems. Generally speaking, cloning a memsql server is not something that is supported nor the best way to go about adding nodes. It also looks like you may be using both the older Ops management tool and the newer MemSQL tools. I would recommend not installing or using Ops and sticking to just the new MemSQL tools instead.
A good place to start would be to try recreating the nodes after cloning; a cloned memsql node won't correctly become part of the cluster. You should also verify that you don't have more than one master aggregator in the cluster. If you can start with that and see if it resolves your issues I'm happy to help with any other problems that you run into.
I managed to set up a working cluster.
As micahbhakti mentioned in his answer, I tried using only the newer MemSQL tools, instead of the deprecated MemSQL Ops. It required deleting the MemSQL agent existing on both servers and then following the tutorial in the MemSQL documentation. Here are the steps I took for anyone struggling with this issue which is better described as: My MemSQL-Ops-managed-MemSQL-cluster is not responding. How can I upgrade it to a working MemSQL-tools-managed-cluster?
1. Save what data you can
The following step deletes all MemSQL data, so it would be best to save whatever you can first. Table data can easily be dumped to CSV files with a simple
SELECT * FROM important_data_containing_table INTO OUTFILE '/home/yourfolder/yourcsvfile.csv';
Do this for all tables containing important data. You can also save the schema itself: view and copy into another file all the CREATE statements you originally used to create the tables, so you can re-execute them later. Use this:
SHOW CREATE TABLE your_table_name
The MySQL documentation for this statement is here. The syntax might not be identical to the one used in MemSQL, but the basic commands above work. For exact information, read about MySQL Features Unsupported in MemSQL.
2. Delete anything to do with Memsql-Ops
As is said here about the uninstall command:
Stops the local MemSQL Ops agent and deletes all its data.
If MemSQL nodes are already installed in the local host, this command will prompt users to delete those nodes first before proceeding with the uninstall.
And indeed, if there are nodes running (in my case there were), you will be prompted to run another command to delete those nodes: memsql-ops memsql-delete --all. This WILL delete all data in your database, as its documentation says:
Deletes all data for a MemSQL node. This operation is not reversible and may lead to data loss. Users who want to perform this operation are prompted to explicitly type ‘DELETE’ to be sure of their decision.
That's why I asked you to save whatever you need :)
This should be done for each host you want to include in your new shiny cluster.
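On each host the cleanup therefore boils down to roughly this sequence (the delete command is the one quoted above; the uninstall command name is taken from the Ops documentation, so double-check it against your Ops version):

memsql-ops memsql-delete --all   # deletes every MemSQL node on this host, together with its data
memsql-ops uninstall             # stops the local MemSQL Ops agent and deletes its data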
3. Follow the instructions to create the new cluster using MemSQL tools
After you have cleaned your servers of the deprecated MemSQL Ops agent and its data, you can follow the instructions here. I chose the comprehensive multi-host setup. The process will ask you to register your hosts and then set up the node roles (master aggregator, aggregators and leaves), IP addresses, passwords, ports, etc.
After that, you can test the cluster by making changes on one machine and viewing them on another. The output of memsql-admin list-nodes on the deploying machine for my cluster was:
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
| AAAAAAAAAA | Master | 172.26.32.20 | 3306 | Running | True | 6.7.16 | Online | |
| BBBBBBBBBB | Aggregator | 172.26.32.22 | 3306 | Running | True | 6.7.16 | Online | |
| CCCCCCCCCC | Leaf | 172.26.32.20 | 3307 | Running | True | 6.7.16 | Online | 1 |
| DDDDDDDDDD | Leaf | 172.26.32.22 | 3307 | Running | True | 6.7.16 | Online | 1 |
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
4. Restore the data
Re-execute all the CREATE TABLE queries you saved in step 1, and import all the data you exported to CSV using this syntax:
LOAD DATA INFILE '/home/yourfolder/yourcsvfile.csv' INTO TABLE your_table;
And that's it! Now you can manage your cluster using the new MemSQL Studio, which runs by default on http://your_deployment_machine:8080.
Enjoy :)
I have this configuration with pgpool: "Host-1" is the master and "Host-2" the slave. If "Host-1" goes down, pgpool correctly promotes "Host-2" to master; but if "Host-1" then comes back up, pgpool is not aware of it, and if "Host-2" subsequently goes down, pgpool does not promote "Host-1" to master even though "Host-1" is online.
I enabled health_check but it seems completely useless, because the status of "Host-1" (after it comes back up) is always 3 = "Node is down".
This is the output of the command "show pool_nodes" during the events:
-> Initial situation: "Host-1" UP (master), "Host-2" UP (slave)
node_id | hostname | port | status | lb_weight | role
---------+----------+------+--------+-----------+--------
0 | Host-1 | 5432 | 2 | nan | master
1 | Host-2 | 5432 | 1 | nan | slave
-> node 0 goes down: "Host-1" DOWN, "Host-2" UP
node_id | hostname | port | status | lb_weight | role
---------+----------+------+--------+-----------+--------
0 | Host-1 | 5432 | 3 | nan | slave
1 | Host-2 | 5432 | 2 | nan | master
-> node 0 returns up: "Host-1" UP, "Host-2" UP
node_id | hostname | port | status | lb_weight | role
---------+----------+------+--------+-----------+--------
0 | Host-1 | 5432 | 3 | nan | slave
1 | Host-2 | 5432 | 2 | nan | master
Note that the status of "Host-1" is still 3, which means "Node is down".
-> node 1 goes down: "Host-1" UP, "Host-2" DOWN: at this point I'm not able to connect to the DB, even though node 0 is up and running!
What do I have to do to allow pgpool to promote node 0 to master again?
If it can be useful, these are the "Backend Connection Settings" and "HEALTH CHECK" sections of my pgpool.conf:
# - Backend Connection Settings -
backend_hostname0 = 'Host-1'
# Host name or IP address to connect to for backend 0
backend_port0 = 5432
# Port number for backend 0
#backend_weight0 = 1
# Weight for backend 0 (only in load balancing mode)
#backend_data_directory0 = '/data'
# Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER
backend_hostname1 = 'Host-2'
# Host name or IP address to connect to for backend 1
backend_port1 = 5432
# Port number for backend 1
#backend_weight1 = 1
# Weight for backend 1 (only in load balancing mode)
#backend_data_directory1 = '/data'
# Data directory for backend 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER
#------------------------------------------------------------------------------
# HEALTH CHECK
#------------------------------------------------------------------------------
health_check_period = 10
# Health check period
# Disabled (0) by default
health_check_timeout = 20
# Health check timeout
# 0 means no timeout
health_check_user = 'admin'
# Health check user
health_check_password = '12345'
# Password for health check user
health_check_max_retries = 10
# Maximum number of times to retry a failed health check before giving up.
health_check_retry_delay = 1
# Amount of time to wait (in seconds) between retries.
Once your slave node is up and replication is working, you need to re-attach the node to pgpool.
$ pcp_attach_node 10 pgpool_host 9898 admin _pcp_passwd_ 0
The last argument is the node id; in your case it is 0.
See http://www.pgpool.net/docs/latest/pgpool-en.html#pcp_attach_node for more details.
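Note that recent pgpool-II releases use option flags for the pcp commands instead of positional arguments, so the equivalent call would be roughly:

pcp_attach_node -h pgpool_host -p 9898 -U admin -n 0 -W
# -n 0 is the node id of Host-1; -W prompts for the PCP password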
You have to bring up the slave node before it can be promoted. This means, in your case, using Slony to fully fail over and rebuild the former Master as a new Slave.
The basic problem is that writes written to the new master must be replicated over to the old ones before you can fail back. This is first and foremost a Slony problem. After you verify that Slony is working and everything is replicated, then you can troubleshoot your pgpool side but not until then (and then you might need to re-attach it to PGPool). With PGPool in Master/Slave mode, PGPool is secondary to whatever other replication system you are using.