Creating HA network Stage 2

This document is a more detailed guide on how to create an HA node on the ICON network, written based on the reference guide.

Environment

Ubuntu 18.04 on AWS EC2

Preparation

AWS Instance

Create two instances on AWS in two availability zones: instance 1 (az-a) in us-east-2a and instance 2 (az-b) in us-east-2b.

It is recommended to rename the hostnames for better legibility (both servers)

# change hostname
$ sudo hostnamectl set-hostname az-a
# similarly for az-b, then reboot
$ sudo systemctl reboot

Edit the hosts file (both servers)

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.10.168    az-a-hb
172.31.16.135    az-b-hb

Ping the other server to see if the hostname is resolving (both servers)

$ ping az-b-hb
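
Once the containers are up (a later step), you may also want to confirm that the node ports are reachable between the two instances over the private network; a minimal sketch using netcat, assuming it is installed and that your AWS security groups allow ports 7100 and 9000 between the instances:

# from az-a, check az-b's gRPC (7100) and JSON-RPC (9000) ports
$ nc -zv az-b-hb 7100
$ nc -zv az-b-hb 9000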

Docker Compose

Create a docker-compose.yml under /home/ubuntu (both servers)

version: '3'
services:
   prep:
      image: iconloop/prep-node:1910211829xc2286d
      container_name: "prep-node"
      restart: always
      environment:
         LOOPCHAIN_LOG_LEVEL: "DEBUG"
         ICON_LOG_LEVEL: "DEBUG"
         DEFAULT_PATH: "/data/loopchain"
         LOG_OUTPUT_TYPE: "file"
         CERT_PATH: "/cert"
         iissCalculatePeriod: "1800"
         termPeriod: "1800"
         #FASTEST_START: "yes"
         ENDPOINT_URL: "https://zicon.net.solidwallet.io"
         PRIVATE_PATH: "{PATH TO KEYSTORE}"
         PRIVATE_PASSWORD: "{KEYSTORE PASSWORD}"
      cap_add:
         - SYS_TIME
      volumes:
         - ./data:/data
         - ./cert:/cert
   nginx_throttle:
      image: 'looploy/nginx:1.17.1'
      container_name: nginx_1.17
      environment:
         NGINX_LOG_OUTPUT: 'file'
         NGINX_LOG_TYPE: 'main'
         NGINX_USER: 'root'
         VIEW_CONFIG: "yes"
         USE_NGINX_THROTTLE: "yes"
         NGINX_THROTTLE_BY_URI: "yes"
         NGINX_RATE_LIMIT: "200r/s"
         NGINX_BURST: "5"
         NGINX_SET_NODELAY: "no"
         GRPC_PROXY_MODE: "yes"
         USE_VTS_STATUS: "yes"
         TZ: "GMT-9"
         SET_REAL_IP_FROM: "0.0.0.0/0"
         PREP_MODE: "yes"
         NODE_CONTAINER_NAME: "prep-node"
         PREP_NGINX_ALLOWIP: "yes"
         NGINX_ALLOW_IP: "0.0.0.0/0"
         NGINX_LOG_FORMAT: '$$realip_remote_addr $$remote_addr  $$remote_user [$$time_local] $$request $$status $$body_bytes_sent $$http_referer "$$http_user_agent" $$http_x_forwarded_for $$request_body'
      volumes:
         - ./data/loopchain/nginx:/var/log/nginx
         - ./user_conf:/etc/nginx/user_conf
      ports:
         - 9000:9000
         - 7100:7100

Nginx rate-limits the incoming requests in order to mitigate DDoS attacks, and we also leverage nginx to accept requests only from a whitelisted IP list. Nginx is connected to your P-Rep node via `NODE_CONTAINER_NAME: "prep-node"`.
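
Once the containers are running, the nginx logs are written to the ./data/loopchain/nginx directory mounted in the compose file above, so you can watch throttled or rejected requests from the host; a small sketch (the file names assume nginx's default access.log/error.log naming):

# tail the proxy logs on the host (path comes from the volume mount above)
$ tail -f ./data/loopchain/nginx/access.log ./data/loopchain/nginx/error.log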

Spin up the containers as usual (both servers)

$ sudo docker-compose up -d

Next, check on the status (it will take a while for the P-Rep node to sync all the blocks, after which it will turn to a healthy status)

$ sudo docker ps -a

You should have two containers, prep-node & nginx.
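
To follow the sync progress, you can tail the node container's logs or query the node's status endpoint through nginx on port 9000 (the /api/v1/status/peer path is the usual loopchain status endpoint; adjust it if your image exposes a different one):

# follow the node logs
$ sudo docker logs -f prep-node
# query the node status (block height, state, etc.) through nginx
$ curl -s http://localhost:9000/api/v1/status/peer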

You also need to create another Docker Compose file; call it docker-compose.backup.yml. It is essentially the same as the docker-compose.yml above, but for our backup node.

version: '3'
services:
   prep:
      image: iconloop/prep-node:1910211829xc2286d
      container_name: "backup-node"
      restart: always
      environment:
         LOOPCHAIN_LOG_LEVEL: "DEBUG"
         ICON_LOG_LEVEL: "DEBUG"
         DEFAULT_PATH: "/data/loopchain"
         LOG_OUTPUT_TYPE: "file"
         CERT_PATH: "/cert"
         iissCalculatePeriod: "1800"
         termPeriod: "1800"
         PRIVATE_PATH: "{PATH TO KEYSTORE}"
         PRIVATE_PASSWORD: "{KEYSTORE PASSWORD}"
      cap_add:
         - SYS_TIME
      volumes:
         - ./data:/data
         - ./cert:/cert


   nginx_throttle:
      image: 'looploy/nginx:1.17.1'
      container_name: nginx_1.17
      environment:
         NGINX_LOG_OUTPUT: 'file'
         NGINX_LOG_TYPE: 'main'
         NGINX_USER: 'root'
         VIEW_CONFIG: "yes"
         USE_NGINX_THROTTLE: "yes"
         NGINX_THROTTLE_BY_URI: "yes"
         NGINX_RATE_LIMIT: "200r/s"
         NGINX_BURST: "5"
         NGINX_SET_NODELAY: "no"
         GRPC_PROXY_MODE: "yes"
         USE_VTS_STATUS: "yes"
         TZ: "GMT-9"
         SET_REAL_IP_FROM: "0.0.0.0/0"
         PREP_MODE: "yes"
         NODE_CONTAINER_NAME: "backup-node"
         PREP_NGINX_ALLOWIP: "yes"
         NGINX_ALLOW_IP: "0.0.0.0/0"
         NGINX_LOG_FORMAT: '$$realip_remote_addr $$remote_addr  $$remote_user [$$time_local] $$request $$status $$body_bytes_sent $$http_referer "$$http_user_agent" $$http_x_forwarded_for $$request_body'
      volumes:
         - ./data/loopchain/nginx:/var/log/nginx
         - ./user_conf:/etc/nginx/user_conf
      ports:
         - 9000:9000
         - 7100:7100
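
Pacemaker will normally start and stop this stack through the backup_peer service defined later, but if you ever need to drive the backup stack by hand, point docker-compose at the backup file explicitly:

$ sudo docker-compose -f docker-compose.backup.yml up -d
$ sudo docker-compose -f docker-compose.backup.yml down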

Pacemaker and Corosync

As you've read in the previous guide, crm lets you set the cluster configuration. In this guide, you'll learn how to configure an EIP as a floating IP, plus an additional backup EIP for when the active EIP stops functioning.

This time you'll configure the cluster using pcs, as it appears to be the foundation's tool of choice, which should make the instructions easier to follow.

# Update and install
$ sudo apt-get -y update
$ sudo apt-get install -y pacemaker
$ sudo apt install pcs
# Verify
$ pacemakerd --version
$ corosync -v

Then configure the corosync.conf file.

# located on /etc/corosync/corosync.conf

totem {
    version: 2
    cluster_name: peer_cluster
    secauth: off
    transport: udpu
}
nodelist {
    node {
        ring0_addr: az-a-hb
        nodeid: 1
    }
    node {
        ring0_addr: az-b-hb
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
}

Enable the services on system boot

$ sudo systemctl enable corosync
$ sudo systemctl enable pacemaker

Create Peer and Backup Peer Services

Create peer.service under /lib/systemd/system/

[Unit]
Description=Loopchain Peer
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
StandardError=null
StandardOutput=null
WorkingDirectory=/home/ubuntu
ExecStartPre=/home/ubuntu/cluster.sh
ExecStart=/usr/local/bin/docker-compose -f /home/ubuntu/docker-compose.yml up -d
ExecStop=/usr/sbin/pcs resource disable Backup
ExecStop=/usr/local/bin/docker-compose -f /home/ubuntu/docker-compose.yml down
[Install]
WantedBy=multi-user.target

Create `backup_peer.service`, also under `/lib/systemd/system/`

[Unit]
Description=Loopchain Backup_peer
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
StandardError=null
StandardOutput=null
WorkingDirectory=/home/ubuntu
ExecStartPre=/home/ubuntu/cluster.sh
ExecStart=/usr/local/bin/docker-compose -f /home/ubuntu/docker-compose.backup.yml up -d
ExecStop=/usr/local/bin/docker-compose -f /home/ubuntu/docker-compose.backup.yml down
[Install]
WantedBy=multi-user.target

Enable the services on system boot

$ sudo systemctl enable peer.service
$ sudo systemctl enable backup_peer.service
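
If systemd does not pick up the newly created unit files, reload its configuration first:

$ sudo systemctl daemon-reload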

Change DB Directory Name

In the services you just created, besides the basic docker-compose up/down commands, you also referenced a bash script, cluster.sh. This script renames the DB directory based on the node's public IP, since loopchain names the database directory after the peer's IP address and port.

#!/bin/bash

# Rename the loopchain DB directory to match this node's current public IP
DBDIR="/home/ubuntu/data/loopchain"
MYIP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
ASISNAME=$(ls -t "${DBDIR}/.storage" | head -1)
TOBENAME="db_${MYIP}:7100/_icon_dex"

if [ "$ASISNAME" == "$TOBENAME" ]; then
    echo "Match"
else
    if [ ! -d "${DBDIR}/.storage" ]; then
        mkdir -p "${DBDIR}/.storage/${TOBENAME}"
    else
        mv "${DBDIR}/.storage/${ASISNAME}" "${DBDIR}/.storage/${TOBENAME}"
    fi
fi

Add executable permission to the file

$ sudo chmod +x cluster.sh
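
You can run the script once by hand to confirm it behaves as expected; the directory it manages is the .storage path used above:

$ ./cluster.sh
$ ls /home/ubuntu/data/loopchain/.storage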

AWS CLI Configuration

  1. Log in to your AWS Management Console.
  2. Click on your user name at the top right of the page.
  3. Click on the Security Credentials link from the drop-down menu.
  4. Find the Access Credentials section, and copy the latest Access Key ID.
  5. Click on the Show link in the same row, and copy the Secret Access Key.

$ sudo apt update
$ sudo apt install awscli
# change to root, this is necessary (the awseip resource agent runs the AWS CLI as root)
$ sudo su -
$ aws configure

For the region name, use the region your instances are in, e.g. `us-east-2`.
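
While still root, you can verify that the credentials work and look up the allocation IDs of your Elastic IPs, which you will need for the awseip resources below (the region is assumed to match the one used above):

# list the account's EIPs; note the AllocationId values (eipalloc-...)
$ aws ec2 describe-addresses --region us-east-2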

Start Cluster

Set up a password for the `hacluster` user first (both servers)

$ sudo passwd hacluster

Run the Pacemaker GUI daemon (pcsd) on both servers

$ sudo systemctl start pcsd
$ sudo systemctl enable pcsd

Configure the cluster

# authenticate the user first
$ sudo pcs cluster auth az-a-hb az-b-hb
$ sudo pcs cluster setup --name peer_cluster az-a-hb az-b-hb --transport udpu

Start the cluster

$ sudo pcs cluster start --all --wait=60
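
Before defining resources, it is worth confirming that the two nodes see each other; corosync-cfgtool shows the ring status and pcs shows the corosync membership:

# check the corosync ring status (run on either node)
$ sudo corosync-cfgtool -s
# check that both nodes have joined the cluster
$ sudo pcs status corosync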

Cluster Resources

Create PCS config (one server)

$ sudo pcs cluster cib tmp-cib.xml
$ sudo cp tmp-cib.xml tmp-cib.xml.deltasrc
$ sudo pcs -f tmp-cib.xml property set stonith-enabled=false

Create EIP agent (one server)

$ sudo pcs -f tmp-cib.xml resource create awseip-peer ocf:heartbeat:awseip \
     allocation_id={your EIP allocation ID} elastic_ip={your elastic IP} \
     op migrate_from interval=0s timeout=30s migrate_to interval=0s timeout=30s \
     monitor interval=20s timeout=30s start interval=0s timeout=30s \
     stop interval=0s timeout=30s validate interval=0s timeout=10s

Enable Peer Service Resource

$ sudo pcs -f tmp-cib.xml resource create peerservice systemd:peer \
     op monitor interval=60 timeout=100 start interval=0s timeout=100 stop interval=0s timeout=100

Backup Elastic IP

$ sudo pcs -f tmp-cib.xml resource create awseip-backup ocf:heartbeat:awseip \
     allocation_id={your backup EIP allocation ID} elastic_ip={your backup elastic IP} \
     op migrate_from interval=0s timeout=30s migrate_to interval=0s timeout=30s \
     monitor interval=20s timeout=30s start interval=0s timeout=30s \
     stop interval=0s timeout=30s validate interval=0s timeout=10s

Backup Peer Resource

$ sudo pcs -f tmp-cib.xml resource create backupservice systemd:backup_peer \
     op monitor interval=60 timeout=100 start interval=0s timeout=100 stop interval=0s timeout=100

Peer Group

$ sudo pcs -f tmp-cib.xml resource group add Peer awseip-peer peerservice

Backup Group

$ sudo pcs -f tmp-cib.xml resource group add Backup awseip-backup backupservice

Add Colocation

$ sudo pcs -f tmp-cib.xml constraint colocation add Backup with Peer -INFINITY id=colocation-Backup-Peer--INFINITY
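
The -INFINITY score means the Backup group may never run on the node where the Peer group is running, so the backup peer is always brought up on the other server. You should be able to review the staged constraints before pushing the configuration (pcs reads the staged file via -f):

$ sudo pcs -f tmp-cib.xml constraint show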

Apply Configuration

$ sudo pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.xml.deltasrc

Check the status of your cluster; if all is well, the Peer and Backup resource groups should be started without errors.
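
A quick way to verify (the full list of equivalent crmsh/pcs commands follows below):

$ sudo pcs status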

Common Cluster Commands

Show cluster configuration
# crmsh
crm configure show
# pcs
pcs config

Show cluster status
# crmsh
crm status
# pcs
pcs status

Clean up resource
# crmsh
crm resource cleanup yourresource
# pcs
pcs resource cleanup yourresource

Put node on standby
# crmsh
crm node standby nodename
# pcs
pcs cluster standby nodename

Put node back online after standby
# crmsh
crm node online nodename
# pcs
pcs cluster unstandby nodename

Stonith
# crmsh
crm configure property stonith-enabled=false
# pcs
pcs property set stonith-enabled=false

List all resource agent in classes
# crmsh
crm ra classes
# pcs
pcs resource standards

List all available resource agents
# crmsh
crm ra list ocf
crm ra list lsb
crm ra list service
crm ra list stonith

# pcs
pcs resource agents ocf
pcs resource agents lsb
pcs resource agents service
pcs resource agents stonith
pcs resource agents

You can add an additional filter, like so
# crmsh
crm ra list ocf pacemaker
# pcs
pcs resource agents ocf:pacemaker

Describe resource agent
# crmsh
crm ra meta IPaddr2
# pcs
pcs resource describe IPaddr2
# crmsh
crm ra meta ocf:heartbeat:IPaddr2
# pcs
pcs resource describe ocf:heartbeat:IPaddr2

Create resource
# crmsh
crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
       params ip=yourip cidr_netmask=32 \
       op monitor interval=30s 
# pcs
pcs resource create ClusterIP IPaddr2 ip=yourip cidr_netmask=32

Show configuration
# crmsh
crm configure show
# pcs
pcs resource show

Enable/Start resource
# crmsh
crm resource start yourresource
# pcs
pcs resource enable yourresource

Stop resource
# crmsh
crm resource stop yourresource
# pcs
pcs resource disable yourresource

Delete resource
# crmsh
crm configure delete yourresource
# pcs
pcs resource delete yourresource

Edit resource configuration
# crmsh
crm configure edit ClusterIP

Set operation defaults
# crmsh
crm configure op_defaults timeout=240s
# pcs
pcs resource op defaults timeout=240s

Colocation
# crmsh
crm configure colocation website-with-ip INFINITY: WebSite ClusterIP
# pcs
pcs constraint colocation add ClusterIP with WebSite INFINITY

Ordering
# crmsh
crm configure order apache-after-ip mandatory: ClusterIP WebSite
# pcs
pcs constraint order ClusterIP then WebSite
