Setting up Hyperledger Besu in AWS

This post was written by PegaSys Protocol Engineer, Joshua Fernandes. Learn more about setting up monitoring for Hyperledger Besu in AWS here.

General Concepts: 

If you are already familiar with AWS, please feel free to skip this section.

VPC: A virtual private network specific to you inside AWS’s cloud infrastructure. It's a cheaper and more efficient alternative to maintaining your own data center, where resources are created on-demand (refer to the blue box below). Typically a VPC contains multiple subnets (private or public), and the default route table allows communication between subnets (refer to the 4 yellow boxes below).



Subnet: A part of your VPC that contains resources sharing a common subnet mask; a subnet exists within a single Availability Zone (physical data center). We make use of this feature when we set up our Ethereum network.

Instances in a Private Subnet only get a private IP (IPs labelled pvt in the diagram above), and any internet traffic is routed through the subnet’s NAT gateway (not shown). Instances in a Public Subnet get a private IP as well as a public IP (IPs labelled pub in the diagram above). The private IPs are used for inter-subnet communication within the VPC.

Nodes 1-5 above can communicate with each other without any extra configuration and do not require the public internet. 

We recommend using private IPs for Besu: traffic is routed with fewer hops than going via the public internet, so it is consequently faster and more secure.

AWS Walkthrough 1 - Private Network - Ethash with 3 nodes

Select a region and VPC that you would like to use. For this walkthrough, let us assume the VPC has two subnets - one private and one public. This is based on the following quickstart here.

We set up three nodes here, but the process to add more nodes is exactly the same as creating NODE2 & NODE3.

This is an overview of what is being built:

1. With your VPC selected, set up a security group for communication that all instances will use:

Additional Inbound rules:

Add a rule for ssh on port 22 from your IP, i.e. TCP 22 w.x.y.z/32

Note: 30303 above must be opened for TCP & UDP as 2 separate rules

Outbound: All traffic  0-65535 0.0.0.0/0

If you intend to connect your smart contract / IDE / DApp to a node that is outside of AWS, please add an additional rule like so:

Custom TCP Rule TCP 8545 0.0.0.0/0 json-rpc-http-public

As before, we recommend locking this to a CIDR range or single IP rather than 0.0.0.0/0
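If you prefer to script the rules rather than click through the console, the same set can be expressed as AWS CLI calls. The sketch below only prints the commands (a dry run) so you can review them first; the security group ID and CIDRs are placeholders you must substitute.

```shell
# Security-group rules from step 1 as AWS CLI calls (dry run: printed, not
# executed). SG_ID, VPC_CIDR and MY_IP are placeholders -- use your own values.
SG_ID="sg-0123456789abcdef0"   # hypothetical security group ID
VPC_CIDR="10.0.0.0/16"         # allow p2p traffic from within the VPC
MY_IP="w.x.y.z/32"             # your workstation IP for ssh

for cmd in \
  "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 30303 --cidr $VPC_CIDR" \
  "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol udp --port 30303 --cidr $VPC_CIDR" \
  "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr $MY_IP"
do
  echo "$cmd"   # replace echo with eval "$cmd" (or run each by hand) when ready
done
```

Note the two separate 30303 rules for TCP and UDP, matching the console setup above.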

2. Create 3 instances using the AmazonLinux2 AMI, the Ubuntu 18.04 AMI, or equivalent. We will use the Ubuntu AMI for the rest of this tutorial. For this example, we are using an instance type of t3.medium and setting the volume size to 50GB. Select the security group from step 1.

3. ssh into each box and install Java 11

  sudo apt-get update && sudo apt-get install openjdk-11-jdk

Note: In a real production setup, use a large second volume that will persist if the instance/AZ fails. 

4. Download and extract Besu (tar.gz format) from the solutions page 

wget https://bintray.com/api/ui/download/hyperledger-org/besu-repo/besu-1.3.6.tar.gz
sudo mkdir -p /opt/besu/
sudo chown -R $USER:$USER /opt/besu/
tar -C /opt/besu/ -xvf besu-1.3.6.tar.gz

5. Create the genesis file by following the steps listed on the docs page. Once done, scp it to all the nodes and put it in the /opt/besu/ folder that you just created. Additionally create a ‘data’ directory under the /opt/besu directory so the structure resembles:

/opt/besu/
  |- privateNetworkGenesis.json
  |- data
  |- besu-1.3.6/
        |- bin
        |- lib
        |- LICENSE
        |- GettingStartedBinaries.md
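As a concrete illustration, a minimal ethash genesis for a private network looks roughly like the sketch below. The chainId, difficulty, and pre-funded account are example values taken from the style of the Besu docs, not required settings; follow the docs page for the full set of options. The file is written locally here, then scp it into place on each node.

```shell
# Illustrative minimal ethash genesis (example values -- adjust chainId,
# difficulty and the funded account for your deployment). Copy the result
# to /opt/besu/privateNetworkGenesis.json on every node afterwards.
cat > privateNetworkGenesis.json <<'EOF'
{
  "config": {
    "chainId": 1337,
    "ethash": {}
  },
  "nonce": "0x42",
  "gasLimit": "0x1000000",
  "difficulty": "0x10000",
  "alloc": {
    "fe3b557e8fb62b89f4916b721be55ceb828dbd73": {
      "balance": "0xad78ebc5ac6200000"
    }
  }
}
EOF
```

The funded account here matches the miner-coinbase address used in step 6, so mined rewards accrue to an account the genesis already knows about.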

6. ssh into the bootnode BOOTNODE1. Start Besu as a bootnode like so, replacing the p2p-host IP in the command below with your machine’s private IP:

cd /opt/besu/besu-1.3.6/
./bin/besu --data-path=/opt/besu/data/ \
  --genesis-file=/opt/besu/privateNetworkGenesis.json \
  --bootnodes \
  --miner-enabled --miner-coinbase=0xfe3b557e8fb62b89f4916b721be55ceb828dbd73 \
  --rpc-http-enabled --rpc-http-host=0.0.0.0 \
  --host-whitelist="*" --rpc-http-cors-origins="all" \
  --metrics-enabled --metrics-host=0.0.0.0 --metrics-port=9545 \
  --p2p-host=10.0.0.100 &

7. In the logs that follow, copy the bootnode’s enode URL to a text editor of your choice.

All future nodes that connect to your network first ‘discover’ the bootnode (using the enode specified) when they start up. They then communicate with the bootnode, get a list of all the peers it knows about, and proceed to connect to them in turn. In large networks, a few bootnodes are generally specified because each can only hold a certain number of nodes. When new nodes start up they connect to each bootnode in turn and repeat the process above till they reach peer capacity. If there is overlap, a node simply ignores the duplicate and continues.

For a production network, please use at least 2 bootnodes to provide redundancy should a single bootnode fail.
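Because step 6 backgrounds the Besu process, one convenient way to capture the enode URL is to redirect Besu's output to a log file when starting it (e.g. append `> /opt/besu/besu.log 2>&1` before the `&`) and then grep the URL out. The exact log line format is an assumption; a simulated line stands in for real output below, so verify against your own logs.

```shell
# Simulated Besu log line for illustration only -- in practice this file
# would be the redirected output of the besu process.
echo 'INFO | DefaultP2PNetwork | Enode URL enode://abc123@10.0.0.100:30303' > besu.log

# Pull the first enode URL out of the log.
ENODE=$(grep -o 'enode://[^ ]*' besu.log | head -n 1)
echo "$ENODE"
```

You can then paste `$ENODE` straight into the `--bootnodes` parameter in step 8.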

8. In another terminal session, ssh into the first node, NODE2. Start Besu like so, passing the enode URL of BOOTNODE1 to the bootnodes parameter and, as before, replacing the p2p-host IP in the command below with your machine’s private IP:

cd /opt/besu/besu-1.3.6/
./bin/besu --data-path=/opt/besu/data/ \
  --genesis-file=/opt/besu/privateNetworkGenesis.json \
  --bootnodes=enode://<bootnode_public_key>@10.0.0.100:30303 \
  --rpc-http-enabled --rpc-http-host=0.0.0.0 \
  --host-whitelist="*" --rpc-http-cors-origins="all" \
  --metrics-enabled --metrics-host=0.0.0.0 --metrics-port=9545 \
  --p2p-host=10.0.0.43 &
 

9. Repeat step 8 for the last node NODE3 and any more nodes you wish to add, passing the enode URL of BOOTNODE1 to the bootnodes parameter and, as before, replacing the p2p-host IP with each machine’s private IP.

10. Confirm the network is working:

curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://10.0.2.89:8545

11. The result confirms that the node you are connected to (NODE2) has two peers (BOOTNODE1 and NODE3):

{
  "jsonrpc" : "2.0",
  "id" : 1,
  "result" : "0x2"
}
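The JSON-RPC `result` is a hex quantity, so for scripting it helps to convert it to a decimal. A small sketch, using the response above as a canned string (in practice you would pipe the `curl` output in directly):

```shell
# Convert net_peerCount's hex result to a decimal peer count. RESPONSE is a
# captured example response; replace it with the live curl output.
RESPONSE='{"jsonrpc":"2.0","id":1,"result":"0x2"}'
PEERS=$(echo "$RESPONSE" | python3 -c "import sys, json; print(int(json.load(sys.stdin)['result'], 16))")
echo "Connected peers: $PEERS"
```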

12. Keep adding additional nodes as required ...

AWS Walkthrough 2 - Private Network - IBFT with 4 nodes

This setup is very similar to that of Walkthrough 1 above for the Ethash private network and is based on the following quickstart here.

This is an overview of what is being built:

Repeat steps 1 - 4 from Walkthrough 1, but spin up 4 nodes now instead of 3.

1. Create the genesis file and private & public keys by following the steps listed on the docs. Once done, scp the genesis file and the keys to the respective nodes and put them under the /opt/besu folder so the structure resembles:

/opt/besu/
  |- genesis.json
  |- keys
  |     |- key
  |     |- key.pub
  |- data
  |- besu-1.3.6/
        |- bin
        |- lib
        |- LICENSE
        |- GettingStartedBinaries.md

In this setup we already know the keys for the nodes, which makes it really easy to start things up. Pick a node from step 1 to be the bootnode (e.g. 10.0.0.100 in our case). Now we can form the bootnode enode and specify it to all nodes like so:

enode://<public_key>@<bootnode_ip>:30303

where: public_key is the contents of key.pub in the /opt/besu/keys directory you created

      bootnode_ip is the private IP of the instance

Which gives us: enode://<bootnode_public_key>@10.0.0.100:30303

This gives us a really easy way to keep things consistent as we deploy Besu across ‘n’ nodes. When the bootnode starts up, it realizes that the bootnode config specified is itself and proceeds as normal. All future nodes that connect to your network first ‘discover’ the bootnode (using the enode specified) when they start up. They then communicate with the bootnode, get a list of all the peers it knows about, and proceed to connect to them in turn. In large networks, a few bootnodes are generally specified because each can only hold a certain number of nodes. When new nodes start up they connect to each bootnode in turn and repeat the process above till they reach peer capacity. If there is overlap, a node simply ignores the duplicate and continues.

For a production network, please use at least 2 bootnodes to provide redundancy should a single bootnode fail.
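Since the keys exist before any node starts, the enode template above can be assembled with a couple of shell lines. The key value written below is a stand-in for illustration, not a real node key, and it assumes Besu's key.pub holds a 0x-prefixed hex public key (the enode URL omits the 0x prefix):

```shell
# Build the bootnode enode URL from its pre-generated key.pub.
# Dummy (truncated) key for illustration -- a real key.pub is much longer.
echo '0xeb5fe0417d5e59a8a52134bcf924a326bc86e0abf9083e3a2100a25b2bdc1cc41' > key.pub

BOOTNODE_IP="10.0.0.100"
PUB_KEY=$(sed 's/^0x//' key.pub)   # strip the 0x prefix if present
ENODE="enode://${PUB_KEY}@${BOOTNODE_IP}:30303"
echo "$ENODE"
```

The same `$ENODE` value is then passed to `--bootnodes` on every node, including the bootnode itself.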

2. ssh into the bootnode BOOTNODE1. Start Besu as a bootnode like so, replacing the p2p-host IP in the command below with your machine’s private IP:

cd /opt/besu/besu-1.3.6/
./bin/besu --data-path=/opt/besu/data/ \
  --genesis-file=/opt/besu/genesis.json \
  --node-private-key-file=/opt/besu/keys/key \
  --bootnodes=enode://<bootnode_public_key>@10.0.0.100:30303 \
  --rpc-http-enabled --rpc-http-host=0.0.0.0 \
  --host-whitelist="*" --rpc-http-api=ETH,NET,IBFT \
  --rpc-http-cors-origins="all" \
  --metrics-enabled --metrics-host=0.0.0.0 --metrics-port=9545 \
  --p2p-host=10.0.0.100 &

3. Once this node has started, ssh into every other instance and start Besu using the same command, i.e. replacing the p2p-host IP in the command below with that machine’s private IP:

cd /opt/besu/besu-1.3.6/
./bin/besu --data-path=/opt/besu/data/ \
  --genesis-file=/opt/besu/genesis.json \
  --node-private-key-file=/opt/besu/keys/key \
  --bootnodes=enode://<bootnode_public_key>@10.0.0.100:30303 \
  --rpc-http-enabled --rpc-http-host=0.0.0.0 \
  --host-whitelist="*" --rpc-http-api=ETH,NET,IBFT \
  --rpc-http-cors-origins="all" \
  --metrics-enabled --metrics-host=0.0.0.0 --metrics-port=9545 \
  --p2p-host=<W.X.Y.Z> &

4. Do the same for as many nodes as you want to enter the network. Please copy the genesis.json file to each node as before. You can create keys prior to startup and follow the same process; however, bear in mind that only the first 4 nodes (i.e. the validators) had their keys created before startup. For any new nodes beyond the 4 validators, please remove the parameter --node-private-key-file=/opt/besu/keys/key from the command above. Besu will automatically create keys for the new nodes.

5. Verify that the network is working:

curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://10.0.2.89:8545

The result confirms that the node you are connected to (NODE2) has three peers:

{
  "jsonrpc" : "2.0",
  "id" : 1,
  "result" : "0x3"
}

AWS Walkthrough 3 - Private Network - IBFT with 4 nodes using the Ansible Galaxy Role

This setup is very similar to that of Walkthrough 1 above for the Ethash private network and is based on the following quickstart here.

This is an overview of what is being built:

Repeat steps 1 - 3 from Walkthrough 1, but spin up 4 nodes now instead of 3.

1. Install the following packages and setup for ansible


sudo apt-get install -y python3 python3-setuptools python3-pip python3-dev python3-virtualenv python3-venv python3-apt python-apt
mkdir $HOME/setup && cd $HOME/setup
python3 -m venv env
source ./env/bin/activate
python3 -m pip install ansible boto3 boto requests
ansible-galaxy install --force pegasyseng.hyperledger_besu
# not required but highly recommended for monitoring system level metrics via prometheus & grafana
ansible-galaxy install --force undergreen.prometheus-node-exporter undergreen.prometheus-exporters-common
# config files dir for besu
sudo mkdir -p /etc/besu && sudo chown -R $USER:$USER /etc/besu

2. Create the genesis file and private & public keys by following the steps listed on the docs. Once done, scp the genesis file and the keys to the respective nodes and put them under the /etc/besu folder you just created so the structure resembles:

/etc/besu/
  |- genesis.json
  |- keys
  |     |- key
  |     |- key.pub

3. Create an ansible playbook file to use: $HOME/setup/besu.yml, and replace the values below to match your environment. The besu_host_ip is almost always the EC2 private IP. This is faster for nodes to communicate because traffic stays within the VPC and has lower latency.

You can choose to use the EC2 public IP as well - if you do, please ensure that the security group rules in step 1 are also updated to allow traffic from 0.0.0.0/0 rather than just within your VPC.

---
- hosts: localhost
  connection: local
  force_handlers: True

  roles:
  - role: pegasyseng.hyperledger_besu
    vars:
      besu_version: 1.3.6
      besu_network: custom
      besu_rpc_http_api: ["ETH","NET","WEB3","ADMIN","IBFT"]
      besu_bootnodes: ["enode://<bootnode_pubkey>@<bootnode_ip>:30303"]
      besu_genesis_path: "/etc/besu/genesis.json"
      besu_node_private_key_file: "/etc/besu/keys/key"
      besu_host_ip: <ec2_host_ip>
  # not required, but highly recommended to give you system metrics like disk space
  - { role: undergreen.prometheus-node-exporter, become: yes }

On a production setup we recommend mounting a second volume for data only, which will persist if the instance / AZ fails. Let's say this was mounted at ‘/data/’; then you would also add the var besu_data_dir: "/data" to the list of vars above.

In this setup we already know the keys for the nodes, which makes it really easy to start things up. Pick a node from step 1 to be the bootnode (e.g. 10.0.0.100 in our case). Now we can form the bootnode enode and specify it to all nodes like so:

enode://<public_key>@<bootnode_ip>:30303

where: public_key is the contents of key.pub in the /etc/besu/keys directory you created

      bootnode_ip is the private IP of the instance

Which gives us: enode://<bootnode_public_key>@10.0.0.100:30303

This gives us a really easy way to keep things consistent as we deploy Besu across ‘n’ nodes. When the bootnode starts up, it realizes that the bootnode config specified is itself and proceeds as normal. All future nodes that connect to your network first ‘discover’ the bootnode (using the enode specified) when they start up. They then communicate with the bootnode, get a list of all the peers it knows about, and proceed to connect to them in turn. In large networks, a few bootnodes are generally specified because each can only hold a certain number of nodes. When new nodes start up they connect to each bootnode in turn and repeat the process above till they reach peer capacity. If there is overlap, a node simply ignores the duplicate and continues.

For a production network, please use at least 2 bootnodes to provide redundancy should a single bootnode fail.

4. Run the role with ansible on the bootnode first

cd $HOME/setup
source env/bin/activate
ansible-playbook -v besu.yml -e ansible_python_interpreter=/usr/bin/python3

5. Once the bootnode has started, ssh into every other instance and run the playbook in the same way, first setting besu_host_ip in besu.yml to that machine’s private IP:

cd $HOME/setup
source env/bin/activate
ansible-playbook -v besu.yml

6. Do the same for as many nodes as you want to enter the network. Please copy the genesis.json file to each node as before. You can create keys prior to startup and follow the same process; however, bear in mind that only the first 4 nodes (i.e. the validators) had their keys created before startup. For any new nodes beyond the 4 validators, please remove the besu_node_private_key_file var from besu.yml. Besu will automatically create keys for the new nodes.

7. Verify that the network is working:

curl -X POST --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://10.0.2.89:8545

The result confirms that the node you are connected to (NODE2) has three peers:

{
  "jsonrpc" : "2.0",
  "id" : 1,
  "result" : "0x3"
}