Deploying Hyperledger Fabric on AWS using Ansible

Hyperledger Fabric is a modular blockchain architecture that lets developers build applications for their business use cases. The use cases of this framework are diverse and not limited to ICOs or cryptocurrencies. However, many people run into issues while deploying Fabric because of the complexity of its architecture.

One of the reasons is that Fabric relies heavily on Docker, and many developers who want to implement this architecture (myself included) have little or no knowledge of it.

Another reason is that Hyperledger Fabric is a permissioned blockchain, so you won't find many showcase applications built on top of it, unlike Ethereum, which is a public blockchain.

The purpose of this article is not to explain the basic architecture of Hyperledger Fabric but rather to focus on the deployment phase, because this is where most people get stuck.

In this article series I will cover the following topics:

1- Deploying Hyperledger Fabric on multiple AWS servers

2- Integrating a client like the Node SDK with the existing network so that you can build a web application on top of that SDK


Prerequisites :

1- You must have basic knowledge of the Hyperledger Fabric architecture and how it works.

2- You must have an AWS account.

3- You should have an Ubuntu machine.

4- Git should be installed on your local system. If you haven't installed it, you can do so with the command below:

sudo apt install git

Setting up Fabric on AWS :

For deploying on AWS, we will use Ansible scripts. Ansible is an automation tool that performs a set of predefined tasks on particular servers: instead of you configuring each server manually, it runs commands against the servers defined in its configuration file.

For Ansible to work, we need to configure it on our local machine, which is normally called the Ansible controller.
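To make the "predefined tasks against defined servers" idea concrete, here is a hypothetical static inventory file. The host names match the defaults this setup will generate later; the cello playbooks build their own inventory, so this file is only to illustrate the concept.

```shell
# A hypothetical static Ansible inventory: Ansible runs each task against
# every host listed in a group. (The cello playbooks construct their
# inventory for you; this file is purely illustrative.)
cat > /tmp/inventory.ini <<'EOF'
[fabric]
fabric001
fabric002
fabric003
EOF
# Every non-group line is a host Ansible would target:
grep -v '^\[' /tmp/inventory.ini
```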

Configuring Ansible on the Local Machine :

Although the Ansible configuration is pretty straightforward and well documented in the hyperledger-cello docs, I will write down all the commands here.

First, clone the official hyperledger-cello repository by typing the command below.

git clone

Now, to install Ansible on your local machine, run the commands below. They will install all the packages necessary for Ansible to work.

sudo apt-get update
sudo apt-get install python-dev python-pip libssl-dev libffi-dev -y
sudo pip install --upgrade pip
sudo pip install 'ansible>='

The last thing you need to install on your system is the cloud-dependent packages. If you are working with AWS you need to install boto, and if you are working with OpenStack you need to install OpenStack Shade. Since we are focusing on AWS deployment, we will install the AWS-dependent packages on our local machine. Run the commands below to install boto; it can be installed either from the apt package manager or from pip.

sudo pip install -U boto
sudo pip install -U boto3

Here the -U flag stands for upgrade, so if you have already installed these packages, they will be upgraded to the latest versions.

Now navigate to the ansible directory in cello: cello/src/agent/ansible

I will try to explain the purpose of the main AWS-related directories so that you can easily configure them for your own use cases and scenarios.

The main ansible directory contains several configuration files, such as aws.yml and azure.yml, specific to their particular environments. You shouldn't need to change these configuration files.

If you take a look at the aws.yml file, it contains the instructions for which tasks need to run in order to set up Fabric properly.

Another AWS-related configuration file is aws.yml in the vars directory, which needs a little alteration. Let's take a look at that configuration file.

# AWS keys will be used to provision EC2 instances on AWS Cloud
auth: {
  auth_url: "",
  # This should be your AWS Access Key ID
  username: "YOUR_AWS_ACCESS_KEY_ID",
  # This should be your AWS Secret Access Key;
  # it can be passed as part of the cmd line when running the playbook
  password: "{{ password | default(lookup('env', 'AWS_SECRET_KEY')) }}"
}

# These variables define AWS cloud provision attributes
cluster: {
  region_name: "us-east-1",     # TODO: dynamic fetch
  availability_zone: "",        # TODO: dynamic fetch based on region
  security_group: "Fabric",
  target_os: "ubuntu",
  image_name: "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*",
  image_id: "ami-d15a75c7",
  flavor_name: "t2.medium",     # t2.medium is big enough for Fabric
  ssh_user: "ubuntu",
  validate_certs: True,
  private_net_name: "demonet",
  public_key_file: "/home/mobaid/.ssh/",
  private_key_file: "/home/mobaid/.ssh/id_rsa",
  ssh_key_name: "finalmvp",
  # This variable indicates which IP should be used; the only valid values
  # are private_ip or public_ip
  node_ip: "public_ip",

  container_network: {
    Network: "",
    SubnetLen: 24,
    SubnetMin: "",
    SubnetMax: "",
    Backend: {
      Type: "udp",
      port: 8285
    }
  },

  service_ip_range: "",
  dns_service_ip: "",

  # This section defines preallocated IP addresses for each node; if there
  # are no preallocated IPs, leave it blank
  node_ips: [ ],

  # Fabric network node names are expected to follow a clear pattern; this
  # defines the prefix for the node names.
  name_prefix: "fabric",
  domain: "fabricnet",

  # stack_size determines how many virtual or physical machines we will have;
  # each machine will be named ${name_prefix}001 to ${name_prefix}${stack_size}
  stack_size: 3,

  etcdnodes: ["fabric001", "fabric002", "fabric003"],
  builders: ["fabric001"],

  flannel_repo: "",
  etcd_repo: "",
  k8s_repo: "",
  go_ver: "1.8.3",

  # If a volume is to be used, specify a size in GB; make the volume size 0
  # if you do not wish to use a volume from your cloud
  volume_size: 8,

  # Cloud block device name presented on virtual machines.
  block_device_name: "/dev/vdb"
}

First, you need to change username in the auth section. Generate an AWS Access Key ID and AWS Secret Access Key from your AWS account, then replace the username value with your Access Key ID. As for your secret access key, save it somewhere safe; you will need it later.

Cluster Section : The cluster section defines your region name and security group. When you run the Ansible script, it will create clusters in that specific region and create a security group named Fabric. If you want to change the security group name, you can do so in this file.

Target_os Section : This section defines which OS should be installed on your Amazon clusters. One thing you need to make sure of is that your AWS account is capable of hosting t2.medium instances. Free AWS accounts are normally limited to t2.micro instances, which are not enough for running a Fabric network. I will explain the reason later in this article when I come to the Fabric layout.

The next section is related to the public key and private key. Ansible will store the public and private keys in your local .ssh directory so that later on you can SSH into your Ubuntu instances from your terminal. One thing you need to take care of is ssh_key_name. You should generate and download your SSH key pair from your AWS account and replace ssh_key_name with your key pair's name. You will need to provide that SSH key path when logging into the Ubuntu instances.
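If you prefer to generate the local key pair yourself, a sketch is below. The output directory here is a throwaway temp dir purely for illustration; in practice, point -f at the paths you set in public_key_file and private_key_file.

```shell
# Generate a passphrase-less RSA key pair like the one vars/aws.yml points
# at. Writing to a temp dir so nothing real is overwritten; use your own
# ~/.ssh paths in practice.
keydir="$(mktemp -d)"
ssh-keygen -t rsa -b 2048 -N "" -f "$keydir/id_rsa" -q
ls "$keydir"
```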

Stack_Size : stack_size defines the number of servers that you want in your Fabric architecture. By default Ansible uses 3 servers, but you can increase or decrease it as per your requirements.
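The machine names follow the ${name_prefix}NNN pattern mentioned in the config comments; a quick sketch of how stack_size expands into names:

```shell
# With name_prefix "fabric" and stack_size 3, the provisioned machines are
# fabric001, fabric002, and fabric003:
name_prefix="fabric"
stack_size=3
for i in $(seq 1 "$stack_size"); do
  printf '%s%03d\n' "$name_prefix" "$i"
done
```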

Configuring Fabric Layout:

In this section we will configure the Fabric layout, which tells Ansible how many organizations, peers, and ordering services will be on our servers and in what combination. By default, Ansible uses this configuration file to deploy Fabric on AWS.

Let's take a look at this file.

---
# The url to the fabric source repository

# The gerrit patch set reference, should be automatically set by gerrit
# GERRIT_REFSPEC: "refs/changes/23/11523/3" # 1.0.0
# GERRIT_REFSPEC: "refs/changes/47/12047/3" # 1.0.1
GERRIT_REFSPEC: "refs/changes/13/13113/1"

# This variable defines fabric network attributes
fabric: {

  # The user to connect to the server
  ssh_user: "ubuntu",

  # options are "goleveldb", "CouchDB"; default is goleveldb
  peer_db: "CouchDB",
  tls: false,

  # The following section defines how the fabric network is going to be made up
  # cas indicates certificate authority containers
  # peers indicates peer containers
  # orderers indicates orderer containers
  # kafka indicates kafka containers
  # all names must be in lower case. Numeric characters cannot be used to start
  # or end a name. Character dot (.) can only be used in names of peers and orderers.
  network: {
    fabric001: {
      cas: ["ca1st.orga"],
      peers: ["anchor@peer1st.orga", "anchor@peer1st.orgb"],
      orderers: ["orderer1st.orgc", "orderer1st.orgd"],
      zookeepers: ["zookeeper1st"],
      kafkas: ["kafka1st"]
    },
    fabric002: {
      cas: ["ca1st.orgb"],
      peers: ["worker@peer2nd.orga", "worker@peer2nd.orgb"],
      orderers: ["orderer2nd.orgc", "orderer2nd.orgd"],
      zookeepers: ["zookeeper2nd"],
      kafkas: ["kafka2nd"]
    },
    fabric003: {
      cas: ["ca1st.orgc", "ca1st.orgd"],
      peers: ["worker@peer3rd.orga", "worker@peer3rd.orgb"],
      orderers: [],
      zookeepers: ["zookeeper3rd"],
      kafkas: ["kafka3rd"]
    }
  },

  baseimage_tag: "1.0.2",
  ca: { tag: "1.0.2", admin: "admin", adminpw: "adminpw" }
}

This layout file tells Ansible to deploy 4 organizations and their peers on three Ubuntu servers, with the peer database set to CouchDB. Out of the 4 organizations, orgc and orgd act as the ordering service, while orga and orgb are the main participants involved in the transfer of assets.

Since there are multiple orderers, we need Kafka and ZooKeeper servers for the consensus mechanism. In this layout file we define 3 Kafka and ZooKeeper servers, which is enough for this network. For a larger network, however, you should properly calculate the sizing of these Kafka servers. You can define as many Kafka servers as you like, but make sure it isn't overkill for your application.

Earlier in this article, I mentioned that a t2.micro instance won't be enough to run this Fabric layout. The reason is that ZooKeeper and Kafka each need at least 1 GB of RAM to work properly, which a t2.micro instance is unable to provide. If you run on a micro instance, you will probably end up getting a memory corruption error in the Kafka containers.
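Once you are on an instance, you can sanity-check the available memory before blaming Kafka. This is a generic Linux check, not anything Fabric-specific; a t2.medium should report roughly 4 GB.

```shell
# Print total RAM in GB from /proc/meminfo; for this layout you want
# comfortably more than 1 GB per Kafka/ZooKeeper pair.
awk '/MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo
```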

The rest of the contents of the file are self-explanatory. Let me know if you have any questions regarding this configuration file.

Other Important Files:

ansible/role/deploy_compose : This folder contains all the necessary code for the Fabric setup, from generating the channel artifacts to instantiating the chaincode.

Chaincode : If you want to change the chaincode, navigate to the chaincode directory and paste your chaincode into the file firstcode.go.

Endorsement Policy : If you want to change the endorsement policy, navigate to dochannel.j2 and change the policy as per your business needs.

After modifying all these things, run the script from the ansible directory as follows.

Export the AWS secret access key which you saved earlier:

export AWS_SECRET_KEY="your secret key of your aws account"
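The playbook picks the secret up through lookup('env', 'AWS_SECRET_KEY'), so the variable must be exported, not just set. A quick sketch of checking that child processes can see it (the key value below is a placeholder, not a real secret):

```shell
# Placeholder secret, exported so that subprocesses (like ansible-playbook)
# inherit it.
export AWS_SECRET_KEY="example-secret-not-real"
# A child shell can read it only because of the export above:
sh -c 'test -n "$AWS_SECRET_KEY" && echo "AWS_SECRET_KEY is set"'
```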

Next, run the ansible command:

ansible-playbook -e "mode=apply" aws.yml

This command will take some time, depending on your internet connection.

When the command finishes properly, Ansible prints a recap of the completed tasks with no failures.

Invoke Queries:

Now, to test whether our deployed Fabric servers are working properly, we will invoke some queries and see whether every peer gets the same results.

Open the EC2 dashboard in your AWS console; you should see all your running instances there.

Now log into these instances using your SSH key; you should see a number of Docker containers running, depending on your Fabric layout.

Since no client is integrated with this network yet, you will need to invoke queries from the peer containers. Enter a peer container by typing the command below:

docker exec -it <container name of peer> bash

To list the channels the peer has joined:

peer channel list

You should get the channel name firstchannel, since Ansible creates a channel with that name by default.

To invoke a transaction from the peer terminal, run the command below:

peer chaincode invoke -o <orderer container name> -C <channel_name> -n <chaincode name> -c '{"Args":["invoke","a","b","10"]}'

This will invoke a transaction and transfer 10 from a to b.
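What the example chaincode does with that invoke is simple balance arithmetic. A sketch with made-up starting balances (the real initial values of a and b depend on how the chaincode was instantiated):

```shell
# Conceptually, invoke("invoke","a","b","10") moves 10 units from a to b:
a=100; b=200; amount=10
a=$((a - amount))
b=$((b + amount))
echo "a=$a b=$b"   # a=90 b=210
```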

To check the balance of a, run this command:

peer chaincode query -C <channel name> -n <chaincode name> -c '{"Args":["query","a"]}'

You can check the result on the other servers in a similar manner. If you get the same result, it means your Fabric network is working properly across multiple servers and all peers and ordering services are connected to each other.

In the next article, we will integrate a client like the Node SDK with this existing network and create public APIs so that you can run basic blockchain commands from anywhere just by calling those APIs.

Khurram Adhami

CTO at

Posted on March 20, 2018 in blog


Humza Hamid

About the Author

BEL 3.0 was established by a diverse range of individuals united by the greater vision that Blockchain and peer-to-peer technologies can profoundly solve real-world problems.
