How to conduct canary deployment using HAProxy
10 Mar 2023


A canary deployment is a strategy for releasing applications and services to a subset of users incrementally. The infrastructure in a target environment is updated in small phases (e.g., 5%, 30%, 75%, then 100%). Because of this fine-grained control, a canary release carries the lowest risk of all deployment strategies. Canary deployments allow organizations to make staged releases, test applications with real users and use cases, collect feedback, and compare different versions of a service. Organizations often prefer them over blue-green deployments because they are cheaper and do not require two full production environments, and rolling back to a previous version of the application can be done safely and quickly.

Canary deployments with HAProxy

HAProxy (High Availability Proxy) is one of the most popular free and open-source TCP/HTTP load balancers, offering high availability and proxy functionality. It is commonly used to enhance the reliability and performance of server environments by distributing workloads across multiple servers. It is known for its efficiency and scalability, as it can handle many concurrent connections with minimal resource utilization. In this blog, I will explain how to implement a canary release with HAProxy.


Prerequisites

    • Python 2 (version 2.7) or Python 3 (version 3.5 or higher)
    • Ansible
    • HAProxy

Steps to use HAProxy for canary deployments

We will configure the load balancer to work at layer 7. An HAProxy configuration file consists of four main sections: global, defaults, frontend, and backend.
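For orientation, a minimal global and defaults section might look like the following sketch (the log target, connection limit, and timeout values are illustrative, not taken from the original setup):

```
global
   log /dev/log local0
   maxconn 2000

defaults
   mode http
   timeout connect 5s
   timeout client  30s
   timeout server  30s
```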

Below are a few blocks that can be added to the HAProxy configuration file template. We will use Ansible roles to generate the haproxy.cfg file from the template and to restart the HAProxy service.

frontend http-in 
   mode http 
   bind *:80 
   option httplog 
   acl is_cookie_hack_1 hdr_sub(cookie) access_svr=svrblue 
   acl is_cookie_hack_2 hdr_sub(cookie) access_svr=svrgreen 
   use_backend blue_host_http if is_cookie_hack_1 
   use_backend green_host_http if is_cookie_hack_2 
   default_backend webservers_http 

In the frontend block, the acl rules inspect the Cookie header of each incoming request. Based on the value of the access_svr cookie, HAProxy forwards the request to a specific backend. If no acl condition matches, the request is sent to the default backend.
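To see the cookie routing in action, a request can be pinned to one backend by sending the cookie explicitly (the hostname is illustrative and assumes HAProxy is listening locally on port 80):

```
# Pin this request to the blue backend via the access_svr cookie
curl -b "access_svr=svrblue" http://localhost/

# Pin this request to the green backend
curl -b "access_svr=svrgreen" http://localhost/
```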

backend webservers_http 
   mode http 
   cookie access_svr insert indirect nocache httponly 
   server blue_host_http localhost:9090 cookie svrblue weight {{ blue_traffic | int }} check 
   server green_host_http localhost:8080 cookie svrgreen weight {{ green_traffic | int }} check

The weight parameter in this backend block accepts values from 0 to 256. Traffic is distributed between the two servers in proportion to their weights, so a weight of 0 sends no traffic to that server.
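As a sketch of the arithmetic, the share of traffic each server receives is its weight divided by the sum of all weights in the backend (server names and weights below are illustrative):

```python
# Sketch: how HAProxy's weight parameter translates into a traffic split.
# Each server's share is its weight divided by the sum of all weights.

def traffic_shares(weights):
    """Return the fraction of requests each server receives, keyed by server name."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A 90/10 canary split: most traffic stays on blue while green is validated.
shares = traffic_shares({"blue_host_http": 90, "green_host_http": 10})
print(shares)  # {'blue_host_http': 0.9, 'green_host_http': 0.1}
```

Setting green_traffic to 0 and blue_traffic to 100 therefore keeps all traffic on blue, and the split can be moved gradually toward green.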

listen blue_host_http 
   bind localhost:9090 
   balance roundrobin 
   option httpchk GET /wwwcheck.html 
   http-check expect status 200 
   server blue_app {{ blue_host }}:{{ port }} check fall 3 rise 2 inter 1597 

listen green_host_http 
   bind localhost:8080 
   balance roundrobin 
   option httpchk GET /wwwcheck.html 
   http-check expect status 200 
   server green_app {{ green_host }}:{{ port }} check fall 3 rise 2 inter 1597 

A listen block is a combination of a frontend and a backend in a single section.

If an acl condition in the frontend block matches, traffic goes directly to the backend part of the corresponding listen block. If no acl condition matches, the request first goes to the webservers_http backend, whose servers point at the addresses these listen blocks bind to, so the frontend part of the listen blocks receives the traffic from there.

We will use a simple Ansible role and playbook to generate the haproxy.cfg file from the template.

Ansible Galaxy is a community-driven platform for sharing, managing, and discovering Ansible roles. These roles are pre-written scripts that are used to automate tasks in an application deployment. It allows users to easily find and download roles that other users create, as well as upload and share their own roles. The roles may range from simple tasks, such as installing and configuring software, to complex multi-node deployments. Ansible Galaxy also lets users rate and review roles, so that other users can find useful and high-quality roles.

When a role is created, the default directory structure contains the following:

    defaults/
    files/
    handlers/
    meta/
    tasks/
    templates/
    tests/
    vars/
    README.md

Create a role using the following Ansible Galaxy command:

ansible-galaxy init <role_name>

In the templates folder under the <role_name> directory, create a template file named haproxy.cfg.j2 and add the HAProxy sections shown above.

Now, create tasks under the tasks folder of the role: one task to render the template to /etc/haproxy.cfg and another task to reload the HAProxy service.

# tasks file for haproxy service

- name: copy haproxy file
  template:
    src: templates/haproxy.cfg.j2
    dest: /etc/haproxy.cfg
    owner: root
    group: root
  register: haproxy_cfg

- name: Reload haproxy service
  service:
    name: haproxy
    state: restarted
  when: haproxy_cfg.changed

We will restart the HAProxy service only if there is a change in the configuration file.

Now create the Ansible variables that are referenced in the template file.

Create a folder for variables and, inside it, two separate files for the blue and green versions. Add the variables blue_host, green_host, blue_traffic, and green_traffic, and give them default values.
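As a sketch, the two variable files might look like this (the file names, hostnames, and default weights are illustrative):

```yaml
# blue.yml
blue_host: 10.0.0.11
blue_traffic: 100

# green.yml
green_host: 10.0.0.12
green_traffic: 0
```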

Now create a simple playbook to invoke the role and variables:

- hosts: localhost 
  vars_files: 
    - <path_to_variables_file> 
  roles: 
    - <role_name>

We can now run the ansible-playbook command and override the default values of the variables by passing the -e option.

ansible-playbook <playbook_name> -e blue_traffic='100' -e green_traffic='0'

Every time we want to shift traffic, we change the weight values and rerun the command.
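A gradual rollout might then look like the following sequence of runs (the percentages and pacing are illustrative; in practice you would validate the green version between steps):

```
ansible-playbook <playbook_name> -e blue_traffic='90' -e green_traffic='10'
ansible-playbook <playbook_name> -e blue_traffic='50' -e green_traffic='50'
ansible-playbook <playbook_name> -e blue_traffic='0' -e green_traffic='100'
```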

This is it! This is how you can conduct canary deployments using HAProxy. You can reduce the risk of rolling out a new application version to all users at once and ensure that it is stable and performs well before a full rollout. Try it out yourself and share your experience in the comments below.
