Tutorial: Google Cloud HTTP(S) Load Balancing and NGINX Proxy


Google Cloud Load Balancer (GCLB) is a software-defined network load balancer available to all projects on Google Cloud Platform (GCP). Google Cloud HTTP(S) Load Balancing provides global load balancing for HTTP(S) requests destined for VM instances.

This tutorial demonstrates, step by step, how to create a cross-region HTTP(S) load balancer that forwards traffic to VM instances in two different regions.


Before starting with this guide, make sure you are able to use the Google Cloud Platform Console, the gcloud command-line tool, and SSH.

Step 1: Create VM Instance

A VM (Virtual Machine) is a Google Compute Engine instance hosted on Google’s infrastructure. It can run the public images for Linux and Windows Server that Google provides, as well as private custom images that you can create or import from your existing systems. You can also deploy Docker containers and much more.

Go to Compute Engine ➝ VM Instances ➝ Create Instance

We will create VM instances in two different regions and then configure them for incoming connections. For this example, we will use the US and EU regions.

  1. Instance names: “axfon-us” and “axfon-eu” (you may change these to names you prefer)
  2. Region: select us-central1 (Iowa) and Zone: select us-central1-a
  3. Machine type: we selected micro; it is the smallest one, which is appropriate for testing
  4. Boot disk: Ubuntu 18.10; choosing an LTS (Long Term Support) image is recommended. You may change it by clicking the Change button
  5. Firewall: leave it unchecked (the default value); we will configure it separately in the Firewall step
  6. Click Management, security, disks, networking, sole tenancy
  7. Network tags: “axfon-tag” (you may change this to a name you prefer)
  8. Optional: click Management and use a startup script to automatically install the NGINX web server while the VM instance is being created. You can also install it manually; see the startup script below these images
  9. Click Create and leave the other values at their defaults

Use Startup Script (Optional)

sudo apt-get update
sudo apt-get install nginx -y
echo '<!doctype html><html><body><h1>AXFON US SERVER</h1><p>Created from simple start up Script</p></body></html>' | sudo tee /var/www/html/index.html
sudo service nginx restart

Repeat the above steps for the second region. Create it with the same settings except for the Zone and Instance name. You should now see the VM instance list looking like the following:
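If you prefer the command line, the two instances can also be created with gcloud. This is a sketch under a few assumptions: the startup script above is saved locally as startup.sh, the EU zone europe-west1-b is used as the second region, and an LTS image family (ubuntu-1804-lts) stands in for the boot disk chosen in the console.

```shell
# Assumed names from the steps above; adjust to your preferences.
TAG=axfon-tag
STARTUP=startup.sh   # startup script saved locally (optional)

# US instance
gcloud compute instances create axfon-us \
  --zone=us-central1-a \
  --machine-type=f1-micro \
  --image-family=ubuntu-1804-lts \
  --image-project=ubuntu-os-cloud \
  --tags="$TAG" \
  --metadata-from-file=startup-script="$STARTUP"

# EU instance: same settings except the name and zone
gcloud compute instances create axfon-eu \
  --zone=europe-west1-b \
  --machine-type=f1-micro \
  --image-family=ubuntu-1804-lts \
  --image-project=ubuntu-os-cloud \
  --tags="$TAG" \
  --metadata-from-file=startup-script="$STARTUP"
```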

Step 2: Create Firewall

Go to VPC Network ➝ Firewall rules, then on the menu above click Create Firewall Rule

  1. Set the Firewall name (you may change this to a name you prefer)
  2. Targets: in the combo box, select Specified target tags
  3. Target tags: “axfon-tag”, the tag previously set on the instance’s network tags
  4. Source IP ranges: set to “0.0.0.0/0” (allow traffic from any address)
  5. Protocols and ports: check Specified protocols and ports, then check tcp and set the ports; for HTTP use 80, and for HTTPS or HTTP/2 use 443
  6. Click Create
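The same rule can be sketched with gcloud; the rule name below is an assumption, while the tag and ports follow the steps above:

```shell
# Allow HTTP and HTTPS from anywhere to instances carrying the tag
# set in Step 1. The rule name "axfon-allow-http-https" is assumed.
RULE=axfon-allow-http-https
gcloud compute firewall-rules create "$RULE" \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=axfon-tag
```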

Step 3: Create Instance Groups

Go to Compute Engine ➝ Instance Groups ➝ Create Instance Group

    1. Set the Group name (you may change this to a name you prefer)
    2. Location: select “Single zone”
    3. Region: choose the same region where the VM instance was created
    4. Group type: click “Unmanaged instance group”
    5. VM instances: in the combo box, choose the VM instance name

Repeat the above steps for the second group. Create it with the same settings except for the Region and Group name. You should now see the groups list looking like the following:
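The gcloud equivalent is a sketch along these lines; the group names are assumptions, and each unmanaged group lives in a single zone matching its instance:

```shell
# US group: create it, then add the US instance to it
gcloud compute instance-groups unmanaged create axfon-group-us \
  --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances axfon-group-us \
  --zone=us-central1-a \
  --instances=axfon-us

# EU group: same settings except the zone and group name
ZONE_EU=europe-west1-b   # assumed EU zone; use the zone of your EU instance
gcloud compute instance-groups unmanaged create axfon-group-eu \
  --zone="$ZONE_EU"
gcloud compute instance-groups unmanaged add-instances axfon-group-eu \
  --zone="$ZONE_EU" \
  --instances=axfon-eu
```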

Step 4: Reserve a Global IP Address

Go to VPC Network ➝ External IP Addresses ➝ On the menu above, click Reserve Static Address

  1. Set the Name (you may change this to a name you prefer)
  2. IP version: select IPv4
  3. Type: select Global
  4. Click Reserve
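With gcloud, reserving and inspecting the address looks like this sketch (the address name is an assumption):

```shell
# Reserve a global IPv4 address for the load balancer frontend.
ADDR=axfon-lb-ip   # assumed name
gcloud compute addresses create "$ADDR" \
  --ip-version=IPV4 \
  --global

# Show the reserved address
gcloud compute addresses describe "$ADDR" --global
```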

Step 5: Create Health Check

Go to Compute Engine ➝ Health Checks ➝ On the menu above, click Create a health check. Create one for HTTP and one for HTTPS.

  1. Set the Name (you may change this to a name you prefer)
  2. Protocol: HTTP with port 80 for the HTTP check, and HTTPS with port 443 for the HTTPS check
  3. Click Create
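The same two checks can be sketched with gcloud; the names match the example used later in the backend configuration:

```shell
# HTTP health check on port 80 and HTTPS health check on port 443
HC_HTTP=http-basic-check
gcloud compute health-checks create http "$HC_HTTP" --port=80
gcloud compute health-checks create https https-basic-check --port=443
```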

Step 6: Create Load Balancing

Go to Network Services ➝ Load Balancing ➝ Choose HTTP(S) Load Balancing

Backend configuration

Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group and additional serving capacity metadata.

After clicking Create a backend service, a modal box appears for the backend’s detailed configuration

  1. Set the Name (you may change this to a name you prefer)
  2. Backend type: select Instance group
  3. New backend: select the instance group
  4. Port numbers: 80
  5. Click Done
  6. Add a backend for the second group with the same settings
  7. Check Enable Cloud CDN
  8. Health check: select the HTTP check, in this example “http-basic-check”
  9. Click Update at the bottom
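The backend configuration above maps to gcloud roughly as follows; the backend service and group names are assumptions carried over from the earlier sketches:

```shell
# Global backend service with Cloud CDN enabled, using the HTTP
# health check from Step 5.
BACKEND=axfon-backend   # assumed name
gcloud compute backend-services create "$BACKEND" \
  --protocol=HTTP \
  --health-checks=http-basic-check \
  --enable-cdn \
  --global

# Attach both instance groups as backends
gcloud compute backend-services add-backend "$BACKEND" \
  --instance-group=axfon-group-us \
  --instance-group-zone=us-central1-a \
  --global
gcloud compute backend-services add-backend "$BACKEND" \
  --instance-group=axfon-group-eu \
  --instance-group-zone=europe-west1-b \
  --global
```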

Host and Path Rule

Leave the default settings.

Frontend Configuration

This IP address is the frontend IP for your clients’ requests. We will add both HTTP and HTTPS; to do this, click “Add Frontend IP and Port”.

  1. Set the Name (you may change this to a name you prefer)
  2. Protocol: HTTP for the HTTP frontend and HTTPS for the HTTPS frontend
  3. IP address: in the combo box, select the reserved global IP address
  4. Click Done for the HTTP configuration; for the HTTPS configuration, if you don’t have your own certificate, choose “Create a new certificate”
  5. Click Done
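On the command line, the frontend corresponds to a URL map, target proxies, and forwarding rules. This is a sketch with assumed resource names; the domains come from the example used later in this tutorial, and depending on your gcloud version the managed-certificate command may require the beta track:

```shell
# URL map routing all requests to the backend service
URL_MAP=axfon-map   # assumed name
gcloud compute url-maps create "$URL_MAP" \
  --default-service=axfon-backend

# HTTP frontend: proxy + forwarding rule on port 80
gcloud compute target-http-proxies create axfon-http-proxy \
  --url-map="$URL_MAP"
gcloud compute forwarding-rules create axfon-http-rule \
  --address=axfon-lb-ip \
  --target-http-proxy=axfon-http-proxy \
  --ports=80 \
  --global

# HTTPS frontend: Google-managed certificate + proxy + rule on 443
gcloud compute ssl-certificates create axfon-cert \
  --domains=lab.axfod.com,www.lab.axfod.com
gcloud compute target-https-proxies create axfon-https-proxy \
  --url-map="$URL_MAP" \
  --ssl-certificates=axfon-cert
gcloud compute forwarding-rules create axfon-https-rule \
  --address=axfon-lb-ip \
  --target-https-proxy=axfon-https-proxy \
  --ports=443 \
  --global
```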

Review and finalize

Review the backend and frontend load balancing configuration. For this tutorial we created two certificates, for both the www and non-www hostnames; if you do the same, you can attach multiple certificates in the certificate selection section. The load balancer is now ready for HTTP and HTTPS requests. To finish this section, click the “Create” button.


Provisioning certificates is likely to take 30 to 60 minutes, though it is sometimes faster. Make sure the DNS records for your domain resolve to the IP address of your load balancer’s target proxy.
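You can poll the provisioning state of a Google-managed certificate with gcloud; this sketch assumes the certificate name from the previous step, and ACTIVE in the managed status means it is ready:

```shell
# Check the provisioning status of a Google-managed certificate.
CERT=axfon-cert   # assumed name from the frontend configuration
gcloud compute ssl-certificates describe "$CERT" \
  --global \
  --format="get(managed.status)"
```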

Step 7: NGINX Proxy Configuration

When passing a request to a proxied server, NGINX sends the request to the specified server, fetches the response, and sends it back to the client. To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location block. NGINX ships with a default server block in /etc/nginx/sites-available/default.

To redirect HTTP to HTTPS on the NGINX proxy, add the following code inside the location block:

if ($http_x_forwarded_proto = "http") {
    return 301 https://$server_name$request_uri;
}

To edit these files using nano editor:

sudo nano /etc/nginx/sites-available/default
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # include snippets/snakeoil.conf;
        root /var/www/html;
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name lab.axfod.com www.lab.axfod.com;

        location / {
                # Redirect to HTTPS when the load balancer received
                # the request over plain HTTP.
                if ($http_x_forwarded_proto = "http") {
                        return 301 https://$server_name$request_uri;
                }
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
                # Forwarded-client headers; these take effect together
                # with a proxy_pass directive when proxying to a backend.
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
        }
}

Press Ctrl + X, then press Y and ENTER to save the changes.

Changes made in the NGINX configuration file will not be applied until NGINX is restarted. You can first validate the syntax with “sudo nginx -t”, then restart:

sudo service nginx restart

The above is a minimal sample configuration for testing; it is not a complete production configuration.

Step 8: Test Configuration is Running

To test that Google Cloud CDN is running and NGINX is successfully serving pages, use the command below:

curl -I -X GET lab.axfod.com

You may replace the above with your own domain or IP. In the output below, you will find “Via: 1.1 google”, which means the request passed through Google’s load balancer:

HTTP/1.1 301 Moved Permanently
Server: nginx/1.15.5 (Ubuntu)
Date: Tue, 08 Jan 2019 18:57:21 GMT
Content-Type: text/html
Content-Length: 178
Location: https://lab.axfod.com

Via: 1.1 google

Step 9: Testing Page on the Browser

You can now view the page in your web browser by visiting your server’s domain name or public IP address (http://your_domain_or_IP/). In the browser you can see the two different regions working together: visitors and incoming connections are sent to the closest available instance location. Congrats!

If this tutorial helped you, please rate and share it to help others find it! Feel free to leave a comment below.


About the Author: Axfod

axfod.com is an online publisher of guides and tutorials about Internet technology.