
Deploy an Azure Hub and Spoke Network using Terraform

Whether you're doing this to study or to deploy a production network, building an Azure hub and spoke network with Terraform makes management a lot easier. Because the network is defined in code, changes are easy to make where needed. This blog accompanies a video series that walks through each step in this article.

This blog goes through the following steps to get a hub and spoke network up and running quickly:

  • Defining address space for each network
  • Building 3 separate networks (hub, dev, and test)
  • Creating a peering connection between each spoke and the hub network
  • Deploying 3 virtual machines, one in each vnet
  • Creating a bastion host to connect to the virtual machines
  • Deploying a firewall using Terraform to control routes (covered in the follow-up article)
Full Video of Hub and Spoke Network Deployment

Allocating address space for your virtual networks in Azure

Virtual networks in Azure are allowed to reuse the same RFC 1918 address ranges because of how Azure isolates each vnet. Think of each virtual network as a separate building or company that's not connected to anything else. The problem comes when you use the same CIDR ranges and later need to connect two networks together.

Since we are creating a hub and spoke network, we need each network's address space to be unique. We'll give each vnet a /16, which provides 65,536 IP addresses to carve into subnets (Azure supports much smaller vnets, but a /16 leaves plenty of room). This is what we will allocate to each vnet (see the sketch after this list):

  • Hub vnet will get the 10.2.0.0/16 address range
  • The dev vnet will get the 10.0.0.0/16 address range
  • And last, the test vnet will get the 10.1.0.0/16 address range
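
To keep the address plan in one place, you could also express it as a Terraform locals block that the vnet resources reference. This is a minimal sketch; the local names are ours, and the rest of the article keeps the ranges inline for readability:

# Address plan for the hub and both spokes, kept in one place.
locals {
  address_spaces = {
    hub  = "10.2.0.0/16"
    dev  = "10.0.0.0/16"
    test = "10.1.0.0/16"
  }
}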

Build subnets in Azure for a hub and spoke network

Now that all three networks' CIDR ranges are separated, we can confirm that none of these IP ranges overlap. An overlap in the address space would prevent us from creating a peering connection down the road. Each subnet will get a portion of its vnet's IP space.

  • The hub vnet will get a subnet utilizing 10.2.1.0/24
  • The dev vnet will get a subnet utilizing 10.0.1.0/24
  • The test vnet will get a subnet utilizing 10.1.1.0/24

This gives each vnet the ability to host up to 251 resources that need a private IP address: Azure always reserves five addresses in every subnet (the network and broadcast addresses plus three for Azure services), so a /24 leaves 251 usable. If you don't already have a Terraform folder for your hub and spoke environment, now would be a good time to create one. In your folder and editor of choice, create these 3 files.
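
If you would rather derive those /24s than hard-code them, Terraform's built-in cidrsubnet() function can carve them out of each /16. A quick sketch (the local names are ours; the files below keep the literal ranges):

# cidrsubnet(prefix, newbits, netnum): adding 8 bits to a /16 yields a /24.
locals {
  hub_subnet  = cidrsubnet("10.2.0.0/16", 8, 1) # "10.2.1.0/24"
  dev_subnet  = cidrsubnet("10.0.0.0/16", 8, 1) # "10.0.1.0/24"
  test_subnet = cidrsubnet("10.1.0.0/16", 8, 1) # "10.1.1.0/24"
}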

Start with your hub.tf file. The code below creates an Azure virtual network and a subnet:


resource "azurerm_virtual_network" "hubnetwork" {
  name                = "hubnetwork"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  address_space       = ["10.2.0.0/16"]
}

resource "azurerm_subnet" "hubsubnet" {
  name                 = "hubsubnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.hubnetwork.name
  address_prefixes     = ["10.2.1.0/24"]
}

Now create another file for a spoke network called dev-spoke.tf and place that network's Terraform code inside:

resource "azurerm_virtual_network" "devnetwork" {
  name                = "devnetwork"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "devsubnet" {
  name                 = "devsubnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.devnetwork.name
  address_prefixes     = ["10.0.1.0/24"]
}

And finally, we'll create a file called test-spoke.tf, which holds the third virtual network:


resource "azurerm_virtual_network" "testnetwork" {
  name                = "testnetwork"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  address_space       = ["10.1.0.0/16"]
}

resource "azurerm_subnet" "testsubnet" {
  name                 = "testsubnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.testnetwork.name
  address_prefixes     = ["10.1.1.0/24"]
}

Creating a peering connection in Azure using Terraform

Now that we have all three networks defined, we need some more Terraform code to create the peering connections between them. We will peer the hub to both spoke networks. This will not allow the spoke networks to communicate with each other, only with the hub network.

For the peering connections we will place that code in a file called main.tf. This file also holds the resource group that every other resource references.
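
Every snippet in this article also assumes an azurerm provider configuration. If you don't already have one, a minimal providers.tf sketch could look like this (the version constraint is just an example):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0" # example constraint; pin whatever you have tested
    }
  }
}

# The features block is required by the azurerm provider, even when empty.
provider "azurerm" {
  features {}
}

With the provider in place, here is main.tf with the resource group and the four peering connections: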

resource "azurerm_resource_group" "main" {
  name     = "mainnetwork"
  location = "eastus"
}
resource "azurerm_virtual_network_peering" "dev-to-hub-peer" {
  name = "hubtodev"
  virtual_network_name = azurerm_virtual_network.devnetwork.name
  remote_virtual_network_id = azurerm_virtual_network.hubnetwork.id
  resource_group_name = azurerm_resource_group.main.name
  allow_virtual_network_access = true
  allow_forwarded_traffic = true
}

resource "azurerm_virtual_network_peering" "hub-to-dev-peer" {
  name = "hubtodev"
  virtual_network_name = azurerm_virtual_network.hubnetwork.name
  remote_virtual_network_id = azurerm_virtual_network.devnetwork.id
  resource_group_name = azurerm_resource_group.main.name
  allow_virtual_network_access = true
  allow_forwarded_traffic = true
}


resource "azurerm_virtual_network_peering" "test-to-hub-peer" {
  name = "testtohub"
  virtual_network_name = azurerm_virtual_network.testnetwork.name
  remote_virtual_network_id = azurerm_virtual_network.hubnetwork.id
  resource_group_name = azurerm_resource_group.main.name
  allow_virtual_network_access = true
  allow_forwarded_traffic = true
}

resource "azurerm_virtual_network_peering" "hub-to-test-peer" {
  name = "hubtotest"
  virtual_network_name = azurerm_virtual_network.hubnetwork.name
  remote_virtual_network_id = azurerm_virtual_network.testnetwork.id
  resource_group_name = azurerm_resource_group.main.name
  allow_virtual_network_access = true
  allow_forwarded_traffic = true
}

As you can see, each spoke has a direct peering connection to the hub network. Remember that peering connections are not transitive: even though both spokes connect to the hub, packets do not automatically flow from one spoke to the other. That would require a peering connection between the spokes themselves, which would look like the sketch below.
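
For illustration only, a direct spoke-to-spoke link would be another pair of azurerm_virtual_network_peering resources. This is hypothetical and deliberately left out of our deployment, since spoke-to-spoke traffic should eventually flow through the hub firewall:

# Hypothetical: direct dev <-> test peering, which would bypass the hub.
resource "azurerm_virtual_network_peering" "dev-to-test-peer" {
  name                         = "devtotest"
  virtual_network_name         = azurerm_virtual_network.devnetwork.name
  remote_virtual_network_id    = azurerm_virtual_network.testnetwork.id
  resource_group_name          = azurerm_resource_group.main.name
  allow_virtual_network_access = true
}

resource "azurerm_virtual_network_peering" "test-to-dev-peer" {
  name                         = "testtodev"
  virtual_network_name         = azurerm_virtual_network.testnetwork.name
  remote_virtual_network_id    = azurerm_virtual_network.devnetwork.id
  resource_group_name          = azurerm_resource_group.main.name
  allow_virtual_network_access = true
}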

Deploying a Windows Virtual Machine in Azure using Terraform

After we get our networks built, we need a way to test connectivity between the 3 of them. For that we set up 3 additional Azure virtual machines: each virtual network gets 1 Windows virtual machine so we can test RDP connectivity. Let's create a separate virtualmachines.tf file and place our 3 virtual machines inside. Instead of hard-coding credentials, the VMs below reference a var.admin_password variable, sketched right after this paragraph.
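
The original listing left admin_password empty, which the azurerm provider will reject (Windows passwords must be 8-123 characters and meet Azure's complexity rules). This variable is our addition; supply a value at apply time, for example via -var or the TF_VAR_admin_password environment variable:

# Our addition: keeps the Windows admin password out of the .tf files.
variable "admin_password" {
  type      = string
  sensitive = true # hides the value in plan/apply output
}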

resource "azurerm_network_interface" "testnic" {
  name                = "testnic"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.testsubnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "testvm" {
  name                = "tesetvm"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = ""
  network_interface_ids = [
    azurerm_network_interface.testnic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}



resource "azurerm_network_interface" "devnic" {
  name                = "devnic"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.devsubnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.devpip.id
  }
}


resource "azurerm_public_ip" "devpip" {
  name                = "devpip"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  allocation_method   = "Static"
}


resource "azurerm_windows_virtual_machine" "devvm" {
  name                = "devvm"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = var.admin_password
  network_interface_ids = [
    azurerm_network_interface.devnic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}



resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.hubsubnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "hubvm" {
  name                = "hubvm"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  admin_password      = var.admin_password
  network_interface_ids = [
    azurerm_network_interface.hubnic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}

This code references the Windows Server 2016 Datacenter VM image. Image references can all be found inside the Azure portal, or by looking at how an existing VM is configured.
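
You can also resolve image details from Terraform itself with the azurerm provider's platform image data source. A small sketch (the data source and output names here are ours):

# Looks up the latest available version of the Server 2016 Datacenter image.
data "azurerm_platform_image" "win2016" {
  location  = azurerm_resource_group.main.location
  publisher = "MicrosoftWindowsServer"
  offer     = "WindowsServer"
  sku       = "2016-Datacenter"
}

output "win2016_latest_version" {
  value = data.azurerm_platform_image.win2016.version
}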

Creating a bastion host in Azure to secure RDP access

To gain access to the machines and configure them, we would traditionally use RDP. That protocol has long been a favorite attack vector for bad actors on the outside. To avoid exposing virtual machines to the public internet, we create a bastion host using Terraform instead.

The bastion serves as the connection point into the VM of your choice. Note that Azure requires the bastion's subnet to be named exactly AzureBastionSubnet, which is why the subnet below is named differently from the others. To set up the bastion host, place this code at the bottom of the virtualmachines.tf file:

resource "azurerm_subnet" "bastionsubnet" {
  name                 = "AzureBastionSubnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.devnetwork.name
  address_prefixes     = ["10.0.2.0/27"]
}

resource "azurerm_public_ip" "bastionpip" {
  name                = "examplepip"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_bastion_host" "main" {
  name                = "examplebastion"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                 = "configuration"
    subnet_id            = azurerm_subnet.bastionsubnet.id
    public_ip_address_id = azurerm_public_ip.bastionpip.id
  }
}

Conclusion: Deploying your Terraform code

Now that we have our entire environment defined in code, we can run the core Terraform commands. First we need to run:

  • terraform init to initialize the working directory and download the azurerm provider
  • Then terraform plan to make sure we are not affecting any existing infrastructure Terraform may already be managing
  • Once you're happy with how the plan looks, terraform apply deploys everything to your subscription of choice

Once done, you should be able to connect to each server through the bastion using the admin username and password you supplied. After connecting to a spoke virtual machine, you should be able to RDP from it into the hub VM.

Testing RDP from one spoke VM to the other spoke VM should fail, since the spokes are only peered with the hub. In the next article we will deploy an Azure Firewall in the network, create user defined routes, and enable connectivity to a VM on the other side of the network.
