
Moving Towards Full Stack Automation

In 2015 we launched Awex, the first IPv6-only project powered by Hostinger. It is fully automated, from application deployments down to the network gear. Since then we have been automating almost everything possible to avoid manual work and increase efficiency.

As we wrote in previous posts, we use Chef and Ansible for most of these tasks.

Due to the huge demand for premium shared hosting, we have been shipping an increasing number of new servers over the last few months.

Before automation, we had to set up servers manually: creating hardware RAIDs, setting IP addresses, installing the OS, and so on. Fortunately, we decided to stop doing that and build a streamlined process instead. We employed PXE for this job.

Why PXE?

  • It allows us to fully converge our infrastructure with minimal costs;
  • We already use phpIPAM to store inventory information such as Chef run lists, environments, IP addresses, and MAC addresses. Hence we ended up with a simple solution: a single PXE virtual machine per data center, serving kickstart configuration files generated from phpIPAM.
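
A minimal sketch of the kind of per-server record this relies on, with illustrative field names (these are assumptions, not the actual phpIPAM custom fields we use):

    # Illustrative only: field names and values are hypothetical.
    hostname: shared-nl-042
    ip_address: 10.20.30.42
    mac_address: "aa:bb:cc:dd:ee:42"
    chef_environment: production
    chef_run_list:
      - "role[base]"
      - "role[premium-shared]"
    kickstart_template: centos7-premium-shared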

The process itself is trivial:

  • Put all the necessary information about the new server into phpIPAM (IP/MAC addresses/hostname/etc.);
  • Generate the kickstart config (which is also stored in phpIPAM as an additional metadata field on the server);
  • Install the OS: a custom microservice written in Ruby/Sinatra dynamically fetches the kickstart configuration from phpIPAM for the given MAC address;
  • After the OS is installed, we run the full Ansible playbook to converge all roles;
  • Later on we do not run full converges, because they take too much time to complete.

Instead, we use incremental converges: by putting our roles under version control we can re-apply only what has changed, which speeds the process up about 10x.
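
As a rough illustration of pinning role versions, here is what an ansible-galaxy requirements file could look like; the role names, repository URLs, and versions are hypothetical, and this is just one way to do it, not necessarily a copy of our setup:

    # requirements.yml - illustrative only; names, URLs and versions are made up.
    # Pinned versions make it obvious which roles changed between converges.
    - name: base
      src: git+https://git.example.com/ansible/role-base.git
      version: "1.4.2"
    - name: premium-shared
      src: git+https://git.example.com/ansible/role-premium-shared.git
      version: "2.0.1"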

We also considered using FAI or Foreman, but these tools looked over-engineered for our simple needs.

Why Network?

This year we are delivering more and more L3 networks across all of our data centers (DC).

We launched a new data center in the Netherlands, which will bring new opportunities for clients sourcing from Europe. This DC is fully L3-only, which means no fucking stretched VLANs between racks, rooms, and so on. While we are at it: we switched Singapore's data center over to L3 as well. It is simply the right architecture. I have worked at three ISPs over my career, and I have found that the most interesting networking problems lie in data center architectures.

Topology

We use Cumulus Linux and have switched to 40G links between the leaf and spine layers. Everything is automated with Ansible. We connect every commodity server to the network at 10G, mostly over fiber, in some places over copper. No project gets through its first phase without issues:

  • We dealt with a number of hardware issues while bootstrapping the new data center;
  • 40G adapters were not recognized properly: one end expected copper while the other expected fiber;
  • 10G over QSFP+ using breakout cable;
  • Quagga –> FRRouting;
  • Migration from L2 to L3 without any downtime.

40G is Something New

It is really weird that the link comes up at all while the two sides report different interface types, XLAUI (fiber) and CR4 (copper); after a few minutes the link goes down. Why the hell is CR4 used at all?

I tried to override the cable settings via /etc/cumulus/portwd.conf, setting the expected type on both ends of the link.

But no joy.

I did not find any documentation about the portwd daemon at all, so I decided to take a look at the source code.

It turns out that when the daemon cannot decode the interface information from the ASIC, it defaults to 40G-CR4, which is exactly what we were seeing. We ordered a different kind of transceiver recommended by Cumulus, and it works: CR4 became SR4.

Another interesting problem we faced with QSFP+ was using an adapter to connect both ends at 10G. This was needed because our DC provider did not have 40G ports for us to connect to. We use the EdgeCore 6712-32X-O-AC-F in our spine layer, which has no 10G ports, so we went with 10G over a QSFP+ adapter (breakout cable).

L2 –> L3

The most interesting part was migrating the existing L2 network in Singapore, with almost three full racks, over to L3. I picked a private AS number per rack, numbered 65031, 65032, and so on. This architecture lets us keep the same eBGP rules regardless of whether the peer is an upstream neighbor or just a ToR switch.
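
A minimal sketch of how such a per-rack AS mapping can be expressed in Ansible group variables; the variable names, rack names, and the spine AS below are illustrative assumptions, not our actual inventory:

    # group_vars/sin.yml - illustrative only.
    # One private AS per rack keeps the eBGP policy uniform across ToR switches.
    spine_asn: 65030
    rack_asn:
      rack31: 65031
      rack32: 65032
      rack33: 65033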

The existing infrastructure was built with separate BGP sessions for IPv4 and IPv6, which allowed us to migrate smoothly without any downtime by testing one protocol before switching over the other. Before the migration we had a single exit upstream where we announced our prefixes. After setting up another one (for the migration only), we had to make sure traffic flowed as expected and avoid asymmetric paths, using next-hop manipulation, AS prepends, and so on.

Peers themselves are configured from an Ansible YAML hash.
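
A minimal sketch of what such a hash can look like; the interface names, link-local addresses, and AS numbers here are hypothetical, not the production values:

    # Illustrative only: every value below is made up.
    # Each internal peer is a single IPv6 (link-local) session carrying
    # both the IPv4 and IPv6 address families.
    bgp:
      asn: 65030
      peers:
        swp1:
          description: rack31-tor1
          remote_asn: 65031
          address: fe80::31
        swp2:
          description: rack32-tor1
          remote_asn: 65032
          address: fe80::32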

The real hash on one of the Singapore spine switches follows the same structure. Note that we use IPv6-only BGP sessions between internal peers to carry both protocols, and a different AS number for each rack.

For new deployments we started using the newest version of Cumulus Linux, which replaced Quagga with the FRRouting daemon for routing protocols. FRRouting is essentially a community fork of Quagga, so the configuration was not very different; basically, only the paths to the configuration files had to be changed. Before provisioning the new version in production we test it in Test Kitchen with the same playbooks and Cumulus VX. If you push untested stuff, you are just blocking your own fire escape.
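
A minimal sketch of such a Test Kitchen setup, assuming the kitchen-vagrant driver and the kitchen-ansible provisioner; the box, playbook, and suite names are illustrative:

    # .kitchen.yml - illustrative only.
    driver:
      name: vagrant
      box: CumulusCommunity/cumulus-vx
    provisioner:
      name: ansible_playbook
      playbook: site.yml
      hosts: spine
    platforms:
      - name: cumulus-vx
    suites:
      - name: spine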

Final Cuts

  • Just like a power company shouldn’t change its voltage every year, we believe that our clients should be able to trust our service to be stable;
  • Cumulus Linux still remains the best OS for networking;
  • Do NOT design your network topology around application needs. Today's applications are smart enough to be network-agnostic and adapt to any infrastructure. If you care about your infrastructure, do not turn it into unfrastructure.

Hence we are trying to do less manual work while spending our time on improving the quality of the service.