{"id":878,"date":"2018-06-27T08:34:35","date_gmt":"2018-06-27T08:34:35","guid":{"rendered":"https:\/\/www.hostinger.com\/blog\/?p=878"},"modified":"2022-02-15T11:01:10","modified_gmt":"2022-02-15T11:01:10","slug":"mysql-setup-at-hostinger-explained","status":"publish","type":"post","link":"https:\/\/www.hostinger.com\/blog\/mysql-setup-at-hostinger-explained","title":{"rendered":"MySQL Setup at Hostinger Explained"},"content":{"rendered":"<p>At&nbsp;<a href=\"https:\/\/www.hostinger.com\/\" rel=\"nofollow\">Hostinger,<\/a> we have various MySQL setups, ranging from standalone replica-less instances, through <a href=\"https:\/\/www.percona.com\/software\/mysql-database\/percona-xtradb-cluster\" rel=\"nofollow noopener\" target=\"_blank\">Percona XtraDB Cluster<\/a> (PXC) and <a href=\"http:\/\/www.proxysql.com\/\" rel=\"nofollow noopener\" target=\"_blank\">ProxySQL<\/a>-based routing, to fully custom solutions, which I&rsquo;m going to describe in this blog post.<\/p><p>We do not have elephant-sized databases for internal services such as the API, billing, and client management, since high availability, rather than scalability, is our top priority.<\/p><p>Even so, scaling vertically is good enough for our case, as the database size does not exceed 500 GB. One of the top requirements is the ability to access the master node, as our read and write workloads are fairly evenly distributed.<\/p><p>Our current setup for storing all the data about clients, servers, and so forth is a PXC cluster of three nodes without any geo-replication. All nodes run in the same data center.<\/p><p>We plan to migrate this cluster to a geo-replicated one spanning three locations: the United States, the Netherlands, and Singapore. This would allow us to guarantee high availability even if one of the locations became unreachable.<\/p><p>Since PXC uses fully synchronous replication, there will be higher latencies for writes. 
But the reads will be much quicker because of the local replica in every location.<\/p><p>We did some research on&nbsp;<a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication.html\" rel=\"nofollow noopener\" target=\"_blank\">MySQL Group Replication<\/a> and found that it requires instances to be close to each other, as it is sensitive to latency:<\/p><blockquote><p>Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other, and is impacted by both network latency as well as network bandwidth.<\/p><\/blockquote><p>As we have used PXC before, we know how to handle it under critical circumstances and keep it available.<\/p><p>In the&nbsp;<a href=\"https:\/\/www.000webhost.com\/\" rel=\"nofollow noopener\" target=\"_blank\">000webhost.com<\/a> project and hAPI (the Hostinger API), we use the aforementioned custom solution, which selects the master node at Layer 3.<\/p><p>One of our best friends is BGP, a protocol old enough to buy its own beer, so we use it a lot. This implementation also uses BGP as the underlying protocol to point to the real master node. To speak BGP we use the ExaBGP service and announce the VIP address as anycast from both master nodes.<\/p><p>You may be asking: how do we make sure MySQL queries go to one and the same instance instead of hitting both? We use&nbsp;<a href=\"https:\/\/zookeeper.apache.org\/doc\/current\/zookeeperOver.html\" rel=\"nofollow noopener\" target=\"_blank\">Zookeeper&rsquo;s ephemeral nodes<\/a>&nbsp;to acquire a mutually exclusive lock.<\/p><p>Zookeeper acts as a circuit breaker between the BGP speakers and MySQL clients. While the lock is held, we announce the VIP from the master node and applications send their queries along this path. 
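The lock-then-announce flow described above can be sketched as follows. This is a simplified stand-in, not our production code: a real deployment would use a ZooKeeper client (such as kazoo) and a running ExaBGP process, whereas here the ephemeral-node lock is simulated in memory so the control flow can be shown end to end. The VIP 10.0.0.100/32 and the node names are hypothetical.

```python
class EphemeralLock:
    """Mimics a ZooKeeper ephemeral znode: at most one holder at a time;
    the node disappears when the holder's session ends."""

    def __init__(self):
        self._holder = None

    def held_by(self, node_id):
        return self._holder == node_id

    def acquire(self, node_id):
        # Creating the ephemeral node succeeds only if it does not exist yet.
        if self._holder is None:
            self._holder = node_id
            return True
        return False

    def release(self, node_id):
        # Session loss (e.g. the MySQL/ExaBGP pair going down) deletes the node.
        if self._holder == node_id:
            self._holder = None


class BgpSpeaker:
    """Emits ExaBGP-style commands for the VIP, but only while holding the lock."""

    VIP = "10.0.0.100/32"  # hypothetical anycast VIP

    def __init__(self, node_id, lock):
        self.node_id = node_id
        self.lock = lock
        self.announcing = False

    def tick(self):
        """Try to become master; return the command we would feed to ExaBGP."""
        if self.lock.held_by(self.node_id) or self.lock.acquire(self.node_id):
            if not self.announcing:
                self.announcing = True
                return f"announce route {self.VIP} next-hop self"
        return None

    def stop(self):
        """MySQL went down: release the lock and withdraw the VIP."""
        self.lock.release(self.node_id)
        if self.announcing:
            self.announcing = False
            return f"withdraw route {self.VIP} next-hop self"
        return None


lock = EphemeralLock()
a, b = BgpSpeaker("db-a", lock), BgpSpeaker("db-b", lock)
print(a.tick())  # db-a wins the lock and announces the VIP
print(b.tick())  # db-b cannot acquire the lock, stays silent
print(a.stop())  # db-a's MySQL stops: VIP withdrawn, lock released
print(b.tick())  # db-b takes over and announces the VIP
```

Because only the lock holder announces the route, clients following the anycast VIP always end up at a single master, and failover is just the lock changing hands.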
If the lock is released, another node can take it over and announce the VIP, so applications keep sending queries without any extra effort.<\/p><p><img decoding=\"async\" class=\"aligncenter size-full wp-image-883\" src=\"https:\/\/www.hostinger.com\/blog\/wp-content\/uploads\/sites\/4\/2018\/06\/mysql-setup-hostinger.jpg\" alt=\"MySQL Setup Hostinger\" width=\"845\" height=\"585\" srcset=\"https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/4\/2018\/06\/mysql-setup-hostinger.jpg\/w=845,fit=scale-down 845w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/4\/2018\/06\/mysql-setup-hostinger.jpg\/w=300,fit=scale-down 300w, https:\/\/imagedelivery.net\/LqiWLm-3MGbYHtFuUbcBtA\/wp-content\/uploads\/sites\/4\/2018\/06\/mysql-setup-hostinger.jpg\/w=768,fit=scale-down 768w\" sizes=\"(max-width: 845px) 100vw, 845px\" \/><\/p><p>The second question is: what conditions should be met to stop announcing the VIP? This can be implemented differently depending on the use case, but we release the lock whenever the MySQL process is down, using systemd&rsquo;s <code>Requires<\/code>&nbsp;directive in ExaBGP&rsquo;s unit file:<\/p><blockquote><p>Besides, with or without specifying After=, this unit will be stopped if one of the other units is explicitly stopped.<\/p><\/blockquote><p>With&nbsp;<a href=\"https:\/\/www.freedesktop.org\/wiki\/Software\/systemd\/\" rel=\"nofollow noopener\" target=\"_blank\">systemd<\/a> we can create a nice dependency tree and ensure all of its dependencies are met. Stopping, killing, or even restarting MySQL will make&nbsp;systemd&nbsp;stop the ExaBGP process and withdraw the VIP announcement. 
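A unit file implementing the dependency described above might look roughly like this. The unit names, the config path, and the `ExecStart` line are illustrative assumptions, not our exact production files; the key part is the `Requires=`/`After=` pair tying ExaBGP's lifetime to MySQL's:

```ini
# /etc/systemd/system/exabgp.service -- illustrative sketch; actual unit
# names and paths in a real deployment may differ.
[Unit]
Description=ExaBGP speaker announcing the MySQL master VIP
# If mysql.service is stopped or killed, systemd stops this unit too,
# which withdraws the VIP announcement.
Requires=mysql.service
After=mysql.service

[Service]
ExecStart=/usr/bin/exabgp /etc/exabgp/exabgp.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this wiring, no separate health-checking daemon is needed for the basic "MySQL is down" case: process supervision itself drives the BGP withdrawal.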
The end result is that a new master is selected.<\/p><p>We have battle-tested these master failovers during our <a href=\"https:\/\/www.hostinger.com\/blog\/new-network-infrastructure\" rel=\"nofollow\">Gaming days<\/a> and haven&rsquo;t noticed anything critical <em>yet<\/em>.<\/p><p>If you think good architecture is expensive, try bad architecture \ud83d\ude09<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At\u00a0Hostinger, we have various MySQL setups starting from the standalone replica-less instances, Percona XtraDB Cluster (PXC), and ProxySQL based routing, to even absolutely custom \u2026<\/p>\n","protected":false},"author":39,"featured_media":1490,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[82],"tags":[],"hashtags":[],"class_list":["post-878","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-engineering"],"hreflangs":[],"_links":{"self":[{"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/posts\/878","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/users\/39"}],"replies":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/comments?post=878"}],"version-history":[{"count":9,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/posts\/878\/revisions"}],"predecessor-version":[{"id":2536,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/posts\/878\/revisions\/2536"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/media\/1490"}],"wp:attachment":[{"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/media?parent=878"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/
\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/categories?post=878"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/tags?post=878"},{"taxonomy":"hashtags","embeddable":true,"href":"https:\/\/www.hostinger.com\/blog\/wp-json\/wp\/v2\/hashtags?post=878"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}