!!! This is from 2015 and uses a method that was already becoming outdated. You can use this as a guideline, but the tooling here is fairly old !!!
At work we had to create a cluster for a DNS server. The replication method we were using at the time was DRBD, and I was curious about other ways to replicate data in an Ubuntu Linux environment. That is when I found GlusterFS. I have done this on Ubuntu 12.04 as well as 14.04.
Okay, replication is not the best use of GlusterFS, but it can be used to replicate data, especially when the data you are replicating consists of small files. That is exactly what we replicate in DNS. This is a step-by-step guide for creating a High Availability DNS cluster.
Step 1 - Installing GlusterFS and Prerequisites
1) Add the following line to /etc/apt/sources.list:
deb http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu precise main
2) apt-get install python-software-properties
3) add-apt-repository ppa:gluster/glusterfs-3.5 (run apt-get update afterwards so the new repository is picked up)
4) apt-get install glusterfs-server
5) apt-get install xfsprogs
6) apt-get install fuse-utils (this package is not available in Ubuntu 14.04, and it is not needed there either)
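A quick sanity check after installation: the client should report the 3.5.x version from the PPA and the server daemon should be running.
glusterfs --version
service glusterfs-server status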
Step 2 - Configuring the Nodes
Assume the disk used for replication is /dev/sdb, the primary server is server1 with IP 192.168.0.1, and the secondary server is server2 with IP 192.168.0.2.
1) Change the /etc/hosts file as follows on both servers.
127.0.0.1 localhost
192.168.0.1 server1
192.168.0.2 server2
2) fdisk /dev/sdb (create a single partition, /dev/sdb1)
3) mkfs.xfs -i size=512 /dev/sdb1
4) Create a mount point to mount this disk.
mkdir /mnt/gluster
5) Add a line to /etc/fstab to mount this on startup:
UUID=(uuid of the dev) /mnt/gluster xfs defaults 0 0
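If you are unsure of the UUID, blkid prints it for the new filesystem:
blkid /dev/sdb1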
6) Run mount -a to check the new fstab line.
7) Create a folder for replication:
mkdir /mnt/gluster/replicate
8) Find peers and add them to the trusted pool. (If you want to add additional nodes, you have to probe them from a server that is already in the pool; you cannot probe the pool from a new server.)
On server1:
gluster peer probe server2
On server2:
gluster peer probe server1
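You can verify the trusted pool from either server afterwards:
gluster peer status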
9) On server1, create the volume. Here data is the name of the volume; we can use any name.
gluster volume create data replica 2 transport tcp server1:/mnt/gluster/replicate server2:/mnt/gluster/replicate
10) Start the volume:
gluster volume start data
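To confirm the volume came up as a two-brick replica, check its info:
gluster volume info data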
11) Create a directory /dns (or any name you like that is not currently used) to mount the replicated volume.
mkdir /dns
12) Add another mount line to fstab like below.
On server1:
server1:/data /dns glusterfs defaults,_netdev 0 0
On server2:
server2:/data /dns glusterfs defaults,_netdev 0 0
13) Remount again.
mount -a
We have now completed the replication.
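A simple way to test it: create a file under the mount point on one server, and it should appear on the other almost immediately.
On server1: touch /dns/testfile
On server2: ls /dns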
Step 3 - DNS Configuration
1) Create a directory /dns/bind to store the bind9 related data.
mkdir /dns/bind
2) Copy all directories containing zone files to /dns (the mount point of the replicated volume).
3) Remove/rename the original directories and create symbolic links to the new locations (sketched below).
4) Do the same for the /etc/bind/named.conf.local file.
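As a sketch for steps 2 and 3, assuming your zone files live in a hypothetical /etc/bind/zones directory (adjust the paths to match your layout):
cp -a /etc/bind/zones /dns/bind/
mv /etc/bind/zones /etc/bind/zones.orig
ln -s /dns/bind/zones /etc/bind/zones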
5) Make the following changes in the /etc/bind/named.conf file.
Comment out/remove the following line:
include "/etc/bind/named.conf.local";
Add the following line:
include "/dns/bind/named.conf.local";
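If you have the bind9utils package installed, named-checkconf will validate the edited configuration (it prints nothing when the config is clean):
named-checkconf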
6) Assuming all your bind related files are under /dns/bind, add the following line to the /etc/apparmor.d/usr.sbin.named file:
/dns/bind/** rw,
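Reload the profile so the change takes effect:
apparmor_parser -r /etc/apparmor.d/usr.sbin.named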
Now we have two servers with the same DNS zones running on two separate IP addresses. This alone could be a setup for disaster recovery where we do the IP failover manually. But that is not good enough for us, is it?
This is where Heartbeat comes in.
Step 4 - Heartbeat Configuration
Here we assume that the published IP for your DNS server is 192.168.0.3. Clients will connect to this IP, which Heartbeat assigns to server1 initially. If server1 fails, server2 will take over this IP and serve DNS queries.
1) apt-get install heartbeat
2) Create a file /etc/heartbeat/ha.cf and add the following details on both servers:
debug 0
debugfile /var/log/ha-debug
logfacility local0
keepalive 2
deadtime 20 # timeout before the other server takes over
bcast eth0
node server1 server2 # node host names
auto_failback on # very important, or automatic failback to server1 won't happen
3) Create a file /etc/heartbeat/haresources and add the following line on both servers:
server1 IPaddr::virtual.IP.Address/netmask/interface bind9
With the example values used here, that would be (assuming a /24 netmask on eth0):
server1 IPaddr::192.168.0.3/24/eth0 bind9
4) Create a file /etc/heartbeat/authkeys and add the following lines:
auth 3
3 md5 sequenceofnumbers(1234567 etc)
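Heartbeat will refuse to start if this file is readable by anyone other than root, so tighten its permissions:
chmod 600 /etc/heartbeat/authkeys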
5) Remove heartbeat from the startup programs. This is because GlusterFS takes some time to initialize on reboot, and if heartbeat kicks in before GlusterFS is ready, it will not be able to start the bind services and will behave unexpectedly.
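On these Ubuntu releases this can be done with update-rc.d:
update-rc.d -f heartbeat remove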
6) Create a script /root/hbstart to start heartbeat, and make it executable with chmod +x /root/hbstart:
#!/bin/sh
/etc/init.d/heartbeat start
7) Call the script one minute after reboot with a cron job (add this line with crontab -e as root):
@reboot /bin/sleep 60 && /root/hbstart
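Once both nodes are up, a simple failover test: stop heartbeat on server1, and the published IP (192.168.0.3 in this example) should move to server2 within the deadtime.
On server1: /etc/init.d/heartbeat stop
On server2: ip addr show eth0 (the 192.168.0.3 address should now appear here)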