How To Set Up DRBD on CentOS


Distributed Replicated Block Device (DRBD)
DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several user space management applications, and some shell scripts. DRBD is traditionally used in high availability (HA) computer clusters, but beginning with DRBD version 9, it can also be used to create larger software defined storage pools with a focus on cloud integration.

Comparison to RAID-1
=====================
DRBD bears a superficial similarity to RAID-1 in that it involves a copy of data on two storage devices, such that if one fails, the data on the other can be used. However, it operates in a very different way from RAID and even network RAID.

In RAID, the redundancy exists in a layer transparent to the storage-using application. While there are two storage devices, there is only one instance of the application and the application is not aware of multiple copies. When the application reads, the RAID layer chooses the storage device to read. When a storage device fails, the RAID layer chooses to read the other, without the application instance knowing of the failure.

In contrast, with DRBD there are two instances of the application, and each can read only from one of the two storage devices. Should one storage device fail, the application instance tied to that device can no longer read the data. Consequently, that application instance shuts down and the other application instance, tied to the surviving copy of the data, takes over.

Conversely, in RAID, if the single application instance fails, the information on the two storage devices is effectively unusable, but in DRBD, the other application instance can take over.

How it Works
============
DRBD is built to facilitate communication between two servers transparently while using a minimal amount of system resources, so it does not noticeably affect system performance or stability.

DRBD works by mirroring two separate servers: one server, although passive, is usually an exact copy of the other. Any data written to the primary server is simultaneously copied to the secondary one through a real-time communication system, and any change made to the data is immediately replicated on the passive server.

The passive server only becomes active when the primary one fails. When such a failure occurs, DRBD immediately recognizes the mishap and shifts to the secondary server. This failover can be either manual or automatic: with manual failover, the user must authorize the system to shift to the passive server when the primary one fails, while automatic setups recognize problems on the primary server and immediately shift to the secondary one on their own.

DRBD installation
=================

Install the ELRepo repository on both systems:
----------------------------------------------
# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
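
The release RPM above targets CentOS 6; on CentOS 7, use the matching elrepo-release package for el7 from elrepo.org. If you see package signature warnings, importing the ELRepo GPG key first (the key URL below is taken from elrepo.org) avoids them:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org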

Update both systems and temporarily set SELinux to permissive mode
-------------------------------------------------------------------
yum update -y
setenforce 0
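
Note that setenforce 0 only puts SELinux into permissive mode until the next reboot. If you want it to stay permissive, a minimal sketch (assuming the stock /etc/selinux/config with a SELINUX=enforcing line) is:

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# getenforce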

Install DRBD on both machines
-----------------------------
[root@server1 ~]# yum -y install drbd83-utils kmod-drbd83
[root@server2 ~]# yum -y install drbd83-utils kmod-drbd83

Insert DRBD module manually on both machines or reboot
-------------------------------------------------------
/sbin/modprobe drbd
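
To confirm the module is actually loaded on each machine, the following should list it and print the module version:

/sbin/lsmod | grep drbd
cat /proc/drbd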

Partition the disk for DRBD on both machines
--------------------------------------------
[root@server1 ~]# fdisk -cu /dev/sdb
[root@server2 ~]# fdisk -cu /dev/sdb
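
Inside fdisk, a rough outline of the interactive session (assuming /dev/sdb is empty and you want a single partition /dev/sdb1 spanning the whole disk) looks like this:

n        (create a new partition)
p        (primary)
1        (partition number 1)
<Enter>  (accept the default first sector)
<Enter>  (accept the default last sector)
w        (write the partition table and exit)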

Create the Distributed Replicated Block Device resource file
-------------------------------------------------------------
[root@server1 ~]# vi /etc/drbd.d/clusterdb.res

resource clusterdb
{
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }

  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }

  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }

  on server1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.110:7788;
    flexible-meta-disk internal;
  }

  on server2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.1.111:7788;
    meta-disk internal;
  }
}
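
The host names after "on" must match the output of uname -n on each node, and the addresses should match the IPs used in /etc/hosts below. As an optional sanity check, you can let drbdadm parse the file; syntax errors will be reported:

[root@server1 ~]# drbdadm dump clusterdb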

Make sure that hostname resolution is working
----------------------------------------------
Add the following entries to /etc/hosts on both machines:
192.168.1.110 server1 server1.example.com
192.168.1.111 server2 server2.example.com
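
A quick way to confirm the entries work from both machines:

[root@server1 ~]# ping -c 1 server2
[root@server2 ~]# ping -c 1 server1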

Set NTP server and add it to crontab on both machines
------------------------------------------------------
vi /etc/crontab
5 * * * * root ntpdate your.ntp.server
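
Replace your.ntp.server with a reachable NTP server, and run the sync once by hand on both machines so the clocks already agree before DRBD starts:

ntpdate your.ntp.server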

Copy the DRBD configuration and hosts file to server2
------------------------------------------------------
[root@server1 ~]# scp /etc/drbd.d/clusterdb.res server2:/etc/drbd.d/clusterdb.res
[root@server1 ~]# scp /etc/hosts server2:/etc/

Initialize the DRBD meta data storage on both machines
-------------------------------------------------------
[root@server1 ~]# drbdadm create-md clusterdb
[root@server2 ~]# drbdadm create-md clusterdb

Start DRBD on both servers
--------------------------
[root@server1 ~]# service drbd start
[root@server2 ~]# service drbd start
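
At this point both nodes normally show up as Secondary/Secondary with Inconsistent/Inconsistent data in /proc/drbd. If you also want DRBD to start automatically after a reboot (an optional step on CentOS 6), enable the service on both machines:

[root@server1 ~]# chkconfig drbd on
[root@server2 ~]# chkconfig drbd on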

On the PRIMARY server, run the drbdadm command
-----------------------------------------------
[root@server1 ~]# drbdadm -- --overwrite-data-of-peer primary all

Wait for the initial device synchronization to complete (100%) and confirm you are on the primary server
----------------------------------------------------------------------------------------------------------
[root@server1 yum.repos.d]# cat /proc/drbd

version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:78848 nr:0 dw:0 dr:79520 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2017180
[>....................] sync'ed: 27.0% (2037180/2096028)K
finish: 0:02:58 speed: 11,264 (11,264) K/sec
ns:1081628 nr:0 dw:33260 dr:1048752 al:14 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
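
To follow the synchronization progress without re-running the command, something like this works (press Ctrl+C to stop):

[root@server1 ~]# watch -n1 cat /proc/drbd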

Create a filesystem on the Distributed Replicated Block Device
---------------------------------------------------------------
[root@server1 yum.repos.d]# /sbin/mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (06-June-2017)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524007 blocks
26200 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Now you can mount the DRBD device on your primary server
---------------------------------------------------------
[root@server1 ~]# mkdir /data
[root@server1 ~]# mount /dev/drbd0  /data

You don't need to mount the disk on the secondary machine. All data you write to the /data folder will be synced to server2.
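
To test a manual failover later (a sketch, assuming no cluster manager is in place), demote the current primary, promote the other node, and mount the device there:

[root@server1 ~]# umount /data
[root@server1 ~]# drbdadm secondary clusterdb
[root@server2 ~]# drbdadm primary clusterdb
[root@server2 ~]# mkdir /data
[root@server2 ~]# mount /dev/drbd0 /data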

 

Adios 🙂

Written By Teffin Varghese

Server Administrator
