Cloud Computing.


What is Cloud Computing?


Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you access technology services such as computing power, storage, and databases on an as-needed basis from a cloud provider like AWS, Microsoft Azure, or Google Cloud.

Organizations of every type, size, and industry are using the cloud for a wide variety of use cases, such as data backup, disaster recovery, email, virtual desktops, software development and testing, big data analytics, and customer-facing web applications. For example, healthcare companies are using the cloud to develop more personalized treatments for patients, financial services companies are using it to power real-time fraud detection and prevention, and video game makers are using it to deliver online games to millions of players around the globe.

With cloud computing, your business can become more agile, reduce costs, scale instantly, and deploy globally in minutes. The cloud gives you instant access to a broad range of technologies so you can innovate faster and build nearly anything, from infrastructure services such as compute, storage, and databases to IoT, machine learning, data analytics, and much more.

You can deploy technology services in a matter of minutes and get from idea to implementation several orders of magnitude faster than before. This gives you the freedom to experiment, test new ideas to differentiate the customer experience, and transform your business, such as adding machine learning and intelligence to your applications in order to personalize the experience for your customers and improve their engagement.

You don’t need to make large upfront investments in hardware or overpay for capacity you don’t use. Instead, you trade capital expense for variable expense and pay only for the IT you consume. With cloud computing you access resources from the cloud in real time, as they are needed, and you can scale capacity up and down instantly as your business changes. Cloud computing also makes it easy to expand to new regions and deploy globally in minutes. AWS, for example, has infrastructure all around the world, and putting applications in closer proximity to end users reduces latency and improves their experience.

No matter your location, size, or industry, the cloud frees you from managing infrastructure and data centers, so you can focus on what matters most to your business.



Blockchains are incredibly popular nowadays.
What is a blockchain, how does it work, what problems does it solve, and how can it be used?

As the name indicates, a blockchain is a chain of blocks that contain information. The technique was originally described in 1991 by a group of researchers and was intended to timestamp digital documents so that it would not be possible to backdate or tamper with them, almost like a notary.

However, it went mostly unused until it was adapted by Satoshi Nakamoto in 2009 to create the digital cryptocurrency Bitcoin.

A blockchain is a distributed ledger that is completely open to anyone, and it has an interesting property: once data has been recorded inside a blockchain, it becomes very difficult to change.

So how does that work?

Let’s take a closer look at a block. Each block contains some data, the hash of the block, and the hash of the previous block.

The data stored inside a block depends on the type of blockchain. The Bitcoin blockchain, for example, stores the details of a transaction: the sender, the receiver, and the amount of coins.
A block also has a hash. You can compare the hash to a fingerprint: it identifies a block and all of its contents, and it is always unique, just like a fingerprint.

Once a block is created, its hash is calculated. Changing something inside the block causes the hash to change. In other words, hashes are very useful when you want to detect changes to a block: if the fingerprint of a block changes, it is no longer the same block.
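You can see this fingerprint behaviour with any cryptographic hash function. Here is a minimal sketch in Python using SHA-256 (the hash function Bitcoin uses); the block data is made up purely for illustration:

```python
import hashlib

def block_hash(data: str) -> str:
    """Return the SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(data.encode()).hexdigest()

original = block_hash("Alice pays Bob 5 coins")
tampered = block_hash("Alice pays Bob 50 coins")

# Even a one-character change produces a completely different hash.
print(original == tampered)  # False
```

The two hex strings share nothing recognizable, which is exactly what makes the hash useful as a tamper-evident fingerprint.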

The third element inside each block is the hash of the previous block. This effectively creates a chain of blocks, and it is this technique that makes a blockchain so secure.


Say we have a chain of three blocks, where each block holds its own hash and the hash of the previous block.
Block 3 points to block 2, and block 2 points to block 1.
The first block is a little special: it cannot point to a previous block, because it is the first one. We call this block the genesis block.

Now let’s say we tamper with the second block. This causes the hash of that block to change, which in turn makes block 3 and all following blocks invalid, because they no longer store a valid hash of the previous block.
So changing a single block makes all following blocks invalid.
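The chain above, and the way tampering breaks it, can be sketched in a few lines of Python. The block layout and validity check are a simplified illustration, not Bitcoin’s actual block format:

```python
import hashlib

def sha(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Each block stores its data, the previous block's hash, and its own hash.
def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash,
            "hash": sha(data + prev_hash)}

genesis = make_block("genesis", "0" * 64)   # first block has no real predecessor
block2  = make_block("tx: A->B 5",  genesis["hash"])
block3  = make_block("tx: B->C 2",  block2["hash"])

def chain_is_valid(chain):
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False   # the link to the previous block is broken
        if cur["hash"] != sha(cur["data"] + cur["prev_hash"]):
            return False   # the block's own fingerprint is wrong
    return True

chain = [genesis, block2, block3]
print(chain_is_valid(chain))   # True

# Tamper with block 2 and recalculate its hash: block 3's stored
# prev_hash no longer matches, so the whole chain becomes invalid.
block2["data"] = "tx: A->B 500"
block2["hash"] = sha(block2["data"] + block2["prev_hash"])
print(chain_is_valid(chain))   # False
```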

But using hashes alone is not enough to prevent tampering. Computers these days are fast and can calculate hundreds of thousands of hashes per second, so you could effectively tamper with a block and recalculate the hashes of all the other blocks to make the chain valid again.

Proof of Work

To mitigate this, blockchains have something called proof of work. It is a mechanism that slows down the creation of new blocks.

In Bitcoin’s case, it takes about 10 minutes to calculate the required proof of work and add a new block to the chain. This mechanism makes it very hard to tamper with a block, because if you tamper with one block, you need to recalculate the proof of work for all the following blocks.
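A toy proof of work can be sketched in Python: keep hashing the block data with an incrementing nonce until the hash meets a difficulty target. The two-zero-digit difficulty here is a made-up value for illustration; Bitcoin’s real target is enormously harder, which is what makes each block take about 10 minutes:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 2) -> int:
    """Find a nonce so the block's hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes many hash attempts; verifying it takes just one.
nonce = proof_of_work("tx: A->B 5")
print(nonce)
```

That asymmetry, expensive to produce but cheap to verify, is why recomputing proof of work for a tampered block and every block after it is impractical.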

So the security of a blockchain comes from its creative use of hashing and the proof-of-work mechanism. But there is one more way that blockchains secure themselves, and that is by being distributed. Instead of using a central entity to manage the chain, blockchains use a peer-to-peer network that anyone is allowed to join.

When someone joins the network, they get a full copy of the blockchain, which the node can use to verify that everything is still in order.

Now let’s see what happens when someone creates a new block. The block is sent to everyone on the network, and each node verifies it to make sure it has not been tampered with. If everything checks out, each node adds the block to its own copy of the blockchain. All the nodes in the network thereby create consensus: they agree about which blocks are valid and which are not.

Blocks that are tampered with will be rejected by the other nodes in the network. So to successfully tamper with a blockchain, you would need to tamper with all the blocks on the chain, redo the proof of work for each block, and take control of more than 50% of the peer-to-peer network.
Only then would your tampered block be accepted by everyone else, which is almost impossible to do.



Blockchains are also constantly evolving, and one of the more recent developments is the creation of smart contracts.

Smart contracts are simple programs that are stored on the blockchain and can be used to automatically exchange coins based on certain conditions.

The creation of blockchain technology piqued a lot of people’s interest. Soon, others realized that the technology could be used for other things, such as:

>> Storing medical records.
>> Creating a digital notary.
>> Collecting taxes.

So now you know what a blockchain is, how it works at a basic level, and what problems it solves.


Internet Protocol version 6 (IPv6) | Adding a Temporary IPv6 Address on Linux.


IPv6 [Internet Protocol version 6]


Internet Protocol Version 6 (IPv6) is a network layer protocol that enables data communications over a packet switched network.

Packet switching involves the sending and receiving of data in packets between two nodes in a network. The working standard for the IPv6 protocol was published by the Internet Engineering Task Force (IETF) in 1998.

The IETF specification for IPv6 is RFC 2460. IPv6 was intended to replace the widely used Internet Protocol Version 4 (IPv4) that is considered the backbone of the modern Internet.

IPv4 currently supports a maximum of approximately 4.3 billion unique IP addresses. IPv6 supports a theoretical maximum of 2^128 addresses (340,282,366,920,938,463,463,374,607,431,768,211,456, to be exact!).
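That address count is simply 2 raised to the power 128, which Python’s arbitrary-precision integers can confirm exactly:

```python
# A 128-bit address space allows 2 to the power 128 distinct addresses.
print(2 ** 128)
# 340282366920938463463374607431768211456
```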

IPv6 and IPv4 share a similar architecture. The majority of transport layer protocols that function with IPv4 will also function with the IPv6 protocol. Most application layer protocols are expected to be interoperable with IPv6 as well, with the notable exception of the File Transfer Protocol (FTP).

An IPv6 address consists of eight groups of four hexadecimal digits, separated by colons. If one or more consecutive groups consist entirely of zeros, the notation can be shortened by replacing that run with a double colon (::), which may appear only once in an address.
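Python’s standard ipaddress module applies exactly this shortening; a quick illustration with an example address from the documentation prefix:

```python
import ipaddress

full = "2001:0db8:0000:0000:0000:0000:0000:0001"
addr = ipaddress.ip_address(full)

# Leading zeros in each group are dropped, and the longest run of
# all-zero groups collapses to a single "::".
print(addr)  # 2001:db8::1
```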

A main advantage of IPv6 is increased address space. The 128-bit length of IPv6 addresses is a significant gain over the 32-bit length of IPv4 addresses, allowing for an almost limitless number of unique IP addresses.



IPv6 features

* Supports source and destination addresses that are 128 bits (16 bytes) long.

* Requires IPSec support.

* Uses Flow Label field to identify packet flow for QoS handling by router.

* Allows hosts, but not routers, to fragment packets.

* Doesn’t include a checksum in the header.

* Uses a link-local scope all-nodes multicast address.

* Does not require manual configuration or DHCP.

* Uses host address (AAAA) resource records in DNS to map host names to IPv6 addresses.

* Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6 addresses to host names.

* Supports a minimum MTU of 1280 bytes (packets up to this size are delivered without fragmentation).

* Moves optional data to IPv6 extension headers.

* Uses Multicast Neighbor Solicitation messages to resolve IP addresses to link-layer addresses.

* Uses Multicast Listener Discovery (MLD) messages to manage membership in local subnet groups.

* Uses ICMPv6 Router Solicitation and Router Advertisement messages to determine the IP address of the best default gateway.


Adding a Temporary IPv6 Address on Linux.

Using “ip”

/sbin/ip -6 addr add <ipv6address>/<prefixlength> dev <interface>

eg: /sbin/ip -6 addr add 2001:49f0:2920::a2/64 dev eth0


Using “ifconfig”

/sbin/ifconfig <interface> inet6 add <ipv6address>/<prefixlength>

eg: /sbin/ifconfig eth0 inet6 add 2001:49f0:2920::a2/64


Add an IPv6 route through a gateway

Using “ip”

/sbin/ip -6 route add <ipv6network>/<prefixlength> via <ipv6address> [dev <device>]

eg: /sbin/ip -6 route add default via 2001:49f0:2920::1


Using “route”

/sbin/route -A inet6 add <ipv6network>/<prefixlength> gw <ipv6address> [dev <device>]

eg: /sbin/route -A inet6 add default gw 2001:49f0:2920::1


Removing an IPv6 address

Using “ip”

/sbin/ip -6 addr del <ipv6address>/<prefixlength> dev <interface>

eg: /sbin/ip -6 addr del 2001:49f0:2920::a2/64 dev eth0


Using “ifconfig”

/sbin/ifconfig <interface> inet6 del <ipv6address>/<prefixlength>

eg: /sbin/ifconfig eth0 inet6 del 2001:49f0:2920::a2/64





A Distributed Denial-of-Service (DDoS) attack is an attack in which multiple compromised computer systems attack a target, such as a server, website or other network resource, and cause a denial of service for users of the targeted resource. The flood of incoming messages, connection requests or malformed packets to the target system forces it to slow down or even crash and shut down, thereby denying service to legitimate users or systems.


How DDoS Attacks Work

In a DDoS attack, the incoming traffic flooding the victim originates from many different sources – potentially hundreds of thousands or more. This effectively makes it impossible to stop the attack simply by blocking a single IP address; plus, it is very difficult to distinguish legitimate user traffic from attack traffic when spread across so many points of origin.


Types of DDoS Attacks

There are many types of DDoS attacks. Common attacks include the following:

Traffic attacks: Traffic flooding attacks send a huge volume of TCP, UDP, and ICMP packets to the target. Legitimate requests get lost, and these attacks may be accompanied by malware exploitation.

Bandwidth attacks: This DDoS attack overloads the target with massive amounts of junk data. This results in a loss of network bandwidth and equipment resources and can lead to a complete denial of service.

Application attacks: Application-layer data messages can deplete resources in the application layer, leaving the target’s system services unavailable.


For Linux Servers

1. Find which IP address on the server is targeted by the DDoS attack

# netstat -plan | grep :80 | awk '{print $4}' | cut -d: -f1 | sort | uniq -c


2. Find which IPs the attack is coming from

# netstat -plan | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c



3. Harden the server against DDoS/SYN flood attacks


In /etc/sysctl.conf

Paste the following into the file; you can overwrite the current contents.

#Kernel sysctl configuration file for Red Hat Linux

# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and

# sysctl.conf(5) for more details.


# Disables packet forwarding

net.ipv4.ip_forward = 0


# Disables IP source routing

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.lo.accept_source_route = 0

net.ipv4.conf.eth0.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0


# Enable IP spoofing protection, turn on source route verification

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.lo.rp_filter = 1

net.ipv4.conf.eth0.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1


# Disable ICMP Redirect Acceptance

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.lo.accept_redirects = 0

net.ipv4.conf.eth0.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0


# Enable Log Spoofed Packets, Source Routed Packets, Redirect Packets

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.lo.log_martians = 1

net.ipv4.conf.eth0.log_martians = 1




# Disables the magic-sysrq key

kernel.sysrq = 0


# Decrease the time default value for tcp_fin_timeout connection

net.ipv4.tcp_fin_timeout = 15


# Decrease the time default value for tcp_keepalive_time connection

net.ipv4.tcp_keepalive_time = 1800


# Turn off the tcp_window_scaling

net.ipv4.tcp_window_scaling = 0


# Turn off the tcp_sack

net.ipv4.tcp_sack = 0


# Turn off the tcp_timestamps

net.ipv4.tcp_timestamps = 0


# Enable TCP SYN Cookie Protection

net.ipv4.tcp_syncookies = 1


# Enable ignoring broadcasts request

net.ipv4.icmp_echo_ignore_broadcasts = 1


# Enable bad error message Protection

net.ipv4.icmp_ignore_bogus_error_responses = 1


# Log Spoofed Packets, Source Routed Packets, Redirect Packets

net.ipv4.conf.all.log_martians = 1


# Increases the size of the socket queue (effectively, q0).

net.ipv4.tcp_max_syn_backlog = 1024


# Increase the tcp-time-wait buckets pool size

net.ipv4.tcp_max_tw_buckets = 1440000


# Allowed local port range

net.ipv4.ip_local_port_range = 16384 65535


Run /sbin/sysctl -p and /sbin/sysctl -w net.ipv4.route.flush=1 to apply the changes without a reboot.


TCP Syncookies

echo 1 > /proc/sys/net/ipv4/tcp_syncookies


Some IPTABLES Rules:

iptables -A INPUT -p tcp --syn -m limit --limit 1/s --limit-burst 3 -j RETURN

iptables -A INPUT -p tcp --syn -m state --state ESTABLISHED,RELATED --dport 80 -m limit --limit 1/s --limit-burst 2 -j ACCEPT

How To Setup DRBD on CentOS.


Distributed Replicated Block Device (DRBD)
DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several user space management applications, and some shell scripts. DRBD is traditionally used in high availability (HA) computer clusters, but beginning with DRBD version 9, it can also be used to create larger software defined storage pools with a focus on cloud integration.

Comparison to RAID-1
DRBD bears a superficial similarity to RAID-1 in that it involves a copy of data on two storage devices, such that if one fails, the data on the other can be used. However, it operates in a very different way from RAID and even network RAID.

In RAID, the redundancy exists in a layer transparent to the storage-using application. While there are two storage devices, there is only one instance of the application and the application is not aware of multiple copies. When the application reads, the RAID layer chooses the storage device to read. When a storage device fails, the RAID layer chooses to read the other, without the application instance knowing of the failure.

In contrast, with DRBD there are two instances of the application, and each can read only from one of the two storage devices. Should one storage device fail, the application instance tied to that device can no longer read the data. Consequently, in that case that application instance shuts down and the other application instance, tied to the surviving copy of the data, takes over.

Conversely, in RAID, if the single application instance fails, the information on the two storage devices is effectively unusable, but in DRBD, the other application instance can take over.

How it Works
The tool is built to facilitate communication between two servers imperceptibly, minimizing the amount of system resources it uses; it therefore does not noticeably affect system performance or stability.

DRBD facilitates this by mirroring two separate servers: one server, although passive, is usually a direct copy of the other. Any data written to the primary server is simultaneously copied to the secondary one through a real-time communication system, and any change made to the data is immediately replicated on the passive server.

The passive server only becomes active when the primary one fails. When such a failure occurs, DRBD immediately recognizes the mishap and shifts to the secondary server. This failover can be either manual or automatic: with manual failover, an operator must authorize the system to shift to the passive server when the primary fails, whereas automatic systems recognize problems on the primary server and shift to the secondary one on their own.

DRBD installation

Install the ELRepo repository on both systems:
# rpm -Uvh

Update both systems and set SELinux to permissive mode:
yum update -y
setenforce 0

Install DRBD
[root@server1 ~]# yum -y install drbd83-utils kmod-drbd83
[root@server2 ~]# yum -y install drbd83-utils kmod-drbd83

Load the DRBD module manually on both machines, or reboot:
/sbin/modprobe drbd

Create a partition for DRBD on both machines
[root@server1 ~]# fdisk -cu /dev/sdb
[root@server2 ~]# fdisk -cu /dev/sdb

Create the Distributed Replicated Block Device resource file
[root@server1 ~]# vi /etc/drbd.d/clusterdb.res

resource clusterdb {
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on server1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    flexible-meta-disk internal;
  }
  on server2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    meta-disk internal;
  }
}
Make sure that DNS resolution works between the two machines, or add entries for server1 and server2 to /etc/hosts on both.

Set an NTP server and add it to the crontab on both machines:
vi /etc/crontab
5 * * * * root ntpdate your.ntp.server

Copy the DRBD configuration and hosts file to server2
[root@server1 ~]# scp /etc/drbd.d/clusterdb.res server2:/etc/drbd.d/clusterdb.res
[root@server1 ~]# scp /etc/hosts server2:/etc/

Initialize the DRBD metadata storage on both machines
[root@server1 ~]# drbdadm create-md clusterdb
[root@server2 ~]# drbdadm create-md clusterdb

Start the DRBD service on both servers
[root@server1 ~]# service drbd start
[root@server2 ~]# service drbd start

On the PRIMARY server, run the drbdadm command
[root@server1 ~]# drbdadm -- --overwrite-data-of-peer primary all

Wait for the initial device synchronization to complete (100%) and confirm that you are on the primary server
[root@server1 yum.repos.d]# cat /proc/drbd

version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by [email protected], 2013-09-27 15:59:12
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r—–
ns:78848 nr:0 dw:0 dr:79520 al:0 bm:4 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2017180
[>………………..] sync’ed: 27.0% (2037180/2096028)K
finish: 0:02:58 speed: 11,264 (11,264) K/sec
ns:1081628 nr:0 dw:33260 dr:1048752 al:14 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Create a filesystem on the Distributed Replicated Block Device
[root@server1 yum.repos.d]# /sbin/mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (06-June-2017)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524007 blocks
26200 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Now you can mount the DRBD device on your primary server
[root@server1 ~]# mkdir /data
[root@server1 ~]# mount /dev/drbd0 /data

You don’t need to mount the disk on the secondary machine. All data you write to the /data folder will be synced to server2.


Adios 🙂
