OpenVPN Server Installation and Configuration in Linux

OpenVPN is one of the most popular and widely used VPN software solutions. Its popularity comes from its strong feature set, ease of use, and extensive platform support. OpenVPN is open-source software, which means anyone can freely use and modify it as needed. In this article, we walk through OpenVPN server installation and configuration on Linux CentOS.

It uses a client-server model to provide secure communication between the client and the internet. The server side is connected directly to the internet; the client connects to the server and reaches the internet indirectly through it. On the internet, the client appears as the server itself and takes on the server's physical location and other attributes, which means the client's identity is effectively hidden.

OpenVPN uses OpenSSL for encryption and authentication, and it can use either UDP or TCP for transport. Notably, OpenVPN can work through HTTP proxies and NAT, and can pass through firewalls.
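
For example, to pass a restrictive firewall that only allows outbound web traffic, OpenVPN is often run over TCP port 443 instead of the default UDP 1194. A minimal sketch of the relevant server.conf directives (illustrative values only, not the setup used later in this article):

#Run over TCP/443 so the traffic resembles HTTPS to firewalls
proto tcp
port 443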

Advantages

  • OpenVPN is open source, which means it has been thoroughly vetted and tested many times by different people and organizations.
  • It can utilize numerous encryption techniques and algorithms.
  • It can go through firewalls.
  • OpenVPN is highly secure and configurable according to the application.

Technical Details

  • OpenVPN can use up to 256-bit encryption via OpenSSL; the higher the encryption level, the lower the overall performance of the connection.
  • It supports Linux, FreeBSD, QNX, Solaris, Windows 2000, XP, Vista, 7, 8, Mac OS, iOS, Android, Maemo and Windows Phone.
  • Features such as logging and authentication can be extended using third-party plug-ins and scripts.
  • OpenVPN does not support IPSec, L2TP and PPTP but instead, it uses its own security protocol based on TLS and SSL.

OpenVPN Server Installation and Configuration in Linux CentOS

Install the EPEL repository

yum -y install epel-release

Install OpenVPN, easy-rsa, and iptables-services

yum -y install openvpn easy-rsa iptables-services

Copy the easy-rsa key generation scripts to /etc/openvpn/.

cp -r /usr/share/easy-rsa/ /etc/openvpn/

Go to the easy-rsa directory and set your SSL certificate values in the vars file

cd /etc/openvpn/easy-rsa/2.*/
vi vars

# Increase this to 2048 if you
# are paranoid. This will slow
# down TLS negotiation performance
# as well as the one-time DH parms
# generation process.
export KEY_SIZE=2048

# In how many days should the root CA key expire?
export CA_EXPIRE=3650

# In how many days should certificates expire?
export KEY_EXPIRE=3650

# These are the default values for fields
# which will be placed in the certificate.
# Don't leave any of these fields blank.
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="cloudkb"
export KEY_EMAIL="admin@cloudkb.net"
export KEY_OU="cloud"

# X509 Subject Field
export KEY_NAME="EasyRSA"

 

Generate the new keys and certificates for your installation. First, source the vars file.

source ./vars

Clean any old keys

./clean-all

Build the Certificate Authority (CA). This creates ca.crt and ca.key in /etc/openvpn/easy-rsa/2.0/keys/.

./build-ca

Generate a server key and certificate. Run this command in the current directory.

./build-key-server server

Leave the extra attributes blank, and answer "y" to both "Sign the certificate?" and "1 out of 1 certificate requests certified, commit?".

Execute the build-dh command to generate the Diffie-Hellman parameters

./build-dh

Generate client key and certificate

./build-key client

Leave the extra attributes blank, and answer "y" to both "Sign the certificate?" and "1 out of 1 certificate requests certified, commit?".

Move or copy the directory `keys/` to `/etc/openvpn`.

cd /etc/openvpn/easy-rsa/2.0/
cp -r keys/ /etc/openvpn/

Create the OpenVPN server configuration file.

cd /etc/openvpn/
vi server.conf

Add the configuration below

#Change this to your preferred port
port 1337

#You can use udp or tcp
proto udp

# "dev tun" will create a routed IP tunnel.
dev tun

#Certificate Configuration

#ca certificate
ca /etc/openvpn/keys/ca.crt

#Server Certificate
cert /etc/openvpn/keys/server.crt

#Server key; keep this secret
key /etc/openvpn/keys/server.key

#Match the DH key size generated in /etc/openvpn/keys/
dh /etc/openvpn/keys/dh2048.pem

#Internal subnet that clients get their addresses from once connected
server 192.168.10.0 255.255.255.0

#This line redirects all client traffic through the OpenVPN tunnel
push "redirect-gateway def1"

#Provide DNS servers to the client; you can use Google DNS
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

#Allow multiple clients to connect with the same key
duplicate-cn

keepalive 20 60
comp-lzo
persist-key
persist-tun
daemon

#enable log
log-append /var/log/openvpn/openvpn.log

#Log Level
verb 3

Create log file.

mkdir -p /var/log/openvpn/
touch /var/log/openvpn/openvpn.log

Enable IP forwarding. Open the /etc/sysctl.conf file for editing

vi /etc/sysctl.conf

Add the following line to the /etc/sysctl.conf file

net.ipv4.ip_forward = 1
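
Apply the change without rebooting:

sysctl -p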

Disable SELinux.

Disable firewalld and enable iptables.
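
One way to do both (a sketch: setenforce 0 only switches SELinux to permissive mode until the next reboot, and the sed line makes that persistent; use SELINUX=disabled instead if you want it fully off):

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld

Then enable and start iptables, and flush any existing rules: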
systemctl enable iptables
systemctl start iptables
iptables -F

Update the NAT settings and open the OpenVPN port in your firewall.

iptables -A INPUT -p udp --dport 1337 -j ACCEPT

My eth0 (public traffic) IP is 172.217.10.14 and my eth1 (private traffic) IP is 10.10.1.10.
The OpenVPN tun0 IP is 192.168.10.1. I enabled the VPN for both the private and public networks.

Example (generic form first, then with the addresses above):

ip route add private-net-subnet via host-private-ip
ip route add host-private-ip via vpn-private-ip

ip route add 10.10.1.0/24 via 10.10.1.10
ip route add 10.10.1.10 via 192.168.10.1

Enable NAT (masquerading) on both interfaces:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
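
With the iptables-services package installed, save the rules so they persist across reboots:

service iptables save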

Client Setup

Download the client key files ca.crt, client.crt, and client.key from the /etc/openvpn/keys directory on the server.
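
For example, with scp (a sketch: 172.217.10.14 is the server's public IP used above, and the local destination directory is up to you):

scp root@172.217.10.14:/etc/openvpn/keys/{ca.crt,client.crt,client.key} .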

Create a new file named client.ovpn with the configuration below.

client
dev tun
proto udp

#openvpn Server IP and Port
remote 172.217.10.14 1337

resolv-retry infinite
nobind
persist-key
persist-tun
mute-replay-warnings
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo

Client Tools

Use the client.ovpn file above in your client tool.

Windows users: the official OpenVPN GUI client is available.

macOS users: use Tunnelblick.

Linux users: try the networkmanager-openvpn plugin through NetworkManager,

or use the terminal:

sudo openvpn --config client.ovpn


Set Up OpenVPN PAM Authentication with the auth-pam Module

The OpenVPN auth-pam plugin gives an OpenVPN server the ability to hook into Linux PAM modules, adding a powerful authentication layer to OpenVPN.

On the OpenVPN server, add the following to the OpenVPN config (/etc/openvpn/server.conf)

plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so openvpn

For Ubuntu and Debian distributions, the path to the plugin is /usr/lib/openvpn/openvpn-plugin-auth-pam.so.

Create a new PAM service file located at /etc/pam.d/openvpn.

auth required pam_unix.so shadow nodelay
account required pam_unix.so

On the OpenVPN client, add the following to the OpenVPN config (client.ovpn)

auth-user-pass
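
Optionally, auth-user-pass can point at a credentials file containing the username on the first line and the password on the second, which avoids the interactive prompt (the filename here is just an example; keep the file readable only by the OpenVPN process, and note that some OpenVPN builds disallow reading the password from a file):

auth-user-pass /etc/openvpn/credentials.txt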

Restart the OpenVPN server. Any new OpenVPN connection will first be authenticated with pam_unix.so, so each user will need a local system account.

If the OpenVPN server exits with the log below after an authentication attempt, you most likely are running OpenVPN within a chroot and have not created a tmp directory.

Could not create temporary file '/tmp/openvpn_acf_xr34367701e545K456.tmp': No such file or directory

Simply create a tmp directory within the chroot with the permissions that match your OpenVPN server config.

# grep -E "(^chroot|^user|^group)" /etc/openvpn/server.conf
chroot /var/lib/openvpn
user openvpn
group openvpn

# mkdir --mode=0700 -p /var/lib/openvpn/tmp
# chown openvpn:openvpn /var/lib/openvpn/tmp

 

Extending the OpenVPN PAM Service

You can extend the use of PAM by adding to the /etc/pam.d/openvpn file.

 

#auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth substack system-auth
auth include postlogin
account required pam_nologin.so
account include system-auth
password include system-auth
# pam_selinux.so close should be the first session rule
session required pam_selinux.so close
session required pam_loginuid.so
session optional pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session required pam_selinux.so open
session required pam_namespace.so
session optional pam_keyinit.so force revoke
session include system-auth
session include postlogin
-session optional pam_ck_connector.so

 

Restart the OpenVPN service.
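
On CentOS 7 with systemd, the unit name is typically derived from the config file name, so for /etc/openvpn/server.conf this would be:

systemctl restart openvpn@server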


Set up an iSCSI Target and Initiator and configure multipath

We used two CentOS 7 VMs for the iSCSI target, initiator, and multipath configuration.

network-vm1
Network1 : 10.1.1.11
Network2 : 10.1.2.20

network-vm2
Network1 : 10.1.1.12
Network2 : 10.1.2.21


iSCSI Target and Initiator

iSCSI Target Creation

An iSCSI target can be a dedicated physical device in a network, or it can be an iSCSI software-configured logical device on a networked storage server. The target is the end point in SCSI bus communication. Storage on the target, accessed by an initiator, is defined by LUNs.

Log in to the network-vm1 server and install scsi-target-utils.

Install the EPEL release repo.

[root@network-vm1 ~]# yum install epel-release -y
[root@network-vm1 ~]# yum install scsi-target-utils -y

Make sure port 3260 is enabled in the firewall.

Example with iptables:

[root@network-vm1 ~]# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT

Start and enable the target service.

[root@network-vm1 ~]# service tgtd start
[root@network-vm1 ~]# systemctl enable tgtd

Attach storage for the LUNs and create a partition.

[root@network-vm1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xdd5037ae.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-104857599, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
Using default value 104857599
Partition 1 type Linux and size 50 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Define the target in the /etc/tgt/targets.conf file.

[root@network-vm1 ~]# vi /etc/tgt/targets.conf

default-driver iscsi
<target iqn.2017-03.com.gopal.test:target1>
    backing-store /dev/sdb1
</target>

Restart the target service.

[root@network-vm1 ~]# service tgtd restart
Redirecting to /bin/systemctl restart tgtd.service

Verify your configurations.

[root@network-vm1 ~]# tgt-admin --show
Target 1: iqn.2017-03.com.gopal.test:target1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 53686 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/sdb1
Backing store flags:
Account information:
ACL information:
ALL

Install iSCSI Initiator and configure it

Log in to the network-vm2 server and install iscsi-initiator-utils.

[root@network-vm2 ~]# yum install iscsi-initiator-utils -y

Discover the target. Run the discovery against each of the target's IP addresses.

[root@network-vm2 ~]# iscsiadm -m discovery -t sendtargets -p 10.1.1.11
10.1.1.11:3260,1 iqn.2017-03.com.gopal.test:target1

[root@network-vm2 ~]# iscsiadm -m discovery -t sendtargets -p 10.1.2.20
10.1.2.20:3260,1 iqn.2017-03.com.gopal.test:target1

Connect to the target.

[root@network-vm2 ~]# iscsiadm -m node -T iqn.2017-03.com.gopal.test:target1 --login
Logging in to [iface: default, target: iqn.2017-03.com.gopal.test:target1, portal: 10.1.1.11,3260] (multiple)
Login to [iface: default, target: iqn.2017-03.com.gopal.test:target1, portal: 10.1.1.11,3260] successful.
[root@network-vm2 ~]#

Log in to all the targets.

[root@network-vm2 ~]# iscsiadm -m node -l
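
You can confirm the active sessions, one per portal, with:

[root@network-vm2 ~]# iscsiadm -m session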

Check the list of drives.

[root@network-vm2 ~]# fdisk -l
Disk /dev/sdb: 53.7 GB, 53686042624 bytes, 104855552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

If you want, create a file system and mount it.

[root@network-vm2 ~]# mkfs.ext4 /dev/sdb

[root@network-vm2 ~]# mount /dev/sdb /opt/iscsi-drive

[root@network-vm2 ~]# blkid /dev/sdb
/dev/sdb: UUID="3b7e58de-1342-4fbb-98fc-9e5d5888e770" TYPE="ext4"
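
If you want the file system mounted automatically at boot, add an /etc/fstab entry (a sketch: use the UUID reported by blkid above, with the _netdev option so mounting waits for the network and the iSCSI session):

UUID=3b7e58de-1342-4fbb-98fc-9e5d5888e770 /opt/iscsi-drive ext4 _netdev 0 0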

Configuring Multipath for iSCSI Storage LUNs in CentOS 7

I am using the network-vm2 server to configure multipath.

Install the multipath package and start the service.

[root@network-vm2 ~]# yum install device-mapper-multipath -y

[root@network-vm2 ~]# systemctl start multipathd

Verify the iSCSI targets.

[root@network-vm2 ~]# iscsiadm -m discovery -t sendtargets -p 10.1.1.11
10.1.1.11:3260,1 iqn.2017-03.com.gopal.test:target1

[root@network-vm2 ~]# iscsiadm -m discovery -t sendtargets -p 10.1.2.20
10.1.2.20:3260,1 iqn.2017-03.com.gopal.test:target1

Log in to all the targets.

[root@network-vm2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2017-03.com.gopal.test:target1, portal: 10.1.2.20,3260] (multiple)
Login to [iface: default, target: iqn.2017-03.com.gopal.test:target1, portal: 10.1.2.20,3260] successful.

Configure basic Multipath

[root@network-vm2 ~]# mpathconf --enable --with_multipathd y

Add the following entries.

[root@network-vm2 ~]# vi /etc/multipath.conf

defaults {
    polling_interval 10
    path_selector "round-robin 0"
    path_grouping_policy multibus
    path_checker readsector0
    rr_min_io 100
    max_fds 8192
    rr_weight priorities
    failback immediate
    no_path_retry fail
    user_friendly_names yes
}

[root@network-vm2 ~]# multipath -ll
mpatha (360000000000000000e00000000010001) dm-2 IET ,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 3:0:0:1 sdb 8:16 active ready running
`- 4:0:0:1 sdc 8:32 active ready running

Adding Target partition to multipath

Add a multipath alias for the iSCSI LUN in /etc/multipath.conf.

multipaths {
    multipath {
        wwid 360000000000000000e00000000010001
        alias LUN0
    }
}

Restart the multipathd service.

[root@network-vm2 ~]# systemctl restart multipathd
[root@network-vm2 ~]# multipath -ll
LUN0 (360000000000000000e00000000010001) dm-2 IET ,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 3:0:0:1 sdb 8:16 active ready running
`- 4:0:0:1 sdc 8:32 active ready running

Check the list of drives with fdisk -l.

Disk /dev/mapper/LUN0: 53.7 GB, 53686042624 bytes, 104855552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
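
From here on, use the multipath device rather than the individual /dev/sdb or /dev/sdc paths. For example (a sketch: the mount point /mnt/lun0 is arbitrary):

[root@network-vm2 ~]# mkfs.ext4 /dev/mapper/LUN0
[root@network-vm2 ~]# mkdir -p /mnt/lun0
[root@network-vm2 ~]# mount /dev/mapper/LUN0 /mnt/lun0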


Detect a New Hard Disk Without Reboot in VMware

When you add a new hard disk to a Linux OS running in a virtual environment such as VMware Workstation, it will not show up until you reboot the guest OS. To detect the new hard drive without a reboot, use the following steps.

Add a New Disk To VM

First, add the hard disk from the VMware hardware settings menu.
Click VM > Settings, then add the new hard disk.

Once done, check the SCSI hosts currently configured on your Linux system.

# ls -l /sys/class/scsi_host/
total 0
lrwxrwxrwx 1 root root 0 Feb 10 04:25 host0 -> ../../devices/pci0000:00/0000:00:07.1/ata1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Feb 10 04:25 host1 -> ../../devices/pci0000:00/0000:00:07.1/ata2/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Feb 10 04:25 host2 -> ../../devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2

To detect the newly attached hard drive, you first need the host bus number, which you can get with the command below.

# grep mpt /sys/class/scsi_host/host?/proc_name

You should get an output like below

/sys/class/scsi_host/host2/proc_name:mptspi

Rescan the SCSI Bus to Add a SCSI Device Without Rebooting the VM

A rescan can be issued by typing the following command:

# echo "- - -" > /sys/class/scsi_host/host2/scan

Once done, verify the list of drives on your machine.

# fdisk -l