types of file systems available in RHEL 7

XFS – default in RHEL 7. B-tree based. Good tuning options for different workloads.
ext4 – older; based on ext2 from 1993. Not as scalable.
Btrfs – copy-on-write file system; seen as the future option.
vfat – for compatibility with Windows.
GFS2 – clustering file system for active-active HA cluster environments.
Gluster – a distributed file system. Built from bricks (typically XFS) and used for cloud setups.

create partitions

Add a new virtual disk.
Verify the available space.
# cat /proc/partitions
The /proc directory contains all the information about what is happening inside the kernel.
sdb is the new device used to create partitions.
On sda there are 2 partitions: sda1 and sda2.
Create partitions with fdisk:
# fdisk /dev/sdb
Type “m” for help to see the menu. The most important options are:
“p” – print the current layout (partition table).
“n” – add a new partition.
“w” – write the information to disk.
Use “p” to check the current layout before creating a partition.
The disk size is 1073 MB, which corresponds to 2097152 sectors; every sector is 512 bytes (half a KB). Partitions are created in terms of these sectors.
Type “n” for a new partition.
!!!!!! Always use “p” for a primary partition unless you want to create more than 4 partitions on the disk.
Accept the defaults (press Enter) for the partition type, number and first sector. The first 1 MB is used to store metadata.
For the last sector, give a size such as +100M. Without the “M” suffix the value is taken as a sector count, which is far too small.
Write changes to disk with “w”.
Verify:
# cat /proc/partitions
Now a file system has to be created. If writing the partition table gave a “device or resource busy” error, reboot the system first.
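A minimal sketch of putting a file system on the new partition and mounting it (XFS, the RHEL 7 default; the mount point /mydata is just an example name):
# mkfs.xfs /dev/sdb1
# mkdir /mydata
# mount /dev/sdb1 /mydata
# df -h /mydata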

journalling in RHEL 7

rsyslog is the old logging system; journald is the new one.
All journald logs are stored in binary files.
systemctl can be used to see the most recent log entries for a unit.
# systemctl status nfs
# journalctl – shows the content of the journal
Filtering:
# journalctl -b (information since the current boot).
# journalctl --since=yesterday (all the info since yesterday)
Look at another server; the slapd process is logging there.
# systemctl status slapd
# systemctl status slapd -l
The information shown by systemctl status comes from the journal (journald).
# journalctl -u slapd (all information about a unit)
Detailed output – very useful:
# journalctl -u slapd -o verbose
Configuring log rotate.
# cd /etc
# vi logrotate.conf – the main configuration file.
# ls logrotate.d/ – the directory with the rotation configuration for packages installed from RPM.
All the information about log rotation should be here:
# vi logrotate.conf
# tail /var/log/secure
# journalctl _COMM=su (filter on a specific command; other fields can be specified the same way)
more information with verbose:
# journalctl _COMM=su -o verbose

Configure log rotate:
# vi /etc/logrotate.conf
rotate 6
No need to restart anything: logrotate runs from cron and the file will be picked up on the next run (a minimal sketch of the file follows).
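A minimal logrotate.conf sketch using the rotate 6 setting above; the exact defaults on a given system may differ:
weekly
rotate 6
create
dateext
include /etc/logrotate.d
This rotates logs weekly, keeps 6 old copies, creates a fresh empty log after rotation, adds a date suffix to rotated files, and pulls in the per-package rules from /etc/logrotate.d.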

cron and at

# vi /etc/crontab
Example of a time specification (a sample line follows):
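A sample line in the /etc/crontab format (minute, hour, day of month, month, day of week, user, command); the backup script path is just a hypothetical example:
0 2 * * 1-5 root /usr/local/bin/backup.sh
This would run the script at 02:00 every weekday (Monday to Friday).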
Create cronjob:
# crontab -e
For another user:
# su - username -c "crontab -e"
There are other configuration files and directories in /etc/:
cron.daily, cron.monthly, etc. – these contain the scripts that run on a daily, monthly, etc. basis.
The files in these directories are plain scripts; there is no time specification. Drop a script there and it will be executed.
The directory /etc/cron.d holds cron files in the same format as /etc/crontab.
You can put a file into cron.d and it will be picked up, or use crontab -e to create the entry instead (a cron.d sketch follows).
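A minimal sketch of a cron.d entry, assuming a hypothetical file /etc/cron.d/testjob; as in /etc/crontab, each line also needs a user field:
*/10 * * * * root logger "test message from cron.d"
This logs a test message every 10 minutes.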

at runs a job just one time.
# systemctl status atd -l
# at 14:30
at> logger hello at 2.30 from at
Ctrl+d
check the status:
# atq
Help
# atrm --help
# atrm 1 (job ID)
# atq – check the queue of jobs.
All jobs are stored in the /var/spool/at/ directory.

KVM and virtualization, virsh and virt-manager

Find out whether the CPU supports virtualization:
# cat /proc/cpuinfo
The interesting part is the flags line.
Intel-based CPUs have the vmx flag.
# lsmod | grep kvm
There are two modules: the generic Linux kvm module and a platform-specific one (kvm_intel here).
Check the status of libvirtd
# systemctl status libvirtd
Check the link:
# ip link show
The device virbr0 (virtual bridge) is created specially for virtualization. It acts as an embedded bridge to share the connection.
To support KVM need 64 bit kernel:
# arch
Check that the CPU supports it:
# grep vmx /proc/cpuinfo
Need libvirtd available:
# systemctl status libvirtd
Virtualization shell:
# virsh
After starting the shell, type “help” to see the list of commands.
Basic commands:
List the running VMs:
# virsh list
List all existing VMs:
# virsh list --all
# virsh destroy machineName – stops the VM immediately.
Start:
# virsh start machineName
All VMs have configuration files.
The files are in:
# cd /etc/libvirt
These are the configuration files for libvirtd.
QEMU is an older emulator that is used in the KVM environment.
# cd /etc/libvirt/qemu
These are the configuration files for the VMs.
# vi vmName.xml
The best way to edit the configuration is by using virsh:
# virsh edit vmName
This file contains the details about the VM, including the disk image file, which is not the best option for performance but the easiest to implement (a sketch of the disk section follows).
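As a rough sketch (not taken from a real VM; the file name and disk format are examples), the disk part of the XML typically looks like this:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vmName.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>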
Check the network side of the VM:
# ip link show

Start virt-manager:
# virt-manager

Packages:
# yum install -y kvm libvirt virt-manager qemu-kvm

yum, rpm and repositories

# yum repolist
If the list shows no usable repositories, one has to be configured.
Repositories can be on the internet or a directory on the server, for example /repo.
# cd /etc/yum.repos.d/
Create a file:
# vi myrepo.repo
It is important that the file name ends in .repo.
3 things are very important:
Label:
[myrepo]
name=myrepo
baseurl=file:///repo
This uses a URI, which can be file:// for a local repository, or http:// or ftp://. After that comes the path to the repository. A local path starts with /, so there are three slashes: file:///repo !!!!!!!!!!
gpgcheck=0
This switches off the GPG signature/integrity check.
For a test this is OK, because setting up the GPG keys takes extra steps.
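Putting the pieces together, the whole myrepo.repo file is simply:
[myrepo]
name=myrepo
baseurl=file:///repo
gpgcheck=0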
Check it:
# yum repolist
Search the repositories for packages (here matching “ftp”):
# yum search ftp
# which chronyd
This is the daemon that manages time synchronization (chrony).
The name of the binary is /sbin/chronyd.
Find from rpm:
# rpm -qf /sbin/chronyd
I see that the name of the RPM is chrony
Find all that is in the package:
# rpm -ql chrony
Before installing a package, check what it contains and which scripts it runs. For an installed package:
# rpm -ql packageName
For a downloaded package file that is not yet installed:
# rpm -qpl packageName.rpm
Check the scripts:
# rpm -qp --scripts packageName.rpm
Install a local package which is not in repository:
# yum localinstall packageName
Useful command:
# repoquery – inspects packages while they are still in the repository:
# repoquery -ql yp-tools

Set up a repository manually:
Download the packages, e.g. wget ftp://server.example.com/repository
Extract them into a directory such as /downloads
# createrepo /downloads
# repoquery – request information.
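A sketch of the whole workflow, assuming the downloaded RPMs end up in /downloads and the createrepo package is available:
# mkdir /downloads
(download or copy the .rpm files into /downloads)
# yum install -y createrepo
# createrepo /downloads
Then point a .repo file at it with baseurl=file:///downloads and check with yum repolist.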

processes

To see processes and their child relationships, use:
# ps fax
memory:
# free -m
(-m – megabytes)
# killall dd (terminate the processes with dd name)
# nice --help
nice [option] [command]
# nice -n 10 httpd
renice – for processes that are already running
# renice -n -10 PID
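A short sketch: start a dd process, look up its PID and current nice value, then renice it (12345 stands for whatever PID ps prints):
# dd if=/dev/zero of=/dev/null &
# ps -o pid,ni,comm -C dd
# renice -n -10 12345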

NetworkManager and nmtui

# nmtui
These connections are managed by NetworkManager.
# systemctl restart NetworkManager
# systemctl status NetworkManager
check the connection information:
# ip a
On a temporary basis (lost after a reboot):
# ip route show
# ip route add 10.0.1.0/24 via 192.168.4.44
This route tells the computer the next hop to use for that network.
Make the configuration permanent by editing the interface file:
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.168.3.4
After making changes, restart the connection:
# nmcli con down ens33; nmcli con up ens33
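For reference, a sketch of a static ifcfg-ens33; the IP address and DNS values are just examples, GATEWAY is the line added above:
TYPE=Ethernet
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.3.10
PREFIX=24
GATEWAY=192.168.3.4
DNS1=192.168.3.1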

Troubleshooting:
# hostname
# ping example.com
# traceroute example.com
# dig example.com
# nmap example.com
# netstat -tulpen

Managing ACLs

First check the current ACLs with getfacl:
# getfacl /directory
When an ACL is set, the existing standard permissions are copied into ACL entries; after that, ls -l no longer shows the effective group permissions directly (the group field reflects the ACL mask).
# setfacl -R -m g:sales:rx /directory
Apply a default ACL so it is inherited by items created in the future:
# setfacl -m d:g:sales:rx /directory
Check the ACL again (and on any newly created item inside the directory):
# getfacl /directory
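Roughly what getfacl prints for /directory after the two setfacl commands above (owner and group are shown as root here; the real output depends on the directory):
# file: directory
# owner: root
# group: root
user::rwx
group::r-x
group:sales:r-x
mask::r-x
other::r-x
default:user::rwx
default:group::r-x
default:group:sales:r-x
default:mask::r-x
default:other::r-x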
See the man page for more examples:
# man setfacl

automount SAMBA and NFS server

First install automount service:
# yum install -y autofs
The main configuration file is auto.master.
# vi /etc/auto.master
There I configure the home directories of the LDAP users to be handled by the /etc/auto.guests file.
/home/guests /etc/auto.guests
Now set up the content of the file /etc/auto.guests.
In the first position I use a *, which matches anything. On the exam, use the given NFS server. At the end of the line, & is replaced by whatever the * matched.
* -rw nfsserver:/home/guests/&
For SAMBA, use the following, which is a little more complicated:
* -fstype=cifs,username=ldapuser,password=password ://server.example.com/data/&
The structure of the file is the same. First comes the directory, where * matches anything. Then there is a list of mount options.
fstype=cifs tells automount that this is a SAMBA (CIFS) share.
username=ldapuser,password=password – completely opens the share to all LDAP users.
://server.example.com/data/& – this is the path to the SAMBA share.

Configuring NFS and automount
Set up the NFS environment. First search for the server packages:
# yum search nfs
This tells us what we need: nfs-utils, to create a small NFS server.
# yum install -y nfs-utils
Create the file /etc/exports
# vi /etc/exports
Here I say what I am exporting: /data
With the mount options: rw
And to whom I want to open it: * (everybody), as *(rw,no_root_squash)
Instead of * an IP address or network can be used to restrict who may mount it.
/data *(rw,no_root_squash)
Start the server:
# systemctl start nfs
If it does not start, check the status with:
# systemctl status -l nfs
!!!!! Create the directory to be shared before starting the server:
# mkdir /data
# cd /data
# touch file1
# systemctl start nfs
On the client I will connect to the NFS server.
# showmount -e localhost
This shows the mounts that are exported by the server (queried here as localhost).
Mount the directory from NFS server to the mount point /mnt:
# mount localhost:/data /mnt
Now I should see the files at the mount point:
# ls /mnt
Create an automount for the NFS environment:
edit the file /etc/auto.master and add the line:
/nfsserver /etc/auto.nfsserver
Create the file with name:
# vi /etc/auto.nfsserver
What do I want to do? If somebody goes into the directory blah under /nfsserver, they end up in /data on the remote host.
blah -rw localhost:/data
Restart autofs:
# systemctl restart autofs
Automount has created the directory automatically:
# cd /nfsserver
Right now there is nothing visible there, but if I use:
# cd blah
it goes into the directory: the NFS export is mounted on demand (a quick check is shown below).
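A quick way to check that the on-demand mount really happened (output is only a sketch; the exact mount type and options depend on the NFS version negotiated):
# ls /nfsserver/blah
file1
# mount | grep /nfsserver/blah
localhost:/data on /nfsserver/blah type nfs4 (rw,...)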