Change root password on kvm qcow2 image

guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell.

We will use the guestfish tool to change the root password on a KVM qcow2 image, a common task when preparing guest images for OpenStack environments.

Install the libguestfs packages

# yum -y install libguestfs libguestfs-tools*

Generate a hashed password

# openssl passwd -1 "password"
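Because -1 uses a random salt, the output differs on every run, but a valid MD5-crypt hash always starts with $1$. A quick sanity check (a sketch; "password" is just the example plaintext):

```shell
# Generate an MD5-crypt hash; the salt is random, so the value changes
# on each run, but the $1$ prefix identifies the scheme.
HASH=$(openssl passwd -1 "password")
echo "$HASH"
# Newer OpenSSL (1.1.1+) also offers -6 for SHA-512 crypt, which
# /etc/shadow on CentOS 7 accepts.
```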

I used the CentOS cloud image “CentOS-7-x86_64-GenericCloud-1608.qcow2”:

# guestfish -a CentOS-7-x86_64-GenericCloud-1608.qcow2

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
‘man’ to read the manual
‘quit’ to quit the shell

><fs> run
><fs> list-filesystems
/dev/sda1: xfs
><fs> mount /dev/sda1 /
><fs> vi /etc/shadow (replace root’s password field with the encrypted hash generated above)
><fs> quit
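The interactive session above can also be done in one step. As an alternative sketch, the libguestfs-tools package ships virt-customize, which sets the root password directly (MyNewPassword is a placeholder, and the image is assumed to be in the current directory):

```shell
# Hedged alternative to the guestfish session above; requires
# libguestfs-tools and the image file. MyNewPassword is a placeholder.
virt-customize -a CentOS-7-x86_64-GenericCloud-1608.qcow2 \
    --root-password password:MyNewPassword
# On SELinux guests, older virt-customize versions may also need
# --selinux-relabel to keep the guest bootable.
```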

Create a new VM from this qcow2 image and verify the new password.

Troubleshooting:

# guestfish -a CentOS-7-x86_64-GenericCloud-1608.qcow2

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
‘man’ to read the manual
‘quit’ to quit the shell

><fs> run
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: Cannot access storage file ‘/root/CentOS-7-x86_64-GenericCloud-1608.qcow2’ (as uid:107, gid:107): Permission denied [code=38 int1=13]
><fs> quit

I ran these commands in an OpenStack QEMU environment, where libvirt’s qemu user (uid 107, gid 107) cannot read files under /root. Either export LIBGUESTFS_BACKEND=direct as the message suggests, or move the image into the libvirt images directory:

# mv /root/CentOS-7-x86_64-GenericCloud-1608.qcow2 /var/lib/libvirt/images/

That’s it. Start guestfish again and retry the run command.

Add Compute Node on Existing OpenStack using Packstack

While a single-node configuration is acceptable for small environments, testing, or POCs, most production environments require a multi-node configuration. Multi-node configurations group similar OpenStack services and provide scalability as well as the possibility of high availability. One of the great things about OpenStack is its architecture: every service is decoupled, and all communication between services happens through RESTful API endpoints. This gives tremendous flexibility in how to build a multi-node configuration. A few standard layouts have emerged (two-node, three-node, and four-node configurations), but many more variations are possible, so you are not stuck with a rigid deployment model. Here we add a compute node to an existing OpenStack installed using Packstack.

You have OpenStack installed all-in-one with Packstack. In this tutorial, we extend the existing installation (controller node plus compute node) with a new compute node, Compute-node1, online, without shutting down the existing nodes. The easiest and fastest way to extend an existing OpenStack cloud online is Packstack itself.
Existing nodes:
Installed as all-in-one with packstack

Controller node: 10.10.10.20, CentOS 7.2
Compute node: 10.10.10.20, CentOS 7.2

New Compute node:
Compute-node1: 10.10.10.21, CentOS 7.2

We will add an additional compute node to this existing all-in-one Packstack setup.

Step 1:

Edit the original answer file provided by packstack. This can usually be found in the directory from where packstack was first initiated.

Log in to existing all-in-one node as root and backup your existing answers.txt file:

# cp /root/youranwserfile.txt /root/youranwserfile.txt.old
# vi /root/youranwserfile.txt

Change the value of CONFIG_COMPUTE_HOSTS to the IP address of your new compute host, and add the current node’s IP to EXCLUDE_SERVERS.

Ensure you have listed the correct IPs in the EXCLUDE_SERVERS parameter to prevent existing nodes from being accidentally re-installed.

My changes on this node:

EXCLUDE_SERVERS=10.10.10.20

CONFIG_COMPUTE_HOSTS=10.10.10.21

Here I added my existing node’s IP 10.10.10.20 to EXCLUDE_SERVERS and replaced CONFIG_COMPUTE_HOSTS with the new compute node’s IP 10.10.10.21.

If the existing node has multiple IPs, list them all in EXCLUDE_SERVERS, comma-separated (e.g. 10.10.10.20,10.10.10.19).
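The two edits can also be scripted. The sketch below works on a scratch copy so it can be run anywhere; in practice you would point sed at /root/youranwserfile.txt:

```shell
# Demo on a scratch file containing the two relevant lines; the IPs
# match the example above (answers-demo.txt stands in for the real file).
cat > answers-demo.txt <<'EOF'
EXCLUDE_SERVERS=
CONFIG_COMPUTE_HOSTS=10.10.10.20
EOF
sed -i \
    -e 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=10.10.10.20/' \
    -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.10.10.21/' \
    answers-demo.txt
cat answers-demo.txt
```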

Optional:

If your compute node uses a different network interface for the private network, update the interface name (for example, from lo to eth1) in:

CONFIG_NOVA_COMPUTE_PRIVIF

Step 2:

Prepare your new compute node for the OpenStack deployment.

– stop the NetworkManager service
– disable SELinux (or set it to permissive)
– allow root SSH access from the existing node
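On CentOS 7 these three steps map to commands like the following (a sketch to run on the new node 10.10.10.21; Packstack pushes its SSH key during the "Setting up ssh keys" phase, so root SSH login just needs to be permitted):

```shell
# Run on the new compute node. These change system state; review first.
systemctl stop NetworkManager
systemctl disable NetworkManager
setenforce 0    # runtime only; the sed below makes it persistent
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Ensure sshd is enabled and running so the controller can connect
# during the packstack run.
systemctl enable sshd
systemctl start sshd
```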

Step 3:

That’s it. Now run packstack again on the controller node.

# packstack --answer-file=/root/youranwserfile.txt
Installing:
Clean Up [ DONE ]
root@10.10.10.21's password: 
Setting up ssh keys [ DONE ]
Discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Preparing servers [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding Swift Keystone manifest entries [ DONE ]
Adding Swift builder manifest entries [ DONE ]
Adding Swift proxy manifest entries [ DONE ]
Adding Swift storage manifest entries [ DONE ]
Adding Swift common manifest entries [ DONE ]
Adding Provisioning manifest entries [ DONE ]
Adding Provisioning Glance manifest entries [ DONE ]
Adding Provisioning Demo bridge manifest entries [ DONE ]
Adding Gnocchi manifest entries [ DONE ]
Adding Gnocchi Keystone manifest entries [ DONE ]
Adding MongoDB manifest entries [ DONE ]
Adding Redis manifest entries [ DONE ]
Adding Ceilometer manifest entries [ DONE ]
Adding Ceilometer Keystone manifest entries [ DONE ]
Adding Aodh manifest entries [ DONE ]
Adding Aodh Keystone manifest entries [ DONE ]
Adding Nagios server manifest entries [ DONE ]
Adding Nagios host manifest entries [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 10.10.10.21_prescript.pp
10.10.10.21_prescript.pp: [ DONE ]
Applying 10.10.10.21_nova.pp
10.10.10.21_nova.pp: [ DONE ]
Applying 10.10.10.21_neutron.pp
10.10.10.21_neutron.pp: [ DONE ]
Applying 10.10.10.21_nagios_nrpe.pp
10.10.10.21_nagios_nrpe.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]

**** Installation completed successfully ******

Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 10.10.10.20. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.10.10.20/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://10.10.10.20/nagios username: nagiosadmin, password: e68a1a992d2b44fd
* The installation log file is available at: /var/tmp/packstack/20161118-074021-mhbsqe/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20161118-074021-mhbsqe/manifests

Step 4:

Verify that the new compute node is registered with the existing controller.

[root@openstack-test ~]# source /root/keystonerc_admin
[root@openstack-test ~(keystone_admin)]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-osapi_compute 0.0.0.0 internal enabled XXX None
nova-metadata 0.0.0.0 internal enabled XXX None
nova-cert openstack-test.gsintlab.com internal enabled :-) 2016-11-19 12:59:57
nova-consoleauth openstack-test.gsintlab.com internal enabled :-) 2016-11-19 12:59:57
nova-scheduler openstack-test.gsintlab.com internal enabled :-) 2016-11-19 12:59:55
nova-conductor openstack-test.gsintlab.com internal enabled :-) 2016-11-19 12:59:56
nova-compute openstack-test.gsintlab.com nova enabled :-) 2016-11-19 12:59:56
nova-compute test-compute2.gsintlab.com nova enabled :-) 2016-11-19 13:00:04
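The nova-manage service listing above was deprecated in later releases; on a recent cloud the unified client gives the same view (a sketch, assuming python-openstackclient is installed and the admin credentials are sourced):

```shell
source /root/keystonerc_admin
openstack compute service list   # states show 'up' rather than :-)
openstack hypervisor list        # the new node should appear here too
```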

That’s it.

 

Add Additional Storage Node

Packstack flags a separate storage host as unsupported; set CONFIG_UNSUPPORTED=y in your /root/youranwserfile.txt file and update CONFIG_STORAGE_HOST with your new storage node’s IP.

# (Unsupported!) Server on which to install OpenStack services
# specific to storage servers such as Image or Block Storage services.

CONFIG_STORAGE_HOST=10.10.10.21

Internal DNS resolution with Neutron network

The Networking service enables users to control the name assigned to ports by the internal DNS. In this section we enable internal DNS resolution with the Neutron network on an OpenStack cloud and look at the internal DNS functionality offered by the Networking service and its interaction with the Compute service. Related capabilities include:

  • Integration of the Compute service and the Networking service with an external DNSaaS (DNS-as-a-Service).
  • User control over the behaviour of the Networking service with regard to DNS, through two attributes associated with ports, networks, and floating IPs.

dnsmasq is a lightweight, easy-to-configure daemon that provides DHCP, DNS, DNS caching, and TFTP in a single server. As a Domain Name Server (DNS) it caches queries to improve connection speeds to previously visited sites, and as a DHCP server it hands out internal IP addresses and routes to computers on a LAN. Either or both of these services can be used.

Steps to enable internal DNS resolution with neutron network

Edit the neutron.conf file and set the dns_domain parameter in the [DEFAULT] section to a value other than openstacklocal (its default). As an example:

vi /etc/neutron/neutron.conf

dns_domain = example.org.

Add dns to extension_drivers in the [ml2] section of ml2_conf.ini. As an example:

vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
extension_drivers = port_security,dns
Restart the neutron services and the dnsmasq daemon.
Create a new private network.
Note the IP of the new subnet’s DHCP port.

Edit the new private network subnet’s DNS name servers, pointing them at the DHCP port IP you noted.
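A hedged sketch of these last steps on the controller; private-subnet and 192.168.100.2 are example names standing in for your subnet and its DHCP port IP:

```shell
# Restart neutron so the dns extension driver loads; the DHCP agent
# respawns dnsmasq with the new settings.
systemctl restart neutron-server neutron-dhcp-agent
# Make the DHCP port IP the subnet's DNS name server so instances
# resolve names through dnsmasq.
openstack subnet set --dns-nameserver 192.168.100.2 private-subnet
```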

Create a new instance and check the internal DNS resolution.
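One way to check (a sketch; <port-id> is a placeholder for the instance port's ID): when the dns extension driver is active, each port carries a dns_assignment attribute with the hostname and FQDN generated under the dns_domain you configured.

```shell
# Show the hostname/FQDN neutron generated for the instance's port.
openstack port show <port-id> -c dns_assignment
# From another instance on the same network, the FQDN should resolve:
#   ping myinstance.example.org   (example hostname under dns_domain)
```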