GlusterFS 3.6.0 is Alive!

Just in time for Halloween, GlusterFS 3.6.0 has been released. The newest release, as you would expect, is full of new features and stability improvements. GlusterFS, as you may know, is a general-purpose scale-out distributed file system. It aggregates storage exports over network interconnects to provide a single unified namespace, is stackable and […]

Read More

GlusterFS at OpenStack Summit, Paris

A few of us will be attending the OpenStack Summit in Paris next week. OpenStack storage continues to be an area of interest for GlusterFS, and we will be investigating further possibilities for integrating GlusterFS with the broader OpenStack ecosystem while we attend the summit. As you might be aware, GlusterFS is already tightly integrated […]

Read More

October 30, 2014

Gluster Night Paris

Hey folks! We’re having a Gluster event in Paris on 4 November, to get together and celebrate a number of new technologies, including the GlusterFS 3.6.0 release. Sign up here: http://www.eventbrite.com/e/inscription-gluster-night-paris-10413255327?aff=es2&rank=0 Our agenda is pretty packed: Event opens at 18:00 18:00 Drinks, snacks and chats 18:15: Overview of 3.6.0 new features 18:25 […]

Read More

October 29, 2014

Up and Running with oVirt 3.5

Last week, version 3.5 of oVirt, the open source virtualization management system, hit FTP mirrors sporting a slate of fixes and enhancements, including a new-look user interface, and support for using CentOS 7 machines as virtualization hosts.

As with every new oVirt release, I’m here to suggest a path to getting up and running with the project on a single server, with an option for expanding to additional machines in the future. First, though, a quick rundown of the different single-machine options for trying out oVirt:

  • oVirt Live ISO: A LiveCD image that you can burn onto a blank CD or copy onto a USB stick to boot from and run oVirt. This is probably the fastest way to get up and running, but once you’re up, it’s also your lowest-performance option, and not suitable for extended use or expansion.
  • oVirt All in One plugin: Run the oVirt management server and virtualization host components on a single machine with local storage. This is a more permanent version of the Live ISO approach, and had been my favored kick-the-tires option until the rise of…
  • oVirt Hosted Engine: The self-hosted engine approach consists of an oVirt virtualization host that serves up its own management engine. This route is a bit more complicated than those above, but I like it because:
    • oVirt 3.5 supports CentOS 7 as a virtualization host, but not as a host for the management engine. Running oVirt Engine in a separate VM allows you to put CentOS 7 on your metal, and keep CentOS 6 around for the engine.
    • With the All-in-One approach, your management engine is married to the machine it’s installed on, limiting your expansion options. The Hosted Engine can move among hosts.

For this howto, I’ll be walking through the steps you can follow to get oVirt 3.5 up and running on a single machine with a self-hosted engine, and with self-hosted storage, courtesy of GlusterFS.

In my next post, I’ll describe how to add two more machines to the mix to give yourself an installation hardy enough to bring a machine down for updates and maintenance without everything grinding to a halt.

If you have access to good external NFS or iSCSI storage to use with your oVirt exploration, I’ll point out where you can skip the GlusterFS bits and use your external storage resource.

Prerequisites

Hardware: You’ll need a machine with at least 4GB RAM and processors with hardware virtualization extensions. A physical machine is best, but you can test oVirt using nested KVM as well.
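To check for those extensions on an x86 Linux machine, here’s a quick test; if it prints any lines, your processor advertises Intel VT-x (vmx) or AMD-V (svm):

grep -E 'vmx|svm' /proc/cpuinfo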

Software: For this howto, I’m using CentOS 7 for the host and CentOS 6 for the Engine VM. oVirt does support other OS options. For more info see the project’s download page.

Network: Your test machine’s host name must resolve properly, either through your network’s DNS, or through the /etc/hosts file on your virt host(s), on the VM that will host the oVirt engine, and on any clients from which you plan on administering oVirt.

Storage: The hosted engine feature requires NFS or iSCSI storage to house the VM that will host the engine. For this walkthrough, I’ll be using a Gluster-based NFS share hosted from my test machine. If you prefer to use external iSCSI or NFS storage, that’ll work, too.

Installing oVirt with Hosted Engine

I’m starting out with a test machine with 8 GB of RAM and 4 processor cores, running a minimal installation of CentOS 7 with all updates applied.

I’ve identified a pair of static IP addresses on my network to use for this test (one for my first host, and one for the hosted engine), and I’ve edited the /etc/hosts file on my test machine so that these addresses will resolve properly.

I’m also adding a second host name, ovirtmount.osas.lab, to use as my NFS mount point – this will come in handy for my next post, when I’ll have multiple machines sharing the NFS-hosting duties over a common virtual IP:

10.10.10.1 ovirt-one.osas.lab ovirtmount.osas.lab
10.10.10.2 ovirt-test.osas.lab

The oVirt 3.5 release notes suggest disabling NetworkManager and firewalld in favor of the tried and true network and iptables services:

sudo systemctl disable firewalld && sudo systemctl stop firewalld
sudo systemctl disable NetworkManager && sudo systemctl stop NetworkManager
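Note that on a minimal CentOS 7 install, the legacy iptables service isn’t present by default; if that’s the case on your machine (an assumption worth checking), it ships in the iptables-services package:

sudo yum install -y iptables-services
sudo systemctl enable iptables && sudo systemctl enable network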

Next, we need to configure the oVirt software repository on the first host:

sudo yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

Next, install the hosted engine packages, along with screen, which can come in handy during the deployment process:

sudo yum install -y ovirt-hosted-engine-setup screen glusterfs-server nfs-utils net-tools vdsm-gluster system-storage-manager

I’m experiencing an SELinux issue in which glusterd isn’t functional until after a reboot, so go ahead and reboot after installing these packages.

Gluster Preparations

We need a partition to store our Gluster bricks. For simplicity, I’m using a single XFS partition, and my Gluster bricks will be directories within this partition. I use system-storage-manager to manage my storage.

If you’re skipping the local Gluster storage, and using external NFS or iSCSI storage, you can skip this step.

sudo ssm add -p gluster $YOUR_DEVICE_NAME
sudo ssm create -p gluster --fstype xfs -n gluster
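You can confirm the new pool and its XFS-formatted volume with ssm’s listing output:

sudo ssm list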

Next, create a mount point for the new partition and look up its UUID:

mkdir /gluster
blkid /dev/gluster/gluster

Edit your /etc/fstab and add a line like the one below:

UUID=$YOUR_UUID /gluster xfs defaults 1 2
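Then mount the new partition and confirm that it landed where we expect:

sudo mount -a
df -h /gluster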

Next, we’ll create some mount points for our Gluster volumes-to-be. We’ll have separate Gluster volumes for our hosted engine, and for our oVirt data domain:

mkdir -p /gluster/{engine,data}/brick

Now, edit /etc/glusterfs/glusterd.vol, uncomment the line option base-port 49152 and change the value 49152 to 50152. This change works around a conflict between the ports used by libvirt for live migration, and the ports Gluster uses for its bricks.
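If you’d rather script that edit, a sed one-liner along these lines should do it, assuming your stock glusterd.vol ships the option commented out:

sudo sed -i 's/^[# ]*option base-port 49152/    option base-port 50152/' /etc/glusterfs/glusterd.vol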

This modification appears to cause an SELinux issue (see bug 1158622); as a workaround, put SELinux into permissive mode:
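sudo setenforce 0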

Now start the Gluster service and configure it to auto-start after subsequent reboots:

sudo systemctl start glusterd && sudo systemctl enable glusterd

Next, we create our Gluster volumes, substituting your machine’s host name for $HOSTNAME:

gluster volume create engine $HOSTNAME:/gluster/engine/brick
gluster volume create data $HOSTNAME:/gluster/data/brick

Now, create a file on your machine by the name of /var/lib/glusterd/groups/virt, and populate it with the contents of this file on GitHub. This file contains a set of virt-related volume options; the commands that follow apply those options to our engine and data volumes.
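For reference, the virt group file from this era contained tuning options along these lines (a sketch; treat the file on GitHub as authoritative):

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server

With the file in place, apply the group to both volumes: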

gluster volume set engine group virt
gluster volume set data group virt

We also need to set the correct permissions on all our volumes:

gluster volume set engine storage.owner-uid 36 && gluster volume set engine storage.owner-gid 36
gluster volume set data storage.owner-uid 36 && gluster volume set data storage.owner-gid 36

Finally, we need to start our volumes:

gluster volume start engine
gluster volume start data

Before moving ahead, run gluster volume status to make sure that the NFS server associated with your engine volume is up and running, because we’re about to need it.
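For example (in the status output, look for an NFS Server entry for your host with Online set to Y):

gluster volume status engine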

Installing the Hosted Engine

PXE, ISO image, and OVF-formatted disk image are our installation media options for the VM that will host our engine. Here, I’m using an ISO image, and creating a temporary directory, accessible to oVirt, to house the image during the install process:

mkdir /home/tmp && cd /home/tmp
curl -O http://mirrors.kernel.org/centos/6.6/isos/x86_64/CentOS-6.6-x86_64-minimal.iso
chown -R 36:36 /home/tmp

Now we should be ready to fire up screen and kick off the installation process:

screen
hosted-engine --deploy

Follow along with the script, answering its questions. The default answers are fine, but you’ll need to supply the path to your NFS share, the type and path to the media you’ll be using to install your engine-hosting VM, the host name you’ve picked out for the hosted engine, and the password you’ll be using for the engine admin user.

My NFS path looks like ovirtmount.osas.lab:/engine. I’m using the secondary host name I mentioned earlier in the howto; in the next post, the hosting of that name will be shared among my virt+storage machines.

Once you’ve supplied all these answers, and confirmed your choices, the installer will launch a VM and provide you with an address and password for accessing it. Fire up the VNC client of your choice and use the provided address and password to access the VM.

From here, you can click your way through the installation of your engine-hosting VM. You can configure the VM’s correct host name and IP address either now, or after you boot into the VM for the first time. As for software choices, you can simply opt for a “minimal” installation.

When the OS installation on your new VM is complete, head back to the terminal window where hosted-engine --deploy is running, and hit enter to let the script know that you’re ready for the next step.

The VM will reboot, and when it’s back up, it’s time to install oVirt Engine. Either through VNC or through an SSH session (SSH is nicer for copying and pasting commands), access your newly created VM, and ensure that everything is in order.

  • If needed, modify /etc/hosts in the new VM with the same IP address / host name mappings we set up on the first host.
  • If you didn’t take care of it during the OS installation, fix your network settings to give the VM the correct host name and IP address, as well.
  • Apply all updates to the VM, and reboot if needed. If you do reboot the VM, go back to the terminal where you’re running hosted-engine --deploy and select option 2, “Power off and restart the VM.”

Now, just as we did on our first host, we need to configure the oVirt software repository on our hosted engine VM:

sudo yum localinstall -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

Next, we’ll install and then set up ovirt-engine:

sudo yum install -y ovirt-engine
sudo engine-setup

Go through the engine-setup script, answering its questions. You’ll be fine accepting all the default answers, but make sure to supply the same admin password that you chose earlier, while running hosted-engine --deploy.

When the installation process completes, head back to the terminal where you’re running the hosted engine installer and hit enter to indicate that the engine configuration is complete.

The installer will register your first host as a virtualization host with the oVirt engine instance we’ve just installed. Once this completes, the installer will tell you to shut down your VM so that the ovirt-engine-ha services on the first host can restart the engine VM as a monitored service.

It can take a few minutes for the HA services to notice that the engine is down, to check that there’s a machine available to host the engine, and to start up the hosted engine VM. You can watch these services do their thing by tailing their log files:

sudo tail -f /var/log/ovirt-hosted-engine-ha/*
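You can also poll the state of the engine VM directly with the hosted-engine tool’s status command:

sudo hosted-engine --vm-status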

Once that process is complete, the script will exit and you should be ready to configure storage and run a VM.

Configuring Storage

Head to your oVirt engine console at the address of your hosted engine VM, log in with the user name admin and the password you chose during setup, and visit the “Storage” tab in the console.

Click “New Domain,” give your new domain a name, and choose Data / GlusterFS from the “domain function / storage type” drop down menu. (If you’re using a different sort of external storage, you can choose an option matching that, instead.)

In the “Export Path” field, enter the remote path to your Gluster volume, and hit the OK button to proceed. It’ll take a bit of time for your new storage domain to initialize and come online, but once it does, you’ll be ready to launch your first VM.

Running Your First VM

Since version 3.4, oVirt engine has come pre-configured with a public Glance instance managed by the oVirt project. We’ll tap this resource to launch our first VM.

From the storage tab, you should see an “ovirt-image-repository” entry next to a little OpenStack logo. Clicking on this domain will bring up a menu of images available in this repository. Click on the “CirrOS” image (which is very small and perfect for testing) in the list and then click “Import,” before hitting the OK button in the pop-up dialog to continue.

The image will be copied from the oVirt project’s public Glance repository to the storage domain you just configured, where it will be available as a disk to attach to a new VM. In the import image dialog, you have the option of clicking the “Import as Template” check box to give yourself the option of basing multiple future VMs on this image using oVirt’s templates functionality.

Next, head to the “Virtual Machines” tab in the console, click “New VM,” choose “Linux” from the “Operating System” drop down menu, supply a name for your VM, and choose the “ovirtmgmt/ovirtmgmt” network in the drop down menu next to “nic1” before hitting the “OK” button. For additional configuration, such as setting RAM and CPU values and using cloud-init, there’s a “Show Advanced Options” button in the dialog, but you can revisit that later.

Next you’ll get a “Guide Me” dialog box that will ask you to configure a virtual disk. Click the “Configure Virtual Disks” button, check the “Attach Disk” box at the upper left part of the dialog, select the Glance disk image we just downloaded, and hit the “OK” button to continue. Dismiss the “Guide Me” dialog by hitting the “Configure Later” button.

Now, back at the Virtual Machines list, right-click your new VM, and choose “Run” from the menu. After a few moments, the status of your new VM will switch from red to green, and you’ll be able to click on the green monitor icon next to “Migrate” to open a console window and access your VM.

Till Next Time

That’s enough for this post. If you run into trouble following this walkthrough, I’ll be happy to help you get up and running or get pointed in the right direction. On IRC, I’m jbrooks, ping me in the #ovirt room on OFTC, write a comment below, or give me a shout on Twitter @jasonbrooks.

Stay tuned for a followup post about adding two more machines to our oVirt+Gluster setup, so that you can bring down one of the machines for maintenance or upgrades without having to shut the whole thing down.

If you’re interested in getting involved with the oVirt Project, you can find all the mailing list, issue tracker, source repository, and wiki information you need here.

Finally, be sure to follow us on Twitter at @redhatopen for news on oVirt and other open source projects in the Red Hat world.

Read More

October 23, 2014

2014 “State of Gluster” poll

As we roll towards the release of GlusterFS 3.6.0, it seemed a good time to find out what and how you are using GlusterFS. To that end, we’ve got a short poll up at https://www.surveymonkey.com/s/gluster2014, which should be pretty quick for you to take. This is completely anonymous, unless you choose to opt in to allow […]

Read More

October 21, 2014

Distributed Systems Prayer

Forgive me, Lord, for I have sinned.

I have written distributed systems in languages prone to race conditions and memory leaks.

I have failed to use model checking when I should have.

I have failed to use static analysis when I should have.

Read More