fence_virt.conf(5)

NAME

fence_virt.conf - configuration file for fence_virtd

DESCRIPTION

The fence_virt.conf file contains configuration information for fence_virtd, a fencing request routing daemon for clusters of virtual machines.

The file is tree-structured. There are parent/child relationships and sibling relationships between the nodes.


  foo {
    bar {
      baz "1";
    }
  }

There are four primary sections of fence_virt.conf.

SECTIONS

fence_virtd

This section contains global information about how fence_virtd is to operate. The most important pieces of information are as follows:

listener
the listener plugin for receiving fencing requests from clients

backend
the plugin to be used to carry out fencing requests

foreground
do not fork into the background.

wait_for_init
wait for the frontend and backends to become available rather than giving up immediately. This replaces wait_for_backend in 0.2.x.

module_path
the module path to search for plugins
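
As an illustration, a minimal fence_virtd block combining these options might look like the following sketch (the module path shown is illustrative and varies by distribution and architecture):


 fence_virtd {
  listener "multicast";
  backend "libvirt";
  module_path "/usr/lib64/fence-virt";
 }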

listeners

This section contains listener-specific configuration information; see the section about listeners below.

backends

This section contains backend-specific configuration information; see the section about backends below.

groups

This section contains static maps of which virtual machines may fence which other virtual machines; see the section about groups below.

LISTENERS

There are various listeners available for fence_virtd, each of which handles decoding and authentication of a given fencing request. The following configuration blocks belong in the listeners section of fence_virt.conf.

multicast

key_file
the shared key file to use (default: /etc/cluster/fence_xvm.key).

hash
the weakest hashing algorithm allowed for client requests. Clients may send packets with stronger hashes than the one specified, but not weaker ones. (default: sha256, but could be sha1, sha512, or none)

auth
the hashing algorithm to use for the simplistic challenge-response authentication (default: sha256, but could be sha1, sha512, or none)

family
the IP family to use (default: ipv4, but may be ipv6)

address
the multicast address to listen on (default: 225.0.0.12)

port
the multicast port to listen on (default: 1229)

interface
interface to listen on. By default, fence_virtd listens on all interfaces. However, this causes problems in some environments where the host computer is used as a gateway.
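
Putting these options together, a multicast listener block might look like the following sketch (the key file path and interface name are illustrative; the remaining values are the defaults):


 listeners {
  multicast {
   key_file "/etc/cluster/fence_xvm.key";
   hash "sha256";
   auth "sha256";
   family "ipv4";
   address "225.0.0.12";
   port "1229";
   interface "virbr0";
  }
 }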

serial

The serial listener plugin utilizes libvirt's serial (or VMChannel) mapping to listen for requests. When using the serial listener, it is necessary to add a serial port (preferably pointing to /dev/ttyS1) or a channel (preferably pointing to 10.0.2.179:1229) to the libvirt domain description. Note that only serial ports and channels of type 'unix' with mode 'bind' are supported. Example libvirt XML:


  <serial type='unix'>
    <source mode='bind' path='/sandbox/guests/fence_socket_molly'/>
    <target port='1'/>
  </serial>
  <channel type='unix'>
    <source mode='bind' path='/sandbox/guests/fence_molly_vmchannel'/>
    <target type='guestfwd' address='10.0.2.179' port='1229'/>
  </channel>

uri
the URI to use when connecting to libvirt by the serial plugin.

path
The same directory that is defined for the domain serial port path (From example above: /sandbox/guests). Sockets must reside in this directory in order to be considered valid. This can be used to prevent fence_virtd from using the wrong sockets.

mode
This selects the type of sockets to register. Valid values are "serial" (default) and "vmchannel".
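
Matching the libvirt XML example above, a serial listener block might look like the following sketch (the path is illustrative and must match the directory containing the domain socket files):


 listeners {
  serial {
   uri "qemu:///system";
   path "/sandbox/guests";
   mode "serial";
  }
 }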

tcp

The tcp plugin was designed to be used with vios-proxy. vios-proxy uses a virtio-serial channel to proxy TCP connections between guests and a host. In order to use the tcp plugin, vios-proxy-host must be running on all the physical cluster nodes, and vios-proxy-guest must be running on all guest cluster nodes. Prior to running vios-proxy-host or vios-proxy-guest, the virtio-serial channel and host sockets must be configured for all guest domains. Example libvirt XML:


  <controller type='virtio-serial' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
  </controller>


  <channel type='unix'>
    <source mode='bind' path='/sandbox/fence_virt/guests/fence_socket_guest1' id='guest1'/>
    <target type='virtio' name='org.redhat.fencevirt.node.1'/>
    <address type='virtio-serial' controller='0' bus='0' port='1'/>
  </channel>

key_file
the shared key file to use (default: /etc/cluster/fence_xvm.key).

hash
the hashing algorithm to use for packet signing (default: sha256, but could be sha1, sha512, or none)

auth
the hashing algorithm to use for the simplistic challenge-response authentication (default: sha256, but could be sha1, sha512, or none)

family
the IP family to use (default: ipv4, but may be ipv6)

address
the IP address to listen on (default: 127.0.0.1)

port
the TCP port to listen on (default: 1229)
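
Putting these options together, a tcp listener block might look like the following sketch (all values shown are the defaults):


 listeners {
  tcp {
   key_file "/etc/cluster/fence_xvm.key";
   hash "sha256";
   auth "sha256";
   family "ipv4";
   address "127.0.0.1";
   port "1229";
  }
 }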

BACKENDS

There are various backends available for fence_virtd, each of which handles routing a fencing request to a hypervisor or management tool. The following configuration blocks belong in the backends section of fence_virt.conf.

libvirt

The libvirt plugin is the simplest plugin. It is used in environments where routing fencing requests between multiple hosts is not required, for example by a user running a cluster of virtual machines on a single desktop computer.

uri
the URI to use when connecting to libvirt.

libvirt-qmf

The libvirt-qmf plugin acts as a QMFv2 Console to the libvirt-qmf daemon in order to route fencing requests over AMQP to the appropriate computer.

host
hostname or IP address of the Qpid broker. Defaults to 127.0.0.1.

port
IP port of the Qpid broker. Defaults to 5672.

username
Username for GSSAPI, if configured.

service
Qpid service to connect to.

gssapi
If set to 1, have fence_virtd use GSSAPI for authentication when communicating with the Qpid broker. Default is 0 (off).
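
A libvirt-qmf backend block might look like the following sketch (the broker address and port shown are the defaults; GSSAPI is left disabled):


 backends {
  libvirt-qmf {
   host "127.0.0.1";
   port "5672";
   gssapi "0";
  }
 }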

checkpoint

The checkpoint plugin uses CMAN, CPG, and OpenAIS checkpoints to track virtual machines and route fencing requests to the appropriate computer.

uri
the URI to use when connecting to libvirt by the checkpoint plugin.

name_mode
The checkpoint plugin, in order to retain compatibility with fence_xvm, stores virtual machines in the OpenAIS checkpoints in a particular way. The default was to use 'name' when using fence_xvm and fence_xvmd, and so this is still the default. However, it is strongly recommended to use 'uuid' instead of 'name' in any cluster environment involving more than one physical host, in order to avoid the potential for name collisions.
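
Following the recommendation above, a checkpoint backend block might look like the following sketch:


 backends {
  checkpoint {
   uri "qemu:///system";
   name_mode "uuid";
  }
 }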

GROUPS

Fence_virtd supports static maps which allow grouping of VMs. The groups are arbitrary and are checked at fence time. Any member of a group may fence any other member. Hosts may be assigned to multiple groups if desired.

group

This defines a group.

uuid
defines a virtual machine UUID as a member of the group. A group may contain multiple uuid entries.

ip
defines an IP which is allowed to send fencing requests for members of this group (e.g. for multicast). It is highly recommended that this be used in conjunction with a key file.

EXAMPLE


 fence_virtd {
  listener "multicast";
  backend "checkpoint";
 }


 # this is the listeners section


 listeners {
  multicast {
   key_file "/etc/cluster/fence_xvm.key";
  }
 }


 backends {
  libvirt {
   uri "qemu:///system";
  }
 }
 
 groups {
  group {
ip "192.168.1.1";
uuid "44179d3f-6c63-474f-a212-20c8b4b25b16";
uuid "1ce02c4b-dfa1-42cb-b5b1-f0b1091ece60";
  }
 }

SEE ALSO

fence_virtd(8), fence_virt(8), fence_xvm(8), fence(8)