salt (7) - Linux Manuals
salt: Salt Documentation
NAME
salt - Salt Documentation
INTRODUCTION TO SALT
We're not just talking about NaCl.
The 30 second summary
Salt is:
- a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running)
- a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria
Salt was developed to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information and to manage not just dozens but hundreds or even thousands of individual servers quickly through a simple and manageable interface.
Simplicity
Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.
Parallel execution
The core functions of Salt:
- enable commands to remote systems to be called in parallel rather than serially
- use a secure and encrypted protocol
- use the smallest and fastest network payloads possible
- provide a simple programming interface
Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.
Building on proven technology
Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.
Python client interface
In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
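A minimal sketch of calling Salt from that Python API through the LocalClient interface (assuming a running salt-master, accepted minion keys, and default configuration paths):
# Minimal sketch of the Python client interface; assumes a running
# salt-master, accepted minion keys, and default config paths.
import salt.client

local = salt.client.LocalClient()
# Run test.ping on all minions; returns a dict of {minion_id: result}.
results = local.cmd('*', 'test.ping')
print(results)
The same call issued from the command line would be salt '*' test.ping.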
Fast, flexible, scalable
The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.
Open
Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.
Salt Community
Join the Salt!
There are many ways to participate in and communicate with the Salt community.
Salt has an active IRC channel and a mailing list.
Mailing List
Join the salt-users mailing list. It is the best place to ask questions about Salt and see what's going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members.
https://groups.google.com/forum/#!forum/salt-users
There is also a low-traffic list used to announce new releases, called salt-announce:
https://groups.google.com/forum/#!forum/salt-announce
IRC
The #salt IRC channel is hosted on the popular Freenode network. You can use the Freenode webchat client right from your browser.
Logs of the IRC channel activity are being collected courtesy of Moritz Lenz.
If you wish to discuss the development of Salt itself join us in #salt-devel.
Follow on Github
The Salt code is developed via Github. Follow Salt for constant updates on what is happening in Salt development:
https://github.com/saltstack/salt
Blogs
SaltStack Inc. keeps a blog with recent news and advancements:
http://www.saltstack.com/blog/
Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog The Red45:
Example Salt States
The official salt-states repository is: https://github.com/saltstack/salt-states
A few examples of salt states from the community:
- https://github.com/blast-hardcheese/blast-salt-states
- https://github.com/kevingranade/kevingranade-salt-state
- https://github.com/uggedal/states
- https://github.com/mattmcclean/salt-openstack/tree/master/salt
- https://github.com/rentalita/ubuntu-setup/
- https://github.com/brutasse/states
- https://github.com/bclermont/states
- https://github.com/pcrews/salt-data
Follow on ohloh
Other community links
- Salt Stack Inc.
- Subreddit
- Google+
- YouTube
- Wikipedia page
Hack the Source
If you want to get involved with the development of source code or the documentation efforts, please review the hacking section!
INSTALLATION
SEE ALSO: Installing Salt for development and contributing to the project.
Quick Install
On most distributions, you can set up a Salt Minion with the Salt Bootstrap.
Platform-specific Installation Instructions
These guides go into detail on how to install Salt on a given platform.
Arch Linux
Installation
Salt (stable) is currently available via the Arch Linux Official repositories. There are also -git packages available in the Arch User Repository (AUR).
Stable Release
Install Salt stable releases from the Arch Linux Official repositories as follows:
pacman -S salt-zmq
To install Salt stable releases using the RAET protocol, use the following:
pacman -S salt-raet
Tracking develop
To install the bleeding edge version of Salt (may include bugs!), use the -git package. Install the -git package as follows:
wget https://aur.archlinux.org/packages/sa/salt-git/salt-git.tar.gz
tar xf salt-git.tar.gz
cd salt-git/
makepkg -is
NOTE: yaourt
If a tool such as Yaourt is used, the dependencies will be gathered and built automatically.
The command to install salt using the yaourt tool is:
yaourt salt-git
Post-installation tasks
systemd
Activate the Salt Master and/or Minion via systemctl as follows:
systemctl enable salt-master.service
systemctl enable salt-minion.service
Start the Master
Once you've completed all of these steps, you're ready to start your Salt Master using the command shown here:
systemctl start salt-master
Now go to the Configuring Salt page.
Debian Installation
Currently the latest packages for Debian Old Stable, Stable, and Unstable (Squeeze, Wheezy, and Sid) are published in our (saltstack.com) Debian repository.
Configure Apt
Squeeze (Old Stable)
For squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. To do so, add the following to /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian squeeze-saltstack main
deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free
Wheezy (Stable)
For wheezy, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian wheezy-saltstack main
Jessie (Testing)
For jessie, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian jessie-saltstack main
Sid (Unstable)
For sid, the following line is needed in either /etc/apt/sources.list or a file in /etc/apt/sources.list.d:
deb http://debian.saltstack.com/debian unstable main
Import the repository key.
You will need to import the key used for signing.
wget -q -O- "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key" | apt-key add -
NOTE: You can optionally verify the key integrity with sha512sum using the public key signature shown here. For example:
echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6 debian-salt-team-joehealy.gpg.key" | sha512sum -c
Update the package database
apt-get update
Install packages
Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
apt-get install salt-master
apt-get install salt-minion
apt-get install salt-syndic
Post-installation tasks
Now, go to the Configuring Salt page.
Notes
1. These packages will be backported from the packages intended to be uploaded into Debian unstable. This means that the packages will be built for unstable first and then backported over the next day or so.
2. These packages will be tracking the released versions of salt rather than maintaining a stable fixed feature set. If a fixed version is what you desire, then either pinning or manual installation may be more appropriate for you.
3. The version numbering and backporting process should provide clean upgrade paths between Debian versions.
If you have any questions regarding these, please email the mailing list or look for joehh on IRC.
Fedora
Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and EPEL. It is installable using yum. Fedora will have more up to date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt!
WARNING: Fedora 19 comes with systemd 204. Systemd has known bugs fixed in later revisions that prevent the salt-master from starting reliably or opening the network connections that it needs to. It's not likely that a salt-master will start or run reliably on any distribution that uses systemd version 204 or earlier. Running salt-minions should be OK.
Installation
Salt can be installed using yum and is available in the standard Fedora repositories.
Stable Release
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
yum install salt-master
yum install salt-minion
Installing from updates-testing
When a new Salt release is packaged, it is first admitted into the updates-testing repository, before being moved to the stable repo.
To install from updates-testing, use the enablerepo argument for yum:
yum --enablerepo=updates-testing install salt-master
yum --enablerepo=updates-testing install salt-minion
Installation Using pip
Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using a package manager.
Installing from pip has a few additional requirements:
- Install the group 'Development Tools': dnf groupinstall 'Development Tools'
- Install the 'zeromq-devel' package if the install later fails when linking against ZeroMQ.
A pip install does not make the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit.
Installation from pip:
pip install salt
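Since the pip install provides no service files, a minimal salt-minion systemd unit might look like the following sketch (the /usr/bin path is an assumption; adjust it to wherever pip placed the salt-minion script):
# /etc/systemd/system/salt-minion.service -- minimal sketch, not the
# packaged unit; assumes pip installed the salt-minion script to /usr/bin.
[Unit]
Description=Salt Minion
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/salt-minion
Restart=on-failure

[Install]
WantedBy=multi-user.target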
WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.
Post-installation tasks
Master
To have the Master start automatically at boot time:
systemctl enable salt-master.service
To start the Master:
systemctl start salt-master.service
Minion
To have the Minion start automatically at boot time:
systemctl enable salt-minion.service
To start the Minion:
systemctl start salt-minion.service
Now go to the Configuring Salt page.
FreeBSD
Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <christer.edwards [at] gmail.com>. It has been tested on FreeBSD 7.4, 8.2, 9.0, and 9.1 releases.
Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/py-salt port:
/devel/py-yaml
/devel/py-pyzmq
/devel/py-Jinja2
/devel/py-msgpack
/security/py-pycrypto
/security/py-m2crypto
Installation
On FreeBSD 10 and later, to install Salt from the FreeBSD pkgng repo, use the command:
pkg install py27-salt
On older versions of FreeBSD, to install Salt from the FreeBSD ports tree, use the command:
make -C /usr/ports/sysutils/py-salt install clean
Post-installation tasks
Master
Copy the sample configuration file:
cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master
rc.conf
Activate the Salt Master in /etc/rc.conf or /etc/rc.conf.local and add:
+ salt_master_enable="YES"
Start the Master
Start the Salt Master as follows:
service salt_master start
Minion
Copy the sample configuration file:
cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion
rc.conf
Activate the Salt Minion in /etc/rc.conf or /etc/rc.conf.local and add:
+ salt_minion_enable="YES"
+ salt_minion_paths="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin"
Start the Minion
Start the Salt Minion as follows:
service salt_minion start
Now go to the Configuring Salt page.
Gentoo
Salt can be easily installed on Gentoo via Portage:
emerge app-admin/salt
Post-installation tasks
Now go to the Configuring Salt page.
OpenBSD
Salt was added to the OpenBSD ports tree on Aug 10th 2013. It has been tested on OpenBSD 5.5 onwards.
Salt is dependent on the following additional ports. These will be installed as dependencies of the sysutils/salt port:
devel/py-futures
devel/py-progressbar
net/py-msgpack
net/py-zmq
security/py-crypto
security/py-M2Crypto
textproc/py-MarkupSafe
textproc/py-yaml
www/py-jinja2
www/py-requests
www/py-tornado
Installation
To install Salt from the OpenBSD pkg repo, use the command:
pkg_add salt
Post-installation tasks
Master
To have the Master start automatically at boot time:
rcctl enable salt_master
To start the Master:
rcctl start salt_master
Minion
To have the Minion start automatically at boot time:
rcctl enable salt_minion
To start the Minion:
rcctl start salt_minion
Now go to the Configuring Salt page.
OS X
Dependency Installation
It should be noted that Homebrew explicitly discourages the use of sudo: "Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then it is likely to be the cause. Please don't file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo."
So when using Homebrew, if you want support from the Homebrew community, install this way:
brew install saltstack
When using MacPorts, install this way:
sudo port install salt
When only using the OS X system's pip, install this way:
sudo pip install salt
Salt-Master Customizations
To run salt-master on OS X, the root user maxfiles limit must be increased:
sudo launchctl limit maxfiles 4096 8192
Then, using sudo, add this configuration option to the /etc/salt/master file:
max_open_files: 8192
Now the salt-master should run without errors:
sudo salt-master --log-level=all
Post-installation tasks
Now go to the Configuring Salt page.
RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux
Installation from Repository
RHEL/CentOS 5
Due to the removal of some of Salt's dependencies from EPEL5, we have created a repository on Fedora COPR. Moving forward, this will be the official means of installing Salt on RHEL5-based systems. Information on how to enable this repository can be found here.
RHEL/CentOS 6 and 7, Scientific Linux, etc.
Beginning with version 0.9.4, Salt has been available in EPEL. It is installable using yum. Salt should work properly with all mainstream derivatives of RHEL, including CentOS, Scientific Linux, Oracle Linux and Amazon Linux. Report any bugs or issues on the issue tracker.
On RHEL6, the proper Jinja package 'python-jinja2' was moved from EPEL to the "RHEL Server Optional Channel". Verify this repository is enabled before installing salt on RHEL6.
Enabling EPEL
If the EPEL repository is not installed on your system, you can download the RPM from here for RHEL/CentOS 6 (or here for RHEL/CentOS 7) and install it using the following command:
rpm -Uvh epel-release-X-Y.rpm
Replace epel-release-X-Y.rpm with the appropriate filename.
Installing Stable Release
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
On the salt-master, run this:
yum install salt-master
On each salt-minion, run this:
yum install salt-minion
Installing from epel-testing
When a new Salt release is packaged, it is first admitted into the epel-testing repository, before being moved to the stable repo.
To install from epel-testing, use the enablerepo argument for yum:
yum --enablerepo=epel-testing install salt-minion
Installation Using pip
Since Salt is on PyPI, it can be installed using pip, though most users prefer to install using RPMs (which can be installed from EPEL).
Installing from pip has a few additional requirements:
- Install the group 'Development Tools': yum groupinstall 'Development Tools'
- Install the 'zeromq-devel' package if the install later fails when linking against ZeroMQ.
A pip install does not make the init scripts or the /etc/salt directory, and you will need to provide your own systemd service unit.
Installation from pip:
pip install salt
WARNING: If installing from pip (or from source using setup.py install), be advised that the yum-utils package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found here.
ZeroMQ 4
We recommend using ZeroMQ 4 where available. SaltStack provides ZeroMQ 4.0.4 and pyzmq 14.3.1 in a COPR repository. Instructions for adding this repository (as well as for upgrading ZeroMQ and pyzmq on existing minions) can be found here.
If this repo is added before Salt is installed, then installing either salt-master or salt-minion will automatically pull in ZeroMQ 4.0.4, and additional states to upgrade ZeroMQ and pyzmq are unnecessary.
WARNING: RHEL/CentOS 5 users: Using COPR repos on RHEL/CentOS 5 requires that the python-hashlib package be installed. Not having it present will result in checksum errors because YUM will not be able to process the SHA256 checksums used by COPR.
NOTE: For RHEL/CentOS 5 installations, if using the new repository to install Salt (as detailed above), then it is not necessary to enable the zeromq4 COPR, as the new EL5 repository includes ZeroMQ 4.
Package Management
Salt's interface to yum makes heavy use of the repoquery utility, from the yum-utils package. This package will be installed as a dependency if salt is installed via EPEL. However, if salt has been installed using pip, or a host is being managed using salt-ssh, then as of version 2014.7.0 yum-utils will be installed automatically to satisfy this dependency.
Post-installation tasks
Master
To have the Master start automatically at boot time:
chkconfig salt-master on
To start the Master:
service salt-master start
Minion
To have the Minion start automatically at boot time:
chkconfig salt-minion on
To start the Minion:
service salt-minion start
Now go to the Configuring Salt page.
Solaris
Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <romeot [at] hawaii.edu> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been minimally tested on Solaris 10 (x86), Solaris 9 (sparc/x86), and Solaris 11 (sparc/x86). (Please let me know if you're using it on these platforms!) Most of the testing has focused on the minion, though it has been verified that the master starts up successfully on Solaris 10.
Comments and patches for better support on these platforms are very welcome.
As of version 0.10.4, Solaris is well supported under salt, with all of the following working well:
1. remote execution
2. grain detection
3. service control with SMF
4. 'pkg' states with 'pkgadd' and 'pkgutil' modules
5. cron modules/states
6. user and group modules/states
7. shadow password management modules/states
Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the py_salt package:
- py_yaml
- py_pyzmq
- py_jinja2
- py_msgpack_python
- py_m2crypto
- py_crypto
- python
Installation
To install Salt from the OpenCSW package repository you first need to install pkgutil assuming you don't already have it installed:
On Solaris 10:
pkgadd -d http://get.opencsw.org/now
On Solaris 9:
wget http://mirror.opencsw.org/opencsw/pkgutil.pkg
pkgadd -d pkgutil.pkg all
Once pkgutil is installed you'll need to edit its config file /etc/opt/csw/pkgutil.conf to point it at the unstable catalog:
- #mirror=http://mirror.opencsw.org/opencsw/testing
+ mirror=http://mirror.opencsw.org/opencsw/unstable
OK, time to install salt.
# Update the catalog
root> /opt/csw/bin/pkgutil -U
# Install salt
root> /opt/csw/bin/pkgutil -i -y py_salt
Minion Configuration
Now that salt is installed you can find its configuration files in /etc/opt/csw/salt/.
You'll want to edit the minion config file to set the name of your salt master server:
- #master: salt
+ master: your-salt-server
If you would like to use pkgutil as the default package provider for your Solaris minions, you can do so using the providers option in the minion config file.
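A minimal sketch of that minion config entry (assuming pkgutil should back the pkg provider):
# /etc/opt/csw/salt/minion -- use pkgutil as the pkg provider
providers:
  pkg: pkgutil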
You can now start the salt minion like so:
On Solaris 10:
svcadm enable salt-minion
On Solaris 9:
/etc/init.d/salt-minion start
You should now be able to log onto the salt master and check to see if the salt-minion key is awaiting acceptance:
salt-key -l un
Accept the key:
salt-key -a <your-salt-minion>
Run a simple test against the minion:
salt '<your-salt-minion>' test.ping
Troubleshooting
Ubuntu Installation
Add repository
The latest packages for Ubuntu are published in the saltstack PPA. If you have the add-apt-repository utility, you can add the repository and import the key in one step:
sudo add-apt-repository ppa:saltstack/salt
add-apt-repository: command not found?
The add-apt-repository command is not always present on Ubuntu systems. This can be fixed by installing python-software-properties:
sudo apt-get install python-software-properties
The following may be required as well:
sudo apt-get install software-properties-common
Note that since Ubuntu 12.10, add-apt-repository is found in the software-properties-common package and is part of the base install. Thus, add-apt-repository should work out of the box to add the PPA.
Alternately, manually add the repository and import the PPA key with these commands:
echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu `lsb_release -sc` main | sudo tee /etc/apt/sources.list.d/saltstack.list
wget -q -O- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt-key add -
After adding the repository, update the package management database:
sudo apt-get update
Install packages
Install the Salt master, minion, or syndic from the repository with the apt-get command. These examples each install one daemon, but more than one package name may be given at a time:
sudo apt-get install salt-master
sudo apt-get install salt-minion
sudo apt-get install salt-syndic
ZeroMQ 4
ZeroMQ 4 is available by default for Ubuntu 14.04 and newer. However, for Ubuntu 12.04 LTS, starting with Salt version 2014.7.5, ZeroMQ 4 is included with the Salt installation package and nothing additional needs to be done.
Post-installation tasks
Now go to the Configuring Salt page.
Windows
Salt has full support for running the Salt Minion on Windows.
There are no plans for the foreseeable future to develop a Salt Master on Windows. For now you must run your Salt Master on a supported operating system to control your Salt Minions on Windows.
Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows, as well.
Windows Installer
Salt Minion Windows installers can be found here. The output of md5sum <salt minion exe> should match the contents of the corresponding md5 file.
Latest stable build from the selected branch:
Earlier builds from supported branches
Archived builds from unsupported branches
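For example, to verify a downloaded installer (the filename below is the build used later in this section, and the .md5 file naming is an assumption about how the checksum file is published):
# Compare the two outputs; they should contain the same checksum.
md5sum Salt-Minion-2015.5.6-Setup-amd64.exe
cat Salt-Minion-2015.5.6-Setup-amd64.exe.md5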
NOTE: The installation executable installs dependencies that the Salt minion requires.
The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found.
The installer asks for two pieces of information: the master hostname and the minion name. The installer will update the minion config with these options and then start the minion.
The salt-minion service will appear in the Windows Service Manager and can be started and stopped there or with the command line program sc like any other Windows service.
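For example, a sketch of the standard Windows service control commands (the service name salt-minion is as installed above):
sc stop salt-minion
sc start salt-minion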
If the minion won't start, try installing the Microsoft Visual C++ 2008 x64 SP1 redistributable, and make sure all Windows updates have been applied so that salt-minion runs smoothly.
Silent Installer options
The installer can be run silently by providing the /S option at the command line. The installer also accepts the following options for configuring the Salt Minion silently:
- /master= A string value to set the IP address or host name of the master. Default value is 'salt'.
- /minion-name= A string value to set the minion name. Default is 'hostname'.
- /start-service= Either a 1 or 0. '1' will start the service, '0' will not. Default is to start the service after installation.
Here's an example of using the silent installer:
Salt-Minion-2015.5.6-Setup-amd64.exe /S /master=yoursaltmaster /minion-name=yourminionname /start-service=0
Running the Salt Minion on Windows as an Unprivileged User
Notes:
- These instructions were tested with Windows Server 2008 R2.
- They are generalizable to any version of Windows that supports a salt-minion.
A. Create the Unprivileged User that the Salt Minion will Run As
1. Click "Start", "Control Panel", "User Accounts"
2. Click "Add or remove user accounts"
3. Click "Create new account"
4. Enter "salt-user" (or a name of your preference) in the "New account name" field
5. Select the "Standard user" radio button
6. Click the "Create Account" button
7. Click on the newly created user account
8. Click the "Create a password" link
9. In the "New password" and "Confirm new password" fields, provide a password (e.g. "SuperSecretMinionPassword4Me!")
10. In the "Type a password hint" field, provide appropriate text (e.g. "My Salt Password")
11. Click the "Create password" button
12. Close the "Change an Account" window
B. Add the New User to the Access Control List for the Salt Folder
1. In a File Explorer window, browse to the path where Salt is installed (the default path is C:\Salt)
2. Right-click on the "Salt" folder and select "Properties"
3. Click on the "Security" tab
4. Click the "Edit" button
5. Click the "Add" button
6. Type the name of your designated Salt user and click the "OK" button
7. Check the box to "Allow" the "Modify" permission
8. Click the "OK" button
9. Click the "OK" button to close the "Salt Properties" window
C. Update the Windows Service User for the salt-minion Service
1. Click "Start", "Administrative Tools", "Services"
2. In the list of Services, right-click on "salt-minion" and select "Properties"
3. Click the "Log On" tab
4. Click the "This account" radio button
5. Provide the account credentials created in section A
6. Click the "OK" button
7. Click the "OK" button to the prompt confirming that the user "has been granted the Log On As A Service right"
8. Click the "OK" button to the prompt confirming that "The new logon name will not take effect until you stop and restart the service"
9. Right-click on "salt-minion" and select "Stop"
10. Right-click on "salt-minion" and select "Start"
Setting up a Windows build environment
This document will explain how to set up a development environment for salt on Windows. The development environment allows you to work with the source code to customize or fix bugs. It will also allow you to build your own installation.
The Easy Way
Prerequisite Software
To do this the easy way you only need to install Git for Windows.
Create the Build Environment
1. Clone the Salt-Windows-Dev repo from GitHub.
Open a command line and type:
git clone https://github.com/saltstack/salt-windows-dev
2. Build the Python Environment
Go into the salt-windows-dev directory. Right-click the file named dev_env.ps1 and select Run with PowerShell.
If you get an error, you may need to change the execution policy. Open a PowerShell window and type the following:
Set-ExecutionPolicy RemoteSigned
This will download and install Python with all the dependencies needed to develop and build salt.
3. Build the Salt Environment
Right-click on the file named dev_env_salt.ps1 and select Run with PowerShell.
This will clone salt into C:\Salt-Dev\salt and set it to the 2015.5 branch. You can optionally run the command from a PowerShell window with a -Version switch to pull a different version. For example:
dev_env_salt.ps1 -Version '2014.7'
To view a list of available branches and tags, open a command prompt in your C:\Salt-Dev\salt directory and type:
git branch -a
git tag -n
The Hard Way
Prerequisite Software
Install the following software:
1. Git for Windows
2. Nullsoft Installer
Download the Prerequisite zip file for your CPU architecture from the SaltStack download site:
- Salt32.zip
- Salt64.zip
These files contain all software required to build and develop salt. Unzip the contents of the file to C:\Salt-Dev\temp.
Create the Build Environment
1. Build the Python Environment
- Install Python:
Browse to the C:\Salt-Dev\temp directory and find the Python installation file for your CPU architecture under the corresponding subfolder. Double-click the file to install Python.
Make sure the following are in your PATH environment variable:
C:\Python27
C:\Python27\Scripts
- Install Pip:
Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following command:
python get-pip.py
- Easy Install compiled binaries:
M2Crypto, PyCrypto, and PyWin32 need to be installed using Easy Install. Open a command prompt and navigate to C:\Salt-Dev\temp\<cpuarch>. Run the following commands:
easy_install -Z <M2Crypto file name>
easy_install -Z <PyCrypto file name>
easy_install -Z <PyWin32 file name>
NOTE: You can type the first part of the file name and then press the tab key to auto-complete the name of the file.
- Pip Install Additional Prerequisites:
All remaining prerequisites need to be pip installed. These prerequisites are as follows:
- MarkupSafe
- Jinja
- MsgPack
- PSUtil
- PyYAML
- PyZMQ
- WMI
- Requests
- Certifi
Open a command prompt and navigate to C:\Salt-Dev\temp. Run the following commands:
pip install <cpuarch>\<MarkupSafe file name>
pip install <Jinja file name>
pip install <cpuarch>\<MsgPack file name>
pip install <cpuarch>\<psutil file name>
pip install <cpuarch>\<PyYAML file name>
pip install <cpuarch>\<pyzmq file name>
pip install <WMI file name>
pip install <requests file name>
pip install <certifi file name>
2. Build the Salt Environment
- Clone Salt:
Open a command prompt and navigate to C:\Salt-Dev. Run the following command to clone salt:
git clone https://github.com/saltstack/salt
- Checkout Branch:
Check out the branch or tag of salt you want to work on or build. Open a command prompt and navigate to C:\Salt-Dev\salt.
Fetch all remote refs:
git fetch --all
To view a list of available branches:
git branch -a
To view a list of available tags:
git tag -n
Check out the branch or tag by typing the following command:
git checkout <branch/tag name>
- Clean the Environment:
When switching between branches, residual files can be left behind that will interfere with the functionality of salt. Therefore, after you check out the branch you want to work on, type the following commands to clean the salt environment:
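(The exact commands are not preserved in this text. A typical cleanup of a git working copy, assuming nothing untracked in the clone needs to be kept, would be:)
# Remove untracked and ignored files, then reset tracked files to HEAD.
git clean -fxd
git reset --hard HEAD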
Developing with Salt
There are two ways to develop with salt. You can run salt's setup.py each time you make a change to source code or you can use the setup tools develop mode.
Configure the Minion
Both methods require that the minion configuration be in the C:\salt directory. Copy the conf and var directories from C:\Salt-Dev\salt\pkg\windows\buildenv to C:\salt. Now go into the C:\salt\conf directory and edit the file named minion (no extension). You need to configure the master and id parameters in this file. Edit the following lines:
master: <ip or name of your master>
id: <name of your minion>
Setup.py Method
Go into the C:\Salt-Dev\salt directory from a cmd prompt and type:
python setup.py install --force
This will install salt into your Python installation at C:\Python27. Every time you make an edit to your source code, you'll have to stop the minion, run the setup, and start the minion.
To start the salt-minion go into C:\Python27\Scripts from a cmd prompt and type:
salt-minion
For debug mode type:
salt-minion -l debug
To stop the minion press Ctrl+C.
Setup Tools Develop Mode (Preferred Method)
To use the Setup Tools Develop Mode go into C:\Salt-Dev\salt from a cmd prompt and type:
pip install -e .
This will install pointers to your source code that resides at C:\Salt-Dev\salt. When you edit your source code you only have to restart the minion.
Build the windows installer
This is the method of building the installer as of version 2014.7.4.
Clean the Environment
Make sure you don't have any leftover salt files from previous versions of salt in your Python directory.
1. Remove all files that start with salt in the C:\Python27\Scripts directory
2. Remove all files and directories that start with salt in the C:\Python27\Lib\site-packages directory
Install Salt
Install salt using salt's setup.py. From the C:\Salt-Dev\salt directory type the following command:
python setup.py install --force
Build the Installer
From cmd prompt go into the C:\Salt-Dev\salt\pkg\windows directory. Type the following command for the branch or tag of salt you're building:
BuildSalt.bat <branch or tag>
This will copy python with salt installed to the buildenv\bin directory, make it portable, and then create the Windows installer. The .exe for the Windows installer will be placed in the installer directory.
Testing the Salt minion
1. Create the directory C:\salt (if it doesn't exist already)
2. Copy the example conf and var directories from pkg/windows/buildenv/ into C:\salt
3. Edit C:\salt\conf\minion:
master: ipaddress or hostname of your salt-master
4. Start the salt-minion:
cd C:\Python27\Scripts
python salt-minion
5. On the salt-master, accept the new minion's key:
sudo salt-key -A
This accepts all unaccepted keys. If you're concerned about security, just accept the key for this specific minion, as in the sketch below.
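A sketch of accepting a single key (the minion id winminion is hypothetical; use the id configured in C:\salt\conf\minion):
# List all keys, then accept only the named minion's key.
sudo salt-key -L
sudo salt-key -a winminion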
6. Test that your minion is responding.
On the salt-master run:
sudo salt '*' test.ping
You should get the following response: {'your minion hostname': True}
Single command bootstrap script
On a 64 bit Windows host the following script makes an unattended install of salt, including all dependencies:
Not up to date: This script is not up to date. Please use the installer found above.
# (All in one line.) "PowerShell (New-Object System.Net.WebClient).DownloadFile('http://csa-net.dk/salt/bootstrap64.bat','C:\bootstrap.bat');(New-Object -com Shell.Application).ShellExecute('C:\bootstrap.bat');"
You can execute the above command remotely from a Linux host using winexe:
winexe -U "administrator" //fqdn "PowerShell (New-Object ......);"
For more info check http://csa-net.dk/salt
Package management under Windows 2003
On Windows Server 2003, you need to install the optional component "WMI Windows Installer Provider" to get a full list of installed packages. Without it, salt-minion cannot report some installed software.
SUSE Installation
With openSUSE 13.2, Salt 2014.1.11 is available in the primary repositories. The devel:languages:python repo will have more up to date versions of salt; all package development will be done there.
Installation
Salt can be installed using zypper and is available in the standard openSUSE repositories.
Stable Release
Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
zypper install salt-master
zypper install salt-minion
Post-installation tasks openSUSE
Master
To have the Master start automatically at boot time:
systemctl enable salt-master.service
To start the Master:
systemctl start salt-master.service
Minion
To have the Minion start automatically at boot time:
systemctl enable salt-minion.service
To start the Minion:
systemctl start salt-minion.service
Post-installation tasks SLES
Master
To have the Master start automatically at boot time:
chkconfig salt-master on
To start the Master:
rcsalt-master start
Minion
To have the Minion start automatically at boot time:
chkconfig salt-minion on
To start the Minion:
rcsalt-minion start
Unstable Release
openSUSE
For openSUSE Factory run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 13.2 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For openSUSE 13.1 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For bleeding edge python Factory run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
SUSE Linux Enterprise
For SLE 12 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_12/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For SLE 11 SP3 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
For SLE 11 SP2 run the following as root:
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo
zypper refresh
zypper install salt salt-minion salt-master
Now go to the Configuring Salt page.
Dependencies
Salt should run on any Unix-like platform so long as the dependencies are met.
- Python >= 2.6, < 3.0
- msgpack-python - High-performance message interchange format
- YAML - Python YAML bindings
- Jinja2 - parsing Salt States (configurable in the master settings)
- MarkupSafe - Implements an XML/HTML/XHTML Markup safe string for Python
- apache-libcloud - Python lib for interacting with many of the popular cloud service providers using a unified API
- Requests - HTTP library
Depending on the chosen Salt transport, ZeroMQ or RAET, dependencies vary:
- ZeroMQ:
  - ZeroMQ >= 3.2.0
  - pyzmq >= 2.2.0 - ZeroMQ Python bindings
  - PyCrypto - The Python cryptography toolkit
  - M2Crypto - "Me Too Crypto" - Python OpenSSL wrapper
- RAET:
  - libnacl - Python bindings to libsodium
  - ioflo - The flo programming interface raet and salt-raet is built on
  - RAET - The world's most awesome UDP protocol
Salt defaults to the ZeroMQ transport, and the choice can be made at install time, for example:
python setup.py install --salt-transport=raet
This way, only the required dependencies are pulled by the setup script if need be.
If installing using pip, the --salt-transport install option can be provided as follows:
pip install --install-option="--salt-transport=raet" salt
Optional Dependencies
- •
- mako - an optional parser for Salt States (configurable in the master settings)
- •
- gcc - dynamic Cython module compiling
Upgrading Salt
When upgrading Salt, the master(s) should always be upgraded first. Backward compatibility for minions running newer versions of salt than their masters is not guaranteed.
Whenever possible, backward compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability.
TUTORIALS
Introduction
Salt Masterless Quickstart
Running a masterless salt-minion lets you use Salt's configuration management for a single machine without calling out to a Salt master on another machine.
Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
- Stand up a master server via States (Salting a Salt Master)
- Use salt-call commands on a system without connectivity to a master
- Masterless States, run states entirely from files local to the minion
It is also useful for testing out state trees before deploying to a production setup.
Bootstrap Salt Minion
The salt-bootstrap script makes bootstrapping a server with Salt simple for any OS with a Bourne shell:
wget -O - https://bootstrap.saltstack.com | sudo sh
See the salt-bootstrap documentation for other one liners. When using Vagrant to test out salt, the Vagrant salt provisioner will provision the VM for you.
Telling Salt to Run Masterless
To instruct the minion to not look for a master, the file_client configuration option needs to be set in the minion configuration file. By default the file_client is set to remote so that the minion gathers file server and pillar data from the salt master. When setting the file_client option to local the minion is configured to not gather this data from the master.
file_client: local
Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources.
NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.
Create State Tree
Following the successful installation of a salt-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored.
The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed.
NOTE: For a complete explanation on Salt States, see the tutorial.
1. Create the top.sls file:
/srv/salt/top.sls:
base:
  '*':
    - webserver
2. Create the webserver state tree:
/srv/salt/webserver.sls:
apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration
NOTE: The apache package has different names on different platforms, for instance on Debian/Ubuntu it is apache2, on Fedora/RHEL it is httpd and on Arch it is apache
The only thing left is to provision our minion using salt-call and the highstate command.
Salt-call
The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data:
salt-call --local state.highstate
The --local flag tells the salt-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions.
To provide verbose output, use -l debug:
salt-call --local state.highstate -l debug
The minion first examines the top.sls file and determines that it is a part of the group matched by * glob and that the webserver SLS should be applied.
It then examines the webserver.sls file and finds the apache state, which installs the Apache package.
The minion should now have Apache installed, and the next step is to begin learning how to write more complex states.
Basics
Standalone Minion
Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
- Use salt-call commands on a system without connectivity to a master
- Masterless States, run states entirely from files local to the minion
NOTE: When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail. The salt-call command stands on its own and does not need the salt-minion daemon.
Telling Salt Call to Run Masterless
The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt-call command checks into the master to retrieve file server and pillar data, but when running standalone salt-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt-call the file_client configuration option needs to be set. By default the file_client is set to remote so that the minion knows that file server and pillar data are to be gathered from the master. When setting the file_client option to local the minion is configured to not gather this data from the master.
file_client: local
Now the salt-call command will not look for a master and will assume that the local system has all of the file and pillar resources.
Running States Masterless
The state system can be easily run without a Salt master, with all needed files local to the minion. To do this the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /srv/salt for the base environment just like on the master:
file_roots:
  base:
    - /srv/salt
Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. With the file_client option set to local and an available state tree, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master.
Remember that when creating a state tree on a minion there are no syntax or path changes needed; SLS modules written to be used from a master do not need to be modified in any way to work with a minion.
This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows.
The declared state can now be executed with:
salt-call state.highstate
Or the salt-call command can be executed with the --local flag, which makes it unnecessary to change the configuration file:
salt-call state.highstate --local
External Pillars
External pillars are supported when running in masterless mode.
Opening the Firewall up for Salt
The Salt master communicates with the minions using an AES-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master.
NOTE: No firewall configuration needs to be done on Salt minions. These changes refer to the master only.
Fedora 18 and beyond / RHEL 7 / CentOS 7
Starting with Fedora 18, FirewallD is the tool used to dynamically manage firewall rules on a host. It has support for IPv4/IPv6 settings and the separation of runtime and permanent configurations. To interact with FirewallD, use the command line client firewall-cmd.
firewall-cmd example:
firewall-cmd --permanent --zone=<zone> --add-port=4505-4506/tcp
Please choose the desired zone according to your setup. Don't forget to reload the firewall after you have made your changes.
firewall-cmd --reload
RHEL 6 / CentOS 6
The lokkit command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful to not lock out access to the server by neglecting to open the ssh port.
lokkit example:
lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp
The system-config-firewall-tui command provides a text-based interface to modifying the firewall.
system-config-firewall-tui:
system-config-firewall-tui
openSUSE
Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt. Enable with:
SuSEfirewall2 open
SuSEfirewall2 start
If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2 command makes opening iptables firewall ports very simple via the command line.
SuSEfirewall example:
SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506
The firewall module in YaST2 provides a text-based interface to modifying the firewall.
YaST2:
yast2 firewall
iptables
Different Linux distributions store their iptables (also known as netfilter) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary.
Fedora / RHEL / CentOS:
/etc/sysconfig/iptables
Arch Linux:
/etc/iptables/iptables.rules
Debian
Follow these instructions: https://wiki.debian.org/iptables
Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506:
-A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
-A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT
Ubuntu
Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:
ufw allow salt
pf.conf
The BSD-family of operating systems uses packet filter (pf). The following example describes the additions to pf.conf needed to access the Salt master.
pass in on $int_if proto tcp from any to $int_if port 4505
pass in on $int_if proto tcp from any to $int_if port 4506
Once these additions have been made to pf.conf, the rules will need to be reloaded. This can be done using the pfctl command.
pfctl -vf /etc/pf.conf
Whitelist communication to Master
There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to prevent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment.
Here is an example Linux iptables ruleset to be set on the Master:
# Allow Minions from these networks
-I INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
-I INPUT -s 10.1.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT
# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT
NOTE: The important thing to note here is that the salt command needs to communicate with the listening network socket of salt-master on the loopback interface. Without this you will see no outgoing Salt traffic from the master, even for a simple salt '*' test.ping, because the salt client never reached the salt-master to tell it to carry out the execution.
Using cron with Salt
The Salt Minion can initiate its own highstate using the salt-call command.
$ salt-call state.highstate
This will cause the minion to check in with the master and ensure it is in the correct 'state'.
Use cron to initiate a highstate
If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the salt-call command.
# PATH=/bin:/sbin:/usr/bin:/usr/sbin
00 00 * * * salt-call state.highstate
The above cron entry will run a highstate every day at midnight.
NOTE: Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed.
Remote execution tutorial
Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
Stuck?
There are many ways to get help from the Salt community, including our mailing list and our IRC channel #salt.
Order your minions around
Now that you have a master and at least one minion communicating with each other you can perform commands on the minion via the salt command. Salt calls are comprised of three main components:
salt '<target>' <function> [arguments]
target
The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example:
salt '*' test.ping
salt '*.example.org' test.ping
Targets can be based on minion system information using the Grains system:
salt -G 'os:Ubuntu' test.ping
SEE ALSO: Grains system
Targets can be filtered by regular expression:
salt -E 'virtmach[0-9]' test.ping
Targets can be explicitly specified in a list:
salt -L 'foo,bar,baz,quo' test.ping
Or multiple target types can be combined in one command:
salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping
function
A function is some functionality provided by a module. Salt ships with a large collection of available functions. List all available functions on your minions:
salt '*' sys.doc
Here are some examples:
Show all currently available minions:
salt '*' test.ping
Run an arbitrary shell command:
salt '*' cmd.run 'uname -a'
SEE ALSO: the full list of modules
arguments
Space-delimited arguments to the function:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Pillar Walkthrough
NOTE: This walkthrough assumes that the reader has already completed the initial Salt walkthrough.
Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.
NOTE: Grains and Pillar are sometimes confused; just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many minions stored or generated on the Salt Master.
Pillar data is useful for:
Highly Sensitive Data:
Information transferred via pillar is guaranteed to only be presented to the minions that are targeted, making Pillar suitable for managing security information, such as cryptographic keys and passwords.
Minion Configuration:
Minion modules such as the execution modules, states, and returners can often be configured via data stored in pillar.
Variables:
Variables which need to be assigned to specific minions or groups of minions can be defined in pillar and then accessed inside sls formulas and template files.
Arbitrary Data:
Pillar can contain any basic data structure in dictionary format, so a key/value store can be defined, making it easy to iterate over a group of values in sls formulas.
Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available.
Setting Up Pillar
The pillar is already running in Salt by default. To see the minion's pillar data:
salt '*' pillar.items
NOTE: Prior to version 0.16.2, this function was named pillar.data. That function name is still supported for backwards compatibility.
By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.
Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /srv/pillar.
NOTE: The pillar location can be configured via the pillar_roots option inside the master configuration file. It must not be in a subdirectory of the state tree or file_roots. If the pillar is under file_roots, any pillar targeting can be bypassed by minions.
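For reference, the default pillar_roots setting in the master configuration file corresponds to the location mentioned above:

pillar_roots:
  base:
    - /srv/pillar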
To start setting up the pillar, the /srv/pillar directory needs to be present:
mkdir /srv/pillar
Now create a simple top file, following the same format as the top file used for states:
/srv/pillar/top.sls:
base:
  '*':
    - data
This top file associates the data.sls file to all minions. Now the /srv/pillar/data.sls file needs to be populated:
/srv/pillar/data.sls:
info: some data
To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master:
salt '*' saltutil.refresh_pillar
Now that the minions have the new pillar, it can be retrieved:
salt '*' pillar.items
The key info should now appear in the returned pillar data.
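The output will look something like the following (the minion id shown here is only an example):

minion1:
    ----------
    info:
        some data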
More Complex Data
Unlike states, pillar files do not need to define formulas. This example sets up user data with a UID:
/srv/pillar/users/init.sls:
users:
  thatch: 1000
  shouse: 1001
  utahdave: 1002
  redbeard: 1003
NOTE: The same directory lookups that exist in states exist in pillar, so the file users/init.sls can be referenced with users in the top file.
The top file will need to be updated to include this sls file:
/srv/pillar/top.sls:
base:
  '*':
    - data
    - users
Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja:
/srv/salt/users/init.sls
{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}
This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file.
Parameterizing States With Pillar
Pillar data can be accessed in state files to customise behavior for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don't apply.
A simple example is to set up a mapping of package names in pillar for separate Linux distributions:
/srv/pillar/pkg/init.sls:
pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  vim: vim-enhanced
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  vim: vim
  {% elif grains['os'] == 'Arch' %}
  apache: apache
  vim: vim
  {% endif %}
The new pkg sls needs to be added to the top file:
/srv/pillar/top.sls:
base:
  '*':
    - data
    - users
    - pkg
Now the minions will automatically map values based on their respective operating systems inside of the pillar, so sls files can be safely parameterized:
/srv/salt/apache/init.sls:
apache:
  pkg.installed:
    - name: {{ pillar['pkgs']['apache'] }}
Or, if no pillar is available a default can be set as well:
NOTE: The function pillar.get used in this example was added to Salt in version 0.14.0
/srv/salt/apache/init.sls:
apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}
In the above example, if the pillar value pillar['pkgs']['apache'] is not set in the minion's pillar, then the default of httpd will be used.
NOTE: Under the hood, pillar is just a Python dict, so Python dict methods such as get and items can be used.
Pillar Makes Simple States Grow Easily
One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states.
A simple formula:
/srv/salt/edit/vim.sls:
vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim
Can be easily transformed into a powerful, parameterized formula:
/srv/salt/edit/vim.sls:
vim:
  pkg.installed:
    - name: {{ pillar['pkgs']['vim'] }}

/etc/vimrc:
  file.managed:
    - source: {{ pillar['vimrc'] }}
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim
Where the vimrc source location can now be changed via pillar:
/srv/pillar/edit/vim.sls:
{% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}
This ensures that the right vimrc is sent out to the correct minions.
Setting Pillar Data on the Command Line
Pillar data can be set on the command line like so:
salt '*' state.highstate pillar='{"foo": "bar"}'
The state.sls command can also be used to set pillar values via the command line:
salt '*' state.sls my_sls_file pillar='{"hello": "world"}'
NOTE: If a key is passed on the command line that already exists on the minion, the key that is passed in will overwrite the entire value of that key, rather than merging only the specified value set via the command line.
The example below will swap the value for vim with telnet in the previously specified list, notice the nested pillar dict:
salt '*' state.sls edit.vim pillar='{"pkgs": {"vim": "telnet"}}'
NOTE: This will attempt to install telnet on your minions; feel free to uninstall the package afterwards or replace the telnet value with anything else.
More On Pillar
Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location.
Reference information on pillar and the external pillar interface can be found in the Salt documentation.
States
How Do I Use Salt States?
Simplicity, Simplicity, Simplicity
Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple)
The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state a system should be in, and is set up to contain this data in a simple format. This is often called configuration management.
NOTE: This is just the beginning of using states; make sure to read up on Pillar next.
It is All Just Data
Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn't critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is.
SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer.
The Top File
The example SLS files in the below sections can be assigned to hosts using a file called top.sls. This file is described in-depth here.
Default Data - YAML
By default Salt represents the SLS data in what is one of the simplest serialization formats available - YAML.
A typical SLS file will often look like this in YAML:
NOTE: These demos use some generic service and package names; different distributions often use different names for packages and services. For instance apache should be replaced with httpd on a Red Hat system. Salt uses the name of the init script, systemd unit, upstart job, etc., based on the underlying service management system for the platform. To get a list of the available service names on a platform, execute the service.get_all salt function.
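For example, to list the available service names on a minion:

salt '*' service.get_all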
Information on how to make states work with multiple distributions is later in the tutorial.
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache
This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way.
The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated.
The second and third lines contain the state module function to be run, in the format <state_module>.<function>. The pkg.installed state module function ensures that a software package is installed via the system's native package manager. The service.running state module function ensures that a given system daemon is running.
Finally, on line four is the word require. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package.
Adding Configs and Users
When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up.
apache:
  pkg.installed: []
  service.running:
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644
This SLS data greatly extends the first example, and includes a config file, a user, a group and new requisite statement: watch.
Adding more states is easy. Since the new user and group states are under the apache ID, the user and group will be the apache user and group. The require statements make sure that the user is only created after the group, and that the group is only created after the apache package is installed.
Next, the require statement under service was changed to watch, and is now watching three states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component. The watch statement will run the state's watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user's uid was modified, then the service state's watcher will be run. The service state's watcher just restarts the service, so in this case, a change in the config file will also trigger a restart of the respective service.
Moving Beyond a Single SLS
When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. The above example also references a file with a strange source - salt://apache/httpd.conf. That file will need to be available as well.
The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files.
The Apache example would be laid out in the root of the Salt file server like this:
apache/init.sls
apache/httpd.conf
So the httpd.conf is just a file in the apache directory, and is referenced directly.
But when using more than one single SLS file, more components can be added to the toolkit. Consider this SSH example:
ssh/init.sls:
openssh-client:
  pkg.installed

/etc/ssh/ssh_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/ssh_config
    - require:
      - pkg: openssh-client
ssh/server.sls:
include:
  - ssh

openssh-server:
  pkg.installed

sshd:
  service.running:
    - require:
      - pkg: openssh-client
      - pkg: openssh-server
      - file: /etc/ssh/banner
      - file: /etc/ssh/sshd_config

/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/sshd_config
    - require:
      - pkg: openssh-server

/etc/ssh/banner:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/banner
    - require:
      - pkg: openssh-server
NOTE: Notice that we use two similar ways of denoting that a file is managed by Salt. In the /etc/ssh/sshd_config state section above, we use the file.managed state declaration whereas with the /etc/ssh/banner state section, we use the file state declaration and add a managed attribute to that state declaration. Both ways produce an identical result; the first way -- using file.managed -- is merely a shortcut.
Now our State Tree looks like this:
apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config
This example now introduces the include statement. The include statement includes another SLS file so that components found in it can be required, watched or, as will soon be demonstrated, extended.
The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files.
Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the States Tutorial.
Extending Included SLS Data
Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed.
In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python.
ssh/custom-server.sls:
include:
  - ssh.server

extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner
python/mod_python.sls:
include:
  - apache

extend:
  apache:
    service:
      - watch:
        - pkg: mod_python

mod_python:
  pkg.installed
The custom-server.sls file uses the extend statement to overwrite where the banner is being downloaded from, and therefore changing what file is being used to configure the banner.
In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package.
- Using extend with require or watch
-
The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component.
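For example, a hypothetical SLS that extends the apache service with one more watched file (the path and source below are made up for illustration) leaves the watch entries from the included SLS in place and simply appends the new one:

include:
  - apache

extend:
  apache:
    service:
      - watch:
        - file: /etc/httpd/conf.d/security.conf

/etc/httpd/conf.d/security.conf:
  file.managed:
    - source: salt://apache/security.conf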
Understanding the Render System
Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided.
The default rendering system is the yaml_jinja renderer. The yaml_jinja renderer will first pass the template through the Jinja2 templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files.
Other renderers available are yaml_mako and yaml_wempy which each use the Mako or Wempy templating system respectively rather than the jinja templating system, and more notably, the pure Python or py, pydsl & pyobjects renderers. The py renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; while the pydsl renderer provides a flexible, domain-specific language for authoring SLS data in Python; and the pyobjects renderer gives you a "Pythonic" interface to building state data.
NOTE: The templating engines described above aren't just available in SLS files. They can also be used in file.managed states, making file management much more dynamic and flexible. Some examples for using templates in managed files can be found in the documentation for the file states, as well as the MooseFS example below.
Getting to Know the Default - yaml_jinja
The default renderer - yaml_jinja, allows for use of the jinja templating system. A guide to the Jinja templating system can be found here: http://jinja.pocoo.org/docs
When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available, salt, grains, and pillar. The salt object allows for any Salt function to be called from within the template, and grains allows for the Grains to be accessed from within the template. A few examples:
apache/init.sls:
apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
  service.running:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% endif %}
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644
This example is simple. If the os grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd.
A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver:
moosefs/chunk.sls:
include:
  - moosefs

{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
/mnt/moose{{ mnt[-1] }}:
  mount.mounted:
    - device: {{ mnt }}
    - fstype: xfs
    - mkmnt: True
  file.directory:
    - user: mfs
    - group: mfs
    - require:
      - user: mfs
      - group: mfs
{% endfor %}

/etc/mfshdd.cfg:
  file.managed:
    - source: salt://moosefs/mfshdd.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

/etc/mfschunkserver.cfg:
  file.managed:
    - source: salt://moosefs/mfschunkserver.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - require:
      - pkg: mfs-chunkserver

mfs-chunkserver:
  pkg.installed: []

mfschunkserver:
  service.running:
    - require:
      {% for mnt in salt['cmd.run']('ls /dev/data/moose*') %}
      - mount: /mnt/moose{{ mnt[-1] }}
      - file: /mnt/moose{{ mnt[-1] }}
      {% endfor %}
      - file: /etc/mfschunkserver.cfg
      - file: /etc/mfshdd.cfg
      - file: /var/lib/mfs
This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the salt object is used multiple times to call shell commands to gather data.
Introducing the Python, PyDSL, and the Pyobjects Renderers
Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree.
This example shows a very basic Python SLS file:
python/django.sls:
#!py

def run():
    '''
    Install the django package
    '''
    return {'include': ['python'],
            'django': {'pkg': ['installed']}}
This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the py renderer. Then the run function is defined, the return value from the run function must be a Salt friendly data structure, or better known as a Salt HighState data structure.
Alternatively, using the pydsl renderer, the above example can be written more succinctly as:
#!pydsl

include('python', delayed=True)
state('django').pkg.installed()
The pyobjects renderer provides a "Pythonic", object-based approach for building the state data. The above example could be written as:
#!pyobjects

include('python')
Pkg.installed("django")
These Python examples would look like this if written in YAML:
include:
  - python

django:
  pkg.installed
This example clearly illustrates two things: one, using the YAML renderer by default is a wise decision, and two, unbridled power can be obtained where needed by using a pure Python SLS.
Running and debugging salt states.
Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute salt '*' state.highstate on the command line. If you get back only hostnames with a : after, but no return, chances are there is a problem with one or more of the sls files. On the minion, use the salt-call command: salt-call state.highstate -l debug to examine the output for errors. This should help troubleshoot the issue. The minions can also be started in the foreground in debug mode: salt-minion -l debug.
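It can also be useful to do a dry run before applying states; the test flag reports what would change without making any changes:

salt '*' state.highstate test=True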
Next Reading
With an understanding of states, the next recommendation is to become familiar with Salt's pillar interface: Pillar Walkthrough
States tutorial, part 1 - Basic Usage
The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full states reference.
This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running.
Before continuing make sure you have a working Salt installation by following the installation and the configuration instructions.
- Stuck?
-
There are many ways to get help from the Salt community including our mailing list and our IRC channel #salt.
Setting up the Salt State Tree
States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files make up the State Tree.
To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (file_roots) and uncomment the following lines:
file_roots:
  base:
    - /srv/salt
NOTE: If you are deploying on FreeBSD via ports, the file_roots path defaults to /usr/local/etc/salt/states.
Restart the Salt master in order to pick up this change:
pkill salt-master
salt-master -d
Preparing the Top File
On the master, in the directory uncommented in the previous step, (/srv/salt by default), create a new file called top.sls and add the following:
base:
  '*':
    - webserver
The top file is separated into environments (discussed later). The default environment is base. Under the base environment a collection of minion matches is defined; for now simply specify all hosts (*).
- Targeting minions
-
The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by grains. For example:
base:
  'os:Fedora':
    - match: grain
    - webserver
Create an sls file
In the same directory as the top file, create a file named webserver.sls, containing the following:
apache:               # ID declaration
  pkg:                # state declaration
    - installed       # function declaration
The first line, called the id-declaration, is an arbitrary identifier. In this case it defines the name of the package to be installed.
NOTE: The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is apache2.
The second line, called the state-declaration, defines which of the Salt States we are using. In this example, we are using the pkg state to ensure that a given package is installed.
The third line, called the function-declaration, defines which function in the pkg state module to call.
- Renderers
-
States sls files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Templating languages and DSLs are a dime-a-dozen and everyone has a favorite.
Building the expected data structure is the job of Salt renderers and they are dead-simple to write.
In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing renderer in the master configuration file.
Install the package
Next, let's run the state we created. Open a terminal on the master and run:
% salt '*' state.highstate
Our master is instructing all targeted minions to run state.highstate. When a minion executes a highstate call it will download the top file and attempt to match the expressions. When it does match an expression the modules listed for it will be downloaded, compiled, and executed.
Once completed, the minion will report back with a summary of all actions taken and all changes made.
WARNING: If you have created custom grain modules, they will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.
- SLS File Namespace
-
Note that in the example above, the SLS file webserver.sls was referred to simply as webserver. The namespace for SLS files follows a few simple rules, illustrated after this list:
- 1.
- The .sls is discarded (i.e. webserver.sls becomes webserver).
- 2.
- Subdirectories can be used for better organization.
- a.
- Each subdirectory is represented by a dot.
- b.
- webserver/dev.sls is referred to as webserver.dev.
- 3.
- A file called init.sls in a subdirectory is referred to by the path of the directory. So, webserver/init.sls is referred to as webserver.
- 4.
- If both webserver.sls and webserver/init.sls happen to exist, webserver/init.sls will be ignored and webserver.sls will be the file referred to as webserver.
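Putting these rules together with a hypothetical layout, file paths map to SLS names as follows:

webserver.sls        ->  webserver
webserver/dev.sls    ->  webserver.dev
webserver/init.sls   ->  webserver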
- Troubleshooting Salt
-
If the expected output isn't seen, the following tips can help to narrow down the problem.
- Turn up logging
-
Salt can be quite chatty when you change the logging setting to
debug:
salt-minion -l debug
- Run the minion in the foreground
-
By not starting the minion in daemon mode (-d)
one can view any output from the minion as it works:
salt-minion &
Increase the default timeout value when running salt. For example, to change the default timeout to 60 seconds:
salt -t 60
For best results, combine all three:
salt-minion -l debug &             # On the minion
salt '*' state.highstate -t 60     # On the master
Next steps
This tutorial focused on getting a simple Salt States configuration working. Part 2 will build on this example to cover more advanced sls syntax and will explore more of the states that ship with Salt.
States tutorial, part 2 - More Complex States, Requisites
NOTE: This tutorial builds on topics covered in part 1. It is recommended that you begin there.
In the last part of the Salt States tutorial we covered the basics of installing a package. We will now modify our webserver.sls file to have requirements, and use even more Salt States.
Call multiple States
You can specify multiple state-declaration under an id-declaration. For example, a quick modification to our webserver.sls to also start Apache if it is not running:
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache
Try stopping Apache before running state.highstate once again and observe the output.
NOTE: For those running Red Hat derivatives (CentOS, Amazon Linux), you will want to specify the service name as httpd. See the service state documentation for more details. With the example above, just add "- name: httpd" above the require line, with the same indentation.
Require other states
We now have a working installation of Apache so let's add an HTML file to customize our website. It isn't exactly useful to have a website without a webserver so we don't want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your webserver/init.sls file:
apache:
  pkg.installed: []
  service.running:
    - require:
      - pkg: apache

/var/www/index.html:                          # ID declaration
  file:                                       # state declaration
    - managed                                 # function
    - source: salt://webserver/index.html     # function arg
    - require:                                # requisite declaration
      - pkg: apache                           # requisite reference
Line 7 is the id-declaration. In this example it is the location we want to install our custom HTML file. (Note: the default location that Apache serves may differ from the above on your OS or distro. /srv/www could also be a likely place to look.)
Line 8 is the state-declaration. This example uses the Salt file state.
Line 9 is the function-declaration. The managed function will download a file from the master and install it in the location specified.
Line 10 is a function-arg-declaration which, in this example, passes the source argument to the managed function.
Line 11 is a requisite-declaration.
Line 12 is a requisite-reference which refers to a state and an ID. In this example, it is referring to the ID declaration from our example in part 1. This declaration tells Salt not to install the HTML file until Apache is installed.
Next, create the index.html file and save it in the webserver directory:
<html>
    <head><title>Salt rocks</title></head>
    <body>
        <h1>This file brought to you by Salt</h1>
    </body>
</html>
Last, call state.highstate again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt's File Server:
salt '*' state.highstate
Verify that Apache is now serving your custom HTML.
- require vs. watch
-
There are two requisite-declarations: “require” and “watch”. Not every state supports “watch”. The service state does support “watch” and will restart a service based on the watch condition.
For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed you could modify our Apache example from earlier as follows:
/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://webserver/httpd-vhosts.conf

apache:
  pkg.installed: []
  service.running:
    - watch:
      - file: /etc/httpd/extra/httpd-vhosts.conf
    - require:
      - pkg: apache
If the pkg and service names differ on your OS or distro of choice you can specify each one separately using a name-declaration, which is explained in Part 3.
Next steps
In part 3 we will discuss how to use includes, extends, and templating to make a more complete State Tree configuration.
States tutorial, part 3 - Templating, Includes, Extends
NOTE: This tutorial builds on topics covered in part 1 and part 2. It is recommended that you begin there.
This part of the tutorial will cover more advanced templating and configuration techniques for sls files.
Templating SLS modules
SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is Jinja2 and may be configured by changing the renderer value in the master config.
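For example, to state the default explicitly in the master configuration file (yaml_jinja is already the default, so this line is purely illustrative):

renderer: yaml_jinja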
All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this:
{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
  user.present
{% endfor %}
This templated sls file once generated will look like this:
moe:
  user.present
larry:
  user.present
curly:
  user.present
Here's a more complex example:
# Comments in yaml start with a hash symbol.
# Since jinja rendering occurs before yaml parsing, if you want to include jinja
# in the comments you may need to escape them using 'jinja' comments to prevent
# jinja from trying to render something which is not well-defined jinja.
# e.g.
# {# iterate over the Three Stooges using a {% for %}..{% endfor %} loop
# with the iterator variable {{ usr }} becoming the state ID. #}

{% for usr in 'moe','larry','curly' %}
{{ usr }}:
  group:
    - present
  user:
    - present
    - gid_from_name: True
    - require:
      - group: {{ usr }}
{% endfor %}
Using Grains in SLS modules
Often times a state will need to behave differently on different systems. Salt grains objects are made available in the template context. The grains can be used from within sls modules:
apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat' %}
    - name: httpd
    {% elif grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% endif %}
Using Environment Variables in SLS modules
You can use salt['environ.get']('VARNAME') to use an environment variable in a Salt state.
MYENVVAR="world" salt-call state.template test.sls
Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}
Error checking:
{% set myenvvar = salt['environ.get']('MYENVVAR') %}
{% if myenvvar %}

Create a file with contents from an environment variable:
  file.managed:
    - name: /tmp/hello
    - contents: {{ salt['environ.get']('MYENVVAR') }}

{% else %}

Fail - no environment passed in:
  test.fail_without_changes

{% endif %}
Calling Salt modules from templates
All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules.
The Salt module functions are also made available in the template context as salt:
moe:
  user.present:
    - gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}
Note that for the above example to work, some_group_that_exists must exist before the state file is processed by the templating engine.
Below is an example that uses the network.hw_addr function to retrieve the MAC address for eth0:
salt['network.hw_addr']('eth0')
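As a minimal sketch (the state ID and target path below are made up), the returned address could be written into a managed file:

record_mac:
  file.managed:
    - name: /etc/primary_mac
    - contents: {{ salt['network.hw_addr']('eth0') }}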
Advanced SLS module syntax
Lastly, we will cover some incredibly useful techniques for more complex State trees.
Include declaration
A previous example showed how to spread a Salt tree across several files. Similarly, requisites can span multiple files by using an include-declaration. For example:
python/python-libs.sls:
python-dateutil:
  pkg.installed
python/django.sls:
include:
  - python.python-libs

django:
  pkg.installed:
    - require:
      - pkg: python-dateutil
Extend declaration
You can modify previous declarations by using an extend-declaration. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed:
apache/apache.sls:
apache:
  pkg.installed
apache/mywebsite.sls:
include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: /etc/httpd/extra/httpd-vhosts.conf

/etc/httpd/extra/httpd-vhosts.conf:
  file.managed:
    - source: salt://apache/httpd-vhosts.conf
- Using extend with require or watch
-
The extend statement works differently for require or watch. It appends to, rather than replacing the requisite component.
Name declaration
You can override the id-declaration by using a name-declaration. For example, the previous example is a bit more maintainable if rewritten as follows:
apache/mywebsite.sls:
include:
  - apache.apache

extend:
  apache:
    service:
      - running
      - watch:
        - file: mywebsite

mywebsite:
  file.managed:
    - name: /etc/httpd/extra/httpd-vhosts.conf
    - source: salt://apache/httpd-vhosts.conf
Names declaration
Even more powerful is using a names-declaration to override the id-declaration for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop:
stooges:
  user.present:
    - names:
      - moe
      - larry
      - curly
Next steps
In part 4 we will discuss how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.
States tutorial, part 4
NOTE: This tutorial builds on topics covered in part 1, part 2 and part 3. It is recommended that you begin there.
This part of the tutorial will show how to use salt's file_roots to set up a workflow in which states can be "promoted" from dev, to QA, to production.
Salt fileserver path inheritance
Salt's fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS:
# In the master config file (/etc/salt/master)
file_roots:
  base:
    - /srv/salt
    - /mnt/salt-nfs/base
Salt's fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top-most match "wins". For example, if /srv/salt/foo.txt and /mnt/salt-nfs/base/foo.txt both exist, then salt://foo.txt will point to /srv/salt/foo.txt.
NOTE: When using multiple fileserver backends, the order in which they are listed in the fileserver_backend parameter also matters. If both roots and git backends contain a file with the same relative path, and roots appears before git in the fileserver_backend list, then the file in roots will "win", and the file in gitfs will be ignored.
A more thorough explanation of how Salt's modular fileserver works can be found here. We recommend reading this.
Environment configuration
Configure a multiple-environment setup like so:
file_roots:
  base:
    - /srv/salt/prod
  qa:
    - /srv/salt/qa
    - /srv/salt/prod
  dev:
    - /srv/salt/dev
    - /srv/salt/qa
    - /srv/salt/prod
Given the path inheritance described above, files within /srv/salt/prod would be available in all environments. Files within /srv/salt/qa would be available in both qa, and dev. Finally, the files within /srv/salt/dev would only be available within the dev environment.
Based on the order in which the roots are defined, new files/states can be placed within /srv/salt/dev, and pushed out to the dev hosts for testing.
Those files/states can then be moved to the same relative path within /srv/salt/qa, and they are now available only in the dev and qa environments, allowing them to be pushed to QA hosts and tested.
Finally, if moved to the same relative path within /srv/salt/prod, the files are now available in all three environments.
Practical Example
As an example, consider a simple website, installed to /var/www/foobarcom. Below is a top.sls that can be used to deploy the website:
/srv/salt/prod/top.sls:
base:
  'web*prod*':
    - webserver.foobarcom
qa:
  'web*qa*':
    - webserver.foobarcom
dev:
  'web*dev*':
    - webserver.foobarcom
Using pillar, roles can be assigned to the hosts:
/srv/pillar/top.sls:
base:
  'web*prod*':
    - webserver.prod
  'web*qa*':
    - webserver.qa
  'web*dev*':
    - webserver.dev
/srv/pillar/webserver/prod.sls:
webserver_role: prod
/srv/pillar/webserver/qa.sls:
webserver_role: qa
/srv/pillar/webserver/dev.sls:
webserver_role: dev
And finally, the SLS to deploy the website:
/srv/salt/prod/webserver/foobarcom.sls:
{% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
  file.recurse:
    - source: salt://webserver/src/foobarcom
    - env: {{ pillar['webserver_role'] }}
    - user: www
    - group: www
    - dir_mode: 755
    - file_mode: 644
{% endif %}
Given the above SLS, the source for the website should initially be placed in /srv/salt/dev/webserver/src/foobarcom.
First, let's deploy to dev. Given the configuration in the top file, this can be done using state.highstate:
salt --pillar 'webserver_role:dev' state.highstate
However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the foobarcom website, using state.sls:
salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom
Once the site has been tested in dev, then the files can be moved from /srv/salt/dev/webserver/src/foobarcom to /srv/salt/qa/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom
Finally, once the site has been tested in qa, then the files can be moved from /srv/salt/qa/webserver/src/foobarcom to /srv/salt/prod/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom
Thanks to Salt's fileserver inheritance, even though the files have been moved to within /srv/salt/prod, they are still available from the same salt:// URI in both the qa and dev environments.
Continue Learning
The best way to continue learning about Salt States is to read through the reference documentation and to look through examples of existing state trees. Many pre-configured state trees can be found on Github in the saltstack-formulas collection of repositories.
If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very active community and we'd love to hear from you.
In addition, by continuing to part 5, you can learn about the powerful orchestration of which Salt is capable.
States Tutorial, Part 5 - Orchestration with Salt
NOTE: This tutorial builds on some of the topics covered in the earlier States Walkthrough pages. It is recommended to start with Part 1 if you are not familiar with how to use states.
Orchestration is accomplished in salt primarily through the Orchestrate Runner. Added in version 0.17.0, this Salt Runner can use the full suite of requisites available in states, and can also execute states/functions using salt-ssh. This runner replaces the OverState.
The Orchestrate Runner
New in version 0.17.0.
NOTE: Orchestrate Deprecates OverState
The Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with some advantages:
- •
- All requisites available in states can be used.
- •
-
The states/functions will also work on salt-ssh minions.
The Orchestrate Runner was added with the intent to eventually deprecate the OverState system; however, the OverState will still be maintained until Salt Boron.
The orchestrate runner generalizes the Salt state system to a Salt master context. Whereas the state.sls, state.highstate, et al. functions are concurrently and independently executed on each Salt minion, the state.orchestrate runner is executed on the master, giving it a master-level view and control over requisites, such as state ordering and conditionals. This allows for inter-minion requisites, like ordering the application of states on different minions that must not happen simultaneously, or for halting the state run on all minions if a minion fails one of its states.
If you want to set up a load balancer in front of a cluster of web servers, for example, you can ensure the load balancer is set up before the web servers, or stop the state run altogether if one of the minions does not set up correctly.
The state.sls, state.highstate, et al functions allow you to statefully manage each minion and the state.orchestrate runner allows you to statefully manage your entire infrastructure.
Executing the Orchestrate Runner
The Orchestrate Runner command format is the same as for the state.sls function, except that since it is a runner, it is executed with salt-run rather than salt. Assuming you have a state file called /srv/salt/orch/webserver.sls, the following command, run on the master, will apply the states defined in that file.
salt-run state.orchestrate orch.webserver
NOTE: state.orch is a synonym for state.orchestrate
Changed in version 2014.1.1: The runner function was renamed to state.orchestrate to avoid confusion with the state.sls execution function. In versions 0.17.0 through 2014.1.0, state.sls must be used.
Examples
Function
To execute a function, use salt.function:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/foo
salt-run state.orchestrate orch.cleanfoo
State
To execute a state, use salt.state.
# /srv/salt/orch/webserver.sls
install_nginx:
  salt.state:
    - tgt: 'web*'
    - sls:
      - nginx
salt-run state.orchestrate orch.webserver
Highstate
To run a highstate, set highstate: True in your state config:
# /srv/salt/orch/web_setup.sls
webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True
salt-run state.orchestrate orch.web_setup
More Complex Orchestration
Many states/functions can be configured in a single file, which when combined with the full suite of requisites, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any requisites, as is the default in SLS files since 0.17.0.
cmd.run:
  salt.function:
    - tgt: 10.0.0.0/24
    - tgt_type: ipcidr
    - arg:
      - bootstrap

storage_setup:
  salt.state:
    - tgt: 'role:storage'
    - tgt_type: grain
    - sls: ceph
    - require:
      - salt: webserver_setup

webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True
Given the above setup, the orchestration will be carried out as follows:
- 1.
- The shell command bootstrap will be executed on all minions in the 10.0.0.0/24 subnet.
- 2.
- A Highstate will be run on all minions whose ID starts with "web", since the storage_setup state requires it.
- 3.
- Finally, the ceph SLS target will be executed on all minions which have a grain called role with a value of storage.
The OverState System
WARNING: The OverState runner is deprecated, and will be removed in the feature release of Salt codenamed Boron. (Three feature releases after 2014.7.0, which is codenamed Helium)
Often, servers need to be set up and configured in a specific order, and systems should only be set up if systems earlier in the sequence have been set up without any issues.
The OverState system can be used to orchestrate deployment in a smooth and reliable way across multiple systems in small to large environments.
The OverState SLS
The OverState system is managed by an SLS file named overstate.sls, located in the root of a Salt fileserver environment.
The overstate.sls configures an unordered list of stages; each stage defines the minions on which to execute the state, and can define what sls files to run, whether to execute a state.highstate, or whether to execute a function. Here's a sample overstate.sls:
mysql:
  match: 'db*'
  sls:
    - mysql.server
    - drbd

webservers:
  match: 'web*'
  require:
    - mysql

all:
  match: '*'
  require:
    - mysql
    - webservers
NOTE: The match argument uses compound matching
Given the above setup, the OverState will be carried out as follows:
- 1.
- The mysql stage will be executed first because it is required by the webservers and all stages. It will execute state.sls once for each of the two listed SLS targets (mysql.server and drbd). These states will be executed on all minions whose minion ID starts with "db".
- 2.
- The webservers stage will then be executed, but only if the mysql stage executes without any failures. The webservers stage will execute a state.highstate on all minions whose minion IDs start with "web".
- 3.
-
Finally, the all stage will execute, running state.highstate on all systems, if, and only if the mysql
and webservers stages completed without any failures.
Any failure in the above steps would cause the requires to fail, preventing the dependent stages from executing.
Using Functions with OverState
In the above example, you'll notice that the stages lacking an sls entry run a state.highstate. As mentioned earlier, it is also possible to execute other functions in a stage. This functionality was added in version 0.15.0.
Running a function is easy:
http:
  function:
    pkg.install:
      - httpd
The list of function arguments are defined after the declared function. So, the above stage would run pkg.install httpd. Requisites only function properly if the given function supports returning a custom return code.
Executing an OverState
Since the OverState is a Runner, it is executed using the salt-run command. The runner function for the OverState is state.over.
salt-run state.over
The function will by default look in the root of the base environment (as defined in file_roots) for a file called overstate.sls, and then execute the stages defined within that file.
Different environments and paths can be used as well, by adding them as positional arguments:
salt-run state.over dev /root/other-overstate.sls
The above would run an OverState using the dev fileserver environment, with the stages defined in /root/other-overstate.sls.
WARNING: Since these are positional arguments, when defining the path to the overstate file the environment must also be specified, even if it is the base environment.
NOTE: Remember, salt-run is always executed on the master.
Syslog-ng usage
Overview
The syslog_ng state module is for generating syslog-ng configurations. You can do the following things:
- •
- generate syslog-ng configuration from YAML,
- •
- use non-YAML configuration,
- •
-
start, stop or reload syslog-ng.
There is also an execution module, which can check the syntax of the configuration, get the version and other information about syslog-ng.
Configuration
Users can create syslog-ng configuration statements with the syslog_ng.config function. It requires a name and a config parameter. The name parameter determines the name of the generated statement and the config parameter holds a parsed YAML structure.
A statement can be declared in the following forms (both are equivalent):
source.s_localhost:
  syslog_ng.config:
    - config:
      - tcp:
        - ip: "127.0.0.1"
        - port: 1233
s_localhost:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: "127.0.0.1"
            - port: 1233
The first one is called the short form, because it needs less typing. Users can use lists and dictionaries to specify their configuration. The format is quite self-describing and there are more examples at the end of this document.
Quotation
- Quoting can be tricky sometimes, but here are some rules to follow:
- •
- when a string is meant to appear as "string" in the generated configuration, it should be written as '"string"' in the YAML document
- •
- similarly, users should write "'string'" to get 'string' in the generated configuration
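As a short illustration (the destination name and file path below are made up), the following YAML yields a double-quoted path in the generated configuration:

destination.d_file:
  syslog_ng.config:
    - config:
      - file:
        - '"/var/log/fromsalt.log"'

The generated statement would then contain file( "/var/log/fromsalt.log" ).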
Full example
The following configuration is an example of how a complete syslog-ng configuration looks:
# Set the location of the configuration file
set_location:
  module.run:
    - name: syslog_ng.set_config_file
    - m_name: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# The syslog-ng and syslog-ng-ctl binaries are here. You needn't use
# this method if these binaries can be found in a directory in your PATH.
set_bin_path:
  module.run:
    - name: syslog_ng.set_binary_path
    - m_name: "/home/tibi/install/syslog-ng/sbin"

# Writes the first lines into the config file, also erases its previous
# content
write_version:
  module.run:
    - name: syslog_ng.write_version
    - m_name: "3.6"

# There is a shorter form to set the above variables
set_variables:
  module.run:
    - name: syslog_ng.set_parameters
    - version: "3.6"
    - binary_path: "/home/tibi/install/syslog-ng/sbin"
    - config_file: "/home/tibi/install/syslog-ng/etc/syslog-ng.conf"

# Some global options
options.global_options:
  syslog_ng.config:
    - config:
      - time_reap: 30
      - mark_freq: 10
      - keep_hostname: "yes"

source.s_localhost:
  syslog_ng.config:
    - config:
      - tcp:
        - ip: "127.0.0.1"
        - port: 1233

destination.d_log_server:
  syslog_ng.config:
    - config:
      - tcp:
        - "127.0.0.1"
        - port: 1234

log.l_log_to_central_server:
  syslog_ng.config:
    - config:
      - source: s_localhost
      - destination: d_log_server

some_comment:
  module.run:
    - name: syslog_ng.write_config
    - config: |
        # Multi line
        # comment

# An other mode to use comments or existing configuration snippets
config.other_comment_form:
  syslog_ng.config:
    - config: |
        # Multi line
        # comment
The syslog_ng.config state function can generate syslog-ng configuration from YAML. If the statement (source, destination, parser, etc.) has a name, this function uses the id as the name; otherwise (as with the log statement) its purpose is like a mandatory comment.
After executing this example, the syslog_ng state will generate this file:
#Generated by Salt on 2014-08-18 00:11:11
@version: 3.6

options {
    time_reap( 30 );
    mark_freq( 10 );
    keep_hostname( yes );
};

source s_localhost {
    tcp( ip( 127.0.0.1 ), port( 1233 ) );
};

destination d_log_server {
    tcp( 127.0.0.1, port( 1234 ) );
};

log {
    source( s_localhost );
    destination( d_log_server );
};

# Multi line
# comment

# Multi line
# comment
Users can include arbitrary text in the generated configuration by using the config statement (see the example above).
Syslog_ng module functions
You can use syslog_ng.set_binary_path to set the directory which contains the syslog-ng and syslog-ng-ctl binaries. If this directory is in your PATH, you don't need to use this function. There is also a syslog_ng.set_config_file function to set the location of the configuration file.
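These two functions can also be called directly from the command line; for example (the paths below are purely illustrative):

salt '*' syslog_ng.set_binary_path /usr/local/sbin
salt '*' syslog_ng.set_config_file /usr/local/etc/syslog-ng.conf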
Examples
Simple source
source s_tail { file( "/var/log/apache/access.log", follow_freq(1), flags(no-parse, validate-utf8) ); };
s_tail:
  # Salt will call the source function of syslog_ng module
  syslog_ng.config:
    - config:
        source:
          - file:
            - file: ''"/var/log/apache/access.log"''
            - follow_freq : 1
            - flags:
              - no-parse
              - validate-utf8
OR
s_tail:
  syslog_ng.config:
    - config:
        source:
          - file:
            - ''"/var/log/apache/access.log"''
            - follow_freq : 1
            - flags:
              - no-parse
              - validate-utf8
OR
source.s_tail:
  syslog_ng.config:
    - config:
      - file:
        - ''"/var/log/apache/access.log"''
        - follow_freq : 1
        - flags:
          - no-parse
          - validate-utf8
Complex source
source s_gsoc2014 { tcp( ip("0.0.0.0"), port(1234), flags(no-parse) ); };
s_gsoc2014:
  syslog_ng.config:
    - config:
        source:
          - tcp:
            - ip: 0.0.0.0
            - port: 1234
            - flags: no-parse
Filter
filter f_json { match( "@json:" ); };
f_json:
  syslog_ng.config:
    - config:
        filter:
          - match:
            - ''"@json:"''
Template
template t_demo_filetemplate { template( "$ISODATE $HOST $MSG " ); template_escape( no ); };
t_demo_filetemplate:
  syslog_ng.config:
    - config:
        template:
          - template:
            - '"$ISODATE $HOST $MSG\n"'
          - template_escape:
            - "no"
Rewrite
rewrite r_set_message_to_MESSAGE { set( "${.json.message}", value("$MESSAGE") ); };
r_set_message_to_MESSAGE:
  syslog_ng.config:
    - config:
        rewrite:
          - set:
            - '"${.json.message}"'
            - value : '"$MESSAGE"'
Global options
options { time_reap(30); mark_freq(10); keep_hostname(yes); };
global_options:
  syslog_ng.config:
    - config:
        options:
          - time_reap: 30
          - mark_freq: 10
          - keep_hostname: "yes"
Log
log {
    source(s_gsoc2014);
    junction {
        channel {
            filter(f_json);
            parser(p_json);
            rewrite(r_set_json_tag);
            rewrite(r_set_message_to_MESSAGE);
            destination {
                file( "/tmp/json-input.log", template(t_gsoc2014) );
            };
            flags(final);
        };
        channel {
            filter(f_not_json);
            parser {
                syslog-parser( );
            };
            rewrite(r_set_syslog_tag);
            flags(final);
        };
    };
    destination {
        file( "/tmp/all.log", template(t_gsoc2014) );
    };
};
l_gsoc2014:
  syslog_ng.config:
    - config:
        log:
          - source: s_gsoc2014
          - junction:
            - channel:
              - filter: f_json
              - parser: p_json
              - rewrite: r_set_json_tag
              - rewrite: r_set_message_to_MESSAGE
              - destination:
                - file:
                  - '"/tmp/json-input.log"'
                  - template: t_gsoc2014
              - flags: final
            - channel:
              - filter: f_not_json
              - parser:
                - syslog-parser: []
              - rewrite: r_set_syslog_tag
              - flags: final
          - destination:
            - file:
              - "/tmp/all.log"
              - template: t_gsoc2014
Advanced Topics
SaltStack Walk-through
NOTE: Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs!
- •
- Thomas S Hatch
- •
- Salt creator and Chief Developer
- •
- CTO of SaltStack, Inc.
Getting Started
What is Salt?
Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.
The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems. On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States.
Installing Salt
SaltStack has been made to be very easy to install and get started. The installation documents contain instructions for all supported platforms.
Starting Salt
Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called minions. The minions connect back to the master.
Setting Up the Salt Master
Turning on the Salt Master is easy -- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager:
On systemd-based platforms (openSUSE, Fedora):
systemctl start salt-master
On Upstart based systems (Ubuntu, older Fedora/RHEL):
service salt-master start
On SysV Init systems (Debian, Gentoo etc.):
/etc/init.d/salt-master start
Alternatively, the Master can be started directly on the command-line:
salt-master -d
The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output:
salt-master -l debug
The Salt Master needs to bind to two TCP network ports on the system. These ports are 4505 and 4506. For more in depth information on firewalling these ports, the firewall tutorial is available here.
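For example, with iptables a rule along these lines (adjust it to fit your existing firewall policy) allows minions to reach both ports:

iptables -A INPUT -p tcp -m multiport --dports 4505,4506 -j ACCEPT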
Setting up a Salt Minion
NOTE: The Salt Minion can operate with or without a Salt Master. This walk-through assumes that the minion will be connected to the master, for information on how to run a master-less minion please see the master-less quick-start guide:
Masterless Minion Quickstart
The Salt Minion only needs to be aware of one piece of information to run, the network location of the master.
By default the minion will look for the DNS name salt for the master, so the easiest approach is to set internal DNS to resolve the name salt to the Salt Master's IP.
Otherwise, the minion configuration file will need to be edited so that the configuration option master points to the DNS name or the IP of the Salt Master:
NOTE: The default location of the configuration files is /etc/salt. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations.
/etc/salt/minion:
master: saltmaster.example.com
Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly:
As a daemon:
salt-minion -d
In the foreground in debug mode:
salt-minion -l debug
When the minion is started, it will generate an id value, unless it has been generated on a previous run and cached in the configuration directory, which is /etc/salt by default. This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order to try to find a value that is not localhost:
- 1.
- The Python function socket.getfqdn() is run
- 2.
- /etc/hostname is checked (non-Windows only)
- 3.
-
/etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is
checked for hostnames that map to anything within 127.0.0.0/8.
If none of the above are able to produce an id which is not localhost, then a sorted list of IP addresses on the minion (excluding any within 127.0.0.0/8) is inspected. The first publicly-routable IP address is used, if there is one. Otherwise, the first privately-routable IP address is used.
If all else fails, then localhost is used as a fallback.
NOTE: Overriding the id
The minion id can be manually specified using the id parameter in the minion config file. If this configuration value is specified, it will override all other sources for the id.
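For example, a minimal sketch of pinning the id in the minion configuration file (the hostname is hypothetical):

/etc/salt/minion:

id: web01.example.com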
Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion's public key.
Using salt-key
Salt authenticates minions using public-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master.
The salt-key command is used to manage all of the keys on the master. To list the keys that are on the master:
salt-key -L
The keys that have been rejected, accepted, and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys:
salt-key -A
NOTE: Keys should be verified! Print the master key fingerprint by running salt-key -F master on the Salt master. Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Restart the Salt minion.
On the master, run salt-key -f minion-id to print the fingerprint of the minion's public key that was received by the master. On the minion, run salt-call key.finger --local to print the fingerprint of the minion key.
On the master:
# salt-key -f foo.domain.com
Unaccepted Keys:
foo.domain.com:  39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9
On the minion:
# salt-call key.finger --local
local:
    39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9
If they match, approve the key with salt-key -a foo.domain.com.
Sending the First Commands
Now that the minion is connected to the master and authenticated, the master can start to command the minion.
Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution.
The salt command is comprised of command options, target specification, the function to execute, and arguments to the function.
A simple command to start with looks like this:
salt '*' test.ping
The * is the target, which specifies all minions.
test.ping tells the minion to run the test.ping function.
In the case of test.ping, test refers to an execution module. ping refers to the ping function contained in the aforementioned test module.
NOTE: Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services.
The result of running this command will be the master instructing all of the minions to execute test.ping in parallel and return the result.
This is not an actual ICMP ping, but rather a simple function which returns True. Using test.ping is a good way of confirming that a minion is connected.
NOTE: Each minion registers itself with a unique minion ID. This ID defaults to the minion's hostname, but can be explicitly defined in the minion config as well by using the id parameter.
Of course, there are hundreds of other modules that can be called just as test.ping can. For example, the following would return disk usage on all targeted minions:
salt '*' disk.usage
Getting to Know the Functions
Salt comes with a vast library of functions available for execution, and Salt functions are self-documenting. To see what functions are available on the minions execute the sys.doc function:
salt '*' sys.doc
This will display a very large list of available functions and documentation on them.
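The output can be narrowed by passing a module or function name to sys.doc, for example:

salt '*' sys.doc test.ping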
NOTE: Module documentation is also available on the web.
These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone to Salt configuration management and many other aspects of Salt.
NOTE: Salt comes with many plugin systems. The functions that are available via the salt command are called Execution Modules.
Helpful Functions to Know
The cmd module contains functions to shell out on minions, such as cmd.run and cmd.run_all:
salt '*' cmd.run 'ls -l /etc'
The pkg functions automatically map local system package managers to the same salt functions. This means that pkg.install will install packages via yum on Red Hat based systems, apt on Debian systems, etc.:
salt '*' pkg.install vim
NOTE: Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that pkg.install is not available, then you may need to override the pkg provider. This process is explained here.
The network.interfaces function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc:
salt '*' network.interfaces
Changing the Output Format
The default output format used for most Salt commands is called the nested outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the pprint outputter can be used to display the return data using Python's pprint module:
root@saltmaster:~# salt myminion grains.item pythonpath --out=pprint
{'myminion': {'pythonpath': ['/usr/lib64/python2.7',
                             '/usr/lib/python2.7/plat-linux2',
                             '/usr/lib64/python2.7/lib-tk',
                             '/usr/lib/python2.7/lib-tk',
                             '/usr/lib/python2.7/site-packages',
                             '/usr/lib/python2.7/site-packages/gst-0.10',
                             '/usr/lib/python2.7/site-packages/gtk-2.0']}}
The full list of Salt outputters, as well as example output, can be found here.
salt-call
The examples so far have described running commands from the Master using the salt command, but when troubleshooting it can be more beneficial to login to the minion directly and use salt-call.
Doing so allows you to see the minion log messages specific to the command you are running (which are not part of the return data you see when running the command from the Master using salt), making it unnecessary to tail the minion log. More information on salt-call and how to use it can be found here.
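For example, running the following directly on a minion executes the function locally and prints the minion-side log messages along with the return data:

salt-call -l debug disk.usage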
Grains
Salt uses a system called Grains to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users.
Grains can also be statically set; this makes it easy to assign values to minions for grouping and managing them.

A common practice is to assign grains to minions to specify the role or roles a minion fills. These static grains can be set in the minion configuration file or via the grains.setval function.
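As a small sketch, a roles grain (the role names are only illustrative) could be set either in the minion configuration file or at runtime with grains.setval:

/etc/salt/minion:

grains:
  roles:
    - webserver
    - memcache

Or, from the master:

salt '*' grains.setval roles '[webserver, memcache]'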
Targeting
Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses glob expressions (shell-style wildcards) to match minions, hence if there are minions named larry1, larry2, curly1, and curly2, a glob of larry* will match larry1 and larry2, and a glob of *1 will match larry1 and curly1.

Many other targeting systems can be used besides globs; these systems include:
- Regular Expressions
- Target using PCRE-compliant regular expressions
- Grains
- Target based on grains data: Targeting with Grains
- Pillar
- Target based on pillar data: Targeting with Pillar
- IP
- Target based on IP address/subnet/range
- Compound
- Create logic to target based on multiple targets: Targeting with Compound
- Nodegroup
-
Target with nodegroups:
Targeting with Nodegroup
The concepts of targets are used on the command line with Salt, but also function in many other areas as well, including the state system and the systems used for ACLs and user permissions.
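As a quick sketch of the command-line flags involved (the minion names and grain values are hypothetical):

salt -E 'larry.*' test.ping
salt -G 'os:Ubuntu' test.ping
salt -C 'G@os:Ubuntu and larry*' test.ping
salt -N webservers test.ping

The -E flag selects minions by PCRE regular expression, -G by grain value, -C by compound matcher, and -N by a nodegroup defined in the master configuration.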
Passing in Arguments
Many of the functions available accept arguments which can be passed in on the command line:
salt '*' pkg.install vim
This example passes the argument vim to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line:
salt '*' test.echo 'foo: bar'
In this case Salt translates the string 'foo: bar' into the dictionary "{'foo': 'bar'}"
NOTE: Any line that contains a newline will not be parsed by YAML.
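A quick way to see how the command line is parsed is the test.arg function, which simply echoes back the arguments it receives; in the following sketch the integer and the list both arrive as native Python types:

salt '*' test.arg 1 two '[1, 2, 3]'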
Salt States
Now that the basics are covered, the time has come to evaluate States. Salt States, or the State System, is the component of Salt made for configuration management.
The state system is already available with a basic Salt setup, no additional configuration is required. States can be set up immediately.
NOTE: Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other.
The high layers of the state system, which this tutorial covers, consist of everything that needs to be known to use states. The two high layers covered here are the SLS layer and the highest layer, the highstate.

Understanding the layers of data management in the State System will help with understanding states, but the lower layers never need to be used directly. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset.
The First SLS Formula
The state system is built on SLS formulas. These formulas are built out in files on Salt's file server. To make a very basic SLS formula open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied.
/srv/salt/vim.sls:
vim:
  pkg.installed
Now install vim on the minions by calling the SLS directly:
salt '*' state.sls vim
This command will invoke the state system and run the vim SLS.
Now, to beef up the vim SLS formula, a vimrc can be added:
/srv/salt/vim.sls:
vim:
  pkg.installed: []

/etc/vimrc:
  file.managed:
    - source: salt://vimrc
    - mode: 644
    - user: root
    - group: root
Now the desired vimrc needs to be copied into the Salt file server to /srv/salt/vimrc. In Salt, everything is a file, so no path redirection needs to be accounted for. The vimrc file is placed right next to the vim.sls file. The same command as above can be executed to apply the vim SLS formula, which now also includes managing the vimrc file.
NOTE: Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available.
Adding Some Depth
Obviously, maintaining SLS formulas in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula the better way: make an nginx subdirectory and add an init.sls file:
/srv/salt/nginx/init.sls:
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
A few concepts are introduced in this SLS formula.
First is the service statement which ensures that the nginx service is running.
Of course, the nginx service can't be started unless the package is installed -- hence the require statement which sets up a dependency between the two.
The require statement ensures that the required component is executed first and that it results in success.
NOTE: The require option belongs to a family of options called requisites. Requisites are a powerful component of Salt States, for more information on how requisites work and what is available see: Requisites
Also evaluation ordering is available in Salt as well: Ordering States
This new sls formula has a special name -- init.sls. When an SLS formula is named init.sls it inherits the name of the directory path that contains it. This formula can be referenced via the following command:
salt '*' state.sls nginx
NOTE: Reminder!
Just as one could call the test.ping or disk.usage execution modules, state.sls is simply another execution module. It simply takes the name of an SLS file as an argument.
Now that subdirectories can be used, the vim.sls formula can be cleaned up. To make things more flexible, move the vim.sls and vimrc into a new subdirectory called edit and change the vim.sls file to reflect the change:
/srv/salt/edit/vim.sls:
vim:
  pkg.installed

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
Only the source path to the vimrc file has changed. Now the formula is referenced as edit.vim because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed.
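With formulas organized this way, they can also be tied to minions through a top file so that a single highstate run applies everything a minion should have; a minimal sketch (the web* target is hypothetical):

/srv/salt/top.sls:

base:
  '*':
    - edit.vim
  'web*':
    - nginx

salt '*' state.highstate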
Next Reading
Two walk-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar.
- 1.
- Starting States
- 2.
-
Pillar Walkthrough
An understanding of Pillar is extremely helpful in using States.
Getting Deeper Into States
Two more in-depth States tutorials exist, which delve much more deeply into States functionality.
- 1.
- How Do I Use Salt States?, covers much more to get off the ground with States.
- 2.
-
The States Tutorial also provides a
fantastic introduction.
These tutorials include much more in-depth information including templating SLS formulas etc.
So Much More!
This concludes the initial Salt walk-through, but there are many more things still to learn! These documents will cover important core aspects of Salt:
- •
- Pillar
- •
-
Job Management
A few more tutorials are also available:
- •
- Remote Execution Tutorial
- •
-
Standalone Minion
This still is only scratching the surface, many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the Table of Contents.
MinionFS Backend Walkthrough
Propagating Files
New in version 2014.1.0.
Sometimes, one might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master.
Enabling File Propagation
To enable propagation, the file_recv option needs to be set to True.
file_recv: True
These changes require a restart of the master, then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master.
salt 'minion-id' cp.push /path/to/the/file
This command will store the file, including its full path, under cachedir /master/minions/minion-id/files. With the default cachedir the example file above would be stored as /var/cache/salt/master/minions/minion-id/files/path/to/the/file.
NOTE: This walkthrough assumes basic knowledge of Salt and cp.push. To get up to speed, check out the walkthrough.
MinionFS Backend
Since it is not a good idea to expose the whole cachedir, MinionFS should be used to send these files to other minions.
Simple Configuration
To use the minionfs backend only two configuration changes are required on the master. The fileserver_backend option needs to contain a value of minion and file_recv needs to be set to true:
fileserver_backend:
  - roots
  - minion

file_recv: True
These changes require a restart of the master, then new requests for the salt://minion-id/ protocol will send files that are pushed by cp.push from minion-id to the master.
NOTE: All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove minion from fileserver_backend in the master config file.
NOTE: Having directories with the same name as your minions in the root that can be accessed like salt://minion-id/ might cause confusion.
Commandline Example
Let's assume that we are going to generate SSH keys on a minion called minion-source and put the public part in ~/.ssh/authorized_keys of the root user of a minion called minion-destination.

First, let's make sure that /root/.ssh exists and has the right permissions:
[root@salt-master file]# salt '*' file.mkdir dir_path=/root/.ssh user=root group=root mode=700
minion-source:
    None
minion-destination:
    None
We create an RSA key pair without a passphrase [*]:
[root@salt-master file]# salt 'minion-source' cmd.run 'ssh-keygen -N "" -f /root/.ssh/id_rsa'
minion-source:
    Generating public/private rsa key pair.
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion-source
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |                 |
    |  o .            |
    | o o S o         |
    |=   + . B o      |
    |o+ E  B =        |
    |+ .   .+ o       |
    |o  ...ooo        |
    +-----------------+
and we send the public part to the master to be available to all minions:
[root@salt-master file]# salt 'minion-source' cp.push /root/.ssh/id_rsa.pub
minion-source:
    True
now it can be seen by everyone:
[root@salt-master file]# salt 'minion-destination' cp.list_master_dirs
minion-destination:
    - .
    - etc
    - minion-source/root
    - minion-source/root/.ssh
Let's copy that as the only authorized key to minion-destination:
[root@salt-master file]# salt 'minion-destination' cp.get_file salt://minion-source/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
minion-destination:
    /root/.ssh/authorized_keys
Or we can use a more elegant and salty way to add an SSH key:
[root@salt-master file]# salt 'minion-destination' ssh.set_auth_key_from_file user=root source=salt://minion-source/root/.ssh/id_rsa.pub
minion-destination:
    new
- [*]
- Yes, that was the actual key on my server, but the server is already destroyed.
Automatic Updates / Frozen Deployments
New in version 0.10.3.d.
Salt has support for the Esky application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies - including shared objects / DLLs.
Getting Started
To build frozen applications, a suitable build environment will be needed for each platform. You should probably set up a virtualenv in order to limit the scope of Q/A.

This process does work on Windows. Directions for installing Salt on Windows are available at https://github.com/saltstack/salt-windows-install. Only the 32-bit Python and dependencies have been tested, but they have been tested on 64-bit Windows.
Install bbfreeze, and then esky from PyPI in order to enable the bdist_esky command in setup.py. Salt itself must also be installed, in addition to its dependencies.
Building and Freezing
Once you have your tools installed and the environment configured, use setup.py to prepare the distribution files.
python setup.py sdist
python setup.py bdist
Once the distribution files are in place, Esky can be used to traverse the module tree and pack all the scripts up into a redistributable.
python setup.py bdist_esky
There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly.
Windows
C:\Python27\lib\site-packages\zmq will need to be added to the PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up.
Using the Frozen Build
Unpack the zip file in the desired install location. Scripts like salt-minion and salt-call will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the Esky documentation for more information)
To support updating your minions in the wild, put the builds on a web server that the minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version.
Troubleshooting
A Windows minion isn't responding
The process dispatch on Windows is slower than it is on *nix. It may be necessary to add '-t 15' to salt commands to give minions plenty of time to return.
Windows and the Visual Studio Redist
The Visual C++ 2008 32-bit redistributable will need to be installed on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If a no OPENSSL_Applink error appears on the console when trying to start a frozen minion, the redistributable is not installed.
Mixed Linux environments and Yum
The Yum Python module doesn't appear to be available on any of the standard Python package mirrors. If RHEL/CentOS systems need to be supported, the frozen build should be created on that platform to support all the Linux nodes. Remember to build the virtualenv with --system-site-packages so that the yum module is included.
Automatic (Python) module discovery
Automatic (Python) module discovery does not work with the late-loaded scheme that Salt uses for (Salt) modules. Any misbehaving modules will need to be explicitly added to the freezer_includes in Salt's setup.py. Always check the zipped application to make sure that the necessary modules were included.
Multi Master Tutorial
As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.
NOTE: If you need failover capabilities with multiple masters, there is also a MultiMaster-PKI setup available that uses a different topology: MultiMaster-PKI with Failover Tutorial

In 0.16.0, the masters do not share any information. Keys need to be accepted on both masters, and shared files need to be synchronized manually or kept consistent with tools like the git fileserver backend so that the file_roots remain the same on both masters.
Summary of Steps
- 1.
- Create a redundant master server
- 2.
- Copy primary master key to redundant master
- 3.
- Start redundant master
- 4.
- Configure minions to connect to redundant master
- 5.
- Restart minions
- 6.
- Accept keys on redundant master
Prepping a Redundant Master
The first task is to prepare the redundant master. If the redundant master is already running, stop it. There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master's identifying key pair was generated and placed in the master's pki_dir. The default location of the master's key pair is /etc/salt/pki/master/. Take the private key, master.pem, and copy it to the same location on the redundant master. Do the same for the master's public key, master.pub. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it.
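For example, assuming the default pki_dir and a redundant master reachable as saltmaster2.example.com (the hostname is illustrative), the key pair could be copied over with scp:

scp /etc/salt/pki/master/master.pem saltmaster2.example.com:/etc/salt/pki/master/
scp /etc/salt/pki/master/master.pub saltmaster2.example.com:/etc/salt/pki/master/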
NOTE: There is no logical limit to the number of redundant masters that can be used.
Once the new key is in place, the redundant master can be safely started.
Configure Minions
Since minions need to be master-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters:
master:
  - saltmaster1.example.com
  - saltmaster2.example.com
Now the minion can be safely restarted.
Now the minions will check into the original master and also check into the new redundant master. Both masters are first-class and have rights to the minions.
NOTE: Minions can automatically detect failed masters and attempt to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status.
If this option is not set, minions will still reconnect to failed masters but the first command sent after a master comes back up may be lost while the minion authenticates.
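For example, to have minions check their masters every 30 seconds:

master_alive_interval: 30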
Sharing Files Between Masters
Salt does not automatically share files between multiple masters. A number of files should be shared or sharing of these files should be strongly considered.
Minion Keys
Minion keys can be accepted the normal way using salt-key on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt-key on both masters or sharing the /etc/salt/pki/master/{minions,minions_pre,minions_rejected} directories between masters.
NOTE: While sharing the /etc/salt/pki/master directory will work, it is strongly discouraged, since allowing access to the master.pem key outside of Salt creates a SERIOUS security risk.
File_Roots
The file_roots contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions since instructions managed by one master will not agree with other masters.
The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage.
Pillar_Roots
Pillar roots should be given the same considerations as file_roots.
Master Configurations
While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent.
These access control options include but are not limited to:
- •
- external_auth
- •
- client_acl
- •
- peer
- •
- peer_run
Multi-Master-PKI Tutorial With Failover
This tutorial explains how to run a Salt environment where a single minion can have multiple masters and fail over between them if its current master fails.
The individual steps are
- •
- setup the master(s) to sign its auth-replies
- •
- setup minion(s) to verify master-public-keys
- •
- enable multiple masters on minion(s)
- •
- enable master-check on minion(s)
Please note that it is advised to have good knowledge of the Salt authentication and communication process in order to understand this tutorial. All of the settings described here go on top of the default authentication/communication process.
Motivation
The default behaviour of a salt-minion is to connect to a master and accept the master's public key. With each publication, the master sends its public key for the minion to check, and if this public key ever changes, the minion complains and exits. Practically this means that there can only be a single master at any given time.
Would it not be much nicer if the minion could have any number of masters (1:n) and jump to the next master if its current master died because of a network or hardware failure?
NOTE: There is also a MultiMaster-Tutorial with a different approach and topology than this one that might also suit your needs or might even be better suited: Multi-Master Tutorial
It is also desirable to add some sort of authenticity check to the very first public key a minion receives from a master. Currently a minion takes the first master's public key for granted.
The Goal
Setup the master to sign the public key it sends to the minions and enable the minions to verify this signature for authenticity.
Prepping the master to sign its public key
For signing to work, both master and minion must have the signing and/or verification settings enabled. If the master signs the public key but the minion does not verify it, the minion will complain and exit. The same happens when the master does not sign but the minion tries to verify.
The easiest way to have the master sign its public key is to set
master_sign_pubkey: True
After restarting the salt-master service, the master will automatically generate a new key-pair
master_sign.pem
master_sign.pub
A custom name can be set for the signing key-pair by setting
master_sign_key_name: <name_without_suffix>
The master will then generate that key-pair upon restart and use it for creating the public key's signature attached to the auth-reply.
The computation is done for every auth-request of a minion. If many minions auth very often, it is advised to use the conf_master:master_pubkey_signature and conf_master:master_use_pubkey_signature settings described below.
If multiple masters are in use and should sign their auth-replies, the signing key-pair master_sign.* has to be copied to each master. Otherwise a minion will fail to verify the master's public key when connecting to a different master than it did initially. That is because the public key's signature was created with a different signing key-pair.
Prepping the minion to verify received public keys
The minion must have the public key (and only that one!) available to be able to verify a signature it receives. That public key (defaults to master_sign.pub) must be copied from the master to the minion's pki directory.
/etc/salt/pki/minion/master_sign.pub
DO NOT COPY THE master_sign.pem FILE. IT MUST STAY ON THE MASTER AND ONLY THERE!
When that is done, enable the signature checking in the minion's configuration
verify_master_pubkey_sign: True
and restart the minion. For the first try, the minion should be run in manual debug mode.
$ salt-minion -l debug
Upon connecting to the master, the following lines should appear on the output:
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG ] Decrypting the current master AES key
If the signature verification fails, something went wrong and it will look like this
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Failed to verify signature of public key
[CRITICAL] The Salt Master server's public key did not authenticate!
In a case like this, it should be checked that the verification pubkey (master_sign.pub) on the minion is the same as the one on the master.
Once the verification is successful, the minion can be started in daemon mode again.
For the paranoid among us, it's also possible to verify the public key every time it is received from the master, that is, for every single auth-attempt, which can be quite frequent. For example, just starting the minion will force the signature to be checked six times for various things like auth, mine, highstate, etc.
If that is desired, enable the setting
always_verify_signature: True
Multiple Masters For A Minion
Configuring multiple masters on a minion is done by specifying two settings:
- •
- a list of masters addresses
- •
-
what type of master is defined
master:
  - 172.16.0.10
  - 172.16.0.11
  - 172.16.0.12
master_type: failover
This tells the minion that all the masters above are available for it to connect to. When started with this configuration, it will try the masters in the order they are defined. To randomize that order, set
master_shuffle: True
The master-list will then be shuffled before the first connection attempt.
The first master that accepts the minion is used by the minion. If the master does not yet know the minion, that counts as accepted and the minion stays on that master.
For the minion to be able to detect whether it is still connected to its current master, enable the check for it
master_alive_interval: <seconds>
If the loss of the connection is detected, the minion will temporarily remove the failed master from the list and try one of the other masters defined (again shuffled if that is enabled).
Testing the setup
At least two running masters are needed to test the failover setup.
Both masters should be running and the minion should be running on the command line in debug mode
$ salt-minion -l debug
The minion will connect to the first master from its master list
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.10
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
[INFO ] Received signed and verified master pubkey from master 172.16.0.10
[DEBUG ] Decrypting the current master AES key
Run a test.ping from the master the minion is currently connected to in order to confirm connectivity.
If successful, turn that master off. A firewall rule denying the minion's packets will also do the trick.
Depending on the configured conf_minion:master_alive_interval, the minion will notice the loss of the connection and log it to its logfile.
[INFO ] Connection to master 172.16.0.10 lost
[INFO ] Trying to tune in to next master from master-list
The minion will then remove the current master from the list and try connecting to the next master
[INFO ] Removing possibly failed master 172.16.0.10 from list of masters
[WARNING ] Master ip address changed from 172.16.0.10 to 172.16.0.11
[DEBUG ] Attempting to authenticate with the Salt Master at 172.16.0.11
If everything is configured correctly, the new master's public key will be verified successfully
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.verify_signature: Loading public key
[DEBUG ] salt.crypt.verify_signature: Verifying signature
[DEBUG ] Successfully verified signature of master public key with verification public key master_sign.pub
the authentication with the new master is successful
[INFO ] Received signed and verified master pubkey from master 172.16.0.11
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO ] Authentication with master successful!
and the minion can be pinged again from its new master.
Performance Tuning
With the setup described above, the master computes a signature for every auth-request of a minion. With many minions and many auth-requests, that can chew up quite a bit of CPU-Power.
To avoid that, the master can use a pre-created signature of its public-key. The signature is saved as a base64 encoded string which the master reads once when starting and attaches only that string to auth-replies.
Enabling this also gives paranoid users the possibility to keep the signing key-pair on a different system than the actual salt-master and to create the public key's signature there, probably on a system with more restrictive firewall rules, without internet access, fewer users, etc.
That signature can be created with
$ salt-key --gen-signature
This will create a default signature file in the master pki-directory
/etc/salt/pki/master/master_pubkey_signature
It is a simple text-file with the binary-signature converted to base64.
If no signing-pair is present yet, this will auto-create the signing pair and the signature file in one call
$ salt-key --gen-signature --auto-create
Telling the master to use the pre-created signature is done with
master_use_pubkey_signature: True
That requires the file 'master_pubkey_signature' to be present in the master's pki directory with the correct signature.
If the signature file is named differently, its name can be set with
master_pubkey_signature: <filename>
With many masters and many public keys (default and signing), it is advised to use the salt-master's hostname for the signature-file's name. Signatures can easily be confused because they do not provide any information about the key the signature was created from.
Verifying that everything works is done the same way as above.
How the signing and verification works
The default key-pair of the salt-master is
/etc/salt/pki/master/master.pem
/etc/salt/pki/master/master.pub
To be able to create a signature of a message (in this case a public-key), another key-pair has to be added to the setup. Its default name is:
master_sign.pem
master_sign.pub
The combination of the master.* and master_sign.* key-pairs gives the possibility of generating signatures. The signature of a given message is unique and can be verified if the public key of the signing key-pair is available to the recipient (the minion).
The signature of the master's public key in master.pub is computed with

master_sign.pem
master.pub
M2Crypto.EVP.sign_update()

This results in a binary signature which is converted to base64 and attached to the auth-reply sent to the minion.

With the signing pair's public key available to the minion, the attached signature can be verified with

master_sign.pub
master.pub
M2Crypto's EVP.verify_update().
When running multiple masters, either the signing key-pair has to be present on all of them, or the master_pubkey_signature has to be pre-computed for each master individually (because they all have different public-keys). DO NOT PUT THE SAME master.pub ON ALL MASTERS FOR EASE OF USE.
Preseed Minion with Accepted Key
In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly.
SEE ALSO: Many ways to preseed minion keys
Salt has other ways to generate and pre-accept minion keys in addition to the manual steps outlined below.
salt-cloud performs these same steps automatically when new cloud VMs are created (unless instructed not to).
salt-api exposes an HTTP call to Salt's REST API to generate and download the new minion keys as a tarball.
There is a general four step process to do this:
- 1.
-
Generate the keys on the master:
root@saltmaster# salt-key --gen-keys=[key_name]
Pick a name for the key, such as the minion's id.
- 2.
-
Add the public key to the accepted minion folder:
root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id]
It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file.
- 3.
-
Distribute the minion keys.
There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances )
- Security Warning
-
Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key.
- 4.
-
Preseed the Minion with the keys
You will want to place the minion keys before starting the salt-minion daemon:
/etc/salt/pki/minion/minion.pem
/etc/salt/pki/minion/minion.pub
Once in place, you should be able to start salt-minion and run salt-call state.highstate or any other salt commands that require master authentication.
Salt Bootstrap
The Salt Bootstrap script allows a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script, known as bootstrap-salt.sh, runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub: https://github.com/saltstack/salt-bootstrap
Supported Operating Systems
- •
- Amazon Linux 2012.09
- •
- Arch
- •
- CentOS 5/6
- •
- Debian 6.x/7.x/8 (git installations only)
- •
- Fedora 17/18
- •
- FreeBSD 9.1/9.2/10
- •
- Gentoo
- •
- Linaro
- •
- Linux Mint 13/14
- •
- OpenSUSE 12.x
- •
- Oracle Linux 5/6
- •
- Red Hat 5/6
- •
- Red Hat Enterprise 5/6
- •
- Scientific Linux 5/6
- •
- SmartOS
- •
- SuSE 11 SP1/11 SP2
- •
- Ubuntu 10.x/11.x/12.x/13.04/13.10
- •
-
Elementary OS 0.2
NOTE: In the event you do not see your distribution or version available, please review the develop branch on GitHub, as it may contain updates that are not present in the stable release: https://github.com/saltstack/salt-bootstrap/tree/develop
Example Usage
If you're looking for the one-liner to install salt, please scroll to the bottom and use the instructions for Installing via an Insecure One-Liner
NOTE: In every two-step example, you would be well-served to download the script and examine it first to ensure that it does what you expect.
Using curl to install latest git:
curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh git develop
Using wget to install your distribution's stable packages:
wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh
Install a specific version from git using wget:
wget -O install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh -P git v0.16.4
If you already have Python installed (version 2.6 or newer), then it's as easy as:
python -m urllib "https://bootstrap.saltstack.com" > install_salt.sh
sudo sh install_salt.sh git develop
All python versions should support the following one liner:
python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' > install_salt.sh
sudo sh install_salt.sh git develop
On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:
fetch -o install_salt.sh https://bootstrap.saltstack.com
sudo sh install_salt.sh
If all you want is to install a salt-master using latest git:
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -M -N git develop
If you want to install a specific release version (based on the git tags):
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh git v0.16.4
To install a specific branch from a git fork:
curl -o install_salt.sh -L https://bootstrap.saltstack.com
sudo sh install_salt.sh -g https://github.com/myuser/salt.git git mybranch
Installing via an Insecure One-Liner
The following examples illustrate how to install Salt via a one-liner.
NOTE: Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy.
Examples
Installing the latest develop branch of Salt:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop
Any of the examples above which use two lines can be made to run as a single line with minor modifications.
Example Usage
The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself.
For example, using curl to install your distribution's stable packages:
curl -L https://bootstrap.saltstack.com | sudo sh
Using wget to install your distribution's stable packages:
wget -O - https://bootstrap.saltstack.com | sudo sh
Installing the latest version available from git with curl:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git develop
Install a specific version from git using wget:
wget -O - https://bootstrap.saltstack.com | sh -s -- -P git v0.16.4
If you already have Python installed (version 2.6 or newer), then it's as easy as:
python -m urllib "https://bootstrap.saltstack.com" | sudo sh -s -- git develop
All python versions should support the following one liner:
python -c 'import urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()' | \
sudo sh -s -- git develop
On a FreeBSD base system you usually don't have either of the above binaries available. You do have fetch available though:
fetch -o - https://bootstrap.saltstack.com | sudo sh
If all you want is to install a salt-master using latest git:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- -M -N git develop
If you want to install a specific release version (based on the git tags):
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v0.16.4
Downloading the develop branch (from here standard command line options may be passed):
wget https://bootstrap.saltstack.com/develop
Command Line Options
Here's a summary of the command line options:
$ sh bootstrap-salt.sh -h
Usage : bootstrap-salt.sh [options] <install-type> <install-type-args>

Installation types:
 - stable (default)
 - daily  (ubuntu specific)
 - git

Examples:
 $ bootstrap-salt.sh
 $ bootstrap-salt.sh stable
 $ bootstrap-salt.sh daily
 $ bootstrap-salt.sh git
 $ bootstrap-salt.sh git develop
 $ bootstrap-salt.sh git v0.17.0
 $ bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357

Options:
 -h  Display this message
 -v  Display script version
 -n  No colours.
 -D  Show debug output.
 -c  Temporary configuration directory
 -g  Salt repository URL. (default: git://github.com/saltstack/salt.git)
 -k  Temporary directory holding the minion keys which will pre-seed the master.
 -M  Also install salt-master
 -S  Also install salt-syndic
 -N  Do not install salt-minion
 -X  Do not start daemons after installation
 -C  Only run the configuration function. This option automatically bypasses any installation.
 -P  Allow pip based installations. On some distributions the required salt packages or its
     dependencies are not available as a package for that distribution. Using this flag allows
     the script to use pip as a last resort method. NOTE: This only works for functions which
     actually implement pip based installations.
 -F  Allow copied files to overwrite existing (config, init.d, etc)
 -U  If set, fully upgrade the system prior to bootstrapping salt
 -K  If set, keep the temporary files in the temporary directories specified with -c and -k.
 -I  If set, allow insecure connections while downloading any files. For example, pass
     '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
 -A  Pass the salt-master DNS name or IP. This will be stored under
     ${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
 -i  Pass the salt-minion id. This will be stored under ${BS_SALT_ETC_DIR}/minion_id
 -L  Install the Apache Libcloud package if possible (required for salt-cloud)
 -p  Extra-package to install while installing salt dependencies. One package per -p flag.
     You're responsible for providing the proper package name.
Git Fileserver Backend Walkthrough
NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.
The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes.
Branches and tags become Salt fileserver environments.
Installing Dependencies
Beginning with version 2014.7.0, both pygit2 and Dulwich are supported as alternatives to GitPython. The desired provider can be configured using the gitfs_provider parameter in the master config file.
If gitfs_provider is not configured, then Salt will prefer pygit2 if a suitable version is available, followed by GitPython and Dulwich.
NOTE: It is recommended to always run the most recent version of any of the below dependencies. Certain features of gitfs may not be available without the most recent version of the chosen library.
pygit2
The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible.
For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install pygit2:
# yum install python-pygit2
Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it:
# apt-get install python-pygit2
If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built.
GitPython
GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:
# yum install GitPython
Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:
# apt-get install python-git
If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either pip or easy_install (it is recommended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running pip install GitPython (or easy_install GitPython) as root.
WARNING: Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically /tmp/pip-build-root/GitPython) if the cache is not cleared after installation. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running pip install 'GitPython==0.3.2.RC1' then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed.
Dulwich
Dulwich 0.9.4 or newer is required to use Dulwich as backend for gitfs.
Dulwich is available in EPEL, and can be easily installed on the master using yum:
# yum install python-dulwich
For APT-based distros such as Ubuntu and Debian:
# apt-get install python-dulwich
IMPORTANT: If switching to Dulwich from GitPython/pygit2, or switching from GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to avoid unpredictable behavior. This is probably a good idea whenever switching to a new gitfs_provider, but it is less important when switching between GitPython and pygit2.
Beginning in version 2015.5.0, the gitfs cache can be easily cleared using the fileserver.clear_cache runner.
salt-run fileserver.clear_cache backend=git
If the Master is running an earlier version, then the cache can be cleared by removing the gitfs and file_lists/gitfs directories (both paths relative to the master cache directory, usually /var/cache/salt/master).
rm -rf /var/cache/salt/master{,/file_lists}/gitfs
Simple Configuration
To use the gitfs backend, only two configuration changes are required on the master:
- 1.
-
Include git in the fileserver_backend list in the master
config file:
fileserver_backend:
  - git
- 2.
-
Specify one or more git://, https://, file://, or ssh://
URLs in gitfs_remotes to configure which repositories to
cache and search for requested files:
gitfs_remotes:
  - https://github.com/saltstack-formulas/salt-formula.git
SSH remotes can also be configured using scp-like syntax:
gitfs_remotes:
  - git@github.com:user/repo.git
  - ssh://user@domain.tld/path/to/repo.git
Information on how to authenticate to SSH remotes can be found here.
NOTE: Dulwich does not recognize ssh:// URLs, git+ssh:// must be used instead. Salt version 2015.5.0 and later will automatically add the git+ to the beginning of these URLs before fetching, but earlier Salt versions will fail to fetch unless the URL is specified using git+ssh://.
- 3.
-
Restart the master to load the new configuration.
NOTE: In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository.
Multiple Remotes
The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.
A simple scenario illustrates this cascading lookup behavior:
If the gitfs_remotes option specifies three remotes:
gitfs_remotes:
  - git://github.com/example/first.git
  - https://github.com/example/second.git
  - file:///root/third
And each repository contains some files:
first.git:
    top.sls
    edit/vim.sls
    edit/vimrc
    nginx/init.sls

second.git:
    edit/dev_vimrc
    haproxy/init.sls

third:
    haproxy/haproxy.conf
    edit/dev_vimrc
Salt will attempt to lookup the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:
- •
- A request for the file salt://haproxy/init.sls will be served from the https://github.com/example/second.git git repo.
- •
-
A request for the file salt://haproxy/haproxy.conf will be served from the
file:///root/third repo.
NOTE: This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos.
The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.
WARNING: Salt versions prior to 2014.1.0 are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (/var/cache/salt/master/gitfs) before restarting the salt-master service.
Per-remote Configuration Parameters
New in version 2014.7.0.
The following master config parameters are global (that is, they apply to all configured gitfs remotes):
- •
- gitfs_base
- •
- gitfs_root
- •
- gitfs_mountpoint (new in 2014.7.0)
- •
- gitfs_user (pygit2 only, new in 2014.7.0)
- •
- gitfs_password (pygit2 only, new in 2014.7.0)
- •
- gitfs_insecure_auth (pygit2 only, new in 2014.7.0)
- •
- gitfs_pubkey (pygit2 only, new in 2014.7.0)
- •
- gitfs_privkey (pygit2 only, new in 2014.7.0)
- •
-
gitfs_passphrase (pygit2 only, new in 2014.7.0)
These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage:
gitfs_provider: pygit2
gitfs_base: develop
gitfs_remotes:
  - https://foo.com/foo.git
  - https://foo.com/bar.git:
    - root: salt
    - mountpoint: salt://foo/bar/baz
    - base: salt-base
  - http://foo.com/baz.git:
    - root: salt/states
    - user: joe
    - password: mysupersecretpassword
    - insecure_auth: True
IMPORTANT: There are two important distinctions which should be noted for per-remote configuration:
- 1.
- The URL of a remote which has per-remote configuration must be suffixed with a colon.
- 2.
-
Per-remote configuration parameters are named like the global versions,
with the gitfs_ removed from the beginning.
In the example configuration above, the following is true:
- 1.
- The first and third gitfs remotes will use the develop branch/tag as the base environment, while the second one will use the salt-base branch/tag as the base environment.
- 2.
- The first remote will serve all files in the repository. The second remote will only serve files from the salt directory (and its subdirectories), while the third remote will only serve files from the salt/states directory (and its subdirectories).
- 3.
- The files from the second remote will be located under salt://foo/bar/baz, while the files from the first and third remotes will be located under the root of the Salt fileserver namespace (salt://).
- 4.
- The third remote overrides the default behavior of not authenticating to insecure (non-HTTPS) remotes.
Serving from a Subdirectory
The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver.
Assume the below layout:
.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls
The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:
gitfs_remotes:
  - git://mydomain.com/stuff.git

gitfs_root: foo/baz
The root can also be configured on a per-remote basis.
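For example, a sketch of a per-remote root, using the per-remote syntax described above (the repository URL here is purely illustrative):

gitfs_remotes:
  - https://mydomain.com/stuff.git:
    - root: foo/baz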
Mountpoints
New in version 2014.7.0.
The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver.
Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository).
The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.
gitfs_remotes:
  - https://mydomain.com/stuff.git

gitfs_mountpoint: salt://webapps/foo/files
Mountpoints can also be configured on a per-remote basis.
Using gitfs Alongside Other Backends
Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master.
The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:
fileserver_backend:
  - roots
  - git
Then the roots backend (the default backend of files in /srv/salt) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched.
Branches, Environments, and Top Files
When using the gitfs backend, branches and tags will be mapped to environments, using the branch/tag name as an identifier.
There is one exception to this rule: the master branch is implicitly mapped to the base environment.
So, for a typical base, qa, dev setup, the following branches could be used:
master
qa
dev
top.sls files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have the top.sls file only in the master branch and use environment-specific branches for state definitions.
To map a branch other than master as the base environment, use the gitfs_base parameter.
gitfs_base: salt-base
The base can also be configured on a per-remote basis.
Environment Whitelist/Blacklist
New in version 2014.7.0.
The gitfs_env_whitelist and gitfs_env_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.
gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
NOTE: v1.*, in this example, will match as both a glob and a regular expression (though it will have been matched as a glob, since globs are evaluated before regular expressions).
The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:
- •
- If only gitfs_env_whitelist is used, then only branches/tags which match the whitelist will be available as environments
- •
- If only gitfs_env_blacklist is used, then the branches/tags which match the blacklist will not be available as environments
- •
- If both are used, then the branches/tags which match the whitelist, but do not match the blacklist, will be available as environments.
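For instance, a sketch combining the two (the branch names are illustrative): the following would expose base and any v1.* branch/tag as environments, with the exception of v1.0:

gitfs_env_whitelist:
  - base
  - v1.*

gitfs_env_blacklist:
  - v1.0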
Authentication
pygit2
New in version 2014.7.0.
Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs.
NOTE: The examples below make use of per-remote configuration parameters, a feature new to Salt 2014.7.0. More information on these can be found here.
HTTPS
For HTTPS repositories which require authentication, the username and password can be provided like so:
gitfs_remotes:
  - https://domain.tld/myrepo.git:
    - user: git
    - password: mypassword
If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:
gitfs_remotes:
  - http://domain.tld/insecure_repo.git:
    - user: git
    - password: mypassword
    - insecure_auth: True
SSH
SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
- •
- ssh://git@github.com/user/repo.git
- •
- git@github.com:user/repo.git
Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:
gitfs_remotes:
  - git@github.com:user/repo.git:
    - pubkey: /root/.ssh/id_rsa.pub
    - privkey: /root/.ssh/id_rsa
    - passphrase: myawesomepassphrase
Finally, the SSH host key must be added to the known_hosts file.
GitPython
With GitPython, only passphrase-less SSH public key authentication is supported. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.
gitfs_remotes: - ssh://git@github.com/example/salt-states.git
Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).
If a key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to the ~/.ssh/config to use an alternate key for gitfs:
Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.
It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config:
Host github.com
    IdentityFile /root/.ssh/id_rsa_gitfs
    StrictHostKeyChecking no
However, this is generally regarded as insecure, and is not recommended.
Adding the SSH Host Key to the known_hosts File
To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:
# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:
    ----------
    new:
        ----------
        enc:
            ssh-rsa
        fingerprint:
            16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
        hostname:
            |1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
        key:
            AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
    old:
        None
    status:
        updated
If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to login to the server via SSH:
$ su
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).
It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.
Verifying the Fingerprint
To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:
$ nmap github.com --script ssh-hostkey

Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT     STATE SERVICE
22/tcp   open  ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp   open  http
443/tcp  open  https
9418/tcp open  git

Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds
Another way is to check one's own known_hosts file, using this one-liner:
$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan -t rsa github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
Refreshing gitfs Upon Push
By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh quicker than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are three steps to this process:
- 1.
- On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents:
update_fileserver:
  runner.fileserver.update
- 2.
- Add the following reactor configuration to the master config file:
reactor:
  - 'salt/fileserver/gitfs/update':
    - /srv/reactor/update_fileserver.sls
- 3.
- On the git server, add a post-receive hook with the following contents:
#!/usr/bin/env sh
salt-call event.fire_master update salt/fileserver/gitfs/update
The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor.
Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent.
Using Git as an External Pillar Source
Git repositories can also be used to provide Pillar data, using the External Pillar system. Note that this is different from gitfs, and is not yet at feature parity with it.
To define a git external pillar, add a section like the following to the salt master config file:
ext_pillar:
  - git: <branch> <repo> [root=<gitroot>]
Changed in version 2014.7.0: The optional root parameter was added
The <branch> param is the branch containing the pillar SLS tree. The <repo> param is the URI for the repository. To add the master branch of the specified repo as an external pillar source:
ext_pillar:
  - git: master https://domain.com/pillar.git
Use the root parameter to use pillars from a subdirectory of a git repository:
ext_pillar:
  - git: master https://domain.com/pillar.git root=subdirectory
More information on the git external pillar can be found in the salt.pillar.git_pillar docs.
Why aren't my custom modules/states/etc. syncing to my Minions?
In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.
This issue is worked around in Salt 0.16.4 and newer.
The MacOS X (Maverick) Developer Step By Step Guide To Salt Installation
This document provides a step-by-step guide to installing a Salt cluster consisting of one master, and one minion running on a local VM hosted on Mac OS X.
NOTE: This guide is aimed at developers who wish to run Salt in a virtual machine. The official (Linux) walkthrough can be found here.
The 5 Cent Salt Intro
Since you're here you've probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here's a brief overview of a Salt cluster:
- •
- Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your business.
- •
- Salt has two types of configuration files:
1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually is /etc/salt/master , on the master server), and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, on the minion servers). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc.. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install.
2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which software should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /srv/salt folder by default, but their location can be changed using ... /etc/salt/master configuration file!
NOTE: This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters.
- [1]
- Salt also works with "masterless" configuration where a minion is autonomous (in which case salt can be seen as a local configuration tool), or in "multiple master" configuration. See the documentation for more on that.
Before Digging In, The Architecture Of The Salt Cluster
Salt Master
The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files.
Salt Minion
We'll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution.
Step 1 - Configuring The Salt Master On Your Mac
official documentation
Because Salt has a lot of dependencies that are not built into Mac OS X, we will use Homebrew to install Salt. Homebrew is a package manager for the Mac; it's great, use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they're configuring a brand new machine and have to do it all over again. It also lets you uninstall things easily.
NOTE: Brew is a Ruby program (Ruby is installed by default with your Mac). Brew downloads, compiles, and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It's OK; remove the manually installed version, then refresh the link by typing brew link 'packageName'. Brew has a brew doctor command that can help you troubleshoot. It's a great command, use it often. Brew requires the Xcode command line tools. When you run brew the first time it asks you to install them if they're not already on your system. Brew installs software in /usr/local/bin (system binaries are in /usr/bin). In order to use those binaries you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed.
TIP: Use the keyboard shortcut cmd + shift + period in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile.
Install Homebrew
Install Homebrew here http://brew.sh/ Or just type
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
Now type the following commands in your terminal (you may want to type brew doctor after each to make sure everything's fine):
brew install python
brew install swig
brew install zmq
NOTE: zmq is ZeroMQ. It's a fantastic library used for server to server network communication and is at the core of Salt efficiency.
Install Salt
You should now have everything ready to launch this command:
pip install salt
NOTE: There should be no need for sudo pip install salt. Brew installed Python for your user, so you should have all the access you need. In case you would like to check, type which python to ensure that it's /usr/local/bin/python, and which pip, which should be /usr/local/bin/pip.
Now type python in a terminal, then import salt. There should be no errors. Exit the Python interpreter using exit().
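A quick interactive check might look like the following (your prompt and any version banner will differ):

$ python
>>> import salt
>>> exit()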
Create The Master Configuration
If the default /etc/salt/master configuration file was not created, copy-paste it from here: http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master
NOTE: /etc/salt/master is a file, not a folder.
Salt master configuration changes: the Salt master needs a few customizations to be able to run on Mac OS X:
sudo launchctl limit maxfiles 4096 8192
In the /etc/salt/master file, change max_open_files to 8192 (or just add the line max_open_files: 8192 (no quotes) if it doesn't already exist).
You should now be able to launch the Salt master:
sudo salt-master --log-level=all
There should be no errors when running the above command.
NOTE: This command is supposed to be a daemon, but for toying around, we'll keep it running on a terminal to monitor the activity.
Now that the master is set, let's configure a minion on a VM.
Step 2 - Configuring The Minion VM
The Salt minion is going to run on a Virtual Machine. There are a lot of software options that let you run virtual machines on a Mac, but for this tutorial we're going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration.
Vagrant lets you build ready to use VM images, starting from an OS image and customizing it using "provisioners". In our case, we'll use it to:
- •
- Download the base Ubuntu image
- •
- Install salt on that Ubuntu image (Salt is going to be the "provisioner" for the VM).
- •
- Launch the VM
- •
- SSH into the VM to debug
- •
- Stop the VM once you're done.
Install VirtualBox
Go get it here: https://www.virtualBox.org/wiki/Downloads (click on VirtualBox for OS X hosts => x86/amd64)
Install Vagrant
Go get it here: http://downloads.vagrantup.com/ and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double-click to install it. Make sure the vagrant command is found when run in the terminal. Type vagrant. It should display a list of commands.
Create The Minion VM Folder
Create a folder in which you will store your minion's VM. In this tutorial, it's going to be a minion folder in the $home directory.
cd $home
mkdir minion
Initialize Vagrant
From the minion folder, type
vagrant init
This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3.
Import Precise64 Ubuntu Box
vagrant box add precise64 http://files.vagrantup.com/precise64.box
NOTE: This box is added at the global Vagrant level. You only need to do it once as each VM will use this same file.
Modify the Vagrantfile
Modify ./minion/Vagrantfile to use the precise64 box. Change the config.vm.box line to:
config.vm.box = "precise64"
Uncomment the line creating a host-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use):
config.vm.network :private_network, ip: "192.168.33.10"
At this point you should have a VM that can run, although there won't be much in it. Let's check that.
Checking The VM
From the $home/minion folder type:
vagrant up
A log showing the VM booting should be present. Once it's done you'll be back to the terminal:
ping 192.168.33.10
The VM should respond to your ping request.
Now log into the VM in ssh using Vagrant again:
vagrant ssh
You should see the shell prompt change to something similar to vagrant@precise64:~$, meaning you're inside the VM. From there, enter the following:
ping 10.0.2.2
NOTE: That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the vagrant ssh command. We'll use that IP to tell the minion where the Salt master is. Once you're done, end the ssh session by typing exit.
It's now time to connect the VM to the Salt master.
Step 3 - Connecting Master and Minion
Creating The Minion Configuration File
Create the /etc/salt/minion file. In that file, put the following lines, giving the ID for this minion, and the IP of the master:
master: 10.0.2.2
id: 'minion1'
file_client: remote
Minions authenticate with the master using keys. Keys are generated automatically if you don't provide them, and you can accept them on the master later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once.
Preseed minion keys
From the minion folder on your Mac run:
sudo salt-key --gen-keys=minion1
This should create two files: minion1.pem, and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership:
sudo chown youruser:yourgroup minion1.pem
sudo chown youruser:yourgroup minion1.pub
Then copy the .pub file into the list of accepted minions:
sudo cp minion1.pub /etc/salt/pki/master/minions/minion1
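If you would like to confirm that the key is now listed as accepted, the keys known to the master can be listed (this assumes the default PKI directory used above); minion1 should appear under the accepted keys:

sudo salt-key -L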
Modify Vagrantfile to Use Salt Provisioner
Let's now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties):
# salt-vagrant config
config.vm.provision :salt do |salt|
    salt.run_highstate = true
    salt.minion_config = "/etc/salt/minion"
    salt.minion_key = "./minion1.pem"
    salt.minion_pub = "./minion1.pub"
end
Now destroy the VM and recreate it from the minion folder:
vagrant destroy
vagrant up
If everything is fine you should see the following message:
"Bootstrapping Salt... (this may take a while) Salt successfully configured and installed!"
Checking Master-Minion Communication
To make sure the master and minion are talking to each other, enter the following:
sudo salt '*' test.ping
You should see your minion answering the ping. It's now time to do some configuration.
Step 4 - Configure Services to Install On the Minion
In this step we'll use the Salt master to instruct our minion to install Nginx.
Checking the system's original state
First, make sure that an HTTP server is not installed on our minion. When opening a browser directed at http://192.168.33.10/, you should get an error saying the site cannot be reached.
Initialize the top.sls file
System configuration is done in the /srv/salt/top.sls file (and its subfiles/folders), and is then applied by running the state.highstate command, which has the Salt master instruct the minions to update their instructions and run the associated commands.
First, create an empty file on your Salt master (the Mac OS X machine):
touch /srv/salt/top.sls
When the file is empty, or if no configuration is found for our minion, an error is reported. Running:
sudo salt 'minion1' state.highstate
should return an error stating "No Top file or external nodes data matches found".
Create The Nginx Configuration
Now is finally the time to enter the real meat of our server's configuration. For this tutorial our minion will be treated as a web server that needs to have Nginx installed.
Insert the following lines into the /srv/salt/top.sls file (which should currently be empty):
base:
  'minion1':
    - bin.nginx
Now create a /srv/salt/bin/nginx.sls file containing the following:
nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - reload: True
Check Minion State
Finally run the state.highstate command again:
sudo salt 'minion1' state.highstate
You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to http://192.168.33.10/; you should see the standard Nginx welcome page.
Where To Go From Here
A full description of configuration management within Salt (sls files among other things) is available here: http://docs.saltstack.com/index.html#configuration-management
Writing Salt Tests
NOTE: THIS TUTORIAL IS A WORK IN PROGRESS
Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. The integration tests are surprisingly easy to write and can be written to be either destructive or non-destructive.
Getting Set Up For Tests
To walk through adding an integration test, start by getting the latest development code and the test system from GitHub:
NOTE: The develop branch often has failing tests and should always be considered a staging area. For a checkout that tests should be running perfectly on, please check out a specific release tag (such as v2014.1.4).
git clone git@github.com:saltstack/salt.git
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting
Now that a fresh checkout is available, run the test suite.
Destructive vs Non-destructive
Since Salt is used to change the settings and behavior of systems, often the best approach to running tests is to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation.
To write a destructive test, import and use the destructiveTest decorator for the test method:
import integration
from salttesting.helpers import destructiveTest

class PkgTest(integration.ModuleCase):
    @destructiveTest
    def test_pkg_install(self):
        ret = self.run_function('pkg.install', name='finch')
        self.assertSaltTrueReturn(ret)
        ret = self.run_function('pkg.purge', name='finch')
        self.assertSaltTrueReturn(ret)
Automated Test Runs
SaltStack maintains a Jenkins server which can be viewed at http://jenkins.saltstack.com. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new clean virtual machine. This allows for the execution of tests across supported platforms.
HTTP Modules
This tutorial demonstrates using the various HTTP modules available in Salt. These modules wrap the Python urllib2 and requests libraries, extending them in a manner that is more consistent with Salt workflows.
The salt.utils.http Library
This library forms the core of the HTTP modules. Since it is designed to be used from the minion as an execution module, in addition to the master as a runner, it was abstracted into this multi-use library. This library can also be imported by 3rd-party programs wishing to take advantage of its extended functionality.
Core functionality of the execution, state, and runner modules is derived from this library, so common usages between them are described here. Documentation specific to each module is described below.
This library can be imported with:
import salt.utils.http
Configuring Libraries
This library can make use of either urllib2, which ships with Python, or requests, which can be installed separately. By default, urllib2 will be used. In order to switch to requests, set the following variable:
requests_lib: True
This can be set in the master or minion configuration file, or passed as an option directly to any http.query() functions.
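For example, assuming the option is passed directly as a keyword argument, as described above, a call might look like:

salt.utils.http.query('http://example.com', requests_lib=True)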
salt.utils.http.query()
This function forms a basic query, but with some add-ons not present in the urllib2 and requests libraries. Not all functionality currently available in these libraries has been added, but can be in future iterations.
A basic query can be performed by calling this function with no more than a single URL:
salt.utils.http.query('http://example.com')
By default the query will be performed with a GET method. The method can be overridden with the method argument:
salt.utils.http.query('http://example.com/delete/url', 'DELETE')
When using the POST method (and others, such as PUT), extra data is usually sent as well. This data can be sent directly, in whatever format is required by the remote server (XML, JSON, plain text, etc).
salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    data=json.dumps(mydict)
)
Bear in mind that this data must be sent pre-formatted; this function will not format it for you. However, a templated file stored on the local system may be passed through, along with variables to populate it with. To pass through only the file (untemplated):
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.xml'
)
To pass through a file that contains jinja + yaml templating (the default):
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'}
)
To pass through a file that contains mako templating:
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.mako',
    data_render=True,
    data_renderer='mako',
    template_data={'key1': 'value1', 'key2': 'value2'}
)
Because this function uses Salt's own rendering system, any Salt renderer can be used. Because Salt's renderer requires __opts__ to be set, an opts dictionary should be passed in. If it is not, then the default __opts__ values for the node type (master or minion) will be used. Because this library is intended primarily for use by minions, the default node type is minion. However, this can be changed to master if necessary.
salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    opts=__opts__
)

salt.utils.http.query(
    'http://example.com/post/url',
    method='POST',
    data_file='/srv/salt/somefile.jinja',
    data_render=True,
    template_data={'key1': 'value1', 'key2': 'value2'},
    node='master'
)
Headers may also be passed through, either as a header_list, a header_dict, or as a header_file. As with the data_file, the header_file may also be templated. Take note that because HTTP headers are normally syntactically-correct YAML, they will automatically be imported as a Python dict.
salt.utils.http.query(
    'http://example.com/delete/url',
    method='POST',
    header_file='/srv/salt/headers.jinja',
    header_render=True,
    header_renderer='jinja',
    template_data={'key1': 'value1', 'key2': 'value2'}
)
Because much of the data that would be templated between headers and data may be the same, the template_data is the same for both. Correcting possible variable name collisions is up to the user.
The query() function supports basic HTTP authentication. A username and password may be passed in as username and password, respectively.
salt.utils.http.query(
    'http://example.com',
    username='larry',
    password='5700g3543v4r',
)
Cookies are also supported, using Python's built-in cookielib. However, they are turned off by default. To turn cookies on, set cookies to True.
salt.utils.http.query( 'http://example.com', cookies=True )
By default cookies are stored in Salt's cache directory, normally /var/cache/salt, as a file called cookies.txt. However, this location may be changed with the cookie_jar argument:
salt.utils.http.query( 'http://example.com', cookies=True, cookie_jar='/path/to/cookie_jar.txt' )
By default, the format of the cookie jar is LWP (aka, lib-www-perl). This default was chosen because it is a human-readable text file. If desired, the format of the cookie jar can be set to Mozilla:
salt.utils.http.query( 'http://example.com', cookies=True, cookie_jar='/path/to/cookie_jar.txt', cookie_format='mozilla' )
Because Salt commands are normally one-off commands that are piped together, this library cannot normally behave as a normal browser, with session cookies that persist across multiple HTTP requests. However, the session can be persisted in a separate cookie jar. The default filename for this file, inside Salt's cache directory, is cookies.session.p. This can also be changed.
salt.utils.http.query( 'http://example.com', persist_session=True, session_cookie_jar='/path/to/jar.p' )
The format of this file is msgpack, which is consistent with much of the rest of Salt's internal structure. Historically, the extension for this file is .p. There are no current plans to make this configurable.
Return Data
By default, query() will attempt to decode the return data. Because it was designed to be used with REST interfaces, it will attempt to decode the data received from the remote server. First it will check the Content-type header to try and find references to XML. If it does not find any, it will look for references to JSON. If it does not find any, it will fall back to plain text, which will not be decoded.
JSON data is translated into a dict using Python's built-in json library. XML is translated using salt.utils.xml_util, which will use Python's built-in XML libraries to attempt to convert the XML into a dict. In order to force either JSON or XML decoding, the decode_type may be set:
salt.utils.http.query( 'http://example.com', decode_type='xml' )
Once translated, the return dict from query() will include a dict called dict.
If the data is not to be translated using one of these methods, decoding may be turned off.
salt.utils.http.query( 'http://example.com', decode=False )
If decoding is turned on, and references to JSON or XML cannot be found, then this module will default to plain text, and return the undecoded data as text (even if text is set to False; see below).
The query() function can return the HTTP status code, headers, and/or text as required. However, each must individually be turned on.
salt.utils.http.query( 'http://example.com', status=True, headers=True, text=True )
The return from these will be found in the return dict as status, headers and text, respectively.
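As a minimal sketch of consuming that return data (the URL is illustrative, and the status code is assumed to be returned as a number):

import salt.utils.http

# status and text are only present in the return dict because they were requested
result = salt.utils.http.query('http://example.com', status=True, text=True)
if result.get('status') == 200:
    print(result['text'])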
Writing Return Data to Files
It is possible to write either the return data or headers to files, as soon as the response is received from the server, by specifying file locations via the text_out or headers_out arguments. text and headers do not need to be returned to the user in order to do this.
salt.utils.http.query(
    'http://example.com',
    text=False,
    headers=False,
    text_out='/path/to/url_download.txt',
    headers_out='/path/to/headers_download.txt',
)
SSL Verification
By default, this function will verify SSL certificates. However, for testing or debugging purposes, SSL verification can be turned off.
salt.utils.http.query( 'https://example.com', verify_ssl=False, )
CA Bundles
The requests library has its own method of detecting which CA (certificate authority) bundle file to use. Usually this is implemented by the packager for the specific operating system distribution that you are using. However, urllib2 requires a little more work under the hood. By default, Salt will try to auto-detect the location of this file. However, if it is not in an expected location, or a different path needs to be specified, it may be done so using the ca_bundle variable.
salt.utils.http.query( 'https://example.com', ca_bundle='/path/to/ca_bundle.pem', )
Updating CA Bundles
The update_ca_bundle() function can be used to update the bundle file at a specified location. If the target location is not specified, then it will attempt to auto-detect the location of the bundle file. If the URL to download the bundle from does not exist, a bundle will be downloaded from the cURL website.
CAUTION: The target and the source should always be specified! Failure to specify the target may result in the file being written to the wrong location on the local system. Failure to specify the source may cause the upstream URL to receive excess unnecessary traffic, and may cause a file to be downloaded which is hazardous or does not meet the needs of the user.
salt.utils.http.update_ca_bundle( target='/path/to/ca-bundle.crt', source='https://example.com/path/to/ca-bundle.crt', opts=__opts__, )
The opts parameter should also always be specified. If it is, then the target and the source may be specified in the relevant configuration file (master or minion) as ca_bundle and ca_bundle_url, respectively.
ca_bundle: /path/to/ca-bundle.crt
ca_bundle_url: https://example.com/path/to/ca-bundle.crt
If Salt is unable to auto-detect the location of the CA bundle, it will raise an error.
The update_ca_bundle() function can also be passed a string or a list of strings which represent files on the local system, which should be appended (in the specified order) to the end of the CA bundle file. This is useful in environments where private certs need to be made available, and are not otherwise reasonable to add to the bundle file.
salt.utils.http.update_ca_bundle(
    opts=__opts__,
    merge_files=[
        '/etc/ssl/private_cert_1.pem',
        '/etc/ssl/private_cert_2.pem',
        '/etc/ssl/private_cert_3.pem',
    ]
)
Test Mode
This function may be run in test mode. This mode will perform all work up until the actual HTTP request. By default, instead of performing the request, an empty dict will be returned. Using this function with TRACE logging turned on will reveal the contents of the headers and POST data to be sent.
Rather than returning an empty dict, an alternate test_url may be passed in. If this is detected, then test mode will replace the url with the test_url, set test to True in the return data, and perform the rest of the requested operations as usual. This allows a custom, non-destructive URL to be used for testing when necessary.
Execution Module
The http execution module is a very thin wrapper around the salt.utils.http library. The opts can be passed through as well, but if they are not specified, the minion defaults will be used as necessary.
Because passing complete data structures from the command line can be tricky at best and dangerous (in terms of execution injection attacks) at worst, the data_file and header_file arguments are likely to see more use here.
All methods for the library are available in the execution module, as kwargs.
salt myminion http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True
Runner Module
Like the execution module, the http runner module is a very thin wrapper around the salt.utils.http library. The only significant difference is that because runners execute on the master instead of a minion, a target is not required, and default opts will be derived from the master config, rather than the minion config.
All methods for the library are available in the runner module, as kwargs.
salt-run http.query http://example.com/restapi method=POST \
    username='larry' password='5700g3543v4r' headers=True text=True \
    status=True decode_type=xml data_render=True \
    header_file=/tmp/headers.txt data_file=/tmp/data.txt \
    header_render=True cookies=True persist_session=True
State Module
The state module is a wrapper around the runner module, which applies stateful logic to a query. All kwargs listed above are specified as usual in state files, but two more kwargs are available to apply stateful logic. A required parameter is match, which specifies a pattern to look for in the return text. By default, this will perform a string comparison, looking for the value of match in the return text. In Python terms this looks like:
if match in html_text:
    return True
If more complex pattern matching is required, a regular expression can be used by specifying a match_type. By default this is set to string, but it can be manually set to pcre instead. Please note that despite the name, this will use Python's re.search() rather than re.match().
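In rough Python terms, the pcre behavior corresponds to the following sketch (the variable names mirror the string example above and are not the actual module internals):

import re

# match_type: pcre searches for the pattern anywhere in the returned text
if re.search(match, html_text):
    return True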
Therefore, the following states are valid:
http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True

http://example.com/restapi:
  http.query:
    - match_type: pcre
    - match: '(?i)succe[ss|ed]'
    - username: 'larry'
    - password: '5700g3543v4r'
    - data_render: True
    - header_file: /tmp/headers.txt
    - data_file: /tmp/data.txt
    - header_render: True
    - cookies: True
    - persist_session: True
In addition to, or instead of a match pattern, the status code for a URL can be checked. This is done using the status argument:
http://example.com/:
  http.query:
    - status: '200'
If both are specified, both will be checked, but if only one is True and the other is False, then False will be returned. In this case, the comments in the return data will contain information for troubleshooting.
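For instance, a sketch of a state checking both a pattern and a status code (the URL and pattern are illustrative):

http://example.com/restapi:
  http.query:
    - match: 'SUCCESS'
    - status: '200'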
Because this is a monitoring state, it will return extra data to code that expects it. This data will always include text and status. Optionally, headers and dict may also be requested by setting the headers and decode arguments to True, respectively.
LXC Management with Salt
NOTE: This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the Salt Walkthrough.
Dependencies
Manipulation of LXC containers in Salt requires the minion to have an LXC version of at least 1.0 (an alpha or beta release of LXC 1.0 is acceptable). The following distributions are known to have new enough versions of LXC packaged:
- •
- RHEL/CentOS 6 and later (via EPEL)
- •
- Fedora (All non-EOL releases)
- •
- Debian 8.0 (Jessie)
- •
- Ubuntu 14.04 LTS and later (LXC templates are packaged separately as lxc-templates, it is recommended to also install this package)
- •
- openSUSE 13.2 and later
Profiles
Profiles allow for a sort of shorthand for commonly-used configurations to be defined in the minion config file, grains, pillar, or the master config file. The profile is retrieved by Salt using the config.get function, which looks in those locations, in that order. This allows for profiles to be defined centrally in the master config file, with several options for overriding them (if necessary) on groups of minions or individual minions.
There are two types of profiles:
- •
- One for defining the parameters used in container creation/clone.
- •
- One for defining the container's network interface(s) settings.
Container Profiles
LXC container profiles are defined underneath the lxc.container_profile config option:
lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
  centos_big:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 20G
Profiles are retrieved using the config.get function, with the recurse merge strategy. This means that a profile can be defined at a lower level (for example, the master config file) and then parts of it can be overridden at a higher level (for example, in pillar data). Consider the following container profile data:
In the Master config file:
lxc.container_profile:
  centos:
    template: centos
    backing: lvm
    vgname: vg1
    lvname: lxclv
    size: 10G
In the Pillar data
lxc.container_profile:
  centos:
    size: 20G
Any minion with the above Pillar data would have the size parameter in the centos profile overridden to 20G, while those minions without the above Pillar data would have the 10G size value. This is another way of achieving the same result as the centos_big profile above, without having to define another whole profile that differs in just one value.
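One way to inspect the merged profile that a given minion will actually use is to query it with the config.get function mentioned above (the minion ID is illustrative):

salt myminion config.get lxc.container_profile:centos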
NOTE: In the 2014.7.x release cycle and earlier, container profiles are defined under lxc.profile. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.container_profile, and only in versions 2015.5.0 and later.
Additionally, in version 2015.5.0 container profiles have been expanded to support passing template-specific CLI options to lxc.create. Below is a table describing the parameters which can be configured in container profiles:
Parameter    | 2015.5.0 and Newer | 2014.7.x and Earlier
template [1] | Yes                | Yes
options [1]  | Yes                | No
image [1]    | Yes                | Yes
backing      | Yes                | Yes
snapshot [2] | Yes                | Yes
lvname [1]   | Yes                | Yes
fstype [1]   | Yes                | Yes
size         | Yes                | Yes
- 1.
- Parameter is only supported for container creation, and will be ignored if the profile is used when cloning a container.
- 2.
- Parameter is only supported for container cloning, and will be ignored if the profile is used when not cloning a container.
Network Profiles
LXC network profiles are defined underneath the lxc.network_profile config option. By default, the module uses a DHCP-based configuration and tries to guess a bridge to get connectivity.
WARNING: On versions prior to 2015.5.2, you need to explicitly specify the network bridge.
lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
  ubuntu:
    eth0:
      link: lxcbr0
      type: veth
      flags: up
As with container profiles, network profiles are retrieved using the config.get function, with the recurse merge strategy. Consider the following network profile data:
In the Master config file:
lxc.network_profile:
  centos:
    eth0:
      link: br0
      type: veth
      flags: up
In the Pillar data
lxc.network_profile:
  centos:
    eth0:
      link: lxcbr0
Any minion with the above Pillar data would use the lxcbr0 interface as the bridge interface for any container configured using the centos network profile, while those minions without the above Pillar data would use the br0 interface for the same.
NOTE: In the 2014.7.x release cycle and earlier, network profiles are defined under lxc.nic. This parameter will still work in version 2015.5.0, but is deprecated and will be removed in a future release. Please note however that the profile merging feature described above will only work with profiles defined under lxc.network_profile, and only in versions 2015.5.0 and later.
The following are parameters which can be configured in network profiles. These will directly correspond to a parameter in an LXC configuration file (see man 5 lxc.container.conf).
- •
- type - Corresponds to lxc.network.type
- •
- link - Corresponds to lxc.network.link
- •
- flags - Corresponds to lxc.network.flags
Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a container-by-container basis, for instance using the nic_opts argument to lxc.create:
salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'
WARNING: The ipv4, ipv6, gateway, and link (bridge) settings in network profiles / nic_opts will only work if the container doesn't redefine the network configuration (for example in /etc/sysconfig/network-scripts/ifcfg-<interface_name> on RHEL/CentOS, or /etc/network/interfaces on Debian/Ubuntu/etc.). Use these with caution. The container images installed using the download template, for instance, are typically configured for eth0 to use DHCP, which will conflict with static IP addresses set at the container level.
Old lxc support (<1.0.7)
With Salt 2015.5.2 and above, this setting is normally autoselected. On earlier versions, however, you'll need to teach your network profile to set lxc.network.ipv4.gateway to auto when using a classic IPv4 configuration.
Thus you'll need:
lxc.network_profile.foo:
  eth0:
    link: lxcbr0
    ipv4.gateway: auto
Tricky Network Setup Examples
This example covers how to make a container with both an internal IP and a publicly routable IP, wired on two veth pairs.
The second interface, which directly receives the publicly routable IP, cannot be the first interface, which is reserved for private inter-LXC networking.
lxc.network_profile.foo:
  eth0: {gateway: null, bridge: lxcbr0}
  eth1:
    # replace that by your main interface
    'link': 'br0'
    'mac': '00:16:5b:01:24:e1'
    'gateway': '2.20.9.14'
    'ipv4': '2.20.9.1'
Creating a Container on the CLI
From a Template
LXC is commonly distributed with several template scripts in /usr/share/lxc/templates. Some distros may package these separately in an lxc-templates package, so make sure to check if this is the case.
There are LXC template scripts for several different operating systems, but some of them are designed to use tools specific to a given distribution. For instance, the ubuntu template uses debootstrap, the centos template uses yum, etc., making these templates impractical when a container from a different OS is desired.
The lxc.create function is used to create containers using a template script. To create a CentOS container named container1 on a CentOS minion named mycentosminion, using the centos LXC template, one can simply run the following command:
salt mycentosminion lxc.create container1 template=centos
For these instances, there is a download template which retrieves minimal container images for several different operating systems. To use this template, it is necessary to provide an options parameter when creating the container, with three values:
- 1.
- dist - the Linux distribution (i.e. ubuntu or centos)
- 2.
- release - the release name/version (i.e. trusty or 6)
- 3.
- arch - CPU architecture (i.e. amd64 or i386)
The lxc.images function (new in version 2015.5.0) can be used to list the available images. Alternatively, the releases can be viewed on http://images.linuxcontainers.org/images/. The images are organized in such a way that the dist, release, and arch can be determined using the following URL format: http://images.linuxcontainers.org/images/dist/release/arch. For example, http://images.linuxcontainers.org/images/centos/6/amd64 would correspond to a dist of centos, a release of 6, and an arch of amd64.
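For example, the available images can be listed directly from the minion (the minion ID is illustrative, and the output format will vary):

salt myminion lxc.images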
Therefore, to use the download template to create a new 64-bit CentOS 6 container, the following command can be used:
salt myminion lxc.create container1 template=download options='{dist: centos, release: 6, arch: amd64}'
NOTE: These command-line options can be placed into a container profile, like so:
lxc.container_profile.cent6:
  template: download
  options:
    dist: centos
    release: 6
    arch: amd64
The options parameter is not supported in profiles for the 2014.7.x release cycle and earlier, so it would still need to be provided on the command-line.
Cloning an Existing Container
To clone a container, use the lxc.clone function:
salt myminion lxc.clone container2 orig=container1
Using a Container Image
While cloning is a good way to create new containers from a common base container, the source container that is being cloned needs to already exist on the minion. This makes deploying a common container across minions difficult. For this reason, Salt's lxc.create is capable of installing a container from a tar archive of another container's rootfs. To create an image of a container named cent6, run the following command as root:
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
NOTE: Before doing this, it is recommended that the container is stopped.
The resulting tarball can then be placed alongside the files in the salt fileserver and referenced using a salt:// URL. To create a container using an image, use the image parameter with lxc.create:
salt myminion lxc.create new-cent6 image=salt://path/to/cent6.tar.gz
NOTE: Making images of containers with LVM backing
For containers with LVM backing, the rootfs is not mounted, so it is necessary to mount it first before creating the tar archive. When a container is created using LVM backing, an empty rootfs dir is handily created within /var/lib/lxc/container_name, so this can be used as the mountpoint. The location of the logical volume for the container will be /dev/vgname/lvname, where vgname is the name of the volume group, and lvname is the name of the logical volume. Therefore, assuming a volume group of vg1, a logical volume of lxc-cent6, and a container name of cent6, the following commands can be used to create a tar archive of the rootfs:
mount /dev/vg1/lxc-cent6 /var/lib/lxc/cent6/rootfs
tar czf cent6.tar.gz -C /var/lib/lxc/cent6 rootfs
umount /var/lib/lxc/cent6/rootfs
WARNING: One caveat of using this method of container creation is that /etc/hosts is left unmodified. This could cause confusion for some distros if salt-minion is later installed on the container, as the functions that determine the hostname take /etc/hosts into account.
Additionally, when creating a rootfs image, be sure to remove /etc/salt/minion_id and make sure that id is not defined in /etc/salt/minion, as this will cause similar issues.
Initializing a New Container as a Salt Minion
The above examples illustrate a few ways to create containers on the CLI, but often it is desirable to also have the new container run as a Minion. To do this, the lxc.init function can be used. This function will do the following:
- 1.
- Create a new container
- 2.
- Optionally set password and/or DNS
- 3.
- Bootstrap the minion (using either salt-bootstrap or a custom command)
By default, the new container will be pointed at the same Salt Master as the host machine on which the container was created. It will then request to authenticate with the Master like any other bootstrapped Minion, at which point it can be accepted.
salt myminion lxc.init test1 profile=centos
salt-key -a test1
For even greater convenience, the LXC runner contains a runner function of the same name (lxc.init), which creates a keypair, seeds the new minion with it, and pre-accepts the key, allowing for the new Minion to be created and authorized in a single step:
salt-run lxc.init test1 host=myminion profile=centos
Running Commands Within a Container
For containers which are not running their own Minion, commands can be run within the container in a manner similar to using cmd.run. The means of doing this have changed significantly in version 2015.5.0 (though the deprecated behavior will still be supported for a few releases). Both the old and new usage are documented below.
2015.5.0 and Newer
New functions have been added to mimic the behavior of the functions in the cmd module. Below is a table with the cmd functions and their lxc module equivalents:
Description                             | cmd module     | lxc module
Run a command and get all output        | cmd.run        | lxc.run
Run a command and get just stdout       | cmd.run_stdout | lxc.run_stdout
Run a command and get just stderr       | cmd.run_stderr | lxc.run_stderr
Run a command and get just the retcode  | cmd.retcode    | lxc.retcode
Run a command and get all information   | cmd.run_all    | lxc.run_all
2014.7.x and Earlier
Earlier Salt releases use a single function (lxc.run_cmd) to run commands within containers. Whether stdout, stderr, etc. are returned depends on how the function is invoked.
To run a command and return the stdout:
salt myminion lxc.run_cmd web1 'tail /var/log/messages'
To run a command and return the stderr:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=True
To run a command and return the retcode:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=False stderr=False
To run a command and return all information:
salt myminion lxc.run_cmd web1 'tail /var/log/messages' stdout=True stderr=True
Container Management Using salt-cloud
Under the hood, Salt Cloud uses the Salt LXC runner and execution module to manage containers. Please see the Salt Cloud LXC chapter for details.
Container Management Using States
Several states are being renamed or otherwise modified in version 2015.5.0. The information in this tutorial refers to the new states. For 2014.7.x and earlier, please refer to the documentation for the LXC states.
Ensuring a Container Is Present
To ensure the existence of a named container, use the lxc.present state. Here are some examples:
# Using a template
web1:
  lxc.present:
    - template: download
    - options:
        dist: centos
        release: 6
        arch: amd64

# Cloning
web2:
  lxc.present:
    - clone_from: web-base

# Using a rootfs image
web3:
  lxc.present:
    - image: salt://path/to/cent6.tar.gz

# Using profiles
web4:
  lxc.present:
    - profile: centos_web
    - network_profile: centos
WARNING: The lxc.present state will not modify an existing container (in other words, it will not re-create the container). If an lxc.present state is run on an existing container, there will be no change and the state will return a True result.
The lxc.present state also includes an optional running parameter which can be used to ensure that a container is running/stopped. Note that there are standalone lxc.running and lxc.stopped states which can be used for this purpose.
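For example, a minimal sketch using the running parameter (the container name and profile are illustrative):

web1:
  lxc.present:
    - profile: centos_web
    - running: True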
Ensuring a Container Does Not Exist
To ensure that a named container is not present, use the lxc.absent state. For example:
web1:
  lxc.absent
Ensuring a Container is Running/Stopped/Frozen
Containers can be in one of three states:
- •
- running - Container is running and active
- •
- frozen - Container is running, but all processes are blocked and the container is essentially non-active until it is "unfrozen"
- •
-
stopped - Container is not running
Salt has three states (lxc.running, lxc.frozen, and lxc.stopped) which can be used to ensure a container is in one of these states:
web1:
  lxc.running

# Restart the container if it was already running
web2:
  lxc.running:
    - restart: True

web3:
  lxc.stopped

# Explicitly kill all tasks in container instead of gracefully stopping
web4:
  lxc.stopped:
    - kill: True

web5:
  lxc.frozen

# If container is stopped, do not start it (in which case the state will fail)
web6:
  lxc.frozen:
    - start: False
Salt Virt
Salt as a Cloud Controller
In Salt 0.14.0, an advanced cloud control system was introduced, allowing private cloud VMs to be managed directly with Salt. This system is generally referred to as Salt Virt.
The Salt Virt system is installed within Salt itself; this means that besides setting up Salt, no additional Salt code needs to be deployed.
The main goal of Salt Virt is to facilitate a very fast and simple cloud that can scale and is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, and virtual machine migration with and without shared storage.
This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also stand up the power of specialized hardware as well.
Setting up Hypervisors
The first step to set up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces.
Installing Hypervisor Software
Salt Virt is made to be hypervisor agnostic but currently the only fully implemented hypervisor is KVM via libvirt.
The required software for a hypervisor is libvirt and kvm. For advanced features install libguestfs or qemu-nbd.
NOTE: Libguestfs and qemu-nbd allow virtual machine images to be mounted before startup and pre-seeded with configurations and a Salt Minion.
This sls will set up the needed software for a hypervisor, and run the routines to set up the libvirt pki keys.
NOTE: The package names and setup used here are Red Hat specific; different package names will be required for different platforms.
libvirt:
  pkg.installed: []
  file.managed:
    - name: /etc/sysconfig/libvirtd
    - contents: 'LIBVIRTD_ARGS="--listen"'
    - require:
      - pkg: libvirt
  libvirt.keys:
    - require:
      - pkg: libvirt
  service.running:
    - name: libvirtd
    - require:
      - pkg: libvirt
      - network: br0
      - libvirt: libvirt
    - watch:
      - file: libvirt

libvirt-python:
  pkg.installed: []

libguestfs:
  pkg.installed:
    - pkgs:
      - libguestfs
      - libguestfs-tools
Hypervisor Network Setup
The hypervisors will need to be running a network bridge to serve up network devices for virtual machines. This formula will set up a standard bridge on a hypervisor, connecting the bridge to eth0:
eth0:
  network.managed:
    - enabled: True
    - type: eth
    - bridge: br0

br0:
  network.managed:
    - enabled: True
    - type: bridge
    - proto: dhcp
    - require:
      - network: eth0
Virtual Machine Network Setup
Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device.
NOTE: To use more advanced networking in Salt Virt, read the Salt Virt Networking document.
Libvirt State
One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority and uses pillar to distribute them. This is managed via the libvirt state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date:
NOTE: The above formula includes the calls needed to set up libvirt keys.
libvirt_keys:
  libvirt.keys
Getting Virtual Machine Images Ready
Salt Virt requires that virtual machine images be provided, as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform.
Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors.
Virtual Machine generation applications are available for many platforms:
- vm-builder:
-
https://wiki.debian.org/VMBuilder
SEE ALSO: vmbuilder-formula
Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into /srv/salt and it can now be used by Salt Virt.
For purposes of this demo, the file name centos.img will be used.
Existing Virtual Machine Images
Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK.
CentOS
These images have been prepared for OpenNebula but should work without issue with Salt Virt, only the raw qcow image file is needed: http://wiki.centos.org/Cloud/OpenNebula
Fedora Linux
Images for Fedora Linux can be found here: http://fedoraproject.org/en/get-fedora#clouds
Ubuntu Linux
Images for Ubuntu Linux can be found here: http://cloud-images.ubuntu.com/
Using Salt Virt
With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands.
Start by running a Salt Virt hypervisor info command:
salt-run virt.hyper_info
This will query the running hypervisor stats and display information for all configured hypervisors. This command will also validate that the hypervisors are properly configured.
Now that hypervisors are available a virtual machine can be provisioned. The virt.init routine will create a new virtual machine:
salt-run virt.init centos1 2 512 salt://centos.img
This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor.
Once the VM image has been copied down, the new virtual machine will be seeded. Seeding the VM involves setting pre-authenticated Salt keys on the new VM and, if needed, installing the Salt Minion on the new VM before it is started.
NOTE: The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment.
Now that the new VM has been prepared, it can be seen via the virt.query command:
salt-run virt.query
This command will return data about all of the hypervisors and respective virtual machines.
Now that the new VM is booted, it should have contacted the Salt Master. A test.ping will reveal whether the new VM is running.
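For example, assuming the VM was named centos1 as in the virt.init call above:

salt 'centos1' test.ping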
Migrating Virtual Machines
Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible.
A few things need to be available to support migration. Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow libvirt and kvm to cross communicate and execute migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened on the hypervisors:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16514 -j ACCEPT
NOTE: More in-depth information regarding distribution specific firewall settings can be found in:
Opening the Firewall up for Salt
Salt also needs the virt.tunnel option to be turned on. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without virt.tunnel, libvirt tries to bind to random ports when running migrations. To turn on virt.tunnel, simply add it to the master config file:
virt.tunnel: True
Once the master config has been updated, restart the master and send out a call to the minions to refresh the pillar to pick up on the change:
salt \* saltutil.refresh_modules
Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine:
salt-run virt.migrate centos <new hypervisor>
VNC Consoles
Salt Virt also sets up VNC consoles by default, allowing remote visual consoles to be opened up. The information from a virt.query routine will display the VNC console port for each virtual machine:
centos
  CPU: 2
  Memory: 524288
  State: running
  Graphics: vnc - hyper6:5900
  Disk - vda:
    Size: 2.0G
    File: /srv/salt-images/ubuntu2/system.qcow2
    File Format: qcow2
  Nic - ac:de:48:98:08:77:
    Source: br0
    Type: bridge
The line Graphics: vnc - hyper6:5900 holds the key. First, the named port (in this case 5900) will need to be available in the hypervisor's firewall. Once the port is open, the console can be easily opened via vncviewer:
vncviewer hyper6:5900
By default there is no VNC security set up on these ports, so it is suggested to keep them firewalled and to mandate that SSH tunnels be used to access these VNC interfaces. Keep in mind that activity on a VNC interface can be viewed by any other user that accesses that same VNC interface, and any other user logging in can also operate alongside the logged-in user on the virtual machine.
Conclusion
Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt.
Halite
Installing and Configuring Halite
In this tutorial, we'll walk through installing and setting up Halite. The current version of Halite is considered pre-alpha and is supported only in Salt v2014.1.0 or greater. Additional information is available on GitHub: https://github.com/saltstack/halite
Before beginning this tutorial, ensure that the salt-master is installed. To install the salt-master, please review the installation documentation: http://docs.saltstack.com/topics/installation/index.html
NOTE: Halite only works with Salt versions greater than 2014.1.0.
Installing Halite Via Package
On CentOS, RHEL, or Fedora:
$ yum install python-halite
NOTE: By default, python-halite installs only CherryPy. If you would like to use a different webserver, please review the instructions below to install pip and your server of choice. The package does not modify the master configuration file /etc/salt/master.
Installing Halite Using pip
To begin the installation of Halite from PyPI, you'll need to install pip. The Salt package, as well as the bootstrap, do not install pip by default.
On CentOS, RHEL, or Fedora:
$ yum install python-pip
On Debian:
$ apt-get install python-pip
Once you have pip installed, use it to install halite:
$ pip install -U halite
Depending on the webserver you want to run halite through, you'll need to install that piece as well. On RHEL based distros, use one of the following:
$ pip install cherrypy
$ pip install paste
$ yum install python-devel
$ yum install gcc
$ pip install gevent
$ pip install pyopenssl
On Debian based distributions:
$ pip install CherryPy
$ pip install paste
$ apt-get install gcc
$ apt-get install python-dev
$ apt-get install libevent-dev
$ pip install gevent
$ pip install pyopenssl
Configuring Halite Permissions
Configuring Halite access permissions is easy. By default, you only need to ensure that the @runner group is configured. In the /etc/salt/master file, uncomment and modify the following lines:
external_auth:
  pam:
    testuser:
      - .*
      - '@runner'
NOTE: You cannot use the root user for pam login; it will fail to authenticate.
Halite uses the runner manage.present to get the status of minions, so runner permissions are required. For example:
external_auth:
  pam:
    mytestuser:
      - .*
      - '@runner'
      - '@wheel'
Currently Halite allows, but does not require, any wheel modules.
Configuring Halite Settings
Once you've configured the permissions for Halite, you'll need to set up the Halite settings in the /etc/salt/master file. Halite supports CherryPy, Paste, and Gevent out of the box.
To configure cherrypy, add the following to the bottom of your /etc/salt/master file:
halite:
  level: 'debug'
  server: 'cherrypy'
  host: '0.0.0.0'
  port: '8080'
  cors: False
  tls: True
  certpath: '/etc/pki/tls/certs/localhost.crt'
  keypath: '/etc/pki/tls/certs/localhost.key'
  pempath: '/etc/pki/tls/certs/localhost.pem'
If you wish to use paste:
halite:
  level: 'debug'
  server: 'paste'
  host: '0.0.0.0'
  port: '8080'
  cors: False
  tls: True
  certpath: '/etc/pki/tls/certs/localhost.crt'
  keypath: '/etc/pki/tls/certs/localhost.key'
  pempath: '/etc/pki/tls/certs/localhost.pem'
To use gevent:
halite:
  level: 'debug'
  server: 'gevent'
  host: '0.0.0.0'
  port: '8080'
  cors: False
  tls: True
  certpath: '/etc/pki/tls/certs/localhost.crt'
  keypath: '/etc/pki/tls/certs/localhost.key'
  pempath: '/etc/pki/tls/certs/localhost.pem'
The "cherrypy" and "gevent" servers require the certpath and keypath files to run TLS/SSL. The .crt file holds the public cert and the .key file holds the private key. The "paste" server, on the other hand, requires a single .pem file that contains both the cert and key. This can be created simply by concatenating the .crt and .key files.
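For example, a minimal sketch of building the .pem file for the "paste" server by concatenating the cert and key (paths match the example configurations above):

$ cat /etc/pki/tls/certs/localhost.crt /etc/pki/tls/certs/localhost.key > /etc/pki/tls/certs/localhost.pem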
If you want to use a self-signed cert, you can create one using the Salt tls module:
NOTE: The following command needs to be run on your salt master.
salt-call tls.create_self_signed_cert tls
Note that certs generated by the above command can be found under the /etc/pki/tls/certs/ directory. When using self-signed certs, browsers will need approval before accepting the cert. If the web application page has been cached with a non-HTTPS version of the app, then the browser cache will have to be cleared before it will recognize and prompt to accept the self-signed certificate.
Starting Halite
Once you've configured the halite section of your /etc/salt/master, you can restart the salt-master service, and your halite instance will be available. Depending on your configuration, the instance will be available either at https://localhost:8080/app, https://domain:8080/app, or https://123.456.789.012:8080/app .
NOTE: halite requires an HTML 5 compliant browser.
All logs relating to halite are logged to the default /var/log/salt/master file.
Using Salt at scale
The focus of this tutorial will be building a Salt infrastructure for handling large numbers of minions. This will include tuning, topology, and best practices.
For how to install the Salt Master please go here: Installing saltstack
NOTE: This tutorial is intended for large installations. Although these same settings won't hurt smaller installations, they may not be worth the added complexity there.
When used with minions, the term 'many' refers to at least a thousand and 'a few' always means 500.
For simplicity reasons, this tutorial will default to the standard ports used by Salt.
The Master
The most common problems on the Salt Master are:
- 1.
- too many minions authing at once
- 2.
- too many minions re-authing at once
- 3.
- too many minions re-connecting at once
- 4.
- too many minions returning at once
- 5.
-
too few resources (CPU/HDD)
The first three are all "thundering herd" problems. To mitigate these issues we must configure the minions to back-off appropriately when the Master is under heavy load.
The fourth is caused by masters with too few hardware resources in combination with a possible bug in ZeroMQ; at least that's what it looks like so far (Issue 118651, Issue 5948, Mail thread).
To fully understand each problem, it is important to understand how Salt works.
Very briefly, the Salt Master offers two services to the minions.
- •
- a job publisher on port 4505
- •
-
an open port 4506 to receive the minions returns
All minions are always connected to the publisher on port 4505 and only connect to the open return port 4506 if necessary. On an idle Master, there will only be connections on port 4505.
Too many minions authing
When the Minion service is first started up, it will connect to its Master's publisher on port 4505. If too many minions are started at once, this can cause a "thundering herd". This can be avoided by not starting too many minions at once.
The connection itself usually isn't the culprit, the more likely cause of master-side issues is the authentication that the Minion must do with the Master. If the Master is too heavily loaded to handle the auth request it will time it out. The Minion will then wait acceptance_wait_time to retry. If acceptance_wait_time_max is set then the Minion will increase its wait time by the acceptance_wait_time each subsequent retry until reaching acceptance_wait_time_max.
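As a sketch, these back-off settings live in the minion configuration file; the values below are illustrative, not recommendations:

# /etc/salt/minion
acceptance_wait_time: 10       # seconds to wait before retrying a timed-out auth
acceptance_wait_time_max: 60   # cap for the growing wait time between retries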
Too many minions re-authing
This is most likely to happen in the testing phase of a Salt deployment, when all Minion keys have already been accepted, but the framework is being tested and parameters are frequently changed in the Salt Master's configuration file(s).
The Salt Master generates a new AES key to encrypt its publications at certain events such as a Master restart or the removal of a Minion key. If you are encountering this problem of too many minions re-authing against the Master, you will need to recalibrate your setup to reduce the rate of events like a Master restart or Minion key removal (salt-key -d).
When the Master generates a new AES key, the minions aren't notified of this but will discover it on the next pub job they receive. When the Minion receives such a job it will then re-auth with the Master. Since Salt does minion-side filtering, this means that all the minions will re-auth on the next command published on the master, causing another "thundering herd". This can be avoided by setting the
random_reauth_delay: 60
option in the minion's configuration file to a higher value, which staggers the re-auth attempts. Increasing this value will of course increase the time it takes until all minions are reachable via Salt commands.
Too many minions re-connecting
By default the zmq socket will re-connect every 100ms, which for some larger installations may be too quick. This controls how quickly the TCP session is re-established, but has no bearing on the auth load.
To tune the minion sockets' reconnect attempts, there are a few values in the sample configuration file (shown here with their default values):
recon_default: 100ms
recon_max: 5000
recon_randomize: True
- •
- recon_default: the default value the socket should use, i.e. 100ms
- •
- recon_max: the max value that the socket should use as a delay before trying to reconnect
- •
-
recon_randomize: enables randomization between recon_default and recon_max
To tune these values for an existing environment, a few decisions have to be made.
- 1.
- How long can one wait, before the minions should be online and reachable via Salt?
- 2.
-
How many reconnects can the Master handle without a syn flood?
These questions can not be answered generally. Their answers depend on the hardware and the administrator's requirements.
Here is an example scenario with the goal of having all minions reconnect within a 60 second time-frame on a Salt Master service restart.
recon_default: 1000
recon_max: 59000
recon_randomize: True
Each Minion will have a randomized reconnect value between 'recon_default' and 'recon_default + recon_max', which in this example means between 1000ms and 60000ms (or between 1 and 60 seconds). The generated random-value will be doubled after each attempt to reconnect (ZeroMQ default behavior).
Let's say the generated random value is 11 seconds (or 11000ms).
reconnect 1: wait 11 seconds
reconnect 2: wait 22 seconds
reconnect 3: wait 33 seconds
reconnect 4: wait 44 seconds
reconnect 5: wait 55 seconds
reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
reconnect 7: wait 11 seconds
reconnect 8: wait 22 seconds
reconnect 9: wait 33 seconds
reconnect x: etc.
With a thousand minions this will mean

1000/60 = ~16

roughly 16 connection attempts a second. These values should be altered to match your environment. Keep in mind, though, that the environment may grow over time and that more minions might raise the problem again.
Too many minions returning at once
This can also happen during the testing phase, if all minions are addressed at once with
$ salt * test.ping
it may cause thousands of minions to try to return their data to the Salt Master's open port 4506 at once, causing a syn flood if the Master can't handle that many returns simultaneously.
This can be easily avoided with Salt's batch mode:
$ salt * test.ping -b 50
This will only address 50 minions at once while looping through all addressed minions.
Too few resources
The Master's resources always have to match the environment. There is no way to give good advice without knowing the environment the Master is supposed to run in, but here are some general tuning tips for different situations:
The Master is CPU bound
Salt uses RSA key pairs on both the master's and the minions' ends. Both generate 4096 bit key pairs on first start. While the key size for the Master is currently not configurable, the minion key size can be configured. For example, with a 2048 bit key:
keysize: 2048
With thousands of decryptions, the amount of time that can be saved on the Master's end should not be neglected. See Pull Request 9235 for reference on how much influence the key size can have.
Downsizing the Salt Master's key is not that important, because the minions do not encrypt as many messages as the Master does.
The Master is disk IO bound
By default, the Master saves every Minion's return for every job in its job-cache. The cache can then be used later, to lookup results for previous jobs. The default directory for this is:
cachedir: /var/cache/salt
and the job data is then stored in the proc directory beneath it.
Each job return for every Minion is saved in a single file. Over time this directory can grow quite large, depending on the number of published jobs. The amount of files and directories will scale with the number of jobs published and the retention time defined by
keep_jobs: 24
250 jobs/day * 2000 minion returns = 500,000 files a day
If no job history is needed, the job cache can be disabled:
job_cache: False
If the job cache is necessary there are (currently) 2 options:
- •
- ext_job_cache: this will have the minions store their return data directly into a returner (not sent through the Master)
- •
- master_job_cache (New in 2014.7.0): this will make the Master store the job data using a returner (instead of the local job cache on disk).
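Either option is set in the master configuration file. As a sketch, the returner name and its connection settings below (a Redis returner) are illustrative assumptions, not requirements:

# /etc/salt/master -- store job data via a returner instead of the local disk cache
master_job_cache: redis
redis.db: '0'
redis.host: salt-returner.example.com
redis.port: 6379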
TARGETING MINIONS
Targeting minions is specifying which minions should run a command or execute a state by matching against hostnames, or system information, or defined groups, or even combinations thereof.
For example the command salt web1 apache.signal restart to restart the Apache httpd server specifies the machine web1 as the target and the command will only be run on that one minion.
Similarly when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls:
base:
  'web1':
    - webserver
There are many ways to target individual minions or groups of minions in Salt:
Matching the minion id
Each minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion's id configuration setting.
TIP: minion id and minion keys
The minion id is used to generate the minion's public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host.
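As mentioned above, the id setting in the minion configuration file overrides the FQDN-derived default; a minimal sketch (the hostname is illustrative):

# /etc/salt/minion
id: web1.example.net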
Globbing
The default matching that Salt utilizes is shell-style globbing around the minion id. This also works for states in the top file.
NOTE: You must wrap salt calls that use globbing in single-quotes to prevent the shell from expanding the globs before Salt is invoked.
Match all minions:
salt '*' test.ping
Match all minions in the example.net domain or any of the example domains:
salt '*.example.net' test.ping salt '*.example.*' test.ping
Match all the webN minions in the example.net domain (web1.example.net, web2.example.net … webN.example.net):
salt 'web?.example.net' test.ping
Match the web1 through web5 minions:
salt 'web[1-5]' test.ping
Match the web1 and web3 minions:
salt 'web[1,3]' test.ping
Match the web-x, web-y, and web-z minions:
salt 'web-[x-z]' test.ping
NOTE: For additional targeting methods please review the compound matchers documentation.
Regular Expressions
Minions can be matched using Perl-compatible regular expressions (which is globbing on steroids and a ton of caffeine).
Match both web1-prod and web1-devel minions:
salt -E 'web1-(prod|devel)' test.ping
When using regular expressions in a State's top file, you must specify the matcher as the first option. The following example executes the contents of webserver.sls on the above-mentioned minions.
base:
  'web1-(prod|devel)':
    - match: pcre
    - webserver
Lists
At the most basic level, you can specify a flat list of minion IDs:
salt -L 'web1,web2,web3' test.ping
Grains
Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties.
The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems.
Grain data is relatively static, though if system information changes (for example, if network settings are changed), or if a new value is assigned to a custom grain, grain data is refreshed.
NOTE: Grains resolve to lowercase letters. For example, FOO and foo target the same grain.
IMPORTANT: See Is Targeting using Grain Data Secure? for important security information.
Match all CentOS minions:
salt -G 'os:CentOS' test.ping
Match all minions with 64-bit CPUs, and return number of CPU cores for each matching minion:
salt -G 'cpuarch:x86_64' grains.item num_cpus
Additionally, globs can be used in grain matches, and grains that are nested in a dictionary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dict with a key named environment, which has a value that contains the word production:
salt -G 'ec2_tags:environment:*production*'
Listing Grains
Available grains can be listed by using the 'grains.ls' module:
salt '*' grains.ls
Grains data can be listed by using the 'grains.items' module:
salt '*' grains.items
Grains in the Minion Config
Grains can also be statically assigned within the minion configuration file. Just add the option grains and pass options to it:
grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15
Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes it possible to target minions based on specific data about your deployment, as in the example above.
Grains in /etc/salt/grains
If you do not want to place your custom static grains in the minion config file, you can also put them in /etc/salt/grains on the minion. They are configured in the same way as in the above example, only without a top-level grains: key:
roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
Matching Grains in the Top File
With correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration:
'node_type:web':
  - match: grain
  - webserver

'node_type:postgres':
  - match: grain
  - database

'node_type:redis':
  - match: grain
  - redis

'node_type:lb':
  - match: grain
  - lb
For this example to work, you would need to have defined the grain node_type for the minions you wish to match. This simple example is nice, but too much of the code is similar. To go one step further, Jinja templating can be used to simplify the top file.
{% set node_type = salt['grains.get']('node_type', '') %}
{% if node_type %}
'node_type:{{ node_type }}':
  - match: grain
  - {{ node_type }}
{% endif %}
Using Jinja templating, only one match entry needs to be defined.
NOTE: The example above uses the grains.get function to account for minions which do not have the node_type grain set.
Writing Grains
The grains interface is derived by executing all of the "public" functions found in the modules located in the grains package or the custom grains directory. The functions in the modules of the grains must return a Python dict, where the keys in the dict are the names of the grains and the values are the values.
Custom grains should be placed in a _grains directory located under the file_roots specified by the master config file. The default path would be /srv/salt/_grains. Custom grains will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions.
Grains are easy to write, and only need to return a dictionary. A common approach would be code something similar to the following:
#!/usr/bin/env python
def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # Some code for logic that sets grains like
    grains['yourcustomgrain'] = True
    grains['anothergrain'] = 'somevalue'
    return grains
Before adding a grain to Salt, consider what the grain is and remember that grains need to be static data. If the data is something that is likely to change, consider using Pillar instead.
WARNING: Custom grains will not be available in the top file until after the first highstate. To make custom grains available on a minion's first highstate, it is recommended to use this example to ensure that the custom grains are synced when the minion starts.
Precedence
Core grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows:
- 1.
- Core grains.
- 2.
- Custom grains in /etc/salt/grains.
- 3.
- Custom grains in /etc/salt/minion.
- 4.
-
Custom grain modules in _grains directory, synced to minions.
Each successive evaluation overrides the previous ones, so any grains defined by custom grains modules synced to minions that have the same name as a core grain will override that core grain. Similarly, grains from /etc/salt/minion override both core grains and custom grain modules, and grains in _grains will override any grains of the same name.
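As a sketch of this ordering, suppose the same grain is defined in two of these places (the grain name and values are illustrative):

# /etc/salt/grains
datacenter: dc1

# /etc/salt/minion
grains:
  datacenter: dc2

Following the evaluation order above, the minion config is evaluated after /etc/salt/grains, so the datacenter grain would resolve to dc2.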
Examples of Grains
The core module in the grains package is where the main grains are loaded by the Salt minion and provides the principal example of how to write grains:
https://github.com/saltstack/salt/blob/develop/salt/grains/core.py
Syncing Grains
Syncing grains can be done a number of ways. They are automatically synced when state.highstate is called, or (as noted above) the grains can be manually synced and reloaded by calling the saltutil.sync_grains or saltutil.sync_all functions.
Targeting with Pillar
Pillar data can be used when targeting minions. This allows for ultimate control and flexibility when targeting minions.
salt -I 'somekey:specialvalue' test.ping
Like with Grains, it is possible to use globbing as well as match nested values in Pillar, by adding colons for each level that is being traversed. The below example would match minions with a pillar named foo, which is a dict containing a key bar, with a value beginning with baz:
salt -I 'foo:bar:baz*' test.ping
Subnet/IP Address Matching
Minions can easily be matched based on IP address, or by subnet (using CIDR notation).
salt -S 192.168.40.20 test.ping salt -S 10.0.0.0/24 test.ping
NOTE: Only IPv4 matching is supported at this time.
Compound matchers
Compound matchers allow very granular minion targeting using any of Salt's matchers. The default matcher is a glob match, just as with CLI and top file matching. To match using anything other than a glob, prefix the match string with the appropriate letter from the table below, followed by an @ sign.
Letter | Match Type        | Example
G      | Grains glob       | G@os:Ubuntu
E      | PCRE Minion ID    | E@web\d+\.(dev|qa|prod)\.loc
P      | Grains PCRE       | P@os:(RedHat|Fedora|CentOS)
L      | List of minions   | L@minion1.example.com,minion3.domain.com or bl*.domain.com
I      | Pillar glob       | I@pdata:foobar
S      | Subnet/IP address | S@192.168.1.0/24 or S@192.168.1.100
R      | Range cluster     | R@%foo.bar
Matchers can be joined using boolean and, or, and not operators.
For example, the following string matches all Debian minions with a hostname that begins with webserv, as well as any minions that have a hostname which matches the regular expression web-dc1-srv.*:
salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping
That same example expressed in a top file looks like the following:
base:
  'webserv* and G@os:Debian or E@web-dc1-srv.*':
    - match: compound
    - webserver
Note that a leading not is not supported in compound matches. Instead, something like the following must be done:
salt -C '* and not G@kernel:Darwin' test.ping
Excluding a minion based on its ID is also possible:
salt -C '* and not web-dc1-srv' test.ping
Precedence Matching
Matches can be grouped together with parentheses to explicitly declare precedence amongst groups.
salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping
NOTE: Be certain to note that spaces are required between the parentheses and targets. Failing to obey this rule may result in incorrect targeting!
Node groups
Nodegroups are declared using a compound target specification. The compound target documentation can be found here.
The nodegroups master config file parameter is used to define nodegroups. Here's an example nodegroup configuration within /etc/salt/master:
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'
NOTE: The L within group1 is matching a list of minions, while the G in group2 is matching specific grains. See the compound matchers documentation for more details.
NOTE: Nodegroups can reference other nodegroups as seen in group3. Ensure that you do not have circular references. Circular references will be detected and cause partial expansion with a logged error message.
To match a nodegroup on the CLI, use the -N command-line option:
salt -N group1 test.ping
To match a nodegroup in your top file, make sure to put - match: nodegroup on the line directly following the nodegroup name.
base:
  group1:
    - match: nodegroup
    - webserver
NOTE: When adding or modifying nodegroups to a master configuration file, the master must be restarted for those changes to be fully recognized.
A limited amount of functionality, such as targeting with -N from the command-line may be available without a restart.
Batch Size
The -b (or --batch-size) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported.
salt '*' -b 10 test.ping salt -G 'os:RedHat' --batch-size 25% apache.signal restart
This will only run test.ping on 10 of the targeted minions at a time and then restart apache on 25% of the minions matching os:RedHat at a time and work through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer or doing maintenance on BSD firewalls using carp much easier with salt.
The batch system maintains a window of running minions, so, if there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions, when one minion returns then the command is sent to one additional minion, so that the job is constantly running on 10 minions.
SECO Range
SECO range is a cluster-based metadata store developed and maintained by Yahoo!
The Range project is hosted here:
https://github.com/ytoolshed/range
Learn more about range here:
https://github.com/ytoolshed/range/wiki/
Prerequisites
To utilize range support in Salt, a range server is required. Setting up a range server is outside the scope of this document. Apache modules are included in the range distribution.
With a working range server, cluster files must be defined. These files are written in YAML and define hosts contained inside a cluster. Full documentation on writing YAML range files is here:
https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
Additionally, the Python seco range libraries must be installed on the salt master. One can verify that they have been installed correctly via the following command:
python -c 'import seco.range'
If no errors are returned, range is installed successfully on the salt master.
Preparing Salt
Range support must be enabled on the salt master by setting the hostname and port of the range server inside the master configuration file:
range_server: my.range.server.com:80
Following this, the master must be restarted for the change to have an effect.
Targeting with Range
Once a cluster has been defined, it can be targeted with a salt command by using the -R or --range flags.
For example, given the following range YAML file being served from a range server:
$ cat /etc/range/test.yaml
CLUSTER: host1..100.test.com
APPS:
  - frontend
  - backend
  - mysql
One might target host1 through host100 in the test.com domain with Salt as follows:
salt --range %test:CLUSTER test.ping
The following salt command would target three hosts: frontend, backend, and mysql:
salt --range %test:APPS test.ping
STORING STATIC DATA IN THE PILLAR
Pillar is an interface for Salt designed to offer global values that can be distributed to all minions. Pillar data is managed in a similar way as the Salt State Tree.
Pillar was added to Salt in version 0.9.8
NOTE: Storing sensitive data
Unlike state tree, pillar data is only available for the targeted minion specified by the matcher type. This makes it useful for storing sensitive data specific to a particular minion.
Declaring the Master Pillar
The Salt Master server maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server. Like the Salt file server the pillar_roots option in the master config is based on environments mapping to directories. The pillar data is then mapped to minions based on matchers in a top file which is laid out in the same way as the state top file. Salt pillars can use the same matcher types as the standard top file.
The configuration for the pillar_roots in the master config file is identical in behavior and function as file_roots:
pillar_roots:
  base:
    - /srv/pillar
This example configuration declares that the base environment will be located in the /srv/pillar directory. It must not be in a subdirectory of the state tree.
The top file used matches the name of the top file used for States, and has the same structure:
/srv/pillar/top.sls
base:
  '*':
    - packages
In the above top file, it is declared that in the base environment, the glob matching all minions will have the pillar data found in the packages pillar available to it. Assuming the pillar_roots value of /srv/pillar taken from above, the packages pillar would be located at /srv/pillar/packages.sls.
Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties.
Here is an example using the grains matcher to target pillars to minions by their os grain:
dev:
  'os:Debian':
    - match: grain
    - servers
/srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}

company: Foo Industries
IMPORTANT: See Is Targeting using Grain Data Secure? for important security information.
The above pillar sets two key/value pairs. If a minion is running RedHat, then the apache key is set to httpd and the git key is set to the value of git. If the minion is running Debian, those values are changed to apache2 and git-core respectively. All minions that have this pillar targeted to them via a top file will have the key company with a value of Foo Industries.
Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dict:
apache:
  pkg.installed:
    - name: {{ pillar['apache'] }}
git:
  pkg.installed:
    - name: {{ pillar['git'] }}
Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary.
Note that you cannot just list key/value-information in top.sls. Instead, target a minion to a pillar file and then list the keys and values in the pillar. Here is an example top file that illustrates this point:
base:
  '*':
    - common_pillar
And the actual pillar file at '/srv/pillar/common_pillar.sls':
foo: bar
boo: baz
Pillar namespace flattened
The separate pillar files all share the same namespace. Given a top.sls of:
base:
  '*':
    - packages
    - services
a packages.sls file of:
bind: bind9
and a services.sls file of:
bind: named
Then a request for the bind pillar will only return named; the bind9 value is not available. It is better to structure your pillar files with more hierarchy. For example, your packages.sls file could look like:
packages:
  bind: bind9
Pillar Namespace Merges
With some care, the pillar namespace can merge content from multiple pillar files under a single key, so long as conflicts are avoided as described above.
For example, if the above example were modified as follows, the values are merged below a single key:
base:
  '*':
    - packages
    - services
And a packages.sls file like:
bind:
  package-name: bind9
  version: 9.9.5
And a services.sls file like:
bind:
  port: 53
  listen-on: any
The resulting pillar will be as follows:
$ salt-call pillar.get bind
local:
    ----------
    listen-on:
        any
    package-name:
        bind9
    port:
        53
    version:
        9.9.5
NOTE: Remember: conflicting keys will be overwritten in a non-deterministic manner!
Including Other Pillars
New in version 0.16.0.
Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form simply includes the additional pillar as if it were part of the same file:
include:
  - users
The full include form allows two additional options -- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar:
include:
  - users:
      defaults:
          sudo: ['bob', 'paul']
      key: users
With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls.
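As a sketch, the included users.sls might then use the passed-in sudo variable like this (the file contents are illustrative, not part of the original example):

{# /srv/pillar/users.sls #}
{# 'sudo' is available here as a template variable via the include defaults above #}
sudo_users: {{ sudo }}
shell: /bin/bash

Because key: users was specified, these values end up nested under the users key of the compiled pillar (for example, users:sudo_users).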
Viewing Minion Pillar
Once the pillar is set up, the data can be viewed on the minion via the pillar module. The pillar module comes with two functions, pillar.items and pillar.raw. pillar.items will return a freshly reloaded pillar and pillar.raw will return the current pillar without a refresh:
salt '*' pillar.items
NOTE: Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.
Pillar get Function
New in version 0.14.0.
The pillar.get function works much in the same way as the get method in a python dict, but with an enhancement: nested dict components can be extracted using a : delimiter.
If a structure like this is in pillar:
foo:
  bar:
    baz: qux
Extracting it from the raw pillar in an sls formula or file template is done this way:
{{ pillar['foo']['bar']['baz'] }}
Now, with the new pillar.get function the data can be safely gathered and a default can be set, allowing the template to fall back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier.
NOTE: pillar.get() vs salt['pillar.get']()
It should be noted that within templating, the pillar variable is just a dictionary. This means that calling pillar.get() inside of a template will just use the default dictionary .get() function which does not include the extra : delimiter functionality. It must be called using the above syntax (salt['pillar.get']('foo:bar:baz', 'qux')) to get the salt function, instead of the default dictionary behavior.
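As a sketch, contrast the two forms inside a template, using the pillar structure from above:

{# plain dict .get(): looks for a literal key named 'foo:bar:baz' and falls back to the default #}
{{ pillar.get('foo:bar:baz', 'qux') }}

{# salt['pillar.get'](): traverses foo -> bar -> baz using ':' as the delimiter #}
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}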
Refreshing Pillar Data
When pillar data is changed on the master the minions need to refresh the data locally. This is done with the saltutil.refresh_pillar function.
salt '*' saltutil.refresh_pillar
This function triggers the minion to asynchronously refresh the pillar and will always return None.
Set Pillar Data at the Command Line
Pillar data can be set at the command line like the following example:
salt '*' state.highstate pillar='{"cheese": "spam"}'
This will create a dict with a key of 'cheese' and a value of 'spam'. A list can be created like this:
salt '*' state.highstate pillar='["cheese", "milk", "bread"]'
Master Config In Pillar
For convenience the data stored in the master configuration file can be made available in all minion's pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration. This option is disabled by default.
To have the master config added to the pillar, set pillar_opts to True:
pillar_opts: True
Master Provided Pillar Error
By default if there is an error rendering a pillar, the detailed error is hidden and replaced with:
Rendering SLS 'my.sls' failed. Please see master log for details.
The error is hidden because it may contain templating data that would give the minion information it shouldn't have, such as a password!
To have the master provide the detailed error that could potentially carry protected data set pillar_safe_render_error to False:
pillar_safe_render_error: False
REACTOR SYSTEM
Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt's events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions.
This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed.
Event System
A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations.
The event system fires events with a very specific criteria. Every event has a tag. Event tags allow for fast top level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict, which contains information about the event.
Mapping Events to Reactor SLS Files
Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf.
New in version 2014.7.0: Added Reactor support for salt:// file paths.
In the master config section 'reactor:' is a list of event tags to be matched and each event tag has a list of reactor SLS files to be run.
reactor:                            # Master config section "reactor"

  - 'salt/minion/*/start':          # Match tag "salt/minion/*/start"
    - /srv/reactor/start.sls        # Things to do when a minion starts
    - /srv/reactor/monitor.sls      # Other things to do

  - 'salt/cloud/*/destroyed':       # Globs can be used to match tags
    - /srv/reactor/destroy/*.sls    # Globs can be used to match file names

  - 'myco/custom/event/tag':        # React to custom event tags
    - salt://reactor/mycustom.sls   # Put reactor files under file_roots
Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables.
They differ because of the addition of the tag and data variables.
- •
- The tag variable is just the tag in the fired event.
- •
-
The data variable is the event's data dict.
Here is a simple reactor sls:
{% if data['id'] == 'mysql1' %}
highstate_run:
  local.state.highstate:
    - tgt: mysql1
{% endif %}
This simple reactor file uses Jinja to further refine the reaction to be made. If the id in the event data is mysql1 (in other words, if the name of the minion is mysql1) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system. The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the mysql1 minion with a function of state.highstate. Similarly, a runner can be called:
{% if data['data']['overstate'] == 'refresh' %}
overstate_run:
  runner.state.over
{% endif %}
This example will execute the state.overstate runner and initiate an overstate execution.
Fire an event
To fire an event from a minion call event.send
salt-call event.send 'foo' '{overstate: refresh}'
After this is called, any reactor sls files matching event tag foo will execute with {{ data['data']['overstate'] }} equal to 'refresh'.
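As a sketch, a master config entry and reactor file reacting to this tag might look like the following (the reactor file path and the reaction itself are illustrative):

# /etc/salt/master.d/reactor.conf
reactor:
  - 'foo':
    - /srv/reactor/foo.sls

{# /srv/reactor/foo.sls #}
{% if data['data']['overstate'] == 'refresh' %}
refresh_reaction:
  local.test.ping:
    - tgt: '*'
{% endif %}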
See salt.modules.event for more information.
Knowing what event is being fired
The best way to see exactly what events are fired and what data is available in each event is to use the state.event runner.
SEE ALSO: Common Salt Events
Example usage:
salt-run state.event pretty=True
Example output:
salt/job/20150213001905721678/new       {
    "_stamp": "2015-02-13T00:19:05.724583",
    "arg": [],
    "fun": "test.ping",
    "jid": "20150213001905721678",
    "minions": [
        "jerry"
    ],
    "tgt": "*",
    "tgt_type": "glob",
    "user": "root"
}
salt/job/20150213001910749506/ret/jerry {
    "_stamp": "2015-02-13T00:19:11.136730",
    "cmd": "_return",
    "fun": "saltutil.find_job",
    "fun_args": [
        "20150213001905721678"
    ],
    "id": "jerry",
    "jid": "20150213001910749506",
    "retcode": 0,
    "return": {},
    "success": true
}
Debugging the Reactor
The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors generated while rendering the SLS file).
- 1.
- Stop the master.
- 2.
-
Start the master manually:
salt-master -l debug
- 3.
-
Look for log entries in the form:
[DEBUG   ] Gathering reactors for tag foo/bar
[DEBUG   ] Compiling reactions for tag foo/bar
[DEBUG   ] Rendered data from file: /path/to/the/reactor_file.sls:
<... Rendered output appears here. ...>
The rendered output is the result of the Jinja parsing and is a good way to view the result of referencing Jinja variables. If the result is empty then Jinja produced an empty result and the Reactor will ignore it.
Understanding the Structure of Reactor Formulas
I.e., when to use `arg` and `kwarg` and when to specify the function arguments directly.
While the reactor system uses the same basic data structure as the state system, the functions that will be called using that data structure are different functions than are called via Salt's state system. The Reactor can call Runner modules using the runner prefix, Wheel modules using the wheel prefix, and can also cause minions to run Execution modules using the local prefix.
Changed in version 2014.7.0: The cmd prefix was renamed to local for consistency with other parts of Salt. A backward-compatible alias was added for cmd.
The Reactor runs on the master and calls functions that exist on the master. In the case of Runner and Wheel functions the Reactor can just call those functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the Reactor still needs to call a function on the master in order to send the necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in Salt's Python API documentation, and thus the structure of Reactor files very transparently reflects the function signatures of those functions.
Calling Execution modules on Minions
The Reactor sends commands down to minions in the exact same way Salt's CLI interface does. It calls a function locally on the master that sends the name of the function as well as a list of any arguments and a dictionary of any keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the async version of this function. You can see that function has 'arg' and 'kwarg' parameters which are both values that are sent down to the minion.
Executing remote commands maps to the LocalClient interface which is used by the salt command. This interface more specifically maps to the cmd_async method inside of the LocalClient class. This means that the arguments passed are being passed to the cmd_async method, not the remote method. A field starts with local to use the LocalClient subsystem. The result is, to execute a remote command, a reactor formula would look like this:
clean_tmp:
  local.cmd.run:
    - tgt: '*'
    - arg:
      - rm -rf /tmp/*
The arg option takes a list of arguments as they would be presented on the command line, so the above declaration is the same as running this salt command:
salt '*' cmd.run 'rm -rf /tmp/*'
Use the expr_form argument to specify a matcher:
clean_tmp:
  local.cmd.run:
    - tgt: 'os:Ubuntu'
    - expr_form: grain
    - arg:
      - rm -rf /tmp/*

clean_tmp:
  local.cmd.run:
    - tgt: 'G@roles:hbase_master'
    - expr_form: compound
    - arg:
      - rm -rf /tmp/*
Any other parameters in the LocalClient().cmd() method can be specified as well.
Calling Runner modules and Wheel modules
Calling Runner modules and Wheel modules from the Reactor uses a more direct syntax since the function is being executed locally instead of sending a command to a remote system to be executed there. There are no 'arg' or 'kwarg' parameters (unless the Runner function or Wheel function accepts a parameter with either of those names.)
For example:
clear_the_grains_cache_for_all_minions:
  runner.cache.clear_grains
If the runner takes arguments then they can be specified as well:
spin_up_more_web_machines:
  runner.cloud.profile:
    - prof: centos_6
    - instances:
      - web11      # These VM names would be generated via Jinja in a
      - web12      # real-world example.
Passing event data to Minions or Orchestrate as Pillar
An interesting trick to pass data from the Reactor script to state.highstate or state.sls is to pass it as inline Pillar data since both functions take a keyword argument named pillar.
The following example uses Salt's Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using salt-key.
/etc/salt/master.d/reactor.conf:
reactor:
  - 'salt/key':
    - /srv/salt/haproxy/react_new_minion.sls
The Reactor then fires a state.sls command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar.
/srv/salt/haproxy/react_new_minion.sls:
{% if data['act'] == 'accept' and data['id'].startswith('web') %}
add_new_minion_to_pool:
  local.state.sls:
    - tgt: 'haproxy*'
    - arg:
      - haproxy.refresh_pool
    - kwarg:
        pillar:
          new_minion: {{ data['id'] }}
{% endif %}
The above command is equivalent to the following command at the CLI:
salt 'haproxy*' state.sls haproxy.refresh_pool 'pillar={new_minion: minionid}'
This works with Orchestrate files as well:
call_some_orchestrate_file:
  runner.state.orchestrate:
    - mods: some_orchestrate_file
    - pillar:
        stuff: things
Which is equivalent to the following command at the CLI:
salt-run state.orchestrate some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example is grabbing web server names and IP addresses from Salt Mine. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool but with the disabled flag so that HAProxy won't yet direct traffic to it.
/srv/salt/haproxy/refresh_pool.sls:
{% set new_minion = salt['pillar.get']('new_minion') %}

listen web *:80
    balance source
    {% for server,ip in salt['mine.get']('web*', 'network.interfaces', ['eth0']).items() %}
    {% if server == new_minion %}
    server {{ server }} {{ ip }}:80 disabled
    {% else %}
    server {{ server }} {{ ip }}:80 check
    {% endif %}
    {% endfor %}
A Complete Example
In this example, we're going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We'll also add that we don't want all servers being automatically accepted. For this example, we'll assume that all hosts that have an id that starts with 'ink' will be automatically accepted and have state.highstate executed. On top of this, we're going to add that a host coming up that was replaced (meaning a new key) will also be accepted.
Our master configuration will be rather simple. All minions that attempt to authenticate will match the tag of salt/auth. When it comes to the minion key being accepted, we get a more refined tag that includes the minion id, which we can use for matching.
/etc/salt/master.d/reactor.conf:
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls
  - 'salt/minion/ink*/start':
    - /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on the master and then also tell the master to ssh in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected.
We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default.
/srv/reactor/auth-pending.sls:
{# Ink server failed to authenticate -- remove accepted key #}
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
  wheel.key.delete:
    - match: {{ data['id'] }}
minion_rejoin:
  local.cmd.run:
    - tgt: salt-master.domain.tld
    - arg:
      - ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}

{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just Ink servers in the master configuration.
/srv/reactor/auth-complete.sls:
{# When an Ink server connects, run state.highstate. #}
highstate_run:
  local.state.highstate:
    - tgt: {{ data['id'] }}
    - ret: smtp
The above will also return the highstate result data using the smtp_return returner (use the returner's virtual name, smtp, just as when using --return on the command line). The returner needs to be configured on the minion for this to work. See the salt.returners.smtp_return documentation for that.
Syncing Custom Types on Minion Start
Salt will sync all custom types (by running a saltutil.sync_all) on every highstate. However, there is a chicken-and-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for minion_start events, which each minion fires when it first starts up and connects to the master.
On the master, create /srv/reactor/sync_grains.sls with the following contents:
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}
And in the master config file, add the following reactor configuration:
reactor:
  - 'minion_start':
    - /srv/reactor/sync_grains.sls
This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed.
Other types can be synced by replacing local.saltutil.sync_grains with local.saltutil.sync_modules, local.saltutil.sync_all, or whatever else suits the intended use case.
THE SALT MINE
The Salt Mine is used to collect arbitrary data from Minions and store it on the Master. This data is then made available to all Minions via the salt.modules.mine module.
Mine data is gathered on the Minion and sent back to the Master where only the most recent data is maintained (if long term data is required use returners or the external job cache).
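Once mine data is being collected, it can be queried with the mine.get function. As a simple illustration (the glob targets here are placeholders), the following CLI call asks every Minion for the network.ip_addrs mine data of all Minions:

salt '*' mine.get '*' network.ip_addrs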
Mine vs Grains
Mine data is designed to be much more up-to-date than grain data. Grains are refreshed on a very limited basis and are largely static data. Mines are designed to replace slow peer publishing calls when Minions need data from other Minions. Rather than having a Minion reach out to all the other Minions for a piece of data, the Salt Mine, running on the Master, can collect it from all the Minions every mine-interval, resulting in almost fresh data at any given time, with much less overhead.
Mine Functions
To enable the Salt Mine the mine_functions option needs to be applied to a Minion. This option can be applied via the Minion's configuration file, or the Minion's Pillar. The mine_functions option dictates what functions are being executed and allows for arguments to be passed in. If no arguments are passed, an empty list must be added:
mine_functions:
  test.ping: []
  network.ip_addrs:
    interface: eth0
    cidr: '10.0.0.0/8'
Mine Functions Aliases
Function aliases can be used to provide usage intentions or to allow multiple calls of the same function with different arguments.
New in version 2014.7.0.
mine_functions:
  network.ip_addrs: [eth0]
  networkplus.internal_ip_addrs: []
  internal_ip_addrs:
    mine_function: network.ip_addrs
    cidr: 192.168.0.0/16
  loopback_ip_addrs:
    mine_function: network.ip_addrs
    lo: True
Mine Interval
The Salt Mine functions are executed when the Minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the Minion via the mine_interval option:
mine_interval: 60
Mine in Salt-SSH
As of the 2015.5.0 release of salt, salt-ssh supports mine.get.
Because the Minions cannot provide their own mine_functions configuration, we retrieve the args for specified mine functions in one of three places, searched in the following order:
- 1.
- Roster data
- 2.
- Pillar
- 3.
- Master config
The mine_functions are formatted exactly the same as in normal salt, just stored in a different location. Here is an example of a flat roster containing mine_functions:
test:
  host: 104.237.131.248
  user: root
  mine_functions:
    cmd.run: ['echo "hello!"']
    network.ip_addrs:
      interface: eth0
NOTE: Because of the differences in the architecture of salt-ssh, mine.get calls are somewhat inefficient. Salt must make a new salt-ssh call to each of the Minions in question to retrieve the requested data, much like a publish call. However, unlike publish, it must run the requested function as a wrapper function, so we can retrieve the function args from the pillar of the Minion in question. This results in a non-trivial delay in retrieving the requested data.
Example
One way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all Minions with the "web" grain to add them to the pool of load balanced servers.
/srv/pillar/top.sls:
base:
  'G@roles:web':
    - web
/srv/pillar/web.sls:
mine_functions:
  network.ip_addrs: [eth0]
/etc/salt/minion.d/mine.conf:
mine_interval: 5
/srv/salt/haproxy.sls:
haproxy_config:
  file.managed:
    - name: /etc/haproxy/config
    - source: salt://haproxy_config
    - template: jinja
/srv/salt/haproxy_config:
<...file contents snipped...>

{% for server, addrs in salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='pillar').items() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}

<...file contents snipped...>
EXTERNAL AUTHENTICATION SYSTEM
Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP.
Access Control System
The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the access control system:
external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
    steve:
      - .*
The above configuration allows the user thatch to execute functions in the test and network modules on the minions that match the web* target. User steve is given unrestricted access to minion commands.
Salt respects the current PAM configuration in place, and uses the 'login' service to authenticate.
NOTE: The PAM module does not allow authenticating as root.
To allow access to wheel modules or runner modules the following @ syntax must be used:
external_auth:
  pam:
    thatch:
      - '@wheel'   # to allow access to all wheel modules
      - '@runner'  # to allow access to all runner modules
      - '@jobs'    # to allow access to the jobs runner and/or wheel module
NOTE: The runner/wheel markup is different, since there are no minions to scope the acl to.
NOTE: Globs will not match wheel or runners! They must be explicitly allowed with @wheel or @runner.
The external authentication system can then be used from the command-line by any user on the same system as the master with the -a option:
$ salt -a pam web\* test.ping
The system will ask the user for the credentials required by the authentication system and then publish the command.
To apply permissions to a group of users in an external authentication system, append a % to the ID:
external_auth:
  pam:
    admins%:
      - '*':
        - 'pkg.*'
WARNING: All users that have external authentication privileges are allowed to run saltutil.find_job. Be aware that this could inadvertently expose some data such as minion IDs.
Tokens
With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens.
Tokens are short term authorizations and can be easily created by just adding a -T option when authenticating:
$ salt -T -a pam web\* test.ping
Now a token will be created that has an expiration of 12 hours (by default). This token is stored in a file named .salt_token in the active user's home directory.
Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires.
Token expiration time can be set in the Salt master config file.
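As a minimal sketch, the master-side option that controls this lifetime is token_expire, expressed in seconds; the value below simply restates the 12 hour default:

# /etc/salt/master
token_expire: 43200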
LDAP and Active Directory
Salt supports both user and group authentication for LDAP (and Active Directory accessed via its LDAP interface).
OpenLDAP and similar systems
LDAP configuration happens in the Salt master configuration file.
Server configuration values and their defaults:
# Server to auth against
auth.ldap.server: localhost

# Port to connect via
auth.ldap.port: 389

# Use TLS when connecting
auth.ldap.tls: False

# LDAP scope level, almost always 2
auth.ldap.scope: 2

# Server specified in URI format
auth.ldap.uri: ''    # Overrides .ldap.server, .ldap.port, .ldap.tls above

# Verify server's TLS certificate
auth.ldap.no_verify: False

# Bind to LDAP anonymously to determine group membership
# Active Directory does not allow anonymous binds without special configuration
auth.ldap.anonymous: False

# FOR TESTING ONLY, this is a VERY insecure setting.
# If this is True, the LDAP bind password will be ignored and
# access will be determined by group membership alone with
# the group memberships being retrieved via anonymous bind
auth.ldap.auth_by_group_membership_only: False

# Require authenticating user to be part of this Organizational Unit
# This can be blank if your LDAP schema does not use this kind of OU
auth.ldap.groupou: 'Groups'

# Object Class for groups. An LDAP search will be done to find all groups of this
# class to which the authenticating user belongs.
auth.ldap.groupclass: 'posixGroup'

# Unique ID attribute name for the user
auth.ldap.accountattributename: 'memberUid'

# These are only for Active Directory
auth.ldap.activedirectory: False
auth.ldap.persontype: 'person'
There are two phases to LDAP authentication. First, Salt authenticates to search for a user's Distinguished Name and group membership. The user it authenticates as in this phase is often a special LDAP system user with read-only access to the LDAP directory. After Salt searches the directory to determine the actual user's DN and groups, it re-authenticates as the user running the Salt commands.
If you are already aware of the structure of your DNs and permissions in your LDAP store are set such that users can look up their own group memberships, then the first and second users can be the same. To tell Salt this is the case, omit the auth.ldap.bindpw parameter. You can template the binddn like this:
auth.ldap.basedn: dc=saltstack,dc=com
auth.ldap.binddn: uid={{ username }},cn=users,cn=accounts,dc=saltstack,dc=com
Salt will use the password entered on the salt command line in place of the bindpw.
To use two separate users, specify the LDAP lookup user in the binddn directive, and include a bindpw like so:
auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=saltstack,dc=com
auth.ldap.bindpw: mypassword
As mentioned before, Salt uses a filter to find the DN associated with a user. Salt substitutes the {{ username }} value for the username when querying LDAP:
auth.ldap.filter: uid={{ username }}
For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. Then the results are filtered against auth.ldap.groupclass, default posixGroup, and the account's 'name' attribute, memberUid by default.
auth.ldap.groupou: Groups
Active Directory
Active Directory handles group membership differently, and does not utilize the groupou configuration variable. AD needs the following options in the master config:
auth.ldap.activedirectory: True
auth.ldap.filter: sAMAccountName={{username}}
auth.ldap.accountattributename: sAMAccountName
auth.ldap.groupclass: group
auth.ldap.persontype: person
To determine group membership in AD, the username and password entered when LDAP is requested as the eAuth mechanism on the command line are used to bind to AD's LDAP interface. If this bind fails, the user is denied access regardless of group membership. Next, the distinguishedName of the user is looked up with the following LDAP search:
(&(<value of auth.ldap.accountattributename>={{username}})
  (objectClass=<value of auth.ldap.persontype>)
)
This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed:
(&(member=<distinguishedName from search above>)
  (objectClass=<value of auth.ldap.groupclass>)
)
To configure an individual LDAP user, list the user under the ldap section of external_auth:

external_auth:
  ldap:
    test_ldap_user:
      - '*':
        - test.ping
To configure an LDAP group, append a % to the ID:
external_auth:
  ldap:
    test_ldap_group%:
      - '*':
        - test.echo
ACCESS CONTROL SYSTEM
New in version 0.10.4.
Salt maintains a standard system used to open granular control to non-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non-administrative control interfaces in Salt. These interfaces include the peer system, the external auth system, and the client ACL system.
The access control system mandates a standard configuration syntax used in all of the three aforementioned systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration.
Now specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system.
The access controls are manifested using matchers in these configurations:
client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
      - test.*
      - apache.*
In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets.
external_auth:
  pam:
    dave:
      - test.ping
      - mongo\*:
        - network.*
      - log\*:
        - network.*
        - pkg.*
      - 'G@os:RedHat':
        - kmod.*
    steve:
      - .*
The above allows for all minions to be hit by test.ping by dave, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.
JOB MANAGEMENT
New in version 0.9.7.
Since Salt executes jobs on many systems at once, it needs a way to manage the jobs running on all of those systems.
The Minion proc System
Salt Minions maintain a proc directory in the Salt cachedir. The proc directory contains files named after the executed job IDs. These files hold information about the jobs currently running on the minion and allow jobs to be looked up. With a default configuration the proc directory is located at /var/cache/salt/proc.
Functions in the saltutil Module
Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:
- 1.
- running Returns the data of all running jobs that are found in the proc directory.
- 2.
- find_job Returns specific data about a certain job based on job id.
- 3.
- signal_job Allows for a given jid to be sent a signal.
- 4.
- term_job Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
- 5.
- kill_job Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
These functions make up the core of the back end used to manage jobs at the minion level.
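As a rough sketch of how these functions are typically driven from the CLI (the <jid> below is a placeholder for a real job ID):

salt '*' saltutil.running
salt '*' saltutil.find_job <jid>
salt '*' saltutil.signal_job <jid> 15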
The jobs Runner
A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.
The jobs runner contains a number of functions...
active
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.
# salt-run jobs.active
lookup_jid
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
# salt-run jobs.lookup_jid <job id number>
list_jobs
Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, or have partially returned.
# salt-run jobs.list_jobs
Scheduling Jobs
In Salt versions greater than 0.12.0, the scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.
Scheduling is enabled via the schedule option in either the master or minion config files, or via a minion's pillar data. Schedules implemented via pillar data only require the minion's pillar data to be refreshed, for example by using saltutil.refresh_pillar. Schedules implemented in the master or minion config require the respective daemon to be restarted before the schedule takes effect.
NOTE: The scheduler executes different functions on the master and minions. When running on the master the functions reference runner functions, when running on the minion the functions specify execution functions.
A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion logging settings.
Specify maxrunning to ensure that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or otherwise double execute. The default for maxrunning is 1.
States are executed on the minion, as all states are. You can pass positional arguments and provide a yaml dict of named arguments.
schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour)
schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    splay: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds
schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    splay:
      start: 10
      end: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds
New in version 2014.7.0.
Frequency of jobs can also be specified using date strings supported by the python dateutil library. This requires python-dateutil to be installed on the minion.
schedule:
  job1:
    function: state.sls
    args:
      - httpd
    kwargs:
      test: True
    when: 5:00pm
This will schedule the command: state.sls httpd test=True at 5:00pm minion localtime.
schedule:
  job1:
    function: state.sls
    args:
      - httpd
    kwargs:
      test: True
    when:
      - Monday 5:00pm
      - Tuesday 3:00pm
      - Wednesday 5:00pm
      - Thursday 3:00pm
      - Friday 5:00pm
This will schedule the command: state.sls httpd test=True at 5pm on Monday, Wednesday, and Friday, and 3pm on Tuesday and Thursday.
schedule:
  job1:
    function: state.sls
    seconds: 3600
    args:
      - httpd
    kwargs:
      test: True
    range:
      start: 8:00am
      end: 5:00pm
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8am and 5pm. The range parameter must be a dictionary with the date strings using the dateutil format. This requires python-dateutil to be installed on the minion.
New in version 2014.7.0.
The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of infrastructure outage.
The default for maxrunning is 1.
schedule:
  long_running_job:
    function: big_file_transfer
    jid_include: True
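To make the limit explicit, maxrunning can be set alongside jid_include; the sketch below reuses the hypothetical big_file_transfer function from the example above:

schedule:
  long_running_job:
    function: big_file_transfer
    jid_include: True
    maxrunning: 1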
States
schedule:
  log-loadavg:
    function: cmd.run
    seconds: 3660
    args:
      - 'logger -t salt < /proc/loadavg'
    kwargs:
      stateful: False
      shell: /bin/sh
Highstates
To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar:
schedule:
  highstate:
    function: state.highstate
    minutes: 60
Time intervals can be specified as seconds, minutes, hours, or days.
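For example, a minimal sketch of a job keyed on days (the function choice is arbitrary) would look like:

schedule:
  daily_ping:
    function: test.ping
    days: 1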
Runners
Runner executions can also be specified on the master within the master configuration file:
schedule:
  run_my_orch:
    function: state.orchestrate
    hours: 6
    splay: 600
    args:
      - orchestration.my_orch
The above configuration is analogous to running salt-run state.orch orchestration.my_orch every 6 hours.
Scheduler With Returner
The scheduler is also useful for tasks like gathering monitoring data about a minion. The following schedule configuration gathers status data and sends it to a MySQL returner database:
schedule:
  uptime:
    function: status.uptime
    seconds: 60
    returner: mysql
  meminfo:
    function: status.meminfo
    minutes: 5
    returner: mysql
Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling.
MANAGING THE JOB CACHE
The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. This job cache is called the Default Job Cache.
Default Job Cache
A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management.
The default job cache is a temporary cache and jobs will be stored for 24 hours. If the default cache needs to store jobs for a different period, the time can be easily adjusted by changing the keep_jobs parameter in the Salt Master configuration file. The value passed in is measured in hours:
keep_jobs: 24
Additional Job Cache Options
Many deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this, the master job cache and the external job cache.
See Storing Job Results in an External System.
STORING JOB RESULTS IN AN EXTERNAL SYSTEM
After a job executes, job results are returned to the Salt Master by each Salt Minion. These results are stored in the Default Job Cache.
In addition to the Default Job Cache, Salt provides two additional mechanisms to send job results to other systems (databases, local syslog, and others):
- •
- External Job Cache
- •
- Master Job Cache
The major difference between these two mechanisms is where the results are returned from (the Salt Master or the Salt Minion).
External Job Cache - Minion-Side Returner
When an External Job Cache is configured, data is returned to the Default Job Cache on the Salt Master like usual, and then results are also sent to an External Job Cache using a Salt returner module running on the Salt Minion. [image]
- •
- Advantages: Data is stored without placing additional load on the Salt Master.
- •
- Disadvantages: Each Salt Minion connects to the external job cache, which can result in a large number of connections. Also requires additional configuration to get returner module settings on all Salt Minions.
Master Job Cache - Master-Side Returner
New in version 2014.7.0.
Instead of configuring an External Job Cache on each Salt Minion, you can configure the Master Job Cache to send job results from the Salt Master instead. In this configuration, Salt Minions send data to the Default Job Cache as usual, and then the Salt Master sends the data to the external system using a Salt returner module running on the Salt Master. [image]
- •
- Advantages: A single connection is required to the external system. This is preferred for databases and similar systems.
- •
- Disadvantages: Places additional load on your Salt Master.
Configure an External or Master Job Cache
Step 1: Understand Salt Returners
Before you configure a job cache, it is essential to understand Salt returner modules ("returners"). Returners are pluggable Salt Modules that take the data returned by jobs, and then perform any necessary steps to send the data to an external system. For example, a returner might establish a connection, authenticate, and then format and transfer data.
The Salt Returner system provides the core functionality used by the External and Master Job Cache systems, and the same returners are used by both systems.
Salt currently provides many different returners that let you connect to a wide variety of systems. A complete list is available at all Salt returners. Each returner is configured differently, so make sure you read and follow the instructions linked from that page.
For example, the MySQL returner requires:
- •
- A database created using provided schema (structure is available at MySQL returner)
- •
- A user created with privileges to the database
- •
- Optional SSL configuration
A simpler returner, such as Slack or HipChat, requires:
- •
- An API key/version
- •
- The target channel/room
- •
- The username that should be used to send the message
Step 2: Configure the Returner
After you understand the configuration and have the external system ready, add the returner configuration settings to the Salt Minion configuration file for the External Job Cache, or to the Salt Master configuration file for the Master Job Cache.
For example, MySQL requires:
mysql.host: 'salt'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
Slack requires:
slack.channel: 'channel'
slack.api_key: 'key'
slack.from_name: 'name'
After you have configured the returner and added settings to the configuration file, you can enable the External or Master Job Cache.
Step 3: Enable the External or Master Job Cache
Configuration is a single line that specifies an already-configured returner to use to send all job data to an external system.
External Job Cache
To enable a returner as the External Job Cache (Minion-side), add the following line to the Salt Master configuration file:
ext_job_cache: <returner>
For example:
ext_job_cache: mysql
NOTE: When configuring an External Job Cache (Minion-side), the returner settings are added to the Minion configuration file, but the External Job Cache setting is configured in the Master configuration file.
Master Job Cache
To enable a returner as a Master Job Cache (Master-side), add the following line to the Salt Master configuration file:
master_job_cache: <returner>
For example:
master_job_cache: mysql
Verify that the returner configuration settings are in the Master configuration file, and be sure to restart the salt-master service after you make configuration changes (service salt-master restart).
STORING DATA IN OTHER DATABASES
The SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes.
SDB was added to Salt in version 2014.7.0. SDB is currently experimental, and should probably not be used in production.
SDB Configuration
In order to use the SDB interface, a configuration profile must be set up in either the master or minion configuration file. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module would look like:
mykeyring:
  driver: keyring
  service: system
It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.
SDB URIs
SDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:
sdb://<profile>/<args>
The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.
For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:
kevinopenstack:
  driver: keyring
  service: salt.cloud.openstack.kevin
And the URI used to reference the password might look like:
sdb://kevinopenstack/password
Writing SDB Modules
There is currently one function that MUST exist in any SDB module (get()) and one that MAY exist (set_()). If using a set_() function, a __func_alias__ dictionary MUST be declared in the module as well:
__func_alias__ = {
    'set_': 'set',
}
This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use.
The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function.
The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks).
A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.
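To make the required pieces concrete, below is a minimal sketch of a hypothetical SDB module; the json_file driver name, the path profile argument, and the JSON storage format are assumptions made for this example and are not part of Salt:

# /srv/salt/_sdb/json_file.py -- hypothetical example module
#
# Example profile in the master or minion config:
#
#     myjson:
#       driver: json_file
#       path: /var/cache/salt/sdb.json
#
# Example URI: sdb://myjson/somekey
import json
import os

# 'set' is a Python built-in, so the function is named set_ and aliased
__func_alias__ = {
    'set_': 'set',
}


def _read(path):
    '''Return the data stored in the JSON file, or an empty dict.'''
    if not os.path.isfile(path):
        return {}
    with open(path) as fh:
        return json.load(fh)


def get(key, profile=None):
    '''Required: look up a single key for sdb:// URI lookups.'''
    return _read(profile['path']).get(key)


def set_(key, value, profile=None):
    '''Optional: store a value; omit this function for read-only sources.'''
    data = _read(profile['path'])
    data[key] = value
    with open(profile['path'], 'w') as fh:
        json.dump(data, fh)
    return value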
SALT EVENT SYSTEM
The Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt.
The event system is made up of two primary components:
- •
- The event sockets, which publish events.
- •
- The event library, which can listen to events and send events into the salt system.
Event types
Salt Master Events
These events are fired on the Salt Master event bus. This list is not comprehensive.
Authentication events
- salt/auth
- Fired when a minion performs an authentication check with the master.
- Variables
- •
- id -- The minion ID.
- •
- act -- The current status of the minion key: accept, pend, reject.
- •
- pub -- The minion public key.
NOTE: Minions fire auth events on fairly regular basis for a number of reasons. Writing reactors to respond to events through the auth cycle can lead to infinite reactor event loops (minion tries to auth, reactor responds by doing something that generates another auth event, minion sends auth event, etc.). Consider reacting to salt/key or salt/minion/<MID>/start or firing a custom event tag instead.
Start events
- salt/minion/<MID>/start
- Fired every time a minion connects to the Salt master.
- Variables
- id -- The minion ID.
Key events
- salt/key
- Fired when accepting and rejecting minions keys on the Salt master.
- Variables
- •
- id -- The minion ID.
- •
- act -- The new status of the minion key: accept, pend, reject.
WARNING: If a master is in auto_accept mode, salt/key events will not be fired when the keys are accepted. In addition, pre-seeding keys (like happens through Salt-Cloud) will not cause firing of these events.
Job events
- salt/job/<JID>/new
- Fired as a new job is sent out to minions.
- Variables
- •
- jid -- The job ID.
- •
- tgt -- The target of the job: *, a minion ID, G@os_family:RedHat, etc.
- •
- tgt_type -- The type of targeting used: glob, grain, compound, etc.
- •
- fun -- The function to run on minions: test.ping, network.interfaces, etc.
- •
- arg -- A list of arguments to pass to the function that will be called.
- •
- minions -- A list of minion IDs that Salt expects will return data for this job.
- •
- user -- The name of the user that ran the command as defined in Salt's Client ACL or external auth.
- salt/job/<JID>/ret/<MID>
- Fired each time a minion returns data for a job.
- Variables
- •
- id -- The minion ID.
- •
- jid -- The job ID.
- •
- retcode -- The return code for the job.
- •
- fun -- The function the minion ran. E.g., test.ping.
- •
- return -- The data returned from the execution module.
- salt/job/<JID>/prog/<MID>/<RUN NUM>
- Fired each time each function in a state run completes execution. Must be enabled using the state_events option.
- Variables
- •
- data -- The data returned from the state module function.
- •
- id -- The minion ID.
- •
- jid -- The job ID.
Presence events
- salt/presence/present
- Events fired on a regular interval about currently connected, newly connected, or recently disconnected minions. Requires the presence_events setting to be enabled.
- Variables
- present -- A list of minions that are currently connected to the Salt master.
- salt/presence/change
- Fired when the Presence system detects new minions connect or disconnect.
- Variables
- •
- new -- A list of minions that have connected since the last presence event.
- •
- lost -- A list of minions that have disconnected since the last presence event.
Cloud Events
Unlike other Master events, salt-cloud events are not fired on behalf of a Salt Minion. Instead, salt-cloud events are fired on behalf of a VM. This is because the minion-to-be may not yet exist to fire events to, or may have already been destroyed.
This behavior is reflected by the name variable in the event data for salt-cloud events as compared to the id variable for Salt Minion-triggered events.
- salt/cloud/<VM NAME>/creating
- Fired when salt-cloud starts the VM creation process.
- Variables
- •
- name -- the name of the VM being created.
- •
- event -- description of the event.
- •
- provider -- the cloud provider of the VM being created.
- •
- profile -- the cloud profile for the VM being created.
- salt/cloud/<VM NAME>/deploying
- Fired when the VM is available and salt-cloud begins deploying Salt to the new VM.
- Variables
- •
- name -- the name of the VM being created.
- •
- event -- description of the event.
- •
- kwargs -- options available as the deploy script is invoked: conf_file, deploy_command, display_ssh_output, host, keep_tmp, key_filename, make_minion, minion_conf, name, parallel, preseed_minion_keys, script, script_args, script_env, sock_dir, start_action, sudo, tmp_dir, tty, username
- salt/cloud/<VM NAME>/requesting
- Fired when salt-cloud sends the request to create a new VM.
- Variables
- •
- event -- description of the event.
- •
- location -- the location of the VM being requested.
- •
- kwargs -- options available as the VM is being requested: Action, ImageId, InstanceType, KeyName, MaxCount, MinCount, SecurityGroup.1
- salt/cloud/<VM NAME>/querying
- Fired when salt-cloud queries data for a new instance.
- Variables
- •
- event -- description of the event.
- •
- instance_id -- the ID of the new VM.
- salt/cloud/<VM NAME>/tagging
- Fired when salt-cloud tags a new instance.
- Variables
- •
- event -- description of the event.
- •
- tags -- tags being set on the new instance.
- salt/cloud/<VM NAME>/waiting_for_ssh
- Fired while the salt-cloud deploy process is waiting for ssh to become available on the new instance.
- Variables
- •
- event -- description of the event.
- •
- ip_address -- IP address of the new instance.
- salt/cloud/<VM NAME>/deploy_script
- Fired once the deploy script is finished.
- Variables
- event -- description of the event.
- salt/cloud/<VM NAME>/created
- Fired once the new instance has been fully created.
- Variables
- •
- name -- the name of the VM being created.
- •
- event -- description of the event.
- •
- instance_id -- the ID of the new instance.
- •
- provider -- the cloud provider of the VM being created.
- •
- profile -- the cloud profile for the VM being created.
- salt/cloud/<VM NAME>/destroying
- Fired when salt-cloud requests the destruction of an instance.
- Variables
- •
- name -- the name of the VM being created.
- •
- event -- description of the event.
- •
- instance_id -- the ID of the new instance.
- salt/cloud/<VM NAME>/destroyed
- Fired when an instance has been destroyed.
- Variables
- •
- name -- the name of the VM being created.
- •
- event -- description of the event.
- •
- instance_id -- the ID of the new instance.
Listening for Events
Salt's Event Bus is used heavily within Salt and it is also written to integrate heavily with existing tooling and scripts. There are a variety of ways to consume it.
From the CLI
The quickest way to watch the event bus is by calling the state.event runner:
salt-run state.event pretty=True
That runner is designed to interact with the event bus from external tools and shell scripts. See the documentation for more examples.
Remotely via the REST API
Salt's event bus can be consumed via salt.netapi.rest_cherrypy.app.Events as an HTTP stream from external tools or services.
curl -SsNk https://salt-api.example.com:8000/events?token=05A3
From Python
Python scripts can access the event bus only as the same system user that Salt is running as.
The event system is accessed via the event library. To listen to events, a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location where the Salt Unix sockets are kept; in the configuration this is the sock_dir option, which defaults to "/var/run/salt/master" on most systems.
The following code will check for a single event:
import salt.config
import salt.utils.event

opts = salt.config.client_config('/etc/salt/master')

event = salt.utils.event.get_event(
        'master',
        sock_dir=opts['sock_dir'],
        transport=opts['transport'],
        opts=opts)

data = event.get_event()
Events will also use a "tag". Tags allow for events to be filtered by prefix. By default all events will be returned. If only authentication events are desired, then pass the tag "salt/auth".
The get_event method has a default poll time assigned of 5 seconds. To change this time set the "wait" option.
The following example will only listen for auth events and will wait for 10 seconds instead of the default 5.
data = event.get_event(wait=10, tag='salt/auth')
To retrieve the tag as well as the event data, pass full=True:
evdata = event.get_event(wait=10, tag='salt/job', full=True) tag, data = evdata['tag'], evdata['data']
Instead of looking for a single event, the iter_events method can be used to make a generator which will continually yield salt events.
The iter_events method also accepts a tag but not a wait time:
for data in event.iter_events(tag='salt/auth'): print(data)
And finally event tags can be globbed, such as they can be in the Reactor, using the fnmatch library.
import fnmatch

import salt.config
import salt.utils.event

opts = salt.config.client_config('/etc/salt/master')

sevent = salt.utils.event.get_event(
        'master',
        sock_dir=opts['sock_dir'],
        transport=opts['transport'],
        opts=opts)

while True:
    ret = sevent.get_event(full=True)
    if ret is None:
        continue

    if fnmatch.fnmatch(ret['tag'], 'salt/job/*/ret/*'):
        do_something_with_job_return(ret['data'])
Firing Events
It is possible to fire events on either the minion's local bus or to fire events intended for the master.
To fire a local event from the minion on the command line call the event.fire execution function:
salt-call event.fire '{"data": "message to be sent in the event"}' 'tag'
To fire an event to be sent up to the master from the minion call the event.send execution function. Remember YAML can be used at the CLI in function arguments:
salt-call event.send 'myco/mytag/success' '{success: True, message: "It works!"}'
If a process is listening on the minion, it may be useful for a user on the master to fire an event to it:
# Job on minion
import salt.utils.event

event = salt.utils.event.MinionEvent(**__opts__)

for evdata in event.iter_events(tag='customtag/'):
    return evdata  # do your processing here...
salt minionname event.fire '{"data": "message for the minion"}' 'customtag/african/unladen'
Firing Events from Python
From Salt execution modules
Events can be very useful when writing execution modules, in order to inform various processes on the master when a certain task has taken place. This is easily done using the normal cross-calling syntax:
# /srv/salt/_modules/my_custom_module.py

def do_something():
    '''
    Do something and fire an event to the master when finished

    CLI Example::

        salt '*' my_custom_module.do_something
    '''
    # do something!
    __salt__['event.send']('myco/my_custom_module/finished', {
        'finished': True,
        'message': "The something is finished!",
    })
From Custom Python Scripts
Firing events from custom Python code is quite simple and mirrors how it is done at the CLI:
import salt.client

caller = salt.client.Caller()

caller.sminion.functions['event.send'](
    'myco/myevent/success',
    {
        'success': True,
        'message': "It works!",
    }
)
BEACONS
The beacon system allows the minion to hook into a variety of system processes and continually monitor these processes. When monitored activity occurs in a system process, an event is sent on the Salt event bus that can be used to trigger a reactor.
Salt beacons can currently monitor and send Salt events for many system activities, including:
- •
- file system changes
- •
- system load
- •
- service status
- •
- shell activity, such as user login
- •
- network and disk usage
See beacon modules for a current list.
NOTE: Salt beacons are an event generation mechanism. Beacons leverage the Salt reactor system to make changes when beacon events occur.
Configuring Beacons
Salt beacons do not require any changes to the system process that is being monitored, everything is configured using Salt.
Beacons are typically enabled by placing a beacons: top level block in the minion configuration file:
beacons:
  inotify:
    /etc/httpd/conf.d: {}
    /opt: {}
The beacon system, like many others in Salt, can also be configured via the minion pillar, grains, or local config file.
Beacon Monitoring Interval
Beacons monitor on a 1-second interval by default. To set a different interval, provide an interval argument to a beacon. The following beacons run on 5- and 10-second intervals:
beacons:
  inotify:
    /etc/httpd/conf.d: {}
    /opt: {}
    interval: 5
  load:
    - 1m:
      - 0.0
      - 2.0
    - 5m:
      - 0.0
      - 1.5
    - 15m:
      - 0.1
      - 1.0
    - interval: 10
Beacon Example
This example demonstrates configuring the inotify beacon to monitor a file for changes, and then create a backup each time a change is detected.
NOTE: The inotify beacon requires Pyinotify on the minion; install it using salt myminion pkg.install python-inotify.
First, on the Salt minion, add the following beacon configuration to /etc/salt/minion:
beacons:
  inotify:
    /home/user/importantfile:
      mask:
        - modify
Replace user in the previous example with the name of your user account, and then save the configuration file and restart the minion service.
Next, create a file in your home directory named importantfile and add some simple content. The beacon is now set up to monitor this file for modifications.
View Events on the Master
On your Salt master, start the event runner using the following command:
salt-run state.event pretty=true
This runner displays events as they are received on the Salt event bus. To test the beacon you set up in the previous section, make and save a modification to the importantfile you created. You'll see an event similar to the following on the event bus:
salt/beacon/minion1/inotify/home/user/importantfile {
    "_stamp": "2015-09-09T15:59:37.972753",
    "data": {
        "change": "IN_IGNORED",
        "id": "minion1",
        "path": "/home/user/importantfile"
    },
    "tag": "salt/beacon/minion1/inotify/home/user/importantfile"
}
This indicates that the event is being captured and sent correctly. Now you can create a reactor to take action when this event occurs.
Create a Reactor
On your Salt master, create a file named /srv/reactor/backup.sls. If the reactor directory doesn't exist, create it. Add the following to backup.sls:
backup file:
  cmd.file.copy:
    - tgt: {{ data['data']['id'] }}
    - arg:
      - {{ data['data']['path'] }}
      - {{ data['data']['path'] }}.bak
Next, add the code to trigger the reactor to /etc/salt/master:
reactor:
  - salt/beacon/*/inotify/*/importantfile:
    - /srv/reactor/backup.sls
This reactor creates a backup each time a file named importantfile is modified on a minion that has the inotify beacon configured as previously shown.
NOTE: You can have only one top level reactor section, so if one already exists, add this code to the existing section. See Understanding the Structure of Reactor Formulas to learn more about reactor SLS syntax.
Start the Salt Master in Debug Mode
To help with troubleshooting, start the Salt master in debug mode:
service salt-master stop
salt-master -l debug
When debug logging is enabled, event and reactor data are displayed so you can discover syntax and other issues.
Trigger the Reactor
On your minion, make and save another change to importantfile. On the Salt master, you'll see debug messages that indicate the event was received and the file.copy job was sent. When you list the directory on the minion, you'll now see importantfile.bak.
All beacons are configured using a similar process of enabling the beacon, writing a reactor SLS, and mapping a beacon event to the reactor SLS.
Writing Beacon Plugins
Beacon plugins use the standard Salt loader system, meaning that many of the constructs from other plugin systems hold true, such as the __virtual__ function.
The important function in the Beacon Plugin is the beacon function. When the beacon is configured to run, this function will be executed repeatedly by the minion. The beacon function therefore cannot block and should be as lightweight as possible. The beacon also must return a list of dicts; each dict in the list will be translated into an event on the master.
Please see the inotify beacon as an example.
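For orientation, here is a minimal sketch of a hypothetical beacon module; the uptime_watch name, the max_uptime configuration key, and the threshold logic are assumptions made for this example rather than an existing Salt beacon:

# /srv/salt/_beacons/uptime_watch.py -- hypothetical example beacon
#
# Example minion configuration:
#
#     beacons:
#       uptime_watch:
#         max_uptime: 604800    # seconds (one week)
import time

# The process start time stands in for "uptime" to keep the example self-contained
_START = time.time()


def __virtual__():
    # A real beacon would verify its requirements here before loading
    return True


def beacon(config):
    '''
    Return a list of dicts; each dict is translated into an event on the master.
    An empty list means no event is fired for this interval.
    '''
    ret = []
    uptime = time.time() - _START
    if uptime > config.get('max_uptime', 604800):
        ret.append({'uptime': uptime, 'exceeded': True})
    return ret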
RUNNING CUSTOM MASTER PROCESSES
In addition to the processes that the Salt Master automatically spawns, it is possible to configure it to start additional custom processes.
This is useful if a dedicated process is needed that should run throughout the life of the Salt Master. For periodic independent tasks, a scheduled runner may be more appropriate.
Processes started in this way will be restarted if they die and will be killed when the Salt Master is shut down.
Example Configuration
Processes are declared in the master config file with the ext_processes option. Processes will be started in the order they are declared.
ext_processes:
  - mymodule.TestProcess
  - mymodule.AnotherProcess
Example Process Class
# Import python libs
import time
import logging
from multiprocessing import Process

# Import Salt libs
from salt.utils.event import SaltEvent

log = logging.getLogger(__name__)


class TestProcess(Process):
    def __init__(self, opts):
        Process.__init__(self)
        self.opts = opts

    def run(self):
        self.event = SaltEvent('master', self.opts['sock_dir'])
        i = 0

        while True:
            # Fire an event on the master event bus once a minute
            self.event.fire_event({'iteration': i}, 'ext_processes/test{0}'.format(i))
            i += 1
            time.sleep(60)
HIGH AVAILABILITY FEATURES IN SALT
Salt supports several features for high availability and fault tolerance. Brief documentation for these features is listed alongside their configuration parameters in Configuration file examples.
Multimaster
Salt minions can connect to multiple masters at one time by configuring the master configuration parameter as a YAML list of all the available masters. By default, all masters are "hot", meaning that any master can direct commands to the Salt infrastructure.
In a multimaster configuration, each master must have the same cryptographic keys, and minion keys must be accepted on all masters separately. The contents of file_roots and pillar_roots need to be kept in sync with processes external to Salt as well.
A tutorial on setting up multimaster with "hot" masters is here:
Multimaster with Failover
Changing the master_type parameter from str to failover will cause minions to connect to the first responding master in the list of masters. Every master_alive_check seconds the minions will check to make sure the current master is still responding. If the master does not respond, the minion will attempt to connect to the next master in the list. If the minion runs out of masters, the list will be recycled in case dead masters have been restored. Note that master_alive_check must be present in the minion configuration, or else the recurring job to check master status will not get scheduled.
Failover can be combined with PKI-style encrypted keys, but PKI is NOT REQUIRED to use failover.
Multimaster with PKI and failover is discussed in a separate tutorial.
master_type: failover can be combined with master_shuffle: True to spread minion connections across all masters (one master per minion, not each minion connecting to all masters). Adding Salt Syndics into the mix makes it possible to create a load-balanced Salt infrastructure. If a master fails, minions will notice and select another master from the available list.
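Putting these options together, a failover-capable minion configuration might look roughly like the following (a sketch; hostnames and the check interval are illustrative):

# /etc/salt/minion (sketch)
master:
  - master1.example.com
  - master2.example.com
master_type: failover
master_alive_check: 30
master_shuffle: True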
Syndic
Salt's Syndic feature is a way to create differing infrastructure topologies. It is not strictly an HA feature, but can be treated as such.
With the syndic, a Salt infrastructure can be partitioned in such a way that certain masters control certain segments of the infrastructure, and "Master of Masters" nodes can control multiple segments underneath them.
Syndics are covered in depth in Salt Syndic.
Syndic with Multimaster
New in version 2015.5.0.
Syndic with Multimaster lets you connect a syndic to multiple masters to provide an additional layer of redundancy in a syndic configuration.
Syndics are covered in depth in Salt Syndic.
SALT SYNDIC
The most basic or typical Salt topology consists of a single Master node controlling a group of Minion nodes. An intermediate node type, called a Syndic, offers greater structural flexibility and scalability than topologies constructed only out of Master and Minion node types.
A Syndic node can be thought of as a special passthrough Minion node. A Syndic node consists of a salt-syndic daemon and a salt-master daemon running on the same system. The salt-master daemon running on the Syndic node controls a group of lower level Minion nodes and the salt-syndic daemon connects to a higher level Master node, sometimes called a Master of Masters.
The salt-syndic daemon relays publications and events between the Master node and the local salt-master daemon. This gives the Master node control over the Minion nodes attached to the salt-master daemon running on the Syndic node.
Configuring the Syndic
To setup a Salt Syndic you need to tell the Syndic node and its Master node about each other. If your Master node is located at 10.10.0.1, then your configurations would be:
On the Syndic node:
# /etc/salt/master
syndic_master: 10.10.0.1  # may be either an IP address or a hostname
# /etc/salt/minion
# id is shared by the salt-syndic daemon and a possible salt-minion daemon
# on the Syndic node
id: my_syndic
On the Master node:
# /etc/salt/master
order_masters: True
The syndic_master option tells the Syndic node where to find the Master node in the same way that the master option tells a Minion node where to find a Master node.
The id option is used by the salt-syndic daemon to identify itself with the Master node and, if unset, will default to the hostname or IP address of the Syndic, just as with a Minion.
The order_masters option configures the Master node to send extra information with its publications that is needed by Syndic nodes connected directly to it.
NOTE: Each Syndic must provide its own file_roots directory. Files will not be automatically transferred from the Master node.
Configuring the Syndic with Multimaster
New in version 2015.5.0.
Syndic with Multimaster lets you connect a syndic to multiple masters to provide an additional layer of redundancy in a syndic configuration.
Higher level masters should first be configured in a multimaster configuration. See Multimaster Tutorial.
On the syndic, the syndic_master option is populated with a list of the higher level masters.
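For example, a minimal sketch of the Syndic node's Master config (hostnames are placeholders):

# /etc/salt/master on the Syndic node (sketch)
syndic_master:
  - higher-master1.example.com
  - higher-master2.example.com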
Since each syndic is connected to each master, jobs sent from any master are forwarded to minions that are connected to each syndic. If the master_id value is set in the master config on the higher level masters, job results are returned to the master that originated the request in a best effort fashion. Events/jobs without a master_id are returned to any available master.
Running the Syndic
The salt-syndic daemon is a separate process that needs to be started in addition to the salt-master daemon running on the Syndic node. Starting the salt-syndic daemon is the same as starting the other Salt daemons.
The Master node in many ways sees the Syndic as an ordinary Minion node. In particular, the Master will need to accept the Syndic's Minion key as it would for any other Minion.
On the Syndic node:
# salt-syndic
or
# service salt-syndic start
On the Master node:
# salt-key -a my_syndic
The Master node will now be able to control the Minion nodes connected to the Syndic. Only the Syndic key will be listed in the Master node's key registry but this also means that key activity between the Syndic's Minions and the Syndic does not encumber the Master node. In this way, the Syndic's key on the Master node can be thought of as a placeholder for the keys of all the Minion and Syndic nodes beneath it, giving the Master node a clear, high level structural view on the Salt cluster.
On the Master node:
# salt-key -L
Accepted Keys:
my_syndic
Denied Keys:
Unaccepted Keys:
Rejected Keys:

# salt '*' test.ping
minion_1:
    True
minion_2:
    True
minion_4:
    True
minion_3:
    True
Topology
A Master node (a node which is itself not a Syndic to another higher level Master node) must run a salt-master daemon and optionally a salt-minion daemon.
A Syndic node must run salt-syndic and salt-master daemons and optionally a salt-minion daemon.
A Minion node must run a salt-minion daemon.
When a salt-master daemon issues a command, it will be received by the Syndic and Minion nodes directly connected to it. A Minion node will process the command in the way it ordinarily would. On a Syndic node, the salt-syndic daemon will relay the command to the salt-master daemon running on the Syndic node, which then propagates the command to the Minions and Syndics connected to it.
When salt-minion daemons generate events and job return data, the salt-master daemon they are connected to aggregates them, and that salt-master daemon then relays the data back through its salt-syndic daemon until the data reaches the Master or Syndic node that issued the command.
Syndic wait
NOTE: To reduce the amount of time the CLI waits for Minions to respond, install a Minion on the Syndic or tune the value of the syndic_wait configuration.
While it is possible to run a Syndic without a Minion installed on the same system, installing one is recommended for a faster CLI response time. Without a Minion installed on the Syndic node, the timeout value of syndic_wait increases significantly - about three-fold. With a Minion installed on the Syndic, the CLI timeout stays at the value defined in syndic_wait.
NOTE: If you have a very large infrastructure or many layers of Syndics, you may find that the CLI doesn't wait long enough for the Syndics to return their events. If you think this is the case, you can set the syndic_wait value in the Master configs on the Master or Syndic nodes from which commands are executed. The default value is 5, and should work for the majority of deployments.
In order for a Master or Syndic node to return information from Minions that are below their Syndics, the CLI requires a short wait time in order to allow the Syndics to gather responses from their Minions. This value is defined in the syndic_wait config option and has a default of five seconds.
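For example, to give Syndics more time to gather returns, the value can be raised in the Master config on the node from which commands are run (the value shown is illustrative):

# /etc/salt/master (sketch)
syndic_wait: 10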
Syndic config options
These are the options that can be used to configure a Syndic node. Note that, other than id, Syndic config options are placed in the Master config on the Syndic node. A combined example follows the list.
- •
- id: Syndic id (shared by the salt-syndic daemon with a potential salt-minion daemon on the same system)
- •
- syndic_master: Master node IP address or hostname
- •
- syndic_master_port: Master node ret_port
- •
- syndic_log_file: path to the logfile (absolute or not)
- •
- syndic_pidfile: path to the pidfile (absolute or not)
- •
- syndic_wait: time in seconds to wait on returns from this syndic
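For instance, a Syndic node's Master config might combine these options as follows (a sketch; paths and values are placeholders):

# /etc/salt/master on the Syndic node (sketch)
syndic_master: 10.10.0.1
syndic_master_port: 4506
syndic_log_file: /var/log/salt/syndic
syndic_pidfile: /var/run/salt-syndic.pid
syndic_wait: 5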
SALT PROXY MINION DOCUMENTATION
Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
Proxy minions are not an "out of the box" feature. Because there are an infinite number of controllable devices, you will most likely have to write the interface yourself. Fortunately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis.
Salt proxy-minions provide the 'plumbing' that allows device enumeration and discovery, control, status, remote execution, and state management.
Getting Started
The following diagram may be helpful in understanding the structure of a Salt installation that includes proxy-minions: [image]
The key thing to remember is the left-most section of the diagram. Salt's design has a minion connect to a master, after which the master can control that minion. However, for proxy minions, the target device cannot run a minion, and thus must rely on a separate minion to fire up the proxy-minion and make the initial and persistent connection.
After the proxy minion is started and initiates its connection to the 'dumb' device, it connects back to the salt-master and ceases to be affiliated in any way with the minion that started it.
To create support for a proxied device one needs to create four things:
- 1.
- The proxy_connection_module (located in salt/proxy).
- 2.
- The grains support code (located in salt/grains).
- 3.
- Salt modules specific to the controlled device.
- 4.
- Salt states specific to the controlled device.
Configuration parameters on the master
Proxy minions require no configuration parameters in /etc/salt/master.
Salt's Pillar system is ideally suited for configuring proxy-minions. Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets. To use static files in pillar_roots, pattern your files after the following examples, which are based on the diagram above:
/srv/pillar/top.sls
base:
  minioncontroller1:
    - networkswitches
  minioncontroller2:
    - reallydumbdevices
  minioncontroller3:
    - smsgateway
/srv/pillar/networkswitches.sls
proxy:
  dumbdevice1:
    proxytype: networkswitch
    host: 172.23.23.5
    username: root
    passwd: letmein
  dumbdevice2:
    proxytype: networkswitch
    host: 172.23.23.6
    username: root
    passwd: letmein
  dumbdevice3:
    proxytype: networkswitch
    host: 172.23.23.7
    username: root
    passwd: letmein
/srv/pillar/reallydumbdevices.sls
proxy:
  dumbdevice4:
    proxytype: i2c_lightshow
    i2c_address: 1
  dumbdevice5:
    proxytype: i2c_lightshow
    i2c_address: 2
  dumbdevice6:
    proxytype: 433mhz_wireless
/srv/pillar/smsgateway.sls
proxy:
  minioncontroller3:
    dumbdevice7:
      proxytype: sms_serial
      deventry: /dev/tty04
Note that the contents of each minioncontroller key may differ widely based on the type of device that the proxy-minion is managing.
In the above example
- •
- dumbdevices 1, 2, and 3 are network switches that have a management interface available at a particular IP address.
- •
- dumbdevices 4 and 5 are very low-level devices controlled over an i2c bus. In this case the devices are physically connected to machine 'minioncontroller2', and are addressable on the i2c bus at their respective i2c addresses.
- •
- dumbdevice6 is a 433 MHz wireless transmitter, also physically connected to minioncontroller2
- •
-
dumbdevice7 is an SMS gateway connected to machine minioncontroller3 via a
serial port.
Because of the way pillar works, each of the salt-minions that fork off the proxy minions will only see the keys specific to the proxies it will be handling. In other words, from the above example, only minioncontroller1 will see the connection information for dumbdevices 1, 2, and 3. Minioncontroller2 will see configuration data for dumbdevices 4, 5, and 6, and minioncontroller3 will be privy to dumbdevice7.
Also, in general, proxy-minions are lightweight, so the machines that run them could conceivably control a large number of devices. The example above is just to illustrate that it is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices.
Now our salt-minions know if they are supposed to spawn a proxy-minion process to control a particular device. That proxy-minion process will initiate a connection back to the master to enable control.
Proxymodules
A proxy module encapsulates all the code necessary to interface with a device. Proxymodules are located inside the salt.proxy module. At a minimum a proxymodule object must implement the following functions:
__virtual__(): This function performs the same duty that it does for other types of Salt modules. Logic goes here to determine if the module can be loaded, checking for the presence of Python modules on which the proxy depends. Returning False will prevent the module from loading.
init(opts): Perform any initialization that the device needs. This is a good place to bring up a persistent connection to a device, or authenticate to create a persistent authorization token.
id(opts): Returns a unique, unchanging id for the controlled device. This is the "name" of the device, and is used by the salt-master for targeting and key authentication.
shutdown(): Code to cleanly shut down or close a connection to a controlled device goes here. This function must exist, but can contain only the keyword pass if there is no shutdown logic required.
ping(): While not required, it is highly recommended that this function also be defined in the proxymodule. The code for ping should contact the controlled device and make sure it is really available.
Here is an example proxymodule used to interface to a very simple REST server. Code for the server is in the salt-contrib GitHub repository
This proxymodule enables "service" enumeration, starting, stopping, restarting, and status; "package" installation; and a ping.
# -*- coding: utf-8 -*- ''' This is a simple proxy-minion designed to connect to and communicate with the bottle-based web service contained in https://github.com/saltstack/salt-contrib/proxyminion_rest_example ''' from __future__ import absolute_import # Import python libs import logging import salt.utils.http HAS_REST_EXAMPLE = True # This must be present or the Salt loader won't load this module __proxyenabled__ = ['rest_sample'] # Variables are scoped to this module so we can have persistent data # across calls to fns in here. GRAINS_CACHE = {} DETAILS = {} # Want logging! log = logging.getLogger(__file__) # This does nothing, it's here just as an example and to provide a log # entry when the module is loaded. def __virtual__(): ''' Only return if all the modules are available ''' log.debug('rest_sample proxy __virtual__() called...') return True # Every proxy module needs an 'init', though you can # just put a 'pass' here if it doesn't need to do anything. def init(opts): log.debug('rest_sample proxy init() called...') # Save the REST URL DETAILS['url'] = opts['proxy']['url'] # Make sure the REST URL ends with a '/' if not DETAILS['url'].endswith('/'): DETAILS['url'] += '/' def id(opts): ''' Return a unique ID for this proxy minion. This ID MUST NOT CHANGE. If it changes while the proxy is running the salt-master will get really confused and may stop talking to this minion ''' r = salt.utils.http.query(opts['proxy']['url']+'id', decode_type='json', decode=True) return r['dict']['id'].encode('ascii', 'ignore') def grains(): ''' Get the grains from the proxied device ''' if not GRAINS_CACHE: r = salt.utils.http.query(DETAILS['url']+'info', decode_type='json', decode=True) GRAINS_CACHE = r['dict'] return GRAINS_CACHE def grains_refresh(): ''' Refresh the grains from the proxied device ''' GRAINS_CACHE = {} return grains() def service_start(name): ''' Start a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/start/'+name, decode_type='json', decode=True) return r['dict'] def service_stop(name): ''' Stop a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/stop/'+name, decode_type='json', decode=True) return r['dict'] def service_restart(name): ''' Restart a "service" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/restart/'+name, decode_type='json', decode=True) return r['dict'] def service_list(): ''' List "services" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/list', decode_type='json', decode=True) return r['dict'] def service_status(name): ''' Check if a service is running on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'service/status/'+name, decode_type='json', decode=True) return r['dict'] def package_list(): ''' List "packages" installed on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'package/list', decode_type='json', decode=True) return r['dict'] def package_install(name, **kwargs): ''' Install a "package" on the REST server ''' cmd = DETAILS['url']+'package/install/'+name if 'version' in kwargs: cmd += '/'+kwargs['version'] else: cmd += '/1.0' r = salt.utils.http.query(cmd, decode_type='json', decode=True) def package_remove(name): ''' Remove a "package" on the REST server ''' r = salt.utils.http.query(DETAILS['url']+'package/remove/'+name, decode_type='json', decode=True) return r['dict'] def package_status(name): ''' Check the installation status of a package on the REST server ''' r = 
salt.utils.http.query(DETAILS['url']+'package/status/'+name, decode_type='json', decode=True) return r['dict'] def ping(): ''' Is the REST server up? ''' r = salt.utils.http.query(DETAILS['url']+'ping', decode_type='json', decode=True) try: return r['dict'].get('ret', False) except Exception: return False def shutdown(opts): ''' For this proxy shutdown is a no-op ''' log.debug('rest_sample proxy shutdown() called...') pass
Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. By default, a proxy minion will have no grains set at all. Salt core code requires values for kernel, os, and os_family. To add them (and others) to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in. A sketch of such a file follows.
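This sketch assumes a hypothetical rest_sample proxy type; in a real grains module the values would be collected from the proxied device rather than hard-coded, and the keys shown here are illustrative only.

# -*- coding: utf-8 -*-
'''
Sketch of a grains file for a hypothetical rest_sample proxy type.
Values are hard-coded for illustration.
'''

__proxyenabled__ = ['rest_sample']


def kernel():
    return {'kernel': 'proxy'}


def os():
    return {'os': 'RestExampleOS'}


def os_family():
    return {'os_family': 'proxy'}


def location():
    # A real grains module would query the proxied device for this.
    return {'location': 'example-rack-42'}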
The __proxyenabled__ directive
Salt execution modules, by and large, cannot "automatically" work with proxied devices. Execution modules like pkg or sqlite3 have no meaning on a network switch or a housecat. For an execution module to be available to a proxy-minion, the __proxyenabled__ variable must be defined in the module as an array containing the names of all the proxytypes that this module can support. The array can contain the special value * to indicate that the module supports all proxies.
If no __proxyenabled__ variable is defined, then by default, the execution module is unavailable to any proxy.
Here is an excerpt from a module that was modified to support proxy-minions:
__proxyenabled__ = ['*']

[...]

def ping():
    if 'proxymodule' in __opts__:
        if 'ping' in __opts__['proxyobject'].__attr__():
            return __opts__['proxyobject'].ping()
        else:
            return False
    else:
        return True
And then in salt.proxy.rest_sample.py we find
def ping():
    '''
    Is the REST server up?
    '''
    r = salt.utils.http.query(DETAILS['url']+'ping', decode_type='json', decode=True)
    try:
        return r['dict'].get('ret', False)
    except Exception:
        return False
THE RAET TRANSPORT
NOTE: The RAET transport is in very early development. It is functional, but no promises are yet made as to its reliability or security. The encryption used has been audited and our tests show that RAET is reliable; with that said, we are still conducting more security audits and improving reliability. This document outlines the encryption used in RAET
New in version 2014.7.0.
The Reliable Asynchronous Event Transport, or RAET, is an alternative transport medium developed specifically with Salt in mind. It has been developed to allow queuing to happen at the application layer and comes with socket layer encryption. It also abstracts a great deal of control over the socket layer and makes it easy to bubble up errors and exceptions.
RAET also offers very powerful message routing capabilities, allowing for messages to be routed between processes on a single machine all the way up to processes on multiple machines. Messages can also be restricted, allowing processes to be sent messages of specific types from specific sources allowing for trust to be established.
Using RAET in Salt
Using RAET in Salt is easy; the main difference is that the core dependencies change. Instead of pycrypto, M2Crypto, ZeroMQ, and PyZMQ, the packages libsodium, libnacl, ioflo, and raet are required. Encryption is handled very cleanly by libnacl, while queueing and flow control are handled by ioflo. Distribution packages are forthcoming, but libsodium can be easily installed from source, and many distributions ship packages for it. The libnacl and ioflo packages can be easily installed from PyPI; distribution packages are in the works.
Once the new deps are installed the 2014.7 release or higher of Salt needs to be installed.
Once installed, modify the configuration files for the minion and master to set the transport to raet:
/etc/salt/master:
transport: raet
/etc/salt/minion:
transport: raet
Now start Salt as it would normally be started; the minion will connect to the master and share long term keys, which can then be managed via salt-key. Remote execution and Salt states will function in the same way as with Salt over ZeroMQ.
Limitations
The 2014.7 release of RAET is not complete! Syndic and multimaster support have not been completed yet and are slated for completion in the 2015.5.0 release.
Also, Salt-RAET allows for more control over the client, but these hooks have not been implemented yet; therefore the client still uses the same system as the ZeroMQ client. This means that the extra reliability that RAET exposes has not yet been implemented in the CLI client.
Why?
Customer and User Request
Why make an alternative transport for Salt? There are many reasons, but the primary motivation came from customer requests. Many large companies asked to run Salt over an alternative transport; the reasoning varied, from performance and scaling improvements to licensing concerns. These customers have partnered with SaltStack to make RAET a reality.
More Capabilities
RAET has been designed to give Salt greater communication capabilities. It has been designed to allow for development of features which our ZeroMQ topologies can't match.
Many of the proposed features are still under development and will be announced as they enter proof of concept phases, but these features include salt-fuse (a filesystem over Salt), salt-vt (a parallel, API-driven shell over the Salt transport), and many others.
RAET Reliability
RAET is reliable, hence the name (Reliable Asynchronous Event Transport).
The concern posed by some over RAET reliability is based on the fact that RAET uses UDP instead of TCP and UDP does not have built in reliability.
RAET itself implements the needed reliability layers that are not natively present in UDP; this allows RAET to dynamically optimize packet delivery in a way that keeps it both reliable and asynchronous.
RAET and ZeroMQ
When using RAET, ZeroMQ is not required; RAET is a complete networking replacement. It is noteworthy that RAET is not a ZeroMQ replacement in a general sense: the ZeroMQ constructs are not reproduced in RAET, but are instead implemented in a way that is specific to Salt's needs.
RAET is primarily an async communication layer over truly async connections, defaulting to UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer.
Salt is not dropping ZeroMQ support and has no immediate plans to do so.
Encryption
RAET uses Dan Bernstein's NACL encryption libraries and CurveCP handshake. The libnacl python binding binds to both libsodium and tweetnacl to execute the underlying cryptography. This allows us to completely rely on an externally developed cryptography system.
For more information on libsodium and CurveCP please see: http://doc.libsodium.org/ http://curvecp.org/
Programming Intro
WINDOWS SOFTWARE REPOSITORY
The Salt Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux.
It permits the installation of software using the installers on remote windows machines. In many senses, the operation is similar to that of the other package managers salt is aware of:
- •
- the pkg.installed and similar states work on Windows.
- •
- the pkg.install and similar module functions work on Windows.
- •
-
each windows machine needs to have pkg.refresh_db executed
against it to pick up the latest version of the package database.
High level differences to yum and apt are:
- •
- The repository metadata (sls files) is hosted through either salt or git.
- •
- Packages can be downloaded from within the salt repository, a git repository or from http(s) or ftp urls.
- •
- No dependencies are managed. Dependencies between packages need to be managed manually.
Operation
The install state/module function of the windows package manager works roughly as follows:
- 1.
- Execute pkg.list_pkgs and store the result
- 2.
- Check if any action needs to be taken. (i.e. compare required package and version against pkg.list_pkgs results)
- 3.
- If so, run the installer command.
- 4.
- Execute pkg.list_pkgs and compare to the result stored from before installation.
- 5.
-
Success/Failure/Changes will be reported based on the differences
between the original and final pkg.list_pkgs results.
If there are any problems in using the package manager it is likely to be due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.
Usage
By default, the Windows software repository is found at /srv/salt/win/repo. This can be changed in the master config file (default location is /etc/salt/master) by modifying the win_repo variable. Each piece of software should have its own directory containing the installers and a package definition file. This package definition file is a YAML file named init.sls.
The package definition file should look similar to this example for Firefox: /srv/salt/win/repo/firefox/init.sls
Firefox:
  '17.0.1':
    installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe'
    full_name: Mozilla Firefox 17.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: '-ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: '/S'
  '16.0.2':
    installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe'
    full_name: Mozilla Firefox 16.0.2 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: '-ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: '/S'
  '15.0.1':
    installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe'
    full_name: Mozilla Firefox 15.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: '-ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: '/S'
More examples can be found here: https://github.com/saltstack/salt-winrepo
The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate. Note: it is still possible to successfully install packages using pkg.install even if they don't match, which can make this hard to troubleshoot.
salt 'test-2008' pkg.list_pkgs
test-2008
    ----------
    7-Zip 9.20 (x64 edition):
        9.20.00.0
    Microsoft .NET Framework 4 Client Profile:
        4.0.30319,4.0.30319
    Microsoft .NET Framework 4 Extended:
        4.0.30319,4.0.30319
    Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
        9.0.21022
    Mozilla Firefox 17.0.1 (x86 en-US):
        17.0.1
    Mozilla Maintenance Service:
        17.0.1
    NSClient++ (x64):
        0.3.8.76
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0
If any of these preinstalled packages already exist in winrepo the full_name will be automatically renamed to their package name during the next update (running highstate or installing another package).
test-2008:
    ----------
    7zip:
        9.20.00.0
    Microsoft .NET Framework 4 Client Profile:
        4.0.30319,4.0.30319
    Microsoft .NET Framework 4 Extended:
        4.0.30319,4.0.30319
    Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
        9.0.21022
    Mozilla Maintenance Service:
        17.0.1
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0
    firefox:
        17.0.1
    nsclient:
        0.3.9.328
Add msiexec: True if using an MSI installer requiring the use of msiexec /i to install and msiexec /x to uninstall.
The install_flags and uninstall_flags are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding /? or /h when running the installer from the command line. A great resource for finding these silent install flags is the WPKG project's wiki.
7zip:
  9.20.00.0:
    installer: salt://win/repo/7zip/7z920-x64.msi
    full_name: 7-Zip 9.20 (x64 edition)
    reboot: False
    install_flags: '/qn /norestart'
    msiexec: True
    uninstaller: '{23170F69-40C1-2702-0920-000001000000}'
    uninstall_flags: '/qn /norestart'
Alternatively the uninstaller can also simply repeat the URL of the msi file.
7zip:
  9.20.00.0:
    installer: salt://win/repo/7zip/7z920-x64.msi
    full_name: 7-Zip 9.20 (x64 edition)
    reboot: False
    install_flags: '/qn /norestart'
    msiexec: True
    uninstaller: salt://win/repo/7zip/7z920-x64.msi
    uninstall_flags: '/qn /norestart'
Generate Repo Cache File
Once the sls file has been created, generate the repository cache file with the winrepo runner:
salt-run winrepo.genrepo
Then update the repository cache file on your minions, exactly how it's done for the Linux package managers:
salt '*' pkg.refresh_db
Install Windows Software
Now you can query the available version of Firefox using the Salt pkg module.
salt '*' pkg.available_version Firefox

{'Firefox': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
             '16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
             '17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}
As you can see, there are three versions of Firefox available for installation. You can refer to a software package by its name or by its full_name surrounded by single quotes.
salt '*' pkg.install 'Firefox'
The above line will install the latest version of Firefox.
salt '*' pkg.install 'Firefox' version=16.0.2
The above line will install version 16.0.2 of Firefox.
If a different version of the package is already installed it will be replaced with the version in winrepo (only if the package itself supports live updating).
You can also specify the full name:
salt '*' pkg.install 'Mozilla Firefox 17.0.1 (x86 en-US)'
Uninstall Windows Software
Uninstall software using the pkg module:
salt '*' pkg.remove 'Firefox'
salt '*' pkg.purge 'Firefox'
pkg.purge just executes pkg.remove on Windows. At some point in the future pkg.purge may direct the installer to remove all configs and settings for software packages that support that option.
Standalone Minion Salt Windows Repo Module
In order to facilitate managing a Salt Windows software repo with Salt on a Standalone Minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example: salt '*' winrepo.genrepo
Git Hosted Repo
Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on Github.com by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or FTP locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions. Browse the repo here: https://github.com/saltstack/salt-winrepo
Configure which git repos the master can search for package definitions by modifying or extending the win_gitrepos configuration option list in the master config.
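For example, the master config might list the default SaltStack repo alongside a private one (a sketch; the second URL is a placeholder):

# /etc/salt/master (sketch)
win_gitrepos:
  - 'https://github.com/saltstack/salt-winrepo.git'
  - 'https://git.example.com/mycompany/private-winrepo.git'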
Checkout each git repo in win_gitrepos, compile your package repository cache and then refresh each minion's package cache:
salt-run winrepo.update_git_repos
salt-run winrepo.genrepo
salt '*' pkg.refresh_db
Troubleshooting
Incorrect name/version
If the package seems to install properly but Salt reports a failure, it is likely you have a version or full_name mismatch.
Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and version exactly match what is installed.
Changes to sls files not being picked up
Ensure you have (re)generated the repository cache file and then updated the repository cache on the relevant minions:
salt-run winrepo.genrepo
salt 'MINION' pkg.refresh_db
Package management under Windows Server 2003
On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to get a full list of installed packages. If you don't have this, salt-minion can't report some installed software.
WINDOWS-SPECIFIC BEHAVIOUR
Salt is capable of managing Windows systems, however due to various differences between the operating systems, there are some things you need to keep in mind.
This document will contain any quirks that apply across Salt or generally across multiple module functions. Any Windows-specific behavior for particular module functions will be documented in the module function documentation. Therefore this document should be read in conjunction with the module function documentation.
Group parameter for files
Salt was originally written for managing Unix-based systems, and therefore the file module functions were designed around that security model. Rather than trying to shoehorn that model on to Windows, Salt ignores these parameters and makes non-applicable module functions unavailable instead.
One of the commonly ignored parameters is the group parameter for managing files. Under Windows, while files do have a 'primary group' property, this is rarely used. It generally has no bearing on permissions unless intentionally configured and is most commonly used to provide Unix compatibility (e.g. Services For Unix, NFS services).
Because of this, any file module functions that typically require a group do not require it under Windows. Attempts to directly use file module functions that operate on the group (e.g. file.chgrp) will return a pseudo-value and cause a log message to appear. No group parameters will be acted on.
If you do want to access and change the 'primary group' property and understand the implications, use the file.get_pgid or file.get_pgroup functions or the pgroup parameter on the file.chown module function.
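For example, reading these properties from the CLI might look like the following (the path is a placeholder):

salt '*' file.get_pgid C:\path\to\file
salt '*' file.get_pgroup C:\path\to\file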
Dealing with case-insensitive but case-preserving names
Windows is case-insensitive but case-preserving: it preserves the case of names, and it is this preserved form that is returned from system functions. This causes some issues with Salt because it assumes case-sensitive names. These issues generally occur in the state functions and can cause bizarre looking errors.
To avoid such issues, always pretend Windows is case-sensitive and use the right case for names, e.g. specify user=Administrator instead of user=administrator.
Follow issue 11801 for any changes to this behavior.
Dealing with various username forms
Salt does not understand the various forms that Windows usernames can come in; e.g. username, mydomain\username, and username@mydomain.tld can all refer to the same user. In fact, Salt generally only considers the raw username value, i.e. the username without the domain or host information.
Using these alternative forms will likely confuse Salt and cause odd errors to happen. Use only the raw username value in the correct case to avoid problems.
Follow issue 11801 for any changes to this behavior.
Specifying the None group
Each Windows system has a built-in _None_ group. This is the default 'primary group' for files for users not in a domain environment.
Unfortunately, the word _None_ has special meaning in Python - it is a special value indicating 'nothing', similar to null or nil in other languages.
To specify the None group, it must be specified in quotes, e.g. ./salt '*' file.chpgrp C:\path\to\file "'None'".
Symbolic link loops
Under Windows, if any symbolic link loops are detected or if there are too many levels of symlinks (defaults to 64), an error is always raised.
For some functions, this behavior is different to the behavior on Unix platforms. In general, avoid symlink loops on either platform.
Modifying security properties (ACLs) on files
There is no support in Salt for modifying ACLs, and therefore no support for changing file permissions, besides modifying the owner/user.
SALT CLOUD
Getting Started
Salt Cloud is built-in to Salt and is configured on and executed from your Salt Master.
Define a Profile
The first step is to add the credentials for your cloud provider. Credentials and provider settings are stored in provider configuration files. Provider configurations contain the details needed to connect, and any global options that you want set on your cloud minions (such as the location of your Salt Master).
On your Salt Master, browse to /etc/salt/cloud.providers.d/ and create a file called <provider>.provider.conf, replacing <provider> with ec2, softlayer, and so on. The name helps you identify the contents, and is not important as long as the file ends in .conf.
Next, browse to the Provider specifics and add any required settings for your provider to this file. Here is an example for Amazon EC2:
my-ec2:
  provider: ec2

  # Set the EC2 access credentials (see below)
  #
  id: 'HJGRYCILJLKJYG'
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default

  # Optional: Set up the location of the Salt Master
  #
  minion:
    master: saltmaster.example.com
The required configuration varies between providers so make sure you read the provider specifics.
List Cloud Provider Options
You can now query the cloud provider you configured for available locations, images, and sizes. This information is used when you set up VM profiles.
salt-cloud --list-locations <provider_name>  # my-ec2 in the previous example
salt-cloud --list-images <provider_name>
salt-cloud --list-sizes <provider_name>
Replace <provider_name> with the name of the provider configuration you defined.
Create VM Profiles
On your Salt Master, browse to /etc/salt/cloud.profiles.d/ and create a file called <provider>.profiles.conf, replacing <provider> with ec2, softlayer, and so on. The file must end in .conf.
You can now add any custom profiles you'd like to define to this file. Here are a few examples:
micro_ec2:
  provider: my-ec2
  image: ami-d514f291
  size: t1.micro

medium_ec2:
  provider: my-ec2
  image: ami-d514f291
  size: m3.medium

large_ec2:
  provider: my-ec2
  image: ami-d514f291
  size: m3.large
Notice that the provider in each profile matches the provider name that we defined. That is how Salt Cloud knows how to connect and create a VM with these attributes.
Create VMs
VMs are created by calling salt-cloud with the following options:
salt-cloud -p <profile> <name1> <name2> ...
For example:
salt-cloud -p micro_ec2 minion1 minion2
Destroy VMs
Add a -d and the minion name you provided to destroy:
salt-cloud -d minion1 minion2
Query VMs
You can view details about the VMs you've created using --query:
salt-cloud --query
Using Salt Cloud
salt-cloud
Provision virtual machines in the cloud with Salt
Synopsis
salt-cloud -m /etc/salt/cloud.map
salt-cloud -m /etc/salt/cloud.map NAME
salt-cloud -m /etc/salt/cloud.map NAME1 NAME2
salt-cloud -p PROFILE NAME
salt-cloud -p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6
Description
Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
Execution Options
- -L LOCATION, --location=LOCATION
- Specify which region to connect to.
- -a ACTION, --action=ACTION
- Perform an action that may be specific to this cloud provider. This argument requires one or more instance names to be specified.
- -f <FUNC-NAME> <PROVIDER>, --function=<FUNC-NAME> <PROVIDER>
- Perform a function that may be specific to this cloud provider, that does not apply to an instance. This argument requires a provider to be specified (i.e.: nova).
- -p PROFILE, --profile=PROFILE
- Select a single profile to build the named cloud VMs from. The profile must be defined in the specified profiles file.
- -m MAP, --map=MAP
- Specify a map file to use. If used without any other options, this option will ensure that all of the mapped VMs are created. If the named VM already exists then it will be skipped.
- -H, --hard
- When specifying a map file, the default behavior is to ensure that all of the VMs specified in the map file are created. If the --hard option is set, then any VMs that exist on configured cloud providers that are not specified in the map file will be destroyed. Be advised that this can be a destructive operation and should be used with care.
- -d, --destroy
- Pass in the name(s) of VMs to destroy, salt-cloud will search the configured cloud providers for the specified names and destroy the VMs. Be advised that this is a destructive operation and should be used with care. Can be used in conjunction with the -m option to specify a map of VMs to be deleted.
- -P, --parallel
-
Normally when building many cloud VMs they are executed serially. The -P
option will run each cloud VM build in a separate process, allowing for
large groups of VMs to be built at once.
Be advised that some cloud providers' systems don't seem to be well suited for this influx of VM creation. When creating large groups of VMs, watch the cloud provider carefully.
- -u, --update-bootstrap
- Update salt-bootstrap to the latest develop version on GitHub.
- -y, --assume-yes
- Default yes in answer to all confirmation questions.
- -k, --keep-tmp
- Do not remove files from /tmp/ after deploy.sh finishes.
- --show-deploy-args
- Include the options used to deploy the minion in the data returned.
- --script-args=SCRIPT_ARGS
- Script arguments to be fed to the bootstrap script when deploying the VM.
Query Options
- -Q, --query
- Execute a query and return some information about the nodes running on configured cloud providers
- -F, --full-query
- Execute a query and print out all available information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.
- -S, --select-query
- Execute a query and print out selected information about all cloud VMs. Can be used in conjunction with -m to display only information about the specified map.
- --list-providers
- Display a list of configured providers.
- --list-profiles
-
New in version 2014.7.0.
Display a list of configured profiles. Pass in a cloud provider to view the provider's associated profiles, such as digital_ocean, or pass in all to list all the configured profiles.
Cloud Providers Listings
- --list-locations=LIST_LOCATIONS
- Display a list of locations available in configured cloud providers. Pass the cloud provider that available locations are desired on, aka "linode", or pass "all" to list locations for all configured cloud providers
- --list-images=LIST_IMAGES
- Display a list of images available in configured cloud providers. Pass the cloud provider that available images are desired on, aka "linode", or pass "all" to list images for all configured cloud providers
- --list-sizes=LIST_SIZES
- Display a list of sizes available in configured cloud providers. Pass the cloud provider that available sizes are desired on, aka "AWS", or pass "all" to list sizes for all configured cloud providers
Cloud Credentials
- --set-password=<USERNAME> <PROVIDER>
- Configure password for a cloud provider and save it to the keyring. PROVIDER can be specified with or without a driver, for example: "--set-password bob rackspace" or more specific "--set-password bob rackspace:openstack" DEPRECATED!
Output Options
- --out
-
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.
If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.
- --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
- Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
- --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
- Write the output to the specified file.
- --no-color
- Disable all colored output
- --force-color
-
Force colored output
NOTE: When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
Examples
To create 4 VMs named web1, web2, db1, and db2 from specified profiles:
salt-cloud -p fedora_rackspace web1 web2 db1 db2
To read in a map file and create all VMs specified therein:
salt-cloud -m /path/to/cloud.map
To read in a map file and create all VMs specified therein in parallel:
salt-cloud -m /path/to/cloud.map -P
To delete any VMs specified in the map file:
salt-cloud -m /path/to/cloud.map -d
To delete any VMs NOT specified in the map file:
salt-cloud -m /path/to/cloud.map -H
To display the status of all VMs specified in the map file:
salt-cloud -m /path/to/cloud.map -Q
See also
salt-cloud(7) salt(7) salt-master(1) salt-minion(1)
Salt Cloud basic usage
Salt Cloud needs at least one configured Provider and Profile to be functional.
Creating a VM
To create a VM with Salt Cloud, use the command:
salt-cloud -p <profile> name_of_vm
Assuming there is a profile configured as following:
fedora_rackspace:
  provider: rackspace
  image: Fedora 17
  size: 256 server
  script: bootstrap-salt
Then, the command to create a new VM named fedora_http_01 is:
salt-cloud -p fedora_rackspace fedora_http_01
Destroying a VM
To destroy a VM created by Salt Cloud, use the command:
salt-cloud -d name_of_vm
For example, to delete the VM created in the above example, use:
salt-cloud -d fedora_http_01
VM Profiles
Salt Cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to /etc/salt/cloud.profiles and is a YAML configuration. The syntax for declaring profiles is simple:
fedora_rackspace:
  provider: rackspace
  image: Fedora 17
  size: 256 server
  script: bootstrap-salt
It should be noted that the script option defaults to bootstrap-salt, and does not normally need to be specified. Further examples in this document will not show the script option.
A few key pieces of information need to be declared and can change based on the public cloud provider. A number of additional parameters can also be inserted:
centos_rackspace:
  provider: rackspace
  image: CentOS 6.2
  size: 1024 server
  minion:
    master: salt.example.com
    append_domain: webs.example.com
    grains:
      role: webserver
The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes use the following command:
salt-cloud --list-images openstack
salt-cloud --list-sizes openstack
Some parameters can be specified in the main Salt cloud configuration file and then are applied to all cloud profiles. For instance if only a single cloud provider is being used then the provider option can be declared in the Salt cloud configuration file.
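A minimal sketch of such a global default (the provider value is illustrative; the profiles can then omit their own provider lines):

# /etc/salt/cloud (sketch)
provider: rackspace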
Multiple Configuration Files
In addition to /etc/salt/cloud.profiles, profiles can also be specified in any file matching cloud.profiles.d/*.conf, which is a sub-directory relative to the profiles configuration file (with the above configuration file as an example, /etc/salt/cloud.profiles.d/*.conf). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems.
Larger Example
rhel_ec2:
  provider: ec2
  image: ami-e565ba8c
  size: t1.micro
  minion:
    cheese: edam

ubuntu_ec2:
  provider: ec2
  image: ami-7e2da54e
  size: t1.micro
  minion:
    cheese: edam

ubuntu_rackspace:
  provider: rackspace
  image: Ubuntu 12.04 LTS
  size: 256 server
  minion:
    cheese: edam

fedora_rackspace:
  provider: rackspace
  image: Fedora 17
  size: 256 server
  minion:
    cheese: edam

cent_linode:
  provider: linode
  image: CentOS 6.2 64bit
  size: Linode 512

cent_gogrid:
  provider: gogrid
  image: 12834
  size: 512MB

cent_joyent:
  provider: joyent
  image: centos-6
  size: Small 1GB
Cloud Map File
A number of options exist when creating virtual machines. They can be managed directly from profiles and the command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles.
Map files have a simple format, specify a profile and then a list of virtual machines to make from said profile:
fedora_small:
  - web1
  - web2
  - web3

fedora_high:
  - redis1
  - redis2
  - redis3

cent_high:
  - riak1
  - riak2
  - riak3
This map file can then be called to roll out all of these virtual machines. Map files are called from the salt-cloud command with the -m option:
$ salt-cloud -m /path/to/mapfile
Remember that, as with direct profile provisioning, the -P option can be passed to create the virtual machines in parallel:
$ salt-cloud -m /path/to/mapfile -P
NOTE: Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances.
A map file can also be enforced to represent the total state of a cloud deployment by using the --hard option. When using the hard option, any VMs that exist but are not specified in the map file will be destroyed:
$ salt-cloud -m /path/to/mapfile -P -H
Be careful with this argument, it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file.
enable_hard_maps: True
A map file can include grains and minion configuration options:
fedora_small:
  - web1:
      minion:
        log_level: debug
      grains:
        cheese: tasty
        omelet: du fromage
  - web2:
      minion:
        log_level: warn
      grains:
        cheese: more tasty
        omelet: with peppers
A map file may also be used with the various query options:
$ salt-cloud -m /path/to/mapfile -Q
{'ec2': {'web1': {'id': 'i-e6aqfegb',
                  'image': None,
                  'private_ips': [],
                  'public_ips': [],
                  'size': None,
                  'state': 0}},
 'web2': {'Absent'}}
...or with the delete option:
$ salt-cloud -m /path/to/mapfile -d
The following virtual machines are set to be destroyed:
  web1
  web2

Proceed? [N/y]
WARNING: Specifying Nodes with Maps on the Command Line. Specifying the name of a node or nodes with the maps options on the command line is not supported. This is especially important to remember when using --destroy with maps; salt-cloud will ignore any arguments passed in which are not directly relevant to the map file. When using --destroy with a map, every node in the map file will be deleted! Maps don't provide any useful information for destroying individual nodes, and should not be used to destroy a subset of a map.
Setting up New Salt Masters
Bootstrapping a new master in the map is as simple as:
fedora_small:
  - web1:
      make_master: True
  - web2
  - web3
Notice that ALL bootstrapped minions from the map will answer to the newly created salt-master.
To make any of the bootstrapped minions answer to the bootstrapping salt-master as opposed to the newly created salt-master, as an example:
fedora_small:
  - web1:
      make_master: True
      minion:
        master: <the local master ip address>
        local_master: True
  - web2
  - web3
The above says the minion running on the newly created salt-master responds to the local master, i.e., the master used to bootstrap these VMs.
Another example:
fedora_small:
  - web1:
      make_master: True
  - web2
  - web3:
      minion:
        master: <the local master ip address>
        local_master: True
The above example makes the web3 minion answer to the local master, not the newly created master.
Cloud Actions
Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on:
$ salt-cloud -a reboot vm_name
$ salt-cloud -a reboot vm1 vm2 vm2
Or you may specify a map which includes all VMs to perform the action on:
$ salt-cloud -a reboot -m /path/to/mapfile
The following is a list of actions currently supported by salt-cloud:
all providers:
  - reboot

ec2:
  - start
  - stop

joyent:
  - stop
Another useful reference for viewing more salt-cloud actions is the Salt Cloud Feature Matrix.
Cloud Functions
Cloud functions work much the same way as cloud actions, except that they don't perform an operation on a specific instance, and so do not need a machine name to be specified. However, since they perform an operation on a specific cloud provider, that provider must be specified.
$ salt-cloud -f show_image ec2 image=ami-fd20ad94
There are three universal salt-cloud functions that are extremely useful for gathering information about instances on a provider basis:
- •
- list_nodes: Returns some general information about the instances for the given provider.
- •
- list_nodes_full: Returns all information about the instances for the given provider.
- •
-
list_nodes_select: Returns select information about the instances for the given provider.
$ salt-cloud -f list_nodes linode
$ salt-cloud -f list_nodes_full linode
$ salt-cloud -f list_nodes_select linode
Another useful reference for viewing salt-cloud functions is the Salt Cloud Feature Matrix.
Core Configuration
Install Salt Cloud
Salt Cloud is now part of Salt proper. It was merged in as of Salt version 2014.1.0.
On Ubuntu, install Salt Cloud by using the following commands:
sudo add-apt-repository ppa:saltstack/salt
sudo apt-get install salt-cloud
If using Salt Cloud on OS X, curl-ca-bundle must be installed. Presently, this package is not available via brew, but it is available using MacPorts:
sudo port install curl-ca-bundle
Salt Cloud depends on apache-libcloud. Libcloud can be installed via pip with pip install apache-libcloud.
Installing Salt Cloud for development
Installing Salt for development enables Salt Cloud development as well; just make sure apache-libcloud is installed as described in the previous paragraph.
See these instructions: Installing Salt for development.
Core Configuration
A number of core configuration options and some options that are global to the VM profiles can be set in the cloud configuration file. By default this file is located at /etc/salt/cloud.
Thread Pool Size
When salt cloud is operating in parallel mode via the -P argument, you can control the thread pool size by specifying the pool_size parameter with a positive integer value.
By default, the thread pool size will be set to the number of VMs that salt cloud is operating on.
pool_size: 10
Minion Configuration
The default minion configuration is set up in this file. Minions created by salt-cloud derive their configuration from this file. Almost all parameters found in Configuring the Salt Minion can be used here.
minion:
  master: saltmaster.example.com
In particular, this is where to specify the location of the salt master and its listening port, if the port is not set to the default.
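For example, assuming the master has been configured to listen on a non-default port (4507 here is purely illustrative), the minion block might look like:

minion:
  master: saltmaster.example.com
  master_port: 4507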
Cloud Configuration Syntax
The data specific to interacting with public clouds is set up here.
Cloud provider configuration syntax can live in several places. The first is in /etc/salt/cloud:
# /etc/salt/cloud
providers:
  my-aws-migrated-config:
    id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    provider: aws
Cloud provider configuration data can also be housed in /etc/salt/cloud.providers or any file matching /etc/salt/cloud.providers.d/*.conf. All files in any of these locations will be parsed for cloud provider data.
Using the example configuration above:
# /etc/salt/cloud.providers
# or could be /etc/salt/cloud.providers.d/*.conf
my-aws-config:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: aws
NOTE: Salt Cloud provider configurations within /etc/salt/cloud.providers.d/ should not specify the providers starting key.
It is also possible to have multiple cloud configuration blocks within the same alias block. For example:
production-config:
  - id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    provider: aws

  - user: example_user
    apikey: 123984bjjas87034
    provider: rackspace
However, using this configuration method requires a change with profile configuration blocks. The provider alias needs to have the provider key value appended as in the following example:
rhel_aws_dev:
  provider: production-config:aws
  image: ami-e565ba8c
  size: t1.micro

rhel_aws_prod:
  provider: production-config:aws
  image: ami-e565ba8c
  size: High-CPU Extra Large Instance

database_prod:
  provider: production-config:rackspace
  image: Ubuntu 12.04 LTS
  size: 256 server
Notice that because of the multiple entries, one has to be explicit about the provider alias and name; from the above example, production-config:aws.
This data interacts with the salt-cloud binary regarding its --list-locations, --list-images, and --list-sizes options, which need a cloud provider as an argument. The argument used should be the configured cloud provider alias. If the provider alias has multiple entries, <provider-alias>:<provider-name> should be used.
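As an illustration, using the alias names from the examples above, those query options might be invoked like this:

# A provider alias with a single entry
salt-cloud --list-sizes my-aws-config

# A provider alias with multiple entries
salt-cloud --list-images production-config:aws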
To allow for a more extensible configuration, --providers-config, which defaults to /etc/salt/cloud.providers, was added to the CLI parser. It allows the providers' configuration to be added on a per-file basis.
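For instance, pointing Salt Cloud at an alternate providers file (the path below is only an example) might look like:

salt-cloud --providers-config=/etc/salt/cloud.providers.d/production.conf --list-locations production-config:aws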
Pillar Configuration
It is possible to configure cloud providers using pillars. This is only used when inside the cloud module. You can set up a variable called cloud that contains your profile and provider to pass that information to the cloud servers instead of having to copy the full configuration to every minion. In your pillar file, you would use something like this:
cloud:
  ssh_key_name: saltstack
  ssh_key_file: /root/.ssh/id_rsa
  update_cachedir: True
  diff_cache_events: True
  change_password: True

  providers:
    my-nova:
      identity_url: https://identity.api.rackspacecloud.com/v2.0/
      compute_region: IAD
      user: myuser
      api_key: apikey
      tenant: 123456
      provider: nova

    my-openstack:
      identity_url: https://identity.api.rackspacecloud.com/v2.0/tokens
      user: user2
      apikey: apikey2
      tenant: 654321
      compute_region: DFW
      provider: openstack
      compute_name: cloudServersOpenStack

  profiles:
    ubuntu-nova:
      provider: my-nova
      size: performance1-8
      image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
      script_args: git develop

    ubuntu-openstack:
      provider: my-openstack
      size: performance1-8
      image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
      script_args: git develop
Cloud Configurations
Rackspace
Rackspace cloud requires two configuration options, a user and an apikey:
my-rackspace-config:
  user: example_user
  apikey: 123984bjjas87034
  provider: rackspace
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-rackspace-config.
Amazon AWS
A number of configuration options are required for Amazon AWS including id, key, keyname, securitygroup, and private_key:
my-aws-quick-start:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: aws

my-aws-default:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: default
  private_key: /root/test.pem
  provider: aws
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-aws-quick-start or provider: my-aws-default.
Linode
Linode requires a single API key, but the default root password also needs to be set:
my-linode-config:
  apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
  password: F00barbaz
  ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
  ssh_key_file: ~/.ssh/id_ed25519
  provider: linode
The password needs to be 8 characters and contain lowercase, uppercase, and numbers.
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-linode-config
Joyent Cloud
The Joyent cloud requires three configuration parameters: The username and password that are used to log into the Joyent system, as well as the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.
my-joyent-config:
  user: fred
  password: saltybacon
  private_key: /root/joyent.pem
  provider: joyent
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-joyent-config
GoGrid
To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid:
my-gogrid-config:
  apikey: asdff7896asdh789
  sharedsecret: saltybacon
  provider: gogrid
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-gogrid-config.
OpenStack
OpenStack configuration differs between providers, and at the moment several options need to be specified. This module has been officially tested against the HP and the Rackspace implementations, and some examples are provided for both.
# For HP
my-openstack-hp-config:
  identity_url: 'https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/'
  compute_name: Compute
  compute_region: 'az-1.region-a.geo-1'
  tenant: myuser-tenant1
  user: myuser
  ssh_key_name: mykey
  ssh_key_file: '/etc/salt/hpcloud/mykey.pem'
  password: mypass
  provider: openstack

# For Rackspace
my-openstack-rackspace-config:
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
  compute_name: cloudServersOpenStack
  protocol: ipv4
  compute_region: DFW
  user: myuser
  tenant: 5555555
  password: mypass
  provider: openstack
If you have an API key for your provider, it may be specified instead of a password:
my-openstack-hp-config:
  apikey: 901d3f579h23c8v73q9

my-openstack-rackspace-config:
  apikey: 901d3f579h23c8v73q9
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be either provider: my-openstack-hp-config or provider: my-openstack-rackspace-config.
You will certainly need to configure the user, tenant, and either password or apikey.
If your OpenStack instances only have private IP addresses and a CIDR range of private addresses is not reachable from the salt-master, you may set your preference to have Salt ignore it:
my-openstack-config:
  ignore_cidr: 192.168.0.0/16
For an in-house OpenStack Essex installation, libcloud needs the service_type:
my-openstack-config:
  identity_url: 'http://control.openstack.example.org:5000/v2.0/'
  compute_name: Compute Service
  service_type: compute
DigitalOcean
Using Salt for DigitalOcean requires a personal_access_token when using the APIv2 driver shown below (the original driver used a client_key and an api_key). These can be found in the DigitalOcean web interface, in the "My Settings" section, under the API Access tab.
my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  location: New York 1
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-digitalocean-config.
Parallels
Using Salt with Parallels requires a user, password and URL. These can be obtained from your cloud provider.
my-parallels-config:
  user: myuser
  password: xyzzy
  url: https://api.cloud.xmission.com:4465/paci/v1.0/
  provider: parallels
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-parallels-config.
Proxmox
Using Salt with Proxmox requires a user, password, and URL. These can be obtained from your cloud provider. Both PAM and PVE users can be used.
my-proxmox-config:
  provider: proxmox
  user: saltcloud@pve
  password: xyzzy
  url: your.proxmox.host
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: my-proxmox-config.
LXC
The lxc driver uses saltify to install salt and attach the lxc container as a new lxc minion. Provisioning is handled over SSH, in the same way as a bare-metal machine. You can also destroy those containers via this driver.
devhost10-lxc:
  target: devhost10
  provider: lxc
And in the map file:
devhost10-lxc:
  provider: devhost10-lxc
  from_container: ubuntu
  backing: lvm
  sudo: True
  size: 3g
  ip: 10.0.3.9
  minion:
    master: 10.5.0.1
    master_port: 4506
  lxc_conf:
    - lxc.utsname: superlxc
NOTE: In the cloud profile that uses this provider configuration, the syntax for the provider required field would be provider: devhost10-lxc.
Saltify
The Saltify driver is a new, experimental driver designed to install Salt on a remote machine, virtual or bare metal, using SSH. This driver is useful for provisioning machines which are already installed, but not Salted. For more information about using this driver and for configuration examples, please see the Getting Started with Saltify documentation.
Extending Profiles and Cloud Providers Configuration
As of 0.8.7, the option to extend both the profiles and cloud providers configuration and avoid duplication was added. The extends feature works on the current profiles configuration, but, regarding the cloud providers configuration, only works in the new syntax and respective configuration files, i.e. /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/*.conf.
NOTE: Extending cloud profiles and providers is not recursive. For example, a profile that is extended by a second profile is possible, but the second profile cannot be extended by a third profile.
Also, if a profile (or provider) is extending another profile and each contains a list of values, the lists from the extending profile will override the list from the original profile. The lists are not merged together.
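As a brief, hypothetical illustration of that override behavior (the profile names and security groups below are invented for this sketch):

base-profile:
  provider: my-ec2-config
  securitygroup:
    - default
    - web

db-profile:
  securitygroup:
    - db
  extends: base-profile

# db-profile ends up with securitygroup: ['db'], not a merge of both lists.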
Extending Profiles
Some example usage of extends with profiles. Consider /etc/salt/cloud.profiles containing:
development-instances:
  provider: my-ec2-config
  size: t1.micro
  ssh_username: ec2_user
  securitygroup:
    - default
  deploy: False

Amazon-Linux-AMI-2012.09-64bit:
  image: ami-54cf5c3d
  extends: development-instances

Fedora-17:
  image: ami-08d97e61
  extends: development-instances

CentOS-5:
  provider: my-aws-config
  image: ami-09b61d60
  extends: development-instances
The above configuration, once parsed, would generate the following profiles data:
[{'deploy': False,
  'image': 'ami-08d97e61',
  'profile': 'Fedora-17',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'image': 'ami-09b61d60',
  'profile': 'CentOS-5',
  'provider': 'my-aws-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'image': 'ami-54cf5c3d',
  'profile': 'Amazon-Linux-AMI-2012.09-64bit',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'},
 {'deploy': False,
  'profile': 'development-instances',
  'provider': 'my-ec2-config',
  'securitygroup': ['default'],
  'size': 't1.micro',
  'ssh_username': 'ec2_user'}]
Extending Providers
Some example usage of extends within the cloud providers configuration. Consider /etc/salt/cloud.providers containing:
my-develop-envs:
  - id: HJGRYCILJLKJYG
    key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
    keyname: test
    securitygroup: quick-start
    private_key: /root/test.pem
    location: ap-southeast-1
    availability_zone: ap-southeast-1b
    provider: aws

  - user: myuser@mycorp.com
    password: mypass
    ssh_key_name: mykey
    ssh_key_file: '/etc/salt/ibm/mykey.pem'
    location: Raleigh
    provider: ibmsce

my-productions-envs:
  - extends: my-develop-envs:ibmsce
    user: my-production-user@mycorp.com
    location: us-east-1
    availability_zone: us-east-1
The above configuration, once parsed, would generate the following providers data:
'providers': {
    'my-develop-envs': [
        {'availability_zone': 'ap-southeast-1b',
         'id': 'HJGRYCILJLKJYG',
         'key': 'kdjgfsgm;woormgl/aserigjksjdhasdfgn',
         'keyname': 'test',
         'location': 'ap-southeast-1',
         'private_key': '/root/test.pem',
         'provider': 'aws',
         'securitygroup': 'quick-start'},
        {'location': 'Raleigh',
         'password': 'mypass',
         'provider': 'ibmsce',
         'ssh_key_file': '/etc/salt/ibm/mykey.pem',
         'ssh_key_name': 'mykey',
         'user': 'myuser@mycorp.com'}
    ],
    'my-productions-envs': [
        {'availability_zone': 'us-east-1',
         'location': 'us-east-1',
         'password': 'mypass',
         'provider': 'ibmsce',
         'ssh_key_file': '/etc/salt/ibm/mykey.pem',
         'ssh_key_name': 'mykey',
         'user': 'my-production-user@mycorp.com'}
    ]
}
Windows Configuration
Spinning up Windows Minions
It is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images.
Requirements
Salt Cloud makes use of impacket and winexe to set up the Windows Salt Minion installer.
impacket is usually available as either the impacket or the python-impacket package, depending on the distribution. More information on impacket can be found at the project home:
- impacket project home
winexe is less commonly available in distribution-specific repositories. However, it is currently being built for various distributions in 3rd party channels:
- RPMs at pbone.net
- OpenSuse Build Service
Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com:
- SaltStack Download Area
Firewall Settings
Because Salt Cloud makes use of smbclient and winexe, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or the Windows firewall is disabled.
If supported by the cloud provider, a PowerShell script may be used to open up this port automatically, using the cloud provider's userdata. The following script would open up port 445, and apply the changes:
<powershell>
New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
Set-Item (dir wsman:\localhost\Listener\*\Port -Recurse).pspath 445 -Force
Restart-Service winrm
</powershell>
For EC2, this script may be saved as a file, and specified in the provider or profile configuration as userdata_file. For instance:
userdata_file: /etc/salt/windows-firewall.ps1
Configuration
Configuration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). For example:
Setting the installer in /etc/salt/cloud.providers:
my-softlayer:
  provider: softlayer
  user: MYUSER1138
  apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'
  minion:
    master: saltmaster.example.com
  win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
  win_username: Administrator
  win_password: letmein
The default Windows user is Administrator, and the default Windows password is blank.
Auto-Generated Passwords on EC2
On EC2, when the win_password is set to auto, Salt Cloud will query EC2 for an auto-generated password. This password is expected to take at least 4 minutes to generate, adding additional time to the deploy process.
When the EC2 API is queried for the auto-generated password, it will be returned in a message encrypted with the specified keyname. This requires that the appropriate private_key file is also specified. Such a profile configuration might look like:
windows-server-2012:
  provider: my-ec2-config
  image: ami-c49c0dac
  size: m1.small
  securitygroup: windows
  keyname: mykey
  private_key: /root/mykey.pem
  userdata_file: /etc/salt/windows-firewall.ps1
  win_installer: /root/Salt-Minion-2014.7.0-AMD64-Setup.exe
  win_username: Administrator
  win_password: auto
Cloud Provider Specifics
Getting Started With Aliyun ECS
The Aliyun ECS (Elastic Compute Service) is one of the most popular public cloud providers in China. This cloud provider can be used to manage Aliyun instances using salt-cloud.
Dependencies
This driver requires the Python requests library to be installed.
Configuration
Using Salt for Aliyun ECS requires an Aliyun access key ID and key secret. These can be found in the Aliyun web interface, in the "User Center" section, under the "My Service" tab.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-aliyun-config:
  # aliyun Access Key ID
  id: wDGEwGregedg3435gDgxd
  # aliyun Access Key Secret
  key: GDd45t43RDBTrkkkg43934t34qT43t4dgegerGEgg
  location: cn-qingdao
  provider: aliyun
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
aliyun_centos:
  provider: my-aliyun-config
  size: ecs.t1.small
  location: cn-qingdao
  securitygroup: G1989096784427999
  image: centos6u3_64_20G_aliaegis_20130816.vhd
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        ecs.c1.large:
            ----------
            CpuCoreCount: 8
            InstanceTypeId: ecs.c1.large
            MemorySize: 16.0
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        centos5u8_64_20G_aliaegis_20131231.vhd:
            ----------
            Architecture: x86_64
            Description:
            ImageId: centos5u8_64_20G_aliaegis_20131231.vhd
            ImageName: CentOS 5.8 64位
            ImageOwnerAlias: system
            ImageVersion: 1.0
            OSName: CentOS 5.8 64位
            Platform: CENTOS5
            Size: 20
            Visibility: public
...SNIP...
Locations can be obtained using the --list-locations option for the salt-cloud command:
# salt-cloud --list-locations my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        cn-beijing:
            ----------
            LocalName: 北京
            RegionId: cn-beijing
        cn-hangzhou:
            ----------
            LocalName: 杭州
            RegionId: cn-hangzhou
        cn-hongkong:
            ----------
            LocalName: 香港
            RegionId: cn-hongkong
        cn-qingdao:
            ----------
            LocalName: 青岛
            RegionId: cn-qingdao
Security groups can be obtained using the -f list_securitygroup option for the salt-cloud command:
# salt-cloud --location=cn-qingdao -f list_securitygroup my-aliyun-config
my-aliyun-config:
    ----------
    aliyun:
        ----------
        G1989096784427999:
            ----------
            Description: G1989096784427999
            SecurityGroupId: G1989096784427999
NOTE: Aliyun ECS REST API documentation is available from Aliyun ECS API.
Getting Started With Azure
New in version 2014.1.0.
Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed.
More information about Azure is located at http://www.windowsazure.com/.
Dependencies
- The Azure Python SDK
- A Microsoft Azure account
- OpenSSL (to generate the certificates)
- Salt
Configuration
Set up the provider config at /etc/salt/cloud.providers.d/azure.conf:
# Note: This example is for /etc/salt/cloud.providers.d/azure.conf
my-azure-config:
  provider: azure
  subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
  certificate_path: /etc/salt/azure.pem

  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Optional
  management_host: management.core.windows.net
The certificate used must be generated by the user. OpenSSL can be used to create the management certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally.
To create the .pem file, execute the following command:
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/salt/azure.pem -out /etc/salt/azure.pem
To create the .cer file, execute the following command:
openssl x509 -inform pem -in /etc/salt/azure.pem -outform der -out /etc/salt/azure.cer
After creating these files, the .cer file will need to be uploaded to Azure via the "Upload a Management Certificate" action of the "Management Certificates" tab within the "Settings" section of the management portal.
Optionally, a management_host may be configured, if necessary for the region.
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles:
azure-ubuntu:
  provider: my-azure-config
  image: 'b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_3-LTS-amd64-server-20131003-en-us-30GB'
  size: Small
  location: 'East US'
  ssh_username: azureuser
  ssh_password: verybadpass
  slot: production
  media_link: 'http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds'
These options are described in more detail below. Once configured, the profile can be realized with a salt command:
salt-cloud -p azure-ubuntu newinstance
This will create a salt minion instance named newinstance in Azure. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
salt newinstance test.ping
Profile Options
The following options are currently available for Azure.
provider
The name of the provider as configured in /etc/salt/cloud.providers.d/azure.conf.
image
The name of the image to use to create a VM. Available images can be viewed using the following command:
salt-cloud --list-images my-azure-config
size
The name of the size to use to create a VM. Available sizes can be viewed using the following command:
salt-cloud --list-sizes my-azure-config
location
The name of the location to create a VM in. Available locations can be viewed using the following command:
salt-cloud --list-locations my-azure-config
ssh_username
The user to use to log into the newly-created VM to install Salt.
ssh_password
The password to use to log into the newly-created VM to install Salt.
slot
The environment to which the hosted service is deployed. Valid values are staging or production. When set to production, the resulting URL of the new VM will be <vm_name>.cloudapp.net. When set to staging, the resulting URL will contain a generated hash instead.
media_link
This is the URL of the container that will store the disk that this VM uses. Currently, this container must already exist. If a VM has previously been created in the associated account, a container should already exist. In the web interface, go into the Storage area and click one of the available storage selections. Click the Containers link, and then copy the URL from the container that will be used. It generally looks like:
http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds
Show Instance
This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.
salt-cloud -a show_instance myinstance
Getting Started With DigitalOcean
DigitalOcean is a public cloud provider that specializes in Linux instances.
Configuration
Starting in Salt 2015.5.0, a new DigitalOcean driver was added to Salt Cloud to support DigitalOcean's new API, APIv2. The original driver, referred to as digital_ocean, will be supported throughout the 2015.5.x releases of Salt, but will then be removed in Salt Beryllium in favor of the APIv2 driver, digital_ocean_v2. The following documentation is relevant to the new driver, digital_ocean_v2. To see documentation related to the original digital_ocean driver, please see the DigitalOcean Salt Cloud Driver.
NOTE: When Salt Beryllium is released, the original digital_ocean driver will no longer be supported and the digital_ocean_v2 driver will become the digital_ocean driver.
Using Salt for DigitalOcean requires a personal_access_token, an ssh_key_file, and at least one SSH key name in ssh_key_names. More can be added by separating each key with a comma. The personal_access_token can be found in the DigitalOcean web interface in the "Apps & API" section. The SSH key name can be found under the "SSH Keys" section.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name,my-key-name-2
  location: New York 1
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
digitalocean-ubuntu:
  provider: my-digitalocean-config
  image: Ubuntu 14.04 x32
  size: 512MB
  location: New York 1
  private_networking: True
  backups_enabled: True
  ipv6: True
Locations can be obtained using the --list-locations option for the salt-cloud command:
# salt-cloud --list-locations my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        Amsterdam 1:
            ----------
            available: False
            features: [u'backups']
            name: Amsterdam 1
            sizes: []
            slug: ams1
...SNIP...
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        512MB:
            ----------
            cost_per_hour: 0.00744
            cost_per_month: 5.0
            cpu: 1
            disk: 20
            id: 66
            memory: 512
            name: 512MB
            slug: None
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-digitalocean-config
my-digitalocean-config:
    ----------
    digital_ocean:
        ----------
        Arch Linux 2013.05 x64:
            ----------
            distribution: Arch Linux
            id: 350424
            name: Arch Linux 2013.05 x64
            public: True
            slug: None
...SNIP...
NOTE: DigitalOcean's concept of Applications is nothing more than a pre-configured instance (same as a normal Droplet). You will find examples such as Docker 0.7 Ubuntu 13.04 x64 and Wordpress on Ubuntu 12.10 when using the --list-images option. These names can be used just like the rest of the standard instances when specifying an image in the cloud profile configuration.
NOTE: If your domain's DNS is managed with DigitalOcean, you can automatically create A-records for newly created droplets. Use create_dns_record: True in your config to enable this. Add delete_dns_record: True to also delete records when a droplet is destroyed.
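A sketch of what those options might look like in the provider configuration, building on the example above:

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name
  create_dns_record: True
  delete_dns_record: True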
NOTE: Additional documentation is available from DigitalOcean.
Getting Started With AWS EC2
Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support.
Previously, the suggested provider for AWS EC2 was the aws provider. This has been deprecated in favor of the ec2 provider. Configuration using the old aws provider will still function, but that driver is no longer in active development.
Dependencies
This driver requires the Python requests library to be installed.
Configuration
The following example illustrates some of the options that can be set. These parameters are discussed in more detail below.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-ec2-southeast-public-ips:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-cloud command is run inside the EC2
  #     public_ips - The salt-cloud command is run outside of EC2
  #
  ssh_interface: public_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default

  # Optionally configure default region
  # Use salt-cloud --list-locations <provider> to obtain valid regions
  #
  location: ap-southeast-1
  availability_zone: ap-southeast-1b

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL         -> ec2-user
  # CentOS       -> ec2-user
  # Ubuntu       -> ubuntu
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'

  provider: ec2


my-ec2-southeast-private-ips:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Specify whether to use public or private IP for deploy script.
  #
  # Valid options are:
  #     private_ips - The salt-master is also hosted with EC2
  #     public_ips - The salt-master is hosted outside of EC2
  #
  ssh_interface: private_ips

  # Optionally configure the Windows credential validation number of
  # retries and delay between retries. This defaults to 10 retries
  # with a one second delay between retries
  win_deploy_auth_retries: 10
  win_deploy_auth_retry_delay: 1

  # Set the EC2 access credentials (see below)
  #
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'

  # Make sure this key is owned by root with permissions 0400.
  #
  private_key: /etc/salt/my_test_key.pem
  keyname: my_test_key
  securitygroup: default

  # Optionally configure default region
  #
  location: ap-southeast-1
  availability_zone: ap-southeast-1b

  # Configure which user to use to run the deploy script. This setting is
  # dependent upon the AMI that is used to deploy. It is usually safer to
  # configure this individually in a profile, than globally. Typical users
  # are:
  #
  # Amazon Linux -> ec2-user
  # RHEL         -> ec2-user
  # CentOS       -> ec2-user
  # Ubuntu       -> ubuntu
  #
  ssh_username: ec2-user

  # Optionally add an IAM profile
  iam_profile: 'my other profile name'

  provider: ec2
Access Credentials
The id and key settings may be found in the Security Credentials area of the AWS Account page:
https://portal.aws.amazon.com/gp/aws/securityCredentials
Both are located in the Access Credentials area of the page, under the Access Keys tab. The id setting is labeled Access Key ID, and the key setting is labeled Secret Access Key.
Windows Deploy Timeouts
For Windows instances, it may take longer than normal for the instance to be ready. In these circumstances, the provider configuration can be configured with a win_deploy_auth_retries and/or a win_deploy_auth_retry_delay setting, which default to 10 retries and a one second delay between retries. These retries and timeouts relate to validating the Administrator password once AWS provides the credentials via the AWS API.
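For example, to allow more time for a slow Windows image (the values below are only illustrative), the provider configuration might include:

my-ec2-config:
  win_deploy_auth_retries: 20
  win_deploy_auth_retry_delay: 2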
Key Pairs
In order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs
Keys in the us-west-1 region can be configured at
https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs
...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600.
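For example, assuming the key was saved as /etc/salt/my_test_key.pem (the path used in the provider example above), its permissions could be restricted with:

chmod 0400 /etc/salt/my_test_key.pem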
Security Groups
An instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us-east-1 region can be configured at:
https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups
...and so on.
A security group defines firewall rules which an instance will adhere to. If the salt-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt.
IAM Profile
Amazon EC2 instances support the concept of an instance profile, which is a logical container for the IAM role. At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role.
Scaffolding the profile is a 2-step configuration process:
1. Configure an IAM Role from the IAM Management Console.
2. Attach this role to a new profile. It can be done with the AWS CLI:
> aws iam create-instance-profile --instance-profile-name PROFILE_NAME
> aws iam add-role-to-instance-profile --instance-profile-name PROFILE_NAME --role-name ROLE_NAME
Once the profile is created, you can use the PROFILE_NAME to configure your cloud profiles.
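For example, a provider or profile entry referencing the new instance profile (PROFILE_NAME is the placeholder used in the commands above) might look like:

my-ec2-config:
  iam_profile: PROFILE_NAME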
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles:
base_ec2_private:
  provider: my-ec2-southeast-private-ips
  image: ami-e565ba8c
  size: t2.micro
  ssh_username: ec2-user

base_ec2_public:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: t2.micro
  ssh_username: ec2-user

base_ec2_db:
  provider: my-ec2-southeast-public-ips
  image: ami-e565ba8c
  size: m1.xlarge
  ssh_username: ec2-user
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  # optionally add tags to profile:
  tag: {'Environment': 'production', 'Role': 'database'}
  # force grains to sync after install
  sync_after_install: grains

base_ec2_vpc:
  provider: my-ec2-southeast-public-ips
  image: ami-a73264ce
  size: m1.xlarge
  ssh_username: ec2-user
  script: /etc/salt/cloud.deploy.d/user_data.sh
  network_interfaces:
    - DeviceIndex: 0
      PrivateIpAddresses:
        - Primary: True
      # auto assign public ip (not EIP)
      AssociatePublicIpAddress: True
      SubnetId: subnet-813d4bbf
      SecurityGroupId:
        - sg-750af413
  del_root_vol_on_destroy: True
  del_all_vol_on_destroy: True
  volumes:
    - { size: 10, device: /dev/sdf }
    - { size: 10, device: /dev/sdg, type: io1, iops: 1000 }
    - { size: 10, device: /dev/sdh, type: io1, iops: 1000 }
  tag: {'Environment': 'production', 'Role': 'database'}
  sync_after_install: grains
The profile can now be realized with a salt command:
# salt-cloud -p base_ec2 ami.example.com
# salt-cloud -p base_ec2_public ami.example.com
# salt-cloud -p base_ec2_private ami.example.com
This will create an instance named ami.example.com in EC2. The minion that is installed on this instance will have an id of ami.example.com. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt 'ami.example.com' test.ping
Required Settings
The following settings are always required for EC2:
# Set the EC2 login data
my-ec2-config:
  id: HJGRYCILJLKJYG
  key: 'kdjgfsgm;woormgl/aserigjksjdhasdfgn'
  keyname: test
  securitygroup: quick-start
  private_key: /root/test.pem
  provider: ec2
Optional Settings
EC2 allows a userdata file to be passed to the instance to be created. This functionality was added to Salt in the 2015.5.0 release.
my-ec2-config:
  # Pass userdata to the instance to be created
  userdata_file: /etc/salt/my-userdata-file
EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity.
my-ec2-config:
  # Optionally configure default region
  location: ap-southeast-1
  availability_zone: ap-southeast-1b
EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt-cloud command is run from another EC2 instance, the private IP should be used.
my-ec2-config:
  # Specify whether to use public or private IP for deploy script
  # private_ips or public_ips
  ssh_interface: public_ips
Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami).
my-ec2-config: # Configure which user to use to run the deploy script ssh_username: ec2-user
Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file:
my-ec2-config:
  ssh_username:
    - ec2-user
    - ubuntu
    - admin
    - bitnami
Multiple security groups can also be specified in the same fashion:
my-ec2-config:
  securitygroup:
    - default
    - extra
Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Console.
my-ec2-config:
  spot_config:
    spot_price: 0.10
By default, the spot instance type is set to 'one-time', meaning it will be launched and, if it's ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by you or AWS), set the type to 'persistent'.
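A minimal sketch of a persistent spot request, assuming the type key sits alongside spot_price under spot_config:

my-ec2-config:
  spot_config:
    spot_price: 0.10
    type: persistent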
NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your maximum bid.
The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching:
- wait_for_spot_timeout: seconds to wait before giving up on spot instance launch (default=600)
- wait_for_spot_interval: seconds to wait in between polling requests to determine if a spot instance is available (default=30)
- wait_for_spot_interval_multiplier: a multiplier to add to the interval in between requests, which is useful if AWS is throttling your requests (default=1)
- wait_for_spot_max_failures: maximum number of failures before giving up on launching your spot instance (default=10)
If you find that you're being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file that will double the polling interval after each request to AWS.
wait_for_spot_interval: 1
wait_for_spot_interval_multiplier: 2
See the AWS Spot Instances documentation for more information.
Block device mappings enable you to specify additional EBS volumes or instance store volumes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the AWS documentation for a listing of the available instance stores, and device names.
my-ec2-config:
  block_device_mappings:
    - DeviceName: /dev/sdb
      VirtualName: ephemeral0
    - DeviceName: /dev/sdc
      VirtualName: ephemeral1
You can also use block device mappings to change the size of the root device at provisioning time. For example, assuming the root device is '/dev/sda', you can set its size to 100G by using the following configuration.
my-ec2-config:
  block_device_mappings:
    - DeviceName: /dev/sda
      Ebs.VolumeSize: 100
      Ebs.VolumeType: gp2
      Ebs.SnapshotId: dummy0
Existing EBS volumes may also be attached (not created) to your instances or you can create new EBS volumes based on EBS snapshots. To simply attach an existing volume use the volume_id parameter.
device: /dev/xvdj
volume_id: vol-12345abcd
Or, to create a volume from an EBS snapshot, use the snapshot parameter.
device: /dev/xvdj
snapshot: snap-abcd12345
Note that volume_id will take precedence over the snapshot parameter.
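As a sketch of how those fragments fit together, such entries would sit under a profile's volumes list; the profile name, image, and device letters below are placeholders:

my-ec2-attach-profile:
  provider: my-ec2-config
  image: ami-e565ba8c
  size: t2.micro
  volumes:
    - { device: /dev/xvdj, volume_id: vol-12345abcd }
    - { device: /dev/xvdk, snapshot: snap-abcd12345 }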
Tags can be set once an instance has been launched.
my-ec2-config:
  tag:
    tag0: value
    tag1: value
Modify EC2 Tags
One of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt-cloud are actually just stored as a tag called Name. Salt Cloud has the ability to manage these tags:
salt-cloud -a get_tags mymachine
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
salt-cloud -a del_tags mymachine tag1,tag2,tag3
It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances:
salt-cloud -f get_tags my_ec2 resource_id=af5467ba
salt-cloud -f set_tags my_ec2 resource_id=af5467ba tag1=somestuff
salt-cloud -f del_tags my_ec2 resource_id=af5467ba tag1,tag2,tag3
Rename EC2 Instances
As mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance, and the salt keys.
salt-cloud -a rename mymachine newname=yourmachine
EC2 Termination Protection
EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed.
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
Rename on Destroy
When instances on EC2 are destroyed, there will be a lag between the time that the action is sent and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed. The new name will look something like:
myinstance-DEL20f5b8ad4eb64ed88f2c428df80a1a0c
In order to enable this, add a rename_on_destroy line to the main configuration file:
my-ec2-config:
  rename_on_destroy: True
Listing Images
Normally, images can be queried on a cloud provider by passing the --list-images argument to Salt Cloud. This still holds true for EC2:
salt-cloud --list-images my-ec2-config
However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified, by adding an owner argument to the provider configuration:
owner: aws-marketplace
The possible values for this setting are amazon, aws-marketplace, self, <AWS account ID> or all. The default setting is amazon. Take note that all and aws-marketplace may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data.
It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the avail_images function directly:
salt-cloud -f avail_images my-ec2-config owner=aws-marketplace
EC2 Images
The following are lists of available AMI images, generally sorted by OS. These lists are on 3rd-party websites and are not managed by Salt Stack in any way. They are provided here as a reference for those who are interested, and contain no warranty (express or implied) from anyone affiliated with Salt Stack. Most of them have never been used, much less tested, by the Salt Stack team.
- Arch Linux
- FreeBSD
- Fedora
- CentOS
- Ubuntu
- Debian
- OmniOS
- All Images on Amazon
show_image
This is a function that describes an AMI on EC2. This will give insight as to the defaults that will be applied to an instance using a particular AMI.
$ salt-cloud -f show_image ec2 image=ami-fd20ad94
show_instance
This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.
$ salt-cloud -a show_instance myinstance
ebs_optimized
This argument enables switching of the EbsOptimized setting, which defaults to 'false'. It indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.
This setting can be added to the profile or map file for an instance.
If set to True, this setting will enable an instance to be EbsOptimized:
ebs_optimized: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
  ebs_optimized: True
del_root_vol_on_destroy
This argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.
If set, this setting will apply to the root EBS volume:
del_root_vol_on_destroy: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
  del_root_vol_on_destroy: True
del_all_vols_on_destroy
This argument overrides the default DeleteOnTermination setting in the AMI for the non-root EBS volumes for an instance. Many AMIs contain 'false' as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance.
If set, this setting will apply to any (non-root) volumes that were created by salt-cloud using the 'volumes' setting.
The volumes will not be deleted under the following conditions:
- If a volume is detached before terminating the instance
- If a volume is created without this setting and attached to the instance
del_all_vols_on_destroy: True
This can also be set as a cloud provider setting in the EC2 cloud configuration:
my-ec2-config:
  del_all_vols_on_destroy: True
The setting for this may be changed on all volumes of an existing instance using one of the following commands:
salt-cloud -a delvol_on_destroy myinstance
salt-cloud -a keepvol_on_destroy myinstance
salt-cloud -a show_delvol_on_destroy myinstance
The setting for this may be changed on a volume on an existing instance using one of the following commands:
salt-cloud -a delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a keepvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a keepvol_on_destroy myinstance volume_id=vol-1a2b3c4d
salt-cloud -a show_delvol_on_destroy myinstance device=/dev/sda1
salt-cloud -a show_delvol_on_destroy myinstance volume_id=vol-1a2b3c4d
EC2 Termination Protection
EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality.
salt-cloud -a show_term_protect mymachine
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
Alternate Endpoint
Normally, EC2 endpoints are built using the region and the service_url. The resulting endpoint would follow this pattern:
ec2.<region>.<service_url>
This results in an endpoint that looks like:
ec2.us-east-1.amazonaws.com
There are other projects that support an EC2 compatibility layer, which this scheme does not account for. This can be overridden by specifying the endpoint directly in the main cloud configuration file:
my-ec2-config:
  endpoint: myendpoint.example.com:1138/services/Cloud
Volume Management
The EC2 driver has several functions and actions for management of EBS volumes.
Creating Volumes
A volume may be created, independent of an instance. A zone must be specified. A size (in GiB) or a snapshot may be specified. If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used.
salt-cloud -f create_volume ec2 zone=us-east-1b
salt-cloud -f create_volume ec2 zone=us-east-1b size=10
salt-cloud -f create_volume ec2 zone=us-east-1b snapshot=snap12345678
salt-cloud -f create_volume ec2 size=10 type=standard
salt-cloud -f create_volume ec2 size=10 type=io1 iops=1000
Attaching Volumes
Unattached volumes may be attached to an instance. The following values are required: name or instance_id, volume_id, and device.
salt-cloud -a attach_volume myinstance volume_id=vol-12345 device=/dev/sdb1
Show a Volume
The details about an existing volume may be retrieved.
salt-cloud -a show_volume myinstance volume_id=vol-12345
salt-cloud -f show_volume ec2 volume_id=vol-12345
Detaching Volumes
An existing volume may be detached from an instance.
salt-cloud -a detach_volume myinstance volume_id=vol-12345
Deleting Volumes
A volume that is not attached to an instance may be deleted.
salt-cloud -f delete_volume ec2 volume_id=vol-12345
Managing Key Pairs
The EC2 driver has the ability to manage key pairs.
Creating a Key Pair
A key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immediately.
salt-cloud -f create_keypair ec2 keyname=mykeypair
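Since Amazon does not retain the private key, one approach is to capture the return data immediately and lock down its permissions; a minimal sketch (the output path is arbitrary):

salt-cloud -f create_keypair ec2 keyname=mykeypair > /root/mykeypair.out
chmod 0600 /root/mykeypair.out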
Show a Key Pair
This function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon).
salt-cloud -f show_keypair ec2 keyname=mykeypair
Delete a Key Pair
This function removes the key pair from Amazon.
salt-cloud -f delete_keypair ec2 keyname=mykeypair
Launching instances into a VPC
Simple launching into a VPC
In the Amazon web interface, identify the id of the subnet into which your image should be created. Then, edit your cloud.profiles file like so:
profile-id:
  provider: provider-name
  subnetid: subnet-XXXXXXXX
  image: ami-XXXXXXXX
  size: m1.medium
  ssh_username: ubuntu
  securitygroupid:
    - sg-XXXXXXXX
Specifying interface properties
New in version 2014.7.0.
Launching into a VPC allows you to specify more complex configurations for the network interfaces of your virtual machines, for example:
profile-id:
  provider: provider-name
  image: ami-XXXXXXXX
  size: m1.medium
  ssh_username: ubuntu

  # Do not include either 'subnetid' or 'securitygroupid' here if you are
  # going to manually specify interface configuration
  #
  network_interfaces:
    - DeviceIndex: 0
      SubnetId: subnet-XXXXXXXX
      SecurityGroupId:
        - sg-XXXXXXXX

      # Uncomment this to associate an existing Elastic IP Address with
      # this network interface:
      #
      # associate_eip: eni-XXXXXXXX

      # You can allocate more than one IP address to an interface. Use the
      # 'ip addr list' command to see them.
      #
      # SecondaryPrivateIpAddressCount: 2

      # Uncomment this to allocate a new Elastic IP Address to this
      # interface (will be associated with the primary private ip address
      # of the interface
      #
      # allocate_new_eip: True

      # Uncomment this instead to allocate a new Elastic IP Address to
      # both the primary private ip address and each of the secondary ones
      #
      allocate_new_eips: True
Note that it is an error to assign a 'subnetid' or 'securitygroupid' to a profile where the interfaces are manually configured like this. These are both really properties of each network interface, not of the machine itself.
Getting Started With GoGrid
GoGrid is a public cloud provider supporting Linux and Windows.
Dependencies
- Libcloud >= 0.13.2
Configuration
To use Salt Cloud with GoGrid log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
The apikey and the sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid:
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-gogrid-config:
  provider: gogrid
  apikey: asdff7896asdh789
  sharedsecret: saltybacon
NOTE: A note about using Map files with GoGrid: Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances.
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
gogrid_512:
  provider: my-gogrid-config
  size: 512MB
  image: CentOS 6.2 (64-bit) w/ None
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-gogrid-config
my-gogrid-config:
    ----------
    gogrid:
        ----------
        512MB:
            ----------
            bandwidth: None
            disk: 30
            driver:
            get_uuid:
            id: 512MB
            name: 512MB
            price: 0.095
            ram: 512
            uuid: bde1e4d7c3a643536e42a35142c7caac34b060e9
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-gogrid-config
my-gogrid-config:
    ----------
    gogrid:
        ----------
        CentOS 6.4 (64-bit) w/ None:
            ----------
            driver:
            extra:
                ----------
            get_uuid:
            id: 18094
            name: CentOS 6.4 (64-bit) w/ None
            uuid: bfd4055389919e01aa6261828a96cf54c8dcc2c4
...SNIP...
Getting Started With Google Compute Engine
Google Compute Engine (GCE) is Google infrastructure as a service that lets you run your large-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google's infrastructure.
You can find out more about GCE and other Google Cloud Platform services at https://cloud.google.com.
Dependencies
- Libcloud >= 0.14.0-beta3
- PyCrypto >= 2.1
- A Google Cloud Platform account with Compute Engine enabled
- A registered Service Account for authorization
- Oh, and obviously you'll need salt
Google Compute Engine Setup
1. Sign up for Google Cloud Platform
Go to https://cloud.google.com and use your Google account to sign up for Google Cloud Platform and complete the guided instructions.
2. Create a Project
Next, go to the console at https://cloud.google.com/console and create a new Project. Make sure to select your new Project if you are not automatically directed to the Project.
Projects are a way of grouping together related users, services, and billing. You may opt to create multiple Projects and the remaining instructions will need to be completed for each Project if you wish to use GCE and Salt Cloud to manage your virtual machines.
3. Enable the Google Compute Engine service
In your Project, either just click Compute Engine to the left, or go to the APIs & auth section and APIs link and enable the Google Compute Engine service.
4. Create a Service Account
To set up authorization, navigate to APIs & auth section and then the Credentials link and click the CREATE NEW CLIENT ID button. Select Service Account and click the Create Client ID button. This will automatically download a .json file which can be ignored.
Look for a new Service Account section in the page and record the generated email address for the matching key/fingerprint. The email address will be used in the service_account_email_address of the /etc/salt/cloud file.
- 5.
-
Key Format
If you are using libcloud >= 0.17.0, it is recommended that you use the JSON format file you downloaded above and skip to the "Provider Configuration" section below, using the JSON file in place of NEW.pem in the documentation.
If you are using an older version of libcloud, or are unsure of the version you have, please follow the instructions below to generate and format a new P12 key.
In the new Service Account section, click Generate new P12 key, which will automatically download a .p12 private key file. The .p12 private key needs to be converted to a format compatible with libcloud. This new Google-generated private key was encrypted using notasecret as a passphrase. Use the following command to convert the key, and record the location of the converted private key for use in the service_account_private_key setting of the /etc/salt/cloud file:
openssl pkcs12 -in ORIG.p12 -passin pass:notasecret \
    -nodes -nocerts | openssl rsa -out NEW.pem
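If you are instead using the JSON key with libcloud >= 0.17.0, as described in the Key Format step, the private key setting in the provider configuration below simply points at that file. A minimal sketch, assuming the downloaded key was saved as /path/to/your/key.json:

gce-config:
  service_account_email_address: "123-a5gt@developer.gserviceaccount.com"
  service_account_private_key: "/path/to/your/key.json"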
Provider Configuration
Set up the provider cloud config at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/*.conf:
gce-config:
  # Set up the Project name and Service Account authorization
  project: "your-project-id"
  service_account_email_address: "123-a5gt@developer.gserviceaccount.com"
  service_account_private_key: "/path/to/your/NEW.pem"

  # Set up the location of the salt master
  minion:
    master: saltmaster.example.com

  # Set up grains information, which will be common for all nodes
  # using this provider
  grains:
    node_type: broker
    release: 1.0.1

  provider: gce
NOTE:
The value provided for project must not contain underscores or spaces and
is labeled as "Project ID" on the Google Developers Console.
Profile Configuration
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/*.conf:
my-gce-profile:
  image: centos-6
  size: n1-standard-1
  location: europe-west1-b
  network: default
  tags: '["one", "two", "three"]'
  metadata: '{"one": "1", "2": "two"}'
  use_persistent_disk: True
  delete_boot_pd: False
  deploy: True
  make_master: False
  provider: gce-config
The profile can be realized now with a salt command:
salt-cloud -p my-gce-profile gce-instance
This will create a salt minion instance named gce-instance in GCE. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with a salt-minion installed, connectivity to it can be verified with Salt:
salt gce-instance test.ping
GCE Specific Settings
Consult the sample profile below for more information about GCE specific
settings. Some of them are mandatory and are properly labeled below but
typically also include a hard-coded default.
Initial Profile
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/gce.conf:
my-gce-profile:
  image: centos-6
  size: n1-standard-1
  location: europe-west1-b
  network: default
  tags: '["one", "two", "three"]'
  metadata: '{"one": "1", "2": "two"}'
  use_persistent_disk: True
  delete_boot_pd: False
  ssh_interface: public_ips
  external_ip: "ephemeral"
image
Image is used to define what Operating System image should be used
for the instance. Examples are Debian 7 (wheezy) and CentOS 6. Required.
size
A 'size', in GCE terms, refers to the instance's 'machine type'. See
the on-line documentation for a complete list of GCE machine types. Required.
location
A 'location', in GCE terms, refers to the instance's 'zone'. GCE
has the notion of both Regions (e.g. us-central1, europe-west1, etc)
and Zones (e.g. us-central1-a, us-central1-b, etc). Required.
network
Use this setting to define the network resource for the instance.
All GCE projects contain a network named 'default' but it's possible
to use this setting to create instances belonging to a different
network resource.
tags
GCE supports instance/network tags and this setting allows you to
set custom tags. It should be a list of strings and must be
parse-able by the python ast.literal_eval() function to convert it
to a python list.
metadata
GCE supports instance metadata and this setting allows you to
set custom metadata. It should be a hash of key/value strings and
parse-able by the python ast.literal_eval() function to convert it
to a python dictionary.
use_persistent_disk
Use this setting to ensure that when new instances are created,
they will use a persistent disk to preserve data between instance
terminations and re-creations.
delete_boot_pd
In the event that you wish the boot persistent disk to be permanently
deleted when you destroy an instance, set delete_boot_pd to True.
ssh_interface
New in version 2015.5.0.
Specify whether to use public or private IP for deploy script.
Valid options are:
- •
- private_ips: The salt-master is also hosted with GCE
- •
- public_ips: The salt-master is hosted outside of GCE
external_ip
Per-instance setting: use a named fixed IP address for this host.
Valid options are:
- •
- ephemeral: The host will use a GCE ephemeral IP
- •
-
None: No external IP will be configured on this host.
Optionally, pass the name of a GCE address to use a fixed IP address. If the address does not already exist, it will be created.
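For example, to attach the host to an address named my-fixed-ip (the same name used in the Networking examples further below), the profile entry could look like this sketch:

my-gce-profile:
  ...
  external_ip: my-fixed-ip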
ex_disk_type
GCE supports two different disk types, pd-standard and pd-ssd. The default disk type setting is pd-standard. To specify using an SSD disk, set pd-ssd as the value.
New in version 2014.7.0.
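As a sketch, requesting an SSD boot disk from the profile used in the examples above only requires adding the setting:

my-gce-profile:
  ...
  ex_disk_type: pd-ssd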
SSH Remote Access
GCE instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Append something like this to /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/*.conf:
my-gce-profile:
  ...

  # SSH to GCE instances as gceuser
  ssh_username: gceuser

  # Use the local private SSH key file located here
  ssh_keyfile: /etc/cloud/google_compute_engine
If you have not already used this SSH key to log in to instances in this GCE project, you will also need to add the public key to your project's metadata at https://cloud.google.com/console. You can also add it via the metadata setting:
my-gce-profile:
  ...
  metadata: '{"one": "1", "2": "two", "sshKeys": "gceuser:ssh-rsa <Your SSH Public Key> gceuser@host"}'
Single instance details
This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.
salt-cloud -a show_instance myinstance
Destroy, persistent disks, and metadata
As noted in the provider configuration, it's possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When destroy is called, if the instance contains a salt-cloud-profile key, its value is used to reference the matching profile to determine if delete_boot_pd is set to True.
Be aware that any GCE instances created with salt cloud will contain this
custom salt-cloud-profile metadata entry.
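For example, destroying the instance created earlier uses the standard destroy option, at which point the stored salt-cloud-profile metadata is consulted to decide whether the boot persistent disk should also be removed:

salt-cloud -d gce-instance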
List various resources
It's also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images.
salt-cloud --list-locations gce
salt-cloud --list-sizes gce
salt-cloud --list-images gce
Persistent Disk
The Compute Engine provider provides functions via salt-cloud to manage your
Persistent Disks. You can create and destroy disks as well as attach and
detach them from running instances.
Create
When creating a disk, you can create an empty disk and specify its size (in GB), or specify either an 'image' or 'snapshot'.
salt-cloud -f create_disk gce disk_name=pd location=us-central1-b size=200
Delete
Deleting a disk only requires the name of the disk to delete:
salt-cloud -f delete_disk gce disk_name=old-backup
Attach
Attaching a disk to an existing instance is really an 'action' and requires both an instance name and disk name. It's possible to use this action to create bootable persistent disks if necessary. Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then it cannot be attached in READ_WRITE to any instance).
salt-cloud -a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes
Detach
Detaching a disk is also an action against an instance and only requires the name of the disk. Note that this does not safely sync and umount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance.
salt-cloud -a detach_disk myinstance disk_name=pd
Show disk
It's also possible to look up the details for an existing disk with either a function or an action.
salt-cloud -a show_disk myinstance disk_name=pd salt-cloud -f show_disk gce disk_name=pd
Create snapshot
You can take a snapshot of an existing disk's content. The snapshot can then in turn be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk.
salt-cloud -f create_snapshot gce name=backup-20140226 disk_name=pd
Delete snapshot
You can delete a snapshot when it's no longer needed by specifying the name of the snapshot.
salt-cloud -f delete_snapshot gce name=backup-20140226
Show snapshot
Use this function to look up information about the snapshot.
salt-cloud -f show_snapshot gce name=backup-20140226
Networking
Compute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other via an internal DNS service that resolves instance names. Instances within a private network can also communicate with each other directly, without needing special routing or firewall rules, even if they span different regions/zones.
Networks also support custom firewall rules. By default, traffic between
instances on the same private network is open to all ports and protocols.
Inbound SSH traffic (port 22) is also allowed but all other inbound traffic
is blocked.
Create network
New networks require a name and CIDR range. New instances can be created and added to this network by setting the network name during create. It is not possible to add/remove existing instances to a network.
salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24
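New instances can then be placed on this network by referencing it from a cloud profile. A minimal sketch, reusing the mynet name from the command above:

my-gce-profile:
  ...
  network: mynet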
Destroy network
Destroy a network by specifying the name. Make sure that there are no instances associated with the network prior to deleting it or you'll have a bad day.
salt-cloud -f delete_network gce name=mynet
Show network
Specify the network name to view information about the network.
salt-cloud -f show_network gce name=mynet
Create address
Create a new named static IP address in a region.
salt-cloud -f create_address gce name=my-fixed-ip region=us-central1
Delete address
Delete an existing named fixed IP address.
salt-cloud -f delete_address gce name=my-fixed-ip region=us-central1
Show address
View details on a named address.
salt-cloud -f show_address gce name=my-fixed-ip region=us-central1
Create firewall
You'll need to create custom firewall rules if you want to allow other traffic than what is described above. For instance, if you run a web service on your instances, you'll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name and it will use the 'default' network unless otherwise specified with a 'network' attribute. Firewalls also support instance tags for source/destination.
salt-cloud -f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp
Delete firewall
Deleting a firewall rule will prevent any previously allowed traffic for the named firewall rule.
salt-cloud -f delete_fwrule gce name=web
Show firewall
Use this function to review an existing firewall rule's information.
salt-cloud -f show_fwrule gce name=web
Load Balancer
Compute Engine possesses a load-balancer feature for splitting traffic across multiple instances. Please reference the documentation for a more complete description.
The load-balancer functionality is slightly different than that described
in Google's documentation. The concept of TargetPool and ForwardingRule
are consolidated in salt-cloud/libcloud. HTTP Health Checks are optional.
HTTP Health Check
HTTP Health Checks can be used as a means to toggle load-balancing across instance members, or to detect if an HTTP site is functioning. A common use-case is to set up a health check URL and if you want to toggle traffic on/off to an instance, you can temporarily have it return a non-200 response. A non-200 response to the load-balancer's health check will keep the LB from sending any new traffic to the "down" instance. Once the instance's health check URL begins returning 200 responses, the LB will again start to send traffic to it. Review Compute Engine's documentation for allowable parameters. You can use the following salt-cloud functions to manage your HTTP health checks.
salt-cloud -f create_hc gce name=myhc path=/ port=80
salt-cloud -f delete_hc gce name=myhc
salt-cloud -f show_hc gce name=myhc
Load-balancer
When creating a new load-balancer, it requires a name, region, port range, and list of members. There are other optional parameters for protocol, and list of health checks. Deleting or showing details about the LB only requires the name.
salt-cloud -f create_lb gce name=lb region=... ports=80 members=w1,w2,w3
salt-cloud -f delete_lb gce name=lb
salt-cloud -f show_lb gce name=lb
You can also create a load balancer using a named fixed IP address by specifying the name of the address. If the address does not exist yet, it will be created.
salt-cloud -f create_lb gce name=my-lb region=us-central1 ports=234 members=s1,s2,s3 address=my-lb-ip
Attach and Detach LB
It is possible to attach or detach an instance from an existing load-balancer. Both the instance and load-balancer must exist before using these functions.
salt-cloud -f attach_lb gce name=lb member=w4
salt-cloud -f detach_lb gce name=lb member=oops
Getting Started With HP Cloud
HP Cloud is a major public cloud platform and uses the libcloud
openstack driver. The current version of OpenStack that HP Cloud
uses is Havana. When an instance is booted, it must have a
floating IP added to it in order to connect to it; an example
further below illustrates this.
Set up a cloud provider configuration file
To use the openstack driver for HP Cloud, set up the cloud provider configuration file as in the example shown below:
/etc/salt/cloud.providers.d/hpcloud.conf:
hpcloud-config:

  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure HP Cloud using the OpenStack plugin
  #
  identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/tokens
  compute_name: Compute
  protocol: ipv4

  # Set the compute region:
  #
  compute_region: region-b.geo-1

  # Configure HP Cloud authentication credentials
  #
  user: myname
  tenant: myname-project1
  password: xxxxxxxxx

  # keys to allow connection to the instance launched
  #
  ssh_key_name: yourkey
  ssh_key_file: /path/to/key/yourkey.priv

  provider: openstack
The examples that follow use the openstack driver.
Compute Region
Originally, HP Cloud, in its OpenStack Essex version (1.0), had three availability zones in one region, US West (region-a.geo-1), each of which behaved as a region.
This has since changed, and the current OpenStack Havana version of HP Cloud (1.1) has simplified this to two regions to choose from:
region-a.geo-1 -> US West
region-b.geo-1 -> US East
Authentication
The user is the same user as is used to log into the HP Cloud management
UI. The tenant can be found in the upper left under "Project/Region/Scope".
It is often named the same as user albeit with a -project1 appended.
The password is of course what you created your account with. The management
UI also has other information such as being able to select US East or US West.
Set up a cloud profile config file
The profile shown below is a known working profile for an Ubuntu instance. The profile configuration file is stored in the following location:
/etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf:
hp_ae1_ubuntu:
  provider: hp_ae1
  image: 9302692b-b787-4b52-a3a6-daebb79cb498
  ignore_cidr: 10.0.0.1/24
  networks:
    - floating: Ext-Net
  size: standard.small
  ssh_key_file: /root/keys/test.key
  ssh_key_name: test
  ssh_username: ubuntu
Some important things about the example above:
- •
-
The image parameter can use either the image name or image ID, which you can obtain by running the command in the example below (in this case US East):
# salt-cloud --list-images hp_ae1
- •
- The parameter ignore_cidr specifies a range of addresses to ignore when trying to connect to the instance. In this case, it's the range of IP addresses used for the private IP of the instance.
- •
- The parameter networks is very important to include. In previous versions of Salt Cloud, this is what made it possible for salt-cloud to attach a floating IP to the instance in order to connect to it and set up the minion. The current version of salt-cloud doesn't require it, though having it does no harm either. Newer versions of salt-cloud will use this, and without it, will attempt to find a list of floating IP addresses to use regardless.
- •
- The ssh_key_file and ssh_key_name are the keys that will make it possible to connect to the instance to set up the minion
- •
- The ssh_username parameter, since the image used here is Ubuntu, makes it possible not only to log in but also to install the minion
Launch an instance
To instantiate a machine based on this profile (example):
# salt-cloud -p hp_ae1_ubuntu ubuntu_instance_1
After several minutes, this will create an instance named ubuntu_instance_1
running in HP Cloud in the US East region and will set up the minion and then
return information about the instance once completed.
Manage the instance
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt ubuntu_instance_1 test.ping
SSH to the instance
Additionally, the instance can be accessed via SSH using the floating IP assigned to it:
# ssh ubuntu@<floating ip>
Using a private IP
Alternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt-cloud is running within the cloud on an instance that is on the same network with all the other instances (minions).
The example below is a modified version of the previous example. Note the use of ssh_interface:
hp_ae1_ubuntu:
  provider: hp_ae1
  image: 9302692b-b787-4b52-a3a6-daebb79cb498
  size: standard.small
  ssh_key_file: /root/keys/test.key
  ssh_key_name: test
  ssh_username: ubuntu
  ssh_interface: private_ips
With this setup, salt-cloud will use the private IP address to SSH into the instance and set up the salt-minion.
Getting Started With Joyent
Joyent is a public cloud provider supporting SmartOS, Linux, FreeBSD, and
Windows.
Dependencies
This driver requires the Python requests library to be installed.
Configuration
The Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine.
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-joyent-config:
  provider: joyent
  user: fred
  password: saltybacon
  private_key: /root/mykey.pem
  keyname: mykey
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
joyent_512:
  provider: my-joyent-config
  size: Extra Small 512 MB
  image: Arch Linux 2013.06
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-joyent-config
my-joyent-config:
    ----------
    joyent:
        ----------
        Extra Small 512 MB:
            ----------
            default: false
            disk: 15360
            id: Extra Small 512 MB
            memory: 512
            name: Extra Small 512 MB
            swap: 1024
            vcpus: 1
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-joyent-config
my-joyent-config:
    ----------
    joyent:
        ----------
        base:
            ----------
            description: A 32-bit SmartOS image with just essential packages installed. Ideal for users who are comfortable with setting up their own environment and tools.
            disabled: False
            files:
                ----------
                - compression: bzip2
                - sha1: 40cdc6457c237cf6306103c74b5f45f5bf2d9bbe
                - size: 82492182
            name: base
            os: smartos
            owner: 352971aa-31ba-496c-9ade-a379feaecd52
            public: True
...SNIP...
SmartDataCenter
This driver can also be used with the Joyent SmartDataCenter project. More details can be found at:
Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included:
api_host_suffix: .api.myhostname.com
Miscellaneous Configuration
The following configuration items can be set in either provider or
profile configuration files.
use_ssl
When set to True (the default), attach https:// to any URL that does not
already have http:// or https:// included at the beginning. The best
practice is to leave the protocol out of the URL, and use this setting to manage
it.
verify_ssl
When set to True (the default), the underlying web library will verify the
SSL certificate. This should only be set to False for debugging.
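As a sketch, both items could be spelled out explicitly in the provider configuration shown earlier; the values below are simply the defaults:

my-joyent-config:
  ...
  use_ssl: True
  verify_ssl: True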
Getting Started With LXC
The LXC module is designed to install Salt in an LXC container on a controlled and possibly remote minion.
In other words, Salt will connect to a minion, then from that minion:
- •
- Provision and configure a container for networking access
- •
- Use those modules to deploy salt and re-attach to master.
- •
- lxc runner
- •
- lxc module
- •
- seed
Limitations
- •
- You can only act on one minion and one provider at a time.
- •
-
Listing images must be targeted to a particular LXC provider (nothing will be
output when targeting all providers).
WARNING: On versions prior to 2015.5.2, you need to explicitly specify the network bridge
Operation
Salt's LXC support uses lxc.init via lxc.cloud_init_interface and seeds the minion via seed.mkconfig.
You can provide those LXC VMs with a profile and a network profile, just as if you were using the minion module directly.
Order of operation:
- •
- Create the LXC container on the desired minion (clone or template)
- •
- Change LXC config options (if any need to be changed)
- •
- Start container
- •
- Change base passwords if any
- •
- Change base DNS configuration if necessary
- •
- Wait for LXC container to be up and ready for ssh
- •
- Test the SSH connection and bail out on error
- •
- Upload deploy script and seeds, then re-attach the minion.
Provider configuration
Here is a simple provider configuration:
# Note: This example goes in /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

devhost10-lxc:
  target: devhost10
  provider: lxc
Profile configuration
Please read tutorial-lxc before anything else, and especially tutorial-lxc-profiles.
Here are the options to configure your containers:
- target
- Host minion id to install the lxc Container into
- lxc_profile
- Name of the profile or inline options for the LXC vm creation/cloning, please see tutorial-lxc-profiles-container.
- network_profile
- Name of the profile or inline options for the LXC vm network settings, please see tutorial-lxc-profiles-network.
- nic_opts
-
Totally optional.
Per interface new-style configuration options mappings which will
override any profile default option:
eth0: {'mac': '00:16:3e:01:29:40',
       'gateway': None, (default)
       'link': 'br0', (default)
       'netmask': '', (default)
       'ip': '22.1.4.25'}
- password
- password for root and sysadmin users
- dnsservers
- List of DNS servers to use. This is optional.
- minion
- minion configuration (see Minion Configuration in Salt Cloud)
- bootstrap_shell
- shell for the bootstrapping script (default: /bin/sh)
- script
- defaults to salt-bootstrap
- script_args
-
Arguments which are given to the bootstrap script.
The {0} placeholder will be replaced by the path which contains the
minion config and key files, e.g.:
script_args="-c {0}"
Using profiles:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.

devhost10-lxc:
  provider: devhost10-lxc
  lxc_profile: foo
  network_profile: bar
  minion:
    master: 10.5.0.1
    master_port: 4506
Using inline profiles (eg to override the network bridge):
devhost11-lxc:
  provider: devhost10-lxc
  lxc_profile:
    clone_from: foo
  network_profile:
    eth0:
      link: lxcbr0
  minion:
    master: 10.5.0.1
    master_port: 4506
Template instead of a clone:
devhost11-lxc:
  provider: devhost10-lxc
  lxc_profile:
    template: ubuntu
  network_profile:
    eth0:
      link: lxcbr0
  minion:
    master: 10.5.0.1
    master_port: 4506
Static ip:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.

devhost10-lxc:
  provider: devhost10-lxc
  nic_opts:
    eth0:
      ipv4: 10.0.3.9
  minion:
    master: 10.5.0.1
    master_port: 4506
DHCP:
# Note: This example would go in /etc/salt/cloud.profiles or any file in the
# /etc/salt/cloud.profiles.d/ directory.

devhost10-lxc:
  provider: devhost10-lxc
  minion:
    master: 10.5.0.1
    master_port: 4506
Driver Support
- •
- Container creation
- •
- Image listing (LXC templates)
- •
- Running container information (IP addresses, etc.)
Getting Started With Linode
Linode is a public cloud provider with a focus on Linux instances.
Dependencies
- •
-
linode-python >= 1.1.1
OR
- •
-
Libcloud >= 0.13.2
This driver supports accessing Linode via linode-python or Apache Libcloud. linode-python is recommended, as it is more full-featured than Libcloud. In particular, using linode-python enables stopping, starting, and cloning machines.
Driver selection is automatic. If linode-python is present it will be used. If it is absent, salt-cloud will fall back to Libcloud. If neither are present salt-cloud will abort.
NOTE: linode-python 1.1.1 or later is recommended. Earlier versions of linode-python should work but leak sensitive information into the debug logs.
Linode-python can be downloaded from https://github.com/tjfontaine/linode-python or installed via pip.
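For example, assuming the PyPI package name is linode-python, it can be installed with:

# pip install linode-python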
Configuration
Linode requires a single API key, but the default root password for new instances also needs to be set:
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.

my-linode-config:
  apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf
  password: F00barbaz
  ssh_pubkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKHEOLLbeXgaqRQT9NBAopVz366SdYc0KKX33vAnq+2R user@host
  ssh_key_file: ~/.ssh/id_ed25519
  provider: linode
The password needs to be 8 characters and contain lowercase, uppercase, and
numbers.
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory:
linode_1024:
  provider: my-linode-config
  size: Linode 1024
  image: Arch Linux 2013.06
  location: london
Sizes can be obtained using the --list-sizes option for the salt-cloud command:
# salt-cloud --list-sizes my-linode-config
my-linode-config:
    ----------
    linode:
        ----------
        Linode 1024:
            ----------
            bandwidth: 2000
            disk: 49152
            driver:
            get_uuid:
            id: 1
            name: Linode 1024
            price: 20.0
            ram: 1024
            uuid: 03e18728ce4629e2ac07c9cbb48afffb8cb499c4
...SNIP...
Images can be obtained using the --list-images option for the salt-cloud command:
# salt-cloud --list-images my-linode-config
my-linode-config:
    ----------
    linode:
        ----------
        Arch Linux 2013.06:
            ----------
            driver:
            extra:
                ----------
                64bit: 1
                pvops: 1
            get_uuid:
            id: 112
            name: Arch Linux 2013.06
            uuid: 8457f92eaffc92b7666b6734a96ad7abe1a8a6dd
...SNIP...
Locations can be obtained using the --list-locations option for the salt-cloud command:
# salt-cloud --list-locations my-linode-config
my-linode-config:
    ----------
    linode:
        ----------
        Atlanta, GA, USA:
            ----------
            abbreviation: atlanta
            id: 4
        Dallas, TX, USA:
            ----------
            abbreviation: dallas
            id: 2
...SNIP...
Cloning
When salt-cloud accesses Linode via linode-python it can clone machines.
It is safest to clone a stopped machine. To stop a machine, run:
salt-cloud -a stop machine_to_clone
To create a new machine based on another machine, add an entry to your linode cloud profile that looks like this:
li-clone:
  provider: linode
  clonefrom: machine_to_clone
  script_args: -C
Then run salt-cloud as normal, specifying -p li-clone. The profile name can be anything--it doesn't have to be li-clone.
Clonefrom: is the name of an existing machine in Linode from which to clone.
Script_args: -C is necessary to avoid re-deploying Salt via salt-bootstrap.
-C will just re-deploy keys so the new minion will not have a duplicate key
or minion_id on the master.
Getting Started With OpenStack
OpenStack is one of the most popular cloud projects. It's an open source project
to build public and/or private clouds. You can use Salt Cloud to launch
OpenStack instances.
Dependencies
- •
- Libcloud >= 0.13.2
Configuration
- •
-
Using the new format, set up the cloud configuration at
/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/openstack.conf:
my-openstack-config:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure the OpenStack driver
  #
  identity_url: http://identity.youopenstack.com/v2.0/tokens
  compute_name: nova
  protocol: ipv4
  compute_region: RegionOne

  # Configure Openstack authentication credentials
  #
  user: myname
  password: 123456
  # tenant is the project name
  tenant: myproject

  provider: openstack

  # skip SSL certificate validation (default false)
  insecure: false
Using nova client to get information from OpenStack
One of the best ways to get information about OpenStack is using the novaclient python package (available in pypi as python-novaclient). The client configuration is a set of environment variables that you can get from the Dashboard. Log in and then go to Project -> Access & security -> API Access and download the "OpenStack RC file". Then:
source /path/to/your/rcfile
nova credentials
nova endpoints
In the nova endpoints output you can see the information about
compute_region and compute_name.
Compute Region
It depends on the OpenStack cluster that you are using. Please have a look at
the previous sections.
Authentication
The user and password is the same user as is used to log into the
OpenStack Dashboard.
Profiles
Here is an example of a profile:
openstack_512:
  provider: my-openstack-config
  size: m1.tiny
  image: cirros-0.3.1-x86_64-uec
  ssh_key_file: /tmp/test.pem
  ssh_key_name: test
  ssh_interface: private_ips
The following list explains some of the important properties.
- size
- can be one of the options listed in the output of nova flavor-list.
- image
- can be one of the options listed in the output of nova image-list.
- ssh_key_file
- The SSH private key that salt-cloud uses to SSH into the VM after it is first booted, in order to execute a command or script. This private key's public key must be the openstack public key inserted into the authorized_keys file of the VM's root user account.
- ssh_key_name
- The name of the openstack SSH public key that is inserted into the authorized_keys file of the VM's root user account. Prior to using this public key, you must use openstack commands or the horizon web UI to load that key into the tenant's account. Note that this openstack tenant must be the one you defined in the cloud provider.
- ssh_interface
-
This option allows you to create a VM without a public IP. If this option
is omitted and the VM does not have a public IP, then the salt-cloud waits
for a certain period of time and then destroys the VM.
For more information concerning cloud profiles, see here.
change_password
If no ssh_key_file is provided, and the server already exists, change_password will use the api to change the root password of the server so that it can be bootstrapped.
change_password: True
userdata_file
Use userdata_file to specify the userdata file to upload for use with cloud-init if available.
userdata_file: /etc/salt/cloud-init/packages.yml
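As a sketch, such a userdata file could be a standard cloud-init cloud-config document; the package names below are purely illustrative:

#cloud-config
packages:
  - vim
  - git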
Getting Started With Parallels
Parallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted by a provider using PCS. Further information can be found at:
http://www.parallels.com/products/pcs/
- •
-
Using the old format, set up the cloud configuration at /etc/salt/cloud:
# Set up the location of the salt master
#
minion:
  master: saltmaster.example.com

# Set the PARALLELS access credentials (see below)
#
PARALLELS.user: myuser
PARALLELS.password: badpass

# Set the access URL for your PARALLELS provider
#
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
- •
-
Using the new format, set up the cloud configuration at
/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/parallels.conf:
my-parallels-config:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set the PARALLELS access credentials (see below)
  #
  user: myuser
  password: badpass

  # Set the access URL for your PARALLELS provider
  #
  url: https://api.cloud.xmission.com:4465/paci/v1.0/

  provider: parallels
Access Credentials
The user, password, and url will be provided to you by your cloud
provider. These are all required in order for the PARALLELS driver to work.
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/parallels.conf:
- •
-
Using the old cloud configuration format:
parallels-ubuntu:
  provider: parallels
  image: ubuntu-12.04-x86_64
- •
-
Using the new cloud configuration format and the cloud configuration example
from above:
parallels-ubuntu:
  provider: my-parallels-config
  image: ubuntu-12.04-x86_64
The profile can be realized now with a salt command:
# salt-cloud -p parallels-ubuntu myubuntu
This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have an id of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myubuntu test.ping
Required Settings
The following settings are always required for PARALLELS:
- •
-
Using the old cloud configuration format:
PARALLELS.user: myuser
PARALLELS.password: badpass
PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/
- •
-
Using the new cloud configuration format:
my-parallels-config:
  user: myuser
  password: badpass
  url: https://api.cloud.xmission.com:4465/paci/v1.0/
  provider: parallels
Optional Settings
Unlike other cloud providers in Salt Cloud, Parallels does not utilize a size setting. This is because Parallels allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance. Defaults to the instance name.
desc: <instance_name>

# How many CPU cores, and how fast they are (in MHz)
cpu_number: 1
cpu_power: 1000

# How many megabytes of RAM
ram: 256

# Bandwidth available, in kbps
bandwidth: 100

# How many public IPs will be assigned to this instance
ip_num: 1

# Size of the instance disk (in GiB)
disk_size: 10

# Username and password
ssh_username: root
password: <value from PARALLELS.password>

# The name of the image, from ``salt-cloud --list-images parallels``
image: ubuntu-12.04-x86_64
Getting Started With Proxmox
Proxmox Virtual Environment is a complete server virtualization management solution, based on KVM virtualization and OpenVZ containers. Further information can be found at:
Dependencies
- •
- IPy >= 0.81
- •
-
requests >= 2.2.1
Please note: This module allows you to create both OpenVZ containers and KVM virtual machines, but Salt will only be installed when the VM is an OpenVZ container rather than a KVM virtual machine.
- •
-
Set up the cloud configuration at
/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/proxmox.conf:
my-proxmox-config:
  # Set up the location of the salt master
  #
  minion:
    master: saltmaster.example.com

  # Set the PROXMOX access credentials (see below)
  #
  user: myuser@pve
  password: badpass

  # Set the access URL for your PROXMOX provider
  #
  url: your.proxmox.host

  provider: proxmox
Access Credentials
The user, password, and url will be provided to you by your cloud
provider. These are all required in order for the PROXMOX driver to work.
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/proxmox.conf:
- •
-
Configure a profile to be used:
proxmox-ubuntu:
  provider: proxmox
  image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
  technology: openvz
  host: myvmhost
  ip_address: 192.168.100.155
  password: topsecret
The profile can be realized now with a salt command:
# salt-cloud -p proxmox-ubuntu myubuntu
This will create an instance named myubuntu on the cloud provider. The minion that is installed on this instance will have a hostname of myubuntu. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myubuntu test.ping
Required Settings
The following settings are always required for PROXMOX:
- •
-
Using the new cloud configuration format:
my-proxmox-config:
  provider: proxmox
  user: saltcloud@pve
  password: xyzzy
  url: your.proxmox.host
Optional Settings
Unlike other cloud providers in Salt Cloud, Proxmox does not utilize a size setting. This is because Proxmox allows the end-user to specify a more detailed configuration for their instances than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed.
# Description of the instance.
desc: <instance_name>

# How many CPU cores, and how fast they are (in MHz)
cpus: 1
cpuunits: 1000

# How many megabytes of RAM
memory: 256

# How much swap space in MB
swap: 256

# Whether to auto boot the vm after the host reboots
onboot: 1

# Size of the instance disk (in GiB)
disk: 10

# Host to create this vm on
host: myvmhost

# Nameservers. Defaults to host
nameserver: 8.8.8.8 8.8.4.4

# Username and password
ssh_username: root
password: <value from PROXMOX.password>

# The name of the image, from ``salt-cloud --list-images proxmox``
image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
Getting Started With Rackspace
Rackspace is a major public cloud platform which may be configured using either the rackspace or the openstack driver, depending on your needs.
Please note that the rackspace driver is only intended for 1st gen instances,
aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but
will not work with OpenStack-based instances. Unless you explicitly have a
reason to use it, it is highly recommended that you use the openstack driver
instead.
Dependencies
- •
- Libcloud >= 0.13.2
Configuration
- To use the openstack driver (recommended), set up the cloud configuration at
-
/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure Rackspace using the OpenStack plugin
  #
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/tokens'
  compute_name: cloudServersOpenStack
  protocol: ipv4

  # Set the compute region:
  #
  compute_region: DFW

  # Configure Rackspace authentication credentials
  #
  user: myname
  tenant: 123456
  apikey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  provider: openstack
- To use the rackspace driver, set up the cloud configuration at
-
/etc/salt/cloud.providers or
/etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
  provider: rackspace
  # The Rackspace login user
  user: fred
  # The Rackspace user's apikey
  apikey: 901d3f579h23c8v73q9
The settings that follow are for using Rackspace with the openstack driver, and will not work with the rackspace driver.
Compute Region
Rackspace currently has six compute regions which may be used:
DFW -> Dallas/Fort Worth
ORD -> Chicago
SYD -> Sydney
LON -> London
IAD -> Northern Virginia
HKG -> Hong Kong
Note: Currently the LON region is only available with a UK account, and UK accounts cannot access other regions.
Authentication
The user is the same user as is used to log into the Rackspace Control Panel. The tenant and apikey can be found in the API Keys area of the Control Panel. The apikey will be labeled as API Key (and may need to be generated), and tenant will be labeled as Cloud Account Number.
An initial profile can be configured in /etc/salt/cloud.profiles or /etc/salt/cloud.profiles.d/rackspace.conf:
openstack_512:
  provider: my-rackspace-config
  size: 512 MB Standard
  image: Ubuntu 12.04 LTS (Precise Pangolin)
To instantiate a machine based on this profile:
# salt-cloud -p openstack_512 myinstance
This will create a virtual machine at Rackspace with the name myinstance. This operation may take several minutes to complete, depending on the current load at the Rackspace data center.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt myinstance test.ping
RackConnect Environments
Rackspace offers a hybrid hosting configuration option called RackConnect that allows you to use a physical firewall appliance with your cloud servers. When this service is in use the public_ip assigned by nova will be replaced by a NAT ip on the firewall. For salt-cloud to work properly it must use the newly assigned "access ip" instead of the Nova assigned public ip. You can enable that capability by adding this to your profiles:
openstack_512:
  provider: my-openstack-config
  size: 512 MB Standard
  image: Ubuntu 12.04 LTS (Precise Pangolin)
  rackconnect: True
Managed Cloud Environments
Rackspace offers a managed service level of hosting. As part of the managed service level you have the ability to choose from base or lamp installations on cloud server images. The post-build process for both the base and the lamp installations uses Chef to install things such as the cloud monitoring agent and the cloud backup agent. It also takes care of installing the lamp stack if selected. In order to prevent the post-installation process from stomping over the bootstrapping, you can add the below to your profiles.
openstack_512:
  provider: my-rackspace-config
  size: 512 MB Standard
  image: Ubuntu 12.04 LTS (Precise Pangolin)
  managedcloud: True
First and Next Generation Images
Rackspace provides two sets of virtual machine images: first and next generation. As of 0.8.9, salt-cloud will default to using the next generation images. To force the use of first generation images, add the following to the profile configuration:
FreeBSD-9.0-512:
  provider: my-rackspace-config
  size: 512 MB Standard
  image: FreeBSD 9.0
  force_first_gen: True
Private Subnets
By default salt-cloud will not add Rackspace private networks to new servers. To enable a private network on a server instantiated by salt cloud, add the following section to the provider file (typically /etc/salt/cloud.providers.d/rackspace.conf):
networks:
  - fixed:
      # This is the private network
      - private-network-id
      # This is Rackspace's "PublicNet"
      - 00000000-0000-0000-0000-000000000000
      # This is Rackspace's "ServiceNet"
      - 11111111-1111-1111-1111-111111111111
To get the Rackspace private network ID, go to Networking, Networks and hover over the private network name.
The order of the networks in the above code block does not map to the order of the ethernet devices on newly created servers. Public IP will always be first ( eth0 ) followed by servicenet ( eth1 ) and then private networks.
Enabling the private network per above gives the option of using the private subnet for
all master-minion communication, including the bootstrap install of salt-minion. To
enable the minion to use the private subnet, update the master: line in the minion:
section of the providers file. To configure the master to only listen on the private
subnet IP, update the interface: line in the /etc/salt/master file to be the private
subnet IP of the salt master.
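As a minimal sketch (the 10.0.0.5 address is an illustrative placeholder for the salt master's private subnet IP), the two changes might look like this:

# In the providers file, point new minions at the master's private subnet IP
minion:
  master: 10.0.0.5

# In /etc/salt/master, bind the master to the private subnet IP
interface: 10.0.0.5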
Getting Started With Saltify
The Saltify driver is a new, experimental driver for installing Salt on existing
machines (virtual or bare metal).
Dependencies
The Saltify driver has no external dependencies.
Configuration
Because the Saltify driver does not use an actual cloud provider host, it has a simple provider configuration. The only thing that is required to be set is the driver name, and any other potentially useful information, like the location of the salt-master:
# Note: This example is for /etc/salt/cloud.providers file or any file in
# the /etc/salt/cloud.providers.d/ directory.

my-saltify-config:
  minion:
    master: 111.222.333.444
  provider: saltify
Profiles
Saltify requires a profile to be configured for each machine that needs Salt installed. The initial profile can be set up at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d/ directory. Each profile requires both an ssh_host and an ssh_username key parameter as well as either a key_filename or a password.
Profile configuration example:
# /etc/salt/cloud.profiles.d/saltify.conf

salt-this-machine:
  ssh_host: 12.34.56.78
  ssh_username: root
  key_filename: '/etc/salt/mysshkey.pem'
  provider: my-saltify-config
The machine can now be "Salted" with the following command:
salt-cloud -p salt-this-machine my-machine
This will install salt on the machine specified by the cloud profile, salt-this-machine, and will give the machine the minion id of my-machine. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Once a salt-minion has been successfully installed on the instance, connectivity to it can be verified with Salt:
salt my-machine test.ping
Using Map Files
The settings explained in the section above may also be set in a map file. An example of how to use the Saltify driver with a map file follows:
# /etc/salt/saltify-map

make_salty:
  - my-instance-0:
      ssh_host: 12.34.56.78
      ssh_username: root
      password: very-bad-password
  - my-instance-1:
      ssh_host: 44.33.22.11
      ssh_username: root
      password: another-bad-pass
Note: When using a cloud map with the Saltify driver, the name of the profile to use, in this case make_salty, must be defined in a profile config. For example:
# /etc/salt/cloud.profiles.d/saltify.conf

make_salty:
  provider: my-saltify-config
The machines listed in the map file can now be "Salted" by applying the following salt map command:
salt-cloud -m /etc/salt/saltify-map
This command will install salt on the machines specified in the map and will give the machines the minion ids of my-instance-0 and my-instance-1, respectively. If the command was executed on the salt-master, its Salt key will automatically be signed on the master.
Connectivity to the new "Salted" instances can now be verified with Salt:
salt 'my-instance-*' test.ping
Getting Started With SoftLayer
SoftLayer is a public cloud and bare metal hardware hosting provider.
Dependencies
The SoftLayer driver for Salt Cloud requires the softlayer package, which is available at PyPI:
https://pypi.python.org/pypi/SoftLayer
This package can be installed using pip or easy_install:
# pip install softlayer
# easy_install softlayer
Configuration
Set up the cloud config at /etc/salt/cloud.providers:
# Note: These examples are for /etc/salt/cloud.providers

my-softlayer:
  # Set up the location of the salt master
  minion:
    master: saltmaster.example.com

  # Set the SoftLayer access credentials (see below)
  user: MYUSER1138
  apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'

  provider: softlayer

my-softlayer-hw:
  # Set up the location of the salt master
  minion:
    master: saltmaster.example.com

  # Set the SoftLayer access credentials (see below)
  user: MYUSER1138
  apikey: 'e3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9'

  provider: softlayer_hw
Access Credentials
The user setting is the same user as is used to log into the SoftLayer Administration area. The apikey setting is found inside the Admin area after logging in:
- •
- Hover over the Account menu item.
- •
- Click the Users link.
- •
- Find the API Key column and click View.
Profiles
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles:
base_softlayer_ubuntu:
  provider: my-softlayer
  image: UBUNTU_LATEST
  cpu_number: 1
  ram: 1024
  disk_size: 100
  local_disk: True
  hourly_billing: True
  domain: example.com
  location: sjc01
  # Optional
  max_net_speed: 1000
  private_vlan: 396
  private_network: True
  private_ssh: True
  # May be used _instead_of_ image
  global_identifier: 320d8be5-46c0-dead-cafe-13e3c51
Most of the above items are required; optional items are specified below.
image
Images to build an instance can be found using the --list-images option:
# salt-cloud --list-images my-softlayer
The setting used will be labeled as template.
cpu_number
This is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance:
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core):
    ----------
    name: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (1 - 4 Core)
    template: REDHAT_6_64
Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core):
    ----------
    name: Red Hat Enterprise Linux 6 - Minimal Install (64 bit) (5 - 100 Core)
    template: REDHAT_6_64
Note that the template (meaning, the image option) for both of these is the
same, but the names suggest how many CPU cores are supported.
ram
This is the amount of memory, in megabytes, that will be allocated to this
instance.
disk_size
The amount of disk space that will be allocated to this image, in megabytes.
local_disk
When true the disks for the computing instance will be provisioned on the host
on which it runs; otherwise SAN disks will be provisioned.
hourly_billing
When true the computing instance will be billed on hourly usage, otherwise it
will be billed on a monthly basis.
domain
The domain name that will be used in the FQDN (Fully Qualified Domain Name) for
this instance. The domain setting will be used in conjunction with the
instance name to form the FQDN.
location
Locations available to build an instance can be found using the --list-locations option:
# salt-cloud --list-locations my-softlayer
max_net_speed
Specifies the connection speed for the instance's network components. This
setting is optional. By default, this is set to 10.
public_vlan
If it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below. This
setting is optional.
private_vlan
If it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below. This
setting is optional.
private_network
If a server is to only be used internally, meaning it does not have a public
VLAN associated with it, this value would be set to True. This setting is
optional. The default is False.
private_ssh
Whether to run the deploy script on the server using the public IP address
or the private IP address. If set to True, Salt Cloud will attempt to SSH into
the new server using the private IP address. The default is False. This
setting is optional.
global_identifier
When creating an instance using a custom template, this option is set to the corresponding value obtained using the list_custom_images function. This option will not be used if an image is set, and if an image is not set, it is required.
The profile can be realized now with a salt command:
# salt-cloud -p base_softlayer_ubuntu myserver
Using the above configuration, this will create myserver.example.com.
Once the instance has been created with salt-minion installed, connectivity to it can be verified with Salt:
# salt 'myserver.example.com' test.ping
Cloud Profiles
Set up an initial profile at /etc/salt/cloud.profiles:
base_softlayer_hw_centos:
  provider: my-softlayer-hw
  # CentOS 6.0 - Minimal Install (64 bit)
  image: 13963
  # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
  size: 1921
  # 500GB SATA II
  hdd: 1267
  # San Jose 01
  location: 168642
  domain: example.com
  # Optional
  vlan: 396
  port_speed: 273
  bandwidth: 248
Most of the above items are required; optional items are specified below.
image
Images to build an instance can be found using the --list-images option:
# salt-cloud --list-images my-softlayer-hw
A list of ids and names will be provided. The name will describe the
operating system and architecture. The id will be the setting to be used in
the profile.
size
Sizes to build an instance can be found using the --list-sizes option:
# salt-cloud --list-sizes my-softlayer-hw
A list of ids and names will be provided. The name will describe the speed
and quantity of CPU cores, and the amount of memory that the hardware will
contain. The id will be the setting to be used in the profile.
hdd
There is currently only one size of hard disk drive (HDD) that is available for hardware instances on SoftLayer:
1267: 500GB SATA II
The hdd setting in the profile should be 1267. Other sizes may be
added in the future.
location
Locations to build an instance can be found using the --list-locations option:
# salt-cloud --list-locations my-softlayer-hw
A list of IDs and names will be provided. The location will describe the
location in human terms. The id will be the setting to be used in the profile.
domain
The domain name that will be used in the FQDN (Fully Qualified Domain Name) for
this instance. The domain setting will be used in conjunction with the
instance name to form the FQDN.
vlan
If it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration.
This ID can be queried using the list_vlans function, as described below.
port_speed
Specifies the speed for the instance's network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273, or, 100 Mbps Public & Private Networks. The following settings are available:
- •
- 273: 100 Mbps Public & Private Networks
- •
- 274: 1 Gbps Public & Private Networks
- •
- 21509: 10 Mbps Dual Public & Private Networks (up to 20 Mbps)
- •
- 21513: 100 Mbps Dual Public & Private Networks (up to 200 Mbps)
- •
- 2314: 1 Gbps Dual Public & Private Networks (up to 2 Gbps)
- •
- 272: 10 Mbps Public & Private Networks
bandwidth
Specifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248, or, 5000 GB Bandwidth. The following settings are available:
- •
- 248: 5000 GB Bandwidth
- •
- 129: 6000 GB Bandwidth
- •
- 130: 8000 GB Bandwidth
- •
- 131: 10000 GB Bandwidth
- •
- 36: Unlimited Bandwidth (10 Mbps Uplink)
- •
- 125: Unlimited Bandwidth (100 Mbps Uplink)
Actions
The following actions are currently supported by the SoftLayer Salt Cloud
driver.
show_instance
This action is a thin wrapper around --full-query, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance.
$ salt-cloud -a show_instance myinstance
Functions
The following functions are currently supported by the SoftLayer Salt Cloud
driver.
list_vlans
This function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs.
$ salt-cloud -f list_vlans my-softlayer
$ salt-cloud -f list_vlans my-softlayer-hw
The id returned in this list is necessary for the vlan option when creating
an instance.
list_custom_images
This function lists any custom templates associated with the account, that can be used to create a new instance.
$ salt-cloud -f list_custom_images my-softlayer
The globalIdentifier returned in this list is necessary for the global_identifier option when creating an image using a custom template.
Optional Products for SoftLayer HW
The softlayer_hw provider supports the ability to add optional products, which are supported by SoftLayer's API. These products each have an ID associated with them that can be passed into Salt Cloud with the optional_products option:
softlayer_hw_test:
  provider: my-softlayer-hw
  # CentOS 6.0 - Minimal Install (64 bit)
  image: 13963
  # 2 x 2.0 GHz Core Bare Metal Instance - 2 GB Ram
  size: 1921
  # 500GB SATA II
  hdd: 1267
  # San Jose 01
  location: 168642
  domain: example.com
  optional_products:
    # MySQL for Linux
    - id: 28
    # Business Continuance Insurance
    - id: 104
These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface. For convenience, many of these values are listed here:
Public Secondary IP Addresses
- •
- 22: 4 Public IP Addresses
- •
- 23: 8 Public IP Addresses
Primary IPv6 Addresses
- •
- 17129: 1 IPv6 Address
Public Static IPv6 Addresses
- •
- 1481: /64 Block Static Public IPv6 Addresses
OS-Specific Addon
- •
- 17139: XenServer Advanced for XenServer 6.x
- •
- 17141: XenServer Enterprise for XenServer 6.x
- •
- 2334: XenServer Advanced for XenServer 5.6
- •
- 2335: XenServer Enterprise for XenServer 5.6
- •
- 13915: Microsoft WebMatrix
- •
- 21276: VMware vCenter 5.1 Standard
Control Panel Software
- •
- 121: cPanel/WHM with Fantastico and RVskin
- •
- 20778: Parallels Plesk Panel 11 (Linux) 100 Domain w/ Power Pack
- •
- 20786: Parallels Plesk Panel 11 (Windows) 100 Domain w/ Power Pack
- •
- 20787: Parallels Plesk Panel 11 (Linux) Unlimited Domain w/ Power Pack
- •
- 20792: Parallels Plesk Panel 11 (Windows) Unlimited Domain w/ Power Pack
- •
- 2340: Parallels Plesk Panel 10 (Linux) 100 Domain w/ Power Pack
- •
- 2339: Parallels Plesk Panel 10 (Linux) Unlimited Domain w/ Power Pack
- •
- 13704: Parallels Plesk Panel 10 (Windows) Unlimited Domain w/ Power Pack
Database Software
- •
- 29: MySQL 5.0 for Windows
- •
- 28: MySQL for Linux
- •
- 21501: Riak 1.x
- •
- 20893: MongoDB
- •
- 30: Microsoft SQL Server 2005 Express
- •
- 92: Microsoft SQL Server 2005 Workgroup
- •
- 90: Microsoft SQL Server 2005 Standard
- •
- 94: Microsoft SQL Server 2005 Enterprise
- •
- 1330: Microsoft SQL Server 2008 Express
- •
- 1340: Microsoft SQL Server 2008 Web
- •
- 1337: Microsoft SQL Server 2008 Workgroup
- •
- 1334: Microsoft SQL Server 2008 Standard
- •
- 1331: Microsoft SQL Server 2008 Enterprise
- •
- 2179: Microsoft SQL Server 2008 Express R2
- •
- 2173: Microsoft SQL Server 2008 Web R2
- •
- 2183: Microsoft SQL Server 2008 Workgroup R2
- •
- 2180: Microsoft SQL Server 2008 Standard R2
- •
- 2176: Microsoft SQL Server 2008 Enterprise R2
Anti-Virus & Spyware Protection
- •
- 594: McAfee VirusScan Anti-Virus - Windows
- •
- 414: McAfee Total Protection - Windows
Insurance
- •
- 104: Business Continuance Insurance
Monitoring
- •
- 55: Host Ping
- •
- 56: Host Ping and TCP Service Monitoring
Notification
- •
- 57: Email and Ticket
Advanced Monitoring
- •
- 2302: Monitoring Package - Basic
- •
- 2303: Monitoring Package - Advanced
- •
- 2304: Monitoring Package - Premium Application
Response
- •
- 58: Automated Notification
- •
- 59: Automated Reboot from Monitoring
- •
- 60: 24x7x365 NOC Monitoring, Notification, and Response
Intrusion Detection & Protection
- •
- 413: McAfee Host Intrusion Protection w/Reporting
Hardware & Software Firewalls
- •
- 411: APF Software Firewall for Linux
- •
- 894: Microsoft Windows Firewall
- •
- 410: 10Mbps Hardware Firewall
- •
- 409: 100Mbps Hardware Firewall
- •
- 408: 1000Mbps Hardware Firewall
Getting Started with VEXXHOST
VEXXHOST is a Canadian cloud computing provider based in Montreal that uses the libcloud OpenStack driver. VEXXHOST currently runs the Havana release of OpenStack. When new instances are provisioned, they automatically receive both a public and a private IP address, so you do not need to assign a floating IP to access your instance once it has booted.
Cloud Provider Configuration
In order to use the VEXXHOST public cloud, you will need to set up a cloud provider configuration file that uses the OpenStack driver, as in the example below:
/etc/salt/cloud.providers.d/vexxhost.conf:
vexxhost:
  # Set the location of the salt-master
  #
  minion:
    master: saltmaster.example.com

  # Configure VEXXHOST using the OpenStack plugin
  #
  identity_url: http://auth.api.thenebulacloud.com:5000/v2.0/tokens
  compute_name: nova

  # Set the compute region:
  #
  compute_region: na-yul-nhs1

  # Configure VEXXHOST authentication credentials
  #
  user: your-tenant-id
  password: your-api-key
  tenant: your-tenant-name

  # keys to allow connection to the instance launched
  #
  ssh_key_name: yourkey
  ssh_key_file: /path/to/key/yourkey.priv

  provider: openstack
Authentication
All of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you've logged in, you will need to click on "CloudConsole" and then click on "API Credentials".
Cloud Profile Configuration
In order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following commands, respectively:
# salt-cloud --list-images=vexxhost-config
# salt-cloud --list-sizes=vexxhost-config
Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS nb.2G instance.
/etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf:
vh_ubuntu1204_2G:
  provider: vexxhost
  image: 4051139f-750d-4d72-8ef0-074f2ccc7e5a
  size: nb.2G
Provision an instance
To create an instance based on the sample profile that we created above, you can run the following salt-cloud command.
# salt-cloud -p vh_ubuntu1204_2G vh_instance1
Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up as a minion, and all of the instance information will be returned once the process is complete.
Once the instance has been set up, you can test connectivity to it by running the following command:
# salt vh_instance1 test.ping
You can now continue to provision new instances and they will all automatically be set up as minions of the master you've defined in the configuration file.
Getting Started With vSphere
NOTE: Deprecated since version Carbon: The vsphere cloud driver has been deprecated in favor of the vmware cloud driver and will be removed in Salt Carbon. Please refer to Getting started with VMware instead to get started with the configuration.
VMware vSphere is a management platform for virtual infrastructure and cloud computing.
Dependencies
The vSphere module for Salt Cloud requires the PySphere package, which is available at PyPI:
https://pypi.python.org/pypi/pysphere
This package can be installed using pip or easy_install:
# pip install pysphere
# easy_install pysphere
Configuration
Set up the cloud config at /etc/salt/cloud.providers or in the /etc/salt/cloud.providers.d/ directory:
my-vsphere-config:
  provider: vsphere
  # Set the vSphere access credentials
  user: marco
  password: polo
  # Set the URL of your vSphere server
  url: 'vsphere.example.com'
Profiles
Cloud Profiles
vSphere uses a Managed Object Reference (MOR) to identify objects located in vCenter. The MOR IDs are used when configuring a vSphere cloud profile. Use the following reference when locating the MORs for the cloud profile.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1017126&sliceId=1&docTypeID=DT_KB_1_1&dialogID=520386078&stateId=1%200%20520388386
Set up an initial profile at /etc/salt/cloud.profiles or in the /etc/salt/cloud.profiles.d directory:
vsphere-centos:
  provider: my-vsphere-config
  image: centos

  # Optional
  datastore: datastore-15
  resourcepool: resgroup-8
  folder: salt-cloud
  host: host-9
  template: False
provider
Enter the name that was specified when the cloud provider profile was created.
image
Images available to build an instance can be found using the --list-images option:
# salt-cloud --list-images my-vsphere-config
datastore
The MOR of the datastore where the virtual machine should be located. If not specified, the current datastore is used.
resourcepool
The MOR of the resourcepool to be used for the new VM. If not set, it will use the same resourcepool as the original VM.
folder
Name of the folder that will contain the new VM. If not set, the VM will be added to the folder the original VM belongs to.
host
The MOR of the host where the VM should be registered.
- If not specified:
- •
- if resourcepool is not specified, the current host is used.
- •
- if resourcepool is specified, and the target pool represents a stand-alone host, the host is used.
- •
- if resourcepool is specified, and the target pool represents a DRS-enabled cluster, a host selected by DRS is used.
- •
- if resourcepool is specified, and the target pool represents a cluster without DRS enabled, an InvalidArgument exception will be thrown.
template
Specifies whether or not the new virtual machine should be marked as a template. Default is False.
Miscellaneous Options
Miscellaneous Salt Cloud Options
This page describes various miscellaneous options available in Salt Cloud
Deploy Script Arguments
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:
ec2-amazon:
  provider: ec2
  image: ami-1624987f
  size: t1.micro
  ssh_username: ec2-user
  script: bootstrap-salt
  script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: | head
Selecting the File Transport
By default, Salt Cloud uses SFTP to transfer files to Linux hosts. However, if SFTP is not available, or specific SCP functionality is needed, Salt Cloud can be configured to use SCP instead.
file_transport: sftp
file_transport: scp
Sync After Install
Salt allows users to create custom modules, grains, and states which can be synchronised to minions to extend Salt with further functionality.
This option will inform Salt Cloud to synchronise your custom modules, grains, states, or all of these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file:
sync_after_install: all
The available options for this setting are:
modules
grains
states
all
Setting up New Salt Masters
It has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master, in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file.
make_master: True
This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package.
The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map:
master:
  user: root
  interface: 0.0.0.0
Delete SSH Keys
When Salt Cloud deploys an instance, the SSH pub key for the instance is added to the known_hosts file for the user that ran the salt-cloud command. When an instance is deployed, a cloud provider generally recycles the IP address for the instance. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict.
In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file:
delete_sshkeys: True
Keeping /tmp/ Files
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Hide Output From Minion Install
By default, Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output:
display_ssh_output: False
Connection Timeout
There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen: the VM getting its IP address, the VM's SSH port becoming available, etc.
If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak.
- Note
-
All settings should be provided in lowercase. All values should be provided in seconds.
You can tweak these settings globally, per cloud provider, or even per profile definition.
wait_for_ip_timeout
The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud provider. Default: 5 minutes.
wait_for_ip_interval
The amount of time Salt Cloud should sleep while querying for the VM's IP. Default: 5 seconds.
ssh_connect_timeout
The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: 5 minutes.
wait_for_passwd_timeout
The amount of time until an SSH connection can be established via password or SSH key. Default: 15 seconds.
wait_for_passwd_maxtries
The number of attempts to connect to the VM before giving up. Default: 15 attempts.
wait_for_fun_timeout
Some cloud drivers, namely SoftLayer and SoftLayer-HW, check for an available IP or a successful SSH connection using a function. This setting is the amount of time Salt Cloud should retry such functions before failing. Default: 5 minutes.
wait_for_spot_timeout
The amount of time Salt Cloud should wait for an EC2 Spot instance to become available. This setting is only available for the EC2 cloud driver.
Salt Cloud Cache
Salt Cloud can maintain a cache of node data, for supported providers. The following options manage this functionality.
update_cachedir
On supported cloud providers, whether or not to maintain a cache of nodes returned from a --full-query. The data will be stored in msgpack format under <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p. This setting can be True or False.
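For example, one cached entry can be inspected directly with the msgpack library. This is only a hedged illustration: the path below assumes the default cachedir of /var/cache/salt/cloud, the ec2 driver, a provider named my-ec2-config, and a node named web1, all of which will differ on your system.

import pprint

import msgpack

# Path format: <SALT_CACHEDIR>/cloud/active/<DRIVER>/<PROVIDER>/<NODE_NAME>.p
cache_file = '/var/cache/salt/cloud/active/ec2/my-ec2-config/web1.p'

with open(cache_file, 'rb') as handle:
    # Each cache entry is a msgpack-serialized dict describing the node
    pprint.pprint(msgpack.unpackb(handle.read()))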
diff_cache_events
When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud provider and the data in the cache, fire events which describe the changes. This setting can be True or False.
Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the cache_event_strip_fields configuration option exists to strip those fields from the event return.
cache_event_strip_fields:
  - password
  - priv_key
The following are events that can be fired based on this data.
salt/cloud/minionid/cache_node_new
A new node was found on the cloud provider which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event.
salt/cloud/minionid/cache_node_missing
A node that was previously listed in the cloud cachedir is no longer available on the cloud provider.
salt/cloud/minionid/cache_node_diff
One or more pieces of data in the cloud cachedir have changed on the cloud provider. A dict containing both the old and the new data will be contained in the event.
SSH Known Hosts
Normally when bootstrapping a VM, salt-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn't exist yet). If strict host key checking is turned on without the key in the known_hosts file, then the host will never be available, and cannot be bootstrapped.
If a provider is able to determine the host key before trying to bootstrap it, that provider's driver can add it to the known_hosts file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally /etc/salt/cloud) or in the provider-specific configuration file:
known_hosts_file: /path/to/.ssh/known_hosts
If this is not set, it will default to /dev/null, and strict host key checking will be turned off.
It is highly recommended that this option is not set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality.
SSH Agent
New in version 2015.5.0.
If the ssh key is not stored on the server salt-cloud is being run on, set ssh_agent, and salt-cloud will use the forwarded ssh-agent to authenticate.
ssh_agent: True
File Map Upload
New in version 2014.7.0.
The file_map option allows an arbitrary group of files to be uploaded to the target system before running the deploy script. This functionality requires that the provider use salt.utils.cloud.bootstrap(), which is currently limited to the ec2, gce, openstack, and nova drivers.
The file_map can be configured globally in /etc/salt/cloud, or in any cloud provider or profile file. For example, to upload an extra package or a custom deploy script, a cloud profile using file_map might look like:
ubuntu14:
  provider: ec2-config
  image: ami-98aa1cf0
  size: t1.micro
  ssh_username: root
  securitygroup: default
  file_map:
    /local/path/to/custom/script: /remote/path/to/use/custom/script
    /local/path/to/package: /remote/path/to/store/package
Troubleshooting Steps
Troubleshooting Salt Cloud
This page describes various steps for troubleshooting problems that may arise while using Salt Cloud.
Virtual Machines Are Created, But Do Not Respond
Are TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found here.
Generic Troubleshooting Steps
This section describes a set of instructions that are useful to a large number of situations, and are likely to solve most issues that arise.
- Version Compatibility
-
One of the most common issues that Salt Cloud users run into is import errors. These are often caused by version compatibility issues with Salt.
Salt 0.16.x works with Salt Cloud 0.8.9 or greater.
Salt 0.17.x requires Salt Cloud 0.8.11.
Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been merged into Salt itself.
Debug Mode
Frequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious:
salt-cloud -p myprofile myinstance -l debug
Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured.
Salt Bootstrap
By default, Salt Cloud uses the Salt Bootstrap script to provision instances:
https://github.com/saltstack/salt-bootstrap
This script is packaged with Salt Cloud, but may be updated without updating the Salt package:
salt-cloud -u
The Bootstrap Log
If the default deploy script was used, there should be a file in the /tmp/ directory called bootstrap-salt.log. This file contains the full output from the deployment, including any errors that may have occurred.
Keeping Temp Files
Salt Cloud uploads minion-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The --keep-tmp option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems:
salt-cloud -p myprofile myinstance --keep-tmp
By default, Salt Cloud will create a directory on the target instance called /tmp/.saltcloud/. This directory should be owned by the user that is to execute the deploy script, and should have permissions of 0700.
Most cloud providers are configured to use root as the default initial user for deployment, and as such, this directory and all files in it should be owned by the root user.
The /tmp/.saltcloud/ directory should contain the following files:
- •
- A deploy.sh script. This script should have permissions of 0755.
- •
- A .pem and .pub key named after the minion. The .pem file should have permissions of 0600. Ensure that the .pem and .pub files have been properly copied to the /etc/salt/pki/minion/ directory.
- •
- A file called minion. This file should have been copied to the /etc/salt/ directory.
- •
- Optionally, a file called grains. This file, if present, should have been copied to the /etc/salt/ directory.
Unprivileged Primary Users
Some providers, most notably EC2, are configured with a different primary user. Some common examples are ec2-user, ubuntu, fedora, and bitnami. In these cases, the /tmp/.saltcloud/ directory and all files in it should be owned by this user.
Some providers, such as EC2, are configured to not require these users to provide a password when using the sudo command. Because it is more secure to require sudo users to provide a password, other providers are configured that way.
If this instance is required to provide a password, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration:
sudo_password: mypassword
/tmp/ is Mounted as noexec
It is more secure to mount the /tmp/ directory with a noexec option. This is uncommon on most cloud providers, but very common in private environments. To see if the /tmp/ directory is mounted this way, run the following command:
mount | grep tmp
If the output of this command includes a line that looks like the following, then the /tmp/ directory is mounted as noexec:
tmpfs on /tmp type tmpfs (rw,noexec)
If this is the case, then the deploy_command will need to be changed in order to run the deploy script through the sh command, rather than trying to execute it directly. This may be specified in either the provider or the profile config:
deploy_command: sh /tmp/.saltcloud/deploy.sh
Please note that by default, Salt Cloud will place its files in a directory called /tmp/.saltcloud/. This may also be changed in the provider or profile configuration:
tmp_dir: /tmp/.saltcloud/
If this directory is changed, then the deploy_command also needs to be changed in order to reflect the tmp_dir configuration.
Executing the Deploy Script Manually
If all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues:
cd /tmp/.saltcloud/
./deploy.sh
Extending Salt Cloud
Writing Cloud Provider Modules
Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source.
There are two basic types of cloud modules. If a cloud provider is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at:
https://libcloud.apache.org/
Not every cloud provider is supported by libcloud. Additionally, not every feature in a supported cloud provider is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud.
All Modules
The following functions are required by all modules, whether or not they are based on libcloud.
The __virtual__() Function
This function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True. An example of this may be seen in the Azure module:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/msazure.py
The get_configured_provider() Function
This function uses config.is_provider_configured() to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma.
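The following is a minimal sketch of these two functions for a hypothetical driver named examplecloud; the __opts__ dict is injected by the Salt loader, and the user and password setting names are assumptions chosen for illustration. Consult the real drivers in salt/cloud/clouds/ for the exact pattern used by each provider.

# Hypothetical cloud module snippet; __opts__ is provided by the Salt loader
import salt.config as config

__virtualname__ = 'examplecloud'


def __virtual__():
    '''
    Only load this module if the examplecloud configuration is in place
    '''
    if get_configured_provider() is False:
        return False
    return True


def get_configured_provider():
    '''
    Return the first matching provider configuration, or False
    '''
    return config.is_provider_configured(
        __opts__,
        __virtualname__,
        ('user', 'password',)   # note the trailing comma after the last key
    )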
Libcloud Based Modules
Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project.
The create() Function
The most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud provider, wait for it to become available, and then (optionally) log in and install Salt on it.
A good example to follow for writing a cloud provider module based on libcloud is the module provided for Linode:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/linode.py
The basic flow of a create() function is as follows:
- •
- Send a request to the cloud provider to create a virtual machine.
- •
- Wait for the virtual machine to become available.
- •
- Generate kwargs to be used to deploy Salt.
- •
- Log into the virtual machine and deploy Salt.
- •
-
Return a data structure that describes the newly-created virtual machine.
At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate.
When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables.
The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile and provider.
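As a rough, hedged illustration (not taken verbatim from any one driver), the opening of a create() function that fires this event might look like the following; salt.utils.cloud.fire_event() is the helper used by the bundled drivers, and the transport argument mirrors what those drivers pass:

import salt.utils.cloud


def create(vm_):
    '''
    Create a single VM from the vm_ data dict
    '''
    # Announce that the create process has started
    salt.utils.cloud.fire_event(
        'event',
        'starting create',
        'salt/cloud/{0}/creating'.format(vm_['name']),
        {
            'name': vm_['name'],
            'profile': vm_['profile'],
            'provider': vm_['provider'],
        },
        transport=__opts__['transport']
    )
    # ... the request kwargs, later events, deployment, and return data
    # follow, as described in the remainder of this section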
A set of kwargs is then usually created, to describe the parameters required by the cloud provider to request the virtual machine.
An event is then fired to state that a virtual machine is about to be requested. It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud provider. Any private information (such as passwords) should not be sent in the event.
After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud provider does not currently support Windows. This will save time in the future if the provider does eventually decide to support Windows.
An event is then fired to state that the deploy process is about to begin. This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys), should be stripped from the deploy kwargs before the event is fired.
If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and the salt.utils.cloud.deploy_script() will be called.
Both of these functions will wait for the target machine to become available, then wait for the necessary port to accept connections, and then wait for a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module.
The salt.utils.cloud.validate_windows_cred() function has been extended to take the number of retries and retry_delay parameters in case a specific cloud provider has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or in a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end-user the ability to customize the number of tries and delay between tries for their particular provider.
After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created. The payload contains the names of the VM, profile, and provider.
Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud provider. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the provider).
The libcloudfuncs Functions
A number of other functions are required for all cloud providers. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports:
from salt.cloud.libcloudfuncs import *   # pylint: disable=W0614,W0401
from salt.utils import namespaced_function
And then a series of declarations will make the necessary functions available within the cloud module.
get_size = namespaced_function(get_size, globals())
get_image = namespaced_function(get_image, globals())
avail_locations = namespaced_function(avail_locations, globals())
avail_images = namespaced_function(avail_images, globals())
avail_sizes = namespaced_function(avail_sizes, globals())
script = namespaced_function(script, globals())
destroy = namespaced_function(destroy, globals())
list_nodes = namespaced_function(list_nodes, globals())
list_nodes_full = namespaced_function(list_nodes_full, globals())
list_nodes_select = namespaced_function(list_nodes_select, globals())
show_instance = namespaced_function(show_instance, globals())
If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal.
These functions are required for all cloud modules, and are described in detail in the next section.
Non-Libcloud Based Modules
In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module.
A good example of a non-libcloud provider is the DigitalOcean module:
https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/digital_ocean.py
The create() Function
The create() function must be created as described in the libcloud-based module documentation.
The get_size() Function
This function is only necessary for libcloud-based modules, and does not need to exist otherwise.
The get_image() Function
This function is only necessary for libcloud-based modules, and does not need to exist otherwise.
The avail_locations() Function
This function returns a list of locations available, if the cloud provider uses multiple data centers. It is not necessary if the cloud provider only uses one data center. It is normally called using the --list-locations option.
salt-cloud --list-locations my-cloud-provider
The avail_images() Function
This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option.
salt-cloud --list-images my-cloud-provider
The avail_sizes() Function
This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option.
salt-cloud --list-sizes my-cloud-provider
The script() Function
This function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver.
The destroy() Function
This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying. Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed.
This function is normally called with the -d option:
salt-cloud -d myinstance
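A rough, hedged sketch of such a destroy() function for a hypothetical driver is shown below; the _destroy_node() helper stands in for whatever provider API call actually deletes the machine and is not a real Salt function:

import salt.utils.cloud


def destroy(name, call=None):
    '''
    Destroy a node (salt-cloud -d myinstance)
    '''
    # Announce that the node is about to be destroyed
    salt.utils.cloud.fire_event(
        'event',
        'destroying instance',
        'salt/cloud/{0}/destroying'.format(name),
        {'name': name},
    )

    _destroy_node(name)   # hypothetical provider API call that deletes the VM

    # Announce that the node is gone
    salt.utils.cloud.fire_event(
        'event',
        'destroyed instance',
        'salt/cloud/{0}/destroyed'.format(name),
        {'name': name},
    )
    return True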
The list_nodes() Function
This function returns a list of nodes available on this cloud provider, using the following fields:
- •
- id (str)
- •
- image (str)
- •
- size (str)
- •
- state (str)
- •
- private_ips (list)
- •
-
public_ips (list)
No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option:
salt-cloud -Q
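As an unofficial sketch, a list_nodes() that derives these fields from an existing list_nodes_full() in the same module might look like the following; the field names in the provider's raw data are assumptions and vary by driver:

def list_nodes(call=None):
    '''
    Return a list of nodes, keeping only the standard fields
    '''
    ret = {}
    for name, data in list_nodes_full('function').items():
        # Always return every standard field, even if empty
        ret[name] = {
            'id': data.get('id', ''),
            'image': data.get('image', ''),
            'size': data.get('size', ''),
            'state': data.get('state', ''),
            'private_ips': data.get('private_ips', []),
            'public_ips': data.get('public_ips', []),
        }
    return ret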
The list_nodes_full() Function
All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions, both within Salt and in third-party tools, will break if an expected field is not present. This function is normally called with the -F option:
salt-cloud -F
The list_nodes_select() Function
This function returns only the fields specified in the query.selection option in /etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library.
A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is:
def list_nodes_select(call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full('function'), __opts__['query.selection'], call,
    )
However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately:
def list_nodes_select(conn=None, call=None):
    '''
    Return a list of the VMs that are on the provider, with select fields
    '''
    if not conn:
        conn = get_conn()   # pylint: disable=E0602

    return salt.utils.cloud.list_nodes_select(
        list_nodes_full(conn, 'function'), __opts__['query.selection'], call,
    )
This function is normally called with the -S option:
salt-cloud -S
The show_instance() Function
This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action:
salt-cloud -a show_instance myinstance
Actions and Functions
Extra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider.
Actions
Actions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option:
salt-cloud -a show_instance myinstance
Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None.
Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like:
def show_instance(name, call=None):
    '''
    Show the details from EC2 concerning an AMI
    '''
    if call != 'action':
        raise SaltCloudSystemExit(
            'The show_instance action must be called with -a or --action.'
        )

    return _get_node(name)
Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section.
Functions
Functions are calls that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option:
salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit'
A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None.
Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic function looks like:
def show_image(kwargs, call=None):
    '''
    Show the details from EC2 concerning an AMI
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The show_image action must be called with -f or --function.'
        )

    params = {'ImageId.1': kwargs['image'],
              'Action': 'DescribeImages'}
    result = query(params)
    log.info(result)

    return result
Take note that generic kwargs are passed through to functions as kwargs and not **kwargs.
OS Support for Cloud VMs
Salt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the script. In older versions, this was the os argument. This was changed in 0.8.2.
A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at:
https://github.com/saltstack/salt-bootstrap
If you do not specify a script argument, this script will be used as the default.
If the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in bash and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete salt setup. These functions include:
- 1.
- Install the salt minion. If this can be done via system packages, this method is HIGHLY preferred.
- 2.
- Add the salt minion keys before the minion is started for the first time. The minion keys are available as strings that can be copied into place in the Jinja template under the dict named "vm".
- 3.
- Start the salt-minion daemon and enable it at startup time.
- 4.
-
Set up the minion configuration file from the "minion" data available in
the Jinja template.
A good, well-commented example of this process is the Fedora deployment script:
https://github.com/saltstack/salt-cloud/blob/master/saltcloud/deploy/Fedora.sh
A number of legacy deploy scripts are included with the release tarball. None of them is as functional or complete as Salt Bootstrap, and they are included only for academic purposes.
Other Generic Deploy Scripts
If you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree:
curl-bootstrap
curl-bootstrap-git
python-bootstrap
wget-bootstrap
wget-bootstrap-git
These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt-bootstrap script, such as updating to specific git tags.
Post-Deploy Commands
Once a minion has been deployed, it has the option to run a salt command. Normally, this would be the state.highstate command, which would finish provisioning the VM. Another common option is state.sls, or for just testing, test.ping. This is configured in the main cloud config file:
start_action: state.highstate
This is currently considered to be experimental functionality, and may not work well with all providers. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead:
http://docs.saltstack.com/ref/states/startup.html
Skipping the Deploy Script
For whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line:
salt-cloud --no-deploy -p micro_aws my_instance
Or it can be set from the main cloud config file:
deploy: False
Or it can be set from the provider's configuration:
RACKSPACE.user: example_user
RACKSPACE.apikey: 123984bjjas87034
RACKSPACE.deploy: False
Or even on the VM's profile settings:
ubuntu_aws:
  provider: aws
  image: ami-7e2da54e
  size: t1.micro
  deploy: False
The default for deploy is True.
In the profile, you may also set the script option to None:
script: None
This is the slowest option, since it still uploads the None deploy script and executes it.
Updating Salt Bootstrap
Salt Bootstrap can be updated automatically with salt-cloud:
salt-cloud -u
salt-cloud --update-bootstrap
Bear in mind that this updates to the latest (unstable) version, so use with caution.
Keeping /tmp/ Files
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Deploy Script Arguments
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:
aws-amazon:
  provider: aws
  image: ami-1624987f
  size: t1.micro
  ssh_username: ec2-user
  script: bootstrap-salt
  script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: | head
Using Salt Cloud from Salt
Using the Salt Modules for Cloud
In addition to the salt-cloud command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner.
Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt's own file.managed state function. For example, the following directories allow this configuration to be managed easily:
/etc/salt/cloud.providers.d/
/etc/salt/cloud.profiles.d/
Minion Keys
Keep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will not attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided.
Execution Module
The cloud module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available:
list_images
This command is designed to show images that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified:
salt myminion cloud.list_images my-cloud-provider
list_sizes
This command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing sizes requires a provider to be configured, and specified:
salt myminion cloud.list_sizes my-cloud-provider
list_locations
This command is designed to show locations that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider-specific documentation for details. Listing locations requires a provider to be configured, and specified:
salt myminion cloud.list_locations my-cloud-provider
query
This command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields:
- id
- The name or ID of the instance, as used by the cloud provider.
- image
- The disk image that was used to create this instance.
- private_ips
- Any private IP addresses currently assigned to this instance.
- public_ips
- Any public IP addresses currently assigned to this instance.
- size
- The size of the instance; can refer to RAM, CPU(s), disk space, etc., depending on the cloud provider.
- state
-
The running state of the instance; for example, running, stopped,
pending, etc. This state is dependent upon the provider.
This command may also be used to perform a full query or a select query, as described below. The following usages are available:
salt myminion cloud.query
salt myminion cloud.query list_nodes
salt myminion cloud.query list_nodes_full
full_query
This command behaves like the query command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the query command.
salt myminion cloud.full_query
select_query
This command behaves like the query command, but only returns select fields as defined in the /etc/salt/cloud configuration file. A sample configuration for this section of the file might look like:
query.selection:
  - id
  - key_name
This configuration would only return the id and key_name fields, for those cloud providers that support those two fields. This would be called using the following command:
salt myminion cloud.select_query
profile
This command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it.
salt myminion cloud.profile ec2-centos64-x64 my-new-instance
Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.
create
This command is similar to the profile command, in that it is used to create a new instance. However, it does not require a profile to be pre-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance:
salt myminion cloud.create my-ec2-config my-new-instance \
    image=ami-1624987f size='t1.micro' ssh_username=ec2-user \
    securitygroup=default delvol_on_destroy=True
Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.
destroy
This command is used to destroy an instance or instances. This command will search all configured providers and remove any instance(s) which match the name(s) passed in here. The results of this command are non-reversible and should be used with caution.
salt myminion cloud.destroy myinstance
salt myminion cloud.destroy myinstance1,myinstance2
action
This command implements both the action and the function commands used in the standard salt-cloud command. If one of the standard action commands is used, an instance name must be provided. If one of the standard function commands is used, a provider configuration must be named.
salt myminion cloud.action start instance=myinstance
salt myminion cloud.action show_image provider=my-ec2-config \
    image=ami-1624987f
The actions available are largely dependent upon the module for the specific cloud provider. The following actions are available for all cloud providers:
- list_nodes
- This is a direct call to the query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
- list_nodes_full
- This is a direct call to the full_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
- list_nodes_select
- This is a direct call to the select_query function as described above, but is only performed against a single cloud provider. A provider configuration must be included.
- show_instance
- This is a thin wrapper around list_nodes, which returns the full information about a single instance. An instance name must be provided.
State Module
A subset of the execution module is available through the cloud state module. Not all functions are currently included, because there is currently insufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time.
cloud.present
This state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the cloud.create execution module and function may be declared here, but only the actual presence of the instance will be managed statefully.
my-instance-name:
  cloud.present:
    - provider: my-ec2-config
    - image: ami-1624987f
    - size: 't1.micro'
    - ssh_username: ec2-user
    - securitygroup: default
    - delvol_on_destroy: True
cloud.profile
This state will ensure that an instance is present inside a particular cloud provider. This function calls the cloud.profile execution module and function, but as with cloud.present, only the actual presence of the instance will be managed statefully.
my-instance-name:
  cloud.profile:
    - profile: ec2-centos64-x64
cloud.absent
This state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is non-reversible and may be considered especially destructive when issued as a cloud state.
my-instance-name:
  cloud.absent
Runner Module
The cloud runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master.
Using the functions in the runner module is no different than using those in the execution module, outside of the behavior described in the above paragraph. The following functions are available inside the runner:
- •
- list_images
- •
- list_sizes
- •
- list_locations
- •
- query
- •
- full_query
- •
- select_query
- •
- profile
- •
- destroy
- •
-
action
Outside of the standard usage of salt-run itself, commands are executed as usual:
salt-run cloud.profile ec2-centos64-x86_64 my-instance-name
CloudClient
The execution, state, and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it:
import salt.cloud
import pprint

client = salt.cloud.CloudClient('/etc/salt/cloud')
nodes = client.query()
pprint.pprint(nodes)
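As a further, hedged example, the same client object can also create and destroy instances; the profile name below is an assumption that reuses the ec2-centos64-x64 profile shown earlier in this document:

import pprint

import salt.cloud

client = salt.cloud.CloudClient('/etc/salt/cloud')

# Create an instance from a configured profile
pprint.pprint(client.profile('ec2-centos64-x64', names=['my-new-instance']))

# Destroy it again
pprint.pprint(client.destroy(names=['my-new-instance']))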
Feature Comparison
Feature Matrix
A number of features are available in most cloud providers, but not all are available everywhere. This may be because the feature isn't supported by the cloud provider itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance).
This matrix shows which features are available in which cloud providers, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud providers, and should not be used to make business decisions concerning choosing a cloud provider. In most cases, adding support for a feature to Salt Cloud requires only a little effort.
Legacy Drivers
Both AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those providers.
The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace.
Note for Developers
When adding new features to a particular cloud provider, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix it are appreciated.
Standard Features
These are features that are available for almost every provider.
AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware |
Aliyun
| |
Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
Full Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
Selective Query | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
List Sizes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
List Images | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
List Locations | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
create | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
|
destroy | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Yes
| |
Actions
These are features that are performed on a specific instance, and require an instance name to be passed in. For example:
# salt-cloud -a attach_volume ami.example.com
Actions | AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware |
Aliyun
|
attach_volume | Yes |
| ||||||||||||
create_attach_volumes | Yes | Yes |
| |||||||||||
del_tags | Yes | Yes |
| |||||||||||
delvol_on_destroy | Yes |
| ||||||||||||
detach_volume | Yes |
| ||||||||||||
disable_term_protect | Yes | Yes |
| |||||||||||
enable_term_protect | Yes | Yes |
| |||||||||||
get_tags | Yes | Yes |
| |||||||||||
keepvol_on_destroy | Yes |
| ||||||||||||
list_keypairs | Yes |
| ||||||||||||
rename | Yes | Yes |
| |||||||||||
set_tags | Yes | Yes |
| |||||||||||
show_delvol_on_destroy | Yes |
| ||||||||||||
show_instance | Yes | Yes | Yes | Yes | Yes |
Yes
| ||||||||
show_term_protect | Yes |
| ||||||||||||
start | Yes | Yes | Yes | Yes |
Yes
| |||||||||
stop | Yes | Yes | Yes | Yes |
Yes
| |||||||||
take_action | Yes |
| ||||||||||||
Functions
These are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example:
# salt-cloud -f list_images my_digitalocean
Functions | AWS (Legacy) | CloudStack | Digital Ocean | EC2 | GoGrid | JoyEnt | Linode | OpenStack | Parallels | Rackspace (Legacy) | Saltify | Softlayer | Softlayer Hardware | Aliyun |
block_device_mappings | Yes |
create_keypair | Yes |
create_volume | Yes |
delete_key | Yes |
delete_keypair | Yes |
delete_volume | Yes |
get_image | Yes | Yes | Yes | Yes |
get_ip | Yes |
get_key | Yes |
get_keyid | Yes |
get_keypair | Yes |
get_networkid | Yes |
get_node | Yes |
get_password | Yes |
get_size | Yes | Yes | Yes |
get_spot_config | Yes |
get_subnetid | Yes |
iam_profile | Yes | Yes | Yes |
import_key | Yes |
key_list | Yes |
keyname | Yes | Yes |
list_availability_zones | Yes | Yes |
list_custom_images | Yes |
list_keys | Yes |
list_nodes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_nodes_full | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_nodes_select | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
list_vlans | Yes | Yes |
rackconnect | Yes |
reboot | Yes | Yes | Yes |
reformat_node | Yes |
securitygroup | Yes | Yes |
securitygroupid | Yes | Yes |
show_image | Yes | Yes | Yes |
show_key | Yes |
show_keypair | Yes | Yes |
show_volume | Yes | Yes |
Tutorials
Using Salt Cloud with the Event Reactor
One of the most powerful features of the Salt framework is the Event Reactor. While the Reactor was in development, Salt Cloud was updated regularly so that it could take advantage of the Reactor once it was complete. As such, various aspects of both the creation and destruction of instances with Salt Cloud fire events to the Salt Master, which can be used by the Event Reactor.
Event Structure
As of this writing, all events in Salt Cloud have a tag, which includes the ID of the instance being managed, and a payload which describes the task that is currently being handled. A Salt Cloud tag looks like:
salt/cloud/<minion_id>/<task>
For instance, the first event fired when creating an instance named web1 would look like:
salt/cloud/web1/creating
Assuming this instance is using the ec2-centos profile, which is in turn using the ec2-config provider, the payload for this tag would look like:
{'name': 'web1', 'profile': 'ec2-centos', 'provider': 'ec2-config'}
Available Events
When an instance is created in Salt Cloud, whether by map, profile, or directly through an API, a minimum of five events are normally fired. More may be available, depending upon the cloud provider being used. Some of the common events are described below.
salt/cloud/<minion_id>/creating
This event states simply that the process to create an instance has begun. At this point in time, no actual work has begun. The payload for this event includes the name, profile, and provider of the instance, as shown in the example above.
salt/cloud/<minion_id>/requesting
Salt Cloud is about to make a request to the cloud provider to create an instance. At this point, all of the variables required to make the request have been gathered, and the payload of the event will reflect those variables which do not normally pose a security risk. What is returned here is dependent upon the cloud provider; common variables include the name, image, size, and location of the instance being requested.
salt/cloud/<minion_id>/querying
The instance has been successfully requested, but the necessary information to log into the instance (such as IP address) is not yet available. This event marks the beginning of the process to wait for this information.
The payload for this event normally only includes the instance_id.
salt/cloud/<minion_id>/waiting_for_ssh
The information required to log into the instance has been retrieved, but the instance is not necessarily ready to be accessed. Following this event, Salt Cloud will wait for the IP address to respond to a ping, then wait for the specified port (usually 22) to respond to a connection, and on Linux systems, for SSH to become available. Salt Cloud will attempt to issue the date command on the remote system, as a means to check for availability. If no ssh_username has been specified, a list of usernames (starting with root) will be attempted. If one or more usernames were configured for ssh_username, they will be added to the beginning of the list, in order.
The payload for this event normally only includes the ip_address.
salt/cloud/<minion_id>/deploying
The necessary port has been detected as available, and now Salt Cloud can log into the instance, upload any files used for deployment, and run the deploy script. Once the script has completed, Salt Cloud will log back into the instance and remove any remaining files.
A number of variables are used to deploy instances, and the majority of these will be available in the payload. Any keys, passwords or other sensitive data will be scraped from the payload. Most of the variables returned will be related to the profile or provider config, and any default values that could have been changed in the profile or provider, but weren't.
salt/cloud/<minion_id>/created
The deploy sequence has completed, and the instance is now available, Salted, and ready for use. This event is the final task for Salt Cloud, before returning instance information to the user and exiting.
The payload for this event contains little more than the initial creating event. This event is required in all cloud providers.
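These events can be watched from the master as they arrive on Salt's event bus. The following is a minimal sketch (not part of the Salt Cloud documentation) that prints every Salt Cloud event; it assumes the default master sock_dir of /var/run/salt/master and Salt's salt.utils.event interface.

# Minimal sketch: print Salt Cloud events as they arrive on the master event bus.
# Assumes the default sock_dir (/var/run/salt/master); run as a user that can read it.
import salt.utils.event

event_bus = salt.utils.event.MasterEvent('/var/run/salt/master')

while True:
    # Tags are matched by prefix, so 'salt/cloud/' catches creating, requesting,
    # querying, waiting_for_ssh, deploying, created, and so on.
    ret = event_bus.get_event(wait=30, tag='salt/cloud/', full=True)
    if ret is None:
        continue  # timed out waiting; poll again
    print(ret['tag'], ret['data'])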
Configuring the Event Reactor
The Event Reactor is built into the Salt Master process, and as such is configured via the master configuration file. Normally this will be a YAML file located at /etc/salt/master. Additionally, master configuration items can be stored, in YAML format, inside the /etc/salt/master.d/ directory.
These configuration items may be stored in either location; however, they may only be stored in one location. For organizational and security purposes, it may be best to create a single configuration file, which contains only Event Reactor configuration, at /etc/salt/master.d/reactor.
The Event Reactor uses a top-level configuration item called reactor. This block contains a list of tags to be watched for, each of which also includes a list of sls files. For instance:
reactor:
  - 'salt/minion/*/start':
    - '/srv/reactor/custom-reactor.sls'
  - 'salt/cloud/*/created':
    - '/srv/reactor/cloud-alert.sls'
  - 'salt/cloud/*/destroyed':
    - '/srv/reactor/cloud-destroy-alert.sls'
The above configuration configures reactors for three different tags: one which is fired when a minion process has started and is available to receive commands, one which is fired when a cloud instance has been created, and one which is fired when a cloud instance is destroyed.
Note that each tag contains a wildcard (*) in it. For each of these tags, this will normally refer to a minion_id. This is not required of event tags, but is very common.
Reactor SLS Files
Reactor sls files should be placed in the /srv/reactor/ directory for consistency between environments, but this is not currently enforced by Salt.
Reactor sls files follow a similar format to other sls files in Salt. By default they are written in YAML and can be templated using Jinja, but since they are processed through Salt's rendering system, any available renderer (JSON, Mako, Cheetah, etc.) can be used.
As with other sls files, each stanza will start with a declaration ID, followed by the function to run, and then any arguments for that function. For example:
# /srv/reactor/cloud-alert.sls
new_instance_alert:
  cmd.pagerduty.create_event:
    - tgt: alertserver
    - kwarg:
        description: "New instance: {{ data['name'] }}"
        details: "New cloud instance created on {{ data['provider'] }}"
        service_key: 1626dead5ecafe46231e968eb1be29c4
        profile: my-pagerduty-account
When the Event Reactor receives an event notifying it that a new instance has been created, this sls will create a new incident in PagerDuty, using the configured PagerDuty account.
The declaration ID in this example is new_instance_alert. The function called is cmd.pagerduty.create_event. The cmd portion of this function specifies that an execution module and function will be called, in this case, the pagerduty.create_event function.
Because an execution module is specified, a target (tgt) must be specified on which to call the function. In this case, a minion called alertserver has been used. Any arguments passed through to the function are declared in the kwarg block.
Example: Reactor-Based Highstate
When Salt Cloud creates an instance, by default it will install the Salt Minion onto the instance, along with any specified minion configuration, and automatically accept that minion's keys on the master. One of the configuration options that can be specified is startup_states, which is commonly set to highstate. This will tell the minion to immediately apply a highstate, as soon as it is able to do so.
This can present a problem with some system images on some cloud providers. For instance, Salt Cloud can be configured to log in as either the root user, or a user with sudo access. While some providers commonly use images that lock out remote root access and require a user with sudo privileges to log in (notably EC2, with their ec2-user login), most cloud providers fall back to root as the default login on all images, including for operating systems (such as Ubuntu) which normally disallow remote root login.
For users of these operating systems, it is understandable that a highstate would include configuration to block remote root logins again. However, Salt Cloud may not have finished cleaning up its deployment files by the time the minion process has started and kicked off a highstate run. Users have reported that Salt Cloud then finds itself locked out of the instance while it is still trying to clean up after itself.
The goal of a startup state may be achieved using the Event Reactor. Because a minion fires an event when it is able to receive commands, this event can effectively be used inside the reactor system instead. The following will point the reactor system to the right sls file:
reactor:
  - 'salt/cloud/*/created':
    - '/srv/reactor/startup_highstate.sls'
And the following sls file will start a highstate run on the target minion:
# /srv/reactor/startup_highstate.sls
reactor_highstate:
  cmd.state.highstate:
    - tgt: {{ data['name'] }}
Because this event will not be fired until Salt Cloud has cleaned up after itself, the highstate run will not step on Salt Cloud's toes. And because every file on the minion is configurable, including /etc/salt/minion, the startup_states can still be configured for future minion restarts, if desired.
NETAPI MODULES
Writing netapi modules
netapi modules, put simply, bind a port and start a service. They are purposefully open-ended and can be used to present a variety of external interfaces to Salt, and even present multiple interfaces at once.
SEE ALSO: The full list of netapi modules
Configuration
All netapi configuration is done in the Salt master config and takes a form similar to the following:
rest_cherrypy:
  port: 8000
  debug: True
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key
The __virtual__ function
Like all module types in Salt, netapi modules go through Salt's loader interface to determine if they should be loaded into memory and then executed.
The __virtual__ function in the module makes this determination and should return False or a string that will serve as the name of the module. If the module raises an ImportError or any other errors, it will not be loaded.
The start function
The start() function will be called for each netapi module that is loaded. This function should contain the server loop that actually starts the service, and it is run in its own process.
Inline documentation
As with the rest of Salt, it is a best-practice to include liberal inline documentation in the form of a module docstring and docstrings on any classes, methods, and functions in your netapi module.
Loader “magic” methods
The loader makes the __opts__ data structure available to any function in a netapi module.
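Putting the above pieces together, a minimal netapi module might look like the sketch below. The module name (hello_rest), its configuration key, and the use of Python's built-in wsgiref server are illustrative assumptions; a real module such as rest_cherrypy starts a full web framework in its start() function instead.

# hello_rest.py -- a minimal, hypothetical netapi module sketch
'''
Answer every HTTP request with a short banner.

Enable it by adding a matching section to the master config and starting salt-api.
'''
import wsgiref.simple_server

__virtualname__ = 'hello_rest'


def __virtual__():
    # Only load this module if it has been configured in the master config.
    if __virtualname__ in __opts__:
        return __virtualname__
    return False


def _application(environ, start_response):
    # Trivial WSGI app standing in for a real REST interface.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a netapi module sketch\n']


def start():
    # Called by salt-api in its own process; this is the server loop.
    port = __opts__[__virtualname__].get('port', 8001)
    server = wsgiref.simple_server.make_server('0.0.0.0', port, _application)
    server.serve_forever()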
Introduction to netapi modules
netapi modules provide API-centric access to Salt, usually as externally-facing services such as REST, WebSockets, XMPP, or XML-RPC interfaces.
In general netapi modules bind to a port and start a service. They are purposefully open-ended. A single module can be run on its own, or multiple modules can be configured to run simultaneously.
netapi modules are enabled by adding configuration to your Salt Master config file and then starting the salt-api daemon. Check the docs for each module to see external requirements and configuration settings.
Communication with Salt and Salt satellite projects is done using Salt's own Python API. A list of available client interfaces is below.
- salt-api
-
Prior to Salt's 2014.7.0 release, netapi modules lived in the separate sister project salt-api. That project has been merged into the main Salt project.
SEE ALSO: The full list of netapi modules
Client interfaces
Salt's client interfaces expose executing functions by crafting a dictionary of values that are mapped to function arguments. This allows calling functions simply by creating a data structure. (And this is exactly how much of Salt's own internals work!)
- class salt.netapi.NetapiClient(opts)
-
Provide a uniform method of accessing the various client interfaces in Salt
in the form of low-data data structures. For example:
>>> client = NetapiClient(__opts__)
>>> lowstate = {'client': 'local', 'tgt': '*', 'fun': 'test.ping', 'arg': ''}
>>> client.run(lowstate)
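The convenience methods documented below wrap the same low-data interface. A hypothetical example (the target, function, and return values shown are only illustrative):

# Hypothetical usage of the NetapiClient instance created above
ret = client.local('*', 'test.ping')   # equivalent to: salt '*' test.ping
print(ret)                             # e.g. {'web1': True, 'db1': True}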
- local(*args, **kwargs)
-
Run execution modules synchronously
See salt.client.LocalClient.cmd() for all available parameters.
Sends a command from the master to the targeted minions. This is the same interface that Salt's own CLI uses. Note the arg and kwarg parameters are sent down to the minion(s) and the given function, fun, is called with those parameters.
- Returns
- Returns the result from the execution module
- local_async(*args, **kwargs)
-
Run execution modules asynchronously
Wraps salt.client.LocalClient.run_job().
- Returns
- job ID
- local_batch(*args, **kwargs)
-
Run execution modules against batches of minions
New in version 0.8.4.
Wraps salt.client.LocalClient.cmd_batch()
- Returns
- Returns the result from the execution module for each batch of returns
- runner(fun, timeout=None, **kwargs)
-
Run runner modules synchronously
Wraps salt.runner.RunnerClient.cmd_sync().
Note that runner functions must be called using keyword arguments. Positional arguments are not supported.
- Returns
- Returns the result from the runner module
- wheel(fun, **kwargs)
-
Run wheel modules synchronously
Wraps salt.wheel.WheelClient.master_call().
Note that wheel functions must be called using keyword arguments. Positional arguments are not supported.
- Returns
- Returns the result from the wheel module
SALT VIRT
The Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as an alpha technology.
The initial Salt Virt system supports core cloud operations:
- •
- Virtual machine deployment
- •
- Inspection of deployed VMs
- •
- Virtual machine migration
- •
- Network profiling
- •
- Automatic VM integration with all aspects of Salt
- •
-
Image Pre-seeding
Many features are currently under development to enhance the capabilities of the Salt Virt systems.
NOTE: It is noteworthy that Salt was originally developed with the intent of using the Salt communication system as the backbone of a cloud controller. This means that the Salt Virt system is not an afterthought, but simply a system that took a back seat to other development. The original attempt to develop the cloud control aspects of Salt was a project called butter. This project never took off, but it was functional and proved the early viability of Salt as a cloud controller.
Salt Virt Tutorial
A tutorial about how to get Salt Virt up and running has been added to the tutorial section:
The Salt Virt Runner
The point of interaction with the cloud controller is the virt runner. The virt runner provides routines to execute specific virtual machine tasks.
Reference documentation for the virt runner is available with the runner module documentation:
Based on Live State Data
The Salt Virt system is based on using Salt to query live data about hypervisors and then using the data gathered to make decisions about cloud operations. This means that no external resources are required to run Salt Virt, and that the information gathered about the cloud is live and accurate.
Deploy from Network or Disk
Virtual Machine Disk Profiles
Salt Virt allows for the disks created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.
This configuration option is called virt.disk. The default virt.disk data structure looks like this:
virt.disk:
  default:
    - system:
        size: 8192
        format: qcow2
        model: virtio
NOTE: The format and model do not need to be defined; Salt will default to the optimal format used by the underlying hypervisor. In the case of KVM, these are qcow2 and virtio.
This configuration sets up a disk profile called default. The default profile creates a single system disk on the virtual machine.
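Because the profile is read through config.option, it can be inspected the same way from custom module code. A minimal sketch (the surrounding module context is an assumption, not Salt Virt internals):

# Inside a custom execution module: read the disk profiles the same way Salt Virt does.
profiles = __salt__['config.option']('virt.disk', {})
system_disk = profiles.get('default', [{}])[0].get('system', {})
# With the default profile above, system_disk would be
# {'size': 8192, 'format': 'qcow2', 'model': 'virtio'}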
Define More Profiles
Many environments will require more complex disk profiles, and may require more than one profile; this can be easily accomplished:
virt.disk:
  default:
    - system:
        size: 8192
  database:
    - system:
        size: 8192
    - data:
        size: 30720
  web:
    - system:
        size: 1024
    - logs:
        size: 5120
This configuration allows one of three profiles to be selected, so that virtual machines can be created with storage layouts that match the needs of the deployed VM.
Virtual Machine Network Profiles
Salt Virt allows for the network devices created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the config.option function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion's pillar.
This configuration option is called virt.nic. The virt.nic option is not set by default, but falls back to a data structure which looks like this:
virt.nic:
  default:
    eth0:
      bridge: br0
      model: virtio
NOTE: The model does not need to be defined; Salt will default to the optimal model used by the underlying hypervisor. In the case of KVM, this model is virtio.
This configuration sets up a network profile called default. The default profile creates a single Ethernet device on the virtual machine that is bridged to the hypervisor's br0 interface. This default setup does not require setting up the virt.nic configuration, and is the reason why a default install only requires setting up the br0 bridge device on the hypervisor.
Define More Profiles
Many environments will require more complex network profiles, and may require more than one profile; this can be easily accomplished:
virt.nic:
  dual:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
  single:
    eth0:
      bridge: service_br
  triple:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
  all:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
    eth3:
      bridge: database_br
  dmz:
    eth0:
      bridge: service_br
    eth1:
      bridge: dmz_br
  database:
    eth0:
      bridge: service_br
    eth1:
      bridge: database_br
This configuration allows one of six profiles to be selected, so that virtual machines can be created which attach to different networks depending on the needs of the deployed VM.
UNDERSTANDING YAML
The default renderer for SLS files is the YAML renderer. YAML is a human-readable data serialization format with many powerful features. However, Salt uses a small subset of YAML that maps onto very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt.
Though YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files.
Rule One: Indentation
YAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs.
Rule Two: Colons
Python dictionaries are, of course, simply key-value pairs. Users from other languages may recognize this data type as hashes or associative arrays.
Dictionary keys are represented in YAML as strings terminated by a trailing colon. A value can be represented either by a string following the colon, separated by a space:
my_key: my_value
In Python, the above maps to:
{'my_key': 'my_value'}
Alternatively, a value can be associated with a key through indentation.
my_key:
  my_value
NOTE: The above syntax is valid YAML but is uncommon in SLS files because most often, the value for a key is not singular but instead is a list of values.
In Python, the above maps to:
{'my_key': 'my_value'}
Dictionaries can be nested:
first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict
And in Python:
{
    'first_level_dict_key': {
        'second_level_dict_key': 'value_in_second_level_dict'
    }
}
Rule Three: Dashes
To represent lists of items, a single dash followed by a space is used. Multiple items are part of the same list when they share the same level of indentation.
- list_value_one
- list_value_two
- list_value_three
Lists can be the value of a key-value pair. This is quite common in Salt:
my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three
In Python, the above maps to:
{'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}
Learning More
One easy way to learn more about how YAML gets rendered into Python data structures is to use an online YAML parser to see the Python output.
One excellent choice for experimenting with YAML parsing is: http://yaml-online-parser.appspot.com/
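The same experiment can be run locally with the PyYAML library that Salt itself uses for rendering. A minimal sketch:

# Render a YAML snippet into the Python data structure Salt would receive.
import yaml

sls = '''
my_dictionary:
  - list_value_one
  - list_value_two
  - list_value_three
'''
print(yaml.safe_load(sls))
# {'my_dictionary': ['list_value_one', 'list_value_two', 'list_value_three']}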
MASTER TOPS SYSTEM
In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master.
The old external_nodes option has been removed. The master tops system contains a number of subsystems that are loaded via the Salt loader interfaces like modules, states, returners, runners, etc.
Using the new master_tops option is simple:
master_tops:
  ext_nodes: cobbler-external-nodes
for Cobbler or:
master_tops:
  reclass:
    inventory_base_uri: /etc/reclass
    classes_uri: roles
for Reclass.
It's also possible to create custom master_tops modules. These modules must go in a subdirectory called tops in the extension_modules directory. The extension_modules directory is not defined by default (the default /srv/salt/_modules will NOT work as of this release)
Custom tops modules are written like any other execution module; see the source for the two modules above for examples of fully functional ones. Below is a bare-bones example:
/etc/salt/master:
extension_modules: /srv/salt/modules
master_tops:
  customtop: True
/srv/salt/modules/tops/customtop.py:
import logging
import sys

# Define the module's virtual name
__virtualname__ = 'customtop'

log = logging.getLogger(__name__)


def __virtual__():
    return __virtualname__


def top(**kwargs):
    log.debug('Calling top in customtop')
    return {'base': ['test']}
salt minion state.show_top should then display something like:
$ salt minion state.show_top
minion
    ----------
    base:
      - test
SALT SSH
Getting Started
Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect to, and run salt-ssh commands in much the same way as standard salt commands.
- •
- Salt ssh is considered production ready in version 2014.7.0
- •
- Python is required on the remote system (unless using the -r option to send raw ssh commands)
- •
- On many systems, the salt-ssh executable will be in its own package, usually named salt-ssh
- •
- The Salt SSH system does not supersede the standard Salt communication systems; it simply offers an SSH-based alternative that does not require ZeroMQ and a remote agent. Be aware that since all communication with Salt SSH is executed via SSH, it is substantially slower than standard Salt with ZeroMQ.
- •
- At the moment fileserver operations must be wrapped to ensure that the relevant files are delivered with the salt-ssh commands. The state module is an exception, which compiles the state run on the master, and in the process finds all the references to salt:// paths and copies those files down in the same tarball as the state run. However, needed fileserver wrappers are still under development.
Salt SSH Roster
The roster system in Salt allows for remote minions to be easily defined.
NOTE: See the Roster documentation for more details.
Simply create the roster file; the default location is /etc/salt/roster:
web1: 192.168.42.1
This is a very basic roster file where a Salt ID is being assigned to an IP address. A more elaborate roster can be created:
web1:
  host: 192.168.42.1    # The IP addr or DNS hostname
  user: fred            # Remote executions will be executed as user fred
  passwd: foobarbaz     # The password to use for login, if omitted, keys are used
  sudo: True            # Whether to sudo to root, not enabled by default
web2:
  host: 192.168.42.2
NOTE: sudo works only if NOPASSWD is set for the user in /etc/sudoers:

fred ALL=(ALL) NOPASSWD: ALL
Deploy ssh key for salt-ssh
By default, salt-ssh will generate key pairs for SSH; the default path is /etc/salt/pki/master/ssh/salt-ssh.rsa.
You can use ssh-copy-id, (the OpenSSH key deployment tool) to deploy keys to your servers.
ssh-copy-id -i /etc/salt/pki/master/ssh/salt-ssh.rsa.pub user@server.demo.com
One could also create a simple shell script, named salt-ssh-copy-id.sh as follows:
#!/bin/bash
if [ -z $1 ]; then
    echo $0 user@host.com
    exit 0
fi
ssh-copy-id -i /etc/salt/pki/master/ssh/salt-ssh.rsa.pub $1
NOTE: Be certain to chmod +x salt-ssh-copy-id.sh.
./salt-ssh-copy-id.sh user@server1.host.com
./salt-ssh-copy-id.sh user@server2.host.com
Once keys are successfully deployed, salt-ssh can be used to control them.
Calling Salt SSH
The salt-ssh command can be easily executed in the same way as a salt command:
salt-ssh '*' test.ping
Commands with salt-ssh follow the same syntax as the salt command.
The standard salt functions are available! The output is the same as salt and many of the same flags are available. Please see http://docs.saltstack.com/ref/cli/salt-ssh.html for all of the available options.
Raw Shell Calls
By default salt-ssh runs Salt execution modules on the remote system, but salt-ssh can also execute raw shell commands:
salt-ssh '*' -r 'ifconfig'
States Via Salt SSH
The Salt State system can also be used with salt-ssh. The state system abstracts the same interface to the user in salt-ssh as it does when using standard salt. The intent is that Salt Formulas defined for standard salt will work seamlessly with salt-ssh and vice-versa.
The standard Salt States walkthroughs function by simply replacing salt commands with salt-ssh.
Targeting with Salt SSH
Because the targeting approach differs in salt-ssh, only glob and regex targets are supported as of this writing; the remaining target systems still need to be implemented.
Configuring Salt SSH
Salt SSH takes its configuration from a master configuration file. Normally, this file is in /etc/salt/master. If one wishes to use a customized configuration file, the -c option to Salt SSH facilitates passing in a directory to look inside for a configuration file named master.
Minion Config
New in version 2015.5.1.
Minion config options can be defined globally using the master configuration option ssh_minion_opts. It can also be defined on a per-minion basis with the minion_opts entry in the roster.
Running Salt SSH as non-root user
By default, Salt reads all of its configuration from /etc/salt/. If you are running Salt SSH as a regular user you have to modify some paths or you will get "Permission denied" messages. You have to modify two parameters: pki_dir and cachedir. Those should point to a full path writable for the user.
It is recommended not to modify /etc/salt for this purpose. Create a private copy of /etc/salt for the user and run the command with -c /new/config/path.
Define CLI Options with Saltfile
If you are commonly passing in CLI options to salt-ssh, you can create a Saltfile to automatically use these options. This is common if you're managing several different salt projects on the same server.
So if you cd into a directory with a Saltfile with the following YAML contents:
salt-ssh:
  config_dir: path/to/config/dir
  max_procs: 30
  wipe_ssh: true
Instead of having to call salt-ssh --config-dir=path/to/config/dir --max-procs=30 --wipe \* test.ping you can call salt-ssh \* test.ping.
Boolean-style options should be specified in their YAML representation.
NOTE: The option keys specified must match the destination attributes for the options specified in the parser salt.utils.parsers.SaltSSHOptionParser. For example, in the case of the --wipe command line option, its dest is configured to be wipe_ssh and thus this is what should be configured in the Saltfile. Using the names of flags for this option, being wipe: true or w: true, will not work.
Debugging salt-ssh
One common approach for debugging salt-ssh is to simply use the tarball that salt ships to the remote machine and call salt-call directly.
To determine the location of salt-call, simply run salt-ssh with the -ldebug flag and look for a line containing the string, SALT_ARGV. This contains the salt-call command that salt-ssh attempted to execute.
It is recommended that one modify this command a bit by removing the -l quiet, --metadata and --output json to get a better idea of what's going on on the target system.
SALT ROSTERS
Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the salt-ssh system. The roster system was created because salt-ssh needs a means to identify which systems need to be targeted for execution.
SEE ALSO: all-salt.roster
NOTE: The Roster System is not needed or used in standard Salt because the master does not need to be initially aware of target systems, since the Salt Minion checks itself into the master.
Since the roster system is pluggable, it can be easily augmented to attach to any existing systems to gather information about what servers are presently available and should be attached to by salt-ssh. By default the roster file is located at /etc/salt/roster.
How Rosters Work
The roster system compiles a data structure internally referred to as targets. The targets structure is a list of target systems and attributes describing how to connect to them. The only requirement for a roster module in Salt is to return the targets data structure.
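A custom roster module therefore only needs to expose a targets() function. The sketch below mirrors the interface of the built-in flat roster; the module name and the hard-coded host data are purely illustrative.

# staticroster.py -- a hypothetical custom roster module
import fnmatch

# Host data would normally come from an external system (CMDB, inventory, API, ...).
HOSTS = {
    'web1': {'host': '192.168.42.1', 'user': 'fred', 'sudo': True},
    'db1':  {'host': '192.168.42.10', 'user': 'fred'},
}


def targets(tgt, tgt_type='glob', **kwargs):
    '''
    Return the targets data structure for every Salt ID matching the glob in tgt.
    '''
    return {sid: data for sid, data in HOSTS.items()
            if fnmatch.fnmatch(sid, tgt)}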
Targets Data
The information which can be stored in a roster target is the following:
<Salt ID>:       # The id to reference the target system with
    host:        # The IP address or DNS name of the remote host
    user:        # The user to log in as
    passwd:      # The password to log in with
    # Optional parameters
    port:        # The target system's ssh port number
    sudo:        # Boolean to run command via sudo
    priv:        # File path to ssh private key, defaults to salt-ssh.rsa
    timeout:     # Number of seconds to wait for response when establishing
                 # an SSH connection
    minion_opts: # Dictionary of minion opts
REFERENCE
Full list of builtin auth modules
auto            An "Always Approved" eauth interface to test against, not intended for production use
django          Provide authentication using Django Web Framework
keystone        Provide authentication using OpenStack Keystone
ldap            Provide authentication using simple LDAP binds
mysql           Provide authentication using MySQL
pam             Authenticate against PAM
pki             Authenticate via a PKI certificate
stormpath_mod   Salt Stormpath Authentication
yubico          Provide authentication using YubiKey
salt.auth.auto
An "Always Approved" eauth interface to test against, not intended for production use
- salt.auth.auto.auth(username, password)
- Authenticate!
salt.auth.django
Provide authentication using Django Web Framework
- depends
- •
-
Django Web Framework
Django authentication depends on the presence of the django framework in the PYTHONPATH, the Django project's settings.py file being in the PYTHONPATH and accessible via the DJANGO_SETTINGS_MODULE environment variable.
Django auth can be defined like any other eauth module:
external_auth:
  django:
    fred:
      - .*
      - '@runner'
This will authenticate Fred via Django and allow him to run any execution module and all runners.
The authorization details can optionally be located inside the Django database. The relevant entry in the models.py file would look like this:
class SaltExternalAuthModel(models.Model):
    user_fk = models.ForeignKey(auth.User)
    minion_matcher = models.CharField()
    minion_fn = models.CharField()
The external_auth clause in the master config would then look like this:
external_auth:
  django:
    ^model: <fully-qualified reference to model class>
When a user attempts to authenticate via Django, Salt will import the package indicated via the keyword ^model. That model must have the fields indicated above, though the model DOES NOT have to be named 'SaltExternalAuthModel'.
- salt.auth.django.auth(username, password)
- Simple Django auth
- salt.auth.django.django_auth_setup()
- salt.auth.django.retrieve_auth_entries(u=None)
- Parameters
- •
- django_auth_class -- Reference to the django model class for auth
- •
- u -- Username to filter for
- Returns
-
Dictionary that can be slotted into the __opts__ structure for
eauth that designates the user associated ACL
Database records such as:
username     minion_or_fn_matcher   minion_fn
fred                                test.ping
fred         server1                network.interfaces
fred         server1                raid.list
fred         server2                .*
guru         .*
smartadmin   server1                .*

Should result in an eauth config such as:
fred:
  - test.ping
  - server1:
    - network.interfaces
    - raid.list
  - server2:
    - .*
guru:
  - .*
smartadmin:
  - server1:
    - .*
salt.auth.keystone
Provide authentication using OpenStack Keystone
- depends
- •
- keystoneclient Python module
- salt.auth.keystone.auth(username, password)
- Try and authenticate
- salt.auth.keystone.get_auth_url()
- Try and get the URL from the config, else return localhost
salt.auth.ldap
Provide authentication using simple LDAP binds
- depends
- •
- ldap Python module
- salt.auth.ldap.auth(username, password)
- Simple LDAP auth
- salt.auth.ldap.groups(username, **kwargs)
-
Authenticate against an LDAP group
Behavior is highly dependent on if Active Directory is in use.
AD handles group membership very differently than OpenLDAP. See the External Authentication documentation for a thorough discussion of available parameters for customizing the search.
OpenLDAP allows you to search for all groups in the directory and returns members of those groups. Then we check against the username entered.
salt.auth.mysql
Provide authentication using MySQL.
When using MySQL as an authentication backend, you will need to create or use an existing table that has a username and a password column.
To get started, create a simple table that holds just a username and a password. The password field will hold a SHA256 checksum.
CREATE TABLE `users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(25) DEFAULT NULL,
  `password` varchar(70) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
To create a user within MySQL, execute the following statement.
INSERT INTO users VALUES (NULL, 'diana', SHA2('secret', 256))
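The same checksum can be produced outside of MySQL, for example when pre-generating rows. A minimal sketch with Python's hashlib (the password value is a placeholder):

# Generate the SHA256 hex digest that the SHA2(..., 256) call above would produce.
import hashlib

print(hashlib.sha256('secret'.encode('utf-8')).hexdigest())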
The connection information and the authentication query are then defined in the master config:

mysql_auth:
  hostname: localhost
  database: SaltStack
  username: root
  password: letmein
  auth_sql: 'SELECT username FROM users WHERE username = "{0}" AND password = SHA2("{1}", 256)'
The auth_sql contains the SQL that will validate a user to ensure they are correctly authenticated. This is where you can specify other SQL queries to authenticate users.
Enable MySQL authentication.
external_auth:
  mysql:
    damian:
      - test.*
- depends
- •
- MySQL-python Python module
- salt.auth.mysql.auth(username, password)
- Authenticate using a MySQL user table
salt.auth.pam
Authenticate against PAM
Provides an authenticate function that will allow the caller to authenticate a user against the Pluggable Authentication Modules (PAM) on the system.
Implemented using ctypes, so no compilation is necessary.
NOTE: PAM authentication will not work for the root user.
The Python interface to PAM does not support authenticating as root.
- class salt.auth.pam.PamConv
- Wrapper class for pam_conv structure
- appdata_ptr
- Structure/Union member
- conv
- Structure/Union member
- class salt.auth.pam.PamHandle
- Wrapper class for pam_handle_t
- handle
- Structure/Union member
- class salt.auth.pam.PamMessage
- Wrapper class for pam_message structure
- msg
- Structure/Union member
- msg_style
- Structure/Union member
- class salt.auth.pam.PamResponse
- Wrapper class for pam_response structure
- resp
- Structure/Union member
- resp_retcode
- Structure/Union member
- salt.auth.pam.auth(username, password, **kwargs)
- Authenticate via pam
- salt.auth.pam.authenticate(username, password, service='login')
-
Returns True if the given username and password authenticate for the
given service. Returns False otherwise
username: the username to authenticate
password: the password in plain text
- service: the PAM service to authenticate against. Defaults to 'login'.
- salt.auth.pam.groups(username, *args, **kwargs)
-
Retrieve groups for a given user for this auth provider
Uses system groups
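For a quick local check, the authenticate() helper described above can be called directly from Python on a system with PAM (the username, password, and service shown are placeholders):

# Hypothetical direct call; requires valid system credentials.
import salt.auth.pam

if salt.auth.pam.authenticate('fred', 'hunter2', service='sshd'):
    print('PAM accepted the credentials')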
salt.auth.pki
Authenticate via a PKI certificate.
NOTE: This module is Experimental and should be used with caution
Provides an authenticate function that will allow the caller to authenticate a user via their public cert against a pre-defined Certificate Authority.
TODO: Add a 'ca_dir' option to configure a directory of CA files, a la Apache.
- depends
- •
- pyOpenSSL module
- salt.auth.pki.auth(pem, **kwargs)
-
Returns True if the given user cert was issued by the CA.
Returns False otherwise.
pem: a pem-encoded user public key (certificate)
Configure the CA cert in the master config file:
external_auth:
  pki:
    ca_file: /etc/pki/tls/ca_certs/trusted-ca.crt
salt.auth.stormpath_mod
Salt Stormpath Authentication
Module to provide authentication using Stormpath as the backend.
- depends
- •
- stormpath-sdk Python module
- configuration
-
This module requires the development branch of the
stormpath-sdk which can be found here:
https://github.com/stormpath/stormpath-sdk-python
The following config items are required in the master config:
stormpath.api_key_file: <path/to/apiKey.properties>
stormpath.app_url: <Rest url of your Stormpath application>
Ensure that your apiKey.properties is readable by the user the Salt Master is running as, but not readable by other system users.
- salt.auth.stormpath_mod.auth(username, password)
- Try and authenticate
salt.auth.yubico
Provide authentication using YubiKey.
New in version 2015.5.0.
- depends
-
yubico-client Python module
To get your YubiKey API key you will need to visit the website below.
https://upgrade.yubico.com/getapikey/
The resulting page will show the generated Client ID (also known as the AuthID or API ID) and the generated API key (the Secret Key). Make a note of both and use these two values in your /etc/salt/master configuration:
yubico_users:
  damian:
    id: 12345
    key: ABCDEFGHIJKLMNOPQRSTUVWXYZ
external_auth:
  yubico:
    damian:
      - test.*
Please wait five to ten minutes after generating the key before testing so that the API key will be updated on all the YubiCloud servers.
- salt.auth.yubico.auth(username, password)
- Authenticate against the yubico server
Command Line Reference
Salt can be controlled by a command line client by the root user on the Salt master. The Salt command line client uses the Salt client API to communicate with the Salt master server. The Salt client is straightforward and simple to use.
Using the Salt client, commands can be easily sent to the minions.
Each of these commands accepts an explicit --config option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist then Salt falls back to use the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.
Using the Salt Command
The Salt command needs a few components to send information to the Salt minions. The target minions need to be defined, as well as the function to call and any arguments the function requires.
Defining the Target Minions
The first argument passed to salt defines the target minions. The target minions are accessed via their hostname, and the default target type is a bash glob:
salt '*foo.com' sys.doc
Salt can also define the target minions with regular expressions:
salt -E '.*' cmd.run 'ls -l | grep foo'
Or to explicitly list hosts, salt can take a list:
salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'
More Powerful Targets
The simple target specifications, glob, regex, and list will cover many use cases, and for some will cover all use cases, but more powerful options exist.
Targeting with Grains
The Grains interface was built into Salt to allow minions to be targeted by system properties. So minions running on a particular operating system, or with a specific kernel, can be called to execute a function.
Calling via a grain is done by passing the -G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*".
salt -G 'os:Fedora' test.ping
Will return True from all of the minions running Fedora.
To discover what grains are available and what their values are, execute the grains.items salt function:
salt '*' grains.items
More info on using targeting with grains can be found here.
Targeting with Executions
As of 0.8.8 targeting with executions is still under heavy development and this documentation is written to reference the behavior of execution matching in the future.
Execution matching allows a primary function to be executed first, and then, based on the return of that primary function, the main function is executed.
Execution matching allows for matching minions based on any arbitrary running data on the minions.
Compound Targeting
New in version 0.9.5.
Multiple target interfaces can be used in conjunction to determine the command targets. These targets can then be combined using and or or statements. This is well defined with an example:
salt -C 'G@os:Debian and webser* or E@db.*' test.ping
In this example any minion whose id starts with webser and is running Debian, or any minion whose id starts with db, will be matched.
The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the @ symbol. In the above example a grain is used with G@ as well as a regular expression with E@. The webser* target does not need to be prefaced with a target type specifier because it is a glob.
More info on using compound targeting can be found here.
Node Group Targeting
New in version 0.9.5.
For certain cases, it can be convenient to have a predefined group of minions on which to execute commands. This can be accomplished using what are called nodegroups. Nodegroups allow for predefined compound targets to be declared in the master configuration file, as a sort of shorthand for having to type out complicated compound expressions.
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'
Calling the Function
The function to call on the specified target is placed after the target specification.
New in version 0.9.8.
Functions may also accept arguments, space-delimited:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Arguments are formatted as YAML:
salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Note: dictionaries must have curly braces around them (like the env keyword argument above). This was changed in 0.15.1: in the above example, the first argument used to be parsed as the dictionary {'echo "Hello': '$FIRST_NAME"'}. This was generally not the expected behavior.
If you want to test what parameters are actually passed to a module, use the test.arg_repr command:
salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Finding available minion functions
The Salt functions are self documenting; all of the function documentation can be retrieved from the minions via the sys.doc() function:
salt '*' sys.doc
Compound Command Execution
If a series of commands needs to be sent to a single target specification then the commands can be sent in a single publish. This can make gathering groups of information faster, and lowers the stress on the network for repeated commands.
Compound command execution works by sending a list of functions and arguments instead of sending a single function and argument. The functions are executed on the minion in the order they are defined on the command line, and then the data from all of the commands are returned in a dictionary. This means that the set of commands are called in a predictable way, and the returned data can be easily interpreted.
Executing compound commands is done by passing a comma-delimited list of functions, followed by a comma-delimited list of arguments:
salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo
The trick to look out for here is that if a function is being passed no arguments, there needs to be a placeholder for the absent arguments. This is why, in the above example, there are two commas right next to each other: test.ping takes no arguments, so we need to add another comma, otherwise Salt would attempt to pass "foo" to test.ping.
If you need to pass arguments that include commas, then make sure you add spaces around the commas that separate arguments. For example:
salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo
You may change the arguments separator using the --args-separator option:
salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo
CLI Completion
Shell completion scripts for the Salt CLI are available in the pkg Salt source directory.
salt-call
salt-call
Synopsis
salt-call [options]
Description
The salt-call command is used to run module functions locally on a minion instead of executing them from the master. Salt-call is used to run a Standalone Minion, and was originally created for troubleshooting.
The Salt Master is contacted to retrieve state files and other resources during execution unless the --local option is specified.
NOTE: salt-call commands execute from the current user's shell context, while salt commands execute from the system's default context.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- --hard-crash
- Raise any original exception rather than exiting gracefully. Default: False
- -g, --grains
- Return the information generated by the Salt grains
- -m MODULE_DIRS, --module-dirs=MODULE_DIRS
- Specify an additional directory to pull modules from. Multiple directories can be provided by passing -m /--module-dirs multiple times.
- -d, --doc, --documentation
- Return the documentation for the specified module or for all modules if none are specified
- --master=MASTER
- Specify the master to use. The minion must be authenticated with the master. If this option is omitted, the master options from the minion config will be used. If multiple masters are set up, the first listed master that responds will be used.
- --return RETURNER
- Set salt-call to pass the return data to one or many returner interfaces. To use many returner interfaces specify a comma delimited list of returners.
- --local
- Run salt-call locally, as if there was no master running.
- --file-root=FILE_ROOT
- Set this directory as the base file root.
- --pillar-root=PILLAR_ROOT
- Set this directory as the base pillar root.
- --retcode-passthrough
- Exit with the salt call retcode and not the salt binary retcode
- --metadata
- Print out the execution metadata as well as the return. This will print out the outputter data, the return code, etc.
- --id=ID
- Specify the minion id to use. If this option is omitted, the id option from the minion config will be used.
- --skip-grains
- Do not load grains.
- --refresh-grains-cache
- Force a refresh of the grains cache
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/minion.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: info.
Output Options
- --out
-
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.
If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.
- --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
- Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
- --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
- Write the output to the specified file.
- --no-color
- Disable all colored output
- --force-color
-
Force colored output
NOTE: When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
See also
salt(1) salt-master(1) salt-minion(1)
salt
salt
Synopsis
salt '*' [ options ] sys.doc

salt -E '.*' [ options ] sys.doc cmd
salt -G 'os:Arch.*' [ options ] test.ping
salt -C 'G@os:Arch.* and webserv* or G@kernel:FreeBSD' [ options ] test.ping
Description
Salt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -t TIMEOUT, --timeout=TIMEOUT
- The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
- -s, --static
- By default as of version 0.9.8 the salt command returns data to the console as it is received from minions, but previous releases would return data only after all data was received. Use the static option to only return the data with a hard timeout and after all minions have returned. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole.
- --async
- Instead of waiting for the job to run on minions, only print the job id of the started execution and complete.
- --state-output=STATE_OUTPUT
-
New in version 0.17.
Override the configured state_output value for minion output. One of full, terse, mixed, changes or filter. Default: full.
- --subset=SUBSET
- Execute the routine on a random subset of the targeted minions. The minions will be verified to have the named function before executing.
- -v VERBOSE, --verbose
- Turn on verbosity for the salt call; this will cause the salt command to print out extra data, such as the job id.
- --hide-timeout
- Instead of showing the return data for all minions, this option prints only the online minions which could be reached.
- -b BATCH, --batch-size=BATCH
- Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of minions to execute at once, or a percentage of minions to execute on.
- -a EAUTH, --auth=EAUTH
- Pass in an external authentication medium to validate against. The credentials will be prompted for. The options are auto, keystone, ldap, pam, and stormpath. Can be used with the -T option.
- -T, --make-token
- Used in conjunction with the -a option. This creates a token that allows for the authenticated user to send commands without needing to re-authenticate.
- --return=RETURNER
- Choose an alternative returner to call on the minion, if an alternative returner is used then the return will not come back to the command line but will be sent to the specified return system. The options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp.
- -d, --doc, --documentation
- Return the documentation for the module functions available on the minions
- --args-separator=ARGS_SEPARATOR
- Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command.
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
Target Selection
- -E, --pcre
- The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
- -L, --list
- The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
- -G, --grain
-
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<glob
expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
- --grain-pcre
- The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:< regular expression>'; example: 'os:Arch.*'
- -N, --nodegroup
- Use a predefined compound target defined in the Salt master configuration file.
- -R, --range
-
Instead of using shell globs to evaluate the target, use a range expression
to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
- -C, --compound
- Utilize many target definitions to make the call very granular. This option takes a group of targets separated by and or or. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: 'webserv* and G@os:Debian or E@db*' Make sure that the compound target is encapsulated in quotes.
- -I, --pillar
- Instead of using shell globs to evaluate the target, use a pillar value to identify targets. The syntax for the target is the pillar key followed by a glob expression: "role:production*"
- -S, --ipcidr
- Match based on Subnet (CIDR notation) or IPv4 address.
Output Options
- --out
-
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.
If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.
- --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
- Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
- --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
- Write the output to the specified file.
- --no-color
- Disable all colored output
- --force-color
-
Force colored output
NOTE: When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
See also
salt(7) salt-master(1) salt-minion(1)
salt-cloud
salt-cp
salt-cp
Copy a file to a set of systems
Synopsis
salt-cp '*' [ options ] SOURCE DEST
salt-cp -E '.*' [ options ] SOURCE DEST
salt-cp -G 'os:Arch.*' [ options ] SOURCE DEST
Description
Salt copy copies a local file out to all of the Salt minions matched by the given target.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -t TIMEOUT, --timeout=TIMEOUT
- The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
Target Selection
- -E, --pcre
- The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
- -L, --list
- The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
- -G, --grain
-
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<glob
expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
- --grain-pcre
- The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
- -N, --nodegroup
- Use a predefined compound target defined in the Salt master configuration file.
- -R, --range
-
Instead of using shell globs to evaluate the target, use a range expression
to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
See also
salt(1) salt-master(1) salt-minion(1)
salt-key
salt-key
Synopsis
salt-key [ options ]
Description
Salt-key performs simple management of the Salt server public keys used for authentication.
On initial connection, a Salt minion sends its public key to the Salt master. This key must be accepted using the salt-key command on the Salt master.
Salt minion keys can be in one of the following states:
- •
- unaccepted: key is waiting to be accepted.
- •
- accepted: key was accepted and the minion can communicate with the Salt master.
- •
- rejected: key was rejected using the salt-key command. In this state the minion does not receive any communication from the Salt master.
- •
-
denied: key was rejected automatically by the Salt master.
This occurs when a minion has a duplicate ID, or when a minion was rebuilt or
had new keys generated and the previous key was not deleted from the Salt
master. In this state the minion does not receive any communication from the
Salt master.
To change the state of a minion key, use -d to delete the key and then accept or reject the key.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -u USER, --user=USER
- Specify user to run salt-key
- --hard-crash
- Raise any original exception rather than exiting gracefully. Default is False.
- -q, --quiet
- Suppress output
- -y, --yes
- Answer 'Yes' to all questions presented, defaults to False
- --rotate-aes-key=ROTATE_AES_KEY
- Setting this to False prevents the master from refreshing the key session when keys are deleted or rejected; this lowers the security of the key deletion/rejection operation. Default is True.
Logging Options
Logging options which override any settings defined on the configuration files.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/minion.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
Output Options
- --out
-
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.
If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.
- --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
- Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
- --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
- Write the output to the specified file.
- --no-color
- Disable all colored output
- --force-color
-
Force colored output
NOTE: When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
Actions
- -l ARG, --list=ARG
- List the public keys. The args pre, un, and unaccepted will list unaccepted/unsigned keys. acc or accepted will list accepted/signed keys. rej or rejected will list rejected keys. Finally, all will list all keys.
- -L, --list-all
- List all public keys. (Deprecated: use --list all)
- -a ACCEPT, --accept=ACCEPT
- Accept the specified public key (use --include-all to match rejected keys in addition to pending keys). Globs are supported.
- -A, --accept-all
- Accepts all pending keys.
- -r REJECT, --reject=REJECT
- Reject the specified public key (use --include-all to match accepted keys in addition to pending keys). Globs are supported.
- -R, --reject-all
- Rejects all pending keys.
- --include-all
- Include non-pending keys when accepting/rejecting.
- -p PRINT, --print=PRINT
- Print the specified public key.
- -P, --print-all
- Print all public keys
- -d DELETE, --delete=DELETE
- Delete the specified key. Globs are supported.
- -D, --delete-all
- Delete all keys.
- -f FINGER, --finger=FINGER
- Print the specified key's fingerprint.
- -F, --finger-all
- Print all keys' fingerprints.
Key Generation Options
- --gen-keys=GEN_KEYS
- Set a name to generate a keypair for use with salt
- --gen-keys-dir=GEN_KEYS_DIR
- Set the directory to save the generated keypair. Only works with the '--gen-keys' option; default is the current directory.
- --keysize=KEYSIZE
- Set the keysize for the generated key. Only works with the '--gen-keys' option. The key size must be 2048 or higher; otherwise it will be rounded up to 2048. The default is 2048.
- --gen-signature
- Create a signature file of the master's public key named master_pubkey_signature. The signature can be sent to a minion in the master's auth-reply and enables the minion to verify the master's public key cryptographically. This requires a new signing key pair which can be auto-created with the --auto-create parameter.
- --priv=PRIV
- The private-key file to create a signature with
- --signature-path=SIGNATURE_PATH
- The path where the signature file should be written
- --pub=PUB
- The public-key file to create a signature for
- --auto-create
- Auto-create a signing key-pair if it does not yet exist
See also
salt(7) salt-master(1) salt-minion(1)
salt-master
salt-master
The Salt master daemon, used to control the Salt minions
Synopsis
salt-master [ options ]
Description
The master daemon controls the Salt minions
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -u USER, --user=USER
- Specify user to run salt-master
- -d, --daemon
- Run salt-master as a daemon
- --pid-file PIDFILE
- Specify the location of the pidfile. Default: /var/run/salt-master.pid
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
See also
salt(1) salt(7) salt-minion(1)
salt-minion
salt-minion
The Salt minion daemon, receives commands from a remote Salt master.
Synopsis
salt-minion [ options ]
Description
The Salt minion receives commands from the central Salt master and replies with the results of said commands.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -u USER, --user=USER
- Specify user to run salt-minion
- -d, --daemon
- Run salt-minion as a daemon
- --pid-file PIDFILE
- Specify the location of the pidfile. Default: /var/run/salt-minion.pid
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/minion.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
See also
salt(1) salt(7) salt-master(1)
salt-run
salt-run
Synopsis
salt-run RUNNER
Description
salt-run is the frontend command for executing Salt Runners. Salt runners are simple modules used to execute convenience functions on the master.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -t TIMEOUT, --timeout=TIMEOUT
- The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 1
- --hard-crash
- Raise any original exception rather than exiting gracefully. Default is False.
- -d, --doc, --documentation
- Display documentation for runners, pass a module or a runner to see documentation on only that module/runner.
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
See also
salt(1) salt-master(1) salt-minion(1)
salt-ssh
salt-ssh
Synopsis
salt-ssh '*' [ options ] sys.doc
salt-ssh -E '.*' [ options ] sys.doc cmd
Description
Salt SSH allows for Salt routines to be executed using only SSH for transport.
Options
- -r, --raw, --raw-shell
- Execute a raw shell command.
- --priv
- Specify the SSH private key file to be used for authentication.
- --roster
- Define which roster system to use; this defines if a database backend, scanner, or custom roster system is used. Default is the flat file roster.
- --roster-file
-
Define an alternative location for the default roster file. The default
roster file is called roster and is found in the same directory as the
master config file.
New in version 2014.1.0.
- --refresh, --refresh-cache
- Force a refresh of the master side data cache of the target's data. This is needed if a target's grains have been changed and the auto refresh timeframe has not been reached.
- --max-procs
- Set the number of concurrent minions to communicate with. This value defines how many processes are opened up at a time to manage connections; the more running processes, the faster communication should be. Default is 25.
- -i, --ignore-host-keys
- Ignore the SSH host keys, which by default are honored and require connections to be approved.
- --passwd
- Set the default password to attempt to use when authenticating.
- --key-deploy
- Set this flag to attempt to deploy the authorized ssh key with all minions. This combined with --passwd can make initial deployment of keys very fast and easy.
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
Target Selection
- -E, --pcre
- The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
- -L, --list
- The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
- -G, --grain
-
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<glob
expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
- --grain-pcre
- The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
- -N, --nodegroup
- Use a predefined compound target defined in the Salt master configuration file.
- -R, --range
-
Instead of using shell globs to evaluate the target, use a range expression
to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/ssh.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
Output Options
- --out
-
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters:
grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific functions; for instance, the grains outputter will not work for non-grains data.
If an outputter is used that does not support the data passed into it, then Salt will fall back on the pprint outputter and display the return data using the Python pprint standard library module.
NOTE: If using --out=json, you will probably want --static as well. Without the static option, you will get a separate JSON string per minion which makes JSON output invalid as a whole. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use --static as well.
- --out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
- Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation.
- --out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
- Write the output to the specified file.
- --no-color
- Disable all colored output
- --force-color
-
Force colored output
NOTE: When using colored output the color codes are as follows:
green denotes success, red denotes failure, blue denotes changes and success, and yellow denotes an expected future change in configuration.
See also
salt(7) salt-master(1) salt-minion(1)
salt-syndic
salt-syndic
The Salt syndic daemon, a special minion that passes through commands from a higher master
Synopsis
salt-syndic [ options ]
Description
The Salt syndic daemon, a special minion that passes through commands from a higher master.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -u USER, --user=USER
- Specify user to run salt-syndic
- -d, --daemon
- Run salt-syndic as a daemon
- --pid-file PIDFILE
- Specify the location of the pidfile. Default: /var/run/salt-syndic.pid
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
See also
salt(1) salt-master(1) salt-minion(1)
salt-api
salt-api
Start interfaces used to remotely connect to the salt master
Synopsis
salt-api
Description
The Salt API system manages network API connectors for the Salt Master.
Options
- --version
- Print the version of Salt that is running.
- --versions-report
- Show program's dependencies and version number, and then exit
- -h, --help
- Show the help message and exit
- -c CONFIG_DIR, --config-dir=CONFIG_DIR
- The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -d, --daemon
- Run the salt-api as a daemon
- --pid-file=PIDFILE
- Specify the location of the pidfile. Default: /var/run/salt-api.pid
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL
- Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE
- Log file path. Default: /var/log/salt/api.
- --log-file-level=LOG_LEVEL_LOGFILE
- Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
See also
salt-api(7) salt(7) salt-master(1)
Client ACL system
The Salt client ACL system allows system users other than root to execute select Salt commands on minions from the master.
The client ACL system is configured in the master configuration file via the client_acl configuration option. Under the client_acl configuration option, the users allowed to send commands are specified, followed by a list of regular expressions matching the minion functions that will be made available to each specified user. This configuration is much like the peer configuration:
client_acl:
  # Allow thatch to execute anything.
  thatch:
    - .*
  # Allow fred to use test and pkg, but only on "web*" minions.
  fred:
    - web*:
      - test.*
      - pkg.*
Permission Issues
Directories required for client_acl must be modified to be readable by the users specified:
chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master
NOTE: In addition to the changes above you will also need to modify the permissions of /var/log/salt and the existing log file to be writable by the user(s) which will be running the commands. If you do not wish to do this then you must disable logging or Salt will generate errors as it cannot write to the logs as the system users.
If you are upgrading from earlier versions of salt you must also remove any existing user keys and re-start the Salt master:
rm /var/cache/salt/.*key
service salt-master restart
Python client API
Salt provides several entry points for interfacing with Python applications. These entry points are often referred to as *Client() APIs. Each client accesses different parts of Salt, either from the master or from a minion. Each client is detailed below.
SEE ALSO: There are many ways to access Salt programmatically.
Salt can be used from CLI scripts as well as via a REST interface.
See Salt's outputter system to retrieve structured data from Salt as JSON, or as shell-friendly text, or many other formats.
See the state.event runner to utilize Salt's event bus from shell scripts.
Salt's netapi module provides access to Salt externally via a REST interface. Review the netapi module documentation for more information.
Salt's opts dictionary
Some clients require access to Salt's opts dictionary. (The dictionary representation of the master or minion config files.)
A common pattern for fetching the opts dictionary is to defer to environment variables if they exist or otherwise fetch the config from the default location.
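A minimal sketch of that pattern, assuming the environment variable names shown in the function signatures below (this is one explicit way to wire it up, not a required API; both functions also accept an env_var argument):

import os
import salt.config

# Prefer a config path from the environment, falling back to the default locations.
master_opts = salt.config.client_config(
    os.environ.get('SALT_CLIENT_CONFIG', '/etc/salt/master'))
minion_opts = salt.config.minion_config(
    os.environ.get('SALT_MINION_CONFIG', '/etc/salt/minion'))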
- salt.config.client_config(path, env_var='SALT_CLIENT_CONFIG', defaults=None)
-
Load Master configuration data
Usage:
import salt.config
master_opts = salt.config.client_config('/etc/salt/master')
Returns a dictionary of the Salt Master configuration file with necessary options needed to communicate with a locally-running Salt Master daemon. This function searches for client specific configurations and adds them to the data from the master configuration.
This is useful for master-side operations like LocalClient.
- salt.config.minion_config(path, env_var='SALT_MINION_CONFIG', defaults=None, cache_minion_id=False)
-
Reads in the minion configuration file and sets up special options
This is useful for Minion-side operations, such as the Caller class, and manually running the loader interface.
import salt.config
minion_opts = salt.config.minion_config('/etc/salt/minion')
Salt's Loader Interface
Modules in the Salt ecosystem are loaded into memory using a custom loader system. This allows modules to have conditional requirements (OS, OS version, installed libraries, etc) and allows Salt to inject special variables (__salt__, __opts__, etc).
Most modules can be manually loaded. This is often useful in third-party Python apps or when writing tests. However, some modules require and expect a full, running Salt system underneath, notably modules that facilitate master-to-minion communication such as the mine, publish, and peer execution modules. The error KeyError: 'master_uri' is a likely indicator of this situation. In those instances, use the Caller class to execute those modules instead.
Each module type has a corresponding loader function.
- salt.loader.minion_mods(opts, context=None, whitelist=None, include_errors=False, initial_load=False, loaded_base_name=None)
-
Load execution modules
Returns a dictionary of execution modules appropriate for the current system by evaluating the __virtual__() function in each module.
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
__opts__['grains'] = __grains__
__salt__ = salt.loader.minion_mods(__opts__)
__salt__['test.ping']()
- salt.loader.raw_mod(opts, name, functions, mod='modules')
-
Returns a single module loaded raw and bypassing the __virtual__ function
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
testmod = salt.loader.raw_mod(__opts__, 'test', None)
testmod['test.ping']()
- salt.loader.states(opts, functions, whitelist=None)
-
Returns the state modules
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
statemods = salt.loader.states(__opts__, None)
- salt.loader.grains(opts, force_refresh=False)
-
Return the functions for the dynamic grains and the values for the static
grains.
import salt.config
import salt.loader

__opts__ = salt.config.minion_config('/etc/salt/minion')
__grains__ = salt.loader.grains(__opts__)
print __grains__['id']
Salt's Client Interfaces
LocalClient
- class salt.client.LocalClient(c_path='/etc/salt/master', mopts=None, skip_perm_errors=False)
-
The interface used by the salt CLI tool on the Salt Master
LocalClient is used to send a command to Salt minions to execute execution modules and return the results to the Salt Master.
Importing and using LocalClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. (Unless external_auth is configured and authentication credentials are included in the execution).
import salt.client

local = salt.client.LocalClient()
local.cmd('*', 'test.fib', [10])
- cmd(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', jid='', kwarg=None, **kwargs)
-
Synchronously execute a command on targeted minions
The cmd method will execute and wait for the timeout period for all minions to reply, then it will return all minion data at once.
>>> import salt.client
>>> local = salt.client.LocalClient()
>>> local.cmd('*', 'cmd.run', ['whoami'])
{'jerry': 'root'}
With extra keyword arguments for the command function to be run:
local.cmd('*', 'test.arg', ['arg1', 'arg2'], kwarg={'foo': 'bar'})
Compound commands can be used for multiple executions in a single publish. Function names and function arguments are provided in separate lists but the index values must correlate and an empty list must be used if no arguments are required.
>>> local.cmd('*', [
...     'grains.items',
...     'sys.doc',
...     'cmd.run',
... ], [
...     [],
...     [],
...     ['uptime'],
... ])
- Parameters
- •
- tgt (string or list) -- Which minions to target for the execution. Default is shell glob. Modified by the expr_form option.
- •
-
fun (string or list of strings) --
The module and function to call on the specified minions of the form module.function. For example test.ping or grains.items.
- Compound commands
-
Multiple functions may be called in a single publish by
passing a list of commands. This can dramatically lower
overhead and speed up the application communicating with Salt.
This requires that the arg param is a list of lists. The fun list and the arg list must correlate by index meaning a function that does not take arguments must still have a corresponding empty list at the expected index.
- •
- arg (list or list-of-lists) -- A list of arguments to pass to the remote function. If the function takes no arguments arg may be omitted except when executing a compound command.
- •
- timeout -- Seconds to wait after the last minion returns but before all minions return.
- •
-
expr_form --
The type of tgt; a short targeting sketch follows this method's documentation. Allowed values:
- •
- glob - Bash glob completion - Default
- •
- pcre - Perl style regular expression
- •
- list - Python list of hosts
- •
- grain - Match based on a grain comparison
- •
- grain_pcre - Grain comparison with a regex
- •
- pillar - Pillar data comparison
- •
- pillar_pcre - Pillar data comparison with a regex
- •
- nodegroup - Match on nodegroup
- •
- range - Use a Range server for matching
- •
-
compound - Pass a compound match string
- •
- ret -- The returner to use. The value passed can be single returner, or a comma delimited list of returners to call in order on the minions
- •
- kwarg -- A dictionary with keyword arguments for the function.
- •
-
kwargs --
Optional keyword arguments. Authentication credentials may be passed when using external_auth.
For example: local.cmd('*', 'test.ping', username='saltdev', password='saltdev', eauth='pam'). Or: local.cmd('*', 'test.ping', token='5871821ea51754fdcea8153c1c745433')
- Returns
- A dictionary with the result of the execution, keyed by minion ID. A compound command will return a sub-dictionary keyed by function name.
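A brief sketch of the non-glob targeting described above, assuming placeholder grain values and minion names:

# Target by grain value, per the 'grain' expr_form listed above
local.cmd('os:Arch*', 'test.ping', expr_form='grain')

# Target with a compound expression, per the 'compound' expr_form listed above
local.cmd('webserv* and G@os:Debian', 'test.ping', expr_form='compound')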
- cmd_async(tgt, fun, arg=(), expr_form='glob', ret='', jid='', kwarg=None, **kwargs)
-
Asynchronously send a command to connected minions
The function signature is the same as cmd() with the following exceptions.
- Returns
-
A job ID or 0 on failure.
>>> local.cmd_async('*', 'test.sleep', [300])
'20131219215921857715'
- cmd_batch(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, batch='10%', **kwargs)
-
Iteratively execute a command on subsets of minions at a time
The function signature is the same as cmd() with the following exceptions.
- Parameters
- batch -- The batch identifier of systems to execute on
- Returns
-
A generator of minion returns
>>> returns = local.cmd_batch('*', 'state.highstate', bat='10%')
>>> for ret in returns:
...     print(ret)
{'jerry': {...}}
{'dave': {...}}
{'stewart': {...}}
- cmd_iter(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
-
Yields the individual minion returns as they come in
The function signature is the same as cmd() with the following exceptions.
- Returns
-
A generator yielding the individual minion returns
>>> ret = local.cmd_iter('*', 'test.ping')
>>> for i in ret:
...     print(i)
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
{'stewart': {'ret': True}}
- cmd_iter_no_block(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
-
Yields the individual minion returns as they come in, or None when no returns are available.
The function signature is the same as cmd() with the following exceptions.
- Returns
-
A generator yielding the individual minion returns, or None
when no returns are available. This allows for actions to be
injected in between minion returns.
>>> ret = local.cmd_iter_no_block('*', 'test.ping')
>>> for i in ret:
...     print(i)
None
{'jerry': {'ret': True}}
{'dave': {'ret': True}}
None
{'stewart': {'ret': True}}
- cmd_subset(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, sub=3, cli=False, progress=False, **kwargs)
-
Execute a command on a random subset of the targeted systems
The function signature is the same as cmd() with the following exceptions.
- Parameters
-
sub -- The number of systems to execute on
>>> SLC.cmd_subset('*', 'test.ping', sub=1)
{'jerry': True}
- get_cli_returns(jid, minions, timeout=None, tgt='*', tgt_type='glob', verbose=False, show_jid=False, **kwargs)
- Starts a watcher looking at the return data for a specified JID
- Returns
- all of the information for the JID
- get_event_iter_returns(jid, minions, timeout=None)
- Gather the return data from the event system, break hard when timeout is reached.
- run_job(tgt, fun, arg=(), expr_form='glob', ret='', timeout=None, jid='', kwarg=None, **kwargs)
-
Asynchronously send a command to connected minions
Prep the job directory and publish a command to any targeted minions.
- Returns
-
A dictionary of (validated) pub_data or an empty
dictionary on failure. The pub_data contains the job ID and a
list of all minions that are expected to return data.
>>> local.run_job('*', 'test.sleep', [300])
{'jid': '20131219215650131543', 'minions': ['jerry']}
Salt Caller
- class salt.client.Caller(c_path='/etc/salt/minion', mopts=None)
-
Caller is the same interface used by the salt-call
command-line tool on the Salt Minion.
Importing and using Caller must be done on the same machine as a Salt Minion and it must be done using the same user that the Salt Minion is running as.
Usage:
import salt.client

caller = salt.client.Caller()
caller.function('test.ping')

# Or call objects directly
caller.sminion.functions['cmd.run']('ls -l')
Note, a running master or minion daemon is not required to use this class. Running salt-call --local simply sets file_client to 'local'. The same can be achieved at the Python level by including that setting in a minion config file.
Instantiate a new Caller() instance using a file system path to the minion config file:
caller = salt.client.Caller('/path/to/custom/minion_config')
caller.sminion.functions['grains.items']()
Instantiate a new Caller() instance using a dictionary of the minion config:
New in version 2014.7.0: Pass the minion config as a dictionary.
import salt.client
import salt.config

opts = salt.config.minion_config('/etc/salt/minion')
opts['file_client'] = 'local'
caller = salt.client.Caller(mopts=opts)
caller.sminion.functions['grains.items']()
- function(fun, *args, **kwargs)
- Call a single salt function
RunnerClient
- class salt.runner.RunnerClient(opts)
-
The interface used by the salt-run CLI tool on the Salt Master
It executes runner modules which run on the Salt Master.
Importing and using RunnerClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as.
Salt's external_auth can be used to authenticate calls. The eauth user must be authorized to execute runner modules: (@runner). Only the master_call() below supports eauth.
- async(fun, low, user='UNKNOWN')
- Execute the function in a multiprocess and return the event tag to use to watch for the return
- cmd(fun, arg=None, pub_data=None, kwarg=None)
-
Execute a function
>>> opts = salt.config.master_config('/etc/salt/master')
>>> runner = salt.runner.RunnerClient(opts)
>>> runner.cmd('jobs.list_jobs', [])
{
    '20131219215650131543': {
        'Arguments': [300],
        'Function': 'test.sleep',
        'StartTime': '2013, Dec 19 21:56:50.131543',
        'Target': '*',
        'Target-type': 'glob',
        'User': 'saltdev'
    },
    '20131219215921857715': {
        'Arguments': [300],
        'Function': 'test.sleep',
        'StartTime': '2013, Dec 19 21:59:21.857715',
        'Target': '*',
        'Target-type': 'glob',
        'User': 'saltdev'
    },
}
- cmd_async(low)
-
Execute a runner function asynchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute runner functions: (@runner).
runner.eauth_async({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
- cmd_sync(low, timeout=None)
-
Execute a runner function synchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute runner functions: (@runner).
runner.eauth_sync({
    'fun': 'jobs.list_jobs',
    'username': 'saltdev',
    'password': 'saltdev',
    'eauth': 'pam',
})
WheelClient
- class salt.wheel.WheelClient(opts=None)
-
An interface to Salt's wheel modules
Wheel modules interact with various parts of the Salt Master.
Importing and using WheelClient must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. Unless external_auth is configured and the user is authorized to execute wheel functions: (@wheel).
Usage:
import salt.config
import salt.wheel

opts = salt.config.master_config('/etc/salt/master')
wheel = salt.wheel.WheelClient(opts)
- async(fun, low, user='UNKNOWN')
- Execute the function in a multiprocess and return the event tag to use to watch for the return
- cmd(fun, arg=None, pub_data=None, kwarg=None)
-
Execute a function
>>> wheel.cmd('key.finger', ['jerry'])
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
- cmd_async(low)
-
Execute a function asynchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute wheel functions: (@wheel).
>>> wheel.cmd_async({
...     'fun': 'key.finger',
...     'match': 'jerry',
...     'eauth': 'auto',
...     'username': 'saltdev',
...     'password': 'saltdev',
... })
{'jid': '20131219224744416681', 'tag': 'salt/wheel/20131219224744416681'}
- cmd_sync(low, timeout=None)
-
Execute a wheel function synchronously; eauth is respected
This function requires that external_auth is configured and the user is authorized to execute wheel functions: (@wheel).
>>> wheel.cmd_sync({
...     'fun': 'key.finger',
...     'match': 'jerry',
...     'eauth': 'auto',
...     'username': 'saltdev',
...     'password': 'saltdev',
... })
{'minions': {'jerry': '5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca'}}
CloudClient
- class salt.cloud.CloudClient(path=None, opts=None, config_dir=None, pillars=None)
- The client class to wrap cloud interactions
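The method examples below assume a client has already been constructed; a minimal sketch of building one, assuming the usual default cloud configuration path:

import salt.cloud

# Build a CloudClient from the default cloud configuration file
client = salt.cloud.CloudClient(path='/etc/salt/cloud')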
- action(fun=None, cloudmap=None, names=None, provider=None, instance=None, kwargs=None)
-
Execute a single action via the cloud plugin backend
Examples:
client.action(fun='show_instance', names=['myinstance'])

client.action(
    fun='show_image',
    provider='my-ec2-config',
    kwargs={'image': 'ami-10314d79'}
)
- create(provider, names, **kwargs)
-
Create the named VMs, without using a profile
Example:
client.create(names=['myinstance'], provider='my-ec2-config',
              kwargs={'image': 'ami-1624987f',
                      'size': 't1.micro',
                      'ssh_username': 'ec2-user',
                      'securitygroup': 'default',
                      'delvol_on_destroy': True})
- destroy(names)
- Destroy the named VMs
- extra_action(names, provider, action, **kwargs)
-
Perform actions with block storage devices
Example:
client.extra_action(names=['myblock'], action='volume_create',
                    provider='my-nova',
                    kwargs={'voltype': 'SSD', 'size': 1000})

client.extra_action(names=['salt-net'], action='network_create',
                    provider='my-nova',
                    kwargs={'cidr': '192.168.100.0/24'})
- full_query(query_type='list_nodes_full')
- Query all instance information
- list_images(provider=None)
- List all available images in configured cloud systems
- list_locations(provider=None)
- List all available locations in configured cloud systems
- list_sizes(provider=None)
- List all available sizes in configured cloud systems
- low(fun, low)
- Pass the cloud function and low data structure to run
- map_run(path, **kwargs)
- Pass in a location for a map to execute
- min_query(query_type='list_nodes_min')
- Query select instance information
- profile(profile, names, vm_overrides=None, **kwargs)
-
Pass in a profile to create; names is a list of VM names to allocate.
vm_overrides is a special dict of per-node option overrides.
Example:
>>> client = salt.cloud.CloudClient(path='/etc/salt/cloud')
>>> client.profile('do_512_git', names=['minion01',])
{'minion01': {u'backups_active': 'False',
              u'created_at': '2014-09-04T18:10:15Z',
              u'droplet': {u'event_id': 31000502,
                           u'id': 2530006,
                           u'image_id': 5140006,
                           u'name': u'minion01',
                           u'size_id': 66},
              u'id': '2530006',
              u'image_id': '5140006',
              u'ip_address': '107.XXX.XXX.XXX',
              u'locked': 'True',
              u'name': 'minion01',
              u'private_ip_address': None,
              u'region_id': '4',
              u'size_id': '66',
              u'status': 'new'}}
- query(query_type='list_nodes')
- Query basic instance information
- select_query(query_type='list_nodes_select')
- Query select instance information
SSHClient
- class salt.client.ssh.client.SSHClient(c_path='/etc/salt/master', mopts=None)
-
Create a client object for executing routines via the salt-ssh backend
New in version 2015.5.0.
- cmd(tgt, fun, arg=(), timeout=None, expr_form='glob', kwarg=None, **kwargs)
-
Execute a single command via the salt-ssh subsystem and return all
routines at once
New in version 2015.5.0.
- cmd_iter(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
-
Execute a single command via the salt-ssh subsystem and return a
generator
New in version 2015.5.0.
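A minimal usage sketch for this client, assuming a roster entry named web1 and the default master configuration path (both are illustrative, not required values):

import salt.client.ssh.client

ssh_client = salt.client.ssh.client.SSHClient(c_path='/etc/salt/master')

# Execute a routine over salt-ssh and collect all returns at once
ret = ssh_client.cmd('web1', 'test.ping')

# Or iterate over returns as they arrive
for chunk in ssh_client.cmd_iter('web1', 'test.ping'):
    print(chunk)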
Full list of Salt Cloud modules
- aliyun: AliYun ECS Cloud Module
- botocore_aws: The AWS Cloud Module
- cloudstack: CloudStack Cloud Module
- digital_ocean: DigitalOcean Cloud Module
- digital_ocean_v2: DigitalOcean Cloud Module v2
- ec2: The EC2 Cloud Module
- gce: Copyright 2013 Google Inc.
- gogrid: GoGrid Cloud Module
- joyent: Joyent Cloud Module
- libcloud_aws: The AWS Cloud Module
- linode: Linode Cloud Module using Apache Libcloud OR linode-python bindings
- lxc: Install Salt on an LXC Container
- msazure: Azure Cloud Module
- nova: OpenStack Nova Cloud Module
- opennebula: OpenNebula Cloud Module
- openstack: OpenStack Cloud Module
- parallels: Parallels Cloud Module
- proxmox: Proxmox Cloud Module
- pyrax: Pyrax Cloud Module
- rackspace: Rackspace Cloud Module
- saltify: Saltify Module, designed to install Salt on a remote machine, virtual or bare metal, using SSH
- softlayer: SoftLayer Cloud Module
- softlayer_hw: SoftLayer HW Cloud Module
- vsphere: vSphere Cloud Module
salt.cloud.clouds.aliyun
AliYun ECS Cloud Module
New in version 2014.7.0.
The Aliyun cloud module is used to control access to the aliyun ECS. http://www.aliyun.com/
Use of this module requires the id and key parameter to be set. Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/aliyun.conf:
my-aliyun-config:
  # aliyun Access Key ID
  id: wFGEwgregeqw3435gDger
  # aliyun Access Key Secret
  key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg
  location: cn-qingdao
  provider: aliyun
- depends
- requests
- salt.cloud.clouds.aliyun.avail_images(kwargs=None, call=None)
- Return a list of the images that are on the provider
- salt.cloud.clouds.aliyun.avail_locations(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.aliyun.avail_sizes(call=None)
- Return a list of the image sizes that are on the provider
- salt.cloud.clouds.aliyun.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.aliyun.create_node(kwargs)
- Convenience function to make the REST API call for node creation.
- salt.cloud.clouds.aliyun.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud -a destroy myinstance
salt-cloud -d myinstance
- salt.cloud.clouds.aliyun.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.aliyun.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.aliyun.get_location(vm_=None)
- Return the aliyun region to use, in this order:
- •
- CLI parameter
- •
- VM parameter
- •
- Cloud profile setting
- salt.cloud.clouds.aliyun.get_securitygroup(vm_)
- Return the security group
- salt.cloud.clouds.aliyun.get_size(vm_)
- Return the VM's size. Used by create_node().
- salt.cloud.clouds.aliyun.list_availability_zones(call=None)
- List all availability zones in the current region
- salt.cloud.clouds.aliyun.list_monitor_data(kwargs=None, call=None)
-
Get monitor data of the instance. If instance name is
missing, will show all the instance monitor data on the region.
CLI Examples:
salt-cloud -f list_monitor_data aliyun
salt-cloud -f list_monitor_data aliyun name=AY14051311071990225bd
- salt.cloud.clouds.aliyun.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.aliyun.list_nodes_full(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.aliyun.list_nodes_min(call=None)
- Return a list of the VMs that are on the provider. Only a list of VM names, and their state, is returned. This is the minimum amount of information needed to check for existing VMs.
- salt.cloud.clouds.aliyun.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.aliyun.list_securitygroup(call=None)
- Return a list of security groups
- salt.cloud.clouds.aliyun.query(params=None)
- Make a web call to aliyun ECS REST API
- salt.cloud.clouds.aliyun.reboot(name, call=None)
-
Reboot a node
CLI Examples:
salt-cloud -a reboot myinstance
- salt.cloud.clouds.aliyun.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.aliyun.show_disk(name, call=None)
-
Show the disk details of the instance
CLI Examples:
salt-cloud -a show_disk aliyun myinstance
- salt.cloud.clouds.aliyun.show_image(kwargs, call=None)
- Show the details from aliyun image
- salt.cloud.clouds.aliyun.show_instance(name, call=None)
- Show the details from aliyun instance
- salt.cloud.clouds.aliyun.start(name, call=None)
-
Start a node
CLI Examples:
salt-cloud -a start myinstance
- salt.cloud.clouds.aliyun.stop(name, force=False, call=None)
-
Stop a node
CLI Examples:
salt-cloud -a stop myinstance
salt-cloud -a stop myinstance force=True
salt.cloud.clouds.botocore_aws
The AWS Cloud Module
The AWS cloud module is used to interact with the Amazon Web Services system.
This module has been replaced by the EC2 cloud module, and is no longer supported. The documentation shown here is for reference only; it is highly recommended to change all usages of this driver over to the EC2 driver.
- If this driver is still needed, set up the cloud configuration at
-
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/aws.conf:
my-aws-botocore-config:
  # The AWS API authentication id
  id: GKTADJGHEIQSXMKKRBJ08H
  # The AWS API authentication key
  key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
  # The ssh keyname to use
  keyname: default
  # The amazon security group
  securitygroup: ssh_open
  # The location of the private key which corresponds to the keyname
  private_key: /root/default.pem
  provider: aws
- salt.cloud.clouds.botocore_aws.disable_term_protect(name, call=None)
-
Disable termination protection on a node
CLI Example:
salt-cloud -a disable_term_protect mymachine
- salt.cloud.clouds.botocore_aws.enable_term_protect(name, call=None)
-
Enable termination protection on a node
CLI Example:
salt-cloud -a enable_term_protect mymachine
- salt.cloud.clouds.botocore_aws.get_configured_provider()
- Return the first configured instance.
salt.cloud.clouds.cloudstack
CloudStack Cloud Module
The CloudStack cloud module is used to control access to a CloudStack based Public Cloud.
- depends
-
libcloud
Use of this module requires the apikey, secretkey, host and path parameters.
my-cloudstack-cloud-config:
  apikey: <your api key>
  secretkey: <your secret key>
  host: localhost
  path: /client/api
  provider: cloudstack
- salt.cloud.clouds.cloudstack.avail_images(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.cloudstack.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.cloudstack.avail_sizes(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.cloudstack.block_device_mappings(vm_)
-
Return the block device mapping:
[{'DeviceName': '/dev/sdb', 'VirtualName': 'ephemeral0'}, {'DeviceName': '/dev/sdc', 'VirtualName': 'ephemeral1'}]
- salt.cloud.clouds.cloudstack.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.cloudstack.destroy(name, conn=None, call=None)
- Delete a single VM, and all of its volumes
- salt.cloud.clouds.cloudstack.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.cloudstack.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.cloudstack.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.cloudstack.get_ip(data)
- Return the IP address of the VM. If the VM has a public IP as defined by the libcloud module, use it. Otherwise, try to extract the private IP and use that one.
- salt.cloud.clouds.cloudstack.get_key()
- Returns the ssh private key for VM access
- salt.cloud.clouds.cloudstack.get_keypair(vm_)
- Return the keypair to use
- salt.cloud.clouds.cloudstack.get_location(conn, vm_)
- Return the node location to use
- salt.cloud.clouds.cloudstack.get_networkid(vm_)
- Return the networkid to use, only valid for Advanced Zone
- salt.cloud.clouds.cloudstack.get_node(conn, name)
- Return a libcloud node for the named VM
- salt.cloud.clouds.cloudstack.get_password(vm_)
- Return the password to use
- salt.cloud.clouds.cloudstack.get_project(conn, vm_)
- Return the project to use.
- salt.cloud.clouds.cloudstack.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.cloudstack.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.cloudstack.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.cloudstack.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.cloudstack.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.cloudstack.show_instance(name, call=None)
- Show the details from the provider concerning an instance
salt.cloud.clouds.digital_ocean
DigitalOcean Cloud Module
The DigitalOcean cloud module is used to control access to the DigitalOcean VPS system.
NOTE: Due to Digital Ocean deprecating its original API, this salt-cloud driver for Digital Ocean will be deprecated in Salt Beryllium. The digital_ocean_v2 driver that is currently available on all 2015.5.x releases will be used instead. Starting in Beryllium, the digital_ocean_v2.py driver will be renamed to digital_ocean.py and this driver will be removed. Please convert any original digital_ocean provider configs to use the new digital_ocean_v2 provider configs.
Use of this module only requires the api_key parameter to be set. Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/digital_ocean.conf:
my-digital-ocean-config:
  # DigitalOcean account keys
  client_key: wFGEwgregeqw3435gDger
  api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg
  provider: digital_ocean
- depends
- requests
- salt.cloud.clouds.digital_ocean.avail_images(call=None)
- Return a list of the images that are on the provider
- salt.cloud.clouds.digital_ocean.avail_locations(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.digital_ocean.avail_sizes(call=None)
- Return a list of the image sizes that are on the provider
- salt.cloud.clouds.digital_ocean.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.digital_ocean.create_node(args)
- Create a node
- salt.cloud.clouds.digital_ocean.destroy(name, call=None)
-
Destroy a node. Will check termination protection and warn if enabled.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.digital_ocean.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.digital_ocean.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.digital_ocean.get_keyid(keyname)
- Return the ID of the keyname
- salt.cloud.clouds.digital_ocean.get_location(vm_)
- Return the VM's location
- salt.cloud.clouds.digital_ocean.get_size(vm_)
- Return the VM's size. Used by create_node().
- salt.cloud.clouds.digital_ocean.list_keypairs(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.digital_ocean.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.digital_ocean.list_nodes_full(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.digital_ocean.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.digital_ocean.query(method='droplets', droplet_id=None, command=None, args=None)
- Make a web call to DigitalOcean
- salt.cloud.clouds.digital_ocean.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.digital_ocean.show_instance(name, call=None)
- Show the details from DigitalOcean concerning a droplet
- salt.cloud.clouds.digital_ocean.show_keypair(kwargs=None, call=None)
- Show the details of an SSH keypair
salt.cloud.clouds.digital_ocean_v2
DigitalOcean Cloud Module v2
The DigitalOcean cloud module is used to control access to the DigitalOcean VPS system.
Use of this module only requires the personal_access_token parameter to be set. Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/digital_ocean.conf:
my-digital-ocean-config:
  personal_access_token: xxx
  provider: digital_ocean
- depends
- requests
- salt.cloud.clouds.digital_ocean_v2.avail_images(call=None)
- Return a list of the images that are on the provider
- salt.cloud.clouds.digital_ocean_v2.avail_locations(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.digital_ocean_v2.avail_sizes(call=None)
- Return a list of the image sizes that are on the provider
- salt.cloud.clouds.digital_ocean_v2.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.digital_ocean_v2.create_dns_record(hostname, ip_address)
- Creates a DNS record for the given hostname if the domain is managed with DO.
- salt.cloud.clouds.digital_ocean_v2.create_node(args)
- Create a node
- salt.cloud.clouds.digital_ocean_v2.delete_dns_record(hostname)
- Deletes a DNS record for the given hostname if the domain is managed with DO.
- salt.cloud.clouds.digital_ocean_v2.destroy(name, call=None)
-
Destroy a node. Will check termination protection and warn if enabled.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.digital_ocean_v2.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.digital_ocean_v2.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.digital_ocean_v2.get_keyid(keyname)
- Return the ID of the keyname
- salt.cloud.clouds.digital_ocean_v2.get_location(vm_)
- Return the VM's location
- salt.cloud.clouds.digital_ocean_v2.get_size(vm_)
- Return the VM's size. Used by create_node().
- salt.cloud.clouds.digital_ocean_v2.list_keypairs(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.digital_ocean_v2.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.digital_ocean_v2.list_nodes_full(call=None, forOutput=True)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.digital_ocean_v2.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.digital_ocean_v2.query(method='droplets', droplet_id=None, command=None, args=None, http_method='get')
- Make a web call to DigitalOcean
- salt.cloud.clouds.digital_ocean_v2.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.digital_ocean_v2.show_instance(name, call=None)
- Show the details from DigitalOcean concerning a droplet
- salt.cloud.clouds.digital_ocean_v2.show_keypair(kwargs=None, call=None)
- Show the details of an SSH keypair
salt.cloud.clouds.ec2
The EC2 Cloud Module
The EC2 cloud module is used to interact with the Amazon Elastic Compute Cloud (EC2).
To use the EC2 cloud module, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/ec2.conf:
my-ec2-config:
  # The EC2 API authentication id, set this and/or key to
  # 'use-instance-role-credentials' to use the instance role credentials
  # from the meta-data if running on an AWS instance
  id: GKTADJGHEIQSXMKKRBJ08H
  # The EC2 API authentication key, set this and/or id to
  # 'use-instance-role-credentials' to use the instance role credentials
  # from the meta-data if running on an AWS instance
  key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
  # The ssh keyname to use
  keyname: default
  # The amazon security group
  securitygroup: ssh_open
  # The location of the private key which corresponds to the keyname
  private_key: /root/default.pem
  # By default, service_url is set to amazonaws.com. If you are using this
  # driver for something other than Amazon EC2, change it here:
  service_url: amazonaws.com
  # The endpoint that is ultimately used is usually formed using the region
  # and the service_url. If you would like to override that entirely, you
  # can explicitly define the endpoint:
  endpoint: myendpoint.example.com:1138/services/Cloud
  # SSH Gateways can be used with this provider. Gateways can be used
  # when a salt-master is not on the same private network as the instance
  # that is being deployed.
  # Defaults to None
  # Required
  ssh_gateway: gateway.example.com
  # Defaults to port 22
  # Optional
  ssh_gateway_port: 22
  # Defaults to root
  # Optional
  ssh_gateway_username: root
  # One authentication method is required. If both
  # are specified, Private key wins.
  # Private key defaults to None
  ssh_gateway_private_key: /path/to/key.pem
  # Password defaults to None
  ssh_gateway_password: ExamplePasswordHere
  # Pass userdata to the instance to be created
  userdata_file: /etc/salt/my-userdata-file
  provider: ec2
- depends
- requests
- salt.cloud.clouds.ec2.attach_volume(name=None, kwargs=None, instance_id=None, call=None)
- Attach a volume to an instance
- salt.cloud.clouds.ec2.avail_images(kwargs=None, call=None)
- Return a dict of all available VM images on the cloud provider.
- salt.cloud.clouds.ec2.avail_locations(call=None)
- List all available locations
- salt.cloud.clouds.ec2.avail_sizes(call=None)
-
Return a dict of all available VM sizes on the cloud provider with
relevant data. Latest version can be found at:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html
- salt.cloud.clouds.ec2.block_device_mappings(vm_)
-
Return the block device mapping:
[{'DeviceName': '/dev/sdb', 'VirtualName': 'ephemeral0'}, {'DeviceName': '/dev/sdc', 'VirtualName': 'ephemeral1'}]
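The mapping shown above is normally driven from the cloud profile. A minimal sketch of how such a mapping might be declared follows; the profile name is hypothetical, and the device and virtual names simply mirror the example return value:
my-ec2-profile:
  provider: my-ec2-config
  # Illustrative block device mapping; adjust to your instance type
  block_device_mappings:
    - DeviceName: /dev/sdb
      VirtualName: ephemeral0
    - DeviceName: /dev/sdc
      VirtualName: ephemeral1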
- salt.cloud.clouds.ec2.copy_snapshot(kwargs=None, call=None)
- Copy a snapshot
- salt.cloud.clouds.ec2.create(vm_=None, call=None)
- Create a single VM from a data dict
- salt.cloud.clouds.ec2.create_attach_volumes(name, kwargs, call=None, wait_to_finish=True)
- Create and attach volumes to created node
- salt.cloud.clouds.ec2.create_keypair(kwargs=None, call=None)
- Create an SSH keypair
- salt.cloud.clouds.ec2.create_snapshot(kwargs=None, call=None, wait_to_finish=False)
- Create a snapshot.
- volume_id
- The ID of the Volume from which to create a snapshot.
- description
-
The optional description of the snapshot.
CLI Examples:
salt-cloud -f create_snapshot my-ec2-config volume_id=vol-351d8826
salt-cloud -f create_snapshot my-ec2-config volume_id=vol-351d8826 \
  description="My Snapshot Description"
- salt.cloud.clouds.ec2.create_volume(kwargs=None, call=None, wait_to_finish=False)
-
Create a volume
CLI Examples:
salt-cloud -f create_volume my-ec2-config zone=us-east-1b
salt-cloud -f create_volume my-ec2-config zone=us-east-1b tags='{"tag1": "val1", "tag2": "val2"}'
- salt.cloud.clouds.ec2.del_tags(name=None, kwargs=None, call=None, instance_id=None, resource_id=None)
-
Delete tags for a resource. Normally a VM name or instance_id is passed in,
but a resource_id may be passed instead. If both are passed in, the
instance_id will be used.
CLI Examples:
salt-cloud -a del_tags mymachine tags=mytag,
salt-cloud -a del_tags mymachine tags=tag1,tag2,tag3
salt-cloud -a del_tags resource_id=vol-3267ab32 tags=tag1,tag2,tag3
- salt.cloud.clouds.ec2.delete_keypair(kwargs=None, call=None)
- Delete an SSH keypair
- salt.cloud.clouds.ec2.delete_snapshot(kwargs=None, call=None)
- Delete a snapshot
- salt.cloud.clouds.ec2.delete_volume(name=None, kwargs=None, instance_id=None, call=None)
- Delete a volume
- salt.cloud.clouds.ec2.delvol_on_destroy(name, kwargs=None, call=None)
-
Delete all/specified EBS volumes upon instance termination
CLI Example:
salt-cloud -a delvol_on_destroy mymachine
- salt.cloud.clouds.ec2.describe_snapshots(kwargs=None, call=None)
- Describe a snapshot (or snapshots)
- snapshot_id
- One or more snapshot IDs. Multiple IDs must be separated by ",".
- owner
- Return the snapshots owned by the specified owner. Valid values include: self, amazon, <AWS Account ID>. Multiple values must be separated by ",".
- restorable_by
-
One or more AWS accounts IDs that can create volumes from the snapshot.
Multiple aws account IDs must be separated by ",".
TODO: Add all of the filters.
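No CLI example is given above; based on the parameters listed, an invocation might look like the following sketch, where the provider name, snapshot IDs, and owner value are placeholders:
salt-cloud -f describe_snapshots my-ec2-config snapshot_id=snap-1234abcd,snap-5678efgh
salt-cloud -f describe_snapshots my-ec2-config owner=self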
- salt.cloud.clouds.ec2.describe_volumes(kwargs=None, call=None)
- Describe a volume (or volumes)
- volume_id
-
One or more volume IDs. Multiple IDs must be separated by ",".
TODO: Add all of the filters.
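Likewise, a hypothetical invocation of describe_volumes, using a placeholder provider name and volume IDs:
salt-cloud -f describe_volumes my-ec2-config volume_id=vol-3267ab32,vol-3267ab34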
- salt.cloud.clouds.ec2.destroy(name, call=None)
-
Destroy a node. Will check termination protection and warn if enabled.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.ec2.detach_volume(name=None, kwargs=None, instance_id=None, call=None)
- Detach a volume from an instance
- salt.cloud.clouds.ec2.disable_term_protect(name, call=None)
-
Disable termination protection on a node
CLI Example:
salt-cloud -a disable_term_protect mymachine
- salt.cloud.clouds.ec2.enable_term_protect(name, call=None)
-
Enable termination protection on a node
CLI Example:
salt-cloud -a enable_term_protect mymachine
- salt.cloud.clouds.ec2.get_availability_zone(vm_)
- Return the availability zone to use
- salt.cloud.clouds.ec2.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.ec2.get_console_output(name=None, instance_id=None, call=None, kwargs=None)
-
Show the console output from the instance.
By default, returns decoded data, not the Base64-encoded data that is actually returned from the EC2 API.
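No CLI example is shown for this action; since it accepts a VM name, a call would presumably look like the following, with the machine name as a placeholder:
salt-cloud -a get_console_output mymachine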
- salt.cloud.clouds.ec2.get_location(vm_=None)
- Return the EC2 region to use, in this order:
- •
- CLI parameter
- •
- VM parameter
- •
- Cloud profile setting
- salt.cloud.clouds.ec2.get_password_data(name=None, kwargs=None, instance_id=None, call=None)
-
Return password data for a Windows instance.
By default only the encrypted password data will be returned. However, if a key_file is passed in, then a decrypted password will also be returned.
Note that the key_file references the private key that was used to generate the keypair associated with this instance. This private key will _not_ be transmitted to Amazon; it is only used internally inside of Salt Cloud to decrypt data _after_ it has been received from Amazon.
CLI Examples:
salt-cloud -a get_password_data mymachine
salt-cloud -a get_password_data mymachine key_file=/root/ec2key.pem
Note: PKCS1_v1_5 was added in PyCrypto 2.5
- salt.cloud.clouds.ec2.get_placementgroup(vm_)
- Returns the PlacementGroup to use
- salt.cloud.clouds.ec2.get_provider(vm_=None)
- Extract the provider name from vm
- salt.cloud.clouds.ec2.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.ec2.get_spot_config(vm_)
- Returns the spot instance configuration for the provided vm
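get_spot_config reads the spot-instance settings from the VM's cloud profile. A minimal sketch, assuming a hypothetical profile name and an example bid price:
my-ec2-spot-profile:
  provider: my-ec2-config
  spot_config:
    # Maximum hourly bid price in USD; the value here is an example only
    spot_price: 0.10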
- salt.cloud.clouds.ec2.get_ssh_gateway_config(vm_)
- Return the ssh_gateway configuration.
- salt.cloud.clouds.ec2.get_subnetid(vm_)
- Returns the SubnetId to use
- salt.cloud.clouds.ec2.get_tags(name=None, instance_id=None, call=None, location=None, kwargs=None, resource_id=None)
-
Retrieve tags for a resource. Normally a VM name or instance_id is passed
in, but a resource_id may be passed instead. If both are passed in, the
instance_id will be used.
CLI Examples:
salt-cloud -a get_tags mymachine
salt-cloud -a get_tags resource_id=vol-3267ab32
- salt.cloud.clouds.ec2.get_tenancy(vm_)
-
Returns the Tenancy to use.
Can be "dedicated" or "default". Cannot be present for spot instances.
- salt.cloud.clouds.ec2.iam_profile(vm_)
-
Return the IAM profile.
The IAM instance profile to associate with the instances. This is either the Amazon Resource Name (ARN) of the instance profile or the name of the role.
Type: String
Default: None
Required: No
Example: arn:aws:iam::111111111111:instance-profile/s3access
Example: s3access
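As the examples above suggest, this value is normally supplied in the cloud profile. A minimal sketch, with a hypothetical profile name and the ARN taken from the example above:
my-ec2-iam-profile:
  provider: my-ec2-config
  iam_profile: arn:aws:iam::111111111111:instance-profile/s3access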
- salt.cloud.clouds.ec2.keepvol_on_destroy(name, kwargs=None, call=None)
-
Do not delete all/specified EBS volumes upon instance termination
CLI Example:
salt-cloud -a keepvol_on_destroy mymachine
- salt.cloud.clouds.ec2.keyname(vm_)
- Return the keyname
- salt.cloud.clouds.ec2.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.ec2.list_nodes_full(location=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.ec2.list_nodes_min(location=None, call=None)
- Return a list of the VMs that are on the provider. Only a list of VM names, and their state, is returned. This is the minimum amount of information needed to check for existing VMs.
- salt.cloud.clouds.ec2.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.ec2.optimize_providers(providers)
-
Return an optimized list of providers.
We want to reduce the duplication of querying the same region.
If a provider is using the same credentials for the same region, the same data will be returned for each provider, causing unwanted duplicate data and API calls to EC2.
- salt.cloud.clouds.ec2.query(params=None, setname=None, requesturl=None, location=None, return_url=False, return_root=False)
- salt.cloud.clouds.ec2.query_instance(vm_=None, call=None)
- Query an instance upon creation from the EC2 API
- salt.cloud.clouds.ec2.queue_instances(instances)
-
Queue a set of instances to be provisioned later. Expects a list.
Currently this only queries node data, and then places it in the cloud cache (if configured). If the salt-cloud-reactor is being used, these instances will be automatically provisioned using that.
For more information about the salt-cloud-reactor, see:
- salt.cloud.clouds.ec2.reboot(name, call=None)
-
Reboot a node.
CLI Example:
salt-cloud -a reboot mymachine
- salt.cloud.clouds.ec2.rename(name, kwargs, call=None)
-
Properly rename a node. Pass in the new name as "new name".
CLI Example:
salt-cloud -a rename mymachine newname=yourmachine
- salt.cloud.clouds.ec2.request_instance(vm_=None, call=None)
-
Put together all of the information necessary to request an instance on EC2,
and then fire off the request for the instance.
Returns data about the instance
- salt.cloud.clouds.ec2.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.ec2.securitygroup(vm_)
- Return the security group
- salt.cloud.clouds.ec2.securitygroupid(vm_)
- Returns the SecurityGroupId
- salt.cloud.clouds.ec2.set_tags(name=None, tags=None, call=None, location=None, instance_id=None, resource_id=None, kwargs=None)
-
Set tags for a resource. Normally a VM name or instance_id is passed in,
but a resource_id may be passed instead. If both are passed in, the
instance_id will be used.
CLI Examples:
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
salt-cloud -a set_tags resource_id=vol-3267ab32 tag=somestuff
- salt.cloud.clouds.ec2.show_delvol_on_destroy(name, kwargs=None, call=None)
-
Show the delete-on-termination setting for all/specified EBS volumes attached to an instance
CLI Example:
salt-cloud -a show_delvol_on_destroy mymachine
- salt.cloud.clouds.ec2.show_image(kwargs, call=None)
- Show the details from EC2 concerning an AMI
- salt.cloud.clouds.ec2.show_instance(name=None, instance_id=None, call=None, kwargs=None)
-
Show the details from EC2 concerning an instance.
Can be called as an action (which requires a name):
salt-cloud -a show_instance myinstance
...or as a function (which requires either a name or instance_id):
salt-cloud -f show_instance my-ec2 name=myinstance
salt-cloud -f show_instance my-ec2 instance_id=i-d34db33f
- salt.cloud.clouds.ec2.show_keypair(kwargs=None, call=None)
- Show the details of an SSH keypair
- salt.cloud.clouds.ec2.show_term_protect(name=None, instance_id=None, call=None, quiet=False)
- Show the termination protection status of an instance
- salt.cloud.clouds.ec2.show_volume(kwargs=None, call=None)
- Wrapper around describe_volumes. Here just to keep functionality. May be deprecated later.
- salt.cloud.clouds.ec2.sign(key, msg)
- salt.cloud.clouds.ec2.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.ec2.start(name, call=None)
- Start a node
- salt.cloud.clouds.ec2.stop(name, call=None)
- Stop a node
- salt.cloud.clouds.ec2.wait_for_instance(vm_=None, data=None, ip_address=None, display_ssh_output=True, call=None)
- Wait for an instance upon creation from the EC2 API, to become available
salt.cloud.clouds.gce
Copyright 2013 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Google Compute Engine Module
The Google Compute Engine module. This module interfaces with Google Compute Engine. To authenticate to GCE, you will need to create a Service Account.
- Setting up Service Account Authentication:
- •
- Go to the Cloud Console at: https://cloud.google.com/console.
- •
- Create or navigate to your desired Project.
- •
- Make sure Google Compute Engine service is enabled under the Services section.
- •
- Go to "APIs and auth" section, and then the "Credentials" link.
- •
- Click the "CREATE NEW CLIENT ID" button.
- •
- Select "Service Account" and click "Create Client ID" button.
- •
- This will automatically download a .json file; ignore it.
- •
- Look for a new "Service Account" section in the page, click on the "Generate New P12 key" button
- •
- Copy the Email Address for inclusion in your /etc/salt/cloud file in the 'service_account_email_address' setting.
- •
- Download the Private Key
- •
- The key that you download is a PKCS12 key. It needs to be converted to the PEM format.
- •
- Convert the key using OpenSSL (the default password is 'notasecret'): openssl pkcs12 -in PRIVKEY.p12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out ~/PRIVKEY.pem
- •
- Add the full path name of the converted private key to your /etc/salt/cloud file as 'service_account_private_key' setting.
- •
-
Consider using a more secure location for your private key.
my-gce-config:
  # The Google Cloud Platform Project ID
  project: "my-project-id"
  # The Service Account client ID
  service_account_email_address: 1234567890@developer.gserviceaccount.com
  # The location of the private key (PEM format)
  service_account_private_key: /home/erjohnso/PRIVKEY.pem
  provider: gce
  # Specify whether to use public or private IP for deploy script.
  # Valid options are:
  #     private_ips - The salt-master is also hosted with GCE
  #     public_ips - The salt-master is hosted outside of GCE
  ssh_interface: public_ips
- maintainer
- Eric Johnson <erjohnso@google.com>
- maturity
- new
- depends
- libcloud >= 0.14.1
- depends
- pycrypto >= 2.1
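Building on the provider configuration above, a GCE cloud profile supplies the image, machine type, and zone for new instances. This sketch is illustrative; the profile name and the specific image, size, and location are assumptions that must exist in your project:
my-gce-profile:
  provider: my-gce-config
  # Placeholder values; list real ones with --list-images, --list-sizes, --list-locations
  image: centos-6
  size: n1-standard-1
  location: us-central1-a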
- salt.cloud.clouds.gce.attach_disk(name=None, kwargs=None, call=None)
-
Attach an existing disk to an existing instance.
CLI Example:
salt-cloud -a attach_disk myinstance disk_name=mydisk mode=READ_WRITE
- salt.cloud.clouds.gce.attach_lb(kwargs=None, call=None)
-
Add an existing node/member to an existing load-balancer configuration.
CLI Example:
salt-cloud -f attach_lb gce name=lb member=myinstance
- salt.cloud.clouds.gce.avail_images(conn=None)
-
Return a dict of all available VM images on the cloud provider with
relevant data
Note that for GCE, there are custom images within the project, but the generic images are in other projects. This returns a dict of images in the project plus images in 'debian-cloud' and 'centos-cloud' (If there is overlap in names, the one in the current project is used.)
- salt.cloud.clouds.gce.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.gce.avail_sizes(conn=None)
- Return a dict of available instance sizes (a.k.a. machine types) and convert them to something more serializable.
- salt.cloud.clouds.gce.create(vm_=None, call=None)
- Create a single GCE instance from a data dict.
- salt.cloud.clouds.gce.create_address(kwargs=None, call=None)
-
Create a static address in a region.
CLI Example:
salt-cloud -f create_address gce name=my-ip region=us-central1 address=IP
- salt.cloud.clouds.gce.create_disk(kwargs=None, call=None)
-
Create a new persistent disk. Must specify disk_name and location.
Can also specify an image or snapshot but if neither of those are
specified, a size (in GB) is required.
CLI Example:
salt-cloud -f create_disk gce disk_name=pd size=300 location=us-central1-b
- salt.cloud.clouds.gce.create_fwrule(kwargs=None, call=None)
-
Create a GCE firewall rule. The 'default' network is used if not specified.
CLI Example:
salt-cloud -f create_fwrule gce name=allow-http allow=tcp:80
- salt.cloud.clouds.gce.create_hc(kwargs=None, call=None)
-
Create an HTTP health check configuration.
CLI Example:
salt-cloud -f create_hc gce name=hc path=/healthy port=80
- salt.cloud.clouds.gce.create_lb(kwargs=None, call=None)
-
Create a load-balancer configuration.
CLI Example:
salt-cloud -f create_lb gce name=lb region=us-central1 ports=80
- salt.cloud.clouds.gce.create_network(kwargs=None, call=None)
-
Create a GCE network.
CLI Example:
salt-cloud -f create_network gce name=mynet cidr=10.10.10.0/24
- salt.cloud.clouds.gce.create_snapshot(kwargs=None, call=None)
-
Create a new disk snapshot. Must specify name and disk_name.
CLI Example:
salt-cloud -f create_snapshot gce name=snap1 disk_name=pd
- salt.cloud.clouds.gce.delete_address(kwargs=None, call=None)
-
Permanently delete a static address.
CLI Example:
salt-cloud -f delete_address gce name=my-ip
- salt.cloud.clouds.gce.delete_disk(kwargs=None, call=None)
-
Permanently delete a persistent disk.
CLI Example:
salt-cloud -f delete_disk gce disk_name=pd
- salt.cloud.clouds.gce.delete_fwrule(kwargs=None, call=None)
-
Permanently delete a firewall rule.
CLI Example:
salt-cloud -f delete_fwrule gce name=allow-http
- salt.cloud.clouds.gce.delete_hc(kwargs=None, call=None)
-
Permanently delete a health check.
CLI Example:
salt-cloud -f delete_hc gce name=hc
- salt.cloud.clouds.gce.delete_lb(kwargs=None, call=None)
-
Permanently delete a load-balancer.
CLI Example:
salt-cloud -f delete_lb gce name=lb
- salt.cloud.clouds.gce.delete_network(kwargs=None, call=None)
-
Permanently delete a network.
CLI Example:
salt-cloud -f delete_network gce name=mynet
- salt.cloud.clouds.gce.delete_snapshot(kwargs=None, call=None)
-
Permanently delete a disk snapshot.
CLI Example:
salt-cloud -f delete_snapshot gce name=disk-snap-1
- salt.cloud.clouds.gce.destroy(vm_name, call=None)
-
Call 'destroy' on the instance. Can be called with "-a destroy" or -d
CLI Example:
salt-cloud -a destroy myinstance1 myinstance2 ...
salt-cloud -d myinstance1 myinstance2 ...
- salt.cloud.clouds.gce.detach_disk(name=None, kwargs=None, call=None)
-
Detach a disk from an instance.
CLI Example:
salt-cloud -a detach_disk myinstance disk_name=mydisk
- salt.cloud.clouds.gce.detach_lb(kwargs=None, call=None)
-
Remove an existing node/member from an existing load-balancer configuration.
CLI Example:
salt-cloud -f detach_lb gce name=lb member=myinstance
- salt.cloud.clouds.gce.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.gce.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.gce.get_lb_conn(gce_driver=None)
- Return a load-balancer conn object
- salt.cloud.clouds.gce.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.gce.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.gce.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.gce.reboot(vm_name, call=None)
-
Call GCE 'reset' on the instance.
CLI Example:
salt-cloud -a reboot myinstance
- salt.cloud.clouds.gce.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.gce.show_address(kwargs=None, call=None)
-
Show the details of an existing static address.
CLI Example:
salt-cloud -f show_address gce name=my-ip region=us-central1
- salt.cloud.clouds.gce.show_disk(name=None, kwargs=None, call=None)
-
Show the details of an existing disk.
CLI Example:
salt-cloud -a show_disk myinstance disk_name=mydisk
salt-cloud -f show_disk gce disk_name=mydisk
- salt.cloud.clouds.gce.show_fwrule(kwargs=None, call=None)
-
Show the details of an existing firewall rule.
CLI Example:
salt-cloud -f show_fwrule gce name=allow-http
- salt.cloud.clouds.gce.show_hc(kwargs=None, call=None)
-
Show the details of an existing health check.
CLI Example:
salt-cloud -f show_hc gce name=hc
- salt.cloud.clouds.gce.show_instance(vm_name, call=None)
- Show the details of the existing instance.
- salt.cloud.clouds.gce.show_lb(kwargs=None, call=None)
-
Show the details of an existing load-balancer.
CLI Example:
salt-cloud -f show_lb gce name=lb
- salt.cloud.clouds.gce.show_network(kwargs=None, call=None)
-
Show the details of an existing network.
CLI Example:
salt-cloud -f show_network gce name=mynet
- salt.cloud.clouds.gce.show_snapshot(kwargs=None, call=None)
-
Show the details of an existing snapshot.
CLI Example:
salt-cloud -f show_snapshot gce name=mysnapshot
salt.cloud.clouds.gogrid
GoGrid Cloud Module
The GoGrid cloud module. This module interfaces with the GoGrid public cloud service. To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
- depends
-
libcloud >= 0.13.2
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/gogrid.conf:
my-gogrid-config:
  # The generated api key to use
  apikey: asdff7896asdh789
  # The apikey's shared secret
  sharedsecret: saltybacon
  provider: gogrid
NOTE: A Note about using Map files with GoGrid:
Due to limitations in the GoGrid API, instances cannot be provisioned in parallel with the GoGrid driver. Map files will work with GoGrid, but the -P argument should not be used on maps referencing GoGrid instances.
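To illustrate the note above, a map that references GoGrid instances should be run without the -P flag so they are created serially. The profile and VM names in this sketch are placeholders:
# /etc/salt/cloud.map (illustrative)
gogrid_512:
  - web1
  - web2
Such a map would then be applied with salt-cloud -m /etc/salt/cloud.map, omitting -P.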
- salt.cloud.clouds.gogrid.avail_images(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.gogrid.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.gogrid.avail_sizes(conn=None, call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data
- salt.cloud.clouds.gogrid.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.gogrid.destroy(name, conn=None, call=None)
- Delete a single VM
- salt.cloud.clouds.gogrid.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.gogrid.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.gogrid.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.gogrid.get_node(conn, name)
- Return a libcloud node for the named VM
- salt.cloud.clouds.gogrid.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.gogrid.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.gogrid.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.gogrid.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.gogrid.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.gogrid.reboot(name, conn=None)
- Reboot a single VM
- salt.cloud.clouds.gogrid.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.gogrid.show_instance(name, call=None)
- Show the details from the provider concerning an instance
salt.cloud.clouds.joyent
Joyent Cloud Module
The Joyent Cloud module is used to interact with the Joyent cloud system.
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/joyent.conf:
my-joyent-config:
  provider: joyent
  # The Joyent login user
  user: fred
  # The Joyent user's password
  password: saltybacon
  # The location of the ssh private key that can log into the new VM
  private_key: /root/mykey.pem
  # The name of the private key
  private_key: mykey
When creating your profiles for the Joyent cloud, add the location attribute to the profile; it will automatically be picked up when performing tasks associated with that VM. An example profile might look like:
joyent_512:
  provider: my-joyent-config
  size: Extra Small 512 MB
  image: centos-6
  location: us-east-1
This driver can also be used with the Joyent SmartDataCenter project. More details can be found at:
Using SDC requires that an api_host_suffix is set. The default value for this is .api.joyentcloud.com. All characters, including the leading ., should be included:
api_host_suffix: .api.myhostname.com
- depends
- PyCrypto
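For an SDC installation, the api_host_suffix discussed above would sit alongside the normal provider settings. This is a sketch only; the provider name, credentials, and host suffix are placeholders:
my-sdc-config:
  provider: joyent
  user: fred
  password: saltybacon
  private_key: /root/mykey.pem
  api_host_suffix: .api.myhostname.com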
- salt.cloud.clouds.joyent.avail_images(call=None)
-
Get list of available images
CLI Example:
salt-cloud --list-images
Can use a custom URL for images. Default is:
image_url: images.joyent.com/image
- salt.cloud.clouds.joyent.avail_locations(call=None)
- List all available locations
- salt.cloud.clouds.joyent.avail_sizes(call=None)
-
get list of available packages
CLI Example:
salt-cloud --list-sizes
- salt.cloud.clouds.joyent.create(vm_)
-
Create a single VM from a data dict
CLI Example:
salt-cloud -p profile_name vm_name
- salt.cloud.clouds.joyent.create_node(**kwargs)
- Convenience function to make the REST API call for node creation.
- salt.cloud.clouds.joyent.delete_key(kwargs=None, call=None)
-
Delete the SSH key with the given keyname
CLI Example:
salt-cloud -f delete_key joyent keyname=mykey
- salt.cloud.clouds.joyent.destroy(name, call=None)
- destroy a machine by name
- Parameters
- •
- name -- name given to the machine
- •
- call -- call value in this case is 'action'
- Returns
-
array of booleans: true if successfully stopped and true if
successfully removed
CLI Example:
salt-cloud -d vm_name
- salt.cloud.clouds.joyent.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.joyent.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.joyent.get_location(vm_=None)
- Return the joyent data center to use, in this order:
- •
- CLI parameter
- •
- VM parameter
- •
- Cloud profile setting
- salt.cloud.clouds.joyent.get_location_path(location='us-east-1', api_host_suffix='.api.joyentcloud.com')
-
Create a URL from the location variable
:param location: joyent data center location
:return: url
- salt.cloud.clouds.joyent.get_node(name)
-
Get the node from the full node list by name
:param name: name of the VM
:return: node object
- salt.cloud.clouds.joyent.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.joyent.get_size(vm_)
- Return the VM's size object
- salt.cloud.clouds.joyent.has_method(obj, method_name)
- Find if the provided object has a specific method
- salt.cloud.clouds.joyent.import_key(kwargs=None, call=None)
-
Import an SSH public key
CLI Example:
salt-cloud -f import_key joyent keyname=mykey keyfile=/tmp/mykey.pub
- salt.cloud.clouds.joyent.joyent_node_state(id_)
- Convert joyent returned state to state common to other data center return values for consistency
- Parameters
- id -- joyent state value
- Returns
- libcloudfuncs state value
- salt.cloud.clouds.joyent.key_list(items=None)
-
Convert a list to a dictionary using the key as the identifier
:param items: array to iterate over
:return: dictionary
- salt.cloud.clouds.joyent.list_keys(kwargs=None, call=None)
- List the keys available
- salt.cloud.clouds.joyent.list_nodes(full=False, call=None)
-
list of nodes, keeping only a brief listing
CLI Example:
salt-cloud -Q
- salt.cloud.clouds.joyent.list_nodes_full(call=None)
-
list of nodes, maintaining all content provided from joyent listings
CLI Example:
salt-cloud -F
- salt.cloud.clouds.joyent.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.joyent.query(action=None, command=None, args=None, method='GET', location=None, data=None)
- Make a web call to Joyent
- salt.cloud.clouds.joyent.query_instance(vm_=None, call=None)
- Query an instance upon creation from the Joyent API
- salt.cloud.clouds.joyent.reboot(name, call=None)
-
reboot a machine by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
salt-cloud -a reboot vm_name
- salt.cloud.clouds.joyent.reformat_node(item=None, full=False)
- Reformat the returned data from joyent, determine public/private IPs and strip out fields if necessary to provide either full or brief content.
- Parameters
- •
- item -- node dictionary
- •
- full -- full or brief output
- Returns
- dict
- salt.cloud.clouds.joyent.show_instance(name, call=None)
-
get details about a machine
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: machine information
CLI Example:
salt-cloud -a show_instance vm_name
- salt.cloud.clouds.joyent.show_key(kwargs=None, call=None)
- List the keys available
- salt.cloud.clouds.joyent.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.joyent.start(name, call=None)
-
start a machine by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
salt-cloud -a start vm_name
- salt.cloud.clouds.joyent.stop(name, call=None)
-
stop a machine by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
salt-cloud -a stop vm_name
- salt.cloud.clouds.joyent.take_action(name=None, call=None, command=None, data=None, method='GET', location='us-east-1')
-
Take action call used by start, stop, and reboot
:param name: name given to the machine
:param call: call value in this case is 'action'
:command: api path
:data: any data to be passed to the api, must be in json format
:method: GET, POST, or DELETE
:location: data center to execute the command on
:return: true if successful
salt.cloud.clouds.libcloud_aws
The AWS Cloud Module
The AWS cloud module is used to interact with the Amazon Web Services system.
This module has been replaced by the EC2 cloud module, and is no longer supported. The documentation shown here is for reference only; it is highly recommended to change all usages of this driver over to the EC2 driver.
If this driver is still needed, set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/aws.conf:
my-aws-config:
  # The AWS API authentication id
  id: GKTADJGHEIQSXMKKRBJ08H
  # The AWS API authentication key
  key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
  # The ssh keyname to use
  keyname: default
  # The amazon security group
  securitygroup: ssh_open
  # The location of the private key which corresponds to the keyname
  private_key: /root/default.pem
  provider: aws
- salt.cloud.clouds.libcloud_aws.block_device_mappings(vm_)
-
Return the block device mapping:
[{'DeviceName': '/dev/sdb', 'VirtualName': 'ephemeral0'}, {'DeviceName': '/dev/sdc', 'VirtualName': 'ephemeral1'}]
- salt.cloud.clouds.libcloud_aws.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.libcloud_aws.create_attach_volumes(volumes, location, data)
- Create and attach volumes to created node
- salt.cloud.clouds.libcloud_aws.del_tags(name, kwargs, call=None)
-
Delete tags for a node
CLI Example:
salt-cloud -a del_tags mymachine tag1,tag2,tag3
- salt.cloud.clouds.libcloud_aws.destroy(name)
- Wrap core libcloudfuncs destroy method, adding check for termination protection
- salt.cloud.clouds.libcloud_aws.get_availability_zone(conn, vm_)
- Return the availability zone to use
- salt.cloud.clouds.libcloud_aws.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.libcloud_aws.get_conn(**kwargs)
- Return a conn object for the passed VM data
- salt.cloud.clouds.libcloud_aws.get_location(vm_=None)
- Return the AWS region to use, in this order:
- •
- CLI parameter
- •
- Cloud profile setting
- •
- Global salt-cloud config
- salt.cloud.clouds.libcloud_aws.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.libcloud_aws.get_tags(name, call=None)
- Retrieve tags for a node
- salt.cloud.clouds.libcloud_aws.iam_profile(vm_)
- Return the IAM role
- salt.cloud.clouds.libcloud_aws.keyname(vm_)
- Return the keyname
- salt.cloud.clouds.libcloud_aws.rename(name, kwargs, call=None)
-
Properly rename a node. Pass in the new name as "new name".
CLI Example:
salt-cloud -a rename mymachine newname=yourmachine
- salt.cloud.clouds.libcloud_aws.securitygroup(vm_)
- Return the security group
- salt.cloud.clouds.libcloud_aws.set_tags(name, tags, call=None)
-
Set tags for a node
CLI Example:
salt-cloud -a set_tags mymachine tag1=somestuff tag2='Other stuff'
- salt.cloud.clouds.libcloud_aws.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.libcloud_aws.ssh_username(vm_)
- Return the ssh_username. Defaults to 'ec2-user'.
- salt.cloud.clouds.libcloud_aws.start(name, call=None)
- Start a node
- salt.cloud.clouds.libcloud_aws.stop(name, call=None)
- Stop a node
salt.cloud.clouds.linode
Linode Cloud Module using Apache Libcloud OR linode-python bindings
The Linode cloud module is used to control access to the Linode VPS system
Use of this module only requires the apikey parameter.
- depends
-
linode-python >= 1.1.1
OR
- depends
-
apache-libcloud >= 0.13.2
NOTE: The linode-python driver will work with earlier versions of linode-python, but it is highly recommended to use a minimum version of 1.1.1. Earlier versions leak sensitive information into the debug logs.
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/linode.conf:
my-linode-config:
  # Linode account api key
  apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv
  provider: linode
When used with linode-python, this provider supports cloning existing Linodes. To clone, add a profile with a clonefrom key, and a script_args: -C.
Clonefrom should be the name of the Linode that is the source for the clone. script_args: -C passes a -C to the bootstrap script, which only configures the minion and doesn't try to install a new copy of salt-minion. This way the minion gets new keys and the keys get pre-seeded on the master, and the /etc/salt/minion file has the right 'id:' declaration.
Cloning requires a post 2015-02-01 salt-bootstrap.
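Putting the cloning notes above together, a clone profile might look like the following sketch; the profile name and the source Linode name are placeholders:
linode-clone-www:
  provider: my-linode-config
  # Name of the existing Linode to clone from (placeholder)
  clonefrom: www-template
  # Only (re)configure the minion; do not reinstall salt-minion
  script_args: -C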
- salt.cloud.clouds.linode.avail_images(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.linode.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.linode.avail_sizes(conn=None, call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data
- salt.cloud.clouds.linode.boot(LinodeID=None, configid=None)
- Execute a boot sequence on a linode
- salt.cloud.clouds.linode.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.linode.create_config(vm_, LinodeID=None, root_disk_id=None, swap_disk_id=None)
- Create a Linode Config
- salt.cloud.clouds.linode.create_disk_from_distro(vm_=None, LinodeID=None, swapsize=None)
- Create the disk for the linode
- salt.cloud.clouds.linode.create_swap_disk(vm_=None, LinodeID=None, swapsize=None)
- Create the disk for the linode
- salt.cloud.clouds.linode.destroy(name, conn=None, call=None)
- Delete a single VM
- salt.cloud.clouds.linode.get_auth(vm_)
- Return either NodeAuthSSHKey or NodeAuthPassword, preferring NodeAuthSSHKey if both are provided.
- salt.cloud.clouds.linode.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.linode.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.linode.get_disk_size(vm_, size, swap)
- Return the size of the root disk in MB
- salt.cloud.clouds.linode.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.linode.get_kernels(conn=None)
- Get Linode's list of kernels available
- salt.cloud.clouds.linode.get_location(conn, vm_)
- Return the node location to use
- salt.cloud.clouds.linode.get_node(conn, name)
- Return a libcloud node for the named VM
- salt.cloud.clouds.linode.get_one_kernel(conn=None, name=None)
- Return data on one kernel. name=None returns the latest kernel.
- salt.cloud.clouds.linode.get_password(vm_)
- Return the password to use
- salt.cloud.clouds.linode.get_private_ip(vm_)
- Return True if a private ip address is requested
- salt.cloud.clouds.linode.get_pubkey(vm_)
- Return the SSH pubkey to use
- salt.cloud.clouds.linode.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.linode.get_ssh_key_filename(vm_)
- Return path to filename if get_auth() returns a NodeAuthSSHKey.
- salt.cloud.clouds.linode.get_swap(vm_)
- Return the amount of swap space to use in MB
- salt.cloud.clouds.linode.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.linode.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.linode.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.linode.remove_complex_types(dictionary)
- Linode-python is now returning some complex types that are not serializable by msgpack. Kill those.
- salt.cloud.clouds.linode.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.linode.show_instance(name, call=None)
- Show the details from the provider concerning an instance
- salt.cloud.clouds.linode.waitfor_job(conn=None, LinodeID=None, JobID=None, timeout=300, quiet=True)
- salt.cloud.clouds.linode.waitfor_status(conn=None, LinodeID=None, status=None, timeout=300, quiet=True)
- Wait for a certain status
salt.cloud.clouds.lxc
Install Salt on an LXC Container
New in version 2014.7.0.
Please read core config documentation.
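As a rough orientation only (the authoritative keys are in the core config documentation referenced above), an LXC provider entry typically names the minion that will host the containers. The names in this sketch are placeholders:
devhost10-lxc:
  # Minion ID of the host that will run the containers (placeholder)
  target: devhost10
  provider: lxc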
- salt.cloud.clouds.lxc.avail_images()
- salt.cloud.clouds.lxc.create(vm_, call=None)
-
Create an lxc Container.
This function is idempotent and will try to either provision
or finish the provision of an lxc container.
NOTE: Most of the initialization code has been moved and merged with the lxc runner and lxc.init functions
- salt.cloud.clouds.lxc.destroy(vm_, call=None)
- Destroy a lxc container
- salt.cloud.clouds.lxc.get_configured_provider(vm_=None)
- Return the contextual provider, or None if no configured one can be found.
- salt.cloud.clouds.lxc.get_provider(name)
- salt.cloud.clouds.lxc.list_nodes(conn=None, call=None)
- salt.cloud.clouds.lxc.list_nodes_full(conn=None, call=None)
- salt.cloud.clouds.lxc.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.lxc.show_instance(name, call=None)
- Show the details from the provider concerning an instance
salt.cloud.clouds.msazure
Azure Cloud Module
The Azure cloud module is used to control access to Microsoft Azure
- depends
- •
- Microsoft Azure SDK for Python
- configuration
- Required provider parameters:
- •
- apikey
- •
- certificate_path
- •
-
subscription_id
A Management Certificate (.pem and .crt files) must be created and the .pem file placed on the same machine that salt-cloud is run from. Information on creating the pem file to use, and uploading the associated cer file can be found at:
http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/
Example /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/azure.conf configuration:
my-azure-config:
  provider: azure
  subscription_id: 3287abc8-f98a-c678-3bde-326766fd3617
  certificate_path: /etc/salt/azure.pem
  management_host: management.core.windows.net
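A cloud profile then references this provider. The sketch below is partial and illustrative; the profile name, image, location, and credentials are assumptions that must be replaced with values valid for your subscription:
azure-ubuntu:
  provider: my-azure-config
  # Placeholder values; list real images and locations with --list-images / --list-locations
  image: <image name from avail_images>
  location: West US
  ssh_username: azureuser
  ssh_password: verybadpass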
- salt.cloud.clouds.msazure.avail_images(conn=None, call=None)
- List available images for Azure
- salt.cloud.clouds.msazure.avail_locations(conn=None, call=None)
- List available locations for Azure
- salt.cloud.clouds.msazure.avail_sizes(call=None)
- Because sizes are built into images with Azure, there will be no sizes to return here
- salt.cloud.clouds.msazure.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.msazure.create_attach_volumes(name, kwargs, call=None, wait_to_finish=True)
- Create and attach volumes to created node
- salt.cloud.clouds.msazure.destroy(name, conn=None, call=None, kwargs=None)
-
Destroy a VM
CLI Examples:
salt-cloud -d myminion
salt-cloud -a destroy myminion service_name=myservice
- salt.cloud.clouds.msazure.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.msazure.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.msazure.list_disks(conn=None, call=None)
- List disks associated with the Azure account
- salt.cloud.clouds.msazure.list_hosted_services(conn=None, call=None)
- List hosted services on this Azure account, with full information
- salt.cloud.clouds.msazure.list_nodes(conn=None, call=None)
- List VMs on this Azure account
- salt.cloud.clouds.msazure.list_nodes_full(conn=None, call=None)
- List VMs on this Azure account, with full information
- salt.cloud.clouds.msazure.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.msazure.list_storage_services(conn=None, call=None)
- List storage services on this Azure account, with full information
- salt.cloud.clouds.msazure.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.msazure.show_instance(name, call=None)
- Show the details from the provider concerning an instance
- salt.cloud.clouds.msazure.show_service(kwargs=None, conn=None, call=None)
- Show the details from the provider concerning an instance
salt.cloud.clouds.nova
OpenStack Nova Cloud Module
PLEASE NOTE: This module is currently in early development, and considered to be experimental and unstable. It is not recommended for production use. Unless you are actively developing code in this module, you should use the OpenStack module instead.
OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it.
The OpenStack Nova module for Salt Cloud was bootstrapped from the OpenStack module for Salt Cloud, which uses a libcloud-based connection. The Nova module is designed to use the nova and glance modules already built into Salt.
These modules use the Python novaclient and glanceclient libraries, respectively. In order to use this module, the proper salt configuration must also be in place. This can be specified in the master config, the minion config, a set of grains or a set of pillars.
my_openstack_profile:
  keystone.user: admin
  keystone.password: verybadpass
  keystone.tenant: admin
  keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'
Note that there is currently a dependency upon netaddr. This can be installed on Debian-based systems by means of the python-netaddr package.
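On such a system the dependency can be satisfied with the distribution package named in the text, for example:
apt-get install python-netaddr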
This module currently requires the latest develop branch of Salt to be installed.
This module has been tested to work with HP Cloud and Rackspace. See the documentation for specific options for either of these providers. These examples could be set up in the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/openstack.conf:
my-openstack-config:
  # The ID of the minion that will execute the salt nova functions
  auth_minion: myminion
  # The name of the configuration profile to use on said minion
  config_profile: my_openstack_profile
  ssh_key_name: mykey
  provider: nova
  userdata_file: /tmp/userdata.txt
For local installations that only use private IP address ranges, the following option may be useful. Using the old syntax:
Note: For API use, you will need an auth plugin. The base novaclient does not support API keys, but some providers, such as Rackspace, have extended keystone to accept them.
my-openstack-config:
  # Ignore IP addresses on this network for bootstrap
  ignore_cidr: 192.168.50.0/24

my-nova:
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/'
  compute_region: IAD
  user: myusername
  password: mypassword
  tenant: <userid>
  provider: nova

my-api:
  identity_url: 'https://identity.api.rackspacecloud.com/v2.0/'
  compute_region: IAD
  user: myusername
  api_key: <api_key>
  os_auth_plugin: rackspace
  tenant: <userid>
  provider: nova
  networks:
    - net-id: 47a38ff2-fe21-4800-8604-42bd1848e743
    - net-id: 00000000-0000-0000-0000-000000000000
    - net-id: 11111111-1111-1111-1111-111111111111
Note: You must include the default net-ids when setting networks or the server will be created without the rest of the interfaces
Note: For RackConnect v3, rackconnectv3 needs to be specified with the RackConnect v3 cloud network as its value
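Following that note, the option would presumably be set alongside the other provider settings, with the network name as a placeholder:
my-openstack-config:
  # RackConnect v3 cloud network name (placeholder)
  rackconnectv3: <rackconnect v3 cloud network name>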
- salt.cloud.clouds.nova.attach_volume(name, server_name, device='/dev/xvdb', **kwargs)
- Attach block volume
- salt.cloud.clouds.nova.avail_images()
- Return a dict of all available VM images on the cloud provider.
- salt.cloud.clouds.nova.avail_locations(conn=None, call=None)
- Return a list of locations
- salt.cloud.clouds.nova.avail_sizes()
- Return a dict of all available VM sizes on the cloud provider.
- salt.cloud.clouds.nova.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.nova.create_attach_volumes(name, call=None, **kwargs)
- Create and attach volumes to created node
- salt.cloud.clouds.nova.create_volume(name, size=100, snapshot=None, voltype=None, **kwargs)
- Create block storage device
- salt.cloud.clouds.nova.destroy(name, conn=None, call=None)
- Delete a single VM
- salt.cloud.clouds.nova.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.nova.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.nova.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.nova.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.nova.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.nova.ignore_cidr(vm_, ip)
- Return True if we are to ignore the specified IP. Compatible with IPv4.
- salt.cloud.clouds.nova.list_nodes(call=None, **kwargs)
- Return a list of the VMs in this location
- salt.cloud.clouds.nova.list_nodes_full(call=None, **kwargs)
- Return a list of the VMs in this location
- salt.cloud.clouds.nova.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.nova.managedcloud(vm_)
- Determine if we should wait for the managed cloud automation before running. Either 'False' (default) or 'True'.
- salt.cloud.clouds.nova.network_create(name, **kwargs)
- Create private networks
- salt.cloud.clouds.nova.network_list(call=None, **kwargs)
- List private networks
- salt.cloud.clouds.nova.preferred_ip(vm_, ips)
- Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
- salt.cloud.clouds.nova.rackconnect(vm_)
- Determine if we should wait for rackconnect automation before running. Either 'False' (default) or 'True'.
- salt.cloud.clouds.nova.reboot(name, conn=None)
- Reboot a single VM
- salt.cloud.clouds.nova.request_instance(vm_=None, call=None)
-
Put together all of the information necessary to request an instance
through Novaclient and then fire off the request for the instance.
Returns data about the instance
- salt.cloud.clouds.nova.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.nova.show_instance(name, call=None)
- Show the details from the provider concerning an instance
- salt.cloud.clouds.nova.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.nova.virtual_interface_create(name, net_name, **kwargs)
- Create private networks
- salt.cloud.clouds.nova.virtual_interface_list(name, **kwargs)
- List the virtual interfaces on a server
- salt.cloud.clouds.nova.volume_attach(name, server_name, device='/dev/xvdb', **kwargs)
- Attach block volume
- salt.cloud.clouds.nova.volume_create(name, size=100, snapshot=None, voltype=None, **kwargs)
- Create block storage device
- salt.cloud.clouds.nova.volume_create_attach(name, call=None, **kwargs)
- Create and attach volumes to created node
- salt.cloud.clouds.nova.volume_delete(name, **kwargs)
- Delete block storage device
- salt.cloud.clouds.nova.volume_detach(name, **kwargs)
- Detach block volume
- salt.cloud.clouds.nova.volume_list(**kwargs)
- List block devices
salt.cloud.clouds.opennebula
OpenNebula Cloud Module
The OpenNebula cloud module is used to control access to an OpenNebula cloud.
- depends
-
lxml
Use of this module requires the xml_rpc, user and password parameter to be set. Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/opennebula.conf:
my-opennebula-config:
  xml_rpc: http://localhost:2633/RPC2
  user: oneadmin
  password: JHGhgsayu32jsa
  provider: opennebula
- salt.cloud.clouds.opennebula.avail_images(call=None)
- Return a list of the templates that are on the provider
- salt.cloud.clouds.opennebula.avail_locations(call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.opennebula.avail_sizes(call=None)
- Because sizes are built into templates with OpenNebula, there will be no sizes to return here
- salt.cloud.clouds.opennebula.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.opennebula.destroy(name, call=None)
-
Destroy a node. Will check termination protection and warn if enabled.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.opennebula.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.opennebula.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.opennebula.get_location(vm_)
- Return the VM's location
- salt.cloud.clouds.opennebula.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.opennebula.list_nodes_full(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.opennebula.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.opennebula.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.opennebula.show_instance(name, call=None)
- Show the details from OpenNebula concerning a VM
salt.cloud.clouds.openstack
OpenStack Cloud Module
OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it.
- depends
-
libcloud >= 0.13.2
OpenStack provides a number of ways to authenticate. This module uses password-based authentication, using auth v2.0. It is likely to start supporting other methods of authentication provided by OpenStack in the future.
Note that there is currently a dependency upon netaddr. This can be installed on Debian-based systems by means of the python-netaddr package.
This module has been tested to work with HP Cloud and Rackspace. See the documentation for specific options for either of these providers. Some examples, using the old cloud configuration syntax, are provided below:
Set up in the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/openstack.conf:
my-openstack-config:
  # The OpenStack identity service url
  identity_url: https://region-b.geo-1.identity.hpcloudsvc.com:35357/v2.0/
  # The OpenStack compute region
  compute_region: region-b.geo-1
  # The OpenStack compute service name
  compute_name: Compute
  # The OpenStack tenant name (not tenant ID)
  tenant: myuser-tenant1
  # The OpenStack user name
  user: myuser
  # The OpenStack keypair name
  ssh_key_name: mykey
  # Skip SSL certificate validation
  insecure: false
  # The ssh key file
  ssh_key_file: /path/to/keyfile/test.pem
  # The OpenStack network UUIDs
  networks:
    - fixed:
      - 4402cd51-37ee-435e-a966-8245956dc0e6
    - floating:
      - Ext-Net
  files:
    /path/to/dest.txt: /local/path/to/src.txt
  # Skips the service catalog API endpoint, and uses the following
  base_url: http://192.168.1.101:3000/v2/12345
  provider: openstack
  userdata_file: /tmp/userdata.txt
  # config_drive is required for userdata at rackspace
  config_drive: True
For an in-house OpenStack Essex installation, libcloud needs the service_type:
my-openstack-config:
  identity_url: 'http://control.openstack.example.org:5000/v2.0/'
  compute_name: Compute Service
  service_type: compute
Either a password or an API key must also be specified:
my-openstack-password-or-api-config:
  # The OpenStack password
  password: letmein
  # The OpenStack API key
  apikey: 901d3f579h23c8v73q9
Optionally, if you don't want to save plain-text password in your configuration file, you can use keyring:
my-openstack-keyring-config:
  # The OpenStack password is stored in keyring
  # don't forget to set the password by running something like:
  # salt-cloud --set-password=myuser my-openstack-keyring-config
  password: USE_KEYRING
For local installations that only use private IP address ranges, the following option may be useful. Using the old syntax:
my-openstack-config:
  # Ignore IP addresses on this network for bootstrap
  ignore_cidr: 192.168.50.0/24
It is possible to upload a small set of files (no more than 5, and nothing too large) to the remote server. Generally this should not be needed, as salt itself can upload to the server after it is spun up, with nowhere near the same restrictions.
my-openstack-config:
  files:
    /path/to/dest.txt: /local/path/to/src.txt
Alternatively, one could use the private IP to connect by specifying:
my-openstack-config:
  ssh_interface: private_ips
- salt.cloud.clouds.openstack.avail_images(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.openstack.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.openstack.avail_sizes(conn=None, call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data
- salt.cloud.clouds.openstack.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.openstack.destroy(name, conn=None, call=None)
- Delete a single VM
- salt.cloud.clouds.openstack.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.openstack.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.openstack.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.openstack.get_node(conn, name)
- Return a libcloud node for the named VM
- salt.cloud.clouds.openstack.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.openstack.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.openstack.ignore_cidr(vm_, ip)
- Return True if we are to ignore the specified IP. Compatible with IPv4.
- salt.cloud.clouds.openstack.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.openstack.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.openstack.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.openstack.managedcloud(vm_)
- Determine if we should wait for the managed cloud automation before running. Either 'False' (default) or 'True'.
- salt.cloud.clouds.openstack.networks(vm_, kwargs=None)
- salt.cloud.clouds.openstack.preferred_ip(vm_, ips)
- Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
- salt.cloud.clouds.openstack.rackconnect(vm_)
- Determine if we should wait for rackconnect automation before running. Either 'False' (default) or 'True'.
- salt.cloud.clouds.openstack.reboot(name, conn=None)
- Reboot a single VM
- salt.cloud.clouds.openstack.request_instance(vm_=None, call=None)
-
Put together all of the information necessary to request an instance on OpenStack, and then fire off the request for the instance.
Returns data about the instance
- salt.cloud.clouds.openstack.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.openstack.show_instance(name, call=None)
- Show the details from the provider concerning an instance
- salt.cloud.clouds.openstack.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
salt.cloud.clouds.parallels
Parallels Cloud Module
The Parallels cloud module is used to control access to cloud providers using the Parallels VPS system.
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/parallels.conf:
my-parallels-config:
    # Parallels account information
    user: myuser
    password: mypassword
    url: https://api.cloud.xmission.com:4465/paci/v1.0/
    provider: parallels
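The provider configuration above only grants API access; creating machines also requires a cloud profile. A minimal sketch, assuming an image name of the kind returned by salt-cloud --list-images my-parallels-config:
parallels-ubuntu:
    provider: my-parallels-config
    image: ubuntu-12.04-x86_64
A VM would then be created with salt-cloud -p parallels-ubuntu myinstance.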
- salt.cloud.clouds.parallels.avail_images(call=None)
- Return a list of the images that are on the provider
- salt.cloud.clouds.parallels.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.parallels.create_node(vm_)
- Build and submit the XML to create a node
- salt.cloud.clouds.parallels.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.parallels.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.parallels.get_image(vm_)
- Return the image object to use
- salt.cloud.clouds.parallels.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.parallels.list_nodes_full(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.parallels.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.parallels.query(action=None, command=None, args=None, method='GET', data=None)
- Make a web call to a Parallels provider
- salt.cloud.clouds.parallels.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.parallels.show_image(kwargs, call=None)
- Show the details from Parallels concerning an image
- salt.cloud.clouds.parallels.show_instance(name, call=None)
- Show the details from Parallels concerning an instance
- salt.cloud.clouds.parallels.start(name, call=None)
-
Start a node.
CLI Example:
salt-cloud -a start mymachine
- salt.cloud.clouds.parallels.stop(name, call=None)
-
Stop a node.
CLI Example:
salt-cloud -a stop mymachine
- salt.cloud.clouds.parallels.wait_until(name, state, timeout=300)
- Wait until a specific state has been reached on a node
salt.cloud.clouds.proxmox
Proxmox Cloud Module
New in version 2014.7.0.
The Proxmox cloud module is used to control access to cloud providers using the Proxmox system (KVM / OpenVZ).
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/proxmox.conf:
my-proxmox-config:
    # Proxmox account information
    user: myuser@pam or myuser@pve
    password: mypassword
    url: hypervisor.domain.tld
    provider: proxmox
    verify_ssl: True
- maintainer
- Frank Klaassen <frank [at] cloudright.nl>
- maturity
- new
- depends
- requests >= 2.2.1
- depends
- IPy >= 0.81
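A cloud profile is needed alongside the provider configuration above before the create() call shown below can be used. A hedged sketch; the image, technology, host, and ip_address values are illustrative and must match what your Proxmox node actually provides:
proxmox-ubuntu:
    provider: my-proxmox-config
    image: local:vztmpl/ubuntu-12.04-standard_12.04-1_amd64.tar.gz
    technology: openvz
    host: myvmhost
    ip_address: 192.168.100.155
    password: topsecret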
- salt.cloud.clouds.proxmox.avail_images(call=None, location='local')
-
Return a list of the images that are on the provider
CLI Example:
salt-cloud --list-images my-proxmox-config
- salt.cloud.clouds.proxmox.avail_locations(call=None)
-
Return a list of the hypervisors (nodes) which this Proxmox PVE machine manages
CLI Example:
salt-cloud --list-locations my-proxmox-config
- salt.cloud.clouds.proxmox.create(vm_)
-
Create a single VM from a data dict
CLI Example:
salt-cloud -p proxmox-ubuntu vmhostname
- salt.cloud.clouds.proxmox.create_node(vm_)
- Build and submit the request data to create a new node
- salt.cloud.clouds.proxmox.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.proxmox.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.proxmox.get_resources_nodes(call=None, resFilter=None)
-
Retrieve all hypervisors (nodes) available on this environment
CLI Example:
salt-cloud -f get_resources_nodes my-proxmox-config
- salt.cloud.clouds.proxmox.get_resources_vms(call=None, resFilter=None, includeConfig=True)
-
Retrieve all VMs available on this environment
CLI Example:
salt-cloud -f get_resources_vms my-proxmox-config
- salt.cloud.clouds.proxmox.get_vm_status(vmid=None, name=None)
- Get the status for a VM, either via the ID or the hostname
- salt.cloud.clouds.proxmox.get_vmconfig(vmid, node=None, node_type='openvz')
- Get VM configuration
- salt.cloud.clouds.proxmox.list_nodes(call=None)
-
Return a list of the VMs that are managed by the provider
CLI Example:
salt-cloud -Q my-proxmox-config
- salt.cloud.clouds.proxmox.list_nodes_full(call=None)
-
Return a list of the VMs that are on the provider
CLI Example:
salt-cloud -F my-proxmox-config
- salt.cloud.clouds.proxmox.list_nodes_select(call=None)
-
Return a list of the VMs that are on the provider, with select fields
CLI Example:
salt-cloud -S my-proxmox-config
- salt.cloud.clouds.proxmox.query(conn_type, option, post_data=None)
- Execute the HTTP request to the API
- salt.cloud.clouds.proxmox.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.proxmox.set_vm_status(status, name=None, vmid=None)
- Convenience function for setting VM status
- salt.cloud.clouds.proxmox.show_instance(name, call=None)
- Show the details from Proxmox concerning an instance
- salt.cloud.clouds.proxmox.shutdown(name=None, vmid=None, call=None)
-
Shutdown a node via ACPI.
CLI Example:
salt-cloud -a shutdown mymachine
- salt.cloud.clouds.proxmox.start(name, vmid=None, call=None)
-
Start a node.
CLI Example:
salt-cloud -a start mymachine
- salt.cloud.clouds.proxmox.stop(name, vmid=None, call=None)
-
Stop a node ("pulling the plug").
CLI Example:
salt-cloud -a stop mymachine
- salt.cloud.clouds.proxmox.wait_for_created(upid, timeout=300)
- Wait until the VM has been created successfully
- salt.cloud.clouds.proxmox.wait_for_state(vmid, state, timeout=300)
- Wait until a specific state has been reached on a node
salt.cloud.clouds.pyrax
Pyrax Cloud Module
PLEASE NOTE: This module is currently in early development, and considered to be experimental and unstable. It is not recommended for production use. Unless you are actively developing code in this module, you should use the OpenStack module instead.
- salt.cloud.clouds.pyrax.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.pyrax.get_conn(conn_type)
- Return a conn object for the passed VM data
- salt.cloud.clouds.pyrax.queues_create(call, kwargs)
- salt.cloud.clouds.pyrax.queues_delete(call, kwargs)
- salt.cloud.clouds.pyrax.queues_exists(call, kwargs)
- salt.cloud.clouds.pyrax.queues_show(call, kwargs)
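The queues_* functions are provider functions and would be invoked with salt-cloud -f. A hedged sketch, assuming the queue name is passed as a name= keyword argument:
salt-cloud -f queues_create my-pyrax-config name=myqueue
salt-cloud -f queues_exists my-pyrax-config name=myqueue
salt-cloud -f queues_delete my-pyrax-config name=myqueue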
salt.cloud.clouds.rackspace
Rackspace Cloud Module
The Rackspace cloud module. This module uses the preferred means to set up a libcloud based cloud module and should be used as the general template for setting up additional libcloud based modules.
- depends
-
libcloud >= 0.13.2
Please note that the rackspace driver is only intended for 1st gen instances, aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but will not work with OpenStack-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the openstack driver instead.
The rackspace cloud module interfaces with the Rackspace public cloud service and requires two configuration parameters to be set: user and apikey.
Set up the cloud configuration at /etc/salt/cloud.providers or /etc/salt/cloud.providers.d/rackspace.conf:
my-rackspace-config:
    provider: rackspace
    # The Rackspace login user
    user: fred
    # The Rackspace user's apikey
    apikey: 901d3f579h23c8v73q9
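To actually spin up a first generation instance, pair the provider configuration with a cloud profile. A minimal sketch; the size and image names are illustrative and should be taken from salt-cloud --list-sizes and --list-images output:
rackspace-512:
    provider: my-rackspace-config
    size: 512MB Standard Instance
    image: Ubuntu 12.04 LTS (Precise Pangolin)
A VM would then be created with salt-cloud -p rackspace-512 myinstance.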
- salt.cloud.clouds.rackspace.avail_images(conn=None, call=None)
- Return a dict of all available VM images on the cloud provider with relevant data
- salt.cloud.clouds.rackspace.avail_locations(conn=None, call=None)
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.rackspace.avail_sizes(conn=None, call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data
- salt.cloud.clouds.rackspace.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.rackspace.destroy(name, conn=None, call=None)
- Delete a single VM
- salt.cloud.clouds.rackspace.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.rackspace.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.rackspace.get_image(conn, vm_)
- Return the image object to use
- salt.cloud.clouds.rackspace.get_salt_interface(vm_)
- Return the salt_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
- salt.cloud.clouds.rackspace.get_size(conn, vm_)
- Return the VM's size object
- salt.cloud.clouds.rackspace.list_nodes(conn=None, call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.rackspace.list_nodes_full(conn=None, call=None)
- Return a list of the VMs that are on the provider, with all fields
- salt.cloud.clouds.rackspace.list_nodes_select(conn=None, call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.rackspace.preferred_ip(vm_, ips)
- Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
- salt.cloud.clouds.rackspace.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.rackspace.show_instance(name, call=None)
- Show the details from the provider concerning an instance
- salt.cloud.clouds.rackspace.ssh_interface(vm_)
- Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
salt.cloud.clouds.saltify
Saltify Module
The Saltify module is designed to install Salt on a remote machine, virtual or bare metal, using SSH. This module is useful for provisioning machines which are already installed, but not Salted.
Use of this module requires some configuration in the cloud profile and provider files as described in the Getting Started with Saltify documentation.
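As a hedged sketch of that configuration, a Saltify provider block and a profile pointing at an existing host might look like the following; the host, user, and key path are illustrative:
my-saltify-config:
    provider: saltify

salt-this-machine:
    ssh_host: 12.34.56.78
    ssh_username: root
    key_filename: '/etc/salt/mysshkey.pem'
    provider: my-saltify-config
Running salt-cloud -p salt-this-machine my-machine would then install and configure a minion on the existing host.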
- salt.cloud.clouds.saltify.create(vm_)
- Provision a single machine
- salt.cloud.clouds.saltify.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.saltify.list_nodes()
- Because this module is not specific to any cloud providers, there will be no nodes to list.
- salt.cloud.clouds.saltify.list_nodes_full()
- Because this module is not specific to any cloud providers, there will be no nodes to list.
- salt.cloud.clouds.saltify.list_nodes_select()
- Because this module is not specific to any cloud providers, there will be no nodes to list.
salt.cloud.clouds.softlayer
SoftLayer Cloud Module
The SoftLayer cloud module is used to control access to the SoftLayer VPS system.
Use of this module only requires the apikey parameter. Set up the cloud configuration at:
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/softlayer.conf:
my-softlayer-config:
    # SoftLayer account api key
    user: MYLOGIN
    apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv
    provider: softlayer
The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: https://pypi.python.org/pypi/SoftLayer
- depends
- softlayer
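Provisioning also requires a cloud profile. A hedged sketch with illustrative values; the parameter names follow the SoftLayer getting started guide and should be confirmed against your Salt version. Available images, locations, and sizes can be listed with the avail_* functions below:
base_softlayer_ubuntu:
    provider: my-softlayer-config
    image: UBUNTU_LATEST
    cpu_number: 1
    ram: 1024
    disk_size: 100
    local_disk: True
    hourly_billing: True
    domain: example.com
    location: sjc01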
- salt.cloud.clouds.softlayer.avail_images(call=None)
- Return a dict of all available VM images on the cloud provider.
- salt.cloud.clouds.softlayer.avail_locations(call=None)
- List all available locations
- salt.cloud.clouds.softlayer.avail_sizes(call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data. This data is provided in three dicts.
- salt.cloud.clouds.softlayer.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.softlayer.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.softlayer.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.softlayer.get_conn(service='SoftLayer_Virtual_Guest')
- Return a conn object for the passed VM data
- salt.cloud.clouds.softlayer.get_location(vm_=None)
- Return the location to use, in this order:
- •
- CLI parameter
- •
- VM parameter
- •
- Cloud profile setting
- salt.cloud.clouds.softlayer.list_custom_images(call=None)
- Return a dict of all custom VM images on the cloud provider.
- salt.cloud.clouds.softlayer.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.softlayer.list_nodes_full(mask='mask[id]', call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.softlayer.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.softlayer.list_vlans(call=None)
- List all VLANs associated with the account
- salt.cloud.clouds.softlayer.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.softlayer.show_instance(name, call=None)
- Show the details from SoftLayer concerning a guest
salt.cloud.clouds.softlayer_hw
SoftLayer HW Cloud Module
The SoftLayer HW cloud module is used to control access to the SoftLayer hardware cloud system.
Use of this module only requires the apikey parameter. Set up the cloud configuration at:
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/softlayer.conf:
my-softlayer-config:
    # SoftLayer account api key
    user: MYLOGIN
    apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv
    provider: softlayer_hw
The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: https://pypi.python.org/pypi/SoftLayer
- depends
- softlayer
- salt.cloud.clouds.softlayer_hw.avail_images(call=None)
- Return a dict of all available VM images on the cloud provider.
- salt.cloud.clouds.softlayer_hw.avail_locations(call=None)
- List all available locations
- salt.cloud.clouds.softlayer_hw.avail_sizes(call=None)
- Return a dict of all available VM sizes on the cloud provider with relevant data. This data is provided in three dicts.
- salt.cloud.clouds.softlayer_hw.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.softlayer_hw.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.softlayer_hw.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.softlayer_hw.get_conn(service='SoftLayer_Hardware')
- Return a conn object for the passed VM data
- salt.cloud.clouds.softlayer_hw.get_location(vm_=None)
- Return the location to use, in this order:
- •
- CLI parameter
- •
- VM parameter
- •
- Cloud profile setting
- salt.cloud.clouds.softlayer_hw.list_nodes(call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.softlayer_hw.list_nodes_full(mask='mask[id, hostname, primaryIpAddress, primaryBackendIpAddress, processorPhysicalCoreAmount, memoryCount]', call=None)
- Return a list of the VMs that are on the provider
- salt.cloud.clouds.softlayer_hw.list_nodes_select(call=None)
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.softlayer_hw.list_vlans(call=None)
- List all VLANs associated with the account
- salt.cloud.clouds.softlayer_hw.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.softlayer_hw.show_instance(name, call=None)
- Show the details from SoftLayer concerning a guest
salt.cloud.clouds.vsphere
vSphere Cloud Module
NOTE: Deprecated since version Carbon: The vsphere cloud driver has been deprecated in favor of the vmware cloud driver and will be removed in Salt Carbon. Please refer to Getting started with VMware to get started and convert your vsphere provider configurations to use the vmware driver.
The vSphere cloud module is used to control access to VMWare vSphere.
- depends
- •
-
PySphere Python module >= 0.1.8
Note: Ensure the Python pysphere module is installed by running the following one-liner check. The output should be 0.
python -c "import pysphere" ; echo $? # if this fails install using pip install https://pysphere.googlecode.com/files/pysphere-0.1.8.zip
Use of this module only requires a URL, username and password. Set up the cloud configuration at:
/etc/salt/cloud.providers or /etc/salt/cloud.providers.d/vsphere.conf:
my-vsphere-config:
    provider: vsphere
    user: myuser
    password: verybadpass
    template_user: root
    template_password: mybadVMpassword
    url: 'https://10.1.1.1:443'
Note: Your URL may or may not look like any of the following, depending on how your VMWare installation is configured:
10.1.1.1
10.1.1.1:443
https://10.1.1.1:443
https://10.1.1.1:443/sdk
10.1.1.1:443/sdk
- folder
- Name of the folder that will contain the new VM. If not set, the VM will be added to the folder the original VM belongs to.
- resourcepool
- MOR of the resourcepool to be used for the new VM. If not set, the resource pool of the original VM is used.
- datastore
- MOR of the datastore where the virtual machine should be located. If not specified, the current datastore is used.
- host
- MOR of the host where the virtual machine should be registered.
- If not specified:
- •
- if resourcepool is not specified, current host is used.
- •
- if resourcepool is specified, and the target pool represents a stand-alone host, the host is used.
- •
- if resourcepool is specified, and the target pool represents a DRS-enabled cluster, a host selected by DRS is used.
- •
- if resourcepool is specified and the target pool represents a cluster without DRS enabled, an InvalidArgument exception will be thrown.
- template
- Specifies whether or not the new virtual machine should be marked as a template. Default is False.
- template_user
- Specifies the user to access the VM. Should be
- template_password
- The password with which to access the VM.
- sudo
-
The user to access the VM with sudo privileges.
New in version 2015.5.2.
- sudo_password
-
The password corresponding to the sudo user to access the VM with sudo privileges.
New in version 2015.5.2.
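The clone options above are set in a cloud profile together with the template (image) to clone from. A minimal hedged sketch; every value is illustrative, and the MOR identifiers must match objects in your own vCenter inventory:
vsphere-centos:
    provider: my-vsphere-config
    image: centos-base-template
    folder: salt-cloud-vms
    resourcepool: resgroup-28
    datastore: datastore-15
    host: host-9
    template: False
    sudo: myappuser
    sudo_password: mysudopass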
- salt.cloud.clouds.vsphere.avail_images()
- Return a dict of all available VM images on the cloud provider.
- salt.cloud.clouds.vsphere.avail_locations()
- Return a dict of all available VM locations on the cloud provider with relevant data
- salt.cloud.clouds.vsphere.create(vm_)
- Create a single VM from a data dict
- salt.cloud.clouds.vsphere.destroy(name, call=None)
-
Destroy a node.
CLI Example:
salt-cloud --destroy mymachine
- salt.cloud.clouds.vsphere.get_configured_provider()
- Return the first configured instance.
- salt.cloud.clouds.vsphere.get_conn()
- Return a conn object for the passed VM data
- salt.cloud.clouds.vsphere.list_clusters(kwargs=None, call=None)
- List the clusters for this VMware environment
- salt.cloud.clouds.vsphere.list_datacenters(kwargs=None, call=None)
- List the data centers for this VMware environment
- salt.cloud.clouds.vsphere.list_datastores(kwargs=None, call=None)
- List the datastores for this VMware environment
- salt.cloud.clouds.vsphere.list_folders(kwargs=None, call=None)
- List the folders for this VMWare environment
- salt.cloud.clouds.vsphere.list_hosts(kwargs=None, call=None)
- List the hosts for this VMware environment
- salt.cloud.clouds.vsphere.list_nodes(kwargs=None, call=None)
- Return a list of the VMs that are on the provider, with basic fields
- salt.cloud.clouds.vsphere.list_nodes_full(kwargs=None, call=None)
- Return a list of the VMs that are on the provider with full details
- salt.cloud.clouds.vsphere.list_nodes_min(kwargs=None, call=None)
- Return a list of the nodes in the provider, with no details
- salt.cloud.clouds.vsphere.list_nodes_select()
- Return a list of the VMs that are on the provider, with select fields
- salt.cloud.clouds.vsphere.list_resourcepools(kwargs=None, call=None)
- List the resource pools for this VMware environment
- salt.cloud.clouds.vsphere.script(vm_)
- Return the script deployment object
- salt.cloud.clouds.vsphere.show_instance(name, call=None)
- Show the details from vSphere concerning a guest
- salt.cloud.clouds.vsphere.wait_for_ip(vm_)
Configuration file examples
- •
- Example master configuration file
- •
- Example minion configuration file
Example master configuration file
##### Primary configuration settings ##### ########################################## # This configuration file is used to manage the behavior of the Salt Master. # Values that are commented out but have an empty line after the comment are # defaults that do not need to be set in the config. If there is no blank line # after the comment then the value is presented as an example and is not the # default. # Per default, the master will automatically include all config files # from master.d/*.conf (master.d is a directory in the same directory # as the main master config file). #default_include: master.d/*.conf # The address of the interface to bind to: #interface: 0.0.0.0 # Whether the master should listen for IPv6 connections. If this is set to True, # the interface option must be adjusted, too. (For example: "interface: '::'") #ipv6: False # The tcp port used by the publisher: #publish_port: 4505 # The user under which the salt master will run. Salt will update all # permissions to allow the specified user to run the master. The exception is # the job cache, which must be deleted if this user is changed. If the # modified files cause conflicts, set verify_env to False. #user: root # Max open files # # Each minion connecting to the master uses AT LEAST one file descriptor, the # master subscription connection. If enough minions connect you might start # seeing on the console (and then salt-master crashes): # Too many open files (tcp_listener.cpp:335) # Aborted (core dumped) # # By default this value will be the one of `ulimit -Hn`, ie, the hard limit for # max open files. # # If you wish to set a different value than the default one, uncomment and # configure this setting. Remember that this value CANNOT be higher than the # hard limit. Raising the hard limit depends on your OS and/or distribution, # a good way to find the limit is to search the internet. For example: # raise max open files hard limit debian # #max_open_files: 100000 # The number of worker threads to start. These threads are used to manage # return calls made from minions to the master. If the master seems to be # running slowly, increase the number of threads. This setting can not be # set lower than 3. #worker_threads: 5 # The port used by the communication interface. The ret (return) port is the # interface used for the file server, authentication, job returns, etc. #ret_port: 4506 # Specify the location of the daemon process ID file: #pidfile: /var/run/salt-master.pid # The root directory prepended to these options: pki_dir, cachedir, # sock_dir, log_file, autosign_file, autoreject_file, extension_modules, # key_logfile, pidfile: #root_dir: / # Directory used to store public key data: #pki_dir: /etc/salt/pki/master # Directory to store job and cache data: #cachedir: /var/cache/salt/master # Directory for custom modules. This directory can contain subdirectories for # each of Salt's module types such as "runners", "output", "wheel", "modules", # "states", "returners", etc. #extension_modules: <no default> # Directory for custom modules. This directory can contain subdirectories for # each of Salt's module types such as "runners", "output", "wheel", "modules", # "states", "returners", etc. 
# Like 'extension_modules' but can take an array of paths #module_dirs: <no default> # - /var/cache/salt/minion/extmods # Verify and set permissions on configuration directories at startup: #verify_env: True # Set the number of hours to keep old job information in the job cache: #keep_jobs: 24 # Set the default timeout for the salt command and api. The default is 5 # seconds. #timeout: 5 # The loop_interval option controls the seconds for the master's maintenance # process check cycle. This process updates file server backends, cleans the # job cache and executes the scheduler. #loop_interval: 60 # Set the default outputter used by the salt command. The default is "nested". #output: nested # Return minions that timeout when running commands like test.ping #show_timeout: True # By default, output is colored. To disable colored output, set the color value # to False. #color: True # Do not strip off the colored output from nested results and state outputs # (true by default). # strip_colors: False # Set the directory used to hold unix sockets: #sock_dir: /var/run/salt/master # The master can take a while to start up when lspci and/or dmidecode is used # to populate the grains for the master. Enable if you want to see GPU hardware # data for your master. # enable_gpu_grains: False # The master maintains a job cache. While this is a great addition, it can be # a burden on the master for larger deployments (over 5000 minions). # Disabling the job cache will make previously executed jobs unavailable to # the jobs system and is not generally recommended. #job_cache: True # Cache minion grains and pillar data in the cachedir. #minion_data_cache: True # Store all returns in the given returner. # Setting this option requires that any returner-specific configuration also # be set. See various returners in salt/returners for details on required # configuration values. (See also, event_return_queue below.) # #event_return: mysql # On busy systems, enabling event_returns can cause a considerable load on # the storage system for returners. Events can be queued on the master and # stored in a batched fashion using a single transaction for multiple events. # By default, events are not queued. #event_return_queue: 0 # Only events returns matching tags in a whitelist # event_return_whitelist: # - salt/master/a_tag # - salt/master/another_tag # Store all event returns _except_ the tags in a blacklist # event_return_blacklist: # - salt/master/not_this_tag # - salt/master/or_this_one # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # master event bus. The value is expressed in bytes. #max_event_size: 1048576 # By default, the master AES key rotates every 24 hours. The next command # following a key rotation will trigger a key refresh from the minion which may # result in minions which do not respond to the first command after a key refresh. # # To tell the master to ping all minions immediately after an AES key refresh, set # ping_on_rotate to True. This should mitigate the issue where a minion does not # appear to initially respond after a key is rotated. # # Note that ping_on_rotate may cause high load on the master immediately after # the key rotation event as minions reconnect. Consider this carefully if this # salt master is managing a large number of minions. # # If disabled, it is recommended to handle this event by listening for the # 'aes_key_rotate' event with the 'key' tag and acting appropriately. 
# ping_on_rotate: False # By default, the master deletes its cache of minion data when the key for that # minion is removed. To preserve the cache after key deletion, set # 'preserve_minion_cache' to True. # # WARNING: This may have security implications if compromised minions auth with # a previous deleted minion ID. #preserve_minion_cache: False # If max_minions is used in large installations, the master might experience # high-load situations because of having to check the number of connected # minions for every authentication. This cache provides the minion-ids of # all connected minions to all MWorker-processes and greatly improves the # performance of max_minions. # con_cache: False # The master can include configuration from other files. To enable this, # pass a list of paths to this option. The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main master configuration file lives in (this file). Paths can make use # of shell-style globbing. If no files are matched by a path passed to this # option, then the master will log a warning message. # # Include a config file from some other path: # include: /etc/salt/extra_config # # Include config from several files and directories: # include: # - /etc/salt/extra_config ##### Security settings ##### ########################################## # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # Enable auto_accept, this setting will automatically accept all incoming # public keys from the minions. Note that this is insecure. #auto_accept: False # Time in minutes that a incoming public key with a matching name found in # pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys # are removed when the master checks the minion_autosign directory. # 0 equals no timeout # autosign_timeout: 120 # If the autosign_file is specified, incoming keys specified in the # autosign_file will be automatically accepted. This is insecure. Regular # expressions as well as globing lines are supported. #autosign_file: /etc/salt/autosign.conf # Works like autosign_file, but instead allows you to specify minion IDs for # which keys will automatically be rejected. Will override both membership in # the autosign_file and the auto_accept setting. #autoreject_file: /etc/salt/autoreject.conf # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you've given access to. This is potentially quite insecure. If an autosign_file # is specified, enabling permissive_pki_access will allow group access to that # specific file. #permissive_pki_access: False # Allow users on the master access to execute specific commands on minions. # This setting should be treated with care since it opens up execution # capabilities to non root users. By default this capability is completely # disabled. #client_acl: # larry: # - test.ping # - network.* # # Blacklist any of the following users or modules # # This example would blacklist all non sudo users, including root from # running any commands. It would also blacklist any use of the "cmd" # module. This is completely disabled by default. 
# #client_acl_blacklist: # users: # - root # - '^(?!sudo_).*$' # all non sudo users # modules: # - cmd # Enforce client_acl & client_acl_blacklist when users have sudo # access to the salt command. # #sudo_acl: False # The external auth system uses the Salt auth modules to authenticate and # validate users to access areas of the Salt system. #external_auth: # pam: # fred: # - test.* # # Time (in seconds) for a newly generated token to live. Default: 12 hours #token_expire: 43200 # Allow minions to push files to the master. This is disabled by default, for # security purposes. #file_recv: False # Set a hard-limit on the size of the files that can be pushed to the master. # It will be interpreted as megabytes. Default: 100 #file_recv_max_size: 100 # Signature verification on messages published from the master. # This causes the master to cryptographically sign all messages published to its event # bus, and minions then verify that signature before acting on the message. # # This is False by default. # # Note that to facilitate interoperability with masters and minions that are different # versions, if sign_pub_messages is True but a message is received by a minion with # no signature, it will still be accepted, and a warning message will be logged. # Conversely, if sign_pub_messages is False, but a minion receives a signed # message it will be accepted, the signature will not be checked, and a warning message # will be logged. This behavior went away in Salt 2014.1.0 and these two situations # will cause minion to throw an exception and drop the message. # sign_pub_messages: False ##### Salt-SSH Configuration ##### ########################################## # Pass in an alternative location for the salt-ssh roster file #roster_file: /etc/salt/roster # Pass in minion option overrides that will be inserted into the SHIM for # salt-ssh calls. The local minion config is not used for salt-ssh. Can be # overridden on a per-minion basis in the roster (`minion_opts`) #ssh_minion_opts: # gpg_keydir: /root/gpg ##### Master Module Management ##### ########################################## # Manage how master side modules are loaded. # Add any additional locations to look for master runners: #runner_dirs: [] # Enable Cython for master side modules: #cython_enable: False ##### State System settings ##### ########################################## # The state system uses a "top" file to tell the minions what environment to # use and what modules to use. The state_top file is defined relative to the # root of the base environment as defined in "File Server settings" below. #state_top: top.sls # The master_tops option replaces the external_nodes option by creating # a plugable system for the generation of external top data. The external_nodes # option is deprecated by the master_tops option. # # To gain the capabilities of the classic external_nodes system, use the # following configuration: # master_tops: # ext_nodes: <Shell command which returns yaml> # #master_tops: {} # The external_nodes option allows Salt to gather data that would normally be # placed in a top file. The external_nodes option is the executable that will # return the ENC data. Remember that Salt will look for external nodes AND top # files and combine the results if both are enabled! 
#external_nodes: None # The renderer to use on the minions to render the state data #renderer: yaml_jinja # The Jinja renderer can strip extra carriage returns and whitespace # See http://jinja.pocoo.org/docs/api/#high-level-api # # If this is set to True the first newline after a Jinja block is removed # (block, not variable tag!). Defaults to False, corresponds to the Jinja # environment init variable "trim_blocks". #jinja_trim_blocks: False # # If this is set to True leading spaces and tabs are stripped from the start # of a line to a block. Defaults to False, corresponds to the Jinja # environment init variable "lstrip_blocks". #jinja_lstrip_blocks: False # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution, defaults to False #failhard: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False, when set to False # all data that has a result of True and no changes will be suppressed. #state_verbose: True # The state_output setting changes if the output is the full multi line # output for each changed state if set to 'full', but if set to 'terse' # the output will be shortened to a single line. If set to 'mixed', the output # will be terse unless a state failed, in which case that output will be full. # If set to 'changes', the output will be full unless the state didn't change. #state_output: full # Automatically aggregate all states that have support for mod_aggregate by # setting to 'True'. Or pass a list of state module names to automatically # aggregate just those types. # # state_aggregate: # - pkg # #state_aggregate: False # Send progress events as each function in a state run completes execution # by setting to 'True'. Progress events are in the format # 'salt/job/<JID>/prog/<MID>/<RUN NUM>'. #state_events: False ##### File Server settings ##### ########################################## # Salt runs a lightweight file server written in zeromq to deliver files to # minions. This file server is built into the master daemon and does not # require a dedicated port. # The file server works on environments passed to the master, each environment # can have multiple root directories, the subdirectories in the multiple file # roots cannot match, otherwise the downloaded files will not be able to be # reliably ensured. A base environment is required to house the top file. # Example: # file_roots: # base: # - /srv/salt/ # dev: # - /srv/salt/dev/services # - /srv/salt/dev/states # prod: # - /srv/salt/prod/services # - /srv/salt/prod/states # #file_roots: # base: # - /srv/salt # The hash_type is the hash to use when discovering the hash of a file on # the master server. The default is md5, but sha1, sha224, sha256, sha384 # and sha512 are also supported. # # Prior to changing this value, the master should be stopped and all Salt # caches should be cleared. #hash_type: md5 # The buffer size in the file server can be adjusted here: #file_buffer_size: 1048576 # A regular expression (or a list of expressions) that will be matched # against the file path before syncing the modules and states to the minions. # This includes files affected by the file.recurse state. # For example, if you manage your custom modules and states in subversion # and don't want all the '.svn' folders and content synced to your minions, # you could set this to '/\.svn($|/)'. 
By default nothing is ignored. #file_ignore_regex: # - '/\.svn($|/)' # - '/\.git($|/)' # A file glob (or list of file globs) that will be matched against the file # path before syncing the modules and states to the minions. This is similar # to file_ignore_regex above, but works on globs instead of regex. By default # nothing is ignored. # file_ignore_glob: # - '*.pyc' # - '*/somefolder/*.bak' # - '*.swp' # File Server Backend # # Salt supports a modular fileserver backend system, this system allows # the salt master to link directly to third party systems to gather and # manage the files available to minions. Multiple backends can be # configured and will be searched for the requested file in the order in which # they are defined here. The default setting only enables the standard backend # "roots" which uses the "file_roots" option. #fileserver_backend: # - roots # # To use multiple backends list them in the order they are searched: #fileserver_backend: # - git # - roots # # Uncomment the line below if you do not want the file_server to follow # symlinks when walking the filesystem tree. This is set to True # by default. Currently this only applies to the default roots # fileserver_backend. #fileserver_followsymlinks: False # # Uncomment the line below if you do not want symlinks to be # treated as the files they are pointing to. By default this is set to # False. By uncommenting the line below, any detected symlink while listing # files on the Master will not be returned to the Minion. #fileserver_ignoresymlinks: True # # By default, the Salt fileserver recurses fully into all defined environments # to attempt to find files. To limit this behavior so that the fileserver only # traverses directories with SLS files and special Salt directories like _modules, # enable the option below. This might be useful for installations where a file root # has a very large number of files and performance is impacted. Default is False. # fileserver_limit_traversal: False # # The fileserver can fire events off every time the fileserver is updated, # these are disabled by default, but can be easily turned on by setting this # flag to True #fileserver_events: False # Git File Server Backend Configuration # # Gitfs can be provided by one of two python modules: GitPython or pygit2. If # using pygit2, both libgit2 and git must also be installed. #gitfs_provider: gitpython # # When using the git fileserver backend at least one git remote needs to be # defined. The user running the salt master will need read access to the repo. # # The repos will be searched in order to find the file requested by a client # and the first repo to have the file will return it. # When using the git backend branches and tags are translated into salt # environments. # Note: file:// repos will be treated as a remote, so refs you want used must # exist in that repo as *local* refs. #gitfs_remotes: # - git://github.com/saltstack/salt-states.git # - file:///var/git/saltmaster # # The gitfs_ssl_verify option specifies whether to ignore ssl certificate # errors when contacting the gitfs backend. You might want to set this to # false if you're using a git backend that uses a self-signed certificate but # keep in mind that setting this flag to anything other than the default of True # is a security concern, you may want to try using the ssh transport. #gitfs_ssl_verify: True # # The gitfs_root option gives the ability to serve files from a subdirectory # within the repository. 
The path is defined relative to the root of the # repository and defaults to the repository root. #gitfs_root: somefolder/otherfolder # # ##### Pillar settings ##### ########################################## # Salt Pillars allow for the building of global data that can be made selectively # available to different minions based on minion grain filtering. The Salt # Pillar is laid out in the same fashion as the file server, with environments, # a top file and sls files. However, pillar data does not need to be in the # highstate format, and is generally just key/value pairs. #pillar_roots: # base: # - /srv/pillar # #ext_pillar: # - hiera: /etc/hiera.yaml # - cmd_yaml: cat /etc/salt/yaml # The ext_pillar_first option allows for external pillar sources to populate # before file system pillar. This allows for targeting file system pillar from # ext_pillar. #ext_pillar_first: False # The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate # errors when contacting the pillar gitfs backend. You might want to set this to # false if you're using a git backend that uses a self-signed certificate but # keep in mind that setting this flag to anything other than the default of True # is a security concern, you may want to try using the ssh transport. #pillar_gitfs_ssl_verify: True # The pillar_opts option adds the master configuration file data to a dict in # the pillar called "master". This is used to set simple configurations in the # master config file that can then be used on minions. #pillar_opts: False # The pillar_safe_render_error option prevents the master from passing pillar # render errors to the minion. This is set on by default because the error could # contain templating data which would give that minion information it shouldn't # have, like a password! When set true the error message will only show: # Rendering SLS 'my.sls' failed. Please see master log for details. #pillar_safe_render_error: True # The pillar_source_merging_strategy option allows you to configure merging strategy # between different sources. It accepts four values: recurse, aggregate, overwrite, # or smart. Recurse will merge recursively mapping of data. Aggregate instructs # aggregation of elements between sources that use the #!yamlex renderer. Overwrite # will verwrite elements according the order in which they are processed. This is # behavior of the 2014.1 branch and earlier. Smart guesses the best strategy based # on the "renderer" setting and is the default value. #pillar_source_merging_strategy: smart ##### Syndic settings ##### ########################################## # The Salt syndic is used to pass commands through a master from a higher # master. Using the syndic is simple, if this is a master that will have # syndic servers(s) below it set the "order_masters" setting to True, if this # is a master that will be running a syndic daemon for passthrough the # "syndic_master" setting needs to be set to the location of the master server # to receive commands from. # Set the order_masters setting to True if this master will command lower # masters' syndic interfaces. #order_masters: False # If this master will be running a salt syndic daemon, syndic_master tells # this master where to receive commands from. 
#syndic_master: masterofmaster # This is the 'ret_port' of the MasterOfMaster: #syndic_master_port: 4506 # PID file of the syndic daemon: #syndic_pidfile: /var/run/salt-syndic.pid # LOG file of the syndic daemon: #syndic_log_file: syndic.log ##### Peer Publish settings ##### ########################################## # Salt minions can send commands to other minions, but only if the minion is # allowed to. By default "Peer Publication" is disabled, and when enabled it # is enabled for specific minions and specific commands. This allows secure # compartmentalization of commands based on individual minions. # The configuration uses regular expressions to match minions and then a list # of regular expressions to match functions. The following will allow the # minion authenticated as foo.example.com to execute functions from the test # and pkg modules. #peer: # foo.example.com: # - test.* # - pkg.* # # This will allow all minions to execute all commands: #peer: # .*: # - .* # # This is not recommended, since it would allow anyone who gets root on any # single minion to instantly have root on all of the minions! # Minions can also be allowed to execute runners from the salt master. # Since executing a runner from the minion could be considered a security risk, # it needs to be enabled. This setting functions just like the peer setting # except that it opens up runners instead of module functions. # # All peer runner support is turned off by default and must be enabled before # using. This will enable all peer runners for all minions: #peer_run: # .*: # - .* # # To enable just the manage.up runner for the minion foo.example.com: #peer_run: # foo.example.com: # - manage.up # # ##### Mine settings ##### ########################################## # Restrict mine.get access from minions. By default any minion has a full access # to get all mine data from master cache. In acl definion below, only pcre matches # are allowed. # mine_get: # .*: # - .* # # The example below enables minion foo.example.com to get 'network.interfaces' mine # data only, minions web* to get all network.* and disk.* mine data and all other # minions won't get any mine data. # mine_get: # foo.example.com: # - network.interfaces # web.*: # - network.* # - disk.* ##### Logging settings ##### ########################################## # The location of the master log file # The master log can be sent to a regular file, local path name, or network # location. Remote logging works best when configured to use rsyslogd(8) (e.g.: # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility> #log_file: /var/log/salt/master #log_file: file:///dev/log #log_file: udp://loghost:10514 #log_file: /var/log/salt/master #key_logfile: /var/log/salt/key # The level of messages to send to the console. # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'. # # The following log levels are considered INSECURE and may log sensitive data: # ['garbage', 'trace', 'debug'] # #log_level: warning # The level of messages to send to the log file. # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'. # If using 'log_granular_levels' this must be set to the highest desired level. #log_level_logfile: warning # The date and time format used in log messages. 
Allowed date/time formating # can be seen here: http://docs.python.org/library/time.html#time.strftime #log_datefmt: '%H:%M:%S' #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' # The format of the console logging messages. Allowed formatting options can # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes #log_fmt_console: '[%(levelname)-8s] %(message)s' #log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s' # This can be used to control logging levels more specificically. This # example sets the main salt library at the 'warning' level, but sets # 'salt.modules' to log at the 'debug' level: # log_granular_levels: # 'salt': 'warning' # 'salt.modules': 'debug' # #log_granular_levels: {} ##### Node Groups ##### ########################################## # Node groups allow for logical groupings of minion nodes. A group consists of a group # name and a compound target. #nodegroups: # group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com' # group2: 'G@os:Debian and foo.domain.com' ##### Range Cluster settings ##### ########################################## # The range server (and optional port) that serves your cluster information # https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec # #range_server: range:80 ##### Windows Software Repo settings ##### ############################################## # Location of the repo on the master: #win_repo: '/srv/salt/win/repo' # # Location of the master's repo cache file: #win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p' # # List of git repositories to include with the local repo: #win_gitrepos: # - 'https://github.com/saltstack/salt-winrepo.git' ##### Returner settings ###### ############################################ # Which returner(s) will be used for minion's result: #return: mysql
Example minion configuration file
##### Primary configuration settings ##### ########################################## # This configuration file is used to manage the behavior of the Salt Minion. # With the exception of the location of the Salt Master Server, values that are # commented out but have an empty line after the comment are defaults that need # not be set in the config. If there is no blank line after the comment, the # value is presented as an example and is not the default. # Per default the minion will automatically include all config files # from minion.d/*.conf (minion.d is a directory in the same directory # as the main minion config file). #default_include: minion.d/*.conf # Set the location of the salt master server. If the master server cannot be # resolved, then the minion will fail to start. #master: salt # If multiple masters are specified in the 'master' setting, the default behavior # is to always try to connect to them in the order they are listed. If random_master is # set to True, the order will be randomized instead. This can be helpful in distributing # the load of many minions executing salt-call requests, for example, from a cron job. # If only one master is listed, this setting is ignored and a warning will be logged. # NOTE: If master_type is set to failover, use master_shuffle instead. #random_master: False # Use if master_type is set to failover. #master_shuffle: False # Minions can connect to multiple masters simultaneously (all masters # are "hot"), or can be configured to failover if a master becomes # unavailable. Multiple hot masters are configured by setting this # value to "str". Failover masters can be requested by setting # to "failover". MAKE SURE TO SET master_alive_interval if you are # using failover. # master_type: str # Poll interval in seconds for checking if the master is still there. Only # respected if master_type above is "failover". To disable the interval entirely, # set the value to -1. (This may be necessary on machines which have high numbers # of TCP connections, such as load balancers.) # master_alive_interval: 30 # Set whether the minion should connect to the master via IPv6: #ipv6: False # Set the number of seconds to wait before attempting to resolve # the master hostname if name resolution fails. Defaults to 30 seconds. # Set to zero if the minion should shutdown and not retry. # retry_dns: 30 # Set the port used by the master reply and authentication server. #master_port: 4506 # The user to run salt. #user: root # Setting sudo_user will cause salt to run all execution modules under an sudo # to the user given in sudo_user. The user under which the salt minion process # itself runs will still be that provided in the user config above, but all # execution modules run by the minion will be rerouted through sudo. #sudo_user: saltdev # Specify the location of the daemon process ID file. #pidfile: /var/run/salt-minion.pid # The root directory prepended to these options: pki_dir, cachedir, log_file, # sock_dir, pidfile. #root_dir: / # The directory to store the pki information in #pki_dir: /etc/salt/pki/minion # Explicitly declare the id for this minion to use, if left commented the id # will be the hostname as returned by the python call: socket.getfqdn() # Since salt uses detached ids it is possible to run multiple minions on the # same machine but with different ids, this can be useful for salt compute # clusters. #id: # Append a domain to a hostname in the event that it does not exist. 
This is # useful for systems where socket.getfqdn() does not actually result in a # FQDN (for instance, Solaris). #append_domain: # Custom static grains for this minion can be specified here and used in SLS # files just like all other grains. This example sets 4 custom grains, with # the 'roles' grain having two values that can be matched against. #grains: # roles: # - webserver # - memcache # deployment: datacenter4 # cabinet: 13 # cab_u: 14-15 # # Where cache data goes. #cachedir: /var/cache/salt/minion # Verify and set permissions on configuration directories at startup. #verify_env: True # The minion can locally cache the return data from jobs sent to it, this # can be a good way to keep track of jobs the minion has executed # (on the minion side). By default this feature is disabled, to enable, set # cache_jobs to True. #cache_jobs: False # Set the directory used to hold unix sockets. #sock_dir: /var/run/salt/minion # Set the default outputter used by the salt-call command. The default is # "nested". #output: nested # # By default output is colored. To disable colored output, set the color value # to False. #color: True # Do not strip off the colored output from nested results and state outputs # (true by default). # strip_colors: False # Backup files that are replaced by file.managed and file.recurse under # 'cachedir'/file_backups relative to their original location and appended # with a timestamp. The only valid setting is "minion". Disabled by default. # # Alternatively this can be specified for each file in state files: # /etc/ssh/sshd_config: # file.managed: # - source: salt://ssh/sshd_config # - backup: minion # #backup_mode: minion # When waiting for a master to accept the minion's public key, salt will # continuously attempt to reconnect until successful. This is the time, in # seconds, between those reconnection attempts. #acceptance_wait_time: 10 # If this is nonzero, the time between reconnection attempts will increase by # acceptance_wait_time seconds per iteration, up to this maximum. If this is # set to zero, the time between reconnection attempts will stay constant. #acceptance_wait_time_max: 0 # If the master rejects the minion's public key, retry instead of exiting. # Rejected keys will be handled the same as waiting on acceptance. #rejected_retry: False # When the master key changes, the minion will try to re-auth itself to receive # the new master key. In larger environments this can cause a SYN flood on the # master because all minions try to re-auth immediately. To prevent this and # have a minion wait for a random amount of time, use this optional parameter. # The wait-time will be a random number of seconds between 0 and the defined value. #random_reauth_delay: 60 # When waiting for a master to accept the minion's public key, salt will # continuously attempt to reconnect until successful. This is the timeout value, # in seconds, for each individual attempt. After this timeout expires, the minion # will wait for acceptance_wait_time seconds before trying again. Unless your master # is under unusually heavy load, this should be left at the default. #auth_timeout: 60 # Number of consecutive SaltReqTimeoutError that are acceptable when trying to # authenticate. #auth_tries: 7 # If authentication fails due to SaltReqTimeoutError during a ping_interval, # cause sub minion process to restart. #auth_safemode: False # Ping Master to ensure connection is alive (minutes). 
#ping_interval: 0 # To auto recover minions if master changes IP address (DDNS) # auth_tries: 10 # auth_safemode: False # ping_interval: 90 # # Minions won't know master is missing until a ping fails. After the ping fail, # the minion will attempt authentication and likely fails out and cause a restart. # When the minion restarts it will resolve the masters IP and attempt to reconnect. # If you don't have any problems with syn-floods, don't bother with the # three recon_* settings described below, just leave the defaults! # # The ZeroMQ pull-socket that binds to the masters publishing interface tries # to reconnect immediately, if the socket is disconnected (for example if # the master processes are restarted). In large setups this will have all # minions reconnect immediately which might flood the master (the ZeroMQ-default # is usually a 100ms delay). To prevent this, these three recon_* settings # can be used. # recon_default: the interval in milliseconds that the socket should wait before # trying to reconnect to the master (1000ms = 1 second) # # recon_max: the maximum time a socket should wait. each interval the time to wait # is calculated by doubling the previous time. if recon_max is reached, # it starts again at recon_default. Short example: # # reconnect 1: the socket will wait 'recon_default' milliseconds # reconnect 2: 'recon_default' * 2 # reconnect 3: ('recon_default' * 2) * 2 # reconnect 4: value from previous interval * 2 # reconnect 5: value from previous interval * 2 # reconnect x: if value >= recon_max, it starts again with recon_default # # recon_randomize: generate a random wait time on minion start. The wait time will # be a random value between recon_default and recon_default + # recon_max. Having all minions reconnect with the same recon_default # and recon_max value kind of defeats the purpose of being able to # change these settings. If all minions have the same values and your # setup is quite large (several thousand minions), they will still # flood the master. The desired behavior is to have timeframe within # all minions try to reconnect. # # Example on how to use these settings. The goal: have all minions reconnect within a # 60 second timeframe on a disconnect. # recon_default: 1000 # recon_max: 59000 # recon_randomize: True # # Each minion will have a randomized reconnect value between 'recon_default' # and 'recon_default + recon_max', which in this example means between 1000ms # 60000ms (or between 1 and 60 seconds). The generated random-value will be # doubled after each attempt to reconnect. Lets say the generated random # value is 11 seconds (or 11000ms). # reconnect 1: wait 11 seconds # reconnect 2: wait 22 seconds # reconnect 3: wait 33 seconds # reconnect 4: wait 44 seconds # reconnect 5: wait 55 seconds # reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max) # reconnect 7: wait 11 seconds # reconnect 8: wait 22 seconds # reconnect 9: wait 33 seconds # reconnect x: etc. # # In a setup with ~6000 thousand hosts these settings would average the reconnects # to about 100 per second and all hosts would be reconnected within 60 seconds. # recon_default: 100 # recon_max: 5000 # recon_randomize: False # # # The loop_interval sets how long in seconds the minion will wait between # evaluating the scheduler and running cleanup tasks. 
This defaults to a # sane 60 seconds, but if the minion scheduler needs to be evaluated more # often lower this value #loop_interval: 60 # The grains_refresh_every setting allows for a minion to periodically check # its grains to see if they have changed and, if so, to inform the master # of the new grains. This operation is moderately expensive, therefore # care should be taken not to set this value too low. # # Note: This value is expressed in __minutes__! # # A value of 10 minutes is a reasonable default. # # If the value is set to zero, this check is disabled. #grains_refresh_every: 1 # Cache grains on the minion. Default is False. #grains_cache: False # Grains cache expiration, in seconds. If the cache file is older than this # number of seconds then the grains cache will be dumped and fully re-populated # with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache' # is not enabled. # grains_cache_expiration: 300 # Windows platforms lack posix IPC and must rely on slower TCP based inter- # process communications. Set ipc_mode to 'tcp' on such systems #ipc_mode: ipc # Overwrite the default tcp ports used by the minion when in tcp mode #tcp_pub_port: 4510 #tcp_pull_port: 4511 # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # minion event bus. The value is expressed in bytes. #max_event_size: 1048576 # To detect failed master(s) and fire events on connect/disconnect, set # master_alive_interval to the number of seconds to poll the masters for # connection events. # #master_alive_interval: 30 # The minion can include configuration from other files. To enable this, # pass a list of paths to this option. The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main minion configuration file lives in (this file). Paths can make use # of shell-style globbing. If no files are matched by a path passed to this # option then the minion will log a warning message. # # Include a config file from some other path: # include: /etc/salt/extra_config # # Include config from several files and directories: #include: # - /etc/salt/extra_config # - /etc/roles/webserver # # # ##### Minion module management ##### ########################################## # Disable specific modules. This allows the admin to limit the level of # access the master has to the minion. #disable_modules: [cmd,test] #disable_returners: [] # # Modules can be loaded from arbitrary paths. This enables the easy deployment # of third party modules. Modules for returners and minions can be loaded. # Specify a list of extra directories to search for minion modules and # returners. These paths must be fully qualified! #module_dirs: [] #returner_dirs: [] #states_dirs: [] #render_dirs: [] #utils_dirs: [] # # A module provider can be statically overwritten or extended for the minion # via the providers option, in this case the default module will be # overwritten by the specified module. In this example the pkg module will # be provided by the yumpkg5 module instead of the system default. #providers: # pkg: yumpkg5 # # Enable Cython modules searching and loading. (Default: False) #cython_enable: False # # Specify a max size (in bytes) for modules on import. This feature is currently # only supported on *nix operating systems and requires psutil. 
# modules_max_memory: -1 ##### State Management Settings ##### ########################################### # The state management system executes all of the state templates on the minion # to enable more granular control of system state management. The type of # template and serialization used for state management needs to be configured # on the minion, the default renderer is yaml_jinja. This is a yaml file # rendered from a jinja template, the available options are: # yaml_jinja # yaml_mako # yaml_wempy # json_jinja # json_mako # json_wempy # #renderer: yaml_jinja # # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution. Defaults to False. #failhard: False # # autoload_dynamic_modules turns on automatic loading of modules found in the # environments on the master. This is turned on by default. To turn of # autoloading modules when states run, set this value to False. #autoload_dynamic_modules: True # # clean_dynamic_modules keeps the dynamic modules on the minion in sync with # the dynamic modules on the master, this means that if a dynamic module is # not on the master it will be deleted from the minion. By default, this is # enabled and can be disabled by changing this value to False. #clean_dynamic_modules: True # # Normally, the minion is not isolated to any single environment on the master # when running states, but the environment can be isolated on the minion side # by statically setting it. Remember that the recommended way to manage # environments is to isolate via the top file. #environment: None # # If using the local file directory, then the state top file name needs to be # defined, by default this is top.sls. #state_top: top.sls # # Run states when the minion daemon starts. To enable, set startup_states to: # 'highstate' -- Execute state.highstate # 'sls' -- Read in the sls_list option and execute the named sls files # 'top' -- Read top_file option and execute based on that file on the Master #startup_states: '' # # List of states to run when the minion starts up if startup_states is 'sls': #sls_list: # - edit.vim # - hyper # # Top file to execute if startup_states is 'top': #top_file: '' # Automatically aggregate all states that have support for mod_aggregate by # setting to True. Or pass a list of state module names to automatically # aggregate just those types. # # state_aggregate: # - pkg # #state_aggregate: False ##### File Directory Settings ##### ########################################## # The Salt Minion can redirect all file server operations to a local directory, # this allows for the same state tree that is on the master to be used if # copied completely onto the minion. This is a literal copy of the settings on # the master but used to reference a local directory on the minion. # Set the file client. The client defaults to looking on the master server for # files, but can be directed to look at the local file directory setting # defined below by setting it to "local". Setting a local file_client runs the # minion in masterless mode. #file_client: remote # The file directory works on environments passed to the minion, each environment # can have multiple root directories, the subdirectories in the multiple file # roots cannot match, otherwise the downloaded files will not be able to be # reliably ensured. A base environment is required to house the top file. 
# Example: # file_roots: # base: # - /srv/salt/ # dev: # - /srv/salt/dev/services # - /srv/salt/dev/states # prod: # - /srv/salt/prod/services # - /srv/salt/prod/states # #file_roots: # base: # - /srv/salt # By default, the Salt fileserver recurses fully into all defined environments # to attempt to find files. To limit this behavior so that the fileserver only # traverses directories with SLS files and special Salt directories like _modules, # enable the option below. This might be useful for installations where a file root # has a very large number of files and performance is negatively impacted. Default # is False. #fileserver_limit_traversal: False # The hash_type is the hash to use when discovering the hash of a file in # the local fileserver. The default is md5, but sha1, sha224, sha256, sha384 # and sha512 are also supported. # # Warning: Prior to changing this value, the minion should be stopped and all # Salt caches should be cleared. #hash_type: md5 # The Salt pillar is searched for locally if file_client is set to local. If # this is the case, and pillar data is defined, then the pillar_roots need to # also be configured on the minion: #pillar_roots: # base: # - /srv/pillar # # ###### Security settings ##### ########################################### # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you've given access to. This is potentially quite insecure. #permissive_pki_access: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False, when set to False # all data that has a result of True and no changes will be suppressed. #state_verbose: True # The state_output setting changes if the output is the full multi line # output for each changed state if set to 'full', but if set to 'terse' # the output will be shortened to a single line. #state_output: full # The state_output_diff setting changes whether or not the output from # successful states is returned. Useful when even the terse output of these # states is cluttering the logs. Set it to True to ignore them. #state_output_diff: False # The state_output_profile setting changes whether profile information # will be shown for each state run. #state_output_profile: True # Fingerprint of the master public key to validate the identity of your Salt master # before the initial key exchange. The master fingerprint can be found by running # "salt-key -F master" on the Salt master. #master_finger: '' ###### Thread settings ##### ########################################### # Disable multiprocessing support, by default when a minion receives a # publication a new process is spawned and the command is executed therein. #multiprocessing: True ##### Logging settings ##### ########################################## # The location of the minion log file # The minion log can be sent to a regular file, local path name, or network # location. 
Remote logging works best when configured to use rsyslogd(8) (e.g.: # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility> #log_file: /var/log/salt/minion #log_file: file:///dev/log #log_file: udp://loghost:10514 # #log_file: /var/log/salt/minion #key_logfile: /var/log/salt/key # The level of messages to send to the console. # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'. # # The following log levels are considered INSECURE and may log sensitive data: # ['garbage', 'trace', 'debug'] # # Default: 'warning' #log_level: warning # The level of messages to send to the log file. # One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'. # If using 'log_granular_levels' this must be set to the highest desired level. # Default: 'warning' #log_level_logfile: # The date and time format used in log messages. Allowed date/time formating # can be seen here: http://docs.python.org/library/time.html#time.strftime #log_datefmt: '%H:%M:%S' #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' # The format of the console logging messages. Allowed formatting options can # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes #log_fmt_console: '[%(levelname)-8s] %(message)s' #log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s' # This can be used to control logging levels more specificically. This # example sets the main salt library at the 'warning' level, but sets # 'salt.modules' to log at the 'debug' level: # log_granular_levels: # 'salt': 'warning' # 'salt.modules': 'debug' # #log_granular_levels: {} # To diagnose issues with minions disconnecting or missing returns, ZeroMQ # supports the use of monitor sockets # to log connection events. This # feature requires ZeroMQ 4.0 or higher. # # To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a # debug level or higher. # # A sample log event is as follows: # # [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512, # 'value': 27, 'description': 'EVENT_DISCONNECTED'} # # All events logged will include the string 'ZeroMQ event'. A connection event # should be logged on the as the minion starts up and initially connects to the # master. If not, check for debug log level and that the necessary version of # ZeroMQ is installed. # #zmq_monitor: False ###### Module configuration ##### ########################################### # Salt allows for modules to be passed arbitrary configuration data, any data # passed here in valid yaml format will be passed on to the salt minion modules # for use. It is STRONGLY recommended that a naming convention be used in which # the module name is followed by a . and then the value. Also, all top level # data must be applied via the yaml dict construct, some examples: # # You can specify that all modules should run in test mode: #test: True # # A simple value for the test module: #test.foo: foo # # A list for the test module: #test.bar: [baz,quo] # # A dict for the test module: #test.baz: {spam: sausage, cheese: bread} # # ###### Update settings ###### ########################################### # Using the features in Esky, a salt minion can both run as a frozen app and # be updated on the fly. These options control how the update process # (saltutil.update()) behaves. # # The url for finding and downloading updates. Disabled by default. 
#update_url: False # # The list of services to restart after a successful update. Empty by default. #update_restart_services: [] ###### Keepalive settings ###### ############################################ # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by # the OS. If connections between the minion and the master pass through # a state tracking device such as a firewall or VPN gateway, there is # the risk that it could tear down the connection the master and minion # without informing either party that their connection has been taken away. # Enabling TCP Keepalives prevents this from happening. # Overall state of TCP Keepalives, enable (1 or True), disable (0 or False) # or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled. #tcp_keepalive: True # How long before the first keepalive should be sent in seconds. Default 300 # to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds # on Linux see /proc/sys/net/ipv4/tcp_keepalive_time. #tcp_keepalive_idle: 300 # How many lost probes are needed to consider the connection lost. Default -1 # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes. #tcp_keepalive_cnt: -1 # How often, in seconds, to send keepalives after the first one. Default -1 to # use OS defaults, typically 75 seconds on Linux, see # /proc/sys/net/ipv4/tcp_keepalive_intvl. #tcp_keepalive_intvl: -1 ###### Windows Software settings ###### ############################################ # Location of the repository cache file on the master: #win_repo_cachefile: 'salt://win/repo/winrepo.p' ###### Returner settings ###### ############################################ # Which returner(s) will be used for minion's result: #return: mysql
Configuring Salt
Salt configuration is very simple. The default configuration for the master will work for most installations and the only requirement for setting up a minion is to set the location of the master in the minion configuration file.
The configuration files will be installed to /etc/salt and are named after the respective components, /etc/salt/master, and /etc/salt/minion.
Master Configuration
By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /etc/salt/master, as follows:
- #interface: 0.0.0.0
+ interface: 10.0.0.1
After updating the configuration file, restart the Salt master. See the master configuration reference for more details about other configurable options.
Minion Configuration
Although there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed.
If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically /etc/salt/minion, as follows:
- #master: salt
+ master: 10.0.0.1
After updating the configuration file, restart the Salt minion. See the minion configuration reference for more details about other configurable options.
Running Salt
- 1.
-
Start the master in the foreground (to daemonize the process, pass the -d flag):
salt-master
- 2.
-
Start the minion in the foreground (to daemonize the process, pass the -d flag):
salt-minion
- Having trouble?
-
The simplest way to troubleshoot Salt is to run the master and minion in the foreground with log level set to debug:
salt-master --log-level=debug
For information on salt's logging system please see the logging document.
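The minion can be started in the foreground the same way:
salt-minion --log-level=debug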
- Run as an unprivileged (non-root) user
-
To run Salt as another user, set the user parameter in the master config file.
Additionally, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable); see the sketch after this list:
- •
- /etc/salt
- •
- /var/cache/salt
- •
- /var/log/salt
- •
-
/var/run/salt
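If, for example, the master were configured with user: salt, ownership could be adjusted with something like the following (a sketch only; the user, group, and exact paths depend on your installation):
chown -R salt:salt /etc/salt /var/cache/salt /var/log/salt /var/run/salt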
More information about running salt as a non-privileged user can be found here.
There is also a full troubleshooting guide available.
Key Identity
Salt provides commands to validate the identity of your Salt master and Salt minions before the initial key exchange. Validating key identity helps avoid inadvertently connecting to the wrong Salt master, and helps prevent a potential man-in-the-middle (MITM) attack when establishing the initial connection.
Master Key Fingerprint
Print the master key fingerprint by running the following command on the Salt master:
salt-key -F master
Copy the master.pub fingerprint from the Local Keys section, and then set this value as the master_finger in the minion configuration file. Save the configuration file and then restart the Salt minion.
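The resulting entry in the minion configuration file looks like the following, where the value shown is only a placeholder for the fingerprint copied from the Local Keys section:
master_finger: 'xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx'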
Minion Key Fingerprint
Run the following command on each Salt minion to view the minion key fingerprint:
salt-call --local key.finger
Compare this value to the value that is displayed when you run the salt-key --finger <MINION_ID> command on the Salt master.
Key Management
Salt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys.
Before commands can be sent to a Minion, its key must be accepted on the Master. Run the salt-key command to list the keys known to the Salt Master:
[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:
This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the salt-key command:
[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta
The salt-key command allows for signing keys individually or in bulk. The example above, using -A, bulk-accepts all pending keys. To accept keys individually, use the lowercase of the same option, -a keyname.
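For example, to accept only the key for the minion named alpha from the earlier listing:
salt-key -a alpha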
Sending Commands
Communication between the Master and a Minion may be verified by running the test.ping command:
[root@master ~]# salt alpha test.ping
alpha: True
Communication between the Master and all Minions may be tested in a similar way:
[root@master ~]# salt '*' test.ping
alpha: True
bravo: True
charlie: True
delta: True
Each of the Minions should send a True response as shown above.
What's Next?
Understanding targeting is important. From there, depending on the way you wish to use Salt, you should also proceed to learn about States and Execution Modules.
Configuring the Salt Master
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file: the salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
SEE ALSO: example master configuration file
The configuration file for the salt-master is located at /etc/salt/master by default. A notable exception is FreeBSD, where the configuration file is located at /usr/local/etc/salt. The available options are as follows:
Primary Master Configuration
interface
Default: 0.0.0.0 (all interfaces)
The local interface to bind to.
interface: 192.168.0.1
ipv6
Default: False
Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: "interface: '::'")
ipv6: True
publish_port
Default: 4505
The network port to set up the publication interface.
publish_port: 4505
master_id
Default: None
The id to be passed in the publish job to minions. This is used for MultiSyndics to return the job to the requesting master.
NOTE: This must be the same string as the syndic is configured with.
master_id: MasterOfMaster
user
Default: root
The user to run the Salt processes
user: root
max_open_files
Default: 100000
Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect, you might start seeing the following on the console (and then salt-master crashes):
Too many open files (tcp_listener.cpp:335)
Aborted (core dumped)
max_open_files: 100000
By default this value will be the value of ulimit -Hn, i.e., the hard limit for max open files.
To set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on the OS and/or distribution; a good way to find out how is to search the internet for something like this:
raise max open files hard limit debian
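On many Linux distributions the hard limit can be raised in /etc/security/limits.conf; the following is only a sketch (for masters managed by systemd, LimitNOFILE= in the unit file may be needed instead):
# /etc/security/limits.conf -- raise limits for the user running salt-master
root  soft  nofile  100000
root  hard  nofile  100000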
worker_threads
Default: 5
The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value.
Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise.
NOTE: When the master daemon starts, it is expected behaviour to see multiple salt-master processes, even if 'worker_threads' is set to '1'. At a minimum, a controlling process will start along with a Publisher, an EventPublisher, and a number of MWorker processes will be started. The number of MWorker processes is tuneable by the 'worker_threads' configuration value while the others are not.
worker_threads: 5
ret_port
Default: 4506
The port used by the return server, this is the server used by Salt to receive execution returns and command executions.
ret_port: 4506
pidfile
Default: /var/run/salt-master.pid
Specify the location of the master pidfile.
pidfile: /var/run/salt-master.pid
root_dir
Default: /
The system root directory to operate from, change this to make Salt run from an alternative root.
root_dir: /
NOTE: This directory is prepended to the following options: pki_dir, cachedir, sock_dir, log_file, autosign_file, autoreject_file, pidfile.
pki_dir
Default: /etc/salt/pki
The directory to store the pki authentication keys.
pki_dir: /etc/salt/pki
extension_modules
Directory for custom modules. This directory can contain subdirectories for each of Salt's module types such as "runners", "output", "wheel", "modules", "states", "returners", etc. This path is appended to root_dir.
extension_modules: srv/modules
module_dirs
Default: []
Like extension_modules, but a list of extra directories to search for Salt modules.
module_dirs:
  - /var/cache/salt/minion/extmods
cachedir
Default: /var/cache/salt
The location used to store cache information, particularly the job information for executed salt commands.
cachedir: /var/cache/salt
verify_env
Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
keep_jobs
Default: 24
Set the number of hours to keep old job information.
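For example, to keep job data for two days instead of the default one day (an illustrative value):
keep_jobs: 48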
timeout
Default: 5
Set the default timeout for the salt command and api.
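For example, to give slow or distant minions more time to return (an illustrative value):
timeout: 20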
loop_interval
Default: 60
The loop_interval option controls the seconds for the master's maintenance process check cycle. This process updates file server backends, cleans the job cache and executes the scheduler.
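For example, to run the maintenance cycle twice as often as the default (an illustrative value):
loop_interval: 30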
output
Default: nested
Set the default outputter used by the salt command.
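For example, to switch the default outputter to the JSON outputter that ships with Salt:
output: json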
color
Default: True
By default output is colored, to disable colored output set the color value to False.
color: False
sock_dir
Default: /var/run/salt/master
Set the location to use for creating Unix sockets for master process communication.
sock_dir: /var/run/salt/master
enable_gpu_grains
Default: True
Enable GPU hardware data for your master. Be aware that the master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master.
job_cache
Default: True
The master maintains a job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system, or that a tmpfs is mounted to the jobs dir.
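If that trade-off is acceptable on a very large deployment, the cache can be turned off:
job_cache: False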
minion_data_cache
Default: True
The minion data cache is a cache of information about the minions stored on the master, this information is primarily the pillar and grains data. The data is cached in the Master cachedir under the name of the minion and used to predetermine what minions are expected to reply from executions.
minion_data_cache: True
ext_job_cache
Default: ''
Used to specify a default returner for all minions, when this option is set the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.
ext_job_cache: redis
event_return
New in version 2015.5.0.
Default: ''
Specify the returner to use to log events. A returner may have installation and configuration requirements. Read the returner's documentation.
NOTE: Not all returners support event returns. Verify that a returner has an event_return() function before configuring this option with a returner.
event_return: cassandra_cql
master_job_cache
New in version 2014.7.0.
Default: 'local_cache'
Specify the returner to use for the job cache. The job cache will only be interacted with from the salt master and therefore does not need to be accessible from the minions.
master_job_cache: redis
enforce_mine_cache
Default: False
By default, disabling the minion_data_cache will cause the mine to stop working, since the mine relies on cached data. Enabling this option keeps the cache enabled for the mine system only.
enforce_mine_cache: False
max_minions
Default: 0
The number of minions the master should allow to connect. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of 0 means unlimited connections. Please note, that this can slow down the authentication process a bit in large setups.
max_minions: 100
con_cache
Default: False
If max_minions is used in large installations, the master might experience high-load situations because of having to check the number of connected minions for every authentication. This cache provides the minion-ids of all connected minions to all MWorker-processes and greatly improves the performance of max_minions.
con_cache: True
presence_events
Default: False
Causes the master to periodically look for actively connected minions. Presence events are fired on the event bus on a regular interval with a list of connected minions, as well as events with lists of newly connected or disconnected minions. This is a master-only operation that does not send executions to minions. Note, this does not detect minions that connect to a master via localhost.
presence_events: False
Salt-SSH Configuration
roster_file
Default: '/etc/salt/roster'
Pass in an alternative location for the salt-ssh roster file.
roster_file: /root/roster
ssh_minion_opts
Default: None
Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls. The local minion config is not used for salt-ssh. Can be overridden on a per-minion basis in the roster (minion_opts)
ssh_minion_opts:
  gpg_keydir: /root/gpg
Master Security Settings
open_mode
Default: False
Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication. This will clean up the pki keys received from the minions. Open mode should not be turned on for general use. Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to True.
open_mode: False
auto_accept
Default: False
Enable auto_accept. This setting will automatically accept all incoming public keys from minions.
auto_accept: False
autosign_timeout
New in version 2014.7.0.
Default: 120
Time in minutes that an incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method to auto accept minions can be safer than an autosign_file because the keyid record can expire and is limited to being an exact name match. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.
autosign_file
Default: not defined
If the autosign_file is specified incoming keys specified in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full-string regex matching. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id.
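For example (the path is only illustrative; the file itself lists one minion ID, glob, or regex per line):
autosign_file: /etc/salt/autosign.conf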
autoreject_file
New in version 2014.1.0.
Default: not defined
Works like autosign_file, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the autosign_file and the auto_accept setting.
client_acl
Default: {}
Enable user accounts on the master to execute specific modules. These modules can be expressed as regular expressions.
client_acl:
  fred:
    - test.ping
    - pkg.*
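With an ACL like this in place, the system user fred could then run the permitted functions from the master, for example:
salt '*' test.ping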
client_acl_blacklist
Default: {}
Blacklist users or modules
This example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the "cmd" module.
This is completely disabled by default.
client_acl_blacklist:
  users:
    - root
    - '^(?!sudo_).*$'   # all non sudo users
  modules:
    - cmd
external_auth
Default: {}
The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system.
external_auth:
  pam:
    fred:
      - test.*
token_expire
Default: 43200 seconds (12 hours)
Time (in seconds) for a newly generated token to live.
token_expire: 43200
file_recv
Default: False
Allow minions to push files to the master. This is disabled by default, for security purposes.
file_recv: False
master_sign_pubkey
Default: False
Sign the master auth-replies with a cryptographic signature of the master's public key. Please see the Multimaster-PKI with Failover Tutorial for how to use these settings.
master_sign_pubkey: True
master_sign_key_name
Default: master_sign
The customizable name of the signing-key-pair without suffix.
master_sign_key_name: <filename_without_suffix>
master_pubkey_signature
Default: master_pubkey_signature
The name of the file in the master's pki directory that holds the pre-calculated signature of the master's public key.
master_pubkey_signature: <filename>
master_use_pubkey_signature
Default: False
Instead of computing the signature for each auth-reply, use a pre-calculated signature. The master_pubkey_signature must also be set for this.
master_use_pubkey_signature: True
rotate_aes_key
Default: True
Rotate the salt master's AES key when a minion public key is deleted with salt-key. This is a very important security setting. Disabling it will allow deleted minions to continue listening in on the messages published by the salt master. Do not disable this unless it is absolutely clear what this does.
rotate_aes_key: True
Master Module Management
runner_dirs
Default: []
Set additional directories to search for runner modules.
cython_enable
Default: False
Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master.
cython_enable: False
Master State System Settings
state_top
Default: top.sls
The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment.
state_top: top.sls
master_tops
Default: {}
The master_tops option replaces the external_nodes option by creating a pluggable system for the generation of external top data. The external_nodes option is deprecated by the master_tops option. To gain the capabilities of the classic external_nodes system, use the following configuration:
master_tops:
  ext_nodes: <Shell command which returns yaml>
external_nodes
Default: None
The external_nodes option allows Salt to gather data that would normally be placed in a top file from an external node controller. The external_nodes option is the executable that will return the ENC data. Remember that Salt will look for external nodes AND top files and combine the results if both are enabled and available!
external_nodes: cobbler-ext-nodes
renderer
Default: yaml_jinja
The renderer to use on the minions to render the state data.
renderer: yaml_jinja
failhard
Default: False
Set the global failhard flag, this informs all states to stop running states at the moment a single state fails.
failhard: False
state_verbose
Default: True
Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states which either failed, or succeeded without making any changes to the minion.
state_verbose: False
state_output
Default: full
The state_output setting controls how the output for each changed state is displayed. If set to 'full', the full multi-line output is shown; if set to 'terse', the output is shortened to a single line. If set to 'mixed', the output will be terse unless a state failed, in which case that output will be full. If set to 'changes', the output will be full unless the state didn't change.
state_output: full
state_aggregate
Default: False
Automatically aggregate all states that have support for mod_aggregate by setting to True. Or pass a list of state module names to automatically aggregate just those types.
state_aggregate:
  - pkg
state_aggregate: True
state_events
Default: False
Send progress events as each function in a state run completes execution by setting to True. Progress events are in the format salt/job/<JID>/prog/<MID>/<RUN NUM>.
state_events: True
yaml_utf8
Default: False
Enable extra routines in the YAML renderer for states containing UTF characters.
yaml_utf8: False
test
Default: False
Set all state calls to only test if they are going to actually make changes or just post what changes are going to be made.
test: False
Master File Server Settings
fileserver_backend
Default: ['roots']
Salt supports a modular fileserver backend system, this system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured and will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend roots, which is configured using the file_roots option.
Example:
fileserver_backend:
  - roots
  - git
hash_type
Default: md5
The hash_type is the hash to use when discovering the hash of a file on the master server. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported.
hash_type: md5
file_buffer_size
Default: 1048576
The buffer size in the file server in bytes.
file_buffer_size: 1048576
file_ignore_regex
Default: ''
A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don't want all the '.svn' folders and content synced to your minions, you could set this to '/\.svn($|/)'. By default nothing is ignored.
file_ignore_regex:
  - '/\.svn($|/)'
  - '/\.git($|/)'
file_ignore_glob
Default: ''
A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored.
file_ignore_glob:
  - '*.pyc'
  - '*/somefolder/*.bak'
  - '*.swp'
roots: Master's Local File Server
file_roots
Default:
base:
  - /srv/salt
Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port.
The file server works on environments passed to the master. Each environment can have multiple root directories. The subdirectories in the multiple file roots cannot match, otherwise the downloaded files will not be able to be reliably ensured. A base environment is required to house the top file.
Example:
file_roots:
  base:
    - /srv/salt
  dev:
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:
    - /srv/salt/prod/services
    - /srv/salt/prod/states
git: Git Remote File Server Backend
gitfs_remotes
Default: []
When using the git fileserver backend at least one git remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and tags are translated into salt environments.
gitfs_remotes:
  - git://github.com/saltstack/salt-states.git
  - file:///var/git/saltmaster
NOTE: file:// repos will be treated as a remote and copied into the master's gitfs cache, so only the local refs for those repos will be exposed as fileserver environments.
As of 2014.7.0, it is possible to have per-repo versions of several of the gitfs configuration parameters. For more information, see the GitFS Walkthrough.
gitfs_provider
New in version 2014.7.0.
Specify the provider to be used for gitfs. More information can be found in the GitFS Walkthrough.
Specify one value among valid values: gitpython, pygit2, dulwich
gitfs_provider: dulwich
gitfs_ssl_verify
Default: True
The gitfs_ssl_verify option specifies whether to ignore SSL certificate errors when contacting the gitfs backend. You might want to set this to False if you're using a git backend that uses a self-signed certificate, but keep in mind that setting this flag to anything other than the default of True is a security concern; you may want to try using the SSH transport instead.
gitfs_ssl_verify: True
gitfs_mountpoint
New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which gitfs remotes are served. Can be used in conjunction with gitfs_root. Can also be configured on a per-remote basis, see here for more info.
gitfs_mountpoint: salt://foo/bar
NOTE: The salt:// protocol designation can be left off (in other words, foo/bar and salt://foo/bar are equivalent).
gitfs_root
Default: ''
Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with gitfs_mountpoint.
gitfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify gitfs roots on a per-remote basis was added. See here for more info.
gitfs_base
Default: master
Defines which branch/tag should be used as the base environment.
gitfs_base: salt
Changed in version 2014.7.0: Ability to specify the base on a per-remote basis was added. See here for more info.
gitfs_env_whitelist
New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_env_whitelist:
  - base
  - v1.*
  - 'mybranch\d+'
gitfs_env_blacklist
New in version 2014.7.0.
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_env_blacklist:
  - base
  - v1.*
  - 'mybranch\d+'
GitFS Authentication Options
These parameters only currently apply to the pygit2 gitfs provider. Examples of how to use these can be found in the GitFS Walkthrough.
gitfs_user
New in version 2014.7.0.
Default: ''
Along with gitfs_password, is used to authenticate to HTTPS remotes.
gitfs_user: git
gitfs_password
New in version 2014.7.0.
Default: ''
Along with gitfs_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication.
gitfs_password: mypassword
gitfs_insecure_auth
New in version 2014.7.0.
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.
gitfs_insecure_auth: True
gitfs_pubkey
New in version 2014.7.0.
Default: ''
Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.
gitfs_pubkey: /path/to/key.pub
gitfs_privkey
New in version 2014.7.0.
Default: ''
Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. This parameter (or its per-remote counterpart) is required for SSH remotes.
gitfs_privkey: /path/to/key
gitfs_passphrase
New in version 2014.7.0.
Default: ''
This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.
gitfs_passphrase: mypassphrase
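Putting the SSH options together, a pygit2-based configuration might look like the following sketch (the remote URL and key paths are placeholders):
gitfs_provider: pygit2
gitfs_remotes:
  - ssh://git@github.com/example/salt-states.git
gitfs_pubkey: /root/.ssh/id_rsa.pub
gitfs_privkey: /root/.ssh/id_rsa
gitfs_passphrase: mypassphrase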
hg: Mercurial Remote File Server Backend
hgfs_remotes
New in version 0.17.0.
Default: []
When using the hg fileserver backend at least one mercurial remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the hgfs_branch_method parameter.
hgfs_remotes:
  - https://username@bitbucket.org/username/reponame