Salt is a remote execution and configuration management system written in Python. It's designed to be heavily modular, and can be extended by simply dropping Python modules into the appropriate directories of the Salt state tree.


Salt is typically deployed in a client-server model, where the Salt Master hosts the configuration and distributes it to a number of Minions. In larger deployments, Salt Syndic acts as an intermediary between the Minions and the Master, enabling greater scale by shielding the Master from the load of all the Minions below it.

Other deployment models include:

  • salt-masterless, where the Minion operates without a Master (file_client: local).
  • salt-ssh, allowing an agent-less configuration where the Master performs actions over SSH.

Salt is built on two messaging channels used for all master-minion communication:

  • Publish (pub) is used by the Master to communicate a job payload to a minion.
  • Returner (req) is used by the Minions to fetch files and send back job returns.


The messaging channels can be configured to use alternative transports. The default is ZeroMQ, using ports 4505/TCP and 4506/TCP for the pub and req channels.
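The transport and ports can be adjusted in the Master configuration; a minimal sketch showing the defaults:

    # /etc/salt/master
    transport: zeromq
    publish_port: 4505  # pub channel
    ret_port: 4506      # req channel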


State and pillar data is requested by the Minions from the Master, which behaves as a fileserver. By default, the fileserver uses the roots module to serve files from directories local to the Master, but others allow serving from remote repositories and storage shares.
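With the roots backend, the directories to serve are declared under file_roots in the Master configuration, keyed by environment:

    # /etc/salt/master
    fileserver_backend:
      - roots
    file_roots:
      base:
        - /srv/salt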


Since there's a risk of secret exposure in both the files shipped to Minions and the responses shipped back, Salt encrypts traffic in both directions. Salt Masters have public and private keys, and they'll need a copy of the public keys for all accepted minions too. Salt Minions have public and private keys, and must have the public key of the Master stored.

Key acceptance can be automated using the Reactor, by pre-seeding keys or through salt-key on the Master.
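A typical manual acceptance flow with salt-key (the minion ID db1 is illustrative):

    salt-key -L      # list pending, accepted and rejected keys
    salt-key -F db1  # print the key's fingerprint for verification
    salt-key -a db1  # accept the named Minion's key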

Salt Cloud

Salt Cloud extends Salt's management to cloud computing providers like AWS and Azure.

Salt Thin

salt-thin is a transportless version of Salt that allows it to execute entirely standalone, and forms the basis of the platform's container orchestration tooling. It works by packaging the minion into a tarball which can be shipped to the destination system for execution.

Package management

Salt provides a Windows-specific package management system called winrepo that allows automated, unattended software installations similar to those available on Linux systems.

Operating system support

Operating system | Master supported? | Minion supported?


Repositories for most major Linux distributions are made available. Salt Bootstrap simplifies installation by determining the most appropriate packages for the system given your version preference.


The Salt Master is designed to be backwards-compatible with older versions of the Minion, except where doing so isn't possible due to breaking security fixes. Always update the Salt Master prior to updating the Minions.

As Minions require a restart in order to apply an update, be sure to use scheduled restarts with service.restart.

Deployment over salt-ssh

This installation method is easier for onboarding large numbers of minions where no image-based deployment option is available. On the master, configure /etc/salt/roster with an entry for each host:

    foo:
      host: 192.0.2.10  # example address; the ID matches the target used below
      user: admin
      passwd: secret
      sudo: True

Test connectivity and credentials:

salt-ssh foo test.ping

If the fingerprint looks good, accept the host key:

salt-ssh -i foo test.ping

Now go ahead with installation (assuming the state already exists):

salt-ssh foo state.apply salt.minion

State tree

A state tree contains the definitions of individual service or file states. Conventionally, a state tree named base exists alongside any number of application-specific trees. These should all be stored in /srv/salt.

top.sls defines the relationship between minions and the state tree, each entry selecting the appropriate nodes and listing states to apply to them. Each entry can match as many nodes as desired and as many states as desired, and a node can match multiple entries.
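A minimal top file, matching Minions by ID glob (the state names are illustrative):

    # /srv/salt/top.sls
    base:
      '*':
        - common
      'web*':
        - nginx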

State files can exist directly in the root of the state tree, or nested in a directory below. When using directories, init.sls will serve as the default state within that directory, and will run if an entry named after the directory appears in the top file. You can specify individual state files within the directory by adding entries in the form directory.state.
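For example, a state at /srv/salt/nginx/init.sls would run for any node whose top file entry lists nginx:

    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: nginx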

The files are written in YAML, and are passed through the Jinja template engine, allowing basic loops and execution of Salt modules. A few key variables are defined in the templates:

  • salt allows accessing execution modules (like __salt__ in Python).
  • pillar provides dictionary access into the Salt pillar. Use salt['pillar.get']() for more complex accesses, including setting a default value.

As the matrix of supported operating systems increases, variables should be extracted to a map.jinja file alongside the state files, and imported via Jinja, e.g. to abstract package names and versions.
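A map.jinja along these lines, built on grains.filter_by, keeps the per-OS details out of the states themselves (package names are illustrative):

    {# nginx/map.jinja #}
    {% set nginx = salt['grains.filter_by']({
        'Debian': {'pkg': 'nginx-full'},
        'RedHat': {'pkg': 'nginx'},
    }, default='Debian') %}

States then import it with {% from "nginx/map.jinja" import nginx with context %} and reference {{ nginx.pkg }}.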

Pillar tree

Pillar trees contain sensitive data. The structure of the /srv/pillar directory and the format of files within matches that of the state directory.


Salt determines which configuration to apply to and which actions to perform on which Minions based on targeting data. This includes:

  • Minion ID allows targeting by the Salt Minion's id, usually derived from its hostname. This is the default behaviour if no targeting switch is specified when using salt, and allows globs, e.g. wildcards (*), to target multiple Minions.
  • Regular expressions against the Minion ID can be specified with -E.
  • Comma-delimited lists of Minion IDs can be specified with -L.
  • Grains, automatically- or administrator-assigned key-value pairs describing properties of a system, can be used with -G, e.g. -G os:Debian. Common ones include:
    • os, containing the operating system, e.g. Debian or RedHat.
    • oscodename, e.g. buster.
    • osrelease, e.g. 10.
  • Node groups can be specified with -N.
  • Compound matches allow combining multiple different conditions, along with basic negation, with -C. For example, G@grain:value and N@nodegroup.
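Putting the switches together (targets are illustrative):

    salt '*' test.ping                            # glob on Minion ID
    salt -E 'web[0-9]+' test.ping                 # regular expression
    salt -L 'web1,web2' test.ping                 # list of IDs
    salt -G 'os:Debian' test.ping                 # grain
    salt -C 'G@os:Debian and not web1' test.ping  # compound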

Node groups are configured in the Master's nodegroups option, comprising a name and a compound target specification or a YAML list. Node groups can be nested.
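For example:

    # /etc/salt/master
    nodegroups:
      webservers: 'G@role:web'
      dbservers:
        - db1
        - db2
      all: 'N@webservers or N@dbservers'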


The mine collects data from a targeted set of minions for storage on the master, where it can then be queried via the mine execution module. This allows states to query information about the environment and be templated based on it.

The data to be collected is specified under the mine_functions key, usually in pillar data targeted at the minion via the top file:

    mine_functions:
      network.ip_addrs: []
      # The same mine function again, but stored under an alias:
      ip_addrs_eth1:
        mine_function: network.ip_addrs
        interface: eth1

Mine functions are executed every mine_interval (on the minion) minutes, but can be updated immediately using mine.update.
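A state template can then read the collected data back, e.g. to build a load balancer configuration (the backend* target is illustrative):

    {% for server, addrs in salt['mine.get']('backend*', 'network.ip_addrs') | dictsort %}
    server {{ server }} {{ addrs | first }};
    {% endfor %}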


Orchestrations are stored in /srv/salt/orch and allow coordinating execution of Salt modules. For instance, to handle applying states to a newly provisioned backend application server, update the mine data to include the new server, and update the frontend configuration:

    backend.apply-state:
      salt.function:
        - name: state.apply
        - tgt: "{{ pillar['backend'] }}"

    backend.update-mine:
      salt.function:
        - name: mine.update
        - tgt: "{{ pillar['backend'] }}"
        - require:
          - salt: backend.apply-state

    frontend.apply-state:
      salt.function:
        - name: state.apply
        - tgt: "{{ pillar['frontend'] }}"
        - tgt_type: grain  # 'role:frontend' matches on a grain
        - require:
          - salt: backend.update-mine

We can run from the master:

salt-run state.orch orch.sls pillar='{"backend":"backend42","frontend":"role:frontend"}'


Call module.function on Minions matching target with positional argument value1 and keyword argument arg2 set to value2:

salt target module.function value1 arg2=value2

Execute the same function on the local Minion:

salt-call module.function value1 arg2=value2

Execute the same function on the Master, without requiring that a Minion be installed:

salt-run module.function value1 arg2=value2

Copy a file to the targeted Minions. Not ideal for larger files, and the contents are logged:

salt-cp target sourcefile destfile


Virtual modules abstract away operating system-specific implementation details, allowing Salt to dynamically load the most appropriate implementation for a given platform.

After adding a new module to the state tree, ensure you call the appropriate saltutil.sync_* function to distribute it to the minions, or saltutil.sync_all to sync everything.

Execution modules

Execution modules (_modules/*.py) expose as many functions as desired, each of which performs an action and emits output.

Grains modules

Grains modules (_grains/*.py) extend Salt's targeting capabilities. The dictionaries returned by the public functions of modules in this directory are merged into the grains dictionary.

Returner modules

Returners (_returners/*.py in the state tree) contain a single returner(ret) function which can process and/or store Salt activity and result data. They might be used to raise tickets, store results in a database for later analysis, or send alerting emails to administrators when a critical state fails to apply.

Runner modules

Runner modules (_runners/*.py) are Master-only execution modules, typically used for processing results or managing remote command execution.

State modules

State modules (_states/*.py) expose functions for stateful resource management, typically used in the state tree. They should generally wrap an execution module, first checking whether any action is necessary to remediate drift from the desired state, then format a return value indicating what differences were fixed.
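The check-then-act pattern can be sketched as follows (the line.present state is hypothetical, and a real module would perform the file operations through a wrapped execution module via __salt__ rather than directly):

```python
# _states/line.py -- sketch of a state ensuring a line exists in a file.
import os

def present(name, line):
    """Ensure `line` appears in the file `name`, reporting what changed."""
    ret = {"name": name, "changes": {}, "result": True, "comment": ""}

    existing = ""
    if os.path.exists(name):
        with open(name) as f:
            existing = f.read()

    # Check phase: nothing to do if the desired state already holds.
    if line in existing.splitlines():
        ret["comment"] = "Line already present"
        return ret

    # Act phase: remediate the drift, then report the difference.
    with open(name, "a") as f:
        f.write(line + "\n")
    ret["changes"] = {"added": line}
    ret["comment"] = "Line appended"
    return ret
```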

Utility modules

Utility modules (_utils/*.py) are similar to execution modules, but designed for behaviours which are needed across other module types for reuse rather than being publicly exposed.