Salt States

Salt states. Without the spec sheet.

The state modules you'll use 90% of the time, the requisite system that ties them together, and four real-world patterns from production. One scrollable page.

v1.0 · Updated 2026-05-09

Anatomy of a state.

A Salt state is YAML describing a desired condition, not a sequence of actions. Salt looks at the current state of the box, compares it to what you wrote, and changes only what's not yet correct. Run it ten times, the box ends up in the same place every time. That's idempotency, and it's the whole game.

The shape

# every state has three parts:
<state ID>:                       # a unique label — your choice
  <module>.<function>:            # what to do (file.managed, pkg.installed, etc.)
    - arg_name: arg_value         # module arguments
    - arg_name: arg_value

A real one:

nginx_running:                     # state ID
  service.running:                 # module = service, function = running
    - name: nginx                  # the service name
    - enable: True                 # enable on boot

Always dry-run before you commit. Append test=True to any apply command and Salt prints what it would change. salt 'web*' state.apply nginx test=True. Catches typos and unintended diffs before the fleet sees them.

The 6 state modules you'll use 90% of the time.

Salt ships hundreds of state modules. Six cover almost everything you'll do in a normal day. Master these, browse the rest at docs.saltproject.io when you need something exotic.

file

.directory · .managed · .symlink · .absent · .recurse

nginx_conf:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://nginx/nginx.conf
    - mode: '0644'
    - template: jinja

Folders, files, content, ownership, permissions. template: jinja renders templates with grains/pillar.

pkg

.installed · .removed · .uptodate · .latest

nginx_pkg:
  pkg.installed:
    - name: nginx
    - version: 1.24.*    # pin loosely

Cross-OS — same state on RHEL, Debian, Windows. Pin versions in production. Lists work too: - pkgs: [nginx, curl, rsync].
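The list form from that note, spelled out as a full state (package names illustrative). One state, several packages, resolved in a single package-manager transaction:

```yaml
base_tools:
  pkg.installed:
    - pkgs:
      - nginx
      - curl
      - rsync
```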

service

.running · .dead · .enabled · .disabled

nginx_service:
  service.running:
    - name: nginx
    - enable: True
    - watch:
      - file: nginx_conf

enable: True survives reboots. watch: auto-restarts on file changes.

user

.present · .absent

app_user:
  user.present:
    - name: appsvc
    - shell: /bin/bash
    - groups: [docker, sudo]
    - createhome: True

Cross-OS user management. Use pillar for any password — never hardcode.

group

.present · .absent

add_to_rdp:
  group.present:
    - name: Remote Desktop Users
    - addusers:
      - DOMAIN\\app-svc

Local group membership. Users must already exist (locally or AD). Double-backslash for AD users.

cmd

.run · .script · .wait

tag_machine:
  cmd.run:
    - name: echo 'managed' > /etc/managed
    - unless: test -f /etc/managed

Use sparingly. Always pair with unless: or onlyif: for idempotency. Real states are better.

Naming. Salt has a name: shortcut — if you skip it, Salt uses the state ID as the name. nginx_pkg: pkg.installed: - name: nginx can be shortened to just nginx: pkg.installed. Clean for one-offs, confusing in bigger files. Pick one style per project.
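Side by side, both styles produce the same result:

```yaml
# explicit name — state ID and package name are independent
nginx_pkg:
  pkg.installed:
    - name: nginx

# shortcut — the state ID doubles as the name
nginx:
  pkg.installed
```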

Requisites — making states cooperate.

Salt doesn't run states in file order. It runs them in dependency order. You declare dependencies with requisites. Get this right and complex setups become bulletproof. Get it wrong and you'll wonder why your service tried to start before its package existed.

The four you need

require:
Run after the listed states succeed. The bread-and-butter "do A before B" requisite. pkg: nginx_pkg (under require:) means "wait until nginx_pkg is done."
watch:
Like require, plus react if anything changed. Service restarts automatically when its config file changes. This is the killer feature.
onchanges:
Run only if one of the listed states actually made a change. Useful for "run this audit script when config gets modified" — won't fire on a no-op highstate.
listen:
Defer until the end of the highstate. Same idea as watch, but batched. Useful for one-time post-deployment hooks.
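A minimal onchanges sketch, reusing nginx_conf from earlier; the audit script path is a hypothetical placeholder:

```yaml
nginx_conf_audit:
  cmd.run:
    - name: /usr/local/bin/audit_nginx.sh   # hypothetical script
    - onchanges:
      - file: nginx_conf                    # fires only when the config actually changed
```

On a highstate where nginx_conf reports no changes, this state is skipped entirely.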

Inverse forms

Each requisite has a _in mirror — require_in, watch_in, onchanges_in, listen_in. Same effect, declared from the other side. Use whichever reads better:

# both of these mean "service restarts on config change"

# forward — declared on the service
nginx_service:
  service.running:
    - name: nginx
    - watch:
      - file: nginx_conf

# inverse — declared on the file
nginx_conf:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://nginx/nginx.conf
    - watch_in:
      - service: nginx_service

The 30-second rule. If you can't tell from looking at a state file what runs in what order, your requisites are wrong. Fix the chain, not the symptom.

The canonical chain — package + config + service

Hundreds of state files end up looking like this. Memorize it.

nginx_pkg:
  pkg.installed:
    - name: nginx

nginx_conf:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://nginx/nginx.conf
    - require:
      - pkg: nginx_pkg                # package must exist first

nginx_service:
  service.running:
    - name: nginx
    - enable: True
    - require:
      - pkg: nginx_pkg                # package must exist
    - watch:
      - file: nginx_conf              # restart on config change

The top file — how Salt picks which minion gets what.

top.sls is Salt's routing table. It says "for each minion, here's the list of states that apply." Lives at /srv/salt/top.sls. Highstate reads it and figures out the rest.

The shape

# /srv/salt/top.sls
base:                              # environment (also: dev, staging, prod)
  '*':                             # every minion
    - common.baseline

  'web*':                          # glob — minion ID matching
    - nginx
    - app

  'os_family:RedHat':              # grain match
    - match: grain
    - linux.hardening

  'G@os:Windows and G@env:prod':   # compound
    - match: compound
    - windows.iis
    - windows.audit

Match types

Targets in top.sls default to a glob against the minion ID. Anything else needs a - match: line under the target, as in the examples above: grain, compound, list (explicit minion IDs), pcre (regex), and pillar are the ones you'll actually reach for.

The flow. salt '*' state.highstate → master reads top.sls → for each minion, gathers the matched state names → applies them → minion converges. state.apply <name> ignores top.sls and applies just the one. Both idempotent.

Real-world patterns.

Four sanitized patterns from production deployments. Each one is something you'd actually run, distilled to the parts that teach. Steal these.

Pattern 01

Pillar-driven user with sudoers — Linux

Idempotent demo user creation, with sudoers granted via a drop-in. Validates the sudoers fragment with visudo -cf before moving it into place — a malformed line can't break sudo system-wide.

# /srv/salt/linux/demo_user.sls
{% set sudo_user  = salt['pillar.get']('linux:sudoer:username', 'saltdemo') %}
{% set sudo_shell = salt['pillar.get']('linux:sudoer:shell', '/bin/bash') %}
{% set nopasswd   = salt['pillar.get']('linux:sudoer:nopasswd', True) %}

demo_user:
  user.present:
    - name: {{ sudo_user }}
    - shell: {{ sudo_shell }}
    - createhome: True

demo_user_sudoers:
  file.managed:
    - name: /etc/sudoers.d/{{ sudo_user }}
    - user: root
    - group: root
    - mode: '0440'
    - contents: |
        # Managed by Salt
        {{ sudo_user }} ALL=(ALL){% if nopasswd %} NOPASSWD:{% endif %} ALL
    - check_cmd: /usr/sbin/visudo -cf      # validate before commit
    - require:
      - user: demo_user

Why this is good: values come from pillar (per-environment override), check_cmd guards against malformed sudoers, require: ensures the user exists before adding rights, and in real life the whole thing is wrapped in a Jinja {% if grains['kernel'] == 'Linux' %} guard so it only renders on Linux.
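A condensed sketch of that grain guard, with the state body abbreviated:

```yaml
{% if grains['kernel'] == 'Linux' %}
demo_user:
  user.present:
    - name: {{ salt['pillar.get']('linux:sudoer:username', 'saltdemo') }}
    - createhome: True
{% endif %}
```

On a non-Linux minion the block renders to nothing, so the state simply doesn't exist there.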

Pattern 02

Reboot, wait, reconnect — Windows

Reboot a Windows box from Salt and wait for it to come back before continuing the highstate. The "reboot orchestration" pattern that separates production-grade Salt from script-style Salt.

# /srv/salt/windows/reboot.sls

reboot_now:
  cmd.run:
    - name: shutdown /r /t 2
    - shell: cmd

wait_for_winrm:
  cmd.run:
    - name: |
        powershell -ExecutionPolicy Bypass -Command "
        $counter = 0
        while ($counter -lt 30) {
          if (Test-NetConnection -ComputerName $env:COMPUTERNAME `
              -Port 5985 -InformationLevel Quiet) { exit 0 }
          Start-Sleep -Seconds 10
          $counter++
        }
        exit 1"
    - shell: cmd
    - require:
      - cmd: reboot_now

post_reboot_smoke_test:
  cmd.run:
    - name: powershell -Command "Get-Service salt-minion"
    - shell: cmd
    - require:
      - cmd: wait_for_winrm

Why this works: WinRM (port 5985) coming back is a reliable signal that the box is genuinely up — services started, network up, ready to take work. The PowerShell loop polls for 5 minutes max, fails fast if the box doesn't come back. Subsequent states require: the wait, so the highstate doesn't try to do work on a half-booted box.

Pattern 03

Multi-host orchestration — RDS-style

One salt-run state.orch command, deploying state to multiple groups of minions in dependency order. This is the pattern for anything that involves "deploy A on these boxes, then B on those, then C on the others." Used here for a Remote Desktop Services build (license server first, then session hosts, then app publishing).

# /srv/salt/orch/rds_build.sls
# run with: sudo salt-run state.orch orch.rds_build

deploy_broker_gateway:
  salt.state:
    - tgt: 'G@server_role:rds_broker'
    - tgt_type: compound
    - sls: windows.rds.broker_gateway

deploy_license_server:
  salt.state:
    - tgt: 'G@server_role:rds_license'
    - tgt_type: compound
    - sls: windows.rds.license_server

deploy_session_hosts:
  salt.state:
    - tgt: 'G@server_role:rds_session_host'
    - tgt_type: compound
    - sls: windows.rds.session_host
    - require:
      - salt: deploy_broker_gateway      # brokers ready first
      - salt: deploy_license_server       # licensing ready first

publish_apps:
  salt.function:
    - tgt: 'G@server_role:rds_broker'
    - tgt_type: compound
    - name: cmd.run
    - arg:
      - powershell -File C:\scripts\publish_apps.ps1
    - require:
      - salt: deploy_session_hosts        # hosts ready first

Why this is the killer pattern: Salt is doing what would normally take a Bash-and-Ansible kludge with sleeps and pings — and doing it idempotently. salt.state applies states to grain-targeted groups; salt.function runs ad-hoc functions; require: chains them. Re-run anytime, only the parts that haven't converged actually do work.

Pattern 04

Reactor — event-driven Salt

Salt has an event bus. Anything happening on the master or minions fires an event. Reactors map event tags to state files. This turns Salt from "I tell it to do things" into "it does things when stuff happens."

# /etc/salt/master.d/reactor.conf
reactor:
  - 'orchestrate/rds/start':                    # custom event tag
    - /srv/salt/reactor/start_rds_build.sls
  - 'salt/minion/*/start':                      # built-in: minion came up
    - /srv/salt/reactor/welcome_new_minion.sls

# /srv/salt/reactor/start_rds_build.sls
start_rds_build:
  runner.state.orchestrate:
    - mods: orch.rds_build

Fire the event from anywhere — a CI pipeline, another minion, a webhook handler — and the orchestration kicks off automatically:

salt-call event.send 'orchestrate/rds/start'

Why this matters: blue/green deploys, automated remediation, scheduled builds — they all live here. The install guide's blue/green pattern is essentially this: an event fires, a reactor responds, an orchestration runs.
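Events can also carry a payload, which reactor SLS files receive as the data variable (for custom events sent with event.send, the payload lands under data['data']). A sketch, assuming the sender attaches an env key:

```yaml
# fired with: salt-call event.send 'orchestrate/rds/start' '{"env": "prod"}'

# /srv/salt/reactor/start_rds_build.sls (payload-aware variant)
start_rds_build:
  runner.state.orchestrate:
    - mods: orch.rds_build
    - pillar:
        env: {{ data['data']['env'] }}    # pass the event payload into the orchestration
```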

Anti-patterns.

Six things that look like they work and bite you six months later.

cmd.run without unless: or onlyif:

Runs on every highstate. Slows everything down, makes idempotency a lie. If a real state module exists (file.managed, pkg.installed, etc.), use that. If cmd.run is genuinely necessary, gate it.

Hardcoded values in state files

Hostnames, paths, credentials, environment names — they belong in pillar. Hardcode and you'll need a state file per environment. Pillar them and one state file works everywhere. Getting Started covers pillar.

Missing require: chains

Without it, Salt is free to run states in parallel. For "package must exist before service starts" that's a race. Salt won't always lose it — but when it does, you'll spend hours debugging.

One giant .sls with everything in it

Hundreds of states in /srv/salt/all.sls. Hard to read, hard to test, impossible to reuse. Split by concern: nginx/init.sls, nginx/config.sls, nginx/cert.sls. Use include: to compose.
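Composition with include:, using the filenames from that split:

```yaml
# /srv/salt/nginx/init.sls
include:
  - nginx.config
  - nginx.cert
```

state.apply nginx then pulls in all three files; each piece stays independently testable.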

Restarting on every highstate

Use watch:, not cmd.run with a reload command. watch: only fires when something actually changed. cmd.run fires every time. Difference between "graceful reload when needed" and "kicking nginx every 10 minutes."

Skipping test=True in production

Always dry-run. Especially after pillar edits, especially on critical fleets. The fix takes 5 seconds (state.apply test=True). The recovery if you skip it can take an afternoon.

See also.

Patterns distilled from Saltify's production deployments + the official Salt docs at salt/doc/ (Apache 2.0).

Stuck on a state pattern?

If you're staring at YAML and wondering why it's not converging, we've probably seen the bug. Tell us what's broken.

[email protected]