Install Guide

Hardware: three RHEL 8 or 9 VMs.
- Masters: 16 vCPU / 16 GB RAM / 200 GB disk each
- Data layer: 16 vCPU / 16 GB RAM / 500 GB disk
Never roll back a database again. Build once, put everything in Git, and stop firefighting. The rule we live by: if Salt breaks and you can't fix it in 30 minutes — don't troubleshoot, rebuild. Blue/green swap or full rebuild from Git: ten minutes either way.
v1.3 · Updated 2026-05-09
One stack is three servers. Inside the stack, two masters keep things running if one dies. The data layer (RaaS, PostgreSQL, Valkey) sits on its own box.
Three servers. Two masters for HA. One data layer. Git is the source of truth.
Why three, not two or four? Two servers (the official Basic Enterprise model) means one master — fragile. Four servers (Distributed Enterprise) is overkill below ~10,000 minions. Three is the sweet spot: HA where you need it, simple where you don't.
This is the architecture's whole reason for existing. Blue/green isn't between the two masters in one stack. It's between two whole stacks. The 3-server stack from section 1 is the unit you swap.
Cutover = repoint the CNAME (or swap the LB target). Minions don't notice. Both stacks pull from the same Git, so they're identical by definition.
What this gives you: upgrades become "build a fresh stack on the new version, flip DNS." Failed install? Don't troubleshoot — rebuild. The 3-server stack is cheap and reproducible because everything's in Git. → Jump to the cutover steps in section 7 when you're ready to actually do one.
Line these up first. Most install pain comes from skipping prereqs.
Minimal install profile. Static IPs. Hostnames set. Latest patches.
You need a Broadcom Support Portal account and a current VCF Salt license — Aria Suite (Standard Plus / Advanced / Enterprise) or VCF Advanced Cyber Compliance.
Pull from the Broadcom Support Portal. Match your RHEL version.
# RHEL 8
VMware_Salt_Raas-<ver>-<build>.el8_Installer.tar.gz
# RHEL 9
VMware_Salt_Raas-<ver>-<build>.el9_Installer.tar.gz
# to RaaS
443/tcp # UI + API
5432/tcp # PostgreSQL
6379/tcp # Valkey
# to masters
4505/tcp # publish
4506/tcp # return
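On RHEL the firewall is usually firewalld. A minimal helper sketch, assuming firewalld is active — `print_rules` (a name of my choosing) emits the open-port commands for a role so you can review them before piping to `sudo sh`:

```shell
# print the firewalld commands for one role; review, then pipe to sudo sh
print_rules() {
  case "$1" in
    data)   ports="443 5432 6379" ;;  # RaaS UI/API, PostgreSQL, Valkey
    master) ports="4505 4506" ;;      # publish, return
  esac
  for p in $ports; do
    echo "firewall-cmd --permanent --add-port=${p}/tcp"
  done
  echo "firewall-cmd --reload"
}
print_rules data
```

On the data node: `print_rules data | sudo sh`. On each master: `print_rules master | sudo sh`.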
Don't point minions at the individual master hostnames. Point them at a stable alias like salt-master.example.com. When you do blue/green later, you flip the CNAME — minions don't need to be touched.
sudo yum install -y \
libsodium openssl \
git java unzip pinentry
Have an empty Git repo ready. /srv/salt on each master will clone from it. This is what makes the whole thing rebuildable.
Get a cert + key for raas.example.com. Self-signed works for a lab; use your internal CA for prod.
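For the lab case, a self-signed cert can be generated in one command. A sketch, assuming OpenSSL 1.1.1+ (the RHEL 8/9 default) — filenames are placeholders:

```shell
# lab only: self-signed cert + key for raas.example.com; use your internal CA for prod
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=raas.example.com" \
  -addext "subjectAltName=DNS:raas.example.com" \
  -keyout raas.key -out raas.crt
# sanity check: print the subject
openssl x509 -in raas.crt -noout -subject
```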
The architecture is identical. The install steps are identical. Only one thing changes.
If your servers have internet (or an internal PyPI proxy like Nexus / Artifactory), the Python dependencies install themselves: salt-pip install PyJWT pika pyspnego smbprotocol pypsexec.
If your servers are air-gapped, those wheels don't exist on the box yet. The bundle ships them in its pip/ folder — you install them manually. This is step 09 in the install below.
Run on the boxes from section 3. Step 01 runs on every server; after that, each step says where to run. Steps 1–7 install Salt by hand. Step 8 is the pivot — from there, Salt configures itself.
Download VMware_Salt_Raas-<VERSION>-XXXXXXXX.el8_Installer.tar.gz from the Broadcom Support Portal. Verify with sha256sum. Stage on every node.
tar -xzvf VMware_Salt_Raas-*.tar.gz
cd Config-*/sse-installer
sudo rpmkeys --import keys/*.asc
Run on both master boxes.
cd ~/Config-*/rpm
sudo rpm -ivh salt-*.rpm
sudo rpm -ivh salt-master-*.rpm
sudo rpm -ivh salt-minion-*.rpm
sudo rpm -ivh salt-cloud-*.rpm
sudo rpm -ivh salt-api-*.rpm
Run on the RaaS + DB + Valkey box. Just the minion — no master here.
cd ~/Config-*/rpm
sudo rpm -ivh salt-*.rpm
sudo rpm -ivh salt-minion-*.rpm
This is the magic. Both masters use the same master.pem and master.pub. That's how a minion can talk to either master without re-keying — and how blue/green works later.
# on master 1 — generate the keys (first time only)
# the files don't exist until you start the service once
sudo systemctl start salt-master
sudo systemctl stop salt-master
# on master 2 — paste in master 1's content
sudo chmod 600 /etc/salt/pki/master/master.pem
sudo vim /etc/salt/pki/master/master.pem
sudo chmod 400 /etc/salt/pki/master/master.pem
sudo vim /etc/salt/pki/master/master.pub
Treat master.pem like a root password. Use a vault or out-of-band transfer — never commit it.
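If the masters can SSH to each other, an scp copy avoids the vim copy-paste entirely. A sketch of one out-of-band option — root SSH between the masters is an assumption here:

```shell
# from master 1 — copy both key files straight into master 2's pki dir
scp -p /etc/salt/pki/master/master.pem /etc/salt/pki/master/master.pub \
    root@prod-master2:/etc/salt/pki/master/
# then on master 2, confirm the private key is locked down:
#   sudo chmod 400 /etc/salt/pki/master/master.pem
```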
The other half of the magic. Edit /etc/salt/master on both masters.
user: root
auto_accept: True
If your security policy bans blanket auto-accept, use autosign rules instead (autosign_file or autosign_grains_dir in the master config) — but the rule must match on both masters.
Three things on each box: tell it about the masters, give it a name, start the right services.
a) Point at both masters — same file on every box
# /etc/salt/minion.d/master.conf
master:
- prod-master1.example.com
- prod-master2.example.com
b) Set the minion ID — different on each box
Names are your choice. Pick something that describes the role, not the hostname. The IDs appear later in the pillar (step 8). A good convention: <env>-<role><N>.
# on master 1
echo "prod-master1" | sudo tee /etc/salt/minion_id
# on master 2
echo "prod-master2" | sudo tee /etc/salt/minion_id
# on the data node (RaaS + PG + Valkey)
echo "prod-raas" | sudo tee /etc/salt/minion_id
c) Start services — masters run three, data node runs one
# on both masters
sudo systemctl enable --now \
salt-master salt-minion salt-api
# on the data node
sudo systemctl enable --now salt-minion
From master 1. Accept all pending keys and confirm every minion answers.
sudo salt-key -L # list pending
sudo salt-key -A # accept all
sudo salt '*' test.ping
All three minions (both masters + the data node) should return True. If not, restart salt-minion on the silent box and retry.
This is the moment Salt starts configuring itself. Pillar is the data Salt reads to know what to build. Once you write these files, master 1 has the recipe — the highstate in step 10 reads them and uses them to install PostgreSQL, Valkey, and RaaS on the data node, and the master plugin on both masters. Steps 1–7 were hand-install. From here on, you write config and Salt does the rest.
On master 1 only:
cd ~/Config-*/sse-installer
sudo mkdir -p /srv/salt/pillar
sudo cp -r salt/sse /srv/salt/
sudo cp -r salt/top.sls /srv/salt/
sudo cp -r pillar/sse /srv/salt/pillar/
sudo cp -r pillar/top.sls /srv/salt/pillar/
/srv/salt/pillar/top.sls

List the minion IDs of every box in this deployment. Salt uses this to know which boxes get the SSE pillar data.
{# Pillar Top File #}
{# Define VCF Salt servers #}
{% load_yaml as sse_servers %}
- prod-raas
- prod-master1
- prod-master2
{% endload %}
base:
  {# Assign pillar data to each server #}
  {% for server in sse_servers %}
  '{{ server }}':
    - sse
  {% endfor %}
/srv/salt/pillar/sse/sse_settings.yaml

The file has five sections. Most fields are fine at default. The fields you actually need to set are called out below.
WHAT YOU EDIT:
- pg_endpoint, redis_endpoint, eapi_endpoint: the data node's IP or DNS. All three point to the same box — PG, Valkey, and RaaS all live there.
- eapi_key: generate with the helper below.
- customer_id: generate with the helper below.
- HA flags (eapi_standalone, eapi_failover_master): keep defaults for now.
- Passwords: change after install.

# Section 1: Servers — which minion does what (use minion IDs from step 06)
servers:
  pg_server: prod-raas
  redis_server: prod-raas      # variable retained — actual DB is Valkey
  eapi_servers:
    - prod-raas
  salt_masters:
    - prod-master1
    - prod-master2             # comment out for single-master

# Section 2: PostgreSQL — endpoint is IP or DNS of the data node
pg:
  pg_endpoint: raas.example.com
  pg_port: 5432
  pg_username: salteapi
  pg_password: abc123          # default — change after install
  pg_hba_by_ip: True
  pg_hba_by_fqdn: True
  pg_cert_cn: localhost
  pg_cert_name: localhost

# Section 3: Valkey — same data node as PostgreSQL
redis:
  redis_endpoint: raas.example.com
  redis_port: 6379
  redis_username: saltredis
  redis_password: def456       # default — change after install

# Section 4: eAPI / RaaS service
eapi:
  eapi_username: root          # default — change after install
  eapi_password: salt          # default — change after install
  eapi_endpoint: raas.example.com
  eapi_ssl_enabled: True
  eapi_ssl_validation: False
  eapi_standalone: False       # multi-node deploy = False
  eapi_failover_master: False  # False = active/active (recommended)
  eapi_key: <openssl rand -hex 32>
  eapi_server_cert_cn: localhost
  eapi_server_cert_name: localhost

# Section 5: Identifiers — must be unique per install
ids:
  customer_id: <uuidgen>
  cluster_id: saltmaster_cluster_1
# eapi_key
openssl rand -hex 32
# customer_id
cat /proc/sys/kernel/random/uuid
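A paste-ready sketch that generates both values and sanity-checks their shape before you drop them into sse_settings.yaml — the variable names are mine:

```shell
# generate both identifiers
EAPI_KEY=$(openssl rand -hex 32)               # -> eapi_key
CUSTOMER_ID=$(cat /proc/sys/kernel/random/uuid)  # -> customer_id
# shape checks: 64 hex chars, and a 36-char UUID
echo "$EAPI_KEY"    | grep -Eq '^[0-9a-f]{64}$' && echo "eapi_key ok"
echo "$CUSTOMER_ID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo "customer_id ok"
```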
cluster_id ties the masters together in the RaaS UI. Use the same value on both masters in this stack.
The variable is still redis_server — the database is Valkey, but don't rename it. Both eapi_key and customer_id must be unique per install.
Skip this step if your nodes have internet. The SSEAPE master plugin still needs to be installed (see the master plugin block below) — but the dependency wheels can come from PyPI directly.
Run on every node. The bundle ships these wheels in the pip/ folder. The SSEAPE master plugin is in sse-installer/.
cd ~/Config-*/pip
sudo salt-pip install PyJWT-*.whl
sudo salt-pip install pika-*.whl
sudo salt-pip install pyspnego-*.whl
sudo salt-pip install smbprotocol-*.whl
sudo salt-pip install pypsexec-*.whl
# master plugin — both masters only
cd ~/Config-*/sse-installer/salt/sse/eapi_plugin/files
sudo salt-pip install SSEAPE-*-py3-none-any.whl
With internet: sudo salt-pip install PyJWT pika pyspnego smbprotocol pypsexec pulls the same wheels from PyPI. The SSEAPE plugin still has to come from the bundle (it's not on PyPI).
Data layer first, then masters. The order matters and there's a one-line toggle in the middle. From master 1.
10a · Refresh and apply RaaS:
sudo salt '*' saltutil.refresh_grains
sudo salt '*' saltutil.refresh_pillar
sudo salt prod-raas state.highstate
10b · On the RaaS node, enable autoaccept for the master plugin:
# SSH into prod-raas, then:
sudo vim /etc/raas/raas
# add this line under the existing config, save:
master_autoaccept: true
sudo systemctl restart raas
Without this, the SSEAPE master plugin can't register with RaaS during the next highstate and the masters fail with a key-rejection error. We enable it here, run the masters, then strip it back out in step 11.
10c · Apply the masters:
sudo salt prod-master1 state.highstate
sudo salt prod-master2 state.highstate
Watch tail -f /var/log/messages on the RaaS node. If a master highstate fails, bounce its salt-minion and retry. Tip: snapshot the VMs before this step.
Post-install cleanup. Strip the install-time autoaccept, point each master's minion at itself, drop the cross-keys. Order matters — do RaaS first, then each master in turn.
11a · Remove autoaccept on RaaS:
# SSH into prod-raas, then:
sudo sed -i '/master_autoaccept: true/d' /etc/raas/raas
sudo systemctl restart raas
This was set in 10b so the master plugin could register. With install done, it has to come back out — leaving it on means any new master plugin registers without review.
11b · Lock down master 1:
# on prod-master1
echo "master: localhost" | sudo tee \
/etc/salt/minion.d/master.conf
sudo systemctl restart salt-master
sudo systemctl restart salt-minion
sudo salt '*' test.ping # minions still reachable?
sudo salt-key -d prod-master2 # drop the cross-key
sudo salt '*' test.ping # nothing else fell off?
11c · Lock down master 2:
# on prod-master2
echo "master: localhost" | sudo tee \
/etc/salt/minion.d/master.conf
sudo systemctl restart salt-master
sudo systemctl restart salt-minion
sudo salt '*' test.ping
sudo salt-key -d prod-master1
sudo salt '*' test.ping
The two test.ping calls per master are the sanity checks. If the count drops between them, you deleted a key you shouldn't have — restore from snapshot and try again.
✓ Phase 1 complete. The stack is installed and locked down. Section 6 below is the production hardening pass — license, certs, Git, GPG, master configs, minion bundles.
Don't skip step 11a. Leaving master_autoaccept: true in place after install means any new master plugin registers without review — that's a security hole. The line goes in for 10b, comes back out in 11a. Always.
A working stack isn't a production stack. Six things to do before you point a real fleet at it.
On the RaaS node. The filename must end in _license.
echo "<license-key>" | sudo tee \
/etc/raas/saltstack_license
sudo chown raas:raas \
/etc/raas/saltstack_license
sudo systemctl restart raas
Drop your cert + key in /etc/pki/raas/certs/, then point /etc/raas/raas at them.
# in /etc/raas/raas
tls_crt: /etc/pki/raas/certs/<name>.crt
tls_key: /etc/pki/raas/certs/<name>.key
port: 443
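After the restart, you can confirm RaaS is actually serving your cert. A quick-check sketch, run from any box that can reach the data node (hostname assumed from earlier sections):

```shell
# print the subject and validity dates of whatever cert is live on 443
echo | openssl s_client -connect raas.example.com:443 -servername raas.example.com 2>/dev/null \
  | openssl x509 -noout -subject -dates
```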
Replace /srv/salt with a Git checkout on both masters. A 10-minute cron pull keeps it fresh. Two paths from here — pick the one that matches your situation.
Path A · First deployment
You just installed the masters. Your Git repo has nothing in it yet. Before Git can be the source of truth, the source has to exist.
The Getting Started tutorial walks through your first state files, pillar setup, and the initial git push. Once your repo has content, come back and run Path B to clone it onto both masters.
Path B · Existing repo
Re-deploying after a rebuild, seeding a new green stack, or running DR? Your repo already has the states. Wipe the SSE-shipped scaffold and clone — on both masters.
# /srv/salt was scaffolded by the SSE installer in step 8 —
# wipe it so Git can take over.
sudo rm -rf /srv/salt
sudo git clone <repo-url> /srv/salt
cd /srv/salt && git checkout <branch>
# cron — every 10 min on both masters (a crontab entry must be one line)
*/10 * * * * git -C /srv/salt pull > /var/log/salt_git_pull.log 2>&1
Pillar lives in Git. Secrets in pillar must be encrypted. Generate a GPG keypair on master 1, copy to master 2 (and to any future green stacks).
In /etc/salt/master.d/, three files do most of the work:
- file_roots.conf — env-to-folder map (e.g., prod → /srv/salt/prod)
- pillar.conf — pillar root per env
- minion_deploy_delay.conf — Linux 180s, Windows 3600s

Stage the minion deploy bundles so Aria / vRA can deploy minions to new VMs at provision time.
cd ~/Config-*/minion-bundles
sudo cp * /etc/salt/cloud.deploy.d/
# /etc/salt/cloud.providers.d/saltify.conf
saltify_provider:
  driver: saltify
Run order matters. License → SSL → Git → GPG → master configs → minion bundles. The deep-dives for GPG and master configs are below — read them in order.
GPG — THE FULL SETUP
Pillar holds your config: passwords, API keys, certs. Pillar lives in Git. So secrets in pillar must be encrypted, or you're committing plaintext credentials to a repo. Salt uses GPG to decrypt them at runtime.
Just like master keys (step 4), the GPG keypair must be identical on every master in this stack and on every future blue/green stack. Generate once, copy everywhere.
On master 1 only.
sudo yum install -y pinentry # required for the prompt
mkdir -p /tmp/gpgkeys
chmod 0700 /tmp/gpgkeys
gpg --full-generate-key --homedir /tmp/gpgkeys
The interactive prompt asks five things:
- Key type: choose 1 — RSA and RSA (default).
- Key size: 2048 for large fleets (faster), 4096 for small fleets (stronger).
- Expiration: 0 — never expires. Rotating GPG = re-encrypting every pillar value.
- Name / email: something descriptive — e.g., Prod Salt Master, [email protected].
- Passphrase: leave empty. Salt can't enter a passphrase at runtime — a passphrase here breaks decryption.
Now export the keys, package them, and install:
# export the public + secret keys (run from inside /tmp/gpgkeys so the .asc files are packaged with it)
gpg --homedir /tmp/gpgkeys --expert --armor \
--export > salt_pubkey.asc
gpg --homedir /tmp/gpgkeys --expert --armor \
--export-secret-key > salt_seckey.asc
# package the gpg dir and unpack at /etc/salt/gpgkeys
cd ..
tar czvf gpgkeys.tgz gpgkeys
sudo tar -zxvf /tmp/gpgkeys.tgz -C /etc/salt/
sudo systemctl restart salt-master
# import into the master's keyring + verify
gpg --homedir /etc/salt/gpgkeys --import \
/etc/salt/gpgkeys/salt_pubkey.asc
gpg --homedir /etc/salt/gpgkeys --import \
/etc/salt/gpgkeys/salt_seckey.asc
gpg --homedir /etc/salt/gpgkeys --list-keys
gpg --homedir /etc/salt/gpgkeys --list-secret-keys
On master 2 (and on any future green-stack master), copy the same keys from master 1. Same pattern as mirroring master.pem in step 4.
# on the new master
sudo mkdir /etc/salt/gpgkeys
# paste contents from the existing master
sudo vim /etc/salt/gpgkeys/salt_seckey.asc
sudo vim /etc/salt/gpgkeys/salt_pubkey.asc
sudo systemctl restart salt-master
# import + verify
gpg --homedir /etc/salt/gpgkeys --import \
/etc/salt/gpgkeys/salt_pubkey.asc
gpg --homedir /etc/salt/gpgkeys --import \
/etc/salt/gpgkeys/salt_seckey.asc
gpg --homedir /etc/salt/gpgkeys --list-keys
One config file, then refresh.
# /etc/salt/master.d/gpg.conf
ssh_minion_opts:
  gpg_keydir: /etc/salt/gpgkeys
sudo systemctl restart salt-master
sudo salt '*' saltutil.refresh_pillar
sudo salt '*' pillar.items # confirm pillar reads
Encrypt secrets one at a time, paste the output into your pillar .sls files, commit to Git.
echo -n 'mySuperSecret123' | gpg \
--homedir /etc/salt/gpgkeys \
--trust-model always \
-ear 'Prod Salt Master'
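Before encrypting real secrets, you can rehearse the encrypt → decrypt round trip in a throwaway keyring. A sketch — the uid matches the example above, GNUPGHOME isolation keeps it off your real keyring, and the batch-generation block is one standard GnuPG way to create an unprotected test key:

```shell
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
# generate a passphrase-less RSA key + encryption subkey, non-interactively
gpg --batch --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: Prod Salt Master
Expire-Date: 0
%commit
EOF
# encrypt exactly as in the real workflow, then decrypt to prove the round trip
echo -n 'mySuperSecret123' | gpg --trust-model always -ear 'Prod Salt Master' \
  > "$GNUPGHOME/secret.asc"
gpg --batch --quiet --decrypt "$GNUPGHOME/secret.asc"
```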
The output is an armored -----BEGIN PGP MESSAGE----- block. Paste it into pillar like this — note the #!yaml|gpg shebang on line 1, that's what tells Salt to decrypt the block before rendering:
# in your pillar .sls file — first line must be the renderer shebang
#!yaml|gpg
db_password: |
  -----BEGIN PGP MESSAGE-----
  hQEMA<...>
  <several lines of base64 ciphertext from the gpg command above>
  =<checksum>
  -----END PGP MESSAGE-----
Skip the shebang and Salt won't decrypt anything. Without #!yaml|gpg on line 1, the file renders with the default yaml renderer and your minion gets the literal armored block as a string — not the decrypted secret. Silent failure, very confusing. The shebang has to be the first line. Blank line right after is fine.
salt_seckey.asc is your master decryption key. Treat it like a root password — vault it, transfer out-of-band, never commit it. The public key (salt_pubkey.asc) is fine to share — anyone can encrypt with it, only the private key can decrypt.
Drop each of these in /etc/salt/master.d/. Salt merges anything in that directory into the main master config at startup. Restart salt-master after each change.
file_roots.conf — envs to state folders

Each env points at its own state tree on disk. Match the env names to your top.sls and your Git branch strategy — typically one env per branch.
file_roots:
  prod:
    - /srv/salt/prod
  staging:
    - /srv/salt/staging
  dr:
    - /srv/salt/dr
pillar.conf — pillar roots per env

Same shape, but for pillar data. Pillar is rendered per-minion at the master, so this is where env-specific secrets and values live. Keep prod and staging apart even if they look identical day one — you'll thank yourself the first time staging gets a destructive test value.
pillar_roots:
  prod:
    - /srv/salt/prod/pillar
  staging:
    - /srv/salt/staging/pillar
minion_deploy_delay.conf — Windows takes longer

Default deploy delay is too short for Windows. The salt-minion service install on Windows 2022 takes longer than the master expects, then RaaS marks the deployment failed even though it succeeded. Bump Windows to 3600 seconds. Linux 180 is fine.
sseapi_linux_minion_deploy_delay: 180
sseapi_windows_minion_deploy_delay: 3600
As you go, you'll add more:
- raas.conf — RaaS cluster + auth settings (highstate writes this for you)
- gpg.conf — points Salt at the GPG keyring (covered in the GPG section above)
- reactor.conf — event-driven orchestration mappings (event → orchestration .sls)

Anything in /etc/salt/master.d/ loads as YAML and merges with the main config. Easier to manage than one giant /etc/salt/master.
# after dropping any new file
sudo systemctl restart salt-master
You only need this when you're cutting over to a green stack — for an upgrade, a migration, or a "rebuild instead of troubleshoot." The concept and the diagram are in section 2 above.
3 fresh VMs. Run section 5 on them. Don't connect anything yet.
Copy master.pem, master.pub, and /etc/salt/gpgkeys/ from blue to both green masters. Confirm auto_accept: True.
Use a canary minion (hostfile override) to point at green. Run test.ping, a highstate, pillar decrypt. Everything must pass.
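A canary sketch, run on ONE test minion only — the address is a placeholder for wherever green's masters answer, and the minion ID is yours to fill in:

```shell
# temporary /etc/hosts override pointing the canary at green
echo "203.0.113.21 salt-master.example.com" | sudo tee -a /etc/hosts
sudo systemctl restart salt-minion
# then, from a green master:
#   salt '<canary-minion-id>' test.ping
#   salt '<canary-minion-id>' state.highstate test=True
# remove the /etc/hosts line when the checks pass
```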
Day before cutover: drop the CNAME's TTL to 60s. Lets the swap propagate fast.
Repoint salt-master.example.com at green's masters. Same for raas.example.com. Or swap the LB pool if you have one.
Watch the RaaS UI. Minion connect rates, job success rates. Hold blue for 24–72 hours in case of rollback. Then decommission.
Things we've hit. Saves you a few late nights.
master_autoaccept: true

It's required during install. It's a security hole afterwards. Always remove it and restart RaaS once both masters are registered.
master.pem is read-only by default

Mode 0400. Run chmod 600 to edit, then chmod 400 back. The file doesn't exist on a brand-new master until you start the service once.
redis_* but the database is Valkey

Broadcom replaced Redis with Valkey but kept the old variable names for backward compat. Don't rename them — the installer will break.
The RaaS highstate hangs

If it sits for more than 30 minutes on the RaaS node, reboot the node and try again. Common when kernel/security baselines are still settling.
Master 2's highstate fails

Bounce salt-minion on master 2, then re-run the highstate from master 1. Sometimes needs two passes.
The green stack won't accept minions

Means at least one of: master keys drifted between stacks, GPG keys drifted, or auto-accept is off somewhere. All three must be in lockstep.
Windows minion deploys marked failed

Set sseapi_windows_minion_deploy_delay: 3600 in the master config. Default is too short for Windows 2022.
SCP to a rebuilt node fails

Clear the old SSH host fingerprint first: ssh-keygen -R <old-fqdn-or-ip>. Otherwise SCP and Ansible-style tooling will fail with host-key warnings.
cloud-init on long-lived servers

If your RHEL template runs cloud-init, it can rewrite networking on reboot. sudo touch /etc/cloud/cloud-init.disabled on every Salt node.
pinentry not installed → GPG key generation hangs

sudo yum install -y pinentry first. The interactive prompt for the passphrase needs it.
We build it, hand it back, and turn your team into the experts on the way out.
[email protected]