Salt (software)
from Wikipedia
Original author: Thomas S. Hatch
Developer: Broadcom
Initial release: 19 March 2011
Stable release: 3006.10 / 19 March 2025[1]
Written in: Python
Operating systems: Unix-like, macOS, Microsoft Windows
Type: Configuration management and infrastructure as code
License: Apache License 2.0
Website: saltproject.io

Salt or SaltStack is an infrastructure as code software tool for configuration management. It is written in Python and published under the Apache License 2.0.

History


Salt originated from the need for high-speed data collection and task execution for data center systems administrators managing infrastructure at massive scale and its resulting complexity. The author of Salt, Thomas S. Hatch, had previously created several utilities for IT teams to solve the problem of systems management at scale, but found these and other open source solutions to be lacking.[2] Hatch decided to use the ZeroMQ messaging library to meet the speed requirements and built Salt using ZeroMQ for all networking layers.

In late May 2011 initial progress was made toward the delivery of configuration management built on the Salt remote execution engine.[3] This configuration management system stores all configuration (state) data inside an easily understood data structure that leverages YAML. While experimental functionality of the Salt State system was available in May 2011, it was not considered stable until the release of Salt 0.9.3 in November 2011.[4]

The Salt 0.14.0 release introduced an advanced cloud control system making private and public cloud VMs directly manageable with Salt. The Salt Cloud function allows for provisioning of any hybrid cloud host, then exposes Salt remote execution, configuration management, and event-driven automation capabilities to the newly provisioned hybrid cloud systems. New virtual machines and cloud instances are automatically connected to a Salt Master after creation.

Salt Cloud supports 25 public and private cloud systems, including AWS, Azure, VMware, IBM Cloud, and OpenStack. It provides an interface for Salt to interact with cloud hosts and with cloud services such as DNS, storage, and load balancers.

In September 2020, VMware acquired SaltStack.[5][6]

Design


Salt's modular design organizes functionality into Python modules that each handle a specific aspect of the available Salt systems. These modules allow the interactions within Salt to be detached and modified to suit the needs of a developer or system administrator.

The Salt system maintains many module types to manage specific actions. Modules can be added to any of the systems that support dynamic modules. These modules manage all the remote execution and state management behavior of Salt. The modules can be separated into six groups:

  • Execution modules are the workhorse for Salt's functionality. They represent the functions available for direct execution from the remote execution engine. These modules contain the specific cross-platform information used by Salt to manage portability, and constitute the core API of system-level functions used by Salt systems.[7]
  • State modules are the components that make up the backend for the Salt configuration management system. These modules execute the code needed to enforce, set up, or change the configuration of a target system. As with other module types, additional states become available when new modules are added to the state system.
  • Grains are a system for detecting static information about a system and storing it in RAM for rapid gathering.[8]
  • Renderer modules are used to render the information passed to the Salt state system. The renderer system is what makes it possible to represent Salt's configuration management data in any serializable format.[9]
  • Returners: the remote execution calls made by Salt are detached from the calling system; this allows the return information generated by the remote execution to be sent to an arbitrary location. Arbitrary return locations are handled by the returner modules.[10]
  • Runners are master side convenience applications executed by the salt-run command.[11]

Security


In April 2020, F-Secure revealed two high-severity remote code execution (RCE) vulnerabilities, identified as CVE-2020-11651 and CVE-2020-11652, with CVSS scores as high as 10. These critical vulnerabilities were found in Salt's default ZeroMQ-based communication channel, and the initial research discovered 6,000 vulnerable Salt servers. The Salt project was notified before F-Secure's public announcement and soon released patches in the 2019.2.4 and 3000.2 releases.[12]

from Grokipedia
Salt is an open-source, Python-based automation platform for configuration management, remote execution, and orchestration of infrastructure, enabling the deployment, configuration, and maintenance of complex systems at scale. It operates on an event-driven architecture that supports parallel execution across thousands of nodes, using ZeroMQ for messaging and msgpack for serialization to achieve high speed and efficiency. Licensed under the Apache 2.0 License, Salt provides secure communication through public-key authentication and AES encryption in its master-minion model, where a central master server controls minion agents on target systems.

Originally developed by Thomas S. Hatch to address limitations in existing tools such as Puppet, Salt was first released in March 2011 as an approach to configuration management that emphasized speed and scalability. The project quickly gained adoption for its ability to handle diverse operating systems, including Linux distributions such as Debian and RHEL, as well as Windows and macOS, and it has been integrated into a range of enterprise solutions. Hatch later co-founded SaltStack Inc. with Marc Chenn to commercialize the technology, leading to its acquisition by VMware in September 2020 to enhance capabilities within the Tanzu portfolio. Following Broadcom's acquisition of VMware in November 2023, the open-source project transitioned to the community-driven Salt Project, supported by Broadcom, with ongoing releases such as the 3006 LTS and 3007 STS versions ensuring long-term stability and updates.

Key features of Salt include its Salt States system for declarative configuration management, which defines desired system states through YAML-based SLS files, and its remote execution engine for running ad-hoc commands or queries across minions. The platform's event system allows for reactive automation, enabling self-healing infrastructure that responds to real-time events like outages or alerts. With over 3,000 community contributors, Salt excels in environments requiring scalability, compliance, and multi-cloud orchestration, making it a versatile tool for DevOps and IT operations.

Introduction

Overview

Salt is a Python-based, open-source, event-driven remote execution framework designed for configuration management, remote execution, provisioning, and orchestration of infrastructure. It enables administrators to manage systems at scale, handling everything from small networks to thousands of servers efficiently through a push-based model that prioritizes speed and real-time responsiveness. Unlike pull-based predecessors such as Puppet and Chef, which require agents to periodically check for updates, Salt employs an agent-based architecture that allows for immediate command execution from a central master, making it lighter and faster for dynamic environments. This approach supports rapid deployment and remediation while maintaining compatibility with declarative configuration states. As of November 2025, the latest stable release is version 3007.8, available under the Apache 2.0 license, which permits both open-source and commercial use. Created by Thomas S. Hatch, Salt was initially released in 2011 and has since become a foundational tool in DevOps practices.

Key Components

The Salt Master serves as the central server in the Salt architecture, responsible for issuing commands to managed systems, managing authentication keys, and coordinating the overall orchestration of tasks across the infrastructure. It runs the salt-master service and publishes jobs that minions can subscribe to, receiving execution results in return.

Salt Minions are the agent components installed on target systems, enabling them to connect to the master, execute received commands locally, and report results back to the master for centralized monitoring and management. These minions run the salt-minion service and form the primary interface for remote execution and configuration enforcement on diverse operating systems.

For large-scale deployments, Salt Syndics act as proxy components that extend the master's reach hierarchically, allowing a higher-level master to control multiple subordinate masters through a special passthrough minion interface. A syndic node runs both the salt-syndic and salt-master daemons, relaying publications and events to enable scaling across thousands of minions without overwhelming a single master.

In scenarios where installing agents is impractical, Salt supports minionless modes such as Salt SSH, which enables remote execution and state application over SSH without requiring a Salt Minion on the target systems. This approach uses a roster file to define connections and is suitable for one-off tasks or environments with strict agent restrictions, though it operates more slowly than the agent-based model due to the overhead of SSH transport.

The Reactor system provides an event-driven mechanism for handling notifications across Salt components, allowing automated responses to specific events detected on the event bus, such as system alerts or job completions. Configured via YAML files on the master, it maps event tags to predefined reaction scripts or states, facilitating reactive automation like service restarts or notifications without manual intervention. Components communicate primarily via the ZeroMQ messaging library to support these event flows efficiently.
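As a sketch of how the Reactor ties these components together, the configuration below maps a minion-start event tag to a reactor SLS that applies highstate to the minion that just started. The file paths and state names are illustrative assumptions, not fixed conventions:

```yaml
# /etc/salt/master.d/reactor.conf -- illustrative master-side mapping
reactor:
  - 'salt/minion/*/start':          # fired whenever a minion (re)starts
    - /srv/reactor/on_start.sls     # hypothetical reactor SLS shown below

# /srv/reactor/on_start.sls -- rendered with Jinja; 'data' holds the event payload
startup_highstate:
  local.state.apply:
    - tgt: {{ data['id'] }}         # target only the minion that fired the event
```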

History

Origins and Development

Salt, commonly known as SaltStack, was created by Thomas S. Hatch in 2011 to overcome the performance limitations of existing tools like Puppet, which relied on slower Ruby-based architectures and struggled with scalability in large environments. Hatch, drawing from his experience as a systems administrator, sought to create a faster, more efficient solution for automating infrastructure at scale, emphasizing real-time execution and minimal latency. Initial development began in early 2011, driven by the need for a lightweight tool capable of handling thousands of nodes without the overhead of traditional polling mechanisms. The project was released as an open-source initiative under the Apache 2.0 license on March 19, 2011, coinciding with the formation of SaltStack, Inc. to support its commercial development. Early implementation focused on Python as the core programming language for its simplicity and extensibility, combined with ZeroMQ for high-performance, asynchronous messaging that enabled sub-second communication across distributed systems. This integration allowed Salt to prioritize speed and reliability from the outset, distinguishing it from predecessors that required more resource-intensive setups.

A key early milestone came with version 0.8.0 in 2011, which introduced foundational remote execution capabilities, including tools like salt-cp for efficient file distribution and dynamic returners for storing command outputs in external databases. By 2012, Salt had evolved to include robust support for cloud provisioning through the introduction of salt-cloud, enabling seamless integration with providers like AWS and automated VM orchestration. These advancements marked Salt's rapid maturation into a versatile platform.

Following the initial release, Salt transitioned toward community-driven development, with Hatch actively encouraging contributions via GitHub to foster collaborative growth. By late 2011, the project had garnered initial external contributions, laying the groundwork for its expansion beyond Hatch's solo efforts into a broader open-source ecosystem. This shift emphasized modularity and extensibility, allowing developers to build upon the core framework for diverse use cases.

Major Releases and Acquisitions

Salt's development accelerated in the mid-2010s with key releases that expanded its capabilities for large-scale automation. Version 2014.7.0, codenamed Helium and released in 2014, introduced advanced orchestration features, enabling coordinated execution across multiple minions for complex workflows. This release marked a significant step in supporting event-driven automation and multi-stage deployments, building on Salt's core remote execution model. Subsequent updates further enhanced cloud and integration functionalities. The 2016.11.0 release, codenamed Carbon and issued in November 2016, added improved support for cloud providers through better configuration merging and archive handling, facilitating easier provisioning in hybrid environments. In the early 2020s, the shift to the 3000 series, starting with version 3000.0 (Neon) in March 2020, focused on Python 3 compatibility by dropping Python 2 support entirely, ensuring long-term viability on modern operating systems.

Corporate changes reshaped Salt's trajectory and community focus. In September 2020, VMware acquired SaltStack, integrating the technology into its vRealize suite for enhanced enterprise automation and cloud management. This led to deeper ties with VMware's ecosystem, including support for multi-cloud orchestration. Following Broadcom's acquisition of VMware in November 2023, Salt's stewardship transferred to Broadcom, maintaining its open-source roots while emphasizing commercial extensions. In May 2024, Hatch left the company but affirmed his ongoing commitment to the Salt Project community.

Milestone events underscored Salt's growing prominence. The first SaltConf conference was held January 28–30, 2014, in Salt Lake City, Utah, providing a dedicated forum for users to share advancements in automation practices. Amid the 2020 acquisition, Salt rebranded as the Salt Project to reinforce its commitment to open-source development separate from proprietary offerings.

As of November 2025, Salt's support lifecycle reflects a structured approach to maintenance. The 3006.x (LTS) branch receives active development and bug fixes until January 31, 2026, with critical patches extending to January 31, 2027. Older versions, such as those in the 3000-3005 series, continue to get updates during extended phases, ensuring stability for legacy deployments. These acquisitions influenced development toward a hybrid open-source and commercial model, incorporating enterprise features and integrations without fundamentally changing the master-minion architecture. This evolution has sustained community contributions while broadening adoption in corporate infrastructures.

Architecture

Master-Minion Model

The Master-Minion model forms the core architecture of Salt, where a central Salt master server coordinates and issues commands to distributed Salt minion agents installed on target systems. In this publisher-subscriber paradigm, the master publishes jobs (such as remote executions or state enforcements) to a message bus, and minions subscribe to these jobs by maintaining persistent connections to the master. Minions independently evaluate whether a job targets them based on predefined selectors, execute the task locally if applicable, and asynchronously return results to the master via the same connection. This decoupled workflow ensures efficient, non-blocking operations, allowing the master to handle multiple concurrent jobs without waiting for individual responses.

Scaling in the Master-Minion model relies on flexible targeting mechanisms to selectively execute jobs across large fleets of minions, often numbering in the thousands. Targeting options include glob patterns (e.g., web* for minions starting with "web"), Perl-compatible regular expressions (PCRE) for complex matches, and data-driven selectors like pillar values or grain attributes, enabling precise control without broadcasting to all nodes. For environments exceeding 10,000 minions, hierarchical setups using syndics address master overload by introducing intermediate layers: a top-level "master of masters" publishes to syndic minions, which act as lightweight masters relaying commands to their subordinate minions while aggregating results upward. This structure distributes key management and reduces the top master's visibility to only the syndics, enhancing scalability and fault tolerance in multimaster configurations.

As an alternative to full minion installations on resource-constrained or incompatible devices, Salt supports proxy minions, which run on a separate host and interface with the target via custom proxy modules. Proxy minions emulate standard minion behavior, allowing the master to target and manage devices like IoT sensors or network switches that lack the ability to run a native agent, by translating Salt commands into device-specific protocols such as APIs or SSH.

Performance in the Master-Minion model is optimized through its event-driven design and non-blocking I/O, powered by ZeroMQ for asynchronous messaging, enabling sub-second job propagation and execution across extensive infrastructures. Multiple worker threads on the master further parallelize job processing, supporting high-throughput operations without bottlenecks.
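Recurring targeting expressions can be captured as nodegroups in the master configuration. A minimal sketch, assuming illustrative group names, minion IDs, and grain values:

```yaml
# /etc/salt/master.d/nodegroups.conf -- names and matchers are illustrative
nodegroups:
  webservers: 'web*'                    # glob match on minion ID
  databases: 'G@role:db and L@db1,db2'  # compound: grain match AND explicit list
```

A group can then be targeted with salt -N webservers test.ping, avoiding long compound expressions on the command line.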

Communication Protocols

Salt employs ZeroMQ (ZMQ) as its primary messaging library for communication between the master and minions, utilizing a publish-subscribe (pub/sub) pattern to enable efficient, asynchronous distribution of commands and events. This approach provides a lightweight alternative to traditional HTTP-based protocols, supporting high-speed, socket-based networking without the overhead of persistent connections or session management. ZeroMQ's implementation in Salt facilitates real-time interactions across distributed systems, allowing the master to broadcast jobs to targeted minions via publisher sockets while minions subscribe to relevant messages.

Messages in Salt are structured as encapsulated payloads, where job commands are wrapped in AES-encrypted content to secure transmission after initial public-key authentication. Each job includes a unique job identifier (JID) for identification and tracking, enabling the master to correlate returns from minions and manage execution lifecycle events. This structure ensures that payloads remain compact (typically under 1 KB for commands), while request-reply sockets support bidirectional communication, such as minion responses.

By default, Salt uses TCP as the transport protocol underneath ZeroMQ, with the master listening on port 4505 for publishing (configurable via publish_port) and port 4506 for replies (configurable via ret_port). Alternative options include UDP for discovery and broadcasting in dynamic environments, as well as a pure-TCP implementation as a fallback transport for scenarios requiring direct socket access without ZeroMQ. These transports can be customized through the transport configuration option, with support for multiple instances via transport_opts to handle varied network topologies.

The event bus in Salt, accessible via the SaltEvent module, operates as a real-time pub/sub system built on ZeroMQ, allowing components to publish and subscribe to events such as authentication attempts, job initiation, and returns. It supports presence detection of minions through configurable events like salt/presence/present and salt/presence/change, which track connected minions and report changes in availability when presence_events is enabled in the master configuration. This bus enables low-latency notifications, with a default maximum event size of 1 MB to accommodate detailed payloads.

For reliability, Salt incorporates configurable timeouts and retries at the protocol level, such as the default 5-second timeout for command responses and 10-second gather_job_timeout for aggregating job results. In multi-master setups, failover is achieved by listing multiple masters in the minion configuration with master_type: failover, where minions periodically check master availability via master_alive_interval (default 0, disabled) and automatically switch to the next master upon failure detection, effectively retrying connections. Additional safeguards include ZeroMQ's high-water mark (pub_hwm: 1000) to prevent message queuing overflows and TCP keepalive settings to maintain connection stability. These features collectively ensure robust operation in fault-tolerant environments, with job retention configurable up to 24 hours for recovery.
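The failover behavior described above is driven entirely by minion configuration. A minimal sketch, assuming illustrative hostnames:

```yaml
# /etc/salt/minion.d/failover.conf -- hostnames are illustrative
master:
  - master1.example.com
  - master2.example.com
master_type: failover        # connect to one master at a time, switch on loss
master_alive_interval: 30    # seconds between master liveness checks (0 disables)
```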

Core Functionality

Configuration Management

Salt's configuration management is built around a declarative model using Salt States, which are defined in SLS (Salt State) files typically written in YAML. These files specify the desired state of systems, such as installing packages, managing files, creating users, or configuring services, without prescribing the exact sequence of operations. For instance, an SLS file might declare that a specific package should be installed and running, allowing Salt to handle the underlying steps idempotently. This approach ensures that repeated applications of the same state result in no unintended changes, promoting consistency across managed nodes.

The highstate feature represents the complete set of states to apply to minions, compiled from relevant SLS files. Administrators invoke highstate using the command salt '*' state.highstate, which targets all minions and enforces the defined configurations simultaneously. This process is idempotent, meaning it only makes changes necessary to achieve the desired state, such as updating a package if its version differs or skipping actions if the system already complies. Highstate draws from the state tree rooted in the master's file_roots configuration, enabling scalable management of infrastructure.

Top files serve as a central mechanism for mapping environments and minion targets to specific SLS files, facilitating targeted deployments across development, staging, and production settings. The default top file, top.sls, uses a YAML structure where environments (e.g., base or dev) define matchers for minion groups (e.g., web*) and list the applicable SLS modules (e.g., apache). For example, a top file might assign the webserver SLS to minions matching webserver* in the prod environment, ensuring environment-specific configurations without manual intervention for each run; a sketch follows below.

To validate configurations without applying changes, Salt supports testing and dry-run modes through the test=True option in state application commands. Running salt '*' state.apply test=True simulates the highstate or specific SLS application, highlighting pending changes in output while reporting results as unchanged (e.g., None for no action). This allows administrators to preview impacts, detect errors, or confirm compliance before live deployment, with the option overridable via test=False if needed.

SLS files integrate seamlessly with version control systems like Git to enable GitOps practices, where states are stored in repositories for versioning, collaboration, and reproducibility. Salt Formulas, which are pre-built collections of SLS files for common tasks, are often maintained as individual repositories that can be cloned into the master's file roots or mounted via GitFS remotes in the configuration. This setup allows teams to fork formulas, apply semantic versioning, and pull updates controllably, treating infrastructure definitions as auditable code rather than ad-hoc scripts.
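A top file sketch illustrating the environment-to-target mapping described above; environment names, matchers, and SLS names are illustrative:

```yaml
# /srv/salt/top.sls -- illustrative mapping of targets to states
base:
  '*':
    - core            # states applied to every minion
  'web*':
    - apache          # glob-matched web servers
prod:
  'role:db':
    - match: grain    # interpret the key as a grain match, not an ID glob
    - mysql
```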

Remote Execution

Salt's remote execution enables administrators to perform ad-hoc, imperative commands on targeted minions, allowing immediate task execution across distributed systems without predefined state enforcement. This contrasts with declarative configuration management by focusing on one-time or non-idempotent operations, such as querying system status or applying quick fixes. The primary interface for remote execution is the salt command-line tool, which sends instructions to minions via the master. A core example is salt '*' cmd.run 'uptime', which runs the shell command uptime on all minions (* as the target) and returns the output from each. The cmd module handles arbitrary shell execution, while other execution modules provide higher-level abstractions.

Execution modules are Python-based components that offer cross-platform interfaces for common system tasks, abstracting underlying differences in operating systems. For instance, the pkg module manages package installation and updates regardless of whether the system uses APT, YUM, or another manager, as in salt '*' pkg.install nginx. Similarly, the service module controls services like starting or restarting daemons (salt '*' service.start httpd), and the file module handles file operations such as copying or editing (salt '*' file.touch /tmp/testfile). These modules ensure portability and reduce the need for platform-specific scripting.

Remote executions operate asynchronously, with the master assigning a unique job ID (JID) to each command for tracking across multiple minions. Results are cached on the master, and administrators can retrieve them using salt-run jobs.lookup_jid <jid>, which displays the output for a specific job even after the initial command completes. This system supports monitoring long-running tasks without blocking the CLI.

To prevent resource overload during large-scale executions, Salt supports batch processing, which limits the number of concurrent minions processing a command. Using the --batch-size or -b option, such as salt '*' cmd.run 'reboot' batch=10%, executes the command on only 10% of the targeted minions at a time, staggering the rest as prior batches finish. This is particularly useful for resource-intensive operations like reboots.

For handling output from remote executions, Salt provides various outputters to format results in structured ways suitable for human review or programmatic use. The json outputter (--out=json) serializes data into JSON for easy parsing in scripts, while yaml (--out=yaml) offers a human-readable yet structured alternative. The highstate outputter is tailored for state-related results but can be used more broadly to organize execution summaries.

Advanced Features

Orchestration

Salt Orchestration enables the coordination of complex, multi-step workflows across multiple minions from the Salt master, allowing for sequenced execution of tasks that depend on the outcomes of prior operations. Introduced in Salt version 0.17.0, it generalizes the state system to the master context, replacing the deprecated OverState system starting in 2015.8.0, and supports managing dependencies between states applied to different targets. Orchestration is defined using SLS files, typically stored in dedicated subdirectories like _orch/ within the master's file roots or GitFS remotes, which employ orchestration-specific primitives to invoke actions. These primitives include salt.state for applying state files to targeted minions (e.g., ensuring a highstate on web servers) and cmd.run for executing shell commands (e.g., removing temporary files). For instance, a basic SLS file might specify:

```yaml
install_nginx:
  salt.state:
    - tgt: 'web*'
    - sls:
      - nginx
```

This targets minions matching the web* pattern and applies the nginx state. Such files allow for declarative sequencing of tasks, extending single-minion management to cluster-wide operations. Execution occurs via runner modules on the master, invoked with commands like salt-run state.orchestrate orch.init or its alias salt-run state.orch orch.init, where orch.init references the SLS file name. This master-side runner processes the SLS, targeting minions by IDs, grains, or IP ranges (e.g., 10.0.0.0/24), and supports options like saltenv for environment selection or pillarenv for pillar data. Masterless orchestration is available since version 2016.11.0 using salt-call --local state.orchestrate. A representative workflow might involve setting up a database cluster before deploying an application across web servers. For example, an SLS could first apply a Ceph storage state to storage-role minions, then apply an application state to web minions only after the storage setup succeeds:

```yaml
storage_setup:
  salt.state:
    - tgt: 'role:storage'
    - tgt_type: grain        # 'role:storage' is a grain match, not an ID glob
    - sls: ceph

webserver_setup:
  salt.state:
    - tgt: 'role:web'
    - tgt_type: grain
    - sls: app
    - require:
      - salt: storage_setup
```

This ensures ordered execution, with the web deployment waiting for storage completion. Dependencies in orchestration leverage state requisites at the master level, including require to enforce sequential ordering (e.g., a task requires prior completion of another) and listen to trigger actions based on changes from watched states without altering execution order. These are applied within salt.state blocks to manage inter-state relationships across minions, ensuring reliable multi-system coordination. For example, listen can invoke a mod_watch function upon a watched state's success or failure. Salt Orchestration scales to thousands of nodes through built-in parallelism, where independent tasks execute concurrently across minions while requisites enforce ordering for dependent ones. Targeting mechanisms and the master's worker threads allow efficient handling of large infrastructures, with controls like batch sizes (via batch targeting) to manage resource usage during parallel runs.
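Batch control can be expressed directly in an orchestration SLS. In this sketch (file name and percentage are illustrative assumptions), the salt.state block limits how many minions run concurrently:

```yaml
# /srv/salt/orch/rolling.sls -- illustrative rolling update
rolling_web_update:
  salt.state:
    - tgt: 'web*'
    - sls: app
    - batch: '25%'     # apply to a quarter of the matched minions at a time
```

Run with salt-run state.orchestrate orch.rolling; requisites still order dependent steps while each batch proceeds in parallel.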

Event-Driven Automation

Salt's event-driven automation leverages a publish-subscribe model where events are fired onto the SaltEvent bus, allowing components to listen and react in real time. This system enables the publication of custom events, such as those tagged with 'salt/auth' for authentication-related notifications, facilitating dynamic responses across the infrastructure. The SaltEvent bus serves as the central hub for these interactions, supporting both internal Salt processes and external integrations for broader reactivity.

Beacons provide minion-side monitoring capabilities, periodically generating events based on system conditions to trigger automated actions. For instance, the loadavg beacon tracks CPU load over 1-, 5-, and 15-minute intervals, firing events if thresholds are exceeded, while the network.info beacon reports on network interface statistics like bytes sent or received. These beacons run at configurable intervals, defaulting to every second, and integrate seamlessly with the Reactor system to enable proactive infrastructure management without manual intervention.

Reactor SLS files, written in YAML with Jinja templating, define rules that map specific event tags to remediation actions, allowing for declarative event handling. Configured via the master's reactor option, these files associate tags like 'salt/minion/*/start' with SLS modules that execute states, runners, or orchestrations upon event receipt. For example, a reactor can automatically accept minion authentication keys when a start event is detected, streamlining onboarding.

Representative use cases demonstrate the system's reactivity, such as auto-remediation for low disk space: the diskusage beacon monitors filesystem thresholds and fires an event if utilization exceeds a limit (e.g., 90%), triggering a reactor SLS to execute cleanup states like log rotation or temporary file removal; a configuration sketch follows below. Similarly, high CPU load detected by the loadavg beacon can invoke a reactor to auto-scale resources, such as provisioning additional cloud instances via cloud modules. Failure triggers, like service downtime events from the service beacon, can activate reactors to migrate workloads or restart processes on standby nodes, ensuring high availability.

For advanced setups, the event system supports integration with external message buses for cross-system event handling, enabling third-party applications to listen and publish via Python APIs or REST interfaces, which can bridge to external queuing systems for distributed messaging in large-scale environments. This extensibility allows Salt to participate in broader event-driven architectures beyond its native ZeroMQ-based bus.
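A minimal beacon-plus-reactor sketch of the disk-space remediation described above; the threshold, paths, and event tag pattern are illustrative and should be verified against the installed Salt version's beacon documentation:

```yaml
# /etc/salt/minion.d/beacons.conf -- minion side: watch root filesystem usage
beacons:
  diskusage:
    - /: 90%           # fire an event when / exceeds 90% utilization
    - interval: 120    # check every two minutes

# /etc/salt/master.d/reactor.conf -- master side: map the event to a cleanup SLS
reactor:
  - 'salt/beacon/*/diskusage/*':    # illustrative tag pattern for the beacon
    - /srv/reactor/cleanup.sls      # hypothetical SLS applying cleanup states
```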

Data Handling

States and Formulas

Salt states are defined in files with the .sls extension, using YAML as the primary format to declaratively specify the desired configuration of systems. These files organize states into a hierarchical structure where each top-level key represents an ID for a state declaration, followed by the module name (e.g., pkg, file, service) and its function, such as pkg.installed to ensure a package is present. For instance, a basic SLS file might install Apache and manage its configuration like this:

```yaml
apache:
  pkg.installed:
    - name: httpd

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/files/httpd.conf
    - require:
      - pkg: apache

httpd:
  service.running:
    - enable: True
    - watch:
      - file: /etc/httpd/conf/httpd.conf
```

This structure allows Salt to apply configurations idempotently, meaning repeated executions yield the same result without unintended changes.

Requisites in SLS files manage dependencies between states, ensuring ordered execution and conditional behavior to maintain system integrity. The require requisite enforces that a dependent state (e.g., service.running) only proceeds if the required state (e.g., pkg.installed) succeeds, preventing failures from misordered operations. Watch combines a require with reactive actions, such as restarting a service if a watched file changes, using the mod_watch function in the target module. Onchanges triggers a state only if a specified state reports changes, ideal for post-update tasks like reloading configurations. Other requisites like prereq (for anticipated changes), onfail (for error handling), and listen (for end-of-run notifications) further refine dependency management, with variants like require_in allowing reverse declarations. Wildcards and SLS-level requires simplify complex dependency graphs.

Jinja templating integrates seamlessly into SLS files to enable dynamic, parameterized content, using double curly braces {{ }} for expressions evaluated before YAML parsing. This allows conditional logic, loops, and variable substitution, such as retrieving sensitive data with {{ pillar.get('db_pass', 'default') }} or iterating over lists with {% for pkg in pillar.get('packages', []) %}. Best practices include defining variables at the top of the file with {% set %}, using macros for reusable blocks, and importing from other SLS files via {% from %} to promote reuse and avoid hardcoded values. Jinja's sandboxed environment ensures secure execution, supporting Salt-specific functions like salt['grains.get'] for system-aware rendering.

Formulas extend SLS reusability by bundling related states into modular, community-maintained collections, often structured as Git repositories for easy integration via GitFS. Each formula includes a map.sls for pillar-driven configuration and an init.sls for core states, allowing customization without altering the base. For example, the apache formula installs the Apache HTTP server, manages its modules, and sets up virtual hosts based on pillar inputs, deployable by including it in a top file like base: '*': - apache. Hosted in the saltstack-formulas organization, these formulas standardize common setups like databases or firewalls, fostering collaboration while adhering to conventions for naming, testing, and documentation.

Validation of SLS files emphasizes syntax checking and adherence to idempotency, where states should converge systems to a defined configuration without side effects from reapplications. The salt-call state.show_sls command renders and displays an SLS file's highstate locally, catching YAML or Jinja errors early without execution. Idempotency is achieved by designing states to check current conditions (e.g., via unless or onlyif in non-idempotent modules like cmd.run), ensuring no unnecessary changes occur on subsequent runs; a sketch of this guard pattern follows below.
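For non-idempotent modules, the guard pattern mentioned above can be sketched as follows; the archive path and the test command are illustrative assumptions:

```yaml
# illustrative: make a one-shot command safe to reapply
extract_app:
  cmd.run:
    - name: tar -xzf /tmp/app.tar.gz -C /opt/app
    - unless: test -d /opt/app/bin    # skip when already extracted
```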

Pillars and Grains

In Salt, grains represent system-specific facts collected and provided by individual minions, offering a bottom-up approach to gathering environmental data. These grains include details such as the operating system (os), CPU architecture (cpuarch), memory usage, IP addresses, and kernel version, which are automatically detected upon minion startup or refresh. Custom grains can also be defined by users to include additional static data, such as application-specific attributes or environmental roles, via the minion configuration file under the grains: key, a dedicated /etc/salt/grains file, or custom Python modules in the _grains directory. Minions compile this information locally and make it available for targeting, state application, and module execution without requiring master intervention. To access grains, administrators can use the grains.items function, which returns a dictionary of all collected data for a minion, enabling dynamic decision-making in Salt configurations.

Pillars, in contrast, provide a top-down mechanism for the Salt master to distribute secure, environment-specific data to targeted minions, ensuring sensitive information like API keys, passwords, or configuration variables remains isolated from state files. Defined in YAML-based SLS files under the pillar_roots directory (typically /srv/pillar), pillars are compiled on the master and pushed only to matching minions based on targeting criteria. This compilation process allows for per-minion customization, where data can be structured hierarchically in nested dictionaries to avoid conflicts and support complex environments. Pillar SLS files enable environment-specific targeting through top files, such as /srv/pillar/top.sls, which map pillar data to minions using matchers like grains or roles; for example, assigning webserver-specific configurations to minions with role:webserver, as in the sketch below.

External pillars extend this capability by integrating with external stores for dynamic secrets management, including Vault for encrypted key-value storage and etcd for distributed key-value persistence. In Vault integration, pillars are pulled from paths like secret/salt via the ext_pillar configuration, supporting templating for minion-specific paths. Similarly, etcd requires a profile in the master config for host and port details, allowing pillar data to be sourced from keys like /salt/%(minion_id)s.

Best practices emphasize treating pillars as the sole repository for secrets to prevent hardcoding in states or files, with access in SLS files achieved via pillar.item or {{ pillar['key'] }} for secure retrieval. Encryption options, such as PGP via decrypt_pillar, should be enabled for sensitive SLS files, and pillar data refreshed with saltutil.refresh_pillar when external sources update. This separation ensures pillars handle runtime data securely, distinct from the declarative logic in states.
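A pillar sketch tying grain-based targeting and data together; the role name, port, and secret are illustrative placeholders, and real secrets belong in encrypted or external pillars:

```yaml
# /srv/pillar/top.sls -- assign pillar data by grain match
base:
  'role:webserver':
    - match: grain
    - webserver        # loads /srv/pillar/webserver.sls

# /srv/pillar/webserver.sls -- data visible only to matching minions
webserver:
  port: 8080
  db_pass: example-only-secret   # placeholder; use ext_pillar/Vault in practice
```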

Security

Authentication Mechanisms

Salt employs asymmetric public-key cryptography, specifically RSA key pairs, as its primary mechanism for authenticating minions to the master, ensuring that only authorized minions can initiate secure communication. Upon installation, each Salt minion automatically generates an RSA key pair consisting of a private key (minion.pem) and a public key (minion.pub), stored in the minion's PKI directory, typically /etc/salt/pki/minion/. The minion then sends its public key to the Salt master during initial connection attempts, which caches it in the pending keys directory (/etc/salt/pki/master/minions_pre/) until manually or automatically accepted. This establishes the minion's identity for all subsequent interactions, with the master verifying the minion's signature on messages using the accepted public key. Key management is handled through the salt-key command-line tool on the master, allowing administrators to list, accept, reject, or delete keys to control minion access. To list pending, accepted, or rejected keys, administrators use salt-key -l <state>, where <state> can be 'pre' for pending, 'acc' for accepted, or 'rej' for rejected; for example, salt-key -L displays all keys in a formatted table. Acceptance is performed with salt-key -a <minion_id> to sign and move the key to the accepted directory (/etc/salt/pki/master/minions/), enabling the minion to authenticate; bulk acceptance uses salt-key -A. Rejection with salt-key -r <minion_id> or salt-key -R moves pending keys to the rejected directory (/etc/salt/pki/master/minions_rejected/), preventing unauthorized access. Deletion with salt-key -d <minion_id> or salt-key -D removes keys from accepted or rejected states. These operations support glob patterns for managing multiple minions efficiently, and the --include-all option extends actions to non-pending keys. For user-level authentication beyond minion-master interactions, Salt integrates external authentication systems (eAuth) such as PAM or LDAP, allowing the master to delegate identity verification to enterprise directories while maintaining Salt's command authorization. In PAM configuration, the master config file (/etc/salt/master) includes an external_auth section mapping users or groups to permissions, for instance:

```yaml
external_auth:
  pam:
    admin:
      - .*
```

This enables the admin user to execute any Salt command after PAM verifies their credentials; commands are then run with salt -a pam <target> <function>, such as salt -a pam '*' test.ping. LDAP integration requires the python-ldap library and configures connection details like server URI and bind credentials in the master config, followed by an external_auth entry like:

```yaml
external_auth:
  ldap:
    test_user:
      - .*
```

The master authenticates the user against LDAP during login, supporting TLS for secure binds and group-based matching for scalability in large environments.

Minion authentication can be automated through master configuration options to streamline deployment in trusted environments. Setting auto_accept: True in the master's configuration (/etc/salt/master) instructs the master to automatically accept any new minion key upon receipt, bypassing manual salt-key intervention; this is suitable for air-gapped or pre-provisioned setups but increases risk if keys are compromised. For broader openness, open_mode: True accepts all keys regardless of ID conflicts, though it is recommended only for testing. Privileged actions on the master, such as via wheel modules (e.g., wheel.key list), require explicit eAuth permissions prefixed with @wheel, ensuring that only authenticated users can perform administrative tasks like accepting keys programmatically.

Authentication events are tracked for auditing through the master's log files and event system, providing visibility into key exchanges and verification attempts. The master logs authentication successes, failures, and key-related actions at configurable log levels (e.g., info or debug) in /var/log/salt/master, with entries like "Authentication accepted for minion_id" during successful handshakes. Additionally, enabling auth_events: True in the master config fires Salt events (e.g., salt/auth) on the event bus for each authentication check, which occurs approximately every 30 seconds for unaccepted minions; these can be monitored on the event bus (for example, with the state.event runner) or integrated with external tools for real-time auditing.
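An audit-oriented master configuration sketch combining the options discussed above; the values shown are illustrative choices, not defaults to adopt blindly:

```yaml
# /etc/salt/master.d/auth.conf -- illustrative audit-friendly settings
auth_events: True      # fire salt/auth events on the event bus
auto_accept: False     # keep explicit salt-key acceptance (the default)
log_level: info        # record authentication accept/reject messages
```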

Encryption and Access Control

Salt employs AES-256 encryption for securing all communication payloads between the master and minions, ensuring data confidentiality in transit. The master generates a shared AES key, which is encrypted using each minion's public RSA key before distribution; minions decrypt this AES key with their private key and subsequently use it to encrypt and decrypt payloads. This key rotation occurs every 24 hours by default or upon minion deletion, configurable via the master's rotate_aes_key setting. For transport-level security, Salt supports optional TLS encryption on its TCP-based transports, configured through the ssl option in the master and minion files, which wraps connections using Python's ssl.wrap_socket with support for certificate verification and CA files. The default ZeroMQ transport relies on the application-layer AES encryption for payload protection, though TLS can be layered via tunneling if needed; additionally, Salt SSH mode enables secure, minionless execution for setups without persistent agents, leveraging the underlying SSH protocol's encryption.

Access control in Salt is managed through the External Authentication System (eAuth), which integrates with external providers like LDAP or PAM for token-based authentication, allowing fine-grained permissions without root privileges on the master. Users authenticate to generate 12-hour tokens (configurable expiration), enabling role-based access control (RBAC) via the external_auth configuration, where roles define allowable functions, minion targets, and arguments, such as read-only access to test.version for LDAP users. Publisher ACLs further enforce these permissions for local and peer executions, using glob or regex patterns to restrict actions like pkg.list_pkgs on specific minion groups; a sketch follows below.

Peer communication allows minions to publish commands directly to other minions, secured by master-approved ACLs in the peer configuration section, which specify permitted functions and targets to prevent unauthorized actions. For example, a minion might be allowed to run network.interfaces on peers matching web* but denied broader access. Sensitive data in peer-published pillars inherits protections from the broader pillar system, ensuring encryption during transmission.

To meet compliance requirements, Salt supports operation in FIPS mode when the host OS enforces FIPS-compliant cryptography, utilizing approved algorithms like AES-256 while avoiding non-compliant libraries through configuration adjustments such as hash_type: sha512. Audit trails for security events are provided via Salt's event bus and logging system, which records authentication attempts, key rotations, and access denials in configurable log files, integrable with external monitoring tools for traceability.
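Publisher ACLs of the kind described above are configured on the master. A sketch with illustrative user and target names:

```yaml
# /etc/salt/master -- restrict the 'deploy' system user to read-only queries
publisher_acl:
  deploy:
    - 'web*':             # may target only web minions...
      - pkg.list_pkgs     # ...with package and service queries
      - service.status
```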

Ecosystem and Extensions

Modules and Plugins

Salt's extensibility relies on a modular architecture that enables users to develop custom execution modules, state modules, and returners to tailor the software's functionality to specific environments. These components are implemented as Python modules and integrate seamlessly with Salt's core systems, allowing for the addition of new commands, state enforcements, and data storage mechanisms without modifying the base codebase.

In addition to core and custom modules, the Salt Project maintains an official Salt Extensions framework, introduced to modularize and enhance extensibility. As of 2024, over 750 modules previously bundled in the core Salt repository were migrated to separate extension repositories under the salt-extensions organization, allowing for independent development and release cycles. Notable extensions include saltext-kubernetes for managing Kubernetes resources, which has absorbed functionality from the deprecated core k8s module planned for removal in version 3009. Users install extensions via pip (e.g., pip install saltext-kubernetes) and enable them in Salt configurations. Contributions to extensions follow a similar pull request process on their respective repositories, promoting focused community involvement.

Execution modules form the foundation for custom remote execution capabilities, consisting of Python files placed in the _modules/ directory at the root of the Salt file server, such as /srv/salt/_modules/. These modules define functions that return Python dictionaries containing execution results, enabling them to be invoked via Salt's remote execution interface. For instance, a basic execution module might include a function like def ping(): return True, which signals the minion's availability when called. To load these modules onto minions, administrators use the saltutil.sync_modules function, which synchronizes custom code from the master to the minions. Testing of execution modules can be performed locally on a minion using the salt-call command, such as salt-call mymodule.ping, to verify functionality before deployment.

State modules extend Salt's declarative configuration management by allowing the creation of custom states that enforce specific system configurations. These modules are similarly placed in a _states/ directory and map YAML-based state declarations to underlying execution logic, often leveraging the __salt__ dictionary to call execution modules for implementation. A custom state module might define a new state like myapp.installed, which ensures a specific application is present and configured correctly on targeted systems. State modules must return a standardized dictionary with keys such as name, changes, result, and comment to indicate enforcement outcomes, supporting features like test mode for dry runs. Synchronization occurs via saltutil.sync_states, enabling seamless integration into Salt's state application process.

Returners serve as plugins that redirect job results from Salt executions to external storage or notification systems, facilitating data persistence and analysis beyond the default master storage. These modules are configured on minions and invoked using the --return option in Salt commands, such as salt '*' test.ping --return mongo_return. Examples include the MongoDB returner, which stores serialized job data in a database for querying, and the SMTP returner, which emails results for alerting purposes. Multiple returners can be specified simultaneously, such as combining a cache store and a database for caching and logging.
The development process for these modules emphasizes the use of Salt's loader system, which provides contextual dictionaries like __opts__ for configuration access and __salt__ for inter-module calls, ensuring modules operate within Salt's execution environment. Developers can enable interactive debugging by embedding IPython kernels in module code for real-time inspection during testing. Custom modules are versioned alongside Salt releases, with changes targeted to specific branches (bug fixes to the oldest supported branch and new features to the master branch), following the keepachangelog format for changelogs.

Community contributions play a vital role in expanding Salt's module ecosystem, primarily through the official repository at https://github.com/saltstack/salt, where users submit pull requests for new modules, enhancements, or fixes. These contributions undergo review to ensure compatibility with Salt's architecture and are integrated into official releases, maintaining versioning alignment with the core software. The process encourages forking the repository and using the fork-and-branch workflow, with guidance available in the contributor documentation. Contributions to Salt Extensions occur via their dedicated repositories.

Integrations and Cloud Support

Salt Cloud is a component of Salt that enables the provisioning and management of virtual machines across various providers, integrating seamlessly with the Salt master's capabilities. It supports drivers for major platforms including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP, via libcloud). For instance, provisioning an instance on AWS EC2 can be achieved with the command salt-cloud -p ubuntu ec2, where "ubuntu" refers to a predefined profile specifying image, size, and other parameters. Similarly, Azure instances can be created using salt-cloud -p azure-ubuntu, and GCP instances with salt-cloud -p my-gce-profile gce-instance, requiring appropriate service account credentials and project configurations.

Salt provides a RESTful API through the rest_cherrypy netapi module, which leverages the CherryPy WSGI server to expose Salt functionality over HTTP, allowing external tools to interact with the Salt master for remote execution and orchestration. This API supports authentication mechanisms like external auth and shared secrets, enabling secure integration with third-party systems. Additionally, Salt offers CLI wrappers and modules for compatibility with infrastructure-as-code tools such as Terraform and Ansible; for example, the terraform roster module dynamically generates target lists from Terraform resources using the Terraform Salt provider, while the ansiblegate module discovers and executes Ansible playbooks directly from Salt states.

In monitoring integrations, Salt's beacon system periodically checks minion states and emits events that can trigger reactors, while returners store job results in external systems. Returners support sending data to Elasticsearch for indexing and querying Salt job outputs, configurable via minion settings to specify the Elasticsearch host and index. For Nagios compatibility, the nagios execution module allows running plugins from Salt jobs and retrieving return codes and outputs, facilitating hybrid monitoring setups where Salt executes checks on minions. Although direct beacon support for Prometheus is handled through community extensions, Salt's event bus can export metrics compatible with Prometheus scraping via custom engines.

Container support in Salt includes dedicated modules for Docker and Kubernetes, enabling management of containerized environments as part of Salt states. The dockermod execution module handles Docker operations such as creating, starting, and stopping containers, with states like docker_container ensuring desired container configurations. For Kubernetes, the k8s module provides functions to manage resources like pods, services, and namespaces, while the kubernetes state module declares these as idempotent states; note that some Kubernetes functionality has migrated to the official Salt Kubernetes extension for enhanced scalability.

For hybrid and multi-cloud environments, Salt Cloud uses map files to define bulk provisioning across providers, ensuring consistent configurations via profile overrides and dependency mappings. A map file, typically in YAML format under /etc/salt/cloud.maps.d/, lists VM names tied to profiles from different providers (e.g., AWS and Azure), allowing parallel creation with salt-cloud -m mapfile and support for inter-VM dependencies through the requires key, thus maintaining uniformity in diverse setups.
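A cloud map file sketch for the multi-provider bulk provisioning described above; the profile and VM names are illustrative and must match profiles defined in cloud.profiles.d:

```yaml
# /etc/salt/cloud.maps.d/hybrid.map -- illustrative multi-cloud map
ec2-ubuntu:            # AWS profile
  - web1
  - web2
azure-ubuntu:          # Azure profile
  - db1:
      minion:
        log_level: info
      grains:
        role: db       # per-VM grain override applied at provisioning time
```

Applying the map with salt-cloud -m /etc/salt/cloud.maps.d/hybrid.map -P creates the instances in parallel.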
