TrueNAS SCALE Notes⚓︎

This page captures practical evaluation notes for TrueNAS SCALE, including platform choice, ZFS pool design, monitoring considerations, and operational checks.

Why Consider SCALE⚓︎

TrueNAS CORE has historically been regarded as mature and stable, but SCALE has become the more actively developed platform for new features and ongoing product direction.

In practice, this means:

  • new feature development is centered on SCALE
  • some fixes may land in SCALE and not be backported to CORE
  • long-term planning should account for the vendor’s focus on SCALE

ZFS Pool Layout Basics⚓︎

When evaluating a pool layout, it helps to compare these six metrics:

  1. Read IOPS
  2. Write IOPS
  3. Streaming read throughput
  4. Streaming write throughput
  5. Usable capacity efficiency
  6. Fault tolerance

Pool Layout Summary⚓︎

Striped vdev⚓︎

For an N-wide striped vdev:

  1. Read IOPS: N * read IOPS of one drive
  2. Write IOPS: N * write IOPS of one drive
  3. Streaming read: N * single-drive read throughput
  4. Streaming write: N * single-drive write throughput
  5. Space efficiency: 100%
  6. Fault tolerance: none

Mirrored vdev⚓︎

For an N-way mirror:

  1. Read IOPS: N * read IOPS of one drive
  2. Write IOPS: approximately single-drive write IOPS
  3. Streaming read: N * single-drive read throughput
  4. Streaming write: approximately single-drive write throughput
  5. Space efficiency: 1/N (50% for a 2-way mirror)
  6. Fault tolerance: N - 1 drives per vdev

RAIDZ vdev⚓︎

For an N-wide RAIDZ vdev with parity level p:

  1. Read IOPS: approximately single-drive read IOPS
  2. Write IOPS: approximately single-drive write IOPS
  3. Streaming read: (N - p) * single-drive read throughput
  4. Streaming write: (N - p) * single-drive write throughput
  5. Space efficiency: (N - p) / N
  6. Fault tolerance depends on parity level: RAIDZ1 = 1 disk, RAIDZ2 = 2 disks, RAIDZ3 = 3 disks
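
The formulas above can be checked with a quick worked example. The per-drive figures below are illustrative assumptions, not measurements from any specific hardware:

```shell
# Worked example for a 6-wide RAIDZ2 (N=6, p=2).
# DRIVE_MBPS is an assumed single-drive streaming throughput in MB/s.
N=6; p=2
DRIVE_MBPS=200

echo "streaming read:  $(( (N - p) * DRIVE_MBPS )) MB/s"   # (N-p) * 200 = 800
echo "streaming write: $(( (N - p) * DRIVE_MBPS )) MB/s"
# Space efficiency (N-p)/N as a percentage, integer math:
echo "space efficiency: $(( (N - p) * 100 / N ))%"          # 66%
```

With the same drives, a 6-wide RAIDZ1 would trade one disk of fault tolerance for roughly one extra disk of throughput and capacity.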

Design Considerations⚓︎

For backup and file share workloads with many files, the tradeoff is usually between:

  • higher usable capacity
  • better random I/O behavior
  • rebuild and resilver risk
  • fault tolerance during disk failures

A balanced design often favors:

  • RAIDZ2 for a backup-oriented pool where resilience matters
  • mirrors where high IOPS matters more than space efficiency

Example Deployment Pattern⚓︎

A generalized backup-oriented layout might look like:

  • 2 x 6-wide RAIDZ2 vdevs
  • 2 spare disks
  • SSD or NVMe devices for boot and metadata-adjacent roles where appropriate
  • one high-speed data network
  • one management interface

This kind of design usually aims to provide:

  • reasonable capacity efficiency
  • tolerance for multiple disk failures per vdev
  • acceptable backup and file-share performance
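
The capacity side of this layout is easy to estimate. Assuming (purely for illustration) 12 TB drives, the raw-versus-usable split works out as:

```shell
# Rough capacity estimate for 2 x 6-wide RAIDZ2 plus 2 spares.
# DRIVE_TB is an assumed drive size; substitute the real hardware's value.
VDEVS=2; WIDTH=6; PARITY=2; SPARES=2
DRIVE_TB=12

RAW=$(( (VDEVS * WIDTH + SPARES) * DRIVE_TB ))
USABLE=$(( VDEVS * (WIDTH - PARITY) * DRIVE_TB ))
echo "raw: ${RAW} TB, usable before ZFS overhead: ${USABLE} TB"   # 168 TB raw, 96 TB usable
```

Real usable space will be lower once ZFS metadata, padding, and the recommended free-space headroom are accounted for.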

ZFS Notes⚓︎

ZIL / SLOG⚓︎

  • the ZIL records synchronous write intent so sync writes can be acknowledged before the data is committed in a normal transaction group
  • SLOG devices move the ZIL onto dedicated fast storage and mainly benefit sync-write-heavy workloads (NFS, databases, VM storage)
  • SLOG design should be planned carefully: an unsuitable or unmirrored device can affect pool behavior and, in failure scenarios, the durability of in-flight sync writes
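
A minimal sketch of how this is usually inspected, assuming a pool named `tank` and a dataset `tank/backups` (both hypothetical names):

```shell
# Check whether a dataset forces, honors, or ignores sync writes
# (values: standard, always, disabled).
zfs get sync tank/backups

# A log vdev, if present, appears under a "logs" section in pool status.
zpool status tank

# Adding a mirrored SLOG (device names are placeholders); mirroring is
# common because losing an unmirrored SLOG can lose in-flight sync writes.
zpool add tank log mirror <dev1> <dev2>
```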

ARC⚓︎

ARC is the main in-memory ZFS read cache. It balances recently used and frequently used data (the MRU and MFU lists) and keeps "ghost" lists of recently evicted blocks so it can tune that balance over time.
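
On SCALE (Linux), ARC state can be inspected from the OpenZFS kstat interface; a quick sketch, assuming shell access to the host:

```shell
# Current ARC size, target size, and maximum, in bytes.
awk '$1 == "size" || $1 == "c" || $1 == "c_max" { print $1, $3 }' \
    /proc/spl/kstat/zfs/arcstats

# arc_summary (shipped with OpenZFS) produces a friendlier report.
arc_summary | head -n 20
```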

Hardware Planning⚓︎

Instead of documenting an exact per-device inventory (serial numbers, asset tags) in a shared note, it is often better to track hardware by category:

  • CPU class
  • memory size
  • boot devices
  • data devices
  • HBA model
  • network interfaces
  • power design

For public or shared documentation, keep exact management addresses and device identifiers out of the page.

Disk Failure and Hardware Operations⚓︎

When handling disk failures, enclosure tools can help identify and locate failed drives: sesutil on FreeBSD-based systems, or Linux equivalents such as sg_ses and ledctl, depending on the operating system and hardware stack in use.

Monitoring⚓︎

Monitoring should cover:

  • pool health
  • disk faults
  • capacity trends
  • network state
  • replication and backup tasks
  • service health

One common approach is:

  • enable SNMP where appropriate
  • integrate with Zabbix, Prometheus, or another monitoring stack
  • verify that templates or OIDs match the TrueNAS version in use
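
Before wiring up templates, it is worth confirming the host answers SNMP at all; a sketch, where the host and community string are placeholders:

```shell
# Basic reachability check against the TrueNAS SNMP service.
snmpwalk -v2c -c public <truenas-host> SNMPv2-MIB::sysDescr.0
```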

Platform Notes⚓︎

Firewall management⚓︎

SCALE does not follow the same firewall management expectations as a traditional hand-managed Linux host. In many cases, direct local firewall customization is either restricted, discouraged, or expected to be handled through the platform’s own management model.

Package management⚓︎

APT is typically disabled by default on TrueNAS SCALE because the platform is intended to be managed through the appliance workflow and UI. Enabling unmanaged package changes can introduce drift or break supported behavior.

Useful Services and Logs⚓︎

Services⚓︎

  • ix-netif.service - network setup
  • networking.service - network interface activation
  • middlewared - the backend service layer used by the UI and API

Useful paths⚓︎

Text Only
/var/log/middlewared.log
/var/run/middleware
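
When debugging UI or API behavior, the middleware can be observed and queried directly; a sketch, assuming shell access to the SCALE host:

```shell
# Follow middleware activity while reproducing an issue.
tail -f /var/log/middlewared.log

# Query the middleware directly; midclt ships with SCALE.
midclt call system.info
```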

Example Commands⚓︎

Show pool health⚓︎

Bash
zpool status -v

Show pool I/O statistics⚓︎

Bash
zpool iostat 1

List snapshots⚓︎

Bash
zfs list -t snapshot

Check ashift⚓︎

Bash
zdb -C <pool-name> | grep ashift
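
The reported ashift is a power-of-two exponent: the pool's sector size is 2^ashift bytes, so ashift=12 means 4K sectors. A quick sanity check:

```shell
# ashift is log2 of the sector size: 9 -> 512 B, 12 -> 4096 B, 13 -> 8192 B.
for ashift in 9 12 13; do
  echo "ashift=${ashift} -> $(( 1 << ashift )) bytes"
done
```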

Persistent VLAN Example⚓︎

If a deployment needs a persistent VLAN configuration, document it with placeholders rather than production addresses.

Example:

INI
auto <interface>
iface <interface> inet manual
    mtu 9000

auto <interface>.<vlan-id>
iface <interface>.<vlan-id> inet static
    address <ip-address>
    netmask <subnet-mask>
    mtu 9000

Suggested Evaluation Checklist⚓︎

  • confirm whether SCALE is the right long-term platform for the workload
  • validate pool layout against actual IOPS and resiliency requirements
  • test disk replacement workflow
  • validate monitoring coverage before production use
  • confirm backup and restore procedures
  • avoid unmanaged package drift
  • document network and service dependencies clearly
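
The disk replacement workflow in particular is worth rehearsing before it is needed in production. A hedged sketch of the usual command sequence, with pool and device names as placeholders:

```shell
# Identify the faulted device, then replace it.
zpool status -v <pool-name>
zpool offline <pool-name> <old-disk>
zpool replace <pool-name> <old-disk> <new-disk>

# Watch resilver progress until it completes.
zpool status <pool-name>
```

On SCALE deployments, the equivalent steps are normally driven through the web UI so the middleware stays aware of the change.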
