TrueNAS SCALE Notes⚓︎
This page captures practical evaluation notes for TrueNAS SCALE, including platform choice, ZFS pool design, monitoring considerations, and operational checks.
Why Consider SCALE⚓︎
TrueNAS CORE has historically been regarded as mature and stable, but SCALE has become the more actively developed platform for new features and ongoing product direction.
In practice, this means:
- new feature development is centered on SCALE
- some fixes may land in SCALE and not be backported to CORE
- long-term planning should account for the vendor’s focus on SCALE
ZFS Pool Layout Basics⚓︎
When evaluating a pool layout, it helps to compare these six metrics:
- Read IOPS
- Write IOPS
- Streaming read throughput
- Streaming write throughput
- Usable capacity efficiency
- Fault tolerance
Pool Layout Summary⚓︎
Striped vdev⚓︎
For an N-wide striped vdev:
- Read IOPS: N * read IOPS of one drive
- Write IOPS: N * write IOPS of one drive
- Streaming read: N * single-drive read throughput
- Streaming write: N * single-drive write throughput
- Space efficiency: 100%
- Fault tolerance: none
Mirrored vdev⚓︎
For an N-way mirror:
- Read IOPS: N * read IOPS of one drive
- Write IOPS: approximately single-drive write IOPS
- Streaming read: N * single-drive read throughput
- Streaming write: approximately single-drive write throughput
- Space efficiency: 1 / N
- Fault tolerance: N - 1 disks
RAIDZ vdev⚓︎
For an N-wide RAIDZ vdev with parity level p:
- Read IOPS: approximately single-drive read IOPS
- Write IOPS: approximately single-drive write IOPS
- Streaming read: (N - p) * single-drive read throughput
- Streaming write: (N - p) * single-drive write throughput
- Space efficiency: (N - p) / N
- Fault tolerance: p disks (RAIDZ1 = 1, RAIDZ2 = 2, RAIDZ3 = 3)
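The rules of thumb above can be sketched as a small model. This is a first-order estimate only: it ignores caching, recordsize, compression, and controller effects, and the default single-drive figures (150 IOPS, 200 MB/s) are illustrative assumptions, not measurements.

```python
# Rough per-vdev performance/capacity model from the rules of thumb above.
# drive_iops and drive_mbps are assumed single-drive figures.

def vdev_metrics(layout, n, parity=0, drive_iops=150, drive_mbps=200):
    """Return (read_iops, write_iops, read_mbps, write_mbps, space_eff)."""
    if layout == "stripe":
        # Everything scales with width; no redundancy.
        return (n * drive_iops, n * drive_iops,
                n * drive_mbps, n * drive_mbps, 1.0)
    if layout == "mirror":
        # Reads scale with width; writes hit every member.
        return (n * drive_iops, drive_iops,
                n * drive_mbps, drive_mbps, 1 / n)
    if layout == "raidz":
        # IOPS behave roughly like a single drive; streaming scales
        # with the data disks (width minus parity).
        return (drive_iops, drive_iops,
                (n - parity) * drive_mbps, (n - parity) * drive_mbps,
                (n - parity) / n)
    raise ValueError(f"unknown layout: {layout}")

# 6-wide RAIDZ2: ~67% space efficiency, streaming scales with 4 data disks.
print(vdev_metrics("raidz", 6, parity=2))
```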
Design Considerations⚓︎
For backup and file share workloads with many files, the tradeoff is usually between:
- higher usable capacity
- better random I/O behavior
- rebuild and resilver risk
- fault tolerance during disk failures
A balanced design often favors:
- RAIDZ2 for a backup-oriented pool where resilience matters
- mirrors where high IOPS matters more than space efficiency
Example Deployment Pattern⚓︎
A generalized backup-oriented layout might look like:
- 2 x 6-wide RAIDZ2 vdevs
- 2 spare disks
- SSD or NVMe devices for boot and metadata-adjacent roles where appropriate
- one high-speed data network
- one management interface
This kind of design usually aims to provide:
- reasonable capacity efficiency
- tolerance for multiple disk failures per vdev
- acceptable backup and file-share performance
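For capacity planning, the example layout's usable space follows directly from the RAIDZ efficiency formula. The 10 TB drive size below is a hypothetical placeholder, and the result ignores ZFS overhead (metadata, slop space, padding), so real usable space will be somewhat lower.

```python
# Usable-capacity estimate for the example layout above:
# 2 x 6-wide RAIDZ2 vdevs, assuming hypothetical 10 TB drives.
drive_tb = 10
vdevs, width, parity = 2, 6, 2

raw_tb = vdevs * width * drive_tb               # all disks
usable_tb = vdevs * (width - parity) * drive_tb  # data disks only
print(raw_tb, usable_tb, usable_tb / raw_tb)
```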
ZFS Notes⚓︎
ZIL / SLOG⚓︎
- the ZIL handles synchronous write intent
- SLOG devices mainly benefit sync-write-heavy workloads
- SLOG design should be planned carefully because a bad design decision can affect pool behavior and reliability
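As a concrete illustration of the last point, a mirrored SLOG avoids a single log device being a weak spot for in-flight sync writes. The pool and device names below are placeholders, and vdev names for removal should be taken from `zpool status` output:

```shell
# Hypothetical example: attach a mirrored SLOG so one device failure
# does not lose in-flight sync writes. <pool>, <ssd1>, <ssd2> are placeholders.
zpool add <pool> log mirror <ssd1> <ssd2>

# Log vdevs can later be removed by the vdev name shown in `zpool status`.
zpool remove <pool> <log-vdev-name>
```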
ARC⚓︎
ARC is the main in-memory ZFS cache. It tracks both frequently used data and recently evicted blocks to improve read efficiency.
Hardware Planning⚓︎
Instead of documenting an exact serial-number-level inventory in a shared note, it is often better to track hardware by category:
- CPU class
- memory size
- boot devices
- data devices
- HBA model
- network interfaces
- power design
For public or shared documentation, keep exact management addresses and device identifiers out of the page.
Disk Failure and Hardware Operations⚓︎
When handling disk failures, enclosure tools such as sesutil or the Linux equivalent can help identify and locate failed drives, depending on the operating system and hardware stack in use.
Monitoring⚓︎
Monitoring should cover:
- pool health
- disk faults
- capacity trends
- network state
- replication and backup tasks
- service health
One common approach is:
- enable SNMP where appropriate
- integrate with Zabbix, Prometheus, or another monitoring stack
- verify that templates or OIDs match the TrueNAS version in use
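A quick way to check the last point is to walk the agent before wiring up templates. The host and community string below are placeholders, and SNMP v2c is only an assumption; walking the standard system subtree first avoids guessing at version-specific TrueNAS OIDs:

```shell
# Sanity-check that the TrueNAS SNMP service answers at all.
# <host> and <community> are placeholders.
snmpwalk -v2c -c <community> <host> 1.3.6.1.2.1.1
```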
Monitoring References⚓︎
General monitoring and dashboard references:
- Node Exporter Full Grafana dashboard
- TrueNAS Graphite Flux Grafana dashboard
- Grafana + Prometheus getting started
Community notes and discussions:
- How to expose data for Prometheus
- Metrics from TrueNAS SCALE into Grafana
- SNMP OID changes in TrueNAS SCALE
- Free disk space from TrueNAS Graphite discussion
Video references:
- TrueNAS monitoring walkthrough 1
- TrueNAS monitoring walkthrough 2
- TrueNAS monitoring walkthrough 3
- Prometheus / Grafana setup reference
- Zabbix SNMPv3 monitoring reference
Platform Notes⚓︎
Firewall management⚓︎
SCALE does not follow the same firewall management expectations as a traditional hand-managed Linux host. In many cases, direct local firewall customization is either restricted, discouraged, or expected to be handled through the platform’s own management model.
Package management⚓︎
APT is typically disabled by default on TrueNAS SCALE because the platform is intended to be managed through the appliance workflow and UI. Enabling unmanaged package changes can introduce drift or break supported behavior.
Useful Services and Logs⚓︎
Services⚓︎
- `ix-netif.service` - network setup
- `networking.service` - network interface activation
- `middlewared` - the backend service layer used by the UI and API
Useful paths⚓︎
Example Commands⚓︎
Show pool health⚓︎
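A typical invocation; `-x` limits output to pools with problems:

```shell
# Full status of all pools, including vdev and scrub/resilver state.
zpool status

# Print only pools that are not healthy (quiet when all is well).
zpool status -x
```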
Show pool I/O statistics⚓︎
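For example:

```shell
# Per-vdev I/O statistics, refreshed every 5 seconds (Ctrl-C to stop).
zpool iostat -v 5
```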
List snapshots⚓︎
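For example:

```shell
# All snapshots with space used, sorted by creation time.
zfs list -t snapshot -o name,used,creation -s creation
```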
Check ashift⚓︎
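On current OpenZFS, ashift is exposed as a pool property; the pool name is a placeholder. Note that ashift is fixed per vdev at creation time (12, i.e. 4K sectors, is typical for modern disks):

```shell
zpool get ashift <pool>
```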
Persistent VLAN Example⚓︎
If a deployment needs a persistent VLAN configuration, document it with placeholders rather than production addresses.
Example:

```text
auto <interface>
iface <interface> inet manual
    mtu 9000

auto <interface>.<vlan-id>
iface <interface>.<vlan-id> inet static
    address <ip-address>
    netmask <subnet-mask>
    mtu 9000
```
Suggested Evaluation Checklist⚓︎
- confirm whether SCALE is the right long-term platform for the workload
- validate pool layout against actual IOPS and resiliency requirements
- test disk replacement workflow
- validate monitoring coverage before production use
- confirm backup and restore procedures
- avoid unmanaged package drift
- document network and service dependencies clearly
References⚓︎
- TrueNAS SCALE download
- TrueNAS CORE download
- ZFS capacity calculator
- ZIL / SLOG reference
- ZFS storage pool layout white paper
- Intro to ZFS
- Uncle Fester's TrueNAS beginner's guide