SPL(4) FreeBSD Kernel Interfaces Manual SPL(4)
NAME
spl - parameters of the SPL kernel module
DESCRIPTION
spl_kmem_cache_kmem_threads=4 (uint)
The number of threads created for the spl_kmem_cache task queue.
This task queue is responsible for allocating new slabs for use
by the kmem caches. For the majority of systems and workloads
only a small number of threads are required.
spl_kmem_cache_reclaim=0 (uint)
When this is set it prevents Linux from being able to rapidly
reclaim all the memory held by the kmem caches. This may be
useful in circumstances where it's preferable that Linux reclaim
memory from some other subsystem first. Setting this will
increase the likelihood of out-of-memory events on a
memory-constrained system.
spl_kmem_cache_obj_per_slab=8 (uint)
The preferred number of objects per slab in the cache. In
general, a larger value will increase the cache's memory
footprint while decreasing the time required to perform an
allocation.
Conversely, a smaller value will minimize the footprint and
improve cache reclaim time but individual allocations may take
longer.
spl_kmem_cache_max_size=32 (64-bit) or 4 (32-bit) (uint)
The maximum size of a kmem cache slab in MiB. This effectively
limits the maximum cache object size to
spl_kmem_cache_max_size/spl_kmem_cache_obj_per_slab.
Caches may not be created with an object size larger than this
limit.
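As a quick worked check of the relationship above, the 64-bit defaults cap individual cache objects at spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab:

```python
# Worked check of the limit above: the 64-bit defaults cap individual
# cache objects at spl_kmem_cache_max_size / spl_kmem_cache_obj_per_slab.
spl_kmem_cache_max_size = 32 * 1024 * 1024   # 32 MiB default (64-bit)
spl_kmem_cache_obj_per_slab = 8              # default objects per slab

max_object_size = spl_kmem_cache_max_size // spl_kmem_cache_obj_per_slab
print(max_object_size)                       # maximum object size in bytes
```

With the defaults this works out to 4 MiB per object.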
spl_kmem_cache_slab_limit=16384 (uint)
For small objects the Linux slab allocator should be used to make
the most efficient use of the memory. However, large objects are
not supported by the Linux slab and therefore the SPL
implementation is preferred. This value is used to determine the
cutoff between a small and large object.
Objects of size spl_kmem_cache_slab_limit or smaller will be
allocated using the Linux slab allocator, while larger objects
will use the SPL allocator. A cutoff of 16K was determined to be
optimal for architectures using 4K pages.
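On Linux, these module parameters can be set persistently with a modprobe configuration file; for example, to change the small/large object cutoff at module load time (the value shown is illustrative, not a recommendation):

```
# /etc/modprobe.d/spl.conf
options spl spl_kmem_cache_slab_limit=32768
```

The same parameters are typically also visible at runtime under /sys/module/spl/parameters/.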
spl_kmem_alloc_warn=32768 (uint)
As a general rule kmem_alloc() allocations should be small,
preferably just a few pages, since they must be physically
contiguous. Therefore, a rate limited warning will be printed to
the console for any kmem_alloc() which exceeds a reasonable
threshold.
The default warning threshold is set to eight pages but capped at
32K to accommodate systems using large pages. This value was
selected to be small enough to ensure the largest allocations are
quickly noticed and fixed, but large enough to avoid logging
warnings for allocations that are merely larger than optimal.
spl_kmem_alloc_max=KMALLOC_MAX_SIZE/4 (uint)
Large kmem_alloc() allocations will fail if they exceed
KMALLOC_MAX_SIZE. Allocations which are marginally smaller than
this limit may succeed but should still be avoided due to the
expense of locating a contiguous range of free pages. Therefore,
a maximum kmem size with a reasonable safety margin of 4x is set.
kmem_alloc() allocations larger than this maximum will quickly
fail. vmem_alloc() allocations less than or equal to this value
will use kmalloc(), but shift to vmalloc() when exceeding this
value.
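The size-based allocator selection described above can be sketched as follows. This is illustrative Python, not kernel code; the KMALLOC_MAX_SIZE value and the variable names are assumed examples (the real constant is architecture-dependent):

```python
# Illustrative sketch, not kernel code: how vmem_alloc() picks a backing
# allocator per the description above. KMALLOC_MAX_SIZE is an assumed
# example value; the real constant is architecture-dependent.
KMALLOC_MAX_SIZE = 4 * 1024 * 1024          # assumed example, in bytes
max_kmem_size = KMALLOC_MAX_SIZE // 4       # the 4x safety margin

def backing_allocator(size):
    """Return the allocator a vmem_alloc() of `size` bytes would use."""
    if size <= max_kmem_size:
        return "kmalloc"    # physically contiguous pages, fast path
    return "vmalloc"        # virtually contiguous, tolerates fragmentation

print(backing_allocator(64 * 1024))         # small request
print(backing_allocator(8 * 1024 * 1024))   # large request
```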
spl_kmem_cache_magazine_size=0 (uint)
Cache magazines are an optimization designed to minimize the cost
of allocating memory. They do this by keeping a per-cpu cache of
recently freed objects, which can then be reallocated without
taking a lock. This can improve performance on highly contended
caches. However, because objects in magazines will prevent
otherwise empty slabs from being immediately released this may
not be ideal for low memory machines.
For this reason, spl_kmem_cache_magazine_size can be used to set
a maximum magazine size. When this value is set to 0 the
magazine size will be automatically determined based on the
object size. Otherwise magazines will be limited to 2-256
objects per magazine (i.e., per CPU). Magazines may never be
entirely disabled in this implementation.
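The magazine mechanism described above can be sketched as a bounded per-CPU free list. This is an illustrative model, not the SPL implementation:

```python
# Illustrative model, not the SPL implementation: a magazine keeps a
# small, bounded per-CPU stack of recently freed objects so a free/alloc
# pair can be served without taking the global cache lock.
class Magazine:
    def __init__(self, capacity):
        self.capacity = capacity     # bounded, e.g. 2-256 objects per CPU
        self.objects = []

    def free(self, obj):
        """Cache a freed object; False means fall back to the slab layer."""
        if len(self.objects) < self.capacity:
            self.objects.append(obj)
            return True
        return False                 # magazine full

    def alloc(self):
        """Pop a cached object, or None to fall back to the slab layer."""
        return self.objects.pop() if self.objects else None

m = Magazine(capacity=2)
m.free("a")
m.free("b")
print(m.free("c"))   # magazine is full, "c" would go back to a slab
print(m.alloc())     # LIFO reuse of the most recently freed object
```

Because cached objects pin their slabs in memory, a smaller capacity trades allocation speed for faster reclaim, which is the low-memory trade-off the parameter exposes.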
spl_hostid=0 (ulong)
The system hostid. When set, it can be used to uniquely identify
a system. By default this value is set to zero, which indicates
the hostid is disabled. It can be explicitly enabled by placing
a unique non-zero value in /etc/hostid.
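The /etc/hostid file is normally created with the zgenhostid(8) utility shipped with OpenZFS. As a sketch of the file layout, assuming the 4-byte little-endian format that zgenhostid writes, a hostid file could be produced like this (the value and path below are illustrative; do not write /etc/hostid this way on a live system):

```python
import os
import struct
import tempfile

# Sketch of the assumed hostid file layout: one 4-byte little-endian
# value. Value and path are illustrative; prefer zgenhostid(8) and the
# real /etc/hostid on an actual system.
hostid = 0x00BAB10C
path = os.path.join(tempfile.gettempdir(), "hostid.example")
with open(path, "wb") as f:
    f.write(struct.pack("<I", hostid))

with open(path, "rb") as f:
    data = f.read()
print(data.hex())
```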
spl_hostid_path=/etc/hostid (charp)
The expected path to locate the system hostid when specified.
This value may be overridden for non-standard configurations.
spl_panic_halt=0 (uint)
Cause a kernel panic on assertion failures. When not enabled,
the asserting thread is halted to facilitate further debugging.
Set to a non-zero value to enable.
spl_taskq_kick=0 (uint)
Kick stuck taskq to spawn threads. When writing a non-zero value
to it, it will scan all the taskqs. If any of them have a
pending task more than 5 seconds old, it will kick it to spawn
more threads. This can be used if you find a rare deadlock
occurs because one or more taskqs didn't spawn a thread when
they should have.
spl_taskq_thread_bind=0 (int)
Bind taskq threads to specific CPUs. When enabled all taskq
threads will be distributed evenly across the available CPUs. By
default, this behavior is disabled to allow the Linux scheduler
the maximum flexibility to determine where a thread should run.
spl_taskq_thread_dynamic=1 (int)
Allow dynamic taskqs. When enabled taskqs which set the
TASKQ_DYNAMIC flag will by default create only a single thread.
spl_taskq_thread_priority=1 (int)
Allow newly created taskq threads to set a non-default scheduler
priority. When enabled, the priority specified when a taskq is
created will be applied to all threads created by that taskq.
When disabled all threads will use the default Linux kernel
thread priority. By default, this behavior is enabled.
spl_taskq_thread_sequential=4 (int)
The number of items a taskq worker thread must handle without
interruption before requesting a new worker thread be spawned.
This is used to control how quickly taskqs ramp up the number of
threads processing the queue. Because Linux thread creation and
destruction are relatively inexpensive a small default value has
been selected. This means that normally threads will be created
aggressively which is desirable. Increasing this value will
result in a slower thread creation rate which may be preferable
for some configurations.
spl_max_show_tasks=512 (uint)
The maximum number of tasks per pending list in each taskq shown
in /proc/spl/taskq{,-all}. Write 0 to turn off the limit. The
proc file walks the lists with a lock held, so reading it could
cause a lock-up if a list grows too large without limiting the
output. "(truncated)" will be shown if the list is larger than
the limit.
spl_taskq_thread_timeout_ms=10000 (uint)
(Linux-only) How long a taskq has to have had no work before we
tear it down. Previously, we would tear down a dynamic taskq
worker as soon as we noticed it had no work, but it was observed
that this led to a lot of churn in tearing down things we then
immediately spawned anew. In practice, it seems any nonzero
value will remove the vast majority of this churn, while the
nontrivially larger value was chosen to help filter out the
little remaining churn on a mostly idle system. Setting this
value to 0 will revert to the previous behavior.
FreeBSD 14.0-RELEASE-p11 August 24, 2020 FreeBSD 14.0-RELEASE-p11