{"id":4986,"date":"2016-07-14T14:10:11","date_gmt":"2016-07-14T11:10:11","guid":{"rendered":"http:\/\/skeletor.org.ua\/?p=4986"},"modified":"2023-07-12T08:33:19","modified_gmt":"2023-07-12T05:33:19","slug":"solaris-zfs_params","status":"publish","type":"post","link":"https:\/\/skeletor.org.ua\/?p=4986","title":{"rendered":"[Solaris] zfs_params"},"content":{"rendered":"<p>Below are the parameters used for tuning ZFS.<\/p>\n<p><!--more--><\/p>\n<p><span style=\"color: #ff0000;\"><strong>arc_reduce_dnlc_percent<\/strong><\/span><\/p>\n<p>If the ARC detects low memory (via arc_reclaim_needed()), then we call arc_kmem_reap_now() and subsequently dnlc_reduce_cache() &#8211; which reduces the number of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT).<\/p>\n<p>So yeah, dnlc_nentries would be really interesting to see (especially if it&#8217;s &lt;&lt; ncsize).<br \/>\nThe version of statit that we&#8217;re using is still attached to ancient 32-bit counters that \/are\/ overflowing on our runs. 
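As a sanity check, that 3% reduction can be turned into a concrete entry count; a minimal sketch with a made-up ncsize value (not read from a live kernel):

```shell
# Rough sketch: DNLC entries dropped by one reclaim pass.
# ncsize here is an example value, not a live kernel reading.
ncsize=129797
arc_reduce_dnlc_percent=3
echo "entries removed per reclaim: $(( ncsize * arc_reduce_dnlc_percent / 100 ))"
```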
I&#8217;m fixing this at the moment and I&#8217;ll send around a new binary this afternoon.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x3<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><code># echo arc_reduce_dnlc_percent\/W0t2 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_arc_max,\u00a0zfs_arc_min (deprecated in 11.2)<\/strong><\/span><\/p>\n<p>Determines the maximum\/minimum size of the ZFS Adjustable Replacement Cache (ARC).\u00a0Solaris 11.2 deprecates the zfs_arc_max kernel parameter in favor of user_reserve_hint_pct.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>arc_shrink_shift<\/strong><\/span><\/p>\n<p>This variable controls the amount of RAM that each ARC shrink event will try to reclaim. By default it is set to 5, which equates to shrinking by 1\/32 of arc_max. We tuned this to 11, which is 1\/2048 of arc_max. Based on that, we would be shrinking the ARC by about 100MB per shrink event, rather than 6GB of RAM.<\/p>\n<p>Every second a process runs which checks whether data can be removed from the ARC and evicts it. By default at most 1\/32nd of the ARC can be evicted at a time. This is limited because evicting large amounts of data from the ARC stalls all other processes. Back when 8GB was a lot of memory, 1\/32nd meant at most 256MB at a time. When you have 196GB of memory, 1\/32nd is 6.3GB, which can cause up to 20-30 seconds of unresponsiveness (depending on the record size).<br \/>\n(where 11 means 1\/2<sup>11<\/sup> or 1\/2048th, 10 means 1\/2<sup>10<\/sup> or 1\/1024th, etc. 
Adjust according to the amount of RAM in your system).<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>: 0x5<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><code># echo arc_shrink_shift\/W0xa | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_mdcomp_disable<\/strong><\/span><\/p>\n<p>This parameter controls compression of ZFS metadata (indirect blocks only). ZFS data block compression is controlled by the ZFS compression property, which can be set per file system.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><code># echo zfs_mdcomp_disable\/W0t1 | mdb -kw<\/code><\/p>\n<p><strong><span style=\"color: #ff0000;\">zfs_prefetch_disable<\/span><\/strong><\/p>\n<p>This parameter controls zfetch, a file-level prefetching mechanism. It looks at the patterns of reads to files and anticipates some reads, thereby reducing application wait times.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><code># echo zfs_prefetch_disable\/W0t1 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>metaslab_aliquot\u00a0<\/strong><\/span><\/p>\n<p>Metaslab granularity, in bytes. This is roughly similar to what would be referred to as the &#8220;stripe size&#8221; in traditional RAID arrays. In normal operation, ZFS will try to write this amount of data to a top-level vdev before moving on to the next one.<\/p>\n<p>The traditional VDEV space re-balancing occurred by means of a bias based on a 512K metaslab_aliquot and the number of VDEV children.\u00a0This bias mechanism will not function correctly with large allocation sizes. 
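Note that the mdb one-liners in this post mix two value notations: a 0t prefix means decimal and 0x means hex. A quick shell sanity check of what a hex argument actually writes:

```shell
# mdb's 0t prefix is decimal, 0x is hex; shell arithmetic can
# double-check the equivalences used in the commands above.
printf 'W0xa writes decimal %d\n' $(( 0xa ))   # arc_shrink_shift example
printf 'W0t2 writes decimal %d\n' 2            # 0t2 is already decimal
```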
An alternate method may need to be devised to allow effective re-balancing when streams of large allocations occur.<\/p>\n<p>Intel is currently working on an alternate re-balancing solution for large blocks.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x80000<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><code># echo metaslab_aliquot\/W0x90000 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>spa_max_replication_override<\/strong><\/span><\/p>\n<p>The number of DVAs (data virtual addresses) in a block pointer, i.e. the so-called ditto blocks.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default:<\/em><\/span> 0x3<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>spa_mode_global<\/strong><\/span><\/p>\n<p>Defines the mode in which a given zpool is opened internally by ZFS, typically READ\/WRITE.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x3<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_flags<\/strong><\/span><\/p>\n<p>Sets additional debugging flags.<\/p>\n<table>\n<thead>\n<tr>\n<th>flag value<\/th>\n<th>symbolic name<\/th>\n<th>description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>0x1<\/td>\n<td>ZFS_DEBUG_DPRINTF<\/td>\n<td>Enable dprintf entries in the debug log<\/td>\n<\/tr>\n<tr>\n<td>0x2<\/td>\n<td>ZFS_DEBUG_DBUF_VERIFY<\/td>\n<td>Enable extra dbuf verifications<\/td>\n<\/tr>\n<tr>\n<td>0x4<\/td>\n<td>ZFS_DEBUG_DNODE_VERIFY<\/td>\n<td>Enable extra dnode verifications<\/td>\n<\/tr>\n<tr>\n<td>0x8<\/td>\n<td>ZFS_DEBUG_SNAPNAMES<\/td>\n<td>Enable snapshot name 
verification<\/td>\n<\/tr>\n<tr>\n<td>0x10<\/td>\n<td>ZFS_DEBUG_MODIFY<\/td>\n<td>Check for illegally modified ARC buffers<\/td>\n<\/tr>\n<tr>\n<td>0x20<\/td>\n<td>ZFS_DEBUG_SPA<\/td>\n<td>Enable spa_dbgmsg entries in the debug log<\/td>\n<\/tr>\n<tr>\n<td>0x40<\/td>\n<td>ZFS_DEBUG_ZIO_FREE<\/td>\n<td>Enable verification of block frees<\/td>\n<\/tr>\n<tr>\n<td>0x80<\/td>\n<td>ZFS_DEBUG_HISTOGRAM_VERIFY<\/td>\n<td>Enable extra spacemap histogram verifications<\/td>\n<\/tr>\n<tr>\n<td>0x100<\/td>\n<td>ZFS_DEBUG_METASLAB_VERIFY<\/td>\n<td>Verify that space accounting on disk matches in-core range_trees<\/td>\n<\/tr>\n<tr>\n<td>0x200<\/td>\n<td>ZFS_DEBUG_SET_ERROR<\/td>\n<td>Enable SET_ERROR and dprintf entries in the debug log<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>: 0x0<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_flags\/W0x8 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_txg_synctime_ms<\/strong><\/span><\/p>\n<p>This sets how often (in milliseconds) cached data is flushed to disk (txg sync).<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x1388<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_txg_synctime_ms\/W0x2000 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_ssd_txg_synctime_ms<\/strong><\/span><\/p>\n<p>This sets how often (in milliseconds) cached data is flushed to an SSD (txg sync). 
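The hex defaults are easier to read in decimal; a quick conversion of the two txg sync-time defaults quoted in this post:

```shell
# Convert the quoted hex defaults to decimal milliseconds.
printf 'zfs_txg_synctime_ms     default: %d ms\n' $(( 0x1388 ))   # 5 seconds
printf 'zfs_ssd_txg_synctime_ms default: %d ms\n' $(( 0x2170 ))
```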
This applies to SSD disks only.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x2170<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_ssd_txg_synctime_ms\/W0x21700 | mdb -kw<\/code><\/p>\n<p><strong><span style=\"color: #ff0000;\">zfs_txg_timeout\u00a0<\/span><\/strong><\/p>\n<p>Seconds between transaction group commits (the maximum delay before buffered changes are committed to disk).<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x5<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_txg_timeout\/W0t120 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_write_limit_min<\/strong><\/span><\/p>\n<p>Minimum txg write limit.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x800000<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_write_limit_max<\/strong><\/span><\/p>\n<p>Maximum txg write limit.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00xff98dc00<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_write_limit_shift<\/strong><\/span><\/p>\n<p>log2(fraction of memory) per txg (int).<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x3<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_write_limit_override<\/strong><\/span><\/p>\n<p>Overrides the txg write limit.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x0<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_write_limit_override\/W0t402653184 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_no_write_throttle<\/strong><\/span><\/p>\n<p>Disables write throttling.<\/p>\n<p><span style=\"color: 
#ff6600;\"><em>Default<\/em><\/span>: 0x0<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_no_write_throttle\/W0t1 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_cache_max<\/strong><\/span><\/p>\n<p>Reads smaller than this size are inflated and cached by the per-vdev cache; setting a small value essentially disables the vdev cache, since random I\/Os will not be smaller than it.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x4000<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change:<\/em><\/span><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_cache_size<\/strong><\/span><\/p>\n<p>Total size of the per-disk cache.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x0<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_cache_bshift<\/strong><\/span><\/p>\n<p>The base-2 logarithm of the size used for reads issued by the vdev cache.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x10<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<\/span><\/em>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_max_pending<\/strong><\/span><\/p>\n<p>This parameter controls how many I\/O requests can be pending per vdev. For example, when you have 100 disks visible to your OS with a <strong>zfs:zfs_vdev_max_pending<\/strong> of 2, you have 200 requests outstanding at maximum. 
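The arithmetic of that example can be sketched as follows (100 disks and a zfs_vdev_max_pending of 2, the same assumed values as above):

```shell
# Outstanding-I/O ceiling = vdevs visible to the OS * zfs_vdev_max_pending.
disks=100
zfs_vdev_max_pending=2
echo "100 separate disks: at most $(( disks * zfs_vdev_max_pending )) outstanding I/Os"
echo "one big LUN:        at most $(( 1 * zfs_vdev_max_pending )) outstanding I/Os"
```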
When you have 100 disks hidden behind a storage controller that presents just a single LUN, you will have at most 2 pending requests.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00xa<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><code># echo zfs_vdev_max_pending\/W0t35 | mdb -kw<\/code><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_min_pending<\/strong><\/span><\/p>\n<p>Same as above, but for the minimum number of pending I\/Os per vdev.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x4<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_scrub_limit<\/strong><\/span><\/p>\n<p>Maximum number of scrub\/resilver I\/Os per leaf vdev.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00xa<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change<\/em><\/span>:<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_time_shift<\/strong><\/span><\/p>\n<p>Deadline time shift for vdev I\/O.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x6<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_ramp_rate<\/strong><\/span><\/p>\n<p>Exponential I\/O issue ramp-up rate.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x2<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_aggregation_limit<\/strong><\/span><\/p>\n<p>Max vdev I\/O aggregation size.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x20000<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: 
#ff0000;\"><strong>zfs_nocacheflush<\/strong><\/span><\/p>\n<p>This parameter controls ZFS write cache flushes for the entire system. Oracle&#8217;s Sun hardware should not require tuning this parameter. If you need to tune cache flushing, consider tuning it per hardware device. Contact your storage vendor for instructions on how to tell the storage devices to ignore the cache flushes sent by ZFS.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x1<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zil_replay_disable<\/strong><\/span><\/p>\n<p>Disables intent log replay; can be used for recovery from a corrupted ZIL.\u00a0If <code class=\"docutils literal notranslate\"><span class=\"pre\">zil_replay_disable<\/span> <span class=\"pre\">=<\/span> <span class=\"pre\">1<\/span><\/code>, then when a volume or filesystem is brought online, no attempt to replay the ZIL is made and any existing ZIL is destroyed. This can result in loss of data without notice.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x0<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>metaslab_df_alloc_threshold<\/strong><\/span><\/p>\n<p>The minimum free space, in bytes, which must be available in a space map to continue allocations in a first-fit fashion. 
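The first-fit\/best-fit switchover can be sketched as follows; the free-space figures are made-up examples, and the real check happens inside the metaslab allocator:

```shell
# Policy switch on space-map free space vs. metaslab_df_alloc_threshold.
threshold=$(( 0x100000 ))              # the default shown below, 1 MiB
for free_bytes in 4194304 524288; do   # example space-map free-space values
  if [ "$free_bytes" -ge "$threshold" ]; then
    echo "$free_bytes bytes free: first-fit"
  else
    echo "$free_bytes bytes free: best-fit"
  fi
done
```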
Once the space map&#8217;s free space drops below this level we dynamically switch to using best-fit allocations.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>: 0x100000<\/p>\n<p><span style=\"color: #ff6600;\"><em>How to change:<\/em><\/span><\/p>\n<p><span style=\"color: #ff0000;\"><strong>metaslab_df_free_pct<\/strong><\/span><\/p>\n<p>The minimum free space in a metaslab, in percent, which must be available to continue first-fit allocations; below this level, best-fit is used.<\/p>\n<p><span style=\"color: #ff6600;\"><em>Default<\/em><\/span>:\u00a00x4<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zio_injection_enabled<\/strong><\/span><\/p>\n<p>Enable fault injection.<br \/>\nTo handle fault injection, we keep track of a series of zinject_record_t structures which describe which logical block(s) should be injected with a fault. These are kept in a global list. Each record corresponds to a given spa_t and maintains a special hold on the spa_t so that it cannot be deleted or exported while the injection record exists. Device-level injection is done using the &#8216;zi_guid&#8217; field. If this is set, it means that the error is destined for a particular device, not a piece of data. This is a rather poor data structure and algorithm, but we don&#8217;t expect more than a few faults at any one time, so it should be sufficient for our needs.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x0<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_immediate_write_sz<\/strong><\/span><\/p>\n<p>Limit on the data size being sent to the\u00a0ZIL. 
(Synchronous writes are either written directly to the pool or go to the slog. By default the threshold is 32k; write operations exceeding this value are performed directly in the pool.)<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x8000<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_read_chunk_size<\/strong><\/span><\/p>\n<p>Bytes to read per chunk.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x100000<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_vdev_max_queue_wait<\/strong><\/span><\/p>\n<p>A factor used to trigger I\/O starvation avoidance. Used in conjunction with\u00a0zfs_vdev_max_pending\u00a0to\u00a0track the earliest I\/O that has been issued: if more than zfs_vdev_max_queue_wait full pending queues&#8217; worth of I\/Os have been issued since then, that I\/O is being starved, and no more I\/Os are accepted. 
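A rough sketch of that starvation rule, using the defaults quoted in this post and a made-up issue counter (the real check lives in the vdev queue code):

```shell
# An I/O counts as starved once more than
# zfs_vdev_max_queue_wait * zfs_vdev_max_pending I/Os were issued after it.
zfs_vdev_max_pending=10      # 0xa default
zfs_vdev_max_queue_wait=4    # 0x4 default
issued_since_oldest=45       # made-up example counter
if [ "$issued_since_oldest" -gt $(( zfs_vdev_max_queue_wait * zfs_vdev_max_pending )) ]; then
  echo "oldest I/O is starved: stop accepting new I/Os"
fi
```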
This will drain the pending queue until the starved I\/O is processed.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>:\u00a00x4<\/p>\n<p><em><span style=\"color: #ff6600;\">How to change<span style=\"color: #000000;\">:<\/span><\/span><\/em><\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfetch_max_streams<\/strong><\/span><\/p>\n<p>Max number of streams per zfetch (prefetch streams per file).<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x8<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfetch_min_sec_reap<\/strong><\/span><\/p>\n<p>Min time before an active prefetch stream can be reclaimed.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x2<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfetch_block_cap<\/strong><\/span><\/p>\n<p>Max number of blocks to prefetch at a time.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x100<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfetch_array_rd_sz<\/strong><\/span><\/p>\n<p>If prefetching is enabled, disable prefetching for reads\u00a0larger than this size.<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x100000<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_no_scrub_io<\/strong><\/span><\/p>\n<p>Set for no scrub I\/O.\u00a0Use <b>1<\/b> for yes and <b>0<\/b> for no (default).<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x0<\/p>\n<p><span style=\"color: #ff0000;\"><strong>zfs_no_scrub_prefetch<\/strong><\/span><\/p>\n<p>Set for no scrub prefetching.\u00a0Use <b>1<\/b> for yes and <b>0<\/b> for no (default).<\/p>\n<p><em><span style=\"color: #ff6600;\">Default<\/span><\/em>: 0x0<\/p>\n<h1><span style=\"color: #0000ff;\"><strong>Unknown:<\/strong><\/span><\/h1>\n<ul>\n<li><span style=\"color: #ff6600;\"><strong><em>11.3<\/em><\/strong><\/span><\/li>\n<\/ul>\n<p>fzap_default_block_shift = 0xe<br \/>\nmetaslab_gang_threshold = 0x100001<br \/>\nvdev_mirror_shift = 0x15<br \/>\nzvol_immediate_write_sz = 0x8000<br \/>\nzfs_no_scan_io = 
0x0<br \/>\nzfs_no_scan_prefetch = 0x0<br \/>\nzfetch_maxbytes_ub = 0x2000000<br \/>\nzfetch_maxbytes_lb = 0x400000<br \/>\nzfetch_target_blks = 0x100<br \/>\nzfetch_throttle_interval = 0xa<br \/>\nzfetch_num_hash_buckets = 0x400000<br \/>\nzfetch_ageout = 0xa<br \/>\nzfetch_ageout_sleep_time = 0x2<br \/>\nzfs_default_bs = 0x9<br \/>\nzfs_default_ibs = 0xe<br \/>\nzfs_vdev_future_reads = 0x2<br \/>\nzfs_vdev_future_read_bytes = 0x40000<br \/>\nzfs_vdev_future_writes = 0x2<br \/>\nzfs_vdev_future_write_bytes = 0x40000<\/p>\n<ul>\n<li><span style=\"color: #ff6600;\"><em><strong>11.2 \/ 11.1<\/strong><\/em><\/span><\/li>\n<\/ul>\n<p>zfs_vdev_future_pending = 0xa<\/p>\n<p>https:\/\/openzfs.github.io\/openzfs-docs\/Performance%20and%20Tuning\/Module%20Parameters.html<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Below are the parameters used for tuning 
ZFS.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[24],"tags":[],"class_list":["post-4986","post","type-post","status-publish","format-standard","hentry","category-solaris"],"_links":{"self":[{"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/posts\/4986","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4986"}],"version-history":[{"count":23,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/posts\/4986\/revisions"}],"predecessor-version":[{"id":6204,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=\/wp\/v2\/posts\/4986\/revisions\/6204"}],"wp:attachment":[{"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4986"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4986"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/skeletor.org.ua\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4986"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}