ZPOOL-STATUS(8)		    System Manager's Manual	       ZPOOL-STATUS(8)

NAME
       zpool-status -- show detailed health status for ZFS storage pools

SYNOPSIS
       zpool status [-DdegiLPpstvx] [-c script1[,script2]…]
	     [-j|--json [--json-flat-vdevs] [--json-int] [--json-pool-key-guid]]
	     [-T d|u] [--power] [pool] [interval [count]]

DESCRIPTION
       Displays the detailed health status for the given pools.  If no pool
       is specified, then the status of each pool in the system is displayed.
       For more information on pool and device health, see the "Device
       Failure and Recovery" section of zpoolconcepts(7).

       If a scrub or resilver is in progress, this command reports the
       percentage done and the estimated time to completion.  Both of these
       are only approximate, because the amount of data in the pool and the
       other workloads on the system can change.
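       For example, a pool mid-scrub might report progress like this (an
       illustrative transcript; the pool name, rates, and times are made up):
	     # zpool status tank
	       pool: tank
	      state: ONLINE
	       scan: scrub in progress since Mon May 19 10:00:00 2025
		     1.21T scanned at 302M/s, 604G issued at 151M/s
		     0B repaired, 24.6% done, 03:42:10 to go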

       -c script1[,script2]…
	       Run a script (or scripts) on each vdev and include the output
	       as a new column in the zpool status output.  See the -c option
	       of zpool iostat for complete details.

       -D      Display a histogram of deduplication statistics, showing the
	       allocated (physically present on disk) and referenced
	       (logically referenced in the pool) block counts and sizes by
	       reference count.  If repeated (-DD), also shows statistics on
	       how much of the DDT is resident in the ARC.
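	       For example, the histogram might look like this (illustrative
	       output; entry counts and sizes are examples only):
	     # zpool status -D tank
	       ...
	      dedup: DDT entries 1062, size 1.31K on disk, 216B in core

	     bucket            allocated                     referenced
	     refcnt   blocks  LSIZE  PSIZE  DSIZE   blocks  LSIZE  PSIZE  DSIZE
	     ------   ------  -----  -----  -----   ------  -----  -----  -----
	          1      775  96.9M  96.9M  96.9M      775  96.9M  96.9M  96.9M
	          2      287  35.9M  35.9M  35.9M      610  76.3M  76.3M  76.3M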

       -d      Display the number of Direct I/O read/write checksum verify
	       errors that have occurred on a top-level VDEV.  See
	       "zfs_vdev_direct_write_verify" in zfs(4) for details about the
	       conditions that can cause Direct I/O write checksum verify
	       failures to occur.  Direct I/O read checksum verify errors can
	       also occur if the contents of the buffer are being manipulated
	       after the I/O has been issued and is in flight.  In the case
	       of Direct I/O read checksum verify errors, the I/O will be
	       reissued through the ARC.
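	       For example, the per-vdev error count is shown as an extra
	       column (an illustrative sketch; the column label shown here,
	       DIO, may differ between OpenZFS versions):
	     # zpool status -d tank
	       ...
		NAME	STATE	READ WRITE CKSUM  DIO
		tank	ONLINE	   0	 0     0    0
		  sda	ONLINE	   0	 0     0    0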

       -e      Only show unhealthy vdevs (not-ONLINE or with errors).

       -g      Display vdev GUIDs instead of the normal device names.  These
	       GUIDs can be used in place of device names for the zpool
	       detach/offline/remove/replace commands.
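	       For example, a device can be taken offline by its GUID instead
	       of its path (illustrative output; the GUID is made up):
	     # zpool status -g tank
		NAME			STATE	READ WRITE CKSUM
		tank			ONLINE	   0	 0     0
		  7683910710533086149	ONLINE	   0	 0     0
	     # zpool offline tank 7683910710533086149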

       -i      Display vdev initialization status.

       -j, --json [--json-flat-vdevs] [--json-int] [--json-pool-key-guid]
	       Display the status for ZFS pools in JSON format.  Specify
	       --json-flat-vdevs to display vdevs in flat hierarchy instead
	       of nested vdev objects.  Specify --json-int to display numbers
	       in integer format instead of strings.  Specify
	       --json-pool-key-guid to set pool GUID as key for pool objects
	       instead of pool names.

       -L      Display real paths for vdevs resolving all symbolic links.
	       This can be used to look up the current block device name
	       regardless of the /dev/disk/ path used to open it.

       -P      Display full paths for vdevs instead of only the last
	       component of the path.  This can be used in conjunction with
	       the -L flag.
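	       For example, the two flags can be combined to print fully
	       resolved device paths (illustrative output; device names are
	       examples):
	     # zpool status -LP tank
		NAME		STATE	READ WRITE CKSUM
		tank		ONLINE	   0	 0     0
		  /dev/sda1	ONLINE	   0	 0     0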

       -p      Display numbers in parsable (exact) values.

       --power
	       Display vdev enclosure slot power status (on or off).

       -s      Display the number of leaf vdev slow I/O operations.  This is
	       the number of I/O operations that didn't complete in
	       zio_slow_io_ms milliseconds (30000 by default).  This does not
	       necessarily mean the I/O operations failed to complete, just
	       that they took an unreasonably long time.  This may indicate a
	       problem with the underlying storage.
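	       For example, slow I/O counts appear in an additional SLOW
	       column (illustrative output; the counts are made up):
	     # zpool status -s tank
		NAME	STATE	READ WRITE CKSUM  SLOW
		tank	ONLINE	   0	 0     0     -
		  sda	ONLINE	   0	 0     0     3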

       -T d|u  Display a time stamp.  Specify d for standard date format.
	       See date(1).  Specify u for a printed representation of the
	       internal representation of time.  See time(1).
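	       For example, a timestamp precedes each report when sampling
	       repeatedly (illustrative transcript):
	     # zpool status -T d tank 5 2
	     Mon May 19 10:00:00 2025
	     ...status output...
	     Mon May 19 10:00:05 2025
	     ...status output...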

       -t      Display vdev TRIM status.

       -v      Displays verbose data error information, printing out a
	       complete list of all data errors since the last complete pool
	       scrub.  If the head_errlog feature is enabled and files
	       containing errors have been removed, then the respective
	       filenames will not be reported in subsequent runs of this
	       command.
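	       For example, affected files are listed at the end of the
	       report (an illustrative transcript; the file name is made up):
	     # zpool status -v tank
	       ...
	     errors: Permanent errors have been detected in the following files:

		     /tank/data/file.db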

       -x      Only display status for pools that are exhibiting errors or
	       are otherwise unavailable.  Warnings about pools not using the
	       latest on-disk format will not be included.
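	       For example, when every pool is healthy the command prints a
	       single line:
	     # zpool status -x
	     all pools are healthy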

EXAMPLES
   Example 1: Adding output columns
       Additional columns can be added to the zpool status and zpool iostat
       output with -c.
	     # zpool status -c vendor,model,size
		NAME	 STATE	READ WRITE CKSUM vendor	 model	      size
		tank	 ONLINE	0    0	   0
		mirror-0 ONLINE	0    0	   0
		U1	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T
		U10	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T
		U11	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T
		U12	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T
		U13	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T
		U14	 ONLINE	0    0	   0	 SEAGATE ST8000NM0075 7.3T

	     # zpool iostat -vc	size
			   capacity	operations     bandwidth
	     pool	 alloc	 free	read  write   read  write  size
	     ----------	 -----	-----  -----  -----  -----  -----  ----
	     rpool	 14.6G	54.9G	   4	 55   250K  2.69M
	       sda1	 14.6G	54.9G	   4	 55   250K  2.69M   70G
	     ----------	 -----	-----  -----  -----  -----  -----  ----

   Example 2: Display the status output	in JSON	format
       zpool status can output in JSON format if -j is specified.  -c can be
       used to run a script on each VDEV.
	     # zpool status -j -c vendor,model,size | jq
	     {
	       "output_version": {
		 "command": "zpool status",
		 "vers_major": 0,
		 "vers_minor": 1
	       },
	       "pools":	{
		 "tank": {
		   "name": "tank",
		   "state": "ONLINE",
		   "guid": "3920273586464696295",
		   "txg": "16597",
		   "spa_version": "5000",
		   "zpl_version": "5",
		   "status": "OK",
		   "vdevs": {
		     "tank": {
		       "name": "tank",
		       "alloc_space": "62.6G",
		       "total_space": "15.0T",
		       "def_space": "11.3T",
		       "read_errors": "0",
		       "write_errors": "0",
		       "checksum_errors": "0",
		       "vdevs":	{
			 "raidz1-0": {
			   "name": "raidz1-0",
			   "vdev_type":	"raidz",
			   "guid": "763132626387621737",
			   "state": "HEALTHY",
			   "alloc_space": "62.5G",
			   "total_space": "10.9T",
			   "def_space":	"7.26T",
			   "rep_dev_size": "10.9T",
			   "read_errors": "0",
			   "write_errors": "0",
			   "checksum_errors": "0",
			   "vdevs": {
			     "ca1eb824-c371-491d-ac13-37637e35c683": {
			       "name": "ca1eb824-c371-491d-ac13-37637e35c683",
			       "vdev_type": "disk",
			       "guid": "12841765308123764671",
			       "path": "/dev/disk/by-partuuid/ca1eb824-c371-491d-ac13-37637e35c683",
			       "state":	"HEALTHY",
			       "rep_dev_size": "3.64T",
			       "phys_space": "3.64T",
			       "read_errors": "0",
			       "write_errors": "0",
			       "checksum_errors": "0",
			       "vendor": "ATA",
			       "model":	"WDC WD40EFZX-68AWUN0",
			       "size": "3.6T"
			     },
			     "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7": {
			       "name": "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
			       "vdev_type": "disk",
			       "guid": "1527839927278881561",
			       "path": "/dev/disk/by-partuuid/97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
			       "state":	"HEALTHY",
			       "rep_dev_size": "3.64T",
			       "phys_space": "3.64T",
			       "read_errors": "0",
			       "write_errors": "0",
			       "checksum_errors": "0",
			       "vendor": "ATA",
			       "model":	"WDC WD40EFZX-68AWUN0",
			       "size": "3.6T"
			     },
			     "e9ddba5f-f948-4734-a472-cb8aa5f0ff65": {
			       "name": "e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
			       "vdev_type": "disk",
			       "guid": "6982750226085199860",
			       "path": "/dev/disk/by-partuuid/e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
			       "state":	"HEALTHY",
			       "rep_dev_size": "3.64T",
			       "phys_space": "3.64T",
			       "read_errors": "0",
			       "write_errors": "0",
			       "checksum_errors": "0",
			       "vendor": "ATA",
			       "model":	"WDC WD40EFZX-68AWUN0",
			       "size": "3.6T"
			     }
			   }
			 }
		       }
		     }
		   },
		   "dedup": {
		     "mirror-2": {
		       "name": "mirror-2",
		       "vdev_type": "mirror",
		       "guid": "2227766268377771003",
		       "state":	"HEALTHY",
		       "alloc_space": "89.1M",
		       "total_space": "3.62T",
		       "def_space": "3.62T",
		       "rep_dev_size": "3.62T",
		       "read_errors": "0",
		       "write_errors": "0",
		       "checksum_errors": "0",
		       "vdevs":	{
			 "db017360-d8e9-4163-961b-144ca75293a3": {
			   "name": "db017360-d8e9-4163-961b-144ca75293a3",
			   "vdev_type":	"disk",
			   "guid": "17880913061695450307",
			   "path": "/dev/disk/by-partuuid/db017360-d8e9-4163-961b-144ca75293a3",
			   "state": "HEALTHY",
			   "rep_dev_size": "3.63T",
			   "phys_space": "3.64T",
			   "read_errors": "0",
			   "write_errors": "0",
			   "checksum_errors": "0",
			   "vendor": "ATA",
			   "model": "WDC WD40EFZX-68AWUN0",
			   "size": "3.6T"
			 },
			 "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f": {
			   "name": "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
			   "vdev_type":	"disk",
			   "guid": "10276374011610020557",
			   "path": "/dev/disk/by-partuuid/952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
			   "state": "HEALTHY",
			   "rep_dev_size": "3.63T",
			   "phys_space": "3.64T",
			   "read_errors": "0",
			   "write_errors": "0",
			   "checksum_errors": "0",
			   "vendor": "ATA",
			   "model": "WDC WD40EFZX-68AWUN0",
			   "size": "3.6T"
			 }
		       }
		     }
		   },
		   "special": {
		     "25d418f8-92bd-4327-b59f-7ef5d5f50d81": {
		       "name": "25d418f8-92bd-4327-b59f-7ef5d5f50d81",
		       "vdev_type": "disk",
		       "guid": "3935742873387713123",
		       "path": "/dev/disk/by-partuuid/25d418f8-92bd-4327-b59f-7ef5d5f50d81",
		       "state":	"HEALTHY",
		       "alloc_space": "37.4M",
		       "total_space": "444G",
		       "def_space": "444G",
		       "rep_dev_size": "444G",
		       "phys_space": "447G",
		       "read_errors": "0",
		       "write_errors": "0",
		       "checksum_errors": "0",
		       "vendor": "ATA",
		       "model":	"Micron_5300_MTFDDAK480TDS",
		       "size": "447.1G"
		     }
		   },
		   "error_count": "0"
		 }
	       }
	     }
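
   Example 3: Filtering the JSON output with jq
       The JSON output can be post-processed with jq(1).  For example, the
       following prints the name and state of each pool (a sketch that
       assumes the key layout shown in Example 2):
	     # zpool status -j | jq -r '.pools[] | "\(.name) \(.state)"'
	     tank ONLINE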

SEE ALSO
       zpool-events(8),	  zpool-history(8),   zpool-iostat(8),	zpool-list(8),
       zpool-resilver(8), zpool-scrub(8), zpool-wait(8)

FreeBSD	15.0			 May 20, 2025		       ZPOOL-STATUS(8)
