ICE(4)			    Kernel Interfaces Manual			ICE(4)

NAME
       ice -- Intel Ethernet 800 Series	1GbE to	200GbE driver

SYNOPSIS
       device iflib
       device ice

       In loader.conf(5):
       if_ice_load
       hw.ice.enable_health_events
       hw.ice.irdma
       hw.ice.irdma_max_msix
       hw.ice.debug.enable_tx_fc_filter
       hw.ice.debug.enable_tx_lldp_filter
       hw.ice.debug.ice_tx_balance_en

       In sysctl.conf(5) or loader.conf(5):
       dev.ice.#.current_speed
       dev.ice.#.fw_version
       dev.ice.#.ddp_version
       dev.ice.#.pba_number
       dev.ice.#.hw.mac.*

DESCRIPTION
       The ice driver provides support for any PCI Express adapter or LOM (LAN
       On Motherboard) in the Intel Ethernet 800 Series.

       The following topics are	covered	in this	manual:

          "Features"
          "Dynamic Device Personalization"
          "Jumbo Frames"
          "Remote Direct Memory Access"
          "RDMA Monitoring"
          "Data Center	Bridging"
          "L3 QoS Mode"
          "Firmware Link Layer	Discovery Protocol Agent"
          "Link-Level Flow Control"
          "Forward Error Correction"
          "Speed and Duplex Configuration"
          "Disabling physical link when the interface is brought down"
          "Firmware Logging"
          "Debug Dump"
          "Debugging PHY Statistics"
          "Transmit Balancing"
          "Thermal Monitoring"
          "Network Memory Buffer Allocation"
          "Additional Utilities"
          "Optics and auto-negotiation"
          "PCI-Express	Slot Bandwidth"
          "HARDWARE"
          "LOADER TUNABLES"
          "SYSCTL VARIABLES"
          "INTERRUPT STORMS"
          "IOVCTL OPTIONS"
          "SUPPORT"
          "SEE	ALSO"
          "HISTORY"

   Features
       Support	for  Jumbo  Frames  is provided	via the	interface MTU setting.
       Selecting an MTU	larger than 1500 bytes with  the  ifconfig(8)  utility
       configures the adapter to receive and transmit Jumbo Frames.  The maxi-
       mum  MTU	 size for Jumbo	Frames is 9706.	 For more information, see the
       "Jumbo Frames" section.

       This driver version supports VLANs.  For	information on enabling	VLANs,
       see vlan(4).  For additional  information  on  configuring  VLANs,  see
       ifconfig(8)'s "VLAN Parameters" section.

       Offloads are also controlled via the interface: checksumming for both
       IPv4 and IPv6 can be enabled and disabled, as can TSO4 and/or TSO6,
       and LRO.

       For more	information on configuring this	device,	see ifconfig(8).
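
       For example, assuming the first port is named "ice0", the following
       commands disable LRO and enable TSO for IPv4 and IPv6 (the interface
       name and option choices are illustrative only):

             ifconfig ice0 -lro
             ifconfig ice0 tso4 tso6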

       The associated Virtual Function (VF) driver for this driver is iavf(4).

       The associated RDMA driver for this driver is irdma(4).

   Dynamic Device Personalization
       The  DDP	 package loads during device initialization.  The driver looks
       for the ice_ddp module and checks that it contains a valid DDP  package
       file.

       If  the driver is unable	to load	the DDP	package, the device will enter
       Safe Mode.  Safe	Mode disables advanced and  performance	 features  and
       supports	only basic traffic and minimal functionality, such as updating
       the NVM or downloading a	new driver or DDP package.  Safe Mode only ap-
       plies  to  the affected physical	function and does not impact any other
       PFs.  See the "Intel Ethernet Adapters and Devices User Guide" for more
       details on DDP and Safe Mode.

       If issues are encountered with the DDP package file, an updated	driver
       or  ice_ddp module may need to be downloaded.  See the log messages for
       more information.

       The DDP package cannot be updated if any	PF drivers are already loaded.
       To overwrite a package, unload all PFs and then reload the driver  with
       the new package.

       Only  one DDP package can be used per driver, even if more than one in-
       stalled device uses the driver.

       Only the	first loaded PF	per device can download	a package for that de-
       vice.

   Jumbo Frames
       Jumbo Frames support is enabled by changing  the	 Maximum  Transmission
       Unit (MTU) to a value larger than the default value of 1500.

       Use ifconfig(8) to increase the MTU size.
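
       For example, assuming the first port is named "ice0", the following
       command sets a 9000-byte MTU (the interface name and value are illus-
       trative only):

             ifconfig ice0 mtu 9000 up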

       The  maximum MTU	setting	for jumbo frames is 9706.  This	corresponds to
       the maximum jumbo frame size of 9728 bytes.

       This driver will attempt to use multiple page-sized buffers to receive
       each  jumbo packet.  This should	help to	avoid buffer starvation	issues
       when allocating receive packets.

       Packet loss may have a greater impact on	throughput when	 jumbo	frames
       are  in use.  If	a drop in performance is observed after	enabling jumbo
       frames, enabling	flow control may mitigate the issue.

   Remote Direct Memory	Access
       Remote Direct Memory Access, or RDMA, allows a network device to	trans-
       fer data	directly to and	from application memory	on another system, in-
       creasing	throughput and lowering	latency	in certain networking environ-
       ments.

       The ice driver supports both the	iWARP (Internet	Wide Area RDMA	Proto-
       col)  and  RoCEv2  (RDMA	over Converged Ethernet) protocols.  The major
       difference is that iWARP	performs RDMA over TCP,	while RoCEv2 uses UDP.

       Devices based on	the Intel Ethernet 800 Series do not support RDMA when
       operating in multiport mode with	more than 4 ports.

       For detailed installation and configuration information for  RDMA,  see
       irdma(4).

   RDMA	Monitoring
       For  debugging/testing  purposes, a sysctl can be used to set up	a mir-
       roring interface	on a port.  The	interface can  receive	mirrored  RDMA
       traffic	for packet analysis tools like tcpdump(1).  This mirroring may
       impact performance.

       To use RDMA monitoring, more MSI-X interrupts may need to be  reserved.
       Before  the  ice	driver loads, configure	the following tunable provided
       by iflib(4):

	     dev.ice.<interface	#>.iflib.use_extra_msix_vectors=4

       The number of extra MSI-X interrupt vectors may need to be adjusted.

       To create/delete	the interface:

	     sysctl dev.ice.<interface #>.create_interface=1
	     sysctl dev.ice.<interface #>.delete_interface=1

       The mirrored interface receives both LAN	and RDMA traffic.   Additional
       filters can be configured in tcpdump.

       To differentiate	the mirrored interface from the	primary	interface, the
       network interface naming	convention is:

	     <driver name><port	number><modifier><modifier unit	number>

       For example, "ice0m0" is	the first mirroring interface on "ice0".

   Data	Center Bridging
       Data Center Bridging (DCB) is a configuration Quality of	Service	imple-
       mentation  in hardware.	It uses	the VLAN priority tag (802.1p) to fil-
       ter traffic.  That means	that there are	8  different  priorities  that
       traffic	can  be	 filtered into.	 It also enables priority flow control
       (802.1Qbb) which	can limit or eliminate the number of  dropped  packets
       during  network	stress.	  Bandwidth  can be allocated to each of these
       priorities, which is enforced at	the hardware level (802.1Qaz).

       DCB is normally configured on  the  network  using  the	DCBX  protocol
       (802.1Qaz), a specialization of LLDP (802.1AB). The ice driver supports
       the following mutually exclusive	variants of DCBX support:

          Firmware-based LLDP Agent
          Software-based LLDP Agent

       In  firmware-based  mode, firmware intercepts all LLDP traffic and han-
       dles DCBX negotiation transparently for the user.  In  this  mode,  the
       adapter	operates  in  "willing"	DCBX mode, receiving DCB settings from
       the link	partner	(typically a switch).  The local user can  only	 query
       the  negotiated DCB configuration.  For information on configuring DCBX
       parameters on a switch, please consult the switch manufacturer's docu-
       mentation.

       In software-based mode, LLDP traffic is forwarded to the	network	 stack
       and  user  space,  where	a software agent can handle it.	 In this mode,
       the adapter can operate in "nonwilling" DCBX mode and DCB configuration
       can be both queried and set locally.  This mode requires	 the  FW-based
       LLDP Agent to be	disabled.

       Firmware-based  mode  and  software-based  mode	are  controlled	by the
       "fw_lldp_agent" sysctl.	Refer to the  Firmware	Link  Layer  Discovery
       Protocol	Agent section for more information.

       Link-level  flow	 control and priority flow control are mutually	exclu-
       sive.  The ice driver will disable link flow control when priority flow
       control is enabled on any traffic class (TC).  It will disable priority
       flow control when link flow control is enabled.

       To enable/disable priority flow control in software-based DCBX mode:

	     sysctl dev.ice.<interface #>.pfc=1	(or 0 to disable)

       Enhanced	Transmission Selection (ETS) allows bandwidth to  be  assigned
       to  certain  TCs,  to help ensure traffic reliability.  To view the as-
       signed ETS configuration, use the following:

	     sysctl dev.ice.<interface #>.ets_min_rate

       To set the minimum ETS bandwidth	per TC,	separate the values by commas.
       All values must add up to 100.  For example, to set all TCs to a	 mini-
       mum bandwidth of	10% and	TC 7 to	30%, use the following:

	     sysctl dev.ice.<interface #>.ets_min_rate=10,10,10,10,10,10,10,30

       To  set the User	Priority (UP) to a TC mapping for a port, separate the
       values by commas.  For example, to map UP 0 and 1 to TC 0, UP 2	and  3
       to TC 1,	UP 4 and 5 to TC 2, and	UP 6 and 7 to TC 3, use	the following:

	     sysctl dev.ice.<interface #>.up2tc_map=0,0,1,1,2,2,3,3

   L3 QoS Mode
       The  ice	 driver	supports setting DSCP-based Layer 3 Quality of Service
       (L3 QoS)	in the PF driver.  The driver initializes in L2	 QoS  mode  by
       default;	 L3  QoS  is disabled by default.  Use the following sysctl to
       enable or disable L3 QoS:

	     sysctl dev.ice.<interface #>.pfc_mode=1 (or 0 to disable)

       If L3 QoS mode is disabled, it returns to L2 QoS	mode.

       To map a	DSCP value to a	traffic	class, separate	the values by  commas.
       For  example, to	map DSCPs 0-3 and DSCP 8 to DCB	TCs 0-3	and 4, respec-
       tively:

	     sysctl dev.ice.<interface #>.dscp2tc_map.0-7=0,1,2,3,0,0,0,0
	     sysctl dev.ice.<interface #>.dscp2tc_map.8-15=4,0,0,0,0,0,0,0

       To change the DSCP mapping back to the default traffic class,  set  all
       the values back to 0.

       To view the currently configured	mappings, use the following:

	     sysctl dev.ice.<interface #>.dscp2tc_map

       L3 QoS mode is not available when FW-LLDP is enabled.

       FW-LLDP cannot be enabled if L3 QoS mode	is active.

       Disable FW-LLDP before switching	to L3 QoS mode.

       Refer  to the "Firmware Link Layer Discovery Protocol Agent" section in
       this README for more information	on disabling FW-LLDP.

   Firmware Link Layer Discovery Protocol Agent
       Use sysctl to change FW-LLDP settings.  The FW-LLDP setting is per port
       and persists across boots.

       To enable the FW-LLDP Agent:

	     sysctl dev.ice.<interface #>.fw_lldp_agent=1

       To disable the FW-LLDP Agent:

	     sysctl dev.ice.<interface #>.fw_lldp_agent=0

       To check	the current LLDP setting:

	     sysctl dev.ice.<interface #>.fw_lldp_agent

       The UEFI	HII LLDP Agent attribute must be enabled for this  setting  to
       take effect.  If	the "LLDP AGENT" attribute is set to disabled, the FW-
       LLDP Agent cannot be enabled from the driver.

   Link-Level Flow Control
       Ethernet	 Flow  Control	(IEEE  802.3x  or  LFC)	can be configured with
       sysctl(8) to enable receiving and transmitting pause  frames  for  ice.
       When  transmit  is enabled, pause frames	are generated when the receive
       packet buffer crosses a predefined threshold.  When receive is enabled,
       the transmit unit will  halt  for  the  time  delay  specified  in  the
       firmware	when a pause frame is received.

       Flow Control is disabled	by default.

       Use  sysctl  to change the flow control settings	for a single interface
       without reloading the driver:

	     sysctl dev.ice.<interface #>.fc

       The available values for	flow control are:

	     0 = Disable flow control
	     1 = Enable	Rx pause
	     2 = Enable	Tx pause
	     3 = Enable	Rx and Tx pause
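
       For example, to enable both receive and transmit pause frames on the
       first port (interface number 0 is assumed for illustration):

             sysctl dev.ice.0.fc=3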

       Verify that link	flow control was negotiated on the  link  by  checking
       the  interface entry in ifconfig(8) and looking for the flags "txpause"
       and/or "rxpause"	in the "media" status.

       The ice driver requires flow control on both the	port and link partner.
       If flow control is disabled on one of the sides,	the port may appear to
       hang on heavy traffic.

       For more	information on priority	 flow  control,	 refer	to  the	 "Data
       Center Bridging"	section.

       The VF driver does not have access to flow control.  It must be managed
       from the	host side.

   Forward Error Correction
       Forward	Error  Correction  (FEC) improves link stability but increases
       latency.	 Many high quality optics, direct attach cables, and backplane
       channels	can provide a stable link without FEC.

       For devices to benefit from this	feature, link partners must  have  FEC
       enabled.

       If the allow_no_fec_modules_in_auto sysctl is enabled, Auto FEC nego-
       tiation will include "No FEC" in case the link partner does not have
       FEC enabled or is not FEC capable:

	     sysctl dev.ice.<interface #>.allow_no_fec_modules_in_auto=1

       NOTE: This flag is currently not	supported on the  Intel	 Ethernet  830
       Series.

       To show the current FEC settings	that are negotiated on the link:

	     sysctl dev.ice.<interface #>.negotiated_fec

       To view or set the FEC setting that was requested on the	link:

	     sysctl dev.ice.<interface #>.requested_fec

       To see the valid	FEC modes for the link:

	     sysctl -d dev.ice.<interface #>.requested_fec

   Speed and Duplex Configuration
       The speed and duplex settings cannot be hard set.

       To set the speeds the device will use in auto-negotiation, or when
       forcing link, use the following:

	     sysctl dev.ice.<interface #>.advertise_speed=<mask>

       Supported  speeds will vary by device.  Depending on the	speeds the de-
       vice supports, valid bits used in a speed mask could include:

	     0x0 - Auto
	     0x2 - 100 Mbps
	     0x4 - 1 Gbps
	     0x8 - 2.5 Gbps
	     0x10 - 5 Gbps
	     0x20 - 10 Gbps
	     0x80 - 25 Gbps
	     0x100 - 40	Gbps
	     0x200 - 50	Gbps
	     0x400 - 100 Gbps
	     0x800 - 200 Gbps
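
       For example, to advertise only 10 Gbps and 25 Gbps on the first port
       (interface number 0 is assumed; 0x20 + 0x80 = 0xa0):

             sysctl dev.ice.0.advertise_speed=0xa0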

   Disabling physical link when	the interface is brought down
       When the	link_active_on_if_down sysctl is set to	"0", the  port's  link
       will go down when the interface is brought down.	 By default, link will
       stay up.

       To disable link when the	interface is down:

	     sysctl dev.ice.<interface #>.link_active_on_if_down=0

   Firmware Logging
       The ice driver allows for the generation	of firmware logs for supported
       categories  of events, to help debug issues with	Customer Support.  Re-
       fer to the "Intel Ethernet Adapters and	Devices	 User  Guide"  for  an
       overview	of this	feature	and additional tips.

       At a high level,	to capture a firmware log:
       1.   Set	the configuration for the firmware log.
       2.   Perform the	necessary steps	to reproduce the issue.
       3.   Capture the	firmware log.
       4.   Stop capturing the firmware	log.
       5.   Reset the firmware log settings as needed.
       6.   Work with Customer Support to debug	the issue.

       NOTE:  Firmware	logs  are generated in a binary	format and must	be de-
       coded by	Customer Support.  Information collected is  related  only  to
       firmware	and hardware for debug purposes.

       Once  the driver	is loaded, it will create the fw_log sysctl node under
       the debug section of the	driver's sysctl	list.  The driver groups these
       events into categories, called "modules".  Supported modules include:

	     general	    General (Bit 0)
	     ctrl	    Control (Bit 1)
	     link	    Link Management (Bit 2)
	     link_topo	    Link Topology Detection (Bit 3)
	     dnl	    Link Control Technology (Bit 4)
	     i2c	    I2C	(Bit 5)
	     sdp	    SDP	(Bit 6)
	     mdio	    MDIO (Bit 7)
	     adminq	    Admin Queue	(Bit 8)
	     hdma	    Host DMA (Bit 9)
	     lldp	    LLDP (Bit 10)
	     dcbx	    DCBx (Bit 11)
	     dcb	    DCB	(Bit 12)
	     xlr	    XLR	(function-level	resets;	Bit 13)
	     nvm	    NVM	(Bit 14)
	     auth	    Authentication (Bit	15)
	     vpd	    Vital Product Data (Bit 16)
	     iosf	    Intel On-Chip System Fabric	(Bit 17)
	     parser	    Parser (Bit	18)
	     sw		    Switch (Bit	19)
	     scheduler	    Scheduler (Bit 20)
	     txq	    TX Queue Management	(Bit 21)
	     acl	    ACL	(Access	Control	List; Bit 22)
	     post	    Post (Bit 23)
	     watchdog	    Watchdog (Bit 24)
	     task_dispatch  Task Dispatcher (Bit 25)
	     mng	    Manageability (Bit 26)
	     synce	    SyncE (Bit 27)
	     health	    Health (Bit	28)
	     tsdrv	    Time Sync (Bit 29)
	     pfreg	    PF Registration (Bit 30)
	     mdlver	    Module Version (Bit	31)

       The verbosity level of the firmware logs	can be modified.  It is	possi-
       ble to set only one log level per module, and each level	 includes  the
       verbosity  levels  lower	 than  it.  For	instance, setting the level to
       "normal"	will also log warning and error	messages.  Available verbosity
       levels are:

	     0 = none
	     1 = error
	     2 = warning
	     3 = normal
	     4 = verbose

       To set the desired verbosity level for  a  module,  use	the  following
       sysctl command and then register	it:

	     sysctl dev.ice.<interface #>.debug.fw_log.severity.<module>=<level>

       For example:

	     sysctl dev.ice.0.debug.fw_log.severity.link=1
	     sysctl dev.ice.0.debug.fw_log.severity.link_topo=2
	     sysctl dev.ice.0.debug.fw_log.register=1

       To  log firmware	messages after booting,	but before the driver initial-
       izes, use kenv(1) to set	the tunable.  The on_load  setting  tells  the
       device to register the variable as soon as possible during driver load.
       For example:

	     kenv dev.ice.0.debug.fw_log.severity.link=1
	     kenv dev.ice.0.debug.fw_log.severity.link_topo=2
	     kenv dev.ice.0.debug.fw_log.on_load=1

       To  view	the firmware logs and redirect them to a file, use the follow-
       ing command:

	     dmesg > log_output

       NOTE: Logging a large number of modules or  too	high  of  a  verbosity
       level  will add extraneous messages to dmesg and	could hinder debug ef-
       forts.

   Debug Dump
       Intel Ethernet 800 Series devices support debug dump, which allows
       gathering runtime register values from the firmware for "clusters" of
       events and then writing the results to a single dump file, for debug-
       ging complicated issues in the field.

       This  debug  dump  contains  a  snapshot	of the device and its existing
       hardware	configuration, such as switch tables, transmit	scheduler  ta-
       bles,  and other	information.  Debug dump captures the current state of
       the specified cluster(s)	and is a stateless snapshot of the  whole  de-
       vice.

       NOTE:  Like  with firmware logs,	the contents of	the debug dump are not
       human-readable.	Work with Customer Support to decode the file.

       Debug dump is per device, not per PF.

       Debug dump writes all information to a single file.

       To generate a debug dump	file in	FreeBSD	do the following:

       Specify the cluster(s) to include in the	dump file, using a bitmask and
       the following command:

	     sysctl dev.ice.<interface #>.debug.dump.clusters=<bitmask>

       To print	the complete cluster bitmask and parameter list	to the screen,
       pass the	-d argument.  For example:

	     sysctl -d dev.ice.0.debug.dump.clusters

       Possible	bitmask	values for clusters are:
          0 - Dump all	clusters (only supported on Intel Ethernet E810	Series
	   and Intel Ethernet E830 Series)
          0x1 - Switch
          0x2 - ACL
          0x4 - Tx Scheduler
          0x8 - Profile Configuration
          0x20	- Link
          0x80	- DCB
          0x100 - L2P
          0x400000 - Manageability Transactions (only supported on Intel Eth-
	   ernet E810 Series)

       For example, to dump the	Switch,	DCB, and L2P clusters, use the follow-
       ing:

	     sysctl dev.ice.0.debug.dump.clusters=0x181

       To dump all clusters, use the following:

	     sysctl dev.ice.0.debug.dump.clusters=0

       NOTE: Using 0 will skip Manageability Transactions data.

       If a single cluster is not specified, the driver	will dump all clusters
       to a single file.  Issue	the debug dump command,	using the following:

	     sysctl -b dev.ice.<interface #>.debug.dump.dump=1 > dump.bin

       NOTE: The driver	will not receive the command if	the sysctl is not  set
       to "1".

       Replace "dump.bin" above	with the preferred file	name.

       To  clear  the clusters mask before a subsequent	debug dump and then do
       the dump:

	     sysctl dev.ice.0.debug.dump.clusters=0
	     sysctl dev.ice.0.debug.dump.dump=1

   Debugging PHY Statistics
       The ice driver supports the ability to obtain the  values  of  the  PHY
       registers  from	Intel(R) Ethernet 810 Series devices in	order to debug
       link and	connection issues during runtime.

       The driver provides information about:

          Rx and Tx Equalization parameters

          RS FEC correctable and uncorrectable	block counts

       Use the following sysctl	to read	the PHY	registers:

	     sysctl dev.ice.<interface #>.debug.phy_statistics

       NOTE: The contents of the registers are not human-readable.  Like  with
       firmware	 logs and debug	dump, work with	Customer Support to decode the
       file.

   Transmit Balancing
       Some Intel(R) Ethernet 800 Series devices allow for enabling a transmit
       balancing feature to improve transmit performance under certain	condi-
       tions.	When  enabled,	this  feature  should  provide more consistent
       transmit	performance across queues and/or PFs and VFs.

       By default, transmit balancing is disabled in the NVM.  To enable  this
       feature,	 use  one  of the following to persistently change the setting
       for the device:

          Use the Ethernet Port  Configuration	 Tool  (EPCT)  to  enable  the
	   tx_balancing	 option.   Refer  to the EPCT readme for more informa-
	   tion.

          Enable the Transmit Balancing device	setting	in UEFI	HII.

       When the	driver loads, it reads the transmit balancing setting from the
       NVM and configures the device accordingly.

       NOTE: The user selection	for transmit balancing in EPCT or HII is  per-
       sistent	across	reboots.  The system must be rebooted for the selected
       setting to take effect.

       This setting is device wide.

       The driver, NVM,	and DDP	package	must all support this functionality to
       enable the feature.

   Thermal Monitoring
       Intel(R)	Ethernet 810 Series and	Intel(R) Ethernet 830  Series  devices
       can display temperature data (in	degrees	Celsius) via:

	     sysctl dev.ice.<interface #>.temp

   Network Memory Buffer Allocation
       FreeBSD	may have a low number of network memory	buffers	(mbufs)	by de-
       fault.  If the number of	mbufs available	is too low, it may  cause  the
       driver  to  fail	 to initialize and/or cause the	system to become unre-
       sponsive.  Check	to see	if  the	 system	 is  mbuf-starved  by  running
       netstat -m.  Increase the number	of mbufs by editing the	lines below in
       /etc/sysctl.conf:

	     kern.ipc.nmbclusters
	     kern.ipc.nmbjumbop
	     kern.ipc.nmbjumbo9
	     kern.ipc.nmbjumbo16
	     kern.ipc.nmbufs

       The  amount  of memory that should be allocated is system specific, and
       may require some	trial and error.  Also,	increasing  the	 following  in
       /etc/sysctl.conf	could help increase network performance:

	     kern.ipc.maxsockbuf
	     net.inet.tcp.sendspace
	     net.inet.tcp.recvspace
	     net.inet.udp.maxdgram
	     net.inet.udp.recvspace
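
       As a sketch only, entries in /etc/sysctl.conf might look like the fol-
       lowing; the values shown are purely illustrative and suitable values
       are system specific:

             kern.ipc.nmbclusters=1000000
             kern.ipc.maxsockbuf=16777216
             net.inet.tcp.sendspace=262144
             net.inet.tcp.recvspace=262144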

   Additional Utilities
       There  are  additional tools available from Intel to help configure and
       update the adapters covered by this driver.  These tools	can  be	 down-
       loaded  directly	 from  Intel  at  https://downloadcenter.intel.com, by
       searching for their names:

          To change the behavior of the QSFP28	ports on E810-C	adapters,  use
	   the Intel Ethernet Port Configuration Tool -	FreeBSD.

          To  update  the  firmware on	an adapter, use	the Intel Non-Volatile
	   Memory (NVM)	Update Utility for  Intel  Ethernet  Network  Adapters
	   E810	series - FreeBSD

   Optics and auto-negotiation
       Modules	based  on 100GBASE-SR4,	active optical cable (AOC), and	active
       copper cable (ACC) do not support auto-negotiation per the IEEE	speci-
       fication.   To obtain link with these modules, auto-negotiation must be
       turned off on the link partner's	switch ports.

       Note that adapters also support all passive and active limiting	direct
       attach  cables that comply with SFF-8431	v4.1 and SFF-8472 v10.4	speci-
       fications.

   PCI-Express Slot Bandwidth
       Some PCIe x8 slots are actually configured as x4	 slots.	  These	 slots
       have  insufficient bandwidth for	full line rate with dual port and quad
       port devices.  In addition, if a	PCIe v4.0 or v3.0-capable  adapter  is
       placed into a PCIe v2.x slot, full bandwidth will not be possible.

       The  driver  detects this situation and writes the following message in
       the system log:

	     PCI-Express bandwidth available for this device may be insuffi-
	     cient for optimal performance.  Please move the device to a dif-
	     ferent PCI-e link with more lanes and/or higher transfer rate.

       If this error occurs, moving the	adapter	to a true PCIe x8 or x16  slot
       will  resolve  the issue.  For best performance,	install	devices	in the
       following PCI slots:

          Any 100Gbps-capable Intel(R)	Ethernet 800 Series device: Install in
	   a PCIe v4.0 x8 or v3.0 x16 slot

          A 200Gbps-capable Intel(R) Ethernet 830 Series device: Install in a
	   PCIe	v5.0 x8	or v4.0	x16 slot

       For questions related to	hardware requirements, refer to	the documenta-
       tion supplied with the adapter.

HARDWARE
       The ice driver supports the following Intel 800	series	1Gb  to	 200Gb
       Ethernet	controllers:

          Intel Ethernet Controller E810-C
          Intel Ethernet Controller E810-XXV
          Intel Ethernet Connection E822-C
          Intel Ethernet Connection E822-L
          Intel Ethernet Connection E823-C
          Intel Ethernet Connection E823-L
          Intel Ethernet Connection E825-C
          Intel Ethernet Connection E830-C
          Intel Ethernet Connection E830-CC
          Intel Ethernet Connection E830-L
          Intel Ethernet Connection E830-XXV
          Intel Ethernet Connection E835-C
          Intel Ethernet Connection E835-CC
          Intel Ethernet Connection E835-L
          Intel Ethernet Connection E835-XXV

       The  ice	driver supports	some adapters in this series with SFP28/QSFP28
       cages which have	firmware that requires that  Intel  qualified  modules
       are used; these qualified modules are listed below.  This qualification
       check cannot be disabled	by the driver.

       The  ice	driver supports	100Gb Ethernet adapters	with these QSFP28 mod-
       ules:

          Intel 100G QSFP28 100GBASE-SR4   E100GQSFPSR28SRX
          Intel 100G QSFP28 100GBASE-SR4   SPTMBP1PMCDF
          Intel 100G QSFP28 100GBASE-CWDM4 SPTSBP3CLCCO
          Intel 100G QSFP28 100GBASE-DR    SPTSLP2SLCDF

       The ice driver supports 25Gb and	 10Gb  Ethernet	 adapters  with	 these
       SFP28 modules:

          Intel 10G/25G SFP28 25GBASE-SR E25GSFP28SR
          Intel     25G SFP28 25GBASE-SR E25GSFP28SRX (Extended Temp)
          Intel     25G SFP28 25GBASE-LR E25GSFP28LRX (Extended Temp)

       The  ice	driver supports	10Gb and 1Gb Ethernet adapters with these SFP+
       modules:

          Intel 1G/10G	SFP+ 10GBASE-SR	E10GSFPSR
          Intel 1G/10G	SFP+ 10GBASE-SR	E10GSFPSRG1P5
          Intel 1G/10G	SFP+ 10GBASE-SR	E10GSFPSRG2P5
          Intel    10G	SFP+ 10GBASE-SR	E10GSFPSRX (Extended Temp)
          Intel 1G/10G	SFP+ 10GBASE-LR	E10GSFPLR

LOADER TUNABLES
       Tunables	can be set at the loader(8) prompt before booting  the	kernel
       or stored in loader.conf(5).  See the iflib(4) man page for more	infor-
       mation on using iflib sysctl variables as tunables.

       hw.ice.enable_health_events
	       Set  to	1 to enable firmware health event reporting across all
	       devices.	 Enabled by default.

	       If enabled, when	the driver receives a  firmware	 health	 event
	       message,	 it  will  print out a description of the event	to the
	       kernel message buffer and if applicable,	 possible  actions  to
	       take to remedy it.

       hw.ice.irdma
	       Set  to	1 to enable the	RDMA client interface, required	by the
	       irdma(4)	driver.	 Enabled by default.

       hw.ice.irdma_max_msix
	       Set the maximum number of per-device MSI-X vectors that are al-
	       located for use by the irdma(4) driver.	Set to 64 by default.

       hw.ice.debug.enable_tx_fc_filter
	       Set to 1	to enable the TX Flow Control filter  across  all  de-
	       vices.  Enabled by default.

	       If  enabled,  the  hardware will	drop any transmitted Ethertype
	       0x8808 control frames that do not originate from	the hardware.

       hw.ice.debug.enable_tx_lldp_filter
	       Set to 1	to enable the TX LLDP filter across all	devices.   En-
	       abled by	default.

	       If  enabled,  the  hardware will	drop any transmitted Ethertype
	       0x88cc LLDP frames that do not  originate  from	the  hardware.
	       This must be disabled in	order to use LLDP daemon software such
	       as lldpd(8).

       hw.ice.debug.ice_tx_balance_en
	       Set  to	1  to allow the	driver to use the 5-layer Tx Scheduler
	       tree topology if	configured by the DDP package.

	       Enabled by default.
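
       As an illustrative sketch, the following loader.conf(5) lines disable
       the TX LLDP filter (allowing an LLDP daemon such as lldpd(8) to be
       used) and reduce the MSI-X vectors reserved for irdma(4); the values
       shown are examples only:

             hw.ice.debug.enable_tx_lldp_filter="0"
             hw.ice.irdma_max_msix="32"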

SYSCTL VARIABLES
       dev.ice.#.current_speed
	       This is a display of the	current	link speed of  the  interface.
	       This  is	 expected  to match the	speed of the media type	in-use
	       displayed by ifconfig(8).

       dev.ice.#.fw_version
	       Displays	the current firmware and NVM versions of the  adapter.
	       This information	should be submitted along with any support re-
	       quests.

       dev.ice.#.ddp_version
	       Displays	 the  current  DDP  package  version downloaded	to the
	       adapter.	 This information should be submitted along  with  any
	       support requests.

       dev.ice.#.pba_number
	       Displays	 the  Product  Board  Assembly Number.	May be used to
	       help identify the type of adapter in use.  This sysctl may  not
	       exist depending on the adapter type.

       dev.ice.#.hw.mac.*
	       This  sysctl tree contains statistics collected by the hardware
	       for the port.

INTERRUPT STORMS
       It is important to note that 100G operation can generate	 high  numbers
       of interrupts, often incorrectly	being interpreted as a storm condition
       in  the	kernel.	  It  is  suggested  that  this	be resolved by setting
       hw.intr_storm_threshold to 0.
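
       For example, to apply the setting at runtime and persist it across re-
       boots (the value 0 disables interrupt storm detection):

             sysctl hw.intr_storm_threshold=0
             echo 'hw.intr_storm_threshold=0' >> /etc/sysctl.conf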

IOVCTL OPTIONS
       The driver supports additional  optional	 parameters  for  created  VFs
       (Virtual	Functions) when	using iovctl(8):

       mac-addr	(unicast-mac)
	       Set the Ethernet	MAC address that the VF	will use.  If unspeci-
	       fied,  the  VF  will  use  a randomly generated MAC address and
	       "allow-set-mac" will be set to true.

       mac-anti-spoof (bool)
	       Prevent the VF from sending Ethernet frames with	a  source  ad-
	       dress that does not match its own.  Enabled by default.

       allow-set-mac (bool)
	       Allow  the  VF to set its own Ethernet MAC address.  Disallowed
	       by default.

       allow-promisc (bool)
	       Allow the VF to inspect all of the traffic  sent	 to  the  port
	       that it is created on.  Disabled	by default.

       num-queues (uint16_t)
	       Specify	the  number  of	 queues	the VF will have.  By default,
	       this is set to the number of MSI-X vectors supported by the  VF
	       minus one.

       mirror-src-vsi (uint16_t)
	       Specify	which  VSI  the	VF will	mirror traffic from by setting
	       this to a value other than -1.  All traffic from	that VSI  will
               be mirrored to this VF.  This can be used as an alternative to
               the method described in the "RDMA Monitoring" section for mir-
               roring RDMA traffic to another interface.  Not affected by the
               "allow-promisc" parameter.

       max-vlan-allowed	(uint16_t)
               Specify the maximum number of VLAN filters that the VF can use.
               Receiving traffic on a VLAN requires a hardware filter; filters
               are a finite resource, and this limit prevents a VF from starv-
               ing other VFs or the PF of filter resources.  By default, this
               is set to 16.

       max-mac-filters (uint16_t)
               Specify the maximum number of MAC address filters that the VF
               can use.  Each allowed MAC address requires a hardware filter;
               filters are a finite resource, and this limit prevents a VF
               from starving other VFs or the PF of filter resources.  The
               VF's default MAC address does not count towards this limit.
               By default, this is set to 64.

       An up-to-date list of parameters and their defaults can be found by
       using iovctl(8) with the -S option.

       For   more  information	on  standard  and  mandatory  parameters,  see
       iovctl.conf(5).
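
       As a minimal sketch only, an iovctl.conf(5) file creating two VFs on a
       port named "ice0" might look like the following; the device name, MAC
       address, and parameter values are illustrative, not recommendations:

             PF {
                     device : "ice0";
                     num_vfs : 2;
             }

             DEFAULT {
                     mac-anti-spoof : true;
             }

             VF-0 {
                     mac-addr : "02:00:00:00:00:01";
                     allow-set-mac : true;
                     num-queues : 4;
             }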

SUPPORT
       For general information and support, go to the  Intel  support  website
       at: http://www.intel.com/support/.

       If  an  issue  is identified with this driver with a supported adapter,
       email  all  the	specific  information  related	 to   the   issue   to
       <freebsd@intel.com>.

SEE ALSO
       iflib(4), vlan(4), ifconfig(8), sysctl(8)

HISTORY
       The ice device driver first appeared in FreeBSD 12.2.

AUTHORS
       The ice driver was written by Intel Corporation <freebsd@intel.com>.

FreeBSD	15.0		       November	5, 2025				ICE(4)
