H2O.CONF(5)                    File Formats Manual                    H2O.CONF(5)

NAME
h2o.conf - The configuration file for H2O, the optimized HTTP/1.x, HTTP/2 server

SYNOPSIS
/etc/h2o/h2o.conf

DESCRIPTION
h2o.conf is a YAML configuration file.

QUICK START
In order to run the H2O standalone HTTP server, you need to write a configuration file. The minimal configuration file looks as follows.

    listen:
      port: 80
    hosts:
      "myhost.example.com":
        listen: &listen_ssl
          port: 443
          ssl:
            certificate-file: /path/to/certificate-file
            key-file: /path/to/key-file
        listen:
          <<: *listen_ssl
          type: quic
        paths:
          /:
            file.dir: /path/to/the/public-files
    user: nobody
    access-log: /path/to/the/access-log
    error-log: /path/to/the/error-log
    pid-file: /path/to/the/pid-file

The configuration instructs the server to:

    - listen on TCP port 80 for all hosts
    - for myhost.example.com, listen on TCP port 443 using the given TLS certificate and key pair
    - listen on UDP port 443 (QUIC), reusing the previous setting named listen_ssl
    - serve files under /path/to/the/public-files under the privileges of nobody
    - emit access logs to the file /path/to/the/access-log
    - emit error logs to /path/to/the/error-log
    - store the process id of the server in /path/to/the/pid-file

Enter the command below to start the server.

    % sudo h2o -m daemon -c /path/to/the/configuration-file

The command instructs the server to read the configuration file and start in daemon mode, which dispatches a pair of master and worker processes that serve the HTTP requests.

To stop the server, send SIGTERM to the server.

    % sudo kill -TERM `cat /path/to/the/pid-file`

Next Step
Now that you know how to start and stop the server, the next step is to learn the configuration directives and their structure, or see the configuration examples.

SYNTAX AND STRUCTURE
Syntax
H2O uses YAML 1.1 as the syntax of its configuration file.

Levels of Configuration
When using the configuration directives of H2O, it is important to understand that there are four configuration levels: global, host, path, extension.

Global-level configurations affect the entire server. Host-level configurations affect the configuration for the specific hostname (i.e. they correspond to the <VirtualHost> directive of the Apache HTTP Server). Path-level configurations only affect the behavior of resources specific to the path.

Extension-level configurations affect how files with certain extensions are being served. For example, it is possible to map files with the .php extension to the FastCGI handler running the php-cgi command.

Consider the following example.

    hosts:
      "example.com":
        listen:
          port: 443
          ssl:
            certificate-file: etc/site1.crt
            key-file: etc/site1.key
        paths:
          "/":
            file.dir: htdocs/site1
          "/icons":
            file.dir: icons
            expires: 1 day
      "example.com:80":
        listen:
          port: 80
        paths:
          "/":
            redirect: "https://example.com/"

In the example, two host-level configurations exist (under the hosts mapping), each of them listening to different ports. The first host listens to port 443 using TLS (i.e. HTTPS) using the specified server certificate and key. It has two path-level configurations, one for / and the other for /icons, each of them pointing to different local directories containing the files to be served. The latter also has the expires directive set, so that a Cache-Control: max-age=86400 [1] header would be sent. The second host accepts connections on port 80 (via the plain-text HTTP protocol), and redirects all the requests to the first host using HTTPS.

Certain configuration directives can be used at more than one level.
For example, the listen can be used either at the global level or at the host level. Expires can be used at all levels. On the other hand file.dir can only be used at the path level. Path-level configuration Values of the path-level configuration define the action(s) to be taken when the server processes a request that prefix-matches to the config- ured paths. Each entry of the mapping associated to the paths is eval- uated in the order they appear. Consider the following example. When receiving a request for https://example.com/foo, the file handler is first executed trying to serve a file named /path/to/doc-root/foo as the response. In case the file does not exist, then the FastCGI handler is invoked. hosts: "example.com": listen: port: 443 ssl: certificate-file: etc/site1.crt key-file: etc/site1.key paths: "/": file.dir: /path/to/doc-root fastcgi.connect: port: /path/to/fcgi.sock type: unix Starting from version 2.1, it is also possible to define the path-level configuration as a sequence of mappings instead of a single mapping. The following example is identical to the previous one. Notice the dashes placed before the handler directives. hosts: "example.com": listen: port: 443 ssl: certificate-file: etc/site1.crt key-file: etc/site1.key paths: "/": - file.dir: /path/to/doc-root - fastcgi.connect: port: /path/to/fcgi.sock type: unix Using YAML Alias H2O resolves YAML aliases before processing the configuration file. Therefore, it is possible to use an alias to reduce the redundancy of the configuration file. For example, the following configuration reuses the first paths element (that is given an anchor named de- fault_paths) in the following definitions. hosts: "example.com": listen: port: 443 ssl: certificate-file: /path/to/example.com.crt key-file: /path/to/example.com.crt paths: &default_paths "/": file.dir: /path/to/doc-root "example.org": listen: port: 443 ssl: certificate-file: /path/to/example.org.crt key-file: /path/to/example.org.crt paths: *default_paths Using YAML Merge Since version 2.0, H2O recognizes Merge Key Language-Independent Type for YAML Version 1.1. Users can use the feature to merge an existing mapping against another. The following example reuses the TLS configu- ration of example.com in example.org. hosts: "example.com": listen: port: 443 ssl: &default_ssl minimum-version: TLSv1.2 cipher-suite: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 certificate-file: /path/to/example.com.crt key-file: /path/to/example.com.crt paths: ... "example.org": listen: port: 443 ssl: <<: *default_ssl certificate-file: /path/to/example.org.crt key-file: /path/to/example.org.crt paths: ... Including Files Starting from version 2.1, it is possible to include a YAML file from the configuration file using !file custom YAML tag. The following ex- ample extracts the TLS configuration into default_ssl.conf and include it multiple times in h2o.conf. 
Example: default_ssl.conf

    minimum-version: TLSv1.2
    cipher-suite: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    certificate-file: /path/to/example.com.crt
    key-file: /path/to/example.com.crt

Example: h2o.conf

    hosts:
      "example.com":
        listen:
          port: 443
          ssl: !file default_ssl.conf
        paths:
          ...
      "example.org":
        listen:
          port: 443
          ssl: !file default_ssl.conf
        paths:
          ...

Starting from version 2.3, it is possible to refer to an environment variable (interpreted as a scalar) from the configuration file by using the !env custom YAML tag.

Example: h2o.conf

    hosts:
      "example.com":
        listen:
          port: !env H2O_PORT
        paths:
          ...

Notes:
[1] 1 day is equivalent to 86400 seconds

BASE DIRECTIVES
This document describes the configuration directives common to all the protocols and handlers.

hosts
Maps host:port to the mappings of per-host configs.

The directive specifies the mapping between the authorities (the host or host:port section of an URL) and their configurations. The directive is mandatory, and must contain at least one entry.

When port is omitted, the entry will match the requests targeting the default ports (i.e. port 80 for HTTP, port 443 for HTTPS) with the given hostname. Otherwise, the entry will match the requests targeting the specified port.

Since version 1.7, a wildcard character * can be used as the first component of the hostname. If used, it is matched using the rule defined in RFC 2818 Section 3.1. For example, *.example.com will match HTTP requests for both foo.example.com and bar.example.com.

For each HTTP request to be processed, the matching host entry is determined by the steps below:

    - Among the host elements that do not use wildcards, find the first element that matches the host and port being specified by the URI.
    - If none is found in the previous step, find a matching element among the entries that use wildcards.
    - If none is found in the previous steps, use the first host element without a strict-match flag.

When the hostname of the HTTP request is unknown (i.e., processing an HTTP/1.0 request without a host header field), only the last step is being used.

Example: A host redirecting all HTTP requests to HTTPS

    hosts:
      "www.example.com:80":
        listen:
          port: 80
        paths:
          "/":
            redirect: https://www.example.com/
      "www.example.com:443":
        listen:
          port: 443
          ssl:
            key-file: /path/to/ssl-key-file
            certificate-file: /path/to/ssl-certificate-file
        paths:
          "/":
            file.dir: /path/to/doc-root

paths
Mapping of paths and their configurations.

The mapping is searched using prefix-match. The entry with the longest path is chosen when more than one matching path is found. A 404 Not Found error is returned if no matching path is found.

Example: Configuration with two paths

    hosts:
      "www.example.com":
        listen:
          port: 80
        paths:
          "/":
            file.dir: /path/to/doc-root
          "/assets":
            file.dir: /path/to/assets

In releases prior to version 2.0, all the path entries are considered as directories. When H2O receives a request that exactly matches an entry in paths that does not end with a slash, the server always returns a 301 redirect that appends a slash.

Since 2.0, it depends on the handler of the path whether a 301 redirect that appends a slash is returned. Server administrators can take advantage of this change to define per-path configurations (see the examples in file.file and the FastCGI handler).
file.dir is an exception that continues to perform the redirection; in case of the example above, access to /assets is redirected to /assets/. listen Specifies the port at which the server should listen to. In addition to specifying the port number, it is also possible to des- ignate the bind address or the SSL and HTTP/3 (QUIC) configuration. Example: Various ways of using the Listen Directive # accept HTTP on port 80 on default address (both IPv4 and IPv6) listen: 80 # accept HTTP on 127.0.0.1:8080 listen: host: 127.0.0.1 port: 8080 # accept HTTPS on port 443 listen: port: 443 ssl: key-file: /path/to/key-file certificate-file: /path/to/certificate-file # accept HTTPS on port 443 (using PROXY protocol) listen: port: 443 ssl: key-file: /path/to/key-file certificate-file: /path/to/certificate-file proxy-protocol: ON To configure HTTP/3 (QUIC), see HTTP/3. Configuration Levels The directive can be used either at global-level or at host-level. At least one listen directive must exist at the global level, or every host-level configuration must have at least one listen directive. Incoming connections accepted by global-level listeners will be dis- patched to one of the host-level contexts with the corresponding host:port, or to the first host-level context if none of the contexts were given host:port corresponding to the request. Host-level listeners specify bind addresses specific to the host-level context. However it is permitted to specify the same bind address for more than one host-level contexts, in which case hostname-based lookup will be performed between the host contexts that share the address. The feature is useful for setting up a HTTPS virtual host using Server- Name Indication (RFC 6066). Example: Using host-level listeners for HTTPS virtual-hosting hosts: "www.example.com:443": listen: port: 443 ssl: key-file: /path/to/www_example_com.key certificate-file: /path/to/www_example_com.crt paths: "/": file.dir: /path/to/doc-root_of_www_example_com "www.example.jp:443": listen: port: 443 ssl: key-file: /path/to/www_example_jp.key certificate-file: /path/to/www_example_jp.crt paths: "/": file.dir: /path/to/doc-root_of_www_example_jp SSL Attribute The ssl attribute must be defined as a mapping, and recognizes the fol- lowing attributes. certificate-file: Path of the SSL certificate file (mandatory). This attribute can spec- ify a PEM file containing either an X.509 certificate chain or a raw public key. When the latter form is being used, RFC 7250 handshake will be used. key-file: Path of the SSL private key file (mandatory). identity: List of certificate / key pairs. This attribute can be used in place of certificate-file and key-file to specify more than one pair of certifi- cates and keys. When a TLS handshake is performed, h2o uses the first pair that contains a compatible certificate / key. The last pair acts as the fallback. Example: Using RSA and ECDSA certificates ssl: identity: - key-file: /path/to/rsa.key certificate-file: /path/to/rsa.crt - key-file: /path/to/ecdsa.key certificate-file: /path/to/ecdsa.crt minimum-version: minimum protocol version, should be one of: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3. Default is TLSv1. min-version: synonym of minimum-version (introduced in version 2.2) maximum-version: maximum protocol version. Introduced in version 2.2. Default is the maximum protocol version supported by the server. max-version: synonym of maximum-version. 
cipher-suite: list of cipher suites to be passed to OpenSSL via SSL_CTX_set_cipher_list (optional)

cipher-suite-tls1.3: list of TLS 1.3 cipher suites to use; the list must be a YAML sequence where each element specifies the cipher suite using the name as registered in the IANA TLS Cipher Suite Registry; e.g., TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256.

cipher-preference: side of the list that should be used for selecting the cipher-suite; should be either of: client, server. Default is client.

dh-file: path of a PEM file containing the Diffie-Hellman parameters to be used. Use of the file is recommended for servers using Diffie-Hellman key agreement. (optional)

ocsp-update-interval: interval for updating the OCSP stapling data (in seconds), or set to zero to disable OCSP stapling. Default is 14400 (4 hours).

ocsp-max-failures: number of consecutive OCSP query failures before stopping to send OCSP stapling data to the client. Default is 3.

ech: This experimental attribute controls the use of the TLS Encrypted Client Hello extension (draft-15). The attribute takes a sequence of mappings, each of them defining one ECH configuration.

Example: Encrypted Client Hello

    ssl:
      key-file: /path/to/rsa.key
      certificate-file: /path/to/rsa.crt
      ech:
        - key-file: /path/to/ech.key
          config-id: 11
          public-name: public-name.example.net
          cipher-suite: [ HKDF-SHA256/AES-128-GCM ]

The example above defines one ECH configuration that uses /path/to/ech.key as the semi-static ECDH key with a config-id of 11, with the public-name being public-name.example.net, and the HPKE SymmetricCipherSuite being HKDF-SHA256/AES-128-GCM.

In addition to these four attributes, the following attributes may be specified.

max-name-length specifies the maximum-name-length field of an ECH configuration (default: 64).

advertise takes either YES (default) or NO as the argument. This argument indicates if the given ECH configuration should be advertised as part of retry_configs (draft-ietf-tls-esni-15; section 5).

When removing a stale ECH configuration, its advertise attribute should first be set to NO so that the stale configuration would not be advertised. Then, after waiting for the expiry of caches containing the stale configuration, the stale ECH configuration can be removed. This may take a long time depending on the TTL of the HTTPS / SVC DNS resource record advertising the configuration.

The ech attribute must be set only in the first ssl attribute that binds to a particular address.

neverbleed: unless set to OFF, H2O isolates SSL private key operations to a different process by using Neverbleed. Default is ON.

The ssl-session-resumption directive is provided for tuning parameters related to session resumption and session tickets.

The CC Attribute
The CC attribute specifies the congestion controller to be used for incoming HTTP connections.

For TCP connections, the congestion controller is set using the TCP_CONGESTION socket option on platforms that have support for that socket option. To find out the default and the list of supported congestion controllers, please refer to man 7 tcp. If the platform does not have support for that socket option, the attribute has no effect.

For QUIC connections, the congestion controller is one of Reno, Cubic, Pico. The default is Reno.

The Initcwnd Attribute
The initcwnd attribute specifies the initial congestion window of each incoming HTTP connection in the unit of packets. At the moment, this option only applies to QUIC. It has no effect for TCP connections.
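The two attributes above are described only in prose; the following sketch (not part of the manual's own examples; the lowercase spellings cc and initcwnd and the values shown are assumptions made for illustration) suggests how they might be set on a QUIC listener.

    listen:
      port: 443
      type: quic
      ssl:
        certificate-file: /path/to/certificate-file
        key-file: /path/to/key-file
      cc: cubic       # congestion controller; for QUIC one of the controllers listed above
      initcwnd: 20    # initial congestion window, in packets (illustrative value)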
The Proxy-Protocol Attribute The proxy-protocol attribute (i.e. the value of the attribute must be either ON or OFF) specifies if the server should recognize the informa- tion passed via "the PROXY protocol in the incoming connections. The protocol is used by L4 gateways such as AWS Elastic Load Balancing to send peer address to the servers behind the gateways. When set to ON, H2O standalone server tries to parse the first octets of the incoming connections as defined in version 1 of the specifica- tion, and if successful, passes the addresses obtained from the proto- col to the web applications and the logging handlers. If the first octets do not accord with the specification, it is considered as the start of the SSL handshake or as the beginning of an HTTP request de- pending on whether if the ssl attribute has been used. Default is OFF. The Sndbuf and Rcvbuf Attributes The sndbuf and rcvbuf attributes specify the send and receive buffer size for each TCP or UNIX socket used for accepting incoming HTTP con- nections. If set, the values of these attributes are applied to the sockets using SO_SNDBUF and SO_RCVBUF socket options. These attributes have no effect for QUIC connections. Listening to a Unix Socket If the type attribute is set to unix, then the port attribute is as- sumed to specify the path of the unix socket to which the standalone server should bound. Also following attributes are recognized. owner username of the owner of the socket file. If omitted, the socket file will be owned by the launching user. group name of the group of the socket file. If omitted, group ID associated to the socket file will be the group ID of the owner. permission an octal number specifying the permission of the socket file. Many op- erating systems require write permission for connecting to the socket file. If omitted, the permission of the socket file will reflect the umask of the calling process. Example: Listening to a Unix Socket accessible only by www-data listen: type: unix port: /tmp/h2o.sock owner: www-data permission: 600 capabilities Set capabilities to be added to the process before dropping root privi- leges. This directive can be used only on Linux. The argument is a YAML se- quence of capabilites, where each capability is a name that is accepted by cap_from_name. See man 7 capabilities for details. error-log Path of the file to which error logs should be appended. Default is stderr. If the path starts with |, the rest of the path is considered as a com- mand to which the logs should be piped. Example: Log errors to file error-log: /path/to/error-log-file Example: Log errors through pipe error-log: "| rotatelogs /path/to/error-log-file.%Y%m%d 86400" See also: error-log.emit-request-errors error-log.emit-request-errors (since v2.1) Sets whether if request-level errors should be emitted to the error log. By setting the value to OFF and by using the %{error}x specifier of the access-log directive, it is possible to log request-level errors only to the access log. See also: access-log error-log h2olog Under the path /.well-known/h2olog for the current host, registers a h2olog handler that emits the internals of traffic that the h2o process is handling. The h2olog command can be used to gather information through this end- point. As the handler emits the internals of h2o to the client, only requests via a UNIX socket are accepted. This directive takes one of the following three arguments: off The h2olog endpoint is disabled. 
on The h2olog endpoint is enabled, but only information to gather performance data will be emitted.

appdata The h2olog endpoint is enabled, and in addition to information necessary for gathering performance data, some payload of HTTP is emitted as well. The additional information might help diagnose issues specific to certain HTTP connections but might include sensitive information (e.g., HTTP cookies).

handshake-timeout
Maximum time (in seconds) that can be spent by a connection before it becomes ready to accept an HTTP request. Times spent for receiving the PROXY protocol and TLS handshake are counted.

limit-request-body
Maximum size of request body in bytes (e.g. content of POST). Default is 1073741824 (1GB).

max-connections
Maximum number of incoming connections to handle at once. This includes TCP and QUIC connections.

See also: max-quic-connections, soft-connection-limit

max-quic-connections
Maximum number of incoming QUIC connections to handle at once.

By default, the maximum number of incoming connections is governed by max-connections regardless of the transport protocol (i.e., TCP or QUIC) being used. This directive introduces an additional cap for incoming QUIC connections. By setting max-quic-connections to a value smaller than max-connections, it would be possible to serve incoming requests that arrive on top of TCP (i.e., HTTP/1 and HTTP/2) even when there are issues with handling QUIC connections.

See also: num-quic-threads

max-delegations
Limits the number of delegations (i.e. fetching the response body from an alternate source as specified by the X-Reproxy-URL header).

max-reprocesses
Limits the number of internal redirects.

neverbleed-offload
Sets an offload engine to be used with neverbleed.

When neverbleed is in use, RSA private key operations can be offloaded to accelerators using the Intel QuickAssist technology.

This directive takes one of three values that change how the accelerators are used:

    OFF - the accelerator is not used
    QAT - use of QAT is enforced; startup will fail if the accelerator is unavailable
    QAT-AUTO - QAT is used if available

num-name-resolution-threads
Maximum number of threads to run for name resolution.

num-ocsp-updaters (since v2.0)
Maximum number of OCSP updaters.

OCSP stapling is an optimization that speeds up the time spent for establishing a TLS connection. In order to staple OCSP information, an HTTP server is required to periodically contact the certificate authority. This directive caps the number of the processes spawned for collecting the information.

The use and the update interval of OCSP can be configured using the SSL attributes of the listen configuration directive.

num-threads
Number of worker threads.

By default, the number of worker threads spawned by h2o is the number of the CPU cores connected to the system as obtained by getconf NPROCESSORS_ONLN. This directive is used to override the behavior.

If the argument is a YAML scalar, it specifies, as an integer, the number of worker threads to spawn. If the argument is a YAML sequence, it specifies a list of CPU IDs on each of which one worker thread will be spawned and pinned. This mode can be used only on systems that have pthread_setaffinity_np.

num-quic-threads
Restricts the number of worker threads that handle incoming QUIC connections.

By default, all worker threads handle incoming QUIC connections as well as TCP connections.
If num-threads was given a YAML sequence specifying the CPU IDs on which each worker thread will run, the threads pinned to first num- quic-threads threads will handle incoming QUIC connections. See also: max-quic-connections pid-file Name of the file to which the process id of the server should be writ- ten. Default is none. tcp-fastopen Size of the queue used for TCP Fast Open. TCP Fast Open is an extension to the TCP/IP protocol that reduces the time spent for establishing a connection. On Linux that support the feature, the default value is 4,096. On other platforms the default value is 0 (disabled). send-server-name (since v2.0) Sets whether if the server response header should be sent or forwarded from backend. By setting the value to (ON or OFF) indicating whether if the server response header should be sent. And by setting the value to preserve, it forwards the value received from the backend when proxying. See also: server-name server-name (since v2.0) Lets the user override the value of the server response header. The default value is h2o/VERSION-NUMBER. See also: send-server-name setenv (since v2.0) Sets one or more environment variables. Environment variables are a set of key-value pairs containing arbitrary strings, that can be read from applications invoked by the standalone server (e.g. fastcgi handler, mruby handler) and the access logger. The directive is applied from outer-level to inner-level. At each level, the directive is applied after the unsetenv directive at the corresponding level is applied. Environment variables are retained through internal redirections. Example: Setting an environment variable named FOO setenv: FOO: "value_of_FOO" See also: unsetenv unsetenv (since v2.0) Unsets one or more environment variables. The directive can be used to have an exception for the paths that have an environment variable set, or can be used to reset variables after an internal redirection. Example: Setting environment variable for example.com excluding /spe- cific-path hosts: example.com: setenv: FOO: "value_of_FOO" paths: /specific-path: unsetenv: - FOO ... See also: setenv send-informational (since v2.3) Specifies the client protocols to which H2O can send 1xx informational responses. This directive can be used to forward 1xx informational responses (e.g., 103 Early Hints) generated by upstream servers or headers direc- tive to the clients. If the value is all, H2O always sends informational responses to the client whenever possible (i.e. unless the procotol is HTTP/1.0). If the value is none, H2O never sends informational responses to the client. If the value is except-h1, H2O sends informational if the protocol is not HTTP/1.x. soft-connection-limit Number of connections above which idle connections are closed agres- sively. H2O accepts up to max-connections TCP connections and max-quic-connec- tions QUIC connections. Once the number of connections reach these maximums, new connection attempts are ignored until existing connec- tions close. To reduce the possibility of the number of connections reaching the maximum and new connection attempts getting ignored, soft-connection- limit can be used to introduce another threshold. When soft-connec- tion-limit is set, connections that have been idle at least for soft- connection-limit.min-age seconds will start to get closed until the number of connections becomes no greater than soft-connection-limit. 
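For instance, the following sketch (the values are illustrative only and are not taken from the manual's own examples) allows up to 10240 connections in total, but starts closing connections that have been idle for 10 seconds or longer once more than 8192 connections are open.

    max-connections: 10240
    soft-connection-limit: 8192
    soft-connection-limit.min-age: 10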
As the intention of this directive is to close connections more agres- sively under high load than usual, soft-connection-limit.min-age should be set to a smaller value than the other idle timeouts; e.g., http1-re- quest-timeout, http2-idle-timeout. soft-connection-limit.min-age Minimum amount of idle time to be guaranteed for HTTP connections even when the connections are closed agressively due to the number of con- nections exceeding soft-connection-limit. See soft-connection-limit. ssl-offload Knob for changing how TLS encryption is handled. This directive takes one of the following values: OFF - TLS encryption is handled in userspace and the encrypted bytes are sent to the kernel using a write (2) system call. kernel - TLS en- cryption is offloaded to the kernel. When the network interface card supports TLS offloading, actual encryption might get offloaded to the interface, depending on the kernel configuration. zerocopy - TLS en- cryption is handled in userspace, but if the encryption logic is capa- ble of writing directly to main memory without polluting the cache, the encrypted data is passed to the kernel without copying (i.e., sendmsg (2) with MSG_ZEROCOPY socket option is used). Otherwise, this option is identical to OFF. This option minimizes cache pollution next to hardware offload. Kernel option can be used only on Linux. Zerocopy is only available on Linux running on CPUs that support the necessary features; see picotls PR#384 and H2O PR#3007. See also: proxy.zerocopy ssl-session-resumption Configures cache-based and ticket-based session resumption. To reduce the latency introduced by the TLS (SSL) handshake, two meth- ods to resume a previous encrypted session are defined by the Internet Engineering Task Force. H2O supports both of the methods: cache-based session resumption (defined in RFC 5246) and ticket-based session re- sumption (defined in RFC 5077). Example: Various session-resumption configurations # use both methods (storing data on internal memory) ssl-session-resumption: mode: all # use both methods (storing data on memcached running at 192.168.0.4:11211) ssl-session-resumption: mode: all cache-store: memcached ticket-store: memcached cache-memcached-num-threads: 8 memcached: host: 192.168.0.4 port: 11211 # use ticket-based resumption only (with secrets used for encrypting the tickets stored in a file) ssl-session-resumption: mode: ticket ticket-store: file ticket-file: /path/to/ticket-encryption-key.yaml Defining the Methods Used The mode attribute defines which methods should be used for resuming the TLS sessions. The value can be either of: off, cache, ticket, all. Default is all. If set to off, session resumption will be disabled, and all TLS connec- tions will be established via full handshakes. If set to all, both session-based and ticket-based resumptions will be used, with the pref- erence given to the ticket-based resumption for clients supporting both the methods. For each method, additional attributes can be used to customize their behaviors. Attributes that modify the behavior of the disabled method are ignored. Attributes for Cache-based Resumption Following attributes are recognized if the cache-based session resump- tion is enabled. Note that memcached attribute must be defined as well in case the memcached cache-store is used. cache-store: defines where the cache should be stored, must be one of: internal, memcached. Default is internal. 
Please note that if you compiled h2o with OpenSSL 1.1.0 ~ 1.1.0f, session resumption with an external cache store would fail due to a bug in OpenSSL.

cache-memcached-num-threads: defines the maximum number of threads used for communicating with the memcached server. Default is 1.

cache-memcached-prefix: for the memcached store specifies the key prefix used to store the secrets on memcached. Default is h2o:ssl-session-cache:.

Attributes for Ticket-based Resumption
Ticket-based session resumption uses ticket encryption key(s) to encrypt the keys used for encrypting the data transmitted over TLS connections. To achieve forward secrecy (i.e. protect past communications from being decrypted in case the ticket encryption key gets obtained by a third party), it is essential to periodically roll over the encryption key.

Among the three types of stores supported for ticket-based session resumption, the internal store and memcached store implement automatic roll-over of the secrets. A new ticket encryption key is created every 1/4 of the session lifetime (defined by the lifetime attribute), and the keys expire (and get removed) after 5/4 of the session lifetime elapses.

For the file store, it is the responsibility of the web-site administrator to periodically update the secrets. H2O monitors the file and reloads the secrets when the file is altered.

The following attributes are recognized if the ticket-based resumption is enabled.

ticket-store: defines where the secrets for ticket-based resumption should be stored, must be one of: internal, file, memcached. Default is internal.

ticket-cipher: for stores that implement automatic roll-over, specifies the cipher used for encrypting the tickets. The value must be one recognizable by EVP_get_cipherbyname. Default is aes-256-cbc.

ticket-hash: for stores that implement automatic roll-over, specifies the hash algorithm used for digitally signing the tickets. The value must be one recognizable by EVP_get_digestbyname. Default is sha-256.

ticket-file: for the file store specifies the file in which the secrets are stored

ticket-memcached-key: for the memcached store specifies the key used to store the secrets on memcached. Default is h2o:ssl-session-ticket.

Format of the Ticket Encryption Key
Either as a file (specified by the ticket-file attribute) or as a memcached entry (ticket-memcached-key), the encryption keys for the session tickets are stored as a sequence of YAML mappings. Each mapping must have all of the following attributes set.

name: a string of 32 hexadecimal characters representing the name of the ticket encryption key. The value is only used for identifying the key; it can be generated by calling a PRNG.

cipher: name of the symmetric cipher used to protect the session tickets. The only supported values are: aes-128-cbc and aes-256-cbc (the default).

hash: the hash algorithm to be used for validating the session tickets. The only supported value is: sha256.

key: concatenation of the key for the symmetric cipher and the HMAC, encoded as hexadecimal characters. The length of the string should be the sum of the cipher key length plus the hash key length, multiplied by two (due to hexadecimal encoding); i.e. 96 bytes for aes-128-cbc/sha256 or 128 bytes for aes-256-cbc/sha256.

not_before: the time from when the key can be used for encrypting the session tickets. The value is encoded as milliseconds since epoch (Jan 1 1970).
When rotating the encryption keys manually on multiple servers, you should set the not_before attribute of the newly added key to some time in the future, so that the all the servers will start us- ing the new key at the same moment. not_after until when the key can be used for encrypting the session tickets The following example shows a YAML file containing two session ticket encryption keys. The first entry is used for encrypting new keys on Jan 5 2018. The second entry is used for encrypting new keys on Jan 6 2018. Example: session ticket key file - name: c173437296d6c2307fd39b40c944c227 cipher: aes-256-cbc hash: sha256 key: e54210a0f6a6319aa155a33b8babd772319bad9f27903746dfbe6df7a4058485a8cedb057cfc5b70080cda2354fc3e13 not_before: 1515110400000 # 2018-01-05 00:00:00.000 not_after: 1515196799999 # 2018-01-05 23:59:59.999 - name: bb1a15d75dc498624890dc5a7e164675 cipher: aes-256-cbc hash: sha256 key: b4120bc903d6521fefa357ac322561fc97aa9e5ae5e18eade64832439b9095ab80f8429d6b50ff9c4c5eca1f90717d30 not_before: 1515196800000 # 2018-01-06 00:00:00.000 not_after: 1515283199999 # 2018-01-06 23:59:59.999 Other Attributes Following attributes are common to cache-based and ticket-based session resumption. lifetime: defines the lifetime of a TLS session; when it expires the session cache entry is purged, and establishing a new connection will require a full TLS handshake. Default value is 3600 (in seconds). memcached: specifies the location of memcached used by the memcached stores. The value must be a mapping with host attribute specifying the address of the memcached server, and optionally a port attribute specifying the port number (default is 11211). By default, the memcached client uses the BINARY protocol. Users can opt-in to using the legacy ASCII proto- col by adding a protocol attribute set to ASCII. strict-match A boolean flag designating if the current host element should not be considered as the fallback element. See hosts. tcp-reuseport A boolean flag designating if TCP socket listeners should be opened with the SO_REUSEPORT option. temp-buffer-path (since v2.0) Directory in which temporary buffer files are created. H2O uses an internal structure called h2o_buffer_t for buffering vari- ous kinds of data (e.g. POST content, response from upstream HTTP or FastCGI server). When amount of the data allocated in the buffer ex- ceeds the default value of 32MB, it starts allocating storage from the directory pointed to by the directive. The threshold can be tuned or disabled using the temp-buffer-threshold directive. By using the directive, users can set the directory to one within a memory-backed file system (e.g. tmpfs) for speed, or specify a disk- based file system to avoid memory pressure. Note that the directory must be writable by the running user of the server. See also: user temp-buffer-threshold temp-buffer-threshold (since v2.2.5) Minimum size to offload a large memory allocation to a temporary buffer. Users can use this directive to tune the threshold for when the server should use temporary buffers. The minimum value accepted is 1MB (1048576) to avoid overusing these buffers, which will lead to perfor- mance degradation. If omitted, the default of 32MB is used. The user can disable temporary buffers altogether by setting this threshold to OFF. See also: temp-buffer-path user Username under which the server should handle incoming requests. If the directive is omitted and if the server is started under root privileges, the server will attempt to setuid to nobody. 
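For illustration, the following sketch combines the directives described above; the user name and the tmpfs directory are assumptions chosen for the example, not values from the manual.

    user: www-data
    temp-buffer-path: /dev/shm/h2o-buffers
    temp-buffer-threshold: 16777216   # offload allocations larger than 16MB to temporary buffers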
crash-handler (since v2.1)
Script to invoke if h2o receives a fatal signal. Note: this feature is only available when linking to the GNU libc.

The script is invoked if one of the SIGABRT, SIGBUS, SIGFPE, SIGILL or SIGSEGV signals is received by h2o.

h2o writes the backtrace as provided by backtrace() and backtrace_symbols_fd to the standard input of the program.

If the path is not absolute, it is prefixed with ${H2O_ROOT}/.

crash-handler.wait-pipe-close (since v2.1)
Whether h2o should wait for the crash handler pipe to close before exiting.

When this setting is ON, h2o will wait for the pipe to the crash handler to be closed before exiting. This can be useful if you use a custom handler that inspects the dying process.

stash (since v2.3)
Directive used to store reusable YAML variables.

This directive does nothing itself, but can be used to store YAML variables and reuse those using YAML Alias.

Example: Reusing stashed variables across multiple hosts

    stash:
      ssl: &ssl
        port: 443
      paths: &paths
        /:
          file.dir: /path/to/root
    hosts:
      "example.com":
        listen:
          <<: *ssl
          ...
        paths: *paths
      ...

COMPRESS DIRECTIVES
The compress handler performs on-the-fly compression - it compresses the contents of an HTTP response as it is being sent, if the client indicates itself to be capable of decompressing the response transparently with the use of the Accept-Encoding header, and if the response is deemed compressible according to the following rules.

If the x-compress-hint response header does not exist or the value is auto, then whether the response is considered compressible depends on the is_compressible attribute assigned to the content type (see file.mime.addtypes). If the x-compress-hint response header exists and the value is on, the response is always considered to be compressible. If the value of the response header is set to off, then the response never gets compressed.

The following are the configuration directives recognized by the handler.

compress (since v2.0)
Enables on-the-fly compression of the HTTP response.

If the argument is ON, both brotli and gzip compression are enabled. If the argument is OFF, on-the-fly compression is disabled. If the argument is a sequence, the elements are the list of compression algorithms to be enabled. If the argument is a mapping, each key specifies the compression algorithm to be enabled, and the values specify the quality of the algorithms.

When both brotli and gzip are enabled and if the client supports both, H2O is hard-coded to prefer brotli.

Example: Enabling on-the-fly compression

    # enable all algorithms
    compress: ON
    # enable by name
    compress: [ gzip, br ]
    # enable gzip only
    compress: [ gzip ]

See also: file.send-compressed, file.mime.addtypes

compress-minimum-size (since v2.0)
Defines the minimum size a file needs to have in order for H2O to compress the response.

gzip (since v1.5)
Enables on-the-fly compression of the HTTP response using gzip. Equivalent to compress: [ gzip ].

See also: compress

HTTP/1 DIRECTIVES
This document describes the configuration directives for controlling the HTTP/1 protocol handler.

http1-request-timeout
Timeout for incoming requests in seconds.

http1-request-io-timeout
Timeout for incoming request I/O in seconds.

http1-upgrade-to-http2
Boolean flag (ON or OFF) indicating whether or not to allow upgrade to HTTP/2.

HTTP/2 DIRECTIVES
H2O provides one of the world's most sophisticated HTTP/2 protocol implementations, including the following features.
Prioritization
H2O is one of the few servers that fully implement prioritization of HTTP responses conformant to what is defined in the HTTP/2 specification. The server implements an O(1) scheduler that determines which HTTP response should be sent to the client, per every 16KB chunk.

Unfortunately, some web browsers fail to specify response priorities that lead to the best end-user experience. H2O is capable of detecting such web browsers, and if it does, uses server-driven prioritization; i.e. it sends responses with certain MIME-types before others.

It is possible to tune or turn off server-driven prioritization using the directives: file.mime.addtypes, http2-reprioritize-blocking-assets.

See also: Download Timings Benchmark, HTTP/2 (and H2O) improves user experience over HTTP/1.1 or SPDY

Server push
H2O recognizes link headers with the preload keyword sent by a backend application server (reverse proxy or FastCGI) or an mruby handler, and pushes the designated resource to a client.

Example: A link response header triggering HTTP/2 push

    link: </path/to/script.js>; rel=preload; as=script

When the HTTP/2 driver of H2O recognizes a link response header with the rel=preload attribute set, and if all of the following conditions are met, the specified resource is pushed to the client.

    - configuration directive http2-push-preload is not set to OFF
    - the link header does not have the nopush attribute set
    - the link header is not part of a pushed response
    - the client does not disable HTTP/2 push
    - number of the pushed responses in-flight is below the negotiated threshold
    - authority of the resource specified is equivalent to the request that tried to trigger the push
    - (for handlers that return the status code synchronously) the status code of the response to be pushed does not indicate an error (i.e. 4xx or 5xx)

The server also provides a mechanism to track the clients' cache state via cookies, and to push the resources specified with the link header only when they do not exist within the clients' cache. For details, please refer to the documentation of the http2-casper configuration directive.

When a resource is pushed, the priority is determined using the priority attribute of the MIME-type configuration. If the priority is set to highest then the resource will be sent to the client before anything else; otherwise the resource will be sent to the client after the main content, as defined by the HTTP/2 specification.

HTTP/1.1 allows a server to send an informational response (see RFC 7230 section 6.2) before sending the final response. Starting from version 2.1, web applications can take advantage of the informational response to initiate HTTP/2 pushes before starting to process the request. The following example shows what such responses would look like.

Example: 100 response with link headers

    HTTP/1.1 100 Continue
    Link: </path/to/style.css>; rel=preload; as=style
    Link: </path/to/script.js>; rel=preload; as=script

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8

Pushed responses will have the x-http2-push: pushed header set; by looking for the header, it is possible to determine if a resource has been pushed. It is also possible to log the value in the access log by specifying %{x-http2-push}o; pushed responses that were cancelled by CASPER will have the value of the header logged as cancelled.

See also: Optimizing performance of multi-tier web applications using HTTP/2 push

Latency Optimization
When using HTTP/2, a client often issues high-priority requests (e.g. requests for CSS and JavaScript files that block the rendering) while a lower-priority response (e.g.
HTML) is in flight. In such case, it is desirable for a server to switch to sending the response of the high- priority requests as soon as it observes the requests. In order to do so, send buffer of the TCP/IP stack should be kept empty except for the packets in-flight, and size of the TLS records must be small enough to avoid head-of-line blocking. The downside is that obeying the requirement increases the interaction between the server process and kernel, which result in consumption of more CPU cycles and slightly increased latency. Starting from version 2.1, H2O provides directives that lets the users tune how the TCP/IP stack is used depending on the observed RTT, CWND, and the additional latency imposed by the interaction between the server and the OS. For TCP/IP connections with greater RTT and smaller CWND than the con- figured threshold, the server will try to keep the size of HTTP/2 frames unsent as small as possible so that it can switch to sending a higher-priority response. Benchmarks suggest that users can expect in average 1 RTT reduction when this optimization is enabled. For connec- tions that do not meet the criteria, the server will utilize the TCP/IP stack in ordinary ways. The default values of the thresholds have been chosen that the opti- mization will come into action for mobile and long-distance networks but not when a proxy exists on the network. The optimization is supported only on Linux and OS X. The two are the operating systems that provide access to TCP_INFO and an interface to adjust the size of the unsent buffer (TCP_NOTSENT_LOWAT). Please refer to the documentation of the directives below to configure the optimization: http2-latency-optimization-min-rtt http2-latency-optimization-max-addi- tional-delay http2-latency-optimization-max-cwnd See also: Reorganizing Website Architecture for HTTP/2 and Beyond pp.14-21 The following describes the configuration directives for controlling the HTTP/2 protocol handler. http2-casper Configures CASPer (cache-aware server-push). When enabled, H2O maintains a fingerprint of the web browser cache, and cancels server-push suggested by the handlers if the client is known to be in possession of the content. The fingerprint is stored in a cookie named h2o_casper using Golomb-compressed sets (a compressed encoding of Bloom filter). If the value is OFF, the feature is disabled. Push requests (made by the handlers through the use of Link: rel=preload header) are processed regardless of whether if client already has the responses in its cache. If the value is ON, the feature is enabled with the defaults value specified below. If the value is mapping, the feature is enabled, rec- ognizing the following attributes. capacity-bits: number of bits used for the fingerprinting. Roughly speaking, the number of bits should be log2(1/P * number-of-assets-to- track) where P being the probability of false positives. Default is 13, enough for tracking about 100 asset files with 1/100 chance of false positives (i.e. log2(100 * 100) =~ 13). tracking-types: speci- fies the types of the content tracked by casper. If omitted or set to blocking-assets, maintains fingerprint (and cancels server push) for resources with mime-type of highest priority. If set to all, tracks all responses. It should be noted that the size of the cookie will be log2(P) * num- ber-of-assets-being-tracked bits multiplied by the overhead of Base 64 encoding (4/3). 
Therefore with current cookie-based implementation, it is necessary in many cases to restrict the resources being tracked to those have significant effect to user-perceived response time. Example: Enabling CASPer http2-casper: ON # `ON` is equivalent to: # http2-casper: # capacity-bits: 13 # tracking-types: blocking-assets See also: file.mime.addtypes, issue #421 http2-debug-state A directive to turn on the HTTP/2 Implementation Debug State. This experimental feature serves a JSON document at the fixed path /.well-known/h2/state, which describes an internal HTTP/2 state of the H2O server. To know the details about the response fields, please see the spec. This feature is only for developing and debugging use, so it's highly recommended that you disable this setting in the production environment. The value of this directive specifies the property set contained in the response. Available values are minimum or hpack. If hpack is speci- fied, the response will contain the internal hpack state of the same connection. If minimum is specified, the response doesn't contain the internal hpack state. In some circumstances, there may be a risk of information leakage on providing an internal hpack state. For example, the case that some proxies exist between the client and the server, and they share the connections among the clients. Therefore, you should specify hpack only when the server runs in the environments you can completely con- trol. This feature is considered experimental yet. For now, the implementa- tion conforms to the version draft-01 of the specification. See also: HTTP/2 Implementation Debug State (draft-01) http2-idle-timeout Timeout for idle connections in seconds. http2-input-window-size (since v2.3) Default window size for HTTP request body. The value is the maximum amount of request body (in bytes) that can be sent by the client in 1 RTT (round-trip time). http2-max-streams Maximum number of streams to advertise via HTTP2 SETTINGS_MAX_CONCUR- RENT_STREAMS. Limits the number of active requests that the proxy will accept on a client connection. Also see "http2-max-concurrent-requests-per-connec- tion" which controls the number of requests that the proxy will process concurrently. http2-max-concurrent-requests-per-connection Maximum number of requests to be handled concurrently within a single HTTP/2 connection. The value cannot exceed 256. http2-max-concurrent-streaming-requests-per-connection Maximum number of streaming requests to be handled concurrently within a single HTTP/2 connection. The value cannot exceed 256. http2-latency-optimization-min-rtt (since v2.1) Minimum RTT (in milliseconds) to enable latency optimiza- tion. Latency optimization is disabled for TCP connections with smaller RTT (round-trip time) than the specified value. Otherwise, whether if the optimization is used depends on other parameters. Setting this value to 4294967295 (i.e. UINT_MAX) effectively disables the optimization. http2-latency-optimization-max-additional-delay (since v2.1) Maximum additional delay (as the ratio to RTT) permitted to get latency optimization activated. Latency optimization is disabled if the additional delay imposed by the interaction between the OS and the TCP/IP stack is estimated to be greater than the given threshold. Otherwise, whether if the optimiza- tion is used depends on other parameters. http2-latency-optimization-max-cwnd (since v2.1) Maximum size (in octets) of CWND to get latency optimiza- tion activated. 
CWND is a per-TCP-connection variable that represents the number of bytes that can be sent within 1 RTT.

The server will not use or will stop using the latency optimization mode if CWND becomes greater than the configured value. In such case, the average size of HTTP/2 frames buffered unsent will be slightly above the tcp_notsent_lowat sysctl value.

http2-push-preload (since v2.1)
A boolean flag (ON or OFF) indicating whether the server should push resources when observing a link: rel=preload header.

http2-reprioritize-blocking-assets
A boolean flag (ON or OFF) indicating if the server should send contents with the highest priority before anything else.

To maximize the user-perceived responsiveness of a web page, it is essential for the web server to send blocking assets (i.e. CSS and JavaScript files in <head>) before any other files such as images. HTTP/2 provides a way for web browsers to specify such priorities to the web server. However, as of Sep. 2015, no major web browsers except Mozilla Firefox take advantage of the feature.

This option, when enabled, works as a workaround for such web browsers, thereby improving the experience of users using those web browsers. Technically speaking, it does the following:

    - if the client uses dependency-based prioritization, do not reprioritize
    - if the client does not use dependency-based prioritization, send the contents whose types are given the highest priority before any other responses

See also: file.mime.addtypes, HTTP/2 (and H2O) improves user experience over HTTP/1.1 or SPDY

http2-graceful-shutdown-timeout
A timeout in seconds. How long to wait before closing the connection on graceful shutdown.

Setting the timeout to 0 deactivates the feature: H2O will wait for the peer to close the connections.

http2-allow-cross-origin-push (since v2.3)
A boolean flag (ON or OFF) indicating whether the server should push resources belonging to a different authority.

http2-dos-delay
HTTP request processing delay to be applied when the client behavior is suspicious, in the unit of milliseconds.

When the client behavior is suspicious and there is a concern of a Denial-of-Service attack, h2o stops processing requests arriving on the suspicious connection for the specified amount of time.

HTTP/3
HTTP/3 uses QUIC as the transport protocol. A listen directive with a type attribute set to quic instructs the standalone server to bind to a UDP port on which QUIC packets will be sent and received. The binding must have an ssl attribute, as QUIC uses TLS/1.3 as the handshake protocol.

The example below sets up a server that listens to both TCP port 443 and UDP port 443 using the same certificate and private key. The first listen directive binds the server to TCP port 443 with the specified credentials, marking that directive with a YAML anchor called &listen_ssl. Then, it reuses (YAML merge) the first listen directive, adding type: quic to create a UDP port 443 binding for accepting QUIC connections.

Example: Serving HTTP/1,2 and 3 on port 443

    listen: &listen_ssl
      port: 443
      ssl:
        certificate-file: /path/to/ssl-certificate-file
        key-file: /path/to/ssl-key-file
    listen:
      <<: *listen_ssl
      type: quic

Fine-tuning QUIC Behavior
To fine tune the behavior of QUIC, the quic attribute should be used in place of the type attribute specifying quic. The quic attribute accepts the following parameters.

amp-limit
Amount of data that can be sent to the client before the client address is validated; see section 8.1 of RFC 9000. Default is 3.
ecn
A boolean flag (either ON or OFF) indicating whether the server should use ECN signals to detect congestion. The default setting is ON.

This flag affects the server's sending behavior. Regardless of this configuration, the server sends back ECN signals it receives using ACK_ECN frames.

handshake-timeout-rtt-multiplier
Handshake timeout in the unit of round-trip time. Default is 400.

jumpstart-default
Jumpstart enhances the slow start phase of congestion control by pacing a large number of packets for an entire round-trip time (RTT), allowing the server to assess the network's capacity sooner. This parameter specifies the number of packets sent during the jumpstart phase. The default value is zero, indicating that jumpstart is disabled for new connections.

jumpstart-max
Upon resuming QUIC connections, clients present tokens issued by the server that contain the bandwidth of the previous connection. This information enables the server to instantly match the previous send rate by adjusting the jumpstart window accordingly. This parameter sets a cap on the maximum size of the jumpstart window. If set to zero (the default), the server disregards the bandwidth values in the tokens. Nonetheless, the server may still initiate a jumpstart without using previous information, based on the jumpstart-default parameter.

max-initial-handshake-packets
Maximum number of Initial packets to be sent before the handshake is deemed to have failed. Default is 1,000.

max-streams-bidi
Maximum number of client-initiated bi-directional streams. This parameter controls the HTTP request concurrency of an HTTP/3 connection. Default is 100.

max-udp-payload-size
See Section 18.2 of RFC 9000. Default is 1,472.

pacing
A boolean flag (either OFF or ON) indicating whether sent packets should be paced. The default setting is OFF.

qpack-encoder-table-capacity
Size of the QPACK encoder table. Default is 4,096.

respect-app-limited
This boolean flag (either OFF or ON) indicates whether the server should respect the notion of rate-limited traffic when adjusting the size of the congestion window, as detailed in RFC 7661. The default setting is ON.

retry
A boolean flag (OFF or ON) indicating if a Retry packet should be used for validating the client address. Use of Retry packets mitigates denial-of-service attacks at the cost of incurring one additional round-trip for processing the handshake.

sndbuf, rcvbuf
Size of send and receive buffers, in the unit of bytes. Unlike the TCP counterparts that are per-connection, these buffers are associated with the listening port and apply to all the connections bound to that port.

The example below reuses a previous binding but sets the retry parameter to ON.

Example: HTTP/3 endpoint using Retry packets

    listen:
      <<: *listen_ssl
      quic:
        retry: ON

Also, properties such as the congestion controller and the initial congestion window can be tuned using the top-level attributes of listen.

HTTP/3 Directives
Aside from QUIC-level properties, the configuration directives listed below are provided for tuning HTTP/3 behavior.

http3-graceful-shutdown-timeout
Maximum duration to retain HTTP/3 connections in the half-closed state, in seconds.

When a graceful shutdown of h2o is initiated, h2o at first sends a GOAWAY frame indicating to the clients that it is initiating shutdown, then after one second, starts rejecting new HTTP requests (a.k.a. half-closed state). This directive controls how long h2o should wait for the peer to close the QUIC connection in this half-closed state, before exiting. If set to zero, this timeout is disabled.
In that case, h2o will not shut down until all QUIC connections are closed by the clients or time out.

http3-gso
A boolean flag indicating whether Generic Segmentation Offload should be used when sending QUIC packets.

http3-idle-timeout
Idle timeout in the unit of seconds. Unlike the idle timeout of HTTP/1 and HTTP/2, this value should be small, because it is faster to re-establish a new connection using 0-RTT than migrating to a different port due to NAT rebinding.

http3-input-window-size
Default window size for the HTTP request body. See http2-input-window-size.

http3-max-concurrent-streaming-requests-per-connection
Maximum number of streaming requests to be handled concurrently within a single HTTP/3 connection.

ACCESS LOG DIRECTIVES

This document describes the configuration directives of the access_log handler.

access-log
The directive sets the path and optionally the format of the access log.

If the supplied argument is a scalar, it is treated as the path of the log file, or, if the value starts with a |, it is treated as a command to which the log should be emitted.

The latter approach (i.e. |) needs to be used for rotating the logs. This is because the log file is opened (or the command that emits the log is spawned) before dropping privileges so that it can be owned by root or any other user; therefore it cannot be reopened by the server process itself once it starts running.

Example: Emit access log to file

       access-log: /path/to/access-log-file

Example: Emit access log through pipe

       access-log: "| rotatelogs /path/to/access-log-file.%Y%m%d 86400"

If the supplied argument is a mapping, its path property is considered as the path of the log file or the pipe command, and the format property is treated as the format of the log file.

Starting from version 2.2, the escape property can be used to specify the escape sequence that should be used to emit unsafe octets. Two forms of escape sequences are supported. If apache is specified as the value of the escape property, unsafe octets are emitted in the form of \xNN, where NN is a hexadecimal number in lower case. If json is specified, unsafe octets are emitted in the form of \u00NN. apache is the default escape method.

Example: Emit access log to file using Common Log Format

       access-log:
         path: /path/to/access-log-file
         format: "%h %l %u %t \"%r\" %s %b"
         escape: apache

The list of format strings recognized by H2O is as follows.

       Format String    Description
       %%               the percent sign
       %A               local address (e.g. 4.5.6.7)
       %b               size of the response body in bytes
       %H               request protocol as sent by the client (e.g. HTTP/1.1)
       %h               remote address (e.g. 1.2.3.4)
       %l               remote logname (always -)
       %m               request method (e.g. GET, POST)
       %p               local port (%{local}p is a synonym that is supported since version 2.2)
       %{remote}p       remote port (since version 2.2)
       %q               query string (? is prepended if it exists, otherwise an empty string)
       %r               request line (e.g. GET / HTTP/1.1)
       %s               status code sent to client (e.g. 200)
       %<s              status code received from upstream (or initially generated)
       %t               time when the request was received in format:
                        [02/Jan/2006:15:04:05 -0700]
       %{FORMAT}t       time when the request was received using the specified format. FORMAT
                        should be an argument to strftime, or one of:
                            sec        number of seconds since Epoch
                            msec       number of milliseconds since Epoch
                            usec       number of microseconds since Epoch
                            msec_frac  millisecond fraction
                            usec_frac  microsecond fraction
                        As an example, it is possible to log timestamps in millisecond
                        resolution using %{%Y/%m/%d:%H:%M:%S}t.%{msec_frac}t, which results in
                        a timestamp like 2006/01/02:15:04:05.000.
       %U               requested URL path, not including the query string
       %u               remote user if the request was authenticated (always -)
       %V               requested server name (or the default server name if not specified by
                        the client)
       %v               canonical server name
       %{VARNAME}e      request environment variable (since version 2.3; see Logging Arbitrary
                        Variable)
       %{HEADERNAME}i   value of the given request header (e.g. %{user-agent}i)
       %{HEADERNAME}o   value of the given response header sent to client (e.g. %{set-cookie}o)
       %<{HEADERNAME}o  value of the response header received from upstream (or initially
                        generated)
       %{NAME}x         various extensions. NAME must be one listed in the following tables. A
                        dash (-) is emitted if the directive is not applicable to the request
                        being logged.

Access Timings

       Name                 Description
       connect-time         time spent to establish the connection (i.e. since the connection
                            gets accept(2)-ed until the first octet of the request is received)
       request-header-time  time spent receiving request headers
       request-body-time    time spent receiving request body
       request-total-time   sum of request-header-time and request-body-time
       process-time         time spent after receiving request, before starting to send response
       response-time        time spent sending response
       duration             sum of request-total-time, process-time, response-time
       total-time           same as duration (since v2.3)

Proxy Timings (since v2.3)

       Name                 Description
       proxy.idle-time      time spent after receiving request, before starting to connect to
                            the upstream
       proxy.connect-time   time spent to establish the connection (including SSL handshake)
       proxy.request-time   time spent sending request (header and body)
       proxy.process-time   time spent after sending request, before starting to receive
                            response
       proxy.response-time  time spent receiving response
       proxy.total-time     sum of proxy-request-time, proxy-process-time, proxy-response-time

Proxy (since v2.3)

       Name                         Description
       proxy.request-bytes          number of bytes used by the proxy handler for sending the
                                    request (above TLS layer)
       proxy.request-bytes-header   number of bytes used by the proxy handler for sending the
                                    request header (above TLS layer)
       proxy.request-bytes-body     number of bytes used by the proxy handler for sending the
                                    request body (above TLS layer)
       proxy.response-bytes         number of bytes used by the proxy handler for receiving the
                                    response (above TLS layer)
       proxy.response-bytes-header  number of bytes used by the proxy handler for receiving the
                                    response header (above TLS layer)
       proxy.response-bytes-body    number of bytes used by the proxy handler for receiving the
                                    response body (above TLS layer)

Connection (since v2.0)

       Name                  Description
       connection-id         64-bit internal ID assigned to every client connection
       ssl.protocol-version  SSL protocol version obtained from SSL_get_version
       ssl.session-reused    1 if the SSL session was reused, or 0 if not [1]
       ssl.session-id        base64-encoded value of the session id used for resuming the
                             session (since v2.2)
       ssl.cipher            name of the cipher suite being used, obtained from
                             SSL_CIPHER_get_name
       ssl.cipher-bits       strength of the cipher suite in bits
       ssl.server-name       hostname provided in the Server Name Indication (SNI) extension,
                             if any

Upstream Proxy Connection (since v2.3)

       Name                        Description
       proxy.ssl.protocol-version  SSL protocol version obtained from SSL_get_version
       proxy.ssl.session-reused    1 if the SSL session was reused, or 0 if not
       proxy.ssl.cipher            name of the cipher suite being used, obtained from
                                   SSL_CIPHER_get_name
       proxy.ssl.cipher-bits       strength of the cipher suite in bits

HTTP/2 (since v2.0)

       Name                     Description
       http2.stream-id          stream ID
       http2.priority.received  colon-concatenated values of exclusive, parent, weight
       http2.priority.received.exclusive  exclusive bit of the most recent priority specified
                                          by the client
       http2.priority.received.parent     parent stream ID of the most recent priority
                                          specified by the client
       http2.priority.received.weight     weight of the most recent priority specified by the
                                          client

Miscellaneous

       Name   Description
       error  request-level errors. Unless specified otherwise by using the
              error-log.emit-request-errors directive, the same messages are emitted to the
              error-log. (since v2.1)

The default format is %h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i", a.k.a. the NCSA extended/combined log format.

Note that you may need to quote (and escape) the format string as required by YAML (see Yaml Cookbook).

See also: error-log, error-log.emit-request-errors

Notes:
[1] A single SSL connection may transfer more than one HTTP request.

ERRORDOC DIRECTIVES

This document describes the configuration directives of the errordoc handler.

error-doc
Specifies the content to be sent when returning an error response (i.e. a response with a 4xx or 5xx status code).

The argument must be a mapping containing the following attributes, or, if it is a sequence, every element must be a mapping with the following attributes.

       status  three-digit number indicating the status code (or a sequence of them, from
               version 2.3)
       url     URL of the document to be served

URL can either be absolute or relative. Only the content-type, content-language, and set-cookie headers obtained from the specified URL are served to the client.

Example: Set error document for 404 status

       error-doc:
         status: 404
         url: /404.html

Example: Set error document for 500 and 503 status

       error-doc:
         - status: 500
           url: /internal-error.html
         - status: 503
           url: /service-unavailable.html

Example: Set error document for 50x statuses (From version 2.3)

       error-doc:
         status: [500, 502, 503, 504]
         url: /50x.html

EXPIRES DIRECTIVES

This document describes the configuration directives of the expires handler.

expires
An optional directive for setting the Cache-Control: max-age= header.

       if the argument is OFF, the feature is not used

       if the value is NUMBER UNIT, then the header is set

       the units recognized are: second, minute, hour, day, month, year

       the units can also be used in plural forms

Example: Set Cache-Control: max-age=86400

       expires: 1 day

You can also find an example that conditionally sets the header depending on the aspects of a request in the Modifying the Response section of the Mruby directives documentation.

FASTCGI DIRECTIVES

This document describes the configuration directives of the FastCGI handler.

The configuration directives of the FastCGI handler can be categorized into two groups. fastcgi.connect and fastcgi.spawn define the address (or the process) to which the requests should be sent. Other directives customize how the connections to the FastCGI processes should be maintained.

fastcgi.connect
The directive specifies the address at which the FastCGI daemon is running.

If the argument is a mapping, the following properties are recognized.

       host  name (or IP address) of the server running the FastCGI daemon (ignored if type is
             unix)
       port  TCP port number or path to the unix socket
       type  either tcp (default) or unix

If the argument is a scalar, the value is considered as a TCP port number and the host is assumed to be 127.0.0.1.
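As a minimal sketch of the scalar form described above (the host name example.com and the port number 9000 are placeholders chosen for illustration, not defaults mandated by H2O), such a configuration might look like the following and would connect to a FastCGI daemon listening on 127.0.0.1:9000.

Example: Connecting to a FastCGI daemon on a local TCP port (hypothetical values)

       hosts:
         "example.com:80":
           paths:
             "/app":
               fastcgi.connect: 9000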
Example: Map /app to FastCGI daemon listening to /tmp/fcgi.sock

       hosts:
         "example.com:80":
           paths:
             "/app":
               fastcgi.connect:
                 port: /tmp/fcgi.sock
                 type: unix

fastcgi.document_root
Sets the DOCUMENT_ROOT variable to be passed to the FastCGI application.

fastcgi.spawn
The directive specifies the command to start the FastCGI process manager.

In contrast to fastcgi.connect, which connects to a FastCGI server running externally, this directive launches a FastCGI process manager under the control of H2O, and terminates it when H2O quits. The argument is a /bin/sh -c expression to be executed when H2O boots up. The HTTP server records the process id of the expression, and sends SIGTERM to the id when it exits.

Example: Map .php files to 10 worker processes of /usr/local/bin/php-cgi

       file.custom-handler:
         extension: .php
         fastcgi.spawn: "PHP_FCGI_CHILDREN=10 exec /usr/local/bin/php-cgi"

Example: Map any executable file in path /var/www/data/cgi-bin to fastcgi-cgi wrapper

       "/cgi-bin":
         file.dir: /var/www/data/cgi-bin
         file.custom-handler:
           extension: default # means "no extension" in this case
           fastcgi.spawn:
             command: "exec /usr/local/share/h2o/fastcgi-cgi"

As of version 1.4.0, the spawned process is run under the privileges of the user specified by the user directive (in version 1.3.x, the FastCGI process was spawned under the privileges of the user that spawned the H2O standalone server). It is possible to specify a different user for running the FastCGI process, by providing a mapping that contains an attribute named user together with an attribute named command.

Example: Running FastCGI processes under user fastcgi

       file.custom-handler:
         extension: .php
         fastcgi.spawn:
           command: "PHP_FCGI_CHILDREN=10 exec /usr/local/bin/php-cgi"
           user: fastcgi

fastcgi.timeout.io
Sets the I/O timeout of connections to the FastCGI process, in milliseconds.

fastcgi.timeout.keepalive
Sets the keep-alive timeout for idle connections, in milliseconds. FastCGI connections will not be persistent if the value is set to zero (default).

fastcgi.send-delegated-uri
Send the modified HTTP_HOST and REQUEST_URI being rewritten in case of internal redirect.

In H2O, it is possible to perform internal redirects (a.k.a. delegations or URL rewrites) using the redirect directive or by returning X-Reproxy-URL headers from web applications. The directive specifies whether to send the original values to the FastCGI process (default), or if the rewritten values should be sent.

FILE DIRECTIVES

This document describes the configuration directives of the file handler - a handler for serving static files.

Two directives, file.dir and file.file, are used to define the mapping. Other directives modify the behavior of the mappings defined by the two.

file.custom-handler
The directive maps extensions to a custom handler (e.g. FastCGI).

The directive accepts a mapping containing configuration directives that can be used at the extension level, together with a property named extension specifying an extension (starting with .) or a sequence of extensions to which the directives should be applied. If all the files (including those without extensions) shall be mapped, this property must be set to default. Only one handler must exist within the directives.

Example: Mapping PHP files to FastCGI

       file.custom-handler:
         extension: .php
         fastcgi.connect:
           port: /tmp/fcgi.sock
           type: unix

file.dir
The directive specifies the directory under which the files should be served for the corresponding path.
Example: Serving files under different paths

       paths:
         "/":
           file.dir: /path/to/doc-root
         "/icons":
           file.dir: /path/to/icons-dir

See also: file.dirlisting, file.file, file.index

file.dirlisting
A boolean flag (OFF, or ON) specifying whether or not to send the directory listing in case none of the index files exist.

See also: file.dir

file.etag
A boolean flag (OFF, or ON) specifying whether or not to send etags.

file.file
(since v2.0) The directive maps a path to a specific file.

Example: Mapping a path to a specific file

       paths:
         /robots.txt:
           file.file: /path/to/robots.txt

See also: file.dir

file.index
Specifies the names of the files that should be served when the client sends a request against the directory. The sequence of filenames is searched from left to right, and the first file that exists is sent to the client.

See also: file.dir

file.mime.addtypes
The directive modifies the MIME mappings by adding the specified MIME type mappings.

Example: Adding MIME mappings

       file.mime.addtypes:
         "application/javascript": ".js"
         "image/jpeg": [ ".jpg", ".jpeg" ]

The default mapping is hard-coded in lib/handler/mimemap/defaults.c.h.

It is also possible to set certain attributes for a MIME type. The example below maps .css files to the text/css type, setting the is_compressible flag to ON and priority to highest.

Example: Setting MIME attributes

       file.mime.settypes:
         "text/css":
           extensions: [".css"]
           is_compressible: yes
           priority: highest

The following attributes are recognized.

       Attribute        Possible Values   Description
       is_compressible  ON, OFF           if content is compressible
       priority         highest, normal   send priority of the content

The priority attribute affects how the HTTP/2 protocol implementation handles the request. For detail, please refer to the HTTP/2 directives listed in the see also section below. By default, the MIME types for CSS and JavaScript files are the only ones that are given highest priority.

See also: compress, http2-casper, http2-reprioritize-blocking-assets

file.mime.removetypes
Removes the MIME mappings for specified extensions supplied as a sequence of extensions.

Example: Removing MIME mappings

       file.mime.removetypes: [ ".jpg", ".jpeg" ]

file.mime.setdefaulttype
Sets the default MIME type that is used when an extension does not exist in the MIME mappings.

file.mime.settypes
Resets the MIME mappings to the given mapping.

Example: Resetting the MIME mappings to minimum

       file.mime.settypes:
         "text/html":  [ ".html", ".htm" ]
         "text/plain": ".txt"

file.send-compressed
(since v2.0) A flag indicating how a pre-compressed file should be served.

If set to ON, the handler looks for a file with .br, .zstd, or .gz appended and sends that file, if the client is capable of transparently decoding a brotli, zstd, or gzip-encoded response. For example, if a client requests a file named index.html with an Accept-Encoding: gzip header and if index.html.gz exists, the .gz file is sent as a response together with a Content-Encoding: gzip response header. When both the client and the server support multiple content encodings, the encoding chosen is the first one listed in the following order: brotli, zstd, gzip.

If set to OFF, the handler always serves the file specified by the client.

Starting from version 2.2, gunzip is also supported. If set, the handler acts identically to when the value was set to ON. In addition, the handler will send an uncompressed response by dynamically decompressing the .gz file if the client and the server failed to agree on using a pre-compressed file as the response and if a non-compressed file was not found.
The option is useful when conserving disk space is important; it is possible to remove the uncompressed files and retain only the gzipped ones.

See also: compress

file.send-gzip
Obsoleted in 2.0. Synonym of file.send-compressed.

HEADERS DIRECTIVES

Headers directives can be used to manipulate response headers. This document describes the following configuration directives as well as when they are applied.

All the directives accept one header field (specified by a YAML scalar), or multiple header fields (specified by a YAML sequence).

header.add
Adds a new header line to the response headers, regardless of whether a header with the same name already exists.

Example: Setting the Set-Cookie header

       header.add: "Set-Cookie: test=1"

header.append
Adds a new header line, or appends the value to the existing header with the same name, separated by ,.

header.merge
Adds a new header line, or merges the value into the existing header of comma-separated values.

The following example sets the must-revalidate attribute of the Cache-Control header when and only when the attribute is not yet set.

Example: Setting the must-revalidate attribute

       header.merge: "Cache-Control: must-revalidate"

header.set
Sets a header line, removing headers with the same name if they exist.

Example: Setting the X-Content-Type-Options: nosniff header

       header.set: "X-Content-Type-Options: nosniff"

header.setifempty
Sets a header line when and only when a header with the same name does not already exist.

header.unset
Removes headers with the given name.

Example: Removing the X-Powered-By header

       header.unset: "X-Powered-By"

header.unsetunless
Removes all headers but those listed.

Example: Remove all headers but If-Match and If-Modified-Since

       header.unsetunless:
         - "If-Match"
         - "If-Modified-Since"

Timing of Application

Starting from v2.3, it is possible to specify the timing when the header directives are applied. All of the header directives can take either of the following two forms.

Example: Scalar Form

       header.add: "X-Foo: FOO"

Example: Mapping Form

       header.add:
         header: "X-Foo: FOO"
         when: final

The above two are identical.

when can be either of: final, early, all. Default is final. If the value is final, the header directive is only applied to final (i.e. non-1xx) responses. If the value is early, it is only applied to 1xx informational responses. If all is set, it is applied to both final and 1xx responses.

MRUBY DIRECTIVES

The following are the configuration directives of the mruby handler. Please refer to Using mruby to find out how to write handlers using mruby.

mruby.handler
Upon start-up, evaluates the given mruby expression, and uses the returned mruby object to handle the incoming requests.

Example: Hello-world in mruby

       mruby.handler: |
         Proc.new do |env|
           [200, {'content-type' => 'text/plain'}, ["Hello world\n"]]
         end

Note that the provided expression is evaluated more than once (typically for every thread that accepts incoming connections).

See also: mruby.handler-file

mruby.handler-file
Upon start-up, evaluates the given mruby file, and uses the returned mruby object to handle the incoming requests.

Example: Hello-world in mruby

       mruby.handler-file: /path/to/my-mruby-handler.rb

Note that the provided expression is evaluated more than once (typically for every thread that accepts incoming connections).

See also: mruby.handler

PROXY DIRECTIVES

Proxy module is the proxy implementation for H2O - it implements a reverse HTTP proxy and a CONNECT proxy.

A reverse HTTP proxy is set up using the proxy.reverse.url directive.
A CONNECT proxy is set up using the proxy.connect directive.

When acting as a reverse HTTP proxy, the following request headers are added and forwarded to the backend server:

       via
       x-forwarded-for
       x-forwarded-proto

By default, all requests to the backend server are sent using HTTP/1.1. Use of HTTP/2 and HTTP/3 to backend servers is considered experimental; their use can be controlled via the directives proxy.http2.ratio and proxy.http3.ratio.

The following sections describe the configuration directives defined for the module.

proxy.reverse.url
Forwards the requests to the specified backends, and proxies the responses.

Example: Forwarding the requests to application server running on 127.0.0.1:8080

       proxy.reverse.url: "http://127.0.0.1:8080/"

Example: Forwarding the requests to multiple application servers with different weights

       proxy.reverse.url:
         - http://10.0.0.1:8080/
         - url: http://10.0.0.2:8080/different-path
           weight: 2

Example: Forwarding the requests to multiple application servers with least connection

       proxy.reverse.url:
         backends:
           - http://10.0.0.1:8080/
           - http://10.0.0.2:8080/
         balancer: least-conn

When more than one backend is declared, the load is distributed among the backends using the strategy specified by the balancer property. Currently we support round-robin (the default) and least-conn as the value of the property. The strategies are applied when establishing a new connection becomes necessary (i.e. when no pooled connections exist).

weight can be assigned to each backend as an integer between 1 and 256. The default value is 1. For the round-robin balancer, weight is respected in this way: each backend would be selected exactly weight times before the next backend is selected, except when the backend is not accessible. For the least-conn balancer, weight is respected in this way: the selected backend should have the minimum value of (request count) / (weight).

H2O will try to reconnect to different backends (in the order determined by the load balancing strategy) until it successfully establishes a connection. It returns an error when it fails to connect to all of the backends.

In addition to TCP/IP over IPv4 and IPv6, the proxy handler can also connect to an HTTP server listening to a Unix socket. The path to the unix socket should be surrounded by square brackets, and prefixed with unix: (e.g. http://[unix:/path/to/socket]/path).

proxy.connect
Sets up a CONNECT proxy, taking an access control list as the argument.

Each element of the access control list starts with either + or - followed by a wildcard (*) or an IP address with an optional netmask and an optional port number.

When a CONNECT request is received and the name resolution of the connect target is complete, the access control list is searched from top to bottom. If the first entry that contains a matching address (and optionally the port number) starts with a +, the request is accepted and a tunnel is established. If the entry starts with a -, the request is rejected. If none of the entries match, the request is also rejected.

Example: Simple HTTPS proxy

       proxy.connect:
         - "-192.168.0.0/24"  # reject any attempts to local network
         - "+*:443"           # accept attempts to port 443 of any host

Note: The precise syntax of the access control list element is address:port/netmask. This is because the URL parser is reused.

The directive can only be used for the root path (i.e., /), as the classic CONNECT does not specify the path.

proxy.connect-udp
Sets up a CONNECT-UDP gateway as defined by RFC 9298.
The supplied argument is an access control list, using the same format as that of proxy.connect.

Support for draft-03 of the CONNECT-UDP protocol is controlled separately; see proxy.connect.masque-draft-03.

proxy.connect.emit-proxy-status
A boolean flag (ON or OFF) designating if a proxy-status response header should be sent.

See also: proxy.proxy-status.identity

proxy.connect.masque-draft-03
A boolean flag (ON or OFF) indicating if CONNECT-UDP requests conforming to draft-ietf-masque-connect-udp-03 should be handled.

This directive alters the behavior of proxy.connect because the CONNECT-UDP method defined in draft-03 followed the approach of the CONNECT method, which uses an HTTP proxy as a tunnel. The published RFC switched to specifying the tunnel by the target URI, and as a result, it is supported by a different directive: proxy.connect-udp.

proxy.emit-x-forwarded-headers
(since v2.1) A boolean flag (ON or OFF) indicating if the server will append or add the x-forwarded-proto and x-forwarded-for request headers.

By default, when forwarding an HTTP request, H2O sends its own x-forwarded-proto and x-forwarded-for request headers (or might append its value in the x-forwarded-proto case, see proxy.preserve-x-forwarded-proto). This might not always be desirable. Please keep in mind the security implications when setting this to OFF, since it might allow an attacker to spoof the originator or the protocol of a request.

See also: proxy.emit-via-header

proxy.emit-via-header
(since v2.2) A boolean flag (ON or OFF) indicating if the server adds or appends an entry to the via request header.

See also: proxy.emit-x-forwarded-headers

proxy.emit-missing-date-header
(since v2.3) A boolean flag (ON or OFF) indicating if H2O should add a date header to the response, if that header is missing from the upstream response.

proxy.expect
(since v2.3) A boolean flag (ON or OFF) indicating if H2O should send an expect: 100-continue header with the request, and postpone sending the request body until it receives a 100 response.

proxy.forward.close-connection
A boolean flag indicating if closure of the backend connection should trigger the closure of the frontend HTTP/1.1 connection.

proxy.happy-eyeballs.connection-attempt-delay
Connection Attempt Delay parameter of Happy Eyeballs v2.

When trying to establish a connection to the CONNECT target, H2O uses Happy Eyeballs v2 (RFC 8305). This parameter controls the Connection Attempt Delay parameter of Happy Eyeballs v2 in the unit of milliseconds.

At the moment, Happy Eyeballs is used only when acting as a CONNECT proxy. It is not used when running as an HTTP reverse proxy.

proxy.happy-eyeballs.name-resolution-delay
Name Resolution Delay parameter of Happy Eyeballs v2. For detail, see proxy.happy-eyeballs.connection-attempt-delay.

proxy.header.add
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.add except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.append
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.append except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client.
Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.merge
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.merge except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.set
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.set except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.setifempty
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.setifempty except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.unset
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.unset except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.unsetunless
(since v2.2) Modifies the request headers sent to the application server.

The behavior is identical to header.unsetunless except for the fact that it affects the request headers sent to the application server rather than the response headers sent to the client. Please refer to the documentation of the headers handler to see how the directives can be used to mangle the headers.

proxy.header.cookie.unset
Removes cookies in the requests with the given name.

See also: header.unset

proxy.header.cookie.unsetunless
Removes all cookies in the requests but those with the given names.

See also: header.unsetunless

proxy.http2.force-cleartext
See proxy.http2.ratio.

proxy.http2.max-concurrent-streams
Maximum number of concurrent requests issuable on one HTTP/2 connection to the backend server.

The actual maximum number of requests in flight will be capped to the minimum of this setting and the value advertised in the HTTP/2 SETTINGS frame of the backend server.

See also: proxy.http2.ratio

proxy.http2.ratio
Ratio of forwarded HTTP requests with which use of HTTP/2 should be attempted.

When the backend protocol is HTTPS, for the given ratio of HTTP requests, h2o will either attempt to create or reuse an existing HTTP/2 connection. Connection attempts to use HTTP/2 will be indicated to the server via ALPN, with fallback to HTTP/1.1.

When the backend protocol is cleartext HTTP, this directive has impact only when the ratio is set to 100 with proxy.http2.force-cleartext set to ON. In that case, all backend connections will use HTTP/2 without negotiation.

proxy.http3.ratio
Ratio of forwarded HTTP requests with which use of HTTP/3 should be attempted.

When the backend protocol is HTTPS, for the given ratio of HTTP requests, h2o will either attempt to create or reuse an existing HTTP/3 connection.
When the backend protocol is cleartext HTTP, this directive has no impact.

proxy.max-buffer-size
This setting specifies the maximum amount of userspace memory / disk space used for buffering each HTTP response being forwarded, in the unit of bytes.

By default, h2o buffers an unlimited amount of data being sent from backend servers. The intention behind this approach is to free up backend connections as soon as possible, under the assumption that the backend server might have lower concurrency limits than h2o. But if the backend server has enough concurrency, proxy.max-buffer-size can be used to restrict the memory / disk pressure caused by h2o at the cost of having more connections to the backend server.

See also: temp-buffer-threshold

proxy.max-spare-pipes
This setting specifies the maximum number of pipes retained for reuse, when proxy.zerocopy is used.

This maximum is applied per each worker thread. The intention of this setting is to reduce lock contention in the kernel under high load when zerocopy is used. When this setting is set to a non-zero value, the specified number of pipes will be allocated upon startup for each worker thread.

Setting this value to 0 will cause no pipes to be retained by h2o; the pipes will be closed after they are used. In this case, h2o will create new pipes each time they are needed.

See also: proxy.zerocopy

proxy.preserve-host
A boolean flag (ON or OFF) designating whether or not to pass the Host header from the incoming request to the upstream.

proxy.preserve-x-forwarded-proto
(since v2.0) A boolean flag (ON or OFF) indicating if the server preserves the received x-forwarded-proto request header.

By default, when transmitting an HTTP request to an upstream HTTP server, H2O removes the received x-forwarded-proto request header and sends its own, as a precautionary measure to prevent an attacker connecting through HTTP from claiming that they are connected via HTTPS. However, in case H2O is run behind a trusted HTTPS proxy, such protection might not be desirable, and this configuration directive can be used to modify the behaviour.

proxy.proxy-protocol
(since v2.1) A boolean flag (ON or OFF) indicating if the PROXY protocol should be used when connecting to the application server.

When using the PROXY protocol, connections to the application server cannot be persistent (i.e. proxy.timeout.keepalive must be set to zero).

See also: proxy.timeout.keepalive

proxy.proxy-status.identity
Specifies the name of the server to be emitted as part of the proxy-status header field.

See also: proxy.connect.emit-proxy-status

proxy.ssl.cafile
(since v2.0) Specifies the file storing the list of trusted root certificates.

By default, H2O uses share/h2o/ca-bundle.crt. The file contains a set of trusted root certificates maintained by Mozilla, downloaded and converted using mk-ca-bundle.pl.

See also: proxy.ssl.verify-peer

proxy.ssl.session-cache
(since v2.1) Specifies whether and how a session cache should be used for TLS connections to the application server.

Since version 2.1, the result of the TLS handshakes to the application server is memoized and later used to resume the connection, unless set to OFF using this directive. If the value is a mapping, then the following two attributes must be specified:

       lifetime  validity of session cache entries in seconds
       capacity  maximum number of entries to be kept in the session cache

If set to ON, lifetime and capacity will be set to 86,400 (one day) and 4,096.
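As a minimal sketch of the mapping form described above (the lifetime and capacity values below are arbitrary placeholders, not recommended settings), the session cache for upstream TLS connections might be configured as follows.

Example: Configuring the upstream TLS session cache with explicit limits (illustrative values)

       proxy.ssl.session-cache:
         lifetime: 3600
         capacity: 1024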
proxy.ssl.verify-peer
(since v2.0) A boolean flag (ON or OFF) indicating if the server certificate and hostname should be verified.

If set to ON, the HTTP client implementation of H2O verifies the peer's certificate using the list of trusted certificates as well as compares the hostname presented in the certificate against the connecting hostname.

See also: proxy.ssl.cafile

proxy.timeout.connect
(since v2.3) Sets the timeout for establishing the connection to the upstream, in milliseconds.

When connecting to a TLS upstream, this timeout will run until the end of the SSL handshake.

proxy.timeout.first_byte
(since v2.3) Sets the timeout before receiving the first byte from upstream.

This sets the maximum time we will wait for the first byte from upstream, after the establishment of the connection.

proxy.timeout.io
Sets the upstream I/O timeout in milliseconds.

This value will be used for proxy.timeout.connect and proxy.timeout.first_byte as well, unless these parameters are explicitly set.

proxy.timeout.keepalive
Sets the upstream timeout for idle connections in milliseconds.

The upstream connection becomes non-persistent if the value is set to zero. The value should be set to something smaller than that being set at the upstream server.

proxy.tunnel
A boolean flag (ON or OFF) indicating whether or not to allow tunnelling to the backend server.

When set to ON, CONNECT requests and WebSocket handshakes are forwarded to the backend server. Then, if the backend server accepts those requests, H2O forwards the HTTP response to the client and acts as a bi-directional tunnel.

Timeouts are governed by the properties proxy.timeout.connect and proxy.timeout.io.

proxy.zerocopy
Sets the use of zerocopy operations for forwarding the response body.

By default, this flag is set to OFF, in which case the response bytes are read from the upstream socket to an internal buffer as they arrive, then shipped to the client. The maximum size of this buffer is controlled by proxy.max-buffer-size. The drawback of this approach is that it causes pressure on memory bandwidth. This knob provides two alternative modes to remedy the pressure:

When set to enabled, if zerocopy operation is supported by the downstream connection (i.e., the downstream connection being cleartext or encrypted using kernel TLS), h2o uses a pipe as an internal buffer instead of using userspace memory. Data is moved to the pipe using the splice system call, then shipped to the downstream connection by another call to splice. Pressure on memory bandwidth is eliminated, as the splice system call merely moves the references to kernel memory between file descriptors.

When set to always, data from upstream is spliced into a pipe regardless of the downstream connection providing support for zerocopy. When the downstream connection does not support zerocopy, data is initially moved into the pipe, then gets read and written to the socket (as well as being encrypted, if necessary) as late as possible, i.e., when it becomes possible to send the data. This approach does not reduce the total amount of bytes flowing through the CPU, but reduces the amount of userspace memory used by h2o by delaying the reads, thereby reducing cache spills.

See also: ssl-offload, proxy.max-spare-pipes

REDIRECT DIRECTIVES

This document describes the configuration directives of the redirect handler.

redirect
Redirects the requests to the given URL. The directive rewrites the URL by replacing the host and path part of the URL at which the directive is used with the given URL.
For example, when using the configuration below, requests to http://example.com/abc.html will be redirected to https://example.com/abc.html.

If the argument is a scalar, the value is considered as the URL to which the requests should be redirected.

The following properties are recognized if the argument is a mapping.

       url       URL to redirect to
       status    the three-digit status code to use (e.g. 301)
       internal  either YES or NO (default); if set to YES, then the server performs an
                 internal redirect and returns the content at the redirected URL

Example: Redirect all HTTP to HTTPS permanently (except for the files under RSS)

       hosts:
         "example.com:80":
           paths:
             "/":
               redirect:
                 status: 301
                 url: "https://example.com/"
             "/rss":
               file.dir: /path/to/rss

REPROXY DIRECTIVES

This document describes the configuration directives of the reproxy handler.

reproxy
A boolean flag (ON or OFF) indicating if the server should recognize the X-Reproxy-URL header sent from upstream servers.

If H2O recognizes the header, it fetches the contents of the resource specified by the header, and sends the contents as the response to the client. If the status code associated with the X-Reproxy-URL header is 307 or 308, then the method of the original request is used to obtain the specified resource. Otherwise, the request method is changed to GET.

For example, an upstream server may use the X-Reproxy-URL header to send a URL pointing to a large image stored on a distributed file system, and let H2O fetch and return the content to the client, instead of fetching and returning the image itself. Doing so would reduce the load on the application server.

STATUS DIRECTIVES

The status handler exposes the current states of the HTTP server. This document describes the configuration directives of the handler.

status
(since v2.0) If the argument is ON, the directive registers the status handler to the current path.

Access to the handler should be restricted, considering the fact that the status includes the details of in-flight HTTP requests. The example below uses Basic authentication.

Example: Exposing status with Basic authentication

       paths:
         /server-status:
           mruby.handler: |
             require "htpasswd.rb"
             Htpasswd.new("/path/to/.htpasswd", "status")
           status: ON

The information returned by the /json handler can be filtered using the optional show=module1,module2 parameter. The following modules are currently defined:

       requests   displays the requests currently in-flight
       durations  displays duration statistics for requests since server start time in seconds
                  (returns all zeros unless duration-stats is ON)
       errors     displays counters for internally generated errors
       main       displays general daemon-wide stats

duration-stats
(since v2.1) Gather timing stats for requests.

If the argument is ON, this directive populates duration statistics in seconds, to be consumed by status handlers. Enabling this feature has a noticeable CPU and memory impact.

Note that the time spent while processing a request in a blocking manner (such as opening a file or running an mruby handler that does not invoke a network operation) will not be reflected in the process_time element of the duration stats, due to the fact that the timer being used for measuring the time spent is updated only once per loop.

THROTTLE RESPONSE DIRECTIVES

The throttle response handler performs per-response traffic throttling, when an X-Traffic header exists in the response headers. The value of the X-Traffic header should be an integer that represents the speed you want in bytes per second.
This header CAN be set with header.add so that traffic for static assets can also be easily throttled.

The following are the configuration directives recognized by the handler.

throttle-response
(since v2.1) Enables traffic throttling per HTTP response.

If the argument is ON, the traffic per response is throttled as long as a legal X-Traffic header exists. If the argument is OFF, traffic throttling per response is disabled.

Example: Enabling traffic throttle per response with static file configuration

       # enable throttle
       throttle-response: ON

       # an example host configuration that throttles traffic to ~100KB/s
       hosts:
         default:
           paths:
             /:
               file.dir: /path/to/assets
               header.add: "X-Traffic: 100000"

USING BASIC AUTHENTICATION

Starting from version 1.7, H2O comes with an mruby script named htpasswd.rb that implements Basic Authentication. The script provides a Rack handler that implements Basic Authentication using password files generated by the htpasswd command.

The example below uses the mruby script to restrict access to the path. If authentication fails, the mruby handler returns a 401 Unauthorized response. If authentication succeeds, the handler returns a 399 response, and the request is delegated internally to the next handler (i.e. file.dir).

Example: Configuring HTTP authentication using htpasswd.rb

       paths:
         "/":
           mruby.handler: |
             require "htpasswd.rb"
             Htpasswd.new("/path/to/.htpasswd", "realm-name")
           file.dir: /path/to/doc_root

In H2O versions prior to 2.0, you should specify "#{$H2O_ROOT}/share/h2o/mruby/htpasswd.rb" as the argument to require, since the directory is not registered as part of $LOAD_PATH.

For convenience, the mruby script also forbids access to files or directories that start with .ht.

USING CGI

Starting from version 1.7, H2O comes with a FastCGI-to-CGI gateway (fastcgi-cgi), which can be found under the share/h2o directory of the installation path. The gateway can be used for running CGI scripts through the FastCGI handler.

The example below maps .cgi files to be executed by the gateway. It is also possible to run CGI scripts under different privileges by specifying the user attribute of the directive.

Example: Execute .cgi files using FastCGI-to-CGI gateway

       file.custom-handler:
         extension: .cgi
         fastcgi.spawn:
           command: "exec $H2O_ROOT/share/h2o/fastcgi-cgi"

The gateway also provides options for tuning the behavior. A full list of options can be obtained by running the gateway directly with the --help option.

Example: Output of share/h2o/fastcgi-cgi --help

       $ share/h2o/fastcgi-cgi --help
       Usage: share/h2o/fastcgi-cgi [options]
       Options:
         --listen=sockfn    path to the UNIX socket. If specified, the program will create a
                            UNIX socket at given path, replacing the existing file (should it
                            exist). If not, file descriptor zero (0) will be used as the UNIX
                            socket for accepting new connections.
         --max-workers=nnn  maximum number of CGI processes (default: unlimited)
         --pass-authz       if set, preserves the HTTP_AUTHORIZATION parameter
         --verbose          verbose mode

USING MRUBY

mruby is a lightweight implementation of the Ruby programming language. With H2O, users can implement their own request handling logic using mruby, either to generate responses or to fix up the request / response.

Rack-based Programming Interface

The interface between the mruby program and the H2O server is based on the Rack interface specification. Below is a simple configuration that returns hello world.
Example: Hello-world in mruby

       paths:
         "/":
           mruby.handler: |
             Proc.new do |env|
               [200, {'content-type' => 'text/plain'}, ["Hello world\n"]]
             end

It should be noted that as of H2O version 1.7.0, there are limitations when compared to an ordinary web application server with support for Rack such as Unicorn:

       no libraries provided as part of Rack are available (only the interface is compatible)

In addition to the Rack interface specification, H2O recognizes status code 399, which can be used to delegate the request to the next handler. The feature can be used to implement access control and response header modifiers.

Access Control

By using the 399 status code, it is possible to implement access control using mruby. The example below restricts access to requests from the 192.168. private address block.

Example: Restricting access to 192.168.

       paths:
         "/":
           mruby.handler: |
             lambda do |env|
               if /\A192\.168\./.match(env["REMOTE_ADDR"])
                 return [399, {}, []]
               end
               [403, {'content-type' => 'text/plain'}, ["access forbidden\n"]]
             end

Support for Basic Authentication is also provided by an mruby script.

Delegating the Request

When enabled using the reproxy directive, it is possible to delegate the request from the mruby handler to any other handler.

Example: Rewriting URL with delegation

       paths:
         "/":
           mruby.handler: |
             lambda do |env|
               if /\/user\/([^\/]+)/.match(env["PATH_INFO"])
                 return [307, {"x-reproxy-url" => "/user.php?user=#{$1}"}, []]
               end
               return [399, {}, []]
             end

Modifying the Response

When the mruby handler returns status code 399, H2O delegates the request to the next handler while preserving the headers emitted by the handler. The feature can be used to add extra headers to the response.

For example, the following example sets the cache-control header for requests against .css and .js files.

Example: Setting cache-control header for certain types of files

       paths:
         "/":
           mruby.handler: |
             Proc.new do |env|
               headers = {}
               if /\.(css|js)\z/.match(env["PATH_INFO"])
                 headers["cache-control"] = "max-age=86400"
               end
               [399, headers, []]
             end
           file.dir: /path/to/doc-root

Or in the example below, the handler triggers HTTP/2 server push with the use of Link: rel=preload headers, and then requests a FastCGI application to process the request.

Example: Pushing asset files

       paths:
         "/":
           mruby.handler: |
             Proc.new do |env|
               push_paths = []
               # push css and js when request is to dir root or HTML
               if /(\/|\.html)\z/.match(env["PATH_INFO"])
                 push_paths << ["/css/style.css", "style"]
                 push_paths << ["/js/app.js", "script"]
               end
               [399, push_paths.empty? ? {} : {"link" => push_paths.map{|p| "<#{p[0]}>; rel=preload; as=#{p[1]}"}.join("\n")}, []]
             end
           fastcgi.connect: ...

Using the HTTP Client

Starting from version 1.7, an HTTP client API is provided. HTTP requests issued through the API will be handled asynchronously; the client does not block the event loop of the HTTP server.

Example: Mruby handler returning the response of http://example.com

       paths:
         "/":
           mruby.handler: |
             Proc.new do |env|
               req = http_request("http://example.com")
               status, headers, body = req.join
               [status, headers, body]
             end

http_request is the method that issues an HTTP request.

The method takes two arguments. The first argument is the target URI. The second argument is an optional hash; method (defaults to GET), header, and body attributes are recognized.

The method returns a promise object. When the #join method of the promise is invoked, a three-argument array containing the status code, response headers, and the body is returned. The response body is also a promise.
Applications can choose from three ways when dealing with the body: a) call the #each method to receive the contents, b) call #join to retrieve the body as a string, c) return the object as the response body of the mruby handler.

The header and the body object passed to http_request should conform to the requirements laid out by the Rack specification for request header and request body. The response header and the response body object returned by the #join method of the promise returned by http_request conform to the requirements of the Rack specification.

Since the API provides an asynchronous HTTP client, it is possible to effectively issue multiple HTTP requests concurrently and merge them into a single response.

When HTTPS is used, servers are verified using the properties of proxy.ssl.cafile and proxy.ssl.verify-peer specified at the global level.

Timeouts defined for the proxy handler (proxy.timeout.*) are applied to the requests that are issued by the http_request method.

Logging Arbitrary Variable

In version 2.3, it is possible from mruby to set and log an arbitrarily named variable that is associated with an HTTP request. An HTTP response header that starts with x-fallthru-set- is handled specially by the H2O server. Instead of sending the header downstream, the server accepts the value as a request environment variable, taking the suffix of the header name as the name of the variable.

This example shows how to read request data, parse JSON and then log data from mruby.

Example: Logging the content of a POST request via request environment variable

       paths:
         "/":
           mruby.handler: |
             Proc.new do |env|
               input = env["rack.input"] ? env["rack.input"].read : '{"default": "true"}'
               parsed_json = JSON.parse(input)
               parsed_json["time"] = Time.now.to_i
               logdata = parsed_json.to_s
               [204, {"x-fallthru-set-POSTDATA" => logdata}, []]
             end
       access-log:
         path: /path/to/access-log.json
         escape: json
         format: '{"POST": %{POSTDATA}e}'

USING DOS DETECTION

Starting from version 2.1, H2O comes with an mruby script named dos_detector.rb that implements a DoS detection feature. The script provides a Rack handler that detects HTTP flooding attacks based on the client's IP address.

Basic Usage

The example below uses the mruby script to detect DoS attacks. The default detecting strategy is simply counting requests within the configured period. If the count exceeds the configured threshold, the handler returns a 403 Forbidden response. Otherwise, the handler returns a 399 response, and the request is delegated internally to the next handler.

Example: Configuring DoS Detection

       paths:
         "/":
           mruby.handler: |
             require "dos_detector.rb"
             DoSDetector.new({
               :strategy => DoSDetector::CountingStrategy.new({
                 :period => 10,       # default
                 :threshold => 100,   # default
                 :ban_period => 300,  # default
               }),
             })
           file.dir: /path/to/doc_root

In the example above, the handler counts up the requests within 10 seconds for each IP address, and when the count exceeds 100, it returns a 403 Forbidden response for the request and marks the client as "Banned" for 300 seconds. While marked as "Banned", the handler returns a 403 Forbidden to all requests from the same IP address.

Configuring Details

You can pass the following parameters to DoSDetector.new.

:strategy
The algorithm to detect DoS attacks. You can write and pass your own strategies if needed. The default strategy is DoSDetector::CountingStrategy, which takes the following parameters:

:period
Time window in seconds to count requests. The default value is 10.

:threshold
Threshold count of requests. The default value is 100.
:ban_period
Duration in seconds in which a "Banned" client continues to be restricted. The default value is 300.

:callback
The callback which is called by the handler with the detection result. You can define your own callback to return an arbitrary response, set response headers, etc. The default callback returns 403 Forbidden if DoS is detected, otherwise delegates the request to the next handler.

:forwarded
If set true, the handler uses the X-Forwarded-For header to get the client's IP address if the header exists. The default value is true.

:cache_size
The capacity of the LRU cache which preserves the client's IP address and associated request count. The default value is 128.

Example: Configuring Details

       paths:
         "/":
           mruby.handler: |
             require "dos_detector.rb"
             DoSDetector.new({
               :strategy => DoSDetector::CountingStrategy.new,
               :forwarded => false,
               :cache_size => 2048,
               :callback => proc {|env, detected, ip|
                 if detected && ! ip.start_with?("192.168.")
                   [503, {}, ["Service Unavailable"]]
                 else
                   [399, {}, []]
                 end
               }
             })
           file.dir: /path/to/doc_root

Points to Notice

       For now, counting requests is "per-thread" and not shared between multiple threads.

ACCESS CONTROL

Starting from version 2.1, H2O comes with a DSL-like mruby library which makes it easy to write access control lists (ACLs).

Example

The example below uses this access control feature to write various access control rules.

Example: Access Control

       paths:
         "/":
           mruby.handler: |
             acl {
               allow { addr == "127.0.0.1" }
               deny { user_agent.match(/curl/i) && ! addr.start_with?("192.168.") }
               respond(503, {}, ["Service Unavailable"]) { addr == malicious_ip }
               redirect("https://example.com/", 301) { path =~ /moved/ }
               use Htpasswd.new("/path/to/.htpasswd", "realm") { path.start_with?("/admin") }
             }
           file.dir: /path/to/doc_root

In the example, the handler you get by calling the acl method will do the following:

       if the remote IP address is exactly equal to "127.0.0.1", the request will be delegated
       to the next handler (i.e. serve files under /path/to/doc_root) and all following acl
       settings are ignored

       otherwise, if the user agent string includes "curl" and the remote IP address doesn't
       start with "192.168.", this handler immediately returns a 403 Forbidden response

       otherwise, if the remote IP address is exactly equal to the malicious_ip variable, this
       handler immediately returns a 503 Service Unavailable response

       otherwise, if the request path matches the pattern /moved/, this handler immediately
       redirects the client to "https://example.com/" with a 301 status code

       otherwise, if the request path starts with /admin, apply Basic Authentication to the
       request (for details of Basic Authentication, see here)

       otherwise, the request will be delegated to the next handler (i.e. serve files under
       /path/to/doc_root)

ACL Methods

An ACL handler is built by calling ACL methods, which can be used like directives. ACL methods can only be used in an acl block.

Each ACL method adds a filter to the handler, which checks whether the request matches the provided condition or not. Every ACL method can be accompanied by a condition block, which should return a boolean value. The filter defined by the method that first matches the accompanying condition gets applied (e.g. responding with 403 Forbidden, redirecting to somewhere).

If a condition block is omitted, all requests match. If none of the conditions match the request, the handler returns 399 and the request will be delegated to the next handler.
allow
Adds a filter which delegates the request to the next handler if the request matches the provided condition.

       allow { ..condition.. }

deny
Adds a filter which returns 403 Forbidden if the request matches the provided condition.

       deny { ..condition.. }

redirect
Adds a filter which redirects the client if the request matches the provided condition.

       redirect(location, status) { ..condition.. }

Parameters:

       location  Location to which the client will be redirected. Required.
       status    Status code of the response. Default value: 302

respond
Adds a filter which returns an arbitrary response if the request matches the provided condition.

       respond(status, header, body) { ..condition.. }

Parameters:

       status  Status code of the response. Required.
       header  Header key-value pairs of the response. Default value: {}
       body    Body array of the response. Default value: []

use
Adds a filter which applies the provided handler (callable object) if the request matches the provided condition.

       use(proc) { ..condition.. }

Parameters:

       proc  Callable object that should be applied

Matching Methods

In a condition block, you can use helpful methods which return particular properties of the request as string values. Matching methods can only be used in a condition block of the ACL methods.

addr
Returns the remote IP address of the request.

       addr(forwarded)

Parameters:

       forwarded  If true, returns the value of the X-Forwarded-For header if it exists.
                  Default value: true

path
Returns the requested path string of the request.

       path()

method
Returns the HTTP method of the request.

       method()

header
Returns the header value of the request associated with the provided name.

       header(name)

Parameters:

       name  Case-insensitive header name. Required.

user_agent
Shortcut for header("user-agent").

       user_agent()

Caution

Several restrictions are introduced to avoid misconfiguration when using the acl method.

       The acl method can be called only once in each handler configuration.

       If the acl method is used, the handler returned by the configuration directive must be
       the one returned by the acl method.

If a configuration violates these restrictions, the server will detect it and refuse to launch with an error message.

For example, both of the following examples violate the restrictions above, so the server will refuse to start up.

Example: Misconfiguration Example 1

       paths:
         "/":
           mruby.handler: |
             acl {    # this block will be ignored!
               allow { addr == "127.0.0.1" }
             }
             acl {
               deny
             }
           file.dir: /path/to/doc_root

Example: Misconfiguration Example 2

       paths:
         "/":
           mruby.handler: |
             acl {    # this block will be ignored!
               allow { addr == "127.0.0.1" }
               deny
             }
             proc {|env| [399, {}, []] }
           file.dir: /path/to/doc_root

You can correct these like the following:

Example: Valid Configuration Example

       paths:
         "/":
           mruby.handler: |
             acl {
               allow { addr == "127.0.0.1" }
               deny
             }
           file.dir: /path/to/doc_root

How-To

Matching IP Address Blocks

You can match an IP address against a predefined list of address blocks using a script named trie_addr.rb. Below is an example.

Example: Address Block Matching Example

       paths:
         "/":
           mruby.handler: |
             require "trie_addr.rb"
             trie = TrieAddr.new.add(["192.168.0.0/16", "172.16.0.0/12"])
             acl {
               allow { trie.match?(addr) }
               deny
             }
           file.dir: /path/to/doc_root

This library currently supports only IPv4 addresses. TrieAddr#match? returns false when it receives an invalid IPv4 address (including an IPv6 address) as an argument.

USING H2OLOG FOR TRACING

h2olog is an experimental BPF-backed (see the kernel BPF documentation) tracing tool for the H2O server. It can be used for tracing quicly and h2o USDT probes.
Since h2olog is an experimental program, its command-line interface might change without notice.

Installing from Source

See the requirements for build prerequisites. If the dependencies are satisfied, h2olog is built automatically. It is possible to manually turn on / off the build of h2olog by using the -DWITH_H2OLOG option. This option takes either ON or OFF as the argument. If you have BCC installed to a non-standard path, use pkg-config for cmake.

       $ PKG_CONFIG_PATH=/path/to/bcc/lib/pkgconfig cmake [options]

Requirements

For building h2olog:

       C++11 compiler
       CMake for generating the build files
       pkg-config for detecting dependencies
       Python 3 for the code generator
       BCC (BPF compiler collection, a.k.a. bpfcc; >= 0.11.0) installed on your system

For Ubuntu 20.04 or later, you can install the dependencies with:

       $ sudo apt install clang cmake python3 libbpfcc-dev linux-headers-$(uname -r)

For running h2olog:

       Root privilege to execute h2olog
       Linux kernel (>= 4.10)

Quickstart

h2olog -H -p $H2O_PID shows varnishlog-like tracing.

       $ sudo h2olog -H -p $(pgrep -o h2o)
       11 0 RxProtocol HTTP/3.0
       11 0 RxHeader   :authority torumk.com
       11 0 RxHeader   :method GET
       11 0 RxHeader   :path /
       11 0 RxHeader   :scheme https
       11 0 TxStatus   200
       11 0 TxHeader   content-length 123
       11 0 TxHeader   content-type text/html

Tracing USDT events

Server-side QUIC events can be traced using the quic subcommand. Events are rendered in JSON Lines format.

       $ sudo h2olog quic -p $(pgrep -o h2o)

Here's an example trace.

       {"time":1584380825832,"type":"accept","conn":1,"dcid":"f8aa2066e9c3b3cf"}
       {"time":1584380825835,"type":"crypto-decrypt","conn":1,"pn":0,"len":1236}
       {"time":1584380825832,"type":"quictrace-recv","conn":1,"pn":0}
       {"time":1584380825836,"type":"crypto-handshake","conn":1,"ret":0}

H2O.CONF(5)