Skip site
navigation (1)Skip section navigation (2)
FreeBSD Manual Pages
- archive_read_support_filter_all(3), archive_read_support_filter_bzip2(3), archive_read_support_filter_compress(3), archive_read_support_filter_gzip(3), archive_read_support_filter_lz4(3), archive_read_support_filter_lzma(3), archive_read_support_filter_none(3), archive_read_support_filter_rpm(3), archive_read_support_filter_uu(3), archive_read_support_filter_xz(3), archive_read_support_filter_zstd(3), archive_read_support_filter_program(3), archive_read_support_filter_program_signature(3)
- functions for reading streaming archives
- archive_read_support_format_7zip(3), archive_read_support_format_all(3), archive_read_support_format_ar(3), archive_read_support_format_by_code(3), archive_read_support_format_cab(3), archive_read_support_format_cpio(3), archive_read_support_format_empty(3), archive_read_support_format_iso9660(3), archive_read_support_format_lha(3), archive_read_support_format_mtree(3), archive_read_support_format_rar(3), archive_read_support_format_rar5(3), archive_read_support_format_raw(3), archive_read_support_format_tar(3), archive_read_support_format_warc(3), archive_read_support_format_xar(3), archive_read_support_format_zip(3)
- functions for reading streaming archives
- archive_write_add_filter_b64encode(3), archive_write_add_filter_by_name(3), archive_write_add_filter_bzip2(3), archive_write_add_filter_compress(3), archive_write_add_filter_grzip(3), archive_write_add_filter_gzip(3), archive_write_add_filter_lrzip(3), archive_write_add_filter_lz4(3), archive_write_add_filter_lzip(3), archive_write_add_filter_lzma(3), archive_write_add_filter_lzop(3), archive_write_add_filter_none(3), archive_write_add_filter_program(3), archive_write_add_filter_uuencode(3), archive_write_add_filter_xz(3), archive_write_add_filter_zstd(3)
- functions enabling output filters
- archive_write_set_format(3), archive_write_set_format_7zip(3), archive_write_set_format_ar(3), archive_write_set_format_ar_bsd(3), archive_write_set_format_ar_svr4(3), archive_write_set_format_by_name(3), archive_write_set_format_cpio(3), archive_write_set_format_cpio_bin(3), archive_write_set_format_cpio_newc(3), archive_write_set_format_cpio_odc(3), archive_write_set_format_cpio_pwb(3), archive_write_set_format_filter_by_ext(3), archive_write_set_format_filter_by_ext_def(3), archive_write_set_format_gnutar(3), archive_write_set_format_iso9660(3), archive_write_set_format_mtree(3), archive_write_set_format_mtree_classic(3), archive_write_set_format_mtree_default(3), archive_write_set_format_pax(3), archive_write_set_format_pax_restricted(3), archive_write_set_format_raw(3), archive_write_set_format_shar(3), archive_write_set_format_shar_dump(3), archive_write_set_format_ustar(3), archive_write_set_format_v7tar(3), archive_write_set_format_warc(3), archive_write_set_format_xar(3), archive_write_set_format_zip(3)
- functions for creating archives
- bsdunzip(1)
- extract files from a ZIP archive
- bzip2(1), bunzip2(1)
- a block-sorting file compressor, v1.0.8 bzcat - decompresses files to stdout bzip2recover - recovers data from damaged bzip2 files
- geom_uzip(4)
- GEOM based compressed disk images and partitions
- gzip(1), gunzip(1), zcat(1)
- compression/decompression tool using Lempel-Ziv coding (LZ77)
- mkuzip(8)
- compress disk image for use with geom_uzip(4) class
- zforce(1)
- force gzip files to have a.gz suffix
- znew(1)
- convert compressed files to gzipped files
- zopen(3)
- open a gzip compressed stream
- 7zz(1)
- Standalone console version of the 7-Zip file archiver
- TclZipfs_AppHook.tcl90(3), TclZipfs_AppHook(3), TclZipfs_Mount(3), TclZipfs_MountBuffer(3), TclZipfs_Unmount(3)
- handle ZIP files as Tcl virtual filesystems
- ZIP_SOURCE_GET_ARGS(3)
- validate and cast arguments to source callback
- __zzip_fetch_disk_trailer(3)
- internal
- __zzip_parse_root_directory(3)
- internal
- __zzip_try_open(3)
- internal
- advzip(1)
- AdvanceCOMP ZIP Compression Utility
- archive_read_filter(3), archive_read_support_filter_all(3), archive_read_support_filter_bzip2(3), archive_read_support_filter_compress(3), archive_read_support_filter_gzip(3), archive_read_support_filter_lz4(3), archive_read_support_filter_lzma(3), archive_read_support_filter_none(3), archive_read_support_filter_rpm(3), archive_read_support_filter_uu(3), archive_read_support_filter_xz(3), archive_read_support_filter_zstd(3), archive_read_support_filter_program(3), archive_read_support_filter_program_signature(3)
- functions for reading streaming archives
- archive_read_format(3), archive_read_support_format_7zip(3), archive_read_support_format_all(3), archive_read_support_format_ar(3), archive_read_support_format_by_code(3), archive_read_support_format_cab(3), archive_read_support_format_cpio(3), archive_read_support_format_empty(3), archive_read_support_format_iso9660(3), archive_read_support_format_lha(3), archive_read_support_format_mtree(3), archive_read_support_format_rar(3), archive_read_support_format_rar5(3), archive_read_support_format_raw(3), archive_read_support_format_tar(3), archive_read_support_format_warc(3), archive_read_support_format_xar(3), archive_read_support_format_zip(3)
- functions for reading streaming archives
- archive_write_filter(3), archive_write_add_filter_b64encode(3), archive_write_add_filter_by_name(3), archive_write_add_filter_bzip2(3), archive_write_add_filter_compress(3), archive_write_add_filter_grzip(3), archive_write_add_filter_gzip(3), archive_write_add_filter_lrzip(3), archive_write_add_filter_lz4(3), archive_write_add_filter_lzip(3), archive_write_add_filter_lzma(3), archive_write_add_filter_lzop(3), archive_write_add_filter_none(3), archive_write_add_filter_program(3), archive_write_add_filter_uuencode(3), archive_write_add_filter_xz(3), archive_write_add_filter_zstd(3)
- functions enabling output filters
- archive_write_format(3), archive_write_set_format(3), archive_write_set_format_7zip(3), archive_write_set_format_ar(3), archive_write_set_format_ar_bsd(3), archive_write_set_format_ar_svr4(3), archive_write_set_format_by_name(3), archive_write_set_format_cpio(3), archive_write_set_format_cpio_bin(3), archive_write_set_format_cpio_newc(3), archive_write_set_format_cpio_odc(3), archive_write_set_format_cpio_pwb(3), archive_write_set_format_filter_by_ext(3), archive_write_set_format_filter_by_ext_def(3), archive_write_set_format_gnutar(3), archive_write_set_format_iso9660(3), archive_write_set_format_mtree(3), archive_write_set_format_mtree_classic(3), archive_write_set_format_mtree_default(3), archive_write_set_format_pax(3), archive_write_set_format_pax_restricted(3), archive_write_set_format_raw(3), archive_write_set_format_shar(3), archive_write_set_format_shar_dump(3), archive_write_set_format_ustar(3), archive_write_set_format_v7tar(3), archive_write_set_format_warc(3), archive_write_set_format_xar(3), archive_write_set_format_zip(3)
- functions for creating archives
- barman-cloud-backup(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ { { -z | --gzip } | { -j | --bzip2 } | --snappy } ] [ { -h | --host } HOST ] [ { -p | --port } PORT ] [ { -U | --user } USER ] [ { -d | --dbname } DBNAME ] [ { -n | --name } BACKUP_NAME ] [ { -J | --jobs } JOBS ] [ { -S | --max-archive-size } MAX_ARCHIVE_SIZE ] [ --immediate-checkpoint ] [ --min-chunk-size MIN_CHUNK_SIZE ] [ --max-bandwidth MAX_BANDWIDTH ] [ --snapshot-instance SNAPSHOT_INSTANCE ] [ --snapshot-disk NAME [ --snapshot-disk NAME ... ] ] [ --snapshot-zone GCP_ZONE ] [ -snapshot-gcp-project GCP_PROJECT ] [ --tag KEY,VALUE [ --tag KEY,VALUE ... ] ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { -e | --encryption } { AES256 | aws:kms } ] [ --sse-kms-key-id SSE_KMS_KEY_ID ] [ --aws-region AWS_REGION ] [ --aws-await-snapshots-timeout AWS_AWAIT_SNAPSHOTS_TIMEOUT ] [ --aws-snapshot-lock-mode { compliance | governance } ] [ --aws-snapshot-lock-duration DAYS ] [ --aws-snapshot-lock-cool-off-period HOURS ] [ --aws-snapshot-lock-expiration-date DATETIME ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --encryption-scope ENCRYPTION_SCOPE ] [ --azure-subscription-id AZURE_SUBSCRIPTION_ID ] [ --azure-resource-group AZURE_RESOURCE_GROUP ] [ --gcp-project GCP_PROJECT ] [ --kms-key-name KMS_KEY_NAME ] [ --gcp-zone GCP_ZONE ] DESTINATION_URL SERVER_NAME Description The barman-cloud-backup script is used to create a local backup of a Postgres server and transfer it to a supported cloud provider, bypassing the Barman server. It can also be utilized as a hook script for copying Barman backups from the Barman server to one of the supported clouds (post_backup_retry_script). This script requires read access to PGDATA and tablespaces, typically run as the postgres user. When used on a Barman server, it requires read access to the directory where Barman backups are stored. If --snapshot- arguments are used and snapshots are supported by the selected cloud provider, the backup will be performed using snapshots of the specified disks (--snapshot-disk). The backup label and metadata will also be uploaded to the cloud. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. IMPORTANT: 0.0 3.5 The cloud upload may fail if any file larger than the configured --max-archive-size is present in the data directory or tablespaces. However, Postgres files up to 1GB are always allowed, regardless of the --max-archive-size setting. Parameters 0.0 SERVER_NAME Name of the server to be backed up. DESTINATION_URL URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -z / --gzip gzip-compress the backup while uploading to the cloud (should not be used with python < 3.2). -j / --bzip2 bzip2-compress the backup while uploading to the cloud (should not be used with python < 3.3). --snappy snappy-compress the backup while uploading to the cloud (requires optional python-snappy library). -h / --host Host or Unix socket for Postgres connection (default: libpq settings). -p / --port Port for Postgres connection (default: libpq settings). -U / --user User name for Postgres connection (default: libpq settings). -d / --dbname Database name or conninfo string for Postgres connection (default: (dqpostgres(dq). -n / --name A name which can be used to reference this backup in commands such as barman-cloud-restore and barman-cloud-backup-delete. -J / --jobs Number of subprocesses to upload data to cloud storage (default: 2). -S / --max-archive-size Maximum size of an archive when uploading to cloud storage (default: 100GB). --immediate-checkpoint Forces the initial checkpoint to be done as quickly as possible. --min-chunk-size Minimum size of an individual chunk when uploading to cloud storage (default: 5MB for aws-s3, 64KB for azure-blob-storage, not applicable for google-cloud-storage). --max-bandwidth The maximum amount of data to be uploaded per second when backing up to object storages (default: 0 - no limit). --snapshot-instance Instance where the disks to be backed up as snapshots are attached. --snapshot-disk Name of a disk from which snapshots should be taken. --tag Tag to be added to all uploaded files in cloud storage, and/or to snapshots created, if snapshots are used. --tags Tags to be added to all uploaded files in cloud storage, and/or to snapshots created, if snapshots are used. NOTE: 0.0 3.5 If you are using --tags before positional arguments, you must insert -- after it to indicate the end of optional arguments. This tells the parser to treat everything after -- as positional arguments. Without the --, Barman may misinterpret positional arguments as values for the last option. Deprecated since version 3.15: --tags is deprecated. Use --tag instead. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
- replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). -e / --encryption The encryption algorithm used when storing the uploaded data in S3. Allowed options: 7.0 (bu 2 AES256. (bu 2 aws:kms. --sse-kms-key-id The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms. --aws-region The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. --aws-await-snapshots-timeout The length of time in seconds to wait for snapshots to be created in AWS before timing out (default: 3600 seconds). --aws-snapshot-lock-mode The lock mode for the snapshot. This is only valid if --snapshot-instance and --snapshot-disk are set. Allowed options: 7.0 (bu 2 compliance. (bu 2 governance. --aws-snapshot-lock-duration The lock duration is the period of time (in days) for which the snapshot is to remain locked, ranging from 1 to 36,500. Set either the lock duration or the expiration date (not both). --aws-snapshot-lock-cool-off-period The cooling-off period is an optional period of time (in hours) that you can specify when you lock a snapshot in compliance mode, ranging from 1 to 72. --aws-snapshot-lock-expiration-date The lock duration is determined by an expiration date in the future. It must be at least 1 day after the snapshot creation date and time, using the format YYYY-MM-DDTHH:MM:SS.sssZ. Set either the lock duration or the expiration date (not both). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default. --encryption-scope The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure. --azure-subscription-id The ID of the Azure subscription which owns the instance and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. --azure-resource-group The name of the Azure resource group to which the compute instance and disks defined by the --snapshot-instance and --snapshot-disk arguments belong. Extra options for GCP cloud provider 0.0 --gcp-project GCP project under which disk snapshots should be stored. --snapshot-gcp-project (deprecated) GCP project under which disk snapshots should be stored - replaced by --gcp-project. --kms-key-name The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS. --gcp-zone Zone of the disks from which snapshots should be taken. --snapshot-zone (deprecated) Zone of the disks from which snapshots should be taken - replaced by --gcp-zone
- barman-cloud-wal-archive(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-wal-archive [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ { { -z | --gzip } | { -j | --bzip2 } | --xz | --snappy | --zstd | --lz4 } ] [ --compression-level COMPRESSION_LEVEL ] [ --tag KEY,VALUE [ --tag KEY,VALUE ... ] ] [ --history-tag KEY,VALUE [ --history-tag KEY,VALUE ... ] ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { -e | --encryption } ENCRYPTION ] [ --sse-kms-key-id SSE_KMS_KEY_ID ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --encryption-scope ENCRYPTION_SCOPE ] [ --max-block-size MAX_BLOCK_SIZE ] [ --max-concurrency MAX_CONCURRENCY ] [ --max-single-put-size MAX_SINGLE_PUT_SIZE ] [ --kms-key-name KMS_KEY_NAME ] DESTINATION_URL SERVER_NAME [ WAL_PATH ] Description The barman-cloud-wal-archive command is designed to be used in the archive_command of a Postgres server to directly ship WAL files to cloud storage. NOTE: 0.0 3.5 If you are using Python 2 or unsupported versions of Python 3, avoid using the compression options --gzip or --bzip2. The script cannot restore gzip-compressed WALs on Python < 3.2 or bzip2-compressed WALs on Python < 3.3. This script enables the direct transfer of WAL files to cloud storage, bypassing the Barman server. Additionally, it can be utilized as a hook script for WAL archiving (pre_archive_retry_script). NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that will have the WALs archived. DESTINATION_URL URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. WAL_PATH The value of the (aq%p(aq keyword (according to archive_command). -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -z / --gzip gzip-compress the WAL while uploading to the cloud. -j / --bzip2 bzip2-compress the WAL while uploading to the cloud. --xz xz-compress the WAL while uploading to the cloud. --snappy snappy-compress the WAL while uploading to the cloud (requires the python-snappy Python library to be installed). --zstd zstd-compress the WAL while uploading to the cloud (requires the zstandard Python library to be installed). --lz4 lz4-compress the WAL while uploading to the cloud (requires the lz4 Python library to be installed). --compression-level A compression level to be used by the selected compression algorithm. Valid values are integers within the supported range of the chosen algorithm or one of the predefined labels: low, medium, and high. The range of each algorithm as well as what level each predefined label maps to can be found in %compression_level. --tag Tag to be added to archived WAL files in cloud storage. --tags Tag to be added to archived WAL files in cloud storage. NOTE: 0.0 3.5 If you are using --tags before positional arguments, you must insert -- after it to indicate the end of optional arguments. This tells the parser to treat everything after -- as positional arguments. Without the --, Barman may misinterpret positional arguments as values for the last option. Deprecated since version 3.15: --tags is deprecated. Use --tag instead. 0.0 --history-tag Tag to be added to archived history files in cloud storage. --history-tags Tags to be added to archived history files in cloud storage. NOTE: 0.0 3.5 If you are using --history-tags before positional arguments, you must insert -- after it to indicate the end of optional arguments. This tells the parser to treat everything after -- as positional arguments. Without the --, Barman may misinterpret positional arguments as values for the last option. Deprecated since version 3.15: --history-tags is deprecated. Use --history-tag instead. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
- replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). -e / --encryption The encryption algorithm used when storing the uploaded data in S3. Allowed options: 7.0 (bu 2 AES256. (bu 2 aws:kms. --sse-kms-key-id The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms. Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default. --encryption-scope The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure. --max-block-size The chunk size to be used when uploading an object via the concurrent chunk method (default: 4MB). --max-concurrency The maximum number of chunks to be uploaded concurrently (default: 1). --max-single-put-size Maximum size for which the Azure client will upload an object in a single request (default: 64MB). If this is set lower than the Postgres WAL segment size after any applied compression then the concurrent chunk upload method for WAL archiving will be used. Extra options for GCP cloud provider 0.0 --kms-key-name The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS
- bgzip(1)
- Block compression/decompression utility EX in +$1 nf ft CR . EE ft fi in
- bsdunzip(1)
- extract files from a ZIP archive
- bz3grep(1)
- print lines matching a pattern in bzip3-compressed files
- bz3less(1)
- view bzip3-compressed files
- bz3more(1)
- view bzip3-compressed files
- bz3most(1)
- view bzip3-compressed files
- bzgrep(1), bzfgrep(1), bzegrep(1)
- search possibly bzip2 compressed files for a regular expression
- bzip(1), bunzip(1)
- a block-sorting file compressor, v0.21
- bzip2(1), bunzip2(1)
- a block-sorting file compressor, v1.0.8 bzcat - decompresses files to stdout bzip2recover - recovers data from damaged bzip2 files
- bzip3(1)
- an efficient statistical file compressor and spiritual successor to bzip2
- bzmore(1), bzless(1)
- file perusal filter for crt viewing of bzip2 compressed text
- dictzip(1), dictunzip(1)
- compress (or expand) files, allowing random access
- fcrackzip(1)
- a Free/Fast Zip Password Cracker
- ftimes-grabber(1)
- Parse FTimes output, grab files, and zip them up
- funzip(1)
- filter for extracting from a ZIP archive in a pipe
- fuse-zip(1)
- a FUSE filesystem for zip archives with write support
- gdal-vsi(1)
- Entry point for GDAL Virtual System Interface (VSI) commands Added in version 3.11. The subcommands of gdal vsi allow manipulation of files located on the %GDAL Virtual File Systems (compressed, network hosted, etc...): /vsimem, /vsizip, /vsitar, /vsicurl,
- gdal-vsi-sozip(1)
- SOZIP (Seek-Optimized ZIP) related commands. Added in version 3.11
- git-diagnose(1)
- Generate a zip archive of diagnostic information
- gpg-zip(1)
- Encrypt or sign files into an archive
- gzip(1), gunzip(1), zcat(1)
- compress or expand files
- humanzip(1), humanunzip(1)
- (un)compress text files in a human readable way
- hunzip(1)
- compress and encrypt dictionary files
- hunzip(1)
- decompress and decrypt hzip files to the standard output
- igzip(1)
- compress or decompress files similar to gzip
- jzip(1)
- execute Infocom v1-5 and Inform v1-8 game files
- lbzip2(1)
- parallel bzip2 utility