FreeBSD Manual Pages
kopia(1) 20250413 kopia(1) NAME kopia SYNOPSIS kopia [<flags>] <command> [<args> ...] DESCRIPTION Kopia - Fast And Secure Open-Source Backup OPTIONS --help Show context-sensitive help (also try --help-long and --help- man). --version Show application version. --log-file=LOG-FILE Override log file. --log-dir="/root/.cache/kopia" Directory where log files should be written. --log-level=info Console log level --file-log-level=debug File log level --help-full Show help for all commands, including hidden --config-file="repository.config" Specify the config file to use -p, --password=PASSWORD Repository password. --persist-credentials Persist credentials COMMANDS help [<command>...] Show help. blob delete <blobIDs>... Delete blobs by ID blob gc [<flags>] Garbage-collect unused blobs --delete=DELETE Whether to delete unused blobs --parallel=16 Number of parallel blob scans --prefix=PREFIX Only GC blobs with given prefix --safety=full Safety level blob list [<flags>] List BLOBs --prefix=PREFIX Blob ID prefix --exclude-prefix=EXCLUDE-PREFIX Blob ID prefixes to exclude --min-size=MIN-SIZE Minimum size --max-size=MAX-SIZE Maximum size --data-only Only list data blobs --json Output result in JSON format to stdout blob show [<flags>] <blobID>... Show contents of BLOBs --decrypt Decrypt blob if possible blob stats [<flags>] Blob statistics -r, --raw Raw numbers --prefix=PREFIX Blob name prefix benchmark compression --data-file=DATA-FILE [<flags>] Run compression benchmarks --repeat=0 Number of repetitions --data-file=DATA-FILE Use data from the given file --by-size Sort results by size --by-alloc Sort results by allocated bytes --parallel=1 Number of parallel goroutines --operations=both Operations --verify-stable Verify that compression is stable --print-options Print out options usable for repository creation --deprecated Included deprecated compression algorithms --algorithms=ALGORITHMS Comma-separated list of algorithms to benchmark benchmark crypto [<flags>] Run combined hash and encryption benchmarks --block-size=1MB Size of a block to encrypt --repeat=100 Number of repetitions --deprecated Include deprecated algorithms --parallel=1 Number of parallel goroutines --print-options Print out options usable for repository creation benchmark splitter [<flags>] Run splitter benchmarks --rand-seed=42 Random seed --data-size=32MB Size of a data to split --block-count=16 Number of data blocks to split --print-options Print out the fastest dynamic splitter option --parallel=1 Number of parallel goroutines benchmark hashing [<flags>] Run hashing function benchmarks --block-size=1MB Size of a block to hash --repeat=10 Number of repetitions --parallel=1 Number of parallel goroutines --print-options Print out options usable for repository creation benchmark encryption [<flags>] Run encryption benchmarks --block-size=1MB Size of a block to encrypt --repeat=1000 Number of repetitions --deprecated Include deprecated algorithms --parallel=1 Number of parallel goroutines --print-options Print out options usable for repository creation benchmark ecc [<flags>] Run ECC benchmarks --block-size=10MB Size of a block to encrypt --repeat=100 Number of repetitions --parallel=1 Number of parallel goroutines --print-options Print out options usable for repository creation cache clear [<flags>] Clears the cache --partial=PARTIAL Specifies the cache to clear cache info [<flags>] Displays cache information and statistics --path Only display cache path cache prefetch [<flags>] <object>... 
Prefetches the provided objects into cache --hint=HINT Prefetch hint cache set [<flags>] Sets parameters local caching of repository data --content-cache-size-mb=MB Desired size of local content cache (soft limit) --content-cache-size-limit-mb=MB Maximum size of local content cache (hard limit) --content-min-sweep-age=CONTENT-MIN-SWEEP-AGE Minimal age of content cache item to be subject to sweeping --metadata-cache-size-mb=MB Desired size of local metadata cache (soft limit) --metadata-cache-size-limit-mb=MB Maximum size of local metadata cache (hard limit) --metadata-min-sweep-age=METADATA-MIN-SWEEP-AGE Minimal age of metadata cache item to be subject to sweeping --index-min-sweep-age=INDEX-MIN-SWEEP-AGE Minimal age of index cache item to be subject to sweeping --max-list-cache-duration=MAX-LIST-CACHE-DURATION Duration of index cache --cache-directory=CACHE-DIRECTORY Directory where to store cache files cache sync [<flags>] Synchronizes the metadata cache with blobs in storage --parallel=16 Fetch parallelism content delete <id>... Remove content content list [<flags>] List contents -l, --long Long output -c, --compression Compression --deleted Include deleted content --deleted-only Only show deleted content -s, --summary Summarize the list -h, --human Human-readable output --prefix=PREFIX Content ID prefix --prefixed Apply to content IDs with (any) prefix --non-prefixed Apply to content IDs without prefix --json Output result in JSON format to stdout content rewrite [<flags>] [<contentID>...] Rewrite content using most recent format --parallelism=16 Number of parallel workers --short Rewrite contents from short packs --format-version=-1 Rewrite contents using the provided format version --pack-prefix=PACK-PREFIX Only rewrite contents from pack blobs with a given prefix -n, --dry-run Do not actually rewrite, only print what would happen --prefix=PREFIX Content ID prefix --prefixed Apply to content IDs with (any) prefix --non-prefixed Apply to content IDs without prefix --safety=full Safety level content show [<flags>] <id>... Show contents by ID. -j, --json Pretty-print JSON content -z, --unzip Transparently decompress the content content stats [<flags>] Content statistics -r, --raw Raw numbers --prefix=PREFIX Content ID prefix --prefixed Apply to content IDs with (any) prefix --non-prefixed Apply to content IDs without prefix content verify [<flags>] Verify that each content is backed by a valid blob --parallel=16 Parallelism --full Full verification (including download) --include-deleted Include deleted contents --download-percent=DOWNLOAD-PERCENT Download a percentage of files [0.0 .. 100.0] --progress-interval=3s Progress output interval --prefix=PREFIX Content ID prefix --prefixed Apply to content IDs with (any) prefix --non-prefixed Apply to content IDs without prefix diff [<flags>] <object-path1> <object-path2> Displays differences between two repository objects (files or directo- ries) -f, --files Compare files by launching diff command for all pairs of (old,new) --diff-command="diff -u" Displays differences between two repository objects (files or directories) index epoch list List the status of epochs. index inspect [<flags>] [<blobs>...] 
Inspect index blob --all Inspect all index blobs in the repository, including inactive --active Inspect all active index blobs --content-id=CONTENT-ID Inspect all active index blobs --parallel=8 Parallelism index list [<flags>] List content indexes --summary Display index blob summary --superseded Include inactive index files superseded by compaction --sort=time Index blob sort order --json Output result in JSON format to stdout index optimize [<flags>] Optimize indexes blobs. --max-small-blobs=1 Maximum number of small index blobs that can be left after com- paction. --drop-deleted-older-than=DROP-DELETED-OLDER-THAN Drop deleted contents above given age --drop-contents=DROP-CONTENTS Drop contents with given IDs --all Optimize all indexes, even those above maximum size. index recover [<flags>] Recover indexes from pack blobs --blob-prefixes=BLOB-PREFIXES Prefixes of pack blobs to recover from (default=all packs) --blobs=BLOBS Names of pack blobs to recover from (default=all packs) --parallel=1 Recover parallelism --ignore-errors Ignore errors when recovering --delete-indexes Delete all indexes before recovering --commit Commit recovered content list [<flags>] <object-path> List a directory stored in repository object. -l, --long Long output -r, --recursive Recursive output -o, --show-object-id Show object IDs --error-summary Emit error summary logs cleanup [<flags>] Clean up logs --max-age=720h Maximal age --max-count=10000 Maximal number of files to keep --max-total-size-mb=1024 Maximal total size in MiB --dry-run Do not delete logs list [<flags>] List logs. --all Show all logs -n, --latest=LATEST Include last N logs, by default the last one is shown --younger-than=YOUNGER-THAN Include logs younger than X (e.g. '1h') --older-than=OLDER-THAN Include logs older than X (e.g. '1h') logs show [<flags>] [<session-id>...] Show contents of the log. When no flags or arguments are specified, only the last log is shown. --all Show all logs -n, --latest=LATEST Include last N logs, by default the last one is shown --younger-than=YOUNGER-THAN Include logs younger than X (e.g. '1h') --older-than=OLDER-THAN Include logs older than X (e.g. '1h') notification profile configure email --profile-name=PROFILE-NAME [<flags>] E-mail notification. --profile-name=PROFILE-NAME Profile name --send-test-notification Test the notification --min-severity=MIN-SEVERITY Minimum severity --smtp-server=SMTP-SERVER SMTP server --smtp-port=SMTP-PORT SMTP port --smtp-identity=SMTP-IDENTITY SMTP identity --smtp-username=SMTP-USERNAME SMTP username --smtp-password=SMTP-PASSWORD SMTP password --mail-from=MAIL-FROM From address --mail-to=MAIL-TO To address --mail-cc=MAIL-CC CC address --format=FORMAT Format of the message notification profile configure pushover --profile-name=PROFILE-NAME [<flags>] Pushover notification. --profile-name=PROFILE-NAME Profile name --send-test-notification Test the notification --min-severity=MIN-SEVERITY Minimum severity --app-token=APP-TOKEN Pushover App Token --user-key=USER-KEY Pushover User Key --format=FORMAT Format of the message notification profile configure webhook --profile-name=PROFILE-NAME [<flags>] Webhook notification. 
--profile-name=PROFILE-NAME Profile name --send-test-notification Test the notification --min-severity=MIN-SEVERITY Minimum severity --endpoint=ENDPOINT SMTP server --method=METHOD HTTP Method --http-header=HTTP-HEADER HTTP Header (key:value) --format=FORMAT Format of the message notification profile delete --profile-name=PROFILE-NAME Delete notification profile --profile-name=PROFILE-NAME Profile name notification profile test --profile-name=PROFILE-NAME Send test notification --profile-name=PROFILE-NAME Profile name notification profile list [<flags>] List notification profiles --json Output result in JSON format to stdout --raw Raw output notification profile show --profile-name=PROFILE-NAME [<flags>] Show notification profile --json Output result in JSON format to stdout --profile-name=PROFILE-NAME Profile name --raw Raw output notification template list [<flags>] List templates --json Output result in JSON format to stdout notification template set [<flags>] <template> Set the notification template --from-stdin Read new template from stdin --from-file=FROM-FILE Read new template from file --editor Edit template using default editor notification template show [<flags>] <template> Show template --format=FORMAT Template format --original Show original template --html Convert the output to HTML notification template remove <template> Remove the notification template server start [<flags>] Start Kopia server --html=HTML Server the provided HTML at the root URL --ui Start the server with HTML UI --grpc Start the GRPC server --control-api Start the control API --refresh-interval=4h Frequency for refreshing repository status --max-concurrency=0 Maximum number of server goroutines --server-control-username="server-control" Server control username --server-control-password=PASSWORD Server control password --tls-cert-file=TLS-CERT-FILE TLS certificate PEM --tls-key-file=TLS-KEY-FILE TLS key PEM file --persistent-logs Persist logs in a file --ui-preferences-file=UI-PREFERENCES-FILE Path to JSON file storing UI preferences --shutdown-grace-period=5s Grace period for shutting down the server --kopiaui-notifications Enable notifications to be printed to stdout for KopiaUI --address="http://127.0.0.1:51515" Server address --server-username="kopia" HTTP server username (basic auth) --server-password=SERVER-PASSWORD HTTP server password (basic auth) --cache-directory=PATH Cache directory --content-cache-size-mb=MB Desired size of local content cache (soft limit) --content-cache-size-limit-mb=MB Maximum size of local content cache (hard limit) --content-min-sweep-age=CONTENT-MIN-SWEEP-AGE Minimal age of content cache item to be subject to sweeping --metadata-cache-size-mb=MB Desired size of local metadata cache (soft limit) --metadata-cache-size-limit-mb=MB Maximum size of local metadata cache (hard limit) --metadata-min-sweep-age=METADATA-MIN-SWEEP-AGE Minimal age of metadata cache item to be subject to sweeping --index-min-sweep-age=INDEX-MIN-SWEEP-AGE Minimal age of index cache item to be subject to sweeping --max-list-cache-duration=MAX-LIST-CACHE-DURATION Duration of index cache --check-for-updates Periodically check for Kopia updates on GitHub --readonly Make repository read-only to avoid accidental changes --description=DESCRIPTION Human-readable description of the repository --enable-actions Allow snapshot actions server acl add --user=USER --target=TARGET --access=ACCESS [<flags>] Add ACL entry --user=USER User the ACL targets --target=TARGET Manifests targeted by the rule 
(type:T,key1:value1,...,keyN:val- ueN) --access=ACCESS Access the user gets to subject --overwrite Overwrite existing rule with the same user and target server acl delete [<flags>] [<id>...] Delete ACL entry --all Remove all ACL entries --delete Really delete server acl enable [<flags>] Enable ACLs and install default entries --reset Reset all ACLs to default server acl list [<flags>] List ACL entries --json Output result in JSON format to stdout server users add [<flags>] <username> Add new repository user --ask-password Ask for user password --user-password=USER-PASSWORD Password --user-password-hash=USER-PASSWORD-HASH Password hash server users set [<flags>] <username> Set password for a repository user. --ask-password Ask for user password --user-password=USER-PASSWORD Password --user-password-hash=USER-PASSWORD-HASH Password hash server users delete <username> Delete user server users hash-password [<flags>] Hash a user password that can be passed to the 'server user add/set' command --user-password=USER-PASSWORD Password server users info <username> Info about particular user server users list [<flags>] List users --json Output result in JSON format to stdout server status [<flags>] Status of Kopia server --remote Show remote sources --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server refresh [<flags>] Refresh the cache in Kopia server to observe new sources, etc. --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server flush [<flags>] Flush the state of Kopia server to persistent storage, etc. 
--address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server shutdown [<flags>] Gracefully shutdown the server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server snapshot [<flags>] [<source>] Trigger upload for one or more existing sources --all All paths managed by server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server cancel [<flags>] [<source>] Cancels in-progress uploads for one or more sources --all All paths managed by server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server pause [<flags>] [<source>] Pause the scheduled snapshots for one or more sources --all All paths managed by server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server resume [<flags>] [<source>] Resume the scheduled snapshots for one or more sources --all All paths managed by server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint server throttle get [<flags>] Get throttling parameters for a running server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint --json Output result in JSON format to stdout server throttle set [<flags>] Set throttling parameters for a running server --address="http://127.0.0.1:51515" Address of the server to connect to --server-control-username=SERVER-CONTROL-USERNAME Server control username --server-control-password=PASSWORD Server control password --server-cert-fingerprint=SHA256-FINGERPRINT Server certificate fingerprint --download-bytes-per-second=DOWNLOAD-BYTES-PER-SECOND Set the download bytes per second --upload-bytes-per-second=UPLOAD-BYTES-PER-SECOND Set the upload bytes per second --read-requests-per-second=READ-REQUESTS-PER-SECOND Set max reads per second --write-requests-per-second=WRITE-REQUESTS-PER-SECOND Set max writes per second --list-requests-per-second=LIST-REQUESTS-PER-SECOND Set max lists per second --concurrent-reads=CONCURRENT-READS Set max concurrent reads --concurrent-writes=CONCURRENT-WRITES Set max concurrent writes session list List sessions restore 
[<flags>] <sources>... Restore a directory or a file. Restore can operate in two modes: * from a snapshot: restoring (possibly shallowly) a specified file or directory from a snapshot into a target path. By default, the target path will be created by the restore command if it does not exist. * by expanding a shallow placeholder in situ where the placeholder was created by a previous restore. In the from-snapshot mode: The source to be restored is specified in the form of a directory or file ID and optionally a sub-directory path. For example, the following source and target arguments will restore the contents of the 'kffbb7c28ea6c34d6cbe555d1cf80faa9' directory into a new, local directory named 'd1' Similarly, the following command will restore the contents of a subdi- rectory directory named 'sd2' When restoring to a target path that already has existing data, by de- fault the restore will attempt to overwrite, unless one or more of the following flags has been set (to prevent overwrite of each type): --no-overwrite-files --no-overwrite-directories --no-overwrite-symlinks If the '--shallow' option is provided, files and directories this depth and below in the directory hierarchy will be represented by compact placeholder files of the form 'entry.kopia-entry' instead of being re- stored. (I.e. setting '--shallow' to 0 will only shallow restore.) Snapshots created of directory contents represented by placeholder files will be identical to snapshots of the equivalent fully expanded tree. In the expanding-a-placeholder mode: The source to be restored is a pre-existing placeholder entry of the form of the expansion and defaults to 0. For example: will remove the d3.kopiadir placeholder and restore the referenced repository contents into path d3 where the contents of the newly cre- ated path d3 will themselves be placeholder files. --overwrite-directories Overwrite existing directories --overwrite-files Specifies whether or not to overwrite already existing files --overwrite-symlinks Specifies whether or not to overwrite already existing symlinks --write-sparse-files When doing a restore, attempt to write files sparsely-allocating the minimum amount of disk space needed. --consistent-attributes When multiple snapshots match, fail if they have inconsistent attributes --mode=auto Override restore mode --parallel=8 Restore parallelism (1=disable) --skip-owners Skip owners during restore --skip-permissions Skip permissions during restore --skip-times Skip times during restore --ignore-permission-errors Ignore permission errors --write-files-atomically Write files atomically to disk, ensuring they are either fully committed, or not written at all, preventing partially written files --ignore-errors Ignore all errors --skip-existing Skip files and symlinks that exist in the output --shallow=SHALLOW Shallow restore the directory hierarchy starting at this level (default is to deep restore the entire hierarchy.) --shallow-minsize=SHALLOW-MINSIZE When doing a shallow restore, write actual files instead of placeholders smaller than this size. --snapshot-time="latest" When using a path as the source, use the latest snapshot avail- able before this date. Default is latest show <object-path> Displays the contents of a repository object. snapshot copy-history [<flags>] <source> [<destination>] Performs a copy of the history of snapshots from another user or host. This command will copy snapshot manifests of the specified source to the respective destination. 
This is typically used when renam- ing a host, switching username or moving directory around to main- tain snapshot history. Both source and destination can be specified using user@host, @host or user@host:/path where destination values override the corresponding parts of the source, so both targeted and mass copy is supported. Source: Destination Behavior --------------------------------------------------- @host1 @host2 copy snapshots from all users of host1 @host1 user2@host2 (disallowed as it would potentially collapse users) @host1 user2@host2:/path2 (disallowed as it would potentially collapse paths) user1@host1 @host2 copy all snapshots to user1@host2 user1@host1 user2@host2 copy all snapshots to user2@host2 user1@host1 user2@host2:/path2 (disallowed as it would potentially collapse paths) user1@host1:/path1 @host2 copy to user1@host2:/path1 user1@host1:/path1 user2@host2 copy to user2@host2:/path1 user1@host1:/path1 user2@host2:/path2 copy snapshots from single path. -n, --dry-run Do not actually copy snapshots, only print what would happen snapshot move-history [<flags>] <source> [<destination>] Performs a move of the history of snapshots from another user or host. This command will move snapshot manifests of the specified source to the respective destination. This is typically used when renam- ing a host, switching username or moving directory around to main- tain snapshot history. Both source and destination can be specified using user@host, @host or user@host:/path where destination values override the corresponding parts of the source, so both targeted and mass move is supported. Source: Destination Behavior --------------------------------------------------- @host1 @host2 move snapshots from all users of host1 @host1 user2@host2 (disallowed as it would potentially collapse users) @host1 user2@host2:/path2 (disallowed as it would potentially collapse paths) user1@host1 @host2 move all snapshots to user1@host2 user1@host1 user2@host2 move all snapshots to user2@host2 user1@host1 user2@host2:/path2 (disallowed as it would potentially collapse paths) user1@host1:/path1 @host2 move to user1@host2:/path1 user1@host1:/path1 user2@host2 move to user2@host2:/path1 user1@host1:/path1 user2@host2:/path2 move snapshots from single path. -n, --dry-run Do not actually copy snapshots, only print what would happen snapshot create [<flags>] [<source>...] Creates a snapshot of local directory or file. --all Create snapshots for files or directories previously backed up by this user on this computer. Cannot be used when a source path argument is also specified. --upload-limit-mb=MB Stop the backup process after the specified amount of data (in MB) has been uploaded. --description=DESCRIPTION Free-form snapshot description. --fail-fast Fail fast when creating snapshot. --force-hash=0 Force hashing of source files for a given percentage of files [0.0 .. 100.0] --parallel=N Upload N files in parallel --start-time=START-TIME Override snapshot start timestamp. --end-time=END-TIME Override snapshot end timestamp. --stdin-file=STDIN-FILE File path to be used for stdin data snapshot. --tags=TAGS Tags applied on the snapshot. Must be provided in the <key>:<value> format. --pin=PIN Create a pinned snapshot that will not expire automatically --override-source=OVERRIDE-SOURCE Override the source of the snapshot. 
--send-snapshot-report Send a snapshot report notification using configured notifica- tion profiles --log-dir-detail=LOG-DIR-DETAIL Override log level for directories --log-entry-detail=LOG-ENTRY-DETAIL Override log level for entries --json Output result in JSON format to stdout snapshot delete [<flags>] <id>... Explicitly delete a snapshot by providing a snapshot ID. --all-snapshots-for-source Delete all snapshots for a source --delete Confirm deletion snapshot estimate [<flags>] <source> Estimate the snapshot size and upload time. --show-files Show files -q, --quiet Do not display scanning progress --upload-speed=mbit/s Upload speed to use for estimation --max-examples-per-bucket=10 Max examples per bucket snapshot expire [<flags>] [<path>...] Remove old snapshots according to defined expiration policies. --all Expire all snapshots --delete Whether to actually delete snapshots snapshot fix invalid-files [<flags>] Remove references to any invalid (unreadable) files from snapshots. --manifest-id=MANIFEST-ID Manifest IDs --source=SOURCE Source to target (username@hostname:/path) --commit Update snapshot manifests --parallel=PARALLEL Parallelism --invalid-directory-handling=stub Handling of invalid directories --invalid-file-handling=stub How to handle invalid files --verify-files-percent=0 Verify a percentage of files by fully downloading them [0.0 .. 100.0] snapshot fix remove-files [<flags>] Remove references to the specified files from snapshots. --manifest-id=MANIFEST-ID Manifest IDs --source=SOURCE Source to target (username@hostname:/path) --commit Update snapshot manifests --parallel=PARALLEL Parallelism --invalid-directory-handling=stub Handling of invalid directories --object-id=OBJECT-ID Remove files by their object ID --filename=FILENAME Remove files by filename (wildcards are supported) snapshot list [<flags>] [<source>] List snapshots of files and directories. -i, --incomplete Include incomplete. --human-readable Show human-readable units -d, --delta Include deltas. -m, --manifest-id Include manifest item ID. --retention Include retention reasons. --mtime Include file mod time --owner Include owner -l, --show-identical Show identical snapshots --storage-stats Compute and show storage statistics --reverse Reverse sort order -a, --all Show all snapshots (not just current username/host) -n, --max-results=MAX-RESULTS Maximum number of entries per source. --tags=TAGS Tag filters to apply on the list items. Must be provided in the <key>:<value> format. --json Output result in JSON format to stdout snapshot migrate --source-config=SOURCE-CONFIG [<flags>] Migrate snapshots from another repository --source-config=SOURCE-CONFIG Configuration file for the source repository --sources=SOURCES List of sources to migrate --all Migrate all sources --policies Migrate policies too --overwrite-policies Overwrite policies --latest-only Only migrate the latest snapshot --parallel=1 Number of sources to migrate in parallel --apply-ignore-rules When migrating also apply current ignore rules snapshot pin [<flags>] <id>... Add or remove pins preventing snapshot deletion --add=ADD Add pins --remove=REMOVE Remove pins snapshot restore [<flags>] <sources>... Restore a directory or a file. Restore can operate in two modes: * from a snapshot: restoring (possibly shallowly) a specified file or directory from a snapshot into a target path. By default, the target path will be created by the restore command if it does not exist. 
* by expanding a shallow placeholder in situ where the placeholder was created by a previous restore. In the from-snapshot mode: The source to be restored is specified in the form of a directory or file ID and optionally a sub-directory path. For example, the following source and target arguments will restore the contents of the 'kffbb7c28ea6c34d6cbe555d1cf80faa9' directory into a new, local directory named 'd1' Similarly, the following command will restore the contents of a subdi- rectory directory named 'sd2' When restoring to a target path that already has existing data, by de- fault the restore will attempt to overwrite, unless one or more of the following flags has been set (to prevent overwrite of each type): --no-overwrite-files --no-overwrite-directories --no-overwrite-symlinks If the '--shallow' option is provided, files and directories this depth and below in the directory hierarchy will be represented by compact placeholder files of the form 'entry.kopia-entry' instead of being re- stored. (I.e. setting '--shallow' to 0 will only shallow restore.) Snapshots created of directory contents represented by placeholder files will be identical to snapshots of the equivalent fully expanded tree. In the expanding-a-placeholder mode: The source to be restored is a pre-existing placeholder entry of the form of the expansion and defaults to 0. For example: will remove the d3.kopiadir placeholder and restore the referenced repository contents into path d3 where the contents of the newly cre- ated path d3 will themselves be placeholder files. --overwrite-directories Overwrite existing directories --overwrite-files Specifies whether or not to overwrite already existing files --overwrite-symlinks Specifies whether or not to overwrite already existing symlinks --write-sparse-files When doing a restore, attempt to write files sparsely-allocating the minimum amount of disk space needed. --consistent-attributes When multiple snapshots match, fail if they have inconsistent attributes --mode=auto Override restore mode --parallel=8 Restore parallelism (1=disable) --skip-owners Skip owners during restore --skip-permissions Skip permissions during restore --skip-times Skip times during restore --ignore-permission-errors Ignore permission errors --write-files-atomically Write files atomically to disk, ensuring they are either fully committed, or not written at all, preventing partially written files --ignore-errors Ignore all errors --skip-existing Skip files and symlinks that exist in the output --shallow=SHALLOW Shallow restore the directory hierarchy starting at this level (default is to deep restore the entire hierarchy.) --shallow-minsize=SHALLOW-MINSIZE When doing a shallow restore, write actual files instead of placeholders smaller than this size. --snapshot-time="latest" When using a path as the source, use the latest snapshot avail- able before this date. Default is latest snapshot verify [<flags>] [<snapshot-ids>...] Verify the contents of stored snapshot --max-errors=0 Maximum number of errors before stopping --directory-id=DIRECTORY-ID Directory object IDs to verify --file-id=FILE-ID File object IDs to verify --sources=SOURCES Verify the provided sources --parallel=8 Parallelization --file-queue-length=20000 Queue length for file verification --file-parallelism=FILE-PARALLELISM Parallelism for file verification --verify-files-percent=0 Randomly verify a percentage of files by downloading them [0.0 .. 100.0] manifest delete <item>... 
Remove manifest items manifest list [<flags>] List manifest items --filter=FILTER List of key:value pairs --sort=SORT List of keys to sort by --json Output result in JSON format to stdout manifest show <item>... Show manifest items policy edit [<flags>] [<target>...] Set snapshot policy for a single directory, user@host or a global pol- icy. --global Select the global policy. policy list [<flags>] List policies. --json Output result in JSON format to stdout policy delete [<flags>] [<target>...] Remove snapshot policy for a single directory, user@host or a global policy. --global Select the global policy. -n, --dry-run Do not remove policy set [<flags>] [<target>...] Set snapshot policy for a single directory, user@host or a global pol- icy. --global Select the global policy. --inherit=INHERIT Enable or disable inheriting policies from the parent --before-folder-action=COMMAND Path to before-folder action command ('none' to remove) --after-folder-action=COMMAND Path to after-folder action command ('none' to remove) --before-snapshot-root-action=COMMAND Path to before-snapshot-root action command ('none' to remove or 'inherit') --after-snapshot-root-action=COMMAND Path to after-snapshot-root action command ('none' to remove or 'inherit') --action-command-timeout=5m Max time allowed for an action to run in seconds --action-command-mode=essential Action command mode --persist-action-script Persist action script --compression=COMPRESSION Compression algorithm --compression-min-size=COMPRESSION-MIN-SIZE Min size of file to attempt compression for --compression-max-size=COMPRESSION-MAX-SIZE Max size of file to attempt compression for --add-only-compress=PATTERN List of extensions to add to the only-compress list --remove-only-compress=PATTERN List of extensions to remove from the only-compress list --clear-only-compress Clear list of extensions in the only-compress list --add-never-compress=PATTERN List of extensions to add to the never compress list --remove-never-compress=PATTERN List of extensions to remove from the never compress list --clear-never-compress Clear list of extensions in the never compress list --metadata-compression=METADATA-COMPRESSION Metadata Compression algorithm --splitter=SPLITTER Splitter algorithm override --ignore-file-errors=IGNORE-FILE-ERRORS Ignore errors reading files while traversing ('true', 'false', 'inherit') --ignore-dir-errors=IGNORE-DIR-ERRORS Ignore errors reading directories while traversing ('true', 'false', 'inherit --ignore-unknown-types=IGNORE-UNKNOWN-TYPES Ignore unknown entry types in directories ('true', 'false', 'in- herit --add-ignore=PATTERN List of paths to add to the ignore list --remove-ignore=PATTERN List of paths to remove from the ignore list --clear-ignore Clear list of paths in the ignore list --add-dot-ignore=FILENAME List of paths to add to the dot-ignore list --remove-dot-ignore=FILENAME List of paths to remove from the dot-ignore list --clear-dot-ignore Clear list of paths in the dot-ignore list --max-file-size=N Exclude files above given size --one-file-system=ONE-FILE-SYSTEM Stay in parent filesystem when finding files ('true', 'false', 'inherit') --ignore-cache-dirs=IGNORE-CACHE-DIRS Ignore cache directories ('true', 'false', 'inherit') --log-dir-snapshotted=N Log detail when a directory is snapshotted (or 'inherit') --log-dir-ignored=N Log detail when a directory is ignored (or 'inherit') --log-entry-snapshotted=N Log detail when an entry is snapshotted (or 'inherit') --log-entry-ignored=N Log detail when an entry is ignored (or 
'inherit') --log-entry-cache-hit=N Log detail on entry cache hit (or 'inherit') --log-entry-cache-miss=N Log detail on entry cache miss (or 'inherit') --keep-latest=N Number of most recent backups to keep per source (or 'inherit') --keep-hourly=N Number of most-recent hourly backups to keep per source (or 'in- herit') --keep-daily=N Number of most-recent daily backups to keep per source (or 'in- herit') --keep-weekly=N Number of most-recent weekly backups to keep per source (or 'in- herit') --keep-monthly=N Number of most-recent monthly backups to keep per source (or 'inherit') --keep-annual=N Number of most-recent annual backups to keep per source (or 'in- herit') --ignore-identical-snapshots=IGNORE-IDENTICAL-SNAPSHOTS Do not save identical snapshots (or 'inherit') --snapshot-interval=SNAPSHOT-INTERVAL Interval between snapshots --snapshot-time=SNAPSHOT-TIME Comma-separated times of day when to take snapshot (HH:mm,HH:mm,...) or 'inherit' to remove override --snapshot-time-crontab=SNAPSHOT-TIME-CRONTAB Semicolon-separated crontab-compatible expressions (or 'in- herit') --run-missed=RUN-MISSED Run missed time-of-day or cron snapshots ('true', 'false', 'in- herit') --manual Only create snapshots manually --enable-volume-shadow-copy=MODE Enable Volume Shadow Copy snapshots ('never', 'always', 'when- available', 'inherit') --max-parallel-file-reads=MAX-PARALLEL-FILE-READS Maximum number of parallel file reads --max-parallel-snapshots=MAX-PARALLEL-SNAPSHOTS Maximum number of parallel snapshots (server, KopiaUI only) --parallel-upload-above-size-mib=PARALLEL-UPLOAD-ABOVE-SIZE-MIB Use parallel uploads above size policy show [<flags>] [<target>...] Show snapshot policy. --global Select the global policy. --json Output result in JSON format to stdout policy export [<flags>] [<target>...] Exports the policy to the specified file, or to stdout if none is spec- ified. --to-file=TO-FILE File path to export to --overwrite Overwrite the file if it exists --global Select the global policy. policy import [<flags>] [<target>...] Imports policies from a specified file, or stdin if no file is speci- fied. --from-file=FROM-FILE File path to import from --allow-unknown-fields Allow unknown fields in the policy file --delete-other-policies Delete all other policies, keeping only those that got imported --global Select the global policy. mount [<flags>] [<path>] [<mountPoint>] Mount repository object as a local filesystem. --browse Open file browser --trace-fs Trace filesystem operations --fuse-allow-other Allows other users to access the file system. --fuse-allow-non-empty-mount Allows the mounting over a non-empty directory. The files in it will be shadowed by the freshly created mount. --webdav Use WebDAV to mount the repository object regardless of fuse availability. 
--max-cached-entries=100000 Limit the number of cached directory entries --max-cached-dirs=100 Limit the number of cached directories maintenance info [<flags>] Display maintenance information --json Output result in JSON format to stdout maintenance run [<flags>] Run repository maintenance --full Full maintenance --safety=full Safety level maintenance set [<flags>] Set maintenance parameters --owner=OWNER Set maintenance owner user@hostname --enable-quick=ENABLE-QUICK Enable or disable quick maintenance --enable-full=ENABLE-FULL Enable or disable full maintenance --quick-interval=QUICK-INTERVAL Set quick maintenance interval --full-interval=FULL-INTERVAL Set full maintenance interval --pause-quick=PAUSE-QUICK Pause quick maintenance for a specified duration --pause-full=PAUSE-FULL Pause full maintenance for a specified duration --max-retained-log-count=MAX-RETAINED-LOG-COUNT Set maximum number of log sessions to retain --max-retained-log-age=MAX-RETAINED-LOG-AGE Set maximum age of log sessions to retain --max-retained-log-size-mb=MAX-RETAINED-LOG-SIZE-MB Set maximum total size of log sessions --extend-object-locks=EXTEND-OBJECT-LOCKS Extend retention period of locked objects as part of full main- tenance. --list-parallelism=LIST-PARALLELISM Override list parallelism. repository connect server --url=URL [<flags>] Connect to a repository API Server. --url=URL Server URL --server-cert-fingerprint=SERVER-CERT-FINGERPRINT Server certificate fingerprint repository connect from-config [<flags>] Connect to repository in the provided configuration file --file=FILE Path to the configuration file --token=TOKEN Configuration token --token-file=TOKEN-FILE Path to the configuration token file --token-stdin Read configuration token from stdin repository connect azure --container=CONTAINER --storage-account=STORAGE- ACCOUNT [<flags>] Connect to repository in an Azure blob storage --container=CONTAINER Name of the Azure blob container --storage-account=STORAGE-ACCOUNT Azure storage account name (overrides AZURE_STORAGE_ACCOUNT en- vironment variable) --storage-key=STORAGE-KEY Azure storage account key (overrides AZURE_STORAGE_KEY environ- ment variable) --storage-domain=STORAGE-DOMAIN Azure storage domain --sas-token=SAS-TOKEN Azure SAS Token --prefix=PREFIX Prefix to use for objects in the bucket --tenant-id=TENANT-ID Azure service principle tenant ID (overrides AZURE_TENANT_ID en- vironment variable) --client-id=CLIENT-ID Azure service principle client ID (overrides AZURE_CLIENT_ID en- vironment variable) --client-secret=CLIENT-SECRET Azure service principle client secret (overrides AZURE_CLIENT_SECRET environment variable) --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository connect b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>] Connect to repository in a B2 bucket --bucket=BUCKET Name of the B2 bucket --key-id=KEY-ID Key ID (overrides B2_KEY_ID environment variable) --key=KEY Secret key (overrides B2_KEY environment variable) --prefix=PREFIX Prefix to use for objects in the bucket --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. 
repository connect filesystem --path=PATH [<flags>] Connect to repository in a filesystem --path=PATH Path to the repository --owner-uid=USER User ID owning newly created files --owner-gid=GROUP Group ID owning newly created files --file-mode=MODE File mode for newly created files (0600) --dir-mode=MODE Mode of newly directory files (0700) --flat Use flat directory structure --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository connect gcs --bucket=BUCKET [<flags>] Connect to repository in a Google Cloud Storage bucket --bucket=BUCKET Name of the Google Cloud Storage bucket --prefix=PREFIX Prefix to use for objects in the bucket --read-only Use read-only GCS scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository connect gdrive --folder-id=FOLDER-ID [<flags>] Connect to repository in a Google Drive folder --folder-id=FOLDER-ID FolderID to use for objects in the bucket --read-only Use read-only scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository connect rclone --remote-path=REMOTE-PATH [<flags>] Connect to repository in a rclone-based provided --remote-path=REMOTE-PATH Rclone remote:path --flat Use flat directory structure --rclone-exe=RCLONE-EXE Path to rclone binary --rclone-args=RCLONE-ARGS Pass additional parameters to rclone --rclone-env=RCLONE-ENV Pass additional environment (key=value) to rclone --embed-rclone-config=EMBED-RCLONE-CONFIG Embed the provider RClone config --atomic-writes Assume provider writes are atomic --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository connect s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-ac- cess-key=SECRET-ACCESS-KEY [<flags>] Connect to repository in an S3 bucket --bucket=BUCKET Name of the S3 bucket --endpoint="s3.amazonaws.com" Endpoint to use --region="" S3 Region --access-key=ACCESS-KEY Access key ID (overrides AWS_ACCESS_KEY_ID environment variable) --secret-access-key=SECRET-ACCESS-KEY Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable) --session-token=SESSION-TOKEN Session token (overrides AWS_SESSION_TOKEN environment variable) --prefix=PREFIX Prefix to use for objects in the bucket. Put trailing slash (/) if you want to use prefix as directory. e.g my-backup-dir/ would put repository contents inside my-backup-dir directory --disable-tls Disable TLS security (HTTPS) --disable-tls-verification Disable TLS (HTTPS) certificate verification --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported --root-ca-pem-base64=ROOT-CA-PEM-BASE64 Certificate authority in-line (base64 enc.) 
--root-ca-pem-path=ROOT-CA-PEM-PATH Certificate authority file path repository connect sftp --path=PATH --host=HOST --username=USERNAME [<flags>] Connect to repository in an SFTP storage --path=PATH Path to the repository in the SFTP/SSH server --host=HOST SFTP/SSH server hostname --port=22 SFTP/SSH server port --username=USERNAME SFTP/SSH server username --sftp-password=SFTP-PASSWORD SFTP/SSH server password --keyfile=KEYFILE path to private key file for SFTP/SSH server --key-data=KEY-DATA private key data --known-hosts=KNOWN-HOSTS path to known_hosts file --known-hosts-data=KNOWN-HOSTS-DATA known_hosts file entries --embed-credentials Embed key and known_hosts in Kopia configuration --external Launch external passwordless SSH command --ssh-command="ssh" SSH command --ssh-args=SSH-ARGS Arguments to external SSH command --flat Use flat directory structure --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository connect webdav --url=URL [<flags>] Connect to repository in a WebDAV storage --url=URL URL of WebDAV server --flat Use flat directory structure --webdav-username=WEBDAV-USERNAME WebDAV username --webdav-password=WEBDAV-PASSWORD WebDAV password --atomic-writes Assume WebDAV provider implements atomic writes --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository create from-config [<flags>] Create repository in the provided configuration file --file=FILE Path to the configuration file --token=TOKEN Configuration token --token-file=TOKEN-FILE Path to the configuration token file --token-stdin Read configuration token from stdin repository create azure --container=CONTAINER --storage-account=STORAGE-AC- COUNT [<flags>] Create repository in an Azure blob storage --container=CONTAINER Name of the Azure blob container --storage-account=STORAGE-ACCOUNT Azure storage account name (overrides AZURE_STORAGE_ACCOUNT en- vironment variable) --storage-key=STORAGE-KEY Azure storage account key (overrides AZURE_STORAGE_KEY environ- ment variable) --storage-domain=STORAGE-DOMAIN Azure storage domain --sas-token=SAS-TOKEN Azure SAS Token --prefix=PREFIX Prefix to use for objects in the bucket --tenant-id=TENANT-ID Azure service principle tenant ID (overrides AZURE_TENANT_ID en- vironment variable) --client-id=CLIENT-ID Azure service principle client ID (overrides AZURE_CLIENT_ID en- vironment variable) --client-secret=CLIENT-SECRET Azure service principle client secret (overrides AZURE_CLIENT_SECRET environment variable) --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository create b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>] Create repository in a B2 bucket --bucket=BUCKET Name of the B2 bucket --key-id=KEY-ID Key ID (overrides B2_KEY_ID environment variable) --key=KEY Secret key (overrides B2_KEY environment variable) --prefix=PREFIX Prefix to use for objects in the bucket --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. 
repository create filesystem --path=PATH [<flags>] Create repository in a filesystem --path=PATH Path to the repository --owner-uid=USER User ID owning newly created files --owner-gid=GROUP Group ID owning newly created files --file-mode=MODE File mode for newly created files (0600) --dir-mode=MODE Mode of newly directory files (0700) --flat Use flat directory structure --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository create gcs --bucket=BUCKET [<flags>] Create repository in a Google Cloud Storage bucket --bucket=BUCKET Name of the Google Cloud Storage bucket --prefix=PREFIX Prefix to use for objects in the bucket --read-only Use read-only GCS scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository create gdrive --folder-id=FOLDER-ID [<flags>] Create repository in a Google Drive folder --folder-id=FOLDER-ID FolderID to use for objects in the bucket --read-only Use read-only scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository create rclone --remote-path=REMOTE-PATH [<flags>] Create repository in a rclone-based provided --remote-path=REMOTE-PATH Rclone remote:path --flat Use flat directory structure --rclone-exe=RCLONE-EXE Path to rclone binary --rclone-args=RCLONE-ARGS Pass additional parameters to rclone --rclone-env=RCLONE-ENV Pass additional environment (key=value) to rclone --embed-rclone-config=EMBED-RCLONE-CONFIG Embed the provider RClone config --atomic-writes Assume provider writes are atomic --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository create s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-ac- cess-key=SECRET-ACCESS-KEY [<flags>] Create repository in an S3 bucket --bucket=BUCKET Name of the S3 bucket --endpoint="s3.amazonaws.com" Endpoint to use --region="" S3 Region --access-key=ACCESS-KEY Access key ID (overrides AWS_ACCESS_KEY_ID environment variable) --secret-access-key=SECRET-ACCESS-KEY Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable) --session-token=SESSION-TOKEN Session token (overrides AWS_SESSION_TOKEN environment variable) --prefix=PREFIX Prefix to use for objects in the bucket. Put trailing slash (/) if you want to use prefix as directory. e.g my-backup-dir/ would put repository contents inside my-backup-dir directory --disable-tls Disable TLS security (HTTPS) --disable-tls-verification Disable TLS (HTTPS) certificate verification --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported --root-ca-pem-base64=ROOT-CA-PEM-BASE64 Certificate authority in-line (base64 enc.) 
--root-ca-pem-path=ROOT-CA-PEM-PATH Certificate authority file path repository create sftp --path=PATH --host=HOST --username=USERNAME [<flags>] Create repository in an SFTP storage --path=PATH Path to the repository in the SFTP/SSH server --host=HOST SFTP/SSH server hostname --port=22 SFTP/SSH server port --username=USERNAME SFTP/SSH server username --sftp-password=SFTP-PASSWORD SFTP/SSH server password --keyfile=KEYFILE path to private key file for SFTP/SSH server --key-data=KEY-DATA private key data --known-hosts=KNOWN-HOSTS path to known_hosts file --known-hosts-data=KNOWN-HOSTS-DATA known_hosts file entries --embed-credentials Embed key and known_hosts in Kopia configuration --external Launch external passwordless SSH command --ssh-command="ssh" SSH command --ssh-args=SSH-ARGS Arguments to external SSH command --flat Use flat directory structure --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository create webdav --url=URL [<flags>] Create repository in a WebDAV storage --url=URL URL of WebDAV server --flat Use flat directory structure --webdav-username=WEBDAV-USERNAME WebDAV username --webdav-password=WEBDAV-PASSWORD WebDAV password --atomic-writes Assume WebDAV provider implements atomic writes --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository disconnect Disconnect from a repository. repository repair from-config [<flags>] Repair repository in the provided configuration file --file=FILE Path to the configuration file --token=TOKEN Configuration token --token-file=TOKEN-FILE Path to the configuration token file --token-stdin Read configuration token from stdin repository repair azure --container=CONTAINER --storage-account=STORAGE-AC- COUNT [<flags>] Repair repository in an Azure blob storage --container=CONTAINER Name of the Azure blob container --storage-account=STORAGE-ACCOUNT Azure storage account name (overrides AZURE_STORAGE_ACCOUNT en- vironment variable) --storage-key=STORAGE-KEY Azure storage account key (overrides AZURE_STORAGE_KEY environ- ment variable) --storage-domain=STORAGE-DOMAIN Azure storage domain --sas-token=SAS-TOKEN Azure SAS Token --prefix=PREFIX Prefix to use for objects in the bucket --tenant-id=TENANT-ID Azure service principle tenant ID (overrides AZURE_TENANT_ID en- vironment variable) --client-id=CLIENT-ID Azure service principle client ID (overrides AZURE_CLIENT_ID en- vironment variable) --client-secret=CLIENT-SECRET Azure service principle client secret (overrides AZURE_CLIENT_SECRET environment variable) --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository repair b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>] Repair repository in a B2 bucket --bucket=BUCKET Name of the B2 bucket --key-id=KEY-ID Key ID (overrides B2_KEY_ID environment variable) --key=KEY Secret key (overrides B2_KEY environment variable) --prefix=PREFIX Prefix to use for objects in the bucket --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. 
repository repair filesystem --path=PATH [<flags>] Repair repository in a filesystem --path=PATH Path to the repository --owner-uid=USER User ID owning newly created files --owner-gid=GROUP Group ID owning newly created files --file-mode=MODE File mode for newly created files (0600) --dir-mode=MODE Mode of newly directory files (0700) --flat Use flat directory structure --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository repair gcs --bucket=BUCKET [<flags>] Repair repository in a Google Cloud Storage bucket --bucket=BUCKET Name of the Google Cloud Storage bucket --prefix=PREFIX Prefix to use for objects in the bucket --read-only Use read-only GCS scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported repository repair gdrive --folder-id=FOLDER-ID [<flags>] Repair repository in a Google Drive folder --folder-id=FOLDER-ID FolderID to use for objects in the bucket --read-only Use read-only scope to prevent write access --credentials-file=CREDENTIALS-FILE Use the provided JSON file with credentials --embed-credentials Embed GCS credentials JSON in Kopia configuration --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository repair rclone --remote-path=REMOTE-PATH [<flags>] Repair repository in a rclone-based provided --remote-path=REMOTE-PATH Rclone remote:path --flat Use flat directory structure --rclone-exe=RCLONE-EXE Path to rclone binary --rclone-args=RCLONE-ARGS Pass additional parameters to rclone --rclone-env=RCLONE-ENV Pass additional environment (key=value) to rclone --embed-rclone-config=EMBED-RCLONE-CONFIG Embed the provider RClone config --atomic-writes Assume provider writes are atomic --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. repository repair s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-ac- cess-key=SECRET-ACCESS-KEY [<flags>] Repair repository in an S3 bucket --bucket=BUCKET Name of the S3 bucket --endpoint="s3.amazonaws.com" Endpoint to use --region="" S3 Region --access-key=ACCESS-KEY Access key ID (overrides AWS_ACCESS_KEY_ID environment variable) --secret-access-key=SECRET-ACCESS-KEY Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable) --session-token=SESSION-TOKEN Session token (overrides AWS_SESSION_TOKEN environment variable) --prefix=PREFIX Prefix to use for objects in the bucket. Put trailing slash (/) if you want to use prefix as directory. e.g my-backup-dir/ would put repository contents inside my-backup-dir directory --disable-tls Disable TLS security (HTTPS) --disable-tls-verification Disable TLS (HTTPS) certificate verification --max-download-speed=BYTES_PER_SEC Limit the download speed. --max-upload-speed=BYTES_PER_SEC Limit the upload speed. --point-in-time=2006-01-02T15:04:05Z07:00 Use a point-in-time view of the storage repository when sup- ported --root-ca-pem-base64=ROOT-CA-PEM-BASE64 Certificate authority in-line (base64 enc.) 
    --root-ca-pem-path=ROOT-CA-PEM-PATH  Certificate authority file path

repository repair sftp --path=PATH --host=HOST --username=USERNAME [<flags>]
    Repair repository in an SFTP storage
    --path=PATH  Path to the repository in the SFTP/SSH server
    --host=HOST  SFTP/SSH server hostname
    --port=22  SFTP/SSH server port
    --username=USERNAME  SFTP/SSH server username
    --sftp-password=SFTP-PASSWORD  SFTP/SSH server password
    --keyfile=KEYFILE  path to private key file for SFTP/SSH server
    --key-data=KEY-DATA  private key data
    --known-hosts=KNOWN-HOSTS  path to known_hosts file
    --known-hosts-data=KNOWN-HOSTS-DATA  known_hosts file entries
    --embed-credentials  Embed key and known_hosts in Kopia configuration
    --external  Launch external passwordless SSH command
    --ssh-command="ssh"  SSH command
    --ssh-args=SSH-ARGS  Arguments to external SSH command
    --flat  Use flat directory structure
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository repair webdav --url=URL [<flags>]
    Repair repository in a WebDAV storage
    --url=URL  URL of WebDAV server
    --flat  Use flat directory structure
    --webdav-username=WEBDAV-USERNAME  WebDAV username
    --webdav-password=WEBDAV-PASSWORD  WebDAV password
    --atomic-writes  Assume WebDAV provider implements atomic writes
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository set-client [<flags>]
    Set repository client options.
    --read-only  Set repository to read-only
    --read-write  Set repository to read-write
    --description=DESCRIPTION  Change description
    --username=USERNAME  Change username
    --hostname=HOSTNAME  Change hostname
    --repository-format-cache-duration=REPOSITORY-FORMAT-CACHE-DURATION  Duration of kopia.repository format blob cache
    --disable-repository-format-cache  Disable caching of kopia.repository format blob

repository set-parameters [<flags>]
    Set repository parameters.
    --max-pack-size-mb=MB  Set max pack file size
    --index-version=INDEX-VERSION  Set version of index format used for writing
    --retention-mode=RETENTION-MODE  Set the blob retention-mode for supported storage backends.
    --retention-period=RETENTION-PERIOD  Set the blob retention-period for supported storage backends.
    --upgrade  Upgrade repository to the latest stable format
    --epoch-refresh-frequency=EPOCH-REFRESH-FREQUENCY  Epoch refresh frequency
    --epoch-min-duration=EPOCH-MIN-DURATION  Minimal duration of a single epoch
    --epoch-cleanup-safety-margin=EPOCH-CLEANUP-SAFETY-MARGIN  Epoch cleanup safety margin
    --epoch-advance-on-count=EPOCH-ADVANCE-ON-COUNT  Advance epoch if the number of indexes exceeds given threshold
    --epoch-advance-on-size-mb=EPOCH-ADVANCE-ON-SIZE-MB  Advance epoch if the total size of indexes exceeds given threshold
    --epoch-delete-parallelism=EPOCH-DELETE-PARALLELISM  Epoch delete parallelism
    --epoch-checkpoint-frequency=EPOCH-CHECKPOINT-FREQUENCY  Checkpoint frequency

repository status [<flags>]
    Display the status of connected repository.
    -t, --reconnect-token  Display reconnect command
    -s, --reconnect-token-with-password  Include password in reconnect token
    --json  Output result in JSON format to stdout

repository sync-to from-config [<flags>]
    Synchronize repository data to another repository in the provided configuration file
    --file=FILE  Path to the configuration file
    --token=TOKEN  Configuration token
    --token-file=TOKEN-FILE  Path to the configuration token file
    --token-stdin  Read configuration token from stdin

repository sync-to azure --container=CONTAINER --storage-account=STORAGE-ACCOUNT [<flags>]
    Synchronize repository data to another repository in an Azure blob storage
    --container=CONTAINER  Name of the Azure blob container
    --storage-account=STORAGE-ACCOUNT  Azure storage account name (overrides AZURE_STORAGE_ACCOUNT environment variable)
    --storage-key=STORAGE-KEY  Azure storage account key (overrides AZURE_STORAGE_KEY environment variable)
    --storage-domain=STORAGE-DOMAIN  Azure storage domain
    --sas-token=SAS-TOKEN  Azure SAS Token
    --prefix=PREFIX  Prefix to use for objects in the bucket
    --tenant-id=TENANT-ID  Azure service principal tenant ID (overrides AZURE_TENANT_ID environment variable)
    --client-id=CLIENT-ID  Azure service principal client ID (overrides AZURE_CLIENT_ID environment variable)
    --client-secret=CLIENT-SECRET  Azure service principal client secret (overrides AZURE_CLIENT_SECRET environment variable)
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.
    --point-in-time=2006-01-02T15:04:05Z07:00  Use a point-in-time view of the storage repository when supported

repository sync-to b2 --bucket=BUCKET --key-id=KEY-ID --key=KEY [<flags>]
    Synchronize repository data to another repository in a B2 bucket
    --bucket=BUCKET  Name of the B2 bucket
    --key-id=KEY-ID  Key ID (overrides B2_KEY_ID environment variable)
    --key=KEY  Secret key (overrides B2_KEY environment variable)
    --prefix=PREFIX  Prefix to use for objects in the bucket
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository sync-to filesystem --path=PATH [<flags>]
    Synchronize repository data to another repository in a filesystem
    --path=PATH  Path to the repository
    --owner-uid=USER  User ID owning newly created files
    --owner-gid=GROUP  Group ID owning newly created files
    --file-mode=MODE  File mode for newly created files (0600)
    --dir-mode=MODE  Mode for newly created directories (0700)
    --flat  Use flat directory structure
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository sync-to gcs --bucket=BUCKET [<flags>]
    Synchronize repository data to another repository in a Google Cloud Storage bucket
    --bucket=BUCKET  Name of the Google Cloud Storage bucket
    --prefix=PREFIX  Prefix to use for objects in the bucket
    --read-only  Use read-only GCS scope to prevent write access
    --credentials-file=CREDENTIALS-FILE  Use the provided JSON file with credentials
    --embed-credentials  Embed GCS credentials JSON in Kopia configuration
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.
    --point-in-time=2006-01-02T15:04:05Z07:00  Use a point-in-time view of the storage repository when supported

repository sync-to gdrive --folder-id=FOLDER-ID [<flags>]
    Synchronize repository data to another repository in a Google Drive folder
    --folder-id=FOLDER-ID  FolderID to use for objects in the bucket
    --read-only  Use read-only scope to prevent write access
    --credentials-file=CREDENTIALS-FILE  Use the provided JSON file with credentials
    --embed-credentials  Embed GCS credentials JSON in Kopia configuration
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository sync-to rclone --remote-path=REMOTE-PATH [<flags>]
    Synchronize repository data to another repository in an rclone-based provider
    --remote-path=REMOTE-PATH  Rclone remote:path
    --flat  Use flat directory structure
    --rclone-exe=RCLONE-EXE  Path to rclone binary
    --rclone-args=RCLONE-ARGS  Pass additional parameters to rclone
    --rclone-env=RCLONE-ENV  Pass additional environment (key=value) to rclone
    --embed-rclone-config=EMBED-RCLONE-CONFIG  Embed the provided RClone config
    --atomic-writes  Assume provider writes are atomic
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository sync-to s3 --bucket=BUCKET --access-key=ACCESS-KEY --secret-access-key=SECRET-ACCESS-KEY [<flags>]
    Synchronize repository data to another repository in an S3 bucket
    --bucket=BUCKET  Name of the S3 bucket
    --endpoint="s3.amazonaws.com"  Endpoint to use
    --region=""  S3 Region
    --access-key=ACCESS-KEY  Access key ID (overrides AWS_ACCESS_KEY_ID environment variable)
    --secret-access-key=SECRET-ACCESS-KEY  Secret access key (overrides AWS_SECRET_ACCESS_KEY environment variable)
    --session-token=SESSION-TOKEN  Session token (overrides AWS_SESSION_TOKEN environment variable)
    --prefix=PREFIX  Prefix to use for objects in the bucket. Put trailing slash (/) if you want to use prefix as directory, e.g. my-backup-dir/ would put repository contents inside the my-backup-dir directory
    --disable-tls  Disable TLS security (HTTPS)
    --disable-tls-verification  Disable TLS (HTTPS) certificate verification
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.
    --point-in-time=2006-01-02T15:04:05Z07:00  Use a point-in-time view of the storage repository when supported
    --root-ca-pem-base64=ROOT-CA-PEM-BASE64  Certificate authority in-line (base64 enc.)
    --root-ca-pem-path=ROOT-CA-PEM-PATH  Certificate authority file path

repository sync-to sftp --path=PATH --host=HOST --username=USERNAME [<flags>]
    Synchronize repository data to another repository in an SFTP storage
    --path=PATH  Path to the repository in the SFTP/SSH server
    --host=HOST  SFTP/SSH server hostname
    --port=22  SFTP/SSH server port
    --username=USERNAME  SFTP/SSH server username
    --sftp-password=SFTP-PASSWORD  SFTP/SSH server password
    --keyfile=KEYFILE  path to private key file for SFTP/SSH server
    --key-data=KEY-DATA  private key data
    --known-hosts=KNOWN-HOSTS  path to known_hosts file
    --known-hosts-data=KNOWN-HOSTS-DATA  known_hosts file entries
    --embed-credentials  Embed key and known_hosts in Kopia configuration
    --external  Launch external passwordless SSH command
    --ssh-command="ssh"  SSH command
    --ssh-args=SSH-ARGS  Arguments to external SSH command
    --flat  Use flat directory structure
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.
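    As a sketch only, the sync-to s3 form above could be invoked as follows;
    the bucket name, prefix, and credentials are placeholder values, and per
    the --prefix description the trailing slash makes the prefix behave like
    a directory:

        kopia repository sync-to s3 \
            --bucket=my-kopia-mirror \
            --prefix=mirror/ \
            --access-key=EXAMPLEACCESSKEY \
            --secret-access-key=EXAMPLESECRETKEY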
repository sync-to webdav --url=URL [<flags>]
    Synchronize repository data to another repository in a WebDAV storage
    --url=URL  URL of WebDAV server
    --flat  Use flat directory structure
    --webdav-username=WEBDAV-USERNAME  WebDAV username
    --webdav-password=WEBDAV-PASSWORD  WebDAV password
    --atomic-writes  Assume WebDAV provider implements atomic writes
    --max-download-speed=BYTES_PER_SEC  Limit the download speed.
    --max-upload-speed=BYTES_PER_SEC  Limit the upload speed.

repository throttle get [<flags>]
    Get throttling parameters for a repository
    --json  Output result in JSON format to stdout

repository throttle set [<flags>]
    Set throttling parameters for a repository
    --download-bytes-per-second=DOWNLOAD-BYTES-PER-SECOND  Set the download bytes per second
    --upload-bytes-per-second=UPLOAD-BYTES-PER-SECOND  Set the upload bytes per second
    --read-requests-per-second=READ-REQUESTS-PER-SECOND  Set max reads per second
    --write-requests-per-second=WRITE-REQUESTS-PER-SECOND  Set max writes per second
    --list-requests-per-second=LIST-REQUESTS-PER-SECOND  Set max lists per second
    --concurrent-reads=CONCURRENT-READS  Set max concurrent reads
    --concurrent-writes=CONCURRENT-WRITES  Set max concurrent writes

repository change-password [<flags>]
    Change repository password
    --new-password=NEW-PASSWORD  New password

repository validate-provider [<flags>]
    Validates that a repository provider is compatible with Kopia
    --num-storage-connections=NUM-STORAGE-CONNECTIONS  Number of storage connections
    --concurrency-test-duration=CONCURRENCY-TEST-DURATION  Duration of concurrency test
    --put-blob-workers=PUT-BLOB-WORKERS  Number of PutBlob workers
    --get-blob-workers=GET-BLOB-WORKERS  Number of GetBlob workers
    --get-metadata-workers=GET-METADATA-WORKERS  Number of GetMetadata workers

repository upgrade begin [<flags>]
    Begin upgrade.
    --io-drain-timeout=15m0s  Max time it should take all other Kopia clients to drop repository connections
    --status-poll-interval=60s  An advisory polling interval to check for the status of upgrade
    --max-permitted-clock-drift=5m0s  The maximum drift between repository and client clocks

repository upgrade rollback [<flags>]
    Rollback the repository upgrade.
    --force  Force rollback the repository upgrade; this action can cause repository corruption

repository upgrade validate
    Validate the upgraded indexes.

build: 0.19.0

kopia(1)
