Skip site navigation (1)Skip section navigation (2)

FreeBSD Manual Pages

  
 
  

home | help
elftc_set_timestamps(3)
set file timestamps
elftc_timestamp(3)
return the current or environment-provided timestamp
geom_stats_open(3), geom_stats_close(3), geom_stats_resync(3), geom_stats_snapshot_get(3), geom_stats_snapshot_free(3), geom_stats_snapshot_timestamp(3), geom_stats_snapshot_reset(3), geom_stats_snapshot_next(3), gctl_get_handle(3), gctl_ro_param(3), gctl_rw_param(3), gctl_issue(3), gctl_free(3), gctl_dump(3), geom_getxml(3), geom_xml2tree(3), geom_gettree(3), geom_deletetree(3), g_open(3), g_close(3), g_mediasize(3), g_sectorsize(3), g_stripeoffset(3), g_stripesize(3), g_flush(3), g_delete(3), g_device_path(3), g_get_ident(3), g_get_name(3), g_open_by_ident(3), g_providername(3)
userland API library for kernel GEOM subsystem
hx509_lock(3), hx509 lock functions(3)
See the Locking and unlocking certificates and encrypted data. for description and examples
krb5_principal_intro(3)
The principal handing functions. A Kerberos principal is a email address looking string that contains to parts separeted by a @. The later part is the kerbero realm the principal belongs to and the former is a list of 0 or more components. For example lha@SU.SE host/hummel.it.su.se@SU.SE host/admin@H5L.ORG See the library functions here: Heimdal Kerberos 5 principal functions
led(4)
API for manipulating LED's, lamps and other annunciators
pcap-tstamp(7)
packet time stamps in libpcap
pcap_get_tstamp_precision(3)
get the time stamp precision returned in captures
pcap_list_tstamp_types(3), pcap_free_tstamp_types(3)
get a list of time stamp types supported by a capture device, and free that list
pcap_open_dead(3), pcap_open_dead_with_tstamp_precision(3)
open a fake pcap_t for compiling filters or opening a capture for output
pcap_open_offline(3), pcap_open_offline_with_tstamp_precision(3), pcap_fopen_offline(3), pcap_fopen_offline_with_tstamp_precision(3)
open a saved capture file for reading
pcap_set_tstamp_precision(3)
set the time stamp precision returned in captures
pcap_set_tstamp_type(3)
set the time stamp type to be used by a capture device
pcap_tstamp_type_name_to_val(3)
get the time stamp type value corresponding to a time stamp type name
pcap_tstamp_type_val_to_name(3), pcap_tstamp_type_val_to_description(3)
get a name or description for a time stamp type value
pmc.tsc(3)
measurements using the i386 timestamp counter
pmc_set(3)
set the reload count of a sampling PMC
vfs_timestamp(9)
generate current timestamp
SCT_new(3), SCT_new_from_base64(3), SCT_free(3), SCT_LIST_free(3), SCT_get_version(3), SCT_set_version(3), SCT_get_log_entry_type(3), SCT_set_log_entry_type(3), SCT_get0_log_id(3), SCT_set0_log_id(3), SCT_set1_log_id(3), SCT_get_timestamp(3), SCT_set_timestamp(3), SCT_get_signature_nid(3), SCT_set_signature_nid(3), SCT_get0_signature(3), SCT_set0_signature(3), SCT_set1_signature(3), SCT_get0_extensions(3), SCT_set0_extensions(3), SCT_set1_extensions(3), SCT_get_source(3), SCT_set_source(3)
A Certificate Transparency Signed Certificate Timestamp
SCT_print(3), SCT_LIST_print(3), SCT_validation_status_string(3)
Prints Signed Certificate Timestamps in a human-readable way
SCT_validate(3), SCT_LIST_validate(3), SCT_get_validation_status(3)
checks Signed Certificate Timestamps (SCTs) are valid
TS_RESP_CTX_new_ex(3), TS_RESP_CTX_new(3), TS_RESP_CTX_free(3)
Timestamp response context object creation
o2i_SCT_LIST(3), i2o_SCT_LIST(3), o2i_SCT(3), i2o_SCT(3)
decode and encode Signed Certificate Timestamp lists in TLS wire format
openssl-ca(1)
sample minimal CA application
openssl-ts(1)
Time Stamping Authority command
tsget(1)
Time Stamping HTTP/HTTPS client
AG_CustomEventLoop(3)
agar GUI custom event loop example
ALLEGRO_SAMPLE(3)
Allegro 5 API
ALLEGRO_SAMPLE_ID(3)
Allegro 5 API
ALLEGRO_SAMPLE_INSTANCE(3)
Allegro 5 API
BasketLosses(1)
Example of Modeling Losses Across Correlated Assets
BermudanSwaption(1)
Example of using QuantLib
Bonds(1)
Example of bond pricing
CDS(1)
Example of Credit-Default Swap pricing
CVAIRS(1)
Example of Credit Value Adjustment for Interest Rate Swap
CallableBonds(1)
Example of callable-bond pricing
ConvertibleBonds(1)
Example of using QuantLib to value convertible bonds
DiscreteHedging(1)
Example of using QuantLib
EquityOption(1)
Example of using QuantLib to value equity options
FRA(1)
Example of using QuantLib
FcConfigUptoDate(3)
Check timestamps on config files
FittedBondCurve(1)
Example of using QuantLib to fit discount curves
Gaussian1dModels(1)
Example of Gaussian Short Rate Model for Interest Rate Derivatives
Gc.Memprof(3o)
Memprof is a sampling engine for allocated memory words
GlobalOptimizer(1)
Example of Global Optimization Using Different Methods
LatentModel(1)
Example of Modeling Correlated Defaults
MPI_T_category_changed(3)
Get the timestamp indicating the last change to the categories
MPI_T_event_get_timestamp(3)
Returns the timestamp of when the event was initially observed by the implementation
MPI_T_source_get_timestamp(3)
Returns a current timestamp from the source identified by the source_index argument
MarketModels(1), MarketModel(1)
Example of Interst Rate Derivative Pricing
MrmRegisterNames(3)
Registers the values associated with the names referenced in UIL (for example, UIL callback function names or UIL identifier names) "MrmRegisterNames" "uil functions" "MrmRegisterNames"
MrmRegisterNamesInHierarchy(3)
Registers the values associated with the names referenced in UIL within a single hierarchy (for example, UIL callback function names or UIL identifier names) "MrmRegisterNamesIn%Hierarchy" "uil functions" "MrmRegisterNamesIn%Hierarchy"
MulticurveBootstrapping(1)
Example of using QuantLib
MultidimIntegral(1)
Example of Multi-dimensional Numerical Integration
Replication(1)
Example of using QuantLib
Repo(1)
Example of using QuantLib
SAMPLE(3)
Stores sound data. Allegro game programming library
SCT_new(3), SCT_new_from_base64(3), SCT_free(3), SCT_LIST_free(3), SCT_get_version(3), SCT_set_version(3), SCT_get_log_entry_type(3), SCT_set_log_entry_type(3), SCT_get0_log_id(3), SCT_set0_log_id(3), SCT_set1_log_id(3), SCT_get_timestamp(3), SCT_set_timestamp(3), SCT_get_signature_nid(3), SCT_set_signature_nid(3), SCT_get0_signature(3), SCT_set0_signature(3), SCT_set1_signature(3), SCT_get0_extensions(3), SCT_set0_extensions(3), SCT_set1_extensions(3), SCT_get_source(3), SCT_set_source(3)
A Certificate Transparency Signed Certificate Timestamp
SCT_new(3ossl), SCT_new_from_base64(3ossl), SCT_free(3ossl), SCT_LIST_free(3ossl), SCT_get_version(3ossl), SCT_set_version(3ossl), SCT_get_log_entry_type(3ossl), SCT_set_log_entry_type(3ossl), SCT_get0_log_id(3ossl), SCT_set0_log_id(3ossl), SCT_set1_log_id(3ossl), SCT_get_timestamp(3ossl), SCT_set_timestamp(3ossl), SCT_get_signature_nid(3ossl), SCT_set_signature_nid(3ossl), SCT_get0_signature(3ossl), SCT_set0_signature(3ossl), SCT_set1_signature(3ossl), SCT_get0_extensions(3ossl), SCT_set0_extensions(3ossl), SCT_set1_extensions(3ossl), SCT_get_source(3ossl), SCT_set_source(3ossl)
A Certificate Transparency Signed Certificate Timestamp
SCT_print(3), SCT_LIST_print(3), SCT_validation_status_string(3)
Prints Signed Certificate Timestamps in a human-readable way
SCT_print(3ossl), SCT_LIST_print(3ossl), SCT_validation_status_string(3ossl)
Prints Signed Certificate Timestamps in a human-readable way
SCT_validate(3), SCT_LIST_validate(3), SCT_get_validation_status(3)
checks Signed Certificate Timestamps (SCTs) are valid
SCT_validate(3ossl), SCT_LIST_validate(3ossl), SCT_get_validation_status(3ossl)
checks Signed Certificate Timestamps (SCTs) are valid
SDL_GetGammaRamp(3)
Gets the color gamma lookup tables for the display
SDL_SetGammaRamp(3)
Sets the color gamma lookup tables for the display
Smokeping_matchers_CheckLatency(3), Smokeping::matchers::CheckLatency(3)
Edge triggered alert to check latency is under a value for x number of samples
Smokeping_matchers_CheckLoss(3), Smokeping::matchers::CheckLoss(3)
Edge triggered alert to check loss is under a value for x number of samples
Sympa::Message::Plugin::FixEncoding(3Sympa)
Example module for message hook to correct charset and encoding of messages
TS_RESP_CTX_new_ex(3ossl), TS_RESP_CTX_new(3ossl), TS_RESP_CTX_free(3ossl)
Timestamp response context object creation
Tcl_ZlibAdler32.tcl86(3), Tcl_ZlibAdler32(3), Tcl_ZlibCRC32(3), Tcl_ZlibDeflate(3), Tcl_ZlibInflate(3), Tcl_ZlibStreamChecksum(3), Tcl_ZlibStreamClose(3), Tcl_ZlibStreamEof(3), Tcl_ZlibStreamGet(3), Tcl_ZlibStreamGetCommandName(3), Tcl_ZlibStreamInit(3), Tcl_ZlibStreamPut(3)
compression and decompression functions
Tcl_ZlibAdler32.tcl90(3), Tcl_ZlibAdler32(3), Tcl_ZlibCRC32(3), Tcl_ZlibDeflate(3), Tcl_ZlibInflate(3), Tcl_ZlibStreamChecksum(3), Tcl_ZlibStreamClose(3), Tcl_ZlibStreamEof(3), Tcl_ZlibStreamGet(3), Tcl_ZlibStreamGetCommandName(3), Tcl_ZlibStreamInit(3), Tcl_ZlibStreamPut(3)
compression and decompression functions
XF86VidModeQueryExtension(3), XF86VidModeQueryVersion(3), XF86VidModeSetClientVersion(3), XF86VidModeGetModeLine(3), XF86VidModeGetAllModeLines(3), XF86VidModeAddModeLine(3), XF86VidModeDeleteModeLine(3), XF86VidModeModModeLine(3), XF86VidModeValidateModeLine(3), XF86VidModeSwitchMode(3), XF86VidModeSwitchToMode(3), XF86VidModeLockModeSwitch(3), XF86VidModeGetMonitor(3), XF86VidModeGetViewPort(3), XF86VidModeSetViewPort(3), XF86VidModeGetDotClocks(3), XF86VidModeGetGamma(3), XF86VidModeSetGamma(3), XF86VidModeGetGammaRamp(3), XF86VidModeSetGammaRamp(3), XF86VidModeGetGammaRampSize(3), XF86VidModeGetPermissions(3)
Extension library for the XFree86-VidMode X extension
XcmsTekHVCQueryMaxC(3), XcmsTekHVCQueryMaxV(3), XcmsTekHVCQueryMaxVC(3), XcmsTekHVCQueryMaxVSamples(3), XcmsTekHVCQueryMinV(3)
obtain the TekHVC coordinates
XmGetDragContext(3)
A Drag and Drop function that retrieves the DragContext widget ID associated with a timestamp "XmGetDragContext" "Drag and Drop functions" "XmGetDragContext"
XtLastEventProcessed(3), XtLastTimestampProcessed(3)
last event, last timestamp processed
_Users_Julie_Documents_Boulot_MyGitHub_lapack_CBLAS_examples_(3), CBLAS/examples(3)
Directory Reference
aafire(1), aainfo(1), aasavefont(1), aatest(1)
aalib example programs
addts(1)
add timestamps at the beginning of each line
adjust_sample(3)
Alters the parameters of a sample while it is playing. Allegro game programming library
afGetFrameCount(3), afGetTrackBytes(3), afGetDataOffset(3)
get the total sample frame count, length of audio track in bytes, offset of the audio track for a track in an audio file
afInitSampleFormat(3), afInitByteOrder(3), afInitChannels(3), afInitRate(3)
initialize audio data format for a track in an audio file setup
afReadFrames(3)
read sample frames from a track in an audio file
afSeekFrame(3), afTellFrame(3)
update or access the current sample frame position for a track in an audio file
afSetVirtualByteOrder(3), afSetVirtualChannels(3), afSetVirtualPCMMapping(3), afSetVirtualSampleFormat(3)
set the virtual data format for a track in an audio file
afWriteFrames(3)
write sample frames to a track in an audio file
akaiextract(1)
Extract audio samples from an AKAI media or AKAI disk image file
al_attach_sample_instance_to_mixer(3)
Allegro 5 API
al_attach_sample_instance_to_voice(3)
Allegro 5 API
al_create_sample(3)
Allegro 5 API
al_create_sample_instance(3)
Allegro 5 API
al_destroy_sample(3)
Allegro 5 API
al_destroy_sample_instance(3)
Allegro 5 API
al_detach_sample_instance(3)
Allegro 5 API
al_get_audio_stream_played_samples(3)
Allegro 5 API
al_get_bitmap_samples(3)
Allegro 5 API
al_get_new_bitmap_samples(3)
Allegro 5 API
al_get_sample(3)
Allegro 5 API
al_get_sample_channels(3)
Allegro 5 API
al_get_sample_data(3)
Allegro 5 API
al_get_sample_depth(3)
Allegro 5 API
al_get_sample_frequency(3)
Allegro 5 API
al_get_sample_instance_attached(3)
Allegro 5 API
al_get_sample_instance_channels(3)
Allegro 5 API
al_get_sample_instance_depth(3)
Allegro 5 API
al_get_sample_instance_frequency(3)
Allegro 5 API
al_get_sample_instance_gain(3)
Allegro 5 API
al_get_sample_instance_length(3)
Allegro 5 API
al_get_sample_instance_pan(3)
Allegro 5 API
al_get_sample_instance_playing(3)
Allegro 5 API
al_get_sample_instance_playmode(3)
Allegro 5 API
al_get_sample_instance_position(3)
Allegro 5 API
al_get_sample_instance_speed(3)
Allegro 5 API
al_get_sample_instance_time(3)
Allegro 5 API
al_get_sample_length(3)
Allegro 5 API
al_load_sample(3)
Allegro 5 API
al_load_sample_f(3)
Allegro 5 API
al_lock_sample_id(3)
Allegro 5 API
al_play_sample(3)
Allegro 5 API
al_play_sample_instance(3)
Allegro 5 API
al_register_sample_loader(3)
Allegro 5 API
al_register_sample_loader_f(3)
Allegro 5 API
al_register_sample_saver(3)
Allegro 5 API
al_register_sample_saver_f(3)
Allegro 5 API
al_reserve_samples(3)
Allegro 5 API
al_save_sample(3)
Allegro 5 API
al_save_sample_f(3)
Allegro 5 API
al_set_new_bitmap_samples(3)
Allegro 5 API
al_set_sample(3)
Allegro 5 API
al_set_sample_instance_channel_matrix(3)
Allegro 5 API
al_set_sample_instance_gain(3)
Allegro 5 API
al_set_sample_instance_length(3)
Allegro 5 API
al_set_sample_instance_pan(3)
Allegro 5 API
al_set_sample_instance_playing(3)
Allegro 5 API
al_set_sample_instance_playmode(3)
Allegro 5 API
al_set_sample_instance_position(3)
Allegro 5 API
al_set_sample_instance_speed(3)
Allegro 5 API
al_set_shader_sampler(3)
Allegro 5 API
al_stop_sample(3)
Allegro 5 API
al_stop_sample_instance(3)
Allegro 5 API
al_stop_samples(3)
Allegro 5 API
al_unlock_sample_id(3)
Allegro 5 API
allocate_voice(3)
Allocates a sound card voice for a sample. Allegro game programming library
ampache(1), Ampache(1)
is a Web-based Audio file manager
ampctl(1)
control radio amplifiers
ampctld(1)
TCP amplifier control daemon
ampgsql(8)
Amanda Application to interface with PostgreSQL
arbtt-capture(1)
collect data samples for arbtt
arbtt-dump(1)
dumps arbtt data samples
arbtt-import(1)
imports dumped arbtt data samples
arbtt-stats(1)
generate statistics from the arbtt data samples
astscript-psf-stamp(1)
Make normalized stamp with other sources masked
aubiocut(1)
a command line tool to slice sound files at onset or beat timestamps
barman-cloud-backup(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ { { -z | --gzip } | { -j | --bzip2 } | --snappy } ] [ { -h | --host } HOST ] [ { -p | --port } PORT ] [ { -U | --user } USER ] [ { -d | --dbname } DBNAME ] [ { -n | --name } BACKUP_NAME ] [ { -J | --jobs } JOBS ] [ { -S | --max-archive-size } MAX_ARCHIVE_SIZE ] [ --immediate-checkpoint ] [ --min-chunk-size MIN_CHUNK_SIZE ] [ --max-bandwidth MAX_BANDWIDTH ] [ --snapshot-instance SNAPSHOT_INSTANCE ] [ --snapshot-disk NAME ] [ --snapshot-zone GCP_ZONE ] [ -snapshot-gcp-project GCP_PROJECT ] [ --tags TAG [ TAG ... ] ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { -e | --encryption } { AES256 | aws:kms } ] [ --sse-kms-key-id SSE_KMS_KEY_ID ] [ --aws-region AWS_REGION ] [ --aws-await-snapshots-timeout AWS_AWAIT_SNAPSHOTS_TIMEOUT ] [ --aws-snapshot-lock-mode { compliance | governance } ] [ --aws-snapshot-lock-duration DAYS ] [ --aws-snapshot-lock-cool-off-period HOURS ] [ --aws-snapshot-lock-expiration-date DATETIME ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --encryption-scope ENCRYPTION_SCOPE ] [ --azure-subscription-id AZURE_SUBSCRIPTION_ID ] [ --azure-resource-group AZURE_RESOURCE_GROUP ] [ --gcp-project GCP_PROJECT ] [ --kms-key-name KMS_KEY_NAME ] [ --gcp-zone GCP_ZONE ] DESTINATION_URL SERVER_NAME Description The barman-cloud-backup script is used to create a local backup of a Postgres server and transfer it to a supported cloud provider, bypassing the Barman server. It can also be utilized as a hook script for copying Barman backups from the Barman server to one of the supported clouds (post_backup_retry_script). This script requires read access to PGDATA and tablespaces, typically run as the postgres user. When used on a Barman server, it requires read access to the directory where Barman backups are stored. If --snapshot- arguments are used and snapshots are supported by the selected cloud provider, the backup will be performed using snapshots of the specified disks (--snapshot-disk). The backup label and metadata will also be uploaded to the cloud. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. IMPORTANT: 0.0 3.5 The cloud upload may fail if any file larger than the configured --max-archive-size is present in the data directory or tablespaces. However, Postgres files up to 1GB are always allowed, regardless of the --max-archive-size setting. Parameters 0.0 SERVER_NAME Name of the server to be backed up. DESTINATION_URL URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -z / --gzip gzip-compress the backup while uploading to the cloud (should not be used with python < 3.2). -j / --bzip2 bzip2-compress the backup while uploading to the cloud (should not be used with python < 3.3). --snappy snappy-compress the backup while uploading to the cloud (requires optional python-snappy library). -h / --host Host or Unix socket for Postgres connection (default: libpq settings). -p / --port Port for Postgres connection (default: libpq settings). -U / --user User name for Postgres connection (default: libpq settings). -d / --dbname Database name or conninfo string for Postgres connection (default: (dqpostgres(dq). -n / --name A name which can be used to reference this backup in commands such as barman-cloud-restore and barman-cloud-backup-delete. -J / --jobs Number of subprocesses to upload data to cloud storage (default: 2). -S / --max-archive-size Maximum size of an archive when uploading to cloud storage (default: 100GB). --immediate-checkpoint Forces the initial checkpoint to be done as quickly as possible. --min-chunk-size Minimum size of an individual chunk when uploading to cloud storage (default: 5MB for aws-s3, 64KB for azure-blob-storage, not applicable for google-cloud-storage). --max-bandwidth The maximum amount of data to be uploaded per second when backing up to object storages (default: 0 - no limit). --snapshot-instance Instance where the disks to be backed up as snapshots are attached. --snapshot-disk Name of a disk from which snapshots should be taken. --tags Tags to be added to all uploaded files in cloud storage, and/or to snapshots created, if snapshots are used. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). -e / --encryption The encryption algorithm used when storing the uploaded data in S3. Allowed options: 7.0 (bu 2 AES256. (bu 2 aws:kms. --sse-kms-key-id The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms. --aws-region The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. --aws-await-snapshots-timeout The length of time in seconds to wait for snapshots to be created in AWS before timing out (default: 3600 seconds). --aws-snapshot-lock-mode The lock mode for the snapshot. This is only valid if --snapshot-instance and --snapshot-disk are set. Allowed options: 7.0 (bu 2 compliance. (bu 2 governance. --aws-snapshot-lock-duration The lock duration is the period of time (in days) for which the snapshot is to remain locked, ranging from 1 to 36,500. Set either the lock duration or the expiration date (not both). --aws-snapshot-lock-cool-off-period The cooling-off period is an optional period of time (in hours) that you can specify when you lock a snapshot in compliance mode, ranging from 1 to 72. --aws-snapshot-lock-expiration-date The lock duration is determined by an expiration date in the future. It must be at least 1 day after the snapshot creation date and time, using the format YYYY-MM-DDTHH:MM:SS.sssZ. Set either the lock duration or the expiration date (not both). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default. --encryption-scope The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure. --azure-subscription-id The ID of the Azure subscription which owns the instance and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. --azure-resource-group The name of the Azure resource group to which the compute instance and disks defined by the --snapshot-instance and --snapshot-disk arguments belong. Extra options for GCP cloud provider 0.0 --gcp-project GCP project under which disk snapshots should be stored. --snapshot-gcp-project (deprecated) GCP project under which disk snapshots should be stored - replaced by --gcp-project. --kms-key-name The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS. --gcp-zone Zone of the disks from which snapshots should be taken. --snapshot-zone (deprecated) Zone of the disks from which snapshots should be taken - replaced by --gcp-zone
barman-cloud-backup-delete(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup-delete [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -r | --retention-policy } RETENTION_POLICY ] [ { -m | --minimum-redundancy } MINIMUM_REDUNDANCY ] [ { -b | --backup-id } BACKUP_ID] [ --dry-run ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [--batch-size DELETE_BATCH_SIZE] SOURCE_URL SERVER_NAME Description The barman-cloud-backup-delete script is used to delete one or more backups created with the barman-cloud-backup command from cloud storage and to remove the associated WAL files. Backups can be specified for deletion either by their backup ID (as obtained from barman-cloud-backup-list) or by a retention policy. Retention policies mirror those used by the Barman server, deleting all backups that are not required to meet the specified policy. When a backup is deleted, any unused WAL files associated with that backup are also removed. WALs are considered unused if: 0.0 (bu 2 The WALs predate the begin_wal value of the oldest remaining backup. (bu 2 The WALs are not required by any archival backups stored in the cloud. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. IMPORTANT: 0.0 3.5 Each backup deletion involves three separate requests to the cloud provider: one for the backup files, one for the backup.info file, and one for the associated WALs. Deleting by retention policy may result in a high volume of delete requests if a large number of backups are accumulated in cloud storage. Parameters 0.0 SERVER_NAME Name of the server that holds the backup to be deleted. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -b / --backup-id ID of the backup to be deleted -m / --minimum-redundancy The minimum number of backups that should always be available. -r / --retention-policy If specified, delete all backups eligible for deletion according to the supplied retention policy. Syntax: REDUNDANCY value | RECOVERY WINDOW OF value { DAYS | WEEKS | MONTHS } --batch-size The maximum number of objects to be deleted in a single request to the cloud provider. If unset then the maximum allowed batch size for the specified cloud provider will be used (1000 for aws-s3, 256 for azure-blob-storage and 100 for google-cloud-storage). --dry-run Find the objects which need to be deleted but do not delete them. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
barman-cloud-backup-keep(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup-keep [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ { { -r | --release } | { -s | --status } | --target { full | standalone } } ] SOURCE_URL SERVER_NAME BACKUP_ID Description Use this script to designate backups in cloud storage as archival backups, ensuring their indefinite retention regardless of retention policies. This script allows you to mark backups previously created with barman-cloud-backup as archival backups. Once flagged as archival, these backups are preserved indefinitely and are not subject to standard retention policies. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that holds the backup to be kept. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. BACKUP_ID The ID of the backup to be kept. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -r / --release If specified, the command will remove the keep annotation and the backup will be eligible for deletion. -s / --status Print the keep status of the backup. --target Specify the recovery target for this backup. Allowed options are: 7.0 (bu 2 full (bu 2 standalone Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
barman-cloud-backup-list(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup-list [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --format FORMAT ] SOURCE_URL SERVER_NAME Description This script lists backups stored in the cloud that were created using the barman-cloud-backup command. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that holds the backup to be listed. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. --format Output format (console or json). Default console. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
barman-cloud-backup-show(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-backup-show [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --format FORMAT ] SOURCE_URL SERVER_NAME BACKUP_ID Description This script displays detailed information about a specific backup created with the barman-cloud-backup command. The output is similar to the barman show-backup from the %barman show-backup command reference, but it has fewer information. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 BACKUP_ID The ID of the backup. SERVER_NAME Name of the server that holds the backup to be displayed. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. --format Output format (console or json). Default console. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
barman-cloud-check-wal-archive(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-check-wal-archive [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --timeline TIMELINE ] DESTINATION_URL SERVER_NAME Description Verify that the WAL archive destination for a server is suitable for use with a new Postgres cluster. By default, the check will succeed if the WAL archive is empty or if the target bucket is not found. Any other conditions will result in a failure. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that needs to be checked. DESTINATION_URL URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. --timeline The earliest timeline whose WALs should cause the check to fail. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
barman-cloud-restore(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-restore [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --snapshot-recovery-instance SNAPSHOT_RECOVERY_INSTANCE ] [ --snapshot-recovery-zone GCP_ZONE ] [ --aws-region AWS_REGION ] [ --gcp-zone GCP_ZONE ] [ --azure-resource-group AZURE_RESOURCE_GROUP ] [ --tablespace NAME:LOCATION [ --tablespace NAME:LOCATION ... ] ] [ --target-lsn LSN ] [ --target-time TIMESTAMP ] [ --target-tli TLI ] SOURCE_URL SERVER_NAME BACKUP_ID RECOVERY_DESTINATION Description Use this script to restore a backup directly from cloud storage that was created with the barman-cloud-backup command. Additionally, this script can prepare for recovery from a snapshot backup by verifying that attached disks were cloned from the correct snapshots and by downloading the backup label from object storage. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that holds the backup to be restored. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. BACKUP_ID The ID of the backup to be restored. Use auto to have Barman automatically find the most suitable backup for the restore operation. RECOVERY_DESTINATION The path to a directory for recovery. -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. --snapshot-recovery-instance Instance where the disks recovered from the snapshots are attached. --tablespace Tablespace relocation rule. --target-lsn The recovery target lsn, e.g., 3/64000000. --target-time The recovery target timestamp with or without timezone, in the format %Y-%m-%d %H:%M:%S. --target-tli The recovery target timeline. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). --aws-region The name of the AWS region containing the EC2 VM and storage volumes defined by the --snapshot-instance and --snapshot-disk arguments. Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default. --azure-resource-group The name of the Azure resource group to which the compute instance and disks defined by the --snapshot-instance and --snapshot-disk arguments belong. Extra options for GCP cloud provider 0.0 --gcp-zone Zone of the disks from which snapshots should be taken. --snapshot-recovery-zone (deprecated) Zone containing the instance and disks for the snapshot recovery - replaced by --gcp-zone
barman-cloud-wal-archive(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-wal-archive [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ { { -z | --gzip } | { -j | --bzip2 } | --xz | --snappy | --zstd | --lz4 } ] [ --tags TAG [ TAG ... ] ] [ --history-tags HISTORY_TAG [ HISTORY_TAG ... ] ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { -e | --encryption } ENCRYPTION ] [ --sse-kms-key-id SSE_KMS_KEY_ID ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --encryption-scope ENCRYPTION_SCOPE ] [ --max-block-size MAX_BLOCK_SIZE ] [ --max-concurrency MAX_CONCURRENCY ] [ --max-single-put-size MAX_SINGLE_PUT_SIZE ] [ --kms-key-name KMS_KEY_NAME ] DESTINATION_URL SERVER_NAME [ WAL_PATH ] Description The barman-cloud-wal-archive command is designed to be used in the archive_command of a Postgres server to directly ship WAL files to cloud storage. NOTE: 0.0 3.5 If you are using Python 2 or unsupported versions of Python 3, avoid using the compression options --gzip or --bzip2. The script cannot restore gzip-compressed WALs on Python < 3.2 or bzip2-compressed WALs on Python < 3.3. This script enables the direct transfer of WAL files to cloud storage, bypassing the Barman server. Additionally, it can be utilized as a hook script for WAL archiving (pre_archive_retry_script). NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that will have the WALs archived. DESTINATION_URL URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. WAL_PATH The value of the (aq%p(aq keyword (according to archive_command). -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. -z / --gzip gzip-compress the WAL while uploading to the cloud. -j / --bzip2 bzip2-compress the WAL while uploading to the cloud. --xz xz-compress the WAL while uploading to the cloud. --snappy snappy-compress the WAL while uploading to the cloud (requires optional python-snappy library). --zstd zstd-compress the WAL while uploading to the cloud (requires optional zstandard library). --lz4 lz4-compress the WAL while uploading to the cloud (requires optional lz4 library). --tags Tags to be added to archived WAL files in cloud storage. --history-tags Tags to be added to archived history files in cloud storage. Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). -e / --encryption The encryption algorithm used when storing the uploaded data in S3. Allowed options: 7.0 (bu 2 AES256. (bu 2 aws:kms. --sse-kms-key-id The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms. Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default. --encryption-scope The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure. --max-block-size The chunk size to be used when uploading an object via the concurrent chunk method (default: 4MB). --max-concurrency The maximum number of chunks to be uploaded concurrently (default: 1). --max-single-put-size Maximum size for which the Azure client will upload an object in a single request (default: 64MB). If this is set lower than the Postgres WAL segment size after any applied compression then the concurrent chunk upload method for WAL archiving will be used. Extra options for GCP cloud provider 0.0 --kms-key-name The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS
barman-cloud-wal-restore(1) - Barman-cloud Commands Synopsis 0.0 3.5 barman-cloud-wal-restore [ { -V | --version } ] [ --help ] [ { { -v | --verbose } | { -q | --quiet } } ] [ { -t | --test } ] [ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ] [ --endpoint-url ENDPOINT_URL ] [ { -P | --aws-profile } AWS_PROFILE ] [ --profile AWS_PROFILE ] [ --read-timeout READ_TIMEOUT ] [ { --azure-credential | --credential } { azure-cli | managed-identity | default } ] [ --no-partial ] SOURCE_URL SERVER_NAME WAL_NAME WAL_DEST Description The barman-cloud-wal-restore script functions as the restore_command for retrieving WAL files from cloud storage and placing them directly into a Postgres standby server, bypassing the Barman server. This script is used to download WAL files that were previously archived with the barman-cloud-wal-archive command. Disable automatic download of .partial files by calling --no-partial option. IMPORTANT: 0.0 3.5 On the target Postgres node, when pg_wal and the spool directory are on the same filesystem, files are moved via renaming, which is faster than copying and deleting. This speeds up serving WAL files significantly. If the directories are on different filesystems, the process still involves copying and deleting, so there(aqs no performance gain in that case. NOTE: 0.0 3.5 For GCP, only authentication with GOOGLE_APPLICATION_CREDENTIALS env is supported. Parameters 0.0 SERVER_NAME Name of the server that will have WALs restored. SOURCE_URL URL of the cloud source, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder. WAL_NAME The value of the (aq%f(aq keyword (according to restore_command). WAL_DEST The value of the (aq%p(aq keyword (according to restore_command). -V / --version Show version and exit. --help show this help message and exit. -v / --verbose Increase output verbosity (e.g., -vv is more than -v). -q / --quiet Decrease output verbosity (e.g., -qq is less than -q). -t / --test Test cloud connectivity and exit. --cloud-provider The cloud provider to use as a storage backend. Allowed options are: 7.0 (bu 2 aws-s3. (bu 2 azure-blob-storage. (bu 2 google-cloud-storage. --no-partial Do not download partial WAL files Extra options for the AWS cloud provider 0.0 --endpoint-url Override default S3 endpoint URL with the given one. -P / --aws-profile Profile name (e.g. INI section in AWS credentials file). --profile (deprecated) Profile name (e.g. INI section in AWS credentials file)
replaced by --aws-profile. --read-timeout The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds). Extra options for the Azure cloud provider 0.0 --azure-credential / --credential Optionally specify the type of credential to use when authenticating with Azure. If omitted then Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment then the default Azure authentication flow will also be used for Azure Blob Storage. Allowed options are: 7.0 (bu 2 azure-cli. (bu 2 managed-identity. (bu 2 default
beetsconfig(5)
beets configuration file Beets has an extensive configuration system that lets you customize nearly every aspect of its operation. To configure beets, you create a file called config.yaml. The location of the file depends on your platform (type beet config -p to see the path on your system): 0.0 (bu 2 On Unix-like OSes, write ~/.config/beets/config.yaml. (bu 2 On Windows, use %APPDATA%ebeetseconfig.yaml. This is usually in a directory like C:eUserseYoueAppDataeRoaming. (bu 2 On OS X, you can use either the Unix location or ~/Library/Application Support/beets/config.yaml. You can launch your text editor to create or update your configuration by typing beet config -e. (See the X'tty: link #config-cmd'%configX'tty: link' command for details.) It is also possible to customize the location of the configuration file and even use multiple layers of configuration. See %Configuration Location, below. The config file uses X'tty: link https://yaml.org/'%YAMLX'tty: link' syntax. You can use the full power of YAML, but most configuration options are simple key/value pairs. This means your config file will look like this: 0.0 3.5 option: value another_option: foo bigger_option: key: value foo: bar In YAML, you will need to use spaces (not tabs!) to indent some lines. If you have questions about more sophisticated syntax, take a look at the X'tty: link https://yaml.org/'%YAMLX'tty: link' documentation. The rest of this page enumerates the dizzying litany of configuration options available in beets. You might also want to see an %example. 0.0 (bu 2 %Global Options 2.0 (bu 2 %library (bu 2 %directory (bu 2 %plugins (bu 2 %include (bu 2 %pluginpath (bu 2 %ignore (bu 2 %ignore_hidden (bu 2 %replace (bu 2 %path_sep_replace (bu 2 %asciify_paths (bu 2 %art_filename (bu 2 %threaded (bu 2 %format_item (bu 2 %format_album (bu 2 %sort_item (bu 2 %sort_album (bu 2 %sort_case_insensitive (bu 2 %original_date (bu 2 %artist_credit (bu 2 %per_disc_numbering (bu 2 %aunique (bu 2 %sunique (bu 2 %terminal_encoding (bu 2 %clutter (bu 2 %max_filename_length (bu 2 %id3v23 (bu 2 %va_name (bu 2 %UI Options 2.0 (bu 2 %color (bu 2 %colors (bu 2 %terminal_width (bu 2 %length_diff_thresh (bu 2 %import (bu 2 %Importer Options 2.0 (bu 2 %write (bu 2 %copy (bu 2 %move (bu 2 %link (bu 2 %hardlink (bu 2 %reflink (bu 2 %resume (bu 2 %incremental (bu 2 %incremental_skip_later (bu 2 %from_scratch (bu 2 %quiet (bu 2 %quiet_fallback (bu 2 %none_rec_action (bu 2 %timid (bu 2 %log (bu 2 %default_action (bu 2 %languages (bu 2 %ignored_alias_types (bu 2 %detail (bu 2 %group_albums (bu 2 %autotag (bu 2 %duplicate_keys (bu 2 %duplicate_action (bu 2 %duplicate_verbose_prompt (bu 2 %bell (bu 2 %set_fields (bu 2 %singleton_album_disambig (bu 2 %MusicBrainz Options 2.0 (bu 2 %enabled (bu 2 %searchlimit (bu 2 %extra_tags (bu 2 %genres (bu 2 %external_ids (bu 2 %Autotagger Matching Options 2.0 (bu 2 %max_rec (bu 2 %preferred (bu 2 %ignored (bu 2 %required (bu 2 %ignored_media (bu 2 %ignore_data_tracks (bu 2 %ignore_video_tracks (bu 2 %Path Format Configuration (bu 2 %Configuration Location 2.0 (bu 2 %Environment Variable (bu 2 %Command-Line Option (bu 2 %Default Location (bu 2 %Example
benchmark_timestamp(3), benchmark_timestamp (3)
Sample a timekeeping source
biosig2gdf(1)
converts different biomedical signal file formats into a simplified version of GDF, and can stream the result to stdout. This is useful for reading the data by an unnamed pipe into different programming environment, while a simplified parser can be used to decode the data. The conversion performs a conversion to GDFv3 format, where all channels have the same data type and the same sampling rate
bl_vcf_get_sample_ids(3), bl_vcf_get_sample_ids()(3)
Extract sample IDs from a VCF header
bl_vcf_read_ss_call(3), bl_vcf_read_ss_call()(3)
Read a single-sample VCF call
bl_vcf_write_ss_call(3), bl_vcf_write_ss_call()(3)
Write a single-sample VCF call
bouncingcow(6)
a happy cow on a trampoline in 3D. Moo
brrrip(1)
rip SNES BRR sound samples
clfmerge(1)
merge Common-Log Format web logs based on time-stamps
cmemit(1)
sample sequences from a covariance model
convert(1)
convert between image formats as well as resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more
courierperlfilter(8)
Sample Perl-based mail filter
create_sample(3)
Constructs a new sample structure of the specified type. Allegro game programming library
csg_resample(1)
Part of the VOTCA package
cual(6), Cual(6)
Cuyo Animation Language Cual is the main language used to describe the animations in cuyo. Strictly speaking it's the stuff between the << >> brackets in the level description files (xxx.ld). On the other hand this man page aims at being a complete description of how to write levels for cuyo. But it's still under construction. See the file "example.ld" to get an idea of how the rest of the level description works. There's also a bit of example Cual code in "example.ld". And of course, all the existing levels are examples. Note that Cual is probably still very buggy. So if strange things happen and you're sure it's not your fault, tell me (cuyo@karimmi.de)
cxGetDirDate(3)
Returns the timestamp of the specified directory
cxGetFileDate(3)
Returns the timestamp of the specified file
cxGetFsNodeDate(3)
Returns the timestamp of the specified node
cxSetDirDate(3)
Set the timestamp of the specified directory
cxSetFileDate(3)
Sets the timestamp of the specified file
cxSetFsNodeDate(3)
Sets the timestamp of the specified node
d11amp(1)
simple MP3 player
dc_datetime_gmtime(3)
convert an timestamp to GMT date and time
dc_datetime_localtime(3)
convert an timestamp to local date and time
dc_datetime_mktime(3)
convert an local date and time to a timestamp
dc_datetime_now(3)
return the current integral timestamp
dc_parser_samples_foreach(3)
iterate over samples taken during a dive
destroy_sample(3)
Destroys a sample structure when you are done with it. Allegro game programming library
digi_recorder(3)
Hook notifying you when a new sample buffer becomes available. Allegro game programming library
django-admin(1)
Utility script for the Django web framework django-admin is Django(aqs command-line utility for administrative tasks. This document outlines all it can do. In addition, manage.py is automatically created in each Django project. It does the same thing as django-admin but also sets the X'tty: link #envvar-DJANGO_SETTINGS_MODULE'%DJANGO_SETTINGS_MODULEX'tty: link' environment variable so that it points to your project(aqs settings.py file. The django-admin script should be on your system path if you installed Django via pip. If it(aqs not in your path, ensure you have your virtual environment activated. Generally, when working on a single Django project, it(aqs easier to use manage.py than django-admin. If you need to switch between multiple Django settings files, use django-admin with X'tty: link #envvar-DJANGO_SETTINGS_MODULE'%DJANGO_SETTINGS_MODULEX'tty: link' or the %--settings command line option. The command-line examples throughout this document use django-admin to be consistent, but any example can use manage.py or python -m django just as well
dnsjit.input.zero(3)
Dummy layer to example.input.zero
dnsjit.output.null(3)
Dummy layer to example.output.null
downsample-fits(1)
Scale down a FITS image
drumkv1(1)
an old-school drum-kit sampler
dupfilter(8)
Sample Courier mail filter
dwgadd(5), dwgadd.5(5)
Format of the LibreDWG dwgadd example input-file
ecasound(1)
sample editor, multitrack recorder, fx-processor, etc
encapsulate(1)
multiplex several channels over a single socket with sampling of remote process exit status, and provide conversation termination without closing the socket. netpipes 4.2
engrampa(1), Engrampa(1)
Archive Manager for MATE
explain_futimens_or_die(3)
change file timestamps and report errors require_index { "change file timestamps with nanosecond precision and report errors" }
explain_futimes_or_die(3)
change file timestamps and report errors require_index { "change file timestamps and report errors" }
explain_futimesat_or_die(3)
change timestamps of a file
explain_lutimes_or_die(3)
modify file timestamps and report errors require_index { "modify file timestamps and report errors" }
explain_utimens_or_die(3)
change file timestamps and report errors require_index { "change file last access and modification times and report errors" }
explain_utimensat_or_die(3)
change file timestamps and report errors require_index { "change file timestamps with nanosecond precision and report errors" }
exsample(3)
Playing digital samples. Allegro game programming library
ffmpeg-resampler(1)
FFmpeg Resampler
fiberlamp(6)
Fiber Optic Lamp
fido_bio_enroll_new(3), fido_bio_enroll_free(3), fido_bio_enroll_last_status(3), fido_bio_enroll_remaining_samples(3)
FIDO2 biometric enrollment API
fido_bio_info_new(3), fido_bio_info_free(3), fido_bio_info_type(3), fido_bio_info_max_samples(3)
FIDO2 biometric sensor information API
filtermailex(5)
filtermail configuration file examples
fixadd(3)
Safe function to add fixed point numbers clamping overflow. Allegro game programming library
fixsub(3)
Safe function to subtract fixed point numbers clamping underflow. Allegro game programming library
flow-tools-examples(1)
Example usage of flow-tools
fntsample(1)
PDF and PostScript font samples generator " macros SAMPLE . ESAMPLE
get_mixer_buffer_length(3)
Returns the number of samples per channel in the mixer buffer. Allegro game programming library
get_sound_input_cap_bits(3)
Checks which audio input sample formats are supported. Allegro game programming library
get_sound_input_cap_rate(3)
Returns the maximum sample frequency for recording. Allegro game programming library
getabstime(1)
Get Current Time Stamp
gig2mono(1)
Converts Gigasampler (.gig) files from stereo to mono
gig2stereo(1)
Converts Gigasampler (.gig) files from mono pairs to true stereo
gigdump(1)
List information about a Gigasampler (.gig) file
gigextract(1)
Extract samples from Gigasampler (.gig) files
gigmerge(1)
Merges several Gigasampler (.gig) files to one Gigasampler file
git-ignore-io(1)
Get sample gitignore file
git-stamp(1)
Stamp the last commit message
gluLoadSamplingMatrices(3), "gluLoadSamplingMatrices(3)
load NURBS sampling and culling matrices
gmx-make_edi(1)
Generate input files for essential dynamics sampling
gmx-wham(1)
Perform weighted histogram analysis after umbrella sampling
gramps(1)
Genealogical Research and Analysis Management Programming System. rst2man-indent-level 0 1 rstReportMargin $1
gsl-randist(1)
generate random samples from various distributions
gtouch(1), touch(1)
change file timestamps
hmmemit(1)
sample sequences from a profile
http-get(1)
simple http client and siod example program
icedax(1)
a sampling utility that dumps CD audio data into wav sound files
input_samplebase(3), input_samplebase (3)
Change coordinate system origo for a device
inputanalog_filter(3), inputanalog_filter (3)
Change filtering / sampling options for a single analog input source
iv_examples(3)
ivykis examples
jack_monitor_client(1)
The JACK Audio Connection Kit example client
jack_samplerate(1)
JACK toolkit client to print current samplerate
jack_showtime(1)
The JACK Audio Connection Kit example client
jack_simple_client(1)
The JACK Audio Connection Kit example client
jack_unload(1)
The JACK Audio Connection Kit example client
jags(1)
just another gibbs sampler
jclient.pl(1)
sample client for module mode (perl version)
jcontrol(1)
a sample module client written in C
kitty(1)
The fast, feature rich terminal emulator 0.0 3.5 kitty [options] [program-to-run ...] Run the kitty terminal emulator. You can also specify the program to run inside kitty as normal arguments following the options. For example: kitty --hold sh -c (dqecho hello, world(dq For comprehensive documentation for kitty, please see: %https://sw.kovidgoyal.net/kitty/
krb5_principal_intro(3)
The principal handing functions. A Kerberos principal is a email address looking string that contains two parts separated by . The second part is the kerberos realm the principal belongs to and the first is a list of 0 or more components. For example lha@SU.SE host/hummel.it.su.se@SU.SE host/admin@H5L.ORG See the library functions here: Heimdal Kerberos 5 principal functions
ldnsd(1)
simple daemon example code
libowfat_scan_iso8601(3), scan_iso8601(3)
parse an ISO-8601 timestamp
home | help