
FreeBSD Manual Pages


s3cmd(1)		    General Commands Manual		      s3cmd(1)

NAME
       s3cmd - tool for managing Amazon S3 storage space and Amazon CloudFront
       content delivery network

SYNOPSIS
       s3cmd [OPTIONS] COMMAND [PARAMETERS]


DESCRIPTION
       s3cmd is a command line client for copying files to/from Amazon S3
       (Simple Storage Service) and performing other related tasks, for
       instance creating and removing buckets, listing objects, etc.

       s3cmd can perform several actions, specified by the following commands.

       s3cmd mb	s3://BUCKET
	      Make bucket

       s3cmd rb	s3://BUCKET
	      Remove bucket

       s3cmd ls	[s3://BUCKET[/PREFIX]]
	      List objects or buckets

       s3cmd la
	      List all objects in all buckets

       s3cmd put FILE [FILE...]	s3://BUCKET[/PREFIX]
	      Put file into bucket

       s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
	      Get file from bucket

       s3cmd del s3://BUCKET/OBJECT
	      Delete file from bucket

       s3cmd rm	s3://BUCKET/OBJECT
	      Delete file from bucket (alias for del)

       s3cmd restore s3://BUCKET/OBJECT
	      Restore file from	Glacier	storage

       s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
	      Synchronize a directory tree to or from S3 (checks file
	      freshness using size and MD5 checksum, unless overridden by
	      options; see below)

       s3cmd du	[s3://BUCKET[/PREFIX]]
	      Disk usage by buckets

       s3cmd info s3://BUCKET[/OBJECT]
	      Get various information about Buckets or Files

       s3cmd cp	s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
	      Copy object

       s3cmd modify s3://BUCKET1/OBJECT
	      Modify object metadata

       s3cmd mv	s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
	      Move object

       s3cmd setacl s3://BUCKET[/OBJECT]
	      Modify Access control list for Bucket or Files

       s3cmd setpolicy FILE s3://BUCKET
	      Modify Bucket Policy

       s3cmd delpolicy s3://BUCKET
	      Delete Bucket Policy

       s3cmd setcors FILE s3://BUCKET
	      Modify Bucket CORS

       s3cmd delcors s3://BUCKET
	      Delete Bucket CORS

       s3cmd payer s3://BUCKET
	      Modify Bucket Requester Pays policy

       s3cmd multipart s3://BUCKET [Id]
	      Show multipart uploads

       s3cmd abortmp s3://BUCKET/OBJECT	Id
	      Abort a multipart	upload

       s3cmd listmp s3://BUCKET/OBJECT Id
	      List parts of a multipart	upload

       s3cmd accesslog s3://BUCKET
	      Enable/disable bucket access logging

       s3cmd sign STRING-TO-SIGN
	      Sign arbitrary string using the secret key

       s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
	      Sign an S3 URL to	provide	limited	public access with expiry

       s3cmd fixbucket s3://BUCKET[/PREFIX]
	      Fix invalid file names in	a bucket

       s3cmd expire s3://BUCKET
	      Set or delete expiration rule for	the bucket

       s3cmd setlifecycle FILE s3://BUCKET
	      Upload a lifecycle policy	for the	bucket

       s3cmd getlifecycle s3://BUCKET
	      Get a lifecycle policy for the bucket

       s3cmd dellifecycle s3://BUCKET
	      Remove a lifecycle policy	for the	bucket
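
       A typical first session, assuming hypothetical bucket and file names,
       might look like:

```shell
# Create a bucket, upload a file, list it, fetch it back, then clean up.
# "my-example-bucket" and "report.pdf" are placeholder names.
s3cmd mb s3://my-example-bucket
s3cmd put report.pdf s3://my-example-bucket/docs/
s3cmd ls s3://my-example-bucket/docs/
s3cmd get s3://my-example-bucket/docs/report.pdf report-copy.pdf
s3cmd del s3://my-example-bucket/docs/report.pdf
s3cmd rb s3://my-example-bucket
```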

       Commands	for static WebSites configuration

       s3cmd ws-create s3://BUCKET
	      Create Website from bucket

       s3cmd ws-delete s3://BUCKET
	      Delete Website

       s3cmd ws-info s3://BUCKET
	      Info about Website

       Commands	for CloudFront management

       s3cmd cflist
	      List CloudFront distribution points

       s3cmd cfinfo [cf://DIST_ID]
	      Display CloudFront distribution point parameters

       s3cmd cfcreate s3://BUCKET
	      Create CloudFront	distribution point

       s3cmd cfdelete cf://DIST_ID
	      Delete CloudFront	distribution point

       s3cmd cfmodify cf://DIST_ID
	      Change CloudFront	distribution point parameters

       s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
	      Display CloudFront invalidation request(s) status

OPTIONS
       Some of the options below can have their default values set in the
       s3cmd config file (by default $HOME/.s3cfg).  As it is a simple text
       file, feel free to open it with your favorite text editor and make any
       changes you like.
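
       A minimal config file sketch (key names as used by s3cmd; the
       credential values are placeholders):

```
# ~/.s3cfg -- minimal example; replace the placeholder credentials
[default]
access_key = AKIAEXAMPLEKEY
secret_key = exampleSecretKey123
use_https = True
```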

       -h, --help
	      show this help message and exit

       --configure
	      Invoke interactive (re)configuration tool.  Optionally use as
	      '--configure s3://some-bucket' to test access to a specific
	      bucket instead of attempting to list them all.

       -c FILE, --config=FILE
	      Config file name.  Defaults to $HOME/.s3cfg

       --dump-config
	      Dump current configuration after parsing config files and
	      command line options and exit.

       --access_key=ACCESS_KEY
	      AWS Access Key

       --secret_key=SECRET_KEY
	      AWS Secret Key

       --access_token=ACCESS_TOKEN
	      AWS Access Token

       -n, --dry-run
	      Only show	what should be uploaded	or downloaded but don't	 actu-
	      ally do it. May still perform S3 requests	to get bucket listings
	      and other	information though (only for file transfer commands)

       -s, --ssl
	      Use HTTPS	connection when	communicating with S3.	(default)

       --no-ssl
	      Don't use HTTPS.

       -e, --encrypt
	      Encrypt files before uploading to	S3.

       --no-encrypt
	      Don't encrypt files.

       -f, --force
	      Force overwrite and other	dangerous operations.

       --continue
	      Continue getting a partially downloaded file (only for [get]
	      command).

       --continue-put
	      Continue uploading partially uploaded files or multipart upload
	      parts.  Restarts parts/files that don't have matching size and
	      md5.  Skips files/parts that do.  Note: md5sum checks are not
	      always sufficient to check (part) file equality.  Enable this at
	      your own risk.

       --upload-id=UPLOAD_ID
	      UploadId for Multipart Upload, in case you want to continue an
	      existing upload (equivalent to --continue-put) and there are
	      multiple partial uploads.  Use s3cmd multipart [URI] to see what
	      UploadIds are associated with the given URI.

       --skip-existing
	      Skip over files that exist at the destination (only for [get]
	      and [sync] commands).

       -r, --recursive
	      Recursive	upload,	download or removal.

       --check-md5
	      Check MD5 sums when comparing files for [sync].  (default)

       --no-check-md5
	      Do not check MD5 sums when comparing files for [sync].  Only
	      size will be compared.  May significantly speed up transfer but
	      may also miss some changed files.

       -P, --acl-public
	      Store objects with ACL allowing read for anyone.

       --acl-private
	      Store objects with default ACL allowing access for you only.

       --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
	      Grant stated permission to a given amazon user.  Permission is
	      one of: read, write, read_acp, write_acp, full_control, all

       --acl-revoke=PERMISSION:EMAIL or USER_CANONICAL_ID
	      Revoke stated permission for a given amazon user.  Permission is
	      one of: read, write, read_acp, write_acp, full_control, all

       -D NUM, --restore-days=NUM
	      Number  of  days	to keep	restored file available	(only for 're-
	      store' command). Default is 1 day.

       --restore-priority=RESTORE_PRIORITY
	      Priority for restoring files from S3 Glacier (only for 'restore'
	      command).  One of: bulk, standard, expedited
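
       For example, to bring an object back from Glacier for a week
       (hypothetical bucket/object names):

```shell
# Restore with a 7-day availability window at standard priority
s3cmd restore --restore-days=7 --restore-priority=standard \
    s3://my-example-bucket/archive/backup.tar.gz
```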

       --delete-removed
	      Delete destination objects with no corresponding source file
	      [sync]

       --no-delete-removed
	      Don't delete destination objects.

       --delete-after
	      Perform deletes AFTER new uploads when delete-removed is enabled
	      [sync]

       --delay-updates
	      *OBSOLETE* Put all updated files into place at end [sync]

       --max-delete=NUM
	      Do not delete more than NUM files. [del] and [sync]

       --limit=NUM
	      Limit number of objects returned in the response body (only for
	      [ls] and [la] commands)

       --add-destination=ADDITIONAL_DESTINATIONS
	      Additional destination for parallel uploads, in addition to last
	      arg.  May be repeated.

       --delete-after-fetch
	      Delete remote objects after fetching to local file (only for
	      [get] and [sync] commands).

       -p, --preserve
	      Preserve	filesystem  attributes	(mode, ownership, timestamps).
	      Default for [sync] command.

       --no-preserve
	      Don't store FS attributes

       --exclude=GLOB
	      Filenames and paths matching GLOB will be excluded from sync

       --exclude-from=FILE
	      Read --exclude GLOBs from FILE

       --rexclude=REGEXP
	      Filenames and paths matching REGEXP (regular expression) will be
	      excluded from sync

       --rexclude-from=FILE
	      Read --rexclude REGEXPs from FILE

       --include=GLOB
	      Filenames and paths matching GLOB will be included even if
	      previously excluded by one of --(r)exclude(-from) patterns

       --include-from=FILE
	      Read --include GLOBs from FILE

       --rinclude=REGEXP
	      Same as --include but uses REGEXP (regular expression) instead
	      of GLOB

       --rinclude-from=FILE
	      Read --rinclude REGEXPs from FILE

       --files-from=FILE
	      Read list of source-file names from FILE.  Use - to read from
	      stdin.
       --region=REGION, --bucket-location=REGION
	      Region to create bucket in.  As of now the regions are:
	      us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1,
	      ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1

       --host=HOSTNAME
	      HOSTNAME:PORT for S3 endpoint (default: s3.amazonaws.com,
	      alternatives such as s3-eu-west-1.amazonaws.com).  You should
	      also set --host-bucket.

       --host-bucket=HOST_BUCKET
	      DNS-style bucket+hostname:port template for accessing a bucket
	      (default: %(bucket)s.s3.amazonaws.com)

       --reduced-redundancy, --rr
	      Store  object  with  'Reduced  redundancy'.  Lower per-GB	price.
	      [put, cp,	mv]

       --no-reduced-redundancy,	--no-rr
	      Store object without 'Reduced redundancy'. Higher	per- GB	price.
	      [put, cp,	mv]

       --storage-class=CLASS
	      Store object with specified CLASS (STANDARD, STANDARD_IA,
	      ONEZONE_IA, INTELLIGENT_TIERING, GLACIER or DEEP_ARCHIVE).
	      [put, cp, mv]

       --access-logging-target-prefix=LOG_TARGET_PREFIX
	      Target prefix for access logs (S3 URI) (for [cfmodify] and
	      [accesslog] commands)

       --no-access-logging
	      Disable access logging (for [cfmodify] and [accesslog] commands)

       --default-mime-type=DEFAULT_MIME_TYPE
	      Default MIME-type for stored objects.  Application default is
	      binary/octet-stream.
       -M, --guess-mime-type
	      Guess MIME-type of files by their extension or mime magic.  Fall
	      back to default MIME-Type as specified by --default-mime-type
	      option.  (default)

       --no-guess-mime-type
	      Don't guess MIME-type and use the default type instead.

       --no-mime-magic
	      Don't use mime magic when guessing MIME-type.

       -m MIME/TYPE, --mime-type=MIME/TYPE
	      Force MIME-type.  Override both --default-mime-type and
	      --guess-mime-type.

       --add-header=NAME:VALUE
	      Add a given HTTP header to the upload request.  Can be used
	      multiple times.  For instance set 'Expires' or 'Cache-Control'
	      headers (or both) using this option.

       --remove-header=NAME
	      Remove a given HTTP header.  Can be used multiple times.  For
	      instance, remove 'Expires' or 'Cache-Control' headers (or both)
	      using this option. [modify]
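
       For instance, to upload a file with caching headers (hypothetical
       file/bucket names):

```shell
# Make browsers cache the object for one hour
s3cmd put --add-header="Cache-Control: max-age=3600" \
    style.css s3://my-example-bucket/static/
```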

       --server-side-encryption
	      Specifies that server-side encryption will be used when putting
	      objects. [put, sync, cp, modify]

       --server-side-encryption-kms-id=KMS_KEY
	      Specifies the key id used for server-side encryption with AWS
	      KMS-Managed Keys (SSE-KMS) when putting objects. [put, sync, cp,
	      modify]
       --encoding=ENCODING
	      Override autodetected terminal and filesystem encoding
	      (character set).  Autodetected: UTF-8

       --add-encoding-exts=EXTENSIONs
	      Add encoding to these comma delimited extensions, i.e.
	      (css,js,html), when uploading to S3

       --verbatim
	      Use the S3 name as given on the command line.  No
	      pre-processing, encoding, etc.  Use with caution!

       --disable-multipart
	      Disable multipart upload on files bigger than
	      --multipart-chunk-size-mb

       --multipart-chunk-size-mb=SIZE
	      Size of each chunk of a multipart upload.  Files bigger than
	      SIZE are automatically uploaded as multithreaded-multipart,
	      smaller files are uploaded using the traditional method.  SIZE
	      is in Mega-Bytes, default chunk size is 15MB, minimum allowed
	      chunk size is 5MB, maximum is 5GB.
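
       As a sketch, uploading a large image with bigger parts (hypothetical
       file and bucket names; a 4096 MB file at 100 MB per chunk is split
       into 41 parts):

```shell
s3cmd put --multipart-chunk-size-mb=100 disk.iso s3://my-example-bucket/images/
```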

       --list-md5
	      Include MD5 sums in bucket listings (only for 'ls' command).

       -H, --human-readable-sizes
	      Print sizes in human readable form (eg 1kB instead of 1234).

       --ws-index=WEBSITE_INDEX
	      Name of index-document (only for [ws-create] command)

       --ws-error=WEBSITE_ERROR
	      Name of error-document (only for [ws-create] command)

       --expiry-date=EXPIRY_DATE
	      Indicates when the expiration rule takes effect.  (only for
	      [expire] command)

       --expiry-days=EXPIRY_DAYS
	      Indicates the number of days after object creation the
	      expiration rule takes effect.  (only for [expire] command)

       --expiry-prefix=EXPIRY_PREFIX
	      Identifying one or more objects with the prefix to which the
	      expiration rule applies.  (only for [expire] command)

       --progress
	      Display progress meter (default on TTY).

       --no-progress
	      Don't display progress meter (default on non-TTY).

       --stats
	      Give some file-transfer stats.

       --enable
	      Enable given CloudFront distribution (only for [cfmodify]
	      command)

       --disable
	      Disable given CloudFront distribution (only for [cfmodify]
	      command)

       --cf-invalidate
	      Invalidate the uploaded files in CloudFront.  Also see [cfinval]
	      command.

       --cf-invalidate-default-index
	      When using Custom Origin and S3 static website, invalidate the
	      default index file.

       --cf-no-invalidate-default-index-root
	      When using Custom Origin and S3 static website, don't invalidate
	      the path to the default index file.

       --cf-add-cname=CNAME
	      Add given CNAME to a CloudFront distribution (only for
	      [cfcreate] and [cfmodify] commands)

       --cf-remove-cname=CNAME
	      Remove given CNAME from a CloudFront distribution (only for
	      [cfmodify] command)

       --cf-comment=COMMENT
	      Set COMMENT for a given CloudFront distribution (only for
	      [cfcreate] and [cfmodify] commands)

       --cf-default-root-object=DEFAULT_ROOT_OBJECT
	      Set the default root object to return when no object is
	      specified in the URL.  Use a relative path, i.e.
	      default/index.html instead of /default/index.html or
	      s3://bucket/default/index.html (only for [cfcreate] and
	      [cfmodify] commands)

       -v, --verbose
	      Enable verbose output.

       -d, --debug
	      Enable debug output.

       --version
	      Show s3cmd version (2.1.0) and exit.

       -F, --follow-symlinks
	      Follow symbolic links as if they are regular files

       --cache-file=FILE
	      Cache FILE containing local source MD5 values

       -q, --quiet
	      Silence output on	stdout

       --ca-certs=CA_CERTS_FILE
	      Path to SSL CA certificate FILE (instead of system default)

       --check-certificate
	      Check SSL certificate validity

       --no-check-certificate
	      Do not check SSL certificate validity

       --check-hostname
	      Check SSL certificate hostname validity

       --no-check-hostname
	      Do not check SSL certificate hostname validity

       --signature-v2
	      Use AWS Signature version 2 instead of newer signature methods.
	      Helpful for S3-like systems that don't have AWS Signature v4
	      yet.

       --limitrate=LIMITRATE
	      Limit the upload or download speed to amount bytes per second.
	      Amount may be expressed in bytes, kilobytes with the k suffix,
	      or megabytes with the m suffix

       --no-connection-pooling
	      Disable connection re-use

       --requester-pays
	      Set the REQUESTER PAYS flag for operations

       -l, --long-listing
	      Produce long listing [ls]

       --stop-on-error
	      Stop if an error occurs in transfer.

       --content-disposition=CONTENT_DISPOSITION
	      Provide a Content-Disposition for signed URLs, e.g., "inline;
	      filename=myvideo.mp4"

       --content-type=CONTENT_TYPE
	      Provide a Content-Type for signed URLs, e.g., "video/mp4"
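
       For example, to generate a signed link valid for 24 hours (hypothetical
       bucket and object names; +86400 is a relative offset in seconds, and an
       absolute epoch value works as well):

```shell
# Relative expiry: one day from now
s3cmd signurl s3://my-example-bucket/report.pdf +86400
# Equivalent absolute form using the current epoch
s3cmd signurl s3://my-example-bucket/report.pdf "$(( $(date +%s) + 86400 ))"
```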

EXAMPLES
       One of the most powerful commands of s3cmd is s3cmd sync, used for
       synchronising complete directory trees to or from remote S3 storage.
       To some extent s3cmd put and s3cmd get share a similar behaviour with
       s3cmd sync.
       Basic usage common in backup scenarios is as simple as:
	    s3cmd sync /local/path/ s3://test-bucket/backup/

       This  command  will find	all files under	/local/path directory and copy
       them to corresponding paths under s3://test-bucket/backup on the	remote
       side.  For example:
	    /local/path/file1.ext	  ->  s3://bucket/backup/file1.ext
	    /local/path/dir123/file2.bin  ->  s3://bucket/backup/dir123/file2.bin

       However if the local path doesn't end with a slash the last directory's
       name is used on the remote side as well.	Compare	these with the	previ-
       ous example:
	    s3cmd sync /local/path s3://test-bucket/backup/
       will sync:
	    /local/path/file1.ext	  ->  s3://bucket/backup/path/file1.ext
	    /local/path/dir123/file2.bin  ->  s3://bucket/backup/path/dir123/file2.bin

       To retrieve the files back from S3 use inverted syntax:
	    s3cmd sync s3://test-bucket/backup/	~/restore/
       that will download files:
	    s3://bucket/backup/file1.ext	 ->  ~/restore/file1.ext
	    s3://bucket/backup/dir123/file2.bin	 ->  ~/restore/dir123/file2.bin

       Without	the  trailing slash on source the behaviour is similar to what
       has been	demonstrated with upload:
	    s3cmd sync s3://test-bucket/backup ~/restore/
       will download the files as:
	    s3://bucket/backup/file1.ext	 ->  ~/restore/backup/file1.ext
	    s3://bucket/backup/dir123/file2.bin	 ->  ~/restore/backup/dir123/file2.bin

       All source file names are matched against exclude rules, and those
       that match are then re-checked against include rules to see whether
       they should be excluded or kept in the source list.

       For the purpose of --exclude and --include matching, only the relative
       part of each source path is used.  For instance only path/file1.ext is
       tested against the patterns, not /local/path/file1.ext

       Both --exclude and --include work with  shell-style  wildcards  (a.k.a.
       GLOB).	For  a	greater	 flexibility s3cmd provides Regular-expression
       versions	of the two exclude options named  --rexclude  and  --rinclude.
       The options with	...-from suffix	(eg --rinclude-from) expect a filename
       as an argument. Each line of such a file	is treated as one pattern.

       There is only one set of patterns built from all --(r)exclude(-from)
       options, and similarly one set for the include variants.  Any file
       excluded with e.g. --exclude can be put back with a pattern found in an
       --rinclude-from list.

       Run s3cmd with --dry-run to verify that your rules work as expected.
       Use it together with --debug to get detailed information about matching
       file names against exclude and include rules.

       For example, to exclude all files with the ".jpg" extension except
       those beginning with a number, use:

	    --exclude '*.jpg' --rinclude '[0-9].*.jpg'

       To exclude all files except those with the ".jpg" extension, use:

	    --exclude '*' --include '*.jpg'

       To exclude local	directory 'somedir', be	sure to	use a trailing forward
       slash, as such:

	    --exclude 'somedir/'
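
       A combined sketch: preview a sync that transfers only JPEG files
       (hypothetical paths), with --dry-run so nothing is actually copied:

```shell
s3cmd sync --dry-run --exclude '*' --include '*.jpg' \
    /local/photos/ s3://my-example-bucket/photos/
```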

SEE ALSO
       For the most up to date list of options run: s3cmd --help
       For more info about usage, examples and other related info visit
       project homepage at: https://s3tools.org

AUTHOR
       Written by Michal Ludvig and contributors

       Preferred way to get support is our mailing list:
       s3tools-general@lists.sourceforge.net
       or visit the project homepage:
       https://s3tools.org

REPORTING BUGS
       Report bugs to s3tools-bugs@lists.sourceforge.net

COPYRIGHT
       Copyright (C) 2007-2015 TGRMN Software - http://www.tgrmn.com - and
       contributors

LICENSE
       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at your
       option) any later version.  This program is distributed in the hope
       that it will be useful, but WITHOUT ANY WARRANTY; without even the
       implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
       PURPOSE.  See the GNU General Public License for more details.


