There was a problem syncing files that had spaces in their names several years ago. It was fixed by this commit:

commit fdfe03a57d20a1064f04282ee7bc3656041df918
Author: Michal Ludvig <[email protected]>
Date: Sun Nov 16 09:38:51 2008 +0000

    Merge from 0.9.8.x branch, rel 245:
    * S3/S3.py: Escape parameters in strings. Fixes sync to and ls of...
amazon-s3,amazon-cloudfront,s3cmd
s3cmd has recently (as in, this past weekend) fixed this, but the code is not yet upstream. Please try with this branch: https://github.com/mdomsch/s3cmd/tree/bug/content-type With a little more testing, this will get merged into upstream. Then, your command should work exactly as you expect. -mdomsch, s3cmd maintainer...
amazon-web-services,amazon-s3,s3cmd
The upstream github.com/s3tools/s3cmd master branch now has this commit, which does emit all metadata in the info command:

commit 36352241089e9b9661d9ee586dc19085f4bb13c9
Author: Andrew Gaul
Date: Tue Mar 10 04:36:04 2015 -0700

    Emit user metadata in object info
...
amazon-web-services,amazon-s3,s3cmd
In the S3 REST API, when iterating through objects, you often specify a key prefix, which is a left-anchored substring matching all the key values you want returned. When you tell S3 you want foo/, what you are, of course, asking for is foo/*. Perhaps less intuitive is the fact...
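The left-anchored matching described above can be sketched locally; the key names below are made up for illustration, with grep standing in for S3's Prefix filter:

```shell
# S3 treats Prefix as a left-anchored string match on the full key, not a
# directory filter. Simulate it over some hypothetical key names:
keys='foo/a.txt
foo/bar/b.txt
foobar.txt
other/c.txt'

# Prefix "foo/" matches every key that starts with "foo/", including keys in
# nested "subdirectories"; "foobar.txt" is excluded only by the trailing slash.
printf '%s\n' "$keys" | grep '^foo/'
```

Note that a prefix of plain `foo` (no slash) would also match `foobar.txt`, which is exactly the less intuitive behavior the answer is pointing at.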
amazon-web-services,amazon-s3,s3cmd
When first configuring s3cmd you probably ran s3cmd --configure and input your access and secret keys. This saves the credentials to a file, ~/.s3cfg, that looks something like this:

[default]
access_key=your_access_key
...bunch of options...
secret_key=your_secret_key

s3cmd accepts the -c flag to point at a config file. Set up two config files,...
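A sketch of how the two-config setup might look in practice; the file names, bucket names, and paths below are hypothetical examples, not part of the original answer:

```text
# ~/.s3cfg-work and ~/.s3cfg-personal each hold one account's keys,
# in the same [default] format shown above.
#
# Interactive use: pick the account with -c
#   s3cmd -c ~/.s3cfg-work ls s3://work-bucket/
#   s3cmd -c ~/.s3cfg-personal ls s3://personal-bucket/
#
# Crontab entry (use absolute paths; cron's HOME may not be yours):
#   0 3 * * * s3cmd -c /home/username/.s3cfg-work sync /var/backups/ s3://work-bucket/backups/
```

Keeping the credentials in separate files also means each cron job only ever sees the one account it needs.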
amazon-web-services,amazon-s3,cron,cron-task,s3cmd
s3cmd uses a configuration file located at ~/.s3cfg, and it's probably having trouble picking that up. Pass in --config=/home/username/.s3cfg and see if that helps. In any case, s3cmd isn't consistently maintained; the official command-line client (aws-cli) is much better in many ways. Edit: use this as your .sh file, make sure...
amazon-web-services,coldfusion,amazon-s3,coldfusion-9,s3cmd
I agree with the comment by @MarkAKruger that the problem here is latency. Given that ColdFusion can't consistently tell whether a file exists, but it DOES consistently read its up-to-date contents (and consistently fails to read them when they are not available), I've come up with this solution: string function...
amazon-web-services,amazon-s3,s3cmd
It is always a good idea to recheck the object's storage status, and whether a lifecycle rule applies to the bucket; in that case the object may have been transitioned to Glacier. Here, I tried to access a Glacier object using s3cmd commands, and I received uninformative and irrelevant...
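One way to check before downloading is to look for the storage class in the object's metadata. The line below is a hypothetical fragment of `s3cmd info` output, used here only to show the check; the exact output format may differ by version:

```shell
# Hypothetical line from "s3cmd info s3://bucket/key" output for an
# archived object. Checking it first avoids the confusing error on GET:
info_line='Storage class: GLACIER'

if printf '%s\n' "$info_line" | grep -q 'GLACIER'; then
  # The object must be restored (e.g. with s3cmd's restore command)
  # before a normal download will succeed.
  echo 'object is archived: restore it before downloading'
fi
```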
After some searching I found the solution: the failure was due to RequestTimeTooSkewed, meaning the local clock had drifted too far from AWS's. Running s3cmd with the --debug flag revealed the error:

<Error><Code>RequestTimeTooSkewed</Code></Error>

You can fix RequestTimeTooSkewed by installing NTP (apt-get install ntp or yum install ntp), then configuring NTP to use Amazon servers, like so: vim /etc/ntp.conf...
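The /etc/ntp.conf change might look like the fragment below; these pool hostnames are the Amazon Linux defaults, and on EC2 the Amazon Time Sync Service at 169.254.169.123 is another option:

```text
# /etc/ntp.conf fragment: sync against Amazon's NTP pool
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
```

After editing, restart the ntp service so the clock resyncs and the request signatures fall back inside S3's allowed time window.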
Try doing it this way:

while IFS= read -r file; do
    s3upload "$file"
done < <(find nas/cdn/catalog/drawings/ \( ! -regex '.*/\..*' \) -type f)

Setting IFS= only for the read (rather than IFS=; globally) preserves leading/trailing whitespace in each filename, and feeding the loop with process substitution avoids the word-splitting you get from an unquoted $(find ...) in a here-string. ...