In S3, there is no such thing as a folder, only buckets and keys. Keys that share a common prefix are grouped together in the console for your convenience, but under the hood the structure is completely flat. As a result, there is no way to set an ACL on a folder. There are some workarounds, though.
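To make that concrete, here is a minimal sketch (AWS SDK for Java v1; the bucket and key names are placeholders). Uploading an object whose key contains slashes does not create any folder objects, it just stores one flat key:

AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();
// The console will display this under a "path/to/folder/" folder,
// but the only thing stored is the single key "path/to/folder/report.txt".
s3client.putObject("my-bucket", "path/to/folder/report.txt", "hello");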
Use a bucket policy
Depending on which permissions you want to grant and to whom, you can grant access to all keys in a "folder" with a bucket policy. The policy below allows anyone to get any key under path/to/folder/ in the bucket my-bucket. The docs list the possible actions and the possible principals.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SimulateFolderACL",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/path/to/folder/*"]
    }
  ]
}
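If you would rather attach the policy from code than from the console, the SDK exposes setBucketPolicy. A minimal sketch, assuming an AmazonS3 client named s3client and that policyText holds the JSON document above:

// Attach the bucket policy shown above; policyText is assumed to contain that JSON
String policyText = "{ ... the policy JSON above ... }";
s3client.setBucketPolicy("my-bucket", policyText);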
Iterate and apply the ACL to each key
You can also loop through all the keys and apply the ACL to each one directly, as you mentioned, with s3.setObjectAcl(bucketName, key, acl). By listing with the folder prefix, you only get back the keys under that "folder", so you don't have to inspect each key name yourself. After the directory is uploaded, you can do something like this:
// We only want the keys that are in the folder
ListObjectsRequest listObjectsRequest = new ListObjectsRequest()
        .withBucketName("my-bucket")
        .withPrefix("path/to/folder/");
ObjectListing objectListing;

// Iterate over all the matching keys, one page of results at a time
do {
    objectListing = s3client.listObjects(listObjectsRequest);
    for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
        // Apply the ACL to each key; "acl" is whatever ACL you want to set,
        // e.g. CannedAccessControlList.PublicRead
        s3client.setObjectAcl("my-bucket", objectSummary.getKey(), acl);
    }
    listObjectsRequest.setMarker(objectListing.getNextMarker());
} while (objectListing.isTruncated());
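If a canned ACL such as CannedAccessControlList.PublicRead isn't specific enough, you can edit the object's ACL instead of replacing it. A minimal sketch, assuming the same s3client and a key name objectKey; the AllUsers/Read grant is just an illustrative choice:

// Fetch the object's current ACL, add a grant, and write it back
AccessControlList current = s3client.getObjectAcl("my-bucket", objectKey);
current.grantPermission(GroupGrantee.AllUsers, Permission.Read);
s3client.setObjectAcl("my-bucket", objectKey, current);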