Greetings. We have been asked so many times about how to configure cross-account S3 access that we decided to write a blog post about it. Here goes.

What you will need

  • 2 separate AWS accounts
  • 1 S3 bucket in each account
  • Enough privileges on the accounts to create IAM Users & Policies, as well as S3 bucket policies
  • 1 EC2 Linux instance. We recommend using Amazon Linux for this test as it has the CLI pre-installed.

Create the Source and Destination Buckets

You will of course need to create a bucket in the Source account and another in the Destination account. We used the following bucket names; you should do the same, replacing the ‘cloudaxis-’ prefix with your own company name.

  • Source account: cloudaxis-source
  • Destination account: cloudaxis-destination

Create an IAM User

If you do not already have one available, create an IAM user within the Destination account (see Creating an IAM User in Your AWS Account for details). For this example we created a user named S3CopyUser.

Choose Programmatic Access in order to provide the user with an access key. Console sign-in is not necessary for this example.

Create a User Policy

On the page that follows, click the Create Policy button, which will take you to the policy creation page. Click on the JSON tab and enter the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::cloudaxis-destination/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::cloudaxis-destination"
        }
    ]
}
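Since policies are plain JSON, you can lint one locally before pasting it into the console. Below is a small Python sketch that parses a policy document and lists the actions it grants; the embedded policy is a trimmed example in the same shape as the one above, using a hypothetical bucket name my-bucket:

```python
import json

# A trimmed example policy in the same shape as the user policy above.
# "my-bucket" is a hypothetical placeholder bucket name.
policy_text = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"},
    {"Effect": "Allow", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::my-bucket/*"},
    {"Effect": "Allow",
     "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
     "Resource": "arn:aws:s3:::my-bucket"}
  ]
}
"""

policy = json.loads(policy_text)  # raises ValueError on malformed JSON

# Collect every action the policy grants. "Action" may be a single
# string or a list, so normalize before adding to the set.
actions = set()
for stmt in policy["Statement"]:
    acts = stmt["Action"]
    actions.update([acts] if isinstance(acts, str) else acts)

print(sorted(actions))
```

A failed json.loads is exactly the kind of error (a trailing comma, a stray quote) that the console's Review Policy step would otherwise catch for you.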

Click on Review Policy, which will verify the syntax of the policy and bring you to a page where you can finish the creation: give the policy a name (we used cross_account_move_policy) and, since you shouldn’t have any warnings or errors with this policy, click Create Policy.

This User Policy gives the user the following permissions:

  • ListAllMyBuckets – the ability to list all available S3 buckets
  • GetObject – the ability to get an S3 object
  • ListBucket – the ability to list all the objects for the given bucket (in this case the cloudaxis-destination bucket)
  • GetBucketLocation – the ability to get the location of the bucket which is needed before any operations can be performed on a bucket.

Please note that you have only created the policy so far; it has not yet been attached to the S3CopyUser you started creating. So go back to the page or tab where the user was being created and choose Attach Existing Policies Directly from the three choices at the top. Then, in the filter text box below, type “cross_account_move_policy”. You should see your newly created policy; if not, wait a couple of seconds and click the refresh button.

Create a Bucket Policy

The User Policy we created allows the user to perform a limited set of S3 operations, but it is not enough to reach objects in the cloudaxis-source bucket, since that bucket does not belong to the same account. To manage this, we use a bucket policy on the cloudaxis-source bucket, which belongs to the Source account (for cross-account access, both the user’s IAM policy and the bucket policy must allow the request).

If you have never created a Bucket Policy before, you might want to start by looking at how to add a bucket policy. The following policy will allow objects in the cloudaxis-source bucket to be copied or moved by user S3CopyUser from account number 123456789012.

{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAll",
                "Effect": "Allow",
                "Principal": {
                    "AWS": ["arn:aws:iam::123456789012:user/S3CopyUser"]           
                },
                "Action": [
                    "s3:*"
                ],
                "Resource": [
                    "arn:aws:s3:::cloudaxis-source",
                    "arn:aws:s3:::cloudaxis-source/*"
                ]
            }
        ]
 }
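If you prefer generating the policy over hand-editing JSON (handy if you have several buckets or accounts), here is a Python sketch that builds the same document from a few variables. The account ID, user name, and bucket name are placeholders to substitute with your own; the aws s3api put-bucket-policy command noted in the comment is the CLI way to apply the result.

```python
import json

# Placeholder values -- substitute your own Destination account ID,
# IAM user name, and Source bucket name.
account_id = "123456789012"
user_name = "S3CopyUser"
bucket = "cloudaxis-source"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAll",
        "Effect": "Allow",
        "Principal": {"AWS": [f"arn:aws:iam::{account_id}:user/{user_name}"]},
        "Action": ["s3:*"],
        # Both ARNs are needed: the bare bucket ARN covers bucket-level
        # actions such as ListBucket; the /* ARN covers the objects.
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}

# Save the output to a file and apply it from the Source account with:
#   aws s3api put-bucket-policy --bucket cloudaxis-source --policy file://policy.json
print(json.dumps(bucket_policy, indent=2))
```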

To apply this policy, go to the Permissions tab of the cloudaxis-source bucket in the S3 console of the Source account. Click on the Bucket Policy button, enter the above policy, then click on Save.

If you are not comfortable creating your policies in JSON, you can use the AWS Policy Generator. You should also check out the listing of IAM JSON Policy Elements Reference.

Here is a quick rundown of the Bucket Policy:

  • Effect – Will Allow the indicated actions for the given principal. In this case it allows all S3 actions.
  • Principal – Refers to the user, account, service, or other entity that is allowed access. In our case it is the IAM user S3CopyUser from account 123456789012; because it is an ARN (Amazon Resource Name), it points to one specific user. If you want to open the bucket to everyone, you can replace it with an asterisk, like this:
    "Principal": "*"
  • Action – Refers to the S3 actions that are allowed on this bucket. Since it is a wildcard ("s3:*"), all S3 actions will be permitted. You can of course limit this to specific actions by listing them individually if needed:
    "s3:GetObject"
  • Resource – Here we specify the bucket targeted by the policy, cloudaxis-source. The second ARN, ending in /*, covers the objects themselves, in all subfolders of the bucket.

This completes all the configuration work needed for the test.

Connect to your EC2 instance

If you have not already done so, you should launch a Linux instance from the Destination account. The instance will need to have the AWS CLI installed on it. We assume you know how to SSH to an EC2 instance. If not, please see Connecting to Your Linux Instance Using SSH in the AWS documentation.

Configure your CLI with the access keys of S3CopyUser:

$ aws configure
   AWS Access Key ID [None]: AKIAEXAMPLEIDY3HFBDI8
   AWS Secret Access Key [None]: scd6gfdv57jbdjd9+ioj98jDigh6tEXAMPLEKEY
   Default region name [None]: ap-southeast-1 
   Default output format [None]: text
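For reference, aws configure simply writes these values into two plain-text files under ~/.aws. After running it you should find something like this in ~/.aws/credentials:

```ini
[default]
aws_access_key_id = AKIAEXAMPLEIDY3HFBDI8
aws_secret_access_key = scd6gfdv57jbdjd9+ioj98jDigh6tEXAMPLEKEY
```

and this in ~/.aws/config:

```ini
[default]
region = ap-southeast-1
output = text
```

Knowing where these live makes it easy to double-check which identity your CLI is using.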

You are now ready to run some S3 CLI commands. They behave much like standard Linux commands; for a full listing of what is available, see the S3 CLI reference page. To list the objects in our cloudaxis-source bucket, execute the following command:

   aws s3 ls s3://cloudaxis-source

If you get a huge response because you have hundreds or thousands of objects in your bucket, you can pipe the output through the Linux word-count command to see just the number of objects and folders:

   aws s3 ls s3://cloudaxis-source | wc -l
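If you want to post-process the listing rather than just count lines, note that the output of aws s3 ls mixes PRE lines (folder-like prefixes) with object lines of the form date, time, size, key. A small Python sketch of how you might separate the two, using a fabricated sample listing in that layout:

```python
# Fabricated sample output in the layout produced by `aws s3 ls`:
# PRE lines for prefixes, then date / time / size / key for objects.
listing = """\
                           PRE reports/
2023-04-01 10:15:00       2048 filename.txt
2023-04-02 09:30:00        512 notes.md
"""

folders, objects = [], []
for line in listing.splitlines():
    line = line.strip()
    if line.startswith("PRE "):
        folders.append(line.split(maxsplit=1)[1])
    elif line:
        # Key names may contain spaces, so cap the split at 3.
        date, time, size, key = line.split(maxsplit=3)
        objects.append((key, int(size)))

print(len(folders), len(objects))
```

The total of the two counts is what wc -l reports, since every line is either a prefix or an object.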

You can also try copying a single file down to a local folder on your EC2 instance, e.g.:

   aws s3 cp s3://cloudaxis-source/filename.txt .

Copy an object from one account to the other

Now, to allow our user S3CopyUser to write to our cloudaxis-destination S3 bucket, we need to grant additional IAM permissions: specifically, the ability to PUT objects into the cloudaxis-destination bucket. The S3 documentation explains how requests for bucket operations are authorized: the user must first pass the check against the IAM policies, and must then also be allowed on the bucket itself. Since the cloudaxis-destination bucket is in the same AWS account as the IAM user, the user does not need explicit permissions on that bucket. So by default, and in the absence of an explicit DENY, the IAM permissions alone let the user access the bucket.

To allow writing to a bucket, you will need to add the "s3:PutObject" action to the user policy. While we are at it, we also add the cloudaxis-source ARNs to the resource lists, so that the user’s IAM policy explicitly covers both buckets.

Below is the user policy, updated with these changes:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::cloudaxis-source/*",
                "arn:aws:s3:::cloudaxis-destination/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::cloudaxis-source",
                "arn:aws:s3:::cloudaxis-destination"
            ]
        }
    ]
}
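Before attaching the updated policy, you can sanity-check it locally. The sketch below embeds the policy above and does a literal action/resource match over the Allow statements; note this is not real IAM evaluation (no wildcard expansion, no deny handling), just a quick check that the entries you expect are present:

```python
import json

# The updated user policy from above, pasted as a string.
updated_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["s3:ListAllMyBuckets"],
     "Resource": "arn:aws:s3:::*"},
    {"Effect": "Allow",
     "Action": ["s3:GetObject", "s3:PutObject"],
     "Resource": ["arn:aws:s3:::cloudaxis-source/*",
                  "arn:aws:s3:::cloudaxis-destination/*"]},
    {"Effect": "Allow",
     "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
     "Resource": ["arn:aws:s3:::cloudaxis-source",
                  "arn:aws:s3:::cloudaxis-destination"]}
  ]
}
""")

def grants(policy, action, resource):
    """Literal-match check: does some Allow statement list both the
    action and the resource?  A sanity check, not IAM evaluation."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        acts = stmt["Action"]
        acts = [acts] if isinstance(acts, str) else acts
        res = stmt["Resource"]
        res = [res] if isinstance(res, str) else res
        if action in acts and resource in res:
            return True
    return False

print(grants(updated_policy, "s3:PutObject",
             "arn:aws:s3:::cloudaxis-destination/*"))  # True
```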

Updated User Policy Explained

We added the action "s3:PutObject" and resources for the cloudaxis-source bucket. PutObject gives the user the ability to place objects in the destination bucket, while the cloudaxis-source entries give the user the IAM-side permissions on the Source bucket that must accompany the bucket policy we created earlier.

Now let’s see if you can list content in the cloudaxis-destination bucket by executing the following command:

	aws s3 ls s3://cloudaxis-destination | wc -l

or, if the item count is manageable, or you do not mind a lot of file names scrolling over the screen, you can simply do:

	aws s3 ls s3://cloudaxis-destination

Finally, to copy directly from one bucket to the other (use aws s3 mv instead of cp if you want to move the objects):

	aws s3 cp s3://cloudaxis-source/ s3://cloudaxis-destination/ --recursive

We use the --recursive flag to indicate that all files must be copied recursively.

Conclusion

Hopefully this article has helped a few of you out, as the mix of Bucket and IAM policies can often be confusing. If you have any comments or suggestions for more clarification on this article, please send them to support@cloudaxis.com.