This is a general discussion of AWS scripting for S3 bucket and email integration on Linux.
AWS Scripting
Integration between Linux and AWS services takes considerable time to develop, but it can be achieved while applying the principle of least privilege to permissions and removing stored AWS credential keys wherever possible.
I am not providing many examples as my own scripts are tailored to my specific needs, and have taken lengthy periods of time to develop and test. My clients are able to take advantage of this work. I would just like to show some of the points involved.
AI is key to development.
Send Raw Email – smoke test, from Oregon with a verified SES domain and address, to an external verified address.
ses_smoke_test.sh
#!/bin/bash
# ses_smoke_test.sh - Verify that the EC2 instance IAM Role can send via SES
# Usage: ./ses_smoke_test.sh [recipient_email]

# -------------------------
# CONFIG
# -------------------------
MAIL_FROM="admin@YOUR_DOMAIN"   # SES-verified sender
MAIL_TO="${1:-YOU@gmail.com}"   # Default to your verified recipient
REGION="us-west-2"              # Change if your SES region is different

# -------------------------
# Compose raw test email
# -------------------------
RAW_MAIL="$(mktemp)"
{
    echo "From: $MAIL_FROM"
    echo "To: $MAIL_TO"
    echo "Subject: SES Smoke Test Email"
    echo "Content-Type: text/plain; charset=UTF-8"
    echo
    echo "Hello!"
    echo
    echo "This is a SES smoke test from EC2 using IAM role credentials."
    echo "Timestamp: $(date)"
} > "$RAW_MAIL"

# -------------------------
# Send email via SES
# -------------------------
echo "Sending test email to $MAIL_TO..."
if aws ses send-raw-email \
    --region "$REGION" \
    --raw-message Data="$(base64 -w 0 "$RAW_MAIL")"; then
    echo "✅ SES smoke test email sent successfully!"
else
    echo "❌ SES smoke test failed. Check IAM role permissions and SES region."
fi

# -------------------------
# Cleanup
# -------------------------
rm -f "$RAW_MAIL"
Prior to this you need to attach an EC2 instance IAM Role with permissions that allow the email to be sent.
This example covers sending only, not receiving emails and forwarding them. If this smoke test does not work, other scripting will not work either.
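Before running the smoke test, it can save time to confirm on the instance that the attached role is actually being picked up. The following is a small sketch, assuming the AWS CLI is installed on the instance; the region default matches the script above:

```shell
#!/bin/bash
# role_check.sh - confirm the instance is using the attached IAM Role
# (sketch; assumes the AWS CLI is installed on the EC2 instance)

check_role() {
    # Shows the assumed-role ARN the CLI resolved from instance metadata
    aws sts get-caller-identity --query Arn --output text
    # Confirms SES is reachable in the chosen region and shows the send quota
    aws ses get-send-quota --region "${REGION:-us-west-2}"
}
# On the instance, run: check_role
```

If the first command does not print an arn:aws:sts::...:assumed-role/... ARN, the role attachment (or the metadata service) is the problem, not SES.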
As a separate exercise, you can develop scripts that connect to AWS using both the attached IAM Role and the AWS SDK.
We may only have one attached Role, but this can be extended via users and policies. AI can generate these as drop-in policies with the tightest possible security for what you wish to do. It is even possible to access AWS buckets from another provider, but that requires AWS secret credentials; you can still apply least-privilege principles, with the EC2 Role assuming a role whose trust relationships are scoped to the code being used.
We attach managed policies to the IAM Role rather than using inline policies, as they are better managed this way.
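Attaching a customer-managed policy to the role can be done from the CLI as well as the console. A sketch, where the role name and policy ARN are placeholders, not values from this article:

```shell
#!/bin/bash
# attach_policy.sh - attach a customer-managed policy to the EC2 instance role
# ROLE_NAME and POLICY_ARN below are placeholders; substitute your own values

attach_policy() {
    local role_name="$1"
    local policy_arn="$2"
    aws iam attach-role-policy \
        --role-name "$role_name" \
        --policy-arn "$policy_arn"
}
# Example: attach_policy MyEc2Role arn:aws:iam::YOUR_ACCOUNT_ID:policy/SesSendOnly
```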
Here is an example policy:
-->
In this example, YOUR_DOMAIN, e.g. domain.com, is a verified SES domain (and hence has the required DNS records, including DKIM etc.).
admin@YOUR_DOMAIN, such as admin@domain.com, is a verified SES email identity. This is used in the bash shell script as the FROM address.
YOU@gmail.com is the verified SES email address of the person who will receive the email, effectively the TO address.
We previously created YOUR_DOMAIN_BUCKET in Oregon when creating an SES email rule. For example, domain.com.email.
The bucket statement is an addition to the policy so that we can also receive emails into the bucket. For this you would need to have previously created admin@domain.com.
Creating rules in SES can be as simple as choosing to receive domain emails into a nominated bucket. Of course we would want more than this at a later stage, so that we can filter out bad emails from scammers and forward the hopefully good emails to whoever we want, or just to ourselves for Linux system alerts, thus removing any use of Postfix.
<--
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaToSendSHAWNETForward",
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": [
        "arn:aws:ses:us-west-2:YOUR_ACCOUNT_ID:identity/YOUR_DOMAIN",
        "arn:aws:ses:us-west-2:YOUR_ACCOUNT_ID:identity/admin@YOUR_DOMAIN"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "ses:Recipients": [
            "YOU@gmail.com"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_DOMAIN_BUCKET/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
This is a simple example.
Once we wish to filter and forward emails, we have to create Lambda functions in the Oregon region (for Australia). To do that we create an IAM policy for a Lambda spam-filtering function, and a separate IAM policy for a Lambda forwarding function (again in Oregon). Then our SES email rule can place any incoming email into an Oregon bucket, filter it for spam and stop subsequent actions, or invoke the next Lambda function to forward the email.
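The first step of that pipeline, storing incoming domain mail into a bucket, can also be created from the CLI. This is only a sketch of that single step; the rule-set name, domain, and bucket name are placeholders, and it assumes an active receipt rule set already exists in us-west-2:

```shell
#!/bin/bash
# create_receipt_rule.sh - store incoming domain email in an S3 bucket
# Placeholders: default-rule-set, YOUR_DOMAIN, YOUR_DOMAIN_BUCKET

create_store_rule() {
    aws ses create-receipt-rule \
        --region us-west-2 \
        --rule-set-name default-rule-set \
        --rule '{
            "Name": "store-to-s3",
            "Enabled": true,
            "Recipients": ["YOUR_DOMAIN"],
            "Actions": [
                { "S3Action": { "BucketName": "YOUR_DOMAIN_BUCKET" } }
            ]
        }'
}
# Run once: create_store_rule
```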
As my work on this has been of considerable duration, I keep this intellectual property for my clients; it is a process you can develop yourself with AI assistance.
You can then take this further and run a crontab script to process the emails in the bucket(s) for archiving as .eml files. I would be happy to go over these details with a joint developer on projects here in Australia.
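As a sketch of that archiving step (the bucket name and paths are placeholders), a cron job can mirror the bucket to a local directory. SES receipt rules store each message as a raw RFC 822 object with no extension, so renaming the mirrored files with a .eml extension is enough for most mail clients:

```shell
#!/bin/bash
# archive_emails.sh - mirror the SES bucket locally and name objects as .eml
# Placeholders: YOUR_DOMAIN_BUCKET, /data/mail-archive

archive_emails() {
    local bucket="${1:-YOUR_DOMAIN_BUCKET}"
    local dest="${2:-/data/mail-archive}"
    mkdir -p "$dest"
    # Copy only new objects down from the bucket
    aws s3 sync "s3://$bucket/" "$dest/" --region us-west-2
    # Add a .eml extension to any file that does not already have one
    for f in "$dest"/*; do
        [ -f "$f" ] || continue
        case "$f" in *.eml) continue ;; esac
        mv "$f" "$f.eml"
    done
}
# crontab entry (daily at 02:15): 15 2 * * * /usr/local/bin/archive_emails.sh
```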
Use of S3 Bucket Resources
The other aspect we may develop is the use of S3 buckets for resources we do not wish to keep or process on our own web server.
The approach here is: correct permissions in policies attached to the EC2 Role; installation and use of the AWS SDK, thus avoiding the AWS CLI and secret keys; temporary (presigned) URLs so that bots cannot crawl the original file names or retrieve the files; and use of the PHP www.conf file to permit PHP actions from your website using the various Linux directories needed by the scripts you develop.
Here is how Nginx can call a script that lets a website visitor download a file, such as a PDF document. Note that the downloadlimit zone referenced below must be defined with a limit_req_zone directive in the http context of nginx.conf:
location = /download.php {
    limit_req zone=downloadlimit burst=10 nodelay;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/php/download.php;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_pass unix:/run/php-fpm/www.sock;
}
Here is the download.php script (replace the bucket, folder, and bucket region with your own):
<?php
// ---------------------------------
// HARDEN OUTPUT (CRITICAL)
// ---------------------------------
ob_start();
ini_set('display_errors', 0);
error_reporting(0);

// ---------------------------------
// DISABLE SHARED AWS CONFIG FILES
// (prevents open_basedir warnings)
// ---------------------------------
putenv('AWS_SDK_LOAD_CONFIG=0');
// Keep the EC2 metadata service ENABLED so the IAM Role credentials resolve
putenv('AWS_EC2_METADATA_DISABLED=false');

// ---------------------------------
// LOAD AWS SDK
// ---------------------------------
if (!class_exists(\Aws\S3\S3Client::class)) {
    require_once '/var/www/aws-sdk/vendor/autoload.php';
}

use Aws\S3\S3Client;
use Aws\Exception\AwsException;
use Aws\Credentials\CredentialProvider;

// ---------------------------------
// S3 CONFIGURATION
// ---------------------------------
$bucket = 'share.MYDOMAIN';
$subdir = 'documents/';
$region = 'ap-southeast-2';
$expiresInSeconds = 300; // 5 minutes

// Force IAM Role credentials ONLY
$provider = CredentialProvider::instanceProfile();

$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => $region,
    'credentials' => $provider,
]);

// ---------------------------------
// HANDLE DOWNLOAD REQUEST
// ---------------------------------
if (isset($_GET['file'])) {
    $filename = basename($_GET['file']); // sanitize: strip any path components
    $key = $subdir . $filename;

    try {
        // Create presigned URL directly (no headObject needed)
        $cmd = $s3->getCommand('GetObject', [
            'Bucket' => $bucket,
            'Key'    => $key
        ]);
        $request = $s3->createPresignedRequest($cmd, "+{$expiresInSeconds} seconds");
        $presignedUrl = (string) $request->getUri();

        // Ensure no output before headers
        if (headers_sent()) {
            ob_end_clean();
            exit;
        }
        // Clean buffer just in case
        ob_end_clean();

        // Headers
        header("Cache-Control: private, max-age=0, no-cache");
        header("Expires: " . gmdate("D, d M Y H:i:s") . " GMT");

        // Redirect to S3 signed URL
        header("Location: $presignedUrl", true, 302);
        exit;
    } catch (AwsException $e) {
        ob_end_clean();
        http_response_code(404);
        echo "File does not exist or cannot be downloaded.";
        exit;
    }
}

// ---------------------------------
// LIST FILES
// ---------------------------------
try {
    $objects = $s3->listObjectsV2([
        'Bucket' => $bucket,
        'Prefix' => $subdir,
    ]);

    echo "<h1>Available Downloads</h1>";
    echo "<ul>";
    if (!empty($objects['Contents'])) {
        foreach ($objects['Contents'] as $obj) {
            $key = $obj['Key'];
            if (substr($key, -1) === '/') continue; // skip the folder placeholder
            $file = basename($key);
            // URL-encode the query value; HTML-escape both the href and link text
            echo "<li><a href=\"?file=" . htmlspecialchars(urlencode($file)) . "\">"
                . htmlspecialchars($file) . "</a></li>";
        }
    } else {
        echo "<li>No files found.</li>";
    }
    echo "</ul>";
} catch (AwsException $e) {
    echo "Unable to list files.";
}
When you edit a PHP file you need to restart PHP-FPM, e.g. systemctl restart php-fpm (or php8.4-fpm on Debian with PHP 8.4, etc.).
The script can then be used in website menus or buttons like this (use your own domain name):
https://mydomain.com/download.php?file=test.pdf
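A quick way to confirm the endpoint behaves as intended (a sketch; the domain and file name are placeholders) is to check that a request returns a 302 redirect with a presigned S3 URL in the Location header, rather than the file body itself:

```shell
#!/bin/bash
# check_download.sh - verify download.php issues a 302 redirect to S3
# Placeholder: mydomain.com

check_download() {
    local file="${1:-test.pdf}"
    # -s silent, -I headers only; look for "302" and a Location header
    curl -sI "https://mydomain.com/download.php?file=$file"
}
# Example: check_download test.pdf | grep -i '^location:'
```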
The above bucket, share.mydomain.com in this example, would have a bucket policy like this:
--> ATTACHED_IAM_ROLE would be the attached EC2 role
share.mydomain.com/documents would be the private bucket (use your own name) where documents are kept under the subfolder "documents", as an example
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:role/ATTACHED_IAM_ROLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::share.mydomain.com/documents/*"
    }
  ]
}
--> and the Attached Role would have a policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListBuckets",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::share.mydomain.com"
      ]
    },
    {
      "Sid": "AllowObjectReadWriteDelete",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::share.mydomain.com/*"
      ]
    }
  ]
}
These are intended as examples, so the final policies need to be checked against AI recommendations, or use your own knowledge of IAM settings. AWS provides various resources to study. You will use nginx error.log entries (or the php-fpm log) to help AI work out any issues.
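One concrete way to check a policy before relying on it (a sketch; the ARNs are placeholders matching the examples above) is the IAM policy simulator, which is also available from the CLI:

```shell
#!/bin/bash
# simulate_policy.sh - ask IAM whether the role may read from the bucket
# Placeholders: ACCOUNT_ID, ATTACHED_IAM_ROLE, share.mydomain.com

simulate_get() {
    aws iam simulate-principal-policy \
        --policy-source-arn "arn:aws:iam::ACCOUNT_ID:role/ATTACHED_IAM_ROLE" \
        --action-names s3:GetObject \
        --resource-arns "arn:aws:s3:::share.mydomain.com/documents/test.pdf" \
        --query 'EvaluationResults[0].EvalDecision' \
        --output text
}
# On a correct setup this should print: allowed
```

Note that the simulator only evaluates identity-based policies attached to the role; the bucket policy is evaluated separately by S3.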
The key here is that if you wish to do something, there is very likely a way to do it, but it may take many hours of development, plus surprises to fix afterwards.
What we have not mentioned here is the www.conf file: it may need php_admin_flag[allow_url_fopen] = on and an entry like this: php_admin_value[open_basedir] = /var/www:/var/www/html:/var/www/icons:/data:/tmp:/usr/share/phpMyAdmin:/data/tmp:/usr/bin:/var/lib/nginx/.aws. The logs will show these sorts of issues.
The idea is that we can use AWS resources without exposing or using AWS CLI secret keys. Akamai/Linode becomes a bit more complex for these sorts of things, and does need secret keys.
And lastly, we can develop a series of functions covering a number of needs – but it is a journey if we continue with this approach.

