This article shows a Debian 13 installation on Akamai/Linode, suitable for Nginx, php8.4, memcached, opcache, mariadb11, WordPress, and use of Amazon AWS-SDK.
Debian 13 Installation Steps
There are a number of configurations required. We will keep the Nginx details in another article, which includes use of Let’s Encrypt SSL. These details vary over time, and may require problem solving. AI assists with errors and warnings.
We will keep apache2 for a separate article.
Some applications need earlier versions of PHP (e.g. 8.2) on x86 architecture. If doing this, you will likely enable php8.4 at some stage during package updates, and then follow a process to get back to php8.2. (AI can assist)
We include the AWS-SDK so that we can send email without storing AWS credentials directly in the WP Mail SMTP plugin, where we access an AWS account for SES email forwarding. You likely do not have this, so your WP Mail SMTP plugin can hold your own email server details, such as Axigen on VentraIP. We avoid OAuth2 because its certificates expire and because clients may not permit you root access to their email account.
We also show examples of using the AWS-SDK to make use of Amazon AWS S3 private Buckets.
If you open an AWS Account you have to follow the security steps very carefully, due to breaches that now occur that were far less likely in previous years. As a minimum, set up a budget alert and GuardDuty with a minimal configuration (disable all the options to get the minimum). Your S3 buckets must be private, with careful configurations using the AWS-SDK rather than the blunt AWS CLI credentials that we used to use.
The steps for an AWS EC2 Debian 13 instance will likely vary.
Debian Packages
apt update
apt upgrade
-->
*** dhcpcd.conf (Y/I/N/O/D/Z) [default=N] ?
USE DEFAULT N
<--
--> What do you want to do about modified configuration file sshd_config?
keep the local version currently installed
(Note: if you get check boxes, you use the space bar to tick or untick them. Tab to get to the OK button. We previously modified ssh so we keep it.)
<--
-->
GRUB install devices: │
│ │
│ [ ] /dev/sda (25769 MB; QEMU_HARDDISK)
Use space bar to select and tab to enter
<--
--> You can double check to see if these are installed: lsb-release ca-certificates software-properties-common
apt install -y gnupg2
--> Ignore the fact this adds apache packages...
apt install -y php8.4
--> Check:
[root@domain.com: /home/admin]# php -v
PHP 8.4.16 (cli) (built: Dec 18 2025 21:19:25) (NTS)
Copyright (c) The PHP Group
Built by Debian
Zend Engine v4.4.16, Copyright (c) Zend Technologies
with Zend OPcache v8.4.16, Copyright (c), by Zend Technologies
--> apt search mariadb - this should show we are on the latest v11. Earlier Debian versions were a little more complicated on this part.
apt install mariadb-server
--> used to be mysql_secure_installation
mariadb-secure-installation
["Enter current password for root" (enter for none):
OK, successfully used password, moving on...
"Switch to unix_socket authentication [Y/n]" n
"Change the root password?" [Y/n] Y
(nominate your database password)
Y for the remaining questions]
--> Note that we now start and enable all our services. If Nginx is later not loading SSL correctly, you need to problem solve.
systemctl stop mariadb
systemctl start mariadb
systemctl enable mariadb
--> you will see:
Created symlink /etc/systemd/system/mysql.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /usr/lib/systemd/system/mariadb.service.
--> Status should be like this:
[root@domain.com: /home/admin]# systemctl status -l mariadb
● mariadb.service - MariaDB 11.8.6 database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: enabled)
Active: active (running) since Sat 2026-04-04 17:10:25 AEST; 13s ago
Invocation: 870f6cffe6aa42a78d59ef41776620d5
Docs: man:mariadbd(8)
https://mariadb.com/kb/en/library/systemd/
Main PID: 19107 (mariadbd)
Status: "Taking your SQL requests now..."
Tasks: 14 (limit: 7329)
Memory: 125.4M (peak: 129.9M)
CPU: 1.291s
CGroup: /system.slice/mariadb.service
└─19107 /usr/sbin/mariadbd
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] Plugin 'FEEDBACK' is disabled.
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] Plugin 'wsrep-provider' is disabled.
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] InnoDB: Buffer pool(s) load completed at 260404 17:10:25
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] Server socket created on IP: '127.0.0.1', port: '3306'.
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] mariadbd: Event Scheduler: Loaded 0 events
Apr 04 17:10:25 localhost mariadbd[19107]: 2026-04-04 17:10:25 0 [Note] /usr/sbin/mariadbd: ready for connections.
Apr 04 17:10:25 localhost mariadbd[19107]: Version: '11.8.6-MariaDB-0+deb13u1 from Debian' socket: '/run/mysqld/mysqld.sock' port: 3306 -- Please help get to 10k stars at https://github.com/MariaDB/Server
Apr 04 17:10:25 localhost systemd[1]: Started mariadb.service - MariaDB 11.8.6 database server.
Apr 04 17:10:25 localhost /etc/mysql/debian-start[19152]: Checking for insecure root accounts.
Apr 04 17:10:25 localhost /etc/mysql/debian-start[19169]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables
--> These are our recommended my.cnf settings for the 1GB of RAM:
cd /etc/mysql
cp -p my.cnf my.cnf.bak
vi my.cnf
[mysqld]
# --- Core memory ---
innodb_buffer_pool_size = 256M
innodb_buffer_pool_instances = 1
aria-pagecache-buffer-size = 32M
# --- Disable legacy query cache (saves RAM) ---
query_cache_type = 0
query_cache_size = 0
# --- Connection & thread control ---
max_connections = 20
thread_cache_size = 20
# --- Per-connection buffers (keep small!) ---
tmp_table_size = 16M
max_heap_table_size = 16M
sort_buffer_size = 1M
join_buffer_size = 512K
read_buffer_size = 128K
read_rnd_buffer_size = 256K
# --- Table & index ---
table_open_cache = 200
key_buffer_size = 8M
# --- InnoDB safety & IO ---
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT
innodb_flush_neighbors = 1
innodb_io_capacity = 200
# --- Disable binary logging ---
skip-log-bin
# --- Logging ---
log_error=/var/log/mariadb-error.log
log_warnings=9
[save and exit - the existing [client-server] lines below this stay as-is]
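As a rough sanity check on these values for 1GB of RAM, you can estimate the worst-case footprint: the fixed caches plus max_connections times the per-connection buffers. A minimal sketch (it ignores tmp_table_size, the InnoDB log buffer, and per-thread stacks, so treat it as a ballpark only):

```shell
# Ballpark MariaDB memory estimate from the my.cnf values above (MB).
FIXED=$((256 + 32 + 8))                   # buffer pool + aria pagecache + key buffer
PER_CONN_KB=$((1024 + 512 + 128 + 256))   # sort + join + read + read_rnd buffers (KB)
CONNS=20                                  # max_connections
TOTAL=$((FIXED + CONNS * PER_CONN_KB / 1024))
echo "Estimated worst-case MariaDB footprint: ~${TOTAL}M"
```

On these numbers that works out at roughly 333M, which leaves headroom for PHP-FPM, memcached, and Nginx on a 1GB Linode.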
cd mariadb.conf.d
vi 50-server.cnf
--> Comment out: expire_logs_days = 10
# expire_logs_days = 10
[save and exit]
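If you prefer a non-interactive edit, the same change can be scripted with sed. A sketch (comment_expire_logs is our own helper name; it backs up the file before touching it):

```shell
# Comment out the expire_logs_days line without opening an editor.
# comment_expire_logs is our own helper name, not a system command.
comment_expire_logs() {
    cnf="$1"
    cp -p "$cnf" "$cnf.bak"
    sed -i 's/^[[:space:]]*expire_logs_days/# expire_logs_days/' "$cnf"
}
# usage on Debian 13:
# comment_expire_logs /etc/mysql/mariadb.conf.d/50-server.cnf
```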
--> We now add OOM protection settings to auto restart mariadb if it crashes from memory exhaustion
systemctl edit mariadb
### Editing /etc/systemd/system/mariadb.service.d/override.conf
### Anything between here and the comment below will become the contents of the drop-in file
[Service]
Restart=on-failure
OOMScoreAdjust=-1000
RestartSec=5s
Environment="MYSQLD_OPTS="
Environment="_WSREP_NEW_CLUSTER="
### Edits below this comment will be discarded
[save and exit - this forces use of the vi editor on Linode; some OS platforms use nano instead, which requires CTRL-O, ENTER, then CTRL-X]
cd /var/log
touch mariadb-error.log
chmod 774 mariadb-error.log
systemctl daemon-reload
systemctl restart mariadb
cat /etc/systemd/system/mariadb.service.d/override.conf
--> check the error log is okay: /var/log/mariadb-error.log should exist and its content should look okay.
cd /var/log
ls -l mariadb-error.log
-rwxrwxr-- 1 root root 649 Apr 4 17:22 mariadb-error.log
--> you can recheck the systemctl status -l mariadb if you wish.
NOTE: We have not rebooted the new Linode yet. At some stage we must do this to ensure there are no hiccups.
Let’s continue:
--> Don't worry about apache2 messages. We will disable apache2 and remove those packages later. Not all these packages are vital, but they can be helpful depending on what else one is doing on the instance. <--
apt install -y php8.4-mysqli php8.4-mbstring php8.4-cli php8.4-fpm php8.4-curl php8.4-gd
--> A number of packages would already be installed: php8.4-mysqlnd php8.4-opcache php8.4-json php8.4-pdo python3 whois
--> php8.4-curl is used with the AWS-SDK for emails <--
apt install -y zip cronie gcc php8.4-zip
--> for Let's Encrypt:
apt install -y python3.13-venv
python3 -m venv /opt/certbot/
/opt/certbot/bin/pip install --upgrade pip
/opt/certbot/bin/pip install certbot
apt install -y certbot
--> check /usr/bin/certbot is present (you will notice /etc/letsencrypt exists):
whereis certbot
certbot: /usr/bin/certbot /opt/certbot/bin/certbot /usr/share/man/man7/certbot.7.gz /usr/share/man/man1/certbot.1.gz
apt install -y jq ipcalc dos2unix dnsutils
--> nftables should already be installed
memcached
apt install -y memcached
apt install -y php8.4-memcached
systemctl enable memcached
ps -ef | grep memc
memcache 32871 1 0 11:32 ? 00:00:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -l ::1 -P /var/run/memcached/memcached.pid
--> This can be more complex to install, as shown in Linux2023
Add the aws-sdk
apt install -y composer
--> Verify installation - it is okay to use root login: just answer yes to the prompt
composer --version
--> after nginx is installed, you want /var/www to be chmod 2775 and chown nginx:nginx, and then install the aws-sdk:
cd /var/www
mkdir aws-sdk
cd aws-sdk
composer require aws/aws-sdk-php
--> This will create a vendor/ directory with the SDK and an autoload.php file. Composer will handle all dependencies automatically.
--> the aws-sdk package can be accessed from other directories using soft links, e.g.:
cd /var/www/html
ln -s /var/www/aws-sdk aws-sdk
ls -l
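Before creating the soft links, it is worth confirming the SDK actually landed where they will point. A small sketch (check_aws_sdk is our own helper name; the path is the /var/www/aws-sdk layout used above):

```shell
# Check the Composer-installed SDK is present before creating soft links.
# check_aws_sdk is our own helper name, not part of the SDK.
check_aws_sdk() {
    if [ -f "$1/vendor/autoload.php" ]; then
        echo "OK: autoload.php found in $1"
    else
        echo "MISSING: run 'composer require aws/aws-sdk-php' in $1"
    fi
}
# usage:
# check_aws_sdk /var/www/aws-sdk
```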
Let’s add ImageMagick
apt install -y imagemagick
apt install -y php8.4-imagick
--> this also installs php8.4-msgpack and php8.4-igbinary. /etc/php/8.4/mods-available will show the .ini files for memcached above, imagick.ini, and opcache.ini.
cd /
find . -name imagick.so -print
./usr/lib/php/20240924/imagick.so
Install phpmyadmin (we used to use the wget command, but now we can install the package). This is our graphical interface into the database. We don't worry about /var/lib/php issues as we fixed those in Linux2023.
*** NOTE CHANGES *** The initial nginx.conf example will not work while the .htpasswd stanzas are enabled. Get phpmyadmin working first before working through the second layer of security for .htpasswd. Also note, a fix is needed in security.conf to remove /22 from the IP addressing of xxx.xxx.xxx.xxx/22.
Installing with apt phpmyadmin now has changes we did not have before. See the notes below this next section which has the older method.
--> This is the old method (still works)
cd /usr/share
wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
ls
--> untar the file you downloaded:
tar xvf .....
--> e.g.: tar xvf phpMyAdmin-latest-all-languages.tar.gz
rm phpMyAdmin-latest-all-languages.tar.gz
mv phpMyAdmin-5.2.3-all-languages phpMyAdmin
(Use the ls command to get the correct file names)
<--
cd phpMyAdmin
mkdir tmp
chmod 777 tmp
cp -p config.sample.inc.php config.inc.php
vi config.inc.php
--> After SaveDir, add TempDir, and the permissions line (the DisableIS line) if later you find you can't edit users or export a database
$cfg['SaveDir'] = '';
$cfg['TempDir'] = '/tmp';
$cfg['Servers'][$i]['DisableIS'] = true;
--> We need to edit the blowfish line.
php -r "echo bin2hex(random_bytes(32));"
--> Then paste this into the blowfish double quotes. It only needs to be a random string at least 32 characters in length. I never use ! characters.
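If you are doing this before the PHP CLI is fully set up, /dev/urandom gives an equivalent random string (an alternative to the php -r one-liner above; either way you just need 32+ random characters):

```shell
# Alternative secret generator if the PHP CLI is not handy yet:
# 64 hex characters, the same shape as bin2hex(random_bytes(32)).
SECRET=$(tr -dc 'a-f0-9' < /dev/urandom | head -c 64)
echo "$SECRET"
```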
--> This is the new method
apt install -y phpmyadmin
--> leave the apache2 or litespeed options blank, and tab to the OK button, press ENTER
--> Configure database for phpmyadmin with dbconfig-common? yes
--> provide your primary database password
--> Then do the above steps for the configurations.
--> do all the steps above for cd /usr/share/phpmyadmin; mkdir tmp;chmod 777 tmp; and so on
cd /var/www/html
ln -s /usr/share/phpmyadmin phpMyAdmin
--> NOTE: you can come back to this later if you like for security protection, or you may want to add .htpasswd as another security layer.
CHANGES to phpmyadmin:
-->
Installing the phpmyadmin package has some differences from the older wget method.
We have two directories to review:
cd /etc/phpmyadmin
ls -l
--> change the group of config-db.php (-rw-r----- 1 root www-data 529 Apr 6 11:56 config-db.php) to the nginx group:
chgrp nginx config-db.php
cd /var/lib/phpmyadmin
[root@domain.com: /var/lib/phpmyadmin]# ls -l
total 8
-rw-r----- 1 root www-data 68 Apr 6 11:56 blowfish_secret.inc.php
drwxr-xr-x 2 www-data www-data 4096 Sep 4 2025 tmp
--> Change www-data to nginx (note: previously we did "apt remove *apache2*")
chown nginx:nginx tmp
chgrp nginx blowfish_secret.inc.php
--> Create the blowfish string:
php -r 'echo substr(str_shuffle("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"), 0, 32), PHP_EOL;'
--> Add it to the file blowfish_secret.inc.php (remove the existing string)
Nginx
We want a minimum of Nginx 1.28 (or above).
apt search nginx shows 1.26 which we no longer use.
Install Nginx
Check the following, in case of changes:
mkdir -p /root/.gnupg
chmod 700 /root/.gnupg
sudo apt update && \
sudo apt install curl \
gnupg2 \
ca-certificates \
lsb-release \
debian-archive-keyring
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
gpg --dry-run --quiet --no-keyring --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg
pub rsa4096 2024-05-29 [SC]
8540A6F18833A80E9C1653A42FD21310B49F6B46
uid nginx signing key <signing-key-2@nginx.com>
pub rsa2048 2011-08-19 [SC] [expires: 2027-05-24]
573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
uid nginx signing key <signing-key@nginx.com>
pub rsa4096 2024-05-29 [SC]
9E9BE90EACBCDE69FE9B204CBCDCD8A38D88A2B3
uid nginx signing key <signing-key-3@nginx.com>
--> If the fingerprints do not match, delete the file immediately.
--> For the current stable version:
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
https://nginx.org/packages/debian `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list
echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
| sudo tee /etc/apt/preferences.d/99nginx
sudo apt update && \
sudo apt install nginx
--> double check apache2 was stopped previously:
systemctl stop apache2
systemctl disable apache2
systemctl start nginx
curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.28.3
Date: Mon, 06 Apr 2026 02:16:41 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Mar 2026 18:33:23 GMT
Connection: keep-alive
ETag: "69c2d8f3-267"
Accept-Ranges: bytes
systemctl enable nginx
systemctl enable php8.4-fpm
systemctl restart php8.4-fpm
--> also double check mariadb and memcached are enabled
systemctl enable mariadb
systemctl enable memcached
--> We have some fixes:
apt remove *apache2*
-->
We should not need the following:
If needed, install one or more dynamic module packages:
sudo apt install nginx-module-<name>
Then, enable each module in the nginx.conf configuration file using the load_module directive. The resulting .so files are located in the /usr/lib/nginx/modules directory.
<--
--> I like to add a precaution for ensuring Nginx starts:
--> Nginx can try to start too quickly. See: https://serverfault.com/questions/1042526/open-run-nginx-pid-failed-13-permission-denied. The fix is:
mkdir -p /etc/systemd/system/nginx.service.d
vi /etc/systemd/system/nginx.service.d/override.conf
[Service]
ExecStartPost=/bin/sleep 0.1
[save and exit]
systemctl daemon-reload
systemctl restart nginx
systemctl status -l nginx
--> We won't have the ownership problems of /var/lib/php as we fix in EC2 Linux2023
I like to stop nginx and php8.4-fpm while working on new configurations.
We of course start these to install Let's Encrypt SSL certificates.
I LIKE TO REBOOT THE INSTANCE AT THIS TIME
You may now configure Let’s Encrypt and Nginx for your domains.
As this is serious work, you should enable the use of Linode backups.
php.ini, www.conf, and php-fpm.conf, under /etc/php/8.4/fpm and pool.d – see below
PLEASE SEE THE LINUX2023 NGINX & SSL CONFIGURATION ARTICLE as the configurations are the same
PLEASE SEE THE LINUX2023 NFTABLES CONFIGURATION ARTICLE – however ensure any EC2 scripts use /home/admin rather than /home/ec2-user.
Further Configurations
memcached
--> Disable UDP on memcached (-U 0)
cd /etc
vi memcached.conf
# -l 127.0.0.1
-l 127.0.0.1
-U 0
-l ::1
[save and exit]
systemctl restart memcached
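A quick check that the -U 0 line really made it into the file (a sketch; udp_disabled is our own helper name - pass it the path to your memcached.conf):

```shell
# Confirm UDP is disabled in the memcached config.
# udp_disabled is our own helper name, not a memcached tool.
udp_disabled() {
    if grep -q -- '-U 0' "$1"; then
        echo "UDP disabled in config"
    else
        echo "WARN: -U 0 not found in $1"
    fi
}
# usage:
# udp_disabled /etc/memcached.conf
```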
opcache
--> full configuration shown:
cd /etc/php/8.4/mods-available
vi opcache.ini
; configuration for php opcache module
; priority=10
zend_extension=opcache.so
opcache.jit=off
opcache.enable=1
opcache.enable_cli=1
opcache.revalidate_freq = 2
opcache.memory_consumption=128
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
realpath_cache_size = 4096K
realpath_cache_ttl = 600
[save and exit]
php.ini
--> I prefer 256M as the PHP memory. Complex web pages may need more - typically WordPress editing suddenly gives a blank page with an error message. A small instance has memory demands, so 512M can cause issues.
cd /etc/php/8.4/fpm
cp -p php.ini php.ini.bak
vi php.ini
date.timezone = Australia/Brisbane
session.cookie_httponly = 1
session.cookie_secure = 1
session.cookie_samesite = "Strict"
session.sid_length = 48
session.sid_bits_per_character = 6
session.use_strict_mode = 1
max_execution_time = 60
max_input_time = 60
max_input_vars = 2800
memory_limit = 256M
post_max_size = 128M
upload_max_filesize = 128M
; session.save_handler = files
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
[save and exit]
--> restart php8.4-fpm for changes to take effect with opcache, www.conf, and php.ini.
--> Note that any changes to php in WordPress for personal AWS scripts and mu-plugins need the same restart
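After editing you can spot-check that the values took in the file (a sketch; check_php_ini is our own helper name, and this only checks the file, not the running process, so still restart php8.4-fpm):

```shell
# Spot-check the php.ini values set above are present in the file.
# check_php_ini is our own helper name, not a PHP tool.
check_php_ini() {
    ini="$1"
    for key in memory_limit post_max_size upload_max_filesize session.save_handler; do
        grep -E "^${key}[[:space:]]*=" "$ini" || echo "MISSING: ${key}"
    done
}
# usage:
# check_php_ini /etc/php/8.4/fpm/php.ini
```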
www.conf
cd /etc/php/8.4/fpm/pool.d
cp -p www.conf www.conf.bak
vi www.conf
--> we will change pm = dynamic to ondemand and comment out a couple of lines as a result.
--> note the value of the listen = line -- we use this in nginx.conf. Debian uses a different socket (/run/php/php8.4-fpm.sock) to the /run/php-fpm/www.sock of Linux2023.
user = nginx
group = nginx
; listen.owner = apache
; listen.group = apache
; listen.mode = 0660
listen.acl_users = nginx
pm = ondemand
pm.max_children = 6
; pm.start_servers = 5
; pm.min_spare_servers = 5
; pm.max_spare_servers = 35
pm.process_idle_timeout = 10s;
pm.max_requests = 300
php_admin_value[error_log] = /var/log/php8.4-fpm.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 256M
php_admin_value[disable_functions] = exec,passthru,system
php_admin_flag[allow_url_fopen] = off
php_value[session.save_handler] = memcached
php_value[session.save_path] = 127.0.0.1:11211
[save and exit]
--> NOTE: nginx will use /run/php/php8.4-fpm.sock
cd ..
--> somewhere after ;emergency_restart_threshold = 0 we add these lines to permit crontab restarts
vi php-fpm.conf
emergency_restart_threshold = 10
emergency_restart_interval = 1m
[save and exit]
php8.4-fpm OOM auto restarts after a crash
--> We now add the OOM restart settings for when memory crashes, which can be seen in error logs like:
dmesg -T (use -C to clear the logs)
journalctl -e (use another option for the archived logs)
/var/log/php8.4-fpm.log, and mariadb's log
As a t4g.micro has 1GB RAM, we use the memory configs below. 2GB RAM configs need AI to check all of our configurations from these articles. <--
systemctl edit php8.4-fpm.service
[Service]
MemoryMax=975M
MemoryHigh=950M
[save and exit - either nano or vi depending on the OS - should be vi on Linode]
--> After editing, reload php:
systemctl daemon-reload
systemctl restart php8.4-fpm
systemctl status -l php8.4-fpm
If we had an instance with 2GB RAM, we could use 1500M as a larger value. Again, check AI for better values for 2GB RAM, including mariadb etc.
When we upgrade the packages in Linux2023, some PHP entries can be lost. We should be okay in Debian. You can always check your configs after an upgrade just in case. Linux2023 does provide the ability to prevent such changes.
When Nginx and SSL is configured, you can check php configs and that memcached is running.
cd /var/www/html
vi info.php
<?php phpinfo();?>
[save and exit]
chmod 664 i*
chown nginx:nginx i*
https://mydomain.com/info.php
--> search that opcache is enabled, check session.save_handler is using memcached and its port, and check your 256M memory is there.
mv info.php info.php.bak
PLEASE SEE the Linux2023 article for other helpful scripts. Remember, installing SSL nftables etc. all need /home/admin and not /home/ec2-user.
PLEASE CHECK /var/www: chmod 2775, and /var/www/html: drwxrwsr-x 2 nginx nginx 4096 Apr 7 09:38 html
Remember not to change soft link permissions by accident. They should be lrwxrwxrwx 1 root root
AWS CLI
As Linode is unable to attach an EC2 IAM Role, we have to use secret AWS credentials in order to send emails through SES.
We are only focusing on admin emails for this example, such as a certbot renewal failure notification.
Let’s say we send FROM admin@domain.com which we want to go to me@gmail.com. Both of these addresses must previously be verified in SES (Oregon for our examples, for use in Australia.)
This means no use of Postfix, and no public emails are sent out.
Let’s say you have created the DNS entries for domain.com as required by SES. This is a little involved.
You need to end up with entries like this: (SES can take overnight to verify DKIM, so you have to wait for that)
--> Example DNS records from Amazon AWS SES Oregon that verify a domain name (use your own name). I use 1 hour on each entry.
A domain.com YOUR_IP_ADDRESS
CAA domain.com 0 issue letsencrypt.org. --> If using free SSL
CNAME ahjshajhdjahsjdhjahdjsh._domainkey.domain.com ahjshajhdjahsjdhjahdjsh.dkim.amazonses.com. --> There will be three (3) CNAME records similar to this. Check for errors such as a space character.
MX domain.com inbound-smtp.us-west-2.amazonaws.com 10
MX mail.domain.com feedback-smtp.us-west-2.amazonses.com 10 --> I add mail.domain.com to the mail section of the SES Domain configuration/request
TXT _dmarc.domain.com "v=DMARC1; p=none;" --> Cloudflare uses the double quotes. Some registrars do not. We use p=none as we do not really want to receive DMARC reports.
TXT mail.domain.com "v=spf1 include:amazonses.com ~all"
TXT domain.com "v=spf1 include:amazonses.com ~all"
Add any other records as required, such as a www.domain.com CNAME.
You can add these before you commence the Domain SES Verification process, which means you only have to copy the three CNAME records.
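Because it is easy to miss one record, here is a tiny helper that prints the record names to create for a given domain (a sketch; ses_records is our own helper name, and the DKIM selectors are placeholders because they come from your SES console):

```shell
# Print the SES-related DNS record names to create for a domain.
# ses_records is our own helper name; the DKIM CNAME names come
# from the SES console, so they are shown only as a placeholder.
ses_records() {
    d="$1"
    printf '%s\n' \
        "A      $d" \
        "CNAME  <selector>._domainkey.$d  (x3, values from the SES console)" \
        "MX     $d" \
        "MX     mail.$d" \
        "TXT    _dmarc.$d" \
        "TXT    mail.$d" \
        "TXT    $d"
}
ses_records domain.com
```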
Then add an SES Email rule to create a new Oregon S3 Bucket to receive emails. You can add admin@domain.com or, to catch all, just domain.com. Any public work using SES should add postmaster@ and abuse@ to comply with SES best practices.
You MUST WAIT for the AWS service to confirm mail@domain.com and the DKIM/DMARC service is approved before the next step of verifying admin@domain.com.
Create the admin@domain.com identity in SES. It will send a raw email to the bucket. You can download it and add .eml to the file extension, open it in your email app, and click on the verification link. You can use the SES console to resend the email if you need to – for example, if you forgot to wait for the AWS approvals, or you forgot to make the SES bucket that receives the emails.
You should verify your chosen destination address, such as me@gmail.com. It will of course not use the bucket.
Now we can add the AWS CLI software, configure it, and do a smoke test.
--> You should already be okay with these steps:
cd /home/admin
sudo apt update && sudo apt upgrade -y
sudo apt install curl unzip -y
--> Add the AWS CLI package for x86 architecture
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
--> assuming we are logged in as root user:
./aws/install
aws --version
-->
There are different ways to configure ~/.aws files. We will come back to this. We now create an IAM User (not an SMTP user).
From the IAM console, create a user, then from the Security tab, create credentials.
Save the .CSV file to your PC, and copy the credentials to a secure text file on your PC. DO NOT LOSE THEM, or you will have to delete the keys and create them again.
You can always deactivate a key; remember to deactivate old keys and create new keys in the future, then delete the old keys after testing.
DO NOT have keys that are several years old, EVER.
We will add these keys to ~/.aws, so this is where you would update if you change the keys in the future.
<--
cd ~
mkdir .aws
chmod 700 .aws
cd .aws
vi config
[profile administrator]
region = us-west-2
output = json
[save and exit]
--> use the public key and secret key you just created for the IAM User
vi credentials
[administrator]
aws_access_key_id = AKIA...
aws_secret_access_key = 357r...
[save and exit]
The secret key is always 40 characters long. An SMTP key out of interest is always 44 characters.
ls -la
-rw-r--r-- 1 root root 49 Apr 8 13:46 config
-rw-r--r-- 1 root root 114 Apr 8 14:09 credentials
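Those fixed key lengths make a handy sanity check on the credentials file, since truncated pastes are a common cause of mysterious auth failures. A sketch (check_creds is our own helper name; AWS access key ids are 20 characters):

```shell
# Flag access/secret keys whose lengths look wrong (paste errors are common).
# check_creds is our own helper name, not an AWS tool.
check_creds() {
    file="$1"
    akid=$(sed -n 's/^aws_access_key_id *= *//p' "$file")
    secret=$(sed -n 's/^aws_secret_access_key *= *//p' "$file")
    [ "${#akid}" -eq 20 ]   || echo "WARN: access key id is ${#akid} chars (expected 20)"
    [ "${#secret}" -eq 40 ] || echo "WARN: secret key is ${#secret} chars (expected 40)"
    echo "check complete"
}
# usage:
# check_creds ~/.aws/credentials
```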
--> Next we add a tight security policy to the IAM User.
Go to the IAM Policies section and create a policy like this:
Use your AWS account id, and the email you wish to permit sending FROM and sending TO.
We also add your Linode IP address (replace XXX.XXX.XXX.XXX below).
This is a very tight policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SESRestrictedSend",
"Effect": "Allow",
"Action": [
"ses:SendEmail",
"ses:SendRawEmail"
],
"Resource": [
"arn:aws:ses:us-west-2:ACCOUNT_ID:identity/admin@domain.com",
"arn:aws:ses:us-west-2:ACCOUNT_ID:identity/postmaster@domain.com",
"arn:aws:ses:us-west-2:ACCOUNT_ID:identity/me@gmail.com"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": "XXX.XXX.XXX.XXX"
},
"StringLike": {
"ses:FromAddress": [
"admin@domain.com",
"contact@domain.com"
]
},
"ForAllValues:StringLike": {
"ses:Recipients": [
"admin@domain.com",
"contact@domain.com",
"postmaster@domain.com",
"me@gmail.com"
]
}
}
}
]
}
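Before pasting the policy into the IAM console, you can validate the JSON locally (a sketch; validate_policy is our own helper name, and python3 was installed earlier so no extra packages are needed):

```shell
# Validate a saved policy file is well-formed JSON before uploading it.
# validate_policy is our own helper name, not an AWS tool.
validate_policy() {
    python3 -m json.tool "$1" > /dev/null 2>&1 \
        && echo "$1: valid JSON" \
        || echo "$1: SYNTAX ERROR"
}
# usage (assuming you saved the policy above as policy.json):
# validate_policy policy.json
```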
-->
Save the policy, giving it a meaningful name. THEN ADD THE POLICY to the IAM User.
We do not create inline policies as these are confusing. Rather, we create a policy and then attach it.
Now test all is working:
<--
aws sts get-caller-identity --profile administrator
{
"UserId": "AIDA.....",
"Account": "ACCOUNT_ID",
"Arn": "arn:aws:iam::ACCOUNT_ID:user/YOUR_IAM_USER_NAME"
}
--> AI can help fix errors. The above command can have --debug after it.
-->
Our smoketest script will send via SES, but will not place anything into the SES S3 Bucket we created earlier.
Put in your own email addresses below.
If you have a different config/credentials name, use that one.
<--
vi smoketest.sh
#!/bin/bash
# ses_smoke_test.sh - SES IAM profile smoke test
# Usage: ./ses_smoke_test.sh [recipient_email]
# -------------------------
# CONFIG
# -------------------------
MAIL_FROM="admin@domain.com" # Must match policy
MAIL_TO="${1:-me@gmail.com}" # Default recipient
REGION="us-west-2" # SES region
AWS_PROFILE="administrator" # IAM profile
OUTPUT_FORMAT="json" # CLI output format
# Allowed recipients check (optional safeguard)
ALLOWED_RECIPIENTS=("admin@domain.com" "contact@domain.com" "postmaster@domain.com" "me@gmail.com")
if [[ ! " ${ALLOWED_RECIPIENTS[*]} " =~ " ${MAIL_TO} " ]]; then
echo "❌ ERROR: Recipient $MAIL_TO not allowed by policy."
exit 1
fi
# -------------------------
# Compose raw test email
# -------------------------
RAW_MAIL=$(mktemp)
{
echo "From: $MAIL_FROM"
echo "To: $MAIL_TO"
echo "Subject: SES Smoke Test Email"
echo "Content-Type: text/plain; charset=UTF-8"
echo
echo "Hello!"
echo
echo "This is a SES smoke test from EC2 using IAM user credentials."
echo "Timestamp: $(date)"
} > "$RAW_MAIL"
# -------------------------
# Send email via SES
# -------------------------
echo "Sending test email to $MAIL_TO..."
aws ses send-raw-email \
--profile "$AWS_PROFILE" \
--region "$REGION" \
--raw-message Data="$(base64 -w 0 "$RAW_MAIL")" \
--output "$OUTPUT_FORMAT"
if [ $? -eq 0 ]; then
echo "✅ SES smoke test email sent successfully!"
else
echo "❌ SES smoke test failed. Check IAM policy, From/To addresses, and SES region."
fi
# -------------------------
# Cleanup
# -------------------------
rm -f "$RAW_MAIL"
[save and exit. chmod 777 smoketest.sh]
./smoketest.sh
Sending test email to me@gmail.com...
{
"MessageId": "0101019d6b55d91b-062ce009-ce42-42a9-93f4-1cdb35b6276a-000000"
}
✅ SES smoke test email sent successfully!
You are now able to develop scripts that send alerts, or tweak the configurations for capturing Contact Form 7 in WordPress to use the AWS configurations instead of placing any keys into a database or text file in WordPress.
This is a strong approach to security, and will need the use of AI to develop your scripts.