New Stylesheet for Feeding America San Diego

They are looking for a Store Locator Plus® locator style that matches the UX on Oregon Food Bank.

“…since most of the people viewing our map are looking at it from their phone that is the priority”

Our current map has several layout issues, including a cut-off search bar and overflowing location information on mobile (examples attached).

We would like the primary colors of the map to be:

Blue: HEX#005487
Orange: HEX#DE7C00

Oregon Food Bank : Mobile

Oregon Food Bank : Desktop

Requested by Feeding America San Diego (dwilliams_at_).

Dev Notes

Blockers

UX Improvement Ideas

Store Locator Plus® Staging Embedded Map

ECS Container Cannot Connect To Internet

cables in a tunnel by chatgpt

We recently stood up a new ECS container for the Store Locator Plus® staging app. We can connect to the app and log in, but the app cannot talk to the outside world: it is not connecting to the WordPress news service and cannot validate a WP Mail SMTP Pro license.

This is the notebook for resolving that issue.

Research

Use Private Subnets : Stack Overflow

A Stack Overflow answer that almost exactly describes our situation.

You are using the awsvpc network mode. This means that each ECS task gets its own Elastic Network Interface (ENI). With this configuration, the ECS tasks do not use the underlying EC2 instance’s network connection, they have their own network connection, with their own IP addresses.

  • You are currently disabling public IP assignment to your ECS tasks in the ECS service network_configuration block. You will need to change assign_public_ip to true in order to have ECS assign public IP addresses to the ECS Task’s ENIs, so that the ECS tasks can access resources outside of the VPC.
    • I forgot you can’t use public IP with awsvpc ECS deployed to EC2. You can only do that with Fargate deployments. So your options are to use a different network mode, so you can use the EC2 instance’s public IP from your ECS task: docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/… or switch to private subnets and a NAT Gateway.

Amazon ECS Networking Documents

Enable VPC internet access using internet gateways

“An internet gateway enables resources in your public subnets (such as EC2 instances) to connect to the internet if the resource has a public IPv4 address or an IPv6 address.”

Configuration for Internet Access

  • Add a route to the route table for the subnet that directs internet-bound traffic to the internet gateway.
  • Ensure that instances in your subnet have a public IPv4 address or IPv6 address. For more information, see Instance IP addressing in the Amazon EC2 User Guide.
  • Ensure that your security groups and network access control lists allow the desired internet traffic to flow to and from your instances.
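For reference, the route check above can be done from the AWS CLI. A minimal sketch, using the route table and gateway IDs recorded elsewhere on this page:

# Confirm the public route table sends internet-bound traffic to the internet gateway.
aws ec2 describe-route-tables --route-table-ids rtb-f0a96096 --query 'RouteTables[].Routes'

# If the 0.0.0.0/0 route were missing, this would add it.
aws ec2 create-route --route-table-id rtb-f0a96096 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-ab3d34cf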

Connect Amazon ECS applications to the internet

ECS Stack

EC2 Instance

  • ID i-0a1fa80a12dda9903
  • VPC: vpc-1b5e2c7c (slp-cluster-vpc)
    • Route Table: rtb-f0a96096 (slp-cluster-default-router)
      • 0.0.0.0/0 => igw-ab3d34cf
  • Security Group: sg-020cdb78 (default)
  • Network ACL: acl-e088aa87 (slp-cluster-acl)
  • EC2 Subnet: subnet-7b232951 (slp-cluster-east-1c)

ECS Service Details

Task Definition: slp_saas_staging:4
arn:aws:ecs:us-east-1:744590032041:task-definition/slp_saas_staging:4

Load Balancer: application load balancer myslp-staging-alb
arn:aws:elasticloadbalancing:us-east-1:744590032041:loadbalancer/app/myslp-staging-alb/2eae5893f2db5c1b

Target Group: ecs-myslp-staging-target
arn:aws:elasticloadbalancing:us-east-1:744590032041:targetgroup/ecs-myslp-staging-target/331cd16e4b3c52e1

VPC: slp-cluster-vpc

From AWS Support

Summary of problem

ECS tasks using the awsvpc network mode have no public IP address on the running instance/container and thus cannot route through the Internet Gateway, despite the gateway being attached to the VPC and subnet where the EC2 instance and container live.

Resolution Summary

To resolve this issue, we need to separate the subnets for your ECS tasks and the ALB, and configure the routing appropriately.

Keep Existing ALB / IG Subnets Unchanged

Keep the existing public subnet(s) for your ALB unchanged, with the Internet Gateway attached.

myslp-staging-alb

DNS Name: myslp-staging-alb-1533129727.us-east-1.elb.amazonaws.com

VPC: slp-cluster-vpc (vpc-1b5e2c7c)

Subnets:
slp-cluster-east-1a : subnet-7c8a8124 us-east-1a (use1-az6)
slp-cluster-east-1c : subnet-7b232951 us-east-1c (use1-az2)
slp-cluster-east-1d : subnet-5213e91b us-east-1d (use1-az4)
slp-cluster-east-1e : subnet-d00210ed us-east-1e (use1-az3)

Internet Gateway: slp-cluster-gateway (igw-ab3d34cf)
attached to slp-cluster-vpc

Create A NAT Gateway

Create New Private Subnets To Match The Public Subnets

Create A New Route Table And Associate With The Private Subnets

And add a route to the general internet (0.0.0.0/0) that goes through the NAT gateway.
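A rough CLI sketch of the three steps above, using the VPC and public subnet IDs from this page; the CIDR block and the placeholder IDs (eipalloc-EXAMPLE, nat-EXAMPLE, rtb-PRIVATE, subnet-PRIVATE) are illustrative only.

# Allocate an Elastic IP and create the NAT gateway in an existing PUBLIC subnet.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-7b232951 --allocation-id eipalloc-EXAMPLE

# Create a matching private subnet (CIDR is illustrative).
aws ec2 create-subnet --vpc-id vpc-1b5e2c7c --cidr-block 172.31.128.0/20 --availability-zone us-east-1c

# New route table for the private subnets, with a default route through the NAT gateway.
aws ec2 create-route-table --vpc-id vpc-1b5e2c7c
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE
aws ec2 associate-route-table --route-table-id rtb-PRIVATE --subnet-id subnet-PRIVATE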

ECS Service Changes

Note: You cannot update the network configuration of an existing ECS service. Therefore, you need to recreate the service.

Put the new service on the private subnets only.

Autoscaling Group Updates

Update the auto scaling group to add the private subnet with the NAT gateway.
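Something like the following, with placeholders standing in for the new private subnet IDs:

# Add the private subnet(s) to the ASG's VPC zone list (comma-separated subnet IDs).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name Infra-ECS-Cluster-myslp-staging-cluster-a97a9fa8-ECSAutoScalingGroup-zoFBNbZvjeFk \
  --vpc-zone-identifier "subnet-PRIVATE-1a,subnet-PRIVATE-1c"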

Resolution Details

The container using AWSVPC will not have a public IP address. That means the automatic routing for outbound connections will never use the Internet Gateway.

You need to set up a VPC with a public subnet in each zone (we have four zones: A, C, D, E) and a private subnet in those same zones.

The cluster will set up an Auto Scaling group, named something like Infra-ECS-Cluster….*, via the infrastructure subcomponent of the cluster.

The Auto Scaling Group (ASG) needs to include both the private and public subnets.

The EC2 instances it spins up can be in the private subnet (let ASG decide).

The cluster service will set up the application load balancer (ALB) and target group. The service must be placed in the private subnets only; this ensures the subsequent tasks (container instances) run on the private subnets. The ALB that is created must be assigned to the public subnets on the VPC so that general inbound traffic from the internet can find its way to the container on the private subnet. As a side note, the target group listens on HTTPS port 443 and routes to the container's HTTP port 80. Use the service to create the ALB and target groups.

On the VPC…

Make sure the default routing table is explicitly assigned to the public subnets and NOT the private subnets.

Create an Internet Gateway (IG) and attach it to the VPC. This will allow inbound traffic from the internet to any service on the VPC with a public IP, in our case the application load balancer listener on port 443.

Create a NAT Gateway in one of the public subnets; the private subnets will reach it through the routing table created next.

Create a second routing table and assign the private subnets to this table. Add a route to 0.0.0.0/0 that goes through the NAT Gateway.

If there are any tasks, containers, or EC2 instances already running, stop and re-instantiate each of them. If the service was originally created on the public subnets of the VPC, it will need to be deleted and recreated on the private subnets.
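A sketch of recreating the service on the private subnets from the CLI; the private subnet IDs are placeholders, everything else is taken from this page. Note that with the EC2 launch type and awsvpc networking, assignPublicIp must stay disabled (the default).

aws ecs create-service --cluster myslp-staging-cluster \
  --service-name myslp-staging-service \
  --task-definition slp_saas_staging:4 \
  --scheduling-strategy DAEMON \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-PRIVATE-1a,subnet-PRIVATE-1c],securityGroups=[sg-020cdb78]}" \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:744590032041:targetgroup/ecs-myslp-staging-target/331cd16e4b3c52e1,containerName=slp_saas,containerPort=80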

ECS Cluster for Staging

Cluster Tech Eggs by ChatGPT

Store Locator Plus® is being migrated to an Elastic Container Service (ECS) cluster that is expected to be active in Q4 2024. This cluster is automatically updated via the myslp_aws_ecs_kit git repo, which triggers a CodePipeline build that deploys updates to the cluster.

ECS Cluster

The ECS cluster that is accessed by the pipeline is myslp-staging-cluster.
arn:aws:ecs:us-east-1:744590032041:cluster/myslp-staging-cluster

This cluster is designed to run EC2 instances that host the SLP SaaS containers.

Infrastructure

The instances are managed by the following Auto Scaling Group (ASG):

Infra-ECS-Cluster-myslp-staging-cluster-a97a9fa8-ECSAutoScalingGroup-zoFBNbZvjeFk

arn:aws:autoscaling:us-east-1:744590032041:autoScalingGroup:e0255cb5-e03b-4f35-adb4-398b947028b8:autoScalingGroupName/Infra-ECS-Cluster-myslp-staging-cluster-a97a9fa8-ECSAutoScalingGroup-zoFBNbZvjeFk

This provides the compute capacity (EC2 instances in this case) that the cluster's defined services will use to run tasks.

Auto Scaling Group Details

Should have a minimum capacity of 1.

The group uses the following launch template: lt-07e8f4ebedbe1c2ff

That launch template runs image ID: ami-05a490ca1a643e9ea

It runs on a Graviton compute instance, which is ARM64 compatible. Currently it uses a c6g.xlarge.

The system tags help associate any resources launched by this ASG with the ECS cluster. The special sauce is in the launch template inline scripts, however.

Launch Template Details

The following “advanced details” in the launch template seem to be what registers any EC2 instances that this ASG fires up with the ECS Cluster:

User data contains scripts or other directives that run as soon as the instance comes online.

The AMI likely has AWS libraries preloaded, one of which is the ECS agent that works with the AWS fabric and reads the /etc/ecs/ecs.config file to figure out how to connect the instance to the cluster on boot or on a daemon service refresh.
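The registration mechanism is presumably the standard one: user data appends the cluster name to /etc/ecs/ecs.config and the ECS agent picks it up on boot. A minimal example of what that user data likely looks like:

#!/bin/bash
# Tell the ECS agent which cluster this instance should register with.
echo "ECS_CLUSTER=myslp-staging-cluster" >> /etc/ecs/ecs.config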

Tasks

These are the ECS equivalent of Docker Compose files, with added information about what type of container to create.

The task definition on the AWS Console for the configuration below is named slp_saas_staging:3 (as of Oct 31 2024). In addition to the environment variables noted below, an additional environment variable, WORDPRESS_DB_PASSWORD, is added when creating the task definitions via the console. This sets the password for the myslp_dashboard database (baked into the ECR image that is built with CodePipeline via the WORDPRESS_DB_NAME environment variable) with a user of myslp_genesis (also per the ECR image, in the WORDPRESS_DB_USER environment variable).

From the myslp_aws_ecs_kit repo AWS/ECS/tasks/slp_saas_staging.json

{
  "family": "slp_saas_staging",
  "requiresCompatibilities": ["EC2"],
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX",
    "cpuArchitecture": "ARM64"
  },
  "networkMode": "awsvpc",
  "cpu": "3 vCPU",
  "memory": "6 GB",
  "executionRoleArn": "arn:aws:iam::744590032041:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "slp_saas",
      "essential": true,
      "image": "744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:staging",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "environment": [
        {
          "name": "WP_HOSTURL",
          "value": "staging.storelocatorplus.com"
        },
        {
          "name": "WP_HOME",
          "value": "https://staging.storelocatorplus.com/"
        },
        {
          "name": "WP_SITEURL",
          "value": "https://staging.storelocatorplus.com/"
        },
        {
          "name": "WORDPRESS_DB_HOST",
          "value": "slp-staging-2023-aug-cluster-cluster.cluster-c0glwpjjxt7q.us-east-1.rds.amazonaws.com"
        },
        {
          "name": "WORDPRESS_DEBUG",
          "value": "true"
        },
        {
          "name": "WORDPRESS_CONFIG_EXTRA",
          "value": "define( 'WP_DEBUG_LOG', '/var/www/html/debug.log');define( 'WP_DEBUG_DISPLAY', true);define( 'WP_DEBUG_SCRIPT', true);@ini_set('display_errors',1);define('SUNRISE', true);defined('DOMAIN_CURRENT_SITE') || define('DOMAIN_CURRENT_SITE', getenv_docker('WP_HOSTURL', 'staging.storelocatorplus.com') );define('WP_ALLOW_MULTISITE', true );define('MULTISITE', true);define('SUBDOMAIN_INSTALL', false);define('PATH_CURRENT_SITE', '/');define('SITE_ID_CURRENT_SITE', 1);define('BLOG_ID_CURRENT_SITE', 1);if ( ! defined( 'WPMU_PLUGIN_DIR' ) ){define('WPMU_PLUGIN_DIR', dirname( __FILE__ ) . '/wp-content/mu-plugins' );}"
        }
      ]
    }
  ]
}
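If the task definition ever needs to be registered from the kit rather than the console, the CLI equivalent is a one-liner (remember to add WORDPRESS_DB_PASSWORD, which is deliberately not stored in the repo):

aws ecs register-task-definition --cli-input-json file://AWS/ECS/tasks/slp_saas_staging.json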

Services

Services run various parts of the application. For SLP in the initial Q4 2024 state there is only one service – the SLP SaaS web service.

The staging service that runs the SaaS staging task is at:
arn:aws:ecs:us-east-1:744590032041:service/myslp-staging-cluster/myslp-staging-service

The service is set to run the slp_saas_staging task in daemon mode. That means it will run one task per container instance (one per EC2 host in the cluster).

The service definition sets up the containers.
Container Image (on ECR): 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:staging

It also sets up the environment variables passed into the container.

Updating Staging SaaS Server

This process is for updating the staging Store Locator Plus® SaaS server running on an ECS cluster after performing code updates.

These processes require a locally installed copy of the MySLP AWS ECS Kit repository which can be found at ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/myslp_aws_ecs_kit.

Edit your code or deployment kit files first. This can include WordPress plugins or themes, the Docker Compose or image-builder commands, helper scripts, or various configuration files used to manage and deploy the Store Locator Plus® SaaS container images.

Once your updates are ready, make sure the WordPress stack is up-to-date by updating the submodule links, then commit all of your updates to the MySLP AWS ECS Kit repository on the proper branch. There are AWS CodePipeline services running that will monitor the repository for changes, build any images as needed, store them in ECR, and deploy them via the Elastic Container Service if possible. Details on the processes are noted below.

Update The Submodules

From the MySLP AWS ECS Kit git project root:

./tools/create_mustuseplugins_stubs.sh

Commit The Code Updates To The Repo

Commit any changes to the MySLP AWS ECS Kit repository.

When you push changes from your local develop, staging, or production branch, an AWS listener service will detect them and run a CodePipeline tied to services such as CodeBuild and ECS to deploy the final Store Locator Plus® SaaS container in the AWS cloud.

Commits that stay on local branches (not pushed) will not trigger a change in the AWS deployments.

Commits to any branch not specifically named develop, staging, or production will not trigger changes in the AWS cloud deployments.

CodePipeline Notes

The CodePipeline that is configured to deploy the staging containers is named myslp-webserver-staging-pipeline.

Stage: Source

The pipeline monitors the staging branch on the AWS CodeCommit repo for the MySLP AWS ECS Kit project at ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/myslp_aws_ecs_kit

Stage: Build

The source will be read from the URL above and a series of commands will be executed in the cloud to create a container image. This image is stored in the AWS Elastic Container Registry as a private image.

The Store Locator Plus® SaaS (internal name: MySLP) container images are stored in the 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64 docker image repository.

The staging branch will tag the latest build with the :staging tag.

Stage: Deploy

The deploy stage will execute if the build stage completes successfully. This stage will attempt to take the latest myslp2024-aarch64:staging ECR image and launch an active container in the AWS Elastic Container Service.

The deploy stage will attempt to launch a running container in the myslp-staging-cluster on the myslp-staging-service service within that cluster.

Manual Container Image Update

Build The Container Image and Store On AWS ECR

# Authenticate with AWS SSO, then log Docker in to the private ECR registry.
aws sso login --profile lance.cleveland

aws ecr get-login-password --region us-east-1 --profile lance.cleveland | docker login --username AWS --password-stdin 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64

# Build the ARM64 image from the kit's Docker/Images directory and push the :staging tag.
cd ./Docker/Images

docker build --platform=linux/arm64 -t 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:staging .
docker push 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:staging
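Pushing a new image under the same :staging tag does not restart anything by itself; the service keeps running the old image until it is redeployed. A forced deployment makes the service pull the freshly pushed tag:

# Force the staging service to re-pull the :staging image and cycle its tasks.
aws ecs update-service --cluster myslp-staging-cluster --service myslp-staging-service --force-new-deployment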

Email via WP Mail SMTP Pro

We use a 20-site pro license to manage email on the new server clusters via the WP Mail SMTP Pro service.

You will need to network-activate the plugin and enter the site license key to enable it.

Use the Amazon SES service to send email. To configure it you will need the Access Key and Secret Access Key. These are in Lance’s password manager app, or you can create a new identity under the storelocatorplus.com domain in SES and create a new key pair.

SaaS Development / ECS Error establishing a database connection

Check The Server URL

For local development using Docker Compose, the Docker/Composers/Secrets/docker-compose-rds-secrets.yml file will have the DB host URL.

For ECS deployments the URL is in the Task Definition being run by the service. After updating the task definition you will want to deploy it to the ECS service with Force Deployment to update the running service.
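The console's Force Deployment button maps to the same update-service call used above; pointing the service at a new task definition revision looks roughly like this (the revision number is illustrative):

aws ecs update-service --cluster myslp-staging-cluster --service myslp-staging-service --task-definition slp_saas_staging:5 --force-new-deployment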

Check The Port

Newer database servers do not listen on the default MySQL port 3306. Per the RDS database setup example, newer systems use port 3309 for the development database server.

Edit the Docker/Composers/Secrets/docker-compose-rds-secrets.yml file and add the port to the end of the WORDPRESS_DB_HOST value:
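The secrets file itself is not reproduced here; the port is appended to the host value along the lines of this hypothetical fragment (the service name and endpoint are placeholders):

# Docker/Composers/Secrets/docker-compose-rds-secrets.yml (illustrative values)
services:
  wordpress:
    environment:
      WORDPRESS_DB_HOST: "your-dev-endpoint.us-east-1.rds.amazonaws.com:3309"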

SaaS Dev Setup

Copy Production Database To Development

Create a copy of the production database to development on AWS RDS.

Update MySLP AWS ECS Kit RDS Secrets

This assumes you have a local copy of the AWS ECS Kit repo.

ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/myslp_aws_ecs_kit

Edit Docker/Composers/Secrets/docker-compose-rds-secrets.yml.

Change the WORDPRESS_DB_HOST to the new AWS RDS endpoint.

Update The Docker Image

Update The Submodules

First update the submodule data and loader files by opening a terminal window at the MySLP AWS ECS Kit root directory.

./tools/create_mustuseplugins_stubs.sh

When this finishes, commit the changes to this repo and push them to origin on AWS CodeCommit via git.

This will ensure the new submodule list is updated on the ECR Docker Image for the SLP SaaS product.

Build The ECR Image

Validate The ECR Login

aws sso login --profile lance.cleveland
aws ecr get-login-password --region us-east-1 --profile lance.cleveland | docker login --username AWS --password-stdin 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64

Build The Image

cd ./Docker/Images
docker build --platform=linux/arm64 -t 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:develop .

This image is built with a local wildcard certificate for *.storelocatorplus.com.

The domain names it can serve via Apache are defined in 000-default.conf which includes:
* local.storelocatorplus.com
* test.storelocatorplus.com
* dashbeta.storelocatorplus.com
* dashboard.storelocatorplus.com

Push The Image To AWS ECR

docker push 744590032041.dkr.ecr.us-east-1.amazonaws.com/myslp2024-aarch64:develop

Running Containers Locally

This kit not only builds the baseline Docker image for MySLP2024 on the ARM64 (aarch64) architecture, it also provides a mechanism for using that image to launch various test environments on your laptop via named projects running a WordPress and MySQL host.

Local execution is managed via the Docker Compose files in ./Docker/Composers; all commands should be executed from that directory. Start with this command:

cd ./Docker/Composers

MySLP2024 Baked In

All the code is baked into the myslp2024-aarch64 image.

Data is served from the AWS RDS Dev MySQL server.

docker compose -f docker-compose-myslp2024-core-dev.yml -f Secrets/docker-compose-rds-secrets.yml -p myslp2024_bakedin up -d

MySLP2024 Local Source

Ensures a copy of the WordPress 6.4.2 code is available for debug tracing in ./Guest/wordpress for inline debugging with xDebug.

Overwrites the SLP specific code files with locally mapped files mounted via Volumes.

Data is served from the AWS RDS Dev MySQL server.

docker compose -f docker-compose-myslp2024-core-dev.yml -f docker-compose-myslp2024-use-localsource.yml -f Secrets/docker-compose-rds-secrets.yml -p myslp2024_localsource up -d


Accessing The Build

Add an entry to /etc/hosts

127.0.0.1 localhost dev.storelocatorplus.com local.storelocatorplus.com localwp.storelocatorplus.com kubernetes.docker.internal

Surf to https://local.storelocatorplus.com/

Upgrade The Network

If the core WordPress database engine has been changed, you may need to log in as a super admin for the SaaS platform and upgrade the network to ensure all the blog sites (customer accounts) are updated to the latest data structures.
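The network upgrade lives at /wp-admin/network/upgrade.php in the dashboard. If WP-CLI happens to be available inside the container (an assumption, it is not part of the documented image), the same upgrade can presumably be run from a shell:

# WP-CLI equivalent of the network upgrade screen.
wp core update-db --network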

SmartOptions

Smart Options via the SLP_SmartOptions class (include/module/smartoptions/SLP_SmartOptions.php) handles nearly all of the settings (options in WordPress parlance) for the Store Locator Plus® plugin.

Methods

get_ValueOf( <property:string> )

Get the value of a Smart Option.

Check if a given property exists and return its value.
@param string $property The property to fetch.
@return mixed The current value of the named property.

is_true( <property:string> )

Check if a given property is true.
@param string $property The property to check.
@return bool Returns true if the property exists and is not null, and its ‘is_true’ property is true. Otherwise, returns false.

if ( SLP_SmartOptions::get_instance()->is_true( 'use_territory_bounds' ) ) {
    $this->add_territory_bounds();
}

Store Locator Plus® Staging (Prerelease) Updates 2024 Edition

We are moving away from manually executed grunt tasks via the wp-dev-kit repository and toward an AWS-implemented solution, leveraging AWS developer tools to get the job done.

The general process is that you work on code and merge it into the develop branch of the store-locator-plus repository on CodeCommit (the main upstream source). When that code is deemed “ready for testing” the repo manager will merge the develop branch into the staging branch for the repository. CodePipeline should see the change to the staging branch in the repo and fire off the build and deployment processes.
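The promotion step is an ordinary git merge; a sketch of what the repo manager likely runs, where the push is what CodePipeline watches for:

git checkout staging
git merge develop
git push origin staging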

CodeCommit

Now the official “source of truth” for the SLP code and build kit repositories.

ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/store-locator-plus

The “upstream” aws repo for Store Locator Plus®.

CodeBuild

The build manager that will compile, minify, clean, and otherwise manipulate the contents of the main SLP repo and prepare it for release: in this case, a staging release of the WordPress plugin .zip file.

CodePipeline

Watches the main SLP repo for changes on the staging branch and fires off the CodeBuild execution when that happens.

Our pipeline is the store-locator-plus-staging-release pipeline. It has 3 stages:

Source

The CodeCommit source noted above.

Build

The CodeBuild project noted above.

Note that the CodePipeline takes over the artifact generation output and will dump the final output artifact store-locator-plus.zip into a different CodePipeline specific bucket. This is NOT the same bucket that is specified in the CodeBuild configuration.

Deploy

This is a setting in CodePipeline, NOT a CodeDeploy project, that copies the output artifact (store-locator-plus.zip) from the private CodePipeline artifacts bucket over into the storelocatorplus/builds/staging bucket.
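Conceptually that Deploy action is just an S3 copy; the hand-run equivalent would look something like this, with the CodePipeline artifact bucket and key as placeholders:

aws s3 cp s3://codepipeline-ARTIFACT-BUCKET/SOME-KEY/store-locator-plus.zip s3://storelocatorplus/builds/staging/store-locator-plus.zip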

S3 Buckets

This process uses S3 buckets to store build and pipeline artifacts.

The deployment process stores the final output artifact, alongside some other SLP “goodies”, in a bucket exposed via a public web endpoint (HTTP, non-secure).

The AWS S3 SLP storage site is: http://storelocatorplus.s3-website-us-east-1.amazonaws.com/

This bucket contains an index.html that has been created “by hand” to link to the staging and production zip files.

Image by 🌼Christel🌼 from Pixabay