Building on AWS: From First S3 Bucket to Full-Stack Serverless

March 1, 2024

AWS infrastructure evolution from static hosting and backups to serverless applications with Lambda, API Gateway, DynamoDB, and edge computing

Where It Started

As IT Manager at a distribution company, I moved the first workloads to AWS: S3 for static hosting and off-site backups, CloudFront for content delivery, CloudWatch for monitoring. Basic but effective: it replaced single-site risk with reliable cloud storage and cut page load times for a geographically distributed customer base.

Initial stack: S3, CloudFront, IAM, CloudWatch, lifecycle policies for cost control, and automated backup uploads from on-premises systems. Restore procedures were tested and documented.
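A lifecycle policy like the one described can be expressed as the payload that boto3's `put_bucket_lifecycle_configuration` accepts. The prefix, transition windows, and bucket name below are illustrative, not the original values:

```python
def backup_lifecycle_config() -> dict:
    """Illustrative S3 lifecycle configuration: move backups to
    infrequent access after 30 days, Glacier after 90, and expire
    them after a year. Timings here are hypothetical."""
    return {
        "Rules": [
            {
                "ID": "backup-cost-control",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    }

# Applied (against a hypothetical bucket) with:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-backup-bucket",
#       LifecycleConfiguration=backup_lifecycle_config(),
#   )
```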

That was the foundation. The scope has expanded significantly since.

What It Looks Like Now

The infrastructure behind glyph.sh is a good example of where things stand today. It’s a fully serverless architecture on AWS:

Compute & API:

  • Lambda functions handling backend logic
  • API Gateway for RESTful endpoints
  • Lambda@Edge for request/response manipulation at the CDN layer
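The Lambda-behind-API-Gateway pattern boils down to a handler that receives a proxy-integration event and returns a status code, headers, and a string body. A minimal sketch; the `/status` route is hypothetical, not an actual glyph.sh endpoint:

```python
import json

def handler(event: dict, context: object) -> dict:
    """Minimal API Gateway proxy-integration handler.

    API Gateway passes the request as a dict (method, path, headers,
    body) and expects a dict with statusCode, headers, and a string
    body in return.
    """
    path = event.get("path", "/")
    if path == "/status":
        code, body = 200, {"ok": True}
    else:
        code, body = 404, {"error": "not found"}
    return {
        "statusCode": code,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```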

Storage & Data:

  • S3 for static assets and hosting
  • DynamoDB for structured data
  • CloudFront for global content delivery with custom cache behaviors
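One thing worth knowing about DynamoDB's low-level API is that every attribute is typed: `PutItem` wants `{"S": ...}`, `{"N": ...}`, `{"BOOL": ...}` wrappers rather than plain values. A small serializer for flat records, with a hypothetical table and keys in the usage note:

```python
def to_dynamodb_item(record: dict) -> dict:
    """Serialize a flat Python record into DynamoDB's typed
    attribute-value format, as the low-level PutItem API expects.
    Only strings, numbers, and booleans are handled here."""
    item = {}
    for key, value in record.items():
        if isinstance(value, bool):        # bool first: bool is an int subclass
            item[key] = {"BOOL": value}
        elif isinstance(value, (int, float)):
            item[key] = {"N": str(value)}  # numbers travel as strings
        else:
            item[key] = {"S": str(value)}
    return item

# Hypothetical table and record, written with:
#   import boto3
#   boto3.client("dynamodb").put_item(
#       TableName="pages",
#       Item=to_dynamodb_item({"pk": "page#home", "views": 42}))
```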

Networking & DNS:

  • Route53 for DNS management
  • ACM for SSL/TLS certificates
  • WAF rules protecting API endpoints and blocking malicious traffic
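A common WAF rule for protecting API endpoints is a rate-based block. Sketched below as the rule structure WAFv2's web ACL APIs take; the rule name, metric name, and limit are illustrative, not the rules actually deployed:

```python
def rate_limit_rule(limit: int = 2000) -> dict:
    """One WAFv2 rule: block source IPs that exceed `limit` requests
    in a five-minute window. All names here are hypothetical."""
    return {
        "Name": "rate-limit-api",
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {"Limit": limit, "AggregateKeyType": "IP"}
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitApi",
        },
    }
```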

Monitoring & Cost:

  • CloudWatch dashboards, alarms, and log aggregation
  • Budget alerts and cost allocation tags
  • Infrastructure defined in code for repeatable deployments
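Budget alerts of the kind listed above can be set up via the AWS Budgets `CreateBudget` call. A sketch of the payload, with a hypothetical amount and notification address:

```python
def monthly_budget_alert(amount: str = "25",
                         email: str = "ops@example.com") -> dict:
    """Payload for the Budgets CreateBudget API: a monthly cost budget
    that emails when actual spend passes 80% of the limit. The amount
    and address are illustrative."""
    return {
        "Budget": {
            "BudgetName": "serverless-stack-monthly",
            "BudgetLimit": {"Amount": amount, "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": email}
                ],
            }
        ],
    }
```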

Results

  • Sub-second API response times through Lambda and API Gateway
  • Global content delivery via CloudFront with edge caching
  • Monthly cost around $15-20 for the full stack, including security services (GuardDuty, Security Hub, CloudTrail). Serverless means paying only for what runs, and security monitoring accounts for most of the bill
  • 99.9%+ uptime without managing a single server
  • WAF blocking malicious requests before they reach application logic
  • Zero-downtime deployments through infrastructure automation

The Progression

The jump from “S3 bucket for backups” to “serverless application with edge computing” didn’t happen overnight. Each project added services and forced deeper understanding of how AWS pieces fit together. Writing IAM policies that actually follow least privilege. Configuring DynamoDB capacity modes for cost vs. performance tradeoffs. Getting Lambda@Edge functions to behave correctly in CloudFront’s execution model.
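Least privilege in practice means scoping each function's role to exactly the actions and resources it touches. A sketch of such a policy for a single Lambda; the ARNs in the test call and the exact action list are hypothetical:

```python
import json

def function_policy(table_arn: str, log_group_arn: str) -> str:
    """Least-privilege IAM policy for one Lambda: read/write a single
    DynamoDB table and write to its own log group, nothing else.
    Callers supply the real ARNs."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                           "dynamodb:Query"],
                "Resource": table_arn,
            },
            {
                "Effect": "Allow",
                "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                "Resource": log_group_arn + ":*",
            },
        ],
    })
```

The point is the shape, not the specifics: no wildcards on `Action` or `Resource`, one statement per concern, and every new permission added deliberately when a function actually needs it.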

Every service I’ve deployed is one I’ve configured, debugged, and run in production.