
How do I deploy MatCraft to the cloud?

Category: Getting Started
Tags: cloud, deployment, aws, kubernetes

MatCraft can be deployed to any cloud provider. We provide Terraform modules and Helm charts for the most common platforms.

Amazon Web Services (AWS)

Our Terraform module sets up a production-ready deployment on AWS:

bash
cd deploy/terraform/aws

# Configure variables
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your settings

terraform init
terraform plan
terraform apply
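Before `terraform apply`, the tfvars file needs your settings. A minimal sketch of what that might look like — the variable names below are illustrative, not the module's actual ones; copy the real names from `terraform.tfvars.example`:

```shell
# Sketch of a terraform.tfvars -- variable names are hypothetical,
# use the ones from terraform.tfvars.example
cat > terraform.tfvars <<'EOF'
aws_region  = "us-east-1"
environment = "production"
db_multi_az = true
EOF
```

Keep this file out of version control if it ever contains anything sensitive.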

This provisions:

  • ECS Fargate for the API, frontend, and worker containers (serverless, no EC2 instances to manage)
  • RDS PostgreSQL with automated backups and Multi-AZ for high availability
  • ElastiCache Redis for the task queue
  • ALB with TLS termination using ACM certificates
  • CloudWatch for logging and monitoring

Estimated cost: ~$150-300/month for a small team deployment.

Google Cloud Platform

bash
cd deploy/terraform/gcp
terraform init && terraform apply

The GCP module uses Cloud Run for the containers, Cloud SQL for PostgreSQL, and Memorystore for Redis.

Kubernetes (Any Cloud)

For teams already running Kubernetes, we provide a Helm chart:

bash
helm repo add matcraft https://matcraft.ai/materials/scatter
helm install matcraft matcraft/matcraft \
  --namespace matcraft \
  --create-namespace \
  --set api.replicas=2 \
  --set worker.replicas=3 \
  --set postgresql.enabled=true \
  --set redis.enabled=true
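For anything beyond a quick trial, the `--set` flags are easier to manage as a values file that lives in version control. A sketch, assuming the chart exposes the same keys used above:

```shell
# Same settings as the --set flags above, kept as a values file instead
cat > matcraft-values.yaml <<'EOF'
api:
  replicas: 2
worker:
  replicas: 3
postgresql:
  enabled: true
redis:
  enabled: true
EOF

# Then: helm install matcraft matcraft/matcraft \
#   --namespace matcraft -f matcraft-values.yaml
```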

Key Configuration

Regardless of cloud provider, ensure you configure:

  • Database backups: Enable automated daily backups with at least 7 days retention.
  • TLS: All traffic should be encrypted. Use your cloud provider's certificate manager.
  • Secrets management: Store database credentials and API keys in your cloud's secrets manager (AWS Secrets Manager, GCP Secret Manager, etc.), not in environment variables or config files.
  • Monitoring: Set up alerts for API error rates, worker queue depth, and database connection pool usage.
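As one concrete shape for the secrets-management point: a container entrypoint can fetch credentials from the secrets manager at startup, so they never land in config files or task definitions. A sketch using the AWS CLI — the secret id `matcraft/db` and the `DB_PASSWORD` variable name are illustrative, not MatCraft's actual configuration:

```shell
# Sketch of a container entrypoint that pulls the DB password from
# AWS Secrets Manager at startup. The secret id is hypothetical.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
DB_PASSWORD="$(aws secretsmanager get-secret-value \
  --secret-id matcraft/db \
  --query SecretString \
  --output text)"
export DB_PASSWORD
exec "$@"
EOF
chmod +x entrypoint.sh
```

The task role only needs `secretsmanager:GetSecretValue` on that one secret, which keeps the blast radius of a leaked role small.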

Scaling

MatCraft scales horizontally. The API servers are stateless and can be load-balanced. Workers can be scaled independently based on campaign workload. The database is typically the bottleneck — use read replicas for dashboard queries if needed.
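On the Helm deployment above, scaling workers independently of the API is a one-line change. A sketch, assuming the release and chart names used earlier; the replica count is just an example:

```shell
# Scale workers up for a heavy campaign without touching the API
helm upgrade matcraft matcraft/matcraft \
  --namespace matcraft \
  --reuse-values \
  --set worker.replicas=6
```

`--reuse-values` keeps every other setting from the previous release, so only the worker count changes.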
