Engineering Manager · TIDAL (Block Inc.)
Edge Compute for Low-Latency Streaming
Delivered a multi-CDN edge compute platform that improved uptime by 8%, cut CDN costs by 12%, and lifted user satisfaction by 70%.
Background
TIDAL’s growth demanded predictable performance in every region, yet our legacy stack tied us to a single CDN and left only coarse-grained failover levers.
My Role
As Engineering Manager I owned the program charter, partnered with product leadership to define performance targets, and coordinated the multi-team squad that delivered routing software, Terraform modules, and observability hooks. I cleared blockers across network, SRE, and partner CDNs while mentoring lead engineers on architecture trade-offs.
Execution
- Designed an edge compute architecture that abstracts Fastly, Cloudflare, and CloudFront behind a policy-driven routing layer (sketched in the first example after this list).
- Collaborated with SRE, networking, and product teams to codify latency SLOs and observability requirements for each geography.
- Automated provisioning through Terraform and custom APIs so new services could publish cache rules, security headers, and routing intents via self-service workflows (the second sketch below shows the shape of such a manifest).
- Fed real-time analytics back into the observability stack, enabling automatic failover when jitter or saturation thresholds were breached (see the final sketch below).
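A minimal sketch of what a policy-driven routing layer over multiple CDN providers can look like in Go. The `Provider` interface, `Policy` shape, and selection rules here are illustrative assumptions, not TIDAL's internal API.

```go
// Sketch of a policy-driven CDN routing layer (illustrative, not the
// production implementation).
package edgerouting

import "errors"

// Provider abstracts a single CDN (e.g. Fastly, Cloudflare, CloudFront).
type Provider interface {
	Name() string
	Healthy(region string) bool
	LatencyMillis(region string) float64
}

// Policy decides how traffic in a given region is routed.
type Policy struct {
	MaxLatencyMillis float64  // skip providers slower than this budget
	Preferred        []string // ordered provider preference
}

// Router picks a provider for a region according to the active policy.
type Router struct {
	Providers []Provider
	Policy    Policy
}

// Pick returns the first preferred provider that is healthy and within the
// latency budget, falling back to any healthy provider.
func (r *Router) Pick(region string) (Provider, error) {
	byName := make(map[string]Provider, len(r.Providers))
	for _, p := range r.Providers {
		byName[p.Name()] = p
	}
	for _, name := range r.Policy.Preferred {
		if p, ok := byName[name]; ok && p.Healthy(region) &&
			p.LatencyMillis(region) <= r.Policy.MaxLatencyMillis {
			return p, nil
		}
	}
	for _, p := range r.Providers {
		if p.Healthy(region) {
			return p, nil
		}
	}
	return nil, errors.New("no healthy CDN provider for region " + region)
}
```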
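The self-service workflow accepted declarative inputs from service teams; the manifest below sketches what such an input could look like. The field names and validation rules are hypothetical, not the actual schema fed into the Terraform modules.

```go
// Sketch of a declarative manifest a service team might submit through the
// self-service workflow. Field names are hypothetical.
package edgerouting

import "fmt"

// EdgeManifest captures cache rules, security headers, and routing intent
// for one service.
type EdgeManifest struct {
	Service         string            `json:"service"`
	CacheTTLSeconds int               `json:"cache_ttl_seconds"`
	SecurityHeaders map[string]string `json:"security_headers"`
	PreferredCDNs   []string          `json:"preferred_cdns"`
}

// Validate enforces basic guardrails before the manifest is translated into
// Terraform variables and CDN configuration.
func (m EdgeManifest) Validate() error {
	if m.Service == "" {
		return fmt.Errorf("service name is required")
	}
	if m.CacheTTLSeconds < 0 {
		return fmt.Errorf("cache TTL must be non-negative, got %d", m.CacheTTLSeconds)
	}
	if len(m.PreferredCDNs) == 0 {
		return fmt.Errorf("at least one preferred CDN is required")
	}
	return nil
}
```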
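Finally, a sketch of the kind of threshold check that could drive automatic failover from the real-time analytics feed. The metric fields and thresholds are assumptions for illustration, not the production values.

```go
// Sketch of a threshold check driving automatic failover when edge
// telemetry breaches jitter or saturation limits (illustrative).
package edgerouting

// EdgeMetrics is a point-in-time snapshot for one provider in one region,
// as reported by the observability stack.
type EdgeMetrics struct {
	Provider      string
	Region        string
	JitterMillis  float64 // variance in round-trip latency
	SaturationPct float64 // utilisation of the edge PoP, 0-100
}

// FailoverThresholds define when traffic should be shifted away.
type FailoverThresholds struct {
	MaxJitterMillis  float64
	MaxSaturationPct float64
}

// ShouldFailOver reports whether the snapshot breaches either threshold.
func ShouldFailOver(m EdgeMetrics, t FailoverThresholds) bool {
	return m.JitterMillis > t.MaxJitterMillis || m.SaturationPct > t.MaxSaturationPct
}
```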
Results
- Streaming sessions now anchor to the closest healthy edge, improving perceived responsiveness and overall uptime.
- Multi-provider leverage cut CDN costs 12% while reducing dependency risk.
- Product teams can roll out features without waiting on manual CDN configuration, accelerating experimentation and reinforcing SRE best practices.
Technologies & Tools
Fastly
Cloudflare
CloudFront
Terraform
Datadog
Docker
AWS ECS
Python
Go
AWS Secrets Manager
AWS