256KB. That was the ceiling for every event in your serverless stack.
Over 6 months, AWS quietly 4x'd it:
- August 2025: SQS to 1MB
- October 2025: Lambda async invocations to 1MB
- January 2026: EventBridge to 1MB
The entire event pipeline now supports 1MB end to end. Each announcement was small on its own. Together, they kill the claim-check pattern for anything under 1MB.
On a serverless platform, I built S3 claim-check logic to pass documents between services. Extra S3 reads, extra latency, extra failure points. All because payloads were capped at 256KB.
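For anyone who hasn't had to build it, here's a minimal sketch of the pattern in Python with boto3. The bucket and queue names are hypothetical, and the size check mirrors the old 256KB SQS limit:

```python
# Claim-check pattern: stash oversized payloads in S3,
# send only a pointer through the size-limited queue.
import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "claim-check-payloads"  # hypothetical bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/docs"  # hypothetical queue

OLD_LIMIT = 256 * 1024  # the former SQS max message size

def send_document(doc: dict) -> None:
    body = json.dumps(doc)
    if len(body.encode("utf-8")) <= OLD_LIMIT:
        # Small enough: send inline.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        return
    # Too big: write the payload to S3, send the claim check instead.
    key = f"payloads/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"claim_check": {"bucket": BUCKET, "key": key}}),
    )

def receive_document(message_body: str) -> dict:
    # Consumer pays an extra S3 round trip for every large payload.
    envelope = json.loads(message_body)
    if "claim_check" in envelope:
        ref = envelope["claim_check"]
        obj = s3.get_object(Bucket=ref["bucket"], Key=ref["key"])
        return json.loads(obj["Body"].read())
    return envelope
```

Every consumer pays an extra GetObject round trip, and a failed S3 read is a failure mode the queue alone never had. With a 1MB ceiling, most of those branches simply never fire.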
This is bigger than it looks. Structured JSON from LLM calls regularly exceeds 256KB. If you're running AI workloads on a serverless backend, the friction between your model outputs and your event bus just disappeared.
What's the ugliest workaround you've built for a platform limitation that later got fixed? Did you rip it out or leave it?