Before Kubernetes launched, it supported at most 25 nodes in a cluster. At 1.0, the target was 100. Meanwhile, Borg, Omega and Mesos were all running at around 10,000 nodes. What did it take to get Kubernetes to that number, and beyond? SIG Scalability and GKE Tech Lead Wojciech Tyczynski tells us.
Do you have something cool to share? Some questions? Let us know:
Chatter of the week
News of the week
Links from the interview
- Omega
- Defining scalability
- Original SLOs (see the p99 sketch after this list):
  - API-responsiveness: 99% of all our API calls return in less than 1 second
  - Pod startup time: 99% of pods (with pre-pulled images) start within 5 seconds
- Target SLO doc - 25 nodes
- Borg - ~10,000 nodes
- Sep 2015, Kubernetes 1.0 - 100 nodes
- March 2016, Kubernetes 1.2 - 1,000 nodes
- July 2016, Kubernetes 1.3 - 2,000 nodes
- March 2017, Kubernetes 1.6 - 5,000 nodes
- etcd v3 improvements for web scale
- Scalability Envelope
- Today’s scalability numbers
- EndpointSlices
- JD.com’s 10,000 node clusters
- Alibaba’s 10,000 node clusters
- Google’s 15,000 node GKE clusters
- Twitter session at the upcoming Google Cloud Next by Reza Motamedi and Maciek Różacki
- Poseidon and Firmament
- Wojciech Tyczynski:
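
The original SLOs above are percentile targets. As a rough illustration only (this is not how SIG Scalability measures them; real numbers come from apiserver and pod-startup metrics), here is a minimal Go sketch that checks a set of observed latencies against the 99th-percentile, sub-1-second API-responsiveness target. The `p99` helper and the sample data are hypothetical.

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// p99 returns the 99th-percentile latency using the nearest-rank method.
// Hypothetical helper, for illustration only.
func p99(latencies []time.Duration) time.Duration {
	sorted := append([]time.Duration(nil), latencies...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	if len(sorted) == 0 {
		return 0
	}
	rank := int(math.Ceil(0.99*float64(len(sorted)))) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	// Assumed sample data standing in for observed API call latencies;
	// real measurements would come from apiserver metrics, not a hard-coded slice.
	samples := []time.Duration{
		120 * time.Millisecond,
		250 * time.Millisecond,
		400 * time.Millisecond,
		700 * time.Millisecond,
		950 * time.Millisecond,
	}
	const slo = 1 * time.Second
	observed := p99(samples)
	fmt.Printf("p99 = %v, API-responsiveness SLO (<1s) met: %v\n", observed, observed < slo)
}
```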