OptionOfT 7 hours ago [-]
I tried to read the article, and had to go back and forth looking up terms and so on, because I'm not _that_ familiar with the space. But previously I could understand Cloudflare's blog posts.
This one just feels... off. The buildup just doesn't feel right.
The fact that there is an Em Dash (sorry...) in the code tells me that it's at least AI assisted, which explains the vibe the article emanates.
And once I finally made it to the end I read the following:
> If you're interested in congestion control, transport protocols, or contributing to open-source networking code, check out the quiche repository. We're always looking for talented engineers who love digging into problems like these, please explore our open positions.
You don't add that to your blog-posts 5 days after laying off 20% of the company, regardless of whether they're sales people or engineers. If you want to add it, delay the post by 2 weeks.
Equally, there is only 1 role open in Engineering, and it's an intern role, posted yesterday:
https://www.cloudflare.com/careers/ (filter by Engineering).
Did they lay off their PR team as well?
> it's at least AI assisted, which explains the vibe the article emanates
You're being too kind -- "The [b]epoch[b] is the reference timestamp CUBIC". Weird style to have random bold words. It's a blog post for the sake of it - no real takeaway. Well, there is a takeaway section, but it's just a summary of the article instead.
neuralkoi 17 hours ago [-]
I can see why they rewrote QUIC in Rust and for use in userspace, though going the in-house approach would warrant keeping an eye on the relevant kernel commits like a hawk to avoid missing bug fixes like these. These in-house implementations tend to have less eyeballs than the kernel.
I found it interesting that Cloudflare is not yet using BBR as the default in quiche. CUBIC's recovery in this day and age, and especially in datacenters with large pipes, seems so slooow to me. Almost two seconds with no loss whatsoever till achieving BDP again and then shooting itself in the foot every time it hits the ceiling. Each one of those losses a retransmission.
vasilvv 12 hours ago [-]
> though going the in-house approach would warrant keeping an eye on the relevant kernel commits like a hawk to avoid missing bug fixes like these. These in-house implementations tend to have less eyeballs than the kernel.
This is somewhat funny to read because this specific issue in CUBIC (sudden CWND jump upon exiting quiescence) was originally discovered in Google's QUIC library and then later reported to the team working on the TCP stack. I know this because I was the one who found that bug back in 2015.
That said, congestion control algorithms are really prone to logic bugs, and very subtle changes in the algorithm can often lead to dramatically different outcomes. Because of that, there's a lot of value in running congestion control code that has been tested on a wide variety of real Internet traffic.
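To make that failure mode concrete, here is a toy sketch (hypothetical code, not quiche's or the kernel's actual implementation) of how growing the window as a cubic function of wall-clock time since the last congestion event misbehaves after quiescence. The constants follow the RFC 8312 defaults; the "fix" shown is only the general idea of discounting idle time.

```python
# Toy model of CUBIC's window curve (RFC 8312), illustrating the
# quiescence bug described above. Hypothetical code, not taken from
# quiche or the Linux kernel.

C = 0.4      # CUBIC growth constant (RFC 8312 default)
BETA = 0.7   # multiplicative decrease factor

def cubic_window(t, w_max):
    """Target window (in packets) t seconds after a congestion event."""
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)  # time to regain w_max
    return C * (t - k) ** 3 + w_max

w_max = 100.0  # window at the last loss event

active = cubic_window(1.0, w_max)          # 1 s of active sending: modest
stale = cubic_window(60.0, w_max)          # epoch not shifted across a
                                           # 59 s idle gap: the curve has
                                           # raced to tens of thousands
                                           # of packets
fixed = cubic_window(60.0 - 59.0, w_max)   # idle time discounted: the
                                           # curve resumes where it was
print(active, stale, fixed)
```

The jump from `active` to `stale` is the sudden CWND spike on exiting quiescence; discounting the idle period keeps the curve continuous.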
otterley 9 hours ago [-]
Would formal validation of these algorithms (e.g. with TLA+) help avoid such bugs?
kedihacker 8 hours ago [-]
I think an audited algorithm, where each type is strictly defined (like int32) and exactly what should be fed into it is spelled out, would really help it remain correct.
masklinn 14 hours ago [-]
> I can see why they rewrote QUIC in Rust and for use in userspace
As far as I know, while they might have either way, they did not ("rewrite QUIC [...] for use in userspace"): the Linux kernel implementation only landed in late 2025. Quiche was started ca. 2018 (that's when Cloudflare started beta-deploying QUIC; the first public alpha of quiche was January 2019).
I don't know that there even was an in-kernel implementation of QUIC before msquic.sys, which I believe first shipped in Server 2022 circa mid-2021 (and is used as the implementation backend by MsQuic on Server 2022 and Windows 11).
benmmurphy 8 hours ago [-]
I think the original commenter conflated this with taking the CUBIC implementation from the kernel and rewriting it in Rust for use in their QUIC implementation, or they just jumbled their wording. It does make sense to use an existing battle-tested implementation of a congestion algorithm, because there are potentially many real-world failure modes that you might not anticipate if you try to write an implementation from scratch.
neuralkoi 7 hours ago [-]
Yes, I meant the CUBIC implementation! But I'm glad I made the mistake; I learned some interesting things from the responses above.
rslashuser 9 hours ago [-]
What jumps out to me is that this is a success story of using a non-trivial test to illuminate an important but hard-to-observe bit of the algorithm. I appreciate the engineering grit to put in a complex test like this, and to follow it up when the graph does not have the expected shape.
Imagine your team does not want to write a test because it's too much work or hard to model - this is a great example to bring up.
lproven 12 hours ago [-]
The article uses the term "CCAs" without ever defining it. I followed the links, and googled it, with no useful result.
What is a CCA in this context?
gavinsyancey 12 hours ago [-]
a Congestion Control Algorithm -- which uses various signals (mostly dropped packets) to try to estimate the available bandwidth and avoid network congestion.
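As a minimal illustration (a generic AIMD sketch in the style of Reno, not any particular production implementation), the core loop of a loss-based CCA is just: grow the congestion window while ACKs arrive, and cut it when a drop signals congestion.

```python
# Generic AIMD (additive-increase / multiplicative-decrease) sketch of
# a loss-based congestion controller. Illustrative only.

def on_ack(cwnd):
    # Additive increase: no loss observed, so probe for spare bandwidth.
    return cwnd + 1

def on_loss(cwnd):
    # Multiplicative decrease: a drop suggests the bottleneck queue
    # overflowed, so back off sharply (Reno halves; CUBIC uses 0.7).
    return max(1, cwnd // 2)

cwnd = 10
for _ in range(5):
    cwnd = on_ack(cwnd)   # smooth sailing: keep growing
print(cwnd)               # 15
cwnd = on_loss(cwnd)      # drop detected: back off
print(cwnd)               # 7
```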
lproven 11 hours ago [-]
Thanks! And to @einsteinx2 and @rp8yxmdmr too.
Rp8yXmdmr 11 hours ago [-]
There are so many overlapping TLAs that we should have moved to four letters a long time ago.
lproven 10 hours ago [-]
Twas ever thus.
There was the proposed eTLA namespace extension...
https://www.catb.org/jargon/html/T/TLA.html
After some searching, it apparently means “congestion control algorithm”. It definitely should have been defined in the article, especially since they have a whole section dedicated to explaining what it is.
Rp8yXmdmr 12 hours ago [-]
Congestion Control Algorithms
echoangle 14 hours ago [-]
Looking at the last plot, it seems like the backoff is roughly 1/5 of the total bandwidth and it happens every 50 ms or so. Wouldn't it make sense to reduce the backoff and the growth speed if a backoff occurs repeatedly in rapid succession? We want to maximize the area under the curve (transmitted packets), right?
neuralkoi 7 hours ago [-]
As per the article, CCAs aim to maximize data transfer by inferring the "available bandwidth" of the network. CUBIC relies primarily on packet loss as a congestion signal. For recovery, CUBIC's window size is a cubic function of time since the last congestion event.
After the initial packet loss triggered purposefully during the first two seconds of this experiment, the only thing that could cause loss is the network queue (e.g. a simple tail drop, fq-codel, etc.), which overflows when packets arrive faster than the link can drain them. At this point the link is saturated. The loss becomes a signal for CUBIC to reduce its window. This causes the oscillations you pointed out.
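Those oscillations can be reproduced with a toy simulation (hypothetical numbers, and linear growth for simplicity rather than CUBIC's actual cubic curve): every time the window overshoots what the path can hold, a tail drop triggers a multiplicative backoff, producing the sawtooth.

```python
# Toy sawtooth: cwnd repeatedly overshoots the path's capacity
# (BDP + queue), takes a tail drop, and backs off. Illustrative only;
# the growth here is linear, not CUBIC's cubic curve.

CAPACITY = 100   # packets the path can hold (assumed BDP + queue depth)
BETA = 0.7       # CUBIC's multiplicative decrease factor

cwnd = 70.0
peaks = []                   # window size at each loss event
for _ in range(300):
    cwnd += 1.0              # growth while no loss is seen
    if cwnd > CAPACITY:      # queue overflows: tail drop
        peaks.append(cwnd)
        cwnd *= BETA         # back off and start climbing again

# Many backoff events in a short run: the steady-state sawtooth, where
# each peak costs a retransmission.
print(len(peaks))
```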
Unlike CUBIC, BBR [0] uses a model-based approach that estimates the available bandwidth and leaves some headroom, kind of like you suggest, to achieve higher throughput, and it doesn't react as aggressively to loss as CUBIC does.
[0] https://datatracker.ietf.org/meeting/104/materials/slides-10...
Is it just me, or do the article structure and subtitles feel very AI?
yuye 16 hours ago [-]
The first half wasn't too bad, but the AI tells get strong in the second half.
philipwhiuk 13 hours ago [-]
The tell I always spot is its propensity to bold random words, frankly.
mrguyorama 5 hours ago [-]
The heading content and structure is the biggest tell IMO. Even shitty high school kids don't write like that.
I don't understand where or how AI picked up that habit, because it's self-evidently terrible. It makes it clear how low-signal AI-based writing is. The writing is like the music in shitty blockbusters: engineered to make you feel, rather than to actually structure the content or provide meaningful sections.
Compare this writeup to the Pixter writeup, where sections feel natural and not "scripted" like this.
bonzini 16 hours ago [-]
Yes, and it becomes unbearable after a while.
twoodfin 9 hours ago [-]
I don’t get it. Unlike a lot of the technical article slop that is posted here, this obviously had a lot of human thought and effort put into the prompt.
The LLM pass (unsurprisingly) made it worse.
For example:
> The results were conclusive: 100% pass rate, showing Reno recovered cleanly after the loss phase, and revealing that this is a CUBIC-related bug.
Look, I’m reading a description of a Linux kernel network congestion bug. I don’t need the hand-holding.
bonzini 7 hours ago [-]
Yeah, you aren't selling anything. "Reno has a 100% pass rate for recovering cleanly after the loss phase, so the bug is almost certainly related to CUBIC" is a perfectly fine technical text.
twoodfin 4 hours ago [-]
Also, the same event both “showing” and “revealing” two different things is just bad writing.
blahgeek 21 hours ago [-]
The more precise title should be: How we copied code from the Linux kernel without fully understanding it and missed its follow-up fixes, and now it bites us
embedding-shape 13 hours ago [-]
Also, not a single takeaway about how to prevent that very preventable issue in the first place, as you allude to.
I wonder what happened to the very hardcore engineering that used to happen at Cloudflare and get published? Almost every blog post today seems to expose some weirdness at Cloudflare, rather than highlighting excellence in engineering. What changed? It's been slowly changing over the years; did they change their hiring practices or something?
rslashuser 7 hours ago [-]
The test is the hardcore engineering tell here. The test is dialed in on the key area, and when the graph wasn't coming out the right shape, they kept at it. Plus one from me!