> Maintainability and understandability only show up when you're deliberate about them. Extracting meaning into well-named functions is how you practice that. Code aesthetics are a feature and they affect team and agentic coding performance, just not the kind you measure in the runtime.
> And be warned: some will resist this and surrender to the convenience of their current mental context, betting they'll "remember" how they did it. Time will make that bet age badly. It's 2026 - other AI agents are already in execution loops, disciplined to code better than that."
Hard disagree: separating code from its context is exactly how you end up in the situation of needing to "remember". Yes, helper functions and such can be useful for readability, but it's easy to overdo it and end up with incomprehensible ravioli code that does nothing terribly complicated in a terribly complicated manner.
The worst version of this I've seen is when every layer is like four lines long. You step into a function expecting some logic and it's just calling another function with slightly different args. Do that six times and you've forgotten what the original call was even trying to do. Naming helps in theory, but in practice half those intermediate functions end up with names like `processInner` or `handleCore` because there's nothing meaningful to call them.
I think I agree with what you're getting at, though I usually phrase it differently: indirection is not abstraction. A good abstraction makes it easier to understand what the code is doing by letting you focus on the important details and ignore the noise. It does this by giving you tools that match your problem space, whatever it may be. This will necessarily involve some amount of indirection when you switch semantic levels, but that's very different from constantly being told "look over there" when you're trying to figure out what the code is saying.
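A minimal sketch of the distinction, with entirely hypothetical names: the first pair of functions adds a hop without changing what the reader has to think about, while the second gives the caller a vocabulary from the problem space.

```rust
// Pure indirection: an extra hop that stays at the same semantic level.
fn save_name_inner(name: &str, db: &mut Vec<String>) {
    db.push(name.to_string());
}
fn save_name(name: &str, db: &mut Vec<String>) {
    save_name_inner(name, db) // just "look over there"
}

// Abstraction: callers now think in the problem space ("a roster"),
// and the storage detail is genuinely out of view.
struct Roster {
    names: Vec<String>,
}

impl Roster {
    fn new() -> Self {
        Roster { names: Vec::new() }
    }
    fn enroll(&mut self, name: &str) {
        self.names.push(name.to_string());
    }
    fn is_enrolled(&self, name: &str) -> bool {
        self.names.iter().any(|n| n == name)
    }
}

fn main() {
    let mut db = Vec::new();
    save_name("ada", &mut db); // same effect as push, one hop deeper

    let mut roster = Roster::new();
    roster.enroll("ada");
    assert!(roster.is_enrolled("ada"));
    assert!(!roster.is_enrolled("bob"));
}
```

Both involve a function call; only the second changes semantic level, which is the difference the comment is pointing at.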
Agree, and I would add that a bad abstraction, the wrong abstraction for the problem, and/or an abstraction misused is far worse than no abstraction. That was bugging me in another thread earlier today: <https://news.ycombinator.com/item?id=47350533>
Someone has worked too much on corporate Java codebases.
I feel your pain. Everything is so convoluted that 7 layers down you ask yourself why you didn't learn anything useful...
Last time was a go shop, and let me tell you: that style mixes with go's error handling like spoiled milk and blended shit.
Oh gee, thank you for this wrapped error result, let me try to solve a logic puzzle to see (a) where the hell it actually came from, and (b) how the hell we got there.
I'm familiar with spaghetti code and with lasagna code (too many layers), but I'm curious: what's ravioli code?
Each part of the codebase is a separate self contained module with its own wrapping (boilerplate), except there's like 30 of them and you still have to understand everything as a whole to understand the behaviour of the system anyway.
Think of what ravioli are and apply the same code analogy as spaghetti or lasagna: the code is split into tiny units, and that creates too much indirection, a different kind of indirection than spaghetti or lasagna. The architecture feels fragmented even though there's nothing wrong with each individual piece.
a ravioli is a b̶l̶a̶c̶k̶ beige box abstraction to which you pasta arguments interface usually after forking
One of the unwritten takeaways of this post is that async/await is a leaky abstraction. It's supposed to allow you to write non-blocking I/O as if it were blocking I/O, and make asynchronous code resemble synchronous code. However, the cost model is different because async/await compiles down to a state machine instead of a simple call and return. The programmer needs to understand this implementation detail instead of pretending that async functions work the same way as sync functions. According to Joel Spolsky, all non-trivial abstractions are leaky, and async/await is no different. [0]
The article mixes together two distinct points in a rather muddled way. The first is a standard "premature optimization is the root of all evil" message, reminding us to profile the code before optimizing. The second is a reminder that async functions compile down to a state machine, so the optimization reasoning for sync functions doesn't apply.
[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...
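The state-machine point is observable without even running an executor: a Rust `async fn` returns a future value whose size reflects the state it must keep alive across `.await` points. A small sketch (the function names are made up for illustration):

```rust
use std::mem::size_of_val;

async fn nothing() {}

async fn holds_buffer() {
    let buf = [0u8; 256]; // must survive across the await point
    nothing().await;
    std::hint::black_box(&buf); // force buf to be live after the await
}

fn main() {
    // Calling an async fn just constructs the state machine; nothing runs.
    let small = nothing();
    let big = holds_buffer();

    // The 256-byte buffer is stored inside the future itself.
    assert!(size_of_val(&big) > size_of_val(&small));
    println!("{} vs {} bytes", size_of_val(&small), size_of_val(&big));
}
```

This is exactly the cost-model difference the comment describes: a sync call keeps `buf` on the stack, while the async version bakes it into a value that may be moved, boxed, or stored.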
I think this long post is saying that if you are afraid that moving code behind a function call will slow it down, you can look at the machine code and run a benchmark to convince yourself that it is fine?
I think it's making a case that normally you shouldn't even bother benchmarking it, unless you know that it's in a critical hot path.
I must add that code is on the hot path only under one of two conditions:
- the application is profiled well enough to prove that some piece of code is on the hot path
- the developers are not doing a great job
This long post is demonstrating that Knuth's advice, "premature optimization is the root of all evil," is still one of the first heuristics you should apply.
The article describes a couple of straw men and even claims that they're right in principle:
> Then someone on the team raises an eyebrow. "Isn't that an extra function call? Indirection has a cost." Another member quickly nods.
> They're not wrong in principle.
But they are wrong in principle. There's no excuse for this sort of misinformation. Anyone perpetuating it, including the blog author, clearly has no computer science education and shouldn't be listened to, and should probably be sent to a reeducation camp somewhere to learn the basics of their profession.
Perhaps they don't understand what a compiler does, I don't know, but whatever it is, they need to be broken down and rebuilt from the ground up.
We have been able to automatically inline functions for a few decades now. You can even override inlining decisions manually, though that's usually a bad idea unless you're carefully profiling.
Also, it's pointer indirection in data structures that kills you, because uncached memory is brutally slow. Function calls to functions in the cache are normally a much smaller concern except for tiny functions in very hot loops.
I'm not sure Rust's `async fn` desugaring (which involves a data structure for the state machine) is inlineable. (To be precise: maybe the desugared function can be inlined, but LLVM isn't allowed to change the data structure, so there may be extra setup costs, duplicate `Waker`s, etc.) It's probably true that there is a performance cost. But I agree with the article's point that it's generally insignificant.
For non-async fns, the article already made this point:
> In release mode, with optimizations enabled, the compiler will often inline small extracted functions automatically. The two versions, inline and extracted, can produce identical assembly.
I am fairly doubtful that it makes sense to be using async function calls (or waits) inside of a hot loop in Rust. Pretty much anything you'd do with async in Rust is too expensive to be done in a genuinely hot loop where function call overhead would actually matter.
Also note that the `#[inline]` directive is only a hint; the compiler can decide to ignore it (even `#[inline(always)]`, if I remember correctly).
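For reference, the attributes in question look like this; they only influence codegen, never behavior, so two copies of the same function with opposite hints are interchangeable (function names here are made up):

```rust
// A strong hint to inline; rustc/LLVM may still decline.
#[inline(always)]
fn add_hinted(a: u64, b: u64) -> u64 {
    a + b
}

// A request never to inline, typically for profiling or code size.
#[inline(never)]
fn add_opaque(a: u64, b: u64) -> u64 {
    a + b
}

fn main() {
    // Only the generated machine code can differ, never the result.
    assert_eq!(add_hinted(2, 3), add_opaque(2, 3));
    println!("both return {}", add_hinted(2, 3));
}
```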
I wouldn't have agreed with you a year ago: async traits built with boxed futures had real memory implications. But by design, the async abstraction that Rust provides is pretty good!
Seems pointless to extract `handle_suspend` here. There are very few reasons to extract code that isn't duplicated in more than one place; extracting the handling of the event is probably harder to read than handling it inline.
I strongly prefer this sort of code: a top-level function that reads as the process flow, delegating to well-named step functions. The logic of process flow is essentially one kind of information. All the implementation details are another. Step functions should not hide further important steps - they should only hide hairy implementation details that other steps don't need to know about.
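A minimal Rust sketch of that step-function style, with entirely hypothetical names: the top-level function states the flow, and each helper hides only one hairy detail.

```rust
// Implementation details, one per step.
fn normalize(input: &str) -> String {
    input.trim().to_lowercase()
}

fn validate(input: &str) -> Result<(), String> {
    if input.is_empty() {
        Err("empty input".into())
    } else {
        Ok(())
    }
}

fn persist(input: &str, store: &mut Vec<String>) {
    store.push(input.to_string());
}

// The process flow stays in one place, at one semantic level.
fn handle_submission(raw: &str, store: &mut Vec<String>) -> Result<(), String> {
    let input = normalize(raw);
    validate(&input)?;
    persist(&input, store);
    Ok(())
}

fn main() {
    let mut store = Vec::new();
    handle_submission("  Hello ", &mut store).unwrap();
    assert_eq!(store, vec!["hello".to_string()]);
    assert!(handle_submission("   ", &mut store).is_err());
}
```

A side benefit, as noted elsewhere in the thread, is that each step (`normalize`, `validate`) can be unit-tested in isolation.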
One huge one is so that you can test it in isolation.
There's extraction for reuse and then there's extraction for readability/maintainability. The second largely comes down to personal taste. I personally tend to lose the signal in the noise, so it's easier for me to follow the logic if some of the larger bits are pushed into appropriately named functions. Goes to the whole self-commenting code thing. I know there's a chunk of code behind that function call, I know it does some work based on its name and args, but I don't have to worry about it in the moment. There's a limit of course; moving a couple lines of code out without good cause is infuriating.
Other people prefer to have big blocks of code together in one place, and that's fine too. It just personally makes it harder for me to track stuff.
Cool article, but I got turned off by the obvious AI-isms, which, because of my limited experience with Rust, have me wondering how true any of the article actually is.
I don't see anything wrong code-wise, but it's definitely an odd way of making an accumulator. Maybe I'm being pedantic.
A function call is not necessarily an indirection. The basic premise of the blog is wrong on its face.
People new to Rust sometimes assume every abstraction is free but that's just not the case, especially with lifetimes and dynamic dispatch. Even a small function call can hide allocations or vtable lookups that add up quickly if you're not watching closely.
Why do you mention lifetimes here? They are exclusively a compile-time pointer annotation, they have no runtime behavior, thus no overhead.
Dynamic dispatch in general is much, much faster than many people's intuition seems to indicate. Your function doesn't have to be doing much at all for the difference to become irrelevant. Where it matters is for inlining.
Dynamic dispatch in Rust is expected to be very slightly faster than in C++ (due to one fewer indirection, because Rust uses fat pointers instead of an object prefix).
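The fat-pointer layout is easy to observe: a `&dyn Trait` reference is two words (data pointer plus vtable pointer), so the vtable is found without first loading the object. A small sketch with made-up types:

```rust
use std::mem::size_of;

trait Speak {
    fn speak(&self) -> &'static str;
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> &'static str {
        "woof"
    }
}

fn main() {
    // A trait object reference is a fat pointer: data ptr + vtable ptr.
    assert_eq!(size_of::<&dyn Speak>(), 2 * size_of::<usize>());
    // A plain reference is one word.
    assert_eq!(size_of::<&Dog>(), size_of::<usize>());

    let d = Dog;
    let s: &dyn Speak = &d; // dispatch goes through the vtable pointer
    assert_eq!(s.speak(), "woof");
}
```

In C++, by contrast, the vptr lives inside the object, so a virtual call first dereferences the object to reach the vtable, which is the extra indirection the comment refers to.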
Did you read the article? The author makes exactly that point.
A nitpick I have with this specific example: would `handle_suspend` be called by any other code? If not, does it really improve readability and maintainability to extract it?
The idea is that performance isn't a reason not to do it. Other considerations may cause you to choose inline, but performance shouldn't be one of them.
Re-use as a criterion for functional decomposition is a very misguided notion.