The performance tradeoff isn’t about where the bottleneck is. It’s about who has to carry the burden. It’s one thing for a developer to push the burden onto a server they control. It’s another thing entirely to expect visitors to carry that load when connectivity and device performance isn’t a constant.

Garrett Dimon

This is another great take on the kind of thing I was talking about the other day: some developers who favour heavy frameworks (e.g. React) argue for their benefits both in development velocity and in performance metrics like TTFB (time to first byte). But TTFB alone is not a valid measure of a user's perception of an application's performance: if you're sending a fast initial payload that then requires extensive client-side execution and/or additional round trips to the server, it stands to reason that you're not solving a performance bottleneck, you're just moving it.
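To make that concrete, here's a minimal sketch of my own (using the standard browser Performance APIs; none of this comes from the quoted post) that compares TTFB with later, user-facing milestones. On a client-heavy app, the gap between the first number and the last ones is roughly where the moved bottleneck lives:

```typescript
// Minimal sketch: compare time-to-first-byte with later, user-facing milestones.
// Run in the browser after the load event, or some of these timings will still be 0.

const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  // TTFB: from the start of the navigation until the first byte of the response.
  const ttfb = nav.responseStart - nav.startTime;
  console.log(`TTFB: ${ttfb.toFixed(0)} ms`);
  // How long before the document is parsed, and before the load event finishes.
  console.log(`DOM interactive: ${(nav.domInteractive - nav.startTime).toFixed(0)} ms`);
  console.log(`Load event end: ${(nav.loadEventEnd - nav.startTime).toFixed(0)} ms`);
}

// Largest Contentful Paint: a closer proxy for "the user can actually see the content".
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) {
    console.log(`LCP: ${last.startTime.toFixed(0)} ms`);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```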
I, for one, generally disfavour solutions that move a Web application's bottleneck to the user's device (unless there are other compelling reasons for it to be there, for example as part of an Offline First implementation, and even then it should be done with care). Moving the bottleneck to the user's device disadvantages those on slower or older devices and limits your ability to scale performance improvements through carefully-engineered precaching, e.g. static compilation. It also produces a tendency towards a thick-client solution, which is sometimes exactly what you need but more often just means a larger initial payload and more power consumption on the (probably mobile) user's device.
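For contrast, here's a minimal sketch of the kind of precaching I have in mind: a service worker that caches statically-compiled assets at install time and serves them cache-first. The cache name and asset list are hypothetical placeholders, not anything from a real build, and in practice the list would be generated by the build step itself:

```typescript
// Minimal Offline First sketch for a service worker.
// Assumes a tsconfig with "lib": ["WebWorker", "ES2020"]; or drop the types and run as plain JS.
export {};
declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = 'static-v1';
// Hypothetical asset list; in practice, emitted by the static build.
const PRECACHE_URLS = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', (event) => {
  // Precache the statically-built assets before the worker takes over.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```

The point isn't the specific code: it's that the expensive work (building the pages) happens once, on infrastructure you control, rather than on every visitor's device.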
Next time you improve performance, ask yourself where the time saved has gone. Some performance gains are genuine; others are just moving the problem around.