We can borrow some math from Nyquist and Shannon to understand how much information can be transmitted over a noisy channel and potentially overcome the magic ruler uncertainty from the article:
https://en.wikipedia.org/wiki/Nyquist_rate
https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
Loosely, this means that if we're above the Shannon limit of -1.6 dB (i.e., below a 50% error rate), then data can be retransmitted some number of times to reconstruct it, where:
number of retransmissions = log(target uncertainty)/log(per-attempt error rate)
and the target uncertainty for an n-sigma confidence level, using the standard normal cumulative distribution function phi, is:
uncertainty = 1 - phi(n)
So, for example, if we want to reach the gold-standard 5-sigma confidence level physics requires for a discovery (an uncertainty of 2.87x10^-7), and we have a channel with an n% error rate, here is a small table showing the number of resends needed (with a small Python check after the table):
Error rate    Number of resends
0.1%          3
1%            4
10%           7
25%           11
49%           22
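As a sanity check, here is a small Python sketch (mine, not from the article) that reproduces the table from the formula above; the 2.87x10^-7 target is the one-sided 5-sigma tail, computed here with the complementary error function:

    # Resends needed so the chance that every attempt fails drops below
    # the 5-sigma residual uncertainty: error_rate**n <= uncertainty.
    from math import ceil, erfc, log, sqrt

    sigma = 5
    uncertainty = 0.5 * erfc(sigma / sqrt(2))  # 1 - phi(5) ~= 2.87e-7

    def resends_needed(error_rate: float) -> int:
        return ceil(log(uncertainty) / log(error_rate))

    for p in (0.001, 0.01, 0.10, 0.25, 0.49):
        print(f"{p:>5.1%}  {resends_needed(p)}")  # prints 3, 4, 7, 11, 22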
In practice, the bit error rate for most communication channels today is below 0.1% (dialup is around 10^-6 to 10^-4, ethernet is around 10^-12 to 10^-10). That means sending 512-byte packets over dialup or 1500-byte packets over ethernet results in a cumulative resend rate of around 4% (dialup) and on the order of 10^-7 (ethernet).
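A rough check of those resend rates, assuming independent bit errors and mid-range BERs (10^-5 for dialup and 10^-11 for ethernet, both my assumptions):

    # Chance that at least one bit in a packet is flipped, i.e. the packet
    # must be resent, assuming independent bit errors.
    def packet_error_rate(ber: float, packet_bytes: int) -> float:
        return 1 - (1 - ber) ** (packet_bytes * 8)

    print(packet_error_rate(1e-5, 512))    # dialup, ~4%
    print(packet_error_rate(1e-11, 1500))  # ethernet, ~1.2e-7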
Just so we have it, the maximum transmission unit (MTU), which is the 512 or 1500 bytes above, can be estimated (for small loss rates) by:
MTU in bits = (desired packet loss rate)/(bit error rate)
So (4%)/(10^-5) = 4000 bits = 500 bytes for dialup, and (10^-7)/(10^-11) = 10000 bits = 1250 bytes for ethernet. 512 and 1500 are close enough in practice, although ethernet has jumbo frames now since its error rate has remained low despite bandwidth increases.
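The same back-of-envelope MTU sizing in Python, using the loss-rate targets from the paragraph above (this only holds while the resulting loss rate stays small):

    # MTU estimate: bits per packet such that the expected packet loss rate
    # stays at the target, given the channel's bit error rate.
    def mtu_bytes(target_loss_rate: float, bit_error_rate: float) -> float:
        return (target_loss_rate / bit_error_rate) / 8

    print(mtu_bytes(0.04, 1e-5))   # dialup:   ~500 bytes
    print(mtu_bytes(1e-7, 1e-11))  # ethernet: ~1250 bytes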
So even if an AI makes a mistake 10-25% of the time, we only have to re-run it about 10 times (or run about 10 independently trained models once each) to reach a 5-sigma confidence level.
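Plugging the AI case into the same retransmission formula, and assuming a failed run can be detected (by a test, a verifier, or cross-checking outputs, which is my assumption), the 25% case works out like this:

    # 11 independent runs at a 25% per-run error rate: the chance that all of
    # them fail is 0.25**11 ~= 2.4e-7, just under the 5-sigma bar of 2.87e-7.
    print(0.25 ** 11)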
In other words, it's the lower error rate achieved by LLMs in the last year or two that has provided enough confidence to scale their problem-solving ability to any number of steps. That's why it feels like they can solve any problem, whereas before that they would often answer with nonsense or give up. It's a little like how the high signal-to-noise ratio of transistors made computers possible.
Since GPU compute per dollar still doubles about every 2 years, we only have to wait about 7 years for AI to get the answer right essentially every time, given the context available to it.
For these reasons, I disagree with the article's premise that AI may never offer enough certainty for engineering safety, though I appreciate and have experienced the sentiment. This is why I estimate that the Singularity may arrive within 7 years, and certainly within 14 to 21 years, at that rate of increase in confidence.