Appreciate your writings... We're stuck for a while, since the big players keep rewarding the tiresome use of misleading terms (reason, hallucinate), and it will probably take another aha moment like DeepSeek to send a shiver through the industry and get consumers demanding real value in return. Mobile SLMs, combined (at first) with a clean LLM, will make more sense and actually lead to the kind of thinking that produces discovery (which requires curiosity).
Thanks again!
This makes no sense. Once you exhaust all the available text on the Internet and start making up your own, how do you prevent synthesized hallucinations from infecting your training set?
How does intersecting the textual hallucinations with sensor data from refrigerators and airplane cockpits and oil wells help identify which synthesized text needs to be kept out of the models?
This post is by a true believer who can’t confront the disappointing reality he himself reports: generative AI hitting a wall and being doomed to asymptotic, diminishing returns. So instead he predicts a solution that makes no sense.
It would have been better to simply write that LLMs have just about gone as far as they can go.
Admitting that one has been wrong about so-called AI is not the end of the world. There will be many other hype horses to ride in the future.
The “data flywheel” is a clever way to scale an AI’s experience, but it’s not the next leap. Models like Gemini, Claude, and GPT already use nascent reasoning tools. The true breakthrough will come when reasoning is not only foundational to the architecture, but significantly more advanced than it is today.
Advanced native reasoning will move past “if-then” logic into deep causal understanding, allowing the model to generate insights that are fact-grounded, internally consistent, and truly novel.
Examples of advanced reasoning capabilities include:
• Software—the ability to trace a software error’s root cause across multiple layers of code, propose an optimal fix, and logically verify it, tasks that purely statistical models often struggle to complete without human oversight.
• Medicine—understanding how to cross-reference symptoms, patient history, and the latest research to propose and validate a likely diagnosis, improving both speed and accuracy while reducing reliance on massive labeled datasets.
• Law—evaluation of case law, precedents, and statutes to construct and verify arguments, accelerating research and improving the consistency of legal advice.
Advanced reasoning would be the best defense against synthetic drift (the hallucination problem), and it would be hyper-efficient, learning from a fraction of the data current LLMs require.
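To make the synthetic-drift point concrete, here is a minimal, purely hypothetical sketch (the verifier and knowledge base are stand-ins invented for illustration, not anything today's models actually do): synthetic text only enters the training pool after each of its claims has been checked against grounded sources.

```python
# Hypothetical sketch: gate synthetic training text behind a verification step
# so hallucinated samples never enter the training pool. The "grounded" check
# below is a trivial stand-in; a real reasoning system would verify claims
# causally and logically against trusted sources, not by exact lookup.

from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    claims: list[str]  # atomic claims extracted from the text (assumed upstream step)

def grounded(claim: str, knowledge_base: set[str]) -> bool:
    # Placeholder verifier: accept a claim only if it matches a trusted source.
    return claim in knowledge_base

def filter_synthetic(samples: list[Sample], knowledge_base: set[str]) -> list[Sample]:
    # Keep only samples whose every claim can be grounded; anything else is
    # treated as potential synthetic drift and discarded before training.
    return [s for s in samples if all(grounded(c, knowledge_base) for c in s.claims)]

if __name__ == "__main__":
    kb = {"water boils at 100 C at sea level"}
    pool = [
        Sample("Water boils at 100 C at sea level.", ["water boils at 100 C at sea level"]),
        Sample("Water boils at 50 C at sea level.", ["water boils at 50 C at sea level"]),
    ]
    clean = filter_synthetic(pool, kb)
    print(len(clean), "of", len(pool), "samples kept for training")
```

The only point of the sketch is the ordering: verification sits in front of the training set, so a hallucinated sample is rejected outright instead of being averaged into the next model.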
The next “holy shit” moment won’t be an LLM fed more data and compute power. It will be a true hybrid intelligence: the model’s creative, pattern-matching “right brain” fused with an advanced, rigorous, logical “left brain.”
Feeding an AI data is like giving someone fish; it sustains them only while the supply lasts. Teaching it to reason, at a level far beyond today’s capabilities, is teaching it to fish, granting it the ability to generate its own insights indefinitely.
The question is what such an intelligence, as it approaches independence in thought and learning, will ultimately need from its creators, if anything.