Discussion about this post

David W Baldwin:

Appreciate your writings... We're stuck for a while, as the big boys are going to keep rewarding the tiresome use of misleading terms (reason, hallucinate), and it will probably take another aha moment like DeepSeek to send a shiver through the industry, where the consumer starts demanding real value for their money. Mobile SLMs combined (at first) with a clean LLM will make more sense and actually lead to thinking that leads to discovery (which requires curiosity).

Thanks again!

Michael Fuchs:

This makes no sense. Once you exhaust all the available text on the Internet and start making up your own, how do you prevent synthesized hallucinations from infecting your training set?

How does intersecting the textual hallucinations with sensor data from refrigerators and airplane cockpits and oil wells help identify which synthesized text needs to be kept out of the models?

This post is by a true believer who can't confront the disappointing reality he himself reports: generative AI hitting a wall, doomed to asymptotic diminishing returns. So instead he predicts a solution that makes no sense.

It would have been better to simply write that LLMs have just about gone as far as they can go.

Admitting that one has been wrong about so-called AI is not the end of the world. There will be many other hype horses to ride in the future.

