Great note. Very interesting issue with Triton: writing more efficient GPU code in an open-source language to reduce dependence on NVIDIA & their CUDA, which has rightfully made them dominant. Maybe other chip makers can challenge that dominance by pairing better Triton support with their own AI-optimized processing units.
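To make the Triton point concrete, here is a minimal sketch of the kind of kernel it lets you write: ordinary Python that Triton's compiler lowers to GPU code, with no CUDA C++ involved. It follows the standard vector-add tutorial pattern; the names `add_kernel` and `add` and the block size of 1024 are illustrative choices, not anything from the original note.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide chunk of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard against out-of-bounds access
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)           # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

The appeal is that this stays in Python while still producing a tuned GPU kernel, which is what makes it interesting as a hedge against CUDA lock-in.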
That's an interesting point. I'm not sure whether something like Triton helps other chip makers. Making chips is very capital intensive, irrespective of the programming language used to program them.
The dominance of GPUs over the less specialized but far more common CPUs for AI comes down to parallel processing, which is exactly what Graphics Processing Units were built for.
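A rough illustration of that parallelism gap, assuming PyTorch and a CUDA-capable GPU are available (exact timings will vary by machine):

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# CPU: the multiply-accumulate work is spread over a handful of cores.
t0 = time.time()
c_cpu = a @ b
print("CPU matmul:", time.time() - t0, "s")

# GPU (if available): the same work is spread over thousands of parallel threads.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()
    print("GPU matmul:", time.time() - t0, "s")
```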
Somebody should summarize CUDA's advantages over other software for programming GPUs. I recall reading that it was heavily optimized for NVIDIA's GPUs, which is why NVIDIA graphics boards were usually top of the line for the same raw GPU specs.
The whole idea of AI making AI smarter is part of both the promise & the fear. But AI runs on hardware, which can get bigger & faster at doing more calculations in less time. Yet to make use of that hardware, it needs good machine instructions, produced by compilers & runtimes for languages like C or Python.
I’m not at all sure, but I doubt that qubits & quantum computing will turn out to be more optimal, though it also would not surprise me too much. There might be some way of optimizing over multiple probabilities that better simulates human reasoning, in both hardware & software. AI improvement should now be expected to be a constant source of new progress, & often of hype that fails.
Interesting article. The models certainly are getting better at fully unguided workflows, but are not quite ready for production. For now, combining fully automated actions with scripted actions seems to be the way to overcome accuracy limitations.
Yes, the models are not yet powerful enough for agents to act fully autonomously.