Apple at the Edge
Last week I wrote about the importance of intelligence at the edge. I posted this table about the benefits of AI-enhanced edge computing:
Now, Apple has released new research about bringing AI to its mobile devices. The company has published two papers: one on creating 3D models from 2D video, and one on running large language models efficiently on devices with limited memory. As I explained in last week’s post (see the table above), enabling AI at the edge allows for all kinds of interesting new use cases.
Here’s VentureBeat’s take on why these two papers are important:
As Apple potentially integrates these innovations into its product lineup, it’s clear that the company is not just enhancing its devices but also anticipating the future needs of AI-infused services. By allowing more complex AI models to run on devices with limited memory, Apple is potentially setting the stage for a new class of applications and services that leverage the power of LLMs in a way that was previously unfeasible.
A point that I’ll reiterate is this: when mobile devices can run AI locally, a lot of interesting use cases that were previously impossible become possible. Network latency disappears, and real-time processing becomes available. This isn’t limited to entertainment purposes, such as converting 2D video to 3D models; it also enables more serious applications. Imagine emergency personnel whose AI-enabled mobile devices allow them to rapidly diagnose and treat people in the field, all without requiring a reliable connection to centralized compute resources. For search and rescue operations, this kind of technology could be a boon. Or imagine monitoring industrial sites remotely via autonomous agents with AI capabilities embedded in them.
AI & Lawyers
Every time I read an article about lawyers adopting AI tech, I look for information about how the technology reduces, or better yet eliminates, large language models’ tendency to hallucinate. Recall the hapless lawyer who used ChatGPT for legal research, only to find that ChatGPT made up purportedly precedential cases out of whole cloth. Anyway, here’s a story about a Magic Circle1 law firm, Allen & Overy, building a contract negotiation tool for its clients. Quoting from the article:
The tool, known as ContractMatrix, is being rolled out to clients in an attempt to drive new revenues, attract more business and save time for in-house lawyers. A&O estimated it would save up to seven hours in contract negotiations.
More than 1,000 A&O lawyers are already using the tool, with five unnamed clients from banking, pharma, technology, media and private equity signed up to use the platform from January.
In a trial run, Dutch chipmaking equipment manufacturer ASML and health technology company Philips said they used the service to negotiate what they called the “world’s first 100 per cent AI generated contract between two companies”.
The legal sector is grappling with the rise of generative AI — technology that can review, extract and write large passages of humanlike text — which could result in losses of jobs and revenues by reducing billable hours and entry-level work for junior staff.
There are a couple of interesting observations about this story:
Allen & Overy appears to have built some technology for its clients to use, in partnership with Harvey AI. This is unusual for law firms, which focus primarily on providing professional services to clients, not on building products for them. But, as one of the lawyers quoted in the piece notes, either they build this technology or another firm will build it and disrupt them.
The legal industry seems to have reached the same conclusion that I did back in November 2022, when ChatGPT was first released, which is that generative AI technology will significantly change the practice of law. Lawyers seem to be moving with unusual alacrity, relative to their usual conservative posture. Nothing focuses the mind like the prospect of disruption.
AI & Medicine
Researchers at MIT have identified new compounds that can kill bacteria that cause MRSA infections in people. The article notes:
These compounds were identified using deep learning models that can learn to identify chemical structures that are associated with antimicrobial activity. These models then sift through millions of other compounds, generating predictions of which ones may have strong antimicrobial activity.
These types of searches have proven fruitful, but one limitation to this approach is that the models are “black boxes,” meaning that there is no way of knowing what features the model based its predictions on. If scientists knew how the models were making their predictions, it could be easier for them to identify or design additional antibiotics.
…
First, the researchers trained a deep learning model using substantially expanded datasets. They generated this training data by testing about 39,000 compounds for antibiotic activity against MRSA, and then fed this data, plus information on the chemical structures of the compounds, into the model.
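The screening loop described in the excerpt — train a model on compounds labeled for antimicrobial activity, then sift through a larger library and rank candidates by predicted activity — can be sketched in miniature. To be clear, this is an illustrative toy, not the researchers’ method: the synthetic bit-vectors stand in for real molecular fingerprints, the activity rule is invented for the demo, and a plain logistic regression stands in for their deep learning model.

```python
import math
import random

random.seed(0)
N_BITS = 32  # stand-in for a molecular fingerprint length

def random_fingerprint():
    """A random bit-vector standing in for a compound's chemical structure."""
    return [random.randint(0, 1) for _ in range(N_BITS)]

def synthetic_label(fp):
    """Invented ground truth for the demo: 'active' if the first bits co-occur."""
    return 1 if sum(fp[:4]) >= 3 else 0

# Labeled training set, analogous to the ~39,000 compounds tested against MRSA.
train = [(fp, synthetic_label(fp)) for fp in (random_fingerprint() for _ in range(2000))]

# Plain logistic regression trained by stochastic gradient descent.
w = [0.0] * N_BITS
b = 0.0
lr = 0.1
for _ in range(20):
    for fp, y in train:
        z = b + sum(wi * xi for wi, xi in zip(w, fp))
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log loss w.r.t. z
        b -= lr * g
        w = [wi - lr * g * xi for wi, xi in zip(w, fp)]

def predict(fp):
    """Predicted probability that a compound is active."""
    z = b + sum(wi * xi for wi, xi in zip(w, fp))
    return 1.0 / (1.0 + math.exp(-z))

# "Sift through" a much larger unlabeled library and surface the top candidates.
library = [random_fingerprint() for _ in range(10000)]
top_candidates = sorted(library, key=predict, reverse=True)[:10]
```

Note that a model like this is exactly the kind of “black box” the excerpt complains about: it ranks candidates well, but the learned weights say little about *which* chemical substructures drive the prediction, which is the gap the MIT work aims to close.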
This seems important. See also Google’s recent announcement that one of its AI research projects identified millions of potential new crystal structures. The application of AI technology to scientific research seems to be unveiling new knowledge at a scale previously unimaginable. For the radical accelerationists, this is nothing but good news.
1. “Magic Circle” refers to prestigious law firms in London, UK.