Bearly is a powerful AI content creation tool developed by Trung Phan and his team. One of its most interesting features is its ability to rapidly summarize articles and extract key points. Here it summarizes my recent post about how ChatGPT can’t calculate the volume of a sphere:
That’s a pretty good summary. My one quibble with this summary is that WolframAlpha and ChatGPT do not presently interact with each other. Rather, the post I wrote quotes an article written by Stephen Wolfram, in which he proposes a future version of ChatGPT which is integrated with WolframAlpha’s computational knowledge engine.
I decided to ask Trung a few questions about the product, all of which were generated by ChatGPT. The ChatGPT-generated questions and his answers follow below:
Can you walk me through the process of how Bearly summarizes a text?
We spent a good bit of time writing our execution engine, which allows us to run complex pipelines over various LLM models. Summaries are a complex, multi-stage map/reduce process (especially for longer content) that allows us to capture the full depth of the data.
TLDR: You give us long text, we give you less long text (but organized and still insightful).
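The map/reduce approach described above can be sketched roughly as follows. This is a minimal illustration, not Bearly's actual pipeline: the `summarize` step is stubbed out as first-sentence extraction, where a real system would call an LLM on each chunk.

```python
# Sketch of map/reduce summarization for long texts.
# NOTE: `summarize` is a hypothetical placeholder (first-sentence
# extraction); a production pipeline would call an LLM here.

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split text into word-bounded chunks that fit a context budget."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize(text: str) -> str:
    """Placeholder for an LLM call: keep only the first sentence."""
    return text.split(". ")[0].strip().rstrip(".") + "."

def map_reduce_summarize(text: str, max_words: int = 50) -> str:
    # Map: summarize each chunk independently, so no single call
    # exceeds the model's context window.
    partials = [summarize(c) for c in chunk(text, max_words)]
    # Reduce: concatenate the partial summaries and summarize again.
    return summarize(" ".join(partials))
```

For very long documents, the reduce step can itself be applied recursively until the combined summaries fit in a single context window.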
How did you design the AI model that powers Bearly and what challenges did you face during development?
Bearly is highly optimized to bring AI to a broad set of users as simply as possible. This means being accessible via multiple modes (chrome extension, desktop apps, mobile browser).
Our users love that they can be writing in Word, Substack, Google Docs or email and only have to hit command-shift-p to bring the full AI toolkit to wherever they are working.
What feedback have you received from users of Bearly and how do you plan to incorporate that into future updates?
We get a ton. There’s literally a feedback box in the app and users use it regularly. We fix most bugs within 24 hours and prioritize product ideas based on how many people submit (Bearly has shipped a number of user suggestions).
How does Bearly compare to similar tools or approaches to summarization that are currently available?
We make it as convenient as possible for people to access a whole suite of AI tools (writing, reading, image generation).
In terms of summary tools, we find other products don’t handle the full text (e.g. GPT’s limit is 4k tokens). Our pipelines allow us to map/reduce over the text to give users ultra-high-fidelity summaries of long texts (e.g. PDFs, papers).
We’ve also found that enriching summaries with insights from LLMs is very helpful to users. Embedded counterarguments, for example, let users see beyond the argument being made.
Can you provide any examples of particularly challenging texts that Bearly was able to successfully summarize?
We were impressed with the way it handles research papers. When we looked into domains we were highly familiar with, its ability to extract the key points was very impressive. Researchers are often hunting for a few specific facts in a paper, and surfacing those is very important to the researcher experience.
What opportunities do you see for the future development and evolution of Bearly?
Our roadmap is super exciting! We want to create the ultimate workspace for any sort of AI operation (we currently do writing, reading, image, and more). We will continue building it out to different domains and are exploring building custom suites for teams and corporations.