Image by DALL·E from the prompt: "An oil painting by Matisse of a humanoid robot teacher in a classroom."
I was outlining what would become the 2022 end-of-year update for Atlas Primer, but my mind kept drifting towards large language models like GPT-3, a topic that had lately been on my mind, and on almost everyone else's. I was thinking about how Atlas Primer has been preparing for a future where AIs become almost indistinguishable from humans, and how new startups building on these platforms will approach scaling and defensibility.
So instead of writing about all the amazing things that happened in 2022, such as product breakthroughs, new customers, and new investors, I decided to write predictive reflections on the wonderful things that will have happened by the end of 2023, with special emphasis on large language models like those made by OpenAI, and how they're at the core of what we are doing at Atlas Primer.
Chatbots aren’t so silly any more
Since starting Atlas Primer, I've heard my fair share of jokes about chatbots being silly and useless. Then, on November 30th, 2022, OpenAI released ChatGPT and acquired its first million users in a record-shattering five days. Soon afterwards, every medium was flooded with astonishing results produced by this disruptive technology, everything from art to business plans, screenplays, and Christmas cards.
AI-generated text, images, code, and more reached a level of sophistication that made them almost indistinguishable from work made by humans, posing a severe challenge to sectors like education, where students could suddenly generate multiple pass-worthy essays in a matter of minutes.
In 2023, we expect society to have a hard time adapting to this newfound power, much like it had a hard time adapting to cars speeding past pedestrians when they suddenly became available to the public. A set of AI traffic laws still needs to be established, but until that happens, we should keep exploring all the different ways this incredible technology can create new value.
Atlas Primer has been active in this space since 2020 and was in a unique position to make use of these technologies in a lasting way. While other founders desperately tried to find the right talent to seize this opportunity, we enjoyed having an already fully functional team, with tried-and-tested processes for conversational design and QA, as well as our own language model that supports and encourages users on a personal level.
But behind the hype of Generative AI (Gen-AI) and all the funding that will be poured into this space in 2023 looms a bubble.
Defensible network effects in apps using Generative AI
As you would expect, everyone is quick to jump on the hype, and for very good reason, but that also makes it hard to gauge which business plans are genuinely disruptive.
Some of the most interesting early examples we've seen come from companies like Lensa and Notion, both of which integrated Gen-AI into their existing value-creating products to deliver even more value. They fit it right into the existing user journey, where users are already investing their time and adding content, but they also made Gen-AI available as a stand-alone service, since that lets them keep learning how people interact with the technology.
With so many apps and services expected to be built on top of large language models like GPT-3, it becomes essential to think deeply about user onboarding and, especially, training data. Model-as-a-Service solutions like OpenAI's are great out of the box, but they become phenomenal when subjected to network effects that make them better with each use, in particular when they become more personal through training on the users' own content or actions.
Training these models can be quite tricky, and for that you need data. Therefore, a big part of the defensibility, as we see it, comes from finding novel ways to obtain that data and using it to get the "flywheel" started, as described in this September 2022 post from Sequoia Capital.
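To make the flywheel idea concrete, here is a minimal, purely hypothetical sketch of what "training on the users' own content" could look like with the 2022-era `openai` Python SDK. The data source, file names, and question/explanation fields are invented for illustration; this is not a description of Atlas Primer's actual pipeline, just one way an app could turn user-generated content into fine-tuning data for a hosted GPT-3 model.

```python
# Hypothetical sketch: turning user-generated content into fine-tuning data
# for a hosted GPT-3 model, using the 2022-era `openai` Python SDK.
# The data source (user_qa_pairs) and file names are illustrative only.

import json
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Imagine each user interaction yields a (question, explanation) pair
# that the user found helpful; this is the "data flywheel" input.
user_qa_pairs = [
    {
        "question": "What is photosynthesis?",
        "explanation": "Photosynthesis is the process plants use to turn light into chemical energy.",
    },
    # ... collected continuously as people use the product ...
]

# Write the pairs in the prompt/completion JSONL format the fine-tuning API expects,
# with simple separator tokens so the model knows where prompts end.
with open("user_examples.jsonl", "w") as f:
    for pair in user_qa_pairs:
        record = {
            "prompt": pair["question"] + "\n\n###\n\n",
            "completion": " " + pair["explanation"] + " END",
        }
        f.write(json.dumps(record) + "\n")

# Upload the training file and start a fine-tune of a base GPT-3 model.
upload = openai.File.create(file=open("user_examples.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print("Fine-tune job started:", job.id)
```

The point of the sketch is the loop, not the specific API calls: each new batch of content users create makes the next round of training a little better, which in turn makes the product more valuable, which attracts more users and more content.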
How this materializes in Atlas Primer
At Atlas Primer, we have a unique way of obtaining training data: every step of the user journey is designed to generate value for the user. Combine that with our own way of training and augmenting large language models like GPT-3, and you're left with a product with network effects at its core.
Furthermore, we designed Atlas Primer from the beginning to be conversational-first, meaning the whole interface is built like a chat/messaging app. To put the importance of that into context, think of companies like Uber and the advantage they derived from starting out mobile-first over a decade ago. It not only made it harder for incumbents to copy their business model, it also left Uber uniquely positioned to make use of mobile-only capabilities like location services.
Conversational-first apps offer a whole world of possibilities that are simply not feasible in other architectures. Combine that with all the exciting use cases for audio, and a unique user experience is almost an inevitability.
I've been making and using artificial neural networks since 2005 and couldn't be more excited about these new models that are emerging, but they're "novelty items" until they solve real problems. That's why we at Atlas Primer are focusing not just on the technology, but on making the best possible guided learning experience.
Please reach out to hinrikjosafat@atlasprimer.com if you’d like to learn more about how Atlas Primer is using Gen-AI to help students review and test their knowledge of any subject.