Replit CEO Amjad Masad recently recorded a TED talk in which he speculated that his AI-powered coding platform would enable millions of developers to build meaningful careers.
The thesis is compelling, and if it’s true, it’s great for all the people who will benefit. It’s clear that tools like Replit, or even ChatGPT, allow for the rapid creation of software. On the one hand this means that more software will be built, but on the other hand, it means more crappy software will be built. How do we reconcile these two positions? What are the implications of a world where software is easily built from a series of natural language prompts? What follows is my off-the-cuff attempt to answer these questions.
Democratization of Software Development: AI tools lower the barrier to software creation, enabling more people to build software. This can lead to innovative solutions. However, it will also create more software of varying quality.
Quality Control and Standards: More software means we need mechanisms to validate software quality. These could involve automated testing and vetting processes, peer reviews, or community-driven standards. Platforms that host AI-generated software might need to implement quality checks, akin to Apple’s App Store review process.
Education and Training: Budding software creators need to learn best practices, including software architecture and other nuances which AI might not fully grasp. There’s a reason that GitHub calls its AI coding tool Copilot: it is best to think of AI tools as a partner or complement to their human controller. But in order to effectively control an AI one needs, as with a plane, to learn how to pilot it.
Ethical and Security Concerns: When you make it easy for good actors to build software, you also make it easier for criminals to do the same. AI-generated software needs rigorous security protocols and ethical guidelines to prevent or minimize misuse or harm.
Role of Professional Developers: Experienced developers will shift their attention to more complex and nuanced aspects of software engineering, such as design, architecture, and problem-solving in domains where AI tools are insufficient.
Innovation and Experimentation: Easier software creation will allow for rapid prototyping and experimentation. This can lead to breakthroughs and innovations. In theory, this could accelerate technological progress. Imagine a materials scientist who uses an AI to write an algorithm to test in silico many different chemical properties of various materials.
Market Saturation and Differentiation: An abundance of software products will lead to market saturation in various sectors, making it harder for any one developer to stand out from her peers. Apple’s App Store and Google’s Play Store demonstrate this well: app downloads follow a strict power law distribution, in which a small number of apps receive the vast majority of downloads, and most apps receive very few, if any, downloads. There’s no reason to think that a future software environment in which AI assistance is pervasive will be any different. Indeed, since AI makes creating software easier than ever before, we might well find that software is an even more competitive market than it is today.
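To see how lopsided such a power law is, here is a minimal sketch modeling downloads as a Zipf-like distribution (downloads proportional to 1/rank^s). The app count and the exponent s = 1.0 are illustrative assumptions, not actual App Store or Play Store data:

```python
def zipf_download_shares(n_apps: int, s: float = 1.0) -> list[float]:
    """Return each app's share of total downloads, ordered by rank.

    Assumes downloads are proportional to 1 / rank**s (a Zipf law);
    the exponent s is a hypothetical parameter chosen for illustration.
    """
    weights = [1.0 / rank**s for rank in range(1, n_apps + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_download_shares(10_000)
top_1_percent = sum(shares[:100])  # share captured by the top 100 apps
print(f"Top 1% of apps capture {top_1_percent:.0%} of downloads")
# → prints: Top 1% of apps capture 53% of downloads
```

Under these toy parameters, one app in a hundred accounts for over half of all downloads, which is the kind of winner-take-most dynamic newcomers would face.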
Evolution of AI tools: As AI tools evolve, they may become better at ensuring quality and adhering to best practices, potentially reducing the amount of low-quality software produced.
Regulatory Framework: While I’m not generally a fan of regulations, there may be pressure to introduce new regulations and standards targeting AI-generated software. These regulations could pertain to quality, security, and ethical compliance, among other considerations.
Cultural Shift in Software Creation: Software is no longer the domain of those with extensive coding skills. This means more software developers, but a more complex ecosystem.