Software 2.0
Software 2.0 is a concept that emerged to describe the modern era of artificial intelligence, and it carries a fundamentally different philosophy and structure from traditional software development (Software 1.0). The term was formalized and popularized in 2017 by Andrej Karpathy, a deep learning expert and former Director of AI at Tesla.
Instead of writing code directly, developers let the program learn its behavior from data. In other words, code is not 'written' but 'trained'.

The developer's role has shifted from 'creating logic' to 'designing how data about the world is structured and how the model will interpret that data.' The essence of code is no longer derived from human logic; it is formed through interaction with the world (data) and learning (generation). Programmers no longer tell machines what to do, but instead design what data to show, what problems to solve, and what loss functions to minimize.
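To make the contrast concrete, here is a minimal Python sketch built around an entirely made-up spam-filtering example: in Software 1.0 the developer writes the decision rule by hand, while in Software 2.0 the developer supplies labeled data and a loss, and the 'rule' is whatever weights gradient descent finds. The dataset, features, and training loop below are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# Software 1.0: the developer writes the decision logic by hand.
def is_spam_v1(num_links: int, contains_keyword: bool) -> bool:
    return num_links > 3 or contains_keyword

# Software 2.0: the developer chooses data and a loss; the "logic" is
# whatever weights come out of minimizing that loss on the data.
X = np.array([[1, 0], [5, 1], [0, 0], [7, 0]], dtype=float)  # toy features
y = np.array([0.0, 1.0, 0.0, 1.0])                           # toy labels

w, b = np.zeros(2), 0.0
for _ in range(500):                                # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)             # cross-entropy gradient
    b -= 0.1 * float(np.mean(p - y))

def is_spam_v2(features: np.ndarray) -> bool:
    # The learned weights, not hand-written rules, define the behavior.
    return bool(features @ w + b > 0)
```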
Data-Centric Development

The essence of Software 2.0 lies in thinking about data before code. Data is not simply an input but the essential material of development and the logic itself. In solving complex problems such as image recognition, language processing, and autonomous driving, what matters more than sophisticated algorithms is rich, expressive data. Models learn the inherent structures within this data, autonomously detecting and reconstructing invisible patterns and flows.
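A small sketch of what 'data as the logic itself' can mean in practice, using assumed toy data: the training code never changes, and a behavioral 'bug fix' is made by correcting a label and retraining. The dataset, the least-squares stand-in for training, and the version names are all hypothetical.

```python
import numpy as np

def train(dataset):
    """The 'build step' of Software 2.0: data in, parameters out."""
    X = np.array([features for features, _ in dataset], dtype=float)
    y = np.array([label for _, label in dataset], dtype=float)
    X1 = np.c_[X, np.ones(len(X))]                 # add a bias column
    # A least-squares fit stands in for training; the weights are the program.
    weights, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return weights

# Version 1 of the "source": a labeled dataset.
dataset_v1 = [([1.0], 0.0), ([2.0], 1.0), ([3.0], 0.0), ([4.0], 1.0)]

# A behavioral "bug fix" is a data change, not a code change:
# the third example was mislabeled, so it is corrected and the model retrained.
dataset_v2 = [([1.0], 0.0), ([2.0], 1.0), ([3.0], 1.0), ([4.0], 1.0)]

model_v1 = train(dataset_v1)
model_v2 = train(dataset_v2)   # same training code, different learned behavior
```

In such a workflow, the dataset is the artifact that gets versioned, reviewed, and fixed, much as source files are in Software 1.0.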
Changing Role of Developers
In the Software 1.0 era, developers were rule designers who wrote line-by-line code on 'what to do and how' based on logical thinking. However, in Software 2.0, developers are problem definers, data curators, and meta-structure architects. Programmers no longer command models directly but set learning conditions, design data flows, and create environments where models can interpret and evolve on their own. If the past was 'directive development,' now it's 'enabling development.' In other words, developers are becoming designers who guide models to create rules by themselves.
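One way to picture 'enabling development' is that the artifact the developer edits looks less like decision rules and more like a specification of learning conditions. The sketch below is a hypothetical Python dataclass; every field name and value is an assumption chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrainingSpec:
    """What a Software 2.0 developer edits: not decision rules, but the
    conditions under which rules are allowed to emerge."""
    dataset_path: str      # what data to show the model
    objective: str         # what loss to minimize
    model_family: str      # the search space of candidate "programs"
    learning_rate: float   # how the search proceeds
    max_steps: int         # how long the search runs

# A hypothetical specification; the developer iterates on this artifact
# (and on the dataset it points to) rather than on if/else logic.
spec = TrainingSpec(
    dataset_path="data/street_scenes_v3/",
    objective="cross_entropy",
    model_family="resnet50",
    learning_rate=3e-4,
    max_steps=100_000,
)
```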
Generalization & Robustness in AI

Scale-up in Software 2.0 doesn't simply mean increasing the number of parameters or computational operations. True scale-up moves toward expanding the ability to generalize, so that AI can interpret, judge, and respond on its own even in complex situations, unfamiliar environments, and non-linear contexts it encounters for the first time.
The key point here is this:
The direction and depth of this generalization depend entirely on 'how we define the world.'
The problem definition given to an AI is not a mere task description but an ontological structure that determines what the AI aspires to and how it will perceive the world. In other words, the problem definition becomes the AI's philosophy and identity.
For example:
- The definition "recognize this object" might create only a surface-level classifier,
- while the definition "infer what function this unfamiliar object can perform" requires a much deeper thinking structure (see the sketch after this list).
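A minimal PyTorch sketch of how those two problem definitions diverge, assuming the same input features and a shared backbone: definition A trains a closed-set classifier, while definition B trains a multi-label affordance predictor, so the same data is steered toward different representations by the choice of targets and loss. All shapes, label sets, and layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Shared setup: the same inputs and a similar backbone for both definitions.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
x = torch.randn(32, 128)                      # a batch of object features

# Definition A: "recognize this object" -> a closed-set classifier.
classify_head = nn.Linear(64, 10)             # 10 fixed categories
category = torch.randint(0, 10, (32,))        # category labels
loss_a = nn.CrossEntropyLoss()(classify_head(backbone(x)), category)

# Definition B: "infer what function this object can perform"
# -> predict a set of affordances (graspable, pourable, ...), a multi-label
#    objective that pushes the representation beyond category identity.
affordance_head = nn.Linear(64, 6)            # 6 illustrative affordances
affordances = torch.randint(0, 2, (32, 6)).float()
loss_b = nn.BCEWithLogitsLoss()(affordance_head(backbone(x)), affordances)

# Same data, similar model; the problem definition (targets and loss)
# is what steers the learned representation in different directions.
```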
As a result, even with the same data and a similar model structure, an AI will form completely different recognition abilities, reasoning patterns, and generalization paths depending on the problem definition. A true generalization strategy must therefore be accompanied by a philosophical direction for how the AI views the world, and by problem definitions that let it explore open possibilities along that direction.
This is not just an expansion of the model, but the design of ontological 'intent' and a metaphysical choice that determines 'what AI can become'.