Daron Acemoglu, an Institute Professor at MIT in Cambridge, Massachusetts, has a habit of doing something that economists seldom do: using math to temper other people’s excitement. Acemoglu, who was awarded the 2024 Nobel Prize in Economic Sciences, published a paper last year examining the potential effects of artificial intelligence on the U.S. economy over the next ten years. His conclusion landed with a kind of quiet disruption: the GDP increase, he calculated, would probably be about 1.1%. Perhaps 1.8% if everything goes according to plan. Next to Goldman Sachs’s 7% prediction and McKinsey’s estimates of $17.1 trillion to $25.6 trillion annually, Acemoglu’s figure looked almost stubbornly modest.
Forecasting is not the only point of contention. The real dispute is over a genuinely open question: can algorithms propel economic growth the way electricity or the steam engine once did? If so, how, when, and for whom? These questions are being debated simultaneously in boardrooms, policy circles, and economics departments, and the truth is that no one has a complete answer yet.
| Category | Details |
|---|---|
| Subject | The economic implications of AI and algorithmic progress on GDP, productivity, and growth |
| Key Researcher | Daron Acemoglu, MIT Institute Professor, 2024 Nobel Laureate in Economic Sciences |
| Acemoglu’s GDP Estimate | ~1.1% GDP growth from AI over the next 10 years (versus Goldman’s 7%, McKinsey’s $17–$25.6 trillion) |
| AI Task Exposure | ~20% of all U.S. tasks exposed to AI; only ~5% could be profitably automated in next 10 years |
| AI Productivity Gain (Acemoglu) | ~0.7% total AI-driven productivity increase over next decade |
| IMF Estimate | AI will affect ~40% of jobs globally |
| Goldman Sachs Estimate | +$7 trillion / +7% global GDP over 10 years |
| McKinsey Estimate | $17.1–$25.6 trillion annually from generative AI |
| Nordhaus “Singularity” Threshold | 20% annual GDP growth — his own 2015 tests showed economy failed 5 of 7 conditions |
| Algorithmic Efficiency Trend | Compute needed for equivalent model performance halving roughly every 8 months (Epoch AI), about three times faster than Moore’s Law |
| GCC AI GDP Contribution | AI alone could contribute $260 billion to GCC GDP by 2030 (Microsoft UAE) |
| Reference Website | MIT Sloan — A New Look at the Economics of AI |
Acemoglu’s reasoning begins with the observation that AI and computer vision tools could replace or augment about 20% of all tasks in the U.S. labor market. After accounting for implementation costs, however, which frequently outweigh the productivity gains, only about a quarter of those tasks, roughly 5% of the total, could be profitably automated within the next ten years. He also points out that most early generative AI adoption has focused on what he calls “easy-to-learn tasks”: problems where success is quantifiable and there is a clear path from action to outcome. The benefits become far less certain when AI is applied to harder problems, such as diagnosing a patient’s symptoms or troubleshooting a power-grid equipment malfunction, where the cost of a mistake is higher and the technology is not yet dependable enough for large-scale deployment.
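The arithmetic behind these figures can be sketched in a few lines. The 20% exposure share and the one-quarter profitable fraction come from the article; the average cost saving per automated task is an assumption on my part, chosen so the result lands near the quoted ~0.7% productivity figure, not a number from Acemoglu’s paper itself.

```python
# Back-of-envelope sketch of the task-based aggregation (illustrative only).
# exposed_share and profitable_fraction are from the article; avg_cost_saving
# is an ASSUMED value picked to match the quoted ~0.7% productivity gain.

exposed_share = 0.20        # share of U.S. tasks exposed to AI
profitable_fraction = 0.25  # fraction profitably automatable within 10 years
avg_cost_saving = 0.14      # assumed average cost saving per automated task

automated_share = exposed_share * profitable_fraction   # ~5% of all tasks
decade_productivity_gain = automated_share * avg_cost_saving  # ~0.7%

# GDP rises somewhat more than productivity once capital adjusts;
# the article's headline range is ~1.1% (up to ~1.8% in the best case).
print(f"Automated share of tasks: {automated_share:.1%}")
print(f"Decade productivity gain: {decade_productivity_gain:.2%}")
```

The point of the exercise is less the exact numbers than the structure: a large-sounding exposure figure shrinks quickly once profitability and per-task savings are multiplied through.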
What makes Acemoglu’s paper intriguing is that it is pessimistic about the current direction of AI rather than about its potential. He lists a range of occupations, such as electricians, plumbers, nurses, teachers, and secretaries, where AI adoption is almost nonexistent. These are jobs that revolve around solving problems in real time with incomplete information, precisely the kind of work where trustworthy, situation-specific advice would be genuinely helpful. An electrician facing an unfamiliar short circuit on a grid, without the expertise to troubleshoot it, could benefit enormously from an AI system that offers precise, timely information specific to that scenario. The problem is that existing large language models are not trustworthy enough for that use case; their propensity to generate answers that sound confident but are wrong makes them risky in high-stakes situations. In Acemoglu’s framing, the productivity gains that would follow from solving that reliability problem are far larger than anything recorded so far.
The question of efficiency is also hotly debated. Early commentary suggested that DeepSeek’s V3 model, trained on roughly a tenth of the compute of comparable Western models while achieving comparable performance, would mean less demand for expensive GPUs, less revenue for Nvidia, and a slower build-out of AI infrastructure. Almost immediately, Satya Nadella and others pushed back, citing what economists call the Jevons paradox: the historical pattern in which more efficient use of a resource tends to increase overall consumption rather than decrease it, because lower costs open up entirely new applications. Epoch AI’s analysis is pertinent here: by the best estimates currently available, the amount of computation required to reach a given level of model performance has been falling at roughly three times the pace of Moore’s Law. Yet as efficiency has improved, investment in AI compute has risen rather than fallen. On that pattern, algorithmic progress is acting as an accelerator of demand, not a suppressor.
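The Jevons logic turns on a single parameter: how elastic demand is. A toy constant-elasticity model makes the mechanism concrete. The functional form and the elasticity values below are my own illustrative assumptions, not estimates from any of the sources discussed.

```python
# Toy constant-elasticity demand model of the Jevons-paradox argument
# (illustrative sketch; the elasticity values are ASSUMPTIONS, not estimates).

def raw_compute_demand(efficiency_gain: float, elasticity: float) -> float:
    """Relative raw-compute consumption after an efficiency improvement.

    efficiency_gain: factor by which compute per unit of capability falls
                     (10.0 means a given capability needs 10x less compute).
    elasticity:      price elasticity of demand for AI capability.
    Returns consumption relative to the pre-gain baseline (1.0 = unchanged).
    """
    # Cost per unit of capability falls by `efficiency_gain`, so demand for
    # capability rises by efficiency_gain ** elasticity, while each unit of
    # capability now needs only 1 / efficiency_gain as much raw compute.
    return efficiency_gain ** elasticity / efficiency_gain

# Inelastic demand: a 10x efficiency gain shrinks total compute use.
print(raw_compute_demand(10.0, 0.5))   # ~0.32: consumption falls
# Elastic demand (the Jevons case): the same gain increases compute use.
print(raw_compute_demand(10.0, 1.5))   # ~3.16: consumption rises
```

With elasticity below 1, cheaper compute means less total compute bought; above 1, the new applications unlocked by lower costs more than offset the savings, which is the outcome the investment data so far appears to favor.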
The picture is not uniform worldwide, however. The nations of the Gulf Cooperation Council, including Saudi Arabia, the United Arab Emirates, and their neighbors, are approaching the question in terms that are both strategic and economic. Non-oil activities now make up 73.2% of GCC GDP, a structural shift that would have seemed unthinkable a generation ago. According to Microsoft, AI alone could add $260 billion to the region’s GDP by 2030. By backing regional champions like G42, partnering with Nvidia to build the first joint AI and robotics research lab in the Middle East, and pledging to train one million UAE citizens in AI skills by 2027, countries like the UAE are investing heavily to become AI production hubs rather than merely AI consumers. The wager is that the gap between buying technology and shaping its future is the gap between adoption and production.
Observing all of this from a distance gives me the impression that the economic discussion surrounding AI is still in its early stages, not because the technology is still in its infancy but rather because the measurement tools are not keeping up. GDP is a crude tool for estimating the worth of a system that can produce code, synthesize data, or provide real-time answers to queries. Similar to how the internet’s economic contribution took years to properly register in national accounts, it’s possible that significant value is being created in ways that aren’t yet evident in productivity statistics. It’s also possible that Acemoglu is correct and that the current application of AI is somewhat misguided, concentrating on creating increasingly complex conversation tools rather than the dependable, domain-specific information systems that could revolutionize the work of teachers, nurses, and electricians. It is possible for both to be true simultaneously. Economic growth may be algorithmic in the future. However, the algorithm is still unfinished.





