Elon Musk Says AI Will Be Smarter Than Humans THIS YEAR — Are We Ready?
Apart from Donald Trump wanting a peace (piece) deal for Greenland, Elon Musk made one of the most provocative statements at the 2026 World Economic Forum in Davos – Artificial Intelligence (AI) could be smarter than humans by the end of the year. So what does this actually mean for us humans?
To break this down – AI currently excels at specific tasks like game-playing, language processing and some aspects of image recognition. This is what is known as narrow AI. What Musk alludes to is Artificial General Intelligence (AGI), where systems can reason across domains, adapt to different situations and outperform humans in most or even all cognitive tasks. During his conversation with BlackRock CEO Larry Fink, Musk seemed to suggest that humanity is closer to this breakthrough than we realise. Here’s the question we should be asking ourselves – what needs to happen between now and then?
Currently, large language models display impressive capabilities, yet they still struggle with tasks we humans find simple – common-sense reasoning (maybe not that common among AI), understanding cause and effect, or maintaining a consistent train of thought. For AGI to truly surpass us, AI will need to get much better at how it learns, reasons and generalises knowledge.
Musk made another prediction – AI surpassing all human intelligence by 2030, ushering in the era of Artificial Superintelligence (ASI). By then it will no longer be a question of whether AI is better at solving math problems; it will be about solving problems we humans cannot even fathom or formulate, making connections we don’t see, and carrying out tasks in ways we struggle to understand.
But before we get there, there is one huge challenge: energy, energy and more energy. Training AI systems demands enormous amounts of computing power, and this means equally enormous electricity consumption. Musk went on to discuss the feasibility of solutions like solar power and space-based energy, which raise their own questions about operational implementation and integration.
So where does this take us? Or rather, what would this mean for humanity? To quote Musk – “for quality of life, it is better to err on the side of being an optimist and wrong, rather than being a pessimist and right”. The optimistic case suggests an abundance of benefits – robots carrying heavy loads and shouldering labour-intensive jobs, AI solving super complex problems like climate change and possibly finding a cure for cancer. But the same forces could mean massive job displacement; some might be concerned about the limits of control and fail-safe measures, or simply that the disruption happens faster than institutions are able to adapt.
Some AI researchers believe Musk’s timeline is too optimistic; others believe we are actually much closer than we think. Here’s the truth: predictions of events like these have mostly remained just that – predictions, much like the world ending in 2012. But what’s clear is that the pace of AI development has surpassed all expectations in the past few years. So whether superintelligence announces its arrival in 2026, 2030 or 2050, the trajectory is clear – we are moving towards a shift in what we define as intelligence and who, or what, possesses it.
The buzz Musk sparked is not really about his exact predictions being right; it is about acknowledging that we might be approaching one of the most significant transitions in human history – and that we need to think seriously about what this actually means for all of us.