Is AI going to ruin the economy or take over the world?
The short answer is no.
The longer answer is yes, but not in the way that many people fear.
The first thing to realise is that Artificial Intelligence is not Intelligent, at least by a strict definition of intelligence. There is a difference between being intelligent, being well-educated, and being well-trained. You can give a stupid person an expensive education, but it won’t do them much good. An intelligent person without a fancy education might excel in some areas, but have huge blind spots in others. Both might improve with training and practice.
AIs built using the presently dominant Large Language Model (LLM) technique are well-educated but not intelligent. The appearance of knowledge is produced by reading the entire content of the visible internet, and then using that corpus to predict/guess the right answer to any question asked.
Anything not already on, or inferable from, the present visible internet remains unknown to the LLM, so it cannot truly innovate. Any event which happened after the LLM took its reading copy of the internet is also a blind spot. Any subject upon which internet opinion is conflicted, or subject to partisan censorship, will produce inconsistent results. Subject matter where essential information is often omitted or assumed (software version numbers!) can produce incorrect answers. Many people now have their favourite example of an “AI” producing an amusingly nonsensical answer.
Having said all that, we should not overlook that the LLM AIs have produced services which make basic internet research a lot faster. So long as you are aware of the inherent biases and inaccuracies of the internet itself, these LLM tools are an order of magnitude better than what came before. If your job is mainly the manipulation of online data, numbers or text, then you have good cause to fear for your future employability.
But if your employment is in the real world, making or manipulating physical objects, then you have little to worry about. Reading a book (or a website) about plumbing does not make ChatGPT a plumber. Who is going to trust an AI with styling their hair? Tasks which require practice, and where every job is different, are not easily replicated.
Yes, it is theoretically possible to manufacture a specialised machine for each physical job currently done by humans. There are already robot waiters and robot vacuum cleaners, but they are not cheap. We are decades, at least, away from plumbing or hair-cutting machines being simultaneously good enough and cheap enough to completely replace humans.
The first computing revolution did away with copy typists, and massively reduced the number of accountants, bookkeepers and printers. But new jobs appeared in computer programming, automated stock-market trading, digital marketing and SEO. I fully expect new jobs and industries to appear in the wake of the LLM revolution, even though I cannot guess what they might be. Some jobs won’t be directly replaced, but will evolve alongside more computerised tools and assistance.
Yes, there will be winners and losers, as there were in previous times when a new technology made some existing jobs irrelevant. New fortunes will be made. High-density locations for employment might shift. Maybe our commercial cities will go the way of the Welsh mining valleys, becoming empty shells once home-working and video-conferencing and transport costs make them pointless to sustain.
So far, the first computing revolution has not removed the need for workers. It just allows more to be done with a similar number of people. But unevenly. It’s a bumpy but substantial productivity gain.
Those who control the new tech may use their position to subvert the political process to reinforce their dominance. For example, pre-buyout Twitter routinely suppressed opinions and narratives disfavoured by its creators. Some would say that its successor X is doing the same thing under Musk’s ownership, but now in the opposite direction. Because the narratives which remain on the internet are the input data for each new iteration of LLMs, it is inevitable that the LLMs are becoming a reflection of the dominant narratives, rather than necessarily reflecting the truth.
That reflection of dominant narratives will become recursively stronger, as a larger and larger percentage of internet content is itself the product of LLM generation. Because it becomes cheaper and easier to produce LLM-generated (i.e. recycled) internet content, there will be fewer actual people competent at, motivated by, and rewarded for creating original and insightful internet material. So the signal-to-noise ratio will get progressively worse.
It’s already happening in academia. Students don’t bother to learn their subject: they just get an LLM to write their essays for them. Post-graduates don’t bother to expand their knowledge: they just get paper mills to write their academic papers for them. Expertise loses its value and is replaced by LLMs regurgitating recycled narratives. Give it another decade, and which humans will be well-enough informed to contradict whatever ChatGPT v37 is saying, and upon what sources could they rely to do so?
RJ7: May 2026
Postscript: The post-LLM internet will at last give Jorge of Burgos what he wanted: “the property of knowledge, as a divine thing, is that it is complete and has been defined since the beginning, in the perfection of the Word which expresses itself to itself. […] There is no progress, no revolution of ages, in the history of knowledge, but at most a continuous and sublime recapitulation.”