Critical AI Summer Reading: Empires, Cons and Everlasting False Promises
A guide to surviving in a world of buzzword smog – or why questioning Agentic, Vibe, AEO and similar AI fairy tales makes the internet a better place
I have one piece of good news – and two pieces of bad news for you
First, the good news. Everything old is new again – that is, if you have listened to the advice of a benevolent colleague, freelancer or your agency: deep congratulations. You can freely close this text – even the entire screen – because nothing important has changed in digital.
Q: And the bad news?
A: You haven’t closed the screen
Q: No sh*t, Sherlock. What’s the other bad news?
A: In a couple of years, we’ll meet here again
Search Engine Optimization, Customer Relationship Management and even Critical Thinking are all going through their ninth burial. I purposely didn’t write the SEO, SEM or CRM abbreviations, so you can see that newly minted AI experts are predicting the end not only of technologies and tools – but of entire common-sense concepts.
We’ve all heard previous doomsday prophecies about the web, cryptocurrencies, and even today’s background actor – artificial neural networks (btw, their pulse was expected to flatline for decades). Obituaries are becoming an integral part of tech cycles, which brings us to the first point: Agentic, AEO, DeepHat and other AI smog has polluted the air so much that TikTok brain rot, Jonathan Haidt and the “lonely Gen Z” are slowly being removed from meeting agendas. OK, OK, some folks still ask me: “have you watched Adolescence?”
(I haven’t yet. I’m persuading my wife to binge it, but one-hour episodes demand too much concentration from us – which, as I type this, is heavy self-irony. Btw, this is not a digression. Critically minded TV, Gen Z and attention spans will matter more towards the end of this essay.)
Prompt Engineering Cosa Nostra
I understand that the media have to write about the shiny new thing that Jony Ive will birth for Altman. I also understand that many organizers have to fill expensive conference chairs (especially Harari); but there is such a large discrepancy between the media-influencer spiel and the situation on the ground that the only thing left for AI hypers is to convince us that the original pizza should be topped with banana.
There are three layers to this marketing-overhyped AI underground.
First and simplest are the literal scammers. This is the smallest – but, on the other hand, the loudest – group. Older examples include Elizabeth Holmes and Adam “WeWork” Neumann; in the more recent crypto and AI summers, it’s SBF of FTX, or Elon Musk promising that Grok 3 will write some new histories. The criminal’s mind – says Bruce Wayne – is quite simple, so the paragraph about them is consequently short. These people do not have any valid technology roadmap. When talking to investors or the media, they present verifiable falsehoods.
Some of them flirt with prison so openly that (instead of cruising around flaunting their loot like normal bandits) they march off to the state NPOO fund, claiming that completely autonomous robotaxis are ready to test their brakes in front of Zagreb schools.
Don’t worry, Picard’s FacePalm is Lurking Around the Corner
Others understand that prisons are not hotels, so they schmooze around us coated in virtual grease. Your perfidious long-term gamblers will not shout objective, hard claims; they bathe their narrative in the quasi-possible – the potentially beneficial, or harmful, to humanity.
Americans group them into doomers and gloomers. The doomer shouts that AI is causing programmer (un)employment – while totally ignoring macroeconomics, the coughing fits of startup VC funding, and the normalization after the COVID bubble. The gloomer, on the other hand, proposes to the MIXX jury that an electric hypercar could one day become an oldtimer. Sure, mate – the supercar graveyards in Dubai demonstrate exactly that.
Or take Aleksa Gordić, who is working on artificial general intelligence (AGI), even though the whole field is still struggling with the “general” part of the term. Gordić says his daemon will construct safe roads, bridges and Dyson spheres – although Jean-Luc Picard learned back in the 24th century that this is not possible.
And I don’t know if you’ve spotted the trend, but modern gamblers also go through heavy media training. Namely, here and there you will find a brave journalist, an uncompromising editor or a marketer with a superhero cape who will ask the presenter: “OK, do you have any proof for your claims, why did it take you so long, and how come you broke the budget three times?” The default words long-termists pronounce are “Someday! Time will tell! You know, people hallucinate too!” Or, even better, that their project is a “Moonshot!” – a word whose use is as precise as Bakić’s pandemic forecasts.
(Which is another irony – since it was Mr. Nenad who used the term to defend the Zagreb robotaxis. Nor is this a digression. A project must have a legitimate technology roadmap to “qualify” as a Moonshot. To be precise, JFK approved the launch to the Moon based on a realistic engineering plan – not on sci-fi fairy tales.)
Add to that the testimony of Gergely Orosz, who writes that most of the parabolic statements about “AI job replacement”, vibe-coding successes, traffic-relieving robotaxis and new AEO tools are actually undisclosed paid influencer ads – deliberately constructed and purposefully interwoven with falsehoods, so that nerds will “correct” the post in the comments, unwittingly increasing the virality of the quasi-claim.
I simply want to show that the average young brain – a 25-year-old designer, developer, PM or copywriter – will have to fight with all their might against a techno-marketing machine that has been greasing its communication gears for 70 years. Namely, the original sin of the AI field lies with its first gamblers, John McCarthy and Claude Shannon. The two of them danced around the definition of artificial intelligence, going back and forth on exact terminology. Let’s hear what McCarthy had to say back then, in 1952…
“Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice, and so we agreed to call it Automata Studies”
But a problem occurred. The term “Automata Studies” was technically accurate but not attractive enough for investors or the media. Crickets. So the department went through its own edition of Communication Days (an otherwise excellent event), where in 1956 they chose the juicier name “Artificial Intelligence” – a term that has been generating “mega claims, tiny evidence” stories ever since.
Older gamblers eagerly passed the promotional baton to the younger ones, and so this sticky promo virus was handed down to today’s Altman, Sutskever, Amodei, Musk and Nobel laureate Geoffrey Hinton – the godfather of modern artificial intelligence. Their statements are often reckless, selectively oblivious to evidence, and so arrogant that Hinton’s picture is an essential item in most broligarchy wallets.
The fundamental misconception of long-term bluffers is that they nihilistically play the card that people are stupid, naive sheep. But if that were indeed so, we wouldn’t be hanging out here. Because people – at least on average – are not only curious, witty and benevolent creatures; they also have an explosive sense of humor. More importantly, the long-term game is a double-edged sword, so people – again, on average – sooner or later detect assholes. This is best demonstrated by the guardians of common sense – the internet meme army that never forgets anything. And never forgives anything…

The third group is all of us: the crew with selective amnesia, i.e. most people who, with generally good intentions, say things they believe to be true. But the problem with “majority” and “belief” is that both are riddled with inherent biases and subconscious traps. Take the column by Saša Šarunić, who benevolently examines the vibe-coding concept while forgetting that a developer spends at least two thirds of their time on greasy maintenance, not on creating an IT system. Or take Ivan Burazin, who tells an important truth – that “agents are not really something for more complex tasks”. Well done for the attitude 🫡 but why then the podcast title “Your next hire should be an AI agent”?
Even if you think that help with generating code or content slop is truly groundbreaking (it is not), studies have finally begun to appear (Apple, Apple again, Salesforce, LiveCodeBench Pro, LiveCodeBench GSO) which demonstrate what most of us feel in our gut: that LLMs cannot master even slightly complex tasks. And if you are now thinking “Someday! Time will tell!”… there is a course at FSB called “Operational Research” where you learn that most activities in daily work are precisely multistep, multimodal, subjective processes – impossible to automate completely, and especially not with a stochastic text synthesizer.
This is a media effect called Gell-Mann amnesia, which explains how an expert will pick up the newspaper, read an article from their own field, see that the text is superficially written and automatically declare it nonsense. But then they will turn the page of the same newspaper and blindly believe everything written outside their specialization. A good example is the (excellent) analyst Ben Evans, who says he never uses Deep Research in his work – DR is not reliable, he says. Evans also notices that most companies have not benefited from GenAI tools, yet at the same time he believes these tools will certainly replace some programmers (?!)
And when we combine Gell-Mann amnesia with Gear Acquisition Syndrome and, more importantly, the unfortunate anthropomorphization of artificial intelligence dating back to 1956, we get an effect where we – the third, selectively forgetful clique – unknowingly help the first two actors. “GRAIA responds with empathy!”, they proclaim again over @ Bruketa & Žinić.
In short, Gell-Mann is the fundamental reason why “SEO” and “Coding” obituaries are often delivered by people who don’t have their own sleeves rolled up in those domains – and it is exactly the SEO field that can help us understand what will happen with vibe-coding, agent-washing and similar AI cover stories.
WhiteHat is the new BlackHat
WordCamp Europe, the largest gathering of WordPress experts, was held the other day. Those folks (us) power two thirds (!) of the entire meaningful web. The conference was also attended by John Mueller (Search Advocate @ Google), who held a panel on search engine optimization. To a colleague’s question “what can we do to rank in AI Overviews and GenAI tools”, John coldly replied: “absolutely nothing”.
Experienced John said what we already covered in the introduction. If you have worked diligently for years on content – blog posts, newsletters, PR outreach, technical SEO, a planned presence on social networks, etc. – you can give yourself a movie high-five, because in the era of naive and pure (well, so-so) LLMs, quality original content is truly king (and is stolen without an ounce of shame for training neural networks 🤷)
Let’s nerd out a little bit. Modern LLMs are artificial neural networks (transformers) that AI labs train on practically all online material. If your brand is mentioned in that material more often than the competition, you slightly increase the “vector weight” inside the LLM toward your brand. So when someone prompts ChatGPT with “how to season shakshuka”, there is a higher statistical chance that the robot will suggest your salt and not the competitor’s honey.
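To make that intuition concrete, here is a toy sketch in Python – emphatically not how transformers actually work, and the brands and co-occurrence counts are invented for illustration – showing how raw corpus frequency tilts what gets sampled:

```python
# Toy model: the more often a brand co-occurs with a topic in the training
# corpus, the likelier it is to be sampled in answers about that topic.
# All brand names and counts below are made up.
import random

# Hypothetical co-occurrence counts of brands next to "shakshuka" in a corpus
cooccurrence = {"YourSalt": 120, "CompetitorHoney": 30, "GenericSpice": 50}

def sample_brand(counts: dict) -> str:
    """Sample a brand proportionally to its corpus frequency."""
    total = sum(counts.values())
    return random.choices(list(counts), weights=[c / total for c in counts.values()])[0]

# Over many "prompts", YourSalt shows up in roughly 60% of the draws –
# more mentions, higher statistical chance of being suggested
draws = [sample_brand(cooccurrence) for _ in range(10_000)]
print({brand: round(draws.count(brand) / len(draws), 3) for brand in cooccurrence})
```

The real mechanism (attention across billions of weights) is vastly more complex, but the statistical drift is the same: more mentions, more probability mass.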
And if you know that Google and Bing never would publish how their search ranking algorithms work, it may surprise you that Google and OpenAI cannot publish their LLMs’ inner workings – because they don’t really know how their models work. Transformers are a “black box” technology: the weighting of specific words within a given context is too vast, and the math too opaque, to detect in which conversation a certain brand will “appear”. That is why you will often hear that LLMs are non-deterministic systems – or, by the even better term that Emily Bender and Timnit Gebru coined, stochastic parrots.
There is also the post-training, inference or test-time part of the work. Large language models decide when to call external search engines through a combination of internal safety assessments, iterative chained reasoning and self-supervised learning – all of which is a roundabout way of repeating the earlier point: the precise math behind whether an LLM will use an external service is not really known. What is known is that ChatGPT will use the Bing Search API, and Gemini – I hope – internal Google resources, which again brings us back to the beginning of the text: nothing has changed. A chatbot “conversation” in which the user searches for and researches a service or product will again depend on an explicit search service, which in turn depends on the ranking algorithm that has been “current” in the digital landscape for twenty years.
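Sketched below, under heavy assumptions, is roughly what that loop looks like. Every name here (call_llm, web_search) is a hypothetical placeholder rather than any real API – the point is the control flow: the model may or may not emit a search call, and once it does, the final answer inherits the search engine’s classic ranking:

```python
# A provider-agnostic sketch of the tool-use loop; all functions are stubs.

def web_search(query: str) -> list:
    """Placeholder for an explicit search service (e.g. the Bing Search API)."""
    return [f"result for: {query}"]  # ranked by the classic, decades-old algorithm

def call_llm(messages: list) -> dict:
    """Placeholder for a chat-completion call that may request a tool."""
    # Whether a tool call comes back is stochastic – the precise math is not public
    return {"tool": "web_search", "arguments": {"query": messages[-1]["content"]}}

def answer(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    response = call_llm(messages)
    if response.get("tool") == "web_search":
        # The chatbot "conversation" now depends on an explicit, ranked search service
        results = web_search(response["arguments"]["query"])
        messages.append({"role": "tool", "content": "\n".join(results)})
        # ...a second model pass would synthesize the final reply from these results
    return f"answer grounded in: {messages[-1]['content']}"

print(answer("best CRM for a small bakery"))
```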
Now, why is WhiteHat the new BlackHat? Precisely because newly minted experts keep rediscovering lukewarm SEO water and selling decades-old WhiteHat tactics as if each were some brand-new BlackHat LLM empowerment trick. The problem is that along with this AI-shortcut story they also serve fresh AEO, GEO and SEMrush-$1,000-a-month tools that will supposedly monitor your brand’s position within language models and vector databases. Those tools and golden shovels cannot work reliably, because they depend on an implicit, non-deterministic LLM background and, more importantly, on user context – an unverifiable and unlimited dark matter.
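Here is a quick toy simulation of that measurement problem (all numbers are invented): a monitoring tool can only re-sample the model without the user’s hidden context, so its “brand visibility” estimate is just one of many possible distributions:

```python
# Toy simulation: why sampling-based "AI visibility" dashboards wobble.
import random

def llm_mentions_brand(context_bias: float) -> bool:
    """Stochastic stand-in for: does the model mention our brand on this run?"""
    base_rate = 0.35  # hypothetical average visibility
    return random.random() < base_rate + context_bias

def monitoring_tool_estimate(samples: int, context_bias: float) -> float:
    hits = sum(llm_mentions_brand(context_bias) for _ in range(samples))
    return hits / samples

random.seed(7)
# The tool samples with zero user context; real users carry invisible context
print("tool's estimate:  ", monitoring_tool_estimate(50, context_bias=0.0))
print("logged-in foodie: ", monitoring_tool_estimate(50, context_bias=+0.25))
print("skeptical chef:   ", monitoring_tool_estimate(50, context_bias=-0.20))
# Three very different "brand positions" – which one does the dashboard report?
```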
But just wait until the scammers of the first group and the charmers of the second arrive in your inbox, assuring you (wrongly) that there is a holistic, homogeneous, systematic decline in organic traffic – and that their silver bullet solves all conversion headaches.
And then you’ll fall into the Gear Acquisition Syndrome trap once again, ignoring the five fundamentals of online presence: choosing an open source platform, a business plan, a marketing strategy and positioning, and then a detailed E-E-A-T content plan – the one that has been raved about for years. And if I hear one more time that “businesses in 2025 must be authentic”, I will literally run a stair marathon – just to throw myself off the top of the local skyscraper.
I’m circling around a lot here to draw a parallel. In a few years we will also see vibe-coding “experts” recommending to programmers that relational databases are the right choice, that you must always start from the meaning, design and architecture of the system – and that you should definitely pay attention to the optimization, maintainability and readability of the code. Don’t-Repeat-Yourself! Those future tips will surely peak with – mark my words – “designers, developers and managers should communicate!!!” Which is perhaps the most important takeaway of this column: all the real, hard questions remain the same – and there is no indication that GenAI will remove them. If anything, GenAI will complicate them further, given that LLMs are already in a phase of deep commoditization. Btw, an example of “engineers need to know the basics” 🙄 just landed.
Books, instead of a conclusion
Finally, how can we, the third group, fight back more actively against the first two?! They say with memes and banter, but good ol’ education would also be prudent.
Or as Joseph Weizenbaum cleverly wrote: “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, “I could have written that.” With that thought he moves the program in question from the shelf marked “intelligent” to that reserved for curios, fit to be discussed only with people less enlightened than he.”
So, in short, here are 5 recommendations, sorted by the concentration you can spare:
- “Empire of AI” – Karen Hao. Attention Span: 8h. Published in 2025, it is the best dissection of the overhyped AI industry. Karen Hao writes for the WSJ, The Atlantic, and the MIT Tech Review, and she presents irrefutable evidence collected through 9 years of investigative journalism. Hao offers a silver lining which I also wanted to weave through this text… GenAI has narrow, domain-specific benefits, but we need to implement them carefully, where it makes sense – without pomp, fanfare and false promises. Her story of using LLMs to preserve the Māori language is an example of such a direction.
- “The AI Con” – Dr. Emily M. Bender & Dr. Alex Hanna. A smaller but still hefty book for which you will need approx. 5 hours. Bender and Hanna go all in with really sharp scissors, i.e. they would cut the power supply to the entire AI field. I see the angle here, but I don’t think that’s realistic or constructive. BUT BUT the book generously explains the problem of anthropomorphizing stochastic parrots, i.e. the negative consequences of attributing human characteristics to a robot. For example, the latest Algebra research (Kopal, Žnidar, Korkut) shows that 44% of Gen Z “considers” that AI has a positive effect on mental health. Therein lies the future problem. If young people are already looking at transformers as psychotherapists – and we know that OpenAI uses that very moment as an engagement trap – we are dangerously close to giving Jonathan Haidt an opportunity for a new bestseller. We held a panel on the Gen Z and AI topic at the Press House, where the key question was: how do we use AI while supporting human values and social progress? The books above give the best answers.
- There is also a 45-minute public debate – effectively a compressed version of The AI Con – where Bender discusses AI issues with Sébastien Bubeck, the author of Microsoft’s controversial “Sparks of AGI” paper. Watch the full video and you’ll notice that Bubeck failed – and barely even tried – to dispute anything Bender said.
- I’m also including the column from 2023, the spiritual predecessor of this text, which explains more deeply the statistical nature of AI systems, the dark matter of the information field, and the Moravec and Jevons paradoxes. Reading time: approx. 13 minutes
- And if that’s too much for you, take a look at a 15-second Silicon Valley clip which portrays the two biggest enemies of vibe-coding and agentic systems: the irony of automation and the impossibly detailed specification…
Addendum: I’ll add one internal example here, a project that falls in line with what Karen Hao wrote about the Māori language. Machine learning can be used to analyze nature and wildlife problems – which, ironically, ARE exacerbated by the massive electricity consumption of training and using artificial neural networks.
Nevertheless, these new systems can help in remedying our forests and saving our species – which is what Drew Purves talks about on the Google DeepMind podcast. He also mentions the IUCN Save Our Species project that the Neuralab team wholeheartedly worked on. You can listen to the full episode here …
But you’re probably in an (unnecessary) Zoom meeting right now. With one eye you’re skimming this article, while from the beautiful Krasniqi bakery below, the smell of delicious burek is luring you. You no longer have the strength to squint at the sorted books; your Attention Span is 0 or less; and at the same time you’re brainstorming where to buy yogurt on the way home. Which brings me back to the beginning of our relationship… that is… I have some good news – and some bad news for you.