Critical AI Summer Reading: Empires, Cons and Everlasting False Promises
A guide to surviving in a world of buzzword smog – or why questioning Agentic, Vibe, AEO and similar AI fairy tales makes the internet a better place
I’ve got one piece of good news – and two pieces of bad.
Good news first. Everything old is new again. If you’ve been dutifully taking notes from your benevolent colleague, freelancer or agency – deep congrats. You can freely close this text (heck, shut down your whole screen) because in digital land, nothing important has really changed.
Q: And the bad news?
A: You haven’t closed the screen.
Q: OK, Capt. Obvious. What’s the second bad thing?
A: In a couple of years, we’ll meet here again.
Search Engine Optimization, Customer Relationship Management and even Critical Thinking are all experiencing their ninth burial session. I purposely avoided the SEO and CRM abbreviations so that you can see how newly minted AI experts are predicting the end not only of technologies and tools – but of entire common-sense concepts.
We’ve all heard the previous doomsday prophecies around the “web”, cryptocurrencies, and even today’s background actor – artificial neural networks. Btw, for decades people expected their pulse to flatline.
Obituaries are becoming an integral part of tech cycles, which brings me to our first point: Agentic, AEO, DeepHat – and similar AI smog representatives – have polluted the air so much that TikTok brain rot, Jonathan Haidt and “lonely Gen Z” are slowly being removed from meeting agendas. OK OK, some folks ask me: “have you watched Adolescence?”
(I haven’t. I keep persuading my wife to binge it, but one-hour episodes demand more concentration than we can spare, which – as I type this – is heavy self-irony. Btw, this is not a digression. Critically-minded TV, Gen Z and attention spans will matter more towards the end of this essay.)
Prompt Engineering Cosa Nostra
I understand that the media has to write about the shiny new thing that Jony Ive will birth for Altman. I also understand that organizers need to fill those pricey conference seats (Harari, we see you). But the gap between slick AI hype and real-life ops is so huge that the next spin will be that banana was the original pizza topping.
There are three layers to this AI marketing-overhype underground.
First and simplest are the literal scammers. They’re the smallest, but by far the loudest crew. Think Elizabeth Holmes and Adam ‘WeWork’ Neumann back in the day; or today’s crypto-AI summers starring FTX’s SBF or Elon Musk with his Grok 3 “history rewrite” effort. The criminal’s mind – says Bruce Wayne – is simple, and the paragraph about them consequently short. They have no genuine tech roadmap – and peddle demonstrably false claims when talking to investors or the press.
Some of them are so prison-flirtatious that (instead of driving around pansies like normal bandits) they go to an official EU funding document – and type in how completely autonomous robotaxis are ready to test their brakes in front of Zagreb schools.
Don’t worry, Picard’s Facepalm is Lurking Around the Corner
Others understand that prisons are not hotels, so they smooch around us coated in virtual grease. These perfidious long-term gamblers won’t shout hard, falsifiable claims. They will bathe their narrative in something quasi-possible, potentially beneficial or harmful to humanity.
Americans tend to group them into doomers and gloomers. The doomer shouts that AI is killing programmer jobs – completely ignoring macroeconomics, chronic startup-VC bronchitis, and the quiet fart of the COVID bubble deflating. The gloomer, on the other hand, is proposing to the MIXX jury that an electric hypercar could one day become an oldtimer. Sure mate, the racing car cemeteries in Dubai demonstrate just that.
Or take Aleksa Gordić, who is working on artificial general intelligence (AGI), although the whole field is still struggling with the “general” part of the term. Gordić says his daemon will construct safe roads, bridges, and Dyson spheres. Even Jean-Luc Picard learned, somewhere in the 24th century, that this is not possible.
And I don’t know if you’ve spotted the trend, but modern gamblers also go through hefty media training. Namely, here and there you will find a brave journalist, uncompromising editor or caped marketer who will ask the presenter: “ok, do you have any proof for your claims, why did it take you so long and how come you broke the budget three times?” The default words long-termists pronounce are “Someday! Time will tell! You know, people hallucinate too!” Or even better, that their project is a “Moonshot!” – a word whose use is as precise as Bakić’s pandemic forecasts.
(Which is another irony – since it was precisely Mr. Nenad who used the term to defend the Zagreb robotaxi shenanigans. Nor is this a digression. A project must have a legitimate technology roadmap to “qualify” as a Moonshot. To be precise, JFK approved the Moon launch based on a realistic engineering plan – not on some sci-fi fairy tales.)
Add to that Gergely Orosz’s testimony: he writes that most of the hyperbolic statements – about “AI job replacement”, vibe-coding successes, robotaxis that relieve traffic and new AEO tools – are actually undisclosed paid influencer ads; deliberately constructed and purposefully interwoven with falsehoods, so that nerds “correct” the post in the comments, unwittingly boosting the virality of the quasi-claim.
Great Naming Gamble & Artificial Influenca
I simply want to show that the average young brain – a 25-year-old designer, developer, PM or copywriter – will have to fight with all their might against a techno-marketing machine that has been greasing its communication gears for 70 years. Namely, the original sin of the AI field lies with its first gamblers, John McCarthy and Claude Shannon. The two of them danced around the definition of artificial intelligence, going back and forth on exact terminology. Let’s hear what McCarthy had to say back then, in 1952…
“Shannon thought that artificial intelligence was too flashy a term and might attract unfavorable notice, and so we agreed to call it Automata Studies”
But *a* problem occurred. The term “Automata Studies” was technically accurate, but not attractive enough for investors or the media. Crickets. That’s why the department decided to attend some of their Communication Days (otherwise excellent), where in 1956 they picked a juicier name, “Artificial Intelligence” – a term that has been generating “mega claims, tiny evidence” stories ever since.
Older gamblers eagerly passed the promotional relay baton to the younger ones, so this sticky promo virus was handed down to today’s Altman, Sutskever, Amodei, Musk and Nobel laureate Geoffrey Hinton – the godfather of modern artificial intelligence. Their statements are often reckless, selectively oblivious and so arrogant that Hinton’s ID picture is an essential item in most broligarchy shrines.
The core mistake these long-term bluffers make is nihilistically & cynically betting that people are naive, clueless sheep. But if that were true, we wouldn’t be hanging out here right now. Fact is, people – at least on average – are smart, funny, and surprisingly sharp when it counts. More importantly, playing the long con is a double-edged sword, because people will – sooner or later – sniff out assholes. And nothing proves that better than the internet’s unstoppable meme army, the real guardians of common sense. They never forget, and they never forgive…

The third group is all of us. The team with selective amnesia, i.e. most people, who with generally good intentions say things they believe to be true. But the problem with “majority” and “belief” is that both are filled with inherent biases and subconscious traps. Take a look at a column by Saša Šarunić, who benevolently examines the vibe-coding concept, forgetting that developers spend at least two thirds of their time on greasy maintenance, not on the creation of IT systems. Or take Ivan Burazin, who tells an important truth – that “agents are not really something for more complex tasks”. Well done for the attitude 🫡 but why then the “Your next hire should be an AI agent” podcast title?
Even if you think that a code or content slop generator is something truly groundbreaking (it is not), studies have finally begun to appear (Apple, Apple again, Salesforce, LiveCodeBench Pro, LiveCodeBench GSO) which prove what most of us feel in our gut: that LLMs will not be able to master even slightly complex tasks. If you’re still thinking, “Someday! Just give it time!” – well, there’s a course at your college called “Operational Research” that’ll teach you exactly why most daily tasks are complex, multistep, multimodal, and stubbornly subjective. Good luck automating that – especially with a glorified stochastic text synthesizer.
This is actually a media effect named Gell-Mann amnesia, which explains how an expert will pick up the newspaper, read an article from their own field, see that the text is superficially written and automatically dismiss it as nonsense. But then they will turn the page of the same newspaper and blindly believe everything written outside their specialization. A good example is the (otherwise excellent) analyst Ben Evans, who says he never uses Deep Research in his work. It’s not reliable, he says. Evans also notices that most companies did not benefit from GenAI tools – but at the same time believes these tools will certainly replace some programmers’ tasks (?!)
BUT when we combine Gell-Mann amnesia with Gear Acquisition Syndrome and, more importantly, the unfortunate anthropomorphization of artificial intelligence from 1956, we get the effect where we, the last, third, selectively forgetful clique, unknowingly help the first two actors. I mean, posters declaring “GRAIA responds with empathy!” are already decorating the walls at the “award-winning” Bruketa & Žinić communicators.
That’s not to say that neural networks, machine learning and LLMs don’t have a use case. Quite the contrary. I’ll list one internal example; a project that falls in line with what Karen Hao wrote about preserving the Māori language. Machine learning can be used to analyze nature and wildlife problems – which, ironically, ARE exacerbated by the massive electricity consumption of training and using artificial neural networks. Nevertheless, these novel methods can help in remedying our forests and saving our species, which is what Drew Purves talks about on the Google DeepMind podcast. He also mentions the IUCN: Save Our Species project that the Neuralab team is wholeheartedly working on. You can listen to the full podcast episode here…
In short, Gell-Mann amnesia is why you’ll keep seeing flashy obituaries for “SEO” and “Coding” delivered by folks who’ve never gotten their hands dirty in those fields. It’s also why the SEO industry’s wild ride is a perfect preview of what’s coming for vibe-coding, agent-washing, and other fresh AI fairy tales.
WhiteHat is the new BlackHat
WordCamp Europe, the largest gathering of WordPress experts, was held in June 2025, and the Neuralab crew was there for the n-th time. Those folks (us) power two thirds (!) of the entire meaningful web. The conference was also attended by John Mueller (Search Advocate @ Google), who held a panel on the topic of search engine optimization. To a colleague’s question “what can we do to rank in AI Overviews and GenAI tools”, John coldly replied: “absolutely nothing”.
Experienced Mr. Mueller said what we already established in the introduction. If you have worked diligently for years on content – blog posts, newsletters, PR outreach, technical SEO, a planned presence on social networks, etc. – you can give yourself a movie high-five, because in the era of naive and pure (so-so) LLMs, quality original content is truly king (and is stolen without an ounce of shame for training neural networks 🤷).
Let’s nerd out a little bit.
Modern LLMs are transformer-based artificial neural networks trained on massive datasets scraped from across the web. They don’t explicitly tally up brand mentions; instead, repeated references across vast training data subtly shape the model’s semantic associations – frequently appearing brands become more strongly tied to the contexts they show up in, which quietly influences how the model recalls them. So when someone prompts ChatGPT with “how to season the shakshuka?”, there is a higher chance that the robot will suggest your salt, and not your competitor’s honey.
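To make that concrete, here’s a deliberately naive toy in Python – nothing remotely like a real transformer, and every token in it is made up – but it shows the statistical intuition: the more often a brand co-occurs with a topic in the training data, the higher its sampling probability at generation time.

```python
import random
from collections import Counter

# Toy "training data": one brand simply appears next to the topic
# more often than the other. All tokens here are invented.
corpus = (
    ["shakshuka", "your_salt"] * 8 +    # your brand, mentioned often
    ["shakshuka", "their_honey"] * 2    # competitor, mentioned rarely
)

# Count which brand tokens co-occur with the topic token.
brand_counts = Counter(tok for tok in corpus if tok != "shakshuka")
total = sum(brand_counts.values())

# A (very) naive "language model": suggest the next token proportionally
# to how often it co-occurred with the topic during "training".
def suggest_brand() -> str:
    brands = list(brand_counts)
    weights = list(brand_counts.values())
    return random.choices(brands, weights=weights)[0]

print({b: n / total for b, n in brand_counts.items()})  # {'your_salt': 0.8, 'their_honey': 0.2}
print([suggest_brand() for _ in range(5)])  # mostly 'your_salt', occasionally 'their_honey'
```

No tallying of mentions anywhere – just frequencies quietly turning into probabilities. That’s the whole “mechanism” behind brand recall, minus a few hundred billion parameters.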
But if you thought it was shady that Google and Bing never reveal how their search rankings work, get ready for the modern sequel. Google, Anthropic or OpenAI can’t explain how their LLMs work … because they don’t fully understand them either. Transformers are classic “black box” tech: the way specific words get weighted in context is so vast and tangled that we can’t predict when – or why – a brand might pop up. That’s why you’ll often hear LLMs described as non-deterministic systems, or better yet, as Emily Bender, Angelina McMillan-Major, Margaret Mitchell & Timnit Gebru brilliantly put it: stochastic parrots.
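The “non-deterministic” part is easy to demo. Production LLMs don’t pick the single most likely next token; they sample from a probability distribution, usually softened by a temperature knob. A minimal sketch with made-up logits (the vocabulary and numbers are mine, not any real model’s):

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical next-token scores after "how to season the shakshuka?".
vocab = ["salt", "honey", "sumac"]
logits = np.array([2.0, 1.0, 0.5])

def sample_next_token(temperature: float) -> str:
    # Softmax with temperature: higher temperature flattens the
    # distribution, making unlikely tokens pop up more often.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))

# Same prompt, same "model", different answers on every run:
print([sample_next_token(temperature=1.0) for _ in range(8)])
print([sample_next_token(temperature=0.1) for _ in range(8)])  # nearly greedy: almost always 'salt'
```

Run it twice and the first list will differ – same prompt, different answer. Keep that in mind for the “brand monitoring” tools we’ll meet below.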
Then there’s the post-training, inference, test-time part of the work – when LLMs decide whether to tap into external search engines. How do they decide? Through a mysterious cocktail of internal guardrails, chain-of-thought reasoning, and self-supervised guesswork. In other words: no one really knows the exact math behind it. But we do know that ChatGPT leans on the Bing Search API, and Gemini (hopefully) pulls from Google’s own backyard. Which brings us full circle: nothing’s really changed. When you chat with a bot to research a product or service, you’re still triggering a search. And that search still rides on the same dusty ranking algorithms that have ruled the digital ad kingdom for the past two decades.
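Nobody outside those labs knows the real decision logic, so treat this as a hedged sketch of the general retrieval pattern – every function, trigger word and name below is hypothetical, not OpenAI’s or Google’s actual code:

```python
# A toy sketch of the tool-calling / retrieval pattern. In real systems
# the "should I search?" decision is learned by the model, not hand-coded.

def looks_time_sensitive(prompt: str) -> bool:
    # Stand-in heuristic for the model's internal routing decision.
    triggers = ("latest", "today", "news", "price", "2025")
    return any(word in prompt.lower() for word in triggers)

def search_web(query: str) -> list[str]:
    # Placeholder for an external search API call (a Bing- or
    # Google-backed endpoint in the real pipelines).
    return [f"snippet about: {query}"]

def answer(prompt: str) -> str:
    snippets = search_web(prompt) if looks_time_sensitive(prompt) else []
    # The LLM would now generate text conditioned on prompt + snippets;
    # i.e. your "AI research" is still riding on a search index.
    return f"answer to {prompt!r}, grounded in {len(snippets)} snippet(s)"

print(answer("what is the latest shakshuka seasoning trend?"))
print(answer("how to season the shakshuka?"))
```

Strip away the chat UI and it’s the same old loop: query → index → ranked results → words.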
Gold Rush = Gold Shovels
Now, why is WhiteHat the new BlackHat? Well, precisely because newly minted experts have discovered lukewarm SEO water and are selling decades-old WhiteHat tactics as if they were some new BlackHat LLM empowerment trick. The problem is that along with such AI-shortcut stories they also serve fresh AEO, GEO, SEM-Rush-$1,000-a-month tools which will supposedly “monitor the position of the brand” within LLMs and vector databases. Those tools and golden shovels cannot work reliably because they depend on an implicit, non-deterministic LLM background and, more importantly, on user context – an unverifiable and unlimited informational dark matter.
But just wait for the scammers from the first group and the charmers from the second to slide into your DMs, assuring you (wrongly) that there is a holistic, homogeneous, systematic decline in organic traffic – and that their silver bullet solves all conversion headaches.
And then you’ll again fall into the Gear Acquisition Syndrome trap, ignoring the 5 fundamentals of online presence – choosing an open-source platform, a business plan, a marketing strategy, positioning, and then a detailed EEAT content plan – all of which have been raved about for years. And if I hear one more time that “businesses in 2025 must be authentic”, I will literally run a stair marathon – just to throw myself off the top of the local skyscraper.
This is all a roundabout way to draw a parallel. In a few years, we will also see vibe-coding “experts” recommending to programmers that relational databases are the hot new stuff, that you must always start from the meaning, design and architecture of the IT system – and that you should definitely pay attention to the optimization, sustainability and readability of your code. Don’t-Repeat-Yourself!
Those future tips will surely finish with – mark my words – “designers, developers and managers should communicate!!!” Which is perhaps the most important point of this column: all the real and hard questions remain the same – and there’s no indication that GenAI will displace them. If anything, GenAI will further complicate them, given that LLMs are already in a phase of deep commoditization. Btw, an example of “engineers need to deeply understand the basics” 🙄 just landed.
Books, instead of a conclusion
Finally, how can we, the third group, fight more actively against the first two?! They say with memes and banter, but good ol’ education would also be prudent. Or as Joseph Weizenbaum cleverly wrote:
“Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, “I could have written that.” With that thought he moves the program in question from the shelf marked “intelligent” to that reserved for curios, fit to be discussed only with people less enlightened than he.”
So, in short, here are 5 recommendations, sorted by your available concentration:
1. “Empire of AI” – Karen Hao. Attention Span: 8h. The book was published in 2025 and represents the most comprehensive dissection of the overhyped AI industry. Karen Hao writes for the WSJ, The Atlantic, and the MIT Tech Review, and presents irrefutable evidence collected through 9 years of investigative journalism. Hao offers a silver lining which I also tried to weave through this column … GenAI has narrow, domain-specific benefits, but we need to carefully implement them where they make sense. Without pomp, fanfare and false promises. Her story of using LLMs in the preservation of the Māori language is an example of such a direction.
2. “The AI Con” – Dr. Emily M. Bender & Dr. Alex Hanna. This is a smaller, but still hefty book for which you will need approx. 5 hours. Bender and Hanna are “going all in” with really sharp scissors – to simply cut the power supply of the entire AI field. I see the needed anti-hype angle here, but my “feeling” is that most readers will find it unconstructive. BUT BUT, hear me out. The book is an excellent repository of ALL current AI problems and will give you a 360 view of the current and possible future challenges. There’s one missed chance though. Dr. Bender is a computational linguist, so I hoped the book would offer more insight into the “math-induced LLM bias” parts, which are kind of smallish. For instance, the Word2Vec project’s problems are fascinating to me, and here’s a well-known example (there’s a runnable sketch right after this list)…
king − man + woman = queen;
doctor − man + woman = nurse
Getting back to the great parts of the piece … It really shines in deeply explaining the problem of stochastic-parrot anthropomorphization – in simpler words, the negative consequences of attributing human characteristics to a robot. For example, the latest Algebra research (Kopal, Žnidar, Korkut) demonstrates that 44% of Gen Z “considers” AI to have a positive effect on mental health. Therein lies the future problem. If young people are already looking at transformers as psychotherapists – and we know that OpenAI uses that very moment as a UX engagement trap – we are dangerously close to giving Jonathan Haidt an opportunity for a new bestseller. We held a panel on the “Gen Z AI” topic in the Press House, where the key question was: how to use AI while at the same time supporting human values and social progress? Both books above give the best answers.
3. There is also a 45-minute public debate – essentially a compression of The AI Con book – where Bender discusses AI issues with Sébastien Bubeck, otherwise the author of the controversial Microsoft “Sparks of AGI” paper. Take a look at the full video and you’ll notice that Bubeck failed – nor even really tried, tbh – to dispute anything Bender said.
4. I am also sending you my column from 2023 (!), the spiritual predecessor of this text, which deeply explains the statistical nature of AI systems, the dark matter of the information field, and the Moravec and Jevons paradoxes. Reading time: approx. 13 minutes.
5. And if that’s too much for you, take a look at a 15-second Silicon Valley clip which portrays the two biggest enemies of vibe-coding and agentic systems: the irony of automation and the impossibly detailed specification…
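And a bonus for point 2: the Word2Vec analogies above aren’t folklore – you can run them yourself. A minimal sketch using the real gensim library and Google’s pretrained News vectors (a ~1.6 GB download on first run; exact neighbours depend on the vectors and the query method – the raw analogy often returns “gynecologist” rather than “nurse”, which is gendered either way, as Bolukbasi et al. documented back in 2016):

```python
import gensim.downloader as api

# Pretrained Word2Vec vectors trained on Google News (300 dimensions).
model = api.load("word2vec-google-news-300")

# king − man + woman ≈ ?
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# → [('queen', ~0.71)]

# doctor − man + woman ≈ ?  The nearest neighbours drift towards
# stereotypically female professions – bias baked into the corpus.
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```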
But you’re probably in an (unnecessary) Zoom meeting right now. With one eye you are watching this article. And from the beautiful Krasniqi bakeries below, the smell of delicious burek is taunting you. You no longer have the strength to squint at these sorted books. Your Attention Span is 0 or less, and you are brainstorming in multi-thread mode about where to buy yoghurt on the way home. Which brings me back to the beginning of our essay … that is … I have some good news – and some bad news for you.