AI Promised the World. It’s Not Delivering.

By: Ryan Stanton

For those who know the story of Elizabeth Holmes and Theranos, it may seem hard to remember a time when the company was unstoppable.

While her name is now permanently associated with fraud and deception, the truth is that, for a time, the company founded by a 19-year-old Holmes in 2003 seemed poised to change the world. Its promise to revolutionise the healthcare industry with fast, accurate and painless blood tests caught the attention of many, driving the company to a peak valuation of nine billion dollars in 2014. With potentially paradigm-shifting technology fronted by Holmes' captivating public persona, Theranos looked unstoppable.

There was only one problem. It was all a lie.

Despite claims that they could run a full range of blood tests from a pinprick of blood, the company never developed the technology, instead engaging in a variety of deceptive practices to hide this fact. Of course, as is often the case, Theranos's secret eventually came out, bringing down a company that had once been praised for its "phenomenal rebooting of laboratory medicine".1 Indeed, Theranos and Holmes now serve as a prime example of a company overpromising and underdelivering (or in this case, failing to deliver at all).

One of the most interesting facts about Holmes and Theranos comes not from their downfall, but from the origin of the company. While Holmes may have lied about plenty, her stated motivation for creating Theranos seems noble on its face: the attempt to create a blood-testing process that used minimal amounts of blood stemmed from Holmes' fear of needles, a fear many can relate to. Unfortunately, at the beginning of the venture, multiple experts in the field told Holmes that her hope of creating a full suite of tests that worked from a pinprick of blood was not viable2, advice she ignored and which would later prove correct. This, I think, is the most telling part of the Theranos story: despite knowing their dream was unachievable, the company continued to sell it as a promise.

Another Impossible Promise

On August 8, 2025, OpenAI unveiled GPT-5, the long-awaited next generation of its large language model chatbot, claiming it could provide "PhD-level" abilities.3 The world's richest and most controversial man, Elon Musk, took the claim a step further, hyping his company's AI chatbot Grok as "better than PhD level in everything". In May of the same year, Mark Zuckerberg touted the ability of AI chatbots to replace human relationships and friendships.4 Zuckerberg has made similarly lofty claims about Meta's other technologies, arguing that in the future, anybody who doesn't own and use AI glasses will "be at a disadvantage".5

Increasingly, AI is being integrated into every aspect of our daily lives, with its loudest proponents claiming it will solve all our problems. In the fast-food industry, the owner of KFC, Pizza Hut and Taco Bell claimed to be adopting an "AI-first mentality"6 (though the company is reportedly rethinking the approach after a customer used the AI to order 18,000 glasses of water).7 Interested in learning a new language? Duolingo believes AI can help, with its CEO claiming the technology can make employees "four or five times" as productive8 (though once again, its adoption has led to a significant backlash from customers who doubt its effectiveness9). Keen to play some games to relax? EA, the publisher of a wealth of large franchises including EA FC (formerly FIFA) and Battlefield, recently announced a 50-billion-dollar sale, relying heavily on the promise of AI to streamline development costs (though gamers and developers alike are less than thrilled). Everywhere you look, AI promises the world. But promises aren't reality, and there are plenty of good reasons to be suspicious of those with a vested interest in AI's success.

The Unfortunate Truth

As a media scholar (and one of the PhD-level people that OpenAI is aiming to replace), I am deeply sceptical of AI. Many of my doubts stem from fundamental issues with how the technology works. While the title “artificial intelligence” implies a level of thought, and the term “large language model” (LLM) seems to indicate an understanding of language, the reality is that these tools neither think nor understand the meaning of words. 

A full explanation of how they work is beyond the scope of this article, but at the most basic level, the way LLMs and generative AI treat language is more akin to a complex math equation. Your prompt is one side of the equals sign, and the technology attempts to "solve" for the most likely response. Besides being extremely power-intensive (with negative environmental impacts10), this process is also the reason that, despite the hyped improvements in more recent models, AI continues to suffer from widespread "hallucinations"11, where the chatbot either regurgitates inaccurate information or invents entire falsehoods. Indeed, OpenAI CEO Sam Altman has admitted that hallucinations are not an engineering flaw in LLMs but "mathematically inevitable".12
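To make the equation analogy concrete, here is a deliberately oversimplified sketch (my own toy example, not how any real model is built): a program that, given a word, returns whichever word most often followed it in a tiny sample text. Real LLMs use neural networks trained on vast corpora, but the basic move is the same, predicting the statistically likely continuation with no understanding involved.

```python
# Toy illustration: "next-word prediction" as pure statistics,
# with no grasp of what any of the words mean.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows each word in the sample text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent follower, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often in the sample
```

Notice that the program confidently answers even though it has no idea what a cat is; scaled up by billions of parameters, that same gap between statistical fluency and understanding is where hallucinations live.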

Is AI Making Us Dumber?

The issues caused by these hallucinations are significant and may exacerbate societal problems rather than solve them. A recent report indicated that 45 per cent of AI responses based on news articles contained "significant" errors, with a whopping 81 per cent of responses having some form of issue.13 In this age of misinformation, relying on AI seems like a recipe for disaster. More importantly, current research points towards AI having a negative effect on its users, "eroding critical thinking skills".14 Furthermore, while AI is often thought of as neutral, numerous studies15 have exposed biases in AI models16: an unsurprising reality once one acknowledges that the biases of their creators can filter in.

I could go on and on about the issues with AI (and indeed, some of my poor friends have had to endure my rants on the topic). Ultimately, however, all these criticisms can be summed up in one sentence: the reality of AI falls drastically short of the promise its creators espouse.

With all this in mind, I should acknowledge that I am sympathetic to those who want to believe the promise of AI. The world we live in is fundamentally broken in so many ways, with political polarisation, environmental destruction and unspeakable injustice occurring daily. And that's not even counting the more mundane tasks the technology could help with. The promise of a "magic bullet" technology that can ease any of the issues we face, just like the promise of a needle-free blood test, is enticing. And it is true that this technology can help in certain situations. As a tutor to international students, I have found machine translation a helpful tool for conveying complex ideas discussed in our courses (though it still has imperfections that need correcting). My friends who work in software engineering are adamant that AI can make the tedium of coding less strenuous (understandable, considering that code, like an LLM, treats language as a sort of math). AI-assisted live transcription is also potentially revolutionary for the hard of hearing. But these are individual solutions to individual problems, and we should not be forced to swallow all the issues with these AI models in order to benefit from them.

No Silver Bullet?

The reality is, there is no one solution that will solve all our problems. AI cannot create. Every response it gives is based on the existing work of talented artists, writers and experts whom it often fails to properly credit. Working as a tutor, I have seen its negative effects firsthand: students inadvertently turning in assignments full of invented information and incorrect sources. In seeing AI as the solution to their problems, they have only created more, and greater, problems.

This, more than anything, is the danger of AI. Proponents like Zuckerberg and Altman want you to believe that it can enhance—or even replace—human connection, but the opposite is true. If you want to learn, create or connect, you can’t do so through AI. You should go to the source, read what others are saying and listen to the experts who have dedicated their lives to solving these problems. Step outside the tech bubble these companies want to trap you in and connect with the real world.

The truth is, no one machine can save the world, nor can any one individual. So don’t give in to the promise of the technology. Connect with reality. Connect with others.  


Article supplied with thanks to Signs of The Times

About the Author: Ryan Stanton is a PhD Graduate from the University of Sydney. A Media and Communications scholar, he is constantly torn between wanting to believe the promise of new technologies and being disappointed by the reality. 

Why Humanoid Robots Will Arrive Sooner Than You Think

By: Michael McQueen

Not long ago, humanoid robots sat firmly in the category of “cool demo, wildly impractical.” They dazzled on conference stages, tripped over their own feet on YouTube, and then quietly disappeared back into research labs. That phase is ending fast.

Humanoid robots are moving from spectacle to systems. From factories and hospitals to aged care facilities and, eventually, our homes, they are inching closer to everyday life. Goldman Sachs estimates there could be more than 13 million humanoid robots in use globally by 2035. That’s less than a decade away. While most of these robots will appear in workplaces first, the ripple effects will be felt across households, cities and entire industries.

The drivers are converging rapidly. Advances in AI vision, balance and hand dexterity are accelerating. Labour shortages are intensifying as populations age and fewer people enter physically demanding roles. Cultural expectations are shifting around convenience, care and the value of time. And younger generations are far more comfortable sharing space with machines than any before them.

For leaders and professionals, the question is no longer whether humanoid robots will matter, but how quietly and quickly they will reshape expectations. 

1. From Sci‑Fi Spectacle to Quiet Utility 

The first major shift is psychological. Humanoid robots are not arriving with dramatic flair or cinematic ambition. They’re slipping in through side doors, doing the dull jobs no one wants to talk about at dinner parties.

We already live with robots, even if we don’t think of them that way. They vacuum our floors, mow our lawns and assist surgeons. In fact, more than 80 percent of prostate surgeries are now performed using robotic systems. COVID accelerated this trend, particularly in agriculture and logistics, where closed borders and labour shortages forced rapid adoption.

Humanoid robots represent the next logical step because they fit into environments built for humans. Factories, warehouses and hospitals don’t need to be redesigned when the robot has two legs, two arms and can use existing tools. That’s why companies like BMW, Hyundai and Tesla are already trialling humanoid robots on factory floors for repetitive and physically demanding tasks. Hyundai has publicly stated it plans to deploy humanoid robots in US factories from 2028.

China offers a glimpse of what early adoption looks like at scale. Humanoid robots are already working as tour guides, retail assistants, warehouse staff and service workers, with some even assisting in policing and security roles. Dedicated robot training centres allow machines to learn by observing humans rather than being painstakingly programmed line by line. 

The implication is clear. Early adoption will be quiet and practical rather than flashy. Organisations that treat humanoid robots as boring infrastructure rather than futuristic mascots will extract far more value from them. 

2. Cobots, Not Job Stealers 

It’s impossible to discuss humanoid robots without confronting workforce anxiety. Elon Musk has said Tesla aims to build up to 100,000 humanoid robots per month within five years. Numbers like that naturally raise concerns about job losses.

But the reality is more nuanced. Humanoid robots are particularly good at jobs humans increasingly struggle to fill. Dirty, dangerous and repetitive work. Heavy lifting. Night shifts. Tasks that lead to injury, burnout or high turnover.

Robots are already being used for warehouse picking, post‑surgery rehabilitation support and repetitive assembly. Deloitte predicts physical AI and humanoid robots will play a major role in addressing labour shortages, especially as populations age and healthcare demand grows. 

Rather than replacing humans, most experts expect robots to change the nature of work. This is where the idea of "cobots" becomes critical: collaborative robots that work alongside humans, taking on physical or repetitive tasks while people move into supervision, creativity, problem-solving and decision-making roles.

For organisations, the real opportunity lies in redesigning jobs, not eliminating them. Professionals who focus on skills like judgement, empathy, oversight and systems thinking will become more valuable, not less.

3. Impressive, Fallible and Still Learning

The technology behind humanoid robots has advanced rapidly, particularly in vision systems, balance and hand dexterity. Some recent demonstrations have been so realistic that audiences questioned whether they were watching a robot or a human in disguise.

At the same time, viral clips of robots face‑planting, freezing mid‑task or dropping objects are not anomalies. They are part of the learning curve. This is what early‑stage intelligence looks like in physical form.

Robots perform best in controlled environments like factories and warehouses. Homes are far more challenging. Pets move unpredictably. Children run. Objects shift. Lighting changes. Most humanoid robots today still rely on some level of human supervision or remote assistance for complex tasks.

This phase closely mirrors the early days of self‑driving cars. Highly impressive in certain contexts, unreliable in others. The risk is not that robots will fail, but that humans will assume they won’t.

Organisations that succeed will design systems that assume occasional failure and build safeguards accordingly. 

4. The Home Robot Will Sell Time, Not Wow 

When humanoid robots enter homes, affordability and accessibility will dominate the conversation. Today, a humanoid robot like Neo costs around $20,000. By 2035, that figure is expected to fall closer to $10,000 as manufacturing scales and components become cheaper. 

But ownership won’t be the starting point for most people. Early home robots will be aimed at wealthy households, aged care facilities and people with mobility needs. LG has already demonstrated prototype home robots capable of folding laundry and preparing simple meals, while projects like Tombot, a robotic puppy designed to support people with dementia, show how emotionally intelligent design can support care settings. 

For most households, the first exposure will likely come through shared robots in apartment buildings, hotels or assisted living environments rather than outright ownership. Leasing models and robot-as-a-service offerings will play a significant role in improving accessibility.

The real appeal is not novelty. It’s time. Even saving 30 to 60 minutes a day by offloading repetitive tasks changes how people live, work and rest.

5. Trust Will Matter More Than Life-like Design

Safety, privacy and psychological trust will ultimately determine whether humanoid robots are accepted into daily life. Most are designed to be lightweight, slow and compliant, stopping when they encounter resistance.

Privacy is a genuine concern. Robots rely on cameras and sensors to navigate spaces, raising questions about data storage, access and ownership. There is also the risk of over‑trust. Robots that look human can trigger emotional responses even when people know they are machines.

Experts agree humans will remain in the loop for a long time, particularly in homes and healthcare settings. Acceptance will depend less on realism and more on whether people feel in control of the technology.

There is also a genuine fear response to consider. An estimated 20 percent of the population experiences some degree of robophobia. Ignoring that reality would be a mistake. 

What This All Adds Up To

Humanoid robots are not coming to replace us, impress us or entertain us. They’re coming to quietly reshape how work gets done, how care is delivered and how time is reclaimed.

The trends are clear. Practical utility over spectacle. Collaboration over replacement. Rapid progress with real limitations. Time as the killer feature at home. Trust as the deciding factor everywhere. 

The future won’t arrive with a dramatic unveiling. It will arrive task by task, shift by shift, home by home. The robots are learning fast. We should too.


Article supplied with thanks to Michael McQueen.

About the Author: Michael is a trends forecaster, business strategist and award-winning conference speaker. His most recent book Mindstuck explores the psychology of stubbornness and how to change minds – including your own.
