From Stranded Assets to Stranded Humans? The terms keeping us up at night

Not long ago, the scariest words in investing came from the world of climate and ESG. Reports brimmed with phrases like disaster recovery planning, irreversible tipping points, transition shocks, and stranded assets. If that didn’t make your palms sweat, you could always lose yourself in catastrophe bonds or resilience stress tests. The message was clear: ignore the jargon at your peril, because behind the technical terms lurked floods, fires, and multibillion-dollar losses.

Fast forward, and the buzz has shifted. Climate hasn’t gone away (far from it), but Artificial Intelligence (AI) has taken centre stage. And with it comes a fresh lexicon of intimidation: failure-free interactions, autonomous decision-making, deepfake proliferation, and AI job displacement. At first glance, it’s just technical jargon – easy to gloss over or treat as abstract. But behind these terms are real impacts and risks: the systems they describe are complex, opaque, and moving faster than most governance frameworks can keep up with.

From tangible terrors to digital disguises

The climate language had weight because it dealt with the tangible. A “transition shock” wasn’t an abstract idea; it was coal plants shut down overnight or insurers pulling out of Florida. “Disaster recovery” was something you could picture – a port under water, a supply chain disrupted by extreme weather. Grim, yes, but human and physical.

AI terminology hits differently. It’s cold, clinical, and oddly casual about risk. “Failure-free interactions” sounds reassuring – until you realise it’s shorthand for trying to engineer humans out of the equation. Just look at teenagers scrolling TikTok – perfect filters, curated feeds, instant answers. That’s their version of failure-free. But failure’s part of learning – it’s how we build resilience. 

Then there’s the human toll. “Deepfake proliferation” might sound technical, but it really means the system can generate convincing false information – and we might not always catch it. “Autonomous decision-making” is just a polite way of saying the machine goes ahead without asking you first. And as people grow increasingly reliant on these tools for decisions, validation, or even companionship, the boundary between digital and real-world experience can blur – which makes it all the more important to approach these technologies with awareness and balance.

Warding off the monsters: Lessons from the ESG crypt

Even though it’s Halloween, it need not be so scary. Yes, these challenges are real, but so are the opportunities. With thoughtful oversight and a willingness to adapt, we can harness the benefits of AI while keeping its risks in check – ensuring that technology serves people, not the other way around.

For us sustainable investors, the contrast is unsettling. Sustainability deals with risks you can often measure – carbon footprints, stranded assets, supply chain shocks. AI risks are harder to pin down: governance gaps, accountability failures, and the erosion of trust. And yet, just as with climate risk, the approach we’ve honed through ESG integration will matter here. It’s about applying the same discipline: identify the risk exposures, understand how they’re being managed, and ask the right questions. What is the company doing to mitigate those risks? How is it protecting users, clients, and its own social licence to operate?

Whether it’s factories shutting down in extreme heat or algorithms launched without proper oversight, the fundamentals don’t change: look ahead, assess the controls, and demand transparency. Experience with sustainability has shown that unmanaged risks don’t stay hidden – they surface, and when they do, the cost is real. AI will be no different. But with the right questions and a healthy dose of curiosity, we can make sure the story isn’t just about fear, but about progress, resilience, and opportunity.
