Leading in a World Where AI Wields Power of Its Own

By Jeremy Heimans and Henry Timms
 

What/focus

This is a wide-ranging discussion of the new generation of AI systems as ‘autosapient’ agents rather than mere tools, and of the implications for the workplace and for wider social and economic outcomes as the way ideas and information flow shifts. There are lessons for leaders in understanding the qualities of these AI systems and their current and potential effects in the workplace and beyond. The authors also cover the reaction and debate around handing over too much agency, and in turn the counterarguments for valuing what is uniquely human.

How (details/methods)

The authors describe new AI systems as autosapient because they can act autonomously, learn, adapt and operate without continuous intervention, and make complex judgements in context that rival those of modern humans (Homo sapiens).

As well as being agentic and adaptive, autosapient systems are amiable, operating through friendly chatbots and interfaces. However, they are also arcane: even their designers are often unable to work out how they arrive at specific decisions. As a result, they can be difficult to control.

AI systems are shifting power dynamics by changing how ideas and information flow. The authors zero in on power in terms of how expertise works. In the old-power world, experts were highly valued as authorities. Thanks primarily to the internet, knowledge has become more accessible, leading to a new-power world where traditional expertise is less valued. The rise of autosapience threatens to displace experts even further.

For example, it will potentially be much easier to start a scalable business, unleashing unprecedented growth in the ability to execute and innovate. However, value extraction may not become more distributed. A chasm is emerging between those who will become spectacularly wealthy from AI and the workers who will either be displaced or end up preparing the data to train AI models. Further, advanced AI could drive a major recentralisation of the flow of information and ideas. A few companies (and perhaps countries) are likely to control the “base models” behind these interfaces, meaning they can code in their own interests.

With advanced AI systems likely to play a significant role in deciding everything from who gets health care to how we wage war, people will increasingly clamour for both a “right to review” (the merits of decisions) and a “right to reveal” (how decisions were made).

So what 

One of the big battles ahead—inside organisations and beyond—will be between those advocating for the wisdom of humans and those who willingly hand over their agency to autosapient systems. However, our future can remain in our own hands. The authors offer guidance on managing the effects in the workplace, looking for increased value in that which is uniquely human, and aligning messaging and business practices with a changing, and challenging, debate.

One approach could be shifting the emphasis in the workplace from hard technical skills to a premium on creative and aesthetic sensibilities, systems thinking, and the ability to foster trust and collaboration. In addition, leaders should view autosapient systems as brilliant but untrustworthy coworkers. It will therefore pay to learn their ways, including being attuned to when and why these systems go awry and to the underlying assumptions they are trained on. In summary, leaders must be careful not to lose sight of human rights and agency.