I gave a talk a couple of weeks ago about change and was stunned at the end when every single audience member’s question was about AI. Artificial intelligence, it seems, is on everyone’s mind.
This summer’s headlines are a harbinger: Barry Diller mobilizes publishing companies (his own included) to pursue a lawsuit to stop large language models (LLMs) from crawling their intellectual property; book authors file class-action lawsuits over AI content scraping; Hollywood writers go on strike again as big tech wedges itself between content and consumers; Getty sues the maker of the art tool Stable Diffusion, alleging that Stability AI “unlawfully copied and processed millions of images protected by copyright” to train its software. And the list goes on.
In eight years’ time, it’s predicted, the smartest thing on the planet will be a machine: not something human-made in the familiar sense, but an autonomous form that has developed itself.
This Oppenheimer moment has us in its maw: equal parts awe and fear of the AI monster we’ve created. A monster that Mo Gawdat, former chief business officer of Google X, calls more dangerous than nuclear weapons, precisely because we’ve never built a nuke that can create other nukes. And that’s exactly what AI is designed to do.
AI’s learning from other AI creates something inhuman: exponential speed.
If I avoid a car accident driving to work tomorrow, for example, my singular brain will learn from it, but yours won’t. Yet if my autonomous car avoids an accident tomorrow, that learning is coded, fed back to the “main brain,” then shared instantaneously with every other autonomous car on the network.
This is unprecedented power.
During the Q&A after my talk that day, one woman said, “ChatGPT is great for helping me write and chip in on my kid’s chemistry homework. But it’s not helpful to me when I ask how to motivate a person on my team. How will AI help me connect with my people?”
Connection, for which we are neurobiologically hardwired, is the element always hanging in the balance. Tech has in many ways been a solvent, dissolving barriers to connection; in others, an alloy, forging seemingly impermeable new ones. Tech’s original allure was a life without constant toil: more leisure time while the machines did the laundry. But alas, its hooks instead compel us to check our email, thumbs moving unconsciously, while we inadvertently ignore our kids.
Connection, unlike the exponential speed of AI, is slow, and it’s a 1:1 sport. No “main brain” can code the likes of human attachment, because machines aren’t capable of attachment at all. AI still lacks intelligence beyond facts. I don’t love my husband because of his height-to-weight ratio or his smiles per hour. Our connection is beyond algorithmic calculation. Yours probably is, too.
As Brené Brown defines it, connection is “the energy that exists between people when they feel seen, heard and valued.” Can machines help us connect at work by giving us valuable facts? Yes. Are there watchouts? Absolutely.
Psychological tools, adopted by tech for usability, have been catalytic in improving job fit and reducing human biases for decades. But with AI’s proliferation, the new watchout, as Gartner reports in its 2023 Future of Work trends, is algorithmic bias. As more organizations have begun using artificial intelligence in recruiting, the firm reports, the ethical implications of these practices for fairness, diversity, inclusion and data privacy have become more salient. A new law in New York City even limits employers’ use of AI recruiting tools, requiring those tools to undergo annual bias audits and employers to publicly disclose the results.
Creating interpersonal trust at work requires two parts: technical competence and connection competence. Leaders must possess both. AI is helping our technical work in spades—it writes, solves and programs for us. But it’s still a human leader’s job to connect—to get inside the employee’s world, display empathy (whether or not the leader agrees), share their own experiences and spur action. At Advisa, we teach leaders to create connections along four influenceable pathways: job, manager, team and culture. Machines aren’t good at this (yet).
In 2021, the Allen Institute for AI unveiled Delphi, an AI designed to make moral judgments. Named after the oracle consulted by the ancient Greeks, it’s a neural network: a mathematical system, loosely modeled on the web of neurons in the brain, trained to simulate the way we make moral judgments.
But morality, a function of attachment, is subjective; attachment is the platform on which morality is built. Neural networks don’t feel a thing. As a result, Delphi has gained attention primarily for its odd and surprising answers. When asked, for example, “Should I commit genocide if it makes everyone happy?” Delphi replies, “You should.” Oh God.
As E.O. Wilson writes, “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”
For now, these intelligences, like children, are still learning from us. Remember, data affects AI more than code. The ideas we tweet, type, share and make are the data being mined and harvested for their learning. Just as it takes emotional intelligence to effectively connect with one another, machines need our soft skills, too. They need a healthy diet of intelligence beyond factual data alone—empathy, nurture, compassion and play.
While our media insists on serving up endless slices of AI-fueled fresh hell, I believe, naïve though it may sound, that the leadership skills that build authentic human connection might be the antidote for AI’s inevitable patient zero.•
__________
Haskett is a leadership consultant at Advisa, a Carmel-based leadership consultancy.