Okay folks, before we start, I want to tip my hat to Vilas Dhar, the big gun at the Patrick J. McGovern Foundation, a philanthropic organization committed to “advancing data and AI for good.” Dhar’s been my North Star in all the recent AI hullabaloo, a guiding light through the often confusing terrain of ethical AI. Thanks to his wisdom I’ve got a handle on the whole concept, and it’s a game-changer.
I’ve been chit-chatting with you about responsible AI, and in my previous piece I suggested it’s up to us to set the tone and define what that responsibility really means. I often think of us as being on a rocket ship into a future where artificial intelligence is becoming as common as the skinny latte. AI’s pervasive societal reach will touch everyone from your mailman to your grandma. We therefore need to consider everybody who’s got skin in the game. It isn’t just about who’s developing the fastest gun in the West; it’s about who’s got the steadiest hand on the reins.
The idea of ETHICS as an acronym–and easy mnemonic–for the principles that keep us on the straight-and-narrow as AI evolves was introduced by Dhar, a self-described “entrepreneur, technologist, human rights advocate, and a leading global voice on equity in a tech-enabled world.” We’ll get to what the six-letter shorthand stands for in a moment. As we do, I’ll ask that you think of it as something like Star Trek‘s Prime Directive. For non-Trekkies, that’s the principled mission statement of Starfleet, mandating that no starship crew may interfere with the development of societies on newly discovered planets. To stay in full nerd mode a second longer, the Prime Directive is Starfleet’s clearcut declaration of responsibility as it explores the final frontier: the furthest, as-yet-undiscovered reaches of outer space.
So what’s ours with regard to AI?
According to many experts, we’re nearing the stage when AI will evolve into ASI, or artificial superintelligence, implying full sentience of a sort comparable to our own human consciousness and self-awareness. In grappling with how we’ll deal with that coming reality, Elon Musk has spoken about AI alignment, which has been described as developing principles and protocols to ensure that AI systems are designed to act in the best interests of humanity rather than in the narrower and potentially selfish interests of their creators. Now, while Elon hasn’t asked me, I happen to think trying to indirectly control ASI will be like trying to slow down the water gushing from an open fire hydrant with a toothpick … but more about that later.
Right now, I want to tackle what the individual letters in ETHICS stand for, breaking it all down like a well-planned Starfleet operation:
E is for Executives and board members. They’re the ones at the top making overarching company decisions, shaping the overall tone and attitude inside the boardroom, and creating a culture that emphasizes ethical AI. It isn’t about approving projects willy-nilly or flaunting their status. It’s about weaving ethics into the fabric of the decision-making process and ensuring their organization’s integrity remains intact. Executives are also the ones adjusting the financial dial, guaranteeing there’s enough juice to power ethical AI development.
T is for Technologists, engineers and developers. Their charge is to build applications that are comprehensible, transparent and secure. These digital whizzes need to craft AI tools that are as wide-open and clear as a summer sky, while ensuring they’re safer than a lockbox at the bottom of the ocean.
H is for Human Rights advocates. They’re like magnifying glasses scrutinizing the details in an AI puzzle. The mission? To ensure every single AI application respects the rights and dignity of all users. These advocates establish the guardrails, carefully monitoring the potential effects of AI on society’s most vulnerable and calling foul if any ethical lines are crossed.
I is for Industry experts. Think of them as the mapmakers in the ethical AI expedition. With their understanding of the AI landscape, they guide the journey, sharing valuable insights, pinpointing possible issues, and collaborating with others to navigate through the murky and uncharted waters of AI ethics.
C is for Customers and users. They’re the testers, the critics, the voices that give life to AI applications. Their experience, feedback and vigilance are the litmus tests of the AI tools’ ethical performance. They navigate the far-flung frontier of AI like the crew of the USS Enterprise (sorry, couldn’t resist), staying informed about ethical requirements and providing the invaluable insights that can help steer AI development.
S is for Society-at-Large. Think of society as the broad canvas where the AI picture is painted. The goal? To ensure that AI not only blends into the landscape but enhances it, enriching the lives of everyone, not just a select few. Society plays a crucial role in shaping AI tools that are inclusive, accessible, and beneficial for all.
These roles are all interconnected cogs in the machine of ethical AI development. Or to put it another way, they’re like musicians in a jazz band, each one playing a crucial part. At their finest they riff off each other, harmonizing their efforts to produce the sweet sound of ethical AI.
So there you have it. ETHICS. A simple six-letter word laying out the foundations and support structure for principled human conduct, for treating each other with fairness and respect, and for Vilas Dhar’s model of integrating artificial intelligence into that moral framework. It’s a complex mission, and a cooperative one. If we want to make it work, it’s not enough for any of us to just know our own roles. We have to get talking, build bridges, and ensure everyone’s on the same page.
Final Thoughts From AI Fleet Command
To anyone who reads this and cares about ethics in business, I suggest you round up your workmates, customers, execs and tech teams–in other words, your stakeholders–and begin training them about ethical AI. But don’t procrastinate. I can’t emphasize enough that the time for this is now, given the rapid pace of AI development. If we don’t take responsibility as private citizens, we shouldn’t squawk when government agencies step in with draconian regulations.
In terms of integrating ETHICS into your company’s philosophical approach toward AI, consider investing some time and effort into forming a dream team cherry-picked from different departments, tasking it with setting up a system that collects feedback about AI systems and ultimately creates some basic guidelines for their use. Also, be transparent. Get out there to your customers and clients to gauge their concerns and take their collective pulse. Diversity of ideas and perspectives is our strength. Talk to human rights advocates, industry experts, and others to educate yourselves and ensure you’re considering all the implications of AI.
We’re at an exciting time when creating AI-based products feels like the best way to explore what the technology can do. The field’s wide open, the horizon endless. But if we don’t consider how these products will be used, who they’re serving or how they might affect vulnerable people–if we act like kids in a toy factory, grabbing at shiny new playthings because, well, they’re there within easy reach–we’re missing the chance to act in our best and most mature capacity as members of society. So let’s use ETHICS as our guiding light, getting everyone involved as we create not just products but a healthy ethical AI ecosystem … and a future that’s good for everyone.
Or to offer one last nerd-blast: As we explore that final frontier, we must always keep ethics in mind, remembering ASI is NOT that far away.