Ethics to consider when implementing AI and similar systems
Very few roles in healthcare or business are unaffected by new technology. Artificial intelligence (AI) and related systems are being implemented at an increasingly rapid pace, while legacy systems are being scrutinized to see how they can be hardened against cyberattacks. Given the speed of adoption, experts urge business leaders to take care when evaluating how systems and data are used.
Kate O’Neill, founder and CEO of KO Insights, a strategic advisory firm, is one of those experts. Known as the “tech humanist,” O’Neill has a background in technology and content management at such firms as Toshiba and Netflix. She delivered a breakout session at this year’s HealthTrust University Conference, where she suggested several key points to make sure that technology supports our humanity in a meaningful way.
“It’s interesting to look at the impact of emerging technology and data-based decision-making on humanity and human experience, and what that means for different groups of people,” O’Neill says. “And in the context of healthcare, the impact on patient outcomes.”
The data trap
O’Neill has consulted with organizations that encountered difficulties with their technology—not in terms of malware or cyberattacks, but in how they’re using technology and where it’s leading them.
“There’s a lot of variation in how these things can go wrong,” O’Neill explains. “Depending on an organization’s approach to data, it could be looking at collecting too much, or things that aren’t relevant, and suddenly it has liability and guardianship for people’s vulnerable data. With healthcare, caution must be taken. There’s so much that’s important for providers to know about patients and for patients to know about their own healthcare. But there are appropriate times, places and ways to gather that information.”
O’Neill describes an example of an organization that gathered seemingly irrelevant data that could be misused. “About six years ago, Uber started gathering phone battery levels when riders summoned a ride. Uber decided to see whether riders whose battery level was very low would be more likely to pay a surcharge for the ride because they were desperate. And it turned out that riders would,” O’Neill says.
“They tested it, and they swore they would not put it into production—ever. But because they know this, it would be tempting to use that data. It will influence [other policies] in nuanced ways, and how they think about the relationship between themselves and the customer.”
Reminders of purpose
Certain applications of technology can make it much easier for a company to become predatory, O’Neill says. “There is a lesson here for any industry—this kind of exploitative relationship could take different forms. And if we’re not careful, it can manifest itself in the data we collect and in the decisions we make.”
O’Neill adds, “Organizations need to be in a constant cycle of reminding themselves why they’re doing what they do and who they serve. While they may tend to say, ‘Data, technology, AI and automation can make us so much more profitable; can streamline our decision-making; can make us 10 times more efficient, and we’ll be able to trim staff.’ But if that doesn’t align with what people on the other side of the experience need, then they are creating a disconnect with what they are supposed to be achieving.”
Applications in healthcare
When it comes to patients, employees and others who are affected by the technology organizations use, O’Neill urges thoughtfulness and attention to ethical concerns. “It’s important to have the willingness as an organization to hold the space for the complex conversation around who the communities are, literally and figuratively, downstream from our decisions,” she says.
Here are a few takeaways from O’Neill’s work in this area that can guide members as they implement new technologies and enact policies around those updates:
- Be cautious about data collection: Gather only relevant information and use it responsibly.
- Consider human impacts: Technology should reflect and amplify human values, not undermine them.
- Prioritize ethical considerations in tech implementation and decision-making.
- Maintain a constant cycle of reminding your organization of its purpose and who it serves.
- Strive for meaningful and respectful use of data to build lifetime value.
- Prepare for a future where human and machine contributions blend in the workplace.
- Adopt a strategic, optimistic and integrative approach to navigate future complexities.
“We must ensure that we are not only making things more efficient and profitable, but that we are also taking steps to empower people more,” O’Neill says.
Read more insights from Kate O’Neill in her book, Tech Humanist: How You Can Make Technology Better for Business and Better for Humans, and look for her new title in January 2025—What Matters Next: A Leader’s Guide to Making Human-friendly Tech Decisions in a World that’s Moving Too Fast.
AI, Innovation, Q4 2024