Three ways to build trust into your AI strategy
Every day, numerous headlines about artificial intelligence (AI) inundate our news feeds. While some highlight the technology's limitless potential, others elicit a more negative response. But what does the general public think about AI?
Assessing people's attitudes towards AI is crucial for leaders who strive to remain ahead of the curve. So, we dedicated the first of our ‘Taking the Pulse’ surveys – exploring the impact of disruptive technologies on life in the United States – to gaining insight into people’s perceptions of this technology.
What we found confirmed our initial view that organizations have a lot of work to do if they’re to win the hearts and minds of consumers. Yet equally, implemented with the right intent and guardrails, the opportunities to secure support and grow organizational value are virtually endless.
The results provide a blueprint for addressing those concerns: they identify consumers’ greatest fears and point to targeted strategies for building trust and boosting adoption.
We’ve identified three key focus areas.
1. Lead with purpose
While 58 percent of individuals surveyed don’t trust companies or governments to use their data for AI, people are generally more open to the technology when they understand its purpose and benefits.
As organizations move to increase AI adoption, they need to ensure that the ability to deliver transparent, ethical, and bias-free AI solutions becomes integrated with the organization’s purpose and values.
By leading with purpose, organizations can ensure that AI is used responsibly and with positive intent. For example, 61 percent of US consumers are open to AI being used to support their health with more accurate diagnoses, and 52 percent would support AI-assisted personal treatment plans. In fact, technology and healthcare are the most trusted industries to develop AI, according to the survey.
So, partnering with companies in these industries to develop AI that supports public health and the common good could make consumers more receptive to the technology.
2. Improve transparency
The business case for AI is clear, with higher productivity, efficiency, and the ability to quickly generate new ideas, insights, and customer experiences creating value for companies. However, 72 percent of survey respondents don’t feel that they know enough about AI as consumers to trust the technology.
As companies increase AI adoption, they should be transparent about how and why it’s being used, and how it impacts customer data privacy, if at all.
We found that 71 percent of respondents agree that AI needs to be better regulated. Governments are being challenged to come up with solutions that won’t impede innovation. In the meantime, businesses will need to fill the gap, demonstrating fair use and the protection of consumer data, and remaining transparent across all activities.
Like current environmental, social, and governance reporting, AI reporting mechanisms would offer an inside look at how and where AI is being used, building trust and accountability with customers, investors, and stakeholders.
3. Monitor, measure, and adjust implementations
Two in three US consumers are afraid of AI, with 64 percent believing that it will create new world problems. The only way to build trust here is to ensure that AI implementations are regularly monitored and measured, and that plans are in place should anything go awry.
Leaders should constantly evaluate their AI systems to ensure they are operating as intended. They should also be open to feedback that supports continual improvement. This all builds into creating a culture of AI accountability with clear ownership and responsibility for ensuring that AI is used fairly, transparently, and responsibly.
By creating a transparent operating environment, aligning to purpose, and monitoring implementations, leaders can develop an AI strategy that wins trust and delivers value.