Don't Pause, Act: How to Unlock AI's Vast Potential by Driving More Trustworthy and Responsible AI

While the pace of recent advancements has been breathtaking, some of the biggest AI advances just over the horizon could be even more profound.


With explosive new AI advancements, it's clear we are at the forefront of a technological revolution with breathtaking potential, one moving so fast it seems to be impacting everything, everywhere, all at once. Faced with this change, some leaders argue we need to pause AI development. Instead of pausing progress at this critical moment, we need to act to shape it: deliberately, proactively, and appropriately prioritizing the development of trustworthy and responsible AI to achieve its many potential benefits. Here is how and why.

While dramatic new advancements in generative AI tools have caught the public's attention, AI has been enriching our lives for years. AI is so pervasive, intuitive, and integral to much of our daily lives that we often don't even recognize when we are using it. It's embedded in our phones and apps today to make communicating easier, driving directions faster, our pictures sharper, and weather predictions more accurate.

And while the pace of recent progress has been breathtaking, some of the biggest AI advances just over the horizon could be even more profound. In just the last two years, AI has been creating smarter opportunities everywhere. It is now being used to identify cancer earlier, tackle our climate challenges, empower people with disabilities in remarkable ways, protect us from billions of dollars in financial fraud, save thousands of lives on the road, and build better antibodies to protect us from the flu and Ebola.

These gains are huge. In fact, AI may be one of the most consequential tools of our lifetimes, but only if people can trust that these technologies are developed and deployed in responsible ways. However, one study shows consumers have a number of legitimate concerns about the trustworthiness of AI today. This lack of trust could limit adoption and delay the deployment of potentially life-saving applications. But if we drive trust deeper into our AI systems, we expand what AI can deliver.

So what does it take? AI researchers, developers, deployers, companies, and policymakers need to lead in driving trustworthy AI from the start, so that AI systems become continuously more accurate, reliable, safe, secure, privacy-protecting, understandable, and free from bias. The good news is that key leaders have begun to recognize this responsibility and have started to elevate more trustworthy, ethical, and responsible AI.

For example, Anthropic announced it is now using a set of moral "values" to train its AI, drawing on the UN's Universal Declaration of Human Rights and other established best practices. To make AI decision-making more understandable, OpenAI is developing ways to see inside the so-called AI "black box" and examine the roles specific nodes play in decision-making. And because photorealistic AI images raise concerns about potential fakes, Google announced it is watermarking AI-generated images and building new tools that can identify AI-generated fakes. These are important initial efforts. Rather than pausing this progress, we should call on AI innovators everywhere to step up and do even more.

For example, when deploying AI, companies can and should adopt NIST's new AI Risk Management Framework to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The framework gives organizations a practical way to ensure AI results are valid, reliable, safe, secure, accountable, interpretable, privacy-protecting, and fair (free from harmful bias).

Despite many well-meaning efforts, we also know there are always bad actors looking to turn new tools to harmful ends. To thwart them, we need to keep humans engaged to help spot the clever and unanticipated ways bad actors may misuse increasingly realistic, human-like tools. That is why, for example, we still need rigorous human review of apps before they reach app stores, to weed out the fraudulent and fake apps that bad actors have already started creating to undermine our security, privacy, and safety.

Policymakers also need to step up and support a trust-based approach to AI advancements. As a first step, enforcement agencies need to focus on accountability by making clear they will rigorously enforce existing bedrock consumer protection laws against those who use AI in harmful ways. Second, because AI is often trained on huge volumes of data (sometimes including personal data), Congress needs to establish a comprehensive national privacy framework that protects consumers and makes privacy protection a basic digital right. Third, establishing appropriate guardrails for such a broad category of technologies in such a fast-moving domain will require policy leaders to get smarter fast and embrace solutions that are flexible, responsive, and adaptable to emerging risks. And fourth, Congress and the Administration need to build on their prudent investments in capitalizing on AI's transformational impact, including the national network of National AI Research Institutes, by creating a new institute specifically aimed at advancing more trustworthy AI.

Pausing AI development now wouldn't just stall this promising ethical progress; it could actually do more harm than good. If we paused, we could lose precious scientific research focused on responsible AI, risk falling behind those who embrace AI's vast potential in the global race for one of the most critical technologies of our time, and delay solutions to some of our biggest challenges.

So far, we've seen only a fraction of AI's positive potential. The biggest opportunities will only be achieved when leaders take proactive and pragmatic steps to advance trustworthy AI and ensure we expand the aperture of opportunity to benefit everyone. This is our trustworthy AI moment. Let's not pause AI, but act responsibly to ensure our AI future is a trusted future.

