Generative AI Must Conquer Distrust To Offer a Work Aid

There is a dichotomy in perceptions about this AI technology — with some singing its praises and others predicting dangerous disruption of life as we know it.



The first half of 2023 has been marked by massive chatter about generative AI, represented by tools like Bard and ChatGPT that promise (or threaten) to automate tasks ranging from content creation to customer service and more.

As you might imagine, there is a definite dichotomy in perceptions about this AI technology — with some singing its praises and others predicting dangerous disruption of life as we know it.

Differing Perceptions on the Impact of AI

A recent study by KPMG and The University of Queensland reveals a significant difference in these perceptions on a global scale. Emerging economies, for instance, tend to view AI more favorably than Western countries. Managers and younger employees also are more likely to view the impact of AI favorably.

Interestingly, managers were less concerned about the negative impact of AI on jobs and were actually more likely than other groups to believe that AI will create jobs. Younger employees, and those with a college education, were also more likely to express comfort rather than distress over the potential impact of AI.

These views, of course, are in direct conflict with those of employees who already fear the impact that AI may have on their positions and continued employment.

In fact, there is one area of common concern for many — the fear that AI will take over many jobs. According to the aforementioned study, 77% of respondents share this concern.

Given the potential benefits that generative AI may hold to boost efficiency and productivity, though, HR and management staff can play an important role in ensuring its appropriate and effective implementation.

Reducing Angst Over AI

What will it take to minimize some of the concerns about the downsides of AI to encourage people to benefit from its many potential upsides?

Applying some level of oversight will help. When oversight tools are in place, such as monitoring AI for accuracy and reliability, creating codes of conduct, establishing independent AI ethics boards, and adopting international AI standards, these concerns can be minimized.

Another important step: focusing on the use of AI to automate routine tasks, the kinds of mundane work that employees do not enjoy, freeing them up for more rewarding work. Most people are comfortable with these types of applications. What they're not comfortable with is the application of AI to human resource functions such as performance management or employee monitoring.

That should come as no surprise: the "human" in human resources is an important clue to a function that is probably not poised to be replaced by bots anytime soon.

An Important Role for HR

It's important for organizations to get in front of the conversations and concerns that are taking place around the application of AI in the workplace. Its potential impact can't be ignored. But, despite the uncertainty that still exists, HR can help lead the way in the development of clear policies and guidelines on the use of AI in the workplace. In addition to the widespread concerns about the impact of AI on jobs, concerns also exist related to data security and personal privacy.

Writing for Harvard Business Review, Reid Blackman, author of Ethical Machines (Harvard Business Review Press, 2022), stresses that:

  • Businesses need to explicitly identify potential risks — specifically, "potential ethical nightmares."
  • Businesses need to recognize that the likelihood of these risks has "massively increased."
  • Business leaders are, ultimately, responsible for managing these implications.

To build trust in AI, HR and other business leaders need to ensure that their companies are using AI responsibly and ethically.

There are no right or wrong approaches at this point, just as AI itself is neither good nor bad. Each organization will need to determine its own approach to the use of AI and the types of policies, practices, and guidelines put in place to manage its use.

Citing software code leakage and data privacy issues, Apple, JP Morgan, and Verizon have either restricted or banned the use of ChatGPT and other generative AI tools. Italy was the first Western country to ban ChatGPT, and the EU is working on a legal framework (called the AI Act) for regulating the technology. The progressive City of Boston, on the other hand, is encouraging staff to experiment with this technology, perhaps paving the way for other cities to follow suit.

Now is the right time to begin conversations about the implications that generative AI and other disruptive technologies will have for your workforce and your customers.

The Newsweek Expert Forum is an invitation-only network of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience.
Content labeled as the Expert Forum is produced and managed by Newsweek Expert Forum, a fee based, invitation only membership community. The opinions expressed in this content do not necessarily reflect the opinion of Newsweek or the Newsweek Expert Forum.
