My first interaction with ChatGPT started funny when it asked me to check the box that confirmed I was not a robot…
You have heard the term Artificial Intelligence (AI) and, more recently, ChatGPT. The simplest definition of AI is “the simulation of human intelligence in machines that are programmed to think and learn.” ChatGPT is “a large, pre-trained language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, which uses deep learning techniques to process and generate natural language.” In this article, I will give examples of what AI does, discuss how it affects trust, and explain what to do to ensure it doesn’t cause people to distrust you.
What is AI?
In 2007, I wrote the book “Bowling with a Crystal Ball.” In that book, I showed that technology trends are both very aggressive (more than any other trend in society) and predictable. A few years later, I wrote an article predicting that by 2027, a $1,000 computer would have as much computing power as the human brain, and a year later it would exceed it. Well, that time is now. But while AI might sound like science fiction, let’s consider a few day-to-day examples of it.
The ability of a car dashcam to read the speed limit on a sign has progressed to driving the car and making all driving decisions, including those affecting the safety of the car’s occupants.
Interactive voice response systems that answer when you call a business, replacing a human attendant in most “typical” interactions.
Deepfake technologies that replace your face with another person’s face in a video, making it look, very realistically, as if that person participated in the video.
When someone sends you a message on LinkedIn, note that LinkedIn offers you a few “standard” and “appropriate” responses that require nothing more than clicking a virtual button. Facebook lets you know that someone’s birthday is today, prompting you to wish them a happy birthday (and often even proposing the message itself), making it look like you remembered.
Mail merge functions with field codes allow you to “mass customize” email messages, making it look like they were created individually with every recipient in mind.
Finally, ChatGPT can generate content that appears to be created by a human to replace the effort of creating this content yourself.
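To see how little effort “mass customization” actually takes, here is a minimal sketch of the mail-merge idea from the list above: one template plus per-recipient field values produces messages that look individually written. The field names and recipient data are hypothetical, purely for illustration.

```python
# A single template with field codes ("merge fields").
TEMPLATE = "Hi {first_name}, congratulations on {milestone}!"

# Per-recipient values that fill in those fields.
recipients = [
    {"first_name": "Dana", "milestone": "your new role"},
    {"first_name": "Lee", "milestone": "10 years at Acme"},
]

# One line of code "personalizes" every message.
messages = [TEMPLATE.format(**r) for r in recipients]

for message in messages:
    print(message)
```

Each recipient sees only their own message, with no hint that hundreds of others received the same template with different names filled in.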
What does AI do to trust?
A week ago, I conducted a poll on LinkedIn, asking, “Do you think that Artificial Intelligence (AI) tools such as ChatGPT increase, reduce, or have no impact on the level of trust?” The results were as follows: 62% believed it would reduce trust, and 31% believed it would not affect trust. Nobody thought that AI would increase trust. (Note: the sample size was only 29).
Let’s analyze the use of AI according to the components of the relative trust model.
Competence is often related to uniqueness, creativity, and originality rather than generalization and the ability to rehash what was already said many times. When I asked ChatGPT to “explain how trust is relative,” I received a very generic answer. If generic is good enough (and often it is), then I consider it a helpful tool. However, when the action or content generated by an AI tool goes beyond the capability of the person using the tool, but is presented as if the person created it, it is misleading, which leads to distrust.
When we communicate at low intimacy (e.g., via email and not in-person, face-to-face), we often read between the lines and make assumptions about what the author really meant. But can we really do that when the content was created by AI and not by the person?
Misleading people into thinking that I wrote something when, in reality, I didn’t is, well, total BS. Making people believe that I put in much more effort than I really did (because I actually used an AI tool to do it for me) is something I hold in the same regard.
Time & Intimacy
But even when I know that the content (or any other output) was created by an AI tool rather than by the person communicating with me, and they make no attempt to mislead me into thinking otherwise, I still don’t trust them as much as I would if they had done it themselves. The level of trust we have in other people often correlates with the level of effort they put into the interaction. I used to think very highly of people when I received a “happy birthday” letter from them. I felt they had put significant effort into that interaction: it took them time to write it and mail it. But I don’t think as much of the effort when I know that all it takes is clicking one button on Facebook, even if the result looks the same, or even better.
How to use AI and not lose trust?
Even though 62% of my small LinkedIn poll thought that AI reduces trust, and none of them felt that it could increase trust, I don’t see it this way. And not because I’m a technology geek (which I am). It comes down to how we use AI.
When we use AI technologies to help us perform tasks better and faster, as long as we are open and upfront about it, they allow us to focus on areas where AI can’t help. Just like Excel, Photoshop, and electricity. People will not (and should not) be as impressed by our deliverables, because those deliverables really took much less effort than they used to. And that’s OK. We can still impress them with original and creative things we do that cannot be done by AI.
The problem starts when we mislead people into thinking that we ourselves created output that actually came from an AI tool. Such misleading causes distrust, and for a very good reason. Even when you don’t intentionally claim authorship of something created by AI, if you allow the recipients to think that you were the author (perhaps because they are not aware of AI’s capabilities), you will (and should) not be trusted.
Want to hear more? Listen to this week's podcast episode: https://podcasts.apple.com/us/podcast/s8e4-ai-artificial-intelligence-chatgpt-and-trust/id1569249060?i=1000596282940
Dr. Yoram Solomon is a trust expert, author of The Book of Trust, host of The Trust Show podcast, a two-time TEDx speaker, and facilitator of the Trust Habits workshop and masterclass that help build trust in organizations. He is a frequent speaker at SHRM events and a contributor to HR.com magazine.