Humans and ChatGPT mirror mutual language patterns – here’s how

ChatGPT and similar language models serve as a mirror for human language, revealing both its unique creativity and repetitive nature.

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” 

It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him.

In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course, I’m not a robot.

On the other hand, when my email client suggests a word or phrase to complete my sentence or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say?

Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

AI chatbots are new, but public debates over language change are not. As a linguistic anthropologist, I find human reactions to ChatGPT the most interesting thing about it.

Looking carefully at such reactions reveals the beliefs about language underlying people’s ambivalent, uneasy, still-evolving relationship with AI interlocutors.

ChatGPT and the like hold up a mirror to human language. Humans are both highly original and unoriginal when it comes to language. Chatbots reflect this, revealing tendencies and patterns that are already present in interactions with other humans.

Creators or mimics?

Recently, famed linguist Noam Chomsky and his colleagues argued that chatbots are “stuck in a prehuman or nonhuman phase of cognitive evolution” because they can only describe and predict, not explain.

Rather than drawing on an infinite capacity to generate new phrases, they compensate with huge amounts of input, which allows them to make predictions about which words to use with a high degree of accuracy.

This is in line with Chomsky’s historic recognition that human language could not be produced merely through children’s imitation of adult speakers.

The human language faculty had to be generative since children do not receive enough input to account for all the forms they produce, many of which they could not have heard before.

That is the only way to explain why humans – unlike other animals with sophisticated systems of communication – have a theoretically infinite capacity to generate new phrases.

There’s a problem with that argument, though. Even though humans are endlessly capable of generating new strings of language, people usually don’t.

Humans are constantly recycling bits of language they’ve encountered before and shaping their speech in ways that respond – consciously or unconsciously – to the speech of others, present or absent.

As Mikhail Bakhtin – a Chomsky-like figure for linguistic anthropologists – put it, “our thought itself,” along with our language, “is born and shaped in the process of interaction and struggle with others’ thought.”

Our words “taste” of the contexts where we and others have encountered them before, so we’re constantly wrestling to make them our own.

Even plagiarism is less straightforward than it appears. The concept of stealing someone else’s words assumes that communication always takes place between people who independently come up with their own original ideas and phrases.

People may like to think of themselves that way, but nearly every interaction shows otherwise: when I parrot a saying of my dad’s to my daughter; when the president gives a speech that someone else crafted, expressing the views of an outside interest group; or when a therapist interacts with her client according to principles her teachers taught her to heed.

In any given interaction, the framework for production – speaking or writing – and reception – listening or reading and understanding – varies in terms of what is said, how it is said, who says it and who is responsible in each case.

What AI reveals about humans

The popular conception of human language views communication primarily as something that takes place between people who invent new phrases from scratch.

However, that assumption breaks down when Woebot, an AI therapy app, is trained to interact with human clients by human therapists, using conversations from human-to-human therapy sessions.

It breaks down when one of my favorite songwriters, Colin Meloy of The Decemberists, tells ChatGPT to write lyrics and chords in his own style.

Meloy found the resulting song “remarkably mediocre” and lacking in intuition, but also uncannily in the zone of a Decemberists song.

As Meloy notes, however, the chord progressions, themes, and rhymes in human-written pop songs also tend to mirror other pop songs, just as politicians’ speeches draw freely from past generations of politicians and activists, which were already replete with phrases from the Bible.

Pop songs and political speeches are especially vivid illustrations of a more general phenomenon. When anyone speaks or writes, how much is newly generated à la Chomsky?

How much is recycled à la Bakhtin? Are we part robot? Are the robots part human? People like Chomsky, who say that chatbots are unlike human speakers, are right.

However, so are those like Bakhtin who point out that we’re never really in control of our words – at least, not as much as we’d imagine ourselves to be.

In that sense, ChatGPT forces us to consider an age-old question anew: How much of our language is really ours?

Editor’s Note: This article was written by Brendan H. O’Connor, Associate Professor of Transborder Studies at Arizona State University, and republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is a nonprofit, independent news organization dedicated to unlocking the knowledge of experts for the public good.
