LLMs, ChatGPT, and a Really Bad Idea

December 1, 2023 | By Chris Miciek

Op-Ed
A human hand touches a robotic one.


Author’s Note: As anyone paying even minimal attention knows, the availability of (and conversations about) algorithmic tools like the family of GPTs (generative pretrained transformers) has expanded rapidly. One could say they have moved fast and broken things. I originally began this piece in January 2023 but kept delaying completion as new facets emerged that seemed to demand attention. Now I find myself here, focusing and scaling back to a few core issues. The new goal: to surface considerations not found in mainstream conversations, while introducing and contextualizing for our community some of the brilliant work going on in this space.

The AI Landscape

Recently, the world seems to be enthralled by a series of “launches” of tools built on large language models (LLMs). LLMs are machine learning models trained to handle text in ways that appear meaningful. These “stochastic parrots,” as Timnit Gebru and Margaret Mitchell (both formerly of Google) called them, carry inherent problems.1 As the term “stochastic parrots” suggests, LLMs do not operate with understanding; they simply model language probabilistically, predicting which words are likely to follow which. The better the model, the more realistic or convincing its output seems.
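
To make the “stochastic parrot” idea concrete, here is a deliberately toy Python sketch (my illustration, not a description of how any production model is built). It “writes” by sampling each next word from counts of which word followed which in a tiny training text; no step anywhere consults meaning or truth. Real LLMs replace the word-count table with a neural network holding billions of learned parameters, but the basic move of predicting the next token probabilistically is the same.

import random
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text real LLMs are trained on.
corpus = (
    "the candidate wrote a resume . the candidate wrote a cover letter . "
    "the employer read the resume . the employer read the cover letter ."
).split()

# Count which word follows which (a bigram table: a vastly simplified
# stand-in for an LLM's learned parameters).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=8):
    """Emit words by sampling from the probabilities of what comes next.

    Nothing here models meaning or checks facts; the only information used
    is how often each word followed another in the training text.
    """
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g., "the employer read the resume . the candidate wrote"

Fluency scales with the size of the model and its training data; fidelity to reality does not automatically follow.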

Why start with this explanation? Gebru and Mitchell, two of the world’s top AI researchers, pushed against the hype and were fired for doing so. We should pay attention to their critiques and those of the many other AI researchers, technology ethicists, and social scientists focused on various intersections of technology and society.

Before going further, we need to clarify that LLMs, despite recent claims or speculations by OpenAI and others, are not the whole of what is meant by the term artificial intelligence (AI). LLMs represent only one possible development path toward artificial general intelligence (AGI), that goal of something “humanlike.” However, LLMs are commanding all of the attention and most of the funding.

Meta’s Galactica met sharp criticism at its launch late in 2022, and Meta quickly pulled it from public access. With Galactica, the issue was the absurd claims made about its accuracy and its usefulness as a generator of scientific papers. Scientists across multiple fields put Galactica and Meta’s claims to the test and found them sorely wanting. OpenAI’s ChatGPT and Google’s Bard have enjoyed a better reception.

ChatGPT was featured in multiple articles hailing its ability to do things like pass an exam from Wharton’s M.B.A. program (scratch the surface and it looks far less impressive than Watson’s performance on “Jeopardy”) or write acceptable college-level essays. One of the problems with these tools is that they draw from and mimic reality but often do not align with it. For example, there are numerous instances of ChatGPT concocting real-sounding yet utterly fictitious outputs: scientific papers and essays with fabricated citations, and personal reflection pieces with no grounding in the lived, embodied experience of the person.

So why should we exercise caution, especially with these particular algorithmic offerings? I’ll briefly outline six interrelated issues and encourage you to use these as starting points for your own deeper dive.

The Exploitation Issue

“When someone shows you who they are, believe them the first time.”
― Maya Angelou

Despite significant coverage from outlets including Vice, Time, Business Insider, and others, there seems to be little concern about OpenAI’s treatment of offshore workers. Poor wages and unfulfilled promises of mental health support top the list for the workers labeling data pulled from the internet, including the worst that humanity has posted online.2, 3 For professionals invested in work and careers, paying attention to what an employer says about its values through its labor practices not only makes sense, it is at the core of what we do and teach.

Public Beta Testing and Free Labor

College career development practitioners place a high value on collaboration and resource sharing throughout our profession. Drop a question on the NACE Community and you will likely get a wealth of responses, offers to share tools, even opportunities to collaborate. We take a very open-source approach to our work. This collaborative mindset seems compatible with the borrowing, open-source culture of Silicon Valley. The distinction, however, lies in whether we can choose to opt in, or cannot opt out at all. One simple example is reCAPTCHA. Every time we have clicked the boxes showing a street sign or a truck to access a website, we have also provided training data to corporations building AI.4 These checks are not technically compulsory, but they are near impossible to opt out of. In the current wave of generative AI, we see the Silicon Valley mindset that anything public is theirs to use. Consider the following:

  • LLMs, image generators, and vision systems are all trained on massive amounts of text and images pulled without permission from, or compensation to, writers and artists. Lawsuits are underway against the makers of products like DALL-E and Stable Diffusion, both from artists and from Getty Images.5

  • Companies train their models on inputs from public use, although OpenAI says it is not doing this going forward. Playing with Google’s Bard, for example, provides free labor for corporations. If we want better LLMs, that may not give us pause. However, the mindset behind it is one of harvesting public information for private gain.

  • OpenAI says it released the GPTs to give society time to adapt and determine responses. However, the practical implication is that the U.S. education sector, already reeling from COVID, has spent the past six months scrambling to pivot to this probable game changer. If the goal was to give time to adjust, why not convene cross-disciplinary working and advisory groups ahead of time to begin identifying issues and preparing for launches? Instead, OpenAI dropped a beta release on the public, prompting a race with other tech giants and leaving the burden of building responses to others.

The Editing Issue

ChatGPT and similar technologies receive support from those arguing that they can give writers, including students, a jumpstart on the writing process; the writer can then edit to personalize, tailor, and improve the draft into a finished piece. What this category of argument ignores or misses is that to be a good editor, one needs to be a good writer. The process of learning to write well begins with grammar and similar rule sets, but ultimately entails self-awareness, self-reflection, critical thinking, and similar higher-order cognitive skills. Consider our own stock-in-trade, the resume: a profoundly personal document. The best way to get better at anything, including this curious document, is practice. Here, that means the exercise of writing, not editing. As with autocorrect, the writer must be skilled enough to know when to ignore the tool to achieve the desired effect with their words.

The student who cannot draft and edit their way to a resume that connects with employers will be underprepared to interview or to function in the job. Critical thinking and communication skills remain at a premium in the labor market. These tools and technologies offer a route to improvement but become a shortcut for too many at a time when higher education already faces criticism about its ability to prepare students for the workforce. Can an argument be made for accessibility? Yes. However, we must acknowledge that situations where these tools genuinely expand accessibility are rare; they do not call for mainstreaming tools that weaken core skills of high value.

The Nature of Work Issue

The GPTs and similar technologies are hailed for reducing labor and ultimately creating new jobs in a more efficient workforce. While there are multiple lines of critique of these speculations (flattening of skills, faux-historical framing, lack of familiarity with what jobs entail, wage suppression being more likely than wage gains, and others), I will focus on one for brevity: wage suppression.

Wage suppression featured prominently in the Writers Guild of America (WGA) strike.6 Other fields have started paying attention to what could happen to compensation if organizations begin rewriting job descriptions in ways similar to those the WGA addressed. Consider, too, the common argument that technology will create better, higher-paying jobs and that the disruption is therefore worth it; we need to remind ourselves that highly skilled, highly paid jobs also get eliminated. Technologists and labor economists like to talk about the creation of better jobs, but the history is more complicated and has a real human cost that gets glossed over. The Luddites fought to retain their high-skilled trades rather than be pushed into deskilled, lower-wage textile roles imposed by the corporate enterprises of their day.7 This scenario plays out all along the industrial revolution narrative.

The Social Responsibility Issue

The major corporations building LLMs have demonstrated clearly and repeatedly that their first concern is profit, not safe rollouts. Their calls for regulation sound hollow when one considers that their own leadership could have charted different deployment courses. The call to pause AI development for six months looks more like a marketing stunt when viewed in the context of the rush to market without seeking input from impacted communities or experts in the space. OpenAI refuses to share information about critical factors, like training sets, for its recent release. Microsoft, one of OpenAI’s largest investors, gutted the internal ethics team that had been tasked with taking high-level concerns and operationalizing them with product development teams. Other big AI tech companies have done the same.8, 9

The claims made by these companies, coupled with their behavior, have prompted agencies like the Federal Trade Commission (FTC) to send warning signals.10 Our allied professions should take note. These disconcerting behaviors and trends stand contrary to our values as a professional organization. If we care about DACA (Deferred Action for Childhood Arrivals) students and the many aspects of diversity, equity, and inclusion in professional spaces—and we are right to—then that same care and attention belongs here.

The Veracity/Reliability Issue

It invented a plausible sounding story with details that sounded like something you would actually [have] seen reported about someone accused of sexual harassment. It did that because what it’s sort of designed to do is that. It’s designed to take language that it has seen before and make something that sounds natural. In this case, something that sounded natural was effectively libel.
― Alex Pareene, The Politics of Everything podcast11

The nature of LLMs gives rise to two concerns about what they generate when applied in the world. Remember, LLMs compute the probability of what word should come next in a sequence based on the parameters given in a prompt. This differs from the expectation (and illusion) that they understand and answer queries. Veracity is one concern: is this information factually correct? We see numerous accounts of GPTs attributing nonexistent works to academics and journalists. One law professor even faced a false accusation of sexual harassment as a result of an answer generated by ChatGPT that pointed to a nonexistent article attributed to the Washington Post, the case Pareene cites above.12

The creation of accurate-looking outputs leads to another problem: an increase in information pollution. Misinformation proliferation, or information pollution, received attention during the first Senate Judiciary Committee hearing on AI. Members of Congress naturally expressed concern about what the upcoming national elections will look like when plausible-looking, but fake, information generated by easy-to-access LLMs is shared over ubiquitous social media platforms. Information pollution goes far beyond that, however. The FTC has led federal agencies in addressing potential social harms, like this one, within its jurisdiction.13

The emergence of these problems should not surprise anyone. Again, the tools in question simply assemble strings of words based on probabilities calculated from existing texts harvested from the internet. While some of these systems can consult the internet (Bard, Bing), that does not guarantee the information delivered is factually correct.
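
As a contrived illustration of that point (the next-word table below is invented for this sketch and drawn from no real model), a decoder that simply emits the most probable continuation at each step will fluently assemble a reference to a “Washington Post article” even though no step in the loop ever checks whether such an article, or the event it would describe, exists.

# Hypothetical next-word probabilities, invented purely for illustration.
next_word_probabilities = {
    "According":  {"to": 0.9, "the": 0.1},
    "to":         {"a": 0.8, "the": 0.2},
    "a":          {"Washington": 0.6, "recent": 0.4},
    "Washington": {"Post": 0.95, "Times": 0.05},
    "Post":       {"article,": 0.7, "report,": 0.3},
    "article,":   {"the": 0.9, "a": 0.1},
    "the":        {"professor": 0.5, "allegations": 0.5},
}

def complete(prompt_word, steps=7):
    """Greedy decoding: always append the most probable next word."""
    words = [prompt_word]
    for _ in range(steps):
        options = next_word_probabilities.get(words[-1])
        if not options:
            break
        # The only criterion is probability; nothing verifies the claim.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(complete("According"))
# -> "According to a Washington Post article, the professor"
# Fluent, confident-sounding, and entirely unverified.

Systems that consult the internet bolt a retrieval step onto the front of this loop, but the final text is still assembled probabilistically, which is why correctness is never guaranteed.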

Consider our own arena of job searches. A proliferation of LLM-powered resume services may help job seekers generate documents for their search. However, how much will those job seekers learn about the process and themselves? Will the information reflect a collective experience of professionals, or carry wrong or outdated guidance, like resume objectives or going door-to-door with a stack of resumes? Will our work devolve to policing bad information and hoping our carefully curated resources get taken seriously (instead of just taken by web-scraping tools feeding more LLMs)? With GPT variants increasingly able to create entirely falsified, but legitimate-looking, documents, how long before applicants start trying to generate fake materials to best the applicant tracking systems and employers begin deploying countermeasures? Moreover, who is going to pay for all the subscriptions to the technology needed for hiring teams and job seekers to navigate this arms race?

Let’s Take the Middle Ground

New technologies can evoke heady excitement just as much as they can cause fear of the unknown (or repetition of the known). When changes come quickly, the temptation to hop on the bandwagon or bury our heads in the sand runs strong. We may experience FOMO—fear of missing out—or we hope it is another fad that will fade quickly, allowing us to get on with what we know and prefer for its familiarity.

I urge a middle ground, an intentional and collective move to explore, learn, and critique well that which gets launched into our lives and work. Let us retain our agency and our advocacy for our students, our communities, and ourselves. Ask the hard questions. Resist the pull to easy solutions. Hold on to our humanity and diversity, and insist on it from those who offer shortcuts that become anything but.

Endnotes

1 Bender, E., Gebru, T., McMillan-Major, A., and Mitchell, M. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ACM Digital Library, 2021. Retrieved from https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.

2 Xiang, C. OpenAI Used Kenyan Workers Making $2 an Hour to Filter Traumatic Content From ChatGPT. VICE, January 18, 2023. Retrieved from www.vice.com/en/article/wxn3kw/openai-used-kenyan-workers-making-dollar2-an-hour-to-filter-traumatic-content-from-chatgpt.

3 Kantrowitz, A. The Horrific Content a Kenyan Worker Had to See While Training ChatGPT. Slate, May 21, 2023. Retrieved from https://slate.com/technology/2023/05/openai-chatgpt-training-kenya-traumatic.html.

4 O’Malley, J. Captcha If You Can: How You’ve Been Training AI for Years Without Realising It. Techradar, January 12, 2018. Retrieved from www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it.

5 Vincent, J. Getty Images Sues AI Art Generator Stable Diffusion in the U.S. for Copyright Infringement. The Verge, February 6, 2023. Retrieved from www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion.

6 Kilkenny, K. Writers Guild Says It’s Pushing to Prohibit AI-Generated Works Under Contract in Negotiations. The Hollywood Reporter, March 22, 2023. Retrieved from www.hollywoodreporter.com/business/business-news/wga-ban-ai-created-works-negotiations-1235358617/.

7 Maynard, A. POV: We’ve Lost the True Meaning of the Term ‘Luddite.’ Fast Company, May 15, 2023. Retrieved from www.fastcompany.com/90895811/true-meaning-of-luddite.

8 Belanger, A. Report: Microsoft Cut a Key AI Ethics Team. Ars Technica, March 14, 2023. Retrieved from https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-microsoft-cut-an-ai-ethics-team-report-says/.

9 De Vynck, G., Oremus, W. As AI Booms, Tech Firms Are Laying Off Their Ethicists. The Washington Post, March 20, 2023. Retrieved from www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/.

10 Atleson, M. Keep Your AI Claims in Check. FTC Business Blog, February 27, 2023. Retrieved from www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.

11 Pareene, A. The Great A.I. Hallucination. The Politics of Everything Podcast, May 10, 2023. Retrieved from https://newrepublic.com/article/172454/great-ai-hallucination-chatgpt.

12 Sankaran, V. ChatGPT Cooks Up Fake Sexual Harassment Scandal and Names Real Law Professor as Accused. Independent, April 6, 2023. Retrieved from https://www.independent.co.uk/tech/chatgpt-sexual-harassment-law-professor-b2315160.html.

13 Atleson, M. Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale. FTC Business Blog, March 20, 2023. Retrieved from www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.

Chris Miciek is director of the Center for Career Success at Thomas Jefferson University. He has served on multiple national (NACE) and regional (EACE, MWACE) leadership committees and focus groups regarding technology in higher education. Miciek has also authored numerous articles and delivered multiple presentations on emerging technology within career services, especially on the topic of artificial intelligence. He can be reached at chris.miciek@jefferson.edu
