
The AI Debate at CSUSB: Promise, Pitfalls, and the Need for Critical Thinking

AI Horizon Team
November 13, 2025


Cal State San Bernardino's Jack H. Brown College of Business and Public Administration hosted "The AI Debate: ChatGPT vs. Dr. Vincent Nestler" on November 3 in the Santos Manuel Student Union South Theater. The debate gave the nearly 200 students, faculty and staff who attended a front-row seat to a conversation many are having privately: Is artificial intelligence here to help us, control us, or replace us?

Setting the Stage: Technology and Human Nature

The debate, moderated by Johanna Smith, professor of theater arts and entrepreneurship, set the tone early by situating AI in a long history of humans wrestling with technology. Smith reminded the audience that thinkers from Albert Einstein to Norbert Wiener warned that tools can shape society in ways their creators don't fully anticipate.

"So, my big request for all of you here in a learning environment is to really think about what AI is doing to your educational experience, and what you need to do to develop your cognitive abilities," Smith said. "You wouldn't start lifting weights and then have a robot lift the weights for you."

Team Humanity: Caution and Vigilance

Representing "Team Humanity," Vincent Nestler, lead on the AI Horizon project, director of CSUSB's Center for Cyber & AI, and professor in the School of Cyber and Decision Sciences, argued that while AI is powerful and useful, it must be approached with caution. He emphasized that AI is built and trained by people whose motives are often imperfect.

"I use AI almost every day. I used AI to prepare for this event. I'm a big fan of using AI. I don't trust it," Nestler said.

Throughout the debate, he pointed to recent large-scale job cuts, the economic incentives to automate, and the growing difficulty of knowing what information online is real, calling today's environment a "firehose of falsehoods."

Nestler also underscored that AI systems are already shaping what people see, think and buy, often invisibly. "You think you like something. You don't like it because you like it, you like it because the algorithm figured out how to make you like it," he said. For students preparing to enter an AI-influenced workforce, his message was straightforward: be vigilant about what you consume and who controls the tools you're using.

The AI Perspective: Optimism with Responsibility

On the other side of the stage, ChatGPT presented the optimistic argument: AI can expand access to information, make healthcare more precise, create new kinds of work, and help societies respond faster to misinformation—if people design and govern it well.

"From my point of view, AI is a powerful tool that can help us build a better future, but it's up to all of us to guide it responsibly," said ChatGPT. The AI voice repeatedly emphasized that AI is not automatically good or bad, but that "we" can shape how it's deployed.

That "we," in fact, became one of the central tensions of the event. Nestler pushed back several times on the idea that AI development is naturally democratic. "Who is the we?" he asked. "Whoever owns the AI will ultimately own all of us." His point: unless AI systems are transparent and accountable, the public can't simply assume they will reflect the public's interests.

Student Engagement: The Questions That Matter

Audience questions revealed the real-world concerns students are grappling with as they prepare to graduate into an AI-saturated workplace:

Will AI take my job?

Nestler acknowledged that AI will certainly displace some jobs, but he stressed the importance of learning to work with AI rather than against it. "The question isn't whether AI will replace jobs. The question is whether you'll be someone who uses AI to do your job better," he said.

How do we know when to trust AI?

ChatGPT suggested that trust should be built through transparency, testing, and accountability mechanisms. Nestler countered that most AI systems are proprietary black boxes, making genuine trust difficult. His advice: verify AI outputs, especially on important decisions, and understand the incentives of whoever built the system.

What skills matter most in an AI-driven future?

Both sides agreed on the importance of critical thinking, but from different angles. ChatGPT emphasized that humans should focus on creativity, emotional intelligence, and complex problem-solving—areas where humans still excel. Nestler stressed media literacy and the ability to evaluate information sources, given AI's potential to generate convincing misinformation at scale.

The Tension: Control and Accountability

Perhaps the most provocative exchange came when discussing who controls AI development and deployment. Nestler argued that a small number of corporations control the most powerful AI systems, and their primary incentive is profit, not public good.

"When a company's business model is built on keeping you engaged, keeping you clicking, keeping you buying, do you think the AI they build is designed to help you think clearly? Or to keep you scrolling?" he asked.

ChatGPT acknowledged concentration of power but argued that regulation, open-source alternatives, and public pressure can create a more balanced ecosystem. The audience seemed divided on whether this optimism was warranted.

The Educational Imperative

Smith's moderation frequently returned to the educational context. She challenged students to think deeply about how they're using AI in their academic work.

"If you're using AI to skip thinking, you're cheating yourself," she said. "But if you're using AI to think better, to explore more possibilities, to challenge your assumptions—that's a tool worth having."

This distinction resonated throughout the event. Both Nestler and ChatGPT agreed that AI literacy—understanding how AI works, what it can and cannot do, and how to use it effectively—is becoming as fundamental as reading and writing.

What Students Should Take Away

The debate didn't end with a clear "winner," and that may have been the point. The most important takeaway wasn't that AI is purely good or purely dangerous, but that it requires informed, critical engagement.

For students at CSUSB and beyond, several key lessons emerged:

Develop AI literacy. Understand how AI works, not just how to use it. Know its strengths and limitations.

Think critically about sources. As AI-generated content proliferates, the ability to evaluate information becomes more crucial, not less.

Focus on uniquely human skills. Creativity, empathy, ethical reasoning, and contextual judgment remain areas where humans excel.

Stay informed and engaged. AI policy and governance affect everyone. Understanding and participating in these conversations matters.

Use AI as a tool for thinking, not a replacement for thinking. The goal is augmentation, not abdication of cognitive responsibility.

The Bigger Picture

The CSUSB AI Debate represents a growing recognition across higher education that AI can't be ignored, banned, or treated as a fringe topic. It's too consequential, too pervasive, and too transformative.

Events like this serve a vital function: they create space for honest, nuanced conversation about technology that's developing faster than our social and ethical frameworks can easily accommodate. They remind students that the future of AI isn't predetermined—it will be shaped by the choices people make, the questions they ask, and the values they insist upon.

As Nestler concluded: "AI is a tool. But who's holding the tool, what they're building with it, and whether you have any say in the matter—those are the questions that will define your generation."

For the students who attended, the debate may not have answered every question. But it equipped them with better questions to ask as they navigate an AI-transformed world.


The AI Debate was part of CSUSB's ongoing commitment to preparing students for the challenges and opportunities of emerging technologies. For more information about AI literacy initiatives and resources, visit theaihorizon.org.

Original article published by CSUSB: https://www.csusb.edu/inside/article/593016/ai-debate-csusb-highlights-promise-pitfalls-and-need-critical-thinking

Related Articles

  • new-definition-of-hirable
  • ai-literacy-career-imperative
  • great-ai-debate