AI in higher education: The choices are ours


It took Facebook five years to reach 100 million users; TikTok did it in nine months. OpenAI’s ChatGPT? Two months. Adoption statistics don’t prove that generative artificial intelligence and large language models will be the next internet-level transformative technology, but it sure looks that way. Microsoft is embedding AI in Bing search, Google is rolling out Bard, and generative AI is a shining bright spot in a tough venture capital environment.

From marketing to law to science, everyone’s talking about the implications. There’s plenty of worry about the future, but it’s leavened with the hope that AI might in fact expand human potential and promote human thriving if we can figure out the right ways to use and control it.

Nowhere is that conversation more urgent than in higher education. I was privileged to host presidents and provosts at my latest higher education salon, where we heard insights from a tech leader from Google’s Responsible AI division. The company has grappled with AI’s social impact for years, and our guest foregrounded many of the questions educators are wrestling with: “What will it mean for our students if they can run to a chatbot and have everything they need generated for them? What will it mean for their intellectual development and capacity? What will it mean for creative disciplines if an AI system can write poetry, appear to reason, seem to have moral discussions, take stances on things?”

Enlightened educators point out that education has always had to evolve to accommodate both new technology and the broader challenges learners will face in their world, not their grandparents’ world. AI isn’t going anywhere, they say: Trying to ban it is a fool’s errand; math teachers never succeeded in banning calculators, either. Instead, use it — and, if that means changing the way you teach, then go do that. For 2.6 million years, humans have used tools to magnify their abilities. These new tools can magnify our abilities to understand and change the world. Let’s help students use them that way.

As someone who has spent my career in and around both tech and higher education, this optimistic position appeals to me. But it won’t be easy. It can’t be done simply by talking about it, or by propounding high-minded principles and leaving it at that. In important respects, what AI requires of us is the precise opposite of what megatrends in higher education have been driving us toward.

Start with the most urgent issue for many instructors and schools: assessment. How do you evaluate students when ChatGPT can churn out perfectly serviceable five-paragraph essays in seconds? Even the tech leader told us that, in the short term, universities may need to revert to in-person testing with pen and paper (no matter that many students can barely write by hand anymore!). This obviously suggests a reversal of the megatrend toward scaling education via ever more remote, online learning.

The creators of tools such as ChatGPT are trying to build detectors to flag auto-generated content — just this week, OpenAI introduced AI Text Classifier — but they’re the first to admit these detectors are imperfect and “game-able.” So, as Ian Bogost wrote in The Atlantic this week, that may mean more complex bureaucracy for instructors who must decide what to do when AI Text Classifier suggests a student “may” have submitted machine-generated work.

Beyond assessment, generative AI systems raise the broader issue of what we’re teaching, and how we’re teaching it. If an algorithm can earn a solid “B” on your assignments, maybe you need to reinvent your assignments — or even your curriculum? Should we be relying more on one-on-one assessments that probe individual learners’ depth of knowledge? What do humans need to learn now? What, in fact, makes us human, and different from generative AI systems?

New York Times columnist David Brooks recently suggested that we should be teaching learners how to develop a distinct personal voice that AI can’t mimic, stronger presentation skills, a “childlike talent for creativity,” comfort with unusual and unpredictable worldviews, empathy and situational awareness.

How much of that aligns with today’s increasingly tight focus on a rapid career return on investment from higher education, important as that may be? These are critical questions, and smart educators are raising them. But other smart educators are asking: If your institution’s business model now relies heavily on poorly paid, struggling adjuncts, who will do all that personal assessment, curriculum reinvention and deeper philosophizing about why we’re actually doing any of this? That work is desperately needed, but who’ll pay for it?

Technology always leads to new careers and specializations, and so it is with generative AI. For example, as we discussed, there will be prompt engineers — people who are paid to craft inputs that elicit the best possible responses from AI systems. That may be a narrow specialty, comparable to the profession of search engine optimizer. But everyone will need to learn how to judge AI output — especially given that today’s systems are stunningly good at confabulating completely fake facts and sources.

Innovative educators are already experimenting with “critical computing” instruction to help learners recognize these systems’ limits and potential biases. This is vital — but, again, not easy. Seven years after the explosion in concern about “fake news,” how many students really receive solid instruction in evaluating online information? Not nearly enough.

As the executive pointed out, one positive “superpower” generative AI systems may offer is the ability to deeply personalize education: targeting instruction at a learner’s precise level, and helping individuals overcome their specific obstacles to understanding, at scale. This is exciting — but, as edtech and AI companies pursue it, they need to reflect on 70 years of experience with technology-driven personalized learning (much of it disappointing). Resource-strapped institutions will repeatedly be tempted to substitute AI for human contact, and, in an era when mental health is front and center, that would be a recipe for disaster.

In the future, some people are going to wield AI systems to accomplish truly great things — from folding proteins to serving customers to solving crucial social problems. The question is: How many people will get that privilege? Will our education system train just a few people to be overlords of AI algorithms, at elite institutions where they benefit from real human contact and great faculty? Will everyone else simply be guided and prodded and manipulated by those algorithms, both in their classrooms and after they graduate?

It falls to us to make those decisions. And, whether by action or inaction, we’re making them right now.

James Barrood is CEO of Innovation+ and an adviser to Tech Council Ventures/JumpStart Angels.