Augmenting Faculty Work: AI Accelerates Rigorous Teaching
Bradley Fuster, Provost and Vice President, Academic & Student Affairs, San Francisco Bay University
As Provost and Vice President for Academic and Student Affairs at SFBU, I see generative AI as an accelerator of good teaching, not a threat. Used well, AI removes friction from faculty work and frees faculty to focus on designing meaningful learning and mentoring students.
AI can handle routine tasks such as drafting early versions of syllabi, providing first-pass feedback, recommending discussion prompts, analyzing patterns in student performance and synthesizing large data sets. Faculty expertise is not replaced. Freed from mechanical work, faculty can devote their time to intellectual engagement, coaching and judgment.
If faculty are asking questions that AI can answer easily, they are asking the wrong questions. Academic integrity is not preserved by pretending AI doesn't exist. Rigor is built by having students interpret, apply, critique and create within context.
At SFBU, AI is embedded in our academic ecosystem rather than left for students to experiment with on their own. Transparency is essential: students should know when AI is permitted, how to use it and how to reflect on its role in their work. AI literacy becomes a learning outcome.
The fourth industrial revolution is changing how work happens. Faculty who model responsible AI use aren't lowering standards; they are aligning them with the world graduates will enter. As Jose Antonio Bowen puts it, "What higher education calls cheating, business calls progress." We must close that gap thoughtfully, not widen it out of fear.
Reframing AI in Universities: From Threat to Enabler
AI isn't the real threat; institutional denial is. Universities should stop searching for ways to protect themselves from AI. It's already here, and our students know it.
AI instead needs to be viewed as an enabler. For decades, higher education equated rigor with limited access to information and tightly controlled processes. AI breaks that model. Information is abundant and tools are powerful, so the value now lies in judgment, synthesis, ethics and application.
The shift in mindset also requires humility. Universities don't get to define the future of work on their own. Employers already assume graduates are proficient with AI, and if higher education treats AI primarily as a disciplinary problem, students will use it anyway.
At SFBU, AI is an academic skill, not a loophole. Faculty are encouraged to redesign learning around human strengths. Ask better questions. Design messier problems. Require interpretation and decision making. AI handles the routine; humans handle the meaningful.
“AI is embedded in our academic ecosystem at SFBU rather than left for students to experiment with on their own.”
Jose Antonio Bowen’s observation captures the moment perfectly. “What higher education calls cheating, business calls progress.” That statement is uncomfortable because it is true. Universities that embrace AI as an enabler will remain relevant. Those that do not will increasingly struggle to explain their value to students who already live in an AI-native world.
AI at Challenger Universities: Reach Expanded, Not Replaced
Challenger universities succeed by refusing to confuse tradition with effectiveness. In my University Business article on challenger brands, I argued that institutions willing to question the status quo are better positioned to serve today’s learners. AI is a prime example of where that mindset pays off. Challenger universities use AI to scale care, not cut corners.
AI expands faculty reach: it provides early feedback, surfaces learning gaps and flags students who may need intervention. That doesn't replace faculty; it makes them more effective. Instead of sorting through noise, instructors spend more time engaging with students.
In student support, AI-enabled advising tools provide instant, consistent answers. Students no longer wait days for responses to questions about degree progress or policies, and faculty advisors are freed to mentor, set goals and guide academic decisions.
A challenger university embeds AI directly into learning. Students learn to test ideas, challenge outputs and integrate AI into project-based work rather than use it as a shortcut. In modern workplaces, AI is assumed, not optional.
Challenger institutions don't adopt AI defensively; they adopt it strategically. They ask how technology can improve faculty impact, strengthen student outcomes and prepare graduates for the job market. That clarity is what differentiates challengers from institutions still protecting outdated models.
GenAI Guardrails in Education: Clarity Enables Progress
Good guardrails enable progress; bad guardrails signal fear. When embedding generative AI, universities need rules that clarify expectations without stifling innovation.
The first guardrail is alignment with learning outcomes. If AI can complete an assignment without student reasoning, the assignment is the problem, not the tool. Assessment must emphasize interpretation, judgment and application.
The second guardrail is transparency. Students should know when AI use is permitted, encouraged or limited, and they should be expected to disclose and reflect on their AI use.
The third guardrail is ethical literacy. AI outputs are not answers; they are starting points. Understanding bias, hallucination, data limitations and accountability is vital to preparing students for the fourth industrial revolution.
The fourth guardrail is equitable access. Students should have consistent, supported access to AI tools. At SFBU, we provide institutionally governed access so students are not left to navigate commercial tools alone or unequally.
Lastly, guardrails must evolve alongside pedagogy so that assessments are redesigned rather than policed. When expectations are clear and learning is authentic, academic integrity follows. AI does not weaken standards; poor design does.
Preparing for AI-Native Learning: Honesty, Literacy, Confidence
Preparation for an AI-native future begins with honesty. AI is already embedded in how work gets done, and graduates will be expected to know how to use it.
We teach students to use AI effectively, evaluate its outputs critically and integrate it into disciplined thinking. Students learn not only how to use AI but how to question it.
We tell students clearly that AI literacy is part of professional readiness. Employers won't ask whether they used AI; they'll ask whether they delivered results. Our job is to ensure graduates are both ethical and competent.
Faculty preparation is equally important. AI adoption involves rethinking assessment, experimenting with AI-enabled pedagogy and sharing practice across disciplines. Faculty who engage with AI themselves teach from experience rather than theory.
As knowledge work is redistributed between humans and machines, avoiding this truth does not protect students. It disadvantages them.
As Jose Antonio Bowen says, “What higher education calls cheating, business calls progress.” Preparing students and faculty for an AI-native future means closing that gap with intention, rigor and confidence. That is the work universities must now do.