Tina Austin: Pioneering AI Education and Ethics in Higher Education
Introduction
Tina Austin is at the forefront of AI education and ethics, bringing an accessible approach to making complex AI concepts understandable for diverse audiences. Teaching at both USC and UCLA, she is an AI ethics advocate and educator, and also serves as an AI Educational Consultant & founder at GAINable.AI. Austin's work focuses on helping students and faculty critically evaluate AI tools and preparing them for an AI-enhanced future. Her journey from biomedical research to AI education has uniquely positioned her to guide others through the AI revolution in education.
Tina Austin’s Background and Journey
Tina Austin's journey is unique, starting in Northern Ireland and leading her through biomedical research to becoming a leading voice in AI education and ethics. Her background spans multiple disciplines, and she helps students and faculty navigate the complex landscape of artificial intelligence through critical evaluation and hands-on experimentation.
Austin's ability to make complex AI concepts accessible comes from her background in biomedical research and experience translating between different disciplines and audiences. Her childhood in Northern Ireland, moving frequently and learning to adapt to different environments, prepared her for the constant change required in AI education.
Austin explains, "My background, as you mentioned to your audience, it was biomedical research and deconstructing papers for my audience. So my students at the time, we’d bring in a paper, we’d dive into the details and break it down for them.”
Teaching at both USC and UCLA simultaneously while developing new courses demonstrates the high demand for AI literacy education across institutions.
Austin also shares, “One thing led to another. And as I mentioned last quarter, it got to a point where I was, while teaching at UCLA, I got some classes to teach at USC and develop a new course and now developing more faculty workshops because there’s such a need for this right now.”
She adds, “I had to learn different accents, and you saw with one of my experiments, I tested AI with different accents. I had a Northern Irish accent when I was a kid because my parents, they were studying at the time there, and so I had to, I really wanted to blend in and wanted to be like the other kids.”
Innovative Educational Methods: "Unblooms"
One of Tina Austin's significant contributions to AI education is her "Unblooms" method. This approach reimagines Bloom’s taxonomy for the AI age, recognizing that traditional educational hierarchies need updating when AI can perform many cognitive tasks.
"One of the things I recently came up with to solve this problem was this Unblooms method that I’m sure you saw I posted about as well. If you’re familiar with Bloom’s taxonomy, right? It’s a step-by-step way to help someone who’s doing lesson planning to go from understanding and then all the way you go up to critical thinking at the top where you create. And now in the day and age of AI, AI kind of does all that or some of that for you,” Austin explains.
The Unblooms Critical Evaluation Scale helps students evaluate their own college essay writing so they can:
- Recognize how their voice is different from AI (Level 1-2)
- Understand why AI can't capture their specific experience (Level 3)
- Evaluate what's missing from AI's version and why that matters (Level 4)
- Create something that transcends what AI could produce (Level 5)
Together, the Framework and Scale teach students to recognize the difference between authentic cognitive work, which uses human reasoning strategies (e.g., "I'm stuck on why this moment mattered, so I'm going to freewrite about it, talk to someone, or revisit the memory"), and cognitive outsourcing, which delegates the core thinking to machines (e.g., "I'm stuck on why this moment mattered, so I'll ask AI to explain why it might have been meaningful").
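For readers who want to see the scale's shape at a glance, the five levels can be sketched as a simple self-assessment checklist. The level wording below paraphrases the bullets above; the data structure and the helper function are illustrative assumptions, not part of Austin's published framework:

```python
# Illustrative sketch of the Unblooms Critical Evaluation Scale as a
# self-assessment checklist. Descriptions paraphrase the article's bullets;
# splitting Levels 1-2 and the scoring helper are assumptions for demonstration.

UNBLOOMS_SCALE = {
    1: "Recognize how my voice differs from AI's version",
    2: "Point to specific places where my voice differs",
    3: "Understand why AI can't capture my specific experience",
    4: "Evaluate what's missing from AI's version and why that matters",
    5: "Create something that transcends what AI could produce",
}

def highest_level_reached(completed: set) -> int:
    """Return the highest level for which every lower level is also complete,
    reflecting that the scale is cumulative rather than a pick-list."""
    level = 0
    while level + 1 in completed:
        level += 1
    return level
```

A student who has done Levels 1-3 but skipped ahead to 5 would still score 3, since the scale treats the lower levels as prerequisites for the higher ones.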
Critical Evaluation of AI in Education
Tina emphasizes the importance of understanding AI’s capabilities and limitations. She highlights this through innovative classroom experiments that reveal bias in AI systems. Student experiments revealed concerning trends, including 25-30% using AI as emotional companions, highlighting the need for critical AI literacy.
"One of the other things that AI can’t really do well is evaluate AI. So we had students evaluate the responses that AI gave and reflect on it,” Austin notes.
She also mentions, “I was surprised actually when I did a survey in my class of 160 students to see how many students are using AI as a companion. And it was shocking that I still had about 25 to 30%.”
The key to effective AI education is having students evaluate AI outputs rather than just consume them, developing critical thinking skills essential for the future. Faculty workshops are crucial because educators need support to integrate AI thoughtfully rather than prohibitively.
AI Applications Beyond Creative Tasks
Protein language models represent AI applications beyond creative tasks, showing how AI impacts scientific research and discovery.
Tina Austin is also pioneering new ways to use AI in the classroom. “I use Generative AI not just for enhancing education, but also as a topic in the course for teaching students about the cross section of generative AI and regenerative medicine. We discuss how new medical AI companies are innovating with AI and using generative AI to get ahead. For example, improving protein design and detection methods, or creating open-source tools like AI-CRISPR and being able to do more precise gene editing. There’s so many applications that are now becoming available to us, not just for education, but for science."
Austin also shares that she works to make sure her students are educated users of AI, not relying on it as a catch-all solution, but teaching them to ask tough questions before deciding to use it. "First of all, we consider whether or not a problem should be solved with AI, based on its limits for that particular use-case. I also want them to think about the ethical implications, such as whether it’s ethical to use image-generating tools, how you can cite them, whether it’s considered fair use, and the mistakes that they might generate."

She also describes building her own classroom tool: "I’ve created Auto-GPTs before, so I decided to develop a custom GPT for studying Regenerative Medicine specifically for my classroom. This was so all my students have equal access, not just those who can afford a paid version. I tailored it to fit our class needs while keeping student privacy in mind. I named this GPT our Regenerative AI Mentor; it assists students just like I or their TA would. It’s available on the OpenAI platform. Students tested it by asking questions about lecture summaries, what to expect during office hours, and recent medical discoveries. Any instructor can now make a free, customized alternative to a paid version. Ours is aligned with our syllabus, making learning efficient and fun. It prevents reliance on Canvas discussion boards or broad LLMs, which can make more mistakes."
She adds, “Several years ago, I developed new interactive surveys and methods for students to engage with QR codes and evaluate one another’s presentations, even in large classrooms of 80-160 students. When students present, based on everybody’s feedback, we are able to give unique feedback to that student using generative AI. We managed all this without sharing any identifying information."
Austin cautions that “AI is not new; it can hallucinate and be biased. I want my students to be aware that this is not a catch-all solution. The transformer (the T in GPT, which stands for Generative Pre-trained Transformer) allows it to be highly creative, but this can also result in occasional fabrications."
She also notes another caveat: “While we’re excited about the rise of Gen AI in healthcare, precision medicine, and the science behind new AI biotech start-ups, we want students to understand that science goes beyond just generating information from past data. Language models are good at predicting words or images, but biology requires testing theories through experiments. The ability to see patterns that we didn’t see before is very interesting for scientists, but science in general involves more than just recognizing patterns, so we want students to know not everything can be solved with Gen AI."
Impact on Students and Faculty
Tina’s work has a profound impact on both students and faculty. She emphasizes the importance of reflection and nuance in understanding AI developments.
"What makes it worthwhile is knowing that you’ve been able to help even one person or two do something differently and it helped them. So workshops, faculty come back at the end and say, wow, that really changed my thinking in my course and now I’m able to do this differently,” Austin states.
She also advises, “It’s really important to give yourself time to really reflect on these developments. And I find that that’s the important part, the nuance, the detail. And if you’re just filling up your calendar with multiple, let’s say, teaching commitments or jobs, you may miss out on those small things.”
Austin’s journey reflects the evolution of education itself: from traditional lecture-based teaching to AI-enhanced critical thinking approaches that prepare students for an uncertain future. Her perspective combines the analytical rigor of a trained scientist with the adaptability of someone who learned early to navigate change and translate complex ideas for different audiences. This unique combination allows her to approach AI education practically while maintaining the critical evaluation skills needed to guide others responsibly.
What stands out most is Tina’s emphasis on teaching students to evaluate AI rather than simply use it, developing the critical thinking skills that will serve them regardless of how technology evolves. By implementing this approach across two major universities and extending it to faculty development, she’s demonstrating how AI education can prepare learners for tomorrow’s challenges.
As someone who’s spent years exploring the intersection of technology and education, I find Tina’s approach particularly compelling. She represents what many believe is the future of AI education: educators who don’t just teach about AI but help students develop the critical thinking skills needed to navigate an AI-enhanced world responsibly.
What excites me most is seeing how she’s tackled the fundamental challenge of education in the AI age. Her “Unblooms” method recognizes that traditional educational frameworks need updating when AI can perform many cognitive tasks; rather than abandoning structure entirely, she’s creating new frameworks that emphasize human judgment and critical evaluation.
Addressing Concerns in College Admissions
As generative AI becomes part of how students write and apply to college, admissions teams are asking a new question: how do we recognize authentic student voices in the age of AI? Authentic student voice refers to writing that reflects a student’s own thinking, experiences, and perspective, even when tools like AI are used for support. In admissions, authenticity is less about avoiding technology and more about demonstrating cognitive ownership.
Tina warns against over-reliance on AI, citing Stanford research showing decreased critical thinking since generative AI’s rise.
Austin sees academia as uniquely positioned to address AI’s rapid evolution in education. Unlike medical devices that undergo years of clinical trials, AI tools evolve too quickly for traditional vetting processes.
Practical Advice for Students
- Stay curious but critical. Don’t view AI as either a cheating tool or something to avoid entirely.
- Use AI as a thought partner.
- Remember: Not everything needs an AI solution. Just because AI can do something doesn’t mean it should.
- Prioritize privacy and ethics. Don’t share personally identifiable information or student data with AI tools.
With employers increasingly expecting AI familiarity, Austin believes students need technological fluency, but with important caveats. “It’s important to have some kind of technological savvy, to be aware about these tools, but not be naive,” she notes.
Integration Across Disciplines
Austin’s journey illustrates AI’s potential to break down academic silos. Previously focused solely on biomedical research, she now teaches across three distinct disciplines: (1) computational biology; (2) social sciences and humanities; and (3) communication. In her computational biology classes, students use custom GPTs to debug code and brainstorm solutions. Her work in social sciences explores AI bias and ethics, where students discovered gender disparities in AI-generated podcasts.
Austin’s approach centers on what she calls “process over product.” Rather than seeking perfect AI-generated outputs, she encourages students to embrace the “messy” thinking process. This philosophy extends to her assessment methods. Austin designs evaluations that test knowledge “in the age of AI” rather than simply checking if students can arrive at correct answers.
Recommendations for Counselors
Working within frameworks such as Unblooms, counselors can help students design authentic writing experiences that integrate ethical reflection on AI use: pre-writing reasoning checkpoints, mid-task critique moments, and post-draft synthesis that demonstrates genuine thinking.
For counselors, the shift requires moving from corrective editing to reflective inquiry. Rather than refining prose, prompt students to excavate meaning by asking generative questions that encourage reflection (“What moment sparked your interest in this field?”); requesting concrete examples (“Describe one instance when helping others changed your perspective”); and clarifying ethical boundaries around tool use (distinguishing between mechanical support and conceptual outsourcing). This reframing centers student agency and positions counselors not as ghostwriters, but as partners in metacognitive development.
Evaluating Student Work in the AI Era
Scientific journals such as Nature have faced the exact challenge now confronting college admissions: how to maintain integrity when AI can generate professional-quality content. Their solution isn't detection, but rather mandatory disclosure with clear boundaries on what constitutes authentic authorship. Leading journals now require:
- Detailed disclosure: Authors must state what AI tool was used, how it was used, and why (grammar checking, data visualization, literature review, code generation)
- Clear placement: Disclosures appear in Methods sections or Author Contributions statements
- Authorship boundaries: AI cannot be listed as an author because authorship implies accountability, which machines cannot assume
Similarly, students should disclose AI use in their application essays: whether they used AI to check grammar, organize ideas into an outline, or generate examples to guide their drafting. This transparency reveals where the thinking happened. Just as scientific journals recognize that using AI for grammar is different from using AI for hypothesis generation, admissions can distinguish whether a student’s use of AI is:
- Acceptable: Grammar checking, citation formatting, translation assistance
- Problematic: Topic brainstorming, insight generation, narrative structure, "getting unstuck" on meaning
- Disqualifying: Prompt-to-essay generation, even if the student "edited" it afterward
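As a thought experiment, this three-tier distinction could be encoded as a simple lookup over a student's disclosed uses. The tier names mirror the list above; the keyword matching, the function, and its name are hypothetical illustrations, not a real admissions policy or tool:

```python
# Hypothetical sketch: mapping disclosed AI uses to the article's three tiers.
# Tier membership follows the bullets above; the exact strings and the
# severity-ranking logic are illustrative assumptions.

DISCLOSURE_TIERS = {
    "acceptable": {"grammar checking", "citation formatting", "translation assistance"},
    "problematic": {"topic brainstorming", "insight generation",
                    "narrative structure", "getting unstuck on meaning"},
    "disqualifying": {"prompt-to-essay generation"},
}

SEVERITY = ["acceptable", "problematic", "disqualifying"]

def classify_disclosure(uses: list) -> str:
    """Return the most severe tier triggered by a list of disclosed AI uses.
    Unrecognized uses are ignored in this sketch."""
    worst = "acceptable"
    for use in uses:
        for tier, examples in DISCLOSURE_TIERS.items():
            if use.lower() in examples and SEVERITY.index(tier) > SEVERITY.index(worst):
                worst = tier
    return worst
```

The ranking reflects the "key principle" below the list: a single disqualifying use dominates, no matter how many acceptable uses accompany it.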
The Key Principle: Like journal authorship, if an AI tool generated the core intellectual contribution, the "what matters to you" and "why," then the work lacks authentic voice, regardless of whether the student can explain it afterward.
Rethinking Roles in Education
Instead of hunting for AI use, it is more effective to ask:
- Does this essay show signs of metacognitive awareness?
- Does the writer reflect on how they thought through a problem?
- Are there moments where they question their assumptions?
- Do they critique or contrast perspectives?
- Does the essay feel like an over-polished shell, or does it have authentic cognitive messiness?
This process works for admissions officers and application readers too. Instead of reading the essay first, read in this order:
- Activities list and transcript - What has this student actually done?
- Short answers - What's their natural voice?
- Recommendations - How do others describe them?
- Essay - Does this fit with everything else?
This way, the essay isn't judged in isolation. Ask: does this essay cohere with the student seen everywhere else? If yes, it's probably authentic, even if it's polished. If not, something's off.
Recognition and Further Engagements
Tina Austin’s work has gained recognition beyond UCLA, including selection as an “AI innovator” at the ASU GSV Summit and invitations to speak across the UC system. She has taught at UCLA, USC, CSU, and Caltech, and her faculty workshops on AI adoption have reached institutions across North America, Europe, Australia, and Asia. Tina advises the California Department of Education AI taskforce, the Los Angeles AI Taskforce, and the Marconi Society’s AI Institute, and her AI framework was recently featured by OpenAI and Microsoft.
tags: #tina #austin #ucla #research

