The AI Tutor: A Human-Centered Approach

June 12, 2025

AI can be a wonderful thing, improving productivity, supporting research, and assisting professors. But, as someone who uses AI daily, I have to say that both instructors and students should proceed cautiously. General AI models, like ChatGPT and Google Gemini, need work before they can provide sound tutoring. AI tutors must be built with the professor’s permission, content, and oversight, and they must protect the professor’s intellectual property. They should be an integral part of a course, not just an add-on. They need to be secure. They also need a “personality” that is friendly and supportive and can adapt to a student’s needs.

Ground Truth and RAG

In our view, the content chosen by the professor or instructor should be used as “ground truth” for the AI tutor. This means giving the model access to the course material and having it “ground” its answers in that material, a technique known as Retrieval-Augmented Generation, or RAG. Grounding the model increases its accuracy and minimizes inconsistencies between the professor’s teaching and the model’s general training, which often includes errors, misinformation, and AI fantasies. A RAG model first references information from the professor’s material and only then falls back on what it “remembers” from its general training.

For example, during our development, the AI answered questions about a carburetor video with no specific training on carburetors. We were delighted that the AI could watch a video and give real-time explanations of its content. Then a human pointed out that the AI had called a “sight plug” an “idle mix screw.” I personally wouldn’t have known the difference, but a tutor should. It was a great moment caught on video; you can see it here. Using the instructor’s content as “ground truth” minimizes these errors – though it cannot eliminate them.
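To make the pattern concrete, here is a minimal sketch of a RAG loop in Python. It is illustrative only: the word-overlap ranking stands in for the vector search a production system would use, and the assembled prompt would be passed to whatever underlying model the tutor runs on.

```python
# Minimal RAG sketch: retrieve the most relevant course material,
# then build a prompt that instructs the model to answer from it.

def retrieve(question: str, course_chunks: list[str], k: int = 2) -> list[str]:
    """Rank course-material chunks by simple word overlap with the question
    (a stand-in for the embedding-based search a real system would use)."""
    q_words = set(question.lower().split())
    ranked = sorted(course_chunks,
                    key=lambda chunk: len(q_words & set(chunk.lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(question: str, course_chunks: list[str]) -> str:
    """Assemble a prompt that keeps the model anchored to the professor's material."""
    context = "\n".join(retrieve(question, course_chunks))
    return ("Answer using ONLY the course material below. "
            "If the material does not cover the question, say so.\n\n"
            f"Course material:\n{context}\n\nQuestion: {question}")

# Toy course material echoing the carburetor example above.
chunks = [
    "A sight plug lets the mechanic check the fuel level in the float bowl.",
    "The idle mixture screw adjusts the air-fuel ratio at idle.",
]
print(grounded_prompt("What does a sight plug do?", chunks))
```

Because the model is told to answer only from the retrieved material, a question about a sight plug is far less likely to drift into whatever the model half-remembers from its general training.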

Model Tuning

The creation of an AI tutor is complex. It combines expert prompt engineering, integration with learning management systems, and AI tuning.  

Let’s touch on AI tuning. We all know AI can make things up. This is commonly referred to as “hallucinating,” and it is a problem that AI developers are continually working on. How do you balance facts with creativity?

All generative AI has a parameter called “Temperature,” which controls the randomness of the AI’s output.

  • Lower Temperature (e.g., closer to 0): The AI will be more deterministic, predictable, and “safe” in its responses.  It will likely choose the most statistically probable words, leading to more factual and coherent outputs.
  • Higher Temperature (e.g., closer to 1 or even higher on some models): The AI will be more “creative” and diverse in its responses.  It will be more likely to choose less probable but still plausible words, leading to more varied, imaginative, and sometimes unexpected and incorrect answers.

Other related settings, such as Top-P and Top-K, work alongside temperature. The key thing I want to convey is that when using AI to teach, you first need to tune it to be factual, yet creative enough to find the best approach for each student – not one or the other. The sketch below shows how temperature changes the model’s word choices.
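To make this concrete, here is a minimal, self-contained demonstration of temperature sampling in Python. The word scores are invented for illustration; real models choose among tens of thousands of tokens, but the mechanics are the same: scores are divided by the temperature before being converted to probabilities, so low temperatures concentrate the choice on the most likely word.

```python
import math
import random

def sample_with_temperature(scores: dict[str, float], temperature: float) -> str:
    """Sample one next word. Lower temperature sharpens the distribution
    toward the highest-scoring word; higher temperature flattens it."""
    scaled = {word: s / temperature for word, s in scores.items()}
    peak = max(scaled.values())
    weights = {word: math.exp(s - peak) for word, s in scaled.items()}  # stable softmax
    threshold = random.random() * sum(weights.values())
    for word, weight in weights.items():
        threshold -= weight
        if threshold <= 0:
            return word
    return word  # floating-point edge case

# Invented next-word scores for "The sight plug shows the fuel ___"
scores = {"level": 3.0, "pressure": 1.5, "color": 0.2}
for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(scores, t) for _ in range(1000)]
    print(f"temperature={t}:", {w: picks.count(w) for w in scores})
```

At temperature 0.2 the model picks “level” almost every time; at 2.0 the less likely words appear far more often – exactly the factual-versus-creative trade-off described above.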

Human Review

AI needs humans, too – for now, humans must test the AI Tutor. This starts with a battery of questions with known, correct answers; these are run through the tutor to identify its errors and flights of fancy, which the developer then reduces with more “ground truth” data, retraining, and refined prompts and settings. The tests should also ensure that the model responds in ways that engage and support the student without doing their work for them. This naturally creates a tension between good AI Tutors and general models like ChatGPT, which are happy to do the work for students, solving their math problems and writing their papers.
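As a rough illustration, a test battery might look like the sketch below. Everything here is hypothetical: ask_tutor stands in for the tutor’s real interface, and the keyword check stands in for the human (or model-assisted) review that actually grades the answers.

```python
# Hypothetical test battery; in practice the professor supplies the
# questions and the criteria for a good tutoring response.
test_battery = [
    {"question": "What does a sight plug do?",
     "must_mention": ["fuel level"]},       # factual accuracy
    {"question": "Solve 2x + 3 = 9 for me.",
     "must_mention": ["first step"]},       # should guide, not just answer
]

def ask_tutor(question: str) -> str:
    """Placeholder for the tutor's real interface; replace with an actual call."""
    return "Let's find the first step together: what could you subtract from both sides?"

def run_battery(battery: list[dict]) -> None:
    for case in battery:
        answer = ask_tutor(case["question"]).lower()
        missing = [kw for kw in case["must_mention"] if kw not in answer]
        status = "PASS" if not missing else f"FAIL (missing {missing})"
        print(f"{status}: {case['question']}")

run_battery(test_battery)
```

Failures feed back into development – more grounding data, retraining, or refined prompts and settings – until the battery passes.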

Before it is launched in the classroom, the professor should also interact with the Tutor to ensure the responses are accurate and appropriate; the developer needs to fix any problems that arise.  

This combination of reviews should catch errors, like those in the carburetor example. The instructor (and the developers) should also explore whether the model can be manipulated to give responses that may stray from the course into inappropriate or harmful content.

Intellectual Property

The model must protect the intellectual property of the professor and publishers. As a result, the first AI tutors will likely be implemented for courses that rely on open-source content. Later (as in who knows when), publishers will establish norms and payments for using proprietary content in a Tutor. Professors will expect their content to be secure; that is, the Tutor and its underlying AI model should share the professor’s content only with students in the related courses.

A Fair Comparison

It is easy to criticize AI for making mistakes while forgetting that human tutors make mistakes, too. Humans can be boring, inappropriate, unintelligible, and often unavailable and expensive. AI should be held to a higher standard than my ninth-grade math teacher, because its mistakes would be mass-produced rather than isolated to individual human tutors. But we should not throw it out because it makes an occasional mistake.

That said, AI tutors have distinct advantages:  they are available whenever and wherever students need them, often at 1:00 AM, far from a classroom or a professor’s office.  They are infinitely patient and do not judge.  They are surprisingly adaptable; they can change their voice, ethnicity, and explanations to align with a student’s needs and tutor in over 30 languages.  They can support thousands of students at once.  And they are much less expensive, so that they can be used by more students in more courses.  In particular, widely accessible tutors could cost-effectively help colleges support thousands of students who come to college unprepared for college-level work.

Here’s a little secret: my company has developed a College Companion that allows instructors to load their course materials and offer their students an AI tutor.

The College Companion has voice recognition and can even understand and explain objects in a video! Check. It doesn’t give the answers but gently leads the student to find them on their own. Check. It is secure and inexpensive. Check. It has been tested – and will be tested a lot more. Check.

It can help your students succeed in a course, stay in school, and graduate – without breaking the bank.

Robert Atkins

CEO AND FOUNDER OF GRAY DECISION INTELLIGENCE

Bob led Gray DI’s entry into the education industry and the development of Gray DI’s proprietary industry databases and service offerings. He has worked directly with many of Gray DI’s education clients, consulting with CEOs and CMOs on business strategy, pricing, location selection, curricular efficiency, and program strategy.

About Gray DI

Gray DI provides data, software, and facilitated processes that power higher-education decisions. Our data and AI insights inform program choices, optimize finances, and fuel growth in a challenging market – one data-informed decision at a time.
