CAC Co-Intelligence: Teaching AI Use That Strengthens the Mind and Supports GenAI Integration
- sasha97518
- Aug 12
Updated: Oct 10
Too many AI integrations skip the most important step in learning: reinforcing the right application process. Without it, we risk teaching students to produce fast, confident… and wrong answers. Worse still, we connect them to knowledge and answers that lead to no learning and no retention.
The term co-intelligence comes from Ethan Mollick’s excellent book, where he describes the need to work collaboratively with AI. The idea is simple but powerful: combine the best of human capability with the best of machine capability, and you can achieve far more than either could alone.
I take this concept a step further. Not just as a way to work, but as a way to learn. A way that keeps our brains sharp, ensures we remember our own contribution long after the task is done, and avoids the mental “rot” that comes from over-reliance on technology.
Why This Matters for Learning
For decades, we’ve known that learning sticks best when we do. Whether we’re actively solving problems or teaching others, our brains are like muscles: use them, or lose them.
Too much cognitive offloading to AI is like thinking you’ll get fit by sitting on the couch watching football. You can watch all the plays you like, but your muscles aren’t moving, so there’s no growth. Too much watching and not enough moving leads to decay.
The Three Forms of Co-Intelligence
From my experience, there are three basic patterns:
Cognitive → AI
AI → Cognitive
Cognitive → AI → Cognitive (CAC Co-Intelligence, my preferred standard)
Cognitive-AI-Cognitive (CAC) Co-Intelligence supports GenAI integration and is the pattern we must teach, demonstrate, and reinforce. It’s the model that keeps humans in control and thinking critically - before, during, and after AI use.

We should:
Show students this principle in action.
Embed it in every AI-integrated task.
Assess their ability to apply it.
The stakes are high. Without guidance, many students are heading down a path of over-dependence on AI. We need to intervene early, loudly, and consistently.
Why I Built This Framework
CAC Co-Intelligence grew out of my own AI-integrated project work. It has become the foundation of all my lessons, refined by action research. AI can be a powerful partner, but it can also be dangerously misleading if used without critical thinking.
Here’s the CAC habit I instil in my classes:
1️⃣ Think as a human first – clarify the problem and your ideas.
2️⃣ Use AI to enhance – generate, expand, or refine ideas.
3️⃣ Return to human judgment – evaluate, filter, and apply with care.
As AI models improve and make it easier to offload thinking, the risk of the confidence trap grows, especially outside our true expertise.
Explanation of the Layers
Cognitive (Before AI Use):
Think first: Begin with deep reflection. Understand the problem, its context, and what truly needs solving before turning to AI. Develop pilot solutions so that human approaches are considered before being influenced by AI.
Plan & Prepare: Define the problem clearly, explore traditional or human-driven approaches first, and decide whether AI is genuinely the right tool for this task.
Reflect on proven methods & ethical implications: Draw on policy, existing knowledge and professional standards; anticipate potential ethical, security, or practical consequences of AI involvement.
How can AI enhance your work?: Ask why you may wish to use AI. Is it necessary? Will it enhance the outcome, and which AI type or model best aligns with your goal?
AI Application:
Apply AI as a supportive tool, not as your replacement: Treat AI as a collaborator that enhances capability while preserving human direction and accountability. Use it ethically.
Which AI is the best fit? Type/Model: Select the right kind of AI (e.g., generative, predictive, analytical) to match your specific goal or problem.
AI outputs can be flawed: Remember that AI can generate errors or biases; critical review is essential before use.
Cognitive (After AI Use):
Apply Evaluative Judgment: Assess AI results critically for accuracy, logic, and alignment with your intent and values.
Refine and validate AI outputs: Edit, verify, and contextualise results to ensure they meet human and professional standards.
Apply human touches, responsibility, and control: Integrate human voice, empathy, ethics, and ownership; maintain human oversight of final outcomes.
A Resource to Share
The video linked below explains the positioning of CAC Co-Intelligence and why it matters. If you’re integrating AI into your teaching, I recommend showing it to your students. The human must remain in control, every time. Note: The visuals are illustrative, intended to show the balance involved rather than to be strictly accurate.
It’s time to be the broken record your students need.
As always, I am happy to receive feedback on implementation experiences. Please let me know if this was helpful.
Sasha Nikolic
12 August 2025
Updates:
10 Oct 2025 - The CAC Co-intelligence framework image was updated to improve wording based on reflections from feedback. Related text updated accordingly.