
CAC Co-intelligence: Teaching AI Use That Strengthens the Mind and Supports GenAI Integration

Updated: Apr 1

Too many AI integrations skip the most important step in learning: reinforcing the right application process. Without it, we risk teaching students to produce fast, confident, and wrong answers. Worse still, we connect them to knowledge and answers that lead to no learning and no retention.


The term co-intelligence comes from Ethan Mollick’s excellent book, where he describes the need to work collaboratively with AI. The idea is simple but powerful: combine the best of human capability with the best of machine capability, and you can achieve far more than either could alone.


I take this concept a step further. Not just as a way to work, but as a way to learn. A way that keeps our brains sharp, ensures we remember our own contribution long after the task is done, and avoids the mental “rot” that comes from over-reliance on technology.


Why This Matters for Learning


For decades, we’ve known that learning sticks best when we do the work ourselves. Whether we’re actively solving problems or teaching others, our brains are like muscles: use them or lose them.


Too much cognitive offloading to AI is like thinking you’ll get fit by sitting on the couch watching football. You can watch all the plays you like, but your muscles aren’t moving, so there’s no growth. Too much watching and not enough moving leads to decay.


The Three Forms of Co-Intelligence


From my experience, there are three basic patterns:

Cognitive → AI

AI → Cognitive

Cognitive → AI → Cognitive (CAC Co-Intelligence, my preferred standard)


Cognitive-AI-Cognitive (CAC) Co-intelligence supports GenAI integration and is the form we must teach, demonstrate, and reinforce. It’s the model that keeps humans in control and thinking critically before, during, and after AI use.


CAC Co-intelligence: how to integrate AI without causing brain rot
CAC Co-intelligence Framework

We should:

  • Demonstrate this principle in practice.

  • Incorporate it into AI-supported learning activities where appropriate.

  • Provide opportunities for students to apply and reflect on it.

  • Assess their ability to apply it.


The stakes are high. Without guidance, many students are heading down a path of over-dependence on AI. We need to intervene early, loudly, and consistently.


Why I Built This Framework


CAC Co-Intelligence grew out of my own AI-integrated project work. It has become the foundation of all my lessons, refined by action research. AI can be a powerful partner, but it can also be dangerously misleading if used without critical thinking.


Here’s the CAC habit I instil in my classes:

1️⃣ Think as a human first – clarify the problem and your ideas.

2️⃣ Use AI to enhance – generate, expand, or refine ideas (NOT replace the human).

3️⃣ Return to human judgment – evaluate, filter, and apply with care.


It is important to note that this process is inherently iterative. The final cognitive evaluation often leads to new questions, refinements, or directions, initiating another round of the cycle. Each cycle should build on the previous one, deepening understanding and improving judgment rather than simply repeating the same steps.


As AI models improve and make it easier to offload thinking, the risk of the confidence trap grows, especially outside our true expertise.


CAC follows the principles of the three-phase self-regulation loop in educational psychology, especially the work of Barry Zimmerman: (1) forethought, (2) performance, and (3) self-reflection.


The application of CAC Co-intelligence emphasises developing students’ capacity to 'learn how to learn' and to 'actively participate in the thinking process'.


Core Risks


CAC is designed to enforce two mandatory thinking checkpoints and addresses three risks of generative AI in education:

1. Cognitive offloading: Students skipping thinking and letting AI do the work.

2. Automation bias: People accepting AI output without scrutiny.

3. Skill erosion: Overreliance that reduces analytical ability.


Explanation of the Layers


Cognitive (Before AI Use):

  • Think first: Begin with deep reflection. Understand the problem, its context, and what truly needs solving before turning to AI. Develop pilot solutions so that human approaches are considered before being influenced by AI.

  • Plan & Prepare: Define the problem clearly, explore traditional or human-driven approaches first, and decide whether AI is genuinely the right tool for this task.

  • Reflect on proven methods & ethical implications: Draw on policy, existing knowledge and professional standards; anticipate potential ethical, security, or practical consequences of AI involvement.

  • How can AI enhance your work?: Ask why you may wish to use AI. Is it necessary? Will it enhance the outcome, and which AI type or model best aligns with your goal?


AI Application:

  • Apply as a supportive tool & not as your replacement: Treat AI as a collaborator that enhances capability while preserving human direction and accountability. Use ethically.

  • Which AI is the best fit? Type/Model: Select the right kind of AI (e.g., generative, predictive, analytical) to match your specific goal or problem.

  • AI outputs can be flawed: Remember that AI can generate errors or biases; critical review is essential before use.


Cognitive (After AI Use):

  • Apply Evaluative Judgment: Assess AI results critically for accuracy, logic, and alignment with your intent and values. Support students in developing this skill by requiring them to compare, synthesise, and justify their decisions across different stages of the activity.

  • Refine and validate AI outputs: Edit, verify, and contextualise results to ensure they meet human and professional standards.

  • Apply human touches, responsibility, and control: Integrate human voice, empathy, ethics, and ownership, maintaining human oversight of final outcomes.


Limitations


CAC Co-intelligence prioritises learning and cognitive development over efficiency and productivity. As a result, certain tasks, such as exploration or brainstorming, may be completed more quickly using AI-first approaches, which can expand the idea space without requiring initial human effort. However, CAC deliberately requires human-first cognition to preserve independent thinking and reduce risks such as bias, anchoring, and over-reliance on AI outputs. Resource 4 (video) provides further insight into why CAC is important and when AC alone may be appropriate.


Additionally, CAC does not explicitly prescribe prompt design strategies or detailed model capability awareness. Instead, these are intended to emerge through guided experience within the CAC cycle, particularly in early instruction where students learn to interact with AI critically and effectively (An example is shown in Resource 2). If prompt literacy is not explicitly taught early, students may develop ineffective or shallow AI interaction habits that persist. While this allows flexibility and adaptability, it also means that the framework relies on careful implementation and scaffolding to ensure students develop these skills rather than use AI superficially.


CAC is a micro-level framework. It provides a clear structure for how students should think when using AI. It does not address broader curriculum changes, clarify which skills should be prioritised in an AI-integrated classroom, or offer guidance on how to redesign assessment at scale. As a result, it needs to be complemented by other frameworks to support effective implementation at the curriculum and system level.


Resource 1: An overview of the theory (share with students)


The video linked below explains the positioning of CAC Co-Intelligence and why it matters. If you’re integrating AI into your teaching, I recommend showing it to your students. The human must remain in control, every time. Note: The visuals are illustrative, intended to show the balancing involved, rather than being strictly accurate. It’s time to be the broken record your students need.


Understand Co-intelligence: Use AI the right way

Resource 2: Example of the application of CAC Co-intelligence (for teachers)


This video is for teachers who are not sure how to apply CAC Co-intelligence in the classroom. Using the slide template from Resource 3, I guide you through the process.

For teachers: A step-by-step guide outlining the process of class application


Resource 3: Slide Template (for teachers)


After you have watched the step-by-step guide, download the PowerPoint slides and adapt them to your teaching content. Remember, gathering evidence of impact is very difficult, so I would appreciate an email letting me know that you have used them or implemented this process.


Resource 4: An extended explanation of the positioning of CAC


This video provides a detailed explanation of why AI education (the thinking and learning processes associated with AI use) needs to be a foundational priority. It also positions the need for CAC and explains when AC Co-intelligence alone can be used.



Resource 5: The application of CAC Co-intelligence in project work


The following link connects to a range of videos and resources that demonstrate how CAC can be applied to different stages of project work.



Ten Implementation Tips:


  1. Be explicit about the three CAC stages (Cognitive → AI → Cognitive) and clearly signal transitions between them

  2. Make the “learning how to learn” component explicit by prompting reflection (e.g. How did you learn by undertaking this step? What helped you improve your answer?)

  3. Build repetition deliberately: one-off use has negligible impact on learning habits

  4. Emphasise that CAC is iterative; the value comes from cycles, not a single pass

  5. Enforce the first cognitive step: require visible initial attempts, time-box AI access, and verify completion to prevent AI-first dependency

  6. Structure AI use as a critic/refiner, not a generator: guide prompts toward evaluation, assumptions, and weaknesses

  7. Treat the final synthesis (comparison between steps 1 and 2) as the core learning moment. Require explicit comparison and justification

  8. Use open-ended or non-routine tasks where AI is imperfect to create necessary cognitive friction

  9. Scaffold early, then deliberately remove supports over time to build independence

  10. Use CAC explicitly as a learning process to develop thinking. Separate this from assessment, which should be determined independently by learning outcomes



As always, I am happy to receive feedback on implementation experiences. Please let me know if this was helpful.


Sasha Nikolic

12 August 2025


Updates:

10 Oct 2025 - The CAC Co-intelligence framework image was updated to improve wording based on reflections from feedback. Related text updated accordingly.

23 Mar 2026 - Refined text based on my experiences from my many workshops. Made the cyclic process explicit in the instructions, provided a slide template for open use, and added a video that demonstrates this process in action.

1 April 2026 - Based on feedback, 10 tips for implementation added

 
 
 
