TL;DR: Public discourse tends to treat generative AI (genAI) as a monolithic threat to be managed through avoidance or bans. Here, I use the concept of affordances to argue that these tools have already fundamentally transformed the environments in which students live and learn, making “opting out” an impossibility. The path forward lies in integrating genAI as an essential component of modern education, empowering teachers to lead the development of critical AI literacy and preparing youth to leverage the evolving capabilities of these tools to do more and to do better.
This weekend I attempted to watch Jake Tapper's State of the Union on CNN. I have to admit that I didn't make it all the way through. Not because the topic was unimportant, quite the opposite, but because the discussion itself was so conceptually muddled that it was difficult to take seriously. I have, however, since read the transcript of the entire episode.
The episode focused on artificial intelligence, particularly in relation to children and young people. Questions about AI and learning featured prominently, and it was this that caught my attention because it connects with the primary focus of my research at the moment (especially AI literacy). What followed, however, was a discussion that exemplifies many of the core problems in today’s public discourse on AI.
There were two issues that particularly frustrated me:
- Conceptual ambiguity – Tapper and his guests repeatedly referred to “AI” when they were obviously talking almost exclusively about generative AI (genAI), and more specifically conversational chatbots such as ChatGPT. What applies to genAI does not automatically apply to AI as a whole.
- Policy responses that amount to avoidance – The recommendations that guests offered in response to perceived risks associated with genAI largely boiled down to restricting access, banning use in schools or delaying engagement altogether, in the apparent hope that the problems might somehow crawl into a dark corner until they automagically resolve themselves.
In what follows, I want to unpack these two issues in some detail. While my critique is prompted by this specific episode of State of the Union, it is not directed at the program alone. Rather, it reflects my broader concerns about how the rapid development and diffusion of AI, and genAI in particular, are currently discussed in public, political and educational contexts. In my view, much of this discourse is not only superficial but actively misinformed.
1. Ambiguity regarding AI
The kinds of generalisations about AI that I encountered in the SOTU episode are very common in media coverage of AI, but that doesn't make them any less problematic. AI is not a single technology, nor is it new. It has been part of our everyday environments for decades. We encounter AI when we use search engines, scroll through social media feeds, rely on spam filters in email, navigate with GPS, drive modern cars or use household appliances that adapt to our behaviour. These systems operate in very different ways, rely on different techniques and present very different kinds of challenges and opportunities. What Tapper and his guests were overwhelmingly concerned with, however, was not AI in this broad sense but genAI: systems capable of producing text, images, audio or video and of engaging users in human-like interactions. Chatbots designed for conversation or creative output are a very specific subset of contemporary AI and they raise a very specific set of concerns.
Collapsing all of this into the single label “AI” creates confusion. It makes it appear as though we are dealing with a monolithic, unprecedented force, when in reality we are dealing with a heterogeneous set of technologies with different affordances, histories and implications. This matters because risk is not evenly distributed across “AI”. The dangers associated with emotional dependency, erosion of critical thinking or blurred boundaries between humans and machines do not arise from predictive models used in weather forecasting or logistics optimisation. They arise from particular design choices in particular systems, especially conversational generative models. When we fail to make these distinctions, we make it harder to respond intelligently. Vague problems invite blunt solutions.
2. Bans, avoidance and the illusion of safety
A prominent feature of the discussion on State of the Union was how quickly concerns about genAI translated into calls for restriction: banning chatbots for children, keeping genAI out of schools, delaying exposure until some undefined point at which the technology can be declared “safe”. At first glance, this response appears sensible. If a technology poses risks, limit access to it. The problem is that this line of reasoning fundamentally misunderstands how technologies operate within social and learning environments and, crucially, how young people actually encounter them.
GenAI is already part of the environments that children and adolescents inhabit. It is embedded in platforms they use daily, accessible on devices they carry with them and increasingly woven into digital services that cannot simply be switched off. Removing genAI from schools does not remove it from young people’s lives. What it removes is the one context, the educational context, in which its use could be made explicit, guided, discussed and critically examined.
This is where the theory of affordances, a concept I have explored previously (see here and here), becomes particularly useful. Affordances are the possibilities for action that an environment offers an individual in relation to a task they are trying to complete. In affordance theory, three components are always in play: the individual, the object and the environment. The individual is the person concerned with a specific task. The object is one of the numerous tools available that the individual recognises as applicable to the task at hand. The environment is the context that emerges as the individual considers and interacts with the objects available to them.
Environments are not neutral backdrops. They are relational and dynamic, shaped by what objects are present and by how individuals perceive and act upon them. When a new object enters an environment, such as genAI, it does not simply add a tool. It transforms the environment itself by introducing new possibilities for action.
Calls for bans can be understood as attempts to reverse environmental transformations. By removing the object, proponents of bans implicitly hope that the environment will return to its former state, restoring earlier patterns of attention, behaviour and learning. This intuition is understandable, but it rests on a flawed assumption about how environments actually work.
The first problem is that environments are not objective or shared in any simple sense. In affordance theory, the environment is always the individual’s environment as perceived and acted upon. Two people may inhabit the same classroom, school or home while experiencing very different environments, shaped by their competences, experiences, needs and prior encounters with technology. What appears as a technology-free environment to an adult may already be fundamentally altered for a student who has interacted extensively with genAI elsewhere.
The second problem follows directly from this. Once a novel object has been discovered and incorporated into an individual’s perceived environment, that environment has already changed. The individual has learned that certain actions are now possible. Those affordances do not disappear simply because the object is removed from a particular institutional setting. Removing genAI from a classroom may transform the environment as perceived by policymakers or educators, but it is unlikely to restore the environment as perceived by a young person who already knows what these tools can do.
Bans target the wrong level of analysis. They focus on controlling objects in shared spaces while ignoring the fact that environmental transformation, once experienced by individuals, cannot be undone through simple exclusion. The environment has shifted, and pretending otherwise does not shift it back.
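To make this asymmetry concrete, consider a toy model. The sketch below is a deliberately simplified, hypothetical formalisation of my own in Python, with invented names such as perceived_environment; it is not drawn from the affordance literature. Its only purpose is to show the difference in levels: a ban deletes an object from the shared space, but it cannot delete an affordance from an individual's perceived environment once that affordance has been discovered.

```python
from dataclasses import dataclass, field


@dataclass
class Individual:
    name: str
    # Affordances this person has already discovered, e.g. "draft an essay with genAI".
    known_affordances: set[str] = field(default_factory=set)


@dataclass
class SharedSpace:
    # Objects institutionally present in the space, e.g. {"textbook", "chatbot"}.
    objects: set[str]


def perceived_environment(person: Individual, space: SharedSpace) -> set[str]:
    # The environment is relational: it combines the affordances offered by the
    # objects present with those the individual already knows are possible elsewhere.
    offered = {f"use {obj}" for obj in space.objects}
    return offered | person.known_affordances


classroom = SharedSpace(objects={"textbook", "chatbot"})
student = Individual("student", known_affordances={"draft an essay with genAI"})
teacher = Individual("teacher")

# The "same" classroom yields two different environments.
print(perceived_environment(student, classroom))
print(perceived_environment(teacher, classroom))

# A ban removes the object from the shared space...
classroom.objects.discard("chatbot")

# ...but the student's perceived environment retains the learned affordance.
assert "draft an essay with genAI" in perceived_environment(student, classroom)
```

The assert at the end is the argument in miniature: the classroom changed, but the student's environment did not change back.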
There is a further irony here. Many of the risks cited by Tapper’s guests, such as overreliance, reduced effort, diminished critical thinking or blurred boundaries between humans and machines, are not inherent properties of genAI. They are risks that emerge when a powerful tool is used unreflectively. Education exists precisely to counteract unreflective use. Schools already teach students how to engage with complex and potentially harmful tools: writing technologies, calculators, the internet, social media and search engines. In each case, the goal has not been to eliminate exposure but to cultivate judgement, restraint and understanding. GenAI is not an exception to this logic. It is an especially urgent case of it.
But there's another challenge. Arguing for education rather than bans presupposes that educators themselves understand genAI well enough to teach about it. At present, there is little reason to assume that this is the case. Misconceptions about how genAI works, what it can and cannot do and what constitutes responsible use are widespread, and public discourse often reinforces rather than corrects them. If media debates are any indication, confusion and overgeneralisation are the norm rather than the exception. This does not invalidate the case for education, but it does raise serious questions about preparedness.
Calls for bans often rest on the implicit assumption that young people must be protected from genAI rather than educated about it. Bans may create the appearance of action, but they do little to prepare learners for a world in which genAI will remain present, influential and evolving. If we are serious about addressing the risks associated with these systems, the only viable path forward is not avoidance but literacy, especially AI literacy that explicitly addresses generative systems. That means helping students understand what genAI is, what it is not, how it produces outputs, where it fails and how it can be used responsibly and critically. Achieving that goal requires engagement, not exclusion. It also requires schools to be part of the solution, not removed from it.
3. From prohibition to preparedness
The impulse to ban genAI from schools represents a futile attempt to return to a past that no longer exists. If, as affordance theory suggests, the presence of these tools has already fundamentally altered individuals' perceived environments, then removal is not a solution. It is a form of institutional denial.
If we are to move forward, our focus needs to shift from controlling access to building capacity. This means:
- Precision in language: Distinguishing between different types of AI and generative tools to enable targeted policy instead of blunt bans.
- Investment in teacher agency: Moving beyond “how-to” workshops toward deep professional development that prepares teachers to lead the integration of genAI into their specific subject areas.
- Literacy as context engineering: Teaching students that their role is not just to “prompt” and evaluate outputs, but to actively design the context in which generative AI operates (see the sketch after this list). By architecting the knowledge, framework and creative direction themselves, users ensure they are the ones driving the process, using the AI as a collaborative partner rather than a replacement for their own inquiry.
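To illustrate what context engineering looks like in practice, here is a minimal sketch. It assumes a generic chat-style interface that accepts a list of role-tagged messages, a convention most genAI providers follow; the function names and the commented-out send_to_model call are hypothetical placeholders rather than any particular vendor's SDK. The contrast is the point: in the first function the model fills every gap itself, while in the second the student supplies the sources, the analytical framework and the success criteria.

```python
def bare_prompt(question: str) -> list[dict]:
    # Unreflective use: the model fills in everything the student did not supply.
    return [{"role": "user", "content": question}]


def engineered_context(question: str, sources: list[str],
                       framework: str, criteria: list[str]) -> list[dict]:
    # Reflective use: the student architects the knowledge base, the analytical
    # framework and the standards the output must meet.
    system = (
        "You are a discussion partner. Work only from the source excerpts "
        f"provided and apply the following framework: {framework}. "
        "Ask a clarifying question if the sources are insufficient."
    )
    source_block = "\n\n".join(f"Source {i + 1}:\n{s}" for i, s in enumerate(sources))
    task = (
        f"{question}\n\nEvaluate your answer against these criteria, which I set:\n"
        + "\n".join(f"- {c}" for c in criteria)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": source_block},
        {"role": "user", "content": task},
    ]


# Example: the student, not the model, decides what counts as a good answer.
messages = engineered_context(
    question="How does affordance theory explain why school bans on genAI fail?",
    sources=["<excerpt from my course notes>", "<excerpt from an assigned reading>"],
    framework="the individual-object-environment triad from affordance theory",
    criteria=["cites only the supplied sources",
              "distinguishes genAI from AI more broadly"],
)
# send_to_model(messages)  # hypothetical call; any chat-style API would fit here
```

Pedagogically, the second pattern keeps the inquiry with the learner: the sources, framework and criteria are the student's own work, and the model operates inside them.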
Integrating genAI into the classroom is not about surrendering to technology. It is about enabling teachers to teach critical AI literacy competences and preparing youth to make effective use of the current and future capabilities of genAI to do more and to do better. The environment has changed. The objects are already in the room. Our task now is to ensure that when students look at these tools, they see more than just a shortcut: they see a complex system that requires a critical, educated mind to master.