I am also paying close attention to the values being reinforced at the level of model training and alignment. Mainstream systems are often shaped to reward helpfulness, harmlessness, instruction-following, fluency, and user satisfaction. Some models explicitly encode written values and use critique, revision, and AI feedback to shape behavior. Others rely on deliberative alignment approaches and written behavioral principles to guide responses.
I am interested in what becomes possible when users, developers, and builders of AI pedagogy begin to shift the center of this work toward a different paradigm: one concerned with human integrity, creativity, discernment, and the quality of long-term interaction. In that frame, AI becomes part of a larger evolutionary question about intelligence, ethics, authorship, and co-formation in hybrid human–AI settings.
My research explores how we can build forms of AI literacy that protect human thought sovereignty, intellectual integrity, discernment, and responsibility while also expanding inspiration, innovation, creative range, and depth of insight. This matters for my own writing and research, for education, and for the wider cultural question of how humans will grow within an increasingly hybrid society.
How can we distinguish developmental forms of human–AI relation from forms that are extractive, dependency-forming, or infantilizing?
How might hybrid scaffolding, in which human intellect integrates with system architecture, magnify human inspiration, reflective range, and creative synthesis within an AI-literate relational field?
How would AI training, interface design, public language, AI literacy, and pedagogy need to change if the paradigm shifted toward systems that reinforce human integrity, awareness, and creativity while expanding the depth and duration of interaction?