
BOOK REVIEW
Moral Codes: Designing Alternatives to AI
Liviu Poenaru, Dec. 16, 2024
"Moral Codes: Designing Alternatives to AI" by Alan F. Blackwell presents a compelling critique of the current trajectory of artificial intelligence (AI) research, advocating for a paradigm shift towards the development of more expressive and human-centric programming languages. Blackwell serves as Professor of Interdisciplinary Design in the Department of Computer Science and Technology at the University of Cambridge. He is a Fellow of Darwin College, Cambridge, and a cofounder of the Crucible Network for Research in Interdisciplinary Design, alongside David Good. Additionally, he collaborated with David and Lara Allen to establish the Global Challenges strategic research initiative at the University of Cambridge. He contends that the prevailing AI agenda has deviated from its original promise of alleviating mundane human labor, instead encroaching upon domains of human creativity and emotional engagement.
​
The author introduces the concept of "MORAL CODE," an acronym for More Open Representations, Access to Learning, and Control Over Digital Expression. He posits that by focusing on these principles, we can develop programming languages that empower users to articulate their intentions more effectively to computers, thereby fostering software that serves societal well-being rather than merely enhancing efficiency or profitability.
Blackwell's interdisciplinary approach draws from interaction design, creativity studies, and fairness in technology. He critiques contemporary software interfaces that limit human expression through restrictive word counts or simplistic interaction mechanisms like likes and emojis. Instead, he advocates for designs that enhance meaningful human-computer interactions, supporting human flourishing and the pursuit of meaning.
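What "Control Over Digital Expression" might mean in practice is easiest to see in code. The sketch below is purely illustrative and not an example from the book: it imagines a feed whose filtering rule is an ordinary Smalltalk block that the user can read, edit, and replace in a live environment such as Squeak or Pharo, rather than an opaque ranking controlled by the platform. The post list and the rule are invented for the illustration.

    "Illustrative sketch, not from the book: the feed-filtering rule is an
     ordinary block the user can inspect and rewrite while the system runs."
    | posts myRule |
    posts := #('cat photo' 'advert' 'friend update' 'advert').
    myRule := [:post | (post includesSubstring: 'advert') not].
    Transcript show: (posts select: myRule) printString; cr.
    "Prints: #('cat photo' 'friend update')"

The point of the sketch is that the rule is visible, user-editable code rather than a sealed model: openness and control of exactly the kind Blackwell's acronym names.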
A notable aspect of the book is its historical perspective, which draws lessons from programming environments such as Smalltalk [1], systems that embodied these principles long before the advent of machine learning. Blackwell argues that such historical insights can inform the creation of more equitable and transparent software systems today.
"Moral Codes" challenges the AI community to reconsider its objectives, urging a move away from developing autonomous systems that replicate human creativity and emotion. Instead, it calls for tools that augment human capabilities, respect user autonomy, and promote ethical engagement with technology. This perspective aligns with contemporary discussions in AI ethics, emphasizing the need for systems that are accountable and aligned with human values.
Shannon Vallor, philosopher and author of The AI Mirror, and Alan F. Blackwell share a deep critique of the ethical challenges posed by contemporary artificial intelligence. Vallor’s reflections in The AI Mirror illuminate how AI, often mistaken for a form of sentience, functions instead as a "giant mirror made of code," reflecting back our decisions, values, and creations without genuine understanding. This mirror-like quality, she argues, is not just misleading but actively obstructive, as it reinforces existing societal norms and values while stifling humanity’s capacity for reinvention and practical wisdom (what Aristotle termed phronesis). Similarly, Blackwell critiques how current AI systems and programming paradigms undermine human creativity and autonomy by privileging opaque, efficiency-driven algorithms over tools that empower users. Both thinkers identify the opacity of AI systems as a critical challenge: by alienating users from the tools they rely on, such systems erode trust, agency, and the capacity for critical engagement, ultimately rendering humans passive consumers of technological outputs rather than active shapers of their futures.
The opacity of AI systems alienates users by erecting a barrier that disconnects them from the technology they rely on, depriving them of both agency and comprehension. When the inner workings of AI systems are hidden in inscrutable algorithms, users are left unable to interpret, question, or meaningfully interact with the technology, fostering a sense of powerlessness and dependency. This disconnect undermines trust, as users are forced to rely on systems they cannot fully comprehend or control, heightening the risk of manipulation, bias, and unanticipated consequences. By removing transparency, these systems exclude users from the processes that shape their experiences and decisions, effectively turning technology from a tool of empowerment into one of subjugation. For both Vallor and Blackwell, addressing this opacity is vital to restoring accountability and ensuring that AI systems remain tools that enhance, rather than diminish, human autonomy and creativity.
Vallor and Blackwell align in their call to redirect technology toward supporting human flourishing, albeit through different yet complementary lenses. Vallor, drawing on virtue ethics, critiques the dominant "looking-glass fantasy" of AI risk discourse, which focuses on speculative threats like Artificial General Intelligence (AGI) while neglecting the immediate existential challenges—such as environmental devastation—that our technologies exacerbate by mirroring and amplifying today’s values. She warns that the more power we cede to these "mirrors," the less we exercise our own practical wisdom to navigate urgent planetary and societal crises.
Blackwell, echoing this concern, offers a concrete pathway through his concept of "MORAL CODE"—advocating for the redesign of programming environments to prioritize openness, learning, and user control, as exemplified by historical models like Smalltalk. Together, their reflections emphasize that human creativity and ethical agency—not the machines we build—must remain at the center of technological development. The urgency they convey is a call to reclaim technology as a tool for transformation, not a trap that confines us within the limitations of our own mirrored values.
Blackwell's work offers a visionary yet pragmatic roadmap for designing technologies that prioritize human agency and societal benefit, challenging technologists, designers, and policymakers to rethink the moral imperatives embedded in code.
GO FURTHER
Financial Times
Can the AI future work for everyone?
Vox
Shannon Vallor says AI does present an existential risk - but not the one you think
Financial Times
The AI Mirror - how technology blocks human potential
[1] Smalltalk is a pioneering programming language and integrated development environment (IDE) created in the 1970s at Xerox PARC (Palo Alto Research Center) by Alan Kay, Dan Ingalls, and Adele Goldberg, among others. It is widely regarded as one of the most influential innovations in computer science, shaping the fields of object-oriented programming, graphical user interfaces, and educational software development. Smalltalk was not just a programming language but a comprehensive, dynamic environment designed to make computing accessible, flexible, and creative.
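For readers who have never seen it, a minimal snippet conveys the style Smalltalk pioneered. This sketch is the reviewer's illustration, not Blackwell's; it runs in modern descendants such as Squeak or Pharo, where Transcript is the environment's console. Every value is an object, and computation proceeds by sending messages, evaluated live inside the running system.

    "Everything is an object; computation is message-sending. Typically
     evaluated live in a workspace with 'do it' or 'print it'."
    | sum |
    sum := (1 to: 10) inject: 0 into: [:acc :n | acc + n].
    Transcript show: 'Sum of 1..10 is ', sum printString; cr.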