As governments race to respond to fast-moving technological disruption, Marcus Smith’s Techno: Humans and Technology offers a broad yet accessible call for collaborative governance. While the book succeeds in outlining the vast risks of unregulated tech, its lack of policy depth leaves more informed readers wanting clearer answers to today’s most pressing regulatory challenges.
The tech landscape has shifted dramatically in recent months, revealing just how quickly our digital futures can be rewritten. The new US administration’s swift abandonment of previous AI guardrails in favour of “American AI leadership” speaks volumes about how technology policy can become less about thoughtful governance and more about winning a perceived arms race.
In a striking demonstration of tech’s rapid evolution, DeepSeek AI—a Chinese startup that built powerful AI models on a relatively tiny budget—triggered global market disruption and swift governmental security responses across multiple countries. Today’s technological advances weave a complex tapestry of interconnected challenges spanning multiple domains: safety concerns around autonomous systems, algorithmic bias perpetuating inequity, surveillance technologies eroding privacy rights, and accelerating job displacement causing economic instability.
In this unstable and evolving landscape, Marcus Smith’s book Techno: Humans and Technology offers a timely call to action for various stakeholders to come together in synergy to tackle the complex issue of technology governance. Smith’s call is couched in an apt metaphor: “techno” electronic music, with its heart-syncing beats that draw people together despite individual differences. This kind of “synergy and collaboration,” as Smith puts it, is exactly the engagement we need to achieve effective regulation of new technology.
Smith correctly diagnoses that “[t]echnology and its regulation is the defining issue of the time we are living in – we must take it seriously.” He also rightly suggests throughout the book that there is a lack of effective regulatory strategies that systematically and reliably safeguard against the various risks he identifies. He succeeds in showing that this is indeed a grave problem, given the extent of these technology-related risks.
The book makes an effort to show just how broad-ranging these risks are. Each chapter typically opens with an illustrative hook case, such as crypto fraudster Sam Bankman-Fried’s story, before surveying key technology challenges and the overall state of regulation (or the lack thereof). These include the geopolitical factors affecting critical resources for technological growth, the potential for digital surveillance and authoritarianism as biometrics and genomic data proliferate, social media’s disruption of our information environment, blockchain’s security and economic implications, and AI’s impact on human rights and its potential to “leave human beings behind.” A general reader interested in recent technological changes and their societal, ethical, economic, and geopolitical implications will therefore come away with a satisfying brush-up.
Yet, for a more informed reader interested in specific technologies, the key debates around them, and technology law, the book over-prioritises breadth at the expense of depth and specificity. After extensive discussion of each technology and its complex challenges, the chapters often end with passing, open-ended remarks about the need to balance interests and rights. For a book calling for technology regulation, it leaves readers wondering what specific policy measures might address these challenges, and what critical assessment the author offers of the existing regulations he considers inadequate.
Nevertheless, the book does offer a broad vision for technology governance composed of three key imperatives, briefly discussed in the final chapter: involving key actors, such as technology experts, companies, citizens, ethicists, lawyers, and governments; making use of technologies’ ability to build in behavioural control, or in other words, regulating with technologies; and, finally, establishing a dedicated international agency capable of coordinating national technology regulations and enforcing protective measures.
One may question the feasibility of these suggestions, particularly an international agency with enforcement powers. Nation states—already divided by geopolitical tensions and normative disagreements—seem unlikely to surrender authority over contentious regulatory matters to any international body. The European Union’s proactive technology regulations, though cited in the book as a model approach, are insufficient to inspire this level of international adoption or collaboration. Still, the book’s three broad suggestions offer a “synergistic,” collaborative vision for global AI governance that we should root for.
The aim of the book is ambitious. Smith states in the first chapter that the book “is about the technology revolution: how it is changing the world and what we need to do about that.” For general readers less familiar with recent technological developments and the state of their governance, the book fulfils this goal in thought-provoking fashion. More informed readers, however, may find themselves looking for deeper analysis and more substantial policy insights beyond the introductory perspective offered.
In an environment already fraught with fear about new technologies, a valuable contribution on the topic requires deep engagement with both surface phenomena and underlying mechanisms. This approach helps authors avoid the trap of sensationalising risks and instead offers concrete, constructive direction for navigating our uncertain technological landscape—something readers increasingly expect from new analysis of these complex issues.
This is a review of Marcus Smith’s Techno: Humans and Technology (University of Queensland Press, 2024). ISBN: 9780702266416.
Xueyin Zha is a doctoral candidate at the Australian National University, specialising in the global governance of artificial intelligence and machine learning technologies. Her research focuses on international norms and AI governance, with a particular interest in bridging the gap between the practical realities of AI development and high-level governance principles. Xueyin holds a Master of International Relations (Advanced) from the Australian National University.
This article is published under a Creative Commons License and may be republished with attribution.