I’ve been meaning to review Geek Heresy (Kentaro Toyama, 2015) for a while. On the whole I really liked it, and would deem it useful reading for anyone doing work even tangentially related to technology and/or international development. Three cheers for procrastination, however, as this week technology researcher Evgeny Morozov released a thoughtful and scathing assessment of technology and innovation policy in the Clinton-era State Department – one with important implications for how these elements will be viewed, addressed, and managed in the presumably forthcoming Clinton administration.
Moreover, all of this is especially important now: we’re in a murky era of unestablished norms and maladapted legal frameworks. While that era’s not coming to a close quite yet, things are in some respects beginning to congeal: only so many problems can pop up before someone, somewhere, has to make a decision and set a precedent.
Given all that! In this post – Part 1 – I’m going to summarize and review Geek Heresy. In Part 2, I’ll discuss its relevance to historical U.S. government-managed initiatives discussed in the Morozov article, spell out what I think this means for USAID in particular, and conclude with examples of organizations that I think are truly doing digital development right, as well as some academic frameworks I have found helpful when parsing these concepts.
Geek Heresy: “Subtle Contradictions with Outsized Implications”
Toyama, a founding member of Microsoft Research India, has a long history in the tech-for-dev world: he’s designed educational software for overcrowded classrooms in India, taught calculus in Ghana, mentored kids in computer science, and distributed agricultural lessons to illiterate farmers via video.
In Geek Heresy, he leads us through the emotional rollercoaster of implementing and evaluating development projects: excitement with initial success, puzzlement with inconsistencies, disappointment when results are short-lived or fail to scale, and marvel at the seemingly endless human capacity for distraction and strategic misappropriation. What emerges is an impassioned, substantiated argument against tech-first interventions as “at best, a distraction; at worst, a substitution for solving real problems” – along the way gently eviscerating sloppy RCTs, shiny social enterprises, and net happiness metrics.*
The relative nascency of the tech-for-good arena has left two groups in particular susceptible to imprecise language, self-interested overpromising, and sketchy methodology: 1) policymakers and 2) managers of major international development implementation programs. What Toyama terms “social impact technology”, but what goes by many names (well-meaning digital imperialism? evangelical techno-optimism? life, liberty, and the pursuit of FOMO…?) is deeply compelling for individuals in these roles. To people overwhelmed by information and situated amid turf wars, clear-cut evaluations and one-off interventions appear as universal wins, and the genuine satisfaction that accompanies successful pilots and promising research makes it easy to cocoon oneself comfortably in good intentions. Genuinely well-meaning, excited, overworked people are susceptible to snake oil peddlers of any sort, but some of these snake oil peddlers have really good pilot results, and all of them hold true faith in their capacity to make the world a better place.
Geek Heresy portrays what emerges as a fundamental division between “wildly different ways of trying to remake the world”: one centered on external provision, one centered on intrinsic growth. The book explores this division, and the practical downstream implications of tech-first and human-first foundations, applied in general to “packaged interventions” but discussed primarily through a digital lens (edtech, mHealth – all things internet). The concluding thesis is straightforward: technology is a tool that amplifies existing human forces. That’s the law and the whole of the law. Efficiencies are enhanced; latent emotional tics are exacerbated. Most powerfully:
“No technology includes the empathy and discernment needed in leaders. No law bundles capable implementation. It’s the dysfunctional governments we most want to replace through elections that lack the institutions, civil society, and armed forces willing to hold up democracy. And it’s the jagged social fissures that we most want to stitch up with laws that lack the interpersonal trust and mutual respect needed for healing.”
“Institutions built without a foundation of intrinsic growth have a nasty habit of evaporating. The democracies of Iraq and Afghanistan are a good example.”
The book then extends this thinking to packaged interventions more broadly. So you’ve got an intervention that produced outstanding results? Excellent – but you also probably had a dedicated research team, a committed and capable local partner, and beneficiaries who truly wanted the thing (or were willing to humor you for IRB compensation, or depend upon your partner organization, or…).
So…what’s the prescription? The second half delves into this more deeply, and there I think the book gets a little lost. In general Toyama calls for seeding the “human substrate”: aspiration, discernment, and discipline; or, heart, mind, and will. Focus on relationships, feedback cycles, and program planning rooted in beneficiary aspirations, alongside mentorship and technical capacity building. Engender intrinsic growth in both societies and individuals, each fortifying the other. The long-haul hard stuff.
I am by no means thoroughly experienced in this realm; I’ve run and evaluated one mobile health project, and deliberately avoided another for the reasons outlined here, but otherwise focus mostly on iterative evaluation and implementation within the realm of wetware (i.e. diseases and other physical things that can kill people) in insecure environments. But I did find a hardware-wetware grounding extremely useful for thinking through the implications of more nebulous packaged interventions – it’s easier to see ethnographic forces as the drivers behind physical things, and the anecdotes presented here are a good reminder that there shouldn’t be much difference.**
A lot of what I’ve summarized might sound over-hashed or derivative to people familiar with these realms, but the degree of nuance presented in Geek Heresy is novel and refreshing. There’s something so tremendously satisfying about being led through someone’s maze of a mind as they tie together cohesive and consistent conclusions from seemingly disparate threads. This book was such a breath of fresh air; I’m very glad I read it, but I did feel like a choir member singing along, and I’m not sure that’s the audience it needs.
All that said, I felt three big things were missing:
1) Differentiation between “packaged solutions” and necessary hardware. Geek Heresy discusses hardware intermittently (One Laptop Per Child, for instance), but something about lumping physical resources into the same group doesn’t sit quite right with me. In particular, he tries to apply the Law of Amplification to vaccines in a way I found particularly wanting.*** I suppose I’m thinking of the basics – water purification systems, medical supply chains, passable roads, family planning, electric grids: things where you can have x% heart, y% mind, and z% will supporting the system, but still end up multiplying by either 1 or 0. A concrete discussion of whether the same rules apply to these physical tools and at-hand resources, and why or why not, would strengthen the book substantially (and perhaps I’ll take a swing at it another time).
2) A more nuanced approach to process and evaluation. Some criticisms of “the technological way of solving problems” seem a little misdirected. It’s a powerful approach that allows for the perception of constraints and systematic flows, for modular and movable parts, for creative problem solving based upon the generation of new options rather than simple compromise or trade. That approach – and, really, evidence-based decision-making in general! – remains desperately underutilized by even the most capable governments. “Both excessive faith in and frantic fear of technology are denials of human responsibility” – and yet these are the perspectives that public representations of policy discussions ping-pong between. So I worry that too thorough a reading by someone without a background in this stuff could forge a heart-mind-will argument that leads to a baby-bathwater scenario.
3) A completed feedback loop, allowing for emergent norms. As a big believer in the capacity of the evolutionarily validated, centuries-honed deep circuitry of the human hindbrain to overpower all, I’m totally sold on the Fundamental Feels argument for effective impact; it’s vaguely reminiscent of Dale Carnegie. But I think the process presented here is a little too unidirectional. Fundamental human things – self-control, intention, discernment, per Toyama’s framework – impact the way we develop and respond to interventions, yes. And technology may well amplify what’s already there. But that amplification can escalate into positive feedback cycles, perhaps exhausting reserves of self-control that might never have been exhausted if not for the technology. That’s one part of it – the possibility of amplification beyond thresholds that were previously out of reach. Another part is the potential for those cycles to add up into shifting norms. To some extent, it’s all a chicken-or-the-egg game: humans create technologies, which influence other humans and become established norms for the next generation. Those new humans create new things from that divergent point, with a set of values influenced by the prior generation’s, and on and on until, I don’t know, the singularity. When it comes to international development, then, there aren’t only cultural barriers; there are presumptions about emergent norms.
This last point is where I become deeply concerned about international implications: not only in terms of poorly used funds, or wasted time, or false promises, but for the possibility of exporting the negative impacts of technology on values before societies have the aspirational bandwidth, civil infrastructure, and democratic institutions in place that allow them to make decisions about those issues for themselves. Toyama alludes to this briefly while lightly snarking on principles of what he calls technological orthodoxy: the dangers of bypassing values by pretending to value neutrality, and of arbitrarily upholding freedom over responsibility while neglecting potential equilibria along the spectrum.
With that, Geek Heresy presents a big challenge: a more compelling narrative is needed to counteract the technology-first development paradigm; it is not enough to grouse from the sidelines and hope for the best. The tech-first narrative appeals deeply to decision makers who are often too far removed from the problems at hand to examine proposals critically. Moreover, the forces at work have deep lobbying pockets and profit margins on the line, and are increasingly interwoven with good and necessary functions of government, making the negative elements more difficult to parse.
More on that in Part 2…
* Minor quibble: though he goes on to favorably discuss more complex, process-based self-actualization, Toyama invokes “the pursuit of happiness” in his critique of single-slice happiness metrics. SO misplaced. Jefferson, like Kid Cudi, is all about the struggle.
**bad cloud pun, duh
***One of the best sections of the book involves a discussion of Peter Rossi’s classic “The Iron Law of Evaluation and Other Metallic Rules,” which concludes ironically that “there are no social science equivalents of the Salk vaccine!” Barring any discussion of variation in immune response, I think that helps hint at the distinction between social-software and hardware-wetware…