Research

My interdisciplinary research examines the politics of emerging technologies and the opportunities for sustaining humanity and its planetary home.

Peer-Reviewed Publications

Billesbach, G., & Johnson, S. (2025).

Self-interpretive freedom and the politics of quantified self technologies.

AI & Society.

DOI: 10.1007/s00146-025-02676-1

The Quantified Self (QS) movement promises to enhance personal freedom by using technology to track and optimize human physiology and activity. QS technologies such as fitness trackers, smartwatches, and mood-tracking apps have grown in popularity and are widely credited with improving mental, physical, and emotional health. Critics, however, question these benefits and highlight the potential harms of technological self-surveillance, while proponents encourage an expansion of QS technologies as a means of transcending the limits of unaugmented life. This article examines how we might best understand and respond to the challenges posed by QS technologies. Adopting an ethnographic approach and employing Charles Taylor’s framework of self-interpretive freedom, it analyzes publicly available primary evidence to show that QS practitioners need neither blindly surrender to self-tracking technology nor lose their humanity when deploying it. Rather, their technologically enabled activities can enrich self-interpretive freedom. As digital human twins and AI-driven self-tracking tools evolve, their development and use are best guided by this principle: that technology should serve, rather than supplant, the temporal, dialogical, and embodied dimensions of human agency.

Billesbach, G. (2025).

The politics of predictive technology in the Intergovernmental Panel on Climate Change.

Oxford Intersections: AI in Society.

DOI: 10.1093/9780198945215.003.0020

The Intergovernmental Panel on Climate Change (IPCC) is a central node for a diverse group of actors interested in the politics of climate change. At the interface of science and policy, it is founded on three principles: being policy relevant but never policy prescriptive; enlisting geographically diverse participants; and being transparent about its procedures. Nonetheless, humanist critics of technology and technology enthusiasts alike critique the IPCC, noting the political implications of its predictive technology. For some, the IPCC’s use of machine learning algorithms and the outputs they produce rely on an underlying technocratic logic. On this view, the IPCC’s supposed neutrality masks a universal framework that harms democratic politics by flattening regional variation and Indigenous knowledge and foreclosing non-quantitative approaches to the world (e.g., poetry and narrative). For others, general circulation models are not technologically advanced enough and are thus blunt instruments in need of replacement by novel AI; on this account, the IPCC is marred by human flaws and does not defer to technology enough. This article draws on Hannah Arendt’s political theory and interpretive methods to address these underlying tensions concerning technology and politics in the IPCC. It analyzes IPCC leadership, reports, and original interviews with climate scientists. These sources illustrate how predictive algorithms and expert rule are prominent in the IPCC’s work, yet meaningful efforts remain to integrate regional differences and non-quantitative perspectives. The climate scientists interviewed frequently acknowledge tensions between technocratic and humanistic approaches, often articulating distinctly humanistic or poetic sensibilities in tandem with their rigorous deployment of predictive technology. They demonstrate how we can resist purely technocratic approaches to climate governance while utilizing state-of-the-art technologies.

Works in Progress

Invocations of Freedom in the Age of Predictive Algorithms

Predictive algorithms promise enhanced freedom by way of increasingly accurate forecasts of human behavior. Critics frequently invoke traditional theories of negative, positive, and republican liberty in alerting us to the dangers of predictive technologies, but these approaches inadequately capture the full impacts of algorithmic prediction, including our willingness and ability to exercise freedom with others. Drawing from Charles Taylor’s concept of self-interpretive freedom and Hannah Arendt’s notion of freedom as political action, I argue that predictive algorithms can both undermine and support our capacities for reflection and interpretation, and alter our disposition toward spontaneous political engagement. Arendt’s understanding of freedom as inherently pluralistic and unpredictable highlights the risks posed by algorithmic governance that prioritizes certainty over democratic deliberation and openness. Taylor’s emphasis on the temporal, dialogical, and embodied nature of self-interpretation reveals the dangers of representing human experience primarily through quantitative measures.

AI and the Exercise of Freedom: The Politics of Personal & Planetary Prediction

Algorithms and artificial intelligence (AI) promise unprecedented predictive power, which many associate with greater efficiency, objectivity, and control. They also imperil many cherished values, such as justice, equality, and freedom. This book investigates how the predictive power of algorithms and AI threatens freedom. It neither counsels the wholesale rejection of algorithms and AI nor overlooks their potential benefits. Instead, it (1) assesses how negative, positive, and republican conceptions of liberty have been invoked in prevailing discussions of algorithms and AI; (2) advances interpretations of Hannah Arendt and Charles Taylor on science, technology, and the exercise of freedom to clarify the political impact of algorithmic prediction; and (3) applies these theoretical insights in two empirical case studies that employ predictive tools at planetary and personal scales—the Intergovernmental Panel on Climate Change (IPCC) and the Quantified Self (QS) movement.

Traditional approaches to freedom (the negative, positive, and republican conceptions of liberty) help us understand the consequences of algorithms when they malfunction or are used to facilitate political interference, inequality, and unaccountable decision-making. However, these approaches inadequately address the full impact of predictive technologies in our political and personal lives. Arendt’s theory of political action and Taylor’s concept of self-interpretation provide crucial supplements, revealing the political and personal impacts of predictive technologies and the prospects for addressing them. An investigation of the IPCC and the QS movement demonstrates that predictive technologies are neither inherently oppressive nor inevitably liberating. Members of this institution and movement illustrate how predictive technologies can fruitfully coexist with democratic and reflective practices. In this manner, algorithms and AI can be guided by wisdom. And human freedom, with all of its uncertainties, can endure.
