Discursive Dynamics in Policy Making
Dnr: NAISS 2023/22-1218
Type: NAISS Small Compute
Principal Investigator: Hendrik Erz
Affiliation: Linköpings universitet
Start Date: 2023-12-01
End Date: 2024-12-01
Primary Classification: 50401: Sociology (excluding Social Work, Social Psychology and Social Anthropology)
Webpage:

Abstract

Policymaking is the process of enacting legislation to influence how various parts of society operate. To make policy, elected officials can use various policy instruments to adjust areas of society, e.g. taxation, tuition fees, or market regulation. This project aims to identify the mechanism(s) behind policymaking, using the economic policymaking process in the U.S. Congress as a case study. It assumes the following model of the policymaking process: each representative can use speeches in the House of Representatives or the Senate to announce their stance on policy instruments and their voting intentions; the representatives then proceed to vote and, depending on the outcome, enact or reject the corresponding bills.

To analyze the discursive dynamics in policymaking, the project needs to identify the stances that representatives take towards various policy instruments in their speeches. We are interested in stances towards a set of ten economic policy instruments: corporate taxation, individual taxation, deregulation, privatization, capital controls, government spending, government deficit, user fees for public services, independence of key institutions, and property laws. Our data is a digitized version of the Congressional Record (CREC) (Gentzkow, Shapiro, and Taddy 2019), the official transcripts of the U.S. Congress, which encompasses over 17 million speeches from 1873 to 2011 and amounts to approximately 70 GB of raw text.

We utilize a process called “active learning” (Bonikowski, Luo, and Stuhler 2022) to train a set of large language models (LLMs) – specifically RoBERTa (Liu et al. 2019) – which are then capable of labeling the speeches according to the policy instrument stance they contain. RoBERTa models are neural networks with more than 100 million parameters; we use one model per policy instrument. Since most speeches exceed the 512-token input limit of BERT-style models, we split the speeches into paragraphs. Active learning works as follows: first, we create a gold-standard data set that contains a sample of human-annotated paragraphs from the corpus and train a RoBERTa model on these data. Second, we let the trained model annotate the full corpus and extract the paragraphs where the model was unsure. We measure this uncertainty as the Kullback-Leibler divergence between the probability distribution assigned by the model and the uniform distribution: the closer a prediction is to uniform, i.e. the lower the divergence, the less certain the model is. After manually annotating these paragraphs and adding them to the training data, we train the model again. This process is repeated until our verification metric is sufficiently high. That metric is PR-AUC (Precision-Recall Area Under the Curve), which penalizes both false positives and false negatives and ranges from 0 for imprecise models to 1 for precise, well-generalized models.
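
As an illustration of the paragraph-splitting step, the following minimal Python sketch checks paragraph lengths against the model's input limit. The "roberta-base" checkpoint and blank-line paragraph delimiting are assumptions for the example; the abstract does not specify the exact checkpoint or how paragraphs are delimited in the digitized record.

```python
from transformers import AutoTokenizer

# Assumption: the base RoBERTa checkpoint; its input limit is 512 tokens.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
MAX_LEN = tokenizer.model_max_length  # 512 for roberta-base

speech = "First paragraph of a long floor speech.\n\nSecond paragraph."
# Assumption: paragraphs are separated by blank lines.
paragraphs = [p.strip() for p in speech.split("\n\n") if p.strip()]

for p in paragraphs:
    n_tokens = len(tokenizer(p)["input_ids"])  # includes special tokens
    if n_tokens > MAX_LEN:
        print(f"paragraph has {n_tokens} tokens; needs further splitting")
```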
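For a paragraph with predicted class distribution P over K classes, the divergence from the uniform distribution U simplifies to D_KL(P || U) = log(K) − H(P), where H is the entropy: the score is 0 when the prediction is exactly uniform (maximal uncertainty) and log(K) when all probability mass sits on one class. A minimal numpy sketch of this score, assuming the model outputs one softmax distribution per paragraph (the function name is illustrative, not from the project):

```python
import numpy as np

def kl_to_uniform(probs: np.ndarray) -> np.ndarray:
    """D_KL(P || U) for each row of class probabilities.

    Equal to log(K) - H(P): 0 when the prediction is uniform (the model
    is maximally unsure) and log(K) when all mass sits on one class.
    """
    k = probs.shape[-1]
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return np.log(k) + (p * np.log(p)).sum(axis=-1)

probs = np.array([[0.34, 0.33, 0.33],   # near-uniform -> uncertain
                  [0.98, 0.01, 0.01]])  # confident
scores = kl_to_uniform(probs)
order = np.argsort(scores)              # ascending: most uncertain first
print(scores, order)
```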
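Putting the pieces together, the active-learning loop could be sketched as below. To keep the example self-contained and runnable, a logistic-regression classifier on synthetic features stands in for the fine-tuned RoBERTa model, and a lookup into known labels stands in for the human annotators; only the loop structure, the uncertainty criterion, and the stopping rule follow the abstract, while all names and the thresholds are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

# Synthetic stand-ins: feature vectors play the role of paragraph texts,
# logistic regression plays the role of the fine-tuned RoBERTa model,
# and the known labels y play the role of the human annotators.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
test = np.arange(1600, 2000)                        # held-out verification set
pool = np.arange(1600)                              # "unlabeled" paragraphs
rng = np.random.default_rng(0)
labeled = rng.choice(pool, size=50, replace=False)  # initial gold standard
pool = np.setdiff1d(pool, labeled)

for rnd in range(10):                               # active-learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    score = average_precision_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"round {rnd}: PR-AUC = {score:.3f}")
    if score >= 0.95:                               # stop once sufficiently high
        break
    probs = clf.predict_proba(X[pool])              # annotate the full pool
    # Lowest KL to uniform == highest entropy: the most uncertain paragraphs.
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    uncertain = pool[np.argsort(entropy)[-50:]]     # 50 most uncertain
    labeled = np.concatenate([labeled, uncertain])  # "annotate" and add them
    pool = np.setdiff1d(pool, uncertain)
```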
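Finally, the PR-AUC verification metric can be computed directly from the precision-recall curve. The sketch below uses scikit-learn with made-up labels and scores purely for illustration; average precision is shown alongside as the commonly used summary of the same curve.

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                 # gold labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])   # model probabilities

precision, recall, _ = precision_recall_curve(y_true, y_score)
print("PR-AUC (trapezoidal):", auc(recall, precision))
print("Average precision:   ", average_precision_score(y_true, y_score))
```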