TRANSCRIPT
This month, Shobita and Jack reflect on the recent COP meeting in the United Arab Emirates and recent AI news, including the Biden Administration's Executive Order, the UK's AI Safety Summit, and the fates of the two Sams: Altman and Bankman-Fried. And they chat with Sarah de Rijcke, Professor in Science, Technology, and Innovation Studies and Scientific Director at the Centre for Science and Technology Studies at Leiden University in the Netherlands.
References:
- D'Ignazio, C. and Klein, L. F. (2020). Data Feminism. The MIT Press, Cambridge, Massachusetts.
- Andreessen, M. (2023, October 16). The Techno-Optimist Manifesto. Andreessen Horowitz.
- de Rijcke, S. (2023). Does science need heroes? Leiden Madtrics blog, CWTS, Leiden University.
- Pölönen, J., Rushforth, A.D., de Rijcke, S., Niemi, L., Larsen, B. & Di Donato, F. (2023). Implementing research assessment reforms: Tales from the frontline.
- Rushforth, A.D. & de Rijcke, S. (2023). Practicing Responsible Research Assessment: Qualitative study of Faculty Hiring, Promotion, and Tenure Assessments in the United States. Preprint. DOI: 10.31235/osf.io/2d7ax
- Scholten, W., Franssen, T.P., Drooge, L. van, de Rijcke, S. & Hessels, L.K. (2021). Funding for few, anticipation among all: Effects of excellence funding on academic research groups. Science and Public Policy, 48(2), 265-275. DOI: 10.1093/scipol/scab018 https://academic.oup.com/spp/article/48/2/265/6184850
- Penders, B., de Rijcke, S. & Holbrook, J.B. (2020). Science’s moral economy of repair: Replication and the circulation of reference. Accountability in Research, first published online January 27, 2020. DOI: 10.1080/08989621.2020.1720659.
- Müller, R. & de Rijcke, S. (2017). Thinking with indicators: Exploring the epistemic impacts of academic performance indicators in the life sciences. Research Evaluation. DOI: 10.1093/reseval/rvx023.
Study Questions:
1. What is techno-optimism, and how does it apply in the case of AI?
2. How might we think about the strengths and weaknesses of current efforts to address AI governance by the U.S. government?
3. What are some negative consequences of simplistic performance metrics for research assessment, and why do such metrics remain in use?
4. How have large companies like Elsevier extended their reach beyond publishing? How might this shape the trajectory of research assessment methods?
5. What hopes exist for improving the performance metrics used in research assessment?
More at thereceivedwisdom.org