Assessing by doing
Research output: Contribution to conference › Conference abstract for conference › Research › peer-review
Standard
Assessing by doing. / Earle, David; Ellemose Lindvig, Katrine.
2022. Abstract from Interdisciplinary learning and teaching conference, Birmingham, United Kingdom.
RIS
TY - ABST
T1 - Assessing by doing
AU - Earle, David
AU - Ellemose Lindvig, Katrine
PY - 2022
Y1 - 2022
N2 - In this paper, the CoNavigator team draws on a recent case study of how their tool is being used, inviting discussion on best practices for assessing and evaluating interdisciplinarity and on how the tool might best be aligned with more traditional assessment methods. The University of Maryland, Baltimore County has recently embarked on an ambitious NSF-funded 5-year graduate training programme, aiming to prepare under-represented minority students for careers in environmental problem-solving. Each student is placed into an interdisciplinary team which includes an academic supervisor, a professional scientist from a regional agency and a community stakeholder. To help steer the process, and to help students learn about interdisciplinary collaboration by doing, teams will use CoNavigator – a hands-on interdisciplinary collaboration tool – near the beginning and end of the programme’s lifespan. Students will also use the tool separately to self-evaluate the interdisciplinary process. All participants can revisit the outcomes of the tool via augmented reality ‘recordings’ of the collaboration process. While the students will be assessed via final (summative) project reports and oral examinations in their primary disciplines, there are undoubted difficulties in deciding how best to assess the interdisciplinary aspects of their studies. We would like to share our experiences and explore how assessing interdisciplinarity in a real-time interdisciplinary setting might be a valuable method for doing so.
M3 - Conference abstract for conference
T2 - Interdisciplinary learning and teaching conference
Y2 - 7 April 2022 through 7 April 2022
ER -
ID: 340363961