Read the interview with Dr. Mojca Rožman & Dr. Nina Roczen from DIPF - Leibniz Institute for Research and Information in Education, HAND:ET partner leading the assessment development.
Introducing the HAND:ET Consortium & SEDA competencies is a series of web articles that will be regularly shared via the HAND:ET website and social media channels (Facebook, Twitter). We will present the project partners through short interviews with members of the project teams on topics from their fields of expertise that are connected to and relevant for the main theme of the HAND:ET project: social and emotional competencies and diversity awareness (SEDA) and empowering teachers in schools.
The DIPF | Leibniz Institute for Research and Information in Education is a central institution in the field of educational research and educational information. DIPF supports scientists, policy-makers, and practitioners by conducting empirical research and providing innovative applications. The DIPF researchers contributing to the HAND:ET project are part of the Department for Teacher and Teaching Quality, which deals with the quality and effectiveness of pedagogical processes in instructional settings, schools and universities. A priority is placed on researching the professionalization of pedagogical staff.
DIPF was responsible for the summative and formative evaluation of the programme in the initial HAND IN HAND: Social and Emotional Skills for Tolerant and Non-discriminative Societies – A Whole-School Approach project, using a multi-method approach that integrated the perspectives of different stakeholders. In the HAND:ET project, DIPF is in charge of the assessment development and the overall external evaluation of the policy experiment. This entails careful planning of the evaluation process, specifying the overall aims of the evaluation, and developing measurement instruments tailored to those aims. Furthermore, DIPF leads the implementation of the evaluation design (preparing and monitoring the pre- and post-measurements and performing the data analyses).
Dr. Mojca Rožman is a post-doctoral researcher at DIPF and at IEA Hamburg. She has experience with international large-scale assessments, questionnaire development and psychometric analysis, focusing on the scaling of questionnaire and test data and the evaluation of experiments.
Dr. Nina Roczen is a psychologist and works as a post-doctoral researcher at the DIPF | Leibniz Institute for Research and Information in Education in the Department for Teacher and Teaching Quality. Her research interests include competencies in education for sustainable development and the evaluation of school development measures.
An important part of promoting and advocating for the development of social, emotional and diversity awareness (SEDA) competencies is their assessment. For a long time, SEDA competencies were seen as something “out of reach”, hard to capture using the usual approaches. What is the current state of the art in the field, and can we in fact assess and measure SEDA competencies accurately and reliably? Are there any direct assessments of SEDA competencies?
It remains difficult to measure SEDA competencies. Unlike predominantly cognitive competencies, such as mathematics or reading, there are no well-established tests measuring SEDA. Instead, the most commonly used instruments are self-assessment questionnaires. These are very practical, as they are easy to use and inexpensive. However, they also have disadvantages: for example, they are prone to conscious or unconscious biases, which can arise because people usually want to present themselves in a particularly favourable light when describing themselves. Alternatively, one can try to measure SEDA competencies via interviews, or by having people describe how they would behave in different critical situations and then analyzing these descriptions. These methods have the disadvantage that they are very time-consuming to analyze. Each evaluation method provides results from a different perspective. The most promising approach is therefore the combined use of measurement methods, that is, a mix of questionnaires, observations and interviews, for example.
Why are the evaluation and assessment of these types of programs and experiments important?
Systematically evaluating the effectiveness of programs such as HAND:ET has the advantage that particularly effective programs can be identified on the basis of the evaluation results and then implemented on a larger scale.
How helpful are analyses such as impact studies for these types of programs, where we expect to see changes in attitudes, beliefs or competencies?
When is the right time to assess the effect of the interventions?
The only way to obtain robust information on the effects of school development programs like HAND:ET is to implement an experimental design. That is, we need an experimental group and a control group to which participants are randomly assigned. The experimental group then receives the training and the control group does not. Target competencies are measured in both groups at two points in time: the first usually directly before the training starts for the experimental group, and the second after the training has been conducted. It can then be observed whether the gains in competencies are larger in the experimental group than in the control group. If so, we can assume that the gains were caused by the training.
As far as the right time is concerned, it is first of all important to measure competencies directly before and after the intervention. If you find a significant increase in competencies in the training group compared to the control group, that is already good. Of course, it is also important to know whether such an effect is sustainable. For this purpose, a third measurement point should ideally be added, e.g. six months or a year after the end of the training.
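The comparison logic described above can be sketched in a few lines of code. This is a minimal illustration, not the project's actual analysis code: the training effect is estimated as the average gain in the experimental group minus the average gain in the control group (a difference-in-differences estimate). All scores below are made-up example values.

```python
from statistics import mean

def training_effect(pre_exp, post_exp, pre_ctrl, post_ctrl):
    """Difference-in-differences estimate of a training effect."""
    gain_exp = mean(post_exp) - mean(pre_exp)     # change in trained group
    gain_ctrl = mean(post_ctrl) - mean(pre_ctrl)  # change in control group
    return gain_exp - gain_ctrl

# Illustrative (made-up) competency scores for four teachers per group,
# measured before and after the training period.
pre_exp, post_exp = [3.0, 3.2, 2.8, 3.1], [3.6, 3.5, 3.2, 3.4]
pre_ctrl, post_ctrl = [3.1, 2.9, 3.0, 3.2], [3.2, 3.0, 3.0, 3.3]

effect = training_effect(pre_exp, post_exp, pre_ctrl, post_ctrl)
print(effect)
```

Subtracting the control group's gain removes changes that would have happened anyway (e.g. general experience gained over the school year), so that only the difference attributable to the training remains. A real evaluation would, of course, also test whether that difference is statistically significant.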
In the HAND:ET project you are using the evaluation strategy that combines a formative and summative approach. How can the two approaches be described and what are the advantages of combining them?
In the evaluation of the HAND:ET project, the focus is on the so-called summative evaluation, that is, we want to find out whether the training has a positive effect on teachers' SEDA competencies. In addition, however, we also carry out a formative evaluation. In this type of evaluation, the aim is to obtain information that can be used to improve the studied intervention (in our case the HAND:ET SEDA training) in the future. For example, we ask teachers about their ideas on how to improve the program, what aspects they had difficulties with, or what obstacles they encountered in implementing what they had learned in class.