Overview of student surveys around the world

Teacher performance evaluation is a common practice at universities every semester. Through this process, university management obtains information from students about their teachers. Universities analyse this information to gain knowledge from the surveys and to take actions, such as rewarding teachers and helping them improve their teaching abilities, among other benefits.

Many instruments or tools have been proposed to evaluate teachers’ performance in the classroom. The major difference among these tools is the set of dimensions they analyse; what they have in common is that all of them rely solely on students’ opinions. For example, the Course Experience Questionnaire (CEQ), applied in Australian universities, evaluates the experience a student had during a course (Hirschberg, Lye et al. 2015). Students fill out surveys evaluating different teacher attributes, such as teaching attitude, teaching content, teaching evaluation, and other teaching aspects (Jiabin, Juanli et al. 2010).

Another example is IDEA, “Improving Learning of Higher Education,” a nonprofit organization that, since 1975, has provided an instrument and the associated processes to evaluate and improve teachers’ performance, including the administration of the instrument and the analysis of the students’ answers. IDEA specializes in using student questionnaires to provide opportunities to improve teaching and learning processes (IDEA 2015).

Likewise, the Student Evaluation of Educational Quality (SEEQ) is an instrument from the Center for the Advancement of Teaching and Learning at the University of Manitoba, Canada, in which students evaluate teachers along teaching dimensions that include learning, enthusiasm, organization, group interaction, and their overall impression of the instructor (University of Manitoba 2015).

Finally, the Student Perception of Teaching Effectiveness (SPTE) is an instrument from Wichita State University, U.S., that measures students’ perception of teaching. It is used for summative purposes, to recognize teachers who are doing well, and for formative purposes, to improve teaching (Wichita State University 2015; Jackson, Teal et al. 1999).

In the teacher evaluation instruments described above, only the students’ point of view is analysed. Teachers’ opinions about the students’ feedback on their work have rarely been incorporated (Arthur 2009). Arthur (2009) claimed that a teacher needs to consider students’ judgments to improve his or her practice, especially after receiving a low evaluation. However, a teacher should also weigh any change in practice against his or her own feelings and professional judgment about what is being taught. Hence, we observe an existing gap: a comparison between students’ evaluations and the teacher’s own perception needs to be made, as noted by experts in the field (Hirschberg, Lye et al. 2015).

Besides the evaluation instruments, other data sources have been analysed to evaluate teachers’ performance. These datasets cover teachers’ personal information and characteristics of the course, among others. The next section presents the analysis of these data sources.

Table of contents

INTRODUCTION
0.1 Context: questionnaires and instruments
0.2 Aim of the thesis
0.3 Research questions and overview of the methodology
CHAPTER 1 LITERATURE REVIEW
1.1 Overview of student surveys around the world
1.2 Analysis of evaluation instruments
1.3 Domain Driven Data Mining (D3M) and Actionable Knowledge Discovery (AKD)
1.4 Knowledge, domain knowledge, experts’ knowledge and meta-knowledge
1.5 Interestingness: Objective, Subjective and Semantic measures and Actionability
1.6 Utility mining and utility formula
1.7 Probabilistic Topic modeling and measuring the model
1.7.1 Probabilistic Topic Modeling
1.7.2 Cohen Kappa
1.8 Conclusion of the review
CHAPTER 2 GENERAL METHODOLOGY
2.1 Methodology overview
2.2 Methodology description
2.2.1 Objective 1: To build a representation using students’ model and teacher’s model based on actionable knowledge
2.2.2 Objective 2: To construct a utility semantic measure to evaluate the usefulness of association rules within the model
2.2.3 Objective 3: To discover new topics from the interviews analysis to improve teacher evaluation
2.3 Objective description and associated results
2.3.1 Objective 1: To build a representation using students’ model and teacher’s model based on actionable knowledge
2.3.2 Results associated to Objective 1, “To build a representation using students’ model and teacher’s model based on actionable knowledge”
2.3.3 Objective 2: To construct a utility semantic measure to evaluate the usefulness of association rules within the model
2.3.4 Results associated to Objective 2, “To construct a utility semantic measure to evaluate the usefulness of association rules within the model”
2.3.5 Results of the comparison between Objective 1, “To build a representation using students’ model and teacher’s model based on actionable knowledge”, and Objective 2, “To construct a utility semantic measure to evaluate the usefulness of association rules within the model”
2.3.6 Objective 3: To discover new topics from the interviews analysis to improve teacher evaluation
2.3.7 Results of Objective 3, “To discover new topics from the interviews analysis to improve teacher evaluation”
2.3.8 Conclusions
CHAPTER 3 DESCRIPTION OF THE INSTRUMENTS TO OBTAIN KNOWLEDGE: INTERVIEW AND SURVEYS
3.1 Introduction
3.2 Questionnaires for Teacher’s subjective perspective (S1 and S2)
3.2.1 Section A: Question evaluation
3.2.2 Section B: Evaluation per area
3.2.3 Section C: Reaction to low evaluation
3.2.4 Section Da: Creation of open association rules using variables from section A
3.2.5 Section Db: Creation of contextual association rules using variables from section B
3.2.6 Section E: Questionnaire for Teacher’s evaluation rules (S2)
3.3 Results from the survey
3.3.1 Results from Section A: Attribute evaluation
3.3.2 Results from section B: Most selected areas
3.3.3 Results from section C: Reaction to low evaluation
3.3.4 Results from section Da: Creation of open association rules using attributes from section A
3.3.5 Results from section Db: Creation of context association rules using variables from section B
3.3.6 Conclusions from teachers’ survey
CHAPTER 4 CONCLUSION
