
DATA ACQUISITION

To investigate the use of the metacognitive gap as an assessment quantity, an online survey tool recorded, sorted, and downloaded answers to questions, along with the confidence judgments (low, medium, or high) that students assigned to those answers, in an online Introduction to Astronomy course taught by the author. As illustrated in Table 2, online surveys provide six important features when used as assessment tools. Of the survey tools available, the author selected SurveyMonkey for this study because it incorporates all of the items shown in the table.

Table 2. Important Online Survey Tool Features

Specifically, the survey tool recorded responses to multiple-choice questions reflecting two critical thinking levels derived from the revised Bloom's taxonomy (Krathwohl, 2002), illustrated in Figure 1.

Whereas the original taxonomy listed categories in the cognitive domain, Krathwohl reinterpreted those categories as processes. Specifically, the questions used in the study comprised those testing knowledge and understanding, designated as Level I (N_I = 33), and those testing application and analysis, designated as Level II (N_II = 23). (See Appendix A for a sample question and survey output.)
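To make the data concrete, each recorded response can be viewed as a small record pairing an answer with its confidence judgment. The Python sketch below shows one hypothetical representation; it is illustrative only, and the actual SurveyMonkey output format is the one shown in Appendix A.

    from dataclasses import dataclass

    @dataclass
    class Response:
        question_id: int   # 1..56: 33 Level I plus 23 Level II questions
        level: int         # 1 = knowledge/understanding, 2 = application/analysis
        correct: bool      # whether the selected answer was correct
        confidence: str    # self-reported confidence: "low", "medium", or "high"

    # One hypothetical record; real survey output appears in Appendix A.
    r = Response(question_id=7, level=1, correct=True, confidence="high")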


Figure 1. Categories of Bloom’s revised taxonomy

Figure 2 summarizes the normalized data collected for 56 students answering these questions. Out of a possible total of 3136 responses, the survey tool recorded 2850 total responses (1904 responses to Level I questions and 946 responses to Level II questions). The 9.1% difference in these totals reflects students either forgetting to enter confidence values or skipping questions.
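The reported totals are internally consistent with 56 students each facing all 56 questions; a quick arithmetic check (Python is used here purely for illustration, not as the author's tooling):

    students = 56
    questions = 33 + 23                      # Level I plus Level II questions
    possible = students * questions          # 3136 possible responses
    recorded = 1904 + 946                    # 2850 recorded responses
    shortfall = 100 * (possible - recorded) / possible
    print(possible, recorded, round(shortfall, 1))   # -> 3136 2850 9.1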

Figure 2. Correct and Incorrect Response Confidence Distributions

Also shown in the figure are confidence distributions labeled "Acceptable", which represent an "acceptable" level of miscalibration. More specifically, the more the distribution of actual responses trends toward these acceptable levels, the more the students "know what they know or know what they don't know". In this study, the acceptable distributions corresponded to 5% (low), 15% (medium), and 80% (high) for correct answers and 80% (low), 15% (medium), and 5% (high) for incorrect answers.

ANALYSIS

As seen in Figure 2, confidence was underestimated for correct Level II responses and overestimated for incorrect responses at both Levels I and II, indicating the presence of metacognitive miscalibration. Only the correct Level I responses showed a distribution trending toward acceptable. Since the level of miscalibration refers to multiple judgments of confidence, the response of a single student to one particular question is not meaningful. However, investigating all student responses for a particular question provides a more productive measure of calibration. To explore this aspect, a 3-dimensional vector (C) was created with components corresponding to the total low, medium, and high confidence responses given by all students to each question (Figure 3). This vector, when compared to a vector representing the acceptable levels (denoted as C_acc), resulted in a quantity labeled the confidence miscalibration angle and designated by the symbol θ.


Figure 3. Construction of a 3-dimensional confidence vector.
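The text does not write the formula out, but comparing two vectors by an angle conventionally means taking the arccosine of their normalized dot product. The sketch below makes that reading explicit; reporting the angle in degrees is an arbitrary scale choice, consistent with treating θ as dimensionless.

    import math

    # Acceptable confidence distributions (low, medium, high), as given above.
    C_ACC_CORRECT = (5.0, 15.0, 80.0)
    C_ACC_INCORRECT = (80.0, 15.0, 5.0)

    def miscalibration_angle(c, c_acc):
        """Angle between an observed confidence vector c = (low, medium, high)
        and an acceptable vector c_acc."""
        dot = sum(a * b for a, b in zip(c, c_acc))
        norms = math.hypot(*c) * math.hypot(*c_acc)
        cos_theta = max(-1.0, min(1.0, dot / norms))  # clamp for float safety
        return math.degrees(math.acos(cos_theta))

    # Hypothetical tallies: 10 low, 20 medium, 30 high confidence responses
    # to the correct answers for one question.
    theta = miscalibration_angle((10, 20, 30), C_ACC_CORRECT)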

Thus for a given question, smaller values of θ represent a closer alignment of student responses to the acceptable distribution. Conversely, larger values of θ show a greater misalignment with the acceptable level distribution. It is noted that in the subsequent analysis, only the magnitude of the confidence miscalibration angle is of importance, not the units in which it is measured; hence, this angle is treated as a dimensionless quantity.

Just as the miscalibration angle measures the class confidence for a specific question, the overall class performance measures the probability of that question being answered correctly. Thus, if 25% of the students answered a particular question correctly, then for this question, a 25% probability is associated with it. Hence, there are two quantities of importance related to each question: the miscalibration angle θ and the performance probability P.

In terms of the metacognitive gap, the higher the performance probability P and the lower the miscalibration angle θ, the more evidence of higher metacognition levels and therefore the smaller the metacognitive gap. Lower values of P and higher values of θ suggest lower levels and thus a larger metacognitive gap. Therefore, the difference between performance and miscalibration angle, (P − θ), is inversely related to the metacognitive gap. Ideally, a performance value of 100 (again recognizing that percentages are dimensionless quantities) along with θ = 0 yields a value of 100 for (P − θ), which indicates a perfectly calibrated situation and a metacognitive gap of zero. Hence, the metacognitive gap may be expressed quantitatively as the difference between (P − θ) and this ideal case, or metacognitive gap = 100 − (P − θ).
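Expressed as a minimal sketch, the gap follows directly from P and θ, and the same arithmetic reproduces the four cases of Table 3 below:

    def metacognitive_gap(p, theta):
        """Metacognitive gap = 100 - (P - theta), with P a percentage and
        theta the (dimensionless) miscalibration angle."""
        return 100 - (p - theta)

    # The four cases of Table 3:
    for p, theta in [(25, 20), (25, 80), (75, 20), (75, 80)]:
        print(p, theta, metacognitive_gap(p, theta))   # gaps: 95, 155, 45, 105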

To calculate the metacognitive gap, the overall class performance probability P (expressed as a percentage) for correct responses to Level I and II questions was determined. Then, the corresponding values of θ were calculated using the existing confidence distribution data (Appendices B and C). To factor in the effect on θ of both the correct and incorrect confidence distributions, a weighted confidence angle θ_w was determined for each question; one possible weighting is sketched below. From these results, the metacognitive gaps for both Level I and Level II questions were used to construct a question/gap profile (QGP) (see Figure 4).
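The weighting scheme for θ_w is not spelled out in the text. One plausible reading, offered here as an assumption rather than the author's stated method, weights the angles computed against the correct and incorrect acceptable distributions by their respective response counts:

    def weighted_angle(theta_corr, n_corr, theta_incorr, n_incorr):
        """ASSUMED scheme: count-weighted average of the angles computed
        against C_ACC_CORRECT and C_ACC_INCORRECT; the paper's actual
        weighting (see Appendices B and C) may differ."""
        return (theta_corr * n_corr + theta_incorr * n_incorr) / (n_corr + n_incorr)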


Figure 4. Question/Gap Profile (QGP)

DISCUSSION

As shown in the figure, the Level I question gaps are, with a few exceptions, smaller than those for Level II questions. This is not unexpected, as Level II questions require higher-order thinking skills than do Level I questions, skills that may not be present. Furthermore, it is noted that some gaps are greater than 100; these represent cases where the miscalibration angle exceeded the performance, again suggesting a lack of higher-level metacognitive skills. In addition, some Level I questions have large gaps, perhaps indicating a misplacement of the question or a flaw in the question itself. This appears to be the case for the Level I question with the highest gap: on review, its statement was found to be potentially confusing, leading to lower performance scores paired with high confidence responses. This ability of the question/gap profile to identify poorly worded and/or misplaced questions was an unexpected yet important result.

In the Introduction section of this paper, a question was posed regarding the class averages for two specific questions in an exam, with these values found to be 25% and 75%. The question, restated here, asked: "Which case demonstrates a higher level of cognitive/metacognitive skill use?" To answer this question, the confidence miscalibration angles must be known and the metacognitive gap determined.

Table 3 shows possible answers to the question using different values of the miscalibration angle.

Table 3. Determination of the Presence of Higher Level Cognitive/Metacognitive Skills

    Performance (%)   Miscalibration Angle   Metacognitive Gap   Cognitive/Metacognitive Skill Level
    25                20                     95                  moderate to high
    25                80                     155                 low
    75                20                     45                  high
    75                80                     105                 moderate to high

As shown, the use of the metacognitive gap indicates that even for low performances, moderate or even high metacognitive skills may be present ("we know what we don't know"). Conversely, a small gap associated with high performance suggests that "we know what we know". Furthermore, if learning represents a change toward higher cognitive and metacognitive skill levels (Hench and Whitelock, 2010), then the measurement of metacognitive gaps also provides a means of assessing changes in metacognitive skills and thus changes in learning. Figure 5 shows the changes in the metacognitive gap for a selection of Level II questions that included time for reflection on the confidence initially given to answers and an opportunity to change the confidence as a result of that reflection.

Figure 5. Effect of Reflection on Question Metacognitive Gaps

While the changes are small, they suggest the usefulness of metacognitive gap measurements for monitoring potential changes in the development of thinking skills.

CONCLUSIONS

The results of the research described in this paper are summarized as follows:

1) An online survey tool, investigated as a means to collect, sort, and download data for assessing thinking skill levels, demonstrated its effectiveness in accomplishing this task.

2) Initial analysis of collected data revealed metacognitive gaps due to miscalibration between performance and confidence. A method introduced to quantify these gaps resulted in the calculation of the metacognitive gaps for individual questions.

3) Further analysis found a difference in the measured metacognitive gaps between question levels and among questions within each level, as illustrated by a Question/Gap Profile (QGP).

4) The application of the QGP to monitor changes in metacognitive gaps demonstrated a potential use as a means to assess higher level thinking skills. In addition, the profile provides a means to evaluate the content and cognitive level of specific questions.

Future work by the author includes using the concept of a QGP to monitor changes in thinking skill levels resulting from experimental treatments involving prior knowledge and confidence weighting.

REFERENCES

Blakely, E., & Spence, S. (1990). "Thinking for the Future." Emergency Librarian, 17(5), 11–14.

Dirkes, M. A. (1985). "Self-directed Thinking in the Curriculum." Roeper Review, 11, 92–94.


Hench, T. L., & Whitelock, D. (2010). "Towards a Model for Evaluating Student Learning Via e-Assessment." Proceedings of the 9th International Conference on Computer-Based Learning in Science, Warsaw, Poland, July 3–7, 2010.

Krathwohl, D. R. (2002). "A Revision of Bloom's Taxonomy: An Overview." Theory into Practice, 41(4), Autumn 2002. The Ohio State University.

Kruger, J., & Dunning, D. (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology, 77(6), 1121–1134.

Okamoto, C., Leighton, J., & Cor, M. (2008). "The Role of Academic Confidence and Epistemological Beliefs on Syllogistic Reasoning Performance." Centre for Research in Applied Measurement and Evaluation, University of Alberta.

Thomas Lee Hench

Professor of Physics and Astronomy
Delaware County Community College
Media, PA 19063, USA

Email: thench@dccc.edu


APPENDIX A

Example of a Level II Question.
