Programme for International Student Assessment (PISA)

Thinking it through: Australian students' skills in creative problem solving

Lisa De Bortoli
Greg Macaskill

Australian Council for Educational Research

First published 2014 by
Australian Council for Educational Research Ltd
19 Prospect Hill Road, Camberwell, Victoria, 3124, Australia
www.acer.edu.au
www.acer.edu.au/ozpisa/reports/

Text © Australian Council for Educational Research Ltd 2014
Design and typography © ACER Creative Services 2014

This book is copyright. All rights reserved. Except under the conditions described in the Copyright Act 1968 of Australia and subsequent amendments, and any exceptions permitted under the current statutory licence scheme administered by Copyright Agency (www.copyright.com.au), no part of this publication may be reproduced, stored in a retrieval system, transmitted, broadcast or communicated in any form or by any means, optical, digital, electronic, mechanical, photocopying, recording or otherwise, without the written permission of the publisher.

Cover design, text design and typesetting by ACER Creative Services
Edited by Amanda Coleiro

National Library of Australia Cataloguing-in-Publication entry
Author: De Bortoli, Lisa, author.
Title: Thinking it through : Australian students' skills in creative problem solving / Lisa De Bortoli, Greg Macaskill.
ISBN: 9781742862576 (paperback)
Notes: Includes bibliographical references.
Subjects: Programme for International Student Assessment. Problem-based learning--Australia. Critical thinking in children--Australia. Problem solving--Ability testing--Australia. Educational evaluation.
Other Authors/Contributors: Macaskill, Greg, author. Australian Council for Educational Research, issuing body.
Dewey Number: 371.39

The views expressed in this report are those of the authors and not necessarily those of the Commonwealth, State and Territory governments.

Contents

Executive summary
List of figures
List of tables
Reader's guide

CHAPTER 1 Introduction
  The main goals of PISA
  The importance of assessing problem solving
  What participants did
  Participants in PISA 2012
  How results are reported
  Organisation of the report
  Further information

CHAPTER 2 The assessment of problem solving
  How is problem solving defined in PISA?
  How is problem solving assessed in PISA?
  The PISA 2012 problem-solving assessment structure

CHAPTER 3 Australian students' performance in problem solving
  Australia's problem-solving performance from an international perspective
  Australia's problem-solving performance in a national context
  Variations in problem-solving performance between and within schools
  Comparing students' performance in problem solving with mathematics, science and reading
  Relative performance in problem solving in Australia

CHAPTER 4 Students' strengths and weaknesses in problem solving
  Students' strengths and weaknesses in problem-solving processes
  Students' strengths and weaknesses in the nature of the problem situation
  Students' strengths and weaknesses on the response formats
  Grouping countries by their strengths and weaknesses in problem solving

CHAPTER 5 Australian students' motivation towards problem solving
  Perseverance
  Students' openness to experience in problem solving

References

Executive summary

In PISA 2003, an assessment of cross-disciplinary problem solving was undertaken as a paper-based assessment. In PISA 2012, problem solving was once again assessed, with 44 of the 65 participating countries and economies completing an optional computer-based assessment of problem solving. The problem-solving assessment focuses on students' general reasoning skills, their ability to regulate problem-solving processes and their willingness to do so, by presenting students with problems that do not require specific curricular knowledge to solve.

PISA 2012 defines problem solving as:

an individual's capacity to engage in cognitive processing to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one's potential as a constructive and reflective citizen. (OECD, 2014, p. 30)

There are three main aspects in the problem-solving framework that guided the development of assessment items:
1. problem-solving processes—the cognitive processes involved in problem solving: exploring and understanding, representing and formulating, planning and executing, and monitoring and reflecting
2. the nature of the problem situation: interactive or static
3. the problem context: technological or not, personal or social.

This report presents the results of the PISA 2012 problem-solving assessment, which measured how well prepared today's 15-year-old students are to solve complex, unfamiliar problems that they may encounter outside curricular contexts.


Australian students' performance in problem solving

Reporting student performance
Similar to the reporting of results for other assessed domains in PISA, statistics such as mean scores, measures of the distribution of performance, and proficiency levels are used to examine student performance.

Mean scores
Mean scores provide a summary of student performance and allow comparisons of the relative standing between different countries and different subgroups.

Proficiency levels
There are six levels on the PISA problem-solving proficiency scale, ranging from Level 6 (the highest proficiency level) to Level 1 (the lowest proficiency level).
» Students achieving proficiency Level 5 or 6 are considered top performers.
» Level 2 has been defined as a baseline proficiency level: the level of achievement on the PISA scale at which students begin to demonstrate the problem-solving competencies that will enable them to actively participate in real-life situations.
» Students failing to reach Level 2 (those students placed at Level 1 or below) are considered low performers.

Results across participating countries
» Overall, Australian students performed very well in the PISA 2012 problem-solving assessment, and are well equipped to apply their skills and knowledge to solve challenging problems.
» Australia achieved a mean score of 523 points on the problem-solving assessment, which was significantly above the OECD average of 500 score points.
» Australia was one of the high-performing countries, outperformed by only seven of the 44 participating countries and economies.
» Three countries and four economic regions, all from the Asian continent, performed significantly higher than Australia. These were Singapore, Korea, Japan, Macao–China, Hong Kong–China, Shanghai–China and Chinese Taipei.
» Australia's performance was not significantly different from three countries: Canada, Finland and England.
» Australia's performance was significantly higher than 33 countries, including the United States and Ireland.
» Sixteen per cent of Australian students were top performers compared to 30% of students in Singapore and 12% of students across the OECD.
» Sixteen per cent of Australian students were low performers compared to 8% of students in Singapore and 21% of students across the OECD.

vi

Executive summary

Results across the Australian jurisdictions
» All jurisdictions achieved statistically similar scores, except for Tasmania, which performed significantly lower than all other jurisdictions.
» Six jurisdictions (Western Australia, the Australian Capital Territory, New South Wales, Victoria, Queensland and South Australia) performed at a significantly higher level than the OECD average. The Northern Territory performed at a level not significantly different to the OECD average and Tasmania performed significantly lower than the OECD average.
» The proportion of top performers in problem solving ranged from 11% in Tasmania to 19% in the Australian Capital Territory.
» The proportion of low performers in problem solving ranged from 13% in Western Australia to 27% in Tasmania.

Results for females and males
» Across OECD countries, males performed significantly higher than females (by 6 score points on average). Approximately half the countries had significant sex differences in favour of males, while 11% of the countries had significant sex differences in favour of females.
» Australian females and males performed at a level that was not significantly different in problem solving.
» In Australia, 16% of females and 18% of males were top performers compared to 10% of females and 13% of males across OECD countries.
» In Australia, 15% of females and 16% of males were low performers compared to 22% of females and 22% of males across OECD countries.
» Significant sex differences were found in only one jurisdiction, Western Australia, with males achieving 18 score points on average higher than females.
» All jurisdictions, except Tasmania, achieved a higher proportion of top-performing males compared to the OECD average (13%), while all jurisdictions achieved a higher proportion of top-performing females compared to the OECD average (10%).
» The proportion of low-performing males in Tasmania and the Northern Territory was higher than the OECD average (22%), while the proportion of low-performing females was higher in Tasmania than across the OECD (22%).

Results for geographic location of schools
The geographic location of schools was classified using the broad categories metropolitan, provincial and remote, as defined in the MCEECDYA Schools Geographic Location Classification.1
» Students attending metropolitan schools performed at a significantly higher level (528 score points on average) than students in schools from provincial areas (510 score points on average) and remote areas (475 score points on average). Students attending provincial schools significantly outperformed students attending remote schools.
» Eighteen per cent of students from metropolitan schools and 12% of students from provincial schools were top performers compared to 9% of students from remote schools.
» Fifteen per cent of students from metropolitan schools and 18% of students from provincial schools were low performers compared to 30% of students from remote schools.

1 Refer to the Reader's Guide for details about the MCEECDYA Schools Geographic Location Classification.


Results for Indigenous students
Students' Indigenous background was derived from information provided by the school.2
» Indigenous students achieved on average 454 score points in problem solving, which was significantly lower than for non-Indigenous students (526 score points on average) and for students across the OECD.
» Four per cent of Indigenous students were top performers compared to 18% of non-Indigenous students.
» Thirty-seven per cent of Indigenous students were low performers compared to 15% of non-Indigenous students.
» Indigenous females and males performed at a level that was not significantly different in problem solving.
» A small, yet similar, proportion of Indigenous females (3%) and males (4%) were top performers in problem solving, while 35% of Indigenous females and 39% of Indigenous males were low performers.

Results for socioeconomic background
Socioeconomic background in PISA is measured by an index of Economic, Social and Cultural Status (ESCS), which captures the wider aspects of a student's family and home background.3
» Students in the highest socioeconomic quartile achieved an average score of 560 points, which was 73 score points higher than students in the lowest socioeconomic quartile.
» Twenty-seven per cent of students in the highest socioeconomic quartile were top performers compared to 9% of students in the lowest socioeconomic quartile.
» Eight per cent of students in the highest socioeconomic quartile were low performers compared to 25% of students in the lowest socioeconomic quartile.

Results for immigrant background
Immigrant background was based on students' self-reports of where they and their parents were born.4
» Australian-born students achieved an average score of 523 points, which was not significantly different from the performance of foreign-born students (517 points), but significantly lower than the mean score achieved by first-generation students (531 points).
» Sixteen per cent of Australian-born students, 19% of first-generation students and 16% of foreign-born students were top performers.
» Fifteen per cent of Australian-born students, 14% of first-generation students and 18% of foreign-born students were low performers.

Results for language background
Language background was based on students' responses regarding the main language spoken at home—English or another language.5

2 The Reader's Guide provides more information about the definition of Indigenous background.
3 Refer to the Reader's Guide for details about the Economic, Social and Cultural Status index.
4 Refer to the Reader's Guide for details about the definitions of immigrant background.
5 Refer to the Reader's Guide for details about the definitions of language background.


» Students who spoke English at home performed significantly higher (average score of 526 points) than students who spoke a language other than English at home (average score of 509 points).
» Eighteen per cent of students who spoke English at home and 16% of students who spoke a language other than English at home were top performers.
» Fifteen per cent of students who spoke English at home and 21% of students who spoke a language other than English at home were low performers.

Variations in problem-solving performance between and within schools
The variation in performance within countries can be divided into a measure of performance differences between students from the same school and a measure of performance differences between groups of students from different schools.
» In Australia, the variation in performance within schools was 75%, higher than the OECD average (61%), while the variation in performance between Australian schools was 28%, lower than the OECD average (38%).
» Across the jurisdictions, the variation in problem-solving performance observed between schools ranged from 19% in South Australia to 39% in Tasmania, while the variation observed within schools ranged from 72% in Victoria to 94% in the Northern Territory.
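To make the decomposition concrete, the Python sketch below splits total score variance into between-school and within-school components. It is a minimal, unweighted illustration with made-up scores; PISA's published estimates use student weights, plausible values and replicate-based standard errors.

```python
from statistics import mean

def variance_decomposition(school_scores):
    """Split total variance into between-school and within-school parts.

    school_scores: one inner list of student scores per school.
    Simplified unweighted version of the decomposition described above.
    """
    all_scores = [s for school in school_scores for s in school]
    grand_mean = mean(all_scores)
    n = len(all_scores)

    # Between-school: squared deviations of school means from the grand mean
    between = sum(len(school) * (mean(school) - grand_mean) ** 2
                  for school in school_scores) / n
    # Within-school: squared deviations of students from their school mean
    within = sum((s - mean(school)) ** 2
                 for school in school_scores for s in school) / n
    return between, within

schools = [[520, 540, 510], [480, 470, 495], [560, 555, 570]]  # invented data
b, w = variance_decomposition(schools)
total = b + w
print(f"between: {100 * b / total:.0f}%, within: {100 * w / total:.0f}%")
```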

Variation in problem-solving performance associated with performance in mathematics, science and reading
An analysis examined how much of the variation in problem-solving performance reflected skills that were also measured in the three regular literacy domain assessments, and how much reflected skills unique to the problem-solving assessment.
» Across the OECD, 68% of the problem-solving variance reflected skills that were also measured in one of the three literacy domains regularly assessed in PISA. The remaining 32% reflected skills that were uniquely measured in the problem-solving assessment.
» In Australia, 71% of the problem-solving variance reflected skills that were also measured in one of the three literacy domains regularly assessed in PISA. The remaining 29% reflected skills that were uniquely measured in the problem-solving assessment.

Relative performance in problem solving in Australia
» Australian students performed better than expected in problem solving, based on their performance in mathematics. The difference between observed and expected performance is particularly large among students with strong performance in mathematics.

Students' strengths and weaknesses in problem solving

Focusing on the different aspects of the problem-solving framework, analyses were undertaken to identify comparative strengths and weaknesses within countries and within different social groups.


Strengths and weaknesses in the problem-solving processes
» Generally, the higher performing countries in problem solving performed relatively stronger on the exploring and understanding process and on the representing and formulating process, and relatively weaker on the planning and executing process and on the monitoring and reflecting process. (These comparisons take into account the countries' overall performance.)
» Australian students were comparatively stronger on the exploring and understanding process and on the representing and formulating process, and relatively weaker on the planning and executing process.
» Students from Western Australia performed relatively stronger on the exploring and understanding process, while students in New South Wales performed relatively weaker on this process. Students from Queensland performed relatively stronger on the representing and formulating process.
» Females' problem-solving process skills were relatively stronger on the monitoring and reflecting process, and relatively weaker on the representing and formulating process. The opposite was found for males, whose relative strength was on the representing and formulating process and whose relative weakness was on the monitoring and reflecting process.
» Indigenous students were relatively weaker on the exploring and understanding process, while non-Indigenous students were relatively stronger on this process.
» Students in the lowest socioeconomic quartile performed relatively stronger on the planning and executing process, and relatively weaker on the exploring and understanding process. The reverse was found for students in the highest socioeconomic quartile.

Strengths and weaknesses in the nature of the problem situation
» No clear pattern of relative strength or weakness on static or interactive items emerged by countries' overall performance in problem solving.
» In Tasmania, students performed relatively stronger on the static tasks, while in Queensland students performed relatively stronger on the interactive tasks.
» No relative strengths or weaknesses on static or interactive tasks were found across the different social groups.

Strengths and weaknesses on the response formats
» Generally, both the higher performing and the lower performing countries in problem solving performed relatively stronger on the selected-response format items and weaker on the constructed-response format items.
» In Australia, students performed relatively stronger on the constructed-response format items and weaker on the selected-response format items.
» Tasmanian students performed relatively stronger on the constructed-response format items and Queensland students performed relatively stronger on the selected-response format items.
» In Australia, females performed stronger than males on the constructed-response format items.
» Australian-born students performed relatively stronger on the constructed-response format items; a similar but weaker effect was found for students who spoke English at home. Foreign-born students were relatively stronger on the selected-response format items, as were, to a lesser extent, students who spoke a language other than English at home.


Australian students' perseverance and openness in problem solving

The PISA definition of problem solving acknowledges that solving a problem relies on motivational and affective factors. In PISA 2012, students completed a questionnaire that collected information about their engagement with and at school, their drive, and the beliefs they hold about themselves as learners. This included measures of perseverance and openness in problem solving.

Perseverance in problem solving
In PISA, perseverance relates to a student's willingness to work on problems.
» Australian students reported a significantly higher level of perseverance than the OECD average.
» Australian males reported significantly higher levels of perseverance than Australian females.
» All jurisdictions reported higher mean scores on the perseverance index than the OECD average, with students from the Australian Capital Territory reporting the highest levels of perseverance and students from the Northern Territory the lowest.
» Non-Indigenous students, students from metropolitan schools and students in the highest socioeconomic quartile reported higher levels of perseverance than their counterparts.

Students' openness to experience in problem solving
Openness relates to a student's willingness to engage with problems and to be open to new challenges in order to be able to solve complex problems and situations.
» Australian students reported a lower level of openness to problem solving than the OECD average.
» Australian males reported significantly higher levels of openness to problem solving than Australian females.
» The Australian Capital Territory was the only jurisdiction with an average score higher than the OECD average. The Northern Territory had the same index score as the OECD average, while all other jurisdictions had a lower index score. The Australian Capital Territory had the highest mean score on the openness to problem-solving index, while South Australia and Queensland had the lowest mean scores on this index.
» Similar to the findings on perseverance, non-Indigenous students, students from metropolitan schools and students in the highest socioeconomic quartile reported higher levels of openness to problem solving than their counterparts.


Figures

Figure 1.1 Countries participating in PISA 2012
Figure 2.1 Main features of the PISA 2012 problem-solving assessment framework
Figure 2.2 The relationship between items and students on the PISA problem-solving scale
Figure 2.3 Summary descriptions of the six levels on the problem-solving proficiency scale
Figure 2.4 Map of selected problem-solving items, illustrating the proficiency level and assigned problem-solving process and nature of the problem
Figure 3.1 Mean scores and distribution of students' performance on the problem-solving scale
Figure 3.2 Percentage of students across the problem-solving proficiency scale, by country
Figure 3.3 Mean scores and differences between sexes in students' performance on the problem-solving scale, by country
Figure 3.4 Percentage of students across the problem-solving proficiency scale by sex, for Australia and the OECD average
Figure 3.5 Mean scores and distribution of students' performance on the problem-solving scale, by jurisdiction
Figure 3.6 Percentage of students across the problem-solving proficiency scale, by jurisdiction
Figure 3.7 Mean scores and differences in students' performance on the problem-solving scale, by jurisdiction and sex
Figure 3.8 Percentage of students across the problem-solving proficiency scale, by jurisdiction and sex
Figure 3.9 Mean scores and distribution of students' performance on the problem-solving scale, by geographic location
Figure 3.10 Percentage of students across the problem-solving proficiency scale, by geographical location
Figure 3.11 Mean scores and distribution of students' performance on the problem-solving scale, by Indigenous background
Figure 3.12 Percentage of students across the problem-solving proficiency scale, by Indigenous background
Figure 3.13 Mean scores and distribution of students' performance on the problem-solving scale, by Indigenous background and sex
Figure 3.14 Percentage of students across the problem-solving proficiency scale, by Indigenous background and sex
Figure 3.15 Mean scores and distribution of students' performance on the problem-solving scale, by socioeconomic background
Figure 3.16 Percentage of students across the problem-solving proficiency scale, by socioeconomic background
Figure 3.17 Mean scores and distribution of students' performance on the problem-solving scale, by immigrant background
Figure 3.18 Percentage of students across the problem-solving proficiency scale, by immigrant background
Figure 3.19 Mean scores and distribution of students' performance on the problem-solving scale, by language background
Figure 3.20 Percentage of students across the problem-solving proficiency scale, by language background
Figure 3.21 Variation in problem-solving performance between and within schools, by country
Figure 3.22 Variation in problem-solving performance between and within schools, by jurisdiction
Figure 3.23 Variation in problem-solving performance associated with performance in mathematics, science and reading, by country
Figure 3.24 Variation in problem-solving performance associated with performance in mathematics, science and reading, by jurisdiction
Figure 3.25 Relative performance in problem solving at different levels on the mathematics scale for Australia, England and the United States
Figure 4.1 Relative strengths and weaknesses in problem-solving processes, by countries
Figure 4.2 Relative strengths and weaknesses in problem-solving processes, by jurisdictions
Figure 4.3 Relative strengths and weaknesses in problem-solving processes, by different social groups for Australia
Figure 4.4 Relative strengths and weaknesses on problem-solving tasks by the nature of the problem situation, across countries
Figure 4.5 Relative strengths and weaknesses on problem-solving tasks by the nature of the problem situation, across jurisdictions
Figure 4.6 Relative strengths and weaknesses on problem-solving tasks by response format, across countries
Figure 4.7 Relative strengths and weaknesses on problem-solving tasks by response format, across jurisdictions
Figure 4.8 Relative strengths and weaknesses on problem-solving tasks by response format, across different social groups
Figure 4.9 Joint analysis of strengths and weaknesses, by nature of the problem and by process, for countries
Figure 4.10 Joint analysis of strengths and weaknesses, by nature of the problem and by process, for Australian jurisdictions and social groups
Figure 5.1 Relationship between Australian students' perseverance and problem-solving performance
Figure 5.2 Relationship between Australian students' openness to problem solving and problem-solving performance


Tables

Table 1.1 Number of Australian PISA 2012 schools, by jurisdiction and school sector
Table 1.2 Number of Australian PISA 2012 students, by jurisdiction and school sector
Table 2.1 Classification of problem-solving items, by cognitive process and the nature of the problem situation
Table 3.1 Multiple comparisons of mean problem-solving performance, by jurisdiction
Table 3.2 Relationship between performance in problem solving, mathematics, science and reading across the OECD
Table 3.3 Relationship between performance in problem solving, mathematics, science and reading for Australia
Table 5.1 Students' perseverance in problem solving for Australia and comparison countries
Table 5.2 Index of perseverance for Australia and comparison countries
Table 5.3 Index of perseverance for Australia and comparison countries, by sex
Table 5.4 Students' perseverance in problem solving, by sex, jurisdiction, geographic location, Indigenous background and socioeconomic background
Table 5.5 Students' openness to problem solving for Australia and comparison countries
Table 5.6 Index of openness to problem solving for Australia and comparison countries
Table 5.7 Index of openness to problem solving for Australia and comparison countries, by sex
Table 5.8 Students' openness to problem solving, by sex, jurisdiction, geographic location, Indigenous background and socioeconomic background

Reader’s guide

Target population for PISA
This report uses '15-year-olds' as shorthand for the PISA target population. In practice, the target population was students who were aged between 15 years and 3 (complete) months and 16 years and 2 (complete) months at the beginning of the assessment period, and who were enrolled in an educational institution that they were attending full-time or part-time. Since the largest part (but not all) of the PISA target population is made up of 15-year-olds, the target population is often referred to as 15-year-olds.
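As a rough illustration of the age rule, the Python sketch below checks whether a birth date falls in the eligible window at the start of testing. The dates used are hypothetical and the whole-month arithmetic is a simplification of the operational sampling rules.

```python
from datetime import date

def is_pisa_eligible(birth_date: date, assessment_start: date) -> bool:
    """Check the age rule: between 15 years 3 months and 16 years 2 months
    (complete) at the beginning of the assessment period (simplified)."""
    def complete_months_between(earlier: date, later: date) -> int:
        months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
        if later.day < earlier.day:  # last month not yet complete
            months -= 1
        return months

    age_in_months = complete_months_between(birth_date, assessment_start)
    return 15 * 12 + 3 <= age_in_months <= 16 * 12 + 2

# Hypothetical example: testing begins 30 July 2012
print(is_pisa_eligible(date(1996, 9, 12), date(2012, 7, 30)))  # True (15y 10m)
```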

OECD average
An OECD average was calculated for most indicators in this report and is presented for comparative purposes. The OECD average represents OECD countries as a single entity and each country contributes to the average with equal weight. The OECD average is equivalent to the arithmetic mean of the respective country statistics.

Rounding of figures
Because of rounding, some numbers in tables may not exactly add to the totals reported. Totals, differences and averages are always calculated on the basis of exact numbers and are rounded only after calculation. When standard errors have been rounded to one or two decimal places and the value 0.0 or 0.00 is shown, this does not imply that the standard error is zero, but that it is smaller than 0.05 or 0.005 respectively.

Confidence intervals and standard errors
In this and other publications, student achievement is often described by a mean score. For PISA, each mean score is calculated from the sample of students who undertook the PISA assessment and is referred to as the sample mean. These sample means are an approximation of the actual mean score (known as the population mean) that would have been obtained had all students in a country actually sat the PISA assessment. Since the sample mean is just one point along the range of student achievement scores, more information is needed to gauge whether the sample mean is an underestimation or an overestimation of the population mean. Calculating a confidence interval assists in assessing how precisely the sample mean estimates the population mean. Confidence intervals provide a range of scores within which we are confident that the population mean actually lies.

In this report, sample means are presented with an associated standard error. The confidence interval, which can be calculated using the standard error, indicates that there is a 95% chance that the actual population mean lies within plus or minus 1.96 standard errors of the sample mean. The term significantly is used throughout this report to describe a difference that meets the requirements of statistical significance at the 0.05 level, indicating that the difference would be found in at least 95 analyses out of 100 if the comparison were to be repeated.
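A minimal sketch of these calculations, using a hypothetical standard error for Australia (the published tables give the actual values): the confidence interval is the sample mean plus or minus 1.96 standard errors, and a simple two-sample z-test flags a significant difference. PISA's actual comparisons against the OECD average also adjust for the fact that each country contributes to that average, which this sketch ignores.

```python
import math

def confidence_interval(sample_mean: float, standard_error: float,
                        z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval: sample mean plus or minus 1.96 standard errors."""
    return sample_mean - z * standard_error, sample_mean + z * standard_error

def significantly_different(mean_a: float, se_a: float,
                            mean_b: float, se_b: float, z: float = 1.96) -> bool:
    """Simplified two-sample z-test on independent estimates."""
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(mean_a - mean_b) > z * se_diff

# Australia's problem-solving mean of 523 with a hypothetical SE of 1.9
print(confidence_interval(523, 1.9))                  # about (519.3, 526.7)
print(significantly_different(523, 1.9, 500, 0.7))    # True: above the OECD average
```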


Mean performance
Mean scores provide a summary of students' performance and allow comparisons of the relative standing between different countries and different subgroups. In addition, the distribution of scores (reported at the 5th, 10th, 25th, 75th, 90th and 95th percentiles) is reported in graphical format.
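The percentiles reported in those graphs can be computed as sketched below. This uses one common linear-interpolation definition and invented scores; PISA itself computes weighted percentiles from plausible values.

```python
def percentile(scores: list[float], p: float) -> float:
    """Linear-interpolation percentile (one of several common definitions)."""
    s = sorted(scores)
    rank = (len(s) - 1) * p / 100
    lower = int(rank)
    frac = rank - lower
    upper = min(lower + 1, len(s) - 1)
    return s[lower] + frac * (s[upper] - s[lower])

scores = [412, 455, 468, 501, 523, 534, 559, 587, 610, 642]  # made-up scores
for p in (5, 10, 25, 75, 90, 95):  # the percentiles reported in this report
    print(f"P{p}: {percentile(scores, p):.0f}")
```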

Proficiency levels
To summarise data from responses to the PISA assessment, performance scales were constructed for each assessment domain. The scales are used to describe the performance of students in different countries, including in terms of described performance levels. The described performance levels are known as proficiency levels. This publication uses top performers as shorthand for those students proficient at Level 5 or 6 of the assessment and low performers for those students proficient below Level 2 of the assessment.

PISA indices
The measures that are presented as indices summarise students' responses to a series of related items constructed on the basis of previous research. In describing students in terms of each characteristic (for example, instrumental motivation to learn mathematics, or disciplinary climate), scales were constructed on which the average OECD student was given an index value of 0, and about two-thirds of the OECD population were given values between –1 and +1 (that is, the index has a mean of 0 and a standard deviation of 1). Negative values on an index do not necessarily imply that students responded negatively to the underlying items. Rather, students with a negative score responded less positively than students on average across OECD countries.

The indices are based on four categories for each item, whereas the reported percentages are collapsed into two categories. Due to this and the weighting of responses, a ranking based on the value of the indices will sometimes not exactly correspond to one based, say, on the average of the percentages.

Information about school characteristics was collected through the school questionnaire, which was completed by the principal. In this report, responses from principals were weighted so that they are proportionate to the number of 15-year-olds enrolled in the school.
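A toy version of this standardisation in Python: raw index values are rescaled to a mean of 0 and a standard deviation of 1. In PISA the reference distribution is the equally weighted population of OECD students and the scaling comes from item response models, so this shows only the arithmetic idea, not the OECD procedure.

```python
from statistics import mean, pstdev

def standardise_index(raw_values: list[float]) -> list[float]:
    """Rescale raw values so the group at hand has mean 0 and SD 1."""
    m = mean(raw_values)
    sd = pstdev(raw_values)
    return [(v - m) / sd for v in raw_values]

raw = [2.0, 2.5, 3.0, 3.5, 4.0]  # hypothetical raw questionnaire index scores
print(standardise_index(raw))    # mean 0, SD 1, as PISA indices are scaled
```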

Bonferroni correction
The Bonferroni correction states that if an experimenter is testing n independent hypotheses on a set of data, then the statistical significance level that should be used for each hypothesis separately is 1/n times what it would be if only one hypothesis was tested. The Bonferroni correction was used in the multiple comparison tables in earlier PISA publications (for PISA 2000 and PISA 2003). It is widely acknowledged that there are technical issues with using the Bonferroni correction for such a large group of countries and its results are conservative. As such, the Bonferroni correction has not been used in PISA 2012.
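The correction itself is a one-line rule: with n comparisons, each test is run at 1/n of the overall significance level. A tiny sketch (the number of comparisons here is illustrative):

```python
def bonferroni_alpha(overall_alpha: float, n_tests: int) -> float:
    """Per-comparison significance level under the Bonferroni correction."""
    return overall_alpha / n_tests

# With, say, 43 pairwise country comparisons the per-test level becomes very
# strict, which is why the correction is considered conservative.
print(bonferroni_alpha(0.05, 43))  # about 0.00116
```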

Correlational analysis
An analysis of the correlation between two variables can be used to investigate the association between them. If there is a significant positive correlation, it does not imply that one factor depends on the other or that there is a cause–effect relationship between them; it simply means that they occur together. Further analysis and investigation are needed to determine the nature of the association. The most commonly used measure is the Pearson correlation coefficient, which is abbreviated as r.


The correlation coefficient measures the strength of the association between two variables. Values of the correlation coefficient can range from –1 (a negative correlation—as one value increases the other value decreases) to +1 (a positive correlation—as one value increases the other value increases). In this report, as a general rule, the correlation coefficients have been interpreted as follows:

Correlation coefficient range    Strength of association
r < –0.50                        strong/high negative association
–0.50 ≤ r < –0.30                moderate/medium negative association
–0.30 ≤ r < –0.10                small/low negative association
–0.10 ≤ r ≤ +0.10                very small or no association
+0.10 < r ≤ +0.30                small/low positive association
+0.30 < r ≤ +0.50                moderate/medium positive association
r > +0.50                        strong/high positive association
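A small Python sketch of both steps: computing r and mapping it to the bands above. The handling of values falling exactly on a band boundary is an assumption (the original table's inequalities are ambiguous), and the scores are invented.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def strength(r: float) -> str:
    """Map r to the interpretation bands used in this report."""
    size = abs(r)
    if size > 0.50:
        label = "strong/high"
    elif size > 0.30:
        label = "moderate/medium"
    elif size > 0.10:
        label = "small/low"
    else:
        return "very small or no association"
    sign = "positive" if r > 0 else "negative"
    return f"{label} {sign} association"

r = pearson_r([500, 520, 540, 560], [495, 530, 525, 570])  # invented scores
print(round(r, 2), strength(r))  # 0.92 strong/high positive association
```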

Definition of problem solving
The term creative problem solving emphasises the difference between the PISA 2003 assessment of cross-curricular problem solving and the PISA 2012 assessment, which examined the necessary role creativity plays in real problem solving, i.e., where problems are set in novel situations and the way of achieving a goal is not immediately obvious. For consistency with the corresponding OECD report, the title of this report refers to creative problem solving, while the term problem solving is used throughout the report.

Definitions of background characteristics
There are a number of definitions used in this report that are particular to the Australian context, as well as many that are relevant to the international context. This section provides an explanation for those that are not self-evident.

Indigenous background
Indigenous background is derived from information provided by the school, which was taken from school records. Students were identified as being of Australian Aboriginal or Torres Strait Islander descent. For the purposes of this publication, data for the two groups are presented together under the term Indigenous Australian students.

Socioeconomic background
Socioeconomic background is based on the answers of students to questions about their parents' education, parents' occupation and items in the home. Two measures are used by the OECD to represent elements of socioeconomic background. One is the highest level of the father's and mother's occupation (known as HISEI), which is coded in accordance with the International Labour Organization's International Standard Classification of Occupations. The other measure is the index of Economic, Social and Cultural Status (ESCS), which was created to capture the wider aspects of a student's family and home background. The ESCS is based on three indices: the highest occupational status of parents (HISEI); the highest educational level of parents in years of education (PARED); and home possessions (HOMEPOS). The index of home possessions (HOMEPOS) comprises all items on the indices of family wealth (WEALTH), cultural resources (CULTPOSS), access to home educational and cultural resources (HEDRES), and books in the home.


Geographic location
In Australia, participating schools were coded with respect to the MCEECDYA Schools Geographic Location Classification. For the analysis in this report, only the broadest categories are used:
» Metropolitan—including mainland capital cities or major urban districts with a population of 100,000 or more (for example, ACT-Queanbeyan, Cairns, Geelong, Hobart)
» Provincial—including provincial cities and other non-remote provincial areas (for example, Darwin, Ballarat, Bundaberg, Geraldton, Tamworth)
» Remote—including remote areas and very remote areas. Remote: very restricted accessibility of goods, services and opportunities for social interaction (for example, Coolabah, Mallacoota, Capella, Mt Isa, Port Lincoln, Port Hedland, Swansea, Alice Springs). Very remote: very little accessibility of goods, services and opportunities for social interaction (for example, Bourke, Thursday Island, Yalata, Condingup, Nhulunbuy).

Immigrant background
For the analysis in this report, immigrant background has been defined by the following categories:
» Australian-born students—students born in Australia with both parents born in Australia
» First-generation students—students born in Australia with at least one parent born overseas
» Foreign-born students—students born overseas with both parents also born overseas.
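These three categories amount to a simple decision rule on birthplaces, sketched below. The final 'unclassified' branch is an assumption for combinations the definitions above do not cover (for example, a student born overseas with one Australian-born parent).

```python
def immigrant_background(born_in_australia: bool,
                         parents_born_in_australia: tuple[bool, bool]) -> str:
    """Classify a student into the report's immigrant-background categories."""
    if born_in_australia and all(parents_born_in_australia):
        return "Australian-born"
    if born_in_australia:
        return "first-generation"   # at least one parent born overseas
    if not any(parents_born_in_australia):
        return "foreign-born"       # student and both parents born overseas
    return "unclassified"           # falls outside the three definitions

print(immigrant_background(True, (True, False)))    # first-generation
print(immigrant_background(False, (False, False)))  # foreign-born
```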

Language background
The language spoken at home indicates whether a student has a language background other than English. The question asked about the language spoken at home most of the time.

Sample surveys
PISA is a sample survey and, as such, a random sample of students was selected to represent the population of 15-year-old students. The PISA sample was designed as a two-stage stratified sample. The first stage involved the sampling of schools in which 15-year-old students could be enrolled. The second stage of the selection process sampled students within the sampled schools. The following variables were used in the stratification of the school sample: jurisdiction; school sector; geographic location (based on the MCEECDYA's Schools Geographic Location Classification); sex of students at the school; a socioeconomic background variable (based on the Australian Bureau of Statistics' Socio-Economic Indexes for Areas—SEIFA; the SEIFA consists of four indexes that rank geographic areas across Australia in terms of their relative socioeconomic advantage and disadvantage); and an achievement variable (based on a Year 9 NAPLAN numeracy school-level score).
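A heavily simplified Python sketch of the two stages, with invented school names and enrolments: schools are drawn with probability proportional to size (here with replacement, for brevity), then students are drawn within each sampled school. The operational design instead uses systematic PPS selection without replacement within the strata listed above.

```python
import random

def sample_schools_pps(schools: dict[str, int], n_schools: int) -> list[str]:
    """First stage: draw schools with probability proportional to their
    enrolment of 15-year-olds (with replacement, for brevity)."""
    names = list(schools)
    weights = [schools[name] for name in names]
    return random.choices(names, weights=weights, k=n_schools)

def sample_students(enrolled: list[str], n_students: int = 35) -> list[str]:
    """Second stage: a simple random sample of students within a school."""
    return random.sample(enrolled, min(n_students, len(enrolled)))

# Hypothetical enrolments of 15-year-olds per school
schools = {"School A": 180, "School B": 60, "School C": 240}
print(sample_schools_pps(schools, n_schools=2))

students = [f"s{i}" for i in range(180)]  # hypothetical roll for one school
print(len(sample_students(students)))     # 35
```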

Online statistical tables
The data underlying the figures in this report are provided in Excel spreadsheets and are available from the national PISA website: www.acer.edu.au/ozpisa/reports/.

Acknowledgements
Information included in this report about the problem-solving framework, proficiency scale, sample items and discussion about the results—including some of the international tables—has been assembled from the OECD problem-solving report (OECD, 2014).


CHAPTER 1

Introduction

In every PISA survey, students from every participating country are assessed in the core domains of mathematics, science and reading literacy. In addition to assessing these literacy domains, the OECD proposes additional assessments in other domains. In PISA 2003, cross-disciplinary problem solving was first assessed, as a paper-based assessment included as a core domain. In PISA 2012, problem solving was once again assessed, this time as an optional computer-based assessment. The focus of the PISA 2012 assessment of problem solving was: Are today's 15-year-old students acquiring the problem-solving skills that will prepare them to meet the challenges of the future? This report describes how PISA defines and measures problem solving, and presents the performance of Australian students in problem solving.

The main goals of PISA

PISA seeks to measure how well young adults, at age 15¹ and near the end of compulsory schooling in most participating education systems, are prepared to use knowledge and skills in particular areas to meet real-life challenges. This is in contrast to assessments that seek to measure the extent to which students have mastered a specific curriculum. PISA's orientation reflects a change in the goals and objectives of curricula, which increasingly address how well students are able to apply what they learn at school.

As part of the PISA process, students complete an assessment of mathematical literacy, scientific literacy and reading literacy, as well as an extensive background student questionnaire. School principals complete a school questionnaire describing the context of education at their school, including the level of resources in the school and the qualifications of staff. From this, the reporting of PISA findings is able to focus on:
» How well are young adults prepared to meet the challenges of the future? Can they analyse, reason and communicate their ideas effectively? What skills do they possess that will facilitate their capacity to adapt to rapid societal change?
» Are some ways of organising schools or school learning more effective than others?
» What influence does the quality of school resources have on student outcomes?
» What educational structures and practices maximise the opportunities of students from disadvantaged backgrounds? How equitable is the provision of education within a country or across countries?

1 Refer to the Reader's Guide for more information about the target population for PISA.

The importance of assessing problem solving

The ability to solve complex problems enables an individual to adapt to changes in society or the environment and to learn from their mistakes. Individuals who are proficient problem solvers are more likely to be personally fulfilled, have greater opportunities for employment and contribute to economic growth (Autor, Levy & Murnane, 2003; Hanushek, Jamison, Jamison & Woessmann, 2008).

Every individual, regardless of who they are and what they do, will encounter problems that they will need to solve. Some problems will be easy to solve, while others will be more complex, requiring a number of steps and strategies. For some problems, the skills and knowledge acquired during school will be used; for example, managing a credit card, applying measurement conversions when working with a recipe, or making an informed decision about climate change. However, there are other problems that are not related to the skills or knowledge acquired in school and that individuals will be unfamiliar with. These types of problems require a different set of skills: existing knowledge needs to be reorganised and combined with new knowledge using a range of reasoning skills. It is these skills that the PISA 2012 computer-based assessment measured.

The PISA 2012 assessment of problem solving assessed students' general reasoning skills, their ability to regulate problem-solving processes and their willingness to do so, by presenting students with problems that can be solved without domain-specific knowledge.

What participants did

Australian students who participated in PISA 2012 completed a paper-based assessment booklet that contained questions assessing mathematical literacy and questions assessing reading literacy, scientific literacy or both, together with a student questionnaire. All students in the 44 countries that opted to participate in problem solving completed a computer-based assessment of one or more of problem solving, mathematical literacy and reading literacy.

Cognitive assessment
In PISA 2012, the majority of the assessment was devoted to mathematical literacy, with scientific literacy and reading literacy assessed to a lesser extent. Participating students each responded to a two-hour cognitive assessment, which took place in the morning. After a lunch break, all students2 completed a 40-minute computer-based cognitive assessment. Students completed a practice test before responding to one of 24 forms. Each form consisted of two clusters (of 20 minutes each) allocated according to a rotated test design among four clusters of computer problem-solving items, four clusters of computer mathematical literacy items and two clusters of computer reading literacy items.

In the cognitive assessments, students were presented with units that required them to construct responses to a stimulus and a series of questions (or items). Context was represented in each unit by the stimulus material, which was typically a brief written passage or text accompanied by a table, chart, graph, photograph or diagram. Each unit then contained several items related to the stimulus material. A range of item-response formats was employed to cover the full range of cognitive abilities and knowledge identified in the assessment frameworks. There were five types of item format: multiple-choice and complex multiple-choice items, in which students selected from among several possible answers; closed constructed-response items, in which students were required to provide an unambiguous single word, a number or a diagrammatic answer; and open constructed-response and short-response items, in which students provided a written response, showing the methods and thought processes they had used.

2 In other participating countries, the computer-based assessment was only administered to a subsample of the students who had been assessed in PISA.
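The rotated design can be pictured as choosing 24 of the possible ordered two-cluster forms. The sketch below only enumerates that combinatorial space, with assumed cluster labels (CP, CM, CR); the actual allocation of clusters to the 24 forms is specified in the PISA 2012 technical documentation and is not reproduced here.

```python
from itertools import permutations

# Cluster labels follow the description above: four problem-solving (CP),
# four mathematics (CM) and two reading (CR) computer-based clusters.
clusters = ["CP1", "CP2", "CP3", "CP4",
            "CM1", "CM2", "CM3", "CM4",
            "CR1", "CR2"]

# Every ordered pair of distinct clusters is a candidate two-cluster form;
# the operational design takes a balanced subset of 24 of these so that
# clusters appear comparably often in each position.
candidate_forms = list(permutations(clusters, 2))
print(len(candidate_forms))   # 90 candidate forms in total
print(candidate_forms[:3])    # e.g. ('CP1', 'CP2'), ('CP1', 'CP3'), ...
```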


Context questionnaires
PISA 2012 collected contextual information from students and principals. The internationally standardised student questionnaire sought information on students and their family background, aspects of motivation, learning and instruction in mathematics, and the context of instruction, including instructional time and class size. Students were randomly assigned one of three questionnaires. Each questionnaire comprised questions about the student and their family background and a selection of questions from the remaining pool of questions. Students were allowed up to 40 minutes to complete the student questionnaire, which they responded to after the completion of the paper-based assessment and before the completion of the computer-based assessment.

The principal (or the principal's delegate) completed the school questionnaire. It collected descriptive information about the school, including the quality of the school's human and material resources, decision-making processes, instructional practices, and school and classroom climate. In Australia, the school questionnaire was administered online and took around 30 minutes to complete.

Time of testing
PISA standards stipulate that testing should take place in the second half of the academic year. In Australia, the PISA assessment took place in a six-week period from late July to early September 2012. For most countries in the Northern Hemisphere, the testing period took place between March and May 2012. Together with appropriate application of the student age definition, this resulted in the Australian students being at both a comparable age and a comparable stage in the school year to those in the Northern Hemisphere who had been tested earlier in 2012.

Participants in PISA 2012

Countries
Although PISA was originally an OECD assessment created by the governments of OECD countries, it has become a major assessment in many regions and countries around the world. Since the first assessment in 2000, when PISA was implemented in 32 OECD countries, it has expanded to include non-OECD countries, referred to as partner countries and economies3. Sixty-five countries and economies participated in PISA 2012, including 34 OECD countries and 31 partner countries or economies.4 Forty-four countries participated in the optional computer-based assessment, including 28 OECD and 16 partner countries. Around 85,000 students were assessed in problem solving, representing around 19 million 15-year-olds in the schools of participating countries. Figure 1.1 lists the countries that participated in PISA 2012 and identifies those that participated in the PISA computer-based assessment (denoted with an asterisk).

3 Economic regions are required to meet the same PISA technical standards as other participating countries. Results for an economic region are only representative of the region assessed and are not representative of the country.
4 Although Chinese Taipei, Hong Kong–China, Macao–China and Shanghai–China are economic regions, for convenience they will be referred to throughout this report as countries.


OECD countries:
Australia*, Austria*, Belgium*, Canada*, Chile*, Czech Republic*, Denmark*, Estonia*, Finland*, France*, Germany*, Greece, Hungary*, Iceland, Ireland*, Israel*, Italy*, Japan*, Korea*, Luxembourg, Mexico, Netherlands*, New Zealand, Norway*, Poland*, Portugal*, Slovak Republic*, Slovenia*, Spain*, Sweden*, Switzerland, Turkey*, United Kingdom*5, United States*

Partner countries/economies:
Albania, Argentina, Brazil*, Bulgaria*, Chinese Taipei*, Colombia*, Costa Rica, Croatia*, Cyprus*, Hong Kong–China*, Indonesia, Jordan, Kazakhstan, Latvia, Liechtenstein, Lithuania, Macao–China*, Malaysia*, Montenegro*, Peru, Qatar, Romania, Russian Federation*, Serbia*, Shanghai–China*, Singapore*, Thailand, Tunisia, United Arab Emirates*, Uruguay*, Vietnam

Note: Those countries that participated in the computer-based assessment of problem solving in PISA 2012 are denoted with an asterisk.

Figure 1.1 Countries participating in PISA 2012

5 Only England participated in the computer-based assessment.

In this report, the OECD average refers to the average scores for students in the 28 OECD countries who participated in the computer-based assessment of problem solving.

Schools and students
The target population for PISA is students who are 15 years old and enrolled in an educational institution, either full- or part-time, at the time of testing. In most countries, 150 schools and 35 students in each school were randomly selected to participate in PISA. In some countries, including Australia, a larger sample of schools and students participated.

The Australian PISA 2012 school sample consisted of 775 schools (Table 1.1). The sample was designed so that schools were selected with a probability proportional to the enrolment of 15-year-olds in each school. Stratification of the sample ensured that the PISA sample was nationally representative of the 15-year-old population. Several variables were used in the stratification of the school sample, including jurisdiction6, school sector, geographic location, sex of students at the school, a socioeconomic background variable7 and an achievement variable8.

Of the Australian PISA schools, 85% were coeducational. Eight per cent of schools catered only for female students, while 7% catered only for male students. Of the PISA schools that were single-sex schools, 2% (17 schools) were government schools, almost 8% (62 schools) were Catholic schools and 4% (34 schools) were independent schools.

The Australian PISA 2012 sample of 14,481 students, whose results feature in the national and international reports, was drawn from all jurisdictions and school sectors according to the distributions shown in Table 1.2.

Table 1.1 Number of Australian PISA 2012 schools, by jurisdiction and school sector

Jurisdiction   Government   Catholic   Independent   Total
ACT                    26          8            11      45
NSW                   113         43            28     184
VIC                    77         31            26     134
QLD                    83         24            25     132
SA                     56         18            18      92
WA                     51         18            21      90
TAS                    47         12            12      71
NT                     17          5             5      27
Australia             470        159           146     775

Note: The numbers are based on unweighted data.

Table 1.2 Number of Australian PISA 2012 students, by jurisdiction and school sector

                ACT     NSW     VIC     QLD      SA      WA     TAS     NT    Total
Government
  N students    501    2133    1362    1769     931    1020     869    256     8841
  Weighted N   2386   47964   35446   30539   10268   15363    3842   1341   147149
Catholic
  N students    209     828     571     497     306     330     235     81     3057
  Weighted N   1500   19389   15636   10200    3691    5742    1221    210    57589
Independent
  N students    198     486     473     456     336     388     154     92     2583
  Weighted N    827   12155   11312   10044    3668    6431     832    703    45972
Australia
  N students    908    3447    2406    2722    1573    1738    1258    429    14481
  Weighted N   4713   79508   62394   50783   17627   27536    5895   2254   250710

Notes: N students is based on the achieved (unweighted) sample. Weighted N is based on the number of students in the target population represented by the sample.

6 Throughout this report, the Australian states and territories will be collectively referred to as jurisdictions.
7 Based on the Australian Bureau of Statistics' Socio-Economic Indexes for Areas (SEIFA).
8 Based on a NAPLAN numeracy school-level score.


How results are reported

International comparative studies have provided an arena to observe the similarities and differences between educational policies and practices. They enable researchers and others to observe what is possible for students to achieve and what environment is most likely to facilitate their learning. PISA provides regular information on educational outcomes within and across countries by providing insight into the range of skills and competencies, in different assessment domains, that are considered to be essential to an individual's ability to participate in and contribute to society.

Similar to other international studies, PISA results are reported as mean scores that indicate average performance and various statistics that reflect the distribution of performance. School and student variables further enhance the understanding of student performance. PISA also attaches meaning to the performance scale by providing a profile of what skills and knowledge students have achieved. The performance scale is divided into levels of difficulty, referred to as proficiency levels. Students at a particular level not only typically demonstrate the knowledge and skills associated with that level, but also the proficiencies required at lower levels. For the domain of problem-solving literacy, six proficiency levels have been defined to describe the scale.
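Mechanically, assigning a student to a proficiency level is a matter of comparing their scale score against the levels' lower boundaries, as in the sketch below. The cut points shown are placeholders for illustration; the actual problem-solving boundaries are published in the OECD report (OECD, 2014).

```python
def proficiency_level(score: float, lower_boundaries: list[float]) -> str:
    """Map a scale score to a proficiency level, given the lower boundaries
    of Levels 1-6 in ascending order (values here are hypothetical)."""
    level = sum(1 for cut in lower_boundaries if score >= cut)
    return f"Level {level}" if level >= 1 else "Below Level 1"

# Hypothetical lower boundaries for Levels 1 to 6 on a PISA-like scale
cuts = [358, 423, 488, 553, 618, 683]
print(proficiency_level(523, cuts))  # Level 3
print(proficiency_level(400, cuts))  # Level 1 (below the Level 2 baseline)
```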

Organisation of the report

This report focuses on the results for Australian students in problem solving in PISA 2012. Chapter 2 provides a brief overview of the PISA problem-solving framework. Chapter 3 presents results on the performance of Australian students in problem solving. Results are compared to other participating countries and economies, across jurisdictions and for different social groups. Chapter 4 presents a discussion of students' strengths and weaknesses in performing certain types of tasks. The final chapter examines students' attitudes related to problem solving.

Further information

This report focuses on the computer-based assessment of problem solving, which was offered for the first time in PISA 2012. Details about the PISA 2012 paper-based assessment of mathematical literacy, scientific literacy and reading literacy, and the computer-based assessment of mathematical literacy and reading literacy, can be found in the national report, PISA 2012: How Australia measures up. Further information about PISA in Australia is available from the national PISA website: www.acer.edu.au/ozpisa/.


CHAPTER 2

The assessment of problem solving

The aim of the PISA 2012 problem-solving assessment was to assess an individual's problem-solving competency.1 This chapter describes the framework underlying the PISA 2012 computer-based problem-solving assessment. It begins with a definition of problem solving, followed by a discussion of the main elements of the problem-solving assessment framework and an overview of the general structure of the assessment. The last section presents examples of problem-solving items from the PISA 2012 assessment.2

How is problem solving defined in PISA?

In PISA 2012, problem solving has been defined as:

an individual's capacity to engage in cognitive processing to understand and resolve problem situations where a method of solution is not immediately obvious. It includes the willingness to engage with such situations in order to achieve one's potential as a constructive and reflective citizen (OECD, 2014, p. 30).

How is problem solving assessed in PISA?

The main features of the PISA 2012 problem-solving assessment framework are shown in Figure 2.1. The PISA framework for problem solving is organised into three aspects: the nature of the problem situation, the problem-solving processes and the problem context.

1 For ease of reading, problem-solving competency will be referred to as problem solving.
2 Details about the problem-solving framework and sample items have been taken from OECD (2014).

NATURE OF THE PROBLEM SITUATION: Is all the information needed to solve the problem disclosed at the outset?
» Static: all relevant information for solving the problem is disclosed at the outset.
» Interactive: not all information is disclosed; some information can be uncovered by exploring the problem situation.

PROBLEM-SOLVING PROCESSES: What are the main cognitive processes involved in the particular task?
» Exploring and understanding: building mental representations of each of the pieces of information presented in the problem.
» Representing and formulating: constructing graphical, tabular, symbolic or verbal representations of the problem situation and formulating hypotheses about the relevant factors and the relationships between them.
» Planning and executing: devising a plan by setting goals and subgoals, and executing the sequential steps identified in the plan.
» Monitoring and reflecting: monitoring progress, reacting to feedback, and reflecting on the solution, the information provided with the problem or the strategy adopted.

PROBLEM CONTEXT: In what everyday scenario is the problem embedded?
» Setting: does the scenario involve a technological device? Technology (involves a technological device) or non-technology (does not involve a technological device).
» Focus: what environment does the problem relate to? Personal (the student, family or close peers) or social (the community or society in general).

Figure 2.1  Main features of the PISA 2012 problem-solving assessment framework

Nature of the problem situation

This aspect of the problem-solving framework relates to how a problem is presented. In PISA, problem situations are defined as static or interactive. When all the relevant information about the problem is disclosed to the problem solver at the outset, the problem situation is considered static. Static problem situations have a single goal and are not dynamic in nature; that is, their state does not change of its own accord during the course of solving a problem. Solving a jigsaw puzzle is an example of a static problem situation. On the other hand, when the problem solver is not presented with all the information at the outset and is required to explore the situation to uncover additional relevant information, the problem situation is considered interactive. Interactive problem situations can be dynamic in nature. Using a train ticket vending machine, or another technological device, for the first time is an example of an interactive problem situation. In this situation, it may not be obvious what steps need to be taken until the user starts to interact with the machine.

Problem-solving processes

The second aspect in the PISA 2012 problem-solving framework relates to the cognitive processes involved in solving a problem. In assessing problem solving in PISA 2012, these have been grouped into four processes:


» Exploring and understanding involves building mental representations of each of the pieces of information presented in the problem by exploring the situation (observing it, interacting with it, searching for information and finding limitations or obstacles) and demonstrating an understanding of the given information and the information discovered while interacting with the problem situation.
» Representing and formulating involves building a coherent mental representation of the problem situation by selecting, mentally organising and integrating the relevant information with relevant prior knowledge. This is achieved by representing the problem (using tables, graphs, symbols or words to represent aspects of the problem situation) and formulating hypotheses (identifying the relevant factors in a problem and their interrelationships, and organising and critically evaluating information).
» Planning and executing involves planning (clarifying the overall goal and setting subgoals, where necessary), devising a plan or strategy to reach the goal, and executing the plan (carrying it out).
» Monitoring and reflecting involves monitoring progress towards the goal at each stage (including checking intermediate and final results, detecting unexpected events and taking remedial action when required) and reflecting on the solution from different perspectives, critically evaluating assumptions and alternative solutions, identifying the need for additional information or clarification, and communicating progress in a suitable manner.

In solving a particular problem, the processes may not be sequential, and not all processes will be involved in solving a given problem. In the problem-solving assessment, each item was intended to have one of these processes as its main focus, although often several processes occurred simultaneously, or in succession, while a student solved a particular item.

A major distinction among tasks is between the acquisition and the use of knowledge. In knowledge-acquisition tasks, the goal is for students to develop or refine their mental representation of the problem space. Students need to generate and manipulate the information in a mental representation; the movement is from concrete to abstract, from information to knowledge. In the context of the PISA assessment of problem solving, knowledge-acquisition tasks may be classified either as exploring and understanding tasks or as representing and formulating tasks. The distinction between these two processes is sometimes small, and may relate to the amount of scaffolding provided for exploring and representing the problem space.

In knowledge-utilisation tasks, the goal is for students to solve a concrete problem; the movement is from abstract to concrete, from knowledge to action. Knowledge-utilisation tasks correspond to the process of planning and executing. Within the PISA assessment of problem solving, tasks were only classified as planning and executing if the execution of a plan was the dominant cognitive demand of the item (and likewise for the other problem-solving processes). Monitoring and reflecting tasks are intentionally left out of this distinction, because they often combine both knowledge-acquisition and knowledge-utilisation aspects.

Problem context

The problem context consists of two dimensions: the setting (technology or non-technology) and the focus (personal or social). These have been identified to ensure that the assessment items cover a range of contexts, are authentic and are of interest to 15-year-olds.

Problems are set in either a technology or a non-technology setting. Examples of technological devices include digital clocks, mobile phones and remote controls for appliances. Students are led to explore and understand the functionality of a device, as preparation for controlling the device or for troubleshooting its malfunctioning. Examples of non-technology contexts include task scheduling, route planning and decision making.

Problems also have either a personal or a social focus. Personal contexts relate primarily to the self, family and peer groups, while social contexts relate to broader situations in the community or society in general. An item about the rules that govern the function of an MP3 player would be classified as having a technology setting and a personal focus, whereas an item relating to the seating plan for a birthday party has a non-technology setting and a social focus.

The PISA 2012 problem-solving assessment structure

The PISA 2012 problem-solving assessment framework serves as the conceptual basis for assessing students' proficiency in problem solving. Items were developed to reflect the concepts in the framework.

Structure of the computer-based assessment

As with the PISA 2012 paper-based assessment, items in the computer-based assessment were grouped into units, and the units were grouped into clusters. The PISA 2012 problem-solving assessment consisted of 42 items organised into 16 units. The units were grouped into four clusters, providing a total of 80 minutes of problem-solving assessment material. There were also four mathematical literacy clusters and two reading literacy clusters to assess these domains in a computer-based environment.3

Students were randomly assigned to one of 24 computer forms, each consisting of two clusters. Each form contained zero, one or two of the problem-solving clusters according to a balanced rotation design; that is, each student answered only some of the items in the total item pool. Students were allocated 40 minutes to complete the assessment.

Only a basic level of information and communication technologies (ICT) competence was required. Prior to the assessment, students undertook a 20-minute tutorial that covered how to navigate the test interface and the different response formats. The basic ICT skills needed to participate in the computer-based assessment were: using a keyboard, using a mouse or touchpad, clicking radio buttons, dragging and dropping, scrolling, and using pull-down menus and hyperlinks. The reading load was minimised by using stimulus material and task statements that were clear, simple and brief, as well as by using animations, pictures and diagrams.
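Such a rotation can be illustrated with a toy scheme; the sketch below is illustrative only and does not reproduce the actual PISA 2012 form assignments (the cluster labels are invented):

    from collections import Counter
    from itertools import permutations

    # Invented cluster pools matching the counts described above.
    ps = ["PS1", "PS2", "PS3", "PS4"]      # problem-solving clusters
    maths = ["M1", "M2", "M3", "M4"]       # computer-based mathematics clusters
    reading = ["R1", "R2"]                 # digital reading clusters

    clusters = ps + maths + reading
    # Every ordered pair of distinct clusters is a candidate two-cluster form;
    # a balanced design then selects 24 of these so that each cluster appears
    # equally often in each position.
    forms = [(a, b) for a, b in permutations(clusters, 2)]

    counts = Counter(sum(c in ps for c in form) for form in forms)
    print(len(forms), counts)   # 90 Counter({1: 48, 0: 30, 2: 12})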

Delivery of the computer-based assessment

The paper-based PISA 2003 problem-solving assessment typically assessed static problem situations. Advancements in software-development tools have since provided an opportunity to assess students' problem-solving skills in a new dimension. Students can be presented with relatively complex problems that require direct interaction to uncover the relevant information needed to solve the problem. These interactive problem situations can be simulated in a testing setting using a computer, allowing for a wider range of authentic, real-life scenarios. The capability to administer dynamic and interactive problems also engages students' interest more fully. Another benefit of measuring problem solving through a computer-based assessment is the opportunity to collect data on processes and strategies, such as the frequency, length and sequences of actions performed by students as they respond to items.

3 The results of the computer-based mathematical literacy assessment and digital reading assessment can be found in the national report, PISA 2012: How Australia measures up.


The appearance of the test interface was consistent across items.4 For each item, the stimulus material was shown in the top part of the screen, while the item appeared in the lower part, with a border separating the stimulus from the item. All of the stimulus material was visible at once, so students did not have to scroll to see all the information. A timing bar in the top right-hand corner of the screen showed students how much time remained in the assessment. Another progress indicator, along the top left-hand side of the screen, identified the number of items in the unit and the item the student was currently completing.

Items within units, and units within clusters, were delivered in a fixed, lockstep order. As students completed an item (by clicking the arrow button), a dialog box warned them that they were about to move forward in the assessment and that it would not be possible to return to the item. Students could then confirm that they wanted to move to the next item or return to the current item.

Distribution of items

The PISA 2012 computer-based problem-solving assessment was based on 42 items. Items were developed to measure how well students performed when the various problem-solving processes were exercised on the two types of problem situations across a range of contexts. Items were classified by two main aspects: the main cognitive process involved in the task and the nature of the problem situation. The assessment was constructed so that there was no strong association between the main cognitive process involved in a task and the nature of the problem situation; that is, strengths and weaknesses in particular cognitive processes were unlikely to influence strengths and weaknesses found in interactive or static tasks.

The distribution of items by the main cognitive process involved in the task was: 38% planning and executing, 24% exploring and understanding, 21% representing and formulating, and 17% monitoring and reflecting. Two-thirds of the items were classified as interactive and one-third as static. Table 2.1 presents the number of items in the problem-solving assessment, by cognitive process and the nature of the problem situation.

Table 2.1  Classification of problem-solving items, by cognitive process and the nature of the problem situation

Nature of the problem situation    Exploring and    Representing and    Planning and    Monitoring and    Total
                                   understanding    formulating         executing       reflecting        items
Static                                   5                2                  6                2             15
Interactive                              5                7                 10                5             27
Total items                             10                9                 16                7             42

Items in the assessment were also classified by the particular context in which the problem situation occurred and according to their response format. About one-third of the items were simple multiple-choice items with one correct answer, or complex multiple-choice items with two or three separate multiple-choice selections. Students selected their response by clicking a radio button or by using a drop-down menu. These items were automatically coded.

The remaining two-thirds of items were constructed-response items. The majority of these (80%) were closed constructed-response items that required students to construct their response by entering a number, dragging shapes, drawing lines between points or highlighting part of a diagram. These closed constructed-response items were also automatically coded. The rest were open constructed-response items, where students wrote a short explanation to show the method and thought process they had used in constructing their response. Trained coders coded these open constructed-response items.

4 The computer-based assessment was delivered on computers running Windows XP, Windows Vista or Windows 7. Monitors used to display the computer-based assessment had to support a resolution of at least 1024 by 768 pixels.

Scaling the problem-solving tasks

The assessment design, similar to those used in the regular PISA assessments of mathematics, science and reading, allowed a single scale of proficiency in problem solving to be constructed. The scale was constructed using item response theory: each item is associated with a particular point on the scale indicating its difficulty, and each student's performance is associated with a particular point on the same scale indicating their estimated problem-solving proficiency. On this scale, the relative difficulty of an item can be estimated from the proportion of students who respond to it correctly, and the location of individual students can be estimated to describe the degree of problem-solving proficiency they possess.

The relationship between items and students on the problem-solving scale (shown in Figure 2.2) is probabilistic. The estimate of student proficiency reflects the kinds of tasks the student would be expected to complete successfully. A student whose ability places them at a certain point on the PISA problem-solving scale would most likely be able to complete tasks at or below that location, and would be increasingly likely to complete tasks located at progressively lower points on the scale. Conversely, they would be less likely to complete tasks above that point, and increasingly less likely to complete tasks located at progressively higher points on the scale.

On the scale shown in Figure 2.2, item I is the least difficult and item VI the most difficult; students A, B and C have relatively high, moderate and relatively low proficiency respectively.

» Student A, with relatively high proficiency: it is expected that student A will be able to complete items I to V successfully, and probably item VI as well.
» Student B, with moderate proficiency: it is expected that student B will be able to complete items I, II and III successfully, will have a lower probability of completing item IV, and is unlikely to complete items V and VI successfully.
» Student C, with relatively low proficiency: it is expected that student C will be unable to complete items II to VI successfully, and will also have a low probability of completing item I successfully.

Figure 2.2  The relationship between items and students on the PISA problem-solving scale
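The scaling model is not spelled out in this report, but the probabilistic item-student relationship sketched in Figure 2.2 is in the spirit of a one-parameter (Rasch-style) item response model. A minimal sketch, where the ability and difficulty values and the logistic slope are invented for illustration:

    import math

    def p_correct(ability, difficulty, slope=100.0):
        # Rasch-style response probability on a PISA-like point scale.
        # The slope divisor is an invented calibration constant.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty) / slope))

    # A student of moderate proficiency (about 500 points) against items of
    # increasing difficulty: very likely to succeed on easy items, about even
    # near 500, increasingly unlikely above it, as described for student B.
    for difficulty in (340, 450, 500, 560, 660):
        print(difficulty, round(p_correct(500, difficulty), 2))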

Defining problem-solving proficiency levels in PISA 2012

The PISA 2012 problem-solving assessment provides an overall problem-solving proficiency scale that draws on all problem-solving items in the assessment. The scale was constructed to have a mean of 500 score points across OECD countries, with a standard deviation of 100 score points. Two-thirds of students across OECD countries scored between 400 and 600 points.
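In practice this standardisation is a linear rescaling of the underlying ability estimates. A minimal sketch, assuming invented raw estimates and invented OECD calibration values:

    def to_pisa_scale(theta, oecd_mean, oecd_sd):
        # scaled = 500 + 100 * (standardised ability); values are invented.
        return 500.0 + 100.0 * (theta - oecd_mean) / oecd_sd

    raw = [-1.2, -0.3, 0.0, 0.4, 1.5]   # invented ability estimates (logits)
    print([round(to_pisa_scale(t, 0.0, 1.0)) for t in raw])
    # [380, 470, 500, 540, 650]: mean 500, SD 100 by construction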


While mean scores provide a convenient summary of student performance, proficiency levels are developed in PISA to describe the knowledge and skills students could be expected to have at particular levels. The problem-solving scale is divided into six proficiency levels, ranging from the lowest described level, Level 1, which corresponds to an elementary level of problem-solving skill, to the highest described level, Level 6, which corresponds to an advanced level. There is also an unbounded level, below Level 1, for which no description is provided, as there are insufficient items on which to base a description of these students' problem-solving proficiency. Students with a proficiency score within the range of Level 1 are expected to complete most Level 1 tasks successfully, but are unlikely to be able to complete tasks at higher levels. Students with scores in the Level 6 range are likely to be able to complete successfully all tasks included in the PISA assessment of problem solving. The descriptions of what students can typically do at each proficiency level are shown in Figure 2.3. A difference of 65 score points represents one proficiency level on the PISA 2012 problem-solving scale.

Students who perform at Level 5 or 6 (scoring 618 points or higher) are considered top-performing students; they are highly proficient in problem solving. Students who are placed at Level 1 or below (scoring 422 points or lower) are considered low-performing students. These students have not reached Level 2, which has been defined internationally as the baseline proficiency level: the level at which students begin to demonstrate the problem-solving competencies that will enable them to participate actively in the 21st-century workforce and contribute as productive citizens. Students who perform below Level 1 cannot be described reliably because there are not enough problem-solving items in this lower region of the scale; these students show limited problem-solving skills.

Sample problem-solving items

A small number of items have been publicly released to illustrate the types of items that students responded to in the computer-based assessment. Figure 2.4 shows how these items map onto the described problem-solving proficiency scale. The most difficult items are located at the top of the figure, at the higher proficiency levels, and the least difficult items are located at the bottom, in the lower levels. Cut-off score points between proficiency levels are also displayed. Each item is placed in the relevant proficiency level according to its difficulty (the task score) and is identified by its main problem-solving process and the nature of the problem situation.

Items included in the same unit can span a range of difficulties. For example, the unit Tickets comprises items at Levels 2 to 5, showing that a single unit may cover a broad section of the PISA problem-solving scale. Some items, such as the second item in the unit Climate control, are partial-credit items: students who responded correctly were awarded full credit, but some responses could be awarded partial credit. Only a few tasks in the problem-solving assessment are associated with difficulty levels below Level 1. Among the released items, one item, the first question in the unit Traffic, is located below the lowest described level of proficiency.

All of the following interactive sample items are set in technology contexts. The assessment also included interactive problems in non-technology contexts; for example, asking students to orient themselves in a maze.


Level 6 (from 683 score points)
Students can develop complete, coherent mental models of diverse problem scenarios, enabling them to solve complex problems efficiently. They can explore a scenario in a highly strategic manner to understand all information pertaining to the problem. The information may be presented in different formats, requiring interpretation and integration of related parts. When confronted with very complex devices, such as home appliances that work in an unusual or unexpected manner, they quickly learn how to control the devices to achieve a goal in an optimal way. Level 6 problem-solvers can set up general hypotheses about a system and thoroughly test them. They can follow a premise through to a logical conclusion or recognise when there is not enough information available to reach one. In order to reach a solution, these highly proficient problem-solvers can create complex, flexible, multi-step plans that they continually monitor during execution. Where necessary, they modify their strategies, taking all constraints into account, both explicit and implicit.

Level 5 (from 618 score points)
Students can systematically explore a complex problem scenario to gain an understanding of how relevant information is structured. When faced with unfamiliar, moderately complex devices, such as vending machines or home appliances, they respond quickly to feedback in order to control the device. In order to reach a solution, Level 5 problem-solvers think ahead to find the best strategy that addresses all the given constraints. They can immediately adjust their plans or backtrack when they detect unexpected difficulties or when they make mistakes that take them off course.

Level 4 (from 553 score points)
Students can explore a moderately complex problem scenario in a focused way. They grasp the links among the components of the scenario that are required to solve the problem. They can control moderately complex digital devices, such as unfamiliar vending machines or home appliances, but they don't always do so efficiently. These students can plan a few steps ahead and monitor the progress of their plans. They are usually able to adjust these plans or reformulate a goal in light of feedback. They can systematically try out different possibilities and check whether multiple conditions have been satisfied. They can form a hypothesis about why a system is malfunctioning and describe how to test it.

Level 3 (from 488 score points)
Students can handle information presented in several different formats. They can explore a problem scenario and infer simple relationships among its components. They can control simple digital devices, but have trouble with more complex devices. Problem-solvers at Level 3 can fully deal with one condition; for example, by generating several solutions and checking to see whether these satisfy the condition. When there are multiple conditions or interrelated features, they can hold one variable constant to see the effect of change on the other variables. They can devise and execute tests to confirm or refute a given hypothesis. They understand the need to plan ahead and monitor progress, and are able to try a different option if necessary.

Level 2 (from 423 score points)
Students can explore an unfamiliar problem scenario and understand a small part of it. They try, but only partially succeed, to understand and control digital devices with unfamiliar controls, such as home appliances and vending machines. Level 2 problem-solvers can test a simple hypothesis that is given to them and can solve a problem that has a single, specific constraint. They can plan and carry out one step at a time to achieve a subgoal and have some capacity to monitor overall progress towards a solution.

Level 1 (from 358 score points)
Students can explore a problem scenario only in a limited way, and tend to do so only when they have encountered very similar situations before. Based on their observations of familiar scenarios, these students are able only to partially describe the behaviour of a simple, everyday device. In general, students at Level 1 can solve straightforward problems provided there is a simple condition to be satisfied and only one or two steps to be performed to reach the goal. Level 1 students tend not to be able to plan ahead or set subgoals.

Figure 2.3  Summary descriptions of the six levels on the problem-solving proficiency scale
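Given the cut-off scores shown in Figure 2.3, assigning a scale score to a proficiency level is a simple threshold lookup. A minimal sketch using those cut-offs:

    from bisect import bisect_right

    # Lower bounds of Levels 1 to 6 on the problem-solving scale (Figure 2.3).
    CUTOFFS = [358, 423, 488, 553, 618, 683]

    def proficiency_level(score):
        # Count how many cut-offs lie at or below the score; zero means the
        # score falls below Level 1.
        idx = bisect_right(CUTOFFS, score)
        return "Below 1" if idx == 0 else str(idx)

    for score in (340, 423, 530, 701):
        print(score, proficiency_level(score))   # Below 1, 2, 3, 6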


Cut-off scores between levels: 683 (Level 6), 618 (Level 5), 553 (Level 4), 488 (Level 3), 423 (Level 2), 358 (Level 1).

Level     Item                                                 Task score   Main problem-solving process   Nature of the problem
6         Robot cleaner Item 3 full credit (CP002Q06)             701       Representing and formulating   Static
5         Climate control Item 2 full credit (CP025Q02)           672       Planning and executing         Interactive
5         Tickets Item 2 full credit (CP038Q01)                   638       Exploring and understanding    Interactive
4         Climate control Item 2 partial credit (CP025Q02)        592       Planning and executing         Interactive
4         Tickets Item 3 (CP038Q03)                               579       Monitoring and reflecting      Interactive
4         Robot cleaner Item 2 (CP002Q07)                         559       Exploring and understanding    Static
3         Tickets Item 1 (CP038Q02)                               526       Planning and executing         Interactive
3         Climate control Item 1 full credit (CP025Q01)           523       Representing and formulating   Interactive
3         Climate control Item 1 partial credit (CP025Q01)        492       Representing and formulating   Interactive
3         Robot cleaner Item 1 (CP002Q08)                         490       Exploring and understanding    Static
2         Tickets Item 2 partial credit (CP038Q01)                453       Exploring and understanding    Interactive
2         Traffic Item 2 (CP007Q02)                               446       Planning and executing         Static
1         Robot cleaner Item 3 partial credit (CP002Q06)          414       Representing and formulating   Static
1         Traffic Item 3 (CP007Q03)                               408       Monitoring and reflecting      Static
Below 1   Traffic Item 1 (CP007Q01)                               340       Planning and executing         Static

Figure 2.4  Map of selected problem-solving items, illustrating the proficiency level and assigned problem-solving process and nature of the problem


The four units, Robot cleaner, Climate control, Tickets and Traffic, are described below. For each unit, a screenshot of the stimulus information is provided, together with a brief description of the context of the unit. This is followed by a screenshot and description of each item from that unit.5

Robot cleaner

The animation shows the movement of a new robotic vacuum cleaner that is being tested. Click the START button to see what the vacuum cleaner does when it meets different types of objects. You can use the RESET button to place the vacuum cleaner back in its starting position at any time.

The unit Robot cleaner presents students with an animation showing the behaviour of a robot cleaner in a room. The robotic vacuum cleaner moves forward until it meets an obstacle and then behaves according to a few deterministic rules, depending on the kind of obstacle. Students can run the animation as many times as they wish to observe this behaviour. Despite the animated task prompt, the problem situations in this unit are static, because the student cannot intervene to change the behaviour of the vacuum cleaner or aspects of the environment. The context for the items in this unit is classified as social and non-technology.

Item 1: Robot cleaner (CP002Q08)
What does the vacuum cleaner do when it meets a red block?
» It immediately moves to another red block.
» It turns and moves to the nearest yellow block.
» It turns a quarter circle (90 degrees) and moves forward until it meets something else.
» It turns a half circle (180 degrees) and moves forward until it meets something else.

In the first item, students must understand the behaviour of the vacuum cleaner when it meets a red block. The item is classified as exploring and understanding. To show their understanding, students need to select among a list of four options. Based on observation of the animation, the description that corresponds to the behaviour of the robot cleaner in this situation is: “It turns a quarter circle (90 degrees) and moves forward until it meets something else.”

5 These units are available for viewing at http://cbasq.acer.edu.au


Item 2: Robot cleaner (CP002Q07)
At the beginning of the animation, the vacuum cleaner is facing the left wall. By the end of the animation it has pushed two yellow blocks. If, instead of facing the left wall at the beginning of the animation, the vacuum cleaner was facing the right wall, how many yellow blocks would it have pushed by the end of the animation?
» 0
» 1
» 2
» 3

In the second item, students must predict the behaviour of the vacuum cleaner using spatial reasoning. How many obstacles would the vacuum cleaner encounter if it started in a different position? This item is also an exploring and understanding item because the correct prediction of the robot’s behaviour requires at least a partial understanding of the rules and careful observation of the animation to grasp the information needed. It is made easier if the student notes that the new starting position corresponds to an intermediate state of the robot’s trajectory in the animation. Response options are provided.

Item 3: Robot cleaner (CP002Q06)
The vacuum cleaner's behaviour follows a set of rules. Based on the animation, write a rule that describes what the vacuum cleaner does when it meets a yellow block.

The final item in this unit is classified as representing and formulating. It asks students to describe the behaviour of the robot cleaner when it meets a yellow block. In contrast to the first item, students must formulate the answer themselves by entering it in a text box. This item requires expert scoring for credit. Full credit answers are those that describe both of the rules that govern the robot’s behaviour (for example, “it pushes the yellow block as far as it can and then turns around”). Partial credit was available for answers that only partially described the behaviour (for example, by listing only one of the two rules). Only a small percentage of students across participating countries obtained full credit for this item.
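The two rules quoted above (a quarter turn at a red block; pushing a yellow block as far as it will go, then turning around) can be encoded directly. A minimal sketch; the room layout and block pushing are not modelled, and the released material does not describe wall behaviour, so treating a wall like a red block below is purely an assumption:

    # Headings as (dx, dy) vectors; a quarter turn maps each onto the next.
    QUARTER_TURN = {(1, 0): (0, 1), (0, 1): (-1, 0),
                    (-1, 0): (0, -1), (0, -1): (1, 0)}

    def new_heading(heading, obstacle):
        if obstacle == "yellow":
            # Yellow block: push it as far as it goes (not modelled here),
            # then turn a half circle (180 degrees).
            return (-heading[0], -heading[1])
        # Red block: turn a quarter circle (90 degrees) and keep moving.
        # Wall behaviour is not described in the released items; treating a
        # wall like a red block here is an assumption for illustration only.
        return QUARTER_TURN[heading]

    heading = (1, 0)   # initially moving to the right
    for obstacle in ("red", "yellow", "wall"):
        heading = new_heading(heading, obstacle)
        print(obstacle, "-> new heading", heading)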


Climate control

You have no instructions for your new air conditioner. You need to work out how to use it. You can change the top, central and bottom controls on the left by using the sliders. The initial setting for each control is indicated by a marker. By clicking APPLY, you will see any changes in the temperature and humidity of the room in the temperature and humidity graphs. The box to the left of each graph shows the current level of temperature or humidity.

In the unit Climate control, students are told that they have a new air conditioner but no instructions for it. Students can use three controls (sliders) to vary temperature and humidity levels, but first they need to understand which control does what. A measure of temperature and humidity in the room appears in the top right-hand part of the screen, in both numerical and graphical form. All items in this unit present an interactive problem situation, with the context classified as personal and technology.

The unit Climate control is an example of a system of causal relations involving only a few variables that have to be explored and controlled in order to reach the assigned goal states. In the first, knowledge-generation phase, the student has to control up to three input variables. An increase in the level of an input variable leads to an increase, a decrease, a mixed effect (an increase for some output variables and a decrease for others) or no effect in one or more output variables. Students typically have to demonstrate rule knowledge after this first phase. Students are then asked to control the system to reach a certain target by choosing the appropriate input levels.

Item 1: Climate control (CP025Q01)
Find out whether each control influences temperature and humidity by changing the sliders. You can start again by clicking RESET. Draw lines in the diagram on the right to show what each control influences. To draw a line, click on a control and then click on either Temperature or Humidity. You can remove any line by clicking on it.

In the first item, students are invited to change the sliders to find out whether each control influences the temperature or the humidity level. The problem-solving process for this item is representing and formulating: the student must experiment to determine which controls affect temperature and which affect humidity, and then represent the causal relations by drawing arrows between the three controls and the two outputs (temperature and humidity). There is no restriction on the number of rounds of exploration allowed. Full credit requires that the causal diagram is completed correctly. Partial credit is given if the student explores the relationships among variables efficiently, by varying only one input at a time, but fails to represent them correctly in the diagram.
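The partial-credit criterion rewards exactly a vary-one-thing-at-a-time exploration strategy. A minimal sketch of that strategy against an invented control-to-output mapping (the mapping in the actual item is not reproduced here):

    # Invented device: each control contributes a fixed (temperature,
    # humidity) effect per slider step. The real item's mapping differs
    # and has to be discovered by the student.
    EFFECTS = {"top": (2, 0), "central": (-1, 3), "bottom": (0, -2)}

    def respond(settings):
        # Return the (temperature, humidity) change for the given sliders.
        t = sum(EFFECTS[c][0] * v for c, v in settings.items())
        h = sum(EFFECTS[c][1] * v for c, v in settings.items())
        return t, h

    # Move one slider at a time, holding the others at their initial value,
    # and record which outputs respond: this is the efficient strategy.
    for control in EFFECTS:
        settings = {c: 0 for c in EFFECTS}
        settings[control] = 1
        t, h = respond(settings)
        links = [out for out, delta in (("temperature", t), ("humidity", h)) if delta]
        print(control, "influences", links or ["nothing"])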


Item 2: Climate control (CP025Q02)
The correct relationship between the three controls, Temperature and Humidity is shown on the right. Use the controls to set the temperature and humidity to the target levels. Do this in a maximum of four steps. The target levels are shown by the red bands across the Temperature and Humidity graphs. The range of values for each target level is shown to the left of each red band. You can only click APPLY four times and there is no RESET button.

The second item asks students to apply their new knowledge of how the air conditioner works to set temperature and humidity at specified target levels (lower than the initial state). This is a planning and executing item. To ensure that no further exploration is needed beyond that conducted in the previous item, a diagram shows how the controls are related to temperature and humidity levels (students could not return to any previous item during the test). Because only four rounds of manipulation are permitted, students need to plan a few steps ahead and use a systematic, if simple, strategy to succeed in this task. The target levels can be reached in several ways within four steps (the minimum number of steps needed is two), and a mistake can often be corrected if immediate remedial action is taken. A possible strategy, for instance, is to set separate subgoals and to focus on temperature and humidity in successive steps. If the student brings temperature and humidity both closer to their target levels within the four permitted rounds, but does not reach both targets, partial credit is given.


Tickets

A train station has an automated ticketing machine. You use the touch screen on the right to buy a ticket. You must make three choices.
• Choose the train network you want (subway or country).
• Choose the type of fare (full or concession).
• Choose a daily ticket or a ticket for a specified number of trips. Daily tickets give you unlimited travel on the day of purchase. If you buy a ticket with a specified number of trips, you can use the trips on different days.
The BUY button appears when you have made these three choices. There is a cancel button that can be used at any time BEFORE you press the BUY button.

In the unit Tickets, students are invited to imagine that they have just arrived at a train station that has an automated ticketing machine. The context for the items in this unit is classified as social and technology. At the machine, students can buy subway or country train tickets, with full or concession fares, and they can choose daily tickets or a ticket for a specified number of trips. All items in this unit present an interactive problem situation: students are required to engage with the unfamiliar machine and to use it to satisfy their needs.

Item 1: Tickets (CP038Q02)
Buy a full fare, country train ticket with two individual trips. Once you have pressed BUY, you cannot return to the question.

In the first item, students are invited to buy a full fare, country train ticket with two individual trips. This item measures the process of planning and executing. Students first have to select the network (country trains), choose the fare type (full fare), select between a daily ticket and one for multiple individual trips, and finally indicate the number of trips (two). The solution requires multiple steps, and the instructions are not given in the same order as they need to be applied. This is a relatively linear problem compared with the following ones, but it is the first encounter with the machine, which increases its level of difficulty relative to the following items.

Item 2: Tickets (CP038Q01)
You plan to take four trips around the city on the subway today. You are a student, so you can use concession fares. Use the ticketing machine to find the cheapest ticket and press BUY. Once you have pressed BUY, you cannot return to the question.


In the second item, students are asked to find and buy the cheapest ticket that allows them to take four trips around the city on the subway within a single day. As students, they can use concession fares. This item is classified as exploring and understanding because that is the most crucial problem-solving process involved. To accomplish the task, students must use a targeted exploration strategy: first generating at least the two most obvious alternatives (a daily subway ticket with concession fare, or an individual concession-fare ticket with four trips), and then verifying which is cheaper. If students visit both screens before buying the cheapest ticket (the individual ticket with four trips), they are given full credit. Students who buy one of the two tickets without comparing the prices earn only partial credit. Solving this problem involves multiple steps.

Item 3: Tickets (CP038Q03)
You want to buy a ticket with two individual trips for the city subway. You are a student, so you can use concession fares. Use the ticketing machine to purchase the best ticket available.

In the third item, students are asked to buy a ticket for two individual trips on the subway. They are told that they are eligible for concession fares. This item is classified as monitoring and reflecting, since it requires students to modify their initial plan (to buy concession-fare tickets for the subway). When concession fares are selected, the machine reports that "there are no tickets of this type available". Students must realise that it is not possible to carry through their initial plan, and must adjust it by buying a full-fare ticket for the subway instead.
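The full-credit strategy in the second item amounts to enumerating the candidate tickets and comparing their total prices. A minimal sketch with invented fares (the machine's actual prices are not given in this report; the toy fares below are chosen so that, as in the item, the individual-trip ticket is cheaper):

    # Invented concession fares for the city subway.
    DAILY_CONCESSION = 4.60
    PER_TRIP_CONCESSION = 1.00

    def cheapest_for(trips):
        # Generate the two obvious alternatives, then compare prices.
        options = {
            "daily ticket": DAILY_CONCESSION,
            "individual trips": PER_TRIP_CONCESSION * trips,
        }
        best = min(options, key=options.get)
        return best, options[best]

    print(cheapest_for(4))   # ('individual trips', 4.0) with these toy fares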


Traffic

Here is a map of a system of roads that links the suburbs within a city. The map shows the travel time in minutes at 7.00 am on each section of road. You can add a road to your route by clicking on it. Clicking on a road highlights the road and adds the time to the Total Time box. You can remove a road from your route by clicking on it again. You can use the RESET button to remove all roads from your route.

In the unit Traffic, students are given a map of a road network with travel times indicated. While this is a unit with static items, because all the information about travel times is provided at the outset, it still exploits the advantages of computer delivery. Students can click on the map to highlight a route, with a calculator in the bottom left-hand corner adding up the travel times for the selected route. The context for the items in this unit is classified as social and non-technology.

Item 1: Traffic (CP007Q01)
Pepe is at Sakharov and wants to travel to Emerald. He wants to complete his trip as quickly as possible. What is the shortest time for his trip?
» 20 minutes
» 21 minutes
» 24 minutes
» 28 minutes

In the first item, a planning and executing item, students are asked for the shortest time to travel from Sakharov to Emerald, two relatively close points on the map. Four response options are provided.

Item 2: Traffic (CP007Q02)
Maria wants to travel from Diamond to Einstein. The quickest route takes 31 minutes. Highlight this route.

The second item is a similar planning and executing item. It asks students to find the quickest route between Diamond and Einstein, two distant points on the map. This time, students must provide their answer by highlighting the route. Students can use the fact that the quickest route takes 31 minutes to avoid systematically generating all possible alternatives; instead, they can explore the network in a targeted way to find the route that takes 31 minutes.

Item 3: Traffic (CP007Q03)
Julio lives in Silver, Maria lives in Lincoln and Don lives in Nobel. They want to meet in a suburb on the map. No-one wants to travel for more than 15 minutes. Where could they meet?

In the third item, students have to use a drop-down menu to select the meeting point that satisfies a condition on the travel times of all three participants. This item is classified as monitoring and reflecting, because students have to evaluate possible solutions against a given condition.
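The quickest-route questions in this unit are shortest-path problems on a weighted road network, so the standard tool is Dijkstra's algorithm. A minimal sketch on an invented toy map (only Sakharov and Emerald are named in the item; the suburb 'Lee' and all travel times below are made up):

    import heapq

    def quickest(graph, start, goal):
        # Dijkstra's algorithm: minimum total travel time from start to goal.
        queue, best = [(0, start)], {start: 0}
        while queue:
            time, node = heapq.heappop(queue)
            if node == goal:
                return time
            for neighbour, minutes in graph.get(node, ()):
                t = time + minutes
                if t < best.get(neighbour, float("inf")):
                    best[neighbour] = t
                    heapq.heappush(queue, (t, neighbour))
        return None

    # Invented, undirected toy network with travel times in minutes.
    edges = [("Sakharov", "Lee", 8), ("Lee", "Emerald", 12),
             ("Sakharov", "Emerald", 24)]
    graph = {}
    for a, b, minutes in edges:
        graph.setdefault(a, []).append((b, minutes))
        graph.setdefault(b, []).append((a, minutes))

    print(quickest(graph, "Sakharov", "Emerald"))   # 20 with these toy times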


CHAPTER 3

Australian students' performance in problem solving

This chapter presents Australian students' performance in problem solving in PISA 2012. Results are reported as means (average scores) and by proficiency levels across the problem-solving scale. Comparisons of student performance in problem solving are provided at an international level, describing Australia's performance relative to other participating countries, and at a national level, where the performance of different social groups is examined.

Australia's problem-solving performance from an international perspective

Problem-solving performance across countries

Overall, Australian students performed very well in problem solving, achieving a mean score of 523 points. Seven countries (two OECD countries and five partner countries and economies) performed significantly higher than Australia. Singapore and Korea, the highest performing countries in problem solving, achieved mean scores of 562 and 561 points respectively, followed by Japan (552 score points), Macao–China and Hong Kong–China (540 score points each), Shanghai–China (536 score points) and Chinese Taipei (534 score points). Three countries, Canada, Finland and England, performed at a level not significantly different from Australia's, while all other countries, including the United States and Ireland, performed at a significantly lower level than Australia. The lowest scoring countries were Uruguay, Bulgaria and Colombia, with scores of 403 points or lower.

Australia was one of 19 countries (14 OECD countries and five partner countries) that achieved a mean score significantly higher than the OECD average. These countries were: Singapore, Korea, Japan, Macao–China, Hong Kong–China, Shanghai–China, Chinese Taipei, Canada, Australia, Finland, England, Estonia, France, the Netherlands, Italy, the Czech Republic, Germany, the United States and Belgium. Five countries (Austria, Norway, Ireland, Denmark and Portugal) achieved mean scores not significantly different from the OECD average, and 20 countries (nine OECD countries and 11 partner countries) performed significantly lower than the OECD average: Sweden, the Russian Federation, the Slovak Republic, Poland, Spain, Slovenia, Serbia, Croatia, Hungary, Turkey, Israel, Chile, Cyprus, Brazil, Malaysia, the United Arab Emirates, Montenegro, Uruguay, Bulgaria and Colombia.

How problem solving is reported in PISA

Similar to the reporting of results for other assessed domains in PISA, statistics such as mean scores, measures of the distribution of performance, and proficiency levels are used to examine students' performance.

Mean scores and distribution of scores

Mean scores provide a summary of students' performance and allow comparisons of the relative standing between different countries and different subgroups. As problem solving was assessed for the first time as a computer-based assessment in PISA 2012, the mean score across OECD countries was set at 500 score points, with a standard deviation of 100 score points. This establishes the benchmark against which each country's problem-solving performance in PISA 2012 is compared. The distribution of scores along the problem-solving scale provides further detail about students' performance. Results are reported at the 5th, 10th, 25th, 75th, 90th and 95th percentiles in graphical format so that the variation in students' performance within a country or subgroup can be observed.

Proficiency levels

Proficiency levels provide results in descriptive terms: descriptions of the skills and knowledge students can typically demonstrate are attached to achievement results. The problem-solving proficiency scale spans Level 1 (the lowest proficiency level) to Level 6 (the highest proficiency level). Students who are placed at Level 5 or 6 are considered top-performing students, while students who fail to reach Level 2 are considered low-performing students.

Interpreting differences in PISA scores: How big is 'big'?

How do we go about understanding the difference in average problem-solving scores between two groups of students? A difference of 65 score points represents one proficiency level on the PISA problem-solving scale. In substantive terms, this can be considered a comparatively large difference in students' performance. For example, compare the skill sets of students who are proficient at Level 2 and at Level 3. Students who perform at Level 2 on the problem-solving scale are only starting to demonstrate problem-solving competence: they can explore an unfamiliar problem scenario and understand a small part of it, and they can test a simple hypothesis and solve a problem that has a single, specific constraint. In contrast, students who reach Level 3 are proficient with the tasks at Level 2 and can also explore a problem scenario and infer simple relationships among its components, and can devise and execute tests to confirm or refute a given hypothesis.

The difference in average performance between the highest- and lowest-performing countries is 163 score points. Across the OECD, the difference between the highest- and lowest-performing countries is 113 score points. For comparison, the difference between the highest- and lowest-performing Australian jurisdictions is 36 score points.

Treating all OECD countries as a single unit, one standard deviation in the distribution of students' performance on the problem-solving scale corresponds to 100 score points, which means that, on average within OECD countries, two-thirds of the student population have scores within 100 score points of the OECD mean (500 score points).


The difference in mean scores between students at the 5th and 95th percentiles varied considerably within countries, showing no clear relationship between average achievement and the degree of spread in students' scores. In Australia, the difference in achievement between the most capable problem solvers (those scoring at the 95th percentile) and the least capable (those scoring at the 5th percentile) was 320 score points; the OECD average difference was 314 score points. Among OECD countries, the widest differences between the lowest and highest achieving students were found in Israel (405 score points), Belgium (348 score points), Spain (346 score points) and Hungary (345 score points). Among partner countries, the widest differences were found in Bulgaria (351 score points) and the United Arab Emirates (347 score points). The narrowest differences were found in the partner economy Macao–China (259 score points) and in Turkey (262 score points).
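The spread statistics quoted here are simple percentile differences. A minimal sketch computing a 5th-to-95th percentile gap from simulated scores (the draws are not PISA data; the standard deviation is invented so that the gap lands near the 320 points reported for Australia):

    import random

    random.seed(1)
    # Under normality, a gap of 320 points between the 5th and 95th
    # percentiles corresponds to an SD of about 320 / (2 * 1.645), i.e. 97.
    scores = [random.gauss(523, 97) for _ in range(100000)]

    def percentile(values, p):
        # Percentile by linear interpolation between order statistics.
        s = sorted(values)
        k = (len(s) - 1) * p / 100
        lo = int(k)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (k - lo)

    print(round(percentile(scores, 95) - percentile(scores, 5)))   # about 320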


Figure 3.1 provides the average problem-solving scores, along with the standard errors, confidence intervals around the averages and the difference between the 5th and 95th percentiles. The original figure also shows the graphical distribution of student performance. Countries are shown in order from the highest to the lowest average problem-solving score, and colour bands in the original figure indicate whether a particular country performed at a significantly higher or lower level than Australia, or at a level not significantly different from Australia.

Country                  Mean score   SE    Confidence interval   Difference between 5th and 95th percentiles
Singapore                   562       1.2        560–565                 312
Korea                       561       4.3        553–570                 292
Japan                       552       3.1        546–558                 280
Macao–China                 540       1.0        538–542                 259
Hong Kong–China             540       3.9        532–547                 304
Shanghai–China              536       3.3        530–543                 295
Chinese Taipei              534       2.9        529–540                 297
Canada                      526       2.4        521–530                 327
Australia                   523       1.9        519–527                 320
Finland                     523       2.3        518–527                 307
England                     517       4.2        509–525                 315
Estonia                     515       2.5        510–520                 287
France                      511       3.4        504–518                 313
Netherlands                 511       4.4        502–519                 326
Italy                       510       4.0        502–518                 293
Czech Republic              509       3.1        503–515                 312
Germany                     509       3.6        502–516                 324
United States               508       3.9        500–516                 306
Belgium                     508       2.5        503–513                 348
Austria                     506       3.6        499–513                 305
Norway                      503       3.3        497–510                 337
OECD average                500       0.7        499–501                 314
Ireland                     498       3.2        492–505                 307
Denmark                     497       2.9        491–503                 302
Portugal                    494       3.6        487–501                 288
Sweden                      491       2.9        485–496                 316
Russian Federation          489       3.4        482–496                 290
Slovak Republic             483       3.6        476–490                 324
Poland                      481       4.4        472–489                 313
Spain                       477       4.1        469–485                 346
Slovenia                    476       1.5        473–479                 318
Serbia                      473       3.1        467–480                 294
Croatia                     466       3.9        459–474                 302
Hungary                     459       4.0        451–467                 345
Turkey                      454       4.0        447–462                 262
Israel                      454       5.5        443–465                 405
Chile                       448       3.7        441–455                 283
Cyprus                      445       1.4        442–448                 326
Brazil                      428       4.7        419–438                 299
Malaysia                    422       3.5        416–429                 274
United Arab Emirates        411       2.8        406–417                 347
Montenegro                  407       1.2        404–409                 300
Uruguay                     403       3.5        397–410                 322
Bulgaria                    402       5.1        392–412                 351
Colombia                    399       3.5        392–406                 300

Figure 3.1  Mean scores and distribution of students' performance on the problem-solving scale


Each country's results are represented in the original figure as horizontal bars. At the left end of the bar is the 5th percentile, the score below which 5% of students scored. The next two lines indicate the 10th and 25th percentiles. The line at the left of the white band is the lower limit of the confidence interval for the mean; there is 95% confidence that the mean lies within this white band. The line in the centre of the white band is the mean, and the lines to the right of the white band indicate the 75th, 90th and 95th percentiles.

Students' problem-solving competencies across countries

PISA uses proficiency levels to give further meaning to students' capabilities in problem solving. Six levels of proficiency have been defined in problem solving, ranging from Level 1 (the lowest) to Level 6 (the highest). A seventh category, below Level 1, includes those students who are unable to successfully complete many of the items of Level 1 difficulty.

The average proportion of students at each problem-solving proficiency level in each country is presented in Figure 3.2. Countries are ordered by the percentage of students classified as below Level 2, the internationally assigned baseline benchmark: countries with the lowest proportion of students below Level 2 are placed at the top of the figure, and countries with the highest proportion are placed at the bottom.

Students who achieved a score of 683 points or higher were placed at the highest proficiency level, Level 6. These students are highly proficient problem solvers who can develop complete, coherent mental models of diverse problem scenarios, enabling them to solve complex problems efficiently. On average, 3% of students across OECD countries performed at this level. In Singapore, one in 10 students, and in Korea, around one in 12 students, were highly skilled problem solvers; these two highest performing countries had the greatest proportions of students at Level 6. In the other countries that performed significantly higher than Australia (Japan, Macao–China, Hong Kong–China, Shanghai–China and Chinese Taipei), between 3% and 5% of students achieved Level 6. Australia, along with Finland, was among the countries with the next greatest proportions of students at Level 6, with 4%. In nine countries (Brazil, Bulgaria, Chile, Colombia, Malaysia, Montenegro, Turkey, the United Arab Emirates and Uruguay), less than 1% of students performed at Level 6.

Students proficient at Level 5 were able to systematically explore a complex problem scenario to gain an understanding of how relevant information is structured and, when faced with unfamiliar, moderately complex devices, could respond quickly to feedback in order to control the device. Across OECD countries, 12% of students were proficient at Level 5 or higher; these students are considered top performers. In Singapore, 30% of students were top performers, compared with 28% in Korea and 22% in Japan. Almost one in five students were top performers in Hong Kong–China and Chinese Taipei, while 18% of students in Shanghai–China, 17% in Macao–China and Canada, and 16% in Australia achieved these levels. In the lower-performing countries of Colombia, Uruguay, Montenegro, Bulgaria and Malaysia, less than 2% of students were top performers.

[Figure 3.2 shows, for each country, a stacked horizontal bar giving the percentage of students at each proficiency level from below Level 1 to Level 6, with countries ordered by the percentage of students below Level 2.]

Note: In cases in which the proportion of students in a proficiency level is 1% or less, the level still appears in the figure but the numeric label does not. This convention has been used for all figures about proficiency levels in this chapter.

Figure 3.2  Percentage of students across the problem-solving proficiency scale, by country

Students proficient at Level 4 were able to explore a moderately complex problem scenario in a focused way. They could understand the links among the components of the scenario that were required to solve the problem, and could control moderately complex digital devices, although not always efficiently. On average across the OECD, one in three students was proficient at Level 4 or higher. In Singapore, Korea and Japan, more than one in two students could complete Level 4 tasks, and almost one in two students in the other top-performing Asian countries performed at Level 4 or higher. In Australia and Finland, 39% of students achieved Level 4 or higher, a similar proportion to Canada (40%). The countries with the lowest mean scores had fewer than one in 10 students at these levels.

Students proficient at Level 3 were able to handle information presented in several different formats. They could explore a problem scenario and infer simple relationships among its components. They could control simple digital devices, but had trouble with more complex devices. Across OECD countries, more than half (58%) of students were proficient at Level 3 or higher. In those countries performing significantly higher than Australia, more than 70% of students performed at Level 3 or higher. In Australia, 65% of students achieved a proficiency of at least Level 3, which was similar to Canada and Finland (66%), and England (64%).

Students proficient at Level 2 were able to explore an unfamiliar problem scenario and understand a small part of it. They were only partially successful in understanding and controlling digital devices with unfamiliar controls. At this level of proficiency, students were able to engage with everyday problems, make progress towards solving them and sometimes solve them successfully. Across OECD countries, 80% of students were proficient at Level 2 or higher. Level 2 is considered a baseline level of proficiency, at which students begin to demonstrate the problem-solving competencies that will enable them to participate effectively and productively in today’s society. The majority of students (93% or higher) in Korea, Macao–China, Singapore and Japan, and 89% of students in Hong Kong–China, Chinese Taipei and Shanghai–China, were placed at Level 2 or higher. In Australia, England, France and Italy, 84% of students were proficient at Level 2 or higher. In the United Arab Emirates, Bulgaria, Montenegro, Uruguay and Colombia, less than one in two students achieved a proficiency of Level 2 or higher.

Students proficient at Level 1 were able to explore a problem scenario, but only in a limited way and only when they had encountered similar situations before. Students at Level 1 were able to solve straightforward problems provided there was a simple condition to be satisfied and only one or two steps needed to be performed to reach the solution. Students who scored less than 358 points were placed below Level 1. Although there were insufficient items to fully describe the proficiencies of students placed below Level 1, some of these students were able to use an unsystematic strategy to solve a simple problem in a familiar context, and they may be able to find solutions when a limited number of well-defined possibilities are presented. Students who are placed below Level 1 are limited in their ability to solve problems. On average, one-fifth of students across the OECD were placed at Level 1 or below. Less than one in 10 students from Singapore, Korea, Japan and Macao–China were placed at these low proficiency levels. Students placed at Level 1 or below Level 1 are considered low performers. In Australia, 16% of students were low performers, a similar proportion to Canada and Finland (15%), England and France (17%) and the United States (19%).
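To make the banding concrete, the sketch below maps a score to a proficiency level. Only the outer cut-points are quoted in this chapter (683 points for Level 6 and 358 points as the floor of Level 1); the intermediate boundaries in the sketch are assumed to be evenly spaced between these two values, so they are illustrative rather than the published cut-points.

```python
def proficiency_level(score: float) -> str:
    """Map a problem-solving score to a PISA proficiency level.

    The chapter quotes only the outer cut-points (below 358 = below
    Level 1; 683 and above = Level 6). The intermediate boundaries used
    here are assumed to be evenly spaced between those two values.
    """
    cut_points = [358, 423, 488, 553, 618, 683]  # lower bounds of Levels 1-6
    level = sum(score >= c for c in cut_points)
    return f"Level {level}" if level else "Below Level 1"

print(proficiency_level(350))  # Below Level 1
print(proficiency_level(500))  # Level 3
print(proficiency_level(700))  # Level 6
```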

Problem-solving performance by sex across countries

On average across OECD countries, males scored significantly higher than females in problem solving, by six score points, and significant differences favoured males in more countries than they favoured females (Figure 3.3, p. 30).


[Figure 3.3 shows, for each country, the female and male mean scores with bars indicating the size of the sex difference and whether it is significant. Countries are ordered from the largest difference in favour of females to the largest difference in favour of males. The reconstructed values are:]

Country | Females mean score (SE) | Males mean score (SE)
United Arab Emirates | 424 (3.2) | 398 (4.6)
Bulgaria | 410 (5.3) | 394 (5.8)
Cyprus | 449 (2.0) | 440 (1.8)
Finland | 526 (2.6) | 520 (2.8)
Montenegro | 409 (1.8) | 404 (1.8)
Slovenia | 478 (2.2) | 474 (2.1)
Sweden | 493 (3.1) | 489 (3.7)
Norway | 505 (3.8) | 502 (3.6)
Poland | 481 (4.6) | 481 (4.9)
Spain | 476 (4.1) | 478 (4.8)
Australia | 522 (2.2) | 524 (2.4)
United States | 506 (4.2) | 509 (4.2)
Hungary | 457 (4.3) | 461 (5.0)
France | 509 (3.5) | 513 (4.0)
Estonia | 513 (2.6) | 517 (3.3)
Netherlands | 508 (4.5) | 513 (4.9)
Ireland | 496 (3.2) | 501 (4.8)
Canada | 523 (2.5) | 528 (2.8)
England | 514 (4.6) | 520 (5.4)
Israel | 451 (4.1) | 457 (8.9)
Germany | 505 (3.7) | 512 (4.1)
OECD average | 497 (0.7) | 503 (0.8)
Czech Republic | 505 (3.5) | 513 (3.9)
Malaysia | 419 (4.0) | 427 (3.9)
Belgium | 504 (3.1) | 512 (3.1)
Russian Federation | 485 (3.7) | 493 (3.9)
Singapore | 558 (1.7) | 567 (1.8)
Denmark | 492 (2.9) | 502 (3.7)
Macao–China | 535 (1.3) | 546 (1.5)
Uruguay | 398 (3.8) | 409 (4.0)
Austria | 500 (4.1) | 512 (4.4)
Chinese Taipei | 528 (4.1) | 540 (4.5)
Korea | 554 (5.1) | 567 (5.1)
Chile | 441 (3.7) | 455 (4.5)
Hong Kong–China | 532 (4.8) | 546 (4.6)
Serbia | 466 (3.2) | 481 (3.8)
Turkey | 447 (4.6) | 462 (4.3)
Croatia | 459 (4.0) | 474 (4.8)
Portugal | 486 (3.6) | 502 (4.0)
Italy | 500 (4.5) | 518 (5.2)
Japan | 542 (3.0) | 561 (4.1)
Slovak Republic | 472 (4.1) | 494 (4.2)
Brazil | 418 (4.6) | 440 (5.4)
Shanghai–China | 524 (3.8) | 549 (3.4)
Colombia | 385 (3.9) | 415 (4.1)

Figure 3.3  Mean scores and differences between sexes in students’ performance on the problem-solving scale, by country

Males significantly outperformed females in approximately half the countries. The largest differences in favour of males were found in Colombia, Shanghai–China, Brazil and the Slovak Republic, with males scoring between 22 and 30 score points higher than females. In five countries (11% of those participating), females performed significantly higher than males, with the largest differences found in the United Arab Emirates (26 score points) and Bulgaria (16 score points). Australian males achieved a mean score of 524 points, which was not significantly different from the mean score of 522 points for females.

The average proportion of females and males at each level on the problem-solving scale for Australia and the OECD average is shown in Figure 3.4. There were higher proportions of males than females at Level 5 or 6: in Australia, 18% of males compared to 16% of females, and across the OECD, 13% of males compared to 10% of females, were top performers.
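As a rough illustration of how such sex differences might be tested, the sketch below applies a simple z-test to the means and standard errors reported in Figure 3.3. This is a simplification: PISA estimates the standard error of the difference with replicate weights, which account for females and males being sampled from the same schools, rather than treating the two groups as independent as is done here.

```python
import math

def sex_gap_z(mean_f, se_f, mean_m, se_m):
    """Approximate z statistic for a male-female mean difference.

    Simplification: treats the two group means as independent samples;
    PISA's published standard errors for the difference use replicate
    weights instead.
    """
    diff = mean_m - mean_f
    se_diff = math.sqrt(se_f**2 + se_m**2)
    return diff, diff / se_diff

# Values from Figure 3.3; |z| > 1.96 corresponds to significance at 5%
for country, (mf, sf, mm, sm) in {
    "Australia": (522, 2.2, 524, 2.4),   # small gap, not significant
    "Colombia": (385, 3.9, 415, 4.1),    # large gap, significant
}.items():
    diff, z = sex_gap_z(mf, sf, mm, sm)
    print(f"{country}: difference {diff:+.0f}, z = {z:.1f}")
```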

The proportion of males and females who failed to reach Level 2 was similar. In Australia, 16% of males compared to 15% of females failed to reach Level 2, while across the OECD the proportion of males and females was the same at 22%.


Figure 3.4  Percentage of students across the problem-solving proficiency scale by sex, for Australia and the OECD average

Australia’s problem-solving performance in a national context

Problem-solving performance across the Australian jurisdictions

Figure 3.5 (p. 32) shows the mean scores and distribution of problem-solving scores for each jurisdiction. The mean score and distribution for Australia and for the highest performing country (Singapore) have also been included for comparison. Mean scores ranged from 528 points in Western Australia to 490 points in Tasmania. The Northern Territory and Tasmania showed the widest distributions of scores, with ranges of 364 and 349 score points respectively between the lowest and highest performing students. South Australia, with 305 score points, had the narrowest range between the 5th and 95th percentiles.


Jurisdiction | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
ACT | 526 | 3.7 | 518–533 | 338
NSW | 525 | 3.5 | 518–532 | 328
VIC | 523 | 4.1 | 515–531 | 311
QLD | 522 | 3.4 | 515–529 | 317
SA | 520 | 4.1 | 512–528 | 305
WA | 528 | 4.0 | 520–536 | 314
TAS | 490 | 4.0 | 482–498 | 349
NT | 513 | 7.9 | 497–529 | 364
Australia | 523 | 1.9 | 519–527 | 320
Singapore | 562 | 1.2 | 560–565 | 312
OECD average | 500 | 0.7 | 499–501 | 314

Figure 3.5  Mean scores and distribution of students’ performance on the problem-solving scale, by jurisdiction

Table 3.1 is a multiple-comparison table that provides further details about the performance of each jurisdiction compared to the others. Almost all jurisdictions performed at levels not significantly different from one another: Western Australia, the Australian Capital Territory, New South Wales, Victoria, Queensland, South Australia and the Northern Territory were on a par with each other, while Tasmania performed significantly lower than all other jurisdictions. Six jurisdictions (Western Australia, the Australian Capital Territory, New South Wales, Victoria, Queensland and South Australia) performed significantly higher than the OECD average, the Northern Territory performed at a level not significantly different from the OECD average, and Tasmania performed significantly lower than the OECD average.

Table 3.1  Multiple comparisons of mean problem-solving performance, by jurisdiction

Jurisdiction (mean score, SE) | WA | ACT | NSW | VIC | QLD | SA | NT | TAS | OECD average
WA (528, 4.0) | – | ● | ● | ● | ● | ● | ● | ▲ | ▲
ACT (526, 3.7) | ● | – | ● | ● | ● | ● | ● | ▲ | ▲
NSW (525, 3.5) | ● | ● | – | ● | ● | ● | ● | ▲ | ▲
VIC (523, 4.1) | ● | ● | ● | – | ● | ● | ● | ▲ | ▲
QLD (522, 3.4) | ● | ● | ● | ● | – | ● | ● | ▲ | ▲
SA (520, 4.1) | ● | ● | ● | ● | ● | – | ● | ▲ | ▲
NT (513, 7.9) | ● | ● | ● | ● | ● | ● | – | ▲ | ●
TAS (490, 4.0) | ▼ | ▼ | ▼ | ▼ | ▼ | ▼ | ▼ | – | ▼
OECD average (500, 0.7) | ▼ | ▼ | ▼ | ▼ | ▼ | ▼ | ● | ▲ | –

▲ Average performance significantly higher than in comparison jurisdiction
● No statistically significant difference from comparison jurisdiction
▼ Average performance significantly lower than in comparison jurisdiction

Note: Read across the row to compare a jurisdiction’s performance with the performance of each jurisdiction listed in the column heading.

Figure 3.6 shows the average proportion of students at each problem-solving proficiency level by jurisdiction. Five per cent of students from three jurisdictions (the Australian Capital Territory, New South Wales and the Northern Territory) achieved Level 6, half the proportion of students in Singapore at this level. In the other jurisdictions, 3% or 4% of students achieved Level 6, which was higher than the 2% of students across the OECD. The proportion of top performers across the jurisdictions ranged from 11% in Tasmania to 19% in the Australian Capital Territory; all jurisdictions except Tasmania had higher proportions of top performers than the OECD average (11%). Just over one-quarter (27%) of students in Tasmania and one-fifth (21%) of students in the Northern Territory were low performers, compared to 16% of students in the Australian Capital Territory and Queensland, and 15% of students in New South Wales, Victoria and South Australia. The smallest proportion of low performers was in Western Australia, with 13%.


Figure 3.6  Percentage of students across the problem-solving proficiency scale, by jurisdiction

Problem-solving performance by sex across the Australian jurisdictions

Overall, no significant sex differences in problem solving were found in Australia. Figure 3.7 (p. 34) shows that only one jurisdiction, Western Australia, recorded a significant difference between the sexes, with males performing significantly higher than females. In Western Australia, males achieved a mean score of 537 points, on average 18 score points higher than females; this difference was three times the OECD average difference (6 score points).

Figure 3.8 (p. 34) shows the proportion of males and females at each problem-solving proficiency level by jurisdiction and across the OECD. Generally, there were higher proportions of males than females among both top performers (those achieving Level 5 or 6) and low performers (those who failed to reach Level 2). The proportion of top-performing males ranged from 12% in Tasmania to 22% in the Northern Territory, and the proportion of top-performing females ranged from 11% in Tasmania to 17% in the Australian Capital Territory and New South Wales. All jurisdictions except Tasmania had a higher proportion of top-performing males than the OECD average (13%), while all jurisdictions had a higher proportion of top-performing females than the OECD average (10%).


Jurisdiction | Females mean score (SE) | Males mean score (SE)
ACT | 529 (4.9) | 522 (5.9)
TAS | 491 (5.5) | 489 (5.4)
SA | 521 (4.8) | 519 (4.7)
NSW | 525 (4.1) | 525 (5.1)
QLD | 521 (4.2) | 523 (4.1)
VIC | 522 (4.5) | 524 (4.9)
NT | 507 (11.1) | 519 (10.1)
WA | 519 (5.6) | 537 (5.5)

[Only the difference for Western Australia was statistically significant.]

Figure 3.7  Mean scores and differences in students’ performance on the problem-solving scale, by jurisdiction and sex


The difference between the proportions of top-performing males and females was larger in two jurisdictions, Western Australia and the Northern Territory, where between 5% and 10% more males than females reached these high proficiency levels. In the other jurisdictions, the difference was smaller, at 1% or 2%.


Figure 3.8  Percentage of students across the problem-solving proficiency scale, by jurisdiction and sex


The proportion of low-performing males ranged from 13% in Western Australia to 29% in Tasmania, and the proportion of low-performing females ranged from 14% in the Australian Capital Territory, New South Wales and South Australia to 24% in Tasmania. The proportion of low-performing males in Tasmania and the Northern Territory was higher than the OECD average (22%), while the proportion of low-performing females was higher in Tasmania than across the OECD (22%). The difference between low-performing males and females was 5% in Tasmania and 4% in the Australian Capital Territory, and smaller (3% or less) in other jurisdictions.

Problem-solving performance by geographic location of school

Australian schools in PISA were assigned to one of three geographic locations (metropolitan, provincial and remote)1 so that students’ performance in each of these groups could be investigated. Figure 3.9 shows the mean problem-solving scores for the three geographic location categories, along with the standard error, confidence intervals and the distribution of scores. Students in metropolitan schools achieved a mean score of 528 points, which was significantly higher than the mean scores for students in provincial schools (510 score points) and remote schools (475 score points). Compared to the OECD average, students in metropolitan and provincial schools performed significantly higher, while students in remote schools performed significantly lower, by 25 score points on average. The difference between students in metropolitan and remote schools was 53 score points, almost one proficiency level, which was larger than the difference between students in provincial and remote schools (35 score points) or between students in metropolitan and provincial schools (18 score points).

The spread of scores between the lowest and highest performing students was widest for students in remote schools, followed by students in metropolitan schools, whose spread was the same as across the OECD. Students in provincial schools had the narrowest range of problem-solving scores. At the 5th percentile, students in remote schools performed about one proficiency level lower than students in metropolitan or provincial schools. At the 95th percentile, students in remote schools performed about half a proficiency level lower than students in provincial schools and about three-quarters of a proficiency level lower than students in metropolitan schools.

Geographic location | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
Metropolitan | 528 | 2.4 | 524–533 | 320
Provincial | 510 | 3.2 | 504–517 | 314
Remote | 475 | 16.4 | 443–508 | 344

Figure 3.9  Mean scores and distribution of students’ performance on the problem-solving scale, by geographic location

1  Using the MCEECDYA Schools Geographic Location Classification. The Reader’s Guide provides more information about the MCEECDYA Schools Geographic Location Classification.


Figure 3.10 shows that higher proportions of students in metropolitan schools (18%) and provincial schools (12%) were top performers compared to students in remote schools (9%). At the lower end of the proficiency scale, a higher proportion of students in remote schools (30%) were low performers compared to students in metropolitan schools (15%) and provincial schools (18%). More than twice as many students in remote schools were placed below Level 1 as in metropolitan and provincial schools.


Figure 3.10  Percentage of students across the problem-solving proficiency scale, by geographical location

Problem-solving performance by Indigenous background

Details about students’ Indigenous background were derived from information provided by the school.2 Figure 3.11 shows that the performance of Indigenous students in problem solving was significantly lower than that of non-Indigenous students. Indigenous students achieved a mean score of 454 points, 72 score points below the mean score of 526 points achieved by non-Indigenous students; this difference is equivalent to more than one proficiency level. Indigenous students also performed significantly lower than students across the OECD, by 46 score points on average, while non-Indigenous students performed significantly higher. Although the spread of scores between the highest and lowest performing students was similar for Indigenous (314 score points) and non-Indigenous students (317 score points), the lowest performing Indigenous students scored 294 points on average compared with 362 points for the lowest performing non-Indigenous students. Indigenous students’ scores at each of the other percentiles were also lower than the corresponding scores for non-Indigenous students.

Indigenous background | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
Indigenous | 454 | 4.2 | 446–462 | 314
Non-Indigenous | 526 | 1.9 | 522–529 | 317

Figure 3.11  Mean scores and distribution of students’ performance on the problem-solving scale, by Indigenous background

Figure 3.12 provides details about Indigenous and non-Indigenous students’ proficiencies in problem solving. Four per cent of Indigenous students were top performers in problem solving, compared to 18% of non-Indigenous students. Less than 1% (0.6%) of Indigenous students achieved the highest proficiency level (Level 6), considerably lower than the 5% of non-Indigenous students or the 3% of students across the OECD who achieved this level.

2  The Reader’s Guide provides more information about the definition of Indigenous background.


Thirty-seven per cent of Indigenous students were low performers, compared to 15% of non-Indigenous students; these low-performing students will be limited in their capacity to solve problems. The proportion of low-performing Indigenous students was almost twice that of students across the OECD (21%). There were almost half as many Indigenous students as non-Indigenous students who achieved Level 4, while similar proportions of Indigenous and non-Indigenous students achieved Level 2 or 3 (47% and 45% respectively).


Figure 3.12   Percentage of students across the problem-solving proficiency scale, by Indigenous background

Problem-solving performance by sex and Indigenous background

As shown in Figure 3.13, there were no significant differences between the performance of Indigenous females and males in problem solving. This was also the case for non-Indigenous females and males.

Indigenous background | Females mean score (SE) | Males mean score (SE)
Indigenous | 456 (4.9) | 452 (5.8)
Non-Indigenous | 524 (2.2) | 527 (2.5)

Figure 3.13  Mean scores and distribution of students’ performance on the problem-solving scale, by Indigenous background and sex

There were similar proportions of Indigenous females (3%) and males (4%) who were top performers in problem solving (Figure 3.14, p. 38). The proportions of top-performing non-Indigenous females and males were also similar (16% and 18% respectively). Thirty-five per cent of Indigenous females and 39% of Indigenous males were low performers, while 14% of non-Indigenous females and 15% of non-Indigenous males were low performers. For Indigenous students, there were similar proportions of top-performing females and males, but there was a higher proportion of males who were low performers compared to females. For non-Indigenous students, the proportion of top-performing females and males was similar, while for the low-performing students there were slightly more males than females.



Figure 3.14  Percentage of students across the problem-solving proficiency scale, by Indigenous background and sex

Problem-solving performance by socioeconomic background

Socioeconomic background in PISA is measured by the index of Economic, Social and Cultural Status (ESCS), which captures the wider aspects of a student’s family and home background.3 Figure 3.15 shows the positive relationship between socioeconomic background and student performance. Students in the highest socioeconomic quartile achieved a mean score of 560 points, 73 score points higher on average than students in the lowest socioeconomic quartile; this difference represents more than one proficiency level on the problem-solving proficiency scale. The difference between each socioeconomic quartile and the next was significant, at around 25 score points on average, or around one-third of a proficiency level. Students in the two highest socioeconomic quartiles showed a similar spread of scores between the lowest and highest performing students, slightly narrower than the spreads for students in the two lowest socioeconomic quartiles.

Socioeconomic background | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
Lowest quartile | 487 | 2.5 | 482–492 | 313
Second quartile | 512 | 2.4 | 507–516 | 307
Third quartile | 538 | 2.9 | 532–543 | 301
Highest quartile | 560 | 2.5 | 555–565 | 302

Figure 3.15  Mean scores and distribution of students’ performance on the problem-solving scale, by socioeconomic background

3  The Reader’s Guide provides more information about socioeconomic background and the ESCS index.


Figure 3.16 shows that the proportion of top performers rose with each increase in socioeconomic quartile, while the proportion of low performers rose with each decrease in socioeconomic quartile. At the higher end of the proficiency scale, 9% of students in the lowest socioeconomic quartile were top performers, compared to 12% in the second quartile, 19% in the third quartile and 27% in the highest quartile. At the lower end of the proficiency scale, one-quarter of students in the lowest socioeconomic quartile were low performers, compared to 18% of students in the second quartile, 11% in the third quartile and 8% in the highest quartile.


Figure 3.16  Percentage of students across the problem-solving proficiency scale, by socioeconomic background

Problem-solving performance by immigrant background

Immigrant background was based on students’ self-reports of where they and their parents were born.4 Figure 3.17 shows that first-generation students achieved a mean score of 531 points, significantly higher than the mean scores for Australian-born students (523 points) and foreign-born students (517 points). Australian-born students’ performance in problem solving was not significantly different from that of foreign-born students. The range of scores was 315 points for Australian-born students, similar to the range for first-generation students (318 points), while the range for foreign-born students was wider, at 328 points.

Immigrant background | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
Australian-born | 523 | 2.1 | 519–527 | 315
First-generation | 531 | 2.9 | 525–536 | 318
Foreign-born | 517 | 3.6 | 510–524 | 328

Figure 3.17  Mean scores and distribution of students’ performance on the problem-solving scale, by immigrant background

4  The Reader’s Guide provides more information about the definition of immigrant background.


Figure 3.18 shows the proportions of students at each proficiency level on the problem-solving scale by immigrant background. The proportion of top performers among first-generation students (19%) was slightly higher than among Australian-born and foreign-born students (16%). Among low performers, almost one-fifth (18%) of foreign-born students failed to reach Level 2, compared with similar proportions of Australian-born and first-generation students (15% and 14% respectively).


Figure 3.18  Percentage of students across the problem-solving proficiency scale, by immigrant background

Problem-solving performance by language background

Students who spoke English as their main language at home performed significantly higher in problem solving (526 score points on average) than students whose main language at home was a language other than English (509 score points on average). Figure 3.19 shows that the spread of scores between the lowest and highest performing students was narrower for English speakers (316 score points) than for students who spoke a language other than English at home (341 score points).

Language background | Mean score | SE | Confidence interval | Difference between 5th and 95th percentiles
English spoken at home | 526 | 1.9 | 522–529 | 316
Language other than English spoken at home | 509 | 4.4 | 500–518 | 341

Figure 3.19  Mean scores and distribution of students’ performance on the problem-solving scale, by language background

Figure 3.20 shows that 18% of students who spoke English at home were top performers in problem solving, similar to the 16% of students who spoke a language other than English at home. One-fifth (21%) of students who spoke a language other than English at home were low performers, compared to 15% of students who spoke English at home.



Figure 3.20  Percentage of students across the problem-solving proficiency scale, by language background

Variations in problem-solving performance between and within schools

The variation in performance within a country can be divided into performance differences between students from the same school and performance differences between groups of students from different schools. Figure 3.21 (p. 42) shows the proportion of variance in achievement for each country, divided into the variation that occurs between schools (the performance variation attributable to differences in students’ results between schools) and the variation that occurs within schools (the performance variation among students that cannot be attributed to differences between schools). Across OECD countries, the variation in performance within schools was 61% of the OECD average total variance and the variation between schools was 38%. In Australia, the within-school variation was 75%, higher than the OECD average, while the between-school variation was 28%, lower than the OECD average.
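A minimal sketch of this decomposition, an unweighted one-way analysis of variance by school, is shown below. It ignores the sampling weights and the scaling to the average OECD total variance used in Figure 3.21, so it conveys the idea rather than reproducing the published computation.

```python
import numpy as np

def variance_components(scores, school_ids):
    """Split total score variance into between- and within-school parts.

    Simple unweighted decomposition: the between-school component is the
    spread of school means around the grand mean (weighted by school
    size), and the within-school component is the spread of students
    around their own school mean.
    """
    scores = np.asarray(scores, dtype=float)
    school_ids = np.asarray(school_ids)
    grand_mean = scores.mean()
    between = within = 0.0
    for school in np.unique(school_ids):
        s = scores[school_ids == school]
        between += len(s) * (s.mean() - grand_mean) ** 2
        within += ((s - s.mean()) ** 2).sum()
    total = between + within
    return between / total, within / total

# Toy example: two schools with different mean performance
scores = [480, 500, 520, 540, 560, 580]
schools = ["A", "A", "A", "B", "B", "B"]
print(variance_components(scores, schools))  # (share between, share within)
```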


[Figure 3.21 shows, for each country, the within-school and between-school variation in problem-solving performance, each expressed as a proportion of the average OECD total variance (within schools: OECD average 61%; between schools: OECD average 38%), with countries ordered from the largest between-school variation (Israel, Hungary, Bulgaria) to the smallest (Estonia, Sweden, Finland).]

Figure 3.21  Variation in problem-solving performance between and within schools, by country 5,6,7

5  Expressed as a percentage of the average variation in student performance across OECD countries.
6  Data for France is not available.
7  The variation within and between schools will not add up to 100 because it is a percentage of the average OECD total variance.


In Finland, there was little variation in performance between schools (10%). This was followed by Sweden, Estonia and Macao–China, where around 20% of the variation was due to differences in performance between schools. In these countries, students can expect to achieve similar results regardless of the school they attend. In countries with larger between-school variation, such as Hungary and Israel, where the between-school variance was 70% and 84% respectively, the school a student attends makes a substantial difference.

Figure 3.22 shows the proportions of between- and within-school variation in problem-solving performance for the jurisdictions. The variation in problem-solving performance observed between schools ranged from 19% in South Australia to 39% in Tasmania, suggesting that the school a student attends influences problem-solving performance more in Tasmania than in the other jurisdictions, where the proportions of between-school variation were smaller. Across jurisdictions, the variation in student performance observed within schools ranged from 72% in Victoria to 94% in the Northern Territory. This means that problem-solving scores within a Northern Territory school are more heterogeneous than scores within schools in jurisdictions where the within-school variation was smaller.

Variation between schools (as proportion of average OECD total)

TAS OECD average 61%

NT ACT

OECD average 38%

NSW QLD VIC WA SA Australia OECD average

100

80

60

40

20

0

20

40

60

80

100

Percentage of variation within and between schools

Figure 3.22  Variation in problem-solving performance between and within schools, by jurisdiction


Comparing students’ performance in problem solving with mathematics, science and reading

PISA assesses problem solving in two contexts. The regular assessments of mathematics, science and reading include problem-solving tasks that assess students’ abilities to apply the skills and knowledge they have learned in school, while the dedicated problem-solving assessment measures students’ general reasoning skills, focusing on the cognitive processes that are essential for successful problem solving. Performance in problem solving is expected to be positively correlated with performance in mathematics, science and reading; that is, students who do well in problem solving are likely to do well in the other literacy domains, and students who perform poorly in problem solving are likely to perform poorly in the other literacy domains.

The strength of the relationship between performance in the regular PISA assessments of mathematics, science and reading and performance in problem solving is shown in Table 3.2. The data in the table are latent correlations for the OECD average, where values closer to 0.00 indicate no relationship and values closer to 1.00 indicate a stronger positive relationship. Comparing the three literacy domains, the strongest correlation is between problem solving and mathematics (0.81) and the weakest is between problem solving and reading (0.75).

Table 3.2  Relationship between performance in problem solving, mathematics, science and reading across the OECD

                | Science | Mathematics | Reading
Problem solving | 0.78    | 0.81        | 0.75
Reading         | 0.88    | 0.85        |
Mathematics     | 0.90    |             |

Table 3.3 shows the corresponding correlations for Australia. Again, the largest correlation is between problem solving and mathematics and the smallest is between problem solving and reading. The relationships for Australia are slightly stronger than the corresponding OECD averages.

Table 3.3  Relationship between performance in problem solving, mathematics, science and reading for Australia

                | Science | Mathematics | Reading
Problem solving | 0.81    | 0.83        | 0.77
Reading         | 0.90    | 0.87        |
Mathematics     | 0.91    |             |

The correlations between problem solving and mathematics were 0.81 for Western Australia, 0.82 for the Australian Capital Territory, Queensland and South Australia, 0.83 for Tasmania and the Northern Territory, 0.84 for Victoria and 0.86 for New South Wales.

An analysis that relates the variation in problem-solving performance jointly to the variation in performance in mathematics, science and reading shows that the skills assessed in the problem-solving assessment were also used in a wide range of contexts. Across the OECD, 68% of the problem-solving variance reflected skills that were also measured in at least one of the three literacy domains regularly assessed in PISA; the remaining 32% reflected skills uniquely measured by the problem-solving assessment. Of the 68% of variation that problem solving shared with the other literacy domains, the largest proportion was shared with all three regular literacy assessment domains (62% of the total variation), about 5% was uniquely shared between problem solving and mathematics, and about 1% was based on skills specifically measured in the science and reading assessments.
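One way to approximate this kind of decomposition is to compare the explained variance (R²) from regressing problem-solving scores on all three domains with that from models dropping one domain at a time. The sketch below does this with ordinary least squares on simulated scores; it is illustrative only, since PISA works with plausible values and weighted latent correlations rather than a single observed score per student.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Hypothetical correlated scores, for illustration only
rng = np.random.default_rng(0)
ability = rng.normal(500, 90, size=2000)
maths   = ability + rng.normal(0, 40, size=2000)
science = ability + rng.normal(0, 45, size=2000)
reading = ability + rng.normal(0, 50, size=2000)
problem = ability + rng.normal(0, 55, size=2000)

full = r_squared(np.column_stack([maths, science, reading]), problem)
# Share uniquely associated with mathematics: drop in R^2 when it is removed
without_maths = r_squared(np.column_stack([science, reading]), problem)
print(f"shared with any domain: {full:.2f}")
print(f"uniquely mathematics:   {full - without_maths:.2f}")
print(f"residual (unique to problem solving): {1 - full:.2f}")
```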


Figure 3.23 shows that the association of problem-solving skills with performance in mathematics, science and reading was, in general, of similar strength across countries. In Colombia, the Russian Federation, Spain, Japan, Italy and Hong Kong–China, less than 60% of the problem-solving variance reflected skills that were also measured in the other literacy domains.

[For each country, Figure 3.23 splits the variance into: variation associated with more than one domain; variation uniquely associated with mathematics, reading or science performance; and residual (unexplained) variation.]

Figure 3.23  Variation in problem-solving performance associated with performance in mathematics, science and reading, by country


This indicates comparatively weak associations between the skills measured in the problem-solving assessment and performance in mathematics, science and reading. On the other hand, the Czech Republic, Chinese Taipei and Israel had comparatively stronger associations, with over 75% of the total variation explained. In Australia, 71% of the problem-solving variance reflected skills that were also measured in the regular PISA assessments, and approximately 30% of the variation in Australian problem-solving performance was uniquely measured by the problem-solving assessment. These findings suggest that problem-solving ability, as defined in PISA, is highly correlated with the core PISA literacy domains.

Figure 3.24 shows the variation in problem-solving performance that is associated with one or more of the three literacy domains and the variation that is associated only with problem-solving skills, by jurisdiction. The proportion of the problem-solving variance that reflected skills also measured in one of the three regular assessment domains ranged from 68% in Western Australia to 75% in New South Wales, while the proportion uniquely captured by the problem-solving assessment ranged from 25% in New South Wales to 32% in Western Australia. Western Australia thus had comparatively weak associations between the skills assessed in the problem-solving assessment and performance in mathematics, science and reading, while New South Wales had comparatively stronger associations.


Figure 3.24  Variation in problem-solving performance associated with performance in mathematics, science and reading, by jurisdiction

Relative performance in problem solving in Australia

By comparing the performance of students from one country with the average performance observed across participating countries at a given level of proficiency in mathematics, it is possible to infer whether students perform the same as, above or below students with similar mathematics proficiency. In Australia, England and the United States, the best students in mathematics also had excellent problem-solving skills, and these countries’ good performance in problem solving was mainly due to their strong performers in mathematics. This may suggest that, in these countries, top performers in mathematics have access to, and take advantage of, the kinds of learning opportunities that are also useful for improving their problem-solving skills (Figure 3.25).
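A sketch of this comparison is shown below. It fits a pooled curve of problem-solving scores against mathematics scores (a polynomial fit standing in for the OECD’s estimate of expected performance, the dotted line in Figure 3.25) and then averages a country’s observed-minus-expected gap. All data and the functional form here are illustrative assumptions.

```python
import numpy as np

def expected_curve(math_all, ps_all, degree=3):
    """Fit the pooled relationship between mathematics and problem solving.

    Stand-in for the dotted line in Figure 3.25: the average problem-solving
    score of students from all countries at each level of mathematics
    performance, approximated here by a polynomial fit.
    """
    return np.polynomial.Polynomial.fit(math_all, ps_all, deg=degree)

def relative_performance(curve, math_country, ps_country):
    """Average gap between a country's observed and expected scores."""
    return np.mean(ps_country - curve(np.asarray(math_country)))

# Hypothetical pooled and country-level data, for illustration only
rng = np.random.default_rng(2)
math_all = rng.normal(490, 95, 50000)
ps_all = 0.9 * math_all + 60 + rng.normal(0, 50, 50000)
math_aus = rng.normal(504, 96, 5000)
ps_aus = 0.9 * math_aus + 75 + rng.normal(0, 50, 5000)  # above expectation

curve = expected_curve(math_all, ps_all)
print(f"relative performance: {relative_performance(curve, math_aus, ps_aus):+.0f} points")
```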


[Figure 3.25 plots problem-solving scores against mathematics scores, both from 300 to 700 points.]

Notes: The dotted line shows the average performance in problem solving, across students from all participating countries, at different levels of performance in mathematics. The continuous line shows the pattern of relative performance in problem solving.

Figure 3.25  Relative performance in problem solving at different levels on the mathematics scale for Australia, England and the United States

In Japan, Korea and Italy, good performance in problem solving was, to a large extent, due to lower performing students scoring beyond expectations in the problem-solving assessment. In Singapore, by contrast, students’ average performance in problem solving was similar to their average performance in mathematics across all levels. There were similar differences among countries with overall weak performance in problem solving relative to their students’ performance in mathematics. In several countries (Bulgaria, Colombia, Croatia, Denmark, Estonia, Germany, Hungary, Ireland, Israel, the Netherlands, Slovenia, Spain and the United Arab Emirates), specific difficulties in problem solving were most apparent among students with poor mathematics skills, while students with strong mathematics skills often performed on or close to par with students in other countries. In Austria, Belgium, Malaysia, Montenegro, Poland, Shanghai–China, Singapore, the Slovak Republic and Uruguay, weak performance in problem solving relative to mathematics performance was spread similarly across all proficiency levels.


CHAPTER 4

Students’ strengths and weaknesses in problem solving

Chapter 3 described student performance on the PISA 2012 problem-solving scale. This chapter examines problem-solving performance by taking a closer look at the problem-solving items, analysing how students interacted with the test items to identify comparative strengths and weaknesses between countries and between different social groups. Whereas Chapter 3 reported results on the overall problem-solving scale, this chapter focuses on the problem-solving aspects, analysing how students performed on the problem-solving processes, the nature of the problem situation and the response formats, and identifying the skills that some students master better than others. The final part of this chapter takes two of the problem-solving framework aspects, the nature of the problem situation and the problem-solving processes, and groups the countries, the Australian jurisdictions and different social groups within Australia by their strengths and weaknesses in problem solving.


Reporting students’ strengths and weaknesses in problem solving

PISA reports the performance of all students on the problem-solving assessment on an overall scale. While this approach has many advantages, it can hide interesting differences in patterns of performance at lower levels of aggregation, that is, on single items or on subsets of items. To look at patterns of performance at a more detailed level than the overall problem-solving scale, the approach used here was to examine the unscaled responses of the students who were administered each item.1 Internationally, average percentages of correct responses are computed at the country level, where a correct response is taken as a full-credit answer and nonreached2 items are taken as incorrect. For a group of items, the simple average of the percentages for the relevant items is used. The Australian results considered in this chapter are calculated over the relevant groups of students, such as jurisdictions or students of a particular Indigenous background.

Across countries, a measure of the difficulty of a set of items is its average percentage correct; the average across the OECD was used as the basis for international comparison. The relative difficulty of two distinct sets of items can be established by comparing their average percentages. So, by comparing the percentage correct across two sets of items and across countries, the relative strengths and weaknesses of each country can be identified. For each subset of items and for each country, the result of this comparison is reported as an odds ratio, where a ratio of 1 indicates that the pattern of performance across items is in line with the average OECD pattern of performance. A ratio of more than 1 indicates that the items in the subset were relatively easier for students of a particular country than for students across OECD countries, after accounting for overall differences in performance. Conversely, a ratio of less than 1 means that students in a particular country found these items relatively harder. For the within-Australia comparisons in this report, the overall results for Australia, rather than the results across the OECD, were used as the basis of comparison. As such, an odds ratio of more than 1 would indicate, for example, that Indigenous students found a set of items relatively easier than students in Australia as a whole, after accounting for their overall performance. In the international results, significant differences from the OECD averages are shown at the 5% level; for the Australian results, differences from the Australian averages are shown at a moderate significance level (the 10% level), as well as at the 5% level. (A sketch of the odds-ratio comparison follows the footnotes below.)

The purpose of this chapter is to provide a profile of students’ strengths and weaknesses in problem solving; however, it is important to place a caveat around the results. Performance has been compared to the OECD average at the international level, and to the Australian average at the national level, to identify comparative strengths and weaknesses. These averages were selected for pragmatic reasons, and it could be argued that the appropriate balance between the various aspects of problem-solving competence is different from the one reflected in these analyses. Also, the results were based on a small number of items: there were 42 items in the problem-solving assessment and, in some cases, analyses were based on small sets of items.

1  Students only completed a subset of items from the whole problem-solving item pool.
2  These are items that students have not attempted.
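The odds-ratio comparison described above can be sketched as follows. This is an illustrative formulation built from the description in the text, not the exact published computation, and it omits the significance testing applied to the published ratios.

```python
def odds(p):
    """Convert a proportion correct into odds."""
    return p / (1 - p)

def relative_odds_ratio(country_subset, country_rest, oecd_subset, oecd_rest):
    """Odds ratio for a subset of items, net of overall performance.

    Compares a country's odds of full credit on the subset relative to the
    remaining items, against the same contrast computed across the OECD.
    A value above 1 suggests the subset was relatively easier for the
    country; below 1, relatively harder.
    """
    country_contrast = odds(country_subset) / odds(country_rest)
    oecd_contrast = odds(oecd_subset) / odds(oecd_rest)
    return country_contrast / oecd_contrast

# Hypothetical percent-correct figures, for illustration only
print(relative_odds_ratio(0.62, 0.55, 0.48, 0.50))  # > 1: relative strength
```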


Students’ strengths and weaknesses in problem-solving processes

The main cognitive processes involved in solving a problem were outlined in Chapter 2: exploring and understanding; representing and formulating; planning and executing; and monitoring and reflecting. Each item in the PISA 2012 problem-solving assessment was classified to reflect its main demand, although other processes may also have been involved in solving a particular item.

Strengths and weaknesses on problem-solving tasks by problem-solving processes, across countries3

Figure 4.1 summarises countries’ strengths and weaknesses in the problem-solving processes. There is a pattern of high-performing countries performing relatively stronger on the exploring and understanding and the representing and formulating processes, and relatively weaker on the planning and executing and the monitoring and reflecting processes; the lower performing countries generally show the opposite pattern. There is some variation in this, but the effect seems quite clear. The middle-ranked countries show no definite pattern, though some show a significant difference from the OECD average on only one or none of the problem-solving processes.

Australian students are comparatively stronger on the exploring and understanding and the representing and formulating processes, and relatively weaker on the planning and executing process. In terms of problem-solving skills, Australian students are good at generating new knowledge and can be characterised as quick learners (questioning their knowledge, challenging assumptions, and generating and experimenting with alternatives) and good at abstract information processing. Students who are typically good at planning and executing processes use the knowledge they have, and are characterised as goal-driven and persistent. This is an area where Australian students’ skills could be improved: they need to be able to use their knowledge to devise a plan and execute it in order to solve a problem.

3  These comparisons take into account the countries’ overall performance.


[Figure 4.1 lists each country with its mean problem-solving score, ordered from Singapore (562) to Colombia (399), and shades each of the four processes (exploring and understanding; representing and formulating; planning and executing; monitoring and reflecting) to show whether performance on that process was stronger than expected, not significantly different from expected, or weaker than expected.]

Figure 4.1  Relative strengths and weaknesses in problem-solving processes, by countries


Strengths and weaknesses on problem-solving tasks by problem-solving processes, within Australia4

The highest performing jurisdiction, Western Australia, performed relatively better on the exploring and understanding process than Australia as a whole; the opposite was true for New South Wales, whose students performed relatively weaker on this process. Queensland performed stronger than expected on the representing and formulating process and moderately weaker on the planning and executing process, while New South Wales performed relatively stronger on the representing and formulating process. Overall, Figure 4.2 shows no clear patterns of differential performance between the jurisdictions.

[Figure: jurisdictions are listed with their mean problem-solving scores (WA 528, ACT 526, NSW 525, VIC 523, QLD 522, SA 520, NT 513, TAS 490). For each jurisdiction, cells for the four processes show whether performance was stronger than expected, moderately stronger, not significantly different, moderately weaker, or weaker than expected.]

Figure 4.2  Relative strengths and weaknesses in problem-solving processes, by jurisdictions

Australia's results across the problem-solving processes were disaggregated into national subgroups, as presented in Figure 4.3. Females showed a strong positive bias on the monitoring and reflecting process and a moderate positive bias on the planning and executing process, but were weaker on the representing and formulating process. Indigenous students performed relatively weaker on the exploring and understanding process, while non-Indigenous students performed relatively stronger on it; this mirrors the international results, where lower performing countries were also relatively weaker on this process. There were no clear patterns across the problem-solving processes for geographic location, despite large differences in overall performance. Interestingly, students in the lowest quartile of socioeconomic background performed relatively stronger on the planning and executing process and relatively weaker on the exploring and understanding process; the opposite pattern was observed for students in the highest quartile.

4 These comparisons take into account Australia's overall performance.


[Figure: Australian subgroups are listed with their mean problem-solving scores: sex (females 522, males 524); Indigenous background (Indigenous 454, non-Indigenous 526); geographic location (metropolitan 528, provincial 510, remote 475); socioeconomic background (lowest quartile 487, second 512, third 538, highest 560); immigrant status (Australian-born 523, first-generation 531, foreign-born 517); and language at home (English 526, language other than English 509). For each subgroup, cells for the four processes show whether performance was stronger than expected, moderately stronger, not significantly different, moderately weaker, or weaker than expected.]

Figure 4.3  Relative strengths and weaknesses in problem-solving processes, by different social groups for Australia

Students' strengths and weaknesses in the nature of the problem situation

In the problem-solving framework, items have been classified by how information about a problem is presented. Information about the problem that is disclosed at the outset is considered static, while information that is discovered as students explore the problem is considered interactive.

Strengths and weaknesses on problem-solving tasks by the nature of the problem situation, across countries

Unlike the pattern that appeared for the problem-solving processes, it is difficult to discern any pattern of relative strength or weakness by overall performance when results are examined by the nature of the problem situation. About half the countries show no significant bias, and where an effect exists, the two directions occur about equally often, with a slightly higher number of lower performing countries showing a preference for static tasks (Figure 4.4).


[Figure: countries are listed in descending order of mean problem-solving score, from Singapore (562) to Colombia (399). For each country, cells for static tasks and interactive tasks show stronger-than-expected performance, a non-significant strength or weakness, or weaker-than-expected performance on that type of task.]

Figure 4.4  Relative strengths and weaknesses on problem-solving tasks by the nature of the problem situation, across countries


Strengths and weaknesses on problem-solving tasks by the nature of the problem situation, within Australia

As in the international results, no clear pattern emerged for the Australian jurisdictions, although the lowest performing jurisdiction, Tasmania, performed stronger on static tasks, a pattern that occurred somewhat more frequently among lower performing countries. Queensland was the only jurisdiction to perform relatively stronger on interactive tasks (Figure 4.5).

[Figure: jurisdictions are listed with their mean problem-solving scores (ACT 526, NSW 525, VIC 523, QLD 522, SA 520, WA 528, TAS 490, NT 513). For each jurisdiction, cells for static tasks and interactive tasks show whether performance was stronger than expected, moderately stronger, not significantly different, moderately weaker, or weaker than expected.]

Figure 4.5  Relative strengths and weaknesses on problem-solving tasks by the nature of the problem situation, across jurisdictions

No relative strengths or weaknesses in static or interactive tasks were found across the different Australian social groups (therefore, no figure has been presented).

Students' strengths and weaknesses on the response formats

In the PISA problem-solving assessment, one-third of the items were simple multiple-choice items. Two-thirds of the items were constructed-response format items, requiring students to write an answer, draw lines between two points or drag a shape.

Strengths and weaknesses on problem-solving tasks by response format, across countries

Figure 4.6 shows the relative strengths and weaknesses by response format across countries. Unlike the comparison between static and interactive items, some clear patterns are evident for the different response formats. Of the seven highest performing countries, all of which are Asian, every country except Singapore performed relatively stronger on the selected-response format items. Twelve of the 13 lowest performing countries in problem solving also showed stronger performance on the selected-response format items. Canada, Australia, England, Estonia, Belgium, Ireland and Denmark showed a bias towards the constructed-response format items.


[Figure: countries are listed in descending order of mean problem-solving score, from Singapore (562) to Colombia (399). For each country, cells for selected responses and constructed responses show stronger-than-expected performance, a non-significant strength or weakness, or weaker-than-expected performance on that response format.]

Figure 4.6  Relative strengths and weaknesses on problem-solving tasks by response format, across countries


Strengths and weaknesses on problem-solving tasks by response format, within Australia

The lowest performing state, Tasmania, was the only jurisdiction to show a significant bias towards the constructed-response format items, while Queensland showed a significant bias towards the selected-response format items (Figure 4.7).

[Figure: jurisdictions are listed with their mean problem-solving scores (ACT 526, NSW 525, VIC 523, QLD 522, SA 520, WA 528, TAS 490, NT 513). For each jurisdiction, cells for selected responses and constructed responses show whether performance was stronger than expected, moderately stronger, not significantly different, moderately weaker, or weaker than expected.]

Figure 4.7  Relative strengths and weaknesses on problem-solving tasks by response format, across jurisdictions


Figure 4.8 shows that females performed relatively better than males on the constructed-response format items. Australian-born students were also relatively stronger on the constructed-response items, and a consistent but weaker effect was found for students who spoke English at home. Foreign-born students were relatively stronger on the selected-response format items, as were, to a lesser extent, students who spoke a language other than English at home. Since constructed-response items tend to include a higher proportion of reading matter, these results could be expected.

[Figure: Australian subgroups are listed with their mean problem-solving scores: sex (females 522, males 524); Indigenous background (Indigenous 454, non-Indigenous 526); geographic location (metropolitan 528, provincial 510, remote 475); socioeconomic background (lowest quartile 487, second 512, third 538, highest 560); immigrant status (Australian-born 523, first-generation 531, foreign-born 517); and language at home (English 526, language other than English 509). For each subgroup, cells for selected responses and constructed responses show whether performance was stronger than expected, moderately stronger, not significantly different, moderately weaker, or weaker than expected.]

Figure 4.8  Relative strengths and weaknesses on problem-solving tasks by response format, across different social groups


Grouping countries by their strengths and weaknesses in problem solving

In this chapter, differences in performance patterns across the problem-solving processes, the nature of the problem situation and the response-format types have been identified. Figure 4.9 shows the performance differences according to the nature of the problem situation (static and interactive tasks) and the main problem-solving processes, grouped in this figure into knowledge-acquisition tasks and knowledge-utilisation tasks.5

The Chinese-speaking countries (all high-performing countries) differ from the other Asian high performers (Singapore, Korea and Japan) in being weaker on interactive items, although not all of their differences from the OECD average are significant. Korea, Singapore, Hong Kong–China, Macao–China, Chinese Taipei and Shanghai–China were more successful on knowledge-acquisition tasks (the exploring and understanding process and the representing and formulating process). Among the lower performing countries, only Brazil was significantly better on interactive items, although, like all the lower performing countries, it was weaker on knowledge-acquisition tasks. With the exception of the Czech Republic, all Eastern European countries (mainly lower performing countries) lie in the quadrant with weaker-than-expected performance on both interactive items and knowledge-acquisition tasks. Most Western European countries were reasonably close to the knowledge-acquisition axis, but some clearly favoured static or interactive tasks.

While average problem-solving performance differed between Australia, Japan and Italy (Japan performed significantly higher than Australia, which in turn performed significantly higher than Italy), these three countries showed a similar balance of skills. All three performed close to their expected level on interactive items (based on the OECD average pattern of performance) and slightly above their expected level on knowledge-acquisition tasks.

[Figure: a quadrant chart. The horizontal axis represents better performance on knowledge-acquisition tasks relative to knowledge-utilisation tasks; the vertical axis represents better performance on interactive tasks relative to static tasks, with the OECD average at the origin. Countries are plotted by their relative strengths: Ireland, Brazil, Germany and England appear in the upper-left quadrant (stronger than expected on interactive items, weaker on knowledge-acquisition tasks); most Eastern European and lower performing countries appear in the lower-left quadrant; Korea, Singapore, Hong Kong–China, Macao–China, Chinese Taipei and Shanghai–China appear towards the right; Italy, Japan and Australia sit close together near the knowledge-acquisition axis.]

Figure 4.9  Joint analysis of strengths and weaknesses, by nature of the problem and by process, for countries

5 In the PISA problem-solving assessment, the problem-solving processes of exploring and understanding and of representing and formulating can be classified as knowledge-acquisition tasks, while the planning and executing process can be classified as knowledge-utilisation tasks.


Figure 4.10 shows that, in general, lower performing groups, such as Indigenous students and students in the lowest socioeconomic quartile, appear in the lower-left quadrant, indicating that their performance was weaker than expected on both interactive tasks and knowledge-acquisition tasks. The jurisdictions were widely scattered across the figure; Tasmania, a relatively low performing jurisdiction overall, sat some distance from the others in the lower-left quadrant. Females performed relatively poorly on knowledge-acquisition tasks and appear in the upper-left quadrant, while males appear in the lower-right quadrant, indicating stronger-than-expected performance on knowledge-acquisition tasks.

[Figure: a quadrant chart with the same axes as Figure 4.9, centred on the Australian average. Jurisdictions and social groups are plotted by their relative strengths: consistent with the text, Queensland appears in the upper half (stronger on interactive tasks), females appear in the upper-left quadrant, males in the lower-right quadrant, and Indigenous students, students in remote schools and Tasmania in the lower-left quadrant.]

Figure 4.10  Joint analysis of strengths and weaknesses, by nature of the problem and by process, for Australian jurisdictions and social groups


CHAPTER 5

Australian students’ motivation towards problem solving

In PISA's definition of problem solving, there is recognition that the use of skills and knowledge to solve a problem depends on motivational and affective factors: the definition includes “the willingness to engage with such situations” (OECD, 2014, p. 30). Engaging with a problem is an integral part of problem solving. In PISA 2012, students completed a questionnaire that collected information about their engagement with and at school, their drive, and the beliefs they hold about themselves as learners. Results from the assessment of the PISA 2012 core domains showed that “students' ability to perform at high levels is not only a function of their aptitude and talent; if students do not cultivate their intelligence with hard work and perseverance, they will not achieve mastery in any field” (OECD, 2014, p. 111).

This chapter takes a closer look at perseverance and openness to problem solving in association with problem-solving performance. A number of countries are reported alongside Australia to provide an international context: the high-performing Asian countries (Chinese Taipei, Hong Kong–China, Japan, Korea, Macao–China, Singapore and Shanghai–China), the countries that performed on par with Australia (Canada, England and Finland) and the two remaining English-speaking countries (Ireland and the United States). Australian students' results are reported at the jurisdictional level, as well as by subgroups according to sex, geographic location, Indigenous background and socioeconomic background.

Perseverance

Perseverance relates to students' willingness to work on tasks that are difficult, even when they encounter setbacks. In PISA 2012, perseverance1 was assessed by asking students how well each of the following statements described them, using a 5-point Likert scale (very much like me, mostly like me, somewhat like me, not much like me, and not at all like me):
» When confronted with a problem, I give up easily.
» I put off difficult problems.
» I remain interested in the tasks that I start.
» I continue working on tasks until everything is perfect.
» When confronted with a problem, I do more than what is expected of me.

1 In PISA, perseverance is not an objective measure of how students engaged or persisted with the assessment itself, but of how students perceive themselves in terms of perseverance.


Table 5.1 shows the percentage of students who thought the statements were 'very much like me' and 'mostly like me' or 'not much like me' and 'not at all like me'2 for Australia, the OECD average and the comparison countries. Fewer students in Japan than in any of the comparison countries reported that it was not like them to give up easily when confronted with a problem, or to put off difficult problems. The United States had the highest proportion of students of any comparison country indicating that it was not like them to give up easily or to put off difficult problems. Fewer students in Japan than in any of the comparison countries also reported that it was like them to remain interested in tasks they start, to continue working on tasks until everything was perfect and, when confronted with a problem, to do more than what is expected of them. Of the high-performing countries, Shanghai–China, Singapore and Macao–China, along with the United States, had the highest proportions of students who indicated it was like them to remain interested in tasks they started, to continue working on tasks until everything was perfect, and to do more than was expected of them.

Approximately two-thirds of Australian students indicated it was not like them to give up easily when confronted with a problem, and almost half indicated it was not like them to put off difficult problems. Half the Australian students reported that it was like them to remain interested in the tasks they started and to continue working on tasks until everything was perfect, while one-third reported that it was like them to do more than what was expected of them when confronted with a problem.

Table 5.1  Students' perseverance in problem solving for Australia and comparison countries

The first two columns show the percentage of students who reported the statement was 'not much like me' or 'not at all like me'; the last three columns show the percentage who reported the statement was 'very much like me' or 'mostly like me'.

                     Give up      Put off      Remain       Work until   Do more than
                     easily       difficult    interested   everything   expected
                                  problems     in tasks     is perfect
Country              %     SE     %     SE     %     SE     %     SE     %     SE
Australia            62    0.6    44    0.7    50    0.7    46    0.6    31    0.4
Canada               67    0.7    44    0.7    52    0.8    51    0.7    39    0.6
Chinese Taipei       59    0.8    45    0.8    35    1.0    31    0.8    28    0.8
England              59    0.9    44    0.7    52    0.9    47    1.0    36    0.7
Finland              59    0.8    46    0.9    45    0.9    40    0.8    28    0.6
Hong Kong–China      61    0.9    37    0.9    52    0.9    50    0.8    35    0.9
Ireland              61    0.9    45    0.9    55    0.8    48    1.0    33    0.9
Japan                32    0.9    16    0.6    29    0.9    25    0.7    12    0.7
Korea                40    1.1    20    0.8    60    1.1    44    1.1    27    1.0
Macao–China          50    0.8    34    0.7    51    0.9    53    1.0    46    0.8
Shanghai–China       53    1.0    37    0.9    73    0.7    55    1.1    38    0.8
Singapore            62    0.7    44    0.7    58    0.8    61    0.9    45    0.9
United States        70    0.8    49    0.8    57    1.0    55    0.8    44    1.1
OECD average         56    0.2    37    0.1    49    0.2    44    0.2    34    0.1

The index of perseverance was created using these five items, and standardised to have a mean of 0 and a standard deviation of 1 across the OECD student population. Higher scores on the index represent higher levels of perseverance. These mean index scores are reported overall for Australia, the OECD average and the comparison countries (Table 5.2), and for females and males separately (Table 5.3).

2 For ease of reading, 'very much like me' and 'mostly like me' will be referred to as 'like them', and 'not much like me' and 'not at all like me' will be referred to as 'not like them', throughout this chapter.
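The report does not give the scaling formula. As a minimal sketch (an assumption about the details; PISA's questionnaire indices are actually derived through item response scaling before being standardised), standardising a raw score $x_i$ so that the OECD student population has mean 0 and standard deviation 1 amounts to:

$$ z_i = \frac{x_i - \mu_{\text{OECD}}}{\sigma_{\text{OECD}}} $$

On this reading, an Australian mean of 0.10 on the index can be interpreted as roughly one-tenth of an OECD standard deviation above the OECD mean.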


Australian students' mean score on the index of perseverance, a measure of how persistently students were willing to work on problems, was higher than the OECD average, indicating that Australian students reported slightly higher levels of perseverance than students across the OECD. The mean index scores for the high-performing countries were not consistent: students in Japan, Korea and Chinese Taipei reported the lowest mean index scores, indicating lower levels of perseverance, while Shanghai–China and Singapore had the highest mean index scores among the high-performing countries, indicating higher levels of perseverance. Of the comparison countries, students from the United States reported the highest mean score on the index of perseverance.

Table 5.2  Index of perseverance for Australia and comparison countries

                     All students
Country              Mean index    SE
Australia             0.10         0.01
Canada                0.22         0.01
Chinese Taipei       –0.08         0.02
England               0.11         0.02
Finland               0.00         0.02
Hong Kong–China       0.12         0.02
Ireland               0.15         0.02
Japan                –0.59         0.02
Korea                –0.09         0.02
Macao–China           0.15         0.01
Shanghai–China        0.25         0.02
Singapore             0.29         0.02
United States         0.38         0.02
OECD average          0.00         0.00

Across the OECD, there was a significant difference between the sexes on the index of perseverance, with males indicating that they were more willing than females to persist in working on problems. In all of the comparison countries except the United States, males reported significantly higher levels of perseverance than females, with the largest sex differences found in England and Korea. The United States was the only comparison country where no significant difference on the index of perseverance was found, with males and females reporting similar mean index scores. The mean index scores for both Australian males and Australian females were significantly higher than the OECD average.


Table 5.3  Index of perseverance for Australia and comparison countries, by sex

                     Males                Females              Sex difference (M–F)
Country              Mean index   SE      Mean index   SE      Dif.    SE
Australia             0.19        0.02     0.00        0.02    0.19    0.02
Canada                0.25        0.02     0.19        0.02    0.06    0.02
Chinese Taipei       –0.04        0.02    –0.12        0.02    0.09    0.03
England               0.23        0.03    –0.01        0.02    0.23    0.03
Finland               0.07        0.02    –0.07        0.02    0.13    0.03
Hong Kong–China       0.18        0.02     0.05        0.02    0.13    0.03
Ireland               0.22        0.03     0.07        0.03    0.15    0.04
Japan                –0.55        0.02    –0.64        0.02    0.10    0.03
Korea                 0.03        0.02    –0.22        0.02    0.25    0.03
Macao–China           0.19        0.02     0.11        0.02    0.09    0.02
Shanghai–China        0.32        0.03     0.17        0.02    0.16    0.03
Singapore             0.36        0.02     0.23        0.02    0.13    0.03
United States         0.39        0.03     0.36        0.03    0.03    0.04
OECD average          0.05        0.00    –0.05        0.00    0.10    0.01

Note: Bolded values indicate a statistically significant difference.
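The report flags significant differences without showing the test. As a hedged sketch of the usual approach (an assumption about the exact procedure used here), a difference is judged significant at the 5% level when it exceeds roughly twice its standard error:

$$ z = \frac{\text{Dif.}}{SE_{\text{Dif.}}}, \qquad |z| > 1.96 \;\Rightarrow\; \text{significant at } \alpha = 0.05 $$

For example, Australia's difference of 0.19 with a standard error of 0.02 gives $z = 9.5$, well past the threshold, whereas the United States' 0.03 with a standard error of 0.04 gives $z = 0.75$, which is not significant; this is consistent with the pattern described in the text.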

The index of perseverance was divided into quartiles. Figure 5.1 shows the relationship between perseverance and problem-solving performance for Australia. There was a small positive association (a correlation of 0.22) between the perseverance index and problem-solving performance, and the pattern was broadly linear, with higher levels of perseverance associated with higher performance in problem solving.

[Figure: a line chart of mean problem-solving performance (vertical axis, 400 to 700) against the four quartiles of the index of perseverance (horizontal axis, lowest to highest quartile), rising across the quartiles.]

Figure 5.1  Relationship between Australian students' perseverance and problem-solving performance
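As an illustration of the kind of analysis behind Figure 5.1, the sketch below divides an index into quartiles and computes the mean performance per quartile and the overall correlation. It uses synthetic data and hypothetical variable names, not the PISA microdata, and it ignores the survey weights and plausible values that a real PISA analysis would require.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 5000  # hypothetical sample size, not the real PISA sample

# Synthetic data: a standardised perseverance index and a problem-solving
# score built to have mean ~523, SD ~95 and a correlation of about 0.22.
index = rng.standard_normal(n)
noise = rng.standard_normal(n)
score = 523 + 95 * (0.22 * index + np.sqrt(1 - 0.22**2) * noise)

df = pd.DataFrame({"perseverance_index": index, "ps_score": score})

# Divide the index into quartiles, as in Figure 5.1.
df["quartile"] = pd.qcut(df["perseverance_index"], 4,
                         labels=["Lowest", "Second", "Third", "Highest"])

# Mean problem-solving performance within each quartile of the index.
print(df.groupby("quartile", observed=True)["ps_score"].mean().round(1))

# Overall correlation between the index and performance.
print(round(df["perseverance_index"].corr(df["ps_score"]), 2))
```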

Table 5.4 shows the percentage of students who indicated that the statements were like them or not like them, and the mean score on the index of perseverance, according to sex, jurisdiction, geographic location, Indigenous background and socioeconomic background. Results for Australia and the OECD average are included in the table for comparison. Earlier in this chapter, it was noted that males reported higher levels of perseverance than females; Table 5.4 provides further detail about these differences. Around 10 percentage points more males than females indicated that they would not give up easily when confronted with a problem and would not put off difficult problems, and that they remained interested in the tasks they started.


All jurisdictions had index scores that were higher than the OECD average, with students from the Australian Capital Territory reporting the highest, and students from the Northern Territory the lowest, levels of perseverance.

Students in metropolitan schools had the highest levels of perseverance, followed by students in provincial schools, with students in remote schools reporting the lowest levels. Interestingly, almost half the students in remote schools indicated it was not like them to give up easily when confronted with a problem, compared to approximately two-thirds of students in metropolitan or provincial schools, and around 10 percentage points fewer students in remote schools indicated it was not like them to put off difficult problems, compared to their peers.

The level of perseverance for Indigenous students was lower than that of non-Indigenous students and lower than the OECD average. The same was true of students in the lowest socioeconomic quartile; across the quartiles, the level of perseverance increased with each step up in socioeconomic background.

Table 5.4  Students' perseverance in problem solving, by sex, jurisdiction, geographic location, Indigenous background and socioeconomic background

The first two columns show the percentage of students who reported the statement was 'not much like me' or 'not at all like me'; the next three columns show the percentage who reported the statement was 'very much like me' or 'mostly like me'.

                     Give up      Put off      Remain       Work until   Do more than   Index of
                     easily       difficult    interested   everything   expected       perseverance
                                  problems     in tasks     is perfect
                     %     SE     %     SE     %     SE     %     SE     %     SE       Mean    SE
Sex
  Female             55    0.7    37    0.9    45    0.9    47    0.9    29    0.7       0.00   0.02
  Male               68    0.8    51    0.9    54    0.9    45    0.7    33    0.7       0.19   0.02
Jurisdiction
  ACT                68    2.0    46    2.4    47    2.7    49    2.2    32    2.5       0.16   0.05
  NSW                63    1.2    45    1.1    51    1.2    48    1.1    32    1.0       0.13   0.02
  VIC                63    1.3    45    1.5    51    1.5    46    1.4    30    1.1       0.11   0.03
  QLD                59    1.3    43    1.4    49    1.4    44    1.4    32    1.3       0.07   0.03
  SA                 61    1.7    43    1.4    48    1.7    45    1.8    29    1.4       0.06   0.03
  WA                 63    1.7    44    1.9    48    1.9    45    1.6    30    1.3       0.10   0.03
  TAS                61    1.8    45    2.0    45    1.9    42    1.8    33    2.0       0.07   0.04
  NT                 56    4.2    43    3.9    49    5.0    38    4.1    31    4.7       0.02   0.10
Geographic location
  Metropolitan       63    0.7    44    0.8    51    0.9    48    0.7    33    0.5       0.14   0.02
  Provincial         61    1.3    45    1.3    45    1.2    40    1.2    27    1.0       0.00   0.02
  Remote             46    8.8    36    3.5    46    3.2    40    4.8    31    2.2      –0.05   0.08
Indigenous background
  Indigenous         48    2.0    35    1.5    41    1.7    35    1.7    74    1.3      –0.20   0.04
  Non-Indigenous     63    0.6    45    0.7    50    0.7    46    0.6    31    0.5       0.11   0.01
Socioeconomic background
  Lowest quartile    52    1.1    38    1.2    43    1.3    39    1.0    27    0.8      –0.10   0.02
  Second quartile    59    1.2    43    1.3    46    1.3    43    1.4    29    1.1       0.04   0.03
  Third quartile     67    1.2    47    1.2    53    1.4    48    1.3    31    1.2       0.17   0.02
  Highest quartile   71    1.0    49    1.3    58    1.3    53    1.2    37    1.3       0.31   0.03
Country
  Australia          62    0.6    44    0.7    50    0.7    46    0.6    31    0.4       0.10   0.01
  OECD average       56    0.2    37    0.1    49    0.2    44    0.2    34    0.1       0.00   0.00


Students' openness to experience in problem solving

Students need to be willing to engage with problems, and to be open to new challenges, in order to solve complex problems and situations. In PISA 2012, students' openness to problem solving was measured by asking students how well each of the following statements described them, using a 5-point Likert scale (very much like me, mostly like me, somewhat like me, not much like me, and not at all like me):
» I can handle a lot of information.
» I am quick to understand things.
» I seek explanations of things.
» I can easily link facts together.
» I like to solve complex problems.

Table 5.5 shows the percentage of students who thought the statements were 'very much like me' or 'mostly like me' for Australia, the OECD average and the comparison countries.

Table 5.5  Students' openness to problem solving for Australia and comparison countries

Percentage of students who reported the statement was 'very much like me' or 'mostly like me'.

                     Handle a     Quick to     Seek          Easily link   Like complex
                     lot of       understand   explanations  facts         problems
                     information  things       for things    together
Country              %     SE     %     SE     %     SE      %     SE      %     SE
Australia            49    0.7    52    0.7    63    0.6     53    0.7     31    0.6
Canada               57    0.6    61    0.6    65    0.7     60    0.7     37    0.7
Chinese Taipei       30    0.9    42    0.8    54    0.9     39    0.9     26    0.7
England              52    1.1    52    0.8    60    0.8     57    0.9     37    0.8
Finland              41    1.0    52    0.9    53    0.9     57    1.0     34    0.8
Hong Kong–China      35    0.8    48    0.9    48    0.8     42    0.8     31    1.0
Ireland              52    1.0    55    0.9    66    0.8     57    0.9     30    0.7
Japan                26    0.7    35    0.8    32    0.7     26    0.8     19    0.7
Korea                30    1.1    37    1.2    52    1.2     47    1.2     23    1.0
Macao–China          31    0.8    38    0.8    49    0.8     38    0.8     25    0.8
Shanghai–China       47    1.0    55    0.9    66    0.9     62    1.1     36    0.8
Singapore            44    0.9    50    0.9    69    0.7     52    1.0     39    0.9
United States        58    0.8    58    1.0    66    0.9     60    1.0     39    0.9
OECD average         53    0.2    57    0.2    61    0.2     57    0.2     33    0.1

Across all countries and the OECD average, the statement with the lowest percentage of agreement was 'I like to solve complex problems'. Australian students' percentages were slightly lower than the OECD average for all statements except 'I seek explanations for things' (63% compared to 61%). Students from Japan had the lowest percentage of agreement on all five statements, lower than every comparison country and the OECD average. Students from the United States were more positive in their engagement with problem solving, scoring higher than most other countries on all five statements. In Australia, two-thirds of students indicated they sought explanations for things; approximately half indicated they could handle a lot of information, were quick to understand things and could easily link facts together; and one-third reported liking to solve complex problems.


Students' responses were standardised to calculate the index of openness to problem solving, with higher index scores indicating higher levels of openness to problem solving. Table 5.6 shows that students from Japan had the lowest levels of openness to problem solving, whereas students from the United States reported the highest levels. Students from Australia scored below the OECD average.

Table 5.6  Index of openness to problem solving for Australia and comparison countries

                     All students
Country              Mean index    SE
Australia            –0.07         0.02
Canada                0.14         0.01
Chinese Taipei       –0.33         0.02
England              –0.02         0.02
Finland              –0.11         0.02
Hong Kong–China      –0.25         0.02
Ireland              –0.02         0.02
Japan                –0.73         0.02
Korea                –0.37         0.02
Macao–China          –0.34         0.01
Shanghai–China        0.07         0.02
Singapore             0.01         0.02
United States         0.18         0.02
OECD average          0.00         0.00

Males scored significantly higher than females on the index of openness to problem solving in all the countries listed in Table 5.7. The largest sex differences were found in Japan and Hong Kong–China. The mean index scores for males and females in Canada, Shanghai–China and the United States were higher than the OECD average, while the mean index scores for males and females in Australia, Chinese Taipei, Finland, Hong Kong–China, Japan, Korea and Macao–China were lower than across the OECD.

Table 5.7  Index of openness to problem solving for Australia and comparison countries, by sex

                     Males                Females              Sex difference (M–F)
Country              Mean index   SE      Mean index   SE      Dif.    SE
Australia             0.05        0.02    –0.19        0.02    0.24    0.02
Canada                0.26        0.02     0.02        0.02    0.23    0.03
Chinese Taipei       –0.19        0.03    –0.48        0.03    0.29    0.04
England               0.09        0.02    –0.12        0.02    0.22    0.03
Finland               0.00        0.03    –0.21        0.02    0.21    0.03
Hong Kong–China      –0.08        0.02    –0.44        0.02    0.36    0.03
Ireland               0.06        0.03    –0.10        0.02    0.16    0.04
Japan                –0.54        0.03    –0.94        0.03    0.40    0.04
Korea                –0.26        0.03    –0.50        0.03    0.24    0.04
Macao–China          –0.26        0.02    –0.42        0.02    0.17    0.03
Shanghai–China        0.21        0.03    –0.08        0.02    0.29    0.03
Singapore             0.13        0.02    –0.12        0.02    0.26    0.03
United States         0.29        0.03     0.08        0.03    0.21    0.05
OECD average          0.12        0.00    –0.12        0.00    0.23    0.01

Note: Bolded values indicate a statistically significant difference.


The relationship between the quartiles of the openness to problem-solving index and mean problem-solving performance is presented in Figure 5.2: students in the higher quartiles of openness to problem solving achieved higher mean scores in problem solving. This relationship was even stronger than that found between perseverance and performance, with a correlation of 0.32, which is considered a moderate association according to Cohen's (1988) criteria.
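For reference, the correlations quoted here are presumably Pearson coefficients, and Cohen's (1988) benchmarks classify their magnitude; a brief restatement of these standard definitions (not taken from this report):

$$ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}, \qquad |r| \approx 0.1 \text{ (small)}, \quad 0.3 \text{ (medium)}, \quad 0.5 \text{ (large)} $$

On this scale, the perseverance correlation of 0.22 sits between small and medium, while 0.32 just clears the medium threshold.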

[Figure: a line chart of mean problem-solving performance (vertical axis, 400 to 700) against the four quartiles of the index of openness to problem solving (horizontal axis, lowest to highest quartile), rising across the quartiles.]

Figure 5.2  Relationship between Australian students' openness to problem solving and problem-solving performance

The percentages of students who indicated the openness to problem-solving statements were 'very much like me' or 'mostly like me', and the index scores for the Australian subgroups, Australia and the OECD average, are shown in Table 5.8. Approximately 10 percentage points more males than females reported that they liked solving complex problems, were quick to understand things and could easily link facts together.

The index scores across the jurisdictions ranged from –0.12 to 0.12. The Australian Capital Territory had the highest mean score on the openness to problem-solving index, meaning that students from this jurisdiction reported more willingness to engage with problems and to be open to new challenges than students in jurisdictions with lower mean index scores. South Australia and Queensland had the lowest mean index scores, while the mean index score for the Northern Territory was the same as the OECD average.

The mean score on the index of openness to problem solving was around the OECD average for students in metropolitan schools, whereas the mean index scores for students in provincial schools, and even more so in remote schools, were lower. Indigenous students had lower mean scores on the index than non-Indigenous students, indicating that they were less likely to engage with problems and less open to new challenges. The same was true of students in the lowest socioeconomic quartile, who had lower mean index scores than students in the higher socioeconomic quartiles.


Table 5.8  Students' openness to problem solving, by sex, jurisdiction, geographic location, Indigenous background and socioeconomic background

Percentage of students who reported the statement was 'very much like me' or 'mostly like me'.

                     Handle a     Quick to     Seek          Easily link   Like complex   Index of openness
                     lot of       understand   explanations  facts         problems       to problem solving
                     information  things       for things    together
                     %     SE     %     SE     %     SE      %     SE      %     SE       Mean    SE
Sex
  Female             46    0.9    47    0.9    64    0.9     49    0.9     24    0.8      –0.19   0.02
  Male               53    0.9    58    0.9    62    0.7     58    1.0     37    0.9       0.05   0.02
Jurisdiction
  ACT                55    2.4    58    2.2    71    2.3     61    2.5     36    2.1       0.12   0.05
  NSW                51    1.1    55    1.1    63    1.0     54    1.1     33    1.1      –0.03   0.02
  VIC                51    1.2    53    1.3    64    1.2     54    1.5     30    1.2      –0.08   0.03
  QLD                46    1.4    49    1.4    60    1.4     52    1.3     29    1.2      –0.12   0.03
  SA                 46    1.8    51    1.9    62    1.6     53    1.8     27    1.6      –0.12   0.04
  WA                 48    1.9    50    1.8    62    1.7     54    1.5     31    1.4      –0.08   0.03
  TAS                49    1.9    53    1.8    65    1.7     54    2.0     32    2.0      –0.05   0.04
  NT                 47    3.6    52    3.3    67    3.2     53    3.8     33    3.6       0.00   0.08
Geographic location
  Metropolitan       51    0.8    54    0.8    65    0.7     55    0.8     32    0.7      –0.03   0.02
  Provincial         44    1.1    48    1.0    58    1.1     48    1.2     28    1.1      –0.19   0.02
  Remote             42    2.8    45    4.2    56    4.3     44    4.7     28    3.2      –0.29   0.09
Indigenous background
  Indigenous         37    1.7    42    2.1    52    1.6     39    1.8     23    1.5      –0.38   0.04
  Non-Indigenous     50    0.7    53    0.7    63    0.6     54    0.7     31    0.6      –0.06   0.02
Socioeconomic background
  Lowest quartile    39    1.1    44    1.0    54    1.1     40    1.0     24    1.0      –0.33   0.02
  Second quartile    46    1.2    48    1.3    60    1.1     49    1.3     28    1.0      –0.18   0.03
  Third quartile     52    1.3    55    1.3    67    1.2     58    1.4     32    1.1       0.01   0.03
  Highest quartile   61    1.2    63    1.1    71    1.0     68    1.2     40    1.2       0.24   0.02
Country
  Australia          49    0.7    52    0.7    63    0.6     53    0.7     31    0.6      –0.07   0.02
  OECD average       53    0.2    57    0.2    61    0.2     57    0.2     33    0.1       0.00   0.00


References

Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. Quarterly Journal of Economics, 118(4), 1279–1333.

Cohen, J. (1988). Statistical power analysis for the behavioural sciences (2nd ed.). New Jersey: Lawrence Erlbaum Associates.

Hanushek, E. A., Jamison, D. T., Jamison, E. A., & Woessmann, L. (2008). Education and economic growth: It's not just going to school but learning something while there that matters. Education Next, 8(2), 62–70.

OECD (2014). PISA 2012 results: Creative problem solving: Students' skills in tackling real-life problems (Vol. V). Paris: OECD Publishing.

Thomson, S., De Bortoli, L., & Buckley, S. (2013). PISA 2012: How Australia measures up. Melbourne: Australian Council for Educational Research.
