Friday, November 23, 2012

Error Analysis

So I went down to UCLA on Monday to fill out some paperwork for the AP Readiness program my students and I have been attending. After filling out my paperwork, I decided to check whether any of my former advisors from my credential or master's programs were around so I could say hi.

I got a chance to sit down with my advisor from my credential program, and we spent an hour or so chatting, catching up, and sharing how our teaching was going (mine as a second-year high school math teacher, his as a secondary math teacher educator). I got to sharing with him some of the things I was doing in my classroom that I was excited about, namely the review stations after quizzes and tests and the error analysis questions that I gave students so that they could earn back their points and revisit the concepts they needed to strengthen.

I'm not sure if it was his role as an advisor and teacher educator kicking in, or his role as a Ph.D. candidate, but in our conversation he took me down a path of inquiry. It was easy for me to see the qualitative evidence that the quiz review stations and error analysis were beneficial to my students' confidence and enjoyment of the class. It would also be great to see quantitative evidence that my students' understanding of content and concepts was improving as well. We talked about how best to establish a baseline of student knowledge so that I could accurately and reliably measure growth. He also talked about having one class serve as a control group, but I argued that I don't feel it's ethical to have a control group in educational research; that's a blog topic for another day.

In short, I think I have a new inquiry project on my hands for the second semester of school. Although the quiz review stations and error analysis are something I've been doing with my class since the start of the school year, I plan on making the process a little more structured than it already is: collecting data on student performance on baseline assessments as well as on assessments taken after doing error analysis, collecting some qualitative data in the form of student surveys about their attitudes and dispositions toward mathematics and their confidence as mathematics learners, and finding a way to measure their mathematical reasoning skills at the beginning and end of the inquiry project.

Just jotting some ideas down right now. I'll probably work on planning this project more during winter break. 

Tuesday, November 20, 2012

AP Calculus Graph Analysis

{Disclaimer: I meant to publish this post in October}

I was working with my AP Calculus students on understanding limits graphically. Between all the closed and open circles and the left- and right-sided limits, they were feeling really lost. I had given them two mini quizzes on the graphical interpretation of limits, and the second one went a little better than the first, but not by much. So what I decided to do was have them create their own graphical limits problems in groups. They were tasked with sketching the graph of a piecewise-defined function and coming up with questions about limits and about evaluating the function at certain x-values that would challenge their classmates.
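To give a sense of what the students were wrestling with, here's a quick sketch (my own illustration, not an actual student poster) of the kind of piecewise function they graphed. At x = 1 the one-sided limits disagree, so the two-sided limit does not exist, even though f(1) itself is defined:

```python
def f(x):
    """A piecewise function with a jump at x = 1."""
    if x < 1:
        return x + 2      # this branch ends in an open circle at (1, 3)
    return x ** 2         # this branch includes a closed circle at (1, 1)

# Approach x = 1 from each side to estimate the one-sided limits.
left  = f(1 - 1e-9)   # left-hand limit: approaches 3
right = f(1 + 1e-9)   # right-hand limit: approaches 1
value = f(1)          # f(1) = 1, which matches only the right-hand limit

print(round(left, 6), round(right, 6), value)  # 3.0 1.0 1
```

Since the left- and right-hand limits differ, the limit as x approaches 1 does not exist, which is exactly the kind of distinction (open circle vs. closed circle, limit vs. function value) the students were debating.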

The conversations surrounding the properties of limits, when a limit exists and when it doesn't, and how to know when a function is defined for a certain x-value were amazing. The students were debating all of this in their groups, discussing the answers to their own problems, and justifying their answers with theorems and properties. And when we did a gallery walk (so that students would work out and answer the problems their classmates had created), students were talking out the answers as they looked at their classmates' posters, helping each other understand why the answer was this and not that, showing each other on the posters how to evaluate one-sided limits, and so on.

Although not everyone got all the problems from their classmates correct, they really felt much more confident and seemed to have a stronger understanding of graphical limits after the activity.

Why was this activity so effective? I'll do the research and revisit that question soon. 

Friday, September 28, 2012

Assessment for Learning & Review Stations

Before the school year began, I was reading a book called Assessment for Learning. I unfortunately haven't had much time to read more of it since the school year began, and even in the summer, I didn't get to read as much of it as I would have liked, because I was teaching in our summer bridge program for incoming 9th graders. 

I first became aware of the book while reading Jo Boaler's What's Math Got to Do with It?, which I thoroughly enjoyed reading as part of my master's coursework last school year. In it, Boaler talked about Assessment for Learning, and it intrigued me, so I purchased the book and began reading about how to improve my assessment practices. Although the book was published in 2003 and the research behind it was conducted in the UK, I feel its conclusions still hold for today's generation of American students, because one of the main understandings from the text is that your assessment practices need to be tailored to the needs of your students, and of course, that's not really a surprise at all.

I read about the types of feedback that were effective in increasing student understanding and achievement. In one study described in the book, three groups of students received varying combinations of feedback: one group received comments only, another received grades only, and the third received both comments and grades on assignments. Much to my surprise, the group of students that showed the most academic gains was the one that received only comments. I thought that surely the group that received both would benefit the most. But what the researchers discovered was that the students who received grades and comments focused on the grade, and didn't spend much time reading or reflecting on the comments that were left for them. Both groups that received grades showed no gains.

Well, this is fantastic, right? I don't have to grade my students! I just have to leave them constructive comments! If only. I realized that I had a dilemma: how can I not give my students grades? While I'd like my students to believe that their grade in my class is nowhere near as important as their understanding of mathematics (which I DO believe), I also understand how important grades are to my students' futures: their college acceptance, their ability to earn scholarships and other financial aid, and the need to quantifiably measure students' academic success in a way that allows for comparison. Because there is this emphasis on grades, and because somehow a student's worth and success is determined by this magical number called a GPA, I know how important it is to keep students up-to-date on their performance in my class via their grade. So there's my problem in a nutshell. I want to focus on constructive feedback, I don't want my students to get hung up on a letter grade, and I also don't want my students to become complacent and willing to accept B's and C's that don't reflect total mastery. But I wasn't sure how to give feedback in a way that set students up for improvement through reflection and action, while also keeping them informed of their grade without letting them get too hung up on a letter of the alphabet.

So I decided to invite my students in on the conversation. I gave them the excerpt of the book which discussed Feedback by Marking and had them read it. I then had them write a reflection about the reading that asked them to brainstorm ideas for feedback that would benefit them as learners in their understanding of mathematics, but also kept them abreast of their academic standing in my class. The students came up with some good ideas. In the end, I decided that I would include their grade on their assessments, but that I would do a few things.

(1) be harsh in my grading: no partial credit given
(2) leave very detailed feedback that helps them work through the problems they got wrong and instructs them on what they need to do in order to understand the question
(3) give students the opportunity to recoup lost points by completing error analyses

I chose to be harsh with the grading so that students were forced to take a second look at problems they only somewhat understood, and work on REALLY understanding them. And because I was being so harsh, I also wanted to give my students the opportunity to gain back what they'd lost by putting in additional work analyzing their errors. Lastly, I wanted to give them feedback that helped them identify their mistakes or gaps in understanding, and that asked them questions that would lead them to the correct answer and a better understanding of the topic. In classes like English and History, where students have to write essays, they are usually asked to submit drafts; the teacher and their peers give them feedback to improve, and then they resubmit a final version of the essay after acting on that feedback. We don't usually afford this type of opportunity to students in math classes, and I think that's one of the things that's wrong with math education. Ultimately, we want our students to learn and master the concepts, not punish them for "not understanding" on the one particular day when we happened to administer an assessment.

When I graded the first set of quizzes, I realized something: the amount of detail in the feedback I was leaving for each student meant it took me a minimum of 7-10 minutes to grade each quiz (72 students in my math analysis classes, you do the math!), and on many of the quizzes, I was writing the same things over and over. So I decided to combine the detailed feedback and the error analyses into one thing that addressed both their gaps in understanding and their desire to earn a better grade. Because I was usually writing the same feedback over and over on each quiz, I turned the constructive and scaffolded suggestions into questions that students would have to answer to help them better understand the problem and to earn back points. Here's an example:

1. What is the equation of the line that goes through the point (-6, 4) and has a slope of m = -2/5?

a. We’ve worked with two forms of linear equations, point-slope form and slope-intercept form. Write the general form of both equations.
b. For this question, you’ve been given a point and a slope. Is the point the y-intercept? How do you know?
c. Which form of linear equations will be easier to write using the information that’s been given to you?
d. What is the equation of the line?
e. What did you do wrong? Or what did you misunderstand?
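For reference, working through questions (a) through (d) myself (this is my own answer key sketched in code, not something the students received): the given point is not the y-intercept, so point-slope form is the easier choice, and simplifying it gives the slope-intercept form.

```python
from fractions import Fraction

# Given information from the quiz question.
m = Fraction(-2, 5)   # slope
x0, y0 = -6, 4        # the given point (not the y-intercept)

# Point-slope form: y - y0 = m(x - x0).
# Solving for y gives slope-intercept form y = m*x + b, where:
b = y0 - m * x0       # b = 4 - (-2/5)(-6) = 4 - 12/5 = 8/5

def y(x):
    """The line in slope-intercept form: y = -2/5 x + 8/5."""
    return m * x + b

print(b)        # 8/5
print(y(-6))    # 4, confirming the line passes through (-6, 4)
```

The check at the end is exactly what question (b) is probing: since the y-intercept works out to 8/5 rather than 4, the given point cannot be the y-intercept, which is why slope-intercept form is the harder starting place here.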

In addition to the error analysis questions, I gave them time to review the quiz in class with their peers. Yes, this took up a lot of time. But both the students and I felt it was a valuable use of their time. The class averages on the quiz (with my harsh grading) were 41% and 42%, and after the review activity, the vast majority of students felt confident that if they took the quiz again, they would do significantly better (B's and A's). I cannot put a price or a time limit on improving student confidence with mathematics. Here is how the review activity went:

Quiz Review Stations

When I graded the quizzes, I identified student experts: students who got all or most of a problem correct. For example, some questions had parts (a), (b), and (c), and I didn't necessarily have enough students who got all three parts correct, so some experts were identified because they got two out of three parts correct.

In my class, I set up the tables and chairs in groups of 6 or 7 students. The class spent 5-10 minutes in each expert session (there were 6 expert sessions in total), and during each session, one or two experts worked with the students in their group to help them understand the problem and how to do it. Experts varied from problem to problem, and a good 80% of the class got to be an expert on one question or another. It was really pleasing to see the surprise on certain students' faces when they found out that they were a student expert for a problem on the quiz. Students assumed that because they had gotten a bad grade, they wouldn't be experts, but this activity helped my students see that they all had something to offer. They all have strengths and weaknesses, and together, they can help each other improve on their weaknesses. To help my student experts, I gave everyone a copy of the error analysis questions before they started with the expert sessions, so that if the experts got stuck in their explanations, the questions could guide them.

I believe in the power of peer-tutoring for two reasons:
(1) I was a tutor in high school, working with students who were anywhere from 2 or 3 years younger than me to a year or two older. I was also constantly helping my peers with their classwork and homework when I finished early in class.
(2) Students speak a very different language than we adults do, and although only 9 years separate me from my youngest student, sometimes their peers can explain things to them in a way that is infinitely clearer than anything I could have ever said.

My students loved the quiz review stations and have begged me to keep doing them in the future, and how can I say no? I'm a second-year math teacher, and I don't always get it right, but this is one practice that I'm definitely going to make use of over and over again.