
= Exam 1 - over chapters 1 & 2 =

-- Exam 1 Review, version 1
 * **Materials:**

//SFT SAYS:// This review was compiled from text materials (reviews and sample tests) and reflects the homework assignments in terms of problem types. I decided to make it completely optional, but am allowing it to count toward earning back lost homework points up to this point (i.e., points for problems students attempted and did not get correct, not to be confused with points missed due to a lack of attempt). That is, the percent correct a student achieves on the exam review is the percent of lost homework points earned back. Students seem to be happy with this, and (as of the day before the exam) about 75% of my students said they would be completing the review.
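The earn-back rule above is simple arithmetic. As a minimal sketch (the function name and example numbers below are illustrative, not part of any actual course software):

```python
def earned_back(lost_points, review_percent_correct):
    """Return the homework points restored under the earn-back rule:
    the percent correct on the exam review, applied to the points lost.
    (Hypothetical helper for illustration only.)"""
    return lost_points * (review_percent_correct / 100.0)

# A student who lost 20 homework points and scores 80% on the review
# recovers 16 of those points.
print(earned_back(20, 80))  # -> 16.0
```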
 * **Thoughts on Implementation:**


 * **Exam 1:**

//SFT SAYS:// I had two students who were unable to take the exam on exam day, so I wound up making a few versions of the exam. Alana, after discovering that some students are friends across our two classes, decided to use a different version as well. The three versions all draw on the same raw materials, but vary a bit in the particular question types.
 * **Thoughts on Implementation:**

//SFT SAYS:// 
 * **Post-Exam Analysis:**

Look at this graph, and let's talk about what we see and how we might interpret it. The graph shows exam versus homework scores for a particular class of calculus students. I want you to put on a different kind of hat and view this from the perspective of a teacher -- that is, imagine these are your students' scores. What do you notice about, for example, the distribution of the exam scores (i.e., the vertical coordinate of each point)?



What do you think of this? What sort of explanation can you give for why this might be the case? Or, put another way, what hypotheses can you generate that explain this situation?



Ok, so we have three possible (and, on the surface, apparently reasonable) hypotheses that we could use to explain what we see in the data. That certainly doesn't imply that there are only three hypotheses overall. For example, what if I were to hypothesize that, just before the exam started, some students were taken over by aliens who, with their improved mathematical understanding, were able to score very well on the exam, and who left the taken-over students immediately upon submitting the exam papers? Would that count as a hypothesis? Sure, but would it be something worth considering?



Part of the job of an instructor is figuring out what other people do and do not know (and what they are and are not capable of), so as to better teach them other things. Therefore, having some sort of working hypothesis in mind when you look at these data is important, and it says a great deal about how you (as the instructor) approach the task in front of you.

So, if I were to operate under the first hypothesis -- i.e., that some folks "get it" while others don't -- it's quite easy for me (as the instructor) to write off the graph as evidence of what I already hold to be true. Those who "get it" are in the top scores, while the others obviously don't get it. This may or may not be the case, but either way, it sure does make things a great deal easier on me -- I don't have to consider too much and there is hardly an impetus for altering my instruction or assessments. And, it's pretty clear that a teacher operating under this hypothesis is going to be comfortable substituting something very discrete and singular (like a test score) for something complex and multifaceted (like "student understanding"). That is, it would be easy (operating under this hypothesis) to write off low-scorers as members of the "don't get it" camp based on their performance on the exam. In a sense, this hypothesis is not very "generous" to the students (especially those who happened to score low on the exam), while it is very generous to the instructor (since there is no need to consider the impacts of particular teaching strategies, assessment forms, etc.). That's definitely strike one against it, right?



So, like we were just saying, another problem with this hypothesis is that it doesn't really explain another very interesting phenomenon we see on the chart -- some of those who "get it" (i.e., score highly on the exam) have very different scores on the homework (some in the 50% range!). Therefore, since this hypothesis alone is not very generous to all involved or helpful in interpreting the data, it's pretty safe to let it go.

If I operate under the second hypothesis -- i.e., that exams are high-stakes events, and some folks really don't do well because of it -- I have a different situation. If this is the way we interpret what we see, it implies that (at least some of) the folks who scored poorly did so because of the context of the exam, and not because of other factors (e.g., preparation, comprehension, clarity of exam language). This has some ramifications for me (the instructor), since I could try some things that might make exams less stressful (or go so far as to eliminate them). Also, this hypothesis is quite different in terms of its attitude toward students -- that is, it is more generous to them and their abilities/preparation. So, this hypothesis is quite different from the first, since it places at least some of the onus on the instructor (i.e., I have to "own" at least some of the disparity in scores). Even so, this is not a terribly robust explanatory hypothesis, since it doesn't say much about the scores on the homework or any relationship between those and the exam scores -- it's just about the exam being high-stakes. So, while it might be more generous to the students than the first hypothesis, it is not particularly potent for explaining all the data.

The third hypothesis -- i.e., that there is a relationship between the preparation of students and the content/material/format of assessments -- implies another interpretation, and this one is a bit more nuanced than the other two. During the discussion of this hypothesis, some folks suggested that some students might have done particularly well because they recently took a pre-calculus course (so much of the content is therefore fresh in their minds). Another facet of this hypothesis discussed was that some folks spent more time working out the practice problems (which were not graded, and so not accounted for in the data). Yet another facet discussed was that some topics were less prominent on the exam than on the homework (i.e., there were a few questions about evaluating limits analytically, but none required you to use the conjugate "trick" we learned and practiced). Clearly, this is a more complex hypothesis. Yet, if we operated under this hypothesis, it seems as though we might have room for generosity on the part of instructors //and// students. That is, this hypothesis suggests that there are things both students and instructors can "own." Moreover, this hypothesis can also explain the relationship (or lack thereof) between the exam and homework scores -- there is complexity in how students prepare and how exams reflect the content.
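For reference, the conjugate "trick" for evaluating limits analytically works like this (a standard textbook example, not necessarily one from the homework): multiply by the conjugate to clear the radical from the numerator.

```latex
\lim_{x\to 0}\frac{\sqrt{x+1}-1}{x}
  = \lim_{x\to 0}\frac{\sqrt{x+1}-1}{x}\cdot\frac{\sqrt{x+1}+1}{\sqrt{x+1}+1}
  = \lim_{x\to 0}\frac{x}{x\left(\sqrt{x+1}+1\right)}
  = \lim_{x\to 0}\frac{1}{\sqrt{x+1}+1}
  = \frac{1}{2}.
```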

So, while we might adopt any of these hypotheses in examining the score distribution, we seem to agree that the third one is the overall best. But, what does that mean for us now -- both me, the instructor, and you, the students? Well, it means that we all need to do some owning up, and we all need to look for ways to improve next time. I, for one, will be looking for ways to make the exam content reflect more directly on the homework (or, perhaps, do a better job of varying the homework). What are some things you might do?



In the end, however, we've come to a very important understanding about each other and the nature of our teacher-student "contract" here. Namely, you can see that, by eschewing the first two hypotheses, I am interested in finding ways to fairly interpret what you know and can do with respect to the concepts and skills that we cover in this calculus course. I, like most teachers, call those ways "assessments," and those are my primary means for giving you feedback and, eventually, a grade in the course. This may all sound perfectly reasonable and obvious, but keep in mind that you now can see that I am //actively looking for better ways to do this//, and not content to assume that some of you simply "get it" while others do not.

[As an aside, if you've ever been in a class with a teacher who operates under that first hypothesis, you may have had a very positive experience, but usually only if you fall into the "gets it" camp. And, given the usual difficulty most folks have in learning and adapting to calculus concepts, this is probably not a gamble most of you would want to take here.]

Yet, there is also something I've come away with in all this: I know that my students see where I'm coming from, that they agree with my interpretation/explanation of the homework and exam scores, and that they understand that this means we are all working toward the same goal. Specifically, while I am certainly responsible for the four hours per week of in-class "preparation" time, it is clear that the bulk of their preparation occurs outside of class -- few would reasonably expect to score a 90% on the exam with in-class time as their only source of preparation. As a result, given the agreement on the hypothesis, I can now feel more confident that students will "own" more of their preparation, as I "own" the content and format of the course.

