LILLIAN MCDERMOTT: We have a group in the Physics Department at the University of Washington. Three of us are full professors. We have a research assistant professor, and we have what the university calls a lecturer -- who doesn't lecture, of course. She's a former high school science teacher and former chair of science in a local high school. And you would call her, I guess according to the previous talk, a teacher leader. We have 22 graduates of our programs. They got their PhDs in physics, as Joyce alluded, by meeting the same standards as everyone else. And they're mostly in the academic world, but not entirely. We have PhD candidates, and then we have two foreign visitors, PhD students from abroad. And I want to say at the outset that without NSF, nothing like this could have happened.
Now I want to say something about discipline-based education research. It differs from traditional education research. In our case, it's not hypothesis driven. The emphasis is not on educational theory and methodology. We don't set out to show that a particular approach works or doesn't. It's a different way of going about things. It's primarily focused on student understanding of the science content. And we believe it's an important field for scholarly inquiry by science faculty in science departments. And that is not to say it doesn't have relevance to teacher preparation; I'm going to dwell on that later. We believe it's effective not only for K-12 education but for K-20 plus. We're always learning, because we approach everything by asking:
Do we really understand? In order to teach basic science -- and when I say science, in my case it's physics -- you really have to understand introductory physics in depth. And what happens is, the further on you go, the further away you get from introductory ideas. And that is crucial, because most of the faculty who are teaching this subject are very far away from what it was like to learn it for the first time. And there are at least four reasons why this belongs in a science department, as distinct, let's say, from a college of education. The incentive to improve the learning of your discipline is most likely to be found within the department. The depth of knowledge to make judgments as to whether or not people understand lies there. And that's where the students are.
How do you get at the students if you're not within the department? And I would say not very well. And moreover, there's something else: faculty are more likely to be influenced by other faculty in their own discipline than by anybody else. So if you want to have influence on what's done in your discipline, you have to -- I believe -- come from within the department. At least that's the way it is at our place, which is very research oriented.
I think the overall goal of our group is to understand what makes physics difficult for students and how to promote learning. So what we try to do is expand the research base on learning and teaching. And one of the ways of doing that is to identify the conceptual and reasoning difficulties in learning a specific body of content. Then we design and assess the instructional strategies that we have developed, and then we try to develop curriculum. I'm going to illustrate some of this. This is just a little bit of introduction. And we believe that this can contribute to the cumulative improvement in the effectiveness of instruction, provided you disseminate your instructional materials.
You apply results from research and curriculum development to undergraduate education and to the professional development of K-12 teachers and future faculty, which we try to do. And then we report our findings in journals and at professional meetings. I have something further to say about that. There's a constraint: whatever we do has to be consistent with the culture of a traditional research-oriented department.
And these are conclusions -- I'm going to put them up front in case I don't have time to say them later. I believe that in order to survive, you have to participate in the life of the department and identify with it, which is what we do. You need a group in which faculty have the same status as others in the department: not dependent on one person, not dependent on the particular chair, and not dependent on the dean. Those change, so you have to be a full citizen in order to make what you do last, perhaps beyond you. Then you have to match the culture, which means reporting the results at meetings that physicists attend and publishing -- publish is the missing word -- in refereed journals that are readily accessible to them, in language that they can understand.
Avoid frequent use of terms like constructivism, scaffolding, zone of proximal development, pedagogical content knowledge. When I give a colloquium to a physics department -- even though I think these words have meaning and I know what they mean in English -- I don't use those words. That would decrease my credibility immediately, and they would tune me out. Now, I can't speak for people in other disciplines, but I think I can speak for people in the physics community. And note something else: intellectual merit is likely to be more highly valued than broader social impact. Not that those things aren't important, but they're not consistent with what the community values. You can value them, I value them, but that's not what we do.
And I know many of you are in a different situation. I'm just describing what our situation is. And now I want you to do a translation. I'm going to use the word physics, but analogies can really be made to other disciplines. So when I say "physics" you might think chemistry or science or something else. But I'm going to try to be specific, and so I'm going to stick with physics. The perspective that we take as a group is that teaching is a science as well as an art. I'm not going to say that it's not an art, but we're approaching it as a science.
So what do we do that's scientific? We conduct systematic investigations. We apply our results, and that means we develop instructional strategies -- identifying the specific details of what it is that works. We assess their effectiveness, usually by pre-testing and post-testing, and I'm going to illustrate that.
Then we document what we did so it can be replicated -- this is what you would do with a scientific experiment -- and report results at meetings and in papers. So this is the way we try to fit into the culture. Actually, I'm a believer in the culture, so it's not that hard to do. And we think of what we do as an empirical, applied science. That's the perspective that we're coming from. What do our results generally show? That many students encounter the same conceptual and reasoning difficulties in learning a particular body of material. Some people overcome them one way or another, but there are many others who don't know how, and this is one of the things that you have to address. And what we have also found is that the same instructional strategies that work with one group of students work, in modified forms, with other groups. These are generalizations: the results are generalizable beyond a particular course, instructor, or institution, and they are reproducible. If you write them up, they become publicly shared knowledge that provides a basis for the acquisition of new knowledge and for the cumulative improvement of instruction. This is how science proceeds. And they're a rich resource for improving instruction.
So our focus is on learning by students, and so we try to identify -- and I will illustrate again -- what students can and cannot do. We design instruction to develop a functional understanding. And by that I mean the ability to do the reasoning necessary to construct and apply conceptual models -- in our case, for the interpretation of physical phenomena. And we assess what happened; I'll try to illustrate that as well. So we do research on the learning and teaching of physics, and we apply it to the development of curriculum. And then we try what we do in the classroom, over and over and over again. There are words for it in the education literature, but I'm just going to say over and over again, until we have reproducible results. And then we send what we've done to pilot sites.
Pilot sites are all over the country. They are colleagues we know in physics departments, varying from -- well, you can include Harvard and Illinois and Purdue -- to community colleges, to see whether what we do depends on where you are, or whether we can do something that's generalizable for other people. And again, just very briefly, we have two curricula that we've been developing for over 30 years.
We've been working on Physics by Inquiry, and that's motivated primarily by the preparation of pre-college teachers to teach physics and physical science -- or, to use another phrase, the professional development of people who are already out there. And it's self-contained, laboratory-based, no lectures. However, in real life, we're part of this department. There are 1,000 students taking the calculus-based physics course at any time, almost another 1,000 taking the algebra-based course, and what can we do? You can't do as much. So we have kind of a band-aid curriculum: Tutorials in Introductory Physics. It has Physics by Inquiry as its ancestor, but it has a compromise built into it.
What can you do in a 50-minute hour to help those students learn? It's supplementary: it does not replace the textbook, but rather supports what they're learning. So, so much about what we do. And these are the populations that we work with or have worked with: introductory students in physics, engineering, and other sciences, and under-prepared students.
For years we had a program in which we tried to help students who were not prepared to make it in the gateway courses, to do well enough so they could go into engineering or on to medical school -- we learned a lot from that. K-12 teachers I mentioned already, students in engineering, and we have the option of advanced undergraduates in our upper-division courses and graduate students.
So I'm just going to sketch this briefly, because that's not mostly what I want to talk about. This is part of our culture. We have courses for pre-service K-12 teachers and an in-service NSF summer institute: six weeks, four days a week, all day. And then during the academic year we have what we call a continuation course. Once a week in the early evening -- officially two hours, but they stay three or four hours -- teachers who have been at one of our pre-service or in-service institutes come. And they build a community. See, when I first started that, my idea was that they were going to learn more physics. Well, they do, but more importantly they build a community in which they can come once a week, talk K through 12, talk with one another, and work together. And I'm convinced that that's a very important part, and that a department that has ongoing faculty can keep providing that as programs come and go, and grants come and go. So that's a little bit of preaching there.
I think I wanted to comment about something -- that's our teachers -- and I think I left out something I meant to put in. Oh yes, maybe I said this already, but we have professional development for current and future faculty. And there's a PhD program; I've alluded to that. We have a graduate teaching seminar that meets once a week for all the TAs in the department. And then on the side we do workshops -- the new faculty development workshop -- we have graduate internships, and we have visitors who come for an extended period of time. The reason for mentioning all of this is to give you a sense of the environment in which we do what I'm going to mostly talk about. So I've shown this already.
Now, I realize that professional development involves a lot more than what we do. Those components are all important. But the point that I want to make is that discipline-specific knowledge -- the concepts, reasoning skills, and representations -- is where research by physicists or other scientists can promote the intellectual development that teachers need to teach science effectively. That's what I'm going to focus on. So how do you do this best? I thought I would pick an example. And I tried to pick an example that most of you will at least be slightly familiar with. However, as you're going to see, even this relatively simple example is one that isn't that simple.
And so I don't expect anybody, as I talk quickly, to get it all. But at least the topic is something that you'll feel familiar with. It takes weeks in the teachers' course, and at least a good full hour plus in the course for introductory students. And, like the other things we do, it's based on research. And I'm going to describe some of that research and summarize the results. And, as I said, nobody can probably keep up with the speed at which it's going to be presented.
So what do we do? Well, our research often involves what we call individual demonstration interviews, and I'm going to illustrate that: a one-on-one conversation with a student that might last 45 minutes or an hour, in which you probe the student's understanding in depth. That's a luxury that we can't always have. Mostly, we do pre-testing and post-testing. I'll illustrate that too. Why? You want to ascertain whether or not the limited number of people you can interview represent the population as a whole.
And the other thing you want to know is: does it work? Was what you tried to do effective? And for this you really need a large population. And then, since we're in the classroom, you see how people learn. This interchange that Joyce referred to -- asking questions and trying to guide students to the answers by other questions that you ask -- is the way we do a lot of what we do.
So what I picked is something that all of you will probably have been exposed to: geometrical optics, which I thought people might be able to identify with. And what can students generally do? We no longer even teach it in introductory physics; it doesn't leave enough time for quantum mechanics in the first-year course. So I'm being a little sarcastic, but I can't resist it. Anyway, in the old days we had a formula like this. And so here's an arrow 2 centimeters long, 25 centimeters in front of a lens whose focal length is 17.3 centimeters. Predict where the image will be located. And so we can substitute in the formula and we can get the answer from that. So what could students not do? Here I want to digress for a moment.
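For concreteness, the standard quantitative exercise quoted here -- the kind students can solve by rote substitution -- works out like this. A minimal sketch; the function name and code are illustrative, not part of the talk:

```python
def thin_lens_image(object_distance_cm, focal_length_cm, object_height_cm):
    """Locate the image using the thin-lens equation 1/f = 1/d_o + 1/d_i
    and the magnification m = -d_i / d_o."""
    image_distance = 1.0 / (1.0 / focal_length_cm - 1.0 / object_distance_cm)
    magnification = -image_distance / object_distance_cm
    return image_distance, magnification * object_height_cm

# The 2 cm arrow, 25 cm in front of a lens with f = 17.3 cm:
d_i, h_i = thin_lens_image(25.0, 17.3, 2.0)
print(f"image distance = {d_i:.1f} cm, image height = {h_i:.1f} cm")
# prints: image distance = 56.2 cm, image height = -4.5 cm
# i.e. a real, inverted image about 56 cm behind the lens.
```

Getting this number right is exactly the "facility in solving standard quantitative problems" the talk argues is not, by itself, evidence of a functional understanding.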
One of my, I guess, friends and colleagues, Fred Goldberg, had at that time come as a visitor on sabbatical to our group to switch from atomic physics to physics education research. And when he came, he wanted to work on physical optics. And I said, "Oh Fred, you know, that's awfully hard to start with. Why don't you pick something simpler, like geometrical optics?" Well, as I just mentioned, Fred stayed for two years and he never got to physical optics. And he's a professor now at San Diego State. But I just want to comment that these things are not as easy, if you look at them in depth, as you might think.
So what questions did we ask? Well, what would happen on the screen if the lens is removed? Here you have a brightly lit filament and a converging lens, with an inverted real image of the filament on the screen. Now, before the students took any physics: no problem. Okay? If you take away the lens, there's no image. But after having worked problems like that, they have a slightly different interpretation. You know what the most common answer is? The image is right side up. What happens if the top half of the lens is covered? Well, should it make any difference? Apparently a good fraction of the class thinks so. What happens if you move the screen toward the lens? A common answer is that the image gradually disappears. And how many people get that correct? About 40%. And the problem was, it didn't matter if they had taken the course in high school, if they had just had it at the university, or if they had worked through the labs doing the things you're supposed to do.
If any of you remember any of those experiments -- it didn't matter. And that's very depressing. Okay, so you say: try it with different populations. I want to get to this. Undergraduates taking the introductory course, which you saw, K-12 teachers, even graduate students: you get similar results. Now, graduate students do much better -- I don't want to pretend that they don't -- but they're the ones who are the TAs in our course. They have to be at 100% if you want them to be teaching other people. And the other very interesting thing is that the results varied within about 5% for a given large population, regardless of the variables: instructor, academic quarter, time of day.
At least at a given university -- and it's not just one university, it's many universities -- the results don't vary that much if the students have been taught in the same way. This is very useful because your control groups are built in. It's often very difficult to arrange a control group: to get permission and to have the other instructor agree with you. We don't have to, because we deal with thousands of students and we have data that show it doesn't matter. And that has led to several practical generalizations. These generalizations, as I said, are evidence-based. They're not hypothesis driven. That doesn't mean you couldn't make up a hypothesis afterwards, but it's not what motivates it. And they've been inferred and validated through research and through the development of Physics by Inquiry and Tutorials in Introductory Physics, which are the two published curricula that we have.
So here's one of them: facility in solving standard quantitative problems is not an adequate criterion for a functional understanding. And I think the example that I gave you illustrates that. You need questions that require qualitative reasoning and verbal explanations for assessing what students understand. And if that's what you need for finding out, maybe it's a good way to teach. The change to magenta color indicates that that's a way to help teach that seems reasonable.
The other thing is that the connections among the concepts, the formal representations, and the real world are often lacking. And I think the example illustrates that too. You need practice in relating the formalism of physics to the real world. And what didn't students do in this case? I mean, this is very quick, but I want to give you a sense. The principal rays you learn to use to locate the position of the image -- many of you, I think, have probably had that -- are not necessary; they're just an algorithm for locating it.
The area of the lens affects only the brightness -- remember the half lens -- not the extent of the image. And for every point on the object there is a corresponding point on the image. All right, some of you may have had that. But it's deeper than that. There's no basic understanding that they can use as a ray model of light. The ideas that almost every child can repeat -- light travels in a straight line, and every point on an object is a source of an infinite number of rays emitted in all directions -- are just memorized. They have no meaning for lots of people.
So we decided, "Okay, maybe the lens complicates everything. Maybe we should get rid of the lens." So we did, and there's a little bulb. And then there are some apertures -- a single triangular one in this case, I believe -- two bulbs as point sources, and an extended filament. And for mass numbers we go to the calculus-based course, which, as I mentioned before, has 1,000 students at any time in each of the three quarters -- we're on a quarter system, of course -- and we have done this for many years.
So what we do: we've administered this -- and this is an example, not the only one -- before and after standard instruction in calculus-based physics to several thousand students and to in-service and pre-service teachers. We have a limited number of in-service and pre-service teachers, but we have lots and lots of undergraduates. And so this was given as a pretest. A single bulb with a triangular aperture: pretty good. Two bulbs with a triangular aperture: not so bad. However, when you get to the long filament bulb, what you usually get from many of the students is a single triangle. In other words, they don't recognize that each point on the filament is a separate source, and that together those sources produce -- I don't know how well you can see it there -- a trapezoid with a triangle on top. Now, the fact that you didn't see this right away is not at all unusual. We give faculty development workshops to people who are doing high-level physics, and very often they don't see it right away. But it can be seen right away. It can be taught so that students understand it. And that empowers a teacher, whether or not he or she is teaching at that level: if you know more, it gives you the strength of knowing what is appropriate for that age level. And we want to give teachers the sense of confidence that they can do that which they're teaching, and more. So that, I think, is very relevant to teacher preparation. So research gives evidence of a gap. The instructor and the course goals are in agreement.
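The ray-model reasoning behind these aperture tasks can be sketched in a few lines. This is my own illustration with assumed geometry, distances, and names, not the group's materials: every point on the source sends rays in straight lines through the aperture, so each source point projects its own magnified copy of the aperture onto the screen, and an extended source superposes many shifted copies.

```python
def project_through_aperture(source_pt, aperture_pts, L1, L2):
    """Project aperture vertices onto the screen for one point source.
    source_pt and aperture_pts are transverse (y, z) coordinates;
    L1 = source-to-aperture distance, L2 = aperture-to-screen distance.
    A straight ray from s through a hits the screen at a + (a - s) * L2/L1."""
    s0, s1 = source_pt
    scale = L2 / L1
    return [(a0 + (a0 - s0) * scale, a1 + (a1 - s1) * scale)
            for (a0, a1) in aperture_pts]

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # triangular aperture (cm)
# Sample a vertical filament: each sample is one point source.
filament = [(0.0, z) for z in (0.0, 0.5, 1.0)]
for src in filament:
    print(src, "->", project_through_aperture(src, triangle, L1=10.0, L2=20.0))
# Each source point yields the same magnified triangle, shifted downward as
# the source point moves up; stacking the copies builds the
# trapezoid-with-a-triangle-on-top pattern the students are asked about.
```

The point of the sketch is that nothing beyond "light travels in straight lines from every point of the source" is needed to predict the pattern -- which is exactly the reasoning the pretest shows most students cannot yet do.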
But the instructor and the student, and the student and the course goals, are not necessarily in agreement. Everybody knows that, but most people don't know that the gap is much greater than most instructors realize. For the examples I gave, a lot of instructors -- I know this, I don't just think it -- feel that if they know something, then telling somebody is good enough. And we have data showing that there are instructors who are good, popular, loved, and nevertheless, you give the same kinds of questions to their students and the students don't do any better.
Sometimes what happens is that students in the classes where the instructors are not so popular do better, because they're desperate and they teach themselves. So I'm just commenting. It's true. It's true.
So we have a lot of data on certain types of qualitative questions. The performance of undergraduates and K-12 teachers is essentially the same over a wide range of student ability, before and after instruction. Calculus-based, algebra-based, or what they call descriptive courses don't make any difference, with or without demonstrations. That's not to put demonstrations down, because I think they are important for a lot of reasons. Nevertheless, people very often remember what they already knew, or thought they knew, and not necessarily what they saw. And we have data that indicate that as well.
Standard laboratories: you can go through them thoughtlessly just by following the directions, and it doesn't seem to make a difference. We have the data. Large and small classes: I happen to like small ones better, but it doesn't matter on certain kinds of questions. Now, this one I always have to apologize about... Regardless of how popular the instructor is -- that may make you feel bad -- the instructor cannot do the students' thinking for them. And so hearing lectures, reading textbooks, seeing demonstrations, doing homework, and performing lab experiments often have little effect on student learning. Don't get discouraged; there are ways out. So another generalization: a coherent conceptual framework is not typically an outcome of traditional instruction.
Students need to go through the reasoning involved in the process of constructing scientific models and applying them to predict and to explain real-world phenomena. So now comes: teaching by telling is ineffective for most students. They have to be intellectually active. Most of them don't know what it means to be intellectually active in a science. And they have to be able to do the reasoning necessary to apply the concepts and principles in situations not expressly memorized. Some of us can think back to when we were taking introductory physics: "Oh, here's a problem, turn the pages back, see how the sample problem was worked." And you get over that. You don't survive if you don't get over that. But people take only one year of physics, so they don't have a whole lot of time to get over it.
So we teach by guided inquiry. And what do we mean, since this is a term that has different meanings to different people? For us, guided inquiry, as our group interprets the term, is intended to help students arrive at an understanding of a physical concept that a physicist would accept as correct for that stage of learning. It's not anything goes. It's got to end up where you need to be. That's, again, coming from a disciplinary perspective. And there's no time to describe this, but students are guided in constructing a basic ray model -- to go back to the examples I showed.
Questions that require qualitative reasoning and verbal explanations help develop a functional understanding -- one that students can explain. However, we know that in learning a given body of material, people tend to make the same errors. So we know those errors are there, and we explicitly address them. And that is something that is very important, and for teachers it's crucial. So the generalizations are a practical guide. I just wanted to mention Physics by Inquiry, which is what we use in our courses for teachers.
The concepts, reasoning ability, and representational skills are developed together within a coherent body of subject matter. We feel it doesn't work to say, "Oh, today I'm going to be developing graphing skills and tomorrow I'm going to be developing that, and that," and have a different topic for everything. It doesn't work very well. And physics is taught as a process of inquiry, not an inert body of information. And there are no lectures in any of our courses for teachers -- to cite the truism, "People tend to teach as they have been taught," which I think is pretty general.
So here's a post-test. Now, I wouldn't expect you all to be able, on the basis of what I said so quickly, to do this. But it's quite challenging. You have a point source and you have an extended filament, and you're asked what you would see on the screen. So after giving you half a minute, no longer, I'm going to show you. The point source gives rise to the gamma shape, and the extended source to a series of gammas. All right, so that is a post-test. It isn't the same as the pretest, but it's similar. But it's harder.
And what happened? Well, correct or nearly correct -- I gave you the data on the pretest. The image mimicking the hole in the mask is a common difficulty. And after the tutorial -- now I'm using the tutorials, which are only one hour; later I'll come back to the teachers, but that's just because I can do this pretty quickly -- it's still not good enough, and this material isn't that hard. So we went through it. What could we do? Well, the graduate student who was working on this as part of her dissertation decided to try a truly extended source: a light bulb. And we go through the process of asking questions, and eventually what we get is -- I don't know if you can see this either -- each point on the light source gives rise to an image of the triangle, and you get this. Now that's part of the curriculum.
What happened? Well, as a result of that, here's a post-test. So this is after they've done this. What would you see on the screen? There's a circular light source and a point source. And not to keep you in suspense, this is what you get. It's asymmetric, but we don't fuss about the asymmetry; that's the least of it, because the F is asymmetric. And let's see what happened, because what I'd like to show you is what keeps happening. So after the revision -- the tutorial is a 50-minute hour; it's not much, and teachers get more than that -- it's improved. Now, has it improved enough?
Well, you have to be practical. You can't wait till every single person knows everything. So we are fortunate to have graduate TAs, who have been admitted to the graduate program on the basis of the GRE, who must come once a week because they are TAs in the course, and who must take the pretest.
So we have data. When the introductory students, the undergraduates, reach or surpass the level of the graduate TAs -- who, remember, have left this material behind for four years or more -- then we say, "Enough." I mean, you can't keep going at everything forever. And that's our criterion. I think I say that on the next slide, but I'm not sure. Oh, certain conceptual difficulties don't go away unless you do something -- and you must do something specific or they won't go away. This is based on experience. And our criterion: when, in a given tutorial, we get to the point where the introductory students after the tutorial are at the same level as, or not far below, where the graduate students are initially, we say, "Enough on that topic."
That's not good enough for teachers, however. So I just want to comment -- you always get this as a question, so I'm going to preempt it. What happens to their problem-solving ability? Because, after all, we've taken 50 minutes out of the week. To say a word about our faculty: we prefer written pretests, but the faculty did a mathematical calculation. There are ten weeks in the quarter, and if you multiply ten weeks by how much time we spend on qualitative reasoning, you end up losing two whole lectures. And many of them didn't want to lose two lectures, so we put the pretests on the web. It doesn't matter -- we still prefer written ones. What happens is, on qualitative problems -- that means ones without numbers -- students do much better. On quantitative problems it's typically somewhat better: the more complicated the problem, the better it is, and retention is much better. We have data for that. And even so, we've taken away 50 minutes they could have spent practicing the problems at the end of the book. So I just wanted to comment that this affects everybody. Now let's get specific to teachers. So, okay, they do much better. They spend much more time, and they go from a pretest, in this particular case, that's less -- but not that much less.
And after working through Physics by Inquiry, they're at the 90% level. Now what you're interested in, of course, is what happens when they get into the K-12 classroom. We don't have a whole lot of opportunities to find that out, but we have a few. And we have them because of contacts with a local school, in which people who've been in our courses continue to cooperate with us and give these tests to their students. And I'd like to show you something that is relevant. We have, as I said, an unofficial practicum in which our pre-service teachers are guided -- Physics by Inquiry is not intended for children -- in adapting what's in Physics by Inquiry for use with their students and assessing the results. So here's at least some data we have.
An experienced, well-prepared teacher also adapted Physics by Inquiry for use with her students and assessed the results. So we have pre-service teachers who know the material, and an in-service teacher who knows the material and the other things you need to know in order to teach effectively. And this is what happened. In a ninth-grade classroom, with Physics by Inquiry modified by pre-service teachers, they got their students to the 45% level, which pretty much matched where they had been.
But the teacher who was experienced, who really knew her physics very, very well, and who had also taught in the classroom, got her ninth-grade students to the 85% level. So there is a difference. Nevertheless, even inexperienced people can do a whole lot better when they understand the material. There's nothing like understanding -- and knowing how to deal with the population, of course. And I'm not minimizing the importance of that. But that's not what we're particularly good at doing. So that is just summarizing the data. Under-prepared teachers:
less than 20%, because of the teachers themselves -- you don't expect the students to do better than the teachers. Inexperienced, well-prepared teachers: 45%. Experienced, well-prepared teacher: better yet. And the other thing that we do in these courses: the teachers come to recognize the importance of assessing the results of instruction, so that you go on modifying what you're doing based on what the students can do.
So now we have to ask what we want students to know and be able to do, and prepare teachers accordingly. And I'd like to mention, in case anybody doubts it, that the subject-matter preparation of K-12 teachers occurs in science departments, not in education departments. And therefore I really believe that teacher preparation is a critical responsibility of university physics faculty -- of science faculty.
As I said, make the analogy to your own field, because I'm so used to saying physics. And the AAPT, the APS, and the AIP -- those are professional organizations of physicists -- have recognized this and made a statement to that effect: namely, that departments need to, and should, get into the preparation of teachers. Now this is not easily catching on. I can say that in our department, if our group didn't believe in this, it wouldn't happen. However, we couldn't do it if we didn't also do the other things that the department values. So that's a little bit of preaching, but that's the way it is.
Now what's wrong? In our opinion, most physics courses available to teachers do not provide enough experience with phenomena. They don't spend much time developing concepts and the scientific reasoning that goes with their application, and there's too much emphasis on mathematical formalism in the high school courses. There aren't even courses, unless they're solely descriptive, for elementary and middle school teachers. And hands-on is not enough. They're also poor role models, because most of them are taught by lecture -- and, as we all know, a truism again: teachers tend to teach as they've been taught, both what and how. So we believe they need intensive preparation. They have to understand basic concepts and principles in depth, recognize what makes physics difficult, and know how to address those difficulties.
They need -- this is something else -- to learn or relearn physics in the way they are expected to teach it. That isn't easy to do. You could know the material very well but not know how it is that somebody comes to understand it, and that's why the opportunity of having been through it yourself is very important. And in my belief -- our belief -- they really need special courses that are not usually part of the department's catalog. The other thing that doesn't happen in a typical course: reasoning ability does not develop by itself; you have to do special things to make it happen, and recognize that how we know is at least as important as what we know. And for teachers that's even more important, I think, than for most people. Now what do our colleagues say? Okay: "Send content experts, scientists, and graduate students into the classroom." And I couldn't resist this particular quote from somebody I respect very much, Marvin Cohen. He said, as quoted in APS News, "If we could get members to go to K-12 schools and levitate a magnet or something, we really think these efforts could bring great rewards." They might bring enthusiasm, but they're not going to bring knowledge. They may be motivational, but it's not really very meaningful learning. And the other idea is that what you really need to do to make students love physics is provide them with examples of cutting-edge research.
And I'm going to show you an example. The results may be motivational, but teachers are unable to develop the depth of understanding necessary for transferring either the content or the process of such research to their students. It's not a substitute for developing conceptual understanding or scientific reasoning skills. Let me give you an example that I've saved over the years, from the cover of Physics Today, which goes to every physicist who belongs to the American Physical Society or the American Association of Physics Teachers. Here's a physicist teaching floating -- sinking and floating, I suppose -- to a group of third-graders. And as you can see, they're just really thinking. And you can read the description. There's nothing wrong with getting people interested like that, but you cannot look at it as something that's going to help somebody become an effective teacher. I don't know how many of you are familiar with sinking and floating, but it's one of the things that's often taught in elementary school.
Another thing that teachers will do, and a lot of workshops will do, is work through and reflect upon K-12 curriculum. Is that sufficient preparation? This short-term solution is wanted by teachers; it takes less time. But they need to understand physics in greater depth than their students, because something unexpected can come up -- and it does. They need to understand the material at a deeper level than most university students do. Most university students are not going to go on in physics from these courses, and if they do, they'll soon be well beyond introductory physics. The teachers have to be able to recognize and address the common difficulties that they may have -- almost certainly will have -- with their students. And they also need to know the developmental level of their students, obviously, because they are adults and their students are not, to state the obvious again. And again, back to the fact that they need the opportunity to learn and relearn the material.
Some of the implications -- and I'm just going to go through these quickly; they're just a list, and lists are boring. Teachers have the same difficulties; experience in teaching doesn't necessarily help you understand the material. We have data on that: data in which teachers have been teaching something for 12 to 15 years, and they're good teachers, but when they discover, owing to some of the questions that we ask, that they don't truly understand the material, they're conscience-stricken. They shouldn't be, because everybody is like that. Just having teaching experience doesn't, by itself, make you understand the material better. And there are certain conceptual difficulties of such seriousness that you can't go on.
Good pedagogy is not enough. Students of well-prepared teachers do better on questions that probe their functional understanding -- their ability to explain what they've done. And on broad assessment instruments: the FCI, the Force Concept Inventory, was a great contribution; it woke the faculty up, and there's no question about that. However, it is not a sufficiently good measure of how well people understand a given body of material. It's important and it's done a lot, but it's not good enough to identify specific difficulties. And you can't take a standard course and modify it, or even supplement it, to prepare K-12 teachers to teach physics and physical science by inquiry -- not well enough, anyway.
So what's important? And here I've made it science. The study of science by undergraduates and K-12 teachers should help develop ability in scientific thinking; understanding of the nature of science and what it means; and critical thinking -- can you distinguish scientific reasoning from personal belief or opinion?
Reflective thinking: How do you know when you understand and when you don't? That's the first step -- you have to know what you don't understand. And then, what kinds of questions do you need to ask yourself to determine whether or not you do understand, and to develop your understanding further? These are the kinds of questions that are very important when you're learning any body of material, and they transcend the study of physics or any other science.
And to end on an idealistic note, here's a quote. Some of you may recognize it. "Science is a way to teach how something gets to be known, what is not known, to what extent things are known (for nothing is known absolutely), how to handle doubt and uncertainty, what the rules of evidence are, how to think about things so that judgments can be made, how to distinguish truth from fraud, and from show." Some of you may recognize it because it's a quite famous person being quoted, Richard Feynman; it's from Genius by James Gleick. And that's my footnote for it. So thank you.