Iris Weiss, Horizon Research, Inc.
Barbara Miller, Education Development Center, Inc.

IRIS WEISS: So the title of our talk is "Determining What We Know and How Well We Know It: The Promises and Perils of Knowledge Management." Let me assure you-- You can take notes if that's your style. But you don't need to, because these slides will be on the MSP/KMD website shortly after this talk.

Okay, we want to start by getting you involved. There is a green sheet on your desk, on your table, where we have described a context. What we'd like you to do is this: we've given you two plans that could be used for deepening teacher content knowledge, with the idea of improving teachers' mathematics and science instruction. As individuals, please read the plans and think about which one you would recommend, and then talk at your table groups. We'll give you about ten minutes to read and talk. Go.

Okay. I'm going to ask for a show of hands. How many of you would say that your conversation at your table was primarily based on research evidence? Raise your hands please. How many of you believe that your discussions at the table were primarily based on people's experience? And how many of you think it was about equally both? I will point out that there are mixed votes within tables. So I'm not sure you were having the same conversation. [laughter]

So increasingly, as we all know, there have been calls for basing practice on empirical evidence. But, for lots of reasons, people either have to or do rely on a variety of sources in making their decisions. You may be relying, to some extent, on research findings about particular strategies. We know you're relying on your own experiences, as well as the experiences of people whose opinion counts with you. To some extent, we're operating as a community on beliefs about what is important. And in these two plans, you may have recognized a difference of opinion, and it plays out like this in the work that we've been doing.

There are people who say "You can't apply what you don't know. So you have to start by helping teachers understand content at a deep level. And then they can begin to think about the implications for their classroom practice." About an equal number-- and I have no empirical evidence for this, just the number of people who say it when we have conversations-- say that teachers are, by their very nature, practitioners, and that you need to start from their practice as a way to draw them into understanding a need for, and therefore being willing to, deepen their content knowledge. In fact, the empirical research that we have does not provide very much guidance about which plan would be more effective in this context or in any other. There are lots of things that we don't know. And we're never going to have sufficient empirical evidence to guide all of our decisions. There are lots of decisions to make. Each context is different. There are lots of variables, lots of trade-offs. But-- and this is an important but-- the fact that we don't know everything doesn't mean that we don't know anything. And, depending on who I'm talking to at any given moment, I may be highlighting the fact that the glass is half full, or I may be highlighting the fact that the glass is half empty.

Today I'm afraid it's a half-empty glass, so I'll just prepare you. Nevertheless, we are all expected to draw on what is known, the accumulated knowledge of the field, while we continue to forge new knowledge. And that was the purpose for NSF funding our project, the Knowledge Management and Dissemination (KMD) project: to help future MSPs and other efforts ground their designs, and their evaluations, in what is already known.

BARBARA MILLER: Iris is reminding me that telling you who I am might be a good idea. I'm Barbara Miller from Education Development Center. I'm co-PI on the KMD project with Iris.

The project, as Elizabeth indicated, is really about synthesizing knowledge, primarily knowledge generated by the MSPs, but also what we know from the field, and integrating it, so that we can draw from a broader base of knowledge to inform our practice. So in this definition, there are a number of features that you're going to hear us talk about in our presentation this morning: generating knowledge, acquiring knowledge, synthesizing it, integrating it, sharing it. That's all part of what we understand to be knowledge management and dissemination.

In collaboration with NSF program officers, we identified three areas that we were going to focus on for knowledge management and dissemination. It's these three. You see those lovely abbreviations because we're probably going to be using them in various slides: TCK meaning deepening teacher content knowledge, TL meaning teachers as intellectual leaders. Horizon is working on teacher content knowledge. EDC is taking the lead on teachers as intellectual leaders. And our colleague at WestEd, Ted Britton, is taking the lead on teacher induction. We're going to be talking, today, about teacher content knowledge and teacher leadership.

We had you do the opening activity, about the ways we already think about and use research and practice, for a couple of reasons: One, at 8:30 in the morning, the very least we can do is have you talk to each other, as opposed to only listening to us. That only seemed fair. But the other is that our project is really about acquiring two types of knowledge: empirical research findings and practice-based insights, with an emphasis on what we know and what knowledge is being generated by the MSP community.

So a little bit about each of those kinds of knowledge, just to set some context. And then we're going to go in and talk about what we know about deepening teacher content knowledge and what we know about using teacher leaders. In the work that we've done in KMD with regard to the empirical research, we've done some things that will probably sound very familiar to you. We identified and screened studies, using a couple of different databases of research. You'll see a bit more about some of the screens that we used to focus in on a set of studies. We summarized those studies, nothing too amazing there. But this next piece that we did was a little bit different, and was very helpful for us in this work.

We developed and applied standards of evidence, so that we'd have a better understanding not just of what we know from empirical research, but of how well we know it. What confidence do we have in the findings? Those standards of evidence were ones that we developed and applied to both qualitative and quantitative research studies, drawing on earlier work by the NRC and AERA. There are a number of features that will be very familiar.

This is just a quick summary of what those standards of evidence are about. They gave us the opportunity to look at the empirical research with regard to those six areas, and to think about where studies were strong and where they were not as strong. And again, the expectation is not that every study would be equally strong in all areas. I think that would speak not only to the level of rigor in studies, but also to how long a study was and where it appeared in the published literature. We may do all of this in our work, but can we always write about it, so that other people know about it? But in any case, this is how we were thinking about standards of evidence for empirical research.

And, with regard to practice-based insights, there were a number of things that we did to develop and apply a process, so that we were systematic in collecting and vetting practitioner knowledge from various sources. We felt the need to move beyond "I know five smart people. I'll ask them what they think." So this is a way to try to be more systematic about that, drawing from MSP leader interviews, online discussion boards, and online panels for collecting insights, evidence and examples. The last, in particular-- we ran a number of different panels, each with several rounds-- was a strategy that was tremendously generative and productive for us. We can talk more about that at some later point if you want.

But these were the sources of information that led to the practice-based insights that are part of what we have to share with you. And, if you want more information, you can go to this site. Just to let you know, I think this is the second time that you've seen this URL. There will be more to come. This is our subliminal attempt to say "Go here. Go here. There's good stuff here."

IRIS WEISS: Okay, so I'm going to take you through a bit of a sampler of some of the things that we have learned about deepening teacher content knowledge. Our measure of whether we have succeeded in this talk will be whether you go to our website and start looking for more.

Okay, so at its simplest level, here is a logic model for deepening teacher content knowledge and why we would do it. So we have professional development efforts. It's not just for any old reason. The purpose is to deepen teacher content knowledge. And content knowledge can be and is defined in lots of different ways, to include pedagogical content knowledge, understanding of student thinking, as well as disciplinary content knowledge.

And one of the papers on our website describes these different perspectives and the theoretical reasoning behind the push for each of those kinds of knowledge. And the idea of deepening these various kinds of teacher content knowledge is to get payoffs in the classroom that will lead to improved student outcomes of a variety of types. So one of the things we looked at was the link in the literature between teacher content knowledge and teaching practice. Does teacher content knowledge matter? Why should we care? And through an analysis of the literature, we found that teacher content knowledge matters for lots of things.

Teachers who were more knowledgeable selected more important content to emphasize; you could see differences in their instructional strategies; you saw differences in the extent to which they focused sense-making on important content, as opposed to asking the kids "Was this fun?", all of that kind of stuff. I want to call your attention to the bottom row. We saw 21 studies in math and 7 in science. We have no idea why. And, when you look at some of the other slides, sometimes that pattern reverses itself. There doesn't seem to be any rhyme or reason for the number of studies that address different things. In fact, in one of the areas-- I forget which one-- when we looked at elementary, middle and high school, most of the studies in that particular category were fifth grade. And the best we can figure out is that kids are tested in fourth grade. So, if you want to do anything in elementary, you've got to stay out of those classrooms.

Okay, the bottom line, though, is does teacher content knowledge matter for student outcomes? If we don't get the outcomes that we want from the kids, then there would be no reason to worry about deepening teacher content knowledge. Notice: two studies in mathematics and two in science. Let me say something that I'm going to say later, but it may be good context here. For our literature review, we only included studies that had measures of teacher content knowledge-- not self-report, actual assessments of teacher content knowledge. That screened out a whole lot of studies, including many that I've done. So there you have it.

Okay, so for example, on teacher content knowledge mattering for student outcomes: Magnusson and colleagues looked at teachers who had gone through some investigations. And what they found is that, when teachers held an incorrect idea-- this is not a big surprise-- if the teachers had a misconception, then the kids were likely to develop that same misconception. Because again, if you don't know it, it's hard to teach it.

Okay, so in that literature, although it was not the overwhelming number of studies we would have hoped to see, there was support for continuing to say teacher content knowledge matters. Now, what can we learn about how to go about deepening it? Another comment I will make about our literature: there are incentives, all along the way, to report studies with positive findings. So the studies that are in the literature are more likely to have positive findings than the studies that never got submitted or, perhaps, didn't get accepted. So that's another issue. But we focused, particularly, on what we know about efforts to deepen teacher content knowledge, typically through professional development. In this case, we found 13 studies in mathematics and 26 studies in science-- again, for no reason we can figure out.

And now, I'm going to talk through just a sampler-- go to our website; there's lots more-- of what we know about strategies for deepening teacher content knowledge. And for convenience, I'm splitting it into what we know about designing programs and what we know about implementing programs.

So, we know more than what I'm going to present. That's, I guess, the half full. But we don't know nearly as much as we had hoped we would learn when we went to this literature.

So, the advice that we get, in large measure from lots of you in this room, is that it's really important to anticipate that teachers' content-related needs are not going to be of the same magnitude or type for different topics.

And so, here is an example from an MSP. We haven't named any MSPs, and we don't in any of our literature so far; if we want to, we'll get specific permission. We've interviewed MSP PIs. We've gleaned all kinds of information, even from MSPnet, which does have names on it. But we're not using names unless we specifically get your permission.

So, in one case, an MSP institute for secondary math teachers planned to focus on both algebra and geometry. When they got going, they realized that the issue in algebra was that teachers were thinking of algebra in very procedural terms. And so, they decided that the focus of their work needed to be helping teachers see the concepts that underlie those procedures. The very same project made a different decision in geometry. Said the PI, "The big issue with geometry is that, when kids come into high school geometry courses, teachers think that they're ready to do deductive work. In almost every case, kids come in with a very, very poor structural understanding of the geometric figures that they're having to face." And so, in geometry, the MSP decided that they needed to focus on teacher pedagogical content knowledge, and in particular, helping teachers understand how students' geometry learning follows a developmental trajectory.

Another piece of advice from the community-- many of these things are not stop-the-presses news. In fact, I heard a talk once by a very well-known education researcher who said something like "We've done some studies that found that, when teachers have an opportunity to learn content, they're more likely to learn that content than content they didn't have an opportunity to learn." And there were kind of guffaws. And she said "Yeah, I know. When I talk to parent groups, they say 'And we gave you how much money to figure that out?'" But, nevertheless, this actually is something for us to take seriously, especially when we think about the math and science standards that we're trying to get teachers ready to teach.

We say less is more, and we say favor depth over breadth. But there's so much breadth that there are challenges every place along the continuum. And, what you all are saying-- not necessarily what you all are doing-- is that you need to recognize that developing deep conceptual understanding does take time. And, rather than giving teachers a smattering of opportunity in this area and that area, going deep involves giving them multiple opportunities to explore new ideas. It's a fairly common strategy in the literature: 7 studies in math, 20 studies in science. Anybody who sees a plausible explanation for this pattern of studies, we'll get you a chocolate bar or a copy of the evaluation book, one or the other.

Okay, and now I'm going to make another point. This particular example is a case study of an elementary teacher; much of the literature that does go in depth focuses on a very small number of teachers. And that one we do think we know why. That's not rocket science. We think it might have to do with the fact that good, deep research is expensive. So, in this case, they followed a teacher who had been involved in multiple content-based investigations over an extensive period of time. And they found that this teacher began to demonstrate an understanding of the concepts that they were working with. And these were difficult concepts. And, when the teacher had long enough and enough opportunities for content-based investigations, you saw the payoff.

One of my favorite comments to make is about a study which found that, when two world-class mathematics educators worked with 15 teachers for five years, those teachers really did well. I said "Okay, give me two world-class math educators for every 15 teachers for five years. We've got it knocked."

Okay, so as an MSP example of this piece of advice-- again, secondary mathematics-- this MSP program engages secondary math teachers in extended investigations of challenging problems. And the professional development really focuses on teachers' understanding of what it means to do mathematics, and on key ideas in the high school curriculum that relate to what the teachers are doing. And this MSP is conducting case study research to get a deep understanding of how what the teachers are experiencing in the professional development plays out-- how their understanding of what it means to do mathematics is affecting their practice.

Switching to implementing programs: We hear loud and clear from the MSP community that it's really important to be true to the disciplines. This is not about "Do as I say, not as I do." In the professional development, you need to model the mathematics and science habits of mind that you want teachers to develop. And as an example of this from the literature, Hill and Ball analyzed natural variations in implementation. This was not an experiment where people were assigned to treatments. But there were 15 teacher institutes that were focusing on number and operations. And what they found was that those projects that focused on analysis and reasoning as the teachers were learning these topics had larger impacts on the teachers' content knowledge.

BARBARA MILLER: Not as deep as we'd like, but broad at least. By teacher leader-- let me just do a little definition here-- we mean current or former classroom teachers who are working with other teachers and other educators in their school and district.

You could drive a truck through that definition, because it covers a lot of territory. But simply, the idea is that many of the positions or titles that are in use-- coach, or teacher on special assignment, or fill in your favorite one here-- would fall under this heading of teacher leaders: former or current classroom teachers who are working with other educators, primarily to improve instruction. And what I'm going to be sharing with you follows the same model as what you saw with deepening teacher content knowledge. That is, what do we know from the literature? What do we know from practice, especially MSP practice?

This is the first finding: teacher leader practice impacts teachers' classroom practice. In particular, that applies to the ways in which teacher leaders are providing instructional support to teachers, whether that's inside the classroom-- for example, something like modeling or demonstration lessons, the kind of strategy that would put a teacher leader in the classroom with the teacher-- or providing instructional support to teachers in a setting outside of the classroom, in which, for example, teacher leaders lead professional development.

There were eight studies that reported these kinds of findings, that teacher leader practice impacts teachers' classroom practice. The detail about what that impact means is very broadly stated, because, generally speaking, the samples had teacher leaders involved in a number of different strategies to impact teachers' practice.

And so, we don't have research that tells us about the relative impact of any one of them. But collectively, the idea is that teacher leader practice does impact teachers' practice. Second, teacher leader practice is related to student outcomes. That term "is related" was chosen very deliberately. These are weak findings, and only 7 studies. And, if we look at what we see in those findings around teacher leader practice being related to student outcomes, the first is that there was a positive impact on student outcomes in classrooms that were taught by teacher leaders. These findings are about teacher leaders acting as teachers in the classrooms themselves. In those classrooms that were taught by people with the label of teacher leader, there was a positive impact, some increase in student understanding, whether that was measured by standardized scores or performance tasks or information collected by a variety of other measures.

The implication would be-- and it's a big question mark-- that what teacher leaders are doing in their own classrooms is what they can help other teachers do in their classrooms. That would be the big leap of faith between what teacher leaders do themselves and what they can help other teachers to do.

But these were three studies that came up as we looked at what possible implications there might be of teacher leader practice for student outcomes.

And the second bullet is that there is a positive impact on student outcomes from school-level effects that include teacher leader practice. These are four studies that looked at a variety of factors that were in play in schools where they were seeing improvement in student achievement, of which teacher leader practice was one of a number of things.

Do we know that teacher leader practice related directly to student outcomes? No. Is teacher leader practice in the mix of a number of things that were happening in those schools? Yes.

These are not particularly strong findings. This is the state of where we are, in terms of the research. Again, it's probably consistent with a lot of the research that we have on the impact of professional development for teachers on student outcomes. We don't often have research that really tackles that particular link. So, we do know a bit more when we look a little more closely at a sampler of topics within teacher leadership. There are a couple of findings I wanted to share with you about improving teachers' classroom practice, about teachers working with principals as part of the work that they engage in as teacher leaders, and some information about developing teacher leaders' knowledge and skills. That last one relates to the whole issue of teacher leader preparation. What do we know about that?

We also have information on what we know about teacher leader selection. That's in our knowledge review. So again, this is just a sampler of some of what we are seeing.

Again, this is framed as advice. Whenever possible, choose teacher leaders who have strong content knowledge as well as classroom experience. You've all-- I think I can say that with confidence-- been involved, in some way, shape or form, with teacher leaders. This, again, is not stop-the-presses news. What is interesting, though, is that there are many opportunities for selecting or working with teacher leaders where there is a choice between content knowledge and classroom experience.

And so, we have some findings that really speak to the critical importance of each one of them, and that they do contribute different things. Research from the MSP community-- this is a study by Manno and Firestone-- found that teacher leaders who were identified as content experts-- and notice how content expert is defined: "a minimum of an undergraduate major in the teacher leader's content area and teaching certification in that area"-- with that kind of content expertise, those teacher leaders were more likely than people who were not content experts to provide support that included leading professional development, and to establish greater trust between themselves and the teachers with whom they worked. Again, this lends some credence to the idea that content knowledge does matter; it's part of what teacher leaders draw on in doing the work that they do. This is particularly important when we think about elementary teacher leaders, where content knowledge can be quite variable. And oftentimes, in much of the MSP work around institutes, that's a particular area that you're trying to target.

And an example from an MSP is around this phenomenon of teacher leaders leading professional development. In one project, teacher leaders co-designed and presented grade-level professional development sessions on investigation units prior to teachers being asked to teach the units. This is a manifestation of the classroom experience they bring to it: they, themselves, had practice in doing the kind of work that they're working on with teachers. And, because they had piloted these units, they had that knowledge to draw on. They had the confidence that they understood how the program worked with students. They were able to answer specific questions that teachers had. Again, it's not that the classroom experience was the only thing that mattered. Content knowledge mattered as well. But it's the idea of the two working in conjunction that makes a difference for how teacher leaders are able to work effectively with teachers.

A second area that we have some results to share on is teacher leader modeling-- the idea of demonstration lessons or other modeling experiences, and the importance of structure around those experiences, to help teachers engage in being reflective and learning something about their own teaching practice. From an MSP, there were some features that they used around the practice of demonstration lessons or modeling that they found to be particularly effective. These kinds of features were echoed by other expert practitioners, including MSPs, as important around demonstration lessons or modeling. And these are two of about five different features. First, the teacher leader and teacher agree, beforehand, on an explicit learning goal for the lesson-- part of what the demonstration lesson is meant to get at is that learning goal. And second, the teacher's observation was framed by a specific question-- not simply "Here's a lesson on fill-in-the-blank. Watch and learn," but a very specific question or focus for what the teacher observing the teacher leader modeling the lesson was going to be paying attention to. And this was part of the debriefing or some kind of reflection afterwards.

Teacher leaders are often engaged in working with principals. And so, here is some of what we know about teacher leaders working with principals, both from the research and from practice. And it starts with this idea: recognize that it's a two-way street. Teacher leaders need the active support of principals. I think that's something that we all know. Or, if we didn't know it at the outset, we now know it based on the work that's going on-- the importance of principals. And principals benefit from the knowledge of teacher leaders. And so, we have some findings related to both those aspects.

First, the empirical research on teacher leaders working with principals-- notice, four studies. We don't have huge piles of empirical research around this. But these four studies were very consistent in finding that the collaboration between teacher leaders and principals gives principals access to specialized knowledge that they, themselves, may not have. And that specialized knowledge about content or pedagogy is useful to principals in making instructional decisions. So it's really capitalizing on the idea of teacher leaders as people who have content expertise. And that's part of what they can contribute to work that goes on with principals-- whether that's work that principals and teacher leaders share, which might take the form of working on a site council or a leadership team or a curriculum committee, or whether principals turn to teacher leaders as a critical friend or a sounding board, to get information to help them make better decisions with and on behalf of their school. But the idea that teacher leaders have something to contribute to principals is what these studies were reporting.

An example from MSP practice really speaks to the importance of principal support for teacher leaders. This is the other part of the two-way street: providing time, materials, and public encouragement is very critical to teacher leader work. And I think there are a number of examples from MSP projects that talk about the critical importance of that public encouragement-- publicly stating support for teacher leaders, clarity about the role of teacher leaders, and incentives and encouragement for teachers to work with teacher leaders, so that it's not just up to the teacher leaders to figure that out. It's the active support of principals.

So this quote is representative of what we heard from many practitioners around teacher leaders working with principals: some progress can be made with a passive principal or a hands-off principal. And I have a feeling that there are images coming to mind when you hear those words, "passive principal" or "hands-off." There are plenty of people like that around. But sooner or later, if the principal is not supporting the work, a plateau will be reached. And it's unlikely that a system like a school will be impacted over the long haul. So this issue of how much forward momentum teacher leaders can make in schools without the active support of principals is really key. And that was very consistent among what we learned from practitioners, particularly MSP leaders.

And this is another area where we have knowledge: developing teacher leaders' knowledge and skills, the whole idea of preparing teacher leaders, whether that happens at the beginning, before they become teacher leaders and engage in teacher leader practice, or while they are in the process of doing teacher leader work.

And here is the advice: teacher leaders need more than preparation in content and pedagogy. They need opportunities to develop the role-specific knowledge and skills they will use in their work in schools and districts.

Now, how you characterize what the roles of teacher leaders are says a lot about what role-specific knowledge and skills are.

But, nonetheless, what we were seeing from limited empirical research, but more strongly from the practice knowledge of MSP leaders and other practitioners, is the importance of having preparation speak to, connect to, focus on, be explicit about the role-specific knowledge and skills that teacher leaders need.

This is a quote that came up in a panel that we had. And it was one that we vetted with a number of other practitioners as well, with great agreement: Too often, assumptions are made that good teachers will be good teacher leaders. While it's important to have the personal qualities necessary to lead, teacher leadership also requires a new set of skills and knowledge that prepares teacher leaders for new roles and responsibilities. And to the extent that the roles and responsibilities are clarified, it's that much more likely that you can identify what kind of preparation and support people need to be able to carry out those roles and responsibilities.

So an example from an MSP: in this MSP, they used a variety of strategies to prepare teacher leaders for the role of providing in-class support to teachers. And here's what it looked like. These teacher leaders attended two to three sessions before they began their work in the classroom. And those sessions were dedicated to providing them with the knowledge they would need to carry out their specific leadership roles, in this case as coaches, with teachers in classrooms. After teacher leaders began their work with teachers, the preparation staff conducted on-site visits to those people in the field. And, based on those observations, they were able to work one-on-one to improve their skills, given the roles that they were playing. This was a very carefully designed preparation that took into account what work those teacher leaders were doing, and staged when the preparation or development work happened, to be able to best support teacher leaders to be effective in those areas.

As for the research around this, there are six studies that we saw showing that teacher leaders generally carried out the work for which they had been prepared. And, in some ways, this feels like-- well, we'd hope that, if they were prepared, they were able to do something with it. But it actually says a bit more than that. In preparation programs that were designed to really focus on the particular knowledge and skills that were anticipated, or that they knew would have implications for the roles, there was evidence that teacher leaders actually put that knowledge and those skills to work in the roles that they carried out.

So to the question of whether preparation had an impact on teacher leader work, these six studies say the answer was yes. And here are two studies. They are related studies-- same group of people, working with a pretty large data set. Wallace was the lead author on the first, and Miller-- not this one, a different Miller-- the lead author on the second. They found that teacher leaders tended to reproduce in their practice the model of preparation that they had, and that the proportion of time that was devoted to working on different skills or knowledge was reflected in the work that teacher leaders themselves did. I think we tend to have the expectation that teacher leaders, by their nature, are smart and capable professionals. And I think that's a really good assumption to make. But I think the research would suggest that we fall short when we expect those good and capable people to figure out, on their own, everything they need to know to be able to carry out their roles. So this research really advocates for the importance of preparation programs that are very tightly connected to the roles that teacher leaders are going to be playing.

So, we have a couple of, kind of, wrap-up comments here before it goes back to Iris. Looking across what we know about teacher leadership, and looking across what we know about deepening teacher content knowledge, when we compare what we know from empirical research and what we know from practice-based insights, this is what we see: the empirical research findings tend to be of a larger grain size, at a level that is, perhaps, a bit broad for really informing decisions that we might need to make in programs like the MSPs. Practice-based insights tend to be more contextualized, more nuanced-- a smaller grain size.

We were surprised. And then, when we thought about why we were surprised, we had some ideas about that too. But, we were surprised at how little guidance the available research provides, and how much guidance expert practice provides. Although-- and this is an important caveat-- the expert practice doesn't have the backing, the same kind of evidence base, that empirical research would provide. And our experience at the outset with those green sheets, thinking about plan one and plan two, and the question of what you were drawing on to be able to make recommendations about that, is, I think, consistent with this work and what we've been seeing in the research and practice around these topics.

IRIS WEISS: Okay, so the key question is, why don't we know more from the empirical research? And then, the even more key question, what can we do about it?

So I want to give you a sense of what our forays into the literature looked like. For teacher content knowledge, we did a systematic set of searches on two different databases. We identified nearly 2,000 relevant items that were published since 1990. So, what you should be thinking now is, "Hmm, 2,000. But she was showing numbers like 6, 7, 20. What happened? Where'd they go?" Ninety percent or so of the studies were screened out, for reasons having nothing whatsoever to do with the quality of the studies. Some of them weren't studies. A lot of what comes up in keyword searches of the literature is somebody basically-- around the office we call it "I did it. So should you." It's just an advocacy piece, sometimes a really eloquent, wonderful advocacy piece. But that doesn't make it empirical research. A second category that we screened out: we were focusing on in-service teachers. So, if studies came up in our search but only included preservice teachers, we did not keep them. And, as I mentioned, they had to include, for TCK, a measure-- it could be a quantitative measure; it could be a qualitative measure, observations. But it had to be something that was a measure of teacher content knowledge.

So, after screening out the ones that didn't pass our eligibility criteria, we then used the standards of evidence that Barbara mentioned to identify each study's contribution. We did not say "This study is out." Once a study passed eligibility, there was no standards-of-evidence threshold. We took the perspective that no study is perfect. We took the initial perspective that no study contributes nothing. I'm not sure we maintained that perspective to the end. But, nevertheless, the idea is that, if we had a robust research base with different studies-- no study being perfect, but each with different flaws-- and we found the same things, then you begin to have some faith in the findings. And similarly, in the work that Barbara and her colleagues did, they identified 800 studies with the keyword searches. And only 58 of them passed through the eligibility criteria in order to merit having standards of evidence applied.
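For readers who think in code, here is a minimal sketch of that two-stage process: an eligibility screen that excludes nothing on quality grounds, followed by standards-of-evidence ratings that characterize what each surviving study contributes. The field names, criteria, and rating labels are illustrative assumptions, not the KMD project's actual coding scheme.

```python
# Illustrative sketch only: the screens and ratings below are invented
# placeholders, not the KMD project's actual coding categories.
from dataclasses import dataclass, field

@dataclass
class Study:
    title: str
    is_empirical: bool        # advocacy pieces are not empirical research
    population: str           # "inservice" or "preservice"
    has_tck_measure: bool     # direct measure of teacher content knowledge
                              # (quantitative or qualitative, not self-report)
    evidence_ratings: dict = field(default_factory=dict)

def is_eligible(study: Study) -> bool:
    """Stage 1: eligibility screen. No quality judgment happens here."""
    return (study.is_empirical
            and study.population == "inservice"
            and study.has_tck_measure)

def contribution(study: Study) -> dict:
    """Stage 2: standards of evidence characterize, rather than exclude,
    each eligible study."""
    return study.evidence_ratings

corpus = [
    Study("Advocacy essay", is_empirical=False,
          population="inservice", has_tck_measure=False),
    Study("Preservice PD study", is_empirical=True,
          population="preservice", has_tck_measure=True),
    Study("In-service PD study with TCK assessment", is_empirical=True,
          population="inservice", has_tck_measure=True,
          evidence_ratings={"design": "strong", "measures": "moderate"}),
]

eligible = [s for s in corpus if is_eligible(s)]
print(f"{len(eligible)} of {len(corpus)} items passed eligibility")
for s in eligible:
    print(s.title, "->", contribution(s))
```

The point the sketch tries to capture is that the big attrition-- 2,000 items down to a few dozen-- happens at stage one, before any standards of evidence are ever applied.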

When we applied these standards of evidence-- and this is an activity that is not for the faint of heart-- what we often found-- and this was important-- was very incomplete documentation of the programs. There's actually a systemic reason why that is the case. Journals do not want to provide a lot of space for describing programs. They want to provide space for data collection, data analysis, and reporting. So, a study shows us that something worked; it worked. But, what was it, if you wanted to replicate it? We really don't know.

Another key finding, which I think also holds true for many of the MSP studies: what is in the research literature tends to be more like program evaluations than research on particular strategies. What you have is a group of people with an intervention who really want to help teachers. And because the teachers in any one group have different needs, we don't learn a lot about what any particular strategy gets us. Some strategies may work better with some teachers than other strategies. We do it all. We don't say "We're just going to give content-based investigations." So we do content-based investigations. We do some looking at student work. We do some of all of the things that we think might matter. But then we're left with the knowledge that, yes, the overall experience worked. But maybe 80% of the gain was due to 20% of the treatment. We'll never know from those studies.

And then we often found serious design issues. Selection bias is a biggie. So, we have volunteer teachers-- and this is sort of a risky thing. In the literature, the findings were generated by teachers who came to get professional development. And then we sort of assume that, since we had that finding-- and after all, we might have had that finding in multiple studies, maybe even five-- it's going to work everywhere, with all teachers. And there are lots of reasons to think that won't be the case. Or, if the studies were done in the context of suburban schools, we don't know if it applies to teachers in urban schools.

Often in the literature, there was a lack of comparison groups. Whether or not you subscribe to the notion that the randomized field trial is the best strategy in all cases, not having any sort of a comparison group-- I thought about the finding Barbara reported-- not Barbara's own finding-- from the studies that showed that teacher leaders had better student outcomes in their classes. Now, I didn't read those studies, so I don't know. But I can ask Barbara.

One hypothesis is that those were better teachers to begin with, and that it was more a validation of the selection criteria used for teacher leaders than of anything done in the preparation of teacher leaders. So there's lots of-- if you don't have a comparison group, if you don't have a strong research design, there are lots of rival hypotheses left that could account for the finding. Again, we often found instrumentation for which there wasn't evidence of validity and reliability. That doesn't mean there wasn't any. Again, it could be that there was not space in the journals to describe it.

But a fairly typical practice would be for the group that is providing the professional development, knowing that they want to be able to assess teacher content knowledge gains, to look around and say "There's no existing instrument, so we will develop our own. And we will develop it on a shoestring." And so, in the context of even a medium-funded study, they don't have the time and they don't have the money to do all the things that we know are necessary to develop good measures. And then, in case that point hasn't been abundantly clear so far, there are too few studies of any one phenomenon to be able to have confidence in the findings. I've thought a lot about what the incentives in our system are that cause that to happen.

And think about, for example-- how many of you have sat on a review panel for NSF? Okay. Elizabeth is saying "Look at all those others." [laughter] What NSF panels are asked to do is to identify the well-designed research studies. When you go into a panel, you're not expected to know the literature in every area of every proposal-- you know, the proposals might be research on elementary mathematics. You're not going to go in there knowing there are 7 studies on this and 48 on that. So you're asked to look for the high-quality studies: the studies that are likely to provide answers to their questions, and maybe to questions that you judge important. Similarly with journal reviews-- I've reviewed for journals a lot. Nobody asks me, "Is this filling a gap in the literature?" Which would be good, because I wouldn't be able to answer that question without doing the kinds of empirical searches that we did. So, all you can do is judge the quality. As a result, we can get lots of studies on one thing and no studies on something else. So it is a system problem. And then, again, high-quality research is expensive. There are studies out there that are large and may use multiple-choice-type measures, and then leave a lot that we wish we knew about teacher understanding.

There are studies that are small and use case study research, or careful interviewing of teachers to see what they're learning, leaving us wishing that we had larger numbers. And then there is a tension that everyone in this room feels-- unless you're a RETA, in which case you might not feel it-- a tension between designing for change and designing for learning. From a system change perspective, if you're going to have two cohorts, you say "We're going to start with the ones who are most ready." Makes sense. You don't want to start with the ones who don't want to be there. But doing that makes research really problematic. When you get results, you cannot disentangle to what extent they were due to your treatment and to what extent they were due to the readiness of that first group.

We were involved, years ago, in the first cohort of MSPs, in designing a treatment for a project that was later funded and had two cohorts of schools. And they decided to work first with the schools that were most eager to work with them. And then one of the RETAs came to me and said "We're looking for places where we can do careful research on X, Y or Z. And I understand the project that you're involved in has two cohorts." And I immediately started to laugh, because I knew where he was heading: "We want to use the second cohort as a comparison." I am a researcher. If anybody would be expected to remember the importance of having comparable cohorts, you'd think it would be me. But that's totally not what I was doing at that moment. And that was-- actually, I shouldn't be sharing this. It was pretty embarrassing.

So here is a quote from one of the abstracts from this conference that makes this point: "Further, these results suggest benefits of designing MSP project implementation in ways that facilitate the most valid assessments of impact. For example, random assignment of eligible teachers to participate in the project would provide greater confidence that observed effects are not associated with teacher self-selection or with administrative support for the most qualified applicants." So there's a real tension that we are all facing. I made comments earlier that many of the studies that we found in the literature had fairly substantial design flaws. I want to make a different point here: even when individual studies are well designed and well implemented, we can't aggregate very well across them. One of the reasons is the sparse documentation of treatments, so we don't really know which of them are similar and can be aggregated. There are lots of reasons for that. And the scattershot nature of the studies matters even more.

So, the second important question: how can we learn more? More important than why we don't know more. In the work that we have been doing in the KMD project, we have been using a fairly simple knowledge management framework from the business literature... which talks about iterative stages of acquiring knowledge, sharing the knowledge, watching and seeing how the knowledge is used, acquiring more knowledge-- it is a cycle. But the more we've worked and toiled in the fields of teacher content knowledge, teacher leadership and teacher induction, the more we've come to realize that really learning much requires another box. It's not that it's all out there, low-hanging fruit for the plucking. We really need to be thinking more about how to accelerate the systematic generation of knowledge. So, if we're going to think about a research agenda, how do we think about what the priorities are? And how do we go about improving the system so we can learn more?

So we all know that the research enterprise is one where you're never done; in doing research, answering one question inevitably leads to many others. This is a good thing. What isn't a good thing is that there are not incentives in our system-- and I understand that this is equally true in other fields; it's not just education. In fact, I wish I had the slide with me. There's a quote that talks about scientists-- you know, this notion that we're all inventing our own language and nuancing stuff, because you get more credit for doing original work. I'm seeing nods. Okay. There are not a lot of incentives. Think about it, those of you who aren't education faculty. If a doctoral student comes to you and says "For my dissertation research, I want to replicate X and Y study," you're probably going to say "Mm-mm-mm-mm-mm. Original research to get a Ph.D." I've never been able to track down this-- either the number or the study, so this may be an urban myth. But I've heard that some huge percent of education research, something like 80%, is actually done for dissertations-- that there are many people who do one piece of research. They do it for their dissertation. They never do anything again. If that's true-- and I have no idea if it is; and if anybody knows where that number or that citation might be, I'd like you to tell me-- the fact that dissertations don't follow up on other people's work, that they have to do original work, means that we're not doing a lot of replicating.

We're not taking studies and saying "This worked in that context; I wonder if it works here." And I'll give you an example from the Local Systemic Change initiative. There was one project where the PI happened to be the superintendent of schools. And he was able to really engage principals in having a stake in all of this. So, when he had meetings of the principals every month, first on the agenda was bringing in examples of student work from classes that they had observed, so they could talk about it. This project had lots of good gains. I hear it cited as "Such and such a strategy proves"-- one study-- oh, I forgot to say it was in the context of an almost 100% Hispanic, low-poverty district-- "proves that doing whatever that strategy was narrows the achievement gap." Well, first of all, they had all Hispanic kids, so I'm not sure which achievement gap was being narrowed. So that's interesting. But also, it doesn't prove anything of the sort. It says that in one context-- and maybe the fact that it was the superintendent of schools leading the effort had a little something to do with it. Maybe it was the strategy. If we had multiple studies investigating the same phenomenon in multiple contexts, we'd get a better sense of the robustness and the generalizability.

There was a guru walking around once, telling people why we had to do randomized field trials. And at the break, I asked him how many they were recommending. And he said two. And I said "Where'd that number come from?" And he said "That's what the drug companies do." And I thought "Okay, but any two samples of human beings are likely to be much more similar, one to the other, than any two social systems." So, if we happen to have two randomized field trials, one in Scarsdale and one in Shaker Heights, that doesn't necessarily mean "Detroit and Chicago, this is for you." It might be, but it might not be.

In addition to needing incentives for more systematic following-up on the research that's already there, there are calls, beyond the calls for focusing practice more on empirical research findings, for research to focus more on practice. It's not a matter of telling practitioners that their job is to take our research and we'll help them translate it. We need more focus on studying important problems of practice. Dick Elmore has written about this, and it's had a huge impact on my thinking: the notion of an emerging professional consensus. He points out that, in the literature on professional development, there is lots of consensus on the features of effective professional development, and not a whole ton of empirical evidence on those features. But, as we found-- as Barbara talked about-- when you get expert practitioners together, comparing and contrasting what they know and pushing each other on the circumstances under which this knowledge holds, an emerging consensus from experts in the field can not only provide guidance to help us move forward-- as you all were saying, "I was using my own experience and that of others"-- but, equally important, the practice-based insights, even without the support of empirical evidence, can serve as hypotheses for research. Now, some of the insights in our knowledge reviews really don't need to be hypotheses for research. They're the kinds of things where you say "Duh. I should have realized that all along," or "Now that you say it, it makes sense." But others are much more controversial and would need to be subjected to empirical study. In particular, before policymakers are willing to open their wallets-- or our wallets, I guess, is what they're opening-- they want more empirical evidence than that.

And so, with the notion of using some of these practice-based insights as hypotheses for research, the gap between research and practice will be much smaller if we're actually doing research on things that practitioners care about. And again, if we study these things in multiple contexts, with different target populations, etcetera, we can have more confidence. NSF has asked us to keep in mind, all of the time, not just what do we know, but how well do we know it. And so, our push is: how can we know it better? Because, as we've made abundantly clear, from the empirical literature so far, we know a few things, but we don't know many of them really, really well.

So-- and Barbara, I know we thought about the timing of all of this, but I have not a clue where I am. So nudge me if I'm-- We are good on time, she said. Okay. Because I don't know how you look at your slides, think, and keep track of the time all at once. And, if it were only me presenting, I could cut things out later. But I don't want to cut her out.

So, where are you? Many of you have evidence that your interventions are effective. And those findings should be, and are being, shared at this conference and in various other venues. Many of you are reporting quantitative analyses, and we're putting in a pitch now: that's great, but in addition, we need more descriptions of your interventions-- perhaps case study descriptions of what you're doing, why you're doing it, what it looks like-- so that, if somebody wanted to replicate your study, they could do so. Descriptions of your target audience, descriptions of your context-- and I'm going to get back to that in a minute-- and descriptions of what lessons you have learned. In particular, one of the things that makes our work as a community problematic to people with whom we share it is that they say "Yeah, that's easy for you to say. You had N million dollars." And so, what we're trying to think about is, what are the implications of what we have learned? Now, part of your N million dollars is for scholarship. Nevertheless, part of your N million dollars was for treatments. And we need to be thinking about which of these treatments are applicable to other settings.

Individual MSPs would add even more to the knowledge base if you treated your research as an opportunity to do some of this going deeper, some of these replications. So, for example, you might be able to study your treatment as implemented by people other than your A team-- you know, the project staff who have been thinking about this and doing it for years. Can the same treatment be implemented well by less experienced professional development providers? Can you disentangle your results? Does the treatment have differential impacts on teachers with different characteristics-- stronger and weaker backgrounds, less and more teaching experience? All of this would help the field get closer to the kinds of guidance that we get from practice-based insights.

When we had these panels, we had people who had been toiling in these fields for many years, and who had multiple experiences, so they could do these thought experiments: "Well, I've seen this work, but under these conditions." I ought to make another point. In the literature, we begin to see something about what strategies are effective. One of the things that we rarely see in the literature is any documentation of the quality of implementation of a strategy. And so, it may be that overall some strategy worked, but it didn't work when people didn't implement it well. When we asked the panel "Under what conditions does this work?", the answer, no matter what X was, was "when the facilitators are knowledgeable." Well, you know, that's kind of a "duh."

But there are a lot of things out there being implemented where it's sort of like: we do something with the A team, and we assume, and then don't check, that the B team can do it as well. And, you know, when we're trying to go to scale, five or six people in the MSP who've been working with teachers for N years isn't going to get the problem solved. Similarly, if you studied different configurations of your interventions, we would get a sense of how much drop-off in impact there is at a reduced level. If we find out that, you know, 500 hours per teacher gets us what we want, would 100 hours per teacher do it? And which 100 hours? Or consider the notion of critical mass: would it be better to provide fewer hours of professional development to a larger proportion of the teachers in a school? Would that have a greater impact on teaching and learning? Many of you have the opportunity to study these kinds of questions within the constraints of your MSPs.

So far, what I've been doing is putting in a pitch for individual MSPs to mine their projects for more information that would help the field. Now I want to put in a pitch for another way of learning more. In many cases, the MSPs have similar goals and similar interventions. And this really does provide an opportunity for more transformative research, to accelerate the generation of knowledge of what works, for whom, under what conditions. Setting up cross-site studies across MSPs has the potential to add a great deal to the knowledge base. Remember, we talked about how we may know that something worked, but we don't know the boundaries of its generalizability. So cross-site studies could provide information about the effectiveness of particular interventions under different conditions.

We heard from a lot of you, when we interviewed you in various formats-- online or by telephone-- about lessons learned about working with STEM disciplinary faculty. And what we heard from many of you is that the opportunity to do scholarship is a big plus for the involvement of STEM disciplinary faculty. But they're not necessarily comfortable with social science research. And perhaps-- this is my hypothesis-- STEM disciplinary faculty who are new to social science research might appreciate the guidance of a cross-site research protocol and set of instruments.

And, in addition, Pat Johnson, presenting data yesterday, talked about some of the state MSPs being fairly small. Individual projects might have 40 to 100 teachers; or a project may be large overall but small within any given grade band. While your individual project might be too small to do a study about, say, secondary science teachers that stands on its own, being part of cross-site studies still provides an opportunity to contribute to the knowledge base. So I'm hoping, now, that it's your turn, because I'm out of things to say.

BARBARA MILLER: But we are on the home stretch, here. This is back to the diagram again. Much of what Iris has just been talking about is what we've learned and what potential there is for knowledge generation in the MSP community.

I want to share a few remarks about knowledge sharing. What are we doing within our KMD project and what implications does that have?

There are a number of venues that we've been utilizing for sharing knowledge, particularly about these topics of deepening teacher content knowledge and teacher leadership. The first is knowledge reviews, which I'm going to say a little bit about in just a minute. We've done presentations at professional meetings of various kinds and submitted articles to journals. And, believe me, reviewing a lot of articles in the field has given me a deep, new appreciation for what it means to write and submit articles to journals. We've been doing MSP cases, again with a focus on sustainability, in both teacher content knowledge and teacher leadership. And there's a teacher content knowledge instrument database, which I'm also going to say a little bit more about. But these are some of the venues by which we've been sharing knowledge about these topics in this project.

Knowledge reviews: There are approximately 20 that have been posted to date; I believe number 20 is going to come online next week. So, when next week's weekly MSPnet announcement goes out, please open it. I'm sure you do. But please open it and look particularly for the link to the knowledge review on teacher leader selection. That's the one going up next week. In the knowledge reviews, if you haven't had a chance to take a look at them, or haven't looked recently, we report both the empirical research and the practice-based insights around a number of different topics related to teacher leadership and teacher content knowledge. In each knowledge review, there's an opportunity for MSPs to provide additional input, whether that's commenting on what's reported in the knowledge review or adding examples and illustrations from your own project. We welcome those.

And, oh, what do you know? There's that website again. I believe this is the third time that we're putting it up here, and there might be another one to come. This is where you can get access to the knowledge reviews, and we hope that you do. We have also been developing a teacher content knowledge instrument database, and it just went up a couple of days ago. It's very new. This instrument database, again, focuses on teacher content knowledge, not teacher leadership; instrumentation for teacher leadership is work we still have to do. So the focus here is on teacher content knowledge. There are a total of 168 instruments that surfaced in our empirical research review, and that was a big part of what we looked at. RETAs have contributed tremendously to measures of teacher content knowledge. And MSP partnerships, themselves, have developed measures of teacher content knowledge.

For consideration in this instrument database, we're looking at instruments that provide a way for others to come up with some kind of numeric score, some measure of teacher content knowledge. And that means that instruments that really are about qualitative descriptions of teacher knowledge are not part of this database. It really is trying to capture measures and instruments that get at teacher content knowledge. The database is set up so that you can search instruments by subject: math or science, or topics within math, or chemistry or biology within science. You can search by type of knowledge; there are some instruments that really are about disciplinary knowledge and some about pedagogical content knowledge. You can search by the nature of the assessment items, so if there's a particular kind of assessment item that you're more interested in, you can search by that. And you can search by the grade levels of the teachers studied, that is, at what grade levels these instruments have been used. This is all part of how the database is set up, so that you can do these kinds of searches.

The database includes descriptions of the instruments. It does not include the instruments themselves, but there are full references to the studies where the instruments were reported, so you have that access as well. When it's available, scoring information is provided with each entry, along with validity and reliability information, when available. And again, what we report about validity and reliability is what we had access to. It's not to say this is the final word on validity and reliability, but it gives you, as a user, some understanding about what we know so far about the reliability and validity of these measures. There are 62 instruments in the database. And, oh, there it is. That's where you can go and find them. So we encourage you to do that.
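[Editor's sketch] For readers who think in code, here is a minimal sketch of the kind of faceted search just described, written in Python. It is purely illustrative: the field names, the entry structure, and the search function are hypothetical stand-ins, not the database's actual schema or interface.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class InstrumentEntry:
        # Hypothetical record; field names are illustrative, not the
        # actual schema of the KMD instrument database.
        name: str
        subject: str                     # "math" or "science"
        topics: List[str]                # e.g. ["chemistry"] or ["algebra"]
        knowledge_type: str              # "disciplinary" or "pedagogical content"
        item_types: List[str]            # e.g. ["multiple choice", "open response"]
        grade_levels: List[str]          # grade levels of teachers studied
        references: List[str]            # full citations for the reporting studies
        scoring_info: Optional[str] = None           # included when available
        validity_reliability: Optional[str] = None   # included when available

    def search(entries, subject=None, knowledge_type=None,
               item_type=None, grade_level=None):
        # Filter on the four facets described above; a facet left as
        # None does not constrain the results.
        results = []
        for e in entries:
            if subject and e.subject != subject:
                continue
            if knowledge_type and e.knowledge_type != knowledge_type:
                continue
            if item_type and item_type not in e.item_types:
                continue
            if grade_level and grade_level not in e.grade_levels:
                continue
            results.append(e)
        return results

    # For example, pedagogical content knowledge instruments in science
    # used with middle grades teachers:
    #   hits = search(all_entries, subject="science",
    #                 knowledge_type="pedagogical content",
    #                 grade_level="middle")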

Again, this just came up. So, if you have been a recent browser of MSPKMD.net, go again, because there's something new for you to take a look at.

Here are our next steps for the Knowledge Management and Dissemination Project. We will be developing knowledge reviews on STEM disciplinary faculty involvement. Some of you may have been part of some survey work that we did, with a number of questions asking about experiences related to STEM disciplinary faculty involvement. That, along with other knowledge sources that we have, will lead us to develop and post knowledge reviews on this topic over the next couple of months. We're also going to be updating our existing knowledge reviews in a couple of ways: adding MSP research that has been published or made available since 2007-- I believe that's what we're up to date with-- as well as adding anything we see in the empirical research since we did our searches, which ended in 2006. And, in the case of teacher content knowledge, we know, from having done those searches again, that there are an additional 30 studies that passed our screening, which we are now reviewing. So, we are interested and hopeful. It feels like a small resurgence-- not a resurgence, but maybe a small surge-- in teacher leader research. We'll be adding that to the knowledge reviews, as well as continuing to look at and refine our practice-based insights.

We're also developing a teacher content knowledge/teacher leadership materials monograph, with a focus on the professional development materials that are used in MSPs to deepen teacher content knowledge or to utilize and develop teacher leaders, with the idea of making those practice experiences accessible to a broader group than just the people who are developing and using them. So these are the next things that we're involved in in KMD.

And it has implications for you. Coming soon-- keep your eyes open-- is a request, which will come with encouragement from your NSF program officer, to help us understand the recent research that you've been involved in within your MSPs on deepening teacher content knowledge and teacher leadership. We've made these requests about once a year, and you've been tremendously helpful in keeping us abreast of that work. So this, again, is to say: help us stay on top of what research you've been involved in, so that we can make sure it's accessible to the broader community and incorporate it into these reviews. Part of the request will also ask you to think about, nominate, and be in communication with us about professional development materials that you've developed or used in your MSP, as part of this monograph that we're doing. And lastly, it will ask about instruments that you've developed or used that may not yet be in the instrument database. I believe you can also submit on the database itself; if you're at the instrument database, you'll see there's an opportunity to communicate with us about instruments that you use. But this will also be part of the request that you're going to be getting soon. And that's what we have for you. Thank you.