Thanks Jim. And good afternoon everybody. I'm pleased to see such a large contingent still here for the end of the meeting; that's good. And I understand that this has been a fantastic meeting, that you really tackled some challenging issues. Okay, let's thank the planners of this meeting. I was only able to be here a little bit for yesterday, but as I've been chatting with folks, I'm understanding that you're really taking on, in a deep and intellectual way, this challenge of what it means to be part of a learning network, how a learning network grows, what it might mean to say that a learning network is an effective entity, and you're thinking hard about key questions that inform not only our work in the MSP Program but much more broadly in EHR and in what we do across the directorate. So it's terrific that this work is going on and that you're all actively engaged with it. I wanted to just make a couple of comments, and then I'll end--
Jim asked me just as I stood up, "Well, do you want to say something about where we're headed in EHR?" And I thought, "You know, it's been two weeks that I've been in this job." But I can say a little bit, I guess. I'll think about that while I'm talking about these other things, maybe. But, in any case, knowing that you've been discussing, and being consciously attentive to, what it means to be part of a learning network, I'd like to come back to some things that got said yesterday by the panel from our colleagues at the Department of Ed, and to say that a major question for us in education right now here in Washington is about scaling up, or scalability, or reaching large numbers of people with well-informed, well-tested interventions and innovations. And so the way I'd like to frame this question is to say: all right, you're a learning network comprising multiple sub-networks and particular emphases; how could we collectively take on this question of what it takes for something to be scalable? And notice that the way I'm putting that question is not to say to all of us in the room, "How do we collectively scale our very good ideas and our terrific interventions and our wonderful innovations?" but rather, "How do we help make sense of what it means for all of these to be scalable and what it would take to scale them, to have them reach broadly and impact lots of people?" And that, in a sense, signals what I'm going to say later about where we are in EHR: that EHR needs to be a place that tackles some of these questions that are crucial, and then is prepared to collaborate in partnerships with a number of other agencies and entities to actually move the scale in the directions that we're discussing.
So there are a lot of different models about scaling up, right. There's a sort of knowledge diffusion deployment model that one hears about, where the assumption sometimes is, well, we've got things that work. We know a lot about what works. Why can't we just, you know, sort of tell everybody about it really loudly, and then it should work? Now that's oversimplified, but, you know, not entirely oversimplified based on some conversations I've been in lately. Another is a sort of replication model, which is a bit different -- and there are examples of that in the field too -- where the notion is that we identify some innovation that appears to be very effective in a particular context and then call for it to be reproduced, in a sense, in a variety of other contexts. So that's another approach. And then there are all kinds of variants and local adaptations that happen.
But I think one place where the conversation is not as rich as it needs to be is in the array of models that make sense for scaling, and the ways in which those models get discussed and talked about in particular projects at local levels or in regional areas. Because I think this notion of what is scalable, particularly for projects using federal investment, is going to be a central question for us going forward. So let's just try to unpack it a bit further and think about what it will take, particularly with our colleagues from the Department of Ed. You heard their ideas yesterday, all of which I think are very exciting and intriguing. And what came through for me as I listened to them was the importance of some sort of evidence, and different levels of evidence for different stages of an idea, as being part of what's needed in the handoff to the next level of either implementation or scale-up or development. And so the first issue I've tried to raise for the learning network is how to think about scalability and how to generate that conversation in a deep way across people with the expertise that you all bring.
A second question is about evidence: what kind of evidence is suitable for what sorts of translations and what sorts of handoffs? We've established, I think, through our solicitations at NSF as well as various parts of the literature, the importance of being able to demonstrate, with appropriately rigorous methods that suit your particular project or your intervention, some kind of efficacy, some kind of measure of its impact on learning or whatever outcomes are valued by your project. To do that requires lots of things: clarity about the intended outcomes; good measures that are valid and reliable to give you some information about that; and designs and methodologies that are suited to your work, that will give some level of information about whether whatever you're trying to do is causing whatever you want to have happen to actually happen. And that continues to be an important focus for us, and it'll be important in our conversations with education going forward.
But beyond that, and here's where I think particularly the network idea could be used in a very suitable way, is figuring out what it is that you actually do. So let's take, for example, a group that runs a summer institute for upper-elementary mathematics teachers and is working on their mathematical knowledge for teaching, let's say. You can find measures, and you can take a look at how the teachers come in and what they look like at the end. You might even be able to get a comparison group, or even random assignment to the summer institute if you're oversubscribed and can withhold it for one summer or two from another group. So the design issues are okay, and probably the measurement issues are okay. But how do we actually find out what you did in that institute for those three weeks that you had those teachers together, in a detailed enough way that somebody else could take a look at what happened there and either decide to adapt certain pieces or to replicate certain pieces in their own setting, in their own context?
I'm not convinced that traditional publication outlets give us a very good way of sharing that level of programmatic detail. And the example I chose, of course, is just an example.
But if you think about reforming an undergraduate science teacher preparation program for high school physics teachers, or any number of other sorts of interventions that you're all doing, I think what we might be missing, but what maybe a network could help us gather, is the actual detailed description of these interventions, in a way that lets others understand enough about what's been done there that they can pick and choose and learn from it. I know that in the museum business-- Or I don't really know this; somebody told me this and I believe it. But it's part of our portfolio in EHR, so I'm supposed to know a bit about it. In any case, think about the whole idea of an exhibit being designed in a museum; the parallel, you know, might be a summer institute for teachers. There's an elaborate process that includes multiple parts, ranging from concept papers to the design layout, to blueprints, to even exhibit maps that explain how that exhibit grew to be what it was and what it looks like at the end, in a way that others could take that package of material and work with it in their own setting. And this is also quite related to this notion of efficacy and effectiveness.
It's not enough just to know whether the kind of learning or the kind of motivational improvements that you hoped for actually happened. We need to know what were the so-called "active ingredients," to quote Larry Hedges. What were the active ingredients in that innovative intervention that you tried that really were crucial? And so your designs have to try to take a look also at what it was that made the key difference, and be able to capture that when you relay the actual documentation to others so that they can work from it. So again, another challenge to a learning network: how can we share, at a level of detail that's workable, in a way that highlights the active ingredients, so that the body of knowledge being accumulated here goes beyond general kinds of findings, which we're very happy to have, to much more specific sorts of findings that can truly spread in interesting ways?
And then finally, I'd like to raise the idea of translational research, which is not an idea that's very well defined in education. It is used in other fields, in the medical fields, in engineering: the idea that you have sort of basic research, but you can't just hand that research to practitioners; some very particular kinds of documentation, or translation I guess, need to be added to that research in ways that make it usable. It would seem that that's, again, the sort of thing that probably is happening already in a network like this. But how can we extract that, and learn as a field in education about how to do that better, and how to make that a more specific and visible part of all of the work that we do?
So as if you don't have enough to do, I'm certainly hoping that you will think about scalability, about models for scalability and about documentation and about translation as you go about all of your very good work. I'm not going to say a lot more in terms of assignments for you all to think about, because that's probably enough. But I'm optimistic that because of the existence of the network and the consciousness that you have about the ways in which you are learning and advancing learning that maybe we can make some progress on some of these topics as well.
To the question about where EHR is headed: you can get a sense from what I've been saying here that our emphasis on evidence continues to be strong. We know that there's excellent work going on in the field through the projects that we've funded. We need to know about that work. And this network is, by the way, an excellent example of how we're able to learn about the work. The dissemination of information is terrific. We need to see the work published in a range of settings so that the word is out that we're learning some things from all of what you do.
Traditionally, EHR has been concerned with both building the STEM workforce and building a scientifically literate society. We will continue, of course, with those core values that are so central to all of what NSF does. But there will be a very strong focus, I think, on innovative approaches, upstream sorts of approaches, and the evidence that goes with them, so that we learn how to do that better, that is, to build the workforce and to build STEM literacy for all. And so the kinds of efforts in which you're engaged are central to the sorts of things that we care about and will continue to care about. So congratulations to all of you on the good work; continue these efforts. And drop by if you are in Washington. Well, you are. But nice to see you. Not this afternoon, probably.
So the appropriation bill: I started off saying you have a House committee and a Senate committee. The House has to approve it; the Senate has to approve it. They never approve the same thing, so you have to have a conference committee. They debate on it; then each has to approve it again. And then it has to go to the President, who could veto it or sign it. This process, in theory, ends by October 1st. Extra innings are not uncommon, and extra innings have a name: continuing resolution. When the first continuing resolution runs out, you have continuing resolution two. And this has happened several times, including this year. So the whole process doesn't always converge; we were basically on a continuing resolution all of fiscal year 2007. But this year it did converge, right before the Christmas holidays. I don't remember the date, but somewhere in there.
You'll remember there was this panic over the omnibus spending bill. Included in that omnibus spending bill was NSF. The number that came out was nothing like the authorization. And you might say, "What was the number?" Well, in fact, we don't know. It hasn't trickled down to us yet. But it's working its way down, and we're pretty sure NSF got above a two-and-a-half percent increase over last year. So assuming that we're not the disfavored child, we got at least ...(inaudible noise) last year and maybe a little bit more. There's hope for a little bit more. But we don't know, because it goes to the 12th floor. Then it goes to the directorates. Then it goes to the divisions. And eventually somebody says, "MSP, you have X dollars for next year." Well, we can't wait for that, so we have a solicitation out right now. And we're expecting a lot of proposals. We're expecting to award new targeteds(?), new institutes, new ...(inaudible). And, as many of you have discussed with us, some other new things, one of which is MSP Phase Two. And another is MSP Start.
There has to be a little bit of guessing. We don't know how many proposals we're going to get in each category. And at the same time, we don't even know how much money we have. And yet we're supposed to make a guess as to how many awards we're going to make in each category. So we do our best. And we have laid it out in the solicitation. You should certainly take a look at that. But I think we're looking forward to a lot of good proposals. A lot of people have talked to us about various facets of the new solicitation. And, of course, we didn't have a solicitation last year, so there's likely pent-up demand. And we're expecting a lot of good proposals. And I think our goal is to build on everything that you have done to make the program, in a sense, a better program.
What is our goal? We're trying to make teachers be better teachers, students be better students, administrators be better administrators. And when I say "teachers," I'm including the university level. As has been pointed out, there's room for improvement at all levels. And we certainly want MSP to be our leader in this regard. We want people to come to us and say, "What's best practice? What's the best way to do this?"
Okay, I'm sure all of you are anxious to get on the road. I have now told you how MSP gets its money. And this is a tale which you can tell your children; I guarantee it'll help to put them to sleep. Thank you all for coming. Most of us are going to stick around, so if you have any questions, feel free to come up. And we'll see you all next year.