Whatever happened to?

I was doing a search on the Journal Sentinel archives this week and I was reminded again just how big a deal the Neighborhood Schools Initiative was when it was adopted. Article after article discussed how Milwaukee Public Schools would be permanently altered as new schools opened in children’s neighborhoods. A separate office had been opened to do outreach and planning, survey preferences, analyze parent decision-making and patterns of family migration, and make sure the initiative would be successful. When I left the school board in 2001, the plans were prepared and ready to go.

When I returned six years later, the Neighborhood Schools Initiative had essentially vanished. The data collected had been filed away and largely forgotten. Some buildings had been built, but the underlying goal of making sure schools would attract neighborhood children had largely been lost.

This seems to be a recurring pattern at MPS: major initiatives are launched with great fanfare and then simply wither and disappear. Another disappearing act was a principals’ academy meant to ensure that new principals were carefully selected and received the training they needed to succeed.

Reading Alan Borsuk’s thoughtful essay this morning listing ten things to watch in education in the coming year, I was struck by the absence of the MPS reading initiative. Could the reading initiative be sinking into the same black hole that consumed previous initiatives? Only a year or two ago, the reading program dominated both MPS activities and reports on MPS.


Posted in Education Strategy, Neighborhood schools, Principal development, Reading

Why has education policy research been a bust?

In recent years, we have seen an explosion of research aimed at judging the effectiveness of initiatives including charters, vouchers, class size, and various curricula. On the whole, this effort has not given us definitive answers. One could hope that the problem is a lack of good data and that, as better and more complete data are collected, the results will become more useful for making decisions. The underlying assumption of much present research is that certain elements lead to success and that these can be identified with the right experimental design. For example, if element X promotes success, we should be able to see that by holding all other elements constant and measuring the relationship between the amount of X and success.

But this is not the only possible model. Perhaps X promotes success in some situations, has no effect in others, and is detrimental in still others, depending on other factors in the school. So a certain class size may be critical in some cases and irrelevant in others.
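
To make the point concrete, here is a minimal simulation (the numbers are invented and do not come from any study). The effect of an intervention X depends on a hidden school factor, so a pooled analysis finds essentially no average effect even though X matters a great deal in particular contexts.

```python
import random

random.seed(1)

# Each simulated school has a hidden context that determines whether more of
# "element X" helps, does nothing, or hurts.
schools = []
for _ in range(3000):
    context = random.choice(["helps", "neutral", "hurts"])
    x = random.random()                               # amount of X at this school
    effect = {"helps": 10, "neutral": 0, "hurts": -10}[context]
    score = 50 + effect * x + random.gauss(0, 5)      # outcome with noise
    schools.append((context, x, score))

def slope(pairs):
    """Ordinary least-squares slope of score on x."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    den = sum((x - mean_x) ** 2 for x, _ in pairs)
    return num / den

# Pooled across all schools, the estimated effect of X is close to zero...
print("pooled slope:", round(slope([(x, s) for _, x, s in schools]), 2))

# ...yet within each context the effect is large and consistent.
for c in ("helps", "neutral", "hurts"):
    subset = [(x, s) for ctx, x, s in schools if ctx == c]
    print(c, "slope:", round(slope(subset), 2))
```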

By analogy, let’s imagine that the Austro-Hungarian Empire decided to improve the design of clocks and sent out a team to carefully measure the properties of all clocks in the empire to establish relationships between the characteristics of the clocks–gearing, length of pendulum, materials, etc.–and their accuracy. What they would likely find is that there are no consistent relationships, because the right gear, for instance, depends on all the other parts of the design.

Accepting a similar model for schools, it no longer makes sense to compare charter schools to traditional public schools. Some will be better; others will be worse. The differences between individual charter schools will be far greater than the average differences between the two types of schools. The lack of a substantial overall difference does not mean, however, that chartering can be dispensed with. For a successful charter school, the autonomy granted by the charter may be a critical part of its success.
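
The same logic can be shown with a toy calculation (again, the scores are invented for illustration): when the spread from school to school dwarfs the gap between sector averages, knowing a school’s sector says little about its quality.

```python
import random
import statistics

random.seed(2)

# Hypothetical school-level scores: a small difference between sector averages,
# a large spread from school to school within each sector.
charter = [random.gauss(51, 10) for _ in range(200)]
traditional = [random.gauss(50, 10) for _ in range(200)]

sector_gap = statistics.mean(charter) - statistics.mean(traditional)
school_spread = statistics.stdev(charter + traditional)

print(f"average difference between sectors: {sector_gap:.1f} points")
print(f"school-to-school spread:            {school_spread:.1f} points")
# With numbers like these, a school's sector tells you almost nothing about
# whether it is one of the strong schools or one of the weak ones.
```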

Thus, rather than trying to measure the effect of individual properties in isolation, it may be more productive to identify successful schools, analyze the factors behind their success, and then ask whether the model can be replicated.

Posted in Charter performance, Charter Schools, Research, Student achievement, Value added measures

Agenda-focused research

One question I have when looking at education research reports is the motivation of the researchers. Were they interested in finding out whether a particular intervention had an effect on student achievement, or in showing that it did? An argument can be made that motivation doesn’t matter as long as researchers follow the correct protocols (presumably this is why the What Works Clearinghouse does not consider the sponsor of the research it reports on), but I would hold that there are still so many judgments to be made that researcher bias can affect the results.

Often there is a chicken-and-egg problem: does the researcher believe X causes Y because that is what the research showed, or did the researcher look for evidence to demonstrate that X causes Y? Even if the researcher started out completely neutral, what about follow-on research? Does the researcher now have a vested interest in confirming his or her previous research, particularly if it was published and attracted some attention?

The credibility that comes from being perceived as not having a vested interest in the results is a valuable asset, and I am surprised how often researchers throw it away. A recent example is Economists for Romney, which has a web site with a statement and the names of economists who presumably endorse it. One can question whether individuals who sign a partisan statement are able to dispassionately analyze policies associated with either candidate.

By contrast, the Chicago Booth business school has a panel of about 40 economists from major research universities. Every week panelists are asked whether they strongly agree, agree, are uncertain, disagree, or strongly disagree with a statement on an economic issue, and to rate their degree of confidence in their answer. The panel appears to represent a reasonable cross-section of economists at major universities, so its answers can give a gauge of where there is consensus among economists and where there is none.

Comparing the assertions in the Romney economists’ statement with the issues the panel was asked to respond to, I found one reasonably close match. The Romney statement complains that Obama “relied on short-term ‘stimulus’ programs, which provided little sustainable lift to the economy….” In February 2012, the panel was asked to respond to: “Because of the American Recovery and Reinvestment Act of 2009, the U.S. unemployment rate was lower at the end of 2010 than it would have been without the stimulus bill.” All but three of the economists responding agreed or strongly agreed with this statement.

One of those who disagreed was the only panel member who signed the Romney statement. So again, one is left with the question of whether his opposition to Obama influenced his conclusion that the ARRA did not decrease unemployment.

Finally, I looked for names on the Romney list that I recognized as players in education policy. I found only one, though most education researchers are probably not economists. This individual strikes me as knowledgeable, with a good grasp of statistical tools, but his results always fall on what is considered the conservative side of an issue. In my experience, research on education policy is messier than he makes it out to be.

Unfortunately, there is no simple answer to how to separate agenda-focused research from research aimed at understanding how things work. One step would be for researchers themselves to distinguish more clearly between when they are acting as advocates and when they are putting their own beliefs aside.

Posted in Education Strategy, Student achievement, Value added measures

On using student achievement

There is slow but steady progress toward using student outcomes in evaluating teachers. For example, see this recent editorial in the Milwaukee Journal Sentinel. On the whole this strikes me as a hopeful trend, but I also see it in danger of going off track.

The danger is that too many of the people pushing this seem to operate out of a Theory X framework: the solution, in their view, is to reward good teachers and punish bad ones. I certainly have no problem with identifying the small minority (I would estimate around 5%) who shouldn’t be teaching and moving them on to some other career. When I was on the Milwaukee school board, teachers had the right to appeal to us before they were dismissed, and some were very, very bad.

But aside from this small minority, improvement through a scheme of rewards and punishments is unlikely to produce significant results. Hoping that all teachers will be above average is a pipe dream, if for no other reason than the massive hiring that will be needed in the next few years. Instead of winnowing the pool, we need to give average people the tools they need to succeed.

One model is the typical application of statistical process control in industry. Rather than relying on a boss to say whether they are doing a good job, workers are given tools that let them measure their own output and make changes when the output is not measuring up. The problem, in most cases, is not a lack of motivation but a lack of information on how well students are learning, combined with a lack of effective strategies for increasing student learning.
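
As a rough illustration of what that might look like in a classroom (a hypothetical sketch, not a description of any actual MPS tool), a teacher could put weekly formative-assessment results on a simple control chart: establish a baseline average and control limits, then treat any week that falls outside the limits as a signal to investigate.

```python
import statistics

# Hypothetical weekly class averages on a short formative reading check (percent correct).
weekly_scores = [72, 75, 70, 74, 73, 71, 76, 60, 74, 72]

# Use the first six weeks as a baseline to set the center line and control limits.
baseline = weekly_scores[:6]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = center + 3 * sigma, center - 3 * sigma

print(f"center line {center:.1f}, control limits ({lower:.1f}, {upper:.1f})")

for week, score in enumerate(weekly_scores, start=1):
    status = "in control" if lower <= score <= upper else "investigate"
    print(f"week {week:2d}: {score:3d}  {status}")
```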

Posted in Student achievement, Value added measures

A code of ethics for think tanks?

There are certainly a number of careful, ethical think tanks that produce thoughtful, useful reports. But in recent years there has been an explosion of junk think tanks whose purpose seems to be to produce reports solely to justify previously determined positions. The growth of the latter threatens to undermine the credibility of the former, just as push polls threaten legitimate polls. How is a nonexpert reader, not acquainted with the research on a topic, to distinguish between fair analysis and propaganda?

I suggest that the legitimate think tanks get together and develop a set of ethical standards. Some points that might be included:

  1. Reports should clearly reference all sources of information and wherever possible supply links to the data. If the data or report referenced is not already on the web, the organization should place it on the web, as long as copyright law allows. In other words, it should be easy for the reader to check primary sources, including their context.
  2. Address alternative explanations.
  3. Fully disclose counter evidence.
  4. Disclose potential conflict of interest among authors, the organization or funders.

If such a code were widely adopted, its absence would serve as a red flag to the reader: these people may be trying to put something over on you. It might also be used by the IRS as evidence of whether an organization deserves 501(c)(3) status or is asking taxpayers to subsidize advocacy.

These thoughts were triggered by a recent report attacking a proposal for a street car in Milwaukee, a report that violates all of these points. But I have seen many, many education reports that also could not pass them.

Posted in Education Strategy, Street cars, Think tanks, Uncategorized

The Effects of School Vouchers on College Enrollment

A new report from the Brookings Brown Center and the Harvard Kennedy School measures the effect of a privately funded voucher program in New York City on student outcomes. In sum, the study found that African American students accepted into the program were more likely to go to college, Hispanic students were somewhat more likely, and there was no difference among white students.

As with a number of similar studies of vouchers and charters, the authors took advantage of the fact that the program could not accommodate everyone who applied, so a random drawing was used to select students. The ambivalent results are also typical of the genre: in broad strokes it is hard to see a difference, but differences may appear when subgroups are examined.
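
Because admission was decided by lottery, the most straightforward analysis is to compare outcomes for lottery winners and losers, overall or by subgroup. The sketch below shows the logic with invented enrollment counts; they are not the study’s numbers.

```python
# Hypothetical counts, purely for illustration (not the study's data):
# subgroup -> arm -> (number of students, number who enrolled in college)
counts = {
    "African American": {"winners": (500, 230), "losers": (500, 200)},
    "Hispanic":         {"winners": (500, 215), "losers": (500, 205)},
    "White":            {"winners": (200, 100), "losers": (200, 100)},
}

for group, arms in counts.items():
    win_n, win_enrolled = arms["winners"]
    lose_n, lose_enrolled = arms["losers"]
    win_rate = win_enrolled / win_n
    lose_rate = lose_enrolled / lose_n
    # Because assignment was random, the winner-loser difference in enrollment
    # rates is an intent-to-treat estimate of the offer's effect for that group.
    print(f"{group:16s} winners {win_rate:.1%}  losers {lose_rate:.1%}  "
          f"difference {win_rate - lose_rate:+.1%}")
```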

I have become increasingly convinced that the connection between governance and student achievement is an attenuated one at best. What affects student achievement is the teachers, the principal, the educational program, and so on at the school, and it is unlikely that governance choices will act on these in a consistent direction. So in some situations charters or private schools may take advantage of their relative freedom to adopt more effective programs than the neighboring public schools, but they may also use this freedom to make bad decisions.

What would be very valuable–but tough to do–would be to try to identify common characteristics of private schools that outperform their neighboring public schools.

Posted in Charter performance

Three Things Society Could Do for Schools

In a previous post, I listed ten things I would recommend that school districts serving large numbers of low-income kids consider. But what changes should we be advocating in society that would add to student success? Here I propose three that could have a huge positive impact on education:

1.    Universal, no-hassle health care. Sick children can’t learn. With the present chaotic health system, compassionate schools feel forced to spend time and energy trying to get care for their students’ health problems, time that would be better spent on education. In addition, health care is consuming an increasing portion of education dollars, in part because public schools are among a decreasing number of employers offering generous health insurance.

2.    Reduce teenage pregnancy. Children attempting to care for children find it hard to concentrate on their studies. And students failing in school disproportionately come from single-parent families, typically with a mother struggling to make ends meet. The catch is that effective programs to reduce teenage pregnancy violate some people’s moral or religious beliefs. Vouchers, by allowing objectors to find schools that match their values, may help defuse the culture wars.

3.    Adopt policies that promote full employment. Unemployment hits young people the worst, both because they have the highest rates and because they miss out on the chance to learn job skills. A weak job market reinforces a belief that it is not worth trying in school. But unemployment also represents an opportunity: unused resources are available to build needed infrastructure, for example.

One commonly offered answer to this question is to end poverty. But this seems to get things backward. Historically, education has been the path out of poverty. Can it no longer serve that role?

Posted in Education Strategy, Urban Success

Ten Things for School Systems to Try

Since leaving the Milwaukee school board, I have occasionally been asked for advice by people tackling the problems of other urban school districts. Here are ten recommendations, based on my experience:

1.    Make sure there is a strong program to identify, prepare, and evaluate principals.

2.    Use value-added measures to pierce the curtain. How can these measures be used to give immediate feedback to teachers on their students’ learning? (A bare-bones sketch of the idea follows this list.)

3.    Follow through on initiatives. Before starting an initiative, determine how its success will be measured and make sure the initiative’s continuation is based on results, rather than a change in personnel.

4.    Make sure that teachers teaching reading know and follow the research.

5.    Don’t make schools do everything. Let schools specialize in particular students and subjects.

6.    Don’t let high averages mask the student who is missing out.

7.    Encourage alternative governance models, but don’t expect them to be the magic bullet. While charter and choice programs can increase the diversity of educational opportunities, they are no guarantee of success.

8.    Evaluate using outcomes, rather than standardized practices, but offer systems of support to help schools get better.

9.    Avoid math mush; emphasize mastery; don’t pass off vagueness as creativity.

10.    Connect with the job market; don’t make college attendance the only measure of success.
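
On item 2, the sketch below shows the simplest version of the idea, with invented scores: predict each student’s current score from a prior score and treat the residual (actual minus predicted) as a crude measure of value added. Real value-added models control for much more, but the basic feedback loop is the same.

```python
# Minimal value-added sketch with made-up scores: regress this year's score on
# last year's, then report each student's residual as a rough gain measure.
students = {
    "A": (410, 455), "B": (390, 430), "C": (450, 470),
    "D": (370, 425), "E": (430, 445),          # (prior score, current score)
}

prior = [p for p, _ in students.values()]
current = [c for _, c in students.values()]

n = len(students)
mean_p = sum(prior) / n
mean_c = sum(current) / n
slope = (sum((p - mean_p) * (c - mean_c) for p, c in zip(prior, current))
         / sum((p - mean_p) ** 2 for p in prior))
intercept = mean_c - slope * mean_p

for name, (p, c) in students.items():
    predicted = intercept + slope * p
    print(f"student {name}: actual {c}, predicted {predicted:.0f}, "
          f"residual {c - predicted:+.0f}")
```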

Posted in Charter Schools, Value-added

The MPS Math Selection

The Milwaukee Public Schools administration is currently recommending the adoption of the following series at the elementary level: Math Expressions, Everyday Math, and Scott Foresman. As I reported here, two of those series–Expressions and Scott Foresman–were included in research comparing student achievement across four different series.

Expressions did quite well, leading the pack for first graders and coming in second (to Saxon Math) for second graders. Scott Foresman, however, consistently placed at the bottom (Investigations also fared poorly).

Click here to download a copy of the full report. Figure 1 compares the results of the four programs for both first and second grade.

It is unclear where Everyday Math would fit into the spectrum of effectiveness, since it was not one of the four programs in the experiment. The What Works Clearinghouse does say that “Everyday Mathematics® was found to have potentially positive effects on math achievement for elementary students,” but this finding was based on a comparison with an unspecified “standard” curriculum in a district.

Oddly, the WWC has no Intervention Report on Math Expressions.

The MPS PowerPoint introducing the recommendations does not address whether research results were considered. It does, however, describe in some detail the variety of organizations involved in the decision. So it is likely that the decision about instructional materials was influenced much more by consensus than by research.

Posted in Student achievement

Goal displacement and testing

One of the dangers of any metric is that the metric itself becomes the goal. Thus, rather than concentrating on making sure students become competent readers, schools may concentrate on raising reading test scores by teaching students test-taking strategies.

The problem is not low test scores; it is that too many students have low reading and math skills. We know this from many other indicators, including drop-out rates, the lack of success many students have in college, and complaints from employers. But the test is the most consistent indicator we have for making comparisons among schools and classrooms and over time. (Fourth-grade teachers are often very aware of which third-grade teachers’ students arrive well prepared and ready to learn, but I am not aware of anyone who has been able to turn this into a consistent measure.)

Some test advocates will say that confusing the metric with the goal shouldn’t matter: if the metric is a good one, improving it should also lead to improvement in the underlying goal. Low reading and math scores should prompt a search for better ways of teaching reading and math.

This is fine if the search for better reading and math scores leads to better reading and math skills. But what if it leads to class time spent teaching test-taking strategies or holding pep rallies before the test? Whether or not these activities improve test scores (and there is considerable research that says their effect is small or even negative), they are very unlikely to improve the basic skills.

The emphasis on reading and math tests has been blamed for an impoverishment of education, including the disappearance of recess in some schools. But where is the research that says kids are better readers if they don’t get out and run around once in a while? If it exists, I haven’t seen it.

Similarly, I have not seen any research showing that schools that deemphasize science and history, or eliminate shop and home economics, thereby improve their students’ reading and math skills.

There is, however, increasing evidence that character, including fulfilling one’s commitments and being able to delay gratification, has a major impact on success, both in school and in one’s career.

As I mentioned briefly earlier, there has been controversy over whether one can increase test scores without increasing the underlying skills. Certainly it can be done by cheating. And companies claiming they will raise college entrance exam scores make lots of money on that proposition.

And some kinds of test prep may indeed increase student skills. Some years ago I looked at a program that was sold explicitly on its promise to raise 4th-grade reading scores on state tests. When I looked back at the results, the schools chosen for the program had raised their scores not only on the 4th-grade reading tests but also in other subjects and in other years. Part of the program involved giving frequent tests to see how students were progressing. I speculated that this practice trained teachers to become much more aware of how their students were doing.

If our goal is truly to increase reading and math skills, then it becomes irrelevant whether test-taking strategies improve scores. There is too little time in the school day to spend on activities that do not improve student skills.

Posted in Value-added