Carl Walters of UBC came to UC Davis for our seminar in adaptive management. Carl was also kind enough to let us post the pre-print of the paper he was presenting, which you can download here.

These are my rough notes from the talk. Any errors or misrepresentations are my own.

The central tenet of adaptive management is that policies should be treated as experimental treatments with uncertain outcomes.
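To make that concrete, here is a minimal sketch of my own (not from the talk) of what treating a policy choice as an experiment can look like: two hypothetical flow policies with unknown success rates, a yearly decision about which to apply, and a Bayesian update of beliefs from each year's monitoring outcome. The policy names and all numbers are invented for illustration.

```python
# Toy sketch of "policies as experimental treatments" -- not from the talk.
# Two hypothetical policies with unknown success probabilities; each year we
# pick one via Thompson sampling, observe a noisy outcome, and update beliefs.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" probabilities that a year under each policy is judged a
# success for native fish -- unknown to the manager, used only to simulate data.
TRUE_SUCCESS = {"steady_flows": 0.35, "fluctuating_flows": 0.55}

# Beta(1, 1) priors: start out assuming nothing about either policy.
beliefs = {p: {"a": 1, "b": 1} for p in TRUE_SUCCESS}

for year in range(20):
    # Draw a plausible success rate for each policy from current beliefs and
    # apply whichever looks best this year (mixing exploration and exploitation).
    draws = {p: rng.beta(v["a"], v["b"]) for p, v in beliefs.items()}
    policy = max(draws, key=draws.get)

    # Monitoring returns a noisy binary outcome for this year's treatment.
    success = rng.random() < TRUE_SUCCESS[policy]

    # Bayesian update of the belief about the policy that was actually tried.
    beliefs[policy]["a" if success else "b"] += 1

for p, v in beliefs.items():
    trials = v["a"] + v["b"] - 2
    print(f"{p}: estimated success rate {v['a'] / (v['a'] + v['b']):.2f} "
          f"after {trials} trial years")
```

The point is not the particular algorithm (Thompson sampling here, my choice) but the structure: every policy applied is also a treatment that generates information, and beliefs are revised accordingly.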

The practice originated with NASA’s model simulations to troubleshoot space missions. This inspired ecological managers. However, it was soon realized that there were key ecological processes for which there wasn’t any data, and for which data were unlikely to be forthcoming. The recruitment process in fisheries is a good example. Also, ecological theory wasn’t predictive (or very good) at the time. The “natural flow paradigm” in river management, for instance, promoted the idea that one should restore historic flows, but this is built on a misunderstanding of evolutionary theory - organisms could respond positively or negatively to a changed environment.

Three variations on adaptive management

  • AM “lite”: Assuming knowledge is basically correct, monitor and correct as necessary
  • AM for optimists: Assume knowledge of the right basic policy, do “probing” experiments to detect opportunity for improvement. This makes management and experimentation somewhat separate efforts
  • AM with humility: Assume nothing, treat every policy choice as an experiment.

    You’re going to do the experiment; the issue is whether you do it well.

Adaptive management has a poor record in implementation. Implementation failure is more common than modeling failure. (Modeling is where you figure out what you don’t know).

Grand Canyon Case Study

Starts at Glen Canyon Dam, which wasn’t designed to release water from the top. Water is released from the bottom, so it’s very cold. The dam removed seasonal variation but created very strong diurnal variation because of changes in hydropower demand. (Hydro dams are faster to spin up and down than coal or other plants.)

In the 1990s, they increased minimum flows at night and reduced the daytime peak. The exact pattern has changed with different experimental treatments.

Much recent policy focus has been on maintaining native fish, which go up tributaries to spawn and rear, then return to the main-stem river to grow as adults.

Aside - average beer consumption during Grand Canyon field work is 10/scientist/day.

Ecological monitoring has revealed a strong gradient in production: most productive in the clear water near the dam, and much less productive downstream in more turbid water, with a factor-of-1000 difference in the density of most organisms.

Scientists tend to design monitoring programs for large systems around intensive monitoring at a few sites. The better strategy, however, is to monitor extensively at many sites.
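A toy simulation (mine, not something presented in the talk) shows why: with a fixed sampling budget and strong between-site variability, spreading effort across many sites estimates the system-wide mean far better than sampling a few sites intensively. All counts and variances below are invented.

```python
# Toy comparison of monitoring designs under a fixed sampling budget:
# a few intensively sampled sites vs. many lightly sampled sites.
import numpy as np

rng = np.random.default_rng(0)
BUDGET = 120                     # total samples we can afford
SITE_SD, SAMPLE_SD = 5.0, 1.0    # between-site vs. within-site variability
TRUE_MEAN = 10.0                 # system-wide mean we are trying to estimate

def design_rmse(n_sites, reps=2000):
    """Monte Carlo RMSE of the estimated system-wide mean for a design."""
    samples_per_site = BUDGET // n_sites
    errors = []
    for _ in range(reps):
        site_means = rng.normal(TRUE_MEAN, SITE_SD, n_sites)
        obs = rng.normal(site_means[:, None], SAMPLE_SD,
                         (n_sites, samples_per_site))
        errors.append(obs.mean() - TRUE_MEAN)
    return float(np.sqrt(np.mean(np.square(errors))))

print("intensive, 3 sites x 40 samples :", round(design_rmse(3), 2))
print("extensive, 40 sites x 3 samples :", round(design_rmse(40), 2))
```

When between-site variance dominates (as the factor-of-1000 production gradient suggests it does here), adding more samples at the same site buys almost nothing; only more sites shrink the error.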

Primary policy objectives

  • Produce power for Southwest region
  • Conserve the sandy beaches as camp sites for rafters
  • Conserve and restore native fishes (several are locally extinct; the objective is driven by the ESA, with the humpback chub an important species)
  • Maintain healthy “blue ribbon” trout fishery
  • A few others - cultural resources, etc.

Manipulations

  • Experimental floods - these have built temporary sand bars and triggered an increase in rainbow trout.
  • Beach habitat building flows
  • Modified low fluctuating flows
  • Low summer steady flows
  • Trout suppression flows - strong diurnal flows to drive down invasive species
  • Unintended warming treatment when Lake Powell water level dropped
  • Fall steady flows to support juvenile native fish
  • Mechanical removal (electrofishing) of trout

Surprises in the system

We only learn when there are surprises.

Expectations are initially set via both conceptual (verbal and graphical) models and numerical models. However, reasonable models diverge widely in their predictions (e.g., the effect of controlling the temperature of water released from the dam).
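As a toy illustration of how reasonable models can diverge (mine, not from the talk, and using the fisheries recruitment example mentioned earlier rather than the temperature question): two standard stock-recruitment models, Ricker and Beverton-Holt, can agree closely over the range where data exist and still give wildly different predictions when extrapolated. Parameters are invented.

```python
# Two standard recruitment models calibrated to agree at low spawner
# abundance, then evaluated outside the "observed" range.
import numpy as np

def ricker(s, a=2.0, b=0.002):
    """Ricker stock-recruitment curve: recruits decline at high stock size."""
    return a * s * np.exp(-b * s)

def beverton_holt(s, a=2.0, b=0.002):
    """Beverton-Holt curve: recruits saturate at high stock size."""
    return a * s / (1.0 + b * s)

for spawners in (100, 500, 2000):   # the last value lies outside the data range
    print(f"S={spawners:5d}  Ricker={ricker(spawners):7.1f}  "
          f"Beverton-Holt={beverton_holt(spawners):7.1f}")
```

At low stock sizes the two curves are nearly indistinguishable; at high stock sizes one predicts collapse and the other saturation, which is exactly the kind of disagreement that experiments, rather than more modeling, have to resolve.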

  • Beaches created by experimental floods erode quickly
  • Teleconnections between fish populations: reduced flow variation leads to more trout. Trout disperse very far and prey on native fish, so treatments near the dam affect native fish populations far away.
  • Steady flows resulted in a very fine-grained patchy pattern of near-shore warming. This triggered more experiments to create warm-water habitat for native fish.
  • Flooding creates microhabitat for invertebrates, which in turn causes such large spikes in trout population that growth is stunted. This was confirmed by looking back at historic records. Having both experimental and historical data provides much better evidence.
  • Native fish growth and survival have gone down under steady flows.

Challenges

  • Long-term monitoring has been inadequate in some areas, which limits the ability to fit the simulation model to data.

Lessons

Modeling is there to tell you what you don’t know and warn about gaps in data and knowledge.

State of affairs

  • A new EIS (Environmental Impact Statement) will prescribe treatments for 20 years
  • Debate over whether to continue experimentation

Comments and questions

  • What are the relative difficulties with a river versus a system with broader spatial extent?
    • It’s much easier when there is spatial replication. The most advanced programs are in forests. The inspiration for much of this is agricultural experimental programs. The only trouble with spatial programs is that the responses can be long-term and the monitoring cost can be very high across many spatial replicates.
    • Key issue is innovation in monitoring
  • What about treating the Grand Canyon along with other systems as a group monitoring project?
  • How do you get users to participate in experiments if they think the results will go against their interests?
    • Need some central authority, unless you can create economic incentives.
      • Super interesting idea! - incentives for experimentation, not just mitigation
  • A major challenge is the mismatch between the time scale on which managers need knowledge and how long it takes the biology to respond
  • Personal connection to the canyon is central to success. Few areas have such a dedicated community, who love it enough to compromise when they need to
  • Adaptive management is constrained - the best or adequate choice might not be in the feasible set
    • It’s useful to at least be able to avoid some of the worst choices. Also, the modeling exercises often surface alternative choices not originally considered
  • What’s the level of optimism that we’ll ever know the system well enough
    • AM isn’t about understanding the complexity. It’s about testing management options. Understanding nature is too high a bar for this. It’s pretty much impossible.
    • We DO need to do replicates though, because complexity generates so much extra variability. We also must watch for surprises in the system. As long as we focus on that we’ll continue learning.
  • Does this mean that black-box AM is best? Or does mechanistic understanding speed up our ability to learn about policy?
    • Mechanistic understanding is best probably for understanding what, when, and where to monitor.
  • On risk/uncertainty: Our society is excessively risk-averse, but scientists don’t help if we hide the uncertainty. This is a big issue with credibility in climate science.
