When making decisions in unfamiliar territory, especially those involving public or personal health, ecological interventions, or emerging technologies, we face a fundamental choice:
- Do we act now and wait to see if harm emerges?
- Do we hold back until we can prove it is safe?
The following table compares two opposing postures: the Reactive ("Prove Harm") approach and the Precautionary ("Prove Safe") approach.
| Reactive ("Prove Harm") | Precautionary ("Prove Safe") | |
|---|---|---|
| System Type | Complicated: engineered, knowable, static, stable, parts can be isolated. | Complex: adaptive, unpredictable, entangled, dynamic, interdependent, nonlinear. |
| Burden of Proof | On the critic: "Prove it's harmful." | On the implementer: "Prove it's safe." |
| Risk Logic | Accept unknown risks until disproven. | Avoid unknown risks until well understood. |
| Risk Asymmetry | Assumes symmetrical risk: visible, bounded, local. | Recognizes asymmetrical risk: invisible, systemic, and irreversible. |
| Risk Magnitude | Risk is assumed to be manageable and contained. | Risk may be cascading, emergent, or existential. |
| Failure Consequence | Mild, local, reversible. | Severe, global/systemic, possibly irreversible. |
| Error Mode | Tolerates false negatives (real harm goes unnoticed). | Tolerates false positives (opportunities are forgone). |
| Feedback Type | Fast, measurable, direct. | Slow, ambiguous, often delayed. |
| Moral Hazard | Externalizes risk to the public, future, environment. | Internalizes responsibility for unknown consequences. |
| Common Language | "There's no evidence it causes harm." | "We lack evidence it's safe, beyond a shadow of a doubt." |
| Common Reframe | "Show me the data that it's dangerous." (databrain) | "In complex systems, we must prove safety before exposure." |
| Reactive ("Prove Harm") | Precautionary ("Prove Safe") |
The reactive mindset assumes that what cannot be seen or modeled is likely not a problem. The precautionary mindset understands that in complex systems, effects can be delayed, hidden, and asymmetrical; small actions can produce cascading, irreversible outcomes.
What matters is not just whether we have seen harm, but whether the type of system we are acting in is capable of revealing it in time.
A core misunderstanding lies in the conflation of complicated systems (which can be fully understood and engineered) with complex systems (which are unpredictable, interconnected, nonlinear, and emergent). While reactive approaches may be tolerable in complicated systems, they become reckless in complex ones, precisely because the cost of being wrong is disproportionately high, feedback arrives too slowly for early correction, and it is often nearly impossible to trace any linear chain of causation.
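To make the feedback problem concrete, here is a minimal, purely illustrative Python sketch (every number and threshold is an assumption, not a measurement): a slow-acting stressor accumulates out of sight, while the observable harm signal is damped and delayed, so by the time monitoring crosses a detection threshold the system may already be past an irreversible tipping point.

```python
# Illustrative sketch only: assumed exposure rates, thresholds, and lags.
# A hidden "stock" of stress accumulates; observers see only a damped,
# delayed proxy of it, so detection can arrive after the point of no return.

def simulate(exposure_per_step, detection_threshold=1.0, tipping_point=5.0,
             lag=20, steps=200):
    stock = 0.0              # accumulated stressor (not directly observable)
    observed = [0.0] * lag   # the harm signal reaches observers only after a delay
    for t in range(steps):
        stock += exposure_per_step
        observed.append(0.1 * stock)   # damped, delayed proxy of the true load
        signal = observed[t]           # what a monitor actually sees at time t
        if signal >= detection_threshold:
            return t, stock, stock >= tipping_point
    return None, stock, stock >= tipping_point

t_detect, true_load, past_tipping = simulate(exposure_per_step=0.2)
print(f"harm detected at step {t_detect}, true load {true_load:.1f}, "
      f"already past tipping point: {past_tipping}")
```

The specific numbers do not matter; the structure does. When feedback is delayed and damped, "no observed harm yet" says very little about where the underlying system already is.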
Compounding this is the habit of privileging what is measurable, modelable, and spreadsheet-friendly, while ignoring or dismissing what is systemic, emergent, or ethically charged. This "databrain" thinking assumes that if it does not show up in the data, it does not exist.
The precautionary principle does not mean we must be paralyzed. It means we calibrate our response to the asymmetry of risk with wisdom. In domains where the harm of being wrong could be societal, generational, or existential, we shift the burden of proof. We do not ask critics to prove that harm might occur; we require proponents to prove that it will not. This stance is sometimes dismissed as anti-science, but it is nothing of the sort: it is good systems science.
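As a rough illustration of why this asymmetry shifts the burden of proof, here is a small sketch with hypothetical numbers: the gain from acting is bounded and recoverable, while the potential harm is systemic and irreversible, so even a small probability of harm can swamp the forgone benefit of waiting.

```python
# Hypothetical numbers only: compares the expected net loss of acting now
# against the bounded loss of holding back, when potential harm is far larger
# than the benefit and is not recoverable once it occurs.

def expected_loss_act(p_harm, harm_cost, benefit):
    # Acting now: collect the benefit, but bear the harm with probability p_harm.
    return p_harm * harm_cost - benefit

def expected_loss_wait(benefit):
    # Holding back: the only loss is the forgone (bounded, reversible) benefit.
    return benefit

benefit = 10          # bounded, local, recoverable gain (assumed)
harm_cost = 10_000    # systemic, irreversible loss if harm occurs (assumed)

for p_harm in (0.001, 0.01, 0.05):
    act = expected_loss_act(p_harm, harm_cost, benefit)
    wait = expected_loss_wait(benefit)
    print(f"p(harm)={p_harm:.3f}: net loss if we act={act:+.1f}, "
          f"loss if we wait={wait:+.1f}")
```

Under these assumed figures, acting only breaks even when the probability of harm is both tiny and known with confidence; in a complex system, that confidence is precisely what we lack, which is why the burden falls on the proponent to establish safety.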