What do we mean by intelligence? Like life, it’s hard to define, but we need to if we want to search for it. Among the radio astronomers of SETI—the Search for Extraterrestrial Intelligence—it’s only half a joke that the true hallmark of intelligent life is the creation of radio astronomy.

Modern SETI was born as the Cold War simmered. In late 1959 Giuseppe Cocconi and Philip Morrison published, in Nature, calculations showing that existing radio telescopes could detect signals transmitted across interstellar distances. In 1960 Frank Drake began the first search, using the National Radio Astronomy Observatory in Green Bank, West Virginia. He also led a workshop there, which produced the famous “Drake equation” for estimating the number of broadcasting civilizations by multiplying factors such as the rate of star formation, the fraction of stars with planets, and so on. It was never meant to calculate a specific answer so much as to frame the discussion of how the development of planets, life, and civilizations affects the likelihood of finding anyone out there to talk to.
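Written out, the equation multiplies a chain of factors, each one a filter on the last. In its standard form:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
```

Here N is the number of civilizations whose signals we might detect, R* is the rate of star formation in the galaxy, f_p the fraction of stars with planets, n_e the number of habitable planets per planetary system, f_ℓ, f_i, and f_c the fractions of those on which life, intelligence, and communicating technology respectively arise, and L the average lifetime of a broadcasting civilization.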

When you do the math, the answer depends most crucially on the factor Drake called L—the average longevity of a civilization. If L is small—say, less than 1,000 years—then the distance between civilizations is vast, and the chances of SETI succeeding are nil. But if L is large—say, millions of years—then the galaxy should be full of chattering sentience, some quite near.
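To see how thoroughly L dominates, it helps to run the numbers. Every factor value below is an illustrative guess, not a measurement; the point is only that once the other factors are fixed, N scales in direct proportion to L:

```python
# A toy Drake-equation calculation. All factor values are illustrative
# assumptions chosen for the sake of the example, not measured quantities.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of currently broadcasting civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hold every factor except L fixed at a guessed value...
factors = dict(
    R_star=1.0,  # star formation rate (stars per year) -- assumption
    f_p=0.5,     # fraction of stars with planets -- assumption
    n_e=2.0,     # habitable planets per planetary system -- assumption
    f_l=0.5,     # fraction of those where life arises -- assumption
    f_i=0.1,     # fraction developing intelligence -- assumption
    f_c=0.1,     # fraction that broadcast detectably -- assumption
)

# ...and vary only L, the average civilization lifetime in years.
for L in (1_000, 1_000_000):
    print(f"L = {L:>9,} years  ->  N ~ {drake(L=L, **factors):,.0f}")
```

With these guesses a short-lived galaxy (L of a thousand years) yields only a handful of civilizations scattered across a hundred billion stars, while a long-lived one (L in the millions of years) yields thousands, some of them plausibly nearby.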

Wondering whether other geek civilizations could survive for long periods is an excellent way to think, from a slightly different perspective, about our own problems. Given the precarious times in which SETI was born, it made sense that pioneers like Drake, Morrison, and Carl Sagan imagined that if L were short, it was because most civilizations “blow themselves up” in a nuclear holocaust. Given our current Anthropocene anxieties, present-day discussions of L often focus on the existential threats of climate change and resource exhaustion, and on the challenges of sustainability. But the overarching question linking them is: Can an advanced technological species develop a long-term, stable relationship with world-changing technology?

In fact, I would argue that this makes a better operational definition of intelligence than the “radio intelligence” characterization given above. If you look at how we define intelligence here on Earth, it has to do with abilities like abstract thought, symbolic language, and problem-solving. Such a definition certainly qualifies individual humans (with honorable mentions going to several other terrestrial species). But what good is all this so-called intelligence if we can’t ensure our civilization’s survival against the problems we’re creating with all of our technical cleverness—if we don’t have our act together as a global entity? We’re at least momentarily stuck in a weird stage we might call proto-intelligence.