Perhaps we are all living inside a gigantic simulation, experiencing a virtual world that we mistakenly think is real. And yet it all feels too real to be a simulation: the weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around us.
On this view, humans are locked into a virtual world that they accept unquestioningly as real. A gentler variant holds that our entire Universe might be real yet still a kind of lab experiment: created by some super-intelligence, much as biologists breed colonies of micro-organisms.
So much junk is floating around in low Earth orbit (LEO) that NASA occasionally has to reroute the International Space Station to avoid collisions, and at least once, with the station in the path of onrushing debris, herded the crew into the attached Soyuz spacecraft in case they had to escape.
Of the estimated 500,000 debris pieces larger than a marble — nonfunctional spacecraft, abandoned launch vehicle stages, mission-related and fragmentation debris — more than 20,000 are larger than a softball. They travel at speeds up to 17,500 mph, fast enough for even untraceable flecks of paint to damage a satellite or a spacecraft. The risk of collision will only grow as new scientific missions, commercial constellations, and manned spacecraft enter service in LEO.
Alan DeClerck, who received his MBA from Stanford Graduate School of Business in 1985, began working last year with Menlo Park-based startup LeoLabs to help the commercial space industry navigate that high-altitude obstacle course. His partners in the new venture include former engineers and radar scientists from SRI International, a large research lab that spun off LeoLabs as a commercial enterprise.
The company’s two completed radar arrays in Alaska and Texas track more than 1,000 objects per hour, and its partners and customers access that data through a LeoLabs software platform. The company plans to build four more debris-tracking radar facilities near the equator and the polar regions by 2019.
“There’s a gold rush to put new satellite services up there, but the question is how can we secure these services against a backdrop of manmade debris moving at 17,500 mph?” says DeClerck, who is the company’s vice president of business development and strategy. “Achieving what’s known as space situational awareness is critical for defense, communications, and human space flight.”
We asked DeClerck for an overview of the startup’s launch.
How did the idea for LeoLabs come about?
Nearly a decade ago, a couple of our founders working at SRI got a National Science Foundation grant to build this radar array near Fairbanks, Alaska, to help study the ionosphere. Mixed in with their scientific data was unexpected “noise,” which turned out to be the debris flying in low Earth orbit. That’s when they realized there’s a lot of value in being able to detect and track that debris, because that area between 400 and 2,000 kilometers [250 and 1,250 miles] above the planet is critical for commercial services on Earth as well as for staging missions deeper into space.
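To get a rough sense of the regime DeClerck describes, circular-orbit speeds in that 400–2,000 km band follow directly from Newtonian gravity. A minimal sketch, using standard physical constants (these are textbook values, not figures from LeoLabs):

```python
import math

# Standard values, not from the article.
GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit(alt_km: float):
    """Speed (m/s) and period (minutes) of a circular orbit at a given altitude."""
    r = R_EARTH + alt_km * 1e3
    v = math.sqrt(GM_EARTH / r)            # vis-viva equation for a circular orbit
    period_min = 2 * math.pi * r / v / 60  # circumference over speed
    return v, period_min

for alt in (400, 2000):  # the LEO band DeClerck describes
    v, t = circular_orbit(alt)
    print(f"{alt:>4} km: {v * 2.23694:,.0f} mph, period {t:.0f} min")
```

At 400 km this works out to roughly 17,000 mph, consistent with the 17,500 mph figure quoted in the article for debris speeds.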
So they spotted a commercial opportunity?
Everybody wants this data. We’re seeing interest because there’s no one else doing what we’re doing, and the timing is great because everyone is going into LEO with new services. Commercial firms need debris mitigation plans for their satellites. Government space agencies want the data for situational awareness and regulatory reasons. After launch, space debris is the number one operational risk, and those operations are really important to the security of the country because so much of our security relies on satellites. There’s billions of investment dollars at stake, so the market for us is considerable.
How does LeoLabs’ mission differ from the NASA Orbital Debris Program Office at the Johnson Space Center?
LeoLabs is a source of data. NASA takes advantage of multiple sources to manage and secure U.S. spacecraft and satellites. They also use Air Force data from a public catalog of debris. The Air Force is helping the commercial industry because of a 2009 collision between a Russian satellite and a U.S.-built commercial satellite that created thousands and thousands of pieces of debris. That highlighted the need for better collision-avoidance data. While our mission is commercial, we believe our data and services will be of interest across the space industry, even the insurance industry.
The insurance industry?
Absolutely. What’s driving the situation today is operational survivability. The big commercial space companies today are putting up entire constellations of satellites, and risk management shapes the investment climate. SpaceX is putting up 4,400 satellites. OneWeb is putting up almost 1,000. And you’ve got companies like Blue Origin that want to do human space flight. Those big entities have public shareholders, and they have a fiduciary responsibility to make sure conditions are safe and the risks are understood. So we’re building the network that provides the ability to map space in LEO.
What unique startup challenges did LeoLabs face?
Usually, it’s the technical and execution risks that challenge startups. But we have deep experience on the team, and the problem we’re solving is clear. Even so, the challenge I noticed when I walked in the door was articulating the big vision: telling the market that we’re building foundational global mapping data for LEO.
Walk me through LeoLabs’ venture funding process.
Our first round of funding was $4 million, and for that we were able to build the Texas radar facility, and put it up in six months within that budget. That certainly turned heads in the industry. We’ll be going out for another round in the second half of this year. For the remaining stations, depending on where in the world we’ll put them, those will also be in the single-digit millions. This is a whole new set of economics for the space industry.
Why do you need more tracking stations?
The more stations, the more often you see things. If you only see an object once a week it’s hard to project an accurate orbit, and you need it to be accurate because it has to be actionable for operators. With our Alaska and Texas sites we already cover 95% of the debris orbits. We’ll get the additional 5% once we have the equatorial and polar stations up and running. Then we’ll be able to see a quarter-million small objects every two to three hours, predict their orbit within 50 meters, and give actionable data to the satellite operators.
Then we have the opportunity to provide a data platform based on our data stream. And people can innovate in ways we can’t even imagine. Remember when Google Earth first emerged? It was cool, but then Google presented it as a platform, and the next thing you know agriculture, traffic reports, all sorts of applications were built on top of that data feed. Similarly, we can empower universities, startups, and app builders with space data.
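The arithmetic behind DeClerck’s revisit-rate point is easy to sketch. Assuming, purely for illustration, a fixed along-track velocity uncertainty of 1 mm/s (a made-up figure, not a LeoLabs specification), the position error accumulated between radar passes grows linearly with the gap, and real errors grow faster still once atmospheric drag is included:

```python
# Illustrative only: how along-track position error grows between radar
# passes, given a fixed velocity uncertainty. The 1 mm/s figure below is
# an assumption for the sketch, not a LeoLabs number.
VEL_UNCERTAINTY = 1e-3  # m/s

def drift(hours: float) -> float:
    """Accumulated along-track error (m) after a gap between observations."""
    return VEL_UNCERTAINTY * hours * 3600

for gap in (3, 24, 168):  # next pass in 3 hours vs. a day vs. a week
    print(f"{gap:>4} h gap: ~{drift(gap):,.0f} m of drift")
```

Under this toy assumption, a weekly observation leaves hundreds of meters of uncertainty, while a pass every two to three hours keeps it within the tens of meters needed for the 50-meter orbit predictions mentioned above.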
What were you looking for when you signed on with LeoLabs?
I was looking for a brilliant, close-knit team pursuing a big idea. At LeoLabs, we have astrophysicists on the development team, and even an ex-astronaut, brilliant folks who know how to translate data into orbits and build cool stuff. So obviously I valued technical competence. We were also looking for people who bring the same spirit. Maybe that’s not the right word, but our founding team is smart, team-oriented, and humble. Everybody has passed that test.
What books influenced your decision to join LeoLabs at this stage of your career?
One is Mindset: The New Psychology of Success, by Stanford psychologist Carol S. Dweck. She’s had so much impact with her notion that life is not an either/or proposition. The other is Deep Work: Rules for Focused Success in a Distracted World, by Cal Newport. It makes the case that shallow skills — doing spreadsheets, using social media — don’t matter as much as developing the ability to learn something hard quickly, to go deep on something. With LeoLabs, I have the opportunity to go deep — on space data, on the physics of LEO, and on an emerging ecosystem.
There is nothing in principle that rules out the possibility of manufacturing a universe in an artificial Big Bang, filled with real matter and energy. Nor would doing so destroy the universe in which it was made: the new universe would create its own bubble of space-time, separate from that in which it was hatched. This bubble would quickly pinch off from the parent universe and lose contact with it.
Our Universe might have been born in some super-beings’ equivalent of a test tube, but it is just as physically real as if it had been born naturally. The more radical scenario is that we are entirely simulated beings: nothing more than strings of information manipulated in some gigantic computer, like the characters in a video game. Even our brains are simulated, responding to simulated sensory inputs.
We carry out computer simulations not just in games but in research. Scientists try to simulate aspects of the world at levels ranging from the subatomic to entire societies or galaxies, even whole universes. For example, computer simulations of animals may tell us how they develop complex behaviors like flocking and swarming. Other simulations help us understand how planets, stars and galaxies form.
We can also simulate human societies using rather simple agents that make choices according to certain rules. These give us insights into how cooperation appears, how cities evolve, how road traffic and economies function, and much else. These simulations are getting ever more complex as computer power expands. Already, some simulations of human behavior try to build in rough descriptions of cognition. Researchers envisage a time, not far away, when these agents’ decision-making will not come from simple if-then rules. Instead, researchers will give the agents simplified models of the brain and see how they respond.
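A minimal sketch of the kind of rule-based agent simulation described above, using the classic iterated prisoner's dilemma (an illustrative choice, not a reference to any specific study): each agent is just a rule mapping the opponent's past moves to "cooperate" or "defect", yet cooperation emerges from repeated play.

```python
# Payoffs for one round of the prisoner's dilemma (row player's score):
# both cooperate -> 3, both defect -> 1, lone defector -> 5, sucker -> 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Score two rule-based agents against each other over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # exploitation limited to one round
```

Swapping in richer decision rules, or the simplified brain models the paragraph anticipates, changes only the strategy functions; the simulation loop stays the same.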
Who is to say that before long we will not be able to create computational agents – virtual beings – that show signs of consciousness? Advances in understanding and mapping the brain, as well as the vast computational resources promised by quantum computing, make this more likely by the day. And if such simulations can be run at all, they can presumably be run in vast numbers, so simulated minds would hugely outnumber minds in the one real world from which the virtual realities are run. It then makes sense for any conscious beings like ourselves to assume that we are actually in such a simulation, and not in that one real world. The probability is just so much greater.
There are already good reasons to think we are inside a simulation. One is the fact that our Universe looks designed. The constants of nature, such as the strengths of the fundamental forces, have values that look fine-tuned to make life possible. Even small alterations would mean that atoms were no longer stable, or that stars could not form.
One possible answer invokes the multiverse. Maybe there is a plethora of universes, all created in Big Bang-type events and all with different laws of physics. By chance, some of them would be fine-tuned for life – and if we were not in such a hospitable universe, we would not ask the fine-tuning question because we would not exist.
However, parallel universes are a pretty speculative idea. So it is at least conceivable that our Universe is instead a simulation whose parameters have been fine-tuned to give interesting results, like stars, galaxies and people.
Quantum mechanics, the theory of the very small, has thrown up all sorts of odd things. For instance, both matter and energy seem to be granular. What’s more, there are limits to the resolution with which we can observe the Universe, and if we try to study anything smaller, things just look fuzzy. These perplexing features of quantum physics are just what we would expect in a simulation. They are like the pixellation of a screen when you look too closely.
Reality might be nothing but mathematics, which is just what we would expect if the laws of physics were based on a computational algorithm. Even so, it is likely to be profoundly difficult, if not impossible, to find strong evidence that we are in a simulation. Unless the simulation was really rather error-strewn, it would be hard to design a test whose results could not be explained in some other way.
We might never know, simply because our minds would not be up to the task. After all, you design your agents in a simulation to function within the rules of the game, not to subvert them. This might be a box we cannot think outside of. On this view, the Universe can be regarded as a giant quantum computer: if one looks at the guts of the Universe – the structure of matter at its smallest scale – then those guts consist of nothing more than bits undergoing local, digital operations.
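The phrase "bits undergoing local, digital operations" can be made concrete with an elementary cellular automaton, a toy example (not a model of real physics) in which each bit's next state depends only on its immediate neighborhood:

```python
def step(bits, rule=110):
    """One update of an elementary cellular automaton: each bit's next
    value depends only on itself and its two neighbors (periodic edges)."""
    n = len(bits)
    out = []
    for i in range(n):
        # Read the 3-bit neighborhood as an index into the 8-entry rule table.
        idx = (bits[(i - 1) % n] << 2) | (bits[i] << 1) | bits[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# A single live bit, evolved for a few generations.
row = [0] * 15
row[7] = 1
for _ in range(6):
    print("".join("#" if b else "." for b in row))
    row = step(row)
```

Despite the purely local, digital update rule, rich structure unfolds over time, which is the intuition behind the universe-as-computer picture.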
We had all better go out and do interesting things with our lives, just in case our simulators get bored! Yet nobody goes around telling themselves that the people they see around them, their friends and family, are just computer constructs created by streams of data entering the computational nodes that encode their own consciousness.
Plato wondered if what we perceive as reality is like the shadows projected onto the walls of a cave. Immanuel Kant asserted that, while there might be some “thing in itself” that underlies the appearances we perceive, we can never know it. René Descartes accepted, in his famous one-liner “I think, therefore I am”, that the capacity to think is the only meaningful criterion of existence to which we can attest.