QUANTUM CIRCULATORS


The superconducting qubit is a leading candidate for building a quantum computer. So far, however, quantum circuits with only a small number of such qubits have been demonstrated. As researchers scale up the qubit number, they need devices that can route the microwave signals with which these qubits communicate. Benjamin J. Chapman at JILA, the University of Colorado, and the National Institute of Standards and Technology, all in Boulder, Colorado, and co-workers have designed, built, and tested a compact on-chip microwave circulator that could be integrated into large qubit architectures.

Circulators are multiple-port devices that transmit signals directionally—a signal entering from port i will exit from port i+1. This property can be used to shield qubits from stray microwave fields, which could degrade the qubits’ coherence. The device’s directional, or nonreciprocal, behavior requires a symmetry-breaking mechanism. Commercial circulators exploit the nonreciprocal polarization rotation of microwave signals in a permanent magnet’s field, but they are too bulky for large-scale quantum computing applications. Newly demonstrated circulators, based on the nonreciprocity of the quantum Hall effect, can be integrated on chips but require tesla-scale magnetic fields to operate or initialize them.

The team’s chip-based scheme can instead be operated with very small magnetic fields (10–100 μT). Inside the device, simple circuits shift the signals in frequency and time, in a sequence that is different for each input port. These noncommutative temporal and spectral shifts provide the symmetry-breaking mechanism that gives the device its directionality. Experimental tests show that the circulator works at high speed and with minimal losses, while an analysis of the device’s noise performance indicates that up to 1000 of these circulators could in principle be integrated into a single superconducting-qubit setup.
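
To see how noncommuting shifts break reciprocity, consider a minimal sketch of the general mechanism (our illustration, not necessarily the team’s exact circuit). Acting on a signal s(t), let a frequency shift multiply by a phase ramp, (F_Δ s)(t) = e^{iΔt} s(t), and let a time shift delay the signal, (T_τ s)(t) = s(t − τ). The two operations then obey

\hat{F}_{\Delta}\,\hat{T}_{\tau} = e^{i\Delta\tau}\,\hat{T}_{\tau}\,\hat{F}_{\Delta},

so signals that traverse the shifts in opposite orders pick up a relative phase Δτ. Routing the two orderings to different ports and letting the paths interfere makes transmission constructive in one direction and destructive in the other.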


A THANKSGIVING TOAST TO THE OLD BREED

By Victor Davis Hanson

The late World War II combat veteran and memoirist E. B. Sledge enshrined his generation of fellow Marines as “The Old Breed” in his gripping account of the hellish battle of Okinawa. Now, most of those who fought in World War II are either dead or in their nineties.

Much has been written about the disappearance of these members of the Greatest Generation—more than 1,000 of them now pass away each day. Of the 16 million who served in the American military during World War II, only about a half-million are still alive.

Military historians, of course, lament the loss of their first-hand recollections of battle. The collective memories of these veterans were never systematically recorded and catalogued. Yet even in haphazard fashion, their stories of dropping into Sainte-Mère-Église or surviving a sinking Liberty ship in the frigid North Atlantic have offered correctives about the war otherwise impossible to attain from the data of national archives.

More worrisome, however, is that the collective ethos of the World War II generation is fading. It may not have been fully absorbed by the Baby Boomer generation and has not been fully passed on to today’s young adults, the so-called Millennials. While U.S. soldiers proved heroic and lethal in Afghanistan and Iraq, their sacrifices were never commensurately appreciated by the larger culture.

The generation that came of age in the 1940s had survived the poverty of the Great Depression to win a global war that cost 60 million lives, while participating in the most profound economic and technological transformation in human history as a once rural America metamorphosed into a largely urban and suburban culture of vast wealth and leisure.

Their achievement from 1941 to 1945 remains unprecedented. The United States on the eve of World War II had an army smaller than Portugal’s. It finished the conflict with a global navy larger than all of the fleets of the world put together. By 1945, America had a GDP equal to those of Germany, Japan, the Soviet Union, and the British Empire combined. With a population 50 million people smaller than that of the USSR, the United States fielded a military of roughly the same size.

America almost uniquely fought at once in the Pacific, Asia, the Mediterranean, and Europe, on and beneath the seas, in the skies, and on land. On the eve of the war, America’s military and political leaders, still traumatized by the Great Depression, fought bitterly over modest military appropriations, unsure of whether the country could afford even a single additional aircraft carrier or another small squadron of B-17s. Yet four years later, civilians had built 120 carriers of various types and were producing a B-24 bomber at the rate of one an hour at the Willow Run factory in Michigan. Such vast changes are still difficult to appreciate.

Certainly, what was learned through poverty and mayhem by those Americans born in the 1920s became invaluable in the decades following the war. The World War II cohort was a can-do generation who believed that they did not need to be perfect to be good enough. The strategic and operational disasters of World War II—the calamitous daylight bombing campaign over Europe in 1942–43, the quagmire of the Hürtgen Forest, or being surprised at the Battle of the Bulge—hardly demoralized these men and women.

Miscalculations and follies were not blame-gamed or endlessly litigated, but were instead seen as tragic setbacks on the otherwise inevitable trajectory to victory. When we review their postwar technological achievements—from the interstate highway system and California Water Project to the Apollo missions and the Lockheed SR-71 flights—it is difficult to detect comparable confidence and audacity in subsequent generations. To paraphrase Nietzsche, anything that did not kill those of the Old Breed generation made them stronger and more assured.

As an ignorant teenager, I once asked my father whether the war had been worth it. After all, I smugly pointed out, the “victory” had ensured the postwar empowerment and global ascendance of the Soviet Union. My father had been a combat veteran during the war, flying nearly 40 missions over Japan as the central fire control gunner in a B-29. He replied in an instant, “You win the battle in front of you and then just go on to the next.”

I wondered where his assurance came from. Fourteen of the 16 planes—each holding eleven crewmen—in his initial squadron of bombers were lost to enemy action or mechanical problems. The planes were gargantuan, problem-plagued, and still experimental—and some of them also simply vanished on the 3,000-mile nocturnal flight over the empty Pacific from Tinian to Tokyo and back.

As a college student, I once pressed him about my cousin and his closest male relative, Victor Hanson, a corporal in the Sixth Marine Division who was killed on the last day of the assault on Sugar Loaf Hill on Okinawa. Wasn’t the unimaginative Marine tactic of plowing straight ahead through entrenched and fortified Japanese positions insane? He answered dryly, “Maybe, maybe not. But the enemy was in the way, the Marines took them out, and they were no longer in the way.”

My father, William F. Hanson, died when I was 45, and I still recall his advice whenever I am at an impasse, personally or professionally: “Just barrel ahead onto the next mission.” Such a spirit, which defined his generation, is the antithesis of the therapeutic culture that is the legacy of my generation of Baby Boomers—and I believe that spirit explains everything from the spectacular economic growth of the 1960s to the audacity of landing a man on the moon.

On rare occasions over the last thirty years, I’ve run into hard-left professors who had been combat pilots over Germany or fought the Germans in Italy. I never could quite muster the energy to oppose them; they seemed too earnest and too genuine in what I thought were their mistaken views. I mostly kept quiet, recalling Pericles’s controversial advice that a man’s combat service and sacrifice for his country should wash away his perceived blemishes. Perhaps it’s an amoral and illogical admonition, but it has nonetheless stayed with me throughout the years. It perhaps explains why I look at John F. Kennedy’s personal foibles in a different light from those similar excesses of Bill Clinton. A man, I tend to think, should be judged by his best moments rather than his worst ones.

Growing up with a father, uncles, and cousins who struggled to maintain our California farm during the Depression and then fought in an existential war was a constant immersion in their predominantly tragic view of life. Most were chain smokers, ate and drank too much, drove too fast, avoided doctors, and were often impulsive—as if in their fifties and sixties, they were still prepping for another amphibious assault or daytime run over the Third Reich. Though they viewed human nature with suspicion, they were nonetheless upbeat—their Homeric optimism empowered by an acceptance of a man’s limitations during his brief and often tragic life. Time was short, but heroism was eternal. “Of course you can” was their stock reply to any hint of uncertainty about a decision. The World War II generation had little patience with subtlety, or even the suggestion of indecision—how could they, when such things would have gotten them killed at Monte Cassino or stalking a Japanese convoy under the Pacific in a submarine?

After the stubborn poverty and stasis of the Great Depression, the Old Breed saw the challenge of World War II as redemptive—a pragmatic extension of President Franklin Roosevelt’s news-conference confession that the “Old Dr. New Deal” had been supplanted by the new “Dr. Win-the-War” in restoring prosperity.

One lesson of the war for my father’s generation was that dramatic action was always preferable to incrementalism, even if that meant that the postwar “best and brightest” would sometimes plunge into unwise policies at home or misadventures abroad. Another lesson the World War II generation learned—a lesson now almost forgotten—was that perseverance and its twin, courage, were the most important of all collective virtues. What was worse than a bad war was losing it. And given their sometimes tragic view of human nature, the Old Breed believed that winning changed a lot of minds, as if the policy itself was not as important as the appreciation that it was working.

In reaction to the stubborn certainty of our fathers, we of the Baby Boomer generation prided ourselves on introspection, questioning authority, and nuance. We certainly saw doubt and uncertainty as virtues rather than vices—but not necessarily because we saw these traits as correctives to the excesses of the GIs. Rather, as one follows the trajectory of my generation, whose members are now in their sixties and seventies, it is difficult not to conclude that we were contemplative and critical mostly because we could be—our mindset being the product of a far safer, more prosperous, and leisured society that did not face the existential challenges of those who bequeathed such bounty to us. Had the veterans of Henry Kaiser’s shipyards been in charge of California’s high-speed rail project, they would have built it on time and on budget, rather than endlessly litigating various issues as costs soared in pursuit of a mythical perfection.

The logical conclusion of our cohort’s emphasis on “finding oneself” and discovering an “inner self” is the now iconic ad of a young man in pajamas sipping hot chocolate while contemplating signing up for government health insurance. Such, it seems, is the arrested millennial mindset. The man-child ad is just 70 years removed from the eighteen-year-olds who fought and died on Guadalcanal and above Schweinfurt, but that disconnect now seems like an abyss spanning centuries. One cannot loiter one’s mornings away when there is a plane to fly or a tank to build. I am not sure that presidents Franklin Roosevelt, Harry Truman, and Dwight Eisenhower were always better men than were presidents Bill Clinton, Barack Obama, and Donald Trump, but they were certainly bigger in the challenges they faced and the spirit in which they met them.

This Thanksgiving, let us give a toast to the millions who are no longer with us and the thousands who will soon depart this earth. They gave us a world far better than they inherited.

TAKING NEW SPACE TELESCOPE FOR A SPIN

Two UC Berkeley astronomers are eagerly awaiting the spring 2019 launch of the James Webb Space Telescope, having been chosen to lead two of the first 13 groups that will test the capabilities of NASA’s snazzy new successor to the Hubble Space Telescope.

After launch in 2019, the James Webb Space Telescope will blossom into the most powerful space-based telescope ever built. 

The 13 teams, which were announced last week, won’t be able to get their hands on any data for another two years, following a six-month commissioning period after launch. But from November 2019 until April 2020, these teams will scan objects near and far, ranging from planets in our solar system to planets around nearby stars, and from star systems in the Milky Way galaxy to galaxies at the edge of the universe.

“The diversity of science represented by these 13 teams is amazing,” said Daniel Weisz, an assistant professor of astronomy and leader of one of the teams. “We are definitely excited about this opportunity.”

The teams are hoping for new discoveries, but they’ve also been selected because they promised to provide baseline information for future observers, along with the computer software tools those astronomers will need to make sense of their observations with the telescope.

“With the telescope’s five-year lifetime, we need to use it very efficiently to maximize the return,” Weisz said. “The early release science program is supposed to produce science-enabling results within five months of the observations, which in the astronomy world is basically yesterday.”

Letting astronomers rather than staff take the telescope for a test drive is a new concept for NASA, said Imke de Pater, a UC Berkeley professor of astronomy who will lead a team using the telescope for up-close observations of the solar system. She and her team will focus on Jupiter, its moons Io and Ganymede, and its faint rings, to see if they can capture fine detail against the bright background of Jupiter, which is actually too bright for the telescope to look at without filters.

“We will see if we can image the rings and get rid of the scattered light from Jupiter, which pushes the telescope’s limits and really tests the capabilities of JWST,” she said.

Weisz, who studies star systems, from globular clusters with millions of stars to galaxies in the local universe, will take the long view. He is particularly interested in systems near enough that individual stars can be picked out and counted, which can tell astronomers about the history of the galaxy and ultimately the history of the universe.

 

The James Webb telescope will be ideal for this, because its mirror will be two and a half times the diameter of the mirror in the Hubble Space Telescope, effectively cutting the time it takes to collect data on a cluster or galaxy by a factor of 10. This allows detailed studies of the very faintest stars, some of which first started to glow when the universe was a baby more than 10 billion years ago.
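
As a rough consistency check on that speed-up (a back-of-envelope scaling of ours, not a calculation from the article): the exposure time needed to reach a given depth scales inversely with collecting area, and hence with the square of the mirror diameter,

t \propto \frac{1}{D^{2}}, \qquad \frac{t_{\mathrm{JWST}}}{t_{\mathrm{Hubble}}} \approx \left(\frac{1}{2.5}\right)^{2} \approx \frac{1}{6},

so the larger mirror alone buys roughly a factor of 6; the quoted factor of 10 presumably also reflects the observatory’s more sensitive infrared detectors (our assumption).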

“For studies of very faint stars in the Milky Way – our own galaxy – the JWST is going to be phenomenal,” he said. “The telescope will do roughly in its five- to 10-year mission what Hubble has done in its 25-year mission for local galaxies.”

During the 20 hours of telescope time allocated to his team, they will take images in both optical and infrared of three targets: a globular cluster in the Milky Way; a very faint, dark-matter-dominated dwarf galaxy that orbits the Milky Way; and a close neighbor and traveling companion of the Milky Way, a galaxy at a distance of about 3 million light-years.

By counting and determining the age of each star within these galaxies, for example, he hopes to shed light on what happened early in the universe when stars first began to shine across the cosmos, the so-called epoch of reionization.

“We are adjusting our academic schedules so that we will be ready to hit the ground as soon as the data gets downloaded; we will be off to the races,” Weisz said.

De Pater admits that two years is a long time to wait, but she, her co-leader Thierry Fouchet of the Observatory of Paris, and their team hope to use their 28.9 hours of observing time to measure the wind speeds in Jupiter’s Great Red Spot, observe gases in the atmospheres of Io and Ganymede, and see ripples left by comets in the rings around the planet.

“The idea is that for any solar system object, you have to assemble a mosaic of the planet or moon from multiple observations when everything is moving and rotating and changing. How do you do that?” de Pater said. “We have to develop the software so that astronomers can put their little postage stamps together into a map.”

 

PROBLEMS IN TRAVEL MANAGEMENT

With issues such as geopolitical crises, ever-changing travel policies, and more frequent terrorist attacks, suffice it to say that travel management companies (TMCs) and travel managers are facing a challenging time. While experts predict modest growth for the corporate travel industry in 2017, the impact of recent events will only place more emphasis on traveler safety and communication.

The travel industry’s rapid evolution continues unabated. To profit amid this turbulence, leaders must focus on what really matters—the customer. The landscape of travel distribution has been shifting. Until the mid-1990s, distribution was a straightforward mix of brand call centers, travel agencies, and in-person bookings at the hotel front desk, airline ticket counter, or car-rental outpost. The launch of online booking gave companies a new way to engage with customers and also opened the door to new business models such as online travel agencies (OTAs). However, in the 2000s, most travel suppliers, aggregators, and service providers focused on managing transaction costs rather than improving the customer experience, with serious implications. The game is now about delivering a superior customer experience.

The pace of change has accelerated due to three factors. First, the competitive bar continues to rise. Among OTAs, three leading players—Ctrip, Expedia, and Priceline—have achieved global scale and relevance. And suppliers are seeking a competitive edge through several levers, including loyalty partnerships; building on the foundation set by Delta and Starwood, today’s major travel partnerships include those between Starwood and Uber, United and Marriott, and Qantas and Airbnb, among others. Second, travel technologies—especially mobile platforms—have continued to evolve as customers alter how they browse for and purchase travel. Expedia reports that 40 to 60 percent of its leisure-travel-brand traffic is through mobile devices, and about half of bookings on some brands come from mobile. Third, potential sources of disruption are on the horizon. For example, business travel currently accounts for 10 percent of Airbnb bookings—but that number is growing, thanks to the company’s integration into the platforms of several leading travel-management companies.

The shifting conditions make it more important than ever to put the customer at the center. But despite some examples of progress, we continue to see companies solve for business requirements over customer needs. Many suppliers are falling short of their potential because they focus on transaction costs instead of lifetime value. And for many intermediaries, earnings expectations, contentious supplier relationships, and an onslaught of digital newcomers have eroded the ability to test, learn, and “fail fast” that helped them identify and solve unmet needs in the first place.

The good news? The path to success is likely more straightforward than it seems. In fact, we see examples of companies implementing some or all of the strategies we will discuss:

  1. Harness advanced analytics to understand the customer better.
  2. Adjust mobile offerings to capture, secure, and serve the customer.
  3. Safeguard against future disruption.

What’s the catch? Unless executives focus relentlessly on solving for customer needs, we will continue to see ineffective promotions, lackluster apps, and uninspired pivots and product launches. Companies that can solve real customer needs, including through partnerships with other members of the travel ecosystem, will position themselves at the vanguard of travel distribution.

From travelers to technology, several significant trends are influencing how companies engage with customers.

Suppliers are emboldened to innovate product development, marketing, and distribution

Intermediaries connect suppliers with consumers, but such bookings come at a cost. Thus, suppliers are pursuing strategies to drive more direct bookings:

  • Launching large-scale marketing campaigns. For example, Hilton’s “Stop Clicking Around” campaign, along with other initiatives, contributed to a 60 percent increase in HHonors enrollments and a shift toward direct channels in the third quarter of 2016.
  • Enhancing direct-channel offerings. AccorHotels has begun distributing independent hotels through AccorHotels.com at up to 14 percent commission and plans to reach more than 8,000 properties over time.
  • Providing disincentives for indirect bookings. In September 2015, Lufthansa implemented a €16 fee for bookings made through global distribution systems (GDSs). While Lufthansa and third parties differ in their assessment of the fee’s impact, the move helps enable a longer-term shift to more dynamic pricing.
  • Consolidating to achieve scale. Among US airlines, mergers over the past ten years have led to four clear leaders with a combined 80 percent domestic market share (and Alaska Air recently became the fifth-leading carrier, with 6 percent share, after the acquisition of Virgin America).

While the specific strategy varies by supplier, the trend is clear: suppliers throughout the travel industry are willing to go further than ever to convince customers to book directly.

Three leading OTAs—Expedia, Priceline, and Ctrip—also own (at least for now) the world’s leading metasearch players. Each continues to assume, in its own way, an expanded role in the customer journey:

  • Expedia (which recorded $61 billion in 2015 gross bookings) acquired Orbitz and Travelocity in 2015. It is the clear leader in the United States, with approximately two-thirds OTA market share. It is present in many travel verticals, including metasearch (Trivago), vacation rentals (HomeAway), and corporate travel (Egencia).
  • Priceline ($56 billion in 2015 gross bookings) is the largest player by revenue and market value, given strength in hotels through Booking.com. The clear leader in Europe, it is also present in many travel verticals, including metasearch (Kayak.com), vacation rentals (Booking.com), and restaurant bookings (OpenTable). It owns a 9 percent stake in Ctrip.
  • Ctrip ($27 billion in 2015 gross bookings) is the fastest-growing global OTA, with a growth rate of 58 percent in 2015 alone. It is the clear leader in Chinese domestic and outbound travel and is a mobile leader, with an estimated 70 percent of bookings via app. It is consolidating its position in Asia through stakes in Qunar, eLong, and MakeMyTrip. It also has an expanding presence in many travel verticals, including metasearch (Skyscanner) and tour operators.

These giants, along with Google and TripAdvisor, will continue to play a major role in shaping the future of distribution. In many cases, the relationship between these intermediaries, and between intermediaries and suppliers, is multifaceted. As recently as 2014, analysts estimated that Expedia and Priceline accounted for up to 5 percent of Google’s total ad revenue. Marriott partners with Expedia to sell dynamic travel packages through Vacations by Marriott. And Southwest Airlines’ hotel offerings are provided by Priceline’s Booking.com. The continued evolution of intermediary business models will be critical to watch.

Mobile is increasingly the customer’s channel of choice: by 2019, nearly 80 percent of US travelers who book online will do so via mobile, up from 36 percent in 2014. “Mobile first” has become a favorite catchphrase, but given the massive shift in behavior, travel companies must develop a deep understanding of why and how customers are using mobile. Consider, for example, the following trends:

  • Mobile search is on the rise. In 2015, mobile flight and hotel searches on Google increased 33 and 49 percent year over year, respectively.
  • Mobile plays a critical role after arrival. For instance, 85 percent of leisure travelers choose activities after their arrival, and half of international travelers use their mobile devices to search for such activities after their arrival.
  • Not all mobile experiences are created equal. The average rating of OTA/metasearch apps is 19 percent higher than that of hotel-brand apps.

The industry has seen some recent mobile-focused innovations. These include day-of-travel features such as keyless or app-enabled hotel room entry, in-app passport scanning at check-in, and real-time luggage tracking. Other developments involve context-based design; for example, 24 hours before a scheduled flight, Virgin America’s app shifts from emphasizing booking to focusing on check-in. Finally, a growing list of brands across the value chain—including Booking.com, Cheapflights, Expedia, Hyatt, Kayak, KLM, Lola, and Skyscanner—have introduced messaging platforms and bots.

Looking forward, booking may be the function ripest for innovation, as relatively few players have created a compelling, mobile-friendly booking experience.

The data analytics revolution now under way has the potential to transform how companies organize, operate, manage talent, and create value. That’s starting to happen in a few companies—typically ones that are reaping major rewards from their data—but it’s far from the norm. This truth certainly holds in travel, where a wide range of use cases is emerging:

  • Enhancing commercial effectiveness. Red Roof Inn used analytics on publicly available weather and flight data to predict and target customers facing flight cancellations.
  • Refining customer experience. British Airways’ “Know Me” program, driven partially by Opera Solutions, mines customer data including loyalty information and buying habits to generate targeted offers and experiences.
  • Optimizing network and portfolio. Aloft, for example, is piloting a digital hotel experience, with rooms equipped with Internet of Things–enabled technologies such as intelligent climate control, and using the data generated to support product innovation.
  • Driving operational and administrative efficiency. Hotels are using platforms such as ALICE (which counts Expedia as an investor) to consolidate, track, and analyze guest requests and interactions to improve the quality and speed of operations.

Before embarking on new big data or analytics ventures, it is critical to ensure adequate data security—particularly given the numerous high-profile, highly damaging data breaches in other industries.

The sharing or on-demand economy, exemplified by companies such as Airbnb and Uber, is the most significant business model to emerge and scale over the past five years. The numbers themselves are notable—including rapid growth (Piper Jaffray estimates a 10 percent compound annual growth rate for short-term and peer-to-peer accommodations from 2014 to 2025, versus 3 percent for traditional accommodations) and valuations (including Uber at $68 billion, Didi Chuxing at $33 billion, and Airbnb at $30 billion). But more interesting is the degree to which these players continue to innovate:

  • Uber’s expansion continues not only into new cities but also into new modes of transportation. In a recently published white paper, the company laid out the potential for vertical take-off and landing (VTOL) aircraft for on-demand transportation, particularly in dense urban networks.
  • Airbnb’s recently announced expansion into Trips (peer-to-peer tours and activities ranging from a few hours to a few days) and Places (including meet-ups with other users, destination guides, local recommendations, audio guides, and eventually, restaurant reservations) represents the latest step in a journey from air mattresses in spare rooms to a “super brand of travel.”

While hospitality and ground transportation have been most heavily disrupted, other modes such as air travel also continue to see innovation from models such as Rise, Surf Air, OneGo, and Set Jet. Beyond the sharing economy, specialized models continue to emerge with a focus on capturing an outsize share of specific segments. For example, Priceline founder Jay Walker’s Upside rewards “free agent” business travelers for flexibility in flight and hotel bookings.

Executives should resist the temptation to be pulled in directions that distract from their top priorities: engaging with customers more effectively, enhancing customer attraction and retention, and capturing more value throughout the customer life cycle. To begin the process of thinking in terms of customer lifetime value instead of cost per booking, they must answer some fundamental questions:

  • How much is a customer worth? In banking and many other industries, analyzing and predicting a customer’s lifetime value at acquisition is standard operating procedure. In travel, we’ve encountered very few companies that conduct this analysis consistently or at scale, despite the available data on marketing, transactions, loyalty, satisfaction, and referrals (a minimal calculation is sketched after this list).
  • What can every employee do to secure the next purchase? Customer acquisition and retention can no longer be the purview of marketing and sales groups alone. Everything from back-office accounting to human resources to maintenance must be oriented toward enhancing the lifetime value of each customer.
  • How do customer needs and booking scenarios influence the choice of booking channel? Companies must identify and exploit the many “micro moments” in the customer journey. This journey should be mapped starting with the inspiration for the trip and combined with factors such as the customer’s location at point of purchase, the device used, historical preferences, and past experiences to deliver the right offer through the right channel at the right time.
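
As a rough sketch of the lifetime-value arithmetic raised in the first question above (a minimal Python illustration; every figure is an invented assumption, not industry data):

# Illustrative customer-lifetime-value (CLV) calculation.
# CLV = expected annual margin, weighted by retention and discounted to today.

def customer_lifetime_value(annual_margin, retention_rate, discount_rate, years):
    """Discounted sum of expected annual margins from one acquired customer."""
    return sum(
        annual_margin * retention_rate**t / (1 + discount_rate)**t
        for t in range(years)
    )

# Hypothetical traveler: $120 margin per year, 70% retention, 8% discount rate.
print(round(customer_lifetime_value(120, 0.70, 0.08, years=10), 2))

Comparing that figure with the cost of acquiring the customer, rather than with the margin on a single booking, is the shift in mindset the question is asking for.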

HOMES OF THE FUTURE

Thanks to advances in smart home technology, we are all living in the home of the future today. Sure, it may not be exactly what was envisioned decades ago, but it’s still pretty cool.

Passive houses are energy efficient buildings that provide year-round comfort without the constant need for heating and cooling systems. These homes save you money and they’re great for the environment, so it can only be a good thing if they become more popular.

When the housing bubble burst years ago, many homeowners found themselves underwater in their mortgages. This was certainly bad news for those homeowners, but what no one realized at the time was that there would be repercussions for years to come.

Today those homeowners are just starting to dig their way out of mortgages that kept them in homes they had long outgrown. Equity has been slow to build. As a result, people are staying in their homes longer out of fear or lack of capital. This means fewer starter homes on the market for first-time home buyers.

Millennials are struggling to buy homes even though, perhaps surprisingly, they want to do so. Their savings are practically if not literally nonexistent, and they are often saddled with student loan debt. Millennials associate home ownership with the American Dream more than previous generations did, so lack of savings and escalating credit standards are grinding the housing market to a halt.

Modern comfort comes at a price, and keeping all those air conditioners, refrigerators, chargers, and water heaters going makes household energy very expensive. Here’s what uses the most energy in your home:

  • Cooling and heating: 47% of energy use
  • Water heater: 14% of energy use
  • Washer and dryer: 13% of energy use
  • Lighting: 12% of energy use
  • Refrigerator: 4% of energy use
  • Electric oven: 3-4% of energy use
  • TV, DVD, cable box: 3% of energy use
  • Dishwasher: 2% of energy use
  • Computer: 1% of energy use

One of the easiest ways to reduce wasted energy and money? Shut off vampire electronics, devices that suck power even when they are turned off. A penny saved is a penny earned, and being more efficient with your energy use is good for your pocketbook and the environment.
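
To put a rough number on that drain (a back-of-envelope sketch; the standby wattages and electricity price are our assumptions, not figures from this article):

# Annual cost of standby ("vampire") power. All inputs are illustrative.
standby_watts = {"cable box": 15, "game console": 10, "TV": 5, "chargers": 4}
price_per_kwh = 0.13  # dollars per kilowatt-hour, assumed

total_kwh = sum(standby_watts.values()) * 24 * 365 / 1000  # watts -> kWh/year
print(f"{total_kwh:.0f} kWh/year, roughly ${total_kwh * price_per_kwh:.0f}/year wasted")

For these assumed devices, that works out to about 300 kWh and $40 a year spent powering equipment nobody is using.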

Architecture should be more than an intersection of art and commerce. Architecture is almost a political work, and critics condemn the discipline’s passivity—its inability and unwillingness to confront or resolve the socio-political complexities of urban life.

While there are many talents working in the field today, there are very few people anywhere in the world who manage to combine extraordinary inventiveness and innovation in their design work while also standing at the very center of contemporary thought, engaged with contemporary issues.

We are in a radically divided world in which architecture is not dealing with those political issues in a really sophisticated way. Both the art world and the architecture world are clearly dedicated to political correctness and therefore are pretty intolerant in terms of engaging with political worlds beyond Western democracies.

One of architecture’s rarely questioned self-mythologizing pretensions has been that architectural work is less ego-driven and less author-driven—that architects can convince themselves all their buildings are completely different and don’t look like each other, something reminiscent of collectivity. Yet a generic architectural style does not offer an escape from the extravagance of individual architecture.

Within a decade, our living spaces will be enhanced by a host of new devices and technologies, performing a range of household functions and redefining what it means to feel at home.

The promise of devices that not only meet our household needs but anticipate them as well has been around for decades. To date, that promise remains largely unfulfilled. Advances such as the Nest thermostat by Alphabet (parent company to Google) and Amazon’s Alexa personal assistant are notable, but the home-technology market as a whole remains fragmented, and the potential for a truly smart home is still unrealized.

A tipping point may be at hand. Increased computing power, advanced big data analytics, and the emergence of artificial intelligence (AI) are starting to change the way we go about our busy lives. The vision we present may seem out there, but it simply represents the confluence of those technological developments and the realization of existing trends. Those trends, along with what’s just on the horizon, according to our research, suggest to us that within a decade, many of us will live in smart homes that will feature an intelligent and coordinated ecosystem of software and devices, or homebots, which will manage and perform household tasks and even establish emotional connections with us.

A smart home will be akin to a human central nervous system. A central platform, or brain, will be at the core. Individual homebots of different computing power will radiate out from this platform and perform a wide variety of tasks, including supervising other bots. Homebots can be as diverse as their roles: big, small, invisible (such as the software that runs systems or products), shared, and personal. Some homebots will be companions or assistants, others wealth planners and accountants. We will have homebots as coaches, window washers, and household managers throughout our home.

We are already entering this new era. In two years, we expect to see more items in our living space become interconnected—the formative first stage of a new home ecosystem. In five years, numerous tools and devices in the home will be affected. And in ten years, smart homes will become commonplace and will regularly feature devices and systems with independent intelligence and apparent emotion.

That level of home improvement presents significant opportunities, threats, and changes for appliances and devices that have been part of our home life for generations. The new home will be built on a foundation of platforms and ecosystems, whose producers will need to establish new levels of trust with their customers. Competition will take place not just for the consumers who inhabit the smart home, but for the interactions between consumers and homebots that increasingly will shape buying behavior. It’s not too early for a wide range of players to start laying the groundwork for success in the home of the future.

Platforms will provide the foundation to integrate different devices while providing a consistent interface for the consumer. Frontrunners include Amazon, Apple, Google, and Samsung; start-ups at various points in the development cycle will be part of the mix, as well. The winners will deliver omnipresence through ubiquitous connectivity and go-anywhere hardware, as well as integration, with bots collaborating among each other and linking to third parties’ products and services. If the recent past is any indication, it’s likely that multiple platform standards will evolve. That will present complexities both for consumers and businesses but will foster new, niche opportunities, as well.

Developers will create bots that plug into the new and various platforms. In short order, this combination of platforms and bots will mature into an ecosystem of products and services. Platform companies are likely to develop their own AI-driven bots (the descendants of Amazon’s Alexa and Apple’s Siri, for example). Many other creators will develop unique homebots that integrate into different platforms, much as the apps of today have been developed for Android and iOS, which support the impressive mobile-device ecosystems we see now.

Likely, too, a hierarchy will emerge: we can expect a master bot that acts as general manager, juggling many services; service bots that handle a set of functions related to a more complex task such as managing media; and niche bots that perform single tasks, such as window cleaning. For now, put aside grand visions of a single, Jetsons-style Rosie the Robot replacing a human maid in toto; think instead of multiple bots performing separable, specific tasks. Well-defined scope presents much less risk of error. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once.
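
To make the hierarchy concrete, here is an illustrative Python sketch (the class and task names are invented for this article; no real product exposes this structure):

# Master bot -> service bots -> niche bots, as sketched above.

class NicheBot:
    """Performs one narrowly scoped task, e.g., window cleaning."""
    def __init__(self, task):
        self.task = task

    def run(self):
        return f"done: {self.task}"

class ServiceBot:
    """Handles a set of related functions by supervising niche bots."""
    def __init__(self, domain, bots):
        self.domain = domain
        self.bots = bots

    def run_all(self):
        return [bot.run() for bot in self.bots]

class MasterBot:
    """General manager: juggles many services on the household's behalf."""
    def __init__(self, services):
        self.services = services

    def morning_routine(self):
        return {s.domain: s.run_all() for s in self.services}

home = MasterBot([
    ServiceBot("media", [NicheBot("queue the news podcast")]),
    ServiceBot("chores", [NicheBot("clean the windows"), NicheBot("run the vacuum")]),
])
print(home.morning_routine())

The design point is the well-defined scope: each niche bot does one thing, so an error stays contained instead of cascading through the household.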

To better understand the homebot opportunity and potential obstacles to its realization, we conducted in-home and mobile diary studies in Japan and the United States with dozens of consumers who are already using AI products or services where they live. We found that satisfaction with individual smart devices runs high. Today, people are quite willing to invite homebots into their lives to address a broad array of specific use cases: from doing individual chores to completing a more complex set of tasks to managing even certain elements of child and elder care.

But we also found there’s a crucial variable that will determine the speed and extent to which consumers truly embrace smart homes managed by homebots. The overwhelmingly determinative factor for consumer acceptance that emerged from our research was trust. Trust is initially based on the bot’s ability to perform its task, as might be expected. That does not always go as planned. But once trust is established, people are willing to cede more responsibilities to devices and systems powered by AI. One key to creating that trust will be creating bots that are more than mere automatons. After all, humans are wired for emotions. Our research confirmed that consumers are satisfied when a bot gets a task done, but they are delighted when there is a more personal, emotional element to how the bot does it.

At the same time as competitors in the smart-home space are figuring out how to create trust, they also must learn how to compete in a new landscape where the winners are influencing the homebots themselves. As consumer–bot interactions become a new nexus of competition, a variety of players will need new skills in designing bots, marketing products and services to them, and building business models that exploit their position at the center of the home.

Increasingly, designers will tap into and even advance data science to develop solutions that go beyond addressing static insights. Likely, that will entail solutions that are at least in part AI-driven, in order to react instantly and evolve constantly to meet the needs of customers. By understanding customers through a variety of approaches including ethnographic research and AI-generated insights, designers can help guide businesses through the complicated tangle of interactions and diverse engagement models. We expect solutions will migrate from screen-dominated interfaces to more physical and even atmospheric interactions. Companies that have more compelling and intuitive engagement models between bots and consumers—and can achieve significant market penetration first—will hold the competitive advantage.

To become machines that are truly integral to people’s home lives and to establish genuine trust, bots will need to connect with and relate to humans. That’s hard, and it goes beyond AI to the realm of artificial emotion (AE). AE encompasses attributes such as tone, attitude, and gestures that communicate feelings and build an emotional connection. Consider Alexa. Several of our interview subjects told us that they think of Alexa as a friend. That doesn’t develop from merely providing the train schedule when asked. It comes because Alexa evokes a sense of support, through its sensitive omnipresence and nuanced voice interaction. Interacting with Alexa really is like talking to a friend.

As consumers trust bots more and in turn cede to bots more control over their home management, people will become less involved in the active decision making that goes on in daily home life. For providers of home goods and services, this means that bots will increasingly become the customer—or at least an important intermediary between a selling business and a human purchaser.

Marketing for bots certainly gives new meaning to the term robocalls. But it also poses a serious challenge: How can businesses position their products and services to a bot so the human consumer will passively allow, or actively ensure, a purchase? We expect that the marketer’s mission will be comparable to the steps one takes to rank one’s product or service at the top of an Internet search result. Just as companies focus on search-engine optimization, they will need to develop metadata and tagging systems that are optimized for homebots.

Given the simplicity of automated purchases and refills for many household products, sellers will need to focus on getting into a homebot’s consideration set and optimize features to win the likely comparisons embedded in a purchase-decision algorithm. That calls for an approach that goes well beyond “one and done.” Given the speed and reach of AI, providers will have to monitor bot purchasing behaviors continuously and be vigilant in tracking competitors’ moves going forward.

The stakes are real; a shift in AI preference toward a competing product could reduce demand to zero. The once all-powerful, intangible brand may now be reduced to the tangible sum of its parts. As AI gathers inputs across consumer networks, unpleasant consumer experiences or negative feedback could have near-immediate impact on bot purchasing preferences. As a result, analytics and marketing will need to be rapid, responsive, and agile. Consumers who can’t be bothered to search for the right purchase or are overwhelmed by the complexity of choice can have a homebot scan constantly based on variable individual preferences (such as cost, appearance, and durability).
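
A toy version of such a preference-weighted scan might look like this (the weights, attributes, and products are all invented for illustration):

# A homebot ranks candidate products by weighted consumer preferences.
# Attribute scores are normalized to [0, 1]; for cost, higher means cheaper.

preferences = {"cost": 0.5, "appearance": 0.2, "durability": 0.3}  # assumed weights

products = {
    "detergent A": {"cost": 0.9, "appearance": 0.4, "durability": 0.6},
    "detergent B": {"cost": 0.6, "appearance": 0.8, "durability": 0.9},
}

def score(attributes):
    return sum(weight * attributes[name] for name, weight in preferences.items())

best = max(products, key=lambda p: score(products[p]))
print(best, round(score(products[best]), 2))

Under these assumed weights the bot buys detergent B; a seller who nudged one attribute score, or the shopper’s weights, could flip that decision—which is exactly why marketing to the bot becomes the new battleground.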

We expect that a wide range of homebot business models and use cases will emerge. Not only could homebots be purchased or rented for a specific task, people may share or rent them out to others. It’s conceivable that networked bots will work together across households, for example, to increase processing power, share expenses, or even partake in buyer co-ops to benefit from bulk pricing. Each of these models creates opportunities for new revenue streams.

The greatest source of value may come from the data. Bots will acquire and generate reams of information, and these data points will be critical for increasingly data-driven projects and services. Data will be sources of insight and even products in their own right. And understanding the implications, opportunities, and information about the smart home won’t be someone’s part-time job. It will require a dedicated team to parse the data, develop strategies, manage partnerships, and drive experiments that will become integral to creating value.

Businesses that seek to compete in the smart home can begin their housework early. A network of functioning bots is, in effect, an ecosystem of capabilities. Bots will need to follow standard protocols to communicate with one another. But while a house may be bounded by four walls, a homebot ecosystem extends into the ether; it has to, as bots will need to interact with markets and networks around the world. Smart cars, wearables, and mobile devices are but a few examples. How all those systems talk to one another will be the core IT challenge for the foreseeable future.

On the technical side, mastery demands an intimate understanding of AI technologies and how they work with one another. On the strategic front, it’s worth the effort to identify what your company’s competitive advantages are or may become and then imagine how these advantages could align with the homebot value opportunities that are likely to emerge. Remember: the smart home will require different parties to work together. It’s not too soon to take note of players developing complementary—or potentially competitive—capabilities, and consider opportunities for potential partnerships. Most important, keep in mind that the success of homebots and smart homes is not wholly about technology. Rather, smart homes and bots are about how technology makes us feel. The objective is to meet the needs of human consumers and to make a house feel like home.

Real-estate developers should accept that business as they have known it is changing. By adopting new construction technologies, they can improve delivery and affordability. Imagine if people could design and develop their dream homes simply by going online to find the solution that best suits their means and lifestyle. Imagine being able to pick from a menu of standard base designs and being able to customize fittings, furnishings, and smart equipment. Then imagine being able to use a digital interface to obtain quotes from vetted contractors for everything from surveying the plot to assembling the house. Finally, imagine the pieces of the house itself arriving as prebuilt panels and modules in a container from a centralized robotic factory. This is what is meant by “industrialized delivery systems”—and these will play a crucial role in the future of the real-estate and construction sectors.

In reality, early versions of this future already exist, with several venture-funded home builders in the United States aspiring to channel the customer-experience revolution pioneered by consumer-goods companies into the residential-property space. We are likely to see the creation of standardized home-building platforms, with developers working with an ecosystem of companies providing an array of fittings, furnishings, and equipment solutions. The result is a seamless customer experience and a more sustainable end product.

Skeptics may argue that such industrialized delivery innovations will focus on the lower-priced segments of the market, while the construction of high-end real estate will remain unchanged. Early evidence suggests, however, that this will not be the case. Industrialized delivery systems are poised to disrupt the real-estate sector across all asset classes and price points.

There is little doubt that the construction process adopted today badly needs innovation, mainly to improve the speed of delivery and reduce the dependence on manual interventions. A key enabler for this transformation is modular design, geared for production and assembly rather than field-based on-site work.

We are already seeing progress toward this future. Three technologies that already exist are of particular interest.

Virtual design and construction (VDC). Digital platforms, such as 5-D building-information modeling (BIM), enable the creation of a “virtual twin” of physical projects. This not only allows design optimization—in the form of more precise estimates, value engineering, constructability, and interface checks—but also provides transparency and project-management oversight over the life cycle of the project. Coupled with emerging industry trends, such as integrated project-delivery contracts, VDC is a powerful tool to help finish projects on budget, on time, and on spec.

To realize the full benefit of using 5-D BIM for projects, developers and contractors must fundamentally rewire their design, estimation, and project-management processes. That means contractors and clients must work closely together, backed up by clear contracts, to share both risks and gains.

Prefabricated, prefinished volumetric construction (PPVC). PPVC involves the factory construction of interlocking building modules, each equipped with internal finishes, fixtures, and fittings. These elements are then transported to the site for assembly and installation.

PPVC is slowly but steadily gaining popularity because it accelerates the construction process, with productivity gains of as much as 30 to 50 percent, according to case examples in Singapore. PPVC works particularly well for less-complicated and standardized designs. Other benefits include higher, more consistent quality; less waste; and better health, safety, and environmental performance because of the shift from chaotic field sites to a more controlled factory environment.

Singapore is one of the leaders in the use of PPVC. The city-state’s Building and Construction Authority encourages deployment of this approach—not just for hotels, hostels, dormitories, and industrial facilities but also for middle- to high-end residential developments. The technique can accommodate both concrete and dry walls. It is also corrosion free and fire safe. So far, it has been used in buildings as high as 25 stories.

Deploying PPVC implies changing design standards, assumptions, and processes to adopt the Design for Manufacturing and Assembly approach. It also requires better production-planning, supply-chain, and logistics-management capabilities, given that modules need to be produced remotely and regularly shipped from factories to sites. These are areas in which many contractors are lacking. Few in the sector have anticipated this shift from construction to production, let alone budgeted and provided resources for it.

3-D printing. While 3-D printing has not yet been widely applied in construction, developers and contractors should keep an eye out for innovations here. 3-D printing will likely be an important part of the shift from the field to the factory. Experts believe that one core application of 3-D printing could be in realizing complex, iconic facades and architectural features previously thought too expensive or time consuming to produce.

3-D innovators are scaling up this technology, by developing printers and design methodologies to create building units up to 200 square meters in size in less than a day. Significant R&D efforts are also under way in universities to print individual structural components and architectural features.

The construction industry is poised for big cultural and technological shifts as it embraces digitization across design and delivery processes. The real-estate industry is set to gain from these developments, not just as the result of efficiency and productivity gains but also by providing a richer and more satisfactory customer experience.

The larger benefits could be profound. By cutting costs, speeding up construction, and improving quality, industrialized delivery systems can also help provide the decent, affordable homes that families around the world need.

Limiting foreclosures will make mortgage-backed securities riskier and lead banks to lend less. During the U.S. subprime crisis, many homeowners struggled with their mortgage repayments, and foreclosures became rampant. Banks and mortgage servicers are often criticized for taking a tough stance in foreclosing mortgages. One determining factor in whether a mortgage was foreclosed or renegotiated during the crisis was whether that mortgage had been privately securitized by investment banks. The U.S. government provided monetary subsidies to mortgage servicers to encourage them to renegotiate delinquent mortgages instead of foreclosing. On the surface, this is good for homeowners.

But trying to limit foreclosures makes it more difficult for banks to securitize these mortgages in the first place, which in turn, makes it more difficult for them to lend to more prospective homeowners. In other words, foreclosures, even if they create losses for homeowners and investors, are playing a positive role in facilitating securitization.

Mortgages are illiquid assets that are paid off over many years. Traditionally, banks have to hold the mortgages on their balance sheets until full repayment. Securitization, a process in which securities backed by the cash flow from mortgages are sold to investors, allows banks to raise fresh capital to lend to more prospective borrowers. That is, securitization liquefies otherwise illiquid assets and thus supports lending and investing.

Similar to the problem of lemons in the used car market, investing in mortgage-backed securities (MBS) is risky because banks know more about the true quality of mortgage pools than investors do. Banks therefore have to find ways to reassure buyers that they are buying into securities backed by strong mortgages and that if homeowners do default, they will be able to recoup part of their investments. When banks pre-commit to a tough foreclosure stance, they send a signal of confidence to outside investors that the bank has a pool of high-quality, low-risk mortgages.

When a homeowner defaults on a mortgage, the bank typically has two options: modify the terms of the mortgage in order to keep it alive, or foreclose and reclaim the property for sale. Modification entails the risk that the homeowner might re-default if the economy continues to deteriorate, whereas foreclosure is a quick but costly way to recoup cash. Providing safe cash flow in bad times is particularly valuable to MBS investors, precisely because they were unsure about the quality of the mortgage pools in the first place. By committing to excessive foreclosure policies, banks reassure MBS investors and can attract more capital from them.
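
A stylized numerical example of that trade-off (all probabilities and recovery rates below are assumptions chosen for illustration, not estimates from the research):

# Expected payoff to an MBS investor per $1 of delinquent principal.
p_redefault = 0.5           # chance a modified loan defaults again (assumed)
recovery_if_performs = 1.0  # modified loan ends up paying in full
recovery_if_redefault = 0.3
recovery_foreclosure = 0.6  # quick, certain, but costly recovery

ev_modification = (1 - p_redefault) * recovery_if_performs + p_redefault * recovery_if_redefault
ev_foreclosure = recovery_foreclosure

print(f"modification: {ev_modification:.2f} expected, but risky")
print(f"foreclosure:  {ev_foreclosure:.2f} certain")
# Modification has the higher mean here (0.65 vs. 0.60), yet foreclosure's
# certain cash flow is what uncertain MBS buyers value, so a bank that
# pre-commits to foreclosing can sell its securities for more.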

If the government prevents excessive foreclosure, the bank’s securitization process is hindered. Government policies that try to limit foreclosures can also reduce incentives for banks to screen mortgages diligently, leading to the inclusion of low-quality mortgages in MBS. This increases risks for investors and makes securitization more difficult.

Securitization is an important part of the financial market and crucial to the smooth functioning of bank lending. It is also being touted as a way to broaden funding bases in Europe and the United States, notably for SMEs, which are currently starved of the liquidity they need to grow. Policies limiting foreclosure will inadvertently limit the scale at which banks can securitize and hence the scale at which they can lend to the real estate sector. This translates into a loss for homeowners, small businesses, and the economy overall.

Property assessment has long been a solid industry with steady work for those willing to undertake the education and training required to enter the field. But that stability is changing thanks to automation. The number of appraisers is shrinking as software gets more accurate at valuing property and is increasingly integrated into the sale process. 

The margin of error is decreasing as automation improves. Zillow offers a tool called the Zestimate, which uses a range of property and market data to estimate what a home is worth. The Zestimate, which has been around since the website’s inception in 2006, cannot be considered an official, certified appraisal. But it’s a start, and it’s getting better as the company pours more resources into research and development.

When Zillow launched, it covered about 43 million homes and had a median error rate of close to 14%. Today, the company values about 100 million homes every single night, and its error rate is down to 4.3%. Zillow has made substantial advances in the accuracy of valuing homes and is pushing for even greater precision: the company is offering a monetary prize for teams that can get the algorithm’s error rate down to 2% or 3%.
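
Zillow has not published the formula behind these figures, but accuracy for automated valuation models is conventionally reported as a median absolute percentage error against eventual sale prices. A minimal sketch, with made-up numbers:

```python
# Median absolute percentage error (MdAPE), a conventional accuracy
# metric for automated valuation models. All data below are illustrative.

def median_abs_pct_error(estimates, sale_prices):
    errors = sorted(abs(est - price) / price
                    for est, price in zip(estimates, sale_prices))
    n = len(errors)
    mid = n // 2
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

estimates = [310_000, 495_000, 205_000, 780_000]    # model valuations
sale_prices = [300_000, 520_000, 200_000, 750_000]  # actual closing prices
print(f"MdAPE: {median_abs_pct_error(estimates, sale_prices):.1%}")  # 3.7%
```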

Computerized models are going to get very accurate, although in the end there is probably still some role for human beings. The question is, what is that role? Right now, appraisers are professionals: they have a high degree of discretion, and there is a bit of an art to what they do. In the future, there may be a role in making sure that the facts the computer uses are accurate, but that is more of a technician-type job than a professional one.

Zillow is improving virtual assessment methods, and automation is an undeniably growing part of the industry. It’s only a matter of time until the technology is strong enough and cheap enough, from the standpoint of lenders and buyers, to cut humans out and move to a largely algorithm-based process.

Buying a home is a relatively infrequent transaction for the average American, so individuals have limited expertise in the valuation process. That’s where the professional appraiser comes in, making a reasoned assessment by examining a set of comparable properties nearby and adjusting for the finishes and other elements of the home.

If I see a sofa on Craigslist, I don’t hire an external appraiser to argue about whether it’s worth $100 or $200. But in a home sale, there’s a third party with money at stake: the lender or the investors. They want to make sure that their investment is sound and that the value of the house is appropriate for the transaction. Still, even the most professional appraisers are vulnerable to the forces of the real estate market.

One of the things observed during the housing boom was a lot of pressure on appraisers to hit various prices; the incentive problems there are deeply entrenched. The deck is already stacked to hit a certain appraisal number. That’s really problematic and can also drive up house prices. All of this points toward the potential for technology to play an important and beneficial role.

As comparable properties sell for higher and higher prices, the pressure on the appraiser increases. An appraiser who wants to hold the line on the price of a property runs the risk of not getting subsequent business.

A number of layers of regulation have been put into place to block that type of behavior, where you’re shooting for a particular number. But the historical appraisal process did carry the risk of being gamed and of ratcheting up prices.

There are countervailing incentives in the process. Looking at appraisal accuracy over a long period of time, 90% of the time appraisals come in above the purchase and sale agreement. That suggests confirmation bias: appraisers are not independently valuing the home but justifying the price that has already been agreed to. Normally, that doesn’t cause much trouble, because home prices are increasing.

However, the 2008 recession and real estate market collapse illuminated the problem of inflated home prices. Experts say automated appraisals could help take some of the air out of the balloon.

The work of property assessment is transitioning from an art to a science. Most states require appraisers to have a college degree, complete an apprenticeship, and undergo ongoing training; extensive licensing is usually required. The industry has seen a decline of about 25,000 appraisers in the last decade, from about 120,000 to 95,000.

As people begin to see the writing on the wall, why start on this treadmill? Why begin the long process of being approved for an industry with relatively limited future growth? That prospect has really thinned out the pipeline of people who want to get into the appraisal business.

There is irony in the formidable licensing requirement: it creates a barrier to entry for new people but drives up wages for those who stay the course, which is now leading firms to seek alternatives to human appraisers.

Fannie Mae and Freddie Mac are already experimenting with some automation, which will undoubtedly continue. Perhaps the future role of the appraiser will be as an arbitrator who offers a human check-and-balance to an automated system.

What’s the role of the appraiser going to be moving forward? Is this industry essentially doomed, or will appraisers simply play a much smaller role? Whether the licensing barriers make sense as that transition occurs is something states are going to have to grapple with.

Before automation supplants human appraisals, a cultural shift must occur. The current appraisal process is very binary for a lender, who wants the assessment to match the sale price exactly. But an automated valuation model (AVM) would mean thinking about the assessment in terms of a range.

Is the sale price within the range that the model came up with? If you’re willing to trust a range, then a home could be appraised in advance of the purchase and sale agreement. Right now, if the AVM comes in at 98 and the purchase and sale agreement is 100, that creates a problem.
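
A minimal sketch of what such a range-based check could look like, assuming a hypothetical confidence interval around the AVM’s point estimate:

```python
# Range-based acceptance test for an automated valuation model (AVM).
# Mirrors the example above: the AVM's point estimate is 98, but with
# a confidence interval, the agreed price of 100 still passes.

def price_within_range(agreed_price, avm_low, avm_high):
    return avm_low <= agreed_price <= avm_high

print(price_within_range(100, avm_low=93, avm_high=103))  # True: accept
print(price_within_range(110, avm_low=93, avm_high=103))  # False: flag for human review
```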

Global housing stock has not expanded quickly enough to keep up with a surge in demand, but cities can focus on three supply-side solutions to make progress. One feeling unites billions of people in cities around the world: a sense of sticker shock whenever they attempt to find a new home. From London to Lagos, housing costs are creating financial stress for a large share of the world’s urban residents. Rents and home prices have risen far faster than incomes in most countries, particularly in big cities, where many people want to live and where job opportunities are concentrated. The issue affects everyone from slum residents living on the margins to middle-income households.

At the heart of the issue is an extreme imbalance in supply and demand. Population growth, the continuing trend toward urbanization, and rising global incomes are all fueling steady demand increases. In 1950, New York City and Tokyo were the only two cities on earth with populations of more than 10 million; today there are more than 20 cities of that size. The world’s urban population has been rising by an average of 65 million people a year over the last three decades, led by breakneck urbanization in China and India.

The housing stock of expensive urban centers around the world has not expanded quickly enough to keep up with this surge in demand. Research from the McKinsey Global Institute (MGI) has examined the scope of the housing affordability gap. California, for instance, added 544,000 households but only 467,000 net housing units from 2009 to 2014, and its cumulative housing shortfall has expanded to two million units. With home prices and rents hitting all-time highs, nearly half of the state’s households struggle to afford housing in their local market. In New York City, MGI estimates that 1.5 million households cannot afford the cost of what we define as a decent apartment at market rates. This puts the city’s total “affordability gap” at $18 billion a year, or 4 percent of the city’s GDP. And as London’s economy has boomed over the past two decades, annual home completions have increased by just over 10 percent, falling far short of demand and driving home prices five times higher.

Worldwide, MGI has estimated that some 330 million urban households currently live in substandard housing or stretch to pay housing costs that exceed 30 percent of their income. This number could rise to 440 million households by 2025 if current trends are not reversed. Beyond the human toll, this issue eventually constrains economic growth. Investment in housing construction remains below its potential, and households with a disproportionate share of monthly income going toward rent or mortgage payments have to limit other forms of consumption. Returning to our example in California, MGI estimates that the housing shortage causes the state to lose $140 billion in annual output, or 6 percent of state GDP.
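
Two pieces of arithmetic sit behind these figures: the standard 30-percent cost-burden test and the rough per-household scale of New York’s gap. A quick sketch, assuming (as a simplification) that the gap is spread evenly across affected households:

```python
# The conventional cost-burden test: housing is unaffordable when it
# absorbs more than 30 percent of household income.
def cost_burdened(annual_housing_cost, annual_income, threshold=0.30):
    return annual_housing_cost > threshold * annual_income

print(cost_burdened(24_000, 60_000))  # True: 40% of income goes to housing

# Rough scale of New York's gap: $18 billion a year across 1.5 million
# households is about $12,000 per household per year.
print(f"${18e9 / 1.5e6:,.0f} per household per year")
```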

The legitimate interests of investors, particularly in a low-interest-rate environment, can add fuel to the fire. Foreign capital flocks into global hubs, and residents feel compelled to leverage up to achieve home ownership or add hard assets that are appreciating in price. In the hottest markets, these trends are sometimes amplified by speculative behavior, such as land hoarding or fast-paced property flipping.

Some governments have taken steps to cool real-estate markets that are overheated by investors. These approaches include China’s move to discourage land hoarding by imposing a tax on idle land, Switzerland’s addition of taxes on value gain and limits on foreign- and second-home ownership, Canada’s recent imposition of stress tests for home loans and tighter rules for mortgage insurance, and Germany’s limits on loan-to-value financing ratios. These types of measures work best when they are complemented by flourishing rental markets that allow average citizens to save for down payments without facing a shortage of housing options.

National and local governments around the world often address housing gaps by focusing on demand and financing. Strategies such as housing subsidies, privileged financing, or various forms of rent control offer much-needed relief to the low-income households they cover, and they are legitimate policy choices if carefully designed. But they are expensive and difficult to sustain—and they do not address the core issue of an underlying housing shortfall.

It will take a dramatic increase in the number of available housing units to achieve greater affordability. Of course, the simplicity of this statement belies the complexity of executing it. Because progress has been so elusive, this briefing note will focus solely on supply-side solutions, addressing three challenges that all cities have in common: making land available, removing barriers, and making the construction sector more productive.

Access to land is typically the biggest constraint for housing development and one of the major drivers of cost. In places such as Auckland and Rio de Janeiro, the cost of land often exceeds 40 percent of total property prices. In extreme cases such as San Francisco, land is so scarce that it can account for as much as 80 percent of a home’s price. Globally, we estimate that unlocking land to the fullest extent could reduce the cost of owning a standard housing unit by up to 20 percent. A comprehensive citywide mapping and inventory exercise can unearth many opportunities. Based on our past work in urban environments, we have identified seven areas of focus.

It is critical for congested cities to promote density around transit rather than encouraging sprawl and longer commutes. Transit-oriented development may involve redeveloping existing residential structures or encouraging new building by permitting higher floor-space ratios, loosening height restrictions, or allowing greater density in specific target zones. These zones can be selected to promote local objectives, such as reduced dependence on private vehicles or the development of mixed-use, pedestrian-friendly cityscapes. Places such as Hong Kong and Seoul have already intensified land use around transit stops. Seoul allows floor-area ratios that are up to 20 times higher in better-connected neighborhoods than in more distant areas. Other cities can follow this approach. Analysis in San Diego, for example, found that increasing the density of residential developments in a half-mile radius around public transport nodes could expand the city’s housing stock by close to 30 percent.
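
To see why floor-space ratios matter so much, consider a back-of-the-envelope unit count for a single lot; the lot size, ratios, and average unit size below are all hypothetical:

```python
# Back-of-the-envelope link between floor-area ratio (FAR) and housing
# yield on one lot. FAR = total buildable floor area / lot area.
def max_units(lot_area_m2, far, avg_unit_m2=70):
    buildable = lot_area_m2 * far          # total floor area permitted
    return int(buildable // avg_unit_m2)   # whole units that fit

lot = 1_000  # square meters
print(max_units(lot, far=2.0))  # 28 units at a modest ratio
print(max_units(lot, far=8.0))  # 114 units near a well-connected transit node
```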

In many cases, cities may not even need to increase density thresholds. They can build out on residential parcels that are not taking advantage of currently allowed density. Sites that are underutilized can be identified as priorities for redevelopment. Incentives (such as expedited permitting, relief from parking requirements, or investment in public parking) can make these types of projects more attractive to developers. MGI’s analysis in Los Angeles found that 28 percent of parcels zoned for multifamily development are underutilized; maximizing them could add more than 300,000 units to the city’s housing stock.

Another strategy involves building infill housing on vacant parcels. Even dense neighborhoods may have empty lots that could serve as viable sites. A surprising amount of land sits idle in the face of a huge unmet housing demand. Our analysis finds, for example, that Riyadh, Saudi Arabia, has some 40 square kilometers that are zoned residential but are not being utilized, while about 40 percent of all zoned residential land within Nairobi is vacant. Taxes on idle land can create an incentive to build.

Where appropriate, governments can earmark unused public lands for housing development. Transit authorities may own property surrounding busy transport nodes. Decommissioned sports facilities, military bases, or transit hubs may also be viable sites. It is often easier to facilitate low- or middle-income housing on these types of sites than on typical residential parcels, since public authorities can make the transfer or sale of the land contingent on the development of affordable housing. They may even directly subcontract development of housing in these areas. Turkey’s Mass Housing Administration (TOKi) has managed to open up some 4,120 square kilometers of unused land (or 4 percent of total urban land) from other government agencies for housing development. San Diego could add roughly 4,000 housing units by converting disused sports facilities into mixed-use commercial and residential developments.

Some cities may have opportunities to convert light-industry sites. Large unused industrial parcels (such as shuttered factories) can offer tremendous development potential. But converting them to residential use should involve careful consideration of the impact on jobs and whether any commercial activity on surrounding sites would pose issues for residents.

Cities surrounded by undeveloped or agricultural land can invest in greenfield housing projects on their outskirts. Although greenfield developments typically involve building infrastructure, roads, and new neighborhoods, they may still be cheaper than infill projects if the land is more affordable and if there is room to achieve economies of scale on multiacre sites. Greenfield developments open up the possibility of building single-family homes, which are less feasible in dense urban cores. In California alone, we estimate that greenfield developments could provide more than 600,000 additional housing units. Despite their advantages, cities should learn from mistakes made in locations as diverse as Cairo and Mexico City; if greenfield developments are built too far from existing employment centers or transit hubs, they can fail to attract or retain residents.

Finally, many cities can encourage the owners of single-family homes to add accessory dwelling units. These may include garage apartments, basement apartments, or backyard cottages. It does not matter whether they house extended family or renters. Accessory dwelling units are inherently affordable because they use existing land, buildings, and infrastructure, resulting in a sort of “invisible density.” MGI’s research in California found that homeowners could add up to 790,000 housing units across the state from such structures.

Housing strategies are enormously complex, involving initiatives and policies across financing, urban planning, infrastructure development, land-use regulation, building codes, delivery and contracting approaches, and more. But stakeholders from different parts of the system rarely work together to smooth friction and focus on the broader goal of getting more affordable housing built quickly. Cities therefore have to develop governance structures that represent all stakeholders (not just the most entrenched, powerful, or vocal) and streamline actual execution. Several approaches can help.

The “delivery lab” model addresses this lack of coordination by bringing together 30 to 40 people across these specialties for fast-paced, intensive working events. Labs are designed to translate high-level housing strategies into detailed initiatives, implementation plans, and key performance indicators. In these settings, public- and private-sector stakeholders can address misperceptions and arrive at joint solutions. Labs can produce integrated plans that clarify expectations and synchronize timelines for what each player agrees to deliver. Getting the right people around the table is critical. Sessions should be well-facilitated, with consultation from external topic experts. Each stakeholder should be represented by someone with enough seniority to make quick decisions, and the top sponsor (for example, a city mayor) should personally attend and guide key sessions.

The delivery-lab approach has had a major positive impact on the housing market in Saudi Arabia. The government invited all stakeholders across the public sector (all ministries and government entities related to housing) and private sector (including representatives from real-estate developers and banks). Citizens’ voices were also heard through the use of social media and focus groups. These events took a multidisciplinary approach to identifying the key challenges in the housing sector and devising solutions with clear targets, implementation plans, accountability, and budgets. The labs have aligned stakeholders around high-impact ideas that take practical considerations into account.

To give just one example, the labs identified last-mile infrastructure connectivity as an issue that was delaying the development of large land parcels and creating uncertainty that deterred developers. Cross-disciplinary problem solving quickly came up with solutions, such as an infrastructure company focused on building these last-mile connections using a build-operate-transfer model.

The outcomes from successful labs are a good foundation, but actual implementation is crucial. A city government can accelerate progress by empowering an agency or unit with a mandate to guide housing delivery from end to end. This type of unit needs exceptional talent with good problem-solving skills, stakeholder-management and communication skills, and significant decision-making power or direct access to the top decision maker. San Diego’s Housing Commission, for instance, hires private-sector talent, has an in-house real-estate development team, and invests in marketing and communications. 

Although most people agree in the abstract that increasing housing affordability would be a good thing, opposition often halts specific proposals. Existing residents may be concerned about the changing character of their neighborhood, the prospect of lower home values, congestion, and crowding in schools. To accommodate these concerns, many jurisdictions have established processes such as public hearings or ballot initiatives that carry veto power. While the intent to give the community a voice is noble, the result is often that very little housing gets built.

Cities need to take an inclusive approach to providing housing for people of all incomes, ages, and demographic groups. People who come to a city to work need to be able to find an affordable place to live there. But the voices of existing homeowners who want to preserve the status quo often drown out those of newcomers, young adults, low-income service workers, and renters who need more housing. After a 2009 audit found that neighborhood councils were not representative of the city’s broader population, Seattle replaced these bodies with a central Community Involvement Commission that includes mayoral and council appointees chosen to represent a broader set of stakeholders.

Cities can also mandate a larger role for employers in the community-input process. Companies have a very real stake in housing issues, since the availability of housing directly affects their ability to attract talent. Amid the extreme housing crunch throughout Silicon Valley, for example, Facebook has advanced plans for a mixed-use, mixed-income residential and commercial campus in Menlo Park.

While many cities hold public hearings and disclose minutes of meetings, there are ways to make the planning process more dynamic and inclusive. Widely distributed digital surveys and the use of analytic tools to track citizen sentiment and real-world use patterns can keep housing decisions more in tune with the actual needs of the community and lessen the influence of smaller entrenched interest groups. Creating an open-source map of all city parcels overlaid with development opportunities can foster debate about priorities. Tools such as Owlized can help residents visualize proposed projects in their neighborhood in 3-D.

A maze of regulation is typically associated with land acquisition, zoning, and building codes. In many jurisdictions, developers need to go through extensive environmental studies, design approvals, and public hearings. These safeguards are well-intended, but they can add inefficiencies. Wrongful manipulation of the approval process can result in multiyear delays and millions of dollars in added development costs. This increases the risk premium associated with building projects, driving up costs for renters and would-be homeowners and preventing some projects from being undertaken at all.

Cities can streamline their processes to fast-track land-use approval and permitting, creating a more predictable and less burdensome process. Establishing “single window” clearance (that is, consolidating approvals from multiple agencies into one clear interface) and digitizing permit applications and status tracking are clear places to start. Cities around the world, from Nairobi to Singapore, have had success with this approach. Simplifying the required permits can provide significant relief. Australia, for example, was able to cut the number of regulatory procedures and speed up permit approvals by more than two months, all while maintaining high construction quality.

Cities could consider establishing “by right” special development zones in select areas where deviations from city zoning and land-use codes are permitted with minimal review. Blanket environmental reviews could clear requirements for future developers in entire zones. Governments could also create appeal boards at the local level for faster resolution of project rejections or mitigation proposals.

Local governments can also bring a new approach to building codes. Today these codes tend to be highly prescriptive about the choice of equipment, materials, and designs that construction companies must use. This can stifle innovation and make it difficult to achieve meaningful improvements in productivity by adopting new practices. Instead, cities could opt for “outcome-based” regulation that requires safe, sound results (such as structural integrity) but gives construction companies the flexibility to decide how to achieve them.

Building projects on a larger scale can dramatically change the productivity and cost of delivering housing, making it possible to employ techniques such as repeatability and off-site fabrication. A number of companies take this approach while trying to incorporate design quality and variability as well as sustainability. Cities can support industry innovation by providing the land and infrastructure that allow for scale, tendering out city-scale developments, and consolidating high-volume demand.

Where cities themselves invest in housing or supporting infrastructure, contracts can be a powerful lever for raising construction productivity. In an MGI global survey, construction executives, suppliers, and project owners pointed to misaligned incentives and contracts as impediments. Projects are often awarded to the lowest bidder with limited regard to quality, change orders, and claims that might arise after the fact. The planning stage may be given short shrift, while overly detailed specifications can limit flexibility when problems arise. Risks are often misallocated, and contracts generally fail to take the inherent uncertainty of projects into account. Furthermore, relationships may be adversarial, creating an environment that lacks trust and genuine collaboration.

Moving to value-based tendering (which places greater emphasis on the quality and past performance of suppliers), adding contractor and owner incentives to traditional contracts, and making provisions to improve transparency and collaboration can deliver tremendous value. An even bolder approach involves contracts with an integrated project-delivery (IPD) model. When arrangements with multiple contractors are transactional, they can easily turn hostile. But the IPD model encourages multiple stakeholders to collaborate closely on a project, sharing its profits or losses while maintaining their separate business identities. Tired of missed deadlines and budget overruns on early projects, Sutter Health, a not-for-profit health system with dozens of medical centers, took this approach to tighten up its $7 billion capital-improvement project. The company designed an IPD model, assigning contracts to integrated teams of designers, consultants, and builders rather than to individual parties. The new approach has yielded projects that came in on time and under budget.

Finally, by mandating use of efficient technologies and innovations in their procurement contracts, cities can hasten private-sector adoption and investment in cost-saving tools. Requiring contractors to submit models in building-information-modeling (BIM) software, which has a track record of fewer errors and reduced rework, can solidify better industry standards and practices.

Even when land is available and there is no community opposition, construction itself poses risks. Too many projects come in late, over budget, or fraught with problems. Productivity within the construction sector is consistently poor around the world. Labor productivity growth averaged 1.0 percent a year over the past two decades, compared with 2.8 percent for the total world economy and 3.6 percent for manufacturing. The picture is particularly dismal in advanced economies. In the United States, for instance, labor productivity as measured today is lower than it was half a century ago.
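
Compounded over two decades, those annual rates imply a wide cumulative gap, as this quick illustrative calculation shows:

```python
# Cumulative effect of the annual labor-productivity growth rates cited
# above, compounded over 20 years (illustrative arithmetic only).
rates = {"construction": 0.010, "total world economy": 0.028, "manufacturing": 0.036}
for sector, rate in rates.items():
    print(f"{sector}: {(1 + rate) ** 20:.2f}x productivity after 20 years")
```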

Some of this is due to external factors such as cumbersome building codes and permitting processes as well as cyclical swings in public and private demand. Informality and corruption sometimes distort the market. At the industry level, construction is highly fragmented, contracts have misaligned incentives, and inexperienced owners and buyers find it hard to navigate an opaque marketplace. At the company level, we often see poor project management, inadequate design processes, and a lack of investment in technology, R&D, and workforce skills.

While cities can create a more efficient environment and incentives for innovation, construction companies also have to up their game. The best-performing companies take a value-engineering approach to the design process, pushing for repeatable design elements whenever possible. They also avoid delays by focusing on procurement and supply-chain management for just-in-time delivery.

Several approaches can improve on-site execution, starting with a rigorous planning process and the completion of all prework before starting on-site. To ensure that key activities are completed on time and on budget, companies should agree on key performance indicators, particularly for subcontractors, and hold regular performance meetings to monitor progress and solve issues. It takes careful planning and coordination of different disciplines on-site, along with the application of lean principles, to reduce waste and rework.

The construction industry also needs to accelerate digital adoption. This includes the use of BIM tools for design as well as analytics and the Internet of Things for on-site monitoring of materials, labor, and equipment productivity. Cloud-based control towers can coordinate large-scale, complex projects, assembling data in near real time that is both backward-looking and predictive. They can keep information flowing to owners, contractors, and subcontractors. Techniques and data that are readily available today can produce large improvements in the accuracy of cost and schedule estimates as well as engineering productivity. Advanced automated equipment, such as bricklaying and tiling robots, can accelerate on-site execution. MGI’s productivity survey indicated that the biggest barriers to innovation by construction companies are underinvestment in technology and a lack of R&D.

Construction is almost always approached as a series of discrete and bespoke projects. But the biggest boost in productivity comes with the concept of a manufacturing-inspired mass-production system. This involves more standardized elements, panels manufactured and assembled off-site, and limited finishing work conducted on-site.

Barcelona Housing Systems, for instance, has improved productivity by up to tenfold by moving away from traditional on-site construction to large-scale industrial delivery and prefabrication. The company aims to develop more than 10,000 housing units per project, helping to amortize the cost of manufacturing facilities. It uses a replicable design of four-story multifamily buildings that mix housing, retail, and service-oriented office space, varying some facade and design elements without changes to the structural design. All necessary housing components are assembled from prefabricated modules built in a factory on-site or nearby, and the components are simple enough to be built by nonskilled workers with minimal training.

VBHC, a modular-housing provider from India, designs prefabricated room components that can easily convert one-bedroom units to two- or three-bedroom units, saving costs by avoiding extra aluminum frameworks. Such construction techniques can be applied in a variety of different housing contexts, including prefabricated single-family homes as well as detached dwelling units and modules for multifamily infill projects.

Modular-home construction is also gaining traction in the United Kingdom. A company called Legal & General, for instance, is building one of the largest modular production facilities in the world near Leeds, where it expects to produce up to 4,000 units a year. The £3 billion UK Home Building Fund explicitly calls for and supports the funding of such techniques.

US-based Katerra uses modular-construction techniques while delivering construction services to customers in an end-to-end model. The Silicon Valley start-up takes sole responsibility for design, sourcing materials from a global supply chain, and assembling final products. The company is focused on using new building materials and finding process improvements by deploying the Internet of Things.

Other technology breakthroughs are being applied as well. Shanghai-based WinSun automates construction through 3-D printing. Although relatively new, the technique has already been used in a few cities: Saudi Arabia has signed a contract with WinSun to develop 30 million square meters of real estate, on the heels of the company’s development of a 3-D-printed office building in Dubai.

Finding an affordable place to call home has become an issue for citizens around the world. Subsidies and financing solutions alone cannot close the gap. Cities urgently need to ramp up home building to improve residents’ quality of life, remain inclusive, and ensure that housing shortages do not become a drag on economic growth. The tools and strategies outlined here can be pursued in parallel—and given the extent of unmet demand today, there is no time to lose.

VIRTUAL REALITY FOR CORPORATE LEARNING

The Why Virtual Reality Works for Corporate Learning Infographic explores why VR is effective for workplace training based on the results of a pilot study.

Virtual reality was rated in the L&D Global Sentiment Survey as the fourth-hottest workplace trend for 2017. In the pilot study, participants found VR significantly more enjoyable and significantly easier to concentrate on than the other popular digital learning methods used, and they reported greater learning satisfaction.

The research compared mobile VR, delivered using smartphones and headsets, with a four-page online PDF document and a gamified elearning interaction. The VR learning succeeded at teaching observational skills and decision-making, and the results demonstrated that VR is at least as effective as the other learning methods in terms of knowledge acquisition and retention. VR also yielded high ratings on learners’ confidence to apply what they had learned.

As the workplace changes, so must learning. Exciting experiments are under way, but they are not enough; as technology transforms the workplace, the need for innovation in learning is urgent. In recent years, somewhere between 300,000 and 400,000 skilled manufacturing jobs have gone begging in the US at any given time. Pick a subset of those in a particular region, then figure out a way to teach the skills and run a test project.

We are skeptical about the ability of universities to respond rapidly enough. As the workplace changes, the role of the college degree will shift as well. Fortunately, innovation is taking place both at universities and businesses. While some companies are ready to explore new ways of developing talent, sorting through the options is complex and time consuming. The rapid growth of the gig economy creates additional challenges and opportunities for innovation efforts.

Our higher-education system is 25 years behind the curve. There needs to be a new set of institutions and programs that are jointly owned and managed by corporations or industry. One of the flaws of the American higher-education system is that once students cross the graduation stage, the institution largely severs the relationship with them, except to view them as donors. Graduates’ connection and loyalty to the school haven’t changed, but the relationship with the institution has. Some colleges say, “Congratulations,” and offer a discount on executive-education programs and lifelong access to the career-management center. But they do nothing with respect to asking, “How are your skills and capabilities changing over time, and what can we do to help you meet those needs?” Universities will struggle to adapt to lifetime learning; their North Star isn’t the student, it’s the funding.

We went to the schools that provided degrees around specific topics. What we found was that the people who excelled in the organization were not the same people who did really well in getting those degrees, or who even had them. Some of the best executives started in carpentry or ironworker roles. A degree is not really a great proxy for meaningful skills: a transcript lists courses, but courses don’t necessarily show skills or competencies. That said, degrees are a recognized credential, and employers use them as a signal. Plus, there’s a yearning for them; they are part of the American narrative. Let’s figure out how to do this better, in a way that works for employers and students. Let’s not throw out everything we have, but find more flexible ways of providing recognizable proof of competency more quickly, in smaller units that build to degrees.

People want degrees. They want them because there isn’t an alternative. And they want them because they want some marker. Leaders need to understand and value the alternative credentials that are available. If I’m an employer, I need to be saying, “Here are the 12 competencies that I need you to get. I don’t care where you get them. You don’t need to spend $200,000 in four years to go do that. You just need to show us some proof.”

The idea that you enter at the bottom and four-plus years later you end at the top and you’re done is a fiction. It doesn’t mean anything anymore. Learners need to be able to enter at any point along the way, take what they need, and get going to do whatever it is they wanted to do. We have to find a way to help alternative credentials become a currency among learners that is respected and valued by employers.

You cannot find all the people you need through traditional means alone; they will be found through nontraditional means, and at some point the nut will get cracked. To do that, though, the degree-alternatives space needs to solve for recruiters. Recruiters in fast-growing companies are busy. They don’t have time to do the analysis that says, “Let me follow up on the people I hired to figure out which are actually making it in this organization. How are the ones with a certificate, whom I took a chance on, performing versus the ones I thought were a shoo-in because they had a degree?” In the same way, it will be a struggle to find the time and bandwidth needed to figure out who all these learning providers are. Who are the good ones? Which should we rely on? What does the credential here mean, and how is it different from the credential over there?

Microsoft has badges that show an employee has passed an exam or completed certification for a given skill, and employees can take their badges with them if they leave. If I were a talent-rich company, I would want to do the same. More companies are going to do this kind of badging, and it will become part of their recruitment and retention processes. As the idea of badging spreads, we’re going to see more and more configurations in which a business has solved part of the puzzle.

At AT&T, they have taken all of their job categories, mapped them onto competencies, and aligned them to learning opportunities. Individuals can go onto a personalized-learning system and see if their jobs are on the decline or on the rise. They can discover jobs that they are interested in, see the associated competencies, and take advantage of learning opportunities that will enable them to make a transition. The transparency of AT&T’s system is remarkable and empowering to employees.

In Silicon Valley, at least when companies are hiring engineers, they don’t care where candidates went to school. Facebook is hiring people right out of college if they can code, and it has a 14-year-old intern. All these companies care about is whether people can code.

A more diffuse gig economy will exponentially increase the difficulty of getting people to undertake and complete training. We know completion is a huge hurdle under the best of circumstances, and it’s even harder when the learning isn’t contextualized. Coursework is hard for many people, due to time constraints or a lack of interest in traditional learning, but interacting with people or doing on-the-job tasks that develop and use math or computer skills makes learning more pragmatic and attractive. We need to figure out how to line up the factors that drive people toward completion and success, even when they don’t work for an organization. A complicating factor is that many gig-economy companies are utterly unmotivated to take on costs they don’t have to, and many individuals don’t have the cash.

In the future, a traditional college degree will remain useful to build fundamental skills, but after graduation, workers will be expected to continue their education throughout their careers. Workers, for instance, may increasingly pursue specific job-oriented qualifications or applied credentials in incremental steps in flexible, lower-cost programs.

Students are hesitating to major in the humanities and social sciences out of fear that those degrees will lead only to low-wage jobs. Yet those fields remain crucially important to industry, which needs liberal arts students for countless tasks, such as to help understand biases in data, facilitate collaboration, bring insight, provide historical perspective, and humanize technology in a data-driven world.

For instance, machines should not only function but should also optimize human welfare. What if a self-driving car needs to exceed the speed limit to avoid an accident? Should that car be allowed to break the law? Questions like these, raised by the new digital economy, require diversity of thought, diversity of approach, and diversity of background to address.

Those who major in the humanities or social sciences, especially fields like philosophy and public policy, can easily develop transferable skills that employers value. Because many employers seek candidates comfortable with data and data analysis, humanities majors who also learn some quantitative skills by taking classes in, say, statistics or logic will have an advantage over those who don’t.

Traditional brick-and-mortar college campuses will certainly remain, because the face-to-face encounters in and outside the classroom are educationally and socially valuable. After graduation, though, employees will increasingly need continuing education to stay competitive, and companies recognize that. Already, some large firms such as AT&T use online learning in massive efforts to reskill workers. These educational opportunities are open to anyone with the will, desire, and ability to go through them, and as a result we’re going to see all sorts of new people come into fields they otherwise wouldn’t have had access to.

Workers may think of continual training and education through online classes as earning micro-credentials that could garner credit toward a full degree at a traditional institution. Individuals could earn multiple micro-credentials over the years, perhaps even beginning with a micro-bachelor’s in high school as a head start on an undergraduate degree.

Over the course of their careers, people will augment the three R’s of reading, writing, and arithmetic that they learned early in life with the four C’s of critical thinking, communication, creativity, and cultural fluency.

By 2030, about half of today’s jobs will be gone. Automation will perform many current blue-collar and white-collar jobs, while independent contractors will fill a large fraction of future positions. Robots and other automation in the short term will displace individual workers, but technology over the long term is likely to create new economic opportunity and new jobs. While automation eats jobs, it doesn’t eat work.

Future workers’ attitudes toward employment will differ from those of today’s workers, forcing companies to change how they recruit and retain. In a survey of college students, respondents indicated that they highly value work-life balance and are interested in working from home one or two days a week. Students are shifting away from living for their work and toward making a living so they can actually enjoy life.

Other shifts in demographics will force employers to rethink how they structure work and benefits. Many aging baby boomers, for instance, are remaining in the workforce past the traditional retirement age of 65 and may demand fewer hours or shorter workweeks. There are different things people value at different ages.

Companies are committing to a diverse workforce for varying motivations. Some believe that diverse teams are just smarter and more creative. Other firms, especially technology companies, believe that they’re disproportionately responsible for designing the future and therefore it’s simply wrong to leave entire communities out of their teams.

Overall, companies must understand that the same strategies that increase diversity also boost a range of other positive outcomes. When people feel they belong at work, they perform significantly better. They take fewer sick days and less time off.

There are various initiatives designed to increase inclusion, such as reacHire, which trains and supports women re-entering the workforce, and Stanford’s Distinguished Careers Institute, which brings individuals with 20 to 30 years of career experience to campus for a year of intergenerational connection and learning with undergrads and graduate students. There are so many people who are not 18- to 22-year-olds who are still interested in being alive, alert, connected, and contributing. Diversity is a fact, inclusion is a practice, equity is a goal.

REVITALIZING RURAL AREAS

Rural America is often portrayed as depressed and distressed — as a set of regions left hopeless by economic forces beyond their control. In early November an MIT audience was asked to look beyond such stereotypes and listen as four community development leaders painted a more nuanced portrait of America’s rural areas.

“We have these stereotypical, often negative views,” said Barbara Dyer, a senior lecturer and the executive director of the Good Companies, Good Jobs Initiative at the MIT Sloan School of Management, who co-moderated the panel discussion. “In reality, there’s an enormous amount of innovation across this country and a lot of strength. Tonight, we want to suspend our stereotypes and spend some time with people who live in these places.”

Sponsored by Mens et Manus America, the well-attended event featured a panel of executives whose organizations are working to rebuild rural economies. The four leaders outlined the challenges of geography — notably the difficulty of providing services to populations spread out over wide areas — and of history, which can cast long shadows in places where economies once depended on individual companies.

Good rural infrastructure is a foundation for prosperity

Betsy Biemann, CEO of the rural development organization Coastal Enterprises Inc. of Maine, began by describing the challenge of working to grow good jobs, environmentally sustainable enterprises, and shared prosperity in Maine. Noting that the state represents half the land mass of New England with just 1.3 million people, she said, “The challenge is, how does one afford the infrastructure of a huge expanse with a small number of people?”

Infrastructure needs include skills training, child care, and reliable transportation — any of which can trip up workers on the path to prosperity, speakers said. “The rural heartland is a good place to make things. Manufacturing is booming in places you wouldn’t think of,” said Janet Topolsky, executive director of the Aspen Institute’s Community Strategies Group, which works in numerous regions to improve life on the economic margins. “Yet, everywhere I go,” she said, “rural employers have jobs going wanting because they can’t hire enough people with the right skills” — or can’t keep employees due to a lack of such services as child care or transportation.

High-speed Internet service is another key infrastructure need, Biemann noted. “There are a lot of communities in Maine where people have dial-up. Broadband would have a significant impact in reducing the level of poverty,” she said.

Biemann noted that Maine is still struggling to adjust to a “huge shrinkage of our economic engine,” including the loss of 5,000 paper industry jobs in just the last five years; and panelists underscored that such economic blows have lingering effects.
 
Shaping new identities in former company towns

Incourage Community Foundation of Wisconsin CEO Kelly Ryan, for example, said that when Wisconsin Rapids lost its main employer — a Fortune 500 paper company — the impact was more than just financial. “What I found was a greater sense of loss around identity,” she said. “We were the greatest paper-making community in the world, and if we’re not that anymore, what are we?”

This is the same kind of question autoworkers asked when Detroit’s manufacturing center contracted, Topolsky said, noting that she was born and raised in Detroit. “There are parallels between the rural and urban experiences that we sometimes don’t acknowledge.”

Building a culture that values education

Karl Stauber, CEO of Danville Regional Foundation, which works to encourage revitalization and renewal in the Dan River Region of Virginia, told a similar story: For many years, the economy of his region was heavily dependent on a single textile company. “We were a company town,” he said. “The Great Recession started in our community in about 1995. In some ways it’s still going on. A lot of people who could get up and go got up and went.”

Now, Stauber said, Danville is trying to build not just a new economy but also a new culture — one that values education in a way that the old company-centric system did not. The company actually discouraged residents from completing high school, Stauber said. “They wanted people they could control.”

In recent years, the Danville Foundation has worked to show companies that well-educated children are a good investment, Stauber said. “We got businesses to work together to change what was happening,” he said — and the effort significantly increased the number of children deemed ready for kindergarten in his region. “We talk about it as a tipping point.”

That doesn’t mean that every rural resident needs a four-year college degree, however, speakers agreed. “My feeling is that we have way oversold the value of a four-year degree,” especially considering the risks of mounting student debt, Stauber said.

“We have to make it more respectable to do something other than a four-year degree,” Ryan said, noting that options such as apprenticeships that offer paid training would make it easier for students to afford to get the skills they need.

In addition, since many rural residents would have to leave their regions to earn a bachelor’s degree, Stauber said, “Going to college is an exit strategy, it’s not a development strategy.”
 
How could MIT help create sustainable rural development?

The event’s co-moderator, Simon Johnson, the Ronald A. Kurtz Professor of Entrepreneurship at Sloan, asked the panelists what MIT could do to help.

Answers ran the gamut from suggesting that MIT invest directly in rural areas to proposing research into how cutting-edge technology could make the low workforce density of rural areas an advantage.

Ryan also revealed that MIT has already provided some assistance to Incourage. MIT students piloting a new MIT Sloan action learning offering called USA Lab conducted research this summer that helped the foundation meet its goal of launching a Wisconsin index fund. That fund will enable Incourage to use its investment portfolio to directly support its mission — revealing a new way of thinking about “return on investment,” she said.

The important question, according to Ryan, is: “How do we measure return? Is it just money, or is it social return?” This point was echoed in the question-and-answer session that ended the two-hour event, when one attendee asked the panelists: “If you could wave a magic wand and make one thing happen, what would that be?”

“I would change how we measure the results of economic development,” Topolsky said, pointing out that long-term investments in intellectual, social, and other kinds of capital are also necessary for sustainable development. “We should measure economic development by how many people get out of poverty.”

The non-partisan Mens et Manus America initiative explores social, political, and economic challenges currently facing the United States. The initiative is co-sponsored by the MIT School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management, and is co-directed by Agustín Rayo, professor of philosophy and associate dean of SHASS, and Ezra Zuckerman Sivan, the Siteman Professor of Strategy and Entrepreneurship and deputy dean of MIT Sloan.