CONSUMER TECHNOLOGY

Every year CE Week serves as the mid-year launch pad for the newest technology products. This year, with more than 75 companies signed up to exhibit next week, July 12-13, and a ten-year track record of drawing the largest audience of media, retailers, and tech influencers on the East Coast, the show will feature new products in trending categories such as smart home, wearables, digital health, robotics, augmented reality, high-resolution audio and video, and more.


Major participating brands include Harman Kardon, Intel, LG Electronics, Logitech, Pioneer Electronics, Samsung and Westinghouse. What makes the show especially exciting are the many start-ups and innovators that break through the clutter to make their innovations known worldwide. Some of the exciting new products that have already been announced include:


BumpOut – The BumpOut Bluetooth Speaker is the world’s first truly portable speaker to feature Motorized Expansion Technology™. Its sleek, ergonomic design fits comfortably in your pocket and attaches to any surface, including your phone or case.
Cedar Electronics – The Escort Solo S4 cordless radar/laser detector provides long-range protection against all radar and laser guns including instant-on X-band, K-band and SuperWide Ka-band as well as maximum laser warning and off-axis protection.
Cleer – Be one with your music with the NEXT audiophile-grade headphone. From the lush sheepskin and memory foam padded earcups to the 40mm ironless magnesium driver units, it is ready to deliver engaging personal music experiences.
EarlySense – EarlySense Live is the only health tracker that’s based on sensor technology trusted by physicians and used regularly in hospitals around the world. For the first time, a hospital proven contact-free sensor is made available to every home.
Enblue Technology – SMARTOO is a mobile travel desk for people on the go. It fits any two-handle suitcase, folds for easy transport, and integrates a 4,200 mAh power bank as a backup for smartphones.

Fizzics – With the FIZZICS Waytap you can enjoy fresh draft beer anywhere. The dense Fizzics Micro-Foam™ head enhances the texture and aroma, bringing out the full flavor of your favorite beer.
Jabra – Jabra Elite Sport are the most technically advanced true wireless sports earbuds, with superior sound for music and calls and up to 9 hours of charge.
JBL by HARMAN – The JBL Everest Elite 750 NC around-ear wireless headphone is outfitted with the latest Adaptive Noise-Cancelling (ANC) to allow listeners to control the amount of environmental noise they hear. JBL’s TruNote™ Auto Sound Calibration adjusts sound based on the user’s ear anatomy to deliver the best listening experience.
M3D – The Micro+ is the second-generation Micro 3D Printer. The consumer 3D printer that set the standard has been renewed with full 3rd party support, prints twice as fast, prints untethered, and is better than ever.
Merge VR – The Merge VR Goggles are a vessel that will transport you to another world. Slide in your iOS or Android smartphone and let the Merge VR Goggles take you anywhere.
NEOFECT – Rapael Smart Glove is a biofeedback system comprising an exo-glove with built-in sensors and artificial-intelligence software that helps patients with neurological and musculoskeletal injuries regain hand mobility.
Nixplay – The Nixplay Seed displays photos sent from anywhere in the world through Wi-Fi connectivity. With Nixplay’s mobile app, users can share pictures from various social media sites, customize playlists, add captions, and more, with just a few clicks.
Odyssey Toys – A.R.I.A.’s Adventures is a virtual reality/augmented reality game that will impress all who see it. Color in 3D, bring animal cards to life, learn some neat facts, and enjoy some fun interaction with them too.
Olibra – The Bond connects remote-controlled devices like ceiling fans and AC units to a home Wi-Fi network and even works with Amazon Echo and Google Home. Now you can control all your remote-controlled devices with your phone, tablet, or smart speaker.
Ooma – Ooma Home Security is a comprehensive DIY home monitoring solution that alerts users of events within their home and makes it easy to contact local emergency dispatchers remotely from their home phone number even when they’re not at home.
REM-Fit – ZEEQ Smart Pillow combines personal audio sound, sleep tracking and snoring solution technology for the most soothing, restful night of sleep possible.
Westinghouse Electronics – The Smart 4K Ultra HDTV – Amazon Fire TV Edition delivers stunning 4K UHD picture quality with the Fire TV experience built in. This smart TV seamlessly integrates your favorite content on the home screen – including live over-the-air TV broadcasts and streaming apps and channels.

Industry 4.0 is the current trend of automation and data exchange in manufacturing technologies. It includes cyber-physical systems, the Internet of Things, and cloud computing.

Industry 4.0 creates what has been called a smart factory. Within the modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, and via the Internet of Services, both internal and cross-organizational services are offered and used by participants of the value chain.

Industry 4.0 refers to the convergence and application of nine technologies: advanced robotics; big data and analytics; cloud computing; the industrial internet; horizontal and vertical system integration; simulation; augmented reality; additive manufacturing; and cybersecurity. Companies unlock the full potential of Industry 4.0 by coordinating the implementation of those technologies—for example, by deploying sensors to collect data within a secure cloud environment and applying advanced analytics to gain insights.
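The coordination described above, sensors feeding collected data into analytics, can be sketched in miniature. The following Python snippet is purely hypothetical and assumes nothing about any real deployment: it flags simulated equipment-sensor readings that deviate sharply from their recent baseline, the kind of insight analytics can extract from sensor data.

```python
import statistics

# Hypothetical sketch: readings streamed from factory equipment are flagged
# as anomalous when they deviate sharply from the recent baseline.
# The data and threshold below are illustrative, not from a real system.

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings more than `threshold` std devs from the trailing window mean."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Simulated vibration-sensor trace with one spike at index 8.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(flag_anomalies(trace))  # [8]
```

In a real smart factory, such a check would run continuously in a secure cloud environment against far richer data; the point here is only the sensor-to-analytics pipeline.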

In this way, a manufacturer can create an integrated, automated, and optimized production flow across the supply chain, as well as synthesize communications between itself and its suppliers and customers. This end-to-end integration will reduce waiting time and work-in-progress inventory and, ultimately, may even make it possible for manufacturers to offer mass customization at the same price as mass production.

As adoption proceeds, the labor cost advantages of traditional low-cost locations will shrink, motivating manufacturers to bring previously offshored jobs back home. Manufacturers will also benefit from higher demand resulting from the growth of existing markets and the introduction of new products and services.

The profile of the workforce will also change. The critical Industry 4.0 jobs—such as for data managers and scientists, software developers, and analytics experts—require skills that differ fundamentally from those that most industrial workers possess today. Manufacturers will need to take steps to close the skills gap, such as retraining the workforce and tapping the pool of digital talent. Moreover, manufacturers will need to create new jobs to meet the higher demand.

The race is on to adopt Industry 4.0. Companies in the US and Germany have implemented Industry 4.0 at approximately the same pace. The real value is achieved when manufacturers maximize the impact of these advances by combining them in a comprehensive program. Manufacturers need to gain a deeper understanding of how they can apply Industry 4.0 and accelerate the pace of adoption. The winners will approach the race to Industry 4.0 as a series of sprints but manage their program as a marathon.

As costs fall for infrastructural technologies such as computers and the internet, these technologies will—like railroads, electricity, and telephones—become widely available commodities. Once a technology is ubiquitous and available to all, neither scarce nor proprietary, it no longer confers a lasting competitive advantage.

Factors such as commoditization can erode a product’s advantage over time, but companies can create lasting advantage even from widely available technology. The powerful effects of technology are visible in economic metrics: technology has a measurable impact on a company’s bottom line. The use of proprietary metrics such as technology intensity to make the most of technology lies at the heart of creating what we call technology advantage.

Given the rapid emergence of disruptive products and business models and the transformative power of digital technologies on business and society, executives must become masters of the global “technology economy,” capable of detecting the economic impact of rapid technological change and able to respond with speed and foresight. In these articles, we explore the new metrics and consider the new ways that companies need to think in order to navigate the technology economy and approach the many investment decisions in which technology plays a role.

Technology infuses even the measurement of the market economy. The composition of indices such as the Dow Jones Industrial Average (DJIA) and the S&P 500 has changed. Industrial companies are being replaced by tech powerhouses like Apple, Google, and Amazon, whose stocks are valued much higher than those of many long-time industrial members. Apple, with its high market capitalization, accounts for such a large share of the DJIA, for example, that a hiccup in its quarterly earnings moves the entire index. Just 20 or 30 years ago, the performance of Caterpillar or GM (the latter no longer part of the DJIA) could have similarly shaken up the market.

Furthermore, technology permeates companies. Worldwide corporate IT spending—an important barometer of the technology economy that focuses on corporate spending for hardware, software, data centers, networks, and staff, whether “internal” IT or outsourced services—is nearly $6 trillion per year. This amount is what it would cost to give a $500 smartphone and $350 tablet to each of the 7.1 billion people on Earth. If the global technology economy were a country and that spending its GDP, it would rank between the economies of China and Japan and would be more than twice the size of the UK economy. Corporate technology spending grew by a factor of almost 20 from 1980 through 2015, while global GDP barely tripled.
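The back-of-the-envelope comparison above is easy to verify; the figures in this quick check are taken directly from the text.

```python
# Check the smartphone-and-tablet illustration of the ~$6 trillion figure.
people = 7.1e9                 # world population used in the text
device_cost = 500 + 350        # $500 smartphone + $350 tablet, per person
total = people * device_cost
print(f"${total / 1e12:.3f} trillion")  # ≈ $6.035 trillion, matching ~$6T
```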

Of course, the $6 trillion figure for corporate IT spending does not include all the money companies spend on technology. It does not account for spending on the sensors, processors, and other technologies embedded in everyday products, including cars, aircraft engines, appliances, and the smart grid; nor does it include spending on robotics, process automation, and mobile technologies. If we include such investments, our technology-spending estimate increases dramatically.

IT spending data is a proxy for the technology economy. This measure of technology spending highlights the complexities of looking at technology through an economic lens, and managing it well is a critical element of a company’s overall digital transformation. Using technology intensity, we can shine a spotlight on the economic impact of this massive amount of technology spending.

In the past, business leaders tended to examine technology spending against revenues or against operating expenses in isolation. But neither comparison alone gives the whole picture. Revenues don’t automatically rise when companies spend more on technology, and high technology spending relative to operating expenses isn’t necessarily a bad thing. However, if leaders compare technology spending with revenues and operating expenses simultaneously, as technology intensity does, several interesting relationships emerge.
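The simultaneous comparison can be sketched as a pair of ratios. The article does not give a precise formula for technology intensity, so treating it as technology spend over revenue alongside technology spend over operating expenses is an assumption, and the company figures below are invented for illustration.

```python
# Hypothetical sketch of the "technology intensity" idea: look at technology
# spending relative to revenue AND relative to operating expenses at once.
# The exact formula is an assumption; the figures are invented.

def technology_intensity(tech_spend, revenue, opex):
    """Return tech spend as a share of revenue and of operating expenses."""
    return {
        "tech/revenue": tech_spend / revenue,
        "tech/opex": tech_spend / opex,
    }

# Invented figures (in $ millions) for two illustrative insurers.
top_performer = technology_intensity(tech_spend=120, revenue=2000, opex=1500)
average_performer = technology_intensity(tech_spend=60, revenue=1800, opex=1550)
print(top_performer)       # {'tech/revenue': 0.06, 'tech/opex': 0.08}
print(average_performer)
```

Viewed this way, a higher-spending company can still look efficient if its revenue and cost base justify the spend, which is exactly the relationship the single-metric views miss.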

Across a range of industries, companies with high technology intensity have high gross margins. For instance, in the insurance sector, top-performing companies enjoy gross margins that are more than three times the margins of average performers and technology intensities that are more than 50% higher. In banking and financial services, companies with the highest gross margins have technology intensities and margins that are roughly double those of average performers. This industry has seen extremely high levels of automation over the past five years—including technology systems that streamline processes, and advances in artificial intelligence that allow robots to answer clients’ questions and, eventually, to execute trades. Michael Rogers, the president of State Street, estimated in Bloomberg Markets that by 2020, automation will have replaced one in five of the company’s workers. Within a decade, 1.8 million employees in US and European banks could be out of jobs.

We see not just a connection between technology intensity and gross margins but a strong correlation: the two tend to rise and decline together. This effect was visible before and after the recent world economic crash. In the run-up to the Great Recession that started in 2007, companies were investing more and more heavily in technology relative to revenues and operating expenses, and gross margins were rising. That trend accelerated through 2008 and into 2009, when companies belatedly realized the magnitude of what had happened and began to cut technology investment dramatically. After that, technology intensity dropped precipitously along with gross margins.

Along with the technology intensity metric, companies can add other measures to their management dashboard, such as income per dollar of technology spending. We define income as revenues minus operating expenses.

For example, the energy industry produces the highest income per dollar of technology spending ($24.24). At the other end of the spectrum, the software publishing and internet services industry produces the lowest ($0.98). In both total technology spending and the technology spending required just to “keep the lights on,” we saw a similar rise until 2008, followed by a plunge in income per dollar of technology spending during the market collapse. Afterward came what might be called a failed recovery: amid sluggish growth, income per dollar of technology spending basically flatlined in 2014 and 2015, only just regaining precrash levels.
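The metric itself follows directly from the definition given above (income as revenues minus operating expenses). The firm-level figures in this sketch are invented to mimic the two industry profiles, not taken from the article's data.

```python
# Income per dollar of technology spending, per the definition in the text:
# income = revenues - operating expenses. Figures below are invented.

def income_per_tech_dollar(revenue, opex, tech_spend):
    return (revenue - opex) / tech_spend

# An energy-like profile (high income, modest IT spend) vs. a software-like
# profile (IT spend is close to the business itself). All in $ millions.
print(income_per_tech_dollar(revenue=5000, opex=3500, tech_spend=62))   # ≈ 24.2
print(income_per_tech_dollar(revenue=900, opex=700, tech_spend=204))    # ≈ 0.98
```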

Another measure that companies can use to connect the dots between the business and the IT function is the IT cost of goods. For example, in the US, the IT cost per day of a hotel bed is $2.50, and for a hospital bed, it is $65. The IT cost of a car is $323.

More than such individual measures, however, companies require different measures at different points in time. It is not enough simply to measure whether a project is on time and on budget. When companies are in the early stages of building new IT systems, leaders need progress measures to tell them whether a project is on track. For example, a bank may invest in automation and artificial intelligence in order to process loans better, cheaper, and faster. It needs metrics to understand how these projects are progressing.

Later on, a company may need deployment measures that determine whether the original business case is still valid. For example, while the bank is building its new system, it might shift a lot of work to the Philippines, cutting the cost of loan processing in half. With the new system, however, the context may change and the original plan may no longer make sense.

Once a company has implemented a project, it needs realization measures that can discern whether the project has yielded the intended results. These microeconomic metrics aren’t the only way to look at the impact of technology spending, of course. Technology matters in a host of macroeconomic measures. In short, technology matters both to companies and to the larger economy.

Top performers are different from average companies. Many top performers achieve higher margins by spending their technology dollars more efficiently and with greater focus than average companies.

Consider the case of a global financial services company that for years had prided itself on its low levels of technology spending, even though its gross margins were the lowest in its industry; its peers with higher margins had higher technology intensities. The company turned things around by rebalancing its technology spending and increasing automation. It invested hundreds of millions of dollars in technology, funded by the lower operating expenses and greater revenues it gained through automation. Now it is the only company among its peers whose gross margins are growing faster than its technology spending relative to revenues.

To support this kind of digital transformation, executives must define metrics such as technology intensity as key performance indicators for the organization and benchmark their performance relative to that of competitors and companies in adjacent industries. They must then incorporate new metrics into monthly management reports and dashboards and review the role and purpose of technology investments in the light of these measures.

For their part, CIOs can embed key performance indicators into the business on the basis of metrics such as those outlined in this article, conducting regular reviews and supporting efforts to optimize performance. Executives should develop even more sophisticated metrics that truly measure the disruptions that technology fuels. Adopting best practices in these areas will enable a new generation of executive-level technology economists not only to measure what really matters to company performance but also to thrive in the technology economy.

 

Technological risks are becoming more prominent—and more dangerous. Six principles can guide banks as they manage them.

Technology is synonymous with the modern bank. From the algorithms used in proprietary trading strategies to the mobile applications customers use to deposit checks and pay bills, it supports and enhances every move banks and their customers make.

While banks have greatly benefited from the software and systems that power their work, they have also become more susceptible to the concomitant risks. Many banks now find that these technologies are involved in more than half of their critical operational risks, which typically include the disruption of critical processes outsourced to vendors, breaches of sensitive customer or employee data, and coordinated denial-of-service attacks. Cybersecurity alone can account for 10 percent of total information-technology spending, and that spending is now growing at three times the rate of the budget for the technology being secured.

Exposure to these IT risks has grown in lockstep with the rapid increase in digital services provided directly to customers. For example, mobile transactions have expanded exponentially, presenting malicious external actors with billions of new entry points into bank systems. The complexity and growing vulnerability of the underlying IT systems are of equal concern. Big banks must manage hundreds or even thousands of applications. Many are outdated, having failed to keep pace with the radically changed processes they are supposed to support. Even banks that have successfully upgraded their infrastructure face upgrade-related risks—from project and data management to security problems that persist after the migration is complete.

When technology risks materialize, the financial, regulatory, and reputational implications can be severe. If banks lose customer data in a high-profile incident, they face legal liabilities and fleeing customers. Investors sell shares in the wake of cyberattacks, around 10 percent of which result in a more than 5 percent dip in the stock prices of the companies affected. Regulators penalize firms for noncompliance—from data breach–related fines to mandated remediation activities. Basel II could not be clearer on the topic: one of its seven level-one operational-risk categories is “business disruption and system failures.”

To manage these risks, many banks simply deploy their considerable IT expertise on patching holes, maintaining systems, and meeting regulations. Some have set up specialized teams to cope with particularly acute problems, such as cybersecurity. But these half-measures are unlikely to afford sufficient protection. An IT-oriented approach, furthermore, may be unable to account for wider business implications and operational interdependencies. Institutions focused on compliance could ignore vulnerabilities outside the purview of the regulator and overlook applications critical to the business, with implications for business risk down the road.

Muddling through is no longer an option. The adequate mitigation of technology risk requires a coordinated effort that goes beyond IT-centered remedies. Leading banks are creating specialized teams within the enterprise-risk-management group to manage technology risk, in all its manifestations, across the organization. In this article, we will outline the six principles that these teams use to stay well connected and integrated with the rest of the bank, to develop the skills needed for these complex jobs, and to drive transformation and remediation activities. We conclude with some suggestions for getting these teams off to a good start.

These principles are not a step-by-step manual but rather guidance for creating best-practice technology-risk management. By adhering to them, bank leaders will be able to remain in control of the rising levels of risk associated with the digital age.

Companies can develop a complete picture of their information needs, uses, and risks only through a dialogue between IT and the business to identify the most critical business processes and information assets. The strongest controls can then be applied to the most valuable IT systems and data, the bank’s “crown jewels.” Proprietary trading algorithms stored on laptops, credit transaction data shared with third parties, and employee-health information—all may qualify. The IT-risk group should drive the assessment program, but the businesses need to be engaged with it and assume responsibility for the resulting prioritization, as they are the true risk owners. Only in this way will banks make the most effective investments in security. For example, an IT-led prioritization typically focuses too much on securing “big iron” applications while underemphasizing risks from unstructured data flowing through email and stored in collaboration platforms. For the crown jewels, remediation investments might include multifactor authentication, data-loss-prevention tools, and enhanced monitoring and analytics.

Thinking “business first” is especially important in information security. Data leaks, fraudulent transactions, blackmail, and “hacktivism” all pose dangers. Banks should consider their defenses in light of a threat’s potential adverse impact on the business, rather than defaulting to blanket security standards that ratchet up after each negative headline. Nevertheless, security and the customer experience need not be approached as a trade-off. Leading banks are finding ways to give their clients improved digital solutions that are simultaneously more secure and easier to use.

Most banks have established groups to manage some or all of the various realms in which technology risk can pop up. These typically include cybersecurity and disaster recovery—as well as, increasingly, vendor and third-party management; project and change management; architecture, development, and testing; data quality and governance; and IT compliance (exhibit). While such groups are interdependent in many ways, particularly when a new product or service is under development, they often are not formally connected.

Best-practice banks coordinate the work of the subdisciplines to capture significant risk-mitigation synergies. For example, housing crown-jewel data on servers other than those used for the main operational IT systems has implications for security, disaster recovery, and data management. Analyzing these three risks separately could lead to inadvertent gaps in risk management or to redundant overprotection. Coordinating the subdisciplines also avoids duplication of effort, such as a product manager completing a half-dozen overlapping risk reviews before product launch.

Banks have not always consistently applied the principles underlying the three lines of defense—the risk-management approach adopted by almost all financial institutions of any size—to technology risk. The three-lines-of-defense model is more complicated to apply to technology risk than to market or credit risk, for two main reasons. To begin with, the first line includes both the business and the IT function that enables it. Second, there are often “line one and a half” functions. In cybersecurity, for example, the chief information security officer (CISO) is responsible for setting policies and risk tolerances, as well as for managing operations to meet those expectations—both second-line activities. Yet the role usually resides in the first line, as part of the organization of the chief information officer (CIO). This blurring of the lines can create potentially problematic situations in which the group is “checking its own homework.” Similar boundary confusion can arise in certain subdisciplines, like disaster recovery, where both the first and second lines need real technology expertise.

Banks should carefully clarify the roles and responsibilities in managing technology risk for each line of defense. Increasingly, organizations are asking the IT-risk group to take on the policy, oversight, and assessment roles, while security operations remain within the CIO’s scope.

Careful distinctions like these are needed, for example, when institutions launch a new mobile-banking application. While the business sets out its commercial requirements, the IT group will work collaboratively to define the architectural and technical requirements. The second-line IT-risk function should be engaged from the start of such a project to identify risk exposures (such as the possibility of increased fraud or customer-identity theft) and provide an independent view on mitigation actions and feedback from testing results. Risks identified can be mitigated by the CISO and his or her team, through compensatory controls or design changes before the app is launched. This avoids the delays, cost overruns, and organizational tensions that arise from discovering exposures during a security review conducted too close to launch.

In many banks, technology-risk management is disconnected from enterprise risk management (ERM) and even from the operational-risk team. That inhibits the bank’s ability to prioritize the risks that are of critical importance and deploy the resources to remediate them. A contributing factor is often the absence of a common risk-management technology platform shared by both the IT-risk team and the ERM or operational-risk group. Without such a platform, banks struggle to aggregate risk information consistently, and managers are not equipped with the data they need to make decisions.

For example, as banks manage operational risks, they frequently balance the benefits of automation (to reduce opportunities for human error) against operational process controls (to improve behavior). Each option has advantages but also challenges—automation can introduce technology risk while operational controls can make systems unwieldy. Without a unified view of the risks involved, banks must often rely on advocates of particular initiatives when making risk-management decisions, rather than a holistic view of the available approaches and their merits. The bias can thus be to optimize within a risk category rather than to promote the good of the enterprise.

When the IT-risk group is integrated with ERM, on the other hand, real benefits can result—particularly if the technology-risk team comes under the same umbrella as other operational-risk-management teams. Decisions can be made at the level appropriate to the needs of the business and the potential severity of the risk. The business can make decisions about low-level exposures directly, while the tech- or op-risk group addresses the more significant risks and corporate ERM and senior management address the most significant ones.

Typical decisions with significant but underappreciated risk implications include those affecting a bank’s long-term architectural road map and risk-appetite decisions about testing requirements for major IT changes. When it comes to mobile apps, for example, some banks will choose to be early adopters, given the anticipated customer value, while others wait for best practices to develop. Both courses might be sensible, but only senior management should decide between them.

Two domains where ERM integration can yield great benefits are resilience and disaster recovery, and vendor and third-party management. To prevent the interruption of critical services, IT-risk managers should articulate a risk appetite that reflects the business impact of disruptions. Most banks will find that for a small percentage of their business processes, near-perfect IT resilience is essential. These are customer-initiated, time-critical processes (such as ATM withdrawals, brokerage transactions, and point-of-service purchases) with no real-time alternative. Risk investments in resilience and disaster recovery must focus on these specific processes and the relatively small number of systems that support them. For other processes, where the appetite for risk is higher, IT-risk managers should work with the IT function to define the level of support needed; banks should be able to make savings by reducing that support.

IT-risk managers should also partner with the business and IT to establish standards for security, continuity, and disaster recovery for a bank’s external service providers. Given the sheer number of vendors that banks use, standards and audits must be applied in a risk-prioritized way. Banks should also consider involving their closest vendors and partners more significantly with internal ERM processes (to improve risk identification, assessment, and control) and also with incident response. Banks that use “war games” to test their crisis-response plans often find that the roles and responsibilities of third parties are outdated or poorly defined in service-level agreements, potentially leading to problems during a live breach.

Banks encourage IT managers to deliver projects on time and on budget and to maintain near-perfect levels of system availability. These objectives are obviously important, but overemphasizing them can mean that project managers do not do enough to minimize business-risk exposure. The prevailing culture encourages short-term delivery while underemphasizing long-tail but significant risks. Situations arise, for example, in which back-end systems are technically operational but the customer-facing business process is unavailable because of a lost database connection, or a lost connection with a client and a delay while the backup system kicks in. Infrequent but high-impact outages are almost never mentioned in performance-management systems, which instead feature operational data.

To monitor risk, best-practice banks add forward-looking metrics, such as the time it takes to detect and mitigate cyberincidents, the volume of unknown devices connected to the internal network, vendors out of compliance with security requirements, and employees failing phishing tests. Leading banks also track the number of incidents and the actual recovery times for highly critical service chains, including systems supporting mobile banking, ATM services, and electronic trading. Such a performance-management system should work hand in hand with a value-assurance framework, which establishes, for each major IT project, the criteria for aligning stakeholders and the software-development life cycle. Research has shown that a failure to manage these elements is the most common cause of budget and schedule overruns. Aligning business and IT managers with appropriate risk-management mind-sets and behavior is critical.
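Two of the forward-looking metrics named above, time to mitigate cyberincidents and the employee phishing-failure rate, reduce to simple calculations once incident records exist. This sketch is hypothetical; the record structure, field names, and figures are invented for illustration.

```python
from datetime import datetime

# Invented incident records: when each cyberincident was detected and mitigated.
incidents = [
    {"detected": datetime(2024, 3, 1, 9, 0), "mitigated": datetime(2024, 3, 1, 13, 0)},
    {"detected": datetime(2024, 3, 5, 22, 0), "mitigated": datetime(2024, 3, 6, 4, 0)},
]

def mean_hours_to_mitigate(records):
    """Average elapsed time, in hours, from detection to mitigation."""
    hours = [(r["mitigated"] - r["detected"]).total_seconds() / 3600 for r in records]
    return sum(hours) / len(hours)

def phishing_failure_rate(failed, tested):
    """Share of employees who failed a simulated phishing test."""
    return failed / tested

print(mean_hours_to_mitigate(incidents))             # 5.0 hours
print(phishing_failure_rate(failed=37, tested=500))  # 0.074
```

Tracking these numbers over time, rather than as one-off snapshots, is what makes them forward-looking: a rising mitigation time or failure rate signals trouble before an outage or breach does.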

Technology-risk management requires critical thinking and hands-on experience in technology, business, and risk. Individuals with all of these skills are hard to find and command high salaries—but they are indispensable. Only someone skilled in all of these areas can both effectively challenge IT teams and act as a thought partner to guide strategic decisions.

The good news for banks is that they can develop this kind of talent through part-time staffing models, training, and rotational programs. Some banks have succeeded by recruiting experienced IT specialists willing to learn risk-management skills and giving them appropriate training and a ladder for advancement. Banks can thus build a core group of IT-risk professionals with a strong knowledge of functions, technology subdisciplines, and operational-risk practices. These are essential skills for the core work of the group—exercising proper oversight from the second line of defense. They will also help the technology-risk team with other parts of the job. IT-risk managers should define architectural standards, sit on architectural-review committees, establish a consistent software-development life cycle across the enterprise, and monitor test results. They should ensure not only that individual IT changes are delivered efficiently but also that the IT environment is sustainable in the long run.

The IT-risk group must be aware of what is happening in all parts of the organization. As a bulwark of the second line of defense, it must have strong insights into the first line (both the businesses and the IT units that support them), have a strong connection to the central IT team, forge connections among the various subdisciplinary teams, and integrate its work with the core risk-management team driving ERM.

To accomplish this delicate two-step of independence and partnership, banks can consider two actions. First, they can establish a single unified mission for the IT-risk group, which should enable the core business and be a partner to other functions to improve the overall effectiveness of technology-risk management. The function’s activities in managing technology risks should focus on this vision, shared by the board and top management. The function’s mission is then to understand the specific risks facing the bank given its core operational processes and organizational structure, to identify the major challenges in remediating or managing these risks, and to allocate responsibility for the specific actions needed.

Second, banks should create effective interaction and communication models that reduce ambiguity and promote collaboration. Clear committee structures, meeting cadences, and reporting lines will help both to avoid duplication and to ensure that key functions are not left undone. In identifying and prioritizing risk, organizations can usually build on existing risk evaluations and analyses and add mechanisms to ensure collaboration.

The expectations of customers, shareholders, and regulators for the resilience of banks will continue to escalate. Recent events have exposed the ghost in the machine—how the failure of technology can cause lasting damage to an institution’s brand and reputation. Successful banks will establish an IT-risk group as a second line of defense that engages with the business and IT function while providing effective oversight and challenge. The group will also be staffed with experts in technology and risk management. With the right practices and capabilities, banks can effectively manage technology risk for the digital age.

When combined, digital innovation and operations-management discipline lift organizations' performance further, faster, and at greater scale than was previously possible.

In every industry, customers’ digital expectations are rising, both directly for digital products and services and indirectly for the speed, accuracy, productivity, and convenience that digital makes possible. But the promise of digital raises new questions for the role of operations management—questions that are particularly important given the significant time, resources, and leadership attention that organizations have already devoted to improving how they manage their operations.

At the extremes, it can sound as if digitization is such a break from prior experience that little of this history will help. Some executives have asked us point blank: “If so much of what we do today is going to be automated—if straight-through processing takes over our operations, for example—what will be left to manage?” The answer, we believe, is “quite a lot.”

Digital capabilities are indeed quite new. But even as organizations balance lower investment in traditional operations against greater investment in digital, the need for operations management will hardly disappear. In fact, we believe the need will be more profound than ever, but for a type of operations management that offers not only stability—which 20th-century management culture provided in spades—but also the agility and responsiveness that digital demands.

The reasons we believe this are simple. First, at least for the next few years, to fully exploit digital capabilities most organizations will continue to depend on people. Early data suggest that human skills are actually becoming more critical in the digital world, not less. As tasks are automated, they tend to become commoditized; a “cutting edge” technology such as smartphone submission of insurance claims quickly becomes almost ubiquitous. In many contexts, therefore, competitive advantage is likely to depend even more on human capacity: on providing thoughtful advice to an investor saving for retirement or calm guidance to an insurance customer after an accident.

That leads us to our second reason for focusing on this type of operations management: building people’s capabilities. Once limited to repetitive tasks, machines are increasingly capable of complex activities, such as allocating work or even developing algorithms for mathematical modeling. As technologies such as machine learning provide ever more personalization, the role of the human will change, requiring new skills. A claims adjuster may start by using software to supplement her judgments, then help add new features to the software, and eventually may find ways to make that software more predictive and easier to use.

Acquiring new talents such as these is hard enough at the individual level. Multiplied across an organization it becomes exponentially more difficult, requiring constant cycles of experimentation, testing, and learning anew—a commitment that only the most resilient operations-management systems can support.

And if digital needs operations management, we believe it’s equally true that operations management needs digital. Digital advances are already making the management of operations more effective. Continually updated dashboards let leaders adjust people’s workloads instantly, while automated data analysis frees managers to spend more time with their teams.

The biggest breakthroughs, however, come from the biggest commitment: to embrace digital innovation and operations-management discipline at the same time. That’s how a few early leaders are becoming better performers faster than they ever thought possible. At a large North American property-and-casualty insurer, for example, a revamped digital channel has reduced call-center demand by 30 percent in less than a year, while improved management of the call-center teams has reduced workloads an additional 25 percent.

Digitization can be dangerous if it eliminates opportunities for productive human (or “analog”) intervention. The goal instead should be to find out where digital and analog can each contribute most.

That was the challenge for a B2B data-services provider, whose customized reports were an essential part of its white-glove business model. Rather than simply abandon digitization, however, the company enlisted both customers and frontline employees to determine which reports could be turned into automated products that customers could generate at will.

Working quickly via agile “sprints,” developers tested products with the front line, which was charged with teaching customers how to use the automated versions and gathering feedback on how they worked. The ongoing dialogue among customers, frontline employees, and the developer team now means the company can quickly develop and test almost any automated report, and successfully roll it out in record time.

Developing new digital products is only the beginning, as a global bank found when it launched an online portal. Most customers kept to their branch-banking habits—even for simple transactions and purchases that the portal could handle much more quickly and cheaply.

Building the portal wasn’t enough, nor was training branch associates to show customers how to use it. The whole bank needed to reorient its activities to showcase and sustain digital. That meant modifying roles for everyone from tellers to investment advisers, with new communications to anticipate people’s concerns during the transition and explain how customer service was evolving. New feedback mechanisms now ensure that developers hear when customers tell branch staff that the app doesn’t read their checks properly.

Within the first few months, use of the new portal increased 70 percent, while reductions in costly manual processing mean that bringing new customers on board is now 60 percent faster. And throughout the changes, employee engagement has actually improved.

The next shift redesigns internal roles so that they support the way customers work with the organization. That was the lesson a major European asset manager learned as it set out on a digital redesign of its complex, manual processes for accepting payments and for payouts on maturity. The entire organization consisted of small silos based on individual steps in each process, such as document review or payment processing—with no real correlation to what customers wanted to accomplish. The resulting mismatch wasted time and effort for customers, associates, and managers alike.

The company saw that to digitize successfully, it would have to rethink its structure so that customers could easily move through each phase of fulfilling a basic need: for instance, “I’ve retired and want my annuity to start paying out.” The critical change was to assign a single person to redesign each “customer journey,” with responsibility not only for overseeing its digital elements but also for working hand in glove with operations managers to ensure the entire journey worked seamlessly. The resulting reconfiguration of the organization and operations-management systems reduced handoffs by more than 90 percent and cycle times by more than half, effectively doubling total capacity.
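
The link between halved cycle times and doubled capacity follows from simple throughput arithmetic: with a fixed staff and working day, cases processed per day are inversely proportional to the time spent per case. The figures below are illustrative, not the asset manager's actual numbers:

```python
# Why halving cycle time roughly doubles capacity (illustrative numbers):
# throughput = available work time / time per case.
work_minutes_per_day = 8 * 60
cases_before = work_minutes_per_day / 30  # 30-minute cycle time per case
cases_after = work_minutes_per_day / 15   # 15-minute cycle time per case
print(cases_after / cases_before)         # capacity multiplier
```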

The final shift is the furthest reaching: digital’s speed requires leaders and managers to develop much stronger day-to-day skills in working with their teams. Too often, even substantial behavior changes don’t last. That’s when digital actually becomes part of the solution.

About two years after a top-to-bottom transformation, cracks began to show at a large North American property-and-casualty insurer. Competitors began to catch up as associate performance slipped. Managers and leaders reported high levels of stress and turnover.

A detailed assessment found that the new practices leaders had adopted—the cycle of daily huddles, problem-solving sessions, and check-ins to confirm processes were working—were losing their punch. Leaders were paying too little attention to the quality of these interactions, which were becoming ritualized. Their people responded by investing less as well.

Digital provided a way for leaders to recommit. An online portal now provides a central view of the leadership activities of managers at all levels. Master calendars let leaders prioritize their on-the-ground work with their teams over other interruptions. Redefined targets for each management tier are now measured on a daily basis. The resulting transparency has already increased engagement among managers, while raising retention rates for frontline associates.
