Wednesday, September 23, 2020

Evaluating New Technology? You’re More Biased Than You May Realize


In the same way that leaders may harbor an implicit bias about characteristics of groups of people, they may also harbor implicit biases about new technology — including new technology they might be considering investing in to improve productivity or competitiveness.

You may think that you make decisions about technology tools with an open mind and a clear process for evaluating options. But our review of hundreds of published studies on new technology adoption reveals that personal beliefs about new technology — that it’s wondrous, complex, and alien — prompt specific, unconscious biases about how and why it’s better than older options.

Implicit bias toward the dazzle of new tools can cause leaders to take unnecessary risks and ignore the advice of human experts in decision-making. Further, implicit bias toward new technology may lead to sizable investments in products and services that are unproven or even unsafe.

Beliefs and Biases About New Technology (and the Risks They Present)

We define new technology bias as automatically activated (that is, unconscious) perceptions of emerging technology. These implicit biases draw from general beliefs about technology, and they go on to influence our perceptions of everything from smartphone apps to flight instruments used to pilot an aircraft. Considering the high technological ferment companies are experiencing today, it is crucial for leaders to be aware not only of the existence of new technology bias but also of its consequences when it comes to adopting or discarding new tools. Here, we detail three general beliefs that people have about new technology, the bias that each leads to, and the risks that each bias presents.

Belief: New technology is mysterious and a “wonder.”

Bias it leads to: New technology is better than current options.

Risk: Leaders may favor a new technology even if it is unproven.

Any of us can easily conjure up thoughts of technological advances that seem miraculous — such as using nanotechnology to cure cancer — and advances that have changed our everyday lives, such as microwaves that make cooking faster, map apps on our phones that pinpoint our location via satellite in real time, and laser technology used to correct sight defects. Moreover, we tend to remember successful technological innovations and forget unsuccessful ones (can you name the earliest voice-recognition software?).

This sense of awe regarding new technology leads people to unconsciously perceive it as superior in performance compared with old technologies. This is particularly true of early adopters, who on average are more optimistically biased toward new technology. They tend to blame any failures of the technology on user error rather than on the product or service itself.

The risk for leaders is that they may unconsciously tilt toward new technology over existing systems merely because it is new, even when its newness may mask other problems or when existing technology that is tried and tested may work better.

There is danger, too, in buying into new technology that relies on an ecosystem — a collection of complementary services, standards, and regulations required for the technology to work — that is not yet mature. Think about how difficult it was to sell 100% electric vehicles before charging stations became common. One reason leaders may disregard the importance of a nascent ecosystem is that they are blinded by their implicit bias in favor of the new technology to begin with.

That’s part of what happened in the e-reader industry with Sony’s early entry, the Reader. While the product itself was solid, it needed — and didn’t have — a collection of e-books that could be read on it. This shortcoming was overlooked by executives who were wowed by the newness of the technology. When Amazon later introduced the Kindle, it solved this problem by offering a huge collection of e-books that could be downloaded instantly and seamlessly. Amazon had assembled an ecosystem that gave its product a winning advantage.

Belief: New technology is complex and difficult to understand.

Bias it leads to: Leaders should follow the experts when they recommend new technology.

Risk: Leaders forgo due diligence and disregard the concerns of nonexperts.

Developments based on the latest scientific discoveries in chemistry, biology, physics, and computing — such as nanotechnology, financial technology, blockchain, and artificial intelligence — have certainly reinforced the belief that new technology is not comprehensible to the layperson. Strongly connected to this belief is a view of technology as the domain of quantitative scientists and engineers.

This sense of a new technology’s complexity leads people to view it as more legitimate and credible if experts, such as university scientists, recommend it. This is even true when other, less expert users disagree. (Our research shows this tendency is all the more prevalent when the expert is male — making this an area where interpersonal biases overlap with technology biases.)

The bias that new technology is credible when endorsed by experts can lead people to make decisions about technology and investments without confirming the claims around it. This overtrusting bias was on dramatic display in the early, unbridled support for Theranos, the now-defunct health technology company. Theranos claimed its technology could conduct many blood tests, very quickly, from a small amount of blood. The company’s affiliation with Stanford University (founder Elizabeth Holmes attended the university, and a prominent engineering professor there backed her idea), early partnership with the Cleveland Clinic, and star-studded board helped Theranos raise $700 million from investors — despite the company’s unwillingness to share detailed information about the blood-test technology and its failure to provide financial statements audited by an independent public accounting firm. As Forbes wrote, investors “assumed that with all the luminaries associated with Theranos, someone must have done due diligence on its product.”

A sillier but still illustrative example is Juicero, a $400 vegetable juicer that epitomized Silicon Valley’s worst instincts. Venture capitalists threw $120 million at cold-press juice entrepreneur Doug Evans, whose technology sounded potentially impressive — a squeezer said to exert enough force to lift two Teslas, with a Wi-Fi-connected scanner to confirm the sell-by date of the veggie packs it pressed. But all Juicero did was squeeze juice from prepackaged $5 pouches of organic vegetable matter, and soon videos emerged of people squeezing the packs by hand. In the end, Juicero was tech for tech’s sake, and the low-tech alternatives that were obvious to everyday consumers — squeezing the packs manually or putting veggies in an old-fashioned mechanical juicer — won out.

Belief: Some types of new technology are inherently alien — that is, they are not humanlike.

Bias it leads to: New technology that has humanlike features is more trustworthy.

Risk: Leaders will overtrust technology with humanlike features, despite performance shortfalls.

Most technology does not exhibit the social cues that help humans trust one another, such as facial expressions or body language. Politeness and friendly chitchat during interactions, for example, lead people to see others as sociable and warm — and ultimately trustworthy.

The sense that new technology is alien means that people tend to unconsciously overtrust new technology that does act more like a human. For instance, people are more likely to trust technology that is imbued with a human voice or social patterns, like turn-taking. This is why most interactive navigation systems are programmed with a friendly and warm human voice, rather than more robotic speech.

The risk from this bias is that leaders may be taken in by the human features of a technology even when its actual performance is deficient. For example, research has shown that etiquette may overrule performance reliability. In one study involving new information technology for diagnosing aircraft engine problems, users trusted the advice of a polite system reported to be only 60% reliable just as much as they trusted a rude system reported to be 80% reliable.

How to Avoid New Technology Biases

Beliefs and biases can be tamped down. There are three actions leaders can take to avoid being misled by new technology bias in their decision-making:

1. Focus on a new technology’s functions, actual performance, and practical relevance. While downplaying a technology’s newness, leaders should focus on the problem they need to solve and how possible solutions compare. What does their company actually need? What are their customers really ready for?

Leaders also want to be cautious not to get caught in the bandwagon effect, another type of bias. For instance, there is an incredible amount of hype about artificial intelligence today. Businesses across the world are looking to this technology in hopes of gaining an edge over their competitors, reducing operating costs, and improving the customer experience. However, not all companies are ready to leverage AI. They need to carefully consider what specific business problems they need AI for, whether they have well-established data collection systems in place, and whether they have skilled specialists to implement and manage the algorithms.

2. Include nonexperts and everyday users on decision-making teams about new technology. We know from past research that technology geeks and scientists are more risk-seeking than nonexperts when it comes to new technology and overconfident about their ability to assess it. Many of the problems that arise with new technology become apparent only when nonexperts attempt to use it. Having them involved in decision-making makes it more likely that issues will surface that experts wouldn’t notice or would dismiss.

Consider the “antennagate” controversy that occurred with the Apple iPhone 4. Some consumers, when holding the phone in their left hands, experienced a loss of signal strength and dropped calls; their hands were apparently bridging the iPhone’s antenna. Initially, then-CEO Steve Jobs blamed users, telling them, “Just avoid holding it that way.” More testing, particularly by left-handed users, might have unearthed the problem earlier.

3. Separate the subjective “look” and “feel” of the technology from its objective performance. Because new technology that looks and acts human is likely to be overtrusted by potential adopters, decision makers must evaluate its performance objectively, using measures such as the number of errors, time to complete tasks, time to learn the new technology, and privacy breaches. They should also recognize which responses are subjective, such as emotional reactions to the technology and even user satisfaction.
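To make this separation concrete, here is a minimal sketch in Python of how an evaluation team might keep objective metrics apart from subjective impressions when comparing an incumbent tool with a new one. The metric names, weights, and numbers are illustrative assumptions, not a prescribed methodology:

```python
# Illustrative scorecard: objective metrics drive the score;
# subjective impressions are recorded but deliberately excluded from it.
from dataclasses import dataclass

@dataclass
class Evaluation:
    name: str
    error_rate: float        # errors per 100 tasks (lower is better)
    task_minutes: float      # average time to complete a task
    training_hours: float    # time for a new user to reach proficiency
    privacy_incidents: int   # breaches observed during the pilot
    subjective_notes: str    # "feels futuristic," "friendly voice" -- not scored

# Hypothetical weights; a real team would derive these from business priorities.
WEIGHTS = {"error_rate": 0.4, "task_minutes": 0.3,
           "training_hours": 0.2, "privacy_incidents": 0.1}

def objective_score(e: Evaluation) -> float:
    """Weighted sum of the objective metrics only; lower is better."""
    return (WEIGHTS["error_rate"] * e.error_rate
            + WEIGHTS["task_minutes"] * e.task_minutes
            + WEIGHTS["training_hours"] * e.training_hours
            + WEIGHTS["privacy_incidents"] * e.privacy_incidents)

incumbent = Evaluation("existing system", error_rate=2.0, task_minutes=6.0,
                       training_hours=1.0, privacy_incidents=0,
                       subjective_notes="dated interface")
candidate = Evaluation("new assistant", error_rate=5.0, task_minutes=5.0,
                       training_hours=8.0, privacy_incidents=1,
                       subjective_notes="warm human voice, feels cutting-edge")

for e in (incumbent, candidate):
    print(f"{e.name}: score {objective_score(e):.2f} (notes: {e.subjective_notes})")
```

The design point is simply that the subjective notes are captured but never enter the score, so a warm voice or a futuristic feel cannot quietly outweigh a higher error rate.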

Consider personal digital assistants like Amazon’s Alexa or Apple’s Siri. A futuristic idea only a few years ago, voice and chat assistants have found their way inside organizations: They observe data in real time, pull information from sources such as smart devices and cloud services, and use AI to put that information into context, thus helping to lower customer service costs.

But these tools have an often underestimated dark side. Much of the data they collect and use includes personal, potentially identifiable, and possibly sensitive information. Digital assistants can be hacked remotely, resulting in serious breaches of users’ privacy. Moreover, digital assistants have their failings. Humans who see several versions of a client’s name (including misspellings and nicknames) on different pieces of data will still know that this is the same person. In contrast, the algorithms that run digital assistants may not; they can classify spelling variations as different people. When adopting digital assistants in their businesses, therefore, managers need to be aware of the potential perils of overtrusting humanlike technology.
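To see how spelling variants can fragment one client into several records, consider a minimal sketch in Python. The names and the 0.8 similarity threshold are illustrative assumptions, not any vendor’s actual matching logic: an exact-match view counts every variant as a distinct client, while a simple string-similarity pass can at least flag likely duplicates for human review.

```python
# Exact matching treats spelling variants as different people;
# a similarity threshold can flag likely duplicates for human review.
from difflib import SequenceMatcher

records = ["John Smith", "Jon Smith", "John Smyth", "Johnny Smith", "Jane Doe"]

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1] based on matching character blocks."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Naive exact match: five records look like five distinct clients.
print(len(set(records)), "distinct names by exact match")

# Flag pairs above an (assumed) 0.8 similarity threshold for review.
for i, a in enumerate(records):
    for b in records[i + 1:]:
        score = similarity(a, b)
        if score >= 0.8:
            print(f"possible duplicate: {a!r} ~ {b!r} ({score:.2f})")
```

Even this pass has limits: string similarity alone will not connect nicknames such as “Bob” and “Robert,” which is one more reason human review should back up the algorithm.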

Let the Best Technology Win

Making decisions about the use of emerging technologies is now a routine part of most managers’ lives, and the pace at which such decisions need to be made only seems to be accelerating. Both of these trends make it more important than ever that managers understand both how implicit biases may influence their choices around new technology and how to minimize these influences to avoid damage to company performance and reputation.

