Voice-activated technology must advance to support hybrid workplaces


The pandemic altered everything about people’s lives, including the way they interact with voice technology. The Smart Audio Report from NPR reveals that more people are using their smart devices every day: the number of people using voice commands at least once a day increased by 6 percentage points from December 2019 to April 2020.

Before COVID-19, many workers spent eight hours or more a day away from home. They did not have access to their smart devices during those hours and were generally more comfortable using voice commands in private anyway. But the shift to remote work meant more time at home and more opportunities to explore the technology.

This trend toward voice-activated technology shows no signs of stopping. More than 50% of employees would like to continue telecommuting, and about 25% want a mix of in-person and remote work, according to an Office Depot study. As the routines that people formed over the past year become firmly established, smart speakers and voice assistants will become mainstays of hybrid work.

How Voice Technology Can Evolve to Support Hybrid Workplaces

Voice technology has come a long way since Siri was first announced. During the pandemic, grocery stores and other retailers added voice technology and contactless payment options to self-checkout kiosks to provide safer customer experiences. Researchers are also exploring how voice assistants can support the healthcare industry.

The future of voice technology is undoubtedly bright, but it will need to keep evolving to become a staple of the new hybrid workplace. People expect voice technology to fit naturally into existing workflows, so any hurdles or missteps that frustrate users early on could spell trouble for continued adoption of voice-based technologies.

Here’s what will need to change as more remote workers buy and use smart devices:

1. Algorithms must be trained on a variety of voices.

It is clear that some speech recognition technology has been trained and programmed using perfect diction, “standard North American English,” and crisp recordings. Unfortunately, algorithms trained under such ideal conditions are not very useful in the real world.

Smart speakers and other devices must be able to navigate ambient noise, background voices, regional dialects, international accents, imperfect pronunciations, speech impediments, and more before they can be useful in the hybrid workplace.

Fortunately, some companies are tackling these issues head-on. I recently spoke with a woman whose son has a speech disability; they had spent hours in a Google recording studio helping to improve the company’s voice assistant. In addition, Apple has accumulated a database of almost 30,000 audio clips of speakers who stutter. Perfect speech recognition won’t happen overnight, but accounting for different ages, tones of voice, and other idiosyncrasies should help make the algorithms as accurate as possible.

2. New users need a superlative experience.

Much depends on the first experience. When someone turns on their smart speaker or voice assistant and asks it to make a call, they expect it to go smoothly. If the technology fails on that first exchange, users will be less willing to try again in the future. All of this comes down to basic learned behavior.

While smart speakers tend to get all the press, the rate of adoption of voice technology among smartphone users remains substantially higher. To make smart devices more useful for hybrid workers, companies will need to prioritize the “wow” factor and do everything possible to make a great first impression.

For example, can the technology integrate with laptops and desktop computers? Can the devices be controlled remotely? These are the questions workers will ask in the future.

3. Voice technology training data should be more diverse and inclusive.

There are many examples of algorithms adopting biases, such as Amazon’s recruiting assistant favoring men and the recidivism prediction tool COMPAS misclassifying Black defendants as more likely to commit additional crimes. These inequities demonstrate that the technology industry as a whole must do better when it comes to diversity, equity, and inclusion.

In a study that examined speech recognition tools from Amazon, Google, IBM, Apple, and Microsoft, the software collectively was 16% more likely to misidentify words when the speaker was Black. That may not sound like a high percentage, but consider having to correct four out of every 25 words you dictate. Unless addressed, this issue will keep people from adopting voice technology.

As in many other areas, the pandemic accelerated the adoption of voice-activated technologies. With employees around the world demanding greater flexibility and safety precautions, voice technology has likely secured a permanent place as a pillar of the future of work.

David Ciccarelli

Founder and CEO of Voices

David Ciccarelli is the founder and CEO of Voices, the number one creative services marketplace with more than 2 million registered users. David is responsible for setting the vision, executing the growth strategy, creating a vibrant culture, and running the company on a day-to-day basis. He is published frequently in outlets such as The Globe and Mail, Forbes, and The Wall Street Journal.

