Sanas aims to convert one accent to another in real time for smoother customer service calls – TechCrunch


In the customer service industry, your accent dictates many aspects of your job. It shouldn’t be the case that there is a “better” or “worse” accent, but in today’s global economy (though who knows about tomorrow’s), it is valuable to sound American or British. While many workers undergo accent neutralization training, Sanas is a startup with a different approach (and a $5.5 million seed round): using speech recognition and synthesis to change a speaker’s accent in near real time.

The company has trained a machine learning model to quickly and locally (i.e. without using the cloud) recognize a person’s speech on one end and, on the other, generate the same words with a different accent, either chosen from a list or detected automatically from someone else’s speech.

Screenshot of the Sanas desktop application.

Image credits: Sanas.ai

It plugs directly into the operating system’s sound stack, so it works out of the box with virtually any audio or video calling tool. Right now, the company is running a pilot program with thousands of people in locations from the US and UK to the Philippines, India, Latin America and beyond. By the end of the year, supported accents will include American, Spanish, British, Indian, Filipino, and Australian.
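The description above amounts to a three-stage flow: on-device recognition, selection of a target accent, and resynthesis of the same words. Sanas has not published its architecture, so the sketch below is purely conceptual; every function here is a hypothetical stand-in, not the company’s actual pipeline:

```python
# Conceptual sketch of an on-device accent-conversion flow.
# All components are illustrative stubs; Sanas has not published its design.

def recognize_locally(audio_frames):
    """Stub ASR: map audio frames to word-like tokens, entirely on-device."""
    return [f"token_{i}" for i, _ in enumerate(audio_frames)]

def select_target_accent(choice=None, reference_audio=None):
    """Pick an accent from a list, or 'detect' one from reference audio."""
    if choice is not None:
        return choice
    if reference_audio is not None:
        return "detected-from-reference"
    return "american"  # arbitrary default for this sketch

def synthesize(tokens, accent):
    """Stub synthesis: regenerate the same words in the target accent."""
    return [(token, accent) for token in tokens]

def convert(audio_frames, accent_choice=None, reference_audio=None):
    """End-to-end: recognize locally, then resynthesize in the target accent."""
    accent = select_target_accent(accent_choice, reference_audio)
    return synthesize(recognize_locally(audio_frames), accent)

frames = [b"\x00" * 320] * 3  # three dummy 20 ms audio frames
print(convert(frames, accent_choice="british")[0])  # ('token_0', 'british')
```

The point of the sketch is the ordering: because recognition and synthesis both happen locally, the converted audio can be handed to the OS sound stack and treated like any other microphone input by calling apps.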

Truth be told, the idea of Sanas bothered me at first. It felt like a concession to intolerant people who consider their accent superior and look down on everyone else. Technology will fix it… by accommodating the bigots. Great!

But even though I still have a bit of that feeling, I can see there is more to it than that. Fundamentally, it is easier to understand someone who speaks with an accent similar to your own. Customer service and tech support is a huge industry, and it is largely staffed by people outside the countries where the customers are. This basic disconnect can be remedied in a way that puts the onus on the entry-level worker, or in a way that puts it on technology. Either way, the difficulty of making yourself understood remains and has to be addressed; an automated system simply makes it easier and lets more people do their jobs.

It is not magic; as you can hear in this clip, the character and cadence of the person’s voice are only partially retained, and the result sounds considerably more artificial:

But the technology is improving, and like any speech engine, the more it is used, the better it gets. And for someone not used to the original speaker’s accent, the American-accented version may well be easier to understand. For the person in the support role, that likely means better outcomes on their calls – everyone wins. Sanas told me the pilots are just getting started, so there are no numbers from those deployments yet, but earlier testing suggested a considerable reduction in error rates and an increase in call efficiency.

In any case, it’s promising enough to attract a $5.5 million seed round from Human Capital, General Catalyst, Quiet Capital, and DN Capital.

“Sanas strives to make communication easy and friction-free, so that people can speak confidently and understand each other, wherever they are and whoever they are trying to communicate with,” CEO Maxim Serebryakov said in the statement announcing the funding. It’s hard to disagree with that mission.

While the cultural and ethical issues around accents and power differentials are unlikely to ever go away, Sanas is trying something new that could be a powerful tool for the many people who must communicate professionally and find that their speech patterns are an obstacle to doing so. It’s an approach worth exploring and discussing, even if in a perfect world we would simply understand one another better.


