‘Not a deepfake company’: Voice cloning startup Replica Studios raises $3.7 million

Replica Studios

The Replica Studios team. Source: supplied.

Replica Studios last week closed a US$2.5 million ($3.7 million) raise, which it will use to propel its “Photoshop for voice” technology into its next stage of growth.

The Brisbane startup’s voice AI program analyses a five-minute clip of someone’s voice and then accurately recreates it digitally. It can also adjust the pitch and emotion of voices, similar to the effects of photo filters on social media platforms.

Funded by investors in both Silicon Valley and Sydney, the startup plans to capitalise on demand from the US while also taking advantage of opportunities in the relatively uncompetitive landscape Down Under.

Right place, right time

Co-founder and chief executive officer Shreyas Nivas tells StartupSmart that Replica found itself, “out of the blue”, in the right place at the right time.

When Replica’s founders launched their fundraising campaign four months ago, Google Assistant had just released its first celebrity voice test, featuring John Legend.

“Before that, people didn’t really believe us,” he says.

“They thought ‘how big can this market actually be?’”

On the other hand, this was not all down to coincidence. When Google Assistant made the announcement, the Replica team was already in the States, acting on feedback that its tech would have more potential there than locally.

And while this proved correct, and the startup received its first cheque from an American backer, Nivas says the team feels no pressure to relocate.

“There aren’t that many opportunities for talent right now in Queensland, in Brisbane, and in Australia as a whole,” he says.

“So one of the competitive advantages of staying in Australia is we can attract the best talent, whereas if you’re in the middle of Silicon Valley, good luck trying to find someone who actually sticks with your company long term.”

When asked what the funding will be used for, Nivas is somewhat tight-lipped, hinting all will be revealed in an upcoming announcement, although he does say recruiting talent is at the top of Replica’s checklist.

Future-thinking protocols

It’s undeniable there is the potential for Replica users to misuse the technology and infringe on privacy, but the startup balks at the term “deepfake”.

“We’re the exact opposite of a deepfake company,” Nivas says.

“We don’t want to be impersonating people without their permission.”

According to Nivas, it’s inevitable that lawmakers will set guidelines and standards to protect privacy in the future. In the meantime, Replica is treading carefully.

For anyone interested in the program for private use, there is a waitlist for the beta version, complete with a questionnaire. A security protocol then uses the results to filter and restrict available features.

For example, non-commercial beta users can only replicate their own voice or select some samples from Replica’s licensed library. To extend these features to another voice, the voice talent in question would need to fill out legal papers authorising the use of their likeness.

“Not everyone is going to be allowed onto our platform,” Nivas says.

Next steps

Meanwhile, Nivas says the commercial applications are progressing more quickly than the personal offerings.

There are three applications in the works: personalising podcast advertisements, streamlining audiobook production, and making gaming voice-overs more dynamic.

That means podcasts with international followings could soon use this technology to tailor ads to listeners’ location, demographics and historical interests, similar to the way Google Ads works on websites.

It could also see more adaptations of audiobooks with celebrity voices, and gaming characters and voice-overs reacting to scoreboards and events in real-time.

Build your network

Replica is not Nivas’ first or second company — it’s his fourth, not counting other projects and experiments.

He says this one found success through its “angelic” network, and encourages other founders to build and expand their own supporter base.

“It’s not just me, or even just my team, in isolation,” he says.

“The biggest change that happened is that it wasn’t just me evangelising the company, but a whole network of people evangelising the company.”

READ MORE: ‘Photoshop for voice’: Meet the Brisbane startup creating a global marketplace for our voices

READ MORE: Why artificial intelligence in Australia needs to get ethical

