Aria.

Dipesh Trikam
5 min read · Jul 22, 2021

“Hi! My name is Aria, I am a Digital Human created by the Datacom Foundry. How can I help you?”

Only 7% of great communication is made up of the words we say – the rest is how we say it, using tone of voice and body language. Digital humans can see and listen to users to understand the meaning behind the words. They can then use their own tone of voice and body language to create lifelike human conversations.

We’ve all seen Echo Dot and Google Nest devices and how they are transforming our world: helping people who are not tech savvy, have a bad memory, limited mobility, limited screen time, can’t read or write, or who are multi-tasking and time starved, like a parent with a young child. When talking about technology, “accessible” means tools that can be used successfully by people with a wide range of abilities and disabilities. When technology is accessible, each user is able to interact with it in the ways that work best for them.

It’s always important when creating new technologies to give some consideration to accessibility, because it will in turn benefit those who aren’t disabled too. Whoever created subtitles deserves a medal 🥇, because there have been times on a plane when the headphones just weren’t working, so being able to read along was ideal. Conversational AI inherently meets this level of accessibility, and it’s one reason I am so drawn to it.

Research has shown that talking to a virtual assistant can make customers feel less judged, so they are more willing to speak openly. Being available after a long day at work and responding instantly has made the business case for adopting this technology easier than ever. During the COVID pandemic we saw that companies who had already adopted it thrived: customers had another channel to get information when they needed it most, and there was less pressure on the call centre from repeated, mundane questions, which meant shorter queue times and lifted general sentiment and Net Promoter Scores across the board.

Natural language understanding, or NLU, is a branch of artificial intelligence that uses software to understand input in the form of sentences, in text or speech, and it directly enables human-computer interaction. It allows computers to understand commands without the formalised syntax of computer languages, and to communicate back to humans in their own language. When it comes to NLU we generally talk about intents and utterances. A simple example: when you greet someone, the intent is to ‘welcome’, but there are many ways of welcoming someone. You can say “hey”, “what’s up” or “how’s it going”; these are all different utterances for the same intent, and mapping many utterances onto one intent is the basis of natural language processing with machine learning.
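
To make that concrete, here is a minimal sketch (in Python) of how intents and utterances might be organised as training data. The intent names and phrases are made-up examples for illustration, not Aria’s actual configuration.

```python
# A minimal sketch of NLU training data: each intent maps to the many
# utterances a user might say to express it. Intent names and phrases
# here are hypothetical examples only.
training_data = {
    "welcome": [
        "hey",
        "what's up",
        "how's it going",
        "hi there",
    ],
    "ask_time": [
        "what time is it",
        "do you have the time",
        "tell me the time please",
    ],
}

# Flatten into (utterance, intent) pairs, the shape most NLU trainers expect.
examples = [(text, intent)
            for intent, texts in training_data.items()
            for text in texts]
print(examples[:3])
```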

Simpler systems rely on a predefined lexicon and a set of grammar rules, while more sophisticated systems leverage machine learning and statistical models to determine the most likely meaning.
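
As a rough illustration of the difference, the sketch below contrasts a hand-written lexicon lookup with a small statistical classifier built with scikit-learn. The tiny dataset is hypothetical and only there to show the idea.

```python
# Sketch: a rule-based matcher vs. a simple statistical intent classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = ["hey", "what's up", "how's it going",
              "what time is it", "do you have the time"]
intents = ["welcome", "welcome", "welcome", "ask_time", "ask_time"]

# Rule-based: a predefined lexicon of exact phrases.
lexicon = dict(zip(utterances, intents))

def rule_based(text):
    return lexicon.get(text.lower(), "unknown")

# Statistical: learn from examples and pick the most likely intent,
# so an unseen phrasing can still be classified.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(rule_based("got the time?"))        # -> "unknown" (no exact match)
print(model.predict(["got the time?"]))   # -> most likely "ask_time"
```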

https://copilot.github.com/

Given any text prompt like a phrase or a sentence, GPT-3 returns a text completion in natural language. Developers can “program” GPT-3 by showing it just a few examples or “prompts.”
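
As a rough sketch of what “programming by prompt” looks like, here is how a few-shot completion request might be made with the OpenAI Python client as it looked around the time of writing. The engine name, prompt and settings are illustrative assumptions rather than anything Aria actually uses.

```python
# A minimal sketch of few-shot prompting GPT-3 via the OpenAI Python client
# (circa 2021). Prompt, engine and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

prompt = (
    "Rewrite the sentence in a friendly tone.\n"
    "Input: Your account is locked.\n"
    "Output: It looks like your account is locked, let's get that sorted!\n"
    "Input: Payment failed.\n"
    "Output:"
)

response = openai.Completion.create(
    engine="davinci",   # a GPT-3 base engine
    prompt=prompt,
    max_tokens=30,
    temperature=0.7,
    stop="\n",
)
print(response.choices[0].text.strip())
```

Swapping in different examples changes the “program” without writing any new code, which is exactly what makes this style of prompting so flexible.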

GitHub Copilot is powered by Codex, the new AI system created by OpenAI. GitHub Copilot understands significantly more context than most code assistants. So, whether it’s in a docstring, comment, function name, or the code itself, GitHub Copilot uses the context you’ve provided and synthesizes code to match. Together with OpenAI, we’re designing GitHub Copilot to get smarter at producing safe and effective code as developers use it.
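
To illustrate the kind of context Copilot works from, here is a hand-written example: the developer supplies only a comment, a signature and a docstring, and an assistant like Copilot would suggest a body along these lines. The body below is written by hand to show the idea; it is not actual Copilot output.

```python
# Illustration of Copilot-style completion: given the comment, signature
# and docstring, an assistant fills in a plausible body.

# Parse an ISO 8601 date string and return how many days ago it was.
from datetime import date

def days_since(iso_date: str) -> int:
    """Return the number of days between today and the given ISO date."""
    then = date.fromisoformat(iso_date)
    return (date.today() - then).days

print(days_since("2021-07-22"))
```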

Creating a good persona can make or break your virtual assistant, whether that is a chatbot or a digital human. There is a set methodology you can use to make sure you are on the right path. Datacom Foundry runs a five-day rapid prototyping session that has been tried and tested, and works very well for making sure you get something good without a huge initial capital investment.

Below are the 4 steps in designing a persona.

  • Understand your brand
  • Understand your users
  • Decide on the tasks and the role
  • Write a biography and monologue for your character

When you see somebody on the street, in a video or in a movie, within the first few seconds you have already inferred their likeability, trustworthiness, intelligence, gender, age and locale. People will do exactly the same thing to your digital human, and the digital human is your brand impression, so it’s important to go through this design process consciously.

It’s important to think about personality traits in detail. For example, Inspector Gadget and Walter White from Breaking Bad could both be described as crafty, resourceful and clever, yet from those adjectives alone you have no idea what either character sounds like.

When you write the monologue, try to bring out a living person. This will help your words appear on the screen as if they were writing themselves.

When you build a good persona, the customer can feel the person they are talking to. Even for a mundane task like telling the time, they will feel it is a faithful representation of your brand. If the connection is done right, it will make them want to come back and use the assistant more. The persona can then be carried across other business channels as well, such as Twitter, email and advertisements.
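
One way to keep that persona consistent across channels is to capture the outcome of the four steps as a simple, shareable spec. The sketch below is purely hypothetical; the fields and values are illustrative, not Aria’s real persona.

```python
# A hypothetical sketch of capturing persona design decisions as data,
# so the same character can be reused consistently across channels.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    brand_values: list = field(default_factory=list)   # step 1: your brand
    audience: str = ""                                  # step 2: your users
    tasks: list = field(default_factory=list)           # step 3: tasks and role
    traits: list = field(default_factory=list)          # step 4: character
    biography: str = ""
    sample_monologue: str = ""

example = Persona(
    name="Aria",
    brand_values=["helpful", "trustworthy", "approachable"],
    audience="customers who prefer talking over typing",
    tasks=["answer FAQs", "check opening hours", "hand off to a human"],
    traits=["warm", "patient", "quietly witty"],
    biography="A lifelong local who loves solving small problems quickly.",
    sample_monologue="Hi! I'm here to help. Ask me anything, big or small.",
)

print(example.name, "-", ", ".join(example.traits))
```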

Moving forward, digital assistants will increasingly become part of your workforce, and starting to develop this infrastructure now can really set your business up for the long term.


Dipesh Trikam

My opinions and insights in layman’s terms. 🤖 Check out some of my other work here: https://dipesht.myportfolio.com/