
Trust in the future of personal AI

2016 was the year of the chatbot, with over 30,000 of them now available on Facebook's Messenger platform [1]. Companies such as IPsoft are making great strides with cognitive agents (in this case Amelia) to automate service desk tasks. The vast quantities of data that retailers such as Amazon and eBay now collect about us have enabled personalised recommendations akin to personal shoppers. With the rapid development of Siri, Cortana, Google Now and Alexa, it's possible to foresee personalised chatbots, powered by this wealth of data, that approximate our identities in the virtual environment. In fact, this is already happening: a group of entrepreneurs from MIT is working on a service that brings the deceased back to life by creating a virtualised, interactive identity from their emails, photos and Facebook posts [2]. More usefully, perhaps, we already have a robot lawyer capable of disputing parking tickets on your behalf [3].

Indeed, CBS's show Person of Interest makes use of a crime-predicting AI called simply 'the Machine', where a whole data centre's worth of servers is needed to drive its personality and search algorithms. When we think of sophisticated and useful chatbots, we have to accept that they will run in the cloud and not on our wrists.

If we translate this to the world of virtual shopping: instead of setting up rules-based alerts for a rare vinyl copy of a particular album, your virtual identity would wander around the web trying new places, finding its own set of record-collector websites. It would also assimilate a range of news items, synchronise with your music streaming account and downloads, and suggest, unprompted, something it has found for sale that you might like.
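To make the contrast concrete, the rules-based alert being replaced here might look like the following minimal sketch. Every name in it (`Listing`, `matches_rule`, `check_listings`) is a hypothetical illustration, not a real service: the point is that the user must spell out every condition in advance, whereas the virtual identity would discover sources and criteria on its own.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Listing:
    """One item seen on a record-collector site (hypothetical schema)."""
    artist: str
    album: str
    fmt: str
    price: float


def matches_rule(listing: Listing) -> bool:
    # A fixed, hand-written rule: the user has to anticipate exactly
    # what they want. Anything outside these conditions is never surfaced.
    return (
        listing.artist == "Nick Drake"
        and listing.album == "Pink Moon"
        and listing.fmt == "vinyl"
        and listing.price <= 50.0
    )


def check_listings(feed: List[Listing]) -> List[Listing]:
    """Return the listings that would trigger an alert."""
    return [item for item in feed if matches_rule(item)]


feed = [
    Listing("Nick Drake", "Pink Moon", "vinyl", 45.0),
    Listing("Nick Drake", "Pink Moon", "CD", 10.0),
]
print(check_listings(feed))  # only the vinyl copy under budget matches
```

An autonomous agent inverts this: rather than the rule filtering a known feed, the agent builds and revises both the feed and the criteria itself.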

Much of this could be achieved today, not tomorrow. The next step is to imagine that your virtual identity (let's call it Tony 2.0) could also interact with various other such identities as it wanders around as a personal shopper. Tony 2.0 could know from your diary that you aren't busy next weekend and spot a chance for you to take a short break in Devon. It could price some train tickets and even interact with a holiday-booking chatbot to secure an exclusive price if you book today. Of course, the weather forecast will have been checked and a tentative booking made at that seafood restaurant you like.
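The weekend-break scenario above is essentially a chain of checks and negotiations. A minimal sketch of that chain follows; every function here is a hypothetical stub standing in for a real calendar, fare, weather or restaurant API, none of which are named in the original.

```python
from typing import Optional


def diary_is_free(weekend: str) -> bool:
    # Stub for a calendar lookup.
    return weekend == "next weekend"


def train_price(destination: str) -> float:
    # Stub for a fare query.
    return 58.0


def negotiate_discount(base: float) -> float:
    # Stub for a chatbot-to-chatbot "exclusive if you book today" deal.
    return round(base * 0.9, 2)


def forecast_is_good(destination: str) -> bool:
    # Stub for a weather check.
    return True


def plan_break(destination: str, budget: float) -> Optional[dict]:
    """Chain the checks; only propose a trip if every step succeeds."""
    if not diary_is_free("next weekend"):
        return None
    price = negotiate_discount(train_price(destination))
    if price > budget or not forecast_is_good(destination):
        return None
    return {
        "destination": destination,
        "price": price,
        "restaurant": "tentative booking made",
    }


print(plan_break("Devon", budget=60.0))
```

The interesting design question is not the individual calls but the autonomy: each step here is a decision the agent takes without asking you first.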

Tony 2.0 can also interact with real people on your behalf, though even today a growing fraction of entities on social media are already virtual. Online interactions with real people may start to become a rare event, replaced by interactions between their virtual identities. All very idyllic: imagine your virtual identity visiting a range of dating sites on your behalf and having dozens of virtual dates with other virtual personalities. It then suggests you go on a real-world date with somebody who lives near you and is a far better match than the average of the long list originally proposed by the matchmaking site.

This is where it all runs the risk of going wrong. If your virtual identity is making decisions for you, it's important that it understands your personality beyond Facebook likes and Twitter retweets. Does your online persona reflect who you really are? Much like current AI, its effectiveness depends on the quality of the data. After all, the level of anonymity afforded by the internet can fundamentally change a person's behaviour.

Online, we find it easier to adopt pseudonyms (how many Twitter handles are people's actual names?), to speak out, to criticise or to heap lavish praise. We experiment, even if subconsciously, and many people overshare or spend their time trolling. To make this next wave of chatbots work, they will need access to more personal data than many of us would like to trust to an autonomous agent. What if your agent were to cause offence and do more harm than good? This demands a higher level of trust than most of us are prepared to give social media sites when we think hard about sharing more than birthdays and holiday photos.

We still (though this has some generational elements) treat online as somewhere other than the offline world. Generation Z may be the first for whom virtual identities such as Tony 2.0 are as natural as everything else in their world. Meanwhile, the rest of us need to consider what the split between offline and online identity means as autonomous AI keeps evolving faster than we can learn how best to exploit the opportunities it offers.

Are you ready for the future of personal AI?

[1] Facebook Messenger now allows payments in its 30,000 chat bots
[2] A creepy new startup wants to create living avatars for dead people
[3] A robot lawyer can sort out your parking tickets

