This month's UXPA event was titled 'What AI means for UX'. While Hollywood encourages us to see AI (Artificial Intelligence) as evil and expects it to be the downfall of the human race, others believe it will actually save us. However you view it, there is no denying it is already with us, if hidden in most cases, and it is the job of us UXers to make sure we do not design an intelligence that will destroy us. Best to learn all I can now, then.
We had three really interesting speakers.
First up was another ex-General Assembly student, Tom Woodel, speaking about the company he works for, Saberr, which uses AI to offer coaching that helps teams work better together. They are currently working on a chatbot for teams called CoachBot.
With this sort of work, he said, it is important that it is not just about the data; it needs to be human too. Tom said that for UX to really have an impact, designers need to be brought in right at conception, where we can think about whether we really need the product in the first place and then about its real uses. But his biggest piece of advice overall was to make sure you set expectations correctly.
Next up was James Clemons from Cambridge Consultants. He firmly believes that if a product has good UX then users will use it, which will help it to learn and improve. Without this improvement users will leave and your product will die.
The biggest issue he sees for AI is getting people to understand how the 'magic' happens. People do not have to understand it fully, after all you do not know exactly how your car works, but they need to understand enough to trust it. These trust issues need to be dealt with before AI can really take over. His biggest worry is about the boundaries around our data, which helps the AI to learn, and about how these boundaries are being broken down.
And his best advice for someone starting out on this journey was to ask yourself 'how would a human do it?'
Last up was Pae Natwilai, whose talk was titled 'How can you design a drone for 7 year olds?' At first I could not work out how this talk fitted in, as Pae talked about looking at the controllers for drones and how people use them. All interesting stuff, but not really AI.
But then she explained how she had redesigned the controller based on how people use their hands to give directions. Using an app on a mobile phone, people can direct the drone while the computer works out what they mean. Suddenly you have a system that is much easier to use and far more accurate.
All three talks were fascinating and gave me a much better understanding of AI than I had before. I am starting to see where it comes in and how much more it could do. This is not just chatbots or Alexa. This is the real future.