
Lifewire: We need AI that can explain itself, says Travis Nixon

We love artificial intelligence. But no matter how great the benefits and results AI provides, we often don't know how it works. When AI is making life-changing decisions, 'how?' is a question we need answered.


One company meeting that need? SynerAI, an AI-driven investment insights platform that tells users the what as well as the how.


CEO Travis Nixon spoke to Lifewire in an email interview about the value of explainable AI:


"Explainable AI is about being able to trust the output as well as understand how the machine got there," Travis Nixon, the CEO of SynerAI and Chief Data Science, Financial Services at Microsoft, told Lifewire in an email interview.


"'How?' is a question posed to many AI systems, especially when decisions are made or outputs are produced that aren't ideal," Nixon added. "From treating different races unfairly to mistaking a bald head for a football, we need to know why AI systems produce their results. Once we understand the 'how,' it positions companies and individuals to answer 'what next?'."


So how else can explainable AI be used? Nixon offered another example.


While building a quota-setting model for a company's sales force, he said, his team used explainable AI to identify which characteristics pointed to a successful new sales hire.


"With this output, this company's management was able to recognize which salespeople to put on the 'fast track' and which ones needed coaching, all before any major problems arose," he added.


Answering the 'how' seems like it should be an obvious part of any AI system, but so far it rarely has been. And if future systems continue to lack explainable AI, the impact on the industry could be severe, Nixon notes.


For most data scientists, explainable AI currently serves as a gut check, Nixon said. Researchers run their model through a few simple methods, make sure nothing is completely out of order, and then ship it.


"This is in part because many data science organizations have optimized their systems around 'time over value' as a KPI, leading to rushed processes and incomplete models," Nixon added.


"I'm worried the blowback from irresponsible models could set the AI industry back in a serious way."



Does your artificial intelligence startup have insights to share? We can help get them heard. Contact our tech PR agency to turn your knowledge into media hits.

