The boxes Josef checked.
Adam and the team had several boxes their AI tools needed to check before they could safely go out the door.
1. Accurate answers they can trust
“The last thing I want is someone raising an issue through the AI and it giving them a completely wrong answer,” he says.
“We just can’t have them head off and give incorrect advice on an employment matter, for example.”
“The accuracy and quality of what the model’s producing is so important.”
2. Good governance and control
“We were looking for a tool that enabled us to apply levels of governance that you don’t get from something like ChatGPT or Copilot,” Adam says.
“The ability to know that there’s a level of governance around the answers, and that we could report on that with transparency, was really important to us at ART.”
3. Easy access
“The tools really needed to be integrated into our daily workflows on our intranet, as that’s where the majority of people get their daily news and the latest resources.”
He adds, “By integrating seamlessly with what the team already do, we’ll be able to gradually shift behaviour over time.”