Sage has published The Ethics of Code: Developing AI for Business with Five Core Principles to provide guiderails for creating ethical and responsible AI for business users.
Sage is calling on the Australian tech community to take responsibility for the ethical development of AI for business by committing to the five core principles, which were established while Sage was building its own machine learning and AI program.
“Building chatbots and AI that help our customers is the easy part — the wider questions that the rising tide of AI brings are broad and currently very topical. Because of this, we developed our AI within a set of guiderails; these are the core principles that we believe help us to ensure our products are safe and ethical,” Sage vice president of bots and AI Kriti Sharma said.
“The Ethics of Code are designed to protect the user and to ensure that tech giants are building AI that is safe, secure, fits the use case and most importantly is inclusive and reflects the diversity of the users it serves.”
The five principles are:

- AI should reflect the diversity of the users it serves
- AI must be held to account — and so must users
- Reward AI for ‘showing its workings’
- AI should level the playing field
- AI will replace, but it must also create
Ms Sharma said that Sage is calling on others to bear these principles in mind when developing or deploying their own AI.