Artificial intelligence (AI) has been subjected to more than its fair share of hype, yet a consensus is forming among Fortune 500 organizations that AI will indeed transform business productivity.
Indeed, according to the enterprise AI analyst firm Tractica, 42 percent of business leaders believe AI is of great importance to their business. The time is ripe to ask the bigger, bolder societal questions about the full scale of the disruption AI will bring to business.
Slowly but steadily, the issue of responsible AI is moving to the forefront of boardroom discussions on AI, and the debate centers on how best to build fairness, transparency, data privacy, and security into brand-new, untested AI-powered systems and business practices.
Conscious and unconscious biases inherent in AI systems have the potential to produce unfair or discriminatory results. Much has been written about the overwhelmingly male-dominated global community of AI developers – and the very limited inclusion of minorities in the technical teams engineering and implementing AI systems. This lack of diversity and inclusivity may well lead to algorithms that exclusively mirror their creators and their worldview; a holistic approach that secures broader societal representation is needed when building and delivering AI systems.
Transparency is also critical: businesses need to be clear about the data an AI system uses, and the purpose that data serves needs to be disclosed and readily available. When people interact with an AI system, directly or indirectly, it needs to be completely transparent to them that this is the case.
Legal certainty on questions of liability when AI systems act, deliver, and err is essential. When a decision, action, or recommendation made by an AI system goes wrong, who takes responsibility, and who is held accountable? Questions of jurisdiction and liability need to be addressed before any wide-ranging use of AI systems, particularly in public-facing projects.
Data privacy and security also play a pivotal role in the conversation around responsible AI. What data is used to power these systems, how is it obtained and stored, and how can it be transferred across organizations and functions? The European Union has introduced perhaps the world's tightest laws governing the possession, analysis, and use of personal data, and this General Data Protection Regulation (GDPR) is changing the conversation around the speed with which AI can be implemented.
Still, the world has yet to see a comprehensive legal framework on AI or on how it should be developed and implemented, including the standards required and the ethical considerations and implications. The industry is so far self-regulating, but several governments and international institutions (such as the UN and the EU) are holding consultations that may eventually lead to regulatory initiatives with a definitive approach to what is and isn't acceptable in the use of AI.
Taking this a step further, the disruptive nature of AI will create a practical need: a mechanism to monitor and audit AI systems as they are implemented across organizations will become essential to the continuous evolution of the technology.
The socioeconomic impact of AI is also a significant part of the discussion around responsible AI, notably the employment disruption that mechanization and automation may cause. Overall, there is an acute need to plan for new economies in which humans work alongside AI and robots.
The time to address these issues is now – business leaders cannot wait until AI is embedded in the functional fabric of their organizations. For AI to help create a world of enhanced human productivity, the right foundations need to be laid now, with a responsible approach.