UPCOMING INDUSTRY EVENTS

AA-ISP Leadership Summit: Chicago, April 16-18, 2019
SALT Conference: Las Vegas, May 7-10, 2019
Re-Work Deep Learning & Deep Learning in Healthcare Summits: Boston, May 23-24, 2019
AI Business Summit: London, June 12-13, 2019
Re-Work AI for Good & Applied AI Summits: San Francisco, June 20-21, 2019
MAICON 2019: Cleveland, Ohio, July 16-18, 2019
Re-Work AI in Retail & Advertising Summit: London, Sept. 19-20, 2019
AI Business Summit: San Francisco, Sept. 24-26, 2019
Re-Work Responsible AI Summit: Montreal, Oct. 24-25, 2019
AI Business Summit: New York, Dec. 4-5, 2019

For all the hype, it's clear that artificial intelligence (AI) will be at the forefront of the information revolution. That's why we must ask the bigger and bolder societal questions now, not later, about the full scale of the disruption AI will bring to business.

Slowly but steadily, the issue of “Responsible AI” is making its way to the forefront of boardroom discussions on AI. The debate is underway about how best to build fairness, transparency, data privacy and security into brand-new, untested AI-powered systems and business practices.

We know that AI can discriminate. The global community of AI developers is male-dominated, and few of those responsible for engineering AI systems are minorities. This lack of diversity and inclusivity can lead to algorithms that mirror only their creators and their worldview.
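One way that bias shows up in practice is as unequal outcomes across groups. As a rough illustration only, here is a minimal Python sketch of a demographic-parity check on hypothetical loan-approval predictions; the data, the groups and the size of gap that should raise concern are all assumptions for the example, not any particular company's method.

# Minimal sketch: compare the rate of positive outcomes across groups.
# The predictions and group labels below are made-up illustrative data.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical loan-approval predictions (1 = approved) and applicant groups.
preds = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(positive_rate_by_group(preds, groups))
# {'A': 0.8, 'B': 0.2} -- a gap this large is a signal worth reviewing

A gap like this does not prove discrimination on its own, but it is exactly the kind of signal a responsible review process should surface before a system reaches customers.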

Transparency is also critical: businesses need to know what data their AI uses, and why it is being used.
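In practice, that can be as simple as a human-readable register of the data each model consumes and the reason it consumes it. The Python sketch below is illustrative only; the model name, fields and purposes are invented for the example rather than drawn from any standard schema.

# Minimal sketch: a plain data-use register so a business can answer
# "what data does this AI use, and why?"
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataUse:
    field_name: str   # the input the model consumes
    source: str       # where the data comes from
    purpose: str      # why the model needs it

@dataclass
class ModelRecord:
    model_name: str
    uses: List[DataUse] = field(default_factory=list)

    def describe(self) -> str:
        lines = [f"Model: {self.model_name}"]
        for u in self.uses:
            lines.append(f"  - {u.field_name} (from {u.source}): {u.purpose}")
        return "\n".join(lines)

# Hypothetical model and data fields, for illustration only.
record = ModelRecord("churn_predictor", [
    DataUse("monthly_spend", "billing system", "estimate engagement"),
    DataUse("support_tickets", "CRM", "flag dissatisfaction"),
])
print(record.describe())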

Law of the land

There's no legal framework that covers who's responsible when AI systems make mistakes. Is the AI system itself responsible for its actions? Questions of jurisdiction and liability need to be addressed before AI systems are used widely, particularly in public-facing projects.

The law is catching up when it comes to data privacy and security. The European Union has introduced perhaps the tightest laws globally on the possession, analysis and use of personal data. GDPR is changing the nature of the conversation around how quickly AI can be implemented.

But the world has yet to adopt any kind of comprehensive legal framework on AI. The industry is, so far, self-regulating, while governments are only starting to work out what is and isn't an acceptable use of AI.

The world will need some kind of mechanism to monitor and audit AI systems as they are implemented across organizations; such oversight will become essential to the continued evolution of the technology.
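A minimal version of such a mechanism is an append-only audit trail that records every prediction a deployed model makes, so decisions can be reviewed after the fact. The Python sketch below is a simplified assumption of how that could work; a production system would need durable, access-controlled storage and a real model in place of the toy scoring function.

# Minimal sketch: log each prediction (inputs, output, model version, time)
# to an append-only file so it can be audited later.
import json
import time

AUDIT_LOG = "model_audit_log.jsonl"  # assumed local file for the example

def predict_with_audit(model_fn, model_version, inputs):
    """Run a prediction and append what/when to an audit log."""
    output = model_fn(inputs)
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return output

# Hypothetical scoring function standing in for a real model.
def toy_credit_score(inputs):
    return 1 if inputs.get("income", 0) > 50000 else 0

print(predict_with_audit(toy_credit_score, "v1.2", {"income": 62000}))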

The socio-economic impact of AI is also a significant part of the discussion around responsible AI, including how society adapts to the unemployment and disruption this new technology will surely bring.

Time to act

The time to address these issues is now. Business leaders cannot wait until after AI is embedded into the functional fabric of their organizations. For AI to help create a world of enhanced human productivity, the right foundations need to be set now with a responsible approach.