
DeepMind’s New Hires Will Protect Us from Rogue Machines

Google DeepMind’s latest hires are a team of specialists in the art of preventing evil Artificial Intelligence from running amok.

According to Business Insider, which cites information sourced on LinkedIn and other social media, the London-based AI organisation recently snapped up AI experts Viktoriya Krakovna, Jan Leike, and Pedro Ortega to create a brand-new safety team.

As the Alphabet-owned company keeps scoring successes in its bid to engineer human-level Artificial Intelligence, worries are growing around the possibility that a super-intelligent machine would be too smart to control and could end up posing an existential threat to humankind. Physicist Stephen Hawking, Tesla CEO Elon Musk and Oxford philosopher Nick Bostrom are only the most high-profile members of the AI-fearful bunch.

Perhaps to allay such concerns, DeepMind has now brought on board its trio of evil-machine-busters.

Harvard statistician Krakovna is a co-founder of the Boston-based Future of Life Institute, an organisation aimed at staving off existential risks (including AI) whose advisors include Hawking and Musk themselves. Business Insider’s story points out how Krakovna’s LinkedIn and Twitter profiles reveal her new role as DeepMind’s “research scientist in AI safety.”

Similar information indicates that DeepMind hired Oxford-based Leike, who on his personal website describes himself as a specialist in reinforcement learning and in “making machine learning robust and beneficial.” He is also a researcher at the Bostrom-led Future of Humanity Institute.

The third hire is Cambridge University AI expert Ortega, whose focus is applying “information-theoretic and statistical mechanical ideas” to sequential decision-making.

Image via bit.ly/2gBu8Qm

BY SHACK15

Co-working space and blog dedicated to all things data science.
