Alphabet Inc.’s Google plans to double the size of its team studying artificial-intelligence ethics in the coming years, as the company looks to strengthen a group that has had its credibility challenged by research controversies and personnel defections.
Vice President of Engineering Marian Croak said at The Wall Street Journal’s Future of Everything Festival that the hires will increase the size of the responsible AI team that she leads to 200 researchers. Additionally, she said that Alphabet Chief Executive Sundar Pichai has committed to boosting the operating budget of a team tasked with evaluating code and products to avert harm, discrimination and other problems with AI.
“Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business,” Ms. Croak said. “It severely damages the brand if things aren’t done in an ethical way.”
Google announced in February that Ms. Croak would lead the AI ethics group after it fired the division’s co-head, Margaret Mitchell, for allegedly sharing internal documents with people outside the company. Ms. Mitchell’s exit followed criticism of Google’s suppression of research last year by a prominent member of the team, Timnit Gebru, who says she was fired because of studies critical of the company’s approach to AI. Mr. Pichai pledged an investigation into the circumstances around Ms. Gebru’s departure and said he would seek to restore trust.
In addition to straining the existing team, those personnel changes have frayed Google’s relationship with external groups focused on AI such as Black in AI and Queer in AI, which released a joint statement Monday criticizing Google for setting a “dangerous precedent for what type of research, advocacy, and retaliation is permissible in our community.” The statement was earlier covered by Wired.