The Wall Street Journal Future of Everything Festival just ended. Socialite Paris Hilton was there. Contemporary Chinese artist and activist Ai Weiwei was there. And Shared Assessments was there to take in a few sessions.
The session most relevant to Risk Management and Privacy was an interview with Marian Croak, Google's new VP of Engineering. Croak discussed “Consumer Trust and Inherent Bias in Tech,” focusing on “Ethical Artificial Intelligence.”
Croak spoke to how Google both uses and furthers ethical AI, describing Google as approaching responsible AI through two distinct work streams:
- AI for Social Good: Developing services and applications with a profoundly positive impact on humanity to address accessibility issues, health related issues, and disaster preparedness.
- Responsible AI: Developing technologies, best practices, and processes to ensure that AI systems are performing in a responsible way along dimensions of fairness, transparency, privacy, and safety.
Google’s past research on responsible AI has been diffuse (rather than focused). The field is nascent: Google established its principles around ethical AI only 4-5 years ago. Organizations and people are just now forming normative definitions of fairness or privacy and ensuring these factors are measurable.
As we enter a moment of higher ethical awareness, Croak described ethical work and business motives as entwined. Being responsible in the way you develop and deploy technologies (including AI) is fundamental to the good of the business. It supports the image of the brand; there is no dichotomy between ethics and business.
Croak identified the need to involve people impacted by AI starting in the initial phases of product conceptualization. Croak stated that throughout the product cycle Google is “asking questions, testing, and involving the people we are trying to serve.” Google uses model cards to deepen its understanding of how people will be involved in or impacted by its products.
Model Cards are a tool for model transparency that provide a structured framework for reporting on a machine learning (ML) model’s provenance, usage, and ethics. Model cards offer an evaluation of a model’s best uses and limitations. Through benchmarking, model cards can reveal cases where an application of ML is being used in the wrong context – for example, across different cultural, demographic, or phenotypic groups (race, geographic location, sex).
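To make the idea concrete, here is a minimal sketch of the kind of structured fields a model card captures. This is a hypothetical representation for illustration only; Google's actual Model Card Toolkit uses a richer schema, and all names below are assumptions, not its API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a model card's structured fields.
# Not Google's Model Card Toolkit schema - illustration only.
@dataclass
class ModelCard:
    name: str
    version: str
    intended_uses: list          # contexts the model was built and tested for
    limitations: list            # known out-of-scope or unsupported uses
    evaluation_groups: list      # demographic/phenotypic slices benchmarked
    metrics: dict = field(default_factory=dict)  # per-slice scores

    def flag_unsupported_use(self, context: str) -> bool:
        """Return True if a proposed context is a documented limitation."""
        return context in self.limitations

card = ModelCard(
    name="face-detector",
    version="1.0",
    intended_uses=["photo organization"],
    limitations=["surveillance", "demographic inference"],
    evaluation_groups=["skin tone", "age group", "sex"],
    metrics={"accuracy@skin_tone_dark": 0.91, "accuracy@skin_tone_light": 0.94},
)
print(card.flag_unsupported_use("surveillance"))  # True
```

Publishing per-slice metrics like these is what lets a reader spot the benchmarking gaps Croak described – for example, a model that performs noticeably worse for one skin-tone group.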
Model cards fit into the “Responsible AI” category Croak mentioned – they are a methodological contribution guiding how all organizations should conduct technological research and development, especially when using AI and ML. Google’s solutions in the “AI for Social Good” category include:
- Health AI – Google has just released a smartphone-based algorithm that allows users to detect heart-rate abnormalities by scanning the blood flow and color of a fingertip. The application compares the image to reference data to make a prediction about heart-rate health. Had Google relied on other biometric factors such as skin or facial color (which are technically good indicators of heart health), bias would have been introduced, since those signals produce more errors and false positives for darker-skinned people. Google corrected for this to make sure all users benefit from the solution.
- Diagnostic Tools – For patients who have difficulty accessing medical specialists in developing nations, Google has provided algorithms to non-specialists such as nurse practitioners or laypeople to help the care provider understand whether there is a serious condition that needs attention.
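The fingertip heart-rate idea above can be sketched in a few lines. The article does not describe Google's actual method, so this is only an illustrative assumption: a signal-processing approach in the spirit of photoplethysmography, where the pulse shows up as a periodic change in fingertip brightness and the dominant frequency gives beats per minute.

```python
import numpy as np

# Hedged sketch: estimate heart rate from a fingertip brightness
# signal over time. Illustrative only - not Google's algorithm.
def estimate_bpm(brightness: np.ndarray, fps: float) -> float:
    """Estimate beats per minute from the dominant frequency of a
    mean-centered brightness signal sampled at `fps` frames/sec."""
    signal = brightness - brightness.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    # Restrict to plausible heart rates: 40-200 bpm (0.667-3.333 Hz)
    band = (freqs >= 40 / 60) & (freqs <= 200 / 60)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0

# Synthetic 72-bpm pulse sampled at 30 frames per second, plus noise
fps, true_bpm = 30.0, 72.0
t = np.arange(0, 10, 1.0 / fps)
sim = 0.5 * np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.05 * np.random.randn(len(t))
print(round(estimate_bpm(sim, fps)))
```

Note that a brightness signal is exactly the kind of input where skin tone affects signal quality, which is why the per-group benchmarking Croak described matters for this application.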
Croak’s resounding message was that we cannot divorce social context from technology. As Google, as organizations, as individuals, we must remain aware of the culture we live and work in, and implement responsible practices that raise the collective ethical level.
Charlie Miller, Senior Advisor, Shared Assessments, reflected on the lessons Croak imparted: “Ethical Artificial Intelligence (AI) is a complex emerging futuristic view of how technologies can benefit humanity. To ensure Ethical AI succeeds, it is critical that intentional and unintentional bias is minimized, as it can be introduced into AI models at many points along the way, including the data being selected and used, the diversity of the AI development team, and validation of the model’s outputs to ensure results align with expected outcomes and are not exposed to any interpretation biases. It will be worth keeping an eye on Marian Croak and the Google team to follow their internal success and ability to extend processes to other organizations and industries.”