The rise of foundation models that power the growth of generative AI and other AI use cases offers exciting possibilities—yet it also raises new questions and concerns about their ethical design, development, deployment, and use.
The IBM AI Ethics Board publication Foundation models: Opportunities, risks and mitigations addresses those concerns and explores the technology’s benefits, risks, guardrails, and mitigations.
The paper lays out the potential risks associated with foundation models through the lenses of ethics, laws, and regulations, grouping them into three categories:
- Traditional. Known risks from earlier forms of AI systems.
- Amplified. Known risks now intensified because of intrinsic characteristics of foundation models, most notably their inherent generative capabilities.
- New. Emerging risks intrinsic to foundation models and their inherent generative capabilities.
These risks are organized according to whether they relate to the content provided to the foundation model (the input), the content generated by it (the output), or additional challenges beyond input and output. They are presented in a table that explains why each risk is a concern and why it should be considered during the development, release, and use of foundation models.
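To make the taxonomy easier to picture, the sketch below shows one way the two axes (risk category and input/output dimension) could be represented in code. This is a minimal Python illustration of the structure only; the class names and the two sample risks are our own assumptions, not entries copied from the paper's table.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """How the paper classifies each risk relative to earlier AI."""
    TRADITIONAL = "traditional"  # known from earlier AI systems
    AMPLIFIED = "amplified"      # known, but intensified by foundation models
    NEW = "new"                  # intrinsic to foundation models

class Dimension(Enum):
    """Where the risk is associated with the model."""
    INPUT = "input"              # content provided to the model
    OUTPUT = "output"            # content generated by the model
    OTHER = "other challenges"   # challenges beyond input and output

@dataclass
class Risk:
    name: str
    category: Category
    dimension: Dimension
    concern: str  # why the risk matters during development, release, and use

# Illustrative entries only; see the paper's table for the full set of risks.
risks = [
    Risk("data bias", Category.TRADITIONAL, Dimension.INPUT,
         "Skewed training data can lead to unfair or discriminatory outputs."),
    Risk("hallucination", Category.NEW, Dimension.OUTPUT,
         "Generated content may be fluent but factually wrong."),
]

# Group risks by category to mirror the paper's presentation.
for category in Category:
    matching = [r.name for r in risks if r.category is category]
    print(category.value, "->", matching or "none in this sketch")
```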
In addition, this paper highlights some of the mitigation strategies and tools available such as the watsonx enterprise data and AI platform and open-source trustworthy AI tools. These strategies focus on balancing safety with innovation and allowing users to experience the power of AI and foundation models.
The examples below highlight how the information in the paper is being put to use.
Education and awareness
The Risk Atlas provides an interactive, educational guide to the taxonomy of risks described in the paper. It enables watsonx customers and the general public to explore the risks in greater detail, along with their implications for enterprises, examples, and IBM solutions that help mitigate them.
According to Michael Hind, Distinguished Research Staff Member in IBM Research, “The Risk Atlas enables risk managers, AI practitioners, and researchers to share a common AI risk vocabulary. It serves as a building block for risk mitigation strategies and new research technologies.”
Risk Identification Assessment
The Risk Atlas content is now available in watsonx.governance. The library of risks can be linked to AI use cases that use predictive models and generative AI. The process is automated with a Risk Identification Assessment questionnaire, which copies the risks identified as potentially applicable to the use case so the use case owner can assess them further. With just a few clicks, users can create a risk profile for their AI use case and put appropriate mitigations and controls in place. Once the use case risks have been assessed, the use case can be submitted for approval before model development begins.
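To illustrate the idea behind such a questionnaire, here is a hypothetical sketch of how answers might be mapped to a draft risk profile. It does not use the watsonx.governance API; the questions, risk names, and function below are invented purely for illustration.

```python
# Each questionnaire answer maps to risks that may apply to the use case.
# These questions and risk names are illustrative, not the product's own.
QUESTION_RISKS = {
    "Does the use case generate free-form text?": ["hallucination", "harmful output"],
    "Does it process personal or sensitive data?": ["data privacy", "re-identification"],
    "Will outputs directly inform decisions about people?": ["output bias", "lack of explainability"],
}

def build_risk_profile(answers: dict[str, bool]) -> list[str]:
    """Copy the risks flagged by 'yes' answers into a draft risk profile
    for further assessment by the use case owner."""
    profile: list[str] = []
    for question, applies in answers.items():
        if applies:
            profile.extend(QUESTION_RISKS.get(question, []))
    return sorted(set(profile))

answers = {
    "Does the use case generate free-form text?": True,
    "Does it process personal or sensitive data?": False,
    "Will outputs directly inform decisions about people?": True,
}
print(build_risk_profile(answers))
# ['hallucination', 'harmful output', 'lack of explainability', 'output bias']
```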
“The new Risk Identification Assessment questionnaire powered by Risk Atlas helps watsonx.governance users understand the level of risk associated with a use case and understand the type and frequency of monitoring needed to manage risk. The risk profile is captured as part of the model life cycle audit trail and helps to establish the explainability and transparency required for responsible AI adoption,” said Heather Gentile, Director of watsonx.governance Product Management for IBM Data and AI and an AI Ethics Focal Point.
Design thinking
For designers of generative AI systems, incorporating risk mitigation at every stage of the design process is crucial, especially during solution definition. By articulating user inputs, defining the data and training required, and identifying the variability in the generated output, teams can better understand the training, tuning, and inference risks that may be associated with their designs. By incorporating this risk mapping into the design process through focused design thinking activities, businesses can proactively mitigate those risks through design iterations or alternative solutions.
Adopting a human-centered design approach extends the assessment of risk to secondary and tertiary users, deepens understanding of all risks, including non-technical and societal risks, and pinpoints where they are likely to occur in the design and implementation phases. Addressing these risks at the outset of the process fosters the development of responsible and trustworthy AI solutions.
According to Adam Cutler, Distinguished Designer in AI Design, “Ethical decision-making isn’t another form of technical problem solving. Enterprise Design Thinking for data and AI helps teams to discover and solve data-driven problems while keeping human needs as the focus, by enabling whole teams to be intentional about purpose, value, and trust before a single line of code is written (or generated).”
Begin your journey today
Foundation models: Opportunities, risks and mitigations will take you on a journey toward realizing the potential of foundation models, understanding the risks they can pose, and learning about strategies to mitigate their potential effects.
Read Foundation models: Opportunities, risks and mitigations
Explore the AI Risk Atlas and other watsonx product documentation
Read more about AI Ethics at IBM