5 things to know about AI model cards
Every conversation about artificial intelligence nowadays seemingly ends with a warning to balance innovation with responsible governance. One might even say with great computing power comes great responsibility. It is clear why responsible AI matters: fears of discrimination, the difficulty of distinguishing misinformation from fact and even the future of TV all ride on the safe and transparent use of the technology. Organizations developing and deploying AI, particularly generative AI, have turned to model cards as one way to promote explainability and achieve that transparency. Here are five things AI governance professionals should know about model cards.
What are model cards?
First proposed in 2018, model cards are short documents provided with machine learning models that explain the context in which the models are intended to be used, details of the performance evaluation procedures and other relevant information. A machine learning model intended to evaluate voter demographics, for example, would be released with a model card providing performance metrics across conditions like culture, race, geographic location, sex and intersectional groups that are relevant to the intended application.
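To make "performance metrics across conditions" concrete, here is a minimal sketch of disaggregated evaluation in Python. The records and group names are hypothetical stand-ins for illustration, not drawn from any real model card.

```python
from collections import defaultdict

# Hypothetical evaluation records: (predicted, actual, group).
# The groups stand in for conditions a model card might report on,
# such as geographic location or demographic group.
results = [
    ("yes", "yes", "region_a"),
    ("no", "yes", "region_a"),
    ("yes", "yes", "region_b"),
    ("no", "no", "region_b"),
    ("yes", "no", "region_b"),
]

# Disaggregate accuracy by group, the per-condition breakdown a
# model card's performance section would surface.
correct, total = defaultdict(int), defaultdict(int)
for predicted, actual, group in results:
    total[group] += 1
    correct[group] += predicted == actual

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
```

Reporting the per-group numbers side by side, rather than a single aggregate score, is what lets readers spot uneven performance across the populations the model will serve.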
Different stakeholders benefit from model cards in different ways. AI practitioners can learn how a model would work for its intended use cases, while developers can compare a model's results to those of other models built for the same or a similar purpose and use the comparison to inform future builds or iterations. Policymakers can use model cards to understand how a model will affect individuals covered by their policies. Organizations can treat model cards as specification sheets when deciding whether to adopt or incorporate a new tool or system, and privacy professionals can learn whether a model uses personal or sensitive information and how consumers may be affected by that use.
Most importantly, for the current uses of and innovations in AI, model cards provide details about the construction of the machine learning model, like its architecture and training data. Seeing what kind of data was used to train the model is a large part of determining whether the output of the model will be biased.
What is the transparency principle?
AI governance refers to the system of policies, practices and processes an organization implements to manage and oversee its use of AI technology, measuring and balancing risks and benefits to ensure the AI aligns with the organization's objectives, is developed and used responsibly and ethically, and complies with applicable legal requirements. Principle-based AI governance takes security, safety, transparency, explainability, accountability, privacy and bias, among other principles, into account when creating governance processes. Privacy pros are likely already familiar with the EU General Data Protection Regulation's transparency principle, which requires that any information addressed to the public or to the data subject be concise, easily accessible, easy to understand, and written in clear and plain language. Similarly, the transparency principle in AI governs the extent to which information about an AI system is made available to stakeholders, including an understandable explanation of how the system works. This is precisely what model cards do: they surface information about the machine learning model to provide transparency into the AI system.
Transparency is related to explainability, but many see them as two distinct principles, especially as the policy community continues to debate the proper level of explanation owed to individuals about how a given model arrived at an output. Explainability seeks to give an understandable account of how a system arrives at its outputs, while transparency lifts the lid on a system to show its inner workings. Model cards do both.
What should I be looking for?
Model cards typically contain certain details for researchers, testers, policymakers, developers and other interested parties to review. Though the concept of model cards was introduced five years ago, we are still in the early days of AI governance and do not yet have mandatory or established requirements for model cards. Researchers have suggested the following contents for a model card; a minimal sketch of these sections as structured data follows the list:
- Model details. This section covers basic information about the model, including the people or organization developing the model, model date, version, type and architecture details, information about the training algorithms, parameters and fairness constraints, contact information for questions about the model, citation details and license information.
- Intended use. This section describes use cases for which the model was developed and those out of the model's intended scope, along with its intended users.
- Performance metrics. This section illustrates the real-world impact of the model and includes both relevant factors and evaluation factors that contribute to the model's intended performance, e.g., demographic groups, environmental conditions and technical attributes.
- Training data. Some organizations may provide limited information in this section because the training data or method is proprietary. In most cases, this section should include a general description of the provenance of the data, details of the statistical distribution of various factors in the training data sets, and other details on the data sets used in the creation of the model.
- Quantitative analysis. This section covers, for example, potential biases or limitations of the model, whether in its predictions or its ability to work outside of certain use cases.
- Ethical considerations and recommendations. This section covers ethical and responsible AI considerations when using the model, including concerns about privacy, fairness, and individual or societal impacts from the model's use. It would also include recommendations for further testing and monitoring the model.
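Taken together, these sections can be represented as structured data. The following is a minimal, hypothetical sketch in Python; the field names loosely mirror the sections above and are illustrative, not a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card following the suggested sections above."""
    # Model details
    name: str
    version: str
    developers: str
    license: str
    # Intended use
    intended_uses: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)
    # Performance metrics, keyed by evaluation factor (e.g., a demographic group)
    metrics: dict[str, float] = field(default_factory=dict)
    # Training data
    training_data: str = ""
    # Quantitative analysis / limitations
    limitations: list[str] = field(default_factory=list)
    # Ethical considerations and recommendations
    ethical_considerations: list[str] = field(default_factory=list)

card = ModelCard(
    name="sentiment-classifier",  # hypothetical model
    version="1.0",
    developers="Example Research Lab",
    license="Apache-2.0",
    intended_uses=["Classifying sentiment of English product reviews"],
    out_of_scope_uses=["Medical or legal decision-making"],
    metrics={"overall_f1": 0.91, "f1_region_a": 0.93, "f1_region_b": 0.86},
    training_data="Publicly available English product reviews, 2015-2020",
    limitations=["Performance degrades on non-English text"],
    ethical_considerations=["Monitor for demographic performance gaps"],
)
print(card.name, card.metrics)
```

Representing the card as data rather than free text makes it easier to check for completeness and to compare cards across models and organizations.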
What are organizations leading the AI effort doing?
Model cards are the current tool of choice for providing transparency into large language and other machine learning models. Organizations developing their own AI systems have typically released model cards alongside a companion research paper.
- Meta and Microsoft jointly released their Llama 2 model card with accompanying research this summer. This is the second version of their open-source large language model, which is free for research and commercial use. Llama 2's model card resides in the research paper's appendix and includes the model's details, intended use, hardware and software, training data, evaluation results, and ethical considerations and limitations. Most notably, the hardware and software section includes information about the model's carbon footprint.
- OpenAI's GPT-3 model card includes a details section covering the model date, type, version and links to the research paper; a model-use section; sections on data, performance and limitations, including biases; and a feedback section with a link for submitting questions or comments about the model.
- Google's face detection model card, which is part of the company's Cloud Vision application programming interface, includes sections for the model's description and architecture, performance metrics, limitations, trade-offs, and feedback. There is also a section where users can upload their own images to test the model, with a notice stating Google does not keep a copy of the image. While the previous two model cards were text-based, Google's model card for its face detection AI includes images that help guide the reader. The limitations section pairs an image with each limitation, like an unfocused image of a woman with text stating the model may not detect blurry faces, and a picture of a crowd with text about a similar limitation for pictures with too many subjects a certain distance from the camera. Google also released a model card toolkit for developers and other users to employ as a baseline for developing their own model cards; a usage sketch follows this list.
- IBM took a different approach with its AI Factsheet, a document that, like a model card, contains details about the creation, deployment, purpose, data set characteristics and criticality of an AI model. The organization touts the AI Factsheet's ability to manage AI governance across a company by allowing users to view models in production, models in development and models that need validation, as well as to manage the communication flow from data scientists to model administrators. Unlike model cards, this tool's main purpose is to manage and automate the AI lifecycle rather than to act as the transparency solution for an AI model.
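As a rough illustration of how Google's model card toolkit is meant to be used, the sketch below follows the toolkit's published examples. The exact calls and field names have varied across releases, so treat them as assumptions to verify against the installed version; the model name and text are hypothetical.

```python
# Rough sketch of Google's Model Card Toolkit workflow
# (pip install model-card-toolkit). The API has varied across releases,
# so the calls and field names below are assumptions to verify against
# your installed version.
import model_card_toolkit as mctlib

toolkit = mctlib.ModelCardToolkit(output_dir="model_card_output")

# Scaffold a blank card, then fill in its sections.
model_card = toolkit.scaffold_assets()
model_card.model_details.name = "example-face-detector"  # hypothetical model
model_card.model_details.overview = (
    "Detects faces in still images; not intended for identification."
)

# Persist the edits and render the card as a shareable HTML document.
toolkit.update_model_card(model_card)
html = toolkit.export_format()
```

The appeal of a toolkit like this is that it generates a consistently structured, human-readable card from the same fields every time, which supports the cross-organization comparability discussed below.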
Is this the solution to the transparency challenge?
While model cards are a great example of transparency in AI, there are educational and professional barriers to entry. They serve the same purpose as, and are reminiscent of, privacy nutrition labels. First introduced in 2009, privacy nutrition labels were developed by researchers as a transparency mechanism to show consumers how organizations collect, use, process and share personal information. Like those nutrition labels, model cards are intended to be easy to decipher. However, due to the very nature of AI, it takes effort to break a complex model down into what is essentially a one-page spec sheet.
Additionally, model cards are intended for the full range of stakeholders, including curious but not technologically fluent end users. In reality, they are geared more toward developers and researchers who are steeped in AI and algorithmic development in a way that end users typically are not. This is both a feature and a bug: clarifying the purpose and limitations of models between developers and deployers is a major challenge that model cards address, while standardizing explainability for end users still needs to be addressed by AI governance.
Leaders also use model cards as their go-to answer when illustrating their organization's belief in transparency and fairness. On closer inspection, however, the training data, one of the parts of a model card that most directly affects bias in a model's output, is often summarized as simply "open-source, publicly available information."
While this statement is not false for the models it describes, it glosses over how publicly available data can include private or sensitive data made public through a breach, other unauthorized access or poor access controls. Those in the industry understand the harsh reality of the all-encompassing "publicly available data": what is public also includes what was once sensitive and private. This distinction is commonly understood only in AI circles, which suggests model cards may fail to provide meaningful transparency to "outsiders" and could become a crutch for checking the box on organizational accountability, even as they increase the transparency of an algorithm generally.
That being said, there are obvious benefits to model cards. They promote transparency and accountability in AI development through a standardized document that can be compared across organizations and models, help organizations better iterate from existing models by providing a synopsis of model details and limitations, and promote collaboration among the AI community. Model cards are not the end-all, be-all solution, but are one innovation we need for the future of responsible AI.