This blog continues our series called “AI Essentials,” which aims to bridge the knowledge gap surrounding AI-related topics. It explains what model weights are, the role they play in model training, and their implications for startups and public policy.
As AI systems take in vast amounts of data, they have to determine which characteristics of the data are important. For example, if an AI model is being trained to differentiate between dogs and cats, the model will place more importance on relevant distinguishing features (like the shape of the ears or length of nose) and less importance on less relevant features (like the number of legs or color of fur).
These distinctions are reflected in model weights: numerical parameters that determine how much importance the model assigns to each feature of the data. Highly complex AI systems can have billions of weights—GPT-3, for example, has roughly 175 billion. You can think of weights as volume knobs that control how much influence each input (like an image detail or a piece of text) has on the model's final decision. During training, these weights are continually adjusted as the model learns from the data, refining its accuracy by emphasizing some inputs and de-emphasizing others.
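To make the "volume knob" picture concrete, here is a minimal, purely illustrative sketch (not how any production model works): a classifier with one weight per feature, where training nudges each weight whenever the model gets an example wrong. The feature names and values below are made up for illustration.

```python
def predict(weights, bias, features):
    # Weighted sum: each weight scales how much its feature matters.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0  # 1 = dog, 0 = cat

# Hypothetical features: [ear_pointiness, nose_length], scaled 0 to 1.
training_data = [
    ([0.9, 0.2], 0),  # pointy ears, short nose -> cat
    ([0.2, 0.9], 1),  # floppy ears, long nose  -> dog
    ([0.8, 0.1], 0),
    ([0.3, 0.8], 1),
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how far each "knob" turns per adjustment

# Perceptron-style training: when the model is wrong, turn each knob
# toward the correct answer; when it is right, leave the knobs alone.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(weights, bias, features)
        for i, x in enumerate(features):
            weights[i] += lr * error * x
        bias += lr * error

# After training, nose_length carries a positive weight (pushes toward
# "dog") while ear_pointiness carries a negative one (pushes toward "cat").
print(weights)
```

Even in this toy, the trained behavior lives entirely in those two numbers; real models differ mainly in having billions of such knobs adjusted by more sophisticated procedures like gradient descent.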
Model weights play a crucial role in determining the outputs of AI systems, and access to them can enable an individual to make beneficial changes to a model (such as addressing a biased result or creating a new product) or malevolent ones (such as enabling the model to create CSAM or other illegal content). For certain models, like those designed for fraud detection, securing model weights is paramount, because access to the weights could enable criminals to circumvent detection.
Conversely, for general-purpose models, access to model weights enables transparency and innovation. These models, often called open source or open-weight models, are pre-trained models whose weights are publicly available. Since a model's functionality depends primarily on the configuration of its weights, pre-trained models let startups sidestep the significant data and compute resources required to train AI from scratch. Like open source software—which contributed to an orders-of-magnitude reduction in the cost of launching a startup—open source AI promises to reduce the cost of AI innovation and bolster entrepreneurship.
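As a toy illustration of why releasing weights is effectively releasing the model (the file format, numbers, and names here are all hypothetical): one party publishes its trained weights, and anyone who loads them reproduces the model's behavior immediately, with no training data or compute of their own.

```python
import json

def predict(weights, bias, features):
    # Same toy classifier idea: a weighted sum followed by a threshold.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# A "lab" publishes its trained weights (made-up values for illustration;
# in practice this is a multi-gigabyte weights file, not a small JSON blob).
published = json.dumps({"weights": [-0.5, 0.9], "bias": 0.0})

# A "startup" loads the open weights and runs the model right away.
loaded = json.loads(published)
print(predict(loaded["weights"], loaded["bias"], [0.2, 0.9]))  # -> 1
```

The expensive step (training) produced the numbers; sharing the numbers shares the capability, which is exactly the tradeoff at the heart of the open-weights debate.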
Furthermore, open-weight models promote transparency and allow more thorough scrutiny by researchers and policymakers, helping address core concerns about AI outputs such as bias and hallucination.
Policymakers have highlighted these tradeoffs; for example, the Biden Administration's AI executive order directed a study of such models. The resulting report highlighted both their benefits—for innovation, research, and transparency—and their risks of abuse. The report concluded that restrictions were not appropriate at this time, but open source and access to model weights promise to remain subjects of AI policy debates for the foreseeable future.