
Language models today, while useful for a variety of tasks, are still limited. The only information they can learn from is their training data. This information can be out-of-date and is one-size-fits-all across applications. Furthermore, the only thing language models can do out of the box is emit text. This text can contain useful instructions, but to actually follow these instructions you need another process.

Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data. In response to a user’s explicit request, plugins can also enable language models to perform safe, constrained actions on their behalf, increasing the usefulness of the system overall.

We expect that open standards will emerge to unify the ways in which applications expose an AI-facing interface. We are working on an early attempt at what such a standard might look like (a rough sketch of the idea appears below), and we’re looking for feedback from developers interested in building with us.

Today, we’re beginning to gradually enable existing plugins from our early collaborators for ChatGPT users, beginning with ChatGPT Plus subscribers. The first plugins have been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier. We’re also beginning to roll out the ability for developers to create their own plugins for ChatGPT.

In the coming months, as we learn from deployment and continue to improve our safety systems, we’ll iterate on this protocol, and we plan to enable developers using OpenAI models to integrate plugins into their own applications beyond ChatGPT.

Connecting language models to external tools introduces new opportunities as well as significant new risks.
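To make the idea of an AI-facing interface concrete, here is a rough sketch, in Python, of how a plugin might describe itself to a language model. The field names and URL are illustrative assumptions rather than the actual specification; the point is simply that a plugin publishes a machine-readable description of itself and of its API, and the model reads that description to decide when and how to call the plugin.

```python
# Illustrative sketch only: field names and the URL below are assumptions,
# not the actual plugin specification.
weather_plugin = {
    "name_for_model": "weather",
    "description_for_model": (
        "Look up current conditions and forecasts for a given city. "
        "Use this when the user asks about the weather."
    ),
    "api": {
        "type": "openapi",
        # Hypothetical location of the plugin's machine-readable API spec.
        "url": "https://example.com/openapi.yaml",
    },
}
```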

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.

Plugin developers who have been invited off our waitlist can use our documentation to build a plugin for ChatGPT, which then lists the enabled plugins in the prompt shown to the language model as well as documentation to instruct the model how to use each.
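As a rough illustration of that mechanism, the sketch below shows how descriptions of the enabled plugins could be assembled into the prompt shown to the model. The function, field names, and prompt wording are assumptions for illustration, not the actual implementation used by ChatGPT.

```python
def build_plugin_prompt(enabled_plugins):
    """Compose a prompt section describing each enabled plugin to the model.

    Illustrative sketch only: the real prompt format is not described in this
    post; this just shows the general idea of injecting plugin names and usage
    documentation into the model's context.
    """
    lines = ["You have access to the following plugins:"]
    for plugin in enabled_plugins:
        lines.append(f"- {plugin['name_for_model']}: {plugin['description_for_model']}")
    return "\n".join(lines)


# Example usage with a hypothetical plugin description.
plugins = [
    {
        "name_for_model": "weather",
        "description_for_model": "Look up current weather for a given city.",
    }
]
print(build_plugin_prompt(plugins))
```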


In line with our iterative deployment philosophy, we are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges, all of which we’ll have to get right in order to achieve our mission.
