Loading stock data...

The latest announcements from OpenAI

OpenAI

ChatGPT users now have more control over their data than ever before, with the introduction of new features that allow for better management and protection of user conversations.

Turn Off Chat History: A New Level of Control

We are excited to announce that we have introduced the ability to turn off chat history in ChatGPT. This feature allows users to choose which conversations can be used to train our models, providing an added layer of control over their data. Conversations started when chat history is disabled will not be used to train and improve our models, and will also not appear in the history sidebar.

How to Turn Off Chat History:

To turn off chat history, simply follow these steps:

  1. Open the ChatGPT settings
  2. Look for the "Chat History" section
  3. Toggle the switch to disable chat history

You can change this setting at any time, and it will not affect your ability to use ChatGPT.

What Happens When I Turn Off Chat History?

When you turn off chat history, we will retain new conversations for 30 days before permanently deleting them. We may review these conversations when needed to monitor for abuse, but they will not be used to train or improve our models.

Existing Opt-Out Process: A Better Alternative

We understand that some users may have concerns about their data being used in ChatGPT. Our existing opt-out process has been available for a while now, and we are committed to making it easier for users to manage their data.

However, we recognize that the opt-out process can be complex and time-consuming. That’s why we’ve introduced the ability to turn off chat history, providing a simpler and more straightforward way to control your data.

Benefits of Turning Off Chat History

By turning off chat history, you can:

  • Control which conversations are used to train our models
  • Keep your conversations private and secure
  • Enjoy a more seamless and intuitive user experience

At OpenAI, we believe that developing safe and advanced AI requires collaboration with experts from various fields. That’s why we’re excited to introduce the Bug Bounty Program, an initiative designed to recognize and reward security researchers who contribute to keeping our technology and company secure.

What is the Bug Bounty Program?

The Bug Bounty Program is a way for us to acknowledge and incentivize the valuable insights of security researchers who identify vulnerabilities or bugs in our systems. By participating in this program, you will play a crucial role in making our technology safer for everyone.

How Does the Bug Bounty Program Work?

Here’s how it works:

  1. Security researchers discover vulnerabilities or bugs in our systems
  2. They report these issues to us through the Bug Bounty Program page
  3. We review and validate the findings, then reward the researcher with a bounty

Benefits of Participating in the Bug Bounty Program

By participating in the Bug Bounty Program, you can:

  • Contribute to making our technology safer for everyone
  • Receive recognition and rewards for your valuable insights
  • Join a community of security researchers who share your passion for innovation and safety

At OpenAI, we are committed to developing powerful AI systems that prioritize safety and alignment. Here’s how we approach AI safety:

Rigorous Testing and Feedback

Before releasing any new system, we conduct rigorous testing and engage external experts for feedback. This ensures that our models are safe and reliable.

Reinforcement Learning with Human Feedback

We use techniques like reinforcement learning with human feedback to improve the behavior of our models and ensure they align with human values.

Broad Safety and Monitoring Systems

We build broad safety and monitoring systems to detect potential issues early on and prevent harm.

Examples of Our Commitment to AI Safety

For example, after our latest model, GPT-4, finished training, we spent more than 6 months working across the organization to make it safer and more aligned prior to releasing it publicly. This is just one example of how we prioritize safety and alignment in our development process.

Why Regulation is Needed

We believe that powerful AI systems should be subject to rigorous safety evaluations. That’s why we actively engage with governments on the best form of regulation to ensure that such practices are adopted.

We’re excited to introduce initial support for plugins in ChatGPT, a new way for users to interact and access up-to-date information. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.

How Do Plugins Work?

Here’s how it works:

  1. We roll out plugins in ChatGPT gradually, studying their real-world use, impact, and safety and alignment challenges
  2. Users can access plugins through the ChatGPT interface, allowing them to customize their experience and interact with our models in new ways

Benefits of Plugins

By using plugins, users can:

  • Access up-to-date information and resources
  • Run computations and perform tasks efficiently
  • Enjoy a more seamless and intuitive user experience

We hope you enjoy these new features and initiatives! As always, we’re committed to making ChatGPT safer, more aligned, and more useful for everyone.