New Research from OpenAI and Anthropic Sheds Light on How LLMs Work, With Implications for Security and Bias

Large language models are difficult to adjust because their behavior emerges from vast networks of neuron-like components, and AI developers cannot easily modify that behavior without knowing which internal components correspond to which concepts. Anthropic recently released a detailed map of the inner workings of its Claude AI model, while OpenAI published research on identifying interpretable patterns inside GPT-4.

Anthropic's map lets researchers explore how "features," which play a role similar to neurons, shape a generative AI model's output. Some of these features are safety relevant, helping the model steer clear of dangerous topics. By extracting interpretable features from its Claude 3 model, Anthropic can identify related concepts and topics and see how they connect.

OpenAI's research focuses on training sparse autoencoders, a technique for decomposing a model's internal activations into a sparse set of more understandable features, with the goal of making future generative AI models easier to interpret. (A rough illustration of the technique appears at the end of this article.)

Both studies highlight the potential for improving AI safety and reducing bias. Anthropic's exploration of cybersecurity-related features, such as those tied to unsafe code and backdoors, could help developers tune AI models to handle sensitive topics appropriately. The ability to dial features up or down could also help prevent biased or harmful output, which in turn strengthens the security posture of businesses that deploy these models.

Plans are underway to use this research to make generative AI safer and to identify undesirable behaviors introduced during model fine-tuning. TechRepublic has reached out to Anthropic for further insights on its research.
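To give a rough sense of what a sparse autoencoder does, the sketch below shows a minimal version in Python using PyTorch. It is only an illustration of the general technique, not the method from either company's paper: the layer sizes, sparsity penalty, variable names, and random stand-in activations are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: maps a model's internal activations to a
    larger set of mostly-zero 'features', then reconstructs the activations."""
    def __init__(self, activation_dim: int, feature_dim: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, feature_dim)
        self.decoder = nn.Linear(feature_dim, activation_dim)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature values non-negative; most end up near zero (sparse).
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

# Illustrative training step; dimensions and penalty weight are assumptions.
model = SparseAutoencoder(activation_dim=512, feature_dim=4096)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

activations = torch.randn(64, 512)  # stand-in for activations from one LLM layer
features, reconstruction = model(activations)

# Loss = reconstruction error + L1 penalty that pushes features toward sparsity.
loss = nn.functional.mse_loss(reconstruction, activations) + 1e-3 * features.abs().mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because each learned feature is (ideally) tied to a recognizable concept, researchers can in principle scale an individual feature up or down when it fires, which is the kind of manipulation that could be used to discourage unsafe or biased output.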
