Databricks, a data and artificial intelligence (AI) company, has introduced DBRX, a large language model (LLM) that it claims surpasses other open source models in performance. On benchmarks covering language understanding, programming, maths, and logic, DBRX outperforms existing open LLMs such as Llama 2 70B and Mixtral-8x7B.

The company aims to democratize the training and tuning of high-performing LLMs for every enterprise, removing the need to rely on a handful of closed models. DBRX lets enterprises build customized reasoning capabilities on their own data, and Databricks expects it to accelerate the trend of organizations replacing proprietary models with open source ones.

DBRX is optimized for efficiency through a mixture-of-experts architecture, in which only a fraction of the model's parameters is activated for any given input. The model is available for research and commercial use on GitHub, Hugging Face, and various cloud platforms, and Databricks offers services to help enterprises build and deploy generative AI applications with it. Because the model is open source, customers retain control over their data and model weights, which makes it more appealing to regulated industries. The move aligns with a growing preference among enterprise executives for open source models.
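The mixture-of-experts idea behind DBRX's efficiency can be illustrated with a minimal sketch: a router scores a set of expert sub-networks and only the top-scoring few process each token, so most parameters sit idle per input. The dimensions, expert count, and linear-layer experts below are toy assumptions for illustration, not DBRX's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; real MoE models use many large experts.
D_MODEL = 8      # hidden size of a token vector
N_EXPERTS = 4    # total experts in the layer
TOP_K = 2        # experts actually activated per token

# Each "expert" is modeled here as a simple linear transform.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
# The router produces one score per expert for a given token.
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route token vector x through only its top-k experts."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]      # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS weight matrices are used per token:
    # this sparsity is what makes inference cheaper than a dense model
    # with the same total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_layer(token)
print(out.shape)  # (8,)
```

The design trade-off this sketch illustrates is why an MoE model can hold a very large total parameter count while keeping per-token compute closer to that of a much smaller dense model.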