Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 72B Qwen2 base language model.
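Since this entry is a base (non-instruct) checkpoint, a minimal sketch of loading it for plain text completion with Hugging Face transformers may help; the `Qwen/Qwen2-72B` checkpoint name and the generation settings below are assumptions, not part of this description.

```python
# Minimal sketch (assumption: the base model is published as "Qwen/Qwen2-72B" on Hugging Face).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduce memory footprint for a 72B model
    device_map="auto",           # shard across available GPUs
)

# Base models are used for text completion rather than chat-style prompting.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```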
Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.
Llama 3.1 70B is ideal for content creation, conversational AI, language understanding, R&D, and enterprise applications.
Llama 3.1 8B is best suited for environments with limited computational power and resources. The model excels at text summarization, text classification, sentiment analysis, and language translation that requires low-latency inference.
Llama 3.2 is a suite of AI models released by Meta AI, designed to revolutionize edge AI and vision applications with open, customizable models.
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions.
OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT, a strategy inspired by offline reinforcement learning.
WizardLM-2 7B is the fastest model in the series and achieves performance comparable to leading open-source models 10x its size.
Meta developed and released the Meta Llama 3 family of large language models (LLMs). The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks.
The WizardLM-2 8x22B is a state-of-the-art large language model, demonstrating highly competitive performance in complex chat, multilingual, reasoning, and agent tasks.
The Mistral-7B Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.
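For readers unfamiliar with mergekit, a task_arithmetic merge is driven by a small YAML config that names a shared base model plus weighted fine-tunes. The sketch below writes such a config from Python and is illustrative only; the model names and weights are hypothetical.

```python
# Illustrative mergekit task_arithmetic config (model names and weights are hypothetical).
from pathlib import Path

config = """\
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1   # assumed common base for the task vectors
models:
  - model: org/finetune-a               # hypothetical fine-tune
    parameters:
      weight: 0.6
  - model: org/finetune-b               # hypothetical fine-tune
    parameters:
      weight: 0.4
dtype: float16
"""

Path("merge-config.yml").write_text(config)
# The merge itself is typically run with the mergekit CLI, e.g.:
#   mergekit-yaml merge-config.yml ./merged-model
```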
An uncensored, fine-tuned model based on the Mixtral mixture of experts model that excels at coding tasks.
The Mixtral 8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture-of-Experts model.
Llama 2 is a collection of pretrained and fine-tuned generative text models; this is the 13B pretrained model.
Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
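A minimal sketch of running an image-to-video model like this with the diffusers `StableVideoDiffusionPipeline` is shown below; the checkpoint name, input path, and fps are assumptions.

```python
# Minimal sketch using diffusers (assumed checkpoint: "stabilityai/stable-video-diffusion-img2vid-xt").
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # assumed checkpoint name
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# The still image acts as the conditioning frame for the generated clip.
image = load_image("conditioning_frame.png")  # hypothetical input path
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```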
SD-XL Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input, with the added ability to inpaint images using a mask.
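A minimal sketch of mask-based inpainting with the diffusers `AutoPipelineForInpainting` follows; the checkpoint name, prompt, and file paths are assumptions.

```python
# Minimal sketch using diffusers (checkpoint name, prompt, and file paths are assumptions).
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # assumed checkpoint name
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# White pixels in the mask mark the region to repaint; black pixels are preserved.
image = load_image("scene.png").resize((1024, 1024))      # hypothetical input image
mask = load_image("scene_mask.png").resize((1024, 1024))  # hypothetical mask

result = pipe(
    prompt="a tiger sitting on a park bench",
    image=image,
    mask_image=mask,
    strength=0.85,  # how strongly the masked region is re-generated
).images[0]
result.save("inpainted.png")
```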