The following information applies only to the Unstructured Ingest CLI and the Unstructured Ingest Python library.

The Unstructured SDKs for Python and JavaScript/TypeScript, and the Unstructured open-source library, do not support this functionality.

Concepts

You can use the Unstructured Ingest CLI or the Unstructured Ingest Python library to generate embeddings after the partitioning and chunking steps in an ingest pipeline. The chunking step is particularly important because it ensures that the text pieces (also known as documents or elements) fit within the input limits of the embedding model.

You generate embeddings by specifying an embedding model that is provided or used by an embedding provider. An embedding model creates arrays of numbers, known as vectors, that represent the text extracted by Unstructured. These vectors are stored (embedded) alongside the data itself.

These vector embeddings allow vector databases to analyze and process the inherent properties of, and relationships between, these pieces of data more quickly and efficiently. For example, you can save the extracted text along with its embeddings in a vector store. When a user queries a retrieval-augmented generation (RAG) application, the application can use a vector database to perform a similarity search in that vector store and then return the documents whose embeddings are closest to the user’s query.
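To illustrate the similarity search step, here is a minimal, generic sketch (it is not part of the Unstructured libraries or any particular vector database) that ranks a few toy stored vectors against a query vector by cosine similarity:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for embeddings that a real embedding model would produce.
stored_chunks = {
    "handbook.docx, chunk 7": np.array([0.91, 0.05, 0.40]),
    "invoice.pdf, chunk 3": np.array([0.12, 0.88, 0.33]),
}
query_embedding = np.array([0.10, 0.80, 0.30])  # embedding of the user's question

# Rank the stored chunks by similarity to the query; a RAG application would
# pass the text of the top-ranked chunks to the language model.
ranked = sorted(
    stored_chunks.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
for name, vector in ranked:
    print(name, round(cosine_similarity(query_embedding, vector), 3))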

Learn more about chunking and embedding.

Generate embeddings

To use the Ingest CLI or Ingest Python library to generate embeddings, do the following:

  1. Choose the embedding provider that you want to use from among the following allowed providers, and note the provider’s ID:

    • aws-bedrock (Amazon Bedrock)
    • huggingface (Hugging Face)
    • mixedbread-ai (Mixedbread)
    • octoai (Octo AI)
    • openai (OpenAI)
    • togetherai (together.ai)
    • vertexai (Google Cloud Vertex AI)
    • voyageai (Voyage AI)
  2. Install the required Python package for your embedding provider by running the applicable command:

    • For aws-bedrock, run pip install "unstructured-ingest[bedrock]".
    • For huggingface, run pip install "unstructured-ingest[embed-huggingface]".
    • For mixedbread-ai, run pip install "unstructured-ingest[embed-mixedbreadai]".
    • For octoai, run pip install "unstructured-ingest[embed-octoai]".
    • For openai, run pip install "unstructured-ingest[openai]".
    • For togetherai, run pip install "unstructured-ingest[togetherai]".
    • For vertexai, run pip install "unstructured-ingest[embed-vertexai]".
    • For voyageai, run pip install "unstructured-ingest[embed-voyageai]".
  3. Some embedding providers allow you to choose the model that you want to use. If you choose a model other than the provider’s default, note the model’s name. For the list of available models, see your provider’s documentation.

  4. Note the special settings to connect to the provider:

    • For aws-bedrock, you’ll need an AWS access key value, the corresponding AWS secret access key value, and the corresponding AWS Region identifier. Get an AWS access key and secret access key.
    • For huggingface, if you use a gated model (a model with special conditions that you must accept before you can use it, or a privately published model), you’ll need an HF inference API key value, beginning with hf_. Get an HF inference API key. To learn whether your model requires an HF inference API key, see your model provider’s documentation.
    • For mixedbread-ai, you’ll need a Mixedbread API key value. Get a Mixedbread API key.
    • For octoai, you’ll need an Octo AI API token value. Get an Octo AI API token.
    • For openai, you’ll need an OpenAI API key value. Get an OpenAI API key.
    • For togetherai, you’ll need a together.ai API key value. Get a together.ai API key.
    • For vertexai, you’ll need the path to a Google Cloud credentials JSON file. For more information about creating this file, see the Google Cloud documentation.
    • For voyageai, you’ll need a Voyage AI API key value. Get a Voyage AI API key.
  5. Apply all of this information in your Ingest CLI command or Ingest Python library code, and then run the command or code.
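For reference, here is a minimal sketch of what such Ingest Python library code might look like, using a local source and destination and the openai provider. It assumes the v2 Pipeline API of the unstructured-ingest package (Pipeline.from_configs with an EmbedderConfig); module paths, configuration names, and fields can differ between releases, so check the interfaces in your installed version and substitute the connector configurations for your own source and destination. The input and output paths and the model name are placeholders.

import os

# Assumed v2 Pipeline API of the unstructured-ingest package; module paths and
# configuration field names may differ in the release you have installed.
from unstructured_ingest.v2.pipeline.pipeline import Pipeline
from unstructured_ingest.v2.interfaces import ProcessorConfig
from unstructured_ingest.v2.processes.connectors.local import (
    LocalConnectionConfig,
    LocalDownloaderConfig,
    LocalIndexerConfig,
    LocalUploaderConfig,
)
from unstructured_ingest.v2.processes.partitioner import PartitionerConfig
from unstructured_ingest.v2.processes.chunker import ChunkerConfig
from unstructured_ingest.v2.processes.embedder import EmbedderConfig

if __name__ == "__main__":
    Pipeline.from_configs(
        context=ProcessorConfig(),
        # Source: read files from a local input directory (placeholder path).
        indexer_config=LocalIndexerConfig(input_path="local-ingest-source"),
        downloader_config=LocalDownloaderConfig(),
        source_connection_config=LocalConnectionConfig(),
        # Partition locally, then chunk so that each text piece fits within the
        # embedding model's input limits.
        partitioner_config=PartitionerConfig(),
        chunker_config=ChunkerConfig(chunking_strategy="by_title"),
        # Embed: the provider ID from step 1, an optional model name from step 3,
        # and the provider's credentials from step 4 (read here from an
        # environment variable; the model name is a placeholder).
        embedder_config=EmbedderConfig(
            embedding_provider="openai",
            embedding_model_name="text-embedding-3-small",
            embedding_api_key=os.getenv("OPENAI_API_KEY"),
        ),
        # Destination: write the partitioned, chunked, and embedded JSON output
        # to a local output directory (placeholder path).
        uploader_config=LocalUploaderConfig(output_dir="local-ingest-output"),
    ).run()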