Create a GenAI knowledge base

After you've deployed the AI infrastructure and identified the data sources from your FSx for ONTAP datastores that you'll integrate into your knowledge base, you're ready to build the knowledge base using Workload Factory. As part of this step, you'll also define the AI characteristics and create conversation starters.

About this task

Knowledge bases support two data integration modes: public mode and enterprise mode.

Public mode

A knowledge base can be used without integrating data sources from your organization. In this case, an application integrated with the knowledge base will only provide results from publicly available information on the internet. This is known as a public mode integration.

Enterprise mode

In most cases you'll want to integrate data sources from your organization into the knowledge base. This is known as an enterprise mode integration because the knowledge base draws on knowledge from your enterprise.

Data sources from your organization may contain personally identifiable information (PII). To safeguard this sensitive information, you can enable data guardrails when creating and configuring knowledge bases. Data guardrails, powered by BlueXP classification, identify and mask PII, making it inaccessible and irretrievable.

Note Data guardrails can be enabled or disabled at any time. Each time you change the setting, Workload Factory rescans the entire knowledge base from scratch, which incurs a cost.

Create and configure the knowledge base

The knowledge base configuration defines characteristics such as the Amazon Bedrock AI models and the embedding format that the knowledge base uses.

Steps
  1. Log in to Workload Factory using one of the console experiences.

  2. In the AI workloads tile, select Deploy & manage.

  3. From the Knowledge bases tab, select Add knowledge base.

  4. On the Define knowledge base page, configure the knowledge base settings:

    1. Name: Enter the name you want to use for the knowledge base.

    2. Description: Enter a detailed description for the knowledge base.

    3. Embedding model: The embedding model defines how your data will be converted into vector embeddings for the knowledge base (see the model check example after these steps). Workload Factory supports the following models:

      • Titan Embeddings G1 - Text

      • Titan Text Embeddings v2

      • Titan Multimodal Embeddings G1

        Note that you must have already enabled the embedding model from Amazon Bedrock.

    4. Chat model: Choose from Claude chat models that are integrated in Amazon Bedrock. Note that you must have already enabled the chat model from Amazon Bedrock.

      Learn more about the available models to help make your selection: Anthropic's Claude in Amazon Bedrock

    5. Data guardrails: Choose whether you want to enable or disable data guardrails. Learn about data guardrails, powered by BlueXP classification.

      The following prerequisites must be met before you can enable data guardrails.

      • A service account is required to communicate with BlueXP classification. You must have the Organization admin role on your BlueXP tenancy account for service account creation. A member who has the Organization admin role can complete all actions in BlueXP. Learn how to add a role to a member in BlueXP

      • The AI engine must have access to the BlueXP API endpoint: https://api.bluexp.netapp.com (see the connectivity check example after these steps).

      • You'll need to do the following as described in BlueXP classification documentation:

        1. Create a BlueXP Connector

        2. Ensure that your environment can meet the prerequisites

        3. Deploy BlueXP classification

    6. Conversation starters: Choose whether you want to provide up to four conversation starter prompts that are displayed to users who interact with a chatbot that uses this knowledge base. We recommend that you enable this setting.

      If you activate conversation starters, "Automatic mode" is selected by default. "Manual mode" can be enabled only after you've added data sources to your knowledge base. Learn how to modify knowledge base settings.

    7. FSx for ONTAP file system: When you define a new knowledge base, Workload Factory creates a new Amazon FSx for NetApp ONTAP volume to store it. Choose an existing file system and SVM (also called a storage VM) where the new volume will be created (see the file system listing example after these steps).

    8. Snapshot policy: Choose a snapshot policy from the list of existing policies defined in the workload factory storage inventory. Recurring snapshots of the knowledge base will automatically be created at a frequency based on the snapshot policy you select.

      If the snapshot policy you need doesn't exist, you can create a snapshot policy on the storage VM that contains the volume.

  5. Select Create knowledge base to add the knowledge base to GenAI.

    A progress indicator appears while the knowledge base is created.

    After the knowledge base is created, you have the option to add a data source to your new knowledge base or to end the process without adding a data source. We recommend that you select Add data source and add one or more data sources now.
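
Before you select the embedding and chat models, you can confirm that they're enabled in your Amazon Bedrock account by invoking them directly. The following Python sketch uses boto3 and the Bedrock Runtime API; the region and model IDs are examples only, so substitute the models that you actually enabled.

  # Minimal sketch: verify that the embedding and chat models are enabled in
  # Amazon Bedrock before you create the knowledge base.
  # Assumptions: boto3 is installed, AWS credentials are configured, and the
  # example region and model IDs match what you enabled.
  import json

  import boto3

  bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

  # Embedding model check, for example Titan Text Embeddings v2.
  response = bedrock.invoke_model(
      modelId="amazon.titan-embed-text-v2:0",  # example model ID
      body=json.dumps({"inputText": "Test sentence for the knowledge base."}),
      contentType="application/json",
      accept="application/json",
  )
  embedding = json.loads(response["body"].read())["embedding"]
  print(f"Embedding dimensions: {len(embedding)}")

  # Chat model check, for example a Claude model, using the Converse API.
  reply = bedrock.converse(
      modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
      messages=[{"role": "user", "content": [{"text": "Reply with the word OK."}]}],
  )
  print(reply["output"]["message"]["content"][0]["text"])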
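
If you plan to enable data guardrails, you can also verify from the AI engine host that the BlueXP API endpoint is reachable. This minimal Python check uses only the standard library; any HTTP status code, even an error status, proves that the endpoint was reached, while a connection error points to a network or proxy problem.

  # Minimal sketch: confirm that the AI engine host can reach the BlueXP API
  # endpoint required for data guardrails.
  import urllib.error
  import urllib.request

  ENDPOINT = "https://api.bluexp.netapp.com"

  try:
      with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
          print(f"Endpoint reachable, HTTP status {response.status}")
  except urllib.error.HTTPError as err:
      # An HTTP error such as 401 or 404 still means the endpoint was reached.
      print(f"Endpoint reachable, HTTP status {err.code}")
  except urllib.error.URLError as err:
      print(f"Endpoint not reachable: {err.reason}")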
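
If you're not sure which FSx for ONTAP file system and SVM to choose, you can list them with the AWS SDK. The following sketch assumes boto3 with credentials that can read FSx resources; the region is an example.

  # Minimal sketch: list FSx for ONTAP file systems and their SVMs so you know
  # which names to pick when you create the knowledge base.
  import boto3

  fsx = boto3.client("fsx", region_name="us-east-1")  # example region

  for fs in fsx.describe_file_systems()["FileSystems"]:
      if fs["FileSystemType"] != "ONTAP":
          continue
      name = next((t["Value"] for t in fs.get("Tags", []) if t["Key"] == "Name"), "(unnamed)")
      print(f"File system: {name} ({fs['FileSystemId']})")

      svms = fsx.describe_storage_virtual_machines(
          Filters=[{"Name": "file-system-id", "Values": [fs["FileSystemId"]]}]
      )["StorageVirtualMachines"]
      for svm in svms:
          print(f"  SVM: {svm['Name']} ({svm['StorageVirtualMachineId']})")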

Add data sources to the knowledge base

You can add one or more data sources to populate the knowledge base with your organization's data.

About this task

The maximum number of supported data sources is 10.

Steps
  1. After you select Add data source, the Select a file system page displays.

  2. Select a file system: Select the FSx for ONTAP file system where your data source files reside and select Next.

  3. Select a volume: Select the volume on which your data source files reside and select Next.

    When selecting files stored using the SMB protocol, you'll need to enter the Active Directory information, which includes the domain, IP address, user name, and password.

  4. Select a data source: Select the data source location based on where you saved the files. This can be an entire volume, or just a specific folder or sub-folder in the volume. Select Next.

  5. Define AI parameters: In the Chunking strategy section, define how the GenAI engine splits data source content into chunks when the data source is integrated with a knowledge base (see the chunking example after these steps). You can choose one of the following strategies:

    • Multi-sentence chunking: Organizes information from your data source into sentence-defined chunks. You can choose how many sentences make up each chunk (up to 100).

    • Overlap-based chunking: Organizes information from your data source into character-defined chunks that can overlap neighboring chunks. You can choose the size of each chunk in characters and how much each chunk overlaps with adjacent chunks. You can configure a chunk size between 50 and 3,000 characters and an overlap percentage between 1% and 99%.

      Note Choosing a high overlap percentage can greatly increase storage requirements with only slight improvements in retrieval accuracy.
  6. In the Permission aware section, which is available only when the selected data source resides on a volume that uses the SMB protocol, you can enable or disable permission-aware responses:

    • Enabled: Users of the chatbot who access this knowledge base will only get responses to queries from data sources to which they have access.

    • Disabled: Users of the chatbot will receive responses using content from all integrated data sources.

  7. Select Add to add this data source to your knowledge base.
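
To help you choose a chunking strategy, the following simplified Python sketch shows how the two strategies split text. It only illustrates what the parameters control; the GenAI engine's actual implementation may differ.

  # Simplified illustration of the two chunking strategies.
  import re

  def multi_sentence_chunks(text, sentences_per_chunk=5):
      """Group text into chunks of N sentences (the UI allows up to 100)."""
      sentences = re.split(r"(?<=[.!?])\s+", text.strip())
      return [
          " ".join(sentences[i:i + sentences_per_chunk])
          for i in range(0, len(sentences), sentences_per_chunk)
      ]

  def overlap_chunks(text, chunk_size=1000, overlap_percent=10):
      """Split text into character chunks that overlap their neighbors.
      The UI allows chunk sizes of 50-3,000 characters and overlaps of 1-99%."""
      overlap = int(chunk_size * overlap_percent / 100)
      step = max(chunk_size - overlap, 1)
      return [text[i:i + chunk_size] for i in range(0, len(text), step)]

  sample = "First sentence. Second sentence. Third sentence. Fourth one. Fifth!"
  print(multi_sentence_chunks(sample, sentences_per_chunk=2))
  print(overlap_chunks(sample, chunk_size=30, overlap_percent=20))

A larger overlap percentage duplicates more characters across neighboring chunks, which is why high overlap values increase storage requirements while adding little retrieval accuracy.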

Result

Workload Factory starts embedding the data source into your knowledge base. The status changes from "Embedding" to "Embedded" when the data source is completely embedded.

After you add a single data source to the knowledge base, you can test it locally in the chatbot simulator window and make any required changes before you make the chatbot available to your users. You can also follow the same steps to add additional data sources to the knowledge base.