Configuring extended capabilities in AppDirect AI using additional data sources and actions

AppDirect AI provides a suite of configurable actions that extend your AI's functionality, allowing it to perform specialized tasks, interact dynamically with users, and retrieve knowledge from stored data.

API Function

Add API Actions that can be invoked during interactions with your AI.
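As a rough illustration, an API action pairs a machine-readable description (so the AI knows when and how to call it) with a handler that runs the real request. The `invoke_action` dispatcher and the `get_weather` action below are hypothetical names, not part of the AppDirect AI API:

```python
import json

# Hypothetical action definition: a schema the AI can read to decide
# when to call the action and which arguments to supply.
weather_action = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def invoke_action(action, arguments):
    """Dispatch an action call produced by the AI to real code."""
    if action["name"] == "get_weather":
        # A real handler would call an external weather API here.
        return {"city": arguments["city"], "temp_c": 21}
    raise ValueError(f"unknown action: {action['name']}")

result = invoke_action(weather_action, {"city": "Montreal"})
print(json.dumps(result))
```

During a chat, the AI fills in the arguments from conversation context and the platform routes the call to the handler.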

Python

Enabling Python allows your AI to perform actions on files and execute custom Python scripts generated in response to specific queries. This capability is especially useful for tasks requiring data analysis, file processing, or other automated actions that require custom scripting. Select the Python option and load a Python interpreter as a data source to work with Python scripts during conversations with your AI.
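For example, in response to a request like "summarize the sales column of this CSV," the AI might generate and run a short script along these lines (the data here is a stand-in for an uploaded file):

```python
import csv
import io
import statistics

# Sample data standing in for an uploaded CSV file.
csv_text = "region,sales\nnorth,120\nsouth,95\neast,143\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
sales = [int(r["sales"]) for r in rows]

# Basic summary statistics for the requested column.
summary = {
    "count": len(sales),
    "total": sum(sales),
    "mean": round(statistics.mean(sales), 2),
}
print(summary)
```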

User Inputs

With this option enabled, AIs can prompt users for specific inputs directly within a chat. Based on the query, the AI generates input fields in real time, allowing users to provide the necessary data seamlessly. This feature is ideal for gathering user-specific details, ensuring that responses are customized and relevant.
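Conceptually, the AI emits a field specification, the chat UI renders it, and the collected values flow back into the conversation. The sketch below is illustrative only; the field schema and `validate` helper are hypothetical, not the platform's actual format:

```python
# Hypothetical input-field specification the AI might generate mid-chat.
input_request = {
    "prompt": "I need a few details to book your trip:",
    "fields": [
        {"name": "destination", "type": "text", "required": True},
        {"name": "travelers", "type": "number", "required": True},
    ],
}

def validate(fields, values):
    """Return the names of required fields the user has not filled in."""
    return [f["name"] for f in fields
            if f["required"] and not values.get(f["name"])]

# Values collected from the rendered form.
user_values = {"destination": "Lisbon", "travelers": 2}
print(validate(input_request["fields"], user_values))  # no missing fields
```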

πŸ“ Note: For the best results, provide clear and well-defined instructions when using this feature.

Knowledge Retrieval

πŸ“ Note: This feature is still in Beta.

The Knowledge Retrieval feature enables AppDirect AI to access relevant information from a vector store built from uploaded data sources. With this feature, the AI can efficiently retrieve and utilize stored knowledge, providing answers based on a comprehensive, customized database.

Also known as Retrieval-Augmented Generation (RAG), this process follows a sequential operation flow, where each step's output feeds into the next.

How It Works

  1. Expand Queries (optional):
    If enabled in advanced options, this step broadens the search by generating multiple related queries based on the original query. This expands the scope of retrieval to include a wider variety of relevant documents.

  2. Retrieve:
    Retrieves documents that match the query from the vector store. You can set a limit for the number of documents to retrieve.

  3. Deduplicate:
    Removes duplicate results when files appear across multiple data sources.

  4. Rerank:
    Ranks documents based on relevancy using a dedicated machine learning model, which improves on the initial vector store ranking. Adjusting the retrieval document limit can improve accuracy and reduce noise.

  5. Enrich:
    Adds metadata from the database to the retrieved documents.

  6. Grade:
    Assesses document relevancy using a large language model (LLM). Relevant segments are highlighted, and documents receive a relevancy score from 0 to 10, with 10 being the highest. You can select an LLM for grading, though lighter models, such as GPT-4o Mini, are recommended to optimize data usage. Larger models may be reserved for specialized or complex topics.

  7. Filter:
    Excludes results below the minimum relevancy score.

  8. Compress:
    Removes irrelevant segments within documents to focus on pertinent information.

  9. Format:
    Structures data in a format suitable for LLM processing.
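The nine steps above can be sketched as a toy pipeline. Everything here is a stand-in: real deployments use a vector store for retrieval, a dedicated reranking model, and an LLM for grading, whereas this sketch substitutes simple keyword heuristics to show how each step's output feeds into the next:

```python
# Toy document "store"; note the deliberate duplicate.
docs = [
    {"id": 1, "text": "Reset your password from the account settings page."},
    {"id": 2, "text": "Reset your password from the account settings page."},
    {"id": 3, "text": "Invoices are emailed on the first of each month."},
]

def expand(query):
    # Step 1 (optional): broaden the search with related queries (an LLM
    # would generate these).
    return [query, query.replace("change", "reset")]

def retrieve(queries, limit=5):
    # Step 2: keyword overlap stands in for vector similarity.
    hits = [d for d in docs
            if any(w in d["text"].lower() for q in queries for w in q.split())]
    return hits[:limit]

def deduplicate(hits):
    # Step 3: drop documents with identical text.
    seen, unique = set(), []
    for d in hits:
        if d["text"] not in seen:
            seen.add(d["text"])
            unique.append(d)
    return unique

def rerank(hits, query):
    # Step 4: a reranking model would score these; we count word matches.
    return sorted(hits, reverse=True,
                  key=lambda d: sum(w in d["text"].lower() for w in query.split()))

def enrich(hits):
    # Step 5: attach metadata from the database (placeholder value).
    for d in hits:
        d["source"] = "help-center"
    return hits

def grade(hits, query):
    # Step 6: an LLM would assign a 0-10 relevancy score; here it is heuristic.
    for d in hits:
        d["score"] = 10 if "password" in d["text"].lower() else 2
    return hits

def filter_docs(hits, minimum=5):
    # Step 7: exclude results below the minimum relevancy score.
    return [d for d in hits if d["score"] >= minimum]

def compress(hits, query):
    # Step 8: keep only sentences that share a word with the query.
    for d in hits:
        kept = [s for s in d["text"].split(". ")
                if any(w in s.lower() for w in query.split())]
        d["text"] = ". ".join(kept)
    return hits

def format_docs(hits):
    # Step 9: structure the results for the LLM prompt.
    return "\n".join(f"[{d['source']} #{d['id']} score={d['score']}] {d['text']}"
                     for d in hits)

query = "change my password"
hits = filter_docs(grade(enrich(rerank(deduplicate(retrieve(expand(query))), query)), query))
context = format_docs(compress(hits, query))
print(context)
```

Running this returns only the deduplicated, high-scoring password document, formatted with its metadata and score, which is the context the LLM ultimately sees.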