Automate ITSM ticket classification and resolution using Gemini, Qdrant and ServiceNow
1. Workflow Overview
This workflow automates the classification and resolution of IT Service Management (ITSM) tickets using a combination of Google Gemini (PaLM) language models, Qdrant vector search, and ServiceNow incident management. It targets IT support scenarios where user queries or issues are received, classified, enriched with knowledge base data, and either responded to automatically or routed for incident creation in ServiceNow.
The workflow is logically organized into these main blocks:
- 1.1 Input Reception: Receives user chat messages via webhook and triggers the workflow.
- 1.2 Text Classification: Classifies incoming queries into categories (Incident, Request, Other) using a Google Gemini-powered text classifier.
- 1.3 Incident Creation: Automatically creates incidents in ServiceNow when the category is "Incident."
- 1.4 Knowledge Base Query: Retrieves semantically relevant FAQ answers from a Qdrant vector store.
- 1.5 AI Response Generation: Uses Google Gemini language models and a LangChain AI Agent to generate contextual responses based on classification or retrieved FAQ data.
- 1.6 Knowledge Base Management: Loads and inserts FAQ content into the Qdrant vector store with embeddings generated by Google Gemini.
- 1.7 Summarization: Summarizes incident creation results or other outputs for concise reporting.
2. Block-by-Block Analysis
1.1 Input Reception
Overview:
This block listens for incoming chat messages from users via a public webhook, initiating the workflow.

Nodes Involved:
- When chat message received

Node Details:
- When chat message received
- Type: Chat Trigger (Webhook)
- Configuration: Public webhook mode, responds with the last executed node's output.
- Input: External HTTP POST requests containing user chat input.
- Output: Emits an object with a chatInput property containing the user message.
- Edge Cases: Webhook timeout, malformed input, unauthorized access (public webhook).
- Notes: Entry point for all user interactions.
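To make the entry point concrete, the sketch below shows how the webhook's JSON body might be validated before downstream use. This is a hypothetical helper, not part of the workflow; only the chatInput field name comes from the documented node output.

```python
# Hypothetical validation of the chat webhook's POST body.
# Only the "chatInput" field name is taken from the workflow docs.
import json

def parse_chat_payload(raw_body: str) -> str:
    """Extract the user message from the webhook's JSON body."""
    data = json.loads(raw_body)
    chat_input = data.get("chatInput")
    if not isinstance(chat_input, str) or not chat_input.strip():
        raise ValueError("Missing or empty 'chatInput' in webhook payload")
    return chat_input

message = parse_chat_payload('{"chatInput": "My VPN keeps disconnecting"}')
print(message)  # My VPN keeps disconnecting
```

Guarding against a missing or empty chatInput addresses the "malformed input" edge case listed above.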
1.2 Text Classification
Overview:
Classifies user input into Incident, Request, or Other categories using a Google Gemini-powered text classifier with auto-fixing enabled.

Nodes Involved:
- Text Classifier
- Google Gemini Chat Model1

Node Details:

- Text Classifier
- Type: LangChain Text Classifier
- Configuration:
- System prompt instructs classification into predefined categories with JSON output only.
- Input text is from chatInput of the webhook node.
- Categories: Incident, Request, Other with descriptions.
- Auto-fixing enabled to correct user input or classifier output if necessary.
- Input: User chat text.
- Output: JSON with the classified category.
- Edge Cases: Misclassification, model errors, malformed JSON output.
- Notes: Crucial for routing downstream logic.
- Google Gemini Chat Model1
- Type: Google Gemini LLM
- Configuration: Default options, connected as language model for the classifier.
- Credentials: Google Gemini (PaLM) API account.
- Input: Receives classification prompts.
- Output: Classification results.
- Edge Cases: API quota limits, latency, auth errors.
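The classifier's JSON output drives the downstream branching. A minimal sketch of that routing, assuming the classifier emits a {"category": ...} object (the exact JSON shape and the action names are assumptions for illustration):

```python
# Illustrative three-way routing on the classifier's JSON output.
# The {"category": ...} shape and the action names are assumptions.
import json

ROUTES = {
    "Incident": "create_servicenow_incident",
    "Request": "query_faq_knowledge_base",
    "Other": "query_faq_knowledge_base",
}

def route(classifier_output: str) -> str:
    """Map the classified category to a downstream action name."""
    category = json.loads(classifier_output).get("category", "Other")
    # Unknown categories fall back to the knowledge-base path,
    # mirroring the auto-fixing safety net described above.
    return ROUTES.get(category, "query_faq_knowledge_base")

print(route('{"category": "Incident"}'))  # create_servicenow_incident
```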
1.3 Incident Creation

Overview:
Automatically creates a new incident in ServiceNow when the classifier flags the input as an Incident.

Nodes Involved:
- Create an incident
- Summarization Chain

Node Details:

- Create an incident
- Type: ServiceNow node
- Configuration:
- Resource: Incident
- Operation: Create
- Authentication: Basic Auth with ServiceNow credentials.
- Short description set to the original user chat input.
- Input: Triggered when classification is Incident.
- Output: Incident creation result.
- Edge Cases: API authentication failures, ServiceNow downtime, malformed input.
- Notes: Automates ticket creation in ITSM system.
- Summarization Chain
- Type: LangChain Summarization Chain
- Configuration: Default summarization options.
- Input: Incident creation output.
- Output: Summarized incident details.
- Edge Cases: Model timeout, incomplete summaries.
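For reference, the node's incident creation corresponds to a single POST against the ServiceNow Table API. The sketch below builds that request; the instance URL and credentials are placeholders, and only the short_description mapping comes from the workflow:

```python
# Sketch of the equivalent ServiceNow Table API request the node performs.
# Instance URL and credentials are placeholders, not from the workflow.
from base64 import b64encode

def build_incident_request(chat_input: str, user: str, password: str) -> dict:
    """Assemble a Basic Auth POST to /api/now/table/incident."""
    token = b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": "https://YOUR_INSTANCE.service-now.com/api/now/table/incident",
        "headers": {
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        # short_description is set from the original chat input,
        # matching the node configuration above.
        "json": {"short_description": chat_input},
    }

req = build_incident_request("Printer on floor 3 is offline", "admin", "secret")
# e.g. requests.post(req["url"], headers=req["headers"], json=req["json"])
```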
1.4 Knowledge Base Query

Overview:
Retrieves relevant information from a pre-embedded FAQ knowledge base stored in Qdrant using semantic vector search.

Nodes Involved:
- Qdrant Vector Store
- AI Agent
- Embeddings Google Gemini1

Node Details:

- Qdrant Vector Store
- Type: LangChain Vector Store (Qdrant)
- Configuration:
- Mode: retrieve-as-tool
- Tool Description dynamically includes user input.
- Collection: FAQBase
- Connected to embeddings node for vector search.
- Input: Embeddings from Google Gemini.
- Output: Retrieved FAQ documents matching query.
- Credentials: Qdrant API account.
- Edge Cases: API connection issues, empty results, data corruption.
- Sticky Note: Describes detailed purpose and usage.
- AI Agent
- Type: LangChain AI Agent using Google Gemini
- Configuration:
- Prompt instructs agent to search the FAQBase and respond or say no answer found.
- Has output parser enabled to structure response.
- Input: Receives Qdrant search results as context.
- Output: Final user-facing answer.
- Credentials: Google Gemini (PaLM) API.
- Edge Cases: Parsing errors, API limits, no relevant data scenarios.
- Sticky Note: Explains central AI role in workflow.
- Embeddings Google Gemini1
- Type: LangChain Embeddings with Google Gemini
- Configuration: Default embedding generation.
- Input: Raw user input text.
- Output: Vector embeddings.
- Credentials: Google Gemini (PaLM) API.
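The retrieval step above can be illustrated with a toy cosine-similarity ranking. Real Gemini embeddings are high-dimensional; the 3-d vectors and FAQ entries below are invented purely to show the mechanics Qdrant performs internally:

```python
# Conceptual sketch of semantic retrieval: rank stored FAQ entries by
# cosine similarity to a query embedding. Toy 3-d vectors for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend store: (FAQ text, embedding vector)
faq_store = [
    ("How do I reset my password?", [0.9, 0.1, 0.0]),
    ("How do I request new hardware?", [0.1, 0.9, 0.2]),
]

def retrieve(query_vec, top_k=1):
    """Return the top_k most similar FAQ texts for a query embedding."""
    ranked = sorted(faq_store, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(retrieve([0.85, 0.15, 0.05]))  # ['How do I reset my password?']
```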
1.5 AI Response Generation

Overview:
Generates enriched or follow-up responses using Google Gemini after classification or retrieval steps.

Nodes Involved:
- Google Gemini Chat Model
- Google Gemini Chat Model2

Node Details:

- Google Gemini Chat Model
- Type: LangChain Chat LLM (Google Gemini)
- Configuration: Default options.
- Credentials: Google Gemini (PaLM) API.
- Role: Supports AI Agent and other nodes with language model capabilities.
- Edge Cases: API quota, latency.
- Google Gemini Chat Model2
- Type: LangChain Chat LLM (Google Gemini)
- Configuration: Default.
- Credentials: Google Gemini (PaLM) API.
- Role: Typically used after classification or knowledge retrieval to provide final user response.
- Sticky Note: Details the node's role in output generation.
1.6 Knowledge Base Management

Overview:
Loads default FAQ documents, embeds them, and inserts them into the Qdrant collection for future retrieval.

Nodes Involved:
- Edit Fields
- Qdrant Vector Store1
- Embeddings Google Gemini
- Default Data Loader
- When clicking ‘Execute workflow’ (manual trigger)

Node Details:

- Edit Fields
- Type: Set node
- Configuration: Sets a multi-question FAQ string as sample_kb for embedding.
- Input: Triggered manually or via workflow execution.
- Output: Text input for embedding generation.
- Qdrant Vector Store1
- Type: LangChain Vector Store (Qdrant)
- Configuration:
- Mode: insert
- Collection: FAQBase
- Inserts embeddings into Qdrant.
- Credentials: Qdrant API account.
- Input: Receives embeddings from Embeddings Google Gemini node.
- Edge Cases: Insertion errors, API limits.
- Sticky Note: Describes node purpose and usage.
- Embeddings Google Gemini
- Type: LangChain Embeddings (Google Gemini)
- Configuration: Default.
- Credentials: Google Gemini (PaLM) API.
- Input: Text from Edit Fields.
- Output: Embeddings for insertion.
- Default Data Loader
- Type: LangChain Document Default Data Loader
- Configuration: Default options.
- Input: Raw data to be wrapped as documents.
- Output: Documents for embedding or retrieval.
- When clicking ‘Execute workflow’
- Type: Manual Trigger
- Purpose: Allows manual initiation of the knowledge base insertion process.
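Before embedding, the multi-question sample_kb string would typically be split into one document per Q&A pair. The sketch below assumes "Q:"/"A:" delimiters, which are an assumption about how the sample text is formatted, not a documented detail of the workflow:

```python
# Hypothetical pre-processing of the sample_kb string: split a FAQ blob
# into one document per Q&A pair. "Q:"/"A:" delimiters are assumed.
def split_faq(sample_kb: str) -> list[dict]:
    """Return a list of {question, answer} documents from a FAQ string."""
    docs = []
    for block in sample_kb.strip().split("Q:"):
        if not block.strip():
            continue  # skip the empty fragment before the first "Q:"
        question, _, answer = block.partition("A:")
        docs.append({"question": question.strip(), "answer": answer.strip()})
    return docs

sample = ("Q: How do I reset my password? A: Use the self-service portal. "
          "Q: Who approves hardware requests? A: Your line manager.")
docs = split_faq(sample)
print(len(docs))  # 2
```

One document per Q&A pair keeps each embedded vector focused, which tends to improve retrieval precision over embedding the whole blob at once.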
1.7 Miscellaneous Nodes

- HTTP Request
- Type: HTTP Request (unused or placeholder node)
- Configuration: Calls "python.com" URL with default options.
- Role: Not connected to main flow; possibly for testing or placeholder.
- Simple Memory
- Type: LangChain Memory Buffer Window
- Role: Maintains conversation context for AI Agent.
- Connected as AI memory input to AI Agent node.
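A buffer-window memory keeps only the last N conversation turns as context for the agent. A minimal sketch of that behavior (the window size is an illustrative choice; n8n's default for this node may differ):

```python
# Minimal sketch of buffer-window memory: retain only the last N
# user/AI exchanges. Window size here is an illustrative assumption.
from collections import deque

class BufferWindowMemory:
    def __init__(self, window_size: int = 5):
        # deque with maxlen silently evicts the oldest turn when full
        self.turns = deque(maxlen=window_size)

    def add(self, user_msg: str, ai_msg: str) -> None:
        self.turns.append((user_msg, ai_msg))

    def context(self) -> str:
        """Render retained turns as a prompt-ready context string."""
        return "\n".join(f"User: {u}\nAI: {a}" for u, a in self.turns)

memory = BufferWindowMemory(window_size=2)
memory.add("VPN down", "Try reconnecting")
memory.add("Still down", "Creating an incident")
memory.add("Thanks", "You're welcome")
print(len(memory.turns))  # 2 (oldest turn evicted)
```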
3. Summary Table
| Node Name | Node Type | Functional Role | Input Node(s) | Output Node(s) | Sticky Note |
|---|---|---|---|---|---|
| When chat message received | @n8n/n8n-nodes-langchain.chatTrigger | Receives user chat messages via webhook | | Text Classifier | |
| Text Classifier | @n8n/n8n-nodes-langchain.textClassifier | Classifies input into Incident, Request, Other | When chat message received | Create an incident, HTTP Request, AI Agent | Describes classification categories and routing logic. |
| Create an incident | n8n-nodes-base.serviceNow | Creates ServiceNow incident for Incident inputs | Text Classifier | Summarization Chain | Explains incident creation config and use case. |
| Summarization Chain | @n8n/n8n-nodes-langchain.chainSummarization | Summarizes incident creation output | Create an incident | | |
| AI Agent | @n8n/n8n-nodes-langchain.agent | Conversational AI agent integrating search & LLM | Text Classifier, Qdrant Vector Store | | Central AI node, handles intelligent responses and routing. |
| Google Gemini Chat Model | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Provides LLM capabilities for classification and AI Agent | | AI Agent, Text Classifier | |
| Google Gemini Chat Model1 | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Supports Text Classifier LLM | | Text Classifier | |
| Google Gemini Chat Model2 | @n8n/n8n-nodes-langchain.lmChatGoogleGemini | Generates enriched responses after classification/retrieval | | Summarization Chain | Describes role in post-classification response generation. |
| Qdrant Vector Store | @n8n/n8n-nodes-langchain.vectorStoreQdrant | Retrieves relevant info from FAQBase via vector search | Embeddings Google Gemini1 | AI Agent | Detailed note about retrieval from FAQBase. |
| Qdrant Vector Store1 | @n8n/n8n-nodes-langchain.vectorStoreQdrant | Inserts or retrieves FAQ embeddings | Edit Fields, Embeddings Google Gemini | | Note about insertion and retrieval roles. |
| Embeddings Google Gemini | @n8n/n8n-nodes-langchain.embeddingsGoogleGemini | Generates embeddings for knowledge base texts | Edit Fields | Qdrant Vector Store1 | |
| Embeddings Google Gemini1 | @n8n/n8n-nodes-langchain.embeddingsGoogleGemini | Generates embeddings for user queries | | Qdrant Vector Store | |
| Edit Fields | n8n-nodes-base.set | Defines FAQ sample questions and answers | When clicking ‘Execute workflow’ | Qdrant Vector Store1 | Contains FAQ sample text for embedding. |
| Default Data Loader | @n8n/n8n-nodes-langchain.documentDefaultDataLoader | Loads default documents for embedding | | Qdrant Vector Store1 | |
| When clicking ‘Execute workflow’ | n8n-nodes-base.manualTrigger | Manual trigger to start knowledge base insertion | | Edit Fields | |
| Simple Memory | @n8n/n8n-nodes-langchain.memoryBufferWindow | Maintains conversational context for AI Agent | | AI Agent | |
| HTTP Request | n8n-nodes-base.httpRequest | Placeholder or test HTTP call | Text Classifier | | Not connected to main workflow; possibly unused or for testing. |
| Sticky Note | n8n-nodes-base.stickyNote | Comments and explanations | | | Various notes clarifying node purposes and configurations (multiple notes duplicated across relevant nodes). |
4. Reproducing the Workflow from Scratch
- Create Webhook Trigger Node
  - Node Type: @n8n/n8n-nodes-langchain.chatTrigger
  - Name: "When chat message received"
  - Mode: Webhook (public)
  - Response mode: last node output
  - Purpose: Receive user messages to start workflow.
- Add Text Classifier Node
  - Node Type: @n8n/n8n-nodes-langchain.textClassifier
  - Name: "Text Classifier"
  - Set inputText to ={{ $json.chatInput }}
  - Configure categories: Incident, Request, Other (with provided descriptions)
  - Enable Auto Fixing
  - Connect the output of "When chat message received" to this node.
- Add Google Gemini Chat Model1 Node
  - Node Type: @n8n/n8n-nodes-langchain.lmChatGoogleGemini
  - Name: "Google Gemini Chat Model1"
  - Credentials: Configure with Google Gemini (PaLM) API credentials
  - Connect as the language model for "Text Classifier".
- Add ServiceNow Incident Creation Node
  - Node Type: n8n-nodes-base.serviceNow
  - Name: "Create an incident"
  - Resource: Incident
  - Operation: Create
  - Authentication: Basic Auth (provide ServiceNow credentials)
  - Set short_description to ={{ $('When chat message received').item.json.chatInput }}
  - Connect the first output of "Text Classifier" (Incident category) to this node.
- Add Summarization Chain Node
  - Node Type: @n8n/n8n-nodes-langchain.chainSummarization
  - Name: "Summarization Chain"
  - Connect the output of "Create an incident" to this node.
- Add Embeddings Google Gemini1 Node
  - Node Type: @n8n/n8n-nodes-langchain.embeddingsGoogleGemini
  - Name: "Embeddings Google Gemini1"
  - Credentials: Google Gemini (PaLM) API
  - Use to embed user input text for vector search.
- Add Qdrant Vector Store Node
  - Node Type: @n8n/n8n-nodes-langchain.vectorStoreQdrant
  - Name: "Qdrant Vector Store"
  - Mode: retrieve-as-tool
  - Collection: FAQBase
  - Credentials: Qdrant API account
  - Connect the output of "Embeddings Google Gemini1" to this node.
- Add AI Agent Node
  - Node Type: @n8n/n8n-nodes-langchain.agent
  - Name: "AI Agent"
  - Text prompt: You are agent search {{ $json.chatInput }} query in the Knowledge base "FAQBase" and give the response from that Qdrant Base otherwise tell no answer found.
  - Enable output parser.
  - Credentials: Google Gemini (PaLM) API
  - Connect the outputs of "Text Classifier" and "Qdrant Vector Store" to this node.
- Add Google Gemini Chat Model Node
  - Node Type: @n8n/n8n-nodes-langchain.lmChatGoogleGemini
  - Name: "Google Gemini Chat Model"
  - Credentials: Google Gemini (PaLM) API
  - Connect as the language model input to "AI Agent".
- Add Google Gemini Chat Model2 Node
  - Node Type: @n8n/n8n-nodes-langchain.lmChatGoogleGemini
  - Name: "Google Gemini Chat Model2"
  - Credentials: Google Gemini (PaLM) API
  - Connect to the "Summarization Chain" node as its language model.
- Add Manual Trigger Node
  - Node Type: n8n-nodes-base.manualTrigger
  - Name: "When clicking ‘Execute workflow’"
  - Purpose: Manual start for knowledge base insertion.
- Add Edit Fields (Set) Node
  - Node Type: n8n-nodes-base.set
  - Name: "Edit Fields"
  - Set a string field with multiple FAQ Q&A pairs as sample knowledge base content.
  - Connect the manual trigger output to this node.
- Add Embeddings Google Gemini Node
  - Node Type: @n8n/n8n-nodes-langchain.embeddingsGoogleGemini
  - Name: "Embeddings Google Gemini"
  - Credentials: Google Gemini (PaLM) API
  - Connect the output of "Edit Fields" to this node.
- Add Qdrant Vector Store1 Node
  - Node Type: @n8n/n8n-nodes-langchain.vectorStoreQdrant
  - Name: "Qdrant Vector Store1"
  - Mode: insert
  - Collection: FAQBase
  - Credentials: Qdrant API account
  - Connect the output of "Embeddings Google Gemini" to this node.
- Add Default Data Loader Node
  - Node Type: @n8n/n8n-nodes-langchain.documentDefaultDataLoader
  - Name: "Default Data Loader"
  - Connect its output to "Qdrant Vector Store1" via the ai_document input if needed.
- Add Simple Memory Node
  - Node Type: @n8n/n8n-nodes-langchain.memoryBufferWindow
  - Name: "Simple Memory"
  - Connect as the AI memory input to the "AI Agent" node.
- (Optional) Add HTTP Request Node
  - Node Type: n8n-nodes-base.httpRequest
  - Name: "HTTP Request"
  - Configure the URL as needed.
  - Connect the second output of "Text Classifier" if required.
5. General Notes & Resources
| Note Content | Context or Link |
|---|---|
| The workflow leverages Google Gemini (PaLM) API for multiple LLM tasks including classification, embedding, chat completion, and summarization. | Google Gemini (PaLM) API credentials required. |
| Qdrant vector store is used as a semantic search engine for FAQ knowledge base, enhancing retrieval accuracy for ITSM queries. | Qdrant vector database (FAQBase collection) |
| ServiceNow Basic Authentication credentials must be securely configured for incident creation. | ServiceNow platform |
| Sticky notes in the workflow provide detailed explanations of node purposes and configurations, aiding maintenance and future modification. | n8n workflow sticky notes |
| Manual trigger is included for knowledge base content loading and embedding insertion, allowing controlled updates to the FAQ knowledge base in Qdrant. | Manual execution via n8n UI |
| Potential failure points include API authentication issues (Google Gemini, Qdrant, ServiceNow), classification errors, and network timeouts. | Monitoring and retry logic recommended |
| Workflow designed for extensibility: categories, knowledge base content, and AI prompt templates can be updated to adapt to changing ITSM requirements. | Modular LangChain nodes and reusable credentials |
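The note on failure points recommends retry logic around the external calls (Gemini, Qdrant, ServiceNow). A minimal retry-with-exponential-backoff sketch, with illustrative attempt counts and delays:

```python
# Sketch of retry-with-exponential-backoff for any external call in the
# workflow. Attempt count and delays are illustrative choices.
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying transient failures with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 1x, 2x, 4x, ...

# Simulated flaky API: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok
```

In n8n itself the same effect can often be had per-node via the built-in "Retry On Fail" setting, which may be preferable to external wrappers.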
This structured reference document fully explains the ITSM ticket classification and resolution workflow, enabling advanced users and AI agents to understand, reproduce, troubleshoot, and extend the automation effectively.