Transform one-on-one customer interactions: Build speech-capable order processing agents with AWS and generative AI


In today's landscape of one-on-one customer interactions for placing orders, the prevailing practice still relies on human attendants, even in settings like drive-thru coffee shops and fast-food establishments. This traditional approach poses several challenges: it relies heavily on manual processes, struggles to scale efficiently with growing customer demand, introduces the potential for human error, and operates within specific hours of availability. Moreover, in competitive markets, businesses that adhere solely to manual processes might find it challenging to deliver efficient and competitive service. Despite technological advancements, the human-centric model remains deeply ingrained in order processing, leading to these limitations.

The prospect of using technology for one-on-one order processing assistance has been available for some time. However, existing solutions often fall into two categories: rule-based systems that demand substantial time and effort to set up and maintain, or rigid systems that lack the flexibility required for human-like interactions with customers. As a result, businesses and organizations face challenges in implementing such solutions quickly and efficiently. Fortunately, with the advent of generative AI and large language models (LLMs), it's now possible to create automated systems that can handle natural language efficiently, and with an accelerated on-ramping timeline.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. In addition to Amazon Bedrock, you can use other AWS services like Amazon SageMaker JumpStart and Amazon Lex to create fully automated and easily adaptable generative AI order processing agents.

In this post, we show you how to build a speech-capable order processing agent using Amazon Lex, Amazon Bedrock, and AWS Lambda.

Solution overview

The following diagram illustrates our solution architecture.

image001

The workflow consists of the following steps:

  1. A customer places the order using Amazon Lex.
  2. The Amazon Lex bot interprets the customer's intents and triggers a DialogCodeHook.
  3. A Lambda function pulls the appropriate prompt template from the Lambda layer and formats model prompts by adding the customer input to the relevant prompt template.
  4. The RequestValidation prompt verifies the order against the menu items, and via Amazon Lex lets the customer know if there's something they want to order that isn't part of the menu, offering recommendations. The prompt also performs a preliminary validation for order completeness.
  5. The ObjectCreator prompt converts the natural language request into a data structure (JSON format).
  6. The customer validator Lambda function verifies the required attributes for the order and confirms whether all necessary information is present to process the order.
  7. A customer Lambda function takes the data structure as input for processing the order and passes the order total back to the orchestrating Lambda function.
  8. The orchestrating Lambda function calls the Amazon Bedrock LLM endpoint to generate a final order summary including the order total from the customer database system (for example, Amazon DynamoDB).
  9. The order summary is communicated back to the customer via Amazon Lex. After the customer confirms the order, the order will be processed.
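The routing in steps 2 and 3 can be sketched as follows; the intent names, template contents, and helper here are illustrative placeholders, not the exact code shown later in the post:

```python
# Minimal sketch of the orchestration Lambda's routing logic (illustrative).
# A real handler would also invoke Amazon Bedrock and return a Lex V2 response.

PROMPT_TEMPLATES = {
    "ValidateIntent": "Human: ...<Conversation>REPLACEME</Conversation>...",
    "PlaceOrder": "Human: ...<request>REPLACEME</request>...",
}

def format_prompt(intent_name: str, transcript: str) -> str:
    """Insert the customer's transcript into the template for this intent."""
    template = PROMPT_TEMPLATES[intent_name]
    return template.replace("REPLACEME", transcript)

def route(event: dict) -> str:
    """Pick the prompt for the intent Lex matched (event shape per Lex V2)."""
    intent_name = event["sessionState"]["intent"]["name"]
    transcript = event["inputTranscript"]
    return format_prompt(intent_name, transcript)
```

The key design point is that the conversation state lives in the prompt, not in Lex slots, so the same routing skeleton works for any menu or domain.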

Prerequisites

This post assumes that you have an active AWS account and familiarity with the following concepts and services:

Also, in order to access Amazon Bedrock from the Lambda functions, you need to make sure the Lambda runtime has the following libraries:

  • boto3>=1.28.57
  • awscli>=1.29.57
  • botocore>=1.31.57

This can be achieved with a Lambda layer or by using a specific AMI with the required libraries.

Additionally, these libraries are required when calling the Amazon Bedrock API from Amazon SageMaker Studio. This can be achieved by running a cell with the following code:

%pip install --no-build-isolation --force-reinstall \
"boto3>=1.28.57" \
"awscli>=1.29.57" \
"botocore>=1.31.57"

Finally, you create the following policy and later attach it to any role accessing Amazon Bedrock:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": "bedrock:*",
            "Resource": "*"
        }
    ]
}

Create a DynamoDB table

In our specific scenario, we created a DynamoDB table as our customer database system, but you could also use Amazon Relational Database Service (Amazon RDS). Complete the following steps to provision your DynamoDB table (or customize the settings as needed for your use case):

  1. On the DynamoDB console, choose Tables in the navigation pane.
  2. Choose Create table.

image004

  1. For Table name, enter a name (for example, ItemDetails).
  2. For Partition key, enter a key (for this post, we use Item).
  3. For Sort key, enter a key (for this post, we use Size).
  4. Choose Create table.

image005

Now you can load the data into the DynamoDB table. For this post, we use a CSV file. You can load the data into the DynamoDB table using Python code in a SageMaker notebook.

First, we need to set up a profile named dev.

  1. Open a new terminal in SageMaker Studio and run the following command:
aws configure --profile dev

This command will prompt you to enter your AWS access key ID, secret access key, default AWS Region, and output format.
image008

  1. Return to the SageMaker notebook and write Python code to set up a connection to DynamoDB using the Boto3 library in Python. This code snippet creates a session using a specific AWS profile named dev and then creates a DynamoDB client using that session. The following is the code sample to load the data:
%pip install boto3
import boto3
import csv

# Create a session using a profile named 'dev'
session = boto3.Session(profile_name="dev")

# Create a DynamoDB resource using the session
dynamodb = session.resource('dynamodb')

# Specify your DynamoDB table name
table_name = "your_table_name"
table = dynamodb.Table(table_name)

# Specify the path to your CSV file
csv_file_path = "path/to/your/file.csv"

# Read the CSV file and put items into DynamoDB
with open(csv_file_path, 'r', encoding='utf-8-sig') as csvfile:
    csvreader = csv.reader(csvfile)
    
    # Skip the header row
    next(csvreader, None)

    for row in csvreader:
        # Extract values from the CSV row
        item = {
            'Item': row[0],  # Adjust the index based on your CSV structure
            'Size': row[1],
            'Price': row[2]
        }
        
        # Put the item into DynamoDB
        response = table.put_item(Item=item)
        
        print(f"Item added: {response}")
print(f"CSV data has been loaded into the DynamoDB table: {table_name}")

Alternatively, you can use NoSQL Workbench or other tools to quickly load the data into your DynamoDB table.

The following is a screenshot after the sample data is inserted into the table.
image010

Create templates in a SageMaker notebook using the Amazon Bedrock invocation API

To create our prompt template for this use case, we use Amazon Bedrock. You can access Amazon Bedrock from the AWS Management Console and via API invocations. In our case, we access Amazon Bedrock via the API from the convenience of a SageMaker Studio notebook to create not only our prompt template, but our complete API invocation code that we can later use in our Lambda function.

  1. On the SageMaker console, access an existing SageMaker Studio domain or create a new one to access Amazon Bedrock from a SageMaker notebook.

image012 1

  1. After you create the SageMaker domain and user, choose the user, then choose Launch and Studio. This will open a JupyterLab environment.
  2. When the JupyterLab environment is ready, open a new notebook and begin importing the necessary libraries.

image014

There are many FMs available via the Amazon Bedrock Python SDK. In this case, we use Claude V2, a powerful foundation model developed by Anthropic.

The order processing agent needs a few different templates. These can change depending on the use case, but we have designed a generic workflow that can apply to multiple settings. For this use case, the Amazon Bedrock LLM templates will accomplish the following:

  • Validate the customer intent
  • Validate the request
  • Create the order data structure
  • Pass a summary of the order to the customer
  1. To invoke the model, create a bedrock-runtime object from Boto3.

image016

#Model API request parameters
modelId = 'anthropic.claude-v2' # change this to use a different model from the model provider
accept = "application/json"
contentType = "application/json"

import boto3
import json
bedrock = boto3.client(service_name="bedrock-runtime")
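With the client in place, an invocation follows the Claude V2 text-completion request shape. The following sketch builds the serialized request body; the helper name is ours, and the commented-out invoke_model call requires Bedrock model access in your account:

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 250,
                      temperature: float = 0.3) -> str:
    """Serialize a request body for the Claude V2 text-completion API."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": temperature,
        "top_k": 250,
        "top_p": 0.75,
        "stop_sequences": ["\n\nHuman:"],
    })

# With the bedrock-runtime client created above, the call would look like:
# response = bedrock.invoke_model(body=build_claude_body("Hello"),
#                                 modelId=modelId, accept=accept,
#                                 contentType=contentType)
# completion = json.loads(response['body'].read())['completion']
```

The prompt templates below follow this same body shape, stored as JSON-serialized strings with a REPLACEME placeholder for the conversation.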

Let's start by working on the intent validator prompt template. This is an iterative process, but thanks to Anthropic's prompt engineering guide, you can quickly create a prompt that can accomplish the task.

  1. Create the first prompt template along with a utility function that will help prepare the body for the API invocations.

The following is the code for prompt_template_intent_validator.txt:

"{"prompt": "Human: I will give you some instructions to complete my request.\n<instructions>Given the Conversation between Human and Assistant, you need to identify the intent that the human wants to accomplish and respond appropriately. The valid intents are: Greeting,Place Order, Complain, Speak to Someone. Always put your response to the Human within the Response tags. Also add an XML tag to your output identifying the human intent.\nHere are some examples:\n<example><Conversation> H: hi there.\n\nA: Hi, how can I help you today?\n\nH: Yes. I would like a medium mocha please</Conversation>\n\nA:<intent>Place Order</intent><Response>\nGot it.</Response></example>\n<example><Conversation> H: good day\n\nA: Hi, how can I help you today?\n\nH: my coffee does not taste well can you please re-make it?</Conversation>\n\nA:<intent>Complain</intent><Response>\nOh, I am sorry to hear that. Let me get someone to help you.</Response></example>\n<example><Conversation> H: hello\n\nA: Hi, how can I help you today?\n\nH: I would like to speak to someone else please</Conversation>\n\nA:<intent>Speak to Someone</intent><Response>\nSure, let me get someone to help you.</Response></example>\n<example><Conversation> H: howdy\n\nA: Hi, how can I help you today?\n\nH:can I get a large americano with sugar and 2 mochas with no whipped cream</Conversation>\n\nA:<intent>Place Order</intent><Response>\nSure thing! Please give me a moment.</Response></example>\n<example><Conversation> H: hello\n\n</Conversation>\n\nA:<intent>Greeting</intent><Response>\nHi there, how can I help you today?</Response></example>\n</instructions>\n\nPlease complete this request according to the instructions and examples provided above:<request><Conversation>REPLACEME</Conversation></request>\n\nAssistant:\n", "max_tokens_to_sample": 250, "temperature": 1, "top_k": 250, "top_p": 0.75, "stop_sequences": ["\n\nHuman:", "\n\nhuman:", "\n\nCustomer:", "\n\ncustomer:"]}"

image018
image020 1

  1. Save this template into a file in order to upload it to Amazon S3 and call it from the Lambda function when needed. Save the templates as JSON-serialized strings in a text file. The previous screenshot shows the code sample to accomplish this as well.
  2. Repeat the same steps with the other templates.
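The save step above can be sketched as follows; the template content here is a truncated placeholder, and the bucket name in the commented-out upload is illustrative:

```python
import json

# Sketch: serialize a prompt template dict to a JSON string and save it to a
# text file, ready to upload to S3 or package into a Lambda layer. The
# template body shown here is a placeholder, not a complete prompt.
template = {
    "prompt": "Human: ...<Conversation>REPLACEME</Conversation>...\n\nAssistant:\n",
    "max_tokens_to_sample": 250,
    "temperature": 1,
    "top_k": 250,
    "top_p": 0.75,
    "stop_sequences": ["\n\nHuman:"],
}

file_name = "prompt_template_intent_validator.txt"
with open(file_name, "w") as f:
    f.write(json.dumps(template))

# To upload to S3 (requires credentials and an existing bucket):
# import boto3
# boto3.client("s3").upload_file(file_name, "your-bucket-name", file_name)
```

Storing the templates as serialized strings keeps the Lambda-side loading code a simple read-and-replace, as shown in the orchestration function later in the post.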

The following are some screenshots of the other templates and the results when calling Amazon Bedrock with some of them.

The following is the code for prompt_template_request_validator.txt:

"{"prompt": "Human: I will give you some instructions to complete my request.\n<instructions>Given the context do the following steps: 1. verify that the items in the input are valid. If the customer provided an invalid item, recommend replacing it with a valid one. 2. verify that the customer has provided all the information marked as required. If the customer missed a required information, ask the customer for that information. 3. When the order is complete, provide a summary of the order and ask for confirmation always using this phrase: 'is this correct?' 4. If the customer confirms the order, Do not ask for confirmation again, just say the phrase inside the brackets [Great, Give me a moment while I try to process your order]</instructions>\n<context>\nThe VALID MENU ITEMS are: [latte, frappe, mocha, espresso, cappuccino, romano, americano].\nThe VALID OPTIONS are: [splenda, stevia, raw sugar, honey, whipped cream, sugar, oat milk, soy milk, regular milk, skimmed milk, whole milk, 2 percent milk, almond milk].\nThe required information is: size. Size can be: small, medium, large.\nHere are some examples: <example>H: I would like a medium latte with 1 Splenda and a small romano with no sugar please.\n\nA: <Validation>:\nThe Human is ordering a medium latte with one splenda. Latte is a valid menu item and splenda is a valid option. The Human is also ordering a small romano with no sugar. Romano is a valid menu item.</Validation>\n<Response>\nOk, I got: \n\t-Medium Latte with 1 Splenda and.\n\t-Small Romano with no Sugar.\nIs this correct?</Response>\n\nH: yep.\n\nA:\n<Response>\nGreat, Give me a moment while I try to process your order</example>\n\n<example>H: I would like a cappuccino and a mocha please.\n\nA: <Validation>:\nThe Human is ordering a cappuccino and a mocha. Both are valid menu items. The Human did not provide the size for the cappuccino. The human did not provide the size for the mocha. I will ask the Human for the required missing information.</Validation>\n<Response>\nSure thing, but can you please let me know the size for the Cappuccino and the size for the Mocha? We have Small, Medium, or Large.</Response></example>\n\n<example>H: I would like a small cappuccino and a large lemonade please.\n\nA: <Validation>:\nThe Human is ordering a small cappuccino and a large lemonade. Cappuccino is a valid menu item. Lemonade is not a valid menu item. I will suggest the Human a replacement from our valid menu items.</Validation>\n<Response>\nSorry, we don't have Lemonades, would you like to order something else instead? Perhaps a Frappe or a Latte?</Response></example>\n\n<example>H: Can I get a medium frappuccino with sugar please?\n\nA: <Validation>:\n The Human is ordering a Frappuccino. Frappuccino is not a valid menu item. I will suggest a replacement from the valid menu items in my context.</Validation>\n<Response>\nI am so sorry, but Frappuccino is not in our menu, would you like a frappe or a cappuccino instead? perhaps something else?</Response></example>\n\n<example>H: I want two large americanos and a small latte please.\n\nA: <Validation>:\n The Human is ordering 2 Large Americanos, and a Small Latte. Americano is a valid menu item. Latte is a valid menu item.</Validation>\n<Response>\nOk, I got: \n\t-2 Large Americanos and.\n\t-Small Latte.\nIs this correct?</Response>\n\nH: looks correct, yes.\n\nA:\n<Response>\nGreat, Give me a moment while I try to process your order.</Response></example>\n\n</Context>\n\nPlease complete this request according to the instructions and examples provided above:<request>REPLACEME</request>\n\nAssistant:\n", "max_tokens_to_sample": 250, "temperature": 0.3, "top_k": 250, "top_p": 0.75, "stop_sequences": ["\n\nHuman:", "\n\nhuman:", "\n\nCustomer:", "\n\ncustomer:"]}"

image022

The following is our response from Amazon Bedrock using this template.
image024

The following is the code for prompt_template_object_creator.txt:

"{"prompt": "Human: I will give you some instructions to complete my request.\n<instructions>Given the Conversation between Human and Assistant, you need to create a json object in Response with the appropriate attributes.\nHere are some examples:\n<example><Conversation> H: I want a latte.\n\nA:\nCan I have the size?\n\nH: Medium.\n\nA: So, a medium latte.\nIs this Correct?\n\nH: Yes.</Conversation>\n\nA:<Response>{"1":{"item":"latte","size":"medium","addOns":[]}}</Response></example>\n<example><Conversation> H: I want a large frappe and 2 small americanos with sugar.\n\nA: Okay, let me confirm:\n\n1 large frappe\n\n2 small americanos with sugar\n\nIs this correct?\n\nH: Yes.</Conversation>\n\nA:<Response>{"1":{"item":"frappe","size":"large","addOns":[]},"2":{"item":"americano","size":"small","addOns":["sugar"]},"3":{"item":"americano","size":"small","addOns":["sugar"]}}</Response>\n</example>\n<example><Conversation> H: I want a medium americano.\n\nA: Okay, let me confirm:\n\n1 medium americano\n\nIs this correct?\n\nH: Yes.</Conversation>\n\nA:<Response>{"1":{"item":"americano","size":"medium","addOns":[]}}</Response></example>\n<example><Conversation> H: I want a large latte with oatmilk.\n\nA: Okay, let me confirm:\n\nLarge latte with oatmilk\n\nIs this correct?\n\nH: Yes.</Conversation>\n\nA:<Response>{"1":{"item":"latte","size":"large","addOns":["oatmilk"]}}</Response></example>\n<example><Conversation> H: I want a small mocha with no whipped cream please.\n\nA: Okay, let me confirm:\n\nSmall mocha with no whipped cream\n\nIs this correct?\n\nH: Yes.</Conversation>\n\nA:<Response>{"1":{"item":"mocha","size":"small","addOns":["no whipped cream"]}}</Response>\n\n</example></instructions>\n\nPlease complete this request according to the instructions and examples provided above:<request><Conversation>REPLACEME</Conversation></request>\n\nAssistant:\n", "max_tokens_to_sample": 250, "temperature": 0.3, "top_k": 250, "top_p": 0.75, "stop_sequences": ["\n\nHuman:", "\n\nhuman:", "\n\nCustomer:", "\n\ncustomer:"]}"

image026
image028

The following is the code for prompt_template_order_summary.txt:

"{"prompt": "Human: I will give you some instructions to complete my request.\n<instructions>Given the Conversation between Human and Assistant, you need to create a summary of the order with bullet points and include the order total.\nHere are some examples:\n<example><Conversation> H: I want a large frappe and 2 small americanos with sugar.\n\nA: Okay, let me confirm:\n\n1 large frappe\n\n2 small americanos with sugar\n\nIs this correct?\n\nH: Yes.</Conversation>\n\n<OrderTotal>10.50</OrderTotal>\n\nA:<Response>\nHere is a summary of your order along with the total:\n\n1 large frappe\n\n2 small americanos with sugar.\nYour Order total is $10.50</Response></example>\n<example><Conversation> H: I want a medium americano.\n\nA: Okay, let me confirm:\n\n1 medium americano\n\nIs this correct?\n\nH: Yes.</Conversation>\n\n<OrderTotal>3.50</OrderTotal>\n\nA:<Response>\nHere is a summary of your order along with the total:\n\n1 medium americano.\nYour Order total is $3.50</Response></example>\n<example><Conversation> H: I want a large latte with oat milk.\n\nA: Okay, let me confirm:\n\nLarge latte with oat milk\n\nIs this correct?\n\nH: Yes.</Conversation>\n\n<OrderTotal>6.75</OrderTotal>\n\nA:<Response>\nHere is a summary of your order along with the total:\n\nLarge latte with oat milk.\nYour Order total is $6.75</Response></example>\n<example><Conversation> H: I want a small mocha with no whipped cream please.\n\nA: Okay, let me confirm:\n\nSmall mocha with no whipped cream\n\nIs this correct?\n\nH: Yes.</Conversation>\n\n<OrderTotal>4.25</OrderTotal>\n\nA:<Response>\nHere is a summary of your order along with the total:\n\nSmall mocha with no whipped cream.\nYour Order total is $4.25</Response>\n\n</example>\n</instructions>\n\nPlease complete this request according to the instructions and examples provided above:<request><Conversation>REPLACEME</Conversation>\n\n<OrderTotal>REPLACETOTAL</OrderTotal></request>\n\nAssistant:\n", "max_tokens_to_sample": 250, "temperature": 0.3, "top_k": 250, "top_p": 0.75, "stop_sequences": ["\n\nHuman:", "\n\nhuman:", "\n\nCustomer:", "\n\ncustomer:", "[Conversation]"]}"

image030
image032

As you can see, we used our prompt templates to validate menu items, identify missing required information, create a data structure, and summarize the order. The foundation models available on Amazon Bedrock are very powerful, so you could accomplish even more tasks via these templates.

You have now completed engineering the prompts and saved the templates to text files. You can begin creating the Amazon Lex bot and the associated Lambda functions.

Create a Lambda layer with the prompt templates

Complete the following steps to create your Lambda layer:

  1. In SageMaker Studio, create a new folder with a subfolder named python.
  2. Copy your prompt files to the python folder.

image034

  1. You can add the ZIP library to your notebook instance by running the following command.
!conda install -y -c conda-forge zip

image036

  1. Now, run the following command to create the ZIP file for uploading to the Lambda layer.
!zip -r prompt_templates_layer.zip prompt_templates_layer/.

image038

  1. After you create the ZIP file, you can download the file. Go to Lambda, and create a new layer by uploading the file directly or by uploading to Amazon S3 first.
  2. Then attach this new layer to the orchestration Lambda function.

Now your prompt template files are stored locally in your Lambda runtime environment. This will speed up the process during your bot runs.

Create a Lambda layer with the required libraries

Complete the following steps to create your Lambda layer with the required libraries:

  1. Open an AWS Cloud9 instance environment, and create a folder with a subfolder called python.
  2. Open a terminal inside the python folder.
  3. Run the following commands from the terminal:
pip install "boto3>=1.28.57" -t .
pip install "awscli>=1.29.57" -t .
pip install "botocore>=1.31.57" -t .

  1. Run cd .. and place yourself inside your new folder where you also have the python subfolder.
  2. Run the zip command to create the ZIP file, as you did for the prompt templates layer.
  1. After you create the ZIP file, you can download the file. Go to Lambda, and create a new layer by uploading the file directly or by uploading to Amazon S3 first.
  2. Then attach this new layer to the orchestration Lambda function.

Create the bot in Amazon Lex v2

For this use case, we build an Amazon Lex bot that can provide an input/output interface for the architecture in order to call Amazon Bedrock using voice or text from any interface. Because the LLM will handle the conversation piece of this order processing agent, and Lambda will orchestrate the workflow, you can create a bot with three intents and no slots.

  1. On the Amazon Lex console, create a new bot with the method Create a blank bot.

image040

Now you can add an intent with any appropriate initial utterance for the end-users to start the conversation with the bot. We use simple greetings and add an initial bot response so end-users can provide their requests. When creating the bot, make sure to use a Lambda code hook with the intents; this will trigger a Lambda function that will orchestrate the workflow between the customer, Amazon Lex, and the LLM.

  1. Add your first intent, which triggers the workflow and uses the intent validation prompt template to call Amazon Bedrock and identify what the customer is trying to accomplish. Add a few simple utterances for end-users to start the conversation.

image042

You don't need to use any slots or initial reading in any of the bot intents. In fact, you don't need to add utterances to the second or third intents. That is because the LLM will guide Lambda throughout the process.

  1. Add a confirmation prompt. You can customize this message in the Lambda function later.

image044

  1. Under Code hooks, select Use a Lambda function for initialization and validation.

image046

  1. Create a second intent with no utterance and no initial response. This is the PlaceOrder intent.

When the LLM identifies that the customer is trying to place an order, the Lambda function will trigger this intent, validate the customer request against the menu, and make sure that no required information is missing. Remember that all of this is in the prompt templates, so you can adapt this workflow for any use case by changing the prompt templates.
image048

  1. Don't add any slots, but add a confirmation prompt and decline response.

image050

  1. Select Use a Lambda function for initialization and validation.

image052

  1. Create a third intent named ProcessOrder with no sample utterances and no slots.
  2. Add an initial response, a confirmation prompt, and a decline response.

After the LLM has validated the customer request, the Lambda function triggers the third and final intent to process the order. Here, Lambda will use the object creator template to generate the order JSON data structure to query the DynamoDB table, and then use the order summary template to summarize the entire order along with the total so Amazon Lex can pass it to the customer.
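The total computation at this step can be sketched as follows. In the post, the prices come from the ItemDetails DynamoDB table via get_item on the Item and Size keys; here a plain dict with made-up prices stands in for that lookup:

```python
import json

# A price table keyed by (item, size) stands in for DynamoDB get_item calls
# against the ItemDetails table; the prices below are illustrative.
PRICES = {("latte", "medium"): 4.50, ("americano", "small"): 3.00}

def order_total(order_json: str) -> float:
    """Sum the price of every line item in the ObjectCreator data structure."""
    order = json.loads(order_json)
    total = 0.0
    for line in order.values():
        # With DynamoDB this lookup would be:
        # table.get_item(Key={'Item': line['item'], 'Size': line['size']})
        total += PRICES[(line["item"], line["size"])]
    return round(total, 2)

order = ('{"1":{"item":"latte","size":"medium","addOns":[]},'
         '"2":{"item":"americano","size":"small","addOns":["sugar"]}}')
print(order_total(order))  # 7.5
```

The computed total is then substituted into the REPLACETOTAL placeholder of the order summary template before the final Bedrock call.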

image054

  1. Select Use a Lambda function for initialization and validation. This can use any Lambda function to process the order after the customer has given the final confirmation.

image056

  1. After you create all three intents, go to the Visual builder for the ValidateIntent, add a go-to intent step, and connect the output of the positive confirmation to that step.
  2. After you add the go-to intent, edit it and choose the PlaceOrder intent as the intent name.

image058

  1. Similarly, go to the Visual builder for the PlaceOrder intent and connect the output of the positive confirmation to the ProcessOrder go-to intent. No editing is required for the ProcessOrder intent.
  2. You now need to create the Lambda function that orchestrates Amazon Lex and calls the DynamoDB table, as detailed in the following section.

Create a Lambda function to orchestrate the Amazon Lex bot

You can now build the Lambda function that orchestrates the Amazon Lex bot and workflow. Complete the following steps:

  1. Create a Lambda function with the standard execution policy and let Lambda create a role for you.
  2. In the code window of your function, add a few utility functions that will help: format the prompts by adding the Lex context to the template, call the Amazon Bedrock LLM API, extract the desired text from the responses, and more. See the following code:
import json
import re
import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

bedrock = boto3.consumer(service_name="bedrock-runtime")
def CreatingCustomPromptFromLambdaLayer(object_key,replace_items):
   
    folder_path="/choose/order_processing_agent_prompt_templates/python/"
    attempt:
        file_path = folder_path + object_key
        with open(file_path, "r") as file1:
            raw_template = file1.learn()
            # Modify the template with the customized enter immediate
            #template['inputs'][0].insert(1, {"function": "person", "content material": '### Enter:n' + user_request})
            for key,worth in replace_items.gadgets():
                worth = json.dumps(json.dumps(worth).exchange('"','')).exchange('"','')
                raw_template = raw_template.exchange(key,worth)
            modified_prompt = raw_template

            return modified_prompt
    besides Exception as e:
        return {
            'statusCode': 500,
            'physique': f'An error occurred: {str(e)}'
        }
def CreatingCustomPrompt(object_key,replace_items):
    logger.debug('replace_items is: {}'.format(replace_items))
    #retrieve person request from intent_request
    #we first propmt the mannequin with present order
    
    bucket_name="your-bucket-name"
    
    #object_key = 'prompt_template_order_processing.txt'
    attempt:
        s3 = boto3.consumer('s3')
        # Retrieve the prevailing template from S3
        response = s3.get_object(Bucket=bucket_name, Key=object_key)
        raw_template = response['Body'].learn().decode('utf-8')
        raw_template = json.masses(raw_template)
        logger.debug('uncooked template is {}'.format(raw_template))
        #template_json = json.masses(raw_template)
        #logger.debug('template_json is {}'.format(template_json))
        #template = json.dumps(template_json)
        #logger.debug('template is {}'.format(template))

        # Modify the template with the customized enter immediate
        #template['inputs'][0].insert(1, {"function": "person", "content material": '### Enter:n' + user_request})
        for key,worth in replace_items.gadgets():
            raw_template = raw_template.exchange(key,worth)
            logger.debug("Changing: {} nwith: {}".format(key,worth))
        modified_prompt = json.dumps(raw_template)
        logger.debug("Modified template: {}".format(modified_prompt))
        logger.debug("Modified template kind is: {}".format(print(kind(modified_prompt))))
        
        #modified_template_json = json.masses(modified_prompt)
        #logger.debug("Modified template json: {}".format(modified_template_json))
        
        return modified_prompt
    besides Exception as e:
        return {
            'statusCode': 500,
            'physique': f'An error occurred: {str(e)}'
        }
    
def validate_intent(intent_request):
    logger.debug('starting validate_intent: {}'.format(intent_request))
    #retrieve user request from intent_request
    user_request = "Human: " + intent_request['inputTranscript'].lower()
    #getting current context variables
    current_session_attributes = intent_request['sessionState']['sessionAttributes']
    if len(current_session_attributes) > 0:
        full_context = current_session_attributes['fullContext'] + '\n\n' + user_request
        dialog_context = current_session_attributes['dialogContext'] + '\n\n' + user_request
    else:
        full_context = user_request
        dialog_context = user_request

    #Preparing the validation prompt by adding context to the prompt template
    object_key = 'prompt_template_intent_validator.txt'
    replace_items = {"REPLACEME": dialog_context}
    validation_prompt = CreatingCustomPromptFromLambdaLayer(object_key, replace_items)

    #Prompting the model for intent validation
    intent_validation_completion = prompt_bedrock(validation_prompt)
    intent_validation_completion = re.sub(r'["]', '', intent_validation_completion)

    #extracting the response and intent from the completion and removing some special characters
    validation_response = extract_response(intent_validation_completion)
    validation_intent = extract_intent(intent_validation_completion)

    #business logic depending on intents
    if validation_intent == 'Place Order':
        return validate_request(intent_request)
    elif validation_intent in ['Complain', 'Speak to Someone']:
        ##adding session attributes to keep current context
        full_context = full_context + '\n\n' + intent_validation_completion
        dialog_context = dialog_context + '\n\nAssistant: ' + validation_response
        intent_request['sessionState']['sessionAttributes']['fullContext'] = full_context
        intent_request['sessionState']['sessionAttributes']['dialogContext'] = dialog_context
        intent_request['sessionState']['sessionAttributes']['customerIntent'] = validation_intent
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', validation_response)
    if validation_intent == 'Greeting':
        ##adding session attributes to keep current context
        full_context = full_context + '\n\n' + intent_validation_completion
        dialog_context = dialog_context + '\n\nAssistant: ' + validation_response
        intent_request['sessionState']['sessionAttributes']['fullContext'] = full_context
        intent_request['sessionState']['sessionAttributes']['dialogContext'] = dialog_context
        intent_request['sessionState']['sessionAttributes']['customerIntent'] = validation_intent
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'InProgress', 'ConfirmIntent', validation_response)

def validate_request(intent_request):
    logger.debug('starting validate_request: {}'.format(intent_request))
    #retrieve user request from intent_request
    user_request = "Human: " + intent_request['inputTranscript'].lower()
    #getting current context variables
    current_session_attributes = intent_request['sessionState']['sessionAttributes']
    if len(current_session_attributes) > 0:
        full_context = current_session_attributes['fullContext'] + '\n\n' + user_request
        dialog_context = current_session_attributes['dialogContext'] + '\n\n' + user_request
    else:
        full_context = user_request
        dialog_context = user_request

    #Preparing the validation prompt by adding context to the prompt template
    object_key = 'prompt_template_request_validator.txt'
    replace_items = {"REPLACEME": dialog_context}
    validation_prompt = CreatingCustomPromptFromLambdaLayer(object_key, replace_items)

    #Prompting the model for request validation
    request_validation_completion = prompt_bedrock(validation_prompt)
    request_validation_completion = re.sub(r'["]', '', request_validation_completion)

    #extracting the response from the completion and removing some special characters
    validation_response = extract_response(request_validation_completion)

    ##adding session attributes to keep current context
    full_context = full_context + '\n\n' + request_validation_completion
    dialog_context = dialog_context + '\n\nAssistant: ' + validation_response
    intent_request['sessionState']['sessionAttributes']['fullContext'] = full_context
    intent_request['sessionState']['sessionAttributes']['dialogContext'] = dialog_context

    return close(intent_request['sessionState']['sessionAttributes'], 'PlaceOrder', 'InProgress', 'ConfirmIntent', validation_response)
    
def process_order(intent_request):
    logger.debug('starting process_order: {}'.format(intent_request))

    #retrieve user request from intent_request
    user_request = "Human: " + intent_request['inputTranscript'].lower()
    #getting current context variables
    current_session_attributes = intent_request['sessionState']['sessionAttributes']
    if len(current_session_attributes) > 0:
        full_context = current_session_attributes['fullContext'] + '\n\n' + user_request
        dialog_context = current_session_attributes['dialogContext'] + '\n\n' + user_request
    else:
        full_context = user_request
        dialog_context = user_request

    #Preparing the object creator prompt by adding context to the prompt template
    object_key = 'prompt_template_object_creator.txt'
    replace_items = {"REPLACEME": dialog_context}
    object_creator_prompt = CreatingCustomPromptFromLambdaLayer(object_key, replace_items)
    #Prompting the model for object creation
    object_creation_completion = prompt_bedrock(object_creator_prompt)
    #extracting the response from the completion
    object_creation_response = extract_response(object_creation_completion)
    inputParams = json.loads(object_creation_response)
    #double-encode so the child Lambda receives a JSON string payload
    inputParams = json.dumps(json.dumps(inputParams))
    logger.debug('inputParams is: {}'.format(inputParams))
    client = boto3.client('lambda')
    #invokes the order validator Lambda to check for missing attributes
    response = client.invoke(FunctionName='arn:aws:lambda:us-east-1:<AccountNumber>:function:aws-blog-order-validator', InvocationType='RequestResponse', Payload=inputParams)
    responseFromChild = json.load(response['Payload'])
    validationResult = responseFromChild['statusCode']
    if validationResult == 205:
        order_validation_error = responseFromChild['validator_response']
        return close(intent_request['sessionState']['sessionAttributes'], 'PlaceOrder', 'InProgress', 'ConfirmIntent', order_validation_error)
    #invokes the order processing Lambda to query the DynamoDB table and return the order total
    response = client.invoke(FunctionName='arn:aws:lambda:us-east-1:<AccountNumber>:function:aws-blog-order-processing', InvocationType='RequestResponse', Payload=inputParams)
    responseFromChild = json.load(response['Payload'])
    orderTotal = responseFromChild['body']
    ###Prompting the model to summarize the order along with the order total
    object_key = 'prompt_template_order_summary.txt'
    replace_items = {"REPLACEME": dialog_context, "REPLACETOTAL": orderTotal}
    order_summary_prompt = CreatingCustomPromptFromLambdaLayer(object_key, replace_items)
    order_summary_completion = prompt_bedrock(order_summary_prompt)
    #extracting the response from the completion
    order_summary_response = extract_response(order_summary_completion)
    order_summary_response = order_summary_response + '. Shall I finalize processing your order?'
    ##adding session attributes to keep current context
    full_context = full_context + '\n\n' + order_summary_completion
    dialog_context = dialog_context + '\n\nAssistant: ' + order_summary_response
    intent_request['sessionState']['sessionAttributes']['fullContext'] = full_context
    intent_request['sessionState']['sessionAttributes']['dialogContext'] = dialog_context
    return close(intent_request['sessionState']['sessionAttributes'], 'ProcessOrder', 'InProgress', 'ConfirmIntent', order_summary_response)
    

""" --- Main handler and workflow functions --- """

def lambda_handler(event, context):
    """
    Route the incoming request based on intent.
    The JSON body of the request is provided in the event slot.
    """
    logger.debug('event is: {}'.format(event))

    return dispatch(event)

def dispatch(intent_request):
    """
    Called when the user specifies an intent for this bot. Raises an error if the intent is not valid.
    """
    logger.debug('intent_request is: {}'.format(intent_request))
    intent_name = intent_request['sessionState']['intent']['name']
    confirmation_state = intent_request['sessionState']['intent']['confirmationState']
    # Dispatch to your bot's intent handlers
    if intent_name == 'ValidateIntent' and confirmation_state == 'None':
        return validate_intent(intent_request)
    if intent_name == 'PlaceOrder' and confirmation_state == 'None':
        return validate_request(intent_request)
    elif intent_name == 'PlaceOrder' and confirmation_state == 'Confirmed':
        return process_order(intent_request)
    elif intent_name == 'PlaceOrder' and confirmation_state == 'Denied':
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', 'Got it. Let me know if I can help you with something else.')
    elif intent_name == 'PlaceOrder' and confirmation_state not in ['Denied', 'Confirmed', 'None']:
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', "Sorry. I'm having trouble completing the request. Let me get someone to help you.")
    elif intent_name == 'ProcessOrder' and confirmation_state == 'None':
        return validate_request(intent_request)
    elif intent_name == 'ProcessOrder' and confirmation_state == 'Confirmed':
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', 'Great! Your order has been processed. Please proceed to payment.')
    elif intent_name == 'ProcessOrder' and confirmation_state == 'Denied':
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', 'Got it. Let me know if I can help you with something else.')
    elif intent_name == 'ProcessOrder' and confirmation_state not in ['Denied', 'Confirmed', 'None']:
        return close(intent_request['sessionState']['sessionAttributes'], intent_request['sessionState']['intent']['name'], 'Fulfilled', 'Close', "Sorry. I'm having trouble completing the request. Let me get someone to help you.")
    raise Exception('Intent with name ' + intent_name + ' not supported')
    
def prompt_bedrock(formatted_template):
    logger.debug('prompt bedrock input is: {}'.format(formatted_template))
    #validate and normalize the JSON request body string
    body = json.dumps(json.loads(formatted_template))

    modelId = 'anthropic.claude-v2'  # change this to use a different model from the model provider
    accept = 'application/json'
    contentType = 'application/json'

    response = bedrock.invoke_model(body=body, modelId=modelId, accept=accept, contentType=contentType)
    response_body = json.loads(response.get('body').read())
    response_completion = response_body.get('completion')
    logger.debug('response is: {}'.format(response_completion))

    return response_completion
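Because `prompt_bedrock` parses the template with `json.loads`, each prompt template stored in Amazon S3 is expected to resolve to a complete Claude v2 request body. A minimal sketch of that shape (the parameter values here are illustrative assumptions, not the post's exact templates):

```python
import json

# Hypothetical request body for Claude v2 on Amazon Bedrock;
# the prompt text would come from the template with REPLACEME filled in
request_body = json.dumps({
    "prompt": "\n\nHuman: I'd like a small latte.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
    "stop_sequences": ["\n\nHuman:"],
})

# prompt_bedrock() round-trips this string through json.loads/json.dumps
# before passing it to bedrock.invoke_model
parsed = json.loads(request_body)
```

Note that the Claude v2 text completion API requires the `\n\nHuman:` / `\n\nAssistant:` turn markers in the prompt, which is why the handlers prefix each transcript with `Human: `.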

#function to extract the text between the <Response> and </Response> tags within the model completion
def extract_response(response_completion):
    if '<Response>' in response_completion:
        customer_response = response_completion.replace('<Response>', '||').replace('</Response>', '').split('||')[1]
        logger.debug('modified response is: {}'.format(response_completion))
        return customer_response
    else:
        logger.debug('modified response is: {}'.format(response_completion))
        return response_completion

#function to extract the text between the <intent> and </intent> tags within the model completion
def extract_intent(response_completion):
    if '<intent>' in response_completion:
        customer_intent = response_completion.replace('<intent>', '||').replace('</intent>', '||').split('||')[1]
        return customer_intent
    else:
        #no intent tags found; return None so callers can handle the miss
        return None
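The tag-extraction technique above can be exercised standalone. For example, given a completion containing both tags (the completion text here is made up for illustration):

```python
# Illustrative model completion containing the tags the prompt
# templates instruct the model to emit
completion = ("<intent>Place Order</intent>\n"
              "<Response>Great choice! One latte coming up.</Response>")

# Same replace/split technique used by extract_intent and extract_response
intent = completion.replace('<intent>', '||').replace('</intent>', '||').split('||')[1]
response = completion.replace('<Response>', '||').replace('</Response>', '').split('||')[1]
```

The replace-then-split approach is deliberately forgiving: if the model omits a closing tag, the helpers fall back to returning the raw completion (or `None` for the intent) rather than raising.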
        
def close(session_attributes, intent, fulfillment_state, action_type, message):
    #This function prepares the response in the appropriate format for Lex V2
    response = {
        "sessionState": {
            "sessionAttributes": session_attributes,
            "dialogAction": {
                "type": action_type
            },
            "intent": {
                "name": intent,
                "state": fulfillment_state
            },
        },
        "messages": [{
            "contentType": "PlainText",
            "content": message,
        }],
    }
    return response

  1. Attach the Lambda layer you created earlier to this function.
  2. Additionally, attach the layer for the prompt templates you created.
  3. In the Lambda execution role, attach the policy to access Amazon Bedrock, which was created earlier.
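As an illustration only (the post's actual policy appears in the screenshot), a minimal Bedrock access statement might look like the following; in production, scope the `Resource` down to specific model ARNs:

```python
import json

# Hypothetical minimal IAM policy granting the Lambda execution role
# permission to invoke Bedrock models
bedrock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "*",
        }
    ],
}
policy_json = json.dumps(bedrock_policy, indent=2)
```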


The Lambda execution role should have the following permissions.

Attach the orchestration Lambda function to the Amazon Lex bot

  1. After you create the function in the previous section, return to the Amazon Lex console and navigate to your bot.
  2. Under Languages in the navigation pane, choose English.
  3. For Source, choose your order processing bot.
  4. For Lambda function version or alias, choose $LATEST.
  5. Choose Save.


Create supporting Lambda functions

Complete the following steps to create the additional Lambda functions:

  1. Create a Lambda function to query the DynamoDB table that you created earlier:
import json
import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# Initialize the DynamoDB resource
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('your-table-name')

def calculate_grand_total(input_data):
    # Initialize the total price
    total_price = 0

    try:
        # Loop through each item in the input JSON
        for item_id, item_data in input_data.items():
            item_name = item_data['item'].lower()  # Convert item name to lowercase
            item_size = item_data['size'].lower()  # Convert item size to lowercase

            # Query the DynamoDB table for the item based on Item and Size
            response = table.get_item(
                Key={'Item': item_name,
                     'Size': item_size}
            )

            # Check if the item was found in the table
            if 'Item' in response:
                item = response['Item']
                price = float(item['Price'])
                total_price += price  # Add the item's price to the total

        return total_price
    except Exception as e:
        raise Exception('An error occurred: {}'.format(str(e)))

def lambda_handler(event, context):
    try:
        # Parse the input JSON from the Lambda event
        input_json = json.loads(event)

        # Calculate the grand total
        grand_total = calculate_grand_total(input_json)

        # Return the grand total in the response
        return {'statusCode': 200, 'body': json.dumps(grand_total)}
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps('An error occurred: {}'.format(str(e)))
        }
  1. Navigate to the Configuration tab of the Lambda function and choose Permissions.
  2. Attach a resource-based policy statement allowing the order processing Lambda function to invoke this function.


  1. Navigate to the IAM execution role for this Lambda function and add a policy to access the DynamoDB table.


  1. Create another Lambda function to validate whether all required attributes were passed by the customer. In the following example, we validate whether the size attribute is captured for an order:
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def lambda_handler(event, context):
    # Parse the customer orders from the input event
    customer_orders = json.loads(event)

    # Initialize collections for error messages
    order_errors = {}
    missing_size = []
    error_messages = []
    # Iterate through each order in customer_orders
    for order_id, order in customer_orders.items():
        if "size" not in order or order["size"] == "":
            missing_size.append(order['item'])
            order_errors['size'] = missing_size
    if order_errors:
        items_missing_size = order_errors['size']
        error_message = f"could you please provide the size for the following items: {', '.join(items_missing_size)}?"
        error_messages.append(error_message)

    # Prepare the response message
    if error_messages:
        response_message = "\n".join(error_messages)
        return {
            'statusCode': 205,
            'validator_response': response_message
        }
    else:
        response_message = "Order is validated successfully"
        return {
            'statusCode': 200,
            'validator_response': response_message
        }
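The validator's core check can be illustrated with a sample payload (the order contents here are hypothetical, in the shape the object creator prompt emits):

```python
# Hypothetical order payload, keyed by order ID as the object
# creator prompt produces it
sample_orders = {
    "1": {"item": "cappuccino", "size": "large"},
    "2": {"item": "latte", "size": ""},
}

# The same per-entry check the validator applies: an order fails
# if the size attribute is absent or empty
missing_size = [o["item"] for o in sample_orders.values()
                if "size" not in o or o["size"] == ""]
```

An entry like the latte above would trigger the 205 status code, which the orchestrator treats as a signal to ask the customer for the missing size rather than proceed to pricing.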

  1. Navigate to the Configuration tab of the Lambda function and choose Permissions.
  2. Attach a resource-based policy statement allowing the order processing Lambda function to invoke this function.


Test the solution

Now we can test the solution with example orders that customers place via Amazon Lex.

For our first example, the customer asked for a frappuccino, which is not on the menu. The model validates the order with the help of the order validator template and suggests some recommendations based on the menu. After the customer confirms their order, they're notified of the order total and order summary. The order will be processed based on the customer's final confirmation.


In our next example, the customer orders a large cappuccino and then modifies the size from large to medium. The model captures all necessary changes and asks the customer to confirm the order. The model presents the order total and order summary, and processes the order based on the customer's final confirmation.


For our final example, the customer placed an order for multiple items, and the size is missing for a couple of them. The model and Lambda function verify whether all required attributes are present to process the order, and then ask the customer to provide the missing information. After the customer provides the missing information (in this case, the size of the coffee), they're shown the order total and order summary. The order will be processed based on the customer's final confirmation.


LLM limitations

LLM outputs are stochastic by nature, which means that the results from our LLM can vary in format, or even take the form of untruthful content (hallucinations). Therefore, developers need to rely on good error handling logic throughout their code in order to handle these scenarios and avoid a degraded end-user experience.
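One common guard is to wrap the parsing of model output in error handling with a safe fallback, so a malformed completion degrades gracefully instead of crashing the conversation. A generic sketch (not code from this solution):

```python
import json
import logging

logger = logging.getLogger(__name__)

def safe_parse_order(completion, fallback=None):
    """Parse an LLM completion expected to contain JSON; fall back
    gracefully instead of failing the conversation on bad output."""
    try:
        return json.loads(completion)
    except (json.JSONDecodeError, TypeError) as e:
        logger.warning("Unparseable model output: %s", e)
        return fallback

# Well-formed completion parses normally
good = safe_parse_order('{"1": {"item": "latte", "size": "small"}}')
# A hallucinated non-JSON reply falls back to the provided default
bad = safe_parse_order("Sorry, I could not build the order.", fallback={})
```

In practice, the fallback path might re-prompt the model or hand the customer off to a human, depending on the use case.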

Clean up

If you no longer need this solution, you can delete the following resources:

  • Lambda functions
  • Amazon Lex bot
  • DynamoDB table
  • S3 bucket

Additionally, shut down the SageMaker Studio instance if the application is no longer required.

Cost analysis

For pricing information for the main services used by this solution, see the following:

Note that you can use Claude v2 without the need for provisioning, so overall costs remain minimal. To further reduce costs, you can configure the DynamoDB table with the on-demand setting.

Conclusion

This post demonstrated how to build a speech-enabled AI order processing agent using Amazon Lex, Amazon Bedrock, and other AWS services. We showed how prompt engineering with a powerful generative AI model like Claude can enable robust natural language understanding and conversation flows for order processing without the need for extensive training data.

The solution architecture uses serverless components like Lambda, Amazon S3, and DynamoDB to enable a flexible and scalable implementation. Storing the prompt templates in Amazon S3 allows you to customize the solution for different use cases.

Next steps could include expanding the agent's capabilities to handle a wider range of customer requests and edge cases. The prompt templates provide a way to iteratively improve the agent's skills. Additional customizations could involve integrating the order data with backend systems like inventory, CRM, or POS. Finally, the agent could be made available across various customer touchpoints like mobile apps, drive-thru, kiosks, and more using the multi-channel capabilities of Amazon Lex.

To learn more, refer to the following related resources:

  • Deploying and managing multi-channel bots
  • Prompt engineering for Claude and other models
  • Serverless architectural patterns for scalable AI assistants

About the Authors

Moumita Dutta is a Partner Solutions Architect at Amazon Web Services. In her role, she collaborates closely with partners to develop scalable and reusable assets that streamline cloud deployments and enhance operational efficiency. She is a member of the AI/ML community and a generative AI expert at AWS. In her leisure time, she enjoys gardening and cycling.

Fernando Lammoglia is a Partner Solutions Architect at Amazon Web Services, working closely with AWS partners in spearheading the development and adoption of cutting-edge AI solutions across business units. He is a strategic leader with expertise in cloud architecture, generative AI, machine learning, and data analytics, specializing in executing go-to-market strategies and delivering impactful AI solutions aligned with organizational goals. In his free time, he likes to spend time with his family and travel to other countries.

Mitul Patel is a Senior Solutions Architect at Amazon Web Services. In his role as a cloud technology enabler, he works with customers to understand their goals and challenges, and provides prescriptive guidance to achieve their objectives with AWS offerings. He is a member of the AI/ML community and a generative AI ambassador at AWS. In his free time, he enjoys hiking and playing soccer.


