To ask customers about their satisfaction after a chat with a human agent, for example, a downstream chatbot can be started automatically once the agent chat has been completed.
Setting up such a chatbot takes a few steps, which are described in this guide.
Content
Create/expand knowledge base
Create event
Creating evaluation rules
Define chatbot in Supervisor
Connect chatbot to Messenger inbox account
Evaluation of subsequent chats
As soon as a subsequent chat is started automatically from an agent chat, an event is triggered in the knowledge base that receives information about the initial chat. This information (the event parameters) can be received in the event and stored in variables. Based on this information, the event can then decide whether to start another chatbot dialogue.
The event triggered in the knowledge base is called SUBSEQUENTCHAT. If this event does not yet exist in the knowledge base, it must be created. To do this, open the ‘COMMON’->‘EVENTS’ area in the agent section of the knowledge base, create a new event via the context menu, enter SUBSEQUENTCHAT as the name and save.
You can now enter the action code for transferring the parameters in the ‘Action’ area. Please note: the variables used to receive the event parameters must be defined in advance as ‘String’ variables in the ‘Global’ area.
Example as a copy template:
previousChatId = EVENTPARAM1;
previousChatCategoryId = EVENTPARAM2;
previousCategoryName = EVENTPARAM3;
agentId = EVENTPARAM4;
agentName = EVENTPARAM5;
transactioncodeId = EVENTPARAM6;
transactioncodeName = EVENTPARAM7;

A ‘Link’ action can then be used to jump directly to the evaluation branch of the knowledge base. It is recommended to deactivate the rule that is jumped to, i.e. not to activate the ‘Active’ switch on the rule linked via ‘Goto’. This prevents the evaluation from being started accidentally from another dialogue; it can then only be reached via direct Goto commands, as in our example above. Deactivated rules also appear in a lighter grey in the tree. At the top of the image, you can see that the first message is already sent to the customer directly after the event parameters are received: ‘Subsequent chat started…’. This is, of course, optional. Finally, the Goto to the evaluation rule is inserted in the image. That rule then starts with a greeting and asks the customer the first question.
Creating evaluation rules
Related dialogue steps are usually created within a context (folder) in the tree; not least, this keeps the tree clear.
So we select the agent's main node in the tree, i.e. ‘Basic_Agent’ in the example image above, and use the context menu (orange three-dot icon) to create a new ‘Context’ named ‘RATING’. The name can of course be chosen freely. We then save and close the context, select it in the tree and use its context menu to create the first welcome rule. In our example, the rule is named ‘Welcome’.

In this rule, we welcome the customer and ask the first question. To ensure that the answer to this question can be stored correctly, we create another rule (in this example, ‘Agent-competence’) and link it via the ‘Analyzer’ type (see image above).
In the linked analyser rule ‘Agent-competence’, we then receive the input via a MATCH rule – here we could also use regular expressions to check for specific input formats. In our example, we keep it simple and receive everything that was entered in the placeholder MATCH1:
('<MATCH1>.*<\MATCH1>')
The dot in this expression means ‘any character’ and the asterisk means ‘any number of times’, so we capture everything that is entered. This is why it is particularly important that the rule is not activated: otherwise it would take precedence over any other rules stored in the knowledge base.
To enter this expression, we activate the ‘Edit Expression manually’ switch – and, of course, we set “Active” to ‘off’:

Now we can work with the MATCH1 variable in the action code and save the input in a global variable, which we must have defined in advance in the Global area:

Here, we have decided to create the global variable as a number and to convert the input into a number. Please note: there is no error handling for the conversion here – this example is therefore not fully production-ready and has been deliberately kept simple for the sake of clarity!
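One way to mitigate this is to restrict the MATCH expression itself, e.g. ('<MATCH1>[1-5]<\MATCH1>') instead of the catch-all, and/or to validate the input before converting it. The following plain-Java sketch illustrates such a defensive conversion; the class and method names are hypothetical, and in the knowledge base this logic would live in the action code of the analyser rule:

```java
// Sketch only: illustrates defensive conversion of the customer's
// input; not part of the iAGENT Help API.
public class RatingParser {

    // Converts the input into a rating from 1 to 5,
    // returning -1 for anything that is not a valid rating.
    static int parseRating(String input) {
        String s = input.trim();
        if (s.matches("[1-5]")) {
            return Integer.parseInt(s);
        }
        return -1; // marker for "no valid rating entered"
    }

    public static void main(String[] args) {
        System.out.println(parseRating(" 4 "));   // prints 4
        System.out.println(parseRating("great")); // prints -1
    }
}
```

With a marker value such as -1, later evaluation steps can recognise and skip answers that could not be interpreted as a rating.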
The bot thus saves the input in the variable customerSatisfaction and then asks how the agent’s competence was. It then links to the next analyser rule. The configuration in this rule is very similar. Receive the input via MATCH1, save the result in the next variable and jump to the next rule via Analyser Link.
Once all data has been queried via analyser rules, everything that has been stored as answers in variables must be saved as data fields in the chat after the last answer, i.e. in the last analyser rule in the chain. This is done using an action code. First, a hash table is created in the action code. All parameters are then stored in it with their type, name and content:
// Create an empty flat table
Hashtable storage = jsonToFlattable("{}");
// One entry per data field: key (name), value and type, indexed 0, 1, 2, ...
storage.put("0.key", "customerSatisfaction");
storage.put("0.value", customerSatisfaction);
storage.put("0.type", "double");
storage.put("1.key", "agentsCompetence");
storage.put("1.value", agentsCompetence);
storage.put("1.type", "double");
storage.put("2.key", "agentsLanguageSkills");
storage.put("2.value", agentsLanguageSkills);
storage.put("2.type", "double");
// Write the collected values to the chat as data fields
set("setStorageValues", flattableToJson(storage));
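If many criteria are collected, the repeated put calls can be wrapped in a small helper. The following plain-Java sketch shows the same key/value/type scheme; the helper addField is hypothetical and not part of the iAGENT Help API, and in the actual action code jsonToFlattable("{}") would take the place of the plain Hashtable:

```java
import java.util.Hashtable;

public class StorageSketch {

    // Hypothetical helper: stores one data field as an indexed
    // key/value/type triple, mirroring the action code above.
    static void addField(Hashtable<String, Object> storage, int index,
                         String name, Object value, String type) {
        storage.put(index + ".key", name);
        storage.put(index + ".value", value);
        storage.put(index + ".type", type);
    }

    public static void main(String[] args) {
        Hashtable<String, Object> storage = new Hashtable<>();
        addField(storage, 0, "customerSatisfaction", 4.0, "double");
        addField(storage, 1, "agentsCompetence", 5.0, "double");
        addField(storage, 2, "agentsLanguageSkills", 5.0, "double");
        System.out.println(storage.get("1.key")); // prints agentsCompetence
    }
}
```

The numeric index prefix ("0.", "1.", …) is what keeps the entries ordered when the flat table is serialised back to JSON.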
To clarify:

This completes the creation of all rules and prepares the knowledge base.
The knowledge base must now be activated on a Help system that is reachable from the iAGENT system (check firewalls, etc.).
Define chatbot in Supervisor
Next, we need to connect the chatbot to Supervisor. To do this, create a new entry in the list of chatbots in Supervisor via ‘Administration->Chat->Chatbot’. On the ‘Integration’ page, enter the URL for the Help REST API – this is usually <server URL>/nmIQ/api/rest. The timeouts can be set as required.

The connection between the chatbot configured here and the Messenger inbox account is then made in the next step directly on the Messenger inbox account, NOT here on this tab – only ‘upstream chatbots’ can be configured here!
Please note: After configuring a chatbot, it must still be activated via the list of chatbots!
Connect chatbot to Messenger inbox account
We still need to tell the system when to start this chatbot. To do this, we open the list of Messenger inbox accounts (‘Administration->System->Messenger inbox accounts’) and select the appropriate Messenger inbox account with a click. The editing mask opens.
On the ‘Automatic replies’ tab, we find a section called ‘Integration of a novomind iAGENT Help chatbot’. Here, select ‘Selected chatbot’ under ‘Chatbot’ and set up the new chatbot:

Now we just need to tick the box next to ‘Start bot chat after agent chat ends’ and select a category to be used for the subsequent chat.
Now make sure that the Messenger inbox account is also activated (again via the list) and you are ready to start your first test. Immediately after the agent has finished a chat via this Messenger inbox account, the new chatbot dialogue is started with the customer.
Evaluation of subsequent chats
The subsequent chats are completely independent chats that were started with the category specified in the Messenger inbox account.
These chats are linked to the original chat via the reference table ‘ISSUE_RELATION’ with ‘ISSUE_RELATION_TYPE’ 5.
This connection allows data from the original chat, such as date/time, category and agent, to be displayed in a report together with data from the subsequent chat (the evaluation criteria, via CHAT_INFOS). Since the evaluation criteria can be defined completely freely in the knowledge base, there is no standard report for this.
However, it is possible to generate a report using the report generator via chat views (e.g. ‘Chats’). The field selection then offers all previously used evaluation criteria as columns for the report. These can be found under the subheading ‘Properties’ in the list of columns.