Storing & recalling bot interactions☝️🤖

Enhancing JavaScript bot UI with localStorage.

Bot interfaces are fun for the users and advantageous for those who build them (if done right). The concept isn’t new, though today it’s especially powerful.

An implementation of a JavaScript bot library with localStorage in place to remember previous interactions. Notice the greyed-out text. This screenshot is from a production release of the Archie.AI Google Chrome app.

For developers, bots mean less time spent designing and building custom interfaces. It’s just text bubbles; plus, many existing platforms let you skip interface-building entirely by exposing APIs to their already-successful apps (e.g. Google Assistant).

For users, bots mean a possibility of hands-free interaction (via voice) and a more natural and/or seamless way to converse with machines.

A simple solution.

When I build things I tend to look for simple solutions, sans bloat, which could be easily understood and customized. Unfortunately, when I was looking for one last year there were none for my use case…

My team and I have implemented and trained a natural language classification engine, Archie.AI, and gave it the power to understand and generate answers from Google Analytics. The bot can give daily briefings about the state of a user’s business, predict the number of future visitors and answer over 430 related questions. It’s an excellent way to save time when looking for a particular metric or instant business advice.

It works wonderfully with Google Assistant and Alexa; however, when the time came to get a fast, clean interface for the web, there were no good-enough options. So I built one and kept it open-source. chat-bubble is a one-kilobyte JavaScript file with no dependencies that’s really easy to implement and understand:

```javascript
var chatWindow = new Bubbles(
  document.getElementById("chat"),
  "chatWindow"
)
chatWindow.talk({
  ice: {
    says: ["Hello!"]
  }
})
```

Batteries included: a complete set of CSS styles and precision-timed animations, an ability to safely run functions in response to user actions, and a pluggable processing engine.

By “pluggable processing engine” I mean that the script is not going to tell you how to understand your user’s queries. It’s up to you to implement your own NLC. It’s up to you to either dynamically generate or write response scripts. However, it doesn’t leave you hanging. There are currently three ways to have it respond to your users:

1. Give your users options, which appear as buttons (see gif below). Nothing needs to be done here; this is built-in.
2. Use the provided sample code, which utilizes simple fuzzy-matching logic to map your users’ input to the options you prescribe (see gif below).
3. Plug in your own NLC engine (see gif above).
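For the third option, the hand-off can be sketched roughly like this. Here classifyIntent() and its naive keyword check are hypothetical stand-ins for a real NLC engine, and the commented-out talk() call mirrors the snippet above:

```javascript
// Hypothetical sketch: route free-form input through your own NLC engine,
// then hand the resolved conversation key back to chat-bubble.
// classifyIntent() is a stand-in, not part of the library.
var conversationScript = {
  "show-visitors": { says: ["You had 1,234 visitors yesterday."] },
  "fallback": { says: ["Sorry, I didn't catch that."] }
}

function classifyIntent(rawInput) {
  // Replace this with a call to your NLC engine; a naive keyword
  // check stands in for real classification here.
  if (/visitor|traffic/i.test(rawInput)) return "show-visitors"
  return "fallback"
}

// On submit, resolve the key and let the bot speak that branch:
// chatWindow.talk(conversationScript, classifyIntent(userInput))
```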

Those options are for recognizing user input. The output (what the bot says) is just as customizable. It could be as simple as a structured JavaScript object variable. Or it could be dynamically imported JSON data. Of course, it doesn’t have to be just one huge JSON file — that would be inefficient! In our case (again, see gif above), we broke it up into individual answers for the responses that require a trip to the server (on-demand) and some calculations on our end.
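That on-demand pattern could look something like this sketch; the /answers/ endpoint and both function names are invented for illustration:

```javascript
// Hypothetical sketch: fetch a single answer branch on demand instead of
// shipping one huge JSON file. The /answers/ endpoint is invented here.
function mergeAnswer(script, key, branch) {
  // pure merge step, kept separate from the network call
  script[key] = branch
  return script
}

function loadAnswer(script, key) {
  return fetch("/answers/" + key + ".json")
    .then(function(res) { return res.json() })
    .then(function(branch) { return mergeAnswer(script, key, branch) })
}
```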

Bot library example with built-in button controls and input keyboard with fuzzy-match logic implemented. The JSON script that prescribes this conversation structure is below:

```javascript
var conversationScript = {
  ice: {
    says: ["Hi", "Would you like banana or ice cream?"],
    reply: [
      { question: "Banana", answer: "banana" },
      { question: "Ice Cream", answer: "ice-cream" }
    ]
  },
  banana: {
    says: ["🍌"],
    reply: [{ question: "Start Over", answer: "ice" }]
  },
  "ice-cream": {
    says: ["🍦"],
    reply: [{ question: "Start Over", answer: "ice" }]
  }
}
```

Local memory.

Over the next few months, we tested the script in production with about a thousand users, while adding a few tweaks and improving performance on older browsers. It has also been downloaded over 700 times as of today.

The library works equally well on desktop and mobile. However, when it came time to publish it as part of our Google Chrome browser extension, the user experience suffered. Because chat-bubble had no inherent persistence, the conversation history would evaporate every time the plugin window was closed. And that happened quite often, as Chrome tends to kill the plugin’s DOM entirely each time the user shifts focus.

That has to be fixed.

There is no single way to keep the conversation history on disk. I considered using Redux to manage the state; however, that’s a dependency, and the philosophy so far has been to have none. It would also over-complicate things.

Instead, I decided to store a modified JSON object, sharing the same structure as the conversation script, in localStorage. It would be recalled every time the bot is brought up; however, it would also need to:

1. Have the potential to be used with a database or any other data-storage method.
2. Have different UI interactivity and style than the rest of the bot (a visual cue for the user).
3. Be a progressive enhancement that doesn’t break the rest of the app.

chat-bubble-interactions is the localStorage key that keeps track of the chat history.

Future-proofing.

Keeping an option open for implementing a server-side storage solution is pretty straightforward. The entire library is less than 340 lines of non-compressed, commented JavaScript. Should anyone attempt to implement that, all that would need to be changed is the JSON.parse(localStorage.getItem(interactionsLS)) call for accessing the history and the localStorage.setItem(interactionsLS, JSON.stringify(interactionsHistory)) call for saving it.
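As a hedged sketch of what that swap might look like, the two calls could hide behind a tiny adapter; makeHistoryStore() is a hypothetical helper, not part of the library:

```javascript
// Hypothetical sketch: hide the two storage calls named above behind a
// small adapter, so a server-backed version only replaces one object.
function makeHistoryStore(backend) {
  return {
    load: function() {
      return JSON.parse(backend.getItem("chat-bubble-interactions") || "[]")
    },
    save: function(history) {
      backend.setItem("chat-bubble-interactions", JSON.stringify(history))
    }
  }
}
// In the browser: var store = makeHistoryStore(localStorage)
// A server-side variant would pass an object whose getItem()/setItem()
// make requests to your own endpoint instead.
```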

The only roadblock I can see here is having to add Promise-type checks to make sure that everything needed to display the history is downloaded before proceeding. Something like this might take some work, as a few decisions would need to be made about when the download should start and which functions it should block. I’m leaving that for tomorrow.
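A minimal sketch of such a gate, assuming a stand-in loadHistory function that returns a Promise:

```javascript
// Hypothetical sketch of the Promise gate described above: don't build
// the chat window until the stored history has finished downloading.
// loadHistory is a stand-in for whatever fetches the saved conversation.
function startBot(loadHistory) {
  return loadHistory().then(function(history) {
    // Only now is it safe to render the recalled bubbles:
    // var chatWindow = new Bubbles(document.getElementById("chat"), "chatWindow")
    return history
  })
}
```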

Custom UI for recalled conversations.

Note the “greyed-out” style for this recollected conversation up top.

To keep things simple, previous conversations would appear in the chat as soon as the user opens it.

However, as a side-effect of that decision, those conversations would have to be styled differently to avoid confusing the user. Additionally, the user responses would need special attention when stored and recalled, since recalled user responses cannot have any interactivity associated with them.

What I mean is that while highlighting chat bubbles in black and floating them right for user responses isn’t that hard, there are implications when the chat uses buttons instead of keyboard-input messages. Read on.

Consider the example (below) where the user is presented with two or more buttons as a way of answering the bot. Obviously, storing answer options in history isn’t helpful — they have nothing to do with conversation structure after they’ve been interacted with. Only the user’s response (their selected answer bubble) is relevant. The trick is not storing anything in history until the user has created their final interaction.

Note how the answer options no longer appear in the conversation history.

For this purpose, I’ve created two functions for saving history: interactionsSave() and interactionsSaveCommit(). The former is called to mutate the proposed save object in RAM, and the latter commits that object to localStorage.

interactionsSave() would be called every time the bot produces a response, but only after the user has committed their answer. By the time the user clicks a bubble, our script has already “forgotten” what that button looked like in terms of DOM structure, so a new one is made specifically for committing to the conversation history:

```javascript
// add re-generated user picks to the history stack
if (_convo[key] !== undefined && content !== undefined) {
  interactionsSave(
    '<span class="bubble-button reply-pick">' + content + "</span>",
    "reply reply-pick"
  )
}
```

interactionsSaveCommit() would be called every time a new speech bubble is created in the DOM by means of the addBubble() function.
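The two-phase pattern can be illustrated with simplified stand-ins (the real implementations live inside the library; these are sketches, not its exact code):

```javascript
// Illustrative two-phase save: nothing reaches the history stack until
// the commit step runs, mirroring the save/commit split described above.
var pendingInteraction = null

function interactionsSaveSketch(bubbleHTML, className) {
  // Phase one: mutate the proposed save object in memory only.
  pendingInteraction = { says: bubbleHTML, class: className }
}

function interactionsSaveCommitSketch(history) {
  // Phase two: commit once the bubble actually hits the DOM (in the
  // library, when addBubble() runs). Only now is anything stored.
  if (pendingInteraction) history.push(pendingInteraction)
  pendingInteraction = null
  return history
}
```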

Progressively enhancing.

This is a relatively new feature that not everyone would want to use, of course. It is also experimental and could easily be overdone (should someone try to remember 1,000 interactions, performance and user experience would suffer every time the bot loads). So by default I left it off:

recallInteractions = options.recallInteractions || 0

Getting it to function is super simple though:

```javascript
var chatWindow = new Bubbles(
  document.getElementById("chat"),
  "chatWindow",
  { recallInteractions: 10 }
)
```

…All that does is tell interactionsSave() to toss unnecessary entries away:

if (interactionsHistory.length > recallInteractions) interactionsHistory.shift()
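In context, the cap amounts to something like this sketch (generalized with a loop; the library performs the single shift() above on each save):

```javascript
// Illustrative sketch of the history cap: drop the oldest interactions
// until the stack fits within the configured recallInteractions limit.
function capHistory(interactionsHistory, recallInteractions) {
  while (interactionsHistory.length > recallInteractions) {
    interactionsHistory.shift() // oldest entry goes first
  }
  return interactionsHistory
}
```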

For simplicity’s sake, all work on chat-bubble is done without any kind of build step. All JavaScript is written and run directly in the browser (though developers who implement it are given the option to use the ES6 import method). Because the browser reads JavaScript straight from the hard drive in this development mode, any attempt to use localStorage breaks the entire code base, as it’s not allowed (due to security restrictions). Which made me think: this could happen quite often in other environments. So I’ve implemented a fallback with a warning:

```javascript
// local storage for recalling conversations upon restart
var localStorageCheck = function() {
  var test = "chat-bubble-storage-test"
  try {
    localStorage.setItem(test, test)
    localStorage.removeItem(test)
    return true
  } catch (error) {
    console.error(
      "Your server does not allow storing data locally. Most likely it's because you've opened this page from your hard-drive. For testing, you can disable your browser's security or start a localhost environment."
    )
    return false
  }
}
var localStorageAvailable = localStorageCheck() && recallInteractions > 0
```

Now everything should still work even if the browser cannot access localStorage.
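In practice the flag gates every write, along these lines (an illustrative sketch, not the library’s exact code):

```javascript
// Illustrative sketch: every write is gated on the availability flag,
// so a blocked localStorage degrades to a bot with no memory instead
// of a crash. The storage object is injected for clarity.
function safeSave(available, storage, history) {
  if (!available) return false // degrade gracefully; the bot keeps running
  storage.setItem("chat-bubble-interactions", JSON.stringify(history))
  return true
}
// In the browser:
// safeSave(localStorageAvailable, localStorage, interactionsHistory)
```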

As of today, the updated script is available through NPM as chat-bubble@next. It is already running in our Google Chrome extension, and so far it saves users a lot of headaches, as well as dramatically reduces perceived loading time.

All of the code described here is available on this GitHub repo.