For as long as I can remember, two things have been true about me: I'm always creating something and I'm always learning something. When I was a kid, I expressed myself through drawing. During college, it was music. In my professional life, it's building software. My art has always been accompanied by schooling. Since entering industry after undergrad, I've consistently been taking courses in an evening grad program, participating in massive open online courses, or pursuing certifications.

There are lots of analogies between music production and software engineering. One of my favorites is the ability to 'jam' with others. It often takes years of practice before a person can improvise creative solutions with a team in real time, and reaching that milestone was an epic moment in my career. According to Wikipedia, a jam session in music is "a relatively informal musical event, process, or activity where musicians, typically instrumentalists, play improvised solos and vamp on tunes, songs and chord progressions. To 'jam' is to improvise music without extensive preparation or predefined arrangements." The equivalent in the software world, and more recently in the data science world, is the hackathon.

In this post, I'll share one of my favorite hackathon projects.

IBM Watson Hackathon

The stage at the IBM Watson Hackathon

Back in 2015, the data science industry started booming in all kinds of new ways. AI was making a transition to the cloud. Big tech companies, like IBM, were making huge acquisitions in the AI space. More companies than ever before were forming data science teams, and my company at the time, Red Ventures, was just forming its first. My goal was to create innovations for Red Ventures in my favorite sub-field of AI, natural language processing. To make that happen, I wanted to show Red Ventures the value of cognitive computing by performing research at IBM's Watson Hackathon. Once Red Ventures saw the potential, I knew I'd have a chance to lead the way.

Our goal

I formed a team of some of my most talented colleagues. From Red Ventures, I recruited Renato Pereyra and Nathan Johnson. From UGA's Master of Science in Artificial Intelligence program, I recruited my favorite computational linguist, Thomas Bailey.

We aimed to augment and optimize conversations between customers and sales professionals using natural language processing and the Watson Developer Cloud. More specifically, we set out to explore chat data and dig up insights. From there, we planned to turn those insights into tools and recommendations for Red Ventures' sales teams.

How it works

First, we used what was then called the Alchemy API (now IBM's Natural Language Understanding and Classifier services) to enrich thousands of sales chat logs with linguistic metrics (sentiment, keywords, etc.). Second, we looked for correlations between those linguistic metrics and successful chat outcomes. Unlike most hackathon teams, we made sure to follow the scientific method while performing our work.
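
To give a feel for that second step, here's a minimal sketch, in Node.js (which we used for our processing), of correlating one linguistic metric with chat outcomes. The record shape and field names (sentiment, followUpCall) are illustrative assumptions rather than our actual schema; the math is a standard Pearson correlation, which with a binary outcome reduces to the point-biserial correlation.

    // Pearson correlation between a linguistic metric and chat outcomes.
    // With a binary outcome (1 = follow-up call, 0 = none), this reduces
    // to the point-biserial correlation.
    function pearson(xs, ys) {
      const n = xs.length;
      const meanX = xs.reduce((sum, x) => sum + x, 0) / n;
      const meanY = ys.reduce((sum, y) => sum + y, 0) / n;
      let cov = 0;
      let varX = 0;
      let varY = 0;
      for (let i = 0; i < n; i++) {
        const dx = xs[i] - meanX;
        const dy = ys[i] - meanY;
        cov += dx * dy;
        varX += dx * dx;
        varY += dy * dy;
      }
      return cov / Math.sqrt(varX * varY);
    }

    // Hypothetical enriched chat records (field names are assumptions).
    const chats = [
      { sentiment: 0.42, followUpCall: true },
      { sentiment: -0.1, followUpCall: false },
      { sentiment: 0.35, followUpCall: true },
    ];

    const r = pearson(
      chats.map((chat) => chat.sentiment),
      chats.map((chat) => (chat.followUpCall ? 1 : 0))
    );
    console.log(`correlation: ${r.toFixed(2)}`);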

Challenges we ran into

  • Alchemy API rate limits (see the sketch after this list)
  • Formatting data for Alchemy analysis
  • Data visualization (some representations hide important patterns)
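
The rate limits, in particular, forced us to pace our requests. Here's a minimal sketch of that pattern, where analyzeChat is a hypothetical wrapper around the Alchemy call and the 250ms delay is made up for illustration (Alchemy's real limits differed):

    // Enrich chat logs one at a time, pausing between API calls so we
    // stay under the service's rate limit.
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    async function enrichAll(chats, analyzeChat, delayMs = 250) {
      const results = [];
      for (const chat of chats) {
        results.push(await analyzeChat(chat)); // hypothetical API wrapper
        await sleep(delayMs); // pace requests to respect the rate limit
      }
      return results;
    }

A sequential loop like this is slow but simple, and for a few thousand chats it fits comfortably within a hackathon's time budget.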

Accomplishments that we're proud of

  • Verified three out of three hypotheses
  • Discovered trends that can be acted on to increase chat agent success

What we learned

We verified these hypotheses:

  • Chats with positive sentiment generate more follow-up calls from customers
  • Agents who talk like the customer generate more follow-up calls
  • Keywords from sales agent speech are good for predicting call outcomes

Visualizations of Watson Hackathon Results

We set out to test specific hypotheses, and through that testing we turned raw data into insights. One powerful thing about natural language processing is the ability to scale insight across many conversations simultaneously. During this hackathon, we automated that linguistic analysis with software, covering more than 3,000 chat conversations. We verified all three hypotheses we tested (though we still need to measure things like statistical significance and error rates).

Below, you will find some visualizations we used to better understand the results of our experiments. The scripts and modules we wrote for our hack mostly involved data transformation and analysis. To explore and understand the output, we visualized our results with Excel, R, and Tableau.

Chats with positive sentiment generate more follow-up calls from customers

We used Alchemy API's Sentiment Analysis for this portion. "Generally speaking, sentiment analysis aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document" (http://en.wikipedia.org/wiki/Sentiment_analysis). We made sure to distinguish between the agent and the customer in the conversation. Here is a visualization of our results using Tableau:

Chats with positive sentiment generate more follow-up calls from customers
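
To make a chart like this possible, each transcript has to be split by speaker before scoring. Here's a minimal sketch of that separation, where getSentiment stands in for the Alchemy sentiment call and the transcript shape ({ speaker, text }) is an assumption for illustration:

    // Split a transcript into agent and customer text before scoring,
    // so each side of the conversation gets its own sentiment rating.
    function splitBySpeaker(transcript) {
      const sides = { agent: [], customer: [] };
      for (const turn of transcript) {
        sides[turn.speaker].push(turn.text); // speaker: 'agent' or 'customer'
      }
      return sides;
    }

    // getSentiment stands in for the Alchemy sentiment call (assumption).
    async function scoreChat(transcript, getSentiment) {
      const { agent, customer } = splitBySpeaker(transcript);
      return {
        agentSentiment: await getSentiment(agent.join(' ')),
        customerSentiment: await getSentiment(customer.join(' ')),
      };
    }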

Insights gained

  • In general, the higher the sentiment rating for both the agent and the customer, the more successful the conversation. This is shown by the many chats in the top-right quadrant.
  • We can also see a drop-off in success when the agent is too positive. It seems that if the agent is overzealous, the chats are less successful. The sweet spot appears to be an agent sentiment between 0.3 and 0.5.
  • The agent's sentiment is consistently positive but the customer's can vary greatly. This makes sense since our agents are trying to make a sale and need to keep a positive vibe. The customer does not have to be positive.
  • Also notice how the customer's sentiment is commonly around zero. Zero reflects a neutral sentiment. The customer may just want to get down to business in these cases, showing no emotion.

Agents who talk like the customer generate more follow-up calls

In this case, we wanted to see what happens when the agent talks like the customer. To do this, we compared the vocabulary, or word choice, of both sides. An example of an agent matching the vocabulary of a customer would be if the agent repeated a question back to the customer for clarification. Our analysis clearly shows a benefit when an agent reflects the vocabulary of the customer.

We chose to compare vocabulary using cosine similarity (http://en.wikipedia.org/wiki/Cosine_similarity). Since we were doing our processing in Node.js, we were happy to find a package to do just this type of measurement: https://www.npmjs.com/package/cosine. Once the data was processed, we did a visual analysis of our results and created the following graph in Excel:
Agents who talk like the customer generate more follow-up calls
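
For readers curious what that measurement looks like under the hood, here's a hand-rolled sketch of cosine similarity over word-count vectors, roughly what the cosine package computed for us (the whitespace tokenization here is a naive simplification):

    // Cosine similarity between two texts over word-count vectors.
    // Identical vocabularies score 1.0; disjoint vocabularies score 0.0.
    function wordCounts(text) {
      const counts = {};
      for (const word of text.toLowerCase().split(/\s+/).filter(Boolean)) {
        counts[word] = (counts[word] || 0) + 1;
      }
      return counts;
    }

    function cosineSimilarity(textA, textB) {
      const a = wordCounts(textA);
      const b = wordCounts(textB);
      let dot = 0;
      let normA = 0;
      let normB = 0;
      for (const w in a) {
        normA += a[w] * a[w];
        if (b[w]) dot += a[w] * b[w];
      }
      for (const w in b) normB += b[w] * b[w];
      return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Overlapping word choice on both sides yields a high score (0.8 here).
    console.log(cosineSimilarity('I need a new plan', 'you need a new plan'));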

Insights gained

  • We can see the success ratio increase as we move right on the graph. This shows a strong positive trend, with the greatest increase in success occurring between 0.2 and 0.4 similarity.
  • According to this graph, the more similar both sides of the conversation are to each other, the better for our metrics. This probably won't hold at extremely high values like 1.0, where the agent would literally be echoing the customer. We need to experiment with more data on the higher end of the similarity measure to see where success drops off.

Conclusion

We learned a lot as individuals and as a company. Today, data science is still thriving at Red Ventures. After this event, I felt like a true member of the cognitive computing community. It marked the beginning of my journey as a cloud architect of AI solutions. I've since built AI solutions across Microsoft Azure, Amazon Web Services, Google Cloud Platform, and the IBM Watson Developer Cloud. It's empowering!

My team is still active in the field. Just as we explored New York City during the World of Watson conference that year, we continue to explore the world of AI today.

Time to innovate!
