AI-integrated
Search

Case Study

Role: UX Researcher
Date: 2023

The OCI UX Design team was charged with creating a support chatbot to improve the Oracle Cloud user experience, while also building chat-style responses into global search. To ensure we were providing the optimal experience and integrating AI in a thoughtful, scalable way, the research team was brought in to explore general user attitudes around AI assistance throughout the console and in a chatbot. To achieve this, we launched a 300-person survey.

AI exploratory survey
A screenshot of a Support Chat interaction in a panel, displaying information about FastConnect requirements.
What would later become OCI's chatbot integration.
Details

Goal: Understand user expectations in two focus areas: general AI assistance available throughout a console, and expectations surrounding chatbots. Fold this research into the larger AI exploratory projects and use the insights gained to refine OCI's global AI solution.

Challenges: While not technically challenging, this project had several anticipated pain points. One of the primary challenges we predicted was addressing the general distrust of AI among users. Many users are skeptical about the reliability and effectiveness of AI, particularly in the context of support chatbots. As the findings from this research would define AI practices and guidelines for all OCI teams and services, it was crucial to gather unbiased data and set the correct standard. Establishing best practices for AI integration and making noticeable changes to the complex foundations of AI (our documentation, RAG, and the AI models themselves) would also be difficult.

Deliverables: UserZoom survey, results presentation, and documentation of next steps.

1. Overview & Research Ask

At the onset of the AI boom, cloud platforms began to recognize the necessity of integrating AI to stay competitive. I was part of a team of designers, product managers, SMEs, and a co-researcher tasked with understanding user expectations regarding AI in cloud workflows. There was already a plan to integrate AI assistance via smart suggestions throughout all applicable areas of Oracle Cloud Infrastructure's console, and to then guide users to a chatbot for more thorough assistance. The product team anticipated that an intelligent chatbot would heavily leverage AI to mitigate common support requests. Our product team wanted to set the correct standard, as our findings would define AI practices and guidelines that would impact all OCI teams and services.

Several research pieces were already underway. Refer to my writeup of the GenAI assistant and chatbot for examples of the competitive analyses and design brainstorming we conducted in parallel. This project augmented the AI-focused audience research with insights from a new group: database professionals with limited to no AI or cloud console experience. Our goal was to understand the needs and engagement patterns of users across the spectrum. The UX team decided that a broad survey would be the best starting point.

2. Research Methodology

My co-researcher and I agreed that the best way to gauge general opinions on AI would be a survey. This would allow us to collect the user sentiments and behaviors we should anticipate in our upcoming AI projects. We used UserZoom to launch a survey with the following criteria:

  • 300 responses from cloud platform users who do not work directly with AI or develop AI solutions, to eliminate bias. Participants were paid via Amazon gift cards upon successful completion of the survey, per our agreement with UserZoom.
  • A total of 17 questions, a mix of multiple-choice and rating-scale questions about participants' sentiments and behaviors around AI.
  • Two overall focus areas: what users already expected of AI assistance in general, and what they expected of chatbots in support scenarios.
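As an illustration of how rating-scale responses like these can be summarized (a minimal sketch with invented numbers, not OCI's actual analysis pipeline or the UserZoom export format), a simple tabulation might look like:

```python
from collections import Counter

# Hypothetical responses to one rating-scale question; real data would
# come from the UserZoom survey export. Scale: 1 (strongly distrust) to 5.
responses = [4, 5, 2, 4, 3, 5, 1, 4, 4, 2, 5, 3]

counts = Counter(responses)
total = len(responses)

# Percentage of respondents at each scale point.
for rating in sorted(counts):
    pct = 100 * counts[rating] / total
    print(f"rating {rating}: {counts[rating]} responses ({pct:.0f}%)")

# Share who lean positive (4 or 5), a common "top-two-box" summary.
top_two = sum(counts[r] for r in (4, 5)) / total
print(f"top-two-box: {top_two:.0%}")
```

The top-two-box figure is one conventional way to turn a five-point scale into a single headline number for a results deck.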
Screenshot of a Create Instance creation UI with many configurations, displaying an AI suggestion.
An example of smart integration of AI suggestions...
Screenshot of a concept for a Create Instance creation workflow with a chatbot interaction on the right side panel.
...Which can lead the user to the Ask Oracle chatbot.

3. Findings

The project team collaborated on the question set we would pose to participants, all meant to expose user attitudes around AI and chatbots. Some questions focused more on chat interaction specifically for support, and gauged user sentiments around those interactions. A few examples of what we asked non-Oracle users include: 

  • "When you run into a challenge with your cloud platform, where do you turn for answers? Rank your top three choices: Web search, documentation, chat support, submit a ticket, or talk with a live agent." 
  • "Which of the following best describes your understanding of the term 'AI assistant'?"
  • "Which of the following best describes how you might expect to engage with an AI assistant when searching for something within your cloud platform?"
  • "Do you have experience using generative AI tools? If so, which tools?" (Options provided)
  • "If you do engage chatbots to seek support, how do you typically feel when you initiate that experience?" 
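Rank-your-top-three questions like the first one above are typically tallied with positional weights. A minimal sketch with invented rankings (the channel names mirror the survey's answer options, but the data and weighting scheme are illustrative, not the study's actual method):

```python
from collections import Counter

# Each hypothetical respondent ranks their top three support channels,
# most-preferred first.
rankings = [
    ["web search", "documentation", "chat support"],
    ["documentation", "web search", "submit a ticket"],
    ["web search", "chat support", "live agent"],
]

# Weighted tally: 3 points for a first choice, 2 for second, 1 for third.
scores = Counter()
for ranking in rankings:
    for position, channel in enumerate(ranking):
        scores[channel] += 3 - position

for channel, score in scores.most_common():
    print(channel, score)
```

Sorting by weighted score surfaces the overall preference order, which is how a finding like "web search and documentation come first" falls out of ranked responses.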
Example of a question from the survey results. We found that most users who seek chatbot support are generally optimistic and confident.
Example of a question from the survey results. We found that most users gain trust in chatbots when they deliver useful results.

4. Results & Takeaway

The results offered an illuminating view of the landscape of AI expectations. Overall, the fast-paced evolution of AI tools is quickly raising user expectations that their platforms will exhibit some level of intelligence. While those expectations aren’t solidified, and the current boom has users more open to exploration, they do expect to solve problems more easily than before. When faced with a challenge in their workflow, participants prefer to self-solve before seeking higher levels of support, generally engaging with the chatbot as a last resort. Although 60% of participants feel it’s easy to get support in their platforms, when issues arise they turn to a web search or documentation first. Only after those fail would they attempt filing a support ticket or speaking to a live agent.

Chatbots are not widely used because they aren’t inherently trusted yet; their perceived intelligence and usefulness are still too low. Half of participants said they rarely use chatbots to solve issues within their cloud platforms, and more than half doubted a bot’s ability to solve common issues. More than 70% of participants noted that when engaging with chatbots, they exit after three failed attempts.

With a better understanding of our users thanks to this analysis, we were able to start our AI explorations in earnest. The findings were presented to the project team and to the org at large. With these insights, we knew which areas needed improvement and distributed the work to the relevant teams. Our research had the following significant business impacts: the Oracle AI teams received our feedback and began improving the RAG and LLM models to raise perceived intelligence. The Documentation team continued to improve the support material based on the most commonly searched questions. And most importantly, we now had a better idea of how to implement our GenAI assistant, allowing users to approach the AI on their own terms and fleshing it out to be useful for completing common tasks.

Screenshot of survey results. The text says: "ChatGPT has set a new standard for how mainstream consumers approach and leverage AI… When asked about their experience with generative AI, 70% of participants named ChatGPT as their most substantial experience with AI tools. An additional 29% named Microsoft Bing, which is powered by ChatGPT. This rise in popularity has breathed new life (and confidence) into automation and AI."
Slide from the survey results. Alt text is available for full transcription.
A screenshot of AI exploration survey results. The text reads: "Based on shifting expectations spurred by the rise of ChatGPT, users have new ideas for where assistance should be found… Participants expect to find AI integrated into pathways where users might need recommendations to increase efficiency. When asked to rank where they expected to find assistance, participants noted: search, chat, and workflow initiation."
Another slide from the survey results. Alt text is available for full transcription.
Takeaways

The survey revealed that while users' expectations of AI are rising, there is still a preference for self-solving issues via web searches or documentation before resorting to chatbots, which are not widely trusted. The findings led to several significant business impacts, including improvements to Oracle's AI models and support documentation, and informed the implementation of a GenAI assistant to enhance user experience.