
Nissan Leaf

Interaction design


Story

The emergence of driverless cars is paving a new landscape for how we focus our energies during our daily commute. The driver now has more freedom to engage in the passenger experience, but can they do so without worry?

With Nissan's driverless car, Leaf, our team conceptualized a safe, worry-free, and enjoyable interaction model with this question in mind.

Overview

Our model simulates a seamlessly integrated voice-command AI, augmented-reality windshield, and interactive control console, enabling a smooth transition between driver and driverless modes.

My role: UX Designer, Research Lead

Team: Sarah Ehrsam (Project Manager) & Therese Arcangel (UI Designer)

Tools: Sketch, Photoshop, InVision, Trello, Whiteboard

 

Problem statement

  • What does a hands-free interaction model look like for Nissan's driverless car?

  • What interactions might facilitate a frictionless driving experience while enhancing the driver's journey?

 

Research

Our initial research was aimed at understanding how users interact with their cars and spend time in the passenger role. We devised a survey and held user interviews to gather this data.

Car usage

We found most people use their cars for day-to-day errands, such as grocery shopping, or for their commute to work or school.

Car activity

Everyone we interviewed uses navigation while driving, even when they know their destination, because they value real-time traffic updates and a continuously updated ETA.

The majority of those interviewed enjoy talking when other people are in the car.

Two-thirds of interviewees like listening to music, podcasts or the radio during their drive.

Passenger experience

When we asked what activities they might engage in as the driver if they didn't have to focus on driving, their responses were as follows:

  • Play games or watch movies

  • Listen to music

  • Sleep or relax

  • Work

  • Enjoy the view

 

Revised problem statement

Based on this feedback, we refined our problem statement to emphasize the navigational and entertainment experience during a daily commute.

  • How might we design a hands-free navigation experience for Nissan's driverless car, while also making the passenger experience enjoyable and safe?

 

From our research, we constructed a persona against which to test and validate our designs.

Meet Steve,

Steve is a hard worker, always focused on his company's business objectives. He is frustrated by his 60-minute commute each way, as the stop-and-go traffic is mentally fatiguing; he'd rather spend that time working or relaxing. He is an ideal candidate for an efficient and reliable driverless car.

 

Design

Wall & paper prototype

With Steve in mind, we researched competitors such as Audi, Mercedes, and Tesla to see how they handle, or plan to handle, their driverless models. Since much of the technology is still developing and the available data is limited, our team had to get creative. We ran design studios to generate various interaction and visualization models, then combined them into one design. We felt something tangible and interactive would be the best way to gather effective user feedback when testing our ideas.

Our model can be broken down into four parts:

  • AR windshield: Navigation was our area of focus, so we wanted it integrated throughout each interface. We found augmented reality to be an effective way to do this while adding another layer of safety to both the manual and driverless experience. For example, by highlighting route options, street signs, obstacles, and accidents, the windshield helps a driver react faster in manual control. In driverless mode, passengers are less likely to panic when Leaf slows, stops, or accelerates if they can see what Leaf is identifying.

  • Car dashboard: We wanted all vital car information in plain view of the driver. This includes maintenance alerts, battery charge status, mileage, and a current-driver indicator.

  • Middle console: This area houses Leaf's entertainment and leisure settings. It lets users adjust temperature and seat settings, explore and listen to music, watch movies, get work updates, check e-mail, add customizable apps and widgets, fine-tune navigation, and further tweak car settings. Tesla users strongly indicated how much they appreciated the size and functionality of their middle console, so we made Leaf's large and customizable as well.

  • Voice command AI: To make the drive and all interactions as seamless as possible, we integrated a voice-command AI that can interface directly with every aspect of Leaf.

 

Usability testing

We used dry-erase markers to simulate the arrows and highlights of the augmented-reality windshield, swapped paper printouts in and out for the different functions of the middle-console interface, and followed a script when speaking as Leaf's voice-command AI. With this setup, we ran several usability tests to see whether our model and ideas made sense within our scenario for Steve.

Scenario

Users were asked to step into the Nissan Leaf and have it drive them to work. During the commute, they could explore the entertainment options. At some point, an accident would occur, and Leaf would automatically reroute, adding 10 minutes to the overall journey. If they wanted to save time, they could choose to take the wheel and drive themselves. The journey ended when they arrived at work.

Our tests found that our primary concepts made sense to users. They liked how their route was highlighted by augmented reality and understood how to use voice command and the middle console to interact with the navigation and entertainment functions.

A few issues confused users as well:

  • "The countdown makes me feel anxious!" We used a countdown from 3 to 0 to confirm when Leaf would start driving. Multiple users said they did not know what would happen at zero, and that it was scary or made them anxious.

  • "I can't drive if I can't see." We had an alert pop up on the windshield when an accident was spotted. Users said the alert was too big and felt dangerous because it blocked their view of the environment.

  • "How do I use voice command or take control?" We did not clearly indicate that voice command was available throughout the journey. Users were confused about when they could take control of the wheel themselves and when Leaf was in control.

 

Solution

We incorporated this feedback and digitized each element to create a high-fidelity prototype. We removed the countdown and added a prompt that appears each time the car starts; it indicates that Leaf is set to voice command and explains how the user can take control of the car at any time. We also reduced the size of the augmented-reality overlays and alerts and made each element more transparent so users could clearly see the road.

This video demonstrates Steve's daily work commute using our prototype, showcasing his safe and enjoyable journey and the seamless transition between driver and driverless modes.
