AI Assistant Mascot

Newton School | October 2023

Introduction

The aim of the AI-chat feature was to offer real-time doubt assistance to students as they watched lectures or solved coding assignments.

Functionally, our character would sit in place of a chat button in this experience, appearing over video recordings (lectures) and the web-based IDE (assignments).

It was going to be interactive, so our character needed to behave in ways that would effectively nudge, question and respond to students.

Below, I have described how I thought all this through and shipped the first version of Neo.

Problem Space

How can we provide constant help to our students so they don’t get discouraged while learning through our course?

A major unsolved problem in our students’ journey was that it was very lonely. Personal support through mentors and instructors wasn’t available 24/7, which meant that any minor inconvenience led to major drop-offs.

All of this was happening because their doubts were not getting solved in real-time.

An AI tool could impact many dimensions of this problem, such as active participation in the course and overall student performance, provided it solved our students’ doubts really well in the first place.

Doubt-solving had been a major problem in our past courses…

Solution Vision

The idea was to design a character that lives on the student's interface, available for constant interactions whenever there’s a need.

While thinking of a solution, we always benchmarked our experience against an offline, physical setting. That’s how most of us learnt in college and high school.

Learning happened effectively in these environments because your batchmates were always available for you. We were essentially self-learning, together.

The challenge comes when you think of scaling this environment to more people. With this project, the vision was to provide our students with a real, sentient, engaging batchmate available throughout the course.

An AI batchmate could really help our students tackle roadblocks and move forward…

My contribution

My job was focused on solving the character’s identity, interactions, behaviour and name. I have broken down my problem areas below.

Branding problems

From a brand POV, I had to figure out the visual appearance of the character and how it would appear as it behaved and responded to the student. At a deeper level, it was all about giving its appearance some meaning. Without meaning, our character would lose its feel and appear like one of those old chatbots with unpleasant experiences.

In technical terms, the core problem was figuring out a character structure that would be scalable.

To answer these questions, I had to coherently think about our product, the student and the AI character as part of one world and one story. World-building and story writing helped me explore the answers necessary to craft a meaningful experience.

Setting Constraints

Some pre-determined constraints for me included the positioning and placement of the feature and some of the interactions that had been scoped for the first release.

The initial constraints helped in defining and evaluating my explorations based on:

The level of detail our character needs.
Our imagination tends to be vivid, and so was our world and its character. I had to tone down these ideas so that they could appear in a legible form.

The amount of movement it would need to convey its responses.
The specific details had to be simple enough to convey the character’s responses.

Positioning of AI Chat in the experience.

I loosely defined the behavioural traits of the character in scoped situations.

Initial Sketches

Initial iterations were messy: they had too many specific details and felt like they belonged to very random worlds.

World building

After a few rounds of sketching, my approach was to:

  1. Ground the next iterations in some theme

  2. Use as few details as possible in whatever I generate

While sketching ideas, I remember reading Understanding Comics by Scott McCloud. He wonderfully explains how and why cartoons work and evoke the emotions they do.

It was also a gentle reminder that I was operating within constraints: a smaller canvas required me to work with a few specific details rather than a photorealistic image.

Below is a rough idea of the world I tried to create, and the setup in which our AI character existed.

Theme

A space theme gave me room to explore a range of emotions. The piloting story seemed to reflect the uncertainty and excitement of the real journey with our product very well.

I developed analogues for every visual experience I had the opportunity to create. This also helped me give all of it, along with the character, some coherence.

Solving assignments became flight training, contests became races between pilots, topics became planetary bases, and so on.

Our AI character could now be placed perfectly in our students’ lives: as a batchmate, a partner, a friend, a copilot.

Specific Details

The “eyes” were enough detail to convey a lot of responses.

A “spherical body” gave it room for movement. We could also change the spherical-ness to convey heightened emotions.

A “wormhole” or a “ring” gave its body some base to rest upon.

I had to decide on any additional details based on canvas size and the value they added to the character.

Iterations

I eventually started finding some structure and specific details in my iterations.

Some iterations failed on a small canvas.

Some of the top contenders that made it to vector.

Finalised iteration

One iteration performed well as I simplified it into a vector and built some interactions. It was also getting a good reception from the team, and seemed to capture the personality really well.

The last specific detail

Apart from the spherical body, the eyes and the portal, I added this little thing that I couldn’t (and didn’t want to) put a label on. Is it a star? Is it like a ponytail? Is it conscious? The mysteriousness of the star seemed to complete the character’s personality. These details were enough for me to play with motion.

For example: to show agreement, the body and eyes could move in a “content” nod, while the star could move more energetically, or even spin to show cheering.

There was still work to be done. A few decisions remained, mainly:

  1. how it should be painted

  2. the ratio of details

  3. how the details are styled

  4. and finally defining its idle state, from which we could build interactions

Fine tuning

This was probably the hardest part of the process because, to a degree, the character already felt finished. But any 1% change, like a tweak in its positioning, the way its eyes rendered or how brightly it shone on the interface, greatly changed how it was perceived.

It was especially hard because these tweaks are difficult to pin down in reviews. (You’ll probably just keep hearing something like, “ehhhh, something’s still off IDK”.)

Fine-tuning. Notice how mere changes in the eyes affect what the character must be like…

Exploring how it fits on the thumbnail.

Measuring by feeling

A good thing to do is to always keep one question active in your mind: “How does it make me feel?” We had been working on the “feel” for a while, trying to come up with an accurate description of how our product experience should be designed overall. These descriptions became guidelines as I fine-tuned our final iteration.

Newton’s very early version of brand values

Color explorations

Final version

Motion

Our character was going to reside in an idle state, and switch to a different animated state based on a system or user action.

Example: a nudge when a student hasn’t written anything in their IDE for, say, more than 15 minutes would change the state, triggered by the system. A similar nudge as soon as a student pauses or rewinds during a lecture would also change the state, this time triggered by the user.
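To make this trigger model concrete, here is a rough TypeScript sketch of the two paths. The names here (IDLE_NUDGE_MS, nudge, the handler functions) are hypothetical and not taken from the actual implementation:

```typescript
// Hypothetical sketch: a system-triggered and a user-triggered nudge.
const IDLE_NUDGE_MS = 15 * 60 * 1000; // nudge after 15 minutes of IDE inactivity

let idleTimer: ReturnType<typeof setTimeout> | undefined;

function nudge(source: "system" | "user"): void {
  // In the real product this would switch the character out of its idle state.
  console.log(`Neo nudges the student (trigger: ${source})`);
}

// System trigger: reset the countdown on every keystroke in the IDE.
// If the timer ever fires, the student has been inactive too long.
function onIdeKeystroke(): void {
  if (idleTimer) clearTimeout(idleTimer);
  idleTimer = setTimeout(() => nudge("system"), IDLE_NUDGE_MS);
}

// User triggers: pausing or rewinding the lecture nudges immediately.
function onLecturePause(): void {
  nudge("user");
}

function onLectureSeekBack(): void {
  nudge("user");
}
```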

This was the perfect moment to introduce Rive into our workflow. It allowed me to easily create each motion state and manage them (using State Machines) exactly the way our developers would implement them. (Thanks to this, I was also able to work closely with devs at a level I never had before.)

My process here was simple:

  1. Create a rough idea of the animation states. This meant fleshing out the behaviour of our character that I mentioned above. (This was done on paper, and later in Figma for reviewing.)

  2. Arrange the states in the state machine as they got approved. (After a point in the process, I moved to creating the animations directly in Rive instead of having them reviewed in Figma first.)

Eventually, I had to make two layers in the state machine: one for all the interactions in its default state, and one that switched its direction. This handled the in-chat experience where our AI appeared.

Every arrow here only fires based on some “Input” that you can define in Rive. Inputs allow developers to control their values from code, and hence trigger transitions between states depending on conditions.
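For illustration, here is a minimal sketch of how a developer might drive these inputs from the web, assuming the Rive web runtime (@rive-app/canvas). The file name, state machine name and input names below are placeholders rather than the project’s actual assets:

```typescript
import { Rive } from "@rive-app/canvas";

// Placeholder asset, state machine and input names for illustration only.
const STATE_MACHINE = "Neo State Machine";

const rive = new Rive({
  src: "neo.riv",
  canvas: document.getElementById("neo-canvas") as HTMLCanvasElement,
  stateMachines: STATE_MACHINE,
  autoplay: true,
  onLoad: () => {
    const inputs = rive.stateMachineInputs(STATE_MACHINE);

    // A trigger input: fired by the system when, say, the inactivity timer elapses.
    const nudge = inputs.find((input) => input.name === "nudge");
    nudge?.fire();

    // A boolean input on the second layer that flips the character's direction
    // for the in-chat experience.
    const faceChat = inputs.find((input) => input.name === "faceChat");
    if (faceChat) faceChat.value = true;
  },
});
```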

Naming Neo

The last bit of decision-making went into how we were positioning, and hence communicating about, our AI character. This meant thinking of its name, its dialogues and its introduction on the platform and the website. We followed the same guiding principles to design content around the character.

Naming our little guy was no easy job either. The problem was perhaps that

a) we were already bad at generating names and

b) it’s even harder to understand and contrast the “feel” of a single word.

(Neo came from one of the designers, as we were trying to think of names that resonated more with Newton…)

Choosing Neo from the shortlist was more a process of elimination. Neo best fit “fresh”, “wise” and “approachable”. The other finalist was “Cooper”, but it got discarded, largely because it is a realistic foreign name and there was scepticism around how an Indian audience would receive it.

Ending note

I feel this project was an insane trip through the principles of branding.

The hard part here is that there is no real, defined way to think. But in a way, that’s also liberating, because you’re allowed to respect your imagination and ask all sorts of questions. Eventually, you may settle into a “way of thinking” that yields the results you want from your creation.
