Product Design // User Research @ Pearson
In most math courses, concepts are taught to an average student profile - instruction isn't personalized, and everyone receives the same material regardless of their level of comprehension. We aimed to measure college-level remedial math students' proficiency against course objectives and develop a personalized learning plan and assessment.
Code-named Project RIO, this developmental mathematics product helps 2-year college students advance their math skills using adaptive technology. From a design-process perspective, one of the main focuses was the working triad between Product, Dev, and Design - this project was the first time we'd worked this closely as partners in the early design stages.
This learning experience is a new product for developmental math students that delivers deeply personalized content. Students control the pace at which they learn, driven by their answers to specific content. Ultimately, students feel more confident by focusing only on the topics they don't yet know, and they can comprehend the material with less stress.
Teammates
Educators interviewed: 73
User research studies: 9
Avg. TAM Rating: 4.4
Learners experience a personalized instruction journey, starting with a pre-assessment. Course content is tailored to what each student needs to work on to master a concept or skill. Student performance dictates the adaptive practice within each module and provides optimal support for each individual.
RIO introduces advanced instructor performance reporting and surfaces at-risk learner insights so instructors can support struggling students before it's too late. Instructors stay in control by setting a distinct Proficiency Threshold.
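As a rough illustration of how that threshold could drive an at-risk flag - a minimal sketch with hypothetical names and logic, not the shipped implementation:

```typescript
// Hypothetical sketch: flag students whose mastery rate falls below the
// instructor-set Proficiency Threshold. Names and logic are illustrative.
interface StudentProgress {
  name: string;
  masteredObjectives: number;
  attemptedObjectives: number;
}

// Example threshold: 0.7 means 70% of attempted objectives must be mastered.
function flagAtRisk(students: StudentProgress[], threshold: number): StudentProgress[] {
  return students.filter((s) => {
    if (s.attemptedObjectives === 0) return true; // no activity yet: surface early
    return s.masteredObjectives / s.attemptedObjectives < threshold;
  });
}
```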
The learning experience delivers personalized content, recognizing where students excel and where they struggle, and remediates students at the exact moment, and in the precise way, they need it - accelerating proficiency.
Product-innovation and systems-thinking approaches helped advance personalized, adaptive learning systems that can scale globally to improve access and learning outcomes. Working toward a single technology ecosystem and a single user identity will provide a unified, connected view of customers' needs.
Micro-interactions that take place as a student works through a problem will ensure students get the right help at the right time, to foster motivation and success.
A course pre-assessment gives insight into which topics, precisely, a student has already mastered, and which topics they still need to learn. Based on the pre-assessment, content is then tailored to each student. As students review streamlined instruction for each objective they need, practice adapts to each student's level of understanding, providing scaffolding for those who need it while fading learning aids for students to demonstrate true understanding.
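As a rough sketch of how a pre-assessment could drive both the study path and the fading of learning aids - hypothetical names and thresholds, not RIO's actual logic:

```typescript
// Hypothetical sketch of a pre-assessment-driven path with fading scaffolds.
interface ObjectiveResult {
  objectiveId: string;
  mastered: boolean;     // outcome of the course pre-assessment
  recentCorrect: number; // consecutive correct answers in adaptive practice
}

// Mastered objectives are skipped; only the gaps enter the study plan.
const buildPath = (results: ObjectiveResult[]): string[] =>
  results.filter((r) => !r.mastered).map((r) => r.objectiveId);

// Learning aids fade as understanding is demonstrated: full scaffolding first,
// hints only after two correct answers, none after four.
function scaffoldLevel(r: ObjectiveResult): "full" | "hints" | "none" {
  if (r.recentCorrect >= 4) return "none";
  if (r.recentCorrect >= 2) return "hints";
  return "full";
}
```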
RIO (working title) is developmental math courseware, currently in development, designed to help students progress through dev math content efficiently and successfully so they can move on to credit-bearing math courses. The tool understands where students excel and where they struggle, delivering a personalized learning path for each student with adaptive practice.
The learning tool will provide streamlined, personalized instruction and practice through software - allowing for acceleration and fitting any course format where instructors want students both learning and practicing.
Product Differentiators:
A dev math adaptive pre-assessment with a focus on reporting results at the student and class level. RIO aims to help faculty improve developmental math retention and completion rates: 60% of community college students are referred to developmental math, yet only a third of those students go on to finish.
We kicked the project off with a co-design session with community college developmental math instructors from Colorado. We sat down to better understand what impeded the teaching experience, from both a class-setup and paperwork standpoint, and to hear, from their perspective, how students learn best in the modern classroom. These sessions were split between back-and-forth interview questions and simple design tasks to solve their biggest issues. Instructors wanted to:
- Start with a temperature check and set the benchmark
- Understand what to spend time on and when in class
- Identify low performers so they can intervene earlier
- Group high and low performers so they can help each other
- Review with students and create individualized study plans
The research sessions opened with a 40-minute interview that informed the co-design exercise, where I would build on screen what instructors wanted to see in a performance dashboard. This helped us better understand what truly mattered when assessing a class quickly. We kept the back-and-forth fluid to create an open space for rapid validation, with corrections from the instructor and on-the-fly iterations based on what they were seeing. Co-design findings:
We presented instructors with details about students who are struggling with specific content to cover in class, or students who aren't completing designated work - letting users know what is going on and presenting useful information.
These navigation and data-point presentation options are good examples of collaborating with users before presenting multiple options to partners across product and engineering for feasibility checks - including possible API contracts for data-point delivery from the adaptive system.
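As a sketch of the shape one such contract might take - the field names here are assumptions, not an agreed spec:

```typescript
// Hypothetical response shape for class-level data-point delivery from the
// adaptive system; all field names are illustrative assumptions.
interface AdaptiveDataPoint {
  studentId: string;
  objectiveId: string;
  proficiency: number;    // 0..1 estimate from the adaptive engine
  lastActivityAt: string; // ISO 8601 timestamp
  completedAssignments: number;
}

interface ClassPerformanceResponse {
  classId: string;
  generatedAt: string;
  dataPoints: AdaptiveDataPoint[];
}
```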
It was important to make sure the information and data made clear, simple sense to instructors when assessing which content a student needs to brush up on. We worked with product and dev leads to scope timelines for a simple MVP option that we had validated with instructors, leaving room to iterate and improve upon the initial release.
We find any open-ended feedback session interesting because it reinforces how pressed for time these instructors truly are. The majority of instructors want some level of control, but don't have time to configure an endless page of settings they don't understand. They also want insights and actions specifically spelled out for them with the ability to dive deeper for more background information.
We further researched the initial designs with instructors through one-click tests and task-completion flows, along with design-element preference split tests at the end of sessions.
These sessions were unmoderated and it's always interesting to hear the direct feedback from instructors when they open up and feel safe talking through their thought process out loud.
It's essential to be thoughtful about interfacing with collaborators at the right moments in the design process:
Gain direction and product-shape vision from stakeholders at the right points, and include engineering and product for feasibility spot checks early and often.
Through iterative testing it became clear that multiple entry points to specific information were important. We didn't get this right out of the gate: during task-completion testing, instructors sometimes wanted to dig into deep information from many different points in the experience.
Diving into the details, I found through rapid prototyping that instructors need to understand the class-wide basics:
- Which student is this, and how did they do?
- Let me see a summary of the questions
- Let me see if they got it right or wrong
- Let me navigate to their answer and see the error type
"Overall, it's really nicely displayed. Easy to click around and drill into it the specific chapters to plan lessons."
The bulk of our RIO research has been foundational, where benchmarks don't apply. Our iterative tests use limited-interaction prototypes and tend to be heavily moderated. Our fast Agile workflow requires flexible, lean testing protocols and emphasizes qualitative insights. Tests for each iteration, then, often look different and ask different questions. A few defined benchmarks:
These data points are clear successes according to any task-success benchmark. We employed the Technology Acceptance Model (TAM) for all user testing: instructors rate how usable and useful the product is. We chose a shorthand version because it's simple and can still root out strong trends, even with small samples. The TAM score on the last end-to-end test was 4.41, well above the industry-accepted benchmark rating that predicts adoption: 3.00.
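For context, the shorthand scoring works roughly like this - a sketch assuming a 1-5 scale and two items (usefulness and usability), which are assumptions rather than the exact instrument:

```typescript
// Sketch of shorthand TAM scoring: each instructor rates usefulness and
// usability; the TAM score is the mean across items and participants.
const ratings = [
  { usefulness: 5, usability: 4 },
  { usefulness: 4, usability: 5 },
  { usefulness: 5, usability: 5 },
];

const tamScore =
  ratings.reduce((sum, r) => sum + r.usefulness + r.usability, 0) /
  (ratings.length * 2);

console.log(tamScore.toFixed(2)); // "4.67" for this sample vs. the 3.00 benchmark
```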
"It's very easy to see who needs help and who I would have a conversation with. It's also helpful to see the areas where I could shave off instruction time for high performing chapters."
Many instructors told us they spend substantial time trying to identify students who aren't proficient in fundamental skills like basic arithmetic - skills "they should have learned in second grade" - and students carry these deficiencies with them all the way to college. More broadly, instructors want to assess skills that pre-date, and are prerequisites for, the current course. The current diagnostic largely assesses proficiency in the content that will be covered in class.
It'll be important to move from useful, usable results to innovative, workflow-changing insights. We have been successful at surfacing and displaying the metrics instructors currently rely on and find useful, but we can find more innovative ways to meet the need behind them. Instead of instructors sifting through last logins to see who's been working, we can push alerts that pull multiple data points together into an insight that says, "This student has 'checked out.' You might want to reach out to them." Operationally, this might just mean working with PLA to intensify research efforts and apply the output.
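A rough sketch of what pulling those data points together could look like - thresholds and field names are illustrative assumptions, not a defined feature:

```typescript
// Hypothetical rule combining engagement signals into one pushed insight.
interface EngagementSignal {
  daysSinceLastLogin: number;
  assignmentsMissed: number;
  proficiencyTrend: number; // negative = declining
}

function checkedOutInsight(s: EngagementSignal): string | null {
  const inactive = s.daysSinceLastLogin > 7;
  const fallingBehind = s.assignmentsMissed >= 2 || s.proficiencyTrend < 0;
  return inactive && fallingBehind
    ? "This student has 'checked out.' You might want to reach out to them."
    : null;
}
```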
"The information displayed w/in the tool is helpful and direct. The design is very visually appealing and it's easy to select a chapter and quickly assess where I should focus my time."
We ran an end-of-year retro to identify possible improvement areas moving into the new year.
Obvious themes emerged across our retro cards, focused on improving overall planning effectiveness and efficiency through transparent communication. That said, there were also threads of growing design maturity.
We all acknowledged stronger collaboration in our working relationships and interactions with dev and product compared to a year ago - or to past projects - while also identifying further tweaks to our design and development practices to ultimately deliver a useful and enjoyable experience.
I started each day with a creative outlet exercise: working our team name into an iconic album cover. I'd then post it to Slack with the plan of the day before standup.
It was a great way to get into a creative flow to start the day and engage the team with something new every single morning.