Duration
14 weeks
Tools
Arduino, p5.js, Node.js, Socket.IO, WebRTC, Google Firebase
Collaborators
Janet Peng, Joseph Zhang
Role
Interaction Designer, Lead Hardware + Software Developer
Advisor
Daniel Rosenberg Munoz, Eric Anderson, Kristin Hughes

Open Door Museum

Democratizing museum curation

While museums provide a window into our culture, they often fail to reach diverse communities and to surface the stories those communities connect with. Open Door Museum is a network of distributed museums, where community members become curators of their own artifacts.

This project was part of a larger initiative, where a group of over 40 designers and engineers prototyped an assistive computer system for people with Parkinson's Disease. Learn more about our project here.
01 | Problem Space

How might we monitor mental health in a reliable and engaging way for people with Parkinson's Disease?

Depression and anxiety are common clinical symptoms of Parkinson's Disease (PD), affecting nearly half of people with PD. As PD is a progressive disorder, doctors want to gain a consistent understanding of a patient's mood over time. However, they don't have a reliable way of assessing a patient's mental health outside of sporadic check-ups. To tackle this problem, our design team asked the question: how might we monitor mental health in a reliable and engaging way for people with PD?
🏆 This project won "Best in Category, Connecting" at the 2022 IxDA Interaction Awards.

Core Experience

Siri, Summer '20

Machine Intelligence, Summer '21

Interested in learning more? Please reach out to me via email.
Core Experience

Design Principles

Transparency
Enhancing transparency in black-box algorithms
Privacy
Making privacy a tangible, valuable material
Interactivity
Interacting fluidly with the underlying ML model

Key Interactions

Data at a glance
Users have full transparency over what data is being used by their AI-powered device.
Nudge to open
During specific interactions, relevant types of personal data lift up to nudge the user toward learning more.
Understand your data
Users learn the specifics of their data.

Design Principles

Community
Building community identity through personal artifacts
Storytelling
Providing tools for effective object storytelling
Collection
Archiving underrepresented cultures in digital media

Key Components

Helper Box
Press the buttons to scan, record and submit.
Scanning Cabinet
Archive any artifact of your choice.
Viewing Cabinet
Experience the artifacts and their stories, right at your fingertips.

Core Experience

Example Art

Key Interactions

Team bonding
Break the ice on Zoom through a lighthearted collaboration.
Webcam as paint
Use your webcam and sounds to capture colors and textures found in your room.
Collective Art
See your teammates' art come to life on the canvas.

Key Highlights

Visualizing the Autonomy Stack

Empowering Remote Operators of Autonomous Vehicles

Looking for a case study? Please reach out to me via email.
Outcome
Reflect on your day with Plume.
User: "Open Plume Assistant."
First, ease into the conversation.
Plume: "How was your day?"
Next, talk about your feelings.
Plume: "Tell me more. How are you feeling?"
Finally, hear recommendations to lift your mood whenever you're feeling low.
Plume: "I'm sorry to hear that. Want to play some music to cheer you up?"
Doctor dashboard of the patient's mood, based on emotion data sent from the Plume Assistant
Understand the conversational flow that drives Plume.
Process Documentation
1A | User Interview Findings
At the beginning of the project, our team conducted user interviews with key stakeholders in the experience, including a person with PD, their caregiver, and two doctors specializing in PD. After the entire team developed baseline and visionary scenarios to understand our key users, I gained the following key insights regarding mood and PD.
🔎 Key Insights
1. Nearly half the people with PD suffer from depression or anxiety.

2. People with PD typically suffer from an array of different symptoms, both physical and psychiatric. As such, they have regular check-ups with primary physicians to monitor their conditions.

3. During check-ups, primary physicians often overlook mental health conditions in their patients due to an abundance of other symptoms to look over.

4. Once a physician identifies mental health conditions in a patient, the patient can seek appropriate medical help. Until then, however, their conditions can go untreated for a long time.
Through research, it became clear that primary physicians need a clear window into the emotional health of people with PD. As such, I narrowed our focus to two main goals:
Primary Goals
01 | Create a reliable method to monitor mood over time

02 | Make the monitoring process engaging and effortless for people with PD
1B | Explorations
A. Choosing Voice Assistants
After establishing our primary goals, the question then became: what technology do we choose to monitor emotions?

Voice assistants became an interesting choice for two reasons. One, current emotion-detecting methods are long and arduous, whereas a voice assistant can be a lightweight way for users to check in.

Two, according to the audio-based emotion recognition system developed by our course professors, speech analysis can be highly accurate in recognizing a person's emotions. Based on this finding, both the design and engineering teams agreed that a voice assistant was a suitable vehicle for our design goals.
B. Extracting mood through conversation
Initial explorations with conversational design involved exploring various dialogues and scenarios. These dialogues were inspired by real conversations with friends, to understand how raw feelings can best be drawn out of a conversation.

Opening phrase
Closing action (Negative scenario)
02 | Detailed Design
Defining the conversation

Key Design Principles

During the detailed design phase, I identified the key phases of a successful emotional check-in conversation. I also worked closely with engineering to scope the project into a prototype they could build in time. The following were some key principles behind my design:
01 | Alert privacy
Let the user know that their data is sent to their doctors and caregiver. People with PD can also have memory problems, so this reminder is repeated at the beginning of each session.

Plume: Please remember, your responses from now on will be shared with your doctors and caregiver.
02 | Confirm, before proceeding.
Plume should confirm its emotional analysis with the patient before sending the data to the doctors, in case the machine learning model makes an error.
Plume: How was your day?

User: I had a terrible time at boxing today.

Plume: It sounds like you didn't have such a great day. Did I hear you correctly?

03 | Clarify feelings
If the patient's response is not clear, follow up with a question to open up the patient's feelings.
Plume: How was your day?

User: It was alright.

Plume: I see. Can you tell me more about how you're feeling?

User: I feel... anxious.

04 | Talk less, listen more.
Each session with Plume allows the patients to talk at length about their day. Don't interrupt while the user is speaking.
Plume: How was your day?

User: I went dancing with my wife today...

Plume: <listening>

User: My body didn't feel as stiff as it did last week, thank God. You know what she said? She said....
05 | Reward
After the user engages with a session, suggest activities to enhance their mood.
User: ....It was so bad!

Plume: I'm sorry to hear that. Talking to your loved ones can help you feel better. Want to call your daughter now?
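To make the flow concrete, here is a minimal sketch of how the five principles could chain together in code. ask(), listen(), and detectEmotion() are hypothetical helpers standing in for the speech interface and the engineering team's classifier; this is an illustration, not the actual Plume implementation.

```js
// A simplified check-in flow reflecting the five principles above.
// ask(), listen(), and detectEmotion() are hypothetical helpers, not the real Plume stack.
async function checkIn() {
  // 01 | Alert privacy: repeated at the start of every session.
  await ask("Please remember, your responses from now on will be shared with your doctors and caregiver.");

  // 04 | Talk less, listen more: listen() resolves only once the user has finished speaking.
  await ask("How was your day?");
  let response = await listen();
  let mood = detectEmotion(response); // "good" | "bad" | "neutral"

  // 03 | Clarify feelings: follow up when the response is ambiguous.
  if (mood === "neutral") {
    await ask("I see. Can you tell me more about how you're feeling?");
    response = await listen();
    mood = detectEmotion(response);
  }

  // 02 | Confirm, before proceeding: verify the analysis before anything is sent to the doctor.
  const summary =
    mood === "bad"
      ? "It sounds like you didn't have such a great day."
      : "It sounds like you had a good day.";
  await ask(`${summary} Did I hear you correctly?`);
  const confirmation = await listen(); // a "no" here would loop back to the clarification step

  // 05 | Reward: suggest a mood-lifting activity after a rough day.
  if (mood === "bad") {
    await ask("I'm sorry to hear that. Want to play some music to cheer you up?");
  }

  return { mood, confirmation };
}
```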
02 | Personalization
With the architecture of the check-in sessions established, I had to determine how to make these sessions enjoyable and personal over time. As such, I created an additional flow called "Surprise me." The flow essentially tries to understand the patient's interests, in order to generate more personal questions in the upcoming sessions.
"Surprise me" flow for understanding a close person.
03 | Implementation
Our design team was in close contact with the engineering team to make sure that our final prototype reflected the scope of our capabilities. In the process, we found out that the engineering team's emotion detection model could identify five states: happiness, sadness, anger, surprise, and neutral.
Codebase of the emotion detection model
Taking this into account, I solidified the conversational flow to run in three branches: good, bad, and neutral. If the patient expresses anger or sadness in their response, the response goes into the "bad day" bucket. If the patient expresses happiness, the response becomes part of the "good day" bucket.
Different treatments of negative, positive, and neutral emotions.
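As a rough illustration, the routing can be as simple as a lookup table from the five detected states to the three branches. The mapping of "surprise" and the function name are assumptions for illustration, not taken from the actual codebase.

```js
// Map the model's five detected states onto the three conversational branches.
// "surprise" -> "neutral" is an assumption; the flow above only specifies the other four.
const EMOTION_TO_BRANCH = {
  happiness: "good",
  sadness: "bad",
  anger: "bad",
  surprise: "neutral",
  neutral: "neutral",
};

function routeResponse(detectedEmotion) {
  // Fall back to the neutral branch if the classifier returns something unexpected.
  return EMOTION_TO_BRANCH[detectedEmotion] ?? "neutral";
}
```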
04 | Takeaways
Through this project, I gained a newfound understanding of the importance of user research. Before our design team had identified mental health as an indisputable pain point for both patients and doctors, our class had considered removing emotions entirely from the overall system we were building. However, by providing a clear reason for the importance of emotion monitoring, we were able to fill a gap that the medical community could not fully address.

Process Documentation
01 | Background
Teachable Machine + Zoom

How might we enhance remote interactions using machine learning?

Quick individual prototype of a rock, paper, scissors game. Used Teachable Machine to recognize hand signals and p5.js for interactivity.
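As a rough sketch of how a prototype like this can be wired up, assuming the @teachablemachine/image library loaded from a script tag and a placeholder model URL (this is not the original prototype code):

```js
// Placeholder URL from a Teachable Machine export; classes: "rock", "paper", "scissors".
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXX/";

let model, webcam;

async function init() {
  model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
  webcam = new tmImage.Webcam(320, 320, true); // width, height, flip
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);
  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update();
  const predictions = await model.predict(webcam.canvas);
  // Pick the most confident hand signal and hand it to the p5.js game logic.
  const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  console.log(best.className, best.probability.toFixed(2));
  window.requestAnimationFrame(loop);
}

init();
```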
By the end of 2020, the pandemic flattened our lives into mere pixels on the web. To reinvigorate our monotonous remote calls, our team was challenged to imagine novel interactions on Zoom through a creative use of Teachable Machine. Below is our self-defined problem scope:
👤 Who
College students in remote learning.
🔎 What
A lighthearted, collaborative experience on Zoom.
💡 How
Teachable Machine and p5.js.
❓Why
Enhance human connection in remote interactions.
03 | Ideations
Brainstorming exercises

What does creative collaboration look like?

Idea 1: Co-creating collages
Idea: Make Zoom background collages out of patterns found in your room.

👎 Fun idea, no direct correlation to ML.
Idea 2: Co-creating fictional characters
Idea: Create fictional characters by capturing human features from inanimate objects.

👎 Limited visual outputs with the given task.
Idea 3: Zoom Scavenger Hunt
Idea: Capture different qualities of objects around you to reveal a mystery animal/object.

👎 Engaging, but hard to demo with Teachable Machine alone. Can't collect data for every possible prompt in the game.
💡Iterate to balance all considerations
[user context in a Zoom call, affordances of a webcam/Teachable Machine, and effective engagement of the activity itself]

Even though we proposed a lighthearted experience, no idea was "perfect" within our constraints. We had to make multiple iterations to reach the sweet spot across all the constraints.
04 | Final Direction
Imaginary to Real Tech

Scoping the Idea for Prototyping

After our interim presentation, we realized that we were severely limited in developing a working prototype, as the proposed idea was not well suited for the simplicity of Teachable Machine. As such, we re-worked our experience for improved feasibility.
05 | Iteration A
Networked Canvas

Developing the game

Sending and Receiving Data (p5.js, Socket.IO + Node.js)
Joseph and I collaborated on the development, with the following framework in mind. (p5.js was used to capture webcam footage and place it on the canvas.)
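The relay itself stays small. Below is a minimal sketch of the kind of server we had in mind; the 'stamp' event name and payload shape are illustrative assumptions rather than our actual code.

```js
// server.js: minimal Socket.IO relay for the shared canvas.
// The 'stamp' event and its payload shape are illustrative assumptions.
const express = require("express");
const http = require("http");
const { Server } = require("socket.io");

const app = express();
app.use(express.static("public")); // serves the p5.js client sketch
const server = http.createServer(app);
const io = new Server(server);

io.on("connection", (socket) => {
  // Forward each brush "stamp" (position, size, pixel colors) to every other client.
  socket.on("stamp", (data) => socket.broadcast.emit("stamp", data));
});

server.listen(3000, () => console.log("Canvas server listening on :3000"));
```

On the p5.js side, each client emits a 'stamp' when the player paints and draws any incoming stamps onto its own canvas, so every participant sees the same collective artwork.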
💡Clean code is essential for collaboration
While the actual development wasn't very complex after solidifying our pipeline, I paid extra attention to make sure the code was clean and well commented for easy understanding.
05 | Iteration B
Prototype + Design

Fleshing out the experience

The use case for Teachable Machine was still quite fuzzy, as no single application felt intentional in the design. We juggled different ideas to see what fit with our developing prototype.
Capture when color is detected?
👎 Color selection is more efficient on the keyboard. Why use webcam in the first place?
Mapping poses to filters?
👎 Poses and filters aren't well correlated. Also less efficient than the keyboard.
Pixelate images to achieve different textures
👍 Different degrees of pixelation in the webcam image create interesting textures on the canvas!
💡Prototype to test assumptions
We began our "pixel art" approach with the intention of using a single color per cell. However, prototyping the webcam-to-server experience revealed that textured pixels produced a far more interesting outcome than flat colors.
Final: Detect Different Sounds to Change Brush Pixelation
👍 Teachable Machine used to detect "clapping" vs. "rustling paper" sounds to change the pixelation of the webcam image.
Final: Change brush size with space bar
👍 Since the webcam footage had diverse textures, increasing the brush size proved to have interesting results.
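To give a sense of the final brush mechanic, here is a stripped-down p5.js sketch that pixelates the live webcam feed. The capture resolution and cell size are assumed values; in our prototype the sound classifier changed the pixelation level and the space bar changed the brush size.

```js
// A minimal p5.js sketch of the pixelated "webcam as paint" effect (assumed parameters).
let video;
const cellSize = 10; // larger cells = more pixelation

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(64, 48); // low-res capture keeps sampling cheap (64 * 10 = 640, 48 * 10 = 480)
  video.hide();
  noStroke();
}

function draw() {
  video.loadPixels();
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = 4 * (y * video.width + x);
      fill(video.pixels[i], video.pixels[i + 1], video.pixels[i + 2]);
      rect(x * cellSize, y * cellSize, cellSize, cellSize);
    }
  }
}
```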
05 | Demo Day
Prototype + Design

Final Experience Deck

07 | Reflections
I learned a lot about how to collaborate effectively through this project. We had many fruitful discussions regarding creative collaboration, and even though many of them didn't end up becoming part of our final prototype, the remnants of our discussions still lived through our ever-changing Figma page.

A note on machine learning-based experiences: finding the right application for Teachable Machine was rather difficult, and our current use case still doesn't feel very compelling. Looking back at the project now, I wonder if we could've done more small experiments with Teachable Machine to understand its affordances, rather than thinking of it as an add-on to our pixel-art game. It may have led to a more meaningful use of a machine learning-backed creation that could spark ideas on what creativity really means in the age of AI.
Process Documentation
00 | Background
Voice intelligence gave us the gift of interacting more fluidly with our computers. It's one step closer to computers adapting to our minds, rather than constricting our interactions to the language of a machine. However, as Ben Shneiderman argued in the famous Direct Manipulation vs. Interface Agents debate, too much help from the AI leads to bouts of frustration and a lack of agency in interactions. Gifting users with natural tools for problem solving is far more fruitful than disengaging them from the algorithm completely. Yet AI speakers strip users of control, especially when the interaction goes off-course.

Moreover, the intangible nature of voice interactions has led to an uneasy sense of distrust in the systems we let into our homes. While privacy is a key value among AI product users, they are often unaware of how to protect themselves or of the types of personal data their devices use. Because the data collected and accessed by AI services is expansive and often invasive, I began the investigation by asking how users can regain their agency with AI-driven speakers.
"I always just unplug [the Amazon Echo]… I don’t trust it"
- User A, quoted in "IoT Data in the Home: Observing Entanglements and Drawing New Encounters" [6]
01 | Initial Research + Ideation
Questioning Natural Interfaces

How might we regain agency in an intelligent environment?

AI systems today do a poor job of providing explanations – an integral aspect of a robust user experience. As such, the ideation phase began with the core goals of ensuring transparency and control.
The Data Landscape
Research Inspirations for Explainability
White-box Explanation [5]

A white-box explanation of a college applicant acceptance model. A detailed overview of features and interactivity improved objective understanding of how applicants were evaluated.

How can an interactive, explanatory interface be adapted to everyday interactions?
Data Tarot Readings [6]

Exploring data by extracting varied analyses of personal data. "Mixing new datasets to find new meanings" was particularly interesting.

How can tangible play be integrated into AI-driven experiences?
Exploratory Sketches
Provocative artifact sketches, focusing on transparency, autonomy, privacy, and provocation.
03 | Iteration A: Form
Thinking through making

Rapid Prototypes

Prototype, By Hand and Paper
⟵ "Loading"
↑ Revealing: Partial
Revealing: All ⟶
With the above states in mind, I drafted a quick storyboard to demonstrate a potential interaction. At this point in the process, the focus was placed on defining the mechanics, rather than on fleshing out the experience. This allowed me to focus on the language of the form, defined by themes of transparency and control, which would later feed into the function of the artifact.
Quick Storyboard
Prototype, Using Arduino

Prototype plan. Stepper motor for full rotation around circle. Servo for linearly actuating individual petal, for the "revealing" state.

Actual inner structure.
Movements

"Loading," using rotational movement.

"Revealing," using linear motion.
The initial prototype offered an appropriate level of fidelity for demonstrating the interactions, but I received feedback that showing the inner layer of the shrub would better embody the "revealing" feature.

The servo mechanism could not drive this internal structure. After consulting my advisor, nitinol wire became the tool of choice.
04 | Iteration B: Form
Fabrication + Physical Computing

Materiality, Mechanics, and Form

Shoutout to Maker Nexus for fabrication access during the pandemic!
"Training" Nitinol Wire using heat.
Structure + Interaction Sketches
Form language
What could be inside the leaves
What could be inside the leaves
Concrete interactions, based on common scenarios, like showing the shrub is listening to you and deleting detailed data
Figuring out joints
Figuring out joints
Figuring out joints
Structure Iterations
⟵ String mechanism
Nitinol Wire test⟶
⟵ Lasercutting
Lasercut inner structure ⟶
⟵ Soldered nitinol wire
Nitinol Wire test, with new structure ⟶
⟵ A "works like" prototype of personal data icon reveal.

When the wire heats up, the icon, painted with thermochromic ink, changes color from black to white.

05 | Iteration C
Wizard of Oz

Final Prototype

Final storyboard. The AI reveals its reasoning through its petals. The user can find out details by lifting up a petal. Finally, they can choose to delete the data if they deem it too intrusive or incorrect.
Final structure. Petals were laser-cut on mylar paper for better materiality.
Final assembly
Hardware set-up for Wizard of Oz. Potentiometers to control nitinol wire bending. Transistors to lower current to prevent nitinol wire from burning out.
07 | Reflections
This speculative project explores how casual users can gain control in a world where AI dominates decisions in our everyday lives. I wanted to build a provocative experience that questions the level of transparency in AI decision making, with the hope of empowering users with more insight into how their AI systems operate, and of investigating how users react when given access to the data hidden under the veil.

In my initial research, I discovered that AI as a design material is inherently hard to work with, due to the lack of clarity around its potential and its explanations. While this project offers an abstracted version of how explainability can be improved in current human-AI interaction scenarios, limitations remain in feasibility and in the sheer lack of reasoning behind certain AI decisions.

Standardized procedures for human-AI interactions must be established, not only to hold products accountable for fairness but also to give lay users more autonomy in daily life.

This was my first time designing with a hardware-based system. The possibilities for a platform with layers seemed limitless, when applying the principles of direct manipulation into hardware. I imagine it to be harder to deploy once scaled to an actual product, due to the fallibility of many moving pieces. However, it was an interesting exploration nonetheless.
1. Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–13. DOI: https://doi.org/10.1145/3313831.3376301

2. Anatomy of an AI System. (n.d.). Retrieved January 15, 2022, from https://anatomyof.ai/

3. Wang, R., Harper, F. M., & Zhu, H. (2020). Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences. CoRR, abs/2001.09604. Retrieved from https://arxiv.org/abs/2001.09604

4. Rose, D. (2015). Six Future Fantasies. In Enchanted Objects: Innovation, Design, and the Future of Technology (pp. 390–412). Scribner.

5. Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Paper 559, 1–12. DOI: https://doi.org/10.1145/3290605.3300789

6. Audrey Desjardins, Heidi R. Biggs, Cayla Key, and Jeremy E. Viny. 2020. IoT Data in the Home: Observing Entanglements and Drawing New Encounters. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–13. DOI: https://doi.org/10.1145/3313831.3376342
Design explorations of revealing personal data in the home
Extraordinary Ordinary Things exhibit at CMOA. Our starting point of inquiry for taking museums outside their four walls.
01 | Initial Inquiry + Ideation
Questioning the role of museums

How might we democratize museum curation?

Initial Reflections
Conceptual inquiries after the exhibit visit to CMOA. Identifying the gaps in current museum experience.
Untold stories in artifacts
Some of the context missing from objects in traditional museums. We found human stories to be the golden thread running through all the themes.
Our inspiration
'You Can't Lay Down Your Memory' by Tejo Remy. The artist likens the disarrayed nature of our memories to a disorderly chest of drawers. This was our main inspiration for our proposed interaction.
Interaction Ideas
1. Connected Drawers. Connect two strangers through a networked drawer. Support nonverbal collaboration.
2. Untold Stories. Hear untold stories between strangers through a game of telephone.
3. Object Collage (Final Direction). Get a portrait of Pittsburgh by scanning cherished objects from the community in a drawer.
03 | Iteration A
Materializing the concept

Proposal + Working Prototype

Working Prototype
Viewing drawer with a holographic projection.
Unity+Arduino Prototype. Each button press triggers the next object to float.
💡 Paint the big picture, then substantiate with a prototype
We crafted our narrative in two chapters. One, showing how the experience unfolds at a community level through a network of cabinets. Two, showing what it scales down to, through a tangible prototype of a cabinet.

This strategy helped materialize the overall experience for the stakeholders involved. As a result, we received positive feedback on our proposal from CMOA's curators and external critics.
04 | Iteration B
Prototyping, Defining the tech

Scanning Experiments

Inspirations for different scanning methods. What is the best way to capture the essence of an object?
While I wanted a novel capturing method to scan the object in 3D, this proved to be a difficult technological feat given the current state of 3D capture. After a quick experiment by my teammate, a video capture of the object spinning 360° proved effective for our purpose of producing real-time scans.
Video capture of the object rotating 360°.
Attempt 1: ArduCam + Particle Argon
Particle Argon – an IoT development board – was enticing for its ability to upload photos directly to the cloud, which would better contextualize the cabinet as an IoT platform on the street. I debugged deprecated open-source code to test the approach. However, the upload speed was too slow and unreliable to develop further.
Attempt 2: ArduCam + Processing
Another attempt at using the ArduCam camera module. While the capture was successful using an Arduino connected to open-source Processing code, the capture quality was too poor when the subject had to be shot in dim lighting.
Attempt 3: Webcam + p5.js
An external webcam broadcast through JavaScript was the most robust solution. The technical architecture was rooted in this core scanning method.
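At its core, that third attempt needs little more than a getUserMedia call; the snippet below is a simplified stand-in for the scanning feed, with an assumed resolution and element id.

```js
// Attach the external webcam feed to a <video> element (simplified sketch).
async function startScanFeed() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 }, // assumed resolution
    audio: false,
  });
  const preview = document.querySelector("#scan-preview"); // hypothetical element id
  preview.srcObject = stream;
  await preview.play();
  return stream; // later handed to MediaRecorder for the 360° capture
}
```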
💡 Adapt appropriately to setbacks
Finding the right tech to build the scanning function was difficult, as there were multiple approaches with unforeseen tradeoffs. Trial and error helped narrow down to the right tools, without losing the essence of our original design.
Physical Set-up
Post-it annotations to note interaction feedback.
Hardware + Software + Cabinet Set-up. Prototyped in foam core, by Janet.
05 | Iteration C
Prototyping, Continued

Viewing Experiments

Explorations for projecting the viewed objects. Upon experimentation, adding too many UI elements broke the illusion of floating objects, so we maintained the simple look of having a single object per view.
Saving and viewing multiple captures
Video capture using MediaRecorder API. Projection is faked using iPad Screen Mirroring.
Audio + Softpot switching
Soft Potentiometer as a non-intrusive interface. Gestures can be encoded to swipe through artifacts.
Semi-final prototype
Full-scale prototype.
05 | Iteration D
Prototyping, Continued

Detailed Design of Interface

Finalized Bill of Materials for hardware.
1. Breaking down the detailed user journey. 2. Proposal for an 'object recognition -> generate question using AI' pipeline to support effective storytelling around the artifact.
Figma prototype on iPhone + LCD Display prototype. Hardware + software coming together.
💡 Prioritize a robust interaction over a novel one.
As the interaction system became increasingly more complex, creating a robust user experience at each step was crucial for a reliable interaction.

For instance, the user should receive visual feedback on where their artifact is placed when closing the door for a scan. Otherwise, the object can be completely out of frame.

The same went for a scan button. The extra layer of confirmation makes sure that the user is cognizant when scanning their personal object, even if it may not feel as novel as simply closing the door to initiate the scan.
06 | Iteration E
Software + Hardware Cleanup

Final Overhaul

💡 Strengthen the demo for user testing
In previous steps, interactions were built as lightweight developments simply to test the design. In the end, key functions were fully developed for a robust demo at the final showcase.
1.  The Data – Node.js/Socket.IO server ➝ Firebase database
With temporary data held on the Node.js server, users could review their artifact before a final export to Firebase. More importantly, the iPad in the viewing cabinet could access all artifact submissions between refreshes, making the system flow smoothly between the cabinets.
Data flow chart for secure artifact submission process.
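A condensed sketch of that hand-off is below, assuming the firebase-admin SDK, illustrative event names ('draftArtifact', 'submitArtifact'), and a placeholder project URL; the real server also managed the recorded video files.

```js
// server.js: hold a temporary submission, then export it to Firebase on confirmation.
// Event names and data shape are assumptions for illustration.
const { Server } = require("socket.io");
const admin = require("firebase-admin");

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://<project-id>.firebaseio.com", // placeholder project
});

const io = new Server(3000);
const pending = new Map(); // socket.id -> draft artifact awaiting review

io.on("connection", (socket) => {
  socket.on("draftArtifact", (artifact) => pending.set(socket.id, artifact));

  socket.on("submitArtifact", async () => {
    const artifact = pending.get(socket.id);
    if (!artifact) return;
    await admin.database().ref("artifacts").push(artifact); // final export
    pending.delete(socket.id);
    io.emit("artifactAdded", artifact); // viewing cabinet iPad refreshes its collection
  });
});
```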
2. Hardware to Software – Arduino ➝ Javascript
The Arduino and the p5.serialport library were used to connect hardware and software. The code structure had to be meticulously organized to streamline events between the Arduino and JavaScript.
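On the JavaScript side, the bridge looks roughly like the sketch below. It assumes the p5.serialserver utility is running, a placeholder port name, and hypothetical 'SCAN' / 'SUBMIT' messages printed by the Arduino buttons.

```js
// p5.js side of the Arduino -> JavaScript bridge (requires the p5.serialserver utility).
let serial;

function setup() {
  createCanvas(640, 480);
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbmodem14101"); // placeholder port name
  serial.on("data", gotSerialData);
}

function gotSerialData() {
  const line = serial.readLine().trim();
  if (line === "SCAN") {
    startScan(); // hypothetical handler that kicks off the webcam capture
  } else if (line === "SUBMIT") {
    submitArtifact(); // hypothetical handler for the submit button
  }
}
```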
3. Video/Audio Recording – MediaRecorder API
The Problem: The MediaRecorder API presented a lot of headaches during development. A video that played perfectly well on the laptop could not be loaded on the iPad. This was my first time running into a platform-related issue.

The Solution: After numerous Google searches... I found that the file format had to be 'mp4,' not the more optimized 'webm' used on Chrome, to play properly on both the iPad and Mac Safari.
Video successfully uploaded to the server, accessible on both the iPad and MacBook.
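In code, the fix boils down to asking the platform which container it can actually record; the sketch below is a simplified version with an assumed duration and no error handling.

```js
// Record the artifact video with a container the target platforms can play back.
async function recordArtifact(durationMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // Safari (iPad/Mac) plays mp4 reliably; Chrome prefers webm. Pick what is supported.
  const mimeType = MediaRecorder.isTypeSupported("video/mp4") ? "video/mp4" : "video/webm";

  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const done = new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: mimeType }));
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  return done; // the Blob is then uploaded to the Node.js server
}
```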
4. Live Streaming – Open Source Code using WebRTC
To display realtime feedback on the Helper Box interface, I adapted Gabriel Tanner's open-source WebRTC broadcasting code to run on the iPhone.
Realtime broadcasting to help users place object in frame.
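For reference, the broadcaster half of such a setup looks roughly like the sketch below. The Socket.IO signaling events ('broadcaster', 'watcher', 'offer', 'answer', 'candidate') are assumptions modeled on common broadcast examples, not the actual open-source code.

```js
// Broadcaster side (the scanning cabinet's camera), simplified.
// A matching viewer answers the offer and attaches the incoming track to a <video> element.
const socket = io();
const peers = {};
let stream;

navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then((s) => { stream = s; socket.emit("broadcaster"); });

socket.on("watcher", async (watcherId) => {
  const pc = new RTCPeerConnection();
  peers[watcherId] = pc;
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  pc.onicecandidate = (e) => {
    if (e.candidate) socket.emit("candidate", watcherId, e.candidate);
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  socket.emit("offer", watcherId, pc.localDescription);
});

socket.on("answer", (watcherId, description) => {
  peers[watcherId].setRemoteDescription(description);
});

socket.on("candidate", (watcherId, candidate) => {
  peers[watcherId].addIceCandidate(new RTCIceCandidate(candidate));
});
```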
Final Exports on the iPad.
07 | Demo Day
Working Prototype Demo

"Future of Museums" Exhibition

We successfully delivered a working prototype on exhibition day, collecting over 16 scans from more than 40 participants, as many submitted a collection of objects as a group. We noticed a lot of delight when the scan physically initiated with the stepper motor, as well as when participants viewed the final exports in the viewing cabinet.
We also noticed many improvements for the future. One, the helper box interface needed more clarification, as participants had difficulty identifying which button to press for certain instructions. Two, participants often forgot to pick up their scanned artifact – demonstrating a clear need for a reminder after they view the artifact.

Finally, the prototype inspired new use cases we hadn't initially anticipated. Open Door Museum could be a time capsule for friend groups that want to cherish a memory. It could also be an exhibition for local artists and craftspeople who want to showcase their talents through a quality scan. At a cemetery, families could submit objects owned by their loved ones and share in their collective grief. For underrepresented communities, it could be a vehicle for digitally recording their culture. The beauty of participatory experiences bloomed with this prototype – I wanted to know more about its potential in the hands of more people.
07 | Reflections
Domenico Remps, Cabinet of Curiosities (c. 1690)
The origin of museums is the "Cabinet of Curiosities," through which wealthy aristocrats told stories of collected objects as a form of entertainment. While the cabinets held information about the objects themselves, they also served as a reflection of the owner's personal history.

In some ways, the internet has exploded into pockets of these cabinets, where we have an overabundance of personal stories through media far beyond a cabinet. However, the lack of friction in this habitual ritual has shifted personal records from a reflective activity to a toxic sea of noise.

By forcing visitors to choose their artifact, visit the cabinet, and record a live story, Open Door Museum hopes that visitors can have a deeper reflection on their individual and collective identity. Through the power of IoT, we believe that Open Door Museum can achieve both the accessibility of digital media and the rich experience of physical interactions. We hope that more people can stop and think about the culture they participate in through an enchanted object like Open Door Museum.