Commissioned by
V2_ Institute for the Unstable Media
Idea, research and concept
Jerry Estié & Julia Luteijn
Character Design & Animation
Jerry Estié
Working Prototype
Julia Luteijn
TAMAI is a tamagotchi that you ‘feed’ by answering questions about human behaviour. Based on your answers, TAMAI will pick up human cognitive biases and change its reasoning or phrasing, or even hide behind a paywall. The project originated from our research into the current state of bias in AI and where it comes from: cognitive bias in us, humans.
The focus was on users with low technology and artificial-intelligence literacy, to inform them about the dangers of bias in AI in a playful, non-lecturing way. The tamagotchi visuals provided a familiar framing device. By using the prototype, users are confronted with well-known cognitive biases and effects, and with how these influence not only TAMAI but AI systems in general. TAMAI was created together with Julia Luteijn as a prototype for the open call ‘computer says no’ by V2_ in Rotterdam.
TAMAI grows by asking you questions about day-to-day human life. Depending on your answers, TAMAI might pick up a human cognitive bias and make it part of its reasoning.
Taking inspiration from the 90s, the visuals are a mix of tamagotchi and Gameboy aesthetics combined with the look and feel of modern voice assistants like Siri, Google Assistant and Cortana.
After hatching from its pixel, TAMAI can grow up to five times if it is cared for properly.
TAMAI can pick up many biases based on the answers it is given, changing how it reasons or how it phrases its questions.
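The mechanism described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual prototype code: the trigger words, the `BIAS_THRESHOLD` value, and the rephrasing rules are all invented for the example. The idea is simply that each answer can reinforce a bias score, and once a bias crosses a threshold it starts colouring how questions are phrased.

```python
# Toy sketch of a bias-acquisition loop (hypothetical, not TAMAI's real code).

BIAS_THRESHOLD = 2  # assumed: reinforcements needed before a bias takes hold

# Assumed trigger words that hint at a bias in the user's answer.
BIAS_TRIGGERS = {
    "rhyme_as_reason": {"rhyme", "catchy", "slogan"},
    "google_effect": {"google", "look it up", "search"},
}


def update_biases(answer: str, scores: dict) -> None:
    """Reinforce every bias whose trigger words appear in the answer."""
    text = answer.lower()
    for bias, triggers in BIAS_TRIGGERS.items():
        if any(trigger in text for trigger in triggers):
            scores[bias] = scores.get(bias, 0) + 1


def phrase_question(question: str, scores: dict) -> str:
    """Rephrase a question according to any acquired biases."""
    if scores.get("rhyme_as_reason", 0) >= BIAS_THRESHOLD:
        # Rhyme-as-reason: tack a rhyming tag onto the question.
        return question.rstrip("?") + ", yes or guess?"
    if scores.get("google_effect", 0) >= BIAS_THRESHOLD:
        # Google effect: defer to whatever a search would say.
        return "Why remember? Just tell me what a search says: " + question
    return question


scores: dict = {}
for answer in ["I liked the catchy slogan", "If it rhymes it's true"]:
    update_biases(answer, scores)
print(phrase_question("Do you trust me?", scores))
```

Two answers hinting at rhyme-as-reason push that bias past the threshold, so the final question comes out with a rhyming tag instead of in its neutral form.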
Learning Rhyme as Reason
Rhyming Questions
Google Effect
Teaching TAMAI about love
Opening Up to Love