LLM Drawing Robot

Context
In a three-week university module, we built a speech-to-drawing system that turns spoken requests into custom pen illustrations. The goal was to explore novel experiences built on natural language and generative AI: using GPT-3.5, Stable Diffusion, and a custom-built pen plotter, the system generates a unique image and draws it on paper for visitors to keep.
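The overall flow can be sketched as a three-stage pipeline: transcribe the spoken request, expand it into an image prompt with the LLM, then turn the generated image into plotter paths. The sketch below is a hypothetical illustration with stubbed stages; the function names, signatures, and placeholder logic are assumptions, not the project's actual code, which called GPT-3.5, Stable Diffusion, and the custom plotter at the marked points.

```python
from dataclasses import dataclass

@dataclass
class DrawingJob:
    transcript: str      # what the visitor said
    image_prompt: str    # prompt expanded by the LLM
    plotter_paths: list  # vector paths sent to the pen plotter

def transcribe(audio: bytes) -> str:
    """Speech-to-text stage (stubbed; a real system would call an ASR model)."""
    return audio.decode("utf-8")  # stand-in: pretend the audio is already text

def expand_prompt(transcript: str) -> str:
    """LLM stage: turn a casual spoken request into a richer image prompt.
    (Stubbed; the project used GPT-3.5 here.)"""
    return f"pen illustration, single line art, {transcript}"

def generate_paths(prompt: str) -> list:
    """Image and vectorization stage (stubbed; the project used Stable
    Diffusion, with the raster output traced into plotter-friendly paths)."""
    return [[(0, 0), (10, 10)]]  # placeholder path

def run_pipeline(audio: bytes) -> DrawingJob:
    transcript = transcribe(audio)
    prompt = expand_prompt(transcript)
    return DrawingJob(transcript, prompt, generate_paths(prompt))

job = run_pipeline(b"a fox riding a bicycle")
print(job.image_prompt)
```

Keeping each stage behind a small function like this makes it easy to swap a stub for a real model call without touching the rest of the pipeline.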
My Role
Conducted user research on human-robot interaction solutions
Facilitated ideation rounds following Design Thinking principles
Designed and developed voice input frontend
Contributed to the industrial design of the drawing robot
Result
The resulting prototype serves as a speculative proof of concept for pseudo-sentience in AI. Our core goals were to learn more about this emerging field and to explore how natural-language interfaces can shape human interaction with robotic systems.
Video

Process


Interaction
We aimed to create a straightforward speech-input GUI, inspired by social-media voice messages, to encourage natural, human-like interaction. This interface was central to our goal of exploring novel ways to engage with LLMs by simulating sentience through conversational exchange.
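The voice-message-style interaction above boils down to a small state machine: the visitor records, releases to send, waits while the image is generated, and watches the robot draw. The sketch below is a hypothetical model of that flow; the state names and events are assumptions for illustration, not the project's actual frontend code (which was a web GUI).

```python
# Hypothetical states of the speech-input GUI (assumed names).
IDLE, RECORDING, PROCESSING, DRAWING = "idle", "recording", "processing", "drawing"

# Allowed transitions, mirroring a social-media voice message:
# tap to record, release to send, then wait while the robot draws.
TRANSITIONS = {
    (IDLE, "press_record"): RECORDING,
    (RECORDING, "release"): PROCESSING,
    (RECORDING, "cancel"): IDLE,
    (PROCESSING, "image_ready"): DRAWING,
    (DRAWING, "plot_done"): IDLE,
}

def step(state: str, event: str) -> str:
    """Advance the GUI state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = IDLE
for event in ["press_record", "release", "image_ready", "plot_done"]:
    state = step(state, event)
print(state)  # → idle
```

One full exchange walks through all four states and returns to idle, ready for the next visitor; ignoring unknown events keeps the GUI robust against stray input while the robot is busy.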

Results



