LLM Drawing Robot

2023

University Module

David Polke, Elia Salerno, Andreas Kohler, Stepan Vedunov

Human-Robot Interaction, Voice Interface

Context

In a three-week university module, we built a speech-to-drawing system that turns spoken requests into custom pen illustrations. The goal was to explore novel experiences built on natural language and AI models: using GPT-3.5, Stable Diffusion, and a custom-built pen plotter, the system generates unique images drawn on paper for visitors to keep.
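The pipeline described above can be sketched as a chain of four stages. This is a minimal illustration of the flow, not the project's actual code: every function here is a hypothetical stand-in for the real model or hardware step.

```python
# Hedged sketch of the speech-to-drawing pipeline: spoken input is
# transcribed, expanded into an image prompt by an LLM, rendered by a
# diffusion model, then vectorised into pen strokes for the plotter.
# All functions are illustrative stubs, not the project's real code.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text step."""
    return "a cat riding a bicycle"

def expand_prompt(transcript: str) -> str:
    """Stand-in for GPT-3.5 turning the spoken request into a drawing prompt."""
    return f"single-line ink illustration of {transcript}, minimal, pen plotter style"

def generate_image(prompt: str) -> str:
    """Stand-in for Stable Diffusion; returns a path to the rendered raster."""
    return "/tmp/drawing.png"

def vectorize(image_path: str) -> list[tuple[float, float]]:
    """Stand-in for tracing the raster into (x, y) pen-stroke points."""
    return [(0.0, 0.0), (10.0, 5.0), (20.0, 0.0)]

def draw(request_audio: bytes) -> list[tuple[float, float]]:
    """Run the full chain: audio -> transcript -> prompt -> image -> strokes."""
    transcript = transcribe(request_audio)
    prompt = expand_prompt(transcript)
    image_path = generate_image(prompt)
    return vectorize(image_path)
```

In the real system each stub would call the respective model or device; the value of the sketch is the single linear hand-off from voice input to plotter strokes.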

My Role

Conducted user research on human-robot interaction solutions

Facilitated ideation rounds following Design Thinking principles

Designed and developed voice input frontend

Led the industrial design of the drawing robot

Result

The resulting prototype serves as a speculative proof of concept for pseudo-sentience in AI. Our core goals were to learn more about this field and to shape human interaction with robotic systems.

Video

Video capturing the interaction with the robot.

Process

Initial Brainstorming
Brainstorming possible input mechanisms
Tech Pipeline
Tech pipeline of the entire process from human input to final drawing
Sketches
Sketching possible versions of the industrial design
CAD assembly
CAD assembly of the drawing robot

Interaction

We aimed to create a straightforward speech-input GUI, inspired by social-media voice messages, to encourage natural, human-like interaction. The interface was central to our goal of exploring new ways to engage with LLMs by simulating AI sentience through conversational exchange.
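The voice-message-style interaction above can be summarised as a small state machine (hold to record, release to review, then send or discard). The states and transitions here are assumptions made for illustration, not the project's actual frontend logic.

```python
# Minimal state machine sketching the assumed voice-message input flow:
# idle -> recording (press), recording -> review (release),
# review -> processing (send) or back to idle (discard).

TRANSITIONS = {
    ("idle", "press"): "recording",
    ("recording", "release"): "review",
    ("review", "send"): "processing",
    ("review", "discard"): "idle",
    ("processing", "done"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance the GUI state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

For example, `step("idle", "press")` yields `"recording"`, and an out-of-place event such as `step("idle", "release")` simply keeps the current state, which keeps the interface forgiving of stray input.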

Interactive GUI
Video showing the speech input interaction

Results

3D Rendering
Final 3D rendering of the robot
Project Exhibition
The project on display at the exhibition
User Interaction Photo
User prompting the robot
Robot drawing
LLM drawing robot in action