LLM Drawing Robot

2023

University Module

David Polke, Elia Salerno, Andreas Kohler, Stepan Vedunov

Industrial Design, GUI

About

In a three-week university module, we built a speech-to-drawing system that turns spoken requests into custom pen illustrations. The goal was to explore novel experiences enabled by natural language and generative AI models: using GPT-3.5, Stable Diffusion, and a custom-built pen plotter, the system generates unique images drawn on paper for visitors to keep.
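The project itself publishes no code, but a minimal sketch of such a speech-to-drawing pipeline might look like the following. It assumes Whisper (via the pre-1.0 openai Python client) for transcription, Hugging Face diffusers for Stable Diffusion, pyserial for talking to the plotter, and a hypothetical raster_to_gcode() helper for turning the generated image into pen strokes; none of these specifics are confirmed by the writeup.

```python
# Sketch only: component choices beyond GPT-3.5 and Stable Diffusion are
# assumptions, and raster_to_gcode() is a hypothetical placeholder.
import openai
import serial
from diffusers import StableDiffusionPipeline


def transcribe(audio_path: str) -> str:
    # Speech to text with Whisper via the OpenAI API (assumed; the writeup
    # only mentions speech input).
    with open(audio_path, "rb") as f:
        return openai.Audio.transcribe("whisper-1", f)["text"]


def request_to_prompt(request: str) -> str:
    # GPT-3.5 rewrites the visitor's spoken request into an image prompt
    # suited to a single-pen line drawing.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Turn the request into a concise prompt for a black-and-white line illustration."},
            {"role": "user", "content": request},
        ],
    )
    return response["choices"][0]["message"]["content"]


def generate_image(prompt: str):
    # Stable Diffusion renders the prompt as a raster image.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return pipe(prompt).images[0]


def plot(image, port: str = "/dev/ttyUSB0"):
    # raster_to_gcode() stands in for the vectorisation step (e.g. edge
    # tracing); the plotter is assumed to accept G-code over serial.
    gcode = raster_to_gcode(image)  # hypothetical helper
    with serial.Serial(port, 115200) as plotter:
        for line in gcode:
            plotter.write((line + "\n").encode())


if __name__ == "__main__":
    text = transcribe("visitor_request.wav")
    plot(generate_image(request_to_prompt(text)))
```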

Video

Video capturing the interaction with the robot.

Process

Initial Brainstorming
Brainstorming possible input mechanisms
Tech Pipeline
Tech pipeline of the entire process from human input to final drawing
Sketches
Sketching possible versions of the industrial design
CAD assembly
CAD assembly of the drawing robot

Interaction

We aimed to create a straightforward speech-input GUI, inspired by social media voice messages, to encourage natural, human-like interaction. This interface was central to our goal of exploring novel ways to engage with LLMs by evoking a sense of sentience through conversational exchange.

Interactive GUI
Video showing the speech input interaction

Results

3D Rendering
Final 3D rendering of the robot
Project Exhibition
Project Exhibition
User Interaction Photo
User prompting the robot
Robot drawing
LLM drawing robot in action

Learnings

The resulting prototype serves as a proof of concept for pseudo-sentience in AI. This foundation supports ongoing developments in human-robot interaction (HRI), enabling more natural and meaningful exchanges between humans and robots.