PaperVision
Bridging physical drawings with digital interactivity
Inspiration
Paper is becoming increasingly obsolete in today’s digital society. PaperVision was created to launch physical paper into the 21st century by transforming hand-drawn concepts into interactive digital experiences.
What it does
PaperVision enables users to:
- Draw ideas on paper and see them come alive on computer screens
- Create interactive experiences from physical drawings
- Transform hand-drawn mazes into playable digital puzzles
- Convert sketched graphs into functional digital visualizations
- Interact with paper-based creations through digital interfaces
Key Features
Maze Transformation
- Draw any maze on paper
- PaperVision digitizes the structure (a rough sketch of this step follows this list)
- Play the maze digitally with interactive controls
- Track completion times and paths
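As a rough illustration of the maze pipeline, the sketch below shows one way a photographed maze could be thresholded into a wall/open grid and solved with breadth-first search. The cell size, thresholds, and function names are assumptions for illustration, not the exact PaperVision implementation.

```python
import cv2
import numpy as np
from collections import deque

def maze_to_grid(image_path, cell_px=20):
    """Threshold a photographed maze and downsample it into a walkable grid.
    Dark strokes are treated as walls; cell_px is an assumed cell size."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    rows, cols = binary.shape[0] // cell_px, binary.shape[1] // cell_px
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = binary[r * cell_px:(r + 1) * cell_px, c * cell_px:(c + 1) * cell_px]
            grid[r, c] = cell.mean() < 64  # mostly blank paper -> open cell
    return grid  # True = open, False = wall

def solve_maze(grid, start, goal):
    """Breadth-first search over the grid; returns the shortest path or None."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and grid[nr, nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

# Example use (hypothetical file and start/goal cells):
# grid = maze_to_grid("maze_photo.jpg")
# path = solve_maze(grid, start=(0, 0), goal=(grid.shape[0] - 1, grid.shape[1] - 1))
```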
Graphing Tools
- Sketch graphs and charts on paper
- System recognizes axes, labels, and data points (see the sketch after this list)
- Converts sketches into interactive digital graphs
- Manipulate and analyze data in real-time
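The sketch below shows one plausible way to pick out axes and data points from a photographed graph with OpenCV: long horizontal and vertical Hough segments are treated as axes, and small closed contours as pen dots. The thresholds and the assumption that points are drawn as dots are illustrative, not the tuned parameters used in PaperVision.

```python
import cv2
import numpy as np

def extract_graph_elements(image_path):
    """Rough sketch: treat the longest horizontal/vertical strokes as axes
    and small dark blobs as data points. All thresholds are assumed values."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Axis candidates from a probabilistic Hough transform.
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=80,
                            minLineLength=img.shape[1] // 3, maxLineGap=10)
    horizontals, verticals = [], []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(y2 - y1) < 0.1 * abs(x2 - x1):
                horizontals.append((x1, y1, x2, y2))
            elif abs(x2 - x1) < 0.1 * abs(y2 - y1):
                verticals.append((x1, y1, x2, y2))
    x_axis = max(horizontals, key=lambda l: abs(l[2] - l[0]), default=None)
    y_axis = max(verticals, key=lambda l: abs(l[3] - l[1]), default=None)

    # Data points: small closed contours, assumed to be pen dots.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for cnt in contours:
        if 5 < cv2.contourArea(cnt) < 200:  # assumed size band for a dot
            (cx, cy), _ = cv2.minEnclosingCircle(cnt)
            points.append((float(cx), float(cy)))

    return {"x_axis": x_axis, "y_axis": y_axis, "points": points}
```

Pixel coordinates returned here would still need to be mapped onto the sketched axis ranges before they can drive an interactive chart.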
How We Built It
Computer Vision Pipeline:
- Image capture using standard webcam
- Preprocessing for perspective correction (sketched after this list)
- Feature detection for shapes and lines
- Maze recognition and path extraction
- Graph element identification (axes, points, curves)
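As a minimal sketch of the capture and perspective-correction steps, the snippet below grabs a webcam frame, finds the largest four-sided contour as the sheet of paper, and warps it to a flat top-down view. The output size, Canny thresholds, and fallback behavior are assumed values for illustration.

```python
import cv2
import numpy as np

def correct_perspective(frame, out_size=(800, 1000)):
    """Find the page as the largest quadrilateral contour in a webcam frame
    and warp it to a flat, top-down view. out_size is an assumed constant."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    page = None
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:  # first large quadrilateral = the sheet of paper
            page = approx.reshape(4, 2).astype(np.float32)
            break
    if page is None:
        return frame  # fall back to the raw frame if no page outline is found

    # Order corners (top-left, top-right, bottom-right, bottom-left) before warping.
    s = page.sum(axis=1)
    d = np.diff(page, axis=1).ravel()
    ordered = np.array([page[np.argmin(s)], page[np.argmin(d)],
                        page[np.argmax(s)], page[np.argmax(d)]], dtype=np.float32)
    w, h = out_size
    target = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(ordered, target)
    return cv2.warpPerspective(frame, matrix, (w, h))

# Capture a single frame from the default webcam and flatten the page.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    flat = correct_perspective(frame)
```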
Interactive Components:
- Web-based interface for user interaction (see the sketch after this list)
- Physics engine for maze navigation
- Data visualization library for graphs
- Real-time synchronization between paper and digital display
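A minimal sketch of how the Flask backend might hand digitized results to the web front end is shown below; the route name, payload shape, and hard-coded grid are placeholders, not the actual PaperVision API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/maze")
def get_maze():
    # In the real pipeline these values would come from the webcam + CV stages
    # above; here they are hard-coded placeholders.
    grid = [[1, 1, 1, 1],
            [1, 0, 0, 1],
            [1, 0, 1, 1],
            [1, 1, 1, 1]]          # 1 = wall, 0 = open
    solution = [(1, 1), (1, 2)]    # placeholder solved path
    return jsonify({"grid": grid, "solution": solution})

if __name__ == "__main__":
    app.run(debug=True)
```

The front end could poll an endpoint like this (or use a WebSocket) to keep the digital view in step with the paper drawing.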
Challenges We Faced
- Accurate recognition of hand-drawn elements
- Handling variations in paper quality and lighting
- Distinguishing between intentional marks and artifacts
- Creating real-time feedback between physical and digital
- Developing intuitive user interactions
What We’re Proud Of
- Creating a seamless paper-to-digital conversion system
- Developing functional prototypes for multiple use cases
- Maintaining the creator’s original artistic intent in digitization
- Enabling true interactivity with physical drawings
- Completing a working proof-of-concept in hackathon timeframe
What We Learned
- Advanced computer vision techniques
- Image processing and feature extraction
- Real-time system synchronization
- User experience design for hybrid physical/digital systems
- Challenges in interpreting ambiguous human drawings
Built With
- Python
- OpenCV
- JavaScript
- Flask
- TensorFlow
More Info
Devpost
GitHub Repository
*Created during PennApps XVI, with Abhishek Patel, Zarir Hamza, and Kunal Adhia.*