HyperBlend

December 9, 2020

This project was part of my final project for Immersive Computing and Virtual Reality (CS 294-137), taught by Professors Bjorn Hartmann and Allen Yang at UC Berkeley in Fall 2020.

HyperBlend is a virtual reality application that lets users engage in a unique hyperspectral painting experience. With the HyperBlend mobile, web, and Google Cardboard setup, users draw on a canvas using a hyperspectral color palette that renders slightly different colors to each eye in the virtual reality interface. Through binocular fusion (the physiological process of combining signals from both eyes into a single blended image), users perceive these per-eye color differences as a single lustrous, hyperspectral color. By providing a flexible and accessible interface for painting with these colors, HyperBlend prototypes a novel way to interact with hyperspectral color while creating art.

Most humans have three types of retinal cone cells, which produce trichromatic color experiences. However, a few rare individuals have tetrachromatic, or four-dimensional, color vision. Simulating tetrachromatic or higher-dimensional color vision for trichromats is a rich problem space, as little is known about the experiential nature of higher-dimensional color vision. In our project, we leverage the physiological process of binocular fusion to introduce a fourth “dimension” of color by slightly varying the colors shown to each eye.
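One way to make this concrete (a conceptual sketch, not HyperBlend's exact encoding) is to parameterize a color by four values (r, g, b, delta), where delta is a per-eye green offset that binocular fusion turns into a fourth perceptual dimension:

```javascript
// Conceptual sketch: a four-parameter color (r, g, b, delta), where
// delta offsets the green channel shown to the right eye. This encoding
// is an illustrative assumption, not HyperBlend's exact scheme.
function hyperColor(r, g, b, delta) {
  const clamp = (v) => Math.min(255, Math.max(0, v));
  return {
    left:  [r, g, b],                // color rendered to the left eye
    right: [r, clamp(g + delta), b]  // shifted green for the right eye
  };
}

// delta = 0 reduces to an ordinary trichromatic color;
// a nonzero delta adds the fused "fourth dimension".
const ordinary = hyperColor(200, 120, 80, 0);
const lustrous = hyperColor(200, 120, 80, 30);
```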

The virtual reality painting space is an appropriate and innovative medium for exploring the effects of binocular fusion on color vision in a creative domain. By embedding hyperspectral colors in an interactive, immersive painting application, we propose a new context for color vision researchers to probe how individuals interact with higher-dimensional color experiences while creating, and to understand the potential of hyperspectral color in creative contexts.

To use HyperBlend, the user wears a Google Cardboard holding a smartphone that runs the HyperBlend mobile application, and provides input (i.e., paint strokes and color selections) via a trackpad and keyboard connected to the HyperBlend desktop web client. The HyperBlend mobile app communicates with the desktop web client over a WebSocket to synchronize changes across both screens. The app is built with p5.js, a JavaScript library for creating graphical and interactive experiences.
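As a rough sketch of how this synchronization might work (the server URL, message format, and drawStroke helper below are illustrative assumptions, not HyperBlend's actual protocol), each client broadcasts stroke events over the socket and replays the events it receives:

```javascript
// Minimal sketch of stroke synchronization over a WebSocket.
// The URL, message shape, and drawStroke helper are assumptions.
const socket = new WebSocket("wss://example.com/hyperblend");

// Send a paint stroke (position, size, and per-eye colors) so the
// server can relay it to all connected clients.
function sendStroke(x, y, size, leftColor, rightColor) {
  socket.send(JSON.stringify({
    type: "stroke",
    x, y, size,
    left: leftColor,   // [r, g, b] shown to the left eye
    right: rightColor  // [r, g, b] shown to the right eye
  }));
}

// Replay strokes received from other clients so the phone and
// desktop screens stay in sync in real time.
socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "stroke") {
    drawStroke(msg); // hypothetical local rendering helper
  }
};
```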

The HyperBlend app screen is split into two halves corresponding to the left and right eyes. When viewed through a Google Cardboard, the screen appears as a single canvas with a single row of controls along the bottom.
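A minimal p5.js sketch of this split-screen rendering (the canvas size, colors, and layout are illustrative placeholders, not HyperBlend's actual values) draws each mark twice, once per half, with a slightly different color for each eye:

```javascript
// Minimal p5.js split-screen sketch: every mark painted on the left
// half is mirrored onto the right half with a slightly shifted green
// channel. Sizes and colors are illustrative placeholders.
function setup() {
  createCanvas(800, 400);
  background(255);
}

function draw() {
  // Only accept input on the left half; the right half mirrors it.
  if (mouseIsPressed && mouseX < width / 2) {
    noStroke();
    fill(200, 120, 80);                        // left-eye color
    circle(mouseX, mouseY, 12);
    fill(200, 140, 80);                        // right-eye color (shifted green)
    circle(mouseX + width / 2, mouseY, 12);
  }
}
```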

The controls along the bottom of each half of the screen are, from left to right: (1) a paintbrush color palette, (2) a reset button to clear the canvas, (3) a custom color-wheel color picker, (4) a background image file selector, and (5) a background color palette.

The paintbrush color palette (left-most control) and the background color palette (right-most control) each contain a predefined selection of four “regular” colors (the same color shown to both eyes) in the bottom half of the palette and four hyperspectral colors (a slightly different color shown to each eye) in the top half. The predefined hyperspectral colors were selected based on informal interviews with two dichromats, conducted by a student group in Computational Color (CS 294-164) during the same semester.
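In code, such a palette might be stored as left-eye/right-eye pairs, with the regular colors using identical pairs (the values below are placeholders, not the actual predefined colors):

```javascript
// Sketch of the paintbrush palette as per-eye color pairs.
// All values are placeholders, not HyperBlend's predefined colors.
const brushPalette = [
  // Bottom half: "regular" colors, identical pair shown to both eyes.
  { left: [220, 60, 60],  right: [220, 60, 60]  },
  { left: [60, 180, 120], right: [60, 180, 120] },
  { left: [60, 60, 220],  right: [60, 60, 220]  },
  { left: [230, 200, 60], right: [230, 200, 60] },
  // Top half: hyperspectral colors, slightly different pair per eye.
  { left: [220, 60, 60],  right: [220, 85, 60]  },
  { left: [60, 180, 120], right: [60, 160, 120] },
  { left: [60, 60, 220],  right: [60, 90, 220]  },
  { left: [230, 200, 60], right: [230, 175, 60] }
];
```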

Furthermore, users can select a distinct custom color for each eye from the color wheel in the bottom center of each half of the screen. Users may also upload a background image, which HyperBlend transforms from the RGB color space to an RGG’B color space by displaying the image with a slightly different green channel to each eye: the original RGB image is rendered to the left eye, and a green-shifted RG’B image is rendered to the right eye.
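A rough sketch of that green-channel shift in p5.js (the scale factor below is an assumed placeholder; the actual G to G’ mapping is not reproduced here):

```javascript
// Sketch of the RGB -> RG'B transform for the right-eye image.
// The green scale factor is an assumed placeholder.
function toRightEyeImage(img, greenScale = 1.1) {
  const out = img.get(); // copy the original p5.Image for the right eye
  out.loadPixels();
  for (let i = 0; i < out.pixels.length; i += 4) {
    // pixels is a flat RGBA array; index i + 1 is the green channel.
    out.pixels[i + 1] = Math.min(255, out.pixels[i + 1] * greenScale);
  }
  out.updatePixels();
  return out; // render img to the left eye and out to the right eye
}
```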

The setup for using HyperBlend consists of a laptop, a smartphone, and a Google Cardboard. To accommodate different phone screen sizes, users can calibrate the app screen and resize the canvas before drawing via the web client. On the laptop, users click and drag the mouse to draw paint strokes, and change the brush size or switch between the left and right brushes with keyboard shortcuts. These actions are synchronized to the phone in real time, so multiple clients can view the drawing through a Google Cardboard with a smartphone inside.
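In p5.js, these keyboard controls might look like the following (the specific key bindings are illustrative assumptions, not HyperBlend's actual shortcuts):

```javascript
// Illustrative p5.js keyboard controls for brush size and brush
// selection; the key bindings are assumptions.
let brushSize = 10;
let activeBrush = "left"; // which eye's brush color is being edited

function keyPressed() {
  if (key === "+") brushSize = Math.min(brushSize + 2, 50);
  if (key === "-") brushSize = Math.max(brushSize - 2, 2);
  if (key === "l") activeBrush = "left";
  if (key === "r") activeBrush = "right";
}
```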


Keming Gao, Yudi Tan, Yuhan Yang
