Shibuya is an AI system that enables cameras to identify various kinds of semantic space: perspective-based forms, heads, objects, blocks of color. It then isolates each space and replaces it with any other form of visual imagery. The example shown here uses LiDAR to identify spaces and then replaces them with, for example, images drawn from your social media feeds and search histories. The result is a seemingly dull space converted into something like Tokyo's electronic neighborhoods, dense with lights and motion. In a way, Shibuya also creates a form of electronic canvas: an artificial subconscious in which unexpected clashes of imagery and motion simulate the brain's deeper levels, akin to automated surrealism or to artificial dreams.
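Conceptually, the pipeline is segmentation followed by per-mask compositing. The sketch below is a minimal illustration of that idea, not Shibuya's actual implementation: it assumes some upstream step (semantic segmentation or LiDAR clustering) has already produced a per-pixel label map, and the function name and toy usage are hypothetical, introduced here only for clarity.

```python
import numpy as np

def replace_segments(frame, labels, replacements):
    """Composite replacement imagery into each detected segment.

    frame        : H x W x 3 uint8 camera image
    labels       : H x W int array, one segment id per pixel
                   (e.g. from a semantic segmentation or LiDAR clustering step)
    replacements : dict mapping segment id -> H x W x 3 image to paste in
    """
    out = frame.copy()
    for seg_id, imagery in replacements.items():
        mask = labels == seg_id          # boolean mask of this semantic space
        out[mask] = imagery[mask]        # fill the space with the new imagery
    return out

# Hypothetical usage: segment 1 might be a wall or a block of color,
# replaced with a frame pulled from a social-media feed.
if __name__ == "__main__":
    h, w = 480, 640
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    labels = np.zeros((h, w), dtype=np.int32)
    labels[:, w // 2:] = 1               # toy "segmentation": right half is space 1
    feed_image = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
    composite = replace_segments(frame, labels, {1: feed_image})
    print(composite.shape)
```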
Cameras in our devices now try to understand our space. Take a photo and your camera attempts to identify objects such as humans and furniture, and maps the area in three dimensions. It reads the reflection of light and laser pulses, calculates depth differences across the environment, and attempts to detect object shapes. These understandings, however, are often full of errors: light reflections dance in misshapen forms, object boundaries clash with one another, and depth bleeds into misreadings of space. Perhaps it is helpful to think of the camera's mapping as a task performed by an alien life form that is showing us a new form of vision, asking "What is an object?" and "What is space?". What if, instead, we appreciate this computational alien life for what it is? We decide to accept the camera's vision as a new understanding of object and space, evolving a symbiotic new understanding of what surrounds us.
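To make the kind of error described above concrete, here is a minimal, hypothetical sketch (not part of Shibuya): it treats large jumps in a depth map as object boundaries, and even mild sensor noise produces the spurious, misshapen edges that the camera's mapping can hallucinate.

```python
import numpy as np

def depth_edges(depth, threshold=0.1):
    """Mark candidate object boundaries as jumps in a depth map.

    depth     : H x W float array of distances (e.g. from a LiDAR / ToF sensor)
    threshold : minimum depth jump, in the same units, treated as a boundary
    """
    # Depth differences between neighbouring pixels, horizontally and vertically.
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (dx > threshold) | (dy > threshold)

# Toy scene: a flat wall with a box in front of it, plus measurement noise.
# The noise scatters false boundary pixels around the true box edges.
if __name__ == "__main__":
    depth = np.full((120, 160), 3.0)
    depth[40:80, 60:100] = 1.5                          # box closer to the camera
    depth += np.random.normal(0.0, 0.05, depth.shape)   # sensor noise
    edges = depth_edges(depth, threshold=0.3)
    print("boundary pixels:", int(edges.sum()))
```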