Human history has been a continuing attempt to preserve our lives in the memory of ourselves and others. Sixty-six thousand years ago we began to record our experiences on cave walls, using chunks of charcoal, colored rocks, and saliva. We drew representations of ourselves, our families and our environment in an attempt to memorialize who we were, what we did, and what our brief lives were about.
Once we developed language, and then the written word, we began to tell more precise stories of the days of our lives. We continued our quest to capture our moments, creating monochrome imagery, then color, then motion, pushing further and further until it happened. Using a pinch of math and a smattering of light, we converted our celluloid experiences into ones and zeroes, and the Digital Age began.
More people than ever can now capture, store, and share their digital memories, anytime and anywhere. Yet with all our cleverness, we never broke free from the shackles of passive observation. The only difference since the days of the Paleolithic painter is that the paint on our screens is higher resolution.
But what if we could go back in time to the cave and stand with the painter, walk around as he paints, hear the dwindling crackle of a fading fire, sit down near his sleeping family and look up as he paints the story of their lives?
Welcome to the MemoryVerse Project, a non-profit research organization dedicated to building technology to preserve human memories.
The future of digital life is a true photorealistic medley of experience, and MemoryVerse aims to build the infrastructure for it. The content medium of the future will go far beyond today’s 2D video and into a world of immersive 3D memories captured in space and time. MemoryVerse will be an integral part of this future by providing the AI-enhanced photorealism needed to record, create, share, and consume immersive 3D experiences of real life. We call these immersions of the real world Memories, and each Memory is anchored to its real-world location and time, creating a digital immortality where no one is ever lost.
"More real than real"
Memories are captured using existing consumer-grade hardware (e.g., the latest-generation iPhone camera and LiDAR). The captures are then converted into photorealistic 3D virtual representations using advanced generative AI and modern computer vision, calibrated to fill in any missing visual and spatial information.
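As a rough illustration only (the names and data structures below are hypothetical, not the project's actual capture pipeline), the flow can be sketched as posed RGB + LiDAR frames going in and a reconstructed 3D asset coming out:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical sketch: illustrative names, not the MemoryVerse API.

@dataclass
class CaptureFrame:
    rgb_path: str       # RGB image from the phone camera
    depth_path: str     # matching LiDAR depth map
    pose: list          # 4x4 camera-to-world transform, flattened
    timestamp: datetime

@dataclass
class RawCapture:
    frames: list = field(default_factory=list)
    latitude: float = 0.0
    longitude: float = 0.0
    captured_at: Optional[datetime] = None

def reconstruct(capture: RawCapture) -> str:
    """Stand-in for the reconstruction stage: posed RGB-D frames go in,
    a photorealistic 3D representation comes out. Here we only validate
    the input and return a placeholder asset id."""
    if not capture.frames:
        raise ValueError("a capture needs at least one frame")
    # A real pipeline would optimize a scene representation (e.g. a
    # radiance field) against the frames and use a generative model to
    # fill in regions the camera never saw.
    when = capture.captured_at or datetime.utcnow()
    return f"memory-{when:%Y%m%dT%H%M%S}-{len(capture.frames)}frames"
```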
Memories are visually indexed in space and time and are keyword-searchable on a virtual Earth map interface similar to Google Earth and Street View.
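One way to picture such an index, as a toy sketch rather than the project's actual implementation, is to bucket Memories by a coarse latitude/longitude tile plus year, and separately by keyword, then intersect the two sets when a map-style query comes in:

```python
from collections import defaultdict

class MemoryIndex:
    """Toy in-memory spatial/temporal/keyword index (illustrative only)."""

    def __init__(self, tile_deg: float = 0.01):
        self.tile_deg = tile_deg
        self.by_tile = defaultdict(set)     # (tile, year) -> memory ids
        self.by_keyword = defaultdict(set)  # keyword -> memory ids

    def _tile(self, lat: float, lon: float):
        # Snap coordinates to a coarse grid so nearby captures share a bucket.
        return (round(lat / self.tile_deg), round(lon / self.tile_deg))

    def add(self, memory_id: str, lat: float, lon: float, year: int, keywords):
        self.by_tile[(self._tile(lat, lon), year)].add(memory_id)
        for kw in keywords:
            self.by_keyword[kw.lower()].add(memory_id)

    def search(self, lat: float, lon: float, year: int, keyword: str):
        # "What was captured near here, in this year, mentioning this word?"
        nearby = self.by_tile.get((self._tile(lat, lon), year), set())
        return nearby & self.by_keyword.get(keyword.lower(), set())
```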
Memories are processed, fragmented, stored, and later retrieved on a decentralized computational network.
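As a minimal sketch of the fragmentation idea (one common approach is content addressing; nothing here is the project's actual protocol), each fragment can be keyed by its own hash so any node can serve it and the retriever can verify it:

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB fragments (arbitrary choice for this sketch)

def fragment(blob: bytes) -> dict:
    """Split a processed Memory into hash-addressed fragments."""
    return {
        hashlib.sha256(blob[i:i + CHUNK_SIZE]).hexdigest(): blob[i:i + CHUNK_SIZE]
        for i in range(0, len(blob), CHUNK_SIZE)
    }

def reassemble(manifest, store) -> bytes:
    """Fetch fragments by hash from whichever nodes hold them ('store' is
    just a dict here) and verify each one before stitching the Memory back
    together in manifest order."""
    out = bytearray()
    for digest in manifest:
        chunk = store[digest]
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise ValueError("corrupted fragment: " + digest)
        out.extend(chunk)
    return bytes(out)
```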
MV Memory Capture + Representation
MS Engineering & CompSci, UC Berkeley
Technical Designer: Bullet Time & Universal Capture, The Matrix movies
MetaHuman at Epic Games
Electronic Arts R&D Director
Multiple Sci-tech Academy Awards
Global Innovator of 3D Computer Graphics
MV Decentralized Hardware Systems
BS CompSci & Math, Univ of DE
CTO, Penguin Computing. Past: SGI
AI and visual simulation at scale
High Performance HW architect (the RSC @Meta)
Open source HW & SW Architect, Linux Mage
MemoryVerse Dynamics & Ecosystem
AB, AM, PhD, Harvard University
Assistant Professor of Strategy, University College London
Past: Google (Earth, Maps, Street View)
Author: The Uncertainty Mindset
Utility Mechanics
MemoryVerse Project Development Lead
BS, UWF. Past: Silicon Graphics Inc., Ticketbud, Everfest
Global Wildlife Conservation, Surfrider Foundation, EFF, SFJazz
MemoryVerse Academy
PhD, University of WI, Madison
AB, UC Berkeley
Professor of Software Management, Carnegie Mellon University
Past: Founder/CEO, Interactive Development Environments
Fellow: IEEE, ACM, IFIP
Open source adoption & use
MemoryVerse Product
BS Electrical & Computer Engineering, UT Austin
Coinbase
Past: Tagomi, AMD Athlon64
Product Engineering
Sci-Fi and Gaming Enthusiast
MemoryVerse Visual Design
BFA Industrial Design
Past: Visual Effects Art Director @ Industrial Light & Magic (ILM)
Senior Conceptual Designer
ADG Award for Excellence: Dune, Blade Runner 2049, Elysium, Cloud Atlas
MV Memory Capture NeRF + AI
MS Media Arts & Sciences
PhD candidate, CompSci, ASU
XR + NeRF Researcher, Meteor Studio
Digital Storyteller
MemoryVerse UX
BFA Honors, Auburn University
Product Design: Jasper.ai
Past: Bill.com, Divvy
Web3 Product Design
MemoryVerse Experience
MS ID & Info Studies, BS Robotics, University of Tokyo
teamLab Co-founder & COO
Wearable remote control systems, humanoid robots, interactive digital art (award-winning global installations)
MemoryVerse AI Architecture Advisor
BS Applied Math, Brown
PhD, UC Berkeley
Stanford Institute for Human-Centered AI
Co-Author, Artificial Intelligence: A Modern Approach
ex-Google, NASA Ames Research Center
Sun Microsystems, USC
Fellow: ACM, AAAI
For detailed information on the MemoryVerse Project, please request the project’s white paper.
The MemoryVerse Project seeks partnerships with foundations, universities, and research institutions. For partnership inquiries, please contact: MemoryVerse Project.
100% of all intellectual property created by the MemoryVerse Research team is open-source.