Attended the light field display event at Marvell HQ last night with Mitch Williams.  A few months ago I attended the light field event with the SCIEN group at Stanford University, where I got to try the Oculus Crescent Bay and OTOY's Octane renderer technology.  Mitch used to work for OTOY, and he and I just won a VR hackathon at Galvanize in San Francisco.  At the SCIEN Stanford event I met with NVIDIA, Dr. Lanman of Oculus, and many others, and was able to test some of the light field technology firsthand.

Dr. Wetzstein gave us a preview of his SIGGRAPH 2015 paper, which focuses on new uses of light field technology in head-mounted displays.  I found his new camera work on velocity pixel tracking interesting.  He mentioned Lumii, a new startup using the latest light field display technologies, and discussed how this tech could be used in glasses-free televisions to provide new entertainment experiences.  Terabytes per second of data streaming was mentioned.  I asked Gordon about this, since Jaunt VR had just been at the National Association of Broadcasters (NAB) event saying that Oculus's strategy of streaming 360-degree content, like National Basketball Association games, had many problems: consumer-level PCs lack the CPU and GPU power for it, and in their NAB keynote Jaunt VR claimed the high-speed fiber networks needed globally for this kind of streaming are not yet in place and will not be for some time.
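To see why the data rates come up at all, here is a rough back-of-envelope estimate of a raw light field stream. All the numbers below are my own illustrative assumptions, not figures from the talk:

```python
# Back-of-envelope estimate of an uncompressed light field data rate.
# Every parameter here is an illustrative assumption, not from the talk.
width, height = 3840, 2160      # per-view resolution (4K)
views = 8 * 8                   # 8x8 grid of angular views
fps = 60                        # frames per second
bytes_per_pixel = 3             # 8-bit RGB, no alpha

bytes_per_sec = width * height * views * fps * bytes_per_pixel
terabytes_per_sec = bytes_per_sec / 1e12
print(f"Raw light field stream: {terabytes_per_sec:.3f} TB/s")
```

Even this modest 8x8-view configuration lands near a tenth of a terabyte per second before any compression, which is why compressive light field displays and aggressive codecs matter so much for streaming.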

If Wetzstein's technology is incorporated soon, it could open up an entirely new field of HMDs and other displays that will be game changers.  Oculus did hire his colleague Lanman away from NVIDIA, after all.  Gordon mentioned Toshiba and Adobe having 4K and 8K displays at this point, and how eye tracking will be important to the future of this technology in some ways.  I discussed the new FOVE eye-tracking HMD with him and the eye-tracking latencies involved.  It was also mentioned that Dan Cook, my friend with eyemindbvr, will be giving an ACM talk next month about neurogaming and neurofeedback.


Emerging Trends and Applications of Light Field Display Systems

Thursday, May 28, 2015, 7:30 PM

Marvell Semiconductor Inc
5488 Marvell Lane Santa Clara, CA

76 Members Went




Gordon Wetzstein
Assistant Professor, Electrical Engineering Department
Assistant Professor (by courtesy), Computer Science Department
Stanford University


7:30 Networking & Snacks
8:00 Presentation
Please arrive before 8 due to security


What if your mobile phone's display could correct your vision deficiency instead of your glasses? Light field display technology can assess and correct the user's vision. In this talk, we discuss a wide range of unconventional applications facilitated by light field technology, a novel, inexpensive computational display technology. Light field displays are expected to be "the future of 3D displays," although many believe the recent hype about stereoscopic 3D displays is over. One reason consumers haven't widely adopted 3D television may be the lack of a unique or useful enhancement over the 2D viewing experience. Wetzstein discusses why it is believed that a new technology, light field displays, delivering an experience consumers haven't embraced in the past will work in the future. The talk begins with a short historical review of light field displays and recent trends toward compressive light field displays, followed by a discussion of applications in projection systems, vision assessment and correction, and wearable displays, and a brief comparison to holography.

Speaker Bio

Prior to joining Stanford University's Electrical Engineering Department as an Assistant Professor in 2014, Gordon Wetzstein was a Research Scientist in the Camera Culture Group at the MIT Media Lab. His research focuses on computational imaging and display systems as well as computational light transport. At the intersection of computer graphics, machine vision, optics, scientific computing, and perception, this research has a wide range of applications in next-generation consumer electronics, scientific imaging, human-computer interaction, remote sensing, and many other areas. Gordon's cross-disciplinary approach to research has been funded by DARPA, NSF, Samsung, Intel, and other grants from industry sponsors and research councils. In 2006, Gordon graduated with Honors from the Bauhaus-Universität Weimar, Germany, and he received a Ph.D. in Computer Science from the University of British Columbia in 2011. His doctoral dissertation focused on computational light modulation for image acquisition and display and won the Alain Fournier Ph.D. Dissertation Annual Award. He organized the IEEE 2012 and 2013 International Workshops on Computational Cameras and Displays, founded a forum for sharing computational display design instructions with the DIY community, and presented a number of courses on Computational Displays and Computational Photography at ACM SIGGRAPH. Gordon won best paper awards at the International Conference on Computational Photography (ICCP) in 2011 and 2014 as well as a Laval Virtual Award in 2005.