Thursday, November 23, 2017

VR Explorations

Throughout 2016 and the beginning of 2017, when I was still living in Oslo and working for Postmenn-Stripe (now PXLR), we did several internal R&D projects to explore the possibilities and limitations of Virtual Reality - and how to best produce content for the platform.

Coming from a visual effects background focused on commercials, movies and TV, the visual quality of an image or video clip has always been paramount to us. That standard was something we wanted to carry over when producing content for this new medium.

The parts of the projects described here were handled by me together with my colleague Mariusz Kolodziejczak.


Photogrammetry

Photogrammetry is the process of creating digital models of real-world objects and locations from photographs. Done right, it can produce imagery virtually indistinguishable from the original. Although labor intensive and not suited to all kinds of objects (e.g. glass and other transparent materials give poor results), it has great potential when you want to share an experience in VR.

For a test case we chose the Stave Church (Stavkirke) at the Norwegian Museum of Cultural History at Bygdøy in Oslo, Norway. This old, wooden building has a lot of character and many beautiful details to experience.

Inside the Stave Church

It took around 800 pictures, clocking in at 21 megapixels each, to reconstruct the inside of the church. We shot on two Canon 5D Mk II cameras fitted with 24mm and 50mm lenses. The exposure times were quite long (around 60 seconds) since the only light available was the natural light passing through the building's two open doors. We also took great care not to disturb the other visitors - since the shoot was done during regular visiting hours. With two photographers we spent around 3 hours shooting the interior this way.

The resulting images were processed in Agisoft Photoscan, where we had to take care to mask out tourists (and each other). We also converted all the RAW images to floating point EXR files to preserve the dynamic range of the natural light to the best possible degree.

Post processing and cleanup were done in a mix of Autodesk Maya and 3DCoat. Since there was quite a bit of overlapping geometry further up towards the roof, we had to spend a significant amount of time on this step.

The final model had to be split up before bringing it over to Unreal Engine - since we wanted to preserve as much detail as possible (although we did reduce the polygon density of the original model quite a bit). Several 8K texture maps were generated as well.

As a final touch we experimented with adding the sound of monks, church bells and creaking wood - to enhance the experience. Using the HTC Vive VR headset you can walk around and look at whatever piques your interest.

The video below shows the experience from outside and inside.

We also did stereoscopic stripmaps suitable for the Samsung GearVR or other platforms, like Google Cardboard, with the proper viewer. These give you full depth perception and excellent image quality, but naturally do not allow for movement or sound.

Stereoscopic stripmap
You can download the full size image here (you'll need a proper viewer for this to make sense).
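A stereoscopic stripmap packs both eyes' views into a single image; the exact layout depends on the viewer, but a common convention is to stack one eye's strip directly above the other's. As a minimal sketch under that assumption (not the actual Nuke pipeline used here), the composition step amounts to concatenating pixel rows:

```python
# Minimal sketch: build an over/under stereoscopic stripmap by stacking the
# left-eye image above the right-eye image. Images are represented here as
# hypothetical nested lists of pixel rows, just to illustrate the layout.

def make_stripmap(left_eye, right_eye):
    """Return a top/bottom stripmap: left-eye rows followed by right-eye rows."""
    if len(left_eye[0]) != len(right_eye[0]):
        raise ValueError("both eyes must have the same width")
    return left_eye + right_eye

# 4x2 solid-colour stand-ins for the two eye renders
left = [[(255, 0, 0)] * 4 for _ in range(2)]
right = [[(0, 0, 255)] * 4 for _ in range(2)]

strip = make_stripmap(left, right)
print(len(strip), "rows x", len(strip[0]), "pixels")  # 4 rows x 4 pixels
```

A viewer that knows the convention then samples the top half for one eye and the bottom half for the other.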


Since we were already there shooting the interior of the Stave Church we also did a quick test outside. And by quick I mean no more than 15 minutes for one photographer to walk around and snap images from ground level. The captures were a mix of 24mm and 100mm lens shots.

As with the interior model we processed these with Agisoft Photoscan, but since there was less occlusion and overlap we did not spend much time doing cleanup before bringing the model into Unreal Engine.

We also imported a model of a house from Røros - which I photographed some years ago when I went there for a visit. The main challenge with this one was the grass on the roof - which we had to remove since the photogrammetry software could not create a good enough model there. This was mainly because the wind made the grass move between frames.

Old Artifacts

Preservation of cultural heritage and old artifacts is another area where photogrammetry is well suited. Capturing organic forms with intricate details requires no extra work this way. You can then choose to bring the captured object into VR, 3D print it, or save it for later reference in case the object in question is lost, stolen or destroyed.

To evaluate small scale objects with a lot of detail we were lucky enough to get access to an image set of a "carved skull". The original images were provided by the photographer Steffen Aaland at Glitch Studios and were shot using a Phase One IQ250, a 50 megapixel medium format camera. Focus stacking was utilised to get the required depth of field (i.e. to get the whole skull in focus).

We brought the skull into VR as well, and it was pretty incredible to be able to pick it up from the table and inspect it up close.

Bringing reality into VR

As part of our SkatteFUNN project we also looked into a plethora of other methods to "acquire reality" with the highest possible fidelity.

Areas of extra interest were HDRI (High Dynamic Range Imaging), to ensure the experience in VR would be as close to "being there" as possible, and making sure the source material was captured at the highest possible resolution. Even though the screens of today's VR headsets are fairly low resolution, this will improve - and when it does we can re-export the images for the new formats.

To optimise the process we photographed the scenes in a number of ways - from a more traditional pano-stitch (although with an offset per eye) - to a full stitch with Cara VR for Nuke. The latter gave the best overall result, but was far more demanding in artist time and processing. As an example, stitching 30-something 25 megapixel stills would consume over 125GB of system RAM. For one frame! This scene would crash on Windows, every time, but processed fine on Linux.

Some examples of the images we produced are the following three. The one from Vøyen Gård was processed in Cara VR, while the rest were stitched more traditionally per eye and then converted to stripmaps in Nuke.

The main challenge with the latter approach was that we had to shoot one set of images per eye (using 5-image bracketing for the HDRI), which led to sessions of 40+ minutes per location. In that timespan the sun and sky (when there were clouds) managed to move quite a bit - forcing us to replace the sky in several of the locations.

Vøyen Gård, Oslo
Vøyen Gård download (you'll need a proper viewer for this to make sense).

Blå, Oslo
Blå download (you'll need a proper viewer for this to make sense).

Vulcan, Oslo
Vulcan download (you'll need a proper viewer for this to make sense).

Thursday, May 4, 2017

Ubuntu 17.04 SAMBA woes

Having upgraded to, and done fresh installs of, Ubuntu 17.04 I noticed that the SAMBA/cifs client didn't behave as expected. When mounting a volume shared from OS X (Sierra) it would work for a while before the client started DDoS'ing the server to such an extent that no other users could log in.

We then tried to connect to a Windows share (Windows Server edition), but with a basic fstab entry it would throw an error:
mount error(5): Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Apparently the cifs client defaults to version 1.0 of the SMB protocol. Adding "vers=2.1" or "vers=3.0" to the mount options makes the share mount correctly.
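As a sketch of what that looks like in practice - the server, share, mount point, uid/gid and credentials file below are placeholders, not the actual setup:

```shell
# Hypothetical /etc/fstab entry. The important part is vers=, which forces
# SMB 2.1 (or 3.0) instead of the legacy 1.0 default that caused the
# "mount error(5): Input/output error".
//fileserver/projects  /mnt/projects  cifs  credentials=/etc/smb-credentials,vers=2.1,uid=1000,gid=1000  0  0
```

The same option works for a one-off mount, e.g. `sudo mount -t cifs //fileserver/projects /mnt/projects -o vers=3.0,username=someuser`.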