Thursday, November 23, 2017

VR Explorations

Throughout 2016 and the beginning of 2017, when I was still living in Oslo and working for Postmenn-Stripe (now PXLR), we did several internal R&D projects to explore the possibilities and limitations of Virtual Reality - and how to best produce content for the platform.

Coming from a visual effects background focused on commercials, movies and TV, the visual quality of an image or video clip has always been paramount. We wanted to bring that focus with us when producing content for this new medium.

The parts of the projects described here were handled by me together with my colleague Mariusz Kolodziejczak.

Photogrammetry

Photogrammetry is the process of creating digital models of real-world objects and locations based on photographs. Done right, it can create imagery virtually indistinguishable from the original. Although it is labor intensive and not suited for all kinds of objects (e.g. glass and transparent objects will give poor results), it has great potential when you want to share an experience in VR.

For a test case we chose the Stave Church (Stavkirke) at the Norwegian Museum of Cultural History at Bygdøy in Oslo, Norway. This old, wooden building has a lot of character and many beautiful details to experience.

Inside the Stave Church



It took around 800 pictures, clocking in at 21 megapixels each, to reconstruct the inside of the church. We shot on two Canon 5D Mk II cameras fitted with a 24mm and a 50mm lens. The exposure times were quite long (around 60 seconds) since the only light available to us was the natural light coming through the building's two open doors. We also took great care not to disturb the other visitors, since the shoot was done during regular visiting hours. With two photographers we spent around 3 hours shooting the interior this way.

The resulting images were processed in Agisoft Photoscan, where we had to take care to mask out tourists (and each other). We also processed all the RAW images to floating point EXR files to preserve the dynamic range of the natural light as well as possible.
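That conversion step essentially boils down to a linear demosaic followed by a float conversion. A minimal Python sketch of the idea, assuming rawpy and an OpenCV build with OpenEXR support - the paths are placeholders and this is just an illustration, not the exact tooling we used:

import glob

import numpy as np
import rawpy  # LibRaw bindings, reads the Canon CR2 files
import cv2    # needs an OpenCV build with OpenEXR support
              # (some builds also require OPENCV_IO_ENABLE_OPENEXR=1)

for path in glob.glob("stavkirke/*.CR2"):
    with rawpy.imread(path) as raw:
        # Demosaic to 16-bit linear RGB without auto exposure, to keep
        # the dynamic range of the natural light intact.
        rgb = raw.postprocess(output_bps=16, gamma=(1, 1), no_auto_bright=True)
    # Convert to floating point and save as EXR (OpenCV expects BGR order).
    bgr = cv2.cvtColor(rgb.astype(np.float32) / 65535.0, cv2.COLOR_RGB2BGR)
    cv2.imwrite(path.rsplit(".", 1)[0] + ".exr", bgr)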

Post processing and cleanup were done in a mix of Autodesk Maya and 3DCoat. Since there was quite a bit of overlapping geometry further up towards the roof, this took a significant amount of time.

The final model had to be split up before bringing it over to Unreal Engine - since we wanted to preserve as much detail as possible (although we did reduce the polygon density of our original model quite a bit). Several 8K texture maps were generated as well.

As a final touch we experimented with adding the sound of monks, church bells and creaking wood to enhance the experience. Using the HTC Vive VR headset you can walk around and look at whatever piques your interest.

The video below shows the experience from outside and inside.


We also did stereoscopic stripmaps suitable for the Samsung Gear VR or other platforms, like Google Cardboard, with the proper viewer. These give you full depth perception and excellent image quality, but naturally do not allow for movement or sound.

Stereoscopic stripmap
You can download the full size image here (you'll need a proper viewer for this to make sense).

Exteriors

Since we were already there shooting the interior of the Stave Church, we also did a quick test outside. And by quick I mean no more than 15 minutes for one photographer to walk around and snap images from ground level. The captures were a mix of 24mm and 100mm lens shots.

As with the interior model we processed these with Agisoft Photoscan, but since there was less occlusion and overlap we did not spend much time doing cleanup before bringing the model into Unreal Engine.

We also imported a model of a house from Røros, which I photographed some years ago when I went there for a visit. The main challenge with this one was the grass on the roof, which we had to remove since the photogrammetry software could not create a good enough model there. This was mainly because the wind made the grass move between frames.



Old Artifacts

Preservation of cultural heritage and old artifacts is another area where photogrammetry is well suited. Capturing organic forms with intricate details requires no extra work this way. You can then choose to bring the captured object into VR, 3D print it, or save it for later reference in case the object in question is lost, stolen or destroyed.

To evaluate small-scale objects with a lot of detail we were lucky enough to get access to an image set of a "carved skull". The original images were provided by the photographer Steffen Aaland at Glitch Studios and were shot using a Phase One IQ250, a 50 megapixel medium format camera. Focus stacking was used to get the required depth of field (i.e. to get the whole skull in focus).


We brought the skull into VR as well, and it was pretty incredible to be able to pick it up from the table and inspect it up close.

Bringing reality into VR

As part of our SkatteFUNN project we also looked into a plethora of other methods to "acquire reality" with the highest possible fidelity.

Areas of extra interest were HDRI (High Dynamic Range Imaging), to ensure the experience in VR would be as close to "being there" as possible, as well as making sure the source material had as high a resolution as possible. Even though the screens of today's VR headsets are pretty low resolution, this will improve - and when it happens we can re-export the images for the new formats.

To optimise the process we photographed the scenes in a number of ways - from a more traditional pano-stitch (although with an offset per eye) - to a full stitch with Cara VR for Nuke. The latter gave the overall best result, but was far more demanding in artist time and processing. As an example, stitching 30-something 25 megapixel stills would consume over 125GB of system RAM. For one frame! That scene would crash on Windows, every time, but would process fine on Linux.

Some examples are the following three images. The one from Vøyen Gård was processed in Cara VR, while the rest were stitched more traditionally per eye, then converted to stripmaps in Nuke.

The main challenge with the latter was that we had to shoot one set of images per eye (using 5-image bracketing for the HDRI), which led to session lengths of 40+ minutes per location. In that timespan the sun and the sky (in case of clouds) managed to move quite a bit, forcing us to replace the sky in several of the locations.

Vøyen Gård, Oslo
Vøyen Gård download (you'll need a proper viewer for this to make sense).

Blå, Oslo
Blå download (you'll need a proper viewer for this to make sense).

Vulcan, Oslo
Vulcan download (you'll need a proper viewer for this to make sense).

Thursday, May 4, 2017

Ubuntu 17.04 SAMBA woes

Having upgraded to, and done fresh installs of, Ubuntu 17.04 I noticed that the SAMBA/CIFS client didn't behave as expected. When mounting a volume shared from OS X (Sierra) it would work for a while before the client started DDoS'ing the server to such an extent that no other users could log in.

We then tried to connect to a Windows share (Windows Server edition), but with a basic fstab entry it would throw an error:
mount error(5): Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Apparently mount.cifs defaults to version 1.0 of the SMB protocol. By adding "vers=2.1" or "vers=3.0" to the mount options it will mount correctly.
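As an illustration, a minimal fstab entry with the version option could look something like this (server, share, mount point and credentials file are placeholders, not our actual setup):

//winserver/projects  /mnt/projects  cifs  vers=3.0,credentials=/etc/samba/creds,iocharset=utf8,uid=1000,gid=1000  0  0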

Sunday, October 4, 2015

Weekend Project: Stamp

One of my favourite go-to materials is Sugru and I've been wanting to try making a stamp from it for a while now. But first I needed a handle, so I whipped up a quick one in Fusion 360.


Printing didn't take long, even at 0.1mm layer height. Infill was 30%. On the next print I will increase the top layers to 7, instead of 5, since the printer wasn't able to create a completely smooth top surface. It would probably work just fine if the infill was higher - like 50%.



Since I'm printing on a glass surface the bottom was pretty smooth. I made good use of a knife to cut extra grooves to be sure that the Sugru had plenty to adhere to.



I printed the "Thumbs up" on a secondary printer. In retrospect the relief could probably be half as deep - in this one it was 2mm. A shallower relief would make it easier to keep fine details when separating the mold from the stamp.

Note to self: don't mirror the print. Since we are making a positive, the stamp will be automatically inverted for you.


Before adding the Sugru I swabbed the pattern lightly with canola oil so it wouldn't stick. A Q-tip was perfect for this. I think it came out pretty well considering the finished quality of the printed surface.


Make sure the Sugru gets plenty of time to cure. I left it for a couple of days since we didn't have the ink pad yet.


Not perfect, but enough to show that the principle works. The ink doesn't adhere extremely well to the stamp, but that might hopefully change over time as the surface gets roughened up. Maybe using some fine sandpaper would help as well.

The stamp handle and "thumbs-up" model can be downloaded here:
http://www.thingiverse.com/thing:1052496

Sunday, September 20, 2015

Fun with Unreal Engine and VR

I've been meaning to try out Unreal Engine ever since it became "free", and I recently found some time to do just that. Since I haven't coded any C++ for at least 15 years I was curious to see what was needed to get a basic project up and running - which, it turns out, is no code at all. Blueprint (a node-based graph network) and tweaking options were more than enough.

Since I haven't upgraded my VR rig yet, and still use my old Oculus DK1, I was pleasantly surprised that it worked out of the box with the latest Oculus 0.7 driver and Unreal Engine 4.9 under Windows 7.

This spring and summer Otoy ran a competition called Render The Metaverse and I wanted to view some of the resulting images in VR - since that was the premise of the whole competition. If you own a Gear VR you are in luck, but unfortunately, as far as I have been able to find out, there's no viewer for these stereoscopic cube map images available for the Rift. This was an excellent opportunity to check out how hard it is to develop something in Unreal Engine.

Single eye cube map
As it turns out, it's not hard at all. The most difficult part was getting the cube map for the left and right eye to go to the correct eye. The default behavior is to have a texture mapped to an object and then have the engine create a stereo pair out of that, which gets the proper depth when viewed in a VR headset. This, of course, gives you a cube with flat surfaces - although with depth for the cube itself. Since the provided cube maps have depth baked in, we need to take special care to only display the relevant texture to each eye.

The solution here was to create a shader that detects which eye is being rendered and provides the correct texture. After some searching I found the magic node to be the "ScreenPosition" node. We only need the horizontal component, so make sure you add a "BreakOutFloatToComponents" node before feeding the output to the "If" node.

Shader graph network
I also love the "VR Preview" mode in Unreal Engine, which lets you test out stuff in the VR goggles easily from within the GUI.

Output to VR goggles
Although this solution works really well with the default setup in UE, it remains to be seen if I can get an even better and more correct result. As it is now, we use the same cube for both textures, and this cube is scaled arbitrarily without any thought to real-world scale. This might cause issues since the cube itself will be rendered with depth, and the texture on it will, in some ways, inherit that depth. That, in turn, might work against the depth baked into the stereo cube maps, and the result might be a feeling of wrong scale. We might therefore have to work with the IPD (eye separation) a bit. At this point I'm only guessing, but we might have to set the IPD to 0 - I'm not sure yet.

Further testing needed.

Friday, July 31, 2015

FABtotum Filament Spool

Back from vacation and found some time to play around with 3D printing again. Made a filament spool that actually fit inside the FABtotum. The STL file can be found on Thingiverse.

Stringing is still an issue
The spool consists of two halves so you need to print the part twice.

Test fit.
This version of the spool fits nicely within the filament bay, but I'll probably modify the next version to make it a bit thinner. This way there's less chance of friction.

Parts "glued" together with the 3Doodler
 You need to glue the two halves together. I used a 3Doodler for the purpose, but a regular glue gun should do nicely.

1lb (about half a kg) of black filament wound on.
I'm not sure how much filament the spool can hold but after adding about half a kg I would guess it could hold at least three times that amount.

Winding is also something that will have to be addressed in the next version. Maybe some attachment to fit it to a drill.

Monday, June 15, 2015

FABtotum in da house

Finally, after waiting almost nine months, my new 3D printer has arrived!



The FABtotum is a hybrid machine which does both 4-axis milling and 3D scanning in addition to extruding hot plastic.



Back in August last year, when I ordered this machine, I was originally researching new architectures to implement in a new printer build. Then I discovered the FABtotum blog, and one page in particular where they discussed different cartesian configurations and their pros and cons. The design decisions seemed really well thought through, and seeing that they had recently finished a successful Indiegogo campaign I read up on the rest of the design, which also looked solid. It was time to try out a turn-key solution!

Currently I'm going through the calibration process, so time will tell if it delivers on its promise.

Saturday, June 6, 2015

Explosive plates for VFX

Disclaimer: this is a big "do not try this at home"-post as it involves flammable gasses and explosions, so don't! Wait, who am I kidding - you're totally going to try this at home aren't you... just don't say I didn't warn you when you stand there missing an eyebrow!

Light it up! Watch your fingers!
So, we recently did a project at work where we needed some fiery effects to spice up a car shot. To be precise we wanted the car, a CG rendered Lamborghini Aventador, to have exhaust flames - and really cool ones at that.

The usual route would be to create these digitally using a variety of tools, for example Trapcode Particular. As we had just a week and a half to produce the whole spot we had to be creative to make sure we finished on time. That meant going old school and shooting a practical effect in camera - allowing us to bag that particular effect in 20 minutes and focus on the rest of the 3D and compositing instead.



What you need
1 x small plastic bottle. We used a 33cl bottle from Ramlösa. You can go bigger, but this was plenty to get the effect we were looking for.
1 x refill canister of butane gas (i.e. lighter gas). You can probably use a regular lighter as well, but you'll need more time to fill the bottle.
1 x lighter, preferably the "long nose" or "expanded reach" kind. If you have long matches that'll work as well. The important part is to keep your fingers safe.
1 x roll of gaffer tape to anchor the bottle to something sturdy so it doesn't shoot off when ignited.
1 x needle, nail or piece of steel wire. I used a paper clip.
1 x pliers to hold the poking device (see previous)
1 x camera on tripod
1 x dark room
Enough safety glasses for everyone present

Construction
There's not much to it really. Holding the nail in the pliers, heat it using the lighter. Then, quickly, poke a hole in the bottom part of the bottle. You might have to reheat the nail a couple of times to get the hole big enough.

Just about perfect

Execution
First of all make sure you are in a well ventilated room where there's nothing flammable.

You'll need something sturdy and heavy enough to fasten the bottle to. This way it won't fly away when you ignite the gas. Gaffer tape is nice to use for the fastening. If you get the matte black kind it can also be used to make sure the bottle doesn't reflect any of the light - making the compositing easier later on.

Set up your camera and frame your shot. Now we add the gas.

Add a tiny amount of liquid gas (less than a second)
Insert the butane gas nozzle into the hole you made earlier and give it a quick push. You don't need much, since the gas will be liquid when you push the canister down onto the bottle. Too much butane will not create a bigger explosion; the air/gas mix will just be wrong, resulting in no explosion at all. Put the butane canister somewhere safe, away from everything.

Now turn on the camera, turn off the light in the room and hold the flame from the lighter to the small hole where you added the gas. If you got the mix correct you will hear a satisfying pop and see a bluish flame. If not, you might try to add some oxygen. A bicycle pump should work. Use the small hole for this as well. Keep going until all the gas has been burned away.


Before trying again you have to add fresh oxygen by pumping for a little while longer. Add more gas, light, retry.

Ooh, pretty!
The only steps left are to choose the best part of the shot and add it to the comp. I'm not going to go into that part here since it's fairly trivial.

And the final TVC