After Effects is a 2.5D compositing and animation package from Adobe. Although its extensive toolset covers most compositing and motion-graphics needs, the lack of a true 3D workspace makes it hard to stitch footage entirely within After Effects. But by using a technique usually reserved for visual effects work, it is possible to translate templates created in PtGui to After Effects using UV-mapping.
This workflow is based on the established workflows for mapping textures onto 3D geometry after it has been rendered. Briefly: to assign a flat texture to a 3D object, the texture must be “wrapped” onto the model, and the coordinates of that wrapping are stored as a UV map. Each part of a UV map refers to a location on the 3D model and is stored as a flat image. We’ll only be using it to move a flat image around – no 3D required – but it’s helpful to know how it works.
By drawing on this idea we can use a texture with a unique colour at each pixel as a guide for transferring the coordinates between PtGui and After Effects. To handle the vast number of pixels present in a video frame we’ll be using a 32-bit image.
A linear gradient running along the x-axis of the image becomes our reference for the horizontal position of the pixels in the footage; a gradient along the y-axis does the same for the vertical position. To store both independently in the same file, we’ll use the red and green channels of an OpenEXR file, leaving the other channels empty. This arrangement is the industry standard for representing UV-maps. Interestingly, the same encoding logic is used for normal and vector maps created by 3D software, though both of those also include a z-axis, usually mapped to the blue channel.
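As a minimal sketch of what such a map contains, the red/green gradient image described above can be generated with NumPy (the 1920×1080 resolution here is just an example, and the commented-out EXR write assumes an OpenCV build with OpenEXR support):

```python
import numpy as np

def make_uv_map(width, height):
    """Build a 32-bit float UV map: red = horizontal ramp, green = vertical.

    Every pixel gets a unique (u, v) colour, so the map can later be used
    to look up where that pixel ends up after warping.
    """
    uv = np.zeros((height, width, 3), dtype=np.float32)
    uv[..., 0] = np.linspace(0.0, 1.0, width, dtype=np.float32)             # red: x gradient
    uv[..., 1] = np.linspace(0.0, 1.0, height, dtype=np.float32)[:, None]   # green: y gradient
    # blue channel stays zero, matching the UV-map convention
    return uv

uv = make_uv_map(1920, 1080)
# Writing it out as OpenEXR preserves the full float precision, e.g.:
#   cv2.imwrite("HD_16x9_1080.exr", uv[..., ::-1])  # OpenCV expects BGR order
```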
With this arrangement, software can reference a texture’s position in 3D space even after the render, which makes it possible to transfer additional texture detail onto pre-rendered footage.
We’ll use this to store the positional distortions PtGui applies to each of our footage items, and in so doing track where each pixel ends up in the final panorama. By exporting this map back to After Effects and using RE:Map by RE:Vision Effects (the makers of the ever-popular ReelSmart Motion Blur), we’ll be able to warp our videos very accurately to the correct locations.
The two important requirements for the UV-map are that it matches the aspect ratio of the footage and that it is 32-bit. I’ve included a number of pre-prepared UV layouts for some of the popular shooting formats in the download section at the end of the chapter. They are named Specification_ratio_resolution.exr, e.g. HD_16x9_1080.exr
To allow us to export the template to After Effects for further processing, create a stitch as normal using PtGui. A guide can be found here. Do not optimize the exposure or change the colour in the picture at all; that part of the process will be handled in After Effects.
Once the stitch has been made, but before exporting, switch to the Images tab and replace all the footage files with a UV-map of the same aspect ratio. Ignore the warnings and do not align the images again.
Change to the Exposure/HDR tab and make sure to tick the HDR box as below. The footage used in this example was shot with a Nikon camera and is low dynamic range, but the UV-map is in fact very high dynamic range, and to preserve all its information we’ll need to export in the same HDR format we came in with.
Switch to the Export tab; you should see an LDR (low dynamic range) and an HDR (high dynamic range) option for the export. Select Individual layers as your export type – we’ll be using each image separately, allowing us to control the blending ourselves. Make sure “with transparency” is enabled in the export!
In After Effects, open a new project and change the project settings to 32-bit. This ensures our EXR files can be used without artifacts and increases the overall output quality. Import the layer exports from PtGui and, under the Color Management tab of Interpret Footage, tick the “Preserve RGB” option. After Effects automatically applies a gamma correction to footage on import, especially to linear, high dynamic range images. Although useful at times, this changes the pixel colour values – and since our pixel colours actually refer to positions in 2D space, it would change the position of our footage.
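To see why a gamma correction would break the map, here is a quick illustration, assuming the common 1/2.2 display gamma (the numbers are illustrative only):

```python
# A UV value encodes a position, so any gamma curve applied on import
# moves the pixel that the value points at.
width = 1920
u = 0.25                     # stored red value: "fetch from 25% across"
u_gamma = u ** (1.0 / 2.2)   # the same value after a 1/2.2 display gamma

print(round(u * width))        # intended source column
print(round(u_gamma * width))  # column actually sampled after gamma correction
```

A quarter-width offset becomes an offset of more than half the frame – hence “Preserve RGB”.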
To map our footage to the correct spots in the panorama we’ll need a plugin that can read the UV data. Plugins that can do this include RE:Map, Youveelizer and ft-UV.
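Under the hood, all of these plugins perform a per-pixel lookup. A minimal sketch of the idea, using nearest-neighbour sampling (real plugins filter and interpolate):

```python
import numpy as np

def uv_remap(uv_map, footage):
    """Warp footage through a UV map: for each output pixel, read the
    (u, v) stored in the red and green channels and fetch the source
    pixel at that position (nearest-neighbour for brevity)."""
    src_h, src_w = footage.shape[:2]
    # Convert 0..1 UV values into integer source pixel coordinates.
    x = np.clip(np.rint(uv_map[..., 0] * (src_w - 1)).astype(int), 0, src_w - 1)
    y = np.clip(np.rint(uv_map[..., 1] * (src_h - 1)).astype(int), 0, src_h - 1)
    return footage[y, x]
```

An undistorted gradient map reproduces the footage unchanged; the distorted maps exported from PtGui push each pixel to its place in the panorama.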
Add all the UV layers to a new comp with the same dimensions as the final panorama and apply RE:Map to each one.
Add all the video files to the comp and turn off their visibility. We won’t be using them directly.
Match the footage to the UV maps under the “layer source” of each UV layer’s RE:Map effect. The footage will appear in place of the red and green in the viewport. If the panorama looks wrong, check that the right layers have been chosen.
The warped layers can now be blended into the panorama using roto masks.
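Numerically, blending the masked layers comes down to a mask-weighted average. A rough sketch of that idea (the function and names are illustrative, not an After Effects API):

```python
import numpy as np

def blend_layers(layers, masks):
    """Combine warped layers using feathered masks (values 0..1).
    Where masks overlap, the result is a weighted average of the
    overlapping layers -- a simple cross-blend across the seam."""
    out = np.zeros_like(layers[0], dtype=np.float32)
    weight = np.zeros(masks[0].shape, dtype=np.float32)
    for layer, mask in zip(layers, masks):
        out += layer.astype(np.float32) * mask[..., None]
        weight += mask
    # Avoid dividing by zero where no mask covers a pixel.
    return out / np.maximum(weight[..., None], 1e-6)
```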
Layers can be edited, trimmed and shifted along the timeline to fine-tune synchronization.
To preview the delivery, create a comp with a camera and apply “CC Environment” to a new solid. CC Environment creates a sphere infinitely far away around the scene; it’s usually used for sky replacements but is quite useful for quality-checking our panoramic video. Add the panorama and select it as the source for the environment. The scene viewed from the camera will now closely resemble the final delivery. You can change the camera’s focal length to experiment with different views of the scene.
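Geometrically, what such an environment sphere does can be sketched as mapping each camera ray direction to a pixel in the equirectangular panorama (the axis conventions below are an assumption for illustration, not CC Environment’s documented behaviour):

```python
import math

def panorama_lookup(dx, dy, dz, pano_w, pano_h):
    """Map a view-ray direction to equirectangular panorama coordinates.
    Because the sphere is infinitely far away, only the ray's direction
    matters, not the camera's position.
    Assumed conventions: forward = -z, up = +y."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    lon = math.atan2(dx, -dz)        # longitude, -pi..pi around the sphere
    lat = math.asin(dy / length)     # latitude, -pi/2..pi/2
    u = lon / (2 * math.pi) + 0.5    # 0..1 left to right
    v = 0.5 - lat / math.pi          # 0..1 top to bottom
    return u * (pano_w - 1), v * (pano_h - 1)
```

Looking straight ahead lands in the centre of the panorama; turning the camera pans the lookup across it, which is exactly the preview behaviour described above.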
Effects and grading work very differently in VR, so if you want to do further post-production, read the next chapter.