A light probe is an omnidirectional (360° panoramic) high-dynamic-range image. Because it 'sees' in all directions and records actual light levels, a light probe serves as a measurement of the incident illumination. As such, it can be used to provide interesting and realistic lighting environments and backgrounds for rendered graphics.
This tutorial describes how to create a light probe using HDRShop. In general, this involves the following steps:

1. Photograph a mirrored ball from two positions 90° apart.
2. Crop each photograph to the edge of the ball.
3. Mark two matching points in the images to recover the rotation between the views.
4. Warp both images into the light probe (angular map) format, rotating one to match the other.
5. Blend the two panoramas with a mask to remove the camera reflections and the poorly sampled regions.
One method of obtaining a light probe is to take a high-dynamic-range image of a mirrored sphere, such as the Precision Grade Chrome Steel Balls from McMaster-Carr. Assuming your camera is orthographic, then in theory a single photograph of the mirrored ball can 'see' in all directions. That is, anything visible from the viewpoint of the mirrored ball will be visible to the camera as a reflection in the ball.
Unfortunately, things reflected near the edge of the ball become extremely stretched and distorted, giving a poor image when it is unwarped. Additionally, the center of the ball reflects the camera used to take the photograph, which obscures part of the background.
To alleviate these problems, we can take two pictures of the mirrored sphere from different angles and blend them together to remove the camera and the regions of poor sampling.
Since the two 'bad' spots in the mirrored ball are directly towards the camera, and directly away from the camera, the two pictures should be taken from positions 90° apart from each other (see figure 1). This way the regions of bad sampling and camera interference will be in different locations in the two images. (Note that taking the images from opposite sides of the ball will not work, as the region of bad sampling in one image will be the location of the camera in the other image, and vice versa.)
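The geometry behind these two 'bad' spots can be sketched with a little vector math. The following Python (with NumPy) is a minimal illustration, assuming an orthographic camera looking down the -Z axis at a unit sphere and pixel coordinates normalized so the ball occupies the unit disc; it is not HDRShop's code, just the standard mirror-reflection formula r = d - 2(d·n)n:

```python
import numpy as np

def ball_pixel_to_direction(x, y):
    """Map normalized ball coordinates (x, y), with x*x + y*y <= 1,
    to the world direction reflected at that point on the sphere.
    Assumes an orthographic camera looking down -Z at a unit sphere."""
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    n = np.array([x, y, z])          # surface normal on the sphere
    d = np.array([0.0, 0.0, -1.0])   # incoming (view) ray direction
    return d - 2.0 * np.dot(d, n) * n  # mirror reflection about the normal

# The centre of the ball reflects straight back toward the camera,
# while the rim reflects the direction directly behind the ball:
print(ball_pixel_to_direction(0.0, 0.0))  # camera's own reflection
print(ball_pixel_to_direction(1.0, 0.0))  # badly sampled rim region
```

Evaluating the centre gives the direction back toward the camera (the camera's reflection), and the rim gives the direction directly away from it (the stretched, poorly sampled region), which is exactly why the two photographs are taken 90° apart rather than 180°.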
You should have a pair of images that look something like this (in high-dynamic range):
[Photograph of the mirrored ball, image A]  [Photograph of the mirrored ball, image B]
HDRShop can then be used to warp these images to a better panoramic format, and rotate them in 3D so that their orientation matches.
The next step is to crop the images to the very edge of the mirrored ball:
First make sure that the "Circle" option is checked. This can be found in the Select menu under Draw Options. Checking this will draw an ellipse inscribed in your selection rectangle, which can be useful for matching up the edge of the mirrored ball with your selection.
If the mirrored ball goes off some of the edges in your photograph, make sure that the "Restrict Selection to Image" option is unchecked. This can be found in the Select menu under Select Options. Unchecking this will allow the selection tool to select regions outside the bounds of the image.
Select the region around the mirrored ball, and adjust it until the circle borders the edge of the ball. It should look something like the image below.
When you have the circle lined up, crop the image using the Crop command in the Image menu.
Once you have cropped them, you should have two images that look like this:
[Cropped mirrored ball, image A]  [Cropped mirrored ball, image B]
In order to match the two images, we will need to find the rotation between them. HDRShop can do this semi-automatically, but you will need to provide the coordinates of two points (corresponding to the same features in the environment) in each mirrored ball image.
The easiest way to get these coordinates is to use the Point Editor, which is available under the Window menu (cancel out of the Panoramic Transform window if you have it open). Once the Points window is open, Ctrl-clicking on an image will create a new point. You can also drag existing points around. For our purposes, we will need two points in each image, positioned on the same features. In our example below, I've chosen the two light sources above either side of the altar.
In the current public version of HDRShop, you need to write these coordinates down. (Future versions will allow you to use the points directly.) In our example the coordinates are:
                    X           Y
    Image A:     93.18750   140.93750
                236.18750   198.06250
    Image B:    233.87500   186.12500
                417.43750   187.56250
Now we can have HDRShop apply a 3D rotation to one of the panoramas in order to line it up to the other one, while simultaneously warping the images to the light probe (angular map) panoramic format.
To do this we will use the Panoramic Transform command. It can be found on the Image menu, under Panorama, Panoramic Transformations...
The source image should be the image you wish to unwarp. The destination image can be left as 'New Image'. Our source image format is 'mirrored ball', and we wish our destination image to be in 'Light Probe (Angular Map)' format. You can change the destination image resolution to the desired size, and increase the supersampling if you want a better quality result.
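The 'Light Probe (Angular Map)' format that HDRShop warps into has a simple geometry: the distance of a pixel from the image centre is proportional to the angle between its direction and the forward axis, so the centre is straight ahead and the rim is straight behind. As a sketch of that mapping (the axis convention here, forward = (0, 0, -1), is an assumption for illustration, not necessarily HDRShop's internal one):

```python
import numpy as np

def direction_to_angular_map(d):
    """Map a unit direction to angular-map coordinates in [-1, 1]^2.
    Convention assumed: image centre is the forward direction
    (0, 0, -1); radius from the centre grows linearly with the
    angle away from forward, reaching 1 for the backward direction."""
    dx, dy, dz = d
    theta = np.arccos(np.clip(-dz, -1.0, 1.0))  # angle from forward axis
    r = theta / np.pi                           # 0 at centre, 1 at rim
    s = np.hypot(dx, dy)
    if s < 1e-12:                               # straight ahead or behind
        return (0.0, 0.0) if dz < 0 else (r, 0.0)
    return (r * dx / s, r * dy / s)

print(direction_to_angular_map((0.0, 0.0, -1.0)))  # forward -> centre
print(direction_to_angular_map((1.0, 0.0, 0.0)))   # 90° right -> halfway out
```

Unlike the mirrored-ball image, the angular map samples directions behind the viewer just as densely as those in front, which is why it is the preferred light probe format.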
In this case, we will warp image B with no rotation, and then warp and rotate image A to match image B. So we choose image B as our source image, select 'None' for 3D Rotation, and click OK. This should produce a warped version of image B.
Now, going back to the Panoramic Transform dialog, we can set everything up again for image A. This time, select 'Match Points' under '3D Rotation'. Clicking on 'Settings', we can enter the coordinates we wrote down:
In this case, we have set it up to rotate image A to match image B.
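HDRShop's Match Points feature solves for this rotation internally. Conceptually, once each marked pixel has been converted to a 3D direction (via the mirrored-ball reflection mapping), two correspondences are enough to pin down the rotation. A minimal sketch of that idea, not HDRShop's actual algorithm: build an orthonormal frame from each pair of directions, and compose the frames.

```python
import numpy as np

def frame(v1, v2):
    """Orthonormal frame (3x3 matrix) from two non-parallel vectors."""
    e1 = v1 / np.linalg.norm(v1)
    e3 = np.cross(v1, v2)
    e3 /= np.linalg.norm(e3)
    e2 = np.cross(e3, e1)
    return np.column_stack([e1, e2, e3])

def rotation_from_pairs(a1, a2, b1, b2):
    """Rotation R with R @ a_i = b_i, exact when the angle between
    a1 and a2 equals the angle between b1 and b2."""
    return frame(b1, b2) @ frame(a1, a2).T

# Example: recover a known 30° rotation about the vertical axis
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.6, 0.8])
R = rotation_from_pairs(a1, a2, R_true @ a1, R_true @ a2)
print(np.allclose(R, R_true))  # True
```

In practice the two measured angles never match exactly (the points are hand-placed), which is why marking well-separated, clearly identifiable features such as the two light sources gives the most reliable alignment.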
Once all the fields are filled in correctly click OK for both Match Points and Panoramic Transform. You should now have two panoramas in light-probe format; something like this:
The final step is to merge these images together. For this we will need a mask: an image whose pixels are 0 where we wish to use image A, 1 where we wish to use image B, and an intermediate value when we wish to blend between them. You can create a mask in any paint program, though Adobe Photoshop has some nice features that make it easier.
If you are familiar with Photoshop, save out JPEG versions of our two panoramas and load them into Photoshop. You can copy/paste one panorama as a layer on top of the other one, and then add a mask to that layer. This way, as you paint on the mask you can see the result visually. If you don't have Photoshop, you'll just have to draw a mask and try it. When you have a good mask, save the mask out as a Windows BMP, TIFF, or other uncompressed format that HDRShop supports.
The completed mask should look something like this:
You can download this mask here: mask2.bmp
Next, load this mask into HDRShop. To merge the two panoramas using the mask, choose Calculate from the Image menu, and set the values like so:
This does an alpha blend between image B and image A using the mask.
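The blend itself is just per-pixel linear interpolation. As a sketch of what the Calculate operation computes (the function and array names here are illustrative, not HDRShop's):

```python
import numpy as np

def alpha_blend(image_a, image_b, mask):
    """Per-pixel blend of two HDR panoramas: mask == 0 keeps image A,
    mask == 1 keeps image B, and intermediate values interpolate
    linearly. mask is a float array in [0, 1] matching the images'
    height and width."""
    m = mask[..., np.newaxis] if mask.ndim == 2 else mask
    return (1.0 - m) * image_a + m * image_b

a = np.full((2, 2, 3), 10.0)   # stand-in HDR panorama A
b = np.full((2, 2, 3), 20.0)   # stand-in HDR panorama B
mask = np.array([[0.0, 1.0],
                 [0.5, 0.25]])
print(alpha_blend(a, b, mask))  # A where mask is 0, B where mask is 1
```

Because the blend operates on the linear HDR pixel values, the feathered region of the mask transitions smoothly between the two exposures without visible seams.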
The Results
The result is your finished light probe! The camera reflections and the regions of bad sampling have been removed, and the image is ready to use as an illumination environment.