Surround 360 Calibration

In order to produce a more accurate and comfortable result in VR, the Surround 360 rendering software uses several calibration config files to correct optical and mechanical issues. This document describes the process of generating the calibration config files.


WARNING: you should not attempt to render videos captured with Surround 360 without first reading this document. Uncalibrated results may be severely distorted in VR to the point of breaking stereo perception of 3D.


There are three specialized calibration processes: color calibration, optical vignetting calibration, and geometric calibration. Each part of calibration requires different data and different steps to process with our software. For best results, execute each calibration process in the order below.


Color Calibration

When converting RAW images to RGB we use the files cmosis_sunex.json and cmosis_fujinon.json in /surround360_render/res/config/isp. These contain the fields that configure the soft ISP for the side cameras and the top/bottom cameras, respectively. A detailed description of each field can be found in the README.txt file in the same directory.

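For a quick look at the fields these files contain, you can pretty-print one of them with Python's built-in json.tool from the surround360_render directory (this only pretty-prints the JSON; see README.txt in the same directory for what each field means):

    python -m json.tool res/config/isp/cmosis_fujinon.json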

Even though all the sensors from the same camera model should behave the same way, in practice there are subtle differences that can cause two images from two sensors to look different under the same illuminant (light source).


This calibration process lets us create ISP config json files for a set of cameras, which can then be used by referencing them in cam_to_isp_config.json.


To calibrate against a known illuminant, we need something on the scene for which we know its ground truth RGB values. We use a MacBeth ColorChecker, which contains 24 square color patches. We also use a SpyderCUBE device, which allows us to find the darkest point on the image for black level adjustment.


The steps below describe the color calibration process for a set of cameras from the same rig.


  • Under the known illuminant, place the MacBeth chart and the SpyderCUBE in front of a camera and take a picture using our camera control software. An example image can be found in /surround360_render/res/example_data/color_calibration_input.png. Repeat for each camera.


  • Save the images inside a directory called “charts”. For this example, we are assuming they are under ~/Desktop/color_calibration/charts/*.tiff. Go to /surround360_render and run the following command:
    python scripts/color_calibrate_all.py \
    --data_dir ~/Desktop/color_calibration \
    --black_level_hole
    


  • This generates a directory called “isp”, with all the ISP json config files. It also generates an output directory with several debug images of each step of the detection process, for all the cameras. /surround360_render/res/example_data/color_calibration_output.png is an example of the output of the last step (gamma correction) on the input image mentioned above.


  • Check the file scripts/color_calibrate_all.py for more options. Use the attributes min_area_chart_perc and max_area_chart_perc to set a range for the expected size of the color chart; this is useful when pictures are taken at different distances. Use the attribute black_level_adjust to set the black level of each camera to the median of all the cameras; this is useful when we expect the black level to be the same in all the cameras (note that this is not true for all sensors). An example invocation using these options is sketched after this list.

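For example, a run that also constrains the expected chart size and equalizes black levels might look like the sketch below. This is only an illustration: it assumes the options above are passed as command-line flags with the same names, and the chart-size values are placeholders, so check the script's help output for the exact argument types and sensible ranges.

    python scripts/color_calibrate_all.py \
    --data_dir ~/Desktop/color_calibration \
    --black_level_hole \
    --black_level_adjust \
    --min_area_chart_perc 1 \
    --max_area_chart_perc 40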

To run the pipeline with these new ISP config files we just need to copy the generated ISP config files to the output directory, e.g. ~/Desktop/render/config/isp/*.json, and then run run_all.py as usual. Note that the ISP config files are named after each corresponding camera serial number.

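For example, assuming the “isp” directory was generated under the --data_dir used above and the render output directory from this example:

    cp ~/Desktop/color_calibration/isp/*.json ~/Desktop/render/config/isp/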

Optical Vignetting Calibration

All lenses create a type of vignetting called optical vignetting, which results in a fall-off in brightness as we move away from the center of the image and towards the edges. This effect is especially undesirable in overlapping camera scenarios, since we expect overlapping areas to have the same color and brightness.


This calibration process lets us model the vignetting fall-off and update the camera ISP json config file accordingly. This is done by taking a picture of a gray chart while rotating the camera along its exit pupil (or as close as possible) so that we get samples of the chart across the entire image region.


The steps below describe the calibration process for a set of cameras.

  • Under a uniform and constant illuminant, place the grayscale chart in front of a camera and take as many pictures as desired (more than 20 recommended) using our camera control software, so as to cover the entire image region with samples of the chart in all positions. An example image can be found in /surround360_render/res/example_data/vignetting_calibration_sample.tiff. Repeat for each camera.


  • Save the set of RAW images for each camera inside a directory called “charts”. For this example, we acquired 100 images per camera, for 17 cameras, and we placed them under ~/Desktop/vignetting_calibration/[camera]/charts/[000000-000099].tiff. Note the file structure, where each camera has its own directory. We also assume that color calibration has been run on these cameras, and we already have a directory ~/Desktop/vignetting_calibration/isp with each camera’s ISP json config file. Go to /surround360_render and run the following command:
    python scripts/vignetting_calibrate.py \
    --data_dir ~/Desktop/vignetting_calibration \
    --num_cams 17 \
    --save_debug_images
    


  • This generates a directory called isp_new with all the updated ISP config files. Also, each camera directory has two new directories. 1) acquisition: contains a mask image with all the detected charts, a data file called data.json, containing location and color intensity values for each patch, and other debugging data. 2) calibration: contains plots of the vignetting models for each channel, an updated ISP json config file, and other debugging data. /surround360_render/res/example_data/vignetting_calibration_fit.png is an example of a surface fit that models the vignetting of the red channel of the camera used in this example. It shows the center of the image and the point of minimum vignetting, as well as the Bezier control points on the top left.


  • Check the file scripts/vignetting_calibrate.py for more options. Use the attributes image_width and image_height if using a non-default image size. Use the attribute load_data to load the location and color intensity data, skipping the acquisition step and going straight to the calibration step; an example is sketched after this list.

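For example, to re-run only the calibration step on data that was already acquired (a sketch that assumes load_data is a plain flag; confirm against the script):

    python scripts/vignetting_calibrate.py \
    --data_dir ~/Desktop/vignetting_calibration \
    --num_cams 17 \
    --load_data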

To run the pipeline with these new ISP config files, just replace the original ones with the ones created by the script.

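For example, assuming isp_new was generated under the --data_dir used above and the render output directory from the color calibration section (this overwrites the previous ISP config files, so keep a backup if you need them):

    cp ~/Desktop/vignetting_calibration/isp_new/*.json ~/Desktop/render/config/isp/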

Geometric Calibration

No matter how well constructed the rig is, our software needs to know the geometric properties of the cameras (intrinsic and extrinsic) in order to accurately perform stereo reconstruction.


The steps below describe the geometric calibration process for a camera rig.


  • Capture a single frame using the Surround360 capturing software in a scene with plenty of features, that is, containing objects with sharp edges and corners of different sizes. A good example is the interior of an office.


  • Unpack the frames and run the ISP step to get RGB images. Put them in a separate directory. For this example we assume they are in ~/Desktop/geometric_calibration/rgb/cam[0-16]/000000.png


  • Go to surround360_render and run the following command:

    python scripts/geometric_calibration.py \
    --data_dir ~/Desktop/geometric_calibration \
    --rig_json $PWD/res/config/camera_rig.json \
    --output_json ~/Desktop/geometric_calibration/camera_rig.json \
    --save_debug_images


  • This generates a new JSON file, camera_rig.json, which is used when rendering; just copy it to the output directory, e.g. ~/Desktop/render/config/camera_rig.json (see the example below). It also generates debug images under ~/Desktop/geometric_calibration showing the accuracy of the calibration process.

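For example:

    cp ~/Desktop/geometric_calibration/camera_rig.json ~/Desktop/render/config/camera_rig.json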