A Production-Oriented Method for Intrinsic Data Extraction from Scanned Data

This was a study done in conjunction with Tandent Images, the creators of a program called Lightbrush, using a Faro scanner and a program called Agisoft PhotoScan.

The premise was simple and easy to notice: laser and photo scanning of surfaces was gaining popularity and accessibility for 3D image creation and 3D printing, but by default the captured color of the surface came with the shadows cast by the light sources used to illuminate the subject.

My main goal for this study was to be able to scan a model with a device (I thought of Google Glass at the time), reconstruct the surface and texture information, and then, potentially, send that object to the cloud to get "de-shadowed" with this intrinsic data extraction. The model could later be shared with another user, and the recipient's ("User B's") environment light could then be applied to it, making for a better user experience when the model is placed virtually somewhere.

I contacted Tandent after consulting with Paul Debevec about intrinsic data extraction from images. At the time there were only early studies and technical papers with a few tests done here and there, but nothing as solid, reliable, and, not to mention, fully functional as what Tandent had done with Lightbrush.

I got in touch with the folks over at Tandent and told them about the research I wanted to drive. They were very excited and immediately jumped on board; they were very helpful and even gave me a full 30-day working license to conduct the initial studies.

One of the findings I came across early on was that the method for surveying the images from which the data would be extracted was manual. In other words, you had to open the image you wanted processed and manually select areas of light and shadow on the same surface (i.e., click a wall in light and then the same wall in shadow), and after collecting a substantial set of paired points the image was ready to be processed.

This was not much of a painful process for one image, but I had to take into consideration that photo scanning needs a high number of images to be accurate, which made this manual survey pretty much unworkable for the process.
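To give a sense of what those paired points are for: intrinsic image decomposition treats a picture as reflectance multiplied by shading, so a lit sample and a shadowed sample of the same material should differ only by a shading factor. The sketch below is a minimal illustration of that idea in plain NumPy, not Tandent's algorithm; the helper names, the patch averaging, and the simple per-channel ratio are my own assumptions for the example.

```python
import numpy as np

def shadow_ratio(image, lit_xy, shadow_xy, radius=3):
    """Estimate the per-channel shading factor between a lit and a shadowed
    sample of the same material (hypothetical helper, for illustration only)."""
    def patch_mean(xy):
        x, y = xy
        patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
        return patch.reshape(-1, image.shape[2]).mean(axis=0)
    lit = patch_mean(lit_xy)
    shadow = patch_mean(shadow_xy)
    return shadow / np.maximum(lit, 1e-6)   # < 1.0 where the shadow darkens

def lift_shadow(image, shadow_mask, ratio):
    """Divide the shadowed pixels by the shading factor to approximate the
    shadow-free reflectance in that region (image = reflectance * shading)."""
    out = image.astype(np.float64)
    out[shadow_mask] = out[shadow_mask] / ratio
    return np.clip(out, 0, 255).astype(image.dtype)
```

A real solver like Lightbrush does far more than this (it has to find the shadow regions and keep the result consistent across the whole image), but every pair you click feeds it exactly this kind of same-material constraint.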

THE 3 STRIKES

My first approach was to follow the Agisoft PhotoScan workflow:

  1. I loaded all the images and built the surface in PhotoScan.

  2. I created the texture and exported it from PhotoScan.

  3. Then, instead of doing the manual survey in Lightbrush on all the pictures, I would do it only on ONE big mushy-looking picture (the reconstructed texture) and finally de-shadow that.

FIRST OUT: Tandent's algorithm would not work with the textures created by PhotoScan, due to their HIGH level of visual complexity and lack of feature continuity. (If you have seen those textures you know what I'm talking about.)

My second approach was to:

  1. PAINFULLY do the manual survey on each photo (18 pictures, 6 sets of paired points on each, one light section and one dark section per pair).

  2. Load all the processed images (the reflectance map results) into PhotoScan and build the surface.

  3. That way I would end up with a reconstructed texture free of shadows.

SECOND OUT: Agisoft's algorithm would not work with the intrinsic data (the reflectance maps) created by Tandent's Lightbrush (a lack of information for geometry reconstruction, perhaps?).

My last approach was to cheat:

  1. I loaded all the images (unprocessed) and built the surface in PhotoScan.

  2. PAINFULLY did the manual survey on each photo (18 pictures, 6 sets of paired points on each, one light section and one dark section per pair).

  3. Renamed ALL the photos USED to create the surface and swapped them with the reflectance maps from Lightbrush (see the swap-script sketch after this list).

  4. And VOILÀ, IT WORKED.
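The trick works because PhotoScan references the source photos by filename, so if the de-shadowed reflectance maps are dropped in under the original names, the already-computed alignment and mesh stay intact and only the texture is rebuilt from the new pixels. Below is a minimal sketch of that file swap; the folder names, the .jpg extension, and the assumption that Lightbrush's outputs keep the same base filenames are all hypothetical.

```python
from pathlib import Path
import shutil

# Hypothetical folder layout for the swap trick (not the exact paths I used).
ORIGINALS = Path("photos")           # images PhotoScan aligned and meshed from
REFLECTANCE = Path("reflectance")    # de-shadowed outputs from Lightbrush
BACKUP = Path("photos_backup")       # keep the originals safe

BACKUP.mkdir(exist_ok=True)
for photo in ORIGINALS.glob("*.jpg"):
    match = REFLECTANCE / photo.name          # same name, so the project won't notice
    if not match.exists():
        print(f"no reflectance map for {photo.name}, skipping")
        continue
    shutil.move(str(photo), str(BACKUP / photo.name))   # stash the original
    shutil.copy2(str(match), str(photo))                # swap in the reflectance map
```

After the swap, reopening the project and rebuilding only the texture gives a shadow-free texture sitting on top of geometry that was reconstructed from the original, feature-rich photos.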

The one remaining painful process was the manual survey of the light and dark areas, which had to be done picture by picture, but I had an idea for that. Since photo scanning recognizes common points between images, I thought it would be nice to create a plugin, working either on Agisoft's side or on Tandent's side, that shared the same surveyed light/dark point pairs across a "project" and could later be refined, similar to the way PTGui (a panorama stitching program) works.
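The propagation itself is straightforward once the cameras are aligned: the pixel you clicked corresponds to a 3D point on the reconstructed surface, and that point can be reprojected into every other aligned photo that sees it. The sketch below shows the geometric core of that idea with plain pinhole cameras; it is not an Agisoft or Tandent API, and the camera dictionary layout is my own assumption for the example.

```python
import numpy as np

def project(point_3d, K, R, t, image_size):
    """Pinhole projection of a world-space point into one camera.
    Returns pixel (u, v), or None if the point is behind the camera or off-frame."""
    p_cam = R @ point_3d + t
    if p_cam[2] <= 0:                        # behind the camera
        return None
    u, v, w = K @ p_cam
    u, v = u / w, v / w
    width, height = image_size
    return (u, v) if (0 <= u < width and 0 <= v < height) else None

def propagate_pair(light_3d, shadow_3d, cameras):
    """cameras: list of dicts with 'K', 'R', 't', 'size' (assumed layout).
    Returns one ready-made light/dark survey pair per photo that sees both points."""
    pairs = []
    for cam in cameras:
        lit = project(light_3d, cam["K"], cam["R"], cam["t"], cam["size"])
        sha = project(shadow_3d, cam["K"], cam["R"], cam["t"], cam["size"])
        if lit is not None and sha is not None:
            pairs.append((lit, sha))
    return pairs
```

One click per pair in one photo would then seed the survey for the whole image set, with the user only refining the propagated points instead of placing them all by hand.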

Finally, I had a meeting with the folks at Tandent and shared my findings, and they implemented some of my research as a batch-processing feature in the new version.

Unfortunately, Agisoft (I understand everybody is busy) was not able to get back to me the way Tandent did, and Tandent discontinued Lightbrush. :(

Anyways, it was a fun process...

 

Thanks for your time!