ISO meeting day 2, and a big storm

Day 2 of the ISO Photography standards meeting was all technical discussions. We talked about standards for measuring low-light performance, specifying camera-related vocabulary definitions, defining transformation maps for converting between standard and high dynamic range images, updating definitions of camera technical specifications to handle new technologies, measuring the information-theoretic capacity of camera images and systems, and measuring autofocus performance.

One of the interesting quotes from the discussion concerned the autofocus standard. The authors wanted to allow measurement of autofocus under conditions that simulate hand-held shooting, with the camera shaking and wobbling due to hand unsteadiness. In a formal testing situation, you need to simulate this with a robotic device programmed to shake the camera in the same manner as a human hand. Another expert said it seemed odd to require this rather than simply mounting the camera on a tripod, since we already have a separate standard for measuring imaging performance when hand-held. The author responded (my paraphrasing): almost 100% of photos taken are hand-held, so requiring a tripod for a performance test is somewhat perverse.
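To give a feel for what "programmed to shake like a human hand" might mean in practice, here is a minimal sketch of generating a tremor-like motion trace that a robotic mount could replay. It models hand shake as a sum of sinusoids with random phases in a low-frequency band; the band limits, amplitudes, and sample rate are purely illustrative assumptions of mine, not values from any ISO standard.

```python
import math
import random

def hand_shake_profile(duration_s=2.0, rate_hz=500, seed=0):
    """Generate a toy angular-displacement trace (arbitrary units, e.g.
    degrees) for a robotic camera mount to replay.

    Tremor is modelled as a sum of sinusoids with random frequencies,
    amplitudes, and phases in a roughly 2-12 Hz band. All numbers here
    are illustrative assumptions, not standardised values.
    """
    rng = random.Random(seed)
    components = []
    for _ in range(6):
        freq = rng.uniform(2.0, 12.0)        # Hz
        amp = rng.uniform(0.05, 0.3) / freq  # larger sway at lower freq
        phase = rng.uniform(0.0, 2 * math.pi)
        components.append((freq, amp, phase))
    n = int(duration_s * rate_hz)
    return [
        sum(a * math.sin(2 * math.pi * f * (i / rate_hz) + p)
            for f, a, p in components)
        for i in range(n)
    ]

trace = hand_shake_profile()
print(len(trace))  # 1000 samples: 2 s at 500 Hz
```

Because the trace is seeded, a test lab can replay exactly the same shake for every camera under test, which is the whole point of replacing a human hand with a robot.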

Another interesting concern was raised because of the recent explosion in AI algorithms. Someone pointed out that we have standards for measuring image quality that work by having the camera take a photo of a standardised test chart, then comparing the captured image to an ideal reproduction of the chart and noting where the camera's image is degraded. This reflects real-world performance, since photos of scenes will be degraded in the same way. But digital cameras are increasingly using image processing to improve image quality, and no doubt they'll soon be using AI algorithms. If an AI algorithm knows the standard test chart, it can recognise when you try to take a photo of one… and output an image which is a perfect reproduction of the chart. So when you take a photo of a test chart, the measured "camera performance" will be absolutely perfect, but this will not reflect the camera's actual performance when photographing a real scene.

This is something we actually have to think about, to try to design a performance test that can’t be cheated in this way. There are options, such as randomising the test charts or procedurally generating them, but this all requires very careful design and testing. So we have plenty of work ahead of us in the next few years!
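As a rough illustration of the procedural-generation idea mentioned above, here is a minimal sketch of a randomised test chart: a grid of grey patches whose values are drawn from a seeded random generator. The chart layout and patch values are my own illustrative assumptions, not anything from an actual standard.

```python
import random

def generate_chart(seed, grid=8):
    """Procedurally generate a simple randomised test chart: a grid of
    grey patches with reflectances drawn from a seeded RNG.

    A tester who knows the seed can reconstruct the exact ground truth
    to compare the camera's output against, while a camera that has
    merely memorised one fixed published chart cannot predict it.
    """
    rng = random.Random(seed)
    return [[rng.uniform(0.05, 0.95) for _ in range(grid)]
            for _ in range(grid)]

# The lab prints the chart and derives the ground truth from the seed...
chart = generate_chart(seed=42)
reference = generate_chart(seed=42)
assert chart == reference
# ...and a fresh seed on each test run yields a different chart.
assert generate_chart(seed=43) != chart
```

The design point is that the reference image becomes a function of a secret or per-run seed rather than a fixed published target, so "recognise the chart and paint a perfect one" stops being a viable shortcut.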

Tonight while teaching my new ethics class on Exploration, there was a big thunderstorm. Lots of lightning and heavy rain and wind. We had 20 mm of rain in a couple of hours, and no doubt there’ll have been some flash flooding and probably some trees down across the city. No problem here, thankfully.

New content today:

One thought on “ISO meeting day 2, and a big storm”

  1. But someone pointed out that digital cameras are increasingly using image processing to improve image quality, and soon no doubt they’ll be using AI algorithms. And if an AI algorithm knows the standard test chart it can recognise when you try to take a photo of one… and output an image which is a perfect reproduction of the test chart. So when you take a photo of a test chart, the measured “camera performance” will be absolutely perfect, but this will not reflect the camera’s actual performance when photographing a scene.

    There’s already a scandal brewing about this, in fact: https://www.reddit.com/r/Android/comments/11nzrb0/samsung_space_zoom_moon_shots_are_fake_and_here/
