Low Light Computer Vision
Today I learned that folks are trying to get computer vision to work in (extreme) low-light settings, via this paper.
It turns out to be a tricky problem, but a group of researchers made an interesting development for pose estimation. Part of their effort revolved around gathering a new dataset (ExLPose). The trick in this dataset is that they gathered pairs of images, from the same camera setup with fancy hardware, that capture the same scene in both a low-light and a well-lit setting.
The well-lit images are then used to train a privileged teacher model. A student model is then tasked with learning the privileged knowledge from the teacher, while only seeing the low-light data as input. This results in a large model built from batch-norm blocks.
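To make the teacher/student idea concrete, here's a minimal sketch of feature distillation in NumPy. Everything here is hypothetical toy material, not the paper's actual architecture: the "models" are plain linear maps, the paired data is synthetic (low-light images faked as darkened, noisy copies of well-lit ones), and the learning rate and shapes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data standing in for the aligned well-lit / low-light pairs:
# low-light images are darkened, noisy copies of the well-lit ones.
n, d, k = 64, 16, 4
well_lit = rng.normal(size=(n, d))
low_light = 0.05 * well_lit + 0.02 * rng.normal(size=(n, d))

# Privileged teacher: assumed already trained, and only ever fed well-lit input.
W_teacher = rng.normal(size=(d, k))
teacher_feats = well_lit @ W_teacher  # the targets the student must match

# Student: sees only low-light input, distills the teacher's features via MSE.
W_student = np.zeros((d, k))
lr = 50.0  # hypothetical step size, tuned for this toy data scale
losses = []
for _ in range(200):
    student_feats = low_light @ W_student
    err = student_feats - teacher_feats
    losses.append(float(np.mean(err ** 2)))
    grad = 2 * low_light.T @ err / n  # gradient of the MSE distillation loss
    W_student -= lr * grad
```

The key point the sketch shows: the distillation loss only ever compares the student's output on *low-light* input against the teacher's output on the *well-lit* twin, which is exactly what the paired dataset makes possible.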
The paper proceeds with statistics showing that their approach works well. It also discusses the difference in performance between single-person scenes and groups of people.
There’s also a nice image in the appendix: it seems the latent representation of the teacher/student model can also be used to reconstruct the input image.
Interesting stuff. It’s not just the creative model that’s on display; I also appreciate the extra effort of collecting a custom dataset to investigate this task.