Memes and other strange images
The other day I learned about a new dataset called "MemeCap", a dataset for captioning and interpreting memes created by EunJeong Hwang and Vered Shwartz. The paper can be found on arXiv and the dataset is on GitHub.
Here's a figure from the paper that explains what the dataset is about.
The goal of this dataset is to offer a new benchmark for interpreting visual metaphors, and it's a fairly big one too: ~6.3K images with titles, captions, and details.
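If you want to poke around in a dataset like this, a few lines of Python go a long way. Note that the file name and field names below are assumptions for the sake of a runnable sketch; the actual MemeCap release on GitHub may use different ones.

```python
import json
from pathlib import Path

# Sketch of loading a MemeCap-style JSON file. "memes.json" and the
# field names "title", "img_captions", and "meme_captions" are
# assumptions here, not the confirmed schema of the real release.
def load_memes(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Tiny stand-in file so the sketch runs end to end.
Path("memes.json").write_text(json.dumps([
    {"title": "example meme post title",
     "img_captions": ["a literal description of the image"],
     "meme_captions": ["what the meme is actually saying"]},
]), encoding="utf-8")

memes = load_memes("memes.json")
print(len(memes))           # number of memes in the file
print(memes[0]["title"])    # the post title of the first meme
```

From there it's easy to, say, count captions per meme or grep the titles for a topic you care about.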
This "understanding memes" task also seems to be something humans excel at but machines find hard: you need a fair bit of background knowledge to understand what is in the image, and humor is an area where algorithms tend to struggle.
Via this paper I also learned about the WHOOPS! benchmark. This benchmark contains synthetically generated "strange images", and the goal of the algorithm is to explain what makes them strange.
These seem like pretty hard tasks, but the datasets are fun to explore in an afternoon or during a hackathon.