Can frontier AI models actually read a painting? [R]
I wrote up a small experiment on whether frontier multimodal models can appraise art from vision alone.
I tested 4 frontier models on 15 paintings worth about $1.46B in total auction value, in two settings:
- image only
- image + basic metadata
The main thing I found was what I describe as a recognition vs. commitment gap.
In several cases, a model could identify the work or artist from pixels alone, yet it did not always commit to a correspondingly confident valuation from the image by itself. Adding metadata helped some models far more than others.
Gemini 3.1 Pro was strongest in both settings. GPT-5.4 improved sharply once metadata was added.
I thought this was interesting because it suggests that for multimodal models, “seeing” something and actually relying on what is seen are not the same thing.
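One way to make the gap concrete is to score valuations on a log scale, so being 10x under and 10x over count the same, and then measure how much error drops once metadata is added. The sketch below uses entirely made-up prices and estimates (the names, numbers, and the specific median-log-error metric are my illustrative choices, not the post's actual data or scoring method):

```python
import math

# Hypothetical numbers for illustration only -- NOT results from the
# experiment. True auction prices and one model's estimates, in USD.
true_prices = {"painting_a": 450e6, "painting_b": 110e6, "painting_c": 26e6}

estimates = {
    "image_only":      {"painting_a": 90e6,  "painting_b": 40e6, "painting_c": 5e6},
    "image_plus_meta": {"painting_a": 400e6, "painting_b": 95e6, "painting_c": 30e6},
}

def median_abs_log10_error(est, truth):
    """Median |log10(estimate / true price)|: an order-of-magnitude error
    measure that treats 2x-over and 2x-under symmetrically."""
    errs = sorted(abs(math.log10(est[k] / truth[k])) for k in truth)
    n = len(errs)
    mid = n // 2
    return errs[mid] if n % 2 else (errs[mid - 1] + errs[mid]) / 2

image_err = median_abs_log10_error(estimates["image_only"], true_prices)
meta_err = median_abs_log10_error(estimates["image_plus_meta"], true_prices)

# "Commitment gap" under this toy metric: how much closer the valuations
# get once metadata is added, in orders of magnitude.
gap = image_err - meta_err
print(f"image-only error:    {image_err:.2f} (log10)")
print(f"image+meta error:    {meta_err:.2f} (log10)")
print(f"metadata gain (gap): {gap:.2f}")
```

A model that truly relied on what it recognized in the image would show a small gap; a model that recognizes the work but hedges its valuation until the metadata confirms it would show a large one.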
Would be curious what people think about:
- whether this is a useful framing
- how to design cleaner tests for visual reliance vs textual reliance
- whether art appraisal is a reasonable probe for multimodal grounding
Blog post: https://arcaman07.github.io/blog/can-llms-see-art.html