Color models for humans and devices
Briefly

The article explores the complexity behind how images are perceived, focusing on color vision theory and color representation in digital formats. It begins by explaining how screen pixels emit light at varying intensities and wavelengths, which humans perceive as color. The role of the retina is then highlighted, detailing how rods and cones enable vision in different lighting conditions and underpin color perception. The post aims to demystify why images may appear different across devices, providing foundational knowledge for developers, designers, and curious readers alike.
Every screen pixel emits light with its own intensity and wavelength. Humans perceive these differences in wavelength as differences in color (and so do cats, but that's a different story).
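As a concrete illustration (not from the article itself): on a typical sRGB display, a pixel's color is stored as three channel values, and those values map nonlinearly to the light intensity each subpixel emits. Here is a minimal TypeScript sketch of the standard sRGB decoding step; the `srgbToLinear` helper name is hypothetical:

```ts
// Decode an 8-bit sRGB channel value (0-255) into the relative light
// intensity (0-1) the subpixel actually emits. Channel values are not
// proportional to emitted light: the sRGB transfer function compresses
// them to better match human brightness perception.
function srgbToLinear(channel: number): number {
  const c = channel / 255;
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
}

// Example: the "mid-gray" channel value 128 emits only ~22% of max light.
console.log(srgbToLinear(128)); // ≈ 0.216
```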
Rods help us see in low light, but they don't distinguish wavelengths, which is why everything looks like shades of gray in the dark. In daylight, however, rods aren't active.
Cones are responsible for color vision. There are three types, each sensitive to a slightly different range of wavelengths: S-cones respond most strongly to short (blueish) wavelengths, M-cones to medium (greenish) ones, and L-cones to long (reddish) ones.
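To make the three-cone idea concrete, here is an illustrative sketch (not from the article) that approximates S/M/L cone responses for a linear-light sRGB color by converting through CIE XYZ with the widely used Hunt-Pointer-Estevez matrix. The helper names (`mul`, `coneResponses`) are hypothetical, and the matrix values vary slightly between sources:

```ts
type Vec3 = [number, number, number];

// Multiply a 3x3 matrix by a column vector.
function mul(m: number[][], v: Vec3): Vec3 {
  return [0, 1, 2].map(
    (i) => m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2]
  ) as Vec3;
}

// Linear-light sRGB -> CIE XYZ (D65 white point).
const RGB_TO_XYZ = [
  [0.4124, 0.3576, 0.1805],
  [0.2126, 0.7152, 0.0722],
  [0.0193, 0.1192, 0.9505],
];

// CIE XYZ -> LMS cone responses (Hunt-Pointer-Estevez matrix).
const XYZ_TO_LMS = [
  [0.38971, 0.68898, -0.07868],
  [-0.22981, 1.1834, 0.04641],
  [0.0, 0.0, 1.0],
];

// Approximate long/medium/short cone responses for a linear-light color.
function coneResponses(rgbLinear: Vec3): Vec3 {
  return mul(XYZ_TO_LMS, mul(RGB_TO_XYZ, rgbLinear));
}

// A pure "green" pixel stimulates M-cones most, with L-cones close behind;
// cone sensitivities overlap heavily.
console.log(coneResponses([0, 1, 0])); // ≈ [0.623, 0.770, 0.119]
```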
The retina processes the signals from all of these cells, taking each cell's light sensitivity into account, and sends the combined signal to the brain.
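That weighting is why perceived brightness is not an even average of the three channels. A small sketch (assuming linear-light sRGB inputs; the `relativeLuminance` name is illustrative) using the standard ITU-R BT.709 luminance weights, which roughly mirror the combined cone sensitivities:

```ts
// Relative luminance of a linear-light sRGB color (ITU-R BT.709 weights,
// also used in WCAG contrast calculations). Green dominates because the
// eye's brightness response peaks in the middle of the visible spectrum.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Pure green looks far brighter than pure blue at the same emitted power.
console.log(relativeLuminance([0, 1, 0])); // 0.7152
console.log(relativeLuminance([0, 0, 1])); // 0.0722
```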
Read at MDN Web Docs