Google announces multi-modal Gemini 1.5 with million-token context length
Briefly

"This allows use cases where you can add a lot of personal context and information at the moment of the query...I view it as one of the bigger breakthroughs we have done."
The model's multimodal capabilities mean you can interact in sophisticated ways with entire books, very long document collections, codebases of hundreds of thousands of lines across hundreds of files, full movies, entire podcast series, and more.
Read at InfoQ