TerraMind’s any-to-any generative capabilities demonstrated on a scene over Boston. From left to right: (1) optical input, (2) synthetic radar generated from optical imagery, and (3) generated land use classification. TerraMind leverages dual-scale multimodal representations to capture both high-level semantics and fine-grained spatial details, enabling cross-modal generation even in zero-shot settings.
Read full story: ESA and IBM collaborate on TerraMind
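To make the any-to-any idea in the caption concrete, below is a minimal, self-contained PyTorch sketch of that kind of pipeline: one encoder maps optical patches into shared latent tokens, and separate decoders render those tokens as a synthetic SAR image and a land use map. All module names, layer sizes, and the single-backbone layout are illustrative assumptions for this sketch, not TerraMind's actual architecture or API.

```python
# Conceptual sketch only: hypothetical shapes and modules, not TerraMind code.
import torch
import torch.nn as nn

PATCH, DIM, N_CLASSES = 16, 256, 10  # assumed patch size, token width, LULC classes


class AnyToAnySketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Patch-embed a 4-band optical chip into a grid of latent tokens.
        self.optical_encoder = nn.Conv2d(4, DIM, kernel_size=PATCH, stride=PATCH)
        # Shared transformer over the token sequence (stand-in for a multimodal backbone).
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Modality-specific heads: 2-channel SAR (e.g. VV/VH) and per-pixel LULC logits.
        self.sar_head = nn.ConvTranspose2d(DIM, 2, kernel_size=PATCH, stride=PATCH)
        self.lulc_head = nn.ConvTranspose2d(DIM, N_CLASSES, kernel_size=PATCH, stride=PATCH)

    def forward(self, optical):                  # optical: (B, 4, H, W)
        tokens = self.optical_encoder(optical)   # (B, DIM, H/16, W/16)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)  # (B, num_patches, DIM)
        seq = self.backbone(seq)
        grid = seq.transpose(1, 2).reshape(b, d, h, w)
        return self.sar_head(grid), self.lulc_head(grid)


model = AnyToAnySketch()
sar, lulc = model(torch.randn(1, 4, 224, 224))   # untrained weights, shapes only
print(sar.shape, lulc.shape)                     # (1, 2, 224, 224), (1, 10, 224, 224)
```

The point of the sketch is the structure: because both outputs are decoded from the same latent tokens, adding another modality means adding another head rather than training a separate optical-to-X model for each target.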