Yes, but only with Quantization!
Out of the box, Flux.2 (32B) requires ~37 GB of VRAM (FP16/FP8), which puts it out of reach of consumer cards.
However, with the Q4_K_M (4-bit) quantization shown in this calculator, memory usage drops to ~22.7 GB.
It is a tight fit, but it runs natively on 24 GB cards such as the RTX 4090 or 7900 XTX, with no slow CPU offloading.
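As a rough sanity check, these figures are consistent with a simple weights-plus-overhead model. The sketch below assumes ~4.4 effective bits/weight for Q4_K_M and a flat ~5 GB overhead for activations and framework buffers; both are illustrative assumptions, not measured values.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     overhead_gb: float = 5.0) -> float:
    """Rough VRAM estimate: weight storage plus a flat overhead
    (activations, buffers -- the 5 GB default is an assumption)."""
    weight_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weight_gb + overhead_gb

# Assumed figures: 32B params; FP8 at 8 bits/weight; Q4_K_M at ~4.4 bits/weight
print(f"FP8:    {estimate_vram_gb(32, 8.0):.1f} GB")
print(f"Q4_K_M: {estimate_vram_gb(32, 4.4):.1f} GB")
```

Under these assumptions the model lands at ~37 GB for FP8 and ~22.6 GB for Q4_K_M, matching the calculator's figures to within rounding.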