From InfoQ:
Cactus v1: Cross-Platform LLM Inference on Mobile with Zero Latency and Full Privacy
Cactus enables fast, energy-efficient on-device AI inference with sub-50ms latency, cross-platform SDKs, privacy by default, model versioning, and optional cloud fallback.
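
The "optional cloud fallback" described above is usually an on-device-first flow: the prompt is served locally whenever possible, and only routed to a remote endpoint if local inference fails. Below is a minimal TypeScript sketch of that pattern; the helpers `generateOnDevice` and `generateInCloud` are hypothetical stand-ins, not the actual Cactus API.

```typescript
// Sketch of an on-device-first inference flow with optional cloud fallback.
// All names here are illustrative assumptions, not the Cactus SDK surface.

interface CompletionResult {
  text: string;
  source: "device" | "cloud";
}

// Stub for on-device generation; a real SDK would run a local model here.
async function generateOnDevice(prompt: string): Promise<string> {
  // Simulate a device that cannot serve requests beyond its capacity.
  if (prompt.length > 4096) throw new Error("model cannot serve this request on device");
  return `local answer to: ${prompt}`;
}

// Stub for a remote completion endpoint, used only as a fallback.
async function generateInCloud(prompt: string): Promise<string> {
  return `cloud answer to: ${prompt}`;
}

async function complete(prompt: string): Promise<CompletionResult> {
  try {
    // Privacy by default: on this path the prompt never leaves the device.
    const text = await generateOnDevice(prompt);
    return { text, source: "device" };
  } catch {
    // Optional cloud fallback for devices that cannot run the model.
    const text = await generateInCloud(prompt);
    return { text, source: "cloud" };
  }
}

complete("Summarize my notes").then((r) =>
  console.log(`[${r.source}] ${r.text}`)
);
```

Keeping the fallback in a single `try`/`catch` makes the privacy boundary explicit: the only code path that sends data off-device is the one reached when local inference throws.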