Liquid AI Releases LocalCowork Powered By LFM2-24B-A2B to Execute Privacy-First Agent Workflows Locally Via Model Context Protocol (MCP)


Liquid AI Launches LocalCowork Powered by LFM2-24B-A2B: A Privacy-First Milestone in Local AI Automation

By Amr Abdeldaym, Founder of Thiqa Flow

In an era where AI automation is revolutionizing business workflows, Liquid AI has taken a significant leap forward by releasing LocalCowork, an open-source desktop agent application powered by the cutting-edge LFM2-24B-A2B model. This innovative solution enables privacy-first, local execution of AI workflows by eliminating cloud API dependencies and preventing data egress, delivering enhanced security and efficiency for regulated enterprise environments.

Understanding the Innovation: LFM2-24B-A2B and LocalCowork

The LFM2-24B-A2B model is a 24-billion-parameter language model built on a sparse Mixture-of-Experts (MoE) architecture that activates roughly 2 billion parameters per token during inference. This design retains the knowledge capacity of a large model while keeping per-token compute closer to that of a 2-billion-parameter model, enabling sub-second response times on consumer-grade hardware.
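To make the "activate a few experts per token" idea concrete, here is a toy sketch of sparse MoE routing. This is an assumed mechanism for illustration only, not the actual LFM2-24B-A2B implementation; the expert count, top-k value, and dimensions are made up.

```python
import numpy as np

# Toy sketch of sparse MoE routing: a router scores every expert for each
# token, but only the top-k experts actually run, so the parameters touched
# per token are a small fraction of the total.
rng = np.random.default_rng(0)

N_EXPERTS = 12   # total experts (illustrative)
TOP_K = 2        # experts activated per token (illustrative)
D = 16           # hidden dimension (toy)

# Each expert is a small feed-forward weight matrix in this sketch.
experts = [rng.standard_normal((D, D)) * 0.1 for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) * 0.1

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                       # one score per expert
    top = np.argsort(logits)[-TOP_K:]         # indices of the chosen experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS weight matrices are multiplied for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(D))
```

The compute saving comes from the fact that the unselected experts are never multiplied at all; only the router (which is tiny) sees every token.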

Complementing this powerful model is LocalCowork — a fully offline AI agent adhering to the Model Context Protocol (MCP). This protocol seamlessly connects the AI to a curated ecosystem of local tools, facilitating tasks ranging from file management to security scanning — all while maintaining a rigorous local audit log.
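MCP messages are JSON-RPC 2.0, so a tool invocation from the agent to a local MCP server looks roughly like the message below. The tool name and arguments are hypothetical examples, not taken from LocalCowork's actual tool set.

```python
import json

# Hedged sketch of an MCP "tools/call" request from the agent to a local
# server. In a privacy-first setup, this message travels only between
# processes on the same machine; nothing is sent to a cloud API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",                        # hypothetical filesystem tool
        "arguments": {"path": "./reports/q3.txt"},  # illustrative local path
    },
}
wire_message = json.dumps(request)
```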

Key Features and Architecture

  • Model architecture: Sparse Mixture-of-Experts (MoE), activating ~2B of 24B parameters per token
  • Hardware tested: Apple M4 Max laptop, 36 GB unified memory, 32 GPU cores
  • Serving engine: llama-server with flash attention enabled
  • Quantization format: Q4_K_M GGUF
  • Memory footprint: approximately 14.5 GB RAM
  • Local tool integration: 75 tools across 14 MCP servers; the demo deploys 20 tools on 6 servers
  • Tool categories: filesystem ops, OCR, security scanning, document processing, audit logging
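The ~14.5 GB figure is consistent with a back-of-envelope estimate for Q4_K_M quantization, which averages roughly 4.85 bits per weight. That average is an approximation (the per-tensor quantization mix varies), and the estimate covers the weights only, before KV cache and runtime buffers.

```python
# Rough sanity check on the reported memory footprint: 24B parameters at
# an assumed ~4.85 bits per weight (approximate Q4_K_M average) gives the
# quantized weight size in GB, excluding KV cache and runtime overhead.
TOTAL_PARAMS = 24e9
BITS_PER_WEIGHT = 4.85  # assumption, not a published spec
weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # roughly 14.5 GB
```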

Performance Benchmarks: Speed Meets Accuracy

  • Latency: ~385 milliseconds average per tool-selection response for single-step execution, enabling near real-time, interactive workflows; multi-step latency varies with chain length (3-6 tool steps).
  • Accuracy: ~80% on single-step tool execution versus ~26% on complex multi-step chains.
  • Use-case fit: single-step execution is reliable for guided, human-in-the-loop workflows; multi-step chains are currently limited by tool-selection ambiguity ("sibling confusion").
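One plausible, though unconfirmed, reading of the single-step versus multi-step gap: if each tool-selection step succeeds independently with probability ~0.80 (the reported single-step accuracy), a chain's success rate decays geometrically with its length, and a 6-step chain lands at about 26%.

```python
# Geometric decay of chain success under an independence assumption:
# each step succeeds with the reported single-step accuracy.
P_STEP = 0.80

for n in (1, 3, 6):
    print(f"{n}-step chain: {P_STEP ** n:.0%}")
# 1-step chain: 80%
# 3-step chain: 51%
# 6-step chain: 26%
```

This framing also suggests why human-in-the-loop checkpoints help: verifying each step resets the compounding instead of letting errors multiply across the chain.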

Implications for AI Automation and Business Efficiency

Liquid AI’s LocalCowork and LFM2-24B-A2B model embody a transformative step in AI automation that prioritizes enterprise data privacy without sacrificing performance. By enabling local execution of AI workflows:

  • Privacy is enhanced: no sensitive data leaves the client device, addressing compliance and regulatory concerns.
  • Latency is minimized: swift tool selection supports time-sensitive business processes.
  • Operational costs can be reduced: dependence on costly cloud APIs and bandwidth is eliminated.
  • Scalable modularity: the Model Context Protocol facilitates integration with diverse tools for tailored workflows.

This architecture is well suited to finance, healthcare, and legal organizations, where securing sensitive information is paramount while operational workflows still need to run efficiently.

Conclusion

Liquid AI’s release of LocalCowork, backed by the powerful yet efficient LFM2-24B-A2B model, exemplifies how AI can be leveraged to maximize business efficiency through local, privacy-conscious automation. While single-step tool operations showcase impressive accuracy and responsiveness, ongoing work to improve multi-step chain execution signals optimistic prospects for fully autonomous AI agents in the near future.

For organizations seeking to harness AI automation while maintaining strict privacy and compliance, LocalCowork represents a compelling, ready-to-deploy solution that blends innovation with practical usability.

Explore the technical details and open-source resources on Liquid AI’s official GitHub Cookbook, join the growing AI community on Reddit and Telegram, and stay updated through their Twitter feed.


Looking for custom AI automation for your business? Connect with me at https://amr-abdeldaym.netlify.app/
