Backend & Core Logic:
• Programming Language: Python v3.10
• Web Framework: FastAPI (serving a RESTful API)
• Asynchronous Programming: asyncio (Python standard library)
• Data Validation: Pydantic
• Logging: Loguru
AI & Orchestration:
• LLM Integrations: Clients for OpenAI (GPT models), Google (Gemini models), Anthropic (Claude models), and optionally locally hosted open-weight models (e.g., Mistral).
• Orchestration: Custom-built hierarchical agent system (Director-Manager-Worker architecture).
Data Storage:
• Database: PostgreSQL
• Database Interaction: SQLAlchemy
Deployment & Infrastructure:
• Containerization: Docker (using optimized Dockerfiles for deployment).
• Cloud Platform: Digital Ocean (App Platform recommended, Droplets as an alternative).
• Web Server: Uvicorn (ASGI server for FastAPI).
• Reverse Proxy: Nginx
• SSL/TLS: HTTPS termination handled at the Nginx reverse proxy
Frontend:
• Framework: React
• Styling: Tailwind CSS
Director-Manager-Worker Hierarchy
The Thinkazoo Orchestrator is built around a three-tier hierarchical structure:
1. Director Agent
◦ Serves as the central orchestrator
◦ Manages high-level task planning and delegation
◦ Maintains global memory and context
◦ Coordinates between multiple Manager agents
2. Manager Agents
◦ Specialize in specific domains (e.g., research, coding, analysis)
◦ Break down complex tasks into subtasks
◦ Assign subtasks to appropriate Worker agents
◦ Synthesize results from multiple Workers
3. Worker Agents
◦ Perform specialized tasks with high efficiency
◦ Execute specific capabilities (text generation, code generation, reasoning)
◦ Report results back to their Manager
◦ Operate with limited, task-specific memory
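The delegation flow described above can be sketched in plain Python. This is an illustrative model only: the class and method names (`Director`, `Manager`, `Worker`, `delegate`, `handle`, `run`) are assumptions for the sketch, not the actual Thinkazoo API, and the Worker stub stands in for a real LLM call.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Worker:
    capability: str  # e.g. "text-generation", "code-generation"

    async def run(self, subtask: str) -> str:
        # A real Worker would call an LLM client here.
        return f"[{self.capability}] done: {subtask}"

@dataclass
class Manager:
    domain: str
    workers: list = field(default_factory=list)

    async def handle(self, task: str) -> str:
        # Break the task into one subtask per worker and fan out concurrently.
        subtasks = [f"{task} / part {i}" for i in range(len(self.workers))]
        results = await asyncio.gather(
            *(w.run(s) for w, s in zip(self.workers, subtasks))
        )
        # Synthesize worker results into a single answer.
        return f"{self.domain}: " + "; ".join(results)

@dataclass
class Director:
    managers: dict = field(default_factory=dict)

    async def delegate(self, domain: str, task: str) -> str:
        # High-level routing: pick the Manager specialized for the domain.
        return await self.managers[domain].handle(task)

director = Director(managers={
    "research": Manager("research", [Worker("text-generation")]),
})
print(asyncio.run(director.delegate("research", "summarize paper")))
```

The key structural point is that Workers never talk to the Director directly: results flow back up through the Manager that assigned the subtask.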
Memory management is organized into three scopes:
1. Global Memory: Accessible by the Director and authorized Managers
2. Team Memory: Shared among specific groups of Managers and Workers
3. Local Memory: Private to individual agents
Memory entries are structured with namespaces, keys, and values, allowing for efficient storage and retrieval of information across the hierarchy.
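A minimal sketch of the scoped, namespaced memory described above, assuming a simple access-control model; the `MemoryStore` class and its `put`/`get` interface are invented for illustration and are not the shipped implementation.

```python
GLOBAL, TEAM, LOCAL = "global", "team", "local"

class MemoryStore:
    def __init__(self, scope, allowed_agents=None):
        self.scope = scope
        self.allowed = allowed_agents   # None => any authorized reader
        self._entries = {}              # (namespace, key) -> value

    def put(self, namespace, key, value):
        # Entries are addressed by namespace + key for efficient retrieval.
        self._entries[(namespace, key)] = value

    def get(self, agent, namespace, key):
        # Team memory is shared only among the agents registered for it.
        if self.allowed is not None and agent not in self.allowed:
            raise PermissionError(f"{agent} may not read {self.scope} memory")
        return self._entries.get((namespace, key))

team_mem = MemoryStore(TEAM, allowed_agents={"manager-1", "worker-7"})
team_mem.put("research", "topic", "LLM orchestration")
print(team_mem.get("worker-7", "research", "topic"))  # LLM orchestration
```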
Key Features
1. Adaptive Learning
The Thinkazoo Orchestrator implements a feedback loop system that allows agents to learn from task outcomes and improve their performance over time.
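One simple way such a feedback loop could work is an exponential moving average of task outcomes per capability, with the score biasing future assignment. This is an assumed design for illustration, not a description of the shipped learning mechanism.

```python
class FeedbackLoop:
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # learning rate for the moving average
        self.scores = {}     # capability -> smoothed success rate

    def record(self, capability, success):
        prev = self.scores.get(capability, 0.5)   # neutral prior
        # Exponential moving average over observed task outcomes.
        self.scores[capability] = prev + self.alpha * (float(success) - prev)

    def best(self, capabilities):
        # Prefer the capability with the highest smoothed success rate.
        return max(capabilities, key=lambda c: self.scores.get(c, 0.5))

fb = FeedbackLoop()
fb.record("code-generation", True)
fb.record("reasoning", False)
print(fb.best(["code-generation", "reasoning"]))  # code-generation
```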
2. Cross-Model Knowledge Synthesis
The system can combine insights from multiple LLMs to generate more comprehensive and accurate responses than any single model could provide.
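With asyncio (already in the stack), fanning a prompt out to several models and merging the answers is naturally concurrent. The sketch below stubs the model clients; real OpenAI, Gemini, and Claude clients would replace `ask_model`, and a real synthesizer would likely re-prompt an LLM with the collected answers rather than joining strings.

```python
import asyncio

async def ask_model(name: str, prompt: str) -> str:
    await asyncio.sleep(0)   # stand-in for a network call to an LLM API
    return f"{name} answer to: {prompt}"

async def synthesize(prompt: str, models=("gpt", "gemini", "claude")) -> str:
    # Query every model concurrently rather than one after another.
    answers = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    # Merge step: a real system would reconcile and rank the answers.
    return " | ".join(answers)

print(asyncio.run(synthesize("define RAG")))
```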
3. Hierarchical Prompt Engineering
Specialized prompt templates are used at each level of the hierarchy, optimized for the specific role and task of each agent.
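Per-role templates can be as simple as a mapping keyed by hierarchy level. The template text below is invented for illustration; the real templates are internal to the system.

```python
# Hypothetical role-specific templates, one per tier of the hierarchy.
PROMPTS = {
    "director": "You are the Director. Plan and delegate: {task}",
    "manager": "You manage the {domain} domain. Split into subtasks: {task}",
    "worker": "You are a {capability} specialist. Execute: {task}",
}

def render(role: str, **fields) -> str:
    # Fill the role's template with task-specific fields.
    return PROMPTS[role].format(**fields)

print(render("worker", capability="code-generation", task="write a parser"))
```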
4. Dynamic Resource Allocation
The Director agent can reallocate computational resources based on task priority and complexity, ensuring efficient use of available resources.
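Priority-plus-complexity scheduling can be modeled with a heap. This is an assumed mechanism sketched for illustration (the Director's actual allocator is not documented here): each task gets a score combining user priority and estimated complexity, and the lowest score runs first.

```python
import heapq

class Scheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0   # tie-breaker so equal scores stay FIFO

    def submit(self, task, priority, complexity):
        # Higher-complexity work at equal priority is scheduled earlier,
        # so long-running tasks start before quick ones pile up behind them.
        score = priority - complexity   # smaller score = runs sooner
        heapq.heappush(self._queue, (score, self._counter, task))
        self._counter += 1

    def next_task(self):
        return heapq.heappop(self._queue)[2]

s = Scheduler()
s.submit("summarize", priority=5, complexity=1)
s.submit("deep-analysis", priority=5, complexity=4)
print(s.next_task())  # deep-analysis
```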
5. Explainable AI
The system provides detailed explanations of its decision-making process, increasing transparency and user trust.
6. Autonomous Agent Collaboration
Agents can autonomously form teams to tackle complex problems, sharing context and knowledge as needed.
7. Continuous Learning
The system maintains a knowledge repository that grows over time, allowing it to build on past experiences and improve its capabilities.