Huncy — AI-Powered E-commerce Customer Service Platform
Quick Overview
TL;DR
- Agentic assistant with LangGraph; order-to-tracking via tool calls
- Async FastAPI microservices; SQLAlchemy models; unified API gateway
- Arize Phoenix tracing and evaluations; supports A/B testing
- Docker Compose deployment; built for scale and low latency
Core Technologies
- AI/ML: LangGraph, LangChain, Groq API, MCP
- Backend: FastAPI, SQLAlchemy, AsyncIO
- Observability: Arize Phoenix
- Deployment: Docker, Docker Compose
Project Overview
Huncy is an AI-powered e-commerce customer service platform that integrates a LangGraph conversational agent, async FastAPI microservices, MCP tool servers, and Arize Phoenix observability. The project showcases expertise in building intelligent conversational agents, API development, and scalable system design.
Key Components
1. LangGraph E-commerce Assistant
- Intelligent state management with TypedDict-based conversation history (see the sketch after this list)
- Tool integration for order lookup and transit tracking
- Multi-step reasoning across workflows (lookup → tracking → shipment status)
- Session-scoped conversation context
- Arize Phoenix for LLM tracing, evaluation, and monitoring
- Automated evaluation with custom e-commerce metrics and A/B tests
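
As a rough illustration of how these pieces fit together, here is a minimal LangGraph sketch: a TypedDict state carrying conversation history plus two stubbed workflow nodes. The node names (`lookup_order`, `track_shipment`) and state fields are illustrative assumptions, not the project's actual identifiers; real nodes would call the e-commerce API or MCP tools.

```python
from typing import Annotated, TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    # Conversation history; the add_messages reducer appends rather than overwrites.
    messages: Annotated[list, add_messages]
    # Hypothetical fields for the lookup -> tracking workflow.
    order_id: str | None
    tracking_status: str | None


def lookup_order(state: AgentState) -> dict:
    # Stub: a real node would call the e-commerce API / order-lookup MCP server.
    return {"order_id": "ORD-123"}


def track_shipment(state: AgentState) -> dict:
    # Stub: a real node would query the transit-tracking endpoint.
    return {"tracking_status": "in_transit"}


builder = StateGraph(AgentState)
builder.add_node("lookup_order", lookup_order)
builder.add_node("track_shipment", track_shipment)
builder.add_edge(START, "lookup_order")
builder.add_edge("lookup_order", "track_shipment")  # lookup -> tracking
builder.add_edge("track_shipment", END)
graph = builder.compile()

result = graph.invoke({"messages": [("user", "Where is my order?")]})
print(result["tracking_status"])  # -> "in_transit"
```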
2. E-commerce API Service
- SQLAlchemy models for Orders and Transit tracking (sketched after this list)
- Full async/await for high throughput
- RESTful endpoints with proper status codes
- Pydantic schemas for validation
- Comprehensive error handling
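
A hedged sketch of this pattern, assuming a simplified Order model and a SQLite URL; the real service defines richer Orders and Transit models:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import String
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Order(Base):
    # Hypothetical schema; the real model tracks more fields.
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[str] = mapped_column(String(32))


class OrderOut(BaseModel):
    id: int
    status: str
    model_config = {"from_attributes": True}


engine = create_async_engine("sqlite+aiosqlite:///orders.db")
Session = async_sessionmaker(engine, expire_on_commit=False)


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Create tables on startup so the sketch runs standalone.
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    yield


app = FastAPI(lifespan=lifespan)


@app.get("/orders/{order_id}", response_model=OrderOut)
async def get_order(order_id: int):
    # Fully async read path with a proper 404 when the order is missing.
    async with Session() as session:
        order = await session.get(Order, order_id)
        if order is None:
            raise HTTPException(status_code=404, detail="Order not found")
        return order
```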
3. MCP (Model Context Protocol) Integration
- Order Lookup MCP server for order status queries (sketched below)
- Math Operations MCP as an example tool server
- Multi-server client connecting several MCP servers simultaneously
- Clean tool abstraction separating AI logic from external services
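
For flavor, a minimal order-lookup server in the style of the MCP Python SDK's FastMCP helper; the tool name and in-memory data are assumptions:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")

# Stand-in data; the real server would call the e-commerce API service.
_ORDERS = {"ORD-123": "shipped", "ORD-456": "processing"}


@mcp.tool()
def order_status(order_id: str) -> str:
    """Return the current status of an order."""
    return _ORDERS.get(order_id, "order not found")


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default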
4. Client Applications & Testing
- Async client for end-to-end API testing (see the sketch after this list)
- Jupyter notebooks for interactive development
- Twitter automation integration
- Refund processing workflow implementation
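
A sketch of what the async end-to-end client might look like; the httpx dependency and the gateway URL are both assumptions:

```python
import asyncio

import httpx


async def main() -> None:
    # Hypothetical base URL; point this at your gateway.
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        resp = await client.get("/orders/1")
        resp.raise_for_status()
        print(resp.json())


asyncio.run(main())
```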
Technical Highlights
Advanced AI Integration
- Multi-model support: Groq models (Llama, Qwen, Gemma)
- Conversation memory and history management
- Dynamic tool calling based on user intent
- Robust error recovery and user guidance
- LLM observability with Arize Phoenix (setup sketched after this list)
- Automated evaluation: response quality, hallucination detection, relevance scoring
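
To make the observability bullet concrete, a minimal Phoenix tracing setup, assuming the arize-phoenix-otel and openinference-instrumentation-langchain packages and a Phoenix collector running locally; the project name is a placeholder:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from phoenix.otel import register

# Register an OpenTelemetry tracer provider pointed at the local Phoenix collector.
tracer_provider = register(project_name="huncy-assistant")

# Instrument LangChain/LangGraph so agent runs emit traces viewable in Phoenix.
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```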
Microservices Architecture
- Clear separation of concerns between the assistant, API, and MCP services
- API gateway pattern for orchestration (sketched after this list)
- Async SQLAlchemy data layer
- Containerization with Docker for consistent deployments
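
One plausible reading of the gateway pattern mentioned above: a thin FastAPI app that proxies requests to the internal services. The service hostname and route are hypothetical:

```python
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI(title="gateway")
ORDERS_SERVICE = "http://orders-api:8001"  # hypothetical internal hostname


@app.get("/api/orders/{order_id}")
async def proxy_order(order_id: int) -> dict:
    # Forward the request to the orders service and relay its errors.
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{ORDERS_SERVICE}/orders/{order_id}")
    if resp.status_code == 404:
        raise HTTPException(status_code=404, detail="Order not found")
    resp.raise_for_status()
    return resp.json()
```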
Development Excellence
- Comprehensive type hints
- Async programming for performance
- Environment-based configuration with dotenv (see the sketch below)
- Multiple testing entry points: async clients, notebooks, and end-to-end API checks
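
The dotenv-based configuration could look like the following; the variable names here are assumptions, not the project's actual keys:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # read .env into the process environment

# Hypothetical keys; the real project defines its own.
GROQ_API_KEY = os.environ["GROQ_API_KEY"]
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite+aiosqlite:///orders.db")
```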
Business Value
- Automates customer service to reduce ticket volume
- Scalable for high-volume interactions
- Multi-channel readiness: web, mobile, social
- Fast responses using Groq's optimized inference
Innovation & Learning
This project demonstrates advanced understanding of:
- Modern AI agent frameworks (LangGraph vs. traditional LangChain)
- Production AI observability and monitoring (Arize Phoenix)
- LLM evaluation and performance optimization
- Microservices architecture and async Python development
- API design and integration
- Container orchestration and real-world AI application deployment

The codebase reflects production-ready practices, including proper error handling, logging, configuration management, and scalable architecture design.