Developed a real-time, browser-based video editor using Remotion, enabling users to edit, preview, and render videos entirely in the browser, with AI suggesting which effects to apply. Features include converting a long-form video into multiple short reels in about 2 minutes, enhancing reels with motion graphics, and AI-powered tools such as background noise removal.
Architected a distributed video processing system that handles concurrent video renders using Kubernetes orchestration and RabbitMQ message queuing. By leveraging browser-based rendering with Remotion, the platform processes video edits 3x faster than traditional desktop software, cutting user wait times from minutes to seconds. AI-powered features such as background noise removal and automatic transition detection reduced manual editing time by 60%.
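As a hedged sketch of the queuing design above: render requests can be serialized as JSON messages for RabbitMQ workers to consume. The `RenderJob` shape, field names, and priority rule below are illustrative assumptions, not the project's actual schema.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class RenderJob:
    """Hypothetical render-job payload published to a RabbitMQ render queue."""
    project_id: str
    composition: str            # Remotion composition id to render
    start_frame: int
    end_frame: int
    is_preview: bool = False    # assumed: previews jump the queue for fast feedback
    job_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def priority(self) -> int:
        # Assumed rule: interactive previews outrank full exports.
        return 9 if self.is_preview else 4

    def to_message(self) -> bytes:
        # RabbitMQ message bodies are bytes; JSON keeps workers
        # language-agnostic (Node.js Remotion workers can parse it directly).
        return json.dumps(asdict(self)).encode("utf-8")

# A worker would decode the body back into a job dict:
job = RenderJob("proj-42", "MainTimeline", 0, 300, is_preview=True)
decoded = json.loads(job.to_message())
```

Keeping the payload plain JSON is what lets the Python API services and the Node.js rendering workers share one queue without a common serialization library.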
Built on a microservices architecture: FastAPI for API services, Node.js workers for Remotion rendering, and Python with Ollama for AI model inference. Deployed on Azure as containerized Docker services, achieving 99.5% uptime and supporting thousands of simultaneous editing sessions. MongoDB stores user projects and metadata, while OpenCV performs real-time video analysis to drive AI-informed effect suggestions.
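A minimal sketch of the transition-detection idea mentioned above: compare the mean absolute difference between consecutive grayscale frames and flag large jumps as hard cuts. In production this would operate on OpenCV frame arrays; here plain nested lists stand in, and the threshold value is an illustrative assumption.

```python
def mean_abs_diff(a, b):
    """Mean absolute per-pixel difference between two equal-size grayscale frames."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def detect_cuts(frames, threshold=50.0):
    """Return indices where a frame differs sharply from its predecessor (a cut)."""
    cuts = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Synthetic 2x2-pixel "video": three dark frames, then a jump to bright frames.
dark = [[10, 10], [10, 10]]
bright = [[200, 200], [200, 200]]
frames = [dark, dark, dark, bright, bright]
cuts = detect_cuts(frames)  # → [3]
```

Detected cut points like these are what an AI layer could use to propose transition effects at shot boundaries.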