Production-grade automated pipeline that deploys ML models to Google Cloud Platform with zero manual intervention, Docker containerization, and serverless autoscaling
An enterprise-grade MLOps pipeline that automates the entire journey from code commit to production deployment with comprehensive testing, containerization, and cloud-native serving
Complete automation using GitHub Actions for continuous integration and deployment. Triggered automatically on every commit to the main branch, with comprehensive testing, building, and deployment stages; independent jobs run in parallel for speed.
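A commit-triggered workflow like the one described here might be sketched roughly as follows. This is a minimal illustrative fragment, not the repository's actual workflow file: the job names, Python version, and `deploy.yml` filename are all assumptions.

```yaml
# .github/workflows/deploy.yml — illustrative sketch only
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest
  deploy:
    needs: test          # deployment is gated on tests passing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build the image, push to Artifact Registry, deploy to Cloud Run
```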
Multi-stage Docker builds optimized for production with minimal image size. Includes model artifacts, dependencies, and serving infrastructure in a single portable container. Versioned images stored in GCP Artifact Registry.
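The build-and-push step can be sketched in two commands. The registry path, repository name, and service name below are placeholders, not the project's actual values:

```shell
# Illustrative build-and-push; registry coordinates are placeholders.
REGION="us-central1"
PROJECT_ID="your-project-id"
IMAGE="${REGION}-docker.pkg.dev/${PROJECT_ID}/ml-models/ml-model-service"
# Version the image by commit SHA; fall back to "dev" outside a git repo.
TAG="$(git rev-parse --short HEAD 2>/dev/null || echo dev)"

if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "${IMAGE}:${TAG}" .
  docker push "${IMAGE}:${TAG}"
else
  echo "docker or Dockerfile not available; skipping build"
fi
```

Tagging by commit SHA is what makes the traceability and rollback described below possible: every image in Artifact Registry maps back to exactly one commit.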
Deploy to GCP Cloud Run for fully managed serverless compute. Automatic scaling from zero to thousands of instances based on traffic. Pay only for what you use with per-request billing and built-in load balancing.
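The scaling behavior described above is configured at deploy time. A hedged sketch, with placeholder service name, region, and image, and illustrative scaling limits:

```shell
# Illustrative Cloud Run deploy; all names and limits are placeholders.
SERVICE_NAME="ml-model-service"
REGION="us-central1"
IMAGE="us-central1-docker.pkg.dev/your-project-id/ml-models/ml-model-service:latest"

if command -v gcloud >/dev/null 2>&1; then
  # --min-instances 0 scales to zero when idle; --concurrency is how many
  # requests one instance handles before the autoscaler adds another.
  gcloud run deploy "$SERVICE_NAME" \
    --image "$IMAGE" \
    --region "$REGION" \
    --min-instances 0 \
    --max-instances 100 \
    --concurrency 80
fi
```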
Enterprise security with Google Workload Identity for authentication. Secrets managed via GitHub Secrets and GCP Secret Manager. HTTPS endpoints with automatic SSL certificates and DDoS protection.
Complete version control with DVC for data and models. Every deployment tagged with its Git commit SHA for full traceability. Rollback to any previous version with a single command.
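On Cloud Run, a single-command rollback amounts to pointing traffic back at a known-good revision. A sketch, where the service name, region, and revision ID are hypothetical placeholders (real revision IDs come from `gcloud run revisions list`):

```shell
# Hypothetical values — substitute your own.
SERVICE_NAME="ml-model-service"
REGION="us-central1"
PREVIOUS_REVISION="${SERVICE_NAME}-00041-abc"   # a known-good revision ID

if command -v gcloud >/dev/null 2>&1; then
  # Route 100% of traffic back to the previous revision (zero downtime).
  gcloud run services update-traffic "$SERVICE_NAME" \
    --region "$REGION" \
    --to-revisions "${PREVIOUS_REVISION}=100"
fi
```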
Health checks, logging, monitoring, and error handling built in. FastAPI endpoints with automatic request validation and OpenAPI documentation. Graceful degradation and circuit breaker patterns implemented.
Cloud-native architecture leveraging GCP services for scalable, reliable ML model serving
Fully automated deployment workflow from code commit to production
• Automated testing (unit, integration)
• Code quality checks (linting, formatting)
• Dependency vulnerability scanning
• Model validation tests
• Performance benchmarking
• Multi-stage Docker build
• Optimized layer caching
• Security vulnerability scanning
• Image signing & verification
• Push to Artifact Registry
• Zero-downtime deployment
• Automatic traffic routing
• Health check validation
• Rollback on failure
• Production monitoring
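The health-check validation step in the deploy stage can be sketched as a small polling loop. The `/health` path, retry count, and interval here are assumptions, not the pipeline's actual values:

```shell
# Poll an endpoint until it reports healthy, or give up after N attempts.
check_health() {
  local url="$1" attempts="${2:-5}"
  local i
  for i in $(seq 1 "$attempts"); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    sleep 1
  done
  echo "unhealthy"
  return 1
}

# Example: gate a rollout on the new revision's health endpoint.
# check_health "https://your-service-url.run.app/health" 10 || exit 1
```

A non-zero exit from a gate like this is what lets the pipeline trigger the automatic rollback listed above instead of routing traffic to a broken revision.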
Built with industry-leading tools and cloud-native technologies
From git push to production in under 5 minutes. Fully automated pipeline with parallel testing, building, and deployment stages for maximum speed.
Cloud Run automatically scales from zero to thousands of instances. Handle traffic spikes effortlessly with per-request autoscaling.
Pay only for actual requests. Scale to zero when idle. No charges for idle time, making it perfect for development and staging environments.
Built-in DDoS protection, automatic HTTPS, Workload Identity for authentication, and vulnerability scanning at every deployment.
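One common way to set up keyless GitHub-to-GCP authentication is Workload Identity Federation. A sketch of the one-time setup, where the pool and provider names are illustrative and the attribute mapping is a typical minimal choice, not necessarily this project's configuration:

```shell
# Illustrative Workload Identity Federation setup for GitHub Actions.
PROJECT_ID="your-project-id"
POOL="github-pool"
PROVIDER="github-provider"

if command -v gcloud >/dev/null 2>&1; then
  gcloud iam workload-identity-pools create "$POOL" \
    --project "$PROJECT_ID" --location global
  gcloud iam workload-identity-pools providers create-oidc "$PROVIDER" \
    --project "$PROJECT_ID" --location global \
    --workload-identity-pool "$POOL" \
    --issuer-uri "https://token.actions.githubusercontent.com" \
    --attribute-mapping "google.subject=assertion.sub,attribute.repository=assertion.repository"
fi
```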
Integrated logging and monitoring with Cloud Logging and Cloud Monitoring. Track metrics, logs, and traces in real-time.
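Recent request logs for a Cloud Run service can be pulled straight from Cloud Logging. The service name in the filter below is a placeholder:

```shell
# Fetch the 20 most recent log entries for the service (placeholder name).
SERVICE="ml-model-service"

if command -v gcloud >/dev/null 2>&1; then
  gcloud logging read \
    "resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"${SERVICE}\"" \
    --limit 20 --format json
fi
```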
Blue-green deployments with automatic traffic switching. Health checks ensure new versions are stable before routing traffic.
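On Cloud Run, a gated blue-green rollout can be expressed as "deploy with no traffic, then promote". A sketch with placeholder names; the canary percentage mentioned in the comment is illustrative:

```shell
# Illustrative gated rollout; service, region, and image are placeholders.
SERVICE_NAME="ml-model-service"
REGION="us-central1"

if command -v gcloud >/dev/null 2>&1; then
  # The new revision receives 0% of traffic until explicitly promoted.
  gcloud run deploy "$SERVICE_NAME" --region "$REGION" \
    --image "us-central1-docker.pkg.dev/your-project-id/ml-models/ml-model-service:latest" \
    --no-traffic
  # After health checks pass, shift all traffic to the new revision
  # (for a canary, use --to-revisions NEW_REVISION=10 instead).
  gcloud run services update-traffic "$SERVICE_NAME" \
    --region "$REGION" --to-latest
fi
```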
Deploy your ML model to production in minutes with this step-by-step guide
Get started by cloning the repository and setting up your local environment
git clone https://github.com/RohitDusane/MLOps-DVC-Git-Actions.git
cd MLOps-DVC-Git-Actions
Create a new GCP project and enable required APIs
# Create project
gcloud projects create YOUR_PROJECT_ID
# Enable APIs
gcloud services enable run.googleapis.com
gcloud services enable artifactregistry.googleapis.com
gcloud services enable cloudbuild.googleapis.com
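Two more one-time setup commands are typically needed before the pipeline's first run: an Artifact Registry repository to hold images and a service account for CI/CD. The repository, location, and account names below are illustrative:

```shell
# Illustrative one-time setup; names are placeholders.
REPO="ml-models"

if command -v gcloud >/dev/null 2>&1; then
  # Docker image repository for the pipeline's builds
  gcloud artifacts repositories create "$REPO" \
    --repository-format docker --location us-central1
  # Service account the GitHub Actions workflow will act as
  gcloud iam service-accounts create github-deployer \
    --display-name "GitHub Actions deployer"
fi
```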
Add GCP credentials to GitHub repository secrets for CI/CD
# Required GitHub Secrets:
GCP_PROJECT_ID: your-project-id
GCP_SA_KEY: service-account-json-key
GCP_REGION: us-central1
SERVICE_NAME: ml-model-service
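The secrets above can also be set from a terminal with the GitHub CLI instead of the web UI. The values and the key filename here are placeholders:

```shell
# Illustrative secret setup via the GitHub CLI (run inside the repo clone).
SECRET_FILE="service-account-key.json"   # downloaded SA key (placeholder)

if command -v gh >/dev/null 2>&1 && [ -f "$SECRET_FILE" ]; then
  gh secret set GCP_PROJECT_ID --body "your-project-id"
  gh secret set GCP_REGION --body "us-central1"
  gh secret set SERVICE_NAME --body "ml-model-service"
  gh secret set GCP_SA_KEY < "$SECRET_FILE"
fi
```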
Train your ML model locally or use the provided example model
# Install dependencies
pip install -r requirements.txt
# Train model
python src/train.py
# Test API locally
uvicorn app:app --reload
Push to main branch to trigger automatic deployment
git add .
git commit -m "Deploy ML model"
git push origin main
# GitHub Actions will automatically:
# 1. Run tests
# 2. Build Docker image
# 3. Push to Artifact Registry
# 4. Deploy to Cloud Run
Once deployed, your API will be available at the Cloud Run URL
# Test your deployed API
curl -X POST https://your-service-url.run.app/predict \
-H "Content-Type: application/json" \
-d '{"features": [1, 2, 3, 4, 5]}'
# View API documentation
open https://your-service-url.run.app/docs
Try out the deployed credit prediction model with real-time inference
🚀 Try Live Demo
Designed for production-grade ML deployment with best practices
Zero manual steps from commit to production. GitHub Actions handles testing, building, scanning, and deployment automatically.
Automated testing, linting, and security scanning at every stage. Deployments only proceed if all checks pass.
Serverless architecture scales automatically based on demand. Handle 1 request or 1 million without configuration changes.
Simple workflow: code, commit, push. No complex deployment procedures or manual server configuration required.
Built-in health checks, automatic rollbacks, and zero-downtime deployments ensure your service is always available.
Comprehensive logging and monitoring with GCP's native tools. Track every request, error, and performance metric.