API Usage
The Moveo One API lets developers interact with predictive models programmatically: fetch predictions, inspect models, trigger retraining, and retrieve model metrics.
All endpoints require your organization token via Bearer authentication.
Base URL:
https://api.moveo.one/v1
Authentication
Use your organization token in the header for all API requests:
-H "Authorization: Bearer YOUR_TOKEN_HERE"
Example test call:
curl -X GET https://api.moveo.one/v1/health \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Expected response:
{ "status": "ok", "version": "1.0.0" }
1. Get Predictions
Endpoint:
GET /predict
Fetches real-time predictions for a given user or session.
Example request:
curl -X GET "https://api.moveo.one/v1/predict?userId=user_123" \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Example response:
{
  "userId": "user_123",
  "predictions": [
    {
      "model": "checkout_completion",
      "label": "likely_to_complete",
      "confidence": 0.87
    },
    {
      "model": "session_dropout",
      "label": "high_dropout_risk",
      "confidence": 0.68
    }
  ]
}
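For reference, the same request from Python (a sketch reusing the authenticated session from the Authentication example; the 0.8 confidence cutoff is an arbitrary illustration, not an API default):
resp = session.get(f"{BASE_URL}/predict", params={"userId": "user_123"})
resp.raise_for_status()
for pred in resp.json()["predictions"]:
    # 0.8 is an arbitrary confidence cutoff chosen for illustration
    if pred["confidence"] >= 0.8:
        print(f"{pred['model']}: {pred['label']} ({pred['confidence']:.2f})")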
2. List Available Models
Endpoint:
GET /models
Retrieves all active models for your organization.
Example request:
curl -X GET https://api.moveo.one/v1/models \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Example response:
[
  {
    "id": "mdl_checkout_completion",
    "targetEvent": "checkout_complete",
    "accuracy": 0.91,
    "lastRetrained": "2025-10-09T14:20:00Z"
  },
  {
    "id": "mdl_user_retention",
    "targetEvent": "session_end",
    "accuracy": 0.88,
    "lastRetrained": "2025-10-06T10:15:00Z"
  }
]
3. Get Model Details
Endpoint:
GET /models/{modelId}
Example request:
curl -X GET https://api.moveo.one/v1/models/mdl_checkout_completion \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Example response:
{
  "id": "mdl_checkout_completion",
  "targetEvent": "checkout_complete",
  "accuracy": 0.91,
  "precision": 0.88,
  "recall": 0.86,
  "featureImportance": [
    { "feature": "scroll_depth", "weight": 0.41 },
    { "feature": "time_in_cart", "weight": 0.29 }
  ]
}
4. Trigger Retraining
Endpoint:
POST /models/retrain
Manually starts retraining for the specified model.
Example request:
curl -X POST https://api.moveo.one/v1/models/retrain \
  -H "Authorization: Bearer YOUR_TOKEN_HERE" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "mdl_checkout_completion"
      }'
✅ Example response:
{
  "modelId": "mdl_checkout_completion",
  "status": "retraining",
  "startedAt": "2025-10-09T15:30:00Z"
}
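The equivalent POST from Python (a sketch reusing the authenticated session from the Authentication example):
resp = session.post(
    f"{BASE_URL}/models/retrain",
    json={"modelId": "mdl_checkout_completion"},  # json= sets Content-Type: application/json
)
resp.raise_for_status()
job = resp.json()
print(job["status"], job["startedAt"])  # e.g. "retraining", "2025-10-09T15:30:00Z"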
5. Retrieve Monitoring Data
Endpoint:
GET /models/monitor/{modelId}
Provides health and performance metrics for a deployed model.
Example request:
curl -X GET https://api.moveo.one/v1/models/monitor/mdl_checkout_completion \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Example response:
{
  "modelId": "mdl_checkout_completion",
  "accuracy": 0.91,
  "latencyMs": 75,
  "throughput": 410,
  "drift": {
    "metric": "confidence_mean",
    "delta": -0.12,
    "status": "moderate"
  }
}
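A small Python sketch that polls this endpoint and flags drift; the 0.1 alerting threshold is an arbitrary choice for illustration, not a documented default:
resp = session.get(f"{BASE_URL}/models/monitor/mdl_checkout_completion")
resp.raise_for_status()
metrics = resp.json()
drift = metrics["drift"]
# 0.1 is an arbitrary alerting threshold chosen for illustration
if abs(drift["delta"]) > 0.1:
    print(f"Drift warning for {metrics['modelId']}: {drift['metric']} shifted by {drift['delta']}")
print(f"latency={metrics['latencyMs']}ms throughput={metrics['throughput']} accuracy={metrics['accuracy']}")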
6. Export Predictions (Batch Mode)
Endpoint:
GET /export/predictions
Retrieves historical model predictions for analysis or auditing.
Example request:
curl -X GET "https://api.moveo.one/v1/export/predictions?since=2025-10-01" \
  -H "Authorization: Bearer YOUR_TOKEN_HERE"
✅ Example response:
[
  {
    "userId": "user_123",
    "model": "checkout_completion",
    "confidence": 0.87,
    "timestamp": "2025-10-09T14:00:05Z"
  },
  {
    "userId": "user_456",
    "model": "session_dropout",
    "confidence": 0.72,
    "timestamp": "2025-10-09T14:03:11Z"
  }
]
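A sketch that writes the exported predictions to a CSV file for offline analysis (the file name is arbitrary, and the response is assumed to be a flat JSON array as shown above):
import csv

resp = session.get(f"{BASE_URL}/export/predictions", params={"since": "2025-10-01"})
resp.raise_for_status()
rows = resp.json()

with open("predictions_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["userId", "model", "confidence", "timestamp"])
    writer.writeheader()
    writer.writerows(rows)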
Error Handling
All API errors follow a standard format:
{
  "error": "invalid_token",
  "message": "The provided API token is invalid or expired."
}
| HTTP Code | Description | 
|---|---|
| 200 | Success | 
| 400 | Invalid request or missing parameter | 
| 401 | Unauthorized (invalid token) | 
| 404 | Resource not found | 
| 500 | Internal server error | 
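A small Python helper that surfaces this standard error format (a sketch; it assumes error bodies contain the error and message fields shown above):
def check(resp):
    """Return the JSON body on success, otherwise raise with the API's error details."""
    if resp.ok:
        return resp.json()
    try:
        err = resp.json()
        detail = f"{err.get('error')}: {err.get('message')}"
    except ValueError:
        detail = resp.text
    raise RuntimeError(f"API error {resp.status_code}: {detail}")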
Rate Limits
Default API rate limits:
- 300 requests/minute per token
- Short bursts of up to 600 requests/minute are temporarily allowed
If exceeded, you’ll receive:
{
  "error": "rate_limited",
  "retry_after": 60
}
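One way to honor the retry_after hint from Python (a sketch reusing the session from the Authentication example; the 3-attempt cap and 60-second fallback are arbitrary choices):
import time

def get_with_retry(path, params=None, max_attempts=3):
    """GET with a simple retry loop that waits retry_after seconds when rate limited."""
    for _ in range(max_attempts):
        resp = session.get(f"{BASE_URL}{path}", params=params)
        if resp.ok:
            return resp.json()
        try:
            body = resp.json()
        except ValueError:
            body = {}
        if body.get("error") == "rate_limited":
            time.sleep(body.get("retry_after", 60))  # fall back to 60s if the hint is missing
            continue
        resp.raise_for_status()  # not a rate limit: surface the error immediately
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")

# usage: get_with_retry("/predict", {"userId": "user_123"})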
Best Practices
✅ Recommended
- Cache model metadata and avoid frequent polling
- Use webhooks for real-time prediction updates
- Securely store your API tokens
❌ Avoid
- Sending duplicate prediction requests for the same user
- Triggering retrains too frequently
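Following the caching recommendation above, a minimal Python sketch that keeps the /models response for a fixed time instead of re-fetching on every call (the 5-minute TTL is an arbitrary choice):
import time

_models_cache = {"data": None, "fetched_at": 0.0}
MODELS_TTL_SECONDS = 300  # arbitrary 5-minute cache lifetime

def get_models():
    """Return the /models response, refreshing the local copy only after the TTL expires."""
    if _models_cache["data"] is None or time.time() - _models_cache["fetched_at"] > MODELS_TTL_SECONDS:
        resp = session.get(f"{BASE_URL}/models")
        resp.raise_for_status()
        _models_cache["data"] = resp.json()
        _models_cache["fetched_at"] = time.time()
    return _models_cache["data"]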
TODOs
- Add /models/delete endpoint reference
- Add section for bulk prediction requests (/predict/batch)
- Add Python client example snippet