AI Artifact Management & Traceability via H2O MLOps | Part 9
Video description
How H2O MLOps links registered models to their experiments, datasets, artifacts, and managed scoring runtimes.
Models should never exist in isolation from their origin. H2O MLOps automatically links every registered model back to its exact Driverless AI experiment—including training configurations, comparison data, AutoDoc reports, feature analysis, and MOJO scoring pipelines. System administrators can pre-configure containerized scoring runtimes tailored to standard, GPU-enabled, or regulated environments, allowing data scientists to deploy securely without requiring infrastructure expertise.
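The lineage relationship described above can be sketched as a small data model. This is a minimal illustration, not the H2O MLOps client API: the class names, fields, and the `lineage()` helper are hypothetical, chosen only to show how a registered model carries a link back to its originating experiment and artifacts.

```python
from dataclasses import dataclass

# Hypothetical types for illustration; the real H2O MLOps objects
# and field names differ (see the Python client docs linked below).
@dataclass
class Experiment:
    experiment_id: str
    training_config: dict
    artifacts: list  # e.g. AutoDoc report, MOJO scoring pipeline

@dataclass
class RegisteredModel:
    model_id: str
    version: int
    experiment: Experiment  # every registered model links back to its experiment

    def lineage(self) -> dict:
        """Return a flat, machine-readable view of the model's origin."""
        return {
            "model": f"{self.model_id}:v{self.version}",
            "experiment": self.experiment.experiment_id,
            "artifacts": list(self.experiment.artifacts),
        }

exp = Experiment("dai-exp-42", {"task": "classification"},
                 ["autodoc_report.pdf", "pipeline.mojo"])
model = RegisteredModel("churn-model", 3, exp)
print(model.lineage()["experiment"])  # dai-exp-42
```

The point of the structure is that the experiment reference travels with the model, so training configuration and artifacts are always reachable from the registered version rather than stored separately.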
Technical Capabilities & Resources
➤ Linked Model Metrics & Artifacts: Auto-link evaluation metrics, AutoDoc reports, and scoring pipelines to registered models for complete lineage.
🔗 https://docs.h2o.ai/mlops/models/understand-models
➤ Experiment Management via API: Programmatically query and manage experiments linked to registered models.
🔗 https://docs.h2o.ai/mlops/py-client/examples/manage-experiments
➤ Model Import & Export (MOJO & Python Pipelines): Import external models or export Driverless AI MOJO and Python scoring pipelines.
🔗 https://docs.h2o.ai/mlops/models/mlops-model-support#h2o-driverless-ai-mojo-pipeline--python-scoring-pipeline
➤ Managed Container Runtimes: Admin-configured runtimes for specific workloads, GPU requirements, and regulatory environments.
🔗 https://docs.h2o.ai/mlops/model-deployments/scoring-runtimes
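The managed-runtime idea from the last bullet can be sketched as a simple lookup: administrators register container runtimes per workload class, and data scientists pick by requirements rather than by infrastructure detail. The runtime names, fields, and images below are invented for illustration; the actual runtime registry is documented at the link above.

```python
# Hypothetical admin-configured runtime registry (names and images invented).
RUNTIMES = [
    {"name": "standard-cpu", "gpu": False, "regulated": False, "image": "scorer:cpu"},
    {"name": "gpu-scorer",   "gpu": True,  "regulated": False, "image": "scorer:gpu"},
    {"name": "fips-scorer",  "gpu": False, "regulated": True,  "image": "scorer:fips"},
]

def pick_runtime(needs_gpu: bool, regulated: bool) -> dict:
    """Return the first configured runtime matching the workload requirements."""
    for rt in RUNTIMES:
        if rt["gpu"] == needs_gpu and rt["regulated"] == regulated:
            return rt
    raise LookupError("no runtime configured for these requirements")

print(pick_runtime(needs_gpu=True, regulated=False)["name"])  # gpu-scorer
```

Matching on declared requirements keeps deployment choices in admin-controlled configuration, which is what lets data scientists deploy to GPU or regulated environments without infrastructure expertise.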