Inside Ori’s Web UI: A Developer-Friendly, Operator-Ready Console for the AI Cloud

“Simplicity is not about reducing capability; it’s about removing friction so developers and operators can focus on what actually matters: building, deploying, and running AI without wrestling with the machinery behind it.”
The AI ecosystem is advancing rapidly, and the most impactful platforms are those that transform advanced infrastructure into an effortless experience. Ori’s Web UI was designed with a clear goal: bring the full breadth of an AI-native cloud platform into a clean, intuitive, and powerful interface that works just as well for AI/ML developers as it does for cloud operators and administrators orchestrating infrastructure fleets. From deploying AI services in minutes to managing organizations, quotas, and locations at scale, the Ori Console brings the AI cloud into a single, elegant operational surface.
An Intuitive Interface for Developers and ML Teams
Dashboard: A Clear View of Your AI Environment
The Dashboard gives users an instant snapshot of their environment, from running GPU instances to active endpoints, storage consumption, and billing status. Everything important surfaces in one place: current usage across Virtual Machines, Serverless Kubernetes, Endpoints, Supercomputers, and Storage. The UI highlights both operational activity and next steps with helpful “Get Started” guides on the right, making it easy for new users to onboard and for experienced teams to quickly navigate to their workloads.

Settings: Seamless Management of Accounts and Resources
Within Settings, users can manage billing, API tokens, SSH keys, and quotas through clean, well-structured panels. The layout is straightforward and minimizes clutter, making it easy to update access credentials, monitor resource allocation, and configure usage safeguards. Everything, from payment methods to GPU limits, is discoverable at a glance, which helps teams stay on top of cloud management.
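The SSH keys panel accepts standard public keys. A key pair suitable for uploading can be generated locally with OpenSSH; the file path and comment below are illustrative:

```shell
# Generate an Ed25519 key pair; the public half (.pub) is what
# gets pasted into the console's SSH keys panel.
ssh-keygen -t ed25519 -C "ori-console" -f ./ori_key -N ""

# Print the public key to copy into the UI.
cat ./ori_key.pub
```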

AI Cloud Services
Supercomputers: With Ori Supercomputers, you can self-serve powerful bare-metal GPU clusters directly from the Web UI, no approvals or reservations needed. You can launch multi-node clusters with a click and instantly get access to hundreds of top-tier GPUs, optimized for large-scale training or inference. At any time, you can scale your cluster up or down by adding or removing nodes to match your workload requirements.

GPU Instances (Virtual Machines): Ori GPU Instances provide developers with on-demand access to high-performance compute optimized for training, fine-tuning, and inference. The Ori Web UI lets users select GPU types, choose regions, attach storage, and launch VMs in a single click. Each instance comes preconfigured with optimized OS images and optional init scripts, ensuring fast environment setup.
Serverless Kubernetes: Ori Serverless Kubernetes helps you deploy GPU-powered workloads without managing clusters, nodes, or infrastructure overhead. The UI abstracts away Kubernetes operations, letting users define pods, scale workloads, and monitor performance through clean, guided workflows. Developers can focus entirely on containers and job logic while Ori handles scheduling, GPU placement, and orchestration behind the scenes.
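For a sense of what the UI abstracts away, a GPU workload in plain Kubernetes is expressed as a pod spec with a GPU resource request. The sketch below uses standard Kubernetes fields; the names and image are illustrative, and on Ori the equivalent is generated for you:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job                     # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: pytorch/pytorch:latest   # illustrative image
      command: ["python", "train.py"]
      resources:
        limits:
          nvidia.com/gpu: 1           # request one GPU
```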
Inference Endpoints: With Ori Inference Endpoints, you can self-serve scalable, production-ready model endpoints with a single click: select a model, configure your settings, and deploy an API endpoint in minutes. Endpoints automatically scale up under load and scale to zero when idle, giving you flexibility and cost-efficiency. You can choose between serverless token-based billing or dedicated GPU-backed endpoints. Endpoints also come with an interactive Playground in the UI, allowing users to test prompts, evaluate responses, and iterate on model behavior before integrating it into applications.

Model Registry: The Model Registry offers a centralized system for storing models, tracking versions, and promoting artifacts into production workflows. Users can upload new models to a location of their choice, including Private locations, add version information, and deploy models. It brings clarity to model lifecycle management and ensures organizations always know what’s running and where.
Fine-Tuning Studio: Fine-Tuning Studio provides a simple way to customize foundation models using your own datasets. The UI guides users through dataset selection, training configuration, and job execution while abstracting away infrastructure provisioning. Users can select which checkpoint to register based on the training and validation loss for each epoch. Once registered, the model is uploaded and made available in the Model Registry configured for the chosen location.
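The checkpoint-selection step amounts to picking the epoch that minimizes validation loss rather than training loss. A minimal Python sketch of that rule, with made-up loss values (the Studio surfaces these numbers for you):

```python
# Per-epoch (epoch, train_loss, val_loss) as reported after a fine-tuning run.
# The values below are illustrative.
history = [
    (1, 1.92, 1.75),
    (2, 1.41, 1.38),
    (3, 1.10, 1.21),
    (4, 0.95, 1.24),  # val loss rises while train loss falls: likely overfitting
]

# The checkpoint with the lowest validation loss is the natural
# candidate to register, even though a later epoch has lower train loss.
best_epoch, _, best_val = min(history, key=lambda row: row[2])
print(best_epoch, best_val)  # epoch 3, val loss 1.21
```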
Object Storage: Ori Object Storage gives ML teams a high-performance, scalable repository for datasets, model artifacts, logs, and pipelines. Whether storing fine-tuning datasets or multi-GB checkpoints, the storage interface is built for ML workflows, with robust performance.
An Operational Cockpit for Cloud Operators and Administrators
Beyond developer workflows, Ori AI Fabric’s Web UI offers a powerful operational control panel for administrators running private, hybrid, or sovereign AI clouds. The console exposes the full set of governance, infrastructure, and lifecycle tools needed to manage large-scale environments.
Locations
The Locations view provides a global map of the infrastructure footprint: regions, SKUs, country codes, and more. Each location is tagged as Public or Private, giving operators full visibility into sovereignty requirements and deployment constraints.

Tenancy
Tenancy in Ori AI Fabric ensures that an organization’s workloads always run on dedicated GPU nodes that are never shared with other tenants, providing strong isolation and predictable performance. Administrators can assign a tenancy mode to each organization and configure it independently across different locations. This allows operators to enforce strict separation where required, such as in regulated or sensitive environments, while still maintaining flexibility in how resources are allocated.

Organizations
The Organizations panel provides a structured view of all tenants on the platform. Operators can quickly specify tenancy types and assign the right location access: Public, Private, or Hybrid.
Users
Administrators can monitor all user accounts, verify roles, track activity, and confirm admin status. This supports strong identity governance and reduces operational risk.
Memberships
The Memberships page surfaces every user-to-organization mapping with clear visibility into roles and status. Operators can revoke, update, or inspect memberships to maintain secure access across multiple tenants and environments.
Quotas
Quota management is a critical operator function, and the Ori UI brings clarity to a traditionally complex task. Pending requests, GPU limits, service types, and approval workflows are all visible in one table. Operators can approve or decline requests with a single click.
Templates
Templates allow administrators to define standardized compute configurations across GPU instances, Kubernetes clusters, and supercomputer deployments. The UI lists every template with details such as GPU type, regions, price per hour, and availability. This creates consistent, reusable blueprints that developers can consume.
Init Scripts (Cloud Configs)
Init Scripts bring powerful customization to provisioning workflows. Operators can define ML-ready compute environments preloaded with the CUDA Toolkit, PyTorch, Keras, or Jupyter Notebooks, and quickly apply them to GPU Instances or Supercomputers. Scripts can be given names and OS compatibility tags, keeping infrastructure extensible and organized.
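Init scripts of this kind typically follow the standard cloud-init cloud-config format. A minimal sketch that prepares a PyTorch-plus-Jupyter environment on first boot (exact packages, versions, and the bind address are illustrative choices, not a fixed recipe):

```yaml
#cloud-config
# Illustrative cloud-init for an ML-ready GPU instance.
package_update: true
packages:
  - python3-pip
runcmd:
  # Install common ML tooling; pin versions in real deployments.
  - pip3 install torch jupyterlab
  # Start JupyterLab on boot (bind address shown for illustration only).
  - nohup jupyter lab --ip=0.0.0.0 --no-browser &
```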
The Best of Both Worlds: Effortless for Developers, Powerful for Operators
Ori’s Web UI is more than a console: it’s a unified operational layer for the entire AI cloud. Developers get an elegant, AI-native environment where they can deploy, fine-tune, and serve models with simplicity. Operators get a robust control plane for governance, quotas, identities, templates, and multi-region infrastructure management.
Start Building on Ori Today
Whether you’re spinning up your first GPU instance, deploying a production endpoint, or operating an enterprise or sovereign cloud, the Ori Web UI is built to help you move faster with clarity and confidence.

