Deployment

In the deployment section, you can search for your model by its ID or filter the list by framework, model, history, and so on.

In the deployment phase, two deployment options are available:

  1. Legacy Deployment:

Legacy Deployment offers traditional deployment for AI models: the trained model is packaged together with its dependencies and deployed on the desired infrastructure.
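
As a rough illustration, a legacy-style deployment can be as simple as downloading the exported model and running it with a small script on the target machine. The sketch below assumes the model was exported to ONNX; the file name, input name, and input shape are hypothetical placeholders, not values produced by the platform.

```python
# Minimal legacy-deployment sketch: load an exported model and run it locally.
# Assumes an ONNX export; file name, input name, and shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("exported_model.onnx")      # the packaged model file
input_name = session.get_inputs()[0].name                  # e.g. "images" (assumed)

batch = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy input batch
outputs = session.run(None, {input_name: batch})           # run inference
print(outputs[0].shape)
```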

  2. Deployment Model (Triton-based):

Deployment Model utilizes Triton, an advanced model serving platform developed by NVIDIA. Triton provides high-performance inference with support for various hardware accelerators. It offers features such as dynamic batching, concurrent model execution, and multi-model serving.
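
To give a flavor of the client side, the hedged sketch below queries a Triton-served model over HTTP using NVIDIA's tritonclient package. The server URL, model name, and tensor names are assumptions and depend on how the model was actually deployed.

```python
# Hedged sketch: query a Triton-served model over HTTP with tritonclient.
# The URL, model name, and input/output tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")  # Triton HTTP endpoint

batch = np.random.rand(1, 3, 640, 640).astype(np.float32)        # dummy input batch
infer_input = httpclient.InferInput("images", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(
    model_name="my_detector",                                     # hypothetical model name
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output0")],         # assumed output name
)
print(response.as_numpy("output0").shape)
```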

Features:

  • Model Training: Train your AI models using custom datasets and algorithms.

  • Legacy Deployment: Deploy trained models using traditional methods.

  • Triton-based Deployment: Deploy models on Triton for optimized performance.

  • Scalability: Scale deployments based on demand and resource availability.

  • Monitoring: Monitor model performance, resource utilization, and server health.

  • API Integration: Integrate the service with your applications via RESTful APIs.

In box number 2, you can access your trained models by specifying filters.

In boxes number 3 and 4, you can view the model's information and specifications.

In box number 5, you can run inference on the selected model and obtain its output.

By pressing "MORE," you can start the deployment and obtain the API address or download the model.
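
Once you have the API address, integrating it into an application could look roughly like the sketch below. The endpoint URL, authentication header, and request format are hypothetical placeholders; use the address and credentials shown by the platform.

```python
# Hedged sketch of REST integration with a deployed model's API address.
# URL, auth header, and payload format are hypothetical placeholders.
import requests

API_URL = "https://<api-address-from-MORE>/infer"   # address obtained via "MORE"
API_KEY = "<your-api-key>"                          # hypothetical credential

with open("sample.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        timeout=30,
    )

response.raise_for_status()
print(response.json())
```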