Evaluate

Learn about model evaluation tools.


Now that you've successfully trained your model, you may want to test its performance before using it in a production environment. The Model Evaluation tool lets you perform cross validation on a specified model version. Once the evaluation is complete, you can view a variety of metrics that indicate how well the model performs.

How It Works

Model Evaluation performs a K-split cross validation on the data you used to train your custom model.

In the cross validation process, it will:

  1. Set aside a random 1/K subset of the training data and designate it as a test set.
  2. Train a new model with the remaining training data.
  3. Pass the test set through this new model to make predictions.
  4. Compare the predictions against the test set's actual labels.
  5. Repeat steps 1 through 4 across the K splits and average the evaluation results.
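The sketch below illustrates the same K-split procedure on placeholder data using scikit-learn. It is only a conceptual illustration of cross validation, not Clarifai's implementation; the dataset, model type, and value of K are all assumptions.

```python
# Conceptual sketch of K-split cross validation (not Clarifai's internal code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

K = 5  # number of splits; placeholder value

# Placeholder data standing in for your labeled training inputs.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000)          # 2. train on the remaining data
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])                 # 3. predict on the held-out 1/K split
    scores.append(accuracy_score(y[test_idx], preds))  # 4. compare to the actual labels

# 5. average the per-split results
print(f"Mean accuracy across {K} splits: {np.mean(scores):.3f}")
```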

Requirements

To run an evaluation on your custom model, it will need to meet the following criteria (a quick check is sketched after this list):

  • A custom-trained model version with:

    1. At least 2 concepts

    2. At least 10 training inputs per concept (at least 50 inputs per concept is recommended)
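Before you click Evaluate, you can quickly confirm that your labeled data meets these thresholds. The sketch below is a minimal, hypothetical check; `inputs_per_concept` is a placeholder tally you would build from your own training data.

```python
# Minimal sketch of a pre-evaluation requirements check (hypothetical tallies).
MIN_CONCEPTS = 2
MIN_INPUTS_PER_CONCEPT = 10
RECOMMENDED_INPUTS_PER_CONCEPT = 50

# Placeholder counts of training inputs labeled with each concept.
inputs_per_concept = {"dog": 62, "cat": 48, "bird": 9}

if len(inputs_per_concept) < MIN_CONCEPTS:
    print(f"Need at least {MIN_CONCEPTS} concepts to evaluate.")

for concept, count in inputs_per_concept.items():
    if count < MIN_INPUTS_PER_CONCEPT:
        print(f"{concept}: only {count} inputs; at least {MIN_INPUTS_PER_CONCEPT} are required.")
    elif count < RECOMMENDED_INPUTS_PER_CONCEPT:
        print(f"{concept}: {count} inputs; {RECOMMENDED_INPUTS_PER_CONCEPT}+ per concept is recommended.")
```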

Running Evaluation

You can run the evaluation on a specific version of your custom model in the Portal. Go to your Application, click on the model of interest, and select the Versions tab. Then click the Evaluate button for the specific model version.

The evaluation may take up to 30 minutes. Once it is complete, the Evaluate button becomes a View button; click it to see the evaluation results.

Note that the evaluation may result in an error if the model version doesn't satisfy the requirements above.
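If you prefer to start an evaluation programmatically rather than from Portal, the API Guide's Evaluate page covers the equivalent call. The sketch below is only an assumption-based example over REST; the endpoint path, key, and IDs are placeholders that should be verified against the API Guide.

```python
# Hedged sketch: triggering an evaluation over REST instead of from Portal.
# The endpoint path below is an assumption; confirm it in the API Guide's Evaluate page.
import requests

API_KEY = "YOUR_APP_API_KEY"    # app-specific API key (placeholder)
MODEL_ID = "YOUR_MODEL_ID"      # placeholder model ID
VERSION_ID = "YOUR_VERSION_ID"  # placeholder model version ID

resp = requests.post(
    f"https://api.clarifai.com/v2/models/{MODEL_ID}/versions/{VERSION_ID}/metrics",
    headers={"Authorization": f"Key {API_KEY}"},
)
print(resp.status_code, resp.json().get("status"))
```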
