Improving Your Model

The evaluation metrics are meant to help you diagnose the quality of your model. Your model may fall into one or more of the following categories, among others:

  1. Good model with all great concepts.

  2. OK model with a few bad concepts.

  3. Bad model with all bad concepts.

  4. Biased model: the model consistently picks up on certain visual cues other than the ones you'd like it to learn.

  5. Model with high variance: the model's predictions are inconsistent across inputs.

Possible Areas of Improvement

The performance of your model depends on the performance of each concept, which is trained on a set of inputs. We’d recommend that you look at both inputs and concepts when diagnosing areas of improvement.

Inputs

  1. Diversity: try to include all perspectives of the concept, e.g. include images of a dog from all angles if you're building a “dog” concept.

  2. Strong positives: include images that are true representations of your concept.

  3. Representativeness: training data should be representative of real-world data; avoid building models where the data is too ‘easy’, i.e. an unrealistically clean or simple set of images.

  4. Number: a minimum of 50 inputs per concept; the more inputs, the better (see the example of adding labeled inputs after this list).

  5. File dimensions: minimum 512px x 512px.
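
As a reference for the recommendations above, here is a minimal sketch of adding one image labeled with an explicit positive and an explicit negative concept using the Python gRPC client (clarifai-grpc). The concept IDs, sample image URL, and API key placeholder are illustrative only.

```python
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

# Connect to the Clarifai API and authenticate with an app-specific API key.
stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key {YOUR_API_KEY}"),)

# Add one input that is a strong positive for "dog" and an explicit negative for "cat".
# Repeat (or batch inputs in a single call) until each concept has at least 50 examples.
post_inputs_response = stub.PostInputs(
    service_pb2.PostInputsRequest(
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    image=resources_pb2.Image(url="https://samples.clarifai.com/dog2.jpeg"),
                    concepts=[
                        resources_pb2.Concept(id="dog", value=1.0),  # positive example
                        resources_pb2.Concept(id="cat", value=0.0),  # negative example
                    ],
                )
            )
        ]
    ),
    metadata=metadata,
)

if post_inputs_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("PostInputs failed: " + post_inputs_response.status.description)
```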

Concepts

  1. Concepts: avoid concepts that do not rely on visual cues within the image. Also, note that custom training currently does not perform well at identifying faces.

  2. Labels: check whether any inputs are labeled with the wrong concepts (see the sketch after this list for one way to audit labels through the API).
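
One way to spot-check labels is to page through your app's annotations and review the concepts attached to each input. Below is a minimal sketch, assuming the v6.x Python gRPC client's ListAnnotations endpoint; adjust the paging to cover your whole app.

```python
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key {YOUR_API_KEY}"),)

# Fetch one page of annotations and print the concepts attached to each input,
# so mislabeled examples can be spotted and corrected.
list_annotations_response = stub.ListAnnotations(
    service_pb2.ListAnnotationsRequest(page=1, per_page=50),
    metadata=metadata,
)

if list_annotations_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("ListAnnotations failed: " + list_annotations_response.status.description)

for annotation in list_annotations_response.annotations:
    concept_ids = [concept.id for concept in annotation.data.concepts]
    print(annotation.input_id, concept_ids)
```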

Tips

When improving your model, there is no one-size-fits-all answer. Here are some tips to keep in mind:

  1. Although we use ROC AUC as a general top-level ‘score’ for both concepts and models, we do not recommend relying on a single metric to draw your final conclusion about model performance.

  2. Refer to both the Concepts by Concepts Results and the Selection Details to get a better grasp of your model (see the sketch after these tips for retrieving the same evaluation metrics through the API).

  3. When interpreting the evaluation results, keep in mind the nature of your model. Specifically, pay attention to whether you have labeled the inputs with more than one concept (i.e. a non-mutually-exclusive concepts environment) or with only one concept per image.

  4. Remember that the rule of diminishing returns also applies to training models: after a certain point, further changes may not make a big difference in model quality.
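
If you prefer to pull the numbers behind these views rather than read them in Portal, the sketch below runs an evaluation on a trained model version and then fetches the results, assuming the v6.x Python gRPC client. The model and version ID placeholders are yours to fill in, and the exact shape of the returned metrics message may vary by release.

```python
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key {YOUR_API_KEY}"),)

# Kick off an evaluation of a trained model version. The evaluation runs
# asynchronously, so the metrics may take a little while to become available.
post_metrics_response = stub.PostModelVersionMetrics(
    service_pb2.PostModelVersionMetricsRequest(
        model_id="{YOUR_MODEL_ID}",
        version_id="{YOUR_MODEL_VERSION_ID}",
    ),
    metadata=metadata,
)
if post_metrics_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("PostModelVersionMetrics failed: " + post_metrics_response.status.description)

# Once the evaluation has finished, fetch the full results, including the
# summary score, per-concept metrics, and confusion matrix.
get_metrics_response = stub.GetModelVersionMetrics(
    service_pb2.GetModelVersionMetricsRequest(
        model_id="{YOUR_MODEL_ID}",
        version_id="{YOUR_MODEL_VERSION_ID}",
        fields=resources_pb2.FieldsValue(
            confusion_matrix=True,
            cooccurrence_matrix=True,
            label_counts=True,
            binary_metrics=True,
            test_set=True,
        ),
    ),
    metadata=metadata,
)
if get_metrics_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("GetModelVersionMetrics failed: " + get_metrics_response.status.description)

print(get_metrics_response)
```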
