Annotations (also known as labels) describe your inputs. When you add inputs to your app, an input-level annotation is created for each input. This input-level annotation contains any data you provided in the POST /inputs call. Models in your default workflow can also write annotations.
Once your input is successfully indexed, you can add additional annotations such as concepts and bounding boxes.
Add Annotations
You can label your inputs by calling the POST /annotations endpoint. For example, you can add concept(s) to an image, draw a bounding box, or label concept(s) in a video frame.
When you add an annotation, the app's default workflow does not run by default. This means the annotation is not immediately available for training your custom model or for visual search. To make the annotation available for AI-based search and training, provide the embed_model_version_id field, which associates the annotation with one of the embedding models in your default workflow. Once that association is made during patching, we know how to index the annotation for training and visual search, so if your use case includes those features, it is recommended to provide this field on every add-annotation call.
You can add from 1 up to 128 annotations in a single API call.
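If you have more than 128 annotations to add, you need to split them into batches yourself. A minimal sketch of such a batching helper (the helper name and the plain-dict annotation shape are illustrative, not part of the API):

```python
def chunked(annotations, batch_size=128):
    """Yield successive batches of at most batch_size annotations.

    The POST /annotations endpoint accepts 1 to 128 annotations per call,
    so larger collections must be split across multiple requests.
    """
    for i in range(0, len(annotations), batch_size):
        yield annotations[i:i + batch_size]


# Example: 300 annotations are sent in batches of 128, 128, and 44.
annotations = [{"input_id": f"input-{n}"} for n in range(300)]
batch_sizes = [len(batch) for batch in chunked(annotations)]
```

Each yielded batch would then go into its own PostAnnotations request.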
Each annotation should contain at most one region. For a video, each annotation should contain one frame. If you want to label multiple regions in a frame, add a separate annotation for each region; each of those annotations will reference the same frame but a different region.
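The one-region-per-annotation rule above can be illustrated without the SDK by building the payload as plain dictionaries: to label two regions in the same video frame, you send two annotations that repeat the frame time but each carry a single region. This is a hypothetical sketch (the helper function, input ID, coordinates, and concept names are placeholders; field names are modeled on the examples on this page):

```python
def annotations_for_frame(input_id, frame_time_ms, regions):
    """Build one annotation per region, all pointing at the same video frame.

    regions is a list of (bounding_box_dict, concept_id) pairs; each resulting
    annotation contains exactly one region, as the API requires.
    """
    return [
        {
            "input_id": input_id,
            "data": {
                "frames": [{
                    "frame_info": {"time": frame_time_ms},
                    "data": {
                        "regions": [{
                            "region_info": {"bounding_box": bbox},
                            "data": {"concepts": [{"id": concept_id, "value": 1.0}]},
                        }]
                    },
                }]
            },
        }
        for bbox, concept_id in regions
    ]


# Two regions in the same frame (at 1500 ms) become two annotations.
payload = annotations_for_frame(
    "{YOUR_VIDEO_INPUT_ID}",
    1500,
    [
        ({"top_row": 0.1, "left_col": 0.1, "bottom_row": 0.5, "right_col": 0.5}, "tree"),
        ({"top_row": 0.2, "left_col": 0.6, "bottom_row": 0.9, "right_col": 0.95}, "water"),
    ],
)
```

Both annotations share the frame time, and each holds exactly one region.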
Annotate images with concepts
To annotate a concept present anywhere in an image:
```python
# Insert here the initialization code as outlined on this page:
# https://docs.clarifai.com/api-guide/api-overview/api-clients#client-installation-instructions

post_annotations_response = stub.PostAnnotations(
    service_pb2.PostAnnotationsRequest(
        annotations=[
            resources_pb2.Annotation(
                input_id="{YOUR_INPUT_ID}",
                data=resources_pb2.Data(
                    concepts=[
                        resources_pb2.Concept(id="tree", value=1.),   # 1 means true, this concept is present.
                        resources_pb2.Concept(id="water", value=0.)   # 0 means false, this concept is not present.
                    ]
                ),
                embed_model_version_id="{EMBED_MODEL_VERSION_ID}"
            )
        ]
    ),
    metadata=metadata
)

if post_annotations_response.status.code != status_code_pb2.SUCCESS:
    raise Exception("Post annotations failed, status: " + post_annotations_response.status.description)
```
```java
import java.util.List;
import com.clarifai.grpc.api.*;
import com.clarifai.grpc.api.status.*;

// Insert here the initialization code as outlined on this page:
// https://docs.clarifai.com/api-guide/api-overview/api-clients#client-installation-instructions

MultiAnnotationResponse postAnnotationsResponse = stub.postAnnotations(
    PostAnnotationsRequest.newBuilder().addAnnotations(
        Annotation.newBuilder()
            .setInputId("{YOUR_INPUT_ID}")
            .setData(
                Data.newBuilder()
                    .addConcepts(
                        Concept.newBuilder()
                            .setId("tree")
                            .setValue(1f)  // 1 means true, this concept is present.
                            .build()
                    )
                    .addConcepts(
                        Concept.newBuilder()
                            .setId("water")
                            .setValue(0f)  // 0 means false, this concept is not present.
                            .build()
                    )
            )
            .setEmbedModelVersionId("{EMBED_MODEL_VERSION_ID}")  // so the concept can be used for custom model training
            .build()
    ).build()
);

if (postAnnotationsResponse.getStatus().getCode() != StatusCode.SUCCESS) {
    throw new RuntimeException("Post annotations failed, status: " + postAnnotationsResponse.getStatus());
}
```
```javascript
// Insert here the initialization code as outlined on this page:
// https://docs.clarifai.com/api-guide/api-overview/api-clients#client-installation-instructions

stub.PostAnnotations(
    {
        annotations: [
            {
```