What are AI tasks?
AI tasks are standalone actions that run on your image assets, using modern LLMs to visually analyze an image and turn that analysis into reliable metadata. You ask any business-specific question in plain English, and the AI selects answers only from your approved vocabulary. This keeps your tags and custom fields consistent, schema-safe, and ready for DAM workflows at scale, and can also be used to QC assets against your business requirements.
Key features:
- Analyze using natural language instructions: No complex regex or coding required. You set up questions in plain English. For example, you can ask, “Is there any person in this picture?”
- Custom vocabulary: Unlike generic AI tagging, you can specify the vocabulary the AI must stick to while answering the question, matching the vocabulary your team and processes use internally. For example, you can require the AI to respond with “True / False”, “Yes / No”, or “1 / 0”.
- Works at scale: Process thousands of assets automatically during upload, via bulk updates, or by applying these AI tasks through Saved extensions in Path policies.
- Eliminate human bias: Ensure the same logic is applied to every asset, improving search and filtering without human mistakes or bias during evaluation.
AI tasks consume extension units. Learn about extension unit pricing here.
Use cases
Businesses across industries that manage their assets with ImageKit DAM use AI tasks to solve real-world metadata management, QC, and automation challenges at scale. Below are examples of use cases across different industries.
Add business-specific tags and metadata to your images: This is the primary use case of AI tasks - organizing your assets using business-specific, customized tags and metadata. Unlike generic AI tagging solutions, which add generic and often not-so-useful tags, you can ask questions in plain English that make sense for your business and your teams’ processes. For example, here is how different industries benefit from AI tasks:
E-commerce: Instead of generic tags like “t-shirt” on your images, you can use AI tasks for customized product and attribute categorization using questions like “Is there a male or a female model in this image?”, “What is the kind of collar on this t-shirt - round, polo, or v-neck?”, and “Is this a full-sleeve or a half-sleeve t-shirt?”
Automotive: AI tasks can be used for body style classification, PII detection, and more, using questions such as “Is this a shot of the interior or the exterior of the car?”, “Does this image show the car’s front seats, the rear seats, or the boot space?”, “Is there any person standing next to the car or sitting inside the car?”, and “Is this image taken in a studio, on a city road, or on an outdoor trail?”
Travel and Hospitality: AI tasks can help with room or space classification, amenity detection (pool, balcony, workspace), and view categorization using questions such as “Which part of the hotel is shown in this image?”, “Is this a shot from outside the hotel or inside the hotel?”, and “Is there a twin bed or a double bed in the room?”
News and Media: AI tasks can accelerate categorization, PII flagging, and QC of assets resulting in faster time to market. One can ask questions such as “Does this image contain a person’s face?”, “Is there a known celebrity in this image?”, “Does this image contain any NSFW content?”, and more.
Image QC: Besides organizing assets correctly as explained above, you can also use AI tasks to run standard quality checks on your images. The end result is still specific tags and metadata, but you can use them to search for assets that need manual review, or to automatically reject assets that do not meet the quality standards set for images in your organization. For example, you can use AI tasks to ask questions such as:
- Does the image show any celebrities?
- Is the image blurred or grainy?
- Is the car in the image completely visible, or is it getting cropped out?
You can use the AI Task action "yes_no" (explained below) and, depending on positive or negative answers to the above questions, add tags such as "needs review" or "production-ready" to your images.
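The QC approach above can be expressed as a yes_no sub-task. As a sketch (the tag names "needs review" and "production-ready" follow the example above; adapt them to your own taxonomy):

```json
{
  "type": "yes_no",
  "instruction": "Is the image blurred or grainy?",
  "on_yes": {
    "add_tags": ["needs review"]
  },
  "on_no": {
    "add_tags": ["production-ready"]
  },
  "on_unknown": {
    "add_tags": ["needs review"]
  }
}
```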
- Automate subsequent processes: Accurate, business-specific tagging is the most important piece to get right when automating publishing processes at scale. Once your assets are tagged correctly, the steps that follow - either via the dashboard or programmatically using ImageKit’s DAM APIs - remain consistent at scale. For example, once AI tasks have run on all your images and added suitable business-specific tags and metadata, you can reliably find assets that depict a “round neck” “t-shirt” on a “man” in a “studio setup”. You can then use these assets to create personalized banners with text and image overlays, or send them in a push notification to a user looking for t-shirts on your website. This improves time-to-market for your assets and campaigns, helps you find the right assets every time, and elevates the experience for your end users.
Since AI tasks run across thousands of assets without manual intervention, your business can classify assets correctly in minutes instead of weeks, without the errors that creep into manual evaluation.
How AI tasks work
AI tasks analyze images and apply metadata based on your business rules.
Within an AI task, you can configure multiple sub-tasks to run in one go, each handling a different aspect of categorization. Think of each sub-task as one question you want the LLM to evaluate on your image, paired with the action to take on the answer. Each question becomes a separate sub-task within the same AI task.
Each sub-task of an AI task has three components:
Instruction - A clear, natural language question or instruction that tells the AI what aspect of the image to evaluate. For example, "Does this image show the car front seats, the rear seats, or the boot space?".
Actions - What happens with the AI's analysis: which tags get added, which metadata fields get set, or how the conditional logic executes. This is defined primarily by the action type, along with other action-specific configuration options. For example, to add tags based on the question "Does this image show the car front seats, the rear seats, or the boot space?", the type will be select_tags.
Vocabulary - An optional, predefined list of possible values the AI can select from. This is your controlled vocabulary or business taxonomy. The AI can only choose from the values you define (1-30 items per vocabulary). For example, for the question above, the vocabulary can be "front", "rear", and "boot".
AI tasks can only be part of a Saved Extension. So, you first need to create a Saved Extension, either from your dashboard under Settings → Media Library → Saved Extensions, or using the Saved Extension API. Within the saved extension, you define the AI task along with its sub-tasks using a JSON configuration.
For example, a saved extension with one AI task containing two sub-tasks looks like this:
{
  "name": "ai-tasks", // fixed value that identifies AI tasks
  "tasks": [
    { // sub-task 1
      "type": "select_tags",
      "instruction": "What is the body style of the vehicle shown in this image?",
      "vocabulary": ["sedan", "suv", "hatchback"]
    },
    { // sub-task 2
      "type": "select_metadata",
      "instruction": "What is the primary color of the vehicle's exterior?",
      "vocabulary": ["white", "black", "silver"],
      "field": "color"
    }
  ]
}
We have included complete examples for different actions and industries in this documentation, so you can copy, paste, and modify the AI task configurations as per your requirements.
Alternatively, your technology team, ImageKit's support team, or your dedicated Customer Success Manager can also help you configure AI tasks.
AI Task Actions
AI tasks support three task types, each designed for different metadata management needs:
| Task Type | What It Does | When to Use |
|---|---|---|
| select_tags | Selects and applies tags from your vocabulary | Categorization, product attributes, building searchable taxonomies, or applying multiple labels |
| select_metadata | Sets custom metadata field values from your vocabulary | Structured data like color, season, type, status, or single/multi-select dropdown fields |
| yes_no | Asks yes/no questions and executes conditional actions | Quality checks, compliance verification, binary classifications, or conditional workflows |
Select tags
Analyzes the image and adds relevant tags from your controlled vocabulary. The AI compares what it sees in the image against your instruction, selects matching tags from your vocabulary while respecting min/max selection constraints, and adds the selected tags to the file. For example, an image of a living room might receive tags: ["sofa", "chair", "table", "lamp"].
Configuration:
{
  "type": "select_tags",
  "instruction": "What types of furniture are visible in this image?",
  "vocabulary": ["sofa", "chair", "table", "desk", "bed", "shelving", "cabinet", "lamp"],
  "min_selections": 1,
  "max_selections": 4
}
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be "select_tags" |
| instruction | string | Yes | Question or instruction (1-2000 characters) |
| vocabulary | array | No | Possible tag values (1-30 items, max 500 chars combined, no % character) |
| min_selections | number | No | Minimum tags to select (≥ 0). Default: no minimum |
| max_selections | number | No | Maximum tags to select (≥ 1). Default: no maximum |
Select metadata
Analyzes the image and sets a custom metadata field value from your vocabulary. The AI evaluates the image against your instruction, selects the best matching value(s) from vocabulary, validates against field type constraints, and sets the custom metadata field. For example, a metadata field lighting might be set to "golden-hour".
Configuration:
{
  "type": "select_metadata",
  "instruction": "What is the dominant lighting condition in this image?",
  "field": "lighting",
  "vocabulary": ["natural-daylight", "golden-hour", "overcast", "indoor-artificial", "low-light", "night"],
  "min_selections": 1,
  "max_selections": 1
}
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be "select_metadata" |
| instruction | string | Yes | Question or instruction (1-2000 characters) |
| field | string | Yes | Custom metadata field name (must already exist in your media library) |
| vocabulary | array | No | Possible values matching field type (1-30 items) |
| min_selections | number | No | Minimum values to select (≥ 0). Default: no minimum |
| max_selections | number | No | Maximum values to select (≥ 1). Default: no maximum |
Important:
- The custom metadata field must exist before using it in AI tasks. Create fields using the Custom Metadata Fields API or in your dashboard under Settings → Media Library → Custom Metadata Fields.
- Your vocabulary must match the field's schema definition. If you later change the field schema, AI tasks may fail to set values. Check the asset history to see why values were or weren't set.
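Since select_metadata requires the field to exist first, you may want to script the field creation. Below is a sketch in Python that builds a request payload for the Custom Metadata Fields API; the "SingleSelect" schema type and selectOptions shape are assumptions to verify against that API's reference:

```python
import json

# Sketch: define the "lighting" field before referencing it in a
# select_metadata sub-task. Verify the schema shape against the
# Custom Metadata Fields API reference.
payload = {
    "name": "lighting",
    "label": "Lighting",
    "schema": {
        "type": "SingleSelect",  # assumed type name; check the API reference
        # Options should cover every value in the sub-task's vocabulary,
        # otherwise the AI task may fail to set the field.
        "selectOptions": [
            "natural-daylight", "golden-hour", "overcast",
            "indoor-artificial", "low-light", "night",
        ],
    },
}

print(json.dumps(payload, indent=2))

# To actually create the field, POST this payload with your private API
# key as the basic-auth username, e.g. with the requests library:
# requests.post("https://api.imagekit.io/v1/customMetadataFields",
#               json=payload, auth=("your_private_api_key", ""))
```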
Yes/No
Asks a yes/no question about the image and executes different actions based on the answer. The AI evaluates the image and returns one of three responses: Yes, No, or Unknown (when the AI cannot confidently determine the answer). Each response can trigger different actions—tags added/removed and metadata set/unset.
For example, a high-quality image might receive tags ["print-ready", "high-quality", "approved"] and metadata updates for quality status. If the AI cannot confidently assess quality, the on_unknown actions execute instead (e.g., tagging for manual review).
Configuration:
{
  "type": "yes_no",
  "instruction": "Does this image meet quality standards for print publication (sharp focus, good lighting, high resolution)?",
  "on_yes": {
    "add_tags": ["print-ready", "high-quality", "approved"],
    "set_metadata": [
      { "field": "quality_status", "value": "approved" },
      { "field": "print_approved", "value": true }
    ]
  },
  "on_no": {
    "add_tags": ["web-only", "needs-improvement"],
    "remove_tags": ["print-ready", "approved"],
    "set_metadata": [
      { "field": "quality_status", "value": "rejected" },
      { "field": "print_approved", "value": false }
    ]
  },
  "on_unknown": {
    "add_tags": ["needs-review"],
    "set_metadata": [
      { "field": "quality_status", "value": "pending" }
    ]
  }
}
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be "yes_no" |
| instruction | string | Yes | Yes/no question (1-2000 characters) |
| on_yes | object | No* | Actions to execute if AI determines answer is "yes" |
| on_no | object | No* | Actions to execute if AI determines answer is "no" |
| on_unknown | object | No | Actions to execute if AI cannot confidently determine yes or no |
*At least one of on_yes or on_no is required.
Action objects:
Each action object for on_yes, on_no, and on_unknown can include:
{
  "add_tags": ["tag1", "tag2"],
  "remove_tags": ["tag3", "tag4"],
  "set_metadata": [
    { "field": "field_name", "value": "some_value" }
  ],
  "unset_metadata": [
    { "field": "field_to_remove" }
  ]
}
| Property | Type | Description |
|---|---|---|
| add_tags | array | Tags to add to the file |
| remove_tags | array | Tags to remove from the file |
| set_metadata | array | Array of objects with field (string) and value (any) to set metadata fields |
| unset_metadata | array | Array of objects with field (string) to remove metadata fields |
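Conceptually, the answer-to-actions dispatch works like the Python sketch below; the apply_yes_no helper is illustrative and not part of any ImageKit SDK (metadata operations would be handled analogously):

```python
def apply_yes_no(task, answer, tags):
    """Pick the action object matching the AI's answer ("yes", "no",
    or "unknown") and apply its tag operations to the existing tags."""
    branch = {"yes": "on_yes", "no": "on_no", "unknown": "on_unknown"}[answer]
    actions = task.get(branch, {})  # a missing branch means no actions
    tags = set(tags) | set(actions.get("add_tags", []))
    tags -= set(actions.get("remove_tags", []))
    return sorted(tags)

task = {
    "type": "yes_no",
    "instruction": "Does this image meet quality standards for print publication?",
    "on_yes": {"add_tags": ["print-ready", "approved"]},
    "on_no": {"add_tags": ["web-only"], "remove_tags": ["print-ready", "approved"]},
    "on_unknown": {"add_tags": ["needs-review"]},
}

# A "no" answer removes a previously added "print-ready" tag:
print(apply_yes_no(task, "no", ["print-ready", "hero-shot"]))
# → ['hero-shot', 'web-only']
```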
Example: Fashion e-commerce
This comprehensive example combines all three task types for a fashion retailer:
{
"name": "ai-tasks",
"tasks": [
{
"type": "select_tags",
"instruction": "What types of clothing or accessories are visible in this product image?",
"vocabulary": [
"dress", "shirt", "blouse", "t-shirt", "sweater", "jacket",
"coat", "pants", "jeans", "skirt", "shorts", "shoes",
"boots", "sneakers", "bag", "belt", "hat", "scarf", "jewelry"
],
"min_selections": 1,
"max_selections": 5
},
{
"type": "select_metadata",
"instruction": "What is the primary color of the main product?",
"field": "primary_color",
"vocabulary": [
"black", "white", "gray", "beige", "brown",
"red", "pink", "orange", "yellow", "green",
"blue", "navy", "purple", "multi-color", "metallic"
],
"min_selections": 1,
"max_selections": 1
},
{
"type": "select_metadata",
"instruction": "What season or weather is this product suitable for?",
"field": "season",
"vocabulary": ["spring", "summer", "fall", "winter", "all-season"],
"min_selections": 1,
"max_selections": 2
},
{
"type": "yes_no",
"instruction": "Is this a formal or dressy item (suitable for office, weddings, formal events)?",
"on_yes": {
"add_tags": ["formal", "dressy", "occasion-wear"],
"set_metadata": [
{ "field": "style_category", "value": "formal" },
{ "field": "dress_code", "value": "business-formal" }
]
},
"on_no": {
"add_tags": ["casual", "everyday"],
"set_metadata": [
{ "field": "style_category", "value": "casual" },
{ "field": "dress_code", "value": "casual" }
]
}
},
{
"type": "yes_no",
"instruction": "Does this product appear to be luxury or high-end (designer labels, premium materials, high-end styling)?",
"on_yes": {
"add_tags": ["luxury", "premium", "designer"],
"remove_tags": ["budget", "value"],
"set_metadata": [
{ "field": "price_tier", "value": "premium" },
{ "field": "target_market", "value": "luxury" }
]
},
"on_no": {
"add_tags": ["accessible", "value"],
"remove_tags": ["luxury", "premium"],
"set_metadata": [
{ "field": "price_tier", "value": "standard" },
{ "field": "target_market", "value": "mass-market" }
]
}
}
]
}
This configuration gives you complete product categorization in one upload. Each product image automatically gets tagged with product types, assigned a primary color, categorized by season, classified by style (formal vs. casual), and marked as luxury or standard—all without manual intervention. Upload thousands of products, and every single one gets consistent, searchable metadata that your team and customers can immediately filter and browse.
Example: Travel Industry
This example adds only tags to the images for a travel company.
{
"name": "ai-tasks",
"tasks": [
{
"type": "select_tags",
"instruction": "In which city is this place located? If you are not able to identify the city, don't provide any tag.",
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "In which country is this place located? If you are not able to identify the city, don't provide any tag.",
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "What is the landscape in this image? Add any suitable tags to the image",
"max_selections": 5
},
{
"type": "select_tags",
"instruction": "At what time of the day has this picture been taken possibly?",
"vocabulary": [
"morning", "afternoon", "evening", "night"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Does this image consist of no people, only one people, or two, or three or more?",
"vocabulary": [
"no people", "one people", "two people", "three people", "more people"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Does this image consist of a male or a female or both male and female or no people?",
"vocabulary": [
"male", "female", "both male and female", "no people"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Does this image consist of a group of solo travellers, a couple, or a family, or no people?",
"vocabulary": [
"solo travellers", "family", "couple", "no people"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "If there is a famous monument in this image, then identify it. If you can't identify it, then don't add any tag.",
"max_selections": 1
}
]
}
Example: Automotive (Cars)
This example adds only tags to the images for an automotive company.
{
"name": "ai-tasks",
"tasks": [
{
"type": "select_tags",
"instruction": "What is the body style of the vehicle shown in this image?",
"vocabulary": [
"sedan", "suv", "hatchback", "coupe","convertible","pickup truck", "minivan","luxury","sports car","wagon"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Which part of the car is primarily featured in this image?",
"vocabulary": [
"full exterior", "front view","rear view", "side profile", "dashboard", "steering wheel", "front seats", "rear seats", "trunk/boot", "engine bay", "wheel/rim", "headlight", "infotainment screen"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "If the brand (make) of the car is clearly visible or recognizable, identify it. If not recognizable, do not add a tag.",
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "What is the primary color of the vehicle's exterior?",
"vocabulary": [
"white", "black", "silver", "grey", "blue", "red", "green", "yellow", "orange", "brown"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Is the image taken in an indoor setting (showroom/garage) or an outdoor setting?",
"vocabulary": [
"indoor - showroom", "indoor - garage", "outdoor - city/road", "outdoor - nature/offroad", "studio - plain background"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Are there any people present in or around the vehicle?",
"vocabulary": [
"no people", "driver only", "driver and passengers", "person standing outside", "model posing with car"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Does the vehicle appear to be a new/modern model or a vintage/classic car?",
"vocabulary": [
"modern",
"vintage/classic"
],
"max_selections": 1
},
{
"type": "select_tags",
"instruction": "Identify any specific features visible in the image.",
"vocabulary": [
"sunroof", "alloy wheels", "leather seats", "digital instrument cluster", "spoiler", "roof rails", "led headlights"
],
"max_selections": 3
}
]
}
Applying AI tasks
You can apply AI tasks through the dashboard UI or programmatically via API, either at upload time or on existing files. You can also automate AI task application using Path policies based on the destination folder in the Media Library.
Using the UI
To use AI tasks through the dashboard, first create a saved extension with your AI tasks configuration.
At upload time, open the upload settings and select the saved extension from the Extensions list.
For existing files, select the files in the Media Library → Right-click → Apply Saved extensions → Choose your AI tasks extension → Apply.
Programmatically using API
To apply AI tasks when uploading new files, include them in the extensions parameter:
curl -X POST 'https://upload.imagekit.io/api/v1/files/upload' \
-u your_private_key: \
-F 'file=@image.jpg' \
-F 'fileName=image.jpg' \
-F 'extensions=[{
"name": "ai-tasks",
"tasks": [
{
"type": "select_tags",
"instruction": "What product categories are visible in this image?",
"vocabulary": ["apparel", "footwear", "accessories", "bags", "jewelry"],
"max_selections": 2
},
{
"type": "select_metadata",
"instruction": "What is the dominant color?",
"field": "primary_color",
"vocabulary": ["black", "white", "red", "blue", "green", "beige", "brown", "multi-color"],
"max_selections": 1
}
]
}]'
import ImageKit from '@imagekit/nodejs';
import fs from 'fs';
const client = new ImageKit({
privateKey: "your_private_api_key"
});
try {
const result = await client.files.upload({
file: fs.createReadStream('path/to/image.jpg'),
fileName: 'image.jpg',
extensions: [
{
name: 'ai-tasks',
tasks: [
{
type: 'select_tags',
instruction: 'What product categories are visible in this image?',
vocabulary: ['apparel', 'footwear', 'accessories', 'bags', 'jewelry'],
max_selections: 2
},
{
type: 'select_metadata',
instruction: 'What is the dominant color?',
field: 'primary_color',
vocabulary: ['black', 'white', 'red', 'blue', 'green', 'beige', 'brown', 'multi-color'],
max_selections: 1
}
]
}
]
});
console.log(result);
} catch (error) {
console.log(error);
}
from imagekitio import ImageKit

client = ImageKit(
    private_key="your_private_api_key"
)

with open("path/to/image.jpg", "rb") as file:
    upload = client.files.upload(
        file=file,
        file_name="image.jpg",
        extensions=[
            {
                "name": "ai-tasks",
                "tasks": [
                    {
                        "type": "select_tags",
                        "instruction": "What product categories are visible in this image?",
                        "vocabulary": ["apparel", "footwear", "accessories", "bags", "jewelry"],
                        "max_selections": 2
                    },
                    {
                        "type": "select_metadata",
                        "instruction": "What is the dominant color?",
                        "field": "primary_color",
                        "vocabulary": ["black", "white", "red", "blue", "green", "beige", "brown", "multi-color"],
                        "max_selections": 1
                    }
                ]
            }
        ]
    )

print(upload)
require 'imagekitio'
client = Imagekitio::Client.new(
private_key: 'your_private_api_key'
)
result = client.files.upload(
file: Pathname('path/to/image.jpg'),
file_name: 'image.jpg',
extensions: [
{
name: 'ai-tasks',
tasks: [
{
type: 'select_tags',
instruction: 'What product categories are visible in this image?',
vocabulary: ['apparel', 'footwear', 'accessories', 'bags', 'jewelry'],
max_selections: 2
},
{
type: 'select_metadata',
instruction: 'What is the dominant color?',
field: 'primary_color',
vocabulary: ['black', 'white', 'red', 'blue', 'green', 'beige', 'brown', 'multi-color'],
max_selections: 1
}
]
}
]
)
puts result
package main
import (
"context"
"fmt"
"os"
"github.com/imagekit-developer/imagekit-go/v2"
"github.com/imagekit-developer/imagekit-go/v2/option"
"github.com/imagekit-developer/imagekit-go/v2/shared"
)
func main() {
client := imagekit.NewClient(
option.WithPrivateKey("your_private_api_key"),
)
file, err := os.Open("path/to/image.jpg")
if err != nil {
panic(err.Error())
}
defer file.Close()
upload, err := client.Files.Upload(
context.TODO(),
imagekit.FileUploadParams{
File: file,
FileName: "image.jpg",
Extensions: []shared.ExtensionUnionParam{
{
OfAITasks: &shared.ExtensionAITasksParam{
Tasks: []shared.ExtensionAITasksTaskUnionParam{
{
OfSelectTags: &shared.ExtensionAITasksTaskSelectTagsParam{
Instruction: "What product categories are visible in this image?",
Vocabulary: []string{"apparel", "footwear", "accessories", "bags", "jewelry"},
MaxSelections: imagekit.Int(2),
},
},
{
OfSelectMetadata: &shared.ExtensionAITasksTaskSelectMetadataParam{
Instruction: "What is the dominant color?",
Field: "primary_color",
Vocabulary: []shared.ExtensionAITasksTaskSelectMetadataVocabularyUnionParam{
{OfString: imagekit.String("black")},
{OfString: imagekit.String("white")},
{OfString: imagekit.String("red")},
{OfString: imagekit.String("blue")},
{OfString: imagekit.String("green")},
{OfString: imagekit.String("beige")},
{OfString: imagekit.String("brown")},
{OfString: imagekit.String("multi-color")},
},
MaxSelections: imagekit.Int(1),
},
},
},
},
},
},
},
)
if err != nil {
panic(err.Error())
}
fmt.Println(upload)
}
Using the update file details API:
curl -X PATCH 'https://api.imagekit.io/v1/files/fileId/details' \
-H 'Content-Type: application/json' \
-u your_private_key: \
-d '{
"extensions": [
{
"name": "ai-tasks",
"tasks": [
{
"type": "select_tags",
"instruction": "What product categories are visible in this image?",
"vocabulary": ["apparel", "footwear", "accessories", "bags", "jewelry"],
"max_selections": 2
},
{
"type": "select_metadata",
"instruction": "What is the dominant color?",
"field": "primary_color",
"vocabulary": ["black", "white", "red", "blue", "green", "beige", "brown", "multi-color"],
"max_selections": 1
}
]
}
]
}'
import ImageKit from '@imagekit/nodejs';
const client = new ImageKit({
privateKey: "your_private_api_key"
});
try {
const result = await client.files.update('fileId', {
extensions: [
{
name: 'ai-tasks',
tasks: [
{
type: 'select_tags',
instruction: 'What product categories are visible in this image?',
vocabulary: ['apparel', 'footwear', 'accessories', 'bags', 'jewelry'],
max_selections: 2
},
{
type: 'select_metadata',
instruction: 'What is the dominant color?',
field: 'primary_color',
vocabulary: ['black', 'white', 'red', 'blue', 'green', 'beige', 'brown', 'multi-color'],
max_selections: 1
}
]
}
]
});
console.log(result);
} catch (error) {
console.log(error);
}
from imagekitio import ImageKit

client = ImageKit(
    private_key="your_private_api_key"
)

result = client.files.update(
    file_id="fileId",
    extensions=[
        {
            "name": "ai-tasks",
            "tasks": [
                {
                    "type": "select_tags",
                    "instruction": "What product categories are visible in this image?",
                    "vocabulary": ["apparel", "footwear", "accessories", "bags", "jewelry"],
                    "max_selections": 2
                },
                {
                    "type": "select_metadata",
                    "instruction": "What is the dominant color?",
                    "field": "primary_color",
                    "vocabulary": ["black", "white", "red", "blue", "green", "beige", "brown", "multi-color"],
                    "max_selections": 1
                }
            ]
        }
    ]
)

print(result)
require 'imagekitio'
client = Imagekitio::Client.new(
private_key: 'your_private_api_key'
)
result = client.files.update(
'fileId',
update_file_request: {
extensions: [
{
name: 'ai-tasks',
tasks: [
{
type: 'select_tags',
instruction: 'What product categories are visible in this image?',
vocabulary: ['apparel', 'footwear', 'accessories', 'bags', 'jewelry'],
max_selections: 2
},
{
type: 'select_metadata',
instruction: 'What is the dominant color?',
field: 'primary_color',
vocabulary: ['black', 'white', 'red', 'blue', 'green', 'beige', 'brown', 'multi-color'],
max_selections: 1
}
]
}
]
}
)
puts result
package main
import (
"context"
"fmt"
"github.com/imagekit-developer/imagekit-go/v2"
"github.com/imagekit-developer/imagekit-go/v2/option"
"github.com/imagekit-developer/imagekit-go/v2/shared"
)
func main() {
client := imagekit.NewClient(
option.WithPrivateKey("your_private_api_key"),
)
result, err := client.Files.Update(
context.TODO(),
"fileId",
imagekit.FileUpdateParams{
Extensions: []shared.ExtensionUnionParam{
{
OfAITasks: &shared.ExtensionAITasksParam{
Tasks: []shared.ExtensionAITasksTaskUnionParam{
{
OfSelectTags: &shared.ExtensionAITasksTaskSelectTagsParam{
Instruction: "What product categories are visible in this image?",
Vocabulary: []string{"apparel", "footwear", "accessories", "bags", "jewelry"},
MaxSelections: imagekit.Int(2),
},
},
{
OfSelectMetadata: &shared.ExtensionAITasksTaskSelectMetadataParam{
Instruction: "What is the dominant color?",
Field: "primary_color",
Vocabulary: []shared.ExtensionAITasksTaskSelectMetadataVocabularyUnionParam{
{OfString: imagekit.String("black")},
{OfString: imagekit.String("white")},
{OfString: imagekit.String("red")},
{OfString: imagekit.String("blue")},
{OfString: imagekit.String("green")},
{OfString: imagekit.String("beige")},
{OfString: imagekit.String("brown")},
{OfString: imagekit.String("multi-color")},
},
MaxSelections: imagekit.Int(1),
},
},
},
},
},
},
},
)
if err != nil {
panic(err.Error())
}
fmt.Println(result)
}
Both APIs also accept the saved extension ID if you want to avoid sending the full task configuration JSON with each request.
Restrictions and limits
- Sub-tasks per AI Task configuration: 1-10 sub-tasks.
- Vocabulary size: 1-30 items per task.
- Vocabulary character length (select_tags only): Max 500 characters combined.
- Instruction length: 1-2000 characters per task.
- Custom metadata fields (select_metadata): Field must exist before use, vocabulary type must match field type.
- Yes/no tasks: Must have at least one of on_yes or on_no defined.
- Tag values: Cannot contain the % character.
- Processing time: Typically 1-5 seconds per image.
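The limits above can be checked client-side before a configuration is sent. The validator below is a sketch derived solely from the documented restrictions, not an official SDK helper:

```python
def validate_ai_task(config):
    """Return a list of violations of the documented AI-task limits."""
    errors = []
    tasks = config.get("tasks", [])
    if not 1 <= len(tasks) <= 10:
        errors.append("must contain 1-10 sub-tasks")
    for i, task in enumerate(tasks):
        if not 1 <= len(task.get("instruction", "")) <= 2000:
            errors.append(f"task {i}: instruction must be 1-2000 characters")
        vocab = task.get("vocabulary")
        if vocab is not None and not 1 <= len(vocab) <= 30:
            errors.append(f"task {i}: vocabulary must have 1-30 items")
        if task.get("type") == "select_tags" and vocab:
            if any("%" in item for item in vocab):
                errors.append(f"task {i}: tag values cannot contain %")
            if sum(len(item) for item in vocab) > 500:
                errors.append(f"task {i}: combined vocabulary exceeds 500 characters")
        if task.get("type") == "yes_no" and not (task.get("on_yes") or task.get("on_no")):
            errors.append(f"task {i}: needs at least one of on_yes or on_no")
    return errors

# A yes_no sub-task with neither on_yes nor on_no is flagged:
config = {"name": "ai-tasks", "tasks": [{"type": "yes_no", "instruction": "Is this blurry?"}]}
print(validate_ai_task(config))
# → ['task 0: needs at least one of on_yes or on_no']
```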
Best practices
- Start with 1-2 tasks on a small batch, validate results, refine configuration, then scale to production.
- Keep instructions under 200 characters; be specific and direct.
  ✅ "What types of furniture are visible?"
  ❌ "Describe this image" (too broad)
- For yes/no tasks, phrase instructions as yes/no questions.
  ✅ "Does this image contain people?"
  ❌ "Check if people are present" (not a question)
- Use distinct vocabulary terms without overlap.
  ✅ ["modern", "traditional", "rustic"]
  ❌ ["modern", "very modern", "somewhat modern"] (ambiguous)
- Test with sample images before large-scale deployment.
- Review AI selections regularly and refine instructions based on results.
Next steps
- Create custom metadata fields - Set up the fields you'll use with AI tasks.
- Configure your first AI task - Start with a simple select_tags task.
- Save your configuration - Create reusable extensions for your workflows.
- Automate with Path policies - Set up automatic application based on upload paths.
Need help? Contact support or join our community.