
Active learning transforms how we train AI models by intelligently selecting the most valuable data for annotation. When paired with powerful LLMs like Google Gemini, it creates efficient annotation pipelines that reduce manual effort while maintaining high data quality.
This guide explores how to build such pipelines using the Adala framework, a powerful yet underutilized tool for autonomous data labeling.
We'll implement a medical symptom classifier that leverages Gemini's capabilities through a structured active learning workflow.
Understanding Active Learning for Data Annotation

Active learning tackles the key challenge in supervised learning: obtaining large quantities of labeled data. Rather than selecting data points for annotation at random, active learning algorithms identify the most informative samples, i.e. those that will contribute most to model improvement (a minimal code sketch of this selection step follows the list below).
Why active learning matters:
- Annotation budgets are finite: expert labeling of medical or legal text is expensive, so every label should count.
- Informative samples accelerate learning: a model improves faster on examples near its decision boundary than on redundant, easy ones.
- Human effort lands where it helps most: ambiguous or rare cases get expert attention, while clear-cut cases are handled automatically.
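To make the selection idea concrete, here is a minimal, framework-free sketch of the classic least-confidence heuristic. The function name and the toy probability matrix are ours, purely for illustration:
python
import numpy as np

def least_confidence_selection(probs: np.ndarray, k: int = 2) -> np.ndarray:
    """Pick the k samples whose top predicted probability is lowest.

    probs: (n_samples, n_classes) array of model class probabilities.
    Returns indices of the k most uncertain samples.
    """
    top_prob = probs.max(axis=1)     # model's confidence per sample
    return np.argsort(top_prob)[:k]  # least confident first

# Toy example: sample 1 has the flattest distribution, so it's labeled first
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.7, 0.2, 0.1]])
print(least_confidence_selection(probs, k=2))  # -> [1 2]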
The Adala framework brings these benefits into production workflows by providing modular components that streamline the active learning process. Before diving into implementation, let's examine what makes Adala particularly suited for integration with modern LLMs like Google Gemini.
What is Adala? An Introduction to the Framework
Adala (Autonomous Data Labeling Agent) is an open-source framework designed specifically for implementing specialized agents for data processing. Unlike traditional annotation tools, Adala embraces an agent-based approach that combines:
- Environments that hold the data and ground-truth feedback (e.g. StaticEnvironment wrapping a pandas DataFrame)
- Skills that define the task, such as ClassificationSkill with its labels and prompt templates
- Runtimes that connect to LLM backends, including separate teacher runtimes used to refine skills during learning
Looking at Adala's quickstart example, we can see how it structures sentiment classification:
python
import pandas as pd
from adala.agents import Agent
from adala.environments import StaticEnvironment
from adala.skills import ClassificationSkill
from adala.runtimes import OpenAIChatRuntime
from rich import print

# Train dataset
train_df = pd.DataFrame([
    ["It was the negative first impressions, and then it started working.", "Positive"],
    ["Not loud enough and doesn't turn on like it should.", "Negative"],
    ["I don't know what to say.", "Neutral"],
    ["Manager was rude, but the most important that mic shows very flat frequency response.", "Positive"],
    ["The phone doesn't seem to accept anything except CBR mp3s.", "Negative"],
    ["I tried it before, I bought this device for my son.", "Neutral"],
], columns=["text", "sentiment"])

# Test dataset
test_df = pd.DataFrame([
    "All three broke within two months of use.",
    "The device worked for a long time, can't say anything bad.",
    "Just a random line of text."
], columns=["text"])

agent = Agent(
    # Connect to a dataset
    environment=StaticEnvironment(df=train_df),

    # Define a skill
    skills=ClassificationSkill(
        name='sentiment',
        instructions="Label text as positive, negative or neutral.",
        labels=["Positive", "Negative", "Neutral"],
        input_template="Text: {text}",
        output_template="Sentiment: {sentiment}",
    ),

    # Define runtimes
    runtimes={
        'openai': OpenAIChatRuntime(model='gpt-4o'),
    },
    teacher_runtimes={
        'default': OpenAIChatRuntime(model='gpt-4o'),
    },
    default_runtime='openai',
)

agent.learn(learning_iterations=3, accuracy_threshold=0.95)
predictions = agent.run(test_df)
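Here, agent.learn applies the skill to the training data, compares outputs against the ground-truth sentiment column, and uses the teacher runtime to refine the skill's instructions until the accuracy threshold is met or iterations run out. agent.run then labels the unseen test rows, and the result can be inspected directly:
python
# Inspect predictions (a DataFrame with the predicted 'sentiment' column)
print(predictions)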
For our medical symptom classification task, we'll adapt this architecture to integrate Google Gemini while implementing a custom active learning strategy.
Setting Up Your Environment
Let's begin by installing Adala and required dependencies:
python
# Install Adala directly from GitHub
!pip install -q git+https://github.com/HumanSignal/Adala.git
# Verify installation
!pip list | grep adala
# Install additional dependencies
!pip install -q google-generativeai pandas matplotlib numpy
We'll also need to clone the repository for direct access to its components:
python
# Clone the repository for access to source files
!git clone https://github.com/HumanSignal/Adala.git
# Ensure the package is in our Python path
import sys
sys.path.append('./Adala')
# Import key components
from Adala.adala.annotators.base import BaseAnnotator
from Adala.adala.strategies.random_strategy import RandomStrategy
from Adala.adala.utils.custom_types import TextSample, LabeledSample
Integrating Google Gemini as a Custom Annotator
Unlike the original implementation that used a basic wrapper around Google Gemini, we'll build a more robust annotator that follows Adala's design patterns. This makes our solution more maintainable and extensible.
First, we need to set up the Google Generative AI client:
python
import google.generativeai as genai
import os
from getpass import getpass

# Set API key from environment or enter manually
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY") or getpass("Enter your Gemini API Key: ")
genai.configure(api_key=GEMINI_API_KEY)
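Before wiring Gemini into Adala, a quick smoke test confirms the key works. This step is optional and assumes network access; the model name matches the default we use in the annotator below:
python
# Optional sanity check: one throwaway generation call
model = genai.GenerativeModel(model_name="models/gemini-2.0-flash-lite")
print(model.generate_content("Reply with the single word: ready").text)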
Now, we'll create a custom annotator by extending Adala's BaseAnnotator class:
python
import json
import re
from typing import List, Dict, Any, Optional

class GeminiAnnotator(BaseAnnotator):
    """Custom annotator using Google Gemini for medical symptom classification."""

    def __init__(self,
                 model_name: str = "models/gemini-2.0-flash-lite",
                 categories: Optional[List[str]] = None,
                 temperature: float = 0.1):
        """Initialize the Gemini annotator.

        Args:
            model_name: The Gemini model to use
            categories: List of valid classification categories
            temperature: Controls randomness in generation (lower = more deterministic)
        """
        self.model = genai.GenerativeModel(
            model_name=model_name,
            generation_config={"temperature": temperature}
        )
        self.categories = categories or ["Cardiovascular", "Respiratory",
                                         "Gastrointestinal", "Neurological"]

    def _build_prompt(self, text: str) -> str:
        """Create a structured prompt for the model.

        Args:
            text: The symptom text to classify

        Returns:
            A formatted prompt string
        """
        return f"""Classify this medical symptom into one of these categories:
{', '.join(self.categories)}.
Return JSON format: {{"category": "selected_category",
"confidence": 0.XX, "explanation": "brief_reason"}}
SYMPTOM: {text}"""

    def _parse_response(self, response: str) -> Dict[str, Any]:
        """Extract structured data from model response.

        Args:
            response: Raw text response from Gemini

        Returns:
            Dictionary containing parsed fields
        """
        try:
            # Extract JSON from the response even if surrounded by extra text
            json_match = re.search(r'(\{.*\})', response, re.DOTALL)
            result = json.loads(json_match.group(1) if json_match else response)
            return {
                "category": result.get("category", "Unknown"),
                "confidence": result.get("confidence", 0.0),
                "explanation": result.get("explanation", "")
            }
        except Exception as e:
            return {
                "category": "Unknown",
                "confidence": 0.0,
                "explanation": f"Error parsing response: {str(e)}"
            }

    def annotate(self, samples: List[TextSample]) -> List[LabeledSample]:
        """Annotate a batch of text samples.

        Args:
            samples: List of TextSample objects

        Returns:
            List of LabeledSample objects with annotations
        """
        results = []
        for sample in samples:
            prompt = self._build_prompt(sample.text)
            try:
                response = self.model.generate_content(prompt).text
                parsed = self._parse_response(response)
                # Create labeled sample with metadata
                labeled_sample = LabeledSample(
                    text=sample.text,
                    labels=parsed["category"],
                    metadata={
                        "confidence": parsed["confidence"],
                        "explanation": parsed["explanation"]
                    }
                )
            except Exception as e:
                # Graceful error handling: keep the sample, flag the failure
                labeled_sample = LabeledSample(
                    text=sample.text,
                    labels="Unknown",
                    metadata={"error": str(e)}
                )
            # Store a reference to the original sample for deduplication later
            labeled_sample._sample = sample
            results.append(labeled_sample)
        return results
This implementation provides significant improvements over the original:
- It follows proper class inheritance from Adala's BaseAnnotator
- Implements private helper methods for prompt building and response parsing (exercised in the quick check after this list)
- Uses structured error handling and type hints
- Provides complete documentation
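The parsing logic is easy to verify offline: constructing the model object makes no API call, and the canned response string below is invented for illustration, mimicking how Gemini often wraps JSON in prose:
python
# Hypothetical raw response with surrounding text
raw = 'Sure! {"category": "Respiratory", "confidence": 0.87, "explanation": "wheezing and cough"}'
annotator = GeminiAnnotator()
print(annotator._parse_response(raw))
# -> {'category': 'Respiratory', 'confidence': 0.87, 'explanation': 'wheezing and cough'}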
Building a Symptom Classification Pipeline
Let's create a dataset of medical symptoms for our classification task. Unlike the original implementation, we'll use a more diverse dataset with balanced representation across categories:
python
# Create a more comprehensive dataset
symptom_data = [
    # Cardiovascular symptoms
    "Chest pain radiating to left arm during exercise",
    "Heart palpitations when lying down",
    "Swollen ankles and shortness of breath",
    "Dizziness when standing up quickly",

    # Respiratory symptoms
    "Persistent dry cough with occasional wheezing",
    "Shortness of breath when climbing stairs",
    "Coughing up yellow or green mucus",
    "Rapid breathing with chest tightness",

    # Gastrointestinal symptoms
    "Stomach cramps and nausea after eating",
    "Burning sensation in upper abdomen",
    "Frequent loose stools with abdominal pain",
    "Yellowing of skin and eyes",

    # Neurological symptoms
    "Severe headache with sensitivity to light",
    "Numbness in fingers of right hand",
    "Memory loss and confusion",
    "Tremors in hands when reaching for objects"
]

# Convert to TextSample objects
text_samples = [TextSample(text=text) for text in symptom_data]
Implementing Advanced Active Learning Strategies
The original implementation used a simple priority scoring mechanism. We'll enhance this with multiple strategies to demonstrate Adala's flexibility:
python
import numpy as np
from typing import Callable, Dict, List

class PrioritizationStrategy:
    """Base class for sample prioritization strategies."""

    def score_samples(self, samples: List[TextSample]) -> np.ndarray:
        """Assign priority scores to samples.

        Args:
            samples: List of samples to score

        Returns:
            Array of scores, higher values indicate higher priority
        """
        raise NotImplementedError("Subclasses must implement this method")

    def select(self, samples: List[TextSample], n: int = 1) -> List[TextSample]:
        """Select the top n highest scoring samples.

        Args:
            samples: List of samples to select from
            n: Number of samples to select

        Returns:
            List of selected samples
        """
        if not samples:
            return []
        scores = self.score_samples(samples)
        indices = np.argsort(-scores)[:n]  # Descending order
        return [samples[i] for i in indices]

class KeywordPriority(PrioritizationStrategy):
    """Prioritize samples based on medical urgency keywords."""

    def __init__(self, keyword_weights: Dict[str, float]):
        """Initialize with keyword weights.

        Args:
            keyword_weights: Dictionary mapping keywords to priority weights
        """
        self.keyword_weights = keyword_weights

    def score_samples(self, samples: List[TextSample]) -> np.ndarray:
        scores = np.zeros(len(samples))
        for i, sample in enumerate(samples):
            # Base score so unmatched samples still carry some priority
            scores[i] = 0.1
            # Add weights for each keyword found
            text_lower = sample.text.lower()
            for keyword, weight in self.keyword_weights.items():
                if keyword in text_lower:
                    scores[i] += weight
        return scores

class UncertaintyPriority(PrioritizationStrategy):
    """Prioritize samples based on model uncertainty."""

    def __init__(self, model_fn: Callable[[List[TextSample]], List[float]]):
        """Initialize with uncertainty model function.

        Args:
            model_fn: Function that returns uncertainty scores for samples
        """
        self.model_fn = model_fn

    def score_samples(self, samples: List[TextSample]) -> np.ndarray:
        # Higher uncertainty = higher priority
        return np.array(self.model_fn(samples))

# Create a keyword-based strategy
keyword_weights = {
    "chest": 0.5,
    "pain": 0.4,
    "breathing": 0.4,
    "dizz": 0.3,
    "head": 0.2,
    "numb": 0.2
}
keyword_strategy = KeywordPriority(keyword_weights)
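A quick dry run shows which symptoms the keyword strategy would surface first. This makes no API calls; it only scores the pool we built earlier:
python
# Preview the three highest-priority samples under the keyword strategy
for s in keyword_strategy.select(text_samples, n=3):
    print(s.text)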
Now, let's implement our enhanced active learning loop:
python
from matplotlib import pyplot as plt

def run_active_learning_loop(
    samples: List[TextSample],
    annotator: GeminiAnnotator,
    strategy: PrioritizationStrategy,
    iterations: int = 5,
    batch_size: int = 1,
    visualization_interval: int = 1
):
    """Run an active learning loop with visualization.

    Args:
        samples: Pool of unlabeled samples
        annotator: Annotation system
        strategy: Sample selection strategy
        iterations: Number of learning iterations
        batch_size: Samples to annotate per iteration
        visualization_interval: How often to update visualizations

    Returns:
        List of labeled samples
    """
    labeled_samples = []
    remaining_samples = list(samples)
    print("\nStarting Active Learning Loop:")
    for i in range(iterations):
        print(f"\n--- Iteration {i+1}/{iterations} ---")
        # Filter out already labeled samples
        remaining_samples = [
            s for s in remaining_samples
            if s not in [getattr(l, '_sample', l) for l in labeled_samples]
        ]
        if not remaining_samples:
            print("No more samples to label. Stopping.")
            break
        # Select the highest-priority samples
        selected = strategy.select(remaining_samples, n=batch_size)
        # Annotate selected samples
        newly_labeled = annotator.annotate(selected)
        labeled_samples.extend(newly_labeled)
        # Display annotation results
        for sample in newly_labeled:
            print(f"Text: {sample.text}")
            print(f"Category: {sample.labels}")
            print(f"Confidence: {sample.metadata.get('confidence', 0):.2f}")
            explanation = sample.metadata.get('explanation', '')
            if len(explanation) > 100:
                explanation = explanation[:100] + "..."
            print(f"Explanation: {explanation}")
            print()
        # Visualize results periodically
        if (i + 1) % visualization_interval == 0:
            visualize_results(labeled_samples)
    return labeled_samples

def visualize_results(labeled_samples: List[LabeledSample]):
    """Create visualizations of annotation results.

    Args:
        labeled_samples: List of labeled samples to visualize
    """
    if not labeled_samples:
        return
    # Extract data
    categories = [s.labels for s in labeled_samples]
    confidence = [s.metadata.get("confidence", 0) for s in labeled_samples]
    texts = [s.text[:30] + "..." for s in labeled_samples]

    # Set up plots
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))

    # Plot 1: count and average confidence per category
    category_counts = {}
    category_confidence = {}
    for cat, conf in zip(categories, confidence):
        if cat not in category_counts:
            category_counts[cat] = 0
            category_confidence[cat] = 0
        category_counts[cat] += 1
        category_confidence[cat] += conf
    for cat in category_confidence:
        category_confidence[cat] /= category_counts[cat]

    cats = list(category_counts.keys())
    counts = list(category_counts.values())
    avg_conf = list(category_confidence.values())
    x = np.arange(len(cats))
    width = 0.35
    ax1.bar(x - width/2, counts, width, label='Count')
    ax1.bar(x + width/2, avg_conf, width, label='Avg Confidence')
    ax1.set_xticks(x)
    ax1.set_xticklabels(cats, rotation=45)
    ax1.set_title('Category Distribution and Confidence')
    ax1.legend()

    # Plot 2: per-sample confidence, sorted ascending
    sorted_indices = np.argsort(confidence)
    ax2.barh(range(len(texts)), [confidence[i] for i in sorted_indices])
    ax2.set_yticks(range(len(texts)))
    ax2.set_yticklabels([texts[i] for i in sorted_indices])
    ax2.set_title('Sample Confidence')
    ax2.set_xlabel('Confidence')

    plt.tight_layout()
    plt.show()
Running the End-to-End Pipeline
Now we can run our complete active learning pipeline:
python
# Initialize components
categories = ["Cardiovascular", "Respiratory", "Gastrointestinal", "Neurological"]
annotator = GeminiAnnotator(categories=categories)
strategy = keyword_strategy

# Run the active learning loop
labeled_data = run_active_learning_loop(
    samples=text_samples,
    annotator=annotator,
    strategy=strategy,
    iterations=5,
    visualization_interval=2
)

# Final visualization and analysis
visualize_results(labeled_data)

# Print summary statistics
print("\nAnnotation Summary:")
print(f"Total samples annotated: {len(labeled_data)}")
assigned_labels = [s.labels for s in labeled_data]  # Avoid shadowing the category list above
unique_labels = set(assigned_labels)
print(f"Categories found: {len(unique_labels)}")
for label in unique_labels:
    count = assigned_labels.count(label)
    print(f" - {label}: {count} samples ({count/len(labeled_data):.1%})")
avg_confidence = sum(s.metadata.get("confidence", 0) for s in labeled_data) / len(labeled_data)
print(f"Average confidence: {avg_confidence:.2f}")
Practical Applications and Extensions
This pipeline has numerous practical applications beyond medical symptom classification:
1. Content Moderation: prioritize borderline posts for human review while auto-labeling clear-cut ones
2. Customer Feedback Analysis: surface the most ambiguous or urgent tickets for annotation first
3. Clinical Trial Document Processing: triage large document sets so domain experts label only the most informative passages
You can extend this implementation by:
- Swapping in UncertaintyPriority, or blending it with keyword scores (see the sketch below)
- Increasing batch_size to annotate several samples per iteration
- Adding a human review step for low-confidence annotations before accepting labels
- Feeding the labeled output back into an Adala Agent via StaticEnvironment for skill learning
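As an example of the first extension, here is a sketch of a combined strategy. It is not part of Adala; it simply takes a weighted sum of the two scorers defined above, and the length-based uncertainty function and weights are illustrative placeholders:
python
class CombinedPriority(PrioritizationStrategy):
    """Weighted sum of two prioritization strategies."""

    def __init__(self, a: PrioritizationStrategy, b: PrioritizationStrategy,
                 weight_a: float = 0.5, weight_b: float = 0.5):
        self.a, self.b = a, b
        self.weight_a, self.weight_b = weight_a, weight_b

    def score_samples(self, samples: List[TextSample]) -> np.ndarray:
        # Blend the two score vectors; both follow "higher = more urgent"
        return (self.weight_a * self.a.score_samples(samples)
                + self.weight_b * self.b.score_samples(samples))

# Example: blend keyword urgency with a toy length-based uncertainty proxy
uncertainty_strategy = UncertaintyPriority(
    lambda samples: [min(len(s.text) / 100, 1.0) for s in samples]
)
combined_strategy = CombinedPriority(keyword_strategy, uncertainty_strategy,
                                     weight_a=0.7, weight_b=0.3)
Because select() is inherited from the base class, passing combined_strategy to run_active_learning_loop is a drop-in change.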
Conclusion
The integration of Adala and Google Gemini provides a powerful framework for building intelligent annotation pipelines. By leveraging active learning strategies, we can dramatically reduce the manual effort required while maintaining high-quality annotations.
The modular design patterns demonstrated in this tutorial allow for easy adaptation to different domains and annotation tasks.
For those interested in exploring further, the Adala GitHub repository offers additional examples and documentation to extend these concepts to more complex annotation scenarios.