© Copyright 2026 Cognitora. All Rights Reserved.

Code Interpreter API

Execute code securely in isolated environments with persistent sessions, file uploads, and real-time output streaming using Cognitora's Code Interpreter API.

The Code Interpreter API provides secure, isolated code execution environments for AI agents, built on Kata Containers with Firecracker microVMs. It executes Python, JavaScript, and Bash.

Understanding Code Interpreter

Cognitora's Code Interpreter is designed specifically for AI agents that need to execute code dynamically. Unlike traditional code execution platforms, it runs every execution in a secure, dedicated Firecracker microVM with hardware-level isolation.

Key Features

  • Multi-Language Support: Execute Python, JavaScript, and Bash code
  • Persistent Sessions: Maintain state across multiple code executions
  • File Upload Support: Upload text and binary files (string/base64 encoding)
  • Networking Control: Optional internet access with security-focused defaults
  • Secure Isolation: Each session runs in a dedicated Firecracker microVM
  • Real-time Output: Stream execution results and logs
  • Resource Management: Configurable CPU, memory, and storage limits
  • Session Management: Create, list, and terminate sessions
  • Execution History: Track and manage executions across sessions

Supported Languages

Language   | Runtime     | Key Features
-----------|-------------|-----------------------------------------------
python     | Python 3.11 | NumPy, Pandas, Matplotlib, Requests, and more
javascript | Node.js 20  | ES2022 support, built-in modules
bash       | Bash 5.x    | Full shell environment with common utilities

Basic Code Execution

One-Shot Execution

Execute code without managing sessions for simple, stateless operations:

cURL Example

bash
curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d '{
    "language": "python",
    "code": "import math\nresult = math.sqrt(16)\nprint(f\"Square root of 16 is {result}\")",
    "timeout_seconds": 60
  }'

Python SDK Example

python
from cognitora import Cognitora

client = Cognitora(api_key="your_api_key")

# Execute Python code directly with networking enabled (default)
result = client.code_interpreter.execute(
    code="""
import math
import datetime
import requests

print("Hello from Cognitora Code Interpreter!")
print(f"Current time: {datetime.datetime.now()}")
print(f"Pi value: {math.pi}")

# Simple calculation
result = 42 * 1337
print(f"42 * 1337 = {result}")

# Example with networking (enabled by default for code interpreter)
try:
    response = requests.get('https://api.github.com/zen')
    print(f"GitHub Zen: {response.text}")
except Exception as e:
    print(f"Network request failed: {e}")
    """,
    language="python",
    networking=True,  # Default for code interpreter
    timeout_seconds=60
)

print(f"Status: {result.data.status}")

for output in result.data.outputs:
    print(f"{output.type}: {output.content}")

JavaScript/TypeScript SDK Example

typescript
import { Cognitora } from '@cognitora/sdk';

const client = new Cognitora({ apiKey: 'your_api_key' });

// Execute JavaScript code with networking enabled (default)
const result = await client.codeInterpreter.execute({
  code: `
    const https = require('https');
    const data = [1, 2, 3, 4, 5];
    const sum = data.reduce((a, b) => a + b, 0);
    const average = sum / data.length;
    
    console.log('Data:', data);
    console.log('Sum:', sum);
    console.log('Average:', average);
    
    // Example with networking (enabled by default for code interpreter)
    https.get('https://api.github.com/zen', (res) => {
      let data = '';
      res.on('data', chunk => data += chunk);
      res.on('end', () => console.log('GitHub Zen:', data));
    }).on('error', (err) => {
      console.log('Network request failed:', err.message);
    });
    
    // JSON output
    console.log(JSON.stringify({
      count: data.length,
      sum: sum,
      average: average
    }));
  `,
  language: 'javascript',
  networking: true,  // Default for code interpreter
  timeout_seconds: 60
});

console.log(`Status: ${result.data.status}`);
result.data.outputs.forEach(output => {
  console.log(`${output.type}: ${output.content}`);
});

Networking Control

The Code Interpreter API supports per-execution networking control, letting you balance functionality against security:

Default Networking Behavior

Service          | Default Networking | Security Rationale
-----------------|--------------------|---------------------------------------
Code Interpreter | true (enabled)     | Needs package installs, data fetching

Secure Execution (Networking Disabled)

python
# Execute code without internet access for security
result = client.code_interpreter.execute(
    code="""
import numpy as np
import math

# Secure computation without external dependencies
data = np.random.randn(1000)
mean_value = np.mean(data)
std_value = np.std(data)

print(f"Generated {len(data)} random numbers")
print(f"Mean: {mean_value:.4f}")
print(f"Standard deviation: {std_value:.4f}")
print(f"Pi value: {math.pi}")
    """,
    language="python",
    networking=False  # Disable networking for secure execution
)

Networking-Enabled Execution (Default)

python
# Execute code with internet access (default behavior)
result = client.code_interpreter.execute(
    code="""
import requests
import pandas as pd

# Fetch external data
response = requests.get('https://api.coindesk.com/v1/bpi/currentprice.json')
bitcoin_data = response.json()

print(f"Bitcoin price: {bitcoin_data['bpi']['USD']['rate']}")

# Install and use external packages
import subprocess
subprocess.run(['pip', 'install', 'matplotlib'], check=True)

import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4], [1, 4, 2, 3])
plt.title('Sample Plot')
plt.show()
    """,
    language="python",
    networking=True  # Explicitly enable networking (default)
)

Execution Management

List All Executions

python
# List all interpreter executions across all sessions
executions = client.code_interpreter.list_all_executions(
    limit=50,
    status='completed'
)

print(f"Found {len(executions)} executions")
for execution in executions:
    print(f"Execution {execution['id']}: {execution['status']}")
    print(f"  Language: {execution['language']}")
    print(f"  Runtime: {execution['execution_time_ms']}ms")

Get Execution Details

python
# Get detailed information about a specific execution
execution_details = client.code_interpreter.get_execution('exec_123456')

print(f"Execution status: {execution_details['status']}")
print(f"Code executed: {execution_details['code'][:100]}...")
print(f"Session ID: {execution_details['session_id']}")
print(f"Output: {execution_details['outputs']}")

Session Execution History

python
# Get all executions for a specific session
session_executions = client.code_interpreter.get_session_executions(
    'session_123456',
    limit=20
)

print(f"Session has {len(session_executions)} executions")
for i, execution in enumerate(session_executions):
    print(f"{i+1}. {execution['status']} - {execution['execution_time_ms']}ms")

JavaScript SDK Execution Management

typescript
// List all interpreter executions
const allExecutions = await client.codeInterpreter.listAllExecutions({
  limit: 20,
  status: 'completed'
});

console.log(`Found ${allExecutions.executions.length} executions`);

// Get specific execution details
const executionDetails = await client.codeInterpreter.getExecution('exec_123456');
console.log(`Execution status: ${executionDetails.status}`);

// Get executions for a specific session
const sessionExecutions = await client.codeInterpreter.getSessionExecutions(
  'session_123456',
  { limit: 10 }
);

Session Management

For stateful executions where variables and context should be maintained across multiple calls, use persistent sessions.

Creating a Session

Sessions provide persistent environments where variables, imported modules, and state are maintained between executions:

python
# Create a persistent Python session (defaults to 1 day if no timeout specified)
session = client.code_interpreter.create_session(
    language="python",
    timeout_seconds=1800,  # 30 minutes (optional)
    resources={
        "cpu_cores": 2,
        "memory_mb": 1024,
        "storage_gb": 8
    },
    environment={
        "PYTHONPATH": "/custom/path",
        "DEBUG": "true"
    }
)

session_id = session.data.session_id
print(f"Created session: {session_id}")

Session Timeout Management

Sessions use a simplified timeout approach:

Default Behavior

  • If no timeout is specified, sessions default to 1 day (86400 seconds)
  • This provides sufficient time for development and AI agent workflows
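When you do want an explicit expiry instead of a relative timeout, the ISO 8601 timestamp for `expires_at` can be computed from a duration using only the standard library (the 90-minute duration here is just an illustration):

```python
from datetime import datetime, timedelta, timezone

# Build an ISO 8601 UTC expiry 90 minutes from now, suitable for
# the expires_at session parameter.
expires_at = (
    datetime.now(timezone.utc) + timedelta(minutes=90)
).strftime("%Y-%m-%dT%H:%M:%SZ")

print(expires_at)  # e.g. 2025-01-07T15:00:00Z
```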

Custom Timeout Options

Option 1: Timeout in Seconds

python
session = client.code_interpreter.create_session(
    language="python",
    timeout_seconds=3600  # 1 hour from now
)

Option 2: Exact Expiration Time

python
session = client.code_interpreter.create_session(
    language="python",
    expires_at="2025-01-07T15:00:00Z"  # Exact ISO 8601 datetime
)

Option 3: Default (Recommended)

python
session = client.code_interpreter.create_session(
    language="python"  # Automatically expires after 1 day
)

Executing Code in Sessions

Variables and state persist across executions within the same session:

python
# First execution - set variables
result1 = client.code_interpreter.execute(
    code="""
import pandas as pd
import numpy as np

# Create dataset
data = {
    'name': ['Alice', 'Bob', 'Charlie'],
    'age': [25, 30, 35], 
    'salary': [50000, 60000, 70000]
}
df = pd.DataFrame(data)
print("Dataset created:")
print(df)

# Store for later use
dataset_stats = {
    'mean_age': df['age'].mean(),
    'mean_salary': df['salary'].mean(),
    'count': len(df)
}
print(f"Stats calculated: {dataset_stats}")
    """,
    session_id=session_id
)

# Second execution - use previously set variables
result2 = client.code_interpreter.execute(
    code="""
# Variables from previous execution are still available
print("Previously calculated stats:")
print(f"Mean age: {dataset_stats['mean_age']}")
print(f"Mean salary: ${dataset_stats['mean_salary']:,.2f}")
print(f"Total records: {dataset_stats['count']}")

# Add more analysis
df['salary_normalized'] = df['salary'] / dataset_stats['mean_salary']
print("\nDataset with normalized salaries:")
print(df)
    """,
    session_id=session_id
)

print("Both executions completed successfully!")

Session List and Management

python
# List all active sessions
sessions = client.code_interpreter.list_sessions()
for session in sessions.data.sessions:
    print(f"Session {session.session_id}: {session.language} ({session.status})")
    print(f"  Created: {session.created_at}")
    print(f"  Expires: {session.expires_at}")

# Get detailed session information
session_details = client.code_interpreter.get_session(session_id)
print(f"Session details: {session_details.data}")

# Terminate session when done
client.code_interpreter.delete_session(session_id)
print("Session terminated")

File Upload and Processing

Upload files to the execution environment for data processing:

Python File Processing Example

python
import json

# Create files to upload
csv_data = """name,age,city
Alice,30,New York
Bob,25,Los Angeles  
Charlie,35,Chicago
Diana,28,Miami"""

config_data = {
    "processing_mode": "batch",
    "output_format": "json",
    "include_stats": True
}

# Upload files and process
result = client.code_interpreter.execute(
    code="""
import pandas as pd
import json

# Read uploaded CSV
df = pd.read_csv('data.csv')
print("Data loaded:")
print(df)

# Read configuration
with open('config.json', 'r') as f:
    config = json.load(f)
print(f"\\nConfiguration: {config}")

# Process based on config
if config['include_stats']:
    stats = {
        'total_records': len(df),
        'average_age': df['age'].mean(),
        'cities': df['city'].unique().tolist(),
        'age_distribution': df['age'].describe().to_dict()
    }
    
    print(f"\\nStatistics: {stats}")
    
    # Save results
    if config['output_format'] == 'json':
        output = {
            'data': df.to_dict('records'),
            'statistics': stats
        }
        with open('results.json', 'w') as f:
            json.dump(output, f, indent=2)
        print("Results saved to results.json")
    """,
    files=[
        {
            "name": "data.csv",
            "content": csv_data,
            "encoding": "string"
        },
        {
            "name": "config.json", 
            "content": json.dumps(config_data),
            "encoding": "string"
        }
    ],
    language="python",
    timeout_seconds=120
)

print("File processing completed!")
for output in result.data.outputs:
    print(f"{output.type}: {output.content}")

JavaScript File Processing

javascript
const files = [
  {
    name: 'package.json',
    content: JSON.stringify({
      name: "data-processor",
      version: "1.0.0",
      dependencies: {}
    }),
    encoding: 'string'
  },
  {
    name: 'data.txt',
    content: '1,2,3,4,5\n6,7,8,9,10\n11,12,13,14,15',
    encoding: 'string'
  }
];

const result = await client.codeInterpreter.execute({
  code: `
    const fs = require('fs');
    
    // Read package.json
    const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
    console.log('Package info:', pkg);
    
    // Process data file
    const data = fs.readFileSync('data.txt', 'utf8');
    const rows = data.trim().split('\\n');
    const numbers = rows.map(row => row.split(',').map(Number));
    
    console.log('Data matrix:', numbers);
    
    // Calculate statistics
    const flat = numbers.flat();
    const stats = {
      total: flat.length,
      sum: flat.reduce((a, b) => a + b, 0),
      avg: flat.reduce((a, b) => a + b, 0) / flat.length,
      min: Math.min(...flat),
      max: Math.max(...flat)
    };
    
    console.log('Statistics:', stats);
    
    // Save results
    fs.writeFileSync('results.json', JSON.stringify(stats, null, 2));
    console.log('Results saved!');
  `,
  files: files,
  language: 'javascript',
  timeout_seconds: 60
});

Advanced Code Execution Features

Environment Variables and Configuration

python
# Execute with custom environment variables
result = client.code_interpreter.execute(
    code="""
import os

print("Environment variables:")
print(f"API_URL: {os.getenv('API_URL')}")
print(f"DEBUG: {os.getenv('DEBUG')}")
print(f"WORKER_ID: {os.getenv('WORKER_ID')}")
print(f"LOG_LEVEL: {os.getenv('LOG_LEVEL')}")

# Use environment for configuration
if os.getenv('DEBUG') == 'true':
    print("Debug mode enabled")
    import sys
    print(f"Python version: {sys.version}")
    print(f"Available modules: {sys.modules.keys()}")
    """,
    environment={
        "API_URL": "https://api.example.com",
        "DEBUG": "true",
        "WORKER_ID": "interpreter-001",
        "LOG_LEVEL": "INFO"
    },
    language="python"
)

Error Handling and Debugging

python
# Example of error handling in code execution
try:
    result = client.code_interpreter.execute(
        code="""
# This will cause an error
undefined_variable = some_undefined_var + 1
print("This won't be reached")
        """,
        language="python"
    )
    
    print(f"Execution status: {result.data.status}")
    
    if result.data.status == "failed":
        print("Execution failed!")
        for output in result.data.outputs:
            if output.type == "stderr":
                print(f"Error: {output.content}")
    
except Exception as e:
    print(f"API call failed: {e}")

Long-Running Computations

python
# For long-running computations, use appropriate timeouts
result = client.code_interpreter.execute(
    code="""
import time
import numpy as np

print("Starting long computation...")

# Simulate heavy computation
data = np.random.randn(10000, 1000)
print(f"Generated matrix: {data.shape}")

# Complex calculation
result = np.linalg.svd(data, full_matrices=False)
print(f"SVD completed. Shapes: {[r.shape for r in result]}")

# More processing
for i in range(10):
    partial_result = np.mean(data[i*1000:(i+1)*1000])
    print(f"Batch {i+1}/10: mean = {partial_result:.4f}")
    time.sleep(1)  # Simulate processing time

print("Computation completed successfully!")
    """,
    language="python",
    timeout_seconds=300  # 5 minutes for heavy computation
)

Multi-Language Examples

Python Data Science

python
result = client.code_interpreter.execute(
    code="""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime, timedelta

# Generate sample time series data
dates = pd.date_range(start='2024-01-01', end='2024-12-31', freq='D')
values = np.cumsum(np.random.randn(len(dates))) + 100

df = pd.DataFrame({
    'date': dates,
    'value': values
})

print("Time series data sample:")
print(df.head())
print(f"\\nData shape: {df.shape}")

# Statistical analysis
stats = {
    'mean': df['value'].mean(),
    'std': df['value'].std(),
    'min': df['value'].min(),
    'max': df['value'].max(),
    'trend': 'increasing' if df['value'].iloc[-1] > df['value'].iloc[0] else 'decreasing'
}

print(f"\\nStatistics: {stats}")

# Simple visualization
plt.figure(figsize=(12, 6))
plt.plot(df['date'], df['value'])
plt.title('Time Series Analysis')
plt.xlabel('Date')
plt.ylabel('Value')
plt.grid(True)
plt.savefig('timeseries.png', dpi=300, bbox_inches='tight')
print("\\nChart saved as timeseries.png")
    """,
    language="python"
)

JavaScript API Integration

typescript
const result = await client.codeInterpreter.execute({
  code: `
    const https = require('https');
    const fs = require('fs');
    
    function fetchData(url) {
      return new Promise((resolve, reject) => {
        https.get(url, (res) => {
          let data = '';
          res.on('data', chunk => data += chunk);
          res.on('end', () => {
            try {
              resolve(JSON.parse(data));
            } catch (e) {
              reject(e);
            }
          });
          res.on('error', reject);
        });
      });
    }
    
    async function main() {
      try {
        console.log('Fetching GitHub API data...');
        const repoData = await fetchData('https://api.github.com/repos/microsoft/vscode');
        
        const summary = {
          name: repoData.name,
          description: repoData.description,
          stars: repoData.stargazers_count,
          forks: repoData.forks_count,
          language: repoData.language,
          updated: repoData.updated_at,
          topics: repoData.topics?.slice(0, 5) || []
        };
        
        console.log('Repository Summary:');
        console.log(JSON.stringify(summary, null, 2));
        
        // Save to file
        fs.writeFileSync('repo_summary.json', JSON.stringify(summary, null, 2));
        console.log('\\nSummary saved to repo_summary.json');
        
      } catch (error) {
        console.error('Error:', error.message);
      }
    }
    
    main();
  `,
  language: 'javascript',
  timeout_seconds: 60
});

Bash System Operations

python
result = client.code_interpreter.execute(
    code="""
#!/bin/bash

echo "=== System Information Report ==="
echo "Date: $(date)"
echo "Hostname: $(hostname)"
echo ""

echo "=== System Resources ==="
echo "CPU Information:"
cat /proc/cpuinfo | grep "model name" | head -1
echo ""

echo "Memory Information:"
free -h
echo ""

echo "Disk Usage:"
df -h | head -5
echo ""

echo "=== Network Configuration ==="
echo "Network interfaces:"
ip addr show | grep -E "^[0-9]+:" | awk '{print $2}' | sed 's/://'
echo ""

echo "=== Process Information ==="
echo "Top 5 processes by memory usage:"
ps aux --sort=-%mem | head -6
echo ""

echo "=== Environment Variables ==="
echo "PATH: $PATH"
echo "USER: $USER"
echo "HOME: $HOME"
echo ""

echo "=== File System ==="
echo "Current directory: $(pwd)"
echo "Directory contents:"
ls -la
echo ""

echo "Report completed at $(date)"
    """,
    language="bash",
    timeout_seconds=60
)

Integration Patterns

AI Agent Integration

python
class CognitoraCodeAgent:
    """AI Agent that uses Cognitora Code Interpreter for code execution"""
    
    def __init__(self, api_key):
        self.client = Cognitora(api_key=api_key)
        self.active_sessions = {}
    
    def create_session(self, language="python", session_name=None):
        """Create a new code execution session"""
        session = self.client.code_interpreter.create_session(
            language=language,
            timeout_seconds=3600,  # 1 hour
            resources={
                "cpu_cores": 2,
                "memory_mb": 2048,
                "storage_gb": 10
            }
        )
        
        session_id = session.data.session_id
        self.active_sessions[session_name or session_id] = session_id
        return session_id
    
    def execute_code(self, code, language="python", session_name=None):
        """Execute code with optional session context"""
        session_id = None
        if session_name and session_name in self.active_sessions:
            session_id = self.active_sessions[session_name]
        
        result = self.client.code_interpreter.execute(
            code=code,
            language=language,
            session_id=session_id,
            timeout_seconds=300
        )
        
        return {
            "status": result.data.status,
            "outputs": [{"type": o.type, "content": o.content} for o in result.data.outputs],
            "execution_time": result.data.execution_time_ms,
            "session_id": result.data.session_id
        }
    
    def analyze_data(self, data_file_content, analysis_type="basic"):
        """Specialized method for data analysis"""
        session_id = self.create_session("python", "data_analysis")
        
        # Upload data
        result1 = self.client.code_interpreter.execute(
            code=f"""
import pandas as pd
import numpy as np
from io import StringIO

# Load data
data = '''
{data_file_content}
'''
df = pd.read_csv(StringIO(data))
print("Data loaded successfully:")
print(df.head())
print(f"Shape: {{df.shape}}")
            """,
            session_id=session_id
        )
        
        # Perform analysis based on type
        if analysis_type == "basic":
            analysis_code = """
print("\\n=== Basic Analysis ===")
print("\\nData types:")
print(df.dtypes)
print("\\nSummary statistics:")
print(df.describe())
print("\\nMissing values:")
print(df.isnull().sum())
            """
        elif analysis_type == "correlation":
            analysis_code = """
print("\\n=== Correlation Analysis ===")
numeric_cols = df.select_dtypes(include=[np.number]).columns
if len(numeric_cols) > 1:
    correlation_matrix = df[numeric_cols].corr()
    print("Correlation matrix:")
    print(correlation_matrix)
else:
    print("Not enough numeric columns for correlation analysis")
            """
        else:
            raise ValueError(f"Unknown analysis_type: {analysis_type}")
        
        result2 = self.client.code_interpreter.execute(
            code=analysis_code,
            session_id=session_id
        )
        
        return {
            "data_load": result1,
            "analysis": result2
        }

Jupyter-Style Development

python
class CognitoraNotebook:
    """Jupyter-like notebook interface using Cognitora"""
    
    def __init__(self, api_key, language="python"):
        self.client = Cognitora(api_key=api_key)
        self.language = language
        self.session_id = None
        self.cell_outputs = []
        
        # Create persistent session
        session = self.client.code_interpreter.create_session(
            language=language,
            timeout_seconds=7200  # 2 hours
        )
        self.session_id = session.data.session_id
        print(f"Notebook session started: {self.session_id}")
    
    def run_cell(self, code, cell_id=None):
        """Execute a code cell"""
        result = self.client.code_interpreter.execute(
            code=code,
            session_id=self.session_id,
            language=self.language
        )
        
        cell_output = {
            "cell_id": cell_id or len(self.cell_outputs),
            "code": code,
            "outputs": result.data.outputs,
            "status": result.data.status,
            "execution_time": result.data.execution_time_ms
        }
        
        self.cell_outputs.append(cell_output)
        
        # Display output
        print(f"[{cell_output['cell_id']}] Status: {cell_output['status']}")
        for output in result.data.outputs:
            if output.type == "stdout":
                print(output.content)
            elif output.type == "stderr":
                print(f"ERROR: {output.content}")
        
        return cell_output
    
    def display_history(self):
        """Show execution history"""
        for cell in self.cell_outputs:
            print(f"Cell {cell['cell_id']} ({cell['status']}):")
            print(f"  Code: {cell['code'][:50]}...")
            print(f"  Time: {cell['execution_time']}ms")

# Usage example
notebook = CognitoraNotebook("your_api_key", "python")

# Cell 1: Data setup
notebook.run_cell("""
import pandas as pd
import numpy as np

# Create sample dataset
np.random.seed(42)
data = pd.DataFrame({
    'x': np.random.randn(100),
    'y': np.random.randn(100),
    'category': np.random.choice(['A', 'B', 'C'], 100)
})

print("Dataset created:")
print(data.head())
""", "setup")

# Cell 2: Analysis
notebook.run_cell("""
# Analysis from previous cell's data
print("Analysis results:")
print(f"Mean x: {data['x'].mean():.3f}")
print(f"Mean y: {data['y'].mean():.3f}")
print(f"Category counts:")
print(data['category'].value_counts())
""", "analysis")

Session Logs and Debugging

Retrieving Session Logs

python
# Get comprehensive session logs
logs = client.code_interpreter.get_session_logs(
    session_id,
    limit=100,
    offset=0
)

print("Session execution history:")
for log_entry in logs.data.logs:
    print(f"[{log_entry.timestamp}] {log_entry.type}: {log_entry.content}")

Session Performance Monitoring

python
from datetime import datetime

def monitor_session_performance(client, session_id):
    """Monitor session resource usage and performance"""
    session = client.code_interpreter.get_session(session_id)
    
    performance_metrics = {
        "session_age": (datetime.now() - session.data.created_at).total_seconds(),
        "executions_count": len(session.data.execution_history or []),
        "average_execution_time": 0,
        "resource_usage": session.data.resources,
        "status": session.data.status
    }
    
    if session.data.execution_history:
        # Note: avoid naming the loop variable `exec`, which shadows the builtin.
        total_time = sum(item.execution_time_ms for item in session.data.execution_history)
        performance_metrics["average_execution_time"] = total_time / len(session.data.execution_history)
    
    return performance_metrics

# Usage
metrics = monitor_session_performance(client, session_id)
print(f"Session Performance: {metrics}")

Best Practices

Code Organization

  1. Use Sessions for Related Operations: Group related code executions in sessions to maintain context
  2. Manage Session Lifecycle: Create sessions when needed, terminate when done
  3. Handle Timeouts Appropriately: Set realistic timeouts based on your code complexity
  4. File Management: Clean up temporary files when possible
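Points 1 and 2 can be wrapped in a context manager so a session is always terminated, even when an execution raises. This is a sketch against the SDK methods shown above; `_StubInterpreter` is a stand-in used only so the pattern can be demonstrated without a live client:

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def interpreter_session(client, language="python", **kwargs):
    """Create a session, yield its id, and guarantee cleanup."""
    session = client.code_interpreter.create_session(language=language, **kwargs)
    session_id = session.data.session_id
    try:
        yield session_id
    finally:
        # Runs whether the body succeeds or raises.
        client.code_interpreter.delete_session(session_id)

# Stand-in interpreter that records calls (illustration only).
class _StubInterpreter:
    def __init__(self):
        self.deleted = []
    def create_session(self, **kwargs):
        return SimpleNamespace(data=SimpleNamespace(session_id="session_demo"))
    def delete_session(self, session_id):
        self.deleted.append(session_id)

client = SimpleNamespace(code_interpreter=_StubInterpreter())
with interpreter_session(client, timeout_seconds=1800) as session_id:
    # With the real SDK: client.code_interpreter.execute(code=..., session_id=session_id)
    pass
```

With the real SDK client the `with` body would run executions; the `finally` block guarantees `delete_session` is called either way.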

Error Handling

  1. Check Execution Status: Always verify result.data.status before processing outputs
  2. Handle Different Output Types: Process stdout, stderr, and error outputs appropriately
  3. Implement Retry Logic: For transient failures, implement exponential backoff
  4. Resource Monitoring: Monitor session resource usage to prevent unexpected costs
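Point 3 can be implemented with a small generic helper; `run` is any zero-argument callable (for example a lambda wrapping `client.code_interpreter.execute`), and the delay values here are only illustrative defaults:

```python
import random
import time

def execute_with_retry(run, max_attempts=4, base_delay=0.5):
    """Call `run`, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return run()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # base_delay, then 2x, 4x, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Demonstration: a callable that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = execute_with_retry(flaky, base_delay=0.01)
```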

Security

  1. Environment Variables: Use environment variables for sensitive configuration
  2. Input Validation: Validate and sanitize code inputs when accepting user code
  3. Resource Limits: Set appropriate resource limits for untrusted code execution
  4. Session Isolation: Each session provides complete isolation from other sessions
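Points 2 and 3 can be combined into a helper that rejects oversized payloads client-side and pins restrictive settings onto untrusted code. The cap and timeout below are made-up illustrations, not recommended values; the real isolation guarantee still comes from the Firecracker microVM:

```python
MAX_CODE_BYTES = 64_000  # arbitrary illustrative cap

def untrusted_execute_args(code: str) -> dict:
    """Build keyword arguments for execute() with restrictive defaults."""
    if len(code.encode("utf-8")) > MAX_CODE_BYTES:
        raise ValueError("code payload too large")
    return {
        "code": code,
        "language": "python",
        "networking": False,    # no internet access for untrusted code
        "timeout_seconds": 30,  # fail fast
    }

args = untrusted_execute_args("print('hello')")
# client.code_interpreter.execute(**args) would then run it in isolation.
```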

Performance

  1. Session Reuse: Reuse sessions for multiple related executions to avoid startup overhead
  2. Optimize Resource Allocation: Choose appropriate CPU/memory based on your workload
  3. Batch Operations: Group related operations in single executions when possible
  4. File Upload Efficiency: Minimize file upload size and frequency
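Point 3, batching, can be as simple as joining related snippets into a single payload so one microVM startup is paid instead of several. The step markers are a hypothetical convention for splitting the combined stdout afterwards:

```python
def batch_snippets(snippets):
    """Join related snippets into one payload with step markers."""
    parts = []
    for i, snippet in enumerate(snippets, 1):
        parts.append(f'print("--- step {i} ---")')
        parts.append(snippet.strip())
    return "\n".join(parts)

code = batch_snippets([
    "total = sum(range(10))",
    "print(total)",
])
# A single execute() call now covers both steps:
# client.code_interpreter.execute(code=code, language="python")
```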

REST API Reference

For direct API access using cURL or other HTTP clients:

Execute Code

http
POST /api/v1/interpreter/execute
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "language": "python",
  "code": "print('Hello World')",
  "session_id": "optional_session_id",
  "timeout_seconds": 60,
  "environment": {
    "DEBUG": "true"
  },
  "files": [
    {
      "name": "data.txt",
      "content": "file content here",
      "encoding": "string"
    }
  ]
}

Create Session

http
POST /api/v1/interpreter/sessions
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "language": "python",
  "timeout_seconds": 1800,
  "resources": {
    "cpu_cores": 2,
    "memory_mb": 1024,
    "storage_gb": 5
  }
}

List Sessions

http
GET /api/v1/interpreter/sessions
Authorization: Bearer your_api_key

Get Session Details

http
GET /api/v1/interpreter/sessions/{session_id}
Authorization: Bearer your_api_key

Delete Session

http
DELETE /api/v1/interpreter/sessions/{session_id}
Authorization: Bearer your_api_key

File Uploads

The Code Interpreter API supports uploading files that will be available in your execution environment. Files can be uploaded with both one-shot executions and persistent sessions, enabling powerful data processing workflows.

File Upload Interface

typescript
interface FileUpload {
  name: string;           // Filename (can include relative path)
  content: string;        // File content
  encoding: 'string' | 'base64';  // Content encoding type
}

Supported Encodings

  • string: Plain text content (UTF-8) for JSON, CSV, code files, etc.
  • base64: Base64 encoded binary content for images, PDFs, archives, etc.
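Building a valid FileUpload entry for either encoding takes only the standard library; the helper names below are illustrative, not part of the SDK:

```python
import base64

def text_file(name: str, text: str) -> dict:
    """FileUpload entry for UTF-8 text content."""
    return {"name": name, "content": text, "encoding": "string"}

def binary_file(name: str, payload: bytes) -> dict:
    """FileUpload entry for binary content, base64-encoded."""
    return {
        "name": name,
        "content": base64.b64encode(payload).decode("ascii"),
        "encoding": "base64",
    }

upload = binary_file("logo.png", b"\x89PNG\r\n")
```

Either dict can be passed directly in the `files` array of an execute request.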

Basic File Upload Example

Upload text files with your code execution:

bash
curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d '{
    "language": "python",
    "code": "import json\nwith open(\"config.json\", \"r\") as f:\n    config = json.load(f)\nprint(f\"Environment: {config['environment']}\")",
    "files": [
      {
        "name": "config.json",
        "content": "{\"environment\": \"production\", \"version\": \"2.0\"}",
        "encoding": "string"
      }
    ]
  }'

Multiple File Upload

Upload multiple files for complex data processing:

bash
curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d '{
    "language": "python",
    "code": "import pandas as pd\nimport json\n\n# Read configuration\nwith open(\"config.json\", \"r\") as f:\n    config = json.load(f)\nprint(f\"Dataset: {config['dataset']}\")\n\n# Process CSV data\ndf = pd.read_csv(\"data.csv\")\nprint(f\"Data shape: {df.shape}\")\nprint(df.head())",
    "files": [
      {
        "name": "config.json",
        "content": "{\"dataset\": \"sales\", \"analysis\": \"quarterly\"}",
        "encoding": "string"
      },
      {
        "name": "data.csv",
        "content": "product,sales,quarter\nLaptop,1500,Q1\nMouse,500,Q1\nKeyboard,800,Q2",
        "encoding": "string"
      }
    ]
  }'

Binary File Upload (Base64)

Upload binary files using base64 encoding:

bash
Copy
# First encode your binary file (strip newlines so the output embeds cleanly in JSON)
FILE_CONTENT=$(base64 < your_file.pdf | tr -d '\n')

curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d "{
    \"language\": \"python\",
    \"code\": \"import os\\nprint(f'File size: {os.path.getsize(\\\"document.pdf\\\")} bytes')\",
    \"files\": [
      {
        \"name\": \"document.pdf\",
        \"content\": \"$FILE_CONTENT\",
        \"encoding\": \"base64\"
      }
    ]
  }"

Session-Based File Upload

Files uploaded to a session persist across multiple executions, enabling stateful workflows:

bash
Copy
# 1. Create a session
SESSION_ID=$(curl -s -X POST https://api.cognitora.dev/api/v1/interpreter/sessions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d '{"language": "python", "timeout_seconds": 3600}' | jq -r '.data.session_id')

# 2. Upload files in first execution
curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d "{
    \"language\": \"python\",
    \"session_id\": \"$SESSION_ID\",
    \"code\": \"import json\\nwith open('dataset.json', 'w') as f:\\n    json.dump({'processed': True}, f)\\nprint('Files processed')\",
    \"files\": [
      {
        \"name\": \"raw_data.json\",
        \"content\": \"{\\\"users\\\": 1000, \\\"revenue\\\": 50000}\",
        \"encoding\": \"string\"
      }
    ]
  }"

# 3. Access files in subsequent execution (no re-upload needed)
curl -X POST https://api.cognitora.dev/api/v1/interpreter/execute \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-key>" \
  -d "{
    \"language\": \"python\",
    \"session_id\": \"$SESSION_ID\",
    \"code\": \"import json\\n\\n# Read original uploaded file\\nwith open('raw_data.json', 'r') as f:\\n    raw = json.load(f)\\nprint(f'Raw data: {raw}')\\n\\n# Read file from previous execution\\nwith open('dataset.json', 'r') as f:\\n    processed = json.load(f)\\nprint(f'Processed: {processed}')\"
  }"

Using SDKs for File Upload

JavaScript SDK

javascript
Copy
import { Cognitora } from '@cognitora/sdk';

const client = new Cognitora({ apiKey: 'your-api-key' });

const result = await client.codeInterpreter.execute({
  code: `
import pandas as pd
df = pd.read_csv('data.csv')
print(f"Loaded {len(df)} rows")
print(df.head())
`,
  language: 'python',
  files: [
    {
      name: 'data.csv',
      content: 'name,age,city\nAlice,25,New York\nBob,30,San Francisco',
      encoding: 'string'
    }
  ]
});

Python SDK

python
Copy
from cognitora import Cognitora, FileUpload

client = Cognitora(api_key="your-api-key")

result = client.code_interpreter.execute(
    code="""
import pandas as pd
df = pd.read_csv('data.csv')
print(f"Loaded {len(df)} rows")
print(df.head())
    """,
    files=[
        FileUpload(
            name="data.csv",
            content="name,age,city\\nAlice,25,New York\\nBob,30,San Francisco",
            encoding="string"
        )
    ]
)

Environment Variables

When files are uploaded, environment variables are automatically set:

  • COGNITORA_FILES_COUNT: Number of files uploaded in the current execution
  • COGNITORA_FILES_LIST: Comma-separated list of uploaded filenames

python
Copy
import os

print(f"Files uploaded: {os.environ.get('COGNITORA_FILES_COUNT', '0')}")
print(f"File list: {os.environ.get('COGNITORA_FILES_LIST', 'none')}")

File Upload Limits

  • File Size: Maximum 5MB per file (1MB recommended for optimal performance)
  • File Count: Maximum 10 files per execution
  • Total Size: Maximum 10MB total upload size per execution
  • Filename: Alphanumeric characters, hyphens, underscores, and periods only
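
These limits can be pre-checked client-side before sending a request. The constants and `validate_uploads` helper below are illustrative; note that the check measures the encoded payload, which for base64 files is roughly a third larger than the original file:

```python
import re

MAX_FILE_BYTES = 5 * 1024 * 1024    # 5MB per file
MAX_FILES = 10                      # files per execution
MAX_TOTAL_BYTES = 10 * 1024 * 1024  # 10MB total per execution
NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")  # alphanumerics, hyphen, underscore, period

def validate_uploads(files: list[dict]) -> None:
    """Raise ValueError if a files payload would violate the documented limits."""
    if len(files) > MAX_FILES:
        raise ValueError(f"too many files: {len(files)} > {MAX_FILES}")
    total = 0
    for f in files:
        size = len(f["content"].encode("utf-8"))  # size of the (possibly encoded) payload
        total += size
        if size > MAX_FILE_BYTES:
            raise ValueError(f"{f['name']} exceeds the 5MB per-file limit")
        if not NAME_RE.match(f["name"]):
            raise ValueError(f"invalid filename: {f['name']}")
    if total > MAX_TOTAL_BYTES:
        raise ValueError("total upload exceeds the 10MB limit")

validate_uploads([{"name": "data.csv", "content": "a,b\n1,2", "encoding": "string"}])
```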

Language-Specific Considerations

Python

  • Files are created using standard file operations
  • Base64 files are decoded using Python's base64 module
  • All uploaded files are placed in the current working directory

JavaScript (Node.js)

  • Files are created using fs.writeFileSync()
  • Base64 files are decoded using Buffer.from(content, 'base64')
  • File paths are relative to the execution directory

Bash

  • Files are created using echo commands
  • Base64 files are decoded using the base64 command
  • Standard shell file operations apply

Best Practices

  1. Use appropriate encoding:

    • string for text files (JSON, CSV, code)
    • base64 for binary files (images, PDFs, archives)
  2. Optimize file sizes:

    • Compress large files before encoding
    • Split large datasets into smaller chunks
  3. Use sessions for workflows:

    • Upload files once, use multiple times
    • Maintain state across processing steps
  4. Handle errors gracefully:

    python
    Copy
    import json

    try:
        with open('data.json', 'r') as f:
            data = json.load(f)
    except FileNotFoundError:
        print("Error: File not found")
    except json.JSONDecodeError:
        print("Error: Invalid JSON format")
    

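Best practice 2 (compress large files before encoding) can be sketched as follows; `compress_for_upload` is an illustrative helper that gzips text and base64-encodes the result:

```python
import base64
import gzip

def compress_for_upload(name: str, text: str) -> dict:
    """Gzip large text, then base64-encode it as a binary FileUpload entry."""
    packed = gzip.compress(text.encode("utf-8"))
    return {"name": name + ".gz",
            "content": base64.b64encode(packed).decode("ascii"),
            "encoding": "base64"}

big_csv = "col_a,col_b\n" + "\n".join(f"{i},{i * 2}" for i in range(10000))
upload = compress_for_upload("data.csv", big_csv)
# Inside the execution environment, read it back with:
#   import gzip; text = gzip.open("data.csv.gz", "rt").read()
print(len(big_csv), "->", len(upload["content"]))
```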
Common Use Cases

  • Data Analysis: Upload CSV/JSON datasets for processing with pandas
  • Image Processing: Upload images for manipulation with PIL/OpenCV
  • Configuration Management: Upload config files for application settings
  • Multi-step Workflows: Upload scripts and data, process across multiple executions
  • Document Processing: Upload PDFs/documents for analysis and extraction

File uploads enable powerful, stateful code execution workflows while maintaining the security and isolation of Cognitora's execution environment.