Content Suite — Complete Specification
What This Is
A fully programmable content creation system. The lifecycle:
Research → Create → Distribute → Analyze → Research (loop)
Every stage is:
- CLI-accessible (`content research topics`, `content video generate`)
- API-exposed (same handler, REST)
- MCP-callable (agent can run the full pipeline without a human)
- Board-visible (human reviews, approves, overrides at any stage)
- Widget-installable (`ss widget install local:packages/boards/content-ops`)
This is not a SaaS product. It is infrastructure that deploys into a family's workspace. A non-technical content creator uses the board. A developer uses the CLI. An agent uses MCP. All three hit identical command handlers.
Adversarial Review Findings
Most items below were proposed as "new build" in the original spec but already exist or use the wrong pattern; a few confirm the spec was right and needs no rebuild. All are addressed in the sections that follow.
Finding 1 — content-ops board ALREADY EXISTS (do not rebuild)
packages/boards/content-ops/ is a full soul widget with soul-widget.yaml, board.yaml, and a 1,200-line board component (board/index.tsx) that already has:
- Pipeline view — kanban by stage, filtering, bulk actions
- Channels view — channel list, owned vs. competitor tabs
- Distribution view — scheduled/posted/failed queue
- Analytics view — per-channel metrics
- Jobs view — bg_jobs live status
- Definitions view — stage definitions
Action: Extend content-ops board, do NOT create a new wise-songs-media widget. The content-ops widget is generic by design; this is exactly the extension point for wise_songs specifics.
Install command:
ss widget install local:packages/boards/content-ops
Finding 2 — Python scripts are NOT to be ported to TypeScript (wrong pattern)
The spec listed "Port to TypeScript" for:
- `scene_pipeline.py` — all stages
- `video_pipeline.py` — Ken Burns + xfade assembly
- `suno_library.py` — Suno automation
- `job_queue.py` — job tracking
- `youtube_upload.py` / `youtube_analytics.py` — YouTube operations
Architecture rule: TypeScript dispatches { jobId, task, config, secrets } → Python worker reads stdin/args, streams JSON progress to stdout. No FastAPI. No Python HTTP server. Wrap, do not port.
The video-automation package (packages/video-automation/) already demonstrates the correct pattern: bin/video-gen.js spawns python3 src/orchestrator.py via child_process.spawn. Apply this same pattern for all content-tracker Python scripts.
Action: Define a WorkerDispatcher TypeScript class in @supernal/media-pipeline that uses child_process.spawn to call Python workers. Never rewrite the Python.
Finding 3 — video-automation package already has CostTracker + VideoGenerationOrchestrator
packages/video-automation/src/orchestrator.py:
- `CostTracker` — budget enforcement, spend tracking, transaction log
- `VideoGenerationOrchestrator` — multi-segment generation with `ThreadPoolExecutor`
- `SegmentResult` / `VideoResult` — typed output dataclasses
packages/video-automation/src/lyric_analyzer.py — lyric analysis for audio-synced content
Action: @supernal/media-pipeline TypeScript orchestrator dispatches to these Python workers via subprocess, using video-automation's existing bin/video-gen.js entry point as the model. Do not reduplicate CostTracker logic in TypeScript.
Finding 4 — job_queue.py is NOT a port target; it already integrates with content.db
content-tracker/scripts/job_queue.py writes to content.db's bg_jobs table with full progress tracking (Job.progress(), Job.done(), Job.fail()). The content-ops board agent already reads from this via file watch trigger (content-db-watch in soul-widget.yaml).
Action: Python workers continue writing progress to content.db via job_queue.py. TypeScript reads bg_jobs via the same SQLite file. No port needed.
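Reading job state from the shared SQLite file is a one-query affair; a minimal sketch, in Python for brevity (the column names `id`, `status`, `progress` are assumptions — the real `bg_jobs` schema is owned by `job_queue.py`):

```python
# Hedged sketch: column names (id, status, progress) are assumptions;
# the real bg_jobs schema is owned by job_queue.py.
import sqlite3

def read_jobs(db_path, status=None):
    conn = sqlite3.connect(db_path)
    try:
        query, args = 'SELECT id, status, progress FROM bg_jobs', ()
        if status is not None:
            query += ' WHERE status = ?'
            args = (status,)
        return conn.execute(query, args).fetchall()
    finally:
        conn.close()
```

The TypeScript side does the equivalent read against the same file; no queue API is needed because SQLite is the shared surface.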
Finding 5 — Secrets are already stored; stop proposing re-setup
Already in sc secret list:
- `connector-secrets`: `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GOOGLE_REFRESH_TOKEN`, `YOUTUBE_CHANNEL_ID`
- `content-tracker`: `REPLICATE_API_TOKEN`, `GEMINI_API_KEY`
- `si`: `ANTHROPIC_API_KEY`
Not stored yet (needs sc secret set):
- `content-tracker` `OPENAI_API_KEY` — for the gpt-image-1 backend
- `content-tracker` `SUNO_SESSION_TOKEN` — for Suno automation
All secret access in TypeScript handlers goes through `sc secret get <service> <key>`. Never read `process.env.KEY` directly.
Finding 6 — defineKeyRegistry is required; original spec used createNames for state keys
The spec showed createNames from @supernal/interface for the L2 nav — that's correct for component name registries. But any KV/state/cache keys (job IDs, board state, channel config cache) must use defineKeyRegistry. The spec had no key registry for board state.
Action: Define ContentSuiteKeys using defineKeyRegistry for all board KV state. See Worker Contract Spec section.
Finding 7 — @supernal/media-pipeline package name collision with existing architecture doc
The existing wise-songs-media-pipeline-architecture.md already specced this package name and its internal structure (PipelineOrchestrator, ScenePlanner, ImageChain, VideoAssembler, etc.). The content-suite spec reproduced it with slight differences.
Action: Use the architecture from wise-songs-media-pipeline-architecture.md as the canonical design. The ownership boundary documented there (supernal-coding = generic platform, supernal-family = family instance) is correct and must be maintained.
Finding 8 — content-ops board already covers Queue, Distribution, and Analytics panels
The spec proposed building Queue, Distribution, and Analytics as new panels. All three exist in the content-ops board (pipeline, distribution, analytics views with ChannelsView, DistributionView, PipelineView, JobsPanel).
Action: The Storyboard gate (human-in-the-loop before image spend) and the Studio creation workspace are the net-new pieces. Everything else extends content-ops.
Finding 9 — Research panel competitor tracking routes to competitive-intel skill (correct in spec, needs no rebuild)
The spec correctly identified competitive-intel skill. Confirm: it lives at ~/.openclaw/skills/competitive-intel/ and tracks competitor channels in Google Sheets. Wire to Research panel; do not rebuild.
Finding 10 — supernal-tts is a running service at :3030, not a buildable dependency
The spec listed TTS preview as "existing, wire to Script sub-panel." Correct — this is a running process, not a package to import. The board calls it via HTTP. No build work needed beyond the API call in the Script sub-panel component.
Finding 11 — the pipeline schema is NOT a separate database; it extends content.db
The schema lives at ~/.openclaw/data/content.db per the content-tracker SKILL.md. The job_queue.py default is Path.home() / '.openclaw' / 'data' / 'content.db'. The pipeline spec refers to an extended schema inside content.db with pipeline_jobs, pipeline_scenes, and pipeline_costs tables. These are EXTENSIONS to the existing schema, not replacements.
Action: The DB migration extends the existing content.db schema in place. init_db.py gains migration support.
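A hedged sketch of what an idempotent migration could look like — the table names (pipeline_jobs, pipeline_scenes, pipeline_costs) come from this spec, while the column lists are illustrative assumptions:

```python
# Hedged sketch of the content.db migration. Table names come from this spec;
# column lists are illustrative. CREATE TABLE IF NOT EXISTS keeps the
# migration idempotent, so init_db.py can run it on every start.
import sqlite3

MIGRATIONS = [
    """CREATE TABLE IF NOT EXISTS pipeline_jobs (
        id TEXT PRIMARY KEY, slug TEXT, channel TEXT, status TEXT)""",
    """CREATE TABLE IF NOT EXISTS pipeline_scenes (
        job_id TEXT, scene_idx INTEGER, description TEXT, image_path TEXT)""",
    """CREATE TABLE IF NOT EXISTS pipeline_costs (
        job_id TEXT, operation TEXT, usd REAL)""",
]

def migrate(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    with conn:  # one transaction; commits on success, rolls back on error
        for stmt in MIGRATIONS:
            conn.execute(stmt)
    conn.close()
```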
Finding 12 — No FastAPI. No Python HTTP server.
The original spec implied Python services providing APIs (@supernal/media-pipeline as a Python package). Python workers are process-level leaf nodes only. TypeScript is the API surface. See Worker Contract Spec.
Worker Contract Spec
All Python leaf workers follow this contract. TypeScript dispatches; Python executes.
Dispatch (TypeScript → Python)
// TypeScript side — WorkerDispatcher
import { spawn } from 'child_process';
interface WorkerInput {
jobId: string;
task: string; // e.g. 'detect-scene-breaks', 'generate-image', 'assemble-video'
config: Record<string, unknown>;
secrets: Record<string, string>; // loaded via sc secret get, injected as env vars
}
interface WorkerProgress {
type: 'progress' | 'result' | 'error';
jobId: string;
pct?: number; // 0–100
log?: string; // human-readable status line
result?: unknown; // present when type === 'result'
error?: string; // present when type === 'error'
}
class WorkerDispatcher {
  async *dispatch(scriptPath: string, input: WorkerInput): AsyncIterable<WorkerProgress> {
    // spawn python3 scriptPath — secrets ride in as env vars (see injection pattern below)
    const { secrets, ...payload } = input;
    const proc = spawn('python3', [scriptPath], { env: { ...process.env, ...secrets } });
    // write JSON.stringify(payload) to stdin
    proc.stdin.write(JSON.stringify(payload));
    proc.stdin.end();
    // propagate stderr to log
    proc.stderr.pipe(process.stderr);
    // parse each stdout line as WorkerProgress JSON
    let buf = '';
    for await (const chunk of proc.stdout) {
      buf += chunk.toString();
      const lines = buf.split('\n');
      buf = lines.pop() ?? '';
      for (const line of lines) if (line.trim()) yield JSON.parse(line) as WorkerProgress;
    }
  }
}
Worker (Python side)
# Every Python worker reads from stdin, writes JSON progress to stdout.
# No FastAPI. No HTTP server. No argparse required (use stdin).
import json, sys

def main():
    payload = json.loads(sys.stdin.read())
    job_id = payload['jobId']
    task = payload['task']
    config = payload['config']
    # secrets come in as env vars (injected by dispatcher)

    def progress(pct: float, log: str = ''):
        print(json.dumps({'type': 'progress', 'jobId': job_id, 'pct': pct, 'log': log}), flush=True)

    def result(data):
        print(json.dumps({'type': 'result', 'jobId': job_id, 'result': data}), flush=True)

    def error(msg: str):
        print(json.dumps({'type': 'error', 'jobId': job_id, 'error': msg}), flush=True)
        sys.exit(1)

    progress(0, 'starting')
    # ... do work, call progress() as stages complete ...
    result({'outputPath': '/path/to/output'})

if __name__ == '__main__':
    main()
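The contract can be exercised end to end with a throwaway inline worker; a sketch (the `WORKER` source and `dispatch` helper here are illustrative stand-ins, not the real WorkerDispatcher):

```python
# Illustrative only: a throwaway worker that follows the stdin/stdout
# contract, plus a tiny dispatcher that runs it and parses the JSON lines.
import json, subprocess, sys, textwrap

WORKER = textwrap.dedent('''
    import json, sys
    p = json.loads(sys.stdin.read())
    emit = lambda d: print(json.dumps(d), flush=True)
    emit({'type': 'progress', 'jobId': p['jobId'], 'pct': 0, 'log': 'starting'})
    emit({'type': 'result', 'jobId': p['jobId'], 'result': {'echo': p['task']}})
''')

def dispatch(worker_source, payload):
    # Spawn the worker, feed the JSON payload to stdin, parse stdout lines.
    proc = subprocess.run([sys.executable, '-c', worker_source],
                          input=json.dumps(payload),
                          capture_output=True, text=True, check=True)
    return [json.loads(line) for line in proc.stdout.splitlines() if line.strip()]
```

The same round-trip works against any script honoring the contract, which is what makes the workers swappable behind one dispatcher.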
Worker scripts and their tasks
| Script | Tasks served |
|---|---|
| `content-tracker/scripts/scene_pipeline.py` | build-visual-world, detect-scene-breaks, match-timestamps, generate-images, vision-eval, assemble-video |
| `content-tracker/scripts/video_pipeline.py` | ken-burns, viral-lyric, compile-hook |
| `content-tracker/scripts/suno_library.py` | suno-create, suno-status, suno-download |
| `content-tracker/scripts/youtube_upload.py` | youtube-upload, youtube-schedule |
| `content-tracker/scripts/youtube_analytics.py` | youtube-analytics-sync |
| `packages/video-automation/src/orchestrator.py` | segment-generate, segment-combine |
Workers are installed on demand by the `ss widget install` dependency check — not pre-bundled. The soul-widget.yaml lists the pip install requirements; `ss widget install` runs the check and installs them.
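A minimal sketch of the dependency-check idea (illustrative only — the real check is run by `ss widget install` against the widget's declared requirements):

```python
# Illustrative sketch of a binary dependency check; the real one is run by
# ss widget install against the widget's declared requirements.
import shutil

def missing_deps(binaries):
    """Return the subset of required binaries not found on PATH."""
    return [b for b in binaries if shutil.which(b) is None]
```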
Secrets injection pattern
// TypeScript handler — never pass secrets in WorkerInput.config
const secrets = {
REPLICATE_API_TOKEN: await secretGet('content-tracker', 'REPLICATE_API_TOKEN'),
ANTHROPIC_API_KEY: await secretGet('si', 'ANTHROPIC_API_KEY'),
OPENAI_API_KEY: await secretGet('content-tracker', 'OPENAI_API_KEY'),
};
// inject as env vars in spawn options, not in stdin payload
const proc = spawn('python3', [scriptPath], {
env: { ...process.env, ...secrets },
stdio: ['pipe', 'pipe', 'pipe'],
});
proc.stdin.write(JSON.stringify(input));
proc.stdin.end();
Competitive Context
| Competitor | What they do | Their lock-in | Our advantage |
|---|---|---|---|
| Runway | Graph-based video gen | SaaS, $35+/mo | Programmable, self-hosted |
| Descript | Transcript editing | SaaS, export-only | Agent-callable, full pipeline |
| HeyGen / Synthesia | Avatar video | SaaS, per-minute | Bring-your-own style |
| CapCut / Opus Clip | Short-form repurposing | Mobile/SaaS | Channel-aware, cost-visible |
| Adobe Premiere AI | Pro editing + AI | Creative Cloud | No-code board option |
| Revid | Template video | SaaS, limited API | We integrate Revid as a backend |
| Canva | Design + basic video | SaaS | Widget deploys into your stack |
The moat: No competitor is fully CLI + API + MCP + board in one system.
Board: Consolidated Structure
What to Eliminate
The content-ops soul widget (packages/boards/content-ops/) already has Pipeline, Channels, Distribution, Analytics, and Jobs views. The following are not new boards — they are view extensions within content-ops:
- Content Studio → new "Studio" view in content-ops (the Storyboard sub-view is net new)
- Content Pipeline → already the "Pipeline" view in content-ops — extend, don't rebuild
- Media Board → merge into the content-ops Pipeline view
- Content Ops → already the content-ops soul widget
- Distribution → already the "Distribution" view in content-ops
Final Board Structure
Extend content-ops board with a "Studio" view. Total views in extended content-ops:
[Pipeline] [Studio] [Channels] [Distribution] [Analytics] [Jobs]
Studio view adds the Storyboard → Scenes → Production sub-panels that are net-new.
Panel 1 — Research
Purpose: Discover what to make before making it.
L2 Nav
[Trends] [Topics] [Keywords] [Competitors] [Calendar]
Trends sub-panel
- YouTube trending topics by category (kids, education, music)
- Search volume signals (Google Trends API)
- Gap analysis: topics with demand but low competition
- "What's working" across our own channels (cross-feeds from Analytics)
Topics sub-panel
- Curated topic backlog per channel
- Status: idea → scripted → in-production → published
- Tag-based filtering (age group, subject, style)
- AI-assist: "Generate 10 topic ideas for [channel]"
Keywords sub-panel
- YouTube SEO keyword planner per topic
- Title/description templates with SEO scores
- Auto-populated from topic → feeds into video metadata at publish
Competitors sub-panel
- Track competitor channels (view counts, upload frequency, top videos)
- Cross-referenced with our topic backlog ("they made X, we haven't")
- Feeds from the `competitive-intel` skill (existing, at ~/.openclaw/skills/competitive-intel/)
Calendar sub-panel
- Publishing schedule per channel
- Gaps highlighted ("Wise Fables hasn't published in 12 days")
- Connects to Distribution view for actual scheduling
Backend commands (universal-command — all become CLI + API + MCP):
content research topics --channel aesops_fables
content research trends --category kids-education
content research competitor add UCxxxxx --channel aesops_fables
content research calendar --month 2026-05
What exists to reuse:
- `competitive-intel` skill — competitor tracking in Google Sheets (already live)
- `content.db` songs table — song backlog already tracked
- YouTube Analytics sync — `content-tracker/scripts/youtube_analytics.py` (subprocess worker)
What needs building (net new only):
- YouTube Trends API integration (new universal-command + Python worker)
- Keyword planner (YouTube Data API v3) (new universal-command + Python worker)
- Research → production pipeline link (topic → queued job) — new DB field + command
Panel 2 — Studio (NEW view in content-ops)
Purpose: The creation workspace. Storyboard → generate → assemble.
L2 Nav
[Storyboard] [Script] [Scenes] [Production] [Assets]
Storyboard sub-panel — MOST IMPORTANT NET-NEW PIECE
Human-in-the-loop BEFORE any image spend. Pipeline stops here and waits.
┌───────────────────────────────────────────────────────────────┐
│ STORYBOARD — The Bundle of Sticks Cost estimate: $0.15 │
│ │
│ [1] 0:00–5.9s opening_conflict │
│ "Father gathered his three quarreling sons..." │
│ Father (greying beard) standing in farmstead yard, │
│ three sons behind him, evening light │
│ │
│ [2] 5.9–18.1s single_sticks_given │
│ "He handed each a single stick..." │
│ Close-up: pale birch twig filling foreground, │
│ son's hands receiving it from father │
│ │
│ ... (7 scenes total) │
│ │
│ [Edit Scene] [Regenerate Plan] [Approve → Generate Images] │
└───────────────────────────────────────────────────────────────┘
- Storyboard generated from lyrics + visual world (via `scene_pipeline.py` stages 0–1, dispatched as a subprocess worker)
- User can edit any scene description before image generation
- Cost estimate shown before approval
- Backend: `content video generate --slug X --storyboard-only` stops here
Script sub-panel
- Lyrics editor with structure annotation (verse/chorus/bridge)
- Whisper word timestamps displayed inline (after audio attached)
- TTS preview via HTTP call to the `supernal-tts` service at `:3030` (no build needed)
- Export to Suno format (for re-generation)
Scenes sub-panel
- Scene-by-scene image gallery post-generation
- Vision eval scores shown per scene (content/style/anatomy)
- Manual override: regenerate individual scene with custom prompt
- Backend selector per scene (OpenAI / Gemini / FLUX)
Production sub-panel
- Live pipeline status (which stage is running) — reads from `bg_jobs` in `content.db`
- Log stream
- Cost running total
- Backend config (image backend, video backend, style)
Assets sub-panel
- All generated assets for this video: images, audio, final MP4, hook MP4
- Download individual assets
- Re-use assets in other videos
Backend commands (universal-command):
content video generate --slug X --channel Y [--storyboard-only] [--backend openai]
content video scene regenerate --job-id X --scene 3 [--prompt "custom description"]
content video status --job-id X
What exists to reuse:
- `scene_pipeline.py` — all stages: `build_visual_world`, `detect_scene_breaks`, `match_breaks_to_timestamps`, `generate_scene_images`, `run_vision_eval_pass`, `assemble_video` — use as subprocess workers, NOT ported
- `video_pipeline.py` — Ken Burns + xfade assembly, viral lyric mode, hook extraction — use as subprocess worker
- `packages/video-automation/src/orchestrator.py` — `VideoGenerationOrchestrator`, `CostTracker` — use as subprocess worker
- `packages/video-automation/bin/video-gen.js` — demonstrates the correct Node→Python subprocess pattern (copy this pattern)
- `job_queue.py` — job tracking writes to `content.db`; the `content-ops` agent already reads from there
- Whisper integration — `get_whisper_words()` in `scene_pipeline.py` (subprocess)
- Vision eval — `run_vision_eval_pass()` in `scene_pipeline.py` (subprocess)
- TTS preview — `supernal-tts` at `:3030` (HTTP call only)
What needs building (net new only):
- `@supernal/media-pipeline` TypeScript package with `WorkerDispatcher` (subprocess orchestration layer)
- Storyboard review gate UI (board component + `--storyboard-only` flag in the command)
- `content video generate` and `content video scene regenerate` universal-commands
- OpenAI gpt-image-1 backend in `scene_pipeline.py` (skeleton exists at `_generate_image_openai`; needs API key + completion)
- Scene-level manual override dispatch in the board
Panel 3 — Queue (ALREADY EXISTS in content-ops)
Purpose: Pipeline status across all jobs and channels.
This is the Pipeline view (pipeline) in the existing content-ops board, with JobsPanel and JobsView components already implemented.
[Active] [Pending Approval] [Completed] [Failed]
What exists to reuse:
- `content-ops` board Pipeline view — kanban by stage, filtering, bulk move
- `content-ops` board Jobs view — `bg_jobs` live status
- `job_queue.py` — Python worker writes job progress to the `content.db` `bg_jobs` table + file watch trigger in `soul-widget.yaml`
What needs building (net new only):
- `storyboard_pending` status in the `bg_jobs` status enum (new DB migration)
- "Pending Approval (storyboard)" column in the Pipeline view — filter on `storyboard_pending` status
- Real-time refresh is already handled by the `content-db-watch` trigger (5s debounce file watch)
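The 5-second debounce behavior can be sketched in a few lines (illustrative only — the real trigger is declared in soul-widget.yaml, not hand-rolled; time is passed in for testability):

```python
# Illustrative sketch of a 5-second debounce like the content-db-watch
# trigger. The real trigger is declared in soul-widget.yaml.
class Debouncer:
    def __init__(self, interval_s: float = 5.0):
        self.interval_s = interval_s
        self.last_fired = None

    def should_fire(self, now: float) -> bool:
        """True if interval_s has elapsed since the last accepted event."""
        if self.last_fired is None or now - self.last_fired >= self.interval_s:
            self.last_fired = now
            return True
        return False
```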
Backend commands:
content video queue --channel aesops_fables [--status active|pending|review]
content video queue --all
Panel 4 — Review
Purpose: Human approval gate before publish.
L2 Nav
[Full Video] [Hook] [Scenes] [Metadata] [Publish]
The review concept exists in the content-ops pipeline view (item detail drawer with actions). The video player + approve/reject + metadata form is net-new because content-ops is generic and doesn't know about video files.
Backend commands:
content video review --job-id X --action approve [--notes "..."]
content video review --job-id X --action reject --notes "bundle wrong"
content video publish --job-id X [--schedule "2026-04-20T09:00"]
What exists to reuse:
- YouTube upload — `youtube_upload.py` (subprocess worker via `WorkerDispatcher`)
- Publish events table in `content.db` — already tracked by `content-tracker`
- Google OAuth credentials — already stored: `connector-secrets` `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GOOGLE_REFRESH_TOKEN`, `YOUTUBE_CHANNEL_ID`
What needs building (net new only):
- Review panel board component (video player, approve/reject actions, metadata form)
- `content video review` and `content video publish` universal-commands
- Publish scheduling (cron-backed queue + trigger)
- Playlist management UI
Panel 5 — Channels (ALREADY EXISTS in content-ops)
Purpose: Manage channels, their styles, costs, and platform connections.
The Channels view (channels) already exists in content-ops. Extend it with style config and backend assignment fields.
My Channels sub-panel
- Channel creation form (architecture already specced in wise-songs-media-pipeline-architecture.md)
- Style config per channel (visual style, audience, COPPA)
- Backend assignment (which image/video backend this channel uses)
- Estimated cost per video shown live
Platforms sub-panel
- Connected accounts: YouTube, TikTok, Instagram Reels
- OAuth status per platform
- Default publish settings per platform per channel
Backends sub-panel
- Image backends: OpenAI gpt-image-1, Gemini Imagen 4 (via `_generate_image_gemini`), FLUX-schnell, FLUX-dev (via `_generate_image_flux`)
- Video backends: Ken Burns (built into `video_pipeline.py`), LTX Video, Wan, Revid
- TTS backends: supernal-tts (OpenAI Coral, ElevenLabs) — service at `:3030`
- Music backends: Suno (existing `suno_library.py` automation)
- Cost per operation shown for each
Costs sub-panel
Wise Fables channel — April 2026
Per video avg: $0.18
Budget: $20/mo
Spend to date: $4.50 (22% of budget)
Videos produced: 25
Revenue est: $12–28 RPM (kids, COPPA)
Backend commands:
content channel create --name "Wise Fables" --audience kids --style storybook
content channel update wise_fables --backend openai
content channel cost --channel wise_fables [--month 2026-04]
content backend list
content backend set-default --channel wise_fables --image openai
Panel 6 — Analytics (ALREADY EXISTS in content-ops)
Purpose: Close the loop. What performed → what to make next.
The Analytics view (analytics) already exists in content-ops. Extend with auto-insights.
What exists to reuse:
- YouTube Analytics sync — `youtube_analytics.py` (subprocess worker)
- `youtube_analytics.py` functions: `fetch_channel_analytics`, `fetch_video_analytics`
- Cost tracking in `content.db` — `pipeline_costs` table (from the extended schema)
What needs building (net new only):
- Auto-insights generation (Claude analysis of performance data) — new universal-command
- Research panel signal feed (analytics surfaced as topic suggestions)
The Full Lifecycle, Wired Together
Research panel
└─ Topic approved → creates song entry in DB
Studio panel (Script)
└─ Lyrics attached → audio generated (suno_library.py subprocess) or TTS preview (HTTP :3030)
Studio panel (Storyboard) ← HUMAN GATE
└─ scene_pipeline.py stages 0–1 via WorkerDispatcher
└─ Storyboard approved → image generation queued (bg_jobs status: storyboard_pending → queued)
Queue panel (content-ops Pipeline view)
└─ Job active → Studio panel shows live progress via content.db file watch
Studio panel (Production)
└─ scene_pipeline.py stages 2–4 via WorkerDispatcher
└─ Video assembled → bg_jobs status = "needs_review"
Queue panel (Needs Review)
└─ Clicks → Review panel (new view)
Review panel ← HUMAN GATE
└─ Approved + metadata → Publish button active
└─ content video publish → youtube_upload.py subprocess
Analytics panel (content-ops Analytics view)
└─ youtube_analytics.py subprocess syncs daily
└─ Auto-insights → surfaces in Research panel as signal
Two human gates: Storyboard (before spend) and Review (before publish). Everything else can run fully automated by an agent.
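The status flow the two gates imply can be sketched as a small transition table; `storyboard_pending`, `queued`, and `needs_review` appear in this spec, while `active`, `approved`, `rejected`, `failed`, and `published` are illustrative assumptions:

```python
# Hedged sketch of the bg_jobs status flow. storyboard_pending, queued, and
# needs_review come from this spec; the other names are assumptions.
TRANSITIONS = {
    'storyboard_pending': {'queued'},                # human gate 1: storyboard approved
    'queued': {'active'},
    'active': {'needs_review', 'failed'},
    'needs_review': {'approved', 'rejected'},        # human gate 2: review
    'approved': {'published'},
}

def advance(status: str, new_status: str) -> str:
    """Validate a status transition before writing it to bg_jobs."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f'illegal transition {status} -> {new_status}')
    return new_status
```

An agent driving the pipeline can only move a job along these edges; both gate transitions require a human action.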
Backend Package: @supernal/media-pipeline
TypeScript orchestration layer. No Python code. Dispatches to Python subprocess workers.
// WorkerDispatcher — the only thing TypeScript does for media work
class WorkerDispatcher {
dispatch(script: string, task: string, config: unknown): AsyncIterable<WorkerProgress>
}
// PipelineOrchestrator — coordinates dispatch calls
class PipelineOrchestrator {
async buildVisualWorld(title: string, lyrics: string): Promise<VisualWorld>
// dispatches: scene_pipeline.py task=build-visual-world
async detectSceneBreaks(lyrics: string, world: VisualWorld): Promise<SceneBreak[]>
// dispatches: scene_pipeline.py task=detect-scene-breaks
async matchTimestamps(breaks: SceneBreak[], audioPath: string): Promise<SceneBreak[]>
// dispatches: scene_pipeline.py task=match-timestamps
async generateStoryboard(breaks: SceneBreak[]): Promise<Storyboard>
// pure TypeScript — assembles storyboard object from break data, no Python
async generateImages(breaks: SceneBreak[], backend: string): Promise<SceneBreak[]>
// dispatches: scene_pipeline.py task=generate-images
async visionEval(breaks: SceneBreak[]): Promise<SceneBreak[]>
// dispatches: scene_pipeline.py task=vision-eval
async assembleVideo(breaks: SceneBreak[], audioPath: string): Promise<string>
// dispatches: video_pipeline.py task=ken-burns (or other backend)
}
// Backends — TypeScript interface, Python implementation
interface ImageBackend {
name: string
workerScript: string // path to Python worker
task: string // task name the worker accepts
costPerImage: number
}
// Implementations are config entries, not TypeScript classes
const BACKENDS: Record<string, ImageBackend> = {
flux: { name: 'FLUX-schnell', workerScript: 'scene_pipeline.py', task: 'generate-images', costPerImage: 0.003 },
gemini: { name: 'Gemini Imagen 4', workerScript: 'scene_pipeline.py', task: 'generate-images', costPerImage: 0.02 },
openai: { name: 'OpenAI gpt-image-1', workerScript: 'scene_pipeline.py', task: 'generate-images', costPerImage: 0.04 },
}
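Those per-image costs are what drive the Storyboard gate's pre-approval estimate; a minimal sketch (the helper name is hypothetical, the costs mirror the BACKENDS config above):

```python
# Costs mirror the BACKENDS config above; the helper name is hypothetical.
COST_PER_IMAGE = {'flux': 0.003, 'gemini': 0.02, 'openai': 0.04}

def estimate_storyboard_cost(scene_count: int, backend: str) -> float:
    """Pre-approval estimate shown in the Storyboard gate: scenes x unit cost."""
    return round(scene_count * COST_PER_IMAGE[backend], 4)
```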
Commands (universal-command — all become CLI + API + MCP):
export const contentVideoGenerate = new UniversalCommand({
name: 'content video generate',
input: {
parameters: [
{ name: 'slug', type: 'string', required: true },
{ name: 'channel', type: 'string', required: true },
{ name: 'storyboard-only', type: 'boolean', default: false },
{ name: 'backend', type: 'enum', values: ['openai','gemini','flux'], default: 'openai' },
{ name: 'vision-eval', type: 'boolean', default: true },
]
},
handler: async (args) => {
const orchestrator = new PipelineOrchestrator(args.channel)
if (args['storyboard-only']) {
return await orchestrator.generateStoryboard(...) // stops, writes storyboard_pending to bg_jobs
}
return await orchestrator.runFull(...)
}
})
// Auto-generates:
// CLI: content video generate --slug X --channel Y
// API: POST /api/content/video/generate
// MCP: content_video_generate tool
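The naming conventions in the comments above reduce to two one-liners (assumed rules: spaces in the command name become path segments for the REST route and underscores for the MCP tool):

```python
# Assumed naming rules, derived from the CLI/API/MCP examples above.
def to_api_path(command_name: str) -> str:
    return '/api/' + command_name.replace(' ', '/')

def to_mcp_tool(command_name: str) -> str:
    return command_name.replace(' ', '_')
```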
L2 Navigation — Implementation
// Uses @supernal/interface createNames for component name registries (correct use)
import { createNames, defineKeyRegistry } from '@supernal/interface'
// Component names (createNames) — for typed component/nav identifiers
const ContentSuiteNav = createNames('content-suite', {
studio: ['storyboard', 'script', 'scenes', 'production', 'assets'],
// Note: pipeline/channels/distribution/analytics/jobs already exist in content-ops
// Only Studio is net-new nav
})
// Board state keys (defineKeyRegistry) — for ALL KV/cache/state
const ContentSuiteKeys = defineKeyRegistry('content-suite', {
activeJob: (channel: string) => `${channel}:active-job`,
storyboardDraft: (jobId: string) => `${jobId}:storyboard-draft`,
channelConfig: (channelId: string) => `channel:${channelId}`,
analyticsCache: (channelId: string, month: string) => `analytics:${channelId}:${month}`,
})
// ContentSuiteKeys.activeJob('aesops_fables') → 'content-suite:aesops_fables:active-job'
// ContentSuiteKeys.storyboardDraft('job-123') → 'content-suite:job-123:storyboard-draft'
function StudioPanel({ activeL2 }) {
return (
<>
<L2Nav items={ContentSuiteNav.studio} active={activeL2} />
{activeL2 === 'storyboard' && <StoryboardView />}
{activeL2 === 'script' && <ScriptEditor />}
{activeL2 === 'scenes' && <SceneGallery />}
{activeL2 === 'production' && <ProductionStatus />}
{activeL2 === 'assets' && <AssetLibrary />}
</>
)
}
What Already Exists — Accurate Reuse Map
| What we need | What already exists | Correct reuse decision |
|---|---|---|
| Job queue | job_queue.py in content-tracker | Keep as-is — Python writes bg_jobs, TypeScript reads |
| DB schema | Existing content.db schema (14 tables per the verified inventory) | Extend with migration — do NOT recreate |
| YouTube publish | youtube_upload.py in content-tracker | Subprocess worker — do NOT port |
| YouTube analytics sync | youtube_analytics.py in content-tracker | Subprocess worker — do NOT port |
| Suno automation | suno_library.py in content-tracker | Subprocess worker — do NOT port |
| Cost tracking | video-automation/src/orchestrator.py CostTracker | Subprocess — extend, do NOT port to TS |
| Scene break detection | scene_pipeline.py detect_scene_breaks | Subprocess worker — do NOT port |
| Whisper timestamps | scene_pipeline.py match_breaks_to_timestamps | Subprocess worker — do NOT port |
| Ken Burns assembly | video_pipeline.py | Subprocess worker — do NOT port |
| Vision eval | scene_pipeline.py run_vision_eval_pass | Subprocess worker — do NOT port |
| TTS preview | supernal-tts service at :3030 | HTTP call from board/command — no build |
| Video segment orchestration | video-automation/src/orchestrator.py | Subprocess via video-gen.js pattern |
| Pipeline board | packages/boards/content-ops/ full soul widget | Extend — do NOT recreate |
| Queue view | content-ops Pipeline + Jobs views | Extend — add storyboard_pending status |
| Channels view | content-ops Channels view | Extend — add style/backend fields |
| Distribution view | content-ops Distribution view | Use as-is |
| Analytics view | content-ops Analytics view | Extend — add auto-insights |
| Competitor tracking | competitive-intel skill | Wire to Research panel — no build |
| Storage keys | @supernal/interface defineKeyRegistry | Use for ALL KV/state/cache keys |
| Component names | @supernal/interface createNames | Use for nav/component name registries |
| Plugin install | ss widget install local:packages/boards/content-ops | Already works |
| FLUX image gen | scene_pipeline.py _generate_image_flux | Subprocess — works today |
| Gemini image gen | scene_pipeline.py _generate_image_gemini | Subprocess — works today |
| OpenAI image gen | scene_pipeline.py _generate_image_openai skeleton | Complete in Python — do NOT port |
| Channel config | channels.yaml concept (per architecture doc) | DB + commands (architecture doc is correct) |
| Secrets | REPLICATE, GEMINI, GOOGLE, YOUTUBE — all in sc secret list | No new setup needed for these |
| OpenAI API key | Not yet stored — verified via sc secret list (2026-04-13) | sc secret set content-tracker OPENAI_API_KEY <key> |
| Suno session token | Not yet stored — verified via sc secret list (2026-04-13) | sc secret set content-tracker SUNO_SESSION_TOKEN <token> |
| content.db schema | Actual tables: songs, audio_assets, video_assets, publish_events, suno_library, youtube_videos, video_analytics, channel_daily, traffic_sources, compilations, compilation_songs, pipeline_reviews, bg_jobs, resonance_scores (14 tables total — NOT 4 as stated in an earlier version) | Extend with migration |
| content-tracker secrets | REPLICATE_API_TOKEN and GEMINI_API_KEY stored under content-tracker service — verified | OPENAI_API_KEY still missing |
| `sc planning epic list` | Command exists but `epic list` shows only subcommands (create, audit) — no epics created yet | Create planning hierarchy before Phase 1 tasks |
| `sc planning feature audit` | 66 feature files exist; 49 have lint issues (legacy priority label field) | Non-blocking; fix with priorityScore |
Build Sequence
Phase 1 — Worker infrastructure (1 week)
[ ] `WorkerDispatcher` class in @supernal/media-pipeline (TypeScript subprocess orchestration)
[ ] stdin/stdout JSON protocol for scene_pipeline.py (modify to accept stdin task dispatch)
[ ] stdin/stdout JSON protocol for video_pipeline.py
[ ] stdin/stdout JSON protocol for youtube_upload.py + youtube_analytics.py
[ ] `sc secret set content-tracker OPENAI_API_KEY` — add key
[ ] Complete `_generate_image_openai` in scene_pipeline.py (skeleton exists)
[ ] DB migration: add `storyboard_pending` to bg_jobs status enum
Phase 2 — Core commands (1 week)
[ ] `content video generate` universal-command (with --storyboard-only flag)
[ ] `content video scene regenerate` universal-command
[ ] `content video status` universal-command
[ ] `content video review` + `content video publish` universal-commands
[ ] `content channel create/update/list` universal-commands (spec in architecture doc)
[ ] All commands registered via @supernal/universal-command — no raw Commander.js
Phase 3 — Studio board view (1–2 weeks)
[ ] Add "Studio" view to content-ops board (new entry in VIEWS array)
[ ] Storyboard sub-panel component (scene list, edit controls, approve button)
[ ] Script editor sub-panel (lyrics + TTS preview via HTTP :3030)
[ ] Scene gallery sub-panel (post-generation, vision eval scores, override)
[ ] Production status sub-panel (reads bg_jobs from content.db)
[ ] Add `storyboard_pending` column to Pipeline view in content-ops
Phase 4 — Review + Publish (1 week)
[ ] Review view in content-ops board (video player, approve/reject, metadata)
[ ] Playlist management UI
[ ] Publish scheduling (cron + trigger)
Phase 5 — Research panel (1 week)
[ ] Research view in content-ops board (topics backlog, calendar)
[ ] YouTube Trends API worker (Python subprocess)
[ ] Keyword planner worker (YouTube Data API v3 subprocess)
Phase 6 — Auto-insights + loop closure (1 week)
[ ] Auto-insights generation command (Claude analysis of analytics data)
[ ] Analytics → Research signal feed
[ ] ContentSuiteKeys defineKeyRegistry for all board state
Phase 7 — Advanced backends + marketplace (ongoing)
[ ] LTX Video backend (new Python worker, worker contract)
[ ] Revid backend (Playwright automation worker)
[ ] Additional image backends as workers
[ ] Marketplace publish (from soulshare-marketplace-spec.md)
The Widget as a Product
From soulshare-marketplace-spec.md:
- The `content-ops` widget, once complete, is publishable via `ss widget install github:supernal-family/content-ops`
- A non-technical media creator installs the entire pipeline with one command
- `ss widget install` runs a dep check: verifies Python 3 and ffmpeg, pip-installs requirements.txt from the widget package
- Tiers: Free (10 videos/mo, 2 channels), Pro ($19/mo), Studio ($49/mo)
- The dogfooding loop: every rough edge we hit building wise-songs is a rough edge a customer hits
The spec for marketplace publication already exists. Phase 7 executes it.
The soul-widget.yaml for content-ops already has:
- `crons` for hourly pipeline sync, daily distribution check, weekly analytics
- `triggers` for webhook events and `content.db` file watch
- `connectors` for YouTube, TikTok, Instagram, Buffer, Suno
Verified Inventory
Every claim here is backed by actual file reads, marked [READ] or [NOT READ]. Audit date: 2026-04-13.
content-ops board [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/boards/content-ops/board/index.tsx
Views (VIEWS constant, lines 42–49): Pipeline, Channels, Distribution, Analytics, Jobs, Definitions — 6 views total.
Pipeline view capabilities (read from component):
- Kanban layout grouped by stage, with horizontal scroll for many stages
- `PipelineViewToolbar`: pipeline-definition selector dropdown, archived-items toggle with count badge, "Add Item" button
- `FilterBar`: text search on title, status dropdown (all/pending/running/awaiting_review/done/failed), assignee text filter, clear button
- `BulkActionBar`: fixed-bottom floating bar when items selected — move-to-stage select, archive, clear selection
- Per-card actions: advance, send back, approve, reject, open detail drawer, edit, archive, delete, drag-and-drop between stages
- Auto-advance rules parsed from `auto_advance_rules_json` and displayed as stage indicators
- Archived items hidden by default, toggled with count
Channels view: owned vs. competitor tabs with platform icons (YouTube, TikTok, Instagram, LinkedIn, X, Email)
Distribution view: scheduled/posted/failed status queue with status badges
Analytics view: per-channel metrics
Jobs view: JobsView component (separate file, not inlined in index.tsx)
Definitions view: DefinitionsView component (separate file)
Sub-components imported: PipelineItemCard, ItemDetailDrawer, ItemFormModal, JobsPanel, JobsView, DefinitionsView
content-ops board — types.ts [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/boards/content-ops/board/types.ts
Types defined:
- `StageDefinition` — `{ id, label, type: 'automated' | 'review' | 'external_tool' | 'manual' }`
- `ContentPipelineDefinition` — pipeline schema with `stages_json`, `auto_advance_rules_json`, `connector_mapping_json`, `webhook_rules_json`
- `ContentPipelineItem` — item with `current_stage`, `stage_status` (pending/running/awaiting_review/done/failed/skipped), `notes`, `due_date`, `assignee`, `is_archived`
- `ContentItemHistory` — append-only audit log with 14 event_type values including `job_started`, `job_completed`, `auto_advance`, `connector_sync`
- `BgJob` — background job with `status` (queued/running/done/failed/cancelled), `progress_pct`, `log_tail`, `output_path`, `remote_job_id`
- `ContentChannel` — platform channel with `channel_type` (owned/competitor), `distribution_tool`, `oauth_status`, `subscriber_count`
- `ContentDistributionEntry` — distribution queue entry with `status` (draft/scheduled/posting/posted/failed/cancelled)
- `ContentAnalyticsSnapshot` — point-in-time metric per channel/item
- `ConnectorType` — `'youtube' | 'buffer' | 'tiktok' | 'manual'`
- `ContentActionResponse` — async action result polled by board
- `ItemFormValues` — form state for new item creation
content-ops board — schema.ts [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/boards/content-ops/schema.ts
Tables defined in ContentOpsSchema (typed DataLocationContract):
- `content_channels`
- `content_pipeline_definitions`
- `content_pipeline_items`
- `content_distribution_queue`
- `content_analytics_snapshots`
- `content_item_history`
- `content_action_responses`
Table name constants exported as CONTENT_OPS_TABLES (also includes bg_jobs).
All tables: writtenBy: 'content-ops-agent', readBy: ['content-ops']. Scoped by repo_path and company_slug.
content-ops board — soul-widget.yaml [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/boards/content-ops/soul-widget.yaml
id: `content-ops`, version `0.4.0`, equipment: `content-ops-agent`
Crons:
- `hourly-pipeline-sync` — `0 * * * *` — refresh-data
- `daily-distribution-check` — `0 8 * * *` — refresh-data
- `weekly-analytics` — `0 9 * * 1` — generate-analysis
Triggers:
- `webhook-content-event` — webhook type, debounce 15s, secret from `env:CONTENT_OPS_WEBHOOK_SECRET`
- `content-db-watch` — file_change on `~/.supernal/content.db`, debounce 5s
Tags: content, pipeline, channels, distribution, analytics, youtube, buffer, music, video, social
content-ops agent [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/boards/content-ops/agent/index.js
Single file, 101.5K. Key facts from reading the first ~150 lines:
- Uses `AgentStateDB` from `@supernalintelligence/agent-tools/state-db`
- Uses `detectConnector` from `../../lib-dist/connectors.js`
- Uses `loadBoardSecrets` from `../../lib-dist/secrets.js`
- Connector modes supported: `manual`, `youtube` (requires `YOUTUBE_API_KEY`), `buffer` (requires `BUFFER_ACCESS_TOKEN`), `tiktok` (not yet implemented, falls back to a warning)
- Poll intervals: pipeline sync 5 min, analytics 1 hr, distribution check 1 min, heartbeat 5 min, action poll 30 sec
- Table constants match schema.ts exactly (8 tables + `content_pipeline_definition_snapshots`)
- `COMPANY_SLUG` from `process.env.WIDGET_COMPANY ?? null` — fully multi-tenant
content-studio board [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/apps/supernal-dashboard/src/boards/content-studio/index.tsx
This is a wise-songs-specific dashboard board, NOT the generic content-ops board. It imports views from ../pipeline-board/views/ (AllStagesView, ScheduleView, AnalyticsView) and adds its own JobsPanel + StudioTab.
Views present:
- `StudioTab` — three channel cards (supernal-family, supernal-dance, supernal-intelligence) each showing compilation count, songs compiled, available songs; plus a compilations table (title, channel, style, duration, song count, date, YouTube link, file size)
- Uses `AllStagesView` from pipeline-board (see below)
- Uses `ScheduleView` from pipeline-board (see below)
- Uses `AnalyticsView` from pipeline-board (see below)
- `JobsPanel` — inlined in this file; polls `/api/content-jobs?limit=50` every 2 seconds; shows per-job status badges, progress bar for running jobs, log_tail, error_message
Data sources: /api/pipeline (PipelineResponse), /api/supernal-media (SupernalMediaResponse), /api/content-jobs
This board is hardcoded to the three Supernal YouTube channels. The generic content-ops board is the correct extension point for multi-tenant use.
pipeline-board views [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/apps/supernal-dashboard/src/boards/pipeline-board/views/
Files: AllStagesView.tsx, AnalyticsView.tsx, ScheduleView.tsx, QueueView.tsx
AllStagesView.tsx [READ]
- "Pipeline" tab showing all songs grouped across stages
- Stage filter pills (all, needs-action, per-stage)
- List view (default) and kanban view (kanban shows "coming soon" placeholder)
- "Needs Action" quick-filter: stages
upload,video_review,suno_review - Per-song slot delegation via
Slotcomponent from@supernal/dashboard-sdk - Stages hardcoded:
lyrics,suno_gen,suno_review,video_gen,video_review,upload,published,rejected
AnalyticsView.tsx [READ]
- Multi-platform resonance analytics
- Tabs: Overview, YouTube, TikTok, Instagram, X, LinkedIn, Email
- YouTube tab: songs sorted by view count from `/api/pipeline`
- Other platform tabs: fetch from `/api/resonance?platform=<id>` — reads `resonance_scores` table from content.db
- Per-platform signal labels: YouTube (watch_time_pct, ctr, likes, comments, shares), TikTok (completion_rate, share_rate), Instagram (save_rate, reach_rate), X (reply_rate, retweet_rate), LinkedIn (comment_to_impression, dwell), Email (click_to_open_rate, forward_rate)
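For illustration only, a resonance index over signals like these could be a weighted mean of pre-normalized values. The `resonance_index` helper, the weights, and the normalization are all hypothetical; the actual scoring behind `resonance_scores` was not read during this audit:

```python
def resonance_index(signals, weights):
    """Hypothetical resonance score: weighted mean of signals normalized to 0..1.

    Signal names mirror the per-platform labels above (e.g. YouTube's
    watch_time_pct and ctr); the weighting scheme is an assumption, not
    the scoring actually stored in resonance_scores.
    """
    total = sum(weights.values())
    return sum(signals.get(name, 0.0) * w for name, w in weights.items()) / total


# Example: a YouTube item with strong watch time but weak CTR.
yt = resonance_index(
    {"watch_time_pct": 0.62, "ctr": 0.05, "likes": 0.4},
    {"watch_time_pct": 0.5, "ctr": 0.3, "likes": 0.2},
)
```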
ScheduleView.tsx [READ]
- "Schedule" tab — display-only (no live publish wiring)
- Shows songs at `upload` stage with suggested publish windows
- Scheduling algorithm: prefers Thu > Sat > Fri > Wed (YouTube Kids engagement); time window 14:00–16:00; ±20 min deterministic jitter from song ID hash; 1 publish per day max; looks forward 60 days
- Display only — actual publish wiring needs a `--publish-at` flag passed to `youtube_upload.py`
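The scheduling rules above can be sketched as follows. One plausible reading walks days chronologically and accepts any preferred weekday; the real preference ordering, the 15:00 base time, and the `suggest_publish_slot` helper name are assumptions:

```python
import hashlib
from datetime import date, datetime, time, timedelta

# Weekday preference from the view: Thu > Sat > Fri > Wed (Mon=0 .. Sun=6)
PREFERRED_WEEKDAYS = {3, 5, 4, 2}


def suggest_publish_slot(song_id, taken_dates, start=None, horizon_days=60):
    """Deterministic publish suggestion mirroring ScheduleView's described rules.

    Walks forward up to horizon_days, picks the first preferred weekday whose
    date is free (1 publish/day max), and jitters a 15:00 base time by +/-20
    minutes derived from a hash of the song ID, staying inside 14:00-16:00.
    """
    start = start or date.today()
    jitter = int(hashlib.sha256(song_id.encode()).hexdigest(), 16) % 41 - 20
    for offset in range(horizon_days):
        day = start + timedelta(days=offset)
        if day.weekday() in PREFERRED_WEEKDAYS and day not in taken_dates:
            base = datetime.combine(day, time(15, 0))  # middle of the window
            return base + timedelta(minutes=jitter)
    return None
```

Because the jitter is derived from the song ID hash, rerunning the view yields the same suggested slot for the same song.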
QueueView.tsx [READ]
- "Action Queue" tab — surfaces only actionable stages:
upload,video_review,suno_review - Shows "Queue is clear" empty state when no actionable items
- Per-song slot delegation via
Slotfrom dashboard-sdk (same pattern as AllStagesView)
Python pipeline — scene_pipeline.py [READ]
Source: /Users/saiterminal/sai-workspace/skills/content-tracker/scripts/scene_pipeline.py (47.3K)
All functions:
- `claude(prompt, system, max_tokens)` — calls Claude Haiku (model: `claude-haiku-4-5-20251001`) via direct HTTP (no SDK)
- `build_visual_world(title, lyrics)` — Stage 0: Claude generates locked visual world JSON (setting, characters, props, palette, mood) to ensure frame coherence
- `detect_scene_breaks(title, lyrics, channel, visual_world)` — Stage 1: Claude identifies 5–9 semantic scene break positions with descriptions; visual_world passed as context
- `evaluate_breaks(title, breaks)` — Stage 1b: Claude reviews break positions and adjusts if needed (self-eval gate)
- `get_whisper_words(mp3)` — Stage 2a: runs local Whisper via subprocess to get word-level timestamps
- `match_breaks_to_timestamps(breaks, whisper_words)` — Stage 2b: matches each scene's first_significant_word to Whisper timestamps
- `_generate_image_flux(prompt, out_path, retries, seed)` — generates one image via Replicate FLUX-schnell (~$0.003)
- `_generate_image_openai(prompt, out_path)` — generates one image via OpenAI gpt-image-1 (~$0.04)
- `_generate_image_gemini(prompt, out_path)` — generates one image via Gemini Imagen
- `generate_image(scene, channel_style, backend, out_dir)` — Stage 3 dispatcher: selects backend (flux/openai/gemini) and builds full prompt with style prefix
- `evaluate_image_description(scene, channel_style)` — Stage 3 eval gate: Claude checks if image prompt matches scene content before generation
- `vision_eval_image(scene, image_path, channel_style)` — Stage 3b: Claude vision checks generated image against description; returns pass/fail/notes
- `run_vision_eval_pass(breaks, channel_style)` — runs vision eval on all scenes; flags failures
- `generate_scene_images(breaks, channel, out_dir, backend)` — Stage 3 orchestrator: generates all images with per-scene eval and 1 retry on failure
- `get_audio_duration(mp3)` — gets audio length via ffprobe
- `assemble_video(breaks, mp3, output, title)` — Stage 4: ffmpeg Ken Burns + xfade timed assembly
- `run_scene_pipeline(slug, channel, backend)` — top-level entry point: runs all 4 stages in sequence
Channel styles hardcoded for: aesops_fables, gre_word_wizards, actually_useful_nursery_rhymes, cerebral_songs, mental_models. Content dir: ~/sai-workspace/content/wise-songs. Songs dir: ~/git/supernal/families/supernal-family/docs/wise_songs.
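A minimal sketch of the Stage 2b matching step, assuming Whisper words arrive as an ordered list of `{word, start}` dicts and each break carries a `first_significant_word`; the actual return shape in scene_pipeline.py may differ:

```python
def match_breaks_to_timestamps(breaks, whisper_words):
    """Sketch of Stage 2b: attach a start time to each scene break.

    Matching is a forward scan with a cursor, so a word that appears in
    several scenes resolves to its next unclaimed occurrence. Breaks with
    no match get start_sec=None for a human to resolve.
    """
    cursor = 0
    timed = []
    for brk in breaks:
        target = brk["first_significant_word"].lower().strip(".,!?")
        start = None
        for i in range(cursor, len(whisper_words)):
            if whisper_words[i]["word"].lower().strip(".,!?") == target:
                start = whisper_words[i]["start"]
                cursor = i + 1
                break
        timed.append({**brk, "start_sec": start})
    return timed
```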
Python pipeline — video_pipeline.py [READ]
Source: /Users/saiterminal/sai-workspace/skills/content-tracker/scripts/video_pipeline.py (72.2K)
All functions:
- `_channel_style(category)` — returns visual style string for a category (per-channel palette)
- `_scene_desc_from_lyrics(lyric_text)` — extracts keywords from lyrics for FLUX prompt (no AI call)
- `_video_subdir(song_data)` — resolves category subfolder under OUTPUT_DIR
- `_hook_subdir(song_data)` — resolves category subfolder under HOOKS_DIR
- `get_song_data(slug)` — queries `songs` + `suno_library` tables in content.db
- `find_mp3(song_data)` — finds MP3 by path or searches ~/Downloads/supernal-songs/
- `get_lyrics(song_data)` — reads markdown file and strips frontmatter
- `get_audio_duration(mp3_path)` — ffprobe to get duration in seconds
- `generate_images_openai(topic, style_hint, n, api_key)` — N background images via OpenAI gpt-image-1 (~$0.02/image); returns [] if no OPENAI_API_KEY
- `generate_images_replicate(prompts, api_key)` — per-prompt images via Replicate FLUX-schnell (~$0.003); handles rate limiting and polling
- `generate_background_images(topic, style_hint, n)` — tries OpenAI first, falls back to Replicate
- `make_gradient_images(n, output_dir)` — generates solid gradient images locally (no API, $0.00)
- `download_suno_image(image_url, output_dir)` — downloads cover art from Suno CDN URL
- `make_image_variations(base_image, n, output_dir)` — PIL hue/saturation variants of a base image
- `parse_lyric_segments(lyrics)` — splits lyrics into verse/chorus/bridge segments
- `get_whisper_timestamps(mp3)` — local Whisper for word-level timestamps (sync mode)
- `match_segments_to_timestamps(segments, words)` — matches lyric segments to Whisper words
- `build_segment_prompt(seg, title, category, style_idx)` — builds FLUX prompt for a lyric segment
- `build_timed_ken_burns_video(mp3, images_with_times, title, output, watermark)` — ffmpeg Ken Burns with Whisper-timed cuts
- `build_ken_burns_video(mp3, images, title, output, watermark, format)` — ffmpeg Ken Burns with even-duration cuts (standard mode, $0.00)
- `clean_lyrics_for_display(lyrics)` — strips frontmatter markers for lyric overlay
- `build_lyric_video(mp3, lyrics, title, output)` — scrolling lyrics on gradient background via PIL + ffmpeg ($0.00)
- `build_waveform_video(mp3, title, output)` — audio waveform visualizer via ffmpeg ($0.00)
- `build_compilation(slugs, label, output, channel, style)` — concatenates multiple song videos into one compilation; updates compilations + compilation_songs tables in content.db
- `generate_story_copy(song_data, lyrics)` — generates YouTube title/description/tags via Claude Haiku
- `run_single(song_data, mp3, output, level, format, hook_duration)` — top-level runner for one song; dispatches to correct mode (standard/sync/lyric/waveform/viral)
- `_get_local_cover_art(song_data)` — finds local cover art file
- `_distribute_timed_segments(n_segments, duration_sec)` — evenly distributes segment timings
- `_draw_text_shadow(draw, text, ...)` — PIL text drawing with shadow for lyric overlays
- `build_viral_video(mp3, lyrics, title, output, format, hook_duration, style_hint)` — verse-by-verse fade/slide text on blurred cover art background; supports 16:9 and 9:16 (TikTok/Shorts); hook trimming via `--hook N`
- `extract_hook(video_path, duration, output)` — trims first N seconds to a hook clip
Modes: standard ($0.00 — Suno cover art + hue variants), sync (~$0.02 — Whisper + FLUX), lyric ($0.00 — scrolling text), waveform ($0.00 — ffmpeg visualizer), viral ($0.00 — text overlay)
Compilation mode: --compile <label> concatenates all songs tagged with label.
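The even-distribution helper can be sketched in a few lines; the exact signature and return shape of `_distribute_timed_segments` are assumptions based on its one-line description above:

```python
def distribute_timed_segments(n_segments, duration_sec):
    """Sketch of even segment timing: n (start, end) pairs covering the track.

    Mirrors the described behavior of _distribute_timed_segments in
    video_pipeline.py; the real return shape is an assumption.
    """
    if n_segments <= 0:
        return []
    step = duration_sec / n_segments
    return [(round(i * step, 3), round((i + 1) * step, 3)) for i in range(n_segments)]
```

This is the fallback timing when no Whisper timestamps are available (standard mode); sync mode replaces it with word-level cut points.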
video-automation package [READ]
Source: /Users/saiterminal/git/supernal/families/supernal-coding/packages/video-automation/
package.json: name @supernalintelligence/video-automation, version 0.1.0, Python project with bin/video-gen.js Node entry point. Tests via pytest, lint via ruff.
src/orchestrator.py (read): Python-only. Key classes:
- `SegmentResult` — dataclass: segment_number, success, video_path, video_url, prompt, duration, cost, generation_time, error, retry_count
- `VideoResult` — dataclass: success, video_path, segments_generated, total_cost, generation_time, quality_score, platform_used, audio_duration, error_log
- `CostTracker` — budget enforcement: `can_spend(amount)`, `spend(amount, description)`, `remaining()`, `summary()`; default budget $10.00
- `ReplicateClient` — calls Replicate PixVerse v4 API; `COST_PER_SECOND = 0.075` ($0.075/sec for 720p)
This package is distinct from the content-tracker Python scripts. It is the established Node→Python subprocess pattern. The bin/video-gen.js is the correct model for WorkerDispatcher.
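A minimal sketch of the `CostTracker` contract as described (default $10.00 budget, `can_spend`/`spend`/`remaining`/`summary`); the internals and error behavior here are assumptions, not the orchestrator.py implementation:

```python
class CostTracker:
    """Budget guard mirroring the orchestrator.py contract described above."""

    def __init__(self, budget=10.00):
        self.budget = budget
        self.entries = []  # (amount, description) pairs

    def spent(self):
        return sum(amount for amount, _ in self.entries)

    def can_spend(self, amount):
        return self.spent() + amount <= self.budget

    def spend(self, amount, description=""):
        if not self.can_spend(amount):
            raise RuntimeError(f"budget exceeded: {description}")
        self.entries.append((amount, description))

    def remaining(self):
        return self.budget - self.spent()

    def summary(self):
        return {"budget": self.budget, "spent": self.spent(), "remaining": self.remaining()}


# PixVerse 720p pricing from ReplicateClient: $0.075 per second of video
COST_PER_SECOND = 0.075
```

At the stated $0.075/sec, a 30-second PixVerse segment costs $2.25, so the default budget covers four segments before `spend` refuses.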
content.db schema [READ]
Location: ~/.openclaw/data/content.db (found at this path, not ~/.supernal/content.db)
Actual tables (14):
| Table | Purpose |
|---|---|
| `songs` | Song catalog — slug, title, file_path, category, suno_style, lyrics, tts_enabled, draft, tags, share copy; extended columns: age_min, age_max, pipeline_stage, pipeline_updated_at |
| `audio_assets` | Downloaded/uploaded audio — suno_mp3, suno_wav, tts_openai, soundcloud; status: pending/downloaded/uploaded/published/failed |
| `video_assets` | Video assets — revid, youtube, replicate, direct; status: pending/generating/downloaded/uploaded/published/failed |
| `publish_events` | Distribution events per platform (twitter/linkedin/facebook/instagram/youtube/soundcloud/tts/tiktok); triggered_by column present |
| `suno_library` | Suno-generated tracks — suno_id (UUID), style_prompt, tags, duration, cover art, MP3 path; extended: is_liked, is_trashed, is_hidden, reaction_type, needs_regen, regen_count |
| `youtube_videos` | YouTube video catalog — video_id, channel_id, channel_handle, video_type (video/short), view_count, wise_song_id FK |
| `video_analytics` | Per-video per-day analytics — views, watch_minutes, avg_view_duration, avg_view_pct, likes, shares, subs_gained |
| `channel_daily` | Channel-level daily rollup — views, watch_minutes, subs_gained |
| `traffic_sources` | Per-day traffic source breakdown |
| `compilations` | Compilation records — id, title, channel, style, mp4_path, youtube_id, duration_sec, song_ids, uploaded_at |
| `compilation_songs` | Junction: compilation_id, song_id, position |
| `pipeline_reviews` | Human review log per song/stage — action, suno_id, notes |
| `bg_jobs` | Background job tracking — workflow_type, item_id, stage, provider, status (queued/running/done/failed/cancelled), progress_pct, log_tail, error_message, output_path |
| `resonance_scores` | Multi-platform resonance index — item_id, company_id, platform, post_url, resonance_index, raw_metrics |
The earlier spec claim of "4-table schema" is incorrect. There are 14 tables. The content-ops board schema (content_channels, content_pipeline_definitions, content_pipeline_items, content_distribution_queue, content_analytics_snapshots, content_item_history, content_action_responses) uses a SEPARATE schema from the content-tracker content.db. They are distinct databases: content-tracker writes ~/.openclaw/data/content.db; content-ops agent writes to AgentStateDB tables.
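Because SQLite cannot alter a CHECK constraint in place, the Phase 1 migration adding `storyboard_pending` would rebuild `bg_jobs`. A sketch under the assumption that the column list matches the audit above; the real table likely has more columns, all of which a production migration must carry over:

```python
import sqlite3

# Sketch of the storyboard_pending migration. SQLite's ALTER TABLE cannot
# modify a CHECK constraint, so the usual pattern is rename -> recreate ->
# copy -> drop. The column list below is trimmed to the fields named in
# this audit and is an assumption about the real schema.
MIGRATION = """
BEGIN;
ALTER TABLE bg_jobs RENAME TO bg_jobs_old;
CREATE TABLE bg_jobs (
    id INTEGER PRIMARY KEY,
    workflow_type TEXT,
    item_id TEXT,
    stage TEXT,
    provider TEXT,
    status TEXT CHECK (status IN
        ('queued','running','done','failed','cancelled','storyboard_pending')),
    progress_pct REAL,
    log_tail TEXT,
    error_message TEXT,
    output_path TEXT
);
INSERT INTO bg_jobs SELECT * FROM bg_jobs_old;
DROP TABLE bg_jobs_old;
COMMIT;
"""
```

The `INSERT INTO bg_jobs SELECT *` step requires the old and new column orders to match exactly, which is why the full real column list matters.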
Secrets stored [READ]
Output of sc secret list (2026-04-13):
Under content-tracker service:
- `REPLICATE_API_TOKEN` — stored
- `GEMINI_API_KEY` — stored
Under connector-secrets service:
- `HUBSPOT_API_KEY`, `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GOOGLE_REFRESH_TOKEN`, `YOUTUBE_CHANNEL_ID` — stored
Under si service:
- `ANTHROPIC_API_KEY` — stored
Missing (not stored):
- `OPENAI_API_KEY` — not in any service (needed for `_generate_image_openai` and `generate_story_copy` in video_pipeline.py)
- `SUNO_SESSION_TOKEN` — not stored
- `YOUTUBE_API_KEY` — not stored under `content-tracker` (exists under `connector-secrets` as part of Google OAuth; unclear if the content-ops agent can use it)
- `CONTENT_OPS_WEBHOOK_SECRET` — not stored (required by the soul-widget.yaml trigger)
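Given these gaps, a dispatcher could fail fast with a preflight check before spawning a worker. The task-to-key mapping below follows the audit; the `check_secrets` helper itself is hypothetical, not an existing API:

```python
# Required secrets per worker task, taken from the audit above. The task
# names and the check_secrets helper are illustrative, not existing code.
REQUIRED_SECRETS = {
    "scene_pipeline": ["ANTHROPIC_API_KEY", "REPLICATE_API_TOKEN"],
    "scene_pipeline_openai": ["OPENAI_API_KEY"],
    "youtube_upload": ["GOOGLE_CLIENT_ID", "GOOGLE_CLIENT_SECRET", "GOOGLE_REFRESH_TOKEN"],
}


def check_secrets(task, stored):
    """Return the secret names a task needs that are not in the stored mapping."""
    return [key for key in REQUIRED_SECRETS.get(task, []) if key not in stored]
```

Run before dispatch, this turns a mid-pipeline API failure into an upfront "missing OPENAI_API_KEY" message on the board.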
What genuinely does NOT exist (verified gap)
- `@supernal/media-pipeline` package — does not exist. The `WorkerDispatcher` TypeScript class described in the spec has not been built. The `bin/video-gen.js` pattern from `video-automation` exists as a model, but a general-purpose dispatcher package does not.
- stdin/stdout JSON protocol for Python workers — not implemented. The Python scripts (`scene_pipeline.py`, `video_pipeline.py`) accept CLI args only; there is no stdin task dispatch mode.
- "Studio" view in content-ops board — not yet in the `VIEWS` array. The board has 6 views: Pipeline, Channels, Distribution, Analytics, Jobs, Definitions. No Studio view.
- `content video generate` universal-command — not found in any package during this audit.
- `content channel create/update/list` commands — not found.
- `storyboard_pending` status in bg_jobs — not in the actual content.db schema (status CHECK constraint: queued/running/done/failed/cancelled).
- Planning epics for content suite — `sc planning epic list` showed only the `create` and `audit` subcommands, with no epics created yet.
- `content-tracker` AgentStateDB tables (`content_channels`, `content_pipeline_definitions`, etc.) — these do NOT exist in the actual content.db. They are the content-ops agent's target schema, written to AgentStateDB, a separate database.
Corrections to previous spec claims
- "content.db 4-table schema" — incorrect. The actual
content.dbhas 14 tables: songs, audio_assets, video_assets, publish_events, suno_library, youtube_videos, video_analytics, channel_daily, traffic_sources, compilations, compilation_songs, pipeline_reviews, bg_jobs, resonance_scores. - "content.db at ~/.supernal/content.db" — incorrect path. Database found at
~/.openclaw/data/content.db. - "content-studio board = content-ops board" — incorrect.
content-studiois a separate wise-songs-specific board inapps/supernal-dashboard/src/boards/content-studio/. It imports views frompipeline-board/views/and is hardcoded to 3 Supernal channels.content-opsis the generic soul widget inpackages/boards/content-ops/. These are different boards. - "
_generate_image_openaiskeleton exists" — confirmed correct; function is present invideo_pipeline.pyand inscene_pipeline.py(both read). Not a skeleton — the implementation is complete invideo_pipeline.py;scene_pipeline.pyalso has a working implementation. What is missing is the storedOPENAI_API_KEY. - "scene_pipeline.py has evaluate_breaks (Stage 1b)" — confirmed. Not mentioned in previous spec. This is an additional LLM self-eval gate before image generation.
- "scene_pipeline.py has vision_eval_image (Stage 3b)" — confirmed. Claude Vision checks generated images against descriptions. Not in previous spec's function list.
- "build_visual_world (Stage 0) exists" — confirmed. Not mentioned in previous spec. Runs before Stage 1 to lock character/setting/prop consistency across all scene images.
- "ScheduleView is live scheduling" — incorrect. File header explicitly says "Display only — to wire live scheduling, pass publishAt to youtube_upload.py." No actual publish action is wired.
- "AllStagesView kanban is implemented" — incorrect. Kanban button exists but shows "coming soon" placeholder.
- "content-ops agent supports TikTok connector" — partially correct. Code comment says "not yet implemented; falls back to a warning." TikTok is listed as a connector type but does nothing.
- OPENAI_API_KEY claim ("not yet stored") — confirmed correct as of the 2026-04-13 `sc secret list` output.